Science.gov

Sample records for risk based optimization

  1. PTV-based IMPT optimization incorporating planning risk volumes vs robust optimization

    SciTech Connect

    Liu Wei; Li Xiaoqiang; Zhu, Ron. X.; Mohan, Radhe; Frank, Steven J.; Li Yupeng

    2013-02-15

    Purpose: Robust optimization leads to intensity-modulated proton therapy (IMPT) plans that are less sensitive to uncertainties and superior in terms of organs-at-risk (OARs) sparing, target dose coverage, and homogeneity compared to planning target volume (PTV)-based optimized plans. Robust optimization incorporates setup and range uncertainties, which implicitly adds margins to both targets and OARs and is also able to compensate for perturbations in dose distributions within targets and OARs caused by uncertainties. In contrast, the traditional PTV-based optimization considers only setup uncertainties and adds a margin only to targets but no margins to the OARs. It also ignores range uncertainty. The purpose of this work is to determine if robustly optimized plans are superior to PTV-based plans simply because the latter do not assign margins to OARs during optimization. Methods: The authors retrospectively selected from their institutional database five patients with head and neck (H and N) cancer and one with prostate cancer for this analysis. Using their original images and prescriptions, the authors created new IMPT plans using three methods: PTV-based optimization, optimization based on the PTV and planning risk volumes (PRVs) (i.e., 'PTV+PRV-based optimization'), and robust optimization using the 'worst-case' dose distribution. The PRVs were generated by uniformly expanding OARs by 3 mm for the H and N cases and 5 mm for the prostate case. The dose-volume histograms (DVHs) from the worst-case dose distributions were used to assess and compare plan quality. Families of DVHs for each uncertainty for all structures of interest were plotted along with the nominal DVHs. The width of the 'bands' of DVHs was used to quantify the plan sensitivity to uncertainty. Results: Compared with conventional PTV-based and PTV+PRV-based planning, robust optimization led to a smaller bandwidth for the targets in the face of uncertainties {clinical target volume [CTV

  2. Health-risk-based groundwater remediation system optimization through clusterwise linear regression.

    PubMed

    He, L; Huang, G H; Lu, H W

    2008-12-15

    This study develops a health-risk-based groundwater management (HRGM) model. The model incorporates the considerations of environmental quality and human health risks into a general framework. To solve the model, a proxy-based optimization approach is proposed, where a semiparametric statistical method (i.e., clusterwise linear regression) is used to create a set of rapid-response and easy-to-use proxy modules for capturing the relations between remediation policies and the resulting human health risks. By replacing the simulation and health risk assessment modules with the proxy ones, many orders of magnitude of computational cost can be saved. The model solutions reveal that (i) a long remediation period corresponds to a low total pumping rate, (ii) a stringent risk standard implies a high total pumping rate, and (iii) the human health risk associated with benzene would be significantly reduced if it is regarded as a constraint of the model. These implications would assist decision makers in understanding the effects of remediation duration and human-health risk level on optimal remediation policies and in designing a robust groundwater remediation system. Results from postoptimization simulation show that the carcinogenic risk would decrease to satisfy the regulated risk standard under the given remediation policies.
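
    As a rough illustration of the proxy idea above, the sketch below fits a clusterwise linear regression surrogate (cluster the candidate pumping policies, then fit one linear model per cluster) and uses it to screen policies against a risk standard. The well count, risk surface, and threshold are hypothetical stand-ins, not the HRGM model itself.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical training set: candidate pumping-rate policies (3 wells) and the
# health risk returned by an expensive simulation / risk-assessment chain.
policies = rng.uniform(0.0, 50.0, size=(200, 3))               # m^3/day per well
simulated_risk = np.exp(-0.02 * policies.sum(axis=1)) * 1e-4    # stand-in for the simulator

# Clusterwise linear regression: partition the policy space, then fit one
# linear model per cluster so the proxy is piecewise linear.
k = 4
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(policies)
models = [LinearRegression().fit(policies[labels == j], simulated_risk[labels == j])
          for j in range(k)]
centers = np.array([policies[labels == j].mean(axis=0) for j in range(k)])

def proxy_risk(policy):
    """Cheap surrogate for the simulation + health-risk-assessment modules."""
    j = np.argmin(np.linalg.norm(centers - policy, axis=1))     # nearest cluster
    return models[j].predict(np.atleast_2d(policy))[0]

# Screen many policies against a risk standard without re-running the simulator.
candidates = rng.uniform(0.0, 50.0, size=(10000, 3))
feasible = [p for p in candidates if proxy_risk(p) <= 1e-5]
print(len(feasible), "candidate policies meet the 1e-5 risk standard")
```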

  3. Optimal Temporal Risk Assessment

    PubMed Central

    Balci, Fuat; Freestone, David; Simen, Patrick; deSouza, Laura; Cohen, Jonathan D.; Holmes, Philip

    2011-01-01

    Time is an essential feature of most decisions, because the reward earned from decisions frequently depends on the temporal statistics of the environment (e.g., on whether decisions must be made under deadlines). Accordingly, evolution appears to have favored a mechanism that predicts intervals in the seconds to minutes range with high accuracy on average, but significant variability from trial to trial. Importantly, the subjective sense of time that results is sufficiently imprecise that maximizing rewards in decision-making can require substantial behavioral adjustments (e.g., accumulating less evidence for a decision in order to beat a deadline). Reward maximization in many daily decisions therefore requires optimal temporal risk assessment. Here, we review the temporal decision-making literature, conduct secondary analyses of relevant published datasets, and analyze the results of a new experiment. The paper is organized in three parts. In the first part, we review literature and analyze existing data suggesting that animals take account of their inherent behavioral variability (their “endogenous timing uncertainty”) in temporal decision-making. In the second part, we review literature that quantitatively demonstrates nearly optimal temporal risk assessment with sub-second and supra-second intervals using perceptual tasks (with humans and mice) and motor timing tasks (with humans). We supplement this section with original research that tested human and rat performance on a task that requires finding the optimal balance between two time-dependent quantities for reward maximization. This optimal balance in turn depends on the level of timing uncertainty. Corroborating the reviewed literature, humans and rats exhibited nearly optimal temporal risk assessment in this task. In the third section, we discuss the role of timing uncertainty in reward maximization in two-choice perceptual decision-making tasks and review literature that implicates timing uncertainty

  4. A Risk Explicit Interval Linear Programming Model for Uncertainty-Based Environmental Economic Optimization in the Lake Fuxian Watershed, China

    PubMed Central

    Zou, Rui; Liu, Yong; Yu, Yajuan

    2013-01-01

    The conflict between water environment protection and economic development has brought severe water pollution and restricted sustainable development in the watershed. A risk explicit interval linear programming (REILP) method was used to solve the integrated watershed environmental-economic optimization problem. Interval linear programming (ILP) and REILP models for uncertainty-based environmental economic optimization at the watershed scale were developed for the management of the Lake Fuxian watershed, China. Scenario analysis was introduced into the model solution process to ensure the practicality and operability of the optimization schemes. Decision makers' preferences for risk levels can be expressed by inputting different discrete aspiration level values into the REILP model for three periods under two scenarios. By balancing the optimal system returns and the corresponding system risks, decision makers can develop an efficient industrial restructuring scheme based directly on the window of “low risk and high return efficiency” in the trade-off curve. The representative schemes at the turning points of the two scenarios were interpreted and compared to identify a preferable planning alternative with relatively low risks and nearly maximum benefits. This study provides new insights and proposes a tool, REILP, for decision makers to develop an effective environmental economic optimization scheme in integrated watershed management. PMID:24191144

  5. A risk explicit interval linear programming model for uncertainty-based environmental economic optimization in the Lake Fuxian watershed, China.

    PubMed

    Zhang, Xiaoling; Huang, Kai; Zou, Rui; Liu, Yong; Yu, Yajuan

    2013-01-01

    The conflict between water environment protection and economic development has brought severe water pollution and restricted sustainable development in the watershed. A risk explicit interval linear programming (REILP) method was used to solve the integrated watershed environmental-economic optimization problem. Interval linear programming (ILP) and REILP models for uncertainty-based environmental economic optimization at the watershed scale were developed for the management of the Lake Fuxian watershed, China. Scenario analysis was introduced into the model solution process to ensure the practicality and operability of the optimization schemes. Decision makers' preferences for risk levels can be expressed by inputting different discrete aspiration level values into the REILP model for three periods under two scenarios. By balancing the optimal system returns and the corresponding system risks, decision makers can develop an efficient industrial restructuring scheme based directly on the window of "low risk and high return efficiency" in the trade-off curve. The representative schemes at the turning points of the two scenarios were interpreted and compared to identify a preferable planning alternative with relatively low risks and nearly maximum benefits. This study provides new insights and proposes a tool, REILP, for decision makers to develop an effective environmental economic optimization scheme in integrated watershed management.
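
    The following sketch is not the REILP formulation itself; it only illustrates the interval-programming ingredient under assumed data: solving a small watershed LP at the lower and upper ends of interval return coefficients brackets the optimal system return, and an aspiration level then expresses how far a decision maker leans toward the optimistic bound.

```python
import numpy as np
from scipy.optimize import linprog

# Toy watershed problem (hypothetical numbers): choose activity levels x for two
# industries to maximize economic return subject to a pollutant-load limit.
c_lo, c_hi = np.array([3.0, 5.0]), np.array([4.0, 6.0])   # interval unit returns
A = np.array([[2.0, 4.0]])                                 # pollutant load per unit activity
b = np.array([100.0])                                      # allowable watershed load
bounds = [(0, 30), (0, 20)]

def max_return(c):
    # linprog minimizes, so negate the return coefficients.
    res = linprog(-c, A_ub=A, b_ub=b, bounds=bounds)
    return -res.fun, res.x

lo_val, lo_x = max_return(c_lo)   # pessimistic end of the interval objective
hi_val, hi_x = max_return(c_hi)   # optimistic end
print(f"optimal system return lies in [{lo_val:.1f}, {hi_val:.1f}]")

# A risk-explicit choice: an aspiration level lam in [0, 1] expresses how far the
# decision maker is willing to lean on the optimistic coefficients.
lam = 0.4
val, x = max_return(c_lo + lam * (c_hi - c_lo))
print(f"aspiration level {lam}: plan {x} with return {val:.1f}")
```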

  6. A Novel Biobjective Risk-Based Model for Stochastic Air Traffic Network Flow Optimization Problem

    PubMed Central

    Cai, Kaiquan; Jia, Yaoguang; Zhu, Yanbo; Xiao, Mingming

    2015-01-01

    Network-wide air traffic flow management (ATFM) is an effective way to alleviate demand-capacity imbalances globally and thereby reduce airspace congestion and flight delays. Conventional ATFM models assume the capacities of airports or airspace sectors are all predetermined. However, capacity uncertainties due to the dynamics of convective weather may make deterministic ATFM measures impractical. This paper investigates the stochastic air traffic network flow optimization (SATNFO) problem, which is formulated as a weighted biobjective 0-1 integer programming model. In order to evaluate the effect of capacity uncertainties on ATFM, the operational risk is modeled via probabilistic risk assessment and introduced as an extra objective in the SATNFO problem. Computational experiments using real-world air traffic network data associated with simulated weather data show that the presented model has far fewer constraints than a stochastic model with nonanticipative constraints, which means the proposed model reduces the computational complexity. PMID:26180842

  7. A Novel Biobjective Risk-Based Model for Stochastic Air Traffic Network Flow Optimization Problem.

    PubMed

    Cai, Kaiquan; Jia, Yaoguang; Zhu, Yanbo; Xiao, Mingming

    2015-01-01

    Network-wide air traffic flow management (ATFM) is an effective way to alleviate demand-capacity imbalances globally and thereby reduce airspace congestion and flight delays. Conventional ATFM models assume the capacities of airports or airspace sectors are all predetermined. However, capacity uncertainties due to the dynamics of convective weather may make deterministic ATFM measures impractical. This paper investigates the stochastic air traffic network flow optimization (SATNFO) problem, which is formulated as a weighted biobjective 0-1 integer programming model. In order to evaluate the effect of capacity uncertainties on ATFM, the operational risk is modeled via probabilistic risk assessment and introduced as an extra objective in the SATNFO problem. Computational experiments using real-world air traffic network data associated with simulated weather data show that the presented model has far fewer constraints than a stochastic model with nonanticipative constraints, which means the proposed model reduces the computational complexity. PMID:26180842
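
    A toy, brute-force sketch of a weighted biobjective 0-1 decision of the kind described above (delay cost versus operational risk under a capacity limit); the flight data, weights, and single-sector capacity are hypothetical and far simpler than the SATNFO model.

```python
from itertools import product

# Hypothetical data: 6 flights, each either released on schedule (1) or held (0).
delay_cost = [30, 45, 20, 60, 25, 40]          # cost of holding each flight
risk = [0.08, 0.02, 0.10, 0.01, 0.06, 0.03]    # operational risk if released into weather-affected sector
capacity = 3                                   # sector accepts at most 3 released flights

w_delay, w_risk = 1.0, 500.0                   # weights on the two objectives
best = None
for x in product((0, 1), repeat=6):            # brute force is fine at this toy size
    if sum(x) > capacity:
        continue                               # capacity constraint (deterministic here)
    obj = (w_delay * sum(dc for dc, xi in zip(delay_cost, x) if xi == 0) +
           w_risk * sum(r for r, xi in zip(risk, x) if xi == 1))
    if best is None or obj < best[0]:
        best = (obj, x)
print("release pattern:", best[1], "weighted objective:", round(best[0], 1))
```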

  8. An optimization-based approach for facility energy management with uncertainties, and, Power portfolio optimization in deregulated electricity markets with risk management

    NASA Astrophysics Data System (ADS)

    Xu, Jun

    Topic 1. An Optimization-Based Approach for Facility Energy Management with Uncertainties. Effective energy management for facilities is becoming increasingly important in view of the rising energy costs, the government mandate on the reduction of energy consumption, and the human comfort requirements. This part of the dissertation presents a daily energy management formulation and the corresponding solution methodology for HVAC systems. The problem is to minimize the energy and demand costs through the control of HVAC units while satisfying human comfort, system dynamics, load limit constraints, and other requirements. The problem is difficult in view of the fact that the system is nonlinear, time-varying, building-dependent, and uncertain; and that the direct control of a large number of HVAC components is difficult. In this work, HVAC setpoints are the control variables developed on top of a Direct Digital Control (DDC) system. A method that combines Lagrangian relaxation, neural networks, stochastic dynamic programming, and heuristics is developed to predict the system dynamics and uncontrollable load, and to optimize the setpoints. Numerical testing and prototype implementation results show that our method can effectively reduce total costs, manage uncertainties, and shed load, and is computationally efficient. Furthermore, it is significantly better than existing methods. Topic 2. Power Portfolio Optimization in Deregulated Electricity Markets with Risk Management. In a deregulated electric power system, multiple markets of different time scales exist with various power supply instruments. A load serving entity (LSE) has multiple choices from these instruments to meet its load obligations. In view of the large amount of power involved, the complex market structure, risks in such volatile markets, stringent constraints to be satisfied, and the long time horizon, a power portfolio optimization problem is of critical importance but difficult for an LSE to serve the

  9. Risk modelling in portfolio optimization

    NASA Astrophysics Data System (ADS)

    Lam, W. H.; Jaaman, Saiful Hafizah Hj.; Isa, Zaidi

    2013-09-01

    Risk management is very important in portfolio optimization. The mean-variance model has been used in portfolio optimization to minimize the investment risk. The objective of the mean-variance model is to minimize the portfolio risk and achieve the target rate of return. Variance is used as the risk measure in the mean-variance model. The purpose of this study is to compare the portfolio composition as well as the performance of the optimal portfolio of the mean-variance model and the equally weighted portfolio. An equally weighted portfolio means that the proportions invested in each asset are equal. The results show that the portfolio compositions of the mean-variance optimal portfolio and the equally weighted portfolio are different. In addition, the mean-variance optimal portfolio gives better performance because it gives a higher performance ratio than the equally weighted portfolio.
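
    A minimal sketch of the comparison described above, assuming hypothetical return data: the mean-variance weights are found by minimizing portfolio variance subject to a target return, and the resulting return/risk ratio is compared with that of the equally weighted portfolio.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
# Hypothetical monthly returns for 4 assets (rows = months).
R = rng.normal([0.010, 0.007, 0.012, 0.005], [0.05, 0.03, 0.07, 0.02], size=(120, 4))
mu, Sigma = R.mean(axis=0), np.cov(R, rowvar=False)
target = 0.8 * mu.max()                   # target return (kept feasible by construction)

def variance(w):
    return w @ Sigma @ w

cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},          # fully invested
        {"type": "ineq", "fun": lambda w: w @ mu - target})      # meet the target return
res = minimize(variance, x0=np.full(4, 0.25), bounds=[(0, 1)] * 4, constraints=cons)
w_mv, w_eq = res.x, np.full(4, 0.25)

for name, w in [("mean-variance", w_mv), ("equally weighted", w_eq)]:
    ret, vol = w @ mu, np.sqrt(variance(w))
    print(f"{name:16s} weights={np.round(w, 3)} return={ret:.4f} "
          f"risk={vol:.4f} return/risk={ret / vol:.3f}")
```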

  10. A risk-based coverage model for video surveillance camera control optimization

    NASA Astrophysics Data System (ADS)

    Zhang, Hongzhou; Du, Zhiguo; Zhao, Xingtao; Li, Peiyue; Li, Dehua

    2015-12-01

    Visual surveillance systems for law enforcement or police case investigation differ from traditional applications, since they are designed to monitor pedestrians, vehicles, or potential accidents. In the present work, visual surveillance risk is defined as the uncertainty of the visual information about the targets and events being monitored, and risk entropy is introduced to model the requirements that a police surveillance task places on the quality and quantity of video information. The proposed coverage model is applied to calculate the preset field-of-view (FoV) positions of PTZ cameras.

  11. Dynamic optimization of ISR sensors using a risk-based reward function applied to ground and space surveillance scenarios

    NASA Astrophysics Data System (ADS)

    DeSena, J. T.; Martin, S. R.; Clarke, J. C.; Dutrow, D. A.; Newman, A. J.

    2012-06-01

    The algorithm to jointly optimize sensor schedules against search, track, and classify tasks is based on recent work by Papageorgiou and Raykin on risk-based sensor management. It uses a risk-based objective function and attempts to minimize and balance the risks of misclassifying and losing track of an object. It supports the requirement to generate tasking for metric and feature data concurrently and synergistically, and accounts for both tracking accuracy and object characterization, jointly, in computing the reward and cost for optimizing tasking decisions.

  12. Risk based optimization of the frequency of EDG on-line maintenance at Hope Creek

    SciTech Connect

    Knoll, A.; Samanta, P.K.; Vesely, W.E.

    1996-09-01

    This paper presents a study to optimize the frequency of on-line maintenance of the emergency diesel generators (EDGs) at Hope Creek. The study was directed towards identifying, analyzing, and modifying maintenance planning and scheduling practices to assure the high availability of the emergency diesel generators. Inputs from the application of a recently developed reliability model, considerations from the probabilistic safety assessment, plant-specific experience, insights from personnel involved in EDG maintenance, and other practical issues were used to define a maintenance schedule that balances its beneficial and adverse impacts. The conclusions resulted in feasible recommendations to optimize and reduce the frequency of diesel on-line maintenance, allowing additional resources to better maintain other equipment important to safety.

  13. Optimization of the fractionated irradiation scheme considering physical doses to tumor and organ at risk based on dose–volume histograms

    SciTech Connect

    Sugano, Yasutaka; Mizuta, Masahiro; Takao, Seishin; Shirato, Hiroki; Sutherland, Kenneth L.; Date, Hiroyuki

    2015-11-15

    Purpose: Radiotherapy of solid tumors has been performed with various fractionation regimens such as multi- and hypofractionations. However, the ability to optimize the fractionation regimen considering the physical dose distribution remains insufficient. This study aims to optimize the fractionation regimen, in which the authors propose a graphical method for selecting the optimal number of fractions (n) and dose per fraction (d) based on dose–volume histograms for tumor and normal tissues of organs around the tumor. Methods: Modified linear-quadratic models were employed to estimate the radiation effects on the tumor and an organ at risk (OAR), where the repopulation of the tumor cells and the linearity of the dose-response curve in the high dose range of the surviving fraction were considered. The minimization problem for the damage effect on the OAR was solved under the constraint that the radiation effect on the tumor is fixed by a graphical method. Here, the damage effect on the OAR was estimated based on the dose–volume histogram. Results: It was found that the optimization of fractionation scheme incorporating the dose–volume histogram is possible by employing appropriate cell surviving models. The graphical method considering the repopulation of tumor cells and a rectilinear response in the high dose range enables them to derive the optimal number of fractions and dose per fraction. For example, in the treatment of prostate cancer, the optimal fractionation was suggested to lie in the range of 8–32 fractions with a daily dose of 2.2–6.3 Gy. Conclusions: It is possible to optimize the number of fractions and dose per fraction based on the physical dose distribution (i.e., dose–volume histogram) by the graphical method considering the effects on tumor and OARs around the tumor. This method may stipulate a new guideline to optimize the fractionation regimen for physics-guided fractionation.
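
    A simplified sketch of the underlying idea, not the authors' DVH-based graphical method: with a modified linear-quadratic model that includes a repopulation penalty, fix the required tumor effect, solve for the dose per fraction d at each candidate number of fractions n, and pick the (n, d) pair that minimizes the organ-at-risk effect. All parameter values are illustrative.

```python
import numpy as np

# Simplified LQ sketch (hypothetical parameters, not the paper's DVH-based method).
alpha, beta = 0.15, 0.05          # tumor LQ parameters (Gy^-1, Gy^-2)
gamma = 0.05                      # repopulation penalty per extra treatment day
alpha_o, beta_o = 0.10, 0.033     # organ-at-risk LQ parameters
s = 0.6                           # fraction of the tumor dose received by the OAR
E_target = 10.0                   # required tumor effect: n*(alpha*d + beta*d^2) - gamma*(n-1)

results = []
for n in range(1, 41):
    # Solve n*(alpha*d + beta*d^2) = E_target + gamma*(n-1) for the dose per fraction d.
    rhs = (E_target + gamma * (n - 1)) / n
    d = (-alpha + np.sqrt(alpha ** 2 + 4 * beta * rhs)) / (2 * beta)
    E_oar = n * (alpha_o * s * d + beta_o * (s * d) ** 2)
    results.append((E_oar, n, d))

E_best, n_best, d_best = min(results)
print(f"optimal scheme: {n_best} fractions of {d_best:.2f} Gy (OAR effect {E_best:.2f})")
```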

  14. Risk Analysis for Resource Planning Optimization

    NASA Technical Reports Server (NTRS)

    Chueng, Kar-Ming

    2008-01-01

    The main purpose of this paper is to introduce a risk management approach that allows planners to quantify the risk and efficiency tradeoff in the presence of uncertainties, and to make forward-looking choices in the development and execution of the plan. We demonstrate a planning and risk analysis framework that tightly integrates mathematical optimization, empirical simulation, and theoretical analysis techniques to solve complex problems.

  15. Adaptation for Planting and Irrigation Decisions to Changing Monsoon Regime in Northeast India: Risk-based Hydro-economic Optimization

    NASA Astrophysics Data System (ADS)

    Zhu, T.; Cai, X.

    2013-12-01

    Delays in the onset of the Indian summer monsoon are becoming increasingly frequent. Delayed monsoons and occasional monsoon failures seriously affect agricultural production in the northeast as well as other parts of India. In the Vaishali district of Bihar State, monsoon rainfall is highly skewed and erratic, often concentrated in shorter durations. Farmers in Vaishali reported that a delayed monsoon affected paddy planting and consequently delayed the cropping cycle, putting crops at risk of 'terminal heat.' The canal system in the district does not function due to lack of maintenance, so irrigation relies almost entirely on groundwater. Many small farmers choose not to irrigate when monsoon onset is delayed because of high diesel prices, leading to reduced production or even crop failure. Some farmers adapt to a delayed monsoon onset by planting short-duration rice, which gives them the flexibility to plant the next season's crops on time. Other sporadic autonomous adaptation activities were observed as well, with various levels of success. Adaptation recommendations and effective policy interventions are much needed. To explore robust options for adapting to the changing monsoon regime, we build a stochastic programming model to optimize the revenues of farmer groups categorized by landholding size, subject to stochastic monsoon onset and rainfall amount. Imperfect probabilistic long-range forecasts are used to inform the model's onset and rainfall-amount probabilities; the 'skill' of the forecasts is measured using probabilities of correctly predicting past events, derived through hindcasting. Crop production functions are determined using a self-calibrating Positive Mathematical Programming approach. The stochastic programming model aims to emulate the decision-making behavior of representative farmer agents through choices in adaptation, including crop mix, planting dates, irrigation, and use of weather information. A set of technological and policy intervention scenarios is tested
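
    A toy scenario-based calculation in the spirit of the stochastic programming model described above (not the calibrated PMP model): expected net revenue of a few planting/irrigation strategies under forecast-informed probabilities of monsoon onset, with all payoffs and probabilities hypothetical.

```python
# Toy scenario analysis (hypothetical numbers): expected net revenue per hectare of
# three planting/irrigation strategies under uncertain monsoon onset.
scenarios = {"timely onset": 0.5, "delayed onset": 0.35, "monsoon failure": 0.15}

# Net revenue of each strategy in each scenario, after diesel pumping costs.
revenue = {
    "long-duration rice, rainfed only":          {"timely onset": 900, "delayed onset": 300, "monsoon failure": -200},
    "long-duration rice + groundwater pumping":  {"timely onset": 820, "delayed onset": 650, "monsoon failure": 250},
    "short-duration rice (flexible next crop)":  {"timely onset": 700, "delayed onset": 620, "monsoon failure": 300},
}

for strategy, payoff in revenue.items():
    expected = sum(p * payoff[s] for s, p in scenarios.items())  # forecast-weighted mean
    worst = min(payoff.values())
    print(f"{strategy:42s} E[revenue]={expected:7.1f}  worst case={worst:6.1f}")
```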

  16. Risk Assessment: Evidence Base

    NASA Technical Reports Server (NTRS)

    Johnson-Throop, Kathy A.

    2007-01-01

    Human systems PRA (Probabilistic Risk Assessment): a) provides quantitative measures of probability, consequence, and uncertainty; and b) communicates risk and informs decision-making. Human health risks rated highest in the ISS PRA are based on a 1997 assessment of clinical events in analog operational settings. Much work remains to analyze the remaining human health risks identified in the Bioastronautics Roadmap.

  17. RNA based evolutionary optimization.

    PubMed

    Schuster, P

    1993-12-01

    Evolutionary optimization of two-letter sequences is thus more difficult than optimization in the world of natural RNA sequences with four bases. This fact might explain the usage of four bases in the genetic language of nature. Finally, we study the mapping from RNA sequences into secondary structures and explore the topology of RNA shape space. We find that 'neutral paths' connecting neighbouring sequences with identical structures go very frequently through the entire sequence space. Sequences folding into common structures are found everywhere in sequence space. (ABSTRACT TRUNCATED AT 400 WORDS)

  18. Optimal security investments and extreme risk.

    PubMed

    Mohtadi, Hamid; Agiwal, Swati

    2012-08-01

    In the aftermath of 9/11, concern over security increased dramatically in both the public and the private sector. Yet, no clear algorithm exists to inform firms on the amount and the timing of security investments to mitigate the impact of catastrophic risks. The goal of this article is to devise an optimum investment strategy for firms to mitigate exposure to catastrophic risks, focusing on how much to invest and when to invest. The latter question addresses the issue of whether postponing a risk mitigating decision is an optimal strategy or not. Accordingly, we develop and estimate both a one-period model and a multiperiod model within the framework of extreme value theory (EVT). We calibrate these models using probability measures for catastrophic terrorism risks associated with attacks on the food sector. We then compare our findings with the purchase of catastrophic risk insurance.

  19. Optimal security investments and extreme risk.

    PubMed

    Mohtadi, Hamid; Agiwal, Swati

    2012-08-01

    In the aftermath of 9/11, concern over security increased dramatically in both the public and the private sector. Yet, no clear algorithm exists to inform firms on the amount and the timing of security investments to mitigate the impact of catastrophic risks. The goal of this article is to devise an optimum investment strategy for firms to mitigate exposure to catastrophic risks, focusing on how much to invest and when to invest. The latter question addresses the issue of whether postponing a risk mitigating decision is an optimal strategy or not. Accordingly, we develop and estimate both a one-period model and a multiperiod model within the framework of extreme value theory (EVT). We calibrate these models using probability measures for catastrophic terrorism risks associated with attacks on the food sector. We then compare our findings with the purchase of catastrophic risk insurance. PMID:22694261
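
    A sketch of the extreme-value ingredient only, under assumed data: fit a generalized Pareto distribution to losses above a threshold, estimate the probability of a catastrophic loss, and compare the expected averted loss from a mitigation investment with its cost for a one-period decision. The loss sample, effectiveness factor, and costs are hypothetical, not the authors' calibrated terrorism-risk measures.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(7)

# Hypothetical historical losses ($M); EVT models only the excesses over a threshold.
losses = rng.lognormal(mean=1.0, sigma=1.2, size=2000)
u = np.quantile(losses, 0.95)                    # tail threshold
excess = losses[losses > u] - u
c, _, scale = genpareto.fit(excess, floc=0)      # shape and scale of the GPD tail

p_exceed_u = (losses > u).mean()                 # empirical chance of entering the tail
catastrophe = 50.0                               # loss level regarded as catastrophic ($M)
p_cat = p_exceed_u * genpareto.sf(catastrophe - u, c, loc=0, scale=scale)

# One-period decision sketch: invest if the expected averted catastrophic loss
# (the investment cuts the catastrophe probability by a factor rho) exceeds its cost.
rho, invest_cost = 0.5, 0.8                      # hypothetical effectiveness and cost ($M)
averted = rho * p_cat * catastrophe
print(f"P(loss > {catastrophe}) = {p_cat:.4f}; expected averted loss = {averted:.3f} vs cost {invest_cost}")
print("invest now" if averted > invest_cost else "postpone investment")
```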

  20. Search-based optimization.

    PubMed

    Wheeler, Ward C

    2003-08-01

    The problem of determining the minimum cost hypothetical ancestral sequences for a given cladogram is known to be NP-complete (Wang and Jiang, 1994). Traditionally, point estimations of hypothetical ancestral sequences have been used to gain heuristic, upper bounds on cladogram cost. These include procedures with such diverse approaches as non-additive optimization of multiple sequence alignment, direct optimization (Wheeler, 1996), and fixed-state character optimization (Wheeler, 1999). A method is proposed here which, by extending fixed-state character optimization, replaces the estimation process with a search. This form of optimization examines a diversity of potential state solutions for cost-efficient hypothetical ancestral sequences and can result in greatly more parsimonious cladograms. Additionally, such an approach can be applied to other NP-complete phylogenetic optimization problems such as genomic break-point analysis. PMID:14531408

  1. Search-based optimization

    NASA Technical Reports Server (NTRS)

    Wheeler, Ward C.

    2003-01-01

    The problem of determining the minimum cost hypothetical ancestral sequences for a given cladogram is known to be NP-complete (Wang and Jiang, 1994). Traditionally, point estimations of hypothetical ancestral sequences have been used to gain heuristic, upper bounds on cladogram cost. These include procedures with such diverse approaches as non-additive optimization of multiple sequence alignment, direct optimization (Wheeler, 1996), and fixed-state character optimization (Wheeler, 1999). A method is proposed here which, by extending fixed-state character optimization, replaces the estimation process with a search. This form of optimization examines a diversity of potential state solutions for cost-efficient hypothetical ancestral sequences and can result in greatly more parsimonious cladograms. Additionally, such an approach can be applied to other NP-complete phylogenetic optimization problems such as genomic break-point analysis. © 2003 The Willi Hennig Society. Published by Elsevier Science (USA). All rights reserved.
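
    A toy illustration of the fixed-state idea referenced above, under assumed data: ancestral (internal-node) states are restricted to a fixed candidate set, here the observed sequences, and an exhaustive search over assignments minimizes total edge cost on a four-taxon tree using Hamming distance. Wheeler's search-based method augments the candidate state set and uses real sequence-edit costs; this is only the skeleton.

```python
from itertools import product

# Toy fixed-state optimization: ancestral states are drawn from the set of observed
# (leaf) sequences, and assignments minimizing total edge cost are found by search.
leaves = {"A": "ACGT", "B": "ACGA", "C": "TCGA", "D": "TCGT"}   # hypothetical taxa
states = list(leaves.values())                                  # fixed candidate state set

def ham(s, t):
    """Hamming distance as a stand-in for a sequence edit cost."""
    return sum(a != b for a, b in zip(s, t))

# Tree ((A,B)X,(C,D)Y)R with internal nodes X, Y, and root R.
best = None
for x, y, r in product(states, repeat=3):
    cost = (ham(x, leaves["A"]) + ham(x, leaves["B"]) +
            ham(y, leaves["C"]) + ham(y, leaves["D"]) +
            ham(r, x) + ham(r, y))
    if best is None or cost < best[0]:
        best = (cost, x, y, r)
print("minimum cladogram cost:", best[0], "ancestral states (X, Y, R):", best[1:])
```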

  2. Risk-Informed Decisions Optimization in Inspection and Maintenance

    SciTech Connect

    Robertas Alzbutas

    2002-07-01

    The Risk-Informed Approach (RIA) used to support decisions related to inspection and maintenance programs is considered. The use of risk-informed methods can help focus adequate in-service inspections and control on the more important locations of complex dynamic systems. The focus is set on the highest risk, measured as conditional core damage frequency, which is produced by the frequencies of degradation and final failure at different locations combined with the conditional failure consequence probability. The probabilities of different degradation states per year and the consequences are estimated quantitatively. The investigation of the inspection and maintenance process is presented as a combination of deterministic and probabilistic analysis based on a general risk-informed model, which includes the inspection and maintenance program features. Such an RIA allows optimization of the inspection program while maintaining probabilistic and fundamental deterministic safety requirements. Failure statistics analysis is used, as well as evaluation of the reliability of inspections. The assumptions regarding the effectiveness of the inspection methods are based on a classification of the accessibility of the welds during inspection and on the different techniques used for inspection. The probability of defect detection is assumed to depend on the parameters through either a logarithmic or a logit transformation. As an example, the modeling of the pipe system inspection process is analyzed. Means to reduce the number of inspection sites and the cumulative radiation exposure of NPP inspection personnel, with a reduction of overall risk, are presented together with the software used and developed. The developed software can perform and administer all the risk evaluations and makes it possible to compare different options and perform sensitivity analyses. The approaches to defining an acceptable level of risk are discussed. These approaches with appropriate software in
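
    The detection-reliability assumption mentioned above (defect detection depending on parameters through a logarithmic or logit transformation) is commonly written as a logistic curve in log flaw size; a minimal sketch with assumed coefficients follows.

```python
import numpy as np

def pod(a, b0=-4.0, b1=2.5):
    """Probability of detecting a flaw of size a (mm): logit(POD) = b0 + b1*ln(a).
    Coefficients are illustrative, not taken from the paper."""
    z = b0 + b1 * np.log(a)
    return 1.0 / (1.0 + np.exp(-z))

for a in (1.0, 3.0, 5.0, 10.0):
    print(f"flaw size {a:4.1f} mm -> POD = {pod(a):.2f}")

# In a risk-informed inspection model, (1 - POD) feeds the probability that a degraded
# weld is missed, which is then combined with degradation frequency and conditional
# core-damage probability to rank candidate inspection locations.
```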

  3. Risk perceptions, optimism, and natural hazards.

    PubMed

    Smith, V Kerry

    2008-12-01

    This article uses the panel survey developed for the Health and Retirement Study to evaluate whether Hurricane Andrew in 1992 altered longevity expectations of respondents who lived in Dade County, Florida, the location experiencing the majority of about 20 billion dollars of damage. Longevity expectations have been used as a proxy measure for both individual subjective risk assessments and dispositional optimism. The panel structure allows comparison of those respondents' longevity assessments when the timing of their survey responses bracket Andrew with those of individuals where it does not. After controlling for health effects, the results indicate a significant reduction in longevity expectations due to the information respondents appear to have associated with the storm.

  4. Testing an optimized community-based human immunodeficiency virus (HIV) risk reduction and antiretroviral adherence intervention for HIV-infected injection drug users.

    PubMed

    Copenhaver, Michael M; Lee, I-Ching; Margolin, Arthur; Bruce, Robert D; Altice, Frederick L

    2011-01-01

    The authors conducted a preliminary study of the 4-session Holistic Health for HIV (3H+), which was adapted from a 12-session evidence-based risk reduction and antiretroviral adherence intervention. Improvements were found in the behavioral skills required to properly adhere to HIV medication regimens. Enhancements were found in all measured aspects of sex-risk reduction outcomes, including HIV knowledge, motivation to reduce sex-risk behavior, behavioral skills related to engaging in reduced sexual risk, and reduced risk behavior. Improvements in drug use outcomes included enhancements in risk reduction skills as well as reduced heroin and cocaine use. Intervention effects also showed durability from post-intervention to the follow-up assessment point. Females responded particularly well in terms of improvements in risk reduction skills and risk behavior. This study suggests that an evidence-based behavioral intervention may be successfully adapted for use in community-based clinical settings where HIV-infected drug users can be more efficiently reached.

  5. Displacement based multilevel structural optimization

    NASA Technical Reports Server (NTRS)

    Striz, Alfred G.

    1995-01-01

    Multidisciplinary design optimization (MDO) is expected to play a major role in the competitive transportation industries of tomorrow, i.e., in the design of aircraft and spacecraft, of high speed trains, boats, and automobiles. All of these vehicles require maximum performance at minimum weight to keep fuel consumption low and conserve resources. Here, MDO can deliver mathematically based design tools to create systems with optimum performance subject to the constraints of disciplines such as structures, aerodynamics, controls, etc. Although some applications of MDO are beginning to surface, the key to a widespread use of this technology lies in the improvement of its efficiency. This aspect is investigated here for the MDO subset of structural optimization, i.e., for the weight minimization of a given structure under size, strength, and displacement constraints. Specifically, finite element based multilevel optimization of structures (here, statically indeterminate trusses and beams for proof of concept) is performed. In the system level optimization, the design variables are the coefficients of assumed displacement functions, and the load unbalance resulting from the solution of the stiffness equations is minimized. Constraints are placed on the deflection amplitudes and the weight of the structure. In the subsystems level optimizations, the weight of each element is minimized under the action of stress constraints, with the cross sectional dimensions as design variables. This approach is expected to prove very efficient, especially for complex structures, since the design task is broken down into a large number of small and efficiently handled subtasks, each with only a small number of variables. This partitioning will also allow for the use of parallel computing, first, by sending the system and subsystems level computations to two different processors, ultimately, by performing all subsystems level optimizations in a massively parallel manner on separate

  6. Towards Risk Based Design for NASA's Missions

    NASA Technical Reports Server (NTRS)

    Tumer, Irem Y.; Barrientos, Francesca; Meshkat, Leila

    2004-01-01

    This paper describes the concept of Risk Based Design in the context of NASA's low-volume, high-cost missions. The concept of accounting for risk in the design lifecycle has been discussed and proposed under several research topics, including reliability, risk analysis, optimization, uncertainty, decision-based design, and robust design. This work aims to identify and develop methods to enable and automate a means to characterize and optimize risk, and use risk as a tradeable resource to make robust and reliable decisions, in the context of the uncertain and ambiguous stage of early conceptual design. This paper first presents a survey of the related topics explored in the design research community as they relate to risk based design. Then, a summary of the topics from the NASA-led Risk Colloquium is presented, followed by current efforts within NASA to account for risk in early design. Finally, a list of "risk elements", identified for early-phase conceptual design at NASA, is presented. The purpose is to lay the foundation and develop a roadmap for future work and collaborations for research to eliminate and mitigate these risk elements in early phase design.

  7. Risk Analysis for Resource Planning Optimization

    NASA Technical Reports Server (NTRS)

    Cheung, Kar-Ming

    2008-01-01

    This paper describes a systems engineering approach to resource planning by integrating mathematical modeling and constrained optimization, empirical simulation, and theoretical analysis techniques to generate an optimal task plan in the presence of uncertainties.

  8. Differential effects of trait anger on optimism and risk behaviour.

    PubMed

    Pietruska, Karin; Armony, Jorge L

    2013-01-01

    It has been proposed that angry people exhibit optimistic risk estimates about future events and, consequently, are biased towards making risk-seeking choices. The goal of this study was to directly test the hypothesised effect of trait anger on optimism and risk-taking behaviour. One hundred healthy volunteers completed questionnaires about personality traits, optimism and risk behaviour. In addition their risk tendency was assessed with the Balloon Analogue Risk Task (BART), which provides an online measure of risk behaviour. Our results partly confirmed the relation between trait anger and outcome expectations of future life events, but suggest that this optimism does not necessarily translate into actual risk-seeking behaviour. PMID:22780446

  9. Cost tradeoffs in consequence management at nuclear power plants: A risk based approach to setting optimal long-term interdiction limits for regulatory analyses

    SciTech Connect

    Mubayi, V.

    1995-05-01

    The consequences of severe accidents at nuclear power plants can be limited by various protective actions, including emergency responses and long-term measures, to reduce exposures of affected populations. Each of these protective actions involves costs to society. The costs of the long-term protective actions depend on the criterion adopted for the allowable level of long-term exposure. This criterion, called the "long term interdiction limit," is expressed in terms of the projected dose to an individual over a certain time period from the long-term exposure pathways. The two measures of offsite consequences, latent cancers and costs, are inversely related and the choice of an interdiction limit is, in effect, a trade-off between these two measures. By monetizing the health effects (through ascribing a monetary value to life lost), the costs of the two consequence measures vary with the interdiction limit, the health effect costs increasing as the limit is relaxed and the protective action costs decreasing. The minimum of the total cost curve can be used to calculate an optimal long term interdiction limit. The calculation of such an optimal limit is presented for each of five US nuclear power plants which were analyzed for severe accident risk in the NUREG-1150 program by the Nuclear Regulatory Commission.
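
    A sketch of the cost trade-off described above with illustrative cost functions (not NUREG-1150 values): monetized health-effect costs rise and protective-action costs fall as the interdiction limit is relaxed, and the minimum of the total-cost curve identifies the optimal limit.

```python
import numpy as np

# Candidate long-term interdiction limits (projected individual dose over the period).
limit = np.linspace(0.5, 25.0, 200)

# Illustrative monetized costs (arbitrary $M units):
# relaxing the limit increases latent-cancer costs but reduces interdiction costs.
health_cost = 12.0 * limit              # monetized health effects grow with allowed dose
protective_cost = 900.0 / (limit + 1)   # less land condemned/decontaminated as limit rises
total = health_cost + protective_cost

i = int(np.argmin(total))
print(f"optimal interdiction limit ~ {limit[i]:.1f} rem, total cost {total[i]:.0f} $M")
```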

  10. Optimal trading from minimizing the period of bankruptcy risk

    NASA Astrophysics Data System (ADS)

    Liehr, S.; Pawelzik, K.

    2001-04-01

    Assuming that financial markets behave similarly to random walk processes, we derive a trading strategy with variable investment which is based on the equivalence of the period of bankruptcy risk and the risk to profit ratio. We define a state dependent predictability measure which can be attributed to the deterministic and stochastic components of the price dynamics. The influence of predictability variations and especially of short term inefficiency structures on the optimal amount of investment is analyzed in the given context and a method for adaptation of a trading system to the proposed objective function is presented. Finally, we show the performance of our trading strategy on the DAX and S&P 500 as examples of real-world data using different types of prediction models in comparison.

  11. Research on optimization-based design

    NASA Astrophysics Data System (ADS)

    Balling, R. J.; Parkinson, A. R.; Free, J. C.

    1989-04-01

    Research on optimization-based design is discussed. Illustrative examples are given for cases involving continuous optimization with discrete variables and optimization with tolerances. Approximation of computationally expensive and noisy functions, electromechanical actuator/control system design using decomposition and application of knowledge-based systems and optimization for the design of a valve anti-cavitation device are among the topics covered.

  12. Polynomial optimization techniques for activity scheduling. Optimization based prototype scheduler

    NASA Technical Reports Server (NTRS)

    Reddy, Surender

    1991-01-01

    Polynomial optimization techniques for activity scheduling (optimization based prototype scheduler) are presented in the form of the viewgraphs. The following subject areas are covered: agenda; need and viability of polynomial time techniques for SNC (Space Network Control); an intrinsic characteristic of SN scheduling problem; expected characteristics of the schedule; optimization based scheduling approach; single resource algorithms; decomposition of multiple resource problems; prototype capabilities, characteristics, and test results; computational characteristics; some features of prototyped algorithms; and some related GSFC references.

  13. New algorithms for optimal reduction of technical risks

    NASA Astrophysics Data System (ADS)

    Todinov, M. T.

    2013-06-01

    The article features exact algorithms for reduction of technical risk by (1) optimal allocation of resources in the case where the total potential loss from several sources of risk is a sum of the potential losses from the individual sources; (2) optimal allocation of resources to achieve a maximum reduction of system failure; and (3) making an optimal choice among competing risky prospects. The article demonstrates that the number of activities in a risky prospect is a key consideration in selecting the risky prospect. As a result, the maximum expected profit criterion, widely used for making risk decisions, is fundamentally flawed, because it does not consider the impact of the number of risk-reward activities in the risky prospects. A popular view, that if a single risk-reward bet with positive expected profit is unacceptable then a sequence of such identical risk-reward bets is also unacceptable, has been analysed and proved incorrect.
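
    A small worked check of the last claim, with illustrative numbers: for identical independent bets with positive expected profit, the probability of ending with a net loss falls as the number of bets grows, so rejecting a single favorable bet does not force rejecting a sequence of them.

```python
from math import comb

# One bet: win +2 with probability 0.55, lose -2 with probability 0.45.
p, win, loss = 0.55, 2.0, -2.0
expected_one = p * win + (1 - p) * loss            # +0.2 per bet, yet a 45% chance of losing

for n in (1, 5, 25, 100):
    # Probability of a net loss after n independent bets (k = number of wins).
    p_net_loss = sum(comb(n, k) * p**k * (1 - p)**(n - k)
                     for k in range(n + 1) if k * win + (n - k) * loss < 0)
    print(f"n={n:3d}: expected profit={n * expected_one:6.1f}, P(net loss)={p_net_loss:.3f}")
```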

  14. Constraint programming based biomarker optimization.

    PubMed

    Zhou, Manli; Luo, Youxi; Sun, Guoquan; Mai, Guoqin; Zhou, Fengfeng

    2015-01-01

    Efficient and intuitive characterization of biological big data is becoming a major challenge for modern bio-OMIC based scientists. Interactive visualization and exploration of big data has proven to be one of the successful solutions. Most of the existing feature selection algorithms do not allow interactive input from users during the feature selection optimization process. This study addresses this question by fixing a few user-input features in the final selected feature subset and formulating these user-input features as constraints in a programming model. The proposed algorithm, fsCoP (feature selection based on constrained programming), performs similarly to or much better than the existing feature selection algorithms, even with constraints drawn from both the literature and the existing algorithms. An fsCoP biomarker may be intriguing for further wet lab validation, since it satisfies both the classification optimization function and the biomedical knowledge. fsCoP may also be used for the interactive exploration of bio-OMIC big data by interactively adding user-defined constraints for modeling.
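
    fsCoP itself is a constraint-programming model; the sketch below only illustrates the core constraint (user-nominated features are forced into the subset) by wrapping it around plain greedy forward selection with cross-validated accuracy on a public dataset. The fixed feature indices and subset size are arbitrary.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=2000))

def score(features):
    return cross_val_score(model, X[:, features], y, cv=5).mean()

user_fixed = [0, 7]          # features the user insists on keeping (the constraint)
selected = list(user_fixed)
while len(selected) < 6:     # small target subset size for the sketch
    candidates = [j for j in range(X.shape[1]) if j not in selected]
    best = max(candidates, key=lambda j: score(selected + [j]))   # greedy forward step
    selected.append(best)

print("selected features:", selected, "CV accuracy:", round(score(selected), 3))
```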

  15. A Novel Hybridization of Applied Mathematical, Operations Research and Risk-based Methods to Achieve an Optimal Solution to a Challenging Subsurface Contamination Problem

    NASA Astrophysics Data System (ADS)

    Johnson, K. D.; Pinder, G. F.

    2013-12-01

    The objective of the project is the creation of a new, computationally based approach to the collection, evaluation, and use of data for the purpose of determining optimal strategies for investment in the remediation of contaminant source areas and similar environmental problems. The research focuses on the use of existing mathematical tools assembled in a unique fashion. The area of application of this new capability is optimal (least-cost) groundwater contamination source identification; we wish to identify the physical environments wherein it may be cost-prohibitive to identify a contaminant source, determine the optimal strategy to protect the environment from additional insult, and formulate strategies for cost-effective environmental restoration. The computational underpinnings of the proposed approach encompass the integration of several known applied-mathematical tools into a unique framework. The resulting tool integration achieves the following: 1) simulate groundwater flow and contaminant transport under uncertainty, that is, when physical parameters such as hydraulic conductivity are known to be described by a random field; 2) define such a random field from available field data, or provide insight into the sampling strategy needed to create such a field; 3) incorporate subjective information, such as the opinions of experts on the importance of factors such as the locations of waste landfills; and 4) optimize a search strategy for finding a potential source location and optimally combine field information with model results to provide the best possible representation of the mean contaminant field and its geostatistics. Our approach combines, in a symbiotic manner, methodologies found in numerical simulation, random field analysis, Kalman filtering, fuzzy set theory, and search theory. Testing the algorithm at this stage of the work, we will focus on fabricated field situations wherein we can a priori specify the degree of uncertainty associated with the

  16. Optimized periodic verification testing blended risk and performance-based MOV inservice test program: an application of ASME code case OMN-1

    SciTech Connect

    Sellers, C.; Fleming, K.; Bidwell, D.; Forbes, P.

    1996-12-01

    This paper presents an application of ASME Code Case OMN-1 to the GL 89-10 Program at the South Texas Project Electric Generating Station (STPEGS). Code Case OMN-1 provides guidance for a performance-based MOV inservice test program that can be used for periodic verification testing and allows consideration of risk insights. Blended probabilistic and deterministic evaluation techniques were used to establish inservice test strategies including both test methods and test frequency. Described in the paper are the methods and criteria for establishing MOV safety significance based on the STPEGS probabilistic safety assessment, deterministic considerations of MOV performance characteristics and performance margins, the expert panel evaluation process, and the development of inservice test strategies. Test strategies include a mix of dynamic and static testing as well as MOV exercising.

  17. Risk-optimized proton therapy to minimize radiogenic second cancers

    NASA Astrophysics Data System (ADS)

    Rechner, Laura A.; Eley, John G.; Howell, Rebecca M.; Zhang, Rui; Mirkovic, Dragan; Newhauser, Wayne D.

    2015-05-01

    Proton therapy confers substantially lower predicted risk of second cancer compared with photon therapy. However, no previous studies have used an algorithmic approach to optimize beam angle or fluence-modulation for proton therapy to minimize those risks. The objectives of this study were to demonstrate the feasibility of risk-optimized proton therapy and to determine the combination of beam angles and fluence weights that minimizes the risk of second cancer in the bladder and rectum for a prostate cancer patient. We used 6 risk models to predict excess relative risk of second cancer. Treatment planning utilized a combination of a commercial treatment planning system and an in-house risk-optimization algorithm. When normal-tissue dose constraints were incorporated in treatment planning, the risk model that incorporated the effects of fractionation, initiation, inactivation, repopulation and promotion selected a combination of anterior and lateral beams, which lowered the relative risk by 21% for the bladder and 30% for the rectum compared to the lateral-opposed beam arrangement. Other results were found for other risk models.

  18. Risk-optimized proton therapy to minimize radiogenic second cancers

    PubMed Central

    Rechner, Laura A.; Eley, John G.; Howell, Rebecca M.; Zhang, Rui; Mirkovic, Dragan; Newhauser, Wayne D.

    2015-01-01

    Proton therapy confers substantially lower predicted risk of second cancer compared with photon therapy. However, no previous studies have used an algorithmic approach to optimize beam angle or fluence-modulation for proton therapy to minimize those risks. The objectives of this study were to demonstrate the feasibility of risk-optimized proton therapy and to determine the combination of beam angles and fluence weights that minimize the risk of second cancer in the bladder and rectum for a prostate cancer patient. We used 6 risk models to predict excess relative risk of second cancer. Treatment planning utilized a combination of a commercial treatment planning system and an in-house risk-optimization algorithm. When normal-tissue dose constraints were incorporated in treatment planning, the risk model that incorporated the effects of fractionation, initiation, inactivation, and repopulation selected a combination of anterior and lateral beams, which lowered the relative risk by 21% for the bladder and 30% for the rectum compared to the lateral-opposed beam arrangement. Other results were found for other risk models. PMID:25919133

  19. Optimal inverse functions created via population-based optimization.

    PubMed

    Jennings, Alan L; Ordóñez, Raúl

    2014-06-01

    Finding optimal inputs for a multiple-input, single-output system is taxing for a system operator. Population-based optimization is used to create sets of functions that produce a locally optimal input based on a desired output. An operator or higher level planner could use one of the functions in real time. For the optimization, each agent in the population uses the cost and output gradients to take steps lowering the cost while maintaining their current output. When an agent reaches an optimal input for its current output, additional agents are generated in the output gradient directions. The new agents then settle to the local optima for the new output values. The set of associated optimal points forms an inverse function, via spline interpolation, from a desired output to an optimal input. In this manner, multiple locally optimal functions can be created. These functions are naturally clustered in input and output spaces allowing for a continuous inverse function. The operator selects the best cluster over the anticipated range of desired outputs and adjusts the set point (desired output) while maintaining optimality. This reduces the demand from controlling multiple inputs, to controlling a single set point with no loss in performance. Results are demonstrated on a sample set of functions and on a robot control problem. PMID:24235281

  20. Optimal inverse functions created via population-based optimization.

    PubMed

    Jennings, Alan L; Ordóñez, Raúl

    2014-06-01

    Finding optimal inputs for a multiple-input, single-output system is taxing for a system operator. Population-based optimization is used to create sets of functions that produce a locally optimal input based on a desired output. An operator or higher level planner could use one of the functions in real time. For the optimization, each agent in the population uses the cost and output gradients to take steps lowering the cost while maintaining their current output. When an agent reaches an optimal input for its current output, additional agents are generated in the output gradient directions. The new agents then settle to the local optima for the new output values. The set of associated optimal points forms an inverse function, via spline interpolation, from a desired output to an optimal input. In this manner, multiple locally optimal functions can be created. These functions are naturally clustered in input and output spaces allowing for a continuous inverse function. The operator selects the best cluster over the anticipated range of desired outputs and adjusts the set point (desired output) while maintaining optimality. This reduces the demand from controlling multiple inputs, to controlling a single set point with no loss in performance. Results are demonstrated on a sample set of functions and on a robot control problem.
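
    A minimal sketch of the idea in the two abstracts above, with a hypothetical two-input plant and input cost: for a grid of desired outputs, find the cost-minimizing input that achieves each output, then spline the optima so the operator can command a single set point. Constrained optimization is used here in place of the paper's population of gradient-following agents.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.interpolate import CubicSpline

# Hypothetical 2-input, 1-output plant and input cost (not from the paper).
f = lambda x: x[0] + 2.0 * x[1]                # system output
cost = lambda x: x[0] ** 2 + 3.0 * x[1] ** 2   # effort to be minimized

y_grid = np.linspace(0.5, 4.0, 15)
optimal_inputs = []
for y_des in y_grid:
    res = minimize(cost, x0=[y_des / 2, y_des / 4],
                   constraints={"type": "eq", "fun": lambda x, y=y_des: f(x) - y})
    optimal_inputs.append(res.x)
optimal_inputs = np.array(optimal_inputs)

# Spline from desired output to each optimal input coordinate: the "inverse function".
inverse = [CubicSpline(y_grid, optimal_inputs[:, j]) for j in range(2)]

y_set = 2.7                                    # operator adjusts a single set point ...
x_star = [s(y_set) for s in inverse]           # ... and receives a locally optimal input
print(x_star, f(x_star), cost(x_star))
```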

  1. Risk spreading, connectivity, and optimal reserve spacing.

    PubMed

    Blowes, Shane A; Connolly, Sean R

    2012-01-01

    Two important processes determining the dynamics of spatially structured populations are dispersal and the spatial covariance of demographic fluctuations. Spatially explicit approaches to conservation, such as reserve networks, must consider the tension between these two processes and reach a balance between distances near enough to maintain connectivity, but far enough to benefit from risk spreading. Here, we model this trade-off. We show how two measures of metapopulation persistence depend on the shape of the dispersal kernel and the shape of the distance decay in demographic covariance, and we consider the implications of this trade-off for reserve spacing. The relative rates of distance decay in dispersal and demographic covariance determine whether the long-run metapopulation growth rate, and quasi-extinction risk, peak for adjacent patches or intermediately spaced patches; two local maxima in metapopulation persistence are also possible. When dispersal itself fluctuates over time, the trade-off changes. Temporal variation in mean distance that propagules are dispersed (i.e., propagule advection) decreases metapopulation persistence and decreases the likelihood that persistence will peak for adjacent patches. Conversely, variation in diffusion (the extent of random spread around mean dispersal) increases metapopulation persistence overall and causes it to peak at shorter inter-patch distances. Thus, failure to consider temporal variation in dispersal processes increases the risk that reserve spacings will fail to meet the objective of ensuring metapopulation persistence. This study identifies two phenomena that receive relatively little attention in empirical work on reserve spacing, but that can qualitatively change the effectiveness of reserve spacing strategies: (1) the functional form of the distance decay in covariance among patch-specific demographic rates and (2) temporal variation in the shape of the dispersal kernel. The sensitivity of metapopulation

  2. Cancer risk assessment: Optimizing human health through linear dose-response models.

    PubMed

    Calabrese, Edward J; Shamoun, Dima Yazji; Hanekamp, Jaap C

    2015-07-01

    This paper proposes that generic cancer risk assessments be based on the integration of the Linear Non-Threshold (LNT) and hormetic dose-responses, since optimal hormetic beneficial responses are estimated to occur at the dose associated with a 10⁻⁴ risk level based on the use of an LNT model as applied to animal cancer studies. The adoption of the 10⁻⁴ risk estimate provides a theoretical and practical integration of two competing risk assessment models whose predictions cannot be validated in human population studies or with standard chronic animal bioassay data. This model integration reveals substantial protection of the population from cancer effects (i.e., functional utility of the LNT model) while offering the possibility of significant reductions in cancer incidence should the hormetic dose-response model predictions be correct. The dose yielding the 10⁻⁴ cancer risk therefore yields the optimized toxicologically based "regulatory sweet spot". PMID:25916915

  3. Risk based management of piping systems

    SciTech Connect

    Conley, M.J.; Aller, J.E.; Tallin, A.; Weber, B.J.

    1996-07-01

    The API Piping Inspection Code is the first such Code to require classification of piping based on the consequences of failure, and to use this classification to influence inspection activity. Since this Code was published, progress has been made in the development of tools to improve on this approach by determining not only the consequences of failure, but also the likelihood of failure. "Risk" is defined as the product of the consequence and the likelihood. Measuring risk provides the means to formally manage risk by matching the inspection effort (costs) to the benefits of reduced risk. Using such a cost/benefit analysis allows the optimization of inspection budgets while meeting societal demands for reduction of the risk associated with process plant piping. This paper presents an overview of the tools developed to measure risk, and the methods to determine the effects of past and future inspections on the level of risk. The methodology is being developed as an industry-sponsored project under the direction of an API committee. The intent is to develop an API Recommended Practice that will be linked to In-Service Inspection Standards and the emerging Fitness for Service procedures. Actual studies using a similar approach have shown that a very high percentage of the risk due to piping in an operating facility is associated with relatively few pieces of piping. This permits inspection efforts to be focused on those piping systems that will result in the greatest risk reduction.
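
    A small illustration of the risk ranking described above, with hypothetical failure likelihoods and consequences: risk is the product of likelihood and consequence, and sorting segments by risk shows that a small fraction of the piping typically carries most of the risk, which is where inspection effort pays off.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200                                           # piping segments in the unit

# Hypothetical per-segment failure likelihood (per year) and consequence ($M).
likelihood = rng.lognormal(mean=-9.0, sigma=1.5, size=n)
consequence = rng.lognormal(mean=1.0, sigma=1.0, size=n)
risk = likelihood * consequence                   # risk = likelihood x consequence

order = np.argsort(risk)[::-1]
cum = np.cumsum(risk[order]) / risk.sum()
k = int(np.searchsorted(cum, 0.80)) + 1           # segments carrying 80% of the risk
print(f"{k} of {n} segments ({100 * k / n:.0f}%) account for 80% of the total risk;")
print("focusing inspection there buys most of the achievable risk reduction.")
```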

  4. Risk-based decisionmaking (Panel)

    SciTech Connect

    Smith, T.H.

    1995-12-31

    By means of a panel discussion and extensive audience interaction, this session explores the current challenges and progress to date in applying risk considerations to decisionmaking related to low-level waste. This topic is especially timely because of the proposed legislation pertaining to risk-based decisionmaking and because of the increased emphasis placed on radiological performance assessments of low-level waste disposal.

  5. Optimal linear and nonlinear feature extraction based on the minimization of the increased risk of misclassification. [Bayes theorem - statistical analysis/data processing

    NASA Technical Reports Server (NTRS)

    Defigueiredo, R. J. P.

    1974-01-01

    General classes of nonlinear and linear transformations were investigated for the reduction of the dimensionality of the classification (feature) space so that, for a prescribed dimension m of this space, the increase of the misclassification risk is minimized.

  6. The Integration of LNT and Hormesis for Cancer Risk Assessment Optimizes Public Health Protection.

    PubMed

    Calabrese, Edward J; Shamoun, Dima Yazji; Hanekamp, Jaap C

    2016-03-01

    This paper proposes a new cancer risk assessment strategy and methodology that optimizes population-based responses by yielding the lowest disease/tumor incidence across the entire dose continuum. The authors argue that the optimization can be achieved by integrating two seemingly conflicting models; i.e., the linear no-threshold (LNT) and hormetic dose-response models. The integration would yield the optimized response at a risk of 10^(-4) with the LNT model. The integrative functionality of the LNT and hormetic dose response models provides an improved estimation of tumor incidence through model uncertainty analysis and major reductions in cancer incidence via hormetic model estimates. This novel approach to cancer risk assessment offers significant improvements over current risk assessment approaches by revealing a regulatory sweet spot that maximizes public health benefits while incorporating practical approaches for model validation. PMID:26808876

  7. A risk-based sensor placement methodology.

    PubMed

    Lee, Ronald W; Kulesz, James J

    2008-10-30

    A risk-based sensor placement methodology is proposed to solve the problem of optimal location of sensors to protect population against the exposure to, and effects of, known and/or postulated chemical, biological, and/or radiological threats. Risk is calculated as a quantitative value representing population at risk from exposure at standard exposure levels. Historical meteorological data are used to characterize weather conditions as the frequency of wind speed and direction pairs. The meteorological data drive atmospheric transport and dispersion modeling of the threats, the results of which are used to calculate risk values. Sensor locations are determined via an iterative dynamic programming algorithm whereby threats detected by sensors placed in prior iterations are removed from consideration in subsequent iterations. In addition to the risk-based placement algorithm, the proposed methodology provides a quantification of the marginal utility of each additional sensor. This is the fraction of the total risk accounted for by placement of the sensor. Thus, the criteria for halting the iterative process can be the number of sensors available, a threshold marginal utility value, and/or a minimum cumulative utility achieved with all sensors.
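
    The placement loop described above is essentially a greedy coverage procedure. Below is a minimal, illustrative Python sketch of that idea, assuming precomputed inputs that are not from the paper: a risk value per threat and, for each candidate location, the set of threats a sensor there would detect. The stopping rules mirror the sensor budget and marginal-utility threshold mentioned in the abstract.

      # Illustrative greedy risk-based sensor placement (assumed inputs, not the paper's data).
      def place_sensors(threat_risk, detects, max_sensors, min_utility=0.01):
          """threat_risk: {threat: risk value}; detects: {location: set of threats detected}."""
          total = sum(threat_risk.values())
          remaining = set(threat_risk)                     # threats not yet accounted for
          placements = []
          for _ in range(max_sensors):
              best_loc, best_gain = None, 0.0
              for loc, covered in detects.items():
                  gain = sum(threat_risk[t] for t in covered & remaining)
                  if gain > best_gain:
                      best_loc, best_gain = loc, gain
              if best_loc is None:
                  break
              utility = best_gain / total                  # fraction of total risk accounted for
              if utility < min_utility:
                  break                                    # marginal-utility stopping rule
              placements.append((best_loc, utility))
              remaining -= detects[best_loc]               # threats detected in prior iterations are removed
          return placements

      # Toy usage with made-up numbers
      risk = {"t1": 5.0, "t2": 3.0, "t3": 1.0}
      detects = {"A": {"t1"}, "B": {"t1", "t2"}, "C": {"t3"}}
      print(place_sensors(risk, detects, max_sensors=3))   # e.g., [('B', 0.889), ('C', 0.111)]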

  8. A Risk-Based Sensor Placement Methodology

    SciTech Connect

    Lee, Ronald W; Kulesz, James J

    2008-01-01

    A risk-based sensor placement methodology is proposed to solve the problem of optimal location of sensors or detectors to protect population against the exposure to and effects of known and/or postulated chemical, biological, and/or radiological threats. Risk is calculated as a quantitative value representing population at risk from exposure against standard exposure levels. Historical meteorological data are used to characterize weather conditions as the frequency of wind speed and direction pairs. The meteorological data drive atmospheric transport and dispersion modeling of the threats, the results of which are used to calculate risk values. Sensor locations are determined via an iterative dynamic programming algorithm whereby threats captured or detected by sensors placed in prior stages are removed from consideration in subsequent stages. In addition to the risk-based placement algorithm, the proposed methodology provides a quantification of the marginal utility of each additional sensor or detector. Thus, the criterion for halting the iterative process can be the number of detectors available, a threshold marginal utility value, or the cumulative detection of a minimum factor of the total risk value represented by all threats. The methodology quantifies the effect of threat reduction measures, such as reduced probability of one or more threats due to administrative and/or engineering controls.

  9. Reduction of radiation risks in patients undergoing some X-ray examinations by using optimal projections: A Monte Carlo program-based mathematical calculation.

    PubMed

    Chaparian, A; Kanani, A; Baghbanian, M

    2014-01-01

    The objectives of this paper were calculation and comparison of the effective doses, the risks of exposure-induced cancer, and dose reduction in the gonads for male and female patients in different projections of some X-ray examinations. Radiographies of the lumbar spine [in the eight projections of anteroposterior (AP), posteroanterior (PA), right lateral (RLAT), left lateral (LLAT), right anterior-posterior oblique (RAO), left anterior-posterior oblique (LAO), right posterior-anterior oblique (RPO), and left posterior-anterior oblique (LPO)], the abdomen (in the two projections of AP and PA), and the pelvis (in the two projections of AP and PA) were investigated. A solid-state dosimeter was used for measuring the entrance skin exposure. A Monte Carlo program was used for calculation of effective doses, the risks of radiation-induced cancer, and doses to the gonads for the different projections. Results of this study showed that the PA projection in abdomen, lumbar spine, and pelvis radiographies caused 50%-57% lower effective doses than the AP projection and a 50%-60% reduction in radiation risks. Use of the LAO projection in lumbar spine examinations caused a 53% lower effective dose than the RPO projection and 56% and 63% reductions in radiation risk for males and females, respectively, and the RAO projection caused a 28% lower effective dose than the LPO projection and 52% and 39% reductions in radiation risk for males and females, respectively. Regarding dose reduction in the gonads, use of the PA position rather than AP in radiographies of the abdomen, lumbar spine, and pelvis can reduce ovarian doses in women by 38%, 31%, and 25%, respectively, and testicular doses in males by 76%, 86%, and 94%, respectively. For oblique projections of the lumbar spine examination, employment of LAO rather than RPO and RAO rather than LPO demonstrated 22% and 13% reductions in ovarian doses and 66% and 54% reductions in testicular doses

  10. Requirements based system risk modeling

    NASA Technical Reports Server (NTRS)

    Meshkat, Leila; Cornford, Steven; Feather, Martin

    2004-01-01

    The problem that we address in this paper is assessing the expected degree of success of the system or mission based on the degree to which each requirement is satisfied and the relative weight of the requirements. We assume a complete list of the requirements, the relevant risk elements and their probability of occurrence and the quantified effect of the risk elements on the requirements. In order to assess the degree to which each requirement is satisfied, we need to determine the effect of the various risk elements on the requirement.
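
    The record above describes combining requirement weights with risk-element probabilities and their quantified effects on requirements. A hedged sketch of one way such a calculation could be structured is given below; the weighted-sum form, the independence and multiplicative-loss assumptions, and all names are illustrative, not the authors' actual model.

      # Illustrative sketch (not the authors' exact model): expected mission success as the
      # weighted sum of expected requirement satisfaction. Each risk element e has probability
      # p[e]; effect[e][r] is the fractional loss of requirement r if e occurs. Losses are
      # assumed independent and multiplicative, so E[satisfaction] = prod(1 - p[e]*loss).
      def expected_success(weights, p, effect):
          score = 0.0
          for r, w in weights.items():
              sat = 1.0
              for e, prob in p.items():
                  loss = effect.get(e, {}).get(r, 0.0)
                  sat *= (1.0 - prob * loss)       # expected retained satisfaction w.r.t. element e
              score += w * sat
          return score

      weights = {"nav": 0.6, "comms": 0.4}                   # relative requirement weights
      p = {"sensor_fault": 0.1, "radiation_upset": 0.05}     # risk element probabilities
      effect = {"sensor_fault": {"nav": 0.8},
                "radiation_upset": {"nav": 0.2, "comms": 0.5}}
      print(expected_success(weights, p, effect))            # about 0.94 for these made-up numbers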

  11. Optimal Combination Treatment and Vascular Outcomes in Recent Ischemic Stroke Patients by Premorbid Risk Level

    PubMed Central

    Park, Jong-Ho; Ovbiagele, Bruce

    2015-01-01

    Background Optimal combination of secondary stroke prevention treatment including antihypertensives, antithrombotic agents, and lipid modifiers is associated with reduced recurrent vascular risk including stroke. It is unclear whether optimal combination treatment has a differential impact on stroke patients based on level of vascular risk. Methods We analyzed a clinical trial dataset comprising 3680 recent non-cardioembolic stroke patients aged ≥35 years and followed for 2 years. Patients were categorized by appropriateness level 0 to III depending on the number of the drugs prescribed divided by the number of drugs potentially indicated for each patient (0=none of the indicated medications prescribed and III=all indicated medications prescribed [optimal combination treatment]). High-risk was defined as having a history of stroke or coronary heart disease (CHD) prior to the index stroke event. Independent associations of medication appropriateness level with a major vascular event (stroke, CHD, or vascular death), ischemic stroke, and all-cause death were analyzed. Results Compared with level 0, for major vascular events, the HR of level III in the low-risk group was 0.51 (95% CI: 0.20–1.28) and 0.32 (0.14–0.70) in the high-risk group; for stroke, the HR of level III in the low-risk group was 0.54 (0.16–1.77) and 0.25 (0.08–0.85) in the high-risk group; and for all-cause death, the HR of level III in the low-risk group was 0.66 (0.09–5.00) and 0.22 (0.06–0.78) in the high-risk group. Conclusion Optimal combination treatment is related to a significantly lower risk of future vascular events and death among high-risk patients after a recent non-cardioembolic stroke. PMID:26044963

  12. Algorithmic Differentiation for Calculus-based Optimization

    NASA Astrophysics Data System (ADS)

    Walther, Andrea

    2010-10-01

    For numerous applications, the computation and provision of exact derivative information plays an important role for optimizing the considered system but quite often also for its simulation. This presentation introduces the technique of Algorithmic Differentiation (AD), a method to compute derivatives of arbitrary order within working precision. Quite often an additional structure exploitation is indispensable for a successful coupling of these derivatives with state-of-the-art optimization algorithms. The talk will discuss two important situations where the problem-inherent structure allows a calculus-based optimization. Examples from aerodynamics and nano optics illustrate these advanced optimization approaches.
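
    Algorithmic Differentiation itself is a general technique; as a concrete illustration (independent of the aerodynamics and nano-optics applications mentioned in the talk), a minimal forward-mode sketch using dual numbers is shown below. The function f and all names are illustrative.

      # Minimal forward-mode AD sketch using dual numbers: value and derivative are
      # propagated together to working precision (no truncation error as with finite differences).
      import math

      class Dual:
          def __init__(self, val, dot=0.0):
              self.val, self.dot = val, dot
          def __add__(self, other):
              other = other if isinstance(other, Dual) else Dual(other)
              return Dual(self.val + other.val, self.dot + other.dot)
          __radd__ = __add__
          def __mul__(self, other):
              other = other if isinstance(other, Dual) else Dual(other)
              return Dual(self.val * other.val,
                          self.dot * other.val + self.val * other.dot)
          __rmul__ = __mul__

      def sin(x):
          return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

      def f(x):
          return x * x + 3 * sin(x)          # example function f(x) = x^2 + 3 sin x

      x = Dual(2.0, 1.0)                     # seed dx/dx = 1
      y = f(x)
      print(y.val, y.dot)                    # f(2) and f'(2) = 2*2 + 3*cos(2)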

  13. Game Theory and Risk-Based Levee System Design

    NASA Astrophysics Data System (ADS)

    Hui, R.; Lund, J. R.; Madani, K.

    2014-12-01

    Risk-based analysis has been developed for optimal levee design for economic efficiency. Along many rivers, two levees on opposite riverbanks act as a simple levee system. Being rational and self-interested, land owners on each river bank would tend to independently optimize their levees with risk-based analysis, resulting in a Pareto-inefficient levee system design from the social planner's perspective. Game theory is applied in this study to analyze decision making process in a simple levee system in which the land owners on each river bank develop their design strategies using risk-based economic optimization. For each land owner, the annual expected total cost includes expected annual damage cost and annualized construction cost. The non-cooperative Nash equilibrium is identified and compared to the social planner's optimal distribution of flood risk and damage cost throughout the system which results in the minimum total flood cost for the system. The social planner's optimal solution is not feasible without appropriate level of compensation for the transferred flood risk to guarantee and improve conditions for all parties. Therefore, cooperative game theory is then employed to develop an economically optimal design that can be implemented in practice. By examining the game in the reversible and irreversible decision making modes, the cost of decision making myopia is calculated to underline the significance of considering the externalities and evolution path of dynamic water resource problems for optimal decision making.

  14. Optimal CO2 mitigation under damage risk valuation

    NASA Astrophysics Data System (ADS)

    Crost, Benjamin; Traeger, Christian P.

    2014-07-01

    The current generation has to set mitigation policy under uncertainty about the economic consequences of climate change. This uncertainty governs both the level of damages for a given level of warming, and the steepness of the increase in damage per warming degree. Our model of climate and the economy is a stochastic version of a model employed in assessing the US Social Cost of Carbon (DICE). We compute the optimal carbon taxes and CO2 abatement levels that maximize welfare from economic consumption over time under different risk states. In accordance with recent developments in finance, we separate preferences about time and risk to improve the model's calibration of welfare to observed market interest. We show that introducing the modern asset pricing framework doubles optimal abatement and carbon taxation. Uncertainty over the level of damages at a given temperature increase can result in a slight increase of optimal emissions as compared to using expected damages. In contrast, uncertainty governing the steepness of the damage increase in temperature results in a substantially higher level of optimal mitigation.

  15. Optimization of multi-constrained structures based on optimality criteria

    NASA Technical Reports Server (NTRS)

    Rizzi, P.

    1976-01-01

    A weight-reduction algorithm is developed for the optimal design of structures subject to several multibehavioral inequality constraints. The structural weight is considered to depend linearly on the design variables. The algorithm incorporates a simple recursion formula derived from the Kuhn-Tucker necessary conditions for optimality, associated with a procedure to delete nonactive constraints based on the Gauss-Seidel iterative method for linear systems. A number of example problems are studied, including typical truss structures and simplified wings subject to static loads and with constraints imposed on stresses and displacements. For one of the latter structures, constraints on the fundamental natural frequency and flutter speed are also imposed. The results obtained show that the method is fast, efficient, and general when compared to other competing techniques. Extensions of the method to include equality constraints and nonlinear merit functions are discussed.

  16. Risk-Constrained Dynamic Programming for Optimal Mars Entry, Descent, and Landing

    NASA Technical Reports Server (NTRS)

    Ono, Masahiro; Kuwata, Yoshiaki

    2013-01-01

    A chance-constrained dynamic programming algorithm was developed that is capable of making optimal sequential decisions within a user-specified risk bound. This work handles stochastic uncertainties over multiple stages in the CEMAT (Combined EDL-Mobility Analyses Tool) framework. It was demonstrated by a simulation of Mars entry, descent, and landing (EDL) using real landscape data obtained from the Mars Reconnaissance Orbiter. Although standard dynamic programming (DP) provides a general framework for optimal sequential decisionmaking under uncertainty, it typically achieves risk aversion by imposing an arbitrary penalty on failure states. Such a penalty-based approach cannot explicitly bound the probability of mission failure. A key idea behind the new approach is called risk allocation, which decomposes a joint chance constraint into a set of individual chance constraints and distributes risk over them. The joint chance constraint was reformulated into a constraint on an expectation over a sum of an indicator function, which can be incorporated into the cost function by dualizing the optimization problem. As a result, the chance-constraint optimization problem can be turned into an unconstrained optimization over a Lagrangian, which can be solved efficiently using a standard DP approach.
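
    The risk-allocation idea can be illustrated in a few lines: a joint chance constraint is conservatively split into individual per-stage bounds, here via Boole's inequality and a simple proportional heuristic. This is only a sketch of the decomposition step, not the paper's full chance-constrained dynamic programming or its Lagrangian dualization, and the stage hazards are made-up numbers.

      # Illustrative risk-allocation sketch: a joint chance constraint P(failure at any stage) <= Delta
      # is conservatively decomposed, using Boole's inequality, into per-stage bounds delta_i with
      # sum(delta_i) <= Delta; the individual bounds can then be enforced stage by stage (e.g., in a DP).
      def allocate_risk(stage_hazard, Delta):
          """Split the mission-level risk bound Delta across stages in proportion to a rough
          hazard estimate per stage (an assumed heuristic, not the paper's allocation rule)."""
          total = sum(stage_hazard)
          return [Delta * h / total for h in stage_hazard]

      stage_hazard = [0.5, 0.3, 0.2]              # e.g., entry, descent, landing
      deltas = allocate_risk(stage_hazard, Delta=0.01)
      print(deltas, sum(deltas) <= 0.01 + 1e-12)  # union bound keeps the joint risk within Delta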

  17. MORT (Management Oversight and Risk Tree) based risk management

    SciTech Connect

    Briscoe, G.J.

    1990-02-01

    Risk Management is the optimization of safety programs. This requires a formal systems approach to hazards identification, risk quantification, and resource allocation/risk acceptance as opposed to case-by-case decisions. The Management Oversight and Risk Tree (MORT) has gained wide acceptance as a comprehensive formal systems approach covering all aspects of risk management. MORT is a comprehensive analytical procedure that provides a disciplined method for determining the causes and contributing factors of major accidents. Alternatively, it serves as a tool to evaluate the quality of an existing safety system. While similar in many respects to fault tree analysis, MORT is more generalized and presents over 1500 specific elements of an ideal "universal" management program for optimizing occupational safety.

  18. Risk Matrix Integrating Risk Attitudes Based on Utility Theory.

    PubMed

    Ruan, Xin; Yin, Zhiyi; Frangopol, Dan M

    2015-08-01

    Recent studies indicate that absence of the consideration of risk attitudes of decisionmakers in the risk matrix establishment process has become a major limitation. In order to evaluate risk in a more comprehensive manner, an approach to establish risk matrices that integrates risk attitudes based on utility theory is proposed. There are three main steps within this approach: (1) describing risk attitudes of decisionmakers by utility functions, (2) bridging the gap between utility functions and the risk matrix by utility indifference curves, and (3) discretizing utility indifference curves. A complete risk matrix establishment process based on practical investigations is introduced. This process utilizes decisionmakers' answers to questionnaires to formulate required boundary values for risk matrix establishment and utility functions that effectively quantify their respective risk attitudes.
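
    A hedged sketch of the three steps follows, with an assumed exponential disutility standing in for the questionnaire-derived utility functions and boundary values described in the abstract; the thresholds, gamma, and category labels are illustrative only.

      # Illustrative sketch: (1) describe risk attitude with a (dis)utility u(c), here an assumed
      # exponential form with gamma > 0 for risk aversion; (2) use the level sets of p * u(c)
      # (the utility indifference curves) to rate a probability/consequence pair; (3) discretize
      # those levels into matrix categories.
      import math

      def disutility(c, gamma=2.0):
          return (math.exp(gamma * c) - 1.0) / (math.exp(gamma) - 1.0)

      def risk_category(p, c, thresholds=(0.05, 0.15, 0.35)):
          level = p * disutility(c)            # position relative to the indifference curves
          for k, t in enumerate(thresholds):
              if level <= t:
                  return ["low", "medium", "high", "extreme"][k]
          return "extreme"

      # 4x4 risk matrix over probability and consequence class midpoints
      probs = [0.1, 0.3, 0.6, 0.9]
      conseqs = [0.125, 0.375, 0.625, 0.875]
      for p in probs:
          print([risk_category(p, c) for c in conseqs])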

  19. Risk analysis of heat recovery steam generator with semi quantitative risk based inspection API 581

    NASA Astrophysics Data System (ADS)

    Prayogo, Galang Sandy; Haryadi, Gunawan Dwi; Ismail, Rifky; Kim, Seon Jin

    2016-04-01

    Corrosion is a major problem that frequently occurs in power plants. The heat recovery steam generator (HRSG) is a piece of equipment that poses a high risk to the power plant: corrosion damage can force the HRSG, and hence the plant, to stop operating, and it can threaten the safety of employees. The Risk Based Inspection (RBI) guidelines of the American Petroleum Institute (API) 581 have been used for risk analysis of the HRSG 1. With this methodology, the risk caused by unexpected failure can be estimated as a function of the probability and consequence of failure. This paper presents a case study of risk analysis in the HRSG, starting with a summary of the basic principles and procedures of risk assessment and applying corrosion RBI for process industries. The risk level of each HRSG component was analyzed: the HP superheater has a medium-high risk (4C), the HP evaporator has a medium-high risk (4C), and the HP economizer has a medium risk (3C). The results of the risk assessment using the semi-quantitative method of the API 581 standard place the existing equipment at medium risk; in fact, there is no critical problem in the equipment components. The prominent damage mechanism throughout the equipment is thinning. The evaluation of the risk approach was done with the aim of reducing risk by optimizing the risk assessment activities.

  20. Optimal dividends in the Brownian motion risk model with interest

    NASA Astrophysics Data System (ADS)

    Fang, Ying; Wu, Rong

    2009-07-01

    In this paper, we consider a Brownian motion risk model, and in addition, the surplus earns investment income at a constant force of interest. The objective is to find a dividend policy so as to maximize the expected discounted value of dividend payments. It is well known that optimality is achieved by using a barrier strategy for unrestricted dividend rate. However, ultimate ruin of the company is certain if a barrier strategy is applied. In many circumstances this is not desirable. This consideration leads us to impose a restriction on the dividend stream. We assume that dividends are paid to the shareholders according to admissible strategies whose dividend rate is bounded by a constant. Under this additional constraint, we show that the optimal dividend strategy is formed by a threshold strategy.
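
    In the notation usually used for such diffusion risk models (assumed here, not quoted from the paper), the restricted-dividend problem can be written compactly as

      dX_t = (\mu + r X_t - l_t)\,dt + \sigma\,dW_t, \qquad 0 \le l_t \le \bar{l},
      V(x) = \sup_{\{l_t\}} \mathbb{E}_x\!\left[\int_0^{\tau} e^{-\delta t}\, l_t\, dt\right],
      \qquad \tau = \inf\{t \ge 0 : X_t \le 0\},

    with the abstract's conclusion corresponding to an optimal control of threshold form, l_t^* = \bar{l}\,\mathbf{1}\{X_t > b^*\} for some level b^*.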

  1. Inspection-Repair based Availability Optimization of Distribution Systems using Teaching Learning based Optimization

    NASA Astrophysics Data System (ADS)

    Tiwary, Aditya; Arya, L. D.; Arya, Rajesh; Choube, S. C.

    2016-09-01

    This paper describes a technique for optimizing inspection- and repair-based availability of distribution systems. The optimum duration between two inspections has been obtained for each feeder section with respect to a cost function and subject to satisfaction of availability at each load point. Teaching-learning-based optimization has been used for the availability optimization. The developed algorithm has been implemented on radial and meshed distribution systems, and the results obtained have been compared with those obtained using differential evolution.
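
    Teaching-learning-based optimization (TLBO) is a standard population method; the sketch below shows its generic teacher and learner phases for minimizing an arbitrary bounded function. It is not the paper's specific availability/cost formulation or its constraint handling, and the toy objective is only a placeholder.

      # Generic TLBO sketch (minimization); the paper's availability objective and load-point
      # constraints would replace f and bounds below.
      import random

      def tlbo(f, bounds, pop_size=20, iters=100):
          dim = len(bounds)
          pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
          fit = [f(x) for x in pop]
          for _ in range(iters):
              # Teacher phase: shift learners using the difference between the best learner and the class mean
              best = pop[fit.index(min(fit))]
              mean = [sum(x[d] for x in pop) / pop_size for d in range(dim)]
              TF = random.choice([1, 2])                    # teaching factor
              for i in range(pop_size):
                  cand = [pop[i][d] + random.random() * (best[d] - TF * mean[d]) for d in range(dim)]
                  cand = [min(max(c, lo), hi) for c, (lo, hi) in zip(cand, bounds)]
                  fc = f(cand)
                  if fc < fit[i]:
                      pop[i], fit[i] = cand, fc
              # Learner phase: learn from a randomly chosen peer
              for i in range(pop_size):
                  j = random.choice([k for k in range(pop_size) if k != i])
                  sign = 1 if fit[j] < fit[i] else -1
                  cand = [pop[i][d] + sign * random.random() * (pop[j][d] - pop[i][d]) for d in range(dim)]
                  cand = [min(max(c, lo), hi) for c, (lo, hi) in zip(cand, bounds)]
                  fc = f(cand)
                  if fc < fit[i]:
                      pop[i], fit[i] = cand, fc
          return pop[fit.index(min(fit))], min(fit)

      # Toy usage: minimize a 2-D sphere function
      print(tlbo(lambda x: sum(v * v for v in x), [(-5, 5), (-5, 5)]))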

  2. Functional approximation and optimal specification of the mechanical risk index.

    PubMed

    Kaiser, Mark J; Pulsipher, Allan G

    2005-10-01

    The mechanical risk index (MRI) is a numerical measure that quantifies the complexity of drilling a well. The purpose of this article is to examine the role of the component factors of the MRI and its structural and parametric assumptions. A meta-modeling methodology is applied to derive functional expressions of the MRI, and it is shown that the MRI can be approximated in terms of a linear functional. The variation between the MRI measure and its functional specification is determined empirically, and for a reasonable design space, the functional specification is shown to be a good approximating representation. A drilling risk index is introduced to quantify the uncertainty in the time and cost associated with drilling a well. A general methodology is outlined to create an optimal MRI specification. PMID:16297233

  3. Structural optimization based on internal energy distribution

    NASA Astrophysics Data System (ADS)

    Öman, Michael; Nilsson, Larsgunnar

    2013-04-01

    Structural optimization is a valuable tool to improve the performance of products, but it is in general expensive to perform due to the required extensive number of function evaluations. Therefore, an approximate method based on the internal energy distribution, which only requires a small number of function evaluations, is presented here. By this method, structural optimization can be enabled already in the initial steps in the design of new products when fast, but not necessarily precise, results are often desired. However, the accuracy of the approximate solution depends on the structural behaviour. The internal energy based optimization method is here validated for three structures, but it is believed to be applicable to any structure subjected to a single load where the functions considered are related to the displacement of the loaded area and/or the material thicknesses of the structural parts.

  4. Risk-based planning analysis for a single levee

    NASA Astrophysics Data System (ADS)

    Hui, Rui; Jachens, Elizabeth; Lund, Jay

    2016-04-01

    Traditional risk-based analysis for levee planning focuses primarily on overtopping failure. Although many levees fail before overtopping, few planning studies explicitly include intermediate geotechnical failures in flood risk analysis. This study develops a risk-based model for two simplified levee failure modes: overtopping failure and overall intermediate geotechnical failure from through-seepage, determined by the levee cross section represented by levee height and crown width. Overtopping failure is based only on water level and levee height, while through-seepage failure depends on many geotechnical factors as well, mathematically represented here as a function of levee crown width using levee fragility curves developed from professional judgment or analysis. These levee planning decisions are optimized to minimize the annual expected total cost, which sums expected (residual) annual flood damage and annualized construction costs. Applicability of this optimization approach to planning new levees or upgrading existing levees is demonstrated preliminarily for a levee on a small river protecting agricultural land, and a major levee on a large river protecting a more valuable urban area. Optimized results show higher likelihood of intermediate geotechnical failure than overtopping failure. The effects of uncertainty in levee fragility curves, economic damage potential, construction costs, and hydrology (changing climate) are explored. Optimal levee crown width is more sensitive to these uncertainties than height, while the derived general principles and guidelines for risk-based optimal levee planning remain the same.
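
    The annual-expected-total-cost trade-off described above can be sketched with a toy grid search over height and crown width. Every function and number below is a placeholder assumption (exceedance curve, seepage fragility, unit costs, damage potential), not the study's calibrated hydrology or economics.

      # Simplified sketch of risk-based levee sizing: choose height H and crown width W to minimize
      # annualized construction cost plus expected annual damage from overtopping and through-seepage.
      import math

      def annual_expected_cost(H, W, damage=1e7, rate=0.04, life=50):
          p_overtop = math.exp(-H / 2.0)                   # stand-in annual overtopping probability
          p_seep = math.exp(-W / 5.0) * (1 - p_overtop)    # stand-in through-seepage fragility vs crown width
          crf = rate / (1 - (1 + rate) ** -life)           # capital recovery factor
          construction = crf * (2e5 * H + 5e4 * W)         # annualized construction cost
          return (p_overtop + p_seep) * damage + construction

      best = min(((annual_expected_cost(H, W), H, W)
                  for H in [h / 2 for h in range(2, 21)]
                  for W in range(1, 31)))
      print("min annual cost %.0f at H=%.1f m, W=%d m" % best)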

  5. Estimating vegetation dryness to optimize fire risk assessment with spot vegetation satellite data in savanna ecosystems

    NASA Astrophysics Data System (ADS)

    Verbesselt, J.; Somers, B.; Lhermitte, S.; van Aardt, J.; Jonckheere, I.; Coppin, P.

    2005-10-01

    The lack of information on vegetation dryness prior to the use of fire as a management tool often leads to a significant deterioration of the savanna ecosystem. This paper therefore evaluated the capacity of SPOT VEGETATION time-series to monitor the vegetation dryness (i.e., vegetation moisture content per vegetation amount) in order to optimize fire risk assessment in the savanna ecosystem of Kruger National Park in South Africa. The integrated Relative Vegetation Index approach (iRVI) to quantify the amount of herbaceous biomass at the end of the rain season and the Accumulated Relative Normalized Difference vegetation index decrement (ARND) related to vegetation moisture content were selected. The iRVI and ARND related to vegetation amount and moisture content, respectively, were combined in order to monitor vegetation dryness and optimize fire risk assessment in the savanna ecosystems. In situ fire activity data was used to evaluate the significance of the iRVI and ARND to monitor vegetation dryness for fire risk assessment. Results from the binary logistic regression analysis confirmed that the assessment of fire risk was optimized by integration of both the vegetation quantity (iRVI) and vegetation moisture content (ARND) as statistically significant explanatory variables. Consequently, the integrated use of both iRVI and ARND to monitor vegetation dryness provides a more suitable tool for fire management and suppression compared to other traditional satellite-based fire risk assessment methods, only related to vegetation moisture content.

  6. Surrogate-based analysis and optimization

    NASA Astrophysics Data System (ADS)

    Queipo, Nestor V.; Haftka, Raphael T.; Shyy, Wei; Goel, Tushar; Vaidyanathan, Rajkumar; Kevin Tucker, P.

    2005-01-01

    A major challenge to the successful full-scale development of modern aerospace systems is to address competing objectives such as improved performance, reduced costs, and enhanced safety. Accurate, high-fidelity models are typically time consuming and computationally expensive. Furthermore, informed decisions should be made with an understanding of the impact (global sensitivity) of the design variables on the different objectives. In this context, the so-called surrogate-based approach for analysis and optimization can play a very valuable role. The surrogates are constructed using data drawn from high-fidelity models, and provide fast approximations of the objectives and constraints at new design points, thereby making sensitivity and optimization studies feasible. This paper provides a comprehensive discussion of the fundamental issues that arise in surrogate-based analysis and optimization (SBAO), highlighting concepts, methods, techniques, as well as practical implications. The issues addressed include the selection of the loss function and regularization criteria for constructing the surrogates, design of experiments, surrogate selection and construction, sensitivity analysis, convergence, and optimization. The multi-objective optimal design of a liquid rocket injector is presented to highlight the state of the art and to help guide future efforts.
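
    As a minimal illustration of the surrogate-based loop (sample the expensive model, fit a cheap surrogate, optimize the surrogate, re-evaluate), the sketch below uses a cubic polynomial surrogate on a one-dimensional toy problem. The choice of surrogate, design of experiments, and stopping rule are all simplifications of the issues discussed in the record above, and the "expensive" model is a stand-in.

      # Minimal surrogate-based optimization sketch (1-D).
      import numpy as np

      def expensive_model(x):                    # stand-in for a high-fidelity simulation
          return (x - 1.7) ** 2 + 0.3 * np.sin(5 * x)

      X = np.linspace(0.0, 3.0, 6)               # initial design of experiments
      y = np.array([expensive_model(x) for x in X])

      for it in range(5):
          coeffs = np.polyfit(X, y, deg=3)       # fit a cubic surrogate to the samples
          grid = np.linspace(0.0, 3.0, 601)
          x_new = grid[np.argmin(np.polyval(coeffs, grid))]   # optimize the cheap surrogate
          X = np.append(X, x_new)
          y = np.append(y, expensive_model(x_new))            # one more expensive evaluation
          print("iter %d: suggested x=%.3f, true f=%.4f" % (it, x_new, y[-1]))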

  7. Surrogate-based Analysis and Optimization

    NASA Technical Reports Server (NTRS)

    Queipo, Nestor V.; Haftka, Raphael T.; Shyy, Wei; Goel, Tushar; Vaidyanathan, Raj; Tucker, P. Kevin

    2005-01-01

    A major challenge to the successful full-scale development of modern aerospace systems is to address competing objectives such as improved performance, reduced costs, and enhanced safety. Accurate, high-fidelity models are typically time consuming and computationally expensive. Furthermore, informed decisions should be made with an understanding of the impact (global sensitivity) of the design variables on the different objectives. In this context, the so-called surrogate-based approach for analysis and optimization can play a very valuable role. The surrogates are constructed using data drawn from high-fidelity models, and provide fast approximations of the objectives and constraints at new design points, thereby making sensitivity and optimization studies feasible. This paper provides a comprehensive discussion of the fundamental issues that arise in surrogate-based analysis and optimization (SBAO), highlighting concepts, methods, techniques, as well as practical implications. The issues addressed include the selection of the loss function and regularization criteria for constructing the surrogates, design of experiments, surrogate selection and construction, sensitivity analysis, convergence, and optimization. The multi-objective optimal design of a liquid rocket injector is presented to highlight the state of the art and to help guide future efforts.

  8. Risk-based system refinement

    SciTech Connect

    Winter, V.L.; Berg, R.S.; Dalton, L.J.

    1998-06-01

    When designing a high consequence system, considerable care should be taken to ensure that the system cannot easily be placed into a high consequence failure state. A formal system design process should include a model that explicitly shows the complete state space of the system (including failure states) as well as those events (e.g., abnormal environmental conditions, component failures, etc.) that can cause a system to enter a failure state. In this paper the authors present such a model and formally develop a notion of risk-based refinement with respect to the model.

  9. Assessment of Medical Risks and Optimization of their Management using Integrated Medical Model

    NASA Technical Reports Server (NTRS)

    Fitts, Mary A.; Madurai, Siram; Butler, Doug; Kerstman, Eric; Risin, Diana

    2008-01-01

    The Integrated Medical Model (IMM) Project is a software-based technique that will identify and quantify the medical needs and health risks of exploration crew members during space flight and evaluate the effectiveness of potential mitigation strategies. The IMM Project employs an evidence-based approach that will quantify the probability and consequences of defined in-flight medical risks, mitigation strategies, and tactics to optimize crew member health. Using stochastic techniques, the IMM will ultimately inform decision makers at both programmatic and institutional levels and will enable objective assessment of crew health and optimization of mission success using data from relevant cohort populations and from the astronaut population. The objectives of the project include: 1) identification and documentation of conditions that may occur during exploration missions (Baseline Medical Conditions List [BMCL]), 2) assessment of the likelihood of conditions in the BMCL occurring during exploration missions (incidence rate), 3) determination of the risk associated with these conditions, quantified in terms of end states (Loss of Crew, Loss of Mission, Evacuation), 4) optimization of in-flight hardware mass, volume, power, bandwidth and cost for a given level of risk or uncertainty, and 5) validation of the methodologies used.

  10. Local, Optimization-based Simplicial Mesh Smoothing

    1999-12-09

    OPT-MS is a C software package for the improvement and untangling of simplicial meshes (triangles in 2D, tetrahedra in 3D). Overall mesh quality is improved by iterating over the mesh vertices and adjusting their position to optimize some measure of mesh quality, such as element angle or aspect ratio. Several solution techniques (including Laplacian smoothing, "Smart" Laplacian smoothing, optimization-based smoothing and several combinations thereof) and objective functions (for example, element angle, sin(angle), and aspect ratio) are available to the user for both two and three-dimensional meshes. If the mesh contains invalid elements (those with negative area) a different optimization algorithm for mesh untangling is provided.

  11. A risk-reduction approach for optimal software release time determination with the delay incurred cost

    NASA Astrophysics Data System (ADS)

    Peng, Rui; Li, Yan-Fu; Zhang, Jun-Guang; Li, Xiang

    2015-07-01

    Most existing research on software release time determination assumes that parameters of the software reliability model (SRM) are deterministic and the reliability estimate is accurate. In practice, however, there exists a risk that the reliability requirement cannot be guaranteed due to the parameter uncertainties in the SRM, and such risk can be as high as 50% when the mean value is used. It is necessary for the software project managers to reduce the risk to a lower level by delaying the software release, which inevitably increases the software testing costs. In order to incorporate the managers' preferences over these two factors, a decision model based on multi-attribute utility theory (MAUT) is developed for the determination of optimal risk-reduction release time.

  12. Optimal perfusion during cardiopulmonary bypass: an evidence-based approach.

    PubMed

    Murphy, Glenn S; Hessel, Eugene A; Groom, Robert C

    2009-05-01

    In this review, we summarize the best available evidence to guide the conduct of adult cardiopulmonary bypass (CPB) to achieve "optimal" perfusion. At the present time, there is considerable controversy relating to appropriate management of physiologic variables during CPB. Low-risk patients tolerate mean arterial blood pressures of 50-60 mm Hg without apparent complications, although limited data suggest that higher-risk patients may benefit from mean arterial blood pressures >70 mm Hg. The optimal hematocrit on CPB has not been defined, with large data-based investigations demonstrating that both severe hemodilution and transfusion of packed red blood cells increase the risk of adverse postoperative outcomes. Oxygen delivery is determined by the pump flow rate and the arterial oxygen content and organ injury may be prevented during more severe hemodilutional anemia by increasing pump flow rates. Furthermore, the optimal temperature during CPB likely varies with physiologic goals, and recent data suggest that aggressive rewarming practices may contribute to neurologic injury. The design of components of the CPB circuit may also influence tissue perfusion and outcomes. Although there are theoretical advantages to centrifugal blood pumps over roller pumps, it has been difficult to demonstrate that the use of centrifugal pumps improves clinical outcomes. Heparin coating of the CPB circuit may attenuate inflammatory and coagulation pathways, but has not been clearly demonstrated to reduce major morbidity and mortality. Similarly, no distinct clinical benefits have been observed when open venous reservoirs have been compared to closed systems. In conclusion, there are currently limited data upon which to confidently make strong recommendations regarding how to conduct optimal CPB. There is a critical need for randomized trials assessing clinically significant outcomes, particularly in high-risk patients. PMID:19372313

  13. LP based approach to optimal stable matchings

    SciTech Connect

    Teo, Chung-Piaw; Sethuraman, J.

    1997-06-01

    We study the classical stable marriage and stable roommates problems using a polyhedral approach. We propose a new LP formulation for the stable roommates problem. This formulation is non-empty if and only if the underlying roommates problem has a stable matching. Furthermore, for certain special weight functions on the edges, we construct a 2-approximation algorithm for the optimal stable roommates problem. Our technique uses a crucial geometry of the fractional solutions in this formulation. For the stable marriage problem, we show that a related geometry allows us to express any fractional solution in the stable marriage polytope as convex combination of stable marriage solutions. This leads to a genuinely simple proof of the integrality of the stable marriage polytope. Based on these ideas, we devise a heuristic to solve the optimal stable roommates problem. The heuristic combines the power of rounding and cutting-plane methods. We present some computational results based on preliminary implementations of this heuristic.

  14. Optimal network solution for proactive risk assessment and emergency response

    NASA Astrophysics Data System (ADS)

    Cai, Tianxing

    Coupled with continuous development in the field of industrial operation management, the requirement for operation optimization in large-scale manufacturing networks has provoked growing interest in engineering research. Compared with the traditional approach of taking remedial measures after the occurrence of an emergency event or abnormal situation, current operation control calls for more proactive risk assessment to set up early warning systems and comprehensive emergency response planning. Among all industries, the chemical and energy industries are especially likely to face abnormal and emergency situations because of their industry characteristics. Therefore, the purpose of this study is to develop methodologies that aid emergency response planning and proactive risk assessment in these two industries. The efficacy of the developed methodologies is demonstrated via two real industrial problems. The first case handles energy network dispatch optimization under an emergency of local energy shortage caused by extreme conditions such as earthquake, tsunami, and hurricane, which may cause local areas to suffer from delayed rescues, widespread power outages, tremendous economic losses, and even public safety threats. In such urgent events of local energy shortage, agile energy dispatching through an effective energy transportation network, targeting the minimum energy recovery time, should be a top priority. The second case is a scheduling methodology to coordinate multiple chemical plants' start-ups in order to minimize regional air quality impacts under extreme meteorological conditions. The objective is to reschedule the multi-plant start-up sequence to achieve the minimum sum of delay times relative to the expected start-up time of each plant. All these approaches can provide quantitative decision support for multiple stakeholders, including government and environment agencies, chemical industry, energy industry and local

  15. EUD-based biological optimization for carbon ion therapy

    SciTech Connect

    Brüningk, Sarah C. Kamp, Florian; Wilkens, Jan J.

    2015-11-15

    therapy, the optimization by biological objective functions resulted in slightly superior treatment plans in terms of final EUD for the organs at risk (OARs) compared to voxel-based optimization approaches. This observation was made independent of the underlying objective function metric. An absolute gain in OAR sparing was observed for quadratic objective functions, whereas intersecting DVHs were found for logistic approaches. Even for considerable under- or overestimations of the used effect- or dose–volume parameters during the optimization, treatment plans were obtained that were of similar quality to the results of a voxel-based optimization. Conclusions: EUD-based optimization with either of the presented concepts can successfully be applied to treatment plan optimization. This makes EUD-based optimization for carbon ion therapy a useful tool to optimize more specifically in the sense of biological outcome while voxel-to-voxel variations of the biological effectiveness are still properly accounted for. This may be advantageous in terms of computational cost during treatment plan optimization but also enables a straightforward comparison of different fractionation schemes or treatment modalities.

  16. Base distance optimization for SQUID gradiometers

    SciTech Connect

    Garachtchenko, A.; Matlashov, A.; Kraus, R.

    1998-12-31

    The measurement of magnetic fields generated by weak nearby biomagnetic sources is affected by ambient noise generated by distant sources both internal and external to the subject under study. External ambient noise results from sources with numerous origins, many of which are unpredictable in nature. Internal noise sources are biomagnetic in nature and result from muscle activity (such as the heart, eye blinks, respiration, etc.), pulsation associated with blood flow, surgical implants, etc. Any magnetic noise will interfere with measurements of magnetic sources of interest, such as magnetoencephalography (MEG), in various ways. One of the most effective methods of reducing the magnetic noise measured by the SQUID sensor is to use properly designed superconducting gradiometers. Here, the authors optimized the baseline length of SQUID-based symmetric axial gradiometers using computer simulation. The signal-to-noise ratio (SNR) was used as the optimization criteria. They found that in most cases the optimal baseline is not equal to the depth of the primary source, rather it has a more complex dependence on the gradiometer balance and the ambient magnetic noise. They studied both first and second order gradiometers in simulated shielded environments and only second order gradiometers in a simulated unshielded environment. The noise source was simulated as a distant dipolar source for the shielded cases. They present optimal gradiometer baseline lengths for the various simulated situations below.

  17. Smell Detection Agent Based Optimization Algorithm

    NASA Astrophysics Data System (ADS)

    Vinod Chandra, S. S.

    2016-09-01

    In this paper, a novel nature-inspired optimization algorithm is employed in which the trained behaviour of dogs in detecting smell trails is adapted into computational agents for problem solving. The algorithm involves creation of a surface with smell trails and subsequent iteration of the agents in resolving a path. The algorithm can be applied under different computational constraints that involve path-based problems, and its implementation can be treated as a shortest-path problem for a variety of datasets. The simulated agents have been used to evolve the shortest path between two nodes in a graph. The algorithm is useful for solving NP-hard problems related to path discovery, as well as many practical optimization problems, and its derivation can be extended to solve general shortest-path problems.

  18. Risks and Risk-Based Regulation in Higher Education Institutions

    ERIC Educational Resources Information Center

    Huber, Christian

    2009-01-01

    Risk-based regulation is a relatively new mode of governance. Not only does it offer a way of controlling institutions from the outside but it also provides the possibility of making an organisation's achievements visible/visualisable. This paper comments on a list of possible risks that higher education institutions have to face. In a second…

  19. Quantified Risk Ranking Model for Condition-Based Risk and Reliability Centered Maintenance

    NASA Astrophysics Data System (ADS)

    Chattopadhyaya, Pradip Kumar; Basu, Sushil Kumar; Majumdar, Manik Chandra

    2016-03-01

    In the recent past, the risk and reliability centered maintenance (RRCM) framework was introduced with a shift in methodological focus from reliability and probabilities (expected values) to reliability, uncertainty and risk. In this paper the authors explain a novel methodology for risk quantification and for ranking the critical items to prioritize maintenance actions on the basis of condition-based risk and reliability centered maintenance (CBRRCM). The critical items are identified through criticality analysis of the RPN values of the items of a system, and the maintenance significant precipitating factors (MSPF) of the items are evaluated. The criticality of risk is assessed using three risk coefficients. The likelihood risk coefficient treats the probability as a fuzzy number. The abstract risk coefficient deduces risk influenced by uncertainty and sensitivity, besides other factors. The third risk coefficient, called the hazardous risk coefficient, is due to anticipated hazards which may occur in the future; the risk is deduced from criteria of consequences on safety, environment, maintenance and economic risks with corresponding costs for the consequences. The characteristic values of all three risk coefficients are obtained with a particular test. With a few more tests on the system, the values may change significantly within the controlling range of each coefficient; hence "random number simulation" is used to obtain one distinctive value for each coefficient. The risk coefficients are statistically added to obtain the final risk coefficient of each critical item, and the final rankings of critical items are then estimated. The prioritization in ranking of critical items using the developed mathematical model for risk assessment shall be useful in optimization of financial losses and timing of maintenance actions.

  20. Discriminative motif optimization based on perceptron training

    PubMed Central

    Patel, Ronak Y.; Stormo, Gary D.

    2014-01-01

    Motivation: Generating accurate transcription factor (TF) binding site motifs from data generated using the next-generation sequencing, especially ChIP-seq, is challenging. The challenge arises because a typical experiment reports a large number of sequences bound by a TF, and the length of each sequence is relatively long. Most traditional motif finders are slow in handling such enormous amount of data. To overcome this limitation, tools have been developed that compromise accuracy with speed by using heuristic discrete search strategies or limited optimization of identified seed motifs. However, such strategies may not fully use the information in input sequences to generate motifs. Such motifs often form good seeds and can be further improved with appropriate scoring functions and rapid optimization. Results: We report a tool named discriminative motif optimizer (DiMO). DiMO takes a seed motif along with a positive and a negative database and improves the motif based on a discriminative strategy. We use area under receiver-operating characteristic curve (AUC) as a measure of discriminating power of motifs and a strategy based on perceptron training that maximizes AUC rapidly in a discriminative manner. Using DiMO, on a large test set of 87 TFs from human, drosophila and yeast, we show that it is possible to significantly improve motifs identified by nine motif finders. The motifs are generated/optimized using training sets and evaluated on test sets. The AUC is improved for almost 90% of the TFs on test sets and the magnitude of increase is up to 39%. Availability and implementation: DiMO is available at http://stormo.wustl.edu/DiMO Contact: rpatel@genetics.wustl.edu, ronakypatel@gmail.com PMID:24369152
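
    A hedged sketch of the perceptron-style discriminative idea follows: a position weight matrix scores the best window in each sequence, and misranked positive/negative pairs trigger an update toward the positive's best site. The encoding, scoring, and update rule here are illustrative assumptions and do not reproduce DiMO's actual AUC-maximizing implementation.

      # Hedged sketch of perceptron-style motif optimization (illustrative only).
      import random

      BASES = "ACGT"

      def best_window(pwm, seq):
          """Return (score, window) of the highest-scoring k-mer under the PWM."""
          k = len(pwm)
          best = None
          for i in range(len(seq) - k + 1):
              win = seq[i:i + k]
              s = sum(pwm[j][BASES.index(win[j])] for j in range(k))
              if best is None or s > best[0]:
                  best = (s, win)
          return best

      def perceptron_round(pwm, positives, negatives, lr=0.1):
          for pos, neg in zip(positives, negatives):
              sp, wp = best_window(pwm, pos)
              sn, wn = best_window(pwm, neg)
              if sn >= sp:                           # misranked pair: nudge the PWM
                  for j in range(len(pwm)):
                      pwm[j][BASES.index(wp[j])] += lr
                      pwm[j][BASES.index(wn[j])] -= lr
          return pwm

      # Toy usage: a 3-long seed motif, random background negatives
      pwm = [[0.0] * 4 for _ in range(3)]
      positives = ["AATGCAA", "CCTGCAG", "GTTGCAT"]   # share the TGC core
      negatives = ["".join(random.choice(BASES) for _ in range(7)) for _ in range(3)]
      for _ in range(10):
          pwm = perceptron_round(pwm, positives, negatives)
      print(pwm)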

  1. Optimized curvelet-based empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Wu, Renjie; Zhang, Qieshi; Kamata, Sei-ichiro

    2015-02-01

    Recent years have seen immense improvement in the development of signal processing based on the Curvelet transform. The Curvelet transform provides a new multi-resolution representation. The frame elements of Curvelets exhibit higher directional sensitivity and anisotropy than Wavelets, multi-Wavelets, steerable pyramids, and so on. These features are based on the anisotropic notion of scaling. In practice, time-series signal processing problems are often encountered. To solve such problems, time-frequency analysis based methods are studied; however, time-frequency analysis cannot always be trusted, and many new methods have been proposed. The Empirical Mode Decomposition (EMD) is one of them, and is widely used. The EMD aims to decompose into their building blocks functions that are the superposition of a reasonably small number of components, well separated in the time-frequency plane, where each component can be viewed as locally approximately harmonic. However, it cannot solve the problem of directionality in high dimensions. A reallocated method of the Curvelet transform (optimized Curvelet-based EMD) is proposed in this paper. We introduce a definition for a class of functions that can be viewed as a superposition of a reasonably small number of approximately harmonic components of an optimized Curvelet family. We analyze this algorithm and demonstrate its results on data. The experimental results prove the effectiveness of our method.

  2. Multiobjective optimization-based design of wearable electrocardiogram monitoring systems.

    PubMed

    Martinez-Tabares, F J; Jaramillo-Garzón, J A; Castellanos-Dominguez, G

    2014-01-01

    Nowadays, the use of Wearable User Interfaces has been growing extensively in medical monitoring applications. However, production and manufacture of prototypes without automation tools may lead to nonviable results, since it is common to face an optimization problem in which several variables are in conflict with each other. Thus, it is necessary to design a strategy for balancing the variables and constraints, systematizing the design in order to reduce the risks that arise when it is guided exclusively by the intuition of the developer. This paper proposes a framework for designing wearable ECG monitoring systems using multi-objective optimization. The main contribution of this work is a model to automate the design process, including a mathematical expression relating the principal variables that make up the criteria of functionality and wearability. We also introduce a novel yardstick for deciding the location of electrodes, based on reducing ECG interference by maximizing the electrode-skin contact.

  3. Optimal halftoning for network-based imaging

    NASA Astrophysics Data System (ADS)

    Ostromoukhov, Victor

    2000-12-01

    In this contribution, we introduce a multiple depth progressive representation for network-based still and moving images. A simple quantization algorithm associated with this representation provides optimal image quality. By optimum, we mean the best possible visual quality for a given value of information under real life constraints such as physical, psychological, or legal constraints. A special variant of the algorithm, multi-depth coherent error diffusion, addresses a specific problem of temporal coherence between frames in moving images. The output produced with our algorithm is visually pleasant because its Fourier spectrum is close to the 'blue noise'.

  4. GPU-based ultrafast IMRT plan optimization

    NASA Astrophysics Data System (ADS)

    Men, Chunhua; Gu, Xuejun; Choi, Dongju; Majumdar, Amitava; Zheng, Ziyi; Mueller, Klaus; Jiang, Steve B.

    2009-11-01

    The widespread adoption of on-board volumetric imaging in cancer radiotherapy has stimulated research efforts to develop online adaptive radiotherapy techniques to handle the inter-fraction variation of the patient's geometry. Such efforts face major technical challenges to perform treatment planning in real time. To overcome this challenge, we are developing a supercomputing online re-planning environment (SCORE) at the University of California, San Diego (UCSD). As part of the SCORE project, this paper presents our work on the implementation of an intensity-modulated radiation therapy (IMRT) optimization algorithm on graphics processing units (GPUs). We adopt a penalty-based quadratic optimization model, which is solved by using a gradient projection method with Armijo's line search rule. Our optimization algorithm has been implemented in CUDA for parallel GPU computing as well as in C for serial CPU computing for comparison purpose. A prostate IMRT case with various beamlet and voxel sizes was used to evaluate our implementation. On an NVIDIA Tesla C1060 GPU card, we have achieved speedup factors of 20-40 without losing accuracy, compared to the results from an Intel Xeon 2.27 GHz CPU. For a specific nine-field prostate IMRT case with 5 × 5 mm2 beamlet size and 2.5 × 2.5 × 2.5 mm3 voxel size, our GPU implementation takes only 2.8 s to generate an optimal IMRT plan. Our work has therefore solved a major problem in developing online re-planning technologies for adaptive radiotherapy.
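
    The optimization scheme named in the record (a penalty-based quadratic objective minimized by gradient projection with Armijo's line search) can be sketched on the CPU with NumPy as below. The dose matrix and prescription are random placeholders, and the GPU (CUDA) parallelization responsible for the reported speedups is omitted.

      # CPU sketch: minimize f(x) = ||Dx - d||^2 over x >= 0 (beamlet intensities) by gradient
      # projection with Armijo backtracking; random placeholder data, illustrative only.
      import numpy as np

      rng = np.random.default_rng(0)
      D = rng.random((200, 50))            # dose-deposition matrix (voxels x beamlets)
      d = rng.random(200)                  # penalized/prescribed dose per voxel

      def f(x):
          r = D @ x - d
          return r @ r

      def grad(x):
          return 2 * D.T @ (D @ x - d)

      x = np.zeros(50)
      for it in range(100):
          g = grad(x)
          step, beta, sigma = 1.0, 0.5, 1e-4
          while True:                                      # Armijo backtracking line search
              x_new = np.maximum(x + step * (-g), 0.0)     # gradient step projected onto x >= 0
              if f(x_new) <= f(x) + sigma * g @ (x_new - x):
                  break
              step *= beta
          if np.linalg.norm(x_new - x) < 1e-8:
              break
          x = x_new
      print("objective:", f(x))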

  5. Study of a risk-based piping inspection guideline system.

    PubMed

    Tien, Shiaw-Wen; Hwang, Wen-Tsung; Tsai, Chih-Hung

    2007-02-01

    A risk-based inspection system and a piping inspection guideline model were developed in this study. The research procedure consists of two parts--the building of a risk-based inspection model for piping and the construction of a risk-based piping inspection guideline model. Field visits at the plant were conducted to develop the risk-based inspection and strategic analysis system. A knowledge-based model was built in accordance with international standards and local government regulations, and the rational unified process was applied to reduce discrepancies in the development of the models. The models were designed to analyze damage factors, damage models, and potential damage positions of piping in petrochemical plants. The purpose of this study was to provide inspection-related personnel with optimal planning tools for piping inspections, hence enabling effective predictions of potential piping risks and enhancing the degree of safety in plant operations that the petrochemical industries can be expected to achieve. A risk analysis was conducted on the piping system of a petrochemical plant. The outcome indicated that most of the risks resulted from a small number of pipelines.

  6. Nuclear insurance risk assessment using risk-based methodology

    SciTech Connect

    Wendland, W.G.

    1992-01-01

    This paper presents American Nuclear Insurers' (ANI's) and Mutual Atomic Energy Liability Underwriters' (MAELU's) process and experience for conducting nuclear insurance risk assessments using a risk-based methodology. The process is primarily qualitative and uses traditional insurance risk assessment methods and an approach developed under the auspices of the American Society of Mechanical Engineers (ASME) in which ANI/MAELU is an active sponsor. This process assists ANI's technical resources in identifying where to look for insurance risk in an industry in which insurance exposure tends to be dynamic and nonactuarial. The process is an evolving one that also seeks to minimize the impact on insureds while maintaining a mutually agreeable risk tolerance.

  7. Human behavior-based particle swarm optimization.

    PubMed

    Liu, Hao; Xu, Gang; Ding, Gui-Yan; Sun, Yu-Bo

    2014-01-01

    Particle swarm optimization (PSO) has attracted many researchers dealing with various optimization problems, owing to its easy implementation, few tuned parameters, and acceptable performance. However, the algorithm easily becomes trapped in local optima because of a rapid loss of population diversity. Therefore, improving the performance of PSO and decreasing its dependence on parameters are two important research topics. In this paper, we present a human behavior-based PSO, called HPSO. There are two remarkable differences between PSO and HPSO. First, the global worst particle is introduced into the velocity equation of PSO and is endowed with a random weight that obeys the standard normal distribution; this strategy helps trade off the exploration and exploitation abilities of PSO. Second, we eliminate the two acceleration coefficients c1 and c2 of the standard PSO (SPSO) to reduce the parameter sensitivity of solved problems. Experimental results on 28 benchmark functions, which consist of unimodal, multimodal, rotated, and shifted high-dimensional functions, demonstrate the high performance of the proposed algorithm in terms of convergence accuracy and speed with lower computational cost.
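
    Read literally, the abstract's modified velocity update might look like the sketch below: the acceleration coefficients c1 and c2 are dropped, and the global worst particle enters with a standard-normal random weight. This is one plausible reading for illustration only; the paper's exact equation may differ.

      # Illustrative HPSO-style velocity update (one reading of the abstract, not the paper's equation).
      import random

      def hpso_velocity(v, x, pbest, gbest, gworst, w=0.7):
          r1, r2 = random.random(), random.random()
          n = random.gauss(0.0, 1.0)               # standard-normal weight on the worst-particle term
          return [w * vi
                  + r1 * (pb - xi)                 # cognitive pull (no c1)
                  + r2 * (gb - xi)                 # social pull (no c2)
                  + n * (xi - gw)                  # term involving the global worst particle
                  for vi, xi, pb, gb, gw in zip(v, x, pbest, gbest, gworst)]

      # Toy usage on a 2-D particle
      print(hpso_velocity([0.1, -0.2], [1.0, 2.0], [0.8, 1.5], [0.5, 1.0], [3.0, 4.0]))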

  8. Cytogenetic bases for risk inference

    SciTech Connect

    Bender, M A

    1980-01-01

    Various environmental pollutants are suspected of being capable of causing cancers or genetic defects even at low levels of exposure. In order to estimate risk from exposure to these pollutants, it would be useful to have some indicator of exposure. It is suggested that chromosomes are ideally suited for this purpose. Through the phenomena of chromosome aberrations and sister chromatid exchanges (SCE), chromosomes respond to virtually all carcinogens and mutagens. Aberrations and SCE are discussed in the context of their use as indicators of increased risk to health by chemical pollutants. (ACR)

  9. Doing our best: optimization and the management of risk.

    PubMed

    Ben-Haim, Yakov

    2012-08-01

    Tools and concepts of optimization are widespread in decision-making, design, and planning. There is a moral imperative to "do our best." Optimization underlies theories in physics and biology, and economic theories often presume that economic agents are optimizers. We argue that in decisions under uncertainty, what should be optimized is robustness rather than performance. We discuss the equity premium puzzle from financial economics, and explain that the puzzle can be resolved by using the strategy of satisficing rather than optimizing. We discuss design of critical technological infrastructure, showing that satisficing of performance requirements--rather than optimizing them--is a preferable design concept. We explore the need for disaster recovery capability and its methodological dilemma. The disparate domains--economics and engineering--illuminate different aspects of the challenge of uncertainty and of the significance of robust-satisficing. PMID:22551012

  11. Risk based ASME Code requirements

    SciTech Connect

    Gore, B.F.; Vo, T.V.; Balkey, K.R.

    1992-09-01

    The objective of this ASME Research Task Force is to develop and to apply a methodology for incorporating quantitative risk analysis techniques into the definition of in-service inspection (ISI) programs for a wide range of industrial applications. An additional objective, directed towards the field of nuclear power generation, is ultimately to develop a recommendation for comprehensive revisions to the ISI requirements of Section XI of the ASME Boiler and Pressure Vessel Code. This will require development of a firm technical basis for such requirements, which does not presently exist. Several years of additional research will be required before this can be accomplished. A general methodology suitable for application to any industry has been defined and published. It has recently been refined and further developed during application to the field of nuclear power generation. In the nuclear application, probabilistic risk assessment (PRA) techniques and information have been incorporated. With additional analysis, PRA information is used to determine the consequence of a component rupture (increased reactor core damage probability). A procedure has also been recommended for using the resulting quantified risk estimates to determine target component rupture probability values to be maintained by inspection activities. Structural risk and reliability analysis (SRRA) calculations are then used to determine characteristics which an inspection strategy must possess in order to maintain component rupture probabilities below target values. The methodology, results of example applications, and plans for future work are discussed.

  12. Risk-Based Sampling: I Don't Want to Weight in Vain.

    PubMed

    Powell, Mark R

    2015-12-01

    Recently, there has been considerable interest in developing risk-based sampling for food safety and animal and plant health for efficient allocation of inspection and surveillance resources. The problem of risk-based sampling allocation presents a challenge similar to financial portfolio analysis. Markowitz (1952) laid the foundation for modern portfolio theory based on mean-variance optimization. However, a persistent challenge in implementing portfolio optimization is the problem of estimation error, leading to false "optimal" portfolios and unstable asset weights. In some cases, portfolio diversification based on simple heuristics (e.g., equal allocation) has better out-of-sample performance than complex portfolio optimization methods due to estimation uncertainty. Even for portfolios with a modest number of assets, the estimation window required for true optimization may imply an implausibly long stationary period. The implications for risk-based sampling are illustrated by a simple simulation model of lot inspection for a small, heterogeneous group of producers.
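
    The portfolio analogy can be made concrete with a toy sketch (the producers, risk rates, and history length below are hypothetical, and this is not the paper's simulation model): inspection effort is allocated either in proportion to estimated risk or by the simple equal-allocation heuristic, and the estimated-risk weights inherit the estimation error of a short history.

```python
import numpy as np

rng = np.random.default_rng(0)
true_risk = np.array([0.02, 0.05, 0.10, 0.01, 0.03])  # hypothetical lot-violation rates
n_hist = 50                                            # short estimation window

# Estimate each producer's risk from a limited inspection history.
est_risk = rng.binomial(n_hist, true_risk) / n_hist

# "Optimized" allocation: inspections weighted by estimated risk (sensitive to estimation error).
w_risk = est_risk / est_risk.sum() if est_risk.sum() > 0 else np.full(true_risk.size, 1 / true_risk.size)

# Heuristic: equal allocation across producers.
w_equal = np.full(true_risk.size, 1 / true_risk.size)

# Out-of-sample comparison: expected number of violations caught by 100 inspections.
for name, w in [("risk-weighted", w_risk), ("equal", w_equal)]:
    print(f"{name:13s} expected detections: {np.sum(100 * w * true_risk):.2f}")
```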

  13. Probability-Based Software for Grid Optimization: Improved Power System Operations Using Advanced Stochastic Optimization

    SciTech Connect

    2012-02-24

    GENI Project: Sandia National Laboratories is working with several commercial and university partners to develop software for market management systems (MMSs) that enable greater use of renewable energy sources throughout the grid. MMSs are used to securely and optimally determine which energy resources should be used to service energy demand across the country. Contributions of electricity to the grid from renewable energy sources such as wind and solar are intermittent, introducing complications for MMSs, which have trouble accommodating the multiple sources of price and supply uncertainties associated with bringing these new types of energy into the grid. Sandia’s software will bring a new, probability-based formulation to account for these uncertainties. By factoring in various probability scenarios for electricity production from renewable energy sources in real time, Sandia’s formula can reduce the risk of inefficient electricity transmission, save ratepayers money, conserve power, and support the future use of renewable energy.

  14. CFD based draft tube hydraulic design optimization

    NASA Astrophysics Data System (ADS)

    McNabb, J.; Devals, C.; Kyriacou, S. A.; Murry, N.; Mullins, B. F.

    2014-03-01

    The draft tube design of a hydraulic turbine, particularly in low to medium head applications, plays an important role in determining the efficiency and power characteristics of the overall machine, since an important proportion of the available energy, being in kinetic form leaving the runner, needs to be recovered by the draft tube into static head. For large units, these efficiency and power characteristics can equate to large sums of money when considering the anticipated selling price of the energy produced over the machine's life-cycle. This same draft tube design is also a key factor in determining the overall civil costs of the powerhouse, primarily in excavation and concreting, which can amount to similar orders of magnitude as the price of the energy produced. Therefore, there is a need to find the optimum compromise between these two conflicting requirements. In this paper, an elaborate approach is described for dealing with this optimization problem. First, the draft tube's detailed geometry is defined as a function of a comprehensive set of design parameters (about 20 of which a subset is allowed to vary during the optimization process) and are then used in a non-uniform rational B-spline based geometric modeller to fully define the wetted surfaces geometry. Since the performance of the draft tube is largely governed by 3D viscous effects, such as boundary layer separation from the walls and swirling flow characteristics, which in turn governs the portion of the available kinetic energy which will be converted into pressure, a full 3D meshing and Navier-Stokes analysis is performed for each design. What makes this even more challenging is the fact that the inlet velocity distribution to the draft tube is governed by the runner at each of the various operating conditions that are of interest for the exploitation of the powerhouse. In order to determine these inlet conditions, a combined steady-state runner and an initial draft tube analysis, using a

  15. Risk based inspection for atmospheric storage tank

    NASA Astrophysics Data System (ADS)

    Nugroho, Agus; Haryadi, Gunawan Dwi; Ismail, Rifky; Kim, Seon Jin

    2016-04-01

    Corrosion is an attack that occurs on a metallic material as a result of its reaction with the environment. It causes atmospheric storage tank leakage, material loss, environmental pollution, and equipment failure, affects the age of process equipment, and ultimately leads to financial damage. Corrosion risk measurement has become a vital part of asset management at the plant for operating any aging asset. This paper provides six case studies dealing with high-speed diesel atmospheric storage tank parts at a power plant. A summary of the basic principles and procedures of corrosion risk analysis and RBI applicable to the process industries is given prior to the study. A semi-quantitative method based on the API 581 Base Resource Document was employed. The risk associated with corrosion of the equipment, in terms of its likelihood and its consequences, is discussed. The corrosion risk analysis outcome was used to formulate a Risk Based Inspection (RBI) method that should be part of atmospheric storage tank operation at the plant. RBI concentrates inspection resources on equipment in the 'High Risk' and 'Medium Risk' categories and devotes less effort to 'Low Risk' items such as the shell. Risk categories of the evaluated equipment are illustrated through the case study outcomes.
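
    A minimal illustration of the semi-quantitative step (the category labels and thresholds below are assumptions, not values taken from API 581): a likelihood category and a consequence category are combined into a risk rank that drives where inspection resources go.

```python
# Hypothetical 5x5 semi-quantitative risk matrix: likelihood 1-5, consequence A-E.
def risk_rank(likelihood: int, consequence: str) -> str:
    score = likelihood + "ABCDE".index(consequence)   # combined score, 1..9
    if score <= 3:
        return "Low"
    if score <= 6:
        return "Medium"
    return "High"

# Inspection planning concentrates on the High and Medium items.
equipment = {"shell course 1": (2, "B"), "roof plate": (4, "D"), "drain nozzle": (1, "A")}
for item, (lik, con) in equipment.items():
    print(f"{item:14s} -> {risk_rank(lik, con)}")
```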

  16. Risk based limits for Operational Safety Requirements

    SciTech Connect

    Cappucci, A.J. Jr.

    1993-01-18

    OSR limits are designed to protect the assumptions made in the facility safety analysis in order to preserve the safety envelope during facility operation. Normally, limits are set based on "worst case conditions" without regard to the likelihood (frequency) of a credible event occurring. In special cases where the accident analyses are based on "time at risk" arguments, it may be desirable to control the time at which the facility is at risk. A methodology has been developed to use OSR limits to control the source terms and the times these source terms would be available, thus controlling the acceptable risk to a nuclear process facility. The methodology defines a new term, "gram-days". This term represents the area under a source term (inventory) vs time curve which represents the risk to the facility. Using the concept of gram-days (normalized to one year) allows the use of an accounting scheme to control the risk under the inventory vs time curve. The methodology results in at least three OSR limits: (1) control of the maximum inventory or source term, (2) control of the maximum gram-days for the period based on a source term weighted average, and (3) control of the maximum gram-days at the individual source term levels. Basing OSR limits on risk based safety analysis is feasible, and a basis for development of risk based limits is defensible. However, monitoring inventories and the frequencies required to maintain facility operation within the safety envelope may be complex and time consuming.
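
    A sketch of the gram-days bookkeeping (the limit values and the inventory history below are hypothetical): the source-term inventory versus time curve is integrated to give gram-days over one year and checked against both the instantaneous inventory limit and the gram-days limit.

```python
import numpy as np

def gram_days(times_days, inventory_grams):
    """Area under the inventory-vs-time curve, i.e. gram-days (trapezoidal rule)."""
    return np.trapz(inventory_grams, times_days)

# Hypothetical operating history over one year.
t = np.array([0.0, 90.0, 180.0, 270.0, 365.0])       # days
inv = np.array([20.0, 45.0, 30.0, 60.0, 25.0])        # grams of source term present

MAX_INVENTORY_G = 80.0      # OSR limit (1): maximum instantaneous source term
MAX_GRAM_DAYS = 15_000.0    # OSR limit (2): maximum gram-days per year

gd = gram_days(t, inv)
print(f"gram-days this year: {gd:.0f}",
      "| inventory limit met:", bool(inv.max() <= MAX_INVENTORY_G),
      "| gram-days limit met:", bool(gd <= MAX_GRAM_DAYS))
```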

  17. Temporal variation of optimal UV exposure time over Korea: risks and benefits of surface UV radiation

    NASA Astrophysics Data System (ADS)

    Lee, Y. G.; Koo, J. H.

    2015-12-01

    Solar UV radiation in the wavelength range between 280 and 400 nm has both positive and negative influences on the human body. Surface UV radiation is the main natural source of vitamin D, promoting bone and musculoskeletal health and reducing the risk of a number of cancers and other medical conditions. However, overexposure to surface UV radiation is strongly associated with the majority of skin cancers, in addition to other negative health effects such as sunburn, skin aging, and some forms of eye cataracts. Therefore, it is important to estimate the optimal UV exposure time, representing a balance between reducing negative health effects and maximizing sufficient vitamin D production. Previous studies calculated erythemal UV and vitamin-D UV from measured and modelled spectral irradiances, respectively, by weighting with the CIE erythema and vitamin D3 generation functions (Kazantzidis et al., 2009; Fioletov et al., 2010). In particular, McKenzie et al. (2009) suggested an algorithm to estimate vitamin-D-production UV from erythemal UV (or the UV index) and determined the optimum conditions of UV exposure for skin type II according to Fitzpatrick (1988). Recently, there has been growing demand for information on the risks and benefits of surface UV radiation for public health in Korea, so it is necessary to estimate an optimal UV exposure time suitable for the skin types of East Asians. This study examined the relationship between erythemally weighted UV (UVEry) and vitamin D weighted UV (UVVitD) over Korea during 2004-2012. The temporal variations of the ratio (UVVitD/UVEry) were also analyzed, and the ratio as a function of UV index was applied in estimating the optimal UV exposure time. In summer, with high surface UV radiation, a short exposure time led to sufficient vitamin D as well as to erythema, and vice versa in winter. Thus, the balancing exposure time in winter was long enough to maximize UV benefits and minimize UV risks.

  18. Risk-Based Comparison of Carbon Capture Technologies

    SciTech Connect

    Engel, David W.; Dalton, Angela C.; Dale, Crystal; Jones, Edward

    2013-05-01

    In this paper, we describe an integrated probabilistic risk assessment methodological framework and a decision-support tool suite for implementing systematic comparisons of competing carbon capture technologies. Culminating from a collaborative effort among national laboratories under the Carbon Capture Simulation Initiative (CCSI), the risk assessment framework and the decision-support tool suite encapsulate three interconnected probabilistic modeling and simulation components. The technology readiness level (TRL) assessment component identifies specific scientific and engineering targets required by each readiness level and applies probabilistic estimation techniques to calculate the likelihood of graded as well as nonlinear advancement in technology maturity. The technical risk assessment component focuses on identifying and quantifying risk contributors, especially stochastic distributions for significant risk contributors, performing scenario-based risk analysis, and integrating with carbon capture process model simulations and optimization. The financial risk component estimates the long-term return on investment based on energy retail pricing, production cost, operating and power replacement cost, plant construction and retrofit expenses, and potential tax relief, expressed probabilistically as the net present value distributions over various forecast horizons.

  19. Optimizing Assurance: The Risk Regulation System in Relationships

    ERIC Educational Resources Information Center

    Murray, Sandra L.; Holmes, John G.; Collins, Nancy L.

    2006-01-01

    A model of risk regulation is proposed to explain how people balance the goal of seeking closeness to a romantic partner against the opposing goal of minimizing the likelihood and pain of rejection. The central premise is that confidence in a partner's positive regard and caring allows people to risk seeking dependence and connectedness. The risk…

  20. Bell-Curve Based Evolutionary Optimization Algorithm

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.; Laba, K.; Kincaid, R.

    1998-01-01

    The paper presents an optimization algorithm that falls in the category of genetic, or evolutionary, algorithms. While bit exchange is the basis of most of the genetic algorithms (GA) in research and applications in America, some alternatives, also in the category of evolutionary algorithms but using a direct, geometrical approach, have gained popularity in Europe and Asia. The Bell-Curve Based evolutionary algorithm (BCB) is in this alternative category and is distinguished by the use of a combination of n-dimensional geometry and the normal distribution, the bell curve, in the generation of offspring. The tool for creating a child is a geometrical construct comprising a line connecting two parents and a weighted point on that line. The point that defines the child deviates from the weighted point in two directions, parallel and orthogonal to the connecting line, the deviation in each direction obeying a probabilistic distribution. Tests showed satisfactory performance of BCB. The principal advantage of BCB is its controllability via the normal distribution parameters and the geometrical construct variables.
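
    The geometric construct can be sketched as follows (the weighting and deviation scales are assumptions, distinct parents and a design space of dimension at least two are assumed so an orthogonal direction exists): a child is generated near a weighted point on the line connecting two parents, with normally distributed deviations parallel and orthogonal to that line.

```python
import numpy as np

def bcb_child(p1, p2, alpha=0.5, sigma_par=0.1, sigma_orth=0.1, rng=np.random.default_rng()):
    """Generate one offspring from two distinct parent design points, Bell-Curve Based style (sketch)."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    d = p2 - p1
    length = np.linalg.norm(d)
    u = d / length                                   # unit vector along the connecting line
    base = p1 + alpha * d                            # weighted point on the line
    r = rng.standard_normal(p1.shape)                # random direction ...
    r -= r.dot(u) * u                                # ... made orthogonal to the line
    r /= np.linalg.norm(r)
    # bell-curve (normal) deviations in the parallel and orthogonal directions
    return (base
            + rng.standard_normal() * sigma_par * length * u
            + rng.standard_normal() * sigma_orth * length * r)

child = bcb_child([0.0, 0.0, 1.0], [1.0, 2.0, 0.0])
```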

  1. Unrealistic Optimism, Sex, and Risk Perception of Type 2 Diabetes Onset: Implications for Education Programs

    PubMed Central

    Sealey-Potts, Claudia

    2015-01-01

    This study examined links among unrealistic optimism, sex, and risk perception of type 2 diabetes onset in college students. Participants included 660 college students who consented to complete a questionnaire. The results showed significant differences between students who perceived that they were at risk for type 2 diabetes onset and those who thought their peers were the ones at risk. A higher prevalence of participants thought their peers were the ones at risk for type 2 diabetes. Women were more likely than men to report a higher risk perception, indicating that their peers were at lower risk for diabetes onset. PMID:25717271

  2. Risk-Based Object Oriented Testing

    NASA Technical Reports Server (NTRS)

    Rosenberg, Linda H.; Stapko, Ruth; Gallo, Albert

    2000-01-01

    Software testing is a well-defined phase of the software development life cycle. Functional ("black box") testing and structural ("white box") testing are two methods of test case design commonly used by software developers. A lesser known testing method is risk-based testing, which takes into account the probability of failure of a portion of code as determined by its complexity. For object oriented programs, a methodology is proposed for identification of risk-prone classes. Risk-based testing is a highly effective testing technique that can be used to find and fix the most important problems as quickly as possible.
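
    One way to make the idea concrete (the scoring below is illustrative, not the methodology's exact metric set): classes are ranked for test effort by a simple likelihood-times-consequence score, with cyclomatic complexity standing in for the probability of failure.

```python
from dataclasses import dataclass

@dataclass
class ClassMetrics:
    name: str
    cyclomatic_complexity: int   # proxy for probability of failure
    severity: int                # 1 (cosmetic) .. 5 (mission-critical)

def risk_score(m: ClassMetrics) -> float:
    """Likelihood x consequence score used to order test effort (hypothetical normalization)."""
    likelihood = min(m.cyclomatic_complexity / 50.0, 1.0)
    return likelihood * m.severity

classes = [
    ClassMetrics("TelemetryParser", 42, 5),
    ClassMetrics("UiThemePicker", 8, 1),
    ClassMetrics("OrbitPropagator", 65, 4),
]
for m in sorted(classes, key=risk_score, reverse=True):
    print(f"{m.name:16s} risk={risk_score(m):.2f}")
```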

  3. Risk Classification and Risk-based Safety and Mission Assurance

    NASA Technical Reports Server (NTRS)

    Leitner, Jesse A.

    2014-01-01

    Recent activities to revamp and emphasize the need to streamline processes and activities for Class D missions across the agency have led to various interpretations of Class D, including the lumping of a variety of low-cost projects into Class D. Sometimes terms such as Class D minus are used. In this presentation, mission risk classifications will be traced to official requirements and definitions as a measure to ensure that projects and programs align with the guidance and requirements that are commensurate with their defined risk posture. As part of this, the full suite of risk classifications, formal and informal, will be defined, followed by an introduction to the new GPR 8705.4 that is currently under review. GPR 8705.4 lays out guidance for the mission success activities performed at Classes A-D for NPR 7120.5 projects as well as for projects not under NPR 7120.5. Furthermore, the trends in stepping from Class A into higher risk posture classifications will be discussed. The talk will conclude with a discussion about risk-based safety and mission assurance at GSFC.

  4. Biologically based, quantitative risk assessment of neurotoxicants.

    PubMed

    Slikker, W; Crump, K S; Andersen, M E; Bellinger, D

    1996-01-01

    The need for biologically based, quantitative risk assessment procedures for noncancer endpoints such as neurotoxicity has been discussed in reports by the United States Congress (Office of Technology Assessment, OTA), National Research Council (NRC), and a federal coordinating council. According to OTA, current attention and resources allocated to health risk assessment research are inadequate and not commensurate with its impact on public health and the economy. Methods to include continuous rather than dichotomous data for neurotoxicity endpoints, biomarkers of exposure and effects, and pharmacokinetic and mechanistic data have been proposed for neurotoxicity risk assessment but require further review and validation before acceptance. The purpose of this symposium was to examine procedures to enhance the risk assessment process for neurotoxicants and to discuss techniques to make the process more quantitative. Accordingly, a review of the currently used safety factor risk assessment approach for neurotoxicants is provided along with specific examples of how this process may be enhanced with the use of the benchmark dose approach. The importance of including physiologically based pharmacokinetic data in the risk assessment process is discussed and specific examples of this approach are presented for neurotoxicants. The roles of biomarkers of exposure and effect and of mechanistic information in the risk assessment process are also addressed. Finally, quantitative approaches with the use of continuous neurotoxicity data are demonstrated and the outcomes compared to those generated by currently used risk assessment procedures. PMID:8838636

  5. 12 CFR 652.70 - Risk-based capital level.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    12 CFR 652.70 (2010-01-01), ... CORPORATION FUNDING AND FISCAL AFFAIRS, Risk-Based Capital Requirements, § 652.70 Risk-based capital level: The risk-based capital level is the sum of the following amounts: (a) Credit and interest rate risk....

  6. Optimization of agricultural field workability predictions for improved risk management

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Risks introduced by weather variability are key considerations in agricultural production. The sensitivity of agriculture to weather variability is of special concern in the face of climate change. In particular, the availability of workable days is an important consideration in agricultural practic...

  7. Optimal Predator Risk Assessment by the Sonar-Jamming Arctiine Moth Bertholdia trigona

    PubMed Central

    Corcoran, Aaron J.; Wagner, Ryan D.; Conner, William E.

    2013-01-01

    Nearly all animals face a tradeoff between seeking food and mates and avoiding predation. Optimal escape theory holds that an animal confronted with a predator should only flee when benefits of flight (increased survival) outweigh the costs (energetic costs, lost foraging time, etc.). We propose a model for prey risk assessment based on the predator's stage of attack. Risk level should increase rapidly from when the predator detects the prey to when it commits to the attack. We tested this hypothesis using a predator – the echolocating bat – whose active biosonar reveals its stage of attack. We used a prey defense – clicking used for sonar jamming by the tiger moth Bertholdia trigona – that can be readily studied in the field and laboratory and is enacted simultaneously with evasive flight. We predicted that prey employ defenses soon after being detected and targeted, and that prey defensive thresholds discriminate between legitimate predatory threats and false threats where a nearby prey is attacked. Laboratory and field experiments using playbacks of ultrasound signals and naturally behaving bats, respectively, confirmed our predictions. Moths clicked soon after bats detected and targeted them. Also, B. trigona clicking thresholds closely matched predicted optimal thresholds for discriminating legitimate and false predator threats for bats using search and approach phase echolocation – the period when bats are searching for and assessing prey. To our knowledge, this is the first quantitative study to correlate the sensory stimuli that trigger defensive behaviors with measurements of signals provided by predators during natural attacks in the field. We propose theoretical models for explaining prey risk assessment depending on the availability of cues that reveal a predator's stage of attack. PMID:23671686

  8. Optimal predator risk assessment by the sonar-jamming arctiine moth Bertholdia trigona.

    PubMed

    Corcoran, Aaron J; Wagner, Ryan D; Conner, William E

    2013-01-01

    Nearly all animals face a tradeoff between seeking food and mates and avoiding predation. Optimal escape theory holds that an animal confronted with a predator should only flee when benefits of flight (increased survival) outweigh the costs (energetic costs, lost foraging time, etc.). We propose a model for prey risk assessment based on the predator's stage of attack. Risk level should increase rapidly from when the predator detects the prey to when it commits to the attack. We tested this hypothesis using a predator--the echolocating bat--whose active biosonar reveals its stage of attack. We used a prey defense--clicking used for sonar jamming by the tiger moth Bertholdia trigona--that can be readily studied in the field and laboratory and is enacted simultaneously with evasive flight. We predicted that prey employ defenses soon after being detected and targeted, and that prey defensive thresholds discriminate between legitimate predatory threats and false threats where a nearby prey is attacked. Laboratory and field experiments using playbacks of ultrasound signals and naturally behaving bats, respectively, confirmed our predictions. Moths clicked soon after bats detected and targeted them. Also, B. trigona clicking thresholds closely matched predicted optimal thresholds for discriminating legitimate and false predator threats for bats using search and approach phase echolocation--the period when bats are searching for and assessing prey. To our knowledge, this is the first quantitative study to correlate the sensory stimuli that trigger defensive behaviors with measurements of signals provided by predators during natural attacks in the field. We propose theoretical models for explaining prey risk assessment depending on the availability of cues that reveal a predator's stage of attack.

  9. Nonlinear Inertia Weighted Teaching-Learning-Based Optimization for Solving Global Optimization Problem.

    PubMed

    Wu, Zong-Sheng; Fu, Wei-Ping; Xue, Ru

    2015-01-01

    The teaching-learning-based optimization (TLBO) algorithm, proposed in recent years, simulates the teaching-learning phenomenon of a classroom to effectively solve global optimization of multidimensional, linear, and nonlinear problems over continuous spaces. In this paper, an improved teaching-learning-based optimization algorithm is presented, called the nonlinear inertia weighted teaching-learning-based optimization (NIWTLBO) algorithm. This algorithm introduces a nonlinear inertia weighted factor into the basic TLBO to control the memory rate of learners and uses a dynamic inertia weighted factor to replace the original random number in the teacher phase and learner phase. The proposed algorithm is tested on a number of benchmark functions, and its performance is compared against the basic TLBO and some other well-known optimization algorithms. The experimental results show that the proposed algorithm has a faster convergence rate and better performance than the basic TLBO and some of the other algorithms as well.

  10. Nonlinear Inertia Weighted Teaching-Learning-Based Optimization for Solving Global Optimization Problem

    PubMed Central

    Wu, Zong-Sheng; Fu, Wei-Ping; Xue, Ru

    2015-01-01

    The teaching-learning-based optimization (TLBO) algorithm, proposed in recent years, simulates the teaching-learning phenomenon of a classroom to effectively solve global optimization of multidimensional, linear, and nonlinear problems over continuous spaces. In this paper, an improved teaching-learning-based optimization algorithm is presented, called the nonlinear inertia weighted teaching-learning-based optimization (NIWTLBO) algorithm. This algorithm introduces a nonlinear inertia weighted factor into the basic TLBO to control the memory rate of learners and uses a dynamic inertia weighted factor to replace the original random number in the teacher phase and learner phase. The proposed algorithm is tested on a number of benchmark functions, and its performance is compared against the basic TLBO and some other well-known optimization algorithms. The experimental results show that the proposed algorithm has a faster convergence rate and better performance than the basic TLBO and some of the other algorithms as well. PMID:26421005
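
    A compact sketch of the modification (the nonlinear weight schedule and the way the weight enters both phases are assumptions made for illustration; only the general structure follows the abstract): a nonlinearly decaying inertia weight scales the learner's memory and replaces the uniform random numbers of basic TLBO in the teacher and learner phases.

```python
import numpy as np

def sphere(x):                                   # example benchmark function
    return float(np.sum(x ** 2))

def niwtlbo_step(pop, t, t_max, lb, ub, f=sphere, rng=np.random.default_rng()):
    """One NIWTLBO-style iteration over the population (minimization sketch)."""
    n, _ = pop.shape
    w = 1.0 - (t / t_max) ** 2                   # nonlinear inertia weight (assumed schedule)
    fitness = np.array([f(x) for x in pop])
    teacher = pop[np.argmin(fitness)]
    mean = pop.mean(axis=0)
    new_pop = pop.copy()
    for i in range(n):
        tf = rng.integers(1, 3)                  # teaching factor in {1, 2}, as in basic TLBO
        # teacher phase: weighted memory plus a weighted move toward the teacher
        cand = np.clip(w * pop[i] + w * (teacher - tf * mean), lb, ub)
        if f(cand) < f(new_pop[i]):
            new_pop[i] = cand
        # learner phase: interact with a random peer, again using the weight
        j = rng.integers(n)
        step = pop[j] - pop[i] if fitness[j] < fitness[i] else pop[i] - pop[j]
        cand = np.clip(w * pop[i] + w * step, lb, ub)
        if f(cand) < f(new_pop[i]):
            new_pop[i] = cand
    return new_pop

rng = np.random.default_rng(1)
pop = rng.uniform(-5.0, 5.0, size=(20, 3))
for t in range(100):
    pop = niwtlbo_step(pop, t, 100, -5.0, 5.0, rng=rng)
print("best value found:", min(sphere(x) for x in pop))
```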

  11. Optimal separable bases and molecular collisions

    SciTech Connect

    Poirier, L W

    1997-12-01

    A new methodology is proposed for the efficient determination of Green's functions and eigenstates for quantum systems of two or more dimensions. For a given Hamiltonian, the best possible separable approximation is obtained from the set of all Hilbert space operators. It is shown that this determination itself, as well as the solution of the resultant approximation, are problems of reduced dimensionality for most systems of physical interest. Moreover, the approximate eigenstates constitute the optimal separable basis, in the sense of self-consistent field theory. These distorted waves give rise to a Born series with optimized convergence properties. Analytical results are presented for an application of the method to the two-dimensional shifted harmonic oscillator system. The primary interest, however, is quantum reactive scattering in molecular systems. For numerical calculations, the use of distorted waves corresponds to numerical preconditioning. The new methodology therefore gives rise to an optimized preconditioning scheme for the efficient calculation of reactive and inelastic scattering amplitudes, especially at intermediate energies. This scheme is particularly suited to discrete variable representations (DVRs) and iterative sparse matrix methods commonly employed in such calculations. State-to-state and cumulative reactive scattering results obtained via the optimized preconditioner are presented for the two-dimensional collinear H + H2 → H2 + H system. Computational time and memory requirements for this system are drastically reduced in comparison with other methods, and results are obtained for previously prohibitive energy regimes.

  12. Optimal guidance law for cooperative attack of multiple missiles based on optimal control theory

    NASA Astrophysics Data System (ADS)

    Sun, Xiao; Xia, Yuanqing

    2012-08-01

    This article considers the problem of optimal guidance laws for cooperative attack of multiple missiles based on the optimal control theory. New guidance laws are presented such that multiple missiles attack a single target simultaneously. Simulation results show the effectiveness of the proposed algorithms.

  13. An approximation based global optimization strategy for structural synthesis

    NASA Technical Reports Server (NTRS)

    Sepulveda, A. E.; Schmit, L. A.

    1991-01-01

    A global optimization strategy for structural synthesis based on approximation concepts is presented. The methodology involves the solution of a sequence of highly accurate approximate problems using a global optimization algorithm. The global optimization algorithm implemented consists of a branch and bound strategy based on the interval evaluation of the objective function and constraint functions, combined with a local feasible directions algorithm. The approximate design optimization problems are constructed using first order approximations of selected intermediate response quantities in terms of intermediate design variables. Some numerical results for example problems are presented to illustrate the efficacy of the design procedure set forth.

  14. CFD Optimization on Network-Based Parallel Computer System

    NASA Technical Reports Server (NTRS)

    Cheung, Samson H.; VanDalsem, William (Technical Monitor)

    1994-01-01

    Combining multiple engineering workstations into a network-based heterogeneous parallel computer allows the application of aerodynamic optimization with advanced computational fluid dynamics codes, which is computationally expensive on a mainframe supercomputer. This paper introduces a nonlinear quasi-Newton optimizer designed for this network-based heterogeneous parallel computer, running on software called Parallel Virtual Machine. The paper describes the methodology behind coupling a Parabolized Navier-Stokes flow solver to the nonlinear optimizer. This parallel optimization package has been applied to reduce the wave drag of a body of revolution and a wing/body configuration, with resulting drag reductions of 5% to 6%.

  15. Defining a region of optimization based on engine usage data

    SciTech Connect

    Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna

    2015-08-04

    Methods and systems for engine control optimization are provided. One or more operating conditions of a vehicle engine are detected. A value for each of a plurality of engine control parameters is determined based on the detected one or more operating conditions of the vehicle engine. A range of the most commonly detected operating conditions of the vehicle engine is identified and a region of optimization is defined based on the range of the most commonly detected operating conditions of the vehicle engine. The engine control optimization routine is initiated when the one or more operating conditions of the vehicle engine are within the defined region of optimization.
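
    A small sketch of the idea (the bin edges, coverage fraction, and the "most commonly detected" criterion are assumptions): bin the logged operating conditions, keep the bins that cover most of the usage, and only trigger the optimization routine when the current operating point falls inside that region.

```python
import numpy as np

def region_of_optimization(speed_rpm, load_frac, coverage=0.8, bins=20):
    """Return a boolean mask over a 2-D histogram marking the most commonly visited bins."""
    hist, s_edges, l_edges = np.histogram2d(speed_rpm, load_frac, bins=bins)
    order = np.argsort(hist, axis=None)[::-1]              # most-visited bins first
    cum = np.cumsum(hist.ravel()[order]) / hist.sum()
    keep = np.zeros(hist.size, dtype=bool)
    keep[order[:np.searchsorted(cum, coverage) + 1]] = True
    return keep.reshape(hist.shape), s_edges, l_edges

def in_region(speed, load, region, s_edges, l_edges):
    """True if the current operating point lies inside the defined region of optimization."""
    i = np.clip(np.searchsorted(s_edges, speed) - 1, 0, region.shape[0] - 1)
    j = np.clip(np.searchsorted(l_edges, load) - 1, 0, region.shape[1] - 1)
    return bool(region[i, j])

# Hypothetical logged engine usage data: speed (rpm) and load fraction.
rng = np.random.default_rng(0)
speeds = rng.normal(2200, 300, 5000)
loads = np.clip(rng.normal(0.5, 0.15, 5000), 0, 1)
region, se, le = region_of_optimization(speeds, loads)
print(in_region(2250, 0.52, region, se, le))   # likely True: run the optimization routine here
```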

  16. Shape optimization for contact problems based on isogeometric analysis

    NASA Astrophysics Data System (ADS)

    Horn, Benjamin; Ulbrich, Stefan

    2016-08-01

    We consider the shape optimization for mechanical connectors. To avoid the gap between the representation in CAD systems and the finite element simulation used by mathematical optimization, we choose an isogeometric approach for the solution of the contact problem within the optimization method. This leads to a shape optimization problem governed by an elastic contact problem. We handle the contact conditions using the mortar method and solve the resulting contact problem with a semismooth Newton method. The optimization problem is nonconvex and nonsmooth due to the contact conditions. To reduce the number of simulations, we use a derivative based optimization method. With the adjoint approach the design derivatives can be calculated efficiently. The resulting optimization problem is solved with a modified Bundle Trust Region algorithm.

  17. Technical fixes and Climate Change: Optimizing for Risks and Consequences

    SciTech Connect

    Rasch, Philip J.

    2010-09-16

    Scientists and society in general are becoming increasingly concerned about the risks of climate change from the emission of greenhouse gases [IPCC, 2007]. Yet emissions continue to increase [Raupach et al., 2007], and reducing them soon enough to avoid large and undesirable impacts requires a near revolutionary global transformation of energy and transportation systems [Hoffert et al., 1998]. The size of the transformation and the lack of an effective societal response have motivated some to explore other quite controversial strategies to mitigate some of the planetary consequences of these emissions.

  18. Optimal trajectories based on linear equations

    NASA Technical Reports Server (NTRS)

    Carter, Thomas E.

    1990-01-01

    The principal results of a recent theory of fuel-optimal space trajectories for linear differential equations are presented. Both impulsive and bounded-thrust problems are treated. A new form of the Lawden primer vector is found that is identical for both problems. For this reason, starting iterates from the solution of the impulsive problem are highly effective in the solution of the two-point boundary-value problem associated with bounded thrust. These results were applied to the problem of fuel-optimal maneuvers of a spacecraft near a satellite in circular orbit using the Clohessy-Wiltshire equations. For this case, two-point boundary-value problems were solved using a microcomputer, and optimal trajectory shapes displayed. The results of this theory can also be applied if the satellite is in an arbitrary Keplerian orbit through the use of the Tschauner-Hempel equations. A new form of the solution of these equations has been found that is identical for elliptical, parabolic, and hyperbolic orbits except in the way that a certain integral is evaluated. For elliptical orbits this integral is evaluated through the use of the eccentric anomaly. An analogous evaluation is performed for hyperbolic orbits.

  19. An optimization method for condition based maintenance of aircraft fleet considering prognostics uncertainty.

    PubMed

    Feng, Qiang; Chen, Yiran; Sun, Bo; Li, Songjie

    2014-01-01

    An optimization method for condition based maintenance (CBM) of an aircraft fleet considering prognostics uncertainty is proposed. The CBM and dispatch process of the aircraft fleet is analyzed first, and the alternative strategy sets for a single aircraft are given. Then, the optimization problem of fleet CBM with lower maintenance cost and dispatch risk is translated into a combinatorial optimization problem over single-aircraft strategies. The remaining useful life (RUL) distribution of the key line replaceable module (LRM) is transformed into the failure probability of the aircraft, and the fleet health status matrix is established. The calculation method for mission costs and risks based on the health status matrix and the maintenance matrix is given. Further, an optimization method for fleet dispatch and CBM under acceptable risk is proposed based on an improved genetic algorithm. Finally, a fleet of 10 aircraft is studied to verify the proposed method. The results show that the method can realize optimization and control of the aircraft fleet oriented to mission success.

  20. A seismic risk for the lunar base

    NASA Technical Reports Server (NTRS)

    Oberst, Juergen; Nakamura, Yosio

    1992-01-01

    Shallow moonquakes, which were discovered during observations following the Apollo lunar landing missions, may pose a threat to lunar surface operations. The nature of these moonquakes is similar to that of intraplate earthquakes, which include infrequent but destructive events. Therefore, there is a need for detailed study to assess the possible seismic risk before establishing a lunar base.

  1. Ant colony optimization-based firewall anomaly mitigation engine.

    PubMed

    Penmatsa, Ravi Kiran Varma; Vatsavayi, Valli Kumari; Samayamantula, Srinivas Kumar

    2016-01-01

    A firewall is the most essential component of network perimeter security. Due to human error and the involvement of multiple administrators in configuring firewall rules, there exist common anomalies in firewall rulesets such as Shadowing, Generalization, Correlation, and Redundancy. There is a need for research on efficient ways of resolving such anomalies. The challenge is also to see that the reordered or resolved ruleset conforms to the organization's framed security policy. This study proposes an ant colony optimization (ACO)-based anomaly resolution and reordering of firewall rules called ACO-based firewall anomaly mitigation engine. Modified strategies are also introduced to automatically detect these anomalies and to minimize manual intervention of the administrator. Furthermore, an adaptive reordering strategy is proposed to aid faster reordering when a new rule is appended. The proposed approach was tested with different firewall policy sets. The results were found to be promising in terms of the number of conflicts resolved, with minimal availability loss and marginal security risk. This work demonstrated the application of a metaheuristic search technique, ACO, in improving the performance of a packet-filter firewall with respect to mitigating anomalies in the rules, and at the same time demonstrated conformance to the security policy. PMID:27441151
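
    To make one of the anomaly types concrete, here is a small sketch that detects shadowing under a simplified matching model (the rule fields and the wildcard-only coverage test are assumptions; the ACO-based reordering itself is not shown): a rule is shadowed when an earlier rule with a different action already matches everything it would match.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    src: str      # e.g. "10.0.0.0/8" or "*"
    dst: str
    port: str     # e.g. "80" or "*"
    action: str   # "accept" or "deny"

def field_subset(a: str, b: str) -> bool:
    """True if field value a is covered by field value b (simplified: '*' covers all)."""
    return b == "*" or a == b

def shadowed(rules):
    """Yield (i, j) pairs where rule i is shadowed by an earlier rule j with a different action."""
    for i, r in enumerate(rules):
        for j in range(i):
            p = rules[j]
            if (p.action != r.action
                    and field_subset(r.src, p.src)
                    and field_subset(r.dst, p.dst)
                    and field_subset(r.port, p.port)):
                yield i, j
                break

ruleset = [
    Rule("*", "10.1.1.5", "*", "deny"),
    Rule("192.168.1.0/24", "10.1.1.5", "80", "accept"),   # shadowed by rule 0
]
print(list(shadowed(ruleset)))   # -> [(1, 0)]
```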

  2. Optimization Research of Generation Investment Based on Linear Programming Model

    NASA Astrophysics Data System (ADS)

    Wu, Juan; Ge, Xueqian

    Linear programming is an important branch of operational research and a mathematical method that helps people carry out scientific management. GAMS is an advanced simulation and optimization modeling language that combines complex mathematical programming formulations, such as linear programming (LP), nonlinear programming (NLP), and mixed-integer programming (MIP), with system simulation. In this paper, based on a linear programming model, the optimized investment decision-making of generation is simulated and analyzed. Finally, the optimal installed capacity of the power plants and the final total cost are obtained, which provides a rational decision-making basis for optimized investments.
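
    The paper's model is built in GAMS; as an illustration of the same kind of LP formulation (the technologies, cost coefficients, availabilities, demand, and build limits below are hypothetical numbers), a capacity-investment LP can be sketched with scipy:

```python
import numpy as np
from scipy.optimize import linprog

# Decision variables: installed capacity (MW) of coal, wind, and gas plants.
# Objective: minimize total cost per MW of installed capacity (hypothetical coefficients, M$/MW).
cost = np.array([1.8, 1.2, 1.5])

# Constraint: availability-weighted capacity must meet a peak demand of 1000 MW.
availability = np.array([0.90, 0.35, 0.85])
A_ub = [-availability]          # -a.x <= -demand  is equivalent to  a.x >= demand
b_ub = [-1000.0]

# Per-technology build limits (MW).
bounds = [(0, 800), (0, 1200), (0, 600)]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print("optimal capacities (MW):", res.x, "| total cost (M$):", res.fun)
```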

  3. Breast Cancer-Related Arm Lymphedema: Incidence Rates, Diagnostic Techniques, Optimal Management and Risk Reduction Strategies

    SciTech Connect

    Shah, Chirag; Vicini, Frank A.

    2011-11-15

    As more women survive breast cancer, long-term toxicities affecting their quality of life, such as lymphedema (LE) of the arm, gain importance. Although numerous studies have attempted to determine incidence rates, identify optimal diagnostic tests, enumerate efficacious treatment strategies and outline risk reduction guidelines for breast cancer-related lymphedema (BCRL), few groups have consistently agreed on any of these issues. As a result, standardized recommendations are still lacking. This review will summarize the latest data addressing all of these concerns in order to provide patients and health care providers with optimal, contemporary recommendations. Published incidence rates for BCRL vary substantially with a range of 2-65% based on surgical technique, axillary sampling method, radiation therapy fields treated, and the use of chemotherapy. Newer clinical assessment tools can potentially identify BCRL in patients with subclinical disease with prospective data suggesting that early diagnosis and management with noninvasive therapy can lead to excellent outcomes. Multiple therapies exist with treatments defined by the severity of BCRL present. Currently, the standard of care for BCRL in patients with significant LE is complex decongestive physiotherapy (CDP). Contemporary data also suggest that a multidisciplinary approach to the management of BCRL should begin prior to definitive treatment for breast cancer employing patient-specific surgical, radiation therapy, and chemotherapy paradigms that limit risks. Further, prospective clinical assessments before and after treatment should be employed to diagnose subclinical disease. In those patients who require aggressive locoregional management, prophylactic therapies and the use of CDP can help reduce the long-term sequelae of BCRL.

  4. Credit risk evaluation based on social media.

    PubMed

    Yang, Yang; Gu, Jing; Zhou, Zongfang

    2016-07-01

    Social media has been playing an increasingly important role in the sharing of individuals' opinions on many financial issues, including credit risk in investment decisions. This paper analyzes whether these opinions, which are transmitted through social media, can accurately predict enterprises' future credit risk. We consider financial-statement-oriented evaluation results based on logit and probit approaches as the benchmarks. We then conduct textual analysis to retrieve both posts and their corresponding commentaries published on two of the most popular social media platforms for financial investors in China. Professional advice from financial analysts is also investigated in this paper. We surprisingly find that the opinions extracted from both posts and commentaries surpass the opinions of analysts in terms of credit risk prediction. PMID:26739372

  6. Hybrid and adaptive meta-model-based global optimization

    NASA Astrophysics Data System (ADS)

    Gu, J.; Li, G. Y.; Dong, Z.

    2012-01-01

    As an efficient and robust technique for global optimization, meta-model-based search methods have been increasingly used in solving complex and computation-intensive design optimization problems. In this work, a hybrid and adaptive meta-model-based global optimization method that can automatically select appropriate meta-modelling techniques during the search process to improve search efficiency is introduced. The search initially applies three representative meta-models concurrently. The search then progresses towards a better-performing model by selecting sample data points adaptively, according to the calculated values of the three meta-models, to improve modelling accuracy and search efficiency. To demonstrate the superior performance of the new algorithm over existing search methods, the new method is tested using various benchmark global optimization problems and applied to a real industrial design optimization example involving vehicle crash simulation. The method is particularly suitable for design problems involving computation-intensive, black-box analyses and simulations.
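
    A toy one-dimensional sketch of the adaptive idea (the three surrogate choices and the disagreement criterion are assumptions, not the paper's meta-models): fit several cheap surrogates to the sampled points and place the next expensive evaluation where the surrogates disagree most.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def expensive(x):                          # stand-in for a computation-intensive simulation
    return np.sin(3 * x) + 0.3 * x ** 2

X = np.array([0.0, 1.0, 2.5, 4.0])         # initial sample points
y = expensive(X)
grid = np.linspace(0.0, 4.0, 200)

for _ in range(5):
    order = np.argsort(X)
    # three cheap surrogates fitted to the same data
    poly = np.polyval(np.polyfit(X, y, 2), grid)
    rbf = RBFInterpolator(X[:, None], y)(grid[:, None])
    lin = np.interp(grid, X[order], y[order])
    # adaptive sampling: evaluate the expensive model where the surrogates disagree most
    disagreement = np.std(np.vstack([poly, rbf, lin]), axis=0)
    unsampled = np.min(np.abs(grid[:, None] - X[None, :]), axis=1) > 1e-6
    x_new = grid[unsampled][np.argmax(disagreement[unsampled])]
    X = np.append(X, x_new)
    y = np.append(y, expensive(x_new))

print("approximate minimizer:", X[np.argmin(y)])
```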

  7. RISK AND RISK ASSESSMENT IN WATER-BASED RECREATION

    EPA Science Inventory

    The great number of individuals using recreational water resources presents a challenge with regard to protecting the health of these recreationists. Risk assessment provides a framework for characterizing the risk associated with exposure to microbial hazards and for managing r...

  8. Risk-based Classification of Incidents

    NASA Technical Reports Server (NTRS)

    Greenwell, William S.; Knight, John C.; Strunk, Elisabeth A.

    2003-01-01

    As the penetration of software into safety-critical systems progresses, accidents and incidents involving software will inevitably become more frequent. Identifying lessons from these occurrences and applying them to existing and future systems is essential if recurrences are to be prevented. Unfortunately, investigative agencies do not have the resources to fully investigate every incident under their jurisdictions and domains of expertise and thus must prioritize certain occurrences when allocating investigative resources. In the aviation community, most investigative agencies prioritize occurrences based on the severity of their associated losses, allocating more resources to accidents resulting in injury to passengers or extensive aircraft damage. We argue that this scheme is inappropriate because it undervalues incidents whose recurrence could have a high potential for loss while overvaluing fairly straightforward accidents involving accepted risks. We then suggest a new strategy for prioritizing occurrences based on the risk arising from incident recurrence.
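
    The proposed prioritization can be sketched as an expected-loss ranking (the numbers and scoring rule below are illustrative only): occurrences are ranked by the probability that the underlying cause recurs times the loss a recurrence could produce, rather than by the loss already incurred.

```python
from dataclasses import dataclass

@dataclass
class Occurrence:
    ident: str
    loss_incurred: float        # damage already realized ($M)
    p_recurrence: float         # estimated probability the causal factor recurs
    loss_if_recurs: float       # plausible loss of a recurrence ($M)

def recurrence_risk(o: Occurrence) -> float:
    return o.p_recurrence * o.loss_if_recurs

incidents = [
    Occurrence("hard-landing accident", loss_incurred=40.0, p_recurrence=0.01, loss_if_recurs=40.0),
    Occurrence("mode-confusion incident", loss_incurred=0.0, p_recurrence=0.20, loss_if_recurs=300.0),
]

# Severity-based ranking would put the accident first; risk-based ranking reverses the order.
for o in sorted(incidents, key=recurrence_risk, reverse=True):
    print(f"{o.ident:25s} expected recurrence loss = {recurrence_risk(o):.1f}")
```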

  9. Experimental eavesdropping based on optimal quantum cloning.

    PubMed

    Bartkiewicz, Karol; Lemr, Karel; Cernoch, Antonín; Soubusta, Jan; Miranowicz, Adam

    2013-04-26

    The security of quantum cryptography is guaranteed by the no-cloning theorem, which implies that an eavesdropper copying transmitted qubits in unknown states causes their disturbance. Nevertheless, in real cryptographic systems some level of disturbance has to be allowed to cover, e.g., transmission losses. An eavesdropper can attack such systems by replacing a noisy channel by a better one and by performing approximate cloning of transmitted qubits which disturb them but below the noise level assumed by legitimate users. We experimentally demonstrate such symmetric individual eavesdropping on the quantum key distribution protocols of Bennett and Brassard (BB84) and the trine-state spherical code of Renes (R04) with two-level probes prepared using a recently developed photonic multifunctional quantum cloner [Lemr et al., Phys. Rev. A 85, 050307(R) (2012)]. We demonstrated that our optimal cloning device with high-success rate makes the eavesdropping possible by hiding it in usual transmission losses. We believe that this experiment can stimulate the quest for other operational applications of quantum cloning.

  11. A Risk-Based Sensor Placement Methodology

    SciTech Connect

    Lee, Ronald W; Kulesz, James J

    2006-08-01

    A sensor placement methodology is proposed to solve the problem of optimal location of sensors or detectors to protect population against the exposure to and effects of known and/or postulated chemical, biological, and/or radiological threats. Historical meteorological data are used to characterize weather conditions as wind speed and direction pairs with the percentage of occurrence of the pairs over the historical period. The meteorological data drive atmospheric transport and dispersion modeling of the threats, the results of which are used to calculate population at risk against standard exposure levels. Sensor locations are determined via a dynamic programming algorithm where threats captured or detected by sensors placed in prior stages are removed from consideration in subsequent stages. Moreover, the proposed methodology provides a quantification of the marginal utility of each additional sensor or detector. Thus, the criterion for halting the iterative process can be the number of detectors available, a threshold marginal utility value, or the cumulative detection of a minimum factor of the total risk value represented by all threats.
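
    A small sketch of the staged logic, using a greedy simplification of the dynamic-programming idea (the candidate sites, detectable-threat sets, risk values, and the utility threshold are hypothetical): at each stage the sensor covering the most remaining population risk is placed, the threats it detects are removed, and placement stops when the marginal utility falls below a threshold.

```python
def place_sensors(candidates, threat_risk, min_marginal_utility=5.0, max_sensors=10):
    """Greedy staged placement; `candidates` maps site -> set of detectable threat ids,
    `threat_risk` maps threat id -> population-at-risk value."""
    remaining = dict(threat_risk)
    chosen = []
    for _ in range(max_sensors):
        best_site, best_gain = None, 0.0
        for site, covered in candidates.items():
            if site in chosen:
                continue
            gain = sum(remaining.get(t, 0.0) for t in covered)
            if gain > best_gain:
                best_site, best_gain = site, gain
        if best_site is None or best_gain < min_marginal_utility:
            break                      # marginal utility too small: stop adding sensors
        chosen.append(best_site)
        for t in candidates[best_site]:
            remaining.pop(t, None)     # threats detected at earlier stages drop out
    return chosen

sites = {"north-gate": {"t1", "t2"}, "stack-3": {"t2", "t3"}, "river-intake": {"t4"}}
risk = {"t1": 120.0, "t2": 300.0, "t3": 80.0, "t4": 3.0}
print(place_sensors(sites, risk))      # -> ['north-gate', 'stack-3'] (t4 not worth a sensor)
```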

  12. Optimal policy for value-based decision-making

    PubMed Central

    Tajima, Satohiro; Drugowitsch, Jan; Pouget, Alexandre

    2016-01-01

    For decades now, normative theories of perceptual decisions, and their implementation as drift diffusion models, have driven and significantly improved our understanding of human and animal behaviour and the underlying neural processes. While similar processes seem to govern value-based decisions, we still lack the theoretical understanding of why this ought to be the case. Here, we show that, similar to perceptual decisions, drift diffusion models implement the optimal strategy for value-based decisions. Such optimal decisions require the models' decision boundaries to collapse over time, and to depend on the a priori knowledge about reward contingencies. Diffusion models only implement the optimal strategy under specific task assumptions, and cease to be optimal once we start relaxing these assumptions, by, for example, using non-linear utility functions. Our findings thus provide the much-needed theory for value-based decisions, explain the apparent similarity to perceptual decisions, and predict conditions under which this similarity should break down. PMID:27535638
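
    A minimal simulation sketch (the drift scaling, noise level, and exponential collapse schedule are assumptions, not the paper's derived policy): evidence about the value difference accumulates as a drift-diffusion process, and the decision boundaries collapse over time, as the optimal-policy result requires.

```python
import numpy as np

def ddm_choice(value_a, value_b, dt=0.01, noise=1.0, b0=1.5, tau=2.0,
               t_max=5.0, rng=np.random.default_rng()):
    """Simulate one value-based decision with collapsing boundaries.
    Returns (choice, reaction_time); choice is 'A', 'B', or None if no commit by t_max."""
    drift = value_a - value_b                 # evidence favours the higher-valued option
    x, t = 0.0, 0.0
    while t < t_max:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
        bound = b0 * np.exp(-t / tau)         # boundaries collapse over time (assumed form)
        if x >= bound:
            return "A", t
        if x <= -bound:
            return "B", t
    return None, t

choices = [ddm_choice(0.6, 0.4)[0] for _ in range(1000)]
print("P(choose A) approx.", choices.count("A") / 1000)
```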

  13. Game theory and risk-based leveed river system planning with noncooperation

    NASA Astrophysics Data System (ADS)

    Hui, Rui; Lund, Jay R.; Madani, Kaveh

    2016-01-01

    Optimal risk-based levee designs are usually developed for economic efficiency. However, in river systems with multiple levees, the planning and maintenance of different levees are controlled by different agencies or groups. For example, along many rivers, levees on opposite riverbanks constitute a simple leveed river system with each levee designed and controlled separately. Collaborative planning of the two levees can be economically optimal for the whole system. Independent and self-interested landholders on opposite riversides often are willing to separately determine their individual optimal levee plans, resulting in a less efficient leveed river system from an overall society-wide perspective (the tragedy of commons). We apply game theory to simple leveed river system planning where landholders on each riverside independently determine their optimal risk-based levee plans. Outcomes from noncooperative games are analyzed and compared with the overall economically optimal outcome, which minimizes net flood cost system-wide. The system-wide economically optimal solution generally transfers residual flood risk to the lower-valued side of the river, but is often impractical without compensating for flood risk transfer to improve outcomes for all individuals involved. Such compensation can be determined and implemented with landholders' agreements on collaboration to develop an economically optimal plan. By examining iterative multiple-shot noncooperative games with reversible and irreversible decisions, the costs of myopia for the future in making levee planning decisions show the significance of considering the externalities and evolution path of dynamic water resource problems to improve decision-making.
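
    The contrast between noncooperative and system-wide planning can be sketched with a stylized two-sided model (all functional forms and numbers are assumptions; scipy handles the one-dimensional best responses and the joint minimization): raising one levee lowers that side's flood probability but transfers risk to the opposite side, so independent best responses differ from the joint optimum.

```python
import numpy as np
from scipy.optimize import minimize, minimize_scalar

# Stylized cost model: expected annual cost = construction cost + damage x flood probability.
def expected_cost(h_own, h_other, damage, build_cost=0.5, scale=2.0, transfer=0.15):
    p_flood = np.exp(-h_own / scale) * (1.0 + transfer * h_other)   # risk transferred by the other levee
    return build_cost * h_own + damage * p_flood

def best_response(h_other, damage):
    res = minimize_scalar(lambda h: expected_cost(h, h_other, damage),
                          bounds=(0.0, 10.0), method="bounded")
    return res.x

# Noncooperative play: each riverside repeatedly best-responds to the other's levee height.
h_a, h_b = 0.0, 0.0
for _ in range(50):
    h_a = best_response(h_b, damage=20.0)      # higher-valued side
    h_b = best_response(h_a, damage=5.0)       # lower-valued side
cost_nc = expected_cost(h_a, h_b, 20.0) + expected_cost(h_b, h_a, 5.0)

# System-wide optimum: choose both heights jointly to minimize total expected cost.
res = minimize(lambda h: expected_cost(h[0], h[1], 20.0) + expected_cost(h[1], h[0], 5.0),
               x0=[1.0, 1.0], bounds=[(0.0, 10.0), (0.0, 10.0)])
print("noncooperative heights:", round(h_a, 2), round(h_b, 2), "total cost:", round(cost_nc, 2))
print("system-optimal heights:", np.round(res.x, 2), "total cost:", round(res.fun, 2))
```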

  14. Two-level optimization of composite wing structures based on panel genetic optimization

    NASA Astrophysics Data System (ADS)

    Liu, Boyang

    load. The resulting response surface is used for wing-level optimization. In general, complex composite structures consist of several laminates. A common problem in the design of such structures is that some plies in the adjacent laminates terminate in the boundary between the laminates. These discontinuities may cause stress concentrations and may increase manufacturing difficulty and cost. We developed measures of continuity of two adjacent laminates. We studied tradeoffs between weight and continuity through a simple composite wing design. Finally, we compared the two-level optimization to a single-level optimization based on flexural lamination parameters. The single-level optimization is efficient and feasible for a wing consisting of unstiffened panels.

  15. Joint global optimization of tomographic data based on particle swarm optimization and decision theory

    NASA Astrophysics Data System (ADS)

    Paasche, H.; Tronicke, J.

    2012-04-01

    In many near surface geophysical applications, multiple tomographic data sets are routinely acquired to explore subsurface structures and parameters. Linking the model generation process of multi-method geophysical data sets can significantly reduce ambiguities in geophysical data analysis and model interpretation. Most geophysical inversion approaches rely on local search optimization methods used to find an optimal model in the vicinity of a user-given starting model. The final solution may critically depend on the initial model. Alternatively, global optimization (GO) methods have been used to invert geophysical data. They explore the solution space in more detail and determine the optimal model independently from the starting model. Additionally, they can be used to find sets of optimal models allowing a further analysis of model parameter uncertainties. Here we employ particle swarm optimization (PSO) to realize the global optimization of tomographic data. PSO is an emergent method based on swarm intelligence, characterized by fast and robust convergence towards optimal solutions. The fundamental principle of PSO is inspired by nature, since the algorithm mimics the behavior of a flock of birds searching for food in a search space. In PSO, a number of particles cruise a multi-dimensional solution space striving to find optimal model solutions explaining the acquired data. The particles communicate their positions and success and direct their movement according to the position of the currently most successful particle of the swarm. The success of a particle, i.e., the quality of the model it has currently found, must be uniquely quantifiable to identify the swarm leader. When jointly inverting disparate data sets, the optimization solution has to satisfy multiple optimization objectives, at least one for each data set. Unique determination of the most successful particle currently leading the swarm is not possible. Instead, only statements about the Pareto
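
    The core PSO loop described above can be sketched as follows. In the joint-inversion setting the objective would be the (possibly multi-objective) data misfit of a candidate tomographic model; here a toy single-objective misfit function stands in, and all algorithm parameters are generic choices rather than the authors' settings.

```python
import numpy as np

def pso_minimize(objective, dim, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0), seed=0):
    """Minimal single-objective particle swarm optimizer.

    In a tomographic joint inversion, `objective` would be the combined data
    misfit of a candidate model; here any scalar misfit function will do.
    """
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pos = rng.uniform(lo, hi, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()
    pbest_val = np.array([objective(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()

    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Toy stand-in for a data-misfit function (global minimum at the origin).
best, best_val = pso_minimize(lambda m: np.sum(m ** 2) + 1.0, dim=4)
print(best, best_val)
```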

  16. Put risk-based remediation to work

    SciTech Connect

    Johl, C.J.; Feldman, L.; Rafferty, M.T.

    1995-09-01

    Risk-based site cleanups are gaining prominence in environmental remediation. In particular, the "brownfields" program in the US--designed to promote the redevelopment of contaminated industrial sites rather than the development of pristine sites--is bringing this new remediation approach to the forefront on a national basis. The traditional approach to remediating a contaminated site is dubbed the remedial investigation and feasibility study (RI-FS) approach. Using an RI-FS approach, site operators and environmental consultants conduct a complete site characterization, using extensive air, water and soil sampling, and then evaluate all potential remediation alternatives. In many cases, the traditional remediation goal has been to return contaminant levels to background or "non-detect" levels--with little or no regard to the potential future use of the site. However, with cleanup costs on the rise, and a heightened awareness of the "how clean is clean" debate, many are beginning to view the RI-FS approach as excessive. By comparison, the goal for a focused, risk-based site remediation is to protect human health and the environment in a manner that is consistent with the planned use of the site. Compared to a standard RI-FS cleanup, the newer method can save time and money by prioritizing site-restoration activities based on risk analysis. A comparison of the two approaches for metals-laden soil is presented.

  17. Monte Carlo vs. Pencil Beam based optimization of stereotactic lung IMRT

    PubMed Central

    2009-01-01

    Background The purpose of the present study is to compare finite size pencil beam (fsPB) and Monte Carlo (MC) based optimization of lung intensity-modulated stereotactic radiotherapy (lung IMSRT). Materials and methods A fsPB and a MC algorithm as implemented in a biological IMRT planning system were validated by film measurements in a static lung phantom. Then, they were applied for static lung IMSRT planning based on three different geometrical patient models (one phase static CT, density overwrite one phase static CT, average CT) of the same patient. Both 6 and 15 MV beam energies were used. The resulting treatment plans were compared by how well they fulfilled the prescribed optimization constraints both for the dose distributions calculated on the static patient models and for the accumulated dose, recalculated with MC on each of 8 CTs of a 4DCT set. Results In the phantom measurements, the MC dose engine showed discrepancies < 2%, while the fsPB dose engine showed discrepancies of up to 8% in the presence of lateral electron disequilibrium in the target. In the patient plan optimization, this translates into violations of organ at risk constraints and unpredictable target doses for the fsPB optimized plans. For the 4D MC recalculated dose distribution, MC optimized plans always underestimate the target doses, but the organ at risk doses were comparable. The results depend on the static patient model, and the smallest discrepancy was found for the MC optimized plan on the density overwrite one phase static CT model. Conclusions It is feasible to employ the MC dose engine for optimization of lung IMSRT and the plans are superior to fsPB. Use of static patient models introduces a bias in the MC dose distribution compared to the 4D MC recalculated dose, but this bias is predictable and therefore MC based optimization on static patient models is considered safe. PMID:20003380

  18. Simulation of Evapotranspiration using an Optimality-based Ecohydrological Model

    NASA Astrophysics Data System (ADS)

    Chen, Lajiao

    2014-05-01

    Accurate estimation of evapotranspiration (ET) is essential in understanding the effect of climate change and human activities on ecosystems and water resources. As an important tool for ET estimation, most traditional hydrological or ecohydrological models treat ET as a physical process controlled by energy, vapor pressure and turbulence. This is at times questionable, as transpiration, the major component of ET, is a biological activity closely linked to photosynthesis through stomatal conductance. Optimality-based ecohydrological models consider the mutual interaction of ET and photosynthesis based on an optimality principle. However, as a rising generation of ecohydrological models, there are so far only a few applications of optimality-based models in different ecosystems. The ability and reliability of this kind of model need to be validated in more ecosystems. The objective of this study is to validate the optimality hypothesis for a water-limited ecosystem. To achieve this, the study applied the optimality-based Vegetation Optimality Model (VOM) to simulate ET and its components. The model is applied in a semiarid watershed. The simulated ET and soil water were compared with long-term measurement data at the Kendall and Lucky Hills sites in the watershed. The results showed that the temporal variations of simulated ET and soil water are in good agreement with observed data. The temporal dynamics of soil evaporation and transpiration and their response to precipitation events are well captured by the model. This leads to the conclusion that the optimality-based ecohydrological model is a potential approach to simulating ET.

  19. Trading risk and performance for engineering design optimization using multifidelity analyses

    NASA Astrophysics Data System (ADS)

    Rajnarayan, Dev Gorur

    Computers pervade our lives today: from communication to calculation, their influence percolates many spheres of our existence. With continuing advances in computing, simulations are becoming increasingly complex and accurate. Powerful high-fidelity simulations mimic and predict a variety of real-life scenarios, with applications ranging from entertainment to engineering. The most accurate of such engineering simulations come at a high cost in terms of computing resources and time. Engineers use such simulations to predict the real-world performance of products they design; that is, they use them for analysis. Needless to say, the emphasis is on accuracy of the prediction. For such analysis, one would like to use the most accurate simulation available, and such a simulation is likely to be at the limits of available computing power, quite independently of advances in computing. In engineering design, however, the goal is somewhat different. Engineering design is generally posed as an optimization problem, where the goal is to tweak a set of available inputs or parameters, called design variables, to create a design that is optimal in some way and meets some preset requirements. In other words, we would like to modify the design variables in order to optimize some figure of merit, called an objective function, subject to a set of constraints, typically formulated as equations or inequalities to be satisfied. Typically, a complex engineering system such as an aircraft is described by thousands of design variables, all of which are optimized during the design process. Nevertheless, do we always need to use the highest-fidelity simulations as the objective function and constraints for engineering design? Or can we afford to use lower-fidelity simulations with appropriate corrections? In this thesis, we present a new methodology for surrogate-based optimization. Existing methods combine the possibly erroneous predictions of the low-fidelity surrogate with estimates of

  20. A new efficient optimal path planner for mobile robot based on Invasive Weed Optimization algorithm

    NASA Astrophysics Data System (ADS)

    Mohanty, Prases K.; Parhi, Dayal R.

    2014-12-01

    Planning of the shortest/optimal route is essential for the efficient operation of an autonomous mobile robot or vehicle. In this paper Invasive Weed Optimization (IWO), a new meta-heuristic algorithm, has been implemented for solving the path planning problem of a mobile robot in partially or totally unknown environments. This meta-heuristic optimization is based on the colonizing property of weeds. First we have framed an objective function that satisfies the conditions of obstacle avoidance and target-seeking behavior of the robot in partially or completely unknown environments. Depending upon the value of the objective function of each weed in the colony, the robot avoids obstacles and proceeds towards the destination. The optimal trajectory is generated with this navigational algorithm when the robot reaches its destination. The effectiveness, feasibility, and robustness of the proposed algorithm have been demonstrated through a series of simulation and experimental results. Finally, it has been found that the developed path planning algorithm can be effectively applied to many kinds of complex situations.
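
    A minimal sketch of the IWO loop described above follows: fitter weeds spread more seeds, the seed-dispersal radius shrinks over iterations, and the colony is truncated by fitness. The toy objective stands in for the robot's obstacle-avoidance/target-seeking cost, and every parameter value is an illustrative assumption.

```python
import numpy as np

def iwo_minimize(objective, dim, pop0=10, pop_max=30, iters=100,
                 smin=1, smax=5, sigma_init=1.0, sigma_final=0.01,
                 bounds=(-5.0, 5.0), seed=0):
    """Minimal Invasive Weed Optimization loop.

    Fitter weeds spread more seeds; seed dispersal shrinks over iterations;
    the colony is truncated to `pop_max` by fitness. Parameters illustrative.
    """
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, (pop0, dim))
    for it in range(iters):
        fit = np.array([objective(w) for w in pop])
        # Nonlinear decrease of the dispersal standard deviation.
        sigma = sigma_final + (sigma_init - sigma_final) * ((iters - it) / iters) ** 2
        # Seeds per weed scale linearly from smin (worst) to smax (best).
        rank = (fit.max() - fit) / (fit.max() - fit.min() + 1e-12)
        n_seeds = (smin + rank * (smax - smin)).astype(int)
        seeds = [w + sigma * rng.standard_normal(dim)
                 for w, k in zip(pop, n_seeds) for _ in range(k)]
        pop = np.clip(np.vstack([pop] + seeds), lo, hi)
        fit = np.array([objective(w) for w in pop])
        pop = pop[np.argsort(fit)[:pop_max]]          # competitive exclusion
    return pop[0], objective(pop[0])

# Toy objective standing in for the robot's obstacle-avoidance/target-seeking cost.
best, cost = iwo_minimize(lambda x: np.sum((x - 1.0) ** 2), dim=2)
print(best, cost)
```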

  1. Solid-perforated panel layout optimization by topology optimization based on unified transfer matrix.

    PubMed

    Kim, Yoon Jae; Kim, Yoon Young

    2010-10-01

    This paper presents a numerical method for optimizing the sequencing of solid panels, perforated panels and air gaps, and their respective thicknesses, for maximizing sound transmission loss and/or absorption. For the optimization, a method based on the topology optimization formulation is proposed. It is difficult to employ only the commonly used material interpolation technique because the involved layers exhibit fundamentally different acoustic behavior. Thus, a new optimization formulation using a so-called unified transfer matrix is proposed. The key idea is to form the elements of the transfer matrix such that the elements interpolated by the layer design variables can be those of air, perforated-panel and solid-panel layers. The problem related to the interpolation is addressed, and benchmark-type problems such as sound transmission or absorption maximization problems are solved to check the efficiency of the developed method. PMID:20968351

  2. Arsenic speciation driving risk based corrective action.

    PubMed

    Marlborough, Sidney J; Wilson, Vincent L

    2015-07-01

    The toxicity of arsenic depends on a number of factors including its valence state. The more potent trivalent arsenic [arsenite (As3+)] inhibits a large number of cellular enzymatic pathways involved in energy production, while the less toxic pentavalent arsenic [arsenate (As5+)] interferes with phosphate metabolism, phosphoproteins and ATP formation (uncoupling of oxidative phosphorylation). Environmental risk based corrective action for arsenic contamination utilizes toxicity data derived from arsenite studies in order to be conservative. However, depending upon environmental conditions, the arsenate species may predominate substantially, especially in well-aerated surface soils. Analyses of soil concentrations of arsenic species at two sites in northeastern Texas historically contaminated with arsenical pesticides yielded mean arsenate concentrations above 90% of total arsenic, with the majority of the remainder being the trivalent arsenite species. Ecological risk assessments based on the concentration of the trivalent arsenite species will lead to restrictive remediation requirements that do not adequately reflect the level of risk associated with the predominant species of arsenic found in the soil. The greater concentration of the pentavalent arsenate species in soils makes it the more appropriate species for monitoring remediation at sites with high arsenate to arsenite ratios.

  3. Genetic-evolution-based optimization methods for engineering design

    NASA Technical Reports Server (NTRS)

    Rao, S. S.; Pan, T. S.; Dhingra, A. K.; Venkayya, V. B.; Kumar, V.

    1990-01-01

    This paper presents the applicability of a biological model, based on genetic evolution, for engineering design optimization. Algorithms embodying the ideas of reproduction, crossover, and mutation are developed and applied to solve different types of structural optimization problems. Both continuous and discrete variable optimization problems are solved. A two-bay truss for maximum fundamental frequency is considered to demonstrate the continuous variable case. The selection of locations of actuators in an actively controlled structure, for minimum energy dissipation, is considered to illustrate the discrete variable case.
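
    A minimal real-coded genetic algorithm with the three operators named above (selection/reproduction, crossover, mutation) might look like the sketch below; the penalized toy objective stands in for a structural weight function and is not the truss or actuator-placement problem from the paper.

```python
import numpy as np

def ga_minimize(objective, dim, pop_size=40, gens=150,
                crossover_rate=0.8, mutation_rate=0.1, bounds=(0.1, 10.0), seed=0):
    """Minimal real-coded genetic algorithm (selection, crossover, mutation)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, (pop_size, dim))
    for _ in range(gens):
        fit = np.array([objective(x) for x in pop])
        # Tournament selection: pick the better of two random individuals.
        idx = rng.integers(0, pop_size, (pop_size, 2))
        parents = pop[np.where(fit[idx[:, 0]] < fit[idx[:, 1]], idx[:, 0], idx[:, 1])]
        # Arithmetic crossover between consecutive parent pairs.
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):
            if rng.random() < crossover_rate:
                a = rng.random()
                children[i] = a * parents[i] + (1 - a) * parents[i + 1]
                children[i + 1] = (1 - a) * parents[i] + a * parents[i + 1]
        # Gaussian mutation.
        mutate = rng.random(children.shape) < mutation_rate
        children[mutate] += 0.1 * (hi - lo) * rng.standard_normal(mutate.sum())
        pop = np.clip(children, lo, hi)
    fit = np.array([objective(x) for x in pop])
    return pop[fit.argmin()], fit.min()

# Toy stand-in for a structural weight objective with a penalized size constraint.
weight = lambda x: np.sum(x) + 100.0 * max(0.0, 2.0 - np.min(x)) ** 2
print(ga_minimize(weight, dim=3))
```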

  4. Genetic Algorithm Based Neural Networks for Nonlinear Optimization

    1994-09-28

    This software develops a novel approach to nonlinear optimization using genetic algorithm based neural networks. To the best of our knowledge, this approach represents the first attempt at applying both neural network and genetic algorithm techniques to solve a nonlinear optimization problem. The approach constructs a neural network structure and an appropriately shaped energy surface whose minima correspond to optimal solutions of the problem. A genetic algorithm is employed to perform a parallel and powerful search of the energy surface.

  5. The role of hope and optimism in suicide risk for American Indians/Alaska Natives.

    PubMed

    O'Keefe, Victoria M; Wingate, LaRicka R

    2013-12-01

    There are some American Indian/Alaska Native communities that exhibit high rates of suicide. The interpersonal theory of suicide (Joiner, 2005) posits that lethal suicidal behavior is likely preceded by the simultaneous presence of thwarted belongingness, perceived burdensomeness, and acquired capability. Past research has shown that hope and optimism are negatively related to suicidal ideation, some of the constructs in the interpersonal theory of suicide, and suicide risk for the general population. This is the first study to investigate hope and optimism in relation to suicidal ideation, thwarted belongingness, perceived burdensomeness, and acquired capability for American Indians/Alaska Natives. Results showed that hope and optimism negatively predicted thwarted belongingness, perceived burdensomeness, and suicidal ideation. However, these results were not found for acquired capability. Overall, this study suggests that higher levels of hope and optimism are associated with lower levels of suicidal ideation, thwarted belongingness, and perceived burdensomeness in this American Indian/Alaska Native sample.

  6. [Physical process based risk assessment of groundwater pollution in the mining area].

    PubMed

    Sun, Fa-Sheng; Cheng, Pin; Zhang, Bo

    2014-04-01

    Case studies of groundwater pollution risk assessment at home and abroad generally start from groundwater vulnerability and seldom consider the influence of characteristic pollutants on the consequences of pollution. Vulnerability is the natural sensitivity of the environment to pollutants. Risk assessment of groundwater pollution should reflect the movement and distribution of pollutants in groundwater. In order to improve the theory and methods of groundwater pollution risk assessment, a physical-process-based risk assessment methodology for groundwater pollution was proposed for a mining area. According to the sensitivity of the economic and social conditions and the possible future distribution of pollutants, the spatial distribution of risk levels in the aquifer was ranked beforehand, and the pollutant source intensity corresponding to each risk level was deduced accordingly. Taking this as the criterion for classification in groundwater pollution risk assessment, the groundwater pollution risk in the mining area was evaluated by simulating the migration of pollutants in the vadose zone and aquifer. The results show that the physical-process-based risk assessment method can give the concentration distribution of pollutants and the risk level in space and time. For a single point-source polluted area, it gives a detailed risk characterization, which is better than risk assessment based on an aquifer's intrinsic vulnerability index, and it is applicable to assessing the risk of existing polluted sites, optimizing future sites and providing design parameters for site construction.

  7. Optimized PCR-based detection of mycoplasma.

    PubMed

    Dobrovolny, Paige L; Bess, Dan

    2011-06-20

    The maintenance of contamination-free cell lines is essential to cell-based research. Among the biggest contaminant concerns is mycoplasma contamination. Although mycoplasma do not usually kill contaminated cells, they are difficult to detect and can cause a variety of effects on cultured cells, including altered metabolism, slowed proliferation and chromosomal aberrations. In short, mycoplasma contamination compromises the value of those cell lines in providing accurate data for life science research. The sources of mycoplasma contamination in the laboratory are very challenging to control completely. As certain mycoplasma species are found on human skin, they can be introduced through poor aseptic technique. Additionally, they can come from contaminated supplements such as fetal bovine serum, and most importantly from other contaminated cell cultures. Once mycoplasma contaminates a culture, it can quickly spread to contaminate other areas of the lab. Strict adherence to good laboratory practices such as good aseptic technique is key, and routine testing for mycoplasma is highly recommended for successful control of mycoplasma contamination. PCR-based detection of mycoplasma has become a very popular method for routine cell line maintenance. PCR-based detection methods are highly sensitive and provide rapid results, which allows researchers to respond quickly to isolate and eliminate contamination once it is detected, compared with the time required by microbiological techniques. The LookOut Mycoplasma PCR Detection Kit is highly sensitive, with a detection limit of only 2 genomes per μl. Taking advantage of the highly specific JumpStart Taq DNA Polymerase and a proprietary primer design, false positives are greatly reduced. The convenient 8-tube format, with strips pre-coated with dNTPs and the associated primers, helps increase throughput to meet the needs of customers with larger collections of cell lines. Given the extreme sensitivity of the kit, great

  8. Hybrid optimization schemes for simulation-based problems.

    SciTech Connect

    Fowler, Katie; Gray, Genetha Anne; Griffin, Joshua D.

    2010-05-01

    The inclusion of computer simulations in the study and design of complex engineering systems has created a need for efficient approaches to simulation-based optimization. For example, in water resources management problems, optimization problems regularly consist of objective functions and constraints that rely on output from a PDE-based simulator. Various assumptions can be made to simplify either the objective function or the physical system so that gradient-based methods apply; however, the incorporation of realistic objective functions can be accomplished given the availability of derivative-free optimization methods. A wide variety of derivative-free methods exist, and each method has both advantages and disadvantages. Therefore, to address such problems, we propose a hybrid approach, which allows the combining of beneficial elements of multiple methods in order to more efficiently search the design space. Specifically, in this paper, we illustrate the capabilities of two novel algorithms: one hybridizes pattern search optimization with Gaussian process emulation, and the other hybridizes pattern search with a genetic algorithm. We describe the hybrid methods and give some numerical results for a hydrological application which illustrate that the hybrids find an optimal solution under conditions for which traditional optimal search methods fail.

  9. 78 FR 76521 - Risk-Based Capital Guidelines; Market Risk

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-18

    ... capital adequacy and risk profile. In those cases, a banking organization must disclose the general nature... no longer reflective of the bank's capital adequacy and risk profile, then a brief discussion of this... definition of ``Covered position'' to read as follows: Appendix E to Part 225--Capital Adequacy...

  10. 78 FR 43829 - Risk-Based Capital Guidelines; Market Risk

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-22

    ... Organization for Economic Cooperation and Development (OECD), which are referenced in the Board's market risk... transparency through enhanced disclosures. \\1\\ 77 FR 53060 (August 30, 2012). The agencies' market risk rules... additional detail on this history in the preamble to the August 2012 final rule. See, 77 FR 53060,...

  11. Glycemia and cardiovascular risk: challenging evidence based medicine

    PubMed Central

    Kitsios, K; Tsapas, A; Karagianni, P

    2011-01-01

    Optimal glycemic control is well known to effectively reduce the risk of microvascular complications in both type 1 and type 2 diabetes mellitus. However, the role of glycemic control in decreasing the risk of myocardial infarction and ischemic stroke, the leading causes of death in patients with diabetes, has so far been controversial. In this review, based on data recently reported from large interventional studies, we discuss the possible causal relationship between glycemia and cardiovascular outcomes in type 1 and type 2 diabetes. Strict glycemic control right from the diagnosis of the disease may be effective in reducing the long-term incidence of cardiovascular (CV) disease in both T1 and T2 diabetics. Nevertheless, such a strategy could be potentially harmful for T2 diabetics with a long duration of suboptimal glycemic control and already established CV complications. Treatment targets in these patients should be individualized, taking into account other aspects of glycemic control and diabetes complications such as hypoglycemia and autonomic neuropathy. PMID:22435015

  12. Optimal "image-based" weighting for energy-resolved CT

    SciTech Connect

    Schmidt, Taly Gilat

    2009-07-15

    This paper investigates a method of reconstructing images from energy-resolved CT data with negligible beam-hardening artifacts and improved contrast-to-noise ratio (CNR) compared to conventional energy-weighting methods. Conceptually, the investigated method first reconstructs separate images from each energy bin. The final image is a linear combination of the energy-bin images, with the weights chosen to maximize the CNR in the final image. The optimal weight of a particular energy-bin image is derived to be proportional to the contrast-to-noise-variance ratio in that image. The investigated weighting method is referred to as "image-based" weighting, although, as will be described, the weights can be calculated and the energy-bin data combined prior to reconstruction. The performance of optimal image-based energy weighting with respect to CNR and beam-hardening artifacts was investigated through simulations and compared to that of energy integrating, photon counting, and previously studied optimal "projection-based" energy weighting. Two acquisitions were simulated: dedicated breast CT and a conventional thorax scan. The energy-resolving detector was simulated with five energy bins. Four methods of estimating the optimal weights were investigated, including task-specific and task-independent methods and methods that require a single reconstruction versus multiple reconstructions. Results demonstrated that optimal image-based weighting improved the CNR compared to energy-integrating weighting by factors of 1.15-1.6 depending on the task. Compared to photon-counting weighting, the CNR improvement ranged from 1.0 to 1.3. The CNR improvement factors were comparable to those of projection-based optimal energy weighting. The beam-hardening cupping artifact increased from 5.2% for energy-integrating weighting to 12.8% for optimal projection-based weighting, while optimal image-based weighting reduced the cupping to 0.6%. Overall, optimal image-based energy weighting
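
    The stated rule, with the weight of each energy-bin image proportional to its contrast-to-noise-variance ratio, can be sketched as follows. The bin contrasts, noise variances and image sizes below are invented for illustration, and the CNR printed at the end is only a rough proxy, not the paper's task-based figure.

```python
import numpy as np

def optimal_image_weights(contrast, noise_var):
    """Weights proportional to the contrast-to-noise-variance ratio, as stated
    in the abstract; normalizing them to sum to one is a convention chosen here."""
    w = np.asarray(contrast, dtype=float) / np.asarray(noise_var, dtype=float)
    return w / w.sum()

def combine_energy_bins(bin_images, weights):
    """Linear combination of per-bin reconstructions into one image."""
    return np.tensordot(weights, np.asarray(bin_images), axes=1)

# Illustrative numbers for a five-bin acquisition (not from the paper).
contrast = [0.9, 0.7, 0.5, 0.35, 0.25]        # signal difference per bin
noise_var = [0.04, 0.02, 0.015, 0.015, 0.02]  # noise variance per bin image
w = optimal_image_weights(contrast, noise_var)

rng = np.random.default_rng(1)
bins = [c + np.sqrt(v) * rng.standard_normal((64, 64))
        for c, v in zip(contrast, noise_var)]
final = combine_energy_bins(bins, w)
cnr = final.mean() / final.std()   # rough proxy: mean over std of the combined image
print("weights:", np.round(w, 3), " combined-image CNR:", round(cnr, 2))
```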

  13. Stackelberg Game of Buyback Policy in Supply Chain with a Risk-Averse Retailer and a Risk-Averse Supplier Based on CVaR

    PubMed Central

    Zhou, Yanju; Chen, Qian; Chen, Xiaohong; Wang, Zongrun

    2014-01-01

    This paper considers a decentralized supply chain in which a single supplier sells a perishable product to a single retailer facing uncertain demand. We assume that the supplier and the retailer are both risk averse and utilize Conditional Value at Risk (CVaR), a risk measure popularized in financial risk management, to capture their risk attitudes. We establish a buyback policy model based on Stackelberg game theory that accounts for the supply chain members' risk preferences, and derive expressions for the supplier's optimal repurchase price and the retailer's optimal order quantity, which are compared with those of the risk-neutral case. Finally, a numerical example is used to simulate the model and verify the related conclusions. PMID:25247605
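
    A minimal sketch of the retailer's side of such a model is shown below: the CVaR of the loss is estimated by Monte Carlo and used to rank candidate order quantities under a buyback contract. The demand model, prices and risk level are illustrative assumptions; the full Stackelberg equilibrium of the paper is not reproduced.

```python
import numpy as np

def cvar(losses, alpha=0.95):
    """Conditional Value at Risk: mean loss in the worst (1 - alpha) tail."""
    losses = np.sort(np.asarray(losses))
    tail_start = int(np.ceil(alpha * len(losses)))
    return losses[tail_start:].mean()

def retailer_loss(order_q, demand, wholesale=6.0, retail=10.0, buyback=3.0):
    """Negative profit for a newsvendor-style retailer with a buyback contract.
    Unsold units are returned to the supplier at the buyback price.
    All prices are illustrative."""
    sold = np.minimum(order_q, demand)
    unsold = order_q - sold
    profit = retail * sold + buyback * unsold - wholesale * order_q
    return -profit

rng = np.random.default_rng(0)
demand = rng.normal(100, 30, 100_000).clip(min=0)   # illustrative demand model

# A risk-averse retailer picks the order quantity with the smallest CVaR of loss.
candidates = np.arange(60, 141, 5)
best_q = min(candidates, key=lambda q: cvar(retailer_loss(q, demand)))
print("CVaR-optimal order quantity:", best_q)
```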

  14. Stackelberg game of buyback policy in supply chain with a risk-averse retailer and a risk-averse supplier based on CVaR.

    PubMed

    Zhou, Yanju; Chen, Qian; Chen, Xiaohong; Wang, Zongrun

    2014-01-01

    This paper considers a decentralized supply chain in which a single supplier sells a perishable product to a single retailer facing uncertain demand. We assume that the supplier and the retailer are both risk averse and utilize Conditional Value at Risk (CVaR), a risk measure popularized in financial risk management, to capture their risk attitudes. We establish a buyback policy model based on Stackelberg game theory that accounts for the supply chain members' risk preferences, and derive expressions for the supplier's optimal repurchase price and the retailer's optimal order quantity, which are compared with those of the risk-neutral case. Finally, a numerical example is used to simulate the model and verify the related conclusions. PMID:25247605

  15. Trust regions in Kriging-based optimization with expected improvement

    NASA Astrophysics Data System (ADS)

    Regis, Rommel G.

    2016-06-01

    The Kriging-based Efficient Global Optimization (EGO) method works well on many expensive black-box optimization problems. However, it does not seem to perform well on problems with steep and narrow global minimum basins and on high-dimensional problems. This article develops a new Kriging-based optimization method called TRIKE (Trust Region Implementation in Kriging-based optimization with Expected improvement) that implements a trust-region-like approach where each iterate is obtained by maximizing an Expected Improvement (EI) function within some trust region. This trust region is adjusted depending on the ratio of the actual improvement to the EI. This article also develops the Kriging-based CYCLONE (CYClic Local search in OptimizatioN using Expected improvement) method that uses a cyclic pattern to determine the search regions where the EI is maximized. TRIKE and CYCLONE are compared with EGO on 28 test problems with up to 32 dimensions and on a 36-dimensional groundwater bioremediation application in appendices supplied as an online supplement available at http://dx.doi.org/10.1080/0305215X.2015.1082350. The results show that both algorithms yield substantial improvements over EGO and they are competitive with a radial basis function method.
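
    The two ingredients of the TRIKE idea, maximizing Expected Improvement and adjusting a trust region by the ratio of realized to promised improvement, can be sketched as below. The Gaussian-process model itself is left abstract (only its predictive mean and standard deviation appear), and the thresholds and growth/shrink factors are generic trust-region choices, not the article's constants.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    """EI for minimization at points with GP predictive mean `mu` and std `sigma`."""
    sigma = np.maximum(sigma, 1e-12)
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def update_trust_region(radius, actual_improvement, ei,
                        shrink=0.5, grow=2.0, eta=0.25):
    """TRIKE-style adjustment: compare realized improvement with promised EI.

    The thresholds and factors here are generic trust-region choices, not the
    exact constants used in the article.
    """
    ratio = actual_improvement / max(ei, 1e-12)
    return radius * grow if ratio > eta else radius * shrink

# Illustrative use: GP predictions at three candidate points inside the trust region.
mu = np.array([1.2, 0.9, 1.5])
sigma = np.array([0.3, 0.05, 0.6])
f_best = 1.0
ei = expected_improvement(mu, sigma, f_best)
pick = ei.argmax()
print("EI:", np.round(ei, 4), "-> evaluate candidate", pick)
print("new radius:", update_trust_region(radius=0.5,
                                         actual_improvement=0.08, ei=ei[pick]))
```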

  16. Fatigue reliability based optimal design of planar compliant micropositioning stages.

    PubMed

    Wang, Qiliang; Zhang, Xianmin

    2015-10-01

    Conventional compliant micropositioning stages are usually developed based on static strength and deterministic methods, which may lead to either unsafe or excessive designs. This paper presents a fatigue reliability analysis and optimal design of a three-degree-of-freedom (3 DOF) flexure-based micropositioning stage. Kinematic, modal, static, and fatigue stress modelling of the stage were conducted using the finite element method. The maximum equivalent fatigue stress in the hinges was derived using sequential quadratic programming. The fatigue strength of the hinges was obtained by considering various influencing factors. On this basis, the fatigue reliability of the hinges was analysed using the stress-strength interference method. Fatigue-reliability-based optimal design of the stage was then conducted using the genetic algorithm and MATLAB. To make fatigue life testing easier, a 1 DOF stage was then optimized and manufactured. Experimental results demonstrate the validity of the approach.

  17. Fatigue reliability based optimal design of planar compliant micropositioning stages

    NASA Astrophysics Data System (ADS)

    Wang, Qiliang; Zhang, Xianmin

    2015-10-01

    Conventional compliant micropositioning stages are usually developed based on static strength and deterministic methods, which may lead to either unsafe or excessive designs. This paper presents a fatigue reliability analysis and optimal design of a three-degree-of-freedom (3 DOF) flexure-based micropositioning stage. Kinematic, modal, static, and fatigue stress modelling of the stage were conducted using the finite element method. The maximum equivalent fatigue stress in the hinges was derived using sequential quadratic programming. The fatigue strength of the hinges was obtained by considering various influencing factors. On this basis, the fatigue reliability of the hinges was analysed using the stress-strength interference method. Fatigue-reliability-based optimal design of the stage was then conducted using the genetic algorithm and MATLAB. To make fatigue life testing easier, a 1 DOF stage was then optimized and manufactured. Experimental results demonstrate the validity of the approach.

  18. Fatigue reliability based optimal design of planar compliant micropositioning stages.

    PubMed

    Wang, Qiliang; Zhang, Xianmin

    2015-10-01

    Conventional compliant micropositioning stages are usually developed based on static strength and deterministic methods, which may lead to either unsafe or excessive designs. This paper presents a fatigue reliability analysis and optimal design of a three-degree-of-freedom (3 DOF) flexure-based micropositioning stage. Kinematic, modal, static, and fatigue stress modelling of the stage were conducted using the finite element method. The maximum equivalent fatigue stress in the hinges was derived using sequential quadratic programming. The fatigue strength of the hinges was obtained by considering various influencing factors. On this basis, the fatigue reliability of the hinges was analysed using the stress-strength interference method. Fatigue-reliability-based optimal design of the stage was then conducted using the genetic algorithm and MATLAB. To make fatigue life testing easier, a 1 DOF stage was then optimized and manufactured. Experimental results demonstrate the validity of the approach. PMID:26520994

  19. Support Vector Machine Based on Adaptive Acceleration Particle Swarm Optimization

    PubMed Central

    Abdulameer, Mohammed Hasan; Othman, Zulaiha Ali

    2014-01-01

    Existing face recognition methods utilize particle swarm optimizer (PSO) and opposition based particle swarm optimizer (OPSO) to optimize the parameters of SVM. However, the utilization of random values in the velocity calculation decreases the performance of these techniques; that is, during the velocity computation, we normally use random values for the acceleration coefficients and this creates randomness in the solution. To address this problem, an adaptive acceleration particle swarm optimization (AAPSO) technique is proposed. To evaluate our proposed method, we employ both face and iris recognition based on AAPSO with SVM (AAPSO-SVM). In the face and iris recognition systems, performance is evaluated using two human face databases, YALE and CASIA, and the UBiris dataset. In this method, we initially perform feature extraction and then recognition on the extracted features. In the recognition process, the extracted features are used for SVM training and testing. During the training and testing, the SVM parameters are optimized with the AAPSO technique, and in AAPSO, the acceleration coefficients are computed using the particle fitness values. The parameters in SVM, which are optimized by AAPSO, perform efficiently for both face and iris recognition. A comparative analysis between our proposed AAPSO-SVM and the PSO-SVM technique is presented. PMID:24790584
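
    As a rough sketch of the PSO-SVM baseline against which AAPSO is compared, the code below tunes the C and gamma parameters of an RBF SVM by minimizing the negative cross-validated accuracy with a plain PSO. The fitness-dependent acceleration coefficients that define AAPSO are not reproduced, and the dataset and swarm settings are illustrative rather than those used in the paper.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

def fitness(params):
    """Negative cross-validated accuracy of an RBF-SVM; the PSO minimizes this."""
    C, gamma = 10.0 ** params          # search in log10 space
    clf = SVC(C=C, gamma=gamma)
    return -cross_val_score(clf, X, y, cv=5).mean()

rng = np.random.default_rng(0)
n, dim, iters = 15, 2, 20
lo, hi = np.array([-2.0, -4.0]), np.array([3.0, 1.0])   # log10(C), log10(gamma)
pos = rng.uniform(lo, hi, (n, dim))
vel = np.zeros((n, dim))
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n, dim))
    # Fixed acceleration coefficients (AAPSO would adapt these from fitness).
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    vals = np.array([fitness(p) for p in pos])
    better = vals < pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[pbest_val.argmin()].copy()

print("best log10(C), log10(gamma):", gbest, " CV accuracy:", -pbest_val.min())
```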

  20. Support vector machine based on adaptive acceleration particle swarm optimization.

    PubMed

    Abdulameer, Mohammed Hasan; Sheikh Abdullah, Siti Norul Huda; Othman, Zulaiha Ali

    2014-01-01

    Existing face recognition methods utilize particle swarm optimizer (PSO) and opposition based particle swarm optimizer (OPSO) to optimize the parameters of SVM. However, the utilization of random values in the velocity calculation decreases the performance of these techniques; that is, during the velocity computation, we normally use random values for the acceleration coefficients and this creates randomness in the solution. To address this problem, an adaptive acceleration particle swarm optimization (AAPSO) technique is proposed. To evaluate our proposed method, we employ both face and iris recognition based on AAPSO with SVM (AAPSO-SVM). In the face and iris recognition systems, performance is evaluated using two human face databases, YALE and CASIA, and the UBiris dataset. In this method, we initially perform feature extraction and then recognition on the extracted features. In the recognition process, the extracted features are used for SVM training and testing. During the training and testing, the SVM parameters are optimized with the AAPSO technique, and in AAPSO, the acceleration coefficients are computed using the particle fitness values. The parameters in SVM, which are optimized by AAPSO, perform efficiently for both face and iris recognition. A comparative analysis between our proposed AAPSO-SVM and the PSO-SVM technique is presented. PMID:24790584

  1. Efficiency Improvements to the Displacement Based Multilevel Structural Optimization Algorithm

    NASA Technical Reports Server (NTRS)

    Plunkett, C. L.; Striz, A. G.; Sobieszczanski-Sobieski, J.

    2001-01-01

    Multilevel Structural Optimization (MSO) continues to be an area of research interest in engineering optimization. In the present project, the weight optimization of beams and trusses using Displacement based Multilevel Structural Optimization (DMSO), a member of the MSO set of methodologies, is investigated. In the DMSO approach, the optimization task is subdivided into a single system-level and multiple subsystems-level optimizations. The system level optimization minimizes the load unbalance resulting from the use of displacement functions to approximate the structural displacements. The function coefficients are then the design variables. Alternatively, the system level optimization can be solved using the displacements themselves as design variables, as was shown in previous research. Both approaches ensure that the calculated loads match the applied loads. At the subsystems level, the weight of the structure is minimized using the element dimensions as design variables. The approach is expected to be very efficient for large structures, since parallel computing can be utilized in the different levels of the problem. In this paper, the method is applied to a one-dimensional beam and a large three-dimensional truss. The beam was tested to study possible simplifications to the system level optimization. In previous research, polynomials were used to approximate the global nodal displacements. The number of polynomial coefficients matched the number of degrees of freedom of the problem. Here it was desired to see whether it is possible to match only a subset of the degrees of freedom at the system level. This would lead to a simplification of the system level, with a resulting increase in overall efficiency. However, the methods tested for this type of system level simplification did not yield positive results. The large truss was utilized to test further improvements in the efficiency of DMSO. In previous work, parallel processing was applied to the

  2. Optimization algorithm based characterization scheme for tunable semiconductor lasers.

    PubMed

    Chen, Quanan; Liu, Gonghai; Lu, Qiaoyin; Guo, Weihua

    2016-09-01

    In this paper, an optimization algorithm based characterization scheme for tunable semiconductor lasers is proposed and demonstrated. In the process of optimization, the ratio between the power at the desired frequency and the power outside the desired frequency is used as the figure of merit, which approximately represents the side-mode suppression ratio. In practice, we use tunable optical band-pass and band-stop filters to obtain the power at the desired frequency and the power outside the desired frequency separately. With the assistance of optimization algorithms, such as the particle swarm optimization (PSO) algorithm, we can obtain stable operation conditions for tunable lasers at designated frequencies directly and efficiently. PMID:27607701

  3. An Optimization-based Atomistic-to-Continuum Coupling Method

    SciTech Connect

    Olson, Derek; Bochev, Pavel B.; Luskin, Mitchell; Shapeev, Alexander V.

    2014-08-21

    In this paper, we present a new optimization-based method for atomistic-to-continuum (AtC) coupling. The main idea is to cast the latter as a constrained optimization problem with virtual Dirichlet controls on the interfaces between the atomistic and continuum subdomains. The optimization objective is to minimize the error between the atomistic and continuum solutions on the overlap between the two subdomains, while the atomistic and continuum force balance equations provide the constraints. Separation, rather than blending, of the atomistic and continuum problems, and their subsequent use as constraints in the optimization problem, distinguishes our approach from the existing AtC formulations. Finally, we present and analyze the method in the context of a one-dimensional chain of atoms modeled using a linearized two-body potential with next-nearest neighbor interactions.

  4. Planar straightness error evaluation based on particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Mao, Jian; Zheng, Huawen; Cao, Yanlong; Yang, Jiangxin

    2006-11-01

    The straightness error generally refers to the deviation between an actual line and an ideal line. According to the characteristics of planar straightness error evaluation, a novel method to evaluate planar straightness errors based on particle swarm optimization (PSO) is proposed. The planar straightness error evaluation problem is formulated as a nonlinear optimization problem. According to the minimum zone condition, the mathematical model of planar straightness, together with the objective and fitness functions, is developed. Compared with the genetic algorithm (GA), the PSO algorithm has some advantages: it is implemented without crossover and mutation, it has fast convergence speed, and fewer parameters need to be set. The results show that the PSO method is very suitable for nonlinear optimization problems and provides a promising new method for straightness error evaluation. It can be applied to the measured planar straightness data obtained by three-coordinate measuring machines.
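
    The minimum-zone formulation described above can be written down directly: for a candidate line y = a*x + b, the objective is the width of the band of signed point-to-line distances. In the sketch below a generic global optimizer (SciPy's differential evolution) stands in for the PSO used in the article, and the measured profile is simulated.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Simulated 2-D profile points measured on a nominally straight feature (mm).
rng = np.random.default_rng(2)
x = np.linspace(0.0, 100.0, 50)
y = 0.002 * x + 0.01 * rng.standard_normal(50)   # illustrative data only

def zone_width(params):
    """Minimum-zone objective: width of the band of signed point-to-line
    distances for a candidate line y = a*x + b."""
    a, b = params
    d = (y - (a * x + b)) / np.sqrt(1.0 + a * a)
    return d.max() - d.min()

# A generic global optimizer stands in here for the PSO used in the article.
res = differential_evolution(zone_width, bounds=[(-0.1, 0.1), (-1.0, 1.0)], seed=0)
print("minimum-zone straightness error (mm):", round(res.fun, 5))
```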

  5. Measurement matrix optimization method based on matrix orthogonal similarity transformation

    NASA Astrophysics Data System (ADS)

    Pan, Jinfeng

    2016-05-01

    Optimization of the measurement matrix is one of the important research topics in compressive sensing theory. A measurement matrix optimization method is presented based on the orthogonal similarity transformation of the information operator's Gram matrix. Given that the information operator's Gram matrix is a singular symmetric matrix, a simplified orthogonal similarity transformation is deduced, and thus the simplified diagonal matrix that is orthogonally similar to it is obtained. An approximation of the Gram matrix is then obtained by setting all the nonzero diagonal entries of the simplified diagonal matrix to their average value. An optimized measurement matrix can then be acquired from its relationship with the information operator. Experimental results show that the optimized measurement matrix is less coherent with dictionaries than the random measurement matrix. The relative signal recovery error also declines when the proposed measurement matrix is used.
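
    A sketch of the idea follows: form the Gram matrix of the information operator, diagonalize it (an orthogonal similarity transformation), replace the nonzero eigenvalues by their average, and rebuild a measurement matrix. The final pseudo-inverse step used to map the optimized information operator back to a measurement matrix is our own assumption, and the matrix sizes and dictionary are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 32, 128
Phi = rng.standard_normal((m, n)) / np.sqrt(m)       # random measurement matrix
Psi = np.linalg.qr(rng.standard_normal((n, n)))[0]   # orthonormal dictionary

def mutual_coherence(D):
    """Largest normalized inner product between distinct columns of D."""
    Dn = D / np.linalg.norm(D, axis=0, keepdims=True)
    G = np.abs(Dn.T @ Dn)
    np.fill_diagonal(G, 0.0)
    return G.max()

# Gram matrix of the information operator D = Phi @ Psi (rank m, singular).
D = Phi @ Psi
G = D.T @ D
eigval, eigvec = np.linalg.eigh(G)                   # orthogonal similarity transform
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]

# Replace the m nonzero eigenvalues by their average value.
eigval_opt = np.zeros_like(eigval)
eigval_opt[:m] = eigval[:m].mean()

# Rebuild an information operator with the approximated Gram matrix and map it
# back to a measurement matrix (this pseudo-inverse step is our assumption).
D_opt = np.diag(np.sqrt(eigval_opt[:m])) @ eigvec[:, :m].T
Phi_opt = D_opt @ np.linalg.pinv(Psi)

print("coherence before:", round(mutual_coherence(Phi @ Psi), 3))
print("coherence after: ", round(mutual_coherence(Phi_opt @ Psi), 3))
```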

  6. Inversion method based on stochastic optimization for particle sizing.

    PubMed

    Sánchez-Escobar, Juan Jaime; Barbosa-Santillán, Liliana Ibeth; Vargas-Ubera, Javier; Aguilar-Valdés, Félix

    2016-08-01

    A stochastic inverse method is presented based on a hybrid evolutionary optimization algorithm (HEOA) to retrieve a monomodal particle-size distribution (PSD) from the angular distribution of scattered light. By solving an optimization problem, the HEOA (with the Fraunhofer approximation) retrieves the PSD from an intensity pattern generated by Mie theory. The analyzed light-scattering pattern can be attributed to unimodal normal, gamma, or lognormal distribution of spherical particles covering the interval of modal size parameters 46≤α≤150. The HEOA ensures convergence to the near-optimal solution during the optimization of a real-valued objective function by combining the advantages of a multimember evolution strategy and locally weighted linear regression. The numerical results show that our HEOA can be satisfactorily applied to solve the inverse light-scattering problem. PMID:27505357

  7. An Optimization-based Atomistic-to-Continuum Coupling Method

    DOE PAGESBeta

    Olson, Derek; Bochev, Pavel B.; Luskin, Mitchell; Shapeev, Alexander V.

    2014-08-21

    In this paper, we present a new optimization-based method for atomistic-to-continuum (AtC) coupling. The main idea is to cast the latter as a constrained optimization problem with virtual Dirichlet controls on the interfaces between the atomistic and continuum subdomains. The optimization objective is to minimize the error between the atomistic and continuum solutions on the overlap between the two subdomains, while the atomistic and continuum force balance equations provide the constraints. Separation, rather than blending, of the atomistic and continuum problems, and their subsequent use as constraints in the optimization problem, distinguishes our approach from the existing AtC formulations. Finally, we present and analyze the method in the context of a one-dimensional chain of atoms modeled using a linearized two-body potential with next-nearest neighbor interactions.

  8. Optimal weight based on energy imbalance and utility maximization

    NASA Astrophysics Data System (ADS)

    Sun, Ruoyan

    2016-01-01

    This paper investigates the optimal weight for both males and females using energy imbalance and utility maximization. Based on the difference between energy intake and expenditure, we develop a state equation that describes the weight gain resulting from this energy gap. We construct an objective function considering food consumption, eating habits and survival rate to measure utility. By applying mathematical tools from optimal control methods and the qualitative theory of differential equations, we obtain several results. For both males and females, the optimal weight is larger than the physiologically optimal weight calculated from the Body Mass Index (BMI). We also study the corresponding trajectories toward the steady-state weight. Depending on the value of a few parameters, the steady state can either be a saddle point with a monotonic trajectory or a focus with dampened oscillations.

  9. An Image Morphing Technique Based on Optimal Mass Preserving Mapping

    PubMed Central

    Zhu, Lei; Yang, Yan; Haker, Steven; Tannenbaum, Allen

    2013-01-01

    Image morphing, or image interpolation in the time domain, deals with the metamorphosis of one image into another. In this paper, a new class of image morphing algorithms is proposed based on the theory of optimal mass transport. The L2 mass moving energy functional is modified by adding an intensity penalizing term, in order to reduce the undesired double exposure effect. It is an intensity-based approach and, thus, is parameter free. The optimal warping function is computed using an iterative gradient descent approach. This proposed morphing method is also extended to doubly connected domains using a harmonic parameterization technique, along with finite-element methods. PMID:17547128

  10. An image morphing technique based on optimal mass preserving mapping.

    PubMed

    Zhu, Lei; Yang, Yan; Haker, Steven; Tannenbaum, Allen

    2007-06-01

    Image morphing, or image interpolation in the time domain, deals with the metamorphosis of one image into another. In this paper, a new class of image morphing algorithms is proposed based on the theory of optimal mass transport. The L(2) mass moving energy functional is modified by adding an intensity penalizing term, in order to reduce the undesired double exposure effect. It is an intensity-based approach and, thus, is parameter free. The optimal warping function is computed using an iterative gradient descent approach. This proposed morphing method is also extended to doubly connected domains using a harmonic parameterization technique, along with finite-element methods. PMID:17547128

  11. Optimization of Designs for Nanotube-based Scanning Probes

    NASA Technical Reports Server (NTRS)

    Harik, V. M.; Gates, T. S.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    Optimization of designs for nanotube-based scanning probes, which may be used for high-resolution characterization of nanostructured materials, is examined. Continuum models to analyze the nanotube deformations are proposed to help guide selection of the optimum probe. The limitations on the use of these models that must be accounted for before applying to any design problem are presented. These limitations stem from the underlying assumptions and the expected range of nanotube loading, end conditions, and geometry. Once the limitations are accounted for, the key model parameters along with the appropriate classification of nanotube structures may serve as a basis for the design optimization of nanotube-based probe tips.

  12. Adjoint-based airfoil shape optimization in transonic flow

    NASA Astrophysics Data System (ADS)

    Gramanzini, Joe-Ray

    The primary focus of this work is efficient aerodynamic shape optimization in transonic flow. Adjoint-based optimization techniques are employed on airfoil sections and evaluated in terms of computational accuracy as well as efficiency. This study examines two test cases proposed by the AIAA Aerodynamic Design Optimization Discussion Group. The first is a two-dimensional, transonic, inviscid, non-lifting optimization of a Modified-NACA 0012 airfoil. The second is a two-dimensional, transonic, viscous optimization problem using a RAE 2822 airfoil. The FUN3D CFD code of NASA Langley Research Center is used as the flow solver for the gradient-based optimization cases. Two shape parameterization techniques are employed to study their effect and the number of design variables on the final optimized shape: Multidisciplinary Aerodynamic-Structural Shape Optimization Using Deformation (MASSOUD) and the BandAids free-form deformation technique. For the two airfoil cases, angle of attack is treated as a global design variable. The thickness and camber distributions are the local design variables for MASSOUD, and selected airfoil surface grid points are the local design variables for BandAids. Using the MASSOUD technique, a drag reduction of 72.14% is achieved for the NACA 0012 case, reducing the total number of drag counts from 473.91 to 130.59. Employing the BandAids technique yields a 78.67% drag reduction, from 473.91 to 99.98. The RAE 2822 case exhibited a drag reduction from 217.79 to 132.79 counts, a 39.05% decrease using BandAids.

  13. Physiologically based pharmacokinetics and cancer risk assessment.

    PubMed Central

    Andersen, M E; Krishnan, K

    1994-01-01

    Physiologically based pharmacokinetic (PBPK) modeling involves mathematically describing the complex interplay of the critical physicochemical and biological determinants involved in the disposition of chemicals. In this approach, the body is divided into a number of biologically relevant tissue compartments, arranged in an anatomically accurate manner, and defined with appropriate physiological characteristics. The extrapolation of pharmacokinetic behavior of chemicals from high dose to low dose for various exposure routes and species is possible with this approach because these models are developed by integrating quantitative information on the critical determinants of chemical disposition under a biological modeling framework. The principal application of PBPK models is in the prediction of tissue dosimetry of the toxic moiety (e.g., parent chemical, reactive metabolite, macromolecular adduct) of a chemical. Such an application has been demonstrated with dichloromethane, a liver and lung carcinogen in the B6C3F1 mouse. The PBPK model-based risk assessment approach estimated a cancer risk to people of 3.7 x 10(-8) for a lifetime inhalation exposure of 1 microgram/m3, which is lower by more than two orders of magnitude than that calculated by the U.S. Environmental Protection Agency using the linearized multistage model (for low-dose extrapolation) and body surface correction factor (for interspecies scaling). The capability of predicting the target tissue exposure to the toxic moiety in people with PBPK models should help reduce the uncertainty associated with the extrapolation procedures adopted in conventional dose-response assessment. PMID:8187697

  14. ODVBA: optimally-discriminative voxel-based analysis.

    PubMed

    Zhang, Tianhao; Davatzikos, Christos

    2011-08-01

    Gaussian smoothing of images prior to applying voxel-based statistics is an important step in voxel-based analysis and statistical parametric mapping (VBA-SPM) and is used to account for registration errors, to Gaussianize the data and to integrate imaging signals from a region around each voxel. However, it has also become a limitation of VBA-SPM based methods, since it is often chosen empirically and lacks spatial adaptivity to the shape and spatial extent of the region of interest, such as a region of atrophy or functional activity. In this paper, we propose a new framework, named optimally-discriminative voxel-based analysis (ODVBA), for determining the optimal spatially adaptive smoothing of images, followed by applying voxel-based group analysis. In ODVBA, nonnegative discriminative projection is applied regionally to get the direction that best discriminates between two groups, e.g., patients and controls; this direction is equivalent to local filtering by an optimal kernel whose coefficients define the optimally discriminative direction. By considering all the neighborhoods that contain a given voxel, we then compose this information to produce the statistic for each voxel. Finally, permutation tests are used to obtain a statistical parametric map of group differences. ODVBA has been evaluated using simulated data in which the ground truth is known and with data from an Alzheimer's disease (AD) study. The experimental results have shown that the proposed ODVBA can precisely describe the shape and location of structural abnormality.

  15. Reliability-based design optimization under stationary stochastic process loads

    NASA Astrophysics Data System (ADS)

    Hu, Zhen; Du, Xiaoping

    2016-08-01

    Time-dependent reliability-based design ensures the satisfaction of reliability requirements for a given period of time, but with a high computational cost. This work improves the computational efficiency by extending the sequential optimization and reliability analysis (SORA) method to time-dependent problems with both stationary stochastic process loads and random variables. The challenge of the extension is the identification of the most probable point (MPP) associated with time-dependent reliability targets. Since a direct relationship between the MPP and reliability target does not exist, this work defines the concept of equivalent MPP, which is identified by the extreme value analysis and the inverse saddlepoint approximation. With the equivalent MPP, the time-dependent reliability-based design optimization is decomposed into two decoupled loops: deterministic design optimization and reliability analysis, and both are performed sequentially. Two numerical examples are used to show the efficiency of the proposed method.

  16. Pragmatic fluid optimization in high-risk surgery patients: when pragmatism dilutes the benefits.

    PubMed

    Reuter, Daniel A

    2012-01-31

    There is increasing evidence that hemodynamic optimization by fluid loading, particularly when performed in the early phase of surgery, is beneficial in high-risk surgery patients: it leads to a reduction in postoperative complications and even to improved long-term outcome. However, it is also true that goal-directed strategies of fluid optimization focusing on cardiac output optimization have not been applied in the clinical routine of many institutions. The reasons are manifold: disbelief in the level of evidence and in the accuracy and practicability of the required monitoring systems, and economics. The FOCCUS trial examined perioperative fluid optimization with a very basic approach: a standardized volume load with 25 ml/kg crystalloids over 6 hours immediately prior to scheduled surgery in high-risk patients. The hypothesis was that this intervention would lead to a compensation of the preoperative fluid deficit caused by overnight fasting, and would result in improved perioperative fluid homeostasis with fewer postoperative complications and earlier hospital discharge. However, the primary study endpoints did not improve significantly. This observation points towards the facts that: firstly, the differentiation between interstitial fluid deficit caused by fasting and intravascular volume loss due to acute blood loss must be recognized in treatment strategies; secondly, the type of fluid replacement may play an important role; and thirdly, protocolized treatment strategies should always be tailored to suit the patient's individual needs in every individual clinical situation.

  17. Electrochemical model based charge optimization for lithium-ion batteries

    NASA Astrophysics Data System (ADS)

    Pramanik, Sourav; Anwar, Sohel

    2016-05-01

    In this paper, we propose the design of a novel optimal strategy for charging the lithium-ion battery based on an electrochemical battery model that is aimed at improved performance. A performance index that aims at minimizing the charging effort along with a minimum deviation from the rated maximum thresholds for cell temperature and charging current has been defined. The method proposed in this paper aims at achieving a faster charging rate while maintaining safe limits for various battery parameters. Safe operation of the battery is achieved by including the battery bulk temperature as a control component in the performance index, which is of critical importance for electric vehicles. Another important aspect of the performance objective proposed here is the efficiency of the algorithm that would allow higher charging rates without compromising the internal electrochemical kinetics of the battery, which would prevent abusive conditions, thereby improving the long term durability. A more realistic model, based on battery electrochemistry, has been used for the design of the optimal algorithm as opposed to the conventional equivalent circuit models. To solve the optimization problem, Pontryagin's principle has been used, which is very effective for constrained optimization problems with both state and input constraints. Simulation results show that the proposed optimal charging algorithm is capable of shortening the charging time of a lithium ion cell while maintaining the temperature constraint when compared with the standard constant current charging. The designed method also maintains the internal states within limits that can avoid abusive operating conditions.

  18. Surrogate based wind farm layout optimization using manifold mapping

    NASA Astrophysics Data System (ADS)

    Kaja Kamaludeen, Shaafi M.; van Zuijle, Alexander; Bijl, Hester

    2016-09-01

    The high computational cost associated with high fidelity wake models such as RANS or LES is the primary bottleneck to performing a direct high fidelity wind farm layout optimization (WFLO) using accurate CFD based wake models. Therefore, a surrogate based multi-fidelity WFLO methodology (SWFLO) is proposed. The surrogate model is built using a surrogate-based optimization (SBO) method referred to as manifold mapping (MM). As a verification, optimization of the spacing between two staggered wind turbines was performed using the proposed surrogate based methodology and the performance was compared with that of direct optimization using the high fidelity model. Significant reduction in computational cost was achieved using MM: a maximum computational cost reduction of 65%, while arriving at the same optima as that of direct high fidelity optimization. The similarity between the responses of the models, and the number and position of the mapping points, strongly influence the computational efficiency of the proposed method. As a proof of concept, a realistic WFLO of a small 7-turbine wind farm is performed using the proposed surrogate based methodology. Two variants of the Jensen wake model with different decay coefficients were used as the fine and coarse models. The proposed SWFLO method arrived at the same optima as that of the fine model with far fewer fine model simulations.

  19. Optimal Test Design with Rule-Based Item Generation

    ERIC Educational Resources Information Center

    Geerlings, Hanneke; van der Linden, Wim J.; Glas, Cees A. W.

    2013-01-01

    Optimal test-design methods are applied to rule-based item generation. Three different cases of automated test design are presented: (a) test assembly from a pool of pregenerated, calibrated items; (b) test generation on the fly from a pool of calibrated item families; and (c) test generation on the fly directly from calibrated features defining…

  20. Information fusion based optimal control for large civil aircraft system.

    PubMed

    Zhen, Ziyang; Jiang, Ju; Wang, Xinhua; Gao, Chen

    2015-03-01

    Wind disturbance has a great influence on landing security of Large Civil Aircraft. Through simulation research and engineering experience, it can be found that PID control is not good enough to solve the problem of restraining the wind disturbance. This paper focuses on anti-wind attitude control for Large Civil Aircraft in the landing phase. In order to improve the riding comfort and the flight security, an information fusion based optimal control strategy is presented to restrain the wind in the landing phase for maintaining attitudes and airspeed. Data from the Boeing 707 are used to establish a full-variable nonlinear model of a Large Civil Aircraft, from which two linear models are obtained, divided into longitudinal and lateral equations. Based on engineering experience, the longitudinal channel adopts PID control and C inner control to keep the longitudinal attitude constant, and applies an autothrottle system for keeping airspeed constant, while an information fusion based optimal regulator in the lateral control channel is designed to achieve lateral attitude holding. According to information fusion estimation, by fusing the hard constraint information of the system dynamic equations and the soft constraint information of the performance index function, an optimal estimate of the control sequence is derived. Based on this, an information fusion state regulator is deduced for discrete-time linear systems with disturbance. The simulation results of the nonlinear aircraft model indicate that the information fusion optimal control is better than traditional PID control, LQR control and LQR control with integral action in anti-wind disturbance performance in the landing phase.
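
    The paper benchmarks the information fusion regulator against LQR control. As a point of reference only, the sketch below implements a standard finite-horizon discrete-time LQR via the backward Riccati recursion; the system matrices are illustrative placeholders, not the Boeing 707 lateral dynamics.

```python
import numpy as np

def finite_horizon_lqr(A, B, Q, R, N):
    """Backward Riccati recursion; returns feedback gains K_0..K_{N-1}."""
    P = Q.copy()
    gains = []
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]

# Illustrative 2-state, 1-input discrete system (placeholder, not aircraft data).
A = np.array([[1.0, 0.1], [0.0, 0.95]])
B = np.array([[0.0], [0.1]])
gains = finite_horizon_lqr(A, B, Q=np.eye(2), R=np.array([[1.0]]), N=50)
x = np.array([1.0, 0.0])
for K in gains:
    u = -K @ x                      # state-feedback control
    x = A @ x + B @ u               # regulate the state toward the origin
```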

  1. The Integrated Medical Model - Optimizing In-flight Space Medical Systems to Reduce Crew Health Risk and Mission Impacts

    NASA Technical Reports Server (NTRS)

    Kerstman, Eric; Walton, Marlei; Minard, Charles; Saile, Lynn; Myers, Jerry; Butler, Doug; Lyengar, Sriram; Fitts, Mary; Johnson-Throop, Kathy

    2009-01-01

    The Integrated Medical Model (IMM) is a decision support tool used by medical system planners and designers as they prepare for exploration planning activities of the Constellation program (CxP). IMM provides an evidence-based approach to help optimize the allocation of in-flight medical resources for a specified level of risk within spacecraft operational constraints. Eighty medical conditions and associated resources are represented in IMM. Nine conditions are due to Space Adaptation Syndrome. The IMM helps answer fundamental medical mission planning questions such as "What medical conditions can be expected?", "What type and quantity of medical resources are most likely to be used?", and "What is the probability of crew death or evacuation due to medical events?" For a specified mission and crew profile, the IMM effectively characterizes the sequence of events that could potentially occur should a medical condition happen. The mathematical relationships among mission and crew attributes, medical conditions and incidence data, in-flight medical resources, and potential clinical and crew health end states are established to generate end state probabilities. A Monte Carlo computational method is used to determine the probable outcomes and requires up to 25,000 mission trials to reach convergence. For each mission trial, the pharmaceuticals and supplies required to diagnose and treat prevalent medical conditions are tracked and decremented. The uncertainty of patient response to treatment is bounded via a best-case, worst-case, untreated case algorithm. A Crew Health Index (CHI) metric, developed to account for functional impairment due to a medical condition, provides a quantified measure of risk and enables risk comparisons across mission scenarios. The use of historical in-flight medical data, terrestrial surrogate data as appropriate, and space medicine subject matter expertise has enabled the development of a probabilistic, stochastic decision support tool capable of
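
    A toy sketch of the Monte Carlo bookkeeping described above: each mission trial draws medical conditions from assumed incidence rates and decrements a supply count. The condition list, probabilities, and supply quantities are invented for illustration and bear no relation to IMM's actual data.

```python
import numpy as np

rng = np.random.default_rng(0)
conditions = {   # illustrative incidence per mission and supplies consumed
    "space_motion_sickness": {"p": 0.60, "meds": 2},
    "back_pain":             {"p": 0.30, "meds": 1},
    "urinary_retention":     {"p": 0.02, "meds": 4},
}
n_trials, supply = 25_000, 20
shortfalls = 0
for _ in range(n_trials):
    remaining = supply
    for c in conditions.values():
        if rng.random() < c["p"]:        # condition occurs this mission
            remaining -= c["meds"]       # decrement the supply kit
    shortfalls += remaining < 0
print("P(resource shortfall) ~", shortfalls / n_trials)
```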

  2. Parallel Harmony Search Based Distributed Energy Resource Optimization

    SciTech Connect

    Ceylan, Oguzhan; Liu, Guodong; Tomsovic, Kevin

    2015-01-01

    This paper presents a harmony search based parallel optimization algorithm to minimize voltage deviations in three phase unbalanced electrical distribution systems and to maximize active power outputs of distributed energy resources (DR). The main contribution is to reduce the adverse impacts on the voltage profile as photovoltaic (PV) output or electric vehicle (EV) charging changes throughout the day. The IEEE 123-bus distribution test system is modified by adding DRs and EVs under different load profiles. The simulation results show that by using parallel computing techniques, heuristic methods may be used as an alternative optimization tool in electrical power distribution systems operation.
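
    The three-phase unbalanced power-flow model is outside the scope of a short example, but the underlying harmony search metaheuristic is easy to sketch. Below is a minimal, generic harmony search minimizing a toy objective; the parameter names (HMS, HMCR, PAR, bandwidth) follow common usage and all values are illustrative.

```python
import numpy as np

def harmony_search(f, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.05,
                   iters=2000, seed=None):
    """Minimize f over box bounds with a basic harmony search."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    hm = rng.uniform(lo, hi, size=(hms, len(bounds)))       # harmony memory
    cost = np.apply_along_axis(f, 1, hm)
    for _ in range(iters):
        # Memory consideration: pick each variable from a random stored harmony.
        picks = hm[rng.integers(hms, size=len(bounds)), np.arange(len(bounds))]
        new = np.where(rng.random(len(bounds)) < hmcr,
                       picks, rng.uniform(lo, hi))           # else random value
        adjust = rng.random(len(bounds)) < par                # pitch adjustment
        new = np.clip(new + adjust * rng.uniform(-bw, bw, len(bounds)) * (hi - lo),
                      lo, hi)
        new_cost = f(new)
        worst = cost.argmax()
        if new_cost < cost[worst]:                            # replace worst harmony
            hm[worst], cost[worst] = new, new_cost
    return hm[cost.argmin()], cost.min()

best_x, best_f = harmony_search(lambda x: np.sum((x - 0.3) ** 2),
                                bounds=[(-1, 1)] * 4, seed=0)
```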

  3. Optimization of Polarimetric Contrast Enhancement Based on Fisher Criterion

    NASA Astrophysics Data System (ADS)

    Deng, Qiming; Chen, Jiong; Yang, Jian

    The optimization of polarimetric contrast enhancement (OPCE) is a widely used method for maximizing the received power ratio of a desired target versus an undesired target (clutter). In this letter, a new model of the OPCE is proposed based on the Fisher criterion. By introducing the well known two-class problem of linear discriminant analysis (LDA), the proposed model is to enlarge the normalized distance of mean value between the target and the clutter. In addition, a cross-iterative numerical method is proposed for solving the optimization with a quadratic constraint. Experimental results with the polarimetric SAR (POLSAR) data demonstrate the effectiveness of the proposed method.

  4. Improving Discrete-Sensitivity-Based Approach for Practical Design Optimization

    NASA Technical Reports Server (NTRS)

    Baysal, Oktay; Cordero, Yvette; Pandya, Mohagna J.

    1997-01-01

    In developing automated methodologies for simulation-based optimal shape design, their accuracy, efficiency and practicality are the defining factors of their success. To that end, four recent improvements to the building blocks of such a methodology, intended for more practical design optimization, have been reported. First, in addition to a polynomial-based parameterization, a partial differential equation (PDE) based parameterization was shown to be a practical tool for a number of reasons. Second, an alternative has been incorporated for one of the tedious phases of developing such a methodology, namely, the automatic differentiation of the computer code for the flow analysis in order to generate the sensitivities. Third, by extending the methodology to thin-layer Navier-Stokes (TLNS) based flow simulations, more accurate flow physics was made available. However, the computer storage requirement for a shape optimization of a practical configuration with the high-fidelity simulations (TLNS and dense-grid based simulations) required substantial computational resources. Therefore, the final improvement reported herein responded to this point by including the alternating-direction-implicit (ADI) based system solver as an alternative to the preconditioned biconjugate gradient (PbCG) and other direct solvers.

  5. Reliability-Based Design Optimization of a Composite Airframe Component

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.

    2009-01-01

    A stochastic design optimization methodology (SDO) has been developed to design components of an airframe structure that can be made of metallic and composite materials. The design is obtained as a function of the risk level, or reliability, p. The design method treats uncertainties in load, strength, and material properties as distribution functions, which are defined with mean values and standard deviations. A design constraint or a failure mode is specified as a function of reliability p. Solution to stochastic optimization yields the weight of a structure as a function of reliability p. Optimum weight versus reliability p traced out an inverted-S-shaped graph. The center of the inverted-S graph corresponded to 50 percent (p = 0.5) probability of success. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure that corresponds to unity for reliability p (or p = 1). Weight can be reduced to a small value for the most failure-prone design with a reliability that approaches zero (p = 0). Reliability can be changed for different components of an airframe structure. For example, the landing gear can be designed for a very high reliability, whereas it can be reduced to a small extent for a raked wingtip. The SDO capability is obtained by combining three codes: (1) The MSC/Nastran code was the deterministic analysis tool, (2) The fast probabilistic integrator, or the FPI module of the NESSUS software, was the probabilistic calculator, and (3) NASA Glenn Research Center's optimization testbed CometBoards became the optimizer. The SDO capability requires a finite element structural model, a material model, a load model, and a design model. The stochastic optimization concept is illustrated considering an academic example and a real-life raked wingtip structure of the Boeing 767-400 extended range airliner made of metallic and composite materials.

  6. 12 CFR 652.70 - Risk-based capital level.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 12 Banks and Banking 6 2011-01-01 2011-01-01 false Risk-based capital level. 652.70 Section 652.70 Banks and Banking FARM CREDIT ADMINISTRATION FARM CREDIT SYSTEM FEDERAL AGRICULTURAL MORTGAGE... risk-based capital level is the sum of the following amounts: (a) Credit and interest rate risk....

  7. 12 CFR 932.3 - Risk-based capital requirement.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 12 Banks and Banking 7 2011-01-01 2011-01-01 false Risk-based capital requirement. 932.3 Section 932.3 Banks and Banking FEDERAL HOUSING FINANCE BOARD FEDERAL HOME LOAN BANK RISK MANAGEMENT AND CAPITAL STANDARDS FEDERAL HOME LOAN BANK CAPITAL REQUIREMENTS § 932.3 Risk-based capital requirement....

  8. 12 CFR 932.3 - Risk-based capital requirement.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 12 Banks and Banking 7 2010-01-01 2010-01-01 false Risk-based capital requirement. 932.3 Section 932.3 Banks and Banking FEDERAL HOUSING FINANCE BOARD FEDERAL HOME LOAN BANK RISK MANAGEMENT AND CAPITAL STANDARDS FEDERAL HOME LOAN BANK CAPITAL REQUIREMENTS § 932.3 Risk-based capital requirement....

  9. 12 CFR 932.3 - Risk-based capital requirement.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 12 Banks and Banking 8 2014-01-01 2014-01-01 false Risk-based capital requirement. 932.3 Section 932.3 Banks and Banking FEDERAL HOUSING FINANCE BOARD FEDERAL HOME LOAN BANK RISK MANAGEMENT AND CAPITAL STANDARDS FEDERAL HOME LOAN BANK CAPITAL REQUIREMENTS § 932.3 Risk-based capital requirement....

  10. Bare-bones teaching-learning-based optimization.

    PubMed

    Zou, Feng; Wang, Lei; Hei, Xinhong; Chen, Debao; Jiang, Qiaoyong; Li, Hongye

    2014-01-01

    The teaching-learning-based optimization (TLBO) algorithm, which simulates the teaching-learning process of the classroom, is one of the recently proposed swarm intelligence (SI) algorithms. In this paper, a new TLBO variant called bare-bones teaching-learning-based optimization (BBTLBO) is presented to solve global optimization problems. In this method, each learner in the teacher phase employs an interactive learning strategy, which is a hybridization of the teacher-phase learning strategy of the standard TLBO and Gaussian sampling learning based on neighborhood search, and each learner in the learner phase employs either the learner-phase strategy of the standard TLBO or the new neighborhood search strategy. To verify the performance of our approaches, 20 benchmark functions and two real-world problems are utilized. From the conducted experiments it can be observed that BBTLBO performs significantly better than, or at least comparably to, TLBO and some existing bare-bones algorithms. The results indicate that the proposed algorithm is competitive with some other optimization algorithms. PMID:25013844

  11. Bare-bones teaching-learning-based optimization.

    PubMed

    Zou, Feng; Wang, Lei; Hei, Xinhong; Chen, Debao; Jiang, Qiaoyong; Li, Hongye

    2014-01-01

    The teaching-learning-based optimization (TLBO) algorithm, which simulates the teaching-learning process of the classroom, is one of the recently proposed swarm intelligence (SI) algorithms. In this paper, a new TLBO variant called bare-bones teaching-learning-based optimization (BBTLBO) is presented to solve global optimization problems. In this method, each learner in the teacher phase employs an interactive learning strategy, which is a hybridization of the teacher-phase learning strategy of the standard TLBO and Gaussian sampling learning based on neighborhood search, and each learner in the learner phase employs either the learner-phase strategy of the standard TLBO or the new neighborhood search strategy. To verify the performance of our approaches, 20 benchmark functions and two real-world problems are utilized. From the conducted experiments it can be observed that BBTLBO performs significantly better than, or at least comparably to, TLBO and some existing bare-bones algorithms. The results indicate that the proposed algorithm is competitive with some other optimization algorithms.

  12. Bare-Bones Teaching-Learning-Based Optimization

    PubMed Central

    Zou, Feng; Wang, Lei; Hei, Xinhong; Chen, Debao; Jiang, Qiaoyong; Li, Hongye

    2014-01-01

    The teaching-learning-based optimization (TLBO) algorithm, which simulates the teaching-learning process of the classroom, is one of the recently proposed swarm intelligence (SI) algorithms. In this paper, a new TLBO variant called bare-bones teaching-learning-based optimization (BBTLBO) is presented to solve global optimization problems. In this method, each learner in the teacher phase employs an interactive learning strategy, which is a hybridization of the teacher-phase learning strategy of the standard TLBO and Gaussian sampling learning based on neighborhood search, and each learner in the learner phase employs either the learner-phase strategy of the standard TLBO or the new neighborhood search strategy. To verify the performance of our approaches, 20 benchmark functions and two real-world problems are utilized. From the conducted experiments it can be observed that BBTLBO performs significantly better than, or at least comparably to, TLBO and some existing bare-bones algorithms. The results indicate that the proposed algorithm is competitive with some other optimization algorithms. PMID:25013844
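
    For readers unfamiliar with TLBO, the sketch below shows the standard teacher phase together with a Gaussian "bare-bones" sampling helper of the kind the abstract describes hybridizing; the exact BBTLBO interaction rules are in the paper and are not reproduced here.

```python
import numpy as np

def tlbo_teacher_phase(pop, fitness, f, rng):
    """Standard TLBO teacher phase: move learners toward the teacher (minimization)."""
    teacher = pop[fitness.argmin()]
    mean = pop.mean(axis=0)
    tf = rng.integers(1, 3)                      # teaching factor in {1, 2}
    for i in range(len(pop)):
        trial = pop[i] + rng.random(pop.shape[1]) * (teacher - tf * mean)
        if (ft := f(trial)) < fitness[i]:        # greedy acceptance
            pop[i], fitness[i] = trial, ft
    return pop, fitness

def bare_bones_sample(x_i, teacher, rng):
    """Gaussian 'bare-bones' sampling centred between a learner and the teacher."""
    mu = (x_i + teacher) / 2.0
    sigma = np.abs(x_i - teacher)
    return rng.normal(mu, sigma)

# Toy usage on a sphere function.
rng = np.random.default_rng(0)
f = lambda x: np.sum(x ** 2)
pop = rng.uniform(-5, 5, size=(30, 4))
fitness = np.apply_along_axis(f, 1, pop)
for _ in range(100):
    pop, fitness = tlbo_teacher_phase(pop, fitness, f, rng)
```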

  13. Optimal high speed CMOS inverter design using craziness based Particle Swarm Optimization Algorithm

    NASA Astrophysics Data System (ADS)

    De, Bishnu P.; Kar, Rajib; Mandal, Durbadal; Ghoshal, Sakti P.

    2015-07-01

    The inverter is the most fundamental logic gate that performs a Boolean operation on a single input variable. In this paper, an optimal design of a CMOS inverter using an improved version of the particle swarm optimization technique called Craziness based Particle Swarm Optimization (CRPSO) is proposed. CRPSO is a very simple, easy to implement and computationally efficient algorithm with two main advantages: it has fast, near-global convergence, and it uses nearly robust control parameters. The performance of PSO depends on its control parameters and may be influenced by premature convergence and stagnation problems. To overcome these problems the PSO algorithm has been modified to CRPSO in this paper and is used for CMOS inverter design. In birds' flocking or fish schooling, a bird or a fish often changes direction suddenly. In the proposed technique, the sudden change of velocity is modelled by a direction reversal factor associated with the previous velocity and a "craziness" velocity factor associated with another direction reversal factor. The second condition is introduced depending on a predefined craziness probability to maintain the diversity of particles. The performance of CRPSO is compared with the real-coded genetic algorithm (RGA) and conventional PSO reported in the recent literature. CRPSO based design results are also compared with the PSPICE based results. The simulation results show that the CRPSO is superior to the other algorithms for the examples considered and can be efficiently used for the CMOS inverter design.
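
    A plausible sketch of a craziness-augmented PSO velocity update following the description above (direction-reversal factor on the previous velocity plus an occasional "craziness" kick to preserve diversity). The coefficients and the exact functional form are illustrative reconstructions, not taken verbatim from the paper.

```python
import numpy as np

def crpso_velocity(v, x, pbest, gbest, rng,
                   c1=2.05, c2=2.05, v_craziness=1e-4, p_craziness=0.3):
    """One craziness-based PSO velocity update (illustrative form)."""
    r1, r2, r3 = rng.random(3)
    sign = lambda r: 1.0 if r < 0.5 else -1.0
    # Direction-reversal factor applied to the previous-velocity (inertia) term.
    v_new = (r2 * sign(r3) * v
             + (1 - r2) * c1 * r1 * (pbest - x)
             + (1 - r2) * c2 * (1 - r1) * (gbest - x))
    # Craziness: occasional random kick, triggered with a predefined probability.
    if rng.random() < p_craziness:
        v_new = v_new + sign(rng.random()) * v_craziness
    return v_new
```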

  14. Vision-based stereo ranging as an optimal control problem

    NASA Technical Reports Server (NTRS)

    Menon, P. K. A.; Sridhar, B.; Chatterji, G. B.

    1992-01-01

    The recent interest in the use of machine vision for flight vehicle guidance is motivated by the need to automate the nap-of-the-earth flight regime of helicopters. The vision-based stereo ranging problem is cast as an optimal control problem in this paper. A quadratic performance index consisting of the integral of the error between observed image irradiances and those predicted by a Pade approximation of the correspondence hypothesis is then used to define an optimization problem. The necessary conditions for optimality yield a set of linear two-point boundary-value problems. These two-point boundary-value problems are solved in feedback form using a version of the backward sweep method. Application of the ranging algorithm is illustrated using a laboratory image pair.

  15. A danger-theory-based immune network optimization algorithm.

    PubMed

    Zhang, Ruirui; Li, Tao; Xiao, Xin; Shi, Yuanquan

    2013-01-01

    Existing artificial immune optimization algorithms reflect a number of shortcomings, such as premature convergence and poor local search ability. This paper proposes a danger-theory-based immune network optimization algorithm, named dt-aiNet. The danger theory emphasizes that danger signals generated from changes of environments will guide different levels of immune responses, and the areas around danger signals are called danger zones. By defining the danger zone to calculate danger signals for each antibody, the algorithm adjusts antibodies' concentrations through its own danger signals and then triggers immune responses of self-regulation. So the population diversity can be maintained. Experimental results show that the algorithm has more advantages in the solution quality and diversity of the population. Compared with influential optimization algorithms, CLONALG, opt-aiNet, and dopt-aiNet, the algorithm has smaller error values and higher success rates and can find solutions to meet the accuracies within the specified function evaluation times.

  16. Voronoi Diagram Based Optimization of Dynamic Reactive Power Sources

    SciTech Connect

    Huang, Weihong; Sun, Kai; Qi, Junjian; Xu, Yan

    2015-01-01

    Dynamic var sources can effectively mitigate fault-induced delayed voltage recovery (FIDVR) issues or even voltage collapse. This paper proposes a new approach to optimization of the sizes of dynamic var sources at candidate locations by a Voronoi diagram based algorithm. It first disperses sample points of potential solutions in a searching space, evaluates a cost function at each point by barycentric interpolation for the subspaces around the point, and then constructs a Voronoi diagram about cost function values over the entire space. Accordingly, the final optimal solution can be obtained. Case studies on the WSCC 9-bus system and NPCC 140-bus system have validated that the new approach can quickly identify the boundary of feasible solutions in searching space and converge to the global optimal solution.

  17. Similarity-based global optimization of buildings in urban scene

    NASA Astrophysics Data System (ADS)

    Zhu, Quansheng; Zhang, Jing; Jiang, Wanshou

    2013-10-01

    In this paper, an approach for the similarity-based global optimization of buildings in urban scene is presented. In the past, most researches concentrated on single building reconstruction, making it difficult to reconstruct reliable models from noisy or incomplete point clouds. To obtain a better result, a new trend is to utilize the similarity among the buildings. Therefore, a new similarity detection and global optimization strategy is adopted to modify local-fitting geometric errors. Firstly, the hierarchical structure that consists of geometric, topological and semantic features is constructed to represent complex roof models. Secondly, similar roof models can be detected by combining primitive structure and connection similarities. At last, the global optimization strategy is applied to preserve the consistency and precision of similar roof structures. Moreover, non-local consolidation is adapted to detect small roof parts. The experiments reveal that the proposed method can obtain convincing roof models and promote the reconstruction quality of 3D buildings in urban scene.

  18. Level set based structural topology optimization for minimizing frequency response

    NASA Astrophysics Data System (ADS)

    Shu, Lei; Wang, Michael Yu; Fang, Zongde; Ma, Zhengdong; Wei, Peng

    2011-11-01

    For the purpose of structure vibration reduction, a structural topology optimization for minimizing frequency response is proposed based on the level set method. The objective of the present study is to minimize the frequency response at the specified points or surfaces on the structure with an excitation frequency or a frequency range, subject to the given amount of the material over the admissible design domain. The sensitivity analysis with respect to the structural boundaries is carried out, while the Extended finite element method (X-FEM) is employed for solving the state equation and the adjoint equation. The optimal structure with smooth boundaries is obtained by the level set evolution with advection velocity, derived from the sensitivity analysis and the optimization algorithm. A number of numerical examples, in the frameworks of two-dimension (2D) and three-dimension (3D), are presented to demonstrate the feasibility and effectiveness of the proposed approach.

  19. Reentry trajectory optimization based on a multistage pseudospectral method.

    PubMed

    Zhao, Jiang; Zhou, Rui; Jin, Xuelian

    2014-01-01

    Of the many direct numerical methods, the pseudospectral method serves as an effective tool to solve the reentry trajectory optimization for hypersonic vehicles. However, the traditional pseudospectral method is time-consuming due to the large number of discretization points. For the purpose of autonomous and adaptive reentry guidance, the research herein presents a multistage trajectory control strategy based on the pseudospectral method, capable of dealing with unexpected situations in reentry flight. The strategy typically includes two subproblems: trajectory estimation and trajectory refining. In each processing stage, the proposed method generates a specified range of trajectory with the transition of the flight state. The full glide trajectory consists of several optimal trajectory sequences. The newly focused geographic constraints in actual flight are discussed thereafter. Numerical examples of free-space flight, target transition flight, and threat avoidance flight are used to show the feasible application of the multistage pseudospectral method in reentry trajectory optimization.

  20. Reentry trajectory optimization based on a multistage pseudospectral method.

    PubMed

    Zhao, Jiang; Zhou, Rui; Jin, Xuelian

    2014-01-01

    Of the many direct numerical methods, the pseudospectral method serves as an effective tool to solve the reentry trajectory optimization for hypersonic vehicles. However, the traditional pseudospectral method is time-consuming due to the large number of discretization points. For the purpose of autonomous and adaptive reentry guidance, the research herein presents a multistage trajectory control strategy based on the pseudospectral method, capable of dealing with unexpected situations in reentry flight. The strategy typically includes two subproblems: trajectory estimation and trajectory refining. In each processing stage, the proposed method generates a specified range of trajectory with the transition of the flight state. The full glide trajectory consists of several optimal trajectory sequences. The newly focused geographic constraints in actual flight are discussed thereafter. Numerical examples of free-space flight, target transition flight, and threat avoidance flight are used to show the feasible application of the multistage pseudospectral method in reentry trajectory optimization. PMID:24574929

  1. Reentry Trajectory Optimization Based on a Multistage Pseudospectral Method

    PubMed Central

    Zhou, Rui; Jin, Xuelian

    2014-01-01

    Of the many direct numerical methods, the pseudospectral method serves as an effective tool to solve the reentry trajectory optimization for hypersonic vehicles. However, the traditional pseudospectral method is time-consuming due to the large number of discretization points. For the purpose of autonomous and adaptive reentry guidance, the research herein presents a multistage trajectory control strategy based on the pseudospectral method, capable of dealing with unexpected situations in reentry flight. The strategy typically includes two subproblems: trajectory estimation and trajectory refining. In each processing stage, the proposed method generates a specified range of trajectory with the transition of the flight state. The full glide trajectory consists of several optimal trajectory sequences. The newly focused geographic constraints in actual flight are discussed thereafter. Numerical examples of free-space flight, target transition flight, and threat avoidance flight are used to show the feasible application of the multistage pseudospectral method in reentry trajectory optimization. PMID:24574929

  2. CACONET: Ant Colony Optimization (ACO) Based Clustering Algorithm for VANET.

    PubMed

    Aadil, Farhan; Bajwa, Khalid Bashir; Khan, Salabat; Chaudary, Nadeem Majeed; Akram, Adeel

    2016-01-01

    A vehicular ad hoc network (VANET) is a wirelessly connected network of vehicular nodes. A number of techniques, such as message ferrying, data aggregation, and vehicular node clustering, aim to improve communication efficiency in VANETs. Cluster heads (CHs), selected in the process of clustering, manage inter-cluster and intra-cluster communication. The lifetime of clusters and the number of CHs determine the efficiency of the network. In this paper, a clustering algorithm based on Ant Colony Optimization (ACO) for VANETs (CACONET) is proposed. CACONET forms optimized clusters for robust communication. CACONET is compared empirically with state-of-the-art baseline techniques like Multi-Objective Particle Swarm Optimization (MOPSO) and Comprehensive Learning Particle Swarm Optimization (CLPSO). Experiments varying the grid size of the network, the transmission range of nodes, and the number of nodes in the network were performed to evaluate the comparative effectiveness of these algorithms. For optimized clustering, the parameters considered are the transmission range, direction and speed of the nodes. The results indicate that CACONET significantly outperforms MOPSO and CLPSO. PMID:27149517

  3. Nozzle Mounting Method Optimization Based on Robot Kinematic Analysis

    NASA Astrophysics Data System (ADS)

    Chen, Chaoyue; Liao, Hanlin; Montavon, Ghislain; Deng, Sihao

    2016-08-01

    Nowadays, the application of industrial robots in thermal spray is gaining more and more importance. A desired coating quality depends on factors such as a balanced robot performance, a uniform scanning trajectory and stable parameters (e.g. nozzle speed, scanning step, spray angle, standoff distance). These factors also affect the mass and heat transfer as well as the coating formation. Thus, the kinematic optimization of all these aspects plays a key role in order to obtain an optimal coating quality. In this study, the robot performance was optimized from the aspect of nozzle mounting on the robot. An optimized nozzle mounting for a type F4 nozzle was designed based on the conventional mounting method, from the point of view of robot kinematics, and validated on a virtual robot. Robot kinematic parameters were obtained from the simulation by offline programming software and analyzed by statistical methods. The energy consumptions of different nozzle mounting methods were also compared. The results showed that it was possible to reasonably assign the amount of robot motion to each axis during the process, so achieving a constant nozzle speed. Thus, it is possible to optimize robot performance and to economize robot energy.

  4. Optimization Model for Web Based Multimodal Interactive Simulations

    PubMed Central

    Halic, Tansel; Ahn, Woojin; De, Suvranu

    2015-01-01

    This paper presents a technique for optimizing the performance of web based multimodal interactive simulations. For such applications where visual quality and the performance of simulations directly influence user experience, overloading of hardware resources may result in unsatisfactory reduction in the quality of the simulation and user satisfaction. However, optimization of simulation performance on individual hardware platforms is not practical. Hence, we present a mixed integer programming model to optimize the performance of graphical rendering and simulation performance while satisfying application specific constraints. Our approach includes three distinct phases: identification, optimization and update. In the identification phase, the computing and rendering capabilities of the client device are evaluated using an exploratory proxy code. This data is utilized in conjunction with user specified design requirements in the optimization phase to ensure best possible computational resource allocation. The optimum solution is used for rendering (e.g. texture size, canvas resolution) and simulation parameters (e.g. simulation domain) in the update phase. Test results are presented on multiple hardware platforms with diverse computing and graphics capabilities to demonstrate the effectiveness of our approach. PMID:26085713
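
    A toy version of the kind of mixed integer program described: choose integer rendering/simulation levels to maximize a quality score under a compute budget. The decision variables, coefficients, and budget are invented for illustration; scipy's milp solver is used only as a convenient off-the-shelf MIP backend, not the formulation from the paper.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Decision variables: x = [texture_level, resolution_level], both integer in [0, 3].
quality = np.array([3.0, 2.0])          # quality gained per level (illustrative)
cost = np.array([[2.0, 3.0]])           # compute cost per level (illustrative)
budget = 7.0

res = milp(c=-quality,                  # milp minimizes, so negate the quality score
           constraints=LinearConstraint(cost, ub=budget),
           integrality=np.ones(2),      # both variables integer
           bounds=Bounds(0, 3))
texture_level, resolution_level = res.x.round().astype(int)
print(texture_level, resolution_level)
```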

  5. Component-based integration of chemistry and optimization software.

    PubMed

    Kenny, Joseph P; Benson, Steven J; Alexeev, Yuri; Sarich, Jason; Janssen, Curtis L; McInnes, Lois Curfman; Krishnan, Manojkumar; Nieplocha, Jarek; Jurrus, Elizabeth; Fahlstrom, Carl; Windus, Theresa L

    2004-11-15

    Typical scientific software designs make rigid assumptions regarding programming language and data structures, frustrating software interoperability and scientific collaboration. Component-based software engineering is an emerging approach to managing the increasing complexity of scientific software. Component technology facilitates code interoperability and reuse. Through the adoption of methodology and tools developed by the Common Component Architecture Forum, we have developed a component architecture for molecular structure optimization. Using the NWChem and Massively Parallel Quantum Chemistry packages, we have produced chemistry components that provide capacity for energy and energy derivative evaluation. We have constructed geometry optimization applications by integrating the Toolkit for Advanced Optimization, Portable Extensible Toolkit for Scientific Computation, and Global Arrays packages, which provide optimization and linear algebra capabilities. We present a brief overview of the component development process and a description of abstract interfaces for chemical optimizations. The components conforming to these abstract interfaces allow the construction of applications using different chemistry and mathematics packages interchangeably. Initial numerical results for the component software demonstrate good performance, and highlight potential research enabled by this platform.

  6. CACONET: Ant Colony Optimization (ACO) Based Clustering Algorithm for VANET

    PubMed Central

    Bajwa, Khalid Bashir; Khan, Salabat; Chaudary, Nadeem Majeed; Akram, Adeel

    2016-01-01

    A vehicular ad hoc network (VANET) is a wirelessly connected network of vehicular nodes. A number of techniques, such as message ferrying, data aggregation, and vehicular node clustering, aim to improve communication efficiency in VANETs. Cluster heads (CHs), selected in the process of clustering, manage inter-cluster and intra-cluster communication. The lifetime of clusters and the number of CHs determine the efficiency of the network. In this paper, a clustering algorithm based on Ant Colony Optimization (ACO) for VANETs (CACONET) is proposed. CACONET forms optimized clusters for robust communication. CACONET is compared empirically with state-of-the-art baseline techniques like Multi-Objective Particle Swarm Optimization (MOPSO) and Comprehensive Learning Particle Swarm Optimization (CLPSO). Experiments varying the grid size of the network, the transmission range of nodes, and the number of nodes in the network were performed to evaluate the comparative effectiveness of these algorithms. For optimized clustering, the parameters considered are the transmission range, direction and speed of the nodes. The results indicate that CACONET significantly outperforms MOPSO and CLPSO. PMID:27149517

  7. Modification of species-based differential evolution for multimodal optimization

    NASA Astrophysics Data System (ADS)

    Idrus, Said Iskandar Al; Syahputra, Hermawan; Firdaus, Muliawan

    2015-12-01

    Optimization currently plays an important role in various fields, among them operational research, industry, finance and management. An optimization problem is the problem of maximizing or minimizing a function of one or many variables, which may be unimodal or multimodal. Differential Evolution (DE) is a random search technique using vectors as candidate solutions in the search for the optimum. To localize all local maxima and minima of a multimodal function, the domain can be divided into several fitness regions using a niching method. The species-based niching method is one such method, building sub-populations or species over the domain of the function. This paper describes a modification of the previous species-based method to reduce the computational complexity and run more efficiently. The results on the test functions show that the species-based modification is able to locate all the local optima in a single run of the program.
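
    A minimal sketch of the two ingredients involved: species-seed selection within a niche radius, and a standard DE/rand/1/bin step restricted to one species. The radius, control parameters, toy objective, and greedy acceptance rule are illustrative; the paper's specific complexity-reducing modification is not reproduced.

```python
import numpy as np

def species_seeds(pop, fitness, radius):
    """Pick species seeds: fittest individuals that are at least `radius` apart."""
    seeds = []
    for i in np.argsort(fitness):                     # ascending (minimization)
        if all(np.linalg.norm(pop[i] - pop[s]) > radius for s in seeds):
            seeds.append(i)
    return seeds

def de_step_within_species(pop, fitness, members, f, F=0.5, CR=0.9, rng=None):
    """One DE/rand/1/bin step restricted to one species' members."""
    rng = np.random.default_rng(rng)
    for i in members:
        a, b, c = rng.choice(members, size=3, replace=len(members) < 3)
        mutant = pop[a] + F * (pop[b] - pop[c])
        cross = rng.random(pop.shape[1]) < CR
        trial = np.where(cross, mutant, pop[i])
        if (ft := f(trial)) < fitness[i]:             # greedy acceptance
            pop[i], fitness[i] = trial, ft

# Toy usage on a multimodal objective.
rng = np.random.default_rng(0)
f = lambda x: np.sum(x ** 2 * (1 + np.sin(3 * x) ** 2))
pop = rng.uniform(-5, 5, size=(40, 2))
fitness = np.apply_along_axis(f, 1, pop)
for _ in range(100):
    for seed in species_seeds(pop, fitness, radius=1.5):
        members = [i for i in range(len(pop))
                   if np.linalg.norm(pop[i] - pop[seed]) <= 1.5]
        de_step_within_species(pop, fitness, members, f, rng=rng)
```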

  8. Hybrid Biogeography-Based Optimization for Integer Programming

    PubMed Central

    Wang, Zhi-Cheng

    2014-01-01

    Biogeography-based optimization (BBO) is a relatively new bioinspired heuristic for global optimization based on the mathematical models of biogeography. By investigating the applicability and performance of BBO for integer programming, we find that the original BBO algorithm does not perform well on a set of benchmark integer programming problems. Thus we modify the mutation operator and/or the neighborhood structure of the algorithm, resulting in three new BBO-based methods, named BlendBBO, BBO_DE, and LBBO_LDE, respectively. Computational experiments show that these methods are competitive approaches to solve integer programming problems, and the LBBO_LDE shows the best performance on the benchmark problems. PMID:25003142
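
    For orientation, the sketch below shows one generation of basic BBO migration and mutation with integer variables and greedy replacement; the rank-based migration rates, mutation probability, and toy objective are illustrative, and the modified operators of BlendBBO, BBO_DE, and LBBO_LDE are not reproduced.

```python
import numpy as np

def bbo_step(pop, fitness, f, lo, hi, p_mut=0.05, rng=None):
    """One basic BBO generation for integer variables (minimization)."""
    rng = np.random.default_rng(rng)
    n, d = pop.shape
    ranks = np.argsort(np.argsort(fitness))           # 0 = best habitat
    mu = 1.0 - ranks / (n - 1)                        # emigration rates
    lam = 1.0 - mu                                    # immigration rates
    new_pop = pop.copy()
    for i in range(n):
        for j in range(d):
            if rng.random() < lam[i]:                 # immigrate this variable
                src = rng.choice(n, p=mu / mu.sum())  # emigrating habitat
                new_pop[i, j] = pop[src, j]
            if rng.random() < p_mut:                  # mutation keeps variables integer
                new_pop[i, j] = rng.integers(lo, hi + 1)
    new_pop = np.clip(new_pop, lo, hi)
    new_fit = np.apply_along_axis(f, 1, new_pop)
    keep = new_fit < fitness                          # greedy replacement
    pop[keep], fitness[keep] = new_pop[keep], new_fit[keep]
    return pop, fitness

# Toy integer-programming usage.
rng = np.random.default_rng(0)
f = lambda x: np.sum(np.abs(x - 3))
pop = rng.integers(-10, 11, size=(30, 5))
fitness = np.apply_along_axis(f, 1, pop)
for _ in range(200):
    pop, fitness = bbo_step(pop, fitness, f, lo=-10, hi=10, rng=rng)
```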

  9. SADA: Ecological Risk Based Decision Support System for Selective Remediation

    EPA Science Inventory

    Spatial Analysis and Decision Assistance (SADA) is freeware that implements terrestrial ecological risk assessment and yields a selective remediation design using its integral geographical information system, based on ecological and risk assessment inputs. Selective remediation ...

  10. Data-based robust multiobjective optimization of interconnected processes: energy efficiency case study in papermaking.

    PubMed

    Afshar, Puya; Brown, Martin; Maciejowski, Jan; Wang, Hong

    2011-12-01

    Reducing energy consumption is a major challenge for "energy-intensive" industries such as papermaking. A commercially viable energy saving solution is to employ data-based optimization techniques to obtain a set of "optimized" operational settings that satisfy certain performance indices. The difficulties of this are: 1) the problems of this type are inherently multicriteria in the sense that improving one performance index might result in compromising the other important measures; 2) practical systems often exhibit unknown complex dynamics and several interconnections which make the modeling task difficult; and 3) as the models are acquired from the existing historical data, they are valid only locally and extrapolations incorporate risk of increasing process variability. To overcome these difficulties, this paper presents a new decision support system for robust multiobjective optimization of interconnected processes. The plant is first divided into serially connected units to model the process, product quality, energy consumption, and corresponding uncertainty measures. Then multiobjective gradient descent algorithm is used to solve the problem in line with user's preference information. Finally, the optimization results are visualized for analysis and decision making. In practice, if further iterations of the optimization algorithm are considered, validity of the local models must be checked prior to proceeding to further iterations. The method is implemented by a MATLAB-based interactive tool DataExplorer supporting a range of data analysis, modeling, and multiobjective optimization techniques. The proposed approach was tested in two U.K.-based commercial paper mills where the aim was reducing steam consumption and increasing productivity while maintaining the product quality by optimization of vacuum pressures in forming and press sections. The experimental results demonstrate the effectiveness of the method.

  11. An evolutionary based Bayesian design optimization approach under incomplete information

    NASA Astrophysics Data System (ADS)

    Srivastava, Rupesh; Deb, Kalyanmoy

    2013-02-01

    Design optimization in the absence of complete information about uncertain quantities has been recently gaining consideration, as expensive repetitive computation tasks are becoming tractable due to the invention of faster and parallel computers. This work uses Bayesian inference to quantify design reliability when only sample measurements of the uncertain quantities are available. A generalized Bayesian reliability based design optimization algorithm has been proposed and implemented for numerical as well as engineering design problems. The approach uses an evolutionary algorithm (EA) to obtain a trade-off front between design objectives and reliability. The Bayesian approach provides a well-defined link between the amount of available information and the reliability through a confidence measure, and the EA acts as an efficient optimizer for a discrete and multi-dimensional objective space. Additionally, a GPU-based parallelization study shows a computational speed-up of close to 100 times in a simulated scenario wherein the constraint qualification checks may be time consuming and could render a sequential implementation impractical for large sample sets. These results show promise for the use of a parallel implementation of EAs in handling design optimization problems under uncertainties.

  12. Mars Mission Optimization Based on Collocation of Resources

    NASA Technical Reports Server (NTRS)

    Chamitoff, G. E.; James, G. H.; Barker, D. C.; Dershowitz, A. L.

    2003-01-01

    This paper presents a powerful approach for analyzing Martian data and for optimizing mission site selection based on resource collocation. This approach is implemented in a program called PROMT (Planetary Resource Optimization and Mapping Tool), which provides a wide range of analysis and display functions that can be applied to raw data or imagery. Thresholds, contours, custom algorithms, and graphical editing are some of the various methods that can be used to process data. Output maps can be created to identify surface regions on Mars that meet any specific criteria. The use of this tool for analyzing data, generating maps, and collocating features is demonstrated using data from the Mars Global Surveyor and the Odyssey spacecraft. The overall mission design objective is to maximize a combination of scientific return and self-sufficiency based on utilization of local materials. Landing site optimization involves maximizing accessibility to collocated science and resource features within a given mission radius. Mission types are categorized according to duration, energy resources, and in-situ resource utilization. Optimization results are shown for a number of mission scenarios.

  13. The optimal community detection of software based on complex networks

    NASA Astrophysics Data System (ADS)

    Huang, Guoyan; Zhang, Peng; Zhang, Bing; Yin, Tengteng; Ren, Jiadong

    2016-02-01

    The community structure is important for software in terms of understanding the design patterns and controlling the development and maintenance process. In order to detect the optimal community structure in the software network, a method called Optimal Partition Software Network (OPSN) is proposed based on the dependency relationships among the software functions. First, by analyzing the information of multiple execution traces of one software system, we construct the Software Execution Dependency Network (SEDN). Second, based on the relationships among the function nodes in the network, we define Fault Accumulation (FA) to measure the importance of each function node and sort the nodes by this measure. Third, we select the top K (K=1,2,…) nodes as the cores of the primal communities (each containing only one core node). By comparing the dependency relationships between each node and the K communities, we put the node into the existing community with which it has the closest relationship. Finally, we calculate the modularity for different initial K to obtain the optimal division. Experiments verify that the OPSN method efficiently detects the optimal community structure in various software systems.
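
    An illustrative seed-and-grow sketch of the overall scheme: pick K important nodes as cores, attach each remaining node to the community it is most connected to, and score partitions by modularity. Node degree is used here as a stand-in for the paper's Fault Accumulation measure, and networkx's built-in modularity is used for scoring; both are assumptions for illustration.

```python
import networkx as nx

def detect_communities(G, k):
    """Grow k communities from the k most 'important' nodes, score by modularity."""
    # Stand-in importance measure (the paper defines Fault Accumulation instead).
    seeds = sorted(G.nodes, key=G.degree, reverse=True)[:k]
    communities = {s: {s} for s in seeds}
    for v in G.nodes:
        if v in seeds:
            continue
        # Assign v to the community with the most connections to it.
        best = max(communities,
                   key=lambda s: sum(1 for u in communities[s] if G.has_edge(v, u)))
        communities[best].add(v)
    parts = list(communities.values())
    return parts, nx.algorithms.community.modularity(G, parts)

# Try several K and keep the partition with the highest modularity.
G = nx.karate_club_graph()
best_parts, best_q = max((detect_communities(G, k) for k in range(2, 6)),
                         key=lambda r: r[1])
```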

  14. Chaos Time Series Prediction Based on Membrane Optimization Algorithms

    PubMed Central

    Li, Meng; Yi, Liangzhong; Pei, Zheng; Gao, Zhisheng

    2015-01-01

    This paper puts forward a prediction model based on membrane computing optimization algorithm for chaos time series; the model optimizes simultaneously the parameters of phase space reconstruction (τ, m) and least squares support vector machine (LS-SVM) (γ, σ) by using membrane computing optimization algorithm. It is an important basis for spectrum management to predict accurately the change trend of parameters in the electromagnetic environment, which can help decision makers to adopt an optimal action. Then, the model presented in this paper is used to forecast band occupancy rate of frequency modulation (FM) broadcasting band and interphone band. To show the applicability and superiority of the proposed model, this paper will compare the forecast model presented in it with conventional similar models. The experimental results show that whether single-step prediction or multistep prediction, the proposed model performs best based on three error measures, namely, normalized mean square error (NMSE), root mean square error (RMSE), and mean absolute percentage error (MAPE). PMID:25874249

  15. Computer Based Porosity Design by Multi Phase Topology Optimization

    NASA Astrophysics Data System (ADS)

    Burblies, Andreas; Busse, Matthias

    2008-02-01

    A numerical simulation technique called Multi Phase Topology Optimization (MPTO) based on the finite element method has been developed and refined by Fraunhofer IFAM during the last five years. MPTO is able to determine the optimum distribution of two or more different materials in components under thermal and mechanical loads. The objective of optimization is to minimize the component's elastic energy. Conventional topology optimization methods which simulate adaptive bone mineralization have the disadvantage that there is a continuous change of mass by growth processes. MPTO keeps all initial material concentrations and uses methods adapted from molecular dynamics to find the energy minimum. Applying MPTO to mechanically loaded components with a high number of different material densities, the optimization results show graded and sometimes anisotropic porosity distributions which are very similar to natural bone structures. Now it is possible to design the macro- and microstructure of a mechanical component in one step. Computer based porosity design structures can be manufactured by new Rapid Prototyping technologies. Fraunhofer IFAM has successfully applied 3D-Printing and Selective Laser Sintering methods in order to produce very stiff lightweight components with graded porosities calculated by MPTO.

  16. Comparative Pessimism or Optimism: Depressed Mood, Risk-Taking, Social Utility and Desirability.

    PubMed

    Milhabet, Isabelle; Le Barbenchon, Emmanuelle; Cambon, Laurent; Molina, Guylaine

    2015-01-01

    Comparative optimism can be defined as a self-serving, asymmetric judgment of the future. It is often thought to be beneficial and socially accepted, whereas comparative pessimism is correlated with depression and socially rejected. Our goal was to examine the social acceptance of comparative optimism and the social rejection of comparative pessimism in two dimensions of social judgment, social desirability and social utility, considering the attributions of dysphoria and risk-taking potential (studies 2 and 3) on outlooks on the future. In three experiments, the participants assessed either one (study 1) or several (studies 2 and 3) fictional targets in two dimensions, social utility and social desirability. Targets exhibiting comparatively optimistic or pessimistic outlooks on the future were presented as non-depressed, depressed, or neither (control condition) (study 1); non-depressed or depressed (study 2); and non-depressed or in the control condition (study 3). Two significant results were obtained: (1) social rejection of comparative pessimism in the social desirability dimension, which can be explained by its depressive feature; and (2) social acceptance of comparative optimism in the social utility dimension, which can be explained by the perception that comparatively optimistic individuals are potential risk-takers.

  17. Risk perception, risk evaluation and human values: cognitive bases of acceptability of a radioactive waste repository

    SciTech Connect

    Earle, T.C.; Lindell, M.K.; Rankin, W.L.

    1981-07-01

    Public acceptance of radioactive waste management alternatives depends in part on public perception of the associated risks. Three aspects of those perceived risks were explored in this study: (1) synthetic measures of risk perception based on judgments of probability and consequences; (2) acceptability of hypothetical radioactive waste policies, and (3) effects of human values on risk perception. Both the work on synthetic measures of risk perception and on the acceptability of hypothetical policies included investigations of three categories of risk: (1) Short-term public risk (affecting persons living when the wastes are created), (2) Long-term public risk (affecting persons living after the time the wastes were created), and (3) Occupational risk (affecting persons working with the radioactive wastes). The human values work related to public risk perception in general, across categories of persons affected. Respondents were selected according to a purposive sampling strategy.

  18. Risk perception, risk evaluation and human values: Cognitive bases acceptability of a radioactive waste repository

    NASA Astrophysics Data System (ADS)

    Earle, T. C.; Lindell, M. K.; Rankin, W. L.; Nealey, S. M.

    1981-07-01

    Public acceptance of radioactive waste management alternatives depends in part on public perception of the associated risks. Three aspects of those perceived risks were explored: (1) synthetic measures of risk perception based on judgments of probability and consequences; (2) acceptability of hypothetical radioactive waste policies; and (3) effects of human values on risk perception. Both the work on synthetic measures of risk perception and on the acceptability of hypothetical policies included investigations of three categories of risk: short term public risk (affecting persons living when the wastes are created), long term public risk (affecting persons living after the time the wastes were created), and occupational risk (affecting persons working with the radioactive wastes). The human values work related to public risk perception in general, across categories of persons affected. Respondents were selected according to a purposive sampling strategy.

  19. Mesh Optimization for Monte Carlo-Based Optical Tomography

    PubMed Central

    Edmans, Andrew; Intes, Xavier

    2015-01-01

    Mesh-based Monte Carlo techniques for optical imaging allow for accurate modeling of light propagation in complex biological tissues. Recently, they have been developed within an efficient computational framework to be used as a forward model in optical tomography. However, commonly employed adaptive mesh discretization techniques have not yet been implemented for Monte Carlo based tomography. Herein, we propose a methodology to optimize the mesh discretization and analytically rescale the associated Jacobian based on the characteristics of the forward model. We demonstrate that this method maintains the accuracy of the forward model even in the case of temporal data sets while allowing for significant coarsening or refinement of the mesh. PMID:26566523

  20. Block-based mask optimization for optical lithography.

    PubMed

    Ma, Xu; Song, Zhiyang; Li, Yanqiu; Arce, Gonzalo R

    2013-05-10

    Pixel-based optical proximity correction (PBOPC) methods have been developed as a leading-edge resolution enhancement technique (RET) for integrated circuit fabrication. PBOPC independently modulates each pixel on the reticle, which tremendously increases the mask's complexity and, at the same time, deteriorates its manufacturability. Most current PBOPC algorithms resort to regularization methods or a mask manufacturing rule check (MRC) to improve the mask manufacturability. Typically, these approaches either fail to satisfy manufacturing constraints on the practical product line, or lead to suboptimal mask patterns that may degrade the lithographic performance. This paper develops a block-based optical proximity correction (BBOPC) algorithm to pursue optimal masks with manufacturability compliance, where the mask is shaped by a set of overlapped basis blocks rather than pixels. BBOPC optimization is formulated based on a vector imaging model, which is adequate for both dry lithography with lower numerical aperture (NA) and immersion lithography with hyper-NA. The BBOPC algorithm successively optimizes the main features (MF) and subresolution assist features (SRAF) based on a modified conjugate gradient method. It is effective at smoothing any unmanufacturable jogs along edges. A weight matrix is introduced in the cost function to preserve the edge fidelity of the printed images. Simulations show that the BBOPC algorithm can improve lithographic imaging performance while maintaining mask manufacturing constraints. PMID:23669851

  1. [Optimized Spectral Indices Based Estimation of Forage Grass Biomass].

    PubMed

    An, Hai-bo; Li, Fei; Zhao, Meng-li; Liu, Ya-jun

    2015-11-01

    As an important indicator of forage production, aboveground biomass directly illustrates the growth of forage grass. Therefore, real-time monitoring of forage grass biomass plays a crucial role in suitable grazing and management of artificial and natural grassland. However, traditional sampling and measuring are time-consuming and labor-intensive. Recently, the development of hyperspectral remote sensing has made it feasible to derive forage grass biomass in a timely and nondestructive way. In the present study, the main objectives were to explore the robustness of published and optimized spectral indices in estimating biomass of forage grass in natural and artificial pasture. A natural pasture experiment with four grazing densities (control, light grazing, moderate grazing and high grazing) was designed in desert steppe, and different forage cultivars with different N rates were grown in artificial forage fields in Inner Mongolia. The canopy reflectance and biomass in each plot were measured during critical stages. The results showed that, due to differences in canopy structure and biomass, canopy reflectance differs greatly among types of forage grass. The best performing spectral index varied with species of forage grass and treatment (R² = 0.00-0.69). The predictive ability of spectral indices decreased under the low biomass of desert steppe, while red band based spectral indices lost sensitivity under the moderate-high biomass of forage maize. When band combinations of simple ratio and normalized difference spectral indices were optimized on combined datasets of natural and artificial grassland, the optimized spectral indices significantly increased predictive ability, and the model between biomass and optimized spectral indices had the highest R² (R² = 0.72) compared to published spectral indices. Sensitivity analysis further confirmed that the optimized index had the lowest noise equivalent and was the best performing index in

  3. Optimization of positrons generation based on laser wakefield electron acceleration

    NASA Astrophysics Data System (ADS)

    Wu, Yuchi; Han, Dan; Zhang, Tiankui; Dong, Kegong; Zhu, Bin; Yan, Yonghong; Gu, Yuqiu

    2016-08-01

    Laser-based positron sources represent a new type of particle source with short pulse duration and high charge density. Positron production based on laser wakefield electron acceleration (LWFA) is investigated theoretically in this paper. Analytical expressions for positron spectra and yield are obtained by combining LWFA and cascade-shower theories. The maximum positron yield and the corresponding converter thickness are optimized as functions of the driving laser power. Under the optimal conditions, a high-energy (>100 MeV) positron yield of up to 5×10^11 can be produced by high-power femtosecond lasers at ELI-NP. The positron fraction shows that a quasineutral electron-positron jet can be generated by setting the converter thickness to greater than 5 radiation lengths.

  4. Finite Element Based HWB Centerbody Structural Optimization and Weight Prediction

    NASA Technical Reports Server (NTRS)

    Gern, Frank H.

    2012-01-01

    This paper describes a scalable structural model suitable for Hybrid Wing Body (HWB) centerbody analysis and optimization. The geometry of the centerbody and primary wing structure is based on a Vehicle Sketch Pad (VSP) surface model of the aircraft and a FLOPS compatible parameterization of the centerbody. Structural analysis, optimization, and weight calculation are based on a Nastran finite element model of the primary HWB structural components, featuring centerbody, mid section, and outboard wing. Different centerbody designs like single bay or multi-bay options are analyzed and weight calculations are compared to current FLOPS results. For proper structural sizing and weight estimation, internal pressure and maneuver flight loads are applied. Results are presented for aerodynamic loads, deformations, and centerbody weight.

  5. Parameter optimization in differential geometry based solvation models

    PubMed Central

    Wang, Bao; Wei, G. W.

    2015-01-01

    Differential geometry (DG) based solvation models are a new class of variational implicit solvent approaches that are able to avoid unphysical solvent-solute boundary definitions and associated geometric singularities, and dynamically couple polar and non-polar interactions in a self-consistent framework. Our earlier study indicates that DG based non-polar solvation model outperforms other methods in non-polar solvation energy predictions. However, the DG based full solvation model has not shown its superiority in solvation analysis, due to its difficulty in parametrization, which must ensure the stability of the solution of strongly coupled nonlinear Laplace-Beltrami and Poisson-Boltzmann equations. In this work, we introduce new parameter learning algorithms based on perturbation and convex optimization theories to stabilize the numerical solution and thus achieve an optimal parametrization of the DG based solvation models. An interesting feature of the present DG based solvation model is that it provides accurate solvation free energy predictions for both polar and non-polar molecules in a unified formulation. Extensive numerical experiment demonstrates that the present DG based solvation model delivers some of the most accurate predictions of the solvation free energies for a large number of molecules. PMID:26450304

  6. Parameter optimization in differential geometry based solvation models.

    PubMed

    Wang, Bao; Wei, G W

    2015-10-01

    Differential geometry (DG) based solvation models are a new class of variational implicit solvent approaches that are able to avoid unphysical solvent-solute boundary definitions and associated geometric singularities, and dynamically couple polar and non-polar interactions in a self-consistent framework. Our earlier study indicates that DG based non-polar solvation model outperforms other methods in non-polar solvation energy predictions. However, the DG based full solvation model has not shown its superiority in solvation analysis, due to its difficulty in parametrization, which must ensure the stability of the solution of strongly coupled nonlinear Laplace-Beltrami and Poisson-Boltzmann equations. In this work, we introduce new parameter learning algorithms based on perturbation and convex optimization theories to stabilize the numerical solution and thus achieve an optimal parametrization of the DG based solvation models. An interesting feature of the present DG based solvation model is that it provides accurate solvation free energy predictions for both polar and non-polar molecules in a unified formulation. Extensive numerical experiment demonstrates that the present DG based solvation model delivers some of the most accurate predictions of the solvation free energies for a large number of molecules.

  7. An Optimality-Based Fully-Distributed Watershed Ecohydrological Model

    NASA Astrophysics Data System (ADS)

    Chen, L., Jr.

    2015-12-01

    Watershed ecohydrological models are essential tools for assessing the impact of climate change and human activities on hydrological and ecological processes for watershed management. Existing models can be classified as empirically based, quasi-mechanistic, or mechanistic. Empirically based and quasi-mechanistic models usually adopt empirical or quasi-empirical equations, which may be incapable of capturing the non-stationary dynamics of the target processes. Mechanistic models that are designed to represent process feedbacks may capture vegetation dynamics, but often have more demanding spatial and temporal parameterization requirements to represent vegetation physiological variables. In recent years, optimality-based ecohydrological models have been proposed, which have the advantage of reducing the need for model calibration by assuming critical aspects of system behavior. However, this work to date has been limited to the plot scale, considering only one-dimensional exchange of soil moisture, carbon and nutrients in the vegetation parameterization, without lateral hydrological transport. Conceptual isolation of individual ecosystem patches from upslope and downslope flow paths compromises the ability to represent and test the relationships between hydrology and vegetation in mountainous and hilly terrain. This work presents an optimality-based watershed ecohydrological model that incorporates the influence of lateral hydrological processes on the flow-path patterns that emerge from the optimality assumption. The model has been tested in the Walnut Gulch watershed and shows good agreement with observed temporal and spatial patterns of evapotranspiration (ET) and gross primary productivity (GPP). The spatial variability of ET and GPP produced by the model matches the spatial distributions of the topographic wetness index (TWI), specific catchment area (SCA), and slope well over the area. Compared with the one-dimensional vegetation optimality model (VOM), we find that the distributed VOM (DisVOM) produces more reasonable spatial

  8. Process optimization electrospinning fibrous material based on polyhydroxybutyrate

    NASA Astrophysics Data System (ADS)

    Olkhov, A. A.; Tyubaeva, P. M.; Staroverova, O. V.; Mastalygina, E. E.; Popov, A. A.; Ischenko, A. A.; Iordanskii, A. L.

    2016-05-01

    The article analyzes the influence of the main technological parameters of electrostatic spinning on the morphology and properties of ultrathin fibers based on polyhydroxybutyrate (PHB). It is found that the electrical conductivity and viscosity of the spinning solution affect the formation of the fiber macrostructure. Adjusting the viscosity and conductivity of the spinning solution makes it possible to control the geometry of the PHB-based fiber materials. The resulting fibers have found use in medicine, particularly in structural elements for the musculoskeletal system.

  9. Corrosion risk assessment and risk based inspection for sweet oil and gas corrosion -- Practical experience

    SciTech Connect

    Pursell, M.J.; Sehnan, C.; Naen, M.F.

    1999-11-01

    Successful and cost-effective Corrosion Risk Assessment depends on sensible use of prediction methods and a good understanding of process factors. Both are discussed with examples. Practical semi-probabilistic Risk Based Inspection planning methods, which measure risk directly as cost and personnel hazard, are compared with traditional methods and discussed.

  10. A global optimization paradigm based on change of measures

    PubMed Central

    Sarkar, Saikat; Roy, Debasish; Vasu, Ram Mohan

    2015-01-01

    A global optimization framework, COMBEO (Change Of Measure Based Evolutionary Optimization), is proposed. An important aspect in the development is a set of derivative-free additive directional terms, obtainable through a change of measures en route to the imposition of any stipulated conditions aimed at driving the realized design variables (particles) to the global optimum. The generalized setting offered by the new approach also enables several basic ideas, used with other global search methods such as the particle swarm or the differential evolution, to be rationally incorporated in the proposed set-up via a change of measures. The global search may be further aided by imparting to the directional update terms additional layers of random perturbations such as ‘scrambling’ and ‘selection’. Depending on the precise choice of the optimality conditions and the extent of random perturbation, the search can be readily rendered either greedy or more exploratory. As numerically demonstrated, the new proposal appears to provide for a more rational, more accurate and, in some cases, a faster alternative to many available evolutionary optimization schemes. PMID:26587268

  11. Nanodosimetry-Based Plan Optimization for Particle Therapy

    PubMed Central

    Casiraghi, Margherita; Schulte, Reinhard W.

    2015-01-01

    Treatment planning for particle therapy is currently an active field of research due to uncertainty in how to modify physical dose in order to create a uniform biological dose response in the target. A novel treatment plan optimization strategy based on measurable nanodosimetric quantities rather than biophysical models is proposed in this work. Simplified proton and carbon treatment plans were simulated in a water phantom to investigate the optimization feasibility. Track structures of the mixed radiation field produced at different depths in the target volume were simulated with Geant4-DNA and nanodosimetric descriptors were calculated. The fluences of the treatment field pencil beams were optimized in order to create a mixed field with equal nanodosimetric descriptors at each of the multiple positions in spread-out particle Bragg peaks. For both proton and carbon ion plans, a uniform spatial distribution of nanodosimetric descriptors could be obtained by optimizing opposing-field but not single-field plans. The results obtained indicate that uniform nanodosimetrically weighted plans, which may also be radiobiologically uniform, can be obtained with this approach. Future investigations need to demonstrate that this approach is also feasible for more complicated beam arrangements and that it leads to biologically uniform response in tumor cells and tissues. PMID:26167202
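
    The fluence optimization step can be pictured as a constrained least-squares problem: given a matrix mapping pencil-beam fluences to a nanodosimetric descriptor at each position, find non-negative fluence weights that make the descriptor as uniform as possible. The sketch below is an illustration only, not the authors' code; the influence matrix `descriptor_per_beam` is hypothetical and the descriptor is assumed additive in fluence.

```python
# Illustrative fluence optimization: find non-negative pencil-beam weights w
# so that descriptor_per_beam @ w is as close as possible to a uniform target.
import numpy as np
from scipy.optimize import nnls

def optimize_fluences(descriptor_per_beam: np.ndarray, target_level: float):
    """descriptor_per_beam[k, b]: descriptor contribution at position k from
    unit fluence of pencil beam b (assumed additive). Returns non-negative
    weights and the residual norm."""
    n_positions = descriptor_per_beam.shape[0]
    target = np.full(n_positions, target_level)
    weights, residual = nnls(descriptor_per_beam, target)
    return weights, residual

# Toy example: a small hypothetical influence matrix for two opposing fields.
rng = np.random.default_rng(1)
A = rng.uniform(0.1, 1.0, size=(20, 8))
w, res = optimize_fluences(A, target_level=1.0)
print("weights:", np.round(w, 3), "residual:", round(res, 4))
```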

  12. Optimal network topology for structural robustness based on natural connectivity

    NASA Astrophysics Data System (ADS)

    Peng, Guan-sheng; Wu, Jun

    2016-02-01

    The structural robustness of the infrastructure of various real-life systems, which can be represented by networks, is of great importance. We therefore propose a tabu search algorithm to optimize the structural robustness of a given network by rewiring links while keeping node degrees fixed. The objective of our algorithm is to maximize a new structural robustness measure, natural connectivity, which provides a sensitive and reliable measure of the structural robustness of complex networks at low computational complexity. We first applied this method to several networks with different degree distributions for contrast analysis and investigated the basic properties of the optimal network. We found that the optimal network based on the power-law degree distribution exhibits a roughly "eggplant-like" topology, with a cluster of high-degree nodes at the head and the other low-degree nodes scattered across the body of the "eggplant". Additionally, because rewiring links incurs a cost in practical applications, we refined the method by employing an assortative rewiring strategy and validated its efficiency.
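
    Natural connectivity has a standard closed form in terms of the adjacency spectrum, ln((1/N) Σ_i e^{λ_i}). The sketch below evaluates it and performs one degree-preserving double-edge swap of the kind a tabu search would explore; the random graph is a stand-in, and this is not the authors' implementation.

```python
import numpy as np

def natural_connectivity(adj: np.ndarray) -> float:
    """Natural connectivity: ln of the mean of exp(eigenvalues) of the
    symmetric adjacency matrix."""
    eigs = np.linalg.eigvalsh(adj)
    return float(np.log(np.mean(np.exp(eigs))))

def degree_preserving_rewire(adj: np.ndarray, rng) -> np.ndarray:
    """One double-edge swap: replace edges (a,b),(c,d) with (a,d),(c,b),
    which keeps every node degree unchanged."""
    new = adj.copy()
    edges = np.argwhere(np.triu(new) > 0)
    for _ in range(100):  # retry until a valid swap is found
        (a, b), (c, d) = edges[rng.choice(len(edges), 2, replace=False)]
        if len({a, b, c, d}) == 4 and not new[a, d] and not new[c, b]:
            new[a, b] = new[b, a] = new[c, d] = new[d, c] = 0
            new[a, d] = new[d, a] = new[c, b] = new[b, c] = 1
            return new
    return new

rng = np.random.default_rng(0)
A = (rng.random((30, 30)) < 0.1).astype(float)
A = np.triu(A, 1); A = A + A.T              # symmetric, no self-loops
print("before:", round(natural_connectivity(A), 4))
print("after :", round(natural_connectivity(degree_preserving_rewire(A, rng)), 4))
```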

  13. Genetic Algorithm (GA)-Based Inclinometer Layout Optimization

    PubMed Central

    Liang, Weijie; Zhang, Ping; Chen, Xianping; Cai, Miao; Yang, Daoguo

    2015-01-01

    This paper presents numerical simulation results for an airflow inclinometer, including sensitivity studies and genetic algorithm (GA)-based thermal optimization of its printed circuit board (PCB) layout. Because of the working principle of the gas sensor, changes in ambient temperature may cause dramatic voltage drifts of the sensors. Therefore, eliminating the influence of the external environment on the airflow is essential for the performance and reliability of an airflow inclinometer. In this paper, the mechanism of an airflow inclinometer and the influence of different ambient temperatures on its sensitivity are examined with the ANSYS-FLOTRAN CFD program. The results show that the sensitivity of the airflow inclinometer is inversely proportional to the ambient temperature at the sensing element, decreasing as the ambient temperature increases. A GA is used to optimize the PCB thermal layout of the inclinometer. Finite-element simulation (ANSYS) is used to verify the optimized thermal layout, and the results indicate that the optimal PCB layout greatly improves (by more than 50%) the sensitivity of the inclinometer. The study may be useful in the design of PCB layouts aimed at improving the sensitivity of gas sensors. PMID:25897500

  14. Genetic algorithm for design and manufacture optimization based on numerical simulations applied to aeronautic composite parts

    NASA Astrophysics Data System (ADS)

    Mouton, S.; Ledoux, Y.; Teissandier, D.; Sébastian, P.

    2010-06-01

    A key challenge for the future is to drastically reduce the human impact on the environment. In the aeronautic field, this challenge translates into optimizing aircraft design to decrease the overall mass, which leads to the optimization of every part of the airplane. The task is even more delicate when the material used is a composite. In this case, it is necessary to find a compromise between the strength, the mass and the manufacturing cost of the component. Because of these different kinds of design constraints, it is necessary to assist engineers with a decision support system to determine feasible solutions. In this paper, an approach is proposed based on the coupling of the key characteristics of the design process and on consideration of the failure risk of the component. The originality of this work is that the manufacturing deviations due to the RTM process are integrated into the simulation of the assembly process. Two kinds of deviations are identified: volume impregnation (injection phase of the RTM process) and geometrical deviations (curing and cooling phases). The quantification of these deviations and the related failure risk calculation are based on finite element simulations (Pam RTM® and Samcef® software). The use of a genetic algorithm makes it possible to estimate the impact of the design choices and their consequences on the failure risk of the component. The main focus of the paper is the optimization of tool design. In the framework of decision support systems, the failure risk calculation is used to compare possible industrialization alternatives. The method is applied to a particular part of the airplane structure: a spar unit made of carbon fiber/epoxy composite.

  15. Genetic algorithm for design and manufacture optimization based on numerical simulations applied to aeronautic composite parts

    SciTech Connect

    Mouton, S.; Ledoux, Y.; Teissandier, D.; Sebastian, P.

    2010-06-15

    A key challenge for the future is to drastically reduce the human impact on the environment. In the aeronautic field, this challenge translates into optimizing aircraft design to decrease the overall mass, which leads to the optimization of every part of the airplane. The task is even more delicate when the material used is a composite. In this case, it is necessary to find a compromise between the strength, the mass and the manufacturing cost of the component. Because of these different kinds of design constraints, it is necessary to assist engineers with a decision support system to determine feasible solutions. In this paper, an approach is proposed based on the coupling of the key characteristics of the design process and on consideration of the failure risk of the component. The originality of this work is that the manufacturing deviations due to the RTM process are integrated into the simulation of the assembly process. Two kinds of deviations are identified: volume impregnation (injection phase of the RTM process) and geometrical deviations (curing and cooling phases). The quantification of these deviations and the related failure risk calculation are based on finite element simulations (Pam RTM® and Samcef® software). The use of a genetic algorithm makes it possible to estimate the impact of the design choices and their consequences on the failure risk of the component. The main focus of the paper is the optimization of tool design. In the framework of decision support systems, the failure risk calculation is used to compare possible industrialization alternatives. The method is applied to a particular part of the airplane structure: a spar unit made of carbon fiber/epoxy composite.

  16. Credibility theory based dynamic control bound optimization for reservoir flood limited water level

    NASA Astrophysics Data System (ADS)

    Jiang, Zhiqiang; Sun, Ping; Ji, Changming; Zhou, Jianzhong

    2015-10-01

    The dynamic control operation of the reservoir flood limited water level (FLWL) can resolve the contradiction between reservoir flood control and beneficial operation, and it is an important measure for ensuring flood-control safety while realizing flood utilization. The dynamic control bound of the FLWL is a fundamental element in implementing dynamic control operation. In order to optimize the dynamic control bound of the FLWL while accounting for flood forecasting error, this paper treats the forecasting error as a fuzzy variable and describes it with credibility theory, which has emerged in recent years. Combining this with a quantitative model of flood forecasting error, a credibility-based fuzzy chance-constrained model for optimizing the dynamic control bound is proposed, and fuzzy simulation technology is used to solve the model. The FENGTAN reservoir in China was selected as a case study, and the results show that, compared with the original operating water level, the initial operating water level (IOWL) of FENGTAN reservoir can be raised by 4 m, 2 m and 5.5 m, respectively, in the three stages of the flood season without increasing flood-control risk. In addition, the rationality and feasibility of the proposed forecasting error quantification model and the credibility-based dynamic control bound optimization model are verified by calculations based on extreme risk theory.
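
    For a triangular fuzzy forecasting error, the credibility measure used in such chance constraints has a simple closed form, Cr = (Pos + Nec)/2. The minimal sketch below evaluates a credibility-based chance constraint directly; the triangular parameters, water levels and confidence level are placeholders, not values from the paper.

```python
# Credibility that a triangular fuzzy variable xi = (a, b, c) does not exceed x:
# Cr{xi <= x} = (Pos{xi <= x} + Nec{xi <= x}) / 2.
def credibility_leq(x: float, a: float, b: float, c: float) -> float:
    if x <= a:
        return 0.0
    if x <= b:
        return (x - a) / (2.0 * (b - a))
    if x <= c:
        return (x - 2.0 * b + c) / (2.0 * (c - b))
    return 1.0

# Chance constraint Cr{ water_level + error <= safety_limit } >= alpha,
# with a placeholder triangular forecasting error (in metres) around zero.
def satisfies_constraint(water_level: float, safety_limit: float,
                         error=(-0.8, 0.0, 1.2), alpha: float = 0.9) -> bool:
    a, b, c = error
    margin = safety_limit - water_level
    return credibility_leq(margin, a, b, c) >= alpha

print(satisfies_constraint(water_level=172.0, safety_limit=173.5))
```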

  17. Probabilistic risk assessment techniques help in identifying optimal equipment design for in-situ vitrification

    SciTech Connect

    Lucero, V.; Meale, B.M.; Purser, F.E.

    1990-01-01

    The analysis discussed in this paper was performed as part of the buried waste remediation efforts at the Idaho National Engineering Laboratory (INEL). The specific type of remediation discussed herein involves a thermal treatment process for converting contaminated soil and waste into a stable, chemically inert form. Models of the proposed process were developed using probabilistic risk assessment (PRA) fault tree and event tree modeling techniques. The models were used to determine the appropriateness of the conceptual design by identifying potential hazards of system operations. Additional models were developed to represent the reliability aspects of the system components. By performing various sensitivity analyses with the models, optimal design modifications are being identified to support an integrated, cost-effective design that presents minimal risk to the environment and the public while maximizing component reliability.
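
    For independent basic events, fault tree quantification of the kind described here reduces to combining probabilities through AND and OR gates. The sketch below shows that arithmetic on a small hypothetical tree; the event names and probabilities are invented for illustration and are not the INEL model.

```python
# Minimal fault-tree evaluation assuming independent basic events.
from functools import reduce

def and_gate(probs):
    """All inputs must fail: product of probabilities."""
    return reduce(lambda acc, p: acc * p, probs, 1.0)

def or_gate(probs):
    """Any input failing suffices: 1 - product of (1 - p)."""
    return 1.0 - reduce(lambda acc, p: acc * (1.0 - p), probs, 1.0)

# Hypothetical basic-event probabilities for a thermal-treatment system.
power_loss     = 1e-3
backup_fails   = 5e-2
cooling_fails  = 2e-3
operator_error = 1e-2

loss_of_power = and_gate([power_loss, backup_fails])          # both supplies lost
top_event     = or_gate([loss_of_power, cooling_fails, operator_error])
print(f"Top-event probability: {top_event:.4e}")
```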

  18. Chaotic Teaching-Learning-Based Optimization with Lévy Flight for Global Numerical Optimization.

    PubMed

    He, Xiangzhu; Huang, Jida; Rao, Yunqing; Gao, Liang

    2016-01-01

    Recently, teaching-learning-based optimization (TLBO), as one of the emerging nature-inspired heuristic algorithms, has attracted increasing attention. In order to enhance its convergence rate and prevent it from getting stuck in local optima, a novel metaheuristic has been developed in this paper, where particular characteristics of the chaos mechanism and Lévy flight are introduced to the basic framework of TLBO. The new algorithm is tested on several large-scale nonlinear benchmark functions with different characteristics and compared with other methods. Experimental results show that the proposed algorithm outperforms other algorithms and achieves a satisfactory improvement over TLBO. PMID:26941785
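
    A compact sketch of the ingredients named in the abstract, TLBO's teacher and learner phases, a logistic chaotic map in place of some uniform random draws, and Mantegna-style Lévy steps, applied to a generic benchmark. This illustrates the general recipe under simplifying assumptions (the two phases are condensed into one update per individual) and is not the authors' exact algorithm.

```python
import numpy as np
from math import gamma, pi, sin

def levy_step(dim, rng, beta=1.5):
    """Mantegna's algorithm for Levy-distributed step lengths."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, dim)
    v = rng.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

def chaotic_tlbo(f, dim=10, pop=30, iters=200, lo=-5.0, hi=5.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, (pop, dim))
    fit = np.apply_along_axis(f, 1, X)
    chaos = 0.7                                      # logistic-map state
    for _ in range(iters):
        chaos = 4.0 * chaos * (1.0 - chaos)          # chaotic coefficient
        teacher = X[np.argmin(fit)]
        mean = X.mean(axis=0)
        for i in range(pop):
            tf = rng.integers(1, 3)                  # teaching factor in {1, 2}
            # Teacher phase with chaotic coefficient and a small Levy kick.
            new = X[i] + chaos * (teacher - tf * mean) + 0.01 * levy_step(dim, rng)
            # Learner phase: move toward a better peer, away from a worse one.
            j = rng.integers(pop)
            if fit[j] < fit[i]:
                new = new + rng.random(dim) * (X[j] - X[i])
            else:
                new = new + rng.random(dim) * (X[i] - X[j])
            new = np.clip(new, lo, hi)
            fn = f(new)
            if fn < fit[i]:                          # greedy acceptance
                X[i], fit[i] = new, fn
    return X[np.argmin(fit)], fit.min()

sphere = lambda x: float(np.sum(x ** 2))
best_x, best_f = chaotic_tlbo(sphere)
print("best objective:", best_f)
```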

  1. Managing simulation-based training: A framework for optimizing learning, cost, and time

    NASA Astrophysics Data System (ADS)

    Richmond, Noah Joseph

    This study provides a management framework for optimizing training programs for learning, cost, and time when using simulation-based training (SBT) and reality-based training (RBT) as resources. Simulation is shown to be an effective means of implementing activity substitution as a way to reduce risk. The risk profiles of 22 US Air Force vehicles are calculated, and the potential risk reduction is calculated under the assumption of perfect substitutability of RBT and SBT. Methods are subsequently developed to relax the assumption of perfect substitutability. The transfer effectiveness ratio (TER) concept is defined and modeled as a function of the quality of the simulator used and the requirements of the activity trained. The Navy F/A-18 is then analyzed in a case study illustrating how learning can be maximized subject to constraints on cost and time, and also subject to the decision maker's preferences for the proportional and absolute use of simulation. Solution methods for optimizing multiple activities across shared resources are provided next. Finally, a simulation strategy including an operations planning program (OPP), an implementation program (IP), an acquisition program (AP), and a pedagogical research program (PRP) is detailed. The study provides the theoretical tools to understand how to leverage SBT, a case study demonstrating these tools' efficacy, and a set of policy recommendations to enable the US military to better utilize SBT in the future.
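
    The core allocation problem, maximizing learning from a mix of simulator and live hours subject to cost and time budgets with a transfer effectiveness ratio discounting simulator hours, can be written as a small linear program. The sketch below uses placeholder numbers, not values from the study.

```python
# Toy SBT/RBT mix: maximize ter*s + r (learning-equivalent hours) subject to
# budget and calendar constraints. scipy's linprog minimizes, so negate.
from scipy.optimize import linprog

ter = 0.6                            # assumed transfer effectiveness ratio
cost_sim, cost_live = 5e3, 40e3      # $ per hour (placeholders)
budget, max_hours = 2.0e6, 120.0     # total $ and total hours available

c = [-ter, -1.0]                                    # maximize ter*s + r
A_ub = [[cost_sim, cost_live], [1.0, 1.0]]
b_ub = [budget, max_hours]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
sim_hours, live_hours = res.x
print(f"simulator: {sim_hours:.1f} h, live: {live_hours:.1f} h, "
      f"learning-equivalent: {-res.fun:.1f} h")
```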

  2. Biological Based Risk Assessment for Space Exploration

    NASA Technical Reports Server (NTRS)

    Cucinotta, Francis A.

    2011-01-01

    Exposures from galactic cosmic rays (GCR), made up of high-energy protons and high-charge, high-energy (HZE) nuclei, and from solar particle events (SPEs), comprised largely of low- to medium-energy protons, are the primary health concern for astronauts on long-term space missions. Experimental studies have shown that HZE nuclei produce both qualitative and quantitative differences in biological effects compared to terrestrial radiation, making risk assessments for cancer and for degenerative risks, such as central nervous system effects and heart disease, highly uncertain. The goal for space radiation protection at NASA is to reduce the uncertainties in risk assessments for Mars exploration to be small enough to ensure that acceptable levels of risk are not exceeded and to adequately assess the efficacy of mitigation measures such as shielding or biological countermeasures. We review the recent BEIR VII and UNSCEAR-2006 models of cancer risks and their uncertainties. These models are shown to have an inherent two-fold uncertainty, defined as the ratio of the 95% confidence level to the mean projection, even before radiation quality is considered. In order to overcome the uncertainties in these models, new approaches to risk assessment are warranted. We consider new computational biology approaches to modeling cancer risks. A basic program of research is described that spans stochastic descriptions of the physics and chemistry of radiation tracks and the biochemistry of metabolic pathways, through to emerging biological understanding of the cellular and tissue modifications leading to cancer.

  3. Stochastically optimized monocular vision-based navigation and guidance

    NASA Astrophysics Data System (ADS)

    Watanabe, Yoko

    -effort guidance (MEG) law for multiple target tracking is applied for a guidance design to achieve the mission. Through simulations, it is shown that the control effort can be reduced by using the MEG-based guidance design instead of a conventional proportional navigation-based one. The navigation and guidance designs are implemented and evaluated in a 6 DoF UAV flight simulation. Furthermore, the vision-based obstacle avoidance system is also tested in a flight test using a balloon as an obstacle. For monocular vision-based control problems, it is well known that the separation principle between estimation and control does not hold. In other words, vision-based estimation performance depends strongly on the relative motion of the vehicle with respect to the target. Therefore, this thesis aims to derive an optimal guidance law to achieve a given mission under the condition of using EKF-based relative navigation. Unlike many other works on observer trajectory optimization, this thesis suggests a stochastically optimized guidance design that minimizes the expected value of a cost function of the guidance error and the control effort, subject to the EKF prediction and update procedures. A suboptimal guidance law is derived based on the idea of one-step-ahead (OSA) optimization, in which the optimization is performed under the assumption that there will be only one more final measurement, taken one time step ahead. The OSA suboptimal guidance law is applied to problems of vision-based rendezvous and vision-based obstacle avoidance. Simulation results are presented to show that the suggested guidance law significantly improves the guidance performance. The OSA suboptimal optimization approach is generalized as the n-step-ahead (nSA) optimization for an arbitrary number n. Furthermore, the nSA suboptimal guidance law is extended to the p%-ahead suboptimal guidance by changing the value of n at each time step depending on the current time. The nSA (including the OSA) and

  4. Optimal structural design of the midship of a VLCC based on the strategy integrating SVM and GA

    NASA Astrophysics Data System (ADS)

    Sun, Li; Wang, Deyu

    2012-03-01

    In this paper, a hybrid modeling and optimization process integrating a support vector machine (SVM) and a genetic algorithm (GA) is introduced to reduce the high time cost of structural optimization of ships. The SVM, which is rooted in statistical learning theory and is an approximate implementation of structural risk minimization, provides good generalization performance when metamodeling the input-output relationships of real problems and consequently cuts down the high time cost of analyses such as FEM analysis. The GA, as a powerful optimization technique, possesses remarkable advantages for problems that can hardly be optimized with common gradient-based methods, which makes it suitable for optimizing models built by the SVM. Based on the SVM-GA strategy, the structural scantlings of the midship of a very large crude carrier (VLCC) were optimized according to the direct strength assessment method in the common structural rules (CSR), demonstrating the high efficiency of SVM-GA in optimizing ship structural scantlings under heavy computational complexity. The time cost of this optimization with SVM-GA is sharply reduced, many more optimization loops can be processed within a short time, and the design is improved remarkably.
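
    The SVM-GA strategy, training a support vector regressor as a surrogate of the expensive FE analysis and then letting a GA search the surrogate, can be sketched in a few lines. The structural response below is a stand-in function, not the VLCC model, and the GA is a deliberately minimal real-coded version.

```python
import numpy as np
from sklearn.svm import SVR

def expensive_analysis(x):
    """Stand-in for an FE evaluation (e.g., a stress or weight response)."""
    return np.sum((x - 0.3) ** 2, axis=-1) + 0.1 * np.sin(5 * x).sum(axis=-1)

rng = np.random.default_rng(0)
X_train = rng.uniform(0, 1, (200, 4))               # sampled scantling designs
y_train = expensive_analysis(X_train)
surrogate = SVR(kernel="rbf", C=10.0).fit(X_train, y_train)

# Minimal real-coded GA searching the surrogate instead of the FE model.
pop = rng.uniform(0, 1, (50, 4))
for _ in range(100):
    fitness = surrogate.predict(pop)
    parents = pop[np.argsort(fitness)[:25]]                  # truncation selection
    children = (parents[rng.integers(25, size=25)] +
                parents[rng.integers(25, size=25)]) / 2.0    # arithmetic crossover
    children += rng.normal(0, 0.05, children.shape)          # Gaussian mutation
    pop = np.clip(np.vstack([parents, children]), 0, 1)

best = pop[np.argmin(surrogate.predict(pop))]
print("best design (surrogate):", np.round(best, 3),
      "true response:", float(expensive_analysis(best)))
```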

  5. Divergent nematic susceptibility of optimally doped Fe-based superconductors

    NASA Astrophysics Data System (ADS)

    Chu, Jiun-Haw; Kuo, Hsueh-Hui; Fisher, Ian

    2015-03-01

    By performing differential elastoresistivity measurements on a wider range of iron-based superconductors, including electron-doped (Ba(Fe1-xCox)2As2, Ba(Fe1-xNix)2As2), hole-doped (Ba1-xKxFe2As2), isovalently substituted pnictides (BaFe2(As1-xPx)2) and chalcogenides (FeTe1-xSex), we show that a divergent nematic susceptibility in the B2g symmetry channel appears to be a generic feature of optimally doped compositions. For the specific case of optimally "doped" BaFe2(As1-xPx)2, the nematic susceptibility can be well fitted by a Curie-Weiss temperature dependence with a critical temperature close to zero, consistent with expectations of quantum critical behavior in the absence of disorder. However, for all the other optimally doped iron-based superconductors, the nematic susceptibility exhibits a downward deviation from Curie-Weiss behavior, suggestive of an important role played by disorder.
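
    The Curie-Weiss analysis mentioned above amounts to fitting χ(T) = χ0 + A/(T - θ) to the measured nematic susceptibility and inspecting the fitted θ. A minimal fit on synthetic data (not the measured elastoresistivity data) might look like this:

```python
import numpy as np
from scipy.optimize import curve_fit

def curie_weiss(T, chi0, amplitude, theta):
    """Curie-Weiss form for the nematic susceptibility."""
    return chi0 + amplitude / (T - theta)

# Synthetic "data" standing in for elastoresistivity-derived susceptibility.
T = np.linspace(40, 200, 60)
true = curie_weiss(T, chi0=2.0, amplitude=400.0, theta=5.0)
rng = np.random.default_rng(0)
chi = true + rng.normal(0, 0.3, T.size)

popt, pcov = curve_fit(curie_weiss, T, chi, p0=[1.0, 100.0, 0.0])
chi0, amp, theta = popt
print(f"chi0 = {chi0:.2f}, amplitude = {amp:.1f}, theta = {theta:.1f} K")
```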

  6. Blunt-body drag reduction through base cavity shape optimization

    NASA Astrophysics Data System (ADS)

    Lorite-Díez, Manuel; Jiménez-González, José Ignacio; Gutiérrez-Montes, Cándido; Martínez-Bazán, Carlos

    2015-11-01

    We present a numerical study of the drag reduction of a turbulent incompressible flow around two different blunt bodies, of height H and length L, at a Reynolds number Re = ρU∞H/μ = 2000, where U∞ is the free-stream velocity, ρ the fluid density and μ its viscosity. The study is based on optimizing the geometry of a cavity placed at the rear of the body with the aim of increasing the base pressure. We use an optimization algorithm, which implements the adjoint method, to compute the two-dimensional incompressible turbulent steady-flow sensitivity field of the axial forces on both bodies, and consequently modify the shape of the cavity to reduce the induced drag force. In addition, we have performed three-dimensional numerical simulations using an IDDES model in order to analyze the drag-reduction effect of the optimized cavities at higher Reynolds numbers. The results show average drag reductions of 17% and 25% for Re = 2000, as well as more regularized and less chaotic wake flows for both bodies. Supported by the Spanish MINECO, Junta de Andalucía and EU Funds under projects DPI2014-59292-C3-3-P and P11-TEP7495.

  7. Optimizing legacy molecular dynamics software with directive-based offload

    NASA Astrophysics Data System (ADS)

    Michael Brown, W.; Carrillo, Jan-Michael Y.; Gavhane, Nitin; Thakkar, Foram M.; Plimpton, Steven J.

    2015-10-01

    Directive-based programming models are one solution for exploiting many-core coprocessors to increase simulation rates in molecular dynamics. They offer the potential to reduce code complexity with offload models that can selectively target computations to run on the CPU, the coprocessor, or both. In this paper, we describe modifications to the LAMMPS molecular dynamics code to enable concurrent calculations on a CPU and coprocessor. We demonstrate that standard molecular dynamics algorithms can run efficiently on both the CPU and an x86-based coprocessor using the same subroutines. As a consequence, we demonstrate that code optimizations for the coprocessor also result in speedups on the CPU; in extreme cases up to 4.7X. We provide results for LAMMPS benchmarks and for production molecular dynamics simulations using the Stampede hybrid supercomputer with both Intel® Xeon Phi™ coprocessors and NVIDIA GPUs. The optimizations presented have increased simulation rates by over 2X for organic molecules and over 7X for liquid crystals on Stampede. The optimizations are available as part of the "Intel package" supplied with LAMMPS.

  8. Model-based optimization of tapered free-electron lasers

    NASA Astrophysics Data System (ADS)

    Mak, Alan; Curbis, Francesca; Werin, Sverker

    2015-04-01

    The energy extraction efficiency is a figure of merit for a free-electron laser (FEL). It can be enhanced by the technique of undulator tapering, which enables the sustained growth of radiation power beyond the initial saturation point. In the development of a single-pass x-ray FEL, it is important to exploit the full potential of this technique and optimize the taper profile aw(z). Our approach to the optimization is based on the theoretical model by Kroll, Morton, and Rosenbluth, whereby the taper profile aw(z) is not a predetermined function (such as linear or exponential) but is determined by the physics of a resonant particle. For further enhancement of the energy extraction efficiency, we propose a modification to the model, which involves manipulations of the resonant particle's phase. Using the numerical simulation code GENESIS, we apply our model-based optimization methods to a case of the future FEL at the MAX IV Laboratory (Lund, Sweden), as well as a case of the LCLS-II facility (Stanford, USA).

  9. OPTIMIZATION BIAS IN ENERGY-BASED STRUCTURE PREDICTION

    PubMed Central

    Petrella, Robert J.

    2014-01-01

    Physics-based computational approaches to predicting the structure of macromolecules such as proteins are gaining increased use, but there are remaining challenges. In the current work, it is demonstrated that in energy-based prediction methods, the degree of optimization of the sampled structures can influence the prediction results. In particular, discrepancies in the degree of local sampling can bias the predictions in favor of the oversampled structures by shifting the local probability distributions of the minimum sampled energies. In simple systems, it is shown that the magnitude of the errors can be calculated from the energy surface, and for certain model systems, derived analytically. Further, it is shown that for energy wells whose forms differ only by a randomly assigned energy shift, the optimal accuracy of prediction is achieved when the sampling around each structure is equal. Energy correction terms can be used in cases of unequal sampling to reproduce the total probabilities that would occur under equal sampling, but optimal corrections only partially restore the prediction accuracy lost to unequal sampling. For multiwell systems, the determination of the correction terms is a multibody problem; it is shown that the involved cross-correlation multiple integrals can be reduced to simpler integrals. The possible implications of the current analysis for macromolecular structure prediction are discussed. PMID:25552783
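
    The central effect, that unequal local sampling shifts the distribution of the minimum sampled energy and thereby biases selection toward the oversampled structure, is easy to reproduce numerically. The toy experiment below (an illustration, not the paper's model systems) uses two wells with identical energy distributions and changes only the number of samples drawn around each.

```python
import numpy as np

rng = np.random.default_rng(0)
trials = 20000
n_a, n_b = 10, 100        # structure B is locally oversampled 10x

# Both wells have the same underlying energy distribution (here, normal),
# so an unbiased prediction should pick each structure about half the time.
picks_b = 0
min_a_all, min_b_all = [], []
for _ in range(trials):
    e_a = rng.normal(0.0, 1.0, n_a).min()
    e_b = rng.normal(0.0, 1.0, n_b).min()
    min_a_all.append(e_a)
    min_b_all.append(e_b)
    picks_b += e_b < e_a

print(f"B chosen in {100 * picks_b / trials:.1f}% of trials")
print(f"mean minimum energy: A = {np.mean(min_a_all):.2f}, B = {np.mean(min_b_all):.2f}")
# The oversampled well wins far more than 50% of the time purely because
# its minimum sampled energy is shifted downward by the extra sampling.
```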

  10. Optimal alignment of mirror based pentaprisms for scanning deflectometric devices

    SciTech Connect

    Barber, Samuel K.; Geckeler, Ralf D.; Yashchuk, Valeriy V.; Gubarev, Mikhail V.; Buchheim, Jana; Siewert, Frank; Zeschke, Thomas

    2011-03-04

    In the recent work [Proc. of SPIE 7801, 7801-2/1-12 (2010), Opt. Eng. 50(5) (2011), in press], we have reported on improvement of the Developmental Long Trace Profiler (DLTP), a slope measuring profiler available at the Advanced Light Source Optical Metrology Laboratory, achieved by replacing the bulk pentaprism with a mirror based pentaprism (MBPP). An original experimental procedure for optimal mutual alignment of the MBPP mirrors has been suggested and verified with numerical ray tracing simulations. It has been experimentally shown that the optimally aligned MBPP allows the elimination of systematic errors introduced by inhomogeneity of the optical material and fabrication imperfections of the bulk pentaprism. In the present article, we provide the analytical derivation and verification of easily executed optimal alignment algorithms for two different designs of mirror based pentaprisms. We also provide an analytical description for the mechanism for reduction of the systematic errors introduced by a typical high quality bulk pentaprism. It is also shown that residual misalignments of an MBPP introduce entirely negligible systematic errors in surface slope measurements with scanning deflectometric devices.

  11. Optimizing legacy molecular dynamics software with directive-based offload

    SciTech Connect

    Michael Brown, W.; Carrillo, Jan-Michael Y.; Gavhane, Nitin; Thakkar, Foram M.; Plimpton, Steven J.

    2015-05-14

    Directive-based programming models are one solution for exploiting many-core coprocessors to increase simulation rates in molecular dynamics. They offer the potential to reduce code complexity with offload models that can selectively target computations to run on the CPU, the coprocessor, or both. In this paper, we describe modifications to the LAMMPS molecular dynamics code to enable concurrent calculations on a CPU and coprocessor. We also demonstrate that standard molecular dynamics algorithms can run efficiently on both the CPU and an x86-based coprocessor using the same subroutines. As a consequence, we demonstrate that code optimizations for the coprocessor also result in speedups on the CPU; in extreme cases up to 4.7X. We provide results for LAMMPS benchmarks and for production molecular dynamics simulations using the Stampede hybrid supercomputer with both Intel® Xeon Phi™ coprocessors and NVIDIA GPUs. The optimizations presented have increased simulation rates by over 2X for organic molecules and over 7X for liquid crystals on Stampede. The optimizations are available as part of the "Intel package" supplied with LAMMPS. (C) 2015 Elsevier B.V. All rights reserved.

  12. Optimizing legacy molecular dynamics software with directive-based offload

    DOE PAGES

    Michael Brown, W.; Carrillo, Jan-Michael Y.; Gavhane, Nitin; Thakkar, Foram M.; Plimpton, Steven J.

    2015-05-14

    Directive-based programming models are one solution for exploiting many-core coprocessors to increase simulation rates in molecular dynamics. They offer the potential to reduce code complexity with offload models that can selectively target computations to run on the CPU, the coprocessor, or both. In this paper, we describe modifications to the LAMMPS molecular dynamics code to enable concurrent calculations on a CPU and coprocessor. We also demonstrate that standard molecular dynamics algorithms can run efficiently on both the CPU and an x86-based coprocessor using the same subroutines. As a consequence, we demonstrate that code optimizations for the coprocessor also result in speedups on the CPU; in extreme cases up to 4.7X. We provide results for LAMMPS benchmarks and for production molecular dynamics simulations using the Stampede hybrid supercomputer with both Intel® Xeon Phi™ coprocessors and NVIDIA GPUs. The optimizations presented have increased simulation rates by over 2X for organic molecules and over 7X for liquid crystals on Stampede. The optimizations are available as part of the "Intel package" supplied with LAMMPS. (C) 2015 Elsevier B.V. All rights reserved.

  13. Interdependency between Risk Assessments for Self and Other in the Field of Comparative Optimism: The Contribution of Response Times

    ERIC Educational Resources Information Center

    Spitzenstetter, Florence; Schimchowitsch, Sarah

    2012-01-01

    By introducing a response-time measure in the field of comparative optimism, this study was designed to explore how people estimate risk to self and others depending on the evaluation order (self/other or other/self). Our results show the interdependency between self and other answers. Indeed, while response time for risk assessment for the self…

  14. Neural network based optimal control of HVAC&R systems

    NASA Astrophysics Data System (ADS)

    Ning, Min

    Heating, Ventilation, Air-Conditioning and Refrigeration (HVAC&R) systems are widely used to provide a desired indoor environment for different types of buildings. It is well acknowledged that 30%-40% of the total energy generated is consumed by buildings, and HVAC&R systems alone account for more than 50% of building energy consumption. Low operational efficiency, especially under partial-load conditions, and poor control are among the reasons for such high energy consumption. To improve energy efficiency, HVAC&R systems should be operated so as to maintain a comfortable and healthy indoor environment under dynamic ambient and indoor conditions with the least energy consumption. This research focuses on the optimal operation of HVAC&R systems. The optimization problem is formulated and solved to find the optimal set points for the chilled water supply temperature, discharge air temperature and AHU (air handling unit) fan static pressure such that the indoor environment is maintained with the least chiller and fan energy consumption. To achieve this objective, a dynamic system model is first developed to simulate the system behavior under different control schemes and operating conditions. The system model is modular in structure and includes a water-cooled vapor compression chiller model and a two-zone VAV system model. A fuzzy-set-based extended transformation approach is then applied to investigate the uncertainties of this model caused by uncertain parameters and the sensitivities of the control inputs with respect to the model outputs of interest. A multi-layer feedforward neural network is constructed and trained in unsupervised mode to minimize a cost function comprising the overall energy cost and a penalty cost incurred when one or more constraints are violated. After training, the network is implemented as a supervisory controller to compute the optimal settings for the system. In order to implement the optimal set points predicted by the

  15. R2-Based Multi/Many-Objective Particle Swarm Optimization

    PubMed Central

    Toscano, Gregorio; Barron-Zambrano, Jose Hugo; Tello-Leal, Edgar

    2016-01-01

    We propose to couple the R2 performance measure with Particle Swarm Optimization in order to handle multi/many-objective problems. Our proposal shows that, through a well-designed interaction process, the metaheuristic can be kept almost unaltered, while the R2 performance measure removes the need for either an external archive or Pareto dominance to guide the search. The proposed approach is validated using several test problems and performance measures commonly adopted in the specialized literature. The results indicate that the proposed algorithm produces results that are competitive with those obtained by four well-known MOEAs. Additionally, we validate our proposal on many-objective optimization problems, where our approach showed its main strength, since it could outperform another well-known indicator-based MOEA.
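
    The R2 indicator that guides such a search is commonly the mean, over a set of weight vectors, of the best weighted Tchebycheff scalarization achieved by the approximation set. The sketch below uses that standard definition and hypothetical bi-objective fronts; it is not necessarily the exact variant used by the authors.

```python
import numpy as np

def r2_indicator(front: np.ndarray, weights: np.ndarray, ideal: np.ndarray) -> float:
    """R2(A) = mean over weight vectors w of min over solutions a of
    max_j w_j * |a_j - z*_j| (weighted Tchebycheff, minimization objectives)."""
    diffs = np.abs(front[None, :, :] - ideal[None, None, :])   # (1, |A|, m)
    tcheby = np.max(weights[:, None, :] * diffs, axis=2)       # (|W|, |A|)
    return float(np.mean(np.min(tcheby, axis=1)))

# Two hypothetical bi-objective approximation sets; lower R2 is better.
ideal = np.array([0.0, 0.0])
weights = np.stack([np.linspace(0, 1, 11), 1 - np.linspace(0, 1, 11)], axis=1)
front_a = np.array([[0.1, 0.9], [0.5, 0.5], [0.9, 0.1]])
front_b = np.array([[0.2, 0.95], [0.7, 0.7], [0.95, 0.2]])
print("R2(A) =", round(r2_indicator(front_a, weights, ideal), 4))
print("R2(B) =", round(r2_indicator(front_b, weights, ideal), 4))
```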

  16. Optimizing Locomotion Controllers Using Biologically-Based Actuators and Objectives

    PubMed Central

    Wang, Jack M.; Hamner, Samuel R.; Delp, Scott L.; Koltun, Vladlen

    2015-01-01

    We present a technique for automatically synthesizing walking and running controllers for physically-simulated 3D humanoid characters. The sagittal hip, knee, and ankle degrees-of-freedom are actuated using a set of eight Hill-type musculotendon models in each leg, with biologically-motivated control laws. The parameters of these control laws are set by an optimization procedure that satisfies a number of locomotion task terms while minimizing a biological model of metabolic energy expenditure. We show that the use of biologically-based actuators and objectives measurably increases the realism of gaits generated by locomotion controllers that operate without the use of motion capture data, and that metabolic energy expenditure provides a simple and unifying measurement of effort that can be used for both walking and running control optimization. PMID:26251560

  17. Adaptive Estimation of Intravascular Shear Rate Based on Parameter Optimization

    NASA Astrophysics Data System (ADS)

    Nitta, Naotaka; Takeda, Naoto

    2008-05-01

    The relationship between the intravascular wall shear stress, which is governed by flow dynamics, and the progression of arteriosclerotic plaque has been clarified by various studies. Since the shear stress is determined by the viscosity coefficient and the shear rate, both factors must be estimated accurately. In this paper, an adaptive method for improving the accuracy of quantitative shear rate estimation was investigated. First, the parameter dependence of the estimated shear rate was investigated in terms of the differential window width and the number of averaged velocity profiles, based on simulation and experimental data, and the shear rate calculation was then optimized. The results revealed that the proposed adaptive method of shear rate estimation was effective in improving the accuracy of the shear rate calculation.
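
    The two parameters being tuned, the differential window width and the number of averaged velocity profiles, appear directly when the shear rate is taken as the local slope of the averaged velocity profile. The schematic below uses synthetic Poiseuille-like profiles and is an illustration of the estimation step, not the paper's adaptive algorithm.

```python
import numpy as np

def shear_rate(profiles: np.ndarray, dr: float, window: int) -> np.ndarray:
    """Estimate shear rate dv/dr from `profiles` (n_profiles x n_depths) by
    averaging the profiles and fitting a local line of `window` samples."""
    v = profiles.mean(axis=0)                     # average over acquired profiles
    half = window // 2
    rate = np.full(v.size, np.nan)
    for k in range(half, v.size - half):
        r = np.arange(-half, half + 1) * dr
        rate[k] = np.polyfit(r, v[k - half:k + half + 1], 1)[0]   # local slope
    return rate

# Synthetic parabolic (Poiseuille-like) profiles with measurement noise.
r = np.linspace(-4e-3, 4e-3, 81)                  # radius [m]
true_v = 0.5 * (1 - (r / 4e-3) ** 2)              # centreline velocity 0.5 m/s
rng = np.random.default_rng(0)
profiles = true_v + rng.normal(0, 0.02, (16, r.size))
est = shear_rate(profiles, dr=r[1] - r[0], window=9)
print("estimated near-wall shear rate [1/s]:", round(est[8], 1))
```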

  18. Optimization of Surface Acoustic Wave-Based Rate Sensors

    PubMed Central

    Xu, Fangqian; Wang, Wen; Shao, Xiuting; Liu, Xinlu; Liang, Yong

    2015-01-01

    The optimization of a surface acoustic wave (SAW)-based rate sensor incorporating metallic dot arrays was performed using the approach of partial-wave analysis in layered media. The optimal sensor chip design, including the choice of piezoelectric crystal and metallic dot material, the dot thickness, and the sensor operating frequency, was determined theoretically. The theoretical predictions were confirmed experimentally using the developed SAW sensor, composed of differential delay-line oscillators and a metallic dot array deposited along the acoustic wave propagation path of the SAW delay lines. A significant improvement in sensor sensitivity was achieved when 128° YX LiNbO3, a thicker Au dot array, and a low operating frequency were used to construct the sensor. PMID:26473865

  19. R2-Based Multi/Many-Objective Particle Swarm Optimization

    PubMed Central

    Toscano, Gregorio; Barron-Zambrano, Jose Hugo; Tello-Leal, Edgar

    2016-01-01

    We propose to couple the R2 performance measure with Particle Swarm Optimization in order to handle multi/many-objective problems. Our proposal shows that, through a well-designed interaction process, the metaheuristic can be kept almost unaltered, while the R2 performance measure removes the need for either an external archive or Pareto dominance to guide the search. The proposed approach is validated using several test problems and performance measures commonly adopted in the specialized literature. The results indicate that the proposed algorithm produces results that are competitive with those obtained by four well-known MOEAs. Additionally, we validate our proposal on many-objective optimization problems, where our approach showed its main strength, since it could outperform another well-known indicator-based MOEA. PMID:27656200

  20. Optimizing Dendritic Cell-Based Approaches for Cancer Immunotherapy

    PubMed Central

    Datta, Jashodeep; Terhune, Julia H.; Lowenfeld, Lea; Cintolo, Jessica A.; Xu, Shuwen; Roses, Robert E.; Czerniecki, Brian J.

    2014-01-01

    Dendritic cells (DC) are professional antigen-presenting cells uniquely suited for cancer immunotherapy. They induce primary immune responses, potentiate the effector functions of previously primed T-lymphocytes, and orchestrate communication between innate and adaptive immunity. The remarkable diversity of cytokine activation regimens, DC maturation states, and antigen-loading strategies employed in current DC-based vaccine design reflects an evolving, but incomplete, understanding of optimal DC immunobiology. In the clinical realm, existing DC-based cancer immunotherapy efforts have yielded encouraging but inconsistent results. Despite the recent U.S. Food and Drug Administration (FDA) approval of DC-based sipuleucel-T for metastatic castration-resistant prostate cancer, clinically effective DC immunotherapy as monotherapy for a majority of tumors remains a distant goal. Recent work has identified strategies that may allow for more potent “next-generation” DC vaccines. Additionally, multimodality approaches incorporating DC-based immunotherapy may improve clinical outcomes. PMID:25506283

  1. Air Quality Monitoring: Risk-Based Choices

    NASA Technical Reports Server (NTRS)

    James, John T.

    2009-01-01

    Air monitoring is secondary to rigid control of risks to air quality. Air quality monitoring requires us to target the credible residual risks. Constraints on monitoring devices are severe. Must transition from archival to real-time, on-board monitoring. Must provide data to crew in a way that they can interpret findings. Dust management and monitoring may be a major concern for exploration class missions.

  2. Optimization and determination of polycyclic aromatic hydrocarbons in biochar-based fertilizers.

    PubMed

    Chen, Ping; Zhou, Hui; Gan, Jay; Sun, Mingxing; Shang, Guofeng; Liu, Liang; Shen, Guoqing

    2015-03-01

    The agronomic benefit of biochar has attracted widespread attention to biochar-based fertilizers. However, the inevitable presence of polycyclic aromatic hydrocarbons in biochar is a matter of concern because of the health and ecological risks of these compounds. The strong adsorption of polycyclic aromatic hydrocarbons to biochar complicates their analysis and extraction from biochar-based fertilizers. In this study, we optimized and validated a method for determining the 16 priority polycyclic aromatic hydrocarbons in biochar-based fertilizers. Results showed that accelerated solvent extraction exhibited high extraction efficiency. Based on a Box-Behnken design with a triplicate central point, accelerated solvent extraction was used under the following optimal operational conditions: extraction temperature of 78°C, extraction time of 17 min, and two static cycles. The optimized method was validated by assessing the linearity of analysis, limit of detection, limit of quantification, recovery, and application to real samples. The results showed that the 16 polycyclic aromatic hydrocarbons exhibited good linearity, with a correlation coefficient of 0.996. The limits of detection varied between 0.001 (phenanthrene) and 0.021 mg/g (benzo[ghi]perylene), and the limits of quantification varied between 0.004 (phenanthrene) and 0.069 mg/g (benzo[ghi]perylene). The relative recoveries of the 16 polycyclic aromatic hydrocarbons were 70.26-102.99%.
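
    The Box-Behnken optimization of the three extraction factors (temperature, time, static cycles) can be reproduced schematically: build the 3-factor design in coded units, collect the recovery response, and fit a quadratic response surface whose optimum gives the operating settings. The response function below is a placeholder, not the measured recoveries.

```python
import numpy as np
from itertools import combinations

def box_behnken_3(center_points=3):
    """Three-factor Box-Behnken design in coded units (-1, 0, +1)."""
    runs = []
    for i, j in combinations(range(3), 2):         # each pair of factors at +/-1
        for a in (-1, 1):
            for b in (-1, 1):
                row = [0, 0, 0]
                row[i], row[j] = a, b
                runs.append(row)
    runs.extend([[0, 0, 0]] * center_points)       # replicated centre points
    return np.array(runs, dtype=float)

def quadratic_design_matrix(X):
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1 * x2, x1 * x3, x2 * x3,
                            x1 ** 2, x2 ** 2, x3 ** 2])

X = box_behnken_3()
# Placeholder "recovery" response peaking near a coded temperature of +0.3.
rng = np.random.default_rng(0)
y = 90 - 5 * (X[:, 0] - 0.3) ** 2 - 3 * X[:, 1] ** 2 - 2 * X[:, 2] ** 2 \
      + rng.normal(0, 0.3, len(X))
coeffs, *_ = np.linalg.lstsq(quadratic_design_matrix(X), y, rcond=None)
print("fitted quadratic coefficients:", np.round(coeffs, 2))
```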

  3. The biopharmaceutics risk assessment roadmap for optimizing clinical drug product performance.

    PubMed

    Selen, Arzu; Dickinson, Paul A; Müllertz, Anette; Crison, John R; Mistry, Hitesh B; Cruañes, Maria T; Martinez, Marilyn N; Lennernäs, Hans; Wigal, Tim L; Swinney, David C; Polli, James E; Serajuddin, Abu T M; Cook, Jack A; Dressman, Jennifer B

    2014-11-01

    The biopharmaceutics risk assessment roadmap (BioRAM) optimizes drug product development and performance by using therapy-driven target drug delivery profiles as a framework to achieve the desired therapeutic outcome. Hence, clinical relevance is directly built into early formulation development. Biopharmaceutics tools are used to identify and address potential challenges to optimize the drug product for patient benefit. For illustration, BioRAM is applied to four relatively common therapy-driven drug delivery scenarios: rapid therapeutic onset, multiphasic delivery, delayed therapeutic onset, and maintenance of target exposure. BioRAM considers the therapeutic target with the drug substance characteristics and enables collection of critical knowledge for development of a dosage form that can perform consistently for meeting the patient's needs. Accordingly, the key factors are identified and in vitro, in vivo, and in silico modeling and simulation techniques are used to elucidate the optimal drug delivery rate and pattern. BioRAM enables (1) feasibility assessment for the dosage form, (2) development and conduct of appropriate "learning and confirming" studies, (3) transparency in decision-making, (4) assurance of drug product quality during lifecycle management, and (5) development of robust linkages between the desired clinical outcome and the necessary product quality attributes for inclusion in the quality target product profile.

  4. Cost-effectiveness and harm-benefit analyses of risk-based screening strategies for breast cancer.

    PubMed

    Vilaprinyo, Ester; Forné, Carles; Carles, Misericordia; Sala, Maria; Pla, Roger; Castells, Xavier; Domingo, Laia; Rue, Montserrat

    2014-01-01

    The one-size-fits-all paradigm in organized screening of breast cancer is shifting towards a personalized approach. The present study has two objectives: 1) To perform an economic evaluation and to assess the harm-benefit ratios of screening strategies that vary in their intensity and interval ages based on breast cancer risk; and 2) To estimate the gain in terms of cost and harm reductions using risk-based screening with respect to the usual practice. We used a probabilistic model and input data from Spanish population registries and screening programs, as well as from clinical studies, to estimate the benefit, harm, and costs over time of 2,624 screening strategies, uniform or risk-based. We defined four risk groups, low, moderate-low, moderate-high and high, based on breast density, family history of breast cancer and personal history of breast biopsy. The risk-based strategies were obtained combining the exam periodicity (annual, biennial, triennial and quinquennial), the starting ages (40, 45 and 50 years) and the ending ages (69 and 74 years) in the four risk groups. Incremental cost-effectiveness and harm-benefit ratios were used to select the optimal strategies. Compared to risk-based strategies, the uniform ones result in a much lower benefit for a specific cost. Reductions close to 10% in costs and higher than 20% in false-positive results and overdiagnosed cases were obtained for risk-based strategies. Optimal screening is characterized by quinquennial or triennial periodicities for the low or moderate risk-groups and annual periodicity for the high-risk group. Risk-based strategies can reduce harm and costs. It is necessary to develop accurate measures of individual risk and to work on how to implement risk-based screening strategies.
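
    The selection step mentioned above (incremental cost-effectiveness with dominance pruning) can be illustrated compactly. In the hedged sketch below, the strategy names, costs, and effectiveness values are invented placeholders; only the frontier construction and the ICER definition (incremental cost divided by incremental effectiveness between successive non-dominated strategies) follow the standard procedure.

```python
# Hedged sketch: build an efficiency frontier of screening strategies and report ICERs.
# Costs and quality-adjusted life years (QALYs) are invented placeholders.
from dataclasses import dataclass

@dataclass
class Strategy:
    name: str
    cost: float    # expected cost per woman
    effect: float  # expected QALYs per woman

strategies = [
    Strategy("no screening",          0.0, 20.00),
    Strategy("quinquennial 50-69",  450.0, 20.10),
    Strategy("biennial 50-69",      900.0, 20.16),
    Strategy("annual risk-based",  1300.0, 20.20),
    Strategy("annual uniform",     1800.0, 20.17),  # dominated: costs more, gains less
]

def efficiency_frontier(strategies):
    # 1) sort by cost, drop strongly dominated strategies (more costly, not more effective)
    ordered = sorted(strategies, key=lambda s: (s.cost, -s.effect))
    kept = []
    for s in ordered:
        if kept and s.effect <= kept[-1].effect:
            continue
        kept.append(s)
    # 2) remove extended dominance: ICERs must increase along the frontier
    changed = True
    while changed:
        changed = False
        for i in range(1, len(kept) - 1):
            icer_prev = (kept[i].cost - kept[i - 1].cost) / (kept[i].effect - kept[i - 1].effect)
            icer_next = (kept[i + 1].cost - kept[i].cost) / (kept[i + 1].effect - kept[i].effect)
            if icer_prev >= icer_next:
                del kept[i]
                changed = True
                break
    return kept

frontier = efficiency_frontier(strategies)
for prev, cur in zip(frontier, frontier[1:]):
    icer = (cur.cost - prev.cost) / (cur.effect - prev.effect)
    print(f"{cur.name}: ICER = {icer:,.0f} per QALY gained vs {prev.name}")
```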

  5. Cost-Effectiveness and Harm-Benefit Analyses of Risk-Based Screening Strategies for Breast Cancer

    PubMed Central

    Carles, Misericordia; Sala, Maria; Pla, Roger; Castells, Xavier; Domingo, Laia; Rue, Montserrat

    2014-01-01

    The one-size-fits-all paradigm in organized screening of breast cancer is shifting towards a personalized approach. The present study has two objectives: 1) To perform an economic evaluation and to assess the harm-benefit ratios of screening strategies that vary in their intensity and interval ages based on breast cancer risk; and 2) To estimate the gain in terms of cost and harm reductions using risk-based screening with respect to the usual practice. We used a probabilistic model and input data from Spanish population registries and screening programs, as well as from clinical studies, to estimate the benefit, harm, and costs over time of 2,624 screening strategies, uniform or risk-based. We defined four risk groups, low, moderate-low, moderate-high and high, based on breast density, family history of breast cancer and personal history of breast biopsy. The risk-based strategies were obtained combining the exam periodicity (annual, biennial, triennial and quinquennial), the starting ages (40, 45 and 50 years) and the ending ages (69 and 74 years) in the four risk groups. Incremental cost-effectiveness and harm-benefit ratios were used to select the optimal strategies. Compared to risk-based strategies, the uniform ones result in a much lower benefit for a specific cost. Reductions close to 10% in costs and higher than 20% in false-positive results and overdiagnosed cases were obtained for risk-based strategies. Optimal screening is characterized by quinquennial or triennial periodicities for the low or moderate risk-groups and annual periodicity for the high-risk group. Risk-based strategies can reduce harm and costs. It is necessary to develop accurate measures of individual risk and to work on how to implement risk-based screening strategies. PMID:24498285

  6. Model based optimization of wind erosion control by tree shelterbelt for suitable land management

    NASA Astrophysics Data System (ADS)

    Bartus, M.; Farsang, A.; Szatmári, J.; Barta, K.

    2012-04-01

    The degradation of soil by wind erosion causes huge problems in many parts of the world. The wind erodes the upper, nutrient-rich part of the soil, so erosion causes a loss of soil productivity. In Hungary, the total length of tree shelterbelts was significantly reduced by collectivisation (1960-1989), and the area affected by wind erosion expanded. The tree shelterbelt is more than just a tool of wind erosion control; with good planning it can increase the yield. A shelterbelt reduces wind speed and changes the microclimate, providing better conditions for plant growth. The aim of our work is to estimate wind erosion risk and to find a way to reduce it with tree shelterbelts. A GIS-based model was created to calculate the risk, and the optimal windbreak position was defined to reduce the wind erosion risk to a minimum. The model is based on the German standard DIN 19706 (Ermittlung der Erosionsgefährdung von Böden durch Wind, Estimation of Wind Erosion Risk). The model uses five inputs: soil structure, soil carbon content, average yearly wind speed at 10 m height, the cultivated plants, and the height and position of the windbreaks. The study field (16 km²) was chosen near Szeged (SE Hungary). In our investigation, the cultivated plant species and the position and height of the windbreaks were varied. Different scenarios were built using land-management data from the last few years. A best-case scenario (zero wind erosion) and a worst-case scenario (no tree shelterbelt and the worst land use) were constructed to find the optimal windbreak position. The research proved that tree shelterbelts can provide proper protection against wind erosion, but for optimal land management the cultivated plant types should also be controlled. As a result of the research, a land management plan was defined to reduce the wind erosion risk on the study field, containing the positions of new tree shelterbelts to be planted and the optimal cultivation.

  7. Optimal Cutoff Points of Anthropometric Parameters to Identify High Coronary Heart Disease Risk in Korean Adults

    PubMed Central

    2016-01-01

    Several published studies have reported the need to change the cutoff points of anthropometric indices for obesity. We therefore conducted a cross-sectional study to estimate anthropometric cutoff points predicting high coronary heart disease (CHD) risk in Korean adults. We analyzed the Korean National Health and Nutrition Examination Survey data from 2007 to 2010. A total of 21,399 subjects aged 20 to 79 yr were included in this study (9,204 men and 12,195 women). We calculated the 10-yr Framingham coronary heart disease risk score for all individuals. We then estimated receiver-operating characteristic (ROC) curves for body mass index (BMI), waist circumference, and waist-to-height ratio to predict a 10-yr CHD risk of 20% or more. For sensitivity analysis, we conducted the same analysis for a 10-yr CHD risk of 10% or more. For a CHD risk of 20% or more, the area under the curve of waist-to-height ratio was the highest, followed by waist circumference and BMI. The optimal cutoff points in men and women were 22.7 kg/m2 and 23.3 kg/m2 for BMI, 83.2 cm and 79.7 cm for waist circumference, and 0.50 and 0.52 for waist-to-height ratio, respectively. In sensitivity analysis, the results were the same as those reported above except for BMI in women. Our results support the re-classification of anthropometric indices and suggest the clinical use of waist-to-height ratio as a marker for obesity in Korean adults. PMID:26770039
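
    The cutoff estimation described above amounts to picking a point on the ROC curve; a common (though not the only) criterion is Youden's J = sensitivity + specificity - 1. The sketch below applies it with scikit-learn to synthetic data, so the numbers, prevalence, and variable names are illustrative assumptions, not the study's values.

```python
# Hedged sketch: choose an anthropometric cutoff that maximizes Youden's J
# (sensitivity + specificity - 1) on a synthetic dataset.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(42)

# Synthetic waist-to-height ratios; high-CHD-risk subjects tend to have larger values.
n = 5000
high_risk = rng.random(n) < 0.15
wht_ratio = np.where(high_risk,
                     rng.normal(0.56, 0.05, n),
                     rng.normal(0.49, 0.05, n))

fpr, tpr, thresholds = roc_curve(high_risk, wht_ratio)
youden_j = tpr - fpr
best = np.argmax(youden_j)

print(f"AUC            : {roc_auc_score(high_risk, wht_ratio):.3f}")
print(f"optimal cutoff : {thresholds[best]:.3f}")
print(f"sensitivity    : {tpr[best]:.3f}, specificity: {1 - fpr[best]:.3f}")
```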

  8. Optimal Cutoff Points of Anthropometric Parameters to Identify High Coronary Heart Disease Risk in Korean Adults.

    PubMed

    Kim, Sang Hyuck; Choi, Hyunrim; Won, Chang Won; Kim, Byung-Sung

    2016-01-01

    Several published studies have reported the need to change the cutoff points of anthropometric indices for obesity. We therefore conducted a cross-sectional study to estimate anthropometric cutoff points predicting high coronary heart disease (CHD) risk in Korean adults. We analyzed the Korean National Health and Nutrition Examination Survey data from 2007 to 2010. A total of 21,399 subjects aged 20 to 79 yr were included in this study (9,204 men and 12,195 women). We calculated the 10-yr Framingham coronary heart disease risk score for all individuals. We then estimated receiver-operating characteristic (ROC) curves for body mass index (BMI), waist circumference, and waist-to-height ratio to predict a 10-yr CHD risk of 20% or more. For sensitivity analysis, we conducted the same analysis for a 10-yr CHD risk of 10% or more. For a CHD risk of 20% or more, the area under the curve of waist-to-height ratio was the highest, followed by waist circumference and BMI. The optimal cutoff points in men and women were 22.7 kg/m(2) and 23.3 kg/m(2) for BMI, 83.2 cm and 79.7 cm for waist circumference, and 0.50 and 0.52 for waist-to-height ratio, respectively. In sensitivity analysis, the results were the same as those reported above except for BMI in women. Our results support the re-classification of anthropometric indices and suggest the clinical use of waist-to-height ratio as a marker for obesity in Korean adults.

  9. Behavior-Based Safety and Occupational Risk Management

    ERIC Educational Resources Information Center

    Geller, E. Scott

    2005-01-01

    The behavior-based approach to managing occupational risk and preventing workplace injuries is reviewed. Unlike the typical top-down control approach to industrial safety, behavior-based safety (BBS) provides tools and procedures workers can use to take personal control of occupational risks. Strategies the author and his colleagues have been…

  10. Risk-based versus deterministic explosives safety criteria

    SciTech Connect

    Wright, R.E.

    1996-12-01

    The Department of Defense Explosives Safety Board (DDESB) is actively considering ways to apply risk-based approaches in its decision-making processes. As such, an understanding of the impact of converting to risk-based criteria is required. The objectives of this project are to examine the benefits and drawbacks of risk-based criteria and to define the impact of converting from deterministic to risk-based criteria. Conclusions will be couched in terms that allow meaningful comparisons of deterministic and risk-based approaches. To this end, direct comparisons of the consequences and impacts of both deterministic and risk-based criteria at selected military installations are made. Deterministic criteria used in this report are those in DoD 6055.9-STD, "DoD Ammunition and Explosives Safety Standard." Risk-based criteria selected for comparison are those used by the government of Switzerland, "Technical Requirements for the Storage of Ammunition (TLM 75)." The risk-based criteria used in Switzerland were selected because they have been successfully applied for over twenty-five years.

  11. Weather forecast-based optimization of integrated energy systems.

    SciTech Connect

    Zavala, V. M.; Constantinescu, E. M.; Krause, T.; Anitescu, M.

    2009-03-01

    In this work, we establish an on-line optimization framework to exploit detailed weather forecast information in the operation of integrated energy systems, such as buildings and photovoltaic/wind hybrid systems. We first discuss how the use of traditional reactive operation strategies that neglect the future evolution of the ambient conditions can translate into high operating costs. To overcome this problem, we propose the use of a supervisory dynamic optimization strategy that can lead to more proactive and cost-effective operations. The strategy is based on the solution of a receding-horizon stochastic dynamic optimization problem. This permits the direct incorporation of economic objectives, statistical forecast information, and operational constraints. To obtain the weather forecast information, we employ a state-of-the-art forecasting model initialized with real meteorological data. The statistical ambient information is obtained from a set of realizations generated by the weather model executed in an operational setting. We present proof-of-concept simulation studies to demonstrate that the proposed framework can lead to significant savings (more than 18% reduction) in operating costs.
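
    The supervisory strategy described above solves, at each step, an optimization over a forecast horizon and applies only the first decision. The following heavily simplified, hedged sketch (a first-order building model, quadratic comfort penalty, and a five-member forecast ensemble are assumptions for illustration, not the authors' formulation) shows the structure of such a receding-horizon loop.

```python
# Hedged sketch: receding-horizon (model-predictive) scheduling of a heater with
# thermal storage, averaging the cost over a few weather/price forecast scenarios.
import numpy as np
from scipy.optimize import minimize

HORIZON, STEPS = 12, 24            # look-ahead length and simulated steps
rng = np.random.default_rng(1)

def simulate_temp(T0, u, ambient):
    """First-order building model: relaxation towards ambient plus heating input."""
    T = [T0]
    for k in range(len(u)):
        T.append(T[-1] + 0.2 * (ambient[k] - T[-1]) + 0.8 * u[k])
    return np.array(T[1:])

def horizon_cost(u, T0, scenarios, price):
    cost = 0.0
    for ambient in scenarios:                  # expectation over the forecast ensemble
        T = simulate_temp(T0, u, ambient)
        cost += np.sum(price * u) + 2.0 * np.sum((T - 21.0) ** 2)
    return cost / len(scenarios)

T = 18.0
for t in range(STEPS):
    # Forecast ensemble: mean diurnal profile plus scenario noise (assumed).
    base = 10 + 5 * np.sin(2 * np.pi * np.arange(t, t + HORIZON) / 24)
    scenarios = [base + rng.normal(0, 1, HORIZON) for _ in range(5)]
    price = 0.2 + 0.1 * np.cos(2 * np.pi * np.arange(t, t + HORIZON) / 24)

    res = minimize(horizon_cost, x0=np.ones(HORIZON),
                   args=(T, scenarios, price),
                   bounds=[(0.0, 3.0)] * HORIZON, method="L-BFGS-B")
    u_now = res.x[0]                           # apply only the first control move
    T = T + 0.2 * (base[0] - T) + 0.8 * u_now
    if t % 6 == 0:
        print(f"hour {t:2d}: heat input {u_now:.2f}, indoor temp {T:.1f} degC")
```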

  12. Optimization-based mesh correction with volume and convexity constraints

    DOE PAGESBeta

    D'Elia, Marta; Ridzal, Denis; Peterson, Kara J.; Bochev, Pavel; Shashkov, Mikhail

    2016-02-24

    Here, we consider the problem of finding a mesh such that 1) it is the closest, with respect to a suitable metric, to a given source mesh having the same connectivity, and 2) the volumes of its cells match a set of prescribed positive values that are not necessarily equal to the cell volumes in the source mesh. This volume correction problem arises in important simulation contexts, such as satisfying a discrete geometric conservation law and solving transport equations by incremental remapping or similar semi-Lagrangian transport schemes. In this paper we formulate volume correction as a constrained optimization problem in which the distance to the source mesh defines an optimization objective, while the prescribed cell volumes, mesh validity and/or cell convexity specify the constraints. We solve this problem numerically using a sequential quadratic programming (SQP) method whose performance scales with the mesh size. To achieve scalable performance we develop a specialized multigrid-based preconditioner for optimality systems that arise in the application of the SQP method to the volume correction problem. Numerical examples illustrate the importance of volume correction, and showcase the accuracy, robustness and scalability of our approach.
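
    To make the formulation concrete, the hedged sketch below poses a one-dimensional analogue of the volume correction problem (cells are intervals, so a "volume" is a length) and solves it with SciPy's SLSQP solver. The mesh, target volumes, and objective are invented for illustration and do not reproduce the paper's multigrid-preconditioned solver.

```python
# Hedged sketch: 1-D analogue of optimization-based volume correction.
# Move interior nodes as little as possible while matching prescribed cell "volumes"
# (interval lengths) exactly, using an SQP-type solver (SLSQP).
import numpy as np
from scipy.optimize import minimize

source = np.linspace(0.0, 1.0, 11)             # source mesh nodes (endpoints fixed)
target_vol = np.full(10, 0.1)
target_vol[3:6] = [0.08, 0.12, 0.10]           # prescribed cell volumes (still sum to 1.0)

def objective(x_interior):
    x = np.concatenate(([source[0]], x_interior, [source[-1]]))
    return np.sum((x - source) ** 2)           # distance to the source mesh

def volume_residual(x_interior):
    x = np.concatenate(([source[0]], x_interior, [source[-1]]))
    # Equality constraints: first 9 cell volumes match (last cell is implied by endpoints).
    return np.diff(x)[:-1] - target_vol[:-1]

res = minimize(objective, x0=source[1:-1], method="SLSQP",
               constraints=[{"type": "eq", "fun": volume_residual}])

corrected = np.concatenate(([source[0]], res.x, [source[-1]]))
print("corrected cell volumes:", np.round(np.diff(corrected), 4))
print("max node displacement :", np.abs(corrected - source).max().round(4))
```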

  13. Optimization-based mesh correction with volume and convexity constraints

    NASA Astrophysics Data System (ADS)

    D'Elia, Marta; Ridzal, Denis; Peterson, Kara J.; Bochev, Pavel; Shashkov, Mikhail

    2016-05-01

    We consider the problem of finding a mesh such that 1) it is the closest, with respect to a suitable metric, to a given source mesh having the same connectivity, and 2) the volumes of its cells match a set of prescribed positive values that are not necessarily equal to the cell volumes in the source mesh. This volume correction problem arises in important simulation contexts, such as satisfying a discrete geometric conservation law and solving transport equations by incremental remapping or similar semi-Lagrangian transport schemes. In this paper we formulate volume correction as a constrained optimization problem in which the distance to the source mesh defines an optimization objective, while the prescribed cell volumes, mesh validity and/or cell convexity specify the constraints. We solve this problem numerically using a sequential quadratic programming (SQP) method whose performance scales with the mesh size. To achieve scalable performance we develop a specialized multigrid-based preconditioner for optimality systems that arise in the application of the SQP method to the volume correction problem. Numerical examples illustrate the importance of volume correction, and showcase the accuracy, robustness and scalability of our approach.

  14. Robustness-Based Design Optimization Under Data Uncertainty

    NASA Technical Reports Server (NTRS)

    Zaman, Kais; McDonald, Mark; Mahadevan, Sankaran; Green, Lawrence

    2010-01-01

    This paper proposes formulations and algorithms for design optimization under both aleatory (i.e., natural or physical variability) and epistemic uncertainty (i.e., imprecise probabilistic information), from the perspective of system robustness. The proposed formulations deal with epistemic uncertainty arising from both sparse and interval data without any assumption about the probability distributions of the random variables. A decoupled approach is proposed in this paper to un-nest the robustness-based design from the analysis of non-design epistemic variables to achieve computational efficiency. The proposed methods are illustrated for the upper stage design problem of a two-stage-to-orbit (TSTO) vehicle, where the information on the random design inputs is only available as sparse point and/or interval data. As collecting more data reduces uncertainty but increases cost, the effect of sample size on the optimality and robustness of the solution is also studied. A method is developed to determine the optimal sample size for sparse point data that leads to solutions of the design problem that are least sensitive to variations in the input random variables.

  15. Corner Sort for Pareto-Based Many-Objective Optimization.

    PubMed

    Wang, Handing; Yao, Xin

    2014-01-01

    Nondominated sorting plays an important role in Pareto-based multiobjective evolutionary algorithms (MOEAs). When faced with many-objective optimization problems, i.e., multiobjective optimization problems (MOPs) with more than three objectives, the number of comparisons needed in nondominated sorting becomes very large. In view of this, a new corner sort is proposed in this paper. Corner sort first adopts a fast and simple method to obtain a nondominated solution from the corner solutions, and then uses that nondominated solution to ignore the solutions it dominates, saving comparisons. Obtaining the nondominated solutions requires far fewer objective comparisons in corner sort. In order to evaluate its performance, several state-of-the-art nondominated sorts are compared with our corner sort on three kinds of artificial solution sets of MOPs and on the solution sets generated by MOEAs on benchmark problems. On the one hand, the experiments on artificial solution sets show the performance on solution sets with different distributions. On the other hand, the experiments on the solution sets generated by MOEAs show the influence that different sorts have on MOEAs. The results show that corner sort performs well, especially on many-objective optimization problems, and uses fewer comparisons than the other sorts.
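
    Nondominated sorting is built from pairwise dominance checks, and the cost the abstract refers to is the number of such objective comparisons. The sketch below shows the dominance test and a naive nondominated filter as a baseline; it is not the corner-sort algorithm itself, and the objective vectors are random placeholders.

```python
# Hedged sketch: Pareto dominance and a naive nondominated filter (minimization).
# This is the baseline that methods such as corner sort try to beat by cutting
# down the number of pairwise objective comparisons.
import numpy as np

def dominates(a, b):
    """True if solution a dominates b: no worse in every objective, better in at least one."""
    return np.all(a <= b) and np.any(a < b)

def nondominated(points):
    keep = []
    for i, p in enumerate(points):
        if not any(dominates(q, p) for j, q in enumerate(points) if j != i):
            keep.append(i)
    return keep

rng = np.random.default_rng(0)
objectives = rng.random((200, 5))              # 200 solutions, 5 objectives
front = nondominated(objectives)
print(f"{len(front)} of {len(objectives)} solutions are nondominated")
```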

  16. PSCL: predicting protein subcellular localization based on optimal functional domains.

    PubMed

    Wang, Kai; Hu, Le-Le; Shi, Xiao-He; Dong, Ying-Song; Li, Hai-Peng; Wen, Tie-Qiao

    2012-01-01

    It is well known that protein subcellular localizations are closely related to their functions. Although many computational methods and tools are available on the Internet, it is still necessary to develop new algorithms in this field to gain a better understanding of the complex mechanism of plant subcellular localization. Here, we provide a new web server named PSCL for plant protein subcellular localization prediction by employing optimized functional domains. After feature optimization, 848 optimal functional domains from InterPro were obtained to represent each protein. By calculating the distances to each of the seven categories, PSCL reports the possibility of a protein being located in each of those categories, in ascending order. On our dataset, PSCL achieved a first-order prediction accuracy of 75.7% by jackknife test. Gene Ontology enrichment analysis showed that catalytic activity, cellular process and metabolic process are strongly correlated with the localization of plant proteins. Finally, PSCL, a Linux-based web interface for the predictor, was designed and is accessible for public use at http://pscl.biosino.org/.

  17. Risk-based decisionmaking in the DOE: Challenges and status

    SciTech Connect

    Henry, C.J.; Alchowiak, J.; Moses, M.

    1995-12-31

    The primary mission of the Environmental Management Program is to protect human health and the environment, the first goal of which must be to address urgent risks and threats. Another is to provide for a safe workplace. Without credible risk assessments and good risk management practices, the central environmental goals cannot be met. Principles for risk analysis, which include principles for risk assessment, management, communication, and priority setting, were adopted. As recommended, Environmental Management is using risk-based decision making in its budget process and in the implementation of its program. The challenges presented in using a risk-based decision-making process are to integrate risk assessment methods and cultural and social values so as to produce meaningful priorities. The different laws and regulations governing the Department define risk differently when implementing activities to protect human health and the environment; therefore, assumptions and judgements in risk analysis vary. Currently, the Environmental Management Program is developing and improving a framework to incorporate risk into the budget process and to link the budget, compliance requirements, and risk reduction/pollution prevention activities.

  18. Developing points-based risk-scoring systems in the presence of competing risks.

    PubMed

    Austin, Peter C; Lee, Douglas S; D'Agostino, Ralph B; Fine, Jason P

    2016-09-30

    Predicting the occurrence of an adverse event over time is an important issue in clinical medicine. Clinical prediction models and associated points-based risk-scoring systems are popular statistical methods for summarizing the relationship between a multivariable set of patient risk factors and the risk of the occurrence of an adverse event. Points-based risk-scoring systems are popular amongst physicians as they permit a rapid assessment of patient risk without the use of computers or other electronic devices. The use of such points-based risk-scoring systems facilitates evidence-based clinical decision making. There is a growing interest in cause-specific mortality and in non-fatal outcomes. However, when considering these types of outcomes, one must account for competing risks whose occurrence precludes the occurrence of the event of interest. We describe how points-based risk-scoring systems can be developed in the presence of competing events. We illustrate the application of these methods by developing risk-scoring systems for predicting cardiovascular mortality in patients hospitalized with acute myocardial infarction. Code in the R statistical programming language is provided for the implementation of the described methods. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd. PMID:27197622
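
    Points-based systems of the kind discussed here map regression coefficients onto small integers so that risk can be summed by hand. The hedged sketch below illustrates that conversion only; the risk factors, coefficients, and the increment chosen to define "one point" are hypothetical, and the competing-risks machinery of the paper is not reproduced.

```python
# Hedged sketch: turn (hypothetical) model coefficients into an integer points system,
# in the spirit of Framingham-style scoring. One point is defined as the risk
# associated with a fixed reference increment (here, 5 years of age).
coefficients = {              # log-hazard ratios per unit (illustrative values)
    "age_per_year":   0.060,
    "diabetes":       0.650,
    "prior_mi":       0.900,
    "sbp_per_10mmHg": 0.180,
}

one_point = coefficients["age_per_year"] * 5     # "one point" = effect of 5 years of age

def points(value_by_factor):
    """Sum integer points for a patient; values are factor levels or differences from
    the reference (e.g. years above age 50, SBP above 120 in 10 mmHg steps)."""
    total = 0
    for factor, value in value_by_factor.items():
        total += round(coefficients[factor] * value / one_point)
    return total

patient = {"age_per_year": 22,       # 72 years old, reference 50
           "diabetes": 1,
           "prior_mi": 0,
           "sbp_per_10mmHg": 3}      # SBP 150, reference 120
print("risk score:", points(patient))
```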

  19. On optimizing distance-based similarity search for biological databases.

    PubMed

    Mao, Rui; Xu, Weijia; Ramakrishnan, Smriti; Nuckolls, Glen; Miranker, Daniel P

    2005-01-01

    Similarity search leveraging distance-based index structures is increasingly being used for both multimedia and biological database applications. We consider distance-based indexing for three important biological data types: protein k-mers with the metric PAM model, DNA k-mers with Hamming distance, and peptide fragmentation spectra with a pseudo-metric derived from cosine distance. To date, the primary driver of this research has been multimedia applications, where similarity functions are often Euclidean norms on high-dimensional feature vectors. We develop results showing that the character of these biological workloads is different from multimedia workloads. In particular, they are not intrinsically very high dimensional and deserve different optimization heuristics. Based on MVP-trees, we develop a pivot selection heuristic that seeks centers and show that it outperforms the most widely used corner-seeking heuristic. Similarly, we develop a data partitioning approach sensitive to the actual data distribution in lieu of median splits. PMID:16447992

  20. Parallel performance optimizations on unstructured mesh-based simulations

    SciTech Connect

    Sarje, Abhinav; Song, Sukhyun; Jacobsen, Douglas; Huck, Kevin; Hollingsworth, Jeffrey; Malony, Allen; Williams, Samuel; Oliker, Leonid

    2015-06-01

    This paper addresses two key parallelization challenges in the unstructured mesh-based ocean modeling code MPAS-Ocean, which uses a mesh based on Voronoi tessellations: (1) load imbalance across processes, and (2) unstructured data access patterns that inhibit intra- and inter-node performance. Our work analyzes the load imbalance due to naive partitioning of the mesh and develops methods to generate mesh partitionings with better load balance and reduced communication. Furthermore, we present methods that minimize both inter- and intra-node data movement and maximize data reuse. Our techniques include predictive ordering of data elements for higher cache efficiency, as well as communication reduction approaches. We present detailed performance data when running on thousands of cores using the Cray XC30 supercomputer and show that our optimization strategies can exceed the original performance by over 2×. Additionally, many of these solutions can be broadly applied to a wide variety of unstructured grid-based computations.

  1. Role of the parameters involved in the plan optimization based on the generalized equivalent uniform dose and radiobiological implications

    NASA Astrophysics Data System (ADS)

    Widesott, L.; Strigari, L.; Pressello, M. C.; Benassi, M.; Landoni, V.

    2008-03-01

    We investigated the role and the weight of the parameters involved in intensity-modulated radiation therapy (IMRT) optimization based on the generalized equivalent uniform dose (gEUD) method, for prostate and head-and-neck plans. We systematically varied the parameters (gEUDmax and weight) involved in the gEUD-based optimization of the rectal wall and parotid glands. We found that a proper value of the weight factor, still guaranteeing coverage of the planning target volumes, produced similar organ-at-risk dose-volume (DV) histograms for different gEUDmax with fixed a = 1. Most importantly, we formulated a simple relation that links the reference gEUDmax and the associated weight factor. As a secondary objective, we evaluated plans obtained with gEUD-based optimization against ones based on DV criteria, using normal tissue complication probability (NTCP) models. gEUD criteria seemed to improve sparing of the rectum and parotid glands with respect to DV-based optimization: the mean dose and the V40 and V50 values to the rectal wall decreased by about 10%, and the mean dose to the parotids decreased by about 20-30%. Beyond OAR sparing, we emphasize the halving of the OAR optimization time with the implementation of the gEUD-based cost function. Using NTCP models, we found differences between the two optimization criteria for the parotid glands, but not for the rectal wall.
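
    For reference, the generalized equivalent uniform dose is defined as gEUD = ((1/N) Σ_i d_i^a)^(1/a), where the d_i are voxel doses and a is a tissue-specific parameter: a = 1 gives the mean dose, and large positive a approaches the maximum dose. A minimal sketch with made-up voxel doses:

```python
# Hedged sketch: generalized equivalent uniform dose (gEUD) of a dose distribution.
# gEUD = ((1/N) * sum_i d_i**a) ** (1/a); a = 1 gives the mean dose, large positive a
# approaches the maximum dose (serial organs such as rectum), a < 1 weights cold spots.
import numpy as np

def geud(dose_gy, a):
    dose_gy = np.asarray(dose_gy, dtype=float)
    return float(np.mean(dose_gy ** a) ** (1.0 / a))

rectal_wall_doses = np.array([12.0, 25.0, 40.0, 55.0, 62.0, 70.0])  # illustrative voxel doses (Gy)
for a in (1, 4, 12):
    print(f"a = {a:>2}: gEUD = {geud(rectal_wall_doses, a):5.1f} Gy")
```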

  2. Traffic optimization in transport networks based on local routing

    NASA Astrophysics Data System (ADS)

    Scellato, S.; Fortuna, L.; Frasca, M.; Gómez-Gardeñes, J.; Latora, V.

    2010-01-01

    Congestion in transport networks is a topic of theoretical interest and practical importance. In this paper we study the flow of vehicles in urban street networks. In particular, we use a cellular automata model on a complex network to simulate the motion of vehicles along streets, coupled with a congestion-aware routing at street crossings. Such routing makes use of the knowledge of agents about traffic in nearby roads and allows the vehicles to dynamically update the routes towards their destinations. By implementing the model in real urban street patterns of various cities, we show that it is possible to achieve a global traffic optimization based on local agent decisions.

  3. An internet graph model based on trade-off optimization

    NASA Astrophysics Data System (ADS)

    Alvarez-Hamelin, J. I.; Schabanel, N.

    2004-03-01

    This paper presents a new model for the Internet graph (AS graph) based on the concept of heuristic trade-off optimization, introduced by Fabrikant, Koutsoupias and Papadimitriou in [CITE] to grow a random tree with a heavy-tailed degree distribution. We propose here a generalization of this approach to generate a general graph, as a candidate for modeling the Internet. We present the results of our simulations and an analysis of the standard parameters measured in our model, compared with measurements from the physical Internet graph.
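
    The trade-off heuristic referred to above grows a tree in which each arriving node attaches to the existing node that minimizes a weighted sum of Euclidean distance and hop distance to the root. The hedged sketch below re-implements only that tree version (not the generalization to general graphs proposed in the paper), with the node count and trade-off weight chosen arbitrarily.

```python
# Hedged sketch: FKP-style trade-off tree. Each new node i attaches to the existing
# node j minimizing  alpha * euclidean(i, j) + hops(j, root).
import numpy as np

def fkp_tree(n=2000, alpha=10.0, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.random((n, 2))                 # nodes dropped uniformly in the unit square
    hops = np.zeros(n, dtype=int)            # hop distance to the root (node 0)
    degree = np.zeros(n, dtype=int)
    for i in range(1, n):
        dist = np.linalg.norm(pos[:i] - pos[i], axis=1)
        j = int(np.argmin(alpha * dist + hops[:i]))   # trade-off: geometry vs. centrality
        hops[i] = hops[j] + 1
        degree[i] += 1
        degree[j] += 1
    return degree

degree = fkp_tree()
print("five largest degrees:", np.sort(degree)[-5:])  # heavy-tailed for suitable alpha
```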

  4. Utilization-Based Modeling and Optimization for Cognitive Radio Networks

    NASA Astrophysics Data System (ADS)

    Liu, Yanbing; Huang, Jun; Liu, Zhangxiong

    The cognitive radio technique promises to manage and allocate the scarce radio spectrum in highly varying and disparate modern environments. This paper considers a cognitive radio scenario composed of two queues, one for the primary (licensed) users and one for the cognitive (unlicensed) users. According to the Markov process, the system state equations are derived and an optimization model for the system is proposed. Next, the system performance is evaluated by calculations that show the rationality of our system model. Furthermore, the effects of different parameters on the system are discussed based on the experimental results.

  5. A filter-based evolutionary algorithm for constrained optimization.

    SciTech Connect

    Clevenger, Lauren M.; Hart, William Eugene; Ferguson, Lauren Ann

    2004-02-01

    We introduce a filter-based evolutionary algorithm (FEA) for constrained optimization. The filter used by an FEA explicitly imposes the concept of dominance on a partially ordered solution set. We show that the algorithm is provably robust for both linear and nonlinear problems and constraints. FEAs use a finite pattern of mutation offsets, and our analysis is closely related to recent convergence results for pattern search methods. We discuss how properties of this pattern impact the ability of an FEA to converge to a constrained local optimum.

  6. Optimal Constellation Design for Satellite Based Augmentation System

    NASA Astrophysics Data System (ADS)

    Kawano, Isao

    The Global Positioning System (GPS) is widely utilized in daily life, for instance in car navigation. The Wide Area Augmentation System (WAAS) and the Local Area Augmentation System (LAAS) have been proposed to provide GPS with better navigation accuracy and integrity capability. A Satellite Based Augmentation System (SBAS) is a kind of WAAS, and the Multi-functional Transportation Satellite (MTSAT) has been developed in Japan. To improve navigation accuracy most efficiently, augmentation satellites should be placed so as to minimize the Geometric Dilution of Precision (GDOP) of the constellation. In this paper the result of an optimal constellation design for SBAS is shown.
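
    GDOP measures how satellite geometry amplifies ranging error: stacking unit line-of-sight vectors into a geometry matrix G with rows [u_x, u_y, u_z, 1], GDOP = sqrt(trace((GᵀG)⁻¹)). The hedged sketch below evaluates it for made-up satellite positions; an SBAS constellation design loop would minimize this quantity over candidate augmentation-satellite slots.

```python
# Hedged sketch: Geometric Dilution of Precision (GDOP) from satellite positions.
# Rows of the geometry matrix are [unit line-of-sight components, 1].
import numpy as np

def gdop(sat_positions_m, receiver_m):
    rows = []
    for sat in sat_positions_m:
        los = sat - receiver_m
        rows.append(np.append(los / np.linalg.norm(los), 1.0))
    G = np.array(rows)
    return float(np.sqrt(np.trace(np.linalg.inv(G.T @ G))))

receiver = np.array([0.0, 0.0, 6.371e6])             # receiver near the Earth's surface
satellites = np.array([                               # illustrative positions, metres
    [ 1.5e7,  1.0e7, 2.0e7],
    [-1.2e7,  1.4e7, 1.9e7],
    [ 1.0e7, -1.6e7, 1.8e7],
    [-1.4e7, -1.1e7, 2.1e7],
    [ 0.2e7,  0.3e7, 2.6e7],                          # candidate augmentation satellite
])
print(f"GDOP with augmentation satellite   : {gdop(satellites, receiver):.2f}")
print(f"GDOP without augmentation satellite: {gdop(satellites[:4], receiver):.2f}")
```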

  7. Adjoint-based optimization for understanding and suppressing jet noise

    NASA Astrophysics Data System (ADS)

    Freund, Jonathan B.

    2011-08-01

    Advanced simulation tools, particularly large-eddy simulation techniques, are becoming capable of making quality predictions of jet noise for realistic nozzle geometries and at engineering-relevant flow conditions. Increasing computer resources will be a key factor in improving these predictions still further. Quality prediction, however, is only a necessary condition for the use of such simulations in design optimization. Predictions do not themselves lead to quieter designs. They must be interpreted or harnessed in some way that leads to design improvements. As yet, such simulations have not yielded any simplifying principles that offer general design guidance. The turbulence mechanisms leading to jet noise remain poorly described in their complexity. In this light, we have implemented and demonstrated an aeroacoustic adjoint-based optimization technique that automatically calculates gradients pointing in the direction in which to adjust controls in order to improve designs. This is done with only a single flow solution and a solution of an adjoint system, which is computed at a cost comparable to that of the flow solution. Optimization requires iterations, but having the gradient information provided via the adjoint accelerates convergence in a manner that is insensitive to the number of parameters to be optimized. This paper, which follows from a presentation at the 2010 IUTAM Symposium on Computational Aero-Acoustics for Aircraft Noise Prediction, reviews recent and ongoing efforts by the author and co-workers. It provides a new formulation of the basic approach and demonstrates the approach on a series of model flows, culminating with a preliminary result for a turbulent jet.

  8. Development of optimization-based probabilistic earthquake scenarios for the city of Tehran

    NASA Astrophysics Data System (ADS)

    Zolfaghari, M. R.; Peyghaleh, E.

    2016-01-01

    This paper presents the methodology and a practical example for the application of an optimization process to select earthquake scenarios that best represent the probabilistic earthquake hazard in a given region. The method is based on simulation of a large dataset of potential earthquakes, representing the long-term seismotectonic characteristics of a given region. The simulation process uses Monte-Carlo simulation and regional seismogenic source parameters to generate a synthetic earthquake catalogue consisting of a large number of earthquakes, each characterized by magnitude, location, focal depth and fault characteristics. Such a catalogue provides full distributions of events in time, space and size; however, it demands large computational power when used for risk assessment, particularly when other sources of uncertainty are involved in the process. To reduce the number of selected earthquake scenarios, a mixed-integer linear programming formulation is developed in this study. This approach results in a reduced set of optimization-based probabilistic earthquake scenarios, while maintaining the shape of the hazard curves and the full probabilistic picture, by minimizing the error between hazard curves driven by the full and reduced sets of synthetic earthquake scenarios. To test the model, the regional seismotectonic and seismogenic characteristics of northern Iran are used to simulate a set of 10,000 years' worth of events consisting of some 84,000 earthquakes. The optimization model is then performed multiple times with various input data, taking into account the probabilistic seismic hazard for the city of Tehran as the main constraint. The sensitivity of the selected scenarios to the user-specified site/return-period error weight is also assessed. The methodology could speed up full probabilistic earthquake studies such as seismic hazard and risk assessment. The reduced set is representative of the contributions of all possible earthquakes; however, it requires far less

  9. Optimism

    PubMed Central

    Carver, Charles S.; Scheier, Michael F.; Segerstrom, Suzanne C.

    2010-01-01

    Optimism is an individual difference variable that reflects the extent to which people hold generalized favorable expectancies for their future. Higher levels of optimism have been related prospectively to better subjective well-being in times of adversity or difficulty (i.e., controlling for previous well-being). Consistent with such findings, optimism has been linked to higher levels of engagement coping and lower levels of avoidance, or disengagement, coping. There is evidence that optimism is associated with taking proactive steps to protect one's health, whereas pessimism is associated with health-damaging behaviors. Consistent with such findings, optimism is also related to indicators of better physical health. The energetic, task-focused approach that optimists take to goals also relates to benefits in the socioeconomic world. Some evidence suggests that optimism relates to more persistence in educational efforts and to higher later income. Optimists also appear to fare better than pessimists in relationships. Although there are instances in which optimism fails to convey an advantage, and instances in which it may convey a disadvantage, those instances are relatively rare. In sum, the behavioral patterns of optimists appear to provide models of living for others to learn from. PMID:20170998

  10. An Improved Teaching-Learning-Based Optimization with the Social Character of PSO for Global Optimization.

    PubMed

    Zou, Feng; Chen, Debao; Wang, Jiangtao

    2016-01-01

    An improved teaching-learning-based optimization combined with the social character of PSO (TLBO-PSO), which considers the influence of the teacher's behavior on the students and the mean grade of the class, is proposed in this paper to find global solutions of function optimization problems. In this method, the teacher phase of TLBO is modified: the new position of an individual is determined by its old position, the mean position, and the best position of the current generation. The method overcomes the disadvantage that the evolution of the original TLBO might stop when the mean position of the students equals the position of the teacher. To decrease the computational cost of the algorithm, the process of removing duplicate individuals in the original TLBO is not adopted in the improved algorithm. Moreover, the probability of local convergence of the improved method is decreased by a mutation operator. The effectiveness of the proposed method is tested on some benchmark functions, and the results are competitive with respect to some other methods. PMID:27057157
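
    The modified teacher phase described above drives each learner with its old position, the class mean, and the current best (teacher) position. The hedged sketch below shows a TLBO-style update of that form on a standard test function; the coefficients and greedy acceptance are illustrative and do not reproduce the paper's exact update.

```python
# Hedged sketch: a TLBO-style teacher-phase update blended with the class mean and
# the teacher (best) position, applied to minimizing the sphere function.
import numpy as np

def sphere(x):
    return np.sum(x ** 2, axis=-1)

rng = np.random.default_rng(3)
pop = rng.uniform(-5, 5, size=(30, 10))          # 30 learners, 10-dimensional problem

for generation in range(200):
    fitness = sphere(pop)
    teacher = pop[np.argmin(fitness)]            # best (teacher) position
    mean = pop.mean(axis=0)                      # mean position of the class
    tf = rng.integers(1, 3)                      # teaching factor in {1, 2}
    r1, r2 = rng.random(pop.shape), rng.random(pop.shape)
    # New position depends on the old position, the mean position and the teacher.
    candidate = pop + r1 * (teacher - tf * mean) + r2 * (teacher - pop)
    improved = sphere(candidate) < fitness
    pop[improved] = candidate[improved]          # greedy acceptance, as in TLBO

print("best objective after 200 generations:", sphere(pop).min())
```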

  11. Optimal multi-floor plant layout based on the mathematical programming and particle swarm optimization

    PubMed Central

    LEE, Chang Jun

    2015-01-01

    In research on plant layout optimization, the main goal is to minimize the costs of pipelines and pumping between connected equipment under various constraints. However, what previous studies lack is the transformation of various heuristics and safety regulations into mathematical equations. For example, proper safety distances between pieces of equipment have to be maintained to prevent dangerous accidents in a complex plant. Moreover, most studies have handled single-floor plants, whereas many multi-floor plants have been constructed over the last decade. Therefore, an algorithm that handles various regulations and multi-floor plants should be developed. In this study, a Mixed Integer Non-Linear Programming (MINLP) problem including safety distances, maintenance spaces, etc. is formulated based on mathematical equations. The objective function is the sum of pipeline and pumping costs. Also, various safety and maintenance issues are transformed into inequality or equality constraints. However, this problem is very hard to solve because of its complex nonlinear constraints, so conventional derivative-based MINLP solvers cannot be used. In this study, the Particle Swarm Optimization (PSO) technique is therefore employed. An ethylene oxide plant is used as an illustration to verify the efficacy of this study. PMID:26027708

  12. Optimal multi-floor plant layout based on the mathematical programming and particle swarm optimization.

    PubMed

    Lee, Chang Jun

    2015-01-01

    In research on plant layout optimization, the main goal is to minimize the costs of pipelines and pumping between connected equipment under various constraints. However, what previous studies lack is the transformation of various heuristics and safety regulations into mathematical equations. For example, proper safety distances between pieces of equipment have to be maintained to prevent dangerous accidents in a complex plant. Moreover, most studies have handled single-floor plants, whereas many multi-floor plants have been constructed over the last decade. Therefore, an algorithm that handles various regulations and multi-floor plants should be developed. In this study, a Mixed Integer Non-Linear Programming (MINLP) problem including safety distances, maintenance spaces, etc. is formulated based on mathematical equations. The objective function is the sum of pipeline and pumping costs. Also, various safety and maintenance issues are transformed into inequality or equality constraints. However, this problem is very hard to solve because of its complex nonlinear constraints, so conventional derivative-based MINLP solvers cannot be used. In this study, the Particle Swarm Optimization (PSO) technique is therefore employed. An ethylene oxide plant is used as an illustration to verify the efficacy of this study.

  13. An Improved Teaching-Learning-Based Optimization with the Social Character of PSO for Global Optimization

    PubMed Central

    Zou, Feng; Chen, Debao; Wang, Jiangtao

    2016-01-01

    An improved teaching-learning-based optimization combined with the social character of PSO (TLBO-PSO), which considers the influence of the teacher's behavior on the students and the mean grade of the class, is proposed in this paper to find global solutions of function optimization problems. In this method, the teacher phase of TLBO is modified: the new position of an individual is determined by its old position, the mean position, and the best position of the current generation. The method overcomes the disadvantage that the evolution of the original TLBO might stop when the mean position of the students equals the position of the teacher. To decrease the computational cost of the algorithm, the process of removing duplicate individuals in the original TLBO is not adopted in the improved algorithm. Moreover, the probability of local convergence of the improved method is decreased by a mutation operator. The effectiveness of the proposed method is tested on some benchmark functions, and the results are competitive with respect to some other methods. PMID:27057157

  14. Optimal scheduling of multispacecraft refueling based on cooperative maneuver

    NASA Astrophysics Data System (ADS)

    Du, Bingxiao; Zhao, Yong; Dutta, Atri; Yu, Jing; Chen, Xiaoqian

    2015-06-01

    The scheduling of multispacecraft refueling based on cooperative maneuvers in a circular orbit is studied in this paper. In the proposed scheme, both the single service vehicle (SSV) and the target satellite (TS) perform orbital transfers to complete the rendezvous at the service places. After a TS is refueled by the SSV, it returns to its original working slot to continue its normal function. In this way, the SSV refuels the TSs one by one. A MINLP model for the mission is first built; then a two-level hybrid optimization approach is proposed for determining the strategy, and the optimal solution is successfully obtained by using an algorithm that combines a Multi-island Genetic Algorithm with Sequential Quadratic Programming. Results show the cooperative strategy can save around 27.31% in fuel compared with the non-cooperative strategy, in which only the SSV would maneuver, in the example considered. Three conclusions can be drawn from the numerical simulations for the evenly distributed constellations. Firstly, in the cooperative strategy one of the service positions is the initial location of the SSV, the other service positions are also target slots (i.e., not all targets need to maneuver), and there may be more than one TS serviced at a given service position. Secondly, the efficiency gains of the cooperative strategy are higher for larger transferred fuel mass. Thirdly, the cooperative strategy is less efficient for targets with larger spacecraft mass.

  15. A Localization Method for Multistatic SAR Based on Convex Optimization.

    PubMed

    Zhong, Xuqi; Wu, Junjie; Yang, Jianyu; Sun, Zhichao; Huang, Yuling; Li, Zhongyu

    2015-01-01

    In traditional localization methods for Synthetic Aperture Radar (SAR), the bistatic range sum (BRS) estimation and Doppler centroid estimation (DCE) are needed for the calculation of target localization. However, the DCE error greatly influences the localization accuracy. In this paper, a localization method for multistatic SAR based on convex optimization without DCE is investigated, and the influence of the BRS estimation error on localization accuracy is analysed. Firstly, by using the information of each transmitter and receiver (T/R) pair and the target in the SAR image, the model functions of the T/R pairs are constructed. Each model function's maximum lies on the circumference of the ellipse which is the iso-range for its T/R pair. Secondly, the target function, whose maximum is located at the position of the target, is obtained by adding all model functions. Thirdly, the target function is optimized based on the gradient descent method to obtain the position of the target. During the iteration process, principal component analysis is implemented to guarantee the accuracy of the method and improve the computational efficiency. The proposed method only utilizes the BRSs of a target in several focused images from multistatic SAR. Therefore, compared with traditional localization methods for SAR, the proposed method greatly improves the localization accuracy. The effectiveness of the localization approach is validated by a simulation experiment. PMID:26566031

  16. Optimization-based multiple-point geostatistics: A sparse way

    NASA Astrophysics Data System (ADS)

    Kalantari, Sadegh; Abdollahifard, Mohammad Javad

    2016-10-01

    In multiple-point simulation the image should be synthesized consistently with the given training image and the hard conditioning data. Existing sequential simulation methods usually lead to error accumulation which is hardly manageable in later steps. Optimization-based methods are capable of handling inconsistencies by iteratively refining the simulation grid. In this paper, the multiple-point stochastic simulation problem is formulated in an optimization-based framework using a sparse model. The sparse model allows each patch to be constructed as a superposition of a few atoms of a dictionary formed from training patterns, leading to a significant increase in the variability of the patches. To control the creativity of the model, a local histogram matching method is proposed. Furthermore, effective solutions are proposed for different issues arising in multiple-point simulation. In order to handle hard conditioning data, a weighted matching pursuit method is developed in this paper. Moreover, a simple and efficient thresholding method is developed which allows working with categorical variables. The experiments show that the proposed method produces acceptable realizations in terms of pattern reproduction, increases the variability of the realizations, and properly handles numerous conditioning data.

  17. Research on Taxiway Path Optimization Based on Conflict Detection

    PubMed Central

    Zhou, Hang; Jiang, Xinxin

    2015-01-01

    Taxiway path planning is one of the effective measures for making full use of airport resources, and optimized paths can ensure the safety of aircraft during taxiing. In this paper, taxiway path planning based on conflict detection is considered. The specific steps are as follows: first, the A* algorithm is improved by adding a conflict detection strategy to search for the shortest safe path in the static taxiway network. Then, according to the taxiing speed of the aircraft, a timetable for each node is determined, and the safety interval is treated as a constraint to judge whether there is a conflict. An intelligent initial path planning model is established based on these results. Finally, an example is given in an airport simulation environment, where conflicts are detected and resolved to ensure safety. The results indicate that the model established in this paper is effective and feasible. A comparison of the improved A* algorithm with other intelligent algorithms shows that the improved A* algorithm has clear advantages: it not only optimizes the taxiway path but also ensures the safety of the taxiing process and improves operational efficiency. PMID:26226485
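
    A minimal way to see the coupling of path search and conflict detection is to plan the shortest taxi route first and then check the node-passage times against a reservation table of already-cleared aircraft, replanning if the safety interval is violated. The sketch below does this with networkx; the toy taxiway graph, taxi speed, and safety interval are assumptions, and the paper's model folds the conflict check into the search itself.

```python
# Hedged sketch: plan a taxi route with A*, then check node-passage times against a
# reservation table; if the safety interval is violated, penalize edges into the
# conflicting node and replan. Graph, speeds and intervals are illustrative only.
import networkx as nx

G = nx.Graph()
edges = [("gate", "A", 300), ("A", "B", 400), ("B", "rwy", 350),
         ("A", "C", 500), ("C", "rwy", 450)]          # taxiway segments with length (m)
G.add_weighted_edges_from(edges)

TAXI_SPEED = 8.0          # m/s (assumed)
SAFETY_INTERVAL = 30.0    # s   (assumed minimum separation at a node)
reservations = {"B": [80.0]}   # another aircraft occupies node B around t = 80 s

def passage_times(path, start_time=0.0):
    t, out, prev = start_time, {}, path[0]
    out[prev] = t
    for node in path[1:]:
        t += G[prev][node]["weight"] / TAXI_SPEED
        out[node] = t
        prev = node
    return out

def first_conflict(times):
    for node, t in times.items():
        for booked in reservations.get(node, []):
            if abs(t - booked) < SAFETY_INTERVAL:
                return node
    return None

path = nx.astar_path(G, "gate", "rwy", weight="weight")
conflict = first_conflict(passage_times(path))
if conflict:
    # crude resolution: make edges into the conflicting node expensive and replan
    for nbr in G[conflict]:
        G[nbr][conflict]["weight"] += 10_000
    path = nx.astar_path(G, "gate", "rwy", weight="weight")

print("planned path:", path)
print("passage times:", {k: round(v, 1) for k, v in passage_times(path).items()})
```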

  18. A Localization Method for Multistatic SAR Based on Convex Optimization

    PubMed Central

    2015-01-01

    In traditional localization methods for Synthetic Aperture Radar (SAR), the bistatic range sum (BRS) estimation and Doppler centroid estimation (DCE) are needed for the calculation of target localization. However, the DCE error greatly influences the localization accuracy. In this paper, a localization method for multistatic SAR based on convex optimization without DCE is investigated, and the influence of the BRS estimation error on localization accuracy is analysed. Firstly, by using the information of each transmitter and receiver (T/R) pair and the target in the SAR image, the model functions of the T/R pairs are constructed. Each model function's maximum lies on the circumference of the ellipse which is the iso-range for its T/R pair. Secondly, the target function, whose maximum is located at the position of the target, is obtained by adding all model functions. Thirdly, the target function is optimized based on the gradient descent method to obtain the position of the target. During the iteration process, principal component analysis is implemented to guarantee the accuracy of the method and improve the computational efficiency. The proposed method only utilizes the BRSs of a target in several focused images from multistatic SAR. Therefore, compared with traditional localization methods for SAR, the proposed method greatly improves the localization accuracy. The effectiveness of the localization approach is validated by a simulation experiment. PMID:26566031

  19. [Study on the land use optimization based on PPI].

    PubMed

    Wu, Xiao-Feng; Li, Ting

    2012-03-01

    Land use type and management method, which are greatly influenced by human activities, are among the most important factors in non-point pollution. Based on the collection and analysis of non-point pollution control methods and the concept of the three ecological fronts, 9 optimized land use scenarios were designed according to a rationality analysis of the current land use situation in 3 representative small watersheds in the Miyun reservoir basin. Taking the Caojialu watershed as an example, the environmental influence of the different scenarios was analyzed and compared on the basis of the potential pollution index (PPI) and the river-section potential pollution index (R-PPI), and the best combination scenario was found. Land use scenario design and comparison based on PPI and R-PPI can help to find the best combination of land use type and management method, to optimize the spatial distribution and management of land use in a basin, to reduce soil erosion, and to provide powerful support for the formulation of land use planning and pollution control projects.

  20. Research on Taxiway Path Optimization Based on Conflict Detection.

    PubMed

    Zhou, Hang; Jiang, Xinxin

    2015-01-01

    Taxiway path planning is one of the effective measures for making full use of airport resources, and optimized paths can ensure the safety of aircraft during taxiing. In this paper, taxiway path planning based on conflict detection is considered. The specific steps are as follows: first, the A* algorithm is improved by adding a conflict detection strategy to search for the shortest safe path in the static taxiway network. Then, according to the taxiing speed of the aircraft, a timetable for each node is determined, and the safety interval is treated as a constraint to judge whether there is a conflict. An intelligent initial path planning model is established based on these results. Finally, an example is given in an airport simulation environment, where conflicts are detected and resolved to ensure safety. The results indicate that the model established in this paper is effective and feasible. A comparison of the improved A* algorithm with other intelligent algorithms shows that the improved A* algorithm has clear advantages: it not only optimizes the taxiway path but also ensures the safety of the taxiing process and improves operational efficiency.

  1. Utility-based optimization of phase II/III programs.

    PubMed

    Kirchner, Marietta; Kieser, Meinhard; Götte, Heiko; Schüler, Armin

    2016-01-30

    Phase II and phase III trials play a crucial role in drug development programs. They are costly and time consuming and, because of high failure rates in late development stages, at the same time risky investments. Commonly, sample size calculation of phase III is based on the treatment effect observed in phase II. Therefore, planning of phases II and III can be linked. The performance of the phase II/III program crucially depends on the allocation of the resources to phases II and III by appropriate choice of the sample size and the rule applied to decide whether to stop the program after phase II or to proceed. We present methods for a program-wise phase II/III planning that aim at determining optimal phase II sample sizes and go/no-go decisions in a time-to-event setting. Optimization is based on a utility function that takes into account (fixed and variable) costs of the drug development program and potential gains after successful launch. The proposed methods are illustrated by application to a variety of scenarios typically met in oncology drug development.
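
    The core computation in such programme-level planning is an expected utility that weighs phase II and III costs against the probability of a successful launch, as a function of the phase II sample size and the go/no-go threshold. The deliberately simplified, hedged sketch below uses a normal approximation for a log hazard-ratio estimate and invented cost, gain, and effect figures; it illustrates the structure of the optimization, not the authors' model.

```python
# Hedged sketch: expected utility of a phase II/III programme as a function of the
# number of phase II events, under a normal approximation for log hazard ratios.
# Costs, gains, true effect, thresholds and the fixed phase III size are placeholders.
import numpy as np
from scipy import stats

true_log_hr = np.log(0.75)      # assumed true treatment effect
go_threshold = np.log(0.85)     # go to phase III only if the observed effect is stronger
d3 = 300                        # phase III events (fixed here for simplicity)
cost_per_event2, cost_phase3, gain = 0.1, 20.0, 200.0   # in millions (illustrative)

def expected_utility(d2):
    se2 = 2.0 / np.sqrt(d2)                       # s.e. of the log HR with d2 events
    # Probability that the phase II estimate falls below the go threshold.
    p_go = stats.norm.cdf(go_threshold, loc=true_log_hr, scale=se2)
    # Approximate phase III power at the true effect (one-sided 2.5% level).
    se3 = 2.0 / np.sqrt(d3)
    power3 = stats.norm.cdf(-stats.norm.ppf(0.975) - true_log_hr / se3)
    p_success = p_go * power3
    return -cost_per_event2 * d2 - p_go * cost_phase3 + p_success * gain

grid = np.arange(20, 301, 10)
utilities = [expected_utility(d2) for d2 in grid]
best = grid[int(np.argmax(utilities))]
print(f"optimal phase II size ~ {best} events, expected utility {max(utilities):.1f}")
```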

  2. Research on Taxiway Path Optimization Based on Conflict Detection.

    PubMed

    Zhou, Hang; Jiang, Xinxin

    2015-01-01

    Taxiway path planning is one of the effective measures for making full use of airport resources, and optimized paths can ensure the safety of aircraft during taxiing. In this paper, taxiway path planning based on conflict detection is considered. The specific steps are as follows: first, the A* algorithm is improved by adding a conflict detection strategy to search for the shortest safe path in the static taxiway network. Then, according to the taxiing speed of the aircraft, a timetable for each node is determined, and the safety interval is treated as a constraint to judge whether there is a conflict. An intelligent initial path planning model is established based on these results. Finally, an example is given in an airport simulation environment, where conflicts are detected and resolved to ensure safety. The results indicate that the model established in this paper is effective and feasible. A comparison of the improved A* algorithm with other intelligent algorithms shows that the improved A* algorithm has clear advantages: it not only optimizes the taxiway path but also ensures the safety of the taxiing process and improves operational efficiency. PMID:26226485

  3. Discrete-Time ARMAv Model-Based Optimal Sensor Placement

    SciTech Connect

    Song Wei; Dyke, Shirley J.

    2008-07-08

    This paper concentrates on the optimal sensor placement problem in ambient vibration based structural health monitoring. More specifically, the paper examines the covariance of estimated parameters during system identification using auto-regressive and moving average vector (ARMAv) model. By utilizing the discrete-time steady state Kalman filter, this paper realizes the structure's finite element (FE) model under broad-band white noise excitations using an ARMAv model. Based on the asymptotic distribution of the parameter estimates of the ARMAv model, both a theoretical closed form and a numerical estimate form of the covariance of the estimates are obtained. Introducing the information entropy (differential entropy) measure, as well as various matrix norms, this paper attempts to find a reasonable measure to the uncertainties embedded in the ARMAv model estimates. Thus, it is possible to select the optimal sensor placement that would lead to the smallest uncertainties during the ARMAv identification process. Two numerical examples are provided to demonstrate the methodology and compare the sensor placement results upon various measures.

  4. Design of ultra-compact triplexer with function-expansion based topology optimization.

    PubMed

    Zhang, Zejun; Tsuji, Yasuhide; Yasui, Takashi; Hirayama, Koichi

    2015-02-23

    In this paper, in order to optimize wavelength-selective photonic devices using the function-expansion-based topology optimization method, several expansion functions are considered and the influence of each expansion function on the optimized structure is investigated. Although the Fourier series is conventionally used in the function-expansion-based method, the optimized structure sometimes has a complicated refractive index distribution. Therefore, we employed a sampling function and a pyramid function to obtain a simpler structure through the optimal design. A triplexer was designed using our method, and the comparison between the optimized structures based on the three expansion functions is also discussed in detail. PMID:25836433

  5. A universal optimization strategy for ant colony optimization algorithms based on the Physarum-inspired mathematical model.

    PubMed

    Zhang, Zili; Gao, Chao; Liu, Yuxin; Qian, Tao

    2014-09-01

    Ant colony optimization (ACO) algorithms often fall into local optima and show low search efficiency when solving the travelling salesman problem (TSP). To address these shortcomings, this paper proposes a universal optimization strategy for updating the pheromone matrix in ACO algorithms. The new strategy takes advantage of the critical paths preserved in the process of evolving adaptive networks of the Physarum-inspired mathematical model (PMM). The optimized algorithms, denoted as PMACO algorithms, can enhance the amount of pheromone on the critical paths and promote the exploitation of the optimal solution. Experimental results on synthetic and real networks show that the PMACO algorithms are more efficient and robust than the traditional ACO algorithms and are adaptable to solving the TSP with single or multiple objectives. Meanwhile, we further analyse the influence of parameters on the performance of the PMACO algorithms. Based on these analyses, the best values of these parameters are worked out for the TSP.
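
    The following is a hedged sketch of the pheromone-update strategy as described (not the authors' code): standard evaporation and tour deposits, followed by an extra deposit on edges flagged as critical. How the critical edges are obtained from the Physarum model is not reproduced here, so critical_edges is simply passed in.

```python
import numpy as np

def pmaco_pheromone_update(tau, tours, tour_lengths, critical_edges,
                           rho=0.1, q=1.0, epsilon=0.5):
    """One pheromone update: evaporation, standard ACO deposit, extra deposit on critical edges."""
    tau = (1.0 - rho) * tau                      # evaporation
    for tour, length in zip(tours, tour_lengths):
        deposit = q / length
        for i, j in zip(tour, tour[1:] + tour[:1]):
            tau[i, j] += deposit                 # standard ant deposit on the closed tour
            tau[j, i] += deposit
    for i, j in critical_edges:                  # Physarum-inspired reinforcement
        tau[i, j] += epsilon * tau[i, j]
        tau[j, i] += epsilon * tau[j, i]
    return tau

# Tiny 4-city example with one ant tour and one hypothetical critical edge.
n = 4
tau0 = np.ones((n, n))
tours = [[0, 2, 1, 3]]
lengths = [10.0]
tau1 = pmaco_pheromone_update(tau0, tours, lengths, critical_edges=[(0, 2)])
print(tau1.round(3))
```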

  6. CFD-Based Design Optimization Tool Developed for Subsonic Inlet

    NASA Technical Reports Server (NTRS)

    1995-01-01

    The traditional approach to the design of engine inlets for commercial transport aircraft is a tedious process that ends with a less-than-optimum design. With the advent of high-speed computers and the availability of more accurate and reliable computational fluid dynamics (CFD) solvers, numerical optimization processes can effectively be used to design an aerodynamic inlet lip that enhances engine performance. The designers' experience at Boeing Corporation showed that for a peak Mach number on the inlet surface beyond some upper limit, the performance of the engine degrades excessively. Thus, our objective was to optimize efficiency (minimize the peak Mach number) at maximum cruise without compromising performance at other operating conditions. Using the CFD code NPARC and the numerical optimization code ADS, the NASA Lewis Research Center, in collaboration with Boeing, developed an integrated procedure at Lewis to find the optimum shape of a subsonic inlet lip. We used a GRAPE-based three-dimensional grid generator to help automate the optimization procedure. The inlet lip shape at the crown and the keel was described as a superellipse, and the superellipse exponents and radii ratios were considered as design variables. Three operating conditions (cruise, takeoff, and rolling takeoff) were considered in this study. Three-dimensional Euler computations were carried out to obtain the flow field. At the initial design, the peak Mach numbers for maximum cruise, takeoff, and rolling takeoff conditions were 0.88, 1.772, and 1.61, respectively. The acceptable upper limits on the takeoff and rolling takeoff Mach numbers were 1.55 and 1.45. Since the initial design provided by Boeing was found to be optimum with respect to the maximum cruise condition, the sum of the peak Mach numbers at takeoff and rolling takeoff was minimized in the current study while the maximum cruise Mach number was constrained to be close to that at the existing design. With this objective, the

  7. Risk-based verification, validation, and accreditation process

    NASA Astrophysics Data System (ADS)

    Elele, James N.; Smith, Jeremy

    2010-04-01

    This paper presents a risk-based Verification, Validation, and Accreditation (VV&A) process for Models and Simulations (M&S). Recently, the emphasis for M&S used to support Department of Defense (DoD) acquisition has been on basing the level of resources allocated to establishing the credibility of the M&S on the risks associated with the decision being supported by the M&S. In addition, DoD VV&A regulations recommend tailoring the V&V process to allow efficient use of resources. However, one problem is that no methodology is specified for such tailoring. The BMV&V has developed a risk-based process that implements tailoring of the VV&A activities based on risk. Our process incorporates MIL-STD 3022 for new M&S. For legacy M&S, the process starts by first assessing the current risk level of the M&S based on its credibility attributes, defined through its Capability, Accuracy, and Usability, relative to the articulated Intended Use Statement (IUS). If the risk is low, the M&S is credible for the application, and no further V&V is required. If the risk is medium or high, the Accreditation Authority determines whether the M&S can be accepted as-is or the risk should be mitigated. If the Accreditation Authority is willing to accept the risks, a Conditional Accreditation is made. If the risks associated with using the M&S as-is are deemed too high to accept, a Risk Mitigation/Accreditation Plan is developed to guide the process. The implementation of such a risk mitigation plan is finally documented through an Accreditation Support Package.
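
    A minimal, hedged sketch of the tailoring logic summarized above (not the BMV&V tool itself); the mapping from capability, accuracy, and usability scores to a risk level is a hypothetical placeholder rule.

```python
def assess_risk(capability, accuracy, usability):
    """Map credibility attributes (0 = poor, 1 = good) to a coarse risk level (placeholder rule)."""
    credibility = min(capability, accuracy, usability)
    if credibility >= 0.8:
        return "low"
    return "medium" if credibility >= 0.5 else "high"

def vva_decision(capability, accuracy, usability, authority_accepts_risk):
    """Risk-based tailoring: accredit as-is, accept conditionally, or plan risk mitigation."""
    risk = assess_risk(capability, accuracy, usability)
    if risk == "low":
        return "accredit as-is; no further V&V required"
    if authority_accepts_risk:
        return f"conditional accreditation ({risk} risk accepted)"
    return f"develop risk mitigation/accreditation plan ({risk} risk)"

print(vva_decision(0.9, 0.85, 0.9, authority_accepts_risk=False))
print(vva_decision(0.7, 0.6, 0.9, authority_accepts_risk=True))
```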

  8. A Triangle Mesh Standardization Method Based on Particle Swarm Optimization.

    PubMed

    Wang, Wuli; Duan, Liming; Bai, Yang; Wang, Haoyu; Shao, Hui; Zhong, Siyang

    2016-01-01

    To enhance the triangle quality of a reconstructed triangle mesh, a novel triangle mesh standardization method based on particle swarm optimization (PSO) is proposed. First, each vertex of the mesh and its first-order vertices are fitted to a cubic curve surface using the least squares method. Then, with the locally fitted surface as the search region of PSO and the best average quality of the local triangles as the objective, the vertex position of the mesh is regulated. Finally, the threshold of the normal angle between the original vertex and regulated vertex is used to determine whether the vertex needs to be adjusted, to preserve the detailed features of the mesh. Compared with existing methods, experimental results show that the proposed method can effectively improve the triangle quality of the mesh while preserving the geometric features and details of the original mesh. PMID:27509129
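
    As a hedged, simplified sketch of the per-vertex step (not the paper's code), a small particle swarm searches a locally fitted quadratic patch, standing in for the cubic fit, for the position that maximizes the mean quality of the surrounding one-ring triangles; the patch coefficients and ring coordinates are hypothetical.

```python
import numpy as np

def triangle_quality(a, b, c):
    """Ratio-style quality in (0, 1]: 1 for an equilateral triangle, near 0 for slivers."""
    ab, bc, ca = b - a, c - b, a - c
    double_area = np.linalg.norm(np.cross(ab, -ca))          # = 2 * triangle area
    edge_sq_sum = np.dot(ab, ab) + np.dot(bc, bc) + np.dot(ca, ca)
    return 2.0 * np.sqrt(3.0) * double_area / edge_sq_sum

def fitted_patch(xy):
    """Hypothetical local surface fit z = f(x, y) around the vertex (stands in for the cubic fit)."""
    x, y = xy
    return np.array([x, y, 0.1 * x * x - 0.05 * x * y + 0.08 * y * y])

def mean_ring_quality(xy, ring):
    v = fitted_patch(xy)
    return np.mean([triangle_quality(v, ring[i], ring[(i + 1) % len(ring)])
                    for i in range(len(ring))])

def pso_regulate_vertex(ring, n_particles=20, iters=60, seed=1):
    """PSO over the patch parameters, maximizing the mean quality of the one-ring triangles."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-0.5, 0.5, size=(n_particles, 2))      # search region on the patch
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([mean_ring_quality(p, ring) for p in pos])
    gbest = pbest[np.argmax(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([mean_ring_quality(p, ring) for p in pos])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmax(pbest_val)].copy()
    return fitted_patch(gbest), mean_ring_quality(gbest, ring)

# One-ring of neighbor vertices around the vertex being regulated (hypothetical coordinates).
ring = [np.array(p, dtype=float) for p in [(1, 0, 0), (0.4, 0.9, 0.05), (-0.6, 0.7, 0.0),
                                           (-1, -0.1, 0.02), (-0.3, -0.9, 0.0), (0.7, -0.8, 0.04)]]
print(pso_regulate_vertex(ring))
```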

  9. Vision-based coaching: optimizing resources for leader development.

    PubMed

    Passarelli, Angela M

    2015-01-01

    Leaders develop in the direction of their dreams, not in the direction of their deficits. Yet many coaching interactions intended to promote a leader's development fail to leverage the benefits of the individual's personal vision. Drawing on intentional change theory, this article postulates that coaching interactions that emphasize a leader's personal vision (future aspirations and core identity) evoke a psychophysiological state characterized by positive emotions, cognitive openness, and optimal neurobiological functioning for complex goal pursuit. Vision-based coaching, via this psychophysiological state, generates a host of relational and motivational resources critical to the developmental process. These resources include: formation of a positive coaching relationship, expansion of the leader's identity, increased vitality, activation of learning goals, and a promotion-orientation. Organizational outcomes as well as limitations to vision-based coaching are discussed. PMID:25926803

  10. Vision-based coaching: optimizing resources for leader development

    PubMed Central

    Passarelli, Angela M.

    2015-01-01

    Leaders develop in the direction of their dreams, not in the direction of their deficits. Yet many coaching interactions intended to promote a leader’s development fail to leverage the benefits of the individual’s personal vision. Drawing on intentional change theory, this article postulates that coaching interactions that emphasize a leader’s personal vision (future aspirations and core identity) evoke a psychophysiological state characterized by positive emotions, cognitive openness, and optimal neurobiological functioning for complex goal pursuit. Vision-based coaching, via this psychophysiological state, generates a host of relational and motivational resources critical to the developmental process. These resources include: formation of a positive coaching relationship, expansion of the leader’s identity, increased vitality, activation of learning goals, and a promotion–orientation. Organizational outcomes as well as limitations to vision-based coaching are discussed. PMID:25926803

  11. A Triangle Mesh Standardization Method Based on Particle Swarm Optimization

    PubMed Central

    Duan, Liming; Bai, Yang; Wang, Haoyu; Shao, Hui; Zhong, Siyang

    2016-01-01

    To enhance the triangle quality of a reconstructed triangle mesh, a novel triangle mesh standardization method based on particle swarm optimization (PSO) is proposed. First, each vertex of the mesh and its first-order vertices are fitted to a cubic curve surface using the least squares method. Then, with the locally fitted surface as the search region of PSO and the best average quality of the local triangles as the objective, the vertex position of the mesh is regulated. Finally, the threshold of the normal angle between the original vertex and regulated vertex is used to determine whether the vertex needs to be adjusted, to preserve the detailed features of the mesh. Compared with existing methods, experimental results show that the proposed method can effectively improve the triangle quality of the mesh while preserving the geometric features and details of the original mesh. PMID:27509129

  12. Vision-based coaching: optimizing resources for leader development.

    PubMed

    Passarelli, Angela M

    2015-01-01

    Leaders develop in the direction of their dreams, not in the direction of their deficits. Yet many coaching interactions intended to promote a leader's development fail to leverage the benefits of the individual's personal vision. Drawing on intentional change theory, this article postulates that coaching interactions that emphasize a leader's personal vision (future aspirations and core identity) evoke a psychophysiological state characterized by positive emotions, cognitive openness, and optimal neurobiological functioning for complex goal pursuit. Vision-based coaching, via this psychophysiological state, generates a host of relational and motivational resources critical to the developmental process. These resources include: formation of a positive coaching relationship, expansion of the leader's identity, increased vitality, activation of learning goals, and a promotion-orientation. Organizational outcomes as well as limitations to vision-based coaching are discussed.

  13. Optimizing bulk milk dioxin monitoring based on costs and effectiveness.

    PubMed

    Lascano-Alcoser, V H; Velthuis, A G J; van der Fels-Klerx, H J; Hoogenboom, L A P; Oude Lansink, A G J M

    2013-07-01

    concentration equal to the EC maximum level. This study shows that the effectiveness of finding an incident depends not only on the ratio at which collected truck samples are mixed into a pooled sample for testing (aimed at detecting a certain concentration), but also on the number of collected truck samples. In conclusion, the optimal cost-effective monitoring depends on the number of contaminated farms and the concentration targeted for detection. The models and study results offer quantitative support to risk managers of food industries and food safety authorities.

  14. Developing interpretable models with optimized set reduction for identifying high risk software components

    NASA Technical Reports Server (NTRS)

    Briand, Lionel C.; Basili, Victor R.; Hetmanski, Christopher J.

    1993-01-01

    Applying equal testing and verification effort to all parts of a software system is not very efficient, especially when resources are limited and scheduling is tight. Therefore, one needs to be able to differentiate low/high fault frequency components so that testing/verification effort can be concentrated where needed. Such a strategy is expected to detect more faults and thus improve the resulting reliability of the overall system. This paper presents the Optimized Set Reduction approach for constructing such models, intended to fulfill specific software engineering needs. Our approach to classification is to measure the software system and build multivariate stochastic models for predicting high risk system components. We present experimental results obtained by classifying Ada components into two classes: is or is not likely to generate faults during system and acceptance test. Also, we evaluate the accuracy of the model and the insights it provides into the error making process.

  15. Risk-based zoning for urbanizing floodplains.

    PubMed

    Porse, Erik

    2014-01-01

    Urban floodplain development brings economic benefits and enhanced flood risks. Rapidly growing cities must often balance the economic benefits and increased risks of floodplain settlement. Planning can provide multiple flood mitigation and environmental benefits by combining traditional structural measures such as levees, increasingly popular landscape and design features (green infrastructure), and non-structural measures such as zoning. Flexibility in both structural and non-structural options, including zoning procedures, can reduce flood risks. This paper presents a linear programming formulation to assess cost-effective urban floodplain development decisions that consider benefits and costs of development along with expected flood damages. It uses a probabilistic approach to identify combinations of land-use allocations (residential and commercial development, flood channels, distributed runoff management) and zoning regulations (development zones in channel) to maximize benefits. The model is applied to a floodplain planning analysis for an urbanizing region in the Baja Sur peninsula of Mexico. The analysis demonstrates how (1) economic benefits drive floodplain development, (2) flexible zoning can improve economic returns, and (3) cities can use landscapes, enhanced by technology and design, to manage floods. The framework can incorporate additional green infrastructure benefits, and bridges typical disciplinary gaps for planning and engineering.

  16. Risk-based zoning for urbanizing floodplains.

    PubMed

    Porse, Erik

    2014-01-01

    Urban floodplain development brings economic benefits and enhanced flood risks. Rapidly growing cities must often balance the economic benefits and increased risks of floodplain settlement. Planning can provide multiple flood mitigation and environmental benefits by combining traditional structural measures such as levees, increasingly popular landscape and design features (green infrastructure), and non-structural measures such as zoning. Flexibility in both structural and non-structural options, including zoning procedures, can reduce flood risks. This paper presents a linear programming formulation to assess cost-effective urban floodplain development decisions that consider benefits and costs of development along with expected flood damages. It uses a probabilistic approach to identify combinations of land-use allocations (residential and commercial development, flood channels, distributed runoff management) and zoning regulations (development zones in channel) to maximize benefits. The model is applied to a floodplain planning analysis for an urbanizing region in the Baja Sur peninsula of Mexico. The analysis demonstrates how (1) economic benefits drive floodplain development, (2) flexible zoning can improve economic returns, and (3) cities can use landscapes, enhanced by technology and design, to manage floods. The framework can incorporate additional green infrastructure benefits, and bridges typical disciplinary gaps for planning and engineering. PMID:25500464
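
    As a hedged sketch of the kind of linear program described in this record (not the authors' model), the snippet below chooses land-use areas to maximize net benefit subject to total area, a minimum flood-channel allocation, and a distributed-runoff requirement; all coefficients are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

# Decision variables (hectares): [residential, commercial, flood_channel, green_runoff]
# Hypothetical net benefit per hectare (USD/yr), with expected flood damages already netted
# out of the developed uses and mitigation value credited to channel/green areas.
net_benefit = np.array([40_000.0, 65_000.0, 5_000.0, 8_000.0])

total_area = 200.0          # hectares available in the floodplain
min_channel = 25.0          # zoning rule: minimum flood-channel area
min_green_fraction = 0.10   # distributed runoff management share of developed land

# linprog minimizes, so negate the benefit vector.
c = -net_benefit
A_ub = [
    [1, 1, 1, 1],                                    # land available
    [min_green_fraction, min_green_fraction, 0, -1]  # green >= 10% of developed area
]
b_ub = [total_area, 0.0]
bounds = [(0, None), (0, None), (min_channel, None), (0, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print("allocation (ha):", res.x.round(1), "net benefit (USD/yr):", round(-res.fun))
```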

  17. Microwave-based medical diagnosis using particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Modiri, Arezoo

    This dissertation proposes and investigates a novel architecture intended for microwave-based medical diagnosis (MBMD). Furthermore, this investigation proposes novel modifications of particle swarm optimization algorithm for achieving enhanced convergence performance. MBMD has been investigated through a variety of innovative techniques in the literature since the 1990's and has shown significant promise in early detection of some specific health threats. In comparison to the X-ray- and gamma-ray-based diagnostic tools, MBMD does not expose patients to ionizing radiation; and due to the maturity of microwave technology, it lends itself to miniaturization of the supporting systems. This modality has been shown to be effective in detecting breast malignancy, and hence, this study focuses on the same modality. A novel radiator device and detection technique is proposed and investigated in this dissertation. As expected, hardware design and implementation are of paramount importance in such a study, and a good deal of research, analysis, and evaluation has been done in this regard which will be reported in ensuing chapters of this dissertation. It is noteworthy that an important element of any detection system is the algorithm used for extracting signatures. Herein, the strong intrinsic potential of the swarm-intelligence-based algorithms in solving complicated electromagnetic problems is brought to bear. This task is accomplished through addressing both mathematical and electromagnetic problems. These problems are called benchmark problems throughout this dissertation, since they have known answers. After evaluating the performance of the algorithm for the chosen benchmark problems, the algorithm is applied to MBMD tumor detection problem. The chosen benchmark problems have already been tackled by solution techniques other than particle swarm optimization (PSO) algorithm, the results of which can be found in the literature. However, due to the relatively high level

  18. Graph-based optimization algorithm and software on kidney exchanges.

    PubMed

    Chen, Yanhua; Li, Yijiang; Kalbfleisch, John D; Zhou, Yan; Leichtman, Alan; Song, Peter X-K

    2012-07-01

    Kidney transplantation is typically the most effective treatment for patients with end-stage renal disease. However, the supply of kidneys is far short of the fast-growing demand. Kidney paired donation (KPD) programs provide an innovative approach for increasing the number of available kidneys. In a KPD program, willing but incompatible donor-candidate pairs may exchange donor organs to achieve mutual benefit. Recently, research on exchanges initiated by altruistic donors (ADs) has attracted great attention because the resultant organ exchange mechanisms offer advantages that increase the effectiveness of KPD programs. Currently, most KPD programs focus on rule-based strategies of prioritizing kidney donation. In this paper, we consider and compare two graph-based organ allocation algorithms to optimize an outcome-based strategy defined by the overall expected utility of kidney exchanges in a KPD program with both incompatible pairs and ADs. We develop an interactive software-based decision support system to model, monitor, and visualize a conceptual KPD program, which aims to assist clinicians in the evaluation of different kidney allocation strategies. Using this system, we demonstrate empirically that an outcome-based strategy for kidney exchanges leads to improvement in both the quantity and quality of kidney transplantation through comprehensive simulation experiments. PMID:22542649
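
    As a rough, hedged sketch of a graph-based allocation step (not the paper's algorithms), the snippet enumerates 2- and 3-way exchange cycles in a hypothetical compatibility digraph and greedily selects vertex-disjoint cycles by total expected utility; production KPD programs typically solve an exact integer program instead.

```python
from itertools import permutations

# Hypothetical directed compatibility graph: edge (i, j) means pair i's donor can give to
# pair j's candidate, weighted by an expected-utility score for that transplant.
edges = {
    (1, 2): 0.9, (2, 1): 0.7,
    (2, 3): 0.8, (3, 2): 0.6,
    (3, 1): 0.5, (1, 3): 0.4,
    (4, 1): 0.9, (1, 4): 0.85,
}

def enumerate_cycles(edges, max_len=3):
    """All directed cycles of length 2 or 3 (2- and 3-way exchanges), with total utility."""
    nodes = sorted({v for e in edges for v in e})
    cycles = []
    for k in (2, 3):
        if k > max_len:
            break
        for combo in permutations(nodes, k):
            if combo[0] != min(combo):       # avoid counting rotations of the same cycle twice
                continue
            ring = list(combo) + [combo[0]]
            if all((a, b) in edges for a, b in zip(ring, ring[1:])):
                cycles.append((combo, sum(edges[a, b] for a, b in zip(ring, ring[1:]))))
    return cycles

def greedy_disjoint_selection(cycles):
    """Pick vertex-disjoint cycles in decreasing utility order (may be suboptimal vs. exact IP)."""
    used, chosen = set(), []
    for combo, util in sorted(cycles, key=lambda c: -c[1]):
        if not used.intersection(combo):
            chosen.append((combo, util))
            used.update(combo)
    return chosen

print(greedy_disjoint_selection(enumerate_cycles(edges)))
```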

  19. 12 CFR Appendix C to Part 704 - Risk-Based Capital Credit Risk-Weight Categories

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Commission (SEC) and that complies with the SEC's net capital regulations (17 CFR 240.15c3(1)); and (2) A... 12 Banks and Banking 7 2012-01-01 2012-01-01 false Risk-Based Capital Credit Risk-Weight Categories C Appendix C to Part 704 Banks and Banking NATIONAL CREDIT UNION ADMINISTRATION...

  20. Bone Mineral Density and Fracture Risk Assessment to Optimize Prosthesis Selection in Total Hip Replacement

    PubMed Central

    Pétursson, Þröstur; Edmunds, Kyle Joseph; Gíslason, Magnús Kjartan; Magnússon, Benedikt; Magnúsdóttir, Gígja; Halldórsson, Grétar; Jónsson, Halldór; Gargiulo, Paolo

    2015-01-01

    The variability in patient outcome and propensity for surgical complications in total hip replacement (THR) necessitates the development of a comprehensive, quantitative methodology for prescribing the optimal type of prosthetic stem: cemented or cementless. The objective of the research presented herein was to describe a novel approach to this problem as a first step towards creating a patient-specific, presurgical application for determining the optimal prosthesis procedure. Finite element analysis (FEA) and bone mineral density (BMD) calculations were performed with ten voluntary primary THR patients to estimate the status of their operative femurs before surgery. A compilation model of the press-fitting procedure was generated to define a fracture risk index (FRI) from incurred forces on the periprosthetic femoral head. Comparing these values to patient age, sex, and gender elicited a high degree of variability between patients grouped by implant procedure, reinforcing the notion that age and gender alone are poor indicators for prescribing prosthesis type. Additionally, correlating FRI and BMD measurements indicated that at least two of the ten patients may have received nonideal implants. This investigation highlights the utility of our model as a foundation for presurgical software applications to assist orthopedic surgeons with selecting THR prostheses. PMID:26417376

  1. Bone Mineral Density and Fracture Risk Assessment to Optimize Prosthesis Selection in Total Hip Replacement.

    PubMed

    Pétursson, Þröstur; Edmunds, Kyle Joseph; Gíslason, Magnús Kjartan; Magnússon, Benedikt; Magnúsdóttir, Gígja; Halldórsson, Grétar; Jónsson, Halldór; Gargiulo, Paolo

    2015-01-01

    The variability in patient outcome and propensity for surgical complications in total hip replacement (THR) necessitates the development of a comprehensive, quantitative methodology for prescribing the optimal type of prosthetic stem: cemented or cementless. The objective of the research presented herein was to describe a novel approach to this problem as a first step towards creating a patient-specific, presurgical application for determining the optimal prosthesis procedure. Finite element analysis (FEA) and bone mineral density (BMD) calculations were performed with ten voluntary primary THR patients to estimate the status of their operative femurs before surgery. A compilation model of the press-fitting procedure was generated to define a fracture risk index (FRI) from incurred forces on the periprosthetic femoral head. Comparing these values to patient age, sex, and gender elicited a high degree of variability between patients grouped by implant procedure, reinforcing the notion that age and gender alone are poor indicators for prescribing prosthesis type. Additionally, correlating FRI and BMD measurements indicated that at least two of the ten patients may have received nonideal implants. This investigation highlights the utility of our model as a foundation for presurgical software applications to assist orthopedic surgeons with selecting THR prostheses. PMID:26417376

  2. Biomechanical optimization of subject-specific implant positioning for femoral head resurfacing to reduce fracture risk.

    PubMed

    Miles, Brad; Kolos, Elizabeth; Appleyard, Richard; Theodore, Willy; Zheng, Keke; Li, Qing; Ruys, Andrew J

    2016-07-01

    Peri-prosthetic femoral neck fracture after femoral head resurfacing can be either patient-related or surgical technique-related. The study aimed to develop a patient-specific finite element modelling technique that can reliably predict an optimal implant position and give minimal strain in the peri-prosthetic bone tissue, thereby reducing the risk of peri-prosthetic femoral neck fracture. The subject-specific finite element modelling was integrated with optimization techniques including design of experiments to best possibly position the implant for achieving minimal strain for femoral head resurfacing. Sample space was defined by varying the floating point to find the extremes at which the cylindrical reaming operation actually cuts into the femoral neck causing a notch during hip resurfacing surgery. The study showed that the location of the maximum strain, for all non-notching positions, was on the superior femoral neck, in the peri-prosthetic bone tissue. It demonstrated that varus positioning resulted in a higher strain, while valgus positioning reduced the strain, and further that neutral version had a lower strain. PMID:27098752

  3. Balancing Cost and Risk by Optimizing the High-Level Waste and Low-Activity Waste Vitrification

    SciTech Connect

    Hrma, Pavel R.; Vienna, John D.

    2000-02-23

    In the currently used melters, the waste loading for nearly all high-level waste (HLW) is limited by crystallization. Above a certain level of waste loading, precipitation, settling, and accumulation of crystalline phases can cause severe processing problems and shorten the melter lifetime. To decrease the cost without putting the vitrification process at an unreasonable risk, several options, such as developing melters that operate above the liquidus temperature of glass, can be considered. Alternatively, if the melter is stirred, either mechanically, by bubbling, or by temperature gradients in induction heating, the melt can contain a substantial fraction of a crystalline phase that would not settle because it would be removed from the melter with the glass. In addition, an induction melter can be nearly completely drained. For current melters that operate at a fixed temperature of 1150°C, an optimized glass formulation within currently accepted constraints has been developed. This approach is based on mathematically formulated relationships between glass properties and glass composition. Finally, re-evaluating the liquidus-temperature constraint, which may be unnecessarily restrictive for some HLWs, has recently been investigated. An attempt is being made to assess the rate of settling of crystalline phases in the melter and evaluate the risk for melter operation. Based on a reliable estimate of such a risk, waste loading could be increased, and substantial savings can accrue. For low-activity waste (LAW), the waste loading in glass is limited either by the product quality or by segregation of sulfate during melting. The formulation of constraints on LAW glass in terms of relevant properties has not been completed, and no property-composition relationships have been established so far for this type of waste glass.

  4. Experience with the implementation of a risk-based ISI program and inspection qualification

    SciTech Connect

    Chapman, O.J.V.

    1996-12-01

    Rolls Royce and Associates (RRA) are the Design Authority (DA) for Nuclear Steam Raising Plant (NSRP) used for the Royal Naval Nuclear Fleet. Over the past seven years RRA, with support from the Ministry of Defense, has developed and implemented a risk based in-service inspection (RBISI) strategy for the NSRP. Having used risk as a means of optimizing where to inspect, an inspection qualification (IQ) process has now been put in place to ensure that proposed inspections deliver the expected gains assumed. This qualification process follows very closely that currently being put forward by the European Network on Inspection Qualification (ENIQ).

  5. Risk-based objectives for the allocation of chemical, biological, and radiological air emissions sensors.

    PubMed

    Lambert, James H; Farrington, Mark W

    2006-12-01

    This article addresses the problem of allocating devices for localized hazard protection across a region. Each identical device provides only local protection, and the devices serve localities that are exposed to nonidentical intensities of hazard. A method for seeking optimal allocation policy decisions is described, highlighting the potentially competing objectives of maximizing local risk reductions and coverage risk reductions. The metric for local risk reductions is the sum of the local economic risks avoided. The metric for coverage risk reductions is adapted from the p-median problem and is equal to the sum of squares of the distances from all unserved localities to their closest served locality. Three linked graphical techniques for interpreting the policy decisions are presented and applied serially. The first technique identifies policy decisions that are nearly Pareto optimal. The second identifies locations where sensor placements are most justified, based on a risk-cost-benefit analysis under uncertainty. The third displays the decision space for any particular policy decision. The method is illustrated in an application to chemical, biological, and/or radiological weapon sensor placement, but has implications for disaster preparedness, transportation safety, and other arenas of public safety.
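
    A hedged sketch of the two objectives named in the abstract, computed for hypothetical localities: the local metric sums the economic risks avoided at served localities, and the coverage metric sums the squared distances from each unserved locality to its nearest served one.

```python
import numpy as np

# Hypothetical localities: coordinates (km) and the economic risk avoided if a sensor
# is placed there.
coords = np.array([[0, 0], [2, 1], [5, 4], [6, 0], [9, 3]], dtype=float)
local_risk_avoided = np.array([1.2, 0.8, 2.5, 0.6, 1.9])   # e.g., $M per locality

def objectives(served_idx):
    """Return (local risk reduction, coverage penalty) for a set of sensor locations."""
    served_idx = sorted(served_idx)
    unserved = [i for i in range(len(coords)) if i not in served_idx]
    local_reduction = local_risk_avoided[served_idx].sum()
    coverage_penalty = sum(
        min(np.sum((coords[u] - coords[s]) ** 2) for s in served_idx) for u in unserved
    )
    return local_reduction, coverage_penalty

# Compare two candidate two-sensor policies.
for policy in [(2, 4), (0, 2)]:
    red, pen = objectives(policy)
    print(policy, "risk avoided:", red, "coverage penalty (km^2):", round(pen, 1))
```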

  6. Bioassay-based risk assessment of complex mixtures

    SciTech Connect

    Donnelly, K.C.; Huebner, H.J.

    1996-12-31

    The baseline risk assessment often plays an integral role in various decision-making processes at Superfund sites. The present study reports on risk characterizations prepared for seven complex mixtures using biological and chemical analysis. Three of the samples (A, B, and C) were complex mixtures of polycyclic aromatic hydrocarbons (PAHs) extracted from coal tar, while four samples extracted from munitions-contaminated soil contained primarily nitroaromatic hydrocarbons. The chemical-based risk assessment ranked sample C as least toxic, while the risks associated with samples A and B were approximately equal. The microbial bioassay was in general agreement for the coal tar samples. The weighted activity of the coal tar extracts in Salmonella was 4,960 for sample C, and 162,000 and 206,000 for samples A and B, respectively. The bacterial mutagenicity of 2,4,6-trinitrotoluene-contaminated soils exhibited an indirect correlation with the chemical-based risk assessment. The aqueous extract of sample 004 induced 1,292 net revertants in Salmonella, while the estimated risk via ingestion and dermal absorption was 2E-9. The data indicate that the chemical-based risk assessment accurately predicted the genotoxicity of the PAHs, while the accuracy of the risk assessment for munitions-contaminated soils was limited due to the presence of metabolites of TNT degradation. The biological tests used in this research provide a valuable complement to chemical analysis for characterizing the genotoxic risk of complex mixtures.

  7. Optimization of electrochemical aptamer-based sensors via optimization of probe packing density and surface chemistry.

    PubMed

    White, Ryan J; Phares, Noelle; Lubin, Arica A; Xiao, Yi; Plaxco, Kevin W

    2008-09-16

    Electrochemical, aptamer-based (E-AB) sensors, which are comprised of an electrode modified with surface immobilized, redox-tagged DNA aptamers, have emerged as a promising new biosensor platform. In order to further improve this technology we have systematically studied the effects of probe (aptamer) packing density, the AC frequency used to interrogate the sensor, and the nature of the self-assembled monolayer (SAM) used to passivate the electrode on the performance of representative E-AB sensors directed against the small molecule cocaine and the protein thrombin. We find that, by controlling the concentration of aptamer employed during sensor fabrication, we can control the density of probe DNA molecules on the electrode surface over an order of magnitude range. Over this range, the gain of the cocaine sensor varies from 60% to 200%, with maximum gain observed near the lowest probe densities. In contrast, over a similar range, the signal change of the thrombin sensor varies from 16% to 42% and optimal signaling is observed at intermediate densities. Above cut-offs at low hertz frequencies, neither sensor displays any significant dependence on the frequency of the alternating potential employed in their interrogation. Finally, we find that E-AB signal gain is sensitive to the nature of the alkanethiol SAM employed to passivate the interrogating electrode; while thinner SAMs lead to higher absolute sensor currents, reducing the length of the SAM from 6-carbons to 2-carbons reduces the observed signal gain of our cocaine sensor 10-fold. We demonstrate that fabrication and operational parameters can be varied to achieve optimal sensor performance and that these can serve as a basic outline for future sensor fabrication.

  8. Research needs for risk-informed, performance-based regulations

    SciTech Connect

    Thadani, A.C.

    1997-01-01

    This article summarizes the activities of the Office of Research of the NRC, both from a historical perspective and as they apply to risk-based decision making. The office has been actively involved in problems related to understanding the risks of core accidents, the aging of reactor components and materials over years of service, and the understanding and analysis of severe accidents. In addition, new policy statements regarding the role of risk assessment in regulatory applications have brought the need for further work into focus. The NRC has used risk assessment in regulatory questions in the past, but in a fairly ad hoc manner. The new policies will require a better-defined application of risk assessment and will help reviewers judge the applicability of applications when a component of them is based on risk-based decision making. To address this, standard review plans are being prepared to serve as guides for such questions. In addition, with regulatory decisions being allowed to rest on risk-based arguments, it is necessary to have an adequate database prepared and made publicly available to support such a position.

  9. Biological Bases of Space Radiation Risk

    NASA Technical Reports Server (NTRS)

    1997-01-01

    In this session, Session JP4, the discussion focuses on the following topics: Hematopoiesis Dynamics in Irradiated Mammals, Mathematical Modeling; Estimating Health Risks in Space from Galactic Cosmic Rays; Failure of Heavy Ions to Affect Physiological Integrity of the Corneal Endothelial Monolayer; Application of an Unbiased Two-Gel CDNA Library Screening Method to Expression Monitoring of Genes in Irradiated Versus Control Cells; Detection of Radiation-Induced DNA Strand Breaks in Mammalian Cells By Enzymatic Post-Labeling; Evaluation of Bleomycin-Induced Chromosome Aberrations Under Microgravity Conditions in Human Lymphocytes, Using "Fish" Techniques; Technical Description of the Space Exposure Biology Assembly Seba on ISS; and Cytogenetic Research in Biological Dosimetry.

  10. An optimization-based parallel particle filter for multitarget tracking

    NASA Astrophysics Data System (ADS)

    Sutharsan, S.; Sinha, A.; Kirubarajan, T.; Farooq, M.

    2005-09-01

    Particle filter based estimation is becoming more popular because it has the capability to effectively solve nonlinear and non-Gaussian estimation problems. However, the particle filter has high computational requirements and the problem becomes even more challenging in the case of multitarget tracking. In order to perform data association and estimation jointly, typically an augmented state vector of target dynamics is used. As the number of targets increases, the computation required for each particle increases exponentially. Thus, parallelization is a possibility in order to achieve the real time feasibility in large-scale multitarget tracking applications. In this paper, we present a real-time feasible scheduling algorithm that minimizes the total computation time for the bus connected heterogeneous primary-secondary architecture. This scheduler is capable of selecting the optimal number of processors from a large pool of secondary processors and mapping the particles among the selected processors. Furthermore, we propose a less communication intensive parallel implementation of the particle filter without sacrificing tracking accuracy using an efficient load balancing technique, in which optimal particle migration is ensured. In this paper, we present the mathematical formulations for scheduling the particles as well as for particle migration via load balancing. Simulation results show the tracking performance of our parallel particle filter and the speedup achieved using parallelization.
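
    As a loose, hedged illustration only (the paper's scheduler is more elaborate), the snippet below greedily assigns particle batches to heterogeneous processors so that the most loaded processor finishes as early as possible; processor speeds and batch size are hypothetical.

```python
import heapq

def balance_particles(n_particles, proc_speeds, batch=100):
    """Greedy load balancing of particle batches across heterogeneous processors.

    proc_speeds: particles processed per millisecond for each processor.
    Returns per-processor particle counts and the resulting makespan estimate (ms).
    """
    loads = [(0.0, p, 0) for p in range(len(proc_speeds))]   # (elapsed time, proc id, particles)
    heapq.heapify(loads)
    remaining = n_particles
    while remaining > 0:
        chunk = min(batch, remaining)
        time, p, count = heapq.heappop(loads)                 # least-loaded processor first
        heapq.heappush(loads, (time + chunk / proc_speeds[p], p, count + chunk))
        remaining -= chunk
    makespan = max(t for t, _, _ in loads)
    counts = {p: c for _, p, c in loads}
    return counts, makespan

counts, makespan = balance_particles(5000, proc_speeds=[4.0, 2.0, 1.0])
print(counts, round(makespan, 1), "ms")
```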

  11. Source mask optimization study based on latest Nikon immersion scanner

    NASA Astrophysics Data System (ADS)

    Zhu, Jun; Wei, Fang; Chen, Lijun; Zhang, Chenming; Zhang, Wei; Nishinaga, Hisashi; El-Sewefy, Omar; Gao, Gen-Sheng; Lafferty, Neal; Meiring, Jason; Zhang, Recoo; Zhu, Cynthia

    2016-03-01

    The 2x nm logic foundry node has many challenges since critical levels are pushed close to the limits of low k1 ArF water immersion lithography. For these levels, improvements in lithographic performance can translate to decreased rework and increased yield. Source Mask Optimization (SMO) is one such route to realize these image fidelity improvements. During SMO, critical layout constructs are intensively optimized in both the mask and source domain, resulting in a solution for maximum lithographic entitlement. From the hardware side, advances in source technology have enabled free-form illumination. The approach allows highly customized illumination, enabling the practical application of SMO sources. The customized illumination sources can be adjusted for maximum versatility. In this paper, we present a study on a critical layer of an advanced foundry logic node using the latest ILT based SMO software, paired with state-of-the-art scanner hardware and intelligent illuminator. Performance of the layer's existing POR source is compared with the ideal SMO result and the installed source as realized on the intelligent illuminator of an NSR-S630D scanner. Both simulation and on-silicon measurements are used to confirm that the performance of the studied layer meets established specifications.

  12. Optimization of hyaluronan-based eye drop formulations.

    PubMed

    Salzillo, Rosanna; Schiraldi, Chiara; Corsuto, Luisana; D'Agostino, Antonella; Filosa, Rosanna; De Rosa, Mario; La Gatta, Annalisa

    2016-11-20

    Hyaluronan (HA) is frequently incorporated in eye drops to extend the pre-corneal residence time, due to its viscosifying and mucoadhesive properties. Hydrodynamic and rheological evaluations of commercial products are first accomplished, revealing molecular weights varying from about 360 to about 1200 kDa and viscosity values in the range 3.7-24.2 mPa·s. The latter suggest that most products could be optimized towards resistance to drainage from the ocular surface. Then, a study aiming to maximize the viscosity and mucoadhesiveness of HA-based preparations is performed. The effect of polymer chain length and concentration is investigated. For the whole range of molecular weights encountered in commercial products, the concentration maximizing performance is identified. This concentration varies from 0.3 wt% for a 1100 kDa HA up to 1.0 wt% for a 250 kDa HA, which is 3-fold higher than the highest concentration on the market. The viscosity and mucoadhesion profiles of the optimized formulations are superior to those of commercial products, especially under conditions simulating in vivo blinking, so longer retention on the corneal epithelium can be predicted. An enhanced capacity to protect porcine corneal epithelial cells from dehydration is also demonstrated in vitro. Overall, the results predict formulations with improved efficacy.

  13. CFD-Based Design Optimization for Single Element Rocket Injector

    NASA Technical Reports Server (NTRS)

    Vaidyanathan, Rajkumar; Tucker, Kevin; Papila, Nilay; Shyy, Wei

    2003-01-01

    To develop future Reusable Launch Vehicle concepts, we have conducted design optimization for a single-element rocket injector, with the overall goals of improving reliability and performance while reducing cost. Computational solutions based on the Navier-Stokes equations, finite-rate chemistry, and the k-ε turbulence closure are generated with design-of-experiment techniques, and the response surface method is employed as the optimization tool. The design considerations are guided by four design objectives motivated by considerations of both performance and life, namely, the maximum temperature on the oxidizer post tip, the maximum temperature on the injector face, the adiabatic wall temperature, and the length of the combustion zone. Four design variables are selected, namely, H2 flow angle, H2 and O2 flow areas with fixed flow rates, and O2 post tip thickness. In addition to establishing optimum designs by varying emphasis on the individual objectives, better insight into the interplay between design variables and their impact on the design objectives is gained. The investigation indicates that improvement in performance or life comes at the cost of the other. The best compromise is obtained when improvements in both performance and life are given equal importance.
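
    A hedged sketch of the response-surface step (not the study's CFD-based workflow): fit a quadratic surface to design-of-experiments samples of one objective over two normalized design variables and locate its minimum; the sampled "objective" here is a synthetic stand-in.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

# Hypothetical DOE: two normalized design variables (e.g., flow angle, post tip thickness)
# and a simulated objective (e.g., oxidizer post tip temperature) with a little noise.
X = rng.uniform(-1, 1, size=(25, 2))
y = (900
     + 80 * (X[:, 0] - 0.3) ** 2
     + 120 * (X[:, 1] + 0.2) ** 2
     + 30 * X[:, 0] * X[:, 1]
     + rng.normal(0, 5, size=25))

def quad_features(X):
    """Quadratic basis for a two-variable response surface."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

beta, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)   # response surface coefficients

def surface(x):
    return float((quad_features(np.atleast_2d(x)) @ beta)[0])

opt = minimize(surface, x0=[0.0, 0.0], bounds=[(-1, 1), (-1, 1)])
print("optimum design variables:", opt.x.round(3), "predicted objective:", round(opt.fun, 1))
```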

  14. Tree-Based Visualization and Optimization for Image Collection.

    PubMed

    Han, Xintong; Zhang, Chongyang; Lin, Weiyao; Xu, Mingliang; Sheng, Bin; Mei, Tao

    2016-06-01

    The visualization of an image collection is the process of displaying a collection of images on a screen under some specific layout requirements. This paper focuses on an important problem that is not well addressed by the previous methods: visualizing image collections into arbitrary layout shapes while arranging images according to user-defined semantic or visual correlations (e.g., color or object category). To this end, we first propose a property-based tree construction scheme to organize images of a collection into a tree structure according to user-defined properties. In this way, images can be adaptively placed with the desired semantic or visual correlations in the final visualization layout. Then, we design a two-step visualization optimization scheme to further optimize image layouts. As a result, multiple layout effects including layout shape and image overlap ratio can be effectively controlled to guarantee a satisfactory visualization. Finally, we also propose a tree-transfer scheme such that visualization layouts can be adaptively changed when users select different "images of interest." We demonstrate the effectiveness of our proposed approach through the comparisons with state-of-the-art visualization techniques.

  15. [Optimal allocation of irrigation water resources based on systematical strategy].

    PubMed

    Cheng, Shuai; Zhang, Shu-qing

    2015-01-01

    With the development of society and the economy, as well as the rapid increase of population, more and more water is needed by humans, which has intensified the shortage of water resources. The scarcity of water resources and the growing competition for water among different water use sectors reduce water availability for irrigation, so it is important to plan and manage irrigation water resources scientifically and reasonably to improve water use efficiency (WUE) and ensure food security. Many investigations indicate that WUE can be increased by optimization of water use. However, present studies focus primarily on a particular aspect or scale and lack a systematic analysis of the irrigation water allocation problem. By summarizing previous related studies, especially those based on intelligent algorithms, this article proposes a multi-level, multi-scale framework for allocating irrigation water and illustrates the basic theory of each component of the framework. A systematic strategy for optimal irrigation water allocation can not only control the total volume of irrigation water over time, but also reduce water loss in space. It could provide a scientific basis and technical support for improving irrigation water management and ensuring food security. PMID:25985685

  16. Optimization of hyaluronan-based eye drop formulations.

    PubMed

    Salzillo, Rosanna; Schiraldi, Chiara; Corsuto, Luisana; D'Agostino, Antonella; Filosa, Rosanna; De Rosa, Mario; La Gatta, Annalisa

    2016-11-20

    Hyaluronan (HA) is frequently incorporated in eye drops to extend the pre-corneal residence time, due to its viscosifying and mucoadhesive properties. Hydrodynamic and rheological evaluations of commercial products are first accomplished, revealing molecular weights varying from about 360 to about 1200 kDa and viscosity values in the range 3.7-24.2 mPa·s. The latter suggest that most products could be optimized towards resistance to drainage from the ocular surface. Then, a study aiming to maximize the viscosity and mucoadhesiveness of HA-based preparations is performed. The effect of polymer chain length and concentration is investigated. For the whole range of molecular weights encountered in commercial products, the concentration maximizing performance is identified. This concentration varies from 0.3 wt% for a 1100 kDa HA up to 1.0 wt% for a 250 kDa HA, which is 3-fold higher than the highest concentration on the market. The viscosity and mucoadhesion profiles of the optimized formulations are superior to those of commercial products, especially under conditions simulating in vivo blinking, so longer retention on the corneal epithelium can be predicted. An enhanced capacity to protect porcine corneal epithelial cells from dehydration is also demonstrated in vitro. Overall, the results predict formulations with improved efficacy. PMID:27561497

  17. Bifurcation-based approach reveals synergism and optimal combinatorial perturbation.

    PubMed

    Liu, Yanwei; Li, Shanshan; Liu, Zengrong; Wang, Ruiqi

    2016-06-01

    Cells accomplish the process of fate decisions and form terminal lineages through a series of binary choices in which cells switch stable states from one branch to another as the interacting strengths of regulatory factors continuously vary. Various combinatorial effects may occur because almost all regulatory processes are managed in a combinatorial fashion. Combinatorial regulation is crucial for cell fate decisions because it may effectively integrate many different signaling pathways to meet the higher regulation demand during cell development. However, whether the contribution of combinatorial regulation to the state transition is better than that of a single regulation and, if so, what the optimal combination strategy is, seem to be significant issues from the point of view of both biology and mathematics. Using the approaches of combinatorial perturbations and bifurcation analysis, we provide a general framework for the quantitative analysis of synergism in molecular networks. Different from known methods, the bifurcation-based approach depends only on stable state responses to stimuli because the state transition induced by combinatorial perturbations occurs between stable states. More importantly, an optimal combinatorial perturbation strategy can be determined by investigating the relationship between the bifurcation curve of a synergistic perturbation pair and the level set of a specific objective function. The approach is applied to two models, i.e., a theoretical multistable decision model and a biologically realistic CREB model, to show its validity, although the approach holds for a general class of biological systems.

  18. 12 CFR 932.3 - Risk-based capital requirement.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 932.3 Banks and Banking FEDERAL HOUSING FINANCE BOARD FEDERAL HOME LOAN BANK RISK MANAGEMENT AND CAPITAL STANDARDS FEDERAL HOME LOAN BANK CAPITAL REQUIREMENTS § 932.3 Risk-based capital requirement. Each Bank shall maintain at all times permanent capital in an amount at least equal to the sum of its...

  19. 12 CFR 932.3 - Risk-based capital requirement.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 932.3 Banks and Banking FEDERAL HOUSING FINANCE BOARD FEDERAL HOME LOAN BANK RISK MANAGEMENT AND CAPITAL STANDARDS FEDERAL HOME LOAN BANK CAPITAL REQUIREMENTS § 932.3 Risk-based capital requirement. Each Bank shall maintain at all times permanent capital in an amount at least equal to the sum of its...

  20. A school-based intervention for diabetes risk reduction

    Technology Transfer Automated Retrieval System (TEKTRAN)

    We examined the effects of a multicomponent, school-based program, addressing risk factors for diabetes among children whose race, or ethnic group and socioeconomic status placed them at high risk for obesity and type 2 diabetes. Using a cluster design, we randomly assigned 42 schools to either a mu...

  1. 13 CFR 120.1000 - Risk-Based Lender Oversight.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Risk-Based Lender Oversight. 120.1000 Section 120.1000 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION BUSINESS LOANS Risk... Oversight. SBA supervises, examines, and regulates, and enforces laws against, SBA Supervised Lenders...

  2. Role of Context in Risk-Based Reasoning

    ERIC Educational Resources Information Center

    Pratt, Dave; Ainley, Janet; Kent, Phillip; Levinson, Ralph; Yogui, Cristina; Kapadia, Ramesh

    2011-01-01

    In this article we report the influence of contextual factors on mathematics and science teachers' reasoning in risk-based decision-making. We examine previous research that presents judgments of risk as being subjectively influenced by contextual factors and other research that explores the role of context in mathematical problem-solving. Our own…

  3. Risk-Based Educational Accountability in Dutch Primary Education

    ERIC Educational Resources Information Center

    Timmermans, A. C.; de Wolf, I. F.; Bosker, R. J.; Doolaard, S.

    2015-01-01

    A recent development in educational accountability is a risk-based approach, in which intensity and frequency of school inspections vary across schools to make educational accountability more efficient and effective by enabling inspectorates to focus on organizations at risk. Characteristics relevant in predicting which schools are "at risk…

  4. Grid Based Nonlinear Filtering Revisited: Recursive Estimation & Asymptotic Optimality

    NASA Astrophysics Data System (ADS)

    Kalogerias, Dionysios S.; Petropulu, Athina P.

    2016-08-01

    We revisit the development of grid based recursive approximate filtering of general Markov processes in discrete time, partially observed in conditionally Gaussian noise. The grid based filters considered rely on two types of state quantization: the Markovian type and the marginal type. We propose a set of novel, relaxed sufficient conditions, ensuring strong and fully characterized pathwise convergence of these filters to the respective MMSE state estimator. In particular, for marginal state quantizations, we introduce the notion of conditional regularity of stochastic kernels, which, to the best of our knowledge, constitutes the most relaxed condition proposed, under which asymptotic optimality of the respective grid based filters is guaranteed. Further, we extend our convergence results, including filtering of bounded and continuous functionals of the state, as well as recursive approximate state prediction. For both Markovian and marginal quantizations, the whole development of the respective grid based filters relies more on linear-algebraic techniques and less on measure theoretic arguments, making the presentation considerably shorter and technically simpler.
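
    As a hedged, small-scale sketch of a marginal-type grid filter (not the paper's general construction), the snippet quantizes a scalar Markov state to a fixed grid and applies the predict/update recursion with a Gaussian observation likelihood; the dynamics and noise levels are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

grid = np.linspace(-4, 4, 161)                     # marginal state quantization
dx = grid[1] - grid[0]

def gaussian(x, mean, std):
    return np.exp(-0.5 * ((x - mean) / std) ** 2) / (std * np.sqrt(2 * np.pi))

# Transition kernel for x_k = 0.9 * x_{k-1} + w,  w ~ N(0, 0.3^2), evaluated on the grid.
K = gaussian(grid[:, None], 0.9 * grid[None, :], 0.3) * dx    # K[i, j] ~ P(grid_i | grid_j)

def grid_filter_step(prior, z, obs_std=0.5):
    """One predict/update recursion on the grid; returns posterior weights and MMSE estimate."""
    predicted = K @ prior                          # Chapman-Kolmogorov step on the grid
    posterior = predicted * gaussian(z, grid, obs_std)
    posterior /= posterior.sum()
    return posterior, float(grid @ posterior)

# Simulate a short trajectory observed in Gaussian noise and filter it.
x, weights = 1.5, np.full(grid.size, 1.0 / grid.size)
for k in range(5):
    x = 0.9 * x + rng.normal(0, 0.3)
    z = x + rng.normal(0, 0.5)
    weights, estimate = grid_filter_step(weights, z)
    print(f"k={k + 1}: true={x:+.2f}  obs={z:+.2f}  grid MMSE={estimate:+.2f}")
```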

  5. Reliability-based robust design optimization of vehicle components, Part I: Theory

    NASA Astrophysics Data System (ADS)

    Zhang, Yimin

    2015-06-01

    The reliability-based design optimization, the reliability sensitivity analysis and robust design method are employed to present a practical and effective approach for reliability-based robust design optimization of vehicle components. A procedure for reliability-based robust design optimization of vehicle components is proposed. Application of the method is illustrated by reliability-based robust design optimization of axle and spring. Numerical results have shown that the proposed method can be trusted to perform reliability-based robust design optimization of vehicle components.

  6. School Based Vision Centers: striving to optimize learning.

    PubMed

    Lyons, Stacy Ayn; Johnson, Catherine; Majzoub, Katherine

    2011-01-01

    The successful delivery of comprehensive pediatric vision care after vision screening referral is a longstanding challenge that has significant implications for child wellness. In response to the many known obstacles that prevent the diagnosis and treatment of vision conditions, School-Based Vision Centers have been established in Framingham, MA and Boston, MA to provide easy access to comprehensive vision care following a failed vision screening. These on-site Vision Centers were developed to improve access to comprehensive vision care and treatment thereby correcting vision conditions that can adversely affect student academic achievement, athletic performance, and self-esteem. This paper highlights the collaboration between two public schools in Massachusetts and The New England Eye Institute and describes a multidisciplinary approach to comprehensive care delivery to high-risk pediatric populations in school-based settings. The ultimate goal of this model is to minimize visual barriers that may impede learning in order to maximize academic success and wellness.

  7. A Fast Method for Embattling Optimization of Ground-Based Radar Surveillance Network

    NASA Astrophysics Data System (ADS)

    Jiang, H.; Cheng, H.; Zhang, Y.; Liu, J.

    A growing number of space activities have created an orbital debris environment that poses increasing impact risks to existing space systems and human space flight. For the safety of in-orbit spacecraft, many observation facilities are needed to catalog space objects, especially in low Earth orbit. Surveillance of low Earth orbit objects relies mainly on ground-based radar; due to the limited capability of existing radar facilities, a large number of ground-based radars need to be built in the next few years to meet current space surveillance demands. How to optimize the embattling (deployment) of the ground-based radar surveillance network is therefore a problem that needs to be solved. The traditional method for embattling optimization of a ground-based radar surveillance network is mainly to run detection simulations of all possible stations with cataloged data, make a comprehensive comparative analysis of the simulation results by combinational search, and then select an optimal result as the station layout scheme. This method is time consuming for a single simulation and computationally expensive for the combinational analysis; as the number of stations increases, the complexity of the optimization problem grows exponentially and cannot be handled by the traditional method, and no better way to solve this problem has been available until now. In this paper, the target detection procedure is simplified. First, the space coverage of ground-based radar is simplified and a space-coverage projection model of radar facilities at different orbit altitudes is built; then a simplified model of objects crossing the radar coverage is established according to the characteristics of space object orbital motion. After these two simplification steps, the computational complexity of target detection is greatly reduced, and simulation results show the correctness of the simplified models. In addition, the detection areas of the ground-based radar network can be easily computed with the

  8. PERSPECTIVE: Technical fixes and climate change: optimizing for risks and consequences

    NASA Astrophysics Data System (ADS)

    Rasch, Philip J.

    2010-09-01

    Scientists and society in general are becoming increasingly concerned about the risks of climate change from the emission of greenhouse gases (IPCC 2007). Yet emissions continue to increase (Raupach et al 2007), and achieving reductions soon enough to avoid large and undesirable impacts requires a near-revolutionary global transformation of energy and transportation systems (Hoffert et al 1998). The size of the transformation and lack of an effective societal response have motivated some to explore other quite controversial strategies to mitigate some of the planetary consequences of these emissions. These strategies have come to be known as geoengineering: 'the deliberate manipulation of the planetary environment to counteract anthropogenic climate change' (Keith 2000). Concern about society's inability to reduce emissions has driven a resurgence in interest in geoengineering, particularly following the call for more research in Crutzen (2006). Two classes of geoengineering solutions have developed: (1) methods to draw CO2 out of the atmosphere and sequester it in a relatively benign form; and (2) methods that change the energy flux entering or leaving the planet without modifying CO2 concentrations by, for example, changing the planetary albedo. Only the latter methods are considered here. Summaries of many of the methods, scientific questions, and issues of testing and implementation are discussed in Launder and Thompson (2009) and Royal Society (2009). The increased attention indicates that geoengineering is not a panacea and all strategies considered will have risks and consequences (e.g. Robock 2008, Trenberth and Dai 2007). Recent studies involving comprehensive Earth system models can provide insight into subtle interactions between components of the climate system. For example Rasch et al (2009) found that geoengineering by changing boundary clouds will not simultaneously 'correct' global averaged surface temperature, precipitation, and sea ice to present

  9. Communication: Optimal parameters for basin-hopping global optimization based on Tsallis statistics

    NASA Astrophysics Data System (ADS)

    Shang, C.; Wales, D. J.

    2014-08-01

    A fundamental problem associated with global optimization is the large free energy barrier for the corresponding solid-solid phase transitions for systems with multi-funnel energy landscapes. To address this issue we consider the Tsallis weight instead of the Boltzmann weight to define the acceptance ratio for basin-hopping global optimization. Benchmarks for atomic clusters show that using the optimal Tsallis weight can improve the efficiency by roughly a factor of two. We present a theory that connects the optimal parameters for the Tsallis weighting, and demonstrate that the predictions are verified for each of the test cases.
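
    As a rough sketch of the idea (not the authors' implementation), the loop below replaces the Boltzmann acceptance ratio of plain basin-hopping with a Tsallis weight; the toy objective, step size, and the q and beta values are illustrative choices.

        import numpy as np
        from scipy.optimize import minimize

        def tsallis_weight(energy, e_ref, q=1.1, beta=1.0):
            # Generalised weight [1 - (1-q)*beta*(E - E_ref)]^(1/(1-q)); tends to exp(-beta*E) as q -> 1.
            arg = max(1.0 - (1.0 - q) * beta * (energy - e_ref), 1e-12)
            return arg ** (1.0 / (1.0 - q))

        def basin_hopping_tsallis(f, x0, steps=200, step_size=1.0, q=1.1, beta=1.0, seed=0):
            rng = np.random.default_rng(seed)
            x = minimize(f, x0).x                      # quench the starting point
            e = f(x)
            best_x, best_e = x.copy(), e
            for _ in range(steps):
                res = minimize(f, x + rng.uniform(-step_size, step_size, x.shape))
                ratio = tsallis_weight(res.fun, best_e, q, beta) / tsallis_weight(e, best_e, q, beta)
                if rng.random() < min(1.0, ratio):     # Tsallis acceptance instead of Boltzmann
                    x, e = res.x, res.fun
                    if e < best_e:
                        best_x, best_e = x.copy(), e
            return best_x, best_e

        # Toy multi-minimum objective (not an atomic-cluster potential).
        f = lambda v: float(np.sum(v**4 - 16.0 * v**2 + 5.0 * v) / 2.0)
        print(basin_hopping_tsallis(f, np.zeros(3)))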

  10. Communication: Optimal parameters for basin-hopping global optimization based on Tsallis statistics

    SciTech Connect

    Shang, C. Wales, D. J.

    2014-08-21

    A fundamental problem associated with global optimization is the large free energy barrier for the corresponding solid-solid phase transitions for systems with multi-funnel energy landscapes. To address this issue we consider the Tsallis weight instead of the Boltzmann weight to define the acceptance ratio for basin-hopping global optimization. Benchmarks for atomic clusters show that using the optimal Tsallis weight can improve the efficiency by roughly a factor of two. We present a theory that connects the optimal parameters for the Tsallis weighting, and demonstrate that the predictions are verified for each of the test cases.

  11. Further developments in LP-based optimal power flow

    SciTech Connect

    Alsac, O.; Bright, J.; Prais, M.; Stott, B.

    1990-08-01

    Over the past twenty five years, the optimal power flow (OPF) approach that has received the most widespread practical application is the one based on linear programming (LP). Special customized LP methods have been utilized primarily for fast reliable security-constrained dispatch using decoupled separable OPF problem formulations. They have been used in power system planning, operations and control. Nevertheless, while the LP approach has a number of important attributes, its range of application in the OPF field has remained somewhat restricted. This paper describes further developments that have transformed the LP approach into a truly general-purpose OPF solver, with computational and other advantages over even recent nonlinear programming (NLP) methods. The nonseparable loss-minimization problem can now be solved, giving the same results as NLP on power systems of any size and type.
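
    The customized, security-constrained LP formulations discussed here are not reproduced in this record; the toy dispatch below only illustrates the basic LP structure (linear cost, a power-balance equality, branch and generator limits) with scipy and made-up numbers.

        import numpy as np
        from scipy.optimize import linprog

        # Two generators (cost in $/MWh), one 400 MW load, and a hypothetical 250 MW
        # limit on the corridor that carries generator 1's output to the load.
        cost = np.array([20.0, 35.0])              # objective: minimise dispatch cost
        a_eq = np.array([[1.0, 1.0]])              # P1 + P2 = demand (power balance)
        b_eq = np.array([400.0])
        a_ub = np.array([[1.0, 0.0]])              # corridor limit: P1 <= 250 MW
        b_ub = np.array([250.0])
        bounds = [(0.0, 300.0), (0.0, 300.0)]      # generator capability limits

        res = linprog(cost, A_ub=a_ub, b_ub=b_ub, A_eq=a_eq, b_eq=b_eq, bounds=bounds)
        print(res.x, res.fun)                      # expected dispatch [250., 150.], cost 10250.0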

  12. Efficacy of Code Optimization on Cache-Based Processors

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.; Saphir, William C.; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    In this paper a number of techniques for improving the cache performance of a representative piece of numerical software is presented. Target machines are popular processors from several vendors: MIPS R5000 (SGI Indy), MIPS R8000 (SGI PowerChallenge), MIPS R10000 (SGI Origin), DEC Alpha EV4 + EV5 (Cray T3D & T3E), IBM RS6000 (SP Wide-node), Intel PentiumPro (Ames' Whitney), Sun UltraSparc (NERSC's NOW). The optimizations all attempt to increase the locality of memory accesses. But they meet with rather varied and often counterintuitive success on the different computing platforms. We conclude that it may be genuinely impossible to obtain portable performance on the current generation of cache-based machines. At the least, it appears that the performance of modern commodity processors cannot be described with parameters defining the cache alone.

  13. Optimization of arterial age prediction models based in pulse wave

    NASA Astrophysics Data System (ADS)

    Scandurra, A. G.; Meschino, G. J.; Passoni, L. I.; Pra, A. L. Dai; Introzzi, A. R.; Clara, F. M.

    2007-11-01

    We propose the detection of early arterial ageing through a prediction model of arterial age based on the assumption of coherence between the pulse wave morphology and the patient's chronological age. After evaluating several methods, a Sugeno fuzzy inference system was selected. Model optimization is approached using hybrid methods: parameter adaptation with Artificial Neural Networks and Genetic Algorithms. Feature selection was performed according to the features' projection on the main factors of a Principal Components Analysis. The model performance was tested using the .632E bootstrap error. The model presented an error smaller than 8.5%. This result encourages including this process as a diagnosis module in the device for pulse analysis that has been developed by the Bioengineering Laboratory staff.
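
    The fuzzy model itself is not given in this record; the sketch below only shows the .632 bootstrap error estimate mentioned above, applied to a stand-in scikit-learn regressor on hypothetical pulse-wave features.

        import numpy as np
        from sklearn.neighbors import KNeighborsRegressor
        from sklearn.metrics import mean_absolute_error

        def bootstrap_632_error(model, x, y, n_boot=200, seed=0):
            """Efron's .632 estimator: 0.368 * apparent error + 0.632 * out-of-bag error."""
            rng = np.random.default_rng(seed)
            n = len(y)
            apparent = mean_absolute_error(y, model.fit(x, y).predict(x))
            oob_errors = []
            for _ in range(n_boot):
                idx = rng.integers(0, n, n)             # bootstrap resample (with replacement)
                oob = np.setdiff1d(np.arange(n), idx)   # samples not drawn into the resample
                if oob.size:
                    model.fit(x[idx], y[idx])
                    oob_errors.append(mean_absolute_error(y[oob], model.predict(x[oob])))
            return 0.368 * apparent + 0.632 * float(np.mean(oob_errors))

        # Hypothetical data: 10 pulse-wave features per subject, chronological age as target.
        rng = np.random.default_rng(1)
        features = rng.normal(size=(300, 10))
        age = 40.0 + 10.0 * features[:, 0] + rng.normal(0.0, 3.0, 300)
        print(bootstrap_632_error(KNeighborsRegressor(n_neighbors=5), features, age))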

  14. Optimization-based interactive segmentation interface for multiregion problems.

    PubMed

    Baxter, John S H; Rajchl, Martin; Peters, Terry M; Chen, Elvis C S

    2016-04-01

    Interactive segmentation is becoming of increasing interest to the medical imaging community in that it combines the positive aspects of both manual and automated segmentation. However, general-purpose tools have been lacking in terms of segmenting multiple regions simultaneously with a high degree of coupling between groups of labels. Hierarchical max-flow segmentation has taken advantage of this coupling for individual applications, but until recently, these algorithms were constrained to a particular hierarchy and could not be considered general-purpose. In a generalized form, the hierarchy for any given segmentation problem is specified in run-time, allowing different hierarchies to be quickly explored. We present an interactive segmentation interface, which uses generalized hierarchical max-flow for optimization-based multiregion segmentation guided by user-defined seeds. Applications in cardiac and neonatal brain segmentation are given as example applications of its generality. PMID:27335892

  15. Collaborative Review: Risk-Based Prostate Cancer Screening

    PubMed Central

    Zhu, Xiaoye; Albertsen, Peter C.; Andriole, Gerald L.; Roobol, Monique J.; Schröder, Fritz H.; Vickers, Andrew J.

    2016-01-01

    Context Widespread mass screening of prostate cancer (PCa) is not recommended because the balance between benefits and harms is still not well established. The achieved mortality reduction comes with considerable harm such as unnecessary biopsies, overdiagnoses, and overtreatment. Therefore, patient stratification with regard to PCa risk and aggressiveness is necessary to identify those men who are at risk and may actually benefit from early detection. Objective This review critically examines the current evidence regarding risk-based PCa screening. Evidence acquisition A search of the literature was performed using the Medline database. Further studies were selected based on manual searches of reference lists and review articles. Evidence synthesis Prostate-specific antigen (PSA) has been shown to be the single most significant predictive factor for identifying men at increased risk of developing PCa. Especially in men with no additional risk factors, PSA alone provides an appropriate marker up to 30 yr into the future. After assessment of an early PSA test, the screening frequency may be determined based on individualized risk. A limited list of additional factors such as age, comorbidity, prostate volume, family history, ethnicity, and previous biopsy status have been identified to modify risk and are important for consideration in routine practice. In men with a known PSA, risk calculators may hold the promise of identifying those who are at increased risk of having PCa and are therefore candidates for biopsy. Conclusions PSA testing may serve as the foundation for a more risk-based assessment. However, the decision to undergo early PSA testing should be a shared one between the patient and his physician based on information balancing its advantages and disadvantages. PMID:22134009

  16. Evidence-based risk assessment and recommendations for physical activity clearance: respiratory disease.

    PubMed

    Eves, Neil D; Davidson, Warren J

    2011-07-01

    The 2 most common respiratory diseases are chronic obstructive pulmonary disease (COPD) and asthma. Growing evidence supports the benefits of exercise for all patients with these diseases. Due to the etiology of COPD and the pathophysiology of asthma, there may be some additional risks of exercise for these patients, and hence accurate risk assessment and clearance is needed before patients start exercising. The purpose of this review was to evaluate the available literature regarding the risks of exercise for patients with respiratory disease and provide evidence-based recommendations to guide the screening process. A systematic review of 4 databases was performed. The literature was searched to identify adverse events specific to exercise. For COPD, 102 randomized controlled trials that involved an exercise intervention were included (n = 6938). No study directly assessed the risk of exercise, and only 15 commented on exercise-related adverse events. For asthma, 30 studies of mixed methodologies were included (n = 1278). One study directly assessed the risk of exercise, and 15 commented on exercise-related adverse events. No exercise-related fatalities were reported. The majority of adverse events in COPD patients were musculoskeletal or cardiovascular in nature. In asthma patients, exercise-induced bronchoconstriction and (or) asthma symptoms were the primary adverse events. There is no direct evidence regarding the risk of exercise for patients with COPD or asthma. However, based on the available literature, it would appear that with adequate screening and optimal medical therapy, the risk of exercise for these respiratory patients is low. PMID:21800949

  17. Evidence-based risk assessment and recommendations for physical activity clearance: respiratory disease.

    PubMed

    Eves, Neil D; Davidson, Warren J

    2011-07-01

    The 2 most common respiratory diseases are chronic obstructive pulmonary disease (COPD) and asthma. Growing evidence supports the benefits of exercise for all patients with these diseases. Due to the etiology of COPD and the pathophysiology of asthma, there may be some additional risks of exercise for these patients, and hence accurate risk assessment and clearance is needed before patients start exercising. The purpose of this review was to evaluate the available literature regarding the risks of exercise for patients with respiratory disease and provide evidence-based recommendations to guide the screening process. A systematic review of 4 databases was performed. The literature was searched to identify adverse events specific to exercise. For COPD, 102 randomized controlled trials that involved an exercise intervention were included (n = 6938). No study directly assessed the risk of exercise, and only 15 commented on exercise-related adverse events. For asthma, 30 studies of mixed methodologies were included (n = 1278). One study directly assessed the risk of exercise, and 15 commented on exercise-related adverse events. No exercise-related fatalities were reported. The majority of adverse events in COPD patients were musculoskeletal or cardiovascular in nature. In asthma patients, exercise-induced bronchoconstriction and (or) asthma symptoms were the primary adverse events. There is no direct evidence regarding the risk of exercise for patients with COPD or asthma. However, based on the available literature, it would appear that with adequate screening and optimal medical therapy, the risk of exercise for these respiratory patients is low.

  18. Chemical Mixture Risk Assessment Additivity-Based Approaches

    EPA Science Inventory

    Powerpoint presentation includes additivity-based chemical mixture risk assessment methods. Basic concepts, theory and example calculations are included. Several slides discuss the use of "common adverse outcomes" in analyzing phthalate mixtures.
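
    The slides are not reproduced here; as a minimal illustration of dose addition, a hazard index sums each mixture component's exposure over its reference dose, with all numbers below being placeholders.

        # Dose-addition hazard index for a mixture: HI = sum(exposure_i / RfD_i).
        exposures = {"chem_a": 0.02, "chem_b": 0.01, "chem_c": 0.005}        # mg/kg-day
        reference_doses = {"chem_a": 0.10, "chem_b": 0.02, "chem_c": 0.05}   # mg/kg-day

        hazard_index = sum(exposures[c] / reference_doses[c] for c in exposures)
        print(f"Hazard index = {hazard_index:.2f}")   # a value above 1 would flag potential concern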

  19. How Should Risk-Based Regulation Reflect Current Public Opinion?

    PubMed

    Pollock, Christopher John

    2016-08-01

    Risk-based regulation of novel agricultural products with public choice manifest via traceability and labelling is a more effective approach than the use of regulatory processes to reflect public concerns, which may not always be supported by evidence.

  20. A controller based on Optimal Type-2 Fuzzy Logic: systematic design, optimization and real-time implementation.

    PubMed

    Fayek, H M; Elamvazuthi, I; Perumal, N; Venkatesh, B

    2014-09-01

    A computationally efficient systematic procedure to design an Optimal Type-2 Fuzzy Logic Controller (OT2FLC) is proposed. The main scheme is to optimize the gains of the controller using Particle Swarm Optimization (PSO), then optimize only two parameters per type-2 membership function using a Genetic Algorithm (GA). The proposed OT2FLC was implemented in real-time to control the position of a DC servomotor, which is part of a robotic arm. The performance judgments were carried out based on the Integral Absolute Error (IAE), as well as the computational cost. Various type-2 defuzzification methods were investigated in real-time. A comparative analysis with an Optimal Type-1 Fuzzy Logic Controller (OT1FLC) and a PI controller demonstrated OT2FLC's superiority, which is evident in handling uncertainty and imprecision induced in the system by means of noise and disturbances.

  1. Development and optimization of biofilm based algal cultivation

    NASA Astrophysics Data System (ADS)

    Gross, Martin Anthony

    This dissertation describes research on biofilm-based algal cultivation systems. The system developed in this work is the revolving algal biofilm (RAB) cultivation system. A raceway-retrofit and a trough-based pilot-scale RAB system were developed and investigated. Each of the systems significantly outperformed a control raceway pond in side-by-side tests. Furthermore, the RAB system was found to require significantly less water than the raceway-pond-based cultivation system. Lastly, a TEA/LCA analysis was conducted to evaluate the economics and life cycle of the RAB cultivation system in comparison to a raceway pond. It was found that the RAB system was able to grow algae at a lower cost and was shown to be profitable at a smaller scale than the raceway pond style of algal cultivation. Additionally, the RAB system was projected to have lower GHG emissions and better energy and water use efficiencies in comparison to a raceway pond system. Furthermore, fundamental research was conducted to identify the optimal material for algae to attach to. A total of 28 materials with a smooth surface were tested for initial cell colonization, and it was found that the tetradecane contact angle of the materials correlated well with cell attachment. The effects of surface texture were evaluated using mesh materials (nylon, polypropylene, high density polyethylene, polyester, aluminum, and stainless steel) with openings ranging from 0.05-6.40 mm. It was found that both surface texture and material composition influence algal attachment.

  2. Swarm Optimization-Based Magnetometer Calibration for Personal Handheld Devices

    PubMed Central

    Ali, Abdelrahman; Siddharth, Siddharth; Syed, Zainab; El-Sheimy, Naser

    2012-01-01

    Inertial Navigation Systems (INS) consist of accelerometers, gyroscopes and a processor that generates position and orientation solutions by integrating the specific forces and rotation rates. In addition to the accelerometers and gyroscopes, magnetometers can be used to derive the user heading based on Earth's magnetic field. Unfortunately, the measurements of the magnetic field obtained with low cost sensors are usually corrupted by several errors, including manufacturing defects and external electro-magnetic fields. Consequently, proper calibration of the magnetometer is required to achieve high accuracy heading measurements. In this paper, a Particle Swarm Optimization (PSO)-based calibration algorithm is presented to estimate the values of the bias and scale factor of low cost magnetometers. The main advantage of this technique is the use of the artificial intelligence which does not need any error modeling or awareness of the nonlinearity. Furthermore, the proposed algorithm can help in the development of Pedestrian Navigation Devices (PNDs) when combined with inertial sensors and GPS/Wi-Fi for indoor navigation and Location Based Services (LBS) applications.
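
    A minimal sketch of the idea rather than the authors' algorithm: estimate a per-axis bias and scale factor so that calibrated readings match the known local field magnitude, using a bare-bones PSO whose bounds, inertia and acceleration constants are illustrative.

        import numpy as np

        def calibrate_magnetometer_pso(raw, field_norm=50.0, n_particles=40, iters=300, seed=0):
            """Estimate bias and scale (packed as [bx, by, bz, sx, sy, sz]) so that
            |(raw - bias) / scale| is as close as possible to the known field magnitude."""
            rng = np.random.default_rng(seed)

            def cost(p):
                cal = (raw - p[:3]) / p[3:]
                return np.mean((np.linalg.norm(cal, axis=1) - field_norm) ** 2)

            lo = np.array([-30.0, -30.0, -30.0, 0.5, 0.5, 0.5])
            hi = np.array([30.0, 30.0, 30.0, 1.5, 1.5, 1.5])
            x = rng.uniform(lo, hi, (n_particles, 6))
            v = np.zeros_like(x)
            pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])
            gbest = pbest[np.argmin(pbest_cost)].copy()
            for _ in range(iters):
                r1, r2 = rng.random((2, n_particles, 6))
                v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
                x = np.clip(x + v, lo, hi)
                c = np.array([cost(p) for p in x])
                improved = c < pbest_cost
                pbest[improved], pbest_cost[improved] = x[improved], c[improved]
                gbest = pbest[np.argmin(pbest_cost)].copy()
            return gbest[:3], gbest[3:]

        # Synthetic raw readings: a known bias/scale applied to unit directions of a 50 uT field.
        rng = np.random.default_rng(1)
        dirs = rng.normal(size=(500, 3))
        dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
        raw = 50.0 * dirs * np.array([1.1, 0.9, 1.05]) + np.array([12.0, -7.0, 3.0])
        print(calibrate_magnetometer_pso(raw))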

  3. Parallel performance optimizations on unstructured mesh-based simulations

    DOE PAGESBeta

    Sarje, Abhinav; Song, Sukhyun; Jacobsen, Douglas; Huck, Kevin; Hollingsworth, Jeffrey; Malony, Allen; Williams, Samuel; Oliker, Leonid

    2015-06-01

    This paper addresses two key parallelization challenges in the unstructured mesh-based ocean modeling code, MPAS-Ocean, which uses a mesh based on Voronoi tessellations: (1) load imbalance across processes, and (2) unstructured data access patterns that inhibit intra- and inter-node performance. Our work analyzes the load imbalance due to naive partitioning of the mesh and develops methods to generate mesh partitionings with better load balance and reduced communication. Furthermore, we present methods that minimize both inter- and intra-node data movement and maximize data reuse. Our techniques include predictive ordering of data elements for higher cache efficiency, as well as communication reduction approaches. We present detailed performance data when running on thousands of cores using the Cray XC30 supercomputer and show that our optimization strategies can exceed the original performance by over 2×. Additionally, many of these solutions can be broadly applied to a wide variety of unstructured grid-based computations.

  4. Risk-based assessment of the surety of information systems

    SciTech Connect

    Jansma, R.; Fletcher, S.; Halbgewachs, R.; Lim, J.; Murphy, M.; Sands, P.; Wyss, G.

    1995-03-01

    Correct operation of an information system requires a balance of "surety" domains -- access control (confidentiality), integrity, utility, availability, and safety. However, traditional approaches provide little help on how to systematically analyze and balance the combined impact of surety requirements on a system. The key to achieving information system surety is identifying, prioritizing, and mitigating the sources of risk that may lead to system failure. Consequently, the authors propose a risk assessment methodology that provides a framework to guide the analyst in identifying and prioritizing sources of risk and selecting mitigation techniques. The framework leads the analyst to develop a risk-based system model for balancing the surety requirements and quantifying the effectiveness and combined impact of the mitigation techniques. Such a model allows the information system designer to make informed trade-offs based on the most effective risk-reduction measures.

  5. Requirements based system level risk modeling

    NASA Technical Reports Server (NTRS)

    Meshkat, L.; Cornford, S. L.; Feather, M. S.

    2004-01-01

    The problem that we address in this paper is assessing the expected degree of success of the system or mission based on the degree to which each requirement is satisfied and the relative weight of the requirements.
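
    The aggregation model itself is not given in this record; one simple, hedged reading of combining "degree of satisfaction" with "relative weight" is a weighted average, shown below with invented requirement names and numbers.

        # Hypothetical requirement satisfaction levels (0..1) and relative weights (sum to 1).
        satisfaction = {"pointing_accuracy": 0.9, "downlink_rate": 0.7, "thermal_margin": 1.0}
        weights = {"pointing_accuracy": 0.5, "downlink_rate": 0.3, "thermal_margin": 0.2}

        expected_success = sum(weights[r] * satisfaction[r] for r in weights)
        print(expected_success)   # 0.86 for these illustrative values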

  6. An integrated optimization method for river water quality management and risk analysis in a rural system.

    PubMed

    Liu, J; Li, Y P; Huang, G H; Zeng, X T; Nie, S

    2016-01-01

    In this study, an interval-stochastic-based risk analysis (RSRA) method is developed for supporting river water quality management in a rural system under uncertainty (i.e., uncertainties exist in a number of system components as well as their interrelationships). The RSRA method is effective in risk management and policy analysis, particularly when the inputs (such as allowable pollutant discharge and pollutant discharge rate) are expressed as probability distributions and interval values. Moreover, decision-makers' attitudes towards system risk can be reflected using a restricted resource measure by controlling the variability of the recourse cost. The RSRA method is then applied to a real case of water quality management in the Heshui River Basin (a rural area of China), where chemical oxygen demand (COD), total nitrogen (TN), total phosphorus (TP), and soil loss are selected as major indicators to identify the water pollution control strategies. Results reveal that uncertainties and risk attitudes have significant effects on both pollutant discharge and system benefit. A high risk measure level can lead to a reduced system benefit; however, this reduction also corresponds to raised system reliability. Results also disclose that (a) agriculture is the dominant contributor to soil loss, TN, and TP loads, and abatement actions should be mainly carried out for paddy and dry farms; (b) livestock husbandry is the main COD discharger, and abatement measures should be mainly conducted for poultry farms; (c) fishery accounts for a high percentage of TN, TP, and COD discharges but has a low percentage of overall net benefit, and it may be beneficial to cease fishery activities in the basin. The findings can facilitate the local authority in identifying desired pollution control strategies with a tradeoff between socioeconomic development and environmental sustainability. PMID:26310705

  7. An integrated optimization method for river water quality management and risk analysis in a rural system.

    PubMed

    Liu, J; Li, Y P; Huang, G H; Zeng, X T; Nie, S

    2016-01-01

    In this study, an interval-stochastic-based risk analysis (RSRA) method is developed for supporting river water quality management in a rural system under uncertainty (i.e., uncertainties exist in a number of system components as well as their interrelationships). The RSRA method is effective in risk management and policy analysis, particularly when the inputs (such as allowable pollutant discharge and pollutant discharge rate) are expressed as probability distributions and interval values. Moreover, decision-makers' attitudes towards system risk can be reflected using a restricted resource measure by controlling the variability of the recourse cost. The RSRA method is then applied to a real case of water quality management in the Heshui River Basin (a rural area of China), where chemical oxygen demand (COD), total nitrogen (TN), total phosphorus (TP), and soil loss are selected as major indicators to identify the water pollution control strategies. Results reveal that uncertainties and risk attitudes have significant effects on both pollutant discharge and system benefit. A high risk measure level can lead to a reduced system benefit; however, this reduction also corresponds to raised system reliability. Results also disclose that (a) agriculture is the dominant contributor to soil loss, TN, and TP loads, and abatement actions should be mainly carried out for paddy and dry farms; (b) livestock husbandry is the main COD discharger, and abatement measures should be mainly conducted for poultry farms; (c) fishery accounts for a high percentage of TN, TP, and COD discharges but has a low percentage of overall net benefit, and it may be beneficial to cease fishery activities in the basin. The findings can facilitate the local authority in identifying desired pollution control strategies with a tradeoff between socioeconomic development and environmental sustainability.

  8. Perception of mobile phone and base station risks.

    PubMed

    Siegrist, Michael; Earle, Timothy C; Gutscher, Heinz; Keller, Carmen

    2005-10-01

    Perceptions of risks associated with mobile phones, base stations, and other sources of electromagnetic fields (EMF) were examined. Data from a telephone survey conducted in the German- and French-speaking parts of Switzerland are presented (N = 1,015). Participants assessed both risks and benefits associated with nine different sources of EMF. Trust in the authorities regulating these hazards was assessed as well. In addition, participants answered a set of questions related to attitudes toward EMF and toward mobile phone base stations. According to respondents' assessments, high-voltage transmission lines are the most risky source of EMF. Mobile phones and mobile phone base stations received lower risk ratings. Results showed that trust in authorities was positively associated with perceived benefits and negatively associated with perceived risks. People who use their mobile phones frequently perceived lower risks and higher benefits than people who use their mobile phones infrequently. People who believed they lived close to a base station did not significantly differ in their level of risks associated with mobile phone base stations from people who did not believe they lived close to a base station. Regarding risk regulation, a majority of participants were in favor of fixing limiting values based on the worst-case scenario. Correlations suggest that belief in paranormal phenomena is related to level of perceived risks associated with EMF. Furthermore, people who believed that most chemical substances cause cancer also worried more about EMF than people who did not believe that chemical substances are that harmful. Practical implications of the results are discussed. PMID:16297229

  9. Predicting 10-Year Risk of Fatal Cardiovascular Disease in Germany: An Update Based on the SCORE-Deutschland Risk Charts

    PubMed Central

    Rücker, Viktoria; Keil, Ulrich; Fitzgerald, Anthony P; Malzahn, Uwe; Prugger, Christof; Ertl, Georg; Heuschmann, Peter U; Neuhauser, Hannelore

    2016-01-01

    Estimation of absolute risk of cardiovascular disease (CVD), preferably with population-specific risk charts, has become a cornerstone of CVD primary prevention. Regular recalibration of risk charts may be necessary due to decreasing CVD rates and CVD risk factor levels. The SCORE risk charts for fatal CVD risk assessment were first calibrated for Germany with 1998 risk factor level data and 1999 mortality statistics. We present an update of these risk charts based on the SCORE methodology including estimates of relative risks from SCORE, risk factor levels from the German Health Interview and Examination Survey for Adults 2008–11 (DEGS1) and official mortality statistics from 2012. Competing risks methods were applied and estimates were independently validated. Updated risk charts were calculated based on cholesterol, smoking, systolic blood pressure risk factor levels, sex and 5-year age-groups. The absolute 10-year risk estimates of fatal CVD were lower according to the updated risk charts compared to the first calibration for Germany. In a nationwide sample of 3062 adults aged 40–65 years free of major CVD from DEGS1, the mean 10-year risk of fatal CVD estimated by the updated charts was lower by 29% and the estimated proportion of high risk people (10-year risk > = 5%) by 50% compared to the older risk charts. This recalibration shows a need for regular updates of risk charts according to changes in mortality and risk factor levels in order to sustain the identification of people with a high CVD risk. PMID:27612145
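
    Neither the recalibrated German baseline survival nor the SCORE coefficients are reproduced in this record; the fragment below only illustrates the general relative-risk-on-baseline-survival form that such risk charts are built on, with entirely hypothetical numbers.

        import numpy as np

        def ten_year_fatal_cvd_risk(baseline_survival_10y, betas, x, x_ref):
            """Illustrative proportional-hazards style estimate:
            risk = 1 - S0(10) ** exp(beta . (x - x_ref)). All coefficients are placeholders."""
            lp = float(np.dot(betas, np.asarray(x, float) - np.asarray(x_ref, float)))
            return 1.0 - baseline_survival_10y ** np.exp(lp)

        # Hypothetical inputs: baseline 10-year survival for an age/sex group, coefficients for
        # (systolic BP, total cholesterol, current smoking), individual vs reference values.
        print(ten_year_fatal_cvd_risk(0.98, [0.02, 0.2, 0.7], [150.0, 6.0, 1.0], [130.0, 5.5, 0.0]))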

  10. Predicting 10-Year Risk of Fatal Cardiovascular Disease in Germany: An Update Based on the SCORE-Deutschland Risk Charts.

    PubMed

    Rücker, Viktoria; Keil, Ulrich; Fitzgerald, Anthony P; Malzahn, Uwe; Prugger, Christof; Ertl, Georg; Heuschmann, Peter U; Neuhauser, Hannelore

    2016-01-01

    Estimation of absolute risk of cardiovascular disease (CVD), preferably with population-specific risk charts, has become a cornerstone of CVD primary prevention. Regular recalibration of risk charts may be necessary due to decreasing CVD rates and CVD risk factor levels. The SCORE risk charts for fatal CVD risk assessment were first calibrated for Germany with 1998 risk factor level data and 1999 mortality statistics. We present an update of these risk charts based on the SCORE methodology including estimates of relative risks from SCORE, risk factor levels from the German Health Interview and Examination Survey for Adults 2008-11 (DEGS1) and official mortality statistics from 2012. Competing risks methods were applied and estimates were independently validated. Updated risk charts were calculated based on cholesterol, smoking, systolic blood pressure risk factor levels, sex and 5-year age-groups. The absolute 10-year risk estimates of fatal CVD were lower according to the updated risk charts compared to the first calibration for Germany. In a nationwide sample of 3062 adults aged 40-65 years free of major CVD from DEGS1, the mean 10-year risk of fatal CVD estimated by the updated charts was lower by 29% and the estimated proportion of high risk people (10-year risk > = 5%) by 50% compared to the older risk charts. This recalibration shows a need for regular updates of risk charts according to changes in mortality and risk factor levels in order to sustain the identification of people with a high CVD risk. PMID:27612145

  11. Efficacy of Code Optimization on Cache-based Processors

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    The current common wisdom in the U.S. is that the powerful, cost-effective supercomputers of tomorrow will be based on commodity (RISC) micro-processors with cache memories. Already, most distributed systems in the world use such hardware as building blocks. This shift away from vector supercomputers and towards cache-based systems has brought about a change in programming paradigm, even when ignoring issues of parallelism. Vector machines require inner-loop independence and regular, non-pathological memory strides (usually this means: non-power-of-two strides) to allow efficient vectorization of array operations. Cache-based systems require spatial and temporal locality of data, so that data once read from main memory and stored in high-speed cache memory is used optimally before being written back to main memory. This means that the most cache-friendly array operations are those that feature zero or unit stride, so that each unit of data read from main memory (a cache line) contains information for the next iteration in the loop. Moreover, loops ought to be 'fat', meaning that as many operations as possible are performed on cache data, provided instruction caches do not overflow and enough registers are available. If unit stride is not possible, for example because of some data dependency, then care must be taken to avoid pathological strides, just as on vector computers. For cache-based systems the issues are more complex, due to the effects of associativity and of non-unit block (cache line) size. But there is more to the story. Most modern micro-processors are superscalar, which means that they can issue several (arithmetic) instructions per clock cycle, provided that there are enough independent instructions in the loop body. This is another argument for providing fat loop bodies. With these restrictions, it appears fairly straightforward to produce code that will run efficiently on any cache-based system. It can be argued that although some of the important

  12. Cloud-based large-scale air traffic flow optimization

    NASA Astrophysics Data System (ADS)

    Cao, Yi

    The ever-increasing traffic demand makes the efficient use of airspace an imperative mission, and this paper presents an effort in response to this call. Firstly, a new aggregate model, called the Link Transmission Model (LTM), is proposed, which models the nationwide traffic as a network of flight routes identified by origin-destination pairs. The traversal time of a flight route is assumed to be the mode of the distribution of historical flight records, and the mode is estimated by using Kernel Density Estimation. As this simplification abstracts away physical trajectory details, the complexity of modeling is drastically decreased, resulting in efficient traffic forecasting. The predictive capability of LTM is validated against recorded traffic data. Secondly, a nationwide traffic flow optimization problem with airport and en route capacity constraints is formulated based on LTM. The optimization problem aims at alleviating traffic congestion with minimal global delays. This problem is intractable due to millions of variables. A dual decomposition method is applied to decompose the large-scale problem so that the subproblems are solvable. However, the whole problem is still computationally expensive to solve, since each subproblem is a smaller integer programming problem that pursues integer solutions. Solving an integer programming problem is known to be far more time-consuming than solving its linear relaxation. In addition, sequential execution on a standalone computer leads to a linear runtime increase when the problem size increases. To address the computational efficiency problem, a parallel computing framework is designed which accommodates concurrent executions via multithreading programming. The multithreaded version is compared with its monolithic version to show decreased runtime. Finally, an open-source cloud computing framework, Hadoop MapReduce, is employed for better scalability and reliability. This framework is an "off-the-shelf" parallel computing model
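
    As a small sketch of one step described above (taking a route's traversal time to be the mode of a kernel density estimate over historical records), with made-up flight times.

        import numpy as np
        from scipy.stats import gaussian_kde

        def route_traversal_mode(times_minutes):
            """Characteristic traversal time of an origin-destination route, estimated as
            the mode of a Gaussian KDE fitted to historical flight times."""
            kde = gaussian_kde(times_minutes)
            grid = np.linspace(min(times_minutes), max(times_minutes), 512)
            return float(grid[np.argmax(kde(grid))])

        # Illustrative sample of historical traversal times (minutes) for one route.
        history = np.array([128, 131, 129, 135, 127, 130, 132, 126, 140, 129, 131, 128], float)
        print(route_traversal_mode(history))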

  13. Development and use of risk-based inspection guides

    SciTech Connect

    Taylor, J.H.; Fresco, A.; Higgins, J.; Usher, J.; Long, S.M.

    1989-06-01

    Risk-based system inspection guides, for nuclear power plants which have been subjected to a probabilistic risk assessment (PRA), have been developed to provide guidance to NRC inspectors in prioritizing their inspection activities. Systems are prioritized, and then dominant component failure modes and human errors within those systems are identified for the above-stated purposes. Examples of applications to specific types of NRC inspection activities are also presented. Thus, the report provides guidance for both the development and use of risk-based system inspection guides. Work is proceeding to develop a methodology for risk-based guidance for nuclear power plants not subject to a PRA. 18 refs., 1 fig.

  14. Principles of database capability optimization based on Oracle9i

    NASA Astrophysics Data System (ADS)

    Li, Bin; Liu, Jiping; Shi, Lihong

    2008-12-01

    Oracle9i is a large and complicated database system whose runtime efficiency is of great importance to the performance of an E-government system. E-government systems hold large volumes of basic geo-spatial data and attribute data, such as vector data, grid data, image data, DEM data and statistical data. A large Oracle9i database and a three-layer B/S architecture are adopted in the system application. Because Oracle9i is used to manage this great volume of data, optimizing database performance is very important. This paper proposes optimization principles and methods for Oracle9i database performance in the areas of database structure, SQL statements and memory allocation, and gives examples to validate the proposed principles and methods. In practice, optimization of an Oracle9i database is a long and continually dynamic process as the volume of diverse data grows over time; it involves a great deal of work and requires frequent tracking of diverse statistical targets and analysis of the causes of performance changes. Many different factors must be considered together in order to improve the runtime efficiency of the system. During system construction, the DBA must carefully analyze the different requirements and rationally configure the various parameters for database structure, SQL statements and memory allocation so that the system based on the Oracle9i database performs at its best and improves the decision-making efficiency of e-government.

  15. Statistical Mechanics Approximation of Biogeography-Based Optimization.

    PubMed

    Ma, Haiping; Simon, Dan; Fei, Minrui

    2016-01-01

    Biogeography-based optimization (BBO) is an evolutionary algorithm inspired by biogeography, which is the study of the migration of species between habitats. This paper derives a mathematical description of the dynamics of BBO based on ideas from statistical mechanics. Rather than trying to exactly predict the evolution of the population, statistical mechanics methods describe the evolution of statistical properties of the population fitness. This paper uses the one-max problem, which has only one optimum and whose fitness function is the number of 1s in a binary string, to derive equations that predict the statistical properties of BBO each generation in terms of those of the previous generation. These equations reveal the effect of migration and mutation on the population fitness dynamics of BBO. The results obtained in this paper are similar to those for the simple genetic algorithm with selection and mutation. The paper also derives equations for the population fitness dynamics of general separable functions, and we find that the results obtained for separable functions are the same as those for the one-max problem. The statistical mechanics theory of BBO is shown to be in good agreement with simulation. PMID:26172435
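
    A bare-bones BBO on one-max, matching the setting analyzed above only in spirit; the linear migration rates and the mutation probability below are illustrative choices rather than the paper's.

        import numpy as np

        def bbo_onemax(n_bits=40, pop_size=30, generations=100, p_mut=0.01, seed=0):
            """Minimal biogeography-based optimisation on one-max (fitness = number of 1s)."""
            rng = np.random.default_rng(seed)
            pop = rng.integers(0, 2, (pop_size, n_bits))
            for _ in range(generations):
                fitness = pop.sum(axis=1)
                order = np.argsort(-fitness)             # best habitat first
                ranks = np.empty(pop_size)
                ranks[order] = np.arange(pop_size)
                mu = (pop_size - ranks) / pop_size       # emigration: high for fit habitats
                lam = 1.0 - mu                           # immigration: high for poor habitats
                new_pop = pop.copy()
                for i in range(pop_size):
                    for j in range(n_bits):
                        if rng.random() < lam[i]:        # immigrate this feature from a donor
                            donor = rng.choice(pop_size, p=mu / mu.sum())
                            new_pop[i, j] = pop[donor, j]
                        if rng.random() < p_mut:
                            new_pop[i, j] ^= 1           # bit-flip mutation
                pop = new_pop
            return pop[np.argmax(pop.sum(axis=1))]

        print(bbo_onemax().sum(), "ones out of 40")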

  16. Task-based optimization of image reconstruction in breast CT

    NASA Astrophysics Data System (ADS)

    Sanchez, Adrian A.; Sidky, Emil Y.; Pan, Xiaochuan

    2014-03-01

    We demonstrate a task-based assessment of image quality in dedicated breast CT in order to optimize the number of projection views acquired. The methodology we employ is based on the Hotelling Observer (HO) and its associated metrics. We consider two tasks: the Rayleigh task of discerning between two resolvable objects and a single larger object, and the signal detection task of classifying an image as belonging to either a signal-present or signal-absent hypothesis. HO SNR values are computed for 50, 100, 200, 500, and 1000 projection view images, with the total imaging radiation dose held constant. We use the conventional fan-beam FBP algorithm and investigate the effect of varying the width of a Hanning window used in the reconstruction, since this affects both the noise properties of the image and the under-sampling artifacts which can arise in the case of sparse-view acquisitions. Our results demonstrate that fewer projection views should be used in order to increase HO performance, which in this case constitutes an upper bound on human observer performance. However, the impact on HO SNR of using fewer projection views, each with a higher dose, is not as significant as the impact of employing regularization in the FBP reconstruction through a Hanning filter.
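
    The CT reconstructions themselves are not available in this record; the snippet below only shows how an HO SNR is computed from sample images of the two hypotheses, using the standard quadratic form with the mean class difference and a regularised covariance, on random stand-in data.

        import numpy as np

        def hotelling_snr(signal_present, signal_absent):
            """Hotelling observer SNR from sample images (one flattened image per row):
            SNR^2 = dg^T K^{-1} dg, with dg the mean difference and K the average
            within-class covariance (plus a small ridge for invertibility)."""
            dg = signal_present.mean(axis=0) - signal_absent.mean(axis=0)
            k = 0.5 * (np.cov(signal_present, rowvar=False) + np.cov(signal_absent, rowvar=False))
            k += 1e-6 * np.eye(k.shape[0])
            return float(np.sqrt(dg @ np.linalg.solve(k, dg)))

        # Illustrative use on random 8x8 patches flattened to 64-element vectors.
        rng = np.random.default_rng(1)
        absent = rng.normal(0.0, 1.0, (500, 64))
        present = rng.normal(0.3, 1.0, (500, 64))   # hypothetical uniform signal in noise
        print(hotelling_snr(present, absent))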

  17. Statistical Mechanics Approximation of Biogeography-Based Optimization.

    PubMed

    Ma, Haiping; Simon, Dan; Fei, Minrui

    2016-01-01

    Biogeography-based optimization (BBO) is an evolutionary algorithm inspired by biogeography, which is the study of the migration of species between habitats. This paper derives a mathematical description of the dynamics of BBO based on ideas from statistical mechanics. Rather than trying to exactly predict the evolution of the population, statistical mechanics methods describe the evolution of statistical properties of the population fitness. This paper uses the one-max problem, which has only one optimum and whose fitness function is the number of 1s in a binary string, to derive equations that predict the statistical properties of BBO each generation in terms of those of the previous generation. These equations reveal the effect of migration and mutation on the population fitness dynamics of BBO. The results obtained in this paper are similar to those for the simple genetic algorithm with selection and mutation. The paper also derives equations for the population fitness dynamics of general separable functions, and we find that the results obtained for separable functions are the same as those for the one-max problem. The statistical mechanics theory of BBO is shown to be in good agreement with simulation.

  18. Optimal control of switched linear systems based on Migrant Particle Swarm Optimization algorithm

    NASA Astrophysics Data System (ADS)

    Xie, Fuqiang; Wang, Yongji; Zheng, Zongzhun; Li, Chuanfeng

    2009-10-01

    The optimal control problem for switched linear systems with internally forced switching has more constraints than with externally forced switching. Heavy computations and slow convergence in solving this problem is a major obstacle. In this paper we describe a new approach for solving this problem, which is called Migrant Particle Swarm Optimization (Migrant PSO). Imitating the behavior of a flock of migrant birds, the Migrant PSO applies naturally to both continuous and discrete spaces, in which definitive optimization algorithm and stochastic search method are combined. The efficacy of the proposed algorithm is illustrated via a numerical example.

  19. A risk-based approach for a national assessment

    SciTech Connect

    Whelan, Gene; Laniak, Gerard F.

    1998-10-18

    The need for environmental systems modeling is growing rapidly because of 1) the combination of increasing technical scope and complexity related to questions of risk-based cause and effect and 2) the need to explicitly address cost effectiveness in both the development and implementation of environmental regulations. The nature of risk assessments is evolving with their increased complexity in assessing individual sites and collections of sites, addressing regional or national regulatory needs. These assessments require the integration of existing tools and the development of new databases and models, based on a comprehensive and holistic view of the risk assessment problem. To meet these environmental regulatory needs, multiple-media-based assessments are formulated to view and assess risks from a comprehensive environmental systems perspective, crossing the boundaries of several scientific disciplines. Given these considerations and the advanced state of computer hardware and software, it is possible to design a software system that facilitates the development and integration of assessment tools (e.g., databases and models). In this paper, a risk-based approach for supporting national risk assessments is presented. This approach combines 1) databases, 2) multiple media models combining source-term, fate and transport, exposure, and risk/hazard, and 3) sensitivity/uncertainty capabilities within a software system capable of growing with the science of risk assessment. The design and linkages of the system are discussed. This paper also provides the rationale behind the design of the framework, as there is a recognized need to develop more holistic approaches to risk assessment.

  20. Prevalence of optimal treatment regimens in patients with apparent treatment-resistant hypertension based on office blood pressure in a community-based practice network.

    PubMed

    Egan, Brent M; Zhao, Yumin; Li, Jiexiang; Brzezinski, W Adam; Todoran, Thomas M; Brook, Robert D; Calhoun, David A

    2013-10-01

    Hypertensive patients with clinical blood pressure (BP) uncontrolled on ≥3 antihypertensive medications (ie, apparent treatment-resistant hypertension [aTRH]) comprise ≈28% to 30% of all uncontrolled patients in the United States. However, the proportion receiving these medications in optimal doses is unknown; the term aTRH is used because treatment adherence and measurement artifacts were not available in electronic record data from the >200 community-based clinics in our Outpatient Quality Improvement Network. This study sought to define the proportion of uncontrolled hypertensives with aTRH on optimal regimens and the clinical factors associated with optimal therapy. During 2007-2010, 468 877 hypertensive patients met inclusion criteria. BP <140/<90 mm Hg defined control. Multivariable logistic regression was used to assess variables independently associated with optimal therapy (prescription of a diuretic and ≥2 other BP medications at ≥50% of maximum recommended hypertension doses). Among 468 877 hypertensives, 147 635 (31.5%) were uncontrolled; among uncontrolled hypertensives, 44 684 were prescribed ≥3 BP medications (30.3%), of whom 22 189 (15.0%) were prescribed optimal therapy. Clinical factors independently associated with optimal BP therapy included black race (odds ratio, 1.40 [95% confidence interval, 1.32-1.49]), chronic kidney disease (1.31 [1.25-1.38]), diabetes mellitus (1.30 [1.24-1.37]), and coronary heart disease risk equivalent status (1.29 [1.14-1.46]). Clinicians more often prescribe optimal therapy for aTRH when cardiovascular risk is greater and treatment goals lower. Approximately 1 in 7 of all uncontrolled hypertensives and 1 in 2 with uncontrolled aTRH are prescribed ≥3 BP medications in optimal regimens. Prescribing more optimal pharmacotherapy for uncontrolled hypertensives including aTRH, confirmed with out-of-office BP, could improve hypertension control.

  1. Optimal illumination for visual enhancement based on color entropy evaluation.

    PubMed

    Shen, Junfei; Chang, Shengqian; Wang, Huihui; Zheng, Zhenrong

    2016-08-22

    Object visualization is influenced by the spectral distribution of the illuminant impinging upon it. In this paper, we propose a color entropy evaluation method to provide the optimal illumination that best helps surgeons distinguish tissue features. The target-specific optimal illumination was obtained by maximizing the color entropy value of our sample tissue, whose spectral reflectance was measured using multispectral imaging. Sample images captured under the optimal light were compared with those under commercial white light-emitting diodes (3000 K, 4000 K and 5500 K). Results showed that images under the optimized illuminant had better visual performance, for example exhibiting more subtle details. PMID:27557255
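
    The exact entropy definition and the spectral rendering pipeline are not reproduced in this record; the sketch below computes a generic Shannon entropy over quantized RGB colors, the kind of score one would maximize across candidate illuminants, using a random stand-in image.

        import numpy as np

        def color_entropy(rgb_image, bins_per_channel=16):
            """Shannon entropy (bits) of the quantised colour distribution of an image in [0, 1]."""
            q = np.clip((rgb_image * bins_per_channel).astype(int), 0, bins_per_channel - 1)
            codes = q[..., 0] * bins_per_channel**2 + q[..., 1] * bins_per_channel + q[..., 2]
            counts = np.bincount(codes.ravel(), minlength=bins_per_channel**3)
            p = counts[counts > 0] / counts.sum()
            return float(-(p * np.log2(p)).sum())

        # Hypothetical use: render the tissue reflectances under each candidate illuminant
        # (rendering not shown) and keep the spectrum that yields the largest entropy.
        image = np.random.default_rng(0).random((128, 128, 3))   # stand-in rendering in [0, 1]
        print(color_entropy(image))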

  2. Lymphatic filariasis transmission risk map of India, based on a geo-environmental risk model.

    PubMed

    Sabesan, Shanmugavelu; Raju, Konuganti Hari Kishan; Subramanian, Swaminathan; Srivastava, Pradeep Kumar; Jambulingam, Purushothaman

    2013-09-01

    The strategy adopted by a global program to interrupt transmission of lymphatic filariasis (LF) is mass drug administration (MDA) using chemotherapy. India also followed this strategy by introducing MDA in the historically known endemic areas. All other areas, which remained unsurveyed, were presumed to be nonendemic and left without any intervention. Therefore, identification of LF transmission risk areas in the entire country has become essential so that they can be targeted for intervention. A geo-environmental risk model (GERM) developed earlier was used to create a filariasis transmission risk map for India. In this model, a Standardized Filariasis Transmission Risk Index (SFTRI, based on geo-environmental risk variables) was used as a predictor of transmission risk. The relationship between SFTRI and endemicity (historically known) of an area was quantified by logistic regression analysis. The quantified relationship was validated by assessing the filarial antigenemia status of children living in the unsurveyed areas through a ground truth study. A significant positive relationship was observed between SFTRI and the endemicity of an area. Overall, the model prediction of filarial endemic status of districts was found to be correct in 92.8% of the total observations. Thus, among the 190 districts hitherto unsurveyed, as many as 113 districts were predicted to be at risk, and the remaining at no risk. The GERM developed on geographic information system (GIS) platform is useful for LF spatial delimitation on a macrogeographic/regional scale. Furthermore, the risk map developed will be useful for the national LF elimination program by identifying areas at risk for intervention and for undertaking surveillance in no-risk areas.

  3. CT-based 3-D visualisation of secure bone corridors and optimal trajectories for sacroiliac screws.

    PubMed

    Mendel, Thomas; Radetzki, Florian; Wohlrab, David; Stock, Karsten; Hofmann, Gunther Olaf; Noser, Hansrudi

    2013-07-01

    Sacroiliac screw (SI) fixation represents the only minimally invasive method to stabilise unstable injuries of the posterior pelvic ring. However, it is technically demanding. The narrow sacral proportions and a high inter-individual shape variability places adjacent neurovascular structures at potential risk. In this study a CT-based virtual analysis of the iliosacral anatomy in the human pelvis was performed to visualise and analyse 3-D bone corridors for the safe placement of SI-screws in the first sacral segment. Computer-aided calculation of 3-D transverse and general SI-corridors as a sum of all inner-bony 7.3-mm screw positions was done with custom-made software algorithms based on CT-scans of intact human pelvises. Radiomorphometric analysis of 11 CT-DICOM datasets using the software Amira 4.2. Optimal screw tracks allowing the greatest safety distance to the cortex were computed. Corridor geometry and optimal tracks were visualised; measurement data were calculated. A transverse corridor existed in 10 pelvises. In one dysmorphic pelvis, the pedicular height at the level of the 1st neural foramina came below the critical distance of 7.3mm defined by the outer screw diameter. The mean corridor volume was 45.2 cm3, with a length of 14.9cm. The oval cross-section measured 2.8 cm2. The diameter of the optimal screw pathway with the greatest safety distance was 14.2mm. A double cone-shaped general corridor for screw penetration up to the centre of the S1-body was calculated bilaterally for every pelvis. The mean volume was 120.6 cm3 for the left side and 115.8 cm3 for the right side. The iliac entry area measured 49.1 versus 46.0 cm2. Optimal screw tracks were calculated in terms of projected inlet and outlet angles. Multiple optimal screw positions existed for each pelvis. The described method allows an automated 3-D analysis with regard to secure SI-screw corridors even with a high number of CT-datasets. Corridor visualisation and calculation of optimal screw

  4. Reducing the Academic Risks of Over-Optimism: The Longitudinal Effects of Attributional Retraining on Cognition and Achievement

    ERIC Educational Resources Information Center

    Haynes, Tara L.; Ruthig, Joelle C.; Perry, Raymond P.; Stupnisky, Robert H.; Hall, Nathan C.

    2006-01-01

    Although optimism is generally regarded as a positive dispositional characteristic, unmitigated optimism can be problematic. The adaptiveness of overly optimistic expectations in novel or unfamiliar settings is questionable because individuals have little relevant experience on which to base such expectations. In this four-phase longitudinal…

  5. Optimizing human apyrase to treat arterial thrombosis and limit reperfusion injury without increasing bleeding risk.

    PubMed

    Moeckel, Douglas; Jeong, Soon Soeg; Sun, Xiaofeng; Broekman, M Johan; Nguyen, Annie; Drosopoulos, Joan H F; Marcus, Aaron J; Robson, Simon C; Chen, Ridong; Abendschein, Dana

    2014-08-01

    In patients with acute myocardial infarction undergoing reperfusion therapy to restore blood flow through blocked arteries, simultaneous inhibition of platelet P2Y12 receptors with the current standard of care neither completely prevents recurrent thrombosis nor provides satisfactory protection against reperfusion injury. Additionally, these antiplatelet drugs increase the risk of bleeding. To devise a different strategy, we engineered and optimized the apyrase activity of human nucleoside triphosphate diphosphohydrolase-3 (CD39L3) to enhance scavenging of extracellular adenosine diphosphate, a predominant ligand of P2Y12 receptors. The resulting recombinant protein, APT102, exhibited greater than four times higher adenosine diphosphatase activity and a 50 times longer plasma half-life than did native apyrase. Treatment with APT102 before coronary fibrinolysis with intravenous recombinant human tissue-type plasminogen activator in conscious dogs completely prevented thrombotic reocclusion and significantly decreased infarction size by 81% without increasing bleeding time. In contrast, clopidogrel did not prevent coronary reocclusion and increased bleeding time. In a murine model of myocardial reperfusion injury caused by transient coronary artery occlusion, APT102 also decreased infarct size by 51%, whereas clopidogrel was not effective. These preclinical data suggest that APT102 should be tested for its ability to safely and effectively maximize the benefits of myocardial reperfusion therapy in patients with arterial thrombosis.

  6. Reliability-based robust design optimization of vehicle components, Part II: Case studies

    NASA Astrophysics Data System (ADS)

    Zhang, Yimin

    2015-06-01

    The reliability-based optimization, the reliability-based sensitivity analysis and the robust design method are employed to propose an effective approach for reliability-based robust design optimization of vehicle components in Part I. Applications of the method are further discussed for reliability-based robust optimization of vehicle components in this paper. Examples of axles, torsion bars, and coil and composite springs are illustrated for numerical investigations. The results show that the proposed method is efficient for reliability-based robust design optimization of vehicle components.

  7. Comparison of Genetic Algorithm, Particle Swarm Optimization and Biogeography-based Optimization for Feature Selection to Classify Clusters of Microcalcifications

    NASA Astrophysics Data System (ADS)

    Khehra, Baljit Singh; Pharwaha, Amar Partap Singh

    2016-06-01

    Ductal carcinoma in situ (DCIS) is one type of breast cancer. Clusters of microcalcifications (MCCs) are symptoms of DCIS that are recognized by mammography. Selection of a robust feature vector is the process of selecting an optimal subset of features from a large number of available features in a given problem domain, after feature extraction and before any classification scheme. Feature selection reduces the feature space, which improves the performance of the classifier and decreases the computational burden imposed by using many features. Selection of an optimal subset of features from a large number of available features in a given problem domain is a difficult search problem: for n features, the total number of possible subsets is 2^n. Thus, selecting an optimal subset of features belongs to the category of NP-hard problems. In this paper, an attempt is made to find the optimal subset of MCC features from all possible subsets using genetic algorithm (GA), particle swarm optimization (PSO) and biogeography-based optimization (BBO). For simulation, a total of 380 benign and malignant MCC samples have been selected from mammogram images of the DDSM database. A total of 50 features extracted from the benign and malignant MCC samples are used in this study. In these algorithms, the fitness function is the correct classification rate of the classifier. A support vector machine is used as the classifier. From the experimental results, it is observed that the PSO-based and BBO-based algorithms select an optimal subset of features for classifying MCCs as benign or malignant better than the GA-based algorithm does.

  8. Is the 90th Percentile Adequate? The Optimal Waist Circumference Cutoff Points for Predicting Cardiovascular Risks in 124,643 15-Year-Old Taiwanese Adolescents

    PubMed Central

    Ho, ChinYu; Chen, Hsin-Jen; Huang, Nicole; Yeh, Jade Chienyu; deFerranti, Sarah

    2016-01-01

    Adolescent obesity has increased to alarming proportions globally. However, few studies have investigated the optimal waist circumference (WC) of Asian adolescents. This study sought to establish the optimal WC cutoff points that identify a cluster of cardiovascular risk factors (CVRFs) among 15-year-old ethnically Chinese adolescents. This study was a regional population-based study on the CVRFs among adolescents who enrolled in all the senior high schools in Taipei City, Taiwan, between 2011 and 2014. Four cross-sectional health examinations of first-year senior high school (grade 10) students were conducted from September to December of each year. A total of 124,643 adolescents aged 15 (boys: 63,654; girls: 60,989) were recruited. Participants who had at least three of five CVRFs were classified as the high-risk group. We used receiver-operating characteristic curves and the area under the curve (AUC) to determine the optimal WC cutoff points and the accuracy of WC in predicting high cardiovascular risk. WC was a good predictor for high cardiovascular risk for both boys (AUC: 0.845, 95% confidence interval [CI]: 0.833–0.857) and girls (AUC: 0.763, 95% CI: 0.731–0.795). The optimal WC cutoff points were ≥78.9 cm for boys (77th percentile) and ≥70.7 cm for girls (77th percentile). Adolescents with normal weight and an abnormal WC were more likely to be in the high cardiovascular risk group (odds ratio: 3.70, 95% CI: 2.65–5.17) compared to their peers with normal weight and normal WC. The optimal WC cutoff point of 15-year-old Taiwanese adolescents for identifying CVRFs should be the 77th percentile; the 90th percentile of the WC might be inadequate. The high WC criteria can help health professionals identify higher proportion of the adolescents with cardiovascular risks and refer them for further evaluations and interventions. Adolescents’ height, weight and WC should be measured as a standard practice in routine health checkups. PMID:27389572
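
    The study data are not available here; the snippet below illustrates the general ROC-based procedure (AUC plus an optimal cutoff, chosen here with Youden's J, which may differ from the criterion actually used) on simulated waist-circumference data.

        import numpy as np
        from scipy import stats
        from sklearn.metrics import roc_auc_score, roc_curve

        # Hypothetical data: waist circumference (cm) and a 0/1 flag for >= 3 of 5 CVRFs.
        rng = np.random.default_rng(0)
        high_risk = rng.integers(0, 2, 2000)
        waist = rng.normal(70.0, 8.0, 2000) + 8.0 * high_risk   # risk shifts WC upward

        auc = roc_auc_score(high_risk, waist)
        fpr, tpr, thresholds = roc_curve(high_risk, waist)
        best = np.argmax(tpr - fpr)                             # Youden's J = sens + spec - 1
        cutoff = thresholds[best]
        percentile = stats.percentileofscore(waist, cutoff)
        print(f"AUC={auc:.3f}, cutoff={cutoff:.1f} cm (~{percentile:.0f}th percentile)")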

  9. Is the 90th Percentile Adequate? The Optimal Waist Circumference Cutoff Points for Predicting Cardiovascular Risks in 124,643 15-Year-Old Taiwanese Adolescents.

    PubMed

    Lee, Jason Jiunshiou; Ho, ChinYu; Chen, Hsin-Jen; Huang, Nicole; Yeh, Jade Chienyu; deFerranti, Sarah

    2016-01-01

    Adolescent obesity has increased to alarming proportions globally. However, few studies have investigated the optimal waist circumference (WC) of Asian adolescents. This study sought to establish the optimal WC cutoff points that identify a cluster of cardiovascular risk factors (CVRFs) among 15-year-old ethnically Chinese adolescents. This study was a regional population-based study on the CVRFs among adolescents who enrolled in all the senior high schools in Taipei City, Taiwan, between 2011 and 2014. Four cross-sectional health examinations of first-year senior high school (grade 10) students were conducted from September to December of each year. A total of 124,643 adolescents aged 15 (boys: 63,654; girls: 60,989) were recruited. Participants who had at least three of five CVRFs were classified as the high-risk group. We used receiver-operating characteristic curves and the area under the curve (AUC) to determine the optimal WC cutoff points and the accuracy of WC in predicting high cardiovascular risk. WC was a good predictor for high cardiovascular risk for both boys (AUC: 0.845, 95% confidence interval [CI]: 0.833-0.857) and girls (AUC: 0.763, 95% CI: 0.731-0.795). The optimal WC cutoff points were ≥78.9 cm for boys (77th percentile) and ≥70.7 cm for girls (77th percentile). Adolescents with normal weight and an abnormal WC were more likely to be in the high cardiovascular risk group (odds ratio: 3.70, 95% CI: 2.65-5.17) compared to their peers with normal weight and normal WC. The optimal WC cutoff point of 15-year-old Taiwanese adolescents for identifying CVRFs should be the 77th percentile; the 90th percentile of the WC might be inadequate. The high WC criteria can help health professionals identify higher proportion of the adolescents with cardiovascular risks and refer them for further evaluations and interventions. Adolescents' height, weight and WC should be measured as a standard practice in routine health checkups. PMID:27389572

  11. Model-based benefit-risk assessment: can Archimedes help?

    PubMed

    Krishna, R

    2009-03-01

    In December 2008, the US Food and Drug Administration issued a new draft Guidance for Industry on Diabetes Mellitus--evaluating cardiovascular risk in new antidiabetic therapies to treat Type 2 diabetes. This guidance comes at a time when recent discussions have focused on delineation of cardiovascular risk reduction for new antidiabetic drugs. Computational tools that can enable early prediction of cardiovascular risk are reviewed with specific reference to Archimedes (Kaiser Permanente), with an aim of proposing a model-based solution and enabling decisions to be made as early as possible in the drug development value chain.

  12. Beamlet based direct aperture optimization for MERT using a photon MLC

    SciTech Connect

    Henzen, D. Manser, P.; Frei, D.; Volken, W.; Born, E. J.; Joosten, A.; Lössl, K.; Aebersold, D. M.; Chatelain, C.; Fix, M. K.; Neuenschwander, H.; Stampanoni, M. F. M.

    2014-12-15

    Purpose: A beamlet based direct aperture optimization (DAO) for modulated electron radiotherapy (MERT) using photon multileaf collimator (pMLC) shaped electron fields is developed and investigated. Methods: The Swiss Monte Carlo Plan (SMCP) allows the calculation of dose distributions for pMLC shaped electron beams. SMCP is interfaced with the Eclipse TPS (Varian Medical Systems, Palo Alto, CA) which can thus be included into the inverse treatment planning process for MERT. This process starts with the import of a CT-scan into Eclipse, the contouring of the target and the organs at risk (OARs), and the choice of the initial electron beam directions. For each electron beam, the number of apertures, their energy, and initial shape are defined. Furthermore, the DAO requires dose–volume constraints for the structures contoured. In order to carry out the DAO efficiently, the initial electron beams are divided into a grid of beamlets. For each of those, the dose distribution is precalculated using a modified electron beam model, resulting in a dose list for each beamlet and energy. Then the DAO is carried out, leading to a set of optimal apertures and corresponding weights. These optimal apertures are now converted into pMLC shaped segments and the dose calculation for each segment is performed. For these dose distributions, a weight optimization process is launched in order to minimize the differences between the dose distribution using the optimal apertures and the pMLC segments. Finally, a deliverable dose distribution for the MERT plan is obtained and loaded back into Eclipse for evaluation. For an idealized water phantom geometry, a MERT treatment plan is created and compared to the plan obtained using a previously developed forward planning strategy. Further, MERT treatment plans for three clinical situations (breast, chest wall, and parotid metastasis of a squamous cell skin carcinoma) are created using the developed inverse planning strategy. The MERT plans are

  13. A Third-Generation Evidence Base for Human Spaceflight Risks

    NASA Technical Reports Server (NTRS)

    Kundrot, Craig E.; Lumpkins, Sarah; Steil, Jennifer; Pellis, Neal; Charles, John

    2014-01-01

    NASA's Human Research Program (HRP) seeks to understand and mitigate risks to crew health and performance in exploration missions. HRP's evidence base consists of an Evidence Report for each HRP risk. Three generations of Evidence Reports have been used: (1) review articles (good content, but limited authorship and infrequent updates); (2) Wikipedia articles (viewed often and very open to contributions, but largely summaries of the reviews, with very few contributions received); and (3) HRP-controlled wiki articles (incremental additions to the review articles with editorial control).

  14. Adaptive surrogate model based multi-objective transfer trajectory optimization between different libration points

    NASA Astrophysics Data System (ADS)

    Peng, Haijun; Wang, Wei

    2016-10-01

    An adaptive surrogate model-based multi-objective optimization strategy that combines the benefits of invariant manifolds and low-thrust control toward developing a low-computational-cost transfer trajectory between libration orbits around the L1 and L2 libration points in the Sun-Earth system has been proposed in this paper. A new structure for a multi-objective transfer trajectory optimization model that divides the transfer trajectory into several segments and assigns the dominant roles of invariant manifolds and low-thrust control in different segments has been established. To reduce the computational cost of multi-objective transfer trajectory optimization, a mixed sampling strategy-based adaptive surrogate model has been proposed. Numerical simulations show that the results obtained from the adaptive surrogate-based multi-objective optimization are in agreement with the results obtained using direct multi-objective optimization methods, and the computational workload of the adaptive surrogate-based multi-objective optimization is only approximately 10% of that of direct multi-objective optimization. Furthermore, the generating efficiency of the Pareto points of the adaptive surrogate-based multi-objective optimization is approximately 8 times that of the direct multi-objective optimization. Therefore, the proposed adaptive surrogate-based multi-objective optimization provides obvious advantages over direct multi-objective optimization methods.

  15. Pressure distribution based optimization of phase-coded acoustical vortices

    SciTech Connect

    Zheng, Haixiang; Gao, Lu; Dai, Yafei; Ma, Qingyu; Zhang, Dong

    2014-02-28

    Based on the acoustic radiation of point source, the physical mechanism of phase-coded acoustical vortices is investigated with formulae derivations of acoustic pressure and vibration velocity. Various factors that affect the optimization of acoustical vortices are analyzed. Numerical simulations of the axial, radial, and circular pressure distributions are performed with different source numbers, frequencies, and axial distances. The results prove that the acoustic pressure of acoustical vortices is linearly proportional to the source number, and lower fluctuations of circular pressure distributions can be produced for more sources. With the increase of source frequency, the acoustic pressure of acoustical vortices increases accordingly with decreased vortex radius. Meanwhile, increased vortex radius with reduced acoustic pressure is also achieved for longer axial distance. With the 6-source experimental system, circular and radial pressure distributions at various frequencies and axial distances have been measured, which have good agreements with the results of numerical simulations. The favorable results of acoustic pressure distributions provide theoretical basis for further studies of acoustical vortices.

  16. Optimal grid-based methods for thin film micromagnetics simulations

    NASA Astrophysics Data System (ADS)

    Muratov, C. B.; Osipov, V. V.

    2006-08-01

    Thin film micromagnetics are a broad class of materials with many technological applications, primarily in magnetic memory. The dynamics of the magnetization distribution in these materials is traditionally modeled by the Landau-Lifshitz-Gilbert (LLG) equation. Numerical simulations of the LLG equation are complicated by the need to compute the stray field due to the inhomogeneities in the magnetization which presents the chief bottleneck for the simulation speed. Here, we introduce a new method for computing the stray field in a sample for a reduced model of ultra-thin film micromagnetics. The method uses a recently proposed idea of optimal finite difference grids for approximating Neumann-to-Dirichlet maps and has an advantage of being able to use non-uniform discretization in the film plane, as well as an efficient way of dealing with the boundary conditions at infinity for the stray field. We present several examples of the method's implementation and give a detailed comparison of its performance for studying domain wall structures compared to the conventional FFT-based methods.

  17. On the Optimization of GLite-Based Job Submission

    NASA Astrophysics Data System (ADS)

    Misurelli, Giuseppe; Palmieri, Francesco; Pardi, Silvio; Veronesi, Paolo

    2011-12-01

    A Grid is a very dynamic, complex and heterogeneous system, whose reliability can be adversely conditioned by several different factors such as communications and hardware faults, middleware bugs or wrong configurations due to human errors. As the infrastructure scales, spanning a large number of sites, each hosting hundreds or thousands of hosts/resources, the occurrence of runtime faults following job submission becomes a very frequent phenomenon. Therefore, fault avoidance becomes a fundamental aim in modern Grids, since the dependability of individual resources, spread across widely distributed computing infrastructures and often used outside of their native organizational boundaries, cannot be guaranteed in any systematic way. Accordingly, we propose a simple job optimization solution based on a user-driven fault-avoidance strategy. This strategy starts from the introduction, within the grid information system, of several on-line service-monitoring metrics that can be used as specific hints to the workload management system for driving resource discovery operations according to a fault-free resource-scheduling plan. This solution, whose main goal is to minimize execution time by avoiding execution failures, proved to be very effective in increasing both the user-perceivable quality and the overall grid performance.

  18. Optimizing the Growth of (111) Diamond for Diamond Based Magnetometry

    NASA Astrophysics Data System (ADS)

    Kamp, Eric; Godwin, Patrick; Samarth, Nitin; Snyder, David; de Las Casas, Charles; Awschalom, David D.

    Magnetometers based on nitrogen vacancy (NV) ensembles have recently achieved sub-picotesla sensitivities [Phys. Rev. X 5, 041001 (2015)], putting the technique on par with SQUID and MFM magnetometry. Typically these sensors use (100) oriented diamond with NV centers forming along all four (111) crystal orientations. This allows for vector magnetometry, but is a hindrance to the absolute sensitivity. Diamond grown on (111) oriented substrates through microwave plasma enhanced chemical vapor deposition (MP-CVD) provides a promising route in this context since such films can exhibit preferential orientation greater than 99% [Appl. Phys. Lett. 104, 102407 (2014)]. An important challenge though is to achieve sufficiently high NV center densities required for enhancing the sensitivity of an NV ensemble magnetometer. We report systematic studies of the MP-CVD growth and characterization of (111) oriented diamond, where we vary growth temperature, methane concentration, and nitrogen doping. For each film we study the nitrogen-to-NV ratio, the NV⁻ to NV⁰ ratio, and the alignment percentage to minimize sources of decoherence and ensure preferential alignment. From these measurements we determine the optimal growth parameters for high-sensitivity NV center ensemble scalar magnetometry. Funded by NSF-DMR.

  19. Formation mechanisms and optimization of trap-based positron beams

    NASA Astrophysics Data System (ADS)

    Natisin, M. R.; Danielson, J. R.; Surko, C. M.

    2016-02-01

    Described here are simulations of pulsed, magnetically guided positron beams formed by ejection from Penning-Malmberg-style traps. In a previous paper [M. R. Natisin et al., Phys. Plasmas 22, 033501 (2015)], simulations were developed and used to describe the operation of an existing trap-based beam system and provided good agreement with experimental measurements. These techniques are used here to study the processes underlying beam formation in more detail and under more general conditions, therefore further optimizing system design. The focus is on low-energy beams (˜eV) with the lowest possible spread in energies (<10 meV), while maintaining microsecond pulse durations. The simulations begin with positrons trapped within a potential well and subsequently ejected by raising the bottom of the trapping well, forcing the particles over an end-gate potential barrier. Under typical conditions, the beam formation process is intrinsically dynamical, with the positron dynamics near the well lip, just before ejection, particularly crucial to setting beam quality. In addition to an investigation of the effects of beam formation on beam quality under typical conditions, two other regimes are discussed; one occurring at low positron temperatures in which significantly lower energy and temporal spreads may be obtained, and a second in cases where the positrons are ejected on time scales significantly faster than the axial bounce time, which results in the ejection process being essentially non-dynamical.

  20. A new optimization based approach to experimental combination chemotherapy.

    PubMed

    Pereira, F L; Pedreira, C E; de Sousa, J B

    1995-01-01

    A new approach to the design of optimal multiple-drug experimental cancer chemotherapy is presented. Once an adequate model is specified, an optimization procedure is used to achieve an optimal compromise between post-treatment tumor size and toxic effects on healthy tissues. In our approach we consider a model including cancer cell population growth and pharmacokinetic dynamics. These elements of the model are essential to allow less empirical relationships between multiple-drug delivery policies and their effects on cancer and normal cells. The desired multiple-drug dosage schedule is computed by minimizing a customizable cost function subject to the dynamic constraints expressed by the model. However, this additional dynamical richness increases the complexity of the problem, which, in general, cannot be solved in closed form. Therefore, we propose an iterative optimization algorithm of the projected gradient type in which Pontryagin's Maximum Principle is used to select the optimal control policy.
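
    The following sketch illustrates only the general idea of schedule optimization under a dynamic model, using a simple exponential-growth/log-kill tumor model, a quadratic toxicity penalty, and a generic bounded optimizer instead of the authors' projected gradient/Pontryagin scheme; all parameters are invented for illustration.

```python
# Sketch: choose a daily dose schedule trading off final tumour burden against toxicity.
import numpy as np
from scipy.optimize import minimize

days, growth, kill, tox_weight = 21, 0.08, 0.35, 0.05
n0 = 1e9                                      # initial tumour cell count (placeholder)

def final_log_burden(dose):
    log_n = np.log(n0)
    for d in dose:                            # one step per day
        log_n += growth - kill * d            # log-kill pharmacodynamics
    return log_n

def objective(dose):
    return final_log_burden(dose) + tox_weight * np.sum(dose ** 2)

x0 = np.full(days, 0.5)
res = minimize(objective, x0, bounds=[(0.0, 1.0)] * days, method="L-BFGS-B")
print("optimal daily doses:", np.round(res.x, 2))
print("final log10 tumour burden:", final_log_burden(res.x) / np.log(10))
```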

  1. Aeroelastic Optimization Study Based on X-56A Model

    NASA Technical Reports Server (NTRS)

    Li, Wesley; Pak, Chan-Gi

    2014-01-01

    A design process which incorporates the object-oriented multidisciplinary design, analysis, and optimization (MDAO) tool and the aeroelastic effects of high fidelity finite element models to characterize the design space was successfully developed and established. Two multidisciplinary design optimization studies using an object-oriented MDAO tool developed at NASA Armstrong Flight Research Center were presented. The first study demonstrates the use of aeroelastic tailoring concepts to minimize the structural weight while meeting the design requirements including strength, buckling, and flutter. A hybrid and discretization optimization approach was implemented to improve accuracy and computational efficiency of a global optimization algorithm. The second study presents a flutter mass balancing optimization study. The results provide guidance to modify the fabricated flexible wing design and move the design flutter speeds back into the flight envelope so that the original objective of X-56A flight test can be accomplished.

  2. Task-based optimization of flip angle for texture analysis in MRI

    NASA Astrophysics Data System (ADS)

    Brand, Jonathan F.; Furenlid, Lars R.; Altbach, Maria I.; Galons, Jean-Phillippe; Bhattacharyya, Achyut; Sharma, Puneet; Bhattacharyya, Tulshi; Bilgin, Ali; Martin, Diego R.

    2016-03-01

    Chronic liver disease is a worldwide health problem, and hepatic fibrosis (HF) is one of the hallmarks of the disease. The current reference standard for diagnosing HF is biopsy followed by pathologist examination; however, this is limited by sampling error and carries a risk of complications. Pathology diagnosis of HF is based on textural change in the liver, namely a lobular collagen network that develops within portal triads. The scale of collagen lobules is characteristically on the order of 1-5 mm, which approximates the resolution limit of in vivo gadolinium-enhanced magnetic resonance imaging in the delayed phase. We have shown that MRI of formalin-fixed human ex vivo liver samples mimics the textural contrast of in vivo Gd-MRI, so the samples can be used as MRI phantoms. We have developed local texture analysis that is applied to the phantom images, and the results are used to train model observers. The performance of the observer is assessed with the area under the receiver-operating-characteristic curve (AUROC) as the figure of merit. To optimize the MRI pulse sequence, phantoms are scanned multiple times at a range of flip angles. The flip angle associated with the highest AUROC is chosen as optimal for the task of detecting HF.
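
    A small sketch of the task-based selection step, assuming that for each candidate flip angle we already have model-observer scores and ground-truth labels and simply keep the angle with the highest AUROC; the scores below are synthetic stand-ins.

```python
# Sketch: score each candidate flip angle by AUROC and keep the best one.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
flip_angles = [10, 20, 30, 40, 50]            # degrees (hypothetical candidates)
truth = rng.integers(0, 2, 200)               # 1 = fibrotic phantom region, 0 = normal

aucs = {}
for fa in flip_angles:
    separation = 0.5 + 0.02 * fa - 0.0004 * fa**2   # toy contrast-vs-angle behaviour
    scores = truth * separation + rng.normal(0, 1, truth.size)
    aucs[fa] = roc_auc_score(truth, scores)

best = max(aucs, key=aucs.get)
print({fa: round(a, 3) for fa, a in aucs.items()}, "-> optimal flip angle:", best)
```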

  3. Biological-based optimization and volumetric modulated arc therapy delivery for stereotactic body radiation therapy

    SciTech Connect

    Diot, Quentin; Kavanagh, Brian; Timmerman, Robert; Miften, Moyed

    2012-01-15

    Purpose: To describe biological-based optimization and Monte Carlo (MC) dose calculation-based treatment planning for volumetric modulated arc therapy (VMAT) delivery of stereotactic body radiation therapy (SBRT) in lung, liver, and prostate patients. Methods: Optimization strategies and VMAT planning parameters using a biological-based optimization MC planning system were analyzed for 24 SBRT patients. Patients received a median dose of 45 Gy [range, 34-54 Gy] for lung tumors in 1-5 fxs and a median dose of 52 Gy [range, 48-60 Gy] for liver tumors in 3-6 fxs. Prostate patients received a fractional dose of 10 Gy in 5 fxs. Biological-cost functions were used for plan optimization, and its dosimetric quality was evaluated using the conformity index (CI), the conformation number (CN), the ratio of the volume receiving 50% of the prescription dose over the planning target volume (Rx/PTV50). The quality and efficiency of the delivery were assessed according to measured quality assurance (QA) passing rates and delivery times. For each disease site, one patient was replanned using physical cost function and compared to the corresponding biological plan. Results: Median CI, CN, and Rx/PTV50 for all 24 patients were 1.13 (1.02-1.28), 0.79 (0.70-0.88), and 5.3 (3.1-10.8), respectively. The median delivery rate for all patients was 410 MU/min with a maximum possible rate of 480 MU/min (85%). Median QA passing rate was 96.7%, and it did not significantly vary with the tumor site. Conclusions: VMAT delivery of SBRT plans optimized using biological-motivated cost-functions result in highly conformal dose distributions. Plans offer shorter treatment-time benefits and provide efficient dose delivery without compromising the plan conformity for tumors in the prostate, lung, and liver, thereby improving patient comfort and clinical throughput. The short delivery times minimize the risk of patient setup and intrafraction motion errors often associated with long SBRT treatment
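
    For reference, the plan-quality indices quoted above can be computed from a dose grid and a PTV mask roughly as follows; the arrays here are synthetic and the snippet only shows the voxel bookkeeping, not any treatment-planning system output.

```python
# Sketch: conformity index, conformation number and V50%/PTV from a dose grid and PTV mask.
import numpy as np

rng = np.random.default_rng(3)
dose = rng.uniform(0, 60, (40, 40, 40))       # Gy, placeholder dose grid
ptv = np.zeros_like(dose, dtype=bool)
ptv[15:25, 15:25, 15:25] = True
rx = 45.0                                     # prescription dose (Gy)

v_rx = dose >= rx                             # prescription isodose volume
v50 = dose >= 0.5 * rx                        # 50% isodose volume
ptv_covered = ptv & v_rx

CI = v_rx.sum() / ptv.sum()                               # conformity index
CN = ptv_covered.sum()**2 / (ptv.sum() * v_rx.sum())      # conformation number
R50_over_PTV = v50.sum() / ptv.sum()                      # Rx/PTV50-style ratio
print(f"CI={CI:.2f}  CN={CN:.2f}  V50%/PTV={R50_over_PTV:.1f}")
```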

  4. Risk-based testing of imported animals: A case study for bovine tuberculosis in The Netherlands.

    PubMed

    de Vos, Clazien J; van der Goot, Jeanet A; van Zijderveld, Fred G; Swanenburg, Manon; Elbers, Armin R W

    2015-09-01

    In intra-EU trade, the health status of animals is warranted by issuing a health certificate after clinical inspection in the exporting country. This certificate cannot provide guarantee of absence of infection, especially not for diseases with a long incubation period and no overt clinical signs such as bovine tuberculosis (bTB). The Netherlands are officially free from bTB since 1999. However, frequent reintroductions occurred in the past 15 years through importation of infected cattle. Additional testing (AT) of imported cattle could enhance the probability of detecting an imported bTB infection in an early stage. The goal of this study was to evaluate the effectiveness of risk-based AT for bTB in cattle imported into The Netherlands. A generic stochastic import risk model was developed that simulates introduction of infection into an importing country through importation of live animals. Main output parameters are the number of infected animals that is imported (Ninf), the number of infected animals that is detected by testing (Ndet), and the economic losses incurred by importing infected animals (loss). The model was parameterized for bTB. Model calculations were optimized to either maximize Ndet or to minimize loss. Model results indicate that the risk of bTB introduction into The Netherlands is very high. For the current situation in which Dutch health checks on imported cattle are limited to a clinical inspection of a random sample of 5-10% of imported animals, the calculated annual Ninf=99 (median value). Random AT of 8% of all imported cattle results in Ndet=7 (median value), while the median Ndet=75 if the sampling strategy for AT is optimized to maximize Ndet. However, in the latter scenario, loss is more than twice as large as in the current situation, because only calves are tested for which cost of detection is higher than the expected gain of preventing a possible outbreak. When optimizing the sampling strategy for AT to minimize loss, only breeding
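
    A hedged Monte Carlo sketch of the general import-risk idea (not the authors' model): simulate imported animals with a small infection prevalence and compare random additional testing with risk-based testing that targets a higher-prevalence stratum first; all rates are invented.

```python
# Sketch: random vs risk-based additional testing of imported animals, same test budget.
import numpy as np

rng = np.random.default_rng(4)
n_import, prev_low, prev_high, share_high = 80_000, 0.0002, 0.002, 0.1
sensitivity, test_fraction = 0.8, 0.08

from_high = rng.random(n_import) < share_high
infected = rng.random(n_import) < np.where(from_high, prev_high, prev_low)

# random additional testing of a fixed fraction of imports
tested_rand = rng.random(n_import) < test_fraction
det_rand = (infected & tested_rand & (rng.random(n_import) < sensitivity)).sum()

# risk-based testing: spend the same number of tests on the high-risk stratum first
budget = int(test_fraction * n_import)
order = np.argsort(~from_high)                 # high-risk animals first
tested_risk = np.zeros(n_import, bool)
tested_risk[order[:budget]] = True
det_risk = (infected & tested_risk & (rng.random(n_import) < sensitivity)).sum()

print(f"imported infected: {infected.sum()}, detected random: {det_rand}, risk-based: {det_risk}")
```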

  5. Partially observable Markov decision processes for risk-based screening

    NASA Astrophysics Data System (ADS)

    Mrozack, Alex; Liao, Xuejun; Skatter, Sondre; Carin, Lawrence

    2016-05-01

    A long-term goal for checked baggage screening in airports has been to include passenger information, or at least a predetermined passenger risk level, in the screening process. One method for including that information could be treating the checked baggage screening process as a system-of-systems. This would allow for an optimized policy builder, such as one trained using the methodology of partially observable Markov decision processes (POMDP), to navigate the different sensors available for screening. In this paper we describe the necessary steps to tailor a POMDP for baggage screening, as well as results of simulations for specific screening scenarios.
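
    A tiny sketch of the POMDP ingredient that such a screening policy relies on, namely the Bayesian belief update over the hidden threat state after each sensor observation; the transition and observation matrices are illustrative assumptions.

```python
# Sketch: POMDP belief update over a hidden "threat / benign" state for one bag.
import numpy as np

T = np.array([[1.0, 0.0],                    # hidden state does not change during screening
              [0.0, 1.0]])
# O[s, o]: probability of sensor output o (0 = alarm, 1 = clear) given state s
O = np.array([[0.85, 0.15],                  # threat: alarms 85% of the time
              [0.10, 0.90]])                 # benign: false-alarm rate 10%

def belief_update(belief, obs):
    predicted = T.T @ belief                 # propagate through (identity) dynamics
    posterior = O[:, obs] * predicted        # weight by observation likelihood
    return posterior / posterior.sum()

belief = np.array([0.01, 0.99])              # prior from the passenger risk level
for obs in [0, 0]:                           # two consecutive alarms
    belief = belief_update(belief, obs)
    print("P(threat) =", round(belief[0], 4))
```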

  6. Optimization-based decision support to assist in logistics planning for hospital evacuations.

    PubMed

    Glick, Roger; Bish, Douglas R; Agca, Esra

    2013-01-01

    The evacuation of a hospital is a very complex process, and evacuation planning is an important part of a hospital's emergency management plan. Numerous factors affect the evacuation plan, including the nature of the threat, the availability of resources and staff, the characteristics of the evacuee population, and risk to patients and staff. The safety and health of patients are of fundamental importance, but safely moving patients to alternative care facilities while under threat is a very challenging task. This article describes the logistical issues and complexities involved in the planning and execution of hospital evacuations. Furthermore, it provides examples of how optimization-based decision support tools can help evacuation planners better plan for complex evacuations by providing real-world solutions to various evacuation scenarios.
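
    As a hedged illustration of the kind of decision support described, a small transportation-style linear program can assign patients from hospital units to receiving facilities so as to minimize total transport time subject to capacity; the times and capacities below are invented.

```python
# Sketch: assign patients from units to receiving facilities with a linear program.
import numpy as np
from scipy.optimize import linprog

# minutes to move one patient from each of 3 units to each of 2 receiving facilities
time = np.array([[25, 40],
                 [35, 30],
                 [50, 20]])
patients = np.array([30, 20, 15])      # patients per unit
capacity = np.array([40, 35])          # beds available at each facility

c = time.ravel()                       # decision variables x[i, j], row-major order
A_eq = np.zeros((3, 6))
A_ub = np.zeros((2, 6))
for i in range(3):
    A_eq[i, i*2:(i+1)*2] = 1           # every patient in unit i is assigned somewhere
for j in range(2):
    A_ub[j, j::2] = 1                  # facility j cannot exceed its capacity

res = linprog(c, A_ub=A_ub, b_ub=capacity, A_eq=A_eq, b_eq=patients, bounds=(0, None))
print("total transport minutes:", res.fun)
print("assignment matrix:\n", res.x.reshape(3, 2).round(1))
```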

  7. Study on optimization of land use structure based on RS and ecological green equivalent

    NASA Astrophysics Data System (ADS)

    Niu, Jiqiang; Liu, Yaolin; Xu, Feng; Wei, Lijun

    2008-12-01

    The optimization of land use structure is usually treated as a purely quantitative optimization; however, it is also an optimization of spatial allocation across different scales. This paper obtains the spatial elements of land use using remote sensing (RS) technology. An optimization model and a convolution-based optimization algorithm are proposed based on remote sensing and the ecological green equivalent. The model and algorithm can be used to optimize land use structure data at multiple scales for any region, without relying on administrative boundaries; they are evaluated with Landsat-7 image data of Huangpi from 2005. The results indicate that the method can be applied to optimize land use structure for actual land use planning, realizing multi-scale land use structure optimization for each region through dynamic control based on RS and the ecological green equivalent, and improving the reasonableness and accuracy of land use planning.

  8. PERSPECTIVE: Technical fixes and climate change: optimizing for risks and consequences

    NASA Astrophysics Data System (ADS)

    Rasch, Philip J.

    2010-09-01

    Scientists and society in general are becoming increasingly concerned about the risks of climate change from the emission of greenhouse gases (IPCC 2007). Yet emissions continue to increase (Raupach et al 2007), and achieving reductions soon enough to avoid large and undesirable impacts requires a near-revolutionary global transformation of energy and transportation systems (Hoffert et al 1998). The size of the transformation and lack of an effective societal response have motivated some to explore other quite controversial strategies to mitigate some of the planetary consequences of these emissions. These strategies have come to be known as geoengineering: 'the deliberate manipulation of the planetary environment to counteract anthropogenic climate change' (Keith 2000). Concern about society's inability to reduce emissions has driven a resurgence in interest in geoengineering, particularly following the call for more research in Crutzen (2006). Two classes of geoengineering solutions have developed: (1) methods to draw CO2 out of the atmosphere and sequester it in a relatively benign form; and (2) methods that change the energy flux entering or leaving the planet without modifying CO2 concentrations by, for example, changing the planetary albedo. Only the latter methods are considered here. Summaries of many of the methods, scientific questions, and issues of testing and implementation are discussed in Launder and Thompson (2009) and Royal Society (2009). The increased attention indicates that geoengineering is not a panacea and all strategies considered will have risks and consequences (e.g. Robock 2008, Trenberth and Dai 2007). Recent studies involving comprehensive Earth system models can provide insight into subtle interactions between components of the climate system. For example Rasch et al (2009) found that geoengineering by changing boundary clouds will not simultaneously 'correct' global averaged surface temperature, precipitation, and sea ice to present

  9. New knowledge-based genetic algorithm for excavator boom structural optimization

    NASA Astrophysics Data System (ADS)

    Hua, Haiyan; Lin, Shuwen

    2014-03-01

    Because they make insufficient use of knowledge to guide the complex optimal search, existing genetic algorithms fail to effectively solve the excavator boom structural optimization problem. To improve optimization efficiency and quality, a new knowledge-based real-coded genetic algorithm is proposed. A dual evolution mechanism combining knowledge evolution with a genetic algorithm is established to extract, handle and utilize shallow and deep implicit constraint knowledge to cyclically guide the optimal search of the genetic algorithm. Based on this dual evolution mechanism, knowledge evolution and population evolution can be connected by knowledge influence operators to improve the configurability of knowledge and genetic operators. New knowledge-based selection, crossover and mutation operators are then proposed to integrate optimal process knowledge and domain culture into the excavator boom structural optimization. Eight test algorithms, which include different genetic operators, are used as examples to solve the structural optimization of a medium-sized excavator boom. A comparison of the optimization results shows that the algorithm including all of the new knowledge-based genetic operators improves the evolutionary rate and search ability more markedly than the other test algorithms, which demonstrates the effectiveness of knowledge for guiding the optimal search. The proposed knowledge-based genetic algorithm, which combines multi-level knowledge evolution with numerical optimization, provides a new and effective method for solving this complex engineering optimization problem.

  10. Relationship of optimism and suicidal ideation in three groups of patients at varying levels of suicide risk.

    PubMed

    Huffman, Jeff C; Boehm, Julia K; Beach, Scott R; Beale, Eleanor E; DuBois, Christina M; Healy, Brian C

    2016-06-01

    Optimism has been associated with reduced suicidal ideation, but there have been few studies in patients at high suicide risk. We analyzed data from three study populations (total N = 319) with elevated risk of suicide: (1) patients with a recent acute cardiovascular event, (2) patients hospitalized for heart disease who had depression or an anxiety disorder, and (3) patients psychiatrically hospitalized for suicidal ideation or following a suicide attempt. For each study we analyzed the association between optimism (measured by the Life-Orientation Test-Revised) and suicidal ideation, and then completed an exploratory random effects meta-analysis of the findings to synthesize this data. The meta-analysis of the three studies showed that higher levels of self-reported optimism were associated with a lower likelihood of suicidal ideation (odds ratio [OR] = .89, 95% confidence interval [CI] = .85-.95, z = 3.94, p < .001), independent of age, gender, and depressive symptoms. This association held when using the subscales of the Life Orientation Test-Revised scale that measured higher optimism (OR = .84, 95% CI = .76-.92, z = 3.57, p < .001) and lower pessimism (OR = .83, 95% CI = .75-.92], z = 3.61, p < .001). These results also held when suicidal ideation was analyzed as an ordinal variable. Our findings suggest that optimism may be associated with a lower risk of suicidal ideation, above and beyond the effects of depressive symptoms, for a wide range of patients with clinical conditions that place them at elevated risk for suicide.
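
    For readers unfamiliar with the pooling step, the following is a sketch of a DerSimonian-Laird random-effects synthesis of log odds ratios, the general kind of meta-analysis described above; the three (OR, CI) inputs are placeholders, not the study estimates.

```python
# Sketch: DerSimonian-Laird random-effects pooling of log odds ratios.
import numpy as np
from scipy import stats

ors = np.array([0.90, 0.88, 0.91])
ci_low = np.array([0.83, 0.80, 0.85])
ci_high = np.array([0.97, 0.96, 0.97])

y = np.log(ors)                                   # per-study log odds ratios
se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)
w = 1 / se**2                                     # fixed-effect weights
q = np.sum(w * (y - np.sum(w * y) / w.sum())**2)  # Cochran's Q
tau2 = max(0.0, (q - (len(y) - 1)) / (w.sum() - np.sum(w**2) / w.sum()))
w_re = 1 / (se**2 + tau2)                         # random-effects weights
pooled = np.sum(w_re * y) / w_re.sum()
pooled_se = np.sqrt(1 / w_re.sum())
z = pooled / pooled_se
print(f"pooled OR = {np.exp(pooled):.2f} "
      f"(95% CI {np.exp(pooled - 1.96*pooled_se):.2f}-{np.exp(pooled + 1.96*pooled_se):.2f}), "
      f"p = {2 * stats.norm.sf(abs(z)):.4f}")
```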

  11. A simple data base for identification of risk profiles

    SciTech Connect

    Munganahalli, D.

    1996-12-31

    Sedco Forex is a drilling contractor that operates approximately 80 rigs on land and offshore worldwide. The HSE management system developed by Sedco Forex is an effort to prevent accidents and minimize losses. An integral part of the HSE management system is establishing risk profiles and thereby minimizing risk and reducing loss exposures. Risk profiles are established based on accident reports, potential accident reports and other risk identification reports (RIR) like the Du Pont STOP system. A rig could fill in as many as 30 accident reports, 30 potential accident reports and 500 STOP cards each year. Statistics are important for an HSE management system, since they are indicators of success or failure of HSE systems. It is however difficult to establish risk profiles based on statistical information, unless tools are available at the rig site to aid with the analysis. Risk profiles are then used to identify important areas in the operation that may require specific attention to minimize the loss exposure. Programs to address the loss exposure can then be identified and implemented with either a local or corporate approach. In January 1995, Sedco Forex implemented a uniform HSE Database on all the rigs worldwide. In one year companywide, the HSE database would contain information on approximately 500 accident and potential accident reports, and 10,000 STOP cards. This paper demonstrates the salient features of the database and describes how it has helped in establishing key risk profiles. It also shows a recent example of how risk profiles have been established at the corporate level and used to identify the key contributing factors to hands and finger injuries. Based on this information, a campaign was launched to minimize the frequency of occurrence and associated loss attributed to hands and fingers accidents.

  12. A novel force field parameter optimization method based on LSSVR for ECEPP.

    PubMed

    Liu, Yunling; Tao, Lan; Lu, Jianjun; Xu, Shuo; Ma, Qin; Duan, Qingling

    2011-03-23

    In this paper, we propose a novel force field parameter optimization method based on LSSVR and use it to optimize the torsion energy parameters of the ECEPP force field. In this method, the force field parameter optimization problem is turned into a support vector regression problem. Protein samples for training the regression model are chosen from the Protein Data Bank. The experiments show that the optimized force field parameters make both α-helix and β-hairpin structures more consistent with the experimental implications than the original parameters do.

  13. Optimal design and selection of magneto-rheological brake types based on braking torque and mass

    NASA Astrophysics Data System (ADS)

    Nguyen, Q. H.; Lang, V. T.; Choi, S. B.

    2015-06-01

    In developing magnetorheological brakes (MRBs), it is well known that the braking torque and the mass of the MRBs are important factors that should be considered in the product’s design. This research focuses on the optimal design of different types of MRBs, from which we identify an optimal selection of MRB types, considering braking torque and mass. In the optimization, common types of MRBs such as disc-type, drum-type, hybrid-type, and T-shape types are considered. The optimization problem is to find an optimal MRB structure that can produce the required braking torque while minimizing its mass. After a brief description of the configuration of the MRBs, the MRBs’ braking torque is derived based on the Herschel-Bulkley rheological model of the magnetorheological fluid. Then, the optimal designs of the MRBs are analyzed. The optimization objective is to minimize the mass of the brake while the braking torque is constrained to be greater than a required value. In addition, the power consumption of the MRBs is also considered as a reference parameter in the optimization. A finite element analysis integrated with an optimization tool is used to obtain optimal solutions for the MRBs. Optimal solutions of MRBs with different required braking torque values are obtained based on the proposed optimization procedure. From the results, we discuss the optimal selection of MRB types, considering braking torque and mass.

  14. The spatial optimism model research for the regional land use based on the ecological constraint

    NASA Astrophysics Data System (ADS)

    XU, K.; Lu, J.; Chi, Y.

    2013-12-01

    The study focuses on the Yunnan-Guizhou (i.e., Yunnan province and Guizhou province) Plateau in China. Since the Yunnan-Guizhou region consists of closed basins, the land resources suitable for development are in short supply, and the ecological problems in the area are quite complicated. In such circumstances, a spatial optimization model is needed to determine the applicable basin area and distribution. In this research, Digital Elevation Model (DEM) and land use data are used to derive boundary rules for the basin distribution. Furthermore, natural risks, ecological risks and human-made ecological risks are analyzed in an integrated way. Finally, the spatial overlay analysis method is used to model the developable basin area and distribution for industry and urbanization. The study process can be divided into six steps. First, basins and their distribution are identified: the DEM data are used to extract geomorphological characteristics, and patch regions with a gradient under eight degrees are selected. Among these regions, the total area of patches larger than 8 km2 is 54,000 km2, 10% of the total area; these regions are selected as candidates for industry and urbanization, and the remaining five steps of the analysis are aimed at them. Second, the natural risks are analyzed: earthquake, debris flow, rainstorm and flood conditions are combined to classify the natural risks. Third, the ecological risks are analyzed, covering ecological sensitivity and the importance of ecosystem service functions. According to the regional ecological features, the sensitivity, comprising soil erosion, acid rain, stony desertification and survival condition factors, is derived and classified according to the median value to obtain the ecological sensitivity partition. The importance of ecosystem service functions is classified and divided considering biological variation protection and water conservation factors. The fourth

  15. Risk-based audit selection of dairy farms.

    PubMed

    van Asseldonk, M A P M; Velthuis, A G J

    2014-02-01

    Dairy farms are audited in the Netherlands on numerous process standards. Each farm is audited once every 2 years. Increasing demands for cost-effectiveness in farm audits can be met by introducing risk-based principles. This implies targeting subpopulations with a higher risk of poor process standards. To select farms for an audit that present higher risks, a statistical analysis was conducted to test the relationship between the outcome of farm audits and bulk milk laboratory results before the audit. The analysis comprised 28,358 farm audits and all conducted laboratory tests of bulk milk samples 12 mo before the audit. The overall outcome of each farm audit was classified as approved or rejected. Laboratory results included somatic cell count (SCC), total bacterial count (TBC), antimicrobial drug residues (ADR), level of butyric acid spores (BAB), freezing point depression (FPD), level of free fatty acids (FFA), and cleanliness of the milk (CLN). The bulk milk laboratory results were significantly related to audit outcomes. Rejected audits are likely to occur on dairy farms with higher mean levels of SCC, TBC, ADR, and BAB. Moreover, in a multivariable model, maxima for TBC, SCC, and FPD as well as standard deviations for TBC and FPD are risk factors for negative audit outcomes. The efficiency curve of a risk-based selection approach, on the basis of the derived regression results, dominated the current random selection approach. To capture 25, 50, or 75% of the population with poor process standards (i.e., audit outcome of rejected), respectively, only 8, 20, or 47% of the population had to be sampled based on a risk-based selection approach. Milk quality information can thus be used to preselect high-risk farms to be audited more frequently. PMID:24290823
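
    A hedged sketch of the selection idea: score each farm's probability of a rejected audit from bulk-milk summaries, audit the highest-risk farms first, and read off the efficiency curve; the data, predictors, and coefficients are synthetic placeholders.

```python
# Sketch: risk-based audit selection from bulk-milk summaries and its efficiency curve.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 5000
scc_mean = rng.lognormal(5.0, 0.4, n)              # somatic cell count summary (toy)
tbc_max = rng.lognormal(3.0, 0.6, n)               # total bacterial count maximum (toy)
logit = -4 + 0.004 * scc_mean + 0.05 * tbc_max
rejected = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([scc_mean, tbc_max])
model = LogisticRegression(max_iter=1000).fit(X, rejected)
risk = model.predict_proba(X)[:, 1]

order = np.argsort(-risk)                          # audit highest-risk farms first
captured = np.cumsum(rejected[order]) / rejected.sum()
for target in (0.25, 0.50, 0.75):
    sampled = np.searchsorted(captured, target) + 1
    print(f"capture {int(target*100)}% of rejected farms by auditing {sampled / n:.0%} of farms")
```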

  17. Optimization of natural lipstick formulation based on pitaya (Hylocereus polyrhizus) seed oil using D-optimal mixture experimental design.

    PubMed

    Kamairudin, Norsuhaili; Gani, Siti Salwa Abd; Masoumi, Hamid Reza Fard; Hashim, Puziah

    2014-10-16

    The D-optimal mixture experimental design was employed to optimize the melting point of natural lipstick based on pitaya (Hylocereus polyrhizus) seed oil. The influence of the main lipstick components-pitaya seed oil (10%-25% w/w), virgin coconut oil (25%-45% w/w), beeswax (5%-25% w/w), candelilla wax (1%-5% w/w) and carnauba wax (1%-5% w/w)-were investigated with respect to the melting point properties of the lipstick formulation. The D-optimal mixture experimental design was applied to optimize the properties of lipstick by focusing on the melting point with respect to the above influencing components. The D-optimal mixture design analysis showed that the variation in the response (melting point) could be depicted as a quadratic function of the main components of the lipstick. The best combination of each significant factor determined by the D-optimal mixture design was established to be pitaya seed oil (25% w/w), virgin coconut oil (37% w/w), beeswax (17% w/w), candelilla wax (2% w/w) and carnauba wax (2% w/w). With respect to these factors, the 46.0 °C melting point property was observed experimentally, similar to the theoretical prediction of 46.5 °C. Carnauba wax is the most influential factor on this response (melting point) with its function being with respect to heat endurance. The quadratic polynomial model sufficiently fit the experimental data.

  18. Optimal planning of LEO active debris removal based on hybrid optimal control theory

    NASA Astrophysics Data System (ADS)

    Yu, Jing; Chen, Xiao-qian; Chen, Li-hu

    2015-06-01

    The mission planning of Low Earth Orbit (LEO) active debris removal problem is studied in this paper. Specifically, the Servicing Spacecraft (SSc) and several debris exist on near-circular near-coplanar LEOs. The SSc should repeatedly rendezvous with the debris, and de-orbit them until all debris are removed. Considering the long-duration effect of J2 perturbation, a linear dynamics model is used for each rendezvous. The purpose of this paper is to find the optimal service sequence and rendezvous path with minimum total rendezvous cost (Δv) for the whole mission, and some complex constraints (communication time window constraint, terminal state constraint, and time distribution constraint) should be satisfied meanwhile. Considering this mission as a hybrid optimal control problem, a mathematical model is proposed, as well as the solution method. The proposed approach is demonstrated by a typical active debris removal problem. Numerical experiments show that (1) the model and solution method proposed in this paper can effectively address the planning problem of LEO debris removal; (2) the communication time window constraint and the J2 perturbation have considerable influences on the optimization results; and (3) under the same configuration, some suboptimal sequences are equivalent to the optimal one since their difference in Δv cost is very small.
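
    A toy sketch of the sequencing subproblem only: with a handful of debris objects and a hypothetical pairwise rendezvous Δv table, the optimal service order can be found by exhaustive search; the paper's hybrid optimal control model additionally handles J2 dynamics, time windows, and timing, which are omitted here.

```python
# Sketch: brute-force the debris removal sequence with minimum total delta-v.
import itertools
import numpy as np

rng = np.random.default_rng(6)
n_debris = 6
# dv[i, j]: delta-v (m/s) to go from object i to object j; index 0 is the SSc's initial orbit
dv = rng.uniform(20, 120, (n_debris + 1, n_debris + 1))
dv = (dv + dv.T) / 2                      # make the toy cost roughly symmetric
np.fill_diagonal(dv, 0.0)

best_cost, best_seq = np.inf, None
for seq in itertools.permutations(range(1, n_debris + 1)):
    cost, current = 0.0, 0
    for nxt in seq:
        cost += dv[current, nxt]
        current = nxt
    if cost < best_cost:
        best_cost, best_seq = cost, seq

print("optimal removal sequence:", best_seq, "total delta-v ~", round(best_cost, 1), "m/s")
```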

  19. Risk-based principles for defining and managing water security

    PubMed Central

    Hall, Jim; Borgomeo, Edoardo

    2013-01-01

    The concept of water security implies concern about potentially harmful states of coupled human and natural water systems. Those harmful states may be associated with water scarcity (for humans and/or the environment), floods or harmful water quality. The theories and practices of risk analysis and risk management have been developed and elaborated to deal with the uncertain occurrence of harmful events. Yet despite their widespread application in public policy, theories and practices of risk management have well-known limitations, particularly in the context of severe uncertainties and contested values. Here, we seek to explore the boundaries of applicability of risk-based principles as a means of formalizing discussion of water security. Not only do risk concepts have normative appeal, but they also provide an explicit means of addressing the variability that is intrinsic to hydrological, ecological and socio-economic systems. We illustrate the nature of these interconnections with a simulation study, which demonstrates how water resources planning could take more explicit account of epistemic uncertainties, tolerability of risk and the trade-offs in risk among different actors. PMID:24080616

  20. Risk-based principles for defining and managing water security.

    PubMed

    Hall, Jim; Borgomeo, Edoardo

    2013-11-13

    The concept of water security implies concern about potentially harmful states of coupled human and natural water systems. Those harmful states may be associated with water scarcity (for humans and/or the environment), floods or harmful water quality. The theories and practices of risk analysis and risk management have been developed and elaborated to deal with the uncertain occurrence of harmful events. Yet despite their widespread application in public policy, theories and practices of risk management have well-known limitations, particularly in the context of severe uncertainties and contested values. Here, we seek to explore the boundaries of applicability of risk-based principles as a means of formalizing discussion of water security. Not only do risk concepts have normative appeal, but they also provide an explicit means of addressing the variability that is intrinsic to hydrological, ecological and socio-economic systems. We illustrate the nature of these interconnections with a simulation study, which demonstrates how water resources planning could take more explicit account of epistemic uncertainties, tolerability of risk and the trade-offs in risk among different actors. PMID:24080616

  2. Optimal attack strategy of complex networks based on tabu search

    NASA Astrophysics Data System (ADS)

    Deng, Ye; Wu, Jun; Tan, Yue-jin

    2016-01-01

    The problem of network disintegration has broad applications, such as network confrontation and the disintegration of harmful networks, and has recently received growing attention. This paper presents an optimized attack strategy model for complex networks and introduces tabu search, a heuristic optimization algorithm rarely applied to the study of network robustness, into the network disintegration problem to identify the optimal attack strategy. The efficiency of the proposed solution was verified by comparing it with other attack strategies on various model networks and a real-world network. Numerical experiments suggest that our solution can improve the effect of network disintegration and that the "best" choice for node failure attacks can be identified through global searches. Our understanding of the optimal attack strategy may also shed light on a new property of the nodes within network disintegration and deserves additional study.
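
    A minimal sketch of a tabu search applied to this kind of attack-strategy problem: choose k nodes whose removal minimizes the largest remaining connected component; the graph, k, move rule, and tabu settings are illustrative and may differ from the paper's model.

```python
# Sketch: tabu search for a k-node attack set that minimizes the largest surviving component.
import random
import networkx as nx

random.seed(7)
G = nx.erdos_renyi_graph(100, 0.05, seed=7)
k = 8

def damage(nodes):
    H = G.copy()
    H.remove_nodes_from(nodes)
    return max((len(c) for c in nx.connected_components(H)), default=0)

current = set(random.sample(list(G.nodes), k))
best, best_score = set(current), damage(current)
tabu, tabu_len = [], 15

for _ in range(200):
    # neighbourhood move: swap one attacked node for one currently untouched node
    out_node = random.choice(sorted(current))
    in_node = random.choice(sorted(set(G.nodes) - current))
    move = (out_node, in_node)
    if move in tabu:
        continue
    candidate = (current - {out_node}) | {in_node}
    score = damage(candidate)
    if score <= damage(current):             # accept non-worsening moves
        current = candidate
        tabu.append(move)
        tabu = tabu[-tabu_len:]
        if score < best_score:
            best, best_score = set(candidate), score

print("best attack set:", sorted(best), "largest surviving component:", best_score)
```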

  3. Design Optimization of Coronary Stent Based on Finite Element Models

    PubMed Central

    Qiu, Tianshuang; Zhu, Bao; Wu, Jinying

    2013-01-01

    This paper presents an effective optimization method using the Kriging surrogate model combined with modified rectangular grid sampling to reduce the stent dogboning effect in the expansion process. An infill sampling criterion named expected improvement (EI) is used to balance local and global searches in the optimization iterations. Four commonly used finite element models of stent dilation were used to investigate the stent dogboning rate. Thrombosis models of three typical shapes are built to test the effectiveness of the optimization results. Numerical results show that the two finite element models dilated by pressure applied inside the balloon are usable: the one that includes the artery and plaque gives an optimal stent with better expansion behavior, while the model without the artery and plaque is more efficient and requires less computation. PMID:24222743
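
    A sketch of the surrogate-assisted loop described above, with a Gaussian-process (Kriging) model and the expected-improvement infill criterion; a cheap 1-D test function stands in for the expensive finite-element dogboning evaluation, so nothing here reproduces the paper's stent models.

```python
# Sketch: Kriging (GP) surrogate with expected-improvement infill sampling.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expensive_sim(x):                       # placeholder for the FE dogboning metric
    return np.sin(3 * x) + 0.5 * (x - 0.6) ** 2

rng = np.random.default_rng(8)
X = rng.uniform(0, 2, (5, 1))               # initial design points
y = expensive_sim(X).ravel()
grid = np.linspace(0, 2, 400).reshape(-1, 1)

for it in range(10):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
    mu, sd = gp.predict(grid, return_std=True)
    best = y.min()
    imp = best - mu
    z = np.divide(imp, sd, out=np.zeros_like(sd), where=sd > 0)
    ei = imp * norm.cdf(z) + sd * norm.pdf(z)        # expected improvement
    x_new = grid[np.argmax(ei)]                      # next point to evaluate
    X = np.vstack([X, x_new])
    y = np.append(y, expensive_sim(x_new)[0])

print("surrogate-optimal x ~", float(X[np.argmin(y)]), " f ~", float(y.min()))
```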

  4. Practical risk-based decision making: Good decisions made efficiently

    SciTech Connect

    Haire, M.J.; Guthrie, V.; Walker, D.; Singer, R.

    1995-12-01

    The Robotics and Process Systems Division of the Oak Ridge National Laboratory and the Westinghouse Savannah River Company have teamed with JBF Associates, Inc. to address risk-based robotic planning. The objective of the project is to provide systematic, risk-based relative comparisons of competing alternatives for solving clean-up problems at DOE facilities. This paper presents the methodology developed, describes the software developed to efficiently apply the methodology, and discusses the results of initial applications for DOE. The paper also addresses current work in applying the approach to problems in other industries (including an example from the hydrocarbon processing industry).

  5. Long-term Failure Prediction based on an ARP Model of Global Risk Network

    NASA Astrophysics Data System (ADS)

    Lin, Xin; Moussawi, Alaa; Szymanski, Boleslaw; Korniss, Gyorgy

    Risks that threaten modern societies form an intricately interconnected network. Hence, it is important to understand how risk materializations in distinct domains influence each other. In the paper, we study the global risks network defined by World Economic Forum experts in the form of a Stochastic Block Model. We model risks as Alternating Renewal Processes (ARPs) with variable intensities driven by hidden values of exogenous and endogenous failure probabilities. Based on the expert assessments and historical status of each risk, we use maximum likelihood estimation to find the optimal model parameters and demonstrate that the model considering network effects significantly outperforms the others. In the talk, we discuss how the model can be used to provide quantitative means for measuring interdependencies and the materialization of risks in the network. We also present recent results of long-term predictions in the form of predicted distributions of materializations over various time periods. Finally, we show how simulation of the ARPs enables us to probe the limits of the predictability of the system parameters from historical data and the ability to recover hidden variables. Supported in part by DTRA, ARL NS-CTA.
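
    A toy sketch of the estimation step only: for a single risk modelled as an alternating active/inactive process observed at regular intervals, the per-step activation and recovery probabilities have simple maximum-likelihood estimates from transition counts; the binary history is simulated, and the full model's network effects and variable intensities are not represented.

```python
# Sketch: MLE of activation/recovery probabilities for one alternating active/inactive risk.
import numpy as np

rng = np.random.default_rng(9)
p_act_true, p_rec_true, steps = 0.05, 0.20, 2000
state, history = 0, []
for _ in range(steps):                          # simulate a ground-truth binary risk history
    p = p_act_true if state == 0 else 1 - p_rec_true
    state = int(rng.random() < p)
    history.append(state)
history = np.array(history)

prev, nxt = history[:-1], history[1:]
p_act_hat = np.mean(nxt[prev == 0])             # MLE of P(activate | inactive)
p_rec_hat = 1 - np.mean(nxt[prev == 1])         # MLE of P(recover | active)
print(f"estimated activation prob {p_act_hat:.3f} (true {p_act_true}), "
      f"recovery prob {p_rec_hat:.3f} (true {p_rec_true})")
```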

  6. An Optimized Informatics Pipeline for Mass Spectrometry-Based Peptidomics

    SciTech Connect

    Wu, Chaochao; Monroe, Matthew E.; Xu, Zhe; Slysz, Gordon W.; Payne, Samuel H.; Rodland, Karin D.; Liu, Tao; Smith, Richard D.

    2015-12-26

    Comprehensive MS analysis of the peptidome, the intracellular and intercellular products of protein degradation, has the potential to provide novel insights on endogenous proteolytic processing and its utility in disease diagnosis and prognosis. Along with the advances in MS instrumentation, a plethora of proteomics data analysis tools have been applied for direct use in peptidomics; however, an evaluation of the currently available informatics pipelines for peptidomics data analysis has yet to be reported. In this study, we began by evaluating the results of several popular MS/MS database search engines, including MS-GF+, SEQUEST and MS-Align+, for peptidomics data analysis, followed by identification and label-free quantification using the well-established accurate mass and time (AMT) tag and newly developed informed quantification (IQ) approaches, both based on direct LC-MS analysis. Our results demonstrated that MS-GF+ outperformed both SEQUEST and MS-Align+ in identifying peptidome peptides. Using a database established from the MS-GF+ peptide identifications, both the AMT tag and IQ approaches provided significantly deeper peptidome coverage and fewer missing values for each individual data set than the MS/MS methods, while achieving robust label-free quantification. Besides having an excellent correlation with the AMT tag quantification results, IQ also provided slightly higher peptidome coverage than AMT. Taken together, we propose an optimal informatics pipeline combining MS-GF+ for initial database searching with IQ (or AMT) for identification and label-free quantification for high-throughput, comprehensive and quantitative peptidomics analysis.

  7. An Optimized Informatics Pipeline for Mass Spectrometry-Based Peptidomics

    NASA Astrophysics Data System (ADS)

    Wu, Chaochao; Monroe, Matthew E.; Xu, Zhe; Slysz, Gordon W.; Payne, Samuel H.; Rodland, Karin D.; Liu, Tao; Smith, Richard D.

    2015-12-01

    The comprehensive MS analysis of the peptidome, the intracellular and intercellular products of protein degradation, has the potential to provide novel insights on endogenous proteolytic processing and its utility in disease diagnosis and prognosis. Along with the advances in MS instrumentation and related platforms, a plethora of proteomics data analysis tools have been applied for direct use in peptidomics; however, an evaluation of the currently available informatics pipelines for peptidomics data analysis has yet to be reported. In this study, we began by evaluating the results of several popular MS/MS database search engines, including MS-GF+, SEQUEST, and MS-Align+, for peptidomics data analysis, followed by identification and label-free quantification using the well-established accurate mass and time (AMT) tag and newly developed informed quantification (IQ) approaches, both based on direct LC-MS analysis. Our results demonstrated that MS-GF+ outperformed both SEQUEST and MS-Align+ in identifying peptidome peptides. Using a database established from MS-GF+ peptide identifications, both the AMT tag and IQ approaches provided significantly deeper peptidome coverage and less missing data for each individual data set than the MS/MS methods, while achieving robust label-free quantification. Besides having an excellent correlation with the AMT tag quantification results, IQ also provided slightly higher peptidome coverage. Taken together, we propose an optimized informatics pipeline combining MS-GF+ for initial database searching with IQ (or AMT tag) approaches for identification and label-free quantification for high-throughput, comprehensive, and quantitative peptidomics analysis.

  8. Optimizing nanomedicine pharmacokinetics using physiologically based pharmacokinetics modelling

    PubMed Central

    Moss, Darren Michael; Siccardi, Marco

    2014-01-01

    The delivery of therapeutic agents is characterized by numerous challenges including poor absorption, low penetration in target tissues and non-specific dissemination in organs, leading to toxicity or poor drug exposure. Several nanomedicine strategies have emerged as an advanced approach to enhance drug delivery and improve the treatment of several diseases. Numerous processes mediate the pharmacokinetics of nanoformulations, with the absorption, distribution, metabolism and elimination (ADME) being poorly understood and often differing substantially from traditional formulations. Understanding how nanoformulation composition and physicochemical properties influence drug distribution in the human body is of central importance when developing future treatment strategies. A helpful pharmacological tool to simulate the distribution of nanoformulations is represented by physiologically based pharmacokinetics (PBPK) modelling, which integrates system data describing a population of interest with drug/nanoparticle in vitro data through a mathematical description of ADME. The application of PBPK models for nanomedicine is in its infancy and characterized by several challenges. The integration of property–distribution relationships in PBPK models may benefit nanomedicine research, giving opportunities for innovative development of nanotechnologies. PBPK modelling has the potential to improve our understanding of the mechanisms underpinning nanoformulation disposition and allow for more rapid and accurate determination of their kinetics. This review provides an overview of the current knowledge of nanomedicine distribution and the use of PBPK modelling in the characterization of nanoformulations with optimal pharmacokinetics. Linked Articles: This article is part of a themed section on Nanomedicine. To view the other articles in this section visit http://dx.doi.org/10.1111/bph.2014.171.issue-17 PMID:24467481

  9. OMIGA: Optimized Maker-Based Insect Genome Annotation.

    PubMed

    Liu, Jinding; Xiao, Huamei; Huang, Shuiqing; Li, Fei

    2014-08-01

    Insects are one of the largest classes of animals on Earth and constitute more than half of all living species. The i5k initiative has begun sequencing more than 5,000 insect genomes, which should greatly help in exploring insect resources and pest control. Insect genome annotation remains challenging because many insects have high levels of heterozygosity. To improve the quality of insect genome annotation, we developed a pipeline, named Optimized Maker-Based Insect Genome Annotation (OMIGA), to predict protein-coding genes from insect genomes. We first mapped RNA-Seq reads to genomic scaffolds to determine transcribed regions using Bowtie, and the putative transcripts were assembled using Cufflinks. We then selected highly reliable transcripts with intact coding sequences to train de novo gene prediction software, including Augustus. The re-trained software was used to predict genes from insect genomes. Exonerate was used to refine gene structure and to determine near-exact exon/intron boundaries in the genome. Finally, we used the software Maker to integrate data from RNA-Seq, de novo gene prediction, and protein alignment to produce an official gene set. The OMIGA pipeline was used to annotate the draft genome of an important insect pest, Chilo suppressalis, yielding 12,548 genes. Different strategies were compared, which demonstrated that OMIGA had the best performance. In summary, we present a comprehensive pipeline for identifying genes in insect genomes that can be widely used to improve the annotation quality in insects. OMIGA is provided at http://ento.njau.edu.cn/omiga.html. PMID:24609470

  10. Prototype Biology-Based Radiation Risk Module Project

    NASA Technical Reports Server (NTRS)

    Terrier, Douglas; Clayton, Ronald G.; Patel, Zarana; Hu, Shaowen; Huff, Janice

    2015-01-01

    Biological effects of space radiation and risk mitigation are strategic knowledge gaps for the Evolvable Mars Campaign. The current epidemiology-based NASA Space Cancer Risk (NSCR) model contains large uncertainties (HAT #6.5a) due to lack of information on the radiobiology of galactic cosmic rays (GCR) and lack of human data. The use of experimental models that most accurately replicate the response of human tissues is critical for precision in risk projections. Our proposed study will compare DNA damage, histological, and cell kinetic parameters after irradiation in normal 2D human cells versus 3D tissue models, and it will use a multi-scale computational model (CHASTE) to investigate various biological processes that may contribute to carcinogenesis, including radiation-induced cellular signaling pathways. This cross-disciplinary work, with biological validation of an evolvable mathematical computational model, will help reduce uncertainties within NSCR and aid risk mitigation for radiation-induced carcinogenesis.

  11. Aeroelastic Optimization Study Based on the X-56A Model

    NASA Technical Reports Server (NTRS)

    Li, Wesley W.; Pak, Chan-Gi

    2014-01-01

    One way to increase aircraft fuel efficiency is to reduce structural weight while maintaining adequate structural airworthiness, both statically and aeroelastically. A design process which incorporates an object-oriented multidisciplinary design, analysis, and optimization (MDAO) tool and the aeroelastic effects of high-fidelity finite element models to characterize the design space was successfully developed and established. This paper presents two multidisciplinary design optimization studies using an object-oriented MDAO tool developed at NASA Armstrong Flight Research Center. The first study demonstrates the use of aeroelastic tailoring concepts to minimize structural weight while meeting design requirements including strength, buckling, and flutter. Such an approach exploits the anisotropic capabilities of the fiber composite materials chosen for this analytical exercise through the ply stacking sequence. A hybrid and discretization optimization approach improves the accuracy and computational efficiency of a global optimization algorithm. The second study presents a flutter mass-balancing optimization study for the fabricated flexible wing of the X-56A model, since a desired flutter speed band is required for the active flutter suppression demonstration during flight testing. The results of the second study provide guidance to modify the wing design and move the design flutter speeds back into the flight envelope so that the original objective of the X-56A flight test can be accomplished successfully. The second case also demonstrates that the object-oriented MDAO tool can handle multiple analytical configurations in a single optimization run.

  12. Orbit design and optimization based on global telecommunication performance metrics

    NASA Technical Reports Server (NTRS)

    Lee, Seungwon; Lee, Charles H.; Kerridge, Stuart; Cheung, Kar-Ming; Edwards, Charles D.

    2006-01-01

    The orbit selection of telecommunications orbiters is one of the critical design processes and should be guided by global telecom performance metrics and mission-specific constraints. In order to aid the orbit selection, we have coupled the Telecom Orbit Analysis and Simulation Tool (TOAST) with genetic optimization algorithms. As a demonstration, we have applied the developed tool to select an optimal orbit for general Mars telecommunications orbiters with the constraint of being a frozen orbit. While a typical optimization goal is to minimize telecommunications downtime, several relevant performance metrics are examined: 1) area-weighted average gap time, 2) global maximum of local maximum gap time, and 3) global maximum of local minimum gap time. Optimal solutions are found with each of the metrics. Common and different features among the optimal solutions, as well as the advantages and disadvantages of each metric, are presented. The optimal solutions are compared with several candidate orbits that were considered during the development of the Mars Telecommunications Orbiter.
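
    As a rough illustration of coupling an orbit-performance metric to an evolutionary optimizer, the sketch below minimizes a placeholder coverage-gap function over two orbital parameters. The coverage_gap() evaluator is a hypothetical stand-in for TOAST output, the bounds are illustrative, and SciPy's differential evolution is used here in place of the genetic algorithms named in the record.

```python
# A minimal sketch of coupling an orbit-performance metric to an evolutionary optimizer.
# coverage_gap() is a hypothetical placeholder; a real study would evaluate TOAST here.
import numpy as np
from scipy.optimize import differential_evolution

def coverage_gap(x):
    """Hypothetical area-weighted average gap time (hours) for an orbit described by
    semi-major axis (km) and inclination (deg). Purely a toy surrogate."""
    sma, inc = x
    return (8000.0 / sma) ** 2 * 10.0 + 0.02 * abs(inc - 75.0)

bounds = [(3800.0, 10000.0),      # semi-major axis, km (illustrative)
          (0.0, 90.0)]            # inclination, deg (illustrative)
result = differential_evolution(coverage_gap, bounds, seed=1, maxiter=50)
print(result.x, result.fun)
```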

  13. Optimality criteria-based topology optimization of a bi-material model for acoustic-structural coupled systems

    NASA Astrophysics Data System (ADS)

    Shang, Linyuan; Zhao, Guozhong

    2016-06-01

    This article investigates topology optimization of a bi-material model for acoustic-structural coupled systems. The design variables are volume fractions of inclusion material in a bi-material model constructed by the microstructure-based design domain method (MDDM). The design objective is the minimization of sound pressure level (SPL) in an interior acoustic medium. Sensitivities of SPL with respect to topological design variables are derived concretely by the adjoint method. A relaxed form of optimality criteria (OC) is developed for solving the acoustic-structural coupled optimization problem to find the optimum bi-material distribution. Based on OC and the adjoint method, a topology optimization method to deal with large calculations in acoustic-structural coupled problems is proposed. Numerical examples are given to illustrate the applications of topology optimization for a bi-material plate under a low single-frequency excitation and an aerospace structure under a low frequency-band excitation, and to prove the efficiency of the adjoint method and the relaxed form of OC.

  14. Finding Risk Groups by Optimizing Artificial Neural Networks on the Area under the Survival Curve Using Genetic Algorithms

    PubMed Central

    Kalderstam, Jonas; Edén, Patrik; Ohlsson, Mattias

    2015-01-01

    We investigate a new method to place patients into risk groups in censored survival data. Properties such as median survival time, and end survival rate, are implicitly improved by optimizing the area under the survival curve. Artificial neural networks (ANN) are trained to either maximize or minimize this area using a genetic algorithm, and combined into an ensemble to predict one of low, intermediate, or high risk groups. Estimated patient risk can influence treatment choices, and is important for study stratification. A common approach is to sort the patients according to a prognostic index and then group them along the quartile limits. The Cox proportional hazards model (Cox) is one example of this approach. Another method of doing risk grouping is recursive partitioning (Rpart), which constructs a decision tree where each branch point maximizes the statistical separation between the groups. ANN, Cox, and Rpart are compared on five publicly available data sets with varying properties. Cross-validation, as well as separate test sets, are used to validate the models. Results on the test sets show comparable performance, except for the smallest data set where Rpart’s predicted risk groups turn out to be inverted, an example of crossing survival curves. Cross-validation shows that all three models exhibit crossing of some survival curves on this small data set but that the ANN model manages the best separation of groups in terms of median survival time before such crossings. The conclusion is that optimizing the area under the survival curve is a viable approach to identify risk groups. Training ANNs to optimize this area combines two key strengths from both prognostic indices and Rpart. First, a desired minimum group size can be specified, as for a prognostic index. Second, non-linear effects among the covariates can be utilized, as Rpart is also able to do. PMID:26352405
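
    The quantity being optimized in this record, the area under the survival curve, can be computed from a Kaplan-Meier estimate for a candidate risk group. The sketch below is a hand-rolled illustration with made-up follow-up data; the paper's ANN ensemble and genetic algorithm are not reproduced.

```python
# A minimal sketch (illustrative, not the paper's code) of the quantity the ANNs are
# trained to optimize: the area under a Kaplan-Meier survival curve for one group.
import numpy as np

def km_area(times, events, horizon):
    """Area under the Kaplan-Meier curve up to `horizon`.
    times: follow-up times; events: 1 = event observed, 0 = censored."""
    order = np.argsort(times)
    t, e = np.asarray(times)[order], np.asarray(events)[order]
    n_at_risk = len(t)
    surv, prev_t, area = 1.0, 0.0, 0.0
    for ti, ei in zip(t, e):
        if ti > horizon:
            break
        area += surv * (ti - prev_t)       # rectangle under the current step
        if ei == 1:
            surv *= 1.0 - 1.0 / n_at_risk  # KM step at an observed event
        n_at_risk -= 1
        prev_t = ti
    area += surv * (horizon - prev_t)      # tail up to the horizon
    return area

times = [2.0, 3.5, 5.0, 7.0, 9.0]          # illustrative follow-up times
events = [1, 0, 1, 1, 0]                   # 1 = event, 0 = censored
print(km_area(times, events, horizon=10.0))
```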

  15. Finding Risk Groups by Optimizing Artificial Neural Networks on the Area under the Survival Curve Using Genetic Algorithms.

    PubMed

    Kalderstam, Jonas; Edén, Patrik; Ohlsson, Mattias

    2015-01-01

    We investigate a new method to place patients into risk groups in censored survival data. Properties such as median survival time, and end survival rate, are implicitly improved by optimizing the area under the survival curve. Artificial neural networks (ANN) are trained to either maximize or minimize this area using a genetic algorithm, and combined into an ensemble to predict one of low, intermediate, or high risk groups. Estimated patient risk can influence treatment choices, and is important for study stratification. A common approach is to sort the patients according to a prognostic index and then group them along the quartile limits. The Cox proportional hazards model (Cox) is one example of this approach. Another method of doing risk grouping is recursive partitioning (Rpart), which constructs a decision tree where each branch point maximizes the statistical separation between the groups. ANN, Cox, and Rpart are compared on five publicly available data sets with varying properties. Cross-validation, as well as separate test sets, are used to validate the models. Results on the test sets show comparable performance, except for the smallest data set where Rpart's predicted risk groups turn out to be inverted, an example of crossing survival curves. Cross-validation shows that all three models exhibit crossing of some survival curves on this small data set but that the ANN model manages the best separation of groups in terms of median survival time before such crossings. The conclusion is that optimizing the area under the survival curve is a viable approach to identify risk groups. Training ANNs to optimize this area combines two key strengths from both prognostic indices and Rpart. First, a desired minimum group size can be specified, as for a prognostic index. Second, non-linear effects among the covariates can be utilized, as Rpart is also able to do. PMID:26352405

  16. Genetic algorithm based optimization of pulse profile for MOPA based high power fiber lasers

    NASA Astrophysics Data System (ADS)

    Zhang, Jiawei; Tang, Ming; Shi, Jun; Fu, Songnian; Li, Lihua; Liu, Ying; Cheng, Xueping; Liu, Jian; Shum, Ping

    2015-03-01

    Although the Master Oscillator Power-Amplifier (MOPA) based fiber laser has received much attention for the laser marking process due to its large tunability of pulse duration (from 10 ns to 1 ms), repetition rate (100 Hz to 500 kHz), high peak power, and extraordinary heat-dissipating capability, the output pulse deformation due to the saturation effect of the fiber amplifier is detrimental for many applications. We proposed and demonstrated that, by utilizing a Genetic algorithm (GA) based optimization technique, the input pulse profile from the master oscillator (a current-driven laser diode) can be conveniently optimized to achieve a targeted output pulse shape under real parameter constraints. In this work, a Yb-doped high power fiber amplifier is considered and a 200 ns square-shaped pulse profile is the optimization target. Since an input pulse with a longer leading edge and a shorter trailing edge can compensate the saturation effect, linear, quadratic and cubic polynomial functions are used to describe the input pulse with a limited number of unknowns (<5). Coefficients of the polynomial functions are the optimization objects. With reasonable cost and hardware limitations, the cubic input pulse with 4 coefficients is found to be the best, as the output amplified pulse can achieve excellent flatness within the square shape. Considering the bandwidth constraint of practical electronics, we examined the high-frequency cut-off effect of input pulses and found that the optimized cubic input pulses with 300 MHz bandwidth are still quite acceptable to satisfy the requirement for the amplified output pulse, and it is feasible to establish such a pulse generator in real applications.
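
    The sketch below illustrates the kind of GA loop described above: four cubic coefficients describe the input pulse, a toy saturable-gain model stands in for the Yb-doped amplifier, and the fitness rewards a flat amplified output. The amplifier model, target level, and GA settings are illustrative assumptions, not the paper's.

```python
# A minimal sketch (toy model, not the paper's amplifier model) of GA optimization of a
# cubic input-pulse profile so that the amplified output stays flat despite gain saturation.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)                    # normalized time across the 200 ns pulse

def amplify(pulse, g0=30.0, e_sat=5.0):
    """Toy saturable amplifier: instantaneous gain decays with accumulated input energy."""
    energy = np.cumsum(pulse) * (t[1] - t[0])
    return pulse * g0 * np.exp(-energy / e_sat)

def fitness(coeffs):
    """Reward amplified outputs that sit close to a flat target level."""
    pulse = np.clip(np.polyval(coeffs, t), 0.0, None)    # cubic profile, clipped to >= 0
    out = amplify(pulse)
    target = 10.0                                        # desired flat output (illustrative)
    return -np.mean((out - target) ** 2)

def genetic_algorithm(pop_size=40, generations=100, sigma=0.1):
    pop = rng.normal(0.0, 1.0, size=(pop_size, 4))       # 4 cubic coefficients per individual
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]           # keep the best half
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            child = np.where(rng.random(4) < 0.5, a, b)              # uniform crossover
            children.append(child + rng.normal(0.0, sigma, size=4))  # Gaussian mutation
        pop = np.vstack([parents, np.array(children)])
    scores = np.array([fitness(ind) for ind in pop])
    return pop[np.argmax(scores)]

best = genetic_algorithm()
print("best cubic coefficients:", best)
```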

  17. Approach to proliferation risk assessment based on multiple objective analysis framework

    SciTech Connect

    Andrianov, A.; Kuptsov, I.

    2013-07-01

    An approach to the assessment of proliferation risk using the methods of multi-criteria decision making and multi-objective optimization is presented. The approach allows taking into account the specific features of the national nuclear infrastructure and possible proliferation strategies (motivations, intentions, and capabilities). Three examples of applying the approach are shown. First, the approach has been used to evaluate the attractiveness of HEU (highly enriched uranium) production scenarios at a clandestine enrichment facility using centrifuge enrichment technology. Secondly, the approach has been applied to assess the attractiveness of scenarios for undeclared production of plutonium or HEU by theft of materials circulating in nuclear fuel cycle facilities and thermal reactors. Thirdly, the approach has been used to perform a comparative analysis of the structures of developing nuclear power systems based on different types of nuclear fuel cycles, the analysis being based on indicators of proliferation risk.

  18. Optimizing management of metabolic syndrome to reduce risk: focus on life-style.

    PubMed

    Bianchi, Cristina; Penno, Giuseppe; Daniele, Giuseppe; Benzi, Luca; Del Prato, Stefano; Miccoli, Roberto

    2008-06-01

    The prevalence of metabolic syndrome (MS) is increasing all over the world, and its incidence is expected to rise in the coming years. Although genetic predisposition appears to play an important role in the regulation of metabolic parameters, and in particular of body weight, the rapid increase in the prevalence of obesity and MS suggests that ecological factors (the social, economic, cultural and physical environment) are promoting these conditions in susceptible individuals. People with MS are at increased risk of type 2 diabetes and cardiovascular disease, and therefore they represent a priority target for preventive strategies. Life-style modifications based on a healthy diet and increased physical activity are an effective preventive and therapeutic approach. Unfortunately, implementing life-style modification and maintaining its effects is a difficult task at both the personal and social level, and thus drug therapy may also need to be taken into account.

  19. Risk management and maintenance optimization of nuclear reactor cooling piping system

    NASA Astrophysics Data System (ADS)

    Augé, L.; Capra, B.; Lasne, M.; Bernard, O.; Bénéfice, P.; Comby, R.

    2006-11-01

    Seaside nuclear power plants have to face the ageing of nuclear reactor cooling piping systems. In order to minimize the duration of production unit shutdowns, maintenance operations have to be planned well in advance. In a context where owners of infrastructures tend to extend the life span of their assets while keeping the safety level at a maximum, Oxand brings its expertise and know-how in the management of infrastructure life cycles. A dedicated methodology relies on several modules that all contribute to determining optimal network replacement dates: expertise on ageing mechanisms (corrosion, cement degradation...) and the associated kinetics, expertise on the impacts of ageing on the functional integrity of piping systems, predictive simulation based on experience feedback, and the development of monitoring techniques focused on actual threats. More precisely, Oxand has designed a patented monitoring technique based on optical fiber sensors, which aims at controlling the deterioration level of piping systems. This preventive maintenance enables the owner to determine criteria for network replacement based on degradation impacts. This approach helps the owner justify his maintenance strategy and allows him to demonstrate the management of the safety level. More generally, all monitoring techniques used by the owners are developed and coupled to predictive simulation tools, notably through processes based on Bayesian approaches. Methodologies to evaluate and optimize operation budgets, depending on predictions of future functional deterioration and available maintenance solutions, are also developed and applied. Finally, all information related to infrastructure ageing and available maintenance options is brought together to reach the right solution for safe and performing infrastructure management.

  20. Risk-based evaluation of Allowed Outage Times (AOTs) considering risk of shutdown

    SciTech Connect

    Mankamo, T.; Kim, I.S.; Samanta, P.K.

    1992-12-31

    When safety systems fail during power operation, Technical Specifications (TS) usually limit the repair within an Allowed Outage Time (AOT). If the repair cannot be completed within the AOT, or no AOT is allowed, the plant is required to be shut down for the repair. However, if the capability to remove decay heat is degraded, shutting down the plant with the need to operate the affected decay-heat removal systems may impose a substantial risk compared to continued power operation over a usual repair time. Thus, defining a proper AOT in such situations can be considered as a risk comparison between the repair in full power state, with a temporarily increased level of risk, and the alternative of shutting down the plant for the repair in zero power state, with its own associated risk. The methodology of the risk-comparison approach, with due consideration of the shutdown risk, has been further developed and applied to the AOT considerations of residual heat removal and standby service water systems of a boiling water reactor (BWR) plant. Based on the completed work, several improvements to the TS requirements for the systems studied can be suggested.
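
    The record frames the AOT decision as a comparison between two incremental risks. A schematic form of that comparison is given below; the notation is assumed here for illustration and is not taken from the report.

```latex
% Schematic risk comparison for continued operation vs. shutdown (assumed notation):
% continued power operation over the repair time is preferred when its incremental
% risk does not exceed the risk of the shutdown-and-repair alternative.
\[
  \underbrace{\left(r_{\mathrm{AOT}} - r_{0}\right)\, t_{\mathrm{repair}}}_{\text{added risk of staying at power}}
  \;\le\;
  \underbrace{R_{\mathrm{sd}} + \left(r_{\mathrm{sd}} - r_{0}\right)\, t_{\mathrm{sd}}}_{\text{risk of shutting down for the repair}}
\]
% r_0: baseline risk frequency; r_AOT: frequency with the component out of service;
% R_sd: risk of the shutdown/startup transition; r_sd, t_sd: risk level and duration of
% the shutdown repair state. All symbols are illustrative.
```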

  1. Risk-based evaluation of Allowed Outage Times (AOTs) considering risk of shutdown

    SciTech Connect

    Mankamo, T.; Kim, I.S.; Samanta, P.K.

    1992-01-01

    When safety systems fail during power operation, Technical Specifications (TS) usually limit the repair within an Allowed Outage Time (AOT). If the repair cannot be completed within the AOT, or no AOT is allowed, the plant is required to be shut down for the repair. However, if the capability to remove decay heat is degraded, shutting down the plant with the need to operate the affected decay-heat removal systems may impose a substantial risk compared to continued power operation over a usual repair time. Thus, defining a proper AOT in such situations can be considered as a risk comparison between the repair in full power state, with a temporarily increased level of risk, and the alternative of shutting down the plant for the repair in zero power state, with its own associated risk. The methodology of the risk-comparison approach, with due consideration of the shutdown risk, has been further developed and applied to the AOT considerations of residual heat removal and standby service water systems of a boiling water reactor (BWR) plant. Based on the completed work, several improvements to the TS requirements for the systems studied can be suggested.

  2. Time-based collision risk modeling for air traffic management

    NASA Astrophysics Data System (ADS)

    Bell, Alan E.

    Since the emergence of commercial aviation in the early part of last century, economic forces have driven a steadily increasing demand for air transportation. Increasing density of aircraft operating in a finite volume of airspace is accompanied by a corresponding increase in the risk of collision, and in response to a growing number of incidents and accidents involving collisions between aircraft, governments worldwide have developed air traffic control systems and procedures to mitigate this risk. The objective of any collision risk management system is to project conflicts and provide operators with sufficient opportunity to recognize potential collisions and take necessary actions to avoid them. It is therefore the assertion of this research that the currency of collision risk management is time. Future Air Traffic Management Systems are being designed around the foundational principle of four dimensional trajectory based operations, a method that replaces legacy first-come, first-served sequencing priorities with time-based reservations throughout the airspace system. This research will demonstrate that if aircraft are to be sequenced in four dimensions, they must also be separated in four dimensions. In order to separate aircraft in four dimensions, time must emerge as the primary tool by which air traffic is managed. A functional relationship exists between the time-based performance of aircraft, the interval between aircraft scheduled to cross some three dimensional point in space, and the risk of collision. This research models that relationship and presents two key findings. First, a method is developed by which the ability of an aircraft to meet a required time of arrival may be expressed as a robust standard for both industry and operations. Second, a method by which airspace system capacity may be increased while maintaining an acceptable level of collision risk is presented and demonstrated for the purpose of formulating recommendations for procedures

  3. A Study on the Optimal Generation Mix Based on Portfolio Theory with Considering the Basic Condition for Power Supply

    NASA Astrophysics Data System (ADS)

    Kato, Moritoshi; Zhou, Yicheng

    This paper presents a novel method to analyze the optimal generation mix based on portfolio theory while considering the basic condition for power supply, namely that electricity generation must correspond with the load curve. The portfolio optimization is integrated with the calculation of a capacity factor for each generation technology in order to satisfy the basic condition for power supply. In addition, each generation technology is treated as an asset, and the risks of the generation asset in both its operation period and its construction period are considered. Environmental measures are evaluated through the restriction of CO2 emissions, indicated by a CO2 price. Numerical examples show the optimal generation mix according to risks such as the deviation of the capacity factor of nuclear power or the restriction of CO2 emissions, the possibility of introducing clean coal technology (IGCC, CCS) or renewable energy, and so on. The results of this work can be applied to setting the target generation mix for the future according to prospects for the risks of each generation technology and restrictions on CO2 emissions.
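
    As a rough illustration of the portfolio-theoretic step, the sketch below chooses generation shares that minimize cost variance subject to the shares summing to one and a cap on portfolio-average CO2 emissions. All numbers and the two-constraint formulation are illustrative assumptions; the paper's treatment of capacity factors and load matching is not reproduced.

```python
# A minimal sketch (illustrative assumptions, not the paper's formulation) of a
# mean-variance generation-mix portfolio with a CO2 restriction.
import numpy as np
from scipy.optimize import minimize

mean_cost = np.array([0.09, 0.07, 0.12])        # $/kWh: nuclear, coal, gas (illustrative)
cov = np.array([[0.0004, 0.0001, 0.0000],
                [0.0001, 0.0009, 0.0003],
                [0.0000, 0.0003, 0.0016]])      # cost covariance (illustrative)
co2 = np.array([0.0, 0.9, 0.4])                 # kg-CO2/kWh (illustrative)
co2_cap = 0.5                                   # portfolio-average emission cap

def variance(w):
    return w @ cov @ w                          # generation-cost risk of the mix

cons = [{"type": "eq", "fun": lambda w: w.sum() - 1.0},
        {"type": "ineq", "fun": lambda w: co2_cap - co2 @ w}]   # emission restriction
res = minimize(variance, x0=np.full(3, 1 / 3), bounds=[(0.0, 1.0)] * 3, constraints=cons)
print("optimal generation shares:", res.x, "expected cost:", mean_cost @ res.x)
```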

  4. A Comprehensive Propagation Prediction Model Comprising Microfacet Based Scattering and Probability Based Coverage Optimization Algorithm

    PubMed Central

    Kausar, A. S. M. Zahid; Wo, Lau Chun

    2014-01-01

    Although ray tracing based propagation prediction models are popular for indoor radio wave propagation characterization, most of them do not provide an integrated approach for achieving the goal of optimum coverage, which is a key part of designing a wireless network. In this paper, an accelerated technique of three-dimensional ray tracing is presented, where rough surface scattering is included to make the ray tracing technique more accurate. Here, the rough surface scattering is represented by microfacets, which makes it possible to compute the scattered field in all possible directions. New optimization techniques, like dual quadrant skipping (DQS) and closest object finder (COF), are implemented for fast characterization of wireless communications and to make the ray tracing technique more efficient. A probability based coverage optimization algorithm is then combined with the ray tracing technique to make a compact solution for indoor propagation prediction. The proposed technique decreases the ray tracing time by omitting unnecessary objects from ray tracing using the DQS technique and by decreasing the ray-object intersection time using the COF technique. The coverage optimization algorithm, in turn, is based on probability theory and finds the minimum number of transmitters and their corresponding positions in order to achieve optimal indoor wireless coverage. Both the space and time complexities of the proposed algorithm improve upon those of existing algorithms. For the verification of the proposed ray tracing technique and coverage algorithm, detailed simulation results for different scattering factors, different antenna types, and different operating frequencies are presented. Furthermore, the proposed technique is verified by the experimental results. PMID:25202733

  5. A comprehensive propagation prediction model comprising microfacet based scattering and probability based coverage optimization algorithm.

    PubMed

    Kausar, A S M Zahid; Reza, Ahmed Wasif; Wo, Lau Chun; Ramiah, Harikrishnan

    2014-01-01

    Although ray tracing based propagation prediction models are popular for indoor radio wave propagation characterization, most of them do not provide an integrated approach for achieving the goal of optimum coverage, which is a key part of designing a wireless network. In this paper, an accelerated technique of three-dimensional ray tracing is presented, where rough surface scattering is included to make the ray tracing technique more accurate. Here, the rough surface scattering is represented by microfacets, which makes it possible to compute the scattered field in all possible directions. New optimization techniques, like dual quadrant skipping (DQS) and closest object finder (COF), are implemented for fast characterization of wireless communications and to make the ray tracing technique more efficient. A probability based coverage optimization algorithm is then combined with the ray tracing technique to make a compact solution for indoor propagation prediction. The proposed technique decreases the ray tracing time by omitting unnecessary objects from ray tracing using the DQS technique and by decreasing the ray-object intersection time using the COF technique. The coverage optimization algorithm, in turn, is based on probability theory and finds the minimum number of transmitters and their corresponding positions in order to achieve optimal indoor wireless coverage. Both the space and time complexities of the proposed algorithm improve upon those of existing algorithms. For the verification of the proposed ray tracing technique and coverage algorithm, detailed simulation results for different scattering factors, different antenna types, and different operating frequencies are presented. Furthermore, the proposed technique is verified by the experimental results. PMID:25202733

  6. Patient specific optimization-based treatment planning for catheter-based ultrasound hyperthermia and thermal ablation

    NASA Astrophysics Data System (ADS)

    Prakash, Punit; Chen, Xin; Wootton, Jeffery; Pouliot, Jean; Hsu, I.-Chow; Diederich, Chris J.

    2009-02-01

    A 3D optimization-based thermal treatment planning platform has been developed for the application of catheter-based ultrasound hyperthermia in conjunction with high dose rate (HDR) brachytherapy for treating advanced pelvic tumors. Optimal selection of the applied power levels to each independently controlled transducer segment can be used to conform and maximize therapeutic heating and thermal dose coverage to the target region, providing significant advantages over current hyperthermia technology and improving treatment response. Critical anatomic structures, clinical target outlines, and implant/applicator geometries were acquired from sequential multi-slice 2D images obtained from HDR treatment planning and used to reconstruct patient-specific 3D biothermal models. A constrained optimization algorithm was devised and integrated within a finite element thermal solver to determine a priori the optimal applied power levels and the resulting 3D temperature distributions, such that therapeutic heating is maximized within the target while constraints are placed on the maximum tissue temperature and the thermal exposure of surrounding non-targeted tissue. This optimization-based treatment planning and modeling system was applied to representative cases of clinical implants for HDR treatment of the cervix and prostate to evaluate the utility of this planning approach. The planning provided significant improvement in achievable temperature distributions for all cases, with a substantial increase in T90 and thermal dose (CEM43T90) coverage to the hyperthermia target volume while decreasing the maximum treatment temperature and reducing the thermal dose exposure to surrounding non-targeted tissues and the thermally sensitive rectum and bladder. This optimization-based treatment planning platform with catheter-based ultrasound applicators is a useful tool that has the potential to significantly improve the delivery of hyperthermia in conjunction with HDR brachytherapy. The planning platform has been extended
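
    The constrained power-level selection described above can be illustrated under a linear-superposition assumption: if a precomputed matrix maps per-segment power to steady-state temperature rise, the applied powers can be chosen to maximize target heating subject to temperature caps. The sketch below is an assumption-laden toy (random influence matrices, arbitrary limits), not the paper's finite element formulation.

```python
# A minimal sketch (illustrative assumptions, not the paper's solver) of choosing applied
# power levels per transducer segment: maximize heating in target voxels subject to
# temperature limits in normal tissue, assuming linear superposition T = A @ p.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_seg, n_target, n_normal = 4, 30, 50
A_target = rng.uniform(0.5, 1.5, size=(n_target, n_seg))   # degC per watt (illustrative)
A_normal = rng.uniform(0.1, 0.6, size=(n_normal, n_seg))

p_max = 15.0          # per-segment power limit, W (illustrative)
t_limit = 6.0         # allowed temperature rise in non-targeted tissue, degC (illustrative)

# linprog minimizes, so negate the objective: maximize total target temperature rise.
c = -A_target.sum(axis=0)
res = linprog(c, A_ub=A_normal, b_ub=np.full(n_normal, t_limit),
              bounds=[(0.0, p_max)] * n_seg, method="highs")
print("optimal powers (W):", res.x)
```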

  7. A School-Based Suicide Risk Assessment Protocol

    ERIC Educational Resources Information Center

    Boccio, Dana E.

    2015-01-01

    Suicide remains the third leading cause of death among young people in the United States. Considering that youth who contemplate suicide generally exhibit warning signs before engaging in lethal self-harm, school-based mental health professionals can play a vital role in identifying students who are at risk for suicidal behavior. Nevertheless, the…

  8. Research needs for risk-informed, performance-based regulation

    SciTech Connect

    Cloninger, T.H.

    1997-01-01

    This presentation was made by an executive in the utility which operates the South Texas Project reactors, and summarizes their perspective on probabilistic safety analysis, risk-based operation, and risk-based regulation. They view it as a tool to help them better apply their resources to maintain the level of safety necessary to protect the public health and safety. South Texas served as one of the pilot plants for the application of risk-based regulation to the maintenance rule. The author feels that the process presents opportunities as well as challenges. Among the opportunities is the involvement of more people in the process, and the sense of investment they take in the decisions, in addition to the insight they can offer. In the area of challenges there is the need for better understanding of how to apply what already is known on problems, rather than essentially reinventing the wheel to address problems. Research is needed to better understand when some events are not truly of a significant safety concern. The demarcation between deterministic decisions and the appropriate application of risk-based decisions must be better defined, for the sake of the operator as well as the public observing plant operation.

  9. How Should Risk-Based Regulation Reflect Current Public Opinion?

    PubMed

    Pollock, Christopher John

    2016-08-01

    Risk-based regulation of novel agricultural products with public choice manifest via traceability and labelling is a more effective approach than the use of regulatory processes to reflect public concerns, which may not always be supported by evidence. PMID:27266813

  10. Optimal Design of Pipeline Based on the Shortest Path

    NASA Astrophysics Data System (ADS)

    Chu, Fei-xue; Chen, Shi-yi

    Design and operation of a long-distance pipeline are complex engineering tasks. Even a small improvement in the design of a pipeline system can lead to substantial savings in capital. In this paper, graph theory was used to analyze the problem of optimal pipeline design. The candidate pump station locations were taken as the vertexes, and the total cost of the pipeline system between two vertexes corresponded to the edge weight. An algorithm recursively calling the Dijkstra algorithm was designed and analyzed to obtain the N shortest paths. The optimal process program and the quasi-optimal process programs were obtained at the same time, which could be used in decision-making. The algorithm was tested by a real example. The result showed that it could meet the need of real application.
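
    A minimal sketch of the graph formulation is shown below using NetworkX: candidate pump-station locations are vertices, edge weights are section costs, and the N cheapest routes are listed in order of total cost. NetworkX's shortest_simple_paths (which internally relies on repeated shortest-path searches) stands in for the recursive-Dijkstra scheme described in the record, and the cost numbers are illustrative.

```python
# A minimal sketch (illustrative costs, not the paper's data) of ranking pipeline routes:
# vertices are candidate pump-station sites, edge weights are section costs, and the
# N cheapest source-to-terminal routes are enumerated in increasing cost order.
from itertools import islice
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("source", "A", 4.0), ("source", "B", 3.0),
    ("A", "C", 2.5), ("B", "C", 4.5), ("A", "B", 1.0),
    ("C", "terminal", 3.0), ("B", "terminal", 7.5),
])

def n_cheapest_routes(graph, src, dst, n):
    """Yield the n cheapest simple paths together with their total costs."""
    paths = nx.shortest_simple_paths(graph, src, dst, weight="weight")
    for path in islice(paths, n):
        cost = sum(graph[u][v]["weight"] for u, v in zip(path, path[1:]))
        yield cost, path

for cost, path in n_cheapest_routes(G, "source", "terminal", 3):
    print(cost, path)
```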

  11. Optimized Hypergraph Clustering-based Network Security Log Mining*

    NASA Astrophysics Data System (ADS)

    Che, Jianhua; Lin, Weimin; Yu, Yong; Yao, Wei

    With the network's growth and popularization, network security experts are facing bigger and bigger network security logs. A network security log is a kind of valuable and important information recording various network behaviors, and it has the features of large scale and high dimension. Therefore, how to analyze these network security logs to enhance the security of the network has become the focus of many researchers. In this paper, we first design a frequent-attack-sequence-based hypergraph clustering algorithm to mine the network security log, and then improve this algorithm with a synthetic measure of hyperedge weight and two optimization functions of the clustering result. The experimental results show that the synthetic measure and optimization functions can significantly improve the coverage and precision of the clustering result. The optimized hypergraph clustering algorithm provides a data analysis method for intrusion detection and active forewarning of networks.

  12. Optimal qudit operator bases for efficient characterization of quantum gates

    NASA Astrophysics Data System (ADS)

    Reich, Daniel M.; Gualdi, Giulia; Koch, Christiane P.

    2014-09-01

    For target unitary operations which preserve the basis of measurement operators, the average fidelity of the corresponding N-qubit gate can be determined efficiently. That is, the number of required experiments is independent of system size and the classical computational resources scale only polynomially in the number N of qubits. Here we address the question of how to optimally choose the measurement basis for fidelity estimation when replacing two-level qubits by d-level qudits. We define optimality in terms of the maximal number of unitaries that preserve the measurement basis. Our definition allows us to construct the optimal measurement basis in terms of their spectra and eigenbases: the measurement operators are unitaries with d-nary spectrum and partition into d+1 Abelian groups whose eigenbases are mutually unbiased.

  13. Novel multireceiver communication systems configurations based on optimal estimation theory

    NASA Technical Reports Server (NTRS)

    Kumar, Rajendra

    1992-01-01

    A novel multireceiver configuration for carrier arraying and/or signal arraying is presented. The proposed configuration is obtained by formulating the carrier and/or signal arraying problem as an optimal estimation problem, and it consists of two stages. The first stage optimally estimates the various phase processes received at different receivers with coupled phase-locked loops, wherein the individual loops acquire and track their respective receivers' phase processes but are aided by each other in an optimal manner via LF error signals. The proposed configuration results in the minimization of the effective radio loss at the combiner output, and thus maximization of the energy per bit to noise power spectral density ratio is achieved. A novel adaptive algorithm for estimating the signal model parameters when these are not known a priori is also presented.

  14. Agreement in cardiovascular risk rating based on anthropometric parameters

    PubMed Central

    Dantas, Endilly Maria da Silva; Pinto, Cristiane Jordânia; Freitas, Rodrigo Pegado de Abreu; de Medeiros, Anna Cecília Queiroz

    2015-01-01

    Objective To investigate the agreement in the evaluation of risk of developing cardiovascular diseases based on anthropometric parameters in young adults. Methods The study included 406 students, in whom weight, height, and waist and neck circumferences were measured; the waist-to-height ratio and the conicity index were calculated. The kappa coefficient was used to assess agreement in risk classification for cardiovascular diseases. The positive and negative specific agreement values were calculated as well. The Pearson chi-square (χ2) test was used to assess associations between categorical variables (p<0.05). Results The majority of the parameters assessed (44%) showed slight (k=0.21 to 0.40) and/or poor agreement (k<0.20), with low values of negative specific agreement. The best agreement was observed between waist circumference and waist-to-height ratio both for the general population (k=0.88) and between sexes (k=0.93 to 0.86). There was a significant association (p<0.001) between the risk of cardiovascular diseases and females when using waist circumference and conicity index, and with males when using neck circumference. This resulted in a wide variation in the prevalence of cardiovascular disease risk (5.5%-36.5%), depending on the parameter and the sex that was assessed. Conclusion The results indicate variability in agreement in assessing risk for cardiovascular diseases based on anthropometric parameters, which also seems to be influenced by sex. Further studies in the Brazilian population are required to better understand this issue. PMID:26466060
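
    The agreement statistic reported above, Cohen's kappa, can be computed directly from two indicators' binary risk classifications. The sketch below uses scikit-learn with made-up labels; the study's data are not reproduced.

```python
# A minimal sketch (made-up labels, not the study's data) of the kappa agreement statistic
# between two anthropometric indicators' risk classifications (1 = at risk, 0 = not at risk).
from sklearn.metrics import cohen_kappa_score

risk_by_waist_circumference = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]
risk_by_waist_to_height     = [1, 0, 0, 1, 0, 0, 0, 0, 1, 0]

kappa = cohen_kappa_score(risk_by_waist_circumference, risk_by_waist_to_height)
print(f"kappa = {kappa:.2f}")   # ~0.78: substantial agreement on this toy example
```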

  15. Mindfulness Based Programs Implemented with At-Risk Adolescents

    PubMed Central

    Rawlett, Kristen; Scrandis, Debra

    2016-01-01

    Objective: This review examines studies of mindfulness-based programs used with adolescents at risk for poor future outcomes, such as not graduating from high school and living in poverty. Method: The keywords mindfulness, at-risk, and adolescents were used to search each database: CINAHL (10 items: 2 book reviews, 3 dissertations, and 5 research articles), Medline EBSCO (15 research articles), and PubMed (10 research articles). Only primary research articles published between 2009 and 2015 in English on mindfulness and at-risk adolescents were included, for the most current evidence. Results: Few studies (n=11) were found that investigate mindfulness in at-risk adolescents. These studies used various mindfulness programs (n=7), making it difficult to generalize findings for practice. Only three studies were randomized controlled trials, focusing mostly on male students with low socioeconomic status and existing mental health diagnoses. Conclusion: There is a relationship between health behaviors and academic achievement. Future research on mindfulness-based interventions needs to examine their effects on academic achievement in at-risk youth, in order to decrease problematic behaviors and improve these adolescents' ability to become successful adults. PMID:27347259

  16. Overlay improvement by exposure map based mask registration optimization

    NASA Astrophysics Data System (ADS)

    Shi, Irene; Guo, Eric; Chen, Ming; Lu, Max; Li, Gordon; Li, Rivan; Tian, Eric

    2015-03-01

    Along with the increased miniaturization of semiconductor electronic devices, the design rules of advanced semiconductor devices shrink dramatically. [1] One of the main challenges of the lithography step is layer-to-layer overlay control. Furthermore, DPT (Double Patterning Technology) has been adopted for advanced technology nodes like 28nm and 14nm, and the corresponding overlay budget becomes even tighter. [2][3] After in-die mask registration (pattern placement) measurement was introduced, model analysis with a KLA SOV (sources of variation) tool showed that the registration difference between masks is a significant error source of wafer layer-to-layer overlay at the 28nm process. [4][5] Mask registration optimization would therefore substantially improve wafer overlay performance. It was reported that a laser based registration control (RegC) process could be applied after pattern generation or after pellicle mounting and allowed fine tuning of the mask registration. [6] In this paper we propose a novel method of mask registration correction, which can be applied before mask writing based on the mask exposure map, considering the factors of mask chip layout, writing sequence, and pattern density distribution. Our experimental data show that if the pattern density on the mask is kept at a low level, the in-die mask registration residual error (3 sigma) can always be kept under 5nm, whatever blank type and related writer POSCOR (position correction) file is applied; this proves that random error induced by material or equipment occupies a relatively fixed error budget as an error source of mask registration. In real production, comparing the mask registration difference across critical production layers reveals that the registration residual error of line/space layers with higher pattern density is always much larger than that of contact hole layers with lower pattern density. Additionally, the mask registration difference between layers with similar pattern density

  17. Role of step size and max dwell time in anatomy based inverse optimization for prostate implants.

    PubMed

    Manikandan, Arjunan; Sarkar, Biplab; Rajendran, Vivek Thirupathur; King, Paul R; Sresty, N V Madhusudhana; Holla, Ragavendra; Kotur, Sachin; Nadendla, Sujatha

    2013-07-01

    In high dose rate (HDR) brachytherapy, the source dwell times and dwell positions are vital parameters in achieving a desirable implant dose distribution. Inverse treatment planning requires an optimal choice of these parameters to achieve the desired target coverage with the lowest achievable dose to the organs at risk (OAR). This study was designed to evaluate the optimum source step size and maximum source dwell time for prostate brachytherapy implants using an Ir-192 source. In total, one hundred inverse treatment plans were generated for the four patients included in this study. Twenty-five treatment plans were created for each patient by varying the step size and maximum source dwell time during anatomy-based, inverse-planned optimization. Other relevant treatment planning parameters were kept constant, including the dose constraints and source dwell positions. Each plan was evaluated for target coverage, urethral and rectal dose sparing, treatment time, relative target dose homogeneity, and nonuniformity ratio. The plans with 0.5 cm step size were seen to have clinically acceptable tumor coverage, minimal normal structure doses, and minimum treatment time as compared with the other step sizes. The target coverage for this step size is 87% of the prescription dose, while the urethral and maximum rectal doses were 107.3 and 68.7%, respectively. No appreciable difference in plan quality was observed with variation in maximum source dwell time. The step size plays a significant role in plan optimization for prostate implants. Our study supports use of a 0.5 cm step size for prostate implants.

  18. Model based climate information on drought risk in Africa

    NASA Astrophysics Data System (ADS)

    Calmanti, S.; Syroka, J.; Jones, C.; Carfagna, F.; Dell'Aquila, A.; Hoefsloot, P.; Kaffaf, S.; Nikulin, G.

    2012-04-01

    The United Nations World Food Programme (WFP) has embarked upon the endeavor of creating a sustainable Africa-wide natural disaster risk management system. A fundamental building block of this initiative is the setup of a drought impact modeling platform called Africa RiskView that aims to quantify and monitor weather-related food security risk in Africa. The modeling approach is based on the Water Requirement Satisfaction Index (WRSI), as the fundamental indicator of the performance of agriculture, and uses historical records of food assistance operations to project future potential needs for livelihood protection. By using climate change scenarios as an input to Africa RiskView it is possible, in principle, to evaluate the future impact of climate variability on critical issues such as food security and the overall performance of the envisaged risk management system. A necessary preliminary step to this challenging task is the exploration of the sources of uncertainty affecting the assessment based on modeled climate change scenarios. For this purpose, a limited set of climate models has been selected in order to verify the relevance of using climate model output data with Africa RiskView and to explore a minimal range of possible sources of uncertainty. This first evaluation exercise started before the setup of the CORDEX framework and has relied on model output available at the time. In particular, only one regional downscaling was available for the entire African continent, from the ENSEMBLES project. The analysis shows that current coarse-resolution global climate models cannot directly feed into the Africa RiskView risk-analysis tool. However, regional downscaling may help correct the inherent biases observed in the datasets. Further analysis is performed by using the first data available under the CORDEX framework. In particular, we consider a set of simulations driven with boundary conditions from the reanalysis ERA-Interim to evaluate the skill drought

  19. Risk-based monitored natural attenuation--a case study.

    PubMed

    Khan, F I; Husain, T

    2001-08-17

    The term "monitored natural attenuation" (MNA) refers to a reliance on natural attenuation (NA) processes for remediation through the careful monitoring of the behavior of a contaminant source in time and space domains. In recent years, policymakers are shifting to a risk-based approach where site characteristics are measured against the potential risk to human health and the environment, and site management strategies are prioritized to be commensurate with that risk. Risk-based corrective action (RBCA), a concept developed by the American Society for Testing Materials (ASTM), was the first indication of how this approach could be used in the development of remediation strategies. This paper, which links ASTM's RBCA approach with MNA, develops a systematic working methodology for a risk-based site evaluation and remediation through NA. The methodology is comprised of seven steps, with the first five steps intended to evaluate site characteristics and the feasibility of NA. If NA is effective, then the last two steps will guide the development of a long-term monitoring plan and approval for a site closure. This methodology is used to evaluate a site contaminated with oil from a pipeline spill. The case study concluded that the site has the requisite characteristics for NA, but it would take more than 80 years for attenuation of xylene and ethylbenzene, as these chemicals appear in the pure phase. If fast remediation is sought, then efforts should be made to remove the contaminant from the soil. Initially, the site posed a serious risk to both on-site and off-site receptors, but it becomes acceptable after 20 years, as the plume is diluted and drifts from its source of origin.

  20. 12 CFR 652.65 - Risk-based capital stress test.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 12 Banks and Banking 6 2010-01-01 2010-01-01 false Risk-based capital stress test. 652.65 Section... CORPORATION FUNDING AND FISCAL AFFAIRS Risk-Based Capital Requirements § 652.65 Risk-based capital stress test. You will perform the risk-based capital stress test as described in summary form below and...

  1. 12 CFR 652.65 - Risk-based capital stress test.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 12 Banks and Banking 7 2014-01-01 2014-01-01 false Risk-based capital stress test. 652.65 Section... CORPORATION FUNDING AND FISCAL AFFAIRS Risk-Based Capital Requirements § 652.65 Risk-based capital stress test. You will perform the risk-based capital stress test as described in summary form below and...

  2. 12 CFR 652.65 - Risk-based capital stress test.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 12 Banks and Banking 7 2013-01-01 2013-01-01 false Risk-based capital stress test. 652.65 Section... CORPORATION FUNDING AND FISCAL AFFAIRS Risk-Based Capital Requirements § 652.65 Risk-based capital stress test. You will perform the risk-based capital stress test as described in summary form below and...

  3. 12 CFR 652.65 - Risk-based capital stress test.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 12 Banks and Banking 6 2011-01-01 2011-01-01 false Risk-based capital stress test. 652.65 Section... CORPORATION FUNDING AND FISCAL AFFAIRS Risk-Based Capital Requirements § 652.65 Risk-based capital stress test. You will perform the risk-based capital stress test as described in summary form below and...

  4. 12 CFR 652.65 - Risk-based capital stress test.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 12 Banks and Banking 7 2012-01-01 2012-01-01 false Risk-based capital stress test. 652.65 Section... CORPORATION FUNDING AND FISCAL AFFAIRS Risk-Based Capital Requirements § 652.65 Risk-based capital stress test. You will perform the risk-based capital stress test as described in summary form below and...

  5. 12 CFR 1022.73 - Content, form, and timing of risk-based pricing notices.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 12 Banks and Banking 8 2012-01-01 2012-01-01 false Content, form, and timing of risk-based pricing... REPORTING (REGULATION V) Duties of Users Regarding Risk-Based Pricing § 1022.73 Content, form, and timing of risk-based pricing notices. (a) Content of the notice—(1) In general. The risk-based pricing...

  6. 12 CFR 1022.73 - Content, form, and timing of risk-based pricing notices.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 12 Banks and Banking 8 2013-01-01 2013-01-01 false Content, form, and timing of risk-based pricing... REPORTING (REGULATION V) Duties of Users Regarding Risk-Based Pricing § 1022.73 Content, form, and timing of risk-based pricing notices. (a) Content of the notice. (1) In general. The risk-based pricing...

  7. 16 CFR 640.3 - General requirements for risk-based pricing notices.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 16 Commercial Practices 1 2012-01-01 2012-01-01 false General requirements for risk-based pricing... DUTIES OF CREDITORS REGARDING RISK-BASED PRICING § 640.3 General requirements for risk-based pricing... notice (“risk-based pricing notice”) in the form and manner required by this part if the person both—...

  8. 16 CFR 640.4 - Content, form, and timing of risk-based pricing notices.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 16 Commercial Practices 1 2011-01-01 2011-01-01 false Content, form, and timing of risk-based... REPORTING ACT DUTIES OF CREDITORS REGARDING RISK-BASED PRICING § 640.4 Content, form, and timing of risk-based pricing notices. (a) Content of the notice—(1) In general. The risk-based pricing notice...

  9. Development of a GCR Event-based Risk Model

    NASA Technical Reports Server (NTRS)

    Cucinotta, Francis A.; Ponomarev, Artem L.; Plante, Ianik; Carra, Claudio; Kim, Myung-Hee

    2009-01-01

    A goal at NASA is to develop event-based systems biology models of space radiation risks that will replace the current dose-based empirical models. Complex and varied biochemical signaling processes transmit the initial DNA and oxidative damage from space radiation into cellular and tissue responses. Mis-repaired damage or aberrant signals can lead to genomic instability, persistent oxidative stress, or inflammation, which are causative of cancer and CNS risks. Protective signaling through adaptive responses or cell repopulation is also possible. We are developing a computational simulation approach to galactic cosmic ray (GCR) effects that is based on biological events rather than average quantities such as dose, fluence, or dose equivalent. The goal of the GCR Event-based Risk Model (GERMcode) is to provide a simulation tool to describe and integrate physical and biological events into stochastic models of space radiation risks. We used the quantum multiple scattering model of heavy ion fragmentation (QMSFRG) and well-known energy-loss processes to develop a stochastic Monte Carlo-based model of GCR transport in spacecraft shielding and tissue. We validated the accuracy of the model by comparing it to physical data from the NASA Space Radiation Laboratory (NSRL). Our simulation approach allows us to time-tag each GCR proton or heavy ion interaction in tissue, including correlated secondary ions often of high multiplicity. Conventional space radiation risk assessment employs average quantities and assumes linearity and additivity of responses over the complete range of GCR charge and energies. To investigate possible deviations from these assumptions, we studied several biological response pathway models of varying induction and relaxation times, including the ATM, TGF-Smad, and WNT signaling pathways. We then considered small volumes of interacting cells and the time-dependent biophysical events that the GCR would produce within these tissue volumes to estimate how

  10. Projected Flood Risks in China based on CMIP5

    NASA Astrophysics Data System (ADS)

    Xu, Ying

    2016-04-01

    Based on simulations from 22 CMIP5 models, in combination with data on population, GDP, arable land, and terrain elevation, the spatial distributions of flood risk levels are calculated and analyzed under RCP8.5 for the baseline period (1986-2005), the near-term future (2016-2035), the mid-term future (2046-2065), and the long-term future (2080-2099). (1) Areas with higher flood hazard risk levels in the future are concentrated in southeastern China, and areas with risk level III continue to expand. The major changes in flood hazard risk will occur in the mid- and long-term future. (2) In the future, the areas of high vulnerability to flood hazards will be located in China's eastern region. In the middle and late 21st century, the extent of the high-vulnerability area will expand eastward and its intensity will gradually increase. The highest vulnerability values are found in the provinces of Beijing, Tianjin, Hebei, Henan, Anhui, Shandong, Shanghai, Jiangsu, and in parts of the Pearl River Delta. Furthermore, the major cities in northeast China, as well as Wuhan, Changsha, and Nanchang, are highly vulnerable. (3) The regions with high flood risk levels will be located in eastern China, in the middle and lower reaches of the Yangtze River, stretching northward to Beijing and Tianjin. High-risk flood areas also occur in major cities in northeast China, in parts of Shaanxi and Shanxi, and in some coastal areas of southeast China. (4) Compared to the baseline period, high flood risks will increase on a regional level towards the end of the 21st century, although the areas of flood hazards show little variation. In this paper, the projected future flood risks for different periods were analyzed under the RCP8.5 emission scenario. Compared with the simulations under the RCP2.6 and RCP4.5 scenarios, the scenarios show no differences in the spatial distribution, but differ in the intensity of flood

  11. Familial risk of cerebral palsy: population based cohort study

    PubMed Central

    Wilcox, Allen J; Lie, Rolv T; Moster, Dag

    2014-01-01

    Objective To investigate risks of recurrence of cerebral palsy in family members with various degrees of relatedness to elucidate patterns of hereditability. Design Population based cohort study. Setting Data from the Medical Birth Registry of Norway, linked to the Norwegian social insurance scheme to identify cases of cerebral palsy and to databases of Statistics Norway to identify relatives. Participants 2 036 741 Norwegians born during 1967-2002, 3649 of whom had a diagnosis of cerebral palsy; 22 558 pairs of twins, 1 851 144 pairs of first degree relatives, 1 699 856 pairs of second degree relatives, and 5 165 968 pairs of third degree relatives were identified. Main outcome measure Cerebral palsy. Results If one twin had cerebral palsy, the relative risk of recurrence of cerebral palsy was 15.6 (95% confidence interval 9.8 to 25) in the other twin. In families with an affected singleton child, risk was increased 9.2 (6.4 to 13)-fold in a subsequent full sibling and 3.0 (1.1 to 8.6)-fold in a half sibling. Affected parents were also at increased risk of having an affected child (6.5 (1.6 to 26)-fold). No evidence was found of differential transmission through mothers or fathers, although the study had limited power to detect such differences. For people with an affected first cousin, only weak evidence existed for an increased risk (1.5 (0.9 to 2.7)-fold). Risks in siblings or cousins were independent of sex of the index case. After exclusion of preterm births (an important risk factor for cerebral palsy), familial risks remained and were often stronger. Conclusions People born into families in which someone already has cerebral palsy are themselves at elevated risk, depending on their degree of relatedness. Elevated risk may extend even to third degree relatives (first cousins). The patterns of risk suggest multifactorial inheritance, in which multiple genes interact with each other and with environmental factors. These data offer additional

  12. A quantile-based Time at Risk: A new approach for assessing risk in financial markets

    NASA Astrophysics Data System (ADS)

    Bolgorian, Meysam; Raei, Reza

    2013-11-01

    In this paper, we provide a new measure for evaluating risk in financial markets. This measure is based on the return interval of critical events in financial markets or other investment situations. Our main goal was to devise a model analogous to Value at Risk (VaR). Just as VaR, for a given financial asset, probability level, and time horizon, gives a critical value such that the probability that the loss on the asset over the time horizon exceeds this value equals the given probability level, our concept of Time at Risk (TaR), using the probability distribution function of return intervals, provides a critical time such that the probability that the return interval of a critical event exceeds this time equals the given probability level. As an empirical application, we applied our model to data from the Tehran Stock Exchange Price Index (TEPIX) as a financial asset (market portfolio) and report the results.
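
    As a rough illustration of the quantile construction described above, the sketch below computes an empirical VaR from a return distribution and an empirical TaR from the distribution of waiting times between critical events. It is only a minimal, hedged example: the synthetic returns, the loss threshold defining a "critical event", and the function names are assumptions for illustration, not the paper's data or code.

```python
import numpy as np

def value_at_risk(returns, alpha=0.05):
    """Empirical VaR: loss threshold exceeded with probability alpha."""
    return -np.quantile(returns, alpha)

def time_at_risk(returns, event_threshold, alpha=0.05):
    """Empirical TaR: critical time such that P(return interval of a critical event > TaR) = alpha."""
    event_indices = np.flatnonzero(returns <= event_threshold)  # days on which a critical loss occurs
    if event_indices.size < 2:
        return np.inf                                           # too few events to form return intervals
    intervals = np.diff(event_indices)                          # waiting times between critical events
    return np.quantile(intervals, 1.0 - alpha)                  # upper quantile of the interval distribution

rng = np.random.default_rng(0)
daily_returns = rng.normal(0.0005, 0.01, size=5000)             # synthetic daily returns
print("VaR(5%):", round(value_at_risk(daily_returns), 4),
      "| TaR(5%) for losses below -2%:", time_at_risk(daily_returns, event_threshold=-0.02), "days")
```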

  13. Roundness error assessment based on particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Zhao, J. W.; Chen, G. Q.

    2005-01-01

    Roundness error assessment is always a nonlinear optimization problem without constraints. The method of particle swarm optimization (PSO) is proposed to evaluate the roundness error. PSO is an evolutionary algorithm derived from the foraging behavior of bird flocks. PSO regards each feasible solution as a particle (a point in n-dimensional space) and initializes a swarm of random particles in the feasible region. Each particle tracks two best positions: the best position it has found itself and the best position found by the whole swarm. Using the inertia weight and these two best positions, all particles update their velocities and positions, and their fitness is re-evaluated. After a number of iterations, the swarm converges to an optimized solution. The reciprocal of the error assessment objective function is adopted as the fitness. In this paper the calculation procedure with PSO is given. Finally, an assessment example is used to verify the method. The results show that the proposed method provides a new way to assess other form and position errors because it can always converge to the global optimal solution.
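
    The update rule summarized above can be sketched in a few lines. The following is a generic, minimal PSO implementation under stated assumptions; the roundness objective used in the example (maximum minus minimum radial distance about a candidate centre) and all parameter values are illustrative choices, not the authors' formulation.

```python
import numpy as np

def pso(objective, lo, hi, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    dim = lo.size
    x = rng.uniform(lo, hi, size=(n_particles, dim))   # particle positions
    v = np.zeros_like(x)                               # particle velocities
    pbest = x.copy()                                   # personal best positions
    pbest_val = np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()         # global best position
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, float(pbest_val.min())

# Example: find the circle centre minimizing the out-of-roundness (max - min radius)
# of synthetic profile points measured on a noisy circle of radius 2 around (5, 5).
rng_pts = np.random.default_rng(1)
theta = rng_pts.uniform(0, 2 * np.pi, 60)
points = np.array([5.0, 5.0]) + 2.0 * np.c_[np.cos(theta), np.sin(theta)] \
         + rng_pts.normal(scale=0.01, size=(60, 2))
roundness_error = lambda c: np.ptp(np.linalg.norm(points - c, axis=1))
centre, error = pso(roundness_error, lo=np.array([0.0, 0.0]), hi=np.array([10.0, 10.0]))
print("best centre:", centre.round(4), "roundness error:", round(error, 5))
```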

  14. Efficiency Mode of Energy Management based on Optimal Flight Path

    NASA Astrophysics Data System (ADS)

    Yang, Ling-xiao

    2016-07-01

    A new method of searching for the optimal flight path with respect to a target function is put forward and applied to the energy-management section of a reentry flight vehicle; the optimal flight path, along which the energy is managed to decline rapidly, is settled by this design. This research on energy management is meaningful for engineering and can also improve the applicability and flexibility of the vehicle. The angle of attack and the bank angle are used as control variables to regulate energy and range during unpowered reentry flight. First, the angle-of-attack profile for the minimum lift-to-drag ratio is determined from the relation between range and lift-to-drag ratio. Second, the secure boundary of the flight corridor is built from the flight constraints. Third, the drag-versus-energy (D-e) profile is optimized for energy expenditure within the corridor, using the rule by which the D-e profile influences range. Finally, this design method is compared with the traditional pseudospectral method. Moreover, energy management is achieved in cooperation with lateral motion, and the optimized D-e profile is tracked to prove the practicability of programming a flight path with energy management.

  15. Hypergraph-Based Combinatorial Optimization of Matrix-Vector Multiplication

    ERIC Educational Resources Information Center

    Wolf, Michael Maclean

    2009-01-01

    Combinatorial scientific computing plays an important enabling role in computational science, particularly in high performance scientific computing. In this thesis, we will describe our work on optimizing matrix-vector multiplication using combinatorial techniques. Our research has focused on two different problems in combinatorial scientific…

  16. Lessons in risk- versus resilience-based design and management.

    PubMed

    Park, Jeryang; Seager, Thomas P; Rao, P Suresh C

    2011-07-01

    The implications of recent catastrophic disasters, including the Fukushima Daiichi nuclear power plant accident, reach well beyond the immediate, direct environmental and human health risks. In a complex coupled system, disruptions from natural disasters and man-made accidents can quickly propagate through a complex chain of networks to cause unpredictable failures in other economic or social networks and other parts of the world. Recent disasters have revealed the inadequacy of a classical risk management approach. This study calls for a new resilience-based design and management paradigm that draws upon the ecological analogues of diversity and adaptation in response to low-probability and high-consequence disruptions.

  17. Torque-based optimal acceleration control for electric vehicle

    NASA Astrophysics Data System (ADS)

    Lu, Dongbin; Ouyang, Minggao

    2014-03-01

    The existing research on acceleration control mainly focuses on optimizing the velocity trajectory with respect to a criterion formulation that weights acceleration time and fuel consumption. The minimum-fuel acceleration problem in conventional vehicles has been solved by Pontryagin's maximum principle and by dynamic programming, respectively. Acceleration control with minimum energy consumption for battery electric vehicles (EVs) has not been reported. In this paper, the permanent magnet synchronous motor (PMSM) is controlled by the field-oriented control (FOC) method, and the electric drive system of the EV (including the PMSM, the inverter, and the battery) is modeled in preference to using a detailed consumption map. An analytical algorithm is proposed to analyze the optimal acceleration control, and the optimal torque-versus-speed curve in the acceleration process is obtained. Considering the acceleration time, a penalty function is introduced to realize fast vehicle speed tracking. The optimal acceleration control is also addressed with dynamic programming (DP). This method can solve the optimal acceleration problem with a precise time constraint, but it consumes a large amount of computation time. The EV used in simulation and experiment is a four-wheel hub-motor-drive electric vehicle. The simulation and experimental results show that the required battery energy differs little between the acceleration control solved by the analytical algorithm and that solved by DP, and is greatly reduced compared with constant-pedal-opening acceleration. The proposed analytical and DP algorithms can minimize the energy consumption in the EV's acceleration process, and the analytical algorithm is easy to implement in real-time control.
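
    To make the dynamic-programming idea concrete, the sketch below runs a backward DP over a discretized speed grid to estimate the minimum battery energy needed to reach a target speed within a fixed time. The vehicle mass, resistance model, torque range, and efficiency are invented placeholders, so this is only a schematic illustration of a DP formulation, not the authors' vehicle model or algorithm.

```python
import numpy as np

dt, n_steps = 0.1, 80                     # 8 s horizon with 0.1 s steps
v_grid = np.linspace(0.0, 25.0, 251)      # discretized speed grid [m/s]
torques = np.linspace(0.0, 1500.0, 31)    # admissible total wheel torque [N*m] (placeholder)
mass, r_wheel, eff = 1200.0, 0.3, 0.85    # crude vehicle and drive-train parameters
v_target = 20.0                           # speed that must be reached by the final step

# cost_to_go[k, i]: minimal battery energy from step k at speed v_grid[i] to the target
cost_to_go = np.full((n_steps + 1, v_grid.size), np.inf)
cost_to_go[n_steps, v_grid >= v_target] = 0.0

for k in range(n_steps - 1, -1, -1):
    for i, v in enumerate(v_grid):
        for T in torques:
            resistance = 0.4 * v**2 + 150.0                  # crude rolling + aero resistance [N]
            a = (T / r_wheel - resistance) / mass
            j = min(np.searchsorted(v_grid, v + a * dt), v_grid.size - 1)
            energy = max(T / r_wheel * v, 0.0) / eff * dt    # battery energy drawn in this step [J]
            cost_to_go[k, i] = min(cost_to_go[k, i], energy + cost_to_go[k + 1, j])

print("minimum battery energy from standstill [J]:", cost_to_go[0, 0])
```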

  18. Optimized dynamic framing for PET-based myocardial blood flow estimation

    NASA Astrophysics Data System (ADS)

    Kolthammer, Jeffrey A.; Muzic, Raymond F.

    2013-08-01

    An optimal experiment design methodology was developed to select the framing schedule to be used in dynamic positron emission tomography (PET) for estimation of myocardial blood flow using 82Rb. A compartment model and an arterial input function based on measured data were used to calculate a D-optimality criterion for a wide range of candidate framing schedules. To validate the optimality calculation, noisy time-activity curves were simulated, from which parameter values were estimated using an efficient and robust decomposition of the estimation problem. D-optimized schedules improved estimate precision compared to non-optimized schedules, including previously published schedules. To assess robustness, a range of physiologic conditions were simulated. Schedules that were optimal for one condition were nearly-optimal for others. The effect of infusion duration was investigated. Optimality was better for shorter than for longer tracer infusion durations, with the optimal schedule for the shortest infusion duration being nearly optimal for other durations. Together this suggests that a framing schedule optimized for one set of conditions will also work well for others and it is not necessary to use different schedules for different infusion durations or for rest and stress studies. The method for optimizing schedules is general and could be applied in other dynamic PET imaging studies.
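
    A hedged sketch of how a D-optimality criterion can be used to compare candidate framing schedules is given below. The one-tissue kinetic model, toy input function, and parameter values are simplified stand-ins rather than the published 82Rb model; the point is only the mechanics: frame-averaged sensitivities form an information matrix whose log-determinant ranks the schedules.

```python
import numpy as np

def one_tissue_tac(t, K1, k2, input_fn):
    """Tissue curve C_T(t) = K1 * [Ca conv exp(-k2 t)](t), evaluated on a fine time grid."""
    dt = t[1] - t[0]
    return K1 * dt * np.convolve(input_fn(t), np.exp(-k2 * t))[: t.size]

def d_criterion(frame_edges, params=(0.7, 0.2), eps=1e-4):
    t = np.arange(0.0, frame_edges[-1], 0.1)              # fine time grid [s]
    ca = lambda tt: np.exp(-tt / 30.0) * (tt > 0)         # toy arterial input function

    def frame_means(p):
        tac = one_tissue_tac(t, p[0], p[1], ca)
        return np.array([tac[(t >= a) & (t < b)].mean()
                         for a, b in zip(frame_edges[:-1], frame_edges[1:])])

    base = frame_means(params)
    # numerical sensitivities of the frame-averaged activity w.r.t. K1 and k2
    J = np.column_stack([
        (frame_means((params[0] + eps, params[1])) - base) / eps,
        (frame_means((params[0], params[1] + eps)) - base) / eps,
    ])
    return np.linalg.slogdet(J.T @ J)[1]                  # log det of the information matrix

fine_early = np.array([0, 10, 20, 30, 40, 60, 90, 120, 180, 300], float)
coarse     = np.array([0, 60, 120, 180, 240, 300], float)
print(d_criterion(fine_early), d_criterion(coarse))       # larger value = more informative schedule
```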

  19. Water risk assessment for river basins in China based on WWF water risk assessment tools

    NASA Astrophysics Data System (ADS)

    Wei, N.; Qiu, Y.; Gan, H.; Niu, C.; Liu, J.; Gan, Y.; Zhou, N.

    2014-09-01

    Water resource problems, among the most important environmental and socio-economic issues, have been a common concern worldwide in recent years. Water resource risks are attracting more and more attention from the international community and national governments. Given the current situation of water resources and the water environment, and the characteristics of water resources management and information statistics in China, this paper establishes an index system for water risk assessment in river basins of China based on the index system of water risk assessment proposed by the World Wide Fund For Nature (WWF) and German Investment and Development Co., Ltd (DEG). The new system is better suited to Chinese national conditions while remaining consistent with the international assessment index. A variety of factors are considered to determine the critical values of classification for each index, and the indexes are graded by means of 5-grade and 5-score scales; the weights and calculation methods of some indexes are adjusted, with the remaining indexes adopting the method of WWF. The Weighted Comprehensive Index Summation Process is adopted to calculate the integrated assessment score of the river basin. The method is applied to the Haihe River basin in China. The assessment shows that the method can accurately reflect the water risk level of different river basins. Finally, the paper discusses the remaining problems in water risk assessment and points out the research still required, providing a reference for further study in this field.
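
    The Weighted Comprehensive Index Summation mentioned above amounts to a weighted average of graded indexes. The snippet below shows the arithmetic only, with made-up index names, grades, and weights; the actual WWF/DEG indexes and weights are not reproduced here.

```python
# index: (grade on the 1-to-5 scale, weight); names, grades, and weights are invented
indices = {
    "physical_water_scarcity": (4, 0.30),
    "water_quality":           (3, 0.25),
    "regulatory_risk":         (2, 0.20),
    "flood_and_drought":       (5, 0.15),
    "ecosystem_pressure":      (3, 0.10),
}

total_weight = sum(weight for _, weight in indices.values())
basin_score = sum(grade * weight for grade, weight in indices.values()) / total_weight
print(f"integrated basin risk score: {basin_score:.2f} (1 = low risk, 5 = high risk)")
```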

  20. Optimization of industrial structure based on water environmental carrying capacity in Tieling City.

    PubMed

    Yue, Qiang; Hou, Limin; Wang, Tong; Wang, Liusuo; Zhu, Yue; Wang, Xiu; Cheng, Xilei

    2015-01-01

    A system dynamics optimization model of the industrial structure of Tieling City based on water environmental carrying capacity has been established. This system is divided into the following subsystems: water resources, economics, population, contaminants, and agriculture and husbandry. Three schemes were designed to simulate the model from 2011 to 2020, and these schemes were compared to obtain an optimal social and economic development model in Tieling City. Policy recommendations on industrial structure optimization based on the optimal solution provide scientific decision-making advice to develop a strong and sustainable economy in Tieling.

  2. [Dyslipidaemia and vascular risk. A new evidence based review].

    PubMed

    Pallarés-Carratalá, V; Pascual-Fuster, V; Godoy-Rocatí, D

    2015-01-01

    Dyslipidaemia is one of the major risk factors for ischaemic heart disease, the leading cause of death worldwide. Early detection and therapeutic intervention are key elements in the adequate prevention of cardiovascular disease. It is essential to have knowledge of the therapeutic arsenal available for its appropriate use in each of the clinical situations that might present in our patients. In the past 3 years, there has been a proliferation of guidelines for the clinical management of patients with dyslipidaemia, with apparently contradictory messages regarding the achievement of control objectives, which confuse clinicians. This review aims to provide an updated overview of the situation regarding dyslipidaemia, based on the positions of both the European and American guidelines, through different risk situations, ending with the concept of atherogenic dyslipidaemia as a recognized cardiovascular risk factor.

  3. Assessing risk based on uncertain avalanche activity patterns

    NASA Astrophysics Data System (ADS)

    Zeidler, Antonia; Fromm, Reinhard

    2015-04-01

    Avalanches may affect critical infrastructure and may cause great economic losses. The planning horizon of infrastructures, e.g. hydropower generation facilities, reaches well into the future. Based on the results of previous studies on the effect of changing meteorological parameters (precipitation, temperature) on avalanche activity, we assume that the risk pattern will change in the future. Decision makers need to understand what the future might bring in order to formulate their mitigation strategies. Therefore, we explore commercial risk software to calculate risk for the coming years, which might help in decision processes. The software @risk is known to many larger companies, and we therefore explore its capability to include avalanche risk simulations in order to guarantee comparability of different risks. In a first step, we develop a model for a hydropower generation facility that reflects the problem of changing avalanche activity patterns in the future by selecting relevant input parameters and assigning likely probability distributions. The uncertain input variables include the probability of avalanches affecting an object, the vulnerability of an object, the expected cost of repairing the object, and the expected cost due to interruption. The crux is to find the distribution that best represents the input variables under changing meteorological conditions. Our focus is on including the uncertain probability of avalanches based on the analysis of past avalanche data and expert knowledge. In order to explore different likely outcomes, we base the analysis on three different climate scenarios (likely, worst case, baseline). For some variables, it is possible to fit a distribution to historical data, whereas in cases where the past dataset is insufficient or unavailable, the software allows selection from over 30 different distribution types. The Monte Carlo simulation uses the probability distribution of uncertain variables

  4. Gray matter correlates of dispositional optimism: a voxel-based morphometry study.

    PubMed

    Yang, Junyi; Wei, Dongtao; Wang, Kangcheng; Qiu, Jiang

    2013-10-11

    Dispositional optimism is an important product of human evolution. This individual difference variable plays a core role in human experience. Dispositional optimism is beneficial to physical and psychological wellbeing. Previous task-related neuroimaging studies on dispositional optimism were limited by small sample sizes, and did not examine individual differences in dispositional optimism related to brain structure. Thus, the current study used voxel-based morphometry and the revised Life Orientation Test to investigate individual dispositional optimism and its association with brain structure in 361 healthy participants. The results showed that individual dispositional optimism was associated with larger gray matter volume of a cluster of areas that included the left thalamus/left pulvinar that extended to the left parahippocampal gyrus. These findings suggest a biological basis for individual dispositional optimism, distributed across different gray matter regions of the brain.

  5. Optimization of natural lipstick formulation based on pitaya (Hylocereus polyrhizus) seed oil using D-optimal mixture experimental design.

    PubMed

    Kamairudin, Norsuhaili; Gani, Siti Salwa Abd; Masoumi, Hamid Reza Fard; Hashim, Puziah

    2014-01-01

    The D-optimal mixture experimental design was employed to optimize the melting point of a natural lipstick based on pitaya (Hylocereus polyrhizus) seed oil. The influence of the main lipstick components, namely pitaya seed oil (10%-25% w/w), virgin coconut oil (25%-45% w/w), beeswax (5%-25% w/w), candelilla wax (1%-5% w/w), and carnauba wax (1%-5% w/w), was investigated with respect to the melting point properties of the lipstick formulation. The D-optimal mixture experimental design was applied to optimize the properties of the lipstick by focusing on the melting point with respect to the above components. The D-optimal mixture design analysis showed that the variation in the response (melting point) could be depicted as a quadratic function of the main components of the lipstick. The best combination of each significant factor determined by the D-optimal mixture design was established to be pitaya seed oil (25% w/w), virgin coconut oil (37% w/w), beeswax (17% w/w), candelilla wax (2% w/w) and carnauba wax (2% w/w). With respect to these factors, a melting point of 46.0 °C was observed experimentally, similar to the theoretical prediction of 46.5 °C. Carnauba wax is the most influential factor on this response (melting point), owing to its role in heat endurance. The quadratic polynomial model sufficiently fit the experimental data. PMID:25325152
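
    The quadratic mixture model referred to above is commonly written as a Scheffé polynomial in the component fractions. The sketch below fits such a model by least squares to synthetic data and scans candidate blends for a desired melting point; the data, coefficients, and target value are invented for illustration and are not the published measurements or optimum.

```python
import numpy as np
from itertools import combinations

def scheffe_quadratic(X):
    """Design matrix of a Scheffe quadratic mixture model: fractions plus pairwise products."""
    pairs = [X[:, i] * X[:, j] for i, j in combinations(range(X.shape[1]), 2)]
    return np.column_stack([X] + pairs)

rng = np.random.default_rng(0)
X = rng.dirichlet(np.ones(5), size=30)          # synthetic 5-component design points (rows sum to 1)
true_coef = np.array([30, 25, 70, 80, 85, 5, -4, 3, 6, -2, 4, 1, -3, 2, 5], float)
y = scheffe_quadratic(X) @ true_coef + rng.normal(0, 0.3, 30)   # synthetic "melting points"

coef, *_ = np.linalg.lstsq(scheffe_quadratic(X), y, rcond=None)  # fit the mixture model

# scan random candidate blends for a predicted melting point close to a desired value (~46 C)
candidates = rng.dirichlet(np.ones(5), size=20000)
pred = scheffe_quadratic(candidates) @ coef
best = candidates[np.argmin(np.abs(pred - 46.0))]
print("candidate blend fractions (seed oil, coconut oil, beeswax, candelilla, carnauba):",
      np.round(best, 3))
```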

  6. SU-E-T-617: A Feasibility Study of Navigation Based Multi Criteria Optimization for Advanced Cervical Cancer IMRT Planning

    SciTech Connect

    Ma, C

    2014-06-01

    Purpose: This study aims to validate multi-criteria optimization (MCO) against standard intensity-modulated radiation therapy (IMRT) optimization for advanced cervical cancer in RayStation (v2.4, RaySearch Laboratories, Sweden). Methods: The IMRT plans of 10 advanced cervical cancer patients were randomly selected. These plans had been designed with step-and-shoot optimization; new plans were then designed with MCO based on these plans while keeping the optimization conditions unchanged. The two kinds of plans were compared, including the dose-volume histogram parameters of the PTV and OARs, and were analysed with paired t-tests. Results: Plans were normalized so that 95% of the PTV volume received the prescribed dose (50 Gy). The rectal volumes receiving 10, 20, 30, and 40 Gy were reduced by 14.7%, 26.8%, 21.1%, and 10.5%, respectively (P≥0.05). The mean rectal dose was reduced by 7.2 Gy (P≤0.05). There were no significant differences in the dosimetric parameters for the bladder. Conclusion: In comparison with standard IMRT optimization, MCO reduces the dose to organs at risk with the same PTV coverage, but the result needs further clinical evaluation.

  7. A Global Airport-Based Risk Model for the Spread of Dengue Infection via the Air Transport Network

    PubMed Central

    Gardner, Lauren; Sarkar, Sahotra

    2013-01-01

    The number of travel-acquired dengue infections has seen a consistent global rise over the past decade. An increased volume of international passenger air traffic originating from regions with endemic dengue has contributed to a rise in the number of dengue cases in both areas of endemicity and elsewhere. This paper reports results from a network-based risk assessment model which uses international passenger travel volumes, travel routes, travel distances, regional populations, and predictive species distribution models (for the two vector species, Aedes aegypti and Aedes albopictus) to quantify the relative risk posed by each airport in importing passengers with travel-acquired dengue infections. Two risk attributes are evaluated: (i) the risk posed by through traffic at each stopover airport and (ii) the risk posed by incoming travelers to each destination airport. The model results prioritize optimal locations (i.e., airports) for targeted dengue surveillance. The model is easily extendible to other vector-borne diseases. PMID:24009672

  8. A global airport-based risk model for the spread of dengue infection via the air transport network.

    PubMed

    Gardner, Lauren; Sarkar, Sahotra

    2013-01-01

    The number of travel-acquired dengue infections has seen a consistent global rise over the past decade. An increased volume of international passenger air traffic originating from regions with endemic dengue has contributed to a rise in the number of dengue cases in both areas of endemicity and elsewhere. This paper reports results from a network-based risk assessment model which uses international passenger travel volumes, travel routes, travel distances, regional populations, and predictive species distribution models (for the two vector species, Aedes aegypti and Aedes albopictus) to quantify the relative risk posed by each airport in importing passengers with travel-acquired dengue infections. Two risk attributes are evaluated: (i) the risk posed by through traffic at each stopover airport and (ii) the risk posed by incoming travelers to each destination airport. The model results prioritize optimal locations (i.e., airports) for targeted dengue surveillance. The model is easily extendible to other vector-borne diseases.
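
    The ranking logic of such an airport-based risk model can be illustrated with a toy calculation: each destination airport accumulates risk in proportion to incoming passenger volumes weighted by the vector suitability or endemicity of the origin. The route volumes and suitability scores below are invented, and the sketch omits the stopover (through-traffic) risk, distances, and populations used in the published model.

```python
from collections import defaultdict

# (origin airport, destination airport, annual passenger volume); all values invented
routes = [("BKK", "LHR", 900_000), ("DEL", "LHR", 700_000), ("BKK", "SIN", 1_200_000),
          ("SIN", "SYD", 800_000), ("GRU", "MIA", 650_000)]
# vector-suitability / endemicity weight of each origin region, in [0, 1] (invented)
origin_suitability = {"BKK": 0.9, "DEL": 0.8, "SIN": 0.6, "GRU": 0.7}

destination_risk = defaultdict(float)
for origin, dest, passengers in routes:
    # relative risk of importing travel-acquired infections at the destination airport
    destination_risk[dest] += passengers * origin_suitability[origin]

for airport, score in sorted(destination_risk.items(), key=lambda kv: -kv[1]):
    print(f"{airport}: relative importation risk {score:,.0f}")
```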

  9. Risk based neoadjuvant chemotherapy in muscle invasive bladder cancer

    PubMed Central

    Jayaratna, Isuru S.; Navai, Neema

    2015-01-01

    Muscle invasive bladder cancer (MIBC) is an aggressive disease that frequently requires radical cystectomy (RC) to achieve durable cure rates. Surgery is most effective when performed in organ-confined disease, with the best outcomes for those patients with a pT0 result. The goals of neoadjuvant chemotherapy (NC) are to optimize surgical outcomes for a malignancy with limited adjuvant therapies and a lack of effective salvage treatments. Despite level 1 evidence demonstrating a survival benefit, the utilization of NC has been hampered by several issues, including, the inability to predict responders and the perception that NC may delay curative surgery. In this article, we review the current efforts to identify patients that are most likely to derive a benefit from NC, in order to create a risk-adapted paradigm that reserves NC for those who need it. PMID:26816830

  10. New displacement-based methods for optimal truss topology design

    NASA Technical Reports Server (NTRS)

    Bendsoe, Martin P.; Ben-Tal, Aharon; Haftka, Raphael T.

    1991-01-01

    Two alternate methods for maximum stiffness truss topology design are presented. The ground structure approach is used, and the problem is formulated in terms of displacements and bar areas. This large, nonconvex optimization problem can be solved by a simultaneous analysis and design approach. Alternatively, an equivalent, unconstrained, and convex problem in the displacements only can be formulated, and this problem can be solved by a nonsmooth, steepest descent algorithm. In both methods, the explicit solving of the equilibrium equations and the assembly of the global stiffness matrix are circumvented. A large number of examples have been studied, showing the attractive features of topology design as well as exposing interesting features of optimal topologies.

  11. Optimization techniques for OpenCL-based linear algebra routines

    NASA Astrophysics Data System (ADS)

    Kozacik, Stephen; Fox, Paul; Humphrey, John; Kuller, Aryeh; Kelmelis, Eric; Prather, Dennis W.

    2014-06-01

    The OpenCL standard for general-purpose parallel programming allows a developer to target highly parallel computations towards graphics processing units (GPUs), CPUs, co-processing devices, and field programmable gate arrays (FPGAs). The computationally intense domains of linear algebra and image processing have shown significant speedups when implemented in the OpenCL environment. A major benefit of OpenCL is that a routine written for one device can be run across many different devices and architectures; however, a kernel optimized for one device may not exhibit high performance when executed on a different device. For this reason kernels must typically be hand-optimized for every target device family. Due to the large number of parameters that can affect performance, hand tuning for every possible device is impractical and often produces suboptimal results. For this work, we focused on optimizing the general matrix multiplication routine. General matrix multiplication is used as a building block for many linear algebra routines and often comprises a large portion of the run-time. Prior work has shown this routine to be a good candidate for high-performance implementation in OpenCL. We selected several candidate algorithms from the literature that are suitable for parameterization. We then developed parameterized kernels implementing these algorithms using only portable OpenCL features. Our implementation queries device information supplied by the OpenCL runtime and utilizes this as well as user input to generate a search space that satisfies device and algorithmic constraints. Preliminary results from our work confirm that optimizations are not portable from one device to the next, and show the benefits of automatic tuning. Using a standard set of tuning parameters seen in the literature for the NVIDIA Fermi architecture achieves a performance of 1.6 TFLOPS on an AMD 7970 device, while automatically tuning achieves a peak of 2.7 TFLOPS
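
    The automatic tuning described above boils down to searching a constrained parameter space and benchmarking each surviving configuration. The skeleton below shows that control flow only; the tiling parameters, the device limit, and the benchmark_gemm_kernel stand-in (which would normally compile and time an OpenCL kernel, e.g. via pyopencl) are placeholders rather than the authors' implementation.

```python
import itertools

def benchmark_gemm_kernel(tile_m, tile_n, tile_k, vector_width):
    """Stand-in for compiling, running, and timing a parameterized OpenCL GEMM kernel.
    A real implementation would build the kernel, enqueue it on the device, and return
    measured GFLOP/s; here a fake score is returned so the skeleton runs."""
    return (tile_m * tile_n) / 64.0 + tile_k / 8.0 + vector_width

def autotune(search_space, device_limits):
    best_config, best_gflops = None, 0.0
    for values in itertools.product(*search_space.values()):
        config = dict(zip(search_space.keys(), values))
        # prune configurations that violate device constraints before benchmarking
        if config["tile_m"] * config["tile_n"] > device_limits["max_work_group_size"]:
            continue
        gflops = benchmark_gemm_kernel(**config)
        if gflops > best_gflops:
            best_config, best_gflops = config, gflops
    return best_config, best_gflops

search_space = {"tile_m": [8, 16, 32], "tile_n": [8, 16, 32],
                "tile_k": [8, 16], "vector_width": [1, 2, 4]}
print(autotune(search_space, {"max_work_group_size": 256}))
```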

  12. Optimal global value of information trials: better aligning manufacturer and decision maker interests and enabling feasible risk sharing.

    PubMed

    Eckermann, Simon; Willan, Andrew R

    2013-05-01

    Risk sharing arrangements relate to adjusting payments for new health technologies given evidence of their performance over time. Such arrangements rely on prospective information regarding the incremental net benefit of the new technology, and its use in practice. However, once the new technology has been adopted in a particular jurisdiction, randomized clinical trials within that jurisdiction are likely to be infeasible and unethical in the cases where they would be most helpful, i.e. with current evidence of positive while uncertain incremental health and net monetary benefit. Informed patients in these cases would likely be reluctant to participate in a trial, preferring instead to receive the new technology with certainty. Consequently, informing risk sharing arrangements within a jurisdiction is problematic given the infeasibility of collecting prospective trial data. To overcome such problems, we demonstrate that global trials facilitate trialling post adoption, leading to more complete and robust risk sharing arrangements that mitigate the impact of costs of reversal on expected value of information in jurisdictions who adopt while a global trial is undertaken. More generally, optimally designed global trials offer distinct advantages over locally optimal solutions for decision makers and manufacturers alike: avoiding opportunity costs of delay in jurisdictions that adopt; overcoming barriers to evidence collection; and improving levels of expected implementation. Further, the greater strength and translatability of evidence across jurisdictions inherent in optimal global trial design reduces barriers to translation across jurisdictions characteristic of local trials. Consequently, efficiently designed global trials better align the interests of decision makers and manufacturers, increasing the feasibility of risk sharing and the expected strength of evidence over local trials, up until the point that current evidence is globally sufficient.

  14. XFEM schemes for level set based structural optimization

    NASA Astrophysics Data System (ADS)

    Li, Li; Wang, Michael Yu; Wei, Peng

    2012-12-01

    In this paper, some elegant extended finite element method (XFEM) schemes for level-set-based structural optimization are proposed. First, two-dimensional (2D) and three-dimensional (3D) XFEM schemes with a partition integral method are developed, and numerical examples are employed to evaluate their accuracy, indicating that accurate analysis results can be obtained on the structural boundary. Furthermore, methods for improving the computational accuracy and efficiency of XFEM are studied, including an XFEM integral scheme without quadrature sub-cells and a higher-order element XFEM scheme. Numerical examples show that the XFEM scheme without quadrature sub-cells yields similar accuracy of structural analysis while prominently reducing the time cost, and that higher-order XFEM elements improve the computational accuracy of structural analysis in the boundary elements, but at an increasing time cost. Therefore, the trade-off in time cost between the FE system scale and the element order needs to be considered. Finally, the reliability and advantages of the proposed XFEM schemes are illustrated with several 2D and 3D mean compliance minimization examples that are widely used in the recent literature on structural topology optimization. All numerical results demonstrate that the proposed XFEM is a promising structural analysis approach for structural optimization with the level set method.

  15. Optimization of fingernail sensor design based on fingernail imaging

    NASA Astrophysics Data System (ADS)

    Abu-Khalaf, Jumana M.; Mascaro, Stephen A.

    2010-08-01

    This paper describes the optimization of fingernail sensors for measuring fingertip touch forces for human-computer interaction. The fingernail sensor uses optical reflectance photoplethysmography to measure the change in blood perfusion in the fingernail bed when the fingerpad touches a surface with various forces. In the original fingernail sensor, color changes observed through the fingernail were measured by mounting an array of six LEDs (light-emitting diodes) and eight photodetectors on the fingernail in a laterally symmetric configuration. The optical components were located such that each photodiode had at least one neighboring LED. The role of each photodetector was investigated in terms of the effect of removing one or more photodetectors on force estimation. The analysis suggested designing the next generation of fingernail sensors with fewer than eight photodetectors. This paper proposes an optimal redesign by analyzing a photographic catalog composed of six different force poses, representing the average fingernail coloration patterns of fifteen human subjects. It also introduces an optical model that describes light transmission between an LED and a photodiode and predicts the optimal locations of the optoelectronic devices in the fingernail area.

  16. Sleep disturbances as an evidence-based suicide risk factor.

    PubMed

    Bernert, Rebecca A; Kim, Joanne S; Iwata, Naomi G; Perlis, Michael L

    2015-03-01

    Increasing research indicates that sleep disturbances may confer increased risk for suicidal behaviors, including suicidal ideation, suicide attempts, and death by suicide. Despite increased investigation, a number of methodological problems present important limitations to the validity and generalizability of findings in this area, which warrant additional focus. To evaluate and delineate sleep disturbances as an evidence-based suicide risk factor, a systematic review of the extant literature was conducted with methodological considerations as a central focus. The following methodologic criteria were required for inclusion: the report (1) evaluated an index of sleep disturbance; (2) examined an outcome measure for suicidal behavior; (3) adjusted for presence of a depression diagnosis or depression severity, as a covariate; and (4) represented an original investigation as opposed to a chart review. Reports meeting inclusion criteria were further classified and reviewed according to: study design and timeframe; sample type and size; sleep disturbance, suicide risk, and depression covariate assessment measure(s); and presence of positive versus negative findings. Based on keyword search, the following search engines were used: PubMed and PsycINFO. Search criteria generated N = 82 articles representing original investigations focused on sleep disturbances and suicide outcomes. Of these, N = 18 met inclusion criteria for review based on systematic analysis. Of the reports identified, N = 18 evaluated insomnia or poor sleep quality symptoms, whereas N = 8 assessed nightmares in association with suicide risk. Despite considerable differences in study designs, samples, and assessment techniques, the comparison of such reports indicates preliminary, converging evidence for sleep disturbances as an empirical risk factor for suicidal behaviors, while highlighting important, future directions for increased investigation. PMID:25698339

  17. Enhancing the effectiveness of IST through risk-based techniques

    SciTech Connect

    Floyd, S.D.

    1996-12-01

    Current IST requirements were developed mainly through deterministic-based methods. While this approach has resulted in an adequate level of safety and reliability for pumps and valves, insights from probabilistic safety assessments suggest a better safety focus can be achieved at lower costs. That is, some high safety impact pumps and valves are currently not tested under the IST program and should be added, while low safety impact valves could be tested at significantly greater intervals than allowed by the current IST program. The nuclear utility industry, through the Nuclear Energy Institute (NEI), has developed a draft guideline for applying risk-based techniques to focus testing on those pumps and valves with a high safety impact while reducing test frequencies on low safety impact pumps and valves. The guideline is being validated through an industry pilot application program that is being reviewed by the U.S. Nuclear Regulatory Commission. NEI and the ASME maintain a dialogue on the two groups' activities related to risk-based IST. The presenter will provide an overview of the NEI guideline, discuss the methodological approach for applying risk-based technology to IST and provide the status of the industry pilot plant effort.

  18. A GIS-based method for flood risk assessment

    NASA Astrophysics Data System (ADS)

    Kalogeropoulos, Kleomenis; Stathopoulos, Nikos; Psarogiannis, Athanasios; Penteris, Dimitris; Tsiakos, Chrisovalantis; Karagiannopoulou, Aikaterini; Krikigianni, Eleni; Karymbalis, Efthimios; Chalkias, Christos

    2016-04-01

    Floods are global physical hazards with negative environmental and socio-economic impacts on local and regional scales. The technological evolution of recent decades, especially in the field of geoinformatics, has offered new advantages for hydrological modelling. This study seeks to use this technology to quantify flood risk. The study area is an ungauged catchment; using mostly GIS-based hydrological and geomorphological analysis together with a GIS-based distributed unit hydrograph model, a series of outcomes was produced. More specifically, this paper examines the behaviour of the Kladeos basin (Peloponnese, Greece) using real rainfall data as well as hypothetical storms. The hydrological analysis was carried out using a digital elevation model of 5x5 m pixel size, while the quantitative drainage basin characteristics were calculated and studied in terms of stream order and its contribution to flooding. Unit hydrographs are, as is known, useful when data are lacking; in this work, based on the time-area method, a sequence of flood risk assessments was made using GIS technology. Essentially, the proposed methodology estimates parameters such as discharge and flow velocity in order to quantify flood risk. Keywords: flood risk assessment quantification; GIS; hydrological analysis; geomorphological analysis.
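
    The time-area unit-hydrograph step mentioned above can be reduced to a convolution of effective rainfall with ordinates derived from the basin's time-area histogram. The numbers below (time-area fractions, basin area, rainfall series) are invented for illustration and are not the Kladeos basin data.

```python
import numpy as np

dt = 0.5                                    # time step [h]
basin_area_km2 = 120.0                      # invented basin area
# fraction of the basin area contributing at each travel-time increment (sums to 1)
time_area = np.array([0.05, 0.20, 0.35, 0.25, 0.10, 0.05])
effective_rain_mm = np.array([0.0, 4.0, 9.0, 6.0, 2.0, 0.0])    # effective rainfall per step

# unit-hydrograph ordinates [m^3/s per mm of effective rain]: contributing area converted
# from km^2 and mm of depth to a discharge over one time step
uh = time_area * basin_area_km2 * 1e6 * 1e-3 / (dt * 3600.0)
discharge = np.convolve(effective_rain_mm, uh)                  # direct-runoff hydrograph [m^3/s]
print(f"peak discharge: {discharge.max():.1f} m^3/s at t = {discharge.argmax() * dt:.1f} h")
```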

  19. Local-in-Time Adjoint-Based Method for Optimal Control/Design Optimization of Unsteady Compressible Flows

    NASA Technical Reports Server (NTRS)

    Yamaleev, N. K.; Diskin, B.; Nielsen, E. J.

    2009-01-01

    We study local-in-time adjoint-based methods for minimization of flow matching functionals subject to the 2-D unsteady compressible Euler equations. The key idea of the local-in-time method is to construct a very accurate approximation of the global-in-time adjoint equations and the corresponding sensitivity derivative by using only local information available on each time subinterval. In contrast to conventional time-dependent adjoint-based optimization methods which require backward-in-time integration of the adjoint equations over the entire time interval, the local-in-time method solves local adjoint equations sequentially over each time subinterval. Since each subinterval contains relatively few time steps, the storage cost of the local-in-time method is much lower than that of the global adjoint formulation, thus making the time-dependent optimization feasible for practical applications. The paper presents a detailed comparison of the local- and global-in-time adjoint-based methods for minimization of a tracking functional governed by the Euler equations describing the flow around a circular bump. Our numerical results show that the local-in-time method converges to the same optimal solution obtained with the global counterpart, while drastically reducing the memory cost as compared to the global-in-time adjoint formulation.

  20. Foraging on the potential energy surface: a swarm intelligence-based optimizer for molecular geometry.

    PubMed

    Wehmeyer, Christoph; Falk von Rudorff, Guido; Wolf, Sebastian; Kabbe, Gabriel; Schärf, Daniel; Kühne, Thomas D; Sebastiani, Daniel

    2012-11-21

    We present a stochastic, swarm intelligence-based optimization algorithm for the prediction of global minima on potential energy surfaces of molecular cluster structures. Our optimization approach is a modification of the artificial bee colony (ABC) algorithm which is inspired by the foraging behavior of honey bees. We apply our modified ABC algorithm to the problem of global geometry optimization of molecular cluster structures and show its performance for clusters with 2-57 particles and different interatomic interaction potentials. PMID:23181297
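
    For readers unfamiliar with the ABC heuristic, the sketch below shows a compact, generic version of the employed-bee, onlooker-bee, and scout phases applied to a toy Lennard-Jones cluster energy. It is an illustrative baseline under stated assumptions, not the authors' modified ABC or their interaction potentials.

```python
import numpy as np

def abc_minimize(f, dim, lo, hi, n_food=20, limit=30, n_iter=400, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, size=(n_food, dim))        # food sources (candidate solutions)
    fx = np.array([f(s) for s in x])
    trials = np.zeros(n_food, dtype=int)
    for _ in range(n_iter):
        for phase in ("employed", "onlooker"):
            if phase == "employed":
                order = range(n_food)
            else:                                      # onlookers visit fitter sources more often
                prob = fx.max() - fx + 1e-12
                order = rng.choice(n_food, size=n_food, p=prob / prob.sum())
            for i in order:
                k = rng.integers(n_food - 1)           # random partner index, shifted to skip i
                k += k >= i
                j = rng.integers(dim)
                cand = x[i].copy()
                cand[j] += rng.uniform(-1.0, 1.0) * (x[i, j] - x[k, j])
                cand = np.clip(cand, lo, hi)
                fc = f(cand)
                if fc < fx[i]:                         # greedy selection
                    x[i], fx[i], trials[i] = cand, fc, 0
                else:
                    trials[i] += 1
        worn = trials > limit                          # scouts replace exhausted sources
        x[worn] = rng.uniform(lo, hi, size=(int(worn.sum()), dim))
        fx[worn] = [f(s) for s in x[worn]]
        trials[worn] = 0
    best = int(np.argmin(fx))
    return x[best], fx[best]

def lj_cluster_energy(flat, n=4):
    """Toy Lennard-Jones energy of an n-particle cluster (flattened 3D coordinates)."""
    p = flat.reshape(n, 3)
    r = np.linalg.norm(p[:, None] - p[None, :], axis=-1)[np.triu_indices(n, 1)]
    return float(np.sum(4.0 * (r**-12 - r**-6)))

coords, energy = abc_minimize(lj_cluster_energy, dim=12, lo=-2.0, hi=2.0)
print("best cluster energy found:", round(energy, 4))
```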