Science.gov

Sample records for optimization approach applied

  1. Optimal control theory (OWEM) applied to a helicopter in the hover and approach phase

    NASA Technical Reports Server (NTRS)

    Born, G. J.; Kai, T.

    1975-01-01

    A major difficulty in the practical application of linear-quadratic regulator theory is how to choose the weighting matrices in quadratic cost functions. The control system design with optimal weighting matrices was applied to a helicopter in the hover and approach phase. The weighting matrices were calculated to extremize the closed-loop total system damping subject to constraints on the determinants. The extremization is really a minimization of the effects of disturbances, and is interpreted as a compromise between the generalized system accuracy and the generalized system response speed. The trade-off between accuracy and response speed is adjusted by a single parameter, the ratio of determinants. By this approach an objective measure can be obtained for the design of a control system; the measure is determined by the system requirements.
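
    The weighting-matrix trade-off described above can be illustrated with a minimal LQR sketch. The two-state model, weights, and sweep below are invented for illustration, and SciPy's Riccati solver stands in for the paper's determinant-ratio criterion: scaling the control weight R moves the closed-loop poles between tighter regulation ("accuracy") and faster response.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical 2-state model (not the paper's helicopter dynamics).
A = np.array([[0.0, 1.0], [0.0, -0.5]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)                        # state weighting: penalizes tracking error

for rho in (0.1, 1.0, 10.0):         # control weighting: the trade-off knob
    R = rho * np.eye(1)
    P = solve_continuous_are(A, B, Q, R)   # algebraic Riccati equation
    K = np.linalg.solve(R, B.T @ P)        # optimal state-feedback gain, u = -Kx
    poles = np.linalg.eigvals(A - B @ K)
    print(f"rho={rho:5.1f}  K={K.ravel()}  closed-loop poles={poles}")
```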

  2. Further Development of an Optimal Design Approach Applied to Axial Magnetic Bearings

    NASA Technical Reports Server (NTRS)

    Bloodgood, V. Dale, Jr.; Groom, Nelson J.; Britcher, Colin P.

    2000-01-01

    Classical design methods for magnetic bearings and magnetic suspension systems have always had their limitations. Because of this, the overall effectiveness of a design has always relied heavily on the skill and experience of the individual designer. This paper combines two approaches that have been developed to improve the accuracy and efficiency of magnetostatic design. The first approach integrates classical magnetic circuit theory with modern optimization theory to increase design efficiency. The second approach uses loss factors to increase the accuracy of classical magnetic circuit theory. As an example, an axial magnetic thrust bearing is designed for minimum power.

  3. Macronutrient modifications of optimal foraging theory: an approach using indifference curves applied to some modern foragers

    SciTech Connect

    Hill, K.

    1988-06-01

    The use of energy (calories) as the currency to be maximized per unit time in Optimal Foraging Models is considered in light of data on several foraging groups. Observations on the Ache, Cuiva, and Yora foragers suggest men do not attempt to maximize energetic return rates, but instead often concentrate on acquiring meat resources which provide lower energetic returns. The possibility that this preference is due to the macronutrient composition of hunted and gathered foods is explored. Indifference curves are introduced as a means of modeling the tradeoff between two desirable commodities, meat (protein-lipid) and carbohydrate, and a specific indifference curve is derived using observed choices in five foraging situations. This curve is used to predict the amount of meat that Mbuti foragers will trade for carbohydrate, in an attempt to test the utility of the approach.
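
    The indifference-curve construction can be made concrete with a short sketch. The Cobb-Douglas utility and every number below are illustrative assumptions, not the curve the paper derives from observed foraging choices.

```python
import numpy as np

# Hypothetical utility over meat (protein-lipid) and carbohydrate bundles.
a = 0.6                                   # assumed preference weight on meat
U = lambda meat, carb: meat**a * carb**(1 - a)

u0 = U(2.0, 1.0)                          # observed bundle: 2 kg meat, 1 kg carbs

# Along the indifference curve U(meat, carb) = u0, solve for meat given carbs;
# the curve predicts how much meat a forager would trade for carbohydrate.
carb = np.linspace(0.5, 4.0, 8)
meat = (u0 / carb**(1 - a)) ** (1 / a)
for c, m in zip(carb, meat):
    print(f"carb={c:4.2f} kg -> indifferent at meat={m:4.2f} kg")
```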

  4. Optimal control of open quantum systems: a combined surrogate Hamiltonian optimal control theory approach applied to photochemistry on surfaces.

    PubMed

    Asplund, Erik; Klüner, Thorsten

    2012-03-28

    In this paper, control of open quantum systems with emphasis on the control of surface photochemical reactions is presented. A quantum system in a condensed phase undergoes strong dissipative processes. From a theoretical viewpoint, it is important to model such processes in a rigorous way. In this work, the description of open quantum systems is realized within the surrogate Hamiltonian approach [R. Baer and R. Kosloff, J. Chem. Phys. 106, 8862 (1997)]. An efficient and accurate method to find control fields is optimal control theory (OCT) [W. Zhu, J. Botina, and H. Rabitz, J. Chem. Phys. 108, 1953 (1998); Y. Ohtsuki, G. Turinici, and H. Rabitz, J. Chem. Phys. 120, 5509 (2004)]. To gain control of open quantum systems, the surrogate Hamiltonian approach and OCT, with time-dependent targets, are combined. Three open quantum systems are investigated by the combined method, a harmonic oscillator immersed in an ohmic bath, CO adsorbed on a platinum surface, and NO adsorbed on a nickel oxide surface. Throughout this paper, atomic units, i.e., ℏ = m_e = e = a_0 = 1, have been used unless otherwise stated.
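
    As a rough illustration of what OCT-style field optimization does (though not of the surrogate Hamiltonian machinery or time-dependent targets), the sketch below shapes a piecewise-constant field that drives a closed two-level system from |0> to |1>. The model, finite-difference gradient, and step size are all simplifying assumptions.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
H0, mu = 0.5 * sz, sx                     # drift Hamiltonian and dipole coupling
psi0 = np.array([1, 0], dtype=complex)    # initial state |0>
target = np.array([0, 1], dtype=complex)  # target state |1>
N, dt = 50, 0.2
eps = 0.1 * rng.standard_normal(N)        # piecewise-constant field (atomic units)

def transition_probability(field):
    psi = psi0
    for e in field:                       # propagate slice by slice
        psi = expm(-1j * (H0 + e * mu) * dt) @ psi
    return abs(target.conj() @ psi) ** 2

h, step = 1e-4, 2.0
for it in range(150):                     # crude gradient ascent on the field
    p0, grad = transition_probability(eps), np.zeros(N)
    for k in range(N):
        pert = eps.copy(); pert[k] += h
        grad[k] = (transition_probability(pert) - p0) / h
    eps += step * grad
print("final transition probability:", transition_probability(eps))
```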

  5. Optimal control of open quantum systems: A combined surrogate Hamiltonian optimal control theory approach applied to photochemistry on surfaces

    SciTech Connect

    Asplund, Erik; Kluener, Thorsten

    2012-03-28

    In this paper, control of open quantum systems with emphasis on the control of surface photochemical reactions is presented. A quantum system in a condensed phase undergoes strong dissipative processes. From a theoretical viewpoint, it is important to model such processes in a rigorous way. In this work, the description of open quantum systems is realized within the surrogate Hamiltonian approach [R. Baer and R. Kosloff, J. Chem. Phys. 106, 8862 (1997)]. An efficient and accurate method to find control fields is optimal control theory (OCT) [W. Zhu, J. Botina, and H. Rabitz, J. Chem. Phys. 108, 1953 (1998); Y. Ohtsuki, G. Turinici, and H. Rabitz, J. Chem. Phys. 120, 5509 (2004)]. To gain control of open quantum systems, the surrogate Hamiltonian approach and OCT, with time-dependent targets, are combined. Three open quantum systems are investigated by the combined method, a harmonic oscillator immersed in an ohmic bath, CO adsorbed on a platinum surface, and NO adsorbed on a nickel oxide surface. Throughout this paper, atomic units, i.e., ℏ = m_e = e = a_0 = 1, have been used unless otherwise stated.

  6. Augmented design and analysis of computer experiments: a novel tolerance embedded global optimization approach applied to SWIR hyperspectral illumination design.

    PubMed

    Keresztes, Janos C; Koshel, R John; D'huys, Karlien; De Ketelaere, Bart; Audenaert, Jan; Goos, Peter; Saeys, Wouter

    2016-12-26

    A novel meta-heuristic approach for minimizing nonlinear constrained problems is proposed, which offers tolerance information during the search for the global optimum. The method is based on the concept of design and analysis of computer experiments combined with a novel two-phase design augmentation (DACEDA), which models the entire merit space using a Gaussian process, with iteratively increased resolution around the optimum. The algorithm is introduced through a series of case studies with increasing complexity for optimizing the uniformity of a short-wave infrared (SWIR) hyperspectral imaging (HSI) illumination system (IS). The method is first demonstrated for a two-dimensional problem consisting of the positioning of analytical isotropic point sources. The method is further applied to two-dimensional (2D) and five-dimensional (5D) SWIR HSI IS versions using close- and far-field measured source models applied within the non-sequential ray-tracing software FRED, including inherent stochastic noise. The proposed method is compared to other heuristic approaches such as simplex and simulated annealing (SA). It is shown that DACEDA converges towards a minimum with a 1% improvement compared to simplex and SA and, more importantly, requires only half the number of simulations. Finally, a concurrent tolerance analysis is done within DACEDA for the five-dimensional case such that further simulations are not required.
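
    A minimal DACE-style surrogate loop conveys the core idea under strong simplifications: a Gaussian process models the merit space and sampling concentrates near the incumbent optimum. The one-dimensional stand-in merit function and lower-confidence-bound pick are assumptions, not the authors' two-phase DACEDA.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)
expensive = lambda x: np.sin(3 * x) + 0.1 * x**2     # stand-in merit function

X = rng.uniform(-3, 3, 6).reshape(-1, 1)             # initial design
y = expensive(X).ravel()
for it in range(15):
    gp = GaussianProcessRegressor(kernel=RBF(1.0), normalize_y=True).fit(X, y)
    cand = np.linspace(-3, 3, 400).reshape(-1, 1)
    mu, sigma = gp.predict(cand, return_std=True)
    x_new = cand[np.argmin(mu - sigma)]              # optimistic (exploratory) pick
    X = np.vstack([X, [x_new]])
    y = np.append(y, expensive(x_new))
print("best design point:", X[np.argmin(y)], "merit:", y.min())
```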

  7. Data Understanding Applied to Optimization

    NASA Technical Reports Server (NTRS)

    Buntine, Wray; Shilman, Michael

    1998-01-01

    The goal of this research is to explore and develop software for supporting visualization and data analysis of search and optimization. Optimization is an ever-present problem in science. The theory of NP-completeness implies that such problems can only be resolved by increasingly smarter problem-specific knowledge, possibly for use in some general purpose algorithms. Visualization and data analysis offer an opportunity to accelerate our understanding of key computational bottlenecks in optimization and to automatically tune aspects of the computation for specific problems. We will prototype systems to demonstrate how data understanding can be successfully applied to problems characteristic of NASA's key science optimization tasks, such as central tasks for parallel processing, spacecraft scheduling, and data transmission from a remote satellite.

  8. Solid mining residues from Ni extraction applied as nutrients supplier to anaerobic process: optimal dose approach through Taguchi's methodology.

    PubMed

    Pereda, I; Irusta, R; Montalvo, S; del Valle, J L

    2006-01-01

    The use of solid mining residues (Cola), which contain certain amounts of Ni, Fe and Co, to stimulate anaerobic processes was evaluated. The effect on methane production and chemical oxygen demand (COD) removal efficiency was analysed. The studies were carried out in discontinuous lab-scale reactors under mesophilic conditions until exhaustion. Doses of 0, 3, 5 and 7 mg Cola l(-1) were applied to synthetic wastewater. Volatile fatty acids (VFA) and sucrose were used as substrates, with sulphur and nitrogen concentrations as the noise variables. Cola addition at a dose of around 5 mg l(-1) proved stimulating for the anaerobic process; it was the factor that most influenced the methane production rate, together with VFA and a high content of volatile suspended solids. For methane yield, pH was the control factor with the strongest influence. Higher COD removal efficiencies were obtained when the reactors were operated with sucrose at relatively low pH and at the lowest concentrations of nitrogen and sulphur. The solid residue dose and the type of substrate were the factors with the most influence on COD removal efficiency.
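
    For readers unfamiliar with Taguchi's methodology, the sketch below runs a toy L4(2^3) orthogonal-array analysis with invented yields; the factor names, levels, and numbers are hypothetical, and only the larger-is-better signal-to-noise calculation follows the standard recipe.

```python
import numpy as np

L4 = np.array([[0, 0, 0],          # L4(2^3) orthogonal array:
               [0, 1, 1],          # each row is one experimental run,
               [1, 0, 1],          # each column one two-level factor
               [1, 1, 0]])
# invented methane yields, two replicates per run (noise-variable settings)
y = np.array([[0.42, 0.38], [0.55, 0.49], [0.61, 0.58], [0.47, 0.44]])

sn = -10 * np.log10(np.mean(1.0 / y**2, axis=1))   # larger-is-better S/N ratio
for f, name in enumerate(["dose", "substrate", "pH"]):
    lo, hi = sn[L4[:, f] == 0].mean(), sn[L4[:, f] == 1].mean()
    print(f"{name:9s} S/N effect: {hi - lo:+.2f} dB (prefer level {int(hi > lo)})")
```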

  9. Applying new optimization algorithms to model predictive control

    SciTech Connect

    Wright, S.J.

    1996-03-01

    The connections between optimization and control theory have been explored by many researchers and optimization algorithms have been applied with success to optimal control. The rapid pace of developments in model predictive control has given rise to a host of new problems to which optimization has yet to be applied. Concurrently, developments in optimization, and especially in interior-point methods, have produced a new set of algorithms that may be especially helpful in this context. In this paper, we reexamine the relatively simple problem of control of linear processes subject to quadratic objectives and general linear constraints. We show how new algorithms for quadratic programming can be applied efficiently to this problem. The approach extends to several more general problems in straightforward ways.
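
    The structure of the problem is easy to write down for a toy case: a quadratic objective summed over a prediction horizon, subject to linear input bounds. The double-integrator model and the generic SLSQP solver below are illustrative stand-ins for the specialized interior-point QP methods the paper develops.

```python
import numpy as np
from scipy.optimize import minimize

A = np.array([[1.0, 0.1], [0.0, 1.0]])    # hypothetical discrete double integrator
B = np.array([[0.005], [0.1]])
Q, r, N = np.diag([10.0, 1.0]), 0.1, 20   # state/input weights, horizon
x0 = np.array([1.0, 0.0])

def cost(u):
    x, J = x0, 0.0
    for uk in u:                          # roll the model forward
        x = A @ x + B.ravel() * uk
        J += x @ Q @ x + r * uk**2
    return J

res = minimize(cost, np.zeros(N), method="SLSQP",
               bounds=[(-1.0, 1.0)] * N)  # linear (box) input constraints
print("first control move:", res.x[0], "  optimal cost:", res.fun)
```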

  10. In silico optimization of pharmacokinetic properties and receptor binding affinity simultaneously: a 'parallel progression approach to drug design' applied to β-blockers.

    PubMed

    Advani, Poonam; Joseph, Blessy; Ambre, Premlata; Pissurlenkar, Raghuvir; Khedkar, Vijay; Iyer, Krishna; Gabhe, Satish; Iyer, Radhakrishnan P; Coutinho, Evans

    2016-01-01

    The present work exploits the potential of in silico approaches for minimizing attrition of leads in the later stages of drug development. We propose a theoretical approach, wherein 'parallel' information is generated to simultaneously optimize the pharmacokinetics (PK) and pharmacodynamics (PD) of lead candidates. β-blockers, though in use for many years, have suboptimal PKs; hence they are an ideal test series for the 'parallel progression approach'. This approach utilizes molecular modeling tools viz. hologram quantitative structure activity relationships, homology modeling, docking, predictive metabolism, and toxicity models. Validated models have been developed for PK parameters such as volume of distribution (log Vd) and clearance (log Cl), which together influence the half-life (t1/2) of a drug. Simultaneously, models for PD in terms of the inhibition constant pKi have been developed. Thus, PK and PD properties of β-blockers were concurrently analyzed and, after iterative cycling, modifications were proposed that lead to compounds with optimized PK and PD. We report some of the resultant re-engineered β-blockers with improved half-lives and pKi values comparable with marketed β-blockers. These were further analyzed by docking studies to evaluate their binding poses. Finally, metabolic and toxicological assessment of these molecules was done through in silico methods. The strategy proposed herein has potential universal applicability and can be used in any drug discovery scenario, provided that the data used are consistent in terms of experimental conditions, endpoints, and methods employed. Thus, the 'parallel progression approach' helps to simultaneously fine-tune various properties of the drug and would be an invaluable tool during the drug development process.

  11. Multidisciplinary optimization applied to a transport aircraft

    NASA Technical Reports Server (NTRS)

    Giles, G. L.; Wrenn, G. A.

    1984-01-01

    Decomposition of a large optimization problem into several smaller subproblems has been proposed as an approach to making large-scale optimization problems tractable. To date, the characteristics of this approach have been tested on problems of limited complexity. The objective of the effort is to demonstrate the application of this multilevel optimization method on a large-scale design study using analytical models comparable to those currently being used in the aircraft industry. The purpose of the design study which is underway to provide this demonstration is to generate a wing design for a transport aircraft which will perform a specified mission with minimum block fuel. A definition of the problem; a discussion of the multilevel decomposition which is used for an aircraft wing; descriptions of analysis and optimization procedures used at each level; and numerical results obtained to date are included. Computational times required to perform various steps in the process are also given. Finally, a summary of the current status and plans for continuation of this development effort are given.

  12. Computational methods applied to wind tunnel optimization

    NASA Astrophysics Data System (ADS)

    Lindsay, David

    This report describes computational methods developed for optimizing the nozzle of a three-dimensional subsonic wind tunnel. This requires determination of a shape that delivers flow to the test section, typically with a speed increase by a factor of 7 or more and a velocity uniformity of 0.25% or better, in a compact length without introducing boundary layer separation. The need for high precision, smooth solutions, and three-dimensional modeling required the development of special computational techniques. These include: (1) alternative formulations to Neumann and Dirichlet boundary conditions, to deal with overspecified, ill-posed, or cyclic problems, and to reduce the discrepancy between numerical solutions and boundary conditions; (2) modification of the Finite Element Method to obtain solutions with numerically exact conservation properties; (3) a Matlab implementation of general degree Finite Element solvers for various element designs in two and three dimensions, exploiting vector indexing to obtain optimal efficiency; (4) derivation of optimal quadrature formulas for integration over simplexes in two and three dimensions, and development of a program for semi-automated generation of formulas for any degree and dimension; (5) a modification of a two-dimensional boundary layer formulation to provide accurate flow conservation in three dimensions, and modification of the algorithm to improve stability; (6) development of multi-dimensional spline functions to achieve smoother solutions in three dimensions by post-processing, new three-dimensional elements for C1 basis functions, and a program to assist in the design of elements with higher continuity; and (7) a development of ellipsoidal harmonics and Lamé's equation, with generalization to any dimension and a demonstration that Cartesian, cylindrical, spherical, spheroidal, and sphero-conical harmonics are all limiting cases. The report includes a description of the Finite Difference, Finite Volume, and domain remapping

  13. Applying Research Evidence to Optimize Telehomecare

    PubMed Central

    Bowles, Kathryn H.; Baugh, Amy C.

    2010-01-01

    Telemedicine is the use of technology to provide healthcare over a distance. Telehomecare, a form of telemedicine based in the patient's home, is a communication and clinical information system that enables the interaction of voice, video, and health-related data using ordinary telephone lines. Most home care agencies are adopting telehomecare to assist with the care of the growing population of chronically ill adults. This article presents a summary and critique of the published empirical evidence about the effects of telehomecare on older adult patients with chronic illness. The knowledge gained will be applied in a discussion regarding telehomecare optimization and areas for future research. The referenced literature in PubMed, MEDLINE, CDSR, ACP Journal Club, DARE, CCTR, and CINAHL databases was searched for the years 1995–2005 using the keywords “telehomecare” and “telemedicine,” and limited to primary research and studies in English. Approximately 40 articles were reviewed. Articles were selected if telehealth technology with peripheral medical devices was used to deliver home care for adult patients with chronic illness. Studies where the intervention consisted of only telephone calls or did not involve video or in-person nurse contact in the home were excluded. Nineteen studies described the effects of telehomecare on adult patients, chronic illness outcomes, providers, and costs of care. Patients and providers were accepting of the technology and it appears to have positive effects on chronic illness outcomes such as self-management, rehospitalizations, and length of stay. Overall, due to savings from healthcare utilization and travel, telehomecare appears to reduce healthcare costs. Generally, studies have small sample sizes with diverse types and doses of telehomecare intervention for a select few chronic illnesses; most commonly heart failure. Very few published studies have explored the cost or quality implications since the change in home

  14. Applying Squeaky-Wheel Optimization to Schedule Airborne Astronomy Observations

    NASA Technical Reports Server (NTRS)

    Frank, Jeremy; Kuerklue, Elif

    2004-01-01

    We apply the Squeaky Wheel Optimization (SWO) algorithm to the problem of scheduling astronomy observations for the Stratospheric Observatory for Infrared Astronomy, an airborne observatory. The problem contains complex constraints relating the feasibility of an astronomical observation to the position and time at which the observation begins, telescope elevation limits, special use airspace, and available fuel. Solving the problem requires making discrete choices (e.g. selection and sequencing of observations) and continuous ones (e.g. takeoff time and setting up observations by repositioning the aircraft). The problem also includes optimization criteria such as maximizing observing time while simultaneously minimizing total flight time. Previous approaches to the problem fail to scale when accounting for all constraints. We describe how to customize SWO to solve this problem, and show that it finds better flight plans, often with less computation time, than previous approaches.
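
    The SWO construct/analyze/prioritize cycle is easy to sketch on a toy problem. The single-machine lateness model below is a hypothetical stand-in for the flight-planning domain: late jobs are the "squeaky wheels" and get promoted in the next construction pass.

```python
import numpy as np

rng = np.random.default_rng(2)
dur = rng.uniform(1, 5, 10)              # job durations
due = rng.uniform(5, 25, 10)             # due dates
key = due.copy()                         # start from earliest-due-date order

best = np.inf
for sweep in range(30):
    order = np.argsort(key)              # construct: schedule by priority key
    t, lateness = 0.0, np.zeros(10)
    for j in order:
        t += dur[j]
        lateness[j] = max(0.0, t - due[j])
    best = min(best, lateness.sum())     # analyze: total lateness of this pass
    key -= 2.0 * lateness                # prioritize: blamed jobs sort earlier
print("best total lateness found:", best)
```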

  15. Applying optimization software libraries to engineering problems

    NASA Technical Reports Server (NTRS)

    Healy, M. J.

    1984-01-01

    Nonlinear programming, preliminary design problems, performance simulation problems, trajectory optimization, flight computer optimization, and linear least squares problems are among the topics covered. The nonlinear programming applications encountered in a large aerospace company are a real challenge to those who provide mathematical software libraries and consultation services. Typical applications include preliminary design studies, data fitting and filtering, jet engine simulations, control system analysis, and trajectory optimization and optimal control. Problem sizes range from single-variable unconstrained minimization to constrained problems with highly nonlinear functions and hundreds of variables. Most of the applications can be posed as nonlinearly constrained minimization problems. Highly complex optimization problems with many variables were formulated in the early days of computing. At the time, many problems had to be reformulated or bypassed entirely, and solution methods often relied on problem-specific strategies. Problems with more than ten variables usually went unsolved.

  16. Optimization methods applied to hybrid vehicle design

    NASA Technical Reports Server (NTRS)

    Donoghue, J. F.; Burghart, J. H.

    1983-01-01

    The use of optimization methods as an effective design tool in the design of hybrid vehicle propulsion systems is demonstrated. Optimization techniques were used to select values for three design parameters (battery weight, heat engine power rating and power split between the two on-board energy sources) such that various measures of vehicle performance (acquisition cost, life cycle cost and petroleum consumption) were optimized. The approach produced designs which were often significant improvements over hybrid designs already reported in the literature. The principal conclusions are as follows. First, it was found that the strategy used to split the required power between the two on-board energy sources can have a significant effect on life cycle cost and petroleum consumption. Second, the optimization program should be constructed so that performance measures and design variables can be easily changed. Third, the vehicle simulation program has a significant effect on the computer run time of the overall optimization program; run time can be significantly reduced by proper design of the types of trips the vehicle takes in a one year period. Fourth, care must be taken in designing the cost and constraint expressions which are used in the optimization so that they are relatively smooth functions of the design variables. Fifth, proper handling of constraints on battery weight and heat engine rating, variables which must be large enough to meet power demands, is particularly important for the success of an optimization study. Finally, the principal conclusion is that optimization methods provide a practical tool for carrying out the design of a hybrid vehicle propulsion system.

  17. Cancer Behavior: An Optimal Control Approach

    PubMed Central

    Gutiérrez, Pedro J.; Russo, Irma H.; Russo, J.

    2009-01-01

    With special attention to cancer, this essay explains how Optimal Control Theory, mainly used in Economics, can be applied to the analysis of biological behaviors, and illustrates the ability of this mathematical branch to describe biological phenomena and biological interrelationships. Two examples are provided to show the capability and versatility of this powerful mathematical approach in the study of biological questions. The first describes a process of organogenesis, and the second the development of tumors. PMID:22247736

  18. Assessment of Optimal Interrogation Approaches

    DTIC Science & Technology

    2007-05-01

    In March 2006, the Department of Defense Polygraph Institute (DoDPI) [now the Defense Academy for Credibility Assessment (DACA)] Research Division requested research to determine the optimal approaches or techniques used by an interrogator. Specifically, DACA wanted the researchers to gather information from "expert" interrogators (referred to as "superior" interrogators

  19. Remediation Optimization: Definition, Scope and Approach

    EPA Pesticide Factsheets

    This document provides a general definition, scope and approach for conducting optimization reviews within the Superfund Program and includes the fundamental principles and themes common to optimization.

  20. A General Approach to Error Estimation and Optimized Experiment Design, Applied to Multislice Imaging of T1 in Human Brain at 4.1 T

    NASA Astrophysics Data System (ADS)

    Mason, Graeme F.; Chu, Wen-Jang; Hetherington, Hoby P.

    1997-05-01

    In this report, a procedure to optimize inversion-recovery times, in order to minimize the uncertainty in the measured T1 from 2-point multislice images of the human brain at 4.1 T, is discussed. The 2-point, 40-slice measurement employed inversion-recovery delays chosen based on the minimization of noise-based uncertainties. For comparison of the measured T1 values and uncertainties, 10-point, 3-slice measurements were also acquired. The measured T1 values using the 2-point method were 814, 1361, and 3386 ms for white matter, gray matter, and cerebrospinal fluid, respectively, in agreement with the respective T1 values of 817, 1329, and 3320 ms obtained using the 10-point measurement. The 2-point, 40-slice method was used to determine the T1 in the cortical gray matter, cerebellar gray matter, caudate nucleus, cerebral peduncle, globus pallidus, colliculus, lenticular nucleus, base of the pons, substantia nigra, thalamus, white matter, corpus callosum, and internal capsule.
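
    A minimal sketch of the 2-point idea under the standard inversion-recovery model S(TI) = S0(1 - 2 exp(-TI/T1)): two measurements determine the two unknowns S0 and T1. The delays and noiseless "measurements" below are invented, and the paper's optimization of the delays against noise is not reproduced.

```python
import numpy as np
from scipy.optimize import fsolve

TI = np.array([0.5, 1.8])                 # hypothetical inversion times (s)
T1_true, S0_true = 1.36, 100.0            # gray matter T1 ~ 1361 ms at 4.1 T
meas = S0_true * (1 - 2 * np.exp(-TI / T1_true))   # noiseless signals

def residuals(p):                         # two equations, two unknowns
    S0, T1 = p
    return S0 * (1 - 2 * np.exp(-TI / T1)) - meas

S0_fit, T1_fit = fsolve(residuals, x0=[80.0, 1.0])
print(f"recovered S0 = {S0_fit:.1f}, T1 = {T1_fit * 1000:.0f} ms")
```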

  1. HPC CLOUD APPLIED TO LATTICE OPTIMIZATION

    SciTech Connect

    Sun, Changchun; Nishimura, Hiroshi; James, Susan; Song, Kai; Muriki, Krishna; Qin, Yong

    2011-03-18

    As Cloud services gain in popularity for enterprise use, vendors are now turning their focus towards providing cloud services suitable for scientific computing. Recently, Amazon Elastic Compute Cloud (EC2) introduced the new Cluster Compute Instances (CCI), a new instance type specifically designed for High Performance Computing (HPC) applications. At Berkeley Lab, the physicists at the Advanced Light Source (ALS) have been running Lattice Optimization on a local cluster, but the queue wait time and the flexibility to request compute resources when needed are not ideal for rapid development work. To explore alternatives, for the first time we investigate running the Lattice Optimization application on Amazon's new CCI to demonstrate the feasibility and trade-offs of using public cloud services for science.

  2. Multiobjective Optimization Using a Pareto Differential Evolution Approach

    NASA Technical Reports Server (NTRS)

    Madavan, Nateri K.; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    Differential Evolution is a simple, fast, and robust evolutionary algorithm that has proven effective in determining the global optimum for several difficult single-objective optimization problems. In this paper, the Differential Evolution algorithm is extended to multiobjective optimization problems by using a Pareto-based approach. The algorithm performs well when applied to several test optimization problems from the literature.
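
    A minimal sketch of such an extension: standard DE mutation and crossover, with parent replacement gated by Pareto dominance instead of scalar fitness. The bi-objective test function and control parameters are illustrative assumptions, not the paper's exact selection scheme or benchmarks.

```python
import numpy as np

rng = np.random.default_rng(3)
def objectives(x):                        # simple convex bi-objective test problem
    return np.array([np.sum(x**2), np.sum((x - 2.0)**2)])

def dominates(fa, fb):                    # Pareto dominance: no worse, somewhere better
    return np.all(fa <= fb) and np.any(fa < fb)

NP, D, F, CR = 30, 5, 0.7, 0.9            # population, dimension, DE constants
pop = rng.uniform(-1, 3, (NP, D))
fit = np.array([objectives(p) for p in pop])
for gen in range(200):
    for i in range(NP):
        a, b, c = pop[rng.choice(NP, 3, replace=False)]
        mutant = a + F * (b - c)                      # DE/rand/1 mutation
        trial = np.where(rng.random(D) < CR, mutant, pop[i])
        ftrial = objectives(trial)
        if dominates(ftrial, fit[i]):                 # Pareto-based replacement
            pop[i], fit[i] = trial, ftrial
print("sample objective vectors:\n", fit[:3])
```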

  3. A sequential linear optimization approach for controller design

    NASA Technical Reports Server (NTRS)

    Horta, L. G.; Juang, J.-N.; Junkins, J. L.

    1985-01-01

    A linear optimization approach with a simple real arithmetic algorithm is presented for reliable controller design and vibration suppression of flexible structures. Using first order sensitivity of the system eigenvalues with respect to the design parameters in conjunction with a continuation procedure, the method converts a nonlinear optimization problem into a maximization problem with linear inequality constraints. The method of linear programming is then applied to solve the converted linear optimization problem. The general efficiency of the linear programming approach allows the method to handle structural optimization problems with a large number of inequality constraints on the design vector. The method is demonstrated using a truss beam finite element model for the optimal sizing and placement of active/passive structural members for damping augmentation. Results using both the sequential linear optimization approach and nonlinear optimization are presented and compared. The insensitivity to initial conditions of the linear optimization approach is also demonstrated.

  4. Portfolio optimization using median-variance approach

    NASA Astrophysics Data System (ADS)

    Wan Mohd, Wan Rosanisah; Mohamad, Daud; Mohamed, Zulkifli

    2013-04-01

    Optimization models have been applied in many decision-making problems, particularly in portfolio selection. Since the introduction of Markowitz's theory of portfolio selection, various approaches based on mathematical programming have been introduced, such as mean-variance, mean-absolute deviation, mean-variance-skewness and conditional value-at-risk (CVaR), mainly to maximize return and minimize risk. However, most of the approaches assume that the distribution of data is normal, and this is not generally true. As an alternative, in this paper, we employ the median-variance approach to improve portfolio optimization. This approach caters for both normal and non-normal distributions of data. With this representation, we analyze and compare the rate of return and risk between the mean-variance and the median-variance based portfolios, which consist of 30 stocks from Bursa Malaysia. The results in this study show that the median-variance approach is capable of producing a lower risk for each return earned as compared to the mean-variance approach.
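
    The comparison can be sketched with synthetic, deliberately non-normal returns. Random portfolio sampling below stands in for a proper optimizer, and all numbers are invented rather than Bursa Malaysia data.

```python
import numpy as np

rng = np.random.default_rng(4)
T, n = 500, 5
returns = rng.lognormal(0.0, 0.3, (T, n)) - 1.0   # skewed (non-normal) returns

best_mv = best_mdv = None
for _ in range(5000):
    w = rng.random(n); w /= w.sum()               # random long-only weights
    port = returns @ w
    mean, med, var = port.mean(), np.median(port), port.var()
    if best_mv is None or var - mean < best_mv[0]:
        best_mv = (var - mean, mean, var)         # mean-variance criterion
    if best_mdv is None or var - med < best_mdv[0]:
        best_mdv = (var - med, med, var)          # median-variance criterion
print("mean-variance pick:   return %.4f  risk %.4f" % best_mv[1:])
print("median-variance pick: return %.4f  risk %.4f" % best_mdv[1:])
```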

  5. Optimizing IT Infrastructure by Virtualization Approach

    NASA Astrophysics Data System (ADS)

    Budiman, Thomas; Suroso, Jarot S.

    2017-04-01

    The goal of this paper is to find the best potential configuration which can be applied to a physical server without compromising service performance for the clients. Data were compiled by direct observation in the data center observed and then analyzed using a hermeneutic approach to interpret the conditions described by the textual data gathered. The result is the best configuration for a physical server which contains several virtual machines logically separated by their functions. It can be concluded that one physical server machine can indeed be optimized using virtualization so that it delivers the peak performance of the machine itself, with impact throughout the organization.

  6. Applying a managerial approach to day surgery.

    PubMed

    Onetti, Alberto

    2008-01-01

    The present article explores the day surgery topic from a managerial perspective. If we assume such a perspective, day surgery can be considered a business model decision and not just a surgical procedure alternative to the traditional ones requiring patient hospitalization. In this article we highlight the main steps required to develop a strategic approach [Cotta Ramusino E, Onetti A. Strategia d'Impresa. Milano; Il Sole 24 Ore; Second Edition, 2007] at hospital level (Onetti A, Greulich A. Strategic management in hospitals: the balanced scorecard approach. Milano: Giuffé; 2003) and to make day surgery part of it. It means understanding: - how and when day surgery can improve the health care providers' overall performance, both in terms of clinical effectiveness and financial results, and - how to organize and integrate it with the other hospital activities in order to make it work. Approaching day surgery as a business model decision requires addressing a list of potential issues in advance and necessitates continued audit to verify the results. If this happens, day surgery can be both safe and cost effective and can impact positively on surgical patient satisfaction. We propose a sort of "check-up list" useful to hospital managers and doctors who are evaluating the option of introducing day surgery or are trying to optimize it.

  7. HEURISTIC OPTIMIZATION AND ALGORITHM TUNING APPLIED TO SORPTIVE BARRIER DESIGN

    EPA Science Inventory

    While heuristic optimization is applied in environmental applications, ad-hoc algorithm configuration is typical. We use a multi-layer sorptive barrier design problem as a benchmark for an algorithm-tuning procedure, as applied to three heuristics (genetic algorithms, simulated ...

  8. Genetic Algorithms Applied to Multi-Objective Aerodynamic Shape Optimization

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.

    2004-01-01

    A genetic algorithm approach suitable for solving multi-objective optimization problems is described and evaluated using a series of aerodynamic shape optimization problems. Several new features including two variations of a binning selection algorithm and a gene-space transformation procedure are included. The genetic algorithm is suitable for finding Pareto optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. A new masking array capability is included allowing any gene or gene subset to be eliminated as decision variables from the design space. This allows determination of the effect of a single gene or gene subset on the Pareto optimal solution. Results indicate that the genetic algorithm optimization approach is flexible in application and reliable. The binning selection algorithms generally provide Pareto front quality enhancements and moderate convergence efficiency improvements for most of the problems solved.

  9. Genetic Algorithms Applied to Multi-Objective Aerodynamic Shape Optimization

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.

    2005-01-01

    A genetic algorithm approach suitable for solving multi-objective problems is described and evaluated using a series of aerodynamic shape optimization problems. Several new features including two variations of a binning selection algorithm and a gene-space transformation procedure are included. The genetic algorithm is suitable for finding Pareto optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. A new masking array capability is included allowing any gene or gene subset to be eliminated as decision variables from the design space. This allows determination of the effect of a single gene or gene subset on the Pareto optimal solution. Results indicate that the genetic algorithm optimization approach is flexible in application and reliable. The binning selection algorithms generally provide Pareto front quality enhancements and moderate convergence efficiency improvements for most of the problems solved.

  10. Applying Mathematical Optimization Methods to an ACT-R Instance-Based Learning Model.

    PubMed

    Said, Nadia; Engelhart, Michael; Kirches, Christian; Körkel, Stefan; Holt, Daniel V

    2016-01-01

    Computational models of cognition provide an interface to connect advanced mathematical tools and methods to empirically supported theories of behavior in psychology, cognitive science, and neuroscience. In this article, we consider a computational model of instance-based learning, implemented in the ACT-R cognitive architecture. We propose an approach for obtaining mathematical reformulations of such cognitive models that improve their computational tractability. For the well-established Sugar Factory dynamic decision making task, we conduct a simulation study to analyze central model parameters. We show how mathematical optimization techniques can be applied to efficiently identify optimal parameter values with respect to different optimization goals. Beyond these methodological contributions, our analysis reveals the sensitivity of this particular task with respect to initial settings and yields new insights into how average human performance deviates from potential optimal performance. We conclude by discussing possible extensions of our approach as well as future steps towards applying more powerful derivative-based optimization methods.

  11. A Multiple Approach to Evaluating Applied Academics.

    ERIC Educational Resources Information Center

    Wang, Changhua; Owens, Thomas

    The Boeing Company is involved in partnerships with Washington state schools in the area of applied academics. Over the past 3 years, Boeing offered grants to 57 high schools to implement applied mathematics, applied communication, and principles of technology courses. Part 1 of this paper gives an overview of applied academics by examining what…

  12. Applying discursive approaches to health psychology.

    PubMed

    Seymour-Smith, Sarah

    2015-04-01

    The aim of this paper is to outline the contribution of two strands of discursive research, glossed as 'macro' and 'micro,' to the field of health psychology. A further goal is to highlight some contemporary debates in methodology associated with the use of interview data versus more naturalistic data in qualitative health research. Discursive approaches provide a way of analyzing talk as a social practice that considers how descriptions are put together and what actions they achieve. A selection of recent examples of discursive research from one applied area of health psychology, studies of diet and obesity, are drawn upon in order to illustrate the specifics of both strands. 'Macro' discourse work in psychology incorporates a Foucauldian focus on the way that discourses regulate subjectivities, whereas the concept of interpretative repertoires affords more agency to the individual: both are useful for identifying the cultural context of talk. Both 'macro' and 'micro' strands focus on accountability to varying degrees. 'Micro' Discursive Psychology, however, pays closer attention to the sequential organization of constructions and focuses on naturalistic settings that allow for the inclusion of an analysis of the health professional. Diets are typically depicted as an individual responsibility in mainstream health psychology, but discursive research highlights how discourses are collectively produced and bound up with social practices.

  13. Probabilistic-based approach to optimal filtering

    PubMed

    Hannachi

    2000-04-01

    The signal-to-noise ratio maximizing approach in optimal filtering provides a robust tool to detect signals in the presence of colored noise. The method fails, however, when the data present a regimelike behavior. An approach is developed in this manuscript to recover local (in phase space) behavior in an intermittent regimelike behaving system. The method is first formulated in its general form within a Gaussian framework, given an estimate of the noise covariance, and demands that the signal corresponds to minimizing the noise probability distribution for any given value, i.e., on isosurfaces, of the data probability distribution. The extension to the non-Gaussian case is provided through the use of finite mixture models for data that show regimelike behavior. The method yields the correct signal when applied in a simplified manner to synthetic time series with and without regimes, compared to the signal-to-noise ratio approach, and helps identify the right frequency of the oscillation spells in the classical and variants of the Lorenz system.

  14. A Unified Approach to Optimization

    DTIC Science & Technology

    2014-10-02

    and dynamic programming, logic-based Benders decomposition, and unification of exact and heuristic methods. The publications associated with this ... Logic-based Benders decomposition (LBBD) has been used for some years to combine CP and MIP, usually by solving the ... classical Benders decomposition, but can be any optimization problem. Benders cuts are generated by solving the inference dual of the subproblem

  15. Optimal control theory applied to fusion plasma thermal stabilization

    SciTech Connect

    Sager, G.; Maya, I.; Miley, G.H.

    1985-07-01

    Optimal control theory is applied to determine feedback control for thermal stability of a driven, subignition tokamak controlled by fuel injection and additional heating. It was found that the simplifications of the plasma burn dynamics and the control figure of merit required for the synthesis of optimal feedback laws were valid. Control laws were determined which allowed thermal stability in plasmas subject to a 10% offset in temperature. The minimum ignition margin (defined as the difference between the ignition temperature and the subignition operating point) was found to be 0.95 keV, corresponding to steady state heating requirements of less than 2% of fusion power.

  16. Optimization of coupled systems: A critical overview of approaches

    NASA Technical Reports Server (NTRS)

    Balling, R. J.; Sobieszczanski-Sobieski, J.

    1994-01-01

    A unified overview is given of problem formulation approaches for the optimization of multidisciplinary coupled systems. The overview includes six fundamental approaches upon which a large number of variations may be made. Consistent approach names and a compact approach notation are given. The approaches are formulated to apply to general nonhierarchic systems. The approaches are compared both from a computational viewpoint and a managerial viewpoint. Opportunities for parallelism of both computation and manpower resources are discussed. Recommendations regarding the need for future research are advanced.

  17. Applying SF-Based Genre Approaches to English Writing Class

    ERIC Educational Resources Information Center

    Wu, Yan; Dong, Hailin

    2009-01-01

    By exploring genre approaches in systemic functional linguistics and examining the analytic tools that can be applied to the process of English learning and teaching, this paper seeks to find a way of applying genre approaches to English writing class.

  18. Optimal design of one-dimensional photonic crystal filters using minimax optimization approach.

    PubMed

    Hassan, Abdel-Karim S O; Mohamed, Ahmed S A; Maghrabi, Mahmoud M T; Rafat, Nadia H

    2015-02-20

    In this paper, we introduce a simulation-driven optimization approach for achieving the optimal design of electromagnetic wave (EMW) filters consisting of one-dimensional (1D) multilayer photonic crystal (PC) structures. The PC layers' thicknesses and/or material types are considered as designable parameters. The optimal design problem is formulated as a minimax optimization problem that is entirely solved by making use of readily available software tools. The proposed approach allows for the consideration of problems of higher dimension than usually treated before. In addition, it can proceed starting from bad initial design points. The validity, flexibility, and efficiency of the proposed approach are demonstrated by applying it to obtain the optimal design of two practical examples. The first is a (SiC/Ag/SiO2)^N wide bandpass optical filter operating in the visible range. The second example is an (Ag/SiO2)^N EMW low-pass spectral filter, working in the infrared range, which is used for enhancing the efficiency of thermophotovoltaic systems. The approach shows a good ability to converge to the optimal solution, for different design specifications, regardless of the starting design point. This ensures that the approach is robust and general enough to be applied to the optimal design of other promising 1D photonic crystal applications.
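
    The minimax formulation itself is compact: introduce an epigraph variable t, minimize t, and constrain every pointwise response error by t. The two-parameter stand-in response below is a hypothetical toy, not a transfer-matrix model of a multilayer stack.

```python
import numpy as np
from scipy.optimize import minimize

freq = np.linspace(0.0, 1.0, 40)               # normalized frequency grid
target = (freq > 0.5).astype(float)            # idealized band edge
model = lambda p: 1.0 / (1.0 + np.exp(-(freq - p[0]) / p[1]))  # toy response

err = lambda z: np.abs(model(z[:2]) - target)  # pointwise design errors
cons = {"type": "ineq", "fun": lambda z: z[2] - err(z)}        # t >= every error

res = minimize(lambda z: z[2],                 # minimize the worst-case error t
               x0=[0.4, 0.1, 1.0], method="SLSQP", constraints=[cons],
               bounds=[(0, 1), (0.01, 1), (0, None)])
print("edge %.3f, width %.3f, worst error %.3f" % tuple(res.x))
```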

  19. Applying fuzzy clustering optimization algorithm to extracting traffic spatial pattern

    NASA Astrophysics Data System (ADS)

    Hu, Chunchun; Shi, Wenzhong; Meng, Lingkui; Liu, Min

    2009-10-01

    Traditional analytical methods for traffic information cannot meet the needs of intelligent traffic systems. Mining value-added information can address more traffic problems. This paper exploits a new clustering optimization algorithm to extract useful spatial clustered patterns for predicting long-term traffic flow from a macroscopic view. Because the FCM algorithm is sensitive to its initial parameters and easily falls into local extrema, the new algorithm applies the Particle Swarm Optimization method, which can discover the globally optimal result, to the FCM algorithm. The algorithm uses the combination of a clustering validity index and the objective function of the FCM algorithm as the fitness function of the PSO algorithm. The experimental results indicate that it is effective and efficient. For fuzzy clustering of road traffic data, it can produce useful spatial clustered patterns, and the cluster centers represent the locations that have heavy traffic flow. Moreover, the parameters of the patterns can provide intelligent traffic systems with assistant decision support.
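
    A compact sketch of the combination: particles encode candidate cluster centers, and the fitness is the FCM objective computed from the memberships those centers induce. The synthetic 2-D data are invented, and the validity-index term the paper folds into the fitness is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0, 0.3, (50, 2)),    # two synthetic "traffic" clusters
               rng.normal(2, 0.3, (50, 2))])
c, m = 2, 2.0                                  # cluster count, fuzzifier

def fcm_objective(flat_centers):
    v = flat_centers.reshape(c, 2)
    d = np.linalg.norm(X[:, None, :] - v[None, :, :], axis=2) + 1e-12
    u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
    return np.sum(u**m * d**2)                 # standard FCM objective

# Plain global-best PSO over the flattened center coordinates.
P, dim, w, c1, c2 = 20, c * 2, 0.7, 1.5, 1.5
pos = rng.uniform(-1, 3, (P, dim)); vel = np.zeros((P, dim))
pbest, pval = pos.copy(), np.array([fcm_objective(p) for p in pos])
g = pbest[np.argmin(pval)]
for it in range(100):
    vel = (w * vel + c1 * rng.random((P, dim)) * (pbest - pos)
                   + c2 * rng.random((P, dim)) * (g - pos))
    pos = pos + vel
    val = np.array([fcm_objective(p) for p in pos])
    better = val < pval
    pbest[better], pval[better] = pos[better], val[better]
    g = pbest[np.argmin(pval)]
print("recovered cluster centers:\n", g.reshape(c, 2))
```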

  1. Quantum optimal control theory applied to transitions in diatomic molecules

    NASA Astrophysics Data System (ADS)

    Lysebo, Marius; Veseth, Leif

    2014-12-01

    Quantum optimal control theory is applied to control electric dipole transitions in a real multilevel system. The specific system studied in the present work is comprised of a multitude of hyperfine levels in the electronic ground state of the OH molecule. Spectroscopic constants are used to obtain accurate energy eigenstates and electric dipole matrix elements. The goal is to calculate the optimal time-dependent electric field that yields a maximum of the transition probability for a specified initial and final state. A further important objective was to study the detailed quantum processes that take place during such a prescribed transition in a multilevel system. Two specific transitions are studied in detail. The computed optimal electric fields as well as the paths taken through the multitude of levels reveal quite interesting quantum phenomena.

  2. An effective model for ergonomic optimization applied to a new automotive assembly line

    SciTech Connect

    Duraccio, Vincenzo; Elia, Valerio; Forcina, Antonio

    2016-06-08

    An efficient ergonomic optimization can lead to a significant improvement in production performance and a considerable reduction of costs. In the present paper a new model for ergonomic optimization is proposed. The new approach is based on the criteria defined by the National Institute for Occupational Safety and Health, adapted to Italian legislation. The proposed model provides an ergonomic optimization by analyzing the ergonomic relations of manual work under correct conditions. The model includes a schematic and systematic analysis method for the operations, and identifies all possible ergonomic aspects to be evaluated. The proposed approach has been applied to an automotive assembly line, where the repeatability of operations makes optimization fundamental. The application clearly demonstrates the effectiveness of the new approach.

  3. An effective model for ergonomic optimization applied to a new automotive assembly line

    NASA Astrophysics Data System (ADS)

    Duraccio, Vincenzo; Elia, Valerio; Forcina, Antonio

    2016-06-01

    An efficient ergonomic optimization can lead to a significant improvement in production performance and a considerable reduction of costs. In the present paper a new model for ergonomic optimization is proposed. The new approach is based on the criteria defined by the National Institute for Occupational Safety and Health, adapted to Italian legislation. The proposed model provides an ergonomic optimization by analyzing the ergonomic relations of manual work under correct conditions. The model includes a schematic and systematic analysis method for the operations, and identifies all possible ergonomic aspects to be evaluated. The proposed approach has been applied to an automotive assembly line, where the repeatability of operations makes optimization fundamental. The application clearly demonstrates the effectiveness of the new approach.

  4. Tennis Coaching: Applying the Game Sense Approach

    ERIC Educational Resources Information Center

    Pill, Shane; Hewitt, Mitchell

    2017-01-01

    This article demonstrates the game sense approach for teaching tennis to novice players. In a game sense approach, learning is positioned within modified games to emphasize the way rules shape game behavior, tactical awareness, decision-making and the development of contextualized stroke mechanics.

  5. Approaches for Informing Optimal Dose of Behavioral Interventions

    PubMed Central

    King, Heather A.; Maciejewski, Matthew L.; Allen, Kelli D.; Yancy, William S.; Shaffer, Jonathan A.

    2015-01-01

    Background There is little guidance about how to select dose parameter values when designing behavioral interventions. Purpose The purpose of this study is to present approaches to inform intervention duration, frequency, and amount when (1) the investigator has no a priori expectation and is seeking a descriptive approach for identifying and narrowing the universe of dose values or (2) the investigator has an a priori expectation and is seeking validation of this expectation using an inferential approach. Methods Strengths and weaknesses of various approaches are described and illustrated with examples. Results Descriptive approaches include retrospective analysis of data from randomized trials, assessment of perceived optimal dose via prospective surveys or interviews of key stakeholders, and assessment of target patient behavior via prospective, longitudinal, observational studies. Inferential approaches include nonrandomized, early-phase trials and randomized designs. Conclusions By utilizing these approaches, researchers may more efficiently apply resources to identify the optimal values of dose parameters for behavioral interventions. PMID:24722964

  6. Applying a gaming approach to IP strategy.

    PubMed

    Gasnier, Arnaud; Vandamme, Luc

    2010-02-01

    Adopting an appropriate IP strategy is an important but complex area, particularly in the pharmaceutical and biotechnology sectors, in which aspects such as regulatory submissions, high competitive activity, and public health and safety information requirements limit the amount of information that can be protected effectively through secrecy. As a result, and considering the existing time limits for patent protection, decisions on how to approach IP in these sectors must be made with knowledge of the options and consequences of IP positioning. Because of the specialized nature of IP, it is necessary to impart knowledge regarding the options and impact of IP to decision-makers, whether at the level of inventors, marketers or strategic business managers. This feature review provides some insight on IP strategy, with a focus on the use of a new 'gaming' approach for transferring the skills and understanding needed to make informed IP-related decisions; the game Patentopolis is discussed as an example of such an approach. Patentopolis involves interactive activities with IP-related business decisions, including the exploitation and enforcement of IP rights, and can be used to gain knowledge on the impact of adopting different IP strategies.

  7. Applied topology optimization of vibro-acoustic hearing instrument models

    NASA Astrophysics Data System (ADS)

    Søndergaard, Morten Birkmose; Pedersen, Claus B. W.

    2014-02-01

    Designing hearing instruments remains an acoustic challenge as users request small designs for comfortable wear and cosmetic appeal while at the same time requiring sufficient amplification from the device. First, to ensure proper amplification in the device, a critical design challenge in the hearing instrument is to minimize the feedback between the outputs (generated sound and vibrations) from the receiver looping back into the microphones. Second, the feedback signal is conventionally minimized using time-consuming trial-and-error design procedures for physical prototypes and virtual models using finite element analysis. In the present work it is demonstrated that structural topology optimization of vibro-acoustic finite element models can be used both to sufficiently minimize the feedback signal and to reduce the time-consuming trial-and-error design approach. The structural topology optimization of a vibro-acoustic finite element model is shown for an industrial full-scale model hearing instrument.

  8. Applying EGO to large dimensional optimizations: a wideband fragmented patch example

    NASA Astrophysics Data System (ADS)

    O'Donnell, Teresa H.; Southall, Hugh; Santarelli, Scott; Steyskal, Hans

    2010-04-01

    Efficient Global Optimization (EGO) minimizes expensive cost function evaluations by correlating evaluated parameter sets and respective solutions to model the optimization space. For optimizations requiring destructive testing or lengthy simulations, this computational overhead represents a desirable tradeoff. However, the inspection of the predictor space to determine the next evaluation point can be a time-intensive operation. Although DACE predictor evaluation may be conducted for limited parameters by exhaustive sampling, this method is not extendable to large dimensions. We apply EGO here to the 11-dimensional optimization of a wide-band fragmented patch antenna and present an alternative genetic algorithm approach for selecting the next evaluation point. We compare results achieved with EGO on this optimization problem to previous results achieved with a genetic algorithm.
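
    The core loop with a non-exhaustive inner search is short to sketch: fit a Gaussian process to the evaluated points, then maximize expected improvement with a small evolutionary search instead of exhaustive sampling. The 4-D quadratic stand-in objective is a hypothetical substitute for the fragmented-patch simulation.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(6)
f = lambda x: np.sum((x - 0.3) ** 2, axis=-1)    # cheap stand-in "expensive" cost

X = rng.random((8, 4)); y = f(X)                 # initial design in [0, 1]^4
for it in range(20):
    gp = GaussianProcessRegressor(Matern(nu=2.5), normalize_y=True).fit(X, y)

    def ei(pts):                                 # expected improvement (minimization)
        mu, s = gp.predict(pts, return_std=True)
        z = (y.min() - mu) / np.maximum(s, 1e-12)
        return (y.min() - mu) * norm.cdf(z) + s * norm.pdf(z)

    pop = rng.random((40, 4))                    # crude evolutionary EI search
    for gen in range(15):
        kids = np.clip(pop + rng.normal(0, 0.1, pop.shape), 0, 1)
        both = np.vstack([pop, kids])
        pop = both[np.argsort(-ei(both))[:40]]
    X = np.vstack([X, pop[0]]); y = np.append(y, f(pop[0]))
print("best point:", X[np.argmin(y)], " cost:", y.min())
```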

  9. Optimal online learning: a Bayesian approach

    NASA Astrophysics Data System (ADS)

    Solla, Sara A.; Winther, Ole

    1999-09-01

    A recently proposed Bayesian approach to online learning is applied to learning a rule defined as a noisy single layer perceptron. In the Bayesian online approach, the exact posterior distribution is approximated by a simple parametric posterior that is updated online as new examples are incorporated to the dataset. In the case of binary weights, the approximate posterior is chosen to be a biased binary distribution. The resulting online algorithm is shown to outperform several other online approaches to this problem.

  10. The Optimal Treatment Approach to Needs Assessment.

    ERIC Educational Resources Information Center

    Cox, Gary B.; And Others

    1979-01-01

    The Optimal Treatment approach to needs assessment consists of comparing the most desirable set of services for a client with the services actually received. Discrepancies due to unavailable resources are noted and aggregated across clients. Advantages and disadvantages of this and other needs assessment procedures are considered. (Author/RL)

  11. Quantum Resonance Approach to Combinatorial Optimization

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    1997-01-01

    It is shown that quantum resonance can be used for combinatorial optimization. The advantage of the approach is the independence of the computing time from the dimensionality of the problem. As an example, the solution to a constraint satisfaction problem of exponential complexity is demonstrated.

  12. Reconstruction-model-based snake applied to optimal motion contour positioning

    NASA Astrophysics Data System (ADS)

    Sanson, Henri

    1995-04-01

    This paper addresses the problem of optimal positioning of a contour separating two moving regions using snake concepts. After a brief review of classical snake methodology, an alternative approach is proposed, based on a reconstruction criterion for the regions delimited by the curve and the use of parametric modeling of both the region textures and boundaries. A generic adaptive-step gradient algorithm is formulated for solving the curve evolution problem, independently of the models used. The method is then more specifically applied to motion boundary localization, where the texture of mobile regions is reconstructed by motion compensation, using polynomial motion models. The generic optimization algorithm is applied to motion frontiers defined by B-spline curves. A detailed implementation of this method in this particular case is described, and considerations about its behavior are given. Some experimental results are finally reported, attesting to the interest of the proposed approach.

  13. Preconcentration modeling for the optimization of a micro gas preconcentrator applied to environmental monitoring.

    PubMed

    Camara, Malick; Breuil, Philippe; Briand, Danick; Viricelle, Jean-Paul; Pijolat, Christophe; de Rooij, Nico F

    2015-04-21

    This paper presents the optimization of a micro gas preconcentrator (μ-GP) system applied to atmospheric pollution monitoring, with the help of a complete model of the preconcentration cycle. Two different approaches based on kinetic equations are used to illustrate the behavior of the micro gas preconcentrator under given experimental conditions. The need for a high adsorption flow and heating rate and for a low desorption flow and detection volume is demonstrated. As a preliminary to this optimization, the preconcentration factor is discussed and a definition is proposed.

  14. Optimal cooperative control synthesis applied to a control-configured aircraft

    NASA Technical Reports Server (NTRS)

    Schmidt, D. K.; Innocenti, M.

    1984-01-01

    A multivariable control augmentation synthesis method is presented that is intended to enable the designer to directly optimize pilot opinion rating of the augmented system. The approach involves the simultaneous solution for the augmentation and predicted pilot's compensation via optimal control techniques. The methodology is applied to the control law synthesis for a vehicle similar to the AFTI F16 control-configured aircraft. The resulting dynamics, expressed in terms of eigenstructure and time/frequency responses, are presented with analytical predictions of closed loop tracking performance, pilot compensation, and other predictors of pilot acceptance.

  15. Applying Loop Optimizations to Object-oriented Abstractions Through General Classification of Array Semantics

    SciTech Connect

    Yi, Q; Quinlan, D

    2004-03-05

    Optimizing compilers have a long history of applying loop transformations to C and Fortran scientific applications. However, such optimizations are rare in compilers for object-oriented languages such as C++ or Java, where loops operating on user-defined types are left unoptimized due to their unknown semantics. Our goal is to reduce the performance penalty of using high-level object-oriented abstractions. We propose an approach that allows explicit communication between programmers and compilers. We have extended the traditional Fortran loop optimizations with an open interface, through which we have developed techniques to automatically recognize and optimize user-defined array abstractions. In addition, we have developed an adapted constant-propagation algorithm to automatically propagate properties of abstractions. We have implemented these techniques in a C++ source-to-source translator and have applied them to optimize several kernels written using an array-class library. Our experimental results show that, using our approach, applications written with high-level abstractions can achieve performance comparable, and in some cases superior, to that achieved by efficient low-level hand-written codes.

  16. Multidisciplinary Approach to Linear Aerospike Nozzle Optimization

    NASA Technical Reports Server (NTRS)

    Korte, J. J.; Salas, A. O.; Dunn, H. J.; Alexandrov, N. M.; Follett, W. W.; Orient, G. E.; Hadid, A. H.

    1997-01-01

    A model of a linear aerospike rocket nozzle that consists of coupled aerodynamic and structural analyses has been developed. A nonlinear computational fluid dynamics code is used to calculate the aerodynamic thrust, and a three-dimensional finite-element model is used to determine the structural response and weight. The model will be used to demonstrate multidisciplinary design optimization (MDO) capabilities for relevant engine concepts, assess the performance of various MDO approaches, and provide a guide for future application development. In this study, the MDO problem is formulated using the multidisciplinary feasible (MDF) strategy. The results for the MDF formulation are presented with comparisons against sequentially optimized aerodynamic and structural designs. Significant improvements are demonstrated by using a multidisciplinary approach in comparison with the single-discipline design strategy.

  17. Global dynamic optimization approach to predict activation in metabolic pathways.

    PubMed

    de Hijas-Liste, Gundián M; Klipp, Edda; Balsa-Canto, Eva; Banga, Julio R

    2014-01-06

    During the last decade, a number of authors have shown that the genetic regulation of metabolic networks may follow optimality principles. Optimal control theory has been successfully used to compute optimal enzyme profiles considering simple metabolic pathways. However, applying this optimal control framework to more general networks (e.g. branched networks, or networks incorporating enzyme production dynamics) yields problems that are analytically intractable and/or numerically very challenging. Further, these previous studies have only considered a single-objective framework. In this work we consider a more general multi-objective formulation and we present solutions based on recent developments in global dynamic optimization techniques. We illustrate the performance and capabilities of these techniques considering two sets of problems. First, we consider a set of single-objective examples of increasing complexity taken from the recent literature. We analyze the multimodal character of the associated nonlinear optimization problems, and we also evaluate different global optimization approaches in terms of numerical robustness, efficiency and scalability. Second, we consider generalized multi-objective formulations for several examples, and we show how this framework results in more biologically meaningful results. The proposed strategy was used to solve a set of single-objective case studies related to unbranched and branched metabolic networks of different levels of complexity. All problems were successfully solved in reasonable computation times with our global dynamic optimization approach, reaching solutions which were comparable to or better than those reported in the previous literature. Further, we considered, for the first time, multi-objective formulations, illustrating how activation in metabolic pathways can be explained in terms of the best trade-offs between conflicting objectives. This new methodology can be applied to metabolic networks with arbitrary

  18. Global dynamic optimization approach to predict activation in metabolic pathways

    PubMed Central

    2014-01-01

    Background During the last decade, a number of authors have shown that the genetic regulation of metabolic networks may follow optimality principles. Optimal control theory has been successfully used to compute optimal enzyme profiles considering simple metabolic pathways. However, applying this optimal control framework to more general networks (e.g. branched networks, or networks incorporating enzyme production dynamics) yields problems that are analytically intractable and/or numerically very challenging. Further, these previous studies have only considered a single-objective framework. Results In this work we consider a more general multi-objective formulation and we present solutions based on recent developments in global dynamic optimization techniques. We illustrate the performance and capabilities of these techniques considering two sets of problems. First, we consider a set of single-objective examples of increasing complexity taken from the recent literature. We analyze the multimodal character of the associated nonlinear optimization problems, and we also evaluate different global optimization approaches in terms of numerical robustness, efficiency and scalability. Second, we consider generalized multi-objective formulations for several examples, and we show how this framework results in more biologically meaningful results. Conclusions The proposed strategy was used to solve a set of single-objective case studies related to unbranched and branched metabolic networks of different levels of complexity. All problems were successfully solved in reasonable computation times with our global dynamic optimization approach, reaching solutions which were comparable to or better than those reported in the previous literature. Further, we considered, for the first time, multi-objective formulations, illustrating how activation in metabolic pathways can be explained in terms of the best trade-offs between conflicting objectives. This new methodology can be applied to

  19. Applying Soft Arc Consistency to Distributed Constraint Optimization Problems

    NASA Astrophysics Data System (ADS)

    Matsui, Toshihiro; Silaghi, Marius C.; Hirayama, Katsutoshi; Yokoo, Makoto; Matsuo, Hiroshi

    The Distributed Constraint Optimization Problem (DCOP) is a fundamental framework of multi-agent systems. With DCOPs, a multi-agent system is represented as a set of variables and a set of constraints/cost functions. Distributed task scheduling and distributed resource allocation can be formalized as DCOPs. In this paper, we propose an efficient method that applies directed soft arc consistency to a DCOP. In particular, we focus on DCOP solvers that employ pseudo-trees. A pseudo-tree is a graph structure for a constraint network that represents a partial ordering of variables. Some pseudo-tree-based search algorithms perform optimistic searches using explicit/implicit backtracking in parallel. However, for cost functions taking a wide range of cost values, such exact algorithms require many search iterations. Therefore, additional improvements are necessary to reduce the number of search iterations. A previous study used a dynamic programming-based preprocessing technique that estimates the lower bound values of costs. However, there are opportunities for further improvements in efficiency. In addition, modifications of the search algorithm are necessary to use the estimated lower bounds. The proposed method applies soft arc consistency (soft AC) enforcement to DCOP. In the proposed method, directed soft AC is performed based on a pseudo-tree in a bottom-up manner. Using the directed soft AC, the global lower bound value of the cost functions is passed up to the root node of the pseudo-tree. It also reduces the values of the binary cost functions. As a result, the original problem is converted to an equivalent problem that is efficiently solved using common search algorithms; therefore, no major modifications are necessary in the search algorithms. The performance of the proposed method is evaluated by experimentation. The results show that it is more efficient than previous methods.

  20. Optimization of a Solar Photovoltaic Applied to Greenhouses

    NASA Astrophysics Data System (ADS)

    Nakoul, Z.; Bibi-Triki, N.; Kherrous, A.; Bessenouci, M. Z.; Khelladi, S.

    Global energy consumption, including in our country, is increasing. The bulk of the world's energy comes from fossil fuels, whose reserves are doomed to exhaustion and which are the leading cause of pollution and global warming through the greenhouse effect. This is not the case for renewable energies, which are inexhaustible and derived from natural phenomena. For years, solar energy has unanimously ranked first among renewable energies. The study of the energetic aspects of a solar power plant is the best way to find the optimum of its performance. A study in the field at real dimensions requires a long time and is therefore very costly, and moreover its results are not always generalizable. To avoid these drawbacks we opted for a computer-based study only, using the software 'Matlab' to model the different components for better sizing and to simulate all energies in order to optimize profitability while taking cost into account. The result of our work, applied to the sites of Tlemcen and Bouzareah, led us to conclude that the required energy is a determining factor in the choice of components of a PV solar power plant.

  1. A global optimization approach to multi-polarity sentiment analysis.

    PubMed

    Li, Xinmiao; Li, Jing; Wu, Yukeng

    2015-01-01

    Following the rapid development of social media, sentiment analysis has become an important social media mining technique. The performance of automatic sentiment analysis primarily depends on feature selection and sentiment classification. While information gain (IG) and support vector machines (SVM) are two important techniques, few studies have optimized both approaches in sentiment analysis. The effectiveness of applying a global optimization approach to sentiment analysis remains unclear. We propose a global optimization-based sentiment analysis (PSOGO-Senti) approach to improve sentiment analysis with IG for feature selection and SVM as the learning engine. The PSOGO-Senti approach utilizes a particle swarm optimization algorithm to obtain a global optimal combination of feature dimensions and parameters in the SVM. We evaluate the PSOGO-Senti model on two datasets from different fields. The experimental results showed that the PSOGO-Senti model can improve binary and multi-polarity Chinese sentiment analysis. We compared the optimal feature subset selected by PSOGO-Senti with the features in the sentiment dictionary. The results of this comparison indicated that PSOGO-Senti can effectively remove redundant and noisy features and can select a domain-specific feature subset with higher explanatory power for a particular sentiment analysis task. The experimental results showed that the PSOGO-Senti approach is effective and robust for sentiment analysis tasks in different domains. By comparing the improvements of two-polarity, three-polarity and five-polarity sentiment analysis results, we found that the five-polarity sentiment analysis delivered the largest improvement, while the improvement of the two-polarity sentiment analysis was the smallest. We conclude that PSOGO-Senti achieves higher improvement for a more complicated sentiment analysis task. We also compared the results of PSOGO-Senti with those of the genetic algorithm (GA) and grid search method. From
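
    A minimal sketch of the kind of search PSOGO-Senti performs: each particle encodes a feature-subset size together with the SVM parameters C and gamma, and fitness is cross-validated accuracy on that feature subset. Mutual information stands in for IG, synthetic data replaces the sentiment corpora, and the inertia/acceleration constants are generic PSO defaults rather than the paper's settings.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import mutual_info_classif
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    def fitness(p, X, y, order):
        k = int(np.clip(p[0], 1, X.shape[1]))          # number of top-ranked features kept
        C, gamma = 10.0 ** p[1], 10.0 ** p[2]          # log-scale SVM parameters
        return cross_val_score(SVC(C=C, gamma=gamma), X[:, order[:k]], y, cv=3).mean()

    def pso(X, y, n_particles=12, iters=15, rng=np.random.default_rng(0)):
        order = np.argsort(mutual_info_classif(X, y))[::-1]   # IG-style feature ranking
        lo, hi = np.array([1.0, -2.0, -4.0]), np.array([X.shape[1], 3.0, 1.0])
        pos = rng.uniform(lo, hi, (n_particles, 3))
        vel = np.zeros_like(pos)
        pbest = pos.copy()
        pbest_f = np.array([fitness(p, X, y, order) for p in pos])
        gbest = pbest[np.argmax(pbest_f)]
        for _ in range(iters):
            r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, lo, hi)
            f = np.array([fitness(p, X, y, order) for p in pos])
            better = f > pbest_f
            pbest[better], pbest_f[better] = pos[better], f[better]
            gbest = pbest[np.argmax(pbest_f)]
        return gbest, pbest_f.max()

    X, y = make_classification(n_samples=200, n_features=40, random_state=0)
    best_params, best_score = pso(X, y)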

  2. Optimization approaches for planning external beam radiotherapy

    NASA Astrophysics Data System (ADS)

    Gozbasi, Halil Ozan

    Cancer begins when cells grow out of control as a result of damage to their DNA. These abnormal cells can invade healthy tissue and form tumors in various parts of the body. Chemotherapy, immunotherapy, surgery and radiotherapy are the most common treatment methods for cancer. According to the American Cancer Society, about half of cancer patients receive a form of radiation therapy at some stage. External beam radiotherapy is delivered from outside the body and aimed at cancer cells to damage their DNA, making them unable to divide and reproduce. The beams travel through the body and may damage nearby healthy tissue unless carefully planned. Therefore, the goal of treatment plan optimization is to find the best system parameters to deliver sufficient dose to target structures while avoiding damage to healthy tissue. This thesis investigates optimization approaches for two external beam radiation therapy techniques: Intensity-Modulated Radiation Therapy (IMRT) and Volumetric-Modulated Arc Therapy (VMAT). We develop automated treatment planning technology for IMRT that produces several high-quality treatment plans satisfying provided clinical requirements in a single invocation and without human guidance. A novel bi-criteria scoring-based beam selection algorithm is part of the planning system and produces better plans than those produced using a well-known scoring-based algorithm. Our algorithm is very efficient and finds the beam configuration at least ten times faster than an exact integer programming approach, with solution times ranging from 2 minutes to 15 minutes, which is clinically acceptable. With certain cancers, especially lung cancer, a patient's anatomy changes during treatment. These anatomical changes need to be considered in treatment planning. Fortunately, recent advances in imaging technology can provide multiple images of the treatment region taken at different points of the breathing cycle, and deformable image registration algorithms can

  3. LP based approach to optimal stable matchings

    SciTech Connect

    Teo, Chung-Piaw; Sethuraman, J.

    1997-06-01

    We study the classical stable marriage and stable roommates problems using a polyhedral approach. We propose a new LP formulation for the stable roommates problem, which is feasible if and only if the underlying roommates problem has a stable matching. Furthermore, for certain special weight functions on the edges, we construct a 2-approximation algorithm for the optimal stable roommates problem. Our technique exploits a crucial geometric property of the fractional solutions of this formulation. For the stable marriage problem, we show that a related geometry allows us to express any fractional solution in the stable marriage polytope as a convex combination of stable marriage solutions. This leads to a genuinely simple proof of the integrality of the stable marriage polytope. Based on these ideas, we devise a heuristic to solve the optimal stable roommates problem, combining the power of rounding and cutting-plane methods. We present some computational results based on preliminary implementations of this heuristic.

  4. Blood platelet production: a novel approach for practical optimization.

    PubMed

    van Dijk, Nico; Haijema, René; van der Wal, Jan; Sibinga, Cees Smit

    2009-03-01

    The challenge of production and inventory management for blood platelets (PLTs) is the requirement to meet highly uncertain demands. Shortages are to be minimized, if not avoided altogether. Overproduction, in turn, leads to high levels of outdating, as PLTs have a limited "shelf life." Outdating is to be minimized for ethical and cost reasons. Operations research (OR) methodology was applied to the PLT inventory management problem. The problem can be formulated in a general mathematical form. To solve this problem, a five-step procedure was used, based on a combination of two techniques: a mathematical technique called stochastic dynamic programming (SDP) and computer simulation. The approach identified an optimal production policy, leading to the computation of a simple and nearly optimal PLT production "order-up-to" rule. This rule prescribes a fixed order-up-to level for each day of the week. The approach was applied to a test study with actual data for a regional Dutch blood bank. The main finding of the test study was that outdating could be reduced from 15-20 percent to less than 0.1 percent with virtually no shortages. Blood group preferences and extending the shelf life to more than 5 days appeared to be of marginal effect. In this article the worlds of blood management and the mathematical discipline of OR are brought together for the optimization of blood PLT production. This leads to simple, nearly optimal blood PLT production policies that are suitable for practical implementation.
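
    A toy rendering of the kind of "order-up-to" rule the study arrives at: each day of the week has a fixed level to which stock is raised, units are issued oldest-first, and units reaching the end of their shelf life are outdated. The demand distribution, shelf life, and levels below are illustrative assumptions, not the blood bank's data.

    import numpy as np

    def simulate(order_up_to, mean_demand, shelf_life=5, weeks=2000, seed=0):
        rng = np.random.default_rng(seed)
        ages = np.zeros(shelf_life, int)        # stock by remaining shelf life; index 0 expires today
        shortages = outdated = produced = 0
        for day in range(weeks * 7):
            dow = day % 7
            order = max(0, order_up_to[dow] - ages.sum())   # raise stock to the day's level
            ages[-1] += order                               # fresh units have full shelf life
            produced += order
            demand = rng.poisson(mean_demand[dow])
            for a in range(shelf_life):                     # issue oldest units first (FIFO)
                used = min(ages[a], demand)
                ages[a] -= used
                demand -= used
            shortages += demand                             # unmet demand after issuing
            outdated += ages[0]                             # unsold units expiring today
            ages = np.append(ages[1:], 0)                   # remaining stock ages one day
        return shortages, outdated / max(produced, 1)

    mean_demand = np.array([14, 16, 15, 15, 17, 9, 8])      # weekday-dependent demand
    shortages, outdating_rate = simulate(mean_demand + 12, mean_demand)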

  5. Essays on Applied Resource Economics Using Bioeconomic Optimization Models

    NASA Astrophysics Data System (ADS)

    Affuso, Ermanno

    With rising demographic growth, there is increasing interest in analytical studies that assess alternative policies for providing an optimal allocation of scarce natural resources while ensuring environmental sustainability. This dissertation consists of three essays in applied resource economics that are methodologically interconnected within the agricultural production sector of economics. The first chapter examines the sustainability of biofuels by simulating and evaluating an agricultural voluntary program that aims to increase land use efficiency in the production of first-generation biofuels in the state of Alabama. The results show that participatory decisions may increase the net energy value of biofuels by 208% and reduce emissions by 26%, significantly contributing to the state's energy goals. The second chapter tests the hypothesis of overuse of fertilizers and pesticides in U.S. peanut farming with respect to other inputs and addresses genetic research to reduce the use of the most overused chemical input. The findings suggest that peanut producers overuse fungicide with respect to any other input and that fungi-resistant genetically engineered peanuts may increase producer welfare by up to 36.2%. The third chapter implements a bioeconomic model, consisting of a biophysical model and a stochastic dynamic recursive model, that is used to measure the potential economic and environmental welfare of cotton farmers derived from a rotation scheme that uses peanut as a complementary crop. The results show that the rotation scenario would lower farming costs by 14% due to nitrogen credits from prior peanut land use and reduce non-point source pollution from nitrogen runoff by 6.13% compared to continuous cotton farming.

  6. Optimal trading strategies—a time series approach

    NASA Astrophysics Data System (ADS)

    Bebbington, Peter A.; Kühn, Reimer

    2016-05-01

    Motivated by recent advances in the spectral theory of auto-covariance matrices, we are led to revisit a reformulation of Markowitz’ mean-variance portfolio optimization approach in the time domain. In its simplest incarnation it applies to a single traded asset and allows an optimal trading strategy to be found which—for a given return—is minimally exposed to market price fluctuations. The model is initially investigated for a range of synthetic price processes, taken to be either second order stationary, or to exhibit second order stationary increments. Attention is paid to the consequences of estimating auto-covariance matrices from small finite samples, and auto-covariance matrix cleaning strategies to mitigate these effects are investigated. Finally we apply our framework to real world data.
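
    The single-asset core of the approach reduces to a quadratic program: choose position weights w over the trading horizon that minimize the exposure w'Cw to price fluctuations, where C is the empirical auto-covariance matrix of returns, subject to a required expected return. A minimal sketch on a synthetic random-walk price series with small drift (an assumption, not the paper's data):

    import numpy as np

    rng = np.random.default_rng(0)
    prices = 100 + np.cumsum(0.05 + rng.standard_normal(2000))  # synthetic drifting random walk
    returns = np.diff(prices)

    N = 50                                                 # trading horizon in steps
    windows = np.lib.stride_tricks.sliding_window_view(returns, N)
    C = np.cov(windows, rowvar=False)                      # empirical auto-covariance matrix
    mu = windows.mean(axis=0)                              # mean return profile over the horizon

    # Minimize w' C w subject to mu' w = target; the Lagrange condition gives
    # w proportional to C^{-1} mu, rescaled to meet the target return.
    target = 1.0
    w = np.linalg.solve(C, mu)
    w *= target / (mu @ w)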

  7. A Simulation Optimization Approach to Epidemic Forecasting

    PubMed Central

    Nsoesie, Elaine O.; Beckman, Richard J.; Shashaani, Sara; Nagaraj, Kalyani S.; Marathe, Madhav V.

    2013-01-01

    Reliable forecasts of influenza can aid in the control of both seasonal and pandemic outbreaks. We introduce a simulation optimization (SIMOP) approach for forecasting the influenza epidemic curve. This study represents the final step of a project aimed at using a combination of simulation, classification, statistical and optimization techniques to forecast the epidemic curve and infer underlying model parameters during an influenza outbreak. The SIMOP procedure combines an individual-based model and the Nelder-Mead simplex optimization method. The method is used to forecast epidemics simulated over synthetic social networks representing Montgomery County in Virginia, Miami, Seattle and surrounding metropolitan regions. The results are presented for the first four weeks. Depending on the synthetic network, the peak time could be predicted within a 95% CI as early as seven weeks before the actual peak. The peak infected and total infected were also accurately forecasted for Montgomery County in Virginia within the forecasting period. Forecasting of the epidemic curve for both seasonal and pandemic influenza outbreaks is a complex problem; however, this is a preliminary step, and the results suggest that more can be achieved in this area. PMID:23826222
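
    A compressed sketch of the simulation-optimization loop: Nelder-Mead adjusts epidemic-model parameters until the simulated curve matches the early observations, after which the calibrated model supplies the forecast. A deterministic SIR model stands in here for the individual-based network model, so everything below is illustrative.

    import numpy as np
    from scipy.optimize import minimize

    def sir_curve(beta, gamma, days=120, N=1e5, I0=10):
        # Daily new infections from a discrete-time SIR model.
        S, I = N - I0, float(I0)
        new_cases = []
        for _ in range(days):
            inf = beta * S * I / N
            rec = gamma * I
            S, I = S - inf, I + inf - rec
            new_cases.append(inf)
        return np.array(new_cases)

    observed = sir_curve(0.30, 0.10)[:28]                  # first four weeks of "data"

    def loss(p):
        # Squared error between the simulated and observed early curve.
        return np.sum((sir_curve(p[0], p[1])[:28] - observed) ** 2)

    fit = minimize(loss, x0=[0.2, 0.15], method="Nelder-Mead")
    forecast = sir_curve(*fit.x)                           # full forecast epidemic curve
    peak_week = int(np.argmax(forecast)) // 7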

  8. Optimal control theory applied to fusion plasma thermal stabilization

    SciTech Connect

    Sager, G.; Miley, G.; Maya, I.

    1985-01-01

    Many authors have investigated stability characteristics and performance of various burn control schemes. The work presented here represents the first application of optimal control theory to the problem of fusion plasma thermal stabilization. The objectives of this initial investigation were to develop analysis methods, demonstrate tractability, and present some preliminary results of optimal control theory in burn control research.

  9. Mixed finite element formulation applied to shape optimization

    NASA Technical Reports Server (NTRS)

    Rodrigues, Helder; Taylor, John E.; Kikuchi, Noboru

    1988-01-01

    The development presented introduces a general form of mixed formulation for the optimal shape design problem. The associated optimality conditions are easily obtained without resorting to highly elaborate mathematical developments. Also, the physical significance of the adjoint problem is clearly defined with this formulation.

  10. Optimizing Multicompression Approaches to Elasticity Imaging

    PubMed Central

    Du, Huini; Liu, Jie; Pellot-Barakat, Claire; Insana, Michael F.

    2009-01-01

    Breast lesion visibility in static strain imaging ultimately is noise limited. When correlation and related techniques are applied to estimate local displacements between two echo frames recorded before and after a small deformation, target contrast increases linearly with the amount of deformation applied. However, above some deformation threshold, decorrelation noise increases more than contrast such that lesion visibility is severely reduced. Multicompression methods avoid this problem by accumulating displacements from many small deformations to provide the same net increase in lesion contrast as one large deformation but with minimal decorrelation noise. Unfortunately, multicompression approaches accumulate echo noise (electronic and sampling) with each deformation step as contrast builds so that lesion visibility can be reduced again if the applied deformation increment is too small. This paper uses signal models and analysis techniques to develop multicompression strategies that minimize strain image noise. The analysis predicts that displacement variance is minimal in elastically homogeneous media when the applied strain increment is 0.0035. Predictions are verified experimentally with gelatin phantoms. For in vivo breast imaging, a strain increment as low as 0.0015 is recommended for minimum noise because of the greater elastic heterogeneity of breast tissue. PMID:16471435

  11. Nonlinear optimization approach for Fourier ptychographic microscopy.

    PubMed

    Zhang, Yongbing; Jiang, Weixin; Dai, Qionghai

    2015-12-28

    Fourier ptychographic microscopy (FPM) is a recently proposed computational imaging method that bypasses the space-bandwidth product limitation of traditional optical systems. It employs a sequence of low-resolution images captured under angularly varying illumination and applies a phase retrieval algorithm to iteratively reconstruct a wide-field, high-resolution image. In the current FPM imaging system, system uncertainties, such as the pupil aberration of the employed optics, may significantly degrade the quality of the reconstruction. In this paper, we develop and test a nonlinear optimization algorithm that improves the robustness of the FPM imaging system by simultaneously considering the reconstruction and the system imperfections. Analytical expressions for the gradient of a squared-error metric with respect to the object and illumination allow joint optimization of the object and system parameters. The algorithm achieves superior reconstructions when the system parameters are inaccurately known or in the presence of noise, and corrects the pupil aberrations simultaneously. Experiments on both synthetic and real captured data validate the effectiveness of the proposed method.

  12. Defining and applying a functionality approach to intellectual disability.

    PubMed

    Luckasson, R; Schalock, R L

    2013-07-01

    The current functional models of disability do not adequately incorporate significant changes of the last three decades in our understanding of human functioning, and how the human functioning construct can be applied to clinical functions, professional practices and outcomes evaluation. The authors synthesise current literature on human functioning dimensions, systems of supports and approaches to outcomes evaluation for persons with intellectual disability (ID), and propose a functionality approach that encompasses a systems perspective towards understanding human functioning in ID. The approach includes human functioning dimensions, interactive systems of supports and human functioning outcomes. Based on this functionality approach the authors: (1) describe how such an approach can be applied to clinical functions related to defining ID, assessment, classification, supports planning and outcomes evaluation; and (2) discuss the impact of a functionality approach on professional practices in the field of ID. A functionality approach can increase focus on the integrative nature of human functioning, provide unified language, align clinical functions and encourage evidence-based practices. The approach incorporates a holistic view of human beings and their lives, and can positively affect supports provision and evaluation. © 2012 The Authors. Journal of Intellectual Disability Research © 2012 John Wiley & Sons Ltd, MENCAP & IASSID.

  13. From Nonlinear Optimization to Convex Optimization through Firefly Algorithm and Indirect Approach with Applications to CAD/CAM

    PubMed Central

    Gálvez, Akemi; Iglesias, Andrés

    2013-01-01

    Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. Firstly, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but precomputed with a coarse approximation scheme. Secondly, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to optimization of data parameterization; then, the knot vector is refined by using De Boor's method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently. PMID:24376380

  14. From nonlinear optimization to convex optimization through firefly algorithm and indirect approach with applications to CAD/CAM.

    PubMed

    Gálvez, Akemi; Iglesias, Andrés

    2013-01-01

    Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. Firstly, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but precomputed with a coarse approximation scheme. Secondly, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to optimization of data parameterization; then, the knot vector is refined by using De Boor's method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently.
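
    The convexification step described in the two records above can be made concrete: once the data parameterization and the knot vector are fixed, the control points solve a linear least-squares problem, which SVD-based solvers handle directly. The circle data, knot vector, and spline degree below are illustrative assumptions.

    import numpy as np
    from scipy.interpolate import BSpline

    t = np.linspace(0.0, 1.0, 60)                          # data parameterization (assumed fixed)
    rng = np.random.default_rng(0)
    data = np.c_[np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)] + 0.01 * rng.normal(size=(60, 2))

    k = 3                                                  # cubic B-spline
    knots = np.r_[[0.0] * (k + 1), np.linspace(0.2, 0.8, 5), [1.0] * (k + 1)]
    n_ctrl = len(knots) - k - 1

    # Collocation matrix: B[i, j] = value of the j-th basis function at t[i].
    B = np.zeros((len(t), n_ctrl))
    for j in range(n_ctrl):
        coeff = np.zeros(n_ctrl)
        coeff[j] = 1.0
        B[:, j] = BSpline(knots, coeff, k)(t)

    ctrl, *_ = np.linalg.lstsq(B, data, rcond=None)        # convex subproblem, solved via SVD
    fitted = BSpline(knots, ctrl, k)(t)                    # fitted curve at the data parameters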

  15. Applying a Constructivist and Collaborative Methodological Approach in Engineering Education

    ERIC Educational Resources Information Center

    Moreno, Lorenzo; Gonzalez, Carina; Castilla, Ivan; Gonzalez, Evelio; Sigut, Jose

    2007-01-01

    In this paper, a methodological educational proposal based on constructivism and collaborative learning theories is described. The suggested approach has been successfully applied to a subject entitled "Computer Architecture and Engineering" in a Computer Science degree in the University of La Laguna in Spain. This methodology is…

  16. Learning approach to sampling optimization: Applications in astrodynamics

    NASA Astrophysics Data System (ADS)

    Henderson, Troy Allen

    A novel numerical optimization algorithm is developed, tested, and used to solve difficult numerical problems from the field of astrodynamics. First, a brief review of optimization theory is presented and common numerical optimization techniques are discussed. Then, the new method, called the Learning Approach to Sampling Optimization (LA), is presented. Simple, illustrative examples are given to further emphasize the simplicity and accuracy of the LA method. Benchmark functions in lower dimensions are studied and the LA is compared, in terms of performance, to widely used methods. Three classes of problems from astrodynamics are then solved. First, the N-impulse orbit transfer and rendezvous problems are solved by using the LA optimization technique along with derived bounds that make the problem computationally feasible. This marriage between analytical and numerical methods allows an answer to be found for an order of magnitude greater number of impulses than are currently published. Next, the N-impulse work is applied to design periodic close encounters (PCE) in space. The encounters are defined as an open rendezvous, meaning that two spacecraft must be at the same position at the same time, but their velocities are not necessarily equal. The PCE work is extended to include N-impulses and other constraints, and new examples are given. Finally, a trajectory optimization problem is solved using the LA algorithm, comparing performance with other methods on two models of the Cassini-Huygens mission to Saturn with varying complexity. The results show that the LA consistently outperforms commonly used numerical optimization algorithms.

  17. A Global Optimization Approach to Multi-Polarity Sentiment Analysis

    PubMed Central

    Li, Xinmiao; Li, Jing; Wu, Yukeng

    2015-01-01

    Following the rapid development of social media, sentiment analysis has become an important social media mining technique. The performance of automatic sentiment analysis primarily depends on feature selection and sentiment classification. While information gain (IG) and support vector machines (SVM) are two important techniques, few studies have optimized both approaches in sentiment analysis. The effectiveness of applying a global optimization approach to sentiment analysis remains unclear. We propose a global optimization-based sentiment analysis (PSOGO-Senti) approach to improve sentiment analysis with IG for feature selection and SVM as the learning engine. The PSOGO-Senti approach utilizes a particle swarm optimization algorithm to obtain a global optimal combination of feature dimensions and parameters in the SVM. We evaluate the PSOGO-Senti model on two datasets from different fields. The experimental results showed that the PSOGO-Senti model can improve binary and multi-polarity Chinese sentiment analysis. We compared the optimal feature subset selected by PSOGO-Senti with the features in the sentiment dictionary. The results of this comparison indicated that PSOGO-Senti can effectively remove redundant and noisy features and can select a domain-specific feature subset with higher explanatory power for a particular sentiment analysis task. The experimental results showed that the PSOGO-Senti approach is effective and robust for sentiment analysis tasks in different domains. By comparing the improvements of two-polarity, three-polarity and five-polarity sentiment analysis results, we found that the five-polarity sentiment analysis delivered the largest improvement, while the improvement of the two-polarity sentiment analysis was the smallest. We conclude that PSOGO-Senti achieves higher improvement for a more complicated sentiment analysis task. We also compared the results of PSOGO-Senti with those of the genetic algorithm (GA) and grid search method. From

  18. Stochastic real-time optimal control: A pseudospectral approach for bearing-only trajectory optimization

    NASA Astrophysics Data System (ADS)

    Ross, Steven M.

    A method is presented to couple and solve the optimal control and the optimal estimation problems simultaneously, allowing systems with bearing-only sensors to maneuver to obtain observability for relative navigation without unnecessarily detracting from a primary mission. A fundamentally new approach to trajectory optimization and the dual control problem is presented, constraining polynomial approximations of the Fisher Information Matrix to provide an information gradient and allow prescription of the level of future estimation certainty required for mission accomplishment. Disturbances, modeling deficiencies, and corrupted measurements are addressed recursively using Radau pseudospectral collocation methods and sequential quadratic programming for the optimal path and an Unscented Kalman Filter for the target position estimate. The underlying real-time optimal control (RTOC) algorithm is developed, specifically addressing limitations of current techniques that lose error integration. The resulting guidance method can be applied to any bearing-only system, such as submarines using passive sonar, anti-radiation missiles, or small UAVs seeking to land on power lines for energy harvesting. System integration, variable timing methods, and discontinuity management techniques are provided for actual hardware implementation. Validation is accomplished with both simulation and flight test, autonomously landing a quadrotor helicopter on a wire.

  19. Optimizing communication satellites payload configuration with exact approaches

    NASA Astrophysics Data System (ADS)

    Stathakis, Apostolos; Danoy, Grégoire; Bouvry, Pascal; Talbi, El-Ghazali; Morelli, Gianluigi

    2015-12-01

    The satellite communications market is competitive and rapidly evolving. The payload, which is in charge of applying frequency conversion and amplification to the signals received from Earth before their retransmission, is made of various components. These include reconfigurable switches that permit the re-routing of signals based on market demand or because of some hardware failure. In order to meet modern requirements, the size and the complexity of current communication payloads are increasing significantly. Consequently, optimal payload configuration, which was previously done manually by the engineers with the use of computerized schematics, is now becoming a difficult and time-consuming task. Efficient optimization techniques are therefore required to find the optimal set(s) of switch positions to optimize some operational objective(s). In order to tackle this challenging problem for the satellite industry, this work proposes two Integer Linear Programming (ILP) models. The first one is single-objective and focuses on the minimization of the length of the longest channel path, while the second one is bi-objective and additionally aims at minimizing the number of switch changes in the payload switch matrix. Experiments are conducted on a large set of instances of realistic payload sizes using the CPLEX® solver and two well-known exact multi-objective algorithms. Numerical results demonstrate the efficiency and limitations of the ILP approach on this real-world problem.

  20. Classical mechanics approach applied to analysis of genetic oscillators.

    PubMed

    Vasylchenkova, Anastasiia; Mraz, Miha; Zimic, Nikolaj; Moskon, Miha

    2016-04-05

    Biological oscillators form a fundamental part of several regulatory mechanisms that control the response of various biological systems. Several analytical approaches for their analysis have been reported recently. They are, however, limited to specific oscillator topologies and/or give only qualitative answers, i.e., whether the dynamics of an oscillator over a given parameter space are oscillatory or not. Here we present a general analytical approach that can be applied to the analysis of biological oscillators. It relies on the projection of biological systems onto classical mechanics systems. The approach provides relatively accurate results regarding the type of behaviour a system exhibits (i.e., oscillatory or not) and the periods of potential oscillations, without the need to conduct expensive numerical simulations. We demonstrate and verify the proposed approach on three different implementations of the amplified negative feedback oscillator.
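
    In the same spirit of analysis without simulation, a common lightweight analogue is to linearize the model at its steady state and read the behaviour off the Jacobian's eigenvalues: a complex pair with positive real part signals sustained oscillations, with period roughly 2*pi/Im(lambda). This stands in for the paper's classical-mechanics projection, and the three-gene repression-ring model below is an illustrative assumption, not the paper's system.

    import numpy as np
    from scipy.optimize import fsolve

    alpha, n, delta = 10.0, 3.0, 1.0                       # production, cooperativity, decay

    def rhs(x):
        # Three-gene repression ring: each protein represses the next one.
        return np.array([alpha / (1 + x[2] ** n) - delta * x[0],
                         alpha / (1 + x[0] ** n) - delta * x[1],
                         alpha / (1 + x[1] ** n) - delta * x[2]])

    x_ss = fsolve(rhs, np.ones(3))                         # symmetric steady state

    def jacobian(x, eps=1e-6):
        # Central-difference Jacobian of rhs at x.
        J = np.zeros((3, 3))
        for j in range(3):
            dx = np.zeros(3)
            dx[j] = eps
            J[:, j] = (rhs(x + dx) - rhs(x - dx)) / (2 * eps)
        return J

    lam = np.linalg.eigvals(jacobian(x_ss))
    oscillatory = bool(np.any((lam.real > 0) & (np.abs(lam.imag) > 1e-9)))
    period = 2 * np.pi / np.abs(lam.imag).max()            # period estimate near onset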

  1. Nearly optimal quantum control: an analytical approach

    NASA Astrophysics Data System (ADS)

    Sun, Chen; Saxena, Avadh; Sinitsyn, Nikolai A.

    2017-09-01

    We propose nearly optimal control strategies for changing the states of a quantum system. We argue that quantum control optimization can be studied analytically within some protocol families that depend on a small set of parameters for optimization. This optimization strategy can be preferred in practice because it is physically transparent and does not lead to combinatorial complexity in multistate problems. As a demonstration, we design optimized control protocols that achieve switching between orthogonal states of a naturally biased quantum two-level system.

  2. Optimizing pharmaceutical reimbursement: one institution's approach.

    PubMed

    Loyd, Laurel M

    2006-11-01

    The importance of understanding the revenue cycle, reviewing the billing system for errors, and collaborating with other health system departments in maximizing pharmaceutical reimbursement, and the approach used at a large academic medical center to justify a reimbursement specialist and achieve this goal are discussed. Understanding the revenue cycle may enable pharmacy departments to make wise decisions about programs and services that maximize revenue recovery and meet patient needs. Parts of the revenue cycle that pharmacists can have a favorable effect on include claim denials/payment variances, regulatory changes, compliance, contracting, and price setting. Pharmaceutical reimbursement was increased substantially at one institution through a collaborative effort involving multiple departments and a reimbursement specialist who analyzed the revenue cycle, reviewed billing systems, and took steps to avoid or correct billing errors. Collaborating with members of key health system departments can help identify and resolve billing system errors that diminish revenue. Documenting efforts to increase revenue recovery can help justify adding personnel dedicated to reimbursement matters. Analyzing the revenue cycle can contribute to wise decision-making that optimizes pharmaceutical reimbursement.

  3. Applying Genetic Algorithms To Query Optimization in Document Retrieval.

    ERIC Educational Resources Information Center

    Horng, Jorng-Tzong; Yeh, Ching-Chang

    2000-01-01

    Proposes a novel approach to automatically retrieve keywords and then uses genetic algorithms to adapt the keyword weights. Discusses Chinese text retrieval, term frequency rating formulas, vector space models, bigrams, the PAT-tree structure for information retrieval, query vectors, and relevance feedback. (Author/LRW)

  4. Neoliberal Optimism: Applying Market Techniques to Global Health.

    PubMed

    Mei, Yuyang

    2016-09-23

    Global health and neoliberalism are becoming increasingly intertwined as organizations utilize markets and profit motives to solve the traditional problems of poverty and population health. I use field work conducted over 14 months in a global health technology company to explore how the promise of neoliberalism re-envisions humanitarian efforts. In this company's vaccine refrigerator project, staff members expect their investors and their market to allow them to achieve scale and develop accountability to their users in developing countries. However, the translation of neoliberal techniques to the global health sphere falls short of the ideal, as profits are meager and purchasing power remains with donor organizations. The continued optimism in market principles amidst such a non-ideal market reveals the tenacious ideological commitment to neoliberalism in these global health projects.

  5. Execution of Multidisciplinary Design Optimization Approaches on Common Test Problems

    NASA Technical Reports Server (NTRS)

    Balling, R. J.; Wilkinson, C. A.

    1997-01-01

    A class of synthetic problems for testing multidisciplinary design optimization (MDO) approaches is presented. These test problems are easy to reproduce because all functions are given as closed-form mathematical expressions. They are constructed in such a way that the optimal value of all variables and of the objective is unity. The test problems involve three disciplines and allow the user to specify the number of design variables, state variables, coupling functions, design constraints, controlling design constraints, and the strength of coupling. Several MDO approaches were executed on two sample synthetic test problems, including single-level optimization approaches, collaborative optimization approaches, and concurrent subspace optimization approaches. Execution results are presented, and the robustness and efficiency of these approaches are evaluated for these sample problems.
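
    A hedged sketch of what such a synthetic problem can look like: every function is a closed-form expression constructed so that all variables and the objective equal one at the optimum, which makes checking an MDO method's answer trivial. The particular expressions, coupling functions, and constraint below are illustrative, not the paper's.

    import numpy as np
    from scipy.optimize import minimize

    def states(x):
        # "Analysis" outputs that couple the disciplines; both equal 1 at x = (1, 1, 1).
        y1 = x[0] * x[1]
        y2 = 0.5 * (x[1] + x[2])
        return y1, y2

    def objective(x):
        # Equals exactly 1 at the known optimum by construction.
        y1, y2 = states(x)
        return 1.0 + (y1 - 1.0) ** 2 + (y2 - 1.0) ** 2 + np.sum((x - 1.0) ** 2)

    def constraint(x):
        # g(x) >= 0, active (g = 0) at the optimum.
        y1, _ = states(x)
        return 2.0 - x[0] - y1

    res = minimize(objective, x0=np.zeros(3),
                   constraints=[{"type": "ineq", "fun": constraint}])
    # res.x is approximately (1, 1, 1) and res.fun approximately 1, as designed.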

  6. A multiple objective optimization approach to quality control

    NASA Technical Reports Server (NTRS)

    Seaman, Christopher Michael

    1991-01-01

    The use of product quality as the performance criterion for manufacturing system control is explored. The goal in manufacturing, for economic reasons, is to optimize product quality. The problem is that since quality is a rather nebulous product characteristic, there is seldom an analytic function that can be used as a measure, so standard control approaches, such as optimal control, cannot readily be applied. A second problem with optimizing product quality is that it is typically measured along many dimensions: there are many aspects of quality which must be optimized simultaneously. Very often these different aspects are incommensurate and competing, so the concept of optimality must include accepting tradeoffs among the different quality characteristics. These problems are addressed using multiple objective optimization. It is shown that the quality control problem can be defined as a multiple objective optimization problem, and a controller structure is defined on this basis. Then, an algorithm is presented which can be used by an operator to interactively find the best operating point. Essentially, the algorithm uses process data to provide the operator with two pieces of information: (1) if it is possible to simultaneously improve all quality criteria, determine what changes to the process inputs or controller parameters should be made to do this; and (2) if it is not possible to improve all criteria, and the current operating point is not a desirable one, select a criterion in which a tradeoff should be made, and make input changes to improve all other criteria. The process is not operating at an optimal point in any sense if a new operating point can be reached without making a tradeoff. This algorithm ensures that operating points are optimal in some sense and provides the operator with information about tradeoffs when seeking the best operating point. The multiobjective algorithm was implemented in two different injection molding scenarios.
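
    Step (1) of the algorithm can be illustrated with a small linear program: given local gradient estimates of each quality criterion with respect to the process inputs, look for a direction d and margin s > 0 with grad_i . d <= -s for every criterion i. If such a direction exists, all criteria can still be improved simultaneously; otherwise the operator is at a Pareto point and must choose a tradeoff. The gradients below are illustrative stand-ins for estimates obtained from process data.

    import numpy as np
    from scipy.optimize import linprog

    grads = np.array([[1.0, -0.5],                         # d(criterion 1)/d(inputs)
                      [0.3,  0.8]])                        # d(criterion 2)/d(inputs)
    m, n = grads.shape

    # Variables are [d_1..d_n, s]; maximize s, i.e. minimize -s,
    # subject to grads @ d + s <= 0 and |d_j| <= 1.
    c = np.r_[np.zeros(n), -1.0]
    A_ub = np.c_[grads, np.ones(m)]
    b_ub = np.zeros(m)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(-1, 1)] * n + [(0, None)])

    if res.x[-1] > 1e-9:
        d = res.x[:n]          # a small move along d improves every quality criterion
    else:
        d = None               # Pareto point: a tradeoff among criteria is required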

  7. Major accident prevention through applying safety knowledge management approach.

    PubMed

    Kalatpour, Omid

    2016-01-01

    Many scattered resources of knowledge are available for chemical accident prevention purposes. The common approach to process safety management, including using databases and referring to the available knowledge, has some drawbacks. The main goal of this article was to devise a new knowledge base (KB) for the chemical accident prevention domain. The scattered sources of safety knowledge were identified and scanned. Then, the collected knowledge was formalized through a computerized program. The Protégé software was used to formalize and represent the stored safety knowledge. Domain knowledge was retrieved, as well as data and information. This optimized approach improved the safety and health knowledge management (KM) process and resolved some typical problems in the KM process. Upgrading traditional safety databases into KBs can improve the interaction between users and the knowledge repository.

  8. Weaknesses in Applying a Process Approach in Industry Enterprises

    NASA Astrophysics Data System (ADS)

    Kučerová, Marta; Mĺkva, Miroslava; Fidlerová, Helena

    2012-12-01

    The paper deals with the process approach as one of the main principles of quality management. Quality management systems based on the process approach currently represent one of the proven ways to manage an organization. Sales volume, costs and profit levels are influenced by the quality of processes and efficient process flow. As the results of the research project showed, there are weaknesses in the application of the process approach in industrial practice: in many organizations in Slovakia, the change from functional management to process management has often been merely formal. For efficient process management it is essential that companies pay attention to how they organize their processes and seek their continuous improvement.

  9. Applying a weed risk assessment approach to GM crops.

    PubMed

    Keese, Paul K; Robold, Andrea V; Myers, Ruth C; Weisman, Sarah; Smith, Joe

    2014-12-01

    Current approaches to environmental risk assessment of genetically modified (GM) plants are modelled on chemical risk assessment methods, which have a strong focus on toxicity. There are additional types of harms posed by plants that have been extensively studied by weed scientists and incorporated into weed risk assessment methods. Weed risk assessment uses robust, validated methods that are widely applied to regulatory decision-making about potentially problematic plants. They are designed to encompass a broad variety of plant forms and traits in different environments, and can provide reliable conclusions even with limited data. The knowledge and experience that underpin weed risk assessment can be harnessed for environmental risk assessment of GM plants. A case study illustrates the application of the Australian post-border weed risk assessment approach to a representative GM plant. This approach is a valuable tool to identify potential risks from GM plants.

  10. Applying riding-posture optimization on bicycle frame design.

    PubMed

    Hsiao, Shih-Wen; Chen, Rong-Qi; Leng, Wan-Lee

    2015-11-01

    Customization design has been a trend in bicycle development in recent years. Riding comfort is thus an important factor that deserves close attention while developing a bicycle. From the viewpoint of ergonomics, the concept of "fitting the object to the human body" is designed into the bicycle frame in this study. Firstly, the important feature points of the riding posture were automatically detected by an image processing method. In the measurement process, the best riding posture was identified experimentally, thus yielding the positions of the feature points and the joint angles of the human body. Afterwards, according to the measurement data, three key points: the handlebar, the saddle and the crank center, were identified and applied to the frame design of various bicycle types. Lastly, this study further proposed a frame size table for common bicycle types, which is helpful for the designer to design a bicycle.

  11. An Iterative Approach for the Optimization of Pavement Maintenance Management at the Network Level

    PubMed Central

    Torres-Machí, Cristina; Chamorro, Alondra; Videla, Carlos; Yepes, Víctor

    2014-01-01

    Pavement maintenance is one of the major issues of public agencies. Insufficient investment or inefficient maintenance strategies lead to high economic expenses in the long term. Under budgetary restrictions, the optimal allocation of resources becomes a crucial aspect. Two traditional approaches (sequential and holistic) and four classes of optimization methods (selection based on ranking, mathematical optimization, near optimization, and other methods) have been applied to solve this problem. They vary in the number of alternatives considered and in how the selection process is performed; therefore, a prior understanding of the problem is mandatory to identify the most suitable approach and method for a particular network. This study aims to assist highway agencies, researchers, and practitioners on when and how to apply available methods, based on a comparative analysis of the current state of the practice. The holistic approach tackles the problem by considering the overall network condition, while the sequential approach is easier to implement and understand but may lead to solutions far from optimal. Scenarios defining the suitability of each approach are identified. Finally, an iterative approach gathering the advantages of the traditional approaches is proposed and applied in a case study. The proposed approach considers the overall network condition in a simpler and more intuitive manner than the holistic approach. PMID:24741352

  12. An iterative approach for the optimization of pavement maintenance management at the network level.

    PubMed

    Torres-Machí, Cristina; Chamorro, Alondra; Videla, Carlos; Pellicer, Eugenio; Yepes, Víctor

    2014-01-01

    Pavement maintenance is one of the major issues of public agencies. Insufficient investment or inefficient maintenance strategies lead to high economic expenses in the long term. Under budgetary restrictions, the optimal allocation of resources becomes a crucial aspect. Two traditional approaches (sequential and holistic) and four classes of optimization methods (selection based on ranking, mathematical optimization, near optimization, and other methods) have been applied to solve this problem. They vary in the number of alternatives considered and in how the selection process is performed; therefore, a prior understanding of the problem is mandatory to identify the most suitable approach and method for a particular network. This study aims to assist highway agencies, researchers, and practitioners on when and how to apply available methods, based on a comparative analysis of the current state of the practice. The holistic approach tackles the problem by considering the overall network condition, while the sequential approach is easier to implement and understand but may lead to solutions far from optimal. Scenarios defining the suitability of each approach are identified. Finally, an iterative approach gathering the advantages of the traditional approaches is proposed and applied in a case study. The proposed approach considers the overall network condition in a simpler and more intuitive manner than the holistic approach.

  13. Performance of hybrid methods for large-scale unconstrained optimization as applied to models of proteins.

    PubMed

    Das, B; Meirovitch, H; Navon, I M

    2003-07-30

    Energy minimization plays an important role in structure determination and analysis of proteins, peptides, and other organic molecules; therefore, development of efficient minimization algorithms is important. Recently, Morales and Nocedal developed hybrid methods for large-scale unconstrained optimization that interlace iterations of the limited-memory BFGS method (L-BFGS) and the Hessian-free Newton method (Computat Opt Appl 2002, 21, 143-154). We test the performance of this approach as compared to those of the L-BFGS algorithm of Liu and Nocedal and the truncated Newton (TN) with automatic preconditioner of Nash, as applied to the protein bovine pancreatic trypsin inhibitor (BPTI) and a loop of the protein ribonuclease A. These systems are described by the all-atom AMBER force field with a dielectric constant epsilon = 1 and a distance-dependent dielectric function epsilon = 2r, where r is the distance between two atoms. It is shown that for the optimal parameters the hybrid approach is typically two times more efficient in terms of CPU time and function/gradient calculations than the two other methods. The advantage of the hybrid approach increases as the electrostatic interactions become stronger, that is, in going from epsilon = 2r to epsilon = 1, which leads to a more rugged and probably more nonlinear potential energy surface. However, no general rule that defines the optimal parameters has been found and their determination requires a relatively large number of trial-and-error calculations for each problem. Copyright 2003 Wiley Periodicals, Inc. J Comput Chem 24: 1222-1231, 2003
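
    For orientation, this is what the plain L-BFGS baseline looks like on a toy pairwise "energy" with a Lennard-Jones term and the distance-dependent-dielectric electrostatic term (epsilon = 2r gives q_i q_j / (2 r^2)). The energy function and its parameters are illustrative stand-ins for the AMBER force field, and the hybrid interlacing itself is not reproduced here.

    import numpy as np
    from scipy.optimize import minimize

    def energy(flat, q, sigma=1.0):
        x = flat.reshape(-1, 3)
        d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
        i, j = np.triu_indices(len(x), k=1)                # unique atom pairs
        r = d[i, j]
        lj = 4.0 * ((sigma / r) ** 12 - (sigma / r) ** 6)  # van der Waals term
        elec = q[i] * q[j] / (2.0 * r ** 2)                # epsilon = 2r screening
        return np.sum(lj + elec)

    rng = np.random.default_rng(0)
    n_atoms = 12
    x0 = rng.uniform(0.0, 3.0, (n_atoms, 3)).ravel()       # random initial coordinates
    q = rng.choice([-0.5, 0.5], n_atoms)                   # toy partial charges

    res = minimize(energy, x0, args=(q,), method="L-BFGS-B",
                   options={"maxiter": 500, "maxcor": 10})  # 10 stored correction pairs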

  15. Beyond parallax barriers: applying formal optimization methods to multilayer automultiscopic displays

    NASA Astrophysics Data System (ADS)

    Lanman, Douglas; Wetzstein, Gordon; Hirsch, Matthew; Heidrich, Wolfgang; Raskar, Ramesh

    2012-03-01

    This paper focuses on resolving long-standing limitations of parallax barriers by applying formal optimization methods. We consider two generalizations of conventional parallax barriers. First, we consider general two-layer architectures, supporting high-speed temporal variation with arbitrary opacities on each layer. Second, we consider general multi-layer architectures containing three or more light-attenuating layers. This line of research has led to two new attenuation-based displays. The High-Rank 3D (HR3D) display contains a stacked pair of LCD panels; rather than using heuristically defined parallax barriers, both layers are jointly optimized using low-rank light field factorization, resulting in increased brightness, refresh rate, and battery life for mobile applications. The Layered 3D display extends this approach to multi-layered displays composed of compact volumes of light-attenuating material. Such volumetric attenuators recreate a 4D light field when illuminated by a uniform backlight. We further introduce Polarization Fields as an optically and computationally efficient extension of Layered 3D to multi-layer LCDs. Together, these projects reveal new generalizations of parallax barrier concepts, enabled by the application of formal optimization methods to multi-layer attenuation-based designs in a manner that uniquely leverages the compressive nature of 3D scenes for display applications.
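
    The low-rank factorization at the heart of HR3D can be illustrated compactly. The sketch below factors a stand-in light field matrix into two nonnegative layer patterns using standard NMF multiplicative updates; the matrix is random, the rank is arbitrary, and the real method works on structured 4D light fields rather than this toy.

```python
# Rank-constrained nonnegative factorization of a (simulated) light
# field matrix L into two attenuation layers F and G. Multiplicative
# updates keep the factors nonnegative (transmittances).
import numpy as np

rng = np.random.default_rng(0)
L = rng.random((64, 64))      # hypothetical target light field (flattened views)
rank = 4                      # number of time-multiplexed frame pairs

F = rng.random((64, rank))    # front-layer patterns
G = rng.random((rank, 64))    # rear-layer patterns

for _ in range(500):          # standard NMF multiplicative updates
    F *= (L @ G.T) / (F @ G @ G.T + 1e-9)
    G *= (F.T @ L) / (F.T @ F @ G + 1e-9)

err = np.linalg.norm(L - F @ G) / np.linalg.norm(L)
print(f"relative reconstruction error: {err:.3f}")
```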

  16. Applying optimal model selection in principal stratification for causal inference.

    PubMed

    Odondi, Lang'o; McNamee, Roseanne

    2013-05-20

    Noncompliance with treatment allocation is a key source of complication for causal inference. Efficacy estimation is likely to be confounded by the presence of noncompliance in both treatment arms of clinical trials, where the intention-to-treat estimate provides a biased estimator for the true causal estimate even under the homogeneous treatment effects assumption. The principal stratification method has been developed to address such posttreatment complications. The present work extends a principal stratification method that adjusts for noncompliance in two-arm trials by developing model selection for covariates predicting compliance with treatment in each arm. We apply the method to analyse data from the Esprit study, which was conducted to ascertain whether unopposed oestrogen (hormone replacement therapy) reduced the risk of further cardiac events in postmenopausal women who survive a first myocardial infarction. We adjust for noncompliance in both treatment arms under a Bayesian framework to produce causal risk ratio estimates for each principal stratum. For mild values of a sensitivity parameter and using separate predictors of compliance in each arm, principal stratification results suggested that compliance with hormone replacement therapy alone would reduce the risk of death and myocardial reinfarction by about 47% and 25%, respectively, whereas compliance with either treatment would reduce the risk of death by 13% and of reinfarction by 60% among the most compliant. However, the results were sensitive to the user-defined sensitivity parameter.

  17. A Collective Neurodynamic Approach to Constrained Global Optimization.

    PubMed

    Yan, Zheng; Fan, Jianchao; Wang, Jun

    2016-04-01

    Global optimization is a long-standing research topic in the field of optimization, posing many challenging theoretical and computational issues. This paper presents a novel collective neurodynamic method for solving constrained global optimization problems. First, a one-layer recurrent neural network (RNN) is presented for searching for the Karush-Kuhn-Tucker points of the optimization problem under study. Next, a collective neurodynamic optimization approach is developed by emulating the paradigm of brainstorming. Multiple RNNs are exploited cooperatively to search for the global optimal solutions in a framework of particle swarm optimization. Each RNN carries out a precise local search and converges to a candidate solution according to its own neurodynamics. The neuronal state of each neural network is repetitively reset by exchanging historical information between each individual network and the entire group. Wavelet mutation is performed to avoid prematurity, add diversity, and promote global convergence. It is proved in the framework of stochastic optimization that the proposed collective neurodynamic approach is capable of computing the global optimal solutions with probability one, provided that a sufficiently large number of neural networks are utilized. The essence of the collective neurodynamic optimization approach lies in its potential to solve constrained global optimization problems in real time. The effectiveness and characteristics of the proposed approach are illustrated using benchmark optimization problems.

  18. Examining the Bernstein global optimization approach to optimal power flow problem

    NASA Astrophysics Data System (ADS)

    Patil, Bhagyesh V.; Sampath, L. P. M. I.; Krishnan, Ashok; Ling, K. V.; Gooi, H. B.

    2016-10-01

    This work addresses a nonconvex optimal power flow (OPF) problem. We introduce a 'new approach' in the context of the OPF problem based on Bernstein polynomials. The applicability of the approach is studied on a real-world 3-bus power system. The numerical results obtained with this new approach for the 3-bus system reveal a satisfactory improvement in terms of optimality. The results are found to be competitive with those of the generic global optimization solvers BARON and COUENNE.
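
    The property that makes Bernstein polynomials useful for global optimization is range enclosure: on [0, 1], a polynomial is bounded by the minimum and maximum of its Bernstein coefficients. Below is a minimal sketch with an arbitrary cubic, not the authors' OPF formulation.

```python
# Range enclosure from Bernstein coefficients on [0, 1].
from math import comb

a = [1.0, -3.0, 2.0, 0.5]          # p(x) = 1 - 3x + 2x^2 + 0.5x^3 (arbitrary)
n = len(a) - 1

# Convert power-basis coefficients to Bernstein coefficients on [0, 1].
b = [sum(comb(k, i) / comb(n, i) * a[i] for i in range(k + 1))
     for k in range(n + 1)]

print(f"enclosure of p over [0,1]: [{min(b):.3f}, {max(b):.3f}]")

# Check against a fine sampling of the true range.
vals = [sum(a[i] * (t / 1000) ** i for i in range(n + 1)) for t in range(1001)]
print(f"sampled range:             [{min(vals):.3f}, {max(vals):.3f}]")
```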

  19. Using Response Surface Methodology as an Approach to Understand and Optimize Operational Air Power

    DTIC Science & Technology

    2010-01-01

    Using Response Surface Methodology as an Approach to Understand and Optimize Operational Air Power. Marvin L. Simpson, Jr. and Resit Unal, 2010. Only fragments of the report documentation page survive in this record; its references include: Introduction to Taguchi Methodology, in Taguchi Methods: Proceedings of the 1988 European Conference, 1-14, London: Elsevier Applied Science.

  20. Stochastic Optimal Control and Linear Programming Approach

    SciTech Connect

    Buckdahn, R.; Goreac, D.; Quincampoix, M.

    2011-04-15

    We study a classical stochastic optimal control problem with constraints and discounted payoff in an infinite horizon setting. The main result of the present paper is that this optimal control problem has the same value as a linear optimization problem stated on an appropriate space of probability measures. This enables one to derive a dual formulation that appears to be strongly connected to the notion of (viscosity sub)solution to a suitable Hamilton-Jacobi-Bellman equation. We also discuss the relation to long-time average problems.
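
    A discrete analogue of this equivalence is easy to demonstrate: a discounted Markov decision problem can be solved as a linear program over occupation measures. The sketch below, with a hypothetical 2-state, 2-action problem, is only a finite stand-in for the continuous measure-space formulation of the paper.

```python
# Discounted MDP solved as an LP over occupation measures mu(s, a).
import numpy as np
from scipy.optimize import linprog

gamma = 0.9
r = np.array([[1.0, 0.0],      # r[s, a] -- hypothetical rewards
              [0.0, 2.0]])
P = np.array([[[0.8, 0.2],     # P[s, a, s'] -- hypothetical transitions
               [0.1, 0.9]],
              [[0.5, 0.5],
               [0.3, 0.7]]])
alpha = np.array([0.5, 0.5])   # initial state distribution

# Flow constraints: sum_a mu(s',a) - gamma * sum_{s,a} P[s,a,s'] mu(s,a) = alpha(s')
nS, nA = r.shape
A_eq = np.zeros((nS, nS * nA))
for sp in range(nS):
    for s in range(nS):
        for a in range(nA):
            A_eq[sp, s * nA + a] = (sp == s) - gamma * P[s, a, sp]

res = linprog(c=-r.ravel(), A_eq=A_eq, b_eq=alpha, bounds=(0, None))
print(f"optimal discounted value: {-res.fun:.4f}")
print("occupation measure:", np.round(res.x, 4))
```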

  1. Optimization of minoxidil microemulsions using fractional factorial design approach.

    PubMed

    Jaipakdee, Napaphak; Limpongsa, Ekapol; Pongjanyakul, Thaned

    2016-01-01

    The objective of this study was to apply fractional factorial and multi-response optimization designs using the desirability function approach for developing topical microemulsions. Minoxidil (MX) was used as a model drug. Limonene was used as the oil phase. Based on solubility, Tween 20 and caprylocaproyl polyoxyl-8 glycerides were selected as surfactants, and propylene glycol and ethanol were selected as co-solvents in the aqueous phase. Experiments were performed according to a two-level fractional factorial design to evaluate the effects of the independent variables Tween 20 concentration in the surfactant system (X1), surfactant concentration (X2), ethanol concentration in the co-solvent system (X3), and limonene concentration (X4) on the MX solubility (Y1), permeation flux (Y2), lag time (Y3), and deposition (Y4) of MX microemulsions. It was found that Y1 increased with increasing X3 and decreasing X2 and X4, whereas Y2 increased with decreasing X1 and X2 and increasing X3. While Y3 was not affected by these variables, Y4 increased with decreasing X1 and X2. Three regression equations were obtained and used to calculate predicted values of the responses Y1, Y2, and Y4. The predicted values matched the experimental values reasonably well, with high coefficients of determination. Using the optimal desirability function, the optimized microemulsion, demonstrating the highest MX solubility, permeation flux, and skin deposition, was confirmed at low levels of X1, X2, and X4 and a high level of X3.
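
    A minimal sketch of the design machinery follows: a two-level 2^(4-1) fractional factorial design (defining relation X4 = X1·X2·X3) combined with a Derringer-style desirability combination of two responses. The responses are simulated with invented coefficients; the real study measured solubility, flux, lag time, and deposition.

```python
# Half-fraction two-level design for 4 coded factors plus desirability.
import itertools
import numpy as np

# 8 runs: full factorial in X1..X3, with X4 = X1 * X2 * X3.
base = np.array(list(itertools.product([-1, 1], repeat=3)))
design = np.column_stack([base, base[:, 0] * base[:, 1] * base[:, 2]])

rng = np.random.default_rng(1)
y1 = 5 - 1.0 * design[:, 1] + 0.8 * design[:, 2] + rng.normal(0, 0.1, 8)  # "solubility"
y2 = 3 - 0.6 * design[:, 0] + 0.5 * design[:, 2] + rng.normal(0, 0.1, 8)  # "flux"

def desirability(y, lo, hi):
    """Larger-is-better desirability, linear between lo and hi."""
    return np.clip((y - lo) / (hi - lo), 0, 1)

D = np.sqrt(desirability(y1, 3, 7) * desirability(y2, 2, 4))  # geometric mean
best = design[np.argmax(D)]
print("design matrix:\n", design)
print("best run (coded levels X1..X4):", best, " D =", round(D.max(), 3))
```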

  2. Making Big Data, Safe Data: A Test Optimization Approach

    DTIC Science & Technology

    2016-06-15

    Making Big Data, Safe Data: A Test Optimization Approach. Ricardo Valerdi, Associate Professor, and Eddie Enhelder, University of Arizona, 15 June 2016. Acquisition Research Program, Graduate School of Business & Public Policy, Naval Postgraduate School. Only fragments of the report documentation page survive in this record; the project addressed the potential knowledge gained about a complex system when performing robustness testing while faced with a set of constraints.

  3. Group Counseling Optimization: A Novel Approach

    NASA Astrophysics Data System (ADS)

    Eita, M. A.; Fahmy, M. M.

    A new population-based search algorithm, which we call the Group Counseling Optimizer (GCO), is presented. It mimics the group counseling behavior of humans in solving their problems. The algorithm is tested using seven well-known benchmark functions: the Sphere, Rosenbrock, Griewank, Rastrigin, Ackley, Weierstrass, and Schwefel functions. A comparison is made with the recently published comprehensive learning particle swarm optimizer (CLPSO). The results demonstrate the efficiency and robustness of the proposed algorithm.

  4. A linear programming approach for optimal contrast-tone mapping.

    PubMed

    Wu, Xiaolin

    2011-05-01

    This paper proposes a novel algorithmic approach to image enhancement via optimal contrast-tone mapping. In a fundamental departure from the current practice of histogram equalization for contrast enhancement, the proposed approach maximizes the expected contrast gain subject to an upper limit on tone distortion and, optionally, to other constraints that suppress artifacts. The underlying contrast-tone optimization problem can be solved efficiently by linear programming. This new constrained optimization approach to image enhancement is general, and the user can add and fine-tune the constraints to achieve desired visual effects. Experimental results demonstrate clearly superior performance of the new approach over histogram equalization and its variants.
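
    One simple instantiation of the idea can be written as a linear program directly: choose nonnegative tone-curve step sizes to maximize expected contrast gain subject to the output range, with per-step bounds standing in for the paper's tone-distortion constraint. The histogram below is synthetic and the bounds are arbitrary.

```python
# Contrast-tone mapping as an LP: maximize sum_j p_j * s_j over step
# sizes s_j of the tone curve, subject to range and per-step bounds.
import numpy as np
from scipy.optimize import linprog

levels = 16
rng = np.random.default_rng(2)
p = rng.random(levels - 1)
p /= p.sum()                          # histogram of level transitions

res = linprog(
    c=-p,                             # maximize p^T s  <=>  minimize -p^T s
    A_ub=np.ones((1, levels - 1)),    # total output range constraint
    b_ub=[levels - 1],
    bounds=[(0.25, 4.0)] * (levels - 1),  # crude tone-distortion bounds
)
tone_curve = np.concatenate([[0], np.cumsum(res.x)])
print("tone curve:", np.round(tone_curve, 2))
```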

  5. An analytic approach to optimize tidal turbine fields

    NASA Astrophysics Data System (ADS)

    Pelz, P.; Metzler, M.

    2013-12-01

    Motivated by global warming due to CO2 emissions, various technologies for harvesting energy from renewable sources have been developed. Hydrokinetic turbines are applied to surface watercourses or tidal flows to gain electrical energy. Since the available power for hydrokinetic turbines is proportional to the projected cross-section area, fields of turbines are installed to scale shaft power. Each hydrokinetic turbine of a field can be considered as a disk actuator. In [1], the first author derives the optimal operation point for hydropower in an open channel. The present paper concerns a zero-dimensional model of a disk actuator in an open-channel flow with bypass, as a special case of [1]. Based on the energy equation, the continuity equation, and the momentum balance, an analytical approach is taken to calculate the coefficient of performance for hydrokinetic turbines with bypass flow as a function of the turbine head and the ratio of turbine width to channel width.
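
    In the unbounded-flow limit (no channel blockage or bypass), the disk-actuator analysis reduces to the classical Betz result, which the sketch below reproduces numerically; the paper's channel model with bypass generalizes this curve.

```python
# Power coefficient of an actuator disk in unbounded flow:
# C_p(a) = 4a(1-a)^2, maximized at the Betz point a = 1/3.
import numpy as np

a = np.linspace(0, 0.5, 501)          # axial induction factor
cp = 4 * a * (1 - a) ** 2
i = np.argmax(cp)
print(f"optimal induction a = {a[i]:.3f}, "
      f"C_p = {cp[i]:.4f} (Betz limit 16/27 = {16/27:.4f})")
```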

  6. An optimization approach and its application to compare DNA sequences

    NASA Astrophysics Data System (ADS)

    Liu, Liwei; Li, Chao; Bai, Fenglan; Zhao, Qi; Wang, Ying

    2015-02-01

    Studying the evolutionary relationships between biological sequences by comparing and analyzing gene sequences has become one of the main tasks in bioinformatics research. Many valid methods have been applied to DNA sequence alignment. In this paper, we propose a novel comparison method based on the Lempel-Ziv (LZ) complexity to compare biological sequences. Moreover, we introduce a new distance measure and make use of the corresponding similarity matrix to construct phylogenetic trees without multiple sequence alignment. Further, we construct phylogenetic trees for 24 species of eutherian mammals and 48 hepatitis E virus (HEV) sequences from different countries by an optimization approach. The results indicate that this new method improves the efficiency of sequence comparison and successfully constructs phylogenies.
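
    The flavor of an LZ-based, alignment-free distance can be sketched as follows. Here lz_complexity counts phrases in a simple LZ78-style parse, and the distance is one of the Otu-Sayood normalizations built from the complexity of concatenated sequences; the exact parsing and normalization used by the authors may differ, and the sequences are toy examples.

```python
# Alignment-free LZ-based distance between DNA sequences.
def lz_complexity(s):
    """Number of distinct phrases in an LZ78-style incremental parse."""
    phrases, cur = set(), ""
    for ch in s:
        cur += ch
        if cur not in phrases:
            phrases.add(cur)
            cur = ""
    return len(phrases) + (1 if cur else 0)

def lz_distance(s, q):
    cs, cq, csq = lz_complexity(s), lz_complexity(q), lz_complexity(s + q)
    return (csq - min(cs, cq)) / max(cs, cq)

s1 = "ATGGTGCACCTGACTCCTGAGGAG"
s2 = "ATGGTGCACCTGACTCCTGTGGAG"   # similar to s1
s3 = "GGGCCCTTTAAACCCGGGATATAT"   # dissimilar
print(f"d(s1, s2) = {lz_distance(s1, s2):.3f}")
print(f"d(s1, s3) = {lz_distance(s1, s3):.3f}")
```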

  7. A Collaborative Neurodynamic Approach to Multiple-Objective Distributed Optimization.

    PubMed

    Yang, Shaofu; Liu, Qingshan; Wang, Jun

    2017-02-01

    This paper is concerned with multiple-objective distributed optimization. Based on objective weighting and decision space decomposition, a collaborative neurodynamic approach to multiobjective distributed optimization is presented. In the approach, a system of collaborative neural networks is developed to search for Pareto optimal solutions, where each neural network is associated with one objective function and given constraints. Sufficient conditions are derived for ascertaining the convergence to a Pareto optimal solution of the collaborative neurodynamic system. In addition, it is proved that each connected subsystem can generate a Pareto optimal solution when the communication topology is disconnected. Then, a switching-topology-based method is proposed to compute multiple Pareto optimal solutions for discretized approximation of Pareto front. Finally, simulation results are discussed to substantiate the performance of the collaborative neurodynamic approach. A portfolio selection application is also given.

  8. A comparison of two closely-related approaches to aerodynamic design optimization

    NASA Technical Reports Server (NTRS)

    Shubin, G. R.; Frank, P. D.

    1991-01-01

    Two related methods for aerodynamic design optimization are compared. The methods, called the implicit gradient approach and the variational (or optimal control) approach, both attempt to obtain gradients necessary for numerical optimization at a cost significantly less than that of the usual black-box approach that employs finite difference gradients. While the two methods are seemingly quite different, they are shown to differ (essentially) in that the order of discretizing the continuous problem, and of applying calculus, is interchanged. Under certain circumstances, the two methods turn out to be identical. We explore the relationship between these methods by applying them to a model problem for duct flow that has many features in common with transonic flow over an airfoil. We find that the gradients computed by the variational method can sometimes be sufficiently inaccurate to cause the optimization to fail.

  9. New approaches to the design optimization of hydrofoils

    NASA Astrophysics Data System (ADS)

    Beyhaghi, Pooriya; Meneghello, Gianluca; Bewley, Thomas

    2015-11-01

    Two simulation-based approaches are developed to optimize the design of hydrofoils for foiling catamarans, with the objective of maximizing efficiency (lift/drag). In the first, a simple hydrofoil model based on the vortex-lattice method is coupled with a hybrid global and local optimization algorithm that combines our Delaunay-based optimization algorithm with a Generalized Pattern Search. This optimization procedure is compared with the classical Newton-based optimization method. The accuracy of the vortex-lattice simulation of the optimized design is compared with a more accurate and computationally expensive LES-based simulation. In the second approach, the (expensive) LES model of the flow is used directly during the optimization. A modified Delaunay-based optimization algorithm is used to maximize the efficiency of the optimization, which measures a finite-time averaged approximation of the infinite-time averaged value of an ergodic and stationary process. Since the optimization algorithm takes into account the uncertainty of the finite-time averaged approximation of the infinite-time averaged statistic of interest, the total computational time of the optimization algorithm is significantly reduced. Results from the two different approaches are compared.

  10. Optimization techniques applied to passive measures for in-orbit spacecraft survivability

    NASA Technical Reports Server (NTRS)

    Mog, Robert A.; Price, D. Marvin

    1987-01-01

    Optimization Techniques Applied to Passive Measures for In-Orbit Spacecraft Survivability is a six-month study designed to evaluate the effectiveness of the geometric programming (GP) optimization technique in determining the optimal design of a meteoroid and space debris protection system for the Space Station Core Module configuration. Geometric programming was found to be superior to other methods in that it provided maximum protection from impact problems at the lowest weight and cost.

  11. Russian Loanword Adaptation in Persian; Optimal Approach

    ERIC Educational Resources Information Center

    Kambuziya, Aliye Kord Zafaranlu; Hashemi, Eftekhar Sadat

    2011-01-01

    In this paper we analyze some of the phonological rules of Russian loanword adaptation in Persian from the viewpoint of Optimality Theory (OT) (Prince & Smolensky, 1993/2004). It is the first study of the phonological processes in Russian loanword adaptation in Persian. After gathering about 50 current Russian loanwords, we selected some of them to analyze. We…

  12. Single-nm resolution approach by applying DDRP and DDRM

    NASA Astrophysics Data System (ADS)

    Shibayama, Wataru; Shigaki, Shuhei; Takeda, Satoshi; Nakajima, Makoto; Sakamoto, Rikimaru

    2017-03-01

    EUV lithography has been pursued as the leading technology for 1x nm or single-nm half-pitch patterning. However, the source power, masks, and resist materials still present critical issues for mass production. For resist materials in particular, the RLS trade-off has been the key issue. To overcome this issue, we suggest the Dry Development Rinse Process (DDRP) and Dry Development Rinse Materials (DDRM) as a pattern-collapse mitigation approach. DDRM can perform not only as a pattern-collapse-free material for fine pitches, but also as an etching hard mask against the bottom layer (spin-on carbon: SOC). In this paper, we propose new approaches to achieve high resolution around hp 1X nm line/space and single-nm line patterning. In particular, a semi-isolated 8 nm line was successfully achieved with good LWR (2.5 nm) and an aspect ratio of around 3. This single-nm patterning technique also helped to enhance sensitivity by about 33%. In addition, pillar patterning through contact-hole (CH) patterns by applying DDRP showed high resolution, below 20 nm pillar CD, with good LCDU and high sensitivity. This new DDRP technology can be a promising approach not only for hp 1X nm patterning but also for single-nm patterning at N7/N5 and beyond.

  13. Applying a Modified Triad Approach to Investigate Wastewater lines

    SciTech Connect

    Pawlowicz, R.; Urizar, L.; Blanchard, S.; Jacobsen, K.; Scholfield, J.

    2006-07-01

    Approximately 20 miles of wastewater lines are below grade at an active military Base. This piping network feeds or fed domestic or industrial wastewater treatment plants on the Base. Past wastewater line investigations indicated potential contaminant releases to soil and groundwater. Further environmental assessment was recommended to characterize the lines because of possible releases. A Remedial Investigation (RI) using random sampling or use of sampling points spaced at predetermined distances along the entire length of the wastewater lines, however, would be inefficient and cost prohibitive. To accomplish RI goals efficiently and within budget, a modified Triad approach was used to design a defensible sampling and analysis plan and perform the investigation. The RI task was successfully executed and resulted in a reduced fieldwork schedule, and sampling and analytical costs. Results indicated that no major releases occurred at the biased sampling points. It was reasonably extrapolated that since releases did not occur at the most likely locations, then the entire length of a particular wastewater line segment was unlikely to have contaminated soil or groundwater and was recommended for no further action. A determination of no further action was recommended for the majority of the waste lines after completing the investigation. The modified Triad approach was successful and a similar approach could be applied to investigate wastewater lines on other United States Department of Defense or Department of Energy facilities. (authors)

  14. A sensitivity equation approach to shape optimization in fluid flows

    NASA Technical Reports Server (NTRS)

    Borggaard, Jeff; Burns, John

    1994-01-01

    A sensitivity equation method is applied to shape optimization problems. An algorithm is developed and tested on the problem of designing optimal forebody simulators for a 2D, inviscid supersonic flow. The algorithm uses a BFGS/trust region optimization scheme with sensitivities computed by numerically approximating the linear partial differential equations that determine the flow sensitivities. Numerical examples are presented to illustrate the method.

  15. Applying response surface methodology to optimize nimesulide permeation from topical formulation.

    PubMed

    Shahzad, Yasser; Afreen, Urooj; Nisar Hussain Shah, Syed; Hussain, Talib

    2013-01-01

    Nimesulide is a non-steroidal anti-inflammatory drug that acts through selective inhibition of the COX-2 enzyme. Poor bioavailability of this drug may lead to local toxicity at the site of aggregation and hinder reaching the desired therapeutic effects. This study aimed at formulating and optimizing topically applied lotions of nimesulide using an experimental design approach, namely response surface methodology. The formulated lotions were evaluated for pH, viscosity, spreadability, and homogeneity, and by in vitro permeation studies through rabbit skin using Franz diffusion cells. Data were fitted to linear, quadratic, and cubic models, and the best-fit model was selected to investigate the influence of the permeation enhancers propylene glycol and polyethylene glycol on percutaneous absorption of nimesulide from the lotion formulations. The best-fit quadratic model showed that the enhancer combination at equal levels significantly increased the flux and permeability coefficient. The model was validated by comparing the predicted and experimental response values of the optimized formulations, thus endorsing the prognostic ability of response surface methodology.

  16. Molecular Approaches for Optimizing Vitamin D Supplementation.

    PubMed

    Carlberg, Carsten

    2016-01-01

    Vitamin D can be synthesized endogenously within UV-B-exposed human skin. However, avoidance of sufficient sun exposure through predominantly indoor activities, textile coverage, dark skin at higher latitudes, and seasonal variations makes the intake of vitamin D-fortified food or direct vitamin D supplementation necessary. Vitamin D has, via its biologically most active metabolite 1α,25-dihydroxyvitamin D and the transcription factor vitamin D receptor, a direct effect on the epigenome and transcriptome of many human tissues and cell types. Differing interpretations of results from observational studies with vitamin D have led to some dispute in the field over the desired optimal vitamin D level and the recommended daily supplementation. This chapter provides background on the epigenome- and transcriptome-wide functions of vitamin D and outlines how this insight may be used for determining the optimal vitamin D status of human individuals. These reflections lead to the concept of a personal vitamin D index, which may be a better guideline for optimized vitamin D supplementation than population-based recommendations. © 2016 Elsevier Inc. All rights reserved.

  17. Risks, scientific uncertainty and the approach of applying precautionary principle.

    PubMed

    Lo, Chang-fa

    2009-03-01

    The paper intends to clarify the nature and aspects of risks and scientific uncertainty and to elaborate an approach for applying the precautionary principle to handle risks arising from scientific uncertainty. It explains the relations between risks and the application of the precautionary principle at the international and domestic levels. Both in situations where an international treaty has admitted the precautionary principle and in situations where there is no international treaty admitting the precautionary principle or enumerating the conditions for taking measures, the precautionary principle has a role to play. The paper proposes a decision-making tool, containing questions to be asked, to help policymakers apply the principle. It also proposes a 'weighing and balancing' procedure to help them decide the contents of a measure to cope with the potential risk and to avoid excessive measures.

  18. An optimal transportation approach for nuclear structure-based pathology

    PubMed Central

    Wang, Wei; Ozolek, John A.; Slepčev, Dejan; Lee, Ann B.; Chen, Cheng; Rohde, Gustavo K.

    2012-01-01

    Nuclear morphology and structure as visualized from histopathology microscopy images can yield important diagnostic clues in some benign and malignant tissue lesions. Precise quantitative information about nuclear structure and morphology, however, is currently not available for many diagnostic challenges. This is due, in part, to the lack of methods to quantify these differences from image data. We describe a method to characterize and contrast the distribution of nuclear structure in different tissue classes (normal, benign, cancer, etc.). The approach is based on quantifying chromatin morphology in different groups of cells using the optimal transportation (Kantorovich-Wasserstein) metric in combination with the Fisher discriminant analysis and multidimensional scaling techniques. We show that the optimal transportation metric is able to measure relevant biological information as it enables automatic determination of the class (e.g. normal vs. cancer) of a set of nuclei. We show that the classification accuracies obtained using this metric are, on average, as good or better than those obtained utilizing a set of previously described numerical features. We apply our methods to two diagnostic challenges for surgical pathology: one in the liver and one in the thyroid. Results automatically computed using this technique show potentially biologically relevant differences in nuclear structure in liver and thyroid cancers. PMID:20977984
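
    For scalar features, the Kantorovich-Wasserstein distance the authors use is available directly in SciPy. The sketch below compares hypothetical chromatin feature distributions from two groups of nuclei; the paper works with full chromatin morphology rather than a single scalar feature.

```python
# 1D optimal transportation (Wasserstein) distance between feature
# distributions of two groups of nuclei. All values are simulated.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(3)
normal_nuclei = rng.normal(10.0, 1.0, 200)   # hypothetical feature values
cancer_nuclei = rng.normal(12.5, 2.0, 200)

print(f"W1(normal, cancer)  = "
      f"{wasserstein_distance(normal_nuclei, cancer_nuclei):.3f}")
print(f"W1(normal, normal') = "
      f"{wasserstein_distance(normal_nuclei, rng.normal(10.0, 1.0, 200)):.3f}")
```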

  19. Applying the J-optimal channelized quadratic observer to SPECT myocardial perfusion defect detection

    NASA Astrophysics Data System (ADS)

    Kupinski, Meredith K.; Clarkson, Eric; Ghaly, Michael; Frey, Eric C.

    2016-03-01

    To evaluate performance on a perfusion defect detection task from 540 image pairs of myocardial perfusion SPECT image data, we apply the J-optimal channelized quadratic observer (J-CQO). We compare AUC values of the linear Hotelling observer and J-CQO when the defect location is fixed and when it occurs in one of two locations. As expected, when the location is fixed a single channel maximizes the AUC; location variability requires multiple channels to maximize the AUC. The AUC is estimated from both the projection data and reconstructed images. J-CQO is quadratic since it uses the first- and second-order statistics of the image data from both classes. The linear data reduction by the channels is described by an L x M channel matrix, and in prior work we introduced an iterative gradient-based method for calculating the channel matrix. The dimensionality reduction from M measurements to L channels yields better estimates of these sample statistics from smaller sample sizes, and since the channelized covariance matrix is L x L instead of M x M, the matrix inverse is easier to compute. The novelty of our approach is the use of Jeffrey's divergence (J) as the figure of merit (FOM) for optimizing the channel matrix. We previously showed that the J-optimal channels are also the optimum channels for the AUC and the Bhattacharyya distance when the channel outputs are Gaussian distributed with equal means. This work evaluates the use of J as a surrogate FOM (SFOM) for the AUC when these statistical conditions are not satisfied.
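
    The linear Hotelling observer used as the baseline above has a compact closed form. The sketch below builds the template w = S⁻¹Δm for simulated low-dimensional "channel outputs" and checks the theoretical AUC for equal-covariance Gaussian data, Φ(SNR/√2), against an empirical estimate; all dimensions and statistics are invented.

```python
# Linear Hotelling observer on simulated two-class Gaussian data.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
dim, n = 8, 2000
S = np.eye(dim) + 0.3                      # common covariance (hypothetical)
delta = np.full(dim, 0.15)                 # mean signal profile (hypothetical)

g0 = rng.multivariate_normal(np.zeros(dim), S, n)   # signal-absent
g1 = rng.multivariate_normal(delta, S, n)           # signal-present

w = np.linalg.solve(S, delta)              # Hotelling template
snr2 = delta @ w                           # Hotelling SNR^2
print(f"theoretical AUC = {norm.cdf(np.sqrt(snr2) / np.sqrt(2)):.3f}")

# Empirical AUC from the two sets of test statistics.
t0, t1 = g0 @ w, g1 @ w
auc = (t1[:, None] > t0[None, :]).mean()
print(f"empirical  AUC = {auc:.3f}")
```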

  20. Genetic algorithm for design and manufacture optimization based on numerical simulations applied to aeronautic composite parts

    SciTech Connect

    Mouton, S.; Ledoux, Y.; Teissandier, D.; Sebastian, P.

    2010-06-15

    A key challenge for the future is to drastically reduce the human impact on the environment. In the aeronautic field, this challenge translates into optimizing the design of the aircraft to decrease its global mass. This reduction leads to the optimization of every constitutive part of the plane. The task is even more delicate when the material used is a composite. In this case, it is necessary to find a compromise between the strength, the mass, and the manufacturing cost of the component. Because of these different kinds of design constraints, it is necessary to assist engineers with decision support systems to determine feasible solutions. In this paper, an approach is proposed based on the coupling of the different key characteristics of the design process and on consideration of the failure risk of the component. The originality of this work is that the manufacturing deviations due to the RTM process are integrated into the simulation of the assembly process. Two kinds of deviations are identified: volume impregnation (injection phase of the RTM process) and geometrical deviations (curing and cooling phases). The quantification of these deviations and the related failure risk calculation are based on finite element simulations (Pam RTM® and Samcef® software). The use of a genetic algorithm allows estimation of the impact of the design choices and their consequences on the failure risk of the component. The main focus of the paper is the optimization of tool design. In the framework of decision support systems, the failure risk calculation is used for comparing possible industrialization alternatives. The method is applied to a particular part of the airplane structure: a spar unit made of carbon fiber/epoxy composite.

  1. Genetic algorithm for design and manufacture optimization based on numerical simulations applied to aeronautic composite parts

    NASA Astrophysics Data System (ADS)

    Mouton, S.; Ledoux, Y.; Teissandier, D.; Sébastian, P.

    2010-06-01

    A key challenge for the future is to drastically reduce the human impact on the environment. In the aeronautic field, this challenge translates into optimizing the design of the aircraft to decrease its global mass. This reduction leads to the optimization of every constitutive part of the plane. The task is even more delicate when the material used is a composite. In this case, it is necessary to find a compromise between the strength, the mass, and the manufacturing cost of the component. Because of these different kinds of design constraints, it is necessary to assist engineers with decision support systems to determine feasible solutions. In this paper, an approach is proposed based on the coupling of the different key characteristics of the design process and on consideration of the failure risk of the component. The originality of this work is that the manufacturing deviations due to the RTM process are integrated into the simulation of the assembly process. Two kinds of deviations are identified: volume impregnation (injection phase of the RTM process) and geometrical deviations (curing and cooling phases). The quantification of these deviations and the related failure risk calculation are based on finite element simulations (Pam RTM® and Samcef® software). The use of a genetic algorithm allows estimation of the impact of the design choices and their consequences on the failure risk of the component. The main focus of the paper is the optimization of tool design. In the framework of decision support systems, the failure risk calculation is used for comparing possible industrialization alternatives. The method is applied to a particular part of the airplane structure: a spar unit made of carbon fiber/epoxy composite.

  2. Scalar and Multivariate Approaches for Optimal Network Design in Antarctica

    NASA Astrophysics Data System (ADS)

    Hryniw, Natalia

    Observations are crucial for weather and climate, not only for daily forecasts and logistical purposes, but for maintaining representative records and for tuning atmospheric models. Here, scalar theory for optimal network design is expanded into a multivariate framework, to allow optimal station siting for full-field optimization. Ensemble sensitivity theory is expanded to produce the covariance trace approach, which optimizes for the trace of the covariance matrix. Relative entropy is also used for multivariate optimization, as an information-theoretic approach to finding optimal locations. Antarctic surface temperature data are used as a testbed for these methods. The two methods produce different results, which are tied to the fundamental physical parameters of the Antarctic temperature field.

  3. Optimal expansion of water quality monitoring network by fuzzy optimization approach.

    PubMed

    Ning, Shu-Kuang; Chang, Ni-Bin

    2004-02-01

    River reaches are frequently classified with respect to various modes of water utilization, depending on the quantity and quality of the water resources available at different locations. Monitoring of water quality in a river system must collect both temporal and spatial information for comparison against the preferred condition of a water body under different scenarios. Designing a technically sound monitoring network, however, requires identifying a suite of significant planning objectives while considering a series of inherent limitations simultaneously. It relies on applying an advanced systems analysis technique via an integrated simulation-optimization approach to meet the ultimate goal. This article presents an optimal expansion strategy for water quality monitoring stations intended to fulfill a long-term monitoring mission under an uncertain environment. The planning objectives considered in this analysis are to increase the degree of protection in the proximity of river reaches with higher population density, to enhance the detection capability for lower-compliance areas, to promote detection sensitivity through better deployment and installation of monitoring stations, to reflect the levels of utilization potential of the water body at different locations, and to monitor the essential water quality in the upstream areas of all water intakes. The constraint set contains the limitations of budget, equity implications, and detection sensitivity in the water environment. A fuzzy multi-objective evaluation framework that reflects the uncertainty embedded in decision making is designed for postulating and analyzing the underlying principles of the optimal expansion strategy of the monitoring network. The case study, organized in South Taiwan, demonstrates a set of more robust and flexible expansion alternatives in terms of spatial priority. Such an approach uniquely indicates the preference order of each candidate site to be expanded step-wise whenever the budget

  4. Optimal design of supply chain network under uncertainty environment using hybrid analytical and simulation modeling approach

    NASA Astrophysics Data System (ADS)

    Chiadamrong, N.; Piyathanavong, V.

    2017-04-01

    Models that aim to optimize the design of supply chain networks have gained increasing interest in the supply chain literature. Mixed-integer linear programming and discrete-event simulation are widely used for such optimization problems. We present a hybrid approach to support decisions for supply chain network design using a combination of analytical and discrete-event simulation models. The proposed approach is based on iterative procedures that run until the difference between subsequent solutions satisfies pre-determined termination criteria. The effectiveness of the proposed approach is illustrated by an example, which shows results closer to optimal, with much faster solving times, than those obtained from a conventional simulation-based optimization model. The efficacy of this proposed hybrid approach is promising, and it can be applied as a powerful tool in designing a real supply chain network. It also provides the possibility to model and solve more realistic problems, which incorporate dynamism and uncertainty.

  5. Practical approach to apply range image sensors in machine automation

    NASA Astrophysics Data System (ADS)

    Moring, Ilkka; Paakkari, Jussi

    1993-10-01

    In this paper we propose a practical approach to applying range imaging technology in machine automation. The applications we are especially interested in are industrial heavy-duty machines such as paper roll manipulators in harbor terminals, harvesters in forests, and drilling machines in mines. Characteristic of these applications is that the sensing system has to be fast, mid-ranging, compact, robust, and relatively cheap. On the other hand, the sensing system is not required to be generic with respect to the complexity of scenes and objects or the number of object classes. The key to our approach is that just a limited range data set, or as we call it, a sparse range image, is acquired and analyzed. This makes both the range image sensor and the range image analysis process more feasible and attractive. We believe that this is the way in which range imaging technology will enter the large industrial machine automation market. In the paper we analyze, as a case example, one of the applications mentioned and, based on that, roughly specify the requirements for a range-imaging-based sensing system. The possibilities for implementing the specified system are analyzed based on our own work on range image acquisition and interpretation.

  6. A system approach to aircraft optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1991-01-01

    Mutual couplings among the mathematical models of physical phenomena and parts of a system such as an aircraft complicate the design process because each contemplated design change may have a far reaching consequence throughout the system. Techniques are outlined for computing these influences as system design derivatives useful for both judgemental and formal optimization purposes. The techniques facilitate decomposition of the design process into smaller, more manageable tasks and they form a methodology that can easily fit into existing engineering organizations and incorporate their design tools.

  7. A comparison of reanalysis techniques: applying optimal interpolation and Ensemble Kalman Filtering to improve air quality monitoring at mesoscale.

    PubMed

    Candiani, Gabriele; Carnevale, Claudio; Finzi, Giovanna; Pisoni, Enrico; Volta, Marialuisa

    2013-08-01

    To fulfill the requirements of the 2008/50 Directive, which allows member states and regional authorities to use a combination of measurement and modeling to monitor air pollution concentrations, a key approach to be properly developed and tested is data assimilation. In this paper, with a focus on regional domains, a comparison between optimal interpolation and the Ensemble Kalman Filter is presented, to stress the pros and drawbacks of the two techniques. These approaches can be used to implement more accurate monitoring of long-term pollution trends over a geographical domain, through an optimal combination of all the available sources of data. The two approaches are formalized and applied for a regional domain located in Northern Italy, where measured PM10 levels are often higher than EU standard limits.
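
    The optimal interpolation analysis step being compared has a standard closed form, sketched below on a toy 1D field: the background is corrected toward the observations with a gain built from background (B) and observation (R) error covariances. An EnKF would replace B with a sample covariance from an ensemble of model runs; all fields and covariances here are invented.

```python
# Optimal interpolation analysis: x_a = x_b + K (y - H x_b),
# with K = B H^T (H B H^T + R)^{-1}.
import numpy as np

n, m = 5, 2                        # state size, number of observations
x_b = np.array([40., 42., 45., 47., 50.])        # background PM10 field
H = np.zeros((m, n)); H[0, 1] = H[1, 3] = 1.0    # observe grid points 1 and 3
y = np.array([48., 44.])                         # station measurements

B = 4.0 * np.exp(-np.abs(np.subtract.outer(range(n), range(n))) / 2.0)
R = 1.0 * np.eye(m)

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)     # gain matrix
x_a = x_b + K @ (y - H @ x_b)                    # analysis field
print("analysis field:", np.round(x_a, 2))
```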

  8. A new design approach based on differential evolution algorithm for geometric optimization of magnetorheological brakes

    NASA Astrophysics Data System (ADS)

    Le-Duc, Thang; Ho-Huu, Vinh; Nguyen-Thoi, Trung; Nguyen-Quoc, Hung

    2016-12-01

    In recent years, various types of magnetorheological brakes (MRBs) have been proposed and optimized by different optimization algorithms integrated into commercial software such as ANSYS and Comsol Multiphysics. However, many of these optimization algorithms possess noteworthy shortcomings, such as the trapping of solutions at local extrema, a limited number of design variables, or difficulty in dealing with discrete design variables. Thus, to overcome these limitations and develop an efficient computational tool for the optimal design of MRBs, an optimization procedure that combines differential evolution (DE), a gradient-free global optimization method, with finite element analysis (FEA) is proposed in this paper. The proposed approach is then applied to the optimal design of MRBs with different configurations, including conventional MRBs and MRBs with coils placed on the side housings. Moreover, to approach a real-life design, some necessary design variables of MRBs are considered as discrete variables in the optimization process. The obtained optimal design results are compared with those of available optimal designs in the literature. The results reveal that the proposed method outperforms some traditional approaches.
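
    A minimal sketch of the DE-based loop follows: SciPy's gradient-free differential_evolution minimizes a stand-in objective over geometric variables, with one variable treated as discrete by rounding inside the objective, echoing the paper's handling of discrete design variables. The torque and mass expressions are crude invented proxies, not an FEA model.

```python
# Differential evolution over mixed continuous/discrete design variables.
from scipy.optimize import differential_evolution

def mass_penalized_torque(x):
    width, radius, n_coils = x[0], x[1], round(x[2])   # n_coils is discrete
    torque = n_coils * width * radius**2               # crude torque proxy
    mass = width * radius + 0.4 * n_coils              # crude mass proxy
    return -(torque - 5.0 * mass)                      # maximize torque - penalty

bounds = [(0.01, 0.05), (0.05, 0.15), (1, 4)]          # hypothetical limits
res = differential_evolution(mass_penalized_torque, bounds, seed=5, tol=1e-8)
w, r, n = res.x[0], res.x[1], round(res.x[2])
print(f"width = {w:.4f} m, radius = {r:.4f} m, coils = {n}")
```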

  9. Optimization strategies in the modelling of SG-SMB applied to separation of phenylalanine and tryptophan

    NASA Astrophysics Data System (ADS)

    Diógenes Tavares Câmara, Leôncio

    2014-03-01

    The solvent-gradient simulated moving bed (SG-SMB) process is the new trend for performance improvement compared to traditional isocratic solvent conditions. In the SG-SMB process, modulation of the solvent strength leads to significant increases in purity and productivity, together with a reduction in solvent consumption. A stepwise modelling approach was utilized to represent the interconnected chromatographic columns of the system, combined with a lumped mass transfer model between the solid and liquid phases. The influence of the solvent modifier was considered by applying the Abel model, which takes into account the effect of the modifier volume fraction on the partition coefficient. Correlation models for the mass transfer parameters were obtained from the retention times of the solutes as functions of the volume fraction of modifier. The modelling and simulations were carried out and compared to an experimental SG-SMB unit separating the amino acids phenylalanine and tryptophan. The simulation results showed the great potential of the proposed modelling approach for representing such complex systems, fitting the experimental amino acid concentrations well both in the extract and in the raffinate. A new optimization strategy, which uses the phi-plot concept, was proposed for determining the best operating conditions.

  10. Optimization approaches to volumetric modulated arc therapy planning.

    PubMed

    Unkelbach, Jan; Bortfeld, Thomas; Craft, David; Alber, Markus; Bangert, Mark; Bokrantz, Rasmus; Chen, Danny; Li, Ruijiang; Xing, Lei; Men, Chunhua; Nill, Simeon; Papp, Dávid; Romeijn, Edwin; Salari, Ehsan

    2015-03-01

    Volumetric modulated arc therapy (VMAT) has found widespread clinical application in recent years. A large number of treatment planning studies have evaluated the potential for VMAT for different disease sites based on the currently available commercial implementations of VMAT planning. In contrast, literature on the underlying mathematical optimization methods used in treatment planning is scarce. VMAT planning represents a challenging large scale optimization problem. In contrast to fluence map optimization in intensity-modulated radiotherapy planning for static beams, VMAT planning represents a nonconvex optimization problem. In this paper, the authors review the state-of-the-art in VMAT planning from an algorithmic perspective. Different approaches to VMAT optimization, including arc sequencing methods, extensions of direct aperture optimization, and direct optimization of leaf trajectories are reviewed. Their advantages and limitations are outlined and recommendations for improvements are discussed.

  11. Optimization approaches to volumetric modulated arc therapy planning

    SciTech Connect

    Unkelbach, Jan Bortfeld, Thomas; Craft, David; Alber, Markus; Bangert, Mark; Bokrantz, Rasmus; Chen, Danny; Li, Ruijiang; Xing, Lei; Men, Chunhua; Nill, Simeon; Papp, Dávid; Romeijn, Edwin; Salari, Ehsan

    2015-03-15

    Volumetric modulated arc therapy (VMAT) has found widespread clinical application in recent years. A large number of treatment planning studies have evaluated the potential for VMAT for different disease sites based on the currently available commercial implementations of VMAT planning. In contrast, literature on the underlying mathematical optimization methods used in treatment planning is scarce. VMAT planning represents a challenging large scale optimization problem. In contrast to fluence map optimization in intensity-modulated radiotherapy planning for static beams, VMAT planning represents a nonconvex optimization problem. In this paper, the authors review the state-of-the-art in VMAT planning from an algorithmic perspective. Different approaches to VMAT optimization, including arc sequencing methods, extensions of direct aperture optimization, and direct optimization of leaf trajectories are reviewed. Their advantages and limitations are outlined and recommendations for improvements are discussed.

  12. RF cavity design exploiting a new derivative-free trust region optimization approach.

    PubMed

    Hassan, Abdel-Karim S O; Abdel-Malek, Hany L; Mohamed, Ahmed S A; Abuelfadl, Tamer M; Elqenawy, Ahmed E

    2015-11-01

    In this article, a novel derivative-free (DF) surrogate-based trust region optimization approach is proposed. In the proposed approach, quadratic surrogate models are constructed and successively updated. The generated surrogate model is then optimized, instead of the underlying objective function, over trust regions. Truncated conjugate gradients are employed to find the optimal point within each trust region. The approach constructs the initial quadratic surrogate model using few data points, of order O(n), where n is the number of design variables. The proposed approach adopts weighted least squares fitting for updating the surrogate model, instead of the interpolation commonly used in DF optimization. This makes the approach more suitable for stochastic optimization and for functions subject to numerical error. The weights are assigned to give more emphasis to points close to the current center point. The accuracy and efficiency of the proposed approach are demonstrated by applying it to a set of classical benchmark test problems. It is also employed to find the optimal design of an RF cavity linear accelerator, with a comparative analysis against a recent optimization technique.
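
    One iteration of the surrogate step can be sketched as follows: sample the objective, fit a quadratic model by weighted least squares with weights favoring points near the current center, and minimize the model within a trust region. A box region and L-BFGS-B stand in here for the ball region and truncated conjugate gradients of the paper, and the objective is a toy.

```python
# One derivative-free surrogate/trust-region step (illustrative only).
import numpy as np
from scipy.optimize import minimize

def f(x):                                   # black-box objective (toy)
    return (x[0] - 1) ** 2 + 2 * (x[1] + 0.5) ** 2 + 0.1 * np.sin(5 * x[0])

center, radius = np.array([0.0, 0.0]), 1.0
rng = np.random.default_rng(6)
X = center + radius * rng.uniform(-1, 1, (12, 2))   # sample points
y = np.array([f(x) for x in X])

def basis(x):                               # quadratic model terms
    return np.array([1, x[0], x[1], x[0]**2, x[1]**2, x[0]*x[1]])

A = np.array([basis(x) for x in X])
w = np.exp(-np.linalg.norm(X - center, axis=1) / radius)   # near-center weights
coef, *_ = np.linalg.lstsq(A * np.sqrt(w)[:, None], y * np.sqrt(w), rcond=None)

surrogate = lambda x: basis(x) @ coef       # fitted quadratic model
step = minimize(surrogate, center,          # minimize model in a box region
                bounds=[(c - radius, c + radius) for c in center])
print("trial point:", np.round(step.x, 3), " f =", round(float(f(step.x)), 4))
```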

  13. OSSA - An optimized approach to severe accident management: EPR application

    SciTech Connect

    Sauvage, E. C.; Prior, R.; Coffey, K.; Mazurkiewicz, S. M.

    2006-07-01

    There is a recognized need to provide nuclear power plant technical staff with structured guidance for responding to a potential severe accident condition involving core damage and potential release of fission products to the environment. Over the past ten years, many plants worldwide have implemented such guidance for their emergency technical support center teams, either by following one of the generic approaches or by developing fully independent approaches. There are many lessons to be learned from the experience of the past decade in developing, implementing, and validating severe accident management guidance. Also, though the numerous existing approaches share common principles, there are differences in the methodology and application of the guidelines. AREVA/Framatome-ANP is developing an optimized approach to severe accident management guidance in a project called OSSA ('Operating Strategies for Severe Accidents'). There are still numerous operating power plants that have yet to implement severe accident management programs. For these, the option to use an updated approach that makes full use of lessons learned and experience is seen as a major advantage. Very few of the current approaches cover all operating plant states, including shutdown states with the primary system closed and open. Although it is not necessary to develop an entirely new approach in order to add this capability, the opportunity has been taken to develop revised full-scope guidance covering all plant states, in addition to the fuel in the fuel building. The EPR includes, at the design phase, systems and measures to minimize the risk of a severe accident and to mitigate such potential scenarios. This is a difference in comparison with existing plants, for which severe accidents were not considered in the design. Though developed for all types of plants, OSSA will also be applied to the EPR, with adaptations designed to take into account its favourable situation in that field.

  14. Optimization methods applied to the aerodynamic design of helicopter rotor blades

    NASA Technical Reports Server (NTRS)

    Walsh, J. L.; Bingham, G. J.; Riley, M. F.

    1985-01-01

    This paper describes a formal optimization procedure for helicopter rotor blade designs which minimizes hover horsepower while assuring satisfactory forward flight performance. The approach is to couple hover and forward flight analysis programs with a general purpose optimization procedure. The resulting optimization system provides a systematic evaluation of the rotor blade design variables and their interaction, thus reducing the time and cost of designing advanced rotor blades. The paper discusses the basis for and details of the overall procedure, describes the generation of advanced blade designs for representative Army helicopters, and compares designs and design effort with those from the conventional approach which is based on parametric studies and extensive cross-plots.

  15. Optimization methods applied to the aerodynamic design of helicopter rotor blades

    NASA Technical Reports Server (NTRS)

    Walsh, Joanne L.; Bingham, Gene J.; Riley, Michael F.

    1987-01-01

    Described is a formal optimization procedure for helicopter rotor blade design which minimizes hover horsepower while assuring satisfactory forward flight performance. The approach is to couple hover and forward flight analysis programs with a general-purpose optimization procedure. The resulting optimization system provides a systematic evaluation of the rotor blade design variables and their interaction, thus reducing the time and cost of designing advanced rotor blades. The paper discusses the basis for and details of the overall procedure, describes the generation of advanced blade designs for representative Army helicopters, and compares design and design effort with those from the conventional approach which is based on parametric studies and extensive cross-plots.

  16. Distributed Cooperative Optimal Control for Multiagent Systems on Directed Graphs: An Inverse Optimal Approach.

    PubMed

    Zhang, Huaguang; Feng, Tao; Yang, Guang-Hong; Liang, Hongjing

    2015-07-01

    In this paper, the inverse optimal approach is employed to design distributed consensus protocols that guarantee consensus and global optimality with respect to some quadratic performance indexes for identical linear systems on a directed graph. The inverse optimal theory is developed by introducing the notion of partial stability. As a result, the necessary and sufficient conditions for inverse optimality are proposed. By means of the developed inverse optimal theory, the necessary and sufficient conditions are established for globally optimal cooperative control problems on directed graphs. Basic optimal cooperative design procedures are given based on asymptotic properties of the resulting optimal distributed consensus protocols, and the multiagent systems can reach desired consensus performance (convergence rate and damping rate) asymptotically. Finally, two examples are given to illustrate the effectiveness of the proposed methods.

  17. Aerodynamic design applying automatic differentiation and using robust variable fidelity optimization

    NASA Astrophysics Data System (ADS)

    Takemiya, Tetsushi

    , and that (2) the AMF terminates optimization erroneously when the optimization problems have constraints. The first problem is due to inaccuracy in computing derivatives in the AMF, and the second problem is due to erroneous treatment of the trust region ratio, which sets the size of the domain for an optimization in the AMF. In order to solve the first problem of the AMF, the automatic differentiation (AD) technique, which reads the code of analysis models and automatically generates new derivative code based on mathematical rules, is applied. If derivatives are computed with the generated derivative code, they are analytical, and the required computational time is independent of the number of design variables, which is very advantageous for realistic aerospace engineering problems. However, if analysis models implement iterative computations such as computational fluid dynamics (CFD), which solves systems of partial differential equations iteratively, computing derivatives through AD requires a massive memory size. The author addressed this deficiency by modifying the AD approach and developing a more efficient implementation with CFD, and successfully applied AD to general CFD software. In order to solve the second problem of the AMF, the governing equation of the trust region ratio, which is very strict against violations of constraints, is modified so that it accepts violations of constraints within some tolerance. By accepting violations of constraints during the optimization process, the AMF can continue optimization without terminating prematurely and eventually find the true optimum design point. With these modifications, the AMF is referred to as the "Robust AMF," and it is applied to airfoil and wing aerodynamic design problems using Euler CFD software. The former problem has 21 design variables, and the latter 64. In both problems, derivatives computed with the proposed AD method are first compared with those computed with the finite

  18. An Efficient Approach to Obtain Optimal Load Factors for Structural Design

    PubMed Central

    Bojórquez, Juan

    2014-01-01

    An efficient optimization approach is described for calibrating the load factors used in the design of structures. The load factors are calibrated so that the structural reliability index is as close as possible to a target reliability value. The optimization procedure is applied to find optimal load factors for the design of structures in accordance with the new version of the Mexico City Building Code (RCDF). To this end, the combination of factors corresponding to dead load plus live load is considered. The optimal combination is based on a parametric numerical analysis of several reinforced concrete elements, which are designed using different load factor values. The Monte Carlo simulation technique is used. The formulation is applied to different failure modes: flexure, shear, torsion, and compression plus bending of short and slender reinforced concrete elements. Finally, the structural reliability corresponding to the optimal load combination proposed here is compared with that corresponding to the load combination recommended by the current Mexico City Building Code. PMID:25133232

  19. An efficient approach to obtain optimal load factors for structural design.

    PubMed

    Bojórquez, Juan; Ruiz, Sonia E

    2014-01-01

An efficient optimization approach is described to calibrate the load factors used in the design of structures. The load factors are calibrated so that the structural reliability index is as close as possible to a target reliability value. The optimization procedure is applied to find optimal load factors for the design of structures in accordance with the new version of the Mexico City Building Code (RCDF). For this aim, the combination of factors corresponding to dead load plus live load is considered. The optimal combination is based on a parametric numerical analysis of several reinforced concrete elements, which are designed using different load factor values. The Monte Carlo simulation technique is used. The formulation is applied to different failure modes: flexure, shear, torsion, and compression plus bending of short and slender reinforced concrete elements. Finally, the structural reliability corresponding to the optimal load combination proposed here is compared with that corresponding to the load combination recommended by the current Mexico City Building Code.

  20. Combustion efficiency optimization and virtual testing: A data-mining approach

    SciTech Connect

    Kusiak, A.; Song, Z.

    2006-08-15

    In this paper, a data-mining approach is applied to optimize combustion efficiency of a coal-fired boiler. The combustion process is complex, nonlinear, and nonstationary. A virtual testing procedure is developed to validate the results produced by the optimization methods. The developed procedure quantifies improvements in the combustion efficiency without performing live testing, which is expensive and time consuming. The ideas introduced in this paper are illustrated with an industrial case study.

  1. Optimization of wind plant layouts using an adjoint approach

    DOE PAGES

    King, Ryan N.; Dykes, Katherine; Graf, Peter; ...

    2017-03-10

Using adjoint optimization and three-dimensional steady-state Reynolds-averaged Navier–Stokes (RANS) simulations, we present a new gradient-based approach for optimally siting wind turbines within utility-scale wind plants. By solving the adjoint equations of the flow model, the gradients needed for optimization are found at a cost that is independent of the number of control variables, thereby permitting optimization of large wind plants with many turbine locations. Moreover, compared to the common approach of superimposing prescribed wake deficits onto linearized flow models, the computational efficiency of the adjoint approach allows the use of higher-fidelity RANS flow models which can capture nonlinear turbulent flow physics within a wind plant. The steady-state RANS flow model is implemented in the Python finite-element package FEniCS and the derivation and solution of the discrete adjoint equations are automated within the dolfin-adjoint framework. Gradient-based optimization of wind turbine locations is demonstrated for idealized test cases that reveal new optimization heuristics such as rotational symmetry, local speedups, and nonlinear wake curvature effects. Layout optimization is also demonstrated on more complex wind rose shapes, including a full annual energy production (AEP) layout optimization over 36 inflow directions and 5 wind speed bins.
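
    The key computational property claimed here, gradients at a cost independent of the number of design variables, can be seen in a toy discrete example. The sketch below is not the RANS/FEniCS model of the paper; it assumes a generic linear state equation A u = f(m) with objective J = c·u, where one extra (adjoint) solve yields all parameter sensitivities at once.

        import numpy as np

        # Toy discrete adjoint: state equation A u = B m, objective J = c^T u.
        # The adjoint solve A^T lam = c gives dJ/dm = B^T lam at the cost of ONE
        # extra linear solve, independent of the number of parameters p.
        n, p = 50, 200
        rng = np.random.default_rng(1)
        A = np.eye(n) + 0.01 * rng.standard_normal((n, n))
        B = rng.standard_normal((n, p))     # hypothetical linear source term
        c = rng.standard_normal(n)
        m = rng.standard_normal(p)

        u = np.linalg.solve(A, B @ m)       # forward solve
        lam = np.linalg.solve(A.T, c)       # single adjoint solve
        grad = B.T @ lam                    # all p sensitivities at once

        # Spot-check one component against a finite difference:
        eps, i = 1e-6, 7
        m2 = m.copy(); m2[i] += eps
        u2 = np.linalg.solve(A, B @ m2)
        print(grad[i], (c @ u2 - c @ u) / eps)   # should agree closely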

  2. Views on Montessori Approach by Teachers Serving at Schools Applying the Montessori Approach

    ERIC Educational Resources Information Center

    Atli, Sibel; Korkmaz, A. Merve; Tastepe, Taskin; Koksal Akyol, Aysel

    2016-01-01

    Problem Statement: Further studies on Montessori teachers are required on the grounds that the Montessori approach, which, having been applied throughout the world, holds an important place in the alternative education field. Yet it is novel for Turkey, and there are only a limited number of studies on Montessori teachers in Turkey. Purpose of…

  3. Stochastic Real-Time Optimal Control: A Pseudospectral Approach for Bearing-Only Trajectory Optimization

    DTIC Science & Technology

    2011-09-01

Excerpt (OCR residue from the report's cover page and reference list): dissertation AFIT/DS/ENY/11-24 by Steven M. Ross, "Stochastic Real-Time Optimal Control: A Pseudospectral Approach for Bearing-Only Trajectory Optimization"; cites A.V. Savkin, P.N. Pathirana, and F. Faruqi, "The problem of precision missile guidance: LQR and H-infinity control frameworks," IEEE…

  4. Practical Approaches to Optimize Adolescent Immunization.

    PubMed

    Bernstein, Henry H; Bocchini, Joseph A

    2017-03-01

With the expansion of the adolescent immunization schedule during the past decade, immunization rates notably vary by vaccine and by state. Addressing barriers to improving adolescent vaccination rates is a priority. Every visit can be viewed as an opportunity to update and complete an adolescent's immunizations. It is essential to continue to refine and focus on appropriate techniques for approaching the adolescent patient and parent in the office setting. Health care providers must continuously strive to educate their patients and develop skills that can help parents and adolescents overcome vaccine hesitancy. Research on strategies to achieve higher vaccination rates is ongoing, and it is important to increase the knowledge and implementation of these strategies. This clinical report focuses on increasing adherence to the universally recommended vaccines in the annual adolescent immunization schedule of the American Academy of Pediatrics, the American Academy of Family Physicians, the Centers for Disease Control and Prevention, and the American Congress of Obstetricians and Gynecologists. This will be accomplished by (1) examining strategies that heighten confidence in immunizations and address patient and parental concerns to promote adolescent immunization and (2) exploring how best to approach the adolescent and family to improve immunization rates. Copyright © 2017 by the American Academy of Pediatrics.

  5. A data-intensive approach to mechanistic elucidation applied to chiral anion catalysis

    PubMed Central

    Milo, Anat; Neel, Andrew J.; Toste, F. Dean; Sigman, Matthew S.

    2015-01-01

    Knowledge of chemical reaction mechanisms can facilitate catalyst optimization, but extracting that knowledge from a complex system is often challenging. Here we present a data-intensive method for deriving and then predictively applying a mechanistic model of an enantioselective organic reaction. As a validating case study, we selected an intramolecular dehydrogenative C-N coupling reaction, catalyzed by chiral phosphoric acid derivatives, in which catalyst-substrate association involves weak, non-covalent interactions. Little was previously understood regarding the structural origin of enantioselectivity in this system. Catalyst and substrate substituent effects were probed by systematic physical organic trend analysis. Plausible interactions between the substrate and catalyst that govern enantioselectivity were identified and supported experimentally, indicating that such an approach can afford an efficient means of leveraging mechanistic insight to optimize catalyst design. PMID:25678656
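
    A stripped-down version of the physical-organic trend analysis described here is a multivariate linear fit of the selectivity-derived free-energy difference against catalyst/substrate descriptors. The descriptor values and enantiomeric excesses below are invented; the point is only the shape of the workflow (ee → ΔΔG‡ → least-squares model → prediction).

        import numpy as np

        # Hypothetical descriptor matrix: each row is a catalyst/substrate pair,
        # columns are physical-organic parameters (e.g., a steric and an electronic term).
        X = np.array([[1.0, 0.2], [1.3, 0.5], [0.8, 0.9], [1.7, 0.1], [1.1, 0.7]])
        ee = np.array([0.90, 0.94, 0.70, 0.97, 0.85])   # invented enantiomeric excesses

        # Convert ee to a free-energy difference: ddG = RT ln[(1+ee)/(1-ee)]
        RT = 0.593  # kcal/mol at 298 K
        ddG = RT * np.log((1 + ee) / (1 - ee))

        # Fit ddG ~ b0 + b1*steric + b2*electronic by linear least squares.
        Xa = np.column_stack([np.ones(len(X)), X])
        coef, *_ = np.linalg.lstsq(Xa, ddG, rcond=None)
        print("model coefficients:", coef)
        print("predicted ddG for the first candidate:", Xa[0] @ coef)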

  6. Comparative Properties of Collaborative Optimization and Other Approaches to MDO

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.; Lewis, Robert Michael

    1999-01-01

We discuss criteria by which one can classify, analyze, and evaluate approaches to solving multidisciplinary design optimization (MDO) problems. Central to our discussion is the often overlooked distinction between questions of formulating MDO problems and solving the resulting computational problem. We illustrate our general remarks by comparing several approaches to MDO that have been proposed.

  7. Comparative Properties of Collaborative Optimization and other Approaches to MDO

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.; Lewis, Robert Michael

    1999-01-01

    We discuss criteria by which one can classify, analyze, and evaluate approaches to solving multidisciplinary design optimization (MDO) problems. Central to our discussion is the often overlooked distinction between questions of formulating MDO problems and solving the resulting computational problem. We illustrate our general remarks by comparing several approaches to MDO that have been proposed.

  8. A collective neurodynamic optimization approach to bound-constrained nonconvex optimization.

    PubMed

    Yan, Zheng; Wang, Jun; Li, Guocheng

    2014-07-01

This paper presents a novel collective neurodynamic optimization method for solving nonconvex optimization problems with bound constraints. First, it is proved that a one-layer projection neural network has the property that its equilibria are in one-to-one correspondence with the Karush-Kuhn-Tucker points of the constrained optimization problem. Next, a collective neurodynamic optimization approach is developed by utilizing a group of recurrent neural networks in the framework of particle swarm optimization, emulating the paradigm of brainstorming. Each recurrent neural network carries out precise constrained local search according to its own neurodynamic equations. By iteratively improving the solution quality of each recurrent neural network using information about the locally best known solutions and the globally best known solution, the group can obtain the global optimal solution to a nonconvex optimization problem. The advantages of the proposed collective neurodynamic optimization approach over evolutionary approaches lie in its constraint handling ability and real-time computational efficiency. The effectiveness and characteristics of the proposed approach are illustrated using many multimodal benchmark functions.
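
    A heavily simplified, one-dimensional sketch of the idea follows: each "agent" runs projection neural network dynamics dx/dt = -x + P(x - ∇f(x)), whose equilibria correspond to KKT points of the bound-constrained problem, and a population of such agents started from spread-out initial states stands in for the PSO coordination layer. The objective, bounds, and step sizes are invented.

        import numpy as np

        # Toy bound-constrained nonconvex problem: minimize a multimodal f on [-5, 5].
        f = lambda x: x**2 / 10 - np.cos(2 * np.pi * x)
        grad = lambda x: x / 5 + 2 * np.pi * np.sin(2 * np.pi * x)
        lo, hi = -5.0, 5.0
        proj = lambda x: np.clip(x, lo, hi)

        def pnn_search(x0, dt=0.01, steps=5000):
            # One-layer projection neural network: dx/dt = -x + proj(x - grad f(x)).
            x = x0
            for _ in range(steps):
                x = x + dt * (-x + proj(x - grad(x)))
            return x

        # "Collective" part: many networks from swarm-spread initial states; the best
        # local solution is kept (a much-simplified stand-in for the PSO layer).
        starts = np.linspace(lo, hi, 21)
        best = min((pnn_search(x0) for x0 in starts), key=f)
        print("candidate global minimizer:", best, "f =", f(best))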

  9. Applying Current Approaches to the Teaching of Reading

    ERIC Educational Resources Information Center

    Villanueva de Debat, Elba

    2006-01-01

    This article discusses different approaches to reading instruction for EFL learners based on theoretical frameworks. The author starts with the bottom-up approach to reading instruction, and briefly explains phonics and behaviorist ideas that inform this instructional approach. The author then explains the top-down approach and the new cognitive…

  10. Uncertainty Aware Structural Topology Optimization Via a Stochastic Reduced Order Model Approach

    NASA Technical Reports Server (NTRS)

    Aguilo, Miguel A.; Warner, James E.

    2017-01-01

    This work presents a stochastic reduced order modeling strategy for the quantification and propagation of uncertainties in topology optimization. Uncertainty aware optimization problems can be computationally complex due to the substantial number of model evaluations that are necessary to accurately quantify and propagate uncertainties. This computational complexity is greatly magnified if a high-fidelity, physics-based numerical model is used for the topology optimization calculations. Stochastic reduced order model (SROM) methods are applied here to effectively 1) alleviate the prohibitive computational cost associated with an uncertainty aware topology optimization problem; and 2) quantify and propagate the inherent uncertainties due to design imperfections. A generic SROM framework that transforms the uncertainty aware, stochastic topology optimization problem into a deterministic optimization problem that relies only on independent calls to a deterministic numerical model is presented. This approach facilitates the use of existing optimization and modeling tools to accurately solve the uncertainty aware topology optimization problems in a fraction of the computational demand required by Monte Carlo methods. Finally, an example in structural topology optimization is presented to demonstrate the effectiveness of the proposed uncertainty aware structural topology optimization approach.
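
    The SROM idea, approximating a random input by a small set of optimally chosen samples and probabilities and then propagating each sample through the deterministic model, can be sketched as follows. The target moments, SROM size, and stand-in model are assumed for illustration; real SROM constructions typically match richer distribution statistics than the two moments used here.

        import numpy as np
        from scipy.optimize import minimize

        mu, var, m = 1.0, 0.04, 5      # target input moments (assumed), SROM size

        def srom_error(z):
            x, p = z[:m], z[m:]        # sample locations and probabilities
            return ((p @ x - mu)**2
                    + (p @ x**2 - (var + mu**2))**2)   # match first two moments

        z0 = np.concatenate([np.linspace(0.6, 1.4, m), np.full(m, 1/m)])
        cons = ({'type': 'eq', 'fun': lambda z: z[m:].sum() - 1.0},)
        bnds = [(0.0, None)] * m + [(0.0, 1.0)] * m
        res = minimize(srom_error, z0, constraints=cons, bounds=bnds)
        x, p = res.x[:m], res.x[m:]

        model = lambda q: q**3 + 2*q   # stand-in for an expensive FE evaluation
        print("SROM samples:", x)
        print("E[model] from only m deterministic runs:", p @ model(x))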

  11. KL-optimal experimental design for discriminating between two growth models applied to a beef farm.

    PubMed

    Campos-Barreiro, Santiago; López-Fidalgo, Jesús

    2016-02-01

The body mass growth of organisms is usually represented in terms of what are known as ontogenetic growth models, which represent the dependence of body mass on time. The paper is concerned with the problem of finding an optimal experimental design for discriminating between two competing mass growth models applied to a beef farm. T-optimality was first introduced for discrimination between models, but in this paper KL-optimality, based on the Kullback-Leibler distance, is used to deal with correlated observations since, in this case, observations on a particular animal are not independent.

  12. Multiobjective optimization applied to structural sizing of low cost university-class microsatellite projects

    NASA Astrophysics Data System (ADS)

    Ravanbakhsh, Ali; Franchini, Sebastián

    2012-10-01

In recent years, there has been continuing interest in the participation of university research groups in space technology studies by means of their own microsatellites. Involvement in such projects has some inherent challenges, such as limited budget and facilities. Also, because the main objective of these projects is educational, there are usually uncertainties regarding the in-orbit mission and scientific payloads at the early phases of the project. On the other hand, there are predetermined limitations on their mass and volume budgets, owing to the fact that most of them are launched as auxiliary payloads, for which the launch cost is reduced considerably. The satellite structure subsystem is the one most affected by the launcher constraints; this can affect different aspects, including dimensions, strength and frequency requirements. In this paper, the main focus is on developing a structural design sizing tool containing not only the primary structural properties as variables but also system-level variables such as payload mass budget and satellite total mass and dimensions. This approach enables the design team to obtain better insight into the design in an extended design envelope. The structural design sizing tool is based on analytical structural design formulas and appropriate assumptions, including both static and dynamic models of the satellite. Finally, a Genetic Algorithm (GA) multiobjective optimization is applied to the design space. The result is a Pareto-optimal front based on two objectives, minimum satellite total mass and maximum payload mass budget, which gives the design team useful insight at the early phases of the design.

  13. An approach for aerodynamic optimization of transonic fan blades

    NASA Astrophysics Data System (ADS)

    Khelghatibana, Maryam

Aerodynamic design optimization of transonic fan blades is a highly challenging problem due to the complexity of the flow field inside the fan, the conflicting design requirements and the high-dimensional design space. In order to address all these challenges, an aerodynamic design optimization method is developed in this study. This method automates the design process by integrating a geometrical parameterization method, a CFD solver and numerical optimization methods that can be applied to both single- and multi-point optimization design problems. A multi-level blade parameterization is employed to modify the blade geometry. Numerical analyses are performed by solving 3D RANS equations combined with the SST turbulence model. Genetic algorithms and hybrid optimization methods are applied to solve the optimization problem. In order to verify the effectiveness and feasibility of the optimization method, a single-point optimization problem aiming to maximize design efficiency is formulated and applied to redesign a test case. However, transonic fan blade design is inherently a multi-faceted problem that deals with several objectives such as efficiency, stall margin, and choke margin. The multi-point optimization method proposed in the current study is formulated as a bi-objective problem to maximize design and near-stall efficiencies while maintaining the required design pressure ratio. Enhancing these objectives significantly deteriorates the choke margin, specifically at high rotational speeds. Therefore, another constraint is embedded in the optimization problem in order to prevent the reduction of choke margin at high speeds. Since capturing stall inception is numerically very expensive, stall margin has not been considered as an objective in the problem statement. However, improving near-stall efficiency results in a better performance at stall condition, which could enhance the stall margin. An investigation is therefore performed on the Pareto-optimal solutions to

  14. Target-classification approach applied to active UXO sites

    NASA Astrophysics Data System (ADS)

    Shubitidze, F.; Fernández, J. P.; Shamatava, Irma; Barrowes, B. E.; O'Neill, K.

    2013-06-01

This study is designed to illustrate the discrimination performance at two active UXO sites (Oklahoma's Fort Sill and the Massachusetts Military Reservation) of a set of advanced electromagnetic induction (EMI) inversion/discrimination models, including the orthonormalized volume magnetic source (ONVMS), joint diagonalization (JD), and differential evolution (DE) approaches, whose power and flexibility greatly exceed those of the simple dipole model. The Fort Sill site is highly contaminated by a mix of the following types of munitions: 37-mm target practice tracers, 60-mm illumination mortars, 75-mm and 4.5'' projectiles, 3.5'', 2.36'', and LAAW rockets, antitank mine fuzes with and without hex nuts, practice MK2 and M67 grenades, 2.5'' ballistic windshields, M2A1 mines with/without bases, M19-14 time fuzes, and 40-mm practice grenades with/without cartridges. The MMR site contains targets of yet other types and sizes. In this work we apply our models to EMI data collected using the MetalMapper (MM) and 2 × 2 TEMTADS sensors. The data for each anomaly are inverted to extract estimates of the extrinsic and intrinsic parameters associated with each buried target. (The latter include the total volume magnetic source, or NVMS, which relates to size, shape, and material properties; the former include location, depth, and orientation.) The estimated intrinsic parameters are then used for classification, performed via library matching and statistical classification algorithms; this process yielded prioritized dig lists that were submitted to the Institute for Defense Analyses (IDA) for independent scoring. The models' classification performance is illustrated and assessed based on these independent evaluations.

  15. Direct-aperture optimization applied to selection of beam orientations in intensity-modulated radiation therapy

    NASA Astrophysics Data System (ADS)

    Bedford, J. L.; Webb, S.

    2007-01-01

    Direct-aperture optimization (DAO) was applied to iterative beam-orientation selection in intensity-modulated radiation therapy (IMRT), so as to ensure a realistic segmental treatment plan at each iteration. Nested optimization engines dealt separately with gantry angles, couch angles, collimator angles, segment shapes, segment weights and wedge angles. Each optimization engine performed a random search with successively narrowing step sizes. For optimization of segment shapes, the filtered backprojection (FBP) method was first used to determine desired fluence, the fluence map was segmented, and then constrained direct-aperture optimization was used thereafter. Segment shapes were fully optimized when a beam angle was perturbed, and minimally re-optimized otherwise. The algorithm was compared with a previously reported method using FBP alone at each orientation iteration. An example case consisting of a cylindrical phantom with a hemi-annular planning target volume (PTV) showed that for three-field plans, the method performed better than when using FBP alone, but for five or more fields, neither method provided much benefit over equally spaced beams. For a prostate case, improved bladder sparing was achieved through the use of the new algorithm. A plan for partial scalp treatment showed slightly improved PTV coverage and lower irradiated volume of brain with the new method compared to FBP alone. It is concluded that, although the method is computationally intensive and not suitable for searching large unconstrained regions of beam space, it can be used effectively in conjunction with prior class solutions to provide individually optimized IMRT treatment plans.
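
    The random search with successively narrowing step sizes used by each optimization engine can be sketched generically as below. The toy objective (a quadratic misfit of "segment weights" to a target vector) and all step-size settings are invented; the real algorithm nests several such engines over angles, shapes, weights, and wedges.

        import numpy as np

        rng = np.random.default_rng(42)

        def random_search(f, x0, step0=1.0, shrink=0.5, levels=6, tries=50):
            """Random search with successively narrowing step sizes: sample
            around the incumbent, keep improvements, then shrink the step."""
            x, fx, step = np.asarray(x0, float), f(x0), step0
            for _ in range(levels):
                for _ in range(tries):
                    cand = x + step * rng.standard_normal(x.size)
                    fc = f(cand)
                    if fc < fx:
                        x, fx = cand, fc
                step *= shrink
            return x, fx

        # Toy "segment weight" optimization: misfit to a desired dose vector.
        target = np.array([1.0, 0.5, 0.25])
        f = lambda w: np.sum((np.asarray(w) - target)**2)
        w_opt, err = random_search(f, np.zeros(3))
        print(w_opt, err)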

  16. Direct-aperture optimization applied to selection of beam orientations in intensity-modulated radiation therapy.

    PubMed

    Bedford, J L; Webb, S

    2007-01-21

    Direct-aperture optimization (DAO) was applied to iterative beam-orientation selection in intensity-modulated radiation therapy (IMRT), so as to ensure a realistic segmental treatment plan at each iteration. Nested optimization engines dealt separately with gantry angles, couch angles, collimator angles, segment shapes, segment weights and wedge angles. Each optimization engine performed a random search with successively narrowing step sizes. For optimization of segment shapes, the filtered backprojection (FBP) method was first used to determine desired fluence, the fluence map was segmented, and then constrained direct-aperture optimization was used thereafter. Segment shapes were fully optimized when a beam angle was perturbed, and minimally re-optimized otherwise. The algorithm was compared with a previously reported method using FBP alone at each orientation iteration. An example case consisting of a cylindrical phantom with a hemi-annular planning target volume (PTV) showed that for three-field plans, the method performed better than when using FBP alone, but for five or more fields, neither method provided much benefit over equally spaced beams. For a prostate case, improved bladder sparing was achieved through the use of the new algorithm. A plan for partial scalp treatment showed slightly improved PTV coverage and lower irradiated volume of brain with the new method compared to FBP alone. It is concluded that, although the method is computationally intensive and not suitable for searching large unconstrained regions of beam space, it can be used effectively in conjunction with prior class solutions to provide individually optimized IMRT treatment plans.

  17. A multidisciplinary approach to optimization of controlled space structures

    NASA Technical Reports Server (NTRS)

    Woodard, Stanley E.; Padula, Sharon L.; Graves, Philip C.; James, Benjamin B.

    1990-01-01

A fundamental problem facing controls-structures analysts is determining the trade-offs between structural design parameters and control design parameters in meeting particular performance criteria. Developing a general optimization-based design methodology integrating the disciplines of structural dynamics and controls is a logical approach. The objective of this study is to develop such a method. Classical design methodology involves three phases. The first is structural optimization, wherein structural member sizes are varied to minimize structural mass, subject to open-loop frequency constraints. The next phase integrates control and structure design, with control gains as additional design variables. The final phase is analysis of the 'optimal' integrated design, considering 'real' actuators and 'standard' member sizes. The control gains could be further optimized for the fixed structure, and actuator saturation constraints could be imposed. However, such an approach does not take full advantage of opportunities to tailor the structure and control system design as one system.

  18. A simple approach for predicting time-optimal slew capability

    NASA Astrophysics Data System (ADS)

    King, Jeffery T.; Karpenko, Mark

    2016-03-01

The productivity of space-based imaging sensors in collecting images is directly related to the agility of the host spacecraft. Increasing satellite agility, without changing the attitude control hardware, can be accomplished by using optimal control to design shortest-time maneuvers. The performance improvement that can be obtained using optimal control is tied to the specific configuration of the satellite, e.g. mass properties and reaction wheel array geometry. Therefore, it is generally difficult to predict performance without an extensive simulation study. This paper presents a simple idea for estimating the agility enhancement that can be obtained using optimal control without the need to solve any optimal control problems. The approach is based on the concept of the agility envelope, which expresses the capability of a spacecraft in terms of a three-dimensional agility volume. Validation of this new approach is conducted using both simulation and on-orbit data.

  19. Reliability based structural optimization - A simplified safety index approach

    NASA Technical Reports Server (NTRS)

    Reddy, Mahidhar V.; Grandhi, Ramana V.; Hopkins, Dale A.

    1993-01-01

A probabilistic optimal design methodology for complex structures modelled with finite element methods is presented. The main emphasis is on developing probabilistic analysis tools suitable for optimization. An advanced second-moment method is employed to evaluate the failure probability of the performance function. The safety indices are interpolated using the information at the mean and the most probable failure point. The minimum-weight design with an improved safety index limit is achieved by using the extended interior penalty method of optimization. Numerical examples covering beam and plate structures are presented to illustrate the design approach. The results obtained by using the proposed approach are compared with those obtained by using existing probabilistic optimization techniques.

  20. Departures From Optimality When Pursuing Multiple Approach or Avoidance Goals

    PubMed Central

    2016-01-01

    This article examines how people depart from optimality during multiple-goal pursuit. The authors operationalized optimality using dynamic programming, which is a mathematical model used to calculate expected value in multistage decisions. Drawing on prospect theory, they predicted that people are risk-averse when pursuing approach goals and are therefore more likely to prioritize the goal in the best position than the dynamic programming model suggests is optimal. The authors predicted that people are risk-seeking when pursuing avoidance goals and are therefore more likely to prioritize the goal in the worst position than is optimal. These predictions were supported by results from an experimental paradigm in which participants made a series of prioritization decisions while pursuing either 2 approach or 2 avoidance goals. This research demonstrates the usefulness of using decision-making theories and normative models to understand multiple-goal pursuit. PMID:26963081
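
    The dynamic-programming benchmark used to define optimality can be illustrated with a tiny two-goal model: on each trial the decision maker chooses which goal to work on, the chosen goal advances with some probability, and the value function is computed by backward recursion over remaining-progress states. The success probability, horizon, and goal requirements below are invented.

        from functools import lru_cache

        # Two approach goals; on each of T trials you work on one goal, which
        # then advances with probability P. Value = expected goals completed.
        P, T = 0.6, 10       # assumed success probability and horizon
        NEED = (3, 6)        # goal A needs 3 more steps, goal B needs 6

        @lru_cache(maxsize=None)
        def value(a, b, t):
            done = (a == 0) + (b == 0)
            if t == 0 or done == 2:
                return float(done)
            best = -1.0
            for goal in (0, 1):
                if (a, b)[goal] == 0:
                    continue                       # goal already complete
                na, nb = (a - 1, b) if goal == 0 else (a, b - 1)
                ev = P * value(na, nb, t - 1) + (1 - P) * value(a, b, t - 1)
                best = max(best, ev)
            return best

        print("expected goals completed under optimal prioritization:",
              value(*NEED, T))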

  1. An Integrated Optimization Design Method Based on Surrogate Modeling Applied to Diverging Duct Design

    NASA Astrophysics Data System (ADS)

    Hanan, Lu; Qiushi, Li; Shaobin, Li

    2016-12-01

This paper presents an integrated optimization design method in which uniform design, response surface methodology and a genetic algorithm are used in combination. In detail, uniform design is used to select the experimental sampling points in the experimental domain, and the system performance at those points is evaluated by means of computational fluid dynamics to construct a database. After that, response surface methodology is employed to generate a surrogate mathematical model relating the optimization objective to the design variables. Subsequently, a genetic algorithm is applied to the surrogate model to acquire the optimal solution subject to some constraints. The method has been applied to the optimization design of an axisymmetric diverging duct, dealing with three design variables including one qualitative variable and two quantitative variables. The method performs well in improving the duct aerodynamic performance and can also be applied to wider fields of mechanical design, serving as a useful tool for engineering designers by reducing design time and computational cost.
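
    A compact sketch of the three-stage pipeline (sample the design space, fit a quadratic response surface, search the surrogate with an evolutionary optimizer) follows. The toy "CFD" metric and sampling plan are invented, random uniform sampling stands in for the paper's uniform design, and SciPy's differential evolution stands in for the genetic algorithm.

        import numpy as np
        from scipy.optimize import differential_evolution

        # Step 1: sample two design variables and run the (toy) evaluator.
        rng = np.random.default_rng(3)
        X = rng.uniform(0.0, 1.0, size=(30, 2))
        cfd = lambda v: ((v[..., 0] - 0.3)**2 + 2 * (v[..., 1] - 0.7)**2
                         + 0.1 * v[..., 0] * v[..., 1])   # stand-in metric
        y = cfd(X)

        # Step 2: quadratic response surface fit by linear least squares.
        def basis(v):
            x1, x2 = v[..., 0], v[..., 1]
            return np.stack([np.ones_like(x1), x1, x2, x1*x2, x1**2, x2**2],
                            axis=-1)
        coef, *_ = np.linalg.lstsq(basis(X), y, rcond=None)
        surrogate = lambda v: (basis(np.atleast_2d(v)) @ coef)[0]

        # Step 3: evolutionary search on the cheap surrogate, not the evaluator.
        res = differential_evolution(surrogate, [(0, 1), (0, 1)])
        print("surrogate optimum:", res.x, "predicted objective:", res.fun)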

  2. Finite strain response of crimped fibers under uniaxial traction: An analytical approach applied to collagen

    NASA Astrophysics Data System (ADS)

    Marino, Michele; Wriggers, Peter

    2017-01-01

Composite materials reinforced by crimped fibers appear in a number of advanced structural applications. Accordingly, constitutive equations describing their anisotropic behavior and explicitly accounting for fiber properties are needed for modeling and design purposes. To this aim, the finite strain response of crimped beams under uniaxial traction is herein addressed by obtaining analytical relationships based on the Principle of Virtual Work. The model is applied to collagen fibers in soft biological tissues, coupling geometric nonlinearities related to fiber crimp with material nonlinearities due to nanoscale mechanisms. Several numerical applications are presented, addressing the influence of geometric and material features. Available experimental data for tendons are reproduced, integrating the proposed approach within an optimization procedure for data fitting. The obtained results highlight the effectiveness of the proposed approach in correlating fiber structure with composite material mechanics.

  3. A general optimization method applied to a vdW-DF functional for water

    NASA Astrophysics Data System (ADS)

    Fritz, Michelle; Soler, Jose M.; Fernandez-Serra, Marivi

In particularly delicate systems, like liquid water, ab initio exchange and correlation functionals are simply not accurate enough for many practical applications. In these cases, fitting the functional to reference data is a sensible alternative to empirical interatomic potentials. However, a global optimization requires functional forms that depend on many parameters, and the usual trial-and-error strategy becomes cumbersome and suboptimal. We have developed a general and powerful optimization scheme called data projection onto parameter space (DPPS) and applied it to the optimization of a van der Waals density functional (vdW-DF) for water. In an arbitrarily large parameter space, DPPS solves for a vector of unknown parameters from a given set of known data, with poorly sampled subspaces determined by the physically motivated functional shape of ab initio functionals using Bayes' theorem. We present a new GGA exchange functional that has been optimized with the DPPS method for 1-body, 2-body, and 3-body energies of water systems, along with results from testing the performance of the optimized functional when applied to the calculation of ice cohesion energies and to ab initio liquid water simulations. We found that our optimized functional improves the description of both liquid water and ice when compared to other versions of GGA exchange.

  4. Optimal purchasing of raw materials: A data-driven approach

    SciTech Connect

    Muteki, K.; MacGregor, J.F.

    2008-06-15

An approach to the optimal purchasing of raw materials that will achieve a desired product quality at minimum cost is presented. A PLS (Partial Least Squares) approach to formulation modeling is used to combine databases on raw material properties and on past process operations and to relate these to final product quality. These PLS latent variable models are then used in a sequential quadratic programming (SQP) or mixed integer nonlinear programming (MINLP) optimization to select the raw materials, among all those available on the market, the ratios in which to combine them, and the process conditions under which they should be processed. The approach is illustrated for the optimal purchasing of metallurgical coals for coke making in the steel industry.
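
    If the PLS quality model is replaced by known linear quality attributes, the purchasing step reduces to a classic blending program, which conveys the structure of the optimization. The sketch below uses invented coal costs, ash and sulfur attributes, and quality limits, with a linear program standing in for the paper's SQP/MINLP over PLS latent variables.

        import numpy as np
        from scipy.optimize import linprog

        # Hypothetical coal blending: choose purchase fractions x of four coals
        # to minimize cost subject to linear quality constraints.
        cost = np.array([60.0, 75.0, 50.0, 90.0])    # $/tonne, invented
        ash  = np.array([9.0, 7.5, 11.0, 6.0])       # quality attributes per coal
        sulf = np.array([0.8, 0.5, 1.1, 0.4])

        A_ub = np.vstack([ash, sulf])                # blended ash <= 9, sulfur <= 0.7
        b_ub = np.array([9.0, 0.7])
        A_eq = np.ones((1, 4)); b_eq = [1.0]         # fractions sum to one

        res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=[(0, 1)] * 4)
        print("optimal blend fractions:", res.x, "cost:", res.fun)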

  5. A Surrogate Approach to the Experimental Optimization of Multielement Airfoils

    NASA Technical Reports Server (NTRS)

    Otto, John C.; Landman, Drew; Patera, Anthony T.

    1996-01-01

The incorporation of experimental test data into the optimization process is accomplished through the use of Bayesian-validated surrogates. In the surrogate approach, a surrogate for the experiment (e.g., a response surface) serves in the optimization process. The validation step of the framework provides a qualitative assessment of the surrogate quality, and bounds the surrogate-for-experiment error on designs "near" surrogate-predicted optimal designs. The utility of the framework is demonstrated through its application to the experimental selection of the trailing-edge flap position to achieve a design lift coefficient for a three-element airfoil.

  6. A Collective Neurodynamic Optimization Approach to Nonnegative Matrix Factorization.

    PubMed

    Fan, Jianchao; Wang, Jun

    2017-10-01

Nonnegative matrix factorization (NMF) is an advanced method for nonnegative feature extraction, with widespread applications. However, the NMF solution often entails solving a global optimization problem with a nonconvex objective function and nonnegativity constraints. This paper presents a collective neurodynamic optimization (CNO) approach to this challenging problem. The proposed collective neurodynamic system consists of a population of recurrent neural networks (RNNs) at the lower level and a particle swarm optimization (PSO) algorithm with wavelet mutation at the upper level. The RNNs act as search agents carrying out precise local searches according to their neurodynamics and initial conditions. The PSO algorithm coordinates and guides the RNNs with updated initial states toward the global optimal solution(s). A wavelet mutation operator is added to enhance PSO exploration diversity. Through iterative interaction and improvement of the locally best solutions of the RNNs and the global best positions of the whole population, the population-based neurodynamic system is almost surely able to achieve global optimality for the NMF problem. It is proved that the group-best state converges to the global optimal solution with probability one. The experimental results substantiate the efficacy and superiority of the CNO approach to bound-constrained global optimization with several benchmark nonconvex functions and NMF-based clustering with benchmark data sets in comparison with state-of-the-art algorithms.

  7. Computational Approaches to Simulation and Optimization of Global Aircraft Trajectories

    NASA Technical Reports Server (NTRS)

    Ng, Hok Kwan; Sridhar, Banavar

    2016-01-01

This study examines three possible approaches to improving the speed of generating wind-optimal routes for air traffic at the national or global level: (a) using the resources of a supercomputer, (b) running the computations on multiple commercially available computers, and (c) implementing the same algorithms in NASA's Future ATM Concepts Evaluation Tool (FACET); each is compared with a standard implementation run on a single CPU. Wind-optimal aircraft trajectories are computed using global air traffic schedules. The run time and wait time on the supercomputer for trajectory optimization using various numbers of CPUs, ranging from 80 to 10,240 units, are compared with the total computational time for running the same computation on a single desktop computer and on multiple commercially available computers, to assess the potential enhancement from parallel processing on computer clusters. This study also re-implements the trajectory optimization algorithm to further reduce computational time through algorithmic modifications, and integrates it with FACET so that existing FACET applications can use the new features to calculate time-optimal routes between worldwide airport pairs in a wind field. The implementations of the trajectory optimization algorithms use the MATLAB, Python, and Java programming languages. The performance evaluations are done by comparing computational efficiencies, based on the potential application of optimized trajectories. The paper shows that, in the absence of special privileges on a supercomputer, a cluster of commercially available computers provides a feasible approach for national and global air traffic system studies.
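
    Approach (b), spreading independent airport-pair computations over multiple processors, follows the embarrassingly parallel pattern sketched below. A great-circle distance stands in for the wind-optimal trajectory solve so the example is self-contained; the coordinates are arbitrary.

        import math
        from multiprocessing import Pool

        def route_cost(pair):
            # Stand-in for one wind-optimal trajectory solve: here just a
            # spherical-law-of-cosines great-circle distance in km.
            (lat1, lon1), (lat2, lon2) = pair
            p1, p2 = math.radians(lat1), math.radians(lat2)
            dlon = math.radians(lon2 - lon1)
            c = (math.sin(p1) * math.sin(p2)
                 + math.cos(p1) * math.cos(p2) * math.cos(dlon))
            return 6371.0 * math.acos(max(-1.0, min(1.0, c)))

        airport_pairs = [((37.6, -122.4), (40.6, -73.8)),
                         ((51.5, -0.5), (35.6, 139.8)),
                         ((-33.9, 151.2), (1.4, 104.0))]

        if __name__ == "__main__":
            with Pool(processes=4) as pool:
                costs = pool.map(route_cost, airport_pairs)  # one pair per worker
            print(costs)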

  8. A Neurodynamic Optimization Approach to Bilevel Quadratic Programming.

    PubMed

    Qin, Sitian; Le, Xinyi; Wang, Jun

    2016-08-19

This paper presents a neurodynamic optimization approach to bilevel quadratic programming (BQP). Based on the Karush-Kuhn-Tucker (KKT) theorem, the BQP problem is reduced to a one-level mathematical program subject to complementarity constraints (MPCC). It is proved that the global solution of the MPCC is the minimal one among the optimal solutions to multiple convex optimization subproblems. A recurrent neural network is developed for solving these convex optimization subproblems. From any initial state, the state of the proposed neural network converges to an equilibrium point of the neural network, which is precisely the optimal solution of the corresponding convex optimization subproblem. Compared with existing recurrent neural networks for BQP, the proposed neural network is guaranteed to deliver the exact optimal solution to any convex BQP problem. Moreover, it is proved that the proposed neural network for bilevel linear programming converges to an equilibrium point in finite time. Finally, three numerical examples are elaborated to substantiate the efficacy of the proposed approach.

  9. A Numerical Optimization Approach for Tuning Fuzzy Logic Controllers

    NASA Technical Reports Server (NTRS)

    Woodard, Stanley E.; Garg, Devendra P.

    1998-01-01

This paper develops a method to tune fuzzy controllers using numerical optimization. The main attribute of this approach is that it allows fuzzy logic controllers to be tuned to achieve global performance requirements. Furthermore, this approach allows design constraints to be imposed during the tuning process. The method tunes the controller by parameterizing the membership functions for error, change-in-error and control output. The resulting parameters form a design vector which is iteratively changed to minimize an objective function; the minimized objective function corresponds to optimal system performance. A spacecraft-mounted science instrument line-of-sight pointing control problem is used to demonstrate results.
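
    The tuning idea, parameterizing membership functions and an output gain and minimizing a closed-loop cost with a numerical optimizer, can be sketched on a first-order plant as follows. The membership-function shapes, plant, setpoint, and bounds are all invented; the paper's application is a spacecraft pointing loop, not this toy system.

        import numpy as np
        from scipy.optimize import minimize

        tri = lambda x, a, b, c: np.maximum(0.0,
            np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)))

        def fuzzy_control(e, p):
            # Three memberships on error (negative/zero/positive) with tunable
            # shape parameters p[0], p[1]; output is a weighted sum of singletons.
            w = np.array([tri(e, -2, -p[0], 0), tri(e, -p[1], 0, p[1]),
                          tri(e, 0, p[0], 2)])
            u = np.array([-1.0, 0.0, 1.0]) @ w
            return p[2] * u / (w.sum() + 1e-9)      # p[2] is an output gain

        def closed_loop_cost(p):
            x, cost, dt = 0.0, 0.0, 0.05
            for _ in range(200):                    # track setpoint r = 1
                e = 1.0 - x
                x = x + dt * (-x + fuzzy_control(e, p))   # first-order plant
                cost += dt * e**2                   # integrated squared error
            return cost

        res = minimize(closed_loop_cost, x0=[1.0, 0.5, 2.0], method="L-BFGS-B",
                       bounds=[(0.1, 1.9), (0.1, 1.9), (0.1, 5.0)])
        print("tuned membership/gain parameters:", res.x)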

  10. Apply radiomics approach for early stage prognostic evaluation of ovarian cancer patients: a preliminary study

    NASA Astrophysics Data System (ADS)

    Danala, Gopichandh; Wang, Yunzhi; Thai, Theresa; Gunderson, Camille; Moxley, Katherine; Moore, Kathleen; Mannel, Robert; Liu, Hong; Zheng, Bin; Qiu, Yuchen

    2017-03-01

Predicting metastatic tumor response to chemotherapy at an early stage is critically important for improving the efficacy of clinical trials testing new chemotherapy drugs. However, using the current Response Evaluation Criteria In Solid Tumors (RECIST) guidelines yields only limited accuracy in predicting tumor response. In order to address this clinical challenge, we applied a radiomics approach to develop a new quantitative image analysis scheme, aiming to accurately assess tumor response to new chemotherapy treatment for advanced ovarian cancer patients. During the experiment, a retrospective dataset containing 57 patients was assembled, each of whom has two sets of CT images: pre-therapy and 4-6 week follow-up CT images. A radiomics-based image analysis scheme was then applied to these images, composed of three steps. First, the tumors depicted on the CT images were segmented by a hybrid tumor segmentation scheme. Then, a total of 115 features were computed from the segmented tumors, grouped as 1) volume-based features, 2) density-based features, and 3) wavelet features. Finally, an optimal feature cluster was selected based on single-feature performance, and an equal-weighted fusion rule was applied to generate the final predictive score. The results demonstrated that the single feature achieved an area under the receiver operating characteristic curve (AUC) of 0.838+/-0.053. This investigation demonstrates that the radiomics approach may have potential in the development of a high-accuracy prediction model for early-stage prognostic assessment of ovarian cancer patients.

  11. Financial Planning for Information Technology: Conventional Approaches Need Not Apply.

    ERIC Educational Resources Information Center

    Falduto, Ellen F.

    1999-01-01

    Rapid advances in information technology have rendered conventional approaches to planning and budgeting useless, and no single method is universally appropriate. The most successful planning efforts are consistent with the institution's overall plan, and may combine conventional, opportunistic, and entrepreneurial approaches. Chief financial…

  12. A hybrid approach to near-optimal launch vehicle guidance

    NASA Technical Reports Server (NTRS)

    Leung, Martin S. K.; Calise, Anthony J.

    1992-01-01

This paper evaluates a proposed hybrid analytical/numerical approach to launch-vehicle guidance for ascent to orbit injection. The feedback-guidance approach is based on a piecewise nearly analytic zero-order solution evaluated using a collocation method. The zero-order solution is then improved through a regular perturbation analysis, wherein the neglected dynamics are corrected in the first-order term. For real-time implementation, the guidance approach requires solving a set of small-dimension nonlinear algebraic equations and performing quadrature. Assessment of performance and reliability is carried out through closed-loop simulation for a vertically launched 2-stage heavy-lift capacity vehicle to a low earth orbit. The solutions are compared with optimal solutions generated from a multiple shooting code. In the example, the guidance approach delivers over 99.9 percent of optimal performance and terminal constraint accuracy.

  13. A hybrid approach to near-optimal launch vehicle guidance

    NASA Technical Reports Server (NTRS)

    Leung, Martin S. K.; Calise, Anthony J.

    1992-01-01

This paper evaluates a proposed hybrid analytical/numerical approach to launch-vehicle guidance for ascent to orbit injection. The feedback-guidance approach is based on a piecewise nearly analytic zero-order solution evaluated using a collocation method. The zero-order solution is then improved through a regular perturbation analysis, wherein the neglected dynamics are corrected in the first-order term. For real-time implementation, the guidance approach requires solving a set of small-dimension nonlinear algebraic equations and performing quadrature. Assessment of performance and reliability is carried out through closed-loop simulation for a vertically launched 2-stage heavy-lift capacity vehicle to a low earth orbit. The solutions are compared with optimal solutions generated from a multiple shooting code. In the example, the guidance approach delivers over 99.9 percent of optimal performance and terminal constraint accuracy.

  14. An optimal control approach to probabilistic Boolean networks

    NASA Astrophysics Data System (ADS)

    Liu, Qiuli

    2012-12-01

External control of some genes in a genetic regulatory network is useful for avoiding undesirable states associated with some diseases. For this purpose, a number of stochastic optimal control approaches have been proposed. Probabilistic Boolean networks (PBNs), as powerful tools for modeling gene regulatory systems, have attracted considerable attention in systems biology. In this paper, we deal with the problem of optimal intervention in a PBN with the help of the theory of discrete-time Markov decision processes. Specifically, we first formulate a control model for a PBN as a first-passage model for discrete-time Markov decision processes and then find, using a value iteration algorithm, optimal effective treatments with the minimal expected first-passage time over the space of all possible treatments. In order to demonstrate the feasibility of our approach, an example is also presented.
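
    A minimal version of the proposed first-passage formulation is shown below: a small Markov decision process stands in for the PBN, each intervention selects a transition matrix, and value iteration returns the minimal expected time to reach the desirable state together with the optimal intervention per state. All transition probabilities are invented.

        import numpy as np

        # Tiny MDP standing in for a PBN: 4 network states, 2 interventions,
        # goal = reach the desirable (absorbing) state 3. P[a][s, s'] gives the
        # transition probability under action a (invented numbers).
        P = [np.array([[0.7, 0.2, 0.1, 0.0],
                       [0.3, 0.4, 0.2, 0.1],
                       [0.1, 0.3, 0.4, 0.2],
                       [0.0, 0.0, 0.0, 1.0]]),
             np.array([[0.4, 0.3, 0.1, 0.2],
                       [0.1, 0.3, 0.3, 0.3],
                       [0.1, 0.1, 0.3, 0.5],
                       [0.0, 0.0, 0.0, 1.0]])]

        # Value iteration for minimal expected first-passage time to state 3;
        # each step costs 1 until the target state is reached.
        V = np.zeros(4)
        for _ in range(500):
            Q = np.stack([1.0 + Pa @ V for Pa in P])   # cost-to-go per action
            V_new = Q.min(axis=0)
            V_new[3] = 0.0                             # target state costs nothing
            if np.max(np.abs(V_new - V)) < 1e-10:
                break
            V = V_new
        policy = np.stack([1.0 + Pa @ V for Pa in P]).argmin(axis=0)
        print("expected first-passage times:", V, "optimal intervention:", policy)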

  15. Homotopy approach to optimal, linear quadratic, fixed architecture compensation

    NASA Technical Reports Server (NTRS)

    Mercadal, Mathieu

    1991-01-01

Optimal linear quadratic Gaussian compensators with constrained architecture are a sensible way to generate good multivariable feedback systems meeting strict implementation requirements. The optimality conditions obtained from the constrained linear quadratic Gaussian problem are a set of highly coupled matrix equations that cannot be solved algebraically except when the compensator is centralized and full order. An alternative to the use of general parameter optimization methods for solving the problem is to use homotopy. The benefit of the method is that it uses the solution to a simplified problem as a starting point, and the final solution is then obtained by solving a simple differential equation. This paper investigates the convergence properties and the limitations of such an approach and sheds some light on the nature and the number of solutions of the constrained linear quadratic Gaussian problem. It also demonstrates the usefulness of homotopy on an example of an optimal decentralized compensator.

  16. New approaches to optimization in aerospace conceptual design

    NASA Technical Reports Server (NTRS)

    Gage, Peter J.

    1995-01-01

Aerospace design can be viewed as an optimization process, but conceptual studies are rarely performed using formal search algorithms. Three issues that restrict the success of automatic search are identified in this work. New approaches are introduced to address the integration of analyses and optimizers, to avoid the need for accurate gradient information and a smooth search space (required for calculus-based optimization), and to remove the restrictions imposed by fixed-complexity problem formulations. (1) Optimization should be performed in a flexible environment. A quasi-procedural architecture is used to conveniently link analysis modules and automatically coordinate their execution; it efficiently controls large-scale design tasks. (2) Genetic algorithms provide a search method for discontinuous or noisy domains. The utility of genetic optimization is demonstrated here, but parameter encodings and constraint-handling schemes must be carefully chosen to avoid premature convergence to suboptimal designs. The relationship between genetic and calculus-based methods is explored. (3) A variable-complexity genetic algorithm is created to permit flexible parameterization, so that the level of description can change during optimization. This new optimizer automatically discovers novel designs in structural and aerodynamic tasks.

  17. Experimental and applied approaches to control Salmonella in broiler processing

    USDA-ARS?s Scientific Manuscript database

    Control of Salmonella on poultry meat should ideally include efforts from the breeder farm to the fully processed and further processed product on through consumer education. In the U.S. regulatory scrutiny is often applied at the chill tank. Therefore, processing parameters are an important compo...

  18. Applying Digital Sensor Technology: A Problem-Solving Approach

    ERIC Educational Resources Information Center

    Seedhouse, Paul; Knight, Dawn

    2016-01-01

    There is currently an explosion in the number and range of new devices coming onto the technology market that use digital sensor technology to track aspects of human behaviour. In this article, we present and exemplify a three-stage model for the application of digital sensor technology in applied linguistics that we have developed, namely,…

  19. Teaching Social Science Research: An Applied Approach Using Community Resources.

    ERIC Educational Resources Information Center

    Gilliland, M. Janice; And Others

A four-week summer project for 100 rural tenth graders in the University of Alabama's Biomedical Sciences Preparation Program (BioPrep) enabled students to acquire and apply social science research skills. The students investigated drinking water quality in three rural Alabama counties by interviewing local officials, health workers, and…

  20. Defining and Applying a Functionality Approach to Intellectual Disability

    ERIC Educational Resources Information Center

    Luckasson, R.; Schalock, R. L.

    2013-01-01

    Background: The current functional models of disability do not adequately incorporate significant changes of the last three decades in our understanding of human functioning, and how the human functioning construct can be applied to clinical functions, professional practices and outcomes evaluation. Methods: The authors synthesise current…

  1. Applying Digital Sensor Technology: A Problem-Solving Approach

    ERIC Educational Resources Information Center

    Seedhouse, Paul; Knight, Dawn

    2016-01-01

    There is currently an explosion in the number and range of new devices coming onto the technology market that use digital sensor technology to track aspects of human behaviour. In this article, we present and exemplify a three-stage model for the application of digital sensor technology in applied linguistics that we have developed, namely,…

  2. Defining and Applying a Functionality Approach to Intellectual Disability

    ERIC Educational Resources Information Center

    Luckasson, R.; Schalock, R. L.

    2013-01-01

    Background: The current functional models of disability do not adequately incorporate significant changes of the last three decades in our understanding of human functioning, and how the human functioning construct can be applied to clinical functions, professional practices and outcomes evaluation. Methods: The authors synthesise current…

  3. A comprehensive business planning approach applied to healthcare.

    PubMed

    Calpin-Davies, P

    The White Paper The New NHS: Modern, Dependable (DoH 1997) clearly expects nurses, in partnership with other professionals, to contribute to the planning and shaping of future healthcare services. This article proposes that comprehensive models of alternative planning frameworks, when applied to healthcare services, can provide nurses with an understanding of the skills they require to participate in the planning process.

  4. PARETO: A novel evolutionary optimization approach to multiobjective IMRT planning.

    PubMed

    Fiege, Jason; McCurdy, Boyd; Potrebko, Peter; Champion, Heather; Cull, Andrew

    2011-09-01

    In radiation therapy treatment planning, the clinical objectives of uniform high dose to the planning target volume (PTV) and low dose to the organs-at-risk (OARs) are invariably in conflict, often requiring compromises to be made between them when selecting the best treatment plan for a particular patient. In this work, the authors introduce Pareto-Aware Radiotherapy Evolutionary Treatment Optimization (pareto), a multiobjective optimization tool to solve for beam angles and fluence patterns in intensity-modulated radiation therapy (IMRT) treatment planning. pareto is built around a powerful multiobjective genetic algorithm (GA), which allows us to treat the problem of IMRT treatment plan optimization as a combined monolithic problem, where all beam fluence and angle parameters are treated equally during the optimization. We have employed a simple parameterized beam fluence representation with a realistic dose calculation approach, incorporating patient scatter effects, to demonstrate feasibility of the proposed approach on two phantoms. The first phantom is a simple cylindrical phantom containing a target surrounded by three OARs, while the second phantom is more complex and represents a paraspinal patient. pareto results in a large database of Pareto nondominated solutions that represent the necessary trade-offs between objectives. The solution quality was examined for several PTV and OAR fitness functions. The combination of a conformity-based PTV fitness function and a dose-volume histogram (DVH) or equivalent uniform dose (EUD) -based fitness function for the OAR produced relatively uniform and conformal PTV doses, with well-spaced beams. A penalty function added to the fitness functions eliminates hotspots. Comparison of resulting DVHs to those from treatment plans developed with a single-objective fluence optimizer (from a commercial treatment planning system) showed good correlation. Results also indicated that pareto shows promise in optimizing the number
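
    The core bookkeeping behind a Pareto-aware optimizer, filtering a population down to its nondominated members, can be sketched in a few lines. The plan "scores" below are invented two-objective values (both to be minimized); pareto itself wraps such a filter inside a full genetic algorithm with fluence and beam-angle genes.

        import numpy as np

        # Minimization in both objectives, e.g. (PTV dose nonuniformity,
        # mean OAR dose). Each row is one hypothetical candidate plan.
        scores = np.array([[0.10, 0.80], [0.20, 0.40], [0.15, 0.55],
                           [0.30, 0.20], [0.25, 0.45], [0.50, 0.10]])

        def nondominated(F):
            keep = []
            for i, f in enumerate(F):
                # Plan i is dominated if some plan is <= in all objectives
                # and strictly < in at least one.
                dominated = np.any(np.all(F <= f, axis=1) & np.any(F < f, axis=1))
                if not dominated:
                    keep.append(i)
            return keep

        front = nondominated(scores)
        print("Pareto-nondominated plans:", front)
        print(scores[front])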

  5. PARETO: A novel evolutionary optimization approach to multiobjective IMRT planning

    SciTech Connect

    Fiege, Jason; McCurdy, Boyd; Potrebko, Peter; Champion, Heather; Cull, Andrew

    2011-09-15

    Purpose: In radiation therapy treatment planning, the clinical objectives of uniform high dose to the planning target volume (PTV) and low dose to the organs-at-risk (OARs) are invariably in conflict, often requiring compromises to be made between them when selecting the best treatment plan for a particular patient. In this work, the authors introduce Pareto-Aware Radiotherapy Evolutionary Treatment Optimization (pareto), a multiobjective optimization tool to solve for beam angles and fluence patterns in intensity-modulated radiation therapy (IMRT) treatment planning. Methods: pareto is built around a powerful multiobjective genetic algorithm (GA), which allows us to treat the problem of IMRT treatment plan optimization as a combined monolithic problem, where all beam fluence and angle parameters are treated equally during the optimization. We have employed a simple parameterized beam fluence representation with a realistic dose calculation approach, incorporating patient scatter effects, to demonstrate feasibility of the proposed approach on two phantoms. The first phantom is a simple cylindrical phantom containing a target surrounded by three OARs, while the second phantom is more complex and represents a paraspinal patient. Results: pareto results in a large database of Pareto nondominated solutions that represent the necessary trade-offs between objectives. The solution quality was examined for several PTV and OAR fitness functions. The combination of a conformity-based PTV fitness function and a dose-volume histogram (DVH) or equivalent uniform dose (EUD) -based fitness function for the OAR produced relatively uniform and conformal PTV doses, with well-spaced beams. A penalty function added to the fitness functions eliminates hotspots. Comparison of resulting DVHs to those from treatment plans developed with a single-objective fluence optimizer (from a commercial treatment planning system) showed good correlation. Results also indicated that pareto shows

  6. Laser therapy applying the differential approaches and biophotometrical elements

    NASA Astrophysics Data System (ADS)

    Mamedova, F. M.; Akbarova, Ju. A.; Umarova, D. A.; Yudin, G. A.

    1995-04-01

The aim of the present paper is to present biophotometrical data obtained from various anatomic-topographical areas of the mouth, to be used for developing differential approaches to laser therapy in dentistry. Biophotometrical measurements were carried out using a portable biophotometer that forms part of a multifunctional system for laser therapy, acupuncture and biophotometry referred to as 'Aura-laser'. The results of the biophotometrical measurements allow the implementation of differential approaches to laser therapy of parodontitis and mucous mouth tissue, taking their clinical form and rate of disease into account.

  7. New approaches under development: cardiovascular embryology applied to heart disease.

    PubMed

    Degenhardt, Karl; Singh, Manvendra K; Epstein, Jonathan A

    2013-01-01

    Despite many innovative advances in cardiology over the past 50 years, heart disease remains a major killer. The steady progress that continues to be made in diagnostics and therapeutics is offset by the cardiovascular consequences of the growing epidemics of obesity and diabetes. Truly innovative approaches on the horizon have been greatly influenced by new insights in cardiovascular development. In particular, research in stem cell biology, the cardiomyocyte lineage, and the interactions of the myocardium and epicardium have opened the door to new approaches for healing the injured heart.

  8. Pilot-testing an applied competency-based approach to health human resources planning.

    PubMed

    Tomblin Murphy, Gail; MacKenzie, Adrian; Alder, Rob; Langley, Joanne; Hickey, Marjorie; Cook, Amanda

    2013-10-01

    A competency-based approach to health human resources (HHR) planning is one that explicitly considers the spectrum of knowledge, skills and judgement (competencies) required for the health workforce based on the health needs of the relevant population in some specific circumstances. Such an approach is of particular benefit to planners challenged to make optimal use of limited HHR as it allows them to move beyond simply estimating numbers of certain professionals required and plan instead according to the unique mix of competencies available from the existing health workforce. This kind of flexibility is particularly valuable in contexts where healthcare providers are in short supply generally (e.g. in many developing countries) or temporarily due to a surge in need (e.g. a pandemic or other disease outbreak). A pilot application of this approach using the context of an influenza pandemic in one health district of Nova Scotia, Canada, is described, and key competency gaps identified. The approach is also being applied using other conditions in other Canadian jurisdictions and in Zambia.

  9. Dialogical Approach Applied in Group Counselling: Case Study

    ERIC Educational Resources Information Center

    Koivuluhta, Merja; Puhakka, Helena

    2013-01-01

    This study utilizes structured group counselling and a dialogical approach to develop a group counselling intervention for students beginning a computer science education. The study assesses the outcomes of group counselling from the standpoint of the development of the students' self-observation. The research indicates that group counselling…

  10. Applied Ethics and the Humanistic Tradition: A Comparative Curricula Approach.

    ERIC Educational Resources Information Center

    Deonanan, Carlton R.; Deonanan, Venus E.

    This research work investigates the problem of "Leadership, and the Ethical Dimension: A Comparative Curricula Approach." The research problem is investigated from the academic areas of (1) philosophy; (2) comparative curricula; (3) subject matter areas of English literature and intellectual history; (4) religion; and (5) psychology. Different…

  11. Tennis: Applied Examples of a Game-Based Teaching Approach

    ERIC Educational Resources Information Center

    Crespo, Miguel; Reid, Machar M.; Miley, Dave

    2004-01-01

    In this article, the authors reveal that tennis has been increasingly taught with a tactical model or game-based approach, which emphasizes learning through practice in match-like drills and actual play, rather than in practicing strokes for exact technical execution. Its goal is to facilitate the player's understanding of the tactical, physical…

  12. Applying Socio-Semiotics to Organizational Communication: A New Approach.

    ERIC Educational Resources Information Center

    Cooren, Francois

    1999-01-01

    Argues that a socio-semiotic approach to organizational communication opens up a middle course leading to a reconciliation of the functionalist and interpretive movements. Outlines and illustrates three premises to show how they enable scholars to reconceptualize the opposition between functionalism and interpretivism. Concludes that organizations…

  17. SU-F-BRD-13: Quantum Annealing Applied to IMRT Beamlet Intensity Optimization

    SciTech Connect

    Nazareth, D; Spaans, J

    2014-06-15

    Purpose: We report on the first application of quantum annealing (QA) to the process of beamlet intensity optimization for IMRT. QA is a new technology, which employs novel hardware and software techniques to address various discrete optimization problems in many fields. Methods: We apply the D-Wave Inc. proprietary hardware, which natively exploits quantum mechanical effects for improved optimization. The new QA algorithm, running on this hardware, is most similar to simulated annealing, but relies on natural processes to directly minimize the free energy of a system. A simple quantum system is slowly evolved into a classical system representing the objective function. To apply QA to IMRT-type optimization, two prostate cases were considered. A reduced number of beamlets were employed, due to the current QA hardware limitation of ∼500 binary variables. The beamlet dose matrices were computed using CERR, and an objective function was defined based on typical clinical constraints, including dose-volume objectives. The objective function was discretized, and the QA method was compared to two standard optimization methods, simulated annealing and Tabu search, run on a conventional computing cluster. Results: Based on several runs, the average final objective function value achieved by the QA was 16.9 for the first patient, compared with 10.0 for Tabu and 6.7 for the SA. For the second patient, the values were 70.7 for the QA, 120.0 for Tabu, and 22.9 for the SA. The QA algorithm required 27-38% of the time required by the other two methods. Conclusion: In terms of objective function value, the QA performance was similar to Tabu but less effective than the SA. However, its speed was 3-4 times faster than the other two methods. This initial experiment suggests that QA-based heuristics may offer significant speedup over conventional clinical optimization methods, as quantum annealing hardware scales to larger sizes.
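
    As context for the comparison above, simulated annealing is the classical baseline against which the QA hardware is benchmarked. The following is a minimal sketch of simulated annealing over binary intensity variables with a toy stand-in objective; the function names, cooling schedule and parameters are illustrative assumptions, not the study's actual implementation.

```python
import math
import random

def simulated_annealing(objective, n_bits, n_iters=5000, t0=1.0, alpha=0.999):
    """Minimize `objective` over binary vectors by simulated annealing."""
    x = [random.randint(0, 1) for _ in range(n_bits)]
    best, best_f = x[:], objective(x)
    f, t = best_f, t0
    for _ in range(n_iters):
        y = x[:]
        y[random.randrange(n_bits)] ^= 1          # flip one random bit
        fy = objective(y)
        # Accept improvements always, uphill moves with Boltzmann probability.
        if fy < f or random.random() < math.exp((f - fy) / t):
            x, f = y, fy
            if f < best_f:
                best, best_f = x[:], f
        t *= alpha                                 # geometric cooling
    return best, best_f

# Toy stand-in for a discretized dose objective: penalize deviation
# from a target number of active beamlets.
toy_objective = lambda x: (sum(x) - 10) ** 2
print(simulated_annealing(toy_objective, n_bits=50)[1])
```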

  18. An efficient identification approach for stable and unstable nonlinear systems using Colliding Bodies Optimization algorithm.

    PubMed

    Pal, Partha S; Kar, R; Mandal, D; Ghoshal, S P

    2015-11-01

    This paper presents an efficient approach to identifying stable and practically useful Hammerstein models, as well as an unstable nonlinear process along with its stable closed-loop counterpart, with the help of an evolutionary algorithm, the Colliding Bodies Optimization (CBO) algorithm. The precision and accuracy of the CBO-based optimization approach are justified by the minimum output mean-square error (MSE), which signifies that the bias and variance in the output domain are also the least. It is also observed that optimizing the output MSE in the presence of outliers yields consistently close estimates of the output parameters, which justifies the general applicability of the CBO algorithm to the system identification problem and establishes the practical usefulness of the applied approach. The optimum MSE values, computational times and statistical information on the MSEs are all found to be superior to those of other existing stochastic-algorithm-based approaches reported in the recent literature, which establishes the robustness and efficiency of the applied CBO-based identification scheme.
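
    To make the identification target concrete: a Hammerstein model is a static nonlinearity followed by linear dynamics, and the fitness being minimized is the output MSE. Below is a minimal sketch under an assumed model structure (second-order polynomial nonlinearity, first-order linear block); any population-based optimizer such as CBO would search over theta.

```python
import numpy as np

def hammerstein_output(u, theta):
    """Static polynomial nonlinearity followed by a first-order linear block:
    theta = (c1, c2, a, b); v = c1*u + c2*u**2; y[k] = a*y[k-1] + b*v[k]."""
    c1, c2, a, b = theta
    v = c1 * u + c2 * u**2
    y = np.zeros_like(u)
    for k in range(1, len(u)):
        y[k] = a * y[k - 1] + b * v[k]
    return y

def mse_fitness(theta, u, y_measured):
    """Output mean-square error that a CBO-style optimizer would minimize."""
    return np.mean((y_measured - hammerstein_output(u, theta)) ** 2)

rng = np.random.default_rng(0)
u = rng.uniform(-1, 1, 200)
y_true = hammerstein_output(u, (1.0, 0.5, 0.6, 0.8))
y_noisy = y_true + 0.01 * rng.standard_normal(200)
print(mse_fitness((1.0, 0.5, 0.6, 0.8), u, y_noisy))  # near the noise floor
```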

  19. Effects of optimism on creativity under approach and avoidance motivation

    PubMed Central

    Icekson, Tamar; Roskes, Marieke; Moran, Simone

    2014-01-01

    Focusing on avoiding failure or negative outcomes (avoidance motivation) can undermine creativity, due to cognitive (e.g., threat appraisals), affective (e.g., anxiety), and volitional processes (e.g., low intrinsic motivation). This can be problematic for people who are avoidance motivated by nature and in situations in which threats or potential losses are salient. Here, we review the relation between avoidance motivation and creativity, and the processes underlying this relation. We highlight the role of optimism as a potential remedy for the creativity undermining effects of avoidance motivation, due to its impact on the underlying processes. Optimism, expecting to succeed in achieving success or avoiding failure, may reduce negative effects of avoidance motivation, as it eases threat appraisals, anxiety, and disengagement—barriers playing a key role in undermining creativity. People experience these barriers more under avoidance than under approach motivation, and beneficial effects of optimism should therefore be more pronounced under avoidance than approach motivation. Moreover, due to their eagerness, approach motivated people may even be more prone to unrealistic over-optimism and its negative consequences. PMID:24616690

  20. Effects of optimism on creativity under approach and avoidance motivation.

    PubMed

    Icekson, Tamar; Roskes, Marieke; Moran, Simone

    2014-01-01

    Focusing on avoiding failure or negative outcomes (avoidance motivation) can undermine creativity, due to cognitive (e.g., threat appraisals), affective (e.g., anxiety), and volitional processes (e.g., low intrinsic motivation). This can be problematic for people who are avoidance motivated by nature and in situations in which threats or potential losses are salient. Here, we review the relation between avoidance motivation and creativity, and the processes underlying this relation. We highlight the role of optimism as a potential remedy for the creativity undermining effects of avoidance motivation, due to its impact on the underlying processes. Optimism, expecting to succeed in achieving success or avoiding failure, may reduce negative effects of avoidance motivation, as it eases threat appraisals, anxiety, and disengagement-barriers playing a key role in undermining creativity. People experience these barriers more under avoidance than under approach motivation, and beneficial effects of optimism should therefore be more pronounced under avoidance than approach motivation. Moreover, due to their eagerness, approach motivated people may even be more prone to unrealistic over-optimism and its negative consequences.

  1. Adaptive Wing Camber Optimization: A Periodic Perturbation Approach

    NASA Technical Reports Server (NTRS)

    Espana, Martin; Gilyard, Glenn

    1994-01-01

    Available redundancy among aircraft control surfaces allows for effective wing camber modifications. As shown in the past, this fact can be used to improve aircraft performance. To date, however, algorithm developments for in-flight camber optimization have been limited. This paper presents a perturbational approach for cruise optimization through in-flight camber adaptation. The method uses, as a performance index, an indirect measurement of the instantaneous net thrust. As such, the actual performance improvement comes from the integrated effects of airframe and engine. The algorithm, whose design and robustness properties are discussed, is demonstrated on the NASA Dryden B-720 flight simulator.

  2. Optimal control of underactuated mechanical systems: A geometric approach

    NASA Astrophysics Data System (ADS)

    Colombo, Leonardo; Martín De Diego, David; Zuccalli, Marcela

    2010-08-01

    In this paper, we consider a geometric formalism for optimal control of underactuated mechanical systems. Our techniques are an adaptation of the classical Skinner and Rusk approach for the case of Lagrangian dynamics with higher-order constraints. We study a regular case where it is possible to establish a symplectic framework and, as a consequence, to obtain a unique vector field determining the dynamics of the optimal control problem. These developments will allow us to develop a new class of geometric integrators based on discrete variational calculus.

  3. A hybrid optimization approach in non-isothermal glass molding

    NASA Astrophysics Data System (ADS)

    Vu, Anh-Tuan; Kreilkamp, Holger; Krishnamoorthi, Bharathwaj Janaki; Dambon, Olaf; Klocke, Fritz

    2016-10-01

    Intensively growing demands for complex yet low-cost precision glass optics from today's photonics market motivate the development of an efficient and economically viable manufacturing technology for complex shaped optics. Compared with state-of-the-art replication-based methods, Non-isothermal Glass Molding turns out to be a promising innovative technology for cost-efficient manufacturing because of increased mold lifetime, lower energy consumption and high throughput from a fast process chain. However, the selection of parameters for the molding process usually requires a huge effort to satisfy the precise requirements of the molded optics and to avoid negative effects on the expensive tool molds. Therefore, to reduce experimental work at the outset, a coupled CFD/FEM numerical model was developed to study the molding process. This research focuses on the development of a hybrid optimization approach in Non-isothermal glass molding. To this end, an optimal configuration with two optimization stages for multiple quality characteristics of the glass optics is addressed. A hybrid Back-Propagation Neural Network (BPNN)-Genetic Algorithm (GA) is first applied to identify the optimal process parameters and establish the stability of the process. The second stage continues with the optimization of the glass preform using those optimal parameters to guarantee the accuracy of the molded optics. Experiments are performed to evaluate the effectiveness and feasibility of the model for process development in Non-isothermal glass molding.

  4. A split-optimization approach for obtaining multiple solutions in single-objective process parameter optimization.

    PubMed

    Rajora, Manik; Zou, Pan; Yang, Yao Guang; Fan, Zhi Wen; Chen, Hung Yi; Wu, Wen Chieh; Li, Beizhi; Liang, Steven Y

    2016-01-01

    It can be observed from the experimental data of different processes that different process parameter combinations can lead to the same performance indicators, but with current optimization techniques only one of these combinations can be found when a given objective function is specified. The combination of process parameters obtained after optimization may not always be applicable in actual production or may lead to undesired experimental conditions. In this paper, a split-optimization approach is proposed for obtaining multiple solutions in a single-objective process parameter optimization problem. This is accomplished by splitting the original search space into smaller sub-search spaces and using a genetic algorithm (GA) in each sub-search space to optimize the process parameters. Two different methods, i.e., cluster centers and the hill and valley splitting strategy, were used to split the original search space, and their efficiency was measured against a method in which the original search space is split into equal smaller sub-search spaces. The proposed approach was used to obtain multiple optimal process parameter combinations for electrochemical micro-machining. The results of the case study showed that the cluster centers and hill and valley splitting strategies were more efficient in splitting the original search space than the method in which the original search space is divided into smaller equal sub-search spaces.
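
    The core idea above, partitioning the search space so that each partition yields its own optimum, can be sketched compactly. The following uses the equal-splitting baseline with simple random search standing in for the GA; the function names and the multimodal toy objective are illustrative assumptions.

```python
import numpy as np

def split_optimize(objective, lo, hi, n_splits=4, n_samples=2000, seed=0):
    """Split [lo, hi] into equal sub-spaces and optimize separately in each,
    returning one candidate per sub-space (simple random search stands in
    for the GA used in the record)."""
    rng = np.random.default_rng(seed)
    edges = np.linspace(lo, hi, n_splits + 1)
    solutions = []
    for a, b in zip(edges[:-1], edges[1:]):
        xs = rng.uniform(a, b, n_samples)
        solutions.append(xs[np.argmin(objective(xs))])
    return solutions

# Multimodal toy objective: several x values give the same performance.
f = lambda x: np.sin(3 * x) ** 2
print(split_optimize(f, 0.0, 4.0))   # distinct near-optima, one per sub-space
```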

  5. Hybrid swarm intelligence optimization approach for optimal data storage position identification in wireless sensor networks.

    PubMed

    Mohanasundaram, Ranganathan; Periasamy, Pappampalayam Sanmugam

    2015-01-01

    The rapid growth of data storage has become a strategic concern in the world of networking. Storage mainly depends on the sensor nodes called producers, the base stations, and the consumers (users and sensor nodes) that retrieve and use the data. The main concern addressed here is to find an optimal data storage position in wireless sensor networks. Earlier works did not utilize swarm intelligence based optimization approaches to find the optimal data storage positions. To achieve this goal, an efficient swarm intelligence approach is used to choose suitable positions for a storage node. Thus, a hybrid particle swarm optimization algorithm is used to find suitable positions for storage nodes while the total energy cost of data transmission is minimized. Clustering-based distributed data storage is utilized, solving the clustering problem with the fuzzy C-means algorithm. This research work also considers the data rates and locations of multiple producers and consumers to find optimal data storage positions. The algorithm is implemented in a network simulator, and the experimental results show that the proposed clustering and swarm intelligence based optimal data storage (ODS) strategy is more effective than the earlier approaches.

  6. Hybrid Swarm Intelligence Optimization Approach for Optimal Data Storage Position Identification in Wireless Sensor Networks

    PubMed Central

    Mohanasundaram, Ranganathan; Periasamy, Pappampalayam Sanmugam

    2015-01-01

    The rapid growth of data storage has become a strategic concern in the world of networking. Storage mainly depends on the sensor nodes called producers, the base stations, and the consumers (users and sensor nodes) that retrieve and use the data. The main concern addressed here is to find an optimal data storage position in wireless sensor networks. Earlier works did not utilize swarm intelligence based optimization approaches to find the optimal data storage positions. To achieve this goal, an efficient swarm intelligence approach is used to choose suitable positions for a storage node. Thus, a hybrid particle swarm optimization algorithm is used to find suitable positions for storage nodes while the total energy cost of data transmission is minimized. Clustering-based distributed data storage is utilized, solving the clustering problem with the fuzzy C-means algorithm. This research work also considers the data rates and locations of multiple producers and consumers to find optimal data storage positions. The algorithm is implemented in a network simulator, and the experimental results show that the proposed clustering and swarm intelligence based optimal data storage (ODS) strategy is more effective than the earlier approaches. PMID:25734182
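
    Both records above reduce to placing a storage node so that a rate-weighted transmission cost is minimized, which a particle swarm handles directly. A minimal global-best PSO sketch follows; the quadratic energy model, parameter values and variable names are assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(1)
producers = rng.uniform(0, 100, (10, 2))   # sensor (producer) locations
rates = rng.uniform(1, 5, 10)              # data rates, weighting the cost

def energy_cost(pos):
    """Hypothetical cost: rate-weighted squared distances to one storage node."""
    return np.sum(rates * np.sum((producers - pos) ** 2, axis=1))

# Minimal global-best PSO over candidate storage positions.
n, w, c1, c2 = 30, 0.7, 1.5, 1.5
x = rng.uniform(0, 100, (n, 2)); v = np.zeros((n, 2))
pbest = x.copy(); pcost = np.array([energy_cost(p) for p in x])
gbest = pbest[np.argmin(pcost)]
for _ in range(200):
    r1, r2 = rng.random((n, 2)), rng.random((n, 2))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    cost = np.array([energy_cost(p) for p in x])
    better = cost < pcost
    pbest[better], pcost[better] = x[better], cost[better]
    gbest = pbest[np.argmin(pcost)]
print(gbest)   # converges near the rate-weighted centroid of the producers
```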

  7. Applying a User-Centered Approach to Interactive Visualisation Design

    NASA Astrophysics Data System (ADS)

    Wassink, Ingo; Kulyk, Olga; van Dijk, Betsy; van der Veer, Gerrit; van der Vet, Paul

    Analysing users in their context of work and finding out how and why they use different information resources is essential to provide interactive visualisation systems that match their goals and needs. Designers should actively involve the intended users throughout the whole process. This chapter presents a user-centered approach for the design of interactive visualisation systems. We describe three phases of the iterative visualisation design process: the early envisioning phase, the global specification phase, and the detailed specification phase. The whole design cycle is repeated until some criterion of success is reached. We discuss different techniques for the analysis of users, their tasks and domain. Subsequently, the design of prototypes and evaluation methods in visualisation practice are presented. Finally, we discuss the practical challenges in design and evaluation of collaborative visualisation environments. Our own case studies and those of others are used throughout the whole chapter to illustrate various approaches.

  8. The Case Study Approach: Some Theoretical, Methodological and Applied Considerations

    DTIC Science & Technology

    2013-06-01

    a large manufacturing organisation in Malaysia. An in-depth case study process (specifically a qualitative approach) was used to illustrate the... researcher closely examined four leaders from generally diverse organisations, who had embraced the learning-organisation concept in order to improve... The researchers focused on the context of learning in the workplace, and they investigated the nature of learning and development opportunities that

  9. Simulated evolution applied to study the genetic code optimality using a model of codon reassignments

    PubMed Central

    2011-01-01

    Background: As the canonical code is not universal, different theories about its origin and organization have appeared. The optimization or level of adaptation of the canonical genetic code was measured taking into account the harmful consequences resulting from point mutations leading to the replacement of one amino acid by another. There are two basic approaches to measuring the level of optimization: the statistical approach, which compares the canonical genetic code with many randomly generated alternative ones, and the engineering approach, which compares the canonical code with the best possible alternative. Results: Here we used a genetic algorithm to search for better adapted hypothetical codes and, at the same time, to gauge the difficulty of finding such alternative codes, allowing us to clearly situate the canonical code in the fitness landscape. This novel use of evolutionary computing provides a new perspective in the open debate between the statistical approach, which postulates that the genetic code conserves amino acid properties far better than expected from a random code, and the engineering approach, which tends to indicate that the canonical genetic code is still far from optimal. We used two models of hypothetical codes: one that reflects the known examples of codon reassignment, and the model most used in the two approaches, which reflects the current genetic code translation table. Although the standard code is far from a possible optimum considering both models, when the more realistic model of codon reassignments was used, the evolutionary algorithm had more difficulty overcoming the efficiency of the canonical genetic code. Conclusions: Simulated evolution clearly reveals that the canonical genetic code is far from optimal by this measure. Nevertheless, the efficiency of the canonical code increases when mistranslations are taken into account with the two models, as indicated by the fact that the best possible

  10. Volume reconstruction optimization for tomo-PIV algorithms applied to experimental data

    NASA Astrophysics Data System (ADS)

    Martins, Fabio J. W. A.; Foucaut, Jean-Marc; Thomas, Lionel; Azevedo, Luis F. A.; Stanislas, Michel

    2015-08-01

    Tomographic PIV is a three-component volumetric velocity measurement technique based on the tomographic reconstruction of a particle distribution imaged by multiple camera views. In essence, the performance and accuracy of this technique are highly dependent on the parametric adjustment and the reconstruction algorithm used. Although synthetic data have been widely employed to optimize experiments, the resulting reconstructed volumes might not have optimal quality. The purpose of the present study is to offer quality indicators that can be applied to data samples in order to improve the quality of velocity results obtained by the tomo-PIV technique. The proposed methodology can potentially lead to a significant reduction in the time required to optimize a tomo-PIV reconstruction, while also yielding better quality velocity results. Tomo-PIV data provided by a six-camera turbulent boundary-layer experiment were used to optimize the reconstruction algorithms according to this methodology. Velocity statistics obtained by the optimized BIMART, SMART and MART algorithms were compared with hot-wire anemometer data, and velocity measurement uncertainties were computed. Results indicated that the BIMART and SMART algorithms produced reconstructed volumes of quality equivalent to standard MART, with the benefit of reduced computational time.
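
    For readers unfamiliar with the reconstruction family compared above, MART updates each voxel multiplicatively so that projected intensities match the camera images. A minimal sketch on a toy system follows; the weighting matrix, relaxation parameter and synthetic data are assumptions (the real problem is vastly larger, and the system is typically underdetermined, so the result is approximate).

```python
import numpy as np

def mart(W, I, n_iter=20, mu=1.0, eps=1e-12):
    """Minimal MART: multiplicatively update voxel intensities f so that the
    projections W @ f match the recorded pixel intensities I.
    W[i, j] = weight of voxel j along pixel i's line of sight."""
    f = np.ones(W.shape[1])
    for _ in range(n_iter):
        for i in range(W.shape[0]):
            proj = W[i] @ f
            if proj > eps:
                f *= (I[i] / proj) ** (mu * W[i])   # per-voxel exponent
    return f

# Tiny synthetic check: approximate a 4-voxel field from 3 projections.
rng = np.random.default_rng(2)
W = rng.uniform(0, 1, (3, 4))
f_true = np.array([0.0, 1.0, 0.5, 2.0])
print(mart(W, W @ f_true))
```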

  11. Approach to optimization of low-power Stirling cryocoolers

    SciTech Connect

    Sullivan, D.B.; Radebaugh, R.; Daney, D.E.; Zimmerman, J.E.

    1983-12-01

    A method for optimizing the design (shape of the displacer) of low power Stirling cryocoolers relative to the power required to operate the systems is described. A variational calculation which includes static conduction, shuttle and radiation losses, as well as regenerator inefficiency, was completed for coolers operating in the 300 K to 10 K range. While the calculations apply to tapered displacer machines, comparison of the results with stepped displacer cryocoolers indicates reasonable agreement.

  12. An approach to optimization of low-power Stirling cryocoolers

    NASA Technical Reports Server (NTRS)

    Sullivan, D. B.; Radebaugh, R.; Daney, D. E.; Zimmerman, J. E.

    1983-01-01

    A method for optimizing the design (shape of the displacer) of low power Stirling cryocoolers relative to the power required to operate the systems is described. A variational calculation which includes static conduction, shuttle and radiation losses, as well as regenerator inefficiency, was completed for coolers operating in the 300 K to 10 K range. While the calculations apply to tapered displacer machines, comparison of the results with stepped displacer cryocoolers indicates reasonable agreement.

  13. A quality by design approach to optimization of emulsions for electrospinning using factorial and D-optimal designs.

    PubMed

    Badawi, Mariam A; El-Khordagui, Labiba K

    2014-07-16

    Emulsion electrospinning is a multifactorial process used to generate nanofibers loaded with hydrophilic drugs or macromolecules for diverse biomedical applications. Emulsion electrospinnability is greatly impacted by the emulsion pharmaceutical attributes. The aim of this study was to apply a quality by design (QbD) approach based on design of experiments as a risk-based proactive approach to achieve predictable critical quality attributes (CQAs) in w/o emulsions for electrospinning. Polycaprolactone (PCL)-thickened w/o emulsions containing doxycycline HCl were formulated using a Span 60/sodium lauryl sulfate (SLS) emulsifier blend. The identified emulsion CQAs (stability, viscosity and conductivity) were linked with electrospinnability using a 3³ factorial design to optimize emulsion composition for phase stability and a D-optimal design to optimize stable emulsions for viscosity and conductivity after shifting the design space. The three independent variables, emulsifier blend composition, organic:aqueous phase ratio and polymer concentration, had a significant effect (p<0.05) on emulsion CQAs, the emulsifier blend composition exerting prominent main and interaction effects. Scanning electron microscopy (SEM) of emulsion-electrospun NFs and desirability functions allowed modeling of emulsion CQAs to predict electrospinnable formulations. A QbD approach successfully built quality in electrospinnable emulsions, allowing development of hydrophilic drug-loaded nanofibers with desired morphological characteristics. Copyright © 2014 Elsevier B.V. All rights reserved.
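
    A 3³ full factorial design simply enumerates all 27 combinations of three factors at three coded levels, as the record's screening stage does for its three independent variables. A minimal sketch, with the coded levels and variable names assumed for illustration:

```python
from itertools import product

# Coded levels for the three independent variables named in the record:
# emulsifier blend composition, organic:aqueous phase ratio, polymer conc.
levels = (-1, 0, 1)
design = list(product(levels, repeat=3))   # all 27 runs of the 3^3 design
for run, (blend, phase_ratio, polymer) in enumerate(design, 1):
    print(run, blend, phase_ratio, polymer)
```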

  14. Policy planning under uncertainty: efficient starting populations for simulation-optimization methods applied to municipal solid waste management.

    PubMed

    Huang, Gordon H; Linton, Jonathan D; Yeomans, Julian Scott; Yoogalingam, Reena

    2005-10-01

    Evolutionary simulation-optimization (ESO) techniques can be adapted to model a wide variety of problem types in which system components are stochastic. Grey programming (GP) methods have been previously applied to numerous environmental planning problems containing uncertain information. In this paper, ESO is combined with GP for policy planning to create a hybrid solution approach named GESO. It can be shown that multiple policy alternatives meeting required system criteria, or modelling-to-generate-alternatives (MGA), can be quickly and efficiently created by applying GESO to this case data. The efficacy of GESO is illustrated using a municipal solid waste management case taken from the regional municipality of Hamilton-Wentworth in the Province of Ontario, Canada. The MGA capability of GESO is especially meaningful for large-scale real-world planning problems and the practicality of this procedure can easily be extended from MSW systems to many other planning applications containing significant sources of uncertainty.

  15. Population Modeling Approach to Optimize Crop Harvest Strategy. The Case of Field Tomato

    PubMed Central

    Tran, Dinh T.; Hertog, Maarten L. A. T. M.; Tran, Thi L. H.; Quyen, Nguyen T.; Van de Poel, Bram; Mata, Clara I.; Nicolaï, Bart M.

    2017-01-01

    In this study, the aim is to develop a population model based approach to optimize fruit harvesting strategies with regard to fruit quality and its derived economic value. This approach was applied to the case of tomato fruit harvesting under Vietnamese conditions. Fruit growth and development of tomato (cv. “Savior”) was monitored in terms of fruit size and color during both the Vietnamese winter and summer growing seasons. A kinetic tomato fruit growth model was applied to quantify biological fruit-to-fruit variation in terms of their physiological maturation. This model was successfully calibrated. Finally, the model was extended to translate the fruit-to-fruit variation at harvest into the economic value of the harvested crop. It can be concluded that a model based approach to the optimization of harvest date and harvest frequency with regard to economic value of the crop as such is feasible. This approach allows growers to optimize their harvesting strategy by harvesting the crop at more uniform maturity stages meeting the stringent retail demands for homogeneous high quality product. The total farm profit would still depend on the impact a change in harvesting strategy might have on related expenditures. This model based harvest optimisation approach can be easily transferred to other fruit and vegetable crops improving homogeneity of the postharvest product streams. PMID:28473843

  16. Innovization procedure applied to a multi-objective optimization of a biped robot locomotion

    NASA Astrophysics Data System (ADS)

    Oliveira, Miguel; Santos, Cristina P.; Costa, Lino

    2013-10-01

    This paper proposes an Innovization procedure approach for a bio-inspired biped gait locomotion controller. We combine a multi-objective evolutionary algorithm and a bio-inspired Central Pattern Generator (CPG) locomotion controller that generates the necessary limb movements to perform the walking gait of a biped robot. The search for the best set of CPG parameters is optimized by considering multiple objectives along a staged evolution. An innovization analysis is carried out to verify relationships between the parameters and the objectives, and between the objectives themselves, in order to find relevant motor behavior characteristics. The simulation results show the effectiveness of the proposed approach.

  17. TSP based Evolutionary optimization approach for the Vehicle Routing Problem

    NASA Astrophysics Data System (ADS)

    Kouki, Zoulel; Chaar, Besma Fayech; Ksouri, Mekki

    2009-03-01

    Vehicle Routing and Flexible Job Shop Scheduling Problems (VRP and FJSSP) are two common hard combinatorial optimization problems that show many similarities at the conceptual level [2, 4]. It was proved for both problems that solving techniques like exact methods fail to provide good quality solutions in a reasonable amount of time when dealing with large scale instances [1, 5, 14]. In order to overcome this weakness, we opt for metaheuristics and focus on evolutionary algorithms, which have been successfully used in scheduling problems [1, 5, 9]. In this paper we investigate the common properties of the VRP and the FJSSP in order to provide a new controlled evolutionary approach for CVRP optimization, inspired by the FJSSP evolutionary optimization algorithms introduced in [10].

  18. A Model-Based Prognostics Approach Applied to Pneumatic Valves

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew J.; Goebel, Kai

    2011-01-01

    Within the area of systems health management, the task of prognostics centers on predicting when components will fail. Model-based prognostics exploits domain knowledge of the system, its components, and how they fail by casting the underlying physical phenomena in a physics-based model that is derived from first principles. Uncertainty cannot be avoided in prediction, therefore, algorithms are employed that help in managing these uncertainties. The particle filtering algorithm has become a popular choice for model-based prognostics due to its wide applicability, ease of implementation, and support for uncertainty management. We develop a general model-based prognostics methodology within a robust probabilistic framework using particle filters. As a case study, we consider a pneumatic valve from the Space Shuttle cryogenic refueling system. We develop a detailed physics-based model of the pneumatic valve, and perform comprehensive simulation experiments to illustrate our prognostics approach and evaluate its effectiveness and robustness. The approach is demonstrated using historical pneumatic valve data from the refueling system.
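
    The particle filtering core of the prognostics methodology above is compact enough to sketch: propagate particles through the state model, weight them by the measurement likelihood, and resample. The degradation model, noise levels and variable names below are toy assumptions, not the record's pneumatic valve model.

```python
import numpy as np

rng = np.random.default_rng(3)

def particle_filter_step(particles, z, f, h, q=0.05, r=0.1):
    """One bootstrap particle-filter step: propagate particles through the
    state model f, weight by the Gaussian likelihood of measurement z under
    h, and resample. Tracking a damage state this way supports predicting
    when it will cross a failure threshold."""
    particles = f(particles) + q * rng.standard_normal(particles.shape)
    w = np.exp(-0.5 * ((z - h(particles)) / r) ** 2)
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]

# Toy degradation model: damage grows 2% per step; we observe it noisily.
f = lambda x: 1.02 * x
h = lambda x: x
particles = rng.uniform(0.0, 0.2, 500)
true = 0.1
for _ in range(50):
    true = 1.02 * true
    z = true + 0.1 * rng.standard_normal()
    particles = particle_filter_step(particles, z, f, h)
print(particles.mean(), true)   # the estimate tracks the true damage state
```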

  19. Anatomy of the pineal region applied to its surgical approach.

    PubMed

    Simon, E; Afif, A; M'Baye, M; Mertens, P

    2015-01-01

    The pineal region is situated in the posterior part of the incisural space. This region includes the pineal body inside the quadrigeminal arachnoid cistern. This article reviews the anatomic features of this region, with particular emphasis on those aspects of importance for surgical access to the pineal region. Five cadaver heads fixed in 10% formalin and injected with colored latex were used for anatomic dissection (five other specimens were also prepared and dissected to illustrate the articles on surgical techniques and approaches presented elsewhere in this issue). The pineal body is surrounded by several important structures: the posterior part of the third ventricle, the tectum, the complex of the great cerebral vein of Galen, the pulvinar nuclei of the thalamus and the splenium of the corpus callosum. The surgical approach to the pineal body, whatever the route or technique used (microsurgical, endoscopic or stereotactic), poses a great challenge for neurosurgeons due to its deep location in the brain and its close relationship with the complex surrounding vascular structures. Copyright © 2014 Elsevier Masson SAS. All rights reserved.

  20. Optimal Decision Stimuli for Risky Choice Experiments: An Adaptive Approach

    PubMed Central

    Cavagnaro, Daniel R.; Gonzalez, Richard; Myung, Jay I.; Pitt, Mark A.

    2014-01-01

    Collecting data to discriminate between models of risky choice requires careful selection of decision stimuli. Models of decision making aim to predict decisions across a wide range of possible stimuli, but practical limitations force experimenters to select only a handful of them for actual testing. Some stimuli are more diagnostic between models than others, so the choice of stimuli is critical. This paper provides the theoretical background and a methodological framework for adaptive selection of optimal stimuli for discriminating among models of risky choice. The approach, called Adaptive Design Optimization (ADO), adapts the stimulus in each experimental trial based on the results of the preceding trials. We demonstrate the validity of the approach with simulation studies aiming to discriminate Expected Utility, Weighted Expected Utility, Original Prospect Theory, and Cumulative Prospect Theory models. PMID:24532856
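
    The premise above, that some stimuli discriminate between models better than others, is easy to illustrate: a gamble pair is diagnostic when two candidate models rank it differently. A minimal sketch comparing linear-utility expected utility with a one-parameter CPT-style value (Tversky-Kahneman weighting function; all parameter values and the single-gain gamble form are assumptions):

```python
def w(p, gamma=0.61):
    """Tversky-Kahneman probability-weighting function (assumed parameters)."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def eu(x, p):                   # expected utility with linear utility
    return p * x

def cpt(x, p, alpha=0.88):      # simple CPT value for a single-gain gamble
    return w(p) * x**alpha

# A stimulus pair is diagnostic when the models disagree on the preference.
A, B = (4.0, 0.8), (3.0, 1.0)   # (payoff, probability)
print("EU prefers", "A" if eu(*A) > eu(*B) else "B")    # A (3.2 vs 3.0)
print("CPT prefers", "A" if cpt(*A) > cpt(*B) else "B") # B
```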

  1. An evolutionary based Bayesian design optimization approach under incomplete information

    NASA Astrophysics Data System (ADS)

    Srivastava, Rupesh; Deb, Kalyanmoy

    2013-02-01

    Design optimization in the absence of complete information about uncertain quantities has recently been gaining consideration, as expensive repetitive computation tasks are becoming tractable due to the advent of faster parallel computers. This work uses Bayesian inference to quantify design reliability when only sample measurements of the uncertain quantities are available. A generalized Bayesian reliability based design optimization algorithm has been proposed and implemented for numerical as well as engineering design problems. The approach uses an evolutionary algorithm (EA) to obtain a trade-off front between design objectives and reliability. The Bayesian approach provides a well-defined link between the amount of available information and the reliability through a confidence measure, and the EA acts as an efficient optimizer for a discrete and multi-dimensional objective space. Additionally, a GPU-based parallelization study shows a computational speed-up of close to 100 times in a simulated scenario wherein the constraint qualification checks may be time consuming and could render a sequential implementation impractical for large sample sets. These results show promise for the use of parallel implementations of EAs in handling design optimization problems under uncertainty.
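
    The link between the amount of available information and reliability through a confidence measure can be sketched in a few lines. One common Bayesian construction, offered here only as an assumed illustration of the idea and not as the paper's actual formulation: with a uniform prior, k constraint-satisfying samples out of n give a Beta posterior on the reliability, and its lower credible bound tightens as n grows.

```python
from scipy.stats import beta

def reliability_bound(n_samples, n_safe, confidence=0.95):
    """Bayesian reliability under a uniform prior: the posterior on the
    probability of constraint satisfaction is Beta(k+1, n-k+1); return the
    lower credible bound at the requested confidence level."""
    return beta.ppf(1 - confidence, n_safe + 1, n_samples - n_safe + 1)

# Same observed safe fraction (0.9); the bound rises with more samples.
for n in (10, 100, 1000):
    print(n, round(reliability_bound(n, int(0.9 * n)), 3))
```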

  2. Total Risk Approach in Applying PRA to Criticality Safety

    SciTech Connect

    Huang, S T

    2005-03-24

    As the nuclear industry continues to move from expert-based support to more procedure-based support, it is important to revisit the total risk concept in criticality safety. A key objective of criticality safety is to minimize total criticality accident risk. The purpose of this paper is to assess the key constituents of the total risk concept pertaining to criticality safety from an operations support perspective, and to suggest a risk-informed means of utilizing criticality safety resources to minimize total risk. A PRA methodology was used to assist this assessment. The criticality accident history was assessed to provide a framework for the evaluation. In supporting operations, the work of criticality safety engineers ranges from knowing the scope and configurations of a proposed operation, performing criticality hazards assessments to derive effective controls, assisting in training operators, responding to floor questions, and conducting surveillance to ensure implementation of criticality controls, to responding to criticality mishaps. In a compliance environment, the resources of criticality safety engineers are increasingly directed towards tedious documentation effort to meet regulatory requirements, to the effect of weakening floor support for criticality safety. By applying a fault tree model to identify the major contributors to criticality accidents, a total risk picture is obtained that addresses the relative merits of various actions. Overall, human failure is the key culprit in causing criticality accidents. Factors such as failure to follow procedures, lack of training, and lack of expert support at the floor level are the main contributors. Other causes may include a lack of effective criticality controls, such as inadequate criticality safety evaluation. Not all of the causes are equally important in contributing to criticality mishaps. Applying the limited resources to strengthen the weak links would reduce risk more than continuing emphasis on the strong links of

  3. The GRG approach for large-scale optimization

    SciTech Connect

    Drud, A.

    1994-12-31

    The Generalized Reduced Gradient (GRG) algorithm for general Nonlinear Programming (NLP) has been used successfully for over 25 years. The ideas of the original GRG algorithm have been modified and have absorbed developments in unconstrained optimization, linear programming, sparse matrix techniques, etc. The talk will review the essential aspects of the GRG approach and will discuss current development trends, especially related to very large models. Examples will be based on the CONOPT implementation.

  4. Optimized probabilistic quantum processors: A unified geometric approach

    NASA Astrophysics Data System (ADS)

    Bergou, Janos; Bagan, Emilio; Feldman, Edgar

    Using probabilistic and deterministic quantum cloning, and quantum state separation, as illustrative examples, we develop a complete geometric solution for finding their optimal success probabilities. The method is related to the approach that we introduced earlier for the unambiguous discrimination of more than two states. In some cases the method delivers analytical results, in others it leads to intuitive and straightforward numerical solutions. We also present implementations of the schemes based on linear optics employing few-photon interferometry.

  5. An Inverse Kinematic Approach Using Groebner Basis Theory Applied to Gait Cycle Analysis

    DTIC Science & Technology

    2013-03-01

    Thesis AFIT-ENP-13-M-02, Department of the Air Force: "An Inverse Kinematic Approach Using Groebner Basis Theory Applied to Gait Cycle Analysis", by Anum Barki, BS. Approved by Dr. Ronald F. Tuttle (Chairman) and Dr. Kimberly Kendricks.

  6. Optimal synchronization of Kuramoto oscillators: A dimensional reduction approach.

    PubMed

    Pinto, Rafael S; Saa, Alberto

    2015-12-01

    A recently proposed dimensional reduction approach for studying synchronization in the Kuramoto model is employed to build optimal network topologies to favor or to suppress synchronization. The approach is based on the introduction of a collective coordinate for the time evolution of the phase-locked oscillators, in the spirit of the Ott-Antonsen ansatz. We show that the optimal synchronization of a Kuramoto network demands the maximization of the quadratic function ω^T L ω, where ω stands for the vector of the natural frequencies of the oscillators and L for the network Laplacian matrix. Many recently obtained numerical results can be reobtained analytically, and in a simpler way, from our maximization condition. A computationally efficient hill-climb rewiring algorithm is proposed to generate networks with optimal synchronization properties. Our approach can be easily adapted to the case of Kuramoto models with both attractive and repulsive interactions, and again many recent numerical results can be rederived in a simpler and clearer analytical manner.
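
    The maximization condition above translates directly into code: for unit edge weights, ω^T L ω equals the sum of (ω_i - ω_j)² over the edges, so a hill-climb rewiring repeatedly swaps an edge for a non-edge whenever the swap increases that sum. A minimal sketch follows; the graph size, edge count and the omitted connectivity check are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n_nodes, n_edges = 20, 40
omega = rng.standard_normal(n_nodes)        # natural frequencies

def quad(edges):
    """omega^T L omega = sum over edges of (omega_i - omega_j)^2."""
    return sum((omega[i] - omega[j]) ** 2 for i, j in edges)

pairs = [(i, j) for i in range(n_nodes) for j in range(i + 1, n_nodes)]
idx = rng.choice(len(pairs), n_edges, replace=False)
edges = {pairs[k] for k in idx}

# Hill-climb rewiring: replace a random edge by a random non-edge and keep
# the move only if omega^T L omega grows (connectivity check omitted here).
best = quad(edges)
for _ in range(2000):
    drop = sorted(edges)[rng.integers(n_edges)]
    add = pairs[rng.integers(len(pairs))]
    if add in edges:
        continue
    trial = (edges - {drop}) | {add}
    score = quad(trial)
    if score > best:
        edges, best = trial, score
print(best)   # grows as high-|omega_i - omega_j| pairs become connected
```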

  7. Computational Approaches for Microalgal Biofuel Optimization: A Review

    PubMed Central

    Chaiboonchoe, Amphun

    2014-01-01

    The increased demand and consumption of fossil fuels have raised interest in finding renewable energy sources throughout the globe. Much focus has been placed on optimizing microorganisms, primarily microalgae, to efficiently produce compounds that can substitute for fossil fuels. However, the path to achieving economic feasibility is likely to require strain optimization through using available tools and technologies in the fields of systems and synthetic biology. Such approaches invoke a deep understanding of the metabolic networks of the organisms and their genomic and proteomic profiles. The advent of next generation sequencing and other high throughput methods has led to a major increase in availability of biological data. Integration of such disparate data can help define the emergent metabolic system properties, which is of crucial importance in addressing biofuel production optimization. Herein, we review major computational tools and approaches developed and used in order to potentially identify target genes, pathways, and reactions of particular interest to biofuel production in algae. As the use of these tools and approaches has not been fully implemented in algal biofuel research, the aim of this review is to highlight the potential utility of these resources toward their future implementation in algal research. PMID:25309916

  8. Computational approaches for microalgal biofuel optimization: a review.

    PubMed

    Koussa, Joseph; Chaiboonchoe, Amphun; Salehi-Ashtiani, Kourosh

    2014-01-01

    The increased demand and consumption of fossil fuels have raised interest in finding renewable energy sources throughout the globe. Much focus has been placed on optimizing microorganisms, primarily microalgae, to efficiently produce compounds that can substitute for fossil fuels. However, the path to achieving economic feasibility is likely to require strain optimization through using available tools and technologies in the fields of systems and synthetic biology. Such approaches invoke a deep understanding of the metabolic networks of the organisms and their genomic and proteomic profiles. The advent of next generation sequencing and other high throughput methods has led to a major increase in availability of biological data. Integration of such disparate data can help define the emergent metabolic system properties, which is of crucial importance in addressing biofuel production optimization. Herein, we review major computational tools and approaches developed and used in order to potentially identify target genes, pathways, and reactions of particular interest to biofuel production in algae. As the use of these tools and approaches has not been fully implemented in algal biofuel research, the aim of this review is to highlight the potential utility of these resources toward their future implementation in algal research.

  9. A global optimization approach for Lennard-Jones microclusters

    NASA Astrophysics Data System (ADS)

    Maranas, Costas D.; Floudas, Christodoulos A.

    1992-11-01

    A global optimization approach is proposed for finding the global minimum energy configuration of Lennard-Jones microclusters. First, the original nonconvex total potential energy function, composed of rational polynomials, is transformed into the difference of two convex functions (DC transformation) via a novel procedure performed for each pair potential that constitutes the total potential energy function. Then, a decomposition strategy based on the global optimization (GOP) algorithm [C. A. Floudas and V. Visweswaran, Comput. Chem. Eng. 14, 1397 (1990); V. Visweswaran and C. A. Floudas, ibid. 14, 1419 (1990); Proc. Process Systems Eng. 1991, I.6.1; C. A. Floudas and V. Visweswaran, J. Opt. Theory Appl. (in press)] is designed to provide tight bounds on the global minimum through the solutions of a sequence of relaxed dual subproblems. A number of theoretical results are included which expedite the computational effort by exploiting the special mathematical structure of the problem. The proposed approach attains ε-convergence to the global minimum in a finite number of iterations. Based on this procedure, global optimum solutions are generated for small microclusters (n ≤ 7). For larger clusters (8 ≤ n ≤ 24), tight lower and upper bounds on the global solution are provided, serving as excellent initial points for local optimization approaches. Finally, improved lower bounds on the minimum interparticle distance at the global minimum are provided.
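
    The objective being bounded above is the classic pairwise Lennard-Jones sum. Below is a minimal sketch of the energy function in reduced units, with an off-the-shelf local minimizer standing in for the local refinement that the record's bounds would seed (the DC/GOP machinery itself is beyond a few lines); a random start may land in a local minimum.

```python
import numpy as np
from scipy.optimize import minimize

def lj_energy(flat_coords):
    """Total Lennard-Jones potential of an n-atom cluster in reduced units:
    the sum over all pairs of 4 * (r**-12 - r**-6)."""
    x = flat_coords.reshape(-1, 3)
    diff = x[:, None, :] - x[None, :, :]
    r2 = np.sum(diff * diff, axis=-1)
    iu = np.triu_indices(len(x), k=1)       # each pair counted once
    inv6 = 1.0 / r2[iu] ** 3
    return np.sum(4.0 * (inv6**2 - inv6))

# Local minimization from a random start for n = 7; the known global
# minimum for n = 7 is about -16.505 reduced units.
rng = np.random.default_rng(5)
res = minimize(lj_energy, rng.uniform(-1, 1, 3 * 7), method="L-BFGS-B")
print(res.fun)
```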

  10. Optimal synchronization of Kuramoto oscillators: A dimensional reduction approach

    NASA Astrophysics Data System (ADS)

    Pinto, Rafael S.; Saa, Alberto

    2015-12-01

    A recently proposed dimensional reduction approach for studying synchronization in the Kuramoto model is employed to build optimal network topologies to favor or to suppress synchronization. The approach is based on the introduction of a collective coordinate for the time evolution of the phase-locked oscillators, in the spirit of the Ott-Antonsen ansatz. We show that the optimal synchronization of a Kuramoto network demands the maximization of the quadratic function ω^T L ω, where ω stands for the vector of the natural frequencies of the oscillators and L for the network Laplacian matrix. Many recently obtained numerical results can be reobtained analytically, and in a simpler way, from our maximization condition. A computationally efficient hill-climb rewiring algorithm is proposed to generate networks with optimal synchronization properties. Our approach can be easily adapted to the case of Kuramoto models with both attractive and repulsive interactions, and again many recent numerical results can be rederived in a simpler and clearer analytical manner.

  11. Applying electrical utility least-cost approach to transportation planning

    SciTech Connect

    McCoy, G.A.; Growdon, K.; Lagerberg, B.

    1994-09-01

    Members of the energy and environmental communities believe that parallels exist between electrical utility least-cost planning and transportation planning. In particular, the Washington State Energy Strategy Committee believes that an integrated and comprehensive transportation planning process should be developed to fairly evaluate the costs of both demand-side and supply-side transportation options, establish competition between different travel modes, and select the mix of options designed to meet system goals at the lowest cost to society. Comparisons between travel modes are also required under the Intermodal Surface Transportation Efficiency Act (ISTEA). ISTEA calls for the development of procedures to compare demand management against infrastructure investment solutions and requires the consideration of efficiency, socioeconomic and environmental factors in the evaluation process. Several of the techniques and approaches used in energy least-cost planning and utility peak demand management can be incorporated into a least-cost transportation planning methodology. The concepts of avoided plants, expressing avoidable costs in levelized nominal dollars to compare projects with different on-line dates and service lives, the supply curve, and the resource stack can be directly adapted from the energy sector.

  12. Applying a cloud computing approach to storage architectures for spacecraft

    NASA Astrophysics Data System (ADS)

    Baldor, Sue A.; Quiroz, Carlos; Wood, Paul

    As sensor technologies, processor speeds, and memory densities increase, spacecraft command, control, processing, and data storage systems have grown in complexity to take advantage of these improvements and expand the possible missions of spacecraft. Spacecraft systems engineers are increasingly looking for novel ways to address this growth in complexity and mitigate associated risks. Looking to conventional computing, many solutions have been executed to solve both the problem of complexity and heterogeneity in systems. In particular, the cloud-based paradigm provides a solution for distributing applications and storage capabilities across multiple platforms. In this paper, we propose utilizing a cloud-like architecture to provide a scalable mechanism for providing mass storage in spacecraft networks that can be reused on multiple spacecraft systems. By presenting a consistent interface to applications and devices that request data to be stored, complex systems designed by multiple organizations may be more readily integrated. Behind the abstraction, the cloud storage capability would manage wear-leveling, power consumption, and other attributes related to the physical memory devices, critical components in any mass storage solution for spacecraft. Our approach employs SpaceWire networks and SpaceWire-capable devices, although the concept could easily be extended to non-heterogeneous networks consisting of multiple spacecraft and potentially the ground segment.

  13. New Approach to Ultrasonic Spectroscopy Applied to Flywheel Rotors

    NASA Technical Reports Server (NTRS)

    Harmon, Laura M.; Baaklini, George Y.

    2002-01-01

    Flywheel energy storage devices comprising multilayered composite rotor systems are being studied extensively for use in the International Space Station. A flywheel system includes the components necessary to store and discharge energy in a rotating mass. The rotor is the complete rotating assembly portion of the flywheel, which is composed primarily of a metallic hub and a composite rim. The rim may contain several concentric composite rings. This article summarizes current ultrasonic spectroscopy research of such composite rings and rims and a flat coupon, which was manufactured to mimic the manufacturing of the rings. Ultrasonic spectroscopy is a nondestructive evaluation (NDE) method for material characterization and defect detection. In the past, a wide bandwidth frequency spectrum created from a narrow ultrasonic signal was analyzed for amplitude and frequency changes. Tucker developed and patented a new approach to ultrasonic spectroscopy. The ultrasonic system employs a continuous swept-sine waveform and performs a fast Fourier transform on the frequency spectrum to create the spectrum resonance spacing domain, or fundamental resonant frequency. Ultrasonic responses from composite flywheel components were analyzed at Glenn to assess this NDE technique for the quality assurance of flywheel applications.

  14. Optimization of electrospinning parameters for polyacrylonitrile-MgO nanofibers applied in air filtration.

    PubMed

    Dehghan, Somayeh Farhang; Golbabaei, Farideh; Maddah, Bozorgmehr; Latifi, Masoud; Pezeshk, Hamid; Hasanzadeh, Mahdi; Akbar-Khanzadeh, Farhang

    2016-09-01

    The present study aimed to optimize the electrospinning parameters for polyacrylonitrile (PAN) nanofibers containing MgO nanoparticles to obtain the appropriate fiber diameter and mat porosity for application in air filtration. Optimization of applied voltage, solution concentration, and spinning distance was performed using response surface methodology. In total, 15 trials were run according to the prepared study design. Fiber diameter and porosity were measured using scanning electron microscopic (SEM) image analysis. For air filtration testing, the nanofiber mat was produced under the suggested optimum conditions for electrospinning. According to the results, a lower solution concentration favored thinner fibers, and a larger diameter gave a higher porosity. At a given spinning distance, there was a negative correlation between fiber diameter and applied voltage. Moreover, there were curvilinear relationships between porosity and both spinning distance and applied voltage at any concentration. It was also concluded that the developed filter medium is comparable to a high-efficiency particulate air (HEPA) filter in terms of collection efficiency and pressure drop. The empirical models presented in this study can provide orientation for subsequent experiments to form uniform and continuous nanofibers for future application in air purification. High-efficiency filtration is becoming more important due to decreasing trends in air quality. Effective filter media are increasingly needed in industries applying clean-air technologies, and the necessity of developing high-performance air filters is increasingly felt. Nanofibrous filter media, mostly fabricated via the electrospinning technique, have attracted considerable attention in the last decade. The present study aimed to develop electrospun PAN containing MgO nanoparticles (using special functionalities such as absorption and adsorption characteristics, antibacterial

  15. Delayed Monocular SLAM Approach Applied to Unmanned Aerial Vehicles.

    PubMed

    Munguia, Rodrigo; Urzua, Sarquis; Grau, Antoni

    2016-01-01

    In recent years, many researchers have addressed the issue of making Unmanned Aerial Vehicles (UAVs) more and more autonomous. In this context, state estimation of the vehicle position is a fundamental necessity for any application involving autonomy. However, the position estimation problem cannot be solved in some scenarios, even when a GPS signal is available, for instance, in applications requiring precision manoeuvres in a complex environment. Therefore, additional sensory information should be integrated into the system in order to improve accuracy and robustness. In this work, a novel vision-based simultaneous localization and mapping (SLAM) method with application to unmanned aerial vehicles is proposed. One contribution of this work is the design and development of a novel technique for estimating feature depth, based on a stochastic technique of triangulation. In the proposed method the camera is mounted on a servo-controlled gimbal that counteracts the changes in attitude of the quadcopter. Under this assumption, the overall problem is simplified and focused on the position estimation of the aerial vehicle. The tracking of visual features is also made easier by the stabilized video. Another contribution of this work is to demonstrate that integrating very noisy GPS measurements into the system for an initial short period of time is enough to initialize the metric scale. The performance of the proposed method is validated by experiments with real data carried out in unstructured outdoor environments. A comparative study shows that, when compared with related methods, the proposed approach performs better in terms of accuracy and computational time.

  16. Delayed Monocular SLAM Approach Applied to Unmanned Aerial Vehicles

    PubMed Central

    2016-01-01

    In recent years, many researchers have addressed the issue of making Unmanned Aerial Vehicles (UAVs) more and more autonomous. In this context, state estimation of the vehicle position is a fundamental necessity for any application involving autonomy. However, the position estimation problem cannot be solved in some scenarios, even when a GPS signal is available, for instance, in applications requiring precision manoeuvres in a complex environment. Therefore, additional sensory information should be integrated into the system in order to improve accuracy and robustness. In this work, a novel vision-based simultaneous localization and mapping (SLAM) method with application to unmanned aerial vehicles is proposed. One contribution of this work is the design and development of a novel technique for estimating feature depth, based on a stochastic technique of triangulation. In the proposed method the camera is mounted on a servo-controlled gimbal that counteracts the changes in attitude of the quadcopter. Under this assumption, the overall problem is simplified and focused on the position estimation of the aerial vehicle. The tracking of visual features is also made easier by the stabilized video. Another contribution of this work is to demonstrate that integrating very noisy GPS measurements into the system for an initial short period of time is enough to initialize the metric scale. The performance of the proposed method is validated by experiments with real data carried out in unstructured outdoor environments. A comparative study shows that, when compared with related methods, the proposed approach performs better in terms of accuracy and computational time. PMID:28033385

  17. Geophysical approaches applied in the ancient theatre of Demetriada, Volos

    NASA Astrophysics Data System (ADS)

    Sarris, Apostolos; Papadopoulos, Nikos; Déderix, Sylviane; Salvi, Maria-Christina

    2013-08-01

    The city of Demetriada was founded around 294-292 BC and became a stronghold of the Macedonian navy fleet, and in the Roman period it experienced significant growth and blossoming. The ancient theatre of the town was constructed at the same time as the foundation of the city; it went unused for two centuries (1st century BC - 1st century AD) and was completely abandoned after the 4th century AD, serving thereafter only as a quarry for building material for the Christian basilicas in the area. The theatre was found in 1809 and excavations took place in various years since 1907. Geophysical approaches were recently exploited in an effort to map the subsurface of the area surrounding the theatre and to support its reconstruction works. Magnetic gradiometry, Ground Penetrating Radar (GPR) and Electrical Resistivity Tomography (ERT) techniques were employed for mapping the area of the orchestra and the stage of the theatre, together with the area extending to the south of the theatre. A number of features were recognized by the magnetic techniques, including older excavation trenches and the pillar of the stoa of the proscenium. The different occupation phases of the area were manifested through tomographic and stratigraphic geophysical techniques such as three-dimensional ERT and GPR. Orthogonal architectural structures aligned in a S-N direction have been correlated with the already excavated buildings of the ceramic workshop. The workshop seems to extend over a large section of the area and was probably constructed after the final abandonment of the theatre.

  18. A quantile-based scenario analysis approach to biomass supply chain optimization under uncertainty

    DOE PAGES

    Zamar, David S.; Gopaluni, Bhushan; Sokhansanj, Shahab; ...

    2016-11-21

    Supply chain optimization for biomass-based power plants is an important research area due to greater emphasis on renewable power energy sources. Biomass supply chain design and operational planning models are often formulated and studied using deterministic mathematical models. While these models are beneficial for making decisions, their applicability to real world problems may be limited because they do not capture all the complexities in the supply chain, including uncertainties in the parameters. This study develops a statistically robust quantile-based approach for stochastic optimization under uncertainty, which builds upon scenario analysis. We apply and evaluate the performance of our approach to address the problem of analyzing competing biomass supply chains subject to stochastic demand and supply. Finally, the proposed approach was found to outperform alternative methods in terms of computational efficiency and ability to meet the stochastic problem requirements.
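
    One way to read "quantile-based scenario analysis" is: evaluate each candidate decision over many sampled supply and demand scenarios, then rank decisions by a cost quantile instead of the mean. The toy sketch below does exactly that for a single ordering decision; the cost coefficients, distributions, and the 90th-percentile criterion are illustrative assumptions, not the paper's model.

```python
# Minimal sketch of quantile-based scenario analysis for a one-variable
# biomass ordering decision with random supply and demand (all numbers toy).
import numpy as np

rng = np.random.default_rng(42)
n_scenarios = 1000
demand = rng.normal(100, 15, n_scenarios)   # stochastic demand (tonnes/day)
supply = rng.normal(110, 20, n_scenarios)   # stochastic available supply

def cost(order, demand, supply):
    delivered = np.minimum(order, supply)
    shortage = np.maximum(demand - delivered, 0.0)
    surplus = np.maximum(delivered - demand, 0.0)
    return 50*delivered + 300*shortage + 10*surplus  # purchase, penalty, storage

candidates = np.linspace(80, 160, 81)
# Rank decisions by the 90th percentile of cost across scenarios,
# a risk-aware alternative to optimizing the mean cost.
q90 = [np.quantile(cost(c, demand, supply), 0.9) for c in candidates]
best = candidates[int(np.argmin(q90))]
print("order quantity minimizing 90th-percentile cost:", best)
```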

  19. A quantile-based scenario analysis approach to biomass supply chain optimization under uncertainty

    SciTech Connect

    Zamar, David S.; Gopaluni, Bhushan; Sokhansanj, Shahab; Newlands, Nathaniel K.

    2016-11-21

    Supply chain optimization for biomass-based power plants is an important research area due to greater emphasis on renewable power energy sources. Biomass supply chain design and operational planning models are often formulated and studied using deterministic mathematical models. While these models are beneficial for making decisions, their applicability to real world problems may be limited because they do not capture all the complexities in the supply chain, including uncertainties in the parameters. This study develops a statistically robust quantile-based approach for stochastic optimization under uncertainty, which builds upon scenario analysis. We apply and evaluate the performance of our approach to address the problem of analyzing competing biomass supply chains subject to stochastic demand and supply. Finally, the proposed approach was found to outperform alternative methods in terms of computational efficiency and ability to meet the stochastic problem requirements.

  20. EMG assisted optimization: a hybrid approach for estimating muscle forces in an indeterminate biomechanical model.

    PubMed

    Cholewicki, J; McGill, S M

    1994-10-01

    There are two basic approaches to estimating individual muscle forces acting on a joint, given the indeterminacy of the moment balance equations: optimization and electromyography (EMG) assisted. Each approach is characterized by unique advantages and liabilities. With this in mind, a new hybrid method, termed 'EMG assisted optimization' (EMGAO), is described that combines the advantages of both traditional approaches. In this method, minimal adjustments are applied to the individual muscle forces estimated from EMG, so that all moment equilibrium equations are satisfied in three dimensions. The result is the best possible match between physiologically observed muscle activation patterns and the predicted forces, while satisfying the moment constraints about all three joint axes. Several forms of the objective function are discussed and their effect on individual muscle adjustments is illustrated in a simple two-dimensional example.
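
    A minimal sketch of the EMGAO idea in one plane: keep the adjusted forces as close as possible, in a least-squares sense, to the EMG-derived estimates while enforcing the moment equilibrium constraint. The muscle moment arms, EMG force estimates, and external moment below are hypothetical.

```python
# Sketch of EMG-assisted optimization about a single joint axis: adjust
# EMG-derived forces minimally so the net moment matches the external load.
import numpy as np
from scipy.optimize import minimize

F_emg = np.array([420.0, 310.0, 150.0, 90.0])  # EMG-based force estimates (N)
r = np.array([0.05, 0.04, 0.03, 0.02])         # moment arms (m), made up
M_ext = 32.0                                   # required net moment (N*m)

objective = lambda F: np.sum((F - F_emg) ** 2)  # minimal-adjustment criterion
constraints = [{"type": "eq", "fun": lambda F: r @ F - M_ext}]
res = minimize(objective, F_emg, constraints=constraints,
               bounds=[(0, None)] * len(F_emg), method="SLSQP")
print("adjusted muscle forces:", res.x)
```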

  1. Unified approach to optimal control systems with state constraints

    NASA Astrophysics Data System (ADS)

    Murillo, Martin Julio

    Many engineering systems have constraints or limitations in terms of voltage, current, speed, pressure, temperature, path, etc. In this dissertation, the optimal control of dynamical systems with state constraints is addressed. A unified approach that is simultaneously applicable to both continuous-time and discrete-time systems is developed, so that there is no need, as is presently done, to develop separate methodologies for continuous-time and discrete-time systems. The main contributions of the dissertation are: (1) development of a "slack variable" approach to solve discrete-time state constrained problems [1], (2) development of a unified approach to solve state unconstrained problems, (3) development of a unified approach to solve state constrained problems, and (4) development of numerical algorithms and software implementation to solve these problems. [1] This work was accepted for presentation with the citation: M. Murillo and D. S. Naidu, "Discrete-time optimal control systems with state constraints", AIAA Guidance, Control, and Navigation (GN&C) Conference and Exhibit, Monterey, CA, August 5-8, 2002.
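
    The "slack variable" device mentioned in contribution (1) converts a state inequality constraint into an equality by adding a squared unknown. A small sketch under invented dynamics and cost (a scalar linear plant, a quadratic tracking cost that pushes the state above the ceiling, and a state limit x_max) might look like this:

```python
# Sketch of the slack-variable conversion for a discrete-time state
# constraint: x_k <= x_max becomes x_k - x_max + s_k^2 = 0 with unknown s_k.
import numpy as np
from scipy.optimize import minimize

a, b, x_init, N, x_max = 1.1, 1.0, 2.0, 10, 2.2   # toy plant and horizon

def states(u):
    x, xs = x_init, []
    for k in range(N):
        x = a * x + b * u[k]        # x_{k+1} = a x_k + b u_k
        xs.append(x)
    return np.array(xs)

def cost(z):                        # track 3.0 (> x_max) so the limit binds
    u = z[:N]
    return np.sum((states(u) - 3.0) ** 2) + 0.1 * np.sum(u ** 2)

def slack_eq(z):                    # equality form of the path constraint
    u, s = z[:N], z[N:]
    return states(u) - x_max + s ** 2

z0 = np.concatenate([np.zeros(N), np.ones(N)])   # nonzero slacks to start
res = minimize(cost, z0, method="SLSQP",
               constraints=[{"type": "eq", "fun": slack_eq}])
print("max state:", states(res.x[:N]).max(), "(ceiling:", x_max, ")")
```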

  2. Final Technical Report for "Applied Mathematics Research: Simulation Based Optimization and Application to Electromagnetic Inverse Problems"

    SciTech Connect

    Haber, Eldad

    2014-03-17

    The focus of the research was: developing adaptive meshes for the solution of Maxwell's equations; developing a parallel framework for time dependent inverse Maxwell's equations; developing multilevel methods for optimization problems with inequality constraints; a new inversion code for inverse Maxwell's equations at zero frequency (DC resistivity); and a new inversion code for inverse Maxwell's equations in the low frequency regime. Although the research concentrated on electromagnetic forward and inverse problems, its results were also applied to the problem of image registration.

  3. Continuous intensity map optimization (CIMO): a novel approach to leaf sequencing in step and shoot IMRT.

    PubMed

    Cao, Daliang; Earl, Matthew A; Luan, Shuang; Shepard, David M

    2006-04-01

    A new leaf-sequencing approach has been developed that is designed to reduce the number of required beam segments for step-and-shoot intensity modulated radiation therapy (IMRT). This approach to leaf sequencing is called continuous-intensity-map-optimization (CIMO). Using a simulated annealing algorithm, CIMO seeks to minimize differences between the optimized and sequenced intensity maps. Two distinguishing features of the CIMO algorithm are (1) CIMO does not require that each optimized intensity map be clustered into discrete levels and (2) CIMO is not rule-based but rather simultaneously optimizes both the aperture shapes and weights. To test the CIMO algorithm, ten IMRT patient cases were selected (four head-and-neck, two pancreas, two prostate, one brain, and one pelvis). For each case, the optimized intensity maps were extracted from the Pinnacle3 treatment planning system. The CIMO algorithm was applied, and the optimized aperture shapes and weights were loaded back into Pinnacle. A final dose calculation was performed using Pinnacle's convolution/superposition based dose calculation. On average, the CIMO algorithm provided a 54% reduction in the number of beam segments as compared with Pinnacle's leaf sequencer. The plans sequenced using the CIMO algorithm also provided improved target dose uniformity and a reduced discrepancy between the optimized and sequenced intensity maps. For ten clinical intensity maps, comparisons were performed between the CIMO algorithm and the power-of-two reduction algorithm of Xia and Verhey [Med. Phys. 25(8), 1424-1434 (1998)]. When the constraints of a Varian Millennium multileaf collimator were applied, the CIMO algorithm resulted in a 26% reduction in the number of segments. For an Elekta multileaf collimator, the CIMO algorithm resulted in a 67% reduction in the number of segments. An average leaf sequencing time of less than one minute per beam was observed.
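
    To make the simulated-annealing core of CIMO concrete, the sketch below anneals binary aperture shapes and segment weights against a 1-D "optimized" intensity map, minimizing the squared mismatch. This is a drastic simplification: real MLC deliverability constraints, 2-D maps, and the clinical objective are all omitted, and every number is made up.

```python
# Toy simulated annealing over aperture shapes and weights (CIMO-flavored).
import numpy as np

rng = np.random.default_rng(0)
target = np.array([0.0, 1.0, 2.5, 3.0, 2.0, 1.0, 0.5, 0.0])  # optimized map
n_seg, n_bix = 3, target.size
shapes = rng.integers(0, 2, (n_seg, n_bix)).astype(float)    # open/closed
weights = rng.uniform(0.5, 1.5, n_seg)                       # segment weights

def error(s, w):
    return np.sum((w @ s - target) ** 2)                     # map mismatch

current, T = error(shapes, weights), 1.0
for step in range(20000):
    s, w = shapes.copy(), weights.copy()
    if rng.random() < 0.5:
        i, j = rng.integers(n_seg), rng.integers(n_bix)
        s[i, j] = 1.0 - s[i, j]                              # flip one bixel
    else:
        w[rng.integers(n_seg)] += rng.normal(0, 0.05)        # perturb a weight
    w = np.clip(w, 0.0, None)
    trial = error(s, w)
    if trial < current or rng.random() < np.exp((current - trial) / T):
        shapes, weights, current = s, w, trial               # Metropolis accept
    T *= 0.9997                                              # slow cooling

print("residual map error:", current, "segment weights:", weights)
```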

  4. Unsteady Adjoint Approach for Design Optimization of Flapping Airfoils

    NASA Technical Reports Server (NTRS)

    Lee, Byung Joon; Liou, Meng-Sing

    2012-01-01

    This paper describes work on optimizing the propulsive efficiency of flapping airfoils, i.e., improving the thrust while constraining the aerodynamic work during flapping flight, by changing the airfoil shape and trajectory of motion using the unsteady discrete adjoint approach. For unsteady problems, it is essential to properly resolve the time scales of the motion under consideration, and the formulation must be compatible with the objective sought. We include both the instantaneous and time-averaged (periodic) formulations in this study. For design optimization with shape or motion parameters, the time-averaged objective function is found to be more useful, while the instantaneous one is more suitable for flow control. The instantaneous objective function is operationally straightforward. On the other hand, the time-averaged objective function requires additional steps in the adjoint approach; the unsteady discrete adjoint equations for a periodic flow must be reformulated and the corresponding system of equations solved iteratively. We compare the design results from shape and trajectory optimizations and investigate the physical relevance of the design variables to the flapping motion at on- and off-design conditions.

  5. Portfolio optimization in enhanced index tracking with goal programming approach

    NASA Astrophysics Data System (ADS)

    Siew, Lam Weng; Jaaman, Saiful Hafizah Hj.; Ismail, Hamizun bin

    2014-09-01

    Enhanced index tracking is a popular form of passive fund management in the stock market. Enhanced index tracking aims to generate excess return over the return achieved by the market index without purchasing all of the stocks that make up the index. This can be done by establishing an optimal portfolio that maximizes the mean return and minimizes the risk. The objective of this paper is to determine the portfolio composition and performance using a goal programming approach to enhanced index tracking and to compare it to the market index. Goal programming is a branch of multi-objective optimization which can handle decision problems involving the two different goals of enhanced index tracking, a trade-off between maximizing the mean return and minimizing the risk. The results of this study show that the optimal portfolio from the goal programming approach is able to outperform the Malaysian market index, the FTSE Bursa Malaysia Kuala Lumpur Composite Index, because of its higher mean return and lower risk without purchasing all the stocks in the market index.
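
    A hedged sketch of the two-goal formulation: deviation variables measure the shortfall from a target mean return and the excess over a target risk level, and their sum is minimized subject to budget and no-short-selling constraints. The four assets, return vector, covariance matrix, and goal levels below are all invented for illustration.

```python
# Goal-programming sketch for enhanced index tracking (toy data).
import numpy as np
from scipy.optimize import minimize

mu = np.array([1.2, 0.9, 1.1, 0.7])          # mean monthly returns (%)
Sigma = np.array([[4.0, 1.2, 0.8, 0.4],
                  [1.2, 3.0, 0.6, 0.3],
                  [0.8, 0.6, 2.5, 0.5],
                  [0.4, 0.3, 0.5, 1.5]])     # return covariance (%^2)
R_target, V_target = 1.0, 2.0                # goal levels: return and risk

# z = [4 weights, d_ret-, d_ret+, d_risk-, d_risk+], all nonnegative
objective = lambda z: z[4] + z[7]            # return shortfall + risk excess
cons = [
    {"type": "eq", "fun": lambda z: np.sum(z[:4]) - 1.0},                 # budget
    {"type": "eq", "fun": lambda z: mu @ z[:4] + z[4] - z[5] - R_target}, # goal 1
    {"type": "eq", "fun": lambda z: z[:4] @ Sigma @ z[:4] + z[6] - z[7] - V_target},
]
z0 = np.concatenate([np.full(4, 0.25), np.zeros(4)])
res = minimize(objective, z0, method="SLSQP", bounds=[(0, None)] * 8,
               constraints=cons)
print("weights:", np.round(res.x[:4], 3), "deviations:", np.round(res.x[4:], 4))
```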

  6. General approach and scope. [rotor blade design optimization

    NASA Technical Reports Server (NTRS)

    Adelman, Howard M.; Mantay, Wayne R.

    1989-01-01

    This paper describes a joint activity involving NASA and Army researchers at the NASA Langley Research Center to develop optimization procedures aimed at improving the rotor blade design process by integrating appropriate disciplines and accounting for all of the important interactions among the disciplines. The disciplines involved include rotor aerodynamics, rotor dynamics, rotor structures, airframe dynamics, and acoustics. The work is focused on combining these five key disciplines in an optimization procedure capable of designing a rotor system to satisfy multidisciplinary design requirements. Fundamental to the plan is a three-phased approach. In phase 1, the disciplines of blade dynamics, blade aerodynamics, and blade structure will be closely coupled, while acoustics and airframe dynamics will be decoupled and be accounted for as effective constraints on the design for the first three disciplines. In phase 2, acoustics is to be integrated with the first three disciplines. Finally, in phase 3, airframe dynamics will be fully integrated with the other four disciplines. This paper deals with details of the phase 1 approach and includes details of the optimization formulation, design variables, constraints, and objective function, as well as details of discipline interactions, analysis methods, and methods for validating the procedure.

  7. Extended RF shimming: Sequence-level parallel transmission optimization applied to steady-state free precession MRI of the heart.

    PubMed

    Beqiri, Arian; Price, Anthony N; Padormo, Francesco; Hajnal, Joseph V; Malik, Shaihan J

    2017-06-01

    Cardiac magnetic resonance imaging (MRI) at high field presents challenges because of the high specific absorption rate and significant transmit field (B1+) inhomogeneities. Parallel transmission MRI offers the ability to correct for both issues at the level of individual radiofrequency (RF) pulses, but must operate within strict hardware and safety constraints. The constraints are themselves affected by sequence parameters, such as the RF pulse duration and TR, meaning that an overall optimal operating point exists for a given sequence. This work seeks to obtain optimal performance by performing a 'sequence-level' optimization in which pulse sequence parameters are included as part of an RF shimming calculation. The method is applied to balanced steady-state free precession cardiac MRI with the objective of minimizing TR, hence reducing the imaging duration. Results are demonstrated using an eight-channel parallel transmit system operating at 3 T, with an in vivo study carried out on seven male subjects of varying body mass index (BMI). Compared with single-channel operation, a mean-squared-error shimming approach leads to reduced imaging durations of 32 ± 3% with simultaneous improvement in flip angle homogeneity of 32 ± 8% within the myocardium. © 2017 The Authors. NMR in Biomedicine published by John Wiley & Sons Ltd.

  8. Electrical defibrillation optimization: An automated, iterative parallel finite-element approach

    SciTech Connect

    Hutchinson, S.A.; Shadid, J.N.; Ng, K.T.; Nadeem, A.

    1997-04-01

    To date, optimization of electrode systems for electrical defibrillation has been limited to hand-selected electrode configurations. In this paper we present an automated approach which combines detailed, three-dimensional (3-D) finite element torso models with optimization techniques to provide a flexible analysis and design tool for electrical defibrillation optimization. Specifically, a parallel direct search (PDS) optimization technique is used with a representative objective function to find an electrode configuration which corresponds to the satisfaction of a postulated defibrillation criterion with a minimum amount of power and a low possibility of myocardium damage. For adequate representation of the thoracic inhomogeneities, 3-D finite-element torso models are used in the objective function computations. The CPU-intensive finite-element calculations required for the objective function evaluation have been implemented on a message-passing parallel computer in order to complete the optimization calculations in a timely manner. To illustrate the optimization procedure, it has been applied to a representative electrode configuration for transmyocardial defibrillation, namely the subcutaneous patch-right ventricular catheter (SP-RVC) system. Sensitivity of the optimal solutions to various tissue conductivities has been studied. 39 refs., 9 figs., 2 tabs.

  9. Phase retrieval with transverse translation diversity: a nonlinear optimization approach.

    PubMed

    Guizar-Sicairos, Manuel; Fienup, James R

    2008-05-12

    We develop and test a nonlinear optimization algorithm for solving the problem of phase retrieval with transverse translation diversity, where the diverse far-field intensity measurements are taken after translating the object relative to a known illumination pattern. Analytical expressions for the gradient of a squared-error metric with respect to the object, illumination and translations allow joint optimization of the object and system parameters. This approach achieves superior reconstructions, with respect to a previously reported technique [H. M. L. Faulkner and J. M. Rodenburg, Phys. Rev. Lett. 93, 023903 (2004)], when the system parameters are inaccurately known or in the presence of noise. Applicability of this method for samples that are smaller than the illumination pattern is explored.

  10. Optimal approach to quantum communication using dynamic programming

    PubMed Central

    Jiang, Liang; Taylor, Jacob M.; Khaneja, Navin; Lukin, Mikhail D.

    2007-01-01

    Reliable preparation of entanglement between distant systems is an outstanding problem in quantum information science and quantum communication. In practice, it has to be accomplished over noisy channels (such as optical fibers) that generally cause exponential attenuation of quantum signals at large distances. A special class of quantum error correction protocols, quantum repeater protocols, can be used to overcome such losses. In this work, we introduce a method for systematically optimizing existing protocols and developing more efficient protocols. Our approach makes use of a dynamic-programming-based search algorithm, the complexity of which scales only polynomially with the communication distance, letting us efficiently determine near-optimal solutions. We find significant improvements in both the speed and the final-state fidelity for preparing long-distance entangled states. PMID:17959783
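
    The flavor of the dynamic-programming search can be captured in a toy model: compute the expected entanglement-distribution time over n elementary links by trying every split point and combining the two halves with a probabilistic swap. The link and swap probabilities, the time unit, and the "retry both halves on failure" rule are crude illustrative assumptions, not the paper's physical model.

```python
# Toy dynamic program over repeater nesting strategies (all numbers invented).
from functools import lru_cache

P_LINK = 0.1       # success probability of an elementary link attempt
P_SWAP = 0.5       # success probability of an entanglement swap
T_ATTEMPT = 1.0    # time per elementary attempt (arbitrary units)

@lru_cache(maxsize=None)
def expected_time(links):
    """Expected time to entangle across `links` elementary segments."""
    if links == 1:
        return T_ATTEMPT / P_LINK              # geometric waiting time
    best = float("inf")
    for m in range(1, links):                  # try every split point
        halves = max(expected_time(m), expected_time(links - m))
        best = min(best, halves / P_SWAP)      # crude retry-on-failure model
    return best

for n in (2, 4, 8, 16):
    print(n, "links ->", expected_time(n))
```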

  11. Optimization techniques applied to passive measures for in-orbit spacecraft survivability

    NASA Technical Reports Server (NTRS)

    Mog, Robert A.; Helba, Michael J.; Hill, Janeil B.

    1992-01-01

    The purpose of this research is to provide Space Station Freedom protective structures design insight through the coupling of design/material requirements, hypervelocity impact phenomenology, meteoroid and space debris environment sensitivities, optimization techniques and operations research strategies, and mission scenarios. The goals of the research are: (1) to develop a Monte Carlo simulation tool which will provide top level insight for Space Station protective structures designers; (2) to develop advanced shielding concepts relevant to Space Station Freedom using unique multiple bumper approaches; and (3) to investigate projectile shape effects on protective structures design.

  12. Optimization of applied non-axisymmetric magnetic perturbations using multimodal plasma response on DIII-D

    NASA Astrophysics Data System (ADS)

    Weisberg, D. B.; Paz-Soldan, C.; Lanctot, M. J.; Strait, E. J.; Evans, T. E.

    2016-10-01

    The plasma response to proposed 3D coil geometries in the DIII-D tokamak is investigated using the linear MHD plasma response code MARS-F. An extensive examination of low- and high-field side coil arrangements shows the potential to optimize the coupling between imposed non-axisymmetric magnetic perturbations and the total plasma response by varying the toroidal and poloidal spectral content of the applied field. Previous work has shown that n=2 and n=3 perturbations can suppress edge-localized modes (ELMs) in cases where the applied field's coupling to resonant surfaces is enhanced by amplifying marginally-stable kink modes. This research is extended to higher n-number configurations of 2 to 3 rows with up to 12 coils each in order to advance the physical understanding and optimization of both the resonant and non-resonant responses. Both in- and ex-vessel configurations are considered. The plasma braking torque is also analyzed, and coil geometries with favorable plasma coupling characteristics are discussed. Work supported by GA internal funds.

  13. Applying Dynamical Systems Theory to Optimize Libration Point Orbit Stationkeeping Maneuvers for WIND

    NASA Technical Reports Server (NTRS)

    Brown, Jonathan M.; Petersen, Jeremy D.

    2014-01-01

    NASA's WIND mission has been operating in a large amplitude Lissajous orbit in the vicinity of the interior libration point of the Sun-Earth/Moon system since 2004. Regular stationkeeping maneuvers are required to maintain the orbit due to the instability around the collinear libration points. Historically these stationkeeping maneuvers have been performed by applying an incremental change in velocity, or Δv, along the spacecraft-Sun vector as projected into the ecliptic plane. Previous studies have shown that the magnitude of libration point stationkeeping maneuvers can be minimized by applying the Δv in the direction of the local stable manifold found using dynamical systems theory. This paper presents the analysis of this new maneuver strategy, which shows that the magnitude of stationkeeping maneuvers can be decreased by 5 to 25 percent, depending on the location in the orbit where the maneuver is performed. The implementation of the optimized maneuver method into operations is discussed and results are presented for the first two optimized stationkeeping maneuvers executed by WIND.

  14. A Bayesian optimization approach for wind farm power maximization

    NASA Astrophysics Data System (ADS)

    Park, Jinkyoo; Law, Kincho H.

    2015-03-01

    The objective of this study is to develop a model-free optimization algorithm to improve total wind farm power production in a cooperative game framework. Conventionally, for a given wind condition, an individual wind turbine maximizes its own power production without taking into consideration the conditions of other wind turbines. Under this greedy control strategy, the wake formed by an upstream wind turbine, through the reduced wind speed and increased turbulence intensity inside the wake, affects and lowers the power production of the downstream wind turbines. To increase overall wind farm power production, researchers have proposed cooperative wind turbine control approaches that coordinate turbine actions to mitigate wake interference and thus increase total wind farm power production. This study explores the use of a data-driven optimization approach to identify the optimum coordinated control actions in real time using a limited amount of data. Specifically, we propose the Bayesian Ascent (BA) method, which combines the strengths of Bayesian optimization and trust region optimization algorithms. Using Gaussian process regression, BA requires only a small number of data points to model the complex target system. Furthermore, due to the trust region constraint on the sampling procedure, BA tends to increase the target value and converge toward the optimum. Simulation studies using analytical functions show that the BA method can achieve an almost monotone increase in the target value with rapid convergence. BA is also implemented and tested in a laboratory setting to maximize the total power of two scaled wind turbine models.
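
    A minimal sketch of the Bayesian Ascent pattern, assuming a one-dimensional control action and a toy power function: fit a Gaussian process to the samples gathered so far, then pick the next action by maximizing the GP mean inside a trust region around the current best. Plain GP regression from scikit-learn stands in for the paper's specific acquisition and trust-region update rules.

```python
# Sketch of GP-based ascent with a trust-region-limited next sample.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

power = lambda a: -(a - 0.62) ** 2              # hidden objective (toy)
X, y = [0.2], [power(0.2)]                      # initial sample
radius = 0.15                                   # trust-region radius

for it in range(15):
    gp = GaussianProcessRegressor(kernel=RBF(0.2), alpha=1e-6)
    gp.fit(np.array(X).reshape(-1, 1), y)
    x_best = X[int(np.argmax(y))]
    cand = np.clip(np.linspace(x_best - radius, x_best + radius, 101), 0, 1)
    mu = gp.predict(cand.reshape(-1, 1))
    x_next = float(cand[np.argmax(mu)])         # greedy step within the region
    X.append(x_next); y.append(power(x_next))

print("best action found:", X[int(np.argmax(y))])
```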

  15. Direct and Evolutionary Approaches for Optimal Receiver Function Inversion

    NASA Astrophysics Data System (ADS)

    Dugda, Mulugeta Tuji

    Receiver functions are time series obtained by deconvolving vertical component seismograms from radial component seismograms. Receiver functions represent the impulse response of the earth structure beneath a seismic station and generally consist of a number of seismic phases related to discontinuities in the crust and upper mantle. The relative arrival times of these phases are correlated with the locations of discontinuities as well as the media of seismic wave propagation. The Moho (Mohorovicic discontinuity) is the major interface that separates the crust and the mantle. In this research, automatic techniques were developed to determine the depth of the Moho from the earth's surface (the crustal thickness H) and the ratio of crustal seismic P-wave velocity (Vp) to S-wave velocity (Vs) (kappa = Vp/Vs). In this dissertation, an optimization problem of inverting receiver functions is formulated to determine the crustal parameters and the three associated weights using evolutionary and direct optimization techniques. The first technique developed makes use of the evolutionary Genetic Algorithm (GA) optimization technique. The second combines the direct Generalized Pattern Search (GPS) and evolutionary Fitness Proportionate Niching (FPN) techniques, exploiting their respective strengths. In a previous study, a Monte Carlo technique was utilized to determine variable weights in the H-kappa stacking of receiver functions. Compared to that variable-weights approach, the present GA and GPS-FPN techniques offer substantial time savings and are suitable for automatic, simultaneous determination of the crustal parameters and appropriate weights. The GA implementation provides optimal or near-optimal weights for stacking receiver functions as well as optimal H and kappa values simultaneously. Generally, the objective function of the H-kappa stacking problem
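
    As a rough sketch of GA-based H-kappa stacking, the code below evolves crustal thickness H, the Vp/Vs ratio kappa, and the three stacking weights jointly against a synthetic receiver function. Vertical incidence (zero ray parameter) keeps the move-out formulas short; the crustal model, GA settings, and trace are all fabricated for illustration.

```python
# Genetic-algorithm sketch of joint (H, kappa, weights) estimation.
import numpy as np

rng = np.random.default_rng(1)
VP, dt, n = 6.3, 0.05, 1200
t = np.arange(n) * dt

def phase_times(H, k):                 # Ps, PpPs, PpSs+PsPs at vertical incidence
    return H*(k - 1)/VP, H*(k + 1)/VP, 2*H*k/VP

rf = np.zeros(n)                       # synthetic trace: H = 35 km, kappa = 1.75
for amp, ti in zip((1.0, 0.5, -0.4), phase_times(35.0, 1.75)):
    rf += amp * np.exp(-0.5 * ((t - ti) / 0.15) ** 2)

def fitness(ind):
    H, k = ind[0], ind[1]
    w = np.abs(ind[2:]) / np.sum(np.abs(ind[2:]))          # normalized weights
    a = np.interp(phase_times(H, k), t, rf)
    return w[0]*a[0] + w[1]*a[1] - w[2]*a[2]               # stack amplitude

pop = np.hstack([rng.uniform(25, 45, (60, 1)),             # H (km)
                 rng.uniform(1.6, 1.9, (60, 1)),           # kappa
                 rng.uniform(0.1, 1.0, (60, 3))])          # raw weights
scale = np.array([0.5, 0.01, 0.02, 0.02, 0.02])            # mutation std devs
for gen in range(150):
    f = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(f)[-30:]]                     # truncation selection
    a, b = (parents[rng.integers(30, size=30)] for _ in range(2))
    mask = rng.random((30, 5)) < 0.5
    kids = np.where(mask, a, b) + rng.normal(0, scale, (30, 5))  # crossover+mutation
    pop = np.vstack([parents, kids])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("recovered H ~ %.1f km, kappa ~ %.2f" % (best[0], best[1]))
```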

  16. Approaching direct optimization of as-built lens performance

    NASA Astrophysics Data System (ADS)

    McGuire, James P.; Kuper, Thomas G.

    2012-10-01

    We describe a method approaching direct optimization of the rms wavefront error of a lens, including tolerances. By including the effect of tolerances in the error function, the designer can choose to improve the as-built performance with a fixed set of tolerances and/or reduce the cost of production lenses with looser tolerances. The method relies on the speed of differential tolerance analysis and has recently become practical due to the combination of continuing increases in computer hardware speed and multiple-core processing. We illustrate the method's use on a Cooke triplet, a double Gauss, and two plastic mobile phone camera lenses.

  17. Multidisciplinary Design Optimization Under Uncertainty: An Information Model Approach (PREPRINT)

    DTIC Science & Technology

    2011-03-01

    and c ∈ R, which is easily solved using the MATLAB function fmincon. The reader is cautioned not to optimize over (t, p, c). Our approach requires a...would have to be expanded. The fifteen formulas can serve as the basis for numerical simulations, an easy task using MATLAB. 5.3 Simulation of the higher...Design 130, 2008, 081402-1 - 081402-12. [32] M. Loève, "Fonctions aléatoires du second ordre," Supplement to P. Lévy, Processus Stochastiques et

  18. Perspective: Codesign for materials science: An optimal learning approach

    NASA Astrophysics Data System (ADS)

    Lookman, Turab; Alexander, Francis J.; Bishop, Alan R.

    2016-05-01

    A key element of materials discovery and design is to learn from available data and prior knowledge to guide the next experiments or calculations in order to focus in on materials with targeted properties. We suggest that the tight coupling and feedback between experiments, theory and informatics demands a codesign approach, very reminiscent of computational codesign involving software and hardware in computer science. This requires dealing with a constrained optimization problem in which uncertainties are used to adaptively explore and exploit the predictions of a surrogate model to search the vast high dimensional space where the desired material may be found.

  19. An analysis of the optimal multiobjective inventory clustering decision with small quantity and great variety inventory by applying a DPSO.

    PubMed

    Wang, Shen-Tsu; Li, Meng-Hua

    2014-01-01

    When an enterprise has thousands of item varieties in its inventory, a single management method may not be feasible. A better way to manage this problem is to categorise inventory items into several clusters according to inventory decisions and to use different management methods for different clusters. The present study applies DPSO (dynamic particle swarm optimisation) to the problem of clustering inventory items. Without requiring prior inventory knowledge, inventory items are automatically clustered into a near-optimal number of clusters. The obtained clustering results should satisfy the inventory objective equation, which consists of different objectives such as total cost, backorder rate, demand relevance, and inventory turnover rate. This study integrates the above four objectives into a multiobjective equation and inputs the actual inventory items of the enterprise into DPSO. In comparison with other clustering methods, the proposed method can consider different objectives and obtain an overall better solution, with better convergence results and inventory decisions.
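
    A sketch of particle-swarm clustering under strong simplifications: a plain PSO (not the paper's dynamic variant) with a fixed cluster count, where each particle encodes the cluster centroids over two normalized item features and fitness is the single-objective within-cluster sum of squared distances rather than the paper's four-objective equation.

```python
# Plain PSO clustering of synthetic inventory items (toy stand-in for DPSO).
import numpy as np

rng = np.random.default_rng(7)
items = rng.random((200, 2))                   # normalized (cost, turnover)
k, n_particles, dim = 3, 20, 3 * 2             # k centroids of 2 features each

def fitness(flat):
    cents = flat.reshape(k, 2)
    d = np.linalg.norm(items[:, None, :] - cents[None, :, :], axis=2)
    return np.sum(d.min(axis=1) ** 2)          # assign each item to nearest

pos = rng.random((n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([fitness(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)].copy()

for it in range(200):                          # standard PSO velocity update
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.72*vel + 1.49*r1*(pbest - pos) + 1.49*r2*(gbest - pos)
    pos = np.clip(pos + vel, 0.0, 1.0)
    f = np.array([fitness(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)].copy()

print("best within-cluster SSE:", pbest_f.min())
```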

  20. Aircraft wing structural design optimization based on automated finite element modelling and ground structure approach

    NASA Astrophysics Data System (ADS)

    Yang, Weizhu; Yue, Zhufeng; Li, Lei; Wang, Peiyan

    2016-01-01

    An optimization procedure combining an automated finite element modelling (AFEM) technique with a ground structure approach (GSA) is proposed for structural layout and sizing design of aircraft wings. The AFEM technique, based on CATIA VBA scripting and PCL programming, is used to generate models automatically considering the arrangement of inner systems. GSA is used for local structural topology optimization. The design procedure is applied to a high-aspect-ratio wing. The arrangement of the integral fuel tank, landing gear and control surfaces is considered. For the landing gear region, a non-conventional initial structural layout is adopted. The positions of components, the number of ribs and local topology in the wing box and landing gear region are optimized to obtain a minimum structural weight. Constraints include tank volume, strength, buckling and aeroelastic parameters. The results show that the combined approach leads to a greater weight saving, i.e. 26.5%, compared with three additional optimizations based on individual design approaches.

  1. Forging tool shape optimization using pseudo inverse approach and adaptive incremental approach

    NASA Astrophysics Data System (ADS)

    Halouani, A.; Meng, F. J.; Li, Y. M.; Labergère, C.; Abbès, B.; Lafon, P.; Guo, Y. Q.

    2013-05-01

    This paper presents a simplified finite element method called "Pseudo Inverse Approach" (PIA) for tool shape design and optimization in multi-step cold forging processes. The approach is based on the knowledge of the final part shape. Some intermediate configurations are introduced and corrected by using a free surface method to consider the deformation paths without contact treatment. A robust direct algorithm of plasticity is implemented by using the equivalent stress notion and tensile curve. Numerical tests have shown that the PIA is very fast compared to the incremental approach. The PIA is used in an optimization procedure to automatically design the shapes of the preform tools. Our objective is to find the optimal preforms which minimize the equivalent plastic strain and punch force. The preform shapes are defined by B-Spline curves. A simulated annealing algorithm is adopted for the optimization procedure. The forging results obtained by the PIA are compared to those obtained by the incremental approach to show the efficiency and accuracy of the PIA.

  2. An optimization approach for fitting canonical tensor decompositions.

    SciTech Connect

    Dunlavy, Daniel M.; Acar, Evrim; Kolda, Tamara Gibson

    2009-02-01

    Tensor decompositions are higher-order analogues of matrix decompositions and have proven to be powerful tools for data analysis. In particular, we are interested in the canonical tensor decomposition, otherwise known as the CANDECOMP/PARAFAC decomposition (CPD), which expresses a tensor as the sum of component rank-one tensors and is used in a multitude of applications such as chemometrics, signal processing, neuroscience, and web analysis. The task of computing the CPD, however, can be difficult. The typical approach is based on alternating least squares (ALS) optimization, which can be remarkably fast but is not very accurate. Previously, nonlinear least squares (NLS) methods have also been recommended; existing NLS methods are accurate but slow. In this paper, we propose the use of gradient-based optimization methods. We discuss the mathematical calculation of the derivatives and further show that they can be computed efficiently, at the same cost as one iteration of ALS. Computational experiments demonstrate that the gradient-based optimization methods are much more accurate than ALS and orders of magnitude faster than NLS.
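
    The key computational point, that the gradient costs about as much as one ALS iteration, follows from the closed form of the derivative. For f(A,B,C) = ||X - [[A,B,C]]||², the gradient with respect to A involves only a mode-1 unfolding, a Khatri-Rao product, and small R x R Gram matrices. The sketch below (arbitrary shapes and rank, random data) checks the formula against a finite difference:

```python
# Gradient of the CP fitting objective, verified by finite differences.
import numpy as np
from scipy.linalg import khatri_rao

rng = np.random.default_rng(3)
I, J, K, R = 10, 9, 8, 3
X = rng.random((I, J, K))
A, B, C = rng.random((I, R)), rng.random((J, R)), rng.random((K, R))

def loss(A, B, C):
    return np.linalg.norm(X - np.einsum("ir,jr,kr->ijk", A, B, C)) ** 2

# Analytic gradient w.r.t. A; the C-order reshape of X pairs with
# khatri_rao(B, C), whose row (j*K + k) holds B[j] * C[k] elementwise.
gA = -2 * X.reshape(I, -1) @ khatri_rao(B, C) + 2 * A @ ((B.T @ B) * (C.T @ C))

eps, E = 1e-6, np.zeros_like(A)
E[2, 1] = 1.0
numeric = (loss(A + eps * E, B, C) - loss(A - eps * E, B, C)) / (2 * eps)
print("analytic:", gA[2, 1], "finite difference:", numeric)
```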

  3. Silanization of glass chips—A factorial approach for optimization

    NASA Astrophysics Data System (ADS)

    Vistas, Cláudia R.; Águas, Ana C. P.; Ferreira, Guilherme N. M.

    2013-12-01

    Silanization of glass chips with 3-mercaptopropyltrimethoxysilane (MPTS) was investigated and optimized to generate a high-quality layer with well-oriented thiol groups. A full factorial design was used to evaluate the influence of silane concentration and reaction time. The stabilization of the silane monolayer by thermal curing was also investigated, and a disulfide reduction step was included to fully regenerate the thiol surface function. Fluorescence analysis and water contact angle measurements were used to quantitatively assess the chemical modifications, wettability and quality of the modified chip surfaces throughout the silanization, curing and reduction steps. The factorial design enables a systematic approach to optimizing the glass chip silanization process. The optimal conditions for silanization were incubation of the chips in a 2.5% MPTS solution for 2 h, followed by curing at 110 °C for 2 h and a reduction step with 10 mM dithiothreitol for 30 min at 37 °C. Under these conditions the surface density of functional thiol groups was 4.9 × 10¹³ molecules/cm², similar to the maximum coverage expected from theoretical estimates based on the projected molecular area (∼5 × 10¹³ molecules/cm²).

  4. A hypothesis-driven approach to optimize field campaigns

    NASA Astrophysics Data System (ADS)

    Nowak, Wolfgang; Rubin, Yoram; de Barros, Felipe P. J.

    2012-06-01

    Most field campaigns aim at helping in specified scientific or practical tasks, such as modeling, prediction, optimization, or management. Often these tasks involve binary decisions or seek answers to yes/no questions under uncertainty, e.g., Is a model adequate? Will contamination exceed a critical level? In this context, the information needs of hydro(geo)logical modeling should be satisfied with efficient and rational field campaigns, e.g., because budgets are limited. We propose a new framework to optimize field campaigns that defines the quest for defensible decisions as the ultimate goal. The key steps are to formulate yes/no questions under uncertainty as Bayesian hypothesis tests, and then use the expected failure probability of hypothesis testing as objective function. Our formalism is unique in that it optimizes field campaigns for maximum confidence in decisions on model choice, binary engineering or management decisions, or questions concerning compliance with environmental performance metrics. It is goal oriented, recognizing that different models, questions, or metrics deserve different treatment. We use a formal Bayesian scheme called PreDIA, which is free of linearization, and can handle arbitrary data types, scientific tasks, and sources of uncertainty (e.g., conceptual, physical, (geo)statistical, measurement errors). This reduces the bias due to possibly subjective assumptions prior to data collection and improves the chances of successful field campaigns even under conditions of model uncertainty. We illustrate our approach on two instructive examples from stochastic hydrogeology with increasing complexity.

  5. Optimal subinterval selection approach for power system transient stability simulation

    SciTech Connect

    Kim, Soobae; Overbye, Thomas J.

    2015-10-21

    Power system transient stability analysis requires an appropriate integration time step to avoid numerical instability as well as to reduce computational demands. For fast system dynamics, which vary more rapidly than what the time step covers, a fraction of the time step, called a subinterval, is used. However, the optimal value of this subinterval is not easily determined because the analysis of the system dynamics might be required. This selection is usually made from engineering experiences, and perhaps trial and error. This paper proposes an optimal subinterval selection approach for power system transient stability analysis, which is based on modal analysis using a single machine infinite bus (SMIB) system. Fast system dynamics are identified with the modal analysis and the SMIB system is used focusing on fast local modes. An appropriate subinterval time step from the proposed approach can reduce computational burden and achieve accurate simulation responses as well. As a result, the performance of the proposed method is demonstrated with the GSO 37-bus system.

  6. A Statistical Approach to Optimizing Concrete Mixture Design

    PubMed Central

    Alghamdi, Saeid A.

    2014-01-01

    A step-by-step statistical approach is proposed to obtain optimum proportioning of concrete mixtures using the data obtained through a statistically planned experimental program. The utility of the proposed approach for optimizing the design of concrete mixture is illustrated considering a typical case in which trial mixtures were considered according to a full factorial experiment design involving three factors and their three levels (3³). A total of 27 concrete mixtures with three replicates (81 specimens) were considered by varying the levels of key factors affecting compressive strength of concrete, namely, water/cementitious materials ratio (0.38, 0.43, and 0.48), cementitious materials content (350, 375, and 400 kg/m³), and fine/total aggregate ratio (0.35, 0.40, and 0.45). The experimental data were utilized to carry out analysis of variance (ANOVA) and to develop a polynomial regression model for compressive strength in terms of the three design factors considered in this study. The developed statistical model was used to show how optimization of concrete mixtures can be carried out with different possible options. PMID:24688405
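
    As a hedged illustration of the workflow (the strength values below are synthetic placeholders, not the study's measurements), one can build the 3³ design, fit the quadratic polynomial regression model by least squares, and grid-search the fitted surface for an optimum mixture:

```python
# Factorial design + polynomial regression + grid search (toy responses).
import itertools
import numpy as np

levels = {
    "wcm": [0.38, 0.43, 0.48],      # water/cementitious materials ratio
    "cm": [350.0, 375.0, 400.0],    # cementitious content (kg/m^3)
    "fa": [0.35, 0.40, 0.45],       # fine/total aggregate ratio
}
design = np.array(list(itertools.product(*levels.values())))  # 27 mixtures

def features(X):                    # quadratic model with interactions
    w, c, f = X.T
    return np.column_stack([np.ones_like(w), w, c, f, w*c, w*f, c*f,
                            w**2, c**2, f**2])

rng = np.random.default_rng(0)      # hypothetical strengths (MPa)
strength = (120 - 150*design[:, 0] + 0.05*design[:, 1] - 20*design[:, 2]
            + rng.normal(0, 1.0, len(design)))

beta, *_ = np.linalg.lstsq(features(design), strength, rcond=None)

grid = np.array(list(itertools.product(np.linspace(0.38, 0.48, 21),
                                       np.linspace(350, 400, 21),
                                       np.linspace(0.35, 0.45, 21))))
pred = features(grid) @ beta
print("predicted optimum mixture:", grid[np.argmax(pred)])
```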

  7. Optimal subinterval selection approach for power system transient stability simulation

    DOE PAGES

    Kim, Soobae; Overbye, Thomas J.

    2015-10-21

    Power system transient stability analysis requires an appropriate integration time step to avoid numerical instability as well as to reduce computational demands. For fast system dynamics, which vary more rapidly than what the time step covers, a fraction of the time step, called a subinterval, is used. However, the optimal value of this subinterval is not easily determined because the analysis of the system dynamics might be required. This selection is usually made from engineering experiences, and perhaps trial and error. This paper proposes an optimal subinterval selection approach for power system transient stability analysis, which is based on modal analysis using a single machine infinite bus (SMIB) system. Fast system dynamics are identified with the modal analysis and the SMIB system is used focusing on fast local modes. An appropriate subinterval time step from the proposed approach can reduce computational burden and achieve accurate simulation responses as well. As a result, the performance of the proposed method is demonstrated with the GSO 37-bus system.

  8. Correction of linear-array lidar intensity data using an optimal beam shaping approach

    NASA Astrophysics Data System (ADS)

    Xu, Fan; Wang, Yuanqing; Yang, Xingyu; Zhang, Bingqing; Li, Fenfang

    2016-08-01

    The linear-array lidar has recently been developed and applied for its advantages of vertically non-scanning operation, large field of view, high sensitivity and high precision. The beam shaper is the key component for linear-array detection. However, traditional beam shaping approaches can hardly satisfy the requirement of obtaining unbiased and complete backscattered intensity data. The required beam distribution should be roughly oblate U-shaped rather than Gaussian or uniform. Thus, an optimal beam shaping approach is proposed in this paper. By employing a pair of conical lenses and a cylindrical lens behind the beam expander, the expanded Gaussian laser is shaped into a line-shaped beam whose intensity distribution is more consistent with the required distribution. To provide a better fit to the requirement, an off-axis method is adopted. The design of the optimal beam shaping module is mathematically explained and experimental verification of the module performance is also presented in this paper. The experimental results indicate that the optimal beam shaping approach can effectively correct the intensity image and provide a ~30% gain in detection area over the traditional approach, thus improving the imaging quality of linear-array lidar.

  9. Optimization methods of the net emission computation applied to cylindrical sodium vapor plasma

    SciTech Connect

    Hadj Salah, S.; Hajji, S.; Ben Hamida, M. B.; Charrada, K.

    2015-01-15

    An optimization method based on a physical analysis of the temperature profile and of the different terms in the radiative transfer equation is developed to reduce the computation time of the net emission. This method has been applied to a cylindrical discharge in sodium vapor. Numerical results show a relative error in spectral flux density values lower than 5% with respect to an exact solution, while the computation time is about 10 orders of magnitude less. This method is followed by a spectral method based on the rearrangement of the line profiles. Results are shown for a Lorentzian profile and demonstrate a relative error lower than 10% with respect to the reference method and a gain in computation time of about 20 orders of magnitude.

  10. Optimal administrative scale for planning public services: a social cost model applied to Flemish hospital care.

    PubMed

    Blank, Jos L T; van Hulst, Bart

    2015-01-01

    In choosing the scale of public services, such as hospitals, both economic and public administrative considerations play important roles. The scale and the corresponding spatial distribution of public institutions have consequences for social costs, defined as the institutions' operating costs and the users' travel costs (which include the money and time costs). Insight into the relationship between scale and spatial distribution and social costs provides a practical guide for the best possible administrative planning level. This article presents a purely economic model that is suitable for deriving the optimal scale for public services. The model also reveals the corresponding optimal administrative planning level from an economic perspective. We applied this model to hospital care in Flanders for three different types of care. For its application, we examined the social costs of hospital services at different levels of administrative planning. The outcomes show that the social costs of rehabilitation in Flanders with planning at the urban level (38 areas) are 11% higher than those at the provincial level (five provinces). At the regional level (18 areas), the social costs of rehabilitation are virtually equal to those at the provincial level. For radiotherapy, there is a difference of 88% in the social costs between the urban and the provincial level. For general care, there are hardly any cost differences between the three administrative levels. Thus, purely from the perspective of social costs, rehabilitation should preferably be planned at the regional level, general services at the urban level and radiotherapy at the provincial level.

  11. Dual adaptive filtering by optimal projection applied to filter muscle artifacts on EEG and comparative study.

    PubMed

    Boudet, Samuel; Peyrodie, Laurent; Szurhaj, William; Bolo, Nicolas; Pinti, Antonio; Gallois, Philippe

    2014-01-01

    Muscle artifacts constitute one of the major problems in electroencephalogram (EEG) examinations, particularly for the diagnosis of epilepsy, where pathological rhythms occur within the same frequency bands as those of artifacts. This paper proposes the dual adaptive filtering by optimal projection (DAFOP) method to automatically remove artifacts while preserving true cerebral signals. DAFOP is a two-step method. The first step consists in applying the common spatial pattern (CSP) method to two frequency windows to identify the slowest components, which are considered to be cerebral sources. The two frequency windows are defined by optimizing convolutional filters. The second step consists in using a regression method to reconstruct the signal independently within various frequency windows. This method was evaluated by two neurologists on a selection of 114 pages with muscle artifacts, from 20 clinical recordings of awake and sleeping adults exhibiting pathological signals and epileptic seizures. A blind comparison was then conducted with the canonical correlation analysis (CCA) method and conventional low-pass filtering at 30 Hz. The filtering rate was 84.3% for muscle artifacts with a 6.4% reduction of cerebral signals, even for the fastest waves. DAFOP was found to be significantly more efficient than CCA and 30 Hz filters. The DAFOP method is fast and automatic and can easily be used in clinical EEG recordings.

  12. Genetic algorithm applied to the optimization of quantum cascade lasers with second harmonic generation

    SciTech Connect

    Gajić, A.; Radovanović, J. Milanović, V.; Indjin, D.; Ikonić, Z.

    2014-02-07

    A computational model for the optimization of the second order optical nonlinearities in GaInAs/AlInAs quantum cascade laser structures is presented. The set of structure parameters that lead to improved device performance was obtained through the implementation of the Genetic Algorithm. In the following step, the linear and second harmonic generation power were calculated by self-consistently solving the system of rate equations for carriers and photons. This rate equation system included both stimulated and simultaneous double photon absorption processes that occur between the levels relevant for second harmonic generation, and material-dependent effective mass, as well as band nonparabolicity, were taken into account. The developed method is general, in the sense that it can be applied to any higher order effect which requires the photon density equation to be included. Specifically, we have addressed the optimization of the active region of a double quantum well In0.53Ga0.47As/Al0.48In0.52As structure and presented its output characteristics.

  13. Optimization of supercritical carbon dioxide extraction of silkworm pupal oil applying the response surface methodology.

    PubMed

    Wei, Zhao-Jun; Liao, Ai-Mei; Zhang, Hai-Xiang; Liu, Jian; Jiang, Shao-Tong

    2009-09-01

    Supercritical carbon dioxide (SC-CO2) extraction of oil from desilked silkworm pupae was performed. Response surface methodology (RSM) was applied to optimize the parameters of SC-CO2 extraction. The effects of the independent variables, including pressure, temperature, CO2 flow rate, and extraction time, on the yield of oil were investigated. The statistical analysis showed that pressure, extraction time, and the quadratics of pressure, extraction time, and CO2 flow rate, as well as the interactions between pressure and temperature and between temperature and flow rate, had significant effects on oil yield. The optimal extraction condition for oil yield within the experimental range of the variables studied was 324.5 bar, 39.6 °C, 131.2 min, and 19.3 L/h. At this condition, the yield of oil was predicted to be 29.73%. The obtained silkworm pupal oil contained more than 68% total unsaturated fatty acids, and alpha-linolenic acid (ALA) accounted for 27.99% of the total oil.

  14. A simple approach to metal hydride alloy optimization

    NASA Technical Reports Server (NTRS)

    Lawson, D. D.; Miller, C.; Landel, R. F.

    1976-01-01

    Certain metals and related alloys can combine with hydrogen in a reversible fashion, so that on being heated they release a portion of the gas. Such materials may find application in the large scale storage of hydrogen. Metals and alloys which show high dissociation pressure at low temperatures and low endothermic heat of dissociation, and are therefore desirable for hydrogen storage, give values of the Hildebrand-Scott solubility parameter that lie between 100 and 118 Hildebrands (Ref. 1), close to that of dissociated hydrogen. All of the less practical storage systems give much lower values of the solubility parameter. By using the Hildebrand solubility parameter as a criterion, and applying the mixing rule to combinations of known alloys and solid solutions, correlations are made to optimize alloy compositions and maximize hydrogen storage capacity.
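
    The screening criterion reduces to simple arithmetic: apply a linear mixing rule to the component solubility parameters and keep blends that land in the 100-118 Hildebrand window. In the sketch below, the two component parameter values are hypothetical placeholders.

```python
# Linear mixing rule over Hildebrand solubility parameters (toy components).
import numpy as np

delta_a, delta_b = 92.0, 126.0        # hypothetical component parameters
target_lo, target_hi = 100.0, 118.0   # window associated with good storage

x = np.linspace(0.0, 1.0, 101)        # fraction of component A
delta_mix = x * delta_a + (1 - x) * delta_b   # mixing rule
ok = (delta_mix >= target_lo) & (delta_mix <= target_hi)
print("admissible fractions of A: %.2f to %.2f" % (x[ok].min(), x[ok].max()))
```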

  16. A coupled experimental-modeling approach for estimation of root zone leaching of applied irrigation water and fertilizers

    NASA Astrophysics Data System (ADS)

    Kandelous, M.; Moradi, A. B.; Hopmans, J. W.; Burger, M.

    2012-12-01

    Micro-irrigation methods have proven to be highly effective in achieving the desired crop yields, but there is increasing evidence suggesting the need for the optimization of irrigation scheduling and management, thereby achieving sustainable agricultural practices, while minimizing losses of applied water and fertilizers at the field scale. Moreover, sustainable irrigation systems must maintain a long-term salt balance that minimizes both salinity impacts on crop production and salt leaching to the groundwater. To optimize cropping system efficiency and irrigation/fertigation practices, irrigation and fertilizers must be applied at the right concentration, place, and time to ensure maximum root uptake. However, the applied irrigation water and dissolved fertilizer, as well as root growth and associated nutrient and water uptake, interact with soil properties and nutrient sources in a complex manner that cannot easily be resolved with 'experience' and field experimentation alone. Therefore, a coupling of experimentation and modeling is required to unravel the complexities resulting from spatial variations of soil texture and layering often found in agricultural fields. We present experimental approaches that provide the necessary data on soil moisture, water potential, and nitrate concentration and multi-dimensional modeling of unsaturated water flow and solute transport to evaluate and optimize irrigation and fertility management practices for multiple locations, crop types, and irrigation systems.

  17. A stochastic optimization approach for integrated urban water resource planning.

    PubMed

    Huang, Y; Chen, J; Zeng, S; Sun, F; Dong, X

    2013-01-01

    Urban water is facing the challenges of both scarcity and water quality deterioration. Consideration of nonconventional water resources has increasingly become essential over the last decade in urban water resource planning. In addition, rapid urbanization and economic development have led to increasingly uncertain water demand and fragile water infrastructures. Planning of urban water resources thus requires not only an integrated consideration of both conventional and nonconventional urban water resources, including reclaimed wastewater and harvested rainwater, but also the ability to design under gross future uncertainties for better reliability. This paper developed an integrated nonlinear stochastic optimization model for urban water resource evaluation and planning in order to optimize urban water flows. It accounted for not only water quantity but also water quality from different sources and for different uses with different costs. The model was successfully applied to a case study in Beijing, which is facing a significant water shortage. The results reveal how various urban water resources could be cost-effectively allocated by different planning alternatives and how their reliabilities would change.
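
    As a toy illustration of the allocation step (not the paper's nonlinear stochastic model), the sketch below solves a deterministic linear version with SciPy: meet a fixed demand from conventional water, reclaimed wastewater, and harvested rainwater at minimum cost, subject to source capacities. All numbers are made up.

        from scipy.optimize import linprog

        costs = [1.0, 0.6, 0.3]        # $/m3: conventional, reclaimed, rainwater
        capacity = [80.0, 40.0, 15.0]  # Mm3/yr available from each source
        demand = 100.0                 # Mm3/yr total urban demand

        # minimize costs @ x  subject to  sum(x) >= demand, 0 <= x <= capacity
        res = linprog(c=costs,
                      A_ub=[[-1.0, -1.0, -1.0]], b_ub=[-demand],
                      bounds=list(zip([0.0] * 3, capacity)))
        print(res.x, res.fun)          # optimal allocation and total cost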

  18. A new approach to the optimal target selection problem

    NASA Astrophysics Data System (ADS)

    Elson, E. C.; Bassett, B. A.; van der Heyden, K.; Vilakazi, Z. Z.

    2007-03-01

    Context: This paper addresses a common problem in astronomy and cosmology: to optimally select a subset of targets from a larger catalog. A specific example is the selection of targets from an imaging survey for multi-object spectrographic follow-up. Aims: We present a new heuristic optimisation algorithm, HYBRID, for this purpose and undertake detailed studies of its performance. Methods: HYBRID combines elements of the simulated annealing, MCMC and particle-swarm methods and is particularly successful in cases where the survey landscape has multiple curvature or clustering scales. Results: HYBRID consistently outperforms the other methods, especially in high-dimensionality spaces with many extrema. This means many fewer simulations must be run to reach a given performance confidence level and implies very significant advantages in solving complex or computationally expensive optimisation problems. Conclusions: HYBRID outperforms both MCMC and SA in all cases, including optimisation of high-dimensional continuous surfaces, indicating that HYBRID is useful far beyond the specific problem of optimal target selection. Future work will apply HYBRID to target selection for the new 10 m Southern African Large Telescope in South Africa.
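
    For readers unfamiliar with these heuristics, the sketch below shows a plain simulated-annealing selector for the same subset-selection task; it is a baseline of the kind HYBRID is compared against, not the authors' algorithm, and the utility function (summed priority minus a crowding penalty) is invented for illustration.

        import math, random

        random.seed(0)
        # catalog of targets: (x, y, priority), all synthetic
        targets = [(random.random(), random.random(), random.random())
                   for _ in range(200)]
        k, r_min = 20, 0.05            # subset size, minimum target separation

        def utility(sel):
            u = sum(targets[i][2] for i in sel)
            pts = [targets[i][:2] for i in sel]
            for a in range(len(pts)):
                for b in range(a + 1, len(pts)):
                    if math.dist(pts[a], pts[b]) < r_min:
                        u -= 1.0       # penalize crowded (unobservable) pairs
            return u

        sel = random.sample(range(len(targets)), k)
        best_u = utility(sel)
        for step in range(5000):
            T = 1.0 * (0.999 ** step)  # geometric cooling schedule
            cand = list(sel)
            cand[random.randrange(k)] = random.choice(
                [i for i in range(len(targets)) if i not in sel])
            du = utility(cand) - utility(sel)
            if du > 0 or random.random() < math.exp(du / max(T, 1e-9)):
                sel = cand
                best_u = max(best_u, utility(sel))
        print(best_u)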

  19. New Approaches to HSCT Multidisciplinary Design and Optimization

    NASA Technical Reports Server (NTRS)

    Schrage, D. P.; Craig, J. I.; Fulton, R. E.; Mistree, F.

    1996-01-01

    The successful development of a capable and economically viable high speed civil transport (HSCT) is perhaps one of the most challenging tasks in aeronautics for the next two decades. At its heart it is fundamentally the design of a complex engineered system that has significant societal, environmental and political impacts. As such it presents a formidable challenge to all areas of aeronautics, and it is therefore a particularly appropriate subject for research in multidisciplinary design and optimization (MDO). In fact, it is starkly clear that without the availability of powerful and versatile multidisciplinary design, analysis and optimization methods, the design, construction and operation of an HSCT simply cannot be achieved. The present research project is focused on the development and evaluation of MDO methods that, while broader and more general in scope, are particularly appropriate to the HSCT design problem. The research aims to not only develop the basic methods but also to apply them to relevant examples from the NASA HSCT R&D effort. The research involves a three-year effort aimed first at the HSCT MDO problem description, next the development of the problem, and finally a solution to a significant portion of the problem.

  20. Discovery and Optimization of Materials Using Evolutionary Approaches.

    PubMed

    Le, Tu C; Winkler, David A

    2016-05-25

    Materials science is undergoing a revolution, generating valuable new materials such as flexible solar panels, biomaterials and printable tissues, new catalysts, polymers, and porous materials with unprecedented properties. However, the number of potentially accessible materials is immense. Artificial evolutionary methods such as genetic algorithms, which explore large, complex search spaces very efficiently, can be applied to the identification and optimization of novel materials more rapidly than by physical experiments alone. Machine learning models can augment experimental measurements of materials fitness to accelerate the identification of useful and novel materials in vast materials composition or property spaces. This review discusses the problems of large materials spaces, the types of evolutionary algorithms employed to identify or optimize materials, and how materials can be represented mathematically as genomes, describes fitness landscapes and mutation operators commonly employed in materials evolution, and provides a comprehensive summary of published research on the use of evolutionary methods to generate new catalysts, phosphors, and a range of other materials. The review identifies the potential for evolutionary methods to revolutionize a wide range of manufacturing, medical, and materials-based industries.
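
    A genetic algorithm of the kind surveyed here can be written in a few lines. In this hedged sketch, the genome is a hypothetical four-component composition and the fitness function is a toy surrogate standing in for an experimental measurement or machine learning model; everything here is illustrative.

        import random

        random.seed(1)

        def fitness(x):                # toy surrogate with a known optimum
            ideal = [0.4, 0.3, 0.2, 0.1]
            return -sum((a - b) ** 2 for a, b in zip(x, ideal))

        def normalize(x):              # keep fractions positive, summing to 1
            x = [max(v, 1e-6) for v in x]
            s = sum(x)
            return [v / s for v in x]

        pop = [normalize([random.random() for _ in range(4)]) for _ in range(30)]
        for gen in range(100):
            pop.sort(key=fitness, reverse=True)
            parents = pop[:10]                   # truncation selection
            children = []
            while len(children) < 20:
                p, q = random.sample(parents, 2)
                cut = random.randrange(1, 4)     # one-point crossover
                child = p[:cut] + q[cut:]
                child[random.randrange(4)] += random.gauss(0, 0.05)  # mutation
                children.append(normalize(child))
            pop = parents + children
        print(pop[0], fitness(pop[0]))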

  1. Optimal linearized Poisson-Boltzmann theory applied to the simulation of flexible polyelectrolytes in solution.

    PubMed

    Bathe, M; Grodzinsky, A J; Tidor, B; Rutledge, G C

    2004-10-22

    Optimal linearized Poisson-Boltzmann (OLPB) theory is applied to the simulation of flexible polyelectrolytes in solution. As previously demonstrated in the contexts of the cell model [H. H. von Grünberg, R. van Roij, and G. Klein, Europhys. Lett. 55, 580 (2001)] and a particle-based model [B. Beresford-Smith, D. Y. C. Chan, and D. J. Mitchell, J. Colloid Interface Sci. 105, 216 (1985)] of charged colloids, OLPB theory is applicable to thermodynamic states at which conventional, Debye-Hückel (DH) linearization of the Poisson-Boltzmann equation is rendered invalid by violation of the condition that the electrostatic coupling energy of a mobile ion be much smaller than its thermal energy throughout space, |ν_α e ψ(r)| << k_B T. Here the theory is applied to a concentrated solution of freely jointed chains. The osmotic pressure is computed at various reservoir ionic strengths and compared with results from the conventional DH model for polyelectrolytes. Through comparison with the cylindrical cell model for polyelectrolytes, it is demonstrated that the OLPB model yields the correct osmotic pressure behavior with respect to nonlinear theory where conventional DH theory fails, namely at large ratios of mean counterion density to reservoir salt density, when the Donnan potential is large. (c) 2004 American Institute of Physics.

  2. Optimization of clean fractionation process applied to switchgrass to produce pulp for enzymatic hydrolysis.

    PubMed

    Brudecki, Grzegorz; Cybulska, Iwona; Rosentrater, Kurt

    2013-03-01

    The purpose of this study was to fractionate switchgrass (SG) to obtain hemicellulose- and lignin-rich fractions and a highly digestible pulp, using a clean fractionation (CF) approach. The main objective was to produce the highest glucose yield in the enzymatic hydrolysis of the pulp. Effects of processing factors such as time (10-50 min), temperature (120-160 °C), catalyst concentration (0.21-0.93% w/w sulfuric acid) and organic solvent mixture composition (7-43% w/w methyl isobutyl ketone) were evaluated. Response surface methodology and a central composite design were used for process optimization and statistical analyses. High lignin (75-93%) and xylan (83-100%) removal from the biomass was obtained, leaving a solid pulp rich in glucan (78-94%). High enzymatic hydrolysis glucose yields (more than 90%) were obtained for selected optimal conditions. The pulp can be used for ethanol production, while the separated xylan and lignin fractions can be used as feedstock for value-added products, which suggests the applicability of clean fractionation technology in a biorefinery concept. Copyright © 2012 Elsevier Ltd. All rights reserved.
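
    The response-surface step can be illustrated compactly: fit a full quadratic model in two of the four factors to measured yields and read off the fitted optimum. The data points below are invented for illustration; the study fitted all four factors over a central composite design.

        import numpy as np

        # (time [min], temperature [C]) -> glucose yield [%], illustrative only
        X = np.array([[10, 120], [10, 160], [50, 120], [50, 160], [30, 140],
                      [30, 120], [30, 160], [10, 140], [50, 140]], float)
        y = np.array([62, 70, 74, 81, 90, 78, 84, 68, 83], float)

        t, T = X[:, 0], X[:, 1]
        A = np.column_stack([np.ones_like(t), t, T, t * T, t ** 2, T ** 2])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)   # quadratic surface fit

        tt, TT = np.meshgrid(np.linspace(10, 50, 81), np.linspace(120, 160, 81))
        pred = (beta[0] + beta[1] * tt + beta[2] * TT + beta[3] * tt * TT
                + beta[4] * tt ** 2 + beta[5] * TT ** 2)
        i = np.unravel_index(np.argmax(pred), pred.shape)
        print(f"fitted optimum: t={tt[i]:.0f} min, T={TT[i]:.0f} C, "
              f"yield={pred[i]:.1f}%")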

  3. Dynamic optimization of ISR sensors using a risk-based reward function applied to ground and space surveillance scenarios

    NASA Astrophysics Data System (ADS)

    DeSena, J. T.; Martin, S. R.; Clarke, J. C.; Dutrow, D. A.; Newman, A. J.

    2012-06-01

    As the number and diversity of sensing assets available for intelligence, surveillance and reconnaissance (ISR) operations continues to expand, the limited ability of human operators to effectively manage, control and exploit the ISR ensemble is exceeded, leading to reduced operational effectiveness. Automated support both in the processing of voluminous sensor data and in sensor asset control can relieve the burden on human operators and support the operation of larger ISR ensembles. In dynamic environments it is essential to react quickly to current information to avoid stale, sub-optimal plans. Our approach is to apply the principles of feedback control to ISR operations, "closing the loop" from the sensor collections through automated processing to ISR asset control. Previous work by the authors demonstrated non-myopic multiple platform trajectory control using a receding horizon controller in a closed feedback loop with a multiple hypothesis tracker, applied to multi-target search and track simulation scenarios in the ground and space domains. This paper presents extensions in both size and scope of the previous work, demonstrating closed-loop control, involving both platform routing and sensor pointing, of a multisensor, multi-platform ISR ensemble tasked with providing situational awareness and performing search, track and classification of multiple moving ground targets in irregular warfare scenarios. The closed-loop ISR system is fully realized using distributed, asynchronous components that communicate over a network. The closed-loop ISR system has been exercised via a networked simulation test bed against a scenario in the Afghanistan theater implemented using high-fidelity terrain and imagery data. In addition, the system has been applied to space surveillance scenarios requiring tracking of space objects, where current deliberative, manually intensive processes for managing sensor assets are insufficiently responsive. Simulation experiment results are presented.

  4. An Airway Network Flow Assignment Approach Based on an Efficient Multiobjective Optimization Framework

    PubMed Central

    Zhang, Xuejun; Lei, Jiaxing

    2015-01-01

    To reduce airspace congestion and flight delay simultaneously, this paper formulates the airway network flow assignment (ANFA) problem as a multiobjective optimization model and presents a new multiobjective optimization framework to solve it. Firstly, an effective multi-island parallel evolution algorithm with multiple evolution populations is employed to improve the optimization capability. Secondly, the nondominated sorting genetic algorithm II is applied for each population. In addition, a cooperative coevolution algorithm is adapted to divide the ANFA problem into several low-dimensional biobjective optimization problems which are easier to deal with. Finally, in order to maintain the diversity of solutions and to avoid prematurity, a dynamic adjustment operator based on solution congestion degree is specifically designed for the ANFA problem. Simulation results using real traffic data from the China air route network and daily flight plans demonstrate that the proposed approach can improve the solution quality effectively, showing superiority to existing approaches such as the multiobjective genetic algorithm, the well-known multiobjective evolutionary algorithm based on decomposition, and a cooperative coevolution multiobjective algorithm, as well as other parallel evolution algorithms with different migration topologies. PMID:26180840
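
    The ranking step at the heart of NSGA-II is nondominated sorting, sketched generically below for a minimization problem with two objectives (here imagined as congestion and delay); this is a textbook version, not the paper's parallel multi-island implementation.

        def dominates(a, b):
            """a dominates b: no worse in every objective, better in at least one."""
            return (all(x <= y for x, y in zip(a, b))
                    and any(x < y for x, y in zip(a, b)))

        def nondominated_sort(objs):
            fronts, remaining = [], list(range(len(objs)))
            while remaining:
                front = [i for i in remaining
                         if not any(dominates(objs[j], objs[i]) for j in remaining)]
                fronts.append(front)
                remaining = [i for i in remaining if i not in front]
            return fronts

        # toy (congestion, delay) values for five candidate flow assignments
        pop = [(3.0, 5.0), (2.0, 6.0), (4.0, 4.0), (3.5, 5.5), (1.0, 9.0)]
        print(nondominated_sort(pop))  # first front holds the Pareto-best plans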

  5. Optimization of glycerol fed-batch fermentation in different reactor states: a variable kinetic parameter approach.

    PubMed

    Xie, Dongming; Liu, Dehua; Zhu, Haoli; Zhang, Jianan

    2002-05-01

    To optimize the fed-batch processes of glycerol fermentation in different reactor states, typical bioreactors including a 500-mL shaking flask, 600-mL and 15-L airlift loop reactors, and a 5-L stirred vessel were investigated. It was found that, by reestimating the values of only two variable kinetic parameters associated with physical transport phenomena in a reactor, the macrokinetic model of glycerol fermentation proposed in previous work could describe the batch processes in different reactor states well. This variable kinetic parameter (VKP) approach was further applied to the model-based optimization of discrete-pulse feed (DPF) strategies of both glucose and corn steep slurry for glycerol fed-batch fermentation. The experimental results showed that, compared with the feed strategies determined by limited experimental optimization in previous work, the DPF strategies with adjusted VKPs could improve glycerol productivity by at least 27% in the scale-down and scale-up reactor states. The approach proposed appears promising for further modeling and optimization of glycerol fermentation or similar bioprocesses at larger scales.

  6. Approaches of Russian oil companies to optimal capital structure

    NASA Astrophysics Data System (ADS)

    Ishuk, T.; Ulyanova, O.; Savchitz, V.

    2015-11-01

    Oil companies play a vital role in the Russian economy. Demand for hydrocarbon products will increase over the coming decades along with population growth and social needs. A shift away from the raw-material orientation of the Russian economy and a transition to an innovation-driven path of development do not rule out the development of the oil industry in the future. Moreover, society expects this sector to bring the Russian economy onto the road of innovative development through neo-industrialization. To achieve this, both government support and effective capital management by the companies are required. To form an optimal capital structure, it is necessary to minimize the cost of capital, reduce specific risks within existing limits, and maximize profitability. An analysis of the capital structures of Russian and foreign oil companies shows different approaches, reasons and conditions and, consequently, different relationships between equity and debt capital and their costs, which demands an effective capital management strategy.

  7. Frost Formation: Optimizing solutions under a finite volume approach

    NASA Astrophysics Data System (ADS)

    Bartrons, E.; Perez-Segarra, C. D.; Oliet, C.

    2016-09-01

    A three-dimensional transient formulation of the frost formation process is developed by means of a finite volume approach. Emphasis is put on the frost surface boundary condition as well as the wide range of empirical correlations related to the thermophysical and transport properties of frost. A study of the numerical solution is made, establishing the parameters that ensure grid independence. Attention is given to the algorithm, the discretised equations and the code optimization through dynamic relaxation techniques. A critical analysis of four cases is carried out by comparing the solutions of several empirical models against experimental data. On this basis, a discussion of the performance of these correlations is opened and a proposal of the most suitable models is presented.

  8. Optimizing Dendritic Cell-Based Approaches for Cancer Immunotherapy

    PubMed Central

    Datta, Jashodeep; Terhune, Julia H.; Lowenfeld, Lea; Cintolo, Jessica A.; Xu, Shuwen; Roses, Robert E.; Czerniecki, Brian J.

    2014-01-01

    Dendritic cells (DC) are professional antigen-presenting cells uniquely suited for cancer immunotherapy. They induce primary immune responses, potentiate the effector functions of previously primed T-lymphocytes, and orchestrate communication between innate and adaptive immunity. The remarkable diversity of cytokine activation regimens, DC maturation states, and antigen-loading strategies employed in current DC-based vaccine design reflects an evolving, but incomplete, understanding of optimal DC immunobiology. In the clinical realm, existing DC-based cancer immunotherapy efforts have yielded encouraging but inconsistent results. Despite the recent U.S. Food and Drug Administration (FDA) approval of DC-based sipuleucel-T for metastatic castration-resistant prostate cancer, clinically effective DC immunotherapy as monotherapy for a majority of tumors remains a distant goal. Recent work has identified strategies that may allow for more potent “next-generation” DC vaccines. Additionally, multimodality approaches incorporating DC-based immunotherapy may improve clinical outcomes. PMID:25506283

  9. Model reduction for chemical kinetics: An optimization approach

    SciTech Connect

    Petzold, L.; Zhu, W.

    1999-04-01

    The kinetics of a detailed chemically reacting system can potentially be very complex. Although the chemist may be interested in only a few species, the reaction model almost always involves a much larger number of species. Some of those species are radicals, which are very reactive and can be important intermediates in the reaction scheme. A large number of elementary reactions can occur among the species; some of these reactions are fast and some are slow. The aim of simplified kinetics modeling is to derive the simplest reaction system which retains the essential features of the full system. An optimization-based method for reducing the number of species and reactions in a chemical kinetics model is described. Numerical results for several reaction mechanisms illustrate the potential of this approach.

  10. Selection of Reserves for Woodland Caribou Using an Optimization Approach

    PubMed Central

    Schneider, Richard R.; Hauer, Grant; Dawe, Kimberly; Adamowicz, Wiktor; Boutin, Stan

    2012-01-01

    Habitat protection has been identified as an important strategy for the conservation of woodland caribou (Rangifer tarandus). However, because of the economic opportunity costs associated with protection it is unlikely that all caribou ranges can be protected in their entirety. We used an optimization approach to identify reserve designs for caribou in Alberta, Canada, across a range of potential protection targets. Our designs minimized costs as well as three demographic risk factors: current industrial footprint, presence of white-tailed deer (Odocoileus virginianus), and climate change. We found that, using optimization, 60% of current caribou range can be protected (including 17% in existing parks) while maintaining access to over 98% of the value of resources on public lands. The trade-off between minimizing cost and minimizing demographic risk factors was minimal because the spatial distributions of cost and risk were similar. The prospects for protection are much reduced if protection is directed towards the herds that are most at risk of near-term extirpation. PMID:22363702

  11. Selection of reserves for woodland caribou using an optimization approach.

    PubMed

    Schneider, Richard R; Hauer, Grant; Dawe, Kimberly; Adamowicz, Wiktor; Boutin, Stan

    2012-01-01

    Habitat protection has been identified as an important strategy for the conservation of woodland caribou (Rangifer tarandus). However, because of the economic opportunity costs associated with protection it is unlikely that all caribou ranges can be protected in their entirety. We used an optimization approach to identify reserve designs for caribou in Alberta, Canada, across a range of potential protection targets. Our designs minimized costs as well as three demographic risk factors: current industrial footprint, presence of white-tailed deer (Odocoileus virginianus), and climate change. We found that, using optimization, 60% of current caribou range can be protected (including 17% in existing parks) while maintaining access to over 98% of the value of resources on public lands. The trade-off between minimizing cost and minimizing demographic risk factors was minimal because the spatial distributions of cost and risk were similar. The prospects for protection are much reduced if protection is directed towards the herds that are most at risk of near-term extirpation.

  12. Using tailored methodical approaches to achieve optimal science outcomes

    NASA Astrophysics Data System (ADS)

    Wingate, Lory M.

    2016-08-01

    The science community is actively engaged in the research, development, and construction of instrumentation projects that they anticipate will lead to new science discoveries. There appears to be a very strong link between the quality of the activities used to complete these projects and having a fully functioning science instrument that will facilitate these investigations [2]. The combination of using internationally recognized standards within the disciplines of project management (PM) and systems engineering (SE) has been demonstrated to lead to positive net effects and optimal project outcomes. Conversely, unstructured, poorly managed projects will lead to unpredictable, suboptimal project outcomes, ultimately affecting the quality of the science that can be done with the new instruments. The proposed application of these two specific methodical approaches, implemented as a tailorable suite of processes, is presented in this paper. Project management (PM) is accepted worldwide as an effective methodology used to control project cost, schedule, and scope. Systems engineering (SE) is an accepted method used to ensure that the outcomes of a project match the intent of the stakeholders, or, if they diverge, that the changes are understood, captured, and controlled. An appropriate application, or tailoring, of these disciplines can be the foundation upon which success in projects that support science can be optimized.

  13. Design optimization for cost and quality: The robust design approach

    NASA Technical Reports Server (NTRS)

    Unal, Resit

    1990-01-01

    Designing reliable, low-cost, and operable space systems has become the key to future space operations. Designing high-quality space systems at low cost is an economic and technological challenge to the designer. A systematic and efficient way to meet this challenge is a new method of design optimization for performance, quality, and cost, called Robust Design. It consists of: making system performance insensitive to material and subsystem variation, thus allowing the use of less costly materials and components; making designs less sensitive to variations in the operating environment, thus improving reliability and reducing operating costs; and using a new structured development process so that engineering time is used most productively. The objective in Robust Design is to select the best combination of controllable design parameters so that the system is most robust to uncontrollable noise factors. The Robust Design methodology uses a mathematical tool called an orthogonal array, from design-of-experiments theory, to study a large number of decision variables with a significantly small number of experiments. Robust Design also uses a statistical measure of performance, called a signal-to-noise ratio, from electrical control theory, to evaluate the level of performance and the effect of noise factors. The purpose here is to investigate the Robust Design methodology for improving quality and cost, to demonstrate its application by the use of an example, and to suggest its use as an integral part of the space system design process.

  14. An improved ant colony optimization approach for optimization of process planning.

    PubMed

    Wang, JinFeng; Fan, XiaoLiang; Ding, Haimin

    2014-01-01

    Computer-aided process planning (CAPP) is an important interface between computer-aided design (CAD) and computer-aided manufacturing (CAM) in computer-integrated manufacturing environments (CIMs). In this paper, the process planning problem is described based on a weighted graph, and an ant colony optimization (ACO) approach is improved to deal with it effectively. The weighted graph consists of nodes, directed arcs, and undirected arcs, which denote operations, precedence constraints among operations, and the possible visited paths among operations, respectively. The ant colony goes through the necessary nodes on the graph to achieve the optimal solution with the objective of minimizing total production costs (TPCs). A pheromone updating strategy proposed in this paper is incorporated into the standard ACO, which includes a Global Update Rule and a Local Update Rule. A simple method of controlling the repeated number of identical process plans is designed to avoid local convergence. A case study has been carried out to examine the influence of various ACO parameters on system performance. Extensive comparative experiments have been carried out to validate the feasibility and efficiency of the proposed approach.
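
    The skeleton of such an ACO loop is short, as in the hedged Python sketch below: ants build operation sequences respecting precedence constraints, and pheromone evaporation plus deposit steers later ants toward cheap sequences. The costs and constraints are invented; the paper's version adds its own global/local update rules and a repetition-control mechanism.

        import random

        random.seed(2)
        n = 5                                     # operations 0..4
        prec = {(0, 2), (1, 3)}                   # (a, b): a must precede b
        cost = [[0 if i == j else random.randint(1, 9) for j in range(n)]
                for i in range(n)]                # stand-in production costs
        tau = [[1.0] * n for _ in range(n)]       # pheromone trails
        rho, Q = 0.1, 10.0                        # evaporation rate, deposit factor

        def feasible(op, done):
            return all(a in done for (a, b) in prec if b == op)

        best, best_cost = None, float("inf")
        for it in range(200):
            seq = [random.choice([i for i in range(n) if feasible(i, set())])]
            while len(seq) < n:
                done = set(seq)
                cands = [j for j in range(n) if j not in done and feasible(j, done)]
                w = [tau[seq[-1]][j] / cost[seq[-1]][j] for j in cands]
                seq.append(random.choices(cands, weights=w)[0])
            c = sum(cost[a][b] for a, b in zip(seq, seq[1:]))
            if c < best_cost:
                best, best_cost = seq, c
            tau = [[t * (1.0 - rho) for t in row] for row in tau]  # evaporate
            for a, b in zip(seq, seq[1:]):
                tau[a][b] += Q / c                # deposit along the tour used
        print(best, best_cost)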

  15. A hybrid approach using chaotic dynamics and global search algorithms for combinatorial optimization problems

    NASA Astrophysics Data System (ADS)

    Igeta, Hideki; Hasegawa, Mikio

    Chaotic dynamics have been effectively applied to improve various heuristic algorithms for combinatorial optimization problems in many studies. Currently, the most widely used chaotic optimization scheme drives heuristic solution search algorithms applicable to large-scale problems by chaotic neurodynamics, including the tabu effect of the tabu search. Alternatively, meta-heuristic algorithms perform combinatorial optimization by combining a neighboring-solution search algorithm, such as a tabu, gradient, or other search method, with a global search algorithm, such as genetic algorithms (GA), ant colony optimization (ACO), or others. Among these hybrid approaches, ACO has effectively optimized the solutions of many benchmark problems in the quadratic assignment problem library. In this paper, we propose a novel hybrid method that combines an effective chaotic search algorithm, which has better performance than the tabu search, with global search algorithms such as ACO and GA. Our results show that the proposed chaotic hybrid algorithm has better performance than the conventional chaotic search and conventional hybrid algorithms. In addition, we show that the chaotic search algorithm combined with ACO has better performance than when combined with GA.

  16. A Taguchi approach on optimal process control parameters for HDPE pipe extrusion process

    NASA Astrophysics Data System (ADS)

    Sharma, G. V. S. S.; Rao, R. Umamaheswara; Rao, P. Srinivasa

    2016-12-01

    High-density polyethylene (HDPE) pipes find versatile applicability for the transportation of water, sewage and slurry from one place to another. Hence, these pipes sustain tremendous pressure from the fluid they carry. The present work entails the optimization of the withstanding pressure of HDPE pipes using the Taguchi technique. The traditional heuristic methodology stresses a trial-and-error approach and relies heavily upon the accumulated experience of the process engineers for determining the optimal process control parameters, which results in less-than-optimal settings. Hence, there arises a need to determine optimal process control parameters for the pipe extrusion process that can ensure robust pipe quality and process reliability. In the proposed optimization strategy, designed experiments (DoE) are conducted wherein different control parameter combinations are analyzed by considering multiple setting levels of each control parameter. The concept of the signal-to-noise ratio (S/N ratio) is applied, and ultimately optimum values of the process control parameters are obtained: a pushing zone temperature of 166 °C, a dimmer speed of 8 rpm, and a die head temperature of 192 °C. A confirmation experimental run was also conducted to verify the analysis; its results agreed with the main experimental findings, and the withstanding pressure showed a significant improvement from 0.60 to 1.004 MPa.
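
    The core of the Taguchi analysis is the signal-to-noise transform. For a "larger-the-better" response such as withstanding pressure, S/N = -10*log10(mean(1/y^2)), and the factor levels with the highest mean S/N are selected. The replicate pressures below are illustrative stand-ins, not the study's measurements.

        import math

        def sn_larger_is_better(ys):
            # Taguchi larger-the-better S/N ratio in dB.
            return -10.0 * math.log10(sum(1.0 / y ** 2 for y in ys) / len(ys))

        # two DoE runs, each with three replicate pressure readings [MPa]
        runs = {
            ("166C", "8rpm", "192C"): [0.98, 1.00, 1.02],
            ("160C", "6rpm", "190C"): [0.61, 0.58, 0.63],
        }
        for levels, ys in runs.items():
            print(levels, f"S/N = {sn_larger_is_better(ys):.2f} dB")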

  17. Optimal pulse shapes for magnetic stimulation of fibers: An analytical approach using the excitation functional.

    PubMed

    Suarez-Bagnasco, Diego; Armentano-Feijoo, R; Suarez-Antola, R

    2010-01-01

    An analytical approach to threshold problems in the functional magnetic stimulation of nerve and skeletal muscle fibers was recently proposed, framed in terms of the excitation functional. Three generations of available equipment for magnetic stimulation are briefly considered, with emphasis on the corresponding pulse shapes in the stimulation coils. Using the criterion of minimum energy dissipated in biological tissues, an optimal shape is found for a current pulse in the coil that produces a just-threshold depolarization in a nerve or skeletal muscle fiber. The method can be further developed and applied to other threshold problems in functional electric stimulation.

  18. Heuristic Optimization Approach to Selecting a Transport Connection in City Public Transport

    NASA Astrophysics Data System (ADS)

    Kul'ka, Jozef; Mantič, Martin; Kopas, Melichar; Faltinová, Eva; Kachman, Daniel

    2017-02-01

    The article presents a heuristic optimization approach to selecting a suitable transport connection within a city public transport system. The methodology was applied to part of the public transport network in Košice, the second-largest city in the Slovak Republic, whose public transport network forms a complex transport system consisting of three different transport modes, namely bus, tram and trolley-bus. The solution focuses on examining the individual transport services and their interconnection at relevant interchange points.

  19. Eddy Currents applied to de-tumbling of space debris: feasibility analysis, design and optimization aspects

    NASA Astrophysics Data System (ADS)

    Ortiz Gómez, Natalia; Walker, Scott J. I.

    Existing studies on the evolution of the space debris population show that both mitigation measures and active debris removal methods are necessary in order to prevent the current population from growing. Active debris removal methods, which require contact with the target, show complications if the target is rotating at high speed. Observed rotations go up to 50 deg/s, combined with precession and nutation motions. “Natural” rotational damping in upper stages has been observed for some debris objects. This phenomenon occurs due to the eddy currents induced by the Earth’s magnetic field in the predominantly conductive materials of these man-made rotating objects. The idea presented in this paper is to subject the satellite to an enhanced magnetic field in order to subdue it and damp its rotation, thus allowing for the subsequent de-orbiting phase. The braking method proposed has the advantage of avoiding any kind of mechanical contact with the target. A deployable structure with a magnetic coil at its end is used to induce the necessary braking torques on the target. This way, the induced magnetic field is created far away from the chaser's main body, avoiding undesirable effects on its instruments. This paper focuses on the overall design of the system, and the parameters considered are: the braking time, the power required, the mass of the deployable structure and the magnetic coil system, the size of the coil, the materials selection and the distance to the target. The different equations that link all these variables together are presented. These equations leave enough free variables that the engineering design can be approached as an optimization problem. Given that only a few variables remain, no sophisticated numerical methods are called for, and a simple graphical approach can be used to display the optimum solutions. Some parameters are open to future refinements as the optimization problem must be contemplated globally in

  20. Optimization of floodplain monitoring sensors through an entropy approach

    NASA Astrophysics Data System (ADS)

    Ridolfi, E.; Yan, K.; Alfonso, L.; Di Baldassarre, G.; Napolitano, F.; Russo, F.; Bates, P. D.

    2012-04-01

    To support the decision-making processes of flood risk management and long-term floodplain planning, a significant issue is the availability of data to build appropriate and reliable models. Often the data required for model building, calibration and validation are not sufficient or available. A unique opportunity is offered nowadays by globally available data, which can be freely downloaded from the internet. However, there remains the question of the real potential of these global remote sensing data, characterized by different accuracies, for global inundation monitoring, and of how to integrate them with inundation models. In order to monitor a reach of the River Dee (UK), a network of cheap wireless sensors (GridStix) was deployed both in the channel and in the floodplain. These sensors measure the water depth, supplying the input data for flood mapping. Besides their accuracy and reliability, their location represents a big issue, the purpose being to provide as much information as possible with as little redundancy as possible. In order to update their layout, the initial number of six sensors was increased to create a redundant network over the area. Through an entropy approach, the most informative and least redundant sensors were then chosen among all. First, a simple raster-based inundation model (LISFLOOD-FP) is used to generate a synthetic GridStix data set of water stages. The Digital Elevation Model (DEM) used for hydraulic model building is the globally and freely available SRTM DEM. Second, the information content of each sensor is compared by evaluating their marginal entropy. Those with a low marginal entropy are excluded from the process because of their low capability to provide information. Then the number of sensors is optimized considering a Multi-Objective Optimization Problem (MOOP) with two objectives, namely maximization of the joint entropy (a measure of the information content) and minimization of the redundancy among the selected sensors.
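
    The marginal-entropy screening can be sketched in a few lines: discretize each sensor's (here synthetic) water-stage series on a common set of bins and rank sensors by entropy, so that low-information sensors are dropped before the multi-objective stage. The series below stand in for LISFLOOD-FP output.

        import numpy as np

        rng = np.random.default_rng(3)
        # synthetic stage series for 4 sensors with different variability
        stages = rng.normal(0.0, [0.1, 0.5, 1.0, 0.02], size=(500, 4))
        edges = np.linspace(-3.0, 3.0, 13)          # shared discretization bins

        def marginal_entropy(x):
            p, _ = np.histogram(x, bins=edges)
            p = p[p > 0] / p.sum()
            return float(-np.sum(p * np.log2(p)))   # entropy in bits

        H = [marginal_entropy(stages[:, s]) for s in range(stages.shape[1])]
        for s in np.argsort(H)[::-1]:               # most informative first
            print(f"sensor {s}: {H[s]:.2f} bits")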

  1. Convergence behavior of multireference perturbation theory: Forced degeneracy and optimization partitioning applied to the beryllium atom

    NASA Astrophysics Data System (ADS)

    Finley, James P.; Chaudhuri, Rajat K.; Freed, Karl F.

    1996-07-01

    High-order multireference perturbation theory is applied to the ¹S states of the beryllium atom using a reference (model) space composed of the |1s²2s²⟩ and |1s²2p²⟩ configuration-state functions (CSF's), a system that is known to yield divergent expansions using Møller-Plesset and Epstein-Nesbet partitioning methods. Computations of the eigenvalues are made through 40th order using forced degeneracy (FD) partitioning and the recently introduced optimization (OPT) partitioning. The former forces the 2s and 2p orbitals to be degenerate in zeroth order, while the latter chooses optimal zeroth-order energies of the (few) most important states. Our methodology employs simple models for understanding and suggesting remedies for unsuitable choices of reference spaces and partitioning methods. By examining a two-state model composed of only the |1s²2p²⟩ and |1s²2s3s⟩ states of the beryllium atom, it is demonstrated that the full computation with 1323 CSF's can converge only if the zeroth-order energy of the |1s²2s3s⟩ Rydberg state from the orthogonal space lies below the zeroth-order energy of the |1s²2p²⟩ CSF from the reference space. Thus convergence in this case requires a zeroth-order spectral overlap between the orthogonal and reference spaces. The FD partitioning is not capable of generating this type of spectral overlap and thus yields a divergent expansion. However, the expansion is actually asymptotically convergent, with divergent behavior not displayed until the 11th order, because the |1s²2s3s⟩ Rydberg state is only weakly coupled with the |1s²2p²⟩ CSF and because these states are energetically well separated in zeroth order. The OPT partitioning chooses the correct zeroth-order energy ordering and thus yields a convergent expansion that is also very accurate in low orders compared to the exact solution within the basis.

  2. Applying the Cultural Formulation Approach to Career Counseling with Latinas/os

    ERIC Educational Resources Information Center

    Flores, Lisa Y.; Ramos, Karina; Kanagui, Marlen

    2010-01-01

    In this article, the authors present two hypothetical cases, one of a Mexican American female college student and one of a Mexican immigrant adult male, and apply a culturally sensitive approach to career assessment and career counseling with each of these clients. Drawing from Leong, Hardin, and Gupta's cultural formulation approach (CFA) to…

  3. Applying the Subject "Cell" through Constructivist Approach during Science Lessons and the Teacher's View

    ERIC Educational Resources Information Center

    Dogru, Mustafa; Kalender, Suna

    2007-01-01

    In this study our purpose is to determine how teachers are applying the constructivist approach in their classes by classifying the teachers according to the faculty they graduated from, their department and their years of service. Besides understanding the difference between the effects of the constructivist approach and the traditional education method on students' success…

  4. Applying the Subject "Cell" through Constructivist Approach during Science Lessons and the Teacher's View

    ERIC Educational Resources Information Center

    Dogru, Mustafa; Kalender, Suna

    2007-01-01

    In this study our purpose is to determine how teachers are applying the constructivist approach in their classes by classifying the teachers according to the faculty they graduated from, their department and their years of service. Besides understanding the difference between the effects of the constructivist approach and the traditional education method on student success…

  5. Applying Bayesian Item Selection Approaches to Adaptive Tests Using Polytomous Items

    ERIC Educational Resources Information Center

    Penfield, Randall D.

    2006-01-01

    This study applied the maximum expected information (MEI) and the maximum posterior-weighted information (MPI) approaches to computer adaptive testing item selection in the case of a test using polytomous items following the partial credit model. The MEI and MPI approaches are described. A simulation study compared the efficiency of ability…

  6. A new multi criteria classification approach in a multi agent system applied to SEEG analysis.

    PubMed

    Kinié, A; Ndiaye, M; Montois, J J; Jacquelet, Y

    2007-01-01

    This work is focused on the study of the organization of SEEG signals during epileptic seizures using a multi-agent system approach. This approach is based on cooperative mechanisms of auto-organization at the micro level and of emergence of a global function at the macro level. In order to evaluate this approach we propose a distributed collaborative approach for the classification of the signals of interest. This new multi-criteria classification method is able to provide a relevant organization of brain area structures and to bring out elements of epileptogenic networks. The method is compared with another classification approach, a fuzzy classification, and gives better results when applied to SEEG signals.

  7. A new optimization approach for the calibration of an ultrasound probe using a 3D optical localizer.

    PubMed

    Dardenne, G; Cano, J D Gil; Hamitouche, C; Stindel, E; Roux, C

    2007-01-01

    This paper describes a fast procedure for the calibration of an ultrasound (US) probe using a 3D optical localizer. This calibration step allows us to obtain the 3D position of any point located on the 2D ultrasound (US) image. To carry out this procedure correctly, a phantom of known geometric properties is probed and these geometric features are found in the US images. A segmentation step is applied in order to automatically extract the needed information from the US images, and then a new optimization approach is performed to find the optimal calibration parameters.

  8. Optimizing algal cultivation & productivity : an innovative, multidiscipline, and multiscale approach.

    SciTech Connect

    Murton, Jaclyn K.; Hanson, David T.; Turner, Tom; Powell, Amy Jo; James, Scott Carlton; Timlin, Jerilyn Ann; Scholle, Steven; August, Andrew; Dwyer, Brian P.; Ruffing, Anne; Jones, Howland D. T.; Ricken, James Bryce; Reichardt, Thomas A.

    2010-04-01

    Progress in algal biofuels has been limited by significant knowledge gaps in algal biology, particularly as they relate to scale-up. To address this we are investigating how culture composition dynamics (light as well as biotic and abiotic stressors) describe key biochemical indicators of algal health: growth rate, photosynthetic electron transport, and lipid production. Our approach combines traditional algal physiology with genomics, bioanalytical spectroscopy, chemical imaging, remote sensing, and computational modeling to provide an improved fundamental understanding of algal cell biology across multiple culture scales. This work spans investigations from the single-cell level to ensemble measurements of algal cell cultures at the laboratory benchtop and at the large greenhouse scale (175 gal). We will discuss the advantages of this novel, multidisciplinary strategy and emphasize the importance of developing an integrated toolkit to provide sensitive, selective methods for detecting early fluctuations in algal health, productivity, and population diversity. Progress in several areas will be summarized, including the identification of spectroscopic signatures for algal culture composition, stress level, and lipid production, enabled by non-invasive spectroscopic monitoring of the photosynthetic and photoprotective pigments at the single-cell and bulk-culture scales. Early experiments compare and contrast the well-studied green alga Chlamydomonas with two potential production strains of microalgae, Nannochloropsis and Dunaliella, under optimal and stressed conditions. This integrated approach has the potential for broad impact on algal biofuels and bioenergy, and several of these opportunities will be discussed.

  9. Approach to optimal care at end of life.

    PubMed

    Nichols, K J

    2001-10-01

    At no other time in any patient's life is the team approach to care more important than at the end of life. The demands and challenges of end-of-life care (ELC) tax all physicians at some point. There is no other profession that is charged with this ultimate responsibility. No discipline in medicine is immune to the issues of end-of-life care except perhaps, ironically, pathology. This presentation addresses the issues, options, and challenges of providing optimal care at the end of life. It looks at the principles of ELC, barriers to good ELC, and what patients and families expect from ELC. Barriers to ELC include financial restrictions, inadequate caregivers, community support, legal issues, legislative issues, training needs, coordination of care, hospice care, and transitions for the patients and families. The legal aspects of physician-assisted suicide are presented, as well as the approach of the American Osteopathic Association to ensuring better education for physicians in the principles of ELC.

  10. A modular approach to large-scale design optimization of aerospace systems

    NASA Astrophysics Data System (ADS)

    Hwang, John T.

    Gradient-based optimization and the adjoint method form a synergistic combination that enables the efficient solution of large-scale optimization problems. Though the gradient-based approach struggles with non-smooth or multi-modal problems, the capability to efficiently optimize up to tens of thousands of design variables provides a valuable design tool for exploring complex tradeoffs and finding unintuitive designs. However, the widespread adoption of gradient-based optimization is limited by the implementation challenges for computing derivatives efficiently and accurately, particularly in multidisciplinary and shape design problems. This thesis addresses these difficulties in two ways. First, to deal with the heterogeneity and integration challenges of multidisciplinary problems, this thesis presents a computational modeling framework that solves multidisciplinary systems and computes their derivatives in a semi-automated fashion. This framework is built upon a new mathematical formulation developed in this thesis that expresses any computational model as a system of algebraic equations and unifies all methods for computing derivatives using a single equation. The framework is applied to two engineering problems: the optimization of a nanosatellite with 7 disciplines and over 25,000 design variables; and simultaneous allocation and mission optimization for commercial aircraft involving 330 design variables, 12 of which are integer variables handled using the branch-and-bound method. In both cases, the framework makes large-scale optimization possible by reducing the implementation effort and code complexity. The second half of this thesis presents a differentiable parametrization of aircraft geometries and structures for high-fidelity shape optimization. Existing geometry parametrizations are not differentiable, or they are limited in the types of shape changes they allow. This is addressed by a novel parametrization that smoothly interpolates aircraft

  11. A jazz-based approach for optimal setting of pressure reducing valves in water distribution networks

    NASA Astrophysics Data System (ADS)

    De Paola, Francesco; Galdiero, Enzo; Giugni, Maurizio

    2016-05-01

    This study presents a model for valve setting in water distribution networks (WDNs), with the aim of reducing the level of leakage. The approach is based on the harmony search (HS) optimization algorithm. The HS mimics a jazz improvisation process able to find the best solutions, in this case corresponding to valve settings in a WDN. The model also interfaces with the improved version of a popular hydraulic simulator, EPANET 2.0, to check the hydraulic constraints and to evaluate the performances of the solutions. Penalties are introduced in the objective function in case of violation of the hydraulic constraints. The model is applied to two case studies, and the obtained results in terms of pressure reductions are comparable with those of competitive metaheuristic algorithms (e.g. genetic algorithms). The results demonstrate the suitability of the HS algorithm for water network management and optimization.
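
    A compact harmony-search loop of this kind is sketched below, with a stand-in quadratic objective in place of the EPANET-evaluated leakage so that it runs on its own; the memory size and the HMCR/PAR/bandwidth parameters are typical illustrative values, not those of the paper.

        import random

        random.seed(4)
        HMS, HMCR, PAR, BW = 10, 0.9, 0.3, 0.5  # memory size, consider/adjust rates
        LB, UB, NV = 0.0, 10.0, 3               # bounds and number of valves

        def leakage(x):                         # toy surrogate for the simulator
            return sum((xi - t) ** 2 for xi, t in zip(x, (2.5, 6.0, 4.0)))

        memory = [[random.uniform(LB, UB) for _ in range(NV)] for _ in range(HMS)]
        for it in range(2000):
            new = []
            for v in range(NV):
                if random.random() < HMCR:      # draw from harmony memory
                    val = random.choice(memory)[v]
                    if random.random() < PAR:   # pitch adjustment
                        val += random.uniform(-BW, BW)
                else:                           # fresh random improvisation
                    val = random.uniform(LB, UB)
                new.append(min(max(val, LB), UB))
            worst = max(range(HMS), key=lambda i: leakage(memory[i]))
            if leakage(new) < leakage(memory[worst]):
                memory[worst] = new             # replace the worst harmony
        best = min(memory, key=leakage)
        print(best, leakage(best))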

  12. Optimization of Peltier current lead for applied superconducting systems with optimum combination of cryo-stages

    NASA Astrophysics Data System (ADS)

    Kawahara, Toshio; Emoto, Masahiko; Watanabe, Hirofumi; Hamabe, Makoto; Sun, Jian; Ivanov, Yury; Yamaguchi, Satarou

    2012-06-01

    Reducing the electric power consumption of the cryo-cooler under the working conditions of applied superconducting systems is important, as superconductivity can only be maintained at low temperature and the power required for cooling determines the efficiency of the systems employed. The use of Peltier current leads (PCLs) is one key way to reduce the heat load on the terminals of such systems. On the other hand, the performance of cryo-coolers generally increases as the temperature increases, given the higher Carnot efficiency. Therefore, combination with suitable mid-stage temperatures is one possible approach, since the thermal anchor can enhance the performance of the system by reducing the electric power consumption of the cryo-coolers. In this paper, we discuss this possibility utilizing an advanced configuration of PCL with a commercially available high-temperature cooler. An enhancement of the performance of over 50% is estimated.

  13. A fast process development flow by applying design technology co-optimization

    NASA Astrophysics Data System (ADS)

    Chen, Yi-Chieh; Yeh, Shin-Shing; Ou, Tsong-Hua; Lin, Hung-Yu; Mai, Yung-Ching; Lin, Lawrence; Lai, Jun-Cheng; Lai, Ya Chieh; Xu, Wei; Hurat, Philippe

    2017-03-01

    Beyond the 40 nm technology node, the number of pattern weak points and hotspot types increases dramatically. The typical patterns used for lithography verification suffer from huge turn-around time (TAT) in handling the design complexity. Therefore, in order to speed up process development and increase pattern variety, accurate design guidelines and realistic design combinations are required. This paper presents a flow for creating a cell-based layout, a lite realistic design, to identify early the problematic patterns which will negatively affect yield. A new random layout generating method, Design Technology Co-Optimization Pattern Generator (DTCO-PG), is reported in this paper to create cell-based designs. DTCO-PG also characterizes randomness and fuzziness, so that it is able to build up a machine learning scheme whose model can be trained on previous results and then generate patterns never seen in a lite design. This methodology not only increases pattern diversity but also finds potential hotspots preliminarily. This paper also demonstrates an integrated flow from DTCO pattern generation to layout modification. Optical proximity correction (OPC) and lithographic simulation are then applied to the DTCO-PG design database to detect hotspots, which can then be fixed automatically through the procedure or handled manually. This flow gives process evolution a faster development cycle time, more complex pattern designs, a higher probability of finding potential hotspots at an early stage, and a more holistic yield-ramping operation.

  14. Improved Broadband Liner Optimization Applied to the Advanced Noise Control Fan

    NASA Technical Reports Server (NTRS)

    Nark, Douglas M.; Jones, Michael G.; Sutliff, Daniel L.; Ayle, Earl; Ichihashi, Fumitaka

    2014-01-01

    The broadband component of fan noise has grown in relevance with the utilization of increased bypass ratio and advanced fan designs. Thus, while the attenuation of fan tones remains paramount, the ability to simultaneously reduce broadband fan noise levels has become more desirable. This paper describes improvements to a previously established broadband acoustic liner optimization process using the Advanced Noise Control Fan rig as a demonstrator. Specifically, in-duct attenuation predictions with a statistical source model are used to obtain optimum impedance spectra over the conditions of interest. The predicted optimum impedance information is then used with acoustic liner modeling tools to design liners aimed at producing impedance spectra that most closely match the predicted optimum values. Design selection is based on an acceptance criterion that provides the ability to apply increased weighting to specific frequencies and/or operating conditions. Constant-depth, double-degree of freedom and variable-depth, multi-degree of freedom designs are carried through design, fabrication, and testing to validate the efficacy of the design process. Results illustrate the value of the design process in concurrently evaluating the relative costs/benefits of these liner designs. This study also provides an application for demonstrating the integrated use of duct acoustic propagation/radiation and liner modeling tools in the design and evaluation of novel broadband liner concepts for complex engine configurations.

  15. Applying and testing the conveniently optimized enzyme mismatch cleavage method to clinical DNA diagnosis.

    PubMed

    Niida, Yo; Kuroda, Mondo; Mitani, Yusuke; Okumura, Akiko; Yokoi, Ayano

    2012-11-01

    Establishing a simple and effective mutation screening method is one of the most compelling problems in applying genetic diagnosis to clinical use. Because there is no reliable and inexpensive screening system, amplifying by PCR and performing direct sequencing of every coding exon is the gold-standard strategy even today. However, this approach is expensive and time consuming, especially when the gene size or sample number is large. Previously, we developed CEL nuclease-mediated heteroduplex incision with polyacrylamide gel electrophoresis and silver staining (CHIPS) as an ideal simple mutation screening system constructed with only conventional apparatuses and commercially available reagents. In this study, we evaluated the utility of CHIPS technology for genetic diagnosis in clinical practice by applying this system to screening for COL2A1, WRN and RPS6KA3 mutations in newly diagnosed patients with Stickler syndrome (autosomal dominant inheritance), Werner syndrome (autosomal recessive inheritance) and Coffin-Lowry syndrome (X-linked inheritance), respectively. In all three genes, CHIPS detected all DNA variations, including disease-causative mutations, within a day. Direct sequencing of all coding exons of these genes confirmed 100% sensitivity and specificity. We demonstrate the high sensitivity, high cost performance and reliability of this simple system, which is compatible with all inheritance modes. Because of its low-technology requirements, CHIPS is ready to use and can potentially be disseminated to any laboratory in the world.

  16. Reconstructing Networks from Profit Sequences in Evolutionary Games via a Multiobjective Optimization Approach with Lasso Initialization

    NASA Astrophysics Data System (ADS)

    Wu, Kai; Liu, Jing; Wang, Shuai

    2016-11-01

    Evolutionary games (EG) model a common type of interactions in various complex, networked, natural and social systems. Given such a system with only profit sequences being available, reconstructing the interacting structure of EG networks is fundamental to understand and control its collective dynamics. Existing approaches used to handle this problem, such as the lasso, a convex optimization method, need a user-defined constant to control the tradeoff between the natural sparsity of networks and measurement error (the difference between observed data and simulated data). However, a shortcoming of these approaches is that it is not easy to determine these key parameters which can maximize the performance. In contrast to these approaches, we first model the EG network reconstruction problem as a multiobjective optimization problem (MOP), and then develop a framework which involves multiobjective evolutionary algorithm (MOEA), followed by solution selection based on knee regions, termed as MOEANet, to solve this MOP. We also design an effective initialization operator based on the lasso for MOEA. We apply the proposed method to reconstruct various types of synthetic and real-world networks, and the results show that our approach is effective to avoid the above parameter selecting problem and can reconstruct EG networks with high accuracy.

  17. Reconstructing Networks from Profit Sequences in Evolutionary Games via a Multiobjective Optimization Approach with Lasso Initialization

    PubMed Central

    Wu, Kai; Liu, Jing; Wang, Shuai

    2016-01-01

    Evolutionary games (EG) model a common type of interactions in various complex, networked, natural and social systems. Given such a system with only profit sequences being available, reconstructing the interacting structure of EG networks is fundamental to understand and control its collective dynamics. Existing approaches used to handle this problem, such as the lasso, a convex optimization method, need a user-defined constant to control the tradeoff between the natural sparsity of networks and measurement error (the difference between observed data and simulated data). However, a shortcoming of these approaches is that it is not easy to determine these key parameters which can maximize the performance. In contrast to these approaches, we first model the EG network reconstruction problem as a multiobjective optimization problem (MOP), and then develop a framework which involves multiobjective evolutionary algorithm (MOEA), followed by solution selection based on knee regions, termed as MOEANet, to solve this MOP. We also design an effective initialization operator based on the lasso for MOEA. We apply the proposed method to reconstruct various types of synthetic and real-world networks, and the results show that our approach is effective to avoid the above parameter selecting problem and can reconstruct EG networks with high accuracy. PMID:27886244
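
    The lasso baseline that both records refer to is easy to state concretely: regress one node's profit sequence on all candidate neighbors' strategy sequences and read nonzero coefficients as links, with the regularization weight alpha being exactly the hand-tuned tradeoff constant the multiobjective formulation avoids. Everything below is synthetic.

        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(5)
        n_nodes, n_rounds = 20, 60
        true_links = rng.random(n_nodes) < 0.2      # node 0's true neighbors
        true_links[0] = False                       # no self-link
        strategies = rng.integers(0, 2, size=(n_rounds, n_nodes)).astype(float)
        profits = strategies @ (1.5 * true_links) + 0.05 * rng.normal(size=n_rounds)

        model = Lasso(alpha=0.05)                   # alpha = sparsity/error tradeoff
        model.fit(strategies, profits)              # sparse linear regression
        recovered = np.flatnonzero(np.abs(model.coef_) > 1e-3)
        print("true:", np.flatnonzero(true_links), "recovered:", recovered)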

  18. Optimizing neural networks for river flow forecasting - Evolutionary Computation methods versus the Levenberg-Marquardt approach

    NASA Astrophysics Data System (ADS)

    Piotrowski, Adam P.; Napiorkowski, Jarosław J.

    2011-09-01

    The study compares several Evolutionary Computation-based algorithms with the Levenberg-Marquardt approach. The Levenberg-Marquardt optimization must be considered the most efficient one due to its speed. Its drawback of possibly becoming stuck in a poor local optimum can be overcome by applying a multi-start approach.
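
    A hedged sketch of the multi-start remedy mentioned above, using SciPy's Levenberg-Marquardt solver on a synthetic exponential-decay "flow" model; the model, data, and number of restarts are illustrative assumptions, not taken from the paper.

        # Multi-start Levenberg-Marquardt: restart the local solver from several
        # random initial points and keep the best local optimum found.
        import numpy as np
        from scipy.optimize import least_squares

        rng = np.random.default_rng(1)
        t = np.linspace(0.0, 10.0, 50)
        flow = 2.0 * np.exp(-0.3 * t) + 0.1 * rng.normal(size=t.size)  # synthetic "flow" data

        def residuals(p):
            a, b = p
            return a * np.exp(-b * t) - flow

        best = None
        for _ in range(10):                            # restarts to escape poor local optima
            p0 = rng.uniform(0.1, 5.0, size=2)
            sol = least_squares(residuals, p0, method="lm")
            if best is None or sol.cost < best.cost:
                best = sol
        print("best parameters:", best.x, "cost:", best.cost)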

  19. Three-dimensional electrical impedance tomography: a topology optimization approach.

    PubMed

    Mello, Luís Augusto Motta; de Lima, Cícero Ribeiro; Amato, Marcelo Britto Passos; Lima, Raul Gonzalez; Silva, Emílio Carlos Nelli

    2008-02-01

    Electrical impedance tomography is a technique to estimate the impedance distribution within a domain, based on measurements on its boundary. In other words, given the mathematical model of the domain, its geometry and boundary conditions, a nonlinear inverse problem of estimating the electric impedance distribution can be solved. Several impedance estimation algorithms have been proposed to solve this problem. In this paper, we present a three-dimensional algorithm, based on the topology optimization method, as an alternative. A sequence of linear programming problems, allowing for constraints, is solved with this method. In each iteration, the finite element method provides the electric potential field within the model of the domain. An electrode model is also proposed (thus increasing the accuracy of the finite element results). The algorithm is tested using both numerically simulated and experimental data, and absolute resistivity values are obtained. These results, corresponding to phantoms with two different conductive materials, exhibit relatively well-defined boundaries between the materials, and show that this is a practical and potentially useful technique for monitoring lung aeration, including the possibility of imaging a pneumothorax.

  20. A Pareto frontier intersection-based approach for efficient multiobjective optimization of competing concept alternatives

    NASA Astrophysics Data System (ADS)

    Rousis, Damon A.

    The expected growth of civil aviation over the next twenty years places significant emphasis on revolutionary technology development aimed at mitigating the environmental impact of commercial aircraft. As the number of technology alternatives grows along with model complexity, current methods for Pareto finding and multiobjective optimization quickly become computationally infeasible. Coupled with the large uncertainty in the early stages of design, optimal designs are sought while avoiding the computational burden of excessive function calls when a single design change or technology assumption could alter the results. This motivates the need for a robust and efficient evaluation methodology for quantitative assessment of competing concepts. This research presents a novel approach that combines Bayesian adaptive sampling with surrogate-based optimization to efficiently place designs near Pareto frontier intersections of competing concepts. Efficiency is increased over sequential multiobjective optimization by focusing computational resources specifically on the location in the design space where optimality shifts between concepts. At the intersection of Pareto frontiers, the selection decisions are most sensitive to the preferences placed on the objectives, and small perturbations can lead to vastly different final designs. These concepts are incorporated into an evaluation methodology that ultimately reduces the number of failed cases, infeasible designs, and Pareto-dominated solutions across all concepts. A set of algebraic samples along with a truss design problem are presented as canonical examples for the proposed approach. The methodology is applied to the design of ultra-high bypass ratio turbofans to guide NASA's technology development efforts for future aircraft. Geared-drive and variable geometry bypass nozzle concepts are explored as enablers for increased bypass ratio and potential alternatives over traditional configurations. The method is shown to improve
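
    As a small illustration of the building block behind Pareto frontier intersections, the following sketch (synthetic objectives, not the dissertation's models) extracts the nondominated set of each of two competing concepts; comparing the two fronts locates the region where concept selection flips.

        # Pareto-front extraction for two concepts; both objectives are minimized.
        import numpy as np

        def pareto_mask(F):
            """Boolean mask of nondominated rows of F (all objectives minimized)."""
            n = F.shape[0]
            mask = np.ones(n, dtype=bool)
            for i in range(n):
                dominated = (F <= F[i]).all(axis=1) & (F < F[i]).any(axis=1)
                if dominated.any():
                    mask[i] = False
            return mask

        rng = np.random.default_rng(2)
        concept_a = rng.random((200, 2)) + [0.0, 0.2]   # concept A: stronger in objective 1
        concept_b = rng.random((200, 2)) + [0.2, 0.0]   # concept B: stronger in objective 2
        front_a = concept_a[pareto_mask(concept_a)]
        front_b = concept_b[pareto_mask(concept_b)]
        print(len(front_a), "points on front A;", len(front_b), "points on front B")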

  1. Optimal shape design of aerodynamic configurations: A Newton-Krylov approach

    NASA Astrophysics Data System (ADS)

    Nemec, Marian

    Optimal shape design of aerodynamic configurations is a challenging problem due to the nonlinear effects of complex flow features such as shock waves, boundary layers, and separation. A Newton-Krylov algorithm is presented for aerodynamic design using gradient-based numerical optimization. The flow is governed by the two-dimensional compressible Navier-Stokes equations in conjunction with a one-equation turbulence model, which are discretized on multi-block structured grids. The discrete-adjoint method is applied to compute the objective function gradient. The adjoint equation is solved using the preconditioned generalized minimal residual (GMRES) method. A novel preconditioner is introduced, and together with a complete differentiation of the discretized Navier-Stokes and turbulence model equations, this results in an accurate and efficient evaluation of the gradient. The gradient is obtained in just one-fifth to one-half of the time required to converge a flow solution. Furthermore, fast flow solutions are obtained using the same preconditioned GMRES method in conjunction with an inexact-Newton approach. Optimization constraints are enforced through a penalty formulation, and the resulting unconstrained problem is solved via a quasi-Newton method. The performance of the new algorithm is demonstrated for several design examples that include lift enhancement, where the optimal position of a flap is determined within a high-lift configuration, lift-constrained drag minimization at multiple transonic operating points, and the computation of a Pareto front based on competing objectives. In all examples, the gradient is reduced by several orders of magnitude, indicating that a local minimum has been obtained. Overall, the results show that the new algorithm is among the fastest presently available for aerodynamic shape optimization and provides an effective approach for practical aerodynamic design.
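
    The adjoint step described above can be imitated in miniature: the sketch below solves a stand-in sparse linear system of the form A^T psi = -dJ/dU with ILU-preconditioned GMRES from SciPy. The operator and right-hand side are random placeholders, not a discretized Navier-Stokes Jacobian.

        # ILU-preconditioned GMRES solve of a stand-in discrete-adjoint system.
        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import gmres, spilu, LinearOperator

        rng = np.random.default_rng(3)
        n = 500
        A = sp.eye(n) * 4.0 + sp.random(n, n, density=0.01, random_state=3)  # stand-in Jacobian
        rhs = -rng.normal(size=n)                       # stand-in objective sensitivity

        ilu = spilu(A.T.tocsc())                        # ILU preconditioner for the adjoint matrix
        M = LinearOperator((n, n), ilu.solve)
        psi, info = gmres(A.T.tocsc(), rhs, M=M)
        status = "GMRES converged" if info == 0 else f"info={info}"
        print(status, "; ||psi|| =", np.linalg.norm(psi))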

  2. A multiobjective optimization approach to the operation and investment of the national energy and transportation systems

    NASA Astrophysics Data System (ADS)

    Ibanez, Eduardo

    Most U.S. energy usage is for electricity production and vehicle transportation, two interdependent infrastructures. The strength and number of the interdependencies will increase rapidly as hybrid electric transportation systems, including plug-in hybrid electric vehicles and hybrid electric trains, become more prominent. Several new energy supply technologies are reaching maturity, accelerated by public concern over global warming. The National Energy and Transportation Planning Tool (NETPLAN) is the implementation of the long-term investment and operation model for the transportation and energy networks. An evolutionary approach with an underlying fast linear optimization is used to determine the solutions with the best investment portfolios in terms of cost, resiliency and sustainability, i.e., the solutions that form the Pareto front. The popular NSGA-II algorithm is used as the base for the multiobjective optimization, and metrics are developed to evaluate the energy and transportation portfolios. An integrated approach to resiliency is presented, allowing the evaluation of high-consequence events such as hurricanes or widespread blackouts. A scheme to parallelize the multiobjective solver is presented, along with a decomposition method for the cost minimization program. The modular and data-driven design of the software is also presented. The modeling tool is applied in a numerical example to optimize the national investment in energy and transportation over the next 40 years.
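
    A minimal sketch of the fast nondominated sort at the heart of NSGA-II, applied to hypothetical (cost, resiliency, sustainability) portfolio scores; the data and the conversion of all objectives to minimization form are assumptions for illustration.

        # Fast nondominated sort: rank candidate portfolios into Pareto fronts.
        import numpy as np

        def fast_nondominated_sort(F):
            """Rank rows of F into Pareto fronts; returns a list of index arrays."""
            n = F.shape[0]
            dominates = lambda p, q: (F[p] <= F[q]).all() and (F[p] < F[q]).any()
            S = [[] for _ in range(n)]           # solutions dominated by p
            counts = np.zeros(n, dtype=int)      # how many solutions dominate p
            for p in range(n):
                for q in range(n):
                    if dominates(p, q):
                        S[p].append(q)
                    elif dominates(q, p):
                        counts[p] += 1
            fronts, current = [], np.flatnonzero(counts == 0)
            while current.size:
                fronts.append(current)
                counts[current] = -1             # mark as assigned
                for p in current:
                    counts[S[p]] -= 1            # release solutions dominated only by this front
                current = np.flatnonzero(counts == 0)
            return fronts

        rng = np.random.default_rng(4)
        portfolios = rng.random((50, 3))         # 50 candidate investment portfolios
        fronts = fast_nondominated_sort(portfolios)
        print([len(f) for f in fronts], "portfolios per front")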

  3. Numerical and Experimental Approach for the Optimal Design of a Dual Plate Under Ballistic Impact

    NASA Astrophysics Data System (ADS)

    Yoo, Jeonghoon; Chung, Dong-Teak; Park, Myung Soo

    To predict the behavior of a dual plate composed of 5052 aluminum and 1002 cold-rolled steel under ballistic impact, numerical and experimental approaches are attempted. For an accurate numerical simulation of the impact phenomena, the appropriate selection of key parameter values based on numerical or experimental tests is critical. This study focuses not only on the optimization technique using numerical simulation but also on the numerical and experimental procedures for obtaining the required parameter values for the simulation. The Johnson-Cook model is used to simulate the mechanical behavior, and simplified experimental and numerical approaches are performed to obtain the material properties of the model. An element erosion scheme for robust simulation of the ballistic impact problem is applied by adjusting the element erosion criteria of each material based on numerical and experimental results. Adequate mesh size and aspect ratio are chosen based on parametric studies. Plastic energy is suggested as a response representing the strength of the plate for optimization under dynamic loading. The optimized thickness of the dual plate is obtained to resist ballistic impact without penetration as well as to minimize the total weight.
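
    The Johnson-Cook flow-stress model named above has the standard form sigma = (A + B*eps^n)(1 + C*ln(eps_rate/eps_rate0))(1 - T*^m). The sketch below evaluates it with placeholder constants; the calibrated values for 5052 aluminum and 1002 cold-rolled steel from the study are not reproduced here.

        # Johnson-Cook flow stress with illustrative placeholder constants.
        import numpy as np

        def johnson_cook_stress(eps, eps_rate, T, A=167e6, B=596e6, n=0.55,
                                C=0.015, m=1.0, eps_rate0=1.0,
                                T_room=293.0, T_melt=900.0):
            """Flow stress (Pa) for plastic strain eps, strain rate (1/s), temperature (K)."""
            T_star = (T - T_room) / (T_melt - T_room)    # homologous temperature
            return ((A + B * eps**n)
                    * (1.0 + C * np.log(np.maximum(eps_rate / eps_rate0, 1e-12)))
                    * (1.0 - np.clip(T_star, 0.0, 1.0)**m))

        strains = np.linspace(0.0, 0.5, 6)
        print(johnson_cook_stress(strains, eps_rate=1e3, T=400.0) / 1e6, "MPa")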

  4. Optimal management of substrates in anaerobic co-digestion: An ant colony algorithm approach.

    PubMed

    Verdaguer, Marta; Molinos-Senante, María; Poch, Manel

    2016-04-01

    Sewage sludge (SWS) is inevitably produced in urban wastewater treatment plants (WWTPs). The treatment of SWS on site at small WWTPs is not economical; therefore, the SWS is typically transported to an alternative SWS treatment center. There is increased interest in the use of anaerobic digestion (AnD) with co-digestion as an SWS treatment alternative. Although the availability of different co-substrates has been ignored in most previous studies, it is an essential issue for the optimization of AnD co-digestion. In a pioneering approach, this paper applies an Ant-Colony-Optimization (ACO) algorithm that maximizes the generation of biogas through AnD co-digestion in order to optimize the discharge of organic waste from different waste sources in real time. An empirical application is developed based on a virtual case study involving organic waste from urban WWTPs and agrifood activities. The results illustrate the dominant role of toxicity levels in selecting contributions to the AnD input. The methodology and case study proposed in this paper demonstrate the usefulness of the ACO approach in supporting a decision process that improves the sustainability of organic waste and SWS management.
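
    A toy ant-colony sketch of the substrate-selection idea (not the paper's algorithm): ants propose subsets of waste sources biased by pheromone, overly toxic mixes are discarded as infeasible, and pheromone reinforces high-biogas subsets. All yields, toxicity levels, and limits are invented numbers.

        # Pheromone-guided subset construction with a toxicity feasibility check.
        import numpy as np

        rng = np.random.default_rng(5)
        biogas = rng.uniform(10, 100, size=12)     # biogas potential per waste source
        toxicity = rng.uniform(0, 1, size=12)      # toxicity contribution per source
        tox_limit = 3.0
        pheromone = np.ones(12)

        best_subset, best_gas = None, -np.inf
        for _ in range(100):                       # iterations
            for _ in range(20):                    # ants per iteration
                prob = pheromone / pheromone.sum()
                subset = rng.random(12) < 6 * prob  # pheromone-biased inclusion odds
                if toxicity[subset].sum() > tox_limit:
                    continue                       # toxic mix: infeasible, no reinforcement
                gas = biogas[subset].sum()
                if gas > best_gas:
                    best_subset, best_gas = subset, gas
                pheromone[subset] += 0.01 * gas / biogas.sum()   # pheromone deposit
            pheromone *= 0.95                      # evaporation
        print("best biogas:", round(best_gas, 1), "from sources", np.flatnonzero(best_subset))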

  5. Application of an Optimal Tuner Selection Approach for On-Board Self-Tuning Engine Models

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Armstrong, Jeffrey B.; Garg, Sanjay

    2012-01-01

    An enhanced design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented in this paper. It specifically addresses the under-determined estimation problem, in which there are more unknown parameters than available sensor measurements. This work builds upon an existing technique for systematically selecting a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. While the existing technique was optimized for open-loop engine operation at a fixed design point, in this paper an alternative formulation is presented that enables the technique to be optimized for an engine operating under closed-loop control throughout the flight envelope. The theoretical Kalman filter mean squared estimation error at a steady-state closed-loop operating point is derived, and the tuner selection approach applied to minimize this error is discussed. A technique for constructing a globally optimal tuning parameter vector, which enables full-envelope application of the technology, is also presented, along with design steps for adjusting the dynamic response of the Kalman filter state estimates. Results from the application of the technique to linear and nonlinear aircraft engine simulations are presented and compared to the conventional approach of tuner selection. The new methodology is shown to yield a significant improvement in on-line Kalman filter estimation accuracy.

  6. Optimized Structure of the Traffic Flow Forecasting Model With a Deep Learning Approach.

    PubMed

    Yang, Hao-Fan; Dillon, Tharam S; Chen, Yi-Ping Phoebe

    2016-07-20

    Forecasting accuracy is an important issue for successful intelligent traffic management, especially in the domain of traffic efficiency and congestion reduction. The dawning of the big data era brings opportunities to greatly improve prediction accuracy. In this paper, we propose a novel model, the stacked autoencoder Levenberg-Marquardt model, a deep neural network architecture aimed at improving forecasting accuracy. The proposed model is designed using the Taguchi method to develop an optimized structure and to learn traffic flow features through layer-by-layer feature granulation with a greedy layerwise unsupervised learning algorithm. It is applied to real-world data collected from the M6 freeway in the U.K. and is compared with three existing traffic predictors. To the best of our knowledge, this is the first time that an optimized structure of a traffic flow forecasting model with a deep learning approach has been presented. The evaluation results demonstrate that the proposed model with an optimized structure has superior performance in traffic flow forecasting.

  7. Optimal air quality policies and health: a multi-objective nonlinear approach.

    PubMed

    Relvas, Helder; Miranda, Ana Isabel; Carnevale, Claudio; Maffeis, Giuseppe; Turrini, Enrico; Volta, Marialuisa

    2017-05-01

    The use of modelling tools to support decision-makers in planning air quality policies is now quite widespread in Europe. In this paper, the Regional Integrated Assessment Tool (RIAT+), which was designed to support policy-makers' decisions on optimal emission reduction measures to improve air quality at minimum cost, is applied to the Porto Urban Area (Portugal). In addition to technological measures, some local measures were included in the optimization process. Case study results are presented for a multi-objective approach focused on both NO2 and PM10 control measures, assuming equivalent importance in the optimization process. The optimal set of air quality measures is capable of simultaneously reducing the annual average concentrations of PM10 and NO2 by 1.7 and 1.0 μg/m³, respectively. This paper illustrates how the tool can be used to prioritize policy objectives and help make informed decisions about reducing air pollution and improving public health.

  8. A market-based optimization approach to sensor and resource management

    NASA Astrophysics Data System (ADS)

    Schrage, Dan; Farnham, Christopher; Gonsalves, Paul G.

    2006-05-01

    Dynamic resource allocation for sensor management is a problem that demands solutions beyond traditional approaches to optimization. Market-based optimization applies solutions from economic theory, particularly game theory, to the resource allocation problem by creating an artificial market for sensor information and computational resources. Intelligent agents are the buyers and sellers in this market, and they represent all the elements of the sensor network, from sensors to sensor platforms to computational resources. These agents interact based on a negotiation mechanism that determines their bidding strategies. This negotiation mechanism and the agents' bidding strategies are based on game theory, and they are designed so that the aggregate result of the multi-agent negotiation process is a market in competitive equilibrium, which guarantees an optimal allocation of resources throughout the sensor network. This paper makes two contributions to the field of market-based optimization: First, we develop a market protocol to handle heterogeneous goods in a dynamic setting. Second, we develop arbitrage agents to improve the efficiency in the market in light of its dynamic nature.

  9. Optimal design of implants for magnetically mediated hyperthermia: A wireless power transfer approach

    NASA Astrophysics Data System (ADS)

    Lang, Hans-Dieter; Sarris, Costas D.

    2017-09-01

    In magnetically mediated hyperthermia (MMH), an externally applied alternating magnetic field interacts with a mediator (such as a magnetic nanoparticle or an implant) inside the body to heat up the tissue in its proximity. Producing heat via induced currents in this manner is strikingly similar to wireless power transfer (WPT) for implants, where power is transferred from a transmitter outside of the body to an implanted receiver, in most cases via magnetic fields as well. Leveraging this analogy, a systematic method to design MMH implants for optimal heating efficiency is introduced, akin to the design of WPT systems for optimal power transfer efficiency. This paper provides analytical formulas for the achievable heating efficiency bounds as well as the optimal operating frequency and the implant material. Multiphysics simulations validate the approach and further demonstrate that optimization with respect to maximum heating efficiency is accompanied by minimizing heat delivery to healthy tissue. This is a property that is highly desirable when considering MMH as a key component or complementary method of cancer treatment and other applications.

  10. Molecular tailoring approach for geometry optimization of large molecules: energy evaluation and parallelization strategies.

    PubMed

    Ganesh, V; Dongare, Rameshwar K; Balanarayan, P; Gadre, Shridhar R

    2006-09-14

    A linear-scaling scheme for estimating the electronic energy, gradients, and Hessian of a large molecule at ab initio level of theory based on fragment set cardinality is presented. With this proposition, a general, cardinality-guided molecular tailoring approach (CG-MTA) for ab initio geometry optimization of large molecules is implemented. The method employs energy gradients extracted from fragment wave functions, enabling computations otherwise impractical on PC hardware. Further, the method is readily amenable to large scale coarse-grain parallelization with minimal communication among nodes, resulting in a near-linear speedup. CG-MTA is applied for density-functional-theory-based geometry optimization of a variety of molecules including alpha-tocopherol, taxol, gamma-cyclodextrin, and two conformations of polyglycine. In the tests performed, energy and gradient estimates obtained from CG-MTA during optimization runs show an excellent agreement with those obtained from actual computation. Accuracy of the Hessian obtained employing CG-MTA provides good hope for the application of Hessian-based geometry optimization to large molecules.

  11. Molecular tailoring approach for geometry optimization of large molecules: Energy evaluation and parallelization strategies

    NASA Astrophysics Data System (ADS)

    Ganesh, V.; Dongare, Rameshwar K.; Balanarayan, P.; Gadre, Shridhar R.

    2006-09-01

    A linear-scaling scheme for estimating the electronic energy, gradients, and Hessian of a large molecule at ab initio level of theory based on fragment set cardinality is presented. With this proposition, a general, cardinality-guided molecular tailoring approach (CG-MTA) for ab initio geometry optimization of large molecules is implemented. The method employs energy gradients extracted from fragment wave functions, enabling computations otherwise impractical on PC hardware. Further, the method is readily amenable to large scale coarse-grain parallelization with minimal communication among nodes, resulting in a near-linear speedup. CG-MTA is applied for density-functional-theory-based geometry optimization of a variety of molecules including α-tocopherol, taxol, γ-cyclodextrin, and two conformations of polyglycine. In the tests performed, energy and gradient estimates obtained from CG-MTA during optimization runs show an excellent agreement with those obtained from actual computation. Accuracy of the Hessian obtained employing CG-MTA provides good hope for the application of Hessian-based geometry optimization to large molecules.

  12. On a New Optimization Approach for the Hydroforming of Defects-Free Tubular Metallic Parts

    NASA Astrophysics Data System (ADS)

    Caseiro, J. F.; Valente, R. A. F.; Andrade-Campos, A.; Jorge, R. M. Natal

    2011-05-01

    In the hydroforming of tubular metallic components, process parameters (internal pressure, axial feed and counter-punch position) must be carefully set in order to avoid defects in the final part. If, on one hand, excessive pressure may lead to thinning and bursting during forming, on the other hand insufficient pressure may lead to inadequate filling of the die. Similarly, excessive axial feeding may lead to the formation of wrinkles, whilst inadequate feeding may cause thinning and, consequently, bursting. These apparently contradictory targets are virtually impossible to achieve without trial-and-error procedures in industry, unless optimization approaches are formulated and implemented for complex parts. In this sense, an optimization algorithm based on differential evolutionary techniques is presented here, capable of determining the adequate process parameters for the hydroforming of metallic tubular components of complex geometries. The Hybrid Differential Evolution Particle Swarm Optimization (HDEPSO) algorithm, combining the advantages of a number of well-known distinct optimization strategies, works along with general-purpose implicit finite element software, and is based on the definition of wrinkling and thinning indicators. If defects are detected, the algorithm automatically corrects the process parameters and new numerical simulations are performed in real time. In the end, the algorithm proved to be robust and computationally cost-effective, providing a valid design tool for the forming of defect-free components in industry [1].
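
    A minimal differential-evolution loop of the kind hybridized in HDEPSO, shown here on an invented penalty that mimics competing "wrinkling" and "thinning" indicators of two normalized process parameters; the indicator functions are illustrative assumptions, not the paper's finite-element-based indicators.

        # Differential evolution (rand/1/bin-style) on a toy defect penalty.
        import numpy as np

        def defect_penalty(x):
            p, f = x                                     # normalized pressure and axial feed
            wrinkling = np.maximum(f - 0.8 * p, 0)       # too much feed for the pressure
            thinning = np.maximum(p - 0.6 - 0.5 * f, 0)  # too much pressure for the feed
            return wrinkling**2 + thinning**2 + 0.1 * (p - 0.5)**2

        rng = np.random.default_rng(6)
        pop = rng.random((30, 2))                        # population in [0, 1]^2
        cost = np.array([defect_penalty(x) for x in pop])
        F_w, CR = 0.8, 0.9                               # DE weight and crossover rate
        for _ in range(200):
            for i in range(len(pop)):
                a, b, c = pop[rng.choice(len(pop), 3, replace=False)]
                trial = np.where(rng.random(2) < CR, a + F_w * (b - c), pop[i])
                trial = np.clip(trial, 0.0, 1.0)
                if (t_cost := defect_penalty(trial)) < cost[i]:
                    pop[i], cost[i] = trial, t_cost      # greedy selection
        print("best parameters:", pop[cost.argmin()], "penalty:", cost.min())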

  13. Optimization of preparation of chitosan-coated iron oxide nanoparticles for biomedical applications by chemometrics approaches

    NASA Astrophysics Data System (ADS)

    Honary, Soheila; Ebrahimi, Pouneh; Rad, Hossein Asgari; Asgari, Mahsa

    2013-08-01

    Functionalized magnetic nanoparticles are used in several biomedical applications, such as drug delivery, magnetic cell separation, and magnetic resonance imaging. The size and surface properties of iron oxide nanoparticles are two important factors that can dramatically affect nanoparticle efficiency as well as stability. In this study, a chemometrics approach was applied to optimize the coating process of iron oxide nanoparticles. To optimize the size of the nanoparticles, the effect of two experimental parameters on size was investigated by means of multivariate analysis. The factors considered were the chitosan molecular weight and the chitosan-to-tripolyphosphate concentration ratio. The experiments were performed according to a face-centered cube central composite response surface design. A second-order regression model was obtained which is characterized by both descriptive and predictive ability. The method was optimized with respect to the percentage increase in Z-average diameter after coating as the response. It can be concluded that experimental design provides a suitable means of optimizing and testing the robustness of the iron oxide nanoparticle coating method.

  14. Real-time, large scale optimization of water network systems using a subdomain approach.

    SciTech Connect

    van Bloemen Waanders, Bart Gustaaf; Biegler, Lorenz T.; Laird, Carl Damon

    2005-03-01

    Certain classes of dynamic network problems can be modeled by a set of hyperbolic partial differential equations describing behavior along network edges and a set of differential and algebraic equations describing behavior at network nodes. In this paper, we demonstrate real-time performance for optimization problems in drinking water networks. While optimization problems subject to partial differential, differential, and algebraic equations can be solved with a variety of techniques, efficient solutions are difficult for large network problems with many degrees of freedom and variable bounds. Sequential optimization strategies can be inefficient for this problem due to the high cost of computing derivatives with respect to many degrees of freedom. Simultaneous techniques can be more efficient, but are difficult because of the need to solve a large nonlinear program, one that may be too large for current solvers. This study describes a dynamic optimization formulation for estimating contaminant sources in drinking water networks, given concentration measurements at various network nodes. We achieve real-time performance by combining an efficient large-scale nonlinear programming algorithm with two problem reduction techniques. d'Alembert's principle can be applied to the partial differential equations governing behavior along the network edges (distribution pipes). This allows us to approximate the time-delay relationships between network nodes, removing the need to discretize along the length of the pipes. The efficiency of this approach alone, however, still depends on the size of the network and does not scale indefinitely to larger network models. We further reduce the problem size with a subdomain approach, solving smaller inversion problems using a geographic window around the area of contamination. We illustrate the effectiveness of this overall approach and these reduction techniques on an actual metropolitan water network model.

  15. New Approaches to HSCT Multidisciplinary Design and Optimization

    NASA Technical Reports Server (NTRS)

    Schrage, Daniel P.; Craig, James I.; Fulton, Robert E.; Mistree, Farrokh

    1999-01-01

    New approaches to MDO have been developed and demonstrated during this project on a particularly challenging aeronautics problem: HSCT aeroelastic wing design. Tackling this problem required the integration of resources and collaboration from three Georgia Tech laboratories, ASDL, SDL, and PPRL, along with close coordination and participation from industry. Its success can also be attributed to the close interaction and involvement of fellows from the NASA Multidisciplinary Analysis and Optimization (MAO) program, which was going on in parallel and provided additional resources to work on this very complex, multidisciplinary problem, along with the methods being developed. The development of the Integrated Design Engineering Simulator (IDES) and its initial demonstration is a necessary first step in transitioning the methods and tools developed to larger industrial-sized problems of interest. It also provides a framework for the implementation and demonstration of the methodology. Attachment: Appendix A - List of publications. Appendix B - Year 1 report. Appendix C - Year 2 report. Appendix D - Year 3 report. Appendix E - accompanying CDROM.

  17. A multiscale optimization approach to detect exudates in the macula.

    PubMed

    Agurto, Carla; Murray, Victor; Yu, Honggang; Wigdahl, Jeffrey; Pattichis, Marios; Nemeth, Sheila; Barriga, E Simon; Soliz, Peter

    2014-07-01

    Pathologies that occur on or near the fovea, such as clinically significant macular edema (CSME), represent a high risk for vision loss. The presence of exudates, lipid residues of serous leakage from damaged capillaries, has been associated with CSME, in particular if they are located within one optic disc diameter of the fovea. In this paper, we present an automatic system to detect exudates in the macula. Our approach uses optimal thresholding of instantaneous amplitude (IA) components that are extracted from multiple frequency scales to generate candidate exudate regions. For each candidate region, we extract color, shape, and texture features that are used for classification. Classification is performed using partial least squares (PLS). We tested the performance of the system on two databases of 652 and 400 images, respectively. The system achieved an area under the receiver operating characteristic curve (AUC) of 0.96 for the combination of both databases and an AUC of 0.97 for each of them when evaluated independently.

  18. [Niacin--an additive therapeutic approach for optimizing lipid profile].

    PubMed

    Wieneke, Heinrich; Schmermund, Axel; Erbel, Raimund

    2005-04-15

    Large interventional studies have shown that the reduction of total cholesterol and low-density lipoprotein cholesterol (LDL-C) is one of the cornerstones of the prevention of coronary artery disease. However, in up to 40% of patients the recommended LDL-C target is not reached with monotherapy. Furthermore, risk stratification by LDL-C alone disregards a substantial number of patients with dyslipidemia with increased triglycerides and decreased high-density lipoprotein cholesterol (HDL-C). In consequence, niacin has gained attention as a component of a combined therapeutic approach in patients with dyslipidemia. Niacin substantially increases HDL-C and decreases triglycerides, LDL-C and lipoprotein (a). By this mechanism of action, niacin exhibited, in combination with statins or bile acid-binding resins, favorable effects on the incidence of cardiovascular events in selected patients. Side effects such as flushing and hepatotoxicity appear to depend in part on the niacin formulation used. However, niacin has been shown to be a well-tolerated and safe therapy in controlled studies. On the basis of current data, niacin should be considered a valuable therapy component in patients with dyslipidemia in whom monotherapy fails to adequately reduce an increased risk of coronary artery disease.

  19. Constrained nonlinear optimization approaches to color-signal separation.

    PubMed

    Chang, P R; Hsieh, T H

    1995-01-01

    Separating a color signal into illumination and surface reflectance components is a fundamental issue in color reproduction and constancy. This can be carried out by minimizing the error in the least squares (LS) fit of the product of the illumination and the surface spectral reflectance to the actual color signal. When taking into account the physical realizability constraints on the surface reflectance and illumination, the feasible solutions to the nonlinear LS problem must satisfy a number of linear inequalities. Four distinct novel optimization algorithms are presented that employ these constraints to minimize the nonlinear LS fitting error. The first approach, which is based on Ritter's superlinearly convergent method (Luenberger, 1980), provides a computationally superior algorithm for finding the minimum solution to the nonlinear LS error problem subject to linear inequality constraints. Unfortunately, this gradient-like algorithm may sometimes be trapped in a local minimum or become unstable when the parameters involved in the algorithm are not tuned properly. The remaining three methods are based on the stable and promising global minimizer called simulated annealing. The annealing algorithm can always find the global minimum solution with probability one, but its convergence is slow. To tackle this, a cost-effective variable-separable formulation based on the concept of Golub and Pereyra (1973) is adopted to reduce the nonlinear LS problem to a small-scale nonlinear LS problem. The computational efficiency can be further improved when the original Boltzmann generating distribution of classical annealing is replaced by the Cauchy distribution.
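
    A hedged sketch of the simulated-annealing idea on a toy version of the separation problem: the reflectance vector is constrained to [0, 1] and fit so that its product with a known illuminant reproduces the observed color signal. The paper estimates both factors; fixing the illuminant here is a simplifying assumption.

        # Simulated annealing on a constrained least-squares reflectance fit.
        import numpy as np

        rng = np.random.default_rng(7)
        e = 1.0 + 0.5 * np.sin(np.linspace(0, 3, 31))        # illuminant spectrum (toy)
        s_true = np.clip(0.5 + 0.4 * np.cos(np.linspace(0, 4, 31)), 0, 1)
        c = e * s_true + 0.01 * rng.normal(size=31)          # observed color signal

        s = rng.random(31)
        err = np.sum((e * s - c) ** 2)
        T = 1.0
        for step in range(20000):
            cand = np.clip(s + 0.05 * rng.normal(size=31), 0, 1)  # respect 0 <= s <= 1
            cand_err = np.sum((e * cand - c) ** 2)
            if cand_err < err or rng.random() < np.exp((err - cand_err) / T):
                s, err = cand, cand_err                      # accept (possibly uphill) move
            T *= 0.9995                                      # cooling schedule
        print("final LS error:", err)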

  20. A systematic approach: optimization of healthcare operations with knowledge management.

    PubMed

    Wickramasinghe, Nilmini; Bali, Rajeev K; Gibbons, M Chris; Choi, J H James; Schaffer, Jonathan L

    2009-01-01

    Effective decision making is vital in all healthcare activities. While this decision making is typically complex and unstructured, it requires the decision maker to gather multispectral data and information in order to make an effective choice when faced with numerous options. Unstructured decision making in dynamic and complex environments is challenging, and in almost every situation the decision maker is undoubtedly faced with information inferiority. The need for germane knowledge, pertinent information and relevant data is critical, and hence harnessing knowledge and embracing the tools, techniques, technologies and tactics of knowledge management are essential to ensuring efficiency and efficacy in the decision making process. The systematic approach and application of knowledge management (KM) principles and tools can provide the necessary foundation for improving the decision making processes in healthcare. A combination of Boyd's OODA Loop (Observe, Orient, Decide, Act) and the Intelligence Continuum provides an integrated, systematic and dynamic model for ensuring that the healthcare decision maker is always provided with the appropriate and necessary knowledge elements to help ensure that the outcomes of the healthcare decision making process are optimized for maximal patient benefit. The example of orthopaedic operating room processes illustrates the application of the integrated model to support effective decision making in the clinical environment.

  1. Novel local rules of cellular automata applied to topology and size optimization

    NASA Astrophysics Data System (ADS)

    Bochenek, Bogdan; Tajs-Zielińska, Katarzyna

    2012-01-01

    Cellular automata are mathematical idealizations of physical systems in which the design domain is divided into a lattice of cells, the states of which are updated synchronously in discrete time steps according to some local rules. The principle of cellular automata is that the global behaviour of the system is governed by cells that interact only with their neighbours. Because of its simplicity and versatility, the method has been found to be a useful tool for structural design, especially since the cellular automata methodology can be adopted for both optimal sizing and topology optimization. This article presents the application of the cellular automata concept to the topology optimization of plane elastic structures. As to optimal sizing, the design of columns exposed to loss of stability is also discussed. A new local update rule is proposed, selected optimal design problems are formulated, and finally the article is illustrated by results of numerical optimization.
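
    A generic illustration (not the article's new rule) of a cellular-automaton update for topology optimization: each cell adjusts its material density from a stand-in stress measure averaged over its von Neumann neighbours, so the global layout emerges from purely local interactions.

        # Local CA density update on a 2-D design lattice (toy stress field).
        import numpy as np

        rng = np.random.default_rng(8)
        density = np.full((20, 40), 0.5)           # design domain, initial material fraction
        for it in range(50):
            stress = rng.random(density.shape) * density     # stand-in stress field
            neigh = (np.roll(stress, 1, 0) + np.roll(stress, -1, 0) +
                     np.roll(stress, 1, 1) + np.roll(stress, -1, 1)) / 4.0
            local = 0.5 * (stress + neigh)                   # cell sees only its neighbours
            density += 0.1 * np.sign(local - local.mean())   # add/remove material locally
            density = np.clip(density, 0.01, 1.0)
        print("volume fraction:", density.mean().round(3))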

  2. Computing Optimal Stochastic Portfolio Execution Strategies: A Parametric Approach Using Simulations

    NASA Astrophysics Data System (ADS)

    Moazeni, Somayeh; Coleman, Thomas F.; Li, Yuying

    2010-09-01

    Computing optimal stochastic portfolio execution strategies under appropriate risk consideration presents a great computational challenge. We investigate a parametric approach to computing optimal stochastic strategies using Monte Carlo simulations. This approach reduces computational complexity by computing the coefficients of a parametric representation of a stochastic dynamic strategy based on static optimization. Using this technique, constraints can likewise be handled using appropriate penalty functions. We illustrate the proposed approach by minimizing the expected execution cost and Conditional Value-at-Risk (CVaR).
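
    A sketch of the Monte Carlo objective in such a parametric approach: execution costs are simulated for a strategy parameterized by a single decay rate, and expected cost and CVaR are estimated per parameter value. The price dynamics, impact model, and strategy family are toy assumptions, not the paper's formulation.

        # Monte Carlo estimation of expected cost and CVaR for a parametric strategy.
        import numpy as np

        rng = np.random.default_rng(9)

        def simulate_costs(theta, n_paths=5000, shares=1.0, periods=10, eta=0.1):
            """Shortfall of selling `shares` with trade sizes proportional to exp(-theta*t)."""
            t = np.arange(periods)
            w = np.exp(-theta * t); w = w / w.sum() * shares  # static trade schedule
            price = 1.0 + 0.02 * rng.normal(size=(n_paths, periods)).cumsum(axis=1)
            exec_price = price - eta * w                      # linear temporary impact
            return shares * 1.0 - (exec_price * w).sum(axis=1)

        def cvar(costs, beta=0.95):
            var = np.quantile(costs, beta)
            return costs[costs >= var].mean()                 # mean of the worst (1-beta) tail

        for theta in (0.0, 0.3, 1.0):                         # compare parameter choices
            costs = simulate_costs(theta)
            print(f"theta={theta}: E[cost]={costs.mean():.4f}, CVaR95={cvar(costs):.4f}")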

  3. Genetic-Algorithm-based Light-Curve Optimization Applied to Observations of the W Ursae Majoris Star BH Cassiopeiae

    NASA Astrophysics Data System (ADS)

    Metcalfe, Travis S.

    1999-05-01

    I have developed a procedure utilizing a genetic-algorithm (GA) based optimization scheme to fit the observed light curves of an eclipsing binary star with a model produced by the Wilson-Devinney (W-D) code. The principal advantages of this approach are the global search capability and the objectivity of the final result. Although this method can be more efficient than some other comparably global search techniques, the computational requirements of the code are still considerable. I have applied this fitting procedure to my observations of the W UMa type eclipsing binary BH Cassiopeiae. An analysis of V-band CCD data obtained in 1994-1995 from Steward Observatory and U- and B-band photoelectric data obtained in 1996 from McDonald Observatory provided three complete light curves to constrain the fit. In addition, radial velocity curves obtained in 1997 from McDonald Observatory provided a direct measurement of the system mass ratio to restrict the search. The results of the GA-based fit are in excellent agreement with the final orbital solution obtained with the standard differential corrections procedure in the W-D code.

  4. An Analysis of the Optimal Multiobjective Inventory Clustering Decision with Small Quantity and Great Variety Inventory by Applying a DPSO

    PubMed Central

    Li, Meng-Hua

    2014-01-01

    When an enterprise has thousands of varieties in its inventory, the use of a single management method is not a feasible approach. A better way to manage this problem is to categorise inventory items into several clusters according to inventory decisions and to use different management methods for different clusters. The present study applies DPSO (dynamic particle swarm optimisation) to the problem of clustering inventory items. Without requiring prior inventory knowledge, inventory items are automatically clustered into a near-optimal number of clusters. The obtained clustering results should satisfy the inventory objective equation, which consists of different objectives such as total cost, backorder rate, demand relevance, and inventory turnover rate. This study integrates the above four objectives into a multiobjective equation and inputs the actual inventory items of the enterprise into DPSO. In comparison with other clustering methods, the proposed method can consider different objectives and obtains an overall better solution, with better convergence results and inventory decisions. PMID:25197713

  5. Can the Usage-Based Approach to Language Development be Applied to Analysis of Developmental Stuttering?

    PubMed Central

    Savage, C.; Lieven, E.

    2008-01-01

    The usage-based approach to language development suggests that children initially build up their language through very concrete constructions based around individual words or frames on the basis of the speech they hear and use. These constructions gradually become more general and more abstract during the third and fourth year of life. We outline this approach and suggest that it may be applied to problems of fluency control in early child language development. PMID:18259585

  6. Applying a Methodological Approach to the Development of a Natural Interaction System

    NASA Astrophysics Data System (ADS)

    Del Valle-Agudo, David; Rivero-Espinosa, Jessica; Calle-Gómez, Francisco Javier; Cuadra-Fernández, Dolores

    This work describes the methodology used to design a Natural Interaction System for guiding services. A national research project was the framework where the approach was applied. The aim of that system is interacting with clients of a hotel for providing diverse services. Apart from the description of the methodology, a case study is added to the paper in order to outline strengths of the approach, and limits that should lead to future research.

  7. A Technical and Economic Optimization Approach to Exploring Offshore Renewable Energy Development in Hawaii

    SciTech Connect

    Larson, Kyle B.; Tagestad, Jerry D.; Perkins, Casey J.; Oster, Matthew R.; Warwick, M.; Geerlofs, Simon H.

    2015-09-01

    This study was conducted with the support of the U.S. Department of Energy's (DOE's) Wind and Water Power Technologies Office (WWPTO) as part of ongoing efforts to minimize key risks and reduce the cost and time associated with permitting and deploying ocean renewable energy. The focus of the study was to discuss a possible approach to exploring scenarios for ocean renewable energy development in Hawaii that attempts to optimize future development based on technical, economic, and policy criteria. The goal of the study was not to identify potentially suitable or feasible locations for development, but to discuss how such an approach may be developed for a given offshore area. Hawaii was selected for this case study due to the complex nature of the energy climate there and DOE's ongoing involvement in supporting marine spatial planning for the West Coast. Primary objectives of the study included 1) discussing the political and economic context for ocean renewable energy development in Hawaii, especially with respect to how inter-island transmission may affect the future of renewable energy development in Hawaii; 2) applying a Geographic Information System (GIS) approach that has been used to assess the technical suitability of offshore renewable energy technologies in Washington, Oregon, and California to Hawaii's offshore environment; and 3) formulating a mathematical model for exploring scenarios for ocean renewable energy development in Hawaii that seeks to optimize technical and economic suitability within the context of Hawaii's existing energy policy and planning.

  8. Cryosurgery--a putative approach to molecular-based optimization.

    PubMed

    Baust, John G; Gage, Andrew A; Clarke, Dominic; Baust, John M; Van Buskirk, Robert

    2004-04-01

    Cryosurgery must be performed in a manner that produces a predictable response in an appropriate volume of tissue. In present-day clinical practice, that goal is not always achieved. Concerns with cryosurgical techniques in cancer therapy focus in part on the incidence of recurrent disease in the treated site, which is commonly approximately 20-40% in metastatic liver tumors and prostate cancers. Whether the cause of this failure is disease-based or technique-related, cryosurgery for cancer commonly needs the support of adjunctive therapy in the form of anti-cancer drugs or radiotherapy to increase the rate of cell death in the peripheral zone of the therapeutic lesion, where cell survival hangs in the balance for several days post-treatment. Recent evidence has identified a third mechanism of cell death associated with cryosurgery. This mechanism, apoptosis or gene-regulated cell death, is additive with both the direct ice-related cell damage that occurs during the operative freeze-thaw intervals and the coagulative necrosis that occurs over days post-treatment. In this manuscript we discuss, through a combination of literature review and new data, the combined roles of these distinct modes of cell death in prostate and colorectal cancer. Data are presented suggesting that sub-freezing temperatures, when applied sequentially with low-dose chemotherapy, may provide improved cancer cell death in the freeze zone periphery. Since the mechanism of action of most common chemotherapeutic agents is to initiate apoptosis in cancer cells, the observation that sub-freezing exposures yield a similar effect provides a possible route toward molecular-based procedural optimization to improve therapeutic outcome.

  9. A new optimization approach for shell and tube heat exchangers by using electromagnetism-like algorithm (EM)

    NASA Astrophysics Data System (ADS)

    Abed, Azher M.; Abed, Issa Ahmed; Majdi, Hasan Sh.; Al-Shamani, Ali Najah; Sopian, K.

    2016-12-01

    This study proposes a new procedure for the optimal design of shell and tube heat exchangers. The electromagnetism-like algorithm is applied to save on heat exchanger capital cost and to design a compact, high-performance heat exchanger with effective use of the allowable pressure drop (cost of the pump). An optimization algorithm is then utilized to determine the optimal values of both the geometric design parameters and the maximum allowable pressure drop by pursuing the minimization of a total cost function. A computer code is developed for the optimal design of shell and tube heat exchangers. Different test cases are solved to demonstrate the effectiveness and ability of the proposed algorithm. Results are also compared with those obtained by other approaches available in the literature. The comparisons indicate that the proposed design procedure can be successfully applied in the optimal design of shell and tube heat exchangers. In particular, in the examined cases reductions of total cost of up to 30%, 29%, and 56.15% compared with the original design, and up to 18%, 5.5% and 7.4% compared with other approaches, are observed for case studies 1, 2 and 3, respectively. The economic gains resulting from the proposed design procedure are especially relevant when size and volume are critical and a high-performance, compact unit of moderate volume and cost is needed.

  10. A multi-resolution approach for optimal mass transport

    NASA Astrophysics Data System (ADS)

    Dominitz, Ayelet; Angenent, Sigurd; Tannenbaum, Allen

    2007-09-01

    Optimal mass transport is an important technique with numerous applications in econometrics, fluid dynamics, automatic control, statistical physics, shape optimization, expert systems, and meteorology. Motivated by certain problems in image registration and medical image visualization, in this note, we describe a simple gradient descent methodology for computing the optimal L2 transport mapping which may be easily implemented using a multiresolution scheme. We also indicate how the optimal transport map may be computed on the sphere. A numerical example is presented illustrating our ideas.
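
    In one dimension the optimal L2 transport map reduces to the monotone rearrangement, computable by matching sorted samples; the sketch below uses this textbook special case as intuition for the gradient-descent scheme described above, which handles the genuinely multidimensional settings.

        # 1-D optimal transport by monotone rearrangement (sorted-sample matching).
        import numpy as np

        rng = np.random.default_rng(10)
        x = rng.normal(0.0, 1.0, size=2000)        # source samples
        y = rng.exponential(1.0, size=2000)        # target samples

        ix = np.argsort(x)
        map_values = np.empty_like(x)
        map_values[ix] = np.sort(y)                # i-th smallest x maps to i-th smallest y
        cost = np.mean((map_values - x) ** 2)      # squared transport cost estimate
        print("estimated L2 transport cost:", round(cost, 3))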

  11. Utility Theory for Evaluation of Optimal Process Condition of SAW: A Multi-Response Optimization Approach

    NASA Astrophysics Data System (ADS)

    Datta, Saurav; Biswas, Ajay; Bhaumik, Swapan; Majumdar, Gautam

    2011-01-01

    Multi-objective optimization problem has been solved in order to estimate an optimal process environment consisting of optimal parametric combination to achieve desired quality indicators (related to bead geometry) of submerged arc weld of mild steel. The quality indicators selected in the study were bead height, penetration depth, bead width and percentage dilution. Taguchi method followed by utility concept has been adopted to evaluate the optimal process condition achieving multiple objective requirements of the desired quality weld.

  12. Utility Theory for Evaluation of Optimal Process Condition of SAW: A Multi-Response Optimization Approach

    SciTech Connect

    Datta, Saurav; Biswas, Ajay; Bhaumik, Swapan; Majumdar, Gautam

    2011-01-17

    Multi-objective optimization problem has been solved in order to estimate an optimal process environment consisting of optimal parametric combination to achieve desired quality indicators (related to bead geometry) of submerged arc weld of mild steel. The quality indicators selected in the study were bead height, penetration depth, bead width and percentage dilution. Taguchi method followed by utility concept has been adopted to evaluate the optimal process condition achieving multiple objective requirements of the desired quality weld.

  13. A Multivariate Approach to Optimize Subseafloor Observatory Designs

    NASA Astrophysics Data System (ADS)

    Lado Insua, T.; Moran, K.; Kulin, I.; Farrington, S.; Newman, J. B.; Morgan, S.

    2012-12-01

    Long-term monitoring of the subseafloor has become a more common practice in recent decades. Systems such as the Circulation Obviation Retrofit Kit (CORK) have been used since the 1970s to provide the scientific community with time-series measurements of geophysical properties below the seafloor and, in the latest versions, with pore water sampling over time. The Simple Cabled Instrument for Measuring Parameters In-Situ (SCIMPI) is a new observatory instrument designed to study dynamic processes in the sub-seabed. SCIMPI makes time-series measurements of temperature, pressure and electrical resistivity at a series of depths in the sub-seafloor, tailored to site-specific scientific objectives. SCIMPI's modular design enables this type of site-specific configuration, based on the study goals combined with the sub-seafloor characteristics. The instrument is designed to take measurements in dynamic environments. After four years in development, SCIMPI is scheduled for its first deployment on the Cascadia Margin within the NEPTUNE Canada observatory network. SCIMPI's flexible modular design simplifies deployment and reduces the cost of measuring physical properties. SCIMPI is expected to expand subseafloor observations into softer sediments and multiple depth intervals. In any observation system, the locations and number of sensors are a compromise between scientific objectives and cost. The subseafloor sensor positions within an observatory borehole have in the past been determined by identifying the major lithologies or major flux areas, based on individual analysis of the physical properties and logging measurements of the site. Here we present a multivariate approach to identifying the most significant depth intervals to instrument for long-term subseafloor observatories. Where borehole data are available (wireline logging, logging while drilling, physical properties and chemistry measurements), this approach will optimize the locations using an unbiased

  14. Improved understanding of the searching behavior of ant colony optimization algorithms applied to the water distribution design problem

    NASA Astrophysics Data System (ADS)

    Zecchin, A. C.; Simpson, A. R.; Maier, H. R.; Marchi, A.; Nixon, J. B.

    2012-09-01

    Evolutionary algorithms (EAs) have been applied successfully to many water resource problems, such as system design, management decision formulation, and model calibration. The performance of an EA with respect to a particular problem type is dependent on how effectively its internal operators balance the exploitation/exploration trade-off to iteratively find solutions of an increasing quality. For a given problem, different algorithms are observed to produce a variety of different final performances, but there have been surprisingly few investigations into characterizing how the different internal mechanisms alter the algorithm's searching behavior, in both the objective and decision space, to arrive at this final performance. This paper presents metrics for analyzing the searching behavior of ant colony optimization algorithms, a particular type of EA, for the optimal water distribution system design problem, which is a classical NP-hard problem in civil engineering. Using the proposed metrics, behavior is characterized in terms of three different attributes: (1) the effectiveness of the search in improving its solution quality and entering into optimal or near-optimal regions of the search space, (2) the extent to which the algorithm explores as it converges to solutions, and (3) the searching behavior with respect to the feasible and infeasible regions. A range of case studies is considered, where a number of ant colony optimization variants are applied to a selection of water distribution system optimization problems. The results demonstrate the utility of the proposed metrics to give greater insight into how the internal operators affect each algorithm's searching behavior.

  15. On the use of computation optimization opportunities in computer technologies for applied and computational mathematics problems with prescribed quality characteristics

    NASA Astrophysics Data System (ADS)

    Babich, M. D.; Zadiraka, V. K.; Lyudvichenko, V. A.; Sergienko, I. V.

    2010-12-01

    The use of various opportunities for computation optimization in computer technologies for applied and computational mathematics problems with prescribed quality characteristics is investigated. More precisely, we investigate the choice and determination of computational resources, and methods for their efficient use, in finding an approximate solution to problems up to a prescribed accuracy in a limited amount of processor time.

  16. A hybrid gene selection approach for microarray data classification using cellular learning automata and ant colony optimization.

    PubMed

    Vafaee Sharbaf, Fatemeh; Mosafer, Sara; Moattar, Mohammad Hossein

    2016-06-01

    This paper proposes an approach for gene selection in microarray data. The proposed approach consists of a primary filter stage using the Fisher criterion, which reduces the initial set of genes and hence the search space and time complexity. Then, a wrapper approach based on cellular learning automata (CLA) optimized with the ant colony method (ACO) is used to find the set of features that improves classification accuracy. CLA is applied due to its capability to learn and model complicated relationships. The features selected in the last phase are evaluated using the ROC curve, and the smallest, most effective feature subset is determined. The classifiers evaluated in the proposed framework are K-nearest neighbor, support vector machine, and naïve Bayes. The proposed approach is evaluated on 4 microarray datasets. The evaluations confirm that the proposed approach can find the smallest subset of genes while approaching the maximum accuracy.
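
    A minimal sketch of the primary filter stage: the Fisher criterion scores each gene by between-class separation over within-class spread, and only the top-scoring genes would be passed on to the CLA/ACO wrapper. The expression matrix below is synthetic, with the first 25 genes made informative on purpose.

        # Fisher-criterion gene filtering on synthetic two-class expression data.
        import numpy as np

        rng = np.random.default_rng(11)
        X = rng.normal(size=(60, 1000))            # 60 samples x 1000 genes
        y = rng.integers(0, 2, size=60)            # two classes
        X[y == 1, :25] += 1.5                      # make the first 25 genes informative

        mu0, mu1 = X[y == 0].mean(0), X[y == 1].mean(0)
        v0, v1 = X[y == 0].var(0), X[y == 1].var(0)
        fisher = (mu0 - mu1) ** 2 / (v0 + v1 + 1e-12)   # Fisher score per gene
        top = np.argsort(fisher)[::-1][:50]        # keep the 50 highest-scoring genes
        print("informative genes kept:", np.sum(top < 25), "of 25")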

  17. Applying the Systems Approach to Curriculum Development in the Science Classroom.

    ERIC Educational Resources Information Center

    Boblick, John M.

    Described is a method by which a classroom teacher may apply the systems approach to the development of the instructional segments which he uses in his daily teaching activities. The author proposes a three-dimensional curriculum design model and discusses its main features. The basic points which characterize the application of the systems…

  18. Applying Program Theory-Driven Approach to Design and Evaluate a Teacher Professional Development Program

    ERIC Educational Resources Information Center

    Lin, Su-ching; Wu, Ming-sui

    2016-01-01

    This study was the first year of a two-year project which applied a program theory-driven approach to evaluating the impact of teachers' professional development interventions on students' learning by using a mix of methods, qualitative inquiry, and quasi-experimental design. The current study was to show the results of using the method of…

  19. Optimization of a Saccharomyces cerevisiae fermentation process for production of a therapeutic recombinant protein using a multivariate Bayesian approach.

    PubMed

    Fu, Zhibiao; Leighton, Julie; Cheng, Aili; Appelbaum, Edward; Aon, Juan C

    2012-07-01

    Various approaches have been applied to optimize biological product fermentation processes and define the design space. In this article, we present a stepwise approach to optimizing a Saccharomyces cerevisiae fermentation process through risk assessment analysis, statistical design of experiments (DoE), and a multivariate Bayesian predictive approach. The critical process parameters (CPPs) were first identified through a risk assessment. The response surface for each attribute was modeled using the results from the DoE study, with consideration given to interactions between CPPs. A multivariate Bayesian predictive approach was then used to identify the region of process operating conditions where all attributes met their specifications simultaneously. The model prediction was verified by twelve consistency runs, in which all batches achieved titers of more than 1.53 g/L of broth and quality attributes within the expected ranges. The calculated probability was used to define the reliable operating region. To our knowledge, this is the first case study to implement the multivariate Bayesian predictive approach for process optimization in an industrial application, with verification at two different production scales. This approach can be extended to other fermentation process optimizations and to quantitation of reliable operating regions.

  20. Applying the Taguchi method to river water pollution remediation strategy optimization.

    PubMed

    Yang, Tsung-Ming; Hsu, Nien-Sheng; Chiu, Chih-Chiang; Wang, Hsin-Ju

    2014-04-15

    Optimization methods usually obtain the travel direction of the solution by substituting candidate solutions into the objective function. However, if the solution space is too large, this search can be time consuming. To address this problem, this study incorporated the Taguchi method into the solution-space search of the optimization method, using the characteristics of the Taguchi method to rank the effects that variations of the decision variables have on the system. Based on the level of effect, the study determined the impact factor of the decision variables and the optimal solution for the model. The integration of the Taguchi method and the solution optimization method successfully obtained the optimal solution of the optimization problem while significantly reducing the computing time and enhancing river water quality. The results suggested that the basin with the greatest water-quality improvement is the Dahan River, where, under the optimal strategy of this study, the severely polluted length was reduced from 18 km to 5 km.
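
    The ranking idea can be sketched in a few lines: evaluate the objective only at orthogonal-array runs, convert the results to signal-to-noise (S/N) ratios, and rank decision variables by the spread of their level means. The L4(2³) array and the toy objective below are illustrative stand-ins, not the paper's river-quality model.

```python
import numpy as np

# Standard L4(2^3) orthogonal array: 3 two-level factors covered in 4 runs.
L4 = np.array([[0, 0, 0],
               [0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])
# Candidate values for each decision variable at level 0 and level 1.
levels = np.array([[1.0, 3.0],
                   [2.0, 5.0],
                   [0.5, 1.5]])

def objective(x):
    # Toy "pollution" measure to minimize; stands in for the real system model.
    return (x[0] - 2.5) ** 2 + (x[1] - 4.0) ** 2 + (x[2] - 1.0) ** 2 + 1.0

# Smaller-the-better signal-to-noise ratio for each orthogonal-array run.
sn = np.array([-10 * np.log10(objective(levels[np.arange(3), run]) ** 2)
               for run in L4])

# Main effect of a factor = gap between mean S/N at its two levels;
# a bigger gap marks a more influential decision variable.
effects = [abs(sn[L4[:, f] == 1].mean() - sn[L4[:, f] == 0].mean())
           for f in range(3)]
print("factor ranking (most influential first):", np.argsort(effects)[::-1])

# Pick, per factor, the level with the higher mean S/N.
best = [levels[f, int(sn[L4[:, f] == 1].mean() > sn[L4[:, f] == 0].mean())]
        for f in range(3)]
print("suggested levels:", best)
```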

  2. Efficient global optimization applied to wind tunnel evaluation-based optimization for improvement of flow control by plasma actuators

    NASA Astrophysics Data System (ADS)

    Kanazaki, Masahiro; Matsuno, Takashi; Maeda, Kengo; Kawazoe, Hiromitsu

    2015-09-01

    A kriging-based genetic algorithm called efficient global optimization (EGO) was employed to optimize the operating-condition parameters of plasma actuators. The aerodynamic performance was evaluated by wind tunnel testing to overcome the disadvantage of time-consuming numerical simulations. The proposed system was used on two design problems involving the power supply of a plasma actuator. The first case was a drag minimization problem around a semicircular cylinder; in this case, an inhibitory effect on flow separation was also observed. The second case was a lift maximization problem around a circular cylinder. This case resembled aerofoil design, because a circular cylinder can act as an aerofoil when the plasma actuators, described by four design parameters, control the flow circulation; applicability to multi-variable design problems was also investigated. Based on these results, optimum designs and global design information were obtained while drastically reducing the number of experiments required compared to a full factorial experiment.
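
    A bare-bones EGO loop of the kind named above can be sketched with an off-the-shelf Gaussian-process library: fit a kriging surrogate to the evaluations gathered so far and pick the next condition by maximizing expected improvement. The analytic wind_tunnel function below is an illustrative stand-in for the expensive wind tunnel measurement.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def wind_tunnel(x):
    # Stand-in for one expensive wind-tunnel evaluation (drag to minimize).
    return np.sin(3 * x) + 0.3 * x ** 2

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(5, 1))        # small initial design
y = wind_tunnel(X).ravel()
grid = np.linspace(-2, 2, 401).reshape(-1, 1)

for it in range(10):                        # each iteration = one new experiment
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)                            # kriging surrogate of the data so far
    mu, sigma = gp.predict(grid, return_std=True)
    best = y.min()
    z = (best - mu) / np.maximum(sigma, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)   # expected improvement
    x_next = grid[np.argmax(ei)]            # most promising next condition
    X = np.vstack([X, x_next])
    y = np.append(y, wind_tunnel(x_next)[0])

print("best value found:", y.min(), "at x =", float(X[np.argmin(y)][0]))
```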

  3. The Contribution of Applied Social Sciences to Obesity Stigma-Related Public Health Approaches

    PubMed Central

    Bombak, Andrea E.

    2014-01-01

    Obesity is viewed as a major public health concern, and obesity stigma is pervasive. Such marginalization renders obese persons a “special population.” Weight bias arises in part due to popular sources' attribution of obesity causation to individual lifestyle factors. This may not accurately reflect the experiences of obese individuals or their perspectives on health and quality of life. A powerful role may exist for applied social scientists, such as anthropologists or sociologists, in exploring the lived and embodied experiences of this largely discredited population. This novel research may aid in public health intervention planning. Through these studies, applied social scientists could help develop a nonstigmatizing, salutogenic approach to public health that accurately reflects the health priorities of all individuals. Such an approach would call upon applied social science's strengths in investigating the mundane, problematizing the “taken for granted” and developing emic (insiders') understandings of marginalized populations. PMID:24782921

  4. Interior search algorithm (ISA): a novel approach for global optimization.

    PubMed

    Gandomi, Amir H

    2014-07-01

    This paper presents the interior search algorithm (ISA) as a novel method for solving optimization tasks. The proposed ISA is inspired by interior design and decoration. The algorithm is different from other metaheuristic algorithms and provides new insight for global optimization. The proposed method is verified using some benchmark mathematical and engineering problems commonly used in the area of optimization. ISA results are further compared with well-known optimization algorithms. The results show that the ISA is efficiently capable of solving optimization problems. The proposed algorithm can outperform the other well-known algorithms. Further, the proposed algorithm is very simple and it only has one parameter to tune. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  5. A new multi criteria classification approach in a multi agent system applied to SEEG analysis

    PubMed Central

    Kinie, Abel; Ndiaye, Mamadou Lamine L.; Montois, Jean-Jacques; Jacquelet, Yann

    2007-01-01

    This work focuses on the organization of SEEG signals during epileptic seizures using a multi-agent system approach. This approach is based on cooperative mechanisms of self-organization at the micro level and the emergence of a global function at the macro level. In order to evaluate this approach, we propose a distributed collaborative approach for the classification of the signals of interest. This new multi-criteria classification method is able to provide a relevant organization of brain area structures and to bring out elements of epileptogenic networks. The method is compared to another classification approach, a fuzzy classification, and gives better results when applied to SEEG signals. PMID:18002381

  6. Multiobjective optimization in a pseudometric objective space as applied to a general model of business activities

    NASA Astrophysics Data System (ADS)

    Khachaturov, R. V.

    2016-09-01

    It is shown that finding the equivalence set for solving multiobjective discrete optimization problems is advantageous over finding the set of Pareto optimal decisions. An example set of key parameters characterizing the economic efficiency of a commercial firm is proposed, and a mathematical model of its activities is constructed. In contrast to the classical problem of finding the maximum profit for any business, this study deals with a multiobjective optimization problem. A method for solving inverse multiobjective problems in a multidimensional pseudometric space is proposed for finding the best project of the firm's activities. The solution of a particular problem of this type is presented.

  7. Optimism

    PubMed Central

    Carver, Charles S.; Scheier, Michael F.; Segerstrom, Suzanne C.

    2010-01-01

    Optimism is an individual difference variable that reflects the extent to which people hold generalized favorable expectancies for their future. Higher levels of optimism have been related prospectively to better subjective well-being in times of adversity or difficulty (i.e., controlling for previous well-being). Consistent with such findings, optimism has been linked to higher levels of engagement coping and lower levels of avoidance, or disengagement, coping. There is evidence that optimism is associated with taking proactive steps to protect one's health, whereas pessimism is associated with health-damaging behaviors. Consistent with such findings, optimism is also related to indicators of better physical health. The energetic, task-focused approach that optimists take to goals also relates to benefits in the socioeconomic world. Some evidence suggests that optimism relates to more persistence in educational efforts and to higher later income. Optimists also appear to fare better than pessimists in relationships. Although there are instances in which optimism fails to convey an advantage, and instances in which it may convey a disadvantage, those instances are relatively rare. In sum, the behavioral patterns of optimists appear to provide models of living for others to learn from. PMID:20170998

  8. Antigen identification starting from the genome: a "Reverse Vaccinology" approach applied to MenB.

    PubMed

    Palumbo, Emmanuelle; Fiaschi, Luigi; Brunelli, Brunella; Marchi, Sara; Savino, Silvana; Pizza, Mariagrazia

    2012-01-01

    Most of the vaccines available today, albeit very effective, have been developed using traditional "old-style" methodologies. Technologies developed in recent years have opened up new perspectives in the field of vaccinology and novel strategies are now being used to design improved or new vaccines against infections for which preventive measures do not exist. The Reverse Vaccinology (RV) approach is one of the most powerful examples of biotechnology applied to the field of vaccinology for identifying new protein-based vaccines. RV combines the availability of genomic data, the analyzing capabilities of new bioinformatic tools, and the application of high throughput expression and purification systems combined with serological screening assays for a coordinated screening process of the entire genomic repertoire of bacterial, viral, or parasitic pathogens. The application of RV to Neisseria meningitidis serogroup B represents the first success of this novel approach. In this chapter, we describe how this revolutionary approach can be easily applied to any pathogen.

  9. Reference energy extremal optimization: a stochastic search algorithm applied to computational protein design.

    PubMed

    Zhang, Naigong; Zeng, Chen

    2008-08-01

    We adapt a combinatorial optimization algorithm, extremal optimization (EO), for the search problem in computational protein design. This algorithm takes advantage of the knowledge of local energy information and systematically improves on the residues that have high local energies. Power-law probability distributions are used to select the backbone sites to be improved on and the rotamer choices to be changed to. We compare this method with simulated annealing (SA) and motivate and present an improved method, which we call reference energy extremal optimization (REEO). REEO uses reference energies to convert a problem with a structured local-energy profile to one with more random profile, and extremal optimization proves to be extremely efficient for the latter problem. We show in detail the large improvement we have achieved using REEO as compared to simulated annealing and discuss a number of other heuristics we have attempted to date. 2008 Wiley Periodicals, Inc.
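
    The power-law site selection is the heart of extremal optimization and is easy to sketch. Below is a generic tau-EO loop on a toy Potts-like random-coupling model standing in for the protein-design energy; tau, the state count, and the energy model are illustrative assumptions, and the reference-energy refinement is omitted.

```python
import numpy as np

rng = np.random.default_rng(2)
n, q, tau = 40, 3, 1.4          # sites, states per site, power-law exponent

# Random symmetric couplings: a Potts-like toy energy replaces the design energy.
J = rng.normal(size=(n, n))
J = (J + J.T) / 2
np.fill_diagonal(J, 0)

def local_energies(s):
    # Local energy of site i: sum of couplings to sites in the same state.
    same = (s[:, None] == s[None, :]).astype(float)
    return (J * same).sum(axis=1)

def powerlaw_rank(k):
    # Pick a rank in 0..k-1 with probability proportional to (rank+1)^(-tau).
    p = np.arange(1, k + 1, dtype=float) ** (-tau)
    return rng.choice(k, p=p / p.sum())

s = rng.integers(0, q, size=n)
best_s, best_E = s.copy(), local_energies(s).sum()
for step in range(5000):
    ranks = np.argsort(local_energies(s))[::-1]  # worst local energy first
    i = ranks[powerlaw_rank(n)]                  # power-law biased site choice
    s[i] = rng.integers(0, q)                    # unconditional move: hallmark of EO
    E = local_energies(s).sum()
    if E < best_E:
        best_s, best_E = s.copy(), E
print("best energy found:", round(best_E, 3))
```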

  10. Simultaneous optimization by neuro-genetic approach for analysis of plant materials by laser induced breakdown spectroscopy

    NASA Astrophysics Data System (ADS)

    Nunes, Lidiane Cristina; da Silva, Gilmare Antônia; Trevizan, Lilian Cristina; Santos Júnior, Dario; Poppi, Ronei Jesus; Krug, Francisco José

    2009-06-01

    A simultaneous optimization strategy based on a neuro-genetic approach is proposed for selecting laser induced breakdown spectroscopy operational conditions for the simultaneous determination of macro-nutrients (Ca, Mg and P), micro-nutrients (B, Cu, Fe, Mn and Zn), Al and Si in plant samples. A laser induced breakdown spectroscopy system equipped with a 10 Hz Q-switched Nd:YAG laser (12 ns, 532 nm, 140 mJ) and an Echelle spectrometer with an intensified charge-coupled device was used. Integration time gate, delay time, amplification gain and number of pulses were optimized. Pellets of spinach leaves (NIST 1570a) were employed as laboratory samples. In order to find a model that could correlate laser induced breakdown spectroscopy operational conditions with compromise conditions giving high peak areas for all elements simultaneously, a Bayesian regularized artificial neural network was employed. Subsequently, a genetic algorithm was applied to find optimal conditions for the neural network model, in an approach called neuro-genetic. A single laser induced breakdown spectroscopy working condition that maximizes the peak areas of all elements simultaneously was obtained with the following optimized parameters: 9.0 µs integration time gate, 1.1 µs delay time, 225 (a.u.) amplification gain and 30 accumulated laser pulses. The proposed approach is a useful and suitable tool for the optimization of such a complex analytical problem.

  11. Optimal multi-step collocation: application to the space-wise approach for GOCE data analysis

    NASA Astrophysics Data System (ADS)

    Reguzzoni, Mirko; Tselfes, Nikolaos

    2009-01-01

    Collocation is widely used in physical geodesy. Its application requires to solve systems with a dimension equal to the number of observations, causing numerical problems when many observations are available. To overcome this drawback, tailored step-wise techniques are usually applied. An example of these step-wise techniques is the space-wise approach to the GOCE mission data processing. The original idea of this approach was to implement a two-step procedure, which consists of first predicting gridded values at satellite altitude by collocation and then deriving the geo-potential spherical harmonic coefficients by numerical integration. The idea was generalized to a multi-step iterative procedure by introducing a time-wise Wiener filter to reduce the highly correlated observation noise. Recent studies have shown how to optimize the original two-step procedure, while the theoretical optimization of the full multi-step procedure is investigated in this work. An iterative operator is derived so that the final estimated spherical harmonic coefficients are optimal with respect to the Wiener-Kolmogorov principle, as if they were estimated by a direct collocation. The logical scheme used to derive this optimal operator can be applied not only in the case of the space-wise approach but, in general, for any case of step-wise collocation. Several numerical tests based on simulated realistic GOCE data are performed. The results show that adding a pre-processing time-wise filter to the two-step procedure of data gridding and spherical harmonic analysis is useful, in the sense that the accuracy of the estimated geo-potential coefficients is improved. This happens because, in its practical implementation, the gridding is made by collocation over local patches of data, while the observation noise has a time-correlation so long that it cannot be treated inside the patch size. Therefore, the multi-step operator, which is in theory equivalent to the two-step operator and to the

  12. Method developments approaches in supercritical fluid chromatography applied to the analysis of cosmetics.

    PubMed

    Lesellier, E; Mith, D; Dubrulle, I

    2015-12-04

    Analyses of complex samples of cosmetics, such as creams or lotions, are generally achieved by HPLC. These analyses often require multistep gradients, due to the presence of compounds with a large range of polarity. For instance, the bioactive compounds may be polar, while the matrix contains lipid components that are rather non-polar; thus cosmetic formulations are usually oil-water emulsions. Supercritical fluid chromatography (SFC) uses mobile phases composed of carbon dioxide and organic co-solvents, allowing good solubility of both the active compounds and the matrix excipients. Moreover, the classical and well-known properties of these mobile phases yield fast analyses and ensure rapid method development. However, due to the large number of stationary phases available for SFC and to the varied additional parameters acting on both retention and separation factors (co-solvent nature and percentage, temperature, backpressure, flow rate, column dimensions and particle size), a simplified approach can be followed to ensure fast method development. First, suitable stationary phases should be carefully selected for an initial screening, and then the other operating parameters can be limited to the co-solvent nature and percentage, keeping the oven temperature and back-pressure constant. To describe simple method development guidelines in SFC, three sample applications are discussed in this paper: UV filters (sunscreens) in sunscreen cream, glyceryl caprylate in eye liner and caffeine in eye serum. Firstly, five stationary phases (ACQUITY UPC(2)) are screened with isocratic elution conditions (10% methanol in carbon dioxide). Complementarity of the stationary phases is assessed based on our spider diagram classification, which compares a large number of stationary phases based on five molecular interactions. Secondly, the one or two best stationary phases are retained for further optimization of mobile phase composition, with isocratic elution conditions or, when

  13. TH-C-BRD-10: An Evaluation of Three Robust Optimization Approaches in IMPT Treatment Planning

    SciTech Connect

    Cao, W; Randeniya, S; Mohan, R; Zaghian, M; Kardar, L; Lim, G; Liu, W

    2014-06-15

    Purpose: Various robust optimization approaches have been proposed to ensure the robustness of intensity modulated proton therapy (IMPT) in the face of uncertainty. In this study, we investigate the performance of three classes of robust optimization approaches with regard to plan optimality and robustness. Methods: Three robust optimization models were implemented in our in-house IMPT treatment planning system: 1) L2 optimization based on worst-case dose; 2) L2 optimization based on a minimax objective; and 3) L1 optimization with constraints on all uncertain doses. The first model was solved by an L-BFGS algorithm, the second by a gradient projection algorithm, and the third by an interior point method. One nominal scenario and eight maximum uncertainty scenarios (proton range over- and undershoot of 3.5%, and setup errors of 5 mm in the x, y, z directions) were considered in optimization. Dosimetric measurements of the optimized plans from the three approaches were compared for four prostate cancer patients retrospectively selected at our institution. Results: For the nominal scenario, all three optimization approaches yielded the same coverage of the clinical target volume (CTV), and the L2 worst-case approach demonstrated better rectum and bladder sparing than the others. For the uncertainty scenarios, the L1 approach resulted in the most robust CTV coverage against uncertainties, while the plans from the L2 worst-case approach were less robust than the others. In addition, we observed that the number of scanning spots with positive MUs from the L2 approaches was approximately twice that from the L1 approach, indicating that L1 optimization may lead to more efficient IMPT delivery. Conclusion: Our study indicated that the L1 approach best preserved target coverage in the face of uncertainty, but its OAR sparing was slightly inferior to the other two approaches.

  14. Optimal control of wave-packets: a semiclassical approach

    NASA Astrophysics Data System (ADS)

    Darío Guerrero, Rubén; Arango, Carlos A.; Reyes, Andrés

    2014-02-01

    We studied the optimal quantum control of a molecular rotor in tilted laser fields using the time-sliced Herman-Kluk propagator for the evaluation of the optimal pulse and the light-dipole interaction as the control mechanism. The proposed methodology was used to study the effects of an optimal pulse on the evolution of a wave-packet in a double-well potential and in the effective potential of a molecular rotor in a collinear tilted fields setup. The amplitude and frequency of the control pulse were obtained in such a way that the transition probability between two rotational wave-packets was maximised.

  15. Flower pollination algorithm: A novel approach for multiobjective optimization

    NASA Astrophysics Data System (ADS)

    Yang, Xin-She; Karamanoglu, Mehmet; He, Xingshi

    2014-09-01

    Multiobjective design optimization problems require multiobjective optimization techniques to solve, and it is often very challenging to obtain high-quality Pareto fronts accurately. In this article, the recently developed flower pollination algorithm (FPA) is extended to solve multiobjective optimization problems. The proposed method is used to solve a set of multiobjective test functions and two bi-objective design benchmarks, and a comparison of the proposed algorithm with other algorithms has been made, which shows that the FPA is efficient with a good convergence rate. Finally, the importance for further parametric studies and theoretical analysis is highlighted and discussed.
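
    For readers unfamiliar with the mechanics, a single-objective FPA core is sketched below (the article's contribution is the multiobjective extension, which is omitted): global pollination takes Lévy-flight steps toward the current best flower, local pollination mixes two random flowers, and a switch probability p chooses between them. The test function and all constants are illustrative.

```python
import numpy as np
from math import gamma, pi, sin

rng = np.random.default_rng(3)

def levy(dim, beta=1.5):
    # Mantegna's algorithm for Levy-flight step lengths.
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, dim)
    v = rng.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

def objective(x):
    return float(np.sum(x ** 2))     # toy function to minimize

n, dim, p = 25, 5, 0.8               # flowers, dimensions, switch probability
pop = rng.uniform(-5, 5, (n, dim))
fit = np.array([objective(x) for x in pop])
g = pop[fit.argmin()].copy()         # current best flower

for t in range(200):
    for i in range(n):
        if rng.random() < p:         # global pollination: Levy step toward best
            cand = pop[i] + levy(dim) * (g - pop[i])
        else:                        # local pollination: mix two random flowers
            j, k = rng.choice(n, 2, replace=False)
            cand = pop[i] + rng.random() * (pop[j] - pop[k])
        f = objective(cand)
        if f < fit[i]:               # greedy replacement
            pop[i], fit[i] = cand, f
            if f < objective(g):
                g = cand.copy()
print("best value:", objective(g))
```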

  16. A decision theoretic approach to optimization of multiple testing procedures.

    PubMed

    Lisovskaja, Vera; Burman, Carl-Fredrik

    2015-01-01

    This paper focuses on the concept of optimizing a multiple testing procedure (MTP) with respect to a predefined utility function. The class of Bonferroni-based closed testing procedures, which includes, for example, (weighted) Holm, fallback, gatekeeping, and recycling/graphical procedures, is used in this context. Numerical algorithms for calculating expected utility for some MTPs in this class are given. The obtained optimal procedures, as well as the gain resulting from performing an optimization are then examined in a few, but informative, examples. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Modeling and multi-response optimization of pervaporation of organic aqueous solutions using desirability function approach.

    PubMed

    Cojocaru, C; Khayet, M; Zakrzewska-Trznadel, G; Jaworska, A

    2009-08-15

    A factorial design of experiments and the desirability function approach have been applied for multi-response optimization of a pervaporation separation process. Two organic aqueous solutions were considered as model mixtures: water/acetonitrile and water/ethanol. Two responses, total permeate flux and organic selectivity, were employed in the multi-response optimization of pervaporation. The effects of three experimental factors (feed temperature, initial concentration of organic compound in the feed solution, and downstream pressure) on the pervaporation responses were investigated. The experiments were performed according to a 2³ full factorial experimental design. The factorial models were obtained from the experimental design and validated statistically by analysis of variance (ANOVA). The spatial representations of the response functions were drawn together with the corresponding contour line plots. The factorial models were used to develop the overall desirability function, and overlap contour plots were presented to identify the desirability zone and determine the optimum point. The optimal operating conditions were found to be, for the water/acetonitrile mixture, a feed temperature of 55 degrees C, an initial concentration of 6.58% and a downstream pressure of 13.99 kPa, and for the water/ethanol mixture, a feed temperature of 55 degrees C, an initial concentration of 4.53% and a downstream pressure of 9.57 kPa. Under these optimum conditions, an improvement of both total permeate flux and selectivity was observed experimentally.
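
    The desirability step itself is simple to sketch. Below, two toy first-order models in coded factors stand in for the fitted 2³ factorial models of flux and selectivity; each response is mapped to [0, 1] with a one-sided Derringer-Suich transform, and their geometric mean is maximized on a grid. The model coefficients and desirability bounds are illustrative assumptions, not the paper's fits.

```python
import numpy as np

def flux(x):
    # Illustrative first-order factorial model in coded factors, not the paper's fit.
    T, c, P = x
    return 5.0 + 1.2 * T + 0.6 * c - 0.8 * P + 0.3 * T * c

def selectivity(x):
    T, c, P = x
    return 40.0 + 4.0 * T - 2.5 * c + 1.5 * P - 1.0 * T * P

def d_larger(y, lo, hi):
    # One-sided Derringer-Suich desirability: 0 below lo, 1 above hi.
    return np.clip((y - lo) / (hi - lo), 0.0, 1.0)

grid = np.linspace(-1, 1, 21)        # coded factor levels
best_D, best_x = -1.0, None
for T in grid:
    for c in grid:
        for P in grid:
            x = (T, c, P)
            # Overall desirability = geometric mean of the two responses.
            D = np.sqrt(d_larger(flux(x), 3.0, 8.0) *
                        d_larger(selectivity(x), 30.0, 50.0))
            if D > best_D:
                best_D, best_x = D, x
print("optimum (coded factors):", best_x, "overall desirability:", round(best_D, 3))
```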

  18. Evaluation of multi-algorithm optimization approach in multi-objective rainfall-runoff calibration

    NASA Astrophysics Data System (ADS)

    Shafii, M.; de Smedt, F.

    2009-04-01

    Calibration of rainfall-runoff models is one of the issues in which hydrologists have been interested over the past decades. Because of the multi-objective nature of rainfall-runoff calibration, and due to advances in computational power, population-based optimization techniques are becoming increasingly popular for multi-objective calibration schemes. In recent years, such methods have been shown to be powerful search methods for this purpose, especially when there are many calibration parameters. However, the application of these methods is often criticized on the grounds that it is not possible to develop a single algorithm which is always efficient for different problems. Therefore, more recent efforts have focused on running multiple optimization algorithms simultaneously to overcome this drawback. This paper applies one of the most recent population-based multi-algorithm approaches, named AMALGAM, to multi-objective rainfall-runoff calibration of a distributed hydrological model, WetSpa. This algorithm merges the strengths of different optimization algorithms and has thus proven to be more efficient than other methods. To evaluate this, the next step of this study is to compare the results of this paper with those previously reported using a standard multi-objective evolutionary algorithm.

  19. Optimization of the ASPN Process to Bright Nitriding of Woodworking Tools Using the Taguchi Approach

    NASA Astrophysics Data System (ADS)

    Walkowicz, J.; Staśkiewicz, J.; Szafirowicz, K.; Jakrzewski, D.; Grzesiak, G.; Stępniak, M.

    2013-02-01

    The subject of the research is optimization of the parameters of the Active Screen Plasma Nitriding (ASPN) process for high speed steel planing knives used in woodworking. The Taguchi approach was applied to develop the plan of experiments and to elaborate the obtained experimental results. The optimized ASPN parameters were: process duration, composition and pressure of the gaseous atmosphere, substrate bias voltage and substrate temperature. The results of the optimization procedure were verified by the tools' behavior in the sharpening operation performed under normal industrial conditions. The ASPN technology proved to be very suitable for nitriding woodworking planing tools, which, because of their specific geometry, in particular their extremely sharp wedge angles, could not be successfully nitrided using the conventional direct current plasma nitriding method. The research also showed that the values of the fracture toughness coefficient K Ic correlate with the maximum spalling depths of the cutting edge measured after sharpening, and may therefore be used as a measure of nitrided planing knife quality. Based on this criterion, the optimum parameters of the ASPN process for nitriding high speed planing knives were determined.

  20. Geometric approach to optimal nonequilibrium control: Minimizing dissipation in nanomagnetic spin systems

    NASA Astrophysics Data System (ADS)

    Rotskoff, Grant M.; Crooks, Gavin E.; Vanden-Eijnden, Eric

    2017-01-01

    Optimal control of nanomagnets has become an urgent problem for the field of spintronics as technological tools approach thermodynamically determined limits of efficiency. In complex, fluctuating systems, such as nanomagnetic bits, finding optimal protocols is challenging, requiring detailed information about the dynamical fluctuations of the controlled system. We provide a physically transparent derivation of a metric tensor for which the length of a protocol is proportional to its dissipation. This perspective simplifies nonequilibrium optimization problems by recasting them in a geometric language. We then describe a numerical method, an instance of geometric minimum action methods, that enables computation of geodesics even when the number of control parameters is large. We apply these methods to two models of nanomagnetic bits: a Landau-Lifshitz-Gilbert description of a single magnetic spin controlled by two orthogonal magnetic fields, and a two-dimensional Ising model in which the field is spatially controlled. These calculations reveal nontrivial protocols for bit erasure and reversal, providing important, experimentally testable predictions for ultra-low-power computing.

  1. Optimization of Antioxidant Potential of Penicillium granulatum Bainier by Statistical Approaches

    PubMed Central

    Chandra, Priyanka; Arora, Daljit Singh

    2012-01-01

    A three-step optimization strategy, which included the classical one-factor-at-a-time method and different statistical approaches (Plackett-Burman design and response surface methodology), was applied to optimize the antioxidant potential of Penicillium granulatum. Antioxidant activity was assayed by different procedures and compared with total phenolic content. First, different carbon and nitrogen sources were screened by classical methods, which revealed sucrose and NaNO3 to be the most suitable. In the second step, a Plackett-Burman design also identified sucrose and NaNO3 as the most significant. In the third step, response surface analysis showed 4.5% sucrose, 0.1% NaNO3, and an incubation temperature of 25°C to be the optimal conditions. Under these conditions, the antioxidant potential assayed through different procedures was 78.2%, 70.1%, and 78.9% scavenging effect for DPPH radical, ferrous ion, and nitric oxide ion, respectively. The reducing power showed an absorbance of 1.6, with 68.5% activity for the FRAP assay. PMID:23724323

  2. Fast engineering optimization: A novel highly effective control parameterization approach for industrial dynamic processes.

    PubMed

    Liu, Ping; Li, Guodong; Liu, Xinggao

    2015-09-01

    Control vector parameterization (CVP) is an important approach to engineering optimization of industrial dynamic processes. However, its major defect, the low optimization efficiency caused by repeatedly solving the differential equations in the generated nonlinear programming (NLP) problem, limits its wide application. A novel, highly effective control parameterization approach, fast-CVP, is proposed to improve optimization efficiency for industrial dynamic processes; it employs the costate gradient formula and a fast approximate scheme for solving the differential equations in the dynamic process simulation. Three well-known engineering optimization benchmark problems for industrial dynamic processes are used as illustrations. The results show that the proposed fast approach saves at least 90% of the computation time compared with the traditional CVP method, demonstrating the effectiveness of the proposed fast engineering optimization approach for industrial dynamic processes.
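
    Plain CVP (without the paper's fast approximation) can be sketched in a few lines: parameterize the control as piecewise-constant over N stages, integrate the ODE stage by stage, and let an NLP solver adjust the stage values. The single-reaction benchmark below is a standard illustrative problem, not necessarily one of the paper's three benchmarks.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

N, T = 10, 1.0                       # control stages, time horizon

def simulate(u_stages):
    # Integrate the stage ODEs with the control held constant on each stage.
    x = np.array([1.0, 0.0])
    for k, u in enumerate(u_stages):
        rhs = lambda t, s: [-(u + 0.5 * u ** 2) * s[0], u * s[0]]
        sol = solve_ivp(rhs, [k * T / N, (k + 1) * T / N], x, rtol=1e-8)
        x = sol.y[:, -1]
    return x

# NLP over the stage values: maximize final x2 subject to control bounds.
res = minimize(lambda u: -simulate(u)[1],
               x0=np.ones(N), bounds=[(0.0, 5.0)] * N, method="L-BFGS-B")
print("objective:", -res.fun)
print("optimal stage controls:", np.round(res.x, 3))
```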

  3. A General Multidisciplinary Turbomachinery Design Optimization system Applied to a Transonic Fan

    NASA Astrophysics Data System (ADS)

    Nemnem, Ahmed Mohamed Farid

    The blade geometry design process is integral to the development and advancement of compressors and turbines in gas generators and aeroengines. A new airfoil section design capability has been added to an open source parametric 3D blade design tool. Curvature of the meanline is controlled using B-splines to create the airfoils: the curvature is analytically integrated to derive the angles, and the meanline is obtained by integrating the angles. A smooth thickness distribution is then added to the airfoil to guarantee a smooth shape while maintaining a prescribed thickness distribution. A leading edge B-spline definition has also been implemented to achieve customized airfoil leading edges, guaranteeing smoothness with parametric eccentricity and droop. An automated turbomachinery design and optimization system has been created. An existing splittered transonic fan is used as a test and reference case; this design is more general than a conventional design, giving access to other design methodologies. The whole mechanical and aerodynamic design loop is automated for the optimization process. The flow path and geometrical properties of the rotor are initially created using the axi-symmetric design and analysis code (T-AXI). The main and splitter blades are parametrically designed with the geometry builder (3DBGB) using the newly added curvature features. The solid model of the rotor sector with periodic boundaries, combining the main blade and splitter and including the hub, fillets and tip clearance, is created using MATLAB code directly connected to SolidWorks. A mechanical optimization is performed with DAKOTA (developed by DOE) to reduce the mass of the blades while keeping maximum stress as a constraint with a safety factor. A genetic algorithm followed by numerical-gradient optimization strategies is used in the mechanical optimization. The splittered transonic fan blade mass is reduced by 2.6% while constraining the maximum

  4. A methodological integrated approach to optimize a hydrogeological engineering work

    NASA Astrophysics Data System (ADS)

    Loperte, A.; Satriani, A.; Bavusi, M.; Cerverizzo, G.

    2012-04-01

    The geoelectrical survey applied to hydraulic engineering is well known in the literature. However, despite the large number of successful applications, the use of geophysics is still often not considered, for several reasons: poor knowledge of its potential performance, difficulties in practical implementation, and cost limitations. In this work, an integrated study of non-invasive (geoelectrical) and direct surveys is described, aimed at identifying a subsoil foundation on which a watertight concrete structure could be set up to protect the purifier of Senise, a small town in the Basilicata region (Southern Italy). The purifier, used by several villages, is located in a particularly dangerous hydrogeological position, as it is very close to the Sinni river, which has been impounded for many years by the Monte Cotugno dam. During the rainiest periods, the river could flood the purifier, causing the drainage of waste waters into the Monte Cotugno artificial lake. The purifier is located in Pliocene-Calabrian clay and clay-marly formations covered by a layer, about 10 m thick, of alluvial gravelly-sandy materials carried by the Sinni river. The electrical resistivity tomography acquired with the Wenner-Schlumberger array proved meaningful for identifying the potential depth of the impermeable clays with high accuracy. In particular, the geoelectrical acquisition, oriented along the long side of the purifier, was carried out using a multielectrode system with 48 electrodes spaced 2 m apart, giving an achievable investigation depth of about 15 m. The subsequent direct surveys confirmed this depth, so that it was possible to set up the concrete foundation structure precisely to protect the purifier. It is worth noting that the use of this methodological approach allowed remarkable economic savings, as it made it possible to correct the wrong information, regarding the depth of the impermeable clays, previously

  5. A functional approach to geometry optimization of complex systems

    NASA Astrophysics Data System (ADS)

    Maslen, P. E.

    A quadratically convergent procedure is presented for the geometry optimization of complex systems, such as biomolecules and molecular complexes. The costly evaluation of the exact Hessian is avoided by expanding the density functional to second order in both nuclear and electronic variables, and then searching for the minimum of the quadratic functional. The dependence of the functional on the choice of nuclear coordinate system is described, and illustrative geometry optimizations using Cartesian and internal coordinates are presented for Taxol™.

  6. Simulation-Based Approach for Site-Specific Optimization of Hydrokinetic Turbine Arrays

    NASA Astrophysics Data System (ADS)

    Sotiropoulos, F.; Chawdhary, S.; Yang, X.; Khosronejad, A.; Angelidis, D.

    2014-12-01

    A simulation-based approach has been developed to enable site-specific optimization of tidal and current turbine arrays in real-life waterways. The computational code is based on the St. Anthony Falls Laboratory Virtual StreamLab (VSL3D), which is able to carry out high-fidelity simulations of turbulent flow and sediment transport processes in rivers and streams taking into account the arbitrary geometrical complexity characterizing natural waterways. The computational framework can be used either in turbine-resolving mode, to take into account all geometrical details of the turbine, or with the turbines parameterized as actuator disks or actuator lines. Locally refined grids are employed to dramatically increase the resolution of the simulation and enable efficient simulations of multi-turbine arrays. Turbine/sediment interactions are simulated using the coupled hydro-morphodynamic module of VSL3D. The predictive capabilities of the resulting computational framework will be demonstrated by applying it to simulate turbulent flow past a tri-frame configuration of hydrokinetic turbines in a rigid-bed turbulent open channel flow as well as turbines mounted on mobile bed open channels to investigate turbine/sediment interactions. The utility of the simulation-based approach for guiding the optimal development of turbine arrays in real-life waterways will also be discussed and demonstrated. This work was supported by NSF grant IIP-1318201. Simulations were carried out at the Minnesota Supercomputing Institute.

  7. A multiobjective optimization approach for combating Aedes aegypti using chemical and biological alternated step-size control.

    PubMed

    Dias, Weverton O; Wanner, Elizabeth F; Cardoso, Rodrigo T N

    2015-11-01

    Dengue epidemics, among the most important viral diseases worldwide, can be prevented by combating the transmission vector Aedes aegypti. In support of this aim, this article analyzes the Dengue vector control problem in a multiobjective optimization approach, in which the intention is to minimize both social and economic costs, using a dynamic mathematical model representing the mosquito population. The problem consists of finding optimal alternated step-size control policies combining chemical control (via application of insecticides) and biological control (via insertion of sterile males produced by irradiation). All the optimal policies consist of applying insecticides just at the beginning of the season and then keeping the mosquitoes at an acceptable level by spreading a small amount of sterile males into the environment. The optimization model analysis is driven by the use of genetic algorithms. Finally, a statistical test shows that the multiobjective approach is effective in capturing the effect of variations in the cost parameters. Using the proposed methodology, it is therefore possible to find, in a single run, for a given decision maker, the optimal number of days and the respective amounts in which each control strategy must be applied, according to the tradeoff between using more insecticide with fewer transmitting mosquitoes or more sterile males with more transmitting mosquitoes.

  8. The effective energy transformation scheme as a special continuation approach to global optimization with application to molecular conformation

    SciTech Connect

    Wu, Zhijun

    1996-11-01

    This paper discusses a generalization of the function transformation scheme for global energy minimization applied to the molecular conformation problem. A mathematical theory for the method as a special continuation approach to global optimization is established. We show that the method can transform a nonlinear objective function into a class of gradually deformed, but "smoother" or "easier" functions. An optimization procedure can then be applied to the new functions successively, to trace their solutions back to the original function. Two types of transformation are defined: isotropic and anisotropic. We show that both transformations can be applied to a large class of nonlinear partially separable functions including energy functions for molecular conformation. Methods to compute the transformation for these functions are given.

  9. A two-stage sequential linear programming approach to IMRT dose optimization

    PubMed Central

    Zhang, Hao H; Meyer, Robert R; Wu, Jianzhou; Naqvi, Shahid A; Shi, Leyuan; D’Souza, Warren D

    2010-01-01

    The conventional IMRT planning process involves two stages in which the first stage consists of fast but approximate idealized pencil beam dose calculations and dose optimization and the second stage consists of discretization of the intensity maps followed by intensity map segmentation and a more accurate final dose calculation corresponding to physical beam apertures. Consequently, there can be differences between the presumed dose distribution corresponding to pencil beam calculations and optimization and a more accurately computed dose distribution corresponding to beam segments that takes into account collimator-specific effects. IMRT optimization is computationally expensive and has therefore led to the use of heuristic (e.g., simulated annealing and genetic algorithms) approaches that do not encompass a global view of the solution space. We modify the traditional two-stage IMRT optimization process by augmenting the second stage via an accurate Monte-Carlo based kernel-superposition dose calculations corresponding to beam apertures combined with an exact mathematical programming based sequential optimization approach that uses linear programming (SLP). Our approach was tested on three challenging clinical test cases with multileaf collimator constraints corresponding to two vendors. We compared our approach to the conventional IMRT planning approach, a direct-aperture approach and a segment weight optimization approach. Our results in all three cases indicate that the SLP approach outperformed the other approaches, achieving superior critical structure sparing. Convergence of our approach is also demonstrated. Finally, our approach has also been integrated with a commercial treatment planning system and may be utilized clinically. PMID:20071764

  10. New Optimality Approach for Photosynthetic Parameterization in Terrestrial Biosphere Models: Development and Testing of VIC-VEO

    NASA Astrophysics Data System (ADS)

    Quebbeman, J.; Ramirez, J.

    2016-12-01

    Photosynthesis is intricately linked to the carbon, energy, and water cycles of our planet, and yet is commonly estimated in terrestrial biosphere models using grossly simplified descriptions and parameterizations. As our climate changes, vegetation both adapts and acclimates in ways not captured in these traditional modeling schemes. One of the most ubiquitous models of photosynthesis is the Farquhar, von Caemmerer, and Berry model, which considers at a minimum, two systems of so-called light and dark reactions. Critical parameters for each of these systems include the maximum rate of electron transport (Jmax), and the maximum rate of carboxylation (Vcmax), respectively. Although critical, these parameters are commonly either fixed at a reference temperature using estimates from literature, or follow simplified rules independent of climate. Here, we consider a new optimality approach allocating available nitrogen within the leaf such that the expectation of carbon assimilation is maximized. Further, the new approach responds dynamically to the environment, including non-stomatal down-regulation during water shortages. This new approach is discussed along with a case study replicating seasonal variability of photosynthetic capacity. Further, we introduce the VIC-VEO (VEgetal Optimality) model that implements the photosynthetic optimality approach, which is then applied across the Colorado River Basin in a water supply vulnerability case study. Results of this study show significant differences in both assimilation and transpiration between static and dynamic parameterizations of the photosynthetic system, emphasizing the need for more robust photosynthetic parameterization schemes in contemporary terrestrial biosphere models, such as utilizing optimality approaches.

  11. Applying genetic algorithms to space optimization decision of farmland bio-energy intensive utilization

    NASA Astrophysics Data System (ADS)

    Wang, Fang; Li, Xia; Zhuo, Li; Tao, Haiyan; Xia, Lihua

    2008-10-01

    The development of intensive utilization of farmland bio-energy is an important way to address China's emerging energy and environmental issues. Given that the spatial distribution of bio-energy is scattered rather than continuous, the intensive utilization of farmland bio-energy differs from that of traditional energy sources such as coal, oil and natural gas. Estimating the biomass and its spatial distribution, and studying the spatial optimization, are key steps for practical applications of intensive bio-energy utilization. Based on a case study conducted in Guangdong province, China, this paper provides a framework that quickly estimates available biomass and analyzes its distribution pattern within the established NPP model; it also builds primary collection ranges using Thiessen polygons at different scales. A key contribution is the application of genetic algorithms (GA) to the spatial optimization and decision problem of intensive bio-energy utilization. The results show that the integrated GA-GIS model for resolving domain-point supply and field demand has clear advantages. A key finding is that the model simulation results are strongly affected by the modifiable areal unit problem (MAUP). When Thiessen polygons with a 10 km proximal threshold are established as the primary collection scope of bio-energy, the fitness value is maximized in the optimization process. In short, the optimized model can provide an effective solution to farmland bio-energy spatial optimization.

  12. D-optimal design applied to binding saturation curves of an enkephalin analog in rat brain

    SciTech Connect

    Verotta, D.; Petrillo, P.; La Regina, A.; Rocchetti, M.; Tavani, A.

    1988-01-01

    The D-optimal design, a minimal-sample design that minimizes the volume of the joint confidence region for the parameters, was used to evaluate binding parameters in a saturation curve, with a view to reducing the number of experimental points without losing accuracy in the binding parameter estimates. Binding saturation experiments were performed in rat brain crude membrane preparations with the opioid mu-selective ligand [3H]-(D-Ala2, MePhe4, Gly-ol5)enkephalin (DAGO), using a sequential procedure. The first experiment consisted of a wide-range saturation curve, which confirmed that [3H]-DAGO binds only one class of specific sites plus non-specific sites, and gave information on the experimental range and first estimates of the binding affinity (Ka), capacity (Bmax) and non-specific constant (k). On this basis the D-optimal design was computed, and sequential experiments were performed, each covering a wide-range traditional saturation curve, the D-optimal design, and a split of the D-optimal design with the addition of 2 points (+/- 15% of the central point). No appreciable differences in parameter estimates or their accuracy were obtained with these designs. Thus, sequential experiments based on the D-optimal design seem a valid method for accurate determination of binding parameters, using far fewer points with no loss in parameter estimation accuracy. 25 references, 2 figures, 3 tables.
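
    The computation behind a locally D-optimal design can be sketched directly. For a one-site-plus-nonspecific model y = Bmax*x/(Kd + x) + k*x (a Kd parameterization is assumed here for simplicity), the saturated three-point design maximizes det(J'J), where J is the Jacobian with respect to (Bmax, Kd, k) at the first-stage estimates. The parameter values and concentration grid below are illustrative, not the DAGO estimates.

```python
import numpy as np
from itertools import combinations

# First-stage parameter estimates (illustrative, not the DAGO values).
Bmax, Kd, k = 100.0, 1.0, 2.0

def jacobian_row(x):
    # Sensitivities of y = Bmax*x/(Kd + x) + k*x at concentration x.
    return np.array([x / (Kd + x),                # dy/dBmax
                     -Bmax * x / (Kd + x) ** 2,   # dy/dKd
                     x])                          # dy/dk

candidates = np.geomspace(0.05, 20.0, 40)         # feasible concentration grid
best_det, best_design = -np.inf, None
for design in combinations(candidates, 3):        # saturated 3-point designs
    J = np.array([jacobian_row(x) for x in design])
    d = np.linalg.det(J.T @ J)                    # D-criterion
    if d > best_det:
        best_det, best_design = d, design
print("locally D-optimal concentrations:", np.round(best_design, 3))
```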

  13. Bioassay case study applying the maximin D-optimal design algorithm to the four-parameter logistic model.

    PubMed

    Coffey, Todd

    2015-01-01

    Cell-based potency assays play an important role in the characterization of biopharmaceuticals but they can be challenging to develop in part because of greater inherent variability than other analytical methods. Our objective is to select concentrations on a dose-response curve that will enhance assay robustness. We apply the maximin D-optimal design concept to the four-parameter logistic (4 PL) model and then derive and compute the maximin D-optimal design for a challenging bioassay using curves representative of assay variation. The selected concentration points from this 'best worst case' design adequately fit a variety of 4 PL shapes and demonstrate improved robustness.

  14. Inverse computational feedback optimization imaging applied to time varying changes in a homogeneous structure.

    PubMed

    Evans, Daniel J; Manwaring, Mark L; Soule, Terence

    2008-01-01

    The technique of inverse computational feedback optimization imaging allows the imaging of varying tissue without the continuous need for complex imaging systems such as MRI or CT. Our method trades complex imaging equipment for computing power. The objective is to use a baseline scan from an imaging system along with finite element method computational software to calculate the physically measurable parameters (such as voltage or temperature). As the physically measurable parameters change, the computational model is iteratively run until it matches the measured values. Optimization routines are implemented to accelerate the process of finding the new values. Presented is a computational model demonstrating how the inverse imaging technique would work with a simple homogeneous sample containing a circular structure; it demonstrates the ability to locate an object with only a few point measurements. The presented computational model uses swarm optimization techniques to help find the object location from the measured data (which in this case is voltage).

  15. A niching genetic algorithm applied to optimize a SiC-bulk crystal growth system

    NASA Astrophysics Data System (ADS)

    Su, Juan; Chen, Xuejiang; Li, Yuan; Pons, Michel; Blanquet, Elisabeth

    2017-06-01

    A niching genetic algorithm (NGA) is presented to optimize a SiC bulk crystal growth system based on physical vapor transport (PVT). The NGA, based on a clearing mechanism, and its combination with a heat transfer model for SiC crystal growth are described in detail. Three inverse problems for optimization of the growth system were then solved with the NGA. First, the radius of the blind hole was optimized to decrease the radial temperature gradient along the substrate while the center temperature on the substrate surface is fixed at 2500 K. Second, insulation materials with anisotropic thermal conductivities were selected to obtain growth rates as high as 600, 800 and 1000 μm/h. Finally, the density of the coils was rearranged to minimize the temperature variation in the SiC powder. All the results are analyzed and discussed.
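
    The clearing mechanism named above can be sketched independently of the heat-transfer model: after ranking by fitness, each niche of radius sigma keeps only its best kappa individuals, and the rest lose their selection rights for that generation, which preserves several optima in parallel. The multimodal toy function, the reduced GA operators, and all constants below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
f = lambda x: np.sin(5 * np.pi * x) ** 2   # peaks at x = 0.1, 0.3, 0.5, 0.7, 0.9
sigma, kappa, n = 0.08, 1, 60              # niche radius, niche capacity, pop size

pop = rng.uniform(0, 1, n)
for gen in range(100):
    fit = f(pop)
    order = np.argsort(fit)[::-1]          # best first (maximization)
    winners = []
    for i in order:
        # Clear individual i if its niche already holds kappa winners.
        if sum(abs(pop[i] - pop[w]) < sigma for w in winners) < kappa:
            winners.append(i)
    # Only niche winners reproduce; mutation stands in for full GA operators.
    parents = pop[winners]
    pop = np.clip(rng.choice(parents, n) + rng.normal(0, 0.02, n), 0, 1)
print("points near distinct peaks:", np.round(np.sort(pop[f(pop) > 0.9])[:8], 2))
```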

  16. Optimization of the lattice function in the planar undulator applied for terahertz FEL oscillators

    NASA Astrophysics Data System (ADS)

    Yang, Lei; Qin, Bin; Yang, Jun; Liu, Xia-Ling; Tan, Ping; Hu, Tong-Ning

    2014-03-01

    Since the beta function of the electron beam within the undulator has a great influence on the power gain of the free electron laser (FEL), optimization of the undulator lattice is important. In this paper, the transfer matrix of the planar undulator is obtained from the differential equations of the electron motion. Based on this, the lattice function of the planar undulator in the terahertz FEL oscillator proposed by Huazhong University of Science and Technology (HUST-FEL) is optimized, and expressions for the average beta function are derived. The accuracy of the optimization result was well confirmed by the numerical method, and the range of applicability of this analytical method is given. Finally, the emittance growth in the horizontal direction due to the attenuation of the magnetic field is discussed.

  17. Analysis of modern optimal control theory applied to plasma position and current control in TFTR

    SciTech Connect

    Firestone, M.A.

    1981-09-01

    The strong compression TFTR discharge has been segmented into regions where linear dynamics can approximate the plasma's interaction with the OH and EF power supply systems. The dynamic equations for these regions are utilized within the linear optimal control theory framework to provide active feedback gains to control the plasma position and current. Methods are developed to analyze and quantitatively evaluate the quality of control in a nonlinear, more realistic simulation. Tests are made of optimal control theory's assumptions and requirements, and the feasibility of this method for TFTR is assessed.
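
    The linear-optimal-control step can be illustrated in miniature: given a linearized segment model x' = Ax + Bu and quadratic weights Q and R, solve the continuous algebraic Riccati equation and form the feedback gain K = R^{-1} B' P. The 2x2 system below is an illustrative stand-in, not the TFTR plasma model.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],
              [2.0, -0.5]])        # unstable toy dynamics x' = Ax + Bu
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])           # state weighting
R = np.array([[0.1]])              # control weighting

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)    # optimal state feedback u = -Kx
print("feedback gain K:", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```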

  18. A procedure for specimen optimization applied to material testing in plasticity with the virtual fields method

    NASA Astrophysics Data System (ADS)

    Rossi, Marco; Badaloni, Michele; Lava, Pascal; Debruyne, Dimitri; Pierron, Fabrice

    2016-10-01

    The paper presents a numerical procedure to design an optimal geometry for specimens that will be used to identify the hardening behaviour of sheet metals with the virtual fields method (VFM). The procedure relies on a test simulator able to generate synthetic images similar to the ones obtained during an actual test. Digital image correlation (DIC) was used to achieve the strain field, then the constitutive parameters are identified with the VFM and compared with the reference ones. A parametric study was conducted on different types of notched specimens and an optimal configuration was identified eventually.

  19. Parallel genetic algorithm with population-based sampling approach to discrete optimization under uncertainty

    NASA Astrophysics Data System (ADS)

    Subramanian, Nithya

    Optimization under uncertainty accounts for design variables and external parameters or factors with probabilistic distributions instead of fixed deterministic values; it enables problem formulations that might maximize or minimize an expected value while satisfying constraints using probabilities. For discrete optimization under uncertainty, a Monte Carlo Sampling (MCS) approach enables high-accuracy estimation of expectations but it also results in high computational expense. The Genetic Algorithm (GA) with a Population-Based Sampling (PBS) technique enables optimization under uncertainty with discrete variables at a lower computational expense than using Monte Carlo sampling for every fitness evaluation. Population-Based Sampling uses fewer samples in the exploratory phase of the GA and a larger number of samples when `good designs' start emerging over the generations. This sampling technique therefore reduces the computational effort spent on `poor designs' found in the initial phase of the algorithm. Parallel computation evaluates the expected value of the objective and constraints in parallel to facilitate reduced wall-clock time. A customized stopping criterion is also developed for the GA with Population-Based Sampling. The stopping criterion requires that the design with the minimum expected fitness value to have at least 99% constraint satisfaction and to have accumulated at least 10,000 samples. The average change in expected fitness values in the last ten consecutive generations is also monitored. The optimization of composite laminates using ply orientation angle as a discrete variable provides an example to demonstrate further developments of the GA with Population-Based Sampling for discrete optimization under uncertainty. The focus problem aims to reduce the expected weight of the composite laminate while treating the laminate's fiber volume fraction and externally applied loads as uncertain quantities following normal distributions. Construction of
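
    The sampling schedule is the essence of PBS and is easy to sketch: estimate each design's expected fitness by Monte Carlo with a small budget in early generations and a larger one once good designs emerge. The discrete variables, noise model, budget schedule, and toy objective below are all illustrative assumptions, not the thesis problem.

```python
import numpy as np

rng = np.random.default_rng(5)
choices = np.arange(0, 91, 15)             # discrete "ply angle" options

def expected_fitness(design, n_samples):
    # Monte Carlo estimate of the expectation under an uncertain load factor.
    loads = rng.normal(1.0, 0.1, n_samples)
    base = np.sum(np.cos(np.radians(design)) ** 2)   # toy weight-like objective
    return float(np.mean(base * loads))

pop = rng.choice(choices, size=(20, 4))    # 20 designs, 4 discrete variables
for gen in range(30):
    budget = 10 if gen < 10 else 200       # PBS: few samples early, many later
    fit = np.array([expected_fitness(d, budget) for d in pop])
    parents = pop[np.argsort(fit)[:10]]    # keep designs with lowest expectation
    children = parents[rng.integers(0, 10, 20)].copy()
    mutate = rng.random((20, 4)) < 0.1
    children[mutate] = rng.choice(choices, mutate.sum())
    pop = children

fit = np.array([expected_fitness(d, 5000) for d in pop])  # well-sampled final check
print("best design:", pop[fit.argmin()], "expected fitness:", round(fit.min(), 4))
```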

  20. Using genomic prediction to characterize environments and optimize prediction accuracy in applied breeding data

    USDA-ARS?s Scientific Manuscript database

    Simulation and empirical studies of genomic selection (GS) show accuracies sufficient to generate rapid annual genetic gains. It also shifts the focus from the evaluation of lines to the evaluation of alleles. Consequently, new methods should be developed to optimize the use of large historic multi-...

  1. A correlation consistency based multivariate alarm thresholds optimization approach.

    PubMed

    Gao, Huihui; Liu, Feifei; Zhu, Qunxiong

    2016-11-01

    Different alarm thresholds could generate different alarm data, resulting in different correlations. A new multivariate alarm thresholds optimization methodology based on the correlation consistency between process data and alarm data is proposed in this paper. Interpretative structural modeling is adopted to select the key variables. For the key variables, the correlation coefficients of the process data are calculated by Pearson correlation analysis, while the correlation coefficients of the alarm data are calculated by kernel density estimation. To ensure correlation consistency, the objective function is established as the sum of the absolute differences between these two types of correlations. The optimal thresholds are obtained using a particle swarm optimization algorithm. A case study of the Tennessee Eastman process is given to demonstrate the effectiveness of the proposed method.
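
    A minimal sketch of the correlation-consistency objective follows: candidate thresholds generate binary alarm sequences, and the objective sums the absolute differences between process-data and alarm-data correlations. For brevity, plain Pearson correlation of the alarm series stands in for the paper's kernel-density estimate, and a random search stands in for particle swarm optimization; the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
mix = np.array([[1.0, 0.6, 0.2],
                [0.0, 0.8, 0.5],
                [0.0, 0.0, 0.7]])
X = rng.standard_normal((5000, 3)) @ mix        # correlated process data

def objective(thresholds):
    alarms = (X > thresholds).astype(float)     # binary high-alarm sequences
    r_proc = np.corrcoef(X, rowvar=False)
    r_alarm = np.corrcoef(alarms, rowvar=False)
    iu = np.triu_indices(X.shape[1], k=1)       # distinct variable pairs
    return np.abs(r_proc[iu] - r_alarm[iu]).sum()

best_t, best_f = None, np.inf
for _ in range(2000):                           # random-search stand-in for PSO
    t = rng.uniform(X.min(axis=0), X.max(axis=0))
    f = objective(t)
    if np.isfinite(f) and f < best_f:           # skip degenerate all-0/all-1 alarms
        best_t, best_f = t, f
print(best_t, best_f)
```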

  2. Computational approach to quantum encoder design for purity optimization

    SciTech Connect

    Yamamoto, Naoki; Fazel, Maryam

    2007-07-15

    In this paper, we address the problem of designing a quantum encoder that maximizes the minimum output purity of a given decohering channel, where the minimum is taken over all possible pure inputs. This problem is cast as a max-min optimization problem with a rank constraint on an appropriately defined matrix variable. The problem is computationally very hard because it is nonconvex with respect to both the objective function (output purity) and the rank constraint. Despite this difficulty, we provide a tractable computational algorithm that produces the exact optimal solution for a codespace of dimension 2. Moreover, this algorithm is easily extended to cover the general class of codespaces, in which case the solution is suboptimal in the sense that the suboptimized output purity serves as a lower bound on the exact optimal purity. The algorithm consists of a sequence of semidefinite programs and can be performed easily. Two typical quantum error channels are investigated to illustrate the effectiveness of our method.
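
    The abstract describes the algorithm as a sequence of semidefinite programs. The snippet below shows a generic SDP building block of that kind in CVXPY (maximize a linear functional of a unit-trace PSD matrix variable); the cost matrix is a random stand-in, not the paper's purity objective or its actual iteration.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
C = (A + A.T) / 2                        # symmetric stand-in cost matrix

X = cp.Variable((n, n), PSD=True)        # positive semidefinite matrix variable
prob = cp.Problem(cp.Maximize(cp.trace(C @ X)),
                  [cp.trace(X) == 1])    # unit-trace (density-matrix-like)
prob.solve()
print(prob.value)
```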

  3. Hybrid Quantum-Classical Approach to Quantum Optimal Control.

    PubMed

    Li, Jun; Yang, Xiaodong; Peng, Xinhua; Sun, Chang-Pu

    2017-04-14

    A central challenge in quantum computing is to identify more computational problems for which utilization of quantum resources can offer significant speedup. Here, we propose a hybrid quantum-classical scheme to tackle the quantum optimal control problem. We show that the most computationally demanding part of gradient-based algorithms, namely, computing the fitness function and its gradient for a control input, can be accomplished by the process of evolution and measurement on a quantum simulator. By posing queries to and receiving answers from the quantum simulator, classical computing devices update the control parameters until an optimal control solution is found. To demonstrate the quantum-classical scheme in experiment, we use a seven-qubit nuclear magnetic resonance system, on which we have succeeded in optimizing state preparation without involving classical computation of the large Hilbert space evolution.
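
    The division of labor can be sketched in a few lines: a classical loop queries a device for fitness values (here a classically simulated two-level system rather than the authors' seven-qubit NMR setup) and updates the control amplitudes by finite-difference gradient ascent. All names and parameters are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
psi0 = np.array([1, 0], dtype=complex)                   # initial state |0>
target = np.array([1, 1], dtype=complex) / np.sqrt(2)    # target state |+>
dt, n_slices = 0.1, 20

def fidelity(controls):
    """Stand-in for the quantum device: evolve, then 'measure' the overlap."""
    psi = psi0
    for u in controls:
        psi = expm(-1j * dt * (sz + u * sx)) @ psi
    return abs(np.vdot(target, psi)) ** 2

controls = np.zeros(n_slices)
eps, lr = 1e-3, 0.5
for _ in range(200):                                     # classical update loop
    grad = np.array([(fidelity(controls + eps * e) -
                      fidelity(controls - eps * e)) / (2 * eps)
                     for e in np.eye(n_slices)])         # gradient via queries
    controls += lr * grad                                # ascend the fidelity
print(fidelity(controls))
```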

  4. Hybrid Quantum-Classical Approach to Quantum Optimal Control

    NASA Astrophysics Data System (ADS)

    Li, Jun; Yang, Xiaodong; Peng, Xinhua; Sun, Chang-Pu

    2017-04-01

    A central challenge in quantum computing is to identify more computational problems for which utilization of quantum resources can offer significant speedup. Here, we propose a hybrid quantum-classical scheme to tackle the quantum optimal control problem. We show that the most computationally demanding part of gradient-based algorithms, namely, computing the fitness function and its gradient for a control input, can be accomplished by the process of evolution and measurement on a quantum simulator. By posing queries to and receiving answers from the quantum simulator, classical computing devices update the control parameters until an optimal control solution is found. To demonstrate the quantum-classical scheme in experiment, we use a seven-qubit nuclear magnetic resonance system, on which we have succeeded in optimizing state preparation without involving classical computation of the large Hilbert space evolution.

  5. An approach for evaluating the integrity of fuel applied in Innovative Nuclear Energy Systems

    NASA Astrophysics Data System (ADS)

    Nakae, Nobuo; Ozawa, Takayuki; Ohta, Hirokazu; Ogata, Takanari; Sekimoto, Hiroshi

    2014-03-01

    One of the important issues in the study of Innovative Nuclear Energy Systems is evaluating the integrity of the fuel used in such systems. An approach for evaluating fuel integrity is discussed here, based on the procedure currently used in the integrity evaluation of fast reactor fuel. The fuel failure modes determining fuel lifetime were reviewed, and fuel integrity was analyzed and compared with the failure criteria.

  6. A new multiresponse optimization approach in combination with a D-Optimal experimental design for the determination of biogenic amines in fish by HPLC-FLD.

    PubMed

    Herrero, A; Sanllorente, S; Reguera, C; Ortiz, M C; Sarabia, L A

    2016-11-16

    A new strategy for multiresponse optimization in conjunction with a D-optimal design for simultaneously optimizing a large number of experimental factors is proposed. The procedure is applied to the determination of biogenic amines (histamine, putrescine, cadaverine, tyramine, tryptamine, 2-phenylethylamine, spermine and spermidine) in swordfish by HPLC-FLD after extraction with an acid and subsequent derivatization with dansyl chloride. First, the extraction from a solid matrix and the derivatization of the extract are optimized. Ten experimental factors involved in both stages are studied, seven of them at two levels and the remaining three at three levels; the use of a D-optimal design makes it possible to optimize the ten experimental variables while reducing the experimental effort needed by a factor of 67 and guaranteeing the quality of the estimates. A model with 19 coefficients, which includes those corresponding to the main effects and two possible interactions, is fitted to the peak area of each amine. The validated models are then used to predict the response (peak area) for the 3456 experiments of the complete factorial design. The variability among peak areas ranges from 13.5 for 2-phenylethylamine to 122.5 for spermine, which shows the strong and amine-dependent effect of the pretreatment on the responses. Percentiles are then calculated from the peak areas of each amine. As the experimental conditions are in conflict, the optimal solution for the multiresponse optimization is chosen from among those which have all the responses greater than a certain percentile for all the amines. The developed procedure reaches decision limits down to 2.5 μg/L for cadaverine or 497 μg/L for histamine in solvent, and 0.07 mg/kg and 14.81 mg/kg in fish (probability of false positive equal to 0.05), respectively.
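
    The percentile-based selection step lends itself to a compact sketch: given a matrix of predicted peak areas (candidate experiments by amines), keep the settings whose responses all exceed a common per-amine percentile, taking the highest percentile that is still feasible. The predicted-area matrix below is a random stand-in for the 3456 model predictions.

```python
import numpy as np

rng = np.random.default_rng(1)
areas = rng.gamma(shape=2.0, scale=50.0, size=(3456, 8))   # experiments x amines

def best_settings(areas, percentiles=range(95, 0, -5)):
    for p in percentiles:                                  # highest feasible first
        cutoffs = np.percentile(areas, p, axis=0)          # per-amine cutoff
        ok = np.all(areas >= cutoffs, axis=1)              # all responses above it
        if ok.any():
            return p, np.flatnonzero(ok)
    return None, np.array([], dtype=int)

p, chosen = best_settings(areas)
print(f"highest common percentile: {p}; candidate experiments: {chosen[:10]}")
```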

  7. An iterative approach to optimize change classification in SAR time series data

    NASA Astrophysics Data System (ADS)

    Boldt, Markus; Thiele, Antje; Schulz, Karsten; Hinz, Stefan

    2016-10-01

    The detection of changes using remote sensing imagery has become a broad field of research, with many approaches for many different applications. Besides the simple detection of changes between at least two images acquired at different times, analyses that aim at the change type or category are at least equally important. In this study, an approach for the semi-automatic classification of change segments is presented. A sparse dataset is considered to ensure fast and simple applicability in practice. The dataset consists of 15 high-resolution (HR) TerraSAR-X (TSX) amplitude images acquired over a period of one year (11/2013 to 11/2014). The scene contains the airport of Stuttgart (GER) and its surroundings, including urban, rural, and suburban areas. Time-series imagery offers the advantage of analyzing the change frequency of selected areas. In this study, the focus is on small, frequently changing regions such as parking areas, construction sites, and collection points, which consist of high-activity (HA) change objects. For each HA change object, suitable features are extracted, and a k-means clustering is applied as the categorization step. The resulting clusters are finally compared to a previously introduced knowledge-based class catalogue, which is modified until an optimal class description results. In other words, the subjective understanding of the scene semantics is optimized against the reality given by the data. In this way, even a sparse dataset containing only amplitude imagery can be evaluated without requiring comprehensive training datasets. Falsely defined classes might be rejected, classes defined too coarsely might be divided into sub-classes, and, conversely, classes initially defined too narrowly might be merged. An optimal classification results when the combination of previously defined key indicators (e.g., the number of clusters per class) reaches an optimum.
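
    A minimal sketch of the categorization step, under the assumption of generic per-object features (the actual features are not specified here): standardized features are clustered with k-means for several candidate class counts, whose results would then be compared against the class catalogue.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# one row per HA change object: e.g. area, mean amplitude, change frequency
features = rng.random((200, 3))
X = StandardScaler().fit_transform(features)

for k in range(2, 7):                    # candidate numbers of classes
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(k, np.bincount(labels))        # cluster sizes vs. the class catalogue
```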

  8. On the combination of c- and D-optimal designs: General approaches and applications in dose-response studies.

    PubMed

    Holland-Letz, Tim

    2017-03-01

    Dose-response modeling in areas such as toxicology is often conducted using a parametric approach. While estimation of parameters is usually one of the goals, often the main aim of the study is the estimation of quantities derived from the parameters, such as the ED50 dose. From the view of statistical optimal design theory such an objective corresponds to a c-optimal design criterion. Unfortunately, c-optimal designs often create practical problems, and furthermore commonly do not allow actual estimation of the parameters. It is therefore useful to consider alternative designs which show good c-performance, while still being applicable in practice and allowing reasonably good general parameter estimation. In effect, using optimal design terminology this means that a reasonable performance regarding the D-criterion is expected as well. In this article, we propose several approaches to the task of combining c- and D-efficient designs, such as using mixed information functions or setting minimum requirements regarding either c- or D-efficiency, and show how to algorithmically determine optimal designs in each case. We apply all approaches to a standard situation from toxicology, and obtain a much better balance between c- and D-performance. Next, we investigate how to adapt the designs to different parameter values. Finally, we show that the methodology used here is not just limited to the combination of c- and D-designs, but can also be used to handle more general constraint situations such as limits on the cost of an experiment.

  9. Modeling Training Site Vegetation Coverage Probability with a Random Optimizing Procedure: An Artificial Neural Network Approach.

    DTIC Science & Technology

    1998-05-01

    Modeling Training Site Vegetation Coverage Probability with a Random Optimizing Procedure: An Artificial Neural Network Approach, by Biing T. Guan, George Z. Gertner, and Alan B... ...coverage based on past coverage. Approach: A literature survey was conducted to identify artificial neural network analysis techniques applicable for

  10. Optimized approach to decision fusion of heterogeneous data for breast cancer diagnosis

    SciTech Connect

    Jesneck, Jonathan L.; Nolte, Loren W.; Baker, Jay A.; Floyd, Carey E.; Lo, Joseph Y.

    2006-08-15

    As more diagnostic testing options become available to physicians, it becomes more difficult to combine the various types of medical information in order to optimize the overall diagnosis. To improve diagnostic performance, here we introduce an approach to optimize a decision-fusion technique to combine heterogeneous information, such as from different modalities, feature categories, or institutions. For classifier comparison we used two performance metrics: the area under the receiver operating characteristic (ROC) curve (AUC) and the normalized partial area under the curve (pAUC). This study used four classifiers: linear discriminant analysis (LDA), artificial neural network (ANN), and two variants of our decision-fusion technique, AUC-optimized (DF-A) and pAUC-optimized (DF-P) decision fusion. We applied each of these classifiers with 100-fold cross-validation to two heterogeneous breast cancer data sets: one of mass lesion features and a much more challenging one of microcalcification lesion features. For the calcification data set, DF-A outperformed the other classifiers in terms of AUC (p<0.02) and achieved AUC = 0.85 ± 0.01. The DF-P surpassed the other classifiers in terms of pAUC (p<0.01) and reached pAUC = 0.38 ± 0.02. For the mass data set, DF-A outperformed both the ANN and the LDA (p<0.04) and achieved AUC = 0.94 ± 0.01. Although for this data set there were no statistically significant differences among the classifiers' pAUC values (pAUC = 0.57 ± 0.07 to 0.67 ± 0.05, p>0.10), the DF-P did significantly improve specificity versus the LDA at both 98% and 100% sensitivity (p<0.04). In conclusion, decision fusion directly optimized clinically significant performance measures, such as AUC and pAUC, and sometimes outperformed two well-known machine-learning techniques when applied to two different breast cancer data sets.
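
    The two metrics can be computed from any scored classifier output. The sketch below computes the full AUC and a normalized partial AUC restricted to a high-sensitivity band (TPR >= 0.9 is an assumed band; one common convention for pAUC over a sensitivity range is the mean specificity within it). Data are synthetic.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 500)
scores = y + rng.normal(0.0, 1.2, 500)          # toy classifier scores

auc = roc_auc_score(y, scores)                  # full ROC area

fpr, tpr, _ = roc_curve(y, scores)
band = tpr >= 0.9                               # high-sensitivity band (assumed)
pauc = np.trapz(1.0 - fpr[band], tpr[band]) / (1.0 - 0.9)
print(f"AUC = {auc:.3f}, normalized pAUC (TPR >= 0.9) = {pauc:.3f}")
```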

  11. Electromagnetic integral equation approach based on contraction operator and solution optimization in Krylov subspace

    NASA Astrophysics Data System (ADS)

    Singer, B. Sh.

    2008-12-01

    The paper presents a new code for modelling electromagnetic fields in complicated 3-D environments and provides examples of the code application. The code is based on an integral equation (IE) for the scattered electromagnetic field, presented in the form used by the Modified Iterative Dissipative Method (MIDM). This IE possesses contraction properties that allow it to be solved iteratively. As a result, for an arbitrary earth model and any source of the electromagnetic field, the sequence of approximations converges to the solution at any frequency. The system of linear equations that represents a finite-dimensional counterpart of the continuous IE is derived using a projection definition of the system matrix. According to this definition, the matrix is calculated by integrating the Green's function over the `source' and `receiver' cells of the numerical grid. Such a system preserves the contraction properties of the continuous equation and can be solved using the same iterative technique. The condition number of the system matrix and, therefore, the convergence rate depend only on the physical properties of the model under consideration. In particular, these parameters remain independent of the numerical grid used for the simulation. Applied to the system of linear equations, the iterative perturbation approach generates a sequence of approximations converging to the solution. The number of iterations is significantly reduced by finding the best possible approximant inside the Krylov subspace, which spans either all accumulated iterates or, if memory must be conserved, only a limited number of the most recent iterates. Optimization significantly reduces the number of iterates and weakens their dependence on the lateral contrast of the model. Unlike more traditional conjugate gradient approaches, the iterations are terminated when the approximate solution reaches the requested relative accuracy. The number of the required iterates, which for simple
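
    The Krylov-subspace idea can be illustrated with an off-the-shelf GMRES solve of a contraction-type system (I - K)x = b, terminated at a requested relative accuracy rather than a fixed iteration count. The random kernel is a stand-in for the discretized integral operator; note that the tolerance keyword is rtol in recent SciPy and tol in older releases.

```python
import numpy as np
from scipy.sparse.linalg import gmres

rng = np.random.default_rng(0)
n = 200
K = 0.2 * rng.standard_normal((n, n)) / np.sqrt(n)   # contraction-like kernel
A = np.eye(n) - K                                    # (I - K) x = b
b = rng.standard_normal(n)

x, info = gmres(A, b, restart=30, rtol=1e-8)         # best Krylov approximant
print(info, np.linalg.norm(A @ x - b) / np.linalg.norm(b))
```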

  12. Contributions to the study of optimal biphasic pulse shapes for functional electric stimulation: an analytical approach using the excitation functional.

    PubMed

    Suárez-Antola, Roberto

    2007-01-01

    An analytical approach to threshold problems in functional electric stimulation and pacing is proposed, framed in terms of the excitation functional. This functional can be applied to nerve, muscle, and myocardium stimulation by external electrodes. An optimal shape for a biphasic pulse is found using the criteria of minimum energy dissipated in biological tissues and total charge compensation between the excitatory cathodic and the compensatory anodic phases. The method can be further developed and applied to other threshold problems in functional electric stimulation and pacing.

  13. An Optimal Design Approach to Criterion-Referenced Computerized Testing

    ERIC Educational Resources Information Center

    Wiberg, Marie

    2003-01-01

    A criterion-referenced computerized test is expressed as a statistical hypothesis-testing problem. This means that it can be studied using the theory of optimal design. The power function of the statistical test is used as a criterion function when designing the test. A formal proof is provided showing that all items should have the same item…

  14. A Regression Design Approach to Optimal and Robust Spacing Selection.

    DTIC Science & Technology

    1981-07-01

    Southern Methodist University, Department of Statistics; approved for public release and sale, distribution unlimited. ...such as the Cauchy, where A is a constant multiple of the identity. In fact, for the Cauchy distribution asymptotically optimal spacing sequences for

  15. An Optimal Foraging Approach to Information Seeking and Use.

    ERIC Educational Resources Information Center

    Sandstrom, Pamela Effrein

    1994-01-01

    Explores optimal foraging theory, derived from evolutionary ecology, for its potential to clarify and operationalize studies of scholarly communication. Metaphorical parallels between subsistence foragers and scholarly information seekers are drawn. Hypotheses to test the models are recommended. The place of ethnographic and bibliometric…

  16. A Simulation of Optimal Foraging: The Nuts and Bolts Approach.

    ERIC Educational Resources Information Center

    Thomson, James D.

    1980-01-01

    Presents a mechanical model for an ecology laboratory that introduces the concept of optimal foraging theory. Describes the physical model which includes a board studded with protruding machine bolts that simulate prey, and blindfolded students who simulate either generalist or specialist predator types. Discusses the theoretical model and data…

  17. Bayesian approach for counting experiment statistics applied to a neutrino point source analysis

    NASA Astrophysics Data System (ADS)

    Bose, D.; Brayeur, L.; Casier, M.; de Vries, K. D.; Golup, G.; van Eijndhoven, N.

    2013-12-01

    In this paper we present a model-independent analysis method following Bayesian statistics to analyse data from a generic counting experiment, and apply it to the search for neutrinos from point sources. We discuss a test statistic defined within a Bayesian framework that will be used in the search for a signal. In case no signal is found, we derive an upper limit without introducing approximations. The Bayesian approach allows us to obtain the full probability density function for both the background and the signal rate; as such, we have direct access to any signal upper limit. The upper limit derivation compares directly with a frequentist approach and is robust in the case of low-count observations. Furthermore, it also allows one to account for previous upper limits obtained by other analyses via prior information, without the ad hoc application of trial factors. To investigate the validity of the presented Bayesian approach, we have applied this method to the public IceCube 40-string configuration data for 10 nearby blazars and we have obtained a flux upper limit, which is in agreement with the upper limits determined via a frequentist approach. Furthermore, the upper limit obtained compares well with the previously published result of IceCube, using the same data set.
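
    A minimal numerical sketch of such a Bayesian upper limit for a counting experiment, assuming a flat prior on the signal rate and a known expected background; the observed count, background expectation, and credible level are illustrative.

```python
import numpy as np
from scipy.stats import poisson

n_obs, b = 5, 3.2                        # observed events, expected background
s = np.linspace(0, 30, 3001)             # grid over the signal rate
post = poisson.pmf(n_obs, s + b)         # flat prior: posterior ~ likelihood
post /= np.trapz(post, s)                # normalize the posterior density
cdf = np.cumsum(post) * (s[1] - s[0])    # cumulative posterior
s_up = s[np.searchsorted(cdf, 0.90)]     # 90% credible upper limit
print(f"90% upper limit on the signal rate: {s_up:.2f}")
```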

  18. Optimization of liquid scintillation measurements applied to smears and aqueous samples collected in industrial environments

    NASA Astrophysics Data System (ADS)

    Chapon, Arnaud; Pigrée, Gilbert; Putmans, Valérie; Rogel, Gwendal

    The search for low-energy β contamination in industrial environments requires liquid scintillation counting. This indirect measurement method demands careful control from sampling to the measurement itself. In this paper, we therefore focus on the definition of a measurement method, as generic as possible, for the characterization of both smears and aqueous samples. That includes the choice of consumables, sampling methods, the optimization of counting parameters, and the definition of energy windows by maximizing a figure of merit. Detection limits are then calculated considering these optimized parameters. For this purpose, we used PerkinElmer Tri-Carb counters. Nevertheless, except for those relating to parameters specific to PerkinElmer, most of the results presented here can be extended to other counters.
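
    A sketch of the energy-window optimization: scan candidate windows and keep the one maximizing the common liquid-scintillation figure of merit FOM = E²/B, with E the counting efficiency and B the background rate in the window. Both spectra below are synthetic stand-ins, and the exact FOM definition used in the paper may differ.

```python
import numpy as np

channels = np.arange(1024)
source = np.exp(-0.5 * ((channels - 300) / 80.0) ** 2)   # beta-like spectrum
source /= source.sum()                                   # efficiency per channel
background = 0.01 + 2e-5 * channels                      # background per channel

best_fom, best_window = 0.0, None
for lo in range(0, 1024, 32):
    for hi in range(lo + 32, 1025, 32):
        eff = source[lo:hi].sum()                        # counting efficiency E
        bkg = background[lo:hi].sum()                    # background rate B
        fom = eff ** 2 / bkg                             # FOM = E^2 / B
        if fom > best_fom:
            best_fom, best_window = fom, (lo, hi)
print(best_fom, best_window)
```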

  19. Optimization techniques applied to passive measures for in-orbit spacecraft survivability

    NASA Technical Reports Server (NTRS)

    Mog, Robert A.; Price, D. Marvin

    1991-01-01

    Spacecraft designers have always been concerned about the effects of meteoroid impacts on mission safety. The engineering solution to this problem has generally been to erect a bumper or shield placed outboard from the spacecraft wall to disrupt/deflect the incoming projectiles. Spacecraft designers have a number of tools at their disposal to aid in the design process. These include hypervelocity impact testing, analytic impact predictors, and hydrodynamic codes. Analytic impact predictors generally provide the best quick-look estimate of design tradeoffs. The most complete way to determine the characteristics of an analytic impact predictor is through optimization of the protective structures design problem formulated with the predictor of interest. Space Station Freedom protective structures design insight is provided through the coupling of design/material requirements, hypervelocity impact phenomenology, meteoroid and space debris environment sensitivities, optimization techniques and operations research strategies, and mission scenarios. Major results are presented.

  20. Behavioral Language Interventions for Children with Autism: Comparing Applied Verbal Behavior and Naturalistic Teaching Approaches

    PubMed Central

    LeBlanc, Linda A; Esch, John; Sidener, Tina M; Firth, Amanda M

    2006-01-01

    Several important behavioral intervention models have been developed for teaching language to children with autism and two are compared in this paper. Professionals adhering to Skinner's conceptualization of language refer to their curriculum and intervention programming as applied verbal behavior (AVB). Those primarily focused on developing and using strategies embedded in natural settings that promote generalization refer to their interventions as naturalistic teaching approaches (NTAs). The purpose of this paper is to describe each approach and discuss similarities and differences in terms of relevant dimensions of stimulus control. The discussion includes potential barriers to translation of terminology between the two approaches that we feel can be overcome to allow better communication and collaboration between the two communities. Common naturalistic teaching procedures are described and a Skinnerian conceptualization of these learning events is provided. PMID:22477343

  1. Assessing switchability for biosimilar products: modelling approaches applied to children's growth.

    PubMed

    Belleli, Rossella; Fisch, Roland; Renard, Didier; Woehling, Heike; Gsteiger, Sandro

    2015-01-01

    The present paper describes two statistical modelling approaches that have been developed to demonstrate switchability from the original recombinant human growth hormone (rhGH) formulation (Genotropin(®) ) to a biosimilar product (Omnitrope(®) ) in children suffering from growth hormone deficiency. Demonstrating switchability between rhGH products is challenging because the process of growth varies with the age of the child and across children. The first modelling approach aims at predicting individual height measured at several time-points after switching to the biosimilar. The second modelling approach provides an estimate of the deviation from the overall growth rate after switching to the biosimilar, which can be regarded as an estimate of switchability. The results after applying these approaches to data from a randomized clinical trial are presented. The accuracy and precision of the predictions made using the first approach and the small deviation from switchability estimated with the second approach provide sufficient evidence to conclude that switching from Genotropin(®) to Omnitrope(®) has a very small effect on growth, which is neither statistically significant nor clinically relevant.

  2. Emetine optimally facilitates nascent chain puromycylation and potentiates the ribopuromycylation method (RPM) applied to inert cells.

    PubMed

    David, Alexandre; Bennink, Jack R; Yewdell, Jonathan W

    2013-03-01

    We previously described the ribopuromycylation method (RPM) to visualize and quantitate translating ribosomes in fixed and permeabilized cells by standard immunofluorescence. RPM is based on puromycylation of nascent chains bound to translating ribosomes, followed by detection of the puromycylated nascent chains with a puromycin-specific mAb. We now demonstrate that emetine optimally enhances nascent chain puromycylation, and describe a modified RPM protocol for identifying ribosome-bound nascent chains in metabolically inert permeabilized cells.

  3. Optimal Runge-Kutta schemes for discontinuous Galerkin space discretizations applied to wave propagation problems

    NASA Astrophysics Data System (ADS)

    Toulorge, T.; Desmet, W.

    2012-02-01

    We study the performance of methods of lines combining discontinuous Galerkin spatial discretizations and explicit Runge-Kutta time integrators, with the aim of deriving optimal Runge-Kutta schemes for wave propagation applications. We review relevant Runge-Kutta methods from the literature, and consider schemes of order q from 3 to 4, with up to q + 4 stages, for optimization. From a user point of view, computational efficiency involves the choice of the best combination of mesh and numerical method; two scenarios are defined. In the first one, the element size is totally free, and an 8-stage, fourth-order Runge-Kutta scheme is found to minimize a cost measure depending on both accuracy and stability. In the second one, the elements are assumed to be constrained to such a small size by geometrical features of the computational domain that accuracy is disregarded. We then derive one 7-stage, third-order scheme and one 8-stage, fourth-order scheme that maximize the stability limit. The performance of the three new schemes is thoroughly analyzed, and the benefits are illustrated with two examples. For each of these Runge-Kutta methods, we provide the coefficients for a 2N-storage implementation, along with the information needed by the user to employ them optimally.
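
    The 2N-storage format itself is compact enough to sketch: only the solution vector and one accumulator are kept, regardless of the number of stages. The coefficients below are the classic 3-stage, third-order Williamson set (quoted from memory and worth verifying); the paper's 7- and 8-stage schemes use the same storage pattern with their own coefficients.

```python
import numpy as np

# Williamson low-storage RK3 coefficients (double-check before production use)
A = np.array([0.0, -5.0 / 9.0, -153.0 / 128.0])
B = np.array([1.0 / 3.0, 15.0 / 16.0, 8.0 / 15.0])

def rk_2n_step(q, rhs, dt):
    """One time step using only q and a single accumulator dq (2N storage)."""
    dq = np.zeros_like(q)
    for a, b in zip(A, B):
        dq = a * dq + dt * rhs(q)
        q = q + b * dq
    return q

q = np.array([1.0])                 # toy ODE dq/dt = -q
for _ in range(100):
    q = rk_2n_step(q, lambda u: -u, 0.01)
print(q)                            # ~ exp(-1)
```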

  4. Successful aging at work: an applied study of selection, optimization, and compensation through impression management.

    PubMed

    Abraham, J D; Hansson, R O

    1995-03-01

    Although many abilities basic to human performance appear to decrease with age, research has shown that job performance does not generally show comparable declines. Baltes and Baltes (1990) have proposed a model of successful aging involving Selection, Optimization, and Compensation (SOC), that may help explain how individuals maintain important competencies despite age-related losses. In the present study, involving a total of 224 working adults ranging in age from 40 to 69 years, occupational measures of Selection, Optimization, and Compensation through impression management (Compensation-IM) were developed. The three measures were factorially distinct and reliable (Cronbach's alpha > .80). Moderated regression analyses indicated that: (1) the relationship between Selection and self-reported ability/performance maintenance increased with age (p ≤ .05); and (2) the relationship between both Optimization and Compensation-IM and goal attainment (i.e., importance-weighted ability/performance maintenance) increased with age (p ≤ .05). Results suggest that the SOC model of successful aging may be useful in explaining how older workers can maintain important job competencies. Correlational evidence also suggests, however, that characteristics of the job, workplace, and individual may mediate the initiation and effectiveness of SOC behaviors.

  5. Optimization of storage tank locations in an urban stormwater drainage system using a two-stage approach.

    PubMed

    Wang, Mingming; Sun, Yuanxiang; Sweetapple, Chris

    2017-08-25

    Storage is important for flood mitigation and non-point source pollution control. However, finding a cost-effective design scheme for storage tanks is very complex. This paper presents a two-stage optimization framework to find an optimal scheme for storage tanks using the storm water management model (SWMM). The objectives are to minimize flooding, total suspended solids (TSS) load, and storage cost. The framework includes two modules: (i) the analytical module, which evaluates and ranks the flooding nodes with the analytic hierarchy process (AHP) using two indicators (flood depth and flood duration), and then obtains a preliminary scheme by calculating two efficiency indicators (flood reduction efficiency and TSS reduction efficiency); (ii) the iteration module, which obtains an optimal scheme using a generalized pattern search (GPS) method based on the preliminary scheme generated by the analytical module. The proposed approach was applied to a catchment in CZ city, China, to test its capability in choosing design alternatives. Different rainfall scenarios are considered to test its robustness. The results demonstrate that the optimization framework is feasible and that the optimization starting from the preliminary scheme is fast. The optimized scheme is better than the preliminary scheme at reducing runoff and pollutant loads for a given storage cost. The multi-objective optimization framework presented in this paper may be useful in finding the best scheme of storage tanks or low impact development (LID) controls. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Optimizing technology investments: a broad mission model approach

    NASA Technical Reports Server (NTRS)

    Shishko, R.

    2003-01-01

    A long-standing problem in NASA is how to allocate scarce technology development resources across advanced technologies in order to best support a large set of future potential missions. Within NASA, two orthogonal paradigms have received attention in recent years: the real-options approach and the broad mission model approach. This paper focuses on the latter.

  7. Optimizing technology investments: a broad mission model approach

    NASA Technical Reports Server (NTRS)

    Shishko, R.

    2003-01-01

    A long-standing problem in NASA is how to allocate scarce technology development resources across advanced technologies in order to best support a large set of future potential missions. Within NASA, two orthogonal paradigms have received attention in recent years: the real-options approach and the broad mission model approach. This paper focuses on the latter.

  8. A subgradient approach for constrained binary optimization via quantum adiabatic evolution

    NASA Astrophysics Data System (ADS)

    Karimi, Sahar; Ronagh, Pooya

    2017-08-01

    An outer approximation method has been proposed in the literature for solving the Lagrangian dual of a constrained binary quadratic programming problem via quantum adiabatic evolution. This would be an efficient prescription for solving the Lagrangian dual problem given an ideally noise-free quantum adiabatic system. However, current implementations of quantum annealing systems demand methods that are efficient at handling possible sources of noise. In this paper, we consider a subgradient method for finding an optimal primal-dual pair for the Lagrangian dual of a constrained binary polynomial programming problem. We then study the quadratic stable set (QSS) problem as a case study. We see that this method applied to the QSS problem can be viewed as an instance-dependent penalty-term approach that avoids large penalty coefficients. Finally, we report our experimental results of using the D-Wave 2X quantum annealer and conclude that our approach helps this quantum processor succeed more often in solving these problems compared to the usual penalty-term approaches.
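
    The projected-subgradient skeleton is simple to sketch: the inner unconstrained binary minimization (performed on the annealer in the paper) is brute-forced here for a toy problem, and the multiplier moves along the constraint violation, which is a subgradient of the dual function. Problem data are random stand-ins.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 8
Q = rng.standard_normal((n, n))
Q = (Q + Q.T) / 2                          # objective x^T Q x
a, bound = rng.random(n), 2.0              # constraint a.x <= bound

def inner_min(lam):
    """Unconstrained binary minimization of the Lagrangian (annealer stand-in)."""
    best_x, best_v = None, np.inf
    for bits in itertools.product([0, 1], repeat=n):
        x = np.array(bits)
        v = x @ Q @ x + lam * (a @ x - bound)
        if v < best_v:
            best_x, best_v = x, v
    return best_x

lam = 0.0
for k in range(1, 51):
    x = inner_min(lam)
    g = a @ x - bound                      # constraint violation = subgradient
    lam = max(0.0, lam + g / k)            # projected subgradient step
print(lam, x, bool(a @ x <= bound))
```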

  9. Optimized electron beam writing strategy for fabricating computer-generated holograms based on an effective medium approach.

    PubMed

    Freese, Wiebke; Kämpfe, Thomas; Rockstroh, Werner; Kley, Ernst-Bernhard; Tünnermann, Andreas

    2011-04-25

    Recent research revealed that using the effective medium approach to generate arbitrary multi-phase level computer-generated holograms is a promising alternative to the conventional multi-height level approach. Although this method reduces the fabrication effort using one-step binary lithography, the subwavelength patterning process remains a huge challenge, particularly for large-scale applications. To reduce the writing time on variable shaped electron beam writing systems, an optimized strategy based on an appropriate reshaping of the binary subwavelength structures is illustrated. This strategy was applied to fabricate a three-phase level CGH in the visible range, showing promising experimental results.

  10. A Robust Adaptive Autonomous Approach to Optimal Experimental Design

    NASA Astrophysics Data System (ADS)

    Gu, Hairong

    Experimentation is the fundamental tool of scientific inquiry for understanding the laws governing nature and human behavior. Many complex real-world experimental scenarios, particularly those in quest of prediction accuracy, are difficult to conduct with existing experimental procedures, for two reasons. First, existing experimental procedures require a parametric model to serve as a proxy for the latent data structure or data-generating mechanism at the beginning of an experiment. However, for the experimental scenarios of concern, a sound model is often unavailable before an experiment. Second, such experimental scenarios usually contain a large number of design variables, which potentially leads to a lengthy and costly data-collection cycle. Moreover, existing experimental procedures are unable to optimize large-scale experiments so as to minimize experimental length and cost. Facing these two challenges, the aim of the present study is to develop a new experimental procedure that allows an experiment to be conducted without the assumption of a parametric model while still achieving satisfactory prediction, and that optimizes experimental designs to improve the efficiency of an experiment. The new experimental procedure developed in the present study is named the robust adaptive autonomous system (RAAS). RAAS is a procedure for sequential experiments composed of multiple experimental trials, which performs function estimation, variable selection, reverse prediction, and design optimization on each trial. Directly addressing the challenges in the experimental scenarios of concern, function estimation and variable selection are performed by data-driven modeling methods to generate a predictive model from data collected during the course of an experiment, thus removing the requirement of a parametric model at the beginning of an experiment; design optimization is

  11. An Implementation of a Mathematical Programming Approach to Optimal Enrollments. AIR 2001 Annual Forum Paper.

    ERIC Educational Resources Information Center

    DePaolo, Concetta A.

    This paper explores the application of a mathematical optimization model to the problem of optimal enrollments. The general model, which can be applied to any institution, seeks to enroll the "best" class of students (as defined by the institution) subject to constraints imposed on the institution (e.g., capacity, quality). Topics…

  12. An inverse dynamics approach to trajectory optimization and guidance for an aerospace plane

    NASA Technical Reports Server (NTRS)

    Lu, Ping

    1992-01-01

    The optimal ascent problem for an aerospace plane is formulated as an optimal inverse dynamics problem. Both minimum-fuel and minimax types of performance indices are considered. Some important features of the optimal trajectory and controls are used to construct a nonlinear feedback midcourse controller, which not only greatly simplifies the difficult constrained optimization problem and yields improved solutions, but is also suited for onboard implementation. Robust ascent guidance is obtained by using a combination of feedback compensation and onboard generation of the control through the inverse dynamics approach. Accurate orbital insertion can be achieved with near-optimal control of the rocket through inverse dynamics even in the presence of disturbances.

  13. Optimized setup for integral refractive index direct determination applying digital holographic microscopy by reflection and transmission

    NASA Astrophysics Data System (ADS)

    Frómeta, M.; Moreno, G.; Ricardo, J.; Arias, Y.; Muramatsu, M.; Gomes, L. F.; Palácios, G.; Palácios, F.; Velázquez, H.; Valin, J. L.; Ramirez Q, L.

    2017-03-01

    In this paper the integral refractive index of a microscopic sample was measured directly by applying digital holographic microscopy (DHM), capturing transmission and reflection holograms of the same sample region simultaneously, using Mach-Zehnder and Michelson micro-interferometers for the transmission and reflection hologram capture, and modeling the 3D sample in a medium of known refractive index nm. The system was calibrated using a standard polystyrene sphere of known diameter and refractive index immersed in water, and the method was applied to the determination of the integral refractive index of erythrocytes. The results are in accordance with predictions, with measurement errors on the order of ±0.005 in absolute value.

  14. A genetic algorithm approach in interface and surface structure optimization

    SciTech Connect

    Zhang, Jian

    2010-01-01

    The thesis is divided into two parts. In the first part a global optimization method is developed for interface and surface structure optimization. Two prototype systems are chosen for study: one is Si[001] symmetric tilted grain boundaries and the other is Ag/Au-induced Si(111) surfaces. It is found that the Genetic Algorithm is very efficient in finding the lowest-energy structures in both cases. Not only can existing structures from experiments be reproduced, but many new structures can be predicted using the Genetic Algorithm. It is thus shown that the Genetic Algorithm is an extremely powerful tool for material structure prediction. The second part of the thesis is devoted to the explanation of an experimental observation of thermal radiation from three-dimensional tungsten photonic crystal structures. The experimental results seemed astounding and confusing, yet the theoretical models in the paper revealed the physical insight behind the phenomena and reproduced the experimental results well.

  15. Safe microburst penetration techniques: A deterministic, nonlinear, optimal control approach

    NASA Technical Reports Server (NTRS)

    Psiaki, Mark L.

    1987-01-01

    A relatively large amount of computer time was used for the calculation of an optimal trajectory, but this is subject to reduction with moderate effort. The Deterministic, Nonlinear, Optimal Control algorithm yielded excellent aircraft performance in trajectory tracking for the given microburst. It did so by varying the angle of attack to counteract the lift effects of microburst-induced airspeed variations. Throttle saturation and aerodynamic stall limits were not a problem for the case considered, proving that the aircraft's performance capabilities were not violated by the given wind field. All closed-loop control laws previously considered performed very poorly in comparison, and therefore do not come close to taking full advantage of the aircraft's performance.

  16. A thermodynamic approach to the affinity optimization of drug candidates.

    PubMed

    Freire, Ernesto

    2009-11-01

    High throughput screening and other techniques commonly used to identify lead candidates for drug development usually yield compounds with binding affinities to their intended targets in the mid-micromolar range. The affinity of these molecules needs to be improved by several orders of magnitude before they become viable drug candidates. Traditionally, this task has been accomplished by establishing structure activity relationships to guide chemical modifications and improve the binding affinity of the compounds. As the binding affinity is a function of two quantities, the binding enthalpy and the binding entropy, it is evident that a more efficient optimization would be accomplished if both quantities were considered and improved simultaneously. Here, an optimization algorithm based upon enthalpic and entropic information generated by Isothermal Titration Calorimetry is presented.
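
    Writing out the standard decomposition makes the optimization target explicit (R is the gas constant, T the absolute temperature, and K_d the dissociation constant):

```latex
\Delta G \;=\; \Delta H - T\,\Delta S \;=\; RT \ln K_{\mathrm{d}}
```

    Improving affinity therefore means driving ΔG more negative, through a more favorable binding enthalpy (ΔH < 0), a more favorable binding entropy (ΔS > 0), or both; the calorimetric data indicate which of the two a given chemical modification actually improved.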

  17. A free boundary approach to shape optimization problems

    PubMed Central

    Bucur, D.; Velichkov, B.

    2015-01-01

    The analysis of shape optimization problems involving the spectrum of the Laplace operator, such as isoperimetric inequalities, has seen in recent years a series of interesting developments, essentially as a consequence of the infusion of free boundary techniques. The main focus of this paper is to show how the analysis of a general shape optimization problem of spectral type can be reduced to the analysis of particular free boundary problems. In this survey article, we give an overview of some very recent technical tools, the so-called shape sub- and supersolutions, and show how to use them for the minimization of spectral functionals involving the eigenvalues of the Dirichlet Laplacian, under a volume constraint. PMID:26261362

  18. A free boundary approach to shape optimization problems.

    PubMed

    Bucur, D; Velichkov, B

    2015-09-13

    The analysis of shape optimization problems involving the spectrum of the Laplace operator, such as isoperimetric inequalities, has seen in recent years a series of interesting developments, essentially as a consequence of the infusion of free boundary techniques. The main focus of this paper is to show how the analysis of a general shape optimization problem of spectral type can be reduced to the analysis of particular free boundary problems. In this survey article, we give an overview of some very recent technical tools, the so-called shape sub- and supersolutions, and show how to use them for the minimization of spectral functionals involving the eigenvalues of the Dirichlet Laplacian, under a volume constraint.

  19. Aircraft optimization by a system approach: Achievements and trends

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1992-01-01

    Recently emerging methodology for the optimal design of aircraft, treated as a system of interacting physical phenomena and parts, is examined. The methodology is found to coalesce into methods for hierarchic, non-hierarchic, and hybrid systems, all dependent on sensitivity analysis. A separate category of methods has also evolved independent of sensitivity analysis, and is hence suitable for discrete problems. References and numerical applications are cited. Massively parallel computer processing is seen as an enabling technology for practical implementation of the methodology.

  20. Optimal perfusion during cardiopulmonary bypass: an evidence-based approach.

    PubMed

    Murphy, Glenn S; Hessel, Eugene A; Groom, Robert C

    2009-05-01

    In this review, we summarize the best available evidence to guide the conduct of adult cardiopulmonary bypass (CPB) to achieve "optimal" perfusion. At the present time, there is considerable controversy relating to appropriate management of physiologic variables during CPB. Low-risk patients tolerate mean arterial blood pressures of 50-60 mm Hg without apparent complications, although limited data suggest that higher-risk patients may benefit from mean arterial blood pressures >70 mm Hg. The optimal hematocrit on CPB has not been defined, with large data-based investigations demonstrating that both severe hemodilution and transfusion of packed red blood cells increase the risk of adverse postoperative outcomes. Oxygen delivery is determined by the pump flow rate and the arterial oxygen content and organ injury may be prevented during more severe hemodilutional anemia by increasing pump flow rates. Furthermore, the optimal temperature during CPB likely varies with physiologic goals, and recent data suggest that aggressive rewarming practices may contribute to neurologic injury. The design of components of the CPB circuit may also influence tissue perfusion and outcomes. Although there are theoretical advantages to centrifugal blood pumps over roller pumps, it has been difficult to demonstrate that the use of centrifugal pumps improves clinical outcomes. Heparin coating of the CPB circuit may attenuate inflammatory and coagulation pathways, but has not been clearly demonstrated to reduce major morbidity and mortality. Similarly, no distinct clinical benefits have been observed when open venous reservoirs have been compared to closed systems. In conclusion, there are currently limited data upon which to confidently make strong recommendations regarding how to conduct optimal CPB. There is a critical need for randomized trials assessing clinically significant outcomes, particularly in high-risk patients.

  1. Replica approach to mean-variance portfolio optimization

    NASA Astrophysics Data System (ADS)

    Varga-Haszonits, Istvan; Caccioli, Fabio; Kondor, Imre

    2016-12-01

    We consider the problem of mean-variance portfolio optimization for a generic covariance matrix subject to the budget constraint and the constraint for the expected return, with the application of the replica method borrowed from the statistical physics of disordered systems. We find that the replica symmetry of the solution does not need to be assumed, but emerges as the unique solution of the optimization problem. We also check the stability of this solution and find that the eigenvalues of the Hessian are positive for r = N/T < 1, where N is the dimension of the portfolio and T the length of the time series used to estimate the covariance matrix. At the critical point r = 1 a phase transition takes place. The out-of-sample estimation error blows up at this point as 1/(1 - r), independently of the covariance matrix or the expected return, displaying the universality not only of the critical exponent but also of the critical point. As a conspicuous illustration of the dangers of in-sample estimates, the optimal in-sample variance is found to vanish at the critical point, inversely proportional to the divergent estimation error.
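
    A compact restatement of the scaling claimed above, with E_out denoting the out-of-sample estimation error and σ²_in the optimal in-sample variance (both symbols are shorthand introduced here):

```latex
r = \frac{N}{T}, \qquad
\mathcal{E}_{\mathrm{out}}(r) \;\propto\; \frac{1}{1-r}, \qquad
\sigma^{2}_{\mathrm{in}}(r) \;\propto\; 1-r \qquad (r \to 1^{-})
```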

  2. A Robust Statistics Approach to Minimum Variance Portfolio Optimization

    NASA Astrophysics Data System (ADS)

    Yang, Liusha; Couillet, Romain; McKay, Matthew R.

    2015-12-01

    We study the design of portfolios under a minimum risk criterion. The performance of the optimized portfolio relies on the accuracy of the estimated covariance matrix of the portfolio asset returns. For large portfolios, the number of available market returns is often of similar order to the number of assets, so that the sample covariance matrix performs poorly as a covariance estimator. Additionally, financial market data often contain outliers which, if not correctly handled, may further corrupt the covariance estimation. We address these shortcomings by studying the performance of a hybrid covariance matrix estimator based on Tyler's robust M-estimator and on Ledoit-Wolf's shrinkage estimator while assuming samples with heavy-tailed distribution. Employing recent results from random matrix theory, we develop a consistent estimator of (a scaled version of) the realized portfolio risk, which is minimized by optimizing online the shrinkage intensity. Our portfolio optimization method is shown via simulations to outperform existing methods both for synthetic and real market data.
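
    A reduced sketch of the pipeline: shrinkage covariance estimation followed by budget-constrained minimum-variance weights w ∝ Σ⁻¹1. Plain Ledoit-Wolf replaces the paper's hybrid Tyler/Ledoit-Wolf estimator with online intensity tuning, and the returns are synthetic.

```python
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(0)
T, N = 300, 100                                 # sample length of similar order to N
returns = 0.01 * rng.standard_normal((T, N))

sigma = LedoitWolf().fit(returns).covariance_   # shrinkage covariance estimate
w = np.linalg.solve(sigma, np.ones(N))          # w ~ Sigma^{-1} 1
w /= w.sum()                                    # budget constraint 1'w = 1
print(w[:5], w @ sigma @ w)                     # weights and in-sample risk
```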

  3. Coordinated Target Tracking via a Hybrid Optimization Approach

    PubMed Central

    Wang, Yin; Cao, Yan

    2017-01-01

    Recent advances in computer science and electronics have greatly expanded the capabilities of unmanned aerial vehicles (UAV) in both defense and civil applications, such as moving ground object tracking. Due to the uncertainties of the application environments and objects’ motion, it is difficult to keep the tracked object within the sensor coverage area at all times using a single UAV. Hence, it is necessary to deploy a group of UAVs to improve the robustness of the tracking. This paper investigates the problem of tracking ground moving objects with a group of UAVs using gimbaled sensors under flight dynamic and collision-free constraints. The optimal cooperative tracking path planning problem is solved using an evolutionary optimization technique based on the framework of chemical reaction optimization (CRO). The efficiency of the proposed method was demonstrated through a series of comparative simulations. The results show that the cooperative tracking paths determined by the newly developed method allow for longer sensor coverage time under flight dynamic restrictions and safety conditions. PMID:28264425

  4. Coordinated Target Tracking via a Hybrid Optimization Approach.

    PubMed

    Wang, Yin; Cao, Yan

    2017-02-27

    Recent advances in computer science and electronics have greatly expanded the capabilities of unmanned aerial vehicles (UAV) in both defense and civil applications, such as moving ground object tracking. Due to the uncertainties of the application environments and objects' motion, it is difficult to keep the tracked object within the sensor coverage area at all times using a single UAV. Hence, it is necessary to deploy a group of UAVs to improve the robustness of the tracking. This paper investigates the problem of tracking ground moving objects with a group of UAVs using gimbaled sensors under flight dynamic and collision-free constraints. The optimal cooperative tracking path planning problem is solved using an evolutionary optimization technique based on the framework of chemical reaction optimization (CRO). The efficiency of the proposed method was demonstrated through a series of comparative simulations. The results show that the cooperative tracking paths determined by the newly developed method allow for longer sensor coverage time under flight dynamic restrictions and safety conditions.

  5. A fundamental experimental approach for optimal design of speed bumps.

    PubMed

    Lav, A Hakan; Bilgin, Ertugrul; Lav, A Hilmi

    2017-06-02

    Speed bumps and humps are utilized as means of calming traffic and controlling vehicular speed. Bumps and humps of large length and width force drivers to reduce their driving speeds significantly so as to avoid large vehicle vertical accelerations. This experimental study was therefore conducted with the aim of determining a speed bump design that performs optimally in leading drivers to reduce the speed of their vehicles to safe levels. The investigation starts from the following question: what is the optimal design of a speed bump that will, at the same time, reduce the velocity of an incoming vehicle significantly and to a speed at which the resulting vertical acceleration does not jeopardize road safety? The experiment was designed to study the dependent variables and collect data in order to propose an optimal speed bump design. To achieve this, a 1:6 scale model was created to simulate the interaction between a car wheel and a speed bump. During the course of the experiment, a wheel was accelerated down an inclined plane onto a horizontal plane of motion, where it was allowed to collide with a speed bump. The speed of the wheel and the vertical acceleration at the speed bump were captured by means of a Vernier Motion Detector. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Applying Monte-Carlo simulations to optimize an inelastic neutron scattering system for soil carbon analysis

    USDA-ARS?s Scientific Manuscript database

    Computer Monte-Carlo (MC) simulations (Geant4) of neutron propagation and acquisition of gamma response from soil samples was applied to evaluate INS system performance characteristic [sensitivity, minimal detectable level (MDL)] for soil carbon measurement. The INS system model with best performanc...

  7. Uncertainty optimization applied to the Monte Carlo analysis of planetary entry trajectories

    NASA Astrophysics Data System (ADS)

    Way, David Wesley

    2001-10-01

    Future robotic missions to Mars, as well as any human missions, will require precise entries to ensure safe landings near science objectives and pre-deployed assets. Planning for these missions will depend heavily on Monte Carlo analyses to evaluate active guidance algorithms, assess the impact of off-nominal conditions, and account for uncertainty. The dependability of Monte Carlo forecasts, however, is limited by the accuracy and completeness of the assumed uncertainties. This is because Monte Carlo analysis is a forward-driven problem: beginning with the input uncertainties and proceeding to the forecast output statistics. An improvement to the Monte Carlo analysis is needed that will allow the problem to be worked in reverse. In this way, the largest allowable dispersions that achieve the required mission objectives can be determined quantitatively. This thesis proposes a methodology to optimize the uncertainties in the Monte Carlo analysis of spacecraft landing footprints. A metamodel is first used to write polynomial expressions for the size of the landing footprint as functions of the independent uncertainty extrema. The coefficients of the metamodel are determined by performing experiments. The metamodel is then used in a constrained optimization procedure to minimize a cost-tolerance function. First, a two-dimensional proof-of-concept problem was used to evaluate the feasibility of this optimization method. Next, the optimization method was further demonstrated on the Mars Surveyor Program 2001 Lander. The purpose of this example was to demonstrate that the methodology developed during the proof-of-concept could be scaled to solve larger, more complicated, "real world" problems. This research has shown that it is possible to control the size of the landing footprint and establish tolerances for mission uncertainties. A simplified metamodel was developed, which is an enabling step for realistic problems with more than just a few uncertainties. A confidence interval on
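
    The reverse-driven idea can be sketched in two steps: fit a polynomial metamodel of footprint size against the uncertainty extrema from a small designed experiment, then minimize a cost-tolerance function subject to a footprint requirement. The two uncertainties, the cost model, and the synthetic footprint response below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
# designed experiment: uncertainty extrema (u1, u2) -> simulated footprint size
U = rng.uniform(0.1, 2.0, size=(30, 2))
footprint_runs = (5 + 3 * U[:, 0] + 2 * U[:, 1] + U[:, 0] * U[:, 1]
                  + rng.normal(0, 0.1, 30))

# metamodel: footprint ~ c0 + c1*u1 + c2*u2 + c3*u1*u2, fitted by least squares
Phi = np.column_stack([np.ones(30), U[:, 0], U[:, 1], U[:, 0] * U[:, 1]])
c, *_ = np.linalg.lstsq(Phi, footprint_runs, rcond=None)

def footprint(u):
    return c @ np.array([1.0, u[0], u[1], u[0] * u[1]])

def cost(u):
    return np.sum(1.0 / np.asarray(u))     # tighter tolerances cost more

res = minimize(cost, x0=[0.5, 0.5], bounds=[(0.1, 2.0)] * 2,
               constraints=[{"type": "ineq",
                             "fun": lambda u: 12.0 - footprint(u)}])
print(res.x, footprint(res.x))             # largest allowable dispersions
```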

  8. Optimal error estimates for high order Runge-Kutta methods applied to evolutionary equations

    SciTech Connect

    McKinney, W.R.

    1989-01-01

    Fully discrete approximations to 1-periodic solutions of the generalized Korteweg-de Vries and the Cahn-Hilliard equations are analyzed. These approximations are generated by an implicit Runge-Kutta method for the temporal discretization and a Galerkin finite element method for the spatial discretization. Furthermore, these approximations may be of arbitrarily high order. In particular, it is shown that the well-known order-reduction phenomenon afflicting implicit Runge-Kutta methods does not occur. Numerical results supporting these optimal error estimates for the Korteweg-de Vries equation and indicating the existence of a slow-motion manifold for the Cahn-Hilliard equation are also provided.

  9. A new approach for structural health monitoring by applying anomaly detection on strain sensor data

    NASA Astrophysics Data System (ADS)

    Trichias, Konstantinos; Pijpers, Richard; Meeuwissen, Erik

    2014-03-01

    Structural Health Monitoring (SHM) systems help to monitor critical infrastructures (bridges, tunnels, etc.) remotely and provide up-to-date information about their physical condition. In addition, they help to predict the structure's life and required maintenance in a cost-efficient way. Typically, inspection data give insight into the structural health. The global structural behavior, and predominantly the structural loading, is generally measured with vibration and strain sensors. Acoustic emission sensors are increasingly used for measuring global crack activity near critical locations. In this paper, we present a procedure for local structural health monitoring by applying Anomaly Detection (AD) on data from strain sensors applied in the expected crack path. Sensor data are analyzed by automatic anomaly detection in order to find crack activity at an early stage. This approach targets the monitoring of critical structural locations, such as welds, near which strain sensors can be applied during construction, and/or locations with limited inspection possibilities during structural operation. We investigate several anomaly detection techniques to detect changes in statistical properties indicating structural degradation. The most effective one is a novel polynomial fitting technique, which tracks slow changes in sensor data. Our approach has been tested on a representative test structure (bridge deck) in a lab environment, under constant and variable amplitude fatigue loading. In both cases, the evolving cracks at the monitored locations were successfully detected, autonomously, by our AD monitoring tool.
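
    A minimal sketch of the windowed polynomial-fitting idea described above, under the assumption that a drifting linear-term coefficient signals structural degradation; the window size, threshold, and synthetic strain record are illustrative, not the paper's.

```python
# Sketch of anomaly detection on strain data via sliding-window
# polynomial fits: a slow drift in the fitted trend is flagged as
# possible crack activity. Parameters below are illustrative.
import numpy as np

def polyfit_anomalies(signal, window=200, order=2, slope_thresh=2e-4):
    flags = []
    for start in range(0, len(signal) - window, window):
        seg = signal[start:start + window]
        t = np.arange(window)
        coeffs = np.polyfit(t, seg, order)
        slope = coeffs[order - 1]          # linear-term coefficient
        if abs(slope) > slope_thresh:
            flags.append(start)
    return flags

# Synthetic strain record: noise plus a slow drift starting mid-record.
rng = np.random.default_rng(1)
n = 2000
strain = 0.01 * rng.standard_normal(n)
strain[1000:] += np.linspace(0.0, 0.5, n - 1000)   # emerging degradation
print("anomalous windows start at samples:", polyfit_anomalies(strain))
```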

  10. Therapy for Duchenne muscular dystrophy: renewed optimism from genetic approaches.

    PubMed

    Fairclough, Rebecca J; Wood, Matthew J; Davies, Kay E

    2013-06-01

    Duchenne muscular dystrophy (DMD) is a devastating progressive disease for which there is currently no effective treatment except palliative therapy. There are several promising genetic approaches, including viral delivery of the missing dystrophin gene, read-through of translation stop codons, exon skipping to restore the reading frame and increased expression of the compensatory utrophin gene. The lessons learned from these approaches will be applicable to many other disorders.

  11. Multi-Objective Particle Swarm Optimization Approach for Cost-Based Feature Selection in Classification.

    PubMed

    Zhang, Yong; Gong, Dun-Wei; Cheng, Jian

    2017-01-01

    Feature selection is an important data-preprocessing technique in classification problems such as bioinformatics and signal processing. Generally, there are some situations where a user is interested in not only maximizing the classification performance but also minimizing the cost that may be associated with features. This kind of problem is called cost-based feature selection. However, most existing feature selection approaches treat this task as a single-objective optimization problem. This paper presents the first study of multi-objective particle swarm optimization (PSO) for cost-based feature selection problems. The task of this paper is to generate a Pareto front of nondominated solutions, that is, feature subsets, to meet different requirements of decision-makers in real-world applications. In order to enhance the search capability of the proposed algorithm, a probability-based encoding technology and an effective hybrid operator, together with the ideas of the crowding distance, the external archive, and the Pareto domination relationship, are applied to PSO. The proposed PSO-based multi-objective feature selection algorithm is compared with several multi-objective feature selection algorithms on five benchmark datasets. Experimental results show that the proposed algorithm can automatically evolve a set of nondominated solutions, and it is a highly competitive feature selection method for solving cost-based feature selection problems.
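
    Two of the building blocks the abstract names, the Pareto domination test and the crowding distance used to keep the external archive well spread, can be sketched as follows for a bi-objective (classification error, feature cost) setting; the objective values below are toy numbers, not results from the paper.

```python
# Sketch of Pareto domination and crowding distance for a
# two-objective feature-selection archive; both objectives minimized.
import numpy as np

def dominates(a, b):
    # a dominates b if it is no worse in all objectives, better in one.
    return np.all(a <= b) and np.any(a < b)

def nondominated(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

def crowding_distance(front):
    front = np.asarray(front, dtype=float)
    n, m = front.shape
    dist = np.zeros(n)
    for k in range(m):
        order = np.argsort(front[:, k])
        dist[order[0]] = dist[order[-1]] = np.inf   # keep the extremes
        span = front[order[-1], k] - front[order[0], k] or 1.0
        for i in range(1, n - 1):
            dist[order[i]] += (front[order[i + 1], k]
                               - front[order[i - 1], k]) / span
    return dist

# Toy archive of (classification error, feature cost) pairs.
archive = [np.array(p) for p in [(0.10, 9), (0.15, 5), (0.30, 2),
                                 (0.12, 8), (0.40, 2)]]
front = nondominated(archive)
print("Pareto front:", [tuple(p) for p in front])
print("crowding:", crowding_distance(front))
```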

  12. Optimization-Based Sensor Fusion of GNSS and IMU Using a Moving Horizon Approach

    PubMed Central

    Girrbach, Fabian; Hol, Jeroen D.; Bellusci, Giovanni; Diehl, Moritz

    2017-01-01

    The rise of autonomous systems operating close to humans imposes new challenges in terms of robustness and precision on the estimation and control algorithms. Approaches based on nonlinear optimization, such as moving horizon estimation, have been shown to improve the accuracy of the estimated solution compared to traditional filter techniques. This paper introduces an optimization-based framework for multi-sensor fusion following a moving horizon scheme. The framework is applied to the often occurring estimation problem of motion tracking by fusing measurements of a global navigation satellite system receiver and an inertial measurement unit. The resulting algorithm is used to estimate position, velocity, and orientation of a maneuvering airplane and is evaluated against an accurate reference trajectory. A detailed study of the influence of the horizon length on the quality of the solution is presented and evaluated against filter-like and batch solutions of the problem. The versatile configuration possibilities of the framework are finally used to analyze the estimated solutions at different evaluation times exposing a nearly linear behavior of the sensor fusion problem. PMID:28534857
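
    The moving horizon scheme can be sketched on a toy problem: estimate the position and velocity of a 1-D target by solving a least-squares problem over only the most recent N steps, fusing noisy position fixes (GNSS-like) and accelerations (IMU-like). This is an assumption-laden miniature, not the paper's airplane estimation problem.

```python
# Toy moving horizon estimator: a sliding-window least-squares fit of
# the state trajectory, re-solved as each new measurement arrives.
import numpy as np
from scipy.optimize import least_squares

dt, N = 0.1, 20                        # step size, horizon length

def residuals(z, pos_meas, acc_meas):
    x = z.reshape(-1, 2)               # horizon states: [position, velocity]
    r = []
    for k in range(len(x) - 1):        # process-model residuals
        r.append(x[k + 1, 0] - (x[k, 0] + dt * x[k, 1]))
        r.append(x[k + 1, 1] - (x[k, 1] + dt * acc_meas[k]))
    r.extend((x[:, 0] - pos_meas) / 0.5)   # position-measurement residuals
    return np.array(r)

rng = np.random.default_rng(2)
T = 100
true_pos = 0.5 * 0.2 * (dt * np.arange(T)) ** 2     # constant accel. 0.2
pos_meas = true_pos + 0.5 * rng.standard_normal(T)  # GNSS-like fixes
acc_meas = 0.2 + 0.05 * rng.standard_normal(T)      # IMU-like readings

estimates = []
for t in range(N, T):                  # slide the horizon over the data
    sol = least_squares(residuals, np.zeros(2 * N),
                        args=(pos_meas[t - N:t], acc_meas[t - N:t]))
    estimates.append(sol.x.reshape(-1, 2)[-1])      # keep newest state
print("final [pos, vel]:", estimates[-1],
      "truth:", [true_pos[-1], 0.2 * dt * (T - 1)])
```

    Increasing N trades computation time for accuracy, which is the horizon-length study the abstract describes.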

  13. Quality by design approach for optimizing the formulation and physical properties of extemporaneously prepared orodispersible films.

    PubMed

    Visser, J Carolina; Dohmen, Willem M C; Hinrichs, Wouter L J; Breitkreutz, Jörg; Frijlink, Henderik W; Woerdenbag, Herman J

    2015-05-15

    The quality by design (QbD) approach was applied for optimizing the formulation of extemporaneously prepared orodispersible films (ODFs) using Design-Expert® Software. The starting formulation was based on earlier experiments and contained the film forming agents hypromellose and carbomer 974P and the plasticizer glycerol (Visser et al., 2015). Trometamol and disodium EDTA were added to stabilize the solution. To optimize this formulation, a quality target product profile was established in which critical quality attributes (CQAs) such as mechanical properties and disintegration time were defined and quantified. The percentage of hypromellose, the percentage of glycerol, and the drying time were chosen as the critical process parameters (CPPs) to be evaluated for their effect on the CQAs. Response surface methodology (RSM) was used to evaluate the effects of the CPPs on the CQAs of the final product. The main factor affecting tensile strength and Young's modulus was the percentage of glycerol. Elongation at break was mainly influenced by the drying temperature. Disintegration time was found to be sensitive to the percentage of hypromellose. From the results a design space could be created. As long as the formulation and process variables remain within this design space, a product is obtained with the desired characteristics that meets all set quality requirements. Copyright © 2015 Elsevier B.V. All rights reserved.

  14. Optimization on machine learning based approaches for sentiment analysis on HPV vaccines related tweets.

    PubMed

    Du, Jingcheng; Xu, Jun; Song, Hsingyi; Liu, Xiangyu; Tao, Cui

    2017-03-03

    Analysing public opinion on HPV vaccines on social media using machine learning based approaches will help us understand the reasons behind the low vaccine coverage and develop corresponding strategies to improve vaccine uptake. We propose a machine learning system that is able to extract comprehensive public sentiment on HPV vaccines on Twitter with satisfactory performance. We collected and manually annotated 6,000 HPV vaccine related tweets as a gold standard. An SVM model was chosen, and a hierarchical classification method was proposed and evaluated. Additional feature-set evaluation and model-parameter optimization were performed to maximize the machine learning model performance. A hierarchical classification scheme containing 10 categories was built to assess public opinion toward HPV vaccines comprehensively. A gold corpus of 6,000 annotated tweets, with a Kappa annotation agreement of 0.851, was created and made publicly available. Compared with the baseline model, the hierarchical classification model with optimized feature sets and model parameters increased the micro-averaged and macro-averaged F scores from 0.6732 and 0.3967 to 0.7442 and 0.5883, respectively. Our work provides a systematic way to improve machine learning model performance on the highly unbalanced corpus of HPV vaccine related tweets. Our system can further be applied to a large tweet corpus to extract large-scale public opinion towards HPV vaccines.
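
    A minimal sketch of the hierarchical classification idea, assuming a two-level scheme (relatedness first, then sentiment among related tweets); the paper's 10-category hierarchy, feature sets, and tuned parameters are not reproduced, and the texts below are toy examples.

```python
# Two-level hierarchical SVM sketch: a first model filters
# vaccine-related texts; a second assigns sentiment only to texts
# predicted related. Toy data; not the paper's corpus or features.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = ["hpv vaccine saved lives", "vaccine caused side effects",
         "get your hpv shot today", "worried about hpv vaccine safety",
         "nice weather this weekend", "watching the game tonight"]
related = [1, 1, 1, 1, 0, 0]
sentiment = ["pos", "neg", "pos", "neg"]      # only for related texts

level1 = make_pipeline(TfidfVectorizer(), LinearSVC()).fit(texts, related)
rel_texts = [t for t, r in zip(texts, related) if r]
level2 = make_pipeline(TfidfVectorizer(), LinearSVC()).fit(rel_texts,
                                                           sentiment)

def classify(text):
    # Route through the hierarchy: relatedness first, then sentiment.
    if level1.predict([text])[0] == 0:
        return "unrelated"
    return level2.predict([text])[0]

print(classify("hpv vaccine is safe and effective"))
```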

  15. Optimization-Based Sensor Fusion of GNSS and IMU Using a Moving Horizon Approach.

    PubMed

    Girrbach, Fabian; Hol, Jeroen D; Bellusci, Giovanni; Diehl, Moritz

    2017-05-19

    The rise of autonomous systems operating close to humans imposes new challenges in terms of robustness and precision on the estimation and control algorithms. Approaches based on nonlinear optimization, such as moving horizon estimation, have been shown to improve the accuracy of the estimated solution compared to traditional filter techniques. This paper introduces an optimization-based framework for multi-sensor fusion following a moving horizon scheme. The framework is applied to the often occurring estimation problem of motion tracking by fusing measurements of a global navigation satellite system receiver and an inertial measurement unit. The resulting algorithm is used to estimate position, velocity, and orientation of a maneuvering airplane and is evaluated against an accurate reference trajectory. A detailed study of the influence of the horizon length on the quality of the solution is presented and evaluated against filter-like and batch solutions of the problem. The versatile configuration possibilities of the framework are finally used to analyze the estimated solutions at different evaluation times exposing a nearly linear behavior of the sensor fusion problem.

  16. Applying Reflective Middleware Techniques to Optimize a QoS-enabled CORBA Component Model Implementation

    NASA Technical Reports Server (NTRS)

    Wang, Nanbor; Parameswaran, Kirthika; Kircher, Michael; Schmidt, Douglas

    2003-01-01

    Although existing CORBA specifications, such as Real-time CORBA and CORBA Messaging, address many end-to-end quality-of-service (QoS) properties, they do not define strategies for configuring these properties into applications flexibly, transparently, and adaptively. Therefore, application developers must make these configuration decisions manually and explicitly, which is tedious, error-prone, and often sub-optimal. Although the recently adopted CORBA Component Model (CCM) does define a standard configuration framework for packaging and deploying software components, conventional CCM implementations focus on functionality rather than adaptive quality-of-service, which makes them unsuitable for next-generation applications with demanding QoS requirements. This paper presents three contributions to the study of middleware for QoS-enabled component-based applications. It outlines reflective middleware techniques designed to adaptively (1) select optimal communication mechanisms, (2) manage QoS properties of CORBA components in their containers, and (3) (re)configure selected component executors dynamically. Based on our ongoing research on CORBA and the CCM, we believe the application of reflective techniques to component middleware will provide a dynamically adaptive and (re)configurable framework for COTS software that is well-suited for the QoS demands of next-generation applications.

  17. Applying Reflective Middleware Techniques to Optimize a QoS-enabled CORBA Component Model Implementation

    NASA Technical Reports Server (NTRS)

    Wang, Nanbor; Kircher, Michael; Schmidt, Douglas C.

    2000-01-01

    Although existing CORBA specifications, such as Real-time CORBA and CORBA Messaging, address many end-to-end quality-of-service (QoS) properties, they do not define strategies for configuring these properties into applications flexibly, transparently, and adaptively. Therefore, application developers must make these configuration decisions manually and explicitly, which is tedious, error-prone, and often sub-optimal. Although the recently adopted CORBA Component Model (CCM) does define a standard configuration framework for packaging and deploying software components, conventional CCM implementations focus on functionality rather than adaptive quality-of-service, which makes them unsuitable for next-generation applications with demanding QoS requirements. This paper presents three contributions to the study of middleware for QoS-enabled component-based applications. It outlines reflective middleware techniques designed to adaptively: (1) select optimal communication mechanisms, (2) manage QoS properties of CORBA components in their containers, and (3) (re)configure selected component executors dynamically. Based on our ongoing research on CORBA and the CCM, we believe the application of reflective techniques to component middleware will provide a dynamically adaptive and (re)configurable framework for COTS software that is well-suited for the QoS demands of next-generation applications.

  18. Doehlert experimental design applied to optimization of light emitting textile structures

    NASA Astrophysics Data System (ADS)

    Oguz, Yesim; Cochrane, Cedric; Koncar, Vladan; Mordon, Serge R.

    2016-07-01

    A light emitting fabric (LEF) has been developed for photodynamic therapy (PDT) for the treatment of dermatologic diseases such as Actinic Keratosis (AK). A successful PDT requires homogeneous and reproducible light with controlled power and wavelength on the treated skin area. Due to the shape of the human body, traditional PDT with external light sources is unable to deliver homogeneous light everywhere on the skin (head vertex, hand, etc.). For better light delivery homogeneity, plastic optical fibers (POFs) have been woven into the textile so as to emit the injected light laterally. Previous studies confirmed that the light power could be locally controlled by modifying the radius of POF macro-bendings within the textile structure. The objective of this study is to optimize the distribution of macro-bendings over the LEF surface in order to increase the light intensity (mW/cm2) and to guarantee the best possible light delivery homogeneity over the LEF; these two objectives are often contradictory. Fifteen experiments have been carried out with a Doehlert experimental design involving Response Surface Methodology (RSM). The proposed models are fitted to the experimental data to enable the optimal set-up of the warp yarn tensions.
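
    For reference, a two-factor Doehlert design is a centre point plus a hexagon of six equally spaced points; the sketch below generates the coded points and fits a generic quadratic response surface to hypothetical responses. The factor names and response values are illustrative; the fabric measurements themselves are not reproduced.

```python
# Two-factor Doehlert design (centre + hexagon) with a quadratic
# response-surface fit; responses below are hypothetical stand-ins.
import numpy as np

# Coded Doehlert points for k = 2 factors.
doehlert = np.array([
    [0.0, 0.0],
    [1.0, 0.0], [0.5, 0.866], [-0.5, 0.866],
    [-1.0, 0.0], [-0.5, -0.866], [0.5, -0.866],
])

def quadratic_features(x):
    x1, x2 = x
    return np.array([1.0, x1, x2, x1 * x2, x1**2, x2**2])

# Hypothetical measured responses (e.g., light intensity in mW/cm2).
y = np.array([4.8, 5.6, 5.9, 4.1, 3.2, 3.5, 5.0])
A = np.array([quadratic_features(p) for p in doehlert])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("fitted response-surface coefficients:", coef.round(3))
```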

  1. Experiences in applying optimization techniques to configurations for the Control Of Flexible Structures (COFS) Program

    NASA Technical Reports Server (NTRS)

    Walsh, Joanne L.

    1988-01-01

    Optimization procedures are developed to systematically provide closely-spaced vibration frequencies. A general-purpose finite-element program for eigenvalue and sensitivity analyses is combined with formal mathematical programming techniques. Results are presented for three studies. The first study uses a simple model to obtain a design with two pairs of closely-spaced frequencies. Two formulations are developed: an objective function-based formulation and a constraint-based formulation for the frequency spacing. It is found that conflicting goals are handled better by a constraint-based formulation. The second study uses a detailed model to obtain a design with one pair of closely-spaced frequencies while satisfying requirements on local member frequencies and manufacturing tolerances. Two formulations are developed; both the constraint-based and the objective function-based formulations perform reasonably well and converge to the same results. However, no feasible design solution exists which satisfies all design requirements for the chosen design variables and their upper and lower bounds; more design freedom is needed to achieve a fully satisfactory design. The third study is part of a redesign activity in which a detailed model is used. The use of optimization in this activity allows investigation of numerous options (such as the number of bays, material, and minimum diagonal wall thickness) in a relatively short time. The procedure provides data for judgments on the effects of the different options on the design.

  2. Multiple response optimization applied to the development of a capillary electrophoretic method for pharmaceutical analysis.

    PubMed

    Candioti, Luciana Vera; Robles, Juan C; Mantovani, Víctor E; Goicoechea, Héctor C

    2006-03-15

    Multiple response simultaneous optimization by using the desirability function was used for the development of a capillary electrophoresis method for the simultaneous determination of four active ingredients in pharmaceutical preparations: vitamins B6 and B12, dexamethasone and lidocaine hydrochloride. Five responses were simultaneously optimized: the three resolutions, the analysis time and the capillary current. This latter response was taken into account in order to improve the quality of the separations. The separation was carried out by using capillary zone electrophoresis (CZE) with a silica capillary and UV detection (240 nm). The optimum conditions were: 57.0 mmol l⁻¹ sodium phosphate buffer solution, pH 7.0 and voltage = 17.2 kV. Good results concerning precision (CV lower than 2%), accuracy (recoveries ranged between 98.5 and 102.6%) and selectivity were obtained in the concentration range studied for the four compounds. These results are comparable to those provided by the reference high performance liquid chromatography (HPLC) technique.
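
    A minimal sketch of the desirability-function machinery, assuming Derringer-style ramps and a geometric-mean combination; all target and limit values below are illustrative, not the published optimum.

```python
# Desirability-function sketch: map each response onto [0, 1]
# (1 = ideal), combine by geometric mean, and compare candidate
# separation conditions. Limits are illustrative assumptions.
import numpy as np

def d_maximize(y, low, high, s=1.0):
    # 0 below `low`, 1 above `high`, power ramp in between.
    return np.clip((y - low) / (high - low), 0.0, 1.0) ** s

def d_minimize(y, low, high, s=1.0):
    return np.clip((high - y) / (high - low), 0.0, 1.0) ** s

def overall(resolutions, time_min, current_uA):
    d = [d_maximize(r, 1.0, 2.5) for r in resolutions]   # resolutions
    d.append(d_minimize(time_min, 5.0, 20.0))            # analysis time
    d.append(d_minimize(current_uA, 20.0, 80.0))         # capillary current
    return np.prod(d) ** (1.0 / len(d))                  # geometric mean

# Compare two candidate buffer/voltage settings.
print(overall([2.1, 1.8, 2.4], time_min=9.0, current_uA=35.0))
print(overall([1.2, 1.1, 2.6], time_min=6.0, current_uA=30.0))
```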

  3. Optimization of spatial light distribution through genetic algorithms for vision systems applied to quality control

    NASA Astrophysics Data System (ADS)

    Castellini, P.; Cecchini, S.; Stroppa, L.; Paone, N.

    2015-02-01

    The paper presents an adaptive illumination system for image quality enhancement in vision-based quality control systems. In particular, a spatial modulation of illumination intensity is proposed in order to improve image quality, thus compensating for different target scattering properties, local reflections and fluctuations of ambient light. The desired spatial modulation of illumination is obtained by a digital light projector, used to illuminate the scene with an arbitrary spatial distribution of light intensity, designed to improve feature extraction in the region of interest. The spatial distribution of illumination is optimized by running a genetic algorithm. An image quality estimator is used to close the feedback loop and to stop iterations once the desired image quality is reached. The technique proves particularly valuable for optimizing the spatial illumination distribution in the region of interest, with the remarkable capability of the genetic algorithm to adapt the light distribution to very different target reflectivity and ambient conditions. The final objective of the proposed technique is the improvement of the matching score in the recognition of parts through matching algorithms, hence of the diagnosis of machine vision-based quality inspections. The procedure has been validated both by a numerical model and by an experimental test, referring to a significant problem of quality control for the washing machine manufacturing industry: the recognition of a metallic clamp. Its applicability to other domains is also presented, specifically for the visual inspection of shoes with retro-reflective tape and T-shirts with paillettes.

  4. Optimal processing for gel electrophoresis images: Applying Monte Carlo Tree Search in GelApp.

    PubMed

    Nguyen, Phi-Vu; Ghezal, Ali; Hsueh, Ya-Chih; Boudier, Thomas; Gan, Samuel Ken-En; Lee, Hwee Kuan

    2016-08-01

    In biomedical research, gel band size estimation in electrophoresis analysis is a routine process. To facilitate and automate this process, numerous software tools have been released, notably the GelApp mobile app. However, the band detection accuracy is limited due to a band detection algorithm that cannot adapt to the variations in input images. To address this, we used the Monte Carlo Tree Search with Upper Confidence Bound (MCTS-UCB) method to efficiently search for optimal image processing pipelines for the band detection task, thereby improving the segmentation algorithm. Incorporating this into GelApp, we report a significant enhancement of gel band detection accuracy by 55.9 ± 2.0% for protein polyacrylamide gels, and 35.9 ± 2.5% for DNA SYBR green agarose gels. This implementation is a proof-of-concept demonstrating MCTS-UCB as a strategy to optimize general image segmentation. The improved version of GelApp, GelApp 2.0, is freely available on both the Google Play Store (for the Android platform) and the Apple App Store (for the iOS platform).
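
    The UCB rule at the core of MCTS-UCB can be sketched as a bandit over candidate image-processing operations: repeatedly expand the operation with the best mean score plus an exploration bonus. The operations and scores below are simulated stand-ins for gel band detection accuracy, not GelApp's actual pipeline search.

```python
# UCB1 sketch: balance exploitation (mean score) and exploration
# (uncertainty bonus) when choosing which candidate operation to try.
import math
import random

random.seed(3)
arms = {"median_blur": 0.60, "gaussian_blur": 0.55, "top_hat": 0.72}
counts = {a: 0 for a in arms}
totals = {a: 0.0 for a in arms}

def ucb1(arm, t, c=1.4):
    if counts[arm] == 0:
        return float("inf")                  # try every arm once
    mean = totals[arm] / counts[arm]
    return mean + c * math.sqrt(math.log(t) / counts[arm])

for t in range(1, 201):
    arm = max(arms, key=lambda a: ucb1(a, t))
    reward = arms[arm] + random.gauss(0.0, 0.1)   # noisy detection score
    counts[arm] += 1
    totals[arm] += reward

print("pulls per candidate op:", counts)      # the best op dominates
```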

  5. [Optimization approach to inverse problems in near-infrared optical tomography].

    PubMed

    Li, Weitao; Wang, Huinan; Qian, Zhiyu

    2008-04-01

    In this paper, we introduce an optimization approach to the inverse model of near-infrared optical tomography (NIR OT), which can reconstruct the optical properties, namely the absorption and scattering coefficients, of thick tissue such as brain and breast tissue. A modeling and simulation tool named Femlab, based on finite element methods, was tested, wherein the forward models are based on the diffusion equation. The inverse model is then solved; this is treated as an optimization problem, comprising tests on the difference between the measured data and the predicted data, and optimization of the optical properties. The algorithms used for optimization are multi-species Genetic Algorithms based on multi-encoding. Finally, the whole strategy for the Femlab and optimization approach is given. The strategy is shown by the simulation results to be effective.

  6. A multicriteria decision making approach applied to improving maintenance policies in healthcare organizations.

    PubMed

    Carnero, María Carmen; Gómez, Andrés

    2016-04-23

    Healthcare organizations have far greater maintenance needs for their medical equipment than other organizations, as much of this equipment is used directly with patients. However, the literature on asset management in healthcare organizations is very limited. The aim of this research is to provide a more rational application of maintenance policies, leading to an increase in quality of care. This article describes a multicriteria decision-making approach which integrates Markov chains with the multicriteria Measuring Attractiveness by a Categorical Based Evaluation Technique (MACBETH), to facilitate the best choice of combination of maintenance policies by using the judgements of a multi-disciplinary decision group. The proposed approach takes into account the level of acceptance that a given alternative would have among professionals. It also takes into account criteria related to cost, quality of care and impact of care cover. This multicriteria approach is applied to four dialysis subsystems: patients infected with hepatitis C, patients infected with hepatitis B, acute, and chronic; in all cases, the maintenance strategy obtained consists of applying corrective and preventive maintenance plus two reserve machines. The added value in decision-making practices from this research comes from: (i) integrating the use of Markov chains to obtain the alternatives to be assessed by a multicriteria methodology; (ii) proposing the use of MACBETH to make rational decisions on asset management in healthcare organizations; (iii) applying the multicriteria approach to select a set or combination of maintenance policies in four dialysis subsystems of a health care organization. In the multicriteria decision making approach proposed, economic criteria have been used, related to the quality of care which is desired for patients (availability), and the acceptance that each alternative would have considering the maintenance and healthcare resources which exist in the organization, with the inclusion of a

  7. A mathematical approach to optimal selection of dose values in the additive dose method of EPR dosimetry

    SciTech Connect

    Hayes, R.B.; Haskell, E.H.; Kenner, G.H.

    1996-01-01

    Additive dose methods commonly used in electron paramagnetic resonance (EPR) dosimetry are time consuming and labor intensive. We have developed a mathematical approach for determining the optimal spacing of applied doses and the number of spectra which should be taken at each dose level. Expected uncertainties in the data points are assumed to be normally distributed with a fixed standard deviation, and linearity of dose response is also assumed. The optimum spacing and number of points necessary for the minimal error can be estimated, as can the likely error in the resulting estimate. When low doses are being estimated for tooth enamel samples, the optimal spacing is shown to be a concentration of points near the zero dose value, with fewer spectra taken at a single high dose value within the range of known linearity. Optimization of the analytical process results in increased accuracy and sample throughput.
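
    The design conclusion can be checked numerically: under a linear dose response with normally distributed noise, clustering spectra near zero dose plus a single high dose yields a smaller standard deviation of the back-extrapolated dose than an evenly spaced design. The response parameters and noise level below are illustrative assumptions, not the paper's calibration.

```python
# Monte Carlo comparison of two additive-dose designs with a fixed
# number of spectra: evenly spaced vs. clustered near zero + one high.
import numpy as np

rng = np.random.default_rng(4)
true_dose, slope, sigma = 0.5, 1.0, 0.05     # accrued dose (Gy), etc.

def estimated_dose_sd(added_doses, n_trials=5000):
    x = np.asarray(added_doses, dtype=float)
    est = np.empty(n_trials)
    for i in range(n_trials):
        y = slope * (true_dose + x) + sigma * rng.standard_normal(x.size)
        b, a = np.polyfit(x, y, 1)           # fit y = b*x + a
        est[i] = a / b                       # back-extrapolated dose
    return est.std()

even = np.linspace(0.0, 10.0, 10)            # evenly spaced design
clustered = np.array([0.0] * 9 + [10.0])     # cluster at zero + one high
print("sd(dose), even     :", estimated_dose_sd(even))
print("sd(dose), clustered:", estimated_dose_sd(clustered))
```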

  8. A New Approach to Site Demand-Based Level Inventory Optimization

    DTIC Science & Technology

    2016-06-01

    A New Approach to Site Demand-Based Level Inventory Optimization, by Tacettin Ersoz. Master's thesis, June 2016 (thesis advisor: Javier Salmeron; second reader: Emily ...). ... number of orders. The Site Demand-Based Level Inventory Optimization Model (SIOM) is a mixed-integer, linear program developed at the Naval

  9. A numerical optimization approach to generate smoothing spherical splines

    NASA Astrophysics Data System (ADS)

    Machado, L.; Monteiro, M. Teresa T.

    2017-01-01

    Approximating data in curved spaces is a common procedure that is in strong demand in modern applications arising, for instance, in the aerospace and robotics industries. Here, we are particularly interested in finding smoothing cubic splines that best fit given data on the Euclidean sphere. To achieve this aim, a least-squares optimization problem based on the minimization of a certain cost functional is formulated. To solve the problem, a numerical algorithm is implemented using several routines from MATLAB toolboxes. The proposed algorithm is shown to be easy to implement, and very accurate and precise for randomly chosen spherical data.

  10. Individualized prophylaxis for optimizing hemophilia care: can we apply this to both developed and developing nations?

    PubMed

    Poon, Man-Chiu; Lee, Adrienne

    2016-01-01

    Prophylaxis is considered optimal care for hemophilia patients to prevent bleeding and to preserve joint function, thereby improving quality of life (QoL). The evidence for prophylaxis is irrefutable, and it is the standard of care in developed nations. Prophylaxis can be further individualized to improve outcomes and cost effectiveness. Individualization is best accomplished by taking into account the bleeding phenotype, physical activity/lifestyle, joint status, and the pharmacokinetic handling of specific clotting factor concentrates, all of which vary among individuals. Patient acceptance should also be considered. Assessment tools (e.g. joint status imaging and function studies/scores, QoL) for determining and monitoring risk factors and outcome, as well as population PK profiling, have been developed to assist the individualization process. The determinants of optimal prophylaxis include (1) factor dose/dosing frequency, hence cost/affordability, (2) bleeding triggers (physical activity/lifestyle, chronic arthropathy and synovitis) and (3) bleeding rates. Altering one determinant results in adjustment of the other two. Thus, the trough level to protect from spontaneous bleeding can be increased in patients who have greater bleeding risks, and prophylaxis to achieve zero joint bleeds is achievable through optimal individualization. Prophylaxis in economically constrained nations is limited by the limited affordability of clotting factor concentrates. However, at least 5 studies on children and adults from Thailand, China and India have shown the superiority of low-dose (~5-10 IU/kg, 2-3 times per week) prophylaxis over episodic treatment in terms of bleed reduction and quality of life, with improved physical activity, independent functioning, school attendance and community participation. In these nations, the prophylaxis goals should be improved QoL rather than "zero bleeds" and perfect joints. Prophylaxis can still be individualized to affordability. Higher protective

  11. Learning About Dying and Living: An Applied Approach to End-of-Life Communication.

    PubMed

    Pagano, Michael P

    2016-08-01

    The purpose of this article is to expand on prior research in end-of-life communication and death and dying communication apprehension, by developing a unique course that utilizes a hospice setting and an applied, service-learning approach. This essay therefore describes and discusses both the students' and my experiences over a 7-year period from 2008 through 2014. The courses taught during this time frame provided an opportunity to analyze students' responses, experiences, and discoveries across semesters/years and co-cultures. This unique, 3-credit, 14-week, service-learning, end-of-life communication course was developed to provide an opportunity for students to learn the theories related to this field of study and to apply that knowledge through volunteer experiences via interactions with dying patients and their families. The author's notes from these 7 years, plus three reflection essays submitted electronically by each of the 91 students (273 documents in total) across four courses/years, served as the data for this study. According to the students, verbally in class discussions and in numerous writing assignments, this course helped lower their death and dying communication apprehension and increased their willingness to interact with hospice patients and their families. Furthermore, the students' final research papers clearly demonstrated how the service-learning approach allowed them to apply classroom learnings, and their interactions with dying patients and their families at the hospice, to their analyses of end-of-life communication theories and behaviors. The results of these classes suggest that courses on other difficult topics (e.g., domestic violence, addiction, etc.) might benefit from a similar pedagogical approach.

  12. Applying sequential Monte Carlo methods into a distributed hydrologic model: lagged particle filtering approach with regularization

    NASA Astrophysics Data System (ADS)

    Noh, S. J.; Tachikawa, Y.; Shiiba, M.; Kim, S.

    2011-10-01

    Data assimilation techniques have received growing attention due to their capability to improve prediction. Among various data assimilation techniques, sequential Monte Carlo (SMC) methods, known as "particle filters", are a Bayesian learning process that has the capability to handle non-linear and non-Gaussian state-space models. In this paper, we propose an improved particle filtering approach to consider different response times of internal state variables in a hydrologic model. The proposed method adopts a lagged filtering approach to aggregate model response until the uncertainty of each hydrologic process is propagated. Regularization with an additional move step based on Markov chain Monte Carlo (MCMC) methods is also implemented to preserve sample diversity under the lagged filtering approach. A distributed hydrologic model, water and energy transfer processes (WEP), is implemented for sequential data assimilation through the updating of state variables. The lagged regularized particle filter (LRPF) and the sequential importance resampling (SIR) particle filter are implemented for hindcasting of streamflow at the Katsura catchment, Japan. Control state variables for filtering are soil moisture content and overland flow. Streamflow measurements are used for data assimilation. LRPF shows consistent forecasts regardless of the process noise assumption, while SIR has different values of optimal process noise and shows sensitive variation of confidence intervals depending on the process noise. Improvement of LRPF forecasts compared to SIR is particularly found for rapidly varied high flows, due to preservation of sample diversity from the kernel, even if particle impoverishment takes place.
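
    A minimal sketch of an SIR particle filter with a regularization step, in which resampled particles are jittered with a small kernel to restore sample diversity (the impoverishment problem the LRPF addresses). The state model is a standard 1-D nonlinear benchmark, not the WEP hydrologic model, and all noise levels are illustrative.

```python
# SIR particle filter with a regularization (jitter) step after
# resampling, demonstrated on a generic 1-D nonlinear benchmark.
import numpy as np

rng = np.random.default_rng(5)
n_part, T = 500, 50
proc_sd, meas_sd, kernel_sd = 1.0, 1.0, 0.2

def propagate(x, k):
    return 0.5 * x + 25 * x / (1 + x**2) + 8 * np.cos(1.2 * k)

# Simulate truth and measurements y = x^2/20 + noise.
x_true = np.zeros(T); y = np.zeros(T)
for k in range(1, T):
    x_true[k] = propagate(x_true[k - 1], k) + proc_sd * rng.standard_normal()
    y[k] = x_true[k] ** 2 / 20 + meas_sd * rng.standard_normal()

particles = rng.standard_normal(n_part)
est = np.zeros(T)
for k in range(1, T):
    particles = propagate(particles, k) \
        + proc_sd * rng.standard_normal(n_part)
    w = np.exp(-0.5 * ((y[k] - particles**2 / 20) / meas_sd) ** 2)
    w /= w.sum()
    est[k] = w @ particles
    idx = rng.choice(n_part, n_part, p=w)          # resample (SIR)
    particles = particles[idx] \
        + kernel_sd * rng.standard_normal(n_part)  # regularization jitter

print("RMSE:", np.sqrt(np.mean((est[1:] - x_true[1:]) ** 2)))
```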

  13. Towards Optimization of ACRT Schedules Applied to the Gradient Freeze Growth of Cadmium Zinc Telluride

    DOE PAGES

    Divecha, Mia S.; Derby, Jeffrey J.

    2017-10-03

    Historically, the melt growth of II-VI crystals has benefitted from the application of the accelerated crucible rotation technique (ACRT). Here, we employ a comprehensive numerical model to assess the impact of two ACRT schedules designed for a cadmium zinc telluride growth system per the classical recommendations of Capper and co-workers. The “flow maximizing” ACRT schedule, with higher rotation, effectively mixes the solutal field in the melt but does not reduce supercooling adjacent to the growth interface. The ACRT schedule derived for stable Ekman flow, with lower rotation, proves more effective in reducing supercooling and promoting stable growth. Furthermore, these counterintuitive results highlight the need for more comprehensive studies on the optimization of ACRT schedules for specific growth systems and for desired growth outcomes.

  14. Optimal numerical parameterization of discontinuous Galerkin method applied to wave propagation problems

    SciTech Connect

    Chevaugeon, Nicolas (E-mail: chevaugeon@gce.ucl.ac.be); Hillewaert, Koen; Gallez, Xavier; Ploumhans, Paul; Remacle, Jean-Francois (E-mail: remacle@gce.ucl.ac.be)

    2007-04-10

    This paper deals with the high-order discontinuous Galerkin (DG) method for solving wave propagation problems. First, we develop a one-dimensional DG scheme and numerically compute dissipation and dispersion errors for various polynomial orders. An optimal combination of time stepping scheme together with the high-order DG spatial scheme is presented. It is shown that using a time stepping scheme with the same formal accuracy as the DG scheme is too expensive for the range of wave numbers that is relevant for practical applications. An efficient implementation of a high-order DG method in three dimensions is presented. Using 1D convergence results, we further show how to adequately choose elementary polynomial orders in order to equi-distribute a priori the discretization error. We also show a straightforward manner to allow variable polynomial orders in a DG scheme. We finally propose some numerical examples in the field of aero-acoustics.

  15. Norm-Optimal ILC Applied to a High-Speed Rack Feeder

    NASA Astrophysics Data System (ADS)

    Schindele, Dominik; Aschemann, Harald; Ritzke, Jöran

    2010-09-01

    Rack feeders, as automated conveying systems for high-bay rackings, are of high practical importance. To shorten transport times by using trajectories with increased kinematic values, accompanying control measures are necessary to reduce the excited structural vibrations. In this contribution, the model-based design of a norm-optimal iterative learning control structure is presented. The rack feeder is modelled as an elastic multibody system. For the mathematical description of the bending deflections, a Ritz ansatz is introduced. The tracking control design is performed separately for both axes using decentralised state-space representations. Both the achievable performance and the resulting tracking accuracy of the proposed control concept are shown by measurement results from the experimental set-up.

  16. Optimization of the Operation of Green Buildings applying the Facility Management

    NASA Astrophysics Data System (ADS)

    Somorová, Viera

    2014-06-01

    Nowadays, in the field of civil engineering there exists an upward trend towards environmental sustainability. It relates mainly to the achievement of energy efficiency and to emission reduction throughout the whole life cycle of the building, i.e., in the course of its implementation, use and liquidation. These requirements are fulfilled, to a large extent, by green buildings. The characteristic feature of green buildings is primarily the highly sophisticated technical and technological equipment installed therein. Sophisticated systems of technological equipment also need sophisticated management. From this point of view, facility management has all the prerequisites to meet this requirement. The paper aims to define facility management as an effective method that enables the optimization of the management of supporting activities by creating conditions for the optimum operation of green buildings, viewed from the aspect of the environmental conditions.

  17. Optimization in multidimensional gas chromatography applying quantitative analysis via a stable isotope dilution assay.

    PubMed

    Schmarr, Hans-Georg; Slabizki, Petra; Legrum, Charlotte

    2013-08-01

    Trace-level analyses in complex matrices benefit from heart-cut multidimensional gas chromatographic (MDGC) separations and quantification via a stable isotope dilution assay. Minimizing the potential transfer of co-eluting matrix compounds from the first-dimension ((1)D) separation into the second-dimension separation requires narrow cut-windows. Knowledge about the nature of the isotope effect in the separation of labeled and unlabeled compounds allows conditions to be chosen that result, at best, in a co-elution situation in the (1)D separation. Since the isotope effect strongly depends on the interactions of the analytes with the stationary phase, an appropriate separation column polarity is mandatory for isotopic co-elution. With 3-alkyl-2-methoxypyrazines and an ionic liquid stationary phase as an example, optimization of the MDGC method is demonstrated and critical aspects of narrow cut-window definition are discussed.

  18. Excited-State Geometry Optimization with the Density Matrix Renormalization Group, as Applied to Polyenes.

    PubMed

    Hu, Weifeng; Chan, Garnet Kin-Lic

    2015-07-14

    We describe and extend the formalism of state-specific analytic density matrix renormalization group (DMRG) energy gradients, first used by Liu et al. [J. Chem. Theor. Comput. 2013, 9, 4462]. We introduce a DMRG wave function maximum overlap following technique to facilitate state-specific DMRG excited-state optimization. Using DMRG configuration interaction (DMRG-CI) gradients, we relax the low-lying singlet states of a series of trans-polyenes up to C20H22. Using the relaxed excited-state geometries, as well as correlation functions, we elucidate the exciton, soliton, and bimagnon ("single-fission") character of the excited states, and find evidence for a planar conical intersection.

  19. Applying Business Process Re-engineering Patterns to optimize WS-BPEL Workflows

    NASA Astrophysics Data System (ADS)

    Buys, Jonas; de Florio, Vincenzo; Blondia, Chris

    With the advent of XML-based SOA, WS-BPEL quickly became a widely accepted standard for modeling business processes. Though SOA is said to embrace the principle of business agility, BPEL process definitions are still manually crafted into their final executable version. While SOA has proven to be a giant leap forward in building flexible IT systems, this static BPEL workflow model is somewhat paradoxical to the need for real business agility and should be enhanced to better sustain continual process evolution. In this paper, we point out the potential of adding business intelligence, with respect to business process re-engineering (BPR) patterns, to the system to allow for automatic business process optimization. Furthermore, we point out that BPR macro-rules could be implemented by leveraging micro-techniques from computer science. We present some practical examples that illustrate the benefit of such adaptive process models and our preliminary findings.

  20. Optimizing physicians' instruction of PACS through e-learning: cognitive load theory applied.

    PubMed

    Devolder, P; Pynoo, B; Voet, T; Adang, L; Vercruysse, J; Duyck, P

    2009-03-01

    This article outlines the strategy used by our hospital to maximize knowledge transfer to referring physicians on using a picture archiving and communication system (PACS). We developed an e-learning platform underpinned by cognitive load theory (CLT) so that in-depth knowledge of PACS's capabilities becomes attainable regardless of the user's prior experience with computers. Applying the techniques proposed by CLT optimizes the learning of the new actions necessary to obtain and manipulate radiological images. The application of cognitive-load-reducing techniques is explained with several examples. We discuss the need to safeguard the physicians' main mental processes to keep the patient's interests in focus. A holistic adoption of CLT techniques, both in teaching and in the configuration of information systems, could be adopted to attain this goal. An overview of the advantages of this instruction method is given at both the individual and organizational levels.

  1. On the local optimal solutions of metabolic regulatory networks using information guided genetic algorithm approach and clustering analysis.

    PubMed

    Zheng, Ying; Yeh, Chen-Wei; Yang, Chi-Da; Jang, Shi-Shang; Chu, I-Ming

    2007-08-31

    Biological information generated by high-throughput technology has made the systems approach feasible for many biological problems. With this approach, optimization of metabolic pathways has been successfully applied to amino acid production. However, in this technique, gene modifications of the metabolic control architecture as well as enzyme expression levels are coupled, resulting in a mixed-integer nonlinear programming problem. Furthermore, the stoichiometric complexity of the metabolic pathway, along with the strongly nonlinear behaviour of the regulatory kinetic models, produces a highly rugged contour in the whole optimization problem. There may exist local optimal solutions that achieve the same level of production as the global optimum through different flux distributions. The purpose of this work is to develop a novel stochastic optimization approach, the information guided genetic algorithm (IGA), to discover the local optima with different levels of modification of the regulatory loop and different production rates. The novelties of this work include the use of information theory, local search, and clustering analysis to discover the local optima which have physical meaning among the qualified solutions.

  2. High direct drive illumination uniformity achieved by multi-parameter optimization approach: a case study of Shenguang III laser facility.

    PubMed

    Tian, Chao; Chen, Jia; Zhang, Bo; Shan, Lianqiang; Zhou, Weimin; Liu, Dongxiao; Bi, Bi; Zhang, Feng; Wang, Weiwu; Zhang, Baohan; Gu, Yuqiu

    2015-05-04

    The uniformity of the compression driver is of fundamental importance for inertial confinement fusion (ICF). In this paper, the illumination uniformity on a spherical capsule during the initial imprinting phase, directly driven by laser beams, has been considered. We aim to explore methods to achieve high direct-drive illumination uniformity on laser facilities designed for indirect-drive ICF. Many parameters affect the irradiation uniformity, such as the Polar Direct Drive displacement quantity, capsule radius, laser spot size and the intensity distribution within a laser beam. A novel approach to reducing the root mean square illumination non-uniformity based on a multi-parameter optimization approach (particle swarm optimization) is proposed, which enables us to obtain a set of optimal parameters over a large parameter space. Finally, this method is applied to improve the direct-drive illumination uniformity provided by the Shenguang III laser facility, and the illumination non-uniformity is reduced from 5.62% to 0.23% for perfectly balanced beams. Moreover, beam errors (power imbalance and pointing error) are taken into account to provide a more practical solution, and results show that this multi-parameter optimization approach is effective.
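
    A minimal particle swarm optimizer of the kind applied here can be sketched as follows; the quadratic stand-in fitness replaces the facility-specific illumination calculation, which is not reproduced, and all swarm parameters are generic defaults rather than the paper's settings.

```python
# Basic PSO sketch: each particle is a vector of beam parameters and
# the fitness is an RMS non-uniformity measure (stand-in function).
import numpy as np

rng = np.random.default_rng(6)
dim, n_part, iters = 4, 30, 200
w, c1, c2 = 0.7, 1.5, 1.5              # inertia, cognitive, social weights

def rms_nonuniformity(p):
    # Stand-in for the spherical illumination model.
    target = np.array([0.3, -0.2, 0.1, 0.05])
    return np.sqrt(np.mean((p - target) ** 2))

pos = rng.uniform(-1, 1, (n_part, dim))
vel = np.zeros((n_part, dim))
pbest = pos.copy()
pbest_f = np.array([rms_nonuniformity(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((n_part, dim)), rng.random((n_part, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([rms_nonuniformity(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("best parameters:", gbest, "non-uniformity:", pbest_f.min())
```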

  3. A penalty approach for nonlinear optimization with discrete design variables

    NASA Technical Reports Server (NTRS)

    Shin, Dong K.; Gurdal, Zafer; Griffin, O. H., Jr.

    1989-01-01

    Introduced here is a simple approach to minimization problems with discrete design variables, obtained by modifying the penalty function approach of converting constrained problems into sequential unconstrained minimization technique (SUMT) problems. It was discovered during the course of the present work that a similar idea was suggested by Marcal and Gellatly; however, no further work has been encountered. A brief description of the SUMT is presented. The form of the penalty function for the discrete-valued design variables and the strategy used for the implementation of the procedure are discussed next. Finally, several design examples are used to demonstrate the procedure, and results are compared with those available in the literature.
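
    A sketch of the penalty idea under simple assumptions: solve a sequence of unconstrained problems, SUMT-style, in which a growing penalty term pulls each design variable toward its nearest allowed discrete value. The objective and the set of allowed values are illustrative, not the paper's penalty form or examples.

```python
# SUMT-style penalty for discrete design variables: increase the
# penalty weight and re-minimize, driving the continuous solution
# toward the allowed discrete set. Illustrative objective and set.
import numpy as np
from scipy.optimize import minimize

allowed = np.array([1.0, 2.0, 3.0, 4.0])       # e.g. available gauges

def objective(x):
    # Stand-in structural objective favoring x = [1.7, 2.6].
    return (x[0] - 1.7) ** 2 + (x[1] - 2.6) ** 2

def discreteness_penalty(x):
    # Zero exactly on allowed values, positive in between.
    d = np.abs(x[:, None] - allowed[None, :])
    return np.sum(d.min(axis=1) ** 2)

x = np.array([1.7, 2.6])
for r in [0.1, 1.0, 10.0, 100.0]:              # increasing penalty weight
    res = minimize(lambda z: objective(z) + r * discreteness_penalty(z), x)
    x = res.x
print("near-discrete design:", x)              # converges toward [2.0, 3.0]
```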

  4. Open approach for machine diagnostics and process optimization

    NASA Astrophysics Data System (ADS)

    McLeod, C. Stuart; Thomas, David W.; West, Andrew A.; Armstrong, Neal A.

    1997-01-01

    Machine diagnostics and process optimization requires efficient techniques for the real time collection and dissemination of information to enterprise personnel. Open data presentations are required for the diverse software packages used by enterprise personnel, from process modeling and statistical process control to financial and Management Information Systems (MIS) packages. Current systems that enable rapid data collection tend to be vendor specific, point to point applications that are difficult and expensive to update, extend and modify. An open architecture is required that is capable of providing low cost real time collection and dissemination of information to end user applications. The development of an open architecture within the object oriented paradigm to solve a process optimization problem within a packaging organization is described in this paper. The architecture encompasses both the high level data dissemination and low level data storage and communications. A robust communications link between the sensors/intelligent nodes positioned on shop floor machines and the archive/dissemination medium is provided by a fieldbus network. The fieldbus communications link is configurable to allow the periodic sampling/monitoring shop floor data, and high performance collection of data regarding specific processes or events. The data transmission techniques utilized allow the high performance collection of data without disrupting object technology infrastructure. The common object request broker architecture is utilized to provide truly distributed systems for the myriad of applications used by enterprise personnel.

  5. Improving the efficiency of a chemotherapy day unit: applying a business approach to oncology.

    PubMed

    van Lent, Wineke A M; Goedbloed, N; van Harten, W H

    2009-03-01

    To improve the efficiency of a hospital-based chemotherapy day unit (CDU). The CDU was benchmarked with two other CDUs to identify their attainable performance levels for efficiency, and causes for differences. Furthermore, an in-depth analysis using a business approach, called lean thinking, was performed. An integrated set of interventions was implemented, among them a new planning system. The results were evaluated using pre- and post-measurements. We observed 24% growth of treatments and bed utilisation, a 12% increase of staff member productivity and an 81% reduction of overtime. The used method improved process design and led to increased efficiency and a more timely delivery of care. Thus, the business approaches, which were adapted for healthcare, were successfully applied. The method may serve as an example for other oncology settings with problems concerning waiting times, patient flow or lack of beds.

  6. A concept for optimizing avalanche rescue strategies using a Monte Carlo simulation approach.

    PubMed

    Reiweger, Ingrid; Genswein, Manuel; Paal, Peter; Schweizer, Jürg

    2017-01-01

    Recent technical and strategical developments have increased the survival chances for avalanche victims. Still, hundreds of people, primarily recreationists, get caught and buried by snow avalanches every year. About 100 die each year in the European Alps, and many more worldwide. Refining concepts for avalanche rescue means optimizing the procedures such that the survival chances are maximized in order to save the greatest possible number of lives. Avalanche rescue includes several parameters related to terrain, natural hazards, the people affected by the event, the rescuers, and the applied search and rescue equipment. The numerous parameters and their complex interaction make it unrealistic for a rescuer to take, in the urgency of the situation, the best possible decisions without clearly structured, easily applicable decision support systems. In order to analyse which measures lead to the best possible survival outcome in the complex environment of an avalanche accident, we present a numerical approach, namely a Monte Carlo simulation. We demonstrate the application of Monte Carlo simulations for two typical, yet tricky questions in avalanche rescue: (1) calculating how deep one should probe in the first passage of a probe line depending on search area, and (2) determining for how long resuscitation should be performed on a specific patient while others are still buried. In both cases, we demonstrate that optimized strategies can be calculated with the Monte Carlo method, provided that the necessary input data are available. Our Monte Carlo simulations also suggest that with a strict focus on the "greatest good for the greatest number", today's rescue strategies can be further optimized in the best interest of patients involved in an avalanche accident.
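
    Question (1) can be sketched as a Monte Carlo trade-off: sample burial depths from an assumed distribution and balance detection probability against probing time, which grows with probe depth. The depth distribution, probe spacing, and timing model below are illustrative assumptions, not the study's calibrated inputs.

```python
# Monte Carlo sketch for the first-pass probe-depth question:
# pick the depth that maximizes expected finds per hour of probing.
import numpy as np

rng = np.random.default_rng(7)
burials = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)  # depths in m

def expected_finds_per_hour(probe_depth, area_m2=2500.0, spacing=0.5):
    p_detect = np.mean(burials <= probe_depth)   # victim within reach
    probes = area_m2 / spacing**2                # probes to cover area
    seconds_per_probe = 2.0 + 3.0 * probe_depth  # deeper = slower
    hours = probes * seconds_per_probe / 3600.0
    return p_detect / hours

depths = np.linspace(0.5, 3.0, 26)
scores = [expected_finds_per_hour(d) for d in depths]
best = depths[int(np.argmax(scores))]
print(f"best first-pass probe depth: {best:.1f} m")
```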

  7. A concept for optimizing avalanche rescue strategies using a Monte Carlo simulation approach

    PubMed Central

    Paal, Peter; Schweizer, Jürg

    2017-01-01

    Recent technical and strategical developments have increased the survival chances for avalanche victims. Still, hundreds of people, primarily recreationists, get caught and buried by snow avalanches every year. About 100 die each year in the European Alps, and many more worldwide. Refining concepts for avalanche rescue means optimizing the procedures such that the survival chances are maximized in order to save the greatest possible number of lives. Avalanche rescue includes several parameters related to terrain, natural hazards, the people affected by the event, the rescuers, and the applied search and rescue equipment. The numerous parameters and their complex interaction make it unrealistic for a rescuer to take, in the urgency of the situation, the best possible decisions without clearly structured, easily applicable decision support systems. In order to analyse which measures lead to the best possible survival outcome in the complex environment of an avalanche accident, we present a numerical approach, namely a Monte Carlo simulation. We demonstrate the application of Monte Carlo simulations for two typical, yet tricky questions in avalanche rescue: (1) calculating how deep one should probe in the first passage of a probe line depending on search area, and (2) determining for how long resuscitation should be performed on a specific patient while others are still buried. In both cases, we demonstrate that optimized strategies can be calculated with the Monte Carlo method, provided that the necessary input data are available. Our Monte Carlo simulations also suggest that with a strict focus on the "greatest good for the greatest number", today's rescue strategies can be further optimized in the best interest of patients involved in an avalanche accident. PMID:28467434

  8. Design and optimization of bilayered tablet of Hydrochlorothiazide using the Quality-by-Design approach

    PubMed Central

    Dholariya, Yatin N; Bansod, Yogesh B; Vora, Rahul M; Mittal, Sandeep S; Shirsat, Ajinath Eknath; Bhingare, Chandrashekhar L

    2014-01-01

    Aim: The aim of the present study is to develop an optimized bilayered tablet using Hydrochlorothiazide (HCTZ) as a model drug candidate, following the quality by design (QbD) approach. Introduction and Method: The bilayered tablet gives biphasic drug release through a loading dose, prepared using the superdisintegrant croscarmellose sodium, and a maintenance dose, prepared using several viscosity grades of hydrophilic polymers. The fundamental principle of QbD is to demonstrate understanding and control of pharmaceutical processes so as to deliver high-quality pharmaceutical products with wide opportunities for continuous improvement. Risk assessment was carried out, and subsequently a 2² factorial design in duplicate was selected to carry out the design of experiments (DOE) for evaluating the interactions and effects of the design factors on the critical quality attributes. The design space was obtained by applying DOE and multivariate analysis, so as to ensure that the desired disintegration time (DT) and drug release are achieved. The bilayered tablets were evaluated for hardness, thickness, friability, drug content uniformity and in vitro drug dissolution. Result: The optimized formulation obtained from the design space exhibits a DT of around 70 s, while DR T95% (time required to release 95% of the drug) was about 720 min. Kinetic studies of the formulations revealed that erosion is the predominant mechanism of drug release. Conclusion: From the obtained results, it was concluded that the independent variables have a significant effect on the dependent responses, which can be deduced from half-normal plots, Pareto charts and surface response graphs. The predicted values matched well with the experimental values, and the results demonstrate the feasibility of the design model in the development and optimization of the HCTZ bilayered tablet. PMID:25006554

  9. Input estimation for drug discovery using optimal control and Markov chain Monte Carlo approaches.

    PubMed

    Trägårdh, Magnus; Chappell, Michael J; Ahnmark, Andrea; Lindén, Daniel; Evans, Neil D; Gennemark, Peter

    2016-04-01

    Input estimation is employed in cases where it is desirable to recover the form of an input function which cannot be directly observed and for which there is no model for the generating process. In pharmacokinetic and pharmacodynamic modelling, input estimation in linear systems (deconvolution) is well established, while the nonlinear case is largely unexplored. In this paper, a rigorous definition of the input-estimation problem is given, and the choices involved in terms of modelling assumptions and estimation algorithms are discussed. In particular, the paper covers Maximum a Posteriori estimates using techniques from optimal control theory, and full Bayesian estimation using Markov Chain Monte Carlo (MCMC) approaches. These techniques are implemented using the optimisation software CasADi, and applied to two example problems: one where the oral absorption rate and bioavailability of the drug eflornithine are estimated using pharmacokinetic data from rats, and one where energy intake is estimated from body-mass measurements of mice exposed to monoclonal antibodies targeting the fibroblast growth factor receptor (FGFR) 1c. The results from the analysis are used to highlight the strengths and weaknesses of the methods used when applied to sparsely sampled data. The presented methods for optimal control are fast and robust, and can be recommended for use in drug discovery. The MCMC-based methods can have long running times and require more expertise from the user. The rigorous definition together with the illustrative examples and suggestions for software serve as a highly promising starting point for application of input-estimation methods to problems in drug discovery.
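
    A minimal sketch of the MCMC branch of this approach, assuming a toy one-compartment model dx/dt = u(t) - k*x with a piecewise-constant input: random-walk Metropolis is run over the input segments. The paper itself uses CasADi and more sophisticated formulations; everything below (rates, noise level, priors) is an illustrative assumption.

        import numpy as np

        rng = np.random.default_rng(1)

        K_ELIM, DT, N_SEG, T_END, SIGMA_OBS = 0.3, 0.5, 8, 16.0, 0.2

        def simulate(u_segments):
            """Euler integration of dx/dt = u(t) - K_ELIM*x, piecewise-constant u."""
            n_steps = int(T_END / DT)
            seg_len = n_steps // N_SEG
            x, traj = 0.0, []
            for i in range(n_steps):
                u = u_segments[min(i // seg_len, N_SEG - 1)]
                x += DT * (u - K_ELIM * x)
                traj.append(x)
            return np.array(traj)

        # Synthetic "data" from a known true input, plus measurement noise.
        u_true = np.array([0.0, 1.0, 2.0, 1.5, 0.5, 0.2, 0.1, 0.0])
        obs = simulate(u_true) + rng.normal(0.0, SIGMA_OBS, int(T_END / DT))

        def log_post(u):
            if np.any(u < 0):                 # nonnegativity prior on the input
                return -np.inf
            resid = simulate(u) - obs
            return -0.5 * np.sum(resid**2) / SIGMA_OBS**2 - 0.1 * np.sum(u**2)

        # Random-walk Metropolis over the input segments.
        u, lp, samples = np.ones(N_SEG), None, []
        lp = log_post(u)
        for it in range(20_000):
            prop = u + rng.normal(0.0, 0.05, N_SEG)
            lp_prop = log_post(prop)
            if np.log(rng.random()) < lp_prop - lp:
                u, lp = prop, lp_prop
            if it > 5_000 and it % 10 == 0:   # thin after burn-in
                samples.append(u.copy())

        print("posterior mean input:", np.mean(samples, axis=0).round(2))
        print("true input:          ", u_true)

    The posterior spread over samples is what distinguishes the full Bayesian treatment from a single MAP trajectory, at the cost of the longer running times the abstract mentions.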

  10. A Global Approach to the Optimal Trajectory Based on an Improved Ant Colony Algorithm for Cold Spray

    NASA Astrophysics Data System (ADS)

    Cai, Zhenhua; Chen, Tingyang; Zeng, Chunnian; Guo, Xueping; Lian, Huijuan; Zheng, You; Wei, Xiaoxu

    2016-12-01

    This paper is concerned with finding a global approach to obtain the shortest complete coverage trajectory on complex surfaces for cold spray applications. A slicing algorithm is employed to decompose the free-form complex surface into several small pieces of simple topological type. The problem of finding the optimal arrangement of the pieces is translated into a generalized traveling salesman problem (GTSP). Owing to its high searching capability and convergence performance, an improved ant colony algorithm is then used to solve the GTSP. Through off-line simulation, a robot trajectory is generated based on the optimized result. The approach is applied to coat real components with a complex surface by using the cold spray system with copper as the spraying material.
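
    A plain ant colony system for the underlying tour problem might look as follows. This is the classical TSP form with random points standing in for surface pieces, not the authors' improved GTSP variant; all parameter values are conventional defaults, not theirs.

        import numpy as np

        rng = np.random.default_rng(7)

        # Random "slice" coordinates standing in for decomposed surface pieces.
        n = 20
        pts = rng.random((n, 2))
        dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1) + np.eye(n)

        ALPHA, BETA, RHO, Q, N_ANTS, N_ITER = 1.0, 3.0, 0.5, 1.0, 20, 200
        tau = np.ones((n, n))       # pheromone trails
        eta = 1.0 / dist            # heuristic desirability (inverse distance)
        best_len, best_tour = np.inf, None

        for _ in range(N_ITER):
            tours = []
            for _ant in range(N_ANTS):
                tour = [rng.integers(n)]
                unvisited = set(range(n)) - {tour[0]}
                while unvisited:
                    i = tour[-1]
                    cand = np.array(sorted(unvisited))
                    w = tau[i, cand] ** ALPHA * eta[i, cand] ** BETA
                    nxt = rng.choice(cand, p=w / w.sum())
                    tour.append(nxt)
                    unvisited.discard(nxt)
                length = sum(dist[tour[k], tour[(k + 1) % n]] for k in range(n))
                tours.append((length, tour))
                if length < best_len:
                    best_len, best_tour = length, tour
            tau *= 1.0 - RHO                    # pheromone evaporation
            for length, tour in tours:          # pheromone deposit
                for k in range(n):
                    tau[tour[k], tour[(k + 1) % n]] += Q / length

        print(f"best tour length: {best_len:.3f}")

    A GTSP version would additionally group nodes into clusters (one entry/exit pose per surface piece) and require each cluster to be visited exactly once.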

  11. A Computational Approach for Near-Optimal Path Planning and Guidance for Systems with Nonholonomic Constraints

    DTIC Science & Technology

    2010-04-14

    Novel methods for discretization based on Legendre-Gauss and Legendre-Gauss-Radau quadrature points are presented. Using this approach, the finite-dimensional approximation is kept low-dimensional, potentially enabling near real-time ... "Costate Estimation of Finite-Horizon and Infinite-Horizon Optimal Control Problems Using a Radau Pseudospectral Method," Computational Optimization and ...
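
    The Legendre-Gauss points referred to above are available directly in NumPy. The sketch below shows the quadrature property that underpins pseudospectral discretization: an n-point rule integrates polynomials of degree up to 2n-1 exactly, which is why few collocation points suffice.

        import numpy as np

        # Legendre-Gauss nodes and weights on [-1, 1].
        n = 5
        nodes, weights = np.polynomial.legendre.leggauss(n)

        # Check: integral of x^8 over [-1, 1] is 2/9; a 5-point rule
        # (exact up to degree 9) reproduces it to machine precision.
        approx = np.sum(weights * nodes**8)
        print(approx, 2.0 / 9.0)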

  12. Predicting the optimal dopant concentration in gadolinium doped ceria: a kinetic lattice Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Dholabhai, Pratik P.; Anwar, Shahriar; Adams, James B.; Crozier, Peter A.; Sharma, Renu

    2012-01-01

    Gadolinium-doped ceria (GDC) is a promising alternative electrolyte material for solid oxide fuel cells that offers the possibility of operation in the intermediate temperature range (773-1073 K). To determine the optimal dopant concentration in GDC, we have employed a systematic approach, applying a 3D kinetic lattice Monte Carlo (KLMC) model of vacancy diffusion that takes previously calculated activation energies for vacancy migration in GDC as inputs. The KLMC simulations include vacancy-vacancy repulsion effects. Increasing the dopant concentration increases the vacancy concentration, which increases the ionic conductivity. At higher concentrations, however, vacancy-vacancy repulsion impedes vacancy diffusion and, together with vacancy trapping by dopants, decreases the ionic conductivity. The maximum ionic conductivity is predicted to occur at a Gd dopant mole fraction of ≈20-25%. Placing Gd dopants in pairs, instead of randomly, was found to decrease the conductivity by ≈50%. Overall, the trends in ionic conductivity obtained with the KLMC model developed in this work are in reasonable agreement with the available experimental data. This KLMC model can be applied to a variety of ceria-based electrolyte materials to predict the optimum dopant concentration.
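
    A rejection-free kinetic Monte Carlo loop of the kind described can be sketched as follows. The square lattice, barrier values, and Gd placement are simplified stand-ins, not the paper's fluorite-lattice model; only the event-selection and clock-advance mechanics carry over.

        import numpy as np

        rng = np.random.default_rng(3)

        KB_T = 0.0862            # k_B*T in eV at ~1000 K
        NU = 1e13                # attempt frequency, 1/s
        L = 50                   # lattice edge (sites)

        # Hypothetical migration barriers (eV): hops towards a Gd site are slower.
        E_NORMAL, E_NEAR_GD, GD_FRACTION = 0.5, 0.7, 0.2
        is_gd = rng.random((L, L)) < GD_FRACTION    # toy dopant map

        MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]

        def hop_rates(pos):
            rates = []
            for dx, dy in MOVES:
                tx, ty = (pos[0] + dx) % L, (pos[1] + dy) % L
                barrier = E_NEAR_GD if is_gd[tx, ty] else E_NORMAL
                rates.append(NU * np.exp(-barrier / KB_T))
            return np.array(rates)

        # Rejection-free (BKL) KMC for a single vacancy: pick a hop with
        # probability proportional to its rate, advance the clock by an
        # exponential waiting time drawn from the total rate.
        pos, disp, t = (0, 0), np.zeros(2), 0.0
        for _ in range(100_000):
            rates = hop_rates(pos)
            total = rates.sum()
            k = rng.choice(4, p=rates / total)
            dx, dy = MOVES[k]
            pos = ((pos[0] + dx) % L, (pos[1] + dy) % L)
            disp += (dx, dy)
            t += -np.log(rng.random()) / total

        D = np.sum(disp**2) / (4.0 * t)   # 2-D tracer diffusivity (lattice units^2/s)
        print(f"estimated D = {D:.3e}")

    The concentration study in the paper amounts to repeating such runs with many interacting vacancies over a range of Gd fractions and reading off where the effective diffusivity peaks.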

  13. Applying Monte-Carlo simulations to optimize an inelastic neutron scattering system for soil carbon analysis.

    PubMed

    Yakubova, Galina; Kavetskiy, Aleksandr; Prior, Stephen A; Torbert, H Allen

    2017-10-01

    Computer Monte Carlo (MC) simulations (Geant4) of neutron propagation and the acquisition of the gamma response from soil samples were applied to evaluate inelastic neutron scattering (INS) system performance characteristics [minimal detectable level (MDL), sensitivity] for soil carbon measurement. The INS system model with the best performance characteristics was determined from the MC simulation results. Measurements of the MDL using an experimental prototype based on this model demonstrated good agreement with the simulated data. This prototype will be used as the base engineering design for a new INS system. Copyright © 2017. Published by Elsevier Ltd.
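
    One common way to turn simulated spectra into an MDL is the Currie detection-limit formula; the sketch below assumes that convention (the abstract does not state which definition was used) and uses invented count numbers.

        import numpy as np

        def currie_mdl(background_counts, sensitivity_counts_per_pct_c):
            """Minimal detectable level via the Currie formula.

            L_D = 2.71 + 4.65*sqrt(B) is the detection limit in net counts
            (95% confidence); dividing by the sensitivity (net counts per
            percent carbon) converts it into a carbon concentration.
            """
            l_d = 2.71 + 4.65 * np.sqrt(background_counts)
            return l_d / sensitivity_counts_per_pct_c

        # Hypothetical numbers standing in for simulated spectra: background
        # counts under the 4.44 MeV carbon peak, and net counts per 1% carbon.
        print(f"MDL ≈ {currie_mdl(2.5e4, 1.2e3):.2f} % carbon")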

  14. Modern Optimal Control Methods Applied in Active Control of a Tetrahedron.

    DTIC Science & Technology

    1980-12-01

    [OCR-garbled scan excerpt; little is recoverable beyond the front matter: THESIS "AFIT/GA/A/8OP2", Alan Janiszewski, "Approved for public release", and contents headings "II. System Model", "General Configuration", "Equations of Motion", "Modal ...".]

  15. Reformulation linearization technique based branch-and-reduce approach applied to regional water supply system planning

    NASA Astrophysics Data System (ADS)

    Lan, Fujun; Bayraksan, Güzin; Lansey, Kevin

    2016-03-01

    A regional water supply system design problem that determines pipe and pump design parameters and water flows over a multi-year planning horizon is considered. A non-convex nonlinear model is formulated and solved by a branch-and-reduce global optimization approach. The lower bounding problem is constructed via a three-pronged effort that involves transforming the space of certain decision variables, polyhedral outer approximations, and the Reformulation Linearization Technique (RLT). Range reduction techniques are employed systematically to speed up convergence. Computational results demonstrate the efficiency of the proposed algorithm and, in particular, the critical role that range reduction techniques can play in RLT-based branch-and-bound methods. Results also indicate that using reclaimed water not only conserves freshwater sources but also provides a cost-effective non-potable water source in arid regions. Supplemental data for this article can be accessed at http://dx.doi.org/10.1080/0305215X.2015.1016508.
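
    The degree-1 RLT constraints for a bilinear term are the McCormick inequalities. The sketch below builds that polyhedral relaxation for w = x*y and solves the LP lower-bounding problem with SciPy; the bounds are illustrative, not the water-system model, and here the bound happens to be tight because the optimum sits at a box vertex. Branch-and-reduce shrinks the variable boxes, which is exactly what tightens this relaxation.

        import numpy as np
        from scipy.optimize import linprog

        # Bilinear term w = x*y with x in [1, 3], y in [2, 4].
        xL, xU, yL, yU = 1.0, 3.0, 2.0, 4.0

        # Variables ordered [x, y, w]; each row encodes a*x + b*y + c*w <= rhs.
        A = np.array([
            [ yL,  xL, -1.0],   # w >= yL*x + xL*y - xL*yL
            [ yU,  xU, -1.0],   # w >= yU*x + xU*y - xU*yU
            [-yU, -xL,  1.0],   # w <= yU*x + xL*y - xL*yU
            [-yL, -xU,  1.0],   # w <= yL*x + xU*y - xU*yL
        ])
        b = np.array([xL * yL, xU * yU, -xL * yU, -xU * yL])

        res = linprog(c=[0.0, 0.0, 1.0], A_ub=A, b_ub=b,
                      bounds=[(xL, xU), (yL, yU), (None, None)])
        print(f"LP lower bound on min x*y: {res.fun:.2f} "
              f"(true minimum: {xL * yL:.2f})")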

  16. An optimal dynamic inversion-based neuro-adaptive approach for treatment of chronic myelogenous leukemia.

    PubMed

    Padhi, Radhakant; Kothari, Mangal

    2007-09-01

    Combining the advanced techniques of optimal dynamic inversion and model-following neuro-adaptive control design, an innovative technique is presented for designing an automatic drug administration strategy for effective treatment of chronic myelogenous leukemia (CML). A recently developed nonlinear mathematical model of cell dynamics is used to design the controller (medication dosage). First, a nominal controller is designed based on the principle of optimal dynamic inversion. This controller can treat nominal model patients (patients described by the mathematical model used here with the nominal parameter values) effectively. However, since the system parameters of a realistic patient can differ from those of the nominal model, simulation studies for such patients indicate that the nominal controller is either inefficient or, worse, ineffective; i.e. the trajectory of the number of cancer cells either shows unsatisfactory transient behavior or grows in an unstable manner. Hence, to make the drug dosage history more realistic and patient-specific, a model-following neuro-adaptive controller is augmented to the nominal controller. In this adaptive approach, a neural network trained online provides the adaptive control augmentation. The training process is based on Lyapunov stability theory, which guarantees both stability of the cancer cell dynamics and boundedness of the network weights. Simulation studies show this adaptive control design to be very effective in treating CML for realistic patients. Sufficient generality is retained in the mathematical developments so that the technique can be applied to other similar nonlinear control design problems as well.
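
    The inversion step itself is simple to illustrate on a scalar toy plant. The sketch below is not the CML cell-dynamics model and omits the neuro-adaptive augmentation; it only shows how the control input is solved for algebraically so that the tracking error follows prescribed stable dynamics.

        import numpy as np

        # Toy scalar plant dx/dt = f(x) + g(x)*u, a stand-in for the cell model.
        f = lambda x: -0.5 * x + 0.2 * np.sin(x)
        g = lambda x: 1.0 + 0.1 * x**2       # nonzero, so inversion is well posed

        K_ERR = 2.0                          # desired error dynamics: de/dt = -K_ERR*e
        DT, T_END = 0.01, 10.0

        x = 3.0
        for i in range(int(T_END / DT)):
            t = i * DT
            x_ref, xdot_ref = np.cos(t), -np.sin(t)        # reference to track
            u = (xdot_ref - K_ERR * (x - x_ref) - f(x)) / g(x)   # dynamic inversion
            x += DT * (f(x) + g(x) * u)                    # Euler step of the plant

        print(f"tracking error at t={T_END}: {abs(x - np.cos(T_END)):.4f}")

    The adaptive layer in the paper compensates for the case where f and g are imperfectly known, with the network's online update law derived from a Lyapunov argument.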

  17. An Optimization-Based Approach to Injector Element Design

    NASA Technical Reports Server (NTRS)

    Tucker, P. Kevin; Shyy, Wei; Vaidyanathan, Rajkumar; Turner, Jim (Technical Monitor)

    2000-01-01

    An injector optimization methodology, method i, is used to investigate optimal design points for gaseous oxygen/gaseous hydrogen (GO2/GH2) injector elements. A swirl coaxial element and an unlike impinging element (a fuel-oxidizer-fuel triplet) are used to facilitate the study. The elements are optimized in terms of design variables such as fuel pressure drop, ΔPf, oxidizer pressure drop, ΔPo, combustor length, Lcomb, and full-cone swirl angle, θ (for the swirl element), or impingement half-angle, α (for the impinging element), at a given mixture ratio and chamber pressure. Dependent variables such as energy release efficiency, ERE, wall heat flux, Qw, injector heat flux, Qinj, relative combustor weight, Wrel, and relative injector cost, Crel, are calculated and then correlated with the design variables. An empirical design methodology is used to generate these responses for both element types. Method i is then used to generate response surfaces for each dependent variable for both types of elements. Desirability functions based on dependent-variable constraints are created and used to develop composite response surfaces representing the five dependent variables in terms of the input variables. Three examples illustrating the utility and flexibility of method i are discussed in detail for each element type. First, joint response surfaces are constructed by sequentially adding dependent variables. Optimum designs are identified after the addition of each variable, and the effect each variable has on the element design is illustrated. This stepwise demonstration also highlights the importance of including variables such as weight and cost early in the design process. Secondly, using the composite response surface that includes all five dependent variables, unequal weights are assigned to emphasize certain variables relative to others. Here, method i is used to enable objective trade studies on design issues
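
    A composite-desirability search of the kind described can be sketched as follows. The response surfaces and desirability limits below are invented placeholders, not the paper's empirical correlations; only the Derringer-Suich-style geometric-mean construction is the point.

        import numpy as np

        # Synthetic stand-ins for two responses over two coded design variables
        # (think: pressure drop and combustor length).
        def ere(x):     return 0.95 - 0.05 * (x[0] - 0.3)**2 - 0.04 * (x[1] + 0.2)**2
        def q_wall(x):  return 1.0 + 0.8 * x[0]**2 + 0.5 * x[0] * x[1] + 0.3 * x[1]**2

        def desirability_max(y, lo, hi):
            """One-sided desirability for a response to maximize: 0 below lo, 1 above hi."""
            return np.clip((y - lo) / (hi - lo), 0.0, 1.0)

        def desirability_min(y, lo, hi):
            """One-sided desirability for a response to minimize."""
            return np.clip((hi - y) / (hi - lo), 0.0, 1.0)

        # Composite desirability = geometric mean; grid search over coded variables.
        grid = np.linspace(-1, 1, 201)
        best_d, best_x = -1.0, None
        for a in grid:
            for b in grid:
                x = (a, b)
                d = np.sqrt(desirability_max(ere(x), 0.85, 0.95) *
                            desirability_min(q_wall(x), 1.0, 2.5))
                if d > best_d:
                    best_d, best_x = d, x

        print(f"best design (coded): {best_x}, composite desirability {best_d:.3f}")

    Unequal emphasis between responses, as in the paper's trade studies, would be expressed by raising each desirability to a different weight before taking the (weighted) geometric mean.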

  18. Evaluation of Jumping and Creeping Regularization Approaches Applied to 3D Seismic Tomography

    NASA Astrophysics Data System (ADS)

    Liu, M.; Ramachandran, K.

    2011-12-01

    ... are evaluated on a synthetic 3-D true model obtained from a large-scale experiment. The evaluation is performed for the jumping and creeping approaches at various levels of smoothing constraints and with different initial models. The final models are compared against the true model by computing the residual distance between them. Horizontal and vertical roughness in the final models is computed and compared with the roughness of the true model. The correlation between the true and final models is computed to evaluate the similarity of spatial patterns in the models. The study also shows that the average 1-D models derived from the final models are very close to one another, indicating that this is an optimal approach for constructing 1-D starting models.
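
    The jumping/creeping distinction can be made concrete on a toy linear inverse problem: jumping regularizes the model itself, while creeping regularizes the update from the starting model, so the creeping result inherits the starting model's character. All matrices below are synthetic stand-ins for a tomography operator.

        import numpy as np

        rng = np.random.default_rng(2)

        # Toy linear problem d = G m, with a 2nd-difference smoothing operator L.
        n, m_dim = 40, 30
        G = rng.random((n, m_dim))
        m_true = np.sin(np.linspace(0, np.pi, m_dim))
        d = G @ m_true + rng.normal(0, 0.05, n)

        Lmat = (np.diag(-2.0 * np.ones(m_dim)) +
                np.diag(np.ones(m_dim - 1), 1) + np.diag(np.ones(m_dim - 1), -1))
        m0 = np.full(m_dim, 0.5)        # starting model
        lam = 1.0

        def lsq(A, b):
            return np.linalg.solve(A.T @ A, A.T @ b)

        A_aug = np.vstack([G, lam * Lmat])

        # Jumping: min ||G m - d||^2 + lam^2 ||L m||^2 (smoothness of the model).
        m_jump = lsq(A_aug, np.concatenate([d, np.zeros(m_dim)]))

        # Creeping: min ||G dm - r||^2 + lam^2 ||L dm||^2 (smoothness of the update);
        # the final model m0 + dm keeps whatever roughness m0 already had.
        r = d - G @ m0
        m_creep = m0 + lsq(A_aug, np.concatenate([r, np.zeros(m_dim)]))

        for name, m in [("jumping", m_jump), ("creeping", m_creep)]:
            print(f"{name}: misfit={np.linalg.norm(G @ m - d):.3f}, "
                  f"roughness={np.linalg.norm(Lmat @ m):.3f}")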

  19. Particle Swarm Optimization Approach in a Consignment Inventory System

    NASA Astrophysics Data System (ADS)

    Sharifyazdi, Mehdi; Jafari, Azizollah; Molamohamadi, Zohreh; Rezaeiahari, Mandana; Arshizadeh, Rahman

    2009-09-01

    Consignment Inventory (CI) is inventory that is in the possession of the customer but still owned by the supplier. This creates a condition of shared risk: the supplier risks the capital investment associated with the inventory, while the customer risks dedicating retail space to the product. This paper considers both the vendor's and the retailers' costs in an integrated model. The vendor here is a warehouse that stores one type of product and supplies it at the same wholesale price to multiple retailers, who then sell the product in independent markets at retail prices. Our main aim is to design a CI system that generates minimum total cost for the two parties. A Particle Swarm Optimization (PSO) algorithm is developed to calculate the proper values of the decision variables. Finally, a sensitivity analysis is performed to examine the effect of each parameter on the decision variables, and the performance of PSO is compared with that of a genetic algorithm.
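
    A basic global-best PSO loop is sketched below. The sphere function stands in for the integrated vendor-retailer cost model, whose decision variables (order quantities, replenishment parameters) would replace x; the inertia and acceleration weights are conventional defaults, not the paper's tuned values.

        import numpy as np

        rng = np.random.default_rng(4)

        # Stand-in cost function; the CI model's joint cost would replace this.
        def cost(x):
            return np.sum(x**2, axis=-1)

        DIM, N_PART, N_ITER = 5, 30, 200
        W, C1, C2 = 0.7, 1.5, 1.5        # inertia, cognitive and social weights

        pos = rng.uniform(-10, 10, (N_PART, DIM))
        vel = np.zeros((N_PART, DIM))
        pbest, pbest_val = pos.copy(), cost(pos)
        gbest = pbest[np.argmin(pbest_val)].copy()

        for _ in range(N_ITER):
            r1, r2 = rng.random((2, N_PART, DIM))
            # Velocity update: pull towards personal best and global best.
            vel = W * vel + C1 * r1 * (pbest - pos) + C2 * r2 * (gbest - pos)
            pos += vel
            val = cost(pos)
            improved = val < pbest_val
            pbest[improved], pbest_val[improved] = pos[improved], val[improved]
            gbest = pbest[np.argmin(pbest_val)].copy()

        print(f"best cost found: {cost(gbest):.2e}")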

  20. Probabilistic Modeling Approach to Thermoelectric Systems Design Optimization

    SciTech Connect

    Karri, Naveen K.; Hendricks, Terry J.

    2007-06-25

    Recent studies on thermoelectric (TE) systems indicate that the existence of high-figure-of-merit (ZT) materials alone is not sufficient for superior system performance; an integrated system-level analysis is necessary to attain such performance. This is because numerous design parameters at various levels of the system are randomly variable in nature and can affect overall system performance. In this work, the effect of stochasticity in design variables at various levels of a TE system has been studied and analyzed to attain optimal design solutions. Starting with stochasticity in one of the environmental variables, a progression was made towards studying the coupled effects of stochasticity in multiple variables at the environmental and heat-exchanger levels of a thermoelectric generator (TEG) system. Research and analysis tools were developed to incorporate stochasticity in single or multiple variables, individually or simultaneously, to study both the individual and coupled effects of input design variable stochasticity (probability distributions) on output performance variables. Results indicate that normal (Gaussian) distributions in input design parameters may not produce Gaussian distributions in the output parameters. Also, when the stochasticities in multiple variables are coupled, the standard deviations of the performance parameters are magnified, and their means deviate further from the deterministic values. Although more studies are required to quantify the parameters for design modifications, the studies presented in this paper affirm that incorporating stochastic variability not only aids in understanding the effects of system design variable randomness on expected output performance, but also serves to guide design decisions towards optimal TE system designs that are more robust, with improved reliability and performance across a range of off-nominal conditions.
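
    The kind of stochastic propagation described can be sketched with a toy power model. The model and distributions below are illustrative assumptions, not the paper's TEG system model; the sketch reproduces the qualitative findings (non-Gaussian outputs, magnified spread and shifted means when stochastic inputs are coupled).

        import numpy as np

        rng = np.random.default_rng(6)
        N = 200_000

        # Toy TE power model: P proportional to dT^2 / R -- illustrative only.
        def power(dT, R):
            return dT**2 / (4.0 * R)

        dT_nom, R_nom = 200.0, 2.0

        # Case 1: stochasticity in one variable (temperature difference).
        dT = rng.normal(dT_nom, 15.0, N)
        P1 = power(dT, R_nom)

        # Case 2: coupled stochasticity in two variables (dT and resistance).
        R = rng.normal(R_nom, 0.2, N)
        P2 = power(dT, R)

        for name, P in [("dT only", P1), ("dT and R", P2)]:
            skew = np.mean(((P - P.mean()) / P.std())**3)
            print(f"{name}: mean={P.mean():.1f}, std={P.std():.1f}, "
                  f"skew={skew:.2f} (deterministic {power(dT_nom, R_nom):.1f})")

    Even with Gaussian inputs, the nonlinear model yields a skewed output distribution, and coupling the second variable both widens it and pulls the mean further from the deterministic prediction.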