Science.gov

Sample records for optimization approach applied

  1. Optimal control theory (OWEM) applied to a helicopter in the hover and approach phase

    NASA Technical Reports Server (NTRS)

    Born, G. J.; Kai, T.

    1975-01-01

    A major difficulty in the practical application of linear-quadratic regulator theory is how to choose the weighting matrices in the quadratic cost function. A control system design with optimal weighting matrices was applied to a helicopter in the hover and approach phases. The weighting matrices were calculated to extremize the closed-loop total system damping subject to constraints on the determinants. The extremization is effectively a minimization of the effects of disturbances, and can be interpreted as a compromise between the generalized system accuracy and the generalized system response speed. The trade-off between accuracy and response speed is adjusted by a single parameter, the ratio of the determinants. This approach provides an objective measure for the design of a control system, a measure determined by the system requirements.
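
    As a rough illustration of the weighting-matrix trade-off the abstract describes, the sketch below solves a standard LQR problem while sweeping a single scalar that balances state accuracy against control effort. The dynamics matrices are hypothetical placeholders, and the paper's determinant-ratio (OWEM) criterion is not reproduced; this is only the textbook Riccati-based LQR.

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_are

    A = np.array([[0.0, 1.0], [-1.0, -0.5]])  # hypothetical hover dynamics
    B = np.array([[0.0], [1.0]])

    def lqr_gain(Q, R):
        """Solve the continuous-time algebraic Riccati equation and
        return the state-feedback gain K (control law u = -K x)."""
        P = solve_continuous_are(A, B, Q, R)
        return np.linalg.solve(R, B.T @ P)

    # Sweeping one scalar trades accuracy (state penalty) against response
    # speed / control effort (input penalty), loosely analogous to the
    # paper's single determinant-ratio parameter.
    for rho in (0.1, 1.0, 10.0):
        K = lqr_gain(Q=np.eye(2), R=rho * np.eye(1))
        print(f"rho={rho}: K={K}")
    ```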

  2. Further Development of an Optimal Design Approach Applied to Axial Magnetic Bearings

    NASA Technical Reports Server (NTRS)

    Bloodgood, V. Dale, Jr.; Groom, Nelson J.; Britcher, Colin P.

    2000-01-01

    Classical design methods involved in magnetic bearings and magnetic suspension systems have always had their limitations. Because of this, the overall effectiveness of a design has always relied heavily on the skill and experience of the individual designer. This paper combines two approaches that have been developed to aid the accuracy and efficiency of magnetostatic design. The first approach integrates classical magnetic circuit theory with modern optimization theory to increase design efficiency. The second approach uses loss factors to increase the accuracy of classical magnetic circuit theory. As an example, an axial magnetic thrust bearing is designed for minimum power.

  3. Macronutrient modifications of optimal foraging theory: an approach using indifference curves applied to some modern foragers

    SciTech Connect

    Hill, K.

    1988-06-01

    The use of energy (calories) as the currency to be maximized per unit time in Optimal Foraging Models is considered in light of data on several foraging groups. Observations on the Ache, Cuiva, and Yora foragers suggest men do not attempt to maximize energetic return rates, but instead often concentrate on acquiring meat resources which provide lower energetic returns. The possibility that this preference is due to the macronutrient composition of hunted and gathered foods is explored. Indifference curves are introduced as a means of modeling the tradeoff between two desirable commodities, meat (protein-lipid) and carbohydrate, and a specific indifference curve is derived using observed choices in five foraging situations. This curve is used to predict the amount of meat that Mbuti foragers will trade for carbohydrate, in an attempt to test the utility of the approach.
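
    For orientation, the generic indifference-curve condition underlying this kind of analysis is sketched below (standard microeconomic form with assumed notation, not the specific curve fitted from the five foraging situations): along a curve of constant utility in meat m and carbohydrate c, the slope equals the negative marginal rate of substitution, and a forager should trade meat for carbohydrate up to the point where this slope matches the prevailing exchange rate.

    ```latex
    U(m, c) = k
    \quad\Longrightarrow\quad
    \left.\frac{\mathrm{d}c}{\mathrm{d}m}\right|_{U = k}
      = -\,\frac{\partial U / \partial m}{\partial U / \partial c}
    ```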

  4. Optimal control of open quantum systems: A combined surrogate Hamiltonian optimal control theory approach applied to photochemistry on surfaces

    SciTech Connect

    Asplund, Erik; Kluener, Thorsten

    2012-03-28

    In this paper, control of open quantum systems with emphasis on the control of surface photochemical reactions is presented. A quantum system in a condensed phase undergoes strong dissipative processes. From a theoretical viewpoint, it is important to model such processes in a rigorous way. In this work, the description of open quantum systems is realized within the surrogate Hamiltonian approach [R. Baer and R. Kosloff, J. Chem. Phys. 106, 8862 (1997)]. An efficient and accurate method to find control fields is optimal control theory (OCT) [W. Zhu, J. Botina, and H. Rabitz, J. Chem. Phys. 108, 1953 (1998); Y. Ohtsuki, G. Turinici, and H. Rabitz, J. Chem. Phys. 120, 5509 (2004)]. To gain control of open quantum systems, the surrogate Hamiltonian approach and OCT, with time-dependent targets, are combined. Three open quantum systems are investigated by the combined method, a harmonic oscillator immersed in an ohmic bath, CO adsorbed on a platinum surface, and NO adsorbed on a nickel oxide surface. Throughout this paper, atomic units, i.e., ℏ = m_e = e = a_0 = 1, have been used unless otherwise stated.

  5. Optimal control of open quantum systems: a combined surrogate Hamiltonian optimal control theory approach applied to photochemistry on surfaces.

    PubMed

    Asplund, Erik; Klüner, Thorsten

    2012-03-28

    In this paper, control of open quantum systems with emphasis on the control of surface photochemical reactions is presented. A quantum system in a condensed phase undergoes strong dissipative processes. From a theoretical viewpoint, it is important to model such processes in a rigorous way. In this work, the description of open quantum systems is realized within the surrogate Hamiltonian approach [R. Baer and R. Kosloff, J. Chem. Phys. 106, 8862 (1997)]. An efficient and accurate method to find control fields is optimal control theory (OCT) [W. Zhu, J. Botina, and H. Rabitz, J. Chem. Phys. 108, 1953 (1998); Y. Ohtsuki, G. Turinici, and H. Rabitz, J. Chem. Phys. 120, 5509 (2004)]. To gain control of open quantum systems, the surrogate Hamiltonian approach and OCT, with time-dependent targets, are combined. Three open quantum systems are investigated by the combined method, a harmonic oscillator immersed in an ohmic bath, CO adsorbed on a platinum surface, and NO adsorbed on a nickel oxide surface. Throughout this paper, atomic units, i.e., ℏ = m_e = e = a_0 = 1, have been used unless otherwise stated.
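
    For context, the objective functional that OCT maximizes typically has the generic form below, balancing the expectation value of a target operator at final time T against a fluence penalty on the control field (a textbook form with assumed notation; the time-dependent-target functional used by the authors generalizes this):

    ```latex
    J[\varepsilon] \;=\; \langle \psi(T) \,\vert\, \hat{O} \,\vert\, \psi(T) \rangle
    \;-\; \alpha_0 \int_0^T \lvert \varepsilon(t) \rvert^2 \,\mathrm{d}t
    ```

    Maximizing J over the field ε(t), subject to the equations of motion (here, the surrogate-Hamiltonian dynamics), yields coupled equations for the optimal pulse that are solved iteratively.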

  6. Augmented design and analysis of computer experiments: a novel tolerance embedded global optimization approach applied to SWIR hyperspectral illumination design.

    PubMed

    Keresztes, Janos C; John Koshel, R; D'huys, Karlien; De Ketelaere, Bart; Audenaert, Jan; Goos, Peter; Saeys, Wouter

    2016-12-26

    A novel meta-heuristic approach for minimizing nonlinear constrained problems is proposed, which offers tolerance information during the search for the global optimum. The method is based on the concept of design and analysis of computer experiments combined with a novel two-phase design augmentation (DACEDA), which models the entire merit space using a Gaussian process, with iteratively increased resolution around the optimum. The algorithm is introduced through a series of case studies of increasing complexity for optimizing the uniformity of a short-wave infrared (SWIR) hyperspectral imaging (HSI) illumination system (IS). The method is first demonstrated for a two-dimensional problem consisting of the positioning of analytical isotropic point sources. It is then applied to two-dimensional (2D) and five-dimensional (5D) SWIR HSI IS versions using close- and far-field measured source models within the non-sequential ray-tracing software FRED, including inherent stochastic noise. The proposed method is compared to other heuristic approaches such as simplex and simulated annealing (SA). It is shown that DACEDA converges towards a minimum with a 1% improvement compared to simplex and SA, and, more importantly, requires only half the number of simulations. Finally, a concurrent tolerance analysis is performed within DACEDA for the five-dimensional case so that further simulations are not required.
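
    The surrogate-modeling idea at the heart of DACE-style optimization can be sketched as follows: fit a Gaussian process to the designs evaluated so far, then augment the design with a new point chosen from the model. This is a generic one-dimensional illustration with a toy merit function, not the authors' two-phase DACEDA scheme or their ray-traced merit space.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def merit(x):                       # stand-in for a ray-trace simulation
        return np.sin(3 * x) + 0.5 * x**2

    rng = np.random.default_rng(0)
    X = rng.uniform(-2, 2, size=(6, 1))          # initial design
    y = merit(X).ravel()

    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5))
    for _ in range(20):                          # iterative augmentation
        gp.fit(X, y)
        cand = np.linspace(-2, 2, 401).reshape(-1, 1)
        mu, sd = gp.predict(cand, return_std=True)
        x_new = cand[np.argmin(mu - 1.0 * sd)]   # lower-confidence-bound pick
        X = np.vstack([X, [x_new]])
        y = np.append(y, merit(x_new))

    print("best design:", X[np.argmin(y)], "merit:", y.min())
    ```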

  7. Data Understanding Applied to Optimization

    NASA Technical Reports Server (NTRS)

    Buntine, Wray; Shilman, Michael

    1998-01-01

    The goal of this research is to explore and develop software for supporting visualization and data analysis of search and optimization. Optimization is an ever-present problem in science. The theory of NP-completeness implies that such problems can only be resolved by increasingly smarter problem-specific knowledge, possibly for use in some general-purpose algorithms. Visualization and data analysis offer an opportunity to accelerate our understanding of key computational bottlenecks in optimization and to automatically tune aspects of the computation for specific problems. We will prototype systems to demonstrate how data understanding can be successfully applied to problems characteristic of NASA's key science optimization tasks, such as central tasks for parallel processing, spacecraft scheduling, and data transmission from a remote satellite.

  8. Solid mining residues from Ni extraction applied as nutrient suppliers to anaerobic processes: optimal dose approach through Taguchi's methodology.

    PubMed

    Pereda, I; Irusta, R; Montalvo, S; del Valle, J L

    2006-01-01

    The use of solid mining residues (Cola), which contain a certain amount of Ni, Fe and Co, to stimulate anaerobic processes was evaluated. The effect on methane production and chemical oxygen demand (COD) removal efficiency was analysed. The studies were carried out in discontinuous lab-scale reactors under mesophilic conditions until substrate exhaustion. Doses of 0, 3, 5 and 7 mg Cola l(-1) were applied to synthetic wastewater. Volatile fatty acids (VFA) and sucrose were used as substrates, with the sulphur and nitrogen concentrations being the noise variables. Cola addition at a dose of around 5 mg l(-1) turned out to stimulate the anaerobic process; it was the factor that most influenced the methane production rate, together with VFA and a high content of volatile suspended solids. In the case of methane yield, pH was the control factor of strongest influence. Higher values of COD removal efficiency were obtained when the reactors were operated with sucrose at relatively low pH and at the smallest concentrations of nitrogen and sulphur. The solid residue dose and the type of substrate were the factors with the most influence on COD removal efficiency.

  9. Applying new optimization algorithms to model predictive control

    SciTech Connect

    Wright, S.J.

    1996-03-01

    The connections between optimization and control theory have been explored by many researchers and optimization algorithms have been applied with success to optimal control. The rapid pace of developments in model predictive control has given rise to a host of new problems to which optimization has yet to be applied. Concurrently, developments in optimization, and especially in interior-point methods, have produced a new set of algorithms that may be especially helpful in this context. In this paper, we reexamine the relatively simple problem of control of linear processes subject to quadratic objectives and general linear constraints. We show how new algorithms for quadratic programming can be applied efficiently to this problem. The approach extends to several more general problems in straightforward ways.
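
    The problem class the abstract names, linear dynamics with a quadratic objective and general linear constraints, can be posed compactly; the sketch below sets up a small instance with the generic cvxpy modeling layer rather than the structured interior-point algorithms the paper develops. The plant matrices, horizon, and input bound are illustrative assumptions.

    ```python
    import numpy as np
    import cvxpy as cp

    A = np.array([[1.0, 0.1], [0.0, 1.0]])    # hypothetical discrete plant
    B = np.array([[0.0], [0.1]])
    N, x0 = 20, np.array([1.0, 0.0])          # horizon and initial state

    x = cp.Variable((2, N + 1))
    u = cp.Variable((1, N))
    cost, constr = 0, [x[:, 0] == x0]
    for k in range(N):
        # quadratic stage cost on state and input
        cost += cp.sum_squares(x[:, k]) + 0.1 * cp.sum_squares(u[:, k])
        constr += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                   cp.abs(u[:, k]) <= 1.0]    # general linear constraints
    prob = cp.Problem(cp.Minimize(cost), constr)
    prob.solve()
    print("optimal cost:", prob.value)
    ```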

  10. In silico optimization of pharmacokinetic properties and receptor binding affinity simultaneously: a 'parallel progression approach to drug design' applied to β-blockers.

    PubMed

    Advani, Poonam; Joseph, Blessy; Ambre, Premlata; Pissurlenkar, Raghuvir; Khedkar, Vijay; Iyer, Krishna; Gabhe, Satish; Iyer, Radhakrishnan P; Coutinho, Evans

    2016-01-01

    The present work exploits the potential of in silico approaches for minimizing attrition of leads in the later stages of drug development. We propose a theoretical approach, wherein 'parallel' information is generated to simultaneously optimize the pharmacokinetics (PK) and pharmacodynamics (PD) of lead candidates. β-blockers, though in use for many years, have suboptimal PKs; hence they are an ideal test series for the 'parallel progression approach'. This approach utilizes molecular modeling tools viz. hologram quantitative structure activity relationships, homology modeling, docking, predictive metabolism, and toxicity models. Validated models have been developed for PK parameters such as volume of distribution (log Vd) and clearance (log Cl), which together influence the half-life (t1/2) of a drug. Simultaneously, models for PD in terms of the inhibition constant pKi have been developed. Thus, PK and PD properties of β-blockers were concurrently analyzed and, after iterative cycling, modifications were proposed that lead to compounds with optimized PK and PD. We report some of the resultant re-engineered β-blockers with improved half-lives and pKi values comparable with marketed β-blockers. These were further analyzed by docking studies to evaluate their binding poses. Finally, metabolic and toxicological assessment of these molecules was done through in silico methods. The strategy proposed herein has potential universal applicability, and can be used in any drug discovery scenario, provided that the data used are consistent in terms of experimental conditions, endpoints, and methods employed. Thus, the 'parallel progression approach' helps to simultaneously fine-tune various properties of the drug and would be an invaluable tool during the drug development process.

  11. Multidisciplinary optimization applied to a transport aircraft

    NASA Technical Reports Server (NTRS)

    Giles, G. L.; Wrenn, G. A.

    1984-01-01

    Decomposition of a large optimization problem into several smaller subproblems has been proposed as an approach to making large-scale optimization problems tractable. To date, the characteristics of this approach have been tested on problems of limited complexity. The objective of this effort is to demonstrate the application of the multilevel optimization method in a large-scale design study using analytical models comparable to those currently used in the aircraft industry. The design study under way to provide this demonstration aims to generate a wing design for a transport aircraft that will perform a specified mission with minimum block fuel. Included are a definition of the problem, a discussion of the multilevel decomposition used for an aircraft wing, descriptions of the analysis and optimization procedures used at each level, and numerical results obtained to date. Computational times required to perform various steps in the process are also given. Finally, a summary of the current status and plans for continuation of this development effort is given.

  12. Assessment of Optimal Interrogation Approaches

    DTIC Science & Technology

    2007-05-01

    Final report, March 2006 - May 2007, contract H9C101-6-0051. ... interrogator. Specifically, DACA wanted the researchers to gather information from "expert" interrogators (referred to as "superior" interrogators) ... common approaches/techniques that are employed by the majority of interrogators. Point of contact: David E. Smith, (314) 209-9495 ext 701.

  13. Applying Squeaky-Wheel Optimization to Schedule Airborne Astronomy Observations

    NASA Technical Reports Server (NTRS)

    Frank, Jeremy; Kuerklue, Elif

    2004-01-01

    We apply the Squeaky Wheel Optimization (SWO) algorithm to the problem of scheduling astronomy observations for the Stratospheric Observatory for Infrared Astronomy, an airborne observatory. The problem contains complex constraints relating the feasibility of an astronomical observation to the position and time at which the observation begins, telescope elevation limits, special use airspace, and available fuel. Solving the problem requires making discrete choices (e.g. selection and sequencing of observations) and continuous ones (e.g. takeoff time and setting up observations by repositioning the aircraft). The problem also includes optimization criteria such as maximizing observing time while simultaneously minimizing total flight time. Previous approaches to the problem fail to scale when accounting for all constraints. We describe how to customize SWO to solve this problem, and show that it finds better flight plans, often with less computation time, than previous approaches.
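
    The construct-analyze-prioritize loop that defines SWO can be illustrated on a toy single-machine scheduling problem: build a greedy schedule in priority order, find the task that fares worst (the "squeaky wheel"), and promote it before reconstructing. All names and the lateness objective below are illustrative; the SOFIA flight-planning constraints (airspace, fuel, telescope limits) are far richer.

    ```python
    import random

    random.seed(1)
    durations = {t: random.randint(1, 5) for t in range(10)}   # toy tasks
    deadlines = {t: random.randint(5, 30) for t in range(10)}

    def construct(order):
        """Greedily schedule tasks in priority order; return lateness."""
        t, late = 0, {}
        for task in order:
            t += durations[task]
            late[task] = max(0, t - deadlines[task])
        return late

    order, best = list(range(10)), float("inf")
    for _ in range(100):
        late = construct(order)
        best = min(best, sum(late.values()))
        squeaky = max(order, key=late.get)          # worst-served task
        i = order.index(squeaky)
        order.insert(max(0, i - 2), order.pop(i))   # bump up its priority
    print("best total lateness:", best)
    ```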

  14. Cancer Behavior: An Optimal Control Approach

    PubMed Central

    Gutiérrez, Pedro J.; Russo, Irma H.; Russo, J.

    2009-01-01

    With special attention to cancer, this essay explains how Optimal Control Theory, mainly used in Economics, can be applied to the analysis of biological behaviors, and illustrates the ability of this mathematical branch to describe biological phenomena and biological interrelationships. Two examples are provided to show the capability and versatility of this powerful mathematical approach in the study of biological questions. The first describes a process of organogenesis, and the second the development of tumors. PMID:22247736

  15. Remediation Optimization: Definition, Scope and Approach

    EPA Pesticide Factsheets

    This document provides a general definition, scope and approach for conducting optimization reviews within the Superfund Program and includes the fundamental principles and themes common to optimization.

  16. Optimization methods applied to hybrid vehicle design

    NASA Technical Reports Server (NTRS)

    Donoghue, J. F.; Burghart, J. H.

    1983-01-01

    The use of optimization methods as an effective design tool in the design of hybrid vehicle propulsion systems is demonstrated. Optimization techniques were used to select values for three design parameters (battery weight, heat engine power rating and power split between the two on-board energy sources) such that various measures of vehicle performance (acquisition cost, life cycle cost and petroleum consumption) were optimized. The approach produced designs which were often significant improvements over hybrid designs already reported in the literature. The principal conclusions are as follows. First, it was found that the strategy used to split the required power between the two on-board energy sources can have a significant effect on life cycle cost and petroleum consumption. Second, the optimization program should be constructed so that performance measures and design variables can be easily changed. Third, the vehicle simulation program has a significant effect on the computer run time of the overall optimization program; run time can be significantly reduced by proper design of the types of trips the vehicle takes in a one-year period. Fourth, care must be taken in designing the cost and constraint expressions which are used in the optimization so that they are relatively smooth functions of the design variables. Fifth, proper handling of constraints on battery weight and heat engine rating, variables which must be large enough to meet power demands, is particularly important for the success of an optimization study. Finally, the principal conclusion is that optimization methods provide a practical tool for carrying out the design of a hybrid vehicle propulsion system.

  17. A General Approach to Error Estimation and Optimized Experiment Design, Applied to Multislice Imaging of T1 in Human Brain at 4.1 T

    NASA Astrophysics Data System (ADS)

    Mason, Graeme F.; Chu, Wen-Jang; Hetherington, Hoby P.

    1997-05-01

    In this report, a procedure to optimize inversion-recovery times, in order to minimize the uncertainty in the measured T1 from 2-point multislice images of the human brain at 4.1 T, is discussed. The 2-point, 40-slice measurement employed inversion-recovery delays chosen based on the minimization of noise-based uncertainties. For comparison of the measured T1 values and uncertainties, 10-point, 3-slice measurements were also acquired. The measured T1 values using the 2-point method were 814, 1361, and 3386 ms for white matter, gray matter, and cerebral spinal fluid, respectively, in agreement with the respective T1 values of 817, 1329, and 3320 ms obtained using the 10-point measurement. The 2-point, 40-slice method was used to determine the T1 in the cortical gray matter, cerebellar gray matter, caudate nucleus, cerebral peduncle, globus pallidus, colliculus, lenticular nucleus, base of the pons, substantia nigra, thalamus, white matter, corpus callosum, and internal capsule.
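
    For reference, two-point T1 mapping of this kind rests on the standard inversion-recovery signal model (ideal 180° inversion assumed; the paper's contribution is the error analysis that propagates noise through this expression to choose the delays):

    ```latex
    S(\mathrm{TI}) = S_0 \left( 1 - 2\, e^{-\mathrm{TI}/T_1} \right),
    \qquad
    \frac{S(\mathrm{TI}_1)}{S(\mathrm{TI}_2)}
      = \frac{1 - 2\, e^{-\mathrm{TI}_1/T_1}}{1 - 2\, e^{-\mathrm{TI}_2/T_1}}
    ```

    With two measurements, T1 is obtained as the root of the ratio equation, and the optimization selects TI1 and TI2 to minimize the uncertainty propagated into that root.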

  18. Multiobjective Optimization Using a Pareto Differential Evolution Approach

    NASA Technical Reports Server (NTRS)

    Madavan, Nateri K.; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    Differential Evolution is a simple, fast, and robust evolutionary algorithm that has proven effective in determining the global optimum for several difficult single-objective optimization problems. In this paper, the Differential Evolution algorithm is extended to multiobjective optimization problems by using a Pareto-based approach. The algorithm performs well when applied to several test optimization problems from the literature.
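
    A minimal sketch of the two ingredients named in the abstract, the classic DE/rand/1 mutation rule and a Pareto-dominance replacement test, is given below on a toy bi-objective problem. It omits crossover, archiving, and niching, so it illustrates the mechanics rather than reproducing the paper's algorithm.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def objectives(x):
        """Toy bi-objective test problem (two shifted bowls)."""
        return np.array([x[0]**2 + x[1]**2, (x[0] - 1)**2 + x[1]**2])

    def dominates(fa, fb):
        """True if objective vector fa Pareto-dominates fb."""
        return np.all(fa <= fb) and np.any(fa < fb)

    pop = rng.uniform(-1.0, 2.0, size=(20, 2))
    F = 0.8                                   # DE scale factor
    for _ in range(200):
        for i in range(len(pop)):
            a, b, c = pop[rng.choice(len(pop), 3, replace=False)]
            trial = a + F * (b - c)           # DE/rand/1 mutation
            if dominates(objectives(trial), objectives(pop[i])):
                pop[i] = trial                # keep only dominating trials
    # pop now approximates a spread of non-dominated solutions
    ```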

  19. HPC CLOUD APPLIED TO LATTICE OPTIMIZATION

    SciTech Connect

    Sun, Changchun; Nishimura, Hiroshi; James, Susan; Song, Kai; Muriki, Krishna; Qin, Yong

    2011-03-18

    As Cloud services gain in popularity for enterprise use, vendors are now turning their focus towards providing cloud services suitable for scientific computing. Recently, Amazon Elastic Compute Cloud (EC2) introduced the new Cluster Compute Instances (CCI), a new instance type specifically designed for High Performance Computing (HPC) applications. At Berkeley Lab, the physicists at the Advanced Light Source (ALS) have been running Lattice Optimization on a local cluster, but the queue wait time and the flexibility to request compute resources when needed are not ideal for rapid development work. To explore alternatives, for the first time we investigate running the Lattice Optimization application on Amazon's new CCI to demonstrate the feasibility and trade-offs of using public cloud services for science.

  20. A sequential linear optimization approach for controller design

    NASA Technical Reports Server (NTRS)

    Horta, L. G.; Juang, J.-N.; Junkins, J. L.

    1985-01-01

    A linear optimization approach with a simple real arithmetic algorithm is presented for reliable controller design and vibration suppression of flexible structures. Using first-order sensitivity of the system eigenvalues with respect to the design parameters in conjunction with a continuation procedure, the method converts a nonlinear optimization problem into a maximization problem with linear inequality constraints. The method of linear programming is then applied to solve the converted linear optimization problem. The general efficiency of the linear programming approach allows the method to handle structural optimization problems with a large number of inequality constraints on the design vector. The method is demonstrated using a truss beam finite element model for the optimal sizing and placement of active/passive structural members for damping augmentation. Results using both the sequential linear optimization approach and nonlinear optimization are presented and compared. The insensitivity to initial conditions of the linear optimization approach is also demonstrated.
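
    The first-order sensitivity underlying this linearization is presumably the standard result for a simple eigenvalue λᵢ of a parameter-dependent matrix A(p), with right and left eigenvectors xᵢ and yᵢ (a textbook relation stated here for orientation, with assumed notation):

    ```latex
    \frac{\partial \lambda_i}{\partial p_j}
      \;=\;
      \frac{\mathbf{y}_i^{H} \, \dfrac{\partial \mathbf{A}}{\partial p_j} \, \mathbf{x}_i}
           {\mathbf{y}_i^{H} \, \mathbf{x}_i}
    ```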

  1. Portfolio optimization using median-variance approach

    NASA Astrophysics Data System (ADS)

    Wan Mohd, Wan Rosanisah; Mohamad, Daud; Mohamed, Zulkifli

    2013-04-01

    Optimization models have been applied in many decision-making problems, particularly in portfolio selection. Since the introduction of Markowitz's theory of portfolio selection, various approaches based on mathematical programming have been introduced, such as mean-variance, mean-absolute deviation, mean-variance-skewness and conditional value-at-risk (CVaR), mainly to maximize return and minimize risk. However, most of these approaches assume that the distribution of data is normal, which is not generally true. As an alternative, in this paper, we employ the median-variance approach to improve portfolio optimization. This approach successfully caters for both normal and non-normal data distributions. With this more faithful representation, we analyze and compare the rate of return and risk between mean-variance and median-variance based portfolios consisting of 30 stocks from Bursa Malaysia. The results of this study show that the median-variance approach is capable of producing a lower risk for each return earned as compared to the mean-variance approach.
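
    The substitution at the heart of the approach, replacing the mean with the median as the location estimate of portfolio returns, can be sketched in a few lines on synthetic fat-tailed returns. This is a toy two-asset example with an assumed return-minus-variance score, not the authors' Bursa Malaysia data or their exact program.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    r = rng.standard_t(df=3, size=(1000, 2)) * 0.01   # fat-tailed returns

    def score(w, center):
        """Location estimate of portfolio returns minus a variance penalty."""
        port = r @ w
        return center(port) - 0.5 * np.var(port)

    grid = np.linspace(0.0, 1.0, 101)                 # weight on asset 1
    w_mean = max(grid, key=lambda a: score(np.array([a, 1 - a]), np.mean))
    w_med = max(grid, key=lambda a: score(np.array([a, 1 - a]), np.median))
    print("mean-variance weight:", w_mean, "median-variance weight:", w_med)
    ```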

  2. Applying a managerial approach to day surgery.

    PubMed

    Onetti, Alberto

    2008-01-01

    The present article explores the day surgery topic from a managerial perspective. If we assume such a perspective, day surgery can be considered a business model decision and not just a surgical alternative to traditional procedures requiring patient hospitalization. In this article we highlight the main steps required to develop a strategic approach [Cotta Ramusino E, Onetti A. Strategia d'Impresa. Milano: Il Sole 24 Ore; Second Edition, 2007] at the hospital level (Onetti A, Greulich A. Strategic management in hospitals: the balanced scorecard approach. Milano: Giuffé; 2003) and to make day surgery part of it. It means understanding: - how and when day surgery can improve the health care provider's overall performance, both in terms of clinical effectiveness and financial results, and - how to organize and integrate it with the other hospital activities in order to make it work. Approaching day surgery as a business model decision requires addressing a list of potential issues in advance and necessitates continued audit to verify the results. If this is done, day surgery can be both safe and cost effective and have a positive impact on surgical patient satisfaction. We propose a sort of "check-up list" useful to hospital managers and doctors who are evaluating the option of introducing day surgery or are trying to optimize it.

  3. HEURISTIC OPTIMIZATION AND ALGORITHM TUNING APPLIED TO SORPTIVE BARRIER DESIGN

    EPA Science Inventory

    While heuristic optimization is applied in environmental applications, ad-hoc algorithm configuration is typical. We use a multi-layer sorptive barrier design problem as a benchmark for an algorithm-tuning procedure, as applied to three heuristics (genetic algorithms, simulated ...

  4. Genetic Algorithms Applied to Multi-Objective Aerodynamic Shape Optimization

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.

    2004-01-01

    A genetic algorithm approach suitable for solving multi-objective optimization problems is described and evaluated using a series of aerodynamic shape optimization problems. Several new features including two variations of a binning selection algorithm and a gene-space transformation procedure are included. The genetic algorithm is suitable for finding Pareto optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. A new masking array capability is included allowing any gene or gene subset to be eliminated as decision variables from the design space. This allows determination of the effect of a single gene or gene subset on the Pareto optimal solution. Results indicate that the genetic algorithm optimization approach is flexible in application and reliable. The binning selection algorithms generally provide Pareto front quality enhancements and moderate convergence efficiency improvements for most of the problems solved.

  5. A Multiple Approach to Evaluating Applied Academics.

    ERIC Educational Resources Information Center

    Wang, Changhua; Owens, Thomas

    The Boeing Company is involved in partnerships with Washington state schools in the area of applied academics. Over the past 3 years, Boeing offered grants to 57 high schools to implement applied mathematics, applied communication, and principles of technology courses. Part 1 of this paper gives an overview of applied academics by examining what…

  6. Genetic Algorithms Applied to Multi-Objective Aerodynamic Shape Optimization

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.

    2005-01-01

    A genetic algorithm approach suitable for solving multi-objective problems is described and evaluated using a series of aerodynamic shape optimization problems. Several new features including two variations of a binning selection algorithm and a gene-space transformation procedure are included. The genetic algorithm is suitable for finding Pareto optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. A new masking array capability is included allowing any gene or gene subset to be eliminated as decision variables from the design space. This allows determination of the effect of a single gene or gene subset on the Pareto optimal solution. Results indicate that the genetic algorithm optimization approach is flexible in application and reliable. The binning selection algorithms generally provide Pareto front quality enhancements and moderate convergence efficiency improvements for most of the problems solved.

  7. Probabilistic-based approach to optimal filtering

    PubMed

    Hannachi

    2000-04-01

    The signal-to-noise ratio maximizing approach in optimal filtering provides a robust tool to detect signals in the presence of colored noise. The method fails, however, when the data present regimelike behavior. An approach is developed in this manuscript to recover local (in phase space) behavior in an intermittent, regimelike behaving system. The method is first formulated in its general form within a Gaussian framework, given an estimate of the noise covariance, and demands that the signal correspond to minimizing the noise probability distribution on any given isosurface of the data probability distribution. The extension to the non-Gaussian case is provided through the use of finite mixture models for data that show regimelike behavior. The method yields the correct signal when applied in a simplified manner to synthetic time series with and without regimes, in comparison with the signal-to-noise ratio approach, and helps identify the right frequency of the oscillation spells in the classical Lorenz system and its variants.

  8. A Unified Approach to Optimization

    DTIC Science & Technology

    2014-10-02

    ... and dynamic programming, logic-based Benders decomposition, and unification of exact and heuristic methods. The publications associated with this ... Logic-based Benders decomposition (LBBD) has been used for some years to combine CP and MIP, usually by solving the ... classical Benders decomposition, but can be any optimization problem. Benders cuts are generated by solving the inference dual of the subproblem.

  9. Optimization of coupled systems: A critical overview of approaches

    NASA Technical Reports Server (NTRS)

    Balling, R. J.; Sobieszczanski-Sobieski, J.

    1994-01-01

    A unified overview is given of problem formulation approaches for the optimization of multidisciplinary coupled systems. The overview includes six fundamental approaches upon which a large number of variations may be made. Consistent approach names and a compact approach notation are given. The approaches are formulated to apply to general nonhierarchic systems. The approaches are compared both from a computational viewpoint and a managerial viewpoint. Opportunities for parallelism of both computation and manpower resources are discussed. Recommendations regarding the need for future research are advanced.

  10. Applying SF-Based Genre Approaches to English Writing Class

    ERIC Educational Resources Information Center

    Wu, Yan; Dong, Hailin

    2009-01-01

    By exploring genre approaches in systemic functional linguistics and examining the analytic tools that can be applied to the process of English learning and teaching, this paper seeks to find a way of applying genre approaches to English writing class.

  11. Optimal design of one-dimensional photonic crystal filters using minimax optimization approach.

    PubMed

    Hassan, Abdel-Karim S O; Mohamed, Ahmed S A; Maghrabi, Mahmoud M T; Rafat, Nadia H

    2015-02-20

    In this paper, we introduce a simulation-driven optimization approach for achieving the optimal design of electromagnetic wave (EMW) filters consisting of one-dimensional (1D) multilayer photonic crystal (PC) structures. The PC layers' thicknesses and/or material types are considered as designable parameters. The optimal design problem is formulated as a minimax optimization problem that is entirely solved by making use of readily available software tools. The proposed approach allows for the consideration of problems of higher dimension than usually treated before. In addition, it can proceed starting from bad initial design points. The validity, flexibility, and efficiency of the proposed approach are demonstrated by applying it to obtain the optimal design of two practical examples. The first is a (SiC/Ag/SiO2)^N wide bandpass optical filter operating in the visible range. The second is an (Ag/SiO2)^N low-pass EMW spectral filter, working in the infrared range, which is used for enhancing the efficiency of thermophotovoltaic systems. The approach shows a good ability to converge to the optimal solution, for different design specifications, regardless of the starting design point. This ensures that the approach is robust and general enough to be applied to the optimal design of 1D photonic crystals for a broad range of promising applications.
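
    Formally, a minimax design problem of this kind takes the form below, where x collects the layer thicknesses and material choices and the e_i are error functions measuring the deviation of the realized spectral response from the specification at sampled frequencies (notation assumed, not quoted from the paper):

    ```latex
    \min_{\mathbf{x}} \; \max_{i = 1, \dots, m} \; e_i(\mathbf{x})
    ```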

  12. Quantum optimal control theory applied to transitions in diatomic molecules

    NASA Astrophysics Data System (ADS)

    Lysebo, Marius; Veseth, Leif

    2014-12-01

    Quantum optimal control theory is applied to control electric dipole transitions in a real multilevel system. The specific system studied in the present work is comprised of a multitude of hyperfine levels in the electronic ground state of the OH molecule. Spectroscopic constants are used to obtain accurate energy eigenstates and electric dipole matrix elements. The goal is to calculate the optimal time-dependent electric field that yields a maximum of the transition probability for a specified initial and final state. A further important objective was to study the detailed quantum processes that take place during such a prescribed transition in a multilevel system. Two specific transitions are studied in detail. The computed optimal electric fields as well as the paths taken through the multitude of levels reveal quite interesting quantum phenomena.

  13. Applying fuzzy clustering optimization algorithm to extracting traffic spatial pattern

    NASA Astrophysics Data System (ADS)

    Hu, Chunchun; Shi, Wenzhong; Meng, Lingkui; Liu, Min

    2009-10-01

    Traditional analytical methods for traffic information cannot meet the needs of intelligent traffic systems; mining value-added information can address more traffic problems. This paper develops a new clustering optimization algorithm to extract useful spatial clustered patterns for predicting long-term traffic flow from a macroscopic view. Considering the sensitivity of the FCM algorithm to its initial parameters and its tendency to fall into local extrema, the new algorithm applies the Particle Swarm Optimization method, which can discover the globally optimal result, to the FCM algorithm. The algorithm uses a combination of a clustering validity index and the objective function of the FCM algorithm as the fitness function of the PSO algorithm. The experimental results indicate that it is effective and efficient. For fuzzy clustering of road traffic data, it produces useful spatial clustered patterns, with the cluster centers representing locations that have heavy traffic flow. Moreover, the parameters of the patterns can provide intelligent traffic systems with assistant decision support.
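
    A compact sketch of the hybrid idea follows: score particle-swarm candidates for the cluster centres with the standard fuzzy c-means objective, so the search is less sensitive to initialization than plain FCM. The data, swarm constants, and the plain-objective fitness (rather than the paper's validity-index combination) are all illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.normal(size=(200, 2)) + rng.choice([-3, 0, 3], size=(200, 1))

    def fcm_objective(centers, m=2.0):
        """Standard fuzzy c-means objective for a given set of centres."""
        d = np.linalg.norm(data[:, None, :] - centers[None], axis=2) + 1e-12
        u = d ** (-2 / (m - 1))
        u /= u.sum(axis=1, keepdims=True)        # fuzzy memberships
        return np.sum((u ** m) * d ** 2)

    c, n_particles = 3, 15
    pos = rng.normal(scale=3, size=(n_particles, c, 2))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_f = np.array([fcm_objective(p) for p in pos])
    for _ in range(100):
        g = pbest[np.argmin(pbest_f)]            # global best centres
        vel = (0.7 * vel
               + 1.5 * rng.random(pos.shape) * (pbest - pos)
               + 1.5 * rng.random(pos.shape) * (g - pos))
        pos += vel
        f = np.array([fcm_objective(p) for p in pos])
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
    print("best FCM objective:", pbest_f.min())
    ```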

  14. An effective model for ergonomic optimization applied to a new automotive assembly line

    NASA Astrophysics Data System (ADS)

    Duraccio, Vincenzo; Elia, Valerio; Forcina, Antonio

    2016-06-01

    An efficient ergonomic optimization can lead to a significant improvement in production performance and a considerable reduction of costs. In the present paper, a new model for ergonomic optimization is proposed. The new approach is based on the criteria defined by the National Institute for Occupational Safety and Health, adapted to Italian legislation. The proposed model provides an ergonomic optimization by analyzing the ergonomic relations of manual work under correct conditions. The model includes a schematic and systematic method for analyzing the operations, and identifies all possible ergonomic aspects to be evaluated. The proposed approach has been applied to an automotive assembly line, where the repeatability of operations makes optimization fundamental. The application clearly demonstrates the effectiveness of the new approach.

  15. Applying a gaming approach to IP strategy.

    PubMed

    Gasnier, Arnaud; Vandamme, Luc

    2010-02-01

    Adopting an appropriate IP strategy is an important but complex area, particularly in the pharmaceutical and biotechnology sectors, in which aspects such as regulatory submissions, high competitive activity, and public health and safety information requirements limit the amount of information that can be protected effectively through secrecy. As a result, and considering the existing time limits for patent protection, decisions on how to approach IP in these sectors must be made with knowledge of the options and consequences of IP positioning. Because of the specialized nature of IP, it is necessary to impart knowledge regarding the options and impact of IP to decision-makers, whether at the level of inventors, marketers or strategic business managers. This feature review provides some insight on IP strategy, with a focus on the use of a new 'gaming' approach for transferring the skills and understanding needed to make informed IP-related decisions; the game Patentopolis is discussed as an example of such an approach. Patentopolis involves interactive activities with IP-related business decisions, including the exploitation and enforcement of IP rights, and can be used to gain knowledge on the impact of adopting different IP strategies.

  16. Optimal online learning: a Bayesian approach

    NASA Astrophysics Data System (ADS)

    Solla, Sara A.; Winther, Ole

    1999-09-01

    A recently proposed Bayesian approach to online learning is applied to learning a rule defined as a noisy single layer perceptron. In the Bayesian online approach, the exact posterior distribution is approximated by a simple parametric posterior that is updated online as new examples are incorporated into the dataset. In the case of binary weights, the approximate posterior is chosen to be a biased binary distribution. The resulting online algorithm is shown to outperform several other online approaches to this problem.

  17. Applied topology optimization of vibro-acoustic hearing instrument models

    NASA Astrophysics Data System (ADS)

    Søndergaard, Morten Birkmose; Pedersen, Claus B. W.

    2014-02-01

    Designing hearing instruments remains an acoustic challenge, as users request small designs for comfortable wear and cosmetic appeal while requiring sufficient amplification from the device. First, to ensure proper amplification, a critical design challenge in the hearing instrument is to minimize the feedback between the outputs (generated sound and vibrations) from the receiver looping back into the microphones. Second, the feedback signal is conventionally minimized using time-consuming trial-and-error design procedures on physical prototypes and on virtual models using finite element analysis. In the present work it is demonstrated that structural topology optimization of vibro-acoustic finite element models can be used both to sufficiently minimize the feedback signal and to reduce the time-consuming trial-and-error design approach. The structural topology optimization of a vibro-acoustic finite element model is shown for an industrial full-scale hearing instrument model.

  18. Applying EGO to large dimensional optimizations: a wideband fragmented patch example

    NASA Astrophysics Data System (ADS)

    O'Donnell, Teresa H.; Southall, Hugh; Santarelli, Scott; Steyskal, Hans

    2010-04-01

    Efficient Global Optimization (EGO) minimizes expensive cost function evaluations by correlating evaluated parameter sets and respective solutions to model the optimization space. For optimizations requiring destructive testing or lengthy simulations, this computational overhead represents a desirable tradeoff. However, the inspection of the predictor space to determine the next evaluation point can be a time-intensive operation. Although DACE predictor evaluation may be conducted for limited parameters by exhaustive sampling, this method is not extendable to large dimensions. We apply EGO here to the 11-dimensional optimization of a wide-band fragmented patch antenna and present an alternative genetic algorithm approach for selecting the next evaluation point. We compare results achieved with EGO on this optimization problem to previous results achieved with a genetic algorithm.

  19. Quantum Resonance Approach to Combinatorial Optimization

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    1997-01-01

    It is shown that quantum resonance can be used for combinatorial optimization. The advantage of the approach is in independence of the computing time upon the dimensionality of the problem. As an example, the solution to a constraint satisfaction problem of exponential complexity is demonstrated.

  20. The Optimal Treatment Approach to Needs Assessment.

    ERIC Educational Resources Information Center

    Cox, Gary B.; And Others

    1979-01-01

    The Optimal Treatment approach to needs assessment consists of comparing the most desirable set of services for a client with the services actually received. Discrepancies due to unavailable resources are noted and aggregated across clients. Advantages and disadvantages of this and other needs assessment procedures are considered. (Author/RL)

  1. Multidisciplinary Approach to Linear Aerospike Nozzle Optimization

    NASA Technical Reports Server (NTRS)

    Korte, J. J.; Salas, A. O.; Dunn, H. J.; Alexandrov, N. M.; Follett, W. W.; Orient, G. E.; Hadid, A. H.

    1997-01-01

    A model of a linear aerospike rocket nozzle that consists of coupled aerodynamic and structural analyses has been developed. A nonlinear computational fluid dynamics code is used to calculate the aerodynamic thrust, and a three-dimensional finite-element model is used to determine the structural response and weight. The model will be used to demonstrate multidisciplinary design optimization (MDO) capabilities for relevant engine concepts, assess performance of various MDO approaches, and provide a guide for future application development. In this study, the MDO problem is formulated using the multidisciplinary feasible (MDF) strategy. The results for the MDF formulation are presented with comparisons against sequential aerodynamic and structural optimized designs. Significant improvements are demonstrated by using a multidisciplinary approach in comparison with the single-discipline design strategy.

  2. Optimal cooperative control synthesis applied to a control-configured aircraft

    NASA Technical Reports Server (NTRS)

    Schmidt, D. K.; Innocenti, M.

    1984-01-01

    A multivariable control augmentation synthesis method is presented that is intended to enable the designer to directly optimize pilot opinion rating of the augmented system. The approach involves the simultaneous solution for the augmentation and predicted pilot's compensation via optimal control techniques. The methodology is applied to the control law synthesis for a vehicle similar to the AFTI F16 control-configured aircraft. The resulting dynamics, expressed in terms of eigenstructure and time/frequency responses, are presented with analytical predictions of closed loop tracking performance, pilot compensation, and other predictors of pilot acceptance.

  3. Preconcentration modeling for the optimization of a micro gas preconcentrator applied to environmental monitoring.

    PubMed

    Camara, Malick; Breuil, Philippe; Briand, Danick; Viricelle, Jean-Paul; Pijolat, Christophe; de Rooij, Nico F

    2015-04-21

    This paper presents the optimization of a micro gas preconcentrator (μ-GP) system applied to atmospheric pollution monitoring, with the help of a complete modeling of the preconcentration cycle. Two different approaches based on kinetic equations are used to illustrate the behavior of the micro gas preconcentrator for given experimental conditions. The need for high adsorption flow and heating rate and for low desorption flow and detection volume is demonstrated in this paper. Prior to this optimization, the preconcentration factor is discussed and a definition is proposed.

  4. Applying Loop Optimizations to Object-oriented Abstractions Through General Classification of Array Semantics

    SciTech Connect

    Yi, Q; Quinlan, D

    2004-03-05

    Optimizing compilers have a long history of applying loop transformations to C and Fortran scientific applications. However, such optimizations are rare in compilers for object-oriented languages such as C++ or Java, where loops operating on user-defined types are left unoptimized due to their unknown semantics. Our goal is to reduce the performance penalty of using high-level object-oriented abstractions. We propose an approach that allows explicit communication between programmers and compilers. We have extended the traditional Fortran loop optimizations with an open interface. Through this interface, we have developed techniques to automatically recognize and optimize user-defined array abstractions. In addition, we have developed an adapted constant-propagation algorithm to automatically propagate properties of abstractions. We have implemented these techniques in a C++ source-to-source translator and have applied them to optimize several kernels written using an array-class library. Our experimental results show that using our approach, applications using high-level abstractions can achieve comparable, and in some cases superior, performance to that achieved by efficient low-level hand-written codes.

  5. Optimization approaches for planning external beam radiotherapy

    NASA Astrophysics Data System (ADS)

    Gozbasi, Halil Ozan

    Cancer begins when cells grow out of control as a result of damage to their DNA. These abnormal cells can invade healthy tissue and form tumors in various parts of the body. Chemotherapy, immunotherapy, surgery and radiotherapy are the most common treatment methods for cancer. According to the American Cancer Society, about half of all cancer patients receive a form of radiation therapy at some stage. External beam radiotherapy is delivered from outside the body and aimed at cancer cells to damage their DNA, making them unable to divide and reproduce. The beams travel through the body and may damage nearby healthy tissue unless carefully planned. Therefore, the goal of treatment plan optimization is to find the best system parameters to deliver sufficient dose to target structures while avoiding damage to healthy tissue. This thesis investigates optimization approaches for two external beam radiation therapy techniques: Intensity-Modulated Radiation Therapy (IMRT) and Volumetric-Modulated Arc Therapy (VMAT). We develop automated treatment planning technology for IMRT that produces several high-quality treatment plans satisfying provided clinical requirements in a single invocation and without human guidance. A novel bi-criteria scoring based beam selection algorithm is part of the planning system and produces better plans compared to those produced using a well-known scoring-based algorithm. Our algorithm is very efficient and finds the beam configuration at least ten times faster than an exact integer programming approach. Solution times range from 2 minutes to 15 minutes, which is clinically acceptable. With certain cancers, especially lung cancer, a patient's anatomy changes during treatment. These anatomical changes need to be considered in treatment planning. Fortunately, recent advances in imaging technology can provide multiple images of the treatment region taken at different points of the breathing cycle, and deformable image registration algorithms can

  6. LP based approach to optimal stable matchings

    SciTech Connect

    Teo, Chung-Piaw; Sethuraman, J.

    1997-06-01

    We study the classical stable marriage and stable roommates problems using a polyhedral approach. We propose a new LP formulation for the stable roommates problem. Its feasible region is non-empty if and only if the underlying roommates problem has a stable matching. Furthermore, for certain special weight functions on the edges, we construct a 2-approximation algorithm for the optimal stable roommates problem. Our technique uses a crucial geometry of the fractional solutions in this formulation. For the stable marriage problem, we show that a related geometry allows us to express any fractional solution in the stable marriage polytope as a convex combination of stable marriage solutions. This leads to a genuinely simple proof of the integrality of the stable marriage polytope. Based on these ideas, we devise a heuristic to solve the optimal stable roommates problem. The heuristic combines the power of rounding and cutting-plane methods. We present some computational results based on preliminary implementations of this heuristic.
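
    For orientation, the linear description of the stable marriage polytope that this line of work builds on can be sketched as follows, with x_{ij} the (fractional) assignment of man i to woman j and ≻ denoting preference; this is the standard Vande Vate/Rothblum formulation, assumed rather than quoted from the paper:

    ```latex
    \sum_j x_{ij} = 1 \;\;\forall i, \qquad
    \sum_i x_{ij} = 1 \;\;\forall j, \qquad
    x_{ij} \ge 0,
    ```

    ```latex
    x_{ij} \;+\; \sum_{j' \succ_i j} x_{ij'} \;+\; \sum_{i' \succ_j i} x_{i'j} \;\ge\; 1
    \qquad \forall (i, j).
    ```

    The last family of constraints enforces stability: if pair (i, j) is not matched and neither partner is matched to someone they prefer, the pair would block the matching.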

  7. Applying Soft Arc Consistency to Distributed Constraint Optimization Problems

    NASA Astrophysics Data System (ADS)

    Matsui, Toshihiro; Silaghi, Marius C.; Hirayama, Katsutoshi; Yokoo, Makoto; Matsuo, Hiroshi

    The Distributed Constraint Optimization Problem (DCOP) is a fundamental framework of multi-agent systems. With DCOPs a multi-agent system is represented as a set of variables and a set of constraints/cost functions. Distributed task scheduling and distributed resource allocation can be formalized as DCOPs. In this paper, we propose an efficient method that applies directed soft arc consistency to a DCOP. In particular, we focus on DCOP solvers that employ pseudo-trees. A pseudo-tree is a graph structure for a constraint network that represents a partial ordering of variables. Some pseudo-tree-based search algorithms perform optimistic searches using explicit/implicit backtracking in parallel. However, for cost functions taking a wide range of cost values, such exact algorithms require many search iterations. Therefore additional improvements are necessary to reduce the number of search iterations. A previous study used a dynamic programming-based preprocessing technique that estimates the lower bound values of costs. However, there are opportunities for further improvements of efficiency. In addition, modifications of the search algorithm are necessary to use the estimated lower bounds. The proposed method applies soft arc consistency (soft AC) enforcement to DCOP. In the proposed method, directed soft AC is performed based on a pseudo-tree in a bottom up manner. Using the directed soft AC, the global lower bound value of cost functions is passed up to the root node of the pseudo-tree. It also totally reduces values of binary cost functions. As a result, the original problem is converted to an equivalent problem. The equivalent problem is efficiently solved using common search algorithms. Therefore, no major modifications are necessary in search algorithms. The performance of the proposed method is evaluated by experimentation. The results show that it is more efficient than previous methods.

  8. Optimization of a Solar Photovoltaic Applied to Greenhouses

    NASA Astrophysics Data System (ADS)

    Nakoul, Z.; Bibi-Triki, N.; Kherrous, A.; Bessenouci, M. Z.; Khelladi, S.

    Global energy consumption, worldwide and in our country, is increasing. The bulk of the world's energy comes from fossil fuels, whose reserves are doomed to exhaustion and which are the leading cause of pollution and of global warming through the greenhouse effect. This is not the case for renewable energies, which are inexhaustible and derive from natural phenomena. For years, solar energy has unanimously ranked first among renewable energies. The study of the energetic aspects of a solar power plant is the best way to find the optimum of its performance. A full-scale field study requires a long time and is therefore very costly; moreover, its results are not always generalizable. To avoid these drawbacks we opted for a purely computer-based study, using the software Matlab to model the different components for better sizing and to simulate all energies in order to optimize profitability while taking cost into account. The results of our work, applied to the sites of Tlemcen and Bouzareah, led us to conclude that the required energy is a determining factor in the choice of components of a PV solar power plant.

  9. Optimal trading strategies—a time series approach

    NASA Astrophysics Data System (ADS)

    Bebbington, Peter A.; Kühn, Reimer

    2016-05-01

    Motivated by recent advances in the spectral theory of auto-covariance matrices, we are led to revisit a reformulation of Markowitz’ mean-variance portfolio optimization approach in the time domain. In its simplest incarnation it applies to a single traded asset and allows an optimal trading strategy to be found which—for a given return—is minimally exposed to market price fluctuations. The model is initially investigated for a range of synthetic price processes, taken to be either second order stationary, or to exhibit second order stationary increments. Attention is paid to consequences of estimating auto-covariance matrices from small finite samples, and auto-covariance matrix cleaning strategies to mitigate against these are investigated. Finally we apply our framework to real world data.
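
    In its simplest form, the time-domain reformulation carries over the familiar Markowitz program, with the asset-weight vector replaced by a trading schedule w over time and the cross-sectional covariance replaced by the auto-covariance matrix C of the single asset's price process (a sketch under assumed notation, matching the abstract's minimum-fluctuation-for-given-return statement):

    ```latex
    \min_{\mathbf{w}} \; \mathbf{w}^{\top} \mathbf{C}\, \mathbf{w}
    \qquad \text{subject to} \qquad
    \mathbf{w}^{\top} \boldsymbol{\mu} = R .
    ```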

  10. A Simulation Optimization Approach to Epidemic Forecasting

    PubMed Central

    Nsoesie, Elaine O.; Beckman, Richard J.; Shashaani, Sara; Nagaraj, Kalyani S.; Marathe, Madhav V.

    2013-01-01

    Reliable forecasts of influenza can aid in the control of both seasonal and pandemic outbreaks. We introduce a simulation optimization (SIMOP) approach for forecasting the influenza epidemic curve. This study represents the final step of a project aimed at using a combination of simulation, classification, statistical and optimization techniques to forecast the epidemic curve and infer underlying model parameters during an influenza outbreak. The SIMOP procedure combines an individual-based model and the Nelder-Mead simplex optimization method. The method is used to forecast epidemics simulated over synthetic social networks representing Montgomery County in Virginia, Miami, Seattle and surrounding metropolitan regions. The results are presented for the first four weeks. Depending on the synthetic network, the peak time could be predicted within a 95% CI as early as seven weeks before the actual peak. The peak infected and total infected were also accurately forecasted for Montgomery County in Virginia within the forecasting period. Forecasting of the epidemic curve for both seasonal and pandemic influenza outbreaks is a complex problem, however this is a preliminary step and the results suggest that more can be achieved in this area. PMID:23826222
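
    The fitting step of a SIMOP-style procedure can be sketched by calibrating a compartmental model to observed counts with the Nelder-Mead simplex method named in the abstract. A deterministic SIR ODE stands in for the paper's individual-based network simulation, and all numbers below are synthetic assumptions.

    ```python
    import numpy as np
    from scipy.integrate import odeint
    from scipy.optimize import minimize

    def sir(y, t, beta, gamma):
        """Classic SIR dynamics (fractions of the population)."""
        s, i, r = y
        return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

    t = np.arange(0, 10)                        # weekly observation times
    curve = lambda p: odeint(sir, [0.99, 0.01, 0.0], t, args=tuple(p))[:, 1]

    true = (0.5, 0.2)                           # synthetic ground truth
    observed = curve(true) + np.random.default_rng(0).normal(0, 0.002, len(t))

    # Nelder-Mead simplex search for (beta, gamma) minimizing squared error
    res = minimize(lambda p: np.sum((curve(p) - observed) ** 2),
                   x0=[0.3, 0.1], method="Nelder-Mead")
    print("estimated (beta, gamma):", res.x)
    ```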

  11. Essays on Applied Resource Economics Using Bioeconomic Optimization Models

    NASA Astrophysics Data System (ADS)

    Affuso, Ermanno

    With rising demographic growth, there is increasing interest in analytical studies that assess alternative policies to provide an optimal allocation of scarce natural resources while ensuring environmental sustainability. This dissertation consists of three essays in applied resource economics that are methodologically interconnected within the agricultural production sector of economics. The first chapter examines the sustainability of biofuels by simulating and evaluating an agricultural voluntary program that aims to increase land use efficiency in the production of first-generation biofuels in the state of Alabama. The results show that participatory decisions may increase the net energy value of biofuels by 208% and reduce emissions by 26%, significantly contributing to the state's energy goals. The second chapter tests the hypothesis of overuse of fertilizers and pesticides in U.S. peanut farming with respect to other inputs and addresses genetic research to reduce the use of the most overused chemical input. The findings suggest that peanut producers overuse fungicide with respect to any other input and that fungi-resistant genetically engineered peanuts may increase producer welfare by up to 36.2%. The third chapter implements a bioeconomic model, which consists of a biophysical model and a stochastic dynamic recursive model that is used to measure the potential economic and environmental welfare of cotton farmers derived from a rotation scheme that uses peanut as a complementary crop. The results show that the rotation scenario would lower farming costs by 14% due to nitrogen credits from prior peanut land use and reduce non-point source pollution from nitrogen runoff by 6.13% compared to continuous cotton farming.

  12. Optimizing Multicompression Approaches to Elasticity Imaging

    PubMed Central

    Du, Huini; Liu, Jie; Pellot-Barakat, Claire; Insana, Michael F.

    2009-01-01

    Breast lesion visibility in static strain imaging ultimately is noise limited. When correlation and related techniques are applied to estimate local displacements between two echo frames recorded before and after a small deformation, target contrast increases linearly with the amount of deformation applied. However, above some deformation threshold, decorrelation noise increases more than contrast such that lesion visibility is severely reduced. Multicompression methods avoid this problem by accumulating displacements from many small deformations to provide the same net increase in lesion contrast as one large deformation but with minimal decorrelation noise. Unfortunately, multicompression approaches accumulate echo noise (electronic and sampling) with each deformation step as contrast builds so that lesion visibility can be reduced again if the applied deformation increment is too small. This paper uses signal models and analysis techniques to develop multicompression strategies that minimize strain image noise. The analysis predicts that displacement variance is minimal in elastically homogeneous media when the applied strain increment is 0.0035. Predictions are verified experimentally with gelatin phantoms. For in vivo breast imaging, a strain increment as low as 0.0015 is recommended for minimum noise because of the greater elastic heterogeneity of breast tissue. PMID:16471435

  13. From Nonlinear Optimization to Convex Optimization through Firefly Algorithm and Indirect Approach with Applications to CAD/CAM

    PubMed Central

    Gálvez, Akemi; Iglesias, Andrés

    2013-01-01

    Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. Firstly, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but precomputed with a coarse approximation scheme. Secondly, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to optimization of data parameterization; then, the knot vector is refined by using De Boor's method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently. PMID:24376380
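
    Once the data parameterization and the knot vector are fixed (the tasks handled in the paper by the firefly algorithm and the indirect approach), the remaining fit is exactly the convex least-squares problem solved by singular value decomposition. A minimal sketch of that subproblem, with assumed data points, chord-length parameterization, and a coarse knot vector, might look as follows.

      import numpy as np
      from scipy.interpolate import make_lsq_spline

      pts = np.array([[0., 0.], [1., 0.8], [2., 0.9], [3., 0.2],
                      [4., -0.7], [5., -0.9], [6., -0.1], [7., 0.6]])

      # Chord-length parameterization (an assumed, common choice).
      u = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(pts, axis=0), axis=1))]
      u /= u[-1]

      k = 3
      interior = np.array([0.33, 0.66])               # coarse precomputed knots
      t = np.r_[[0.0] * (k + 1), interior, [1.0] * (k + 1)]

      # One linear least-squares solve per coordinate; SVD-backed internally.
      sx = make_lsq_spline(u, pts[:, 0], t, k=k)
      sy = make_lsq_spline(u, pts[:, 1], t, k=k)
      print("fitted curve at u=0.5:", sx(0.5), sy(0.5))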

  14. Learning approach to sampling optimization: Applications in astrodynamics

    NASA Astrophysics Data System (ADS)

    Henderson, Troy Allen

    A new numerical optimization algorithm is developed, tested, and used to solve difficult numerical problems from the field of astrodynamics. First, a brief review of optimization theory is presented and common numerical optimization techniques are discussed. Then, the new method, called the Learning Approach to Sampling Optimization (LA), is presented. Simple, illustrative examples are given to further emphasize the simplicity and accuracy of the LA method. Benchmark functions in lower dimensions are studied and the LA is compared, in terms of performance, to widely used methods. Three classes of problems from astrodynamics are then solved. First, the N-impulse orbit transfer and rendezvous problems are solved by using the LA optimization technique along with derived bounds that make the problem computationally feasible. This marriage between analytical and numerical methods allows an answer to be found for an order of magnitude more impulses than are currently published. Next, the N-impulse work is applied to design periodic close encounters (PCE) in space. The encounters are defined as an open rendezvous, meaning that two spacecraft must be at the same position at the same time, but their velocities are not necessarily equal. The PCE work is extended to include N impulses and other constraints, and new examples are given. Finally, a trajectory optimization problem is solved using the LA algorithm, comparing performance with other methods on two models, of varying complexity, of the Cassini-Huygens mission to Saturn. The results show that the LA consistently outperforms commonly used numerical optimization algorithms.

  15. A Global Optimization Approach to Multi-Polarity Sentiment Analysis

    PubMed Central

    Li, Xinmiao; Li, Jing; Wu, Yukeng

    2015-01-01

    Following the rapid development of social media, sentiment analysis has become an important social media mining technique. The performance of automatic sentiment analysis primarily depends on feature selection and sentiment classification. While information gain (IG) and support vector machines (SVM) are two important techniques, few studies have optimized both approaches in sentiment analysis. The effectiveness of applying a global optimization approach to sentiment analysis remains unclear. We propose a global optimization-based sentiment analysis (PSOGO-Senti) approach to improve sentiment analysis with IG for feature selection and SVM as the learning engine. The PSOGO-Senti approach utilizes a particle swarm optimization algorithm to obtain a global optimal combination of feature dimensions and parameters in the SVM. We evaluate the PSOGO-Senti model on two datasets from different fields. The experimental results showed that the PSOGO-Senti model can improve binary and multi-polarity Chinese sentiment analysis. We compared the optimal feature subset selected by PSOGO-Senti with the features in the sentiment dictionary. The results of this comparison indicated that PSOGO-Senti can effectively remove redundant and noisy features and can select a domain-specific feature subset with higher explanatory power for a particular sentiment analysis task. The experimental results showed that the PSOGO-Senti approach is effective and robust for sentiment analysis tasks in different domains. By comparing the improvements of two-polarity, three-polarity and five-polarity sentiment analysis results, we found that the five-polarity sentiment analysis delivered the largest improvement and the two-polarity analysis the smallest. We conclude that PSOGO-Senti achieves higher improvement for more complicated sentiment analysis tasks. We also compared the results of PSOGO-Senti with those of a genetic algorithm (GA) and a grid search method.
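
    A minimal sketch of the PSOGO-Senti idea, under stated assumptions: a global-best particle swarm searches jointly over the number of IG-ranked features and the SVM hyperparameters (C, gamma), with mutual information standing in for information gain and a synthetic dataset standing in for the sentiment corpora; the PSO constants and dimensions are illustrative.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.feature_selection import mutual_info_classif
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      rng = np.random.default_rng(1)
      X, y = make_classification(n_samples=300, n_features=40, n_informative=8,
                                 random_state=1)
      rank = np.argsort(mutual_info_classif(X, y, random_state=1))[::-1]

      def fitness(p):
          # p = (number of features, log10 C, log10 gamma)
          n_feat = int(np.clip(round(p[0]), 1, X.shape[1]))
          C, gamma = 10.0 ** p[1], 10.0 ** p[2]
          Xs = X[:, rank[:n_feat]]
          return cross_val_score(SVC(C=C, gamma=gamma), Xs, y, cv=3).mean()

      # Minimal global-best PSO loop.
      lo, hi = np.array([1, -2, -4]), np.array([40, 3, 1])
      pos = rng.uniform(lo, hi, (12, 3))
      vel = np.zeros_like(pos)
      pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
      for it in range(15):
          g = pbest[pbest_val.argmax()]
          vel = (0.7 * vel + 1.5 * rng.random((12, 3)) * (pbest - pos)
                          + 1.5 * rng.random((12, 3)) * (g - pos))
          pos = np.clip(pos + vel, lo, hi)
          vals = np.array([fitness(p) for p in pos])
          better = vals > pbest_val
          pbest[better], pbest_val[better] = pos[better], vals[better]

      print("best CV accuracy:", pbest_val.max(), "at", pbest[pbest_val.argmax()])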

  16. Classical mechanics approach applied to analysis of genetic oscillators.

    PubMed

    Vasylchenkova, Anastasiia; Mraz, Miha; Zimic, Nikolaj; Moskon, Miha

    2016-04-05

    Biological oscillators present a fundamental part of several regulatory mechanisms that control the response of various biological systems. Several analytical approaches for their analysis have been reported recently. They are, however, limited to specific oscillator topologies and/or to qualitative answers only, i.e., whether the dynamics of an oscillator over a given parameter space is oscillatory or not. Here we present a general analytical approach that can be applied to the analysis of biological oscillators. It relies on the projection of biological systems onto classical mechanics systems. The approach provides relatively accurate results regarding the type of behaviour a system exhibits (i.e., oscillatory or not) and the periods of potential oscillations, without the need for expensive numerical simulations. We demonstrate and verify the proposed approach on three different implementations of an amplified negative feedback oscillator.

  17. Stochastic real-time optimal control: A pseudospectral approach for bearing-only trajectory optimization

    NASA Astrophysics Data System (ADS)

    Ross, Steven M.

    A method is presented to couple and solve the optimal control and the optimal estimation problems simultaneously, allowing systems with bearing-only sensors to maneuver to obtain observability for relative navigation without unnecessarily detracting from a primary mission. A fundamentally new approach to trajectory optimization and the dual control problem is presented, constraining polynomial approximations of the Fisher Information Matrix to provide an information gradient and allow prescription of the level of future estimation certainty required for mission accomplishment. Disturbances, modeling deficiencies, and corrupted measurements are addressed recursively using Radau pseudospectral collocation methods and sequential quadratic programming for the optimal path and an Unscented Kalman Filter for the target position estimate. The underlying real-time optimal control (RTOC) algorithm is developed, specifically addressing limitations of current techniques that lose error integration. The resulting guidance method can be applied to any bearing-only system, such as submarines using passive sonar, anti-radiation missiles, or small UAVs seeking to land on power lines for energy harvesting. System integration, variable timing methods, and discontinuity management techniques are provided for actual hardware implementation. Validation is accomplished with both simulation and flight test, autonomously landing a quadrotor helicopter on a wire.
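
    The information gradient described above rests on the Fisher Information Matrix of bearing-only measurements. A hedged sketch of how that matrix accumulates along a candidate trajectory (with an assumed target, own-ship path, and noise level, none of them from the study) is shown below.

      import numpy as np

      sigma = np.deg2rad(1.0)                      # bearing noise std (assumed)
      target = np.array([100.0, 50.0])
      path = np.c_[np.linspace(0, 80, 20), np.zeros(20)]   # candidate own-ship path

      fim = np.zeros((2, 2))
      for s in path:
          dx, dy = target - s
          r2 = dx * dx + dy * dy
          # Gradient of atan2(dy, dx) with respect to the target position.
          H = np.array([-dy, dx]) / r2
          fim += np.outer(H, H) / sigma ** 2

      # Larger information (smaller Cramer-Rao bound) means better observability;
      # a trajectory optimizer would constrain or reward this quantity.
      print("FIM:\n", fim)
      print("CRLB covariance:\n", np.linalg.inv(fim))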

  18. Optimizing communication satellites payload configuration with exact approaches

    NASA Astrophysics Data System (ADS)

    Stathakis, Apostolos; Danoy, Grégoire; Bouvry, Pascal; Talbi, El-Ghazali; Morelli, Gianluigi

    2015-12-01

    The satellite communications market is competitive and rapidly evolving. The payload, which is in charge of applying frequency conversion and amplification to the signals received from Earth before their retransmission, is made of various components. These include reconfigurable switches that permit the re-routing of signals based on market demand or because of some hardware failure. In order to meet modern requirements, the size and the complexity of current communication payloads are increasing significantly. Consequently, the optimal payload configuration, which was previously done manually by the engineers with the use of computerized schematics, is now becoming a difficult and time consuming task. Efficient optimization techniques are therefore required to find the optimal set(s) of switch positions to optimize some operational objective(s). In order to tackle this challenging problem for the satellite industry, this work proposes two Integer Linear Programming (ILP) models. The first one is single-objective and focuses on the minimization of the length of the longest channel path, while the second one is bi-objective and additionally aims at minimizing the number of switch changes in the payload switch matrix. Experiments are conducted on a large set of instances of realistic payload sizes using the CPLEX® solver and two well-known exact multi-objective algorithms. Numerical results demonstrate the efficiency and limitations of the ILP approach on this real-world problem.

  19. Execution of Multidisciplinary Design Optimization Approaches on Common Test Problems

    NASA Technical Reports Server (NTRS)

    Balling, R. J.; Wilkinson, C. A.

    1997-01-01

    A class of synthetic problems for testing multidisciplinary design optimization (MDO) approaches is presented. These test problems are easy to reproduce because all functions are given as closed-form mathematical expressions. They are constructed in such a way that the optimal value of all variables and the objective is unity. The test problems involve three disciplines and allow the user to specify the number of design variables, state variables, coupling functions, design constraints, controlling design constraints, and the strength of coupling. Several MDO approaches were executed on two sample synthetic test problems. These approaches included single-level optimization approaches, collaborative optimization approaches, and concurrent subspace optimization approaches. Execution results are presented, and the robustness and efficiency of these approaches are evaluated for these sample problems.

  20. Applying a weed risk assessment approach to GM crops.

    PubMed

    Keese, Paul K; Robold, Andrea V; Myers, Ruth C; Weisman, Sarah; Smith, Joe

    2014-12-01

    Current approaches to environmental risk assessment of genetically modified (GM) plants are modelled on chemical risk assessment methods, which have a strong focus on toxicity. There are additional types of harms posed by plants that have been extensively studied by weed scientists and incorporated into weed risk assessment methods. Weed risk assessment uses robust, validated methods that are widely applied to regulatory decision-making about potentially problematic plants. They are designed to encompass a broad variety of plant forms and traits in different environments, and can provide reliable conclusions even with limited data. The knowledge and experience that underpin weed risk assessment can be harnessed for environmental risk assessment of GM plants. A case study illustrates the application of the Australian post-border weed risk assessment approach to a representative GM plant. This approach is a valuable tool to identify potential risks from GM plants.

  1. A multiple objective optimization approach to quality control

    NASA Technical Reports Server (NTRS)

    Seaman, Christopher Michael

    1991-01-01

    The use of product quality as the performance criterion for manufacturing system control is explored. The goal in manufacturing, for economic reasons, is to optimize product quality. The problem is that since quality is a rather nebulous product characteristic, there is seldom an analytic function that can be used as a measure. Therefore, standard control approaches, such as optimal control, cannot readily be applied. A second problem with optimizing product quality is that it is typically measured along many dimensions: there are many aspects of quality which must be optimized simultaneously. Very often these different aspects are incommensurate and competing. The concept of optimality must now include accepting tradeoffs among the different quality characteristics. These problems are addressed using multiple objective optimization. It is shown that the quality control problem can be defined as a multiple objective optimization problem. A controller structure is defined using this as the basis. Then, an algorithm is presented which can be used by an operator to interactively find the best operating point. Essentially, the algorithm uses process data to provide the operator with two pieces of information: (1) if it is possible to simultaneously improve all quality criteria, then determine what changes to the process input or controller parameters should be made to do this; and (2) if it is not possible to improve all criteria, and the current operating point is not a desirable one, select a criterion in which a tradeoff should be made, and make input changes to improve all other criteria. The process is not operating at an optimal point in any sense if no tradeoff has to be made to move to a new operating point. This algorithm ensures that operating points are optimal in some sense and provides the operator with information about tradeoffs when seeking the best operating point. The multiobjective algorithm was implemented in two different injection molding scenarios.
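
    The first of the two tests described above, deciding whether every quality criterion can be improved simultaneously, can be posed as a small feasibility linear program over candidate input changes. The sketch below uses scipy.optimize.linprog with made-up local gradients; it illustrates the general idea, not the author's specific algorithm.

      import numpy as np
      from scipy.optimize import linprog

      # Assumed local gradients d(criterion_i)/d(inputs), criteria to minimize.
      grads = np.array([[ 1.0, -0.5],
                        [-0.2,  0.8],
                        [ 0.6,  0.3]])

      # Find a direction d with grads @ d <= -eps and |d_j| <= 1 (pure feasibility).
      eps, n = 1e-3, grads.shape[1]
      res = linprog(c=np.zeros(n), A_ub=grads, b_ub=-eps * np.ones(len(grads)),
                    bounds=[(-1, 1)] * n, method="highs")

      if res.success:
          print("all criteria can improve; move inputs along", res.x)
      else:
          print("no common descent direction: a tradeoff must be chosen")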

  2. An Iterative Approach for the Optimization of Pavement Maintenance Management at the Network Level

    PubMed Central

    Torres-Machí, Cristina; Chamorro, Alondra; Videla, Carlos; Yepes, Víctor

    2014-01-01

    Pavement maintenance is one of the major issues of public agencies. Insufficient investment or inefficient maintenance strategies lead to high economic expenses in the long term. Under budgetary restrictions, the optimal allocation of resources becomes a crucial aspect. Two traditional approaches (sequential and holistic) and four classes of optimization methods (selection based on ranking, mathematical optimization, near optimization, and other methods) have been applied to solve this problem. They vary in the number of alternatives considered and in how the selection process is performed. Therefore, a prior understanding of the problem is mandatory to identify the most suitable approach and method for a particular network. This study aims to assist highway agencies, researchers, and practitioners on when and how to apply available methods, based on a comparative analysis of the current state of the practice. The holistic approach tackles the problem by considering the overall network condition, while the sequential approach is easier to implement and understand but may lead to solutions far from optimal. Scenarios in which each approach is suitable are identified. Finally, an iterative approach that gathers the advantages of the traditional approaches is proposed and applied in a case study. The proposed approach considers the overall network condition in a simpler and more intuitive manner than the holistic approach. PMID:24741352

  3. An iterative approach for the optimization of pavement maintenance management at the network level.

    PubMed

    Torres-Machí, Cristina; Chamorro, Alondra; Videla, Carlos; Pellicer, Eugenio; Yepes, Víctor

    2014-01-01

    Pavement maintenance is one of the major issues of public agencies. Insufficient investment or inefficient maintenance strategies lead to high economic expenses in the long term. Under budgetary restrictions, the optimal allocation of resources becomes a crucial aspect. Two traditional approaches (sequential and holistic) and four classes of optimization methods (selection based on ranking, mathematical optimization, near optimization, and other methods) have been applied to solve this problem. They vary in the number of alternatives considered and in how the selection process is performed. Therefore, a prior understanding of the problem is mandatory to identify the most suitable approach and method for a particular network. This study aims to assist highway agencies, researchers, and practitioners on when and how to apply available methods, based on a comparative analysis of the current state of the practice. The holistic approach tackles the problem by considering the overall network condition, while the sequential approach is easier to implement and understand but may lead to solutions far from optimal. Scenarios in which each approach is suitable are identified. Finally, an iterative approach that gathers the advantages of the traditional approaches is proposed and applied in a case study. The proposed approach considers the overall network condition in a simpler and more intuitive manner than the holistic approach.

  4. Applying Genetic Algorithms To Query Optimization in Document Retrieval.

    ERIC Educational Resources Information Center

    Horng, Jorng-Tzong; Yeh, Ching-Chang

    2000-01-01

    Proposes a novel approach to automatically retrieve keywords and then uses genetic algorithms to adapt the keyword weights. Discusses Chinese text retrieval, term frequency rating formulas, vector space models, bigrams, the PAT-tree structure for information retrieval, query vectors, and relevance feedback. (Author/LRW)

  5. Neoliberal Optimism: Applying Market Techniques to Global Health.

    PubMed

    Mei, Yuyang

    2016-09-23

    Global health and neoliberalism are becoming increasingly intertwined as organizations utilize markets and profit motives to solve the traditional problems of poverty and population health. I use field work conducted over 14 months in a global health technology company to explore how the promise of neoliberalism re-envisions humanitarian efforts. In this company's vaccine refrigerator project, staff members expect their investors and their market to allow them to achieve scale and develop accountability to their users in developing countries. However, the translation of neoliberal techniques to the global health sphere falls short of the ideal, as profits are meager and purchasing power remains with donor organizations. The continued optimism in market principles amidst such a non-ideal market reveals the tenacious ideological commitment to neoliberalism in these global health projects.

  6. A Collective Neurodynamic Approach to Constrained Global Optimization.

    PubMed

    Yan, Zheng; Fan, Jianchao; Wang, Jun

    2016-04-01

    Global optimization is a long-lasting research topic in the field of optimization, posing many challenging theoretical and computational issues. This paper presents a novel collective neurodynamic method for solving constrained global optimization problems. First, a one-layer recurrent neural network (RNN) is presented for searching for the Karush-Kuhn-Tucker points of the optimization problem under study. Next, a collective neurodynamic optimization approach is developed by emulating the paradigm of brainstorming. Multiple RNNs are exploited cooperatively to search for the global optimal solutions in a framework of particle swarm optimization. Each RNN carries out a precise local search and converges to a candidate solution according to its own neurodynamics. The neuronal state of each neural network is repetitively reset by exchanging historical information between each individual network and the entire group. Wavelet mutation is performed to avoid prematurity, add diversity, and promote global convergence. It is proved in the framework of stochastic optimization that the proposed collective neurodynamic approach is capable of computing the global optimal solutions with probability one provided that a sufficiently large number of neural networks are utilized. The essence of the collective neurodynamic optimization approach lies in its potential to solve constrained global optimization problems in real time. The effectiveness and characteristics of the proposed approach are illustrated using benchmark optimization problems.

  7. Using Response Surface Methodology as an Approach to Understand and Optimize Operational Air Power

    DTIC Science & Technology

    2010-01-01

    [Abstract not recoverable: this record contains only report-documentation boilerplate and reference fragments. Title: Using Response Surface Methodology As an Approach to Understand and Optimize Operational Air Power; authors: Marvin L. Simpson, Jr. and Resit Unal; the surviving reference fragments cite Taguchi methods and response surface methodology texts.]

  8. Examining the Bernstein global optimization approach to optimal power flow problem

    NASA Astrophysics Data System (ADS)

    Patil, Bhagyesh V.; Sampath, L. P. M. I.; Krishnan, Ashok; Ling, K. V.; Gooi, H. B.

    2016-10-01

    This work addresses a nonconvex optimal power flow (OPF) problem. We introduce a new approach, in the context of the OPF problem, based on Bernstein polynomials. The applicability of the approach is studied on a real-world 3-bus power system. The numerical results obtained with this new approach for the 3-bus system reveal a satisfactory improvement in terms of optimality. The results are found to be competitive with the generic global optimization solvers BARON and COUENNE.
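
    The core primitive behind Bernstein-based global optimization is the range enclosure given by Bernstein coefficients, which supplies cheap bounds for a branch-and-bound search. A minimal sketch for a univariate polynomial on [0, 1] is shown below; the OPF polynomial system itself is multivariate and far larger, and the coefficients here are illustrative.

      import numpy as np
      from math import comb

      def bernstein_coeffs(a):
          """a[j] are power-basis coefficients of p(x) = sum a[j] x**j."""
          n = len(a) - 1
          return np.array([sum(comb(i, j) / comb(n, j) * a[j] for j in range(i + 1))
                           for i in range(n + 1)])

      a = [1.0, -3.0, 2.0]          # p(x) = 1 - 3x + 2x^2 on [0, 1]
      b = bernstein_coeffs(a)
      print("enclosure of p([0,1]):", b.min(), b.max())

      # Check against a fine sampling of the true range.
      x = np.linspace(0, 1, 1001)
      p = np.polyval(a[::-1], x)
      print("sampled range       :", p.min(), p.max())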

  9. Stochastic Optimal Control and Linear Programming Approach

    SciTech Connect

    Buckdahn, R.; Goreac, D.; Quincampoix, M.

    2011-04-15

    We study a classical stochastic optimal control problem with constraints and discounted payoff in an infinite horizon setting. The main result of the present paper is that this optimal control problem is shown to have the same value as a linear optimization problem stated on an appropriate space of probability measures. This enables one to derive a dual formulation that appears to be strongly connected to the notion of (viscosity sub)solution to a suitable Hamilton-Jacobi-Bellman equation. We also discuss the relation to long-time average problems.
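
    In the standard occupation-measure form of such results (stated here as an assumption; the abstract does not give the paper's exact formulation), the discounted value equals the optimum of a linear program over measures on state-action pairs:

      V(x_0) \;=\; \inf_{\mu \ge 0} \int c(x,u)\,\mu(\mathrm{d}x,\mathrm{d}u)
      \quad \text{s.t.} \quad
      \int \big( \mathcal{L}^{u}\varphi(x) - \lambda\,\varphi(x) \big)\,\mu(\mathrm{d}x,\mathrm{d}u) + \varphi(x_0) = 0
      \quad \forall\, \varphi \in C_b^{2},

    where \mathcal{L}^{u} is the controlled generator and \lambda > 0 the discount rate; the constraints of the control problem enter through the support of \mu, and the dual of this program is what connects the value to viscosity (sub)solutions of the Hamilton-Jacobi-Bellman equation.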

  10. Applying riding-posture optimization on bicycle frame design.

    PubMed

    Hsiao, Shih-Wen; Chen, Rong-Qi; Leng, Wan-Lee

    2015-11-01

    Customization has become a trend in bicycle design in recent years. The comfort of riding a bike is therefore an important factor that deserves close attention when developing a bicycle. From the viewpoint of ergonomics, the concept of "fitting object to the human body" is designed into the bicycle frame in this study. Firstly, the important feature points of riding posture were automatically detected by an image processing method. In the measurement process, the best riding posture was identified experimentally, and thus the positions of the feature points and the joint angles of the human body were obtained. Afterwards, according to the measurement data, three key points: the handlebar, the saddle and the crank center, were identified and applied to the frame design of various bicycle types. Lastly, this study further proposes a frame size table for common bicycle types, which is helpful to designers.

  11. Beyond parallax barriers: applying formal optimization methods to multilayer automultiscopic displays

    NASA Astrophysics Data System (ADS)

    Lanman, Douglas; Wetzstein, Gordon; Hirsch, Matthew; Heidrich, Wolfgang; Raskar, Ramesh

    2012-03-01

    This paper focuses on resolving long-standing limitations of parallax barriers by applying formal optimization methods. We consider two generalizations of conventional parallax barriers. First, we consider general two-layer architectures, supporting high-speed temporal variation with arbitrary opacities on each layer. Second, we consider general multi-layer architectures containing three or more light-attenuating layers. This line of research has led to two new attenuation-based displays. The High-Rank 3D (HR3D) display contains a stacked pair of LCD panels; rather than using heuristically-defined parallax barriers, both layers are jointly-optimized using low-rank light field factorization, resulting in increased brightness, refresh rate, and battery life for mobile applications. The Layered 3D display extends this approach to multi-layered displays composed of compact volumes of light-attenuating material. Such volumetric attenuators recreate a 4D light field when illuminated by a uniform backlight. We further introduce Polarization Fields as an optically-efficient and computationally efficient extension of Layered 3D to multi-layer LCDs. Together, these projects reveal new generalizations to parallax barrier concepts, enabled by the application of formal optimization methods to multi-layer attenuation-based designs in a manner that uniquely leverages the compressive nature of 3D scenes for display applications.
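
    A hedged toy of the low-rank factorization at the heart of HR3D: a simplified nonnegative 2-D "light field" matrix is factored into T time-multiplexed front/back layer patterns by standard Lee-Seung NMF multiplicative updates. Real systems factor a 4-D light field and model transmittance limits; both are omitted here, and all dimensions are illustrative.

      import numpy as np

      rng = np.random.default_rng(2)
      L = rng.random((64, 64)) ** 2          # stand-in target light field
      T = 3                                   # number of multiplexed frames

      F = rng.random((64, T)) + 0.1           # front-layer patterns (columns)
      B = rng.random((T, 64)) + 0.1           # back-layer patterns (rows)
      for _ in range(200):                    # Lee-Seung multiplicative updates
          F *= (L @ B.T) / (F @ B @ B.T + 1e-9)
          B *= (F.T @ L) / (F.T @ F @ B + 1e-9)

      err = np.linalg.norm(L - F @ B) / np.linalg.norm(L)
      print(f"relative reconstruction error with {T} frames: {err:.3f}")

    Raising T (more time-multiplexed frames) lowers the residual, which mirrors the brightness/refresh-rate trade-off the paper reports.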

  12. A linear programming approach for optimal contrast-tone mapping.

    PubMed

    Wu, Xiaolin

    2011-05-01

    This paper proposes a novel algorithmic approach of image enhancement via optimal contrast-tone mapping. In a fundamental departure from the current practice of histogram equalization for contrast enhancement, the proposed approach maximizes expected contrast gain subject to an upper limit on tone distortion and optionally to other constraints that suppress artifacts. The underlying contrast-tone optimization problem can be solved efficiently by linear programming. This new constrained optimization approach for image enhancement is general, and the user can add and fine tune the constraints to achieve desired visual effects. Experimental results demonstrate clearly superior performance of the new approach over histogram equalization and its variants.
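
    A much-simplified rendering of contrast-tone optimization as a linear program may clarify the structure: the variables are the increments s_j of the transfer function at each gray level, the objective is the expected contrast gain weighted by the histogram, and the constraints are the output range plus a floor on each increment that stands in for the tone-distortion limit. Constraint values are illustrative, not the paper's exact formulation.

      import numpy as np
      from scipy.optimize import linprog

      levels, out_range, s_min = 64, 64.0, 0.25
      rng = np.random.default_rng(3)
      hist = rng.random(levels)
      p = hist / hist.sum()                               # gray-level histogram

      res = linprog(c=-p,                                 # maximize p @ s
                    A_ub=np.ones((1, levels)), b_ub=[out_range],
                    bounds=[(s_min, None)] * levels, method="highs")
      s = res.x
      transfer = np.concatenate([[0.0], np.cumsum(s)])    # monotone tone curve
      print("contrast gain:", p @ s, "; transfer range:", transfer[-1])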

  13. An optimization approach and its application to compare DNA sequences

    NASA Astrophysics Data System (ADS)

    Liu, Liwei; Li, Chao; Bai, Fenglan; Zhao, Qi; Wang, Ying

    2015-02-01

    Studying the evolutionary relationships between biological sequences by comparing and analyzing gene sequences has become one of the main tasks in bioinformatics research. Many valid methods have been applied to DNA sequence alignment. In this paper, we propose a novel comparison method based on the Lempel-Ziv (LZ) complexity to compare biological sequences. Moreover, we introduce a new distance measure and make use of the corresponding similarity matrix to construct phylogenetic trees without multiple sequence alignment. Further, we construct phylogenetic trees for 24 species of Eutherian mammals and 48 Hepatitis E virus (HEV) sequences from different countries by an optimization approach. The results indicate that this new method improves the efficiency of sequence comparison and successfully constructs phylogenies.
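
    A hedged sketch of the ingredients: a plain LZ76 phrase-counting routine and one LZ-based sequence distance from the related literature (in the style of Otu and Sayood); the paper's own distance measure may differ in detail, and the sequences below are toys.

      def lz76(s):
          """Number of phrases in the LZ76 exhaustive parsing of s."""
          n, i, c = len(s), 0, 0
          while i < n:
              l = 1
              # Extend the phrase while it is reproducible from the prior text.
              while i + l <= n and s[i:i + l] in s[:i + l - 1]:
                  l += 1
              c += 1
              i += l
          return c

      def lz_distance(s, q):
          cs, cq = lz76(s), lz76(q)
          return max(lz76(s + q) - cs, lz76(q + s) - cq) / max(cs, cq)

      a = "ATGCATGCAAGGCTT"
      b = "ATGCATGCAAGGCTA"
      c = "TTGACCGTGCAAGTA"
      # The similar pair (a, b) should typically score lower than (a, c).
      print(lz_distance(a, b), lz_distance(a, c))

    Pairwise distances of this kind fill the matrix from which a phylogenetic tree can be built, e.g. by neighbor joining, with no multiple sequence alignment required.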

  14. Applying optimal model selection in principal stratification for causal inference.

    PubMed

    Odondi, Lang'o; McNamee, Roseanne

    2013-05-20

    Noncompliance to treatment allocation is a key source of complication for causal inference. Efficacy estimation is likely to be compounded by the presence of noncompliance in both treatment arms of clinical trials, where the intention-to-treat estimate provides a biased estimator for the true causal estimate even under the homogeneous treatment effects assumption. The principal stratification method has been developed to address such posttreatment complications. The present work extends a principal stratification method that adjusts for noncompliance in two-arm trials by developing model selection for covariates predicting compliance to treatment in each arm. We apply the method to analyse data from the Esprit study, which was conducted to ascertain whether unopposed oestrogen (hormone replacement therapy) reduced the risk of further cardiac events in postmenopausal women who survive a first myocardial infarction. We adjust for noncompliance in both treatment arms under a Bayesian framework to produce causal risk ratio estimates for each principal stratum. For mild values of a sensitivity parameter and using separate predictors of compliance in each arm, principal stratification results suggested that compliance with hormone replacement therapy only would reduce the risk for death and myocardial reinfarction by about 47% and 25%, respectively, whereas compliance with either treatment would reduce the risk for death by 13% and reinfarction by 60% among the most compliant. However, the results were sensitive to the user-defined sensitivity parameter.

  15. New approaches to the design optimization of hydrofoils

    NASA Astrophysics Data System (ADS)

    Beyhaghi, Pooriya; Meneghello, Gianluca; Bewley, Thomas

    2015-11-01

    Two simulation-based approaches are developed to optimize the design of hydrofoils for foiling catamarans, with the objective of maximizing efficiency (lift/drag). In the first, a simple hydrofoil model based on the vortex-lattice method is coupled with a hybrid global and local optimization algorithm that combines our Delaunay-based optimization algorithm with a Generalized Pattern Search. This optimization procedure is compared with the classical Newton-based optimization method. The accuracy of the vortex-lattice simulation of the optimized design is compared with a more accurate and computationally expensive LES-based simulation. In the second approach, the (expensive) LES model of the flow is used directly during the optimization. A modified Delaunay-based optimization algorithm is used to maximize the efficiency of the optimization, in which the objective is a finite-time-averaged approximation of the infinite-time-averaged value of an ergodic and stationary process. Since the optimization algorithm takes into account the uncertainty of this finite-time-averaged approximation, the total computational time of the optimization is significantly reduced. Results from the two approaches are compared.

  16. Russian Loanword Adaptation in Persian; Optimal Approach

    ERIC Educational Resources Information Center

    Kambuziya, Aliye Kord Zafaranlu; Hashemi, Eftekhar Sadat

    2011-01-01

    In this paper we analyze some of the phonological rules of Russian loanword adaptation in Persian within the framework of Optimality Theory (OT) (Prince & Smolensky, 1993/2004). This is the first study of the phonological processes at work in the adaptation of Russian loanwords in Persian. From roughly 50 current Russian loanwords that were gathered, we selected a subset for analysis. We…

  17. Risks, scientific uncertainty and the approach of applying precautionary principle.

    PubMed

    Lo, Chang-fa

    2009-03-01

    The paper intends to clarify the nature and aspects of risks and scientific uncertainty and also to elaborate the approach of application of precautionary principle for the purpose of handling the risk arising from scientific uncertainty. It explains the relations between risks and the application of precautionary principle at international and domestic levels. In the situations where an international treaty has admitted the precautionary principle and in the situation where there is no international treaty admitting the precautionary principle or enumerating the conditions to take measures, the precautionary principle has a role to play. The paper proposes a decision making tool, containing questions to be asked, to help policymakers to apply the principle. It also proposes a "weighing and balancing" procedure to help them decide the contents of the measure to cope with the potential risk and to avoid excessive measures.

  18. A sensitivity equation approach to shape optimization in fluid flows

    NASA Technical Reports Server (NTRS)

    Borggaard, Jeff; Burns, John

    1994-01-01

    A sensitivity equation method is applied to shape optimization problems. An algorithm is developed and tested on the problem of designing optimal forebody simulators for a 2D, inviscid supersonic flow. The algorithm uses a BFGS/Trust Region optimization scheme with sensitivities computed by numerically approximating the linear partial differential equations that determine the flow sensitivities. Numerical examples are presented to illustrate the method.
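
    A toy version of the sensitivity-equation machinery, on an ODE rather than the flow PDEs of the paper: differentiating the state equation y' = -a*y with respect to the parameter a yields a linear sensitivity equation s' = -a*s - y, and the objective gradient follows by quadrature. All values below are illustrative.

      import numpy as np
      from scipy.integrate import odeint, trapezoid

      a_true, a0 = 1.3, 0.8
      t = np.linspace(0.0, 2.0, 201)
      y_obs = np.exp(-a_true * t)                    # synthetic observations

      def rhs(z, t, a):
          y, s = z
          return [-a * y, -a * s - y]                # state and its sensitivity

      def J_and_grad(a):
          y, s = odeint(rhs, [1.0, 0.0], t, args=(a,)).T
          r = y - y_obs
          # J = 0.5 * int r^2 dt,  dJ/da = int r * s dt
          return 0.5 * trapezoid(r ** 2, t), trapezoid(r * s, t)

      a = a0
      for _ in range(50):                            # steepest descent on a
          J, g = J_and_grad(a)
          a -= 2.0 * g
      print("recovered a:", a, "objective:", J)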

  19. Molecular Approaches for Optimizing Vitamin D Supplementation.

    PubMed

    Carlberg, Carsten

    2016-01-01

    Vitamin D can be synthesized endogenously within UV-B exposed human skin. However, avoidance of sufficient sun exposure via predominant indoor activities, textile coverage, dark skin at higher latitude, and seasonal variations makes the intake of vitamin D fortified food or direct vitamin D supplementation necessary. Vitamin D has via its biologically most active metabolite 1α,25-dihydroxyvitamin D and the transcription factor vitamin D receptor a direct effect on the epigenome and transcriptome of many human tissues and cell types. Different interpretation of results from observational studies with vitamin D led to some dispute in the field on the desired optimal vitamin D level and the recommended daily supplementation. This chapter will provide background on the epigenome- and transcriptome-wide functions of vitamin D and will outline how this insight may be used for determining of the optimal vitamin D status of human individuals. These reflections will lead to the concept of a personal vitamin D index that may be a better guideline for an optimized vitamin D supplementation than population-based recommendations.

  20. Practical approach to apply range image sensors in machine automation

    NASA Astrophysics Data System (ADS)

    Moring, Ilkka; Paakkari, Jussi

    1993-10-01

    In this paper we propose a practical approach to apply range imaging technology in machine automation. The applications we are especially interested in are industrial heavy-duty machines like paper roll manipulators in harbor terminals, harvesters in forests and drilling machines in mines. Characteristic of these applications is that the sensing system has to be fast, mid-ranging, compact, robust, and relatively cheap. On the other hand the sensing system is not required to be generic with respect to the complexity of scenes and objects or number of object classes. The key in our approach is that just a limited range data set or as we call it, a sparse range image is acquired and analyzed. This makes both the range image sensor and the range image analysis process more feasible and attractive. We believe that this is the way in which range imaging technology will enter the large industrial machine automation market. In the paper we analyze as a case example one of the applications mentioned and, based on that, we try to roughly specify the requirements for a range imaging based sensing system. The possibilities to implement the specified system are analyzed based on our own work on range image acquisition and interpretation.

  1. Scalar and Multivariate Approaches for Optimal Network Design in Antarctica

    NASA Astrophysics Data System (ADS)

    Hryniw, Natalia

    Observations are crucial for weather and climate, not only for daily forecasts and logistical purposes, but also for maintaining representative records and for tuning atmospheric models. Here, scalar theory for optimal network design is extended to a multivariate framework, allowing optimal station siting for full-field optimization. Ensemble sensitivity theory is expanded to produce the covariance-trace approach, which optimizes the trace of the covariance matrix. Relative entropy is also used for multivariate optimization, as an information-theoretic approach to finding optimal locations. Antarctic surface temperature data are used as a testbed for these methods. The two methods produce different results, which are tied to the fundamental physical parameters of the Antarctic temperature field.
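
    A hedged sketch of the covariance-trace idea: under a Gaussian assumption, greedily choose the station whose observation most reduces the trace of the conditioned field covariance. The distance-based covariance kernel and noise level below are synthetic stand-ins, not Antarctic temperature statistics.

      import numpy as np

      rng = np.random.default_rng(4)
      pts = rng.uniform(0, 1000, (200, 2))                # candidate sites (km)
      d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
      P = np.exp(-d / 300.0)                              # assumed field covariance
      obs_var = 0.05                                      # observation noise variance

      chosen = []
      for _ in range(5):
          # Trace reduction from observing site j: ||P[:, j]||^2 / (P_jj + r).
          scores = [np.inf if j in chosen else
                    -(P[:, j] @ P[:, j]) / (P[j, j] + obs_var)
                    for j in range(len(pts))]
          j = int(np.argmin(scores))
          chosen.append(j)
          k = P[:, j] / (P[j, j] + obs_var)               # Kalman-style update
          P = P - np.outer(k, P[j, :])
          print(f"site {j} at {pts[j].round(1)}, remaining trace {np.trace(P):.1f}")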

  2. Optimal expansion of water quality monitoring network by fuzzy optimization approach.

    PubMed

    Ning, Shu-Kuang; Chang, Ni-Bin

    2004-02-01

    River reaches are frequently classified with respect to various modes of water utilization, depending on the quantity and quality of water resources available at different locations. Monitoring of water quality in a river system must collect both temporal and spatial information so that the state of a water body can be compared with the preferred situation under different scenarios. Designing a technically sound monitoring network, however, requires identifying a suite of significant planning objectives while considering a series of inherent limitations simultaneously. It relies on applying an advanced systems analysis technique via an integrated simulation-optimization approach to meet the ultimate goal. This article presents an optimal expansion strategy of water quality monitoring stations for fulfilling a long-term monitoring mission under an uncertain environment. The planning objectives considered in this analysis are to increase the degree of protection in the proximity of the river system with higher population density, to enhance the detection capability in low-compliance areas, to promote detection sensitivity by better deployment and installation of monitoring stations, to reflect the levels of utilization potential of the water body at different locations, and to monitor the essential water quality in the upstream areas of all water intakes. The constraint set contains the limitations of budget, equity implications, and detection sensitivity in the water environment. A fuzzy multi-objective evaluation framework that reflects the uncertainty embedded in decision making is designed for postulating and analyzing the underlying principles of the optimal expansion strategy of the monitoring network. The case study, organized in South Taiwan, demonstrates a set of more robust and flexible expansion alternatives in terms of spatial priority. Such an approach uniquely indicates the preference order in which each candidate site should be expanded, step-wise, whenever the budget allows.

  3. A new design approach based on differential evolution algorithm for geometric optimization of magnetorheological brakes

    NASA Astrophysics Data System (ADS)

    Le-Duc, Thang; Ho-Huu, Vinh; Nguyen-Thoi, Trung; Nguyen-Quoc, Hung

    2016-12-01

    In recent years, various types of magnetorheological brakes (MRBs) have been proposed and optimized by different optimization algorithms that are integrated in commercial software such as ANSYS and Comsol Multiphysics. However, many of these optimization algorithms often possess some noteworthy shortcomings such as the trap of solutions at local extremes, or the limited number of design variables or the difficulty of dealing with discrete design variables. Thus, to overcome these limitations and develop an efficient computation tool for optimal design of the MRBs, an optimization procedure that combines differential evolution (DE), a gradient-free global optimization method with finite element analysis (FEA) is proposed in this paper. The proposed approach is then applied to the optimal design of MRBs with different configurations including conventional MRBs and MRBs with coils placed on the side housings. Moreover, to approach a real-life design, some necessary design variables of MRBs are considered as discrete variables in the optimization process. The obtained optimal design results are compared with those of available optimal designs in the literature. The results reveal that the proposed method outperforms some traditional approaches.
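
    A minimal sketch of the optimization loop described above, with scipy.optimize.differential_evolution in place of the paper's DE implementation and a made-up analytic surrogate in place of the finite-element analysis; the discrete wire-gauge variable is handled by rounding inside the objective, one common DE device. All constants are illustrative.

      import numpy as np
      from scipy.optimize import differential_evolution

      wire_gauges = np.array([0.5, 0.7, 0.9, 1.1])        # discrete choices (mm)

      def surrogate(x):
          r, w, gauge_idx = x[0], x[1], int(round(x[2]))  # rotor radius, width
          gauge = wire_gauges[np.clip(gauge_idx, 0, 3)]
          torque = 8.0e4 * r**2 * w * (1.0 + 0.3 * gauge) # fake FEA outputs
          mass = 40.0 * r**2 + 15.0 * r * w
          return torque, mass

      def objective(x):
          torque, mass = surrogate(x)
          penalty = max(0.0, 10.0 - torque) * 100.0       # require >= 10 N*m
          return mass + penalty                           # minimize braking mass

      # polish=False keeps the gradient-based polish step away from the
      # rounded discrete variable.
      res = differential_evolution(objective,
                                   bounds=[(0.03, 0.12), (0.01, 0.05), (0, 3)],
                                   seed=5, maxiter=100, polish=False)
      print("optimal radius/width/gauge code:", res.x, "mass:", objective(res.x))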

  4. Applying response surface methodology to optimize nimesulide permeation from topical formulation.

    PubMed

    Shahzad, Yasser; Afreen, Urooj; Nisar Hussain Shah, Syed; Hussain, Talib

    2013-01-01

    Nimesulide is a non-steroidal anti-inflammatory drug that acts through selective inhibition of the COX-2 enzyme. Poor bioavailability of this drug may lead to local toxicity at the site of aggregation and hinder the attainment of the desired therapeutic effect. This study aimed at formulating and optimizing topically applied lotions of nimesulide using an experimental design approach, namely response surface methodology. The formulated lotions were evaluated for pH, viscosity, spreadability, homogeneity, and in vitro permeation through rabbit skin using Franz diffusion cells. Data were fitted to linear, quadratic and cubic models, and the best-fit model was selected to investigate the influence of the permeation enhancers, namely propylene glycol and polyethylene glycol, on percutaneous absorption of nimesulide from the lotion formulations. The best-fit quadratic model showed that the enhancer combination at equal levels significantly increased the flux and permeability coefficient. The model was validated by comparing the predicted and experimental permeation responses of the optimized formulations, endorsing the prognostic ability of response surface methodology.

  5. A system approach to aircraft optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1991-01-01

    Mutual couplings among the mathematical models of physical phenomena and parts of a system such as an aircraft complicate the design process because each contemplated design change may have a far reaching consequence throughout the system. Techniques are outlined for computing these influences as system design derivatives useful for both judgemental and formal optimization purposes. The techniques facilitate decomposition of the design process into smaller, more manageable tasks and they form a methodology that can easily fit into existing engineering organizations and incorporate their design tools.

  6. Optimization approaches to volumetric modulated arc therapy planning

    SciTech Connect

    Unkelbach, Jan Bortfeld, Thomas; Craft, David; Alber, Markus; Bangert, Mark; Bokrantz, Rasmus; Chen, Danny; Li, Ruijiang; Xing, Lei; Men, Chunhua; Nill, Simeon; Papp, Dávid; Romeijn, Edwin; Salari, Ehsan

    2015-03-15

    Volumetric modulated arc therapy (VMAT) has found widespread clinical application in recent years. A large number of treatment planning studies have evaluated the potential for VMAT for different disease sites based on the currently available commercial implementations of VMAT planning. In contrast, literature on the underlying mathematical optimization methods used in treatment planning is scarce. VMAT planning represents a challenging large scale optimization problem. In contrast to fluence map optimization in intensity-modulated radiotherapy planning for static beams, VMAT planning represents a nonconvex optimization problem. In this paper, the authors review the state-of-the-art in VMAT planning from an algorithmic perspective. Different approaches to VMAT optimization, including arc sequencing methods, extensions of direct aperture optimization, and direct optimization of leaf trajectories are reviewed. Their advantages and limitations are outlined and recommendations for improvements are discussed.

  7. Optimization approaches to volumetric modulated arc therapy planning.

    PubMed

    Unkelbach, Jan; Bortfeld, Thomas; Craft, David; Alber, Markus; Bangert, Mark; Bokrantz, Rasmus; Chen, Danny; Li, Ruijiang; Xing, Lei; Men, Chunhua; Nill, Simeon; Papp, Dávid; Romeijn, Edwin; Salari, Ehsan

    2015-03-01

    Volumetric modulated arc therapy (VMAT) has found widespread clinical application in recent years. A large number of treatment planning studies have evaluated the potential for VMAT for different disease sites based on the currently available commercial implementations of VMAT planning. In contrast, literature on the underlying mathematical optimization methods used in treatment planning is scarce. VMAT planning represents a challenging large scale optimization problem. In contrast to fluence map optimization in intensity-modulated radiotherapy planning for static beams, VMAT planning represents a nonconvex optimization problem. In this paper, the authors review the state-of-the-art in VMAT planning from an algorithmic perspective. Different approaches to VMAT optimization, including arc sequencing methods, extensions of direct aperture optimization, and direct optimization of leaf trajectories are reviewed. Their advantages and limitations are outlined and recommendations for improvements are discussed.

  8. Genetic algorithm for design and manufacture optimization based on numerical simulations applied to aeronautic composite parts

    SciTech Connect

    Mouton, S.; Ledoux, Y.; Teissandier, D.; Sebastian, P.

    2010-06-15

    A key challenge for the future is to drastically reduce the human impact on the environment. In the aeronautic field, this challenge translates into optimizing the design of the aircraft to decrease its global mass. This reduction leads to the optimization of every constitutive part of the plane. The operation is even more delicate when the material used is a composite. In this case, it is necessary to find a compromise between the strength, the mass, and the manufacturing cost of the component. Because of these different kinds of design constraints, it is necessary to assist the engineer with a decision support system to determine feasible solutions. In this paper, an approach is proposed based on the coupling of the key characteristics of the design process and on consideration of the failure risk of the component. The originality of this work is that the manufacturing deviations due to the RTM process are integrated into the simulation of the assembly process. Two kinds of deviations are identified: volume impregnation (injection phase of the RTM process) and geometrical deviations (curing and cooling phases). The quantification of these deviations and the related failure risk calculation are based on finite element simulations (PAM-RTM® and SAMCEF® software). The use of a genetic algorithm makes it possible to estimate the impact of the design choices and their consequences on the failure risk of the component. The main focus of the paper is the optimization of tool design. In the framework of decision support systems, the failure risk calculation is used to compare possible industrialization alternatives. The method is applied to a particular part of the airplane structure: a spar unit made of carbon fiber/epoxy composite.

  9. Applying the J-optimal channelized quadratic observer to SPECT myocardial perfusion defect detection

    NASA Astrophysics Data System (ADS)

    Kupinski, Meredith K.; Clarkson, Eric; Ghaly, Michael; Frey, Eric C.

    2016-03-01

    To evaluate performance on a perfusion defect detection task from 540 image pairs of myocardial perfusion SPECT image data, we apply the J-optimal channelized quadratic observer (J-CQO). We compare AUC values of the linear Hotelling observer and J-CQO when the defect location is fixed and when it occurs in one of two locations. As expected, when the location is fixed a single channel maximizes the AUC; location variability requires multiple channels to maximize the AUC. The AUC is estimated from both the projection data and reconstructed images. J-CQO is quadratic because it uses the first- and second-order statistics of the image data from both classes. The linear data reduction by the channels is described by an L x M channel matrix, and in prior work we introduced an iterative gradient-based method for calculating the channel matrix. The dimensionality reduction from M measurements to L channels yields better estimates of these sample statistics from smaller sample sizes, and since the channelized covariance matrix is L x L instead of M x M, the matrix inverse is easier to compute. The novelty of our approach is the use of Jeffrey's divergence (J) as the figure of merit (FOM) for optimizing the channel matrix. We previously showed that the J-optimal channels are also the optimum channels for the AUC and the Bhattacharyya distance when the channel outputs are Gaussian distributed with equal means. This work evaluates the use of J as a surrogate FOM (SFOM) for the AUC when these statistical conditions are not satisfied.

  10. A comparison of reanalysis techniques: applying optimal interpolation and Ensemble Kalman Filtering to improve air quality monitoring at mesoscale.

    PubMed

    Candiani, Gabriele; Carnevale, Claudio; Finzi, Giovanna; Pisoni, Enrico; Volta, Marialuisa

    2013-08-01

    To fulfill the requirements of the 2008/50 Directive, which allows member states and regional authorities to use a combination of measurement and modeling to monitor air pollution concentration, a key approach to be properly developed and tested is the data assimilation one. In this paper, with a focus on regional domains, a comparison between optimal interpolation and Ensemble Kalman Filter is shown, to stress pros and drawbacks of the two techniques. These approaches can be used to implement a more accurate monitoring of the long-term pollution trends on a geographical domain, through an optimal combination of all the available sources of data. The two approaches are formalized and applied for a regional domain located in Northern Italy, where the PM10 level which is often higher than EU standard limits is measured.
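
    The optimal interpolation (OI) analysis step has a compact closed form; the sketch below applies it on a synthetic 1-D PM10-like field. An Ensemble Kalman Filter would differ mainly in estimating the background covariance B from an ensemble rather than prescribing it. Grid, covariances, and observations are illustrative stand-ins.

      import numpy as np

      n_grid, n_obs = 100, 8
      rng = np.random.default_rng(6)
      xg = np.linspace(0, 100, n_grid)                    # 1-D grid (km)
      xb = 30 + 5 * np.sin(xg / 10.0)                     # model background PM10

      H = np.zeros((n_obs, n_grid))                       # obs operator: sampling
      obs_idx = rng.choice(n_grid, n_obs, replace=False)
      H[np.arange(n_obs), obs_idx] = 1.0
      y = xb[obs_idx] + rng.normal(0, 2.0, n_obs) + 4.0   # biased-high stations

      d = np.abs(np.subtract.outer(xg, xg))
      B = 9.0 * np.exp(-d / 15.0)                         # background covariance
      R = 4.0 * np.eye(n_obs)                             # obs error covariance

      # Analysis: xa = xb + K (y - H xb),  K = B H^T (H B H^T + R)^{-1}.
      K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
      xa = xb + K @ (y - H @ xb)
      print("analysis increment at stations:", (xa - xb)[obs_idx].round(2))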

  11. Distributed Cooperative Optimal Control for Multiagent Systems on Directed Graphs: An Inverse Optimal Approach.

    PubMed

    Zhang, Huaguang; Feng, Tao; Yang, Guang-Hong; Liang, Hongjing

    2015-07-01

    In this paper, the inverse optimal approach is employed to design distributed consensus protocols that guarantee consensus and global optimality with respect to some quadratic performance indexes for identical linear systems on a directed graph. The inverse optimal theory is developed by introducing the notion of partial stability. As a result, the necessary and sufficient conditions for inverse optimality are proposed. By means of the developed inverse optimal theory, the necessary and sufficient conditions are established for globally optimal cooperative control problems on directed graphs. Basic optimal cooperative design procedures are given based on asymptotic properties of the resulting optimal distributed consensus protocols, and the multiagent systems can reach desired consensus performance (convergence rate and damping rate) asymptotically. Finally, two examples are given to illustrate the effectiveness of the proposed methods.

  12. Combustion efficiency optimization and virtual testing: A data-mining approach

    SciTech Connect

    Kusiak, A.; Song, Z.

    2006-08-15

    In this paper, a data-mining approach is applied to optimize combustion efficiency of a coal-fired boiler. The combustion process is complex, nonlinear, and nonstationary. A virtual testing procedure is developed to validate the results produced by the optimization methods. The developed procedure quantifies improvements in the combustion efficiency without performing live testing, which is expensive and time consuming. The ideas introduced in this paper are illustrated with an industrial case study.

  13. An Efficient Approach to Obtain Optimal Load Factors for Structural Design

    PubMed Central

    Bojórquez, Juan

    2014-01-01

    An efficient optimization approach is described for calibrating the load factors used in structural design. The load factors are calibrated so that the structural reliability index is as close as possible to a target reliability value. The optimization procedure is applied to find optimal load factors for the design of structures in accordance with the new version of the Mexico City Building Code (RCDF). To this end, the combination of factors corresponding to dead load plus live load is considered. The optimal combination is based on a parametric numerical analysis of several reinforced concrete elements, which are designed using different load factor values. The Monte Carlo simulation technique is used. The formulation is applied to different failure modes: flexure, shear, torsion, and compression plus bending of short and slender reinforced concrete elements. Finally, the structural reliability corresponding to the optimal load combination proposed here is compared with that corresponding to the load combination recommended by the current Mexico City Building Code. PMID:25133232

  14. Optimization strategies in the modelling of SG-SMB applied to separation of phenylalanine and tryptophan

    NASA Astrophysics Data System (ADS)

    Diógenes Tavares Câmara, Leôncio

    2014-03-01

    The solvent-gradient simulated moving bed process (SG-SMB) is a recent trend for improving performance over traditional isocratic solvent conditions. In the SG-SMB process, modulation of the solvent strength leads to significant increases in purity and productivity, together with a reduction in solvent consumption. A stepwise modelling approach was utilized to represent the interconnected chromatographic columns of the system, combined with a lumped mass transfer model between the solid and liquid phases. The influence of the solvent modifier was considered by applying the Abel model, which accounts for the effect of the modifier volume fraction on the partition coefficient. Correlation models for the mass transfer parameters were obtained from the retention times of the solutes as functions of the modifier volume fraction. The modelling and simulations were carried out and compared against an experimental SG-SMB unit separating the amino acids phenylalanine and tryptophan. The simulation results showed the great potential of the proposed modelling approach for representing such complex systems, fitting the experimental amino acid concentrations both in the extract and in the raffinate. A new optimization strategy, based on the phi-plot concept, was proposed to determine the best operating conditions.

  15. Optimization of wind plant layouts using an adjoint approach

    DOE PAGES

    King, Ryan N.; Dykes, Katherine; Graf, Peter; ...

    2017-03-10

    Using adjoint optimization and three-dimensional steady-state Reynolds-averaged Navier–Stokes (RANS) simulations, we present a new gradient-based approach for optimally siting wind turbines within utility-scale wind plants. By solving the adjoint equations of the flow model, the gradients needed for optimization are found at a cost that is independent of the number of control variables, thereby permitting optimization of large wind plants with many turbine locations. Moreover, compared to the common approach of superimposing prescribed wake deficits onto linearized flow models, the computational efficiency of the adjoint approach allows the use of higher-fidelity RANS flow models which can capture nonlinear turbulent flow physics within a wind plant. The steady-state RANS flow model is implemented in the Python finite-element package FEniCS and the derivation and solution of the discrete adjoint equations are automated within the dolfin-adjoint framework. Gradient-based optimization of wind turbine locations is demonstrated for idealized test cases that reveal new optimization heuristics such as rotational symmetry, local speedups, and nonlinear wake curvature effects. Layout optimization is also demonstrated on more complex wind rose shapes, including a full annual energy production (AEP) layout optimization over 36 inflow directions and 5 wind speed bins.
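
    A small discrete analogue of the adjoint trick described above: for a steady model A(p)u = f and objective J = g·u, a single adjoint solve prices the gradient with respect to every parameter at once, at a cost independent of the number of parameters. The parameterized linear system below is a stand-in for the RANS model and the FEniCS/dolfin-adjoint machinery, not their API.

      import numpy as np

      n, m = 50, 10                                        # states, parameters
      rng = np.random.default_rng(7)
      A0 = np.eye(n) * 4.0 + 0.1 * rng.standard_normal((n, n))
      Ak = 0.05 * rng.standard_normal((m, n, n))           # dA/dp_k (constant)
      f, g = rng.standard_normal(n), rng.standard_normal(n)
      p = rng.standard_normal(m)

      A = A0 + np.einsum("k,kij->ij", p, Ak)
      u = np.linalg.solve(A, f)                            # one state solve
      lam = np.linalg.solve(A.T, g)                        # one adjoint solve
      grad = -np.einsum("i,kij,j->k", lam, Ak, u)          # all m gradients at once

      # Finite-difference check on the first parameter.
      eps, dp = 1e-6, np.eye(m)[0]
      A_eps = A0 + np.einsum("k,kij->ij", p + eps * dp, Ak)
      fd = (g @ np.linalg.solve(A_eps, f) - g @ u) / eps
      print("adjoint vs FD gradient:", grad[0], fd)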

  16. A data-intensive approach to mechanistic elucidation applied to chiral anion catalysis

    PubMed Central

    Milo, Anat; Neel, Andrew J.; Toste, F. Dean; Sigman, Matthew S.

    2015-01-01

    Knowledge of chemical reaction mechanisms can facilitate catalyst optimization, but extracting that knowledge from a complex system is often challenging. Here we present a data-intensive method for deriving and then predictively applying a mechanistic model of an enantioselective organic reaction. As a validating case study, we selected an intramolecular dehydrogenative C-N coupling reaction, catalyzed by chiral phosphoric acid derivatives, in which catalyst-substrate association involves weak, non-covalent interactions. Little was previously understood regarding the structural origin of enantioselectivity in this system. Catalyst and substrate substituent effects were probed by systematic physical organic trend analysis. Plausible interactions between the substrate and catalyst that govern enantioselectivity were identified and supported experimentally, indicating that such an approach can afford an efficient means of leveraging mechanistic insight to optimize catalyst design. PMID:25678656

  17. Optimization methods applied to the aerodynamic design of helicopter rotor blades

    NASA Technical Reports Server (NTRS)

    Walsh, J. L.; Bingham, G. J.; Riley, M. F.

    1985-01-01

    This paper describes a formal optimization procedure for helicopter rotor blade designs which minimizes hover horsepower while assuring satisfactory forward flight performance. The approach is to couple hover and forward flight analysis programs with a general purpose optimization procedure. The resulting optimization system provides a systematic evaluation of the rotor blade design variables and their interaction, thus reducing the time and cost of designing advanced rotor blades. The paper discusses the basis for and details of the overall procedure, describes the generation of advanced blade designs for representative Army helicopters, and compares designs and design effort with those from the conventional approach which is based on parametric studies and extensive cross-plots.

  18. Optimization methods applied to the aerodynamic design of helicopter rotor blades

    NASA Technical Reports Server (NTRS)

    Walsh, Joanne L.; Bingham, Gene J.; Riley, Michael F.

    1987-01-01

    Described is a formal optimization procedure for helicopter rotor blade design which minimizes hover horsepower while assuring satisfactory forward flight performance. The approach is to couple hover and forward flight analysis programs with a general-purpose optimization procedure. The resulting optimization system provides a systematic evaluation of the rotor blade design variables and their interaction, thus reducing the time and cost of designing advanced rotor blades. The paper discusses the basis for and details of the overall procedure, describes the generation of advanced blade designs for representative Army helicopters, and compares designs and design effort with those from the conventional approach which is based on parametric studies and extensive cross-plots.

  19. Applying Current Approaches to the Teaching of Reading

    ERIC Educational Resources Information Center

    Villanueva de Debat, Elba

    2006-01-01

    This article discusses different approaches to reading instruction for EFL learners based on theoretical frameworks. The author starts with the bottom-up approach to reading instruction, and briefly explains phonics and behaviorist ideas that inform this instructional approach. The author then explains the top-down approach and the new cognitive…

  20. Comparative Properties of Collaborative Optimization and other Approaches to MDO

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.; Lewis, Robert Michael

    1999-01-01

    We discuss criteria by which one can classify, analyze, and evaluate approaches to solving multidisciplinary design optimization (MDO) problems. Central to our discussion is the often overlooked distinction between questions of formulating MDO problems and solving the resulting computational problem. We illustrate our general remarks by comparing several approaches to MDO that have been proposed.

  2. Aerodynamic design applying automatic differentiation and using robust variable fidelity optimization

    NASA Astrophysics Data System (ADS)

    Takemiya, Tetsushi

    , and that (2) the AMF terminates optimization erroneously when the optimization problems have constraints. The first problem is due to inaccuracy in computing derivatives in the AMF, and the second is due to erroneous treatment of the trust region ratio, which sets the size of the domain for an optimization in the AMF. To solve the first problem, the automatic differentiation (AD) technique, which reads the code of an analysis model and automatically generates new derivative code based on mathematical rules, is applied. If derivatives are computed with the generated derivative code, they are analytical, and the required computational time is independent of the number of design variables, which is very advantageous for realistic aerospace engineering problems. However, if analysis models implement iterative computations such as computational fluid dynamics (CFD), which solves systems of partial differential equations iteratively, computing derivatives through AD requires a massive amount of memory. The author addressed this deficiency by modifying the AD approach and developing a more efficient implementation with CFD, and successfully applied AD to general CFD software. To solve the second problem, the governing equation of the trust region ratio, which is very strict against constraint violations, is modified so that it accepts violations of constraints within some tolerance. By accepting violations of constraints during the optimization process, the AMF can continue optimization without terminating prematurely and eventually find the true optimum design point. With these modifications, the AMF is referred to as "Robust AMF," and it is applied to airfoil and wing aerodynamic design problems using Euler CFD software. The former problem has 21 design variables, and the latter 64. In both problems, derivatives computed with the proposed AD method are first compared with those computed with the finite

  3. A collective neurodynamic optimization approach to bound-constrained nonconvex optimization.

    PubMed

    Yan, Zheng; Wang, Jun; Li, Guocheng

    2014-07-01

    This paper presents a novel collective neurodynamic optimization method for solving nonconvex optimization problems with bound constraints. First, it is proved that a one-layer projection neural network has the property that its equilibria are in one-to-one correspondence with the Karush-Kuhn-Tucker points of the constrained optimization problem. Next, a collective neurodynamic optimization approach is developed by utilizing a group of recurrent neural networks in the framework of particle swarm optimization, emulating the paradigm of brainstorming. Each recurrent neural network carries out precise constrained local search according to its own neurodynamic equations. By iteratively improving the solution quality of each recurrent neural network using the information of the locally best known solution and the globally best known solution, the group can obtain the global optimal solution to a nonconvex optimization problem. The advantages of the proposed collective neurodynamic optimization approach over evolutionary approaches lie in its constraint handling ability and real-time computational efficiency. The effectiveness and characteristics of the proposed approach are illustrated using several multimodal benchmark functions.
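
    A minimal sketch of the collective scheme under strong simplifications: each "network" is reduced to a projected-gradient local search inside the box, and a PSO-style velocity update circulates personal-best and global-best information. The benchmark function, population size, and all coefficients are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(2)

        def f(x):                                 # multimodal benchmark (Rastrigin)
            return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

        lo, hi = -5.12, 5.12
        swarm = rng.uniform(lo, hi, (20, 2))      # 20 "neural networks", 2 variables
        vel = np.zeros_like(swarm)
        pbest = swarm.copy()
        gbest = min(swarm, key=f).copy()

        def num_grad(x, h=1e-5):                  # stand-in for analytic dynamics
            e = np.eye(x.size) * h
            return np.array([(f(x + ei) - f(x - ei)) / (2 * h) for ei in e])

        for _ in range(300):
            for i, x in enumerate(swarm):
                # "Neurodynamic" local search: projected gradient step in the box.
                x = np.clip(x - 0.01 * num_grad(x), lo, hi)
                # PSO-style exchange of personal-best / global-best information.
                vel[i] = (0.7 * vel[i]
                          + 1.5 * rng.random() * (pbest[i] - x)
                          + 1.5 * rng.random() * (gbest - x))
                swarm[i] = np.clip(x + vel[i], lo, hi)
                if f(swarm[i]) < f(pbest[i]):
                    pbest[i] = swarm[i].copy()
                if f(swarm[i]) < f(gbest):
                    gbest = swarm[i].copy()
        print("best found:", gbest, f(gbest))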

  4. Target-classification approach applied to active UXO sites

    NASA Astrophysics Data System (ADS)

    Shubitidze, F.; Fernández, J. P.; Shamatava, Irma; Barrowes, B. E.; O'Neill, K.

    2013-06-01

    This study is designed to illustrate the discrimination performance at two active UXO sites (Oklahoma's Fort Sill and the Massachusetts Military Reservation) of a set of advanced electromagnetic induction (EMI) inversion/discrimination models which include the orthonormalized volume magnetic source (ONVMS), joint diagonalization (JD), and differential evolution (DE) approaches and whose power and flexibility greatly exceed those of the simple dipole model. The Fort Sill site is highly contaminated by a mix of the following types of munitions: 37-mm target practice tracers, 60-mm illumination mortars, 75-mm and 4.5'' projectiles, 3.5'', 2.36'', and LAAW rockets, antitank mine fuzes with and without hex nuts, practice MK2 and M67 grenades, 2.5'' ballistic windshields, M2A1 mines with/without bases, M19-14 time fuzes, and 40-mm practice grenades with/without cartridges. The MMR site contains targets of yet different sizes. In this work we apply our models to EMI data collected using the MetalMapper (MM) and 2 × 2 TEMTADS sensors. The data for each anomaly are inverted to extract estimates of the extrinsic and intrinsic parameters associated with each buried target. (The latter include the total volume magnetic source or NVMS, which relates to size, shape, and material properties; the former include location, depth, and orientation.) The estimated intrinsic parameters are then used for classification, performed via library matching and statistical classification algorithms; this process yielded prioritized dig lists that were submitted to the Institute for Defense Analyses (IDA) for independent scoring. The models' classification performance is illustrated and assessed based on these independent evaluations.

  5. An approach for aerodynamic optimization of transonic fan blades

    NASA Astrophysics Data System (ADS)

    Khelghatibana, Maryam

    Aerodynamic design optimization of transonic fan blades is a highly challenging problem due to the complexity of the flow field inside the fan, the conflicting design requirements and the high-dimensional design space. In order to address all these challenges, an aerodynamic design optimization method is developed in this study. This method automates the design process by integrating a geometrical parameterization method, a CFD solver and numerical optimization methods that can be applied to both single- and multi-point optimization design problems. A multi-level blade parameterization is employed to modify the blade geometry. Numerical analyses are performed by solving 3D RANS equations combined with the SST turbulence model. Genetic algorithms and hybrid optimization methods are applied to solve the optimization problem. In order to verify the effectiveness and feasibility of the optimization method, a single-point optimization problem aiming to maximize design efficiency is formulated and applied to redesign a test case. However, transonic fan blade design is inherently a multi-faceted problem that deals with several objectives such as efficiency, stall margin, and choke margin. The proposed multi-point optimization method in the current study is formulated as a bi-objective problem to maximize design and near-stall efficiencies while maintaining the required design pressure ratio. Enhancing these objectives significantly deteriorates the choke margin, specifically at high rotational speeds. Therefore, another constraint is embedded in the optimization problem in order to prevent the reduction of choke margin at high speeds. Since capturing stall inception is numerically very expensive, stall margin has not been considered as an objective in the problem statement. However, improving near-stall efficiency results in a better performance at stall condition, which could enhance the stall margin. An investigation is therefore performed on the Pareto-optimal solutions to

  6. Finite strain response of crimped fibers under uniaxial traction: An analytical approach applied to collagen

    NASA Astrophysics Data System (ADS)

    Marino, Michele; Wriggers, Peter

    2017-01-01

    Composite materials reinforced by crimped fibers appear in a number of advanced structural applications. Accordingly, constitutive equations describing their anisotropic behavior and explicitly accounting for fiber properties are needed for modeling and design purposes. To this aim, the finite strain response of crimped beams under uniaxial traction is herein addressed by obtaining analytical relationships based on the Principle of Virtual Work. The model is applied to collagen fibers in soft biological tissues, coupling geometric nonlinearities related to fiber crimp with material nonlinearities due to nanoscale mechanisms. Several numerical applications are presented, addressing the influence of geometric and material features. Available experimental data for tendons are reproduced, integrating the proposed approach within an optimization procedure for data fitting. The obtained results highlight the effectiveness of the proposed approach in correlating fiber structure with composite material mechanics.

  7. Departures From Optimality When Pursuing Multiple Approach or Avoidance Goals

    PubMed Central

    2016-01-01

    This article examines how people depart from optimality during multiple-goal pursuit. The authors operationalized optimality using dynamic programming, which is a mathematical model used to calculate expected value in multistage decisions. Drawing on prospect theory, they predicted that people are risk-averse when pursuing approach goals and are therefore more likely to prioritize the goal in the best position than the dynamic programming model suggests is optimal. The authors predicted that people are risk-seeking when pursuing avoidance goals and are therefore more likely to prioritize the goal in the worst position than is optimal. These predictions were supported by results from an experimental paradigm in which participants made a series of prioritization decisions while pursuing either 2 approach or 2 avoidance goals. This research demonstrates the usefulness of using decision-making theories and normative models to understand multiple-goal pursuit. PMID:26963081

  8. A multidisciplinary approach to optimization of controlled space structures

    NASA Technical Reports Server (NTRS)

    Woodard, Stanley E.; Padula, Sharon L.; Graves, Philip C.; James, Benjamin B.

    1990-01-01

    A fundamental problem facing controls-structures analysts is determining the trade-offs between structural design parameters and control design parameters in meeting some particular performance criteria. Developing a general optimization-based design methodology integrating the disciplines of structural dynamics and controls is a logical approach. The objective of this study is to develop such a method. Classical design methodology involves three phases. The first is structural optimization, wherein structural member sizes are varied to minimize structural mass, subject to open-loop frequency constraints. The next phase integrates control and structure design with control gains as additional design variables. The final phase is analysis of the 'optimal' integrated design considering 'real' actuators and 'standard' member sizes. The control gains could be further optimized for fixed structure, and actuator saturation constraints could be imposed. However, such an approach does not take full advantage of opportunities to tailor the structure and control system design as one system.

  9. A simple approach for predicting time-optimal slew capability

    NASA Astrophysics Data System (ADS)

    King, Jeffery T.; Karpenko, Mark

    2016-03-01

    The productivity of space-based imaging satellite sensors to collect images is directly related to the agility of the spacecraft. Increasing the satellite agility, without changing the attitude control hardware, can be accomplished by using optimal control to design shortest-time maneuvers. The performance improvement that can be obtained using optimal control is tied to the specific configuration of the satellite, e.g. mass properties and reaction wheel array geometry. Therefore, it is generally difficult to predict performance without an extensive simulation study. This paper presents a simple idea for estimating the agility enhancement that can be obtained using optimal control without the need to solve any optimal control problems. The approach is based on the concept of the agility envelope, which expresses the capability of a spacecraft in terms of a three-dimensional agility volume. Validation of this new approach is conducted using both simulation and on-orbit data.

  10. Reliability based structural optimization - A simplified safety index approach

    NASA Technical Reports Server (NTRS)

    Reddy, Mahidhar V.; Grandhi, Ramana V.; Hopkins, Dale A.

    1993-01-01

    A probabilistic optimal design methodology for complex structures modelled with finite element methods is presented. The main emphasis is on developing probabilistic analysis tools suitable for optimization. An advanced second-moment method is employed to evaluate the failure probability of the performance function. The safety indices are interpolated using the information at the mean and the most probable failure point. The minimum weight design with an improved safety index limit is achieved by using the extended interior penalty method of optimization. Numerical examples covering beam and plate structures are presented to illustrate the design approach. The results obtained by using the proposed approach are compared with those obtained by using existing probabilistic optimization techniques.

  11. KL-optimal experimental design for discriminating between two growth models applied to a beef farm.

    PubMed

    Campos-Barreiro, Santiago; López-Fidalgo, Jesús

    2016-02-01

    The body mass growth of organisms is usually represented in terms of what are known as ontogenetic growth models, which represent the relation of dependence between the mass of the body and time. The paper is concerned with the problem of finding an optimal experimental design for discriminating between two competing mass growth models applied to a beef farm. T-optimality was first introduced for discrimination between models, but in this paper KL-optimality, based on the Kullback-Leibler distance, is used to deal with correlated observations since, in this case, observations on a particular animal are not independent.

  12. Optimal purchasing of raw materials: A data-driven approach

    SciTech Connect

    Muteki, K.; MacGregor, J.F.

    2008-06-15

    An approach to the optimal purchasing of raw materials that will achieve a desired product quality at minimum cost is presented. A PLS (Partial Least Squares) approach to formulation modeling is used to combine databases on raw material properties and on past process operations and to relate these to final product quality. These PLS latent variable models are then used in a sequential quadratic programming (SQP) or mixed integer nonlinear programming (MINLP) optimization to select the raw materials, among all those available on the market, the ratios in which to combine them, and the process conditions under which they should be processed. The approach is illustrated for the optimal purchasing of metallurgical coals for coke making in the steel industry.
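
    A minimal sketch of the data-driven idea using scikit-learn's PLS regression inside a constrained SQP solve. The database, property effects, cost vector, and quality target are synthetic assumptions.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from scipy.optimize import minimize

        rng = np.random.default_rng(3)

        # Historical database: raw-material blend ratios -> final product quality.
        X = rng.dirichlet(np.ones(4), 60)              # past blends of 4 materials
        true_effect = np.array([1.0, 0.4, -0.3, 0.8])  # invented property effects
        y = X @ true_effect + rng.normal(0, 0.02, 60)  # measured quality

        pls = PLSRegression(n_components=2).fit(X, y)
        cost = np.array([3.0, 1.0, 0.5, 2.5])          # unit prices of the materials
        y_target = 0.7

        def objective(r):                              # total purchasing cost
            return cost @ r

        constraints = [
            {"type": "eq", "fun": lambda r: np.sum(r) - 1.0},   # ratios sum to 1
            {"type": "ineq", "fun": lambda r: pls.predict(r[None])[0, 0] - y_target},
        ]
        res = minimize(objective, x0=np.full(4, 0.25), method="SLSQP",
                       bounds=[(0, 1)] * 4, constraints=constraints)
        print("optimal blend:", res.x.round(3), "cost:", round(res.fun, 3))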

  13. A Surrogate Approach to the Experimental Optimization of Multielement Airfoils

    NASA Technical Reports Server (NTRS)

    Otto, John C.; Landman, Drew; Patera, Anthony T.

    1996-01-01

    The incorporation of experimental test data into the optimization process is accomplished through the use of Bayesian-validated surrogates. In the surrogate approach, a surrogate for the experiment (e.g., a response surface) serves in the optimization process. The validation step of the framework provides a qualitative assessment of the surrogate quality, and bounds the surrogate-for-experiment error on designs "near" surrogate-predicted optimal designs. The utility of the framework is demonstrated through its application to the experimental selection of the trailing-edge flap position to achieve a design lift coefficient for a three-element airfoil.

  14. Computational Approaches to Simulation and Optimization of Global Aircraft Trajectories

    NASA Technical Reports Server (NTRS)

    Ng, Hok Kwan; Sridhar, Banavar

    2016-01-01

    This study examines three possible approaches to improving the speed of generating wind-optimal routes for air traffic at the national or global level: (a) using the resources of a supercomputer, (b) running the computations on multiple commercially available computers, and (c) implementing the same algorithms in NASA's Future ATM Concepts Evaluation Tool (FACET); each is compared with a standard implementation run on a single CPU. Wind-optimal aircraft trajectories are computed using global air traffic schedules. The run time and wait time on the supercomputer for trajectory optimization, using between 80 and 10,240 CPUs, are compared with the total computational time for running the same computation on a single desktop computer and on multiple commercially available computers, to assess the potential enhancement from parallel processing on computer clusters. The study also re-implements the trajectory optimization algorithm to further reduce computational time through algorithm modifications, and integrates it with FACET so that time-optimal routes between worldwide airport pairs in a wind field can be calculated for use with existing FACET applications. The implementations of the trajectory optimization algorithms use the MATLAB, Python, and Java programming languages. Performance is evaluated by comparing computational efficiency and the potential application of the optimized trajectories. The paper shows that, in the absence of special privileges on a supercomputer, a cluster of commercially available computers provides a feasible approach for national and global air traffic system studies.

  15. A Neurodynamic Optimization Approach to Bilevel Quadratic Programming.

    PubMed

    Qin, Sitian; Le, Xinyi; Wang, Jun

    2016-08-19

    This paper presents a neurodynamic optimization approach to bilevel quadratic programming (BQP). Based on the Karush-Kuhn-Tucker (KKT) theorem, the BQP problem is reduced to a one-level mathematical program subject to complementarity constraints (MPCC). It is proved that the global solution of the MPCC is the minimal one among the optimal solutions to multiple convex optimization subproblems. A recurrent neural network is developed for solving these convex optimization subproblems. From any initial state, the state of the proposed neural network converges to an equilibrium point of the network, which is precisely the optimal solution of the convex optimization subproblem. Compared with existing recurrent neural networks for BQP, the proposed neural network is guaranteed to deliver the exact optimal solution to any convex BQP problem. Moreover, it is proved that the proposed neural network for bilevel linear programming converges to an equilibrium point in finite time. Finally, three numerical examples are elaborated to substantiate the efficacy of the proposed approach.

  16. A Numerical Optimization Approach for Tuning Fuzzy Logic Controllers

    NASA Technical Reports Server (NTRS)

    Woodard, Stanley E.; Garg, Devendra P.

    1998-01-01

    This paper develops a method to tune fuzzy controllers using numerical optimization. The main attribute of this approach is that it allows fuzzy logic controllers to be tuned to achieve global performance requirements. Furthermore, this approach allows design constraints to be implemented during the tuning process. The method tunes the controller by parameterizing the membership functions for error, change-in-error, and control output. The resulting parameters form a design vector which is iteratively changed to minimize an objective function; the minimized objective function corresponds to optimal system performance. A spacecraft-mounted science instrument line-of-sight pointing control problem is used to demonstrate results.
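
    A minimal sketch of such a tuning loop, under simplifying assumptions: a one-input fuzzy controller with three triangular membership functions, a first-order plant integrated with Euler steps, and Nelder-Mead as the numerical optimizer. The plant, the parameterization, and all constants are invented for illustration.

        import numpy as np
        from scipy.optimize import minimize

        def tri(x, a, b, c):
            """Triangular membership function with peak at b."""
            return np.maximum(0.0, np.minimum((x - a) / (b - a + 1e-9),
                                              (c - x) / (c - b + 1e-9)))

        def controller(err, p):
            # p holds the error membership centers and the output singletons.
            c1, c2, c3, u1, u2, u3 = p
            w = np.array([tri(err, c1 - 1, c1, c1 + 1),
                          tri(err, c2 - 1, c2, c2 + 1),
                          tri(err, c3 - 1, c3, c3 + 1)]) + 1e-9
            return float(w @ np.array([u1, u2, u3]) / w.sum())  # weighted average

        def cost(p, dt=0.02, t_end=5.0):
            x, J = 0.0, 0.0
            for _ in range(int(t_end / dt)):
                err = 1.0 - x                     # unit step reference
                u = controller(err, p)
                x += dt * (-x + u)                # first-order plant x' = -x + u
                J += dt * err**2                  # integral squared error
            return J

        p0 = np.array([-1.0, 0.0, 1.0, -2.0, 0.0, 2.0])
        res = minimize(cost, p0, method="Nelder-Mead")
        print("tuned parameters:", res.x.round(2), "ISE:", round(res.fun, 4))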

  17. An Integrated Optimization Design Method Based on Surrogate Modeling Applied to Diverging Duct Design

    NASA Astrophysics Data System (ADS)

    Hanan, Lu; Qiushi, Li; Shaobin, Li

    2016-12-01

    This paper presents an integrated optimization design method in which uniform design, response surface methodology and a genetic algorithm are used in combination. In detail, uniform design is used to select the experimental sampling points in the experimental domain, and the system performance at these points is evaluated by means of computational fluid dynamics to construct a database. After that, response surface methodology is employed to generate a surrogate mathematical model relating the optimization objective to the design variables. Subsequently, a genetic algorithm is applied to the surrogate model to acquire the optimal solution subject to the constraints. The method has been applied to the optimization design of an axisymmetric diverging duct, dealing with three design variables including one qualitative variable and two quantitative variables. The method performs well in improving the duct aerodynamic performance and can also be applied to wider fields of mechanical design, serving as a useful tool for engineering designers by reducing design time and computational cost.
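
    A minimal sketch of the three-stage pipeline, with assumptions: random sampling stands in for uniform design, a quadratic least-squares fit stands in for the response surface, and scipy's differential evolution stands in for the genetic algorithm; the "expensive" function plays the role of the CFD evaluation.

        import numpy as np
        from scipy.optimize import differential_evolution

        rng = np.random.default_rng(4)

        def expensive_eval(x):            # stand-in for the CFD evaluation
            return (x[0] - 0.3)**2 + 2 * (x[1] - 0.7)**2 + 0.1 * x[0] * x[1]

        # Stage 1: space-filling sample of the design domain.
        X = rng.uniform(0, 1, (30, 2))
        y = np.array([expensive_eval(x) for x in X])

        # Stage 2: quadratic response surface fitted by least squares.
        def features(x):
            x1, x2 = x
            return np.array([1, x1, x2, x1 * x1, x2 * x2, x1 * x2])

        beta, *_ = np.linalg.lstsq(np.array([features(x) for x in X]), y, rcond=None)
        surrogate = lambda x: features(x) @ beta

        # Stage 3: evolutionary search on the cheap surrogate model.
        res = differential_evolution(surrogate, bounds=[(0, 1), (0, 1)], seed=4)
        print("surrogate optimum:", res.x.round(3),
              "true value:", round(expensive_eval(res.x), 4))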

  18. A hybrid approach to near-optimal launch vehicle guidance

    NASA Technical Reports Server (NTRS)

    Leung, Martin S. K.; Calise, Anthony J.

    1992-01-01

    This paper evaluates a proposed hybrid analytical/numerical approach to launch-vehicle guidance for ascent to orbit injection. The feedback-guidance approach is based on a piecewise nearly analytic zero-order solution evaluated using a collocation method. The zero-order solution is then improved through a regular perturbation analysis, wherein the neglected dynamics are corrected in the first-order term. For real-time implementation, the guidance approach requires solving a set of small-dimension nonlinear algebraic equations and performing quadrature. Assessment of performance and reliability is carried out through closed-loop simulation of a vertically launched two-stage heavy-lift vehicle ascending to low Earth orbit. The solutions are compared with optimal solutions generated by a multiple shooting code. In the example, the guidance approach delivers over 99.9 percent of optimal performance and terminal constraint accuracy.

  19. A general optimization method applied to a vdW-DF functional for water

    NASA Astrophysics Data System (ADS)

    Fritz, Michelle; Soler, Jose M.; Fernandez-Serra, Marivi

    In particularly delicate systems, like liquid water, ab initio exchange and correlation functionals are simply not accurate enough for many practical applications. In these cases, fitting the functional to reference data is a sensible alternative to empirical interatomic potentials. However, a global optimization requires functional forms that depend on many parameters, and the usual trial-and-error strategy becomes cumbersome and suboptimal. We have developed a general and powerful optimization scheme called data projection onto parameter space (DPPS) and applied it to the optimization of a van der Waals density functional (vdW-DF) for water. In an arbitrarily large parameter space, DPPS solves for the vector of unknown parameters from a given set of known data, and poorly sampled subspaces are determined by the physically motivated functional shape of ab initio functionals using Bayes' theorem. We present a new GGA exchange functional that has been optimized with the DPPS method for 1-body, 2-body, and 3-body energies of water systems, together with results from testing the optimized functional on ice cohesion energies and in ab initio liquid water simulations. We found that our optimized functional improves the description of both liquid water and ice when compared to other versions of GGA exchange.

  20. An optimal control approach to probabilistic Boolean networks

    NASA Astrophysics Data System (ADS)

    Liu, Qiuli

    2012-12-01

    External control of some genes in a genetic regulatory network is useful for avoiding undesirable states associated with some diseases. For this purpose, a number of stochastic optimal control approaches have been proposed. Probabilistic Boolean networks (PBNs), as powerful tools for modeling gene regulatory systems, have attracted considerable attention in systems biology. In this paper, we deal with the problem of optimal intervention in a PBN with the help of the theory of discrete-time Markov decision processes. Specifically, we first formulate a control model for a PBN as a first-passage model for discrete-time Markov decision processes and then find, using a value iteration algorithm, optimal effective treatments with the minimal expected first-passage time over the space of all possible treatments. In order to demonstrate the feasibility of our approach, an example is also presented.
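
    A minimal sketch of the first-passage formulation on a toy controlled Markov chain (the transition matrices and target set are invented): value iteration computes the minimal expected number of steps to reach the desirable states, and the minimizing action in each state gives the intervention policy.

        import numpy as np

        # Toy controlled Markov chain: 4 network states, 2 actions (e.g., no
        # control vs. flipping one gene). P[a][s, s'] are invented probabilities.
        P = np.array([
            [[0.7, 0.2, 0.1, 0.0],
             [0.3, 0.4, 0.2, 0.1],
             [0.1, 0.3, 0.4, 0.2],
             [0.0, 0.1, 0.3, 0.6]],
            [[0.2, 0.2, 0.3, 0.3],
             [0.1, 0.2, 0.3, 0.4],
             [0.0, 0.2, 0.3, 0.5],
             [0.0, 0.0, 0.2, 0.8]],
        ])
        target = {3}                       # desirable (e.g., healthy) states

        V = np.zeros(4)                    # expected first-passage time to target
        for _ in range(500):               # value iteration
            V_new = np.zeros(4)
            for s in range(4):
                if s in target:
                    continue               # target states cost nothing more
                V_new[s] = 1 + min(P[a][s] @ V for a in range(2))
            if np.max(np.abs(V_new - V)) < 1e-10:
                break
            V = V_new

        policy = [int(np.argmin([P[a][s] @ V for a in range(2)])) for s in range(4)]
        print("expected steps to target:", V.round(3), "policy:", policy)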

  1. A comprehensive business planning approach applied to healthcare.

    PubMed

    Calpin-Davies, P

    The White Paper The New NHS: Modern, Dependable (DoH 1997) clearly expects nurses, in partnership with other professionals, to contribute to the planning and shaping of future healthcare services. This article proposes that comprehensive models of alternative planning frameworks, when applied to healthcare services, can provide nurses with an understanding of the skills they require to participate in the planning process.

  2. Teaching Social Science Research: An Applied Approach Using Community Resources.

    ERIC Educational Resources Information Center

    Gilliland, M. Janice; And Others

    A four-week summer project for 100 rural tenth graders in the University of Alabama's Biomedical Sciences Preparation Program (BioPrep) enabled students to acquire and apply social sciences research skills. The students investigated drinking water quality in three rural Alabama counties by interviewing local officials, health workers, and…

  3. Applying Digital Sensor Technology: A Problem-Solving Approach

    ERIC Educational Resources Information Center

    Seedhouse, Paul; Knight, Dawn

    2016-01-01

    There is currently an explosion in the number and range of new devices coming onto the technology market that use digital sensor technology to track aspects of human behaviour. In this article, we present and exemplify a three-stage model for the application of digital sensor technology in applied linguistics that we have developed, namely,…

  4. Defining and Applying a Functionality Approach to Intellectual Disability

    ERIC Educational Resources Information Center

    Luckasson, R.; Schalock, R. L.

    2013-01-01

    Background: The current functional models of disability do not adequately incorporate significant changes of the last three decades in our understanding of human functioning, and how the human functioning construct can be applied to clinical functions, professional practices and outcomes evaluation. Methods: The authors synthesise current…

  5. New approaches to optimization in aerospace conceptual design

    NASA Technical Reports Server (NTRS)

    Gage, Peter J.

    1995-01-01

    Aerospace design can be viewed as an optimization process, but conceptual studies are rarely performed using formal search algorithms. Three issues that restrict the success of automatic search are identified in this work. New approaches are introduced to address the integration of analyses and optimizers, to avoid the need for accurate gradient information and a smooth search space (required for calculus-based optimization), and to remove the restrictions imposed by fixed-complexity problem formulations. (1) Optimization should be performed in a flexible environment. A quasi-procedural architecture is used to conveniently link analysis modules and automatically coordinate their execution. It efficiently controls large-scale design tasks. (2) Genetic algorithms provide a search method for discontinuous or noisy domains. The utility of genetic optimization is demonstrated here, but parameter encodings and constraint-handling schemes must be carefully chosen to avoid premature convergence to suboptimal designs. The relationship between genetic and calculus-based methods is explored. (3) A variable-complexity genetic algorithm is created to permit flexible parameterization, so that the level of description can change during optimization. This new optimizer automatically discovers novel designs in structural and aerodynamic tasks.

  6. Pilot-testing an applied competency-based approach to health human resources planning.

    PubMed

    Tomblin Murphy, Gail; MacKenzie, Adrian; Alder, Rob; Langley, Joanne; Hickey, Marjorie; Cook, Amanda

    2013-10-01

    A competency-based approach to health human resources (HHR) planning is one that explicitly considers the spectrum of knowledge, skills and judgement (competencies) required for the health workforce based on the health needs of the relevant population in some specific circumstances. Such an approach is of particular benefit to planners challenged to make optimal use of limited HHR as it allows them to move beyond simply estimating numbers of certain professionals required and plan instead according to the unique mix of competencies available from the existing health workforce. This kind of flexibility is particularly valuable in contexts where healthcare providers are in short supply generally (e.g. in many developing countries) or temporarily due to a surge in need (e.g. a pandemic or other disease outbreak). A pilot application of this approach using the context of an influenza pandemic in one health district of Nova Scotia, Canada, is described, and key competency gaps identified. The approach is also being applied using other conditions in other Canadian jurisdictions and in Zambia.

  7. Laser therapy applying the differential approaches and biophotometrical elements

    NASA Astrophysics Data System (ADS)

    Mamedova, F. M.; Akbarova, Ju. A.; Umarova, D. A.; Yudin, G. A.

    1995-04-01

    The aim of the present paper is the presentation of biophotometrical data obtained from various anatomic-topographical areas of the mouth, to be used for the development of differential approaches to laser therapy in dentistry. Biophotometrical measurements were carried out using a portable biophotometer, part of a multifunctional system for laser therapy, acupuncture and biophotometry referred to as 'Aura-laser'. The results of the biophotometrical measurements allow the implementation of differential approaches to laser therapy of parodontitis and mucous mouth tissue, taking their clinical form and rate of disease into account.

  8. Applying Socio-Semiotics to Organizational Communication: A New Approach.

    ERIC Educational Resources Information Center

    Cooren, Francois

    1999-01-01

    Argues that a socio-semiotic approach to organizational communication opens up a middle course leading to a reconciliation of the functionalist and interpretive movements. Outlines and illustrates three premises to show how they enable scholars to reconceptualize the opposition between functionalism and interpretivism. Concludes that organizations…

  9. Applied Ethics and the Humanistic Tradition: A Comparative Curricula Approach.

    ERIC Educational Resources Information Center

    Deonanan, Carlton R.; Deonanan, Venus E.

    This research work investigates the problem of "Leadership, and the Ethical Dimension: A Comparative Curricula Approach." The research problem is investigated from the academic areas of (1) philosophy; (2) comparative curricula; (3) subject matter areas of English literature and intellectual history; (4) religion; and (5) psychology. Different…

  10. Tennis: Applied Examples of a Game-Based Teaching Approach

    ERIC Educational Resources Information Center

    Crespo, Miguel; Reid, Machar M.; Miley, Dave

    2004-01-01

    In this article, the authors reveal that tennis has been increasingly taught with a tactical model or game-based approach, which emphasizes learning through practice in match-like drills and actual play, rather than in practicing strokes for exact technical execution. Its goal is to facilitate the player's understanding of the tactical, physical…

  11. Dialogical Approach Applied in Group Counselling: Case Study

    ERIC Educational Resources Information Center

    Koivuluhta, Merja; Puhakka, Helena

    2013-01-01

    This study utilizes structured group counselling and a dialogical approach to develop a group counselling intervention for students beginning a computer science education. The study assesses the outcomes of group counselling from the standpoint of the development of the students' self-observation. The research indicates that group counselling…

  12. PARETO: A novel evolutionary optimization approach to multiobjective IMRT planning

    SciTech Connect

    Fiege, Jason; McCurdy, Boyd; Potrebko, Peter; Champion, Heather; Cull, Andrew

    2011-09-15

    Purpose: In radiation therapy treatment planning, the clinical objectives of uniform high dose to the planning target volume (PTV) and low dose to the organs-at-risk (OARs) are invariably in conflict, often requiring compromises to be made between them when selecting the best treatment plan for a particular patient. In this work, the authors introduce Pareto-Aware Radiotherapy Evolutionary Treatment Optimization (PARETO), a multiobjective optimization tool to solve for beam angles and fluence patterns in intensity-modulated radiation therapy (IMRT) treatment planning. Methods: PARETO is built around a powerful multiobjective genetic algorithm (GA), which allows us to treat the problem of IMRT treatment plan optimization as a combined monolithic problem, where all beam fluence and angle parameters are treated equally during the optimization. We have employed a simple parameterized beam fluence representation with a realistic dose calculation approach, incorporating patient scatter effects, to demonstrate feasibility of the proposed approach on two phantoms. The first phantom is a simple cylindrical phantom containing a target surrounded by three OARs, while the second phantom is more complex and represents a paraspinal patient. Results: PARETO results in a large database of Pareto nondominated solutions that represent the necessary trade-offs between objectives. The solution quality was examined for several PTV and OAR fitness functions. The combination of a conformity-based PTV fitness function and a dose-volume histogram (DVH) or equivalent uniform dose (EUD) -based fitness function for the OAR produced relatively uniform and conformal PTV doses, with well-spaced beams. A penalty function added to the fitness functions eliminates hotspots. Comparison of resulting DVHs to those from treatment plans developed with a single-objective fluence optimizer (from a commercial treatment planning system) showed good correlation. Results also indicated that PARETO shows

  13. An efficient identification approach for stable and unstable nonlinear systems using Colliding Bodies Optimization algorithm.

    PubMed

    Pal, Partha S; Kar, R; Mandal, D; Ghoshal, S P

    2015-11-01

    This paper presents an efficient approach for identifying different stable and practically useful Hammerstein models, as well as an unstable nonlinear process along with its stable closed-loop counterpart, with the help of an evolutionary algorithm, the Colliding Bodies Optimization (CBO) algorithm. The precision and accuracy of the CBO-based optimization approach are judged by the minimum output mean square error (MSE), which signifies that the amounts of bias and variance in the output domain are also the least. It is also observed that optimizing the output MSE in the presence of outliers results in consistently close estimates of the output parameters, which justifies the general applicability of the CBO algorithm to the system identification problem and establishes the practical usefulness of the applied approach. The optimum MSE values, computational times and statistical measures of the MSEs are all found to be superior to those of other existing stochastic-algorithm-based approaches reported in the recent literature, which establishes the robustness and efficiency of the applied CBO-based identification scheme.

  14. Applying a User-Centered Approach to Interactive Visualisation Design

    NASA Astrophysics Data System (ADS)

    Wassink, Ingo; Kulyk, Olga; van Dijk, Betsy; van der Veer, Gerrit; van der Vet, Paul

    Analysing users in their context of work and finding out how and why they use different information resources is essential to provide interactive visualisation systems that match their goals and needs. Designers should actively involve the intended users throughout the whole process. This chapter presents a user-centered approach for the design of interactive visualisation systems. We describe three phases of the iterative visualisation design process: the early envisioning phase, the global specification phase, and the detailed specification phase. The whole design cycle is repeated until some criterion of success is reached. We discuss different techniques for the analysis of users, their tasks and domain. Subsequently, the design of prototypes and evaluation methods in visualisation practice are presented. Finally, we discuss the practical challenges in design and evaluation of collaborative visualisation environments. Our own case studies and those of others are used throughout the whole chapter to illustrate various approaches.

  15. Effects of optimism on creativity under approach and avoidance motivation

    PubMed Central

    Icekson, Tamar; Roskes, Marieke; Moran, Simone

    2014-01-01

    Focusing on avoiding failure or negative outcomes (avoidance motivation) can undermine creativity, due to cognitive (e.g., threat appraisals), affective (e.g., anxiety), and volitional processes (e.g., low intrinsic motivation). This can be problematic for people who are avoidance motivated by nature and in situations in which threats or potential losses are salient. Here, we review the relation between avoidance motivation and creativity, and the processes underlying this relation. We highlight the role of optimism as a potential remedy for the creativity undermining effects of avoidance motivation, due to its impact on the underlying processes. Optimism, expecting to succeed in achieving success or avoiding failure, may reduce negative effects of avoidance motivation, as it eases threat appraisals, anxiety, and disengagement—barriers playing a key role in undermining creativity. People experience these barriers more under avoidance than under approach motivation, and beneficial effects of optimism should therefore be more pronounced under avoidance than approach motivation. Moreover, due to their eagerness, approach motivated people may even be more prone to unrealistic over-optimism and its negative consequences. PMID:24616690

  16. Adaptive Wing Camber Optimization: A Periodic Perturbation Approach

    NASA Technical Reports Server (NTRS)

    Espana, Martin; Gilyard, Glenn

    1994-01-01

    Available redundancy among aircraft control surfaces allows for effective wing camber modifications. As shown in the past, this fact can be used to improve aircraft performance. To date, however, algorithm developments for in-flight camber optimization have been limited. This paper presents a perturbational approach for cruise optimization through in-flight camber adaptation. The method uses, as a performance index, an indirect measurement of the instantaneous net thrust. As such, the actual performance improvement comes from the integrated effects of airframe and engine. The algorithm, whose design and robustness properties are discussed, is demonstrated on the NASA Dryden B-720 flight simulator.

  17. Optimal control of underactuated mechanical systems: A geometric approach

    NASA Astrophysics Data System (ADS)

    Colombo, Leonardo; Martín De Diego, David; Zuccalli, Marcela

    2010-08-01

    In this paper, we consider a geometric formalism for optimal control of underactuated mechanical systems. Our techniques are an adaptation of the classical Skinner and Rusk approach for the case of Lagrangian dynamics with higher-order constraints. We study a regular case where it is possible to establish a symplectic framework and, as a consequence, to obtain a unique vector field determining the dynamics of the optimal control problem. These developments will allow us to develop a new class of geometric integrators based on discrete variational calculus.

  18. A hybrid optimization approach in non-isothermal glass molding

    NASA Astrophysics Data System (ADS)

    Vu, Anh-Tuan; Kreilkamp, Holger; Krishnamoorthi, Bharathwaj Janaki; Dambon, Olaf; Klocke, Fritz

    2016-10-01

    Intensively growing demands for complex yet low-cost precision glass optics from today's photonics market motivate the development of an efficient and economically viable manufacturing technology for complex-shaped optics. Compared with the state-of-the-art replication-based methods, Non-isothermal Glass Molding turns out to be a promising innovative technology for cost-efficient manufacturing because of increased mold lifetime, lower energy consumption and high throughput from a fast process chain. However, the selection of parameters for the molding process usually requires a huge effort to satisfy the precision requirements of the molded optics and to avoid negative effects on the expensive tool molds. Therefore, to reduce experimental work at the outset, a coupled CFD/FEM numerical model was developed to study the molding process. This research focuses on the development of a hybrid optimization approach for Non-isothermal glass molding. To this end, an optimal configuration with two optimization stages for multiple quality characteristics of the glass optics is addressed. A hybrid Back-Propagation Neural Network (BPNN)-Genetic Algorithm (GA) stage is first carried out to determine the optimal process parameters and the stability of the process. The second stage continues with the optimization of the glass preform using those optimal parameters to guarantee the accuracy of the molded optics. Experiments are performed to evaluate the effectiveness and feasibility of the model for process development in Non-isothermal glass molding.

  19. Hybrid swarm intelligence optimization approach for optimal data storage position identification in wireless sensor networks.

    PubMed

    Mohanasundaram, Ranganathan; Periasamy, Pappampalayam Sanmugam

    2015-01-01

    The current high-profile debate with regard to data storage and its growth has made it a strategic task in the world of networking. Data storage mainly depends on the sensor nodes called producers, the base stations, and the consumers (users and sensor nodes) that retrieve and use the data. The main concern dealt with here is finding an optimal data storage position in wireless sensor networks. Earlier works did not utilize swarm intelligence based optimization approaches to find optimal data storage positions. To achieve this goal, an efficient swarm intelligence approach is used to choose suitable positions for a storage node. Thus, a hybrid particle swarm optimization algorithm has been used to find suitable positions for storage nodes while the total energy cost of data transmission is minimized. Clustering-based distributed data storage is utilized, solving the clustering problem with the fuzzy C-means algorithm. This research work also considers the data rates and locations of multiple producers and consumers to find optimal data storage positions. The algorithm is implemented in a network simulator, and the experimental results show that the proposed clustering and swarm intelligence based ODS strategy is more effective than earlier approaches.
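
    A minimal sketch of one ingredient of such a scheme: fuzzy C-means clustering of producer nodes followed by a rate-weighted placement of one storage node per cluster. The node layout, data rates, and squared-distance energy model are illustrative assumptions, and the paper's hybrid PSO stage is omitted.

        import numpy as np

        rng = np.random.default_rng(5)
        nodes = rng.uniform(0, 100, (40, 2))     # producer positions (m)
        rates = rng.uniform(1, 5, 40)            # data rates per producer

        def fuzzy_c_means(X, c=3, m=2.0, iters=100):
            U = rng.dirichlet(np.ones(c), len(X))    # random initial memberships
            for _ in range(iters):
                W = U ** m
                centers = (W.T @ X) / W.sum(axis=0)[:, None]
                d = np.linalg.norm(X[:, None] - centers[None], axis=2) + 1e-9
                U = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)),
                                 axis=2)
            return centers, U

        centers, U = fuzzy_c_means(nodes)
        labels = U.argmax(axis=1)

        # Within each cluster, a rate-weighted centroid minimizes sum(rate * dist^2),
        # a simple stand-in for the energy cost of shipping data to storage.
        for k in range(len(centers)):
            mask = labels == k
            if mask.any():
                pos = (rates[mask, None] * nodes[mask]).sum(0) / rates[mask].sum()
                print(f"cluster {k}: storage node at {pos.round(1)}")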

  20. Approach to optimization of low-power Stirling cryocoolers

    SciTech Connect

    Sullivan, D.B.; Radebaugh, R.; Daney, D.E.; Zimmerman, J.E.

    1983-12-01

    A method for optimizing the design (shape of the displacer) of low power Stirling cryocoolers relative to the power required to operate the systems is described. A variational calculation which includes static conduction, shuttle and radiation losses, as well as regenerator inefficiency, was completed for coolers operating in the 300 K to 10 K range. While the calculations apply to tapered displacer machines, comparison of the results with stepped displacer cryocoolers indicates reasonable agreement.

  1. An approach to optimization of low-power Stirling cryocoolers

    NASA Technical Reports Server (NTRS)

    Sullivan, D. B.; Radebaugh, R.; Daney, D. E.; Zimmerman, J. E.

    1983-01-01

    A method for optimizing the design (shape of the displacer) of low power Stirling cryocoolers relative to the power required to operate the systems is described. A variational calculation which includes static conduction, shuttle and radiation losses, as well as regenerator inefficiency, was completed for coolers operating in the 300 K to 10 K range. While the calculations apply to tapered displacer machines, comparison of the results with stepped displacer cryocoolers indicates reasonable agreement.

  2. A split-optimization approach for obtaining multiple solutions in single-objective process parameter optimization.

    PubMed

    Rajora, Manik; Zou, Pan; Yang, Yao Guang; Fan, Zhi Wen; Chen, Hung Yi; Wu, Wen Chieh; Li, Beizhi; Liang, Steven Y

    2016-01-01

    It can be observed from experimental data of different processes that different process parameter combinations can lead to the same performance indicators, but during the optimization of process parameters using current techniques, only one of these combinations can be found when a given objective function is specified. The combination of process parameters obtained after optimization may not always be applicable in actual production or may lead to undesired experimental conditions. In this paper, a split-optimization approach is proposed for obtaining multiple solutions in a single-objective process parameter optimization problem. This is accomplished by splitting the original search space into smaller sub-search spaces and using a GA in each sub-search space to optimize the process parameters. Two different methods, i.e., cluster centers and the hill and valley splitting strategy, were used to split the original search space, and their efficiency was measured against a method in which the original search space is split into equal smaller sub-search spaces. The proposed approach was used to obtain multiple optimal process parameter combinations for electrochemical micro-machining. The results obtained from the case study showed that the cluster centers and hill and valley splitting strategies were more efficient in splitting the original search space than the method in which the original search space is divided into smaller equal sub-search spaces.
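
    A minimal sketch of the splitting idea on a one-dimensional multimodal function, with scipy's differential evolution standing in for the GA: each equal sub-search space yields its own optimum, so several parameter values with nearly identical objective values are all recovered.

        import numpy as np
        from scipy.optimize import differential_evolution

        def objective(x):
            # Multimodal: several parameter values give similar performance.
            return np.sin(3 * x[0]) + 0.1 * np.cos(20 * x[0])

        lo, hi, n_splits = 0.0, 10.0, 5
        edges = np.linspace(lo, hi, n_splits + 1)

        solutions = []
        for a, b in zip(edges[:-1], edges[1:]):       # equal sub-search spaces
            res = differential_evolution(objective, bounds=[(a, b)], seed=6)
            solutions.append((float(res.x[0]), float(res.fun)))

        for x, f in solutions:
            print(f"sub-space optimum: x = {x:6.3f}, f = {f:6.3f}")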

  3. A quality by design approach to optimization of emulsions for electrospinning using factorial and D-optimal designs.

    PubMed

    Badawi, Mariam A; El-Khordagui, Labiba K

    2014-07-16

    Emulsion electrospinning is a multifactorial process used to generate nanofibers loaded with hydrophilic drugs or macromolecules for diverse biomedical applications. Emulsion electrospinnability is greatly impacted by the emulsion pharmaceutical attributes. The aim of this study was to apply a quality by design (QbD) approach based on design of experiments as a risk-based proactive approach to achieve predictable critical quality attributes (CQAs) in w/o emulsions for electrospinning. Polycaprolactone (PCL)-thickened w/o emulsions containing doxycycline HCl were formulated using a Span 60/sodium lauryl sulfate (SLS) emulsifier blend. The identified emulsion CQAs (stability, viscosity and conductivity) were linked with electrospinnability using a 3^3 factorial design to optimize emulsion composition for phase stability and a D-optimal design to optimize stable emulsions for viscosity and conductivity after shifting the design space. The three independent variables, emulsifier blend composition, organic:aqueous phase ratio and polymer concentration, had a significant effect (p<0.05) on emulsion CQAs, the emulsifier blend composition exerting prominent main and interaction effects. Scanning electron microscopy (SEM) of emulsion-electrospun NFs and desirability functions allowed modeling of emulsion CQAs to predict electrospinnable formulations. A QbD approach successfully built quality into electrospinnable emulsions, allowing development of hydrophilic drug-loaded nanofibers with desired morphological characteristics.
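
    A minimal sketch of the design-of-experiments bookkeeping (factor names and coded levels are invented for illustration): a full 3^3 factorial enumerates the 27 runs over three factors at three levels each, the structure used here to screen emulsion composition.

        from itertools import product

        # Three factors at three coded levels (-1, 0, +1); names are illustrative.
        factors = {
            "emulsifier_blend_ratio": (-1, 0, 1),
            "organic_aqueous_ratio":  (-1, 0, 1),
            "polymer_concentration":  (-1, 0, 1),
        }

        runs = list(product(*factors.values()))   # 3^3 = 27 design points
        print(f"{len(runs)} runs")
        for i, levels in enumerate(runs[:5], start=1):
            settings = dict(zip(factors, levels))
            print(f"run {i}: {settings}")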

  4. A Model-Based Prognostics Approach Applied to Pneumatic Valves

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew J.; Goebel, Kai

    2011-01-01

    Within the area of systems health management, the task of prognostics centers on predicting when components will fail. Model-based prognostics exploits domain knowledge of the system, its components, and how they fail by casting the underlying physical phenomena in a physics-based model derived from first principles. Uncertainty cannot be avoided in prediction; therefore, algorithms are employed that help in managing these uncertainties. The particle filtering algorithm has become a popular choice for model-based prognostics due to its wide applicability, ease of implementation, and support for uncertainty management. We develop a general model-based prognostics methodology within a robust probabilistic framework using particle filters. As a case study, we consider a pneumatic valve from the Space Shuttle cryogenic refueling system. We develop a detailed physics-based model of the pneumatic valve and perform comprehensive simulation experiments to illustrate our prognostics approach and evaluate its effectiveness and robustness. The approach is demonstrated using historical pneumatic valve data from the refueling system.
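
    A minimal sketch of particle-filter prognostics on a toy degradation model (the dynamics, noise levels, and failure threshold are invented): particles jointly track the damage state and an unknown wear rate from noisy measurements, and the remaining-useful-life estimate comes from extrapolating each particle to the failure threshold.

        import numpy as np

        rng = np.random.default_rng(7)
        n_p, thresh = 1000, 1.0

        def step(d, rate):                   # toy degradation dynamics
            return d + rate + rng.normal(0, 0.002, np.shape(d))

        # Particles carry the state (damage) and an unknown wear-rate parameter.
        damage = np.zeros(n_p)
        rate = rng.uniform(0.005, 0.02, n_p)

        true_d, true_rate = 0.0, 0.012
        for t in range(40):                  # filtering on noisy measurements
            true_d += true_rate
            z = true_d + rng.normal(0, 0.01)                 # measurement
            damage = step(damage, rate)
            w = np.exp(-0.5 * ((z - damage) / 0.01) ** 2)    # likelihood weights
            w = (w + 1e-12) / (w + 1e-12).sum()
            idx = rng.choice(n_p, n_p, p=w)                  # resampling
            damage, rate = damage[idx], rate[idx]

        # Predict RUL per particle: deterministic extrapolation to the threshold.
        rul = (thresh - damage) / rate
        print(f"RUL estimate: mean {rul.mean():.1f} steps, "
              f"90% interval [{np.quantile(rul, 0.05):.1f}, "
              f"{np.quantile(rul, 0.95):.1f}]")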

  5. An Inverse Kinematic Approach Using Groebner Basis Theory Applied to Gait Cycle Analysis

    DTIC Science & Technology

    2013-03-01

    Thesis (AFIT-ENP-13-M-02), Air Force Institute of Technology, by Anum Barki, BS: an inverse kinematic approach using Groebner basis theory applied to gait cycle analysis. Approved by Dr. Ronald F. Tuttle (Chairman) and Dr. Kimberly Kendricks.

  6. SU-F-BRD-13: Quantum Annealing Applied to IMRT Beamlet Intensity Optimization

    SciTech Connect

    Nazareth, D; Spaans, J

    2014-06-15

    Purpose: We report on the first application of quantum annealing (QA) to the process of beamlet intensity optimization for IMRT. QA is a new technology, which employs novel hardware and software techniques to address various discrete optimization problems in many fields. Methods: We apply the D-Wave Inc. proprietary hardware, which natively exploits quantum mechanical effects for improved optimization. The new QA algorithm, running on this hardware, is most similar to simulated annealing, but relies on natural processes to directly minimize the free energy of a system. A simple quantum system is slowly evolved into a classical system, representing the objective function. To apply QA to IMRT-type optimization, two prostate cases were considered. A reduced number of beamlets were employed, due to the current QA hardware limitation of ∼500 binary variables. The beamlet dose matrices were computed using CERR, and an objective function was defined based on typical clinical constraints, including dose-volume objectives. The objective function was discretized, and the QA method was compared to two standard optimization methods: simulated annealing and Tabu search, run on a conventional computing cluster. Results: Based on several runs, the average final objective function value achieved by the QA was 16.9 for the first patient, compared with 10.0 for Tabu and 6.7 for the SA. For the second patient, the values were 70.7 for the QA, 120.0 for Tabu, and 22.9 for the SA. The QA algorithm required 27–38% of the time required by the other two methods. Conclusion: In terms of objective function value, the QA performance was similar to Tabu but less effective than the SA. However, its speed was 3–4 times faster than the other two methods. This initial experiment suggests that QA-based heuristics may offer significant speedup over conventional clinical optimization methods, as quantum annealing hardware scales to larger sizes.
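
    For the conventional side of that comparison, here is a minimal sketch of simulated annealing on a discretized fluence problem. The dose-influence matrix, intensity levels, and cooling schedule are invented; this illustrates the classical baseline, not the D-Wave QA algorithm.

        import numpy as np

        rng = np.random.default_rng(8)
        n_vox, n_beamlets = 30, 12
        levels = np.arange(6)                       # discrete intensity levels

        D = rng.uniform(0, 1, (n_vox, n_beamlets))  # invented dose-influence matrix
        d_target = np.full(n_vox, 2.0)              # prescribed voxel doses

        def objective(w):
            return np.sum((D @ w - d_target) ** 2)  # quadratic dose mismatch

        w = rng.choice(levels, n_beamlets).astype(float)
        J, T = objective(w), 5.0
        for _ in range(20000):                      # annealing loop
            i = rng.integers(n_beamlets)
            trial = w.copy()
            trial[i] = rng.choice(levels)           # perturb one beamlet level
            J_trial = objective(trial)
            if J_trial < J or rng.random() < np.exp(-(J_trial - J) / T):
                w, J = trial, J_trial               # Metropolis acceptance
            T *= 0.9995                             # geometric cooling schedule
        print("final objective:", round(J, 3), "intensities:", w)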

  7. Total Risk Approach in Applying PRA to Criticality Safety

    SciTech Connect

    Huang, S T

    2005-03-24

    As the nuclear industry continues marching from expert-based support to more procedure-based support, it is important to revisit the total risk concept in criticality safety. A key objective of criticality safety is to minimize total criticality accident risk. The purpose of this paper is to assess key constituents of the total risk concept pertaining to criticality safety from an operations support perspective and to suggest a risk-informed means of utilizing criticality safety resources for minimizing total risk. A PRA methodology was used to assist this assessment. The criticality accident history was assessed to provide a framework for our evaluation. In supporting operations, the work of criticality safety engineers ranges from knowing the scope and configurations of a proposed operation, performing criticality hazards assessments to derive effective controls, and assisting in training operators, to responding to floor questions, conducting surveillance to ensure implementation of criticality controls, and responding to criticality mishaps. In a compliance environment, the resources of criticality safety engineers are increasingly being directed towards tedious documentation efforts to meet regulatory requirements, to the effect of weakening floor support for criticality safety. By applying a fault tree model to identify the major contributors to criticality accidents, a total risk picture is obtained that addresses the relative merits of various actions. Overall, human failure is the key culprit in causing criticality accidents. Factors such as failure to follow procedures, lack of training, and lack of expert support at the floor level are main contributors. Other causes may include lack of effective criticality controls, such as inadequate criticality safety evaluations. Not all of the causes are equally important in contributing to criticality mishaps. Applying the limited resources to strengthen the weak links would reduce risk more than continuing emphasis on the strong links.

  8. Policy planning under uncertainty: efficient starting populations for simulation-optimization methods applied to municipal solid waste management.

    PubMed

    Huang, Gordon H; Linton, Jonathan D; Yeomans, Julian Scott; Yoogalingam, Reena

    2005-10-01

    Evolutionary simulation-optimization (ESO) techniques can be adapted to model a wide variety of problem types in which system components are stochastic. Grey programming (GP) methods have previously been applied to numerous environmental planning problems containing uncertain information. In this paper, ESO is combined with GP for policy planning to create a hybrid solution approach named GESO. The efficacy of GESO is illustrated using a municipal solid waste (MSW) management case taken from the Regional Municipality of Hamilton-Wentworth in the Province of Ontario, Canada. It can be shown that multiple policy alternatives meeting required system criteria, or modelling-to-generate-alternatives (MGA), can be quickly and efficiently created by applying GESO to this case data. The MGA capability of GESO is especially meaningful for large-scale real-world planning problems, and the practicality of this procedure can easily be extended from MSW systems to many other planning applications containing significant sources of uncertainty.

  9. A Gradient Optimization Approach to Adaptive Multi-Robot Control

    DTIC Science & Technology

    2009-09-01

    ...optimization through the evolution of a dynamical system. Some existing approaches do not fit under the framework we propose in this chapter. ... When parameters are coupled among robots, we must consider the evolution of all the robots' parameters together as one concatenated dynamical system. ... Synchronous evolution of the equations; exact Voronoi cells computed from the exact positions of all Voronoi neighbors; exact integrals over...

  10. Optimized probabilistic quantum processors: A unified geometric approach

    NASA Astrophysics Data System (ADS)

    Bergou, Janos; Bagan, Emilio; Feldman, Edgar

    Using probabilistic and deterministic quantum cloning and quantum state separation as illustrative examples, we develop a complete geometric solution for finding their optimal success probabilities. The method is related to the approach that we introduced earlier for the unambiguous discrimination of more than two states. In some cases the method delivers analytical results; in others it leads to intuitive and straightforward numerical solutions. We also present implementations of the schemes based on linear optics employing few-photon interferometry.

  11. New Approach to Ultrasonic Spectroscopy Applied to Flywheel Rotors

    NASA Technical Reports Server (NTRS)

    Harmon, Laura M.; Baaklini, George Y.

    2002-01-01

    Flywheel energy storage devices comprising multilayered composite rotor systems are being studied extensively for use in the International Space Station. A flywheel system includes the components necessary to store and discharge energy in a rotating mass. The rotor is the complete rotating assembly portion of the flywheel, which is composed primarily of a metallic hub and a composite rim. The rim may contain several concentric composite rings. This article summarizes current ultrasonic spectroscopy research on such composite rings and rims, as well as a flat coupon manufactured to mimic the fabrication of the rings. Ultrasonic spectroscopy is a nondestructive evaluation (NDE) method for material characterization and defect detection. In the past, a wide-bandwidth frequency spectrum created from a narrow ultrasonic signal was analyzed for amplitude and frequency changes. Tucker developed and patented a new approach to ultrasonic spectroscopy in which the ultrasonic system employs a continuous swept-sine waveform and performs a fast Fourier transform on the frequency spectrum to create the spectrum resonance spacing domain, or fundamental resonant frequency. Ultrasonic responses from composite flywheel components were analyzed at Glenn to assess this NDE technique for the quality assurance of flywheel applications.
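
    The core signal-processing idea, as described, can be sketched in a few lines: synthesize a swept-sine response spectrum whose resonances repeat every df_res hertz, then Fourier-transform that frequency-domain signal to recover the fundamental resonance spacing. The spectrum below is synthetic and all numbers are assumptions.

```python
import numpy as np

# Resonance-spacing sketch: a swept-sine response spectrum with resonances
# every df_res Hz; an FFT of this frequency-domain signal exposes the
# fundamental resonance spacing, which relates to part thickness/velocity.
f = np.linspace(1e6, 5e6, 4000)                       # swept frequency axis, Hz
df_res = 50e3                                          # true resonance spacing, Hz
spectrum = 1 + 0.5 * np.cos(2 * np.pi * f / df_res)    # periodic resonances

F = np.abs(np.fft.rfft(spectrum - spectrum.mean()))
lags = np.fft.rfftfreq(f.size, d=f[1] - f[0])          # units: 1/Hz
peak = lags[np.argmax(F[1:]) + 1]                      # skip the zero-lag bin
print("recovered spacing: %.1f kHz" % (1 / peak / 1e3))
```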

  12. Applying electrical utility least-cost approach to transportation planning

    SciTech Connect

    McCoy, G.A.; Growdon, K.; Lagerberg, B.

    1994-09-01

    Members of the energy and environmental communities believe that parallels exist between electrical utility least-cost planning and transportation planning. In particular, the Washington State Energy Strategy Committee believes that an integrated and comprehensive transportation planning process should be developed to fairly evaluate the costs of both demand-side and supply-side transportation options, establish competition between different travel modes, and select the mix of options designed to meet system goals at the lowest cost to society. Comparisons between travel modes are also required under the Intermodal Surface Transportation Efficiency Act (ISTEA). ISTEA calls for the development of procedures to compare demand management against infrastructure investment solutions and requires the consideration of efficiency, socioeconomic and environmental factors in the evaluation process. Several of the techniques and approaches used in energy least-cost planning and utility peak demand management can be incorporated into a least-cost transportation planning methodology. The concepts of avoided plants, expressing avoidable costs in levelized nominal dollars to compare projects with different on-line dates and service lives, the supply curve, and the resource stack can be directly adapted from the energy sector.
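
    One of the adapted concepts, levelized cost, is easy to make concrete: express each option's cost stream as the constant annual payment with the same present value, so that projects with different service lives can be compared on equal footing. The sketch below uses invented cost streams and a 5% discount rate purely for illustration.

```python
import numpy as np

# Levelized-cost sketch (illustrative numbers): convert each option's
# avoidable cost stream into a single constant annual payment so options
# with different service lives and on-line dates are directly comparable.
def levelized_cost(costs, rate):
    """costs[t] = cost in year t (year 0 = on-line date); returns the
    constant annual payment with the same present value."""
    t = np.arange(len(costs))
    pv = np.sum(costs / (1 + rate) ** t)
    crf = rate / (1 - (1 + rate) ** -len(costs))   # capital recovery factor
    return pv * crf

highway = np.array([5e8] + [2e7] * 29)     # large capital outlay, 30-year life
demand_mgmt = np.array([5e7] * 10)         # demand-management program, 10-year life
print("highway:     %.1f M$/yr" % (levelized_cost(highway, 0.05) / 1e6))
print("demand mgmt: %.1f M$/yr" % (levelized_cost(demand_mgmt, 0.05) / 1e6))
```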

  13. Optimal synchronization of Kuramoto oscillators: A dimensional reduction approach

    NASA Astrophysics Data System (ADS)

    Pinto, Rafael S.; Saa, Alberto

    2015-12-01

    A recently proposed dimensional reduction approach for studying synchronization in the Kuramoto model is employed to build optimal network topologies that favor or suppress synchronization. The approach is based on the introduction of a collective coordinate for the time evolution of the phase-locked oscillators, in the spirit of the Ott-Antonsen ansatz. We show that the optimal synchronization of a Kuramoto network demands the maximization of the quadratic function ω^T L ω, where ω stands for the vector of the natural frequencies of the oscillators and L for the network Laplacian matrix. Many recently obtained numerical results can be reobtained analytically, and in a simpler way, from our maximization condition. A computationally efficient hill-climb rewiring algorithm is proposed to generate networks with optimal synchronization properties. Our approach can be easily adapted to the case of Kuramoto models with both attractive and repulsive interactions, and again many recent numerical results can be rederived in a simpler and clearer analytical manner.
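
    A minimal realization of the stated maximization condition might look like the following hill-climb rewiring sketch: random edge swaps are kept whenever they increase ω^T L ω, on a toy random graph with random natural frequencies (connectivity checks and the actual Kuramoto dynamics are omitted for brevity).

```python
import numpy as np

# Hill-climb rewiring sketch following the maximization condition in the
# abstract: keep edge swaps that increase w^T L w, where L is the graph
# Laplacian and w the natural-frequency vector (toy random instance).
rng = np.random.default_rng(2)
n, m = 20, 40
w = rng.normal(size=n)

def laplacian(edges):
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1; L[j, j] += 1
        L[i, j] -= 1; L[j, i] -= 1
    return L

def score(edges):
    return w @ laplacian(edges) @ w

edges = set()
while len(edges) < m:                          # random initial graph
    i, j = sorted(rng.choice(n, 2, replace=False))
    edges.add((i, j))
edges = list(edges)

best = score(edges)
for it in range(2000):                         # hill climb: rewire one edge
    k = rng.integers(m)
    i, j = sorted(rng.choice(n, 2, replace=False))
    if (i, j) in edges:
        continue                               # avoid duplicate edges
    trial = edges.copy(); trial[k] = (i, j)
    s = score(trial)
    if s > best:
        edges, best = trial, s
print("optimized w^T L w:", round(best, 3))
```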

  14. Computational Approaches for Microalgal Biofuel Optimization: A Review

    PubMed Central

    Chaiboonchoe, Amphun

    2014-01-01

    The increased demand and consumption of fossil fuels have raised interest in finding renewable energy sources throughout the globe. Much focus has been placed on optimizing microorganisms, primarily microalgae, to efficiently produce compounds that can substitute for fossil fuels. However, the path to achieving economic feasibility is likely to require strain optimization using available tools and technologies in the fields of systems and synthetic biology. Such approaches invoke a deep understanding of the metabolic networks of the organisms and their genomic and proteomic profiles. The advent of next-generation sequencing and other high-throughput methods has led to a major increase in the availability of biological data. Integration of such disparate data can help define the emergent metabolic system properties, which is of crucial importance in addressing biofuel production optimization. Herein, we review major computational tools and approaches developed and used in order to potentially identify target genes, pathways, and reactions of particular interest to biofuel production in algae. As the use of these tools and approaches has not been fully implemented in algal biofuel research, the aim of this review is to highlight the potential utility of these resources toward their future implementation in algal research. PMID:25309916

  15. Computational approaches for microalgal biofuel optimization: a review.

    PubMed

    Koussa, Joseph; Chaiboonchoe, Amphun; Salehi-Ashtiani, Kourosh

    2014-01-01

    The increased demand and consumption of fossil fuels have raised interest in finding renewable energy sources throughout the globe. Much focus has been placed on optimizing microorganisms, primarily microalgae, to efficiently produce compounds that can substitute for fossil fuels. However, the path to achieving economic feasibility is likely to require strain optimization using available tools and technologies in the fields of systems and synthetic biology. Such approaches invoke a deep understanding of the metabolic networks of the organisms and their genomic and proteomic profiles. The advent of next-generation sequencing and other high-throughput methods has led to a major increase in the availability of biological data. Integration of such disparate data can help define the emergent metabolic system properties, which is of crucial importance in addressing biofuel production optimization. Herein, we review major computational tools and approaches developed and used in order to potentially identify target genes, pathways, and reactions of particular interest to biofuel production in algae. As the use of these tools and approaches has not been fully implemented in algal biofuel research, the aim of this review is to highlight the potential utility of these resources toward their future implementation in algal research.

  16. Volume reconstruction optimization for tomo-PIV algorithms applied to experimental data

    NASA Astrophysics Data System (ADS)

    Martins, Fabio J. W. A.; Foucaut, Jean-Marc; Thomas, Lionel; Azevedo, Luis F. A.; Stanislas, Michel

    2015-08-01

    Tomographic PIV is a three-component volumetric velocity measurement technique based on the tomographic reconstruction of a particle distribution imaged by multiple camera views. In essence, the performance and accuracy of this technique are highly dependent on the parametric adjustment and the reconstruction algorithm used. Although synthetic data have been widely employed to optimize experiments, the resulting reconstructed volumes might not have optimal quality. The purpose of the present study is to offer quality indicators that can be applied to data samples in order to improve the quality of velocity results obtained by the tomo-PIV technique. The proposed methodology can potentially lead to a significant reduction in the time required to optimize a tomo-PIV reconstruction, while also yielding better-quality velocity results. Tomo-PIV data provided by a six-camera turbulent boundary-layer experiment were used to optimize the reconstruction algorithms according to this methodology. Velocity statistics obtained by the optimized BIMART, SMART and MART algorithms were compared with hot-wire anemometer data, and velocity measurement uncertainties were computed. Results indicated that the BIMART and SMART algorithms produced reconstructed volumes of quality equivalent to standard MART, with the benefit of reduced computational time.
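
    For orientation, the multiplicative update at the heart of MART-family algorithms can be sketched on a toy linear system; the tomographic case uses camera projection weights and voxel intensities in place of this random matrix.

```python
import numpy as np

# Minimal MART (multiplicative algebraic reconstruction technique) sketch
# on a toy system A x = y. In tomo-PIV, rows of A are pixel projection
# weights and x holds voxel intensities (illustrative, not the paper's setup).
rng = np.random.default_rng(3)
A = rng.random((30, 50))           # projection weights (rows: pixels, cols: voxels)
x_true = rng.random(50)
y = A @ x_true                     # recorded pixel intensities

x = np.ones(50)                    # MART requires a positive initial guess
mu = 0.5                           # relaxation parameter
for sweep in range(50):
    for i in range(A.shape[0]):    # one multiplicative update per pixel equation
        proj = A[i] @ x
        if proj > 0:
            x *= (y[i] / proj) ** (mu * A[i])
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```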

  17. Delayed Monocular SLAM Approach Applied to Unmanned Aerial Vehicles

    PubMed Central

    2016-01-01

    In recent years, many researchers have addressed the issue of making Unmanned Aerial Vehicles (UAVs) more and more autonomous. In this context, the state estimation of the vehicle position is a fundamental necessity for any application involving autonomy. However, the problem of position estimation cannot be solved in some scenarios, even when a GPS signal is available, for instance, in an application requiring precision manoeuvres in a complex environment. Therefore, additional sensory information should be integrated into the system in order to improve accuracy and robustness. In this work, a novel vision-based simultaneous localization and mapping (SLAM) method with application to unmanned aerial vehicles is proposed. One contribution of this work is the design and development of a novel technique for estimating feature depth which is based on a stochastic technique of triangulation. In the proposed method the camera is mounted over a servo-controlled gimbal that counteracts the changes in attitude of the quadcopter. Under this assumption, the overall problem is simplified and focused on the position estimation of the aerial vehicle. The tracking of visual features is also made easier by the stabilized video. Another contribution of this work is to demonstrate that the integration of very noisy GPS measurements into the system for an initial short period of time is enough to initialize the metric scale. The performance of the proposed method is validated by means of experiments with real data carried out in unstructured outdoor environments. A comparative study shows that, when compared with related methods, the proposed approach performs better in terms of accuracy and computational time. PMID:28033385

  18. Geophysical approaches applied in the ancient theatre of Demetriada, Volos

    NASA Astrophysics Data System (ADS)

    Sarris, Apostolos; Papadopoulos, Nikos; Déderix, Sylviane; Salvi, Maria-Christina

    2013-08-01

    The city of Demetriada was constructed around 294-292 BC and became a stronghold of the Macedonian navy fleet, whereas in the Roman period it experienced significant growth and flourishing. The ancient theatre of the town was constructed at the same time as the foundation of the city, went unused for two centuries (1st century BC - 1st century AD), and was completely abandoned after the 4th century AD, to be used only as a quarry for the extraction of building material for Christian basilicas in the area. The theatre was found in 1809, and excavations took place in various years since 1907. Geophysical approaches were exploited recently in an effort to map the subsurface of the surrounding area of the theatre and support its reconstruction works. Magnetic gradiometry, Ground Penetrating Radar (GPR) and Electrical Resistivity Tomography (ERT) techniques were employed for mapping the area of the orchestra and the scene of the theatre, together with the area extending to the south of the theatre. A number of features were recognized by the magnetic techniques, including older excavation trenches and the pillar of the stoa of the proscenium. The different occupation phases of the area have been manifested through the employment of tomographic and stratigraphic geophysical techniques like three-dimensional ERT and GPR. Architectural orthogonal structures aligned in a S-N direction have been correlated to the already excavated buildings of the ceramic workshop. The workshop seems to extend over a large section of the area and was probably constructed after the final abandonment of the theatre.

  19. Delayed Monocular SLAM Approach Applied to Unmanned Aerial Vehicles.

    PubMed

    Munguia, Rodrigo; Urzua, Sarquis; Grau, Antoni

    2016-01-01

    In recent years, many researchers have addressed the issue of making Unmanned Aerial Vehicles (UAVs) more and more autonomous. In this context, the state estimation of the vehicle position is a fundamental necessity for any application involving autonomy. However, the problem of position estimation cannot be solved in some scenarios, even when a GPS signal is available, for instance, in an application requiring precision manoeuvres in a complex environment. Therefore, additional sensory information should be integrated into the system in order to improve accuracy and robustness. In this work, a novel vision-based simultaneous localization and mapping (SLAM) method with application to unmanned aerial vehicles is proposed. One contribution of this work is the design and development of a novel technique for estimating feature depth which is based on a stochastic technique of triangulation. In the proposed method the camera is mounted over a servo-controlled gimbal that counteracts the changes in attitude of the quadcopter. Under this assumption, the overall problem is simplified and focused on the position estimation of the aerial vehicle. The tracking of visual features is also made easier by the stabilized video. Another contribution of this work is to demonstrate that the integration of very noisy GPS measurements into the system for an initial short period of time is enough to initialize the metric scale. The performance of the proposed method is validated by means of experiments with real data carried out in unstructured outdoor environments. A comparative study shows that, when compared with related methods, the proposed approach performs better in terms of accuracy and computational time.

  20. Innovization procedure applied to a multi-objective optimization of a biped robot locomotion

    NASA Astrophysics Data System (ADS)

    Oliveira, Miguel; Santos, Cristina P.; Costa, Lino

    2013-10-01

    This paper proposes an innovization procedure approach for a bio-inspired biped gait locomotion controller. We combine a multi-objective evolutionary algorithm and a bio-inspired Central Pattern Generator (CPG) locomotion controller that generates the limb movements necessary to perform the walking gait of a biped robot. The search for the best set of CPG parameters is optimized by considering multiple objectives along a staged evolution. An innovization analysis is performed to identify relationships between the parameters and the objectives, and between the objectives themselves, in order to find relevant motor behavior characteristics. The simulation results show the effectiveness of the proposed approach.

  1. EMG assisted optimization: a hybrid approach for estimating muscle forces in an indeterminate biomechanical model.

    PubMed

    Cholewicki, J; McGill, S M

    1994-10-01

    There are two basic approaches to estimating individual muscle forces acting on a joint, given the indeterminacy of the moment balance equations: optimization and electromyography (EMG)-assisted. Each approach is characterized by unique advantages and liabilities. With this in mind, a new hybrid method which combines the advantages of both of these traditional approaches, termed 'EMG assisted optimization' (EMGAO), is described. In this method, minimal adjustments are applied to the individual muscle forces estimated from EMG, so that all moment equilibrium equations are satisfied in three dimensions. The result is the best possible match between physiologically observed muscle activation patterns and the predicted forces, while satisfying the moment constraints about all three joint axes. Several forms of the objective function are discussed, and their effect on individual muscle adjustments is illustrated in a simple two-dimensional example.
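
    Under the simplest reading of the method, the adjustment step reduces to a constrained least-squares problem: find the gain vector closest to unity that restores moment equilibrium. A minimal closed-form sketch with invented moment arms and EMG force estimates:

```python
import numpy as np

# EMG-assisted-optimization sketch: starting from EMG-based muscle force
# estimates, find the smallest gain adjustments that restore moment
# equilibrium about all three joint axes. Closed-form minimum-norm solution
# of: minimize ||g - 1||^2 subject to A g = M (toy numbers throughout).
rng = np.random.default_rng(4)
n_muscles = 10
F_emg = rng.uniform(50, 300, n_muscles)       # EMG-predicted forces, N
R = rng.normal(size=(3, n_muscles)) * 0.05    # moment arms about 3 axes, m
M_ext = np.array([12.0, -4.0, 7.0])           # external joint moments, N*m

A = R * F_emg                                  # moment of each muscle at gain 1
g = 1 + A.T @ np.linalg.solve(A @ A.T, M_ext - A @ np.ones(n_muscles))
F_adj = g * F_emg                              # adjusted muscle forces

print("moment residual:", np.round(R @ F_adj - M_ext, 10))   # ~zero
print("gain adjustments:", np.round(g, 3))
```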

  2. A quantile-based scenario analysis approach to biomass supply chain optimization under uncertainty

    DOE PAGES

    Zamar, David S.; Gopaluni, Bhushan; Sokhansanj, Shahab; ...

    2016-11-21

    Supply chain optimization for biomass-based power plants is an important research area due to greater emphasis on renewable power energy sources. Biomass supply chain design and operational planning models are often formulated and studied using deterministic mathematical models. While these models are beneficial for making decisions, their applicability to real world problems may be limited because they do not capture all the complexities in the supply chain, including uncertainties in the parameters. This study develops a statistically robust quantile-based approach for stochastic optimization under uncertainty, which builds upon scenario analysis. We apply and evaluate the performance of our approach to address the problem of analyzing competing biomass supply chains subject to stochastic demand and supply. Finally, the proposed approach was found to outperform alternative methods in terms of computational efficiency and ability to meet the stochastic problem requirements.

  3. A quantile-based scenario analysis approach to biomass supply chain optimization under uncertainty

    SciTech Connect

    Zamar, David S.; Gopaluni, Bhushan; Sokhansanj, Shahab; Newlands, Nathaniel K.

    2016-11-21

    Supply chain optimization for biomass-based power plants is an important research area due to greater emphasis on renewable power energy sources. Biomass supply chain design and operational planning models are often formulated and studied using deterministic mathematical models. While these models are beneficial for making decisions, their applicability to real world problems may be limited because they do not capture all the complexities in the supply chain, including uncertainties in the parameters. This study develops a statistically robust quantile-based approach for stochastic optimization under uncertainty, which builds upon scenario analysis. We apply and evaluate the performance of our approach to address the problem of analyzing competing biomass supply chains subject to stochastic demand and supply. Finally, the proposed approach was found to outperform alternative methods in terms of computational efficiency and ability to meet the stochastic problem requirements.

  4. Applying the Taguchi method to optimize sumatriptan succinate niosomes as drug carriers for skin delivery.

    PubMed

    González-Rodríguez, Maria Luisa; Mouram, Imane; Cózar-Bernal, Ma Jose; Villasmil, Sheila; Rabasco, Antonio M

    2012-10-01

    Niosomes formulated from different nonionic surfactants (Span® 60, Brij® 72, Span® 80, or Eumulgin® B 2) with cholesterol (CH) molar ratios of 3:1 or 4:1 with respect to surfactant were prepared with different sumatriptan amounts (10 and 15 mg) and stearylamine (SA). The thin-film hydration method was employed to produce the vesicles, and the time elapsed before hydrating the lipid film (1 or 24 h) was introduced as a variable. These factors were selected as variables and their levels were introduced into two L18 orthogonal arrays. The aim was to optimize the manufacturing conditions by applying the Taguchi methodology. Response variables were vesicle size, zeta potential (Z), and drug entrapment. From the Taguchi analysis, drug concentration and hydration time were the parameters with the greatest influence on size, the niosomes made with Span® 80 being the smallest vesicles. The presence of SA in the vesicles had a relevant influence on Z values. All the factors except the surfactant-CH ratio had an influence on the encapsulation. Formulations were optimized by applying the marginal means methodology. The results showed a good correlation between the mean and signal-to-noise ratio parameters, indicating the feasibility of the robust methodology for optimizing this formulation. The extrusion process also exerted a positive influence on drug entrapment.
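
    The Taguchi analysis itself is mechanical once the orthogonal array and responses are in hand: compute a signal-to-noise ratio per run, then compare marginal mean S/N across each factor's levels. The sketch below uses a small invented design and synthetic entrapment data, not the paper's L18 arrays.

```python
import numpy as np

# Taguchi marginal-means sketch (toy data): larger-is-better signal-to-noise
# ratio per run, then marginal S/N per factor level to pick the level that
# maximizes drug entrapment. Factor columns and responses are illustrative.
runs = np.array([          # columns: surfactant(0-3), CH ratio(0/1), drug(0/1)
    [0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 0],
    [2, 0, 0], [2, 1, 1], [3, 0, 1], [3, 1, 0]])
y = np.array([             # entrapment efficiency (%), 3 replicates per run
    [62, 65, 60], [70, 68, 72], [55, 58, 53], [61, 60, 63],
    [75, 73, 77], [80, 78, 82], [50, 49, 52], [57, 59, 55]])

sn = -10 * np.log10(np.mean(1.0 / y**2, axis=1))   # larger-is-better S/N, dB

for col, name in enumerate(["surfactant", "CH ratio", "drug amount"]):
    levels = np.unique(runs[:, col])
    means = [sn[runs[:, col] == lv].mean() for lv in levels]
    print(name, "best level:", levels[int(np.argmax(means))],
          "marginal S/N:", np.round(means, 2))
```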

  5. Continuous intensity map optimization (CIMO): a novel approach to leaf sequencing in step and shoot IMRT.

    PubMed

    Cao, Daliang; Earl, Matthew A; Luan, Shuang; Shepard, David M

    2006-04-01

    A new leaf-sequencing approach has been developed that is designed to reduce the number of required beam segments for step-and-shoot intensity modulated radiation therapy (IMRT). This approach to leaf sequencing is called continuous-intensity-map-optimization (CIMO). Using a simulated annealing algorithm, CIMO seeks to minimize differences between the optimized and sequenced intensity maps. Two distinguishing features of the CIMO algorithm are (1) CIMO does not require that each optimized intensity map be clustered into discrete levels and (2) CIMO is not rule-based but rather simultaneously optimizes both the aperture shapes and weights. To test the CIMO algorithm, ten IMRT patient cases were selected (four head-and-neck, two pancreas, two prostate, one brain, and one pelvis). For each case, the optimized intensity maps were extracted from the Pinnacle3 treatment planning system. The CIMO algorithm was applied, and the optimized aperture shapes and weights were loaded back into Pinnacle. A final dose calculation was performed using Pinnacle's convolution/superposition based dose calculation. On average, the CIMO algorithm provided a 54% reduction in the number of beam segments as compared with Pinnacle's leaf sequencer. The plans sequenced using the CIMO algorithm also provided improved target dose uniformity and a reduced discrepancy between the optimized and sequenced intensity maps. For ten clinical intensity maps, comparisons were performed between the CIMO algorithm and the power-of-two reduction algorithm of Xia and Verhey [Med. Phys. 25(8), 1424-1434 (1998)]. When the constraints of a Varian Millennium multileaf collimator were applied, the CIMO algorithm resulted in a 26% reduction in the number of segments. For an Elekta multileaf collimator, the CIMO algorithm resulted in a 67% reduction in the number of segments. An average leaf sequencing time of less than one minute per beam was observed.

  6. General approach and scope. [rotor blade design optimization

    NASA Technical Reports Server (NTRS)

    Adelman, Howard M.; Mantay, Wayne R.

    1989-01-01

    This paper describes a joint activity involving NASA and Army researchers at the NASA Langley Research Center to develop optimization procedures aimed at improving the rotor blade design process by integrating appropriate disciplines and accounting for all of the important interactions among the disciplines. The disciplines involved include rotor aerodynamics, rotor dynamics, rotor structures, airframe dynamics, and acoustics. The work is focused on combining these five key disciplines in an optimization procedure capable of designing a rotor system to satisfy multidisciplinary design requirements. Fundamental to the plan is a three-phased approach. In phase 1, the disciplines of blade dynamics, blade aerodynamics, and blade structure will be closely coupled, while acoustics and airframe dynamics will be decoupled and be accounted for as effective constraints on the design for the first three disciplines. In phase 2, acoustics is to be integrated with the first three disciplines. Finally, in phase 3, airframe dynamics will be fully integrated with the other four disciplines. This paper deals with details of the phase 1 approach and includes details of the optimization formulation, design variables, constraints, and objective function, as well as details of discipline interactions, analysis methods, and methods for validating the procedure.

  7. Portfolio optimization in enhanced index tracking with goal programming approach

    NASA Astrophysics Data System (ADS)

    Siew, Lam Weng; Jaaman, Saiful Hafizah Hj.; Ismail, Hamizun bin

    2014-09-01

    Enhanced index tracking is a popular form of passive fund management in the stock market. Enhanced index tracking aims to generate excess return over the return achieved by the market index without purchasing all of the stocks that make up the index. This can be done by establishing an optimal portfolio that maximizes the mean return and minimizes the risk. The objective of this paper is to determine the portfolio composition and performance using a goal programming approach to enhanced index tracking and to compare it to the market index. Goal programming is a branch of multi-objective optimization which can handle decision problems involving the two different goals of enhanced index tracking, a trade-off between maximizing the mean return and minimizing the risk. The results of this study show that the optimal portfolio obtained with the goal programming approach is able to outperform the Malaysian market index, the FTSE Bursa Malaysia Kuala Lumpur Composite Index, achieving a higher mean return and lower risk without purchasing all the stocks in the market index.
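
    A bare-bones goal-programming formulation of this trade-off can be written as a linear program whose variables are the portfolio weights plus deviation variables for each goal; the unwanted deviations (return shortfall, risk excess) are minimized. The sketch below uses made-up asset data and a simple linear risk score in place of the paper's actual risk measure.

```python
import numpy as np
from scipy.optimize import linprog

# Goal-programming sketch for enhanced index tracking (toy data): choose
# weights to (goal 1) reach a mean return above the index and (goal 2)
# keep a linear risk score below the index, minimizing unwanted deviations.
r = np.array([0.08, 0.12, 0.10, 0.06, 0.15])   # asset mean returns (assumed)
s = np.array([0.10, 0.25, 0.18, 0.08, 0.30])   # asset risk scores (assumed)
G_ret, G_risk = 0.11, 0.15                      # return / risk goals

n = r.size
# variables: [w_1..w_n, d_ret_minus, d_ret_plus, d_risk_minus, d_risk_plus]
c = np.r_[np.zeros(n), 1.0, 0.0, 0.0, 1.0]      # penalize under-return, over-risk
A_eq = np.vstack([
    np.r_[r, 1, -1, 0, 0],                      # r.w + d_ret- - d_ret+  = G_ret
    np.r_[s, 0, 0, 1, -1],                      # s.w + d_risk- - d_risk+ = G_risk
    np.r_[np.ones(n), 0, 0, 0, 0]])             # fully invested: sum w = 1
b_eq = np.array([G_ret, G_risk, 1.0])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (n + 4))
print("weights:", np.round(res.x[:n], 3), "objective:", round(res.fun, 4))
```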

  8. Unsteady Adjoint Approach for Design Optimization of Flapping Airfoils

    NASA Technical Reports Server (NTRS)

    Lee, Byung Joon; Liou, Meng-Sing

    2012-01-01

    This paper describes work on optimizing the propulsive efficiency of flapping airfoils, i.e., improving the thrust while constraining the aerodynamic work during flapping flight, by changing their shape and trajectory of motion with the unsteady discrete adjoint approach. For unsteady problems, it is essential to properly resolve the time scales of the motion under consideration, and the temporal resolution must be compatible with the objective sought. We include both the instantaneous and time-averaged (periodic) formulations in this study. For design optimization with shape or motion parameters, the time-averaged objective function is found to be more useful, while the instantaneous one is more suitable for flow control. The instantaneous objective function is operationally straightforward. On the other hand, the time-averaged objective function requires additional steps in the adjoint approach; the unsteady discrete adjoint equations for a periodic flow must be reformulated and the corresponding system of equations solved iteratively. We compare the design results from shape and trajectory optimizations and investigate the physical relevance of the design variables to the flapping motion at on- and off-design conditions.

  9. Electrical defibrillation optimization: An automated, iterative parallel finite-element approach

    SciTech Connect

    Hutchinson, S.A.; Shadid, J.N.; Ng, K.T.; Nadeem, A.

    1997-04-01

    To date, optimization of electrode systems for electrical defibrillation has been limited to hand-selected electrode configurations. In this paper we present an automated approach which combines detailed, three-dimensional (3-D) finite element torso models with optimization techniques to provide a flexible analysis and design tool for electrical defibrillation optimization. Specifically, a parallel direct search (PDS) optimization technique is used with a representative objective function to find an electrode configuration which corresponds to the satisfaction of a postulated defibrillation criterion with a minimum amount of power and a low possibility of myocardium damage. For adequate representation of the thoracic inhomogeneities, 3-D finite-element torso models are used in the objective function computations. The CPU-intensive finite-element calculations required for the objective function evaluation have been implemented on a message-passing parallel computer in order to complete the optimization calculations in a timely manner. To illustrate the optimization procedure, it has been applied to a representative electrode configuration for transmyocardial defibrillation, namely the subcutaneous patch-right ventricular catheter (SP-RVC) system. Sensitivity of the optimal solutions to various tissue conductivities has been studied. 39 refs., 9 figs., 2 tabs.
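
    Pattern-search methods like PDS poll a set of trial points around the incumbent and shrink the pattern on failure; in the paper each poll point costs a full 3-D finite-element solve, which is why the evaluations are parallelized. A serial compass-search sketch on a toy objective:

```python
import numpy as np

# Compass/pattern-search sketch. The paper's PDS evaluates expensive 3-D
# finite-element objectives in parallel; a cheap smooth toy objective stands
# in here for the defibrillation power/damage criterion.
def objective(x):
    return (x[0] - 1.2) ** 2 + 3 * (x[1] + 0.4) ** 2 + 0.5

x = np.zeros(2)                    # electrode-configuration parameters (toy)
f = objective(x)
step = 1.0
while step > 1e-6:
    # poll the 2*d compass points; in PDS these evaluations run in parallel
    polls = [x + step * d for d in (np.array([1, 0]), np.array([-1, 0]),
                                    np.array([0, 1]), np.array([0, -1]))]
    vals = [objective(p) for p in polls]
    k = int(np.argmin(vals))
    if vals[k] < f:                # success: move to the best poll point
        x, f = polls[k], vals[k]
    else:                          # failure: shrink the pattern
        step *= 0.5
print("optimum:", np.round(x, 4), "objective:", round(f, 6))
```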

  10. Phase retrieval with transverse translation diversity: a nonlinear optimization approach.

    PubMed

    Guizar-Sicairos, Manuel; Fienup, James R

    2008-05-12

    We develop and test a nonlinear optimization algorithm for solving the problem of phase retrieval with transverse translation diversity, where the diverse far-field intensity measurements are taken after translating the object relative to a known illumination pattern. Analytical expressions for the gradient of a squared-error metric with respect to the object, illumination and translations allow joint optimization of the object and system parameters. This approach achieves superior reconstructions, with respect to a previously reported technique [H. M. L. Faulkner and J. M. Rodenburg, Phys. Rev. Lett. 93, 023903 (2004)], when the system parameters are inaccurately known or in the presence of noise. Applicability of this method for samples that are smaller than the illumination pattern is explored.

  11. Direct and Evolutionary Approaches for Optimal Receiver Function Inversion

    NASA Astrophysics Data System (ADS)

    Dugda, Mulugeta Tuji

    Receiver functions are time series obtained by deconvolving vertical-component seismograms from radial-component seismograms. Receiver functions represent the impulse response of the earth structure beneath a seismic station. Generally, receiver functions consist of a number of seismic phases related to discontinuities in the crust and upper mantle. The relative arrival times of these phases are correlated with the locations of discontinuities as well as the media of seismic wave propagation. The Moho (Mohorovicic discontinuity) is a major interface or discontinuity that separates the crust and the mantle. In this research, automatic techniques were developed to determine the depth of the Moho from the earth's surface (the crustal thickness H) and the ratio of crustal seismic P-wave velocity (Vp) to S-wave velocity (Vs) (kappa = Vp/Vs). In this dissertation, an optimization problem of inverting receiver functions to determine the crustal parameters and the three associated weights is developed and solved using evolutionary and direct optimization techniques. The first technique makes use of the evolutionary Genetic Algorithm (GA) optimization technique. The second technique combines the direct Generalized Pattern Search (GPS) and the evolutionary Fitness Proportionate Niching (FPN) techniques, exploiting the strengths of each. In a previous study, a Monte Carlo technique was utilized for determining variable weights in the H-kappa stacking of receiver functions. Compared to that variable-weights approach, the GA and GPS-FPN techniques have the great advantage of saving time, and they are suitable for automatic and simultaneous determination of the crustal parameters and appropriate weights. The GA implementation provides optimal or near-optimal weights necessary in stacking receiver functions as well as optimal H and kappa values simultaneously. Generally, the objective function of the H-kappa stacking problem
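
    The underlying H-kappa stacking objective is easy to sketch: predict the Ps and multiple arrival times from candidate (H, kappa), stack the receiver-function amplitudes at those times with weights, and search for the maximum. The grid-search version below (fixed weights, synthetic receiver function) is a simplified stand-in for the GA and GPS-FPN machinery described above.

```python
import numpy as np

# H-kappa stacking sketch: grid-search the crustal thickness H and Vp/Vs
# ratio kappa that maximize the weighted stack of receiver-function
# amplitudes at the predicted Ps, PpPs and PsPs+PpSs times. Synthetic
# receiver function; weights w1-w3 are fixed here, whereas the dissertation
# optimizes them jointly with H and kappa.
vp, p = 6.3, 0.06                  # crustal Vp (km/s), ray parameter (s/km)
t = np.linspace(0, 30, 3000)
def gauss(t0): return np.exp(-0.5 * ((t - t0) / 0.15) ** 2)
rf = gauss(4.35) + 0.5 * gauss(14.64) - 0.4 * gauss(18.99)  # synthetic RF

def arrival_times(H, k):
    vs = vp / k
    a = np.sqrt(vs ** -2 - p ** 2); b = np.sqrt(vp ** -2 - p ** 2)
    return H * (a - b), H * (a + b), 2 * H * a   # t_Ps, t_PpPs, t_PsPs+PpSs

w = (0.6, 0.3, 0.1)
best = (-np.inf, None, None)
for H in np.arange(20, 60, 0.25):
    for k in np.arange(1.6, 2.0, 0.005):
        t1, t2, t3 = arrival_times(H, k)
        s = (w[0] * np.interp(t1, t, rf) + w[1] * np.interp(t2, t, rf)
             - w[2] * np.interp(t3, t, rf))
        if s > best[0]:
            best = (s, H, k)
print("best H = %.2f km, kappa = %.3f" % (best[1], best[2]))
```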

  12. Final Technical Report for "Applied Mathematics Research: Simulation Based Optimization and Application to Electromagnetic Inverse Problems"

    SciTech Connect

    Haber, Eldad

    2014-03-17

    The focus of the research was: developing adaptive meshes for the solution of Maxwell's equations; developing a parallel framework for time-dependent inverse Maxwell's equations; developing multilevel methods for optimization problems with inequality constraints; a new inversion code for inverse Maxwell's equations at the 0th frequency (DC resistivity); and a new inversion code for inverse Maxwell's equations in the low-frequency regime. Although the research concentrated on electromagnetic forward and inverse problems, the results were also applied to the problem of image registration.

  13. Multidisciplinary Design Optimization Under Uncertainty: An Information Model Approach (PREPRINT)

    DTIC Science & Technology

    2011-03-01

    ...and c ∈ R, which is easily solved using the MATLAB function fmincon. The reader is cautioned not to optimize over (t, p, c). Our approach requires a... would have to be expanded. The fifteen formulas can serve as the basis for numerical simulations, an easy task using MATLAB. ...

  14. Approaching direct optimization of as-built lens performance

    NASA Astrophysics Data System (ADS)

    McGuire, James P.; Kuper, Thomas G.

    2012-10-01

    We describe a method approaching direct optimization of the rms wavefront error of a lens, including tolerances. By including the effect of tolerances in the error function, the designer can choose to improve the as-built performance with a fixed set of tolerances and/or reduce the cost of production lenses with looser tolerances. The method relies on the speed of differential tolerance analysis and has recently become practical due to the combination of continuing increases in computer hardware speed and multiple-core processing. We illustrate the method's use on a Cooke triplet, a double Gauss, and two plastic mobile phone camera lenses.

  15. Perspective: Codesign for materials science: An optimal learning approach

    NASA Astrophysics Data System (ADS)

    Lookman, Turab; Alexander, Francis J.; Bishop, Alan R.

    2016-05-01

    A key element of materials discovery and design is to learn from available data and prior knowledge to guide the next experiments or calculations in order to focus in on materials with targeted properties. We suggest that the tight coupling and feedback between experiments, theory and informatics demands a codesign approach, very reminiscent of computational codesign involving software and hardware in computer science. This requires dealing with a constrained optimization problem in which uncertainties are used to adaptively explore and exploit the predictions of a surrogate model to search the vast high dimensional space where the desired material may be found.

  16. Partial constraint satisfaction approaches for optimal operation of a hydropower system

    NASA Astrophysics Data System (ADS)

    Ferreira, Andre R.; Teegavarapu, Ramesh S. V.

    2012-09-01

    Optimal operation models for a hydropower system using partial constraint satisfaction (PCS) approaches are proposed and developed in this study. The models use mixed integer nonlinear programming (MINLP) formulations with binary variables. The models also integrate a turbine unit commitment formulation along with water quality constraints used for evaluation of reservoir downstream water quality impairment. New PCS-based models for hydropower optimization formulations are developed using binary and continuous evaluator functions to maximize the constraint satisfaction. The models are applied to a real-life hydropower reservoir system in Brazil. Genetic Algorithms (GAs) are used to solve the optimization formulations. Decision maker's preferences towards power production targets and water quality improvements are incorporated using partial satisfaction constraints to obtain compromise operating rules for a multi-objective reservoir operation problem dominated by conflicting goals of energy production, water quality and consumptive water uses.

  17. Forging tool shape optimization using pseudo inverse approach and adaptive incremental approach

    NASA Astrophysics Data System (ADS)

    Halouani, A.; Meng, F. J.; Li, Y. M.; Labergère, C.; Abbès, B.; Lafon, P.; Guo, Y. Q.

    2013-05-01

    This paper presents a simplified finite element method called "Pseudo Inverse Approach" (PIA) for tool shape design and optimization in multi-step cold forging processes. The approach is based on the knowledge of the final part shape. Some intermediate configurations are introduced and corrected by using a free surface method to consider the deformation paths without contact treatment. A robust direct algorithm of plasticity is implemented by using the equivalent stress notion and tensile curve. Numerical tests have shown that the PIA is very fast compared to the incremental approach. The PIA is used in an optimization procedure to automatically design the shapes of the preform tools. Our objective is to find the optimal preforms which minimize the equivalent plastic strain and punch force. The preform shapes are defined by B-Spline curves. A simulated annealing algorithm is adopted for the optimization procedure. The forging results obtained by the PIA are compared to those obtained by the incremental approach to show the efficiency and accuracy of the PIA.

  18. Aircraft wing structural design optimization based on automated finite element modelling and ground structure approach

    NASA Astrophysics Data System (ADS)

    Yang, Weizhu; Yue, Zhufeng; Li, Lei; Wang, Peiyan

    2016-01-01

    An optimization procedure combining an automated finite element modelling (AFEM) technique with a ground structure approach (GSA) is proposed for structural layout and sizing design of aircraft wings. The AFEM technique, based on CATIA VBA scripting and PCL programming, is used to generate models automatically considering the arrangement of inner systems. GSA is used for local structural topology optimization. The design procedure is applied to a high-aspect-ratio wing. The arrangement of the integral fuel tank, landing gear and control surfaces is considered. For the landing gear region, a non-conventional initial structural layout is adopted. The positions of components, the number of ribs and local topology in the wing box and landing gear region are optimized to obtain a minimum structural weight. Constraints include tank volume, strength, buckling and aeroelastic parameters. The results show that the combined approach leads to a greater weight saving, i.e. 26.5%, compared with three additional optimizations based on individual design approaches.

  19. Optimization techniques applied to passive measures for in-orbit spacecraft survivability

    NASA Technical Reports Server (NTRS)

    Mog, Robert A.; Helba, Michael J.; Hill, Janeil B.

    1992-01-01

    The purpose of this research is to provide Space Station Freedom protective structures design insight through the coupling of design/material requirements, hypervelocity impact phenomenology, meteoroid and space debris environment sensitivities, optimization techniques and operations research strategies, and mission scenarios. The goals of the research are: (1) to develop a Monte Carlo simulation tool which will provide top level insight for Space Station protective structures designers; (2) to develop advanced shielding concepts relevant to Space Station Freedom using unique multiple bumper approaches; and (3) to investigate projectile shape effects on protective structures design.

  20. Silanization of glass chips—A factorial approach for optimization

    NASA Astrophysics Data System (ADS)

    Vistas, Cláudia R.; Águas, Ana C. P.; Ferreira, Guilherme N. M.

    2013-12-01

    Silanization of glass chips with 3-mercaptopropyltrimethoxysilane (MPTS) was investigated and optimized to generate a high-quality layer with well-oriented thiol groups. A full factorial design was used to evaluate the influence of silane concentration and reaction time. The stabilization of the silane monolayer by thermal curing was also investigated, and a disulfide reduction step was included to fully regenerate the thiol surface function. Fluorescence analysis and water contact angle measurements were used to quantitatively assess the chemical modifications, wettability and quality of the modified chip surfaces throughout the silanization, curing and reduction steps. The factorial design enables a systematic approach to optimizing the glass chip silanization process. The optimal conditions for silanization were incubation of the chips in a 2.5% MPTS solution for 2 h, followed by curing at 110 °C for 2 h and a reduction step with 10 mM dithiothreitol for 30 min at 37 °C. Under these conditions the surface density of functional thiol groups was 4.9 × 10^13 molecules/cm^2, which is similar to the expected maximum coverage obtained from theoretical estimations based on the projected molecular area (∼5 × 10^13 molecules/cm^2).
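
    A full factorial design of this kind is straightforward to generate and analyze programmatically; the sketch below builds a two-factor design (silane concentration x reaction time, levels assumed) and computes level means from synthetic responses, in the spirit of the marginal-means analysis.

```python
import numpy as np
from itertools import product

# Full-factorial sketch for a two-factor silanization screen: build the
# design matrix, then estimate level means from synthetic fluorescence
# responses (the real study measured fluorescence and contact angle).
conc = [1.0, 2.5, 5.0]            # % MPTS (levels are illustrative)
time = [1, 2, 4]                  # hours
design = list(product(conc, time))

rng = np.random.default_rng(5)
# synthetic response: peaks at mid concentration, grows with time, plus noise
y = np.array([100 - 8 * (c - 2.5) ** 2 + 5 * np.log(t) + rng.normal(0, 1)
              for c, t in design])

for name, col, levels in (("conc", 0, conc), ("time", 1, time)):
    means = [y[[d[col] == lv for d in design]].mean() for lv in levels]
    print(name, "level means:", np.round(means, 1))
```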

  1. A hypothesis-driven approach to optimize field campaigns

    NASA Astrophysics Data System (ADS)

    Nowak, Wolfgang; Rubin, Yoram; de Barros, Felipe P. J.

    2012-06-01

    Most field campaigns aim at helping in specified scientific or practical tasks, such as modeling, prediction, optimization, or management. Often these tasks involve binary decisions or seek answers to yes/no questions under uncertainty, e.g., Is a model adequate? Will contamination exceed a critical level? In this context, the information needs of hydro(geo)logical modeling should be satisfied with efficient and rational field campaigns, e.g., because budgets are limited. We propose a new framework to optimize field campaigns that defines the quest for defensible decisions as the ultimate goal. The key steps are to formulate yes/no questions under uncertainty as Bayesian hypothesis tests, and then use the expected failure probability of hypothesis testing as objective function. Our formalism is unique in that it optimizes field campaigns for maximum confidence in decisions on model choice, binary engineering or management decisions, or questions concerning compliance with environmental performance metrics. It is goal oriented, recognizing that different models, questions, or metrics deserve different treatment. We use a formal Bayesian scheme called PreDIA, which is free of linearization, and can handle arbitrary data types, scientific tasks, and sources of uncertainty (e.g., conceptual, physical, (geo)statistical, measurement errors). This reduces the bias due to possibly subjective assumptions prior to data collection and improves the chances of successful field campaigns even under conditions of model uncertainty. We illustrate our approach on two instructive examples from stochastic hydrogeology with increasing complexity.

  2. An optimization approach for fitting canonical tensor decompositions.

    SciTech Connect

    Dunlavy, Daniel M.; Acar, Evrim; Kolda, Tamara Gibson

    2009-02-01

    Tensor decompositions are higher-order analogues of matrix decompositions and have proven to be powerful tools for data analysis. In particular, we are interested in the canonical tensor decomposition, otherwise known as the CANDECOMP/PARAFAC decomposition (CPD), which expresses a tensor as the sum of component rank-one tensors and is used in a multitude of applications such as chemometrics, signal processing, neuroscience, and web analysis. The task of computing the CPD, however, can be difficult. The typical approach is based on alternating least squares (ALS) optimization, which can be remarkably fast but is not very accurate. Previously, nonlinear least squares (NLS) methods have also been recommended; existing NLS methods are accurate but slow. In this paper, we propose the use of gradient-based optimization methods. We discuss the mathematical calculation of the derivatives and further show that they can be computed efficiently, at the same cost as one iteration of ALS. Computational experiments demonstrate that the gradient-based optimization methods are much more accurate than ALS and orders of magnitude faster than NLS.
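
    The paper's key point, that the gradient can be computed at the cost of one ALS iteration, follows from standard unfolding identities; the sketch below computes the gradient of the CP fit with respect to one factor matrix and checks it against a finite difference (random small tensor, rank 3).

```python
import numpy as np

# Gradient of the CP-fit objective f = 0.5 * ||T - [[A,B,C]]||_F^2 with
# respect to the factor matrix A, verified by finite differences. The same
# unfolding identity gives the B and C gradients, each at ALS-step cost.
rng = np.random.default_rng(6)
I, J, K, R = 10, 12, 14, 3

def kr(U, V):
    """Column-wise Khatri-Rao product."""
    return np.einsum('ir,jr->ijr', U, V).reshape(-1, U.shape[1])

A0, B0, C0 = (rng.random((n, R)) for n in (I, J, K))
T = (A0 @ kr(B0, C0).T).reshape(I, J, K)     # exact rank-R tensor

A, B, C = (rng.random((n, R)) for n in (I, J, K))

def loss(A):
    return 0.5 * np.linalg.norm((A @ kr(B, C).T).reshape(I, J, K) - T) ** 2

# analytic gradient: G_A = (X_(1) - T_(1)) @ (B kr C), mode-1 unfolding
G_A = (A @ kr(B, C).T - T.reshape(I, -1)) @ kr(B, C)

eps = 1e-6
A_pert = A.copy(); A_pert[0, 0] += eps
print("analytic:", G_A[0, 0], " finite diff:", (loss(A_pert) - loss(A)) / eps)
```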

  3. Optimization of applied non-axisymmetric magnetic perturbations using multimodal plasma response on DIII-D

    NASA Astrophysics Data System (ADS)

    Weisberg, D. B.; Paz-Soldan, C.; Lanctot, M. J.; Strait, E. J.; Evans, T. E.

    2016-10-01

    The plasma response to proposed 3D coil geometries in the DIII-D tokamak is investigated using the linear MHD plasma response code MARS-F. An extensive examination of low- and high-field side coil arrangements shows the potential to optimize the coupling between imposed non-axisymmetric magnetic perturbations and the total plasma response by varying the toroidal and poloidal spectral content of the applied field. Previous work has shown that n=2 and n=3 perturbations can suppress edge-localized modes (ELMs) in cases where the applied field's coupling to resonant surfaces is enhanced by amplifying marginally-stable kink modes. This research is extended to higher n-number configurations of 2 to 3 rows with up to 12 coils each in order to advance the physical understanding and optimization of both the resonant and non-resonant responses. Both in- and ex-vessel configurations are considered. The plasma braking torque is also analyzed, and coil geometries with favorable plasma coupling characteristics are discussed. Work supported by GA internal funds.

  4. Applying Dynamical Systems Theory to Optimize Libration Point Orbit Stationkeeping Maneuvers for WIND

    NASA Technical Reports Server (NTRS)

    Brown, Jonathan M.; Petersen, Jeremy D.

    2014-01-01

    NASA's WIND mission has been operating in a large amplitude Lissajous orbit in the vicinity of the interior libration point of the Sun-Earth/Moon system since 2004. Regular stationkeeping maneuvers are required to maintain the orbit due to the instability around the collinear libration points. Historically these stationkeeping maneuvers have been performed by applying an incremental change in velocity, or Δv, along the spacecraft-Sun vector as projected into the ecliptic plane. Previous studies have shown that the magnitude of libration point stationkeeping maneuvers can be minimized by applying the Δv in the direction of the local stable manifold found using dynamical systems theory. This paper presents the analysis of this new maneuver strategy which shows that the magnitude of stationkeeping maneuvers can be decreased by 5 to 25 percent, depending on the location in the orbit where the maneuver is performed. The implementation of the optimized maneuver method into operations is discussed and results are presented for the first two optimized stationkeeping maneuvers executed by WIND.

  5. Optimal subinterval selection approach for power system transient stability simulation

    DOE PAGES

    Kim, Soobae; Overbye, Thomas J.

    2015-10-21

    Power system transient stability analysis requires an appropriate integration time step to avoid numerical instability as well as to reduce computational demands. For fast system dynamics, which vary more rapidly than what the time step covers, a fraction of the time step, called a subinterval, is used. However, the optimal value of this subinterval is not easily determined because the analysis of the system dynamics might be required. This selection is usually made from engineering experiences, and perhaps trial and error. This paper proposes an optimal subinterval selection approach for power system transient stability analysis, which is based on modal analysis using a single machine infinite bus (SMIB) system. Fast system dynamics are identified with the modal analysis and the SMIB system is used focusing on fast local modes. An appropriate subinterval time step from the proposed approach can reduce computational burden and achieve accurate simulation responses as well. As a result, the performance of the proposed method is demonstrated with the GSO 37-bus system.

  6. Optimal subinterval selection approach for power system transient stability simulation

    SciTech Connect

    Kim, Soobae; Overbye, Thomas J.

    2015-10-21

    Power system transient stability analysis requires an appropriate integration time step to avoid numerical instability as well as to reduce computational demands. For fast system dynamics, which vary more rapidly than what the time step covers, a fraction of the time step, called a subinterval, is used. However, the optimal value of this subinterval is not easily determined because the analysis of the system dynamics might be required. This selection is usually made from engineering experiences, and perhaps trial and error. This paper proposes an optimal subinterval selection approach for power system transient stability analysis, which is based on modal analysis using a single machine infinite bus (SMIB) system. Fast system dynamics are identified with the modal analysis and the SMIB system is used focusing on fast local modes. An appropriate subinterval time step from the proposed approach can reduce computational burden and achieve accurate simulation responses as well. As a result, the performance of the proposed method is demonstrated with the GSO 37-bus system.

  7. A Statistical Approach to Optimizing Concrete Mixture Design

    PubMed Central

    Alghamdi, Saeid A.

    2014-01-01

    A step-by-step statistical approach is proposed to obtain optimum proportioning of concrete mixtures using the data obtained through a statistically planned experimental program. The utility of the proposed approach for optimizing the design of concrete mixture is illustrated considering a typical case in which trial mixtures were considered according to a full factorial experiment design involving three factors and their three levels (3^3). A total of 27 concrete mixtures with three replicates (81 specimens) were considered by varying the levels of key factors affecting compressive strength of concrete, namely, water/cementitious materials ratio (0.38, 0.43, and 0.48), cementitious materials content (350, 375, and 400 kg/m^3), and fine/total aggregate ratio (0.35, 0.40, and 0.45). The experimental data were utilized to carry out analysis of variance (ANOVA) and to develop a polynomial regression model for compressive strength in terms of the three design factors considered in this study. The developed statistical model was used to show how optimization of concrete mixtures can be carried out with different possible options. PMID:24688405
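
    The regression step can be reproduced in outline with ordinary least squares: build a design matrix with linear and pure quadratic terms in the three factors and fit the coefficients. The responses below are synthetic stand-ins for the 81 measured specimens.

```python
import numpy as np

# Regression sketch mirroring the paper's model: fit compressive strength
# as a quadratic polynomial in w/cm ratio, cementitious content, and
# fine/total aggregate ratio (synthetic responses, illustrative only).
rng = np.random.default_rng(7)
levels = [(0.38, 0.43, 0.48), (350, 375, 400), (0.35, 0.40, 0.45)]
X = np.array([(w, c, f) for w in levels[0] for c in levels[1] for f in levels[2]])
y = (90 - 80 * X[:, 0] + 0.02 * X[:, 1] + 10 * X[:, 2] - 30 * X[:, 0] ** 2
     + rng.normal(0, 1, len(X)))                   # synthetic strength, MPa

# design matrix: intercept, linear terms, pure quadratic terms
D = np.column_stack([np.ones(len(X)), X, X ** 2])
beta, *_ = np.linalg.lstsq(D, y, rcond=None)
print("coefficients:", np.round(beta, 3))

w_grid = np.linspace(0.38, 0.48, 5)                # predict strength vs w/cm
pred = (beta[0] + beta[1] * w_grid + beta[2] * 375 + beta[3] * 0.40
        + beta[4] * w_grid ** 2 + beta[5] * 375 ** 2 + beta[6] * 0.40 ** 2)
print("predicted strength vs w/cm:", np.round(pred, 1))
```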

  8. An Information-Centric Approach to Autonomous Trajectory Planning Utilizing Optimal Control Techniques

    DTIC Science & Technology

    2009-09-01

    AN INFORMATION-CENTRIC APPROACH TO AUTONOMOUS TRAJECTORY PLANNING UTILIZING OPTIMAL CONTROL TECHNIQUES, by Michael A. Hurni, September 2009 (dissertation). ... then finding a time-optimal time scaling for the path subject to the actuator limits. The direct approach searches for the trajectory directly...

  9. An analysis of the optimal multiobjective inventory clustering decision with small quantity and great variety inventory by applying a DPSO.

    PubMed

    Wang, Shen-Tsu; Li, Meng-Hua

    2014-01-01

    When an enterprise has thousands of varieties in its inventory, the use of a single management method is not a feasible approach. A better way to manage this problem is to categorise inventory items into several clusters according to inventory decisions and to use different management methods for different clusters. The present study applies DPSO (dynamic particle swarm optimisation) to the problem of clustering inventory items. Without requiring prior inventory knowledge, inventory items are automatically clustered into a near-optimal number of clusters. The obtained clustering results should satisfy the inventory objective equation, which consists of different objectives such as total cost, backorder rate, demand relevance, and inventory turnover rate. This study integrates the above four objectives into a multiobjective equation and inputs the actual inventory items of the enterprise into DPSO. In comparison with other clustering methods, the proposed method can consider different objectives and obtain an overall better solution, with better convergence results and inventory decisions.

  10. Correction of linear-array lidar intensity data using an optimal beam shaping approach

    NASA Astrophysics Data System (ADS)

    Xu, Fan; Wang, Yuanqing; Yang, Xingyu; Zhang, Bingqing; Li, Fenfang

    2016-08-01

    The linear-array lidar has recently been developed and applied for its advantages of vertically non-scanning operation, a large field of view, high sensitivity, and high precision. The beam shaper is the key component of linear-array detection. However, traditional beam shaping approaches can hardly satisfy the requirement of obtaining unbiased and complete backscattered intensity data: the required beam distribution should be roughly oblate U-shaped rather than Gaussian or uniform. Thus, an optimal beam shaping approach is proposed in this paper. By placing a pair of conical lenses and a cylindrical lens behind the beam expander, the expanded Gaussian beam is shaped to a line-shaped beam whose intensity distribution is more consistent with the required distribution. To provide a better fit to the requirement, an off-axis method is adopted. The design of the optimal beam shaping module is explained mathematically, and an experimental verification of the module performance is also presented. The experimental results indicate that the optimal beam shaping approach can effectively correct the intensity image and provide a ~30% gain in detection area over the traditional approach, thus improving the imaging quality of linear-array lidar.

  11. A simple approach to metal hydride alloy optimization

    NASA Technical Reports Server (NTRS)

    Lawson, D. D.; Miller, C.; Landel, R. F.

    1976-01-01

    Certain metals and related alloys can combine with hydrogen in a reversible fashion, so that on being heated they release a portion of the gas. Such materials may find application in the large-scale storage of hydrogen. Metals and alloys that show a high dissociation pressure at low temperatures and a low endothermic heat of dissociation, and are therefore desirable for hydrogen storage, give values of the Hildebrand-Scott solubility parameter between 100 and 118 Hildebrands (Ref. 1), close to that of dissociated hydrogen. All of the less practical storage systems give much lower values of the solubility parameter. By using the Hildebrand solubility parameter as a criterion and applying the mixing rule to combinations of known alloys and solid solutions, correlations are made to optimize alloy compositions and maximize hydrogen storage capacity.
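
    A minimal sketch of the screening described above, assuming a linear volume-fraction mixing rule and hypothetical pure-component solubility parameters:

      # Keep A-B compositions whose mixed solubility parameter falls in the
      # 100-118 Hildebrand window cited above (pure-component values are
      # hypothetical).
      def mixed_delta(delta_a, delta_b, phi_a):
          """Linear mixing rule for the solubility parameter."""
          return phi_a * delta_a + (1.0 - phi_a) * delta_b

      delta_a, delta_b = 95.0, 125.0
      for i in range(11):
          phi = i / 10.0
          d = mixed_delta(delta_a, delta_b, phi)
          if 100.0 <= d <= 118.0:
              print(f"phi_A = {phi:.1f}: delta = {d:.1f} Hildebrands (candidate)")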

  12. Discovery and Optimization of Materials Using Evolutionary Approaches.

    PubMed

    Le, Tu C; Winkler, David A

    2016-05-25

    Materials science is undergoing a revolution, generating valuable new materials such as flexible solar panels, biomaterials and printable tissues, new catalysts, polymers, and porous materials with unprecedented properties. However, the number of potentially accessible materials is immense. Artificial evolutionary methods such as genetic algorithms, which explore large, complex search spaces very efficiently, can be applied to the identification and optimization of novel materials more rapidly than by physical experiments alone. Machine learning models can augment experimental measurements of materials fitness to accelerate identification of useful and novel materials in vast materials composition or property spaces. This review discusses the problems of large materials spaces, the types of evolutionary algorithms employed to identify or optimize materials, and how materials can be represented mathematically as genomes; it describes fitness landscapes and mutation operators commonly employed in materials evolution, and provides a comprehensive summary of published research on the use of evolutionary methods to generate new catalysts, phosphors, and a range of other materials. The review identifies the potential for evolutionary methods to revolutionize a wide range of manufacturing, medical, and materials-based industries.

  13. New Approaches to HSCT Multidisciplinary Design and Optimization

    NASA Technical Reports Server (NTRS)

    Schrage, D. P.; Craig, J. I.; Fulton, R. E.; Mistree, F.

    1996-01-01

    The successful development of a capable and economically viable high speed civil transport (HSCT) is perhaps one of the most challenging tasks in aeronautics for the next two decades. At its heart it is fundamentally the design of a complex engineered system that has significant societal, environmental and political impacts. As such it presents a formidable challenge to all areas of aeronautics, and it is therefore a particularly appropriate subject for research in multidisciplinary design and optimization (MDO). In fact, it is starkly clear that without the availability of powerful and versatile multidisciplinary design, analysis and optimization methods, the design, construction and operation of an HSCT simply cannot be achieved. The present research project is focused on the development and evaluation of MDO methods that, while broader and more general in scope, are particularly appropriate to the HSCT design problem. The research aims not only to develop the basic methods but also to apply them to relevant examples from the NASA HSCT R&D effort. The research involves a three-year effort aimed first at the description of the HSCT MDO problem, next at the formulation of the problem, and finally at the solution of a significant portion of the problem.

  14. An Airway Network Flow Assignment Approach Based on an Efficient Multiobjective Optimization Framework

    PubMed Central

    Guan, Xiangmin; Zhang, Xuejun; Zhu, Yanbo; Sun, Dengfeng; Lei, Jiaxing

    2015-01-01

    To reduce airspace congestion and flight delays simultaneously, this paper formulates the airway network flow assignment (ANFA) problem as a multiobjective optimization model and presents a new multiobjective optimization framework to solve it. First, an effective multi-island parallel evolution algorithm with multiple evolving populations is employed to improve the optimization capability. Second, the nondominated sorting genetic algorithm II is applied to each population. In addition, a cooperative coevolution algorithm is adapted to divide the ANFA problem into several low-dimensional biobjective optimization problems that are easier to deal with. Finally, in order to maintain the diversity of solutions and to avoid prematurity, a dynamic adjustment operator based on solution congestion degree is specifically designed for the ANFA problem. Simulation results using real traffic data from the China air route network and daily flight plans demonstrate that the proposed approach improves solution quality effectively, showing superiority over existing approaches such as the multiobjective genetic algorithm, the well-known multiobjective evolutionary algorithm based on decomposition, and a cooperative coevolution multiobjective algorithm, as well as other parallel evolution algorithms with different migration topologies. PMID:26180840
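
    The core of the NSGA-II step applied to each population is nondominated sorting; a self-contained sketch on illustrative objective tuples (to be minimized):

      def fast_nondominated_sort(objs):
          """Return fronts (lists of indices) for objective tuples."""
          n = len(objs)
          dominates = lambda a, b: all(x <= y for x, y in zip(a, b)) and a != b
          S = [[] for _ in range(n)]      # solutions dominated by i
          counts = [0] * n                # number of solutions dominating i
          fronts = [[]]
          for i in range(n):
              for j in range(n):
                  if dominates(objs[i], objs[j]):
                      S[i].append(j)
                  elif dominates(objs[j], objs[i]):
                      counts[i] += 1
              if counts[i] == 0:
                  fronts[0].append(i)
          while fronts[-1]:
              nxt = []
              for i in fronts[-1]:
                  for j in S[i]:
                      counts[j] -= 1
                      if counts[j] == 0:
                          nxt.append(j)
              fronts.append(nxt)
          return fronts[:-1]

      # e.g. (congestion, delay) for four candidate flow assignments
      print(fast_nondominated_sort([(1, 5), (2, 2), (3, 1), (4, 4)]))  # [[0, 1, 2], [3]]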

  15. Optimization of glycerol fed-batch fermentation in different reactor states: a variable kinetic parameter approach.

    PubMed

    Xie, Dongming; Liu, Dehua; Zhu, Haoli; Zhang, Jianan

    2002-05-01

    To optimize fed-batch glycerol fermentation in different reactor states, typical bioreactors including a 500-mL shaking flask, 600-mL and 15-L airlift loop reactors, and a 5-L stirred vessel were investigated. It was found that by re-estimating the values of only two variable kinetic parameters associated with physical transport phenomena in a reactor, the macrokinetic model of glycerol fermentation proposed in previous work could describe the batch processes in different reactor states well. This variable kinetic parameter (VKP) approach was further applied to model-based optimization of discrete-pulse feed (DPF) strategies of both glucose and corn steep slurry for glycerol fed-batch fermentation. The experimental results showed that, compared with the feed strategies determined by limited experimental optimization in previous work, the DPF strategies with adjusted VKPs improved glycerol productivity by at least 27% in the scaled-down and scaled-up reactor states. The proposed approach appears promising for further modeling and optimization of glycerol fermentation and similar bioprocesses at larger scales.

  16. Optimization methods of the net emission computation applied to cylindrical sodium vapor plasma

    SciTech Connect

    Hadj Salah, S.; Hajji, S.; Ben Hamida, M. B.; Charrada, K.

    2015-01-15

    An optimization method based on a physical analysis of the temperature profile and the different terms in the radiative transfer equation is developed to reduce the computation time of the net emission. The method has been applied to a cylindrical discharge in sodium vapor. Numerical results show a relative error in spectral flux density values lower than 5% with respect to an exact solution, while the computation time is about 10 orders of magnitude smaller. This method is followed by a spectral method based on a rearrangement of the line profiles. Results are shown for a Lorentzian profile and demonstrate a relative error lower than 10% with respect to the reference method and a gain in computation time of about 20 orders of magnitude.

  17. Model reduction for chemical kinetics: An optimization approach

    SciTech Connect

    Petzold, L.; Zhu, W.

    1999-04-01

    The kinetics of a detailed chemically reacting system can potentially be very complex. Although the chemist may be interested in only a few species, the reaction model almost always involves a much larger number of species. Some of those species are radicals, which are very reactive and can be important intermediaries in the reaction scheme. A large number of elementary reactions can occur among the species; some of these reactions are fast and some are slow. The aim of simplified kinetics modeling is to derive the simplest reaction system that retains the essential features of the full system. An optimization-based method for reducing the number of species and reactions in a chemical kinetics model is described. Numerical results for several reaction mechanisms illustrate the potential of this approach.

  18. Approaches of Russian oil companies to optimal capital structure

    NASA Astrophysics Data System (ADS)

    Ishuk, T.; Ulyanova, O.; Savchitz, V.

    2015-11-01

    Oil companies play a vital role in the Russian economy. Demand for hydrocarbon products will increase over the coming decades along with population growth and social needs. A shift away from the raw-material orientation of the Russian economy and a transition to an innovation-driven development path do not preclude the development of the oil industry in the future. Moreover, society expects this sector, through neo-industrialization, to put the Russian economy on the road of innovative development. Achieving this requires both government action and effective capital management by companies. To form an optimal capital structure, it is necessary to minimize the cost of capital, reduce specific risks within existing limits, and maximize profitability. The capital structure analysis of Russian and foreign oil companies shows different approaches, reasons, and conditions and, consequently, different relationships between equity and debt capital and their costs, which demands an effective capital management strategy.

  19. Optimizing Dendritic Cell-Based Approaches for Cancer Immunotherapy

    PubMed Central

    Datta, Jashodeep; Terhune, Julia H.; Lowenfeld, Lea; Cintolo, Jessica A.; Xu, Shuwen; Roses, Robert E.; Czerniecki, Brian J.

    2014-01-01

    Dendritic cells (DC) are professional antigen-presenting cells uniquely suited for cancer immunotherapy. They induce primary immune responses, potentiate the effector functions of previously primed T-lymphocytes, and orchestrate communication between innate and adaptive immunity. The remarkable diversity of cytokine activation regimens, DC maturation states, and antigen-loading strategies employed in current DC-based vaccine design reflect an evolving, but incomplete, understanding of optimal DC immunobiology. In the clinical realm, existing DC-based cancer immunotherapy efforts have yielded encouraging but inconsistent results. Despite recent U.S. Food and Drug Administration (FDA) approval of DC-based sipuleucel-T for metastatic castration-resistant prostate cancer, clinically effective DC immunotherapy as monotherapy for a majority of tumors remains a distant goal. Recent work has identified strategies that may allow for more potent “next-generation” DC vaccines. Additionally, multimodality approaches incorporating DC-based immunotherapy may improve clinical outcomes. PMID:25506283

  20. Frost Formation: Optimizing solutions under a finite volume approach

    NASA Astrophysics Data System (ADS)

    Bartrons, E.; Perez-Segarra, C. D.; Oliet, C.

    2016-09-01

    A three-dimensional transient formulation of the frost formation process is developed by means of a finite volume approach. Emphasis is put on the frost surface boundary condition as well as the wide range of empirical correlations for the thermophysical and transport properties of frost. A study of the numerical solution is made, establishing the parameters that ensure grid independence. Attention is given to the algorithm, the discretised equations, and the code optimization through dynamic relaxation techniques. A critical analysis of four cases is carried out by comparing the solutions of several empirical models against experiments. As a result, the performance of these models is discussed and the most suitable ones are proposed.

  1. Design optimization for cost and quality: The robust design approach

    NASA Technical Reports Server (NTRS)

    Unal, Resit

    1990-01-01

    Designing reliable, low cost, and operable space systems has become the key to future space operations. Designing high quality space systems at low cost is an economic and technological challenge to the designer. A systematic and efficient way to meet this challenge is a new method of design optimization for performance, quality, and cost, called Robust Design. Robust Design is an approach for design optimization. It consists of: making system performance insensitive to material and subsystem variation, thus allowing the use of less costly materials and components; making designs less sensitive to the variations in the operating environment, thus improving reliability and reducing operating costs; and using a new structured development process so that engineering time is used most productively. The objective in Robust Design is to select the best combination of controllable design parameters so that the system is most robust to uncontrollable noise factors. The robust design methodology uses a mathematical tool called an orthogonal array, from design of experiments theory, to study a large number of decision variables with a significantly small number of experiments. Robust Design also uses a statistical measure of performance, called a signal-to-noise ratio, from electrical control theory, to evaluate the level of performance and the effect of noise factors. The purpose is to investigate the Robust Design methodology for improving quality and cost, demonstrate its application by the use of an example, and suggest its use as an integral part of the space system design process.
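
    A minimal sketch of the signal-to-noise computation used to score control-factor settings (the replicate measurements are placeholders):

      import numpy as np

      # Standard Taguchi S/N ratios; y holds replicate measurements of one
      # experimental run under varying noise conditions.
      def sn_larger_is_better(y):
          y = np.asarray(y, dtype=float)
          return -10.0 * np.log10(np.mean(1.0 / y**2))

      def sn_smaller_is_better(y):
          y = np.asarray(y, dtype=float)
          return -10.0 * np.log10(np.mean(y**2))

      # The setting with the highest S/N is the most robust one.
      print(f"{sn_larger_is_better([62.1, 59.8, 63.0]):.2f} dB")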

  2. A Multiscale Approach to Optimal In Situ Bioremediation Design

    NASA Astrophysics Data System (ADS)

    Minsker, B. S.; Liu, Y.

    2001-12-01

    The use of optimization methods for in situ bioremediation design is quite challenging because the dynamics of bioremediation require that fine spatial and temporal scales be used in simulation, which substantially increases computational effort for optimization. In this paper, we present a multiscale approach that can be used to solve substantially larger-scale problems than previously possible. The multiscale method starts from a coarse mesh and proceeds to a finer mesh when it converges. While it is on a finer mesh, it switches back to a coarser mesh to calculate derivatives. The derivatives are then interpolated back to the finer mesh to approximate the derivatives on the finer mesh. To demonstrate the method, a four-level case study with 6,500 state variables is solved in less than 9 days, compared with nearly one year that would have been required using the original single-scale model. These findings illustrate that the multiscale method will allow solution of substantially larger-scale problems than previously possible, particularly since the method also enables easy parallelization of the model in the future.

  3. Using tailored methodical approaches to achieve optimal science outcomes

    NASA Astrophysics Data System (ADS)

    Wingate, Lory M.

    2016-08-01

    The science community is actively engaged in the research, development, and construction of instrumentation projects that it anticipates will lead to new science discoveries. There appears to be a very strong link between the quality of the activities used to complete these projects and having a fully functioning science instrument that will facilitate these investigations.[2] The combination of internationally recognized standards within the disciplines of project management (PM) and systems engineering (SE) has been demonstrated to lead to positive net effects and optimal project outcomes. Conversely, unstructured, poorly managed projects will lead to unpredictable, suboptimal outcomes, ultimately affecting the quality of the science that can be done with the new instruments. The proposed application of these two methodical approaches, implemented as a tailorable suite of processes, is presented in this paper. Project management (PM) is accepted worldwide as an effective methodology for controlling project cost, schedule, and scope. Systems engineering (SE) is an accepted method for ensuring that the outcomes of a project match the intent of the stakeholders or, if they diverge, that the changes are understood, captured, and controlled. An appropriate application, or tailoring, of these disciplines can be the foundation upon which success in projects that support science can be optimized.

  4. Genetic algorithm applied to the optimization of quantum cascade lasers with second harmonic generation

    SciTech Connect

    Gajić, A.; Radovanović, J.; Milanović, V.; Indjin, D.; Ikonić, Z.

    2014-02-07

    A computational model for the optimization of the second order optical nonlinearities in GaInAs/AlInAs quantum cascade laser structures is presented. The set of structure parameters that lead to improved device performance was obtained through the implementation of the Genetic Algorithm. In the following step, the linear and second harmonic generation power were calculated by self-consistently solving the system of rate equations for carriers and photons. This rate equation system included both stimulated and simultaneous double photon absorption processes that occur between the levels relevant for second harmonic generation, and material-dependent effective mass, as well as band nonparabolicity, were taken into account. The developed method is general, in the sense that it can be applied to any higher order effect, which requires the photon density equation to be included. Specifically, we have addressed the optimization of the active region of a double quantum well In0.53Ga0.47As/Al0.48In0.52As structure and presented its output characteristics.

  5. Optimal administrative scale for planning public services: a social cost model applied to Flemish hospital care.

    PubMed

    Blank, Jos L T; van Hulst, Bart

    2015-01-01

    In choosing the scale of public services, such as hospitals, both economic and public administrative considerations play important roles. The scale and the corresponding spatial distribution of public institutions have consequences for social costs, defined as the institutions' operating costs and the users' travel costs (which include the money and time costs). Insight into the relationship between scale and spatial distribution and social costs provides a practical guide for the best possible administrative planning level. This article presents a purely economic model that is suitable for deriving the optimal scale for public services. The model also reveals the corresponding optimal administrative planning level from an economic perspective. We applied this model to hospital care in Flanders for three different types of care. For its application, we examined the social costs of hospital services at different levels of administrative planning. The outcomes show that the social costs of rehabilitation in Flanders with planning at the urban level (38 areas) are 11% higher than those at the provincial level (five provinces). At the regional level (18 areas), the social costs of rehabilitation are virtually equal to those at the provincial level. For radiotherapy, there is a difference of 88% in the social costs between the urban and the provincial level. For general care, there are hardly any cost differences between the three administrative levels. Thus, purely from the perspective of social costs, rehabilitation should preferably be planned at the regional level, general services at the urban level and radiotherapy at the provincial level.

  6. Optimization of supercritical carbon dioxide extraction of silkworm pupal oil applying the response surface methodology.

    PubMed

    Wei, Zhao-Jun; Liao, Ai-Mei; Zhang, Hai-Xiang; Liu, Jian; Jiang, Shao-Tong

    2009-09-01

    Supercritical carbon dioxide (SC-CO2) extraction of oil from desilked silkworm pupae was performed. Response surface methodology (RSM) was applied to optimize the parameters of the SC-CO2 extraction. The effects of the independent variables, including pressure, temperature, CO2 flow rate, and extraction time, on the yield of oil were investigated. The statistical analysis showed that pressure, extraction time, and the quadratics of pressure, extraction time, and CO2 flow rate, as well as the interactions between pressure and temperature and between temperature and flow rate, had significant effects on oil yield. The optimal extraction condition for oil yield within the experimental range of the variables was 324.5 bar, 39.6 °C, 131.2 min, and 19.3 L/h. At this condition, the yield of oil was predicted to be 29.73%. The obtained silkworm pupal oil contained more than 68% total unsaturated fatty acids, and α-linolenic acid (ALA) accounted for 27.99% of the total oil.
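
    A hedged sketch of the RSM step: once a quadratic response surface has been fitted, the optimum inside the experimental region can be located numerically (the coefficients, bounds, and starting point below are placeholders, not the study's fitted model):

      import numpy as np
      from scipy.optimize import minimize

      # Placeholder quadratic surface yhat(x), x = [pressure (bar),
      # temperature (C), flow (L/h), time (min)].
      beta0 = 20.0
      b = np.array([0.05, 0.30, 0.04, 0.10])
      Q = -0.01 * np.eye(4)               # concave, so a maximum exists

      def yhat(x):
          return beta0 + b @ x + x @ Q @ x

      bounds = [(250, 350), (35, 55), (10, 25), (60, 150)]
      res = minimize(lambda x: -yhat(x), x0=[300, 45, 18, 100], bounds=bounds)
      print("predicted optimum:", np.round(res.x, 1), "yield:", round(-res.fun, 2))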

  7. Applying the Cultural Formulation Approach to Career Counseling with Latinas/os

    ERIC Educational Resources Information Center

    Flores, Lisa Y.; Ramos, Karina; Kanagui, Marlen

    2010-01-01

    In this article, the authors present two hypothetical cases, one of a Mexican American female college student and one of a Mexican immigrant adult male, and apply a culturally sensitive approach to career assessment and career counseling with each of these clients. Drawing from Leong, Hardin, and Gupta's cultural formulation approach (CFA) to…

  8. A hybrid approach using chaotic dynamics and global search algorithms for combinatorial optimization problems

    NASA Astrophysics Data System (ADS)

    Igeta, Hideki; Hasegawa, Mikio

    Chaotic dynamics have been effectively applied to improve various heuristic algorithms for combinatorial optimization problems in many studies. Currently, the most widely used chaotic optimization scheme drives heuristic solution-search algorithms applicable to large-scale problems by chaotic neurodynamics, including the tabu effect of the tabu search. Alternatively, meta-heuristic algorithms are used for combinatorial optimization by combining a neighboring-solution search algorithm, such as a tabu, gradient, or other search method, with a global search algorithm, such as a genetic algorithm (GA) or ant colony optimization (ACO). In these hybrid approaches, ACO has effectively optimized the solutions of many benchmark problems in the quadratic assignment problem library. In this paper, we propose a novel hybrid method that combines an effective chaotic search algorithm, which performs better than the tabu search, with global search algorithms such as ACO and GA. Our results show that the proposed chaotic hybrid algorithm outperforms both the conventional chaotic search and conventional hybrid algorithms. In addition, we show that the chaotic search algorithm combined with ACO performs better than when combined with GA.

  9. A Taguchi approach on optimal process control parameters for HDPE pipe extrusion process

    NASA Astrophysics Data System (ADS)

    Sharma, G. V. S. S.; Rao, R. Umamaheswara; Rao, P. Srinivasa

    2016-12-01

    High-density polyethylene (HDPE) pipes find versatile applicability for transporting water, sewage, and slurry from one place to another, and hence undergo tremendous pressure from the fluid carried. The present work entails the optimization of the withstanding pressure of HDPE pipes using the Taguchi technique. The traditional heuristic methodology stresses a trial-and-error approach and relies heavily upon the accumulated experience of the process engineers for determining the optimal process control parameters, which results in less-than-optimal settings. Hence, there arises a need to determine optimal process control parameters for the pipe extrusion process that can ensure robust pipe quality and process reliability. In the proposed optimization strategy, designed experiments (DoE) are conducted in which different control parameter combinations are analyzed by considering multiple setting levels of each control parameter. The concept of the signal-to-noise (S/N) ratio is applied, and the optimum values of the process control parameters are obtained as a pushing zone temperature of 166 °C, a dimmer speed of 8 rpm, and a die head temperature of 192 °C. A confirmation experimental run was also conducted to verify the analysis; its values agreed with the main experimental findings, and the withstanding pressure showed a significant improvement from 0.60 to 1.004 MPa.

  10. Heuristic Optimization Approach to Selecting a Transport Connection in City Public Transport

    NASA Astrophysics Data System (ADS)

    Kul'ka, Jozef; Mantič, Martin; Kopas, Melichar; Faltinová, Eva; Kachman, Daniel

    2017-02-01

    The article presents a heuristic optimization approach for selecting a suitable transport connection within a city public transport system. The methodology was applied to part of the public transport network in Košice, the second-largest city in the Slovak Republic, whose public transport forms a complex system of three transport modes: bus, tram, and trolley-bus. The solution focuses on examining the individual transport services and their interconnection at relevant interchange points.

  11. A new multi criteria classification approach in a multi agent system applied to SEEG analysis.

    PubMed

    Kinié, A; Ndiaye, M; Montois, J J; Jacquelet, Y

    2007-01-01

    This work focuses on the study of the organization of SEEG signals during epileptic seizures using a multi-agent system approach. The approach is based on cooperative mechanisms of self-organization at the micro level and the emergence of a global function at the macro level. To evaluate this approach, we propose a distributed collaborative approach for the classification of the signals of interest. This new multi-criteria classification method is able to provide a relevant organization of brain area structures and to bring out elements of epileptogenic networks. The method is compared with another classification approach, a fuzzy classification, and gives better results when applied to SEEG signals.

  12. Optimization of floodplain monitoring sensors through an entropy approach

    NASA Astrophysics Data System (ADS)

    Ridolfi, E.; Yan, K.; Alfonso, L.; Di Baldassarre, G.; Napolitano, F.; Russo, F.; Bates, P. D.

    2012-04-01

    To support the decision-making processes of flood risk management and long-term floodplain planning, a significant issue is the availability of data to build appropriate and reliable models. Often the data required for model building, calibration, and validation are not sufficient or available. A unique opportunity is offered nowadays by globally available data, which can be freely downloaded from the internet. However, there remains the question of the real potential of those global remote sensing data, characterized by different accuracies, for global inundation monitoring, and of how to integrate them with inundation models. In order to monitor a reach of the River Dee (UK), a network of cheap wireless sensors (GridStix) was deployed both in the channel and in the floodplain. These sensors measure the water depth, supplying the input data for flood mapping. Besides their accuracy and reliability, their location is a significant issue: the network should provide as much information as possible with as little redundancy as possible. To update the layout, the initial set of six sensors was expanded to create a redundant network over the area, and an entropy approach was used to choose the most informative and least redundant sensors. First, a simple raster-based inundation model (LISFLOOD-FP) is used to generate a synthetic GridStix data set of water stages; the Digital Elevation Model (DEM) used for hydraulic model building is the globally and freely available SRTM DEM. Second, the information content of each sensor is compared by evaluating its marginal entropy, and sensors with a low marginal entropy are excluded because of their low capability to provide information. The number of sensors is then optimized by considering a Multi-Objective Optimization Problem (MOOP) with two objectives, namely maximization of the joint entropy (a measure of the information content) and minimization of redundancy among the selected sensors.
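
    A minimal sketch of the marginal-entropy screening step on synthetic water-stage series (the data and bin count are placeholders):

      import numpy as np

      # Estimate each sensor's marginal entropy from a histogram of its
      # water-stage series and rank sensors by information content.
      def marginal_entropy(x, bins=10):
          counts, _ = np.histogram(x, bins=bins)
          p = counts[counts > 0] / counts.sum()
          return -np.sum(p * np.log2(p))  # bits

      rng = np.random.default_rng(1)
      stages = rng.normal(size=(12, 500)) # 12 candidate sensors, 500 steps
      scores = [marginal_entropy(s) for s in stages]
      print("most informative sensors first:", np.argsort(scores)[::-1])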

  13. A new optimization approach for the calibration of an ultrasound probe using a 3D optical localizer.

    PubMed

    Dardenne, G; Cano, J D Gil; Hamitouche, C; Stindel, E; Roux, C

    2007-01-01

    This paper describes a fast procedure for the calibration of an ultrasound (US) probe using a 3D optical localizer. The calibration step makes it possible to obtain the 3D position of any point located on the 2D US image. To carry out this procedure correctly, a phantom of known geometric properties is probed and those geometries are found in the US images. A segmentation step is applied in order to automatically obtain the needed information from the US images, and a new optimization approach is then performed to find the optimal calibration parameters.

  14. Dynamic optimization of ISR sensors using a risk-based reward function applied to ground and space surveillance scenarios

    NASA Astrophysics Data System (ADS)

    DeSena, J. T.; Martin, S. R.; Clarke, J. C.; Dutrow, D. A.; Newman, A. J.

    2012-06-01

    As the number and diversity of sensing assets available for intelligence, surveillance and reconnaissance (ISR) operations continues to expand, the limited ability of human operators to effectively manage, control and exploit the ISR ensemble is exceeded, leading to reduced operational effectiveness. Automated support, both in the processing of voluminous sensor data and in sensor asset control, can relieve the burden on human operators and support the operation of larger ISR ensembles. In dynamic environments it is essential to react quickly to current information to avoid stale, sub-optimal plans. Our approach is to apply the principles of feedback control to ISR operations, "closing the loop" from the sensor collections through automated processing to ISR asset control. Previous work by the authors demonstrated non-myopic multiple-platform trajectory control using a receding horizon controller in a closed feedback loop with a multiple hypothesis tracker, applied to multi-target search-and-track simulation scenarios in the ground and space domains. This paper presents extensions, in both size and scope, of the previous work, demonstrating closed-loop control, involving both platform routing and sensor pointing, of a multi-sensor, multi-platform ISR ensemble tasked with providing situational awareness and performing search, track, and classification of multiple moving ground targets in irregular warfare scenarios. The closed-loop ISR system is fully realized using distributed, asynchronous components that communicate over a network. It has been exercised via a networked simulation test bed against a scenario in the Afghanistan theater implemented using high-fidelity terrain and imagery data. In addition, the system has been applied to space surveillance scenarios requiring tracking of space objects, where current deliberative, manually intensive processes for managing sensor assets are insufficiently responsive. Simulation experiment results are presented.

  15. Approach to optimal care at end of life.

    PubMed

    Nichols, K J

    2001-10-01

    At no other time in any patient's life is the team approach to care more important than at the end of life. The demands and challenges of end-of-life care (ELC) tax all physicians at some point, and no other profession is charged with this ultimate responsibility. No discipline in medicine is immune to the issues of end-of-life care except perhaps, ironically, pathology. This presentation addresses the issues, options, and challenges of providing optimal care at the end of life. It looks at the principles of ELC, barriers to good ELC, and what patients and families expect from ELC. Barriers to ELC include financial restrictions, inadequate caregivers, community support, legal and legislative issues, training needs, coordination of care, hospice care, and transitions for patients and families. The legal aspects of physician-assisted suicide are presented, as well as the approach of the American Osteopathic Association to ensuring better education for physicians in the principles of ELC.

  16. A jazz-based approach for optimal setting of pressure reducing valves in water distribution networks

    NASA Astrophysics Data System (ADS)

    De Paola, Francesco; Galdiero, Enzo; Giugni, Maurizio

    2016-05-01

    This study presents a model for valve setting in water distribution networks (WDNs), with the aim of reducing the level of leakage. The approach is based on the harmony search (HS) optimization algorithm. The HS mimics a jazz improvisation process able to find the best solutions, in this case corresponding to valve settings in a WDN. The model also interfaces with the improved version of a popular hydraulic simulator, EPANET 2.0, to check the hydraulic constraints and to evaluate the performances of the solutions. Penalties are introduced in the objective function in case of violation of the hydraulic constraints. The model is applied to two case studies, and the obtained results in terms of pressure reductions are comparable with those of competitive metaheuristic algorithms (e.g. genetic algorithms). The results demonstrate the suitability of the HS algorithm for water network management and optimization.
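
    A minimal harmony search sketch for a generic minimization problem; in the valve-setting application the decision vector would hold the PRV settings and the objective would call the hydraulic solver plus the penalty terms (the hms, hmcr, and par values are typical defaults, not the paper's):

      import random

      def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, iters=2000):
          rand_x = lambda: [random.uniform(lo, hi) for lo, hi in bounds]
          memory = sorted((rand_x() for _ in range(hms)), key=f)
          for _ in range(iters):
              x = []
              for d, (lo, hi) in enumerate(bounds):
                  if random.random() < hmcr:          # draw from memory
                      v = random.choice(memory)[d]
                      if random.random() < par:       # pitch adjustment
                          v += random.uniform(-1, 1) * 0.05 * (hi - lo)
                          v = min(hi, max(lo, v))
                  else:                               # random improvisation
                      v = random.uniform(lo, hi)
                  x.append(v)
              if f(x) < f(memory[-1]):                # replace worst harmony
                  memory[-1] = x
                  memory.sort(key=f)
          return memory[0]

      best = harmony_search(lambda x: sum(v * v for v in x), [(-5, 5)] * 3)
      print([round(v, 3) for v in best])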

  17. A modular approach to large-scale design optimization of aerospace systems

    NASA Astrophysics Data System (ADS)

    Hwang, John T.

    Gradient-based optimization and the adjoint method form a synergistic combination that enables the efficient solution of large-scale optimization problems. Though the gradient-based approach struggles with non-smooth or multi-modal problems, the capability to efficiently optimize up to tens of thousands of design variables provides a valuable design tool for exploring complex tradeoffs and finding unintuitive designs. However, the widespread adoption of gradient-based optimization is limited by the implementation challenges of computing derivatives efficiently and accurately, particularly in multidisciplinary and shape design problems. This thesis addresses these difficulties in two ways. First, to deal with the heterogeneity and integration challenges of multidisciplinary problems, this thesis presents a computational modeling framework that solves multidisciplinary systems and computes their derivatives in a semi-automated fashion. This framework is built upon a new mathematical formulation developed in this thesis that expresses any computational model as a system of algebraic equations and unifies all methods for computing derivatives using a single equation. The framework is applied to two engineering problems: the optimization of a nanosatellite with 7 disciplines and over 25,000 design variables; and simultaneous allocation and mission optimization for commercial aircraft involving 330 design variables, 12 of which are integer variables handled using the branch-and-bound method. In both cases, the framework makes large-scale optimization possible by reducing the implementation effort and code complexity. The second half of this thesis presents a differentiable parametrization of aircraft geometries and structures for high-fidelity shape optimization. Existing geometry parametrizations are not differentiable, or they are limited in the types of shape changes they allow. This is addressed by a novel parametrization that smoothly interpolates aircraft geometries.

  18. Reconstructing Networks from Profit Sequences in Evolutionary Games via a Multiobjective Optimization Approach with Lasso Initialization

    NASA Astrophysics Data System (ADS)

    Wu, Kai; Liu, Jing; Wang, Shuai

    2016-11-01

    Evolutionary games (EG) model a common type of interaction in various complex, networked, natural and social systems. Given such a system with only profit sequences available, reconstructing the interacting structure of EG networks is fundamental to understanding and controlling their collective dynamics. Existing approaches to this problem, such as the lasso, a convex optimization method, need a user-defined constant to control the tradeoff between the natural sparsity of networks and the measurement error (the difference between observed and simulated data), and it is not easy to determine the key parameter values that maximize performance. In contrast, we first model the EG network reconstruction problem as a multiobjective optimization problem (MOP) and then develop a framework, termed MOEANet, that involves a multiobjective evolutionary algorithm (MOEA) followed by solution selection based on knee regions. We also design an effective lasso-based initialization operator for the MOEA. We apply the proposed method to reconstruct various types of synthetic and real-world networks, and the results show that our approach effectively avoids the parameter-selection problem and can reconstruct EG networks with high accuracy.

  19. Reconstructing Networks from Profit Sequences in Evolutionary Games via a Multiobjective Optimization Approach with Lasso Initialization

    PubMed Central

    Wu, Kai; Liu, Jing; Wang, Shuai

    2016-01-01

    Evolutionary games (EG) model a common type of interaction in various complex, networked, natural and social systems. Given such a system with only profit sequences available, reconstructing the interacting structure of EG networks is fundamental to understanding and controlling their collective dynamics. Existing approaches to this problem, such as the lasso, a convex optimization method, need a user-defined constant to control the tradeoff between the natural sparsity of networks and the measurement error (the difference between observed and simulated data), and it is not easy to determine the key parameter values that maximize performance. In contrast, we first model the EG network reconstruction problem as a multiobjective optimization problem (MOP) and then develop a framework, termed MOEANet, that involves a multiobjective evolutionary algorithm (MOEA) followed by solution selection based on knee regions. We also design an effective lasso-based initialization operator for the MOEA. We apply the proposed method to reconstruct various types of synthetic and real-world networks, and the results show that our approach effectively avoids the parameter-selection problem and can reconstruct EG networks with high accuracy. PMID:27886244
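
    A hedged sketch of the lasso step used for initialization: each node's profit sequence is regressed on the players' strategy sequences, and nonzero coefficients suggest links (the data and the alpha value are synthetic placeholders):

      import numpy as np
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(0)
      n_nodes, n_rounds = 20, 200
      strategies = rng.integers(0, 2, size=(n_rounds, n_nodes)).astype(float)

      # Synthetic ground truth: node i interacts with nodes 3, 7, and 11.
      true_row = np.zeros(n_nodes)
      true_row[[3, 7, 11]] = 1.0
      profit_i = strategies @ true_row + 0.05 * rng.normal(size=n_rounds)

      lasso = Lasso(alpha=0.05).fit(strategies, profit_i)
      print("estimated neighbors:", np.flatnonzero(np.abs(lasso.coef_) > 1e-3))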

  20. Eddy Currents applied to de-tumbling of space debris: feasibility analysis, design and optimization aspects

    NASA Astrophysics Data System (ADS)

    Ortiz Gómez, Natalia; Walker, Scott J. I.

    Existing studies on the evolution of the space debris population show that both mitigation measures and active debris removal methods are necessary to prevent the current population from growing. Active debris removal methods, which require contact with the target, face complications if the target is rotating at high speed; observed rotations go up to 50 deg/s, combined with precession and nutation motions. “Natural” rotational damping in upper stages has been observed for some debris objects. This phenomenon occurs due to the eddy currents induced by the Earth’s magnetic field in the predominantly conductive materials of these man-made rotating objects. The idea presented in this paper is to subject the satellite to an enhanced magnetic field in order to subdue it and damp its rotation, thus allowing for its subsequent de-orbiting phase. The proposed braking method has the advantage of avoiding any kind of mechanical contact with the target. A deployable structure with a magnetic coil at its end is used to induce the necessary braking torques on the target; this way, the induced magnetic field is created far away from the chaser’s main body, avoiding undesirable effects on its instruments. This paper focuses on the overall design of the system, and the parameters considered are the braking time, the power required, the mass of the deployable structure and the magnetic coil system, the size of the coil, the materials selection, and the distance to the target. The equations that link these variables together are presented; they leave several free variables, which makes it possible to approach the engineering design as an optimization problem. Given that only a few variables remain, no sophisticated numerical methods are called for, and a simple graphical approach can be used to display the optimum solutions. Some parameters are open to future refinement, as the optimization problem must be contemplated globally in

  1. Convergence behavior of multireference perturbation theory: Forced degeneracy and optimization partitioning applied to the beryllium atom

    NASA Astrophysics Data System (ADS)

    Finley, James P.; Chaudhuri, Rajat K.; Freed, Karl F.

    1996-07-01

    High-order multireference perturbation theory is applied to the ¹S states of the beryllium atom using a reference (model) space composed of the |1s²2s²⟩ and |1s²2p²⟩ configuration-state functions (CSF's), a system that is known to yield divergent expansions using Møller-Plesset and Epstein-Nesbet partitioning methods. Computations of the eigenvalues are made through 40th order using forced degeneracy (FD) partitioning and the recently introduced optimization (OPT) partitioning. The former forces the 2s and 2p orbitals to be degenerate in zeroth order, while the latter chooses optimal zeroth-order energies of the (few) most important states. Our methodology employs simple models for understanding and suggesting remedies for unsuitable choices of reference spaces and partitioning methods. By examining a two-state model composed of only the |1s²2p²⟩ and |1s²2s3s⟩ states of the beryllium atom, it is demonstrated that the full computation with 1323 CSF's can converge only if the zeroth-order energy of the |1s²2s3s⟩ Rydberg state from the orthogonal space lies below the zeroth-order energy of the |1s²2p²⟩ CSF from the reference space. Thus convergence in this case requires a zeroth-order spectral overlap between the orthogonal and reference spaces. The FD partitioning is not capable of generating this type of spectral overlap and thus yields a divergent expansion. However, the expansion is actually asymptotically convergent, with divergent behavior not displayed until the 11th order, because the |1s²2s3s⟩ Rydberg state is only weakly coupled with the |1s²2p²⟩ CSF and because these states are energetically well separated in zeroth order. The OPT partitioning chooses the correct zeroth-order energy ordering and thus yields a convergent expansion that is also very accurate in low orders compared to the exact solution within the basis.

  2. Optimizing neural networks for river flow forecasting - Evolutionary Computation methods versus the Levenberg-Marquardt approach

    NASA Astrophysics Data System (ADS)

    Piotrowski, Adam P.; Napiorkowski, Jarosław J.

    2011-09-01

    Evolutionary Computation-based algorithms. The Levenberg-Marquardt optimization must be considered the most efficient one due to its speed. Its drawback, possible sticking in a poor local optimum, can be overcome by applying a multi-start approach.

  3. Three-dimensional electrical impedance tomography: a topology optimization approach.

    PubMed

    Mello, Luís Augusto Motta; de Lima, Cícero Ribeiro; Amato, Marcelo Britto Passos; Lima, Raul Gonzalez; Silva, Emílio Carlos Nelli

    2008-02-01

    Electrical impedance tomography is a technique to estimate the impedance distribution within a domain, based on measurements on its boundary. In other words, given the mathematical model of the domain, its geometry and boundary conditions, a nonlinear inverse problem of estimating the electric impedance distribution can be solved. Several impedance estimation algorithms have been proposed to solve this problem. In this paper, we present a three-dimensional algorithm, based on the topology optimization method, as an alternative. A sequence of linear programming problems, allowing for constraints, is solved utilizing this method. In each iteration, the finite element method provides the electric potential field within the model of the domain. An electrode model is also proposed (thus, increasing the accuracy of the finite element results). The algorithm is tested using numerically simulated data and also experimental data, and absolute resistivity values are obtained. These results, corresponding to phantoms with two different conductive materials, exhibit relatively well-defined boundaries between them, and show that this is a practical and potentially useful technique to be applied to monitor lung aeration, including the possibility of imaging a pneumothorax.

  4. Optimal management of substrates in anaerobic co-digestion: An ant colony algorithm approach.

    PubMed

    Verdaguer, Marta; Molinos-Senante, María; Poch, Manel

    2016-04-01

    Sewage sludge (SWS) is inevitably produced in urban wastewater treatment plants (WWTPs). Treating SWS on site at small WWTPs is not economical; the SWS is therefore typically transported to an alternative SWS treatment center. There is increased interest in the use of anaerobic digestion (AnD) with co-digestion as an SWS treatment alternative. Although the availability of different co-substrates has been ignored in most previous studies, it is an essential issue for the optimization of AnD co-digestion. In a pioneering approach, this paper applies an ant colony optimization (ACO) algorithm that maximizes the generation of biogas through AnD co-digestion in order to optimize the discharge of organic waste from different waste sources in real time. An empirical application is developed for a virtual case study involving organic waste from urban WWTPs and agrifood activities. The results illustrate the dominant role of toxicity levels in selecting contributions to the AnD input. The methodology and case study proposed in this paper demonstrate the usefulness of the ACO approach in supporting a decision process that improves the sustainability of organic waste and SWS management.
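
    An illustrative ant-colony-style sketch, not the paper's exact formulation: ants assemble sets of waste-stream contributions under a toxicity cap, and pheromone reinforces streams that appear in the best solutions (all numbers are hypothetical stand-ins for the real AnD model):

      import random

      biogas = [40.0, 25.0, 60.0, 30.0, 50.0]   # potential per stream
      tox = [3.0, 1.0, 5.0, 2.0, 4.0]           # toxicity load per stream
      TOX_MAX = 8.0                             # digester toxicity cap
      tau = [1.0] * len(biogas)                 # pheromone per stream

      def construct():
          chosen, load = [], 0.0
          for i in random.sample(range(len(biogas)), len(biogas)):
              p = tau[i] / (1.0 + tau[i])       # pheromone-biased inclusion
              if load + tox[i] <= TOX_MAX and random.random() < p:
                  chosen.append(i)
                  load += tox[i]
          return chosen

      best, best_gas = [], 0.0
      for _it in range(100):
          for _ant in range(10):
              s = construct()
              gas = sum(biogas[i] for i in s)
              if gas > best_gas:
                  best, best_gas = s, gas
          tau = [0.9 * t for t in tau]          # evaporation
          for i in best:
              tau[i] += 0.5                     # reinforce best-so-far
      print("selected streams:", sorted(best), "biogas:", best_gas)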

  5. Application of an Optimal Tuner Selection Approach for On-Board Self-Tuning Engine Models

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Armstrong, Jeffrey B.; Garg, Sanjay

    2012-01-01

    An enhanced design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented in this paper. It specifically addresses the under-determined estimation problem, in which there are more unknown parameters than available sensor measurements. This work builds upon an existing technique for systematically selecting a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. While the existing technique was optimized for open-loop engine operation at a fixed design point, in this paper an alternative formulation is presented that enables the technique to be optimized for an engine operating under closed-loop control throughout the flight envelope. The theoretical Kalman filter mean squared estimation error at a steady-state closed-loop operating point is derived, and the tuner selection approach applied to minimize this error is discussed. A technique for constructing a globally optimal tuning parameter vector, which enables full-envelope application of the technology, is also presented, along with design steps for adjusting the dynamic response of the Kalman filter state estimates. Results from the application of the technique to linear and nonlinear aircraft engine simulations are presented and compared to the conventional approach of tuner selection. The new methodology is shown to yield a significant improvement in on-line Kalman filter estimation accuracy.
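
    A toy stand-in for the tuner-selection idea (the matrices are random placeholders, not an engine model): each candidate tuner set retains q of the n unknown states, the steady-state Kalman error covariance of the reduced model scores the candidate, and the subset with the smallest trace wins.

      from itertools import combinations

      import numpy as np
      from scipy.linalg import solve_discrete_are

      rng = np.random.default_rng(2)
      n, m, q = 6, 3, 2                   # states, measurements, tuners
      A = 0.9 * np.eye(n) + 0.01 * rng.normal(size=(n, n))
      C = rng.normal(size=(m, n))
      Q, R = 0.01 * np.eye(n), 0.05 * np.eye(m)

      def tuner_cost(idx):
          V = np.eye(n)[list(idx)]        # selection matrix (q x n)
          Ar, Cr = V @ A @ V.T, C @ V.T
          P = solve_discrete_are(Ar.T, Cr.T, V @ Q @ V.T, R)
          return float(np.trace(P))       # steady-state MSE proxy

      best = min(combinations(range(n), q), key=tuner_cost)
      print("best tuner subset:", best)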

  6. Numerical and Experimental Approach for the Optimal Design of a Dual Plate Under Ballistic Impact

    NASA Astrophysics Data System (ADS)

    Yoo, Jeonghoon; Chung, Dong-Teak; Park, Myung Soo

    To predict the behavior of a dual plate composed of 5052 aluminum and 1002 cold-rolled steel under ballistic impact, numerical and experimental approaches are attempted. For accurate numerical simulation of the impact phenomena, the appropriate selection of key parameter values based on numerical or experimental tests is critical. This study is focused not only on the optimization technique using numerical simulation but also on the numerical and experimental procedures used to obtain the required parameter values for the simulation. The Johnson-Cook model is used to simulate the mechanical behavior, and simplified experimental and numerical approaches are performed to obtain the material properties of the model. The element erosion scheme for robust simulation of the ballistic impact problem is applied by adjusting the element erosion criteria of each material based on numerical and experimental results. An adequate mesh size and aspect ratio are chosen based on parametric studies. Plastic energy is suggested as a response representing the strength of the plate for optimization under dynamic loading. The optimized thickness of the dual plate is obtained to resist the ballistic impact without penetration while minimizing the total weight.
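
    The Johnson-Cook flow stress in its standard form, with illustrative parameter values rather than the calibration obtained in the study:

      import math

      # sigma = (A + B*eps^n) * (1 + C*ln(epsdot/epsdot0)) * (1 - T*^m)
      def johnson_cook(eps, epsdot, T, A=250e6, B=400e6, n=0.3, C=0.015,
                       m=1.0, epsdot0=1.0, T_room=293.0, T_melt=1723.0):
          t_star = (T - T_room) / (T_melt - T_room)   # homologous temperature
          rate = 1.0 + C * math.log(max(epsdot / epsdot0, 1e-12))
          return (A + B * eps**n) * rate * (1.0 - t_star**m)

      # Flow stress at 10% strain, 1e3 1/s strain rate, 400 K:
      print(f"{johnson_cook(0.1, 1e3, 400.0) / 1e6:.1f} MPa")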

  7. Optimized Structure of the Traffic Flow Forecasting Model With a Deep Learning Approach.

    PubMed

    Yang, Hao-Fan; Dillon, Tharam S; Chen, Yi-Ping Phoebe

    2016-07-20

    Forecasting accuracy is an important issue for successful intelligent traffic management, especially in the domain of traffic efficiency and congestion reduction. The dawning of the big data era brings opportunities to greatly improve prediction accuracy. In this paper, we propose a novel model, the stacked autoencoder Levenberg-Marquardt model, a deep neural network architecture aimed at improving forecasting accuracy. The proposed model is designed using the Taguchi method to develop an optimized structure and to learn traffic flow features through layer-by-layer feature granulation with a greedy layerwise unsupervised learning algorithm. It is applied to real-world data collected from the M6 freeway in the U.K. and is compared with three existing traffic predictors. To the best of our knowledge, this is the first time that an optimized structure of a traffic flow forecasting model with a deep learning approach has been presented. The evaluation results demonstrate that the proposed model with an optimized structure has superior performance in traffic flow forecasting.

  8. Real-time, large scale optimization of water network systems using a subdomain approach.

    SciTech Connect

    van Bloemen Waanders, Bart Gustaaf; Biegler, Lorenz T.; Laird, Carl Damon

    2005-03-01

    Certain classes of dynamic network problems can be modeled by a set of hyperbolic partial differential equations describing behavior along network edges and a set of differential and algebraic equations describing behavior at network nodes. In this paper, we demonstrate real-time performance for optimization problems in drinking water networks. While optimization problems subject to partial differential, differential, and algebraic equations can be solved with a variety of techniques, efficient solutions are difficult for large network problems with many degrees of freedom and variable bounds. Sequential optimization strategies can be inefficient for this problem due to the high cost of computing derivatives with respect to many degrees of freedom. Simultaneous techniques can be more efficient, but are difficult because of the need to solve a large nonlinear program; a program that may be too large for current solvers. This study describes a dynamic optimization formulation for estimating contaminant sources in drinking water networks, given concentration measurements at various network nodes. We achieve real-time performance by combining an efficient large-scale nonlinear programming algorithm with two problem reduction techniques. D'Alembert's principle can be applied to the partial differential equations governing behavior along the network edges (distribution pipes). This allows us to approximate the time-delay relationships between network nodes, removing the need to discretize along the length of the pipes. The efficiency of this approach alone, however, is still dependent on the size of the network and does not scale indefinitely to larger network models. We further reduce the problem size with a subdomain approach, solving smaller inversion problems using a geographic window around the area of contamination. We illustrate the effectiveness of this overall approach and these reduction techniques on an actual metropolitan water network model.

  9. Optimization of Peltier current lead for applied superconducting systems with optimum combination of cryo-stages

    NASA Astrophysics Data System (ADS)

    Kawahara, Toshio; Emoto, Masahiko; Watanabe, Hirofumi; Hamabe, Makoto; Sun, Jian; Ivanov, Yury; Yamaguchi, Satarou

    2012-06-01

    Reducing the electric power consumption of the cryo-cooler during operation of applied superconducting systems is important, as superconductivity can only be maintained at low temperature and the power required for cooling determines the efficiency of the systems employed. The use of Peltier current leads (PCLs) represents one key solution for reducing the heat load on the terminals of such systems. On the other hand, the performance of cryo-coolers generally increases as the temperature increases, given the higher Carnot efficiency. Therefore, combination with suitable mid-stage temperatures represents one possible approach, since the thermal anchor can enhance the performance of the system by reducing the electric power consumption of the cryo-coolers. In this paper, we discuss this possibility utilizing an advanced configuration of a PCL with a commercially available high-temperature cooler. Over 50% enhancement of the performance is estimated.

  10. Optimization of rifamycin B fermentation in shake flasks via a machine-learning-based approach.

    PubMed

    Bapat, Prashant M; Wangikar, Pramod P

    2004-04-20

    Rifamycin B is an important polyketide antibiotic used in the treatment of tuberculosis and leprosy. We present results on medium optimization for rifamycin B production via a barbital-insensitive mutant strain of Amycolatopsis mediterranei S699. Machine-learning approaches such as the genetic algorithm (GA), neighborhood analysis (NA), and the decision tree technique (DT) were explored for optimizing the medium composition. The genetic algorithm was applied as a global search algorithm, while NA was used for a guided local search and to develop medium predictors. The fermentation medium for rifamycin B consisted of nine components. A large number of distinct medium compositions are possible by variation of the concentration of each component, presenting a large combinatorial search space. Optimization was achieved within five generations via GA as well as NA. These five generations comprised 178 shake-flask experiments, a small fraction of the search space. We detected multiple optima in the form of 11 distinct medium combinations, which provided over 600% improvement in rifamycin B productivity. The genetic algorithm performed better than NA in optimizing the fermentation medium. The decision tree technique revealed the media-media interactions qualitatively, in the form of sets of rules for medium composition that give high as well as low productivity.
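
    The GA loop described above can be sketched as follows. Since the real fitness values came from shake-flask measurements, a placeholder fitness function stands in for the measured rifamycin B titer, and the population size, bounds, and operators are illustrative assumptions.

```python
# Minimal GA sketch in the spirit of the medium-optimization loop; the
# nine-component bounds and the fitness function are invented stand-ins.
import numpy as np

rng = np.random.default_rng(1)
LOW, HIGH = np.zeros(9), np.ones(9) * 10.0        # g/L bounds per component

def fitness(medium):
    # Placeholder: in practice this is the measured rifamycin B titer.
    return -np.sum((medium - 4.0) ** 2)

pop = rng.uniform(LOW, HIGH, (20, 9))
for generation in range(5):                        # five generations, as in the study
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[::-1][:10]]   # truncation selection
    children = []
    while len(children) < 10:
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        mask = rng.random(9) < 0.5                 # uniform crossover
        child = np.where(mask, a, b)
        child += rng.normal(0, 0.5, 9)             # Gaussian mutation
        children.append(np.clip(child, LOW, HIGH))
    pop = np.vstack([parents, children])
best = pop[np.argmax([fitness(m) for m in pop])]
```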

  11. Molecular tailoring approach for geometry optimization of large molecules: energy evaluation and parallelization strategies.

    PubMed

    Ganesh, V; Dongare, Rameshwar K; Balanarayan, P; Gadre, Shridhar R

    2006-09-14

    A linear-scaling scheme for estimating the electronic energy, gradients, and Hessian of a large molecule at the ab initio level of theory based on fragment set cardinality is presented. With this proposition, a general, cardinality-guided molecular tailoring approach (CG-MTA) for ab initio geometry optimization of large molecules is implemented. The method employs energy gradients extracted from fragment wave functions, enabling computations otherwise impractical on PC hardware. Further, the method is readily amenable to large-scale coarse-grain parallelization with minimal communication among nodes, resulting in a near-linear speedup. CG-MTA is applied for density-functional-theory-based geometry optimization of a variety of molecules including alpha-tocopherol, taxol, gamma-cyclodextrin, and two conformations of polyglycine. In the tests performed, energy and gradient estimates obtained from CG-MTA during optimization runs show excellent agreement with those obtained from actual computation. The accuracy of the Hessian obtained employing CG-MTA provides good hope for the application of Hessian-based geometry optimization to large molecules.
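
    The fragment-based energy estimate at the heart of CG-MTA can be illustrated with the inclusion-exclusion principle over fragment intersections, which is the essence of a cardinality-based expression. In this hedged Python sketch, a toy pairwise potential stands in for the fragment ab initio calculation, and the estimate deliberately misses pairs not covered by any fragment:

```python
# Sketch of a cardinality-guided fragment-energy estimate via inclusion-
# exclusion: E ~ sum_k (-1)^(k+1) * E(intersection of k fragments).
from itertools import combinations

atoms = {0, 1, 2, 3, 4, 5}
fragments = [{0, 1, 2, 3}, {2, 3, 4, 5}]          # overlap {2, 3} is counted twice

def energy(subset):
    """Stand-in for a fragment ab initio energy: sum of pair terms inside subset."""
    return sum(-1.0 / (1 + abs(i - j)) for i, j in combinations(sorted(subset), 2))

E = 0.0
for k in range(1, len(fragments) + 1):
    for combo in combinations(fragments, k):
        inter = set.intersection(*combo)
        if inter:
            E += (-1) ** (k + 1) * energy(inter)

print(E, energy(atoms))   # estimate vs. full calculation (differ by uncovered pairs)
```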

  13. On a New Optimization Approach for the Hydroforming of Defects-Free Tubular Metallic Parts

    NASA Astrophysics Data System (ADS)

    Caseiro, J. F.; Valente, R. A. F.; Andrade-Campos, A.; Jorge, R. M. Natal

    2011-05-01

    In the hydroforming of tubular metallic components, process parameters (internal pressure, axial feed, and counter-punch position) must be carefully set in order to avoid defects in the final part. If, on one hand, excessive pressure may lead to thinning and bursting during forming, on the other hand insufficient pressure may lead to inadequate filling of the die. Similarly, excessive axial feeding may lead to the formation of wrinkles, whilst inadequate feeding may cause thinning and, consequently, bursting. These apparently contradictory targets are virtually impossible to achieve without trial-and-error procedures in industry, unless optimization approaches are formulated and implemented for complex parts. In this sense, an optimization algorithm based on differential evolutionary techniques is presented here, capable of determining adequate process parameters for the hydroforming of metallic tubular components of complex geometries. The Hybrid Differential Evolution Particle Swarm Optimization (HDEPSO) algorithm, combining the advantages of a number of well-known distinct optimization strategies, acts along with a general-purpose implicit finite element software package, and is based on the definition of wrinkling and thinning indicators. If defects are detected, the algorithm automatically corrects the process parameters and new numerical simulations are performed in real time. In the end, the algorithm proved to be robust and computationally cost-effective, thus providing a valid design tool for the production of defect-free components in industry [1].

  14. A market-based optimization approach to sensor and resource management

    NASA Astrophysics Data System (ADS)

    Schrage, Dan; Farnham, Christopher; Gonsalves, Paul G.

    2006-05-01

    Dynamic resource allocation for sensor management is a problem that demands solutions beyond traditional approaches to optimization. Market-based optimization applies solutions from economic theory, particularly game theory, to the resource allocation problem by creating an artificial market for sensor information and computational resources. Intelligent agents are the buyers and sellers in this market, and they represent all the elements of the sensor network, from sensors to sensor platforms to computational resources. These agents interact based on a negotiation mechanism that determines their bidding strategies. This negotiation mechanism and the agents' bidding strategies are based on game theory, and they are designed so that the aggregate result of the multi-agent negotiation process is a market in competitive equilibrium, which guarantees an optimal allocation of resources throughout the sensor network. This paper makes two contributions to the field of market-based optimization: First, we develop a market protocol to handle heterogeneous goods in a dynamic setting. Second, we develop arbitrage agents to improve the efficiency in the market in light of its dynamic nature.
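
    A minimal sketch of the market metaphor, assuming a single divisible resource and logarithmic agent utilities (both invented here): a price rises when aggregate demand exceeds supply, and the fixed point is a competitive-equilibrium allocation. The paper's game-theoretic negotiation protocol and arbitrage agents are far richer than this tatonnement loop.

```python
# Hedged tatonnement sketch: agents demand a shared resource (e.g., sensor
# time), and the price adjusts until demand matches supply.
supply = 10.0
utilities = [4.0, 3.0, 2.0]          # agent i values the resource as u_i * log(x)

def demand(u, price):
    return u / price                 # maximizing u*log(x) - price*x gives x = u/p

price = 1.0
for _ in range(100):
    total = sum(demand(u, price) for u in utilities)
    price *= 1.0 + 0.1 * (total - supply) / supply   # raise price on excess demand
allocation = [demand(u, price) for u in utilities]
print(price, allocation, sum(allocation))            # converges near supply = 10
```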

  15. Optimization of preparation of chitosan-coated iron oxide nanoparticles for biomedical applications by chemometrics approaches

    NASA Astrophysics Data System (ADS)

    Honary, Soheila; Ebrahimi, Pouneh; Rad, Hossein Asgari; Asgari, Mahsa

    2013-08-01

    Functionalized magnetic nanoparticles are used in several biomedical applications, such as drug delivery, magnetic cell separation, and magnetic resonance imaging. Size and surface properties are the two important factors which can dramatically affect nanoparticle efficiency as well as stability. In this study, a chemometrics approach was applied to optimize the coating process of iron oxide nanoparticles. To optimize the size of the nanoparticles, the effect of two experimental parameters on size was investigated by means of multivariate analysis. The factors considered were chitosan molecular weight and the chitosan-to-tripolyphosphate concentration ratio. The experiments were performed according to a face-centered cube central composite response surface design. A second-order regression model was obtained which was characterized by both descriptive and predictive abilities. The method was optimized with respect to the percentage increase in Z-average diameter after coating as the response. It can be concluded that experimental design provides a suitable means of optimizing and testing the robustness of the iron oxide nanoparticle coating method.
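
    The design and model described above can be sketched as follows, assuming coded factor levels and invented response values; the face-centered central composite design for two factors comprises four factorial corners, four face-centered axial points, and a center point, to which a full second-order model is fitted:

```python
# Sketch of a face-centered central composite design and second-order fit.
# Factors: x1 = chitosan molecular weight, x2 = chitosan/TPP ratio (coded -1..1);
# the response values are invented for illustration.
import numpy as np
from itertools import product

corners = list(product([-1, 1], repeat=2))
axial = [(-1, 0), (1, 0), (0, -1), (0, 1)]
design = np.array(corners + axial + [(0, 0)], dtype=float)

y = np.array([12, 18, 9, 22, 11, 20, 13, 16, 14], dtype=float)  # % size increase

# Second-order model: y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
x1, x2 = design[:, 0], design[:, 1]
X = np.column_stack([np.ones(len(y)), x1, x2, x1 * x2, x1**2, x2**2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(dict(zip(["b0", "b1", "b2", "b12", "b11", "b22"], beta.round(3))))
```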

  16. Constrained nonlinear optimization approaches to color-signal separation.

    PubMed

    Chang, P R; Hsieh, T H

    1995-01-01

    Separating a color signal into illumination and surface reflectance components is a fundamental issue in color reproduction and constancy. This can be carried out by minimizing the error in the least squares (LS) fit of the product of the illumination and the surface spectral reflectance to the actual color signal. When taking into account the physical realizability constraints on the surface reflectance and illumination, the feasible solutions to the nonlinear LS problem should satisfy a number of linear inequalities. Four distinct novel optimization algorithms are presented that employ these constraints to minimize the nonlinear LS fitting error. The first approach, which is based on Ritter's superlinearly convergent method (Luenberger, 1980), provides a computationally superior algorithm for finding the minimum solution to the nonlinear LS error problem subject to linear inequality constraints. Unfortunately, this gradient-like algorithm may sometimes be trapped at a local minimum or become unstable when the parameters involved in the algorithm are not tuned properly. The remaining three methods are based on the stable and promising global minimizer called simulated annealing. The annealing algorithm can always find the global minimum solution with probability one, but its convergence is slow. To tackle this, a cost-effective variable-separable formulation based on the concept of Golub and Pereyra (1973) is adopted to reduce the nonlinear LS problem to a small-scale one. The computational efficiency can be further improved when the original Boltzmann generating distribution of classical annealing is replaced by the Cauchy distribution.
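
    The variable-separable reduction combined with Cauchy-distributed annealing steps can be illustrated on a toy separable model (invented here): the linear coefficients are eliminated by an inner least-squares solve, so the annealing acts only on the nonlinear parameter.

```python
# Hedged sketch of variable-separable least squares with Cauchy annealing.
# The toy model y = a*exp(-k*t) + b stands in for the illumination/reflectance
# product; only k is annealed, (a, b) are solved linearly at each step.
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0, 5, 50)
y = 2.0 * np.exp(-0.7 * t) + 0.5 + 0.01 * rng.standard_normal(50)

def inner_lstsq(k):
    """Best linear (a, b) for fixed nonlinear k, plus the residual error."""
    A = np.column_stack([np.exp(-k * t), np.ones_like(t)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef, np.sum((A @ coef - y) ** 2)

k, (_, err) = 2.0, inner_lstsq(2.0)
T0 = 1.0
for i in range(1, 2000):
    temp = T0 / i                                            # fast-annealing schedule
    k_new = k + temp * np.tan(np.pi * (rng.random() - 0.5))  # Cauchy-distributed step
    if k_new <= 0:
        continue
    _, err_new = inner_lstsq(k_new)
    if err_new < err or rng.random() < np.exp((err - err_new) / temp):
        k, err = k_new, err_new
coef, _ = inner_lstsq(k)
print(k, coef)    # ~0.7 and ~(2.0, 0.5)
```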

  17. New Approaches to HSCT Multidisciplinary Design and Optimization

    NASA Technical Reports Server (NTRS)

    Schrage, Daniel P.; Craig, James I.; Fulton, Robert E.; Mistree, Farrokh

    1999-01-01

    New approaches to MDO have been developed and demonstrated during this project on a particularly challenging aeronautics problem: HSCT aeroelastic wing design. Tackling this problem required the integration of resources and collaboration from three Georgia Tech laboratories, ASDL, SDL, and PPRL, along with close coordination and participation from industry. Its success can also be attributed to the close interaction and involvement of fellows from the NASA Multidisciplinary Analysis and Optimization (MAO) program, which was going on in parallel and provided additional resources to work on the very complex, multidisciplinary problem, along with the methods being developed. The development of the Integrated Design Engineering Simulator (IDES) and its initial demonstration is a necessary first step in transitioning the methods and tools developed to larger industrial-sized problems of interest. It also provides a framework for the implementation and demonstration of the methodology. Attachment: Appendix A - List of publications. Appendix B - Year 1 report. Appendix C - Year 2 report. Appendix D - Year 3 report. Appendix E - accompanying CDROM.

  18. Applying and testing the conveniently optimized enzyme mismatch cleavage method to clinical DNA diagnosis.

    PubMed

    Niida, Yo; Kuroda, Mondo; Mitani, Yusuke; Okumura, Akiko; Yokoi, Ayano

    2012-11-01

    Establishing a simple and effective mutation screening method is one of the most compelling problems in applying genetic diagnosis to clinical use. Because there is no reliable and inexpensive screening system, amplifying by PCR and performing direct sequencing of every coding exon is the gold standard strategy even today. However, this approach is expensive and time consuming, especially when the gene size or sample number is large. Previously, we developed CEL nuclease mediated heteroduplex incision with polyacrylamide gel electrophoresis and silver staining (CHIPS) as an ideal simple mutation screening system constructed with only conventional apparatuses and commercially available reagents. In this study, we evaluated the utility of CHIPS technology for genetic diagnosis in clinical practice by applying this system to screening for COL2A1, WRN and RPS6KA3 mutations in newly diagnosed patients with Stickler syndrome (autosomal dominant inheritance), Werner syndrome (autosomal recessive inheritance) and Coffin-Lowry syndrome (X-linked inheritance), respectively. In all three genes, CHIPS detected all DNA variations, including disease-causative mutations, within a day. Direct sequencing of all coding exons of these genes confirmed 100% sensitivity and specificity. We demonstrate the high sensitivity, high cost performance, and reliability of this simple system, with compatibility with all inheritance modes. Because of its low technological requirements, CHIPS is ready for use and can potentially be disseminated to any laboratory in the world.

  19. Improved Broadband Liner Optimization Applied to the Advanced Noise Control Fan

    NASA Technical Reports Server (NTRS)

    Nark, Douglas M.; Jones, Michael G.; Sutliff, Daniel L.; Ayle, Earl; Ichihashi, Fumitaka

    2014-01-01

    The broadband component of fan noise has grown in relevance with the utilization of increased bypass ratio and advanced fan designs. Thus, while the attenuation of fan tones remains paramount, the ability to simultaneously reduce broadband fan noise levels has become more desirable. This paper describes improvements to a previously established broadband acoustic liner optimization process using the Advanced Noise Control Fan rig as a demonstrator. Specifically, in-duct attenuation predictions with a statistical source model are used to obtain optimum impedance spectra over the conditions of interest. The predicted optimum impedance information is then used with acoustic liner modeling tools to design liners aimed at producing impedance spectra that most closely match the predicted optimum values. Design selection is based on an acceptance criterion that provides the ability to apply increased weighting to specific frequencies and/or operating conditions. Constant-depth, double-degree of freedom and variable-depth, multi-degree of freedom designs are carried through design, fabrication, and testing to validate the efficacy of the design process. Results illustrate the value of the design process in concurrently evaluating the relative costs/benefits of these liner designs. This study also provides an application for demonstrating the integrated use of duct acoustic propagation/radiation and liner modeling tools in the design and evaluation of novel broadband liner concepts for complex engine configurations.

  20. Computing Optimal Stochastic Portfolio Execution Strategies: A Parametric Approach Using Simulations

    NASA Astrophysics Data System (ADS)

    Moazeni, Somayeh; Coleman, Thomas F.; Li, Yuying

    2010-09-01

    Computing optimal stochastic portfolio execution strategies under appropriate risk consideration presents a great computational challenge. We investigate a parametric approach for computing optimal stochastic strategies using Monte Carlo simulations. This approach reduces computational complexity by computing the coefficients of a parametric representation of a stochastic dynamic strategy based on static optimization. Using this technique, constraints can likewise be handled using appropriate penalty functions. We illustrate the proposed approach by minimizing the expected execution cost and Conditional Value-at-Risk (CVaR).

  1. A Technical and Economic Optimization Approach to Exploring Offshore Renewable Energy Development in Hawaii

    SciTech Connect

    Larson, Kyle B.; Tagestad, Jerry D.; Perkins, Casey J.; Oster, Matthew R.; Warwick, M.; Geerlofs, Simon H.

    2015-09-01

    This study was conducted with the support of the U.S. Department of Energy’s (DOE’s) Wind and Water Power Technologies Office (WWPTO) as part of ongoing efforts to minimize key risks and reduce the cost and time associated with permitting and deploying ocean renewable energy. The focus of the study was to discuss a possible approach to exploring scenarios for ocean renewable energy development in Hawaii that attempts to optimize future development based on technical, economic, and policy criteria. The goal of the study was not to identify potentially suitable or feasible locations for development, but to discuss how such an approach may be developed for a given offshore area. Hawaii was selected for this case study due to the complex nature of the energy climate there and DOE’s ongoing involvement in supporting marine spatial planning for the West Coast. Primary objectives of the study included 1) discussing the political and economic context for ocean renewable energy development in Hawaii, especially with respect to how inter-island transmission may affect the future of renewable energy development in Hawaii; 2) applying a Geographic Information System (GIS) approach that has been used to assess the technical suitability of offshore renewable energy technologies in Washington, Oregon, and California to Hawaii’s offshore environment; and 3) formulating a mathematical model for exploring scenarios for ocean renewable energy development in Hawaii that seeks to optimize technical and economic suitability within the context of Hawaii’s existing energy policy and planning.

  2. A new optimization approach for shell and tube heat exchangers by using electromagnetism-like algorithm (EM)

    NASA Astrophysics Data System (ADS)

    Abed, Azher M.; Abed, Issa Ahmed; Majdi, Hasan Sh.; Al-Shamani, Ali Najah; Sopian, K.

    2016-12-01

    This study proposes a new procedure for the optimal design of shell and tube heat exchangers. The electromagnetism-like algorithm is applied to save on heat exchanger capital cost and to design a compact, high-performance heat exchanger with effective use of the allowable pressure drop (cost of the pump). The optimization algorithm determines the optimal values of both the geometric design parameters and the maximum allowable pressure drop by minimizing a total cost function. A computer code was developed for the optimal design of shell and tube heat exchangers. Different test cases are solved to demonstrate the effectiveness and ability of the proposed algorithm, and results are compared with those obtained by other approaches available in the literature. The comparisons indicate that the proposed design procedure can be successfully applied to the optimal design of shell and tube heat exchangers. In particular, in the examined cases, reductions in total cost of up to 30%, 29%, and 56.15% compared with the original design, and of up to 18%, 5.5%, and 7.4% compared with other approaches, are observed for case studies 1, 2, and 3, respectively. The economic savings resulting from the proposed design procedure are especially relevant when size/volume is critical and a high-performance, compact unit of moderate volume and cost is needed.
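
    A hedged sketch of the electromagnetism-like mechanism follows: candidate designs carry objective-dependent "charges", better points attract and worse points repel, and each point moves along its resultant force. The quadratic objective is a stand-in for the heat exchanger's total-cost function, and the parameter choices are illustrative.

```python
# Hedged sketch of the electromagnetism-like (EM) mechanism; the objective is
# a stand-in, not the capital-plus-pumping cost model of the paper.
import numpy as np

rng = np.random.default_rng(7)
def f(x):                      # stand-in for the total cost function
    return np.sum((x - 0.3) ** 2)

n, dim, low, high = 15, 3, 0.0, 1.0
pts = rng.uniform(low, high, (n, dim))
for it in range(200):
    vals = np.array([f(p) for p in pts])
    best = vals.min()
    # Charges: better (lower-cost) points get larger charge.
    q = np.exp(-dim * (vals - best) / (vals.sum() - n * best + 1e-12))
    for i in np.argsort(vals)[1:]:                  # best point stays put (elitism)
        force = np.zeros(dim)
        for j in range(n):
            if i == j:
                continue
            d = pts[j] - pts[i]
            r2 = d @ d + 1e-12
            sign = 1.0 if vals[j] < vals[i] else -1.0   # attract toward better points
            force += sign * q[i] * q[j] * d / r2
        step = rng.random()
        pts[i] = np.clip(pts[i] + step * force / (np.linalg.norm(force) + 1e-12),
                         low, high)
print(pts[np.argmin([f(p) for p in pts])])          # best design found
```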

  3. A hybrid gene selection approach for microarray data classification using cellular learning automata and ant colony optimization.

    PubMed

    Vafaee Sharbaf, Fatemeh; Mosafer, Sara; Moattar, Mohammad Hossein

    2016-06-01

    This paper proposes an approach for gene selection in microarray data. The proposed approach consists of a primary filter stage using the Fisher criterion, which reduces the initial genes and hence the search space and time complexity. Then, a wrapper approach based on cellular learning automata (CLA) optimized with the ant colony method (ACO) is used to find the set of features which improve the classification accuracy. CLA is applied due to its capability to learn and model complicated relationships. The features selected in the last phase are evaluated using ROC curves, and the smallest and most effective feature subset is determined. The classifiers evaluated in the proposed framework are K-nearest neighbor, support vector machine, and naïve Bayes. The proposed approach is evaluated on 4 microarray datasets. The evaluations confirm that the proposed approach can find the smallest subset of genes while approaching the maximum accuracy.
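
    The filter stage can be sketched directly: the Fisher criterion scores each gene by the squared class-mean separation over the summed class variances, and only the top-ranked genes are passed to the CLA/ACO wrapper. Data here are random stand-ins for microarray measurements.

```python
# Sketch of the Fisher-criterion filter step that precedes the wrapper search.
import numpy as np

rng = np.random.default_rng(5)
X = rng.standard_normal((60, 1000))          # 60 samples x 1000 genes (synthetic)
y = np.array([0] * 30 + [1] * 30)
X[y == 1, :20] += 1.5                        # make the first 20 genes informative

def fisher_scores(X, y):
    a, b = X[y == 0], X[y == 1]
    num = (a.mean(0) - b.mean(0)) ** 2       # squared class-mean separation
    den = a.var(0) + b.var(0) + 1e-12        # summed class variances
    return num / den

keep = np.argsort(fisher_scores(X, y))[::-1][:50]   # reduced search space
print(sorted(keep[:20]))                            # mostly genes 0..19
```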

  4. Applying the Systems Approach to Curriculum Development in the Science Classroom.

    ERIC Educational Resources Information Center

    Boblick, John M.

    Described is a method by which a classroom teacher may apply the systems approach to the development of the instructional segments which he uses in his daily teaching activities. The author proposes a three-dimensional curriculum design model and discusses its main features. The basic points which characterize the application of the systems…

  5. Applying Program Theory-Driven Approach to Design and Evaluate a Teacher Professional Development Program

    ERIC Educational Resources Information Center

    Lin, Su-ching; Wu, Ming-sui

    2016-01-01

    This study was the first year of a two-year project which applied a program theory-driven approach to evaluating the impact of teachers' professional development interventions on students' learning by using a mix of methods, qualitative inquiry, and quasi-experimental design. The current study was to show the results of using the method of…

  6. A multi-resolution approach for optimal mass transport

    NASA Astrophysics Data System (ADS)

    Dominitz, Ayelet; Angenent, Sigurd; Tannenbaum, Allen

    2007-09-01

    Optimal mass transport is an important technique with numerous applications in econometrics, fluid dynamics, automatic control, statistical physics, shape optimization, expert systems, and meteorology. Motivated by certain problems in image registration and medical image visualization, in this note, we describe a simple gradient descent methodology for computing the optimal L2 transport mapping which may be easily implemented using a multiresolution scheme. We also indicate how the optimal transport map may be computed on the sphere. A numerical example is presented illustrating our ideas.

  7. The Contribution of Applied Social Sciences to Obesity Stigma-Related Public Health Approaches

    PubMed Central

    Bombak, Andrea E.

    2014-01-01

    Obesity is viewed as a major public health concern, and obesity stigma is pervasive. Such marginalization renders obese persons a “special population.” Weight bias arises in part due to popular sources' attribution of obesity causation to individual lifestyle factors. This may not accurately reflect the experiences of obese individuals or their perspectives on health and quality of life. A powerful role may exist for applied social scientists, such as anthropologists or sociologists, in exploring the lived and embodied experiences of this largely discredited population. This novel research may aid in public health intervention planning. Through these studies, applied social scientists could help develop a nonstigmatizing, salutogenic approach to public health that accurately reflects the health priorities of all individuals. Such an approach would call upon applied social science's strengths in investigating the mundane, problematizing the “taken for granted” and developing emic (insiders') understandings of marginalized populations. PMID:24782921

  8. A new multi criteria classification approach in a multi agent system applied to SEEG analysis

    PubMed Central

    Kinie, Abel; Ndiaye, Mamadou Lamine L.; Montois, Jean-Jacques; Jacquelet, Yann

    2007-01-01

    This work is focused on the study of the organization of SEEG signals during epileptic seizures using a multi-agent system approach. This approach is based on cooperative mechanisms of auto-organization at the micro level and the emergence of a global function at the macro level. In order to evaluate this approach, we propose a distributed collaborative method for the classification of the signals of interest. This new multi-criteria classification method is able to provide a relevant organization of brain area structures and to bring out elements of epileptogenic networks. The method is compared to another classification approach, fuzzy classification, and gives better results when applied to SEEG signals. PMID:18002381

  9. Optimization of a Saccharomyces cerevisiae fermentation process for production of a therapeutic recombinant protein using a multivariate Bayesian approach.

    PubMed

    Fu, Zhibiao; Leighton, Julie; Cheng, Aili; Appelbaum, Edward; Aon, Juan C

    2012-07-01

    Various approaches have been applied to optimize biological product fermentation processes and define design space. In this article, we present a stepwise approach to optimize a Saccharomyces cerevisiae fermentation process through risk assessment analysis, statistical design of experiments (DoE), and a multivariate Bayesian predictive approach. The critical process parameters (CPPs) were first identified through a risk assessment. The response surface for each attribute was modeled using the results from the DoE study, with consideration given to interactions between CPPs. A multivariate Bayesian predictive approach was then used to identify the region of process operating conditions where all attributes met their specifications simultaneously. The model prediction was verified by twelve consistency runs, in which all batches achieved a broth titer of more than 1.53 g/L and quality attributes within the expected ranges. The calculated probability was used to define the reliable operating region. To our knowledge, this is the first case study to implement the multivariate Bayesian predictive approach for process optimization in an industrial application, with verification at two different production scales. This approach can be extended to other fermentation process optimizations and to the quantitation of reliable operating regions.
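
    The Bayesian predictive step can be sketched with Monte Carlo, assuming (for illustration only) normal posterior predictive distributions around invented response-surface models: for each candidate CPP setting, draw joint responses and estimate the probability that all attributes are simultaneously in specification.

```python
# Hedged sketch of the multivariate Bayesian predictive step; models, specs
# other than the 1.53 g/L titer, and posteriors are invented for illustration.
import numpy as np

rng = np.random.default_rng(11)

def sample_titer(x, n):        # posterior predictive draws for broth titer (g/L)
    mean = 1.6 + 0.4 * x[0] - 0.3 * x[0] ** 2 + 0.2 * x[1]
    return rng.normal(mean, 0.15, n)

def sample_purity(x, n):       # posterior predictive draws for a quality attribute (%)
    mean = 96.0 + 1.5 * x[1] - 1.0 * x[1] ** 2
    return rng.normal(mean, 0.8, n)

def p_all_in_spec(x, n=5000):
    ok = (sample_titer(x, n) >= 1.53) & (sample_purity(x, n) >= 95.0)
    return ok.mean()

# Scan a grid of coded CPP settings; the reliable region is where p >= 0.9.
for x0 in np.linspace(-1, 1, 5):
    for x1 in np.linspace(-1, 1, 5):
        p = p_all_in_spec((x0, x1))
        if p >= 0.9:
            print(f"CPPs ({x0:+.1f}, {x1:+.1f}): P(all specs) = {p:.2f}")
```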

  10. a Multivariate Approach to Optimize Subseafloor Observatory Designs

    NASA Astrophysics Data System (ADS)

    Lado Insua, T.; Moran, K.; Kulin, I.; Farrington, S.; Newman, J. B.; Morgan, S.

    2012-12-01

    Long-term monitoring of the subseafloor has become a more common practice in the last few decades. Systems such as the Circulation Obviation Retrofit Kit (CORK) have been used since the 1970s to provide the scientific community with time series measurements of geophysical properties below the seafloor and, in the latest versions, with pore water sampling over time. The Simple Cabled Instrument for Measuring Parameters In-Situ (SCIMPI) is a new observatory instrument designed to study dynamic processes in the sub-seabed. SCIMPI makes time series measurements of temperature, pressure and electrical resistivity at a series of depths in the sub-seafloor, tailored to site-specific scientific objectives. SCIMPI's modular design enables this type of site-specific configuration, based on the study goals combined with the sub-seafloor characteristics. The instrument is designed to take measurements in dynamic environments. After four years in development, SCIMPI is scheduled for its first deployment on the Cascadia Margin within the NEPTUNE Canada observatory network. SCIMPI's flexible modular design simplifies deployment and reduces the cost of measurements of physical properties. SCIMPI is expected to expand subseafloor observations into softer sediments and multiple depth intervals. In any observation system, the locations and number of sensors are a compromise between scientific objectives and cost. The subseafloor sensor positions within an observatory borehole have been determined in the past by identifying the major lithologies or major flux areas, based on individual analysis of the physical properties and logging measurements of the site. Here we present a multivariate approach for identifying the most significant depth intervals to instrument for long-term subseafloor observatories. Where borehole data are available (wireline logging, logging while drilling, physical properties and chemistry measurements), this approach will optimize the locations using an unbiased

  11. Genetic-Algorithm-based Light-Curve Optimization Applied to Observations of the W Ursae Majoris Star BH Cassiopeiae

    NASA Astrophysics Data System (ADS)

    Metcalfe, Travis S.

    1999-05-01

    I have developed a procedure utilizing a genetic-algorithm (GA) based optimization scheme to fit the observed light curves of an eclipsing binary star with a model produced by the Wilson-Devinney (W-D) code. The principal advantages of this approach are the global search capability and the objectivity of the final result. Although this method can be more efficient than some other comparably global search techniques, the computational requirements of the code are still considerable. I have applied this fitting procedure to my observations of the W UMa type eclipsing binary BH Cassiopeiae. An analysis of V-band CCD data obtained in 1994-1995 from Steward Observatory and U- and B-band photoelectric data obtained in 1996 from McDonald Observatory provided three complete light curves to constrain the fit. In addition, radial velocity curves obtained in 1997 from McDonald Observatory provided a direct measurement of the system mass ratio to restrict the search. The results of the GA-based fit are in excellent agreement with the final orbital solution obtained with the standard differential corrections procedure in the W-D code.

  12. An Analysis of the Optimal Multiobjective Inventory Clustering Decision with Small Quantity and Great Variety Inventory by Applying a DPSO

    PubMed Central

    Li, Meng-Hua

    2014-01-01

    When an enterprise has thousands of item varieties in its inventory, the use of a single management method may not be feasible. A better way to manage this problem is to categorise inventory items into several clusters according to inventory decisions and to use different management methods for different clusters. The present study applies DPSO (dynamic particle swarm optimisation) to the problem of clustering inventory items. Without requiring prior inventory knowledge, inventory items are automatically clustered into a near-optimal number of clusters. The obtained clustering results should satisfy the inventory objective equation, which consists of different objectives such as total cost, backorder rate, demand relevance, and inventory turnover rate. This study integrates the above four objectives into a multiobjective equation and inputs the actual inventory items of the enterprise into DPSO. In comparison with other clustering methods, the proposed method can consider different objectives and obtain an overall better solution, with better convergence results and inventory decisions. PMID:25197713

  13. Utility Theory for Evaluation of Optimal Process Condition of SAW: A Multi-Response Optimization Approach

    SciTech Connect

    Datta, Saurav; Biswas, Ajay; Bhaumik, Swapan; Majumdar, Gautam

    2011-01-17

    A multi-objective optimization problem has been solved in order to estimate an optimal process environment, consisting of an optimal parametric combination, to achieve the desired quality indicators (related to bead geometry) of submerged arc welds of mild steel. The quality indicators selected in the study were bead height, penetration depth, bead width, and percentage dilution. The Taguchi method followed by the utility concept was adopted to evaluate the optimal process condition achieving the multiple objective requirements of the desired weld quality.

  14. Non-Linear Optimization Applied to Angle-of-Arrival Satellite-Based Geolocation with Correlated Measurements

    DTIC Science & Technology

    2015-03-01

    NON-LINEAR OPTIMIZATION APPLIED TO ANGLE-OF-ARRIVAL SATELLITE-BASED GEOLOCATION WITH CORRELATED MEASUREMENTS. Thesis, presented to the faculty of the Department of Electrical and Computer Engineering. Joshua S. Sprang, B.S.E.E., B.S.Cp.E., 2d Lt, USAF.

  15. Taguchi Method Applied in Optimization of Shipley 5740 Positive Resist Deposition

    NASA Technical Reports Server (NTRS)

    Hui, Allan; Wiberg, Dean V.; Blosiu, Julian

    1997-01-01

    Taguchi methods of robust design present a way to optimize output process performance through organized experiments, using orthogonal arrays for the evaluation of the process-controllable parameters.
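
    A minimal sketch of that workflow, assuming a standard L4(2^3) orthogonal array, an invented resist-thickness response, and a smaller-is-better signal-to-noise ratio:

```python
# Sketch of the Taguchi workflow: run an orthogonal array, compute an S/N
# ratio per run, and pick each factor's best level from the S/N main effects.
import numpy as np

# L4(2^3): 4 runs, 3 two-level factors (e.g., spin speed, bake time, dispense rate)
L4 = np.array([[0, 0, 0],
               [0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])

# Replicated responses per run (e.g., deviation of resist thickness from target)
y = np.array([[1.2, 1.4], [0.8, 0.9], [1.0, 1.1], [0.5, 0.6]])

# "Smaller is better" S/N ratio: -10 log10(mean(y^2))
sn = -10 * np.log10((y ** 2).mean(axis=1))

for factor in range(3):
    effects = [sn[L4[:, factor] == lvl].mean() for lvl in (0, 1)]
    print(f"factor {factor}: best level = {int(np.argmax(effects))}, S/N = {effects}")
```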

  16. Method developments approaches in supercritical fluid chromatography applied to the analysis of cosmetics.

    PubMed

    Lesellier, E; Mith, D; Dubrulle, I

    2015-12-04

    Analyses of complex cosmetic samples, such as creams or lotions, are generally achieved by HPLC. These analyses often require multistep gradients, due to the presence of compounds with a large range of polarity. For instance, the bioactive compounds may be polar, while the matrix contains lipid components that are rather non-polar; thus cosmetic formulations are usually oil-water emulsions. Supercritical fluid chromatography (SFC) uses mobile phases composed of carbon dioxide and organic co-solvents, allowing for good solubility of both the active compounds and the matrix excipients. Moreover, the classical and well-known properties of these mobile phases yield fast analyses and ensure rapid method development. However, due to the large number of stationary phases available for SFC and to the varied additional parameters acting on both retention and separation factors (co-solvent nature and percentage, temperature, backpressure, flow rate, column dimensions and particle size), a simplified approach can be followed to ensure fast method development. First, suitable stationary phases should be carefully selected for an initial screening, and then the other operating parameters can be limited to the co-solvent nature and percentage, keeping the oven temperature and back-pressure constant. To describe simple method development guidelines in SFC, three sample applications are discussed in this paper: UV filters (sunscreens) in sunscreen cream, glyceryl caprylate in eye liner, and caffeine in eye serum. Firstly, five stationary phases (ACQUITY UPC(2)) are screened with isocratic elution conditions (10% methanol in carbon dioxide). Complementarity of the stationary phases is assessed based on our spider diagram classification, which compares a large number of stationary phases based on five molecular interactions. Secondly, the one or two best stationary phases are retained for further optimization of mobile phase composition, with isocratic elution conditions or, when

  17. Simultaneous optimization by neuro-genetic approach for analysis of plant materials by laser induced breakdown spectroscopy

    NASA Astrophysics Data System (ADS)

    Nunes, Lidiane Cristina; da Silva, Gilmare Antônia; Trevizan, Lilian Cristina; Santos Júnior, Dario; Poppi, Ronei Jesus; Krug, Francisco José

    2009-06-01

    A simultaneous optimization strategy based on a neuro-genetic approach is proposed for the selection of laser-induced breakdown spectroscopy operational conditions for the simultaneous determination of macro-nutrients (Ca, Mg and P), micro-nutrients (B, Cu, Fe, Mn and Zn), Al and Si in plant samples. A laser-induced breakdown spectroscopy system equipped with a 10 Hz Q-switched Nd:YAG laser (12 ns, 532 nm, 140 mJ) and an echelle spectrometer with an intensified charge-coupled device was used. Integration time gate, delay time, amplification gain and number of pulses were optimized. Pellets of spinach leaves (NIST 1570a) were employed as laboratory samples. In order to find a model that could correlate laser-induced breakdown spectroscopy operational conditions with a compromise of high peak areas for all elements simultaneously, a Bayesian regularized artificial neural network approach was employed. Subsequently, a genetic algorithm was applied to find optimal conditions for the neural network model, in an approach called neuro-genetic. A single laser-induced breakdown spectroscopy working condition that maximizes the peak areas of all elements simultaneously was obtained with the following optimized parameters: 9.0 µs integration time gate, 1.1 µs delay time, 225 (a.u.) amplification gain and 30 accumulated laser pulses. The proposed approach is a useful and suitable tool for the optimization of such a complex analytical problem.

  18. TH-C-BRD-10: An Evaluation of Three Robust Optimization Approaches in IMPT Treatment Planning

    SciTech Connect

    Cao, W; Randeniya, S; Mohan, R; Zaghian, M; Kardar, L; Lim, G; Liu, W

    2014-06-15

    Purpose: Various robust optimization approaches have been proposed to ensure the robustness of intensity modulated proton therapy (IMPT) in the face of uncertainty. In this study, we aim to investigate the performance of three classes of robust optimization approaches regarding plan optimality and robustness. Methods: Three robust optimization models were implemented in our in-house IMPT treatment planning system: 1) L2 optimization based on worst-case dose; 2) L2 optimization based on a minimax objective; and 3) L1 optimization with constraints on all uncertain doses. The first model was solved by an L-BFGS algorithm; the second by a gradient projection algorithm; and the third by an interior point method. One nominal scenario and eight maximum uncertainty scenarios (proton range overshoot and undershoot of 3.5%, and setup errors of 5 mm in the x, y, z directions) were considered in optimization. Dosimetric measurements of the optimized plans from the three approaches were compared for four prostate cancer patients retrospectively selected at our institution. Results: For the nominal scenario, all three optimization approaches yielded the same coverage of the clinical target volume (CTV), and the L2 worst-case approach demonstrated better rectum and bladder sparing than the others. For the uncertainty scenarios, the L1 approach resulted in the most robust CTV coverage against uncertainties, while the plans from the L2 worst-case approach were less robust than the others. In addition, we observed that the number of scanning spots with positive MUs from the L2 approaches was approximately twice that from the L1 approach, indicating that L1 optimization may lead to more efficient IMPT delivery. Conclusion: Our study indicated that the L1 approach best preserved the target coverage in the face of uncertainty, but its resulting OAR sparing was slightly inferior to the other two approaches.
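
    The first model's worst-case-dose idea can be sketched under strong simplifying assumptions (random stand-in influence matrices, target voxels only, and the worst case taken as the coldest dose per voxel), since dose is linear in the spot weights for each scenario:

```python
# Hedged sketch of worst-case-dose robust optimization: per-voxel worst case
# over scenarios is penalized against the prescription, with projected
# gradient descent keeping spot weights non-negative. Influence matrices are
# random stand-ins for a dose calculation.
import numpy as np

rng = np.random.default_rng(2)
n_vox, n_spots, n_scen = 40, 25, 9          # 1 nominal + 8 uncertainty scenarios
D = rng.random((n_scen, n_vox, n_spots))    # influence matrix per scenario
presc = np.ones(n_vox)                      # prescribed dose in the CTV

w = np.full(n_spots, 0.1)
lr = 0.05
for _ in range(500):
    dose = D @ w                            # (n_scen, n_vox)
    worst = dose.min(axis=0)                # worst case = coldest dose per voxel
    scen = dose.argmin(axis=0)              # which scenario is worst per voxel
    resid = worst - presc                   # L2 penalty on worst-case deviation
    grad = np.zeros(n_spots)
    for v in range(n_vox):                  # chain rule through the active scenario
        grad += 2 * resid[v] * D[scen[v], v]
    w = np.maximum(w - lr * grad / n_vox, 0.0)   # projection onto w >= 0
print(np.abs(D @ w - presc).mean())         # mean deviation after optimization
```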

  19. Evaluation of multi-algorithm optimization approach in multi-objective rainfall-runoff calibration

    NASA Astrophysics Data System (ADS)

    Shafii, M.; de Smedt, F.

    2009-04-01

    Calibration of rainfall-runoff models is one of the issues in which hydrologists have been interested over past decades. Because of the multi-objective nature of rainfall-runoff calibration, and due to advances in computational power, population-based optimization techniques are becoming increasingly popular for multi-objective calibration schemes. Over recent years, such methods have proven to be powerful search methods for this purpose, especially when there are a large number of calibration parameters. However, the application of these methods is often criticized on the grounds that it is not possible to develop a single algorithm which is always efficient for different problems. Therefore, more recent efforts have focused on the development of simultaneous multiple optimization algorithms to overcome this drawback. This paper applies one of the most recent population-based multi-algorithm approaches, named AMALGAM, to multi-objective rainfall-runoff calibration in a distributed hydrological model, WetSpa. This algorithm merges the strengths of different optimization algorithms and has thus proven to be more efficient than other methods. In order to evaluate this, the results of this paper are compared with those previously reported using a conventional multi-objective evolutionary algorithm.

  20. Geometric approach to optimal nonequilibrium control: Minimizing dissipation in nanomagnetic spin systems

    NASA Astrophysics Data System (ADS)

    Rotskoff, Grant M.; Crooks, Gavin E.; Vanden-Eijnden, Eric

    2017-01-01

    Optimal control of nanomagnets has become an urgent problem for the field of spintronics as technological tools approach thermodynamically determined limits of efficiency. In complex, fluctuating systems, such as nanomagnetic bits, finding optimal protocols is challenging, requiring detailed information about the dynamical fluctuations of the controlled system. We provide a physically transparent derivation of a metric tensor for which the length of a protocol is proportional to its dissipation. This perspective simplifies nonequilibrium optimization problems by recasting them in a geometric language. We then describe a numerical method, an instance of geometric minimum action methods, that enables computation of geodesics even when the number of control parameters is large. We apply these methods to two models of nanomagnetic bits: a Landau-Lifshitz-Gilbert description of a single magnetic spin controlled by two orthogonal magnetic fields, and a two-dimensional Ising model in which the field is spatially controlled. These calculations reveal nontrivial protocols for bit erasure and reversal, providing important, experimentally testable predictions for ultra-low-power computing.
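
    The geometric statement can be summarized in a hedged form (notation assumed here, following the friction-tensor formulation of thermodynamic length): near equilibrium, the mean excess work of a protocol λ(t) is a quadratic functional of the control velocity, so minimizing dissipation at fixed duration reduces to finding geodesics of the metric ζ.

```latex
% Excess (dissipated) work of a control protocol \lambda(t), t in [0, tau],
% and the associated thermodynamic length; \zeta(\lambda) is a positive-
% definite friction/metric tensor estimated from equilibrium fluctuations.
\begin{equation}
  W_{\mathrm{ex}}[\lambda] = \int_0^{\tau} \dot{\lambda}^{\mathsf{T}}
    \zeta\!\big(\lambda(t)\big)\, \dot{\lambda}\;\mathrm{d}t ,
  \qquad
  \mathcal{L}[\lambda] = \int_0^{\tau}
    \sqrt{\dot{\lambda}^{\mathsf{T}} \zeta\!\big(\lambda(t)\big)\, \dot{\lambda}}\;\mathrm{d}t .
\end{equation}
% Cauchy-Schwarz gives W_ex >= L^2 / tau, with equality for geodesics
% traversed at constant metric speed -- the optimal protocols.
```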

  1. Optimization of Antioxidant Potential of Penicillium granulatum Bainier by Statistical Approaches

    PubMed Central

    Chandra, Priyanka; Arora, Daljit Singh

    2012-01-01

    A three-step optimization strategy, which includes a one-factor-at-a-time classical method and different statistical approaches (Plackett-Burman design and response surface methodology), was applied to optimize the antioxidant potential of Penicillium granulatum. Antioxidant activity was assayed by different procedures and compared with total phenolic content. First, different carbon and nitrogen sources were screened by classical methods, which revealed sucrose and NaNO3 to be the most suitable. In the second step, a Plackett-Burman design also supported sucrose and NaNO3 as the most significant. In the third step, response surface analysis showed 4.5% sucrose, 0.1% NaNO3, and an incubation temperature of 25°C to be the optimal conditions. Under these conditions, the antioxidant potential assayed through different procedures was 78.2%, 70.1%, and 78.9% scavenging effect for the DPPH radical, ferrous ion, and nitric oxide ion, respectively. The reducing power showed an absorbance of 1.6, with 68.5% activity in the FRAP assay. PMID:23724323

  2. Modeling and multi-response optimization of pervaporation of organic aqueous solutions using desirability function approach.

    PubMed

    Cojocaru, C; Khayet, M; Zakrzewska-Trznadel, G; Jaworska, A

    2009-08-15

    A factorial design of experiments and the desirability function approach have been applied for multi-response optimization of a pervaporation separation process. Two organic aqueous solutions were considered as model mixtures: water/acetonitrile and water/ethanol. Two responses were employed in the multi-response optimization of pervaporation: total permeate flux and organic selectivity. The effects of three experimental factors (feed temperature, initial concentration of organic compound in the feed solution, and downstream pressure) on the pervaporation responses were investigated. The experiments were performed according to a 2(3) full factorial experimental design. The factorial models were obtained from the experimental design and validated statistically by analysis of variance (ANOVA). The spatial representations of the response functions were drawn together with the corresponding contour line plots. The factorial models were used to develop the overall desirability function. In addition, overlap contour plots were presented to identify the desirability zone and to determine the optimum point. The optimal operating conditions were found to be, in the case of the water/acetonitrile mixture, a feed temperature of 55 degrees C, an initial concentration of 6.58%, and a downstream pressure of 13.99 kPa, while for the water/ethanol mixture they were a feed temperature of 55 degrees C, an initial concentration of 4.53%, and a downstream pressure of 9.57 kPa. Under these optimum conditions, an improvement of both the total permeate flux and the selectivity was observed experimentally.
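
    The overall desirability function can be sketched as follows, with invented stand-ins for the fitted factorial models: each response is mapped to a desirability d in [0, 1], the responses are combined by a geometric mean, and the combined score is maximized over the coded factor space.

```python
# Sketch of the Derringer-Suich desirability approach; the response models and
# desirability bounds are invented stand-ins for the fitted 2^3 factorial models.
import numpy as np
from itertools import product

def d_larger_is_better(y, lo, hi, s=1.0):
    """0 below lo, 1 above hi, power ramp in between."""
    return np.clip((y - lo) / (hi - lo), 0.0, 1.0) ** s

def flux(T, c, p):         # total permeate flux model (coded factors in [-1, 1])
    return 5.0 + 2.0 * T - 0.8 * p + 0.3 * T * c

def selectivity(T, c, p):  # organic selectivity model
    return 40.0 + 5.0 * c + 3.0 * p - 2.0 * T * p

best = max(
    (dict(T=T, c=c, p=p) for T, c, p in product(np.linspace(-1, 1, 21), repeat=3)),
    key=lambda x: np.sqrt(d_larger_is_better(flux(**x), 2.0, 8.0) *
                          d_larger_is_better(selectivity(**x), 35.0, 50.0)),
)
print(best)   # factor setting with maximal overall desirability
```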

  3. Fast engineering optimization: A novel highly effective control parameterization approach for industrial dynamic processes.

    PubMed

    Liu, Ping; Li, Guodong; Liu, Xinggao

    2015-09-01

    Control vector parameterization (CVP) is an important approach to engineering optimization for industrial dynamic processes. However, its major defect, the low optimization efficiency caused by repeatedly solving the differential equations in the generated nonlinear programming (NLP) problem, limits its wide application. A novel, highly effective control parameterization approach, fast-CVP, is proposed to improve optimization efficiency for industrial dynamic processes; it employs the costate gradient formula and presents a fast approximate scheme for solving the differential equations in dynamic process simulation. Three well-known engineering optimization benchmark problems for industrial dynamic processes are used as illustrations. The results show that the proposed fast approach achieves fine performance, saving at least 90% of the computation time in contrast to the traditional CVP method, which demonstrates the effectiveness of the proposed fast engineering optimization approach for industrial dynamic processes.

  4. Flower pollination algorithm: A novel approach for multiobjective optimization

    NASA Astrophysics Data System (ADS)

    Yang, Xin-She; Karamanoglu, Mehmet; He, Xingshi

    2014-09-01

    Multiobjective design optimization problems require multiobjective optimization techniques to solve, and it is often very challenging to obtain high-quality Pareto fronts accurately. In this article, the recently developed flower pollination algorithm (FPA) is extended to solve multiobjective optimization problems. The proposed method is used to solve a set of multiobjective test functions and two bi-objective design benchmarks, and a comparison of the proposed algorithm with other algorithms has been made, which shows that the FPA is efficient with a good convergence rate. Finally, the importance for further parametric studies and theoretical analysis is highlighted and discussed.

  5. On the use of computation optimization opportunities in computer technologies for applied and computational mathematics problems with prescribed quality characteristics

    NASA Astrophysics Data System (ADS)

    Babich, M. D.; Zadiraka, V. K.; Lyudvichenko, V. A.; Sergienko, I. V.

    2010-12-01

    The use of various opportunities for computation optimization in computer technologies for applied and computational mathematics problems with prescribed quality characteristics is investigated. More precisely, we investigate the choice and determination of computational resources, and methods for their efficient use, in finding an approximate solution of problems to prescribed accuracy within a limited amount of processor time.

  6. Improved understanding of the searching behavior of ant colony optimization algorithms applied to the water distribution design problem

    NASA Astrophysics Data System (ADS)

    Zecchin, A. C.; Simpson, A. R.; Maier, H. R.; Marchi, A.; Nixon, J. B.

    2012-09-01

    Evolutionary algorithms (EAs) have been applied successfully to many water resource problems, such as system design, management decision formulation, and model calibration. The performance of an EA with respect to a particular problem type is dependent on how effectively its internal operators balance the exploitation/exploration trade-off to iteratively find solutions of an increasing quality. For a given problem, different algorithms are observed to produce a variety of different final performances, but there have been surprisingly few investigations into characterizing how the different internal mechanisms alter the algorithm's searching behavior, in both the objective and decision space, to arrive at this final performance. This paper presents metrics for analyzing the searching behavior of ant colony optimization algorithms, a particular type of EA, for the optimal water distribution system design problem, which is a classical NP-hard problem in civil engineering. Using the proposed metrics, behavior is characterized in terms of three different attributes: (1) the effectiveness of the search in improving its solution quality and entering into optimal or near-optimal regions of the search space, (2) the extent to which the algorithm explores as it converges to solutions, and (3) the searching behavior with respect to the feasible and infeasible regions. A range of case studies is considered, where a number of ant colony optimization variants are applied to a selection of water distribution system optimization problems. The results demonstrate the utility of the proposed metrics to give greater insight into how the internal operators affect each algorithm's searching behavior.

  7. A methodological integrated approach to optimize a hydrogeological engineering work

    NASA Astrophysics Data System (ADS)

    Loperte, A.; Satriani, A.; Bavusi, M.; Cerverizzo, G.

    2012-04-01

    The application of geoelectrical surveys to hydraulic engineering is well known in the literature. However, despite a large number of successful applications, the use of geophysics is still often not considered, for different reasons: poor knowledge of its potential performance, difficulties in practical implementation, and cost limitations. In this work, an integrated study of non-invasive (geoelectrical) and direct surveys is described, aimed at identifying a subsoil foundation where it is possible to set up a watertight concrete structure able to protect the purifier of Senise, a little town in the Basilicata Region (Southern Italy). The purifier, used by several villages, is located in a particularly dangerous hydrogeological position, as it is very close to the Sinni river, which has been dammed for many years by the Monte Cotugno dam. During the rainiest periods, the river could flood the purifier, causing the drainage of waste waters into the Monte Cotugno artificial lake. The purifier is located in Pliocene-Calabrian clay and clay-marly formations covered by an approximately 10 m layer of alluvial gravelly-sandy materials carried by the Sinni river. Electrical resistivity tomography acquired with the Wenner-Schlumberger array proved meaningful for identifying the potential depth of the impermeable clays with high accuracy. In particular, the geoelectrical acquisition, oriented along the long side of the purifier, was carried out using a multielectrode system with 48 electrodes spaced 2 m apart, leading to an achievable investigation depth of about 15 m. Subsequent direct surveys confirmed this depth, so that it was possible to set up the foundation concrete structure with precision to protect the purifier. It is worth noting that the use of this methodological approach has allowed remarkable economic savings, as it made it possible to correct the erroneous information regarding the depth of the impermeable clays obtained previously.

  8. Making Big Data, Safe Data: A Test Optimization Approach

    DTIC Science & Technology

    2016-06-15

    Graduate School of Business & Public Policy, Naval Postgraduate School. Executive summary: the report outlines a procedure and algorithm to optimize the testing of network data. Surviving table-of-contents entries: Step 6: Run Algorithm; Step 7: Perform Suggested Tests.

  9. A multiobjective optimization approach for combating Aedes aegypti using chemical and biological alternated step-size control.

    PubMed

    Dias, Weverton O; Wanner, Elizabeth F; Cardoso, Rodrigo T N

    2015-11-01

    Dengue epidemics, caused by one of the most important viral diseases worldwide, can be prevented by combating the transmission vector Aedes aegypti. In support of this aim, this article analyzes the dengue vector control problem in a multiobjective optimization approach, in which the intention is to minimize both social and economic costs, using a dynamic mathematical model representing the mosquito population. The problem consists of finding optimal alternated step-size control policies combining chemical control (via application of insecticides) and biological control (via insertion of sterile males produced by irradiation). All the optimal policies consist of applying insecticides just at the beginning of the season and then keeping the mosquito population at an acceptable level by releasing a small number of sterile males into the environment. The optimization model analysis is driven by the use of genetic algorithms. Finally, a statistical test shows that the multiobjective approach is effective in achieving the same effect as variations in the cost parameters. Using the proposed methodology, it is thus possible to find in a single run, for a given decision maker, the optimal number of days and the respective amounts in which each control strategy must be applied, according to the tradeoff between using more insecticide with fewer transmitting mosquitoes or more sterile males with more transmitting mosquitoes.

  10. Applying the Taguchi method to river water pollution remediation strategy optimization.

    PubMed

    Yang, Tsung-Ming; Hsu, Nien-Sheng; Chiu, Chih-Chiang; Wang, Hsin-Ju

    2014-04-15

    Optimization methods usually obtain the travel direction of the solution by substituting the solutions into the objective function. However, if the solution space is too large, this search method may be time consuming. In order to address this problem, this study incorporated the Taguchi method into the solution space search process of the optimization method, and used the characteristics of the Taguchi method to sequence the effects of the variation of decision variables on the system. Based on the level of effect, this study determined the impact factor of decision variables and the optimal solution for the model. The integration of the Taguchi method and the solution optimization method successfully obtained the optimal solution of the optimization problem, while significantly reducing the solution computing time and enhancing the river water quality. The results suggested that the basin with the greatest water quality improvement effectiveness is the Dahan River. Under the optimal strategy of this study, the severe pollution length was reduced from 18 km to 5 km.

  12. Multiobjective optimization in a pseudometric objective space as applied to a general model of business activities

    NASA Astrophysics Data System (ADS)

    Khachaturov, R. V.

    2016-09-01

    It is shown that finding the equivalence set for solving multiobjective discrete optimization problems is advantageous over finding the set of Pareto optimal decisions. An example of a set of key parameters characterizing the economic efficiency of a commercial firm is proposed, and a mathematical model of its activities is constructed. In contrast to the classical problem of finding the maximum profit for any business, this study deals with a multiobjective optimization problem. A method for solving inverse multiobjective problems in a multidimensional pseudometric space is proposed for finding the best project of the firm's activities. The solution of a particular problem of this type is presented.

  13. Efficient global optimization applied to wind tunnel evaluation-based optimization for improvement of flow control by plasma actuators

    NASA Astrophysics Data System (ADS)

    Kanazaki, Masahiro; Matsuno, Takashi; Maeda, Kengo; Kawazoe, Hiromitsu

    2015-09-01

    A kriging-based genetic algorithm called efficient global optimization (EGO) was employed to optimize the parameters for the operating conditions of plasma actuators. The aerodynamic performance was evaluated by wind tunnel testing to overcome the disadvantages of time-consuming numerical simulations. The proposed system was used on two design problems to design the power supply for a plasma actuator. The first case was the drag minimization problem around a semicircular cylinder. In this case, the inhibitory effect of flow separation was also observed. The second case was the lift maximization problem around a circular cylinder. This case was similar to the aerofoil design, because the circular cylinder has the potential to work as an aerofoil owing to the control of the flow circulation by the plasma actuators with four design parameters. In this case, applicability to the multi-variant design problem was also investigated. Based on these results, optimum designs and global design information were obtained while drastically reducing the number of experiments required compared to a full factorial experiment.
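
    The EGO loop itself is compact enough to sketch: fit a kriging (Gaussian process) surrogate to the experiments run so far, then pick the next operating condition by maximizing expected improvement. In the sketch below a cheap 1-D function stands in for one wind-tunnel evaluation, and the kernel, grid, and budget are illustrative assumptions, not the paper's setup.

```python
# Minimal EGO loop: Gaussian-process surrogate plus expected improvement (EI).
import numpy as np
from scipy.stats import norm

def wind_tunnel(x):          # stand-in for one experiment (drag to minimize)
    return np.sin(3 * x) + 0.5 * x

def rbf(a, b, length=0.3):
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / length**2)

def gp_posterior(X, y, Xs, noise=1e-6):
    # Posterior mean and std of a zero-mean GP with unit signal variance.
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    v = np.linalg.solve(K, Ks)
    var = np.clip(1.0 - np.sum(Ks * v, axis=0), 1e-12, None)
    return mu, np.sqrt(var)

X = np.array([0.1, 0.5, 0.9])          # initial experiments
y = wind_tunnel(X)
grid = np.linspace(0.0, 1.0, 201)

for _ in range(10):                    # EGO iterations
    mu, sd = gp_posterior(X, y, grid)
    best = y.min()
    z = (best - mu) / sd
    ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)   # EI for minimization
    x_next = grid[np.argmax(ei)]
    X = np.append(X, x_next)
    y = np.append(y, wind_tunnel(x_next))

print("best setting:", X[np.argmin(y)], "drag:", y.min())
```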

  14. Optimism

    PubMed Central

    Carver, Charles S.; Scheier, Michael F.; Segerstrom, Suzanne C.

    2010-01-01

    Optimism is an individual difference variable that reflects the extent to which people hold generalized favorable expectancies for their future. Higher levels of optimism have been related prospectively to better subjective well-being in times of adversity or difficulty (i.e., controlling for previous well-being). Consistent with such findings, optimism has been linked to higher levels of engagement coping and lower levels of avoidance, or disengagement, coping. There is evidence that optimism is associated with taking proactive steps to protect one's health, whereas pessimism is associated with health-damaging behaviors. Consistent with such findings, optimism is also related to indicators of better physical health. The energetic, task-focused approach that optimists take to goals also relates to benefits in the socioeconomic world. Some evidence suggests that optimism relates to more persistence in educational efforts and to higher later income. Optimists also appear to fare better than pessimists in relationships. Although there are instances in which optimism fails to convey an advantage, and instances in which it may convey a disadvantage, those instances are relatively rare. In sum, the behavioral patterns of optimists appear to provide models of living for others to learn from. PMID:20170998

  15. An approach for evaluating the integrity of fuel applied in Innovative Nuclear Energy Systems

    NASA Astrophysics Data System (ADS)

    Nakae, Nobuo; Ozawa, Takayuki; Ohta, Hirokazu; Ogata, Takanari; Sekimoto, Hiroshi

    2014-03-01

    One of the important issues in the study of Innovative Nuclear Energy Systems is evaluating the integrity of the fuel applied in such systems. An approach for evaluating the integrity of the fuel is discussed here based on the procedure currently used in the integrity evaluation of fast reactor fuel. The fuel failure modes that determine fuel lifetime were reviewed, and fuel integrity was analyzed and compared against the failure criteria.

  16. Parallel genetic algorithm with population-based sampling approach to discrete optimization under uncertainty

    NASA Astrophysics Data System (ADS)

    Subramanian, Nithya

    Optimization under uncertainty accounts for design variables and external parameters or factors with probabilistic distributions instead of fixed deterministic values; it enables problem formulations that might maximize or minimize an expected value while satisfying constraints using probabilities. For discrete optimization under uncertainty, a Monte Carlo Sampling (MCS) approach enables high-accuracy estimation of expectations but it also results in high computational expense. The Genetic Algorithm (GA) with a Population-Based Sampling (PBS) technique enables optimization under uncertainty with discrete variables at a lower computational expense than using Monte Carlo sampling for every fitness evaluation. Population-Based Sampling uses fewer samples in the exploratory phase of the GA and a larger number of samples when `good designs' start emerging over the generations. This sampling technique therefore reduces the computational effort spent on `poor designs' found in the initial phase of the algorithm. Parallel computation evaluates the expected value of the objective and constraints in parallel to facilitate reduced wall-clock time. A customized stopping criterion is also developed for the GA with Population-Based Sampling. The stopping criterion requires that the design with the minimum expected fitness value to have at least 99% constraint satisfaction and to have accumulated at least 10,000 samples. The average change in expected fitness values in the last ten consecutive generations is also monitored. The optimization of composite laminates using ply orientation angle as a discrete variable provides an example to demonstrate further developments of the GA with Population-Based Sampling for discrete optimization under uncertainty. The focus problem aims to reduce the expected weight of the composite laminate while treating the laminate's fiber volume fraction and externally applied loads as uncertain quantities following normal distributions. Construction of
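
    The sampling schedule is the essence of the technique and is easy to sketch: a GA whose per-evaluation Monte Carlo budget grows with the generation count, so early exploration stays cheap. The discrete design vector and noisy fitness below are illustrative stand-ins for the laminate problem, not the thesis implementation.

```python
# Sketch of population-based sampling: few Monte Carlo samples in early
# generations, more as good designs emerge, so 'poor designs' never
# consume a full MCS budget.
import random
import statistics

LEVELS = [0, 15, 30, 45, 60, 75, 90]          # e.g., ply angles (discrete)

def noisy_fitness(design, n_samples):
    # Expected value estimated from n_samples draws of the uncertain inputs.
    vals = []
    for _ in range(n_samples):
        load = random.gauss(1.0, 0.1)          # uncertain external load
        vals.append(sum((a - 45)**2 for a in design) * load)
    return statistics.mean(vals)

def ga_pbs(n_gen=40, pop_size=30, n_var=4):
    pop = [[random.choice(LEVELS) for _ in range(n_var)] for _ in range(pop_size)]
    for gen in range(n_gen):
        # Sample budget grows with generation: exploration stays cheap.
        n_samples = 10 + int(490 * gen / (n_gen - 1))
        scored = sorted(pop, key=lambda d: noisy_fitness(d, n_samples))
        elite = scored[: pop_size // 2]
        # Refill the population by mutating elite designs.
        pop = elite + [
            [random.choice(LEVELS) if random.random() < 0.1 else g
             for g in random.choice(elite)]
            for _ in range(pop_size - len(elite))
        ]
    return min(pop, key=lambda d: noisy_fitness(d, 1000))

print(ga_pbs())
```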

  17. Taguchi Method Applied in Optimization of Shipley SJR 5740 Positive Resist Deposition

    NASA Technical Reports Server (NTRS)

    Hui, A.; Blosiu, J. O.; Wiberg, D. V.

    1998-01-01

    Taguchi Methods of Robust Design presents a way to optimize output process performance through an organized set of experiments by using orthogonal arrays. Analysis of variance and the signal-to-noise ratio are used to evaluate the contribution of each of the process controllable parameters in the realization of the process optimization. In the photoresist deposition process, there are numerous controllable parameters that can affect the surface quality and thickness of the final photoresist layer.

  18. On the combination of c- and D-optimal designs: General approaches and applications in dose-response studies.

    PubMed

    Holland-Letz, Tim

    2017-03-01

    Dose-response modeling in areas such as toxicology is often conducted using a parametric approach. While estimation of parameters is usually one of the goals, often the main aim of the study is the estimation of quantities derived from the parameters, such as the ED50 dose. From the view of statistical optimal design theory such an objective corresponds to a c-optimal design criterion. Unfortunately, c-optimal designs often create practical problems, and furthermore commonly do not allow actual estimation of the parameters. It is therefore useful to consider alternative designs which show good c-performance, while still being applicable in practice and allowing reasonably good general parameter estimation. In effect, using optimal design terminology this means that a reasonable performance regarding the D-criterion is expected as well. In this article, we propose several approaches to the task of combining c- and D-efficient designs, such as using mixed information functions or setting minimum requirements regarding either c- or D-efficiency, and show how to algorithmically determine optimal designs in each case. We apply all approaches to a standard situation from toxicology, and obtain a much better balance between c- and D-performance. Next, we investigate how to adapt the designs to different parameter values. Finally, we show that the methodology used here is not just limited to the combination of c- and D-designs, but can also be used to handle more general constraint situations such as limits on the cost of an experiment.

  19. An iterative approach to optimize change classification in SAR time series data

    NASA Astrophysics Data System (ADS)

    Boldt, Markus; Thiele, Antje; Schulz, Karsten; Hinz, Stefan

    2016-10-01

    The detection of changes using remote sensing imagery has become a broad field of research with many approaches for many different applications. Besides the simple detection of changes between at least two images acquired at different times, analyses which aim at the change type or category are at least equally important. In this study, an approach for a semi-automatic classification of change segments is presented. A sparse dataset is considered to ensure fast and simple applicability for practical issues. The dataset is given by 15 high resolution (HR) TerraSAR-X (TSX) amplitude images acquired over a time period of one year (11/2013 to 11/2014). The scenery contains the airport of Stuttgart (GER) and its surroundings, including urban, rural, and suburban areas. Time series imagery offers the advantage of analyzing the change frequency of selected areas. In this study, the focus is set on the analysis of small-sized, frequently changing regions like parking areas, construction sites, and collecting points consisting of high activity (HA) change objects. For each HA change object, suitable features are extracted and a k-means clustering is applied as the categorization step, as in the sketch below. Resulting clusters are finally compared to a previously introduced knowledge-based class catalogue, which is modified until an optimal class description results. In other words, the subjective understanding of the scenery semantics is optimized against the reality given by the data. In this way, even a sparse dataset containing only amplitude imagery can be evaluated without requiring comprehensive training datasets. Falsely defined classes might be rejected. Furthermore, classes which were defined too coarsely might be divided into sub-classes. Conversely, classes which were initially defined too narrowly might be merged. An optimal classification results when the combination of previously defined key indicators (e.g., number of clusters per class) reaches an optimum.
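
    A minimal version of that categorization step, assuming a handful of hand-picked object features and a fixed k; the feature names, values, and choice of k are illustrative, not the study's catalogue.

```python
# Describe each high-activity change object by a few features, cluster with
# k-means, and inspect the clusters against a class catalogue.
import numpy as np
from sklearn.cluster import KMeans

# One row per HA change object: [area_m2, change_frequency, mean_amplitude].
features = np.array([
    [120.0, 0.8, 0.35], [150.0, 0.7, 0.40], [2000.0, 0.2, 0.60],
    [90.0, 0.9, 0.30], [1800.0, 0.3, 0.55], [300.0, 0.5, 0.45],
])

# Standardize so no single feature dominates the Euclidean distance.
z = (features - features.mean(axis=0)) / features.std(axis=0)

k = 2  # candidate number of clusters; varied until the catalogue fits
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(z)

for cluster in range(k):
    members = features[labels == cluster]
    print(f"cluster {cluster}: {len(members)} objects, "
          f"mean area {members[:, 0].mean():.0f} m^2")
```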

  20. Modeling Training Site Vegetation Coverage Probability with a Random Optimizing Procedure: An Artificial Neural Network Approach.

    DTIC Science & Technology

    1998-05-01

    Modeling Training Site Vegetation Coverage Probability with a Random Optimizing Procedure: An Artificial Neural Network Approach, by Biing T. Guan, George Z. Gertner, and Alan B... ...coverage based on past coverage. Approach: A literature survey was conducted to identify artificial neural network analysis techniques applicable for

  1. Computational approach to quantum encoder design for purity optimization

    SciTech Connect

    Yamamoto, Naoki; Fazel, Maryam

    2007-07-15

    In this paper, we address the problem of designing a quantum encoder that maximizes the minimum output purity of a given decohering channel, where the minimum is taken over all possible pure inputs. This problem is cast as a max-min optimization problem with a rank constraint on an appropriately defined matrix variable. The problem is computationally very hard because it is nonconvex with respect to both the objective function (output purity) and the rank constraint. Despite this difficulty, we provide a tractable computational algorithm that produces the exact optimal solution for a codespace of dimension 2. Moreover, this algorithm is easily extended to cover the general class of codespaces, in which case the solution is suboptimal in the sense that the suboptimized output purity serves as a lower bound of the exact optimal purity. The algorithm consists of a sequence of semidefinite programs and can be performed easily. Two typical quantum error channels are investigated to illustrate the effectiveness of our method.

  2. A correlation consistency based multivariate alarm thresholds optimization approach.

    PubMed

    Gao, Huihui; Liu, Feifei; Zhu, Qunxiong

    2016-11-01

    Different alarm thresholds could generate different alarm data, resulting in different correlations. A new multivariate alarm thresholds optimization methodology based on the correlation consistency between process data and alarm data is proposed in this paper. The interpretative structural modeling is adopted to select the key variables. For the key variables, the correlation coefficients of process data are calculated by the Pearson correlation analysis, while the correlation coefficients of alarm data are calculated by kernel density estimation. To ensure the correlation consistency, the objective function is established as the sum of the absolute differences between these two types of correlations. The optimal thresholds are obtained using a particle swarm optimization algorithm. A case study of the Tennessee Eastman process is given to demonstrate the effectiveness of the proposed method.
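
    The optimization loop is easy to sketch under simplifying assumptions: synthetic process data, Pearson correlation for both the process data and the binarized alarm data (where the paper uses kernel density estimation for the latter), and a plain particle swarm over the thresholds. Everything below is illustrative, not the paper's Tennessee Eastman setup.

```python
# Choose alarm thresholds so the correlation structure of the binary alarm
# data matches that of the process data, searched with a basic PSO.
import numpy as np

rng = np.random.default_rng(0)
n, n_var = 2000, 3
process = rng.multivariate_normal(
    mean=[0, 0, 0],
    cov=[[1.0, 0.7, 0.2], [0.7, 1.0, 0.4], [0.2, 0.4, 1.0]], size=n)
target_corr = np.corrcoef(process.T)

def objective(thresholds):
    alarms = (process > thresholds).astype(float)      # 1 = alarm raised
    if alarms.std(axis=0).min() == 0:                  # degenerate threshold
        return np.inf
    alarm_corr = np.corrcoef(alarms.T)
    return np.abs(alarm_corr - target_corr).sum()      # correlation mismatch

# Plain PSO over the three thresholds.
n_particles, iters = 30, 100
x = rng.uniform(-1, 2, (n_particles, n_var))
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
g = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
    x = x + v
    f = np.array([objective(p) for p in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    g = pbest[pbest_f.argmin()].copy()

print("optimal thresholds:", np.round(g, 3))
```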

  3. A General Multidisciplinary Turbomachinery Design Optimization system Applied to a Transonic Fan

    NASA Astrophysics Data System (ADS)

    Nemnem, Ahmed Mohamed Farid

    The blade geometry design process is integral to the development and advancement of compressors and turbines in gas generators or aeroengines. A new airfoil section design capability has been added to an open source parametric 3D blade design tool. Curvature of the meanline is controlled using B-splines to create the airfoils. The curvature is analytically integrated to derive the angles, and the meanline is obtained by integrating the angles (see the sketch below). A smooth thickness distribution is then added to the airfoil to guarantee a smooth shape while maintaining a prescribed thickness distribution. A leading edge B-spline definition has also been implemented to achieve customized airfoil leading edges, which guarantees smoothness with parametric eccentricity and droop. An automated turbomachinery design and optimization system has been created. An existing splittered transonic fan is used as a test and reference case. This design was more general than a conventional design, giving access to the other design methodology. The whole mechanical and aerodynamic design loops are automated for the optimization process. The flow path and the geometrical properties of the rotor are initially created using the axi-symmetric design and analysis code (T-AXI). The main and splitter blades are parametrically designed with the created geometry builder (3DBGB) using the newly added features (curvature technique). The solid model of the rotor sector with periodic boundaries, combining the main blade and splitter and including the hub, fillets, and tip clearance, is created using MATLAB code directly connected to SolidWorks. A mechanical optimization is performed with DAKOTA (developed by DOE) to reduce the mass of the blades while keeping maximum stress as a constraint with a safety factor. A genetic algorithm followed by a numerical gradient strategy is used in the mechanical optimization. The splittered transonic fan blade mass is reduced by 2.6% while constraining the maximum
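
    The curvature-to-meanline construction mentioned above amounts to two cumulative integrations. The sketch below uses a simple polynomial in place of the B-spline curvature and an assumed inlet angle; all values are illustrative.

```python
# Integrate a prescribed meanline curvature once to get the blade angle
# distribution and again to get the meanline itself.
import numpy as np

m = np.linspace(0.0, 1.0, 201)                 # normalized meridional coord.
curvature = 2.0 - 4.0 * m                      # stand-in for a B-spline curve

# First integration: curvature -> blade angle (radians), given an inlet
# angle; cumulative trapezoidal rule.
inlet_angle = np.deg2rad(55.0)
angle = inlet_angle + np.concatenate(
    ([0.0], np.cumsum(0.5 * (curvature[1:] + curvature[:-1]) * np.diff(m))))

# Second integration: slope tan(angle) -> meanline y(m).
slope = np.tan(angle)
y = np.concatenate(
    ([0.0], np.cumsum(0.5 * (slope[1:] + slope[:-1]) * np.diff(m))))

print("outlet angle [deg]:", np.rad2deg(angle[-1]).round(2))
print("meanline camber height:", y.max().round(4))
```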

  4. Assessing switchability for biosimilar products: modelling approaches applied to children's growth.

    PubMed

    Belleli, Rossella; Fisch, Roland; Renard, Didier; Woehling, Heike; Gsteiger, Sandro

    2015-01-01

    The present paper describes two statistical modelling approaches that have been developed to demonstrate switchability from the original recombinant human growth hormone (rhGH) formulation (Genotropin(®) ) to a biosimilar product (Omnitrope(®) ) in children suffering from growth hormone deficiency. Demonstrating switchability between rhGH products is challenging because the process of growth varies with the age of the child and across children. The first modelling approach aims at predicting individual height measured at several time-points after switching to the biosimilar. The second modelling approach provides an estimate of the deviation from the overall growth rate after switching to the biosimilar, which can be regarded as an estimate of switchability. The results after applying these approaches to data from a randomized clinical trial are presented. The accuracy and precision of the predictions made using the first approach and the small deviation from switchability estimated with the second approach provide sufficient evidence to conclude that switching from Genotropin(®) to Omnitrope(®) has a very small effect on growth, which is neither statistically significant nor clinically relevant.

  5. A new multiresponse optimization approach in combination with a D-Optimal experimental design for the determination of biogenic amines in fish by HPLC-FLD.

    PubMed

    Herrero, A; Sanllorente, S; Reguera, C; Ortiz, M C; Sarabia, L A

    2016-11-16

    A new strategy to approach multiresponse optimization in conjunction with a D-optimal design for simultaneously optimizing a large number of experimental factors is proposed. The procedure is applied to the determination of biogenic amines (histamine, putrescine, cadaverine, tyramine, tryptamine, 2-phenylethylamine, spermine and spermidine) in swordfish by HPLC-FLD after extraction with an acid and subsequent derivatization with dansyl chloride. Firstly, the extraction from a solid matrix and the derivatization of the extract are optimized. Ten experimental factors involved in both stages are studied, seven of them at two levels and the remaining three at three levels; the use of a D-optimal design makes it possible to optimize the ten experimental variables, reducing the experimental effort needed by a factor of 67 while guaranteeing the quality of the estimates. A model with 19 coefficients, which includes those corresponding to the main effects and two possible interactions, is fitted to the peak area of each amine. Then, the validated models are used to predict the response (peak area) of the 3456 experiments of the complete factorial design. The variability among peak areas ranges from 13.5 for 2-phenylethylamine to 122.5 for spermine, which shows, to a certain extent, the high and varied effect of the pretreatment on the responses. Percentiles are then calculated from the peak areas of each amine. As the experimental conditions are in conflict, the optimal solution for the multiresponse optimization is chosen from among those which have all the responses greater than a certain percentile for all the amines. The developed procedure reaches decision limits down to 2.5 μg L⁻¹ for cadaverine or 497 μg L⁻¹ for histamine in solvent, and 0.07 mg kg⁻¹ and 14.81 mg kg⁻¹ in fish (probability of false positive equal to 0.05), respectively.

  6. Behavioral Language Interventions for Children with Autism: Comparing Applied Verbal Behavior and Naturalistic Teaching Approaches

    PubMed Central

    LeBlanc, Linda A; Esch, John; Sidener, Tina M; Firth, Amanda M

    2006-01-01

    Several important behavioral intervention models have been developed for teaching language to children with autism and two are compared in this paper. Professionals adhering to Skinner's conceptualization of language refer to their curriculum and intervention programming as applied verbal behavior (AVB). Those primarily focused on developing and using strategies embedded in natural settings that promote generalization refer to their interventions as naturalistic teaching approaches (NTAs). The purpose of this paper is to describe each approach and discuss similarities and differences in terms of relevant dimensions of stimulus control. The discussion includes potential barriers to translation of terminology between the two approaches that we feel can be overcome to allow better communication and collaboration between the two communities. Common naturalistic teaching procedures are described and a Skinnerian conceptualization of these learning events is provided. PMID:22477343

  7. Behavioral language interventions for children with autism: comparing applied verbal behavior and naturalistic teaching approaches.

    PubMed

    Leblanc, Linda A; Esch, John; Sidener, Tina M; Firth, Amanda M

    2006-01-01

    Several important behavioral intervention models have been developed for teaching language to children with autism and two are compared in this paper. Professionals adhering to Skinner's conceptualization of language refer to their curriculum and intervention programming as applied verbal behavior (AVB). Those primarily focused on developing and using strategies embedded in natural settings that promote generalization refer to their interventions as naturalistic teaching approaches (NTAs). The purpose of this paper is to describe each approach and discuss similarities and differences in terms of relevant dimensions of stimulus control. The discussion includes potential barriers to translation of terminology between the two approaches that we feel can be overcome to allow better communication and collaboration between the two communities. Common naturalistic teaching procedures are described and a Skinnerian conceptualization of these learning events is provided.

  8. Optimized approach to decision fusion of heterogeneous data for breast cancer diagnosis

    SciTech Connect

    Jesneck, Jonathan L.; Nolte, Loren W.; Baker, Jay A.; Floyd, Carey E.; Lo, Joseph Y.

    2006-08-15

    As more diagnostic testing options become available to physicians, it becomes more difficult to combine various types of medical information together in order to optimize the overall diagnosis. To improve diagnostic performance, here we introduce an approach to optimize a decision-fusion technique to combine heterogeneous information, such as from different modalities, feature categories, or institutions. For classifier comparison we used two performance metrics: the area under the receiver operating characteristic (ROC) curve (AUC) and the normalized partial area under the curve (pAUC). This study used four classifiers: linear discriminant analysis (LDA), artificial neural network (ANN), and two variants of our decision-fusion technique, AUC-optimized (DF-A) and pAUC-optimized (DF-P) decision fusion. We applied each of these classifiers with 100-fold cross-validation to two heterogeneous breast cancer data sets: one of mass lesion features and a much more challenging one of microcalcification lesion features. For the calcification data set, DF-A outperformed the other classifiers in terms of AUC (p<0.02) and achieved AUC=0.85±0.01. The DF-P surpassed the other classifiers in terms of pAUC (p<0.01) and reached pAUC=0.38±0.02. For the mass data set, DF-A outperformed both the ANN and the LDA (p<0.04) and achieved AUC=0.94±0.01. Although for this data set there were no statistically significant differences among the classifiers' pAUC values (pAUC=0.57±0.07 to 0.67±0.05, p>0.10), the DF-P did significantly improve specificity versus the LDA at both 98% and 100% sensitivity (p<0.04). In conclusion, decision fusion directly optimized clinically significant performance measures, such as AUC and pAUC, and sometimes outperformed two well-known machine-learning techniques when applied to two different breast cancer data sets.

  9. Electromagnetic integral equation approach based on contraction operator and solution optimization in Krylov subspace

    NASA Astrophysics Data System (ADS)

    Singer, B. Sh.

    2008-12-01

    The paper presents a new code for modelling electromagnetic fields in complicated 3-D environments and provides examples of the code application. The code is based on an integral equation (IE) for the scattered electromagnetic field, presented in the form used by the Modified Iterative Dissipative Method (MIDM). This IE possesses contraction properties that allow it to be solved iteratively. As a result, for an arbitrary earth model and any source of the electromagnetic field, the sequence of approximations converges to the solution at any frequency. The system of linear equations that represents a finite-dimensional counterpart of the continuous IE is derived using a projection definition of the system matrix. According to this definition, the matrix is calculated by integrating the Green's function over the `source' and `receiver' cells of the numerical grid. Such a system preserves contraction properties of the continuous equation and can be solved using the same iterative technique. The condition number of the system matrix and, therefore, the convergence rate depends only on the physical properties of the model under consideration. In particular, these parameters remain independent of the numerical grid used for numerical simulation. Applied to the system of linear equations, the iterative perturbation approach generates a sequence of approximations, converging to the solution. The number of iterations is significantly reduced by finding the best possible approximant inside the Krylov subspace, which spans either all accumulated iterates or, if it is necessary to save the memory, only a limited number of the latest iterates. Optimization significantly reduces the number of iterates and weakens its dependence on the lateral contrast of the model. Unlike more traditional conjugate gradient approaches, the iterations are terminated when the approximate solution reaches the requested relative accuracy. The number of the required iterates, which for simple

  10. Bioassay case study applying the maximin D-optimal design algorithm to the four-parameter logistic model.

    PubMed

    Coffey, Todd

    2015-01-01

    Cell-based potency assays play an important role in the characterization of biopharmaceuticals but they can be challenging to develop in part because of greater inherent variability than other analytical methods. Our objective is to select concentrations on a dose-response curve that will enhance assay robustness. We apply the maximin D-optimal design concept to the four-parameter logistic (4 PL) model and then derive and compute the maximin D-optimal design for a challenging bioassay using curves representative of assay variation. The selected concentration points from this 'best worst case' design adequately fit a variety of 4 PL shapes and demonstrate improved robustness.
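
    For reference, the four-parameter logistic model underlying the design problem, with a least-squares fit to synthetic data; the concentrations, parameter values, and noise level below are illustrative assumptions, not the assay in the paper.

```python
# The 4PL dose-response model and a least-squares fit to synthetic data.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, lower, upper, ec50, hill):
    """Response at concentration x: asymptotes, midpoint, and slope."""
    return lower + (upper - lower) / (1.0 + (x / ec50) ** (-hill))

conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
rng = np.random.default_rng(1)
resp = four_pl(conc, 5.0, 95.0, 4.0, 1.2) + rng.normal(0, 3, conc.size)

popt, pcov = curve_fit(four_pl, conc, resp,
                       p0=[min(resp), max(resp), 5.0, 1.0])
print("lower, upper, EC50, hill:", np.round(popt, 2))
```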

  11. Applying genetic algorithms to space optimization decision of farmland bio-energy intensive utilization

    NASA Astrophysics Data System (ADS)

    Wang, Fang; Li, Xia; Zhuo, Li; Tao, Haiyan; Xia, Lihua

    2008-10-01

    The development of bio-energy intensive utilization of farmland is an important way to address China's emerging energy and environmental issues. Given that the spatial distribution of bio-energy is scattered rather than continuous, the intensive utilization of farmland bio-energy differs from that of traditional energy sources such as coal, oil, and natural gas. The estimation of biomass, its spatial distribution, and the space optimization study are the keys for practical applications to develop bio-energy intensive utilization. Based on a case study conducted in Guangdong province, China, this paper provides a framework that quickly estimates available biomass and analyzes its distribution pattern in the established NPP model; it also builds the primary collection ranges by Thiessen polygons at different scales. The application of Genetic Algorithms (GA) to the optimization and space decision of bio-energy intensive utilization is one of the key deliveries. The result shows that the GA and GIS integration model for resolving domain-point supply and field demand has obvious advantages. A key finding is that the model simulation results are strongly affected by the modifiable areal unit problem (MAUP). When a Thiessen polygon scale with a 10 km proximal threshold is established as the primary collecting scope of bio-energy, the fitness value can be maximized in the optimization process. In short, the optimized model can provide an effective solution to farmland bio-energy spatial optimization.

  12. An Optimal Design Approach to Criterion-Referenced Computerized Testing

    ERIC Educational Resources Information Center

    Wiberg, Marie

    2003-01-01

    A criterion-referenced computerized test is expressed as a statistical hypothesis problem. This admits that it can be studied by using the theory of optimal design. The power function of the statistical test is used as a criterion function when designing the test. A formal proof is provided showing that all items should have the same item…

  13. A Regression Design Approach to Optimal and Robust Spacing Selection.

    DTIC Science & Technology

    1981-07-01

    Department of Statistics, Southern Methodist University. ...such as the Cauchy, where A is a constant multiple of the identity. In fact, for the Cauchy distribution asymptotically optimal spacing sequences for

  14. An Optimal Foraging Approach to Information Seeking and Use.

    ERIC Educational Resources Information Center

    Sandstrom, Pamela Effrein

    1994-01-01

    Explores optimal foraging theory, derived from evolutionary ecology, for its potential to clarify and operationalize studies of scholarly communication. Metaphorical parallels between subsistence foragers and scholarly information seekers are drawn. Hypotheses to test the models are recommended. The place of ethnographic and bibliometric…

  15. A Simulation of Optimal Foraging: The Nuts and Bolts Approach.

    ERIC Educational Resources Information Center

    Thomson, James D.

    1980-01-01

    Presents a mechanical model for an ecology laboratory that introduces the concept of optimal foraging theory. Describes the physical model which includes a board studded with protruding machine bolts that simulate prey, and blindfolded students who simulate either generalist or specialist predator types. Discusses the theoretical model and data…

  16. A procedure for specimen optimization applied to material testing in plasticity with the virtual fields method

    NASA Astrophysics Data System (ADS)

    Rossi, Marco; Badaloni, Michele; Lava, Pascal; Debruyne, Dimitri; Pierron, Fabrice

    2016-10-01

    The paper presents a numerical procedure to design an optimal geometry for specimens that will be used to identify the hardening behaviour of sheet metals with the virtual fields method (VFM). The procedure relies on a test simulator able to generate synthetic images similar to the ones obtained during an actual test. Digital image correlation (DIC) was used to obtain the strain field; the constitutive parameters were then identified with the VFM and compared with the reference ones. A parametric study was conducted on different types of notched specimens, and an optimal configuration was eventually identified.

  17. Optimizing technology investments: a broad mission model approach

    NASA Technical Reports Server (NTRS)

    Shishko, R.

    2003-01-01

    A long-standing problem in NASA is how to allocate scarce technology development resources across advanced technologies in order to best support a large set of future potential missions. Within NASA, two orthogonal paradigms have received attention in recent years: the real-options approach and the broad mission model approach. This paper focuses on the latter.

  18. An inverse dynamics approach to trajectory optimization and guidance for an aerospace plane

    NASA Technical Reports Server (NTRS)

    Lu, Ping

    1992-01-01

    The optimal ascent problem for an aerospace plane is formulated as an optimal inverse dynamics problem. Both minimum-fuel and minimax types of performance indices are considered. Some important features of the optimal trajectory and controls are used to construct a nonlinear feedback midcourse controller, which not only greatly simplifies the difficult constrained optimization problem and yields improved solutions, but is also suited for onboard implementation. Robust ascent guidance is obtained by using a combination of feedback compensation and onboard generation of control through the inverse dynamics approach. Accurate orbital insertion can be achieved with near-optimal control of the rocket through inverse dynamics even in the presence of disturbances.

  19. Using genomic prediction to characterize environments and optimize prediction accuracy in applied breeding data

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Simulation and empirical studies of genomic selection (GS) show accuracies sufficient to generate rapid annual genetic gains. It also shifts the focus from the evaluation of lines to the evaluation of alleles. Consequently, new methods should be developed to optimize the use of large historic multi-...

  20. A new approach for structural health monitoring by applying anomaly detection on strain sensor data

    NASA Astrophysics Data System (ADS)

    Trichias, Konstantinos; Pijpers, Richard; Meeuwissen, Erik

    2014-03-01

    Structural Health Monitoring (SHM) systems help to monitor critical infrastructures (bridges, tunnels, etc.) remotely and provide up-to-date information about their physical condition. In addition, they help to predict the structure's life and required maintenance in a cost-efficient way. Typically, inspection data give insight into the structural health. The global structural behavior, and predominantly the structural loading, is generally measured with vibration and strain sensors. Acoustic emission sensors are increasingly used for measuring global crack activity near critical locations. In this paper, we present a procedure for local structural health monitoring by applying Anomaly Detection (AD) on strain sensor data for sensors that are applied along the expected crack path. Sensor data are analyzed by automatic anomaly detection in order to find crack activity at an early stage. This approach targets the monitoring of critical structural locations, such as welds, near which strain sensors can be applied during construction, and/or locations with limited inspection possibilities during structural operation. We investigate several anomaly detection techniques to detect changes in statistical properties, indicating structural degradation. The most effective one is a novel polynomial fitting technique, which tracks slow changes in sensor data. Our approach has been tested on a representative test structure (bridge deck) in a lab environment, under constant and variable amplitude fatigue loading. In both cases, the evolving cracks at the monitored locations were successfully detected, autonomously, by our AD monitoring tool.
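
    A minimal sketch of the polynomial-fitting idea, assuming a synthetic strain trace with a slow drift injected halfway through; the window length, polynomial order, and MAD-based threshold are illustrative choices, not the paper's tuned detector.

```python
# Fit a low-order polynomial to sliding windows of strain samples and flag
# windows whose trend deviates from the undamaged baseline.
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(2000)
strain = 100.0 + 0.001 * t + rng.normal(0, 0.5, t.size)
strain[1000:] += 0.01 * (t[1000:] - 1000)      # onset of crack-like drift

window, order = 200, 2
slopes = []
for start in range(0, t.size - window, window // 2):   # 50% overlap
    seg = slice(start, start + window)
    coeffs = np.polyfit(t[seg], strain[seg], order)
    slopes.append(coeffs[-2])                  # linear-term coefficient

slopes = np.array(slopes)
baseline = np.median(slopes[:5])               # assume an undamaged start
mad = np.median(np.abs(slopes[:5] - baseline)) + 1e-12
anomalous = np.abs(slopes - baseline) > 10 * mad
print("first anomalous window index:", int(np.argmax(anomalous)))
```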

  1. An Implementation of a Mathematical Programming Approach to Optimal Enrollments. AIR 2001 Annual Forum Paper.

    ERIC Educational Resources Information Center

    DePaolo, Concetta A.

    This paper explores the application of a mathematical optimization model to the problem of optimal enrollments. The general model, which can be applied to any institution, seeks to enroll the "best" class of students (as defined by the institution) subject to constraints imposed on the institution (e.g., capacity, quality). Topics…

  2. Safe microburst penetration techniques: A deterministic, nonlinear, optimal control approach

    NASA Technical Reports Server (NTRS)

    Psiaki, Mark L.

    1987-01-01

    A relatively large amount of computer time was used for the calculation of an optimal trajectory, but it is subject to reduction with moderate effort. The Deterministic, Nonlinear, Optimal Control algorithm yielded excellent aircraft performance in trajectory tracking for the given microburst. It did so by varying the angle of attack to counteract the lift effects of microburst-induced airspeed variations. Throttle saturation and aerodynamic stall limits were not a problem for the case considered, proving that the aircraft's performance capabilities were not violated by the given wind field. All closed-loop control laws previously considered performed very poorly in comparison, and therefore do not come near to taking full advantage of aircraft performance.

  3. A thermodynamic approach to the affinity optimization of drug candidates.

    PubMed

    Freire, Ernesto

    2009-11-01

    High throughput screening and other techniques commonly used to identify lead candidates for drug development usually yield compounds with binding affinities to their intended targets in the mid-micromolar range. The affinity of these molecules needs to be improved by several orders of magnitude before they become viable drug candidates. Traditionally, this task has been accomplished by establishing structure activity relationships to guide chemical modifications and improve the binding affinity of the compounds. As the binding affinity is a function of two quantities, the binding enthalpy and the binding entropy, it is evident that a more efficient optimization would be accomplished if both quantities were considered and improved simultaneously. Here, an optimization algorithm based upon enthalpic and entropic information generated by Isothermal Titration Calorimetry is presented.
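
    The decomposition the abstract relies on is the standard thermodynamic one relating affinity to its enthalpic and entropic components:

```latex
% Binding free energy determines the association constant; improving
% \Delta H and -T\Delta S simultaneously therefore improves K_a more
% efficiently than optimizing either term alone.
\begin{align}
  \Delta G &= \Delta H - T\,\Delta S \\
  \Delta G &= -RT \ln K_a
  \quad\Longrightarrow\quad
  K_a = e^{-(\Delta H - T\Delta S)/RT}
\end{align}
```

    Since the association constant depends exponentially on the sum of the enthalpic and entropic terms, gains in the two quantities multiply rather than add, which is the rationale for optimizing both simultaneously.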

  4. A genetic algorithm approach in interface and surface structure optimization

    SciTech Connect

    Zhang, Jian

    2010-01-01

    The thesis is divided into two parts. In the first part, a global optimization method is developed for interface and surface structure optimization. Two prototype systems are chosen to be studied: one is Si[001] symmetric tilt grain boundaries and the other is the Ag/Au induced Si(111) surface. It is found that the Genetic Algorithm is very efficient in finding lowest-energy structures in both cases. Not only can existing structures from the experiments be reproduced, but many new structures can also be predicted using the Genetic Algorithm. It is thus shown that the Genetic Algorithm is an extremely powerful tool for material structure prediction. The second part of the thesis is devoted to the explanation of an experimental observation of thermal radiation from three-dimensional tungsten photonic crystal structures. The experimental results seemed astounding and confusing, yet the theoretical models in the paper reveal the physical insight behind the phenomena and reproduce the experimental results well.

  5. A free boundary approach to shape optimization problems

    PubMed Central

    Bucur, D.; Velichkov, B.

    2015-01-01

    The analysis of shape optimization problems involving the spectrum of the Laplace operator, such as isoperimetric inequalities, has known in recent years a series of interesting developments essentially as a consequence of the infusion of free boundary techniques. The main focus of this paper is to show how the analysis of a general shape optimization problem of spectral type can be reduced to the analysis of particular free boundary problems. In this survey article, we give an overview of some very recent technical tools, the so-called shape sub- and supersolutions, and show how to use them for the minimization of spectral functionals involving the eigenvalues of the Dirichlet Laplacian, under a volume constraint. PMID:26261362

  6. A free boundary approach to shape optimization problems.

    PubMed

    Bucur, D; Velichkov, B

    2015-09-13

    The analysis of shape optimization problems involving the spectrum of the Laplace operator, such as isoperimetric inequalities, has known in recent years a series of interesting developments essentially as a consequence of the infusion of free boundary techniques. The main focus of this paper is to show how the analysis of a general shape optimization problem of spectral type can be reduced to the analysis of particular free boundary problems. In this survey article, we give an overview of some very recent technical tools, the so-called shape sub- and supersolutions, and show how to use them for the minimization of spectral functionals involving the eigenvalues of the Dirichlet Laplacian, under a volume constraint.

  7. Aircraft optimization by a system approach: Achievements and trends

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1992-01-01

    Recently emerging methodology for optimal design of aircraft treated as a system of interacting physical phenomena and parts is examined. The methodology is found to coalesce into methods for hierarchic, non-hierarchic, and hybrid systems all dependent on sensitivity analysis. A separate category of methods has also evolved independent of sensitivity analysis, hence suitable for discrete problems. References and numerical applications are cited. Massively parallel computer processing is seen as enabling technology for practical implementation of the methodology.

  8. Coordinated Target Tracking via a Hybrid Optimization Approach

    PubMed Central

    Wang, Yin; Cao, Yan

    2017-01-01

    Recent advances in computer science and electronics have greatly expanded the capabilities of unmanned aerial vehicles (UAV) in both defense and civil applications, such as moving ground object tracking. Due to the uncertainties of the application environments and objects’ motion, it is difficult to maintain the tracked object always within the sensor coverage area by using a single UAV. Hence, it is necessary to deploy a group of UAVs to improve the robustness of the tracking. This paper investigates the problem of tracking ground moving objects with a group of UAVs using gimbaled sensors under flight dynamic and collision-free constraints. The optimal cooperative tracking path planning problem is solved using an evolutionary optimization technique based on the framework of chemical reaction optimization (CRO). The efficiency of the proposed method was demonstrated through a series of comparative simulations. The results show that the cooperative tracking paths determined by the newly developed method allow for longer sensor coverage time under flight dynamic restrictions and safety conditions. PMID:28264425

  9. A Robust Statistics Approach to Minimum Variance Portfolio Optimization

    NASA Astrophysics Data System (ADS)

    Yang, Liusha; Couillet, Romain; McKay, Matthew R.

    2015-12-01

    We study the design of portfolios under a minimum risk criterion. The performance of the optimized portfolio relies on the accuracy of the estimated covariance matrix of the portfolio asset returns. For large portfolios, the number of available market returns is often of similar order to the number of assets, so that the sample covariance matrix performs poorly as a covariance estimator. Additionally, financial market data often contain outliers which, if not correctly handled, may further corrupt the covariance estimation. We address these shortcomings by studying the performance of a hybrid covariance matrix estimator based on Tyler's robust M-estimator and on Ledoit-Wolf's shrinkage estimator while assuming samples with heavy-tailed distribution. Employing recent results from random matrix theory, we develop a consistent estimator of (a scaled version of) the realized portfolio risk, which is minimized by optimizing online the shrinkage intensity. Our portfolio optimization method is shown via simulations to outperform existing methods both for synthetic and real market data.

  10. Coordinated Target Tracking via a Hybrid Optimization Approach.

    PubMed

    Wang, Yin; Cao, Yan

    2017-02-27

    Recent advances in computer science and electronics have greatly expanded the capabilities of unmanned aerial vehicles (UAV) in both defense and civil applications, such as moving ground object tracking. Due to the uncertainties of the application environments and objects' motion, it is difficult to maintain the tracked object always within the sensor coverage area by using a single UAV. Hence, it is necessary to deploy a group of UAVs to improve the robustness of the tracking. This paper investigates the problem of tracking ground moving objects with a group of UAVs using gimbaled sensors under flight dynamic and collision-free constraints. The optimal cooperative tracking path planning problem is solved using an evolutionary optimization technique based on the framework of chemical reaction optimization (CRO). The efficiency of the proposed method was demonstrated through a series of comparative simulations. The results show that the cooperative tracking paths determined by the newly developed method allow for longer sensor coverage time under flight dynamic restrictions and safety conditions.

  11. Replica approach to mean-variance portfolio optimization

    NASA Astrophysics Data System (ADS)

    Varga-Haszonits, Istvan; Caccioli, Fabio; Kondor, Imre

    2016-12-01

    We consider the problem of mean-variance portfolio optimization for a generic covariance matrix subject to the budget constraint and the constraint for the expected return, with the application of the replica method borrowed from the statistical physics of disordered systems. We find that the replica symmetry of the solution does not need to be assumed, but emerges as the unique solution of the optimization problem. We also check the stability of this solution and find that the eigenvalues of the Hessian are positive for r = N/T < 1, where N is the dimension of the portfolio and T the length of the time series used to estimate the covariance matrix. At the critical point r = 1 a phase transition takes place. The out-of-sample estimation error blows up at this point as 1/(1 - r), independently of the covariance matrix or the expected return, displaying the universality not only of the critical exponent but also of the critical point. As a conspicuous illustration of the dangers of in-sample estimates, the optimal in-sample variance is found to vanish at the critical point, inversely proportional to the divergent estimation error.

  12. Therapy for Duchenne muscular dystrophy: renewed optimism from genetic approaches.

    PubMed

    Fairclough, Rebecca J; Wood, Matthew J; Davies, Kay E

    2013-06-01

    Duchenne muscular dystrophy (DMD) is a devastating progressive disease for which there is currently no effective treatment except palliative therapy. There are several promising genetic approaches, including viral delivery of the missing dystrophin gene, read-through of translation stop codons, exon skipping to restore the reading frame and increased expression of the compensatory utrophin gene. The lessons learned from these approaches will be applicable to many other disorders.

  13. Quality by design approach for optimizing the formulation and physical properties of extemporaneously prepared orodispersible films.

    PubMed

    Visser, J Carolina; Dohmen, Willem M C; Hinrichs, Wouter L J; Breitkreutz, Jörg; Frijlink, Henderik W; Woerdenbag, Herman J

    2015-05-15

    The quality by design (QbD) approach was applied for optimizing the formulation of extemporaneously prepared orodispersible films (ODFs) using Design-Expert® Software. The starting formulation was based on earlier experiments and contained the film forming agents hypromellose and carbomer 974P and the plasticizer glycerol (Visser et al., 2015). Trometamol and disodium EDTA were added to stabilize the solution. To optimize this formulation, a quality target product profile was established in which critical quality attributes (CQAs) such as mechanical properties and disintegration time were defined and quantified. As critical process parameters (CPPs) that were evaluated for their effect on the CQAs, the percentage of hypromellose and the percentage of glycerol as well as the drying time were chosen. Response surface methodology (RSM) was used to evaluate the effects of the CPPs on the CQAs of the final product. The main factor affecting tensile strength and Young's modulus was the percentage of glycerol. Elongation at break was mainly influenced by the drying temperature. Disintegration time was found to be sensitive to the percentage of hypromellose. From the results a design space could be created. As long as the formulation and process variables remain within this design space, a product is obtained with the desired characteristics that meets all set quality requirements.

  14. Multi-Objective Particle Swarm Optimization Approach for Cost-Based Feature Selection in Classification.

    PubMed

    Zhang, Yong; Gong, Dun-Wei; Cheng, Jian

    2017-01-01

    Feature selection is an important data-preprocessing technique in classification problems such as bioinformatics and signal processing. Generally, there are some situations where a user is interested in not only maximizing the classification performance but also minimizing the cost that may be associated with features. This kind of problem is called cost-based feature selection. However, most existing feature selection approaches treat this task as a single-objective optimization problem. This paper presents the first study of multi-objective particle swarm optimization (PSO) for cost-based feature selection problems. The task of this paper is to generate a Pareto front of nondominated solutions, that is, feature subsets, to meet different requirements of decision-makers in real-world applications. In order to enhance the search capability of the proposed algorithm, a probability-based encoding technology and an effective hybrid operator, together with the ideas of the crowding distance, the external archive, and the Pareto domination relationship, are applied to PSO. The proposed PSO-based multi-objective feature selection algorithm is compared with several multi-objective feature selection algorithms on five benchmark datasets. Experimental results show that the proposed algorithm can automatically evolve a set of nondominated solutions, and it is a highly competitive feature selection method for solving cost-based feature selection problems.

  15. [Optimization approach to inverse problems in near-infrared optical tomography].

    PubMed

    Li, Weitao; Wang, Huinan; Qian, Zhiyu

    2008-04-01

    In this paper, we introduce an optimization approach to the inverse model of near-infrared optical tomography (NIR OT), which can reconstruct the optical properties, namely the absorption and scattering coefficients, of thick tissue such as brain and breast tissue. A modeling and simulation tool named Femlab, based on finite element methods, has been tested, wherein the forward models are based on the diffusion equation. The inverse model is then solved; this is treated as an optimization problem, involving tests on the difference between the measured and the predicted data, and optimization methods for the optical properties. The algorithms used for optimization are multi-species Genetic Algorithms based on multi-encoding. Finally, the whole strategy for the Femlab and optimization approach is given. The strategy is shown to be sufficient by the simulation results.

  16. Learning About Dying and Living: An Applied Approach to End-of-Life Communication.

    PubMed

    Pagano, Michael P

    2016-08-01

    The purpose of this article is to expand on prior research in end-of-life communication and death and dying communication apprehension by developing a unique course that utilizes a hospice setting and an applied, service-learning approach. Therefore, this essay describes and discusses both the students' and my experiences over a 7-year period from 2008 through 2014. The courses taught during this time frame provided an opportunity to analyze students' responses, experiences, and discoveries across semesters/years and cocultures. This unique, 3-credit, 14-week, service-learning, end-of-life communication course was developed to provide an opportunity for students to learn the theories related to this field of study and to apply that knowledge through volunteer experiences via interactions with dying patients and their families. The author's notes from these 7 years, plus three reflection essays submitted electronically by each of the 91 students (273 total documents), served as the data for this study. According to the students, verbally in class discussions and in numerous writing assignments, this course helped lower their death and dying communication apprehension and increased their willingness to interact with hospice patients and their families. Furthermore, the students' final research papers clearly demonstrated how utilizing a service-learning approach allowed them to apply classroom learnings, and interactions with dying patients and their families at the hospice, to their analyses of end-of-life communication theories and behaviors. The results of these classes suggest that other difficult-topic courses (e.g., domestic violence, addiction, etc.) might benefit from a similar pedagogical approach.

  17. Optimization techniques applied to passive measures for in-orbit spacecraft survivability

    NASA Technical Reports Server (NTRS)

    Mog, Robert A.; Price, D. Marvin

    1991-01-01

    Spacecraft designers have always been concerned about the effects of meteoroid impacts on mission safety. The engineering solution to this problem has generally been to erect a bumper or shield placed outboard from the spacecraft wall to disrupt/deflect the incoming projectiles. Spacecraft designers have a number of tools at their disposal to aid in the design process. These include hypervelocity impact testing, analytic impact predictors, and hydrodynamic codes. Analytic impact predictors generally provide the best quick-look estimate of design tradeoffs. The most complete way to determine the characteristics of an analytic impact predictor is through optimization of the protective structures design problem formulated with the predictor of interest. Space Station Freedom protective structures design insight is provided through the coupling of design/material requirements, hypervelocity impact phenomenology, meteoroid and space debris environment sensitivities, optimization techniques and operations research strategies, and mission scenarios. Major results are presented.

  18. A mathematical approach to optimal selection of dose values in the additive dose method of EPR dosimetry

    SciTech Connect

    Hayes, R.B.; Haskell, E.H.; Kenner, G.H.

    1996-01-01

    Additive dose methods commonly used in electron paramagnetic resonance (EPR) dosimetry are time consuming and labor intensive. We have developed a mathematical approach for determining the optimal spacing of applied doses and the number of spectra which should be taken at each dose level. Expected uncertainties in the data points are assumed to be normally distributed with a fixed standard deviation, and linearity of the dose response is also assumed. The optimum spacing and number of points necessary for minimal error can be estimated, as can the likely error in the resulting estimate. When low doses are being estimated for tooth enamel samples, the optimal spacing is shown to be a concentration of points near the zero-dose value with fewer spectra taken at a single high dose value within the range of known linearity. Optimization of the analytical process results in increased accuracy and sample throughput.
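
    The claim is easy to check numerically. In the additive dose method the sample dose is estimated from the fitted line as the ratio of intercept to slope, so the design question is which spacing of added doses minimizes the spread of that ratio. The Monte Carlo sketch below compares an even spacing with a design concentrated near zero dose plus one high anchor dose, under the abstract's assumptions of linearity and fixed normal noise; all numbers are illustrative.

```python
# Compare two additive-dose designs by the Monte Carlo spread of the
# estimated sample dose (the intercept-to-slope ratio of the regression).
import numpy as np

rng = np.random.default_rng(3)
true_dose, slope, sigma = 0.5, 1.0, 0.05       # Gy, signal/Gy, signal noise

def dose_estimate_spread(added_doses, n_trials=20000):
    x = np.asarray(added_doses, dtype=float)
    errs = np.empty(n_trials)
    for i in range(n_trials):
        y = slope * (true_dose + x) + rng.normal(0, sigma, x.size)
        b, a = np.polyfit(x, y, 1)             # y = b*x + a
        errs[i] = (a / b) - true_dose          # estimated dose = a/b
    return errs.std()

# Same total of 10 spectra: evenly spaced vs. concentrated near zero dose
# with a single high anchor dose.
even = np.linspace(0, 9, 10)
concentrated = [0, 0, 0, 0, 0, 0, 0, 0, 0, 9]
print("even spacing:  ", round(dose_estimate_spread(even), 4))
print("concentrated:  ", round(dose_estimate_spread(concentrated), 4))
```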

  19. Successful aging at work: an applied study of selection, optimization, and compensation through impression management.

    PubMed

    Abraham, J D; Hansson, R O

    1995-03-01

    Although many abilities basic to human performance appear to decrease with age, research has shown that job performance does not generally show comparable declines. Baltes and Baltes (1990) have proposed a model of successful aging involving Selection, Optimization, and Compensation (SOC), that may help explain how individuals maintain important competencies despite age-related losses. In the present study, involving a total of 224 working adults ranging in age from 40 to 69 years, occupational measures of Selection, Optimization, and Compensation through impression management (Compensation-IM) were developed. The three measures were factorially distinct and reliable (Cronbach's alpha > .80). Moderated regression analyses indicated that: (1) the relationship between Selection and self-reported ability/performance maintenance increased with age (p ≤ .05); and (2) the relationship between both Optimization and Compensation-IM and goal attainment (i.e., importance-weighted ability/performance maintenance) increased with age (p ≤ .05). Results suggest that the SOC model of successful aging may be useful in explaining how older workers can maintain important job competencies. Correlational evidence also suggests, however, that characteristics of the job, workplace, and individual may mediate the initiation and effectiveness of SOC behaviors.

  20. Optimal Runge-Kutta schemes for discontinuous Galerkin space discretizations applied to wave propagation problems

    NASA Astrophysics Data System (ADS)

    Toulorge, T.; Desmet, W.

    2012-02-01

    We study the performance of method-of-lines schemes combining discontinuous Galerkin spatial discretizations and explicit Runge-Kutta time integrators, with the aim of deriving optimal Runge-Kutta schemes for wave propagation applications. We review relevant Runge-Kutta methods from the literature, and consider schemes of order q from 3 to 4, with numbers of stages up to q + 4, for optimization. From a user's point of view, computational efficiency involves choosing the best combination of mesh and numerical method; two scenarios are defined. In the first, the element size is totally free, and an 8-stage, fourth-order Runge-Kutta scheme is found to minimize a cost measure depending on both accuracy and stability. In the second, the element size is assumed to be so constrained by geometrical features of the computational domain that accuracy is disregarded. We then derive one 7-stage, third-order scheme and one 8-stage, fourth-order scheme that maximize the stability limit. The performance of the three new schemes is thoroughly analyzed, and the benefits are illustrated with two examples. For each of these Runge-Kutta methods, we provide the coefficients for a 2N-storage implementation, along with the information needed by the user to employ them optimally.
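
    As context for the 2N-storage implementation mentioned above, the following minimal sketch shows a Williamson-type low-storage Runge-Kutta step, in which only the solution vector and one auxiliary register are kept. The coefficients are the classic 3-stage, third-order Williamson values, not the optimized schemes derived in the paper.

```python
import numpy as np

# Williamson-type 2N-storage Runge-Kutta step: only the solution u and one
# register dS are stored. A/B/C below are the classic 3-stage, third-order
# Williamson coefficients, NOT the paper's optimized schemes.
A = [0.0, -5.0 / 9.0, -153.0 / 128.0]
B = [1.0 / 3.0, 15.0 / 16.0, 8.0 / 15.0]
C = [0.0, 1.0 / 3.0, 3.0 / 4.0]          # stage abscissae

def lsrk_step(f, u, t, dt):
    dS = np.zeros_like(u)
    for a, b, c in zip(A, B, C):
        dS = a * dS + dt * f(t + c * dt, u)   # update the single register
        u = u + b * dS                        # advance the solution in place
    return u

# Example: one step of the scalar test problem u' = -u
u = lsrk_step(lambda t, u: -u, np.array([1.0]), 0.0, 0.1)
```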

  1. A numerical optimization approach to generate smoothing spherical splines

    NASA Astrophysics Data System (ADS)

    Machado, L.; Monteiro, M. Teresa T.

    2017-01-01

    Approximating data in curved spaces is a common procedure, increasingly required by modern applications arising, for instance, in the aerospace and robotics industries. Here, we are particularly interested in finding smoothing cubic splines that best fit given data on the Euclidean sphere. To achieve this aim, a least squares optimization problem based on the minimization of a certain cost functional is formulated. To solve the problem, a numerical algorithm is implemented using several routines from MATLAB toolboxes. The proposed algorithm is shown to be easy to implement and to be accurate and precise for randomly chosen spherical data.
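
    A discrete analogue of such a penalized least-squares fit can be sketched in a few lines. This is an illustration of the idea, not the paper's MATLAB implementation: the smoothing weight lam and the second-difference roughness term are assumptions standing in for the paper's cost functional.

```python
import numpy as np
from scipy.optimize import minimize

# Minimal sketch: fit a smoothing path to noisy points q_i on the unit sphere
# by penalized least squares, a discrete analogue of a smoothing-spline cost.
# `lam` trades data fidelity against smoothness (second differences).
def fit_spherical_path(q, lam=1.0):
    n = len(q)
    def objective(x):
        p = x.reshape(n, 3)
        p = p / np.linalg.norm(p, axis=1, keepdims=True)  # stay on the sphere
        fidelity = np.sum((p - q) ** 2)
        roughness = np.sum((p[2:] - 2 * p[1:-1] + p[:-2]) ** 2)
        return fidelity + lam * roughness
    res = minimize(objective, q.ravel(), method="BFGS")
    p = res.x.reshape(n, 3)
    return p / np.linalg.norm(p, axis=1, keepdims=True)

rng = np.random.default_rng(0)
q = rng.normal(size=(20, 3))
q /= np.linalg.norm(q, axis=1, keepdims=True)   # noisy data on the sphere
smooth = fit_spherical_path(q, lam=5.0)
```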

  2. On the local optimal solutions of metabolic regulatory networks using information guided genetic algorithm approach and clustering analysis.

    PubMed

    Zheng, Ying; Yeh, Chen-Wei; Yang, Chi-Da; Jang, Shi-Shang; Chu, I-Ming

    2007-08-31

    Biological information generated by high-throughput technology has made the systems approach feasible for many biological problems. With this approach, optimization of metabolic pathways has been successfully applied in amino acid production. However, in this technique, gene modifications of the metabolic control architecture and enzyme expression levels are coupled, resulting in a mixed integer nonlinear programming problem. Furthermore, the stoichiometric complexity of the metabolic pathway, along with the strongly nonlinear behaviour of the regulatory kinetic models, produces a highly rugged contour in the overall optimization problem. There may exist local optimal solutions that achieve the same level of production as the global optimum through different flux distributions. The purpose of this work is to develop a novel stochastic optimization approach, the information guided genetic algorithm (IGA), to discover local optima with different levels of modification of the regulatory loop and different production rates. The novelties of this work include the use of information theory, local search, and clustering analysis to discover the local optima with physical meaning among the qualified solutions.

  3. High direct drive illumination uniformity achieved by multi-parameter optimization approach: a case study of Shenguang III laser facility.

    PubMed

    Tian, Chao; Chen, Jia; Zhang, Bo; Shan, Lianqiang; Zhou, Weimin; Liu, Dongxiao; Bi, Bi; Zhang, Feng; Wang, Weiwu; Zhang, Baohan; Gu, Yuqiu

    2015-05-04

    The uniformity of the compression driver is of fundamental importance for inertial confinement fusion (ICF). In this paper, the illumination uniformity on a spherical capsule during the initial imprinting phase, directly driven by laser beams, has been considered. We aim to explore methods to achieve high direct drive illumination uniformity on laser facilities designed for indirect drive ICF. Many parameters affect the irradiation uniformity, such as the Polar Direct Drive displacement quantity, capsule radius, laser spot size and intensity distribution within a laser beam. A novel approach to reducing the root mean square illumination non-uniformity, based on a multi-parameter optimization approach (particle swarm optimization), is proposed, which enables us to obtain a set of optimal parameters over a large parameter space. Finally, this method is applied to improve the direct drive illumination uniformity provided by the Shenguang III laser facility, and the illumination non-uniformity is reduced from 5.62% to 0.23% for perfectly balanced beams. Moreover, beam errors (power imbalance and pointing error) are taken into account to provide a more practical solution, and the results show that this multi-parameter optimization approach is effective.
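
    The particle swarm optimizer underlying the approach can be sketched generically as below; the toy objective stands in for the RMS illumination non-uniformity, and the swarm hyperparameters are common defaults rather than the paper's settings.

```python
import numpy as np

# Generic particle swarm optimization sketch; the objective here is a toy
# stand-in for the RMS non-uniformity minimized over parameters such as
# PDD displacement, capsule radius and laser spot size.
def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(0)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
    g = pbest[np.argmin(pbest_val)]                     # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[np.argmin(pbest_val)]
    return g, pbest_val.min()

bounds = np.array([[-5.0, 5.0]] * 4)
best, best_val = pso(lambda p: np.sqrt(np.mean(p ** 2)), bounds)
```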

  4. Optimized setup for integral refractive index direct determination applying digital holographic microscopy by reflection and transmission

    NASA Astrophysics Data System (ADS)

    Frómeta, M.; Moreno, G.; Ricardo, J.; Arias, Y.; Muramatsu, M.; Gomes, L. F.; Palácios, G.; Palácios, F.; Velázquez, H.; Valin, J. L.; Ramirez Q, L.

    2017-03-01

    In this paper, the integral refractive index of a microscopic sample was directly measured by applying Digital Holographic Microscopy (DHM), simultaneously capturing transmission and reflection holograms of the same sample region, using Mach-Zehnder and Michelson micro-interferometers for transmission and reflection hologram capture, respectively, and modeling the 3D sample in a medium of known refractive index nm. The system was calibrated using a standard polystyrene sphere of known diameter and refractive index immersed in water, and the method was applied to the determination of the integral refractive index of erythrocytes. The results are in accordance with predictions, with measurement errors of the order of ±0.005 in absolute value.

  5. Input estimation for drug discovery using optimal control and Markov chain Monte Carlo approaches.

    PubMed

    Trägårdh, Magnus; Chappell, Michael J; Ahnmark, Andrea; Lindén, Daniel; Evans, Neil D; Gennemark, Peter

    2016-04-01

    Input estimation is employed in cases where it is desirable to recover the form of an input function which cannot be directly observed and for which there is no model for the generating process. In pharmacokinetic and pharmacodynamic modelling, input estimation in linear systems (deconvolution) is well established, while the nonlinear case is largely unexplored. In this paper, a rigorous definition of the input-estimation problem is given, and the choices involved in terms of modelling assumptions and estimation algorithms are discussed. In particular, the paper covers Maximum a Posteriori estimates using techniques from optimal control theory, and full Bayesian estimation using Markov Chain Monte Carlo (MCMC) approaches. These techniques are implemented using the optimisation software CasADi, and applied to two example problems: one where the oral absorption rate and bioavailability of the drug eflornithine are estimated using pharmacokinetic data from rats, and one where energy intake is estimated from body-mass measurements of mice exposed to monoclonal antibodies targeting the fibroblast growth factor receptor (FGFR) 1c. The results from the analysis are used to highlight the strengths and weaknesses of the methods used when applied to sparsely sampled data. The presented methods for optimal control are fast and robust, and can be recommended for use in drug discovery. The MCMC-based methods can have long running times and require more expertise from the user. The rigorous definition together with the illustrative examples and suggestions for software serve as a highly promising starting point for application of input-estimation methods to problems in drug discovery.
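
    For the linear (deconvolution) case mentioned above, a minimal MAP-style sketch is possible: with a Gaussian smoothness prior, input estimation reduces to ridge-regularized least squares. The impulse response, noise level and regularization weight below are illustrative assumptions; this does not reproduce the paper's CasADi-based nonlinear machinery.

```python
import numpy as np

# Linear input estimation (deconvolution) sketch: recover an input u from
# observations y = C u + noise, where C is the convolution matrix of the
# impulse response h. The ridge term encodes a Gaussian smoothness prior,
# so u_hat is the MAP estimate; lam is an assumed regularization weight.
def deconvolve(y, h, lam=1.0):
    n = len(y)
    C = np.array([[h[i - j] if 0 <= i - j < len(h) else 0.0
                   for j in range(n)] for i in range(n)])
    D = np.diff(np.eye(n), 2, axis=0)          # second-difference operator
    return np.linalg.solve(C.T @ C + lam * D.T @ D, C.T @ y)

t = np.linspace(0, 10, 100)
h = np.exp(-t[:20])                             # assumed impulse response
u_true = np.exp(-0.5 * (t - 4) ** 2)
y = np.convolve(u_true, h)[:100] + 0.01 * np.random.default_rng(1).normal(size=100)
u_hat = deconvolve(y, h, lam=5.0)
```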

  6. Design and optimization of bilayered tablet of Hydrochlorothiazide using the Quality-by-Design approach

    PubMed Central

    Dholariya, Yatin N; Bansod, Yogesh B; Vora, Rahul M; Mittal, Sandeep S; Shirsat, Ajinath Eknath; Bhingare, Chandrashekhar L

    2014-01-01

    Aim: The aim of the present study is to develop an optimize bilayered tablet using Hydrochlorothiazide (HCTZ) as a model drug candidate using quality by design (QbD) approach. Introduction and Method: The bilayered tablet gives biphasic drug release through loading dose; prepared using croscarmellose sodium a superdisintegrant and maintenance dose using several viscosity grades of hydrophilic polymers. The fundamental principle of QbD is to demonstrate understanding and control of pharmaceutical processes so as to deliver high quality pharmaceutical products with wide opportunities for continuous improvement. Risk assessment was carried out and subsequently 22 factorial designs in duplicate was selected to carry out design of experimentation (DOE) for evaluating the interactions and effects of the design factors on critical quality attribute. The design space was obtained by applying DOE and multivariate analysis, so as to ensure desired disintegration time (DT) and drug release is achieved. Bilayered tablet were evaluated for hardness, thickness, friability, drug content uniformity and in vitro drug dissolution. Result: Optimized formulation obtained from the design space exhibits DT of around 70 s, while DR T95% (time required to release 95% of the drug) was about 720 min. Kinetic studies of formulations revealed that erosion is the predominant mechanism for drug release. Conclusion: From the obtained results; it was concluded that independent variables have a significant effect over the dependent responses, which can be deduced from half normal plots, pareto charts and surface response graphs. The predicted values matched well with the experimental values and the result demonstrates the feasibility of the design model in the development and optimization of HCTZ bilayered tablet. PMID:25006554

  7. Evaluation of Jumping and Creeping Regularization Approaches Applied to 3D Seismic Tomography

    NASA Astrophysics Data System (ADS)

    Liu, M.; Ramachandran, K.

    2011-12-01

    Jumping and creeping regularization approaches are evaluated on a synthetic 3-D true model obtained from a large scale experiment. The evaluation is performed for jumping and creeping approaches for various levels of smoothing constraints and various initial models. The final models are compared against the true models to compute the residual distance between the models. Horizontal and vertical roughness in the final models are computed and compared with the true model roughness. Correlation between the true and final models is computed to evaluate the similarity of spatial patterns in the models. The study is also used to show that average 1-D models derived from the final models are very close, indicating that this is an optimal approach to construct 1-D starting models.

  8. A Global Approach to the Optimal Trajectory Based on an Improved Ant Colony Algorithm for Cold Spray

    NASA Astrophysics Data System (ADS)

    Cai, Zhenhua; Chen, Tingyang; Zeng, Chunnian; Guo, Xueping; Lian, Huijuan; Zheng, You; Wei, Xiaoxu

    2016-12-01

    This paper is concerned with finding a global approach to obtain the shortest complete coverage trajectory on complex surfaces for cold spray applications. A slicing algorithm is employed to decompose the free-form complex surface into several small pieces of simple topological type. The problem of finding the optimal arrangement of the pieces is translated into a generalized traveling salesman problem (GTSP). Owing to its high searching capability and convergence performance, an improved ant colony algorithm is then used to solve the GTSP. Through off-line simulation, a robot trajectory is generated based on the optimized result. The approach is applied to coat real components with a complex surface by using the cold spray system with copper as the spraying material.
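
    A basic ant system for the plain TSP conveys the flavor of the search; the paper's improved algorithm for the generalized TSP (where only one node per group of surface pieces is visited) is not reproduced here, and the alpha/beta/rho values are common defaults.

```python
import numpy as np

# Basic ant-system sketch for a plain TSP. The paper solves a *generalized*
# TSP with an improved ant colony algorithm, which this does not reproduce.
def ant_colony_tsp(dist, n_ants=20, iters=100, alpha=1.0, beta=2.0, rho=0.5):
    rng = np.random.default_rng(0)
    n = len(dist)
    tau = np.ones((n, n))                      # pheromone
    eta = 1.0 / (dist + np.eye(n))             # heuristic desirability
    best_tour, best_len = None, np.inf
    for _ in range(iters):
        tours = []
        for _ in range(n_ants):
            tour = [rng.integers(n)]
            while len(tour) < n:
                i = tour[-1]
                mask = np.ones(n, bool)
                mask[tour] = False             # forbid visited cities
                p = (tau[i] ** alpha) * (eta[i] ** beta) * mask
                tour.append(rng.choice(n, p=p / p.sum()))
            L = sum(dist[tour[k], tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, L))
            if L < best_len:
                best_tour, best_len = tour, L
        tau *= (1 - rho)                       # evaporation
        for tour, L in tours:                  # pheromone deposit
            for k in range(n):
                tau[tour[k], tour[(k + 1) % n]] += 1.0 / L
    return best_tour, best_len

pts = np.random.default_rng(2).random((12, 2))
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
tour, length = ant_colony_tsp(dist)
```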

  9. Reformulation linearization technique based branch-and-reduce approach applied to regional water supply system planning

    NASA Astrophysics Data System (ADS)

    Lan, Fujun; Bayraksan, Güzin; Lansey, Kevin

    2016-03-01

    A regional water supply system design problem that determines pipe and pump design parameters and water flows over a multi-year planning horizon is considered. A non-convex nonlinear model is formulated and solved by a branch-and-reduce global optimization approach. The lower bounding problem is constructed via a three-pronged effort that involves transforming the space of certain decision variables, polyhedral outer approximations, and the Reformulation Linearization Technique (RLT). Range reduction techniques are employed systematically to speed up convergence. Computational results demonstrate the efficiency of the proposed algorithm; in particular, the critical role range reduction techniques could play in RLT based branch-and-bound methods. Results also indicate using reclaimed water not only saves freshwater sources but is also a cost-effective non-potable water source in arid regions. Supplemental data for this article can be accessed at http://dx.doi.org/10.1080/0305215X.2015.1016508.

  10. Optimization-based Approach to Cross-layer Resource Management in Wireless Networked Control Systems

    DTIC Science & Technology

    2013-05-01

    Report documentation fragments only; distribution is unlimited. Title: Optimization-based approach to cross-layer resource management in wireless networked control systems. Keywords: cross-layer resource management, sampling rate adaptation, networked control systems.

  11. A Computational Approach for Near-Optimal Path Planning and Guidance for Systems with Nonholonomic Contraints

    DTIC Science & Technology

    2010-04-14

    Novel methods for discretization based on Legendre-Gauss and Legendre-Gauss-Radau quadrature points are developed. Using this approach, the finite-dimensional approximation is kept low-dimensional, potentially enabling near real-time computation. Related publication: "Costate Estimation of Finite-Horizon and Infinite-Horizon Optimal Control Problems Using a Radau Pseudospectral Method," Computational Optimization and Applications.

  12. The role of crossover operator in evolutionary-based approach to the problem of genetic code optimization.

    PubMed

    Błażej, Paweł; Wnȩtrzak, Małgorzata; Mackiewicz, Paweł

    2016-12-01

    One of the theories explaining the present structure of the canonical genetic code assumes that it was optimized to minimize the harmful effects of amino acid replacements resulting from nucleotide substitutions and translational errors. A way to test this concept is to find the optimal code under given criteria and compare it with the canonical genetic code. Unfortunately, the huge number of possible alternatives makes it impossible to find the optimal code using exhaustive methods in a sensible time. Therefore, heuristic methods should be applied to search the space of possible solutions. Evolutionary algorithms (EA) are one such promising approach. This class of methods is founded on both mutation and crossover operators, which are responsible for creating and maintaining the diversity of candidate solutions. These operators possess dissimilar characteristics and consequently play different roles in the process of finding the best solutions under given criteria. Therefore, the effective search for potential solutions can be improved by applying both of them, especially when these operators are devised specifically for a given problem. To study this subject, we analyze the effectiveness of algorithms for various combinations of mutation and crossover probabilities under three models of the genetic code assuming different restrictions on its structure. To achieve that, we adapt the position based crossover operator for the most restricted model and develop a new type of crossover operator for the more general models. The applied fitness function describes the costs of amino acid replacement regarding their polarity. Our results indicate that the usage of crossover operators can significantly improve the quality of the solutions. Moreover, simulations with the crossover operator optimize the fitness function in a smaller number of generations than simulations without this operator. The optimal genetic codes without restrictions on their structure
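
    A textbook position-based crossover for permutation-encoded candidates, of the family the authors adapt for their most restricted model, can be sketched as follows; the amino-acid alphabet and the half-and-half position split are illustrative choices, not the authors' exact operator.

```python
import random

# Textbook position-based crossover for permutation-encoded candidate codes:
# the child inherits the genes of parent 1 at randomly chosen positions and
# fills the remaining positions with parent 2's genes in their original order.
def position_based_crossover(p1, p2, rng=random):
    n = len(p1)
    keep = set(rng.sample(range(n), n // 2))   # positions inherited from p1
    child = [p1[i] if i in keep else None for i in range(n)]
    fill = (g for g in p2 if g not in {p1[i] for i in keep})
    return [g if g is not None else next(fill) for g in child]

amino_acids = list("ARNDCQEGHILKMFPSTWYV")      # 20 standard amino acids
parent1 = random.sample(amino_acids, len(amino_acids))
parent2 = random.sample(amino_acids, len(amino_acids))
child = position_based_crossover(parent1, parent2)
```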

  13. New aspects of developing a dry powder inhalation formulation applying the quality-by-design approach.

    PubMed

    Pallagi, Edina; Karimi, Keyhaneh; Ambrus, Rita; Szabó-Révész, Piroska; Csóka, Ildikó

    2016-09-10

    The current work outlines the application of an up-to-date and regulatory-based pharmaceutical quality management method, applied as a new development concept in the process of formulating dry powder inhalation systems (DPIs). According to the Quality by Design (QbD) methodology and Risk Assessment (RA) thinking, a mannitol based co-spray dried formula was produced as a model dosage form with meloxicam as the model active agent. The concept and the elements of the QbD approach (regarding its systemic, scientific, risk-based, holistic, and proactive nature with defined steps for pharmaceutical development), as well as the experimental drug formulation (including the technological parameters assessed and the methods and processes applied) are described in the current paper. Findings of the QbD based theoretical prediction and the results of the experimental development are compared and presented. Characteristics of the developed end-product were in correlation with the predictions, and all data were confirmed by the relevant results of the in vitro investigations. These results support the importance of using the QbD approach in new drug formulation, and prove its good usability in the early development process of DPIs. This innovative formulation technology and product appear to have a great potential in pulmonary drug delivery.

  14. Random matrix theory for portfolio optimization: a stability approach

    NASA Astrophysics Data System (ADS)

    Sharifi, S.; Crane, M.; Shamaie, A.; Ruskin, H.

    2004-04-01

    We apply random matrix theory (RMT) to an empirically measured financial correlation matrix, C, and show that this matrix contains a large amount of noise. In order to determine the sensitivity of the spectral properties of a random matrix to noise, we simulate a set of data and add different volumes of random noise. Having ascertained that the eigenspectrum is independent of the standard deviation of the added noise, we use RMT to determine the noise percentage in a correlation matrix based on real data from the S&P500. Eigenvalue and eigenvector analyses are applied, and the experimental results for each are presented to identify, qualitatively and quantitatively, the spectral properties of the empirical correlation matrix that differ from those of a random counterpart. Finally, we attempt to separate the noisy part from the non-noisy part of C. We apply an existing technique for cleaning C and then discuss its associated problems. We propose a technique for filtering C that has many advantages, from the stability point of view, over the existing method of cleaning.
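
    The existing cleaning technique discussed above can be sketched via the standard Marchenko-Pastur "clipping" recipe: eigenvalues inside the theoretical noise band are flattened to their average. This illustrates the method being critiqued, not the stability-based filter the paper proposes.

```python
import numpy as np

# Standard RMT "clipping" filter: eigenvalues of the empirical correlation
# matrix below the Marchenko-Pastur upper edge are treated as noise and
# replaced by their average; eigenvalues above lambda_plus are kept.
def rmt_clean(returns):
    T, N = returns.shape
    C = np.corrcoef(returns, rowvar=False)
    lam, V = np.linalg.eigh(C)
    q = T / N
    lam_plus = (1 + np.sqrt(1 / q)) ** 2       # Marchenko-Pastur upper edge
    noise = lam < lam_plus
    lam_clean = lam.copy()
    lam_clean[noise] = lam[noise].mean()       # flatten the noise band
    C_clean = V @ np.diag(lam_clean) @ V.T
    d = np.sqrt(np.diag(C_clean))
    return C_clean / np.outer(d, d)            # restore unit diagonal

returns = np.random.default_rng(3).normal(size=(500, 50))
C_filtered = rmt_clean(returns)
```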

  15. Uncertainty optimization applied to the Monte Carlo analysis of planetary entry trajectories

    NASA Astrophysics Data System (ADS)

    Way, David Wesley

    2001-10-01

    Future robotic missions to Mars, as well as any human missions, will require precise entries to ensure safe landings near science objectives and pre-deployed assets. Planning for these missions will depend heavily on Monte Carlo analyses to evaluate active guidance algorithms, assess the impact of off-nominal conditions, and account for uncertainty. The dependability of Monte Carlo forecasts, however, is limited by the accuracy and completeness of the assumed uncertainties. This is because Monte Carlo analysis is a forward driven problem, beginning with the input uncertainties and proceeding to the forecast output statistics. An improvement to the Monte Carlo analysis is needed that will allow the problem to be worked in reverse. In this way, the largest allowable dispersions that achieve the required mission objectives can be determined quantitatively. This thesis proposes a methodology to optimize the uncertainties in the Monte Carlo analysis of spacecraft landing footprints. A metamodel is used to first write polynomial expressions for the size of the landing footprint as functions of the independent uncertainty extrema. The coefficients of the metamodel are determined by performing experiments. The metamodel is then used in a constrained optimization procedure to minimize a cost-tolerance function. First, a two-dimensional proof-of-concept problem was used to evaluate the feasibility of this optimization method. Next, the optimization method was further demonstrated on the Mars Surveyor Program 2001 Lander. The purpose of this example was to demonstrate that the methodology developed during the proof-of-concept could be scaled to solve larger, more complicated, "real world" problems. This research has shown that it is possible to control the size of the landing footprint and establish tolerances for mission uncertainties. A simplified metamodel was developed, which makes the approach tractable for realistic problems with more than just a few uncertainties. A confidence interval on

  16. A data mining approach to optimize pellets manufacturing process based on a decision tree algorithm.

    PubMed

    Ronowicz, Joanna; Thommes, Markus; Kleinebudde, Peter; Krysiński, Jerzy

    2015-06-20

    The present study is focused on a thorough analysis of the cause-effect relationships between pellet formulation characteristics (pellet composition as well as process parameters) and a selected quality attribute of the final product. Pellet quality was expressed by shape, using the aspect ratio value. The data matrix for chemometric analysis consisted of 224 pellet formulations prepared with eight different active pharmaceutical ingredients and several excipients, using different extrusion/spheronization process conditions. The data set contained 14 input variables (both formulation and process variables) and one output variable (pellet aspect ratio). A tree regression algorithm consistent with the Quality by Design concept was applied to obtain a deeper understanding of the formulation and process parameters affecting the final pellet sphericity. A clear, interpretable set of decision rules was generated. The spheronization speed, spheronization time, number of holes and water content of the extrudate were recognized as the key factors influencing pellet aspect ratio. The most spherical pellets were achieved by using a large number of holes during extrusion, a high spheronizer speed and a longer spheronization time. The described data mining approach enhances knowledge about the pelletization process and simultaneously facilitates the search for the optimal process conditions necessary to achieve ideal spherical pellets with good flow characteristics. This data mining approach can be taken into consideration by industrial formulation scientists to support rational decision making in the field of pellet technology.
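
    The core step, fitting a regression tree that maps formulation and process variables to aspect ratio and reading off decision rules, can be sketched with scikit-learn on synthetic stand-in data; the variable names and data below are illustrative, not the study's 224-formulation matrix.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

# Sketch of the data-mining step on synthetic stand-in data: a regression
# tree maps formulation/process variables to pellet aspect ratio and yields
# readable decision rules. Column names are illustrative, not the study's
# full 14-variable set.
rng = np.random.default_rng(4)
X = rng.uniform(0, 1, size=(224, 4))           # e.g. speed, time, holes, water
y = 1.0 + 0.3 * X[:, 0] - 0.2 * X[:, 3] + 0.05 * rng.normal(size=224)

tree = DecisionTreeRegressor(max_depth=3, min_samples_leaf=10).fit(X, y)
print(export_text(tree, feature_names=[
    "spheronization_speed", "spheronization_time", "n_holes", "water_content"]))
```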

  17. A novel optical calorimetry dosimetry approach applied to an HDR Brachytherapy source

    NASA Astrophysics Data System (ADS)

    Cavan, A.; Meyer, J.

    2013-06-01

    The technique of Digital Holographic Interferometry (DHI) is applied to the measurement of radiation absorbed dose distributions in water. An optical interferometer has been developed that captures the small variations in the refractive index of water due to the radiation-induced temperature increase ΔT. The absorbed dose D is then determined with high temporal and spatial resolution using the calorimetric relation D = cΔT (where c is the specific heat capacity of water). The method is capable of time-resolved 3D spatial calorimetry. As a proof of principle of the approach, a prototype DHI dosimeter was applied to the measurement of absorbed dose from a High Dose Rate (HDR) Brachytherapy source. Initial results are in agreement with modelled doses from the Brachyvision treatment planning system, demonstrating the viability of the system for high dose rate applications. Future work will focus on applying corrections for heat diffusion and geometric effects. The method has the potential to contribute to the dosimetry of diverse high dose rate applications requiring high spatial resolution, such as microbeam radiotherapy (MRT) or small field proton beam dosimetry, and may also be useful for interface dosimetry.

  18. Particle Swarm Optimization Approach in a Consignment Inventory System

    NASA Astrophysics Data System (ADS)

    Sharifyazdi, Mehdi; Jafari, Azizollah; Molamohamadi, Zohreh; Rezaeiahari, Mandana; Arshizadeh, Rahman

    2009-09-01

    Consignment Inventory (CI) is a kind of inventory which is in the possession of the customer, but is still owned by the supplier. This creates a condition of shared risk whereby the supplier risks the capital investment associated with the inventory while the customer risks dedicating retail space to the product. This paper considers both the vendor's and the retailers' costs in an integrated model. The vendor here is a warehouse which stores one type of product and supplies it at the same wholesale price to multiple retailers who then sell the product in independent markets at retail prices. Our main aim is to design a CI system which generates minimum costs for the two parties. Here a Particle Swarm Optimization (PSO) algorithm is developed to calculate the proper values. Finally a sensitivity analysis is performed to examine the effects of each parameter on decision variables. Also PSO performance is compared with genetic algorithm.

  19. Applying Reflective Middleware Techniques to Optimize a QoS-enabled CORBA Component Model Implementation

    NASA Technical Reports Server (NTRS)

    Wang, Nanbor; Parameswaran, Kirthika; Kircher, Michael; Schmidt, Douglas

    2003-01-01

    Although existing CORBA specifications, such as Real-time CORBA and CORBA Messaging, address many end-to-end quality-of-service (QoS) properties, they do not define strategies for configuring these properties into applications flexibly, transparently, and adaptively. Therefore, application developers must make these configuration decisions manually and explicitly, which is tedious, error-prone, and often sub-optimal. Although the recently adopted CORBA Component Model (CCM) does define a standard configuration framework for packaging and deploying software components, conventional CCM implementations focus on functionality rather than adaptive quality-of-service, which makes them unsuitable for next-generation applications with demanding QoS requirements. This paper presents three contributions to the study of middleware for QoS-enabled component-based applications. It outlines reflective middleware techniques designed to adaptively (1) select optimal communication mechanisms, (2) manage QoS properties of CORBA components in their containers, and (3) (re)configure selected component executors dynamically. Based on our ongoing research on CORBA and the CCM, we believe the application of reflective techniques to component middleware will provide a dynamically adaptive and (re)configurable framework for COTS software that is well-suited for the QoS demands of next-generation applications.

  20. Applying Reflective Middleware Techniques to Optimize a QoS-enabled CORBA Component Model Implementation

    NASA Technical Reports Server (NTRS)

    Wang, Nanbor; Kircher, Michael; Schmidt, Douglas C.

    2000-01-01

    Although existing CORBA specifications, such as Real-time CORBA and CORBA Messaging, address many end-to-end quality-of-service (QoS) properties, they do not define strategies for configuring these properties into applications flexibly, transparently, and adaptively. Therefore, application developers must make these configuration decisions manually and explicitly, which is tedious, error-prone, and often sub-optimal. Although the recently adopted CORBA Component Model (CCM) does define a standard configuration framework for packaging and deploying software components, conventional CCM implementations focus on functionality rather than adaptive quality-of-service, which makes them unsuitable for next-generation applications with demanding QoS requirements. This paper presents three contributions to the study of middleware for QoS-enabled component-based applications. It outlines reflective middleware techniques designed to adaptively: (1) select optimal communication mechanisms, (2) manage QoS properties of CORBA components in their containers, and (3) (re)configure selected component executors dynamically. Based on our ongoing research on CORBA and the CCM, we believe the application of reflective techniques to component middleware will provide a dynamically adaptive and (re)configurable framework for COTS software that is well-suited for the QoS demands of next-generation applications.

  1. Multiple response optimization applied to the development of a capillary electrophoretic method for pharmaceutical analysis.

    PubMed

    Candioti, Luciana Vera; Robles, Juan C; Mantovani, Víctor E; Goicoechea, Héctor C

    2006-03-15

    Multiple response simultaneous optimization by using the desirability function was used for the development of a capillary electrophoresis method for the simultaneous determination of four active ingredients in pharmaceutical preparations: vitamins B(6) and B(12), dexamethasone and lidocaine hydrochloride. Five responses were simultaneously optimized: the three resolutions, the analysis time and the capillary current. This latter response was taken into account in order to improve the quality of the separations. The separation was carried out by using capillary zone electrophoresis (CZE) with a silica capillary and UV detection (240 nm). The optimum conditions were: 57.0 mmol l(-1) sodium phosphate buffer solution, pH 7.0 and voltage=17.2 kV. Good results concerning precision (CV lower than 2%), accuracy (recoveries ranged between 98.5 and 102.6%) and selectivity were obtained in the concentration range studied for the four compounds. These results are comparable to those provided by the reference high performance liquid chromatography (HPLC) technique.
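
    The Derringer-Suich desirability machinery used in such multiple-response optimization can be sketched as below; the target and limit values for the resolutions, analysis time and capillary current are illustrative assumptions, not the paper's settings.

```python
import numpy as np

# Derringer-Suich desirability sketch: each response is mapped to d in [0, 1]
# and the overall desirability is their geometric mean. The target/limit
# values below are illustrative, not the paper's.
def d_larger_is_better(y, lo, hi, s=1.0):
    return np.clip((y - lo) / (hi - lo), 0, 1) ** s

def d_smaller_is_better(y, lo, hi, s=1.0):
    return np.clip((hi - y) / (hi - lo), 0, 1) ** s

def overall_desirability(resolutions, time_min, current_uA):
    ds = [d_larger_is_better(r, 1.0, 2.5) for r in resolutions]  # resolutions
    ds.append(d_smaller_is_better(time_min, 5.0, 15.0))          # analysis time
    ds.append(d_smaller_is_better(current_uA, 20.0, 80.0))       # capillary current
    return np.prod(ds) ** (1.0 / len(ds))

D = overall_desirability([1.8, 2.1, 1.6], time_min=9.0, current_uA=45.0)
```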

  2. Doehlert experimental design applied to optimization of light emitting textile structures

    NASA Astrophysics Data System (ADS)

    Oguz, Yesim; Cochrane, Cedric; Koncar, Vladan; Mordon, Serge R.

    2016-07-01

    A light emitting fabric (LEF) has been developed for photodynamic therapy (PDT) in the treatment of dermatologic diseases such as actinic keratosis (AK). Successful PDT requires homogeneous and reproducible light with controlled power and wavelength on the treated skin area. Due to the shape of the human body, traditional PDT with external light sources is unable to deliver homogeneous light everywhere on the skin (head vertex, hand, etc.). For better light delivery homogeneity, plastic optical fibers (POFs) have been woven into the textile so as to emit the injected light laterally. Previous studies confirmed that the light power can be locally controlled by modifying the radius of the POF macro-bendings within the textile structure. The objective of this study is to optimize the distribution of macro-bendings over the LEF surface in order to increase the light intensity (mW/cm2) and to guarantee the best possible light delivery homogeneity over the LEF, two goals which are often contradictory. Fifteen experiments were carried out following a Doehlert experimental design involving Response Surface Methodology (RSM). The proposed models are fitted to the experimental data to enable the optimal setup of the warp yarn tensions.

  3. Optimal processing for gel electrophoresis images: Applying Monte Carlo Tree Search in GelApp.

    PubMed

    Nguyen, Phi-Vu; Ghezal, Ali; Hsueh, Ya-Chih; Boudier, Thomas; Gan, Samuel Ken-En; Lee, Hwee Kuan

    2016-08-01

    In biomedical research, gel band size estimation in electrophoresis analysis is a routine process. To facilitate and automate this process, numerous software tools have been released, notably the GelApp mobile app. However, band detection accuracy is limited by a detection algorithm that cannot adapt to variations in input images. To address this, we used the Monte Carlo Tree Search with Upper Confidence Bound (MCTS-UCB) method to efficiently search for optimal image processing pipelines for the band detection task, thereby improving the segmentation algorithm. Incorporating this into GelApp, we report a significant enhancement of gel band detection accuracy by 55.9 ± 2.0% for protein polyacrylamide gels, and 35.9 ± 2.5% for DNA SYBR green agarose gels. This implementation is a proof-of-concept demonstrating MCTS-UCB as a strategy to optimize general image segmentation. The improved version of GelApp, GelApp 2.0, is freely available on both the Google Play Store (Android) and the Apple App Store (iOS).
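
    The UCB selection rule at the heart of MCTS-UCB can be sketched in a few lines; here the "children" are candidate image-processing pipelines, and the pipeline names and accuracy rewards are stand-in numbers.

```python
import math

# UCB1 selection as used in MCTS-UCB: pick the child maximizing the mean
# reward plus an exploration bonus. Here the children stand for candidate
# image-processing pipelines; names and rewards are illustrative.
def ucb1_select(children, c=math.sqrt(2)):
    total = sum(ch["visits"] for ch in children)
    def score(ch):
        if ch["visits"] == 0:
            return float("inf")                 # try unvisited pipelines first
        mean = ch["reward"] / ch["visits"]
        return mean + c * math.sqrt(math.log(total) / ch["visits"])
    return max(children, key=score)

pipelines = [{"name": "blur+otsu", "visits": 10, "reward": 6.2},
             {"name": "median+canny", "visits": 4, "reward": 2.9},
             {"name": "clahe+watershed", "visits": 0, "reward": 0.0}]
best = ucb1_select(pipelines)
```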

  4. Optimization of spatial light distribution through genetic algorithms for vision systems applied to quality control

    NASA Astrophysics Data System (ADS)

    Castellini, P.; Cecchini, S.; Stroppa, L.; Paone, N.

    2015-02-01

    The paper presents an adaptive illumination system for image quality enhancement in vision-based quality control systems. In particular, a spatial modulation of illumination intensity is proposed in order to improve image quality, thus compensating for different target scattering properties, local reflections and fluctuations of ambient light. The desired spatial modulation of illumination is obtained by a digital light projector, used to illuminate the scene with an arbitrary spatial distribution of light intensity, designed to improve feature extraction in the region of interest. The spatial distribution of illumination is optimized by running a genetic algorithm. An image quality estimator is used to close the feedback loop and to stop iterations once the desired image quality is reached. The technique proves particularly valuable for optimizing the spatial illumination distribution in the region of interest, with the remarkable capability of the genetic algorithm to adapt the light distribution to very different target reflectivity and ambient conditions. The final objective of the proposed technique is the improvement of the matching score in the recognition of parts through matching algorithms, hence of the diagnosis of machine vision-based quality inspections. The procedure has been validated both by a numerical model and by an experimental test, referring to a significant problem of quality control for the washing machine manufacturing industry: the recognition of a metallic clamp. Its applicability to other domains is also presented, specifically for the visual inspection of shoes with retro-reflective tape and T-shirts with paillettes.

  5. A Rational Approach to the Optimal Design of Drugs.

    DTIC Science & Technology

    1986-07-16

    Recent advances in the rational design of drug molecules based on a graph-theoretical approach are briefly reviewed. Graph theory has not been widely recognized to date as an effective alternative to the empirical procedures currently prevailing in the development of new drugs. Moreover, the problems confronting researchers in this field are daunting in their great complexity. We advocate here a novel yet simple mathematical formalism which opens up a promising

  6. Combining Exact and Heuristic Approaches for Discrete Optimization

    DTIC Science & Technology

    2009-02-18

    Many discrete optimization problems cannot be solved to optimality using the leading commercial solvers such as CPLEX and XPRESS; these problems are either too large or too complex. The success of such solvers has nonetheless stimulated the need for methodology to solve even much larger problems and the desire to solve problems in real-time. A fragment of the report's search loop survives: while the search continues, if an improved solution is found then the global solution is updated. The key to making this approach work is problem dependent.

  7. Optimal control approach to termination of re-entry waves in cardiac electrophysiology

    PubMed Central

    Nagaiah, Chamakuri; Kunisch, Karl; Plank, Gernot

    2014-01-01

    This work proposes an optimal control approach for the termination of re-entry waves in cardiac electrophysiology. The control enters as an extracellular current density into the bidomain equations which are well established model equations in the literature to describe the electrical behavior of the cardiac tissue. The optimal control formulation is inspired, in part, by the dynamical systems behavior of the underlying system of differential equations. Existence of optimal controls is established and the optimality system is derived formally. The numerical realization is described in detail and numerical experiments, which demonstrate the capability of influencing and terminating reentry phenomena, are presented. PMID:22684847

  8. A practical approach for applying best practices in behavioural interventions to injury prevention

    PubMed Central

    Jacobsohn, Lela

    2010-01-01

    Behavioural science when combined with engineering, epidemiology and other disciplines creates a full picture of the often fragmented injury puzzle and informs comprehensive solutions. To assist efforts to include behavioural science in injury prevention strategies, this paper presents a methodological tutorial that aims to introduce best practices in behavioural intervention development and testing to injury professionals new to behavioural science. This tutorial attempts to bridge research to practice through the presentation of a practical, systematic, six-step approach that borrows from established frameworks in health promotion and disease prevention. Central to the approach is the creation of a programme theory that links a theoretically grounded, empirically tested behaviour change model to intervention components and their evaluation. Serving as a compass, a programme theory allows for systematic focusing of resources on the likely most potent behavioural intervention components and directs evaluation of intervention impact and implementation. For illustration, the six-step approach is applied to the creation of a new peer-to-peer campaign, Ride Like a Friend/Drive Like You Care, to promote safe teen driver and passenger behaviours. PMID:20363817

  9. Biodiversity in the context of ecosystem services: the applied need for systems approaches.

    PubMed

    Norris, Ken

    2012-01-19

    Recent evidence strongly suggests that biodiversity loss and ecosystem degradation continue. How might a systems approach to ecology help us better understand and address these issues? Systems approaches play a very limited role in the science that underpins traditional biodiversity conservation, but could provide important insights into mechanisms that affect population growth. This potential is illustrated using data from a critically endangered bird population. Although species-specific insights have practical value, the main applied challenge for a systems approach is to help improve our understanding of the role of biodiversity in the context of ecosystem services (ES) and the associated values and benefits people derive from these services. This has profound implications for the way we conceptualize and address ecological problems. Instead of focusing directly on biodiversity, the important response variables become measures of values and benefits, ES or ecosystem processes. We then need to understand the sensitivity of these variables to biodiversity change relative to other abiotic or anthropogenic factors, which includes exploring the role of variability at different levels of biological organization. These issues are discussed using the recent UK National Ecosystems Assessment as a framework.

  10. A simulation-optimization approach to retrieve reservoir releasing strategies under the trade-off objectives considering flooding, sedimentation, turbidity and water supply during typhoons

    NASA Astrophysics Data System (ADS)

    Huang, C. L.; Hsu, N. S.; Yeh, W. W. G.; You, G. J. Y.

    2014-12-01

    This study develops a simulation-optimization approach for retrieving optimal multi-layer conjunctive reservoir release strategies that consider the natural hazards of sedimentation, turbidity and flooding during typhoons. The purposes of the developed approach are: (1) to apply a WASP-based fluid dynamic sediment concentration simulation model, together with the developed method for extracting ideal release practices, to search for the optimal initial solution for the optimization; and (2) to construct a surrogate sediment concentration simulation model that is embedded in the optimization model. In this study, the optimization model is solved by tabu search, and the optimized release hydrograph is then used to construct the decision model. This study applies an Adaptive Network-based Fuzzy Inference System (ANFIS) and a Real-time Recurrent Learning Neural Network (RTRLNN) as construction tools for the simulation model of total suspended solids concentration. The developed approach is applied to the Shihmen Reservoir basin, Taiwan. The assessment indices of the operational outcome of multi-purpose multi-layer conjunctive releasing are the maximum sediment concentration at Yuan-Shan weir, the sediment removal ratio, the highest water level at Shan-Yin Bridge, and the final water level in Shihmen reservoir. The analysis and optimization results show the following: (1) Multi-layer releasing during the stages before flood arrival and before peak flow possesses high potential for flood detention and sedimentation control, and during the stages after peak flow, for turbidity control and storage; (2) The error tolerance and adaptation ability of ANFIS is superior, so the ANFIS-based sediment concentration simulation model surpasses the RTRLNN-based model in simulating the mechanism and characteristics of sediment transport; and (3) The developed approach can effectively and automatically retrieve the optimal multi-layer releasing strategies under the trade-off control between flooding, sedimentation, turbidity

  11. Individualized prophylaxis for optimizing hemophilia care: can we apply this to both developed and developing nations?

    PubMed

    Poon, Man-Chiu; Lee, Adrienne

    2016-01-01

    Prophylaxis is considered optimal care for hemophilia patients to prevent bleeding and to preserve joint function thereby improving quality of life (QoL). The evidence for prophylaxis is irrefutable and is the standard of care in developed nations. Prophylaxis can be further individualized to improve outcomes and cost effectiveness. Individualization is best accomplished taking into account the bleeding phenotype, physical activity/lifestyle, joint status, and pharmacokinetic handling of specific clotting factor concentrates, all of which vary among individuals. Patient acceptance should also be considered. Assessment tools (e.g. joint status imaging and function studies/scores, QoL) for determining and monitoring risk factors and outcome, as well as population PK profiling have been developed to assist the individualization process. The determinants of optimal prophylaxis include (1) factor dose/dosing frequency, hence, cost/affordability (2) bleeding triggers (physical activity/lifestyle, chronic arthropathy and synovitis) and (3) bleeding rates. Altering one determinant results in adjustment of the other two. Thus, the trough level to protect from spontaneous bleeding can be increased in patients who have greater bleeding risks; and prophylaxis to achieve zero joint bleeds is achievable through optimal individualization. Prophylaxis in economically constrained nations is limited by the ill-affordability of clotting factor concentrates. However, at least 5 studies on children and adults from Thailand, China and India have shown superiority of low dose (~5-10 IU kg(-1) 2-3× per week) prophylaxis over episodic treatment in terms of bleed reduction, and quality of life, with improved physical activity, independent functioning, school attendance and community participation. In these nations, the prophylaxis goals should be for improved QoL rather than "zero bleeds" and perfect joints. Prophylaxis can still be individualized to affordability. Higher protective

  12. Optimizing physicians' instruction of PACS through e-learning: cognitive load theory applied.

    PubMed

    Devolder, P; Pynoo, B; Voet, T; Adang, L; Vercruysse, J; Duyck, P

    2009-03-01

    This article outlines the strategy used by our hospital to maximize the knowledge transfer to referring physicians on using a picture archiving and communication system (PACS). We developed an e-learning platform underpinned by the cognitive load theory (CLT) so that in depth knowledge of PACS' abilities becomes attainable regardless of the user's prior experience with computers. The application of the techniques proposed by CLT optimizes the learning of the new actions necessary to obtain and manipulate radiological images. The application of cognitive load reducing techniques is explained with several examples. We discuss the need to safeguard the physicians' main mental processes to keep the patient's interests in focus. A holistic adoption of CLT techniques both in teaching and in configuration of information systems could be adopted to attain this goal. An overview of the advantages of this instruction method is given both on the individual and organizational level.

  13. Applying Business Process Re-engineering Patterns to optimize WS-BPEL Workflows

    NASA Astrophysics Data System (ADS)

    Buys, Jonas; de Florio, Vincenzo; Blondia, Chris

    With the advent of XML-based SOA, WS-BPEL soon became a widely accepted standard for modeling business processes. Though SOA is said to embrace the principle of business agility, BPEL process definitions are still manually crafted into their final executable version. While SOA has proven to be a giant leap forward in building flexible IT systems, this static BPEL workflow model is somewhat at odds with the need for real business agility and should be enhanced to better sustain continual process evolution. In this paper, we point out the potential of adding business intelligence, in the form of business process re-engineering patterns, to the system to allow for automatic business process optimization. Furthermore, we point out that BPR macro-rules could be implemented by leveraging micro-techniques from computer science. We present some practical examples that illustrate the benefits of such adaptive process models and our preliminary findings.

  14. Optimization in multidimensional gas chromatography applying quantitative analysis via a stable isotope dilution assay.

    PubMed

    Schmarr, Hans-Georg; Slabizki, Petra; Legrum, Charlotte

    2013-08-01

    Trace level analyses in complex matrices benefit from heart-cut multidimensional gas chromatographic (MDGC) separations and quantification via a stable isotope dilution assay. Minimizing the potential transfer of co-eluting matrix compounds from the first dimension ((1)D) separation into the second dimension separation requires narrow cut-windows. Knowledge about the nature of the isotope effect in the separation of labeled and unlabeled compounds allows conditions to be chosen that result, ideally, in co-elution in the (1)D separation. Since the isotope effect strongly depends on the interactions of the analytes with the stationary phase, an appropriate separation column polarity is mandatory for isotopic co-elution. With 3-alkyl-2-methoxypyrazines and an ionic liquid stationary phase as an example, optimization of the MDGC method is demonstrated and critical aspects of narrow cut-window definition are discussed.

  15. Optimization of the Operation of Green Buildings applying the Facility Management

    NASA Astrophysics Data System (ADS)

    Somorová, Viera

    2014-06-01

    Nowadays, the field of civil engineering exhibits an upward trend towards environmental sustainability. This relates mainly to the achievement of energy efficiency and to emission reduction throughout the whole life cycle of a building, i.e. in the course of its construction, use and demolition. These requirements are fulfilled, to a large extent, by green buildings. The characteristic features of green buildings are primarily the highly sophisticated technical and technological equipment installed therein. Sophisticated systems of technological equipment in turn need sophisticated management. From this point of view, facility management has all the prerequisites to meet this requirement. The paper aims to define facility management as an effective method that enables the optimization of the management of supporting activities by creating conditions for the optimum operation of green buildings from the aspect of environmental conditions.

  16. Excited-State Geometry Optimization with the Density Matrix Renormalization Group, as Applied to Polyenes.

    PubMed

    Hu, Weifeng; Chan, Garnet Kin-Lic

    2015-07-14

    We describe and extend the formalism of state-specific analytic density matrix renormalization group (DMRG) energy gradients, first used by Liu et al. [J. Chem. Theor. Comput. 2013, 9, 4462]. We introduce a DMRG wave function maximum overlap following technique to facilitate state-specific DMRG excited-state optimization. Using DMRG configuration interaction (DMRG-CI) gradients, we relax the low-lying singlet states of a series of trans-polyenes up to C20H22. Using the relaxed excited-state geometries, as well as correlation functions, we elucidate the exciton, soliton, and bimagnon ("single-fission") character of the excited states, and find evidence for a planar conical intersection.

  17. Optimal numerical parameterization of discontinuous Galerkin method applied to wave propagation problems

    SciTech Connect

    Chevaugeon, Nicolas; Hillewaert, Koen; Gallez, Xavier; Ploumhans, Paul; Remacle, Jean-Francois

    2007-04-10

    This paper deals with the high-order discontinuous Galerkin (DG) method for solving wave propagation problems. First, we develop a one-dimensional DG scheme and numerically compute dissipation and dispersion errors for various polynomial orders. An optimal combination of time stepping scheme together with the high-order DG spatial scheme is presented. It is shown that using a time stepping scheme with the same formal accuracy as the DG scheme is too expensive for the range of wave numbers that is relevant for practical applications. An efficient implementation of a high-order DG method in three dimensions is presented. Using 1D convergence results, we further show how to adequately choose elementary polynomial orders in order to equi-distribute a priori the discretization error. We also show a straightforward manner to allow variable polynomial orders in a DG scheme. We finally propose some numerical examples in the field of aero-acoustics.

  18. Norm-Optimal ILC Applied to a High-Speed Rack Feeder

    NASA Astrophysics Data System (ADS)

    Schindele, Dominik; Aschemann, Harald; Ritzke, Jöran

    2010-09-01

    Rack feeders, as automated conveying systems for high-bay racking, are of high practical importance. To shorten transport times by using trajectories with increased kinematic values, accompanying control measures for reducing the excited structural vibrations are necessary. In this contribution, the model-based design of a norm-optimal iterative learning control structure is presented. The rack feeder is modelled as an elastic multibody system. For the mathematical description of the bending deflections, a Ritz ansatz is introduced. The tracking control design is performed separately for both axes using decentralised state space representations. Both the achievable performance and the resulting tracking accuracy of the proposed control concept are demonstrated by measurement results from the experimental set-up.
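
    Norm-optimal ILC admits a compact lifted-system sketch: stacking one trial's samples gives y = Gu, and the learning gain minimizing a quadratic cost in the tracking error and input increment has a closed form. The first-order plant below is a stand-in for the elastic rack-feeder model, and the weights Q and R are assumptions.

```python
import numpy as np

# Norm-optimal ILC sketch on a lifted LTI system: stack one trial's samples
# so that y = G u, then update u_{k+1} = u_k + L e_k with
# L = (G^T Q G + R)^{-1} G^T Q, the minimizer of ||e||_Q^2 + ||du||_R^2.
# The first-order plant below is a stand-in for the rack-feeder model.
n = 50
g = 0.1 * (0.9 ** np.arange(n))                # impulse response samples
G = np.array([[g[i - j] if i >= j else 0.0 for j in range(n)]
              for i in range(n)])              # lifted (lower-triangular) plant
Q, R = np.eye(n), 0.01 * np.eye(n)
L = np.linalg.solve(G.T @ Q @ G + R, G.T @ Q)  # norm-optimal learning gain

r = np.sin(np.linspace(0, np.pi, n))           # reference trajectory
u = np.zeros(n)
for trial in range(20):                        # learning over repeated trials
    e = r - G @ u
    u = u + L @ e
```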

  19. A moment-based approach for DVH-guided radiotherapy treatment plan optimization

    NASA Astrophysics Data System (ADS)

    Zarepisheh, M.; Shakourifar, M.; Trigila, G.; Ghomi, P. S.; Couzens, S.; Abebe, A.; Noreña, L.; Shang, W.; Jiang, Steve B.; Zinchenko, Y.

    2013-03-01

    The dose-volume histogram (DVH) is a clinically relevant criterion to evaluate the quality of a treatment plan. It is hence desirable to incorporate DVH constraints into treatment plan optimization for intensity modulated radiation therapy. Yet, the direct inclusion of the DVH constraints into a treatment plan optimization model typically leads to great computational difficulties due to the non-convex nature of these constraints. To overcome this critical limitation, we propose a new convex-moment-based optimization approach. Our main idea is to replace the non-convex DVH constraints by a set of convex moment constraints. In turn, the proposed approach is able to generate a Pareto-optimal plan whose DVHs are close to, or if possible even outperform, the desired DVHs. In particular, our experiment on a prostate cancer patient case demonstrates the effectiveness of this approach by employing two and three moment formulations to approximate the desired DVHs.
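
    The convex-moment idea can be illustrated with a toy fluence optimization: a non-convex DVH constraint on an organ at risk is replaced by convex bounds on the first two moments of its dose. The sketch below uses cvxpy with synthetic dose-influence matrices and illustrative limits; it is not the paper's formulation.

```python
import numpy as np
import cvxpy as cp

# Toy sketch of the convex-moment idea: instead of a non-convex DVH
# constraint on an OAR, bound the first two moments of its dose (mean and
# mean-square), both convex in the beamlet weights x. Matrices and limits
# are synthetic stand-ins.
rng = np.random.default_rng(5)
A_tumor = rng.uniform(0, 1, size=(200, 40))    # dose-influence, tumor voxels
A_oar = rng.uniform(0, 0.5, size=(150, 40))    # dose-influence, OAR voxels
d_presc = 60.0                                 # prescribed tumor dose

x = cp.Variable(40, nonneg=True)
objective = cp.Minimize(cp.sum_squares(A_tumor @ x - d_presc))
constraints = [cp.sum(A_oar @ x) / 150 <= 20.0,                   # first moment
               cp.sum_squares(A_oar @ x) / 150 <= 20.0**2 * 1.2]  # second moment
cp.Problem(objective, constraints).solve()
```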

  20. Optimal Investment Under Transaction Costs: A Threshold Rebalanced Portfolio Approach

    NASA Astrophysics Data System (ADS)

    Tunc, Sait; Donmez, Mehmet Ali; Kozat, Suleyman Serdar

    2013-06-01

    We study optimal investment in a financial market having a finite number of assets from a signal processing perspective. We investigate how an investor should distribute capital over these assets and when he should reallocate the distribution of the funds over these assets to maximize the cumulative wealth over any investment period. In particular, we introduce a portfolio selection algorithm that maximizes the expected cumulative wealth in i.i.d. two-asset discrete-time markets where the market levies proportional transaction costs in buying and selling stocks. We achieve this using "threshold rebalanced portfolios", where trading occurs only if the portfolio breaches certain thresholds. Under the assumption that the relative price sequences have log-normal distribution from the Black-Scholes model, we evaluate the expected wealth under proportional transaction costs and find the threshold rebalanced portfolio that achieves the maximal expected cumulative wealth over any investment period. Our derivations can be readily extended to markets having more than two stocks, where these extensions are pointed out in the paper. As predicted from our derivations, we significantly improve the achieved wealth over portfolio selection algorithms from the literature on historical data sets.
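
    A threshold rebalanced portfolio is straightforward to simulate for the two-asset case: trade only when the risky-asset weight drifts outside a band around the target, paying a proportional cost. The sketch below uses illustrative log-normal price relatives and parameter values, not the paper's derivation of the optimal thresholds.

```python
import numpy as np

# Threshold rebalanced portfolio sketch for two assets (one risky, one cash):
# trade only when the risky weight b leaves [target - eps, target + eps],
# paying proportional cost c on the traded fraction. Parameters illustrative.
def simulate(prices, target=0.5, eps=0.05, c=0.002):
    wealth, b = 1.0, target                    # normalized wealth, risky weight
    for x in prices[1:] / prices[:-1]:         # per-period price relatives
        grow = b * x + (1 - b)
        wealth *= grow
        b = b * x / grow                       # weight drifts with the price
        if abs(b - target) > eps:              # threshold breached: rebalance
            wealth -= wealth * c * abs(b - target)
            b = target
    return wealth

rng = np.random.default_rng(6)
prices = np.exp(np.cumsum(rng.normal(0.0004, 0.01, size=2000)))
prices = np.concatenate([[1.0], prices])
final_wealth = simulate(prices)
```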

  1. The 15-meter antenna performance optimization using an interdisciplinary approach

    NASA Technical Reports Server (NTRS)

    Grantham, William L.; Schroeder, Lyle C.; Bailey, Marion C.; Campbell, Thomas G.

    1988-01-01

    A 15-meter diameter deployable antenna has been built and is being used as an experimental test system with which to develop interdisciplinary controls, structures, and electromagnetics technology for large space antennas. The program objective is to study interdisciplinary issues important in optimizing large space antenna performance for a variety of potential users. The 15-meter antenna utilizes a hoop column structural concept with a gold-plated molybdenum mesh reflector. One feature of the design is the use of adjustable control cables to improve the paraboloid reflector shape. Manual adjustment of the cords after initial deployment improved surface smoothness relative to the build accuracy from 0.140 in. RMS to 0.070 in. Preliminary structural dynamics tests and near-field electromagnetic tests were made. The antenna is now being modified for further testing. Modifications include addition of a precise motorized control cord adjustment system to make the reflector surface smoother and an adaptive feed for electronic compensation of reflector surface distortions. Although the previous test results show good agreement between calculated and measured values, additional work is needed to study modelling limits for each discipline, evaluate the potential of adaptive feed compensation, and study closed-loop control performance in a dynamic environment.

  2. Contemporary nutrition approaches to optimize elite marathon performance.

    PubMed

    Stellingwerff, Trent

    2013-09-01

    The professionalization of any sport must include an appreciation for how and where nutrition can positively affect training adaptation and/or competition performance. Furthermore, there is an ever-increasing importance of nutrition in sports that feature very high training volumes and are of a long enough duration that both glycogen and fluid balance can limit performance. Indeed, modern marathon training programs and racing satisfy these criteria and are uniquely suited to benefit from nutritional interventions. Given that muscle glycogen is limiting during a 2-h marathon, optimizing carbohydrate (CHO) intake and delivery is of maximal importance. Furthermore, the last 60 y of marathon performance have seen lighter and smaller marathoners, which enhances running economy and heat dissipation and increases CHO delivery per kg body mass. Finally, periodically training under conditions of low CHO availability (eg, low muscle glycogen) or periods of mild fluid restriction may actually further enhance the adaptive responses to training. Accordingly, this commentary highlights these key nutrition and hydration interventions that have emerged over the last several years and explores how they may assist in world-class marathon performance.

  3. A Formal Approach to Empirical Dynamic Model Optimization and Validation

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Morelli, Eugene A.; Kenny, Sean P.; Giesy, Daniel P.

    2014-01-01

    A framework was developed for the optimization and validation of empirical dynamic models subject to an arbitrary set of validation criteria. The validation requirements imposed upon the model, which may involve several sets of input-output data and arbitrary specifications in time and frequency domains, are used to determine if model predictions are within admissible error limits. The parameters of the empirical model are estimated by finding the parameter realization for which the smallest of the margins of requirement compliance is as large as possible. The uncertainty in the value of this estimate is characterized by studying the set of model parameters yielding predictions that comply with all the requirements. Strategies are presented for bounding this set, studying its dependence on admissible prediction error set by the analyst, and evaluating the sensitivity of the model predictions to parameter variations. This information is instrumental in characterizing uncertainty models used for evaluating the dynamic model at operating conditions differing from those used for its identification and validation. A practical example based on the short period dynamics of the F-16 is used for illustration.
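
    The estimation rule described above (pick the parameters for which the smallest requirement-compliance margin is as large as possible) can be sketched as a max-min problem. The toy model, synthetic data, and error limits below are stand-ins, not the paper's F-16 example.

      # Sketch: max-min margin estimation of an empirical dynamic model.
      import numpy as np
      from scipy.optimize import minimize

      t = np.linspace(0, 5, 50)
      y_obs = np.exp(-0.8 * t) * np.cos(2.1 * t)       # synthetic "flight data"

      def predict(p):                                  # empirical model
          return np.exp(-p[0] * t) * np.cos(p[1] * t)

      def margins(p):                                  # requirement-compliance margins
          err = np.abs(predict(p) - y_obs)
          return np.array([0.05 - err.max(),           # peak-error requirement
                           0.02 - err.mean()])         # mean-error requirement

      # Maximize the smallest margin by minimizing its negative (non-smooth,
      # so a derivative-free method is used).
      res = minimize(lambda p: -margins(p).min(), x0=[0.5, 2.0], method="Nelder-Mead")
      print(res.x, margins(res.x))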

  4. Modern Optimal Control Methods Applied in Active Control of a Tetrahedron.

    DTIC Science & Technology

    1980-12-01

    [OCR residue from the scanned DTIC record; no readable abstract survives. Recoverable fragments: "THESIS AFIT/GA/A/8OP2 - Alan Janiszewski", "Approved for public release", and table-of-contents entries "II. System Model", "General Configuration", "Equations of Motion".]

  5. Optimization Approaches for Designing Quantum Reversible Arithmetic Logic Unit

    NASA Astrophysics Data System (ADS)

    Haghparast, Majid; Bolhassani, Ali

    2016-03-01

    Reversible logic has emerged in recent years as a promising alternative for low-power design and quantum computation due to its ability to reduce power dissipation, an important concern in low-power VLSI and ULSI design. Many important contributions have been made in the literature toward reversible implementations of arithmetic and logical structures; however, few efforts have been directed toward efficient approaches for designing a reversible arithmetic logic unit (ALU). In this study, three efficient approaches are presented and their use in the design of reversible ALUs is demonstrated, yielding three new designs of a reversible one-digit ALU for quantum arithmetic. The paper provides explicit constructions of reversible ALUs performing basic arithmetic operations with respect to the minimization of cost metrics. In the proposed architectures, each block is realized using elementary quantum logic gates. Reversible implementations of the proposed designs are then analyzed and evaluated, and the results demonstrate that they are cost-effective compared with existing counterparts. All dimensions are in the nanometric range.

  6. Optimal indolence: a normative microscopic approach to work and leisure

    PubMed Central

    Niyogi, Ritwik K.; Breton, Yannick-Andre; Solomon, Rebecca B.; Conover, Kent; Shizgal, Peter; Dayan, Peter

    2014-01-01

    Dividing limited time between work and leisure when both have their attractions is a common everyday decision. We provide a normative control-theoretic treatment of this decision that bridges economic and psychological accounts. We show how our framework applies to free-operant behavioural experiments in which subjects are required to work (depressing a lever) for sufficient total time (called the price) to receive a reward. When the microscopic benefit-of-leisure increases nonlinearly with duration, the model generates behaviour that qualitatively matches various microfeatures of subjects’ choices, including the distribution of leisure bout durations as a function of the pay-off. We relate our model to traditional accounts by deriving macroscopic, molar, quantities from microscopic choices. PMID:24284898

  7. A simple reliability-based topology optimization approach for continuum structures using a topology description function

    NASA Astrophysics Data System (ADS)

    Liu, Jie; Wen, Guilin; Zhi Zuo, Hao; Qing, Qixiang

    2016-07-01

    The structural configuration obtained by deterministic topology optimization may represent a low reliability level and lead to a high failure rate. It is therefore necessary to take reliability into account in topology optimization. By integrating reliability analysis into the topology optimization problem, a simple reliability-based topology optimization (RBTO) methodology for continuum structures is investigated in this article. The time-consuming two-layer nesting involved in RBTO is decoupled by the use of a particular optimization procedure. A topology optimization approach using a topology description function (TOTDF) and a first-order reliability method are employed for topology optimization and reliability calculation, respectively. The non-smoothness inherent in TOTDF is dealt with using two different smoothed Heaviside functions, and the corresponding topologies are compared. Numerical examples demonstrate the validity and efficiency of the proposed method. In-depth discussions are also presented on the influence of different structural reliability indices on the final layout.
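
    For illustration, two smoothed Heaviside functions of the kind used to regularize a topology description function Φ are sketched below (a tanh form and a piecewise-cubic form). These are common choices in the TDF literature, not necessarily the exact pair compared in the paper.

      # Sketch: smoothed Heaviside regularizations of a topology description
      # function; eps controls the width of the smoothing band around Phi = 0.
      import numpy as np

      def heaviside_tanh(phi, eps=0.1):
          return 0.5 * (1.0 + np.tanh(phi / eps))

      def heaviside_poly(phi, eps=0.1):
          # Piecewise cubic on [-eps, eps]; exactly 0 below, 1 above, C1 at +-eps.
          h = 0.5 + 3.0 * phi / (4.0 * eps) - phi**3 / (4.0 * eps**3)
          return np.where(phi < -eps, 0.0, np.where(phi > eps, 1.0, h))

      phi = np.linspace(-0.5, 0.5, 11)
      print(heaviside_tanh(phi).round(3))
      print(heaviside_poly(phi).round(3))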

  8. A hybrid simulation-optimization approach for solving the areal groundwater pollution source identification problems

    NASA Astrophysics Data System (ADS)

    Ayvaz, M. Tamer

    2016-07-01

    In this study, a new simulation-optimization approach is proposed for solving the areal groundwater pollution source identification problem, which is an ill-posed inverse problem. In the simulation part of the proposed approach, groundwater flow and pollution transport processes are simulated by modeling the given aquifer system with the MODFLOW and MT3DMS models. The developed simulation model is then integrated into a newly proposed hybrid optimization model in which a binary genetic algorithm and a generalized reduced gradient method are used together. This is a novel approach, employed for the first time for areal pollution source identification problems. The objective of the proposed hybrid optimization approach is to simultaneously identify the spatial distributions and input concentrations of the unknown areal groundwater pollution sources using a limited number of pollution concentration time series at the monitoring well locations. The applicability of the proposed simulation-optimization approach is evaluated on a hypothetical aquifer model for different pollution source distributions. Furthermore, model performance is evaluated under measurement error conditions, different genetic algorithm parameter combinations, different numbers and locations of the monitoring wells, and different heterogeneous hydraulic conductivity fields. The results indicate that the proposed simulation-optimization approach may be an effective way to solve areal groundwater pollution source identification problems.

  9. Applying a radiomics approach to predict prognosis of lung cancer patients

    NASA Astrophysics Data System (ADS)

    Emaminejad, Nastaran; Yan, Shiju; Wang, Yunzhi; Qian, Wei; Guan, Yubao; Zheng, Bin

    2016-03-01

    Radiomics is an emerging technology to decode tumor phenotype based on quantitative analysis of image features computed from radiographic images. In this study, we applied the radiomics concept to investigate the association between the CT image features of lung tumors, which are either quantitatively computed or subjectively rated by radiologists, and two genomic biomarkers, namely protein expression of the excision repair cross-complementing 1 (ERCC1) gene and of the regulatory subunit of ribonucleotide reductase (RRM1), in predicting disease-free survival (DFS) of lung cancer patients after surgery. An image dataset involving 94 patients was used. Among them, 20 had cancer recurrence within 3 years, while 74 patients remained disease-free. After tumor segmentation, 35 image features were computed from the CT images. Using the Weka data mining software package, we selected 10 non-redundant image features. Applying a SMOTE algorithm to generate synthetic data to balance the case numbers in the two DFS ("yes" and "no") groups, together with a leave-one-case-out training/testing method, we optimized and compared a number of machine learning classifiers using (1) quantitative image (QI) features, (2) subjectively rated (SR) features, and (3) genomic biomarkers (GB). Data analyses showed relatively low correlation among the QI, SR, and GB prediction results (Pearson correlation coefficients < 0.5, including between the ERCC1 and RRM1 biomarkers). Using the area under the ROC curve (AUC) as an assessment index, the QI, SR, and GB based classifiers yielded AUC = 0.89+/-0.04, 0.73+/-0.06, and 0.76+/-0.07, respectively, showing that all three types of features had predictive power (AUC > 0.5). Among them, the QI features yielded the highest performance.
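
    A minimal sketch of the evaluation scheme described above (SMOTE balancing inside a leave-one-case-out loop, scored by AUC) is given below using scikit-learn and imbalanced-learn on synthetic stand-in features; the paper's actual classifiers and image features are not reproduced.

      # Sketch: SMOTE-balanced leave-one-case-out evaluation with AUC scoring.
      import numpy as np
      from imblearn.over_sampling import SMOTE
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import LeaveOneOut

      rng = np.random.default_rng(2)
      X = rng.normal(size=(94, 10))                             # stand-in features
      y = np.r_[np.ones(20, dtype=int), np.zeros(74, dtype=int)]  # 20 recurrences

      scores = np.empty(len(y))
      for train, test in LeaveOneOut().split(X):
          # Oversample the minority class on the training fold only.
          Xb, yb = SMOTE(random_state=0).fit_resample(X[train], y[train])
          clf = LogisticRegression(max_iter=1000).fit(Xb, yb)
          scores[test] = clf.predict_proba(X[test])[:, 1]
      print(f"AUC = {roc_auc_score(y, scores):.2f}")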

  10. A practical approach to near time-optimal inspection-task-sequence planning for two cooperative industrial robot arms

    SciTech Connect

    Cao, B.; Dodds, G.I.; Irwin, G.W.

    1998-08-01

    A near time-optimal inspection-task-sequence planning method for two cooperative industrial robots is outlined. The objective of the task-sequence planning is not only to find a series of near time-optimal final configurations for the two arms at which the inspection operations are undertaken for segment motions, but also to find a near time-optimal task sequence over the inspection points. A time-efficient, continuous joint-acceleration profile is proposed for a class of general industrial robots and simplified by suitably choosing the time intervals of the profile for each segment motion between any two points to be inspected. The optimization problem of finding near time-optimal final configurations is solved using the nonlinear optimization method of sequential quadratic programming (SQP), and the computational overhead arising with this approach is successfully dealt with. The task-sequence planning is formulated as a variation of the travelling salesman problem, and simulated annealing is used to find a near time-optimal route. The near time-optimal task-sequence planning and time-efficient trajectory planning are effectively integrated, with the related computational problems addressed and solved. The proposed method is applied to an environment containing two RTX SCARA-type industrial robots with six joints. Computations have been carried out for moving points in a route and compared with the same fixed points in a route where only one arm moves. The conclusion is that it is much more efficient if the two robot arms work in a cooperative mode: a speed increase by a factor of close to 3 has been achieved through effective use of joint capabilities in the cooperative system. Experiments have also been conducted on the RTX system, yielding satisfactory results that are consistent with those obtained by simulation.
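
    The task-sequencing step uses simulated annealing on a travelling-salesman-style tour; the sketch below shows that component alone on a toy set of planar inspection points (the robot kinematics and the SQP configuration step are omitted, and all parameters are illustrative).

      # Sketch: simulated annealing over an inspection-point tour.
      import numpy as np

      rng = np.random.default_rng(3)
      pts = rng.uniform(0, 1, (20, 2))                 # toy inspection points

      def tour_len(order):
          p = pts[order]
          return np.linalg.norm(p - np.roll(p, -1, axis=0), axis=1).sum()

      order = np.arange(len(pts))
      best, T = tour_len(order), 1.0
      while T > 1e-3:
          i, j = sorted(rng.integers(0, len(pts), 2))
          cand = order.copy()
          cand[i:j + 1] = cand[i:j + 1][::-1]          # 2-opt style segment reversal
          d = tour_len(cand) - best
          if d < 0 or rng.random() < np.exp(-d / T):   # Metropolis acceptance
              order, best = cand, best + d
          T *= 0.999                                   # geometric cooling schedule
      print(f"tour length: {best:.3f}")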

  11. [Optimization of organizational approaches to management of patients with atherosclerosis].

    PubMed

    Barbarash, L S; Barbarash, O L; Artamonova, G V; Sumin, A N

    2014-01-01

    Despite the undoubted achievements of modern cardiology in the prevention and treatment of atherosclerosis, cardiologists, neurologists, and vascular surgeons still face severe stenotic atherosclerotic lesions in different vascular regions, both symptomatic and asymptomatic. As a rule, hemodynamically significant stenoses of different locations are found only after acute vascular events have developed. In this regard, active detection of arterial stenoses in different vascular regions at the first contact between care providers and patients presenting with symptoms of ischemia appears crucial, as does further monitoring of these stenoses. The article is dedicated to innovative organizational approaches to the provision of healthcare to patients with circulatory system diseases, approaches that have contributed to the improvement of the demographic situation in Kuzbass.

  12. An extended master-equation approach applied to aggregation in freeway traffic

    NASA Astrophysics Data System (ADS)

    Li, Jun-Wei; Lin, Bo-Liang; Huang, Yong-Chang

    2008-02-01

    We restudy the master-equation approach applied to aggregation in a one-dimensional freeway, where the decay transition probabilities for the jump processes are reconstructed based on a car-following model. According to the reconstructed transition probabilities, the clustering behaviour and the stochastic properties of the master equation in a one-lane freeway traffic model are investigated in detail. The numerical results show that clusters whose initial size is below the critical size of the unstable cluster and clusters whose initial size is above it all evolve into the same stable state, which accords with nucleation theory and with results known from earlier work. Moreover, we obtain more reasonable parameters for the master equation based on results of cellular automata models.

  13. The active analog approach applied to the pharmacophore identification of benzodiazepine receptor ligands

    NASA Astrophysics Data System (ADS)

    Tebib, Souhail; Bourguignon, Jean-Jacques; Wermuth, Camille-Georges

    1987-07-01

    Applied to seven potent benzodiazepine-receptor ligands belonging to chemically different classes, the active analog approach allowed the stepwise identification of the pharmacophoric pattern associated with the recognition by the benzodiazepine receptor. A unique pharmacophore model was derived which involves six critical zones: (a) a π-electron rich aromatic (PAR) zone; (b) two electron-rich zones δ1 and δ2 placed at 5.0 and 4.5 Å respectively from the reference centroid in the PAR zone; (c) a freely rotating aromatic ring (FRA) region; (d) an out-of-plane region (OPR), strongly associated with agonist properties; and (e) an additional hydrophobic region (AHR). The model accommodates all presently known ligands of the benzodiazepine receptor, identifies sensitivity to steric hindrance close to the δ1 zone, accounts for R and S differential affinities and distinguishes requirements for agonist versus non-agonist activity profiles.

  14. A corpus driven approach applying the "frame semantic" method for modeling functional status terminology.

    PubMed

    Ruggieri, Alexander P; Pakhomov, Serguei V; Chute, Christopher G

    2004-01-01

    In an effort to unearth semantic models that could prove fruitful to functional-status terminology development we applied the "frame semantic" method, derived from the linguistic theory of thematic roles currently exemplified in the Berkeley "FrameNet" Project. Full descriptive sentences with functional-status conceptual meaning were derived from structured content within a corpus of questionnaire assessment instruments commonly used in clinical practice for functional-status assessment. Syntactic components in those sentences were delineated through manual annotation and mark-up. The annotated syntactic constituents were tagged as frame elements according to their semantic role within the context of the derived functional-status expression. Through this process generalizable "semantic frames" were elaborated with recurring "frame elements". The "frame semantic" method as an approach to rendering semantic models for functional-status terminology development and its use as a basis for machine recognition of functional status data in clinical narratives are discussed.

  15. On the preventive management of sediment-related sewer blockages: a combined maintenance and routing optimization approach.

    PubMed

    Fontecha, John E; Akhavan-Tabatabaei, Raha; Duque, Daniel; Medaglia, Andrés L; Torres, María N; Rodríguez, Juan Pablo

    In this work we tackle the problem of planning and scheduling preventive maintenance (PM) of sediment-related sewer blockages in a set of geographically distributed sites that are subject to non-deterministic failures. To solve the problem, we extend a combined maintenance and routing (CMR) optimization approach, a procedure based on two components: (a) a maintenance model is used to determine the optimal time to perform PM operations at each site, and (b) a mixed-integer-program-based split procedure is proposed to route a set of crews (e.g., sewer cleaners, vehicles equipped with winches or rods, and dump trucks) so as to perform the PM operations at a near-optimal minimum expected cost. We applied the proposed CMR optimization approach to two (out of five) operative zones in the city of Bogotá (Colombia), where more than 100 maintenance operations per zone must be scheduled on a weekly basis. Comparing the CMR against the current maintenance plan, we obtained cost savings of more than 50% in 90% of the sites.

  16. Experiences in applying optimization techniques to configurations for the Control of Flexible Structures (COFS) program

    NASA Technical Reports Server (NTRS)

    Walsh, Joanne L.

    1989-01-01

    Optimization procedures are developed to systematically provide closely spaced vibration frequencies. A general-purpose finite-element program for eigenvalue and sensitivity analyses is combined with formal mathematical programming techniques. Results are presented for three studies. The first study uses a simple model to obtain a design with two pairs of closely spaced frequencies. Two formulations are developed: an objective-function-based formulation and a constraint-based formulation for the frequency spacing. It is found that conflicting goals are handled better by the constraint-based formulation. The second study uses a detailed model to obtain a design with one pair of closely spaced frequencies while satisfying requirements on local member frequencies and manufacturing tolerances. Two formulations are again developed; both the constraint-based and the objective-function-based formulations perform reasonably well and converge to the same results. However, no feasible design solution exists that satisfies all design requirements for the chosen design variables and their upper and lower bounds, so more design freedom is needed to achieve a fully satisfactory design. The third study is part of a redesign activity in which a detailed model is used.

  17. Code to Optimize Load Sharing of Split-Torque Transmissions Applied to the Comanche Helicopter

    NASA Technical Reports Server (NTRS)

    1995-01-01

    Most helicopters now in service have a transmission with a planetary design. Studies have shown that some helicopters would be lighter and more reliable if they had a transmission with a split-torque design instead. However, a split-torque design has never been used by a U.S. helicopter manufacturer because there has been no proven method to ensure equal sharing of the load among the multiple load paths. The Sikorsky/Boeing team has chosen to use a split-torque transmission for the U.S. Army's Comanche helicopter, and Sikorsky Aircraft is designing and manufacturing the transmission. To help reduce the technical risk of fielding this helicopter, NASA and the Army have done the research jointly in cooperation with Sikorsky Aircraft. A theory was developed that equal load sharing could be achieved by proper configuration of the geartrain, and a computer code was completed in-house at the NASA Lewis Research Center to calculate this optimal configuration.

  18. Factorial design applied to the optimization of lipid composition of topical antiherpetic nanoemulsions containing isoflavone genistein

    PubMed Central

    Argenta, Débora Fretes; de Mattos, Cristiane Bastos; Misturini, Fabíola Dallarosa; Koester, Leticia Scherer; Bassani, Valquiria Linck; Simões, Cláudia Maria Oliveira; Teixeira, Helder Ferreira

    2014-01-01

    The aim of this study was to optimize topical nanoemulsions containing genistein by means of a 2³ full factorial design based on physicochemical properties and skin retention. The experimental arrangement was constructed using oil type (isopropyl myristate or castor oil), phospholipid type (distearoylphosphatidylcholine [DSPC] or dioleylphosphatidylcholine [DOPC]), and ionic cosurfactant type (oleic acid or oleylamine) as independent variables. The analysis of variance showed a third-order effect for particle size, polydispersity index, and skin retention of genistein. Nanoemulsions composed of isopropyl myristate/DOPC/oleylamine showed the smallest diameter and the highest genistein amount in porcine ear skin, whereas the formulation composed of isopropyl myristate/DSPC/oleylamine exhibited the lowest polydispersity index. Thus, these two formulations were selected for further studies. The formulations presented positive ζ potential values (>25 mV) and genistein content close to 100% (at 1 mg/mL). The incorporation of genistein in nanoemulsions significantly increased the retention of this isoflavone in the epidermis and dermis, especially for the formulation composed of isopropyl myristate/DOPC/oleylamine. These results were supported by confocal images. Such formulations exhibited antiherpetic activity in vitro against herpes simplex virus 1 (strain KOS) and herpes simplex virus 2 (strain 333). Taken together, the results show that the genistein-loaded nanoemulsions developed in this study are promising options in herpes treatment. PMID:25336951
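
    For reference, a 2³ full factorial design simply enumerates all eight combinations of three two-level factors. A minimal sketch, with factor names taken from the abstract and an illustrative run order:

      # Sketch: enumerating the eight runs of a 2^3 full factorial design.
      from itertools import product

      factors = {
          "oil": ("isopropyl myristate", "castor oil"),
          "phospholipid": ("DSPC", "DOPC"),
          "cosurfactant": ("oleic acid", "oleylamine"),
      }
      for run, levels in enumerate(product((0, 1), repeat=3), start=1):
          row = {name: opts[lvl] for (name, opts), lvl in zip(factors.items(), levels)}
          print(run, row)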

  19. Applying AN Integrated Route Optimization Method as a Solution to the Problem of Waste Collection

    NASA Astrophysics Data System (ADS)

    Salleh, A. H.; Ahamad, M. S. S.; Yusoff, M. S.

    2016-09-01

    Solid waste management (SWM) is highly budget-constrained, with the largest expenses devoted to the waste collection travel route. The common understanding of the travel route in SWM is that a shorter route is cheaper. In reality, however, this is not necessarily true, as the fuel consumption of an SWM compactor truck is affected by various other aspects. This ongoing research therefore introduces a solution based on a multiple-criteria route optimization process integrated with AHP/GIS as its main analysis tools, with criteria drawn from the road factors, road-network factors, and human factors that lead to higher fuel consumption. The criteria weights are obtained by combining AHP with the distances of multiple shortest routes obtained from GIS. A set of optimum routes is thereby achievable, and a comparative analysis against the route currently used by the SWM compactor truck can be carried out. It is expected that the decision model will be able to solve both global and local travel route problems in municipal solid waste management.
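
    The AHP weighting step can be sketched as deriving criteria weights from the principal eigenvector of a pairwise comparison matrix. The 3x3 matrix below (road factors vs. road network vs. human factors) uses made-up Saaty-scale judgments, not the study's values.

      # Sketch: AHP priority weights via the principal eigenvector.
      import numpy as np

      A = np.array([[1.0, 3.0, 5.0],      # pairwise comparisons (Saaty 1-9 scale)
                    [1/3, 1.0, 2.0],
                    [1/5, 1/2, 1.0]])
      vals, vecs = np.linalg.eig(A)
      k = np.argmax(vals.real)            # principal eigenvalue
      w = np.abs(vecs[:, k].real)
      w /= w.sum()                        # normalized priority weights
      CI = (vals.real[k] - 3) / 2         # consistency index, CI = (lmax - n)/(n - 1)
      print(w.round(3), f"CI = {CI:.3f}")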

  20. Mechanism analysis of Magnetohydrodynamic heat shield system and optimization of externally applied magnetic field

    NASA Astrophysics Data System (ADS)

    Li, Kai; Liu, Jun; Liu, Weiqiang

    2017-04-01

    As a novel thermal protection technique for hypersonic vehicles, the magnetohydrodynamic (MHD) heat shield system has been shown to be of great value in the hypersonic field. To analyze the thermal protection mechanisms of such a system, a physical model is constructed for analyzing the effects of the Lorentz force components in the counter-flow and normal directions. Through a series of numerical simulations, the dominant Lorentz force components for MHD heat flux mitigation are identified in different regions of a typical reentry vehicle. A novel magnetic field with a variable included angle between the magnetic induction lines and the streamlines is then designed, significantly improving MHD thermal protection in the stagnation and shoulder areas. The relationships between MHD shock control and MHD thermal protection are subsequently investigated, and on that basis the magnetic field is optimized a second time, obtaining better performance in both shock control and thermal protection. Results show that MHD thermal protection is mainly determined by the Lorentz force's effect on the boundary layer. From the stagnation region to the shoulder region, the flow-deceleration effect of the counter-flow component weakens while the flow-deflection effect of the normal component strengthens. Moreover, there is no obvious positive correlation between MHD shock control and thermal protection; however, once a good Lorentz-force effect on the boundary layer is ensured, the thermal protection performance can be further improved, with an enlarged shock stand-off distance, by strengthening the counter-flow Lorentz force immediately behind the shock.

  1. Operative terminology and post-operative management approaches applied to hepatic surgery: Trainee perspectives

    PubMed Central

    Farid, Shahid G; Prasad, K Rajendra; Morris-Stiff, Gareth

    2013-01-01

    Outcomes in hepatic resectional surgery (HRS) have improved as a result of advances in the understanding of hepatic anatomy, improved surgical techniques, and enhanced peri-operative management. Patients are generally cared for in specialist higher-level ward settings with multidisciplinary input during the initial post-operative period; however, greater acceptance and understanding of HRS has meant that care is transferred, usually after 24-48 h, to a standard ward environment. Surgical trainees will be presented with such patients either electively as part of a hepatobiliary firm or whilst covering the service on-call, and it is therefore important to acknowledge the key points in managing HRS patients. Understanding the applied anatomy of the liver is the key to determining the extent of resection to be undertaken. Increasingly, enhanced patient pathways exist in the post-operative setting, requiring focus on the delivery of high-quality analgesia, careful fluid balance, nutrition, and thromboprophylaxis. Complications can occur, including liver, renal and respiratory failure, hemorrhage, and sepsis, all of which require prompt recognition and management. We provide an overview of the relevant terminology applied to hepatic surgery, an approach to post-operative management, and an aid to developing an awareness of complications so as to facilitate better confidence in this complex subgroup of general surgical patients. PMID:23710292

  2. The systems approach for applying artificial intelligence to space station automation (Invited Paper)

    NASA Astrophysics Data System (ADS)

    Grose, Vernon L.

    1985-12-01

    The progress of technology is marked by fragmentation -- dividing research and development into ever narrower fields of specialization. Ultimately, specialists know everything about nothing. And hope for integrating those slender slivers of specialty into a whole fades. Without an integrated, all-encompassing perspective, technology becomes applied in a lopsided and often inefficient manner. A decisionary model, developed and applied for NASA's Chief Engineer toward establishment of commercial space operations, can be adapted to the identification, evaluation, and selection of optimum application of artificial intelligence for space station automation -- restoring wholeness to a situation that is otherwise chaotic due to increasing subdivision of effort. Issues such as functional assignments for space station task, domain, and symptom modules can be resolved in a manner understood by all parties rather than just the person with assigned responsibility -- and ranked by overall significance to mission accomplishment. Ranking is based on the three basic parameters of cost, performance, and schedule. This approach has successfully integrated many diverse specialties in situations like worldwide terrorism control, coal mining safety, medical malpractice risk, grain elevator explosion prevention, offshore drilling hazards, and criminal justice resource allocation -- all of which would have otherwise been subject to "squeaky wheel" emphasis and support of decision-makers.

  4. Mathematic simulation of soil-vegetation condition and land use structure applying basin approach

    NASA Astrophysics Data System (ADS)

    Mishchenko, Natalia; Shirkin, Leonid; Krasnoshchekov, Alexey

    2016-04-01

    The anthropogenic transformation of ecosystems is basically connected to changes in land use structure and to human impact on soil fertility. The research objective is to simulate the stationary state of river basin ecosystems. Materials and methods: a basin approach has been applied, and the basins of small rivers within the Klyazma River catchment, situated in the central part of the Russian Plain, have been chosen as the research objects. The analysis is carried out using integrated characteristics of ecosystem functioning and mathematical simulation methods. To design the mathematical simulator, functional simulation methods and principles based on regression, correlation, and factor analysis have been applied. Results: the simulation defined the possible stationary conditions of the "phytocenosis-soil" system in coordinates of phytomass, phytoproductivity, and humus percentage in soil. Ecosystem productivity is determined not only by vegetation photosynthesis activity but also by the area ratio of forest and meadow phytocenoses. Local maxima attached to certain phytomass values and humus contents in soil have been identified on the basin phytoproductivity distribution diagram. We explain these local maxima by a synergetic effect that appears at a definite ratio of forest and meadow phytocenoses: in that case, the extreme phytomass values for the whole area are higher than the simple sum of the extreme phytomass values of the forest and meadow phytocenoses. An efficient ratio of natural forest and meadow phytocenoses has been determined for the Klyazma basin. Conclusion: mathematical simulation methods assist in forecasting ecosystem conditions under various changes of land use structure. Overgrowing of abandoned agricultural lands is currently a pressing issue in the Russian Federation, and the simulation results demonstrate that the natural ratio of forest and meadow phytocenoses can be restored as these areas revert to natural vegetation.

  5. Fixed structure compensator design using a constrained hybrid evolutionary optimization approach.

    PubMed

    Ghosh, Subhojit; Samanta, Susovon

    2014-07-01

    This paper presents an efficient technique for designing a fixed-order compensator for the current-mode control architecture of DC-DC converters. The compensator design is formulated as an optimization problem that seeks to attain a set of frequency-domain specifications. The highly nonlinear nature of the optimization problem demands a global search technique that is independent of the initial parameterization. The optimization problem is therefore solved using a hybrid evolutionary optimization approach, chosen for its simple structure, fast execution, and high probability of reaching the global solution. The proposed algorithm combines a population-based search technique, particle swarm optimization (PSO), with a local search method. The op-amp dynamics are incorporated during the design process. Considering the limitations of a fixed-structure compensator in achieving loop bandwidth above a certain threshold, the proposed approach also determines the op-amp bandwidth required to achieve the desired loop bandwidth. The effectiveness of the proposed approach in meeting the desired frequency-domain specifications is experimentally verified on a peak-current-mode-controlled DC-DC buck converter.
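
    A minimal sketch of the hybrid strategy (a particle swarm global phase followed by a local refinement of the best particle) is shown below on a stand-in objective; the paper's actual frequency-domain compensator cost is not reproduced.

      # Sketch: PSO global search followed by a local-search refinement step.
      import numpy as np
      from scipy.optimize import minimize

      def objective(p):                         # placeholder multimodal cost
          return np.sum(p**2) + 10 * np.sin(p).sum()

      rng = np.random.default_rng(4)
      n, dim, lo, hi = 30, 3, -5.0, 5.0
      x = rng.uniform(lo, hi, (n, dim))
      v = np.zeros((n, dim))
      pbest = x.copy()
      pcost = np.apply_along_axis(objective, 1, x)
      g = pbest[pcost.argmin()]                 # global best particle

      for _ in range(200):                      # PSO phase
          r1, r2 = rng.random((2, n, dim))
          v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
          x = np.clip(x + v, lo, hi)
          cost = np.apply_along_axis(objective, 1, x)
          better = cost < pcost
          pbest[better], pcost[better] = x[better], cost[better]
          g = pbest[pcost.argmin()]

      res = minimize(objective, g)              # local refinement of the best point
      print(res.x.round(4), round(res.fun, 4))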

  6. Applying operations research to optimize a novel population management system for cancer screening

    PubMed Central

    Zai, Adrian H; Kim, Seokjin; Kamis, Arnold; Hung, Ken; Ronquillo, Jeremiah G; Chueh, Henry C; Atlas, Steven J

    2014-01-01

    Objective To optimize a new visit-independent, population-based cancer screening system (TopCare) by using operations research techniques to simulate changes in patient outreach staffing levels (delegates, navigators), modifications to user workflow within the information technology (IT) system, and changes in cancer screening recommendations. Materials and methods TopCare was modeled as a multiserver, multiphase queueing system. Simulation experiments implemented the queueing network model following a next-event time-advance mechanism, in which systematic adjustments were made to staffing levels, IT workflow settings, and cancer screening frequency in order to assess their impact on overdue screenings per patient. Results TopCare reduced the average number of overdue screenings per patient from 1.17 at inception to 0.86 during simulation to 0.23 at steady state. Increases in the workforce improved the effectiveness of TopCare. In particular, increasing the delegate or navigator staff level by one person improved screening completion rates by 1.3% or 12.2%, respectively. In contrast, changes in the amount of time a patient entry stays on delegate and navigator lists had little impact on overdue screenings. Finally, lengthening the screening interval increased efficiency within TopCare by decreasing overdue screenings at the patient level, resulting in a smaller number of overdue patients needing delegates for screening and a higher fraction of screenings completed by delegates. Conclusions Simulating the impact of changes in staffing, system parameters, and clinical inputs on the effectiveness and efficiency of care can inform the allocation of limited resources in population management. PMID:24043318
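
    The next-event time-advance mechanism mentioned above can be sketched as a small discrete-event simulation: an event heap holds arrival and completion times, and the state (busy staff, queued patients) is updated event by event. The rates and staffing below are illustrative, not TopCare's.

      # Sketch: next-event time-advance simulation of a staffed screening queue.
      import heapq, random

      random.seed(5)
      staff, t_end = 2, 10_000.0
      arrival_rate, service_rate = 1.0, 0.6        # service rate per staff member

      t, busy, queue, served = 0.0, 0, 0, 0
      events = [(random.expovariate(arrival_rate), "arrival")]
      while events:
          t, kind = heapq.heappop(events)          # advance clock to next event
          if t > t_end:
              break
          if kind == "arrival":
              heapq.heappush(events, (t + random.expovariate(arrival_rate), "arrival"))
              if busy < staff:
                  busy += 1
                  heapq.heappush(events, (t + random.expovariate(service_rate), "done"))
              else:
                  queue += 1                       # overdue screening waits
          else:                                    # a screening completed
              served += 1
              if queue > 0:
                  queue -= 1
                  heapq.heappush(events, (t + random.expovariate(service_rate), "done"))
              else:
                  busy -= 1
      print(f"completed screenings: {served}, backlog at end: {queue}")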

  7. Quantum Operator Approach Applied to the Position-Dependent Mass Schrödinger Equation

    NASA Astrophysics Data System (ADS)

    Ovando, G.; Peña, J. J.; Morales, J.

    2014-03-01

    In this work, the quantum operator approach is applied to both the position-dependent mass Schrödinger equation (PDMSE) and the Schrödinger equation with constant mass (CMSE). This enables us to find the factorization operators that relate both Hamiltonians by means of a kinetic energy operator following the proposal of Morrow and Brownstein. With this approach it is possible to find the exactly solvable PDMSE for any value of the parameters α and γ in the von Roos Hamiltonian. Our proposal can therefore be considered a unified treatment of the PDMSE, because it contains as particular cases the kinetic energy operators of various authors, such as BenDaniel-Duke, Gora-Williams, Zhu-Kroemer, and Li-Kuhn, among others. To show the usefulness of our result, we present the solvable PDMSE that comes from the harmonic oscillator potential model for the CMSE. The proposal is general and can easily be extended to other potential models and mass distributions, which will be given in the extended paper.
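
    For reference, the von Roos kinetic-energy operator whose ordering parameters α and γ are referred to above is commonly written (in atomic units) as

      % von Roos position-dependent-mass kinetic-energy operator;
      % alpha, beta, gamma are the ordering (ambiguity) parameters.
      T = \frac{1}{4}\left( m^{\alpha}\,\hat{p}\, m^{\beta}\,\hat{p}\, m^{\gamma}
          + m^{\gamma}\,\hat{p}\, m^{\beta}\,\hat{p}\, m^{\alpha} \right),
      \qquad \alpha + \beta + \gamma = -1,

    with, for example, the BenDaniel-Duke ordering recovered for α = γ = 0, β = -1.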

  8. Next-Generation Mitogenomics: A Comparison of Approaches Applied to Caecilian Amphibian Phylogeny

    PubMed Central

    Maddock, Simon T.; Briscoe, Andrew G.; Wilkinson, Mark; Waeschenbach, Andrea; San Mauro, Diego; Day, Julia J.; Littlewood, D. Tim J.; Foster, Peter G.; Nussbaum, Ronald A.; Gower, David J.

    2016-01-01

    Mitochondrial genome (mitogenome) sequences are being generated with increasing speed due to the advances of next-generation sequencing (NGS) technology and associated analytical tools. However, detailed comparisons to explore the utility of alternative NGS approaches applied to the same taxa have not been undertaken. We compared a ‘traditional’ Sanger sequencing method with two NGS approaches (shotgun sequencing and non-indexed, multiplex amplicon sequencing) on four different sequencing platforms (Illumina’s HiSeq and MiSeq, Roche’s 454 GS FLX, and Life Technologies’ Ion Torrent) to produce seven (near-) complete mitogenomes from six species that form a small radiation of caecilian amphibians from the Seychelles. The fastest, most accurate method of obtaining mitogenome sequences that we tested was direct sequencing of genomic DNA (shotgun sequencing) using the MiSeq platform. Bayesian inference and maximum likelihood analyses using seven different partitioning strategies were unable to resolve compellingly all phylogenetic relationships among the Seychelles caecilian species, indicating the need for additional data in this case. PMID:27280454

  9. Integration of kinetic modeling and desirability function approach for multi-objective optimization of UASB reactor treating poultry manure wastewater.

    PubMed

    Yetilmezsoy, Kaan

    2012-08-01

    An integrated multi-objective optimization approach combining nonlinear regression-based kinetic modeling and the desirability function was proposed to optimize an up-flow anaerobic sludge blanket (UASB) reactor treating poultry manure wastewater (PMW). The Chen-Hashimoto and modified Stover-Kincannon models were applied to the UASB reactor to determine bio-kinetic coefficients. A new empirical formulation of the volumetric organic loading rate was derived for the first time for PMW to estimate the dimensionless kinetic parameter (K) in the Chen-Hashimoto model. The maximum substrate utilization rate constant and the saturation constant were predicted as 11.83 g COD/L/day and 13.02 g COD/L/day, respectively, for the modified Stover-Kincannon model. Based on four process-related variables, three objective functions, including a detailed bio-economic model, were derived and optimized using a LOQO/AMPL algorithm, with a maximum overall desirability of 0.896. The proposed optimization scheme proved to be a useful tool for simultaneously optimizing several responses of the UASB reactor.
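
    The desirability step can be sketched with the classic Derringer-style transforms: each response is mapped onto [0, 1] and the overall desirability is the geometric mean of the individual values. The response values, bounds, and units below are illustrative, not the study's.

      # Sketch: Derringer-style desirability functions and their geometric mean.
      import numpy as np

      def d_larger_is_better(y, lo, hi, s=1.0):
          return np.clip((y - lo) / (hi - lo), 0, 1) ** s

      def d_smaller_is_better(y, lo, hi, s=1.0):
          return np.clip((hi - y) / (hi - lo), 0, 1) ** s

      cod_removal = d_larger_is_better(82.0, lo=60.0, hi=95.0)   # percent removal
      cost = d_smaller_is_better(1.4, lo=0.8, hi=2.5)            # cost per m3
      biogas = d_larger_is_better(0.31, lo=0.10, hi=0.40)        # m3 per kg COD
      D = (cod_removal * cost * biogas) ** (1 / 3)               # overall desirability
      print(round(D, 3))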

  10. Approaches for optimizing the calibration standard of Tewameter TM 300.

    PubMed

    Miteva, Maria; Richter, Stefan; Elsner, Peter; Fluhr, Joachim W

    2006-11-01

    The calibration of devices measuring transepidermal water loss (TEWL) is under intensive discussion. Comparative studies have revealed that comparable measuring systems, e.g. open- and closed-chamber systems, do not always deliver the same results, even when expressing the measured values in SI units, namely g/m(2)/h. Therefore, adequate and reliable calibration procedures need to be established. We tested the reliability of a multi-step calibration algorithm for an open-chamber system, the Tewameter TM 300. To achieve reliable measurements, the maintenance of stable microclimate conditions without air turbulence is mandatory. The TEWL values should be compared with those determined gravimetrically on heated skin simulators. The reproducibility of the results is warranted by consecutive measurements on different adjacent spots of a defined area. Preheating the probe sensors is an effective approach for shortening the measuring time and reaching a steady state rapidly. The accurate calibration of the probe can be checked under laboratory conditions at any time. The critical point of the calibration, and ultimately of the accuracy of in vivo measurements, is maintaining the steady functional capacity of the probes for the entire duration of continuous studies. The studied calibration procedure ensures these requirements.

  11. Surface laser marking optimization using an experimental design approach

    NASA Astrophysics Data System (ADS)

    Brihmat-Hamadi, F.; Amara, E. H.; Lavisse, L.; Jouvard, J. M.; Cicala, E.; Kellou, H.

    2017-04-01

    Laser surface marking is performed on a titanium substrate using a pulsed frequency-doubled Nd:YAG laser (λ = 532 nm, pulse duration τ = 5 ns) to process the substrate surface under normal atmospheric conditions. The aim of the work is to investigate, following experimental and statistical approaches, the correlation between the process parameters and the response variables (outputs), using Design of Experiments (DOE) methods: the Taguchi methodology and response surface methodology (RSM). A design is first created using the MINITAB program, and the laser marking process is then performed according to the planned design. The response variables, surface roughness and surface reflectance, were measured for each sample and incorporated into the design matrix. The results are then analyzed, and the RSM model is developed and verified for predicting the process output for a given set of process parameter values. The analysis shows that the laser beam scanning speed is the most influential operating factor, followed by the laser pumping intensity during marking, while the other factors show complex influences on the objective functions.

  12. An Informatics Approach to Demand Response Optimization in Smart Grids

    SciTech Connect

    Simmhan, Yogesh; Aman, Saima; Cao, Baohua; Giakkoupis, Mike; Kumbhare, Alok; Zhou, Qunzhi; Paul, Donald; Fern, Carol; Sharma, Aditya; Prasanna, Viktor K

    2011-03-03

    Power utilities are increasingly rolling out “smart” grids with the ability to track consumer power usage in near real-time using smart meters that enable bidirectional communication. However, the true value of smart grids is unlocked only when the veritable explosion of data that will become available is ingested, processed, analyzed, and translated into meaningful decisions. These include the ability to forecast electricity demand, respond to peak load events, and improve sustainable use of energy by consumers, and are made possible by energy informatics. Information and software system techniques for a smarter power grid include pattern mining and machine learning over complex events and integrated semantic information, distributed stream processing for low-latency response, cloud platforms for scalable operations, and privacy policies to mitigate information leakage in an information-rich environment. Such an informatics approach is being used in the DoE-sponsored Los Angeles Smart Grid Demonstration Project, and the resulting software architecture will lead to an agile and adaptive Los Angeles Smart Grid.

  13. A sensory- and consumer-based approach to optimize cheese enrichment with grape skin powders.

    PubMed

    Torri, L; Piochi, M; Marchiani, R; Zeppa, G; Dinnella, C; Monteleone, E

    2016-01-01

    The present study describes a sensory- and consumer-based approach to optimizing cheese enrichment with grape skin powders (GSP). A combined sensory evaluation approach, involving a descriptive test and an affective test, was applied to evaluate the effect of the addition of grape skin powders from 2 grape varieties (Barbera and Chardonnay) at different levels [0.8, 1.6, and 2.4%; weight (wt) powder/wt curd] on the sensory properties and consumer acceptability of innovative soft cow milk cheeses. The experimental plan comprised 7 products: 6 fortified prototypes (Barbera and Chardonnay at 0.8, 1.6, and 2.4%) and a control sample, each with 1 wk of ripening. By means of a free-choice profile, 21 cheese experts described the sensory properties of the prototypes. A central location test with 90 consumers was subsequently conducted to assess the acceptability of the samples. The GSP enrichment strongly affected the sensory properties of the innovative products, mainly in terms of appearance and texture. Fortified samples were typically described as having a marbled aspect (violet or brown as a function of the grape variety) and increased granularity, sourness, saltiness, and astringency. The fortification also contributed certain vegetable sensations perceived at low intensity (grassy, cereal, nuts) and some potentially negative sensations (earthy, animal, winy, varnish). The white color, homogeneous dough, compact and elastic texture, and presence of lactic flavors were the positive drivers of preference. On the contrary, the marbled aspect, granularity, sandiness, sourness, saltiness, and astringency negatively affected cheese acceptability for amounts of powder exceeding 0.8 and 1.6% for the Barbera and Chardonnay prototypes, respectively. The amount of powder was therefore a critical parameter for liking of the fortified cheeses and a discriminant between the 2 varieties. Reducing the GSP particle size and improving the GSP

  14. Geometry Control System for Exploratory Shape Optimization Applied to High-Fidelity Aerodynamic Design of Unconventional Aircraft

    NASA Astrophysics Data System (ADS)

    Gagnon, Hugo

    This thesis represents a step toward bringing geometry parameterization and control on par with the disciplinary analyses involved in shape optimization, particularly high-fidelity aerodynamic shape optimization. Central to the proposed methodology is the non-uniform rational B-spline, used here to develop a new geometry generator and geometry control system applicable to the aerodynamic design of both conventional and unconventional aircraft. The geometry generator adopts a component-based approach, where any number of predefined but modifiable (parametric) wing, fuselage, junction, etc., components can be arbitrarily assembled to generate the outer mold line of an aircraft geometry. A unique Python-based user interface incorporating an interactive OpenGL windowing system is proposed. Together, these tools allow for the generation of high-quality, C2-continuous (or higher), customized aircraft geometry with fast turnaround. The geometry control system tightly integrates shape parameterization with volume mesh movement using a two-level free-form deformation approach. The framework is augmented with axial curves, which are shown to be flexible and efficient at parameterizing wing systems of arbitrary topology. A key aspect of this methodology is that very large shape deformations can be achieved with only a few intuitive control parameters. Shape deformation consumes a few tenths of a second on a single processor, and surface sensitivities are machine accurate. The geometry control system is implemented within an existing aerodynamic optimizer comprising a flow solver for the Euler equations and a sequential quadratic programming optimizer. Gradients are evaluated exactly with discrete-adjoint variables. The algorithm is first validated by recovering an elliptical lift distribution on a rectangular wing, and then demonstrated through the exploratory shape optimization of a three-pronged feathered winglet leading to a span efficiency of 1.22 under a height
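
    A single-level free-form deformation step of the kind the two-level scheme builds on can be sketched with a trivariate Bernstein lattice: embedded points move smoothly when the lattice control points move. The lattice size, the displaced control points, and the sample points below are all illustrative.

      # Sketch: trivariate Bernstein free-form deformation (FFD) of embedded points.
      import numpy as np
      from math import comb

      def bernstein(n, i, t):
          return comb(n, i) * t**i * (1 - t) ** (n - i)

      def ffd(points, lattice):
          # points: (N, 3) parametric coords in [0, 1]^3;
          # lattice: (l+1, m+1, n+1, 3) control-point positions.
          l, m, n = (s - 1 for s in lattice.shape[:3])
          out = np.zeros_like(points)
          for p_idx, (u, v, w) in enumerate(points):
              for i in range(l + 1):
                  for j in range(m + 1):
                      for k in range(n + 1):
                          b = bernstein(l, i, u) * bernstein(m, j, v) * bernstein(n, k, w)
                          out[p_idx] += b * lattice[i, j, k]
          return out

      # Undeformed lattice reproduces the identity map (Bernstein linear precision);
      # lift a line of control points to bend the embedded shape.
      grid = np.stack(np.meshgrid(*[np.linspace(0, 1, 3)] * 3, indexing="ij"), axis=-1)
      grid[2, :, 2, 2] += 0.3                  # raise z of the (u-max, w-max) points
      pts = np.array([[0.9, 0.5, 0.9], [0.1, 0.5, 0.1]])
      print(ffd(pts, grid).round(3))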

  15. A multi-label, semi-supervised classification approach applied to personality prediction in social media.

    PubMed

    Lima, Ana Carolina E S; de Castro, Leandro Nunes

    2014-10-01

    Social media allow web users to create and share content pertaining to different subjects, exposing their activities, opinions, feelings and thoughts. In this context, online social media has attracted the interest of data scientists seeking to understand behaviours and trends, whilst collecting statistics for social sites. One potential application for these data is personality prediction, which aims to understand a user's behaviour within social media. Traditional personality prediction relies on users' profiles, their status updates, the messages they post, etc. Here, a personality prediction system for social media data is introduced that differs from most approaches in the literature, in that it works with groups of texts instead of single texts and does not take users' profiles into account. Also, the proposed approach extracts meta-attributes from texts and does not work directly with the content of the messages. The set of possible personality traits is taken from the Big Five model and allows the problem to be characterised as a multi-label classification task. The problem is then transformed into a set of five binary classification problems and solved by means of a semi-supervised learning approach, due to the difficulty in annotating the massive amounts of data generated in social media. In our implementation, the proposed system was trained with three well-known machine-learning algorithms, namely a Naïve Bayes classifier, a Support Vector Machine, and a Multilayer Perceptron neural network. The system was applied to predict personality from tweets taken from three datasets available in the literature, and resulted in approximately 83% prediction accuracy, with some personality traits presenting better individual classification rates than others.
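
    A minimal sketch of the problem transformation (five independent binary classifiers, one per Big Five trait, each trained semi-supervised on mostly unlabelled data) is given below, using scikit-learn's self-training wrapper as a stand-in for the paper's semi-supervised scheme; the features and labels are synthetic.

      # Sketch: binary-relevance multi-label prediction with semi-supervised training.
      import numpy as np
      from sklearn.naive_bayes import GaussianNB
      from sklearn.semi_supervised import SelfTrainingClassifier

      rng = np.random.default_rng(6)
      X = rng.normal(size=(300, 20))                  # meta-attributes of text groups
      Y = (rng.random((300, 5)) < 0.4).astype(int)    # five Big Five trait labels
      Y_train = Y.copy()
      Y_train[100:] = -1                              # mark most labels as unknown

      models = []
      for trait in range(5):                          # one binary task per trait
          clf = SelfTrainingClassifier(GaussianNB())
          clf.fit(X, Y_train[:, trait])
          models.append(clf)

      pred = np.column_stack([m.predict(X) for m in models])
      print("labelled-set accuracy:", (pred[:100] == Y[:100]).mean().round(3))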

  16. Distribution function approach to redshift space distortions. Part IV: perturbation theory applied to dark matter

    SciTech Connect

    Vlah, Zvonimir; Seljak, Uroš; Baldauf, Tobias; McDonald, Patrick; Okumura, Teppei

    2012-11-01

    We develop a perturbative approach to redshift space distortions (RSD) using the phase space distribution function approach and apply it to the dark matter redshift space power spectrum and its moments. RSD can be written as a sum over density-weighted velocity moment correlators, with the lowest order being density, momentum density, and stress energy density. We use standard and extended perturbation theory (PT) to determine their auto and cross correlators, comparing them to N-body simulations. We show which of the terms can be modeled well with standard PT and which need additional terms that include higher order corrections that cannot be modeled in PT. Most of these additional terms are related to small scale velocity dispersion effects, the so-called finger-of-god (FoG) effects, which affect some, but not all, of the terms in this expansion and which can be approximately modeled using a simple physically motivated ansatz such as the halo model. We point out that there are several velocity dispersions that enter into the detailed RSD analysis with very different amplitudes, which can be approximately predicted by the halo model. In contrast to previous models, our approach systematically includes all of the terms at a given order in PT and provides a physical interpretation for the small scale dispersion values. We investigate the RSD power spectrum as a function of μ, the cosine of the angle between the Fourier mode and the line of sight, focusing on the lowest order powers of μ and the multipole moments that dominate the observable RSD power spectrum. Overall we find considerable success in modeling many, but not all, of the terms in this expansion. This is similar to the situation in real space, but predicting the power spectrum in redshift space is more difficult because of the explicit influence of small scale dispersion type effects in RSD, which extend to very large scales.

  17. Applying the Principles of Systems Engineering and Project Management to Optimize Scientific Research

    NASA Astrophysics Data System (ADS)

    Peterkin, Adria J.

    2016-01-01

    Systems engineering is an interdisciplinary practice that analyzes the different facets of a proposed effort in order to develop and design an efficient system, guided by the principles and restrictions of the science community. When working on quantitative and analytical scientific problems, it is important that all parts of a system correlate in a structured and systematic manner so that areas of difficulty are prevented or quickly diagnosed. My research focused on interpreting and implementing systems engineering techniques in the construction, integration, and operation of a NASA Radio Jove kit to observe Jupiter's radio emissions. Jovian emissions occur at very low frequencies, so the telescope had to be able to receive signals below 39.5 MHz. The projected outcome was to receive long L-burst and short S-burst signals; however, during the time of observation Jupiter was in conjunction with the Sun. We then connected the receiver built from the NASA Radio Jove kit to the Karl Jansky telescope in an effort to listen to solar flares as well; nonetheless, we were unable to identify these signals and ultimately determined that they were noise. The overall project was a success in that we were able to apply and comprehend the principles of systems engineering to facilitate the build.

  18. Wind Tunnel Management and Resource Optimization: A Systems Modeling Approach

    NASA Technical Reports Server (NTRS)

    Jacobs, Derya, A.; Aasen, Curtis A.

    2000-01-01

    Time, money, and personnel are becoming increasingly scarce resources within government agencies due to a reduction in funding and the desire to demonstrate responsible economic efficiency. The ability of an organization to plan and schedule resources effectively can provide the necessary leverage to improve productivity, provide continuous support to all projects, and ensure flexibility in a rapidly changing environment. Without adequate internal controls the organization is forced to rely on external support, waste precious resources, and risk an inefficient response to change. Management systems must be developed and applied that strive to maximize the utility of existing resources in order to achieve the goal of "faster, cheaper, better". An area of concern within NASA Langley Research Center was the scheduling, planning, and resource management of the Wind Tunnel Enterprise operations. Nine wind tunnels make up the Enterprise. Prior to this research, these wind tunnel groups did not employ a rigorous or standardized management planning system. In addition, each wind tunnel unit operated from a position of autonomy, with little coordination of clients, resources, or project control. For operating and planning purposes, each wind tunnel operating unit must balance inputs from a variety of sources. Although each unit is managed by individual Facility Operations groups, other stakeholders influence wind tunnel operations. These groups include, for example, the various researchers and clients who use the facility, the Facility System Engineering Division (FSED) tasked with wind tunnel repair and upgrade, the Langley Research Center (LaRC) Fabrication (FAB) group which fabricates repair parts and provides test model upkeep, the NASA and LaRC Strategic Plans, and unscheduled use of the facilities by important clients. Expanding these influences horizontally through nine wind tunnel operations and vertically along the NASA management structure greatly increases the

  19. Crossover versus mutation: a comparative analysis of the evolutionary strategy of genetic algorithms applied to combinatorial optimization problems.

    PubMed

    Osaba, E; Carballedo, R; Diaz, F; Onieva, E; de la Iglesia, I; Perallos, A

    2014-01-01

    Since their first formulation, genetic algorithms (GAs) have been one of the most widely used techniques to solve combinatorial optimization problems. The basic structure of GAs is well known to the scientific community, and thanks to their easy application and good performance, GAs are the focus of many research works each year. Although many studies have analyzed various concepts of GAs, few works in the literature objectively analyze the influence of using blind crossover operators on combinatorial optimization problems. For this reason, a deep study of this influence is conducted in this paper. The study is based on a comparison of nine techniques applied to four well-known combinatorial optimization problems. Six of the techniques are GAs with different configurations, and the remaining three are evolutionary algorithms that focus exclusively on the mutation process. Finally, to perform a reliable comparison of these results, a statistical analysis is carried out using the normal distribution z-test.

  20. Crossover versus Mutation: A Comparative Analysis of the Evolutionary Strategy of Genetic Algorithms Applied to Combinatorial Optimization Problems

    PubMed Central

    Osaba, E.; Carballedo, R.; Diaz, F.; Onieva, E.; de la Iglesia, I.; Perallos, A.

    2014-01-01

    Since their first formulation, genetic algorithms (GAs) have been one of the most widely used techniques to solve combinatorial optimization problems. The basic structure of GAs is well known to the scientific community, and thanks to their easy application and good performance, GAs are the focus of many research works each year. Although many studies have analyzed various concepts of GAs, few works in the literature objectively analyze the influence of using blind crossover operators on combinatorial optimization problems. For this reason, a deep study of this influence is conducted in this paper. The study is based on a comparison of nine techniques applied to four well-known combinatorial optimization problems. Six of the techniques are GAs with different configurations, and the remaining three are evolutionary algorithms that focus exclusively on the mutation process. Finally, to perform a reliable comparison of these results, a statistical analysis is carried out using the normal distribution z-test. PMID:25165731
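
    The comparison described in the two records above rests on a two-sample z-test over repeated runs. Below is a minimal sketch of such a test, assuming approximately normally distributed objective values from many independent runs; the sample data and variable names are hypothetical, not the paper's results.

    ```python
    # Two-sample z-test comparing mean objective values of two solvers.
    import math

    def z_test(results_a, results_b):
        """Return the z statistic for the difference of two sample means."""
        n_a, n_b = len(results_a), len(results_b)
        mean_a = sum(results_a) / n_a
        mean_b = sum(results_b) / n_b
        var_a = sum((x - mean_a) ** 2 for x in results_a) / (n_a - 1)
        var_b = sum((x - mean_b) ** 2 for x in results_b) / (n_b - 1)
        return (mean_a - mean_b) / math.sqrt(var_a / n_a + var_b / n_b)

    # Hypothetical tour lengths from a crossover-based GA and a mutation-only EA
    ga_results = [1052, 1048, 1061, 1050, 1047, 1055, 1049, 1053]
    ea_results = [1041, 1044, 1038, 1046, 1040, 1043, 1039, 1045]
    z = z_test(ga_results, ea_results)
    print(z)  # |z| > 1.96 indicates a significant difference at the 5% level
    ```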

  1. Push-through direct injection NMR: an optimized automation method applied to metabolomics.

    PubMed

    Teng, Quincy; Ekman, Drew R; Huang, Wenlin; Collette, Timothy W

    2012-05-07

    There is a pressing need to increase the throughput of NMR analysis in fields such as metabolomics and drug discovery. Direct injection (DI) NMR automation is recognized to have the potential to meet this need due to its suitability for integration with the 96-well plate format. However, DI NMR has not been widely used as a result of several persistent technical problems, namely: carryover contamination, sample diffusion (causing reduction of spectral sensitivity), and line broadening caused by entrapped air bubbles. Several variants of DI NMR, such as flow injection analysis (FIA) and microflow NMR, have been proposed to address one or more of these issues, but not all of them. The push-through direct injection technique reported here overcomes all of these problems. The method recovers samples after NMR analysis, uses a "brush-wash" routine to eliminate carryover, includes a procedure to push wash solvent out of the flow cell via the outlet to prevent sample diffusion, and employs an injection valve to avoid air bubbles. Herein, we demonstrate the robustness, efficiency, and lack of carryover characteristics of this new method, which is ideally suited for relatively high throughput analysis of the complex biological tissue extracts used in metabolomics, as well as many other sample types. While simple in concept and setup, this new method provides a substantial improvement over current approaches.

  2. Medium optimization of protease production by Brevibacterium linens DSM 20158, using statistical approach

    PubMed Central

    Shabbiri, Khadija; Adnan, Ahmad; Jamil, Sania; Ahmad, Waqar; Noor, Bushra; Rafique, H.M.

    2012-01-01

    Various cultivation parameters were optimized for the production of extracellular protease by Brevibacterium linens DSM 20158 grown under solid state fermentation conditions using a statistical approach. The cultivation variables were screened by the Plackett–Burman design, and four significant variables (soybean meal, wheat bran, (NH4)2SO4, and inoculum size) were further optimized via central composite design (CCD) using a response surface methodological approach. Using the optimal factors (soybean meal 12.0 g, wheat bran 8.50 g, (NH4)2SO4 0.45 g, and inoculum size 3.50%), the rate of protease production was found to be twofold higher in the optimized medium than in the unoptimized reference medium. PMID:24031928
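
    As an illustration of the CCD/response-surface step described above, the sketch below fits a second-order model to a coded two-factor central composite design by least squares. The design matrix layout is standard, but the response values are hypothetical stand-ins, not the B. linens measurements.

    ```python
    # Fit a second-order response surface to a two-factor CCD.
    import numpy as np

    # Coded CCD runs: 4 factorial, 4 axial (alpha = sqrt(2)), 3 center points
    a = np.sqrt(2)
    X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
                  [-a, 0], [a, 0], [0, -a], [0, a],
                  [0, 0], [0, 0], [0, 0]], dtype=float)
    y = np.array([12.1, 15.3, 13.0, 18.2, 11.8, 16.9, 12.5, 15.7,
                  19.0, 18.7, 19.2])  # hypothetical protease activities

    # Model: b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    print(dict(zip(["b0", "b1", "b2", "b12", "b11", "b22"], coef.round(3))))
    ```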

  3. A continuous linear optimal transport approach for pattern analysis in image datasets

    PubMed Central

    Kolouri, Soheil; Tosun, Akif B.; Ozolek, John A.; Rohde, Gustavo K.

    2015-01-01

    We present a new approach to facilitate the application of the optimal transport metric to pattern recognition on image databases. The method is based on a linearized version of the optimal transport metric, which provides a linear embedding for the images. Hence, it enables shape and appearance modeling using linear geometric analysis techniques in the embedded space. In contrast to previous work, we use Monge's formulation of the optimal transport problem, which allows for reasonably fast computation of the linearized optimal transport embedding for large images. We demonstrate the application of the method to recover and visualize meaningful variations in a supervised-learning setting on several image datasets, including chromatin distribution in the nuclei of cells, galaxy morphologies, facial expressions, and bird species identification. We show that the new approach allows for high-resolution construction of modes of variation and discrimination, and can enhance classification accuracy in a variety of image discrimination problems. PMID:26858466
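
    A minimal one-dimensional sketch of the linearized optimal transport idea follows: in 1-D the Monge map is monotone, so the embedding reduces to quantile matching, and Euclidean distances between embeddings approximate transport distances. The image-based solver in the paper is far more involved; all data here are synthetic.

    ```python
    # 1-D linearized optimal transport embedding via quantile matching.
    import numpy as np

    rng = np.random.default_rng(0)

    def lot_embed(sample, n_ref=256):
        # 1-D Monge maps are monotone, so the embedding reduces to the
        # sample's quantile function on a fixed probability grid
        probs = (np.arange(n_ref) + 0.5) / n_ref
        return np.quantile(sample, probs)

    # Two synthetic 1-D "images" (intensity distributions)
    emb1 = lot_embed(rng.normal(1.0, 1.2, 500))
    emb2 = lot_embed(rng.normal(-0.5, 0.8, 500))

    # The (scaled) Euclidean distance between embeddings approximates the
    # 2-Wasserstein distance; linear tools (PCA, LDA) can now operate
    # directly on the embedded vectors.
    print(np.linalg.norm(emb1 - emb2) / np.sqrt(256))
    ```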

  4. Medium optimization of protease production by Brevibacterium linens DSM 20158, using statistical approach.

    PubMed

    Shabbiri, Khadija; Adnan, Ahmad; Jamil, Sania; Ahmad, Waqar; Noor, Bushra; Rafique, H M

    2012-07-01

    Various cultivation parameters were optimized for the production of extracellular protease by Brevibacterium linens DSM 20158 grown under solid state fermentation conditions using a statistical approach. The cultivation variables were screened by the Plackett-Burman design, and four significant variables (soybean meal, wheat bran, (NH4)2SO4, and inoculum size) were further optimized via central composite design (CCD) using a response surface methodological approach. Using the optimal factors (soybean meal 12.0 g, wheat bran 8.50 g, (NH4)2SO4 0.45 g, and inoculum size 3.50%), the rate of protease production was found to be twofold higher in the optimized medium than in the unoptimized reference medium.

  5. Biologically optimized helium ion plans: calculation approach and its in vitro validation.

    PubMed

    Mairani, A; Dokic, I; Magro, G; Tessonnier, T; Kamp, F; Carlson, D J; Ciocca, M; Cerutti, F; Sala, P R; Ferrari, A; Böhlen, T T; Jäkel, O; Parodi, K; Debus, J; Abdollahi, A; Haberer, T

    2016-06-07

    Treatment planning studies on the biological effect of raster-scanned helium ion beams should be performed, together with their experimental verification, before their clinical application at the Heidelberg Ion Beam Therapy Center (HIT). For this purpose, we introduce a novel calculation approach based on integrating data-driven biological models in our Monte Carlo treatment planning (MCTP) tool. Dealing with a mixed radiation field, the biological effect of the primary (4)He ion beams, of the secondary (3)He and (4)He (Z  =  2) fragments and of the produced protons, deuterons and tritons (Z  =  1) has to be taken into account. A spread-out Bragg peak (SOBP) in water, representative of a clinically relevant scenario, has been biologically optimized with the MCTP and then delivered at HIT. Predictions of cell survival and RBE for a tumor cell line, characterized by (α/β)_ph = 5.4 Gy, have been successfully compared against measured clonogenic survival data. The mean absolute survival variation (μ_ΔS) between model predictions and experimental data was 5.3%  ±  0.9%. A sensitivity study, i.e. quantifying the variation of the estimations for the studied plan as a function of the applied phenomenological modelling approach, has been performed. The feasibility of a simpler biological modelling based on dose-averaged LET (linear energy transfer) has been tested. Moreover, comparisons with biophysical models such as the local effect model (LEM) and the repair-misrepair-fixation (RMF) model were performed. μ_ΔS values for the LEM and the RMF model were, respectively, 4.5%  ±  0.8% and 5.8%  ±  1.1%. The satisfactory agreement found in this work for the studied SOBP, representative of a clinically relevant scenario, suggests that the introduced approach could be applied for an accurate estimation of the biological effect for helium ion radiotherapy.

  6. Biologically optimized helium ion plans: calculation approach and its in vitro validation

    NASA Astrophysics Data System (ADS)

    Mairani, A.; Dokic, I.; Magro, G.; Tessonnier, T.; Kamp, F.; Carlson, D. J.; Ciocca, M.; Cerutti, F.; Sala, P. R.; Ferrari, A.; Böhlen, T. T.; Jäkel, O.; Parodi, K.; Debus, J.; Abdollahi, A.; Haberer, T.

    2016-06-01

    Treatment planning studies on the biological effect of raster-scanned helium ion beams should be performed, together with their experimental verification, before their clinical application at the Heidelberg Ion Beam Therapy Center (HIT). For this purpose, we introduce a novel calculation approach based on integrating data-driven biological models in our Monte Carlo treatment planning (MCTP) tool. Dealing with a mixed radiation field, the biological effect of the primary 4He ion beams, of the secondary 3He and 4He (Z  =  2) fragments and of the produced protons, deuterons and tritons (Z  =  1) has to be taken into account. A spread-out Bragg peak (SOBP) in water, representative of a clinically relevant scenario, has been biologically optimized with the MCTP and then delivered at HIT. Predictions of cell survival and RBE for a tumor cell line, characterized by (α/β)_ph = 5.4 Gy, have been successfully compared against measured clonogenic survival data. The mean absolute survival variation (μ_ΔS) between model predictions and experimental data was 5.3%  ±  0.9%. A sensitivity study, i.e. quantifying the variation of the estimations for the studied plan as a function of the applied phenomenological modelling approach, has been performed. The feasibility of a simpler biological modelling based on dose-averaged LET (linear energy transfer) has been tested. Moreover, comparisons with biophysical models such as the local effect model (LEM) and the repair-misrepair-fixation (RMF) model were performed. μ_ΔS values for the LEM and the RMF model were, respectively, 4.5%  ±  0.8% and 5.8%  ±  1.1%. The satisfactory agreement found in this work for the studied SOBP, representative of a clinically relevant scenario, suggests that the introduced approach could be applied for an accurate estimation of the biological effect for helium ion radiotherapy.

  7. [Design space approach to optimize first ethanol precipitation process of Dangshen].

    PubMed

    Xu, Zhi-lin; Huang, Wen-hua; Gong, Xing-chu; Ye, Tian-tian; Qu, Hai-bin; Song, Yan-gang; Hu, Dong-lai; Wang, Guo-xiang

    2015-11-01

    A design space approach is applied in this study to enhance the robustness of the first ethanol precipitation process of Codonopsis Radix (Dangshen) by optimizing process parameters. Total flavonoid recovery, dry matter removal, and pigment removal were defined as the process critical quality attributes (CQAs). Plackett-Burman designed experiments were carried out to find the critical process parameters (CPPs). Dry matter content of concentrated extract (DMCE), mass ratio of ethanol to concentrated extract (E/C ratio), and concentration of ethanol (CEA) were identified as the CPPs. Box-Behnken designed experiments were performed to establish the quantitative models between CPPs and CQAs. A probability-based design space was obtained and verified using the Monte-Carlo simulation method. According to the verification results, the robustness of the first ethanol precipitation process of Dangshen can be guaranteed by operating within the design space. The recommended normal operation space is as follows: dry matter content of concentrated extract of 45.0% - 48.0%, E/C ratio of 2.48-2.80 g x g(-1), and concentration of ethanol of 92.0% - 92.7%.

  8. A nonlinear optimization approach for disturbance rejection in flexible space structures

    NASA Technical Reports Server (NTRS)

    Parlos, Alexander G.; Sunkel, John W.

    1990-01-01

    In this paper the design of an active control law for the rejection of persistent disturbances in large space structures is presented. The control system design approach is based on a deterministic model of the disturbances, with a model-based-compensator (MBC) structure, optimizing the magnitude of the disturbance that the structure can tolerate without violating certain predetermined constraints. In addition to closed-loop stability, the explicit treatment of state, control, and control-rate constraints, such as structural displacement, control actuator effort, and compensator time, guarantees that the final design will exhibit the desired performance characteristics. The technique is applied to the vibration damping of a simple two-bay truss structure which is subjected to persistent disturbances, such as shuttle docking. Preliminary results indicate that the proposed control system can reject considerable persistent disturbances by utilizing most of the available control authority, while limiting the structural displacements to within desired tolerances. Further work on incorporating additional design criteria, such as compensator robustness to be traded off against performance specifications, is warranted.

  9. Design Space Approach in Optimization of Fluid Bed Granulation and Tablets Compression Process

    PubMed Central

    Djuriš, Jelena; Medarević, Djordje; Krstić, Marko; Vasiljević, Ivana; Mašić, Ivana; Ibrić, Svetlana

    2012-01-01

    The aim of this study was to optimize the fluid bed granulation and tablet compression processes using a design space approach. Type of diluent, binder concentration, temperature during mixing, granulation and drying, spray rate, and atomization pressure were recognized as critical formulation and process parameters. They were varied in the first set of experiments in order to estimate their influence on critical quality attributes, that is, granule characteristics (size distribution, flowability, bulk density, tapped density, Carr's index, Hausner's ratio, and moisture content), using a Plackett-Burman experimental design. Type of diluent and atomization pressure were selected as the most important parameters. In the second set of experiments, a design space for the process parameters (atomization pressure and compression force) and their influence on tablet characteristics was developed. Percent of paracetamol released and tablet hardness were determined as critical quality attributes. Artificial neural networks (ANNs) were applied in order to determine the design space. The ANN models showed that atomization pressure mainly influences the dissolution profile, whereas compression force affects mainly the tablet hardness. Based on the obtained ANN models, it is possible to predict tablet hardness and the paracetamol release profile for any combination of the analyzed factors. PMID:22919295

  10. A Kriging surrogate model coupled in simulation-optimization approach for identifying release history of groundwater sources

    NASA Astrophysics Data System (ADS)

    Zhao, Ying; Lu, Wenxi; Xiao, Chuanning

    2016-02-01

    As the incidence frequency of groundwater pollution increases, many methods that identify the source characteristics of pollutants are being developed. In this study, a simulation-optimization approach was applied to determine the duration and magnitude of pollutant sources. Such problems are time consuming because thousands of simulation model runs are required by the optimization model. To address this challenge, a Kriging surrogate model was proposed to increase computational efficiency. The accuracy, time consumption, and robustness of the Kriging model were tested on both homogeneous and non-uniform media, as well as under steady-state and transient flow and transport conditions. The results of three hypothetical cases demonstrate that the Kriging model can solve groundwater contaminant source problems, of the kind encountered in field-site source identification, with a high degree of accuracy and short computation times, and is thus very robust.

  11. A Kriging surrogate model coupled in simulation-optimization approach for identifying release history of groundwater sources.

    PubMed

    Zhao, Ying; Lu, Wenxi; Xiao, Chuanning

    2016-01-01

    As the incidence frequency of groundwater pollution increases, many methods that identify the source characteristics of pollutants are being developed. In this study, a simulation-optimization approach was applied to determine the duration and magnitude of pollutant sources. Such problems are time consuming because thousands of simulation model runs are required by the optimization model. To address this challenge, a Kriging surrogate model was proposed to increase computational efficiency. The accuracy, time consumption, and robustness of the Kriging model were tested on both homogeneous and non-uniform media, as well as under steady-state and transient flow and transport conditions. The results of three hypothetical cases demonstrate that the Kriging model can solve groundwater contaminant source problems, of the kind encountered in field-site source identification, with a high degree of accuracy and short computation times, and is thus very robust.
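
    The simulation-optimization loop described in the two records above can be sketched with an off-the-shelf Gaussian-process (Kriging) regressor standing in for the expensive transport simulator. The fake simulator, parameter ranges, and observed value below are illustrative assumptions only, not the paper's setup.

    ```python
    # Kriging surrogate replacing an expensive simulator in source inversion.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def expensive_simulation(source):               # placeholder for the solver
        duration, magnitude = source
        return magnitude * np.exp(-0.5 * duration)  # fake breakthrough reading

    rng = np.random.default_rng(1)
    X_train = rng.uniform([0, 0], [10, 5], size=(60, 2))  # (duration, magnitude)
    y_train = np.array([expensive_simulation(x) for x in X_train])

    surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=2.0),
                                         normalize_y=True).fit(X_train, y_train)

    # Invert: find the source whose surrogate prediction best matches the
    # observed concentration, scanning a cheap dense grid instead of the solver.
    observed = 1.7
    grid = np.stack(np.meshgrid(np.linspace(0, 10, 101),
                                np.linspace(0, 5, 101)), axis=-1).reshape(-1, 2)
    best = grid[np.argmin((surrogate.predict(grid) - observed) ** 2)]
    print("estimated (duration, magnitude):", best)
    ```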

  12. A new dynamic approach for statistical optimization of GNSS radio occultation bending angles for optimal climate monitoring utility

    NASA Astrophysics Data System (ADS)

    Li, Y.; Kirchengast, G.; Scherllin-Pirscher, B.; Wu, S.; Schwaerz, M.; Fritzer, J.; Zhang, S.; Carter, B. A.; Zhang, K.

    2013-12-01

    Global Navigation Satellite System (GNSS)-based radio occultation (RO) is a satellite remote sensing technique providing accurate profiles of the Earth's atmosphere for weather and climate applications. Above about 30 km altitude, however, statistical optimization is a critical process for initializing the RO bending angles in order to optimize the climate monitoring utility of the retrieved atmospheric profiles. Here we introduce an advanced dynamic statistical optimization algorithm, which uses bending angles from multiple days of European Centre for Medium-Range Weather Forecasts (ECMWF) short-range forecast and analysis fields, together with averaged observed bending angles, to obtain background profiles and associated error covariance matrices with geographically varying background uncertainty estimates on a daily updated basis. The new algorithm is evaluated against the existing Wegener Center Occultation Processing System version 5.4 (OPSv5.4) algorithm, using several days of simulated MetOp and observed CHAMP and COSMIC data, for January and July conditions. We find the following for the new method's performance compared to OPSv5.4: 1) it significantly reduces random errors (standard deviations), down to about half their size, and leaves less than or about equal residual systematic errors (biases) in the optimized bending angles; 2) the dynamic (daily) estimate of the background error correlation matrix alone already improves the optimized bending angles; 3) the subsequently retrieved refractivity and atmospheric (temperature) profiles benefit from improved error characteristics, especially above about 30 km. Based on these encouraging results, we are working to employ similar dynamic error covariance estimation for the observed bending angles as well, and to apply the method to full months and subsequently to entire climate data records.
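
    At its core, the statistical optimization step blends background and observed bending-angle profiles with inverse-covariance weights. Below is a minimal sketch under strongly simplified, assumed covariances; the profile values and error levels are illustrative only, not the OPSv5.4 formulation.

    ```python
    # Inverse-covariance-weighted blend of background and observed profiles.
    import numpy as np

    n = 5                                            # impact heights in profile
    b = np.array([1.00, 0.80, 0.64, 0.51, 0.41])     # background bending angles
    y = np.array([1.05, 0.78, 0.60, 0.55, 0.44])     # observed bending angles

    # Background (B) and observation (R) error covariance matrices
    corr = np.fromfunction(lambda i, j: np.exp(-np.abs(i - j) / 2.0), (n, n))
    B = 0.02**2 * corr                               # correlated background errors
    R = 0.03**2 * np.eye(n)                          # uncorrelated observation errors

    K = B @ np.linalg.inv(B + R)                     # gain matrix
    x_opt = b + K @ (y - b)                          # statistically optimized profile
    print(x_opt.round(4))
    ```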

  13. Applying a Bayesian Approach to Identification of Orthotropic Elastic Constants from Full Field Displacement Measurements

    NASA Astrophysics Data System (ADS)

    Gogu, C.; Yin, W.; Haftka, R.; Ifju, P.; Molimard, J.; Le Riche, R.; Vautrin, A.

    2010-06-01

    A major challenge in the identification of material properties is handling different sources of uncertainty in the experiment, and in the modelling of the experiment, in order to estimate the resulting uncertainty in the identified properties. Numerous improvements in identification methods have provided increasingly accurate estimates of various material properties. However, characterizing the uncertainty in the identified properties is still relatively crude. Different material properties obtained from a single test are not obtained with the same confidence. Typically the highest uncertainty is associated with the properties to which the experiment is least sensitive. In addition, the uncertainty in different properties can be strongly correlated, so that obtaining only variance estimates may be misleading. A possible approach for handling the different sources of uncertainty and estimating the uncertainty in the identified properties is the Bayesian method. This method was introduced in the late 1970s in the context of identification [1] and has since been applied to different problems, notably the identification of elastic constants from plate vibration experiments [2]-[4]. The applications of the method to these classical pointwise tests involved only a small number of measurements (typically ten natural frequencies in the previously cited vibration test), which facilitated the application of the Bayesian approach. For identifying elastic constants, full-field strain or displacement measurements provide a high number of measured quantities (one measurement per image pixel) and hence a promise of smaller uncertainties in the properties. However, the high number of measurements also represents a major computational challenge in applying the Bayesian approach to full-field measurements. To address this challenge we propose an approach based on the proper orthogonal decomposition (POD) of the full fields in order to drastically reduce their dimensionality. POD is
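
    The POD reduction mentioned at the end of the abstract can be sketched with a plain SVD: a snapshot matrix of full-field measurements is compressed to a handful of mode coefficients. The synthetic fields below stand in for measured displacement maps; the mode count and noise level are assumptions.

    ```python
    # Proper orthogonal decomposition of full-field snapshots via SVD.
    import numpy as np

    rng = np.random.default_rng(2)
    n_pixels, n_snapshots = 10_000, 40
    # Synthetic full-field maps dominated by 3 spatial modes plus noise
    spatial_modes = rng.normal(size=(n_pixels, 3))
    weights = rng.normal(size=(3, n_snapshots))
    fields = spatial_modes @ weights + 0.01 * rng.normal(size=(n_pixels, n_snapshots))

    mean_field = fields.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(fields - mean_field, full_matrices=False)

    energy = np.cumsum(s**2) / np.sum(s**2)
    k = int(np.searchsorted(energy, 0.99)) + 1   # modes capturing 99% energy
    basis = U[:, :k]                             # POD basis

    # Each field is now summarized by k coefficients instead of 10,000 pixel
    # values, keeping the Bayesian likelihood evaluation tractable.
    coeffs = basis.T @ (fields - mean_field)
    print(k, coeffs.shape)
    ```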

  14. Uncertainty Optimization Applied to the Monte Carlo Analysis of Planetary Entry Trajectories

    NASA Technical Reports Server (NTRS)

    Olds, John; Way, David

    2001-01-01

    trial-and-error approach to reconcile discrepancies. Therefore, an improvement to the Monte Carlo analysis is needed that will allow the problem to be worked in reverse. In this way, the largest allowable dispersions that achieve the required mission objectives can be determined quantitatively.

  15. An optimization approach for design of RC beams subjected to flexural and shear effects

    NASA Astrophysics Data System (ADS)

    Nigdeli, Sinan Melih; Bekdaş, Gebrail

    2013-10-01

    A random search technique (RST) is proposed for the optimum design of reinforced concrete (RC) beams with minimum material cost. Cross-sectional dimensions and reinforcement bars are optimized for different flexural moments and shear forces. The optimization of the reinforcement includes the number and diameter of longitudinal bars for flexural moments; stirrup reinforcement is also designed for shear forces. The optimization is performed according to the design procedure given in ACI 318 (Building Code Requirements for Structural Concrete). The approach is effective for the detailed design of RC beams, ensuring both safety and practical application conditions.
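
    Below is a minimal sketch of the random-search idea, assuming a deliberately simplified cost function and a single flexural-capacity check; the unit prices, lever-arm estimate, and search ranges are illustrative placeholders, not the ACI 318 provisions applied in the paper.

    ```python
    # Random search for a feasible, low-cost RC beam cross-section.
    import random

    random.seed(0)
    M_demand = 150e6          # required moment capacity (N*mm), assumed
    fy, d_bar = 420.0, 16.0   # steel yield strength (MPa) and bar diameter (mm)

    def capacity(b, h, n_bars):
        # Crude capacity estimate: steel force times a 0.8*h lever arm
        As = n_bars * 3.1416 * (d_bar / 2) ** 2
        return As * fy * 0.8 * h

    def cost(b, h, n_bars):
        # Concrete plus steel, with illustrative unit prices
        return 1e-4 * b * h + 2.0 * n_bars

    best = None
    for _ in range(20000):
        b = random.uniform(250, 400)     # width (mm)
        h = random.uniform(400, 700)     # depth (mm)
        n_bars = random.randint(2, 8)
        if capacity(b, h, n_bars) >= M_demand:   # feasibility check
            c = cost(b, h, n_bars)
            if best is None or c < best[0]:
                best = (c, round(b), round(h), n_bars)
    print(best)   # (cost, width, depth, number of bars)
    ```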

  16. Time-optimal three-axis reorientation of asymmetric rigid spacecraft via homotopic approach

    NASA Astrophysics Data System (ADS)

    Li, Jing

    2016-05-01

    This paper investigates the time-optimal rest-to-rest three-axis reorientation of asymmetric rigid spacecraft. First, time-optimal solutions for the inertially symmetric rigid spacecraft (ISRS) three-axis reorientation are briefly reviewed. By utilizing the initial costates and reorientation time of the ISRS time-optimal solution, a homotopic approach is introduced to solve the asymmetric rigid spacecraft time-optimal three-axis reorientation problem. The main merit is that the homotopic approach can start automatically and reliably, which facilitates the real-time generation of open-loop time-optimal solutions for attitude slewing maneuvers. Finally, numerical examples are given to illustrate the performance of the proposed method. For principal axis reorientation, numerical results and analytical derivations show that multiple time-optimal solutions exist, and the relations between them are given. For the generic reorientation problem, though a mathematically rigorous proof is not available to date, numerical results also indicate the existence of multiple time-optimal solutions.

  17. Optimal design of experiments applied to headspace solid phase microextraction for the quantification of vicinal diketones in beer through gas chromatography-mass spectrometric detection.

    PubMed

    Leça, João M; Pereira, Ana C; Vieira, Ana C; Reis, Marco S; Marques, José C

    2015-08-05

    Vicinal diketones, namely diacetyl (DC) and pentanedione (PN), are compounds naturally found in beer that play a key role in the definition of its aroma. In lager beer, they are responsible for off-flavors (buttery flavor) and therefore their presence and quantification is of paramount importance to beer producers. Aiming at developing an accurate quantitative monitoring scheme to follow these off-flavor compounds during beer production and in the final product, the headspace solid-phase microextraction (HS-SPME) analytical procedure was tuned through optimally planned experiments and the final settings were fully validated. Optimal design of experiments (O-DOE) is a computational, statistically oriented approach for designing experiments that are most informative according to a well-defined criterion. This methodology was applied for HS-SPME optimization, leading to the following optimal extraction conditions for the quantification of VDK: use of a CAR/PDMS fiber, 5 mL of sample in a 20 mL vial, and 5 min of pre-incubation time followed by 25 min of extraction at 30 °C, with agitation. The validation of the final analytical methodology was performed using a matrix-matched calibration, in order to minimize matrix effects. The following key features were obtained: linearity (R(2) > 0.999, both for diacetyl and 2,3-pentanedione), high sensitivity (LOD of 0.92 μg L(-1) and 2.80 μg L(-1), and LOQ of 3.30 μg L(-1) and 10.01 μg L(-1), for diacetyl and 2,3-pentanedione, respectively), recoveries of approximately 100%, and suitable precision (repeatability and reproducibility lower than 3% and 7.5%, respectively). The applicability of the methodology was fully confirmed through an independent analysis of several beer samples, with analyte concentrations ranging from 4 to 200 μg L(-1).
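
    The linearity, LOD, and LOQ figures reported above follow from a standard calibration fit. Below is a minimal sketch, assuming the common LOD = 3.3 s/b and LOQ = 10 s/b conventions (s: residual standard deviation, b: slope); the calibration data are hypothetical.

    ```python
    # Calibration figures of merit from a simple linear fit.
    import numpy as np

    conc = np.array([5, 10, 25, 50, 100, 200], dtype=float)          # ug/L standards
    area = np.array([1.1e4, 2.2e4, 5.4e4, 1.08e5, 2.15e5, 4.31e5])   # peak areas

    slope, intercept = np.polyfit(conc, area, 1)
    resid = area - (slope * conc + intercept)
    s = np.sqrt(np.sum(resid**2) / (len(conc) - 2))   # residual std deviation

    r2 = 1 - np.sum(resid**2) / np.sum((area - area.mean())**2)
    print(f"R^2 = {r2:.5f}, LOD = {3.3 * s / slope:.2f} ug/L, "
          f"LOQ = {10 * s / slope:.2f} ug/L")
    ```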

  18. Multidisciplinary Optimization Approach for Design and Operation of Constrained and Complex-shaped Space Systems

    NASA Astrophysics Data System (ADS)

    Lee, Dae Young

    The design of a small satellite is challenging since it is constrained by mass, volume, and power. To mitigate these constraint effects, designers adopt deployable configurations on the spacecraft, which result in an interesting and difficult optimization problem. The resulting optimization problem is challenging due to the computational complexity caused by the large number of design variables and the model complexity created by the deployables. Adding to these complexities, there is a lack of integration of design optimization systems into operational optimization and the utility maximization of spacecraft in orbit. The developed methodology enables satellite Multidisciplinary Design Optimization (MDO) that is extendable to on-orbit operation. Optimization of on-orbit operations is possible with MDO since the model predictive controller developed in this dissertation guarantees the achievement of the on-ground design behavior in orbit. To enable the design optimization of highly constrained and complex-shaped space systems, the spherical coordinate analysis technique, called the "Attitude Sphere", is extended and merged with additional engineering tools such as OpenGL. OpenGL's graphic acceleration facilitates the accurate estimation of the shadow-degraded photovoltaic cell area. This technique is applied to the design optimization of the satellite Electric Power System (EPS), and the design result shows that the amount of photovoltaic power generation can be increased by more than 9%. Based on this initial methodology, the goal of this effort is extended from Single Discipline Optimization to Multidisciplinary Optimization, which includes the design and also the operation of the EPS, Attitude Determination and Control System (ADCS), and communication system. The geometry optimization satisfies the conditions of the ground development phase; however, the operation optimization may not be as successful as expected in orbit due to disturbances. To address this issue

  19. Towards a full integration of optimization and validation phases: An analytical-quality-by-design approach.

    PubMed

    Hubert, C; Houari, S; Rozet, E; Lebrun, P; Hubert, Ph

    2015-05-22

    When using an analytical method, defining an analytical target profile (ATP) focused on quantitative performance represents a key input, and this will drive the method development process. In this context, two case studies were selected in order to demonstrate the potential of a quality-by-design (QbD) strategy when applied to two specific phases of the method lifecycle: the pre-validation study and the validation step. The first case study focused on the improvement of a liquid chromatography (LC) coupled to mass spectrometry (MS) stability-indicating method by means of the QbD concept. The design of experiments (DoE) conducted during the optimization step (i.e. determination of the qualitative design space (DS)) was performed a posteriori. Additional experiments were performed in order to simultaneously conduct the pre-validation study and to assist in defining the DoE to be conducted during the formal validation step. This predicted protocol was compared to the one used during the formal validation. A second case study, based on the LC/MS-MS determination of glucosamine and galactosamine in human plasma, was considered in order to illustrate an innovative strategy allowing the QbD methodology to be incorporated during the validation phase. An operational space, defined by the qualitative DS, was considered during the validation process rather than a specific set of working conditions as conventionally performed. Results for all the validation parameters conventionally studied were compared to those obtained with this innovative approach for glucosamine and galactosamine. Using this strategy, both qualitative and quantitative information were obtained. Consequently, an analyst using this approach would be able to select with great confidence several working conditions within the operational space, rather than a given condition, for the routine use of the method. This innovative strategy combines both a learning process and a thorough assessment of the risk involved.

  20. Novel approach of fragment-based lead discovery applied to renin inhibitors.

    PubMed

    Tawada, Michiko; Suzuki, Shinkichi; Imaeda, Yasuhiro; Oki, Hideyuki; Snell, Gyorgy; Behnke, Craig A; Kondo, Mitsuyo; Tarui, Naoki; Tanaka, Toshimasa; Kuroita, Takanobu; Tomimoto, Masaki

    2016-11-15

    A novel approach was conducted for fragment-based lead discovery and applied to renin inhibitors. The biochemical screening of a fragment library against renin provided a hit fragment which showed a characteristic interaction pattern with the target protein. The hit fragment bound only to the S1, S3, and S3(SP) (S3 subpocket) sites, without any interactions with the catalytic aspartate residues (Asp32 and Asp215 (pepsin numbering)). Prior to making chemical modifications to the hit fragment, we first identified its essential binding sites by utilizing the hit fragment's substructures. Second, we created a new and smaller scaffold, which better occupied the identified essential S3 and S3(SP) sites, by utilizing library synthesis with high-throughput chemistry. We then revisited the S1 site and efficiently explored a good building block to attach to the scaffold with library synthesis. In the library syntheses, the binding modes of each pivotal compound were determined and confirmed by X-ray crystallography, and the libraries were strategically designed by a structure-based computational approach not only to obtain a more active compound but also to obtain an informative structure-activity relationship (SAR). As a result, we obtained a lead compound offering synthetic accessibility as well as improved in vitro ADMET profiles. The fragments and compounds possessing a characteristic interaction pattern provided new structural insights into renin's active site and the potential to create a new generation of renin inhibitors. In addition, we demonstrated that our FBDD strategy, integrating a highly sensitive biochemical assay, X-ray crystallography, and high-throughput synthesis and in silico library design aimed at fragment morphing at the initial stage, was effective in elucidating a pocket profile and producing a promising lead compound.

  1. A Monte Carlo approach applied to ultrasonic non-destructive testing

    NASA Astrophysics Data System (ADS)

    Mosca, I.; Bilgili, F.; Meier, T.; Sigloch, K.

    2012-04-01

    Non-destructive testing based on ultrasound allows us to detect, characterize and size discrete flaws in geotechnical and architectural structures and materials. This information is needed to determine whether such flaws can be tolerated in future service. In typical ultrasonic experiments, only the first-arriving P-wave is interpreted, and the remainder of the recorded waveform is neglected. Our work aims at understanding surface waves, which are strong signals in the later wave train, with the ultimate goal of full waveform tomography. At present, even the structural estimation of layered media is still challenging because material properties of the samples can vary widely, and good initial models for inversion do not often exist. The aim of the present study is to combine non-destructive testing with a theoretical data analysis and hence to contribute to conservation strategies of archaeological and architectural structures. We analyze ultrasonic waveforms measured at the surface of a variety of samples, and define the behaviour of surface waves in structures of increasing complexity. The tremendous potential of ultrasonic surface waves becomes an advantage only if numerical forward modelling tools are available to describe the waveforms accurately. We compute synthetic full seismograms as well as group and phase velocities for the data. We invert them for the elastic properties of the sample via a global search of the parameter space, using the Neighbourhood Algorithm. Such a Monte Carlo approach allows us to perform a complete uncertainty and resolution analysis, but the computational cost is high and increases quickly with the number of model parameters. Therefore it is practical only for defining the seismic properties of media with a limited number of degrees of freedom, such as layered structures. We have applied this approach to both synthetic layered structures and real samples. The former contributed to benchmark the propagation of ultrasonic surface

  2. A Monte Carlo approach applied to ultrasonic non-destructive testing

    NASA Astrophysics Data System (ADS)

    Mosca, I.; Bilgili, F.; Meier, T. M.; Sigloch, K.

    2011-12-01

    Non-destructive testing based on ultrasound allows us to detect, characterize and size discrete flaws in geotechnical and engineering structures and materials. This information is needed to determine whether such flaws can be tolerated in future service. In typical ultrasonic experiments, only the first-arriving P-wave is interpreted, and the remainder of the recorded waveform is neglected. Our work aims at understanding surface waves, which are strong signals in the later wave train, with the ultimate goal of full waveform tomography. At present, even the structural estimation of layered media is still challenging because material properties of the samples can vary widely, and good initial models for inversion do not often exist. The aim of the present study is to analyze ultrasonic waveforms measured at the surface of Plexiglas and rock samples, and to define the behaviour of surface waves in structures of increasing complexity. The tremendous potential of ultrasonic surface waves becomes an advantage only if numerical forward modelling tools are available to describe the waveforms accurately. We compute synthetic full seismograms as well as group and phase velocities for the data. We invert them for the elastic properties of the sample via a global search of the parameter space, using the Neighbourhood Algorithm. Such a Monte Carlo approach allows us to perform a complete uncertainty and resolution analysis, but the computational cost is high and increases quickly with the number of model parameters. Therefore it is practical only for defining the seismic properties of media with a limited number of degrees of freedom, such as layered structures. We have applied this approach to both synthetic layered structures and real samples. The former contributed to benchmark the propagation of ultrasonic surface waves in typical materials tested with a non-destructive technique (e.g., marble, unweathered and weathered concrete and natural stone).

  3. A scalar optimization approach for averaged Hausdorff approximations of the Pareto front

    NASA Astrophysics Data System (ADS)

    Schütze, Oliver; Domínguez-Medina, Christian; Cruz-Cortés, Nareli; Gerardo de la Fraga, Luis; Sun, Jian-Qiao; Toscano, Gregorio; Landa, Ricardo

    2016-09-01

    This article presents a novel method to compute averaged Hausdorff (Δp) approximations of the Pareto fronts of multi-objective optimization problems. The underlying idea is to directly utilize the scalar optimization problem that is induced by the Δp performance indicator. This method can be viewed as a certain set-based scalarization approach and can be addressed both by mathematical programming techniques and by evolutionary algorithms (EAs). In this work, the focus is on the latter, where a first single-objective EA for such Δp approximations is proposed. Finally, the strength of the novel approach is demonstrated on some bi-objective benchmark problems with different shapes of the Pareto front.
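
    The Δp indicator itself is simple to compute for finite sets: it is the maximum of the averaged generational distance and its inverted counterpart. A minimal sketch with synthetic points (p = 2) follows; the fronts below are illustrative, not the paper's benchmarks.

    ```python
    # Averaged Hausdorff indicator Delta_p between two finite point sets.
    import numpy as np

    def gd_p(A, F, p=2):
        """Averaged generational distance: mean p-th power of each point's
        distance to its nearest neighbour in the other set."""
        d = np.min(np.linalg.norm(A[:, None, :] - F[None, :, :], axis=2), axis=1)
        return np.mean(d ** p) ** (1.0 / p)

    def delta_p(A, F, p=2):
        return max(gd_p(A, F, p), gd_p(F, A, p))   # max of GD_p and IGD_p

    t = np.linspace(0, 1, 50)
    front = np.column_stack([t, 1 - t])                      # reference front
    archive = np.column_stack([t[::5], 1 - t[::5]]) + 0.02   # candidate set
    print(delta_p(archive, front))
    ```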

  4. Applying clustering approach in predictive uncertainty estimation: a case study with the UNEEC method

    NASA Astrophysics Data System (ADS)

    Dogulu, Nilay; Solomatine, Dimitri; Lal Shrestha, Durga

    2014-05-01

    Within the context of flood forecasting, assessment of predictive uncertainty has become a necessity for most modelling studies in operational hydrology. There are several uncertainty analysis and/or prediction methods available in the literature; however, most of them rely on normality and homoscedasticity assumptions for the model residuals occurring when reproducing the observed data. This study focuses on a statistical method that analyzes model residuals without any such assumptions, based on a clustering approach: Uncertainty Estimation based on local Errors and Clustering (UNEEC). The aim of this work is to provide a comprehensive evaluation of the UNEEC method's performance in view of the clustering approach employed within its methodology. This is done by analyzing the normality of model residuals and comparing uncertainty analysis results (for 50% and 90% confidence levels) with those obtained from uniform interval and quantile regression methods. An important part of the basis on which the methods are compared is the analysis of data clusters representing different hydrometeorological conditions. The validation measures used are PICP, MPI, ARIL, and NUE where necessary. A new validation measure linking the prediction interval to the (hydrological) model quality - the weighted mean prediction interval (WMPI) - is also proposed for comparing the methods more effectively. The case study is the Brue catchment, located in the South West of England. A different parametrization of the method than in its previous application in Shrestha and Solomatine (2008) is used, i.e. past error values are considered in addition to discharge and effective rainfall. The results show that UNEEC's notable characteristic, i.e. applying clustering to data of predictors in which catchment behaviour information is encapsulated, contributes to the increased accuracy of the method's results for varying flow conditions. Besides, classifying data so that extreme flow events are individually

  5. A novel linear programming approach to fluence map optimization for intensity modulated radiation therapy treatment planning.

    PubMed

    Romeijn, H Edwin; Ahuja, Ravindra K; Dempsey, James F; Kumar, Arvind; Li, Jonathan G

    2003-11-07

    We present a novel linear programming (LP) based approach for efficiently solving the intensity modulated radiation therapy (IMRT) fluence-map optimization (FMO) problem to global optimality. Our model overcomes the apparent limitations of a linear-programming approach by approximating any convex objective function by a piecewise linear convex function. This approach allows us to retain the flexibility offered by general convex objective functions, while allowing us to formulate the FMO problem as a LP problem. In addition, a novel type of partial-volume constraint that bounds the tail averages of the differential dose-volume histograms of structures is imposed while retaining linearity as an alternative approach to improve dose homogeneity in the target volumes, and to attempt to spare as many critical structures as possible. The goal of this work is to develop a very rapid global optimization approach that finds high quality dose distributions. Implementation of this model has demonstrated excellent results. We found globally optimal solutions for eight 7-beam head-and-neck cases in less than 3 min of computational time on a single processor personal computer without the use of partial-volume constraints. Adding such constraints increased the running times by a factor of 2-3, but improved the sparing of critical structures. All cases demonstrated excellent target coverage (> 95%), target homogeneity (< 10% overdosing and < 7% underdosing) and organ sparing using at least one of the two models.
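
    The key modelling trick above, replacing a convex objective by a piecewise-linear epigraph so the problem stays an LP, can be sketched in one dimension. The toy "dose" variable, tangent points, and constraint below are illustrative assumptions, not the paper's fluence-map formulation.

    ```python
    # Piecewise-linear epigraph of a convex penalty inside an LP.
    import numpy as np
    from scipy.optimize import linprog

    anchors = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # tangent points, arbitrary
    slopes = 2 * (anchors - 2)                      # f'(a) for f(x) = (x - 2)^2
    intercepts = (anchors - 2) ** 2 - slopes * anchors

    # Variables z = [x, t]; minimize t subject to t >= slope*x + intercept
    # (rewritten as slope*x - t <= -intercept) plus a "dose" constraint x >= 3.
    A_ub = np.column_stack([slopes, -np.ones_like(slopes)])
    b_ub = -intercepts
    A_ub = np.vstack([A_ub, [-1.0, 0.0]])           # -x <= -3  <=>  x >= 3
    b_ub = np.append(b_ub, -3.0)

    res = linprog(c=[0.0, 1.0], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, 4), (None, None)])
    print(res.x)   # x -> 3, t -> piecewise-linear estimate of (3 - 2)^2
    ```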

  6. Applied tagmemics: A heuristic approach to the use of graphic aids in technical writing

    NASA Technical Reports Server (NTRS)

    Brownlee, P. P.; Kirtz, M. K.

    1981-01-01

    In technical report writing, two needs which must be met if reports are to be useable by an audience are the language needs and the technical needs of that particular audience. A heuristic analysis helps to decide the most suitable format for information; that is, whether the information should be presented verbally or visually. The report writing process should be seen as an organic whole which can be divided and subdivided according to the writer's purpose, but which always functions as a totality. The tagmemic heuristic, because it itself follows a process of deconstructing and reconstructing information, lends itself to being a useful approach to the teaching of technical writing. By applying the abstract questions this heuristic asks to specific parts of the report. The language and technical needs of the audience are analyzed by examining the viability of the solution within the givens of the corporate structure, and by deciding which graphic or verbal format will best suit the writer's purpose. By following such a method, answers which are both specific and thorough in their range of application are found.

  7. Distribution function approach to redshift space distortions. Part V: perturbation theory applied to dark matter halos

    SciTech Connect

    Vlah, Zvonimir; Seljak, Uroš; Okumura, Teppei; Desjacques, Vincent E-mail: seljak@physik.uzh.ch E-mail: Vincent.Desjacques@unige.ch

    2013-10-01

    Numerical simulations show that redshift space distortions (RSD) introduce strong scale dependence in the power spectra of halos, with ten percent deviations relative to linear theory predictions even on relatively large scales (k < 0.1h/Mpc) and even in the absence of satellites (which induce Fingers-of-God, FoG, effects). If unmodeled, these effects prevent one from extracting cosmological information from RSD surveys. In this paper we use Eulerian perturbation theory (PT) and an Eulerian halo biasing model and apply them to the distribution function approach to RSD, in which RSD is decomposed into several correlators of density-weighted velocity moments. We model each of these correlators using PT and compare the results to simulations over a wide range of halo masses and redshifts. We find that with the introduction of physically motivated halo biasing, and using dark matter power spectra from simulations, we can reproduce the simulation results at the percent level on scales up to k ∼ 0.15h/Mpc at z = 0, without the need for free FoG parameters in the model.

  8. An explorative chemometric approach applied to hyperspectral images for the study of illuminated manuscripts

    NASA Astrophysics Data System (ADS)

    Catelli, Emilio; Randeberg, Lise Lyngsnes; Alsberg, Bjørn Kåre; Gebremariam, Kidane Fanta; Bracci, Silvano

    2017-04-01

    Hyperspectral imaging (HSI) is a fast non-invasive imaging technology recently applied in the field of art conservation. With the help of chemometrics, important information about the spectral properties and spatial distribution of pigments can be extracted from HSI data. With the intent of expanding the applications of chemometrics to the interpretation of hyperspectral images of historical documents, and, at the same time, to study the colorants and their spatial distribution on ancient illuminated manuscripts, an explorative chemometric approach is here presented. The method makes use of chemometric tools for spectral de-noising (minimum noise fraction (MNF)) and image analysis (multivariate image analysis (MIA) and iterative key set factor analysis (IKSFA)/spectral angle mapper (SAM)) which have given an efficient separation, classification and mapping of colorants from visible-near-infrared (VNIR) hyperspectral images of an ancient illuminated fragment. The identification of colorants was achieved by extracting and interpreting the VNIR spectra as well as by using a portable X-ray fluorescence (XRF) spectrometer.

  9. An explorative chemometric approach applied to hyperspectral images for the study of illuminated manuscripts.

    PubMed

    Catelli, Emilio; Randeberg, Lise Lyngsnes; Alsberg, Bjørn Kåre; Gebremariam, Kidane Fanta; Bracci, Silvano

    2017-04-15

    Hyperspectral imaging (HSI) is a fast non-invasive imaging technology recently applied in the field of art conservation. With the help of chemometrics, important information about the spectral properties and spatial distribution of pigments can be extracted from HSI data. With the intent of expanding the applications of chemometrics to the interpretation of hyperspectral images of historical documents, and, at the same time, to study the colorants and their spatial distribution on ancient illuminated manuscripts, an explorative chemometric approach is here presented. The method makes use of chemometric tools for spectral de-noising (minimum noise fraction (MNF)) and image analysis (multivariate image analysis (MIA) and iterative key set factor analysis (IKSFA)/spectral angle mapper (SAM)) which have given an efficient separation, classification and mapping of colorants from visible-near-infrared (VNIR) hyperspectral images of an ancient illuminated fragment. The identification of colorants was achieved by extracting and interpreting the VNIR spectra as well as by using a portable X-ray fluorescence (XRF) spectrometer.
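
    Of the chemometric tools listed in the two records above, the spectral angle mapper (SAM) step is the easiest to sketch: each pixel spectrum is scored by its angle to a reference endmember and thresholded into a colorant map. The hyperspectral cube, endmember, and threshold below are synthetic assumptions.

    ```python
    # Spectral angle mapper over a synthetic hyperspectral cube.
    import numpy as np

    rng = np.random.default_rng(3)
    cube = rng.uniform(0.05, 1.0, size=(64, 64, 120))   # rows x cols x bands
    endmember = rng.uniform(0.05, 1.0, size=120)        # reference spectrum

    def spectral_angle(cube, ref):
        dots = cube @ ref
        norms = np.linalg.norm(cube, axis=-1) * np.linalg.norm(ref)
        return np.arccos(np.clip(dots / norms, -1.0, 1.0))  # radians per pixel

    angles = spectral_angle(cube, endmember)
    mask = angles < 0.10           # pixels classified as this colorant
    print(angles.shape, mask.sum())
    ```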

  10. Applying patient centered approach in management of pulmonary tuberculosis: A case report from Malaysia.

    PubMed

    Atif, M; Sulaiman, Sas; Shafi, Aa; Muttalif, Ar; Ali, I; Saleem, F

    2011-06-01

    A 24-year-old university student with a history of productive cough was registered as a sputum-smear-confirmed case of pulmonary tuberculosis. During treatment, the patient suffered from itchiness associated with the anti-tuberculosis drugs and was treated with chlorpheniramine (4 mg) tablets. The patient missed twenty-eight doses of anti-tuberculosis drugs in the continuation phase, claiming that he was very busy with his studies and assignments. Upon questioning he further explained that he felt quite healthy after five months and was unable to concentrate on his studies after taking the prescribed medicines. His treatment was stopped based on clinical improvement, although he did not complete the six months of therapy. Two major reasons, a false perception of being completely cured and the side effects associated with anti-TB drugs, were likely responsible for the non-adherence. Non-sedating antihistamines such as fexofenadine, cetirizine, or loratadine should be preferred over first-generation antihistamines (chlorpheniramine) in patients with such a lifestyle. The patient had not completed the full course of chemotherapy, which is a preliminary requirement for a case to be classified as "cure" or "treatment completed". Moreover, the patient had not defaulted for two consecutive months. Therefore, according to the WHO treatment outcome categories, this patient can be classified neither as "cure" or "treatment completed" nor as a "defaulter". Further elaboration of the WHO treatment outcome categories is required for the adequate classification of patients with similar characteristics. The likelihood of non-adherence can be significantly reduced by applying the WHO-recommended "Patient Centered Approach" strategy. A close friend, classmate, or family member can be selected as a treatment supporter to ensure adherence to treatment.

  11. Old concepts, new molecules and current approaches applied to the bacterial nucleotide signalling field

    PubMed Central

    2016-01-01

    Signalling nucleotides are key molecules that help bacteria to rapidly coordinate cellular pathways and adapt to changes in their environment. During the past 10 years, the nucleotide signalling field has seen much excitement, as several new signalling nucleotides have been discovered in both eukaryotic and bacterial cells. The field has since advanced quickly, aided by the development of important tools such as the synthesis of modified nucleotides, which, combined with sensitive mass spectrometry methods, allowed for the rapid identification of specific receptor proteins, along with other novel genome-wide screening methods. In this review, we describe the principal concepts of nucleotide signalling networks and summarize the recent work that led to the discovery of the novel signalling nucleotides. We also highlight current approaches applied to research in the field, as well as resources and methodological advances aiding in the rapid identification of nucleotide-specific receptor proteins. This article is part of the themed issue ‘The new bacteriology’. PMID:27672152

  12. An Efficacious Multi-Objective Fuzzy Linear Programming Approach for Optimal Power Flow Considering Distributed Generation

    PubMed Central

    Warid, Warid; Hizam, Hashim; Mariun, Norman; Abdul-Wahab, Noor Izzri

    2016-01-01

    This paper proposes a new formulation for the multi-objective optimal power flow (MOOPF) problem for meshed power networks considering distributed generation. An efficacious multi-objective fuzzy linear programming optimization (MFLP) algorithm is proposed to solve the aforementioned problem with and without considering the distributed generation (DG) effect. A variant combination of objectives is considered for simultaneous optimization, including power loss, voltage stability, and shunt capacitors MVAR reserve. Fuzzy membership functions for these objectives are designed with extreme targets, whereas the inequality constraints are treated as hard constraints. The multi-objective fuzzy optimal power flow (OPF) formulation was converted into a crisp OPF in a successive linear programming (SLP) framework and solved using an efficient interior point method (IPM). To test the efficacy of the proposed approach, simulations are performed on the IEEE 30-bus and IEEE 118-bus test systems. The MFLP optimization is solved for several optimization cases. The obtained results are compared with those presented in the literature. A unique solution with a high satisfaction for the assigned targets is gained. Results demonstrate the effectiveness of the proposed MFLP technique in terms of solution optimality and rapid convergence. Moreover, the results indicate that using the optimal DG location with the MFLP algorithm provides the solution with the highest quality. PMID:26954783

  13. An Efficacious Multi-Objective Fuzzy Linear Programming Approach for Optimal Power Flow Considering Distributed Generation.

    PubMed

    Warid, Warid; Hizam, Hashim; Mariun, Norman; Abdul-Wahab, Noor Izzri

    2016-01-01

    This paper proposes a new formulation for the multi-objective optimal power flow (MOOPF) problem for meshed power networks considering distributed generation. An efficacious multi-objective fuzzy linear programming optimization (MFLP) algorithm is proposed to solve the aforementioned problem with and without considering the distributed generation (DG) effect. A variant combination of objectives is considered for simultaneous optimization, including power loss, voltage stability, and shunt capacitors MVAR reserve. Fuzzy membership functions for these objectives are designed with extreme targets, whereas the inequality constraints are treated as hard constraints. The multi-objective fuzzy optimal power flow (OPF) formulation was converted into a crisp OPF in a successive linear programming (SLP) framework and solved using an efficient interior point method (IPM). To test the efficacy of the proposed approach, simulations are performed on the IEEE 30-bus and IEEE 118-bus test systems. The MFLP optimization is solved for several optimization cases. The obtained results are compared with those presented in the literature. A unique solution with a high satisfaction for the assigned targets is gained. Results demonstrate the effectiveness of the proposed MFLP technique in terms of solution optimality and rapid convergence. Moreover, the results indicate that using the optimal DG location with the MFLP algorithm provides the solution with the highest quality.

  14. Comparison of linear and nonlinear programming approaches for "worst case dose" and "minmax" robust optimization of intensity-modulated proton therapy dose distributions.

    PubMed

    Zaghian, Maryam; Cao, Wenhua; Liu, Wei; Kardar, Laleh; Randeniya, Sharmalee; Mohan, Radhe; Lim, Gino

    2017-03-01

    Robust optimization of intensity-modulated proton therapy (IMPT) takes uncertainties into account during spot weight optimization and leads to dose distributions that are resilient to uncertainties. Previous studies demonstrated the benefits of linear programming (LP) for IMPT in terms of delivery efficiency by considerably reducing the number of spots required for the same quality of plans. However, a reduction in the number of spots may lead to a loss of robustness. The purpose of this study was to evaluate and compare the performance, in terms of plan quality and robustness, of two robust optimization approaches using LP and nonlinear programming (NLP) models. The so-called "worst case dose" and "minmax" robust optimization approaches and the conventional planning target volume (PTV)-based optimization approach were applied to designing IMPT plans for five patients: two with prostate cancer, one with skull-based cancer, and two with head and neck cancer. For each approach, both LP and NLP models were used. Thus, for each case, six sets of IMPT plans were generated and assessed: LP-PTV-based, NLP-PTV-based, LP-worst case dose, NLP-worst case dose, LP-minmax, and NLP-minmax. The four robust optimization methods behaved differently from patient to patient, and no method emerged as superior to the others in terms of nominal plan quality and robustness against uncertainties. The plans generated using LP-based robust optimization were more robust regarding patient setup and range uncertainties than were those generated using NLP-based robust optimization for the prostate cancer patients. However, the robustness of plans generated using NLP-based methods was superior for the skull-based and head and neck cancer patients. Overall, LP-based methods were suitable for the less challenging cancer cases, in which all uncertainty scenarios were able to satisfy tight dose constraints, while NLP performed better in more difficult cases, in which most uncertainty scenarios were hard to meet

  15. An analytical approach to the problem of inverse optimization with additive objective functions: an application to human prehension.

    PubMed

    Terekhov, Alexander V; Pesin, Yakov B; Niu, Xun; Latash, Mark L; Zatsiorsky, Vladimir M

    2010-09-01

    We consider the problem of what is being optimized in human actions with respect to various aspects of human movements and different motor tasks. From the mathematical point of view, this problem consists of finding an unknown objective function given the values at which it reaches its minimum; it is called the inverse optimization problem. Until now the main approach to this problem has been the cut-and-try method, which consists of introducing an objective function and checking how well it reflects the experimental data. Using this approach, different objective functions have been proposed for the same motor action. In the current paper we focus on inverse optimization problems with additive objective functions and linear constraints. Such problems are typical in human movement science; the problem of muscle (or finger) force sharing is an example. For such problems we obtain sufficient conditions for uniqueness and propose a method for determining the objective functions. To illustrate our method, we analyze the problem of force sharing among the fingers in a grasping task. We estimate the objective function from the experimental data and show that it can predict the force-sharing pattern for a vast range of external forces and torques applied to the grasped object. The resulting objective function is quadratic with essentially non-zero linear terms.
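
    In the additive setting considered here, the first-order optimality conditions provide the handle for recovering the objective; a schematic statement, in notation assumed for this note, is:

      % Forward problem: an additive objective under linear constraints
      \min_{x} \; \sum_{i=1}^{n} g_i(x_i) \quad \text{s.t.} \quad C x = b

      % Stationarity at an observed optimum x^*: the gradient of the objective
      % lies in the row space of C for some multiplier vector \lambda
      g_i'(x_i^*) \;=\; (C^\top \lambda)_i, \qquad i = 1, \dots, n

    Observing optimal solutions x^* across many task conditions (different b) therefore constrains the derivatives g_i' up to this structure, which is what makes estimating the g_i possible under the stated uniqueness conditions.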

  16. A heuristic optimization approach for Air Quality Monitoring Network design with the simultaneous consideration of multiple pollutants.

    PubMed

    Elkamel, A; Fatehifar, E; Taheri, M; Al-Rashidi, M S; Lohi, A

    2008-08-01

    An interactive optimization methodology for allocating the number and configuration of an Air Quality Monitoring Network (AQMN) in a vast area to identify the impact of multiple pollutants is described. A mathematical model based on the multiple cell approach (MCA) was used to create monthly spatial distributions of the concentrations of the pollutants emitted from different emission sources. These spatiotemporal patterns were subjected to a heuristic optimization algorithm to identify the optimal configuration of a monitoring network. The objective of the optimization is to provide maximum information about the multiple pollutants (i.e., CO, NOx, and SO2) emitted from each source within a given area. The model was applied to a network of existing refinery stacks and the results indicate that three stations can provide a total coverage of more than 70%. In addition, the effect of the spatial correlation coefficient (Rc) on total area coverage was analyzed. The modeling results show that as the cutoff correlation coefficient Rc is increased from 0.75 to 0.95, the number of monitoring stations required for total coverage increases. A network based on a high Rc may not cover the entire region, but the covered region will be well represented. A network based on a low Rc, on the other hand, offers more coverage of the region, but the covered region may not be satisfactorily represented.
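
    The following is a generic greedy sketch of correlation-based site selection (a plausible reading of such a heuristic, not the paper's exact algorithm): a candidate cell "covers" another grid cell when the temporal correlation between their modeled concentration series exceeds the cutoff Rc.

      import numpy as np

      def greedy_network(conc, n_stations, rc=0.75):
          # conc: (n_months, n_cells) concentration field from a dispersion
          # model such as the multiple cell approach (MCA)
          corr = np.corrcoef(conc.T)                 # cell-by-cell correlations
          covered = np.zeros(conc.shape[1], dtype=bool)
          chosen = []
          for _ in range(n_stations):
              # marginal gain: newly covered cells for each candidate site
              gains = (~covered & (corr >= rc)).sum(axis=1)
              best = int(np.argmax(gains))
              chosen.append(best)
              covered |= corr[best] >= rc
          return chosen, covered.mean()              # sites, fractional coverage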

  17. Non-linear global optimization via parameterization and inverse function approximation: an artificial neural networks approach.

    PubMed

    Mayorga, René V; Arriaga, Mariano

    2007-10-01

    In this article, a novel technique for non-linear global optimization is presented. The main goal is to find the optimal global solution of non-linear problems while avoiding sub-optimal local solutions and inflection points. The proposed technique is based on a two-step concept: progressively decrease the value of the objective function, and calculate the corresponding independent variables by approximating its inverse function. The decreasing process can continue even after reaching local minima and, in general, the algorithm stops when it converges to solutions near the global minimum. Implementing the proposed technique with conventional numerical methods may require considerable computational effort to approximate the inverse function; thus, a novel Artificial Neural Network (ANN) approach is implemented here to reduce the computational requirements of the proposed optimization technique. The approach is successfully tested on several highly non-linear functions possessing multiple local minima. The results obtained demonstrate that the proposed approach compares favorably with some current conventional numerical methods (Matlab functions) and with other non-conventional optimization methods (Evolutionary Algorithms, Simulated Annealing).
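
    A deliberately crude sketch of the two-step idea follows, under stated assumptions: a one-dimensional multimodal objective, an MLP standing in for the inverse approximator, and a fixed decrement schedule. Since the true inverse is set-valued, a single regressor can blur branches, so this only illustrates the mechanics rather than reproducing the authors' method.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      def f(x):                                      # multimodal test objective
          return np.sin(3.0 * x) + 0.1 * x ** 2

      rng = np.random.default_rng(0)
      x_s = rng.uniform(-5.0, 5.0, 500)
      y_s = f(x_s)
      # Step 2 machinery: approximate the inverse mapping y -> x with an ANN.
      inv = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000,
                         random_state=0).fit(y_s.reshape(-1, 1), x_s)
      # Step 1 loop: keep lowering the target objective value, then query the
      # approximate inverse for the corresponding independent variable.
      target, best_x = y_s.min(), None
      for _ in range(20):
          best_x = float(inv.predict([[target]])[0])
          target = min(target, f(best_x)) - 0.05     # push past local minima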

  18. A multi-objective approach to the design of low thrust space trajectories using optimal control

    NASA Astrophysics Data System (ADS)

    Dellnitz, Michael; Ober-Blöbaum, Sina; Post, Marcus; Schütze, Oliver; Thiere, Bianca

    2009-11-01

    In this article, we introduce a novel three-step approach for solving optimal control problems in space mission design. We demonstrate its potential by the example task of sending a group of spacecraft to a specific Earth L2 halo orbit. In each of the three steps we make use of recently developed optimization methods, and the result of one step serves as input data for the subsequent one. Firstly, we perform a global and multi-objective optimization on a restricted class of control functions. The solutions of this problem are (Pareto-)optimal with respect to ΔV and flight time. Based on the solution set, a compromise trajectory can be chosen suited to the mission goals. In the second step, this selected trajectory serves as an initial guess for a direct local optimization. We construct a trajectory using a more flexible control law and, hence, the obtained solutions are improved with respect to control effort. Finally, we consider the improved result as a reference trajectory for a formation flight task and compute trajectories for several spacecraft such that these arrive at the halo orbit in a prescribed relative configuration. The strong points of our three-step approach are that the challenging design of good initial guesses is handled numerically by the global optimization tool, and afterwards the last two steps only have to be performed for one reference trajectory.
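
    The first step's output is a Pareto set in the (ΔV, flight time) plane; for two minimization criteria, extracting the non-dominated set takes only a few lines (a generic sketch with made-up candidate values, not the authors' global optimization tool):

      # Keep candidates for which no other candidate is better in both
      # delta-v and flight time (both to be minimized).
      def pareto_front(points):
          front = []
          for dv, t in sorted(points):               # ascending delta-v
              if not front or t < front[-1][1]:
                  front.append((dv, t))
          return front

      candidates = [(3.2, 410.0), (2.9, 500.0), (3.5, 380.0), (3.2, 450.0)]
      print(pareto_front(candidates))   # [(2.9, 500.0), (3.2, 410.0), (3.5, 380.0)]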

  19. Multivariate optimization of molecularly imprinted polymer solid-phase extraction applied to parathion determination in different water samples.

    PubMed

    Alizadeh, Taher; Ganjali, Mohammad Reza; Nourozi, Parviz; Zare, Mashaalah

    2009-04-13

    In this work a parathion-selective molecularly imprinted polymer (MIP) was synthesized and applied as a highly selective adsorbent for parathion extraction and determination in aqueous samples. The method was based on sorption of parathion in the MIP in a simple batch procedure, followed by desorption with methanol and measurement by square wave voltammetry. Plackett-Burman and Box-Behnken designs were used to optimize the solid-phase extraction, in order to enhance the recovery and improve the pre-concentration factor. Using the screening design, the effect of six factors on the extraction recovery was investigated: pH, stirring rate (rpm), sample volume (V1), eluent volume (V2), organic solvent content of the sample (org%), and extraction time (t). The response surface design was then carried out considering the three factors found to be main effects: V2, V1, and org%. A mathematical model for the recovery was obtained as a function of these main effects, which were finally adjusted according to the defined desirability function. Recoveries of more than 95% could easily be obtained with the optimized method. Under the experimental conditions obtained in the optimization step, the method allowed selective determination of parathion in the linear dynamic range of 0.20-467.4 µg/L, with a detection limit of 49.0 ng/L and an R.S.D. of 5.7% (n=5). The parathion content of water samples was successfully analyzed, demonstrating the potential of the developed procedure.

  1. Central Composite Design (CCD) applied for statistical optimization of glucose and sucrose binary carbon mixture in enhancing the denitrification process

    NASA Astrophysics Data System (ADS)

    Lim, Jun-Wei; Beh, Hoe-Guan; Ching, Dennis Ling Chuan; Ho, Yeek-Chia; Baloo, Lavania; Bashir, Mohammed J. K.; Wee, Seng-Kew

    2016-12-01

    The present study provides an insight into the optimization of a glucose and sucrose mixture to enhance the denitrification process. Central Composite Design was applied to design the batch experiments, with the glucose and sucrose levels, each measured as a carbon-to-nitrogen (C:N) ratio, as factors and the percentage removal of nitrate-nitrogen (NO3-N) as the response. Results showed that a polynomial regression model of NO3-N removal was successfully derived, capable of describing the interactive relationships of the glucose and sucrose mixture that influence the denitrification process. Furthermore, the presence of glucose was noticed to have a more consequential effect on NO3-N removal than sucrose. The optimum carbon source mixture to achieve complete removal of NO3-N required less glucose (C:N ratio of 1.0:1.0) than sucrose (C:N ratio of 2.4:1.0). At the optimum glucose and sucrose mixture, the activated sludge showed faster acclimation towards the glucose used to perform the denitrification process; later, upon acclimation with sucrose, the glucose uptake rate by the activated sludge abated. Therefore, it is vital to optimize the added carbon source mixture to ensure rapid and complete removal of NO3-N via the denitrification process.
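
    A minimal sketch of the response-surface step (with hypothetical design points and removals, not the study's data): fit the full quadratic model in the two C:N factors by least squares and locate the stationary point analytically.

      import numpy as np

      # Hypothetical CCD-style runs: glucose and sucrose C:N ratios, NO3-N removal (%)
      g = np.array([0.5, 0.5, 2.0, 2.0, 0.2, 2.3, 1.25, 1.25, 1.25])
      s = np.array([1.0, 3.0, 1.0, 3.0, 2.0, 2.0, 0.60, 3.40, 2.00])
      y = np.array([62.0, 75.0, 88.0, 80.0, 55.0, 90.0, 70.0, 85.0, 95.0])
      # Full quadratic model: y ~ b0 + b1*g + b2*s + b3*g*s + b4*g^2 + b5*s^2
      X = np.column_stack([np.ones_like(g), g, s, g * s, g ** 2, s ** 2])
      b, *_ = np.linalg.lstsq(X, y, rcond=None)
      # Stationary point: grad = 0  =>  [[2*b4, b3], [b3, 2*b5]] @ [g, s] = -[b1, b2]
      H = np.array([[2 * b[4], b[3]], [b[3], 2 * b[5]]])
      g_opt, s_opt = np.linalg.solve(H, -b[1:3])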

  2. A new approach of analyzing time-varying dynamical equation via an optimal principle

    NASA Astrophysics Data System (ADS)

    Zhao, Hui; Li, Lixiang; Peng, Haipeng; Kurths, Jürgen; Xiao, Jinghua; Yang, Yixian; Li, Ang

    2017-03-01

    In this paper, an innovative design approach is proposed for solving time-varying dynamical equations, including the matrix inverse equation and the Sylvester equation. Assuming a solution of the time-varying dynamical equation exists, and in contrast to previous approaches that solve for the unknown matrix directly, an optimal design principle is used to solve for the unknown variables. A performance index is introduced based on the inherent properties of the time-varying dynamical equation and the Euler equation, and the solution of the equation is thereby converted into an optimization problem over this index. Convergence and sensitivity to additive noise are also analyzed, and simulation results confirm that the method is feasible and effective. In particular, a tunable positive parameter is designed into the dynamic optimization model; it both accelerates convergence and reduces sensitivity to additive noise. Comparative simulation results illustrate the convergence accuracy and robustness of the method.
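
    One common way to realize such a principle, sketched here under assumptions (a Frobenius-norm performance index and a plain Euler-discretized gradient flow, not necessarily the paper's construction), is to track the time-varying inverse by descending E(X) = ||A(t)X - I||_F^2 with a tunable gain gamma:

      import numpy as np

      def inverse_flow(A_of_t, T=1.0, dt=1e-3, gamma=50.0):
          X = np.linalg.inv(A_of_t(0.0))       # start from the exact inverse
          n = X.shape[0]
          t = 0.0
          while t < T:
              A = A_of_t(t)
              grad = 2.0 * A.T @ (A @ X - np.eye(n))   # gradient of ||AX - I||_F^2
              X -= dt * gamma * grad           # larger gamma: faster tracking,
              t += dt                          # but more sensitivity to step size
          return X

      A_of_t = lambda t: np.array([[2.0 + np.sin(t), 0.3],
                                   [0.1, 1.5 + 0.2 * np.cos(t)]])
      X_end = inverse_flow(A_of_t)             # approximate inverse of A(T)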

  3. Optimal Surface Segmentation in Volumetric Images—A Graph-Theoretic Approach

    PubMed Central

    Li, Kang; Wu, Xiaodong; Chen, Danny Z.; Sonka, Milan

    2008-01-01

    Efficient segmentation of globally optimal surfaces representing object boundaries in volumetric data sets is important and challenging in many medical image analysis applications. We have developed an optimal surface detection method capable of simultaneously detecting multiple interacting surfaces, in which the optimality is controlled by the cost functions designed for individual surfaces and by several geometric constraints defining the surface smoothness and interrelations. The method solves the surface segmentation problem by transforming it into computing a minimum s-t cut in a derived arc-weighted directed graph. The proposed algorithm has a low-order polynomial time complexity and is computationally efficient. It has been extensively validated on more than 300 computer-synthetic volumetric images, 72 CT-scanned data sets of different-sized plexiglas tubes, and tens of medical images spanning various imaging modalities. In all cases, the approach yielded highly accurate results. Our approach can be readily extended to higher-dimensional image segmentation. PMID:16402624
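
    The computational core the paper reduces to is a minimum s-t cut on an arc-weighted directed graph; a toy instance of that primitive (not the derived segmentation graph itself) can be run with networkx:

      import networkx as nx

      G = nx.DiGraph()
      G.add_edge("s", "a", capacity=3.0)
      G.add_edge("s", "b", capacity=2.0)
      G.add_edge("a", "t", capacity=1.0)
      G.add_edge("a", "b", capacity=2.0)
      G.add_edge("b", "t", capacity=4.0)
      cut_value, (source_side, sink_side) = nx.minimum_cut(G, "s", "t")
      # cut_value == 5.0; source_side and sink_side partition the nodes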

  4. An effective approach for obtaining optimal sampling windows for population pharmacokinetic experiments.

    PubMed

    Ogungbenro, Kayode; Aarons, Leon

    2009-01-01

    This paper describes an effective approach for optimizing sampling windows for population pharmacokinetic experiments. Sampling windows have been proposed for population pharmacokinetic experiments conducted in late-phase drug development programs, where patients are enrolled in many centers and out-patient clinic settings. Collection of samples in this uncontrolled environment at fixed times may be problematic and can result in uninformative data. A sampling windows approach is more practicable, as it provides the opportunity to control when samples are collected by allowing some flexibility while still providing satisfactory parameter estimation. This approach uses D-optimality to specify time intervals around fixed D-optimal time points that result in a specified level of efficiency. The sampling windows have different lengths and achieve two objectives: the joint sampling windows design attains a high specified efficiency level and also reflects the sensitivities of the plasma concentration-time profile to the parameters. It is shown that optimal sampling windows obtained using this approach are very efficient for estimating population PK parameters and provide greater flexibility in terms of when samples are collected.
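
    The efficiency level referred to is the usual D-efficiency: with p model parameters and Fisher information matrix M, a windowed design xi is measured against the D-optimal fixed-time design xi* by the standard ratio (general definition, not a formula quoted from the paper):

      \mathrm{Eff}_D(\xi) \;=\; \left( \frac{\lvert M(\xi) \rvert}{\lvert M(\xi^{*}) \rvert} \right)^{1/p}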

  5. A Scalable and Robust Multi-Agent Approach to Distributed Optimization

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan

    2005-01-01

    Modularizing a large optimization problem so that the solutions to the subproblems provide a good overall solution is a challenging problem. In this paper we present a multi-agent approach to this problem based on aligning the agent objectives with the system objectives, obviating the need to impose external mechanisms to achieve collaboration among the agents. This approach naturally addresses scaling and robustness issues by ensuring that the agents do not rely on the reliable operation of other agents. We test this approach on the difficult distributed optimization problem of imperfect device subset selection [Challet and Johnson, 2002]. In this problem, there are n devices, each of which has a "distortion", and the task is to find the subset of those n devices that minimizes the average distortion. Our results show that in large systems (1000 agents) the proposed approach provides improvements of over an order of magnitude over both traditional optimization methods and traditional multi-agent methods. Furthermore, the results show that even in extreme cases of agent failures (i.e., half the agents fail midway through the simulation) the system remains coordinated and still outperforms a failure-free, centralized optimization algorithm.
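
    A standard way to "align" agent objectives with the system objective in this line of work is a difference utility; in notation assumed here, agent i is scored by the system utility G minus G evaluated with agent i's contribution removed, so improving D_i cannot hurt G:

      D_i(z) \;=\; G(z) \;-\; G(z_{-i})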

  6. Network-Based Approach to Optimize Personnel Recovery for the Joint Force

    DTIC Science & Technology

    2011-05-26

    By Andrew M. Smith, Major, USAF; only report documentation form fields survive in this record.

  7. A robust approach to optimal matched filter design in ultrasonic non-destructive evaluation (NDE)

    NASA Astrophysics Data System (ADS)

    Li, Minghui; Hayward, Gordon

    2017-02-01

    The matched filter has been demonstrated to be a powerful yet efficient technique for enhancing defect detection and imaging in ultrasonic non-destructive evaluation (NDE) of coarse-grained materials, provided that the filter is properly designed and optimized. In the literature, designs have used the real (measured) excitation signals in order to approximate the defect echoes accurately, which makes them time consuming and less straightforward to implement in practice. In this paper, we present a more robust and flexible approach to optimal matched filter design using simulated excitation signals; the control parameters are chosen and optimized based on the actual array transducer, transmitter-receiver system response, and test sample, so that the filter response is optimized for, and depends on, the material characteristics. Experiments on industrial samples are conducted and the results confirm the clear benefits of the method.
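
    A minimal matched-filter sketch along these lines, with everything assumed (sampling rate, centre frequency, a Gaussian-pulse stand-in for the simulated excitation echo, and white rather than grain noise):

      import numpy as np
      from scipy import signal

      fs = 50e6                                   # sample rate, Hz (assumed)
      t = np.arange(0, 4e-6, 1 / fs)
      template = signal.gausspulse(t - 2e-6, fc=5e6)   # simulated excitation echo
      rng = np.random.default_rng(1)
      rx = np.concatenate([np.zeros(300), template, np.zeros(500)])
      rx += 0.5 * rng.standard_normal(rx.size)    # additive noise stand-in
      out = signal.correlate(rx, template, mode="same")   # matched filtering
      delay = int(np.argmax(np.abs(out)))         # estimated echo location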

  8. Heart Rate assessment by means of a novel approach applied to signals of different nature

    NASA Astrophysics Data System (ADS)

    Cosoli, G.; Casacanditella, L.; Tomasini, EP; Scalise, L.

    2017-01-01

    The electrocardiographic (ECG) signal presents many clinically relevant features (e.g. the QT interval, the duration of ventricular depolarization). A novel processing technique has been demonstrated to be capable of measuring some important characteristics from the morphology of the waveform. Building on this, the aim of this work is to propose an improved algorithm and to prove its efficacy in the assessment of the subject’s Heart Rate (HR) in comparison to standard algorithms (i.e. Pan & Tompkins). Results obtained on experimentally collected ECG signals for the identification of the main feature (the R-peak) are comparable to those obtained with the traditional approach (sensitivity of 99.55% and 99.95%, respectively). Moreover, the use of this algorithm has been broadened to signals coming from different biomedical sensors (based on optical, acoustical and mechanical principles), all related to blood flow, for the computation of HR. In particular, it has been employed on PCG (phonocardiography), PPG (photoplethysmography) and VCG (vibrocardiography) signals, to which standard algorithms cannot be widely applied. HR results from a measurement campaign on 8 healthy subjects have shown, with respect to ECG, deviations (calculated as 2σ) of ±3.3 bpm, ±2.3 bpm and ±1.5 bpm for PCG, PPG and VCG, respectively. In conclusion, the adopted algorithm is able to measure HR accurately from different biosignals. Future work will involve the extraction of additional morphological features in order to characterise the waveforms more deeply and better describe the subject’s health status.
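
    Signal-agnostic HR computation reduces to beat detection plus interval averaging; a plain sketch (a generic peak picker with assumed thresholds, not the morphology-based algorithm evaluated above):

      import numpy as np
      from scipy.signal import find_peaks

      def heart_rate_bpm(sig, fs):
          # refractory distance of 0.4 s caps detections at 150 bpm
          peaks, _ = find_peaks(sig, distance=int(0.4 * fs),
                                prominence=np.std(sig))
          return 60.0 * fs / float(np.mean(np.diff(peaks)))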

  9. A new approach for investigating venom function applied to venom calreticulin in a parasitoid wasp

    PubMed Central

    Siebert, Aisha L.; Wheeler, David; Werren, John H.

    2015-01-01

    A new method is developed to investigate functions of venom components, using venom gene RNA interference knockdown in the venomous animal coupled with RNA sequencing in the envenomated host animal. The vRNAi/eRNA-Seq approach is applied to the venom calreticulin component (v-crc) of the parasitoid wasp Nasonia vitripennis. Parasitoids are common, venomous animals that inject venom proteins into host insects, where they modulate physiology and metabolism to produce a better food resource for the parasitoid larvae. vRNAi/eRNA-Seq indicates that v-crc acts to suppress expression of innate immune cell response, enhance expression of clotting genes in the host, and up-regulate cuticle genes. V-crc KD also results in an increased melanization reaction immediately following envenomation. We propose that v-crc inhibits innate immune response to parasitoid venom and reduces host bleeding during adult and larval parasitoid feeding. Experiments do not support the hypothesis that v-crc is required for the developmental arrest phenotype observed in envenomated hosts. We propose that an important role for some venom components is to reduce (modulate) the exaggerated effects of other venom components on target host gene expression, physiology, and survival, and term this venom mitigation. A model is developed that uses vRNAi/eRNA-Seq to quantify the contribution of individual venom components to total venom phenotypes, and to define different categories of mitigation by individual venoms on host gene expression. Mitigating functions likely contribute to the diversity of venom proteins in parasitoids and other venomous organisms. PMID:26359852

  10. Considerations in applying the general equilibrium approach to environmental health assessment.

    PubMed

    Wan, Yue; Yang, Hong-Wei; Masui, Toshihiko

    2005-10-01

    There are currently two commonly used approaches to assessing the economic impacts of health damage resulting from environmental pollution: the human capital approach (HCA) and willingness-to-pay (WTP). WTP can be further divided into the averted expenditure approach (AEA), hedonic wage approach (HWA), contingent valuation approach (CVA) and hedonic price approach (HPA). A general review of the principles behind these approaches by the authors indicates that they are incapable of unveiling the mechanism of health impacts from the point of view of the national economy. Within the economic system, the shocks brought about by the health effects of environmental pollution change the labor supply and medical expenditure, which in turn affect the level of production activity in each sector and the total final consumption pattern of society. The general equilibrium approach within the framework of macroeconomic theory is able to estimate the health impact on the national economy comprehensively and objectively. Its mechanism and applicability are discussed in detail by the authors.

  11. An optimal control model approach to the design of compensators for simulator delay

    NASA Technical Reports Server (NTRS)

    Baron, S.; Lancraft, R.; Caglayan, A.

    1982-01-01

    The effects of display delay on pilot performance and workload and of the design of the filters to ameliorate these effects were investigated. The optimal control model for pilot/vehicle analysis was used both to determine the potential delay effects and to design the compensators. The model was applied to a simple roll tracking task and to a complex hover task. The results confirm that even small delays can degrade performance and impose a workload penalty. A time-domain compensator designed by using the optimal control model directly appears capable of providing extensive compensation for these effects even in multi-input, multi-output problems.

  12. Finding Bayesian Optimal Designs for Nonlinear Models: A Semidefinite Programming-Based Approach.

    PubMed

    Duarte, Belmiro P M; Wong, Weng Kee

    2015-08-01

    This paper uses semidefinite programming (SDP) to construct Bayesian optimal designs for nonlinear regression models. The setup here extends the formulation of the optimal design problem as an SDP from linear to nonlinear models. Gaussian quadrature formulas (GQF) are used to compute the expectation in the Bayesian design criterion, such as D-, A- or E-optimality. As an illustrative example, we demonstrate the approach using the power-logistic model and compare the results with those in the literature. Additionally, we investigate how the optimal design is impacted by different discretising schemes for the design space, different amounts of uncertainty in the parameter values, different choices of GQF, and different prior distributions for the vector of model parameters, including normal priors with and without correlated components. Further applications to find Bayesian D-optimal designs with two regressors for a logistic model and a two-variable generalised linear model with a gamma-distributed response are discussed, and some limitations of our approach are noted.
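
    The quadrature step replaces the prior expectation in the criterion with a finite weighted sum; for Bayesian D-optimality the objective becomes (schematic notation assumed here):

      \Phi(\xi) \;=\; \int \log \lvert M(\xi, \theta) \rvert \, \pi(\theta) \, d\theta
      \;\approx\; \sum_{k=1}^{K} w_k \, \log \lvert M(\xi, \theta_k) \rvert

    with Gaussian quadrature nodes theta_k and weights w_k, after which the design problem over xi can be posed in the paper's SDP framework.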

  13. Optimizing Clinical Drug Product Performance: Applying Biopharmaceutics Risk Assessment Roadmap (BioRAM) and the BioRAM Scoring Grid.

    PubMed

    Dickinson, Paul A; Kesisoglou, Filippos; Flanagan, Talia; Martinez, Marilyn N; Mistry, Hitesh B; Crison, John R; Polli, James E; Cruañes, Maria T; Serajuddin, Abu T M; Müllertz, Anette; Cook, Jack A; Selen, Arzu

    2016-11-01

    The aim of the Biopharmaceutics Risk Assessment Roadmap (BioRAM) and the BioRAM Scoring Grid is to facilitate optimization of the clinical performance of drug products. The BioRAM strategy relies on therapy-driven drug delivery and follows an integrated systems approach for formulating and addressing critical questions and decision-making (J Pharm Sci. 2014, 103(11): 3777-97). In BioRAM, risk is defined as not achieving the intended in vivo drug product performance, and success is assessed by time to decision-making and action. The emphasis on time to decision-making and time to action highlights the value of well-formulated critical questions and well-designed, well-conducted integrated studies. This commentary describes and illustrates application of the BioRAM Scoring Grid, a companion to the BioRAM strategy, which guides implementation of such an integrated strategy encompassing 12 critical areas and 6 assessment stages. Application of the BioRAM Scoring Grid is illustrated using published literature. Organizational considerations for implementing the BioRAM strategy, including the interactions, function, and skillsets of the BioRAM group members, are also reviewed. As a creative and innovative systems approach, we believe that BioRAM will have a broad-reaching impact, influencing drug development and leading to unique collaborations that influence how we learn, leverage, and share knowledge.

  14. Mathematical optimization approach for estimating the quantum yield distribution of a photochromic reaction in a polymer

    NASA Astrophysics Data System (ADS)

    Tanaka, Mirai; Yamashita, Takashi; Sano, Natsuki; Ishigaki, Aya; Suzuki, Tomomichi

    2017-01-01

    The convolution of a series of events is often observed for a variety of phenomena such as the oscillation of a string. A photochemical reaction of a molecule is characterized by a time constant, but materials in the real world contain several molecules with different time constants. Therefore, the kinetics of photochemical reactions of the materials are usually observed with a complexity comparable with those of theoretical kinetic equations. Analysis of the components of the kinetics is quite important for the development of advanced materials. However, with a limited number of exceptions, deconvolution of the observed kinetics has not yet been mathematically solved. In this study, we propose a mathematical optimization approach for estimating the quantum yield distribution of a photochromic reaction in a polymer. In the proposed approach, time-series data of absorbances are acquired and an estimate of the quantum yield distribution is obtained. To estimate the distribution, we solve a mathematical optimization problem to minimize the difference between the input data and a model. This optimization problem involves a differential equation constrained on a functional space as the variable lies in the space of probability distribution functions and the constraints arise from reaction rate equations. This problem can be reformulated as a convex quadratic optimization problem and can be efficiently solved by discretization. Numerical results are also reported here, and they verify the effectiveness of our approach.
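
    Discretizing the unknown distribution makes the estimation concrete; under simplifying assumptions (a grid of candidate quantum yields, first-order kinetics with rate proportional to the yield, and normalization applied after the fit rather than as a constraint), the weights solve a nonnegative least-squares problem:

      import numpy as np
      from scipy.optimize import nnls

      t = np.linspace(0.0, 10.0, 200)            # measurement times (s)
      phi = np.linspace(0.01, 0.5, 40)           # candidate quantum yields
      sigma_I = 1.0                              # lumped cross-section x photon flux
      K = np.exp(-np.outer(t, phi * sigma_I))    # decay model, one column per phi_j
      # Synthetic "observed" absorbance: a two-component mixture
      A_obs = 0.6 * np.exp(-0.10 * t) + 0.4 * np.exp(-0.30 * t)
      p, residual = nnls(K, A_obs)               # weights p_j >= 0
      p /= p.sum()                               # normalize to a distribution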

  15. Dual-energy approach to contrast-enhanced mammography using the balanced filter method: Spectral optimization and preliminary phantom measurement

    SciTech Connect

    Saito, Masatoshi

    2007-11-15

    Dual-energy contrast agent-enhanced mammography is a technique for demonstrating breast cancers obscured by the cluttered background resulting from the contrast between soft tissues in the breast. The technique has usually been implemented by exploiting two exposures at different x-ray tube voltages. In this article, another dual-energy approach using the balanced filter method without switching the tube voltages is described. For the spectral optimization of dual-energy mammography using the balanced filters, we applied a theoretical framework reported by Lemacks et al. [Med. Phys. 29, 1739-1751 (2002)] to calculate the signal-to-noise ratio (SNR) in an iodinated contrast agent subtraction image. This permits the selection of beam parameters such as tube voltage and balanced filter material, and the optimization of the latter's thickness with respect to a critical quantity, in this case mean glandular dose. For an imaging system with a 0.1 mm thick CsI:Tl scintillator, we predict that the optimal tube voltage would be 45 kVp for a tungsten anode using zirconium, iodine, and neodymium balanced filters. A mean glandular dose of 1.0 mGy is required to obtain an SNR of 5 in order to detect 1.0 mg/cm² of iodine in the resulting clutter-free image of a 5 cm thick breast composed of 50% adipose and 50% glandular tissue. In addition to the spectral optimization, we carried out phantom measurements to demonstrate the present dual-energy approach for obtaining a clutter-free image, which preferentially shows iodine, of a breast phantom comprising three major components: acrylic spheres, olive oil, and an iodinated contrast agent. The detection of iodine details against the cluttered background originating from the contrast between acrylic spheres and olive oil is analogous to the task of distinguishing contrast agents in a mixture of glandular and adipose tissues.

  16. Static optimization of muscle forces during gait in comparison to EMG-to-force processing approach.

    PubMed

    Heintz, Sofia; Gutierrez-Farewik, Elena M

    2007-07-01

    Individual muscle forces evaluated from experimental motion analysis may be useful in mathematical simulation, but they require additional musculoskeletal and mathematical modelling. A numerical method of static optimization was used in this study to evaluate muscular forces during gait. The numerical algorithm was built on traditional optimization techniques, i.e., a constrained minimization technique using the Lagrange multiplier method to handle the constraints. Measuring exact muscle forces during gait analysis is not currently possible. The developed optimization method calculates optimal forces during gait, given a specific performance criterion, using kinematics and kinetics from gait analysis together with muscle architectural data. Experimental methods to validate mathematical methods for calculating forces are limited. Electromyography (EMG) is frequently used as a tool to determine muscle activation in experimental studies on human motion. A method of estimating force from the EMG signal, the EMG-to-force approach, was recently developed by Bogey et al. [Bogey RA, Perry J, Gitter AJ. An EMG-to-force processing approach for determining ankle muscle forces during normal human gait. IEEE Trans Neural Syst Rehabil Eng 2005;13:302-10] and is based on normalization of activation during a maximum voluntary contraction to documented maximal muscle strength. This method was adapted in this study as a tool with which to compare static optimization over a gait cycle. Muscle forces from static optimization and EMG-to-force muscle forces show reasonably good correlation in the plantarflexor and dorsiflexor muscles, but less correlation in the knee flexor and extensor muscles. Additional comparison of the mathematical muscle forces from static optimization to documented averaged EMG data reveals good overall correlation to patterns of evaluated muscular activation. This indicates that on an individual level, muscular force patterns from mathematical

  17. A multidating approach applied to historical slackwater flood deposits of the Gardon River, SE France

    NASA Astrophysics Data System (ADS)

    Dezileau, L.; Terrier, B.; Berger, J. F.; Blanchemanche, P.; Latapie, A.; Freydier, R.; Bremond, L.; Paquier, A.; Lang, M.; Delgado, J. L.

    2014-06-01

    A multidating approach was carried out on slackwater flood deposits, preserved in valley side rock cave and terrace, of the Gardon River in Languedoc, southeast France. Lead-210, caesium-137, and geochemical analysis of mining-contaminated slackwater flood sediments have been used to reconstruct the history of these flood deposits. These age controls were combined with the continuous record of Gardon flow since 1890, and the combined records were then used to assign ages to slackwater deposits. The stratigraphic records of terrace GE and cave GG were excellent examples to illustrate the effects of erosion/preservation in a context of a progressively self-censoring, vertically accreting sequence. The sedimentary flood record of the terrace GE located at 10 m above the channel bed is complete for years post-1958 but incomplete before. During the 78-year period 1880-1958, 25 floods of a sufficient magnitude (> 1450 m³/s) have covered the terrace. Since 1958, however, the frequency of inundation of the deposits has been lower: only 5 or 6 floods in 52 years have been large enough to exceed the necessary threshold discharge (> 1700 m³/s). The progressive increase of the threshold discharge and the reduced frequency of inundation at the terrace could allow stabilization of the vegetation cover and improve protection against erosion from subsequent large magnitude flood events. The sedimentary flood record seems complete for cave GG located at 15 m above the channel bed. Here, the low frequency of events would have enabled a high degree of stabilization of the sedimentary flood record, rendering the deposits less susceptible to erosion. Radiocarbon dating is used in this study and compared to the other dating techniques. Eighty percent of radiocarbon dates on charcoals were considerably older than those obtained by the other techniques in the terrace. On the other hand, radiocarbon dating on seeds provided better results. This discrepancy between radiocarbon dates on

  18. A Computer-Assisted Approach for Conducting Information Technology Applied Instructions

    ERIC Educational Resources Information Center

    Chu, Hui-Chun; Hwang, Gwo-Jen; Tsai, Pei Jin; Yang, Tzu-Chi

    2009-01-01

    The growing popularity of computer and network technologies has attracted researchers to investigate the strategies and the effects of information technology applied instructions. Previous research has not only demonstrated the benefits of applying information technologies to the learning process, but has also revealed the difficulty of applying…

  19. An efficient approach to optimize the vibration mode of bar-type ultrasonic motors.

    PubMed

    Zhu, Hua; Li, Zhirong; Zhao, Chunsheng

    2010-04-01

    The electromechanically coupled dynamic model of the stator of the bar-type ultrasonic motor is derived based on the finite element method. The dynamical behavior of the stator is analyzed via this model, and the theoretical result agrees well with the experimental result for the stator of the prototype motor. Both the structural design principles and the approaches to meeting the requirements for the mode of the stator are discussed. Based on the pattern search algorithm, an optimal model that meets the design requirements is established. The numerical simulation results show that this optimal model is effective for the structural design of the stator.

  20. Approach for optimization of the color rendering index of light mixtures.

    PubMed

    Lin, Ku Chin

    2010-07-01

    The general CIE color rendering index (CRI) of light is an important index for evaluating the quality of illumination. However, because of the complexity of measuring the rendering ability under designated constraints, a general mathematical formulation and global optimization of the rendering ability of light-emitting diode (LED) light mixtures has been difficult to develop. This study is mainly devoted to developing a mathematical formulation and a numerical method for CRI optimization. The method is based on the so-called complex method [Computer J. 8, 42 (1965); G. V. Reklaitis et al., Engineering Optimization: Methods and Applications (Wiley, 1983)] with modifications. It is first applied to 3-color light mixtures and then extended in a hierarchical and iterative structure to higher-order light mixtures. The optimization is studied under the constraints of bounded relative intensities of the light mixture, a designated correlated color temperature (CCT), and the required approximate white of the light mixture. The problems of inconsistent constraints and solutions are addressed. The CRI is a complicated function of the relative intensities of the compound illuminators of the mixture. The proposed method requires no derivatives of the function and is well suited to the optimization, as demonstrated by simulation for RGBW LED light mixtures. The results show that global and unique convergence to the optimum within the required tolerances for CRI and spatial dispersivity is always achieved.

  1. Comparison of Ensemble and Adjoint Approaches to Variational Optimization of Observational Arrays

    NASA Astrophysics Data System (ADS)

    Nechaev, D.; Panteleev, G.; Yaremchuk, M.

    2015-12-01

    Comprehensive monitoring of the circulation in the Chukchi Sea and Bering Strait is one of the key prerequisites of a successful long-term forecast of the Arctic Ocean state. Since the number of continuously maintained observational platforms is restricted by logistical and political constraints, the configuration of such an observing system should be guided by an objective strategy that optimizes the observing system coverage, design, and the expenses of monitoring. The presented study addresses the optimization of a system consisting of a limited number of observational platforms with respect to reducing the uncertainties in monitoring the volume/freshwater/heat transports through a set of key sections in the Chukchi Sea and Bering Strait. Variational algorithms for the optimization of observational arrays are verified in a test bed of 4Dvar-optimized summer-fall circulations in the Pacific sector of the Arctic Ocean. The results of an optimization approach based on a low-dimensional ensemble of model solutions are compared against a more conventional algorithm involving the application of tangent linear and adjoint models. Special attention is paid to the computational efficiency and portability of the optimization procedure.

  2. A System-Oriented Approach for the Optimal Control of Process Chains under Stochastic Influences

    NASA Astrophysics Data System (ADS)

    Senn, Melanie; Schäfer, Julian; Pollak, Jürgen; Link, Norbert

    2011-09-01

    Process chains in manufacturing consist of multiple connected processes in terms of dynamic systems. The properties of a product passing through such a process chain are influenced by the transformation of each single process. Various methods exist for the control of individual processes, such as classical state controllers from cybernetics or function mapping approaches realized by statistical learning. These controllers ensure that a desired state is obtained at the process end despite variations in the input and disturbances. The interactions between the single processes are thereby neglected, but play an important role in the optimization of the entire process chain. We divide the overall optimization into two phases: (1) the solution of the optimization problem by Dynamic Programming to find the optimal control variable values for each process for any encountered end state of its predecessor, and (2) the application of the optimal control variables at runtime for the detected initial process state. The optimization problem is solved by selecting adequate control variables for each process in the chain backwards, based on predefined quality requirements for the final product. For the demonstration of the proposed concept, we have chosen a process chain from sheet metal manufacturing with simplified transformation functions.
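
    The first phase is classical backward dynamic programming over the chain; a compact sketch on a discretized state grid (toy transformation, cost, and target, all assumed for illustration):

      import numpy as np

      states = np.linspace(0.0, 1.0, 51)         # discretized product state
      controls = np.linspace(-0.2, 0.2, 21)

      def step(x, u):                            # toy process transformation
          return np.clip(x + u + 0.1 * x * u, 0.0, 1.0)

      target = 0.8
      V = (states - target) ** 2                 # terminal quality requirement
      policy = {}
      for proc in (2, 1):                        # backward over a 2-process chain
          V_new = np.empty_like(V)
          for i, x in enumerate(states):
              costs = [np.interp(step(x, u), states, V) + 0.01 * u ** 2
                       for u in controls]
              j = int(np.argmin(costs))
              V_new[i] = costs[j]
              policy[(proc, i)] = controls[j]    # optimal u for any entry state
          V = V_new
      # At runtime (phase 2), look up policy[(proc, nearest_state_index)].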

  3. Flood frequency analysis using multi-objective optimization based interval estimation approach

    NASA Astrophysics Data System (ADS)

    Kasiviswanathan, K. S.; He, Jianxun; Tay, Joo-Hwa

    2017-02-01

    Flood frequency analysis (FFA) is a necessary tool for water resources management and water infrastructure design. Owing to variability in sample representation, distribution selection, and distribution parameter estimation, flood quantile estimation is subject to various levels of uncertainty, which is neither negligible nor avoidable. Hence, alternatives to the conventional approach to FFA are desired for quantifying the uncertainty, such as in the form of a prediction interval. The primary focus of this paper was to develop a novel approach to quantify and optimize the prediction interval resulting from the non-stationarity of the data set, which is reflected in the estimated distribution parameters, in FFA. This paper proposed the combination of a multi-objective optimization approach and an ensemble simulation technique to determine the optimal perturbations of distribution parameters for constructing the prediction interval of flood quantiles in FFA. To demonstrate the proposed approach, annual maximum daily flow data collected from two gauge stations on the Bow River, Alberta, Canada, were used. The results suggest that the proposed method can successfully capture the uncertainty in quantile estimates using the prediction interval, as the number of observations falling within the constructed prediction interval is approximately maximized while the width of the prediction interval is minimized.

  4. A Simulation/Optimization approach to manage groundwater resources in the Gaza aquifer (Palestinian Territories) under climate change conditions

    NASA Astrophysics Data System (ADS)

    Dentoni, Marta; Qahman, Khalid; Deidda, Roberto; Paniconi, Claudio; Lecca, Giuditta

    2013-04-01

    The Gaza aquifer is the main source of water for agricultural, domestic, and industrial uses in the Gaza Strip. The rapid increase in water demand due to continuous population growth has led to water scarcity and contamination by seawater intrusion (SWI). Furthermore, current projections of future climatic conditions (IPCC, 2007) point to potential decreases in available water, in both inflows and outflows. A numerical assessment of SWI in the Gaza coastal aquifer under climate-induced changes has been carried out by means of the CODESA-3D model of density-dependent variably saturated flow and salt transport in groundwater. After integrating available data on climatology, geology, geomorphology, hydrology, hydrogeology, soil use, and groundwater exploitation for the period 1935-2010, the calibrated and validated model was used to simulate the response of the hydrological basin to actual and future scenarios of climate change obtained from different regional circulation models. The results clearly show that, if current pumping rates are maintained, seawater intrusion will worsen. To manage sustainable aquifer development under effective recharge operations and water quality constraints, a decision support system based on a simulation/optimization (S/O) approach was applied to the Gaza study site. The S/O approach couples the CODESA-3D model with Carroll's Genetic Algorithm Driver. The optimization model incorporates two conflicting objectives using a penalty method: maximizing pumping rates from the aquifer wells while limiting the salinity of the water withdrawn. The resulting coastal aquifer management model was applied over a 30-year period to identify the optimum spatial distribution of pumping rates at the control wells. The optimized solution provides a general increase in water table levels and a decrease in the total extracted salt mass while keeping total abstraction rates roughly constant with respect to the non-optimized scenario.

  5. A variational approach to ecological-type optimization criteria for finite-time thermal engine models

    NASA Astrophysics Data System (ADS)

    Angulo-Brown, F.; Ares de Parga, G.; Arias-Hernández, L. A.

    2002-05-01

    In this paper we apply variational calculus procedures to the optimization of a Curzon-Ahlborn thermal cycle under the so-called modified ecological criterion. Our result for the optimum efficiency is the same as that previously obtained by Velasco et al (2000 J. Phys. D 33 355) by means of the method of the saving functions. In addition, we show that the saving-functions criterion and the modified ecological criterion are equivalent.

  6. Approach to optimization of low-power Stirling cryocoolers. Final report

    SciTech Connect

    Sullivan, D.B.; Radebaugh, R.; Daney, D.E.; Zimmerman, J.E.

    1983-01-01

    The authors describe a method for optimizing the design (shape of the displacer) of low-power Stirling cryocoolers relative to the power required to operate the systems. A variational calculation which includes static conduction, shuttle, and radiation losses, as well as regenerator inefficiency, has been completed for coolers operating in the 300 K to 10 K range. While the calculations apply to tapered displacer machines, comparison of the results with stepped-displacer cryocoolers indicates reasonable agreement.

  7. Ant Colony Optimization Approaches to Clustering of Lung Nodules from CT Images

    PubMed Central

    Gopalakrishnan, Ravichandran C.; Kuppusamy, Veerakumar

    2014-01-01

    Lung cancer is becoming a threat to mankind. Applying machine learning algorithms for the detection and segmentation of irregularly shaped lung nodules remains a remarkable milestone in CT scan image analysis research. In this paper, we apply the ACO algorithm to lung nodule detection. We compare its performance against three other algorithms, namely the Otsu algorithm, the watershed algorithm, and global region-based segmentation. In addition, we suggest a novel approach involving variations of ACO, namely refined ACO, logical ACO, and variant ACO; variant ACO shows a better reduction in false positives. We also propose a black circular neighborhood approach to detect nodule centers from the edge-detected image. Genetic algorithm based clustering is performed to cluster the nodules based on intensity, shape, and size. The performance of the overall approach is compared with hierarchical clustering to establish the improvement achieved by the proposed approach. PMID:25525455

  8. Systematic analysis of protein-detergent complexes applying dynamic light scattering to optimize solutions for crystallization trials.

    PubMed

    Meyer, Arne; Dierks, Karsten; Hussein, Rana; Brillet, Karl; Brognaro, Hevila; Betzel, Christian

    2015-01-01

    Detergents are widely used for the isolation and solubilization of membrane proteins to support crystallization and structure determination. Detergents are amphiphilic molecules that form micelles once the characteristic critical micelle concentration (CMC) is reached and can solubilize membrane proteins by forming micelles around them. We present the results of a study of micelle formation observed by in situ dynamic light-scattering (DLS) analyses performed on selected detergent solutions using a newly designed advanced hardware device. DLS was initially applied in situ to detergent samples with a total volume of approximately 2 µl. When measured with DLS, pure detergents show a monodisperse radial distribution in water at concentrations exceeding the CMC. A series of all-trans n-alkyl-β-D-maltopyranosides, from n-hexyl to n-tetradecyl, were used in the investigations. The results obtained verify that in situ DLS is capable of distinguishing differences in the hydrodynamic radii of micelles formed by detergents differing in length by only a single CH2 group in their aliphatic tails. Subsequently, DLS was applied to investigate the distribution of hydrodynamic radii of membrane proteins and selected water-insoluble proteins in the presence of detergent micelles. The results confirm that stable protein-detergent complexes were prepared for (i) bacteriorhodopsin and (ii) FetA in complex with a ligand, as examples of transmembrane proteins. A fusion of maltose-binding protein and the Duck hepatitis B virus X protein was added to this investigation as an example of a non-membrane-associated protein with low water solubility. The increased solubility of this protein in the presence of detergent could be monitored, as well as the progress of proteolytic cleavage to separate the fusion partners. This study demonstrates the potential of in situ DLS to optimize solutions of protein-detergent complexes for crystallization applications.
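
    For reference, DLS obtains the hydrodynamic radius from the measured translational diffusion coefficient D through the Stokes-Einstein relation (standard background, with solvent viscosity eta and absolute temperature T):

      R_h \;=\; \frac{k_B T}{6 \pi \eta D}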

  9. ALOHA: a novel probability fusion approach for scoring multi-parameter drug-likeness during the lead optimization stage of drug discovery

    NASA Astrophysics Data System (ADS)

    Debe, Derek A.; Mamidipaka, Ravindra B.; Gregg, Robert J.; Metz, James T.; Gupta, Rishi R.; Muchmore, Steven W.

    2013-09-01

    Automated lead optimization helper application (ALOHA) is a novel fitness scoring approach for small molecule lead optimization. ALOHA employs a series of generalized Bayesian models trained on public and proprietary pharmacokinetic, ADME (absorption, distribution, metabolism, and excretion), and toxicology data to determine regions of chemical space that are likely to have excellent drug-like properties. The input to ALOHA is a list of molecules, and the output is a set of individual probabilities as well as an overall probability that each of the molecules will pass a panel of user-selected assays. In addition to providing a summary of how and when to apply ALOHA, this paper discusses the validation of ALOHA's Bayesian models and probability fusion approach. Most notably, ALOHA is demonstrated to discriminate between members of the same chemical series with strong statistical significance, suggesting that ALOHA can be used effectively to select compound candidates for synthesis and progression at the lead optimization stage of drug discovery.
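
    If the per-assay pass probabilities were treated as independent, the simplest fusion rule would be a plain product; this naive sketch is offered only as a baseline and is not claimed to be ALOHA's actual fusion method:

      import numpy as np

      def fuse_naive(per_assay_probs):
          # overall probability that a molecule passes every assay in the
          # panel, under an (assumed) independence approximation
          return float(np.prod(per_assay_probs))

      print(fuse_naive([0.90, 0.80, 0.95]))      # -> 0.684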

  10. Bridging the Gap between Basic and Applied Research by an Integrative Research Approach

    ERIC Educational Resources Information Center

    Stark, Robin; Mandl, Heinz

    2007-01-01

    The discussion of the gap between theory and practice has a long tradition in educational psychology and especially in research on learning and instruction. Starting with a short analysis of more or less elaborated approaches focusing on this problem, a complex procedure called "integrative research approach", specialized in reducing the gap…

  11. Developing and Applying Green Building Technology in an Indigenous Community: An Engaged Approach to Sustainability Education

    ERIC Educational Resources Information Center

    Riley, David R.; Thatcher, Corinne E.; Workman, Elizabeth A.

    2006-01-01

    Purpose: This paper aims to disseminate an innovative approach to sustainability education in construction-related fields in which teaching, research, and service are integrated to provide a unique learning experience for undergraduate students, faculty members, and community partners. Design/methodology/approach: The paper identifies the need for…

  12. Enhanced index tracking modeling in portfolio optimization with mixed-integer programming approach

    NASA Astrophysics Data System (ADS)

    Siew, Lam Weng; Jaaman, Saiful Hafizah Hj.; Ismail, Hamizun bin

    2014-09-01

    Enhanced index tracking is a popular form of portfolio management in stock market investment. It aims to construct an optimal portfolio that generates excess return over the return achieved by the stock market index, without purchasing all of the stocks that make up the index. The objective of this paper is to construct an optimal portfolio using a mixed-integer programming model that adopts a regression approach in order to generate a higher portfolio mean return than the stock market index return. In this study, the data consist of 24 component stocks of the Malaysian market index, the FTSE Bursa Malaysia Kuala Lumpur Composite Index, from January 2010 until December 2012. The results show that the optimal portfolio from the mixed-integer programming model is able to generate a higher mean return than the FTSE Bursa Malaysia Kuala Lumpur Composite Index while selecting only 30% of the index components.
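
    A generic cardinality-constrained tracking formulation conveys the structure (schematic; the paper's exact regression-based objective is not reproduced). With weights w_i, selection binaries z_i, stock returns r_it, index return r_It, target excess return epsilon, and a limit of K stocks:

      \begin{aligned}
      \min_{w,\, z} \quad & \frac{1}{T} \sum_{t=1}^{T}
          \Big( \sum_{i=1}^{N} w_i r_{it} - r_{It} - \epsilon \Big)^{2} \\
      \text{s.t.} \quad & \sum_{i=1}^{N} w_i = 1, \qquad
          0 \le w_i \le z_i, \qquad
          \sum_{i=1}^{N} z_i \le K, \qquad z_i \in \{0, 1\}
      \end{aligned}

    Selecting roughly 30% of a 24-stock index corresponds to K = 7 in this notation.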

  13. A holistic approach towards optimal planning of hybrid renewable energy systems: Combining hydroelectric and wind energy

    NASA Astrophysics Data System (ADS)

    Dimas, Panagiotis; Bouziotas, Dimitris; Efstratiadis, Andreas; Koutsoyiannis, Demetris

    2014-05-01

    Hydropower with pumped storage is a proven technology with very high efficiency that offers a unique large-scale energy buffer. Energy storage is employed by pumping water upstream to take advantage of the excess of produced energy (e.g. during night) and next retrieving this water to generate hydro-power during demand peaks. Excess energy occurs due to other renewables (wind, solar) whose power fluctuates in an uncontrollable manner. By integrating these with hydroelectric plants with pumped storage facilities we can form autonomous hybrid renewable energy systems. The optimal planning and management thereof requires a holistic approach, where uncertainty is properly represented. In this context, a novel framework is proposed, based on stochastic simulation and optimization. This is tested in an existing hydrosystem of Greece, considering its combined operation with a hypothetical wind power system, for which we seek the optimal design to ensure the most beneficial performance of the overall scheme.

  14. Numerical approach of collision avoidance and optimal control on robotic manipulators

    NASA Technical Reports Server (NTRS)

    Wang, Jyhshing Jack

    1990-01-01

    Collision-free optimal motion and trajectory planning for robotic manipulators is solved by means of a sequential gradient restoration algorithm. Numerical examples for a two degree-of-freedom (DOF) robotic manipulator demonstrate the effectiveness of the optimization technique and the obstacle avoidance scheme. The obstacle is deliberately placed midway along, or even further inside, the previous obstacle-free optimal trajectory. For the minimum-time objective, the trajectory grazes the obstacle and the minimum-time motion successfully avoids it; the minimum time is longer for the obstacle-avoidance cases than for the case without an obstacle. The obstacle avoidance scheme can handle multiple obstacles of any ellipsoidal form by using artificial potential fields as penalty functions via distance functions. The method is promising for solving collision-free optimal control problems in robotics and can be applied to robotic manipulators with any number of DOFs and any performance indices, as well as to mobile robots. Since this method generates the optimum solution based on the Pontryagin Extremum Principle, rather than on assumptions, the results provide a benchmark against which other optimization techniques can be measured.
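
    The ellipsoidal obstacles enter the performance index as penalty terms built from distance functions; schematically (notation assumed here, not transcribed from the paper), with obstacle centers c_k, shape matrices A_k, and penalty weights W_k:

      I \;=\; \int_{0}^{t_f} \Big[ L(x, u, t)
          \;+\; \sum_{k} W_k \, \max\!\big(0,\; 1 - (x - c_k)^{\top} A_k (x - c_k)\big)^{2}
          \Big] \, dt

    The penalty vanishes outside each ellipsoid and grows smoothly inside it, so gradient-based optimal control machinery can be applied to the augmented functional.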

  15. Modeling, simulation and optimization approaches for design of lightweight car body structures

    NASA Astrophysics Data System (ADS)

    Kiani, Morteza

    Simulation-based design optimization and the finite element method are used in this research to investigate weight reduction of car body structures made of metallic and composite materials under different design criteria. Besides crashworthiness in full frontal, offset frontal, and side impact scenarios, vibration frequencies, static stiffness, and joint rigidity are also considered. Energy absorption at the component level is used to study the effectiveness of carbon fiber reinforced polymer (CFRP) composite material under different failure criteria. A global-local design strategy is introduced and applied to multi-objective optimization of car body structures with CFRP components. Multiple example problems involving the analysis of full-vehicle crash and body-in-white models are used to examine the effect of material substitution and the choice of design criteria on weight reduction. The results of this study show that car body structures optimized for crashworthiness alone may not meet the vibration criterion. Moreover, optimized car body structures with CFRP components can be lighter than, and superior in crashworthiness to, the baseline and the optimized metallic structures.

  16. A new Monte Carlo-based treatment plan optimization approach for intensity modulated radiation therapy.

    PubMed

    Li, Yongbao; Tian, Zhen; Shi, Feng; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun

    2015-04-07

    Intensity-modulated radiation therapy (IMRT) plan optimization needs beamlet dose distributions. Pencil-beam or superposition/convolution type algorithms are typically used because of their high computational speed. However, inaccurate beamlet dose distributions may mislead the optimization process and hinder the resulting plan quality. To solve this problem, the Monte Carlo (MC) simulation method has been used to compute all beamlet doses prior to the optimization step. The conventional approach samples the same number of particles from each beamlet, yet this is not the optimal use of MC in this problem. In fact, some beamlets have very small intensities after the plan optimization problem is solved; for those beamlets, fewer particles can be used in dose calculations to increase efficiency. Based on this idea, we have developed a new MC-based IMRT plan optimization framework that iteratively performs MC dose calculation and plan optimization. At each dose calculation step, the particle numbers for the beamlets were adjusted based on the beamlet intensities obtained by solving the plan optimization problem in the previous iteration. We modified a GPU-based MC dose engine to allow simultaneous computation of a large number of beamlet doses. To test the accuracy of our modified dose engine, we compared the dose from a broad beam and the summed beamlet doses in this beam in an inhomogeneous phantom; agreement within 1% for the maximum difference and 0.55% for the average difference was observed. We then validated the proposed MC-based optimization scheme in one lung IMRT case. It was found that the conventional scheme required 10^6 particles from each beamlet to achieve an optimization result within 3% of the ground truth in fluence map and 1% in dose. In contrast, the proposed scheme achieved the same level of accuracy with on average 1.2 × 10^5 particles per beamlet. Correspondingly, the computation
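
    A conceptual sketch (not the authors' code) of the adaptive particle-allocation loop: particles are assigned per beamlet in proportion to the current intensities, then the plan is re-optimized. mc_dose and optimize_fluence are hypothetical stand-ins for the GPU dose engine and the plan optimizer.

    import numpy as np

    def mc_dose(n_particles):
        """Stub dose engine: per-beamlet doses whose noise shrinks as 1/sqrt(N)."""
        rng = np.random.default_rng(0)
        return [1.0 + rng.normal(0, 1 / np.sqrt(max(n, 1))) for n in n_particles]

    def optimize_fluence(doses):
        """Stub plan optimizer: returns normalized beamlet intensities."""
        return np.abs(np.array(doses)) / np.sum(np.abs(doses))

    n_beamlets, budget = 100, 10**7
    intensities = np.ones(n_beamlets) / n_beamlets           # flat starting fluence
    for it in range(5):
        n_particles = np.maximum(1, (budget * intensities).astype(int))
        doses = mc_dose(n_particles)                         # fewer particles for weak beamlets
        intensities = optimize_fluence(doses)                # re-solve the plan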

  17. Pectin extraction from quince (Cydonia oblonga) pomace applying alternative methods: effect of process variables and preliminary optimization.

    PubMed

    Brown, Valeria Anahí; Lozano, Jorge E; Genovese, Diego Bautista

    2014-03-01

    The objectives of this study were to introduce alternative methods into the process of pectin extraction from quince pomace, to determine the effect of selected process variables (factors) on the obtained pectin, and to perform a preliminary optimization of the process. A fractional factorial experimental design was applied with six factors: quince pomace pretreatment (washing vs. blanching), drying method (hot air vs. LPSSD), acid extraction conditions (pH, temperature, and time), and pectin extract concentration method (vacuum evaporation vs. ultrafiltration). The effects of these factors and their interactions on pectin yield (Y: 0.2-34.2 mg/g), galacturonic acid content (GalA: 44.5-76.2%), and degree of methyl-esterification (DM: 47.5-90.9%) were determined. For all three responses, extraction pH was the main effect, but it was also involved in two- and three-factor interactions. Regarding the alternative methods, LPSSD was required for maximum Y and GalA, and ultrafiltration for maximum GalA and DM. Response models were used to predict optimum process conditions (quince blanching, pomace drying by LPSSD, acid extraction at pH 2.20, 80 °C, 3 h, and concentration under vacuum) to simultaneously maximize Y (25.2 mg/g), GalA (66.3%), and DM (66.4%).
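
    An illustrative construction of a two-level fractional factorial design (2^(6-2), 16 runs) for six factors such as those in the study; the generator choice E = ABC, F = BCD is an assumption for the sketch, not taken from the paper.

    from itertools import product

    base = ["A", "B", "C", "D"]                  # four base factors at coded levels -1/+1
    runs = []
    for a, b, c, d in product((-1, 1), repeat=4):
        e = a * b * c                            # generator E = ABC
        f = b * c * d                            # generator F = BCD
        runs.append({"A": a, "B": b, "C": c, "D": d, "E": e, "F": f})

    for r in runs[:4]:
        print(r)                                 # first few of the 16 runs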

  18. Exploring the Dynamics of Policy Interaction: Feedback among and Impacts from Multiple, Concurrently Applied Policy Approaches for Promoting Collaboration

    ERIC Educational Resources Information Center

    Fuller, Boyd W.; Vu, Khuong Minh

    2011-01-01

    The prisoner's dilemma and stag hunt games, as well as the apparent benefits of collaboration, have motivated governments to promote more frequent and effective collaboration through a variety of policy approaches. Sometimes, multiple kinds of policies are applied concurrently, and yet little is understood about how these policies might interact…

  19. Characteristics of Computational Thinking about the Estimation of the Students in Mathematics Classroom Applying Lesson Study and Open Approach

    ERIC Educational Resources Information Center

    Promraksa, Siwarak; Sangaroon, Kiat; Inprasitha, Maitree

    2014-01-01

    The objectives of this research were to study and analyze the characteristics of computational thinking about the estimation of the students in mathematics classroom applying lesson study and open approach. Members of target group included 4th grade students of 2011 academic year of Choomchon Banchonnabot School. The Lesson plan used for data…

  20. Fuel moisture content estimation: a land-surface modelling approach applied to African savannas

    NASA Astrophysics Data System (ADS)

    Ghent, D.; Spessa, A.; Kaduk, J.; Balzter, H.

    2009-04-01

    Despite the importance of fire to the global climate system, in terms of emissions from biomass burning, ecosystem structure and function, and changes to surface albedo, current land-surface models do not adequately estimate key variables affecting fire ignition and propagation. Fuel moisture content (FMC) is considered one of the most important of these variables (Chuvieco et al., 2004). Biophysical models, with appropriate plant functional type parameterisations, are the most viable option for adequately predicting FMC over continental scales at high temporal resolution. However, the complexity of plant-water interactions and the variability associated with short-term climate changes make it one of the most difficult fire variables to quantify and predict. Our work attempts to resolve this issue using a combination of satellite data and biophysical modelling applied to Africa. The approach we take is to represent live FMC as a surface dryness index, expressed as the ratio between the Normalised Difference Vegetation Index (NDVI) and land-surface temperature (LST). It has been argued in previous studies (Sandholt et al., 2002; Snyder et al., 2006) that this ratio displays a statistically stronger correlation to FMC than either variable considered separately. In this study, simulated FMC is constrained through the assimilation of remotely sensed LST and NDVI data into the land-surface model JULES (Joint UK Land Environment Simulator). Previous modelling studies of fire activity in African savannas, such as Lehsten et al. (2008), have reported significant levels of uncertainty associated with the simulations. This uncertainty is important because African savannas are among the most frequently burnt ecosystems and are a major source of greenhouse trace gases and aerosol emissions (Scholes et al., 1996). Furthermore, regional climate model studies indicate that many parts of the African savannas will experience drier and warmer conditions in future
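
    A minimal sketch of the surface dryness index described above, the NDVI/LST ratio used as a proxy for live FMC; the two small arrays are synthetic placeholders for remotely sensed fields.

    import numpy as np

    ndvi = np.array([[0.62, 0.55], [0.30, 0.18]])      # unitless, range -1 to 1
    lst = np.array([[295.0, 301.5], [310.2, 318.8]])   # land-surface temperature, K

    dryness_index = ndvi / lst                         # higher values -> moister vegetation
    print(dryness_index)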

  1. Utilizing Electronic Health Record Information to Optimize Medication Infusion Devices: A Manual Data Integration Approach.

    PubMed

    Chuk, Amanda; Maloney, Robert; Gawron, Joyce; Skinner, Colin

    2015-05-23

    Health information technology is increasingly utilized within healthcare delivery systems today. Two examples of this type of technology include the capture of patient-specific information within an electronic health record and intravenous medication infusion devices equipped with dose error reduction software known as drug libraries. Automatic integration of these systems, termed intravenous (IV) interoperability, should serve as the goal toward which all healthcare systems work to maximize patient safety. For institutions lacking IV interoperability, we describe a manual approach of querying the electronic health record to incorporate medication administration information with data from infusion device software to optimize drug library settings. This approach serves to maximize utilization of available information to optimize medication safety provided by drug library software.

  3. A homotopy approach for combined control-structure optimization - Constructive analysis and numerical examples

    NASA Technical Reports Server (NTRS)

    Scheid, R. E.; Milman, M. H.; Salama, M.; Bruno, R.; Gibson, J. S.

    1990-01-01

    This paper outlines the development of methods for the combined control-structure optimization of physical systems encountered in the technology of large space structures. The objective of the approach taken here is not to produce the 'best' optimized design, but rather to efficiently produce a family of design options to assist in early trade studies, typically before hard design constraints are imposed. The philosophy is that these are candidate designs to be passed on for further consideration; their function is to guide the development of the system design rather than to represent the ultimate product. A homotopy approach involving multiple objective functions is developed for this purpose. Analytical and numerical examples are presented.
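
    A toy sketch of the homotopy idea: sweep a blending parameter from a control-oriented objective to a structure-oriented one, warm-starting each solve from the previous design, to produce a family of candidate designs. Both quadratic objectives are invented, not the paper's model.

    import numpy as np
    from scipy.optimize import minimize

    f_control = lambda x: (x[0] - 1.0) ** 2 + 0.5 * x[1] ** 2       # stand-in control objective
    f_structure = lambda x: x[0] ** 2 + (x[1] - 2.0) ** 2           # stand-in structural objective

    x = np.zeros(2)                                  # initial design
    family = []
    for t in np.linspace(0.0, 1.0, 11):              # homotopy parameter
        blend = lambda x, t=t: (1 - t) * f_control(x) + t * f_structure(x)
        x = minimize(blend, x).x                     # warm start from the last design
        family.append((t, x.copy()))                 # one candidate design per t

    for t, xt in family[::5]:
        print(f"t={t:.1f}  design={np.round(xt, 3)}")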

  4. An integrated approach of topology optimized design and selective laser melting process for titanium implants materials.

    PubMed

    Xiao, Dongming; Yang, Yongqiang; Su, Xubin; Wang, Di; Sun, Jianfeng

    2013-01-01

    Load-bearing bone implant materials should have sufficient stiffness and large porosity, two properties that interact, since larger porosity lowers mechanical properties. This paper seeks the maximum-stiffness architecture under a constraint on volume fraction using a topology optimization approach; that is, maximum porosity is achieved for predefined stiffness properties. The effective elastic moduli of conventional cubic and topology-optimized scaffolds were calculated using the finite element analysis (FEA) method; in addition, specimens with porosities of 41.1%, 50.3%, 60.2%, and 70.7% were fabricated by the Selective Laser Melting (SLM) process and evaluated in compression tests. Results showed that the computational effective elastic modulus of the optimized scaffolds was approximately 13% higher than that of the cubic scaffolds, while the experimental stiffness values were about 76% lower than the computational ones. The combination of the topology optimization approach and the SLM process is a viable route for developing titanium implant materials that balance porosity and mechanical stiffness.

  5. Real-time PCR probe optimization using design of experiments approach.

    PubMed

    Wadle, S; Lehnert, M; Rubenwolf, S; Zengerle, R; von Stetten, F

    2016-03-01

    Primer and probe sequence designs are among the most critical input factors in real-time polymerase chain reaction (PCR) assay optimization. In this study, we present the use of a statistical design of experiments (DOE) approach as a general guideline for probe optimization, focusing specifically on design optimization of label-free hydrolysis probes designated as mediator probes (MPs), which are used in reverse transcription MP PCR (RT-MP PCR). The effect of three input factors on assay performance was investigated: the distance between the primer and the mediator probe cleavage site; the dimer stability of the MP and the target sequence (influenza B virus); and the dimer stability of the mediator and the universal reporter (UR). The results indicated that the last of these dimer stabilities had the greatest influence on assay performance, with RT-MP PCR efficiency increased by up to 10% through changes to this input factor. With an optimal design configuration, a detection limit of 3-14 target copies/10 μl reaction could be achieved. This improved detection limit was confirmed for another UR design and for a second target sequence, human metapneumovirus, with 7-11 copies/10 μl reaction detected in the optimum case. The DOE approach for improving oligonucleotide designs for real-time PCR not only produces excellent results but may also reduce the number of experiments that need to be performed, thus reducing costs and experimental times.
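
    A small sketch of estimating main effects from a two-level DOE, as one might for the three probe-design factors named above; the coded design matrix and the efficiency values are invented for illustration.

    import numpy as np

    # full factorial 2^3 design matrix (coded -1/+1) for three factors
    X = np.array([[a, b, c] for a in (-1, 1) for b in (-1, 1) for c in (-1, 1)])
    y = np.array([82, 84, 83, 86, 85, 88, 90, 97])   # PCR efficiency (%), hypothetical

    for j, name in enumerate(["distance", "MP-target dG", "mediator-UR dG"]):
        effect = y[X[:, j] == 1].mean() - y[X[:, j] == -1].mean()
        print(f"main effect of {name}: {effect:+.2f} % efficiency")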

  6. Surface Roughness Optimization of Polyamide-6/Nanoclay Nanocomposites Using Artificial Neural Network: Genetic Algorithm Approach

    PubMed Central

    Moghri, Mehdi; Omidi, Mostafa; Farahnakian, Masoud

    2014-01-01

    During the past decade, polymer nanocomposites have attracted considerable investment in research and development worldwide. One of the key factors affecting the quality of machined polymer nanocomposite products is surface roughness. To obtain high-quality products and reduce machining costs, it is very important to determine the optimal machining conditions. The objective of this paper is to develop a predictive model, using a combined design of experiments and artificial intelligence approach, for optimization of surface roughness in milling of polyamide-6 (PA-6) nanocomposites. A surface roughness predictive model was developed in terms of the milling parameters (spindle speed and feed rate) and nanoclay (NC) content using an artificial neural network (ANN). As the present study deals with a relatively small number of data points obtained from a full factorial design, application of a genetic algorithm (GA) for ANN training is an appropriate approach for developing an accurate and robust ANN model. In the optimization phase, a GA is used in conjunction with the explicit nonlinear function derived from the ANN to determine the optimal milling parameters that minimize surface roughness for each PA-6 nanocomposite. PMID:24578636
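
    A minimal sketch of the GA-over-surrogate step, assuming a made-up quadratic in place of the trained ANN; the bounds, GA settings, and parameter names are illustrative, not the paper's values.

    import numpy as np

    rng = np.random.default_rng(1)
    lo, hi = np.array([1000.0, 50.0]), np.array([4000.0, 400.0])   # spindle rpm, feed mm/min

    def roughness(x):                       # surrogate stand-in for the trained ANN
        s, f = (x - lo) / (hi - lo)
        return 1.5 + 2.0 * (f - 0.2) ** 2 + 1.0 * (s - 0.7) ** 2 + 0.5 * s * f

    pop = rng.uniform(lo, hi, size=(30, 2))
    for gen in range(50):
        fit = np.array([roughness(x) for x in pop])
        parents = pop[np.argsort(fit)[:10]]                        # truncation selection
        kids = []
        for _ in range(len(pop)):
            a, b = parents[rng.integers(10, size=2)]
            w = rng.random()
            child = w * a + (1 - w) * b                            # blend crossover
            child += rng.normal(0, 0.02 * (hi - lo))               # Gaussian mutation
            kids.append(np.clip(child, lo, hi))
        pop = np.array(kids)

    best = pop[np.argmin([roughness(x) for x in pop])]
    print("best (speed, feed):", np.round(best, 1))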

  7. Nanocarriers for optimizing the balance between interfollicular permeation and follicular uptake of topically applied clobetasol to minimize adverse effects.

    PubMed

    Mathes, C; Melero, A; Conrad, P; Vogt, T; Rigo, L; Selzer, D; Prado, W A; De Rossi, C; Garrigues, T M; Hansen, S; Guterres, S S; Pohlmann, A R; Beck, R C R; Lehr, C-M; Schaefer, U F

    2016-02-10

    The treatment of various hair disorders has become a central focus of good dermatologic patient care, as these disorders affect men and women all over the world. For many inflammatory scalp diseases, glucocorticoids are an essential part of treatment, even though they are known to cause systemic as well as local adverse effects when applied topically. Therefore, efficient targeting and avoidance of these side effects are of utmost importance. Optimizing the balance between drug release, interfollicular permeation, and follicular uptake may allow minimizing these adverse events and simultaneously improve drug delivery, provided one succeeds in targeting a sustained-release formulation to the hair follicle. To test this hypothesis, three types of polymeric nanocarriers (nanospheres, nanocapsules, lipid-core nanocapsules) for the potent glucocorticoid clobetasol propionate (CP) were prepared. They all exhibited a sustained release of drug, as was desired. The particles were formulated as a dispersion and hydrogel and (partially) labeled with Rhodamin B for quantification purposes. Follicular uptake was investigated using the differential stripping method and was found highest for nanocapsules in dispersion after application of massage. Moreover, the active ingredient (CP) and the nanocarrier (Rhodamin B labeled polymer) recovered in the hair follicle were measured simultaneously, revealing an equivalent uptake of both. In contrast, only negligible amounts of CP could be detected in the hair follicle when applied as free drug in solution or hydrogel, regardless of any massage. Skin permeation experiments using heat-separated human epidermis mounted in Franz diffusion cells revealed equivalently reduced transdermal permeability for all nanocarriers in comparison to application of the free drug. Combining these results, nanocapsules formulated as an aqueous dispersion and applied by massage appear to be a good candidate to maximize follicular targeting and minimize drug

  8. Benefits of collaborative learning for environmental management: applying the integrated systems for knowledge management approach to support animal pest control.

    PubMed

    Allen, W; Bosch, O; Kilvington, M; Oliver, J; Gilbert, M

    2001-02-01

    Resource management issues continually change over time in response to coevolving social, economic, and ecological systems. Under these conditions adaptive management, or "learning by doing," offers an opportunity for more proactive and collaborative approaches to resolving environmental problems. In turn, this will require the implementation of learning-based extension approaches alongside more traditional linear technology transfer approaches within the area of environmental extension. In this paper the Integrated Systems for Knowledge Management (ISKM) approach is presented to illustrate how such learning-based approaches can be used to help communities develop, apply, and refine technical information within a larger context of shared understanding. To outline how this works in practice, we use a case study involving pest management. Particular attention is paid to the issues that emerge as a result of multiple stakeholder involvement within environmental problem situations. Finally, the potential role of the Internet in supporting and disseminating the experience gained through ongoing adaptive management processes is examined.

  9. Investigation of Multi-Criteria Decision Consistency: A Triplex Approach to Optimal Oilfield Portfolio Investment Decisions

    NASA Astrophysics Data System (ADS)

    Qaradaghi, Mohammed

    techniques that can provide more flexibility and inclusiveness in the decision making process, such as Multi-Criteria Decision Making (MCDM) methods. However, it can be observed that the MCDM literature: 1) is primarily focused on suggesting certain MCDM techniques to specific problems without providing sufficient evidence for their selection, 2) is inadequate in addressing MCDM in E&P portfolio selection and prioritization compared with other fields, and 3) does not address prioritizing brownfields (i.e., developed oilfields). This research study aims at addressing the above drawbacks through combining three MCDM methods (i.e., AHP, PROMETHEE and TOPSIS) into a single decision making tool that can support optimal oilfield portfolio investment decisions by helping determine the share of each oilfield of the total development resources allocated. Selecting these methods is reinforced by a pre-deployment and post-deployment validation framework. In addition, this study proposes a two-dimensional consistency test to verify the output coherence or prioritization stability of the MCDM methods in comparison with an intuitive approach. Nine scenarios representing all possible outcomes of the internal and external consistency tests are further proposed to reach a conclusion. The methodology is applied to a case study of six major oilfields in Iraq to generate percentage shares of each oilfield of a total production target that is in line with Iraq's aspiration to increase oil production. However, the methodology is intended to be applicable to other E&P portfolio investment prioritization scenarios by taking the specific contextual characteristics into consideration.
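
    A hedged sketch of one of the three MCDM methods named above (TOPSIS), ranking alternatives by closeness to an ideal solution; the decision matrix, weights, and criterion senses are invented, not the case-study data.

    import numpy as np

    # rows: oilfields, cols: criteria (e.g., reserves, development cost, infrastructure)
    A = np.array([[9.0, 3.0, 7.0],
                  [6.0, 2.0, 9.0],
                  [8.0, 5.0, 6.0]])
    w = np.array([0.5, 0.2, 0.3])            # criterion weights
    benefit = np.array([True, False, True])  # the cost criterion is minimized

    R = A / np.linalg.norm(A, axis=0)        # vector-normalize each criterion
    V = R * w                                # weighted normalized matrix
    ideal = np.where(benefit, V.max(0), V.min(0))
    anti = np.where(benefit, V.min(0), V.max(0))
    d_plus = np.linalg.norm(V - ideal, axis=1)
    d_minus = np.linalg.norm(V - anti, axis=1)
    closeness = d_minus / (d_plus + d_minus)  # higher = better
    print("ranking (best first):", np.argsort(-closeness))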

  10. Comparison of penalty functions on a penalty approach to mixed-integer optimization

    NASA Astrophysics Data System (ADS)

    Francisco, Rogério B.; Costa, M. Fernanda P.; Rocha, Ana Maria A. C.; Fernandes, Edite M. G. P.

    2016-06-01

    In this paper, we present a comparative study of several penalty functions that can be used in a penalty approach for globally solving bound mixed-integer nonlinear programming (bMINLP) problems. The penalty approach relies on a continuous reformulation of the bMINLP problem obtained by adding a particular penalty term to the objective function. A penalty function based on the 'erf' function is proposed. The continuous nonlinear optimization problems are solved sequentially by the population-based firefly algorithm. Preliminary numerical experiments are carried out to analyze the quality of the produced solutions compared with other penalty functions available in the literature.
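
    A hedged sketch of the continuous-reformulation idea: a penalty term that vanishes when the "integer" variables take integer values. The erf-based shape below is an assumption inspired by the abstract, not the authors' exact penalty function.

    import numpy as np
    from scipy.special import erf

    def integrality_penalty(x, sharpness=20.0):
        """Near zero at integers, rising steeply between them."""
        frac = x - np.round(x)                    # distance to the nearest integer
        return np.sum(erf(sharpness * np.abs(frac)))

    def penalized(f, x, mu=10.0):
        return f(x) + mu * integrality_penalty(x)

    f = lambda x: (x[0] - 2.3) ** 2 + (x[1] - 0.8) ** 2
    print(penalized(f, np.array([2.0, 1.0])))     # integer point: penalty ~ 0
    print(penalized(f, np.array([2.3, 0.8])))     # fractional point: heavily penalized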

  11. An Optimization Approach to Coexistence of Bluetooth and Wi-Fi Networks Operating in ISM Environment

    NASA Astrophysics Data System (ADS)

    Klajbor, Tomasz; Rak, Jacek; Wozniak, Jozef

    The unlicensed ISM band is used by various wireless technologies; therefore, issues related to ensuring the required efficiency and quality of operation of coexisting networks become essential. The paper addresses the problem of mutual interference between IEEE 802.11b transmitters (commercially known as Wi-Fi) and Bluetooth (BT) devices. An optimization approach to modeling the topology of BT scatternets is introduced, resulting in more efficient utilization of an ISM environment consisting of BT and Wi-Fi networks. To achieve this, an Integer Linear Programming formulation is proposed. Example results presented in the paper illustrate the significant benefits of the proposed modeling strategy.

  12. CNS Multiparameter Optimization Approach: Is it in Accordance with Occam's Razor Principle?

    PubMed

    Raevsky, Oleg A

    2016-04-01

    A detailed analysis of the possibility of using the Multiparameter Optimization (MPO) approach for CNS/non-CNS classification of drugs was carried out. This work shows that MPO descriptors describe only the part of chemical transport into the CNS associated with transmembrane diffusion. Hence the "intuitive" CNS MPO approach, with its arbitrary selection of descriptors, calculation of score functions, search for classification thresholds, and absence of any chemometric procedures, leads to rather modest accuracy in CNS/non-CNS classification models.
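
    An illustrative MPO-style score: piecewise-linear desirability functions in [0, 1] summed over six physicochemical properties. The thresholds below are placeholders, not the published CNS MPO cutoffs, and the monotone-decreasing shape is a simplification (some published desirabilities are hump-shaped).

    def desirability(value, good_max, bad_max):
        """1 below good_max, 0 above bad_max, linear in between (placeholder shape)."""
        if value <= good_max:
            return 1.0
        if value >= bad_max:
            return 0.0
        return (bad_max - value) / (bad_max - good_max)

    def mpo_score(clogp, clogd, mw, tpsa, hbd, pka):
        # six property desirabilities summed to a 0-6 score; ranges are hypothetical
        return (desirability(clogp, 3.0, 5.0) + desirability(clogd, 2.0, 4.0)
                + desirability(mw, 360.0, 500.0) + desirability(tpsa, 90.0, 120.0)
                + desirability(hbd, 0.5, 3.5) + desirability(pka, 8.0, 10.0))

    print(mpo_score(clogp=2.8, clogd=1.9, mw=310.0, tpsa=75.0, hbd=1, pka=8.4))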

  13. An optimal control approach to pilot/vehicle analysis and Neal-Smith criteria

    NASA Technical Reports Server (NTRS)

    Bacon, B. J.; Schmidt, D. K.

    1984-01-01

    The approach of Neal and Smith was merged with advances in pilot modeling by means of optimal control techniques. While confirming the findings of Neal and Smith, a methodology was developed that explicitly includes the pilot's objective in attitude tracking. More importantly, the method yields the required system bandwidth along with a better pilot model directly applicable to closed-loop analysis of systems of any order.

  14. A Novel Synthesis of Computational Approaches Enables Optimization of Grasp Quality of Tendon-Driven Hands

    PubMed Central

    Inouye, Joshua M.; Kutch, Jason J.; Valero-Cuevas, Francisco J.

    2013-01-01

    We propose a complete methodology to find the full set of feasible grasp wrenches and the corresponding wrench-direction-independent grasp quality for a tendon-driven hand with arbitrary design parameters. Monte Carlo simulations on two representative designs combined with multiple linear regression identified the parameters with the greatest potential to increase this grasp metric. This synthesis of computational approaches now enables the systematic design, evaluation, and optimization of tendon-driven hands. PMID:23335864
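
    A sketch of the Monte Carlo plus multiple-linear-regression sensitivity idea: sample design parameters, evaluate a grasp-quality stand-in, and regress quality on the parameters to rank their influence. The quality function is invented; the paper's wrench-space computation is not reproduced here.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 2000
    params = rng.uniform(0, 1, size=(n, 4))        # e.g., moment arms, tendon routing values
    quality = (0.6 * params[:, 0] + 0.3 * params[:, 2]
               + 0.1 * params[:, 1] * params[:, 3] + rng.normal(0, 0.05, n))

    X = np.column_stack([np.ones(n), params])      # design matrix with an intercept
    coef, *_ = np.linalg.lstsq(X, quality, rcond=None)
    print("regression coefficients (influence ranking):", np.round(coef[1:], 3))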

  16. Comparison of approaches based on optimization and algebraic iteration for binary tomography

    NASA Astrophysics Data System (ADS)

    Cai, Weiwei; Ma, Lin

    2010-12-01

    Binary tomography represents a special category of tomographic problems, in which only two values are possible for the sought image pixels. The binary nature of the problems can potentially lead to a significant reduction in the number of view angles required for a satisfactory reconstruction, thus enabling many interesting applications. However, the limited view angles result in a severely underdetermined system of equations, which is challenging to solve. Various approaches have been proposed to address this challenge; two categories include those based on optimization and those based on algebraic iteration. However, the relative strengths, limitations, and applicable ranges of these approaches have not been clearly defined in the past. Therefore, the main objective of this work is to conduct a systematic comparison of approaches from each category. This comparison suggests that the approaches based on algebraic iteration offer both superior reconstruction fidelity and computational efficiency at low (two or three) view angles, and that these advantages diminish at high view angles. Meanwhile, this work also investigated the application of regularization techniques, the selection of the optimal regularization parameter, and the use of a local search technique for binary problems. We expect the results and conclusions reported in this work to provide valuable guidance for the design and development of algorithms for binary tomography problems.
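
    A toy sketch of the algebraic-iteration route: Kaczmarz (ART) sweeps on a tiny 2x2 image measured by row, column, and one diagonal ray, followed by thresholding the pixels to {0, 1}. The system is illustrative, not from the paper.

    import numpy as np

    A = np.array([[1, 1, 0, 0],      # top row sum
                  [0, 0, 1, 1],      # bottom row sum
                  [1, 0, 1, 0],      # left column sum
                  [0, 1, 0, 1],      # right column sum
                  [1, 0, 0, 1]],     # main-diagonal ray
                 dtype=float)
    x_true = np.array([1.0, 0.0, 0.0, 1.0])
    b = A @ x_true                   # measured projections

    x = np.full(4, 0.5)              # neutral starting image
    for _ in range(50):
        for i in range(len(b)):      # one Kaczmarz pass over all rays
            a = A[i]
            x += (b[i] - a @ x) / (a @ a) * a
        x = np.clip(x, 0.0, 1.0)     # keep pixel values in the binary range

    print("binary reconstruction:", (x > 0.5).astype(int))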

  17. Optimizing a four-props support using the integrative design approach

    NASA Astrophysics Data System (ADS)

    Gwiazda, A.; Foit, K.; Banaś, W.; Sękala, A.; Monica, Z.; Topolska, S.

    2016-08-01

    Modern approaches to the design of technical means require taking into consideration issues concerning the integration of different sources of data and knowledge, as well as various design methodologies. Thus, the importance of an integrative approach is growing; the integration itself can be understood as a link between these different methodological solutions. A further issue concerns the optimization of technical means against the full range of design requirements. These issues form the basis of a design approach that uses integration as the foundation for constructional optimization of the designed technical mean. It rests, first, on the concept of integrating the three main subsystems of a technical mean: the structural, the drive, and the control subsystem. Second, it includes the integration of three aspects of a construction: its geometrical, material, and assembly characteristics. One area of application of the proposed integrative design process is the construction of mechanized mining supports. Different mining-support systems are characterized by different sets of advantages and disadvantages, and the design process must account for the working conditions of mining supports, which are closely linked to the geological characteristics of the mined beds. A mechanized mining support can be treated as the logical union of the three constructional subsystems mentioned above: the structural subsystem includes, among others, the canopy, rear shield, and foot pieces; the drive subsystem includes the hydraulic props and their equipment; and the control subsystem includes the system of hydraulic valves and their parameters.

  18. Conceptual design optimization study

    NASA Technical Reports Server (NTRS)

    Hollowell, S. J.; Beeman, E. R., II; Hiyama, R. M.

    1990-01-01

    The feasibility of applying multilevel functional decomposition and optimization techniques to conceptual design of advanced fighter aircraft was investigated. Applying the functional decomposition techniques to the conceptual design phase appears to be feasible. The initial implementation of the modified design process will optimize wing design variables. A hybrid approach, combining functional decomposition techniques for generation of aerodynamic and mass properties linear sensitivity derivatives with existing techniques for sizing mission performance and optimization, is proposed.

  19. Private pediatric neuropsychology practice multimodal treatment of ADHD: an applied approach.

    PubMed

    Beljan, Paul; Bree, Kathleen D; Reuter, Alison E F; Reuter, Scott D; Wingers, Laura

    2014-01-01

    As neuropsychologists and psychologists specializing in the assessment and treatment of pediatric mental health concerns, one of the most prominent diagnoses we encounter is attention-deficit hyperactivity disorder (ADHD). Following a pediatric neuropsychological evaluation, parents often request recommendations for treatment. This article addresses our approach to the treatment of ADHD from the private practice perspective. We will review our primary treatment methodology as well as integrative and alternative treatment approaches.

  20. Development of a new adaptive ordinal approach to continuous-variable probabilistic optimization.

    SciTech Connect

    Romero, Vicente José; Chen, Chun-Hung (George Mason University, Fairfax, VA)

    2006-11-01

    A very general and robust approach to solving continuous-variable optimization problems involving uncertainty in the objective function is the use of ordinal optimization. At each step in the optimization problem, improvement is based only on a relative ranking of the uncertainty effects on local design alternatives, rather than on precise quantification of those effects. One simply asks "Is that alternative better or worse than this one?", not "How much better or worse is that alternative than this one?" The answer to the latter question requires precise characterization of the uncertainty, with the corresponding sampling/integration expense for precise resolution. In this report, however, we demonstrate correct decision-making in a continuous-variable probabilistic optimization problem despite extreme vagueness in the statistical characterization of the design options. We present a new adaptive ordinal method for probabilistic optimization in which the trade-off between computational expense and vagueness in the uncertainty characterization can be conveniently managed in the various phases of the optimization problem to make cost-effective stepping decisions in the design space. Spatial correlation of uncertainty in the continuous-variable design space is exploited to dramatically increase method efficiency. Under many circumstances the method appears to have favorable robustness and cost-scaling properties relative to other probabilistic optimization methods, and it uniquely provides mechanisms for quantifying and controlling error likelihood in design-space stepping decisions. The method is asymptotically convergent to the true probabilistic optimum, so it could serve as a reference standard against which the efficiency and robustness of other methods are compared, analogous to the role that Monte Carlo simulation plays in uncertainty propagation.
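
    A toy sketch of the ordinal principle: choose between two designs by majority vote over paired noisy comparisons rather than by estimating each mean precisely. The noise model, vote rule, and step sizes are invented and this is not the report's adaptive method.

    import numpy as np

    rng = np.random.default_rng(3)

    def noisy_objective(x):
        return (x - 1.0) ** 2 + rng.normal(0, 0.5)    # true optimum at x = 1

    def ordinal_better(x_a, x_b, votes=15):
        """True if x_a beats x_b in most paired noisy comparisons."""
        wins = sum(noisy_objective(x_a) < noisy_objective(x_b) for _ in range(votes))
        return wins > votes // 2

    x = 3.0
    for _ in range(40):                               # crude ordinal descent
        cand = x + rng.choice([-0.2, 0.2])
        if ordinal_better(cand, x):
            x = cand
    print("final design:", round(x, 2))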