Science.gov

Sample records for optimization approach applied

  1. Homotopic approach and pseudospectral method applied jointly to low thrust trajectory optimization

    NASA Astrophysics Data System (ADS)

    Guo, Tieding; Jiang, Fanghua; Li, Junfeng

    2012-02-01

    The homotopic approach and the pseudospectral method are two popular techniques for low thrust trajectory optimization. A hybrid scheme is proposed in this paper that combines the two to cope with the various difficulties encountered when they are applied separately. Explicitly, a smooth energy-optimal problem is first discretized by the pseudospectral method, leading to a nonlinear programming problem (NLP). Costates, especially their initial values, are then estimated from the Karush-Kuhn-Tucker (KKT) multipliers of this NLP. Based upon these estimated initial costates, homotopic procedures are initiated efficiently, and the desired non-smooth fuel-optimal results are finally obtained by continuing the smooth energy-optimal results through a homotopic algorithm. Two main difficulties, one due to the absence of reasonable initial costates when the homotopic procedures are initiated and the other due to discontinuous bang-bang controls when the pseudospectral method is applied to the fuel-optimal problem, are both resolved successfully. Numerical results for two scenarios are presented in the end, demonstrating the feasibility and good performance of this hybrid technique.
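
    A toy sketch of the continuation idea (an illustration under invented assumptions, not the paper's trajectory code): a smooth quadratic "energy" term is deformed step by step into a non-smooth L1 "fuel" term, with each solve warm-started from the previous solution, mirroring how the homotopy carries the easy problem's solution over to the hard one.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def cost(u, eps):
        # eps = 1: smooth quadratic (energy-like); eps -> 0: |u| (fuel-like)
        fuel = np.sum(np.sqrt(u**2 + 1e-12))        # smoothed L1 "fuel" term
        energy = np.sum(u**2)
        tracking = np.sum((np.cumsum(u) - 5.0)**2)  # crude dynamics surrogate
        return tracking + eps * energy + (1 - eps) * fuel

    u = np.zeros(20)                 # initial guess for the control history
    for eps in [1.0, 0.5, 0.2, 0.05, 0.0]:
        res = minimize(cost, u, args=(eps,), method="BFGS")
        u = res.x                    # warm start the next homotopy step
    print(u.round(3))                # sparse, bang-bang-like control emerges
    ```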

  2. Optimal control theory (OWEM) applied to a helicopter in the hover and approach phase

    NASA Technical Reports Server (NTRS)

    Born, G. J.; Kai, T.

    1975-01-01

    A major difficulty in the practical application of linear-quadratic regulator theory is how to choose the weighting matrices in quadratic cost functions. A control system design with optimal weighting matrices was applied to a helicopter in the hover and approach phase. The weighting matrices were calculated to extremize the closed-loop total system damping subject to constraints on the determinants. The extremization is really a minimization of the effects of disturbances, and can be interpreted as a compromise between the generalized system accuracy and the generalized system response speed. The trade-off between accuracy and response speed is adjusted by a single parameter, the ratio of determinants. By this approach an objective measure can be obtained for the design of a control system. The measure is determined by the system requirements.

  3. Further Development of an Optimal Design Approach Applied to Axial Magnetic Bearings

    NASA Technical Reports Server (NTRS)

    Bloodgood, V. Dale, Jr.; Groom, Nelson J.; Britcher, Colin P.

    2000-01-01

    Classical design methods involved in magnetic bearings and magnetic suspension systems have always had their limitations. Because of this, the overall effectiveness of a design has always relied heavily on the skill and experience of the individual designer. This paper combines two approaches that have been developed to aid the accuracy and efficiency of magnetostatic design. The first approach integrates classical magnetic circuit theory with modern optimization theory to increase design efficiency. The second approach uses loss factors to increase the accuracy of classical magnetic circuit theory. As an example, an axial magnetic thrust bearing is designed for minimum power.

  4. Macronutrient modifications of optimal foraging theory: an approach using indifference curves applied to some modern foragers

    SciTech Connect

    Hill, K.

    1988-06-01

    The use of energy (calories) as the currency to be maximized per unit time in Optimal Foraging Models is considered in light of data on several foraging groups. Observations on the Ache, Cuiva, and Yora foragers suggest men do not attempt to maximize energetic return rates, but instead often concentrate on acquiring meat resources which provide lower energetic returns. The possibility that this preference is due to the macronutrient composition of hunted and gathered foods is explored. Indifference curves are introduced as a means of modeling the tradeoff between two desirable commodities, meat (protein-lipid) and carbohydrate, and a specific indifference curve is derived using observed choices in five foraging situations. This curve is used to predict the amount of meat that Mbuti foragers will trade for carbohydrate, in an attempt to test the utility of the approach.

  5. Optimal control of open quantum systems: A combined surrogate Hamiltonian optimal control theory approach applied to photochemistry on surfaces

    SciTech Connect

    Asplund, Erik; Kluener, Thorsten

    2012-03-28

    In this paper, control of open quantum systems with emphasis on the control of surface photochemical reactions is presented. A quantum system in a condensed phase undergoes strong dissipative processes. From a theoretical viewpoint, it is important to model such processes in a rigorous way. In this work, the description of open quantum systems is realized within the surrogate Hamiltonian approach [R. Baer and R. Kosloff, J. Chem. Phys. 106, 8862 (1997)]. An efficient and accurate method to find control fields is optimal control theory (OCT) [W. Zhu, J. Botina, and H. Rabitz, J. Chem. Phys. 108, 1953 (1998); Y. Ohtsuki, G. Turinici, and H. Rabitz, J. Chem. Phys. 120, 5509 (2004)]. To gain control of open quantum systems, the surrogate Hamiltonian approach and OCT, with time-dependent targets, are combined. Three open quantum systems are investigated by the combined method: a harmonic oscillator immersed in an ohmic bath, CO adsorbed on a platinum surface, and NO adsorbed on a nickel oxide surface. Throughout this paper, atomic units, i.e., ħ = m_e = e = a_0 = 1, have been used unless otherwise stated.

  6. Data Understanding Applied to Optimization

    NASA Technical Reports Server (NTRS)

    Buntine, Wray; Shilman, Michael

    1998-01-01

    The goal of this research is to explore and develop software for supporting visualization and data analysis of search and optimization. Optimization is an ever-present problem in science. The theory of NP-completeness implies that such problems can only be resolved by increasingly smarter problem-specific knowledge, possibly for use in some general purpose algorithms. Visualization and data analysis offer an opportunity to accelerate our understanding of key computational bottlenecks in optimization and to automatically tune aspects of the computation for specific problems. We will prototype systems to demonstrate how data understanding can be successfully applied to problems characteristic of NASA's key science optimization tasks, such as central tasks for parallel processing, spacecraft scheduling, and data transmission from a remote satellite.

  7. Dynamic programming in applied optimization problems

    NASA Astrophysics Data System (ADS)

    Zavalishchin, Dmitry

    2015-11-01

    Features of the use of dynamic programming in applied problems are investigated. In practice, such problems as finding critical paths in network planning and control, finding the optimal supply plan in the transportation problem, and territorial distribution of objects are traditionally solved by special methods of operations research. It should be noted that dynamic programming does not provide computational advantages, but it facilitates changes and modifications of tasks. This follows from Bellman's principle of optimality. The features of constructing multistage decision processes in applied problems are described.

  8. Applying new optimization algorithms to more predictive control

    SciTech Connect

    Wright, S.J.

    1996-03-01

    The connections between optimization and control theory have been explored by many researchers and optimization algorithms have been applied with success to optimal control. The rapid pace of developments in model predictive control has given rise to a host of new problems to which optimization has yet to be applied. Concurrently, developments in optimization, and especially in interior-point methods, have produced a new set of algorithms that may be especially helpful in this context. In this paper, we reexamine the relatively simple problem of control of linear processes subject to quadratic objectives and general linear constraints. We show how new algorithms for quadratic programming can be applied efficiently to this problem. The approach extends to several more general problems in straightforward ways.

  9. Applying Squeaky-Wheel Optimization to Schedule Airborne Astronomy Observations

    NASA Technical Reports Server (NTRS)

    Frank, Jeremy; Kuerklue, Elif

    2004-01-01

    We apply the Squeaky Wheel Optimization (SWO) algorithm to the problem of scheduling astronomy observations for the Stratospheric Observatory for Infrared Astronomy, an airborne observatory. The problem contains complex constraints relating the feasibility of an astronomical observation to the position and time at which the observation begins, telescope elevation limits, special use airspace, and available fuel. Solving the problem requires making discrete choices (e.g. selection and sequencing of observations) and continuous ones (e.g. takeoff time and setting up observations by repositioning the aircraft). The problem also includes optimization criteria such as maximizing observing time while simultaneously minimizing total flight time. Previous approaches to the problem fail to scale when accounting for all constraints. We describe how to customize SWO to solve this problem, and show that it finds better flight plans, often with less computation time, than previous approaches.
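
    The construct/analyze/prioritize loop at the heart of SWO can be sketched generically (a hedged toy: an invented packing problem stands in for flight planning): items left out of one greedy schedule receive extra priority and so "squeak" their way forward in the next pass.

    ```python
    import random

    random.seed(0)
    jobs = {f"job{i}": random.randint(1, 9) for i in range(12)}  # name -> size
    capacity = 25
    priority = {name: 0.0 for name in jobs}

    best = []
    for _ in range(50):
        order = sorted(jobs, key=lambda n: -priority[n])     # construct
        load, schedule = 0, []
        for name in order:
            if load + jobs[name] <= capacity:
                schedule.append(name); load += jobs[name]
        if len(schedule) > len(best):
            best = schedule
        for name in jobs:                                    # analyze + blame
            if name not in schedule:
                priority[name] += jobs[name]                 # squeaky wheel
    print(len(best), "jobs scheduled:", best)
    ```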

  10. Applying optimization software libraries to engineering problems

    NASA Technical Reports Server (NTRS)

    Healy, M. J.

    1984-01-01

    Nonlinear programming, preliminary design problems, performance simulation problems, trajectory optimization, flight computer optimization, and linear least squares problems are among the topics covered. The nonlinear programming applications encountered in a large aerospace company are a real challenge to those who provide mathematical software libraries and consultation services. Typical applications include preliminary design studies, data fitting and filtering, jet engine simulations, control system analysis, and trajectory optimization and optimal control. Problem sizes range from single-variable unconstrained minimization to constrained problems with highly nonlinear functions and hundreds of variables. Most of the applications can be posed as nonlinearly constrained minimization problems. Highly complex optimization problems with many variables were formulated in the early days of computing. At the time, many problems had to be reformulated or bypassed entirely, and solution methods often relied on problem-specific strategies. Problems with more than ten variables usually went unsolved.

  11. Cancer Behavior: An Optimal Control Approach

    PubMed Central

    Gutiérrez, Pedro J.; Russo, Irma H.; Russo, J.

    2009-01-01

    With special attention to cancer, this essay explains how Optimal Control Theory, mainly used in Economics, can be applied to the analysis of biological behaviors, and illustrates the ability of this mathematical branch to describe biological phenomena and biological interrelationships. Two examples are provided to show the capability and versatility of this powerful mathematical approach in the study of biological questions. The first describes a process of organogenesis, and the second the development of tumors. PMID:22247736

  12. Optimization methods applied to hybrid vehicle design

    NASA Technical Reports Server (NTRS)

    Donoghue, J. F.; Burghart, J. H.

    1983-01-01

    The use of optimization methods as an effective design tool in the design of hybrid vehicle propulsion systems is demonstrated. Optimization techniques were used to select values for three design parameters (battery weight, heat engine power rating, and power split between the two on-board energy sources) such that various measures of vehicle performance (acquisition cost, life cycle cost, and petroleum consumption) were optimized. The approach produced designs which were often significant improvements over hybrid designs already reported in the literature. The principal conclusions are as follows. First, it was found that the strategy used to split the required power between the two on-board energy sources can have a significant effect on life cycle cost and petroleum consumption. Second, the optimization program should be constructed so that performance measures and design variables can be easily changed. Third, the vehicle simulation program has a significant effect on the computer run time of the overall optimization program; run time can be significantly reduced by proper design of the types of trips the vehicle takes in a one year period. Fourth, care must be taken in designing the cost and constraint expressions which are used in the optimization so that they are relatively smooth functions of the design variables. Fifth, proper handling of constraints on battery weight and heat engine rating, variables which must be large enough to meet power demands, is particularly important for the success of an optimization study. Finally, the principal conclusion is that optimization methods provide a practical tool for carrying out the design of a hybrid vehicle propulsion system.

  13. A General Approach to Error Estimation and Optimized Experiment Design, Applied to Multislice Imaging of T1 in Human Brain at 4.1 T

    NASA Astrophysics Data System (ADS)

    Mason, Graeme F.; Chu, Wen-Jang; Hetherington, Hoby P.

    1997-05-01

    In this report, a procedure to optimize inversion-recovery times, in order to minimize the uncertainty in the measured T1 from 2-point multislice images of the human brain at 4.1 T, is discussed. The 2-point, 40-slice measurement employed inversion-recovery delays chosen based on the minimization of noise-based uncertainties. For comparison of the measured T1 values and uncertainties, 10-point, 3-slice measurements were also acquired. The measured T1 values using the 2-point method were 814, 1361, and 3386 ms for white matter, gray matter, and cerebrospinal fluid, respectively, in agreement with the respective T1 values of 817, 1329, and 3320 ms obtained using the 10-point measurement. The 2-point, 40-slice method was used to determine the T1 in the cortical gray matter, cerebellar gray matter, caudate nucleus, cerebral peduncle, globus pallidus, colliculus, lenticular nucleus, base of the pons, substantia nigra, thalamus, white matter, corpus callosum, and internal capsule.

  14. Multiobjective Optimization Using a Pareto Differential Evolution Approach

    NASA Technical Reports Server (NTRS)

    Madavan, Nateri K.; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    Differential Evolution is a simple, fast, and robust evolutionary algorithm that has proven effective in determining the global optimum for several difficult single-objective optimization problems. In this paper, the Differential Evolution algorithm is extended to multiobjective optimization problems by using a Pareto-based approach. The algorithm performs well when applied to several test optimization problems from the literature.
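
    A minimal sketch of the Pareto-based extension (assumed details: the classic bi-objective Schaffer test problem f1 = x^2, f2 = (x - 2)^2 and plain DE/rand/1/bin mutation; the paper's actual operators may differ): a trial vector replaces its parent only when it Pareto-dominates it.

    ```python
    import numpy as np

    def objectives(x):
        return np.array([x[0]**2, (x[0] - 2.0)**2])

    def dominates(a, b):
        # a Pareto-dominates b: no worse in all objectives, better in one
        return np.all(a <= b) and np.any(a < b)

    rng = np.random.default_rng(0)
    pop = rng.uniform(-5, 5, size=(30, 1))
    F, CR = 0.7, 0.9
    for _ in range(200):
        for i in range(len(pop)):
            a, b, c = pop[rng.choice(len(pop), 3, replace=False)]
            trial = np.where(rng.random(1) < CR, a + F * (b - c), pop[i])
            if dominates(objectives(trial), objectives(pop[i])):
                pop[i] = trial           # keep the non-dominated child
    front = sorted(float(x) for x in pop[:, 0])
    print(front[:5], "...")  # survivors cluster in [0, 2], near the front
    ```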

  15. HPC CLOUD APPLIED TO LATTICE OPTIMIZATION

    SciTech Connect

    Sun, Changchun; Nishimura, Hiroshi; James, Susan; Song, Kai; Muriki, Krishna; Qin, Yong

    2011-03-18

    As Cloud services gain in popularity for enterprise use, vendors are now turning their focus towards providing cloud services suitable for scientific computing. Recently, Amazon Elastic Compute Cloud (EC2) introduced Cluster Compute Instances (CCI), a new instance type specifically designed for High Performance Computing (HPC) applications. At Berkeley Lab, the physicists at the Advanced Light Source (ALS) have been running Lattice Optimization on a local cluster, but queue wait times and the limited flexibility to request compute resources on demand are not ideal for rapid development work. To explore alternatives, we investigate for the first time running the Lattice Optimization application on Amazon's new CCI to demonstrate the feasibility and trade-offs of using public cloud services for science.

  16. Optimal quantisation applied to digital holographic data

    NASA Astrophysics Data System (ADS)

    Shortt, Alison E.; Naughton, Thomas J.; Javidi, Bahram

    2005-06-01

    Digital holography is an inherently three-dimensional (3D) technique for the capture of real-world objects. Many existing 3D imaging and processing techniques are based on the explicit combination of several 2D perspectives (or light stripes, etc.) through digital image processing. The advantage of recording a hologram is that multiple 2D perspectives can be optically combined in parallel, and in a constant number of steps independent of the hologram size. Although holography and its capabilities have been known for many decades, it is only very recently that digital holography has been practically investigated due to the recent development of megapixel digital sensors with sufficient spatial resolution and dynamic range. The applications of digital holography could include 3D television, virtual reality, and medical imaging. If these applications are realised, compression standards will have to be defined. We outline the techniques that have been proposed to date for the compression of digital hologram data and show that they are comparable to the performance of what in communication theory is known as optimal signal quantisation. We adapt the optimal signal quantisation technique to complex-valued 2D signals. The technique relies on knowledge of the histograms of real and imaginary values in the digital holograms. Our digital holograms of 3D objects are captured using phase-shift interferometry.
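
    The quantisation step can be illustrated with a small Lloyd-style codebook design over (real, imaginary) pairs, so that codebook density follows the histogram of hologram values; the synthetic Gaussian "hologram" below is a placeholder for real captured data.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    field = rng.standard_normal(4096) + 1j * rng.standard_normal(4096)
    pts = np.column_stack([field.real, field.imag])

    k = 16
    codebook = pts[rng.choice(len(pts), k, replace=False)]   # initial levels
    for _ in range(30):                                      # Lloyd iterations
        d = np.linalg.norm(pts[:, None, :] - codebook[None, :, :], axis=2)
        nearest = d.argmin(axis=1)                           # assign samples
        for j in range(k):                                   # centroid update
            if np.any(nearest == j):
                codebook[j] = pts[nearest == j].mean(axis=0)

    quantised = codebook[nearest]                            # 4 bits/sample
    mse = np.mean(np.sum((pts - quantised) ** 2, axis=1))
    print(f"reconstruction MSE with {k} levels: {mse:.4f}")
    ```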

  17. Portfolio optimization using median-variance approach

    NASA Astrophysics Data System (ADS)

    Wan Mohd, Wan Rosanisah; Mohamad, Daud; Mohamed, Zulkifli

    2013-04-01

    Optimization models have been applied in many decision-making problems, particularly in portfolio selection. Since the introduction of Markowitz's theory of portfolio selection, various approaches based on mathematical programming have been introduced, such as mean-variance, mean-absolute deviation, mean-variance-skewness and conditional value-at-risk (CVaR), mainly to maximize return and minimize risk. However, most of these approaches assume that the data are normally distributed, which is not generally true. As an alternative, in this paper, we employ the median-variance approach to improve the portfolio optimization. This approach caters for both normal and non-normal data distributions. With this more faithful representation, we analyze and compare the rate of return and risk between mean-variance and median-variance based portfolios consisting of 30 stocks from Bursa Malaysia. The results in this study show that the median-variance approach produces a lower risk for each return earned as compared to the mean-variance approach.
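
    A hedged sketch of the median-variance idea, with synthetic heavy-tailed returns standing in for the 30 Bursa Malaysia stocks: the median return simply replaces the mean in an otherwise standard risk-return trade-off.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)
    returns = rng.standard_t(df=4, size=(250, 5)) * 0.01   # heavy-tailed returns
    cov = np.cov(returns, rowvar=False)
    med = np.median(returns, axis=0)                       # replaces the mean
    lam = 0.5                                              # risk aversion

    def neg_objective(w):
        # risk minus lam * (median-based) reward
        return w @ cov @ w - lam * (w @ med)

    cons = [{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}]
    res = minimize(neg_objective, np.ones(5) / 5, bounds=[(0, 1)] * 5,
                   constraints=cons, method="SLSQP")
    print(res.x.round(3))   # median-variance portfolio weights
    ```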

  18. Applying neural networks to optimize instrumentation performance

    SciTech Connect

    Start, S.E.; Peters, G.G.

    1995-06-01

    Well calibrated instrumentation is essential in providing meaningful information about the status of a plant. Signals from plant instrumentation frequently have inherent non-linearities, may be affected by environmental conditions, and can therefore cause calibration difficulties for the people who maintain them. Two neural network approaches are described in this paper for improving the accuracy of a non-linear, temperature-sensitive level probe used in Experimental Breeder Reactor II (EBR-II) that was difficult to calibrate.

  19. Perturbation approach applied to modal diffraction methods.

    PubMed

    Bischoff, Joerg; Hehl, Karl

    2011-05-01

    Eigenvalue computation is an important part of many modal diffraction methods, including the rigorous coupled wave approach (RCWA) and the Chandezon method. This procedure is known to be computationally intensive, accounting for a large proportion of the overall run time. However, in many cases, eigenvalue information is already available from previous calculations. Examples include adjacent slices in the RCWA, spectral- or angle-resolved scans in optical scatterometry, and parameter derivatives in optimization. In this paper, we present a new technique that provides accurate and highly reliable solutions with significant improvements in computational time. The proposed method takes advantage of known eigensolution information and is based on a perturbation method. PMID:21532698
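
    The reuse idea can be demonstrated with textbook first-order eigenvalue perturbation theory (a generic sketch, not the authors' RCWA implementation): eigenpairs of A give eigenvalues of A + dA, accurate to O(||dA||^2), almost for free.

    ```python
    import numpy as np
    from scipy.linalg import eig

    rng = np.random.default_rng(2)
    A = rng.standard_normal((6, 6))
    dA = 1e-3 * rng.standard_normal((6, 6))

    w, vl, vr = eig(A, left=True)          # eigenvalues, left/right vectors
    # lambda_i(A + dA) ~ lambda_i + (y_i^H dA x_i) / (y_i^H x_i)
    correction = np.array([
        (vl[:, i].conj() @ dA @ vr[:, i]) / (vl[:, i].conj() @ vr[:, i])
        for i in range(6)
    ])
    exact = np.sort_complex(eig(A + dA, right=False))
    approx = np.sort_complex(w + correction)
    print(np.max(np.abs(exact - approx)))  # error is O(|dA|^2)
    ```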

  20. Variable-complexity optimization applied to airfoil design

    NASA Astrophysics Data System (ADS)

    Thokala, Praveen; Martins, Joaquim R. R. A.

    2007-04-01

    Variable-complexity methods are applied to aerodynamic shape design problems with the objective of reducing the total computational cost of the optimization process. Two main strategies are employed: the use of different levels of fidelity in the analysis models (variable fidelity) and the use of different sets of design variables (variable parameterization). Variable-fidelity methods with three different types of corrections are implemented and applied to a set of two-dimensional airfoil optimization problems that use computational fluid dynamics for the analysis. Variable parameterization is also used to solve the same problems. Both strategies are shown to reduce the computational cost of the optimization.

  1. HEURISTIC OPTIMIZATION AND ALGORITHM TUNING APPLIED TO SORPTIVE BARRIER DESIGN

    EPA Science Inventory

    While heuristic optimization is applied in environmental applications, ad-hoc algorithm configuration is typical. We use a multi-layer sorptive barrier design problem as a benchmark for an algorithm-tuning procedure, as applied to three heuristics (genetic algorithms, simulated ...

  2. Multiobjective optimization approach: thermal food processing.

    PubMed

    Abakarov, A; Sushkov, Y; Almonacid, S; Simpson, R

    2009-01-01

    The objective of this study was to utilize a multiobjective optimization technique for the thermal sterilization of packaged foods. The multiobjective optimization approach used in this study is based on the optimization of well-known aggregating functions by an adaptive random search algorithm. The applicability of the proposed approach was illustrated by solving widely used multiobjective test problems taken from the literature. The numerical results obtained for the multiobjective test problems and for the thermal processing problem show that the proposed approach can be effectively used for solving multiobjective optimization problems arising in the food engineering field. PMID:20492109
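
    The aggregating-function idea can be sketched with a weighted sum of two invented objectives minimized by a simple adaptive random search (step size grows on success, shrinks on failure); the paper's actual objective functions and algorithm details differ.

    ```python
    import numpy as np

    def f1(x):  # e.g., under-processing risk (illustrative stand-in)
        return (x[0] - 1.0) ** 2 + x[1] ** 2

    def f2(x):  # e.g., quality loss from over-processing (illustrative)
        return x[0] ** 2 + (x[1] - 1.0) ** 2

    def aggregate(x, w=0.5):
        return w * f1(x) + (1 - w) * f2(x)   # weighted-sum scalarization

    rng = np.random.default_rng(4)
    x, step = np.zeros(2), 1.0
    for _ in range(500):
        cand = x + step * rng.standard_normal(2)
        if aggregate(cand) < aggregate(x):
            x, step = cand, step * 1.1       # expand on success
        else:
            step *= 0.98                     # shrink on failure
    print(x.round(3))                        # -> near (0.5, 0.5) for w = 0.5
    ```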

  3. Genetic Algorithms Applied to Multi-Objective Aerodynamic Shape Optimization

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.

    2004-01-01

    A genetic algorithm approach suitable for solving multi-objective optimization problems is described and evaluated using a series of aerodynamic shape optimization problems. Several new features including two variations of a binning selection algorithm and a gene-space transformation procedure are included. The genetic algorithm is suitable for finding pareto optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. A new masking array capability is included allowing any gene or gene subset to be eliminated as decision variables from the design space. This allows determination of the effect of a single gene or gene subset on the pareto optimal solution. Results indicate that the genetic algorithm optimization approach is flexible in application and reliable. The binning selection algorithms generally provide pareto front quality enhancements and moderate convergence efficiency improvements for most of the problems solved.

  4. Optimal Statistical Approach to Optoacoustic Image Reconstruction

    NASA Astrophysics Data System (ADS)

    Zhulina, Yulia V.

    2000-11-01

    An optimal statistical approach is applied to the task of image reconstruction in photoacoustics. The physical essence of the task is as follows: pulse laser irradiation induces an ultrasound wave on the inhomogeneities inside the investigated volume. This acoustic wave is received by a set of receivers outside this volume. It is necessary to reconstruct a spatial image of these inhomogeneities. Mathematical techniques developed in radio location theory are used for solving the task. An algorithm of maximum likelihood is synthesized for the image reconstruction. The obtained algorithm is investigated by digital modeling. The number of receivers and their disposition in space are arbitrary. Results of the synthesis are applied to noninvasive medical diagnostics (breast cancer). The capability of the algorithm is tested on real signals. The image is built using signals obtained in vitro. The essence of the algorithm includes (i) summing of all signals in the image plane with the transform from the time coordinates of signals to the spatial coordinates of the image and (ii) optimal spatial filtration of this sum. The results are shown in the figures.

  5. Genetic Algorithms Applied to Multi-Objective Aerodynamic Shape Optimization

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.

    2005-01-01

    A genetic algorithm approach suitable for solving multi-objective problems is described and evaluated using a series of aerodynamic shape optimization problems. Several new features including two variations of a binning selection algorithm and a gene-space transformation procedure are included. The genetic algorithm is suitable for finding Pareto optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. A new masking array capability is included allowing any gene or gene subset to be eliminated as decision variables from the design space. This allows determination of the effect of a single gene or gene subset on the Pareto optimal solution. Results indicate that the genetic algorithm optimization approach is flexible in application and reliable. The binning selection algorithms generally provide Pareto front quality enhancements and moderate convergence efficiency improvements for most of the problems solved.

  6. Applying Mathematical Optimization Methods to an ACT-R Instance-Based Learning Model

    PubMed Central

    Said, Nadia; Engelhart, Michael; Kirches, Christian; Körkel, Stefan; Holt, Daniel V.

    2016-01-01

    Computational models of cognition provide an interface to connect advanced mathematical tools and methods to empirically supported theories of behavior in psychology, cognitive science, and neuroscience. In this article, we consider a computational model of instance-based learning, implemented in the ACT-R cognitive architecture. We propose an approach for obtaining mathematical reformulations of such cognitive models that improve their computational tractability. For the well-established Sugar Factory dynamic decision making task, we conduct a simulation study to analyze central model parameters. We show how mathematical optimization techniques can be applied to efficiently identify optimal parameter values with respect to different optimization goals. Beyond these methodological contributions, our analysis reveals the sensitivity of this particular task with respect to initial settings and yields new insights into how average human performance deviates from potential optimal performance. We conclude by discussing possible extensions of our approach as well as future steps towards applying more powerful derivative-based optimization methods. PMID:27387139

  7. A Multiple Approach to Evaluating Applied Academics.

    ERIC Educational Resources Information Center

    Wang, Changhua; Owens, Thomas

    The Boeing Company is involved in partnerships with Washington state schools in the area of applied academics. Over the past 3 years, Boeing offered grants to 57 high schools to implement applied mathematics, applied communication, and principles of technology courses. Part 1 of this paper gives an overview of applied academics by examining what…

  8. Optimization of coupled systems: A critical overview of approaches

    NASA Technical Reports Server (NTRS)

    Balling, R. J.; Sobieszczanski-Sobieski, J.

    1994-01-01

    A unified overview is given of problem formulation approaches for the optimization of multidisciplinary coupled systems. The overview includes six fundamental approaches upon which a large number of variations may be made. Consistent approach names and a compact approach notation are given. The approaches are formulated to apply to general nonhierarchic systems. The approaches are compared both from a computational viewpoint and a managerial viewpoint. Opportunities for parallelism of both computation and manpower resources are discussed. Recommendations regarding the need for future research are advanced.

  9. Bayesian approach to global discrete optimization

    SciTech Connect

    Mockus, J.; Mockus, A.; Mockus, L.

    1994-12-31

    We discuss advantages and disadvantages of the Bayesian approach (average case analysis). We present the portable interactive version of software for continuous global optimization. We consider practical multidimensional problems of continuous global optimization, such as optimization of VLSI yield, optimization of composite laminates, and estimation of unknown parameters of bilinear time series. We extend the Bayesian approach to discrete optimization. We regard discrete optimization as a multi-stage decision problem. We assume that there exists some simple heuristic function which roughly predicts the consequences of the decisions. We suppose randomized decisions. We define the probability of the decision by the randomized decision function depending on heuristics. We fix this function with the exception of some parameters. We repeat the randomized decision several times at fixed values of those parameters and accept the best decision as the result. We optimize the parameters of the randomized decision function to make the search more efficient. Thus we reduce the discrete optimization problem to the continuous problem of global stochastic optimization. We solve this problem by the Bayesian methods of continuous global optimization. We describe applications to some well-known problems of discrete programming, such as knapsack, traveling salesman, and scheduling.
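
    A toy rendering of the randomized-decision construction (the knapsack data and the softmax parameterization are invented for illustration): a greedy heuristic is randomized, the best of several repetitions is kept, and the continuous parameter controlling the randomization is what gets optimized.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    values = rng.integers(1, 100, 30)
    weights = rng.integers(1, 50, 30)
    cap = 300

    def randomized_greedy(temp):
        remaining, load, total = list(range(30)), 0, 0
        while remaining:
            feas = [i for i in remaining if load + weights[i] <= cap]
            if not feas:
                break
            score = np.array([values[i] / weights[i] for i in feas])
            p = np.exp((score - score.max()) / temp)   # randomized decision
            p /= p.sum()
            pick = rng.choice(feas, p=p)
            load += weights[pick]; total += values[pick]
            remaining.remove(pick)
        return total

    def heuristic_quality(temp, repeats=20):
        return max(randomized_greedy(temp) for _ in range(repeats))

    for temp in [0.05, 0.2, 1.0, 5.0]:   # crude 1-D search over the parameter
        print(temp, heuristic_quality(temp))
    ```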

  10. Noise tolerant illumination optimization applied to display devices

    NASA Astrophysics Data System (ADS)

    Cassarly, William J.; Irving, Bruce

    2005-02-01

    Display devices have historically been designed through an iterative process using numerous hardware prototypes. This process is effective but the number of iterations is limited by the time and cost to make the prototypes. In recent years, virtual prototyping using illumination software modeling tools has replaced many of the hardware prototypes. Typically, the designer specifies the design parameters, builds the software model, predicts the performance using a Monte Carlo simulation, and uses the performance results to repeat this process until an acceptable design is obtained. What is highly desired, and now possible, is to use illumination optimization to automate the design process. Illumination optimization provides the ability to explore a wider range of design options while also providing improved performance. Since Monte Carlo simulations are often used to calculate the system performance but those predictions have statistical uncertainty, the use of noise tolerant optimization algorithms is important. The use of noise tolerant illumination optimization is demonstrated by considering display device designs that extract light using 2D paint patterns as well as 3D textured surfaces. A hybrid optimization approach that combines a mesh feedback optimization with a classical optimizer is demonstrated. Displays with LED sources and cold cathode fluorescent lamps are considered.

  11. Applying SF-Based Genre Approaches to English Writing Class

    ERIC Educational Resources Information Center

    Wu, Yan; Dong, Hailin

    2009-01-01

    By exploring genre approaches in systemic functional linguistics and examining the analytic tools that can be applied to the process of English learning and teaching, this paper seeks to find a way of applying genre approaches to English writing class.

  12. Approaches for Informing Optimal Dose of Behavioral Interventions

    PubMed Central

    King, Heather A.; Maciejewski, Matthew L.; Allen, Kelli D.; Yancy, William S.; Shaffer, Jonathan A.

    2015-01-01

    Background There is little guidance about how to select dose parameter values when designing behavioral interventions. Purpose The purpose of this study is to present approaches to inform intervention duration, frequency, and amount when (1) the investigator has no a priori expectation and is seeking a descriptive approach for identifying and narrowing the universe of dose values or (2) the investigator has an a priori expectation and is seeking validation of this expectation using an inferential approach. Methods Strengths and weaknesses of various approaches are described and illustrated with examples. Results Descriptive approaches include retrospective analysis of data from randomized trials, assessment of perceived optimal dose via prospective surveys or interviews of key stakeholders, and assessment of target patient behavior via prospective, longitudinal, observational studies. Inferential approaches include nonrandomized, early-phase trials and randomized designs. Conclusions By utilizing these approaches, researchers may more efficiently apply resources to identify the optimal values of dose parameters for behavioral interventions. PMID:24722964

  13. BEM4I applied to shape optimization problems

    NASA Astrophysics Data System (ADS)

    Zapletal, Jan; Merta, Michal; Čermák, Martin

    2016-06-01

    Shape optimization problems are one of the areas where the boundary element method can be applied efficiently. We present the application of the BEM4I library developed at IT4Innovations to a class of free surface Bernoulli problems in 3D. Apart from the boundary integral formulation of the related state and adjoint boundary value problems we present an implementation of a general scheme for the treatment of similar problems.

  14. An effective model for ergonomic optimization applied to a new automotive assembly line

    NASA Astrophysics Data System (ADS)

    Duraccio, Vincenzo; Elia, Valerio; Forcina, Antonio

    2016-06-01

    An efficient ergonomic optimization can lead to a significant improvement in production performance and a considerable reduction of costs. In the present paper, a new model for ergonomic optimization is proposed. The new approach is based on the criteria defined by the National Institute for Occupational Safety and Health, adapted to Italian legislation. The proposed model provides an ergonomic optimization by analyzing the ergonomic conditions of manual work. The model includes a schematic and systematic method for analyzing the operations, and identifies all possible ergonomic aspects to be evaluated. The proposed approach has been applied to an automotive assembly line, where the repeatability of operations makes optimization fundamental. The application clearly demonstrates the effectiveness of the new approach.

  15. Nonuniform covering method as applied to multicriteria optimization problems with guaranteed accuracy

    NASA Astrophysics Data System (ADS)

    Evtushenko, Yu. G.; Posypkin, M. A.

    2013-02-01

    The nonuniform covering method is applied to multicriteria optimization problems. The ɛ-Pareto set is defined, and its properties are examined. An algorithm for constructing an ɛ-Pareto set with guaranteed accuracy ɛ is described. The efficiency of implementing this approach is discussed, and numerical results are presented.

  16. Applying a gaming approach to IP strategy.

    PubMed

    Gasnier, Arnaud; Vandamme, Luc

    2010-02-01

    Adopting an appropriate IP strategy is an important but complex area, particularly in the pharmaceutical and biotechnology sectors, in which aspects such as regulatory submissions, high competitive activity, and public health and safety information requirements limit the amount of information that can be protected effectively through secrecy. As a result, and considering the existing time limits for patent protection, decisions on how to approach IP in these sectors must be made with knowledge of the options and consequences of IP positioning. Because of the specialized nature of IP, it is necessary to impart knowledge regarding the options and impact of IP to decision-makers, whether at the level of inventors, marketers or strategic business managers. This feature review provides some insight on IP strategy, with a focus on the use of a new 'gaming' approach for transferring the skills and understanding needed to make informed IP-related decisions; the game Patentopolis is discussed as an example of such an approach. Patentopolis involves interactive activities with IP-related business decisions, including the exploitation and enforcement of IP rights, and can be used to gain knowledge on the impact of adopting different IP strategies. PMID:20127561

  17. Applying simulation to optimize plastic molded optical parts

    NASA Astrophysics Data System (ADS)

    Jaworski, Matthew; Bakharev, Alexander; Costa, Franco; Friedl, Chris

    2012-10-01

    Optical injection molded parts are used in many different industries including electronics, consumer, medical and automotive due to their cost and performance advantages compared to alternative materials such as glass. The injection molding process, however, induces elastic (residual stress) and viscoelastic (flow orientation stress) deformation into the molded article which alters the material's refractive index to be anisotropic in different directions. Being able to predict and correct optical performance issues associated with birefringence early in the design phase is a huge competitive advantage. This paper reviews how to apply simulation analysis of the entire molding process to optimize manufacturability and part performance.

  18. Quantum Resonance Approach to Combinatorial Optimization

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    1997-01-01

    It is shown that quantum resonance can be used for combinatorial optimization. The advantage of the approach is in independence of the computing time upon the dimensionality of the problem. As an example, the solution to a constraint satisfaction problem of exponential complexity is demonstrated.

  19. Applied topology optimization of vibro-acoustic hearing instrument models

    NASA Astrophysics Data System (ADS)

    Søndergaard, Morten Birkmose; Pedersen, Claus B. W.

    2014-02-01

    Designing hearing instruments remains an acoustic challenge as users request small designs for comfortable wear and cosmetic appeal while at the same time requiring sufficient amplification from the device. First, to ensure proper amplification, a critical design challenge in the hearing instrument is to minimize the feedback between the outputs (generated sound and vibrations) from the receiver looping back into the microphones. Second, the feedback signal is traditionally minimized using time-consuming trial-and-error design procedures on physical prototypes and virtual models using finite element analysis. In the present work it is demonstrated that structural topology optimization of vibro-acoustic finite element models can be used both to sufficiently minimize the feedback signal and to reduce the time-consuming trial-and-error design approach. The structural topology optimization of a vibro-acoustic finite element model is shown for an industrial full-scale model hearing instrument.

  20. Mathematical Modelling: A New Approach to Teaching Applied Mathematics.

    ERIC Educational Resources Information Center

    Burghes, D. N.; Borrie, M. S.

    1979-01-01

    Describes the advantages of mathematical modeling approach in teaching applied mathematics and gives many suggestions for suitable material which illustrates the links between real problems and mathematics. (GA)

  1. Pitfalls and optimal approaches to diagnose melioidosis.

    PubMed

    Kingsley, Paul Vijay; Arunkumar, Govindakarnavar; Tipre, Meghan; Leader, Mark; Sathiakumar, Nalini

    2016-06-01

    Melioidosis is a severe and often fatal infectious disease in the tropics and subtropics. It presents as a febrile illness with protean manifestations, ranging from chronic localized infection to acute fulminant septicemia with dissemination of infection to multiple organs, characterized by abscesses. Pneumonia is the most common clinical presentation. Because of the wide range of clinical presentations, physicians may misdiagnose and mistreat the disease as tuberculosis, pneumonia or other pyogenic infections. The purpose of this paper is to present common pitfalls in diagnosis and provide optimal approaches to enable early diagnosis and prompt treatment of melioidosis. Melioidosis may occur beyond the boundaries of endemic areas. There is no pathognomonic feature specific to a diagnosis of melioidosis. In endemic areas, physicians need to expand the diagnostic work-up to include melioidosis when confronted with clinical scenarios of pyrexia of unknown origin, progressive pneumonia or sepsis. Radiological imaging is an integral part of the diagnostic workup. Knowledge of the modes of transmission and risk factors will add support in clinically suspected cases to initiate therapy. In situations of clinically highly probable or possible cases where laboratory bacteriological confirmation is not possible, applying evidence-based criteria and empirical treatment with antimicrobials is recommended. It is of prime importance that patients undergo the full course of antimicrobial therapy to avoid relapse and recurrence. Early diagnosis and appropriate management are crucial in reducing serious complications leading to high mortality, and in preventing recurrences of the disease. Thus, there is a crucial need for promoting awareness among physicians at all levels and for improved diagnostic microbiology services. Further, the need for making the disease notifiable and/or initiating melioidosis registries in endemic countries appears to be compelling. PMID:27262061

  2. A Multidisciplinary Optimization Platform Applied to Steel Constructions

    NASA Astrophysics Data System (ADS)

    Benanane, Abdelkader; Caperaa, Serge; Said Bekkouche, M.; Kerdal, Djamel

    The design of complex objects such as buildings has always been organized in levels, from the preliminary phases to the final phases. While this level-by-level approach allows designers to define objects precisely and progressively, it nevertheless does not lead to an optimal design of the projects. Consequently, we propose in this study an original modelling approach that effectively represents the entire multidisciplinary design process through the exchange of textual files (technical data and knowledge) between the different disciplines of civil engineering (geotechnical studies, reinforced concrete and structural studies). For the optimization loops we use the Monte-Carlo method because of its great robustness: it is based on random numbers and statistical tools, accommodates any form of objective function, and easily handles optimization constraints. Test cases carried out on simple structures show significant and promising variations in both the dimensioning and the global cost.

  3. Multidisciplinary Approach to Linear Aerospike Nozzle Optimization

    NASA Technical Reports Server (NTRS)

    Korte, J. J.; Salas, A. O.; Dunn, H. J.; Alexandrov, N. M.; Follett, W. W.; Orient, G. E.; Hadid, A. H.

    1997-01-01

    A model of a linear aerospike rocket nozzle that consists of coupled aerodynamic and structural analyses has been developed. A nonlinear computational fluid dynamics code is used to calculate the aerodynamic thrust, and a three-dimensional finite-element model is used to determine the structural response and weight. The model will be used to demonstrate multidisciplinary design optimization (MDO) capabilities for relevant engine concepts, assess performance of various MDO approaches, and provide a guide for future application development. In this study, the MDO problem is formulated using the multidisciplinary feasible (MDF) strategy. The results for the MDF formulation are presented with comparisons against sequentially optimized aerodynamic and structural designs. Significant improvements are demonstrated by using a multidisciplinary approach in comparison with the single-discipline design strategy.

  4. Multiobjective genetic approach for optimal control of photoinduced processes

    SciTech Connect

    Bonacina, Luigi; Extermann, Jerome; Rondi, Ariana; Wolf, Jean-Pierre; Boutou, Veronique

    2007-08-15

    We have applied a multiobjective genetic algorithm to the optimization of multiphoton-excited fluorescence. Our study shows the advantages that this approach can offer to experiments based on adaptive shaping of femtosecond pulses. The algorithm outperforms single-objective optimizations, being totally independent from the bias of user defined parameters and giving simultaneous access to a large set of feasible solutions. The global inspection of their ensemble represents a powerful support to unravel the connections between pulse spectral field features and excitation dynamics of the sample.

  5. Preconcentration modeling for the optimization of a micro gas preconcentrator applied to environmental monitoring.

    PubMed

    Camara, Malick; Breuil, Philippe; Briand, Danick; Viricelle, Jean-Paul; Pijolat, Christophe; de Rooij, Nico F

    2015-04-21

    This paper presents the optimization of a micro gas preconcentrator (μ-GP) system applied to atmospheric pollution monitoring, with the help of a complete modeling of the preconcentration cycle. Two different approaches based on kinetic equations are used to illustrate the behavior of the micro gas preconcentrator for given experimental conditions. The need for high adsorption flow and heating rate and for low desorption flow and detection volume is demonstrated in this paper. Preliminary to this optimization, the preconcentration factor is discussed and a definition is proposed. PMID:25810264

  6. Applying Loop Optimizations to Object-oriented Abstractions Through General Classification of Array Semantics

    SciTech Connect

    Yi, Q; Quinlan, D

    2004-03-05

    Optimizing compilers have a long history of applying loop transformations to C and Fortran scientific applications. However, such optimizations are rare in compilers for object-oriented languages such as C++ or Java, where loops operating on user-defined types are left unoptimized due to their unknown semantics. Our goal is to reduce the performance penalty of using high-level object-oriented abstractions. We propose an approach that allows explicit communication between programmers and compilers. We have extended the traditional Fortran loop optimizations with an open interface. Through this interface, we have developed techniques to automatically recognize and optimize user-defined array abstractions. In addition, we have developed an adapted constant-propagation algorithm to automatically propagate properties of abstractions. We have implemented these techniques in a C++ source-to-source translator and have applied them to optimize several kernels written using an array-class library. Our experimental results show that using our approach, applications using high-level abstractions can achieve comparable, and in some cases superior, performance to that achieved by efficient low-level hand-written codes.

  7. A Bayesian approach to optimizing cryopreservation protocols

    PubMed Central

    2015-01-01

    Cryopreservation is beset with the challenge of protocol alignment across a wide range of cell types and process variables. By taking a cross-sectional assessment of previously published cryopreservation data (sample means and standard errors) as preliminary meta-data, a decision tree learning analysis (DTLA) was performed to develop an understanding of target survival using optimized pruning methods based on different approaches. Briefly, a clear direction on the decision process for selection of methods was developed, with the key choices being cooling rate and plunge temperature on the one hand, and biomaterial choice, use of composites (sugars and proteins as additional constituents), loading procedure and cell location in 3D scaffolding on the other. Secondly, using machine learning and generalized approaches via the Naïve Bayes Classification (NBC) method, these metadata were used to develop posterior probabilities for combinatorial approaches that were implicitly recorded in the metadata. These latter results showed that newer protocol choices developed using probability elicitation techniques can unearth improved protocols consistent with multiple unidimensionally-optimized physical protocols. In conclusion, this article proposes the use of DTLA models and subsequently NBC for the improvement of modern cryopreservation techniques through an integrative approach. PMID:26131379
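
    How an NBC posterior over protocol outcomes might look in code (the features, data, and survival threshold below are invented placeholders, not the paper's metadata):

    ```python
    import numpy as np
    from sklearn.naive_bayes import GaussianNB

    rng = np.random.default_rng(6)
    # columns: cooling rate (C/min), plunge temperature (C), sugar conc. (M)
    X = np.column_stack([rng.uniform(0.1, 10, 200),
                         rng.uniform(-80, -20, 200),
                         rng.uniform(0, 1, 200)])
    # toy label: "high survival" when cooling is slow and sugar is present
    y = ((X[:, 0] < 2.0) & (X[:, 2] > 0.3)).astype(int)

    model = GaussianNB().fit(X, y)
    candidate = np.array([[1.0, -40.0, 0.5]])   # a proposed new protocol
    print(model.predict_proba(candidate))       # posterior for each class
    ```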

  8. Optimization approaches for planning external beam radiotherapy

    NASA Astrophysics Data System (ADS)

    Gozbasi, Halil Ozan

    Cancer begins when cells grow out of control as a result of damage to their DNA. These abnormal cells can invade healthy tissue and form tumors in various parts of the body. Chemotherapy, immunotherapy, surgery and radiotherapy are the most common treatment methods for cancer. According to American Cancer Society about half of the cancer patients receive a form of radiation therapy at some stage. External beam radiotherapy is delivered from outside the body and aimed at cancer cells to damage their DNA making them unable to divide and reproduce. The beams travel through the body and may damage nearby healthy tissue unless carefully planned. Therefore, the goal of treatment plan optimization is to find the best system parameters to deliver sufficient dose to target structures while avoiding damage to healthy tissue. This thesis investigates optimization approaches for two external beam radiation therapy techniques: Intensity-Modulated Radiation Therapy (IMRT) and Volumetric-Modulated Arc Therapy (VMAT). We develop automated treatment planning technology for IMRT that produces several high-quality treatment plans satisfying provided clinical requirements in a single invocation and without human guidance. A novel bi-criteria scoring based beam selection algorithm is part of the planning system and produces better plans compared to those produced using a well-known scoring-based algorithm. Our algorithm is very efficient and finds the beam configuration at least ten times faster than an exact integer programming approach. Solution times range from 2 minutes to 15 minutes which is clinically acceptable. With certain cancers, especially lung cancer, a patient's anatomy changes during treatment. These anatomical changes need to be considered in treatment planning. Fortunately, recent advances in imaging technology can provide multiple images of the treatment region taken at different points of the breathing cycle, and deformable image registration algorithms can

  9. LP based approach to optimal stable matchings

    SciTech Connect

    Teo, Chung-Piaw; Sethuraman, J.

    1997-06-01

    We study the classical stable marriage and stable roommates problems using a polyhedral approach. We propose a new LP formulation for the stable roommates problem. This formulation is non-empty if and only if the underlying roommates problem has a stable matching. Furthermore, for certain special weight functions on the edges, we construct a 2-approximation algorithm for the optimal stable roommates problem. Our technique uses a crucial geometry of the fractional solutions in this formulation. For the stable marriage problem, we show that a related geometry allows us to express any fractional solution in the stable marriage polytope as convex combination of stable marriage solutions. This leads to a genuinely simple proof of the integrality of the stable marriage polytope. Based on these ideas, we devise a heuristic to solve the optimal stable roommates problem. The heuristic combines the power of rounding and cutting-plane methods. We present some computational results based on preliminary implementations of this heuristic.

  10. Optimization of a Solar Photovoltaic Applied to Greenhouses

    NASA Astrophysics Data System (ADS)

    Nakoul, Z.; Bibi-Triki, N.; Kherrous, A.; Bessenouci, M. Z.; Khelladi, S.

    Global energy consumption, both worldwide and in our country, is increasing. The bulk of world energy comes from fossil fuels, whose reserves are doomed to exhaustion and which are the leading cause of pollution and global warming through the greenhouse effect. This is not the case for renewable energies, which are inexhaustible and derive from natural phenomena. For years, unanimously, solar energy has ranked first among renewable energies. Studying the energetic aspects of a solar power plant is the best way to optimize its performance. A full-scale field study requires a long time and is therefore very costly, and moreover its results are not always generalizable. To avoid these drawbacks we opted for a computer-based study using the software 'Matlab', modeling the different components for better sizing and simulating all energy flows to optimize profitability while taking cost into account. The results of our work, applied to the sites of Tlemcen and Bouzareah, led us to conclude that the energy required is a determining factor in the choice of components of a PV solar power plant.

  11. Optimal trading strategies—a time series approach

    NASA Astrophysics Data System (ADS)

    Bebbington, Peter A.; Kühn, Reimer

    2016-05-01

    Motivated by recent advances in the spectral theory of auto-covariance matrices, we are led to revisit a reformulation of Markowitz’ mean-variance portfolio optimization approach in the time domain. In its simplest incarnation it applies to a single traded asset and allows an optimal trading strategy to be found which—for a given return—is minimally exposed to market price fluctuations. The model is initially investigated for a range of synthetic price processes, taken to be either second order stationary, or to exhibit second order stationary increments. Attention is paid to consequences of estimating auto-covariance matrices from small finite samples, and auto-covariance matrix cleaning strategies to mitigate against these are investigated. Finally we apply our framework to real world data.
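
    A toy, closed-form version of the time-domain formulation (synthetic drifted returns; the paper's small-sample caveats apply to the covariance estimate): minimize the strategy variance w^T C w over holdings w subject to a required return, where C is the estimated auto-covariance matrix.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    returns = 0.1 + rng.standard_normal(2000)       # drifted synthetic returns
    T = 20                                          # trading horizon
    windows = np.lib.stride_tricks.sliding_window_view(returns, T)
    C = np.cov(windows, rowvar=False)               # auto-covariance estimate
    mu = windows.mean(axis=0)

    target = 1.0                                    # required total return
    Cinv_mu = np.linalg.solve(C, mu)
    w = target * Cinv_mu / (mu @ Cinv_mu)           # Lagrange-multiplier solution
    print("minimal variance for the target return:", w @ C @ w)
    ```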

  12. A Simulation Optimization Approach to Epidemic Forecasting.

    PubMed

    Nsoesie, Elaine O; Beckman, Richard J; Shashaani, Sara; Nagaraj, Kalyani S; Marathe, Madhav V

    2013-01-01

    Reliable forecasts of influenza can aid in the control of both seasonal and pandemic outbreaks. We introduce a simulation optimization (SIMOP) approach for forecasting the influenza epidemic curve. This study represents the final step of a project aimed at using a combination of simulation, classification, statistical and optimization techniques to forecast the epidemic curve and infer underlying model parameters during an influenza outbreak. The SIMOP procedure combines an individual-based model and the Nelder-Mead simplex optimization method. The method is used to forecast epidemics simulated over synthetic social networks representing Montgomery County in Virginia, Miami, Seattle and surrounding metropolitan regions. The results are presented for the first four weeks. Depending on the synthetic network, the peak time could be predicted within a 95% CI as early as seven weeks before the actual peak. The peak infected and total infected were also accurately forecasted for Montgomery County in Virginia within the forecasting period. Forecasting of the epidemic curve for both seasonal and pandemic influenza outbreaks is a complex problem, however this is a preliminary step and the results suggest that more can be achieved in this area. PMID:23826222
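
    One ingredient of SIMOP is easy to reproduce in a sketch: Nelder-Mead calibration of an epidemic curve to early observations. A logistic curve and synthetic data stand in for the individual-based simulation used in the paper.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def epi_curve(t, K, r, t0):
        # cumulative infections of a logistic outbreak (surrogate model)
        return K / (1.0 + np.exp(-r * (t - t0)))

    t = np.arange(10)                       # weeks observed so far
    rng = np.random.default_rng(8)
    observed = epi_curve(t, 5000, 0.8, 8) * rng.normal(1, 0.05, t.size)

    def loss(p):
        return np.sum((epi_curve(t, *p) - observed) ** 2)

    fit = minimize(loss, x0=[1000.0, 0.5, 5.0], method="Nelder-Mead")
    K, r, t0 = fit.x
    print(f"forecast: final size ~ {K:.0f}, peak growth near week {t0:.1f}")
    ```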

  13. Optimization approaches to nonlinear model predictive control

    SciTech Connect

    Biegler, L.T. (Dept. of Chemical Engineering); Rawlings, J.B. (Dept. of Chemical Engineering)

    1991-01-01

    With the development of sophisticated methods for nonlinear programming and powerful computer hardware, it has become useful and efficient to formulate and solve nonlinear process control problems through on-line optimization methods. This paper explores and reviews control techniques based on repeated solution of nonlinear programming (NLP) problems. Here several advantages present themselves. These include minimization of readily quantifiable objectives, coordinated and accurate handling of process nonlinearities and interactions, and systematic ways of dealing with process constraints. We motivate this NLP-based approach with small nonlinear examples and present a basic algorithm for optimization-based process control. As can be seen, this approach is a straightforward extension of popular model-predictive controllers (MPCs) that are used for linear systems. The statement of the basic algorithm raises a number of questions regarding stability and robustness of the method, efficiency of the control calculations, incorporation of feedback into the controller, and reliable ways of handling process constraints. Each of these is treated through analysis and/or modification of the basic algorithm. To highlight and support this discussion, several examples are presented and key results are examined and further developed. 74 refs., 11 figs.
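
    A minimal receding-horizon sketch of the NLP-based control loop described above, with a toy scalar plant and an off-the-shelf NLP solver; the plant, horizon, weights, and bounds are illustrative assumptions rather than the authors' formulation.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    dt, N = 0.1, 10                  # step size and prediction horizon

    def step(x, u):
        """Toy nonlinear plant to be regulated to the origin."""
        return x + dt * (0.5 * np.sin(x) + u)

    def horizon_cost(u_seq, x0):
        """Quadratic cost accumulated along the predicted trajectory."""
        x, cost = x0, 0.0
        for u in u_seq:
            x = step(x, u)
            cost += x**2 + 0.1 * u**2
        return cost

    x, u_warm = 2.0, np.zeros(N)
    for k in range(30):              # receding-horizon loop
        res = minimize(horizon_cost, u_warm, args=(x,),
                       method="SLSQP", bounds=[(-1.0, 1.0)] * N)
        x = step(x, res.x[0])        # apply only the first control move
        u_warm = np.roll(res.x, -1)  # warm-start the next NLP solve
    print("final state:", x)
    ```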

  14. Essays on Applied Resource Economics Using Bioeconomic Optimization Models

    NASA Astrophysics Data System (ADS)

    Affuso, Ermanno

    With rising demographic growth, there is increasing interest in analytical studies that assess alternative policies to provide an optimal allocation of scarce natural resources while ensuring environmental sustainability. This dissertation consists of three essays in applied resource economics that are methodologically interconnected within the agricultural production sector of economics. The first chapter examines the sustainability of biofuels by simulating and evaluating an agricultural voluntary program that aims to increase land use efficiency in the production of first-generation biofuels in the state of Alabama. The results show that participatory decisions may increase the net energy value of biofuels by 208% and reduce emissions by 26%, significantly contributing to the state's energy goals. The second chapter tests the hypothesis of overuse of fertilizers and pesticides in U.S. peanut farming with respect to other inputs and addresses genetic research to reduce the use of the most overused chemical input. The findings suggest that peanut producers overuse fungicide with respect to any other input and that fungus-resistant genetically engineered peanuts may increase producer welfare by up to 36.2%. The third chapter implements a bioeconomic model, consisting of a biophysical model and a stochastic dynamic recursive model, that is used to measure the potential economic and environmental welfare of cotton farmers derived from a rotation scheme that uses peanut as a complementary crop. The results show that the rotation scenario would lower farming costs by 14% due to nitrogen credits from prior peanut land use and reduce non-point source pollution from nitrogen runoff by 6.13% compared to continuous cotton farming.

  15. Mixed finite element formulation applied to shape optimization

    NASA Technical Reports Server (NTRS)

    Rodrigues, Helder; Taylor, John E.; Kikuchi, Noboru

    1988-01-01

    The development presented introduces a general form of mixed formulation for the optimal shape design problem. The associated optimality conditions are easily obtained without resorting to highly elaborate mathematical developments. Also, the physical significance of the adjoint problem is clearly defined with this formulation.

  16. Optimality of collective choices: a stochastic approach.

    PubMed

    Nicolis, S C; Detrain, C; Demolin, D; Deneubourg, J L

    2003-09-01

    Amplifying communication is a characteristic of group-living animals. This study is concerned with food recruitment by chemical means, known to be associated with foraging in most ant colonies but also with defence or nest moving. A stochastic approach to the collective choices made by ants faced with different sources is developed to account for the fluctuations inherent in the recruitment process. It has been established that ants are able to optimize their foraging by selecting the most rewarding source. Our results not only confirm that selection is the result of trail modulation according to food quality but also show the existence of an optimal quantity of laid pheromone for which the selection of a source is strongest, whatever the difference between the two sources might be. In terms of colony size, large colonies more easily focus their activity on one source. Moreover, the selection of the rich source is more efficient if many individuals lay small quantities of pheromone, instead of a small group of individuals laying a larger trail amount. These properties, due to the stochasticity of the recruitment process, can be extended to other social phenomena in which competition between different sources of information occurs. PMID:12909251
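
    The flavor of such a stochastic recruitment model is easy to reproduce; the sketch below simulates the classical nonlinear choice function between two trails, with the functional form and parameter values as assumptions in the spirit of the paper rather than its exact model.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def run_colony(n_ants=1000, quality=(1.0, 0.8), k=6.0, n=2, deposit=1.0):
        """Each ant picks trail i with probability (k + c_i)^n / sum_j (k + c_j)^n
        and deposits pheromone in proportion to the quality of source i."""
        c = np.zeros(2)                      # accumulated pheromone per trail
        for _ in range(n_ants):
            w = (k + c) ** n
            i = rng.choice(2, p=w / w.sum())
            c[i] += deposit * quality[i]
        return c.argmax()                    # source the colony settled on

    wins = sum(run_colony() == 0 for _ in range(200))
    print(f"richer source selected in {wins}/200 simulated colonies")
    ```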

  17. Optimized perturbation theory applied to factorization scheme dependence

    NASA Astrophysics Data System (ADS)

    Stevenson, P. M.; Politzer, H. David

    We reconsider the application of the "optimization" procedure to the problem of factorization scheme dependence in finite-order QCD calculations. The main difficulty encountered in a previous analysis disappears once an algebraic error is corrected.

  18. An Optimal Guidance Law Applied to Quadrotor Using LQR Method

    NASA Astrophysics Data System (ADS)

    Jafari, Hamidreza; Zareh, Mehran; Roshanian, Jafar; Nikkhah, Amirali

    The optimal guidance law of an autonomous four-rotor helicopter, called the Quadrotor, using linear quadratic regulators (LQR) is presented in this paper. The dynamic equations of the Quadrotor are nonlinear, so to find an LQR controller these equations must be linearized about different operating points. Because of the importance of energy consumption in Quadrotors, minimum energy is selected as the optimality criterion.
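
    For concreteness, a minimal LQR design for one linearized channel of a hover model is sketched below; the double-integrator linearization, mass, and weighting matrices are illustrative assumptions, with the control penalty R standing in for the minimum-energy criterion.

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_are

    # Linearized altitude channel near hover (assumed model, not the paper's):
    # state x = [height error, vertical speed], input u = net thrust deviation.
    m = 1.0                                   # vehicle mass in kg (assumed)
    A = np.array([[0.0, 1.0],
                  [0.0, 0.0]])
    B = np.array([[0.0],
                  [1.0 / m]])
    Q = np.diag([10.0, 1.0])                  # state penalty
    R = np.array([[1.0]])                     # control (energy) penalty

    P = solve_continuous_are(A, B, Q, R)      # algebraic Riccati solution
    K = np.linalg.solve(R, B.T @ P)           # optimal feedback: u = -K x
    print("LQR gain:", K)
    ```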

  19. Performance of hybrid methods for large-scale unconstrained optimization as applied to models of proteins.

    PubMed

    Das, B; Meirovitch, H; Navon, I M

    2003-07-30

    Energy minimization plays an important role in structure determination and analysis of proteins, peptides, and other organic molecules; therefore, development of efficient minimization algorithms is important. Recently, Morales and Nocedal developed hybrid methods for large-scale unconstrained optimization that interlace iterations of the limited-memory BFGS method (L-BFGS) and the Hessian-free Newton method (Computat Opt Appl 2002, 21, 143-154). We test the performance of this approach as compared to those of the L-BFGS algorithm of Liu and Nocedal and the truncated Newton (TN) with automatic preconditioner of Nash, as applied to the protein bovine pancreatic trypsin inhibitor (BPTI) and a loop of the protein ribonuclease A. These systems are described by the all-atom AMBER force field with a dielectric constant epsilon = 1 and a distance-dependent dielectric function epsilon = 2r, where r is the distance between two atoms. It is shown that for the optimal parameters the hybrid approach is typically two times more efficient in terms of CPU time and function/gradient calculations than the two other methods. The advantage of the hybrid approach increases as the electrostatic interactions become stronger, that is, in going from epsilon = 2r to epsilon = 1, which leads to a more rugged and probably more nonlinear potential energy surface. However, no general rule that defines the optimal parameters has been found and their determination requires a relatively large number of trial-and-error calculations for each problem. PMID:12820130
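
    A toy version of the minimization task, with SciPy's L-BFGS-B in place of the specialized hybrid codes and a small Lennard-Jones cluster standing in for the AMBER protein force field; everything here is an illustrative assumption.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def lj_energy(flat_xyz):
        """Total Lennard-Jones energy of an atom cluster -- a toy stand-in
        for the all-atom AMBER potential used in the paper."""
        xyz = flat_xyz.reshape(-1, 3)
        d = np.linalg.norm(xyz[:, None] - xyz[None, :], axis=-1)
        r = d[np.triu_indices(len(xyz), k=1)]
        return np.sum(4.0 * (r**-12 - r**-6))

    # Eight atoms on a jittered cube, so no pair starts nearly overlapping.
    rng = np.random.default_rng(7)
    lattice = np.indices((2, 2, 2)).reshape(3, -1).T * 1.2
    x0 = (lattice + rng.normal(0.0, 0.05, lattice.shape)).ravel()

    res = minimize(lj_energy, x0, method="L-BFGS-B")
    print("minimized energy:", res.fun)
    ```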

  20. From Nonlinear Optimization to Convex Optimization through Firefly Algorithm and Indirect Approach with Applications to CAD/CAM

    PubMed Central

    Gálvez, Akemi; Iglesias, Andrés

    2013-01-01

    Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. Firstly, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but precomputed with a coarse approximation scheme. Secondly, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to optimization of data parameterization; then, the knot vector is refined by using De Boor's method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently. PMID:24376380
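
    The core firefly move used in such schemes can be written compactly; the sketch below applies the standard attractiveness rule beta0*exp(-gamma*r^2) to a generic test function, and its parameters are assumptions rather than the paper's settings for the spline problem.

    ```python
    import numpy as np

    def firefly_minimize(f, bounds, n=25, iters=200,
                         alpha=0.2, beta0=1.0, gamma=1.0, seed=0):
        """Bare-bones firefly algorithm: dimmer fireflies move toward brighter
        (lower-f) ones with attractiveness beta0*exp(-gamma*r^2) plus noise."""
        rng = np.random.default_rng(seed)
        lo, hi = np.asarray(bounds, float).T
        x = rng.uniform(lo, hi, size=(n, len(lo)))
        light = np.apply_along_axis(f, 1, x)
        for _ in range(iters):
            for i in range(n):
                for j in range(n):
                    if light[j] < light[i]:
                        r2 = np.sum((x[i] - x[j]) ** 2)
                        beta = beta0 * np.exp(-gamma * r2)
                        x[i] = np.clip(x[i] + beta * (x[j] - x[i])
                                       + alpha * (rng.random(len(lo)) - 0.5),
                                       lo, hi)
                        light[i] = f(x[i])
        best = light.argmin()
        return x[best], light[best]

    # Illustrative run on a smooth 2-D bowl, not the spline-fitting objective:
    x_best, f_best = firefly_minimize(lambda v: np.sum(v**2), [(-5, 5), (-5, 5)])
    print(x_best, f_best)
    ```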

  1. Learning approach to sampling optimization: Applications in astrodynamics

    NASA Astrophysics Data System (ADS)

    Henderson, Troy Allen

    A novel numerical optimization algorithm is developed, tested, and used to solve difficult numerical problems from the field of astrodynamics. First, a brief review of optimization theory is presented and common numerical optimization techniques are discussed. Then, the new method, called the Learning Approach to Sampling Optimization (LA), is presented. Simple, illustrative examples are given to further emphasize the simplicity and accuracy of the LA method. Benchmark functions in lower dimensions are studied and the LA is compared, in terms of performance, to widely used methods. Three classes of problems from astrodynamics are then solved. First, the N-impulse orbit transfer and rendezvous problems are solved by using the LA optimization technique along with derived bounds that make the problem computationally feasible. This marriage between analytical and numerical methods allows an answer to be found for an order of magnitude greater number of impulses than are currently published. Next, the N-impulse work is applied to design periodic close encounters (PCE) in space. The encounters are defined as an open rendezvous, meaning that two spacecraft must be at the same position at the same time, but their velocities are not necessarily equal. The PCE work is extended to include N-impulses and other constraints, and new examples are given. Finally, a trajectory optimization problem is solved using the LA algorithm and comparing performance with other methods based on two models---with varying complexity---of the Cassini-Huygens mission to Saturn. The results show that the LA consistently outperforms commonly used numerical optimization algorithms.

  2. A global optimization approach to multi-polarity sentiment analysis.

    PubMed

    Li, Xinmiao; Li, Jing; Wu, Yukeng

    2015-01-01

    Following the rapid development of social media, sentiment analysis has become an important social media mining technique. The performance of automatic sentiment analysis primarily depends on feature selection and sentiment classification. While information gain (IG) and support vector machines (SVM) are two important techniques, few studies have optimized both approaches in sentiment analysis. The effectiveness of applying a global optimization approach to sentiment analysis remains unclear. We propose a global optimization-based sentiment analysis (PSOGO-Senti) approach to improve sentiment analysis with IG for feature selection and SVM as the learning engine. The PSOGO-Senti approach utilizes a particle swarm optimization algorithm to obtain a global optimal combination of feature dimensions and parameters in the SVM. We evaluate the PSOGO-Senti model on two datasets from different fields. The experimental results showed that the PSOGO-Senti model can improve binary and multi-polarity Chinese sentiment analysis. We compared the optimal feature subset selected by PSOGO-Senti with the features in the sentiment dictionary. The results of this comparison indicated that PSOGO-Senti can effectively remove redundant and noisy features and can select a domain-specific feature subset with higher explanatory power for a particular sentiment analysis task. The experimental results showed that the PSOGO-Senti approach is effective and robust for sentiment analysis tasks in different domains. By comparing the improvements of two-polarity, three-polarity and five-polarity sentiment analysis results, we found that the five-polarity sentiment analysis delivered the largest improvement, while the improvement of the two-polarity sentiment analysis was the smallest. We conclude that PSOGO-Senti achieves higher improvement for a more complicated sentiment analysis task. We also compared the results of PSOGO-Senti with those of the genetic algorithm (GA) and grid search method.

  3. A Global Optimization Approach to Multi-Polarity Sentiment Analysis

    PubMed Central

    Li, Xinmiao; Li, Jing; Wu, Yukeng

    2015-01-01

    Following the rapid development of social media, sentiment analysis has become an important social media mining technique. The performance of automatic sentiment analysis primarily depends on feature selection and sentiment classification. While information gain (IG) and support vector machines (SVM) are two important techniques, few studies have optimized both approaches in sentiment analysis. The effectiveness of applying a global optimization approach to sentiment analysis remains unclear. We propose a global optimization-based sentiment analysis (PSOGO-Senti) approach to improve sentiment analysis with IG for feature selection and SVM as the learning engine. The PSOGO-Senti approach utilizes a particle swarm optimization algorithm to obtain a global optimal combination of feature dimensions and parameters in the SVM. We evaluate the PSOGO-Senti model on two datasets from different fields. The experimental results showed that the PSOGO-Senti model can improve binary and multi-polarity Chinese sentiment analysis. We compared the optimal feature subset selected by PSOGO-Senti with the features in the sentiment dictionary. The results of this comparison indicated that PSOGO-Senti can effectively remove redundant and noisy features and can select a domain-specific feature subset with higher explanatory power for a particular sentiment analysis task. The experimental results showed that the PSOGO-Senti approach is effective and robust for sentiment analysis tasks in different domains. By comparing the improvements of two-polarity, three-polarity and five-polarity sentiment analysis results, we found that the five-polarity sentiment analysis delivered the largest improvement, while the improvement of the two-polarity sentiment analysis was the smallest. We conclude that PSOGO-Senti achieves higher improvement for a more complicated sentiment analysis task. We also compared the results of PSOGO-Senti with those of the genetic algorithm (GA) and grid search method.

  4. Applying a Constructivist and Collaborative Methodological Approach in Engineering Education

    ERIC Educational Resources Information Center

    Moreno, Lorenzo; Gonzalez, Carina; Castilla, Ivan; Gonzalez, Evelio; Sigut, Jose

    2007-01-01

    In this paper, a methodological educational proposal based on constructivism and collaborative learning theories is described. The suggested approach has been successfully applied to a subject entitled "Computer Architecture and Engineering" in a Computer Science degree in the University of La Laguna in Spain. This methodology is supported by two…

  5. Focus Groups: A Practical and Applied Research Approach for Counselors

    ERIC Educational Resources Information Center

    Kress, Victoria E.; Shoffner, Marie F.

    2007-01-01

    Focus groups are becoming a popular research approach that counselors can use as an efficient, practical, and applied method of gathering information to better serve clients. In this article, the authors describe focus groups and their potential usefulness to professional counselors and researchers. Practical implications related to the use of…

  6. Optimizing communication satellites payload configuration with exact approaches

    NASA Astrophysics Data System (ADS)

    Stathakis, Apostolos; Danoy, Grégoire; Bouvry, Pascal; Talbi, El-Ghazali; Morelli, Gianluigi

    2015-12-01

    The satellite communications market is competitive and rapidly evolving. The payload, which is in charge of applying frequency conversion and amplification to the signals received from Earth before their retransmission, is made of various components. These include reconfigurable switches that permit the re-routing of signals based on market demand or because of some hardware failure. In order to meet modern requirements, the size and the complexity of current communication payloads are increasing significantly. Consequently, the optimal payload configuration, which was previously done manually by the engineers with the use of computerized schematics, is now becoming a difficult and time consuming task. Efficient optimization techniques are therefore required to find the optimal set(s) of switch positions to optimize some operational objective(s). In order to tackle this challenging problem for the satellite industry, this work proposes two Integer Linear Programming (ILP) models. The first one is single-objective and focuses on the minimization of the length of the longest channel path, while the second one is bi-objective and additionally aims at minimizing the number of switch changes in the payload switch matrix. Experiments are conducted on a large set of instances of realistic payload sizes using the CPLEX® solver and two well-known exact multi-objective algorithms. Numerical results demonstrate the efficiency and limitations of the ILP approach on this real-world problem.

  7. Genetic algorithm parameter optimization: applied to sensor coverage

    NASA Astrophysics Data System (ADS)

    Sahin, Ferat; Abbate, Giuseppe

    2004-08-01

    Genetic Algorithms are powerful tools which, when set upon a solution space, will search for the optimal answer. These algorithms, however, have some associated problems inherent to the method, such as premature convergence and lack of population diversity. These problems can be controlled with changes to certain parameters such as crossover, selection, and mutation. This paper attempts to tackle these problems in GA by having another GA control these parameters. The values for the crossover parameter are: one point, two point, and uniform. The values for the selection parameter are: best, worst, roulette wheel, inside 50%, outside 50%. The values for the mutation parameter are: random and swap. The system includes a control GA whose population consists of different parameter settings. While this GA is attempting to find the best parameters, it advances into the search space of the problem and refines the population. As the population changes due to the search, so do the optimal parameters. For every control-GA generation, each individual in the population is tested for fitness by being run through the problem GA with the assigned parameters. During these runs the population used in the next control generation is compiled. Thus, both the issue of finding the best parameters and the solution to the problem are attacked at the same time. The goal is to optimize sensor coverage in a square field. The test case used was a 30 by 30 unit field with 100 sensor nodes, each with a coverage area of 3 by 3 units. The algorithm attempts to optimize sensor coverage in the field by moving the nodes. The results show that the control GA provides better results when compared to a system with no parameter changes.
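
    The inner "problem GA" for the sensor-coverage test case can be sketched as follows, with one fixed choice for each operator (one-point crossover, best-half selection, random mutation); the control GA that adapts these choices online is omitted, and all settings are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    FIELD, N_SENSORS, REACH = 30, 100, 1      # 3x3 coverage => reach of 1 cell

    def coverage(genome):
        """Fitness: number of grid cells covered by the sensor nodes."""
        covered = np.zeros((FIELD, FIELD), dtype=bool)
        for x, y in genome.reshape(-1, 2):
            covered[max(x - REACH, 0):x + REACH + 1,
                    max(y - REACH, 0):y + REACH + 1] = True
        return covered.sum()

    def evolve(pop_size=60, gens=200, p_mut=0.05):
        pop = rng.integers(0, FIELD, size=(pop_size, 2 * N_SENSORS))
        for _ in range(gens):
            fit = np.array([coverage(g) for g in pop])
            parents = pop[np.argsort(-fit)[:pop_size // 2]]   # "best" selection
            cut = rng.integers(1, 2 * N_SENSORS)              # one-point crossover
            kids = np.concatenate([parents[:, :cut], parents[::-1, cut:]], axis=1)
            mask = rng.random(kids.shape) < p_mut             # random mutation
            kids[mask] = rng.integers(0, FIELD, mask.sum())
            pop = np.concatenate([parents, kids])
        fit = np.array([coverage(g) for g in pop])
        return pop[fit.argmax()], fit.max()

    best_layout, covered_cells = evolve()
    print(f"covered cells: {covered_cells}/{FIELD * FIELD}")
    ```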

  8. Execution of Multidisciplinary Design Optimization Approaches on Common Test Problems

    NASA Technical Reports Server (NTRS)

    Balling, R. J.; Wilkinson, C. A.

    1997-01-01

    A class of synthetic problems for testing multidisciplinary design optimization (MDO) approaches is presented. These test problems are easy to reproduce because all functions are given as closed-form mathematical expressions. They are constructed in such a way that the optimal value of all variables and the objective is unity. The test problems involve three disciplines and allow the user to specify the number of design variables, state variables, coupling functions, design constraints, controlling design constraints, and the strength of coupling. Several MDO approaches were executed on two sample synthetic test problems. These approaches included single-level optimization approaches, collaborative optimization approaches, and concurrent subspace optimization approaches. Execution results are presented, and the robustness and efficiency of these approaches are evaluated for these sample problems.

  9. A multiple objective optimization approach to quality control

    NASA Technical Reports Server (NTRS)

    Seaman, Christopher Michael

    1991-01-01

    The use of product quality as the performance criteria for manufacturing system control is explored. The goal in manufacturing, for economic reasons, is to optimize product quality. The problem is that since quality is a rather nebulous product characteristic, there is seldom an analytic function that can be used as a measure. Therefore standard control approaches, such as optimal control, cannot readily be applied. A second problem with optimizing product quality is that it is typically measured along many dimensions: there are many aspects of quality which must be optimized simultaneously. Very often these different aspects are incommensurate and competing. The concept of optimality must now include accepting tradeoffs among the different quality characteristics. These problems are addressed using multiple objective optimization. It is shown that the quality control problem can be defined as a multiple objective optimization problem. A controller structure is defined using this as the basis. Then, an algorithm is presented which can be used by an operator to interactively find the best operating point. Essentially, the algorithm uses process data to provide the operator with two pieces of information: (1) if it is possible to simultaneously improve all quality criteria, then determine what changes to the process input or controller parameters should be made to do this; and (2) if it is not possible to improve all criteria, and the current operating point is not a desirable one, select a criterion in which a tradeoff should be made, and make input changes to improve all other criteria. The process is not operating at an optimal point in any sense if no tradeoff has to be made to move to a new operating point. This algorithm ensures that operating points are optimal in some sense and provides the operator with information about tradeoffs when seeking the best operating point. The multiobjective algorithm was implemented in two different injection molding scenarios.

  10. An Iterative Approach for the Optimization of Pavement Maintenance Management at the Network Level

    PubMed Central

    Torres-Machí, Cristina; Chamorro, Alondra; Videla, Carlos; Yepes, Víctor

    2014-01-01

    Pavement maintenance is one of the major issues facing public agencies. Insufficient investment or inefficient maintenance strategies lead to high economic expenses in the long term. Under budgetary restrictions, the optimal allocation of resources becomes a crucial aspect. Two traditional approaches (sequential and holistic) and four classes of optimization methods (selection based on ranking, mathematical optimization, near optimization, and other methods) have been applied to solve this problem. They vary in the number of alternatives considered and how the selection process is performed. Therefore, a prior understanding of the problem is mandatory to identify the most suitable approach and method for a particular network. This study aims to assist highway agencies, researchers, and practitioners on when and how to apply available methods, based on a comparative analysis of the current state of the practice. The holistic approach tackles the problem by considering the overall network condition, while the sequential approach is easier to implement and understand but may lead to solutions far from optimal. Scenarios defining the suitability of these approaches are defined. Finally, an iterative approach gathering the advantages of the traditional approaches is proposed and applied in a case study. The proposed approach considers the overall network condition in a simpler and more intuitive manner than the holistic approach. PMID:24741352

  11. Applying Genetic Algorithms To Query Optimization in Document Retrieval.

    ERIC Educational Resources Information Center

    Horng, Jorng-Tzong; Yeh, Ching-Chang

    2000-01-01

    Proposes a novel approach to automatically retrieve keywords and then uses genetic algorithms to adapt the keyword weights. Discusses Chinese text retrieval, term frequency rating formulas, vector space models, bigrams, the PAT-tree structure for information retrieval, query vectors, and relevance feedback. (Author/LRW)

  12. A multiple objective optimization approach to aircraft control systems design

    NASA Technical Reports Server (NTRS)

    Tabak, D.; Schy, A. A.; Johnson, K. G.; Giesy, D. P.

    1979-01-01

    The design of an aircraft lateral control system, subject to several performance criteria and constraints, is considered. While previous studies of the same model pursued a single-criterion optimization with the other performance requirements expressed as constraints, the current approach involves a multiple-criteria optimization. In particular, a Pareto optimal solution is sought.

  13. Self-Adaptive Stepsize Search Applied to Optimal Structural Design

    NASA Astrophysics Data System (ADS)

    Nolle, L.; Bland, J. A.

    Structural engineering often involves the design of space frames that are required to resist predefined external forces without exhibiting plastic deformation. The weight of the structure, and hence the weight of its constituent members, has to be as low as possible for economic reasons without violating any of the load constraints. Design spaces are usually vast and the computational costs for analyzing a single design are usually high; therefore, not every possible design can be evaluated for real-world problems. In this work, a standard structural design problem, the 25-bar problem, has been solved using self-adaptive stepsize search (SASS), a relatively new search heuristic. This algorithm has only one control parameter and therefore overcomes a drawback of modern search heuristics, i.e. the need to first find a set of optimum control parameter settings for the problem at hand. In this work, SASS outperforms simulated annealing, genetic algorithms, tabu search and ant colony optimization.
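
    A rough sketch of a self-adaptive stepsize search in the spirit described above: a random neighbour is drawn within the current step size, which grows after a success and shrinks after a failure. The adaptation constants and test function are assumptions, not the published SASS parameters.

    ```python
    import numpy as np

    def sass_minimize(f, x0, lo, hi, iters=5000, seed=0):
        """Hill climbing whose step size s adapts itself: expand on
        improvement, contract otherwise (constants are assumptions)."""
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float)
        fx = f(x)
        s = (hi - lo) / 2.0                      # initial step size
        for _ in range(iters):
            cand = np.clip(x + rng.uniform(-s, s, x.size), lo, hi)
            fc = f(cand)
            if fc < fx:
                x, fx, s = cand, fc, s * 1.1     # keep the move, expand
            else:
                s *= 0.98                        # reject the move, contract
        return x, fx

    x, fx = sass_minimize(lambda v: np.sum((v - 1.0) ** 2),
                          x0=np.zeros(10), lo=-5.0, hi=5.0)
    print(fx)
    ```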

  14. Weaknesses in Applying a Process Approach in Industry Enterprises

    NASA Astrophysics Data System (ADS)

    Kučerová, Marta; Mĺkva, Miroslava; Fidlerová, Helena

    2012-12-01

    The paper deals with the process approach as one of the main principles of quality management. Quality management systems based on the process approach currently represent a proven way to manage an organization. Sales volume, costs, and profit are influenced by the quality of processes and efficient process flow. As the results of the research project showed, there are weaknesses in applying the process approach in industrial practice: in many organizations in Slovakia the shift from functional to process management has often been merely formal. For efficient process management it is essential that companies pay attention to how they organize their processes and seek their continuous improvement.

  15. Applying a weed risk assessment approach to GM crops.

    PubMed

    Keese, Paul K; Robold, Andrea V; Myers, Ruth C; Weisman, Sarah; Smith, Joe

    2014-12-01

    Current approaches to environmental risk assessment of genetically modified (GM) plants are modelled on chemical risk assessment methods, which have a strong focus on toxicity. There are additional types of harms posed by plants that have been extensively studied by weed scientists and incorporated into weed risk assessment methods. Weed risk assessment uses robust, validated methods that are widely applied to regulatory decision-making about potentially problematic plants. They are designed to encompass a broad variety of plant forms and traits in different environments, and can provide reliable conclusions even with limited data. The knowledge and experience that underpin weed risk assessment can be harnessed for environmental risk assessment of GM plants. A case study illustrates the application of the Australian post-border weed risk assessment approach to a representative GM plant. This approach is a valuable tool to identify potential risks from GM plants. PMID:24046097

  16. Group Counseling Optimization: A Novel Approach

    NASA Astrophysics Data System (ADS)

    Eita, M. A.; Fahmy, M. M.

    A new population-based search algorithm, which we call Group Counseling Optimizer (GCO), is presented. It mimics the group counseling behavior of humans in solving their problems. The algorithm is tested using seven known benchmark functions: Sphere, Rosenbrock, Griewank, Rastrigin, Ackley, Weierstrass, and Schwefel functions. A comparison is made with the recently published comprehensive learning particle swarm optimizer (CLPSO). The results demonstrate the efficiency and robustness of the proposed algorithm.

  17. Robust Bayesian decision theory applied to optimal dosage.

    PubMed

    Abraham, Christophe; Daurès, Jean-Pierre

    2004-04-15

    We give a model for constructing a utility function u(theta,d) in a dose prescription problem, where theta denotes the patient's state of health and d the dose. The construction of u is based on the conditional probabilities of several variables, which are described by logistic models. Obviously, u is only an approximation of the true utility function, and that is why we investigate the sensitivity of the final decision with respect to the utility function. We construct a class of utility functions from u and approximate the set of all Bayes actions associated with that class. Then, we measure the sensitivity as the greatest difference between the expected utilities of two Bayes actions. Finally, we apply these results to weighing up a chemotherapy treatment of lung cancer. This application emphasizes the importance of measuring robustness through the utility of decisions rather than through the decisions themselves. PMID:15057878

  18. Applying riding-posture optimization on bicycle frame design.

    PubMed

    Hsiao, Shih-Wen; Chen, Rong-Qi; Leng, Wan-Lee

    2015-11-01

    Customization has been a trend in bicycle development in recent years. Riding comfort is therefore an important factor that deserves close attention while developing a bicycle. From the viewpoint of ergonomics, the concept of "fitting the object to the human body" is designed into the bicycle frame in this study. Firstly, the important feature points of the riding posture were automatically detected by an image processing method. In the measurement process, the best riding posture was identified experimentally, and thus the positions of the feature points and the joint angles of the human body were obtained. Afterwards, according to the measurement data, three key points: the handlebar, the saddle and the crank center, were identified and applied to the frame design of various bicycle types. Lastly, this study further proposed a frame size table for common bicycle types, which is helpful for the designer when designing a bicycle. PMID:26154206

  19. An optimization approach and its application to compare DNA sequences

    NASA Astrophysics Data System (ADS)

    Liu, Liwei; Li, Chao; Bai, Fenglan; Zhao, Qi; Wang, Ying

    2015-02-01

    Studying the evolutionary relationships between biological sequences by comparing and analyzing gene sequences has become one of the main tasks in bioinformatics research. Many valid methods have been applied to DNA sequence alignment. In this paper, we propose a novel comparison method based on the Lempel-Ziv (LZ) complexity to compare biological sequences. Moreover, we introduce a new distance measure and make use of the corresponding similarity matrix to construct phylogenetic trees without multiple sequence alignment. Further, we construct phylogenetic trees for 24 species of Eutherian mammals and for Hepatitis E virus (HEV) sequences from 48 countries by an optimization approach. The results indicate that this new method improves the efficiency of sequence comparison and successfully constructs phylogenies.
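
    The building block of such methods, the LZ76 phrase count, is short to implement; the normalized distance below is one common choice and is an assumption, not necessarily the exact measure introduced in the paper.

    ```python
    def lz_complexity(s):
        """Number of distinct phrases in a Lempel-Ziv (LZ76-style) parsing:
        each phrase is extended while it already occurs in the prior text."""
        i, c, n = 0, 0, len(s)
        while i < n:
            k = 1
            while i + k <= n and s[i:i + k] in s[:i + k - 1]:
                k += 1
            c += 1
            i += k
        return c

    def lz_distance(a, b):
        """Normalized LZ-based dissimilarity of two sequences (assumed form)."""
        ca, cb = lz_complexity(a), lz_complexity(b)
        cab, cba = lz_complexity(a + b), lz_complexity(b + a)
        return max(cab - ca, cba - cb) / max(ca, cb)

    x = "ATGGTGCACCTGACTCCTGAGGAGAAG"
    y = "ATGGTGCATCTGACTCCTGAGGAGAAG"
    print(lz_distance(x, y))    # small for similar sequences
    ```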

  20. New approaches to the design optimization of hydrofoils

    NASA Astrophysics Data System (ADS)

    Beyhaghi, Pooriya; Meneghello, Gianluca; Bewley, Thomas

    2015-11-01

    Two simulation-based approaches are developed to optimize the design of hydrofoils for foiling catamarans, with the objective of maximizing efficiency (lift/drag). In the first, a simple hydrofoil model based on the vortex-lattice method is coupled with a hybrid global and local optimization algorithm that combines our Delaunay-based optimization algorithm with a Generalized Pattern Search. This optimization procedure is compared with the classical Newton-based optimization method. The accuracy of the vortex-lattice simulation of the optimized design is compared with a more accurate and computationally expensive LES-based simulation. In the second approach, the (expensive) LES model of the flow is used directly during the optimization. A modified Delaunay-based optimization algorithm is used to maximize the efficiency of the optimization, which measures a finite-time averaged approximation of the infinite-time averaged value of an ergodic and stationary process. Since the optimization algorithm takes into account the uncertainty of the finite-time averaged approximation of the infinite-time averaged statistic of interest, the total computational time of the optimization algorithm is significantly reduced. Results from the two different approaches are compared.

  1. A comparison of two closely-related approaches to aerodynamic design optimization

    NASA Technical Reports Server (NTRS)

    Shubin, G. R.; Frank, P. D.

    1991-01-01

    Two related methods for aerodynamic design optimization are compared. The methods, called the implicit gradient approach and the variational (or optimal control) approach, both attempt to obtain gradients necessary for numerical optimization at a cost significantly less than that of the usual black-box approach that employs finite difference gradients. While the two methods are seemingly quite different, they are shown to differ (essentially) in that the order of discretizing the continuous problem, and of applying calculus, is interchanged. Under certain circumstances, the two methods turn out to be identical. We explore the relationship between these methods by applying them to a model problem for duct flow that has many features in common with transonic flow over an airfoil. We find that the gradients computed by the variational method can sometimes be sufficiently inaccurate to cause the optimization to fail.

  2. Russian Loanword Adaptation in Persian; Optimal Approach

    ERIC Educational Resources Information Center

    Kambuziya, Aliye Kord Zafaranlu; Hashemi, Eftekhar Sadat

    2011-01-01

    In this paper we analyzed some of the phonological rules of Russian loanword adaptation in Persian from the viewpoint of Optimality Theory (OT) (Prince & Smolensky, 1993/2004). It is the first study of the phonological processes involved in Russian loanword adaptation in Persian. Having gathered about 50 current Russian loanwords, we selected some of them for analysis. We…

  3. Radiobiological Optimization of Combination Radiopharmaceutical Therapy Applied to Myeloablative Treatment of Non-Hodgkin’s Lymphoma

    PubMed Central

    Hobbs, Robert F; Wahl, Richard L; Frey, Eric C; Kasamon, Yvette; Song, Hong; Huang, Peng; Jones, Richard J; Sgouros, George

    2014-01-01

    Combination treatment is a hallmark of cancer therapy. Although the rationale for combination radiopharmaceutical therapy was described in the mid '90s, such treatment strategies have only recently been implemented clinically, and without a rigorous methodology for treatment optimization. Radiobiological and quantitative imaging-based dosimetry tools are now available that enable rational implementation of combined targeted radiopharmaceutical therapy. Optimal implementation should simultaneously account for radiobiological normal organ tolerance while optimizing the ratio of two different radiopharmaceuticals required to maximize tumor control. We have developed such a methodology and applied it to hypothetical myeloablative treatment of non-Hodgkin's lymphoma (NHL) patients using 131I-tositumomab and 90Y-ibritumomab tiuxetan. Methods: The range of potential administered activities (AA) is limited by the normal organ maximum tolerated biologic effective doses (MTBEDs) arising from the combined radiopharmaceuticals. Dose-limiting normal organs are expected to be the lungs for 131I-tositumomab and the liver for 90Y-ibritumomab tiuxetan in myeloablative NHL treatment regimens. By plotting the limiting normal organ constraints as a function of the AAs and calculating the tumor biological effective dose (BED) along the normal organ MTBED limits, the optimal combination of activities is obtained. The model was tested using previously acquired patient normal organ and tumor kinetic data and MTBED values taken from the literature. Results: The average AA value based solely on normal organ constraints was (19.0 ± 8.2) GBq with a range of 3.9 – 36.9 GBq for 131I-tositumomab, and (2.77 ± 1.64) GBq with a range of 0.42 – 7.54 GBq for 90Y-ibritumomab tiuxetan. Tumor BED optimization results were calculated and plotted as a function of AA for 5 different cases, established using patient normal organ kinetics for the two radiopharmaceuticals. Results included AA ranges

  4. A Novel Particle Swarm Optimization Approach for Grid Job Scheduling

    NASA Astrophysics Data System (ADS)

    Izakian, Hesam; Tork Ladani, Behrouz; Zamanifar, Kamran; Abraham, Ajith

    This paper presents a Particle Swarm Optimization (PSO) algorithm for grid job scheduling. PSO is a population-based search algorithm based on the simulation of the social behavior of bird flocking and fish schooling. Particles fly through the problem search space to find optimal or near-optimal solutions. The proposed scheduler aims at minimizing makespan and flowtime simultaneously. Experimental studies show that the proposed approach is more efficient than the PSO approach reported in the literature.
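
    The canonical PSO velocity/position update the scheduler builds on is sketched below on a continuous test function; the mapping of particle positions to discrete job-to-resource schedules and the makespan/flowtime objective are omitted, so all names and settings here are illustrative assumptions.

    ```python
    import numpy as np

    def pso_minimize(f, lo, hi, dim, n_particles=30, iters=200,
                     w=0.7, c1=1.5, c2=1.5, seed=0):
        """Canonical PSO: inertia plus cognitive and social attraction."""
        rng = np.random.default_rng(seed)
        x = rng.uniform(lo, hi, (n_particles, dim))
        v = np.zeros_like(x)
        pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
        g = pbest[pbest_f.argmin()].copy()         # global best position
        for _ in range(iters):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = np.clip(x + v, lo, hi)
            fx = np.apply_along_axis(f, 1, x)
            improved = fx < pbest_f
            pbest[improved], pbest_f[improved] = x[improved], fx[improved]
            g = pbest[pbest_f.argmin()].copy()
        return g, pbest_f.min()

    g_best, f_best = pso_minimize(lambda z: np.sum(z**2), -5.0, 5.0, dim=10)
    print(f_best)
    ```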

  5. Molecular Approaches for Optimizing Vitamin D Supplementation.

    PubMed

    Carlberg, Carsten

    2016-01-01

    Vitamin D can be synthesized endogenously in UV-B exposed human skin. However, avoidance of sufficient sun exposure through predominantly indoor activities, textile coverage, dark skin at higher latitudes, and seasonal variations makes the intake of vitamin D fortified food or direct vitamin D supplementation necessary. Via its biologically most active metabolite 1α,25-dihydroxyvitamin D and the transcription factor vitamin D receptor, vitamin D has a direct effect on the epigenome and transcriptome of many human tissues and cell types. Differing interpretations of results from observational studies with vitamin D have led to some dispute in the field over the desired optimal vitamin D level and the recommended daily supplementation. This chapter will provide background on the epigenome- and transcriptome-wide functions of vitamin D and will outline how this insight may be used for determining the optimal vitamin D status of human individuals. These reflections lead to the concept of a personal vitamin D index, which may be a better guideline for optimized vitamin D supplementation than population-based recommendations. PMID:26827955

  6. MATERIAL SHAPE OPTIMIZATION FOR FIBER REINFORCED COMPOSITES APPLYING A DAMAGE FORMULATION

    NASA Astrophysics Data System (ADS)

    Kato, Junji; Ramm, Ekkehard; Terada, Kenjiro; Kyoya, Takashi

    The present contribution deals with an optimization strategy for fiber reinforced composites. Although the methodical concept is very general, we concentrate on Fiber Reinforced Concrete, whose complex failure mechanism results from the material brittleness of both constituents, matrix and fibers. The purpose of the present paper is to improve the structural ductility of fiber reinforced composites by applying an optimization method to the geometrical layout of continuous long textile fibers. The method proposed is achieved by applying a so-called embedded reinforcement formulation. This methodology is extended to a damage formulation in order to represent realistic structural behavior. For the optimization problem a gradient-based optimization scheme is assumed. An optimality criteria method is applied because of its high numerical efficiency and robustness. The performance of the method is demonstrated by a series of numerical examples; it is verified that the ductility can be substantially improved.

  7. Applying a Modified Triad Approach to Investigate Wastewater lines

    SciTech Connect

    Pawlowicz, R.; Urizar, L.; Blanchard, S.; Jacobsen, K.; Scholfield, J.

    2006-07-01

    Approximately 20 miles of wastewater lines lie below grade at an active military Base. This piping network feeds or fed domestic or industrial wastewater treatment plants on the Base. Past wastewater line investigations indicated potential contaminant releases to soil and groundwater, and further environmental assessment was recommended to characterize the lines. A Remedial Investigation (RI) using random sampling, or sampling points spaced at predetermined distances along the entire length of the wastewater lines, would however be inefficient and cost-prohibitive. To accomplish the RI goals efficiently and within budget, a modified Triad approach was used to design a defensible sampling and analysis plan and perform the investigation. The RI task was successfully executed and resulted in a reduced fieldwork schedule and lower sampling and analytical costs. Results indicated that no major releases occurred at the biased sampling points. It was reasonably extrapolated that, since releases did not occur at the most likely locations, the entire length of a particular wastewater line segment was unlikely to have contaminated soil or groundwater, and it was recommended for no further action. A determination of no further action was recommended for the majority of the waste lines after completing the investigation. The modified Triad approach was successful, and a similar approach could be applied to investigate wastewater lines at other United States Department of Defense or Department of Energy facilities. (authors)

  8. Scalar and Multivariate Approaches for Optimal Network Design in Antarctica

    NASA Astrophysics Data System (ADS)

    Hryniw, Natalia

    Observations are crucial for weather and climate, not only for daily forecasts and logistical purposes, but for maintaining representative records and for tuning atmospheric models. Here scalar theory for optimal network design is expanded into a multivariate framework, to allow optimal station siting for full-field optimization. Ensemble sensitivity theory is expanded to produce the covariance trace approach, which optimizes for the trace of the covariance matrix. Relative entropy is also used for multivariate optimization as an information theory approach for finding optimal locations. Antarctic surface temperature data are used as a testbed for these methods. Both methods produce different results, which are tied to the fundamental physical parameters of the Antarctic temperature field.

  9. Optimization techniques applied to passive measures for in-orbit spacecraft survivability

    NASA Technical Reports Server (NTRS)

    Mog, Robert A.; Price, D. Marvin

    1987-01-01

    "Optimization techniques applied to passive measures for in-orbit spacecraft survivability" is a six-month study designed to evaluate the effectiveness of the geometric programming (GP) optimization technique in determining the optimal design of a meteoroid and space debris protection system for the Space Station Core Module configuration. Geometric programming was found to be superior to other methods in that it provided maximum protection from impact at the lowest weight and cost.

  10. Stevenson's optimized perturbation theory applied to factorization and mass scheme dependence

    NASA Astrophysics Data System (ADS)

    David Politzer, H.

    1982-01-01

    The principles of the optimized perturbation theory proposed by Stevenson to deal with coupling constant scheme dependence are applied to the problem of factorization scheme dependence in inclusive hadron reactions. Similar considerations allow the optimization of problems with mass dependence. A serious shortcoming of the procedure, common to all applications, is discussed.

  11. A system approach to aircraft optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1991-01-01

    Mutual couplings among the mathematical models of physical phenomena and parts of a system such as an aircraft complicate the design process because each contemplated design change may have a far reaching consequence throughout the system. Techniques are outlined for computing these influences as system design derivatives useful for both judgemental and formal optimization purposes. The techniques facilitate decomposition of the design process into smaller, more manageable tasks and they form a methodology that can easily fit into existing engineering organizations and incorporate their design tools.

  12. [Approaches to the optimization of medical services for the population].

    PubMed

    Babanov, S A

    2001-01-01

    Describes modern approaches to optimizing medical care for the population under conditions of funding shortage. Expenditure cuts are evaluated from the viewpoint of evidence-based medicine (allotting finances to specific patients and services). PMID:11515111

  13. Optimization approaches to volumetric modulated arc therapy planning.

    PubMed

    Unkelbach, Jan; Bortfeld, Thomas; Craft, David; Alber, Markus; Bangert, Mark; Bokrantz, Rasmus; Chen, Danny; Li, Ruijiang; Xing, Lei; Men, Chunhua; Nill, Simeon; Papp, Dávid; Romeijn, Edwin; Salari, Ehsan

    2015-03-01

    Volumetric modulated arc therapy (VMAT) has found widespread clinical application in recent years. A large number of treatment planning studies have evaluated the potential for VMAT for different disease sites based on the currently available commercial implementations of VMAT planning. In contrast, literature on the underlying mathematical optimization methods used in treatment planning is scarce. VMAT planning represents a challenging large scale optimization problem. In contrast to fluence map optimization in intensity-modulated radiotherapy planning for static beams, VMAT planning represents a nonconvex optimization problem. In this paper, the authors review the state-of-the-art in VMAT planning from an algorithmic perspective. Different approaches to VMAT optimization, including arc sequencing methods, extensions of direct aperture optimization, and direct optimization of leaf trajectories are reviewed. Their advantages and limitations are outlined and recommendations for improvements are discussed. PMID:25735291

  14. Optimization approaches to volumetric modulated arc therapy planning

    SciTech Connect

    Unkelbach, Jan Bortfeld, Thomas; Craft, David; Alber, Markus; Bangert, Mark; Bokrantz, Rasmus; Chen, Danny; Li, Ruijiang; Xing, Lei; Men, Chunhua; Nill, Simeon; Papp, Dávid; Romeijn, Edwin; Salari, Ehsan

    2015-03-15

    Volumetric modulated arc therapy (VMAT) has found widespread clinical application in recent years. A large number of treatment planning studies have evaluated the potential for VMAT for different disease sites based on the currently available commercial implementations of VMAT planning. In contrast, literature on the underlying mathematical optimization methods used in treatment planning is scarce. VMAT planning represents a challenging large scale optimization problem. In contrast to fluence map optimization in intensity-modulated radiotherapy planning for static beams, VMAT planning represents a nonconvex optimization problem. In this paper, the authors review the state-of-the-art in VMAT planning from an algorithmic perspective. Different approaches to VMAT optimization, including arc sequencing methods, extensions of direct aperture optimization, and direct optimization of leaf trajectories are reviewed. Their advantages and limitations are outlined and recommendations for improvements are discussed.

  15. RF cavity design exploiting a new derivative-free trust region optimization approach

    PubMed Central

    Hassan, Abdel-Karim S.O.; Abdel-Malek, Hany L.; Mohamed, Ahmed S.A.; Abuelfadl, Tamer M.; Elqenawy, Ahmed E.

    2014-01-01

    In this article, a novel derivative-free (DF) surrogate-based trust region optimization approach is proposed. In the proposed approach, quadratic surrogate models are constructed and successively updated. The generated surrogate model is then optimized instead of the underlying objective function over trust regions. Truncated conjugate gradients are employed to find the optimal point within each trust region. The approach constructs the initial quadratic surrogate model using few data points of order O(n), where n is the number of design variables. The proposed approach adopts weighted least squares fitting for updating the surrogate model instead of interpolation, which is commonly used in DF optimization. This makes the approach more suitable for stochastic optimization and for functions subject to numerical error. The weights are assigned to give more emphasis to points close to the current center point. The accuracy and efficiency of the proposed approach are demonstrated by applying it to a set of classical benchmark test problems. It is also employed to find the optimal design of an RF cavity linear accelerator, with a comparison analysis against a recent optimization technique. PMID:26644929

  16. RF cavity design exploiting a new derivative-free trust region optimization approach.

    PubMed

    Hassan, Abdel-Karim S O; Abdel-Malek, Hany L; Mohamed, Ahmed S A; Abuelfadl, Tamer M; Elqenawy, Ahmed E

    2015-11-01

    In this article, a novel derivative-free (DF) surrogate-based trust region optimization approach is proposed. In the proposed approach, quadratic surrogate models are constructed and successively updated. The generated surrogate model is then optimized instead of the underlying objective function over trust regions. Truncated conjugate gradients are employed to find the optimal point within each trust region. The approach constructs the initial quadratic surrogate model using few data points of order O(n), where n is the number of design variables. The proposed approach adopts weighted least squares fitting for updating the surrogate model instead of interpolation, which is commonly used in DF optimization. This makes the approach more suitable for stochastic optimization and for functions subject to numerical error. The weights are assigned to give more emphasis to points close to the current center point. The accuracy and efficiency of the proposed approach are demonstrated by applying it to a set of classical benchmark test problems. It is also employed to find the optimal design of an RF cavity linear accelerator, with a comparison analysis against a recent optimization technique. PMID:26644929
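
    A compact sketch of the main loop described in these two records: a quadratic surrogate is fitted by weighted least squares, with weights emphasizing points near the current center, and then minimized within the trust region. For brevity, candidate sampling replaces the truncated-conjugate-gradient subproblem solver, and every constant is an assumption.

    ```python
    import numpy as np

    def quad_features(X):
        """Full quadratic model in 2 variables: [1, x1, x2, x1^2, x1*x2, x2^2]."""
        x1, x2 = X[:, 0], X[:, 1]
        return np.column_stack([np.ones(len(X)), x1, x2, x1*x1, x1*x2, x2*x2])

    def fit_surrogate(X, y, center, radius):
        """Weighted least squares, emphasizing points near the center."""
        w = np.exp(-np.sum((X - center) ** 2, axis=1) / (2 * radius ** 2))
        return np.linalg.lstsq(quad_features(X) * w[:, None], y * w, rcond=None)[0]

    def noisy_objective(x, rng):        # stand-in stochastic objective
        return (x[0] - 1) ** 2 + 2 * (x[1] + 0.5) ** 2 + 0.01 * rng.standard_normal()

    rng = np.random.default_rng(5)
    center, radius = np.zeros(2), 1.0
    for _ in range(20):
        X = center + radius * rng.uniform(-1, 1, (12, 2))   # modest oversampling
        y = np.array([noisy_objective(x, rng) for x in X])
        coef = fit_surrogate(X, y, center, radius)
        cand = center + radius * rng.uniform(-1, 1, (400, 2))
        step = cand[(quad_features(cand) @ coef).argmin()]  # crude TR subproblem
        if noisy_objective(step, rng) < noisy_objective(center, rng):
            center, radius = step, radius * 1.2             # accept and expand
        else:
            radius *= 0.5                                   # reject and shrink
    print("estimated minimizer:", center)
    ```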

  17. Applying the J-optimal channelized quadratic observer to SPECT myocardial perfusion defect detection

    NASA Astrophysics Data System (ADS)

    Kupinski, Meredith K.; Clarkson, Eric; Ghaly, Michael; Frey, Eric C.

    2016-03-01

    To evaluate performance on a perfusion defect detection task from 540 image pairs of myocardial perfusion SPECT image data, we apply the J-optimal channelized quadratic observer (J-CQO). We compare AUC values of the linear Hotelling observer and J-CQO when the defect location is fixed and when it occurs in one of two locations. As expected, when the location is fixed a single channel maximizes AUC; location variability requires multiple channels to maximize the AUC. The AUC is estimated from both the projection data and reconstructed images. J-CQO is quadratic since it uses the first- and second-order statistics of the image data from both classes. The linear data reduction by the channels is described by an L x M channel matrix, and in prior work we introduced an iterative gradient-based method for calculating the channel matrix. The dimensionality reduction from M measurements to L channels yields better estimates of these sample statistics from smaller sample sizes, and since the channelized covariance matrix is L x L instead of M x M, the matrix inverse is easier to compute. The novelty of our approach is the use of Jeffrey's divergence (J) as the figure of merit (FOM) for optimizing the channel matrix. We previously showed that the J-optimal channels are also the optimum channels for the AUC and the Bhattacharyya distance when the channel outputs are Gaussian distributed with equal means. This work evaluates the use of J as a surrogate FOM (SFOM) for AUC when these statistical conditions are not satisfied.

  18. Optimality approaches to describe characteristic fluvial patterns on landscapes

    PubMed Central

    Paik, Kyungrock; Kumar, Praveen

    2010-01-01

    Mother Nature has left amazingly regular geomorphic patterns on the Earth's surface. These patterns are often explained as having arisen as a result of some optimal behaviour of natural processes. However, there is little agreement on what is being optimized. As a result, a number of alternatives have been proposed, often with little a priori justification with the argument that successful predictions will lend a posteriori support to the hypothesized optimality principle. Given that maximum entropy production is an optimality principle attempting to predict the microscopic behaviour from a macroscopic characterization, this paper provides a review of similar approaches with the goal of providing a comparison and contrast between them to enable synthesis. While assumptions of optimal behaviour approach a system from a macroscopic viewpoint, process-based formulations attempt to resolve the mechanistic details whose interactions lead to the system level functions. Using observed optimality trends may help simplify problem formulation at appropriate levels of scale of interest. However, for such an approach to be successful, we suggest that optimality approaches should be formulated at a broader level of environmental systems' viewpoint, i.e. incorporating the dynamic nature of environmental variables and complex feedback mechanisms between fluvial and non-fluvial processes. PMID:20368257

  19. An Efficient Approach to Obtain Optimal Load Factors for Structural Design

    PubMed Central

    Bojórquez, Juan

    2014-01-01

    An efficient optimization approach is described to calibrate load factors used in structural design. The load factors are calibrated so that the structural reliability index is as close as possible to a target reliability value. The optimization procedure is applied to find optimal load factors for the design of structures in accordance with the new version of the Mexico City Building Code (RCDF). For this aim, the combination of factors corresponding to dead load plus live load is considered. The optimal combination is based on a parametric numerical analysis of several reinforced concrete elements, which are designed using different load factor values. The Monte Carlo simulation technique is used. The formulation is applied to different failure modes: flexure, shear, torsion, and compression plus bending of short and slender reinforced concrete elements. Finally, the structural reliability corresponding to the optimal load combination proposed here is compared with that corresponding to the load combination recommended by the current Mexico City Building Code. PMID:25133232

  20. An efficient approach to obtain optimal load factors for structural design.

    PubMed

    Bojórquez, Juan; Ruiz, Sonia E

    2014-01-01

    An efficient optimization approach is described to calibrate load factors used in structural design. The load factors are calibrated so that the structural reliability index is as close as possible to a target reliability value. The optimization procedure is applied to find optimal load factors for the design of structures in accordance with the new version of the Mexico City Building Code (RCDF). For this aim, the combination of factors corresponding to dead load plus live load is considered. The optimal combination is based on a parametric numerical analysis of several reinforced concrete elements, which are designed using different load factor values. The Monte Carlo simulation technique is used. The formulation is applied to different failure modes: flexure, shear, torsion, and compression plus bending of short and slender reinforced concrete elements. Finally, the structural reliability corresponding to the optimal load combination proposed here is compared with that corresponding to the load combination recommended by the current Mexico City Building Code. PMID:25133232
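
    The calibration loop described in these two records reduces to: design a member for trial load factors, estimate its failure probability by Monte Carlo, convert that to a reliability index, and search for the factors closest to the target. A minimal sketch follows; all probabilistic load and resistance models, the resistance factor, and the target index are hypothetical placeholders, not values from the paper.

    ```python
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    BETA_TARGET = 3.5                   # target reliability index (assumed)
    DN, LN, PHI = 1.0, 0.5, 0.9         # nominal loads and resistance factor (hypothetical)

    def beta(gD, gL, n=200_000):
        # design the element so factored resistance equals factored load
        Rn = (gD * DN + gL * LN) / PHI
        D = rng.normal(1.05 * DN, 0.10 * DN, n)   # dead load model (assumed)
        L = rng.normal(1.00 * LN, 0.25 * LN, n)   # live load model (simplified)
        cov = 0.12                                # resistance lognormal model (assumed)
        sig = np.sqrt(np.log(1 + cov**2))
        R = rng.lognormal(np.log(1.10 * Rn) - 0.5 * sig**2, sig, n)
        pf = max((R < D + L).mean(), 1e-7)        # guard against zero observed failures
        return -norm.ppf(pf)

    # calibration: pick the factor pair whose reliability index is closest to the target
    grid = [(gD, gL) for gD in np.arange(1.1, 1.6, 0.05)
                     for gL in np.arange(1.2, 1.9, 0.05)]
    best = min(grid, key=lambda g: abs(beta(*g) - BETA_TARGET))
    print("calibrated (gamma_D, gamma_L):", best)
    ```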

  1. Annular flow optimization: A new integrated approach

    SciTech Connect

    Maglione, R.; Robotti, G.; Romagnoli, R.

    1997-07-01

    During the drilling stage of an oil and gas well, the hydraulic circuit of the mud assumes great importance with respect to most of its numerous and varied constituent parts (mostly in the annular sections). Each of them imposes conditions that must be satisfied in order to guarantee both the safety of the operations and the performance optimization of each single element of the circuit. The most important tasks for the annular part of the drilling hydraulic circuit are the following: (1) deliver the maximum available pressure to the last casing shoe; (2) avoid borehole wall erosion; and (3) guarantee hole cleaning. A new integrated system has been realized that considers all the elements of the annular part of the drilling hydraulic circuit and the constraints imposed by each of them. In this way, the family of flow parameters (mud rheology and pump rate) satisfying all the variables of the annular section simultaneously has been found. Finally, two examples regarding a standard and a narrow annular section (slim hole) are reported, showing briefly all the steps of the calculation until the optimum flow-parameter family is reached (for that operational drilling condition), satisfying simultaneously all the flow-parameter limitations imposed by the elements of the annular section circuit.

  2. Optimization methods applied to the aerodynamic design of helicopter rotor blades

    NASA Technical Reports Server (NTRS)

    Walsh, Joanne L.; Bingham, Gene J.; Riley, Michael F.

    1987-01-01

    Described is a formal optimization procedure for helicopter rotor blade design which minimizes hover horsepower while assuring satisfactory forward flight performance. The approach is to couple hover and forward flight analysis programs with a general-purpose optimization procedure. The resulting optimization system provides a systematic evaluation of the rotor blade design variables and their interaction, thus reducing the time and cost of designing advanced rotor blades. The paper discusses the basis for and details of the overall procedure, describes the generation of advanced blade designs for representative Army helicopters, and compares design and design effort with those from the conventional approach which is based on parametric studies and extensive cross-plots.

  3. Comparative Properties of Collaborative Optimization and Other Approaches to MDO

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.; Lewis, Robert Michael

    1999-01-01

    We discuss criteria by which one can classify, analyze, and evaluate approaches to solving multidisciplinary design optimization (MDO) problems. Central to our discussion is the often overlooked distinction between questions of formulating MDO problems and solving the resulting computational problem. We illustrate our general remarks by comparing several approaches to MDO that have been proposed.

  4. Comparative Properties of Collaborative Optimization and other Approaches to MDO

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.; Lewis, Robert Michael

    1999-01-01

    We discuss criteria by which one can classify, analyze, and evaluate approaches to solving multidisciplinary design optimization (MDO) problems. Central to our discussion is the often overlooked distinction between questions of formulating MDO problems and solving the resulting computational problem. We illustrate our general remarks by comparing several approaches to MDO that have been proposed.

  5. A collective neurodynamic optimization approach to bound-constrained nonconvex optimization.

    PubMed

    Yan, Zheng; Wang, Jun; Li, Guocheng

    2014-07-01

    This paper presents a novel collective neurodynamic optimization method for solving nonconvex optimization problems with bound constraints. First, it is proved that a one-layer projection neural network has the property that its equilibria are in one-to-one correspondence with the Karush-Kuhn-Tucker points of the constrained optimization problem. Next, a collective neurodynamic optimization approach is developed by utilizing a group of recurrent neural networks in the framework of particle swarm optimization, emulating the paradigm of brainstorming. Each recurrent neural network carries out a precise constrained local search according to its own neurodynamic equations. By iteratively improving the solution quality of each recurrent neural network using the information of the locally best known solution and the globally best known solution, the group can obtain the global optimal solution to a nonconvex optimization problem. The advantages of the proposed collective neurodynamic optimization approach over evolutionary approaches lie in its constraint-handling ability and real-time computational efficiency. The effectiveness and characteristics of the proposed approach are illustrated using many multimodal benchmark functions. PMID:24705545
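
    A rough discrete-time analogue of this scheme: each "network" is replaced here by projected gradient descent, whose fixed points are likewise KKT points of the bound-constrained problem, while a PSO-style update shares personal and global bests across the group. The test function and all coefficients are assumptions of the sketch, not the authors' continuous-time dynamics.

    ```python
    import numpy as np

    def f(x):        # multimodal bound-constrained test problem (Rastrigin)
        return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

    def grad_f(x):
        return 2 * x + 20 * np.pi * np.sin(2 * np.pi * x)

    LO, HI = -5.12, 5.12
    proj = lambda x: np.clip(x, LO, HI)          # projection onto the bounds

    def local_search(x, lr=0.002, steps=500):
        # discrete-time stand-in for a projection neural network:
        # iterate x <- P(x - lr * grad f(x)) toward a KKT point
        for _ in range(steps):
            x = proj(x - lr * grad_f(x))
        return x

    rng = np.random.default_rng(1)
    n, swarm = 5, 12
    X = rng.uniform(LO, HI, (swarm, n))
    pbest = X.copy()
    pval = np.array([f(x) for x in X])
    for it in range(20):
        X = np.array([local_search(x) for x in X])        # precise local search
        vals = np.array([f(x) for x in X])
        upd = vals < pval
        pbest[upd], pval[upd] = X[upd], vals[upd]
        gbest = pbest[pval.argmin()]                      # globally best known solution
        r1, r2 = rng.random((swarm, n)), rng.random((swarm, n))
        X = proj(X + 1.5 * r1 * (pbest - X) + 1.5 * r2 * (gbest - X))
    print("best value found:", pval.min())
    ```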

  6. Aerodynamic design applying automatic differentiation and using robust variable fidelity optimization

    NASA Astrophysics Data System (ADS)

    Takemiya, Tetsushi

    , and that (2) the AMF terminates optimization erroneously when the optimization problems have constraints. The first problem is due to inaccuracy in computing derivatives in the AMF, and the second problem is due to erroneous treatment of the trust region ratio, which sets the size of the domain for an optimization in the AMF. In order to solve the first problem of the AMF, automatic differentiation (AD) technique, which reads the codes of analysis models and automatically generates new derivative codes based on some mathematical rules, is applied. If derivatives are computed with the generated derivative code, they are analytical, and the required computational time is independent of the number of design variables, which is very advantageous for realistic aerospace engineering problems. However, if analysis models implement iterative computations such as computational fluid dynamics (CFD), which solves system partial differential equations iteratively, computing derivatives through the AD requires a massive memory size. The author solved this deficiency by modifying the AD approach and developing a more efficient implementation with CFD, and successfully applied the AD to general CFD software. In order to solve the second problem of the AMF, the governing equation of the trust region ratio, which is very strict against the violation of constraints, is modified so that it can accept the violation of constraints within some tolerance. By accepting violations of constraints during the optimization process, the AMF can continue optimization without terminating immaturely and eventually find the true optimum design point. With these modifications, the AMF is referred to as "Robust AMF," and it is applied to airfoil and wing aerodynamic design problems using Euler CFD software. The former problem has 21 design variables, and the latter 64. In both problems, derivatives computed with the proposed AD method are first compared with those computed with the finite

  7. A data-intensive approach to mechanistic elucidation applied to chiral anion catalysis

    PubMed Central

    Milo, Anat; Neel, Andrew J.; Toste, F. Dean; Sigman, Matthew S.

    2015-01-01

    Knowledge of chemical reaction mechanisms can facilitate catalyst optimization, but extracting that knowledge from a complex system is often challenging. Here we present a data-intensive method for deriving and then predictively applying a mechanistic model of an enantioselective organic reaction. As a validating case study, we selected an intramolecular dehydrogenative C-N coupling reaction, catalyzed by chiral phosphoric acid derivatives, in which catalyst-substrate association involves weak, non-covalent interactions. Little was previously understood regarding the structural origin of enantioselectivity in this system. Catalyst and substrate substituent effects were probed by systematic physical organic trend analysis. Plausible interactions between the substrate and catalyst that govern enantioselectivity were identified and supported experimentally, indicating that such an approach can afford an efficient means of leveraging mechanistic insight to optimize catalyst design. PMID:25678656

  8. A simple approach for predicting time-optimal slew capability

    NASA Astrophysics Data System (ADS)

    King, Jeffery T.; Karpenko, Mark

    2016-03-01

    The productivity of space-based imaging sensors is directly related to the agility of the spacecraft. Increasing the satellite agility, without changing the attitude control hardware, can be accomplished by using optimal control to design shortest-time maneuvers. The performance improvement that can be obtained using optimal control is tied to the specific configuration of the satellite, e.g. mass properties and reaction wheel array geometry. Therefore, it is generally difficult to predict performance without an extensive simulation study. This paper presents a simple idea for estimating the agility enhancement that can be obtained using optimal control without the need to solve any optimal control problems. The approach is based on the concept of the agility envelope, which expresses the capability of a spacecraft in terms of a three-dimensional agility volume. Validation of this new approach is conducted using both simulation and on-orbit data.

  9. Departures From Optimality When Pursuing Multiple Approach or Avoidance Goals

    PubMed Central

    2016-01-01

    This article examines how people depart from optimality during multiple-goal pursuit. The authors operationalized optimality using dynamic programming, which is a mathematical model used to calculate expected value in multistage decisions. Drawing on prospect theory, they predicted that people are risk-averse when pursuing approach goals and are therefore more likely to prioritize the goal in the best position than the dynamic programming model suggests is optimal. The authors predicted that people are risk-seeking when pursuing avoidance goals and are therefore more likely to prioritize the goal in the worst position than is optimal. These predictions were supported by results from an experimental paradigm in which participants made a series of prioritization decisions while pursuing either 2 approach or 2 avoidance goals. This research demonstrates the usefulness of using decision-making theories and normative models to understand multiple-goal pursuit. PMID:26963081

  10. Departures from optimality when pursuing multiple approach or avoidance goals.

    PubMed

    Ballard, Timothy; Yeo, Gillian; Neal, Andrew; Farrell, Simon

    2016-07-01

    This article examines how people depart from optimality during multiple-goal pursuit. The authors operationalized optimality using dynamic programming, which is a mathematical model used to calculate expected value in multistage decisions. Drawing on prospect theory, they predicted that people are risk-averse when pursuing approach goals and are therefore more likely to prioritize the goal in the best position than the dynamic programming model suggests is optimal. The authors predicted that people are risk-seeking when pursuing avoidance goals and are therefore more likely to prioritize the goal in the worst position than is optimal. These predictions were supported by results from an experimental paradigm in which participants made a series of prioritization decisions while pursuing either 2 approach or 2 avoidance goals. This research demonstrates the usefulness of using decision-making theories and normative models to understand multiple-goal pursuit. PMID:26963081
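
    The dynamic-programming benchmark used in these two records can be illustrated generically: a backward recursion computes the expected number of goals attained under optimal prioritization. The progress probabilities and goal distances below are hypothetical, chosen only to show the mechanics rather than the authors' experimental model.

    ```python
    from functools import lru_cache

    P_FOCUS, P_BACKGROUND = 0.7, 0.3    # per-period progress probabilities (hypothetical)

    @lru_cache(maxsize=None)
    def V(d1, d2, t):
        # expected number of goals achieved under optimal prioritization,
        # given remaining distances d1, d2 and t decision periods left
        if t == 0:
            return (d1 == 0) + (d2 == 0)
        best = 0.0
        for p1, p2 in ((P_FOCUS, P_BACKGROUND), (P_BACKGROUND, P_FOCUS)):
            ev = 0.0
            for s1 in (0, 1):             # does goal 1 progress this period?
                for s2 in (0, 1):         # does goal 2 progress?
                    prob = (p1 if s1 else 1 - p1) * (p2 if s2 else 1 - p2)
                    ev += prob * V(max(d1 - s1, 0), max(d2 - s2, 0), t - 1)
            best = max(best, ev)
        return best

    print(V(3, 5, 6))   # optimal expected attainment for an asymmetric start
    ```

    Comparing participants' choices against the argmax of such a recursion is what allows risk-averse (approach) and risk-seeking (avoidance) deviations to be quantified.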

  11. A Synergistic Approach of Desirability Functions and Metaheuristic Strategy to Solve Multiple Response Optimization Problems

    NASA Astrophysics Data System (ADS)

    Bera, Sasadhar; Mukherjee, Indrajit

    2010-10-01

    Ensuring the quality of a product is rarely based on observations of a single quality characteristic. Generally, it is based on observations of a family of properties, so-called `multiple responses'. These multiple responses are often interacting and are measured in a variety of units. Due to the presence of interaction(s), overall optimal conditions for all the responses rarely result from the isolated optimal conditions of individual responses. Conventional optimization techniques, such as design of experiments and linear and nonlinear programming, are generally recommended for single-response optimization problems. Applying any of these techniques to a multiple-response optimization problem may lead to unnecessary simplification of the real problem, with several restrictive model assumptions. In addition, engineering judgement or subjective ways of decision making may play an important role in applying some of these conventional techniques. In this context, a synergistic approach of desirability functions and a metaheuristic technique is a viable alternative for handling multiple-response optimization problems. Metaheuristics, such as simulated annealing (SA) and particle swarm optimization (PSO), have shown immense success in solving various discrete and continuous single-response optimization problems. Motivated by those successful applications, this chapter assesses the potential of a Nelder-Mead simplex-based SA (SIMSA) and PSO to resolve varied multiple-response optimization problems. The computational results clearly indicate the superiority of PSO over SIMSA for the selected problems.
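
    The central construction, collapsing interacting responses into a single desirability score and handing that score to a metaheuristic, fits in a few lines. The response surfaces, desirability limits, and annealing schedule below are hypothetical; the geometric mean is the usual Derringer-style aggregation.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # hypothetical fitted response surfaces for two interacting responses
    y1 = lambda x: 80 - (x[0] - 2)**2 - 0.5 * (x[1] - 1)**2    # larger-is-better
    y2 = lambda x: 5 + 0.8 * x[0]**2 + 0.3 * x[0] * x[1]       # smaller-is-better

    def desirability(x):
        d1 = np.clip((y1(x) - 60) / (80 - 60), 0, 1)   # 60 unacceptable, 80 ideal
        d2 = np.clip((12 - y2(x)) / (12 - 5), 0, 1)    # 12 unacceptable, 5 ideal
        return np.sqrt(d1 * d2)                         # geometric mean of desirabilities

    # simulated annealing over the process-variable space
    x = np.zeros(2)
    best, bval = x.copy(), desirability(x)
    T = 1.0
    for k in range(5000):
        cand = x + rng.normal(0.0, 0.2, 2)
        if rng.random() < np.exp((desirability(cand) - desirability(x)) / T):
            x = cand
            if desirability(x) > bval:
                best, bval = x.copy(), desirability(x)
        T *= 0.999
    print("best settings:", best, "overall desirability:", round(bval, 3))
    ```

    Swapping the annealing loop for PSO, the comparison made in the chapter, changes only the search driver; the desirability objective stays the same.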

  12. Target-classification approach applied to active UXO sites

    NASA Astrophysics Data System (ADS)

    Shubitidze, F.; Fernández, J. P.; Shamatava, Irma; Barrowes, B. E.; O'Neill, K.

    2013-06-01

    This study is designed to illustrate the discrimination performance at two UXO active sites (Oklahoma's Fort Sill and the Massachusetts Military Reservation) of a set of advanced electromagnetic induction (EMI) inversion/discrimination models which include the orthonormalized volume magnetic source (ONVMS), joint diagonalization (JD), and differential evolution (DE) approaches and whose power and flexibility greatly exceed those of the simple dipole model. The Fort Sill site is highly contaminated by a mix of the following types of munitions: 37-mm target practice tracers, 60-mm illumination mortars, 75-mm and 4.5'' projectiles, 3.5'', 2.36'', and LAAW rockets, antitank mine fuzes with and without hex nuts, practice MK2 and M67 grenades, 2.5'' ballistic windshields, M2A1-mines with/without bases, M19-14 time fuzes, and 40-mm practice grenades with/without cartridges. The MMR site contains targets of yet other sizes. In this work we apply our models to EMI data collected using the MetalMapper (MM) and 2 × 2 TEMTADS sensors. The data for each anomaly are inverted to extract estimates of the extrinsic and intrinsic parameters associated with each buried target. (The latter include the total volume magnetic source or NVMS, which relates to size, shape, and material properties; the former include location, depth, and orientation.) The estimated intrinsic parameters are then used for classification performed via library matching and the use of statistical classification algorithms; this process yielded prioritized dig lists that were submitted to the Institute for Defense Analyses (IDA) for independent scoring. The models' classification performance is illustrated and assessed based on these independent evaluations.

  13. Optimal purchasing of raw materials: A data-driven approach

    SciTech Connect

    Muteki, K.; MacGregor, J.F.

    2008-06-15

    An approach to the optimal purchasing of raw materials that will achieve a desired product quality at a minimum cost is presented. A PLS (Partial Least Squares) approach to formulation modeling is used to combine databases on raw material properties and on past process operations, and to relate these to final product quality. These PLS latent variable models are then used in a sequential quadratic programming (SQP) or mixed integer nonlinear programming (MINLP) optimization to select the raw materials, among all those available on the market, the ratios in which to combine them, and the process conditions under which they should be processed. The approach is illustrated for the optimal purchasing of metallurgical coals for coke making in the steel industry.
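
    A toy version of this pipeline: fit a PLS model on a historical database, then run an SQP search over purchase ratios subject to a predicted-quality constraint. Everything below (synthetic data, the property matrix, costs, the quality threshold, and the linear mixing rule) is invented for illustration; scikit-learn's PLSRegression and SciPy's SLSQP stand in for the tools named in the record.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(3)
    # historical database: blended raw-material properties -> product quality (synthetic)
    X_hist = rng.random((60, 4))
    y_hist = X_hist @ np.array([2.0, -1.0, 1.5, 0.5]) + 0.05 * rng.standard_normal(60)
    pls = PLSRegression(n_components=2).fit(X_hist, y_hist)

    props = np.array([[0.9, 0.2, 0.7, 0.1],    # properties of each purchasable material
                      [0.3, 0.8, 0.4, 0.6],
                      [0.5, 0.5, 0.9, 0.2]])
    cost = np.array([120.0, 80.0, 100.0])      # unit costs (hypothetical)

    def blend_quality(r):                      # linear mixing assumption
        return pls.predict((r @ props).reshape(1, -1))[0, 0]

    res = minimize(lambda r: cost @ r, x0=np.ones(3) / 3, method="SLSQP",
                   bounds=[(0.0, 1.0)] * 3,
                   constraints=[{"type": "eq", "fun": lambda r: r.sum() - 1.0},
                                {"type": "ineq", "fun": lambda r: blend_quality(r) - 1.5}])
    print("purchase ratios:", res.x.round(3), "cost:", round(res.fun, 1))
    ```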

  14. A Surrogate Approach to the Experimental Optimization of Multielement Airfoils

    NASA Technical Reports Server (NTRS)

    Otto, John C.; Landman, Drew; Patera, Anthony T.

    1996-01-01

    The incorporation of experimental test data into the optimization process is accomplished through the use of Bayesian-validated surrogates. In the surrogate approach, a surrogate for the experiment (e.g., a response surface) serves in the optimization process. The validation step of the framework provides a qualitative assessment of the surrogate quality, and bounds the surrogate-for-experiment error on designs "near" surrogate-predicted optimal designs. The utility of the framework is demonstrated through its application to the experimental selection of the trailing-edge flap position to achieve a design lift coefficient for a three-element airfoil.

  15. A Model for Applying Lexical Approach in Teaching Russian Grammar.

    ERIC Educational Resources Information Center

    Gettys, Serafima

    The lexical approach to teaching Russian grammar is explained, an instructional sequence is outlined, and a classroom study testing the effectiveness of the approach is reported. The lexical approach draws on research on cognitive psychology, second language acquisition theory, and research on learner language. Its bases in research and its…

  16. A Numerical Optimization Approach for Tuning Fuzzy Logic Controllers

    NASA Technical Reports Server (NTRS)

    Woodard, Stanley E.; Garg, Devendra P.

    1998-01-01

    This paper develops a method to tune fuzzy controllers using numerical optimization. The main attribute of this approach is that it allows fuzzy logic controllers to be tuned to achieve global performance requirements. Furthermore, this approach allows design constraints to be implemented during the tuning process. The method tunes the controller by parameterizing the membership functions for error, change-in-error and control output. The resulting parameters form a design vector which is iteratively changed to minimize an objective function. The minimal objective function results in an optimal performance of the system. A spacecraft mounted science instrument line-of-sight pointing control is used to demonstrate results.
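
    A compact illustration of the tuning loop described here: parameterize triangular membership functions and output singletons, simulate a simple closed loop, and let a numerical optimizer minimize the integrated squared error. The first-order plant, the membership layout, and the Nelder-Mead optimizer are assumptions of the sketch, not the paper's spacecraft pointing example.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def tri(x, c, w):
        # triangular membership function centered at c with half-width w
        return max(0.0, 1.0 - abs(x - c) / w)

    def controller(e, p):
        wN, wZ, wP, uN, uZ, uP = p            # membership widths and output singletons
        mu = np.array([tri(e, -1.0, wN), tri(e, 0.0, wZ), tri(e, 1.0, wP)])
        if mu.sum() == 0.0:
            return 0.0
        return mu @ np.array([uN, uZ, uP]) / mu.sum()   # weighted-average defuzzification

    def objective(p):
        # integrated squared tracking error for a unit step on a first-order plant
        x, dt, J = 0.0, 0.05, 0.0
        for _ in range(200):
            e = 1.0 - x                        # setpoint = 1
            x += dt * (-x + controller(e, p))  # plant: x' = -x + u
            J += dt * e * e
        return J

    p0 = np.array([1.0, 1.0, 1.0, -2.0, 0.0, 2.0])      # initial design vector
    res = minimize(objective, p0, method="Nelder-Mead")
    print("tuned design vector:", res.x.round(3), "objective:", round(res.fun, 4))
    ```

    Design constraints (e.g., bounds on widths) would enter either as penalties in the objective or as a constrained optimizer, as the paper's global-performance framing suggests.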

  17. Universal Approach to Optimal Photon Storage in Atomic Media

    SciTech Connect

    Gorshkov, Alexey V.; Andre, Axel; Lukin, Mikhail D.; Fleischhauer, Michael; Soerensen, Anders S.

    2007-03-23

    We present a universal physical picture for describing storage and retrieval of photon wave packets in a Λ-type atomic medium. This physical picture encompasses a variety of different approaches to pulse storage ranging from adiabatic reduction of the photon group velocity and pulse-propagation control via off-resonant Raman fields to photon-echo-based techniques. Furthermore, we derive an optimal control strategy for storage and retrieval of a photon wave packet of any given shape. All these approaches, when optimized, yield identical maximum efficiencies, which only depend on the optical depth of the medium.

  18. Optimized variable source-profile approach for source apportionment

    NASA Astrophysics Data System (ADS)

    Marmur, Amit; Mulholland, James A.; Russell, Armistead G.

    An expanded chemical mass balance (CMB) approach for PM2.5 source apportionment is presented in which both the local source compositions and corresponding contributions are determined from ambient measurements and initial estimates of source compositions using a global-optimization mechanism. Such an approach can serve as an alternative to using predetermined (measured) source profiles, as traditionally used in CMB applications, which are not always representative of the region and/or time period of interest. Constraints based on ranges of typical source profiles are used to ensure that the compositions identified are representative of sources and are less ambiguous than the factors/sources identified by typical factor analysis (FA) techniques. Gas-phase data (SO2, CO and NOy) are also used, as these data can assist in identifying sources. Impacts of identified sources are then quantified by minimizing the weighted error between apportioned and measured levels of the fitting species. This technique was applied to a dataset of PM2.5 measurements at the former Atlanta Supersite (Jefferson Street site), to apportion PM2.5 mass into nine source categories. Good agreement is found when these source impacts are compared with those derived based on measured source profiles as well as those derived using a current FA technique, Positive Matrix Factorization. The proposed method can be used to assess the representativeness of measured source profiles and to help identify those profiles that may be in significant error, as well as to quantify uncertainties in source-impact estimates, due in part to uncertainties in source compositions.
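
    The optimization at the heart of this approach, adjusting both the source contributions and the profiles themselves within plausible ranges while minimizing a weighted residual, can be miniaturized as a bounded least-squares problem. The synthetic data, the +/-0.1 profile bounds, and the use of SciPy's least_squares in place of the paper's global optimizer are all assumptions of the sketch.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(4)
    n_species, n_sources, n_obs = 8, 3, 40

    F0 = rng.dirichlet(np.ones(n_species), n_sources).T     # initial profile estimates
    true_S = rng.gamma(2.0, 1.0, (n_sources, n_obs))        # synthetic contributions
    C = F0 @ true_S * rng.normal(1.0, 0.05, (n_species, n_obs))   # "ambient" data
    sigma = 0.05 * C + 1e-3                                 # measurement uncertainties

    nF = n_species * n_sources

    def residuals(z):
        F = z[:nF].reshape(n_species, n_sources)            # adjustable profiles
        S = z[nF:].reshape(n_sources, n_obs)                # source contributions
        return ((F @ S - C) / sigma).ravel()                # weighted error

    z0 = np.concatenate([F0.ravel(), np.ones(n_sources * n_obs)])
    lb = np.concatenate([np.clip(F0.ravel() - 0.1, 0, 1), np.zeros(n_sources * n_obs)])
    ub = np.concatenate([np.clip(F0.ravel() + 0.1, 0, 1),
                         np.full(n_sources * n_obs, np.inf)])
    sol = least_squares(residuals, z0, bounds=(lb, ub))
    print("weighted misfit:", round(sol.cost, 1))
    ```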

  19. Geomorphological Approach to Glacial and Snow Modeling applied to Hydrology

    NASA Astrophysics Data System (ADS)

    Gsell, P.; Le Moine, N.; Ribstein, P.

    2012-12-01

    Hydrological modeling of mountainous watersheds faces specific problems due to the effect of ice and snow cover in a certain range of altitude. The representation of snow and ice storage dynamics is a main issue for understanding the mechanisms of mountainous hydrosystems under future and past climate. It is also an operational concern for watersheds equipped with hydroelectric dams, whose dimensioning and electric capacity evaluation rely on a good understanding of ice-snow dynamics, in particular over a span of several years. The objective of the study is to advance, from a theoretical viewpoint, along a path between the classical representation used in hydrological models (an infinite ice stock) and 3D ice-tongue modeling that explicitly describes viscous glacier evolution at the river basin scale. Geomorphology is used in this approach. Noticing that glaciers, at a catchment scale, take the drainage system as a geometrical framework, one axis of our study lies in coupling the probabilistic description of the river network with deterministic glacier models, using concepts that have already been used in hydrological modeling such as the Geomorphological Instantaneous Unit Hydrograph. By analogy, a simplified glacier model (Shallow Ice Approximation or Minimal Glacier Models) is put together as a transfer function to simulate large-scale ablation and ice front dynamics. In our study, we analyze the distribution of upstream area for a dataset of 78 river basins in the Southern Rocky Mountains. In a certain range of scale and under a few assumptions, we use a statistical model for river network description, which we adapt to account for relief by linking hypsometry and morphology. The developed model P(A > a, z) allows us to identify any site of the river network from a DEM analysis via the elevation z and upstream area a fields with the help of 2 parameters. The 3D consideration may be relevant for hydrologic implications, as the production function usually increases with relief. This model

  20. A general optimization method applied to a vdW-DF functional for water

    NASA Astrophysics Data System (ADS)

    Fritz, Michelle; Soler, Jose M.; Fernandez-Serra, Marivi

    In particularly delicate systems, like liquid water, ab initio exchange and correlation functionals are simply not accurate enough for many practical applications. In these cases, fitting the functional to reference data is a sensible alternative to empirical interatomic potentials. However, a global optimization requires functional forms that depend on many parameters, and the usual trial-and-error strategy becomes cumbersome and suboptimal. We have developed a general and powerful optimization scheme called data projection onto parameter space (DPPS) and applied it to the optimization of a van der Waals density functional (vdW-DF) for water. In an arbitrarily large parameter space, DPPS solves for a vector of unknown parameters for a given set of known data, and poorly sampled subspaces are determined by the physically motivated functional shape of ab initio functionals using Bayes' theorem. We present a new GGA exchange functional that has been optimized with the DPPS method for 1-body, 2-body, and 3-body energies of water systems, together with results from testing the performance of the optimized functional when applied to the calculation of ice cohesion energies and ab initio liquid water simulations. We found that our optimized functional improves the description of both liquid water and ice when compared to other versions of GGA exchange.

  1. Optimized perturbation theory applied to jet cross sections in e + e - annihilation

    NASA Astrophysics Data System (ADS)

    Kramer, G.; Lampe, B.

    1988-03-01

    The optimized perturbation theory proposed by Stevenson to deal with coupling-constant scheme dependence is applied to the calculation of the total cross section and jet multiplicities in e + e - annihilation. The results are compared with those of simple perturbation theory and with recent experimental cluster multiplicities.

  2. Optimal Flight for Ground Noise Reduction in Helicopter Landing Approach: Optimal Altitude and Velocity Control

    NASA Astrophysics Data System (ADS)

    Tsuchiya, Takeshi; Ishii, Hirokazu; Uchida, Junichi; Gomi, Hiromi; Matayoshi, Naoki; Okuno, Yoshinori

    This study aims to obtain the optimal flights of a helicopter that reduce ground noise during landing approach with an optimization technique, and to conduct flight tests for confirming the effectiveness of the optimal solutions. Past experiments of Japan Aerospace Exploration Agency (JAXA) show that the noise of a helicopter varies significantly according to its flight conditions, especially depending on the flight path angle. We therefore build a simple noise model for a helicopter, in which the level of the noise generated from a point sound source is a function only of the flight path angle. Using equations of motion for flight in a vertical plane, we define optimal control problems for minimizing noise levels measured at points on the ground surface, and obtain optimal controls for specified initial altitudes, flight constraints, and wind conditions. The obtained optimal flights avoid the flight path angle which generates large noise and decrease the flight time, which are different from conventional flight. Finally, we verify the validity of the optimal flight patterns through flight experiments. The actual flights following the optimal paths resulted in noise reduction, which shows the effectiveness of the optimization.

  3. The optimality of potential rescaling approaches in land data assimilation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    It is well-known that systematic differences exist between modeled and observed realizations of hydrological variables like soil moisture. Prior to data assimilation, these differences must be removed in order to obtain an optimal analysis. A number of rescaling approaches have been proposed for rem...

  4. Successive linear optimization approach to the dynamic traffic assignment problem

    SciTech Connect

    Ho, J.K.

    1980-11-01

    A dynamic model for the optimal control of traffic flow over a network is considered. The model, which treats congestion explicitly in the flow equations, gives rise to nonlinear, nonconvex mathematical programming problems. It has been shown for a piecewise linear version of this model that a global optimum is contained in the set of optimal solutions of a certain linear program. A sufficient condition for optimality is presented, which implies that a global optimum can be obtained by successively optimizing at most N + 1 objective functions for the linear program, where N is the number of time periods in the planning horizon. Computational results are reported to indicate the efficiency of this approach.

  5. New approaches to optimization in aerospace conceptual design

    NASA Technical Reports Server (NTRS)

    Gage, Peter J.

    1995-01-01

    Aerospace design can be viewed as an optimization process, but conceptual studies are rarely performed using formal search algorithms. Three issues that restrict the success of automatic search are identified in this work. New approaches are introduced to address the integration of analyses and optimizers, to avoid the need for accurate gradient information and a smooth search space (required for calculus-based optimization), and to remove the restrictions imposed by fixed-complexity problem formulations. (1) Optimization should be performed in a flexible environment. A quasi-procedural architecture is used to conveniently link analysis modules and automatically coordinate their execution. It efficiently controls large-scale design tasks. (2) Genetic algorithms provide a search method for discontinuous or noisy domains. The utility of genetic optimization is demonstrated here, but parameter encodings and constraint-handling schemes must be carefully chosen to avoid premature convergence to suboptimal designs. The relationship between genetic and calculus-based methods is explored. (3) A variable-complexity genetic algorithm is created to permit flexible parameterization, so that the level of description can change during optimization. This new optimizer automatically discovers novel designs in structural and aerodynamic tasks.
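
    As a concrete reference point for the genetic-search discussion in this record, here is a minimal real-coded GA with tournament selection, blend crossover, per-gene mutation, and elitism. The Rastrigin test function and every rate are illustrative choices; as the record cautions, poorly chosen encodings or rates invite premature convergence.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def f(x):    # multimodal objective: no gradients or smoothness required
        return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

    POP, DIM, GENS = 40, 6, 200

    def tournament(pop, fit):
        i, j = rng.integers(len(pop), size=2)
        return pop[i] if fit[i] < fit[j] else pop[j]

    pop = rng.uniform(-5, 5, (POP, DIM))
    for gen in range(GENS):
        fit = np.array([f(x) for x in pop])
        new = [pop[fit.argmin()].copy()]               # elitism: keep the best
        while len(new) < POP:
            a, b = tournament(pop, fit), tournament(pop, fit)
            w = rng.random(DIM)                        # blend crossover
            child = w * a + (1 - w) * b
            mut = rng.random(DIM) < 0.1                # per-gene mutation
            child[mut] += rng.normal(0.0, 0.5, mut.sum())
            new.append(np.clip(child, -5, 5))
        pop = np.array(new)
    print("best objective:", min(f(x) for x in pop))
    ```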

  6. Optimizing selection of controllable variables to minimize downwind drift from aerially applied sprays

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Drift of aerially applied crop protection and production materials is studied using a novel simulation-based approach. This new approach first studies many factors that can potentially contribute to downwind deposition from aerial spray application to narrow down the major contributing factors. An o...

  7. PARETO: A novel evolutionary optimization approach to multiobjective IMRT planning

    SciTech Connect

    Fiege, Jason; McCurdy, Boyd; Potrebko, Peter; Champion, Heather; Cull, Andrew

    2011-09-15

    Purpose: In radiation therapy treatment planning, the clinical objectives of uniform high dose to the planning target volume (PTV) and low dose to the organs-at-risk (OARs) are invariably in conflict, often requiring compromises to be made between them when selecting the best treatment plan for a particular patient. In this work, the authors introduce Pareto-Aware Radiotherapy Evolutionary Treatment Optimization (pareto), a multiobjective optimization tool to solve for beam angles and fluence patterns in intensity-modulated radiation therapy (IMRT) treatment planning. Methods: pareto is built around a powerful multiobjective genetic algorithm (GA), which allows us to treat the problem of IMRT treatment plan optimization as a combined monolithic problem, where all beam fluence and angle parameters are treated equally during the optimization. We have employed a simple parameterized beam fluence representation with a realistic dose calculation approach, incorporating patient scatter effects, to demonstrate feasibility of the proposed approach on two phantoms. The first phantom is a simple cylindrical phantom containing a target surrounded by three OARs, while the second phantom is more complex and represents a paraspinal patient. Results: pareto results in a large database of Pareto nondominated solutions that represent the necessary trade-offs between objectives. The solution quality was examined for several PTV and OAR fitness functions. The combination of a conformity-based PTV fitness function and a dose-volume histogram (DVH) or equivalent uniform dose (EUD) -based fitness function for the OAR produced relatively uniform and conformal PTV doses, with well-spaced beams. A penalty function added to the fitness functions eliminates hotspots. Comparison of resulting DVHs to those from treatment plans developed with a single-objective fluence optimizer (from a commercial treatment planning system) showed good correlation. Results also indicated that pareto shows
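
    The backbone of any such multiobjective GA is the nondominated filter that maintains the Pareto database. A minimal sketch follows, with random plan scores standing in for the PTV and OAR fitness values (both objectives minimized):

    ```python
    import numpy as np

    def nondominated_mask(F):
        # F: (n_solutions, n_objectives), all objectives to be minimized.
        # Keep a row only if no other row is at least as good everywhere
        # and strictly better somewhere.
        keep = np.ones(len(F), dtype=bool)
        for i in range(len(F)):
            dominators = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
            keep[i] = not dominators.any()
        return keep

    rng = np.random.default_rng(8)
    scores = rng.random((200, 2))   # e.g. (PTV conformity penalty, OAR dose) per plan
    front = scores[nondominated_mask(scores)]
    print(len(front), "plans on the Pareto front")
    ```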

  8. AI approach to optimal var control with fuzzy reactive loads

    SciTech Connect

    Abdul-Rahman, K.H.; Shahidehpour, S.M.; Daneshdoost, M.

    1995-02-01

    This paper presents an artificial intelligence (AI) approach to the optimal reactive power (var) control problem. The method incorporates the reactive load uncertainty in optimizing the overall system performance. An artificial neural network (ANN) enhanced by fuzzy sets is used to determine the memberships of control variables corresponding to the given load values. A power flow solution then determines the corresponding state of the system. Since the resulting system state may not be feasible in real time, a heuristic method based on the application of sensitivities in an expert system is employed to refine the solution with minimum adjustments of control variables. Test cases and numerical results demonstrate the applicability of the proposed approach. Simplicity, processing speed and the ability to model load uncertainties make this approach a viable option for on-line var control.

  9. Effects of optimism on creativity under approach and avoidance motivation

    PubMed Central

    Icekson, Tamar; Roskes, Marieke; Moran, Simone

    2014-01-01

    Focusing on avoiding failure or negative outcomes (avoidance motivation) can undermine creativity, due to cognitive (e.g., threat appraisals), affective (e.g., anxiety), and volitional processes (e.g., low intrinsic motivation). This can be problematic for people who are avoidance motivated by nature and in situations in which threats or potential losses are salient. Here, we review the relation between avoidance motivation and creativity, and the processes underlying this relation. We highlight the role of optimism as a potential remedy for the creativity undermining effects of avoidance motivation, due to its impact on the underlying processes. Optimism, expecting to succeed in achieving success or avoiding failure, may reduce negative effects of avoidance motivation, as it eases threat appraisals, anxiety, and disengagement—barriers playing a key role in undermining creativity. People experience these barriers more under avoidance than under approach motivation, and beneficial effects of optimism should therefore be more pronounced under avoidance than approach motivation. Moreover, due to their eagerness, approach motivated people may even be more prone to unrealistic over-optimism and its negative consequences. PMID:24616690

  10. An efficient identification approach for stable and unstable nonlinear systems using Colliding Bodies Optimization algorithm.

    PubMed

    Pal, Partha S; Kar, R; Mandal, D; Ghoshal, S P

    2015-11-01

    This paper presents an efficient approach to identifying different stable and practically useful Hammerstein models, as well as an unstable nonlinear process along with its stable closed-loop counterpart, with the help of an evolutionary algorithm, the Colliding Bodies Optimization (CBO) algorithm. The performance measures of the CBO-based optimization approach, such as precision and accuracy, are justified by the minimum output mean-square error (MSE), which signifies that the amounts of bias and variance in the output domain are also the least. It is also observed that the optimization of the output MSE in the presence of outliers has resulted in a consistently very close estimation of the output parameters, which justifies the general applicability of the CBO algorithm to the system identification problem and also establishes the practical usefulness of the applied approach. Optimum values of the MSEs, computational times, and statistical information on the MSEs are all found to be superior to those of other existing similar types of stochastic-algorithm-based approaches reported in recent literature, which establishes the robustness and efficiency of the applied CBO-based identification scheme. PMID:26362314

  11. Shape Optimization and Supremal Minimization Approaches in Landslides Modeling

    SciTech Connect

    Hassani, Riad Ionescu, Ioan R. Lachand-Robert, Thomas

    2005-10-15

    The steady-state unidirectional (anti-plane) flow of a Bingham fluid is considered. We take into account the inhomogeneous yield limit of the fluid, which is well adjusted to the description of landslides. The blocking property is analyzed and we introduce the safety factor, which is connected to two optimization problems stated in terms of velocities and stresses. Concerning the velocity analysis, the minimum problem in BV(Ω) is equivalent to a shape-optimization problem. The optimal set is the part of the land which slides whenever the loading parameter becomes greater than the safety factor. This is proved in the one-dimensional case and conjectured for the two-dimensional flow. For the stress-optimization problem we give a stream function formulation in order to deduce a minimum problem in W^{1,∞}(Ω), and we prove the existence of a minimizer. The L^p(Ω) approximation technique is used to get a sequence of minimum problems for smooth functionals. We propose two numerical approaches following the two analyses presented above. First, we describe a numerical method to compute the safety factor through equivalence with the shape-optimization problem. Then a finite-element approach and a Newton method are used to obtain a numerical scheme for the stress formulation. Some numerical results are given in order to compare the two methods. The shape-optimization method is sharp in detecting the sliding zones, but its convergence is very sensitive to the choice of the parameters. The stress-optimization method is more robust and gives precise safety factors, but its results cannot easily be processed to obtain the sliding zone.

  12. Applying Digital Sensor Technology: A Problem-Solving Approach

    ERIC Educational Resources Information Center

    Seedhouse, Paul; Knight, Dawn

    2016-01-01

    There is currently an explosion in the number and range of new devices coming onto the technology market that use digital sensor technology to track aspects of human behaviour. In this article, we present and exemplify a three-stage model for the application of digital sensor technology in applied linguistics that we have developed, namely,…

  13. Defining and Applying a Functionality Approach to Intellectual Disability

    ERIC Educational Resources Information Center

    Luckasson, R.; Schalock, R. L.

    2013-01-01

    Background: The current functional models of disability do not adequately incorporate significant changes of the last three decades in our understanding of human functioning, and how the human functioning construct can be applied to clinical functions, professional practices and outcomes evaluation. Methods: The authors synthesise current…

  14. Teaching Social Science Research: An Applied Approach Using Community Resources.

    ERIC Educational Resources Information Center

    Gilliland, M. Janice; And Others

    A four-week summer project for 100 rural tenth graders in the University of Alabama's Biomedical Sciences Preparation Program (BioPrep) enabled students to acquire and apply social sciences research skills. The students investigated drinking water quality in three rural Alabama counties by interviewing local officials, health workers, and…

  15. Experimental and applied approaches to control Salmonella in broiler processing

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Control of Salmonella on poultry meat should ideally include efforts from the breeder farm to the fully processed and further processed product on through consumer education. In the U.S. regulatory scrutiny is often applied at the chill tank. Therefore, processing parameters are an important compo...

  16. Optimal control of underactuated mechanical systems: A geometric approach

    NASA Astrophysics Data System (ADS)

    Colombo, Leonardo; Martín De Diego, David; Zuccalli, Marcela

    2010-08-01

    In this paper, we consider a geometric formalism for optimal control of underactuated mechanical systems. Our techniques are an adaptation of the classical Skinner and Rusk approach for the case of Lagrangian dynamics with higher-order constraints. We study a regular case where it is possible to establish a symplectic framework and, as a consequence, to obtain a unique vector field determining the dynamics of the optimal control problem. These developments will allow us to develop a new class of geometric integrators based on discrete variational calculus.

  17. Adaptive Wing Camber Optimization: A Periodic Perturbation Approach

    NASA Technical Reports Server (NTRS)

    Espana, Martin; Gilyard, Glenn

    1994-01-01

    Available redundancy among aircraft control surfaces allows for effective wing camber modifications. As shown in the past, this fact can be used to improve aircraft performance. To date, however, algorithm developments for in-flight camber optimization have been limited. This paper presents a perturbational approach for cruise optimization through in-flight camber adaptation. The method uses, as a performance index, an indirect measurement of the instantaneous net thrust. As such, the actual performance improvement comes from the integrated effects of airframe and engine. The algorithm, whose design and robustness properties are discussed, is demonstrated on the NASA Dryden B-720 flight simulator.

  18. Sequential activation of metabolic pathways: a dynamic optimization approach.

    PubMed

    Oyarzún, Diego A; Ingalls, Brian P; Middleton, Richard H; Kalamatianos, Dimitrios

    2009-11-01

    The regulation of cellular metabolism facilitates robust cellular operation in the face of changing external conditions. The cellular response to this varying environment may include the activation or inactivation of appropriate metabolic pathways. Experimental and numerical observations of sequential timing in pathway activation have been reported in the literature. It has been argued that such patterns can be rationalized by means of an underlying optimal metabolic design. In this paper we pose a dynamic optimization problem that accounts for time-resource minimization in pathway activation under constrained total enzyme abundance. The optimized variables are time-dependent enzyme concentrations that drive the pathway to a steady state characterized by a prescribed metabolic flux. The problem formulation addresses unbranched pathways with irreversible kinetics. Neither specific reaction kinetics nor a fixed pathway length is assumed. In the optimal solution, each enzyme follows a switching profile between zero and maximum concentration, following a temporal sequence that matches the pathway topology. This result provides an analytic justification of the sequential activation previously described in the literature. In contrast with the existing numerical approaches, the activation sequence is proven to be optimal for a generic class of monomolecular kinetics. This class includes, but is not limited to, Mass Action, Michaelis-Menten, Hill, and some Power-law models. This suggests that sequential enzyme expression may be a common feature of metabolic regulation, as it is a robust property of optimal pathway activation. PMID:19412635
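
    The bang-bang structure of the optimal solution is easy to visualize by simulation. The sketch below integrates a three-step mass-action pathway in which, per the switching result, only one enzyme holds the full budget at a time; the rate constants and switch times are arbitrary illustrative values, not outputs of the paper's optimization.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    k = np.array([1.0, 1.0, 1.0])     # mass-action rate constants (illustrative)
    t_switch = [0.0, 2.0, 4.0]        # activation times, ordered like the pathway
    E_MAX = 1.0                       # total enzyme budget

    def enzymes(t):
        # switching profile: the most recently activated enzyme takes the whole budget
        e = np.zeros(3)
        e[max(i for i in range(3) if t >= t_switch[i])] = E_MAX
        return e

    def rhs(t, s):
        v = enzymes(t) * k * s[:3]    # reaction i consumes metabolite i
        return [-v[0], v[0] - v[1], v[1] - v[2], v[2]]

    # s = [substrate, intermediate 1, intermediate 2, product]
    sol = solve_ivp(rhs, (0, 10), [1.0, 0.0, 0.0, 0.0], max_step=0.05)
    print("product formed:", round(sol.y[3, -1], 3))
    ```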

  19. Hybrid Swarm Intelligence Optimization Approach for Optimal Data Storage Position Identification in Wireless Sensor Networks

    PubMed Central

    Mohanasundaram, Ranganathan; Periasamy, Pappampalayam Sanmugam

    2015-01-01

    The current high-profile debate with regard to data storage and its growth has made storage a strategic task in the world of networking. It mainly depends on the sensor nodes called producers, the base stations, and also the consumers (users and sensor nodes) that retrieve and use the data. The main concern dealt with here is to find an optimal data storage position in wireless sensor networks. The works carried out earlier did not utilize swarm-intelligence-based optimization approaches to find the optimal data storage positions. To achieve this goal, an efficient swarm intelligence approach is used to choose suitable positions for a storage node. Thus, a hybrid particle swarm optimization algorithm has been used to find suitable positions for storage nodes while the total energy cost of data transmission is minimized. Clustering-based distributed data storage is utilized to solve the clustering problem using the fuzzy C-means algorithm. This research work also considers the data rates and locations of multiple producers and consumers to find optimal data storage positions. The algorithm is implemented in a network simulator and the experimental results show that the proposed clustering and swarm intelligence based ODS strategy is more effective than the earlier approaches. PMID:25734182

  20. Hybrid swarm intelligence optimization approach for optimal data storage position identification in wireless sensor networks.

    PubMed

    Mohanasundaram, Ranganathan; Periasamy, Pappampalayam Sanmugam

    2015-01-01

    The current high-profile debate with regard to data storage and its growth has made storage a strategic task in the world of networking. It mainly depends on the sensor nodes called producers, the base stations, and also the consumers (users and sensor nodes) that retrieve and use the data. The main concern dealt with here is to find an optimal data storage position in wireless sensor networks. The works carried out earlier did not utilize swarm-intelligence-based optimization approaches to find the optimal data storage positions. To achieve this goal, an efficient swarm intelligence approach is used to choose suitable positions for a storage node. Thus, a hybrid particle swarm optimization algorithm has been used to find suitable positions for storage nodes while the total energy cost of data transmission is minimized. Clustering-based distributed data storage is utilized to solve the clustering problem using the fuzzy C-means algorithm. This research work also considers the data rates and locations of multiple producers and consumers to find optimal data storage positions. The algorithm is implemented in a network simulator and the experimental results show that the proposed clustering and swarm intelligence based ODS strategy is more effective than the earlier approaches. PMID:25734182
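
    Stripped to its core, the placement problem is: choose a storage position minimizing the rate-weighted transmission cost between producers, storage, and consumers. The PSO sketch below uses a squared-distance energy model and random node layouts, all illustrative assumptions; the fuzzy C-means clustering stage of the full method is omitted.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    producers = rng.uniform(0, 100, (20, 2)); prod_rate = rng.uniform(1, 5, 20)
    consumers = rng.uniform(0, 100, (5, 2));  cons_rate = rng.uniform(1, 3, 5)

    def energy(pos):
        # transmission cost ~ data rate x squared distance (simplified radio model)
        d_p = np.linalg.norm(producers - pos, axis=1)
        d_c = np.linalg.norm(consumers - pos, axis=1)
        return prod_rate @ d_p**2 + cons_rate @ d_c**2

    # standard PSO over candidate storage positions
    X = rng.uniform(0, 100, (30, 2)); V = np.zeros_like(X)
    pb = X.copy(); pbv = np.array([energy(x) for x in X])
    gb = pb[pbv.argmin()]
    for _ in range(200):
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        V = 0.7 * V + 1.5 * r1 * (pb - X) + 1.5 * r2 * (gb - X)
        X = np.clip(X + V, 0, 100)
        vals = np.array([energy(x) for x in X])
        upd = vals < pbv
        pb[upd], pbv[upd] = X[upd], vals[upd]
        gb = pb[pbv.argmin()]
    print("storage position:", gb.round(1), "energy:", round(pbv.min(), 1))
    ```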

  1. A split-optimization approach for obtaining multiple solutions in single-objective process parameter optimization.

    PubMed

    Rajora, Manik; Zou, Pan; Yang, Yao Guang; Fan, Zhi Wen; Chen, Hung Yi; Wu, Wen Chieh; Li, Beizhi; Liang, Steven Y

    2016-01-01

    It can be observed from the experimental data of different processes that different process parameter combinations can lead to the same performance indicators; however, during the optimization of process parameters using current techniques, only one of these combinations can be found when a given objective function is specified. The combination of process parameters obtained after optimization may not always be applicable in actual production or may lead to undesired experimental conditions. In this paper, a split-optimization approach is proposed for obtaining multiple solutions in a single-objective process parameter optimization problem. This is accomplished by splitting the original search space into smaller sub-search spaces and using a GA in each sub-search space to optimize the process parameters. Two different methods, i.e., cluster centers and a hill-and-valley splitting strategy, were used to split the original search space, and their efficiency was measured against a method in which the original search space is split into equal smaller sub-search spaces. The proposed approach was used to obtain multiple optimal process parameter combinations for electrochemical micro-machining. The results obtained from the case study showed that the cluster centers and hill-and-valley splitting strategies were more efficient in splitting the original search space than the method in which the original search space is divided into smaller equal sub-search spaces. PMID:27625978
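
    The baseline variant, equal splitting with an independent optimizer per sub-space, fits in a few lines. SciPy's differential evolution stands in below for the paper's GA, and the toy objective reaches its optimal value at several distinct parameter combinations; the grid size and acceptance threshold are arbitrary.

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution

    def f(x):
        # objective for which many parameter combinations give the same performance
        return (np.sin(3 * x[0]) * np.cos(3 * x[1]))**2

    optima = []
    for i in range(4):                     # split [0,2]^2 into 16 equal sub-spaces
        for j in range(4):
            box = [(0.5 * i, 0.5 * i + 0.5), (0.5 * j, 0.5 * j + 0.5)]
            res = differential_evolution(f, box, seed=7, tol=1e-8)
            if res.fun < 1e-6:             # keep sub-space optima reaching the target
                optima.append(res.x.round(3))
    print(len(optima), "distinct near-optimal parameter combinations found")
    ```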

  2. Laser therapy applying the differential approaches and biophotometrical elements

    NASA Astrophysics Data System (ADS)

    Mamedova, F. M.; Akbarova, Ju. A.; Umarova, D. A.; Yudin, G. A.

    1995-04-01

    The aim of the present paper is the presentation of biophotometrical data obtained from various anatomic-topographical mouth areas, to be used for the development of differential approaches to laser therapy in dentistry. Biophotometrical measurements were carried out using a portable biophotometer, part of a multifunctional system for laser therapy, acupuncture and biophotometry referred to as 'Aura-laser'. The results of the biophotometrical measurements allow the implementation of differential approaches to laser therapy of parodontitis and mucous mouth tissue, taking their clinical form and course of disease into account.

  3. RePAMO: Recursive Perturbation Approach for Multimodal Optimization

    NASA Astrophysics Data System (ADS)

    Dasgupta, Bhaskar; Divya, Kotha; Mehta, Vivek Kumar; Deb, Kalyanmoy

    2013-09-01

    In this article, a strategy is presented to exploit classical algorithms for multimodal optimization problems, which recursively applies any suitable local optimization method, in the present case Nelder and Mead's simplex search method, in the search domain. The proposed method follows a systematic way of restarting the algorithm. The idea of climbing the hills and sliding down to the neighbouring valleys is utilized. The implementation of the algorithm finds local minima as well as maxima. The concept of perturbing the minimum/maximum in several directions and restarting the algorithm for maxima/minima is introduced. The method performs favourably in comparison to other global optimization methods. The results of this algorithm, named RePAMO, are compared with the GA-clearing and ASMAGO techniques in terms of the number of function evaluations. Based on the results, it has been found that RePAMO outperforms GA-clearing and ASMAGO by a significant margin.
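
    The climb-and-slide restart logic is easy to sketch in one dimension: converge with Nelder-Mead, perturb the minimizer in each direction far enough to cross the adjacent hill, restart, and deduplicate. The test function, perturbation size, and tolerance below are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    f = lambda x: np.sin(5 * x[0]) + 0.1 * x[0]**2    # multimodal 1-D test function

    def local_min(x0):
        return minimize(f, x0, method="Nelder-Mead").x

    minima, frontier = [], [local_min(np.array([0.0]))]
    while frontier:
        m = frontier.pop()
        if any(np.allclose(m, known, atol=1e-3) for known in minima):
            continue                                   # this valley is already known
        minima.append(m)
        for direction in (-1.0, 1.0):                  # perturb in several directions
            frontier.append(local_min(m + direction * 0.8))   # step over the hill
    print(sorted(round(float(m[0]), 3) for m in minima))
    ```

    Maxima are found the same way by restarting on -f from perturbed minima, mirroring the minimum/maximum alternation described above.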

  4. Applying Socio-Semiotics to Organizational Communication: A New Approach.

    ERIC Educational Resources Information Center

    Cooren, Francois

    1999-01-01

    Argues that a socio-semiotic approach to organizational communication opens up a middle course leading to a reconciliation of the functionalist and interpretive movements. Outlines and illustrates three premises to show how they enable scholars to reconceptualize the opposition between functionalism and interpretivism. Concludes that organizations…

  5. Dialogical Approach Applied in Group Counselling: Case Study

    ERIC Educational Resources Information Center

    Koivuluhta, Merja; Puhakka, Helena

    2013-01-01

    This study utilizes structured group counselling and a dialogical approach to develop a group counselling intervention for students beginning a computer science education. The study assesses the outcomes of group counselling from the standpoint of the development of the students' self-observation. The research indicates that group counselling…

  6. Tennis: Applied Examples of a Game-Based Teaching Approach

    ERIC Educational Resources Information Center

    Crespo, Miguel; Reid, Machar M.; Miley, Dave

    2004-01-01

    In this article, the authors reveal that tennis has been increasingly taught with a tactical model or game-based approach, which emphasizes learning through practice in match-like drills and actual play, rather than in practicing strokes for exact technical execution. Its goal is to facilitate the player's understanding of the tactical, physical…

  7. Applied Ethics and the Humanistic Tradition: A Comparative Curricula Approach.

    ERIC Educational Resources Information Center

    Deonanan, Carlton R.; Deonanan, Venus E.

    This research work investigates the problem of "Leadership, and the Ethical Dimension: A Comparative Curricula Approach." The research problem is investigated from the academic areas of (1) philosophy; (2) comparative curricula; (3) subject matter areas of English literature and intellectual history; (4) religion; and (5) psychology. Different…

  8. A quality by design approach to optimization of emulsions for electrospinning using factorial and D-optimal designs.

    PubMed

    Badawi, Mariam A; El-Khordagui, Labiba K

    2014-07-16

    Emulsion electrospinning is a multifactorial process used to generate nanofibers loaded with hydrophilic drugs or macromolecules for diverse biomedical applications. Emulsion electrospinnability is greatly impacted by the emulsion pharmaceutical attributes. The aim of this study was to apply a quality by design (QbD) approach based on design of experiments as a risk-based proactive approach to achieve predictable critical quality attributes (CQAs) in w/o emulsions for electrospinning. Polycaprolactone (PCL)-thickened w/o emulsions containing doxycycline HCl were formulated using a Span 60/sodium lauryl sulfate (SLS) emulsifier blend. The identified emulsion CQAs (stability, viscosity and conductivity) were linked with electrospinnability using a 3³ factorial design to optimize emulsion composition for phase stability and a D-optimal design to optimize stable emulsions for viscosity and conductivity after shifting the design space. The three independent variables, emulsifier blend composition, organic:aqueous phase ratio and polymer concentration, had a significant effect (p<0.05) on emulsion CQAs, the emulsifier blend composition exerting prominent main and interaction effects. Scanning electron microscopy (SEM) of emulsion-electrospun NFs and desirability functions allowed modeling of emulsion CQAs to predict electrospinnable formulations. A QbD approach successfully built quality in electrospinnable emulsions, allowing development of hydrophilic drug-loaded nanofibers with desired morphological characteristics. PMID:24704153

  9. A Control Engineering Approach for Designing an Optimized Treatment Plan for Fibromyalgia

    PubMed Central

    Deshpande, Sunil; Nandola, Naresh N.; Rivera, Daniel E.; Younger, Jarred

    2011-01-01

    Control engineering offers a systematic and efficient means for optimizing the effectiveness of behavioral interventions. In this paper, we present an approach to develop dynamical models and subsequently, hybrid model predictive control schemes for assigning optimal dosages of naltrexone as treatment for a chronic pain condition known as fibromyalgia. We apply system identification techniques to develop models from daily diary reports completed by participants of a naltrexone intervention trial. The dynamic model serves as the basis for applying model predictive control as a decision algorithm for automated dosage selection of naltrexone in the face of external disturbances. The categorical/discrete nature of the dosage assignment creates a need for hybrid model predictive control (HMPC) schemes. Simulation results that include conditions of significant plant-model mismatch demonstrate the performance and applicability of hybrid predictive control for optimized adaptive interventions for fibromyalgia treatment involving naltrexone. PMID:22034548
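
    The paper's identified models and HMPC formulation are not reproduced here, but the receding-horizon idea with categorical inputs can be sketched in Python: enumerate all discrete dosage sequences over a short horizon against an assumed first-order symptom model, pick the cheapest sequence, and apply only its first move. The dynamics coefficients, dosage levels, and cost weights below are illustrative assumptions.

      # Toy receding-horizon controller with categorical inputs, in the spirit of
      # hybrid MPC. The first-order "pain symptom" model and its gains are assumed
      # for illustration; they are not the identified models from the trial.
      import itertools

      DOSES = [0.0, 4.5]          # hypothetical allowed naltrexone dosages (mg)
      A, B = 0.8, -0.5            # assumed dynamics: pain[k+1] = A*pain[k] + B*dose[k]
      HORIZON = 4

      def predict_cost(pain, dose_seq, target=0.0):
          cost = 0.0
          for d in dose_seq:
              pain = A * pain + B * d
              cost += (pain - target) ** 2 + 0.01 * d   # tracking + dosing penalty
          return cost

      def mpc_step(pain):
          """Enumerate all discrete dose sequences; apply only the first move."""
          best = min(itertools.product(DOSES, repeat=HORIZON),
                     key=lambda seq: predict_cost(pain, seq))
          return best[0]

      pain = 5.0
      for day in range(10):
          dose = mpc_step(pain)
          pain = A * pain + B * dose   # plant step (here identical to the model)
          print(f"day {day}: dose={dose}, pain={pain:.2f}")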

  10. SU-F-BRD-13: Quantum Annealing Applied to IMRT Beamlet Intensity Optimization

    SciTech Connect

    Nazareth, D; Spaans, J

    2014-06-15

    Purpose: We report on the first application of quantum annealing (QA) to the process of beamlet intensity optimization for IMRT. QA is a new technology, which employs novel hardware and software techniques to address various discrete optimization problems in many fields. Methods: We apply the D-Wave Inc. proprietary hardware, which natively exploits quantum mechanical effects for improved optimization. The new QA algorithm, running on this hardware, is most similar to simulated annealing, but relies on natural processes to directly minimize the free energy of a system. A simple quantum system is slowly evolved into a classical system, representing the objective function. To apply QA to IMRT-type optimization, two prostate cases were considered. A reduced number of beamlets were employed, due to the current QA hardware limitation of ∼500 binary variables. The beamlet dose matrices were computed using CERR, and an objective function was defined based on typical clinical constraints, including dose-volume objectives. The objective function was discretized, and the QA method was compared to two standard optimization methods, simulated annealing and Tabu search, run on a conventional computing cluster. Results: Based on several runs, the average final objective function value achieved by the QA was 16.9 for the first patient, compared with 10.0 for Tabu and 6.7 for the SA. For the second patient, the values were 70.7 for the QA, 120.0 for Tabu, and 22.9 for the SA. The QA algorithm required 27–38% of the time required by the other two methods. Conclusion: In terms of objective function value, the QA performance was similar to Tabu but less effective than the SA. However, its speed was 3–4 times faster than the other two methods. This initial experiment suggests that QA-based heuristics may offer significant speedup over conventional clinical optimization methods, as quantum annealing hardware scales to larger sizes.
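
    Quantum annealing hardware cannot be reproduced here, but one of the abstract's comparison methods, simulated annealing, is easy to sketch on a toy version of the problem: discrete beamlet intensities optimized against a quadratic dose objective. The dose matrix, prescription, intensity levels, and cooling schedule below are random stand-ins, not clinical data.

      # Minimal simulated-annealing sketch for discrete beamlet weights against a
      # toy quadratic dose objective. The dose matrix and prescriptions are random
      # stand-ins, not clinical data.
      import numpy as np

      rng = np.random.default_rng(0)
      n_vox, n_beam = 50, 12
      D = rng.random((n_vox, n_beam))          # dose per unit beamlet intensity
      target = np.full(n_vox, 2.0)             # prescribed voxel dose
      LEVELS = np.arange(0, 5)                 # allowed discrete intensities

      def objective(w):
          return np.sum((D @ w - target) ** 2)

      w = rng.choice(LEVELS, n_beam)
      T = 5.0
      for step in range(5000):
          cand = w.copy()
          j = rng.integers(n_beam)
          cand[j] = rng.choice(LEVELS)
          delta = objective(cand) - objective(w)
          if delta < 0 or rng.random() < np.exp(-delta / T):
              w = cand
          T *= 0.999                           # geometric cooling schedule

      print("final objective:", round(objective(w), 2))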

  11. Optimal Decision Stimuli for Risky Choice Experiments: An Adaptive Approach

    PubMed Central

    Cavagnaro, Daniel R.; Gonzalez, Richard; Myung, Jay I.; Pitt, Mark A.

    2014-01-01

    Collecting data to discriminate between models of risky choice requires careful selection of decision stimuli. Models of decision making aim to predict decisions across a wide range of possible stimuli, but practical limitations force experimenters to select only a handful of them for actual testing. Some stimuli are more diagnostic between models than others, so the choice of stimuli is critical. This paper provides the theoretical background and a methodological framework for adaptive selection of optimal stimuli for discriminating among models of risky choice. The approach, called Adaptive Design Optimization (ADO), adapts the stimulus in each experimental trial based on the results of the preceding trials. We demonstrate the validity of the approach with simulation studies aiming to discriminate Expected Utility, Weighted Expected Utility, Original Prospect Theory, and Cumulative Prospect Theory models. PMID:24532856
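
    A minimal sketch of the ADO idea, under strong simplifying assumptions: for each candidate stimulus, compute the mutual information between a two-model indicator and the binary choice the stimulus would elicit, then present the stimulus that maximizes it. The two choice models, their parameters, and the stimulus grid are invented for illustration, not the paper's actual model set.

      # Toy adaptive design optimization: pick the gamble that best discriminates
      # two candidate choice models via mutual information between the model
      # indicator and the (binary) response. Models and parameters are illustrative.
      import numpy as np

      def h(p):  # binary entropy in bits
          p = np.clip(p, 1e-9, 1 - 1e-9)
          return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

      def p_choose_eu(x, prob, sure, alpha=0.8):
          """P(choose gamble) under expected utility with power utility."""
          diff = prob * x**alpha - sure**alpha
          return 1 / (1 + np.exp(-3 * diff))

      def p_choose_pw(x, prob, sure, gamma=0.6):
          """Same, but with a simple probability-weighting transform."""
          w = prob**gamma / (prob**gamma + (1 - prob)**gamma) ** (1 / gamma)
          diff = w * x**0.8 - sure**0.8
          return 1 / (1 + np.exp(-3 * diff))

      # Candidate stimuli: (gamble payoff x with probability prob) vs sure amount.
      stimuli = [(x, p, s) for x in (2, 4, 8) for p in (0.1, 0.5, 0.9) for s in (1, 2)]
      prior = np.array([0.5, 0.5])   # uniform prior over the two models

      def info_gain(stim):
          p1, p2 = p_choose_eu(*stim), p_choose_pw(*stim)
          p_bar = prior[0] * p1 + prior[1] * p2
          return h(p_bar) - (prior[0] * h(p1) + prior[1] * h(p2))

      best = max(stimuli, key=info_gain)
      print("most diagnostic stimulus:", best, "bits:", round(info_gain(best), 3))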

  12. The GRG approach for large-scale optimization

    SciTech Connect

    Drud, A.

    1994-12-31

    The Generalized Reduced Gradient (GRG) algorithm for general Nonlinear Programming (NLP) has been used successfully for over 25 years. The ideas of the original GRG algorithm have been modified and have absorbed developments in unconstrained optimization, linear programming, sparse matrix techniques, etc. The talk will review the essential aspects of the GRG approach and will discuss current development trends, especially related to very large models. Examples will be based on the CONOPT implementation.

  13. Optimized probabilistic quantum processors: A unified geometric approach

    NASA Astrophysics Data System (ADS)

    Bergou, Janos; Bagan, Emilio; Feldman, Edgar

    Using probabilistic and deterministic quantum cloning, and quantum state separation, as illustrative examples, we develop a complete geometric solution for finding their optimal success probabilities. The method is related to the approach that we introduced earlier for the unambiguous discrimination of more than two states. In some cases the method delivers analytical results, in others it leads to intuitive and straightforward numerical solutions. We also present implementations of the schemes based on linear optics employing few-photon interferometry.

  14. Particle Swarm and Ant Colony Approaches in Multiobjective Optimization

    NASA Astrophysics Data System (ADS)

    Rao, S. S.

    2010-10-01

    The social behavior of groups of birds, ants, insects and fish has been used to develop evolutionary algorithms known as swarm intelligence techniques for solving optimization problems. This work presents the development of strategies for the application of two of the popular swarm intelligence techniques, namely the particle swarm and ant colony methods, for the solution of multiobjective optimization problems. In a multiobjective optimization problem, the objectives exhibit a conflicting nature and hence no design vector can minimize all the objectives simultaneously. The concept of Pareto-optimal solution is used in finding a compromise solution. A modified cooperative game theory approach, in which each objective is associated with a different player, is used in this work. The applicability and computational efficiencies of the proposed techniques are demonstrated through several illustrative examples involving unconstrained and constrained problems with single and multiple objectives and continuous and mixed design variables. The present methodologies are expected to be useful for the solution of a variety of practical continuous and mixed optimization problems involving single or multiple objectives with or without constraints.
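
    The paper's game-theoretic compromise scheme is not reproduced here; the Python sketch below shows a plain particle swarm on a two-objective problem scalarized with an equal-weight sum, which is the simplest way to see the swarm mechanics at work. All problem data and swarm parameters are illustrative assumptions.

      # Minimal particle swarm sketch on a two-objective problem, scalarized with a
      # weighted sum (the paper's game-theoretic compromise scheme is not shown).
      import numpy as np

      rng = np.random.default_rng(1)
      f1 = lambda x: np.sum(x**2, axis=1)            # conflicting objectives
      f2 = lambda x: np.sum((x - 2.0)**2, axis=1)
      F = lambda x: 0.5 * f1(x) + 0.5 * f2(x)        # equal-weight compromise

      n, dim = 30, 2
      x = rng.uniform(-5, 5, (n, dim))               # particle positions
      v = np.zeros_like(x)
      pbest, pbest_val = x.copy(), F(x)
      gbest = pbest[np.argmin(pbest_val)]

      for it in range(200):
          r1, r2 = rng.random((2, n, dim))
          v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
          x = x + v
          val = F(x)
          better = val < pbest_val
          pbest[better], pbest_val[better] = x[better], val[better]
          gbest = pbest[np.argmin(pbest_val)]

      print("compromise solution:", np.round(gbest, 3))   # approx. [1, 1]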

  15. Computational approaches for microalgal biofuel optimization: a review.

    PubMed

    Koussa, Joseph; Chaiboonchoe, Amphun; Salehi-Ashtiani, Kourosh

    2014-01-01

    The increased demand and consumption of fossil fuels have raised interest in finding renewable energy sources throughout the globe. Much focus has been placed on optimizing microorganisms, primarily microalgae, to efficiently produce compounds that can substitute for fossil fuels. However, the path to achieving economic feasibility is likely to require strain optimization using available tools and technologies in the fields of systems and synthetic biology. Such approaches invoke a deep understanding of the metabolic networks of the organisms and their genomic and proteomic profiles. The advent of next generation sequencing and other high throughput methods has led to a major increase in the availability of biological data. Integration of such disparate data can help define the emergent metabolic system properties, which is of crucial importance in addressing biofuel production optimization. Herein, we review major computational tools and approaches developed and used in order to potentially identify target genes, pathways, and reactions of particular interest to biofuel production in algae. As the use of these tools and approaches has not been fully implemented in algal biofuel research, the aim of this review is to highlight the potential utility of these resources toward their future implementation in algal research. PMID:25309916

  16. Optimal synchronization of Kuramoto oscillators: A dimensional reduction approach

    NASA Astrophysics Data System (ADS)

    Pinto, Rafael S.; Saa, Alberto

    2015-12-01

    A recently proposed dimensional reduction approach for studying synchronization in the Kuramoto model is employed to build optimal network topologies to favor or to suppress synchronization. The approach is based on the introduction of a collective coordinate for the time evolution of the phase-locked oscillators, in the spirit of the Ott-Antonsen ansatz. We show that the optimal synchronization of a Kuramoto network demands the maximization of the quadratic function ω^T L ω, where ω stands for the vector of the natural frequencies of the oscillators and L for the network Laplacian matrix. Many recently obtained numerical results can be reobtained analytically and in a simpler way from our maximization condition. A computationally efficient hill climb rewiring algorithm is proposed to generate networks with optimal synchronization properties. Our approach can be easily adapted to the case of Kuramoto models with both attractive and repulsive interactions, and again many recent numerical results can be rederived in a simpler and clearer analytical manner.
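
    The maximization condition lends itself to a very short hill-climb sketch in Python: propose moving one edge at a time and keep the rewire whenever ω^T L ω increases. The network size and natural frequencies below are arbitrary illustrations, not the paper's test cases.

      # Sketch of the hill-climb rewiring idea from the abstract: propose moving one
      # edge at a time and keep moves that increase omega^T L omega. The graph size
      # and natural frequencies are arbitrary illustrations.
      import itertools
      import random
      import numpy as np

      random.seed(2)
      rng = np.random.default_rng(2)
      N, M = 12, 20
      omega = rng.normal(0.0, 1.0, N)                  # natural frequencies

      all_pairs = list(itertools.combinations(range(N), 2))
      edges = set(random.sample(all_pairs, M))         # initial random topology

      def quad_form(edge_set):
          """omega^T L omega for the graph Laplacian L of the given edge set."""
          L = np.zeros((N, N))
          for i, j in edge_set:
              L[i, i] += 1; L[j, j] += 1
              L[i, j] -= 1; L[j, i] -= 1
          return omega @ L @ omega

      best = quad_form(edges)
      for _ in range(2000):
          e_out = random.choice(sorted(edges))
          e_in = random.choice([p for p in all_pairs if p not in edges])
          candidate = (edges - {e_out}) | {e_in}
          value = quad_form(candidate)
          if value > best:                             # keep only improving rewires
              edges, best = candidate, value

      print("optimized omega^T L omega:", round(best, 3))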

  17. Volume reconstruction optimization for tomo-PIV algorithms applied to experimental data

    NASA Astrophysics Data System (ADS)

    Martins, Fabio J. W. A.; Foucaut, Jean-Marc; Thomas, Lionel; Azevedo, Luis F. A.; Stanislas, Michel

    2015-08-01

    Tomographic PIV is a three-component volumetric velocity measurement technique based on the tomographic reconstruction of a particle distribution imaged by multiple camera views. In essence, the performance and accuracy of this technique are highly dependent on the parametric adjustment and the reconstruction algorithm used. Although synthetic data have been widely employed to optimize experiments, the resulting reconstructed volumes might not have optimal quality. The purpose of the present study is to offer quality indicators that can be applied to data samples in order to improve the quality of velocity results obtained by the tomo-PIV technique. The proposed methodology can potentially lead to a significant reduction in the time required to optimize a tomo-PIV reconstruction, while also yielding better-quality velocity results. Tomo-PIV data provided by a six-camera turbulent boundary-layer experiment were used to optimize the reconstruction algorithms according to this methodology. Velocity statistics measurements obtained by optimized BIMART, SMART and MART algorithms were compared with hot-wire anemometer data and velocity measurement uncertainties were computed. Results indicated that the BIMART and SMART algorithms produced reconstructed volumes with quality equivalent to that of the standard MART, with the benefit of reduced computational time.
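
    A minimal sketch of the MART update underlying these reconstructions, run on a tiny random system rather than a real multi-camera projection model: each ray multiplicatively corrects the voxels it crosses by the ratio of measured to projected intensity. The weight matrix, particle field, and relaxation parameter are assumptions for illustration.

      # Minimal MART (multiplicative ART) sketch on a tiny linear system, to show
      # the multiplicative update that tomo-PIV reconstructions are built on. The
      # weight matrix here is random, not a real camera projection model.
      import numpy as np

      rng = np.random.default_rng(3)
      n_pix, n_vox = 30, 40
      A = rng.random((n_pix, n_vox)) * (rng.random((n_pix, n_vox)) < 0.2)
      x_true = rng.random(n_vox) * (rng.random(n_vox) < 0.1)   # sparse particles
      b = A @ x_true                                           # camera intensities

      x = np.ones(n_vox)                                       # positive initial guess
      mu = 1.0                                                 # relaxation parameter
      for sweep in range(20):
          for i in range(n_pix):
              proj = A[i] @ x
              if proj > 0 and b[i] > 0:
                  x *= (b[i] / proj) ** (mu * A[i])            # MART row update
              elif proj > 0 and b[i] == 0:
                  x[A[i] > 0] = 0.0                            # zero rays kill voxels

      print("reconstruction error:", round(np.linalg.norm(x - x_true), 3))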

  18. A Model-Based Prognostics Approach Applied to Pneumatic Valves

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew J.; Goebel, Kai

    2011-01-01

    Within the area of systems health management, the task of prognostics centers on predicting when components will fail. Model-based prognostics exploits domain knowledge of the system, its components, and how they fail by casting the underlying physical phenomena in a physics-based model that is derived from first principles. Uncertainty cannot be avoided in prediction, therefore, algorithms are employed that help in managing these uncertainties. The particle filtering algorithm has become a popular choice for model-based prognostics due to its wide applicability, ease of implementation, and support for uncertainty management. We develop a general model-based prognostics methodology within a robust probabilistic framework using particle filters. As a case study, we consider a pneumatic valve from the Space Shuttle cryogenic refueling system. We develop a detailed physics-based model of the pneumatic valve, and perform comprehensive simulation experiments to illustrate our prognostics approach and evaluate its effectiveness and robustness. The approach is demonstrated using historical pneumatic valve data from the refueling system.
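
    A bootstrap particle filter for prognostics can be sketched compactly in Python: particles jointly track a scalar damage state and an unknown growth rate, and remaining useful life is predicted per particle as the time to cross a failure threshold. The damage model, noise levels, and threshold below are assumptions for illustration, not the paper's valve physics.

      # Bootstrap particle filter sketch for prognostics: track a scalar damage
      # state with an unknown growth rate, then predict end of life as the time the
      # state crosses a threshold. Model, noise levels and threshold are assumed.
      import numpy as np

      rng = np.random.default_rng(4)
      Np, T, THRESH = 500, 40, 2.0

      # Simulated truth: damage grows with unknown rate 0.03.
      true_rate, damage, meas = 0.03, [0.0], []
      for k in range(T):
          damage.append(damage[-1] + true_rate + rng.normal(0, 0.002))
          meas.append(damage[-1] + rng.normal(0, 0.01))

      # Particles carry both state and rate parameter (joint estimation).
      x = np.zeros(Np)
      rate = rng.uniform(0.0, 0.1, Np)
      for z in meas:
          x = x + rate + rng.normal(0, 0.002, Np)                 # propagate
          w = np.exp(-0.5 * ((z - x) / 0.01) ** 2)                # likelihood
          w /= w.sum()
          idx = rng.choice(Np, Np, p=w)                           # resample
          x, rate = x[idx], rate[idx] + rng.normal(0, 0.001, Np)  # jitter rate

      # Predict remaining useful life per particle, report the median.
      rul = np.maximum(THRESH - x, 0) / np.maximum(rate, 1e-6)
      print("estimated rate:", round(rate.mean(), 4), "median RUL:", round(np.median(rul), 1))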

  19. A Model Driven Engineering Approach Applied to Master Data Management

    NASA Astrophysics Data System (ADS)

    Menet, Ludovic; Lamolle, Myriam

    The federation of data sources and the definition of pivot models are strongly interrelated topics. This paper explores a mediation solution based on XML architecture and the concept of Master Data Management. In this solution, pivot models use the XML Schema standard, allowing the definition of complex data structures. The introduction of an MDE approach is a means to make modeling easier. We use UML as an abstract modeling layer. UML is an object modeling language that is increasingly used and recognized as a standard in the software engineering field, which makes it an ideal candidate for the modeling of XML Schema models. For this purpose, we introduce features of the UML formalism, through profiles, to facilitate the definition and the exchange of models.

  1. Total Risk Approach in Applying PRA to Criticality Safety

    SciTech Connect

    Huang, S T

    2005-03-24

    As the nuclear industry continues marching from expert-based support to more procedure-based support, it is important to revisit the total risk concept in criticality safety. A key objective of criticality safety is to minimize total criticality accident risk. The purpose of this paper is to assess key constituents of the total risk concept pertaining to criticality safety from an operations support perspective and to suggest a risk-informed means of utilizing criticality safety resources for minimizing total risk. A PRA methodology was used to assist this assessment. The criticality accident history was assessed to provide a framework for our evaluation. In supporting operations, the work of criticality safety engineers ranges from knowing the scope and configurations of a proposed operation, performing criticality hazards assessment to derive effective controls, assisting in training operators, and responding to floor questions, to surveillance to ensure implementation of criticality controls and response to criticality mishaps. In a compliance environment, the resource of criticality safety engineers is increasingly being directed towards tedious documentation effort to meet regulatory requirements, to the effect of weakening the floor support for criticality safety. By applying a fault tree model to identify the major contributors of criticality accidents, a total risk picture is obtained to address the relative merits of various actions. Overall, human failure is the key culprit in causing criticality accidents. Factors such as failure to follow procedures, lack of training, and lack of expert support at the floor level are main contributors. Other causes may include lack of effective criticality controls, such as inadequate criticality safety evaluation. Not all of the causes are equally important in contributing to criticality mishaps. Applying the limited resources to strengthen the weak links would reduce risk more than continuing emphasis on the strong links of

  2. [Statistical Process Control applied to viral genome screening: experimental approach].

    PubMed

    Reifenberg, J M; Navarro, P; Coste, J

    2001-10-01

    During the National Multicentric Study concerning the introduction of NAT for HCV and HIV-1 viruses in blood donation screening, which was supervised by the Medical and Scientific departments of the French Blood Establishment (Etablissement français du sang--EFS), Transcription-Mediated Amplification (TMA) technology (Chiron/Gen Probe) was tested in the Molecular Biology Laboratory of Montpellier, EFS Pyrénées-Méditerranée. After a preliminary phase of qualification of the material and training of the technicians, routine screening of homologous blood and apheresis donations using this technology was applied for two months. In order to evaluate the different NAT systems, exhaustive daily operations and data were registered. Among these, the luminescence results, expressed as RLU, of the positive and negative calibrators and the associated internal controls were analysed using control charts, a Statistical Process Control method that rapidly reveals process drift and anticipates the appearance of incidents. This study demonstrated the value of these quality control methods, mainly used for industrial purposes, for following and increasing the quality of any transfusion process. It also showed the difficulty of retrospectively investigating uncontrolled sources of variation in an experimental process. Such tools are fully consistent with the new version of the ISO 9000 standards, which focus particularly on the use of adapted indicators for process control, and could be extended to other transfusion activities, such as blood collection and component preparation. PMID:11729395
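
    A minimal sketch of the control-chart computation described above: an individuals chart whose center line and 3-sigma limits are estimated from the moving range of successive calibrator readings. The RLU values in the Python sketch are made up for illustration, not data from the study.

      # Individuals control chart sketch: center line and 3-sigma limits estimated
      # from the moving range, then out-of-control points flagged. The RLU values
      # below are made up, not data from the study.
      import numpy as np

      rlu = np.array([520, 540, 515, 530, 525, 510, 535, 650, 528, 522], float)

      mr = np.abs(np.diff(rlu))                 # moving ranges of successive runs
      sigma = mr.mean() / 1.128                 # d2 constant for subgroups of size 2
      center = rlu.mean()
      ucl, lcl = center + 3 * sigma, center - 3 * sigma

      print(f"CL={center:.1f}  UCL={ucl:.1f}  LCL={lcl:.1f}")
      for i, v in enumerate(rlu):
          if not (lcl <= v <= ucl):
              print(f"run {i}: {v} out of control")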

  3. Applying a cloud computing approach to storage architectures for spacecraft

    NASA Astrophysics Data System (ADS)

    Baldor, Sue A.; Quiroz, Carlos; Wood, Paul

    As sensor technologies, processor speeds, and memory densities increase, spacecraft command, control, processing, and data storage systems have grown in complexity to take advantage of these improvements and expand the possible missions of spacecraft. Spacecraft systems engineers are increasingly looking for novel ways to address this growth in complexity and mitigate associated risks. Looking to conventional computing, many solutions have been executed to solve the problems of complexity and heterogeneity in systems. In particular, the cloud-based paradigm provides a solution for distributing applications and storage capabilities across multiple platforms. In this paper, we propose utilizing a cloud-like architecture to provide a scalable mechanism for providing mass storage in spacecraft networks that can be reused on multiple spacecraft systems. By presenting a consistent interface to applications and devices that request data to be stored, complex systems designed by multiple organizations may be more readily integrated. Behind the abstraction, the cloud storage capability would manage wear-leveling, power consumption, and other attributes related to the physical memory devices, critical components in any mass storage solution for spacecraft. Our approach employs SpaceWire networks and SpaceWire-capable devices, although the concept could easily be extended to non-heterogeneous networks consisting of multiple spacecraft and potentially the ground segment.

  4. Applying the community partnership approach to human biology research.

    PubMed

    Ravenscroft, Julia; Schell, Lawrence M; Cole, Tewentahawih'tha'

    2015-01-01

    Contemporary human biology research employs a unique skillset for biocultural analysis. This skillset is highly appropriate for the study of health disparities because disparities result from the interaction of social and biological factors over one or more generations. Health disparities research almost always involves disadvantaged communities owing to the relationship between social position and health in stratified societies. Successful research with disadvantaged communities involves a specific approach, the community partnership model, which creates a relationship beneficial for researcher and community. Paramount is the need for trust between partners. With trust established, partners share research goals, agree on research methods and produce results of interest and importance to all partners. Results are shared with the community as they are developed; community partners also provide input on analyses and interpretation of findings. This article describes a partnership-based, 20 year relationship between community members of the Akwesasne Mohawk Nation and researchers at the University at Albany. As with many communities facing health disparity issues, research with Native Americans and indigenous peoples generally is inherently politicized. For Akwesasne, the contamination of their lands and waters is an environmental justice issue in which the community has faced unequal exposure to, and harm by, environmental toxicants. As human biologists engage in more partnership-type research, it is important to understand the long term goals of the community and what is at stake, so the research circle can be closed and 'helicopter' style research avoided. PMID:25380288

  5. Applying electrical utility least-cost approach to transportation planning

    SciTech Connect

    McCoy, G.A.; Growdon, K.; Lagerberg, B.

    1994-09-01

    Members of the energy and environmental communities believe that parallels exist between electrical utility least-cost planning and transportation planning. In particular, the Washington State Energy Strategy Committee believes that an integrated and comprehensive transportation planning process should be developed to fairly evaluate the costs of both demand-side and supply-side transportation options, establish competition between different travel modes, and select the mix of options designed to meet system goals at the lowest cost to society. Comparisons between travel modes are also required under the Intermodal Surface Transportation Efficiency Act (ISTEA). ISTEA calls for the development of procedures to compare demand management against infrastructure investment solutions and requires the consideration of efficiency, socioeconomic and environmental factors in the evaluation process. Several of the techniques and approaches used in energy least-cost planning and utility peak demand management can be incorporated into a least-cost transportation planning methodology. The concepts of avoided plants, expressing avoidable costs in levelized nominal dollars to compare projects with different on-line dates and service lives, the supply curve, and the resource stack can be directly adapted from the energy sector.

  6. New Approach to Ultrasonic Spectroscopy Applied to Flywheel Rotors

    NASA Technical Reports Server (NTRS)

    Harmon, Laura M.; Baaklini, George Y.

    2002-01-01

    Flywheel energy storage devices comprising multilayered composite rotor systems are being studied extensively for use in the International Space Station. A flywheel system includes the components necessary to store and discharge energy in a rotating mass. The rotor is the complete rotating assembly portion of the flywheel, which is composed primarily of a metallic hub and a composite rim. The rim may contain several concentric composite rings. This article summarizes current ultrasonic spectroscopy research of such composite rings and rims and a flat coupon, which was manufactured to mimic the manufacturing of the rings. Ultrasonic spectroscopy is a nondestructive evaluation (NDE) method for material characterization and defect detection. In the past, a wide bandwidth frequency spectrum created from a narrow ultrasonic signal was analyzed for amplitude and frequency changes. Tucker developed and patented a new approach to ultrasonic spectroscopy. The ultrasonic system employs a continuous swept-sine waveform and performs a fast Fourier transform on the frequency spectrum to create the spectrum resonance spacing domain, or fundamental resonant frequency. Ultrasonic responses from composite flywheel components were analyzed at Glenn to assess this NDE technique for the quality assurance of flywheel applications.

  7. Applying the Taguchi method to optimize sumatriptan succinate niosomes as drug carriers for skin delivery.

    PubMed

    González-Rodríguez, Maria Luisa; Mouram, Imane; Cózar-Bernal, Ma Jose; Villasmil, Sheila; Rabasco, Antonio M

    2012-10-01

    Niosomes formulated from different nonionic surfactants (Span® 60, Brij® 72, Span® 80, or Eumulgin® B 2) with cholesterol (CH) molar ratios of 3:1 or 4:1 with respect to surfactant were prepared with different sumatriptan amounts (10 and 15 mg) and stearylamine (SA). The thin-film hydration method was employed to produce the vesicles, and the time allowed for hydrating the lipid film (1 or 24 h) was introduced as a variable. These factors were selected as variables and their levels were introduced into two L18 orthogonal arrays. The aim was to optimize the manufacturing conditions by applying the Taguchi methodology. Response variables were vesicle size, zeta potential (Z), and drug entrapment. From the Taguchi analysis, drug concentration and the hydration time were the most influential parameters on size, with the niosomes made with Span® 80 being the smallest vesicles. The presence of SA in the vesicles had a relevant influence on Z values. All the factors except the surfactant-CH ratio had an influence on the encapsulation. Formulations were optimized by applying the marginal means methodology. The results obtained showed a good correlation between the mean and signal-to-noise ratio parameters, indicating the feasibility of the robust methodology for optimizing this formulation. Also, the extrusion process exerted a positive influence on drug entrapment. PMID:22806266
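
    The Taguchi analysis step can be sketched as follows in Python: compute a larger-is-better signal-to-noise ratio per run, then the marginal mean S/N per factor level, whose maxima indicate the preferred settings. The design matrix and entrapment values below are invented for illustration, not the study's L18 data.

      # Sketch of the Taguchi analysis step: a larger-is-better signal-to-noise
      # ratio per run, then marginal mean S/N per factor level. The design matrix
      # and entrapment values are invented for illustration.
      import numpy as np

      # Columns: surfactant type (0-3), CH ratio level (0-1), drug level (0-1)
      design = np.array([[s, c, d] for s in range(4) for c in range(2) for d in range(2)])
      entrapment = np.array([62, 58, 65, 60, 40, 43, 47, 45,
                             55, 52, 58, 54, 70, 68, 74, 71], float)  # % entrapment

      def sn_larger_is_better(y):
          return -10 * np.log10(np.mean(1.0 / np.asarray(y) ** 2))

      sn = np.array([sn_larger_is_better([y]) for y in entrapment])

      for col, name in enumerate(["surfactant", "CH ratio", "drug amount"]):
          for level in np.unique(design[:, col]):
              mm = sn[design[:, col] == level].mean()
              print(f"{name} level {level}: mean S/N = {mm:.2f}")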

  8. Portfolio optimization in enhanced index tracking with goal programming approach

    NASA Astrophysics Data System (ADS)

    Siew, Lam Weng; Jaaman, Saiful Hafizah Hj.; Ismail, Hamizun bin

    2014-09-01

    Enhanced index tracking is a popular form of passive fund management in the stock market. Enhanced index tracking aims to generate excess return over the return achieved by the market index without purchasing all of the stocks that make up the index. This can be done by establishing an optimal portfolio that maximizes the mean return and minimizes the risk. The objective of this paper is to determine the portfolio composition and performance using a goal programming approach in enhanced index tracking and to compare it to the market index. Goal programming is a branch of multi-objective optimization which can handle decision problems that involve two different goals in enhanced index tracking: a trade-off between maximizing the mean return and minimizing the risk. The results of this study show that the optimal portfolio obtained with the goal programming approach is able to outperform the Malaysian market index, the FTSE Bursa Malaysia Kuala Lumpur Composite Index, through a higher mean return and a lower risk without purchasing all the stocks in the market index.
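
    A minimal goal programming sketch in Python: the return and risk goals become deviation variables, and the LP minimizes under-attainment of the return goal plus over-attainment of the risk goal. The asset returns, the linear risk proxy, and the aspiration levels are illustrative assumptions; the paper's actual risk measure may differ.

      # Goal programming sketch with two goals (target return, target risk) turned
      # into deviation variables and solved as an LP. Returns and the linear risk
      # proxy per asset are illustrative numbers, not market data.
      import numpy as np
      from scipy.optimize import linprog

      r = np.array([0.08, 0.12, 0.15])      # expected asset returns (assumed)
      risk = np.array([0.05, 0.10, 0.18])   # linear risk proxy per asset (assumed)
      g_ret, g_risk = 0.11, 0.09            # aspiration levels for the two goals

      # Variables: w1 w2 w3, dret-, dret+, drisk-, drisk+
      c = np.array([0, 0, 0, 1, 0, 0, 1])   # penalize under-return and over-risk
      A_eq = np.array([
          [*r,     1, -1,  0,  0],          # r.w + dret- - dret+ = g_ret
          [*risk,  0,  0,  1, -1],          # risk.w + drisk- - drisk+ = g_risk
          [1, 1, 1, 0,  0,  0,  0],         # fully invested
      ])
      b_eq = np.array([g_ret, g_risk, 1.0])
      res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 7)
      print("weights:", np.round(res.x[:3], 3), "objective:", round(res.fun, 4))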

  9. General approach and scope. [rotor blade design optimization

    NASA Technical Reports Server (NTRS)

    Adelman, Howard M.; Mantay, Wayne R.

    1989-01-01

    This paper describes a joint activity involving NASA and Army researchers at the NASA Langley Research Center to develop optimization procedures aimed at improving the rotor blade design process by integrating appropriate disciplines and accounting for all of the important interactions among the disciplines. The disciplines involved include rotor aerodynamics, rotor dynamics, rotor structures, airframe dynamics, and acoustics. The work is focused on combining these five key disciplines in an optimization procedure capable of designing a rotor system to satisfy multidisciplinary design requirements. Fundamental to the plan is a three-phased approach. In phase 1, the disciplines of blade dynamics, blade aerodynamics, and blade structure will be closely coupled, while acoustics and airframe dynamics will be decoupled and be accounted for as effective constraints on the design for the first three disciplines. In phase 2, acoustics is to be integrated with the first three disciplines. Finally, in phase 3, airframe dynamics will be fully integrated with the other four disciplines. This paper deals with details of the phase 1 approach and includes details of the optimization formulation, design variables, constraints, and objective function, as well as details of discipline interactions, analysis methods, and methods for validating the procedure.

  10. Unsteady Adjoint Approach for Design Optimization of Flapping Airfoils

    NASA Technical Reports Server (NTRS)

    Lee, Byung Joon; Liou, Meng-Sing

    2012-01-01

    This paper describes the work for optimizing the propulsive efficiency of flapping airfoils, i.e., improving the thrust while constraining aerodynamic work during flapping flight, by changing their shape and trajectory of motion with the unsteady discrete adjoint approach. For unsteady problems, it is essential to properly resolve the time scales of the motion under consideration, and the time resolution must be compatible with the objective sought. We include both the instantaneous and time-averaged (periodic) formulations in this study. For design optimization with shape parameters or motion parameters, the time-averaged objective function is found to be more useful, while the instantaneous one is more suitable for flow control. The instantaneous objective function is operationally straightforward. On the other hand, the time-averaged objective function requires additional steps in the adjoint approach; the unsteady discrete adjoint equations for a periodic flow must be reformulated and the corresponding system of equations solved iteratively. We compare the design results from shape and trajectory optimizations and investigate the physical relevance of design variables to the flapping motion at on- and off-design conditions.

  11. Mouse genetic approaches applied to the normal tissue radiation response

    PubMed Central

    Haston, Christina K.

    2012-01-01

    The varying responses of inbred mouse models to radiation exposure present a unique opportunity to dissect the genetic basis of radiation sensitivity and tissue injury. Such studies are complementary to human association studies as they permit both the analysis of clinical features of disease, and of specific variants associated with its presentation, in a controlled environment. Herein I review how animal models are studied to identify specific genetic variants influencing predisposition to radiation-induced traits. Among these radiation-induced responses are documented strain differences in repair of DNA damage and in extent of tissue injury (in the lung, skin, and intestine), which form the basis for genetic investigations. For example, radiation-induced DNA damage is consistently greater in tissues from BALB/cJ mice than the levels in C57BL/6J mice, suggesting there may be an inherent DNA damage level per strain. Regarding tissue injury, strain-specific inflammatory and fibrotic phenotypes have been documented principally for C57BL/6, C3H, and A/J mice, but a correlation among responses, such that knowledge of the radiation injury in one tissue informs of the response in another, is not evident. Strategies to identify genetic differences contributing to a trait based on inbred strain differences, which include linkage analysis and the evaluation of recombinant congenic (RC) strains, are presented, with a focus on the lung response to irradiation, which is the only radiation-induced tissue injury mapped to date. Such approaches are needed to reveal genetic differences in susceptibility to radiation injury, and also to provide a context for the effects of specific genetic variation uncovered in anticipated clinical association studies. In summary, mouse models can be studied to uncover heritable variation predisposing to specific radiation responses, and such variations may point to pathways of importance to phenotype development in the clinic. PMID:22891164

  12. Geophysical approaches applied in the ancient theatre of Demetriada, Volos

    NASA Astrophysics Data System (ADS)

    Sarris, Apostolos; Papadopoulos, Nikos; Déderix, Sylviane; Salvi, Maria-Christina

    2013-08-01

    The city of Demetriada was constructed around 294-292 BC and became a stronghold of the Macedonian navy fleet, whereas in the Roman period it experienced significant growth and blossoming. The ancient theatre of the town was constructed at the same time as the foundation of the city, went unused for two centuries (1st century BC - 1st century AD), and was completely abandoned after the 4th century AD, to be used only as a quarry for extraction of building material for Christian basilicas in the area. The theatre was found in 1809 and excavations took place in various years since 1907. Geophysical approaches were exploited recently in an effort to map the subsurface of the surrounding area of the theatre and support its reconstruction works. Magnetic gradiometry, Ground Penetrating Radar (GPR) and Electrical Resistivity Tomography (ERT) techniques were employed for mapping the area of the orchestra and the scene of the theatre, together with the area extending to the south of the theatre. A number of features were recognized by the magnetic techniques, including older excavation trenches and the pillar of the stoa of the proscenium. The different occupation phases of the area have been revealed through the employment of tomographic and stratigraphic geophysical techniques like three-dimensional ERT and GPR. Architectural orthogonal structures aligned in a S-N direction have been correlated to the already excavated buildings of the ceramic workshop. The workshop seems to extend over a large section of the area, which was probably constructed after the final abandonment of the theatre.

  13. Optimal multiyear management of a water supply system under uncertainty: Robust counterpart approach

    NASA Astrophysics Data System (ADS)

    Housh, Mashor; Ostfeld, Avi; Shamir, Uri

    2011-10-01

    In this paper, the robust counterpart (RC) approach (Ben-Tal et al., 2009) is applied to optimize the management of a water supply system (WSS) fed from aquifers and desalination plants. The water is conveyed through a network to meet desired consumptions, where the aquifer recharges are uncertain. The objective is to minimize the net present value cost of multiyear operation, satisfying operational and physical constraints. The RC is a min-max guided approach, which converts the original problem into a deterministic equivalent problem, requiring only that the uncertain parameters reside within a user-defined uncertainty set. The robust policy obtained by the RC approach is compared with policies obtained by other decision-making approaches, including stochastic approaches.
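
    The core RC construction is easy to sketch for one uncertain constraint: with box uncertainty a ∈ [ā - δ, ā + δ] and x ≥ 0, the worst case of a^T x ≤ b is attained at ā + δ, so the robust problem is again a deterministic LP. The two-source water-supply numbers in the Python sketch below are illustrative assumptions, not the paper's WSS model.

      # Robust counterpart sketch for one uncertain constraint a^T x <= b with box
      # uncertainty a_i in [abar_i - delta_i, abar_i + delta_i] and x >= 0: the
      # worst case is attained at abar + delta, giving a deterministic LP. All
      # numbers below are illustrative.
      import numpy as np
      from scipy.optimize import linprog

      cost = np.array([3.0, 1.0])        # desalination vs aquifer unit costs (assumed)
      abar = np.array([0.0, 1.0])        # only aquifer pumping draws on the resource
      delta = np.array([0.0, 0.3])       # aquifer coefficient is uncertain
      budget = 6.0                       # nominal aquifer safe yield
      demand = 8.0                       # total demand to be met

      # Nominal problem:    min cost.x  s.t. abar.x <= budget,  x1 + x2 >= demand
      # Robust counterpart: replace abar with abar + delta in the <= constraint.
      for label, a in (("nominal", abar), ("robust", abar + delta)):
          res = linprog(cost,
                        A_ub=np.vstack([a, [-1.0, -1.0]]),
                        b_ub=[budget, -demand],
                        bounds=[(0, None)] * 2)
          print(label, "allocation:", np.round(res.x, 2), "cost:", round(res.fun, 2))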

  14. Applying ILT mask synthesis for co-optimizing design rules and DSA process characteristics

    NASA Astrophysics Data System (ADS)

    Dam, Thuc; Stanton, William

    2014-03-01

    During early stage development of a DSA process, there are many unknown interactions between design, DSA process, RET, and mask synthesis. The computational resolution of these unknowns can guide development towards a common process space whereby manufacturing success can be evaluated. This paper will demonstrate the use of existing Inverse Lithography Technology (ILT) to co-optimize the multitude of parameters. ILT mask synthesis will be applied to a varied hole design space in combination with a range of DSA model parameters under different illumination and RET conditions. The design will range from 40 nm pitch doublet to random DSA designs with larger pitches, while various effective DSA characteristics of shrink bias and corner smoothing will be assumed for the DSA model during optimization. The co-optimization of these design parameters and process characteristics under different SMO solutions and RET conditions (dark/bright field tones and binary/PSM mask types) will also help to provide a complete process mapping of possible manufacturing options. The lithographic performances for masks within the optimized parameter space will be generated to show a common process space with the highest possibility for success.

  15. A second law approach to exhaust system optimization

    SciTech Connect

    Primus, R.J.

    1984-01-01

    A model has been constructed that applies second law analysis to a Fanno formulation of the exhaust process of a turbocharged diesel engine. The model has been used to quantify available energy destruction at the valve and in the manifold and to study the influence of various system parameters on the relative magnitude of these exhaust system losses. The model formulation and its application to the optimization of the exhaust manifold diameter is discussed. Data are then presented which address the influence of the manifold friction, turbine efficiency, turbine power extraction, valve flow area, compression ratio, speed, load and air-fuel ratio on the available energy destruction in the exhaust system.

  16. SolOpt: A Novel Approach to Solar Rooftop Optimization

    SciTech Connect

    Lisell, L.; Metzger, I.; Dean, J.

    2011-01-01

    Traditionally, Photovoltaic Technology (PV) and Solar Hot Water Technology (SHW) have been designed with separate design tools, making it difficult to determine the appropriate mix of PV and SHW. A new tool developed at the National Renewable Energy Laboratory changes how the analysis is conducted through an integrated approach based on the life cycle cost effectiveness of each system. With 10 inputs, someone with only basic knowledge of the building can simulate energy production from PV and SHW and predict the optimal sizes of the systems. The user can select from four optimization criteria currently available: Greenhouse Gas Reduction, Net-Present Value, Renewable Energy Production, and Discounted Payback Period. SolOpt provides unique analysis capabilities that are not currently available in any other software program. Validation results with industry-accepted tools for both SHW and PV are presented.

  17. Optimal approach to quantum communication using dynamic programming.

    PubMed

    Jiang, Liang; Taylor, Jacob M; Khaneja, Navin; Lukin, Mikhail D

    2007-10-30

    Reliable preparation of entanglement between distant systems is an outstanding problem in quantum information science and quantum communication. In practice, this has to be accomplished by noisy channels (such as optical fibers) that generally result in exponential attenuation of quantum signals at large distances. A special class of quantum error correction protocols, quantum repeater protocols, can be used to overcome such losses. In this work, we introduce a method for systematically optimizing existing protocols and developing more efficient protocols. Our approach makes use of a dynamic programming-based searching algorithm, the complexity of which scales only polynomially with the communication distance, letting us efficiently determine near-optimal solutions. We find significant improvements in both the speed and the final-state fidelity for preparing long-distance entangled states. PMID:17959783

  18. A new integrated approach to seismic network optimization

    NASA Astrophysics Data System (ADS)

    Tramelli, A.; De Natale, G.; Troise, C.; Orazi, M.

    2012-04-01

    a new one, considering different earthquake positions and different noise levels for the station sites. The optimization for moment tensor solutions is also implemented, by formally defining the inverse problem in matrix form. The algorithms are then tested and applied to optimize the network of Campi Flegrei.

  19. Final Technical Report for "Applied Mathematics Research: Simulation Based Optimization and Application to Electromagnetic Inverse Problems"

    SciTech Connect

    Haber, Eldad

    2014-03-17

    The focus of the research was: developing adaptive meshes for the solution of Maxwell's equations; developing a parallel framework for time-dependent inverse Maxwell's equations; developing multilevel methods for optimization problems with inequality constraints; a new inversion code for inverse Maxwell's equations at the 0th frequency (DC resistivity); and a new inversion code for inverse Maxwell's equations in the low-frequency regime. Although the research concentrated on electromagnetic forward and inverse problems, the results were also applied to the problem of image registration.

  20. An Optimized Integrator Windup Protection Technique Applied to a Turbofan Engine Control

    NASA Technical Reports Server (NTRS)

    Watts, Stephen R.; Garg, Sanjay

    1995-01-01

    This paper introduces a new technique for providing memoryless integrator windup protection which utilizes readily available optimization software tools. This integrator windup protection synthesis provides a concise methodology for creating integrator windup protection for each actuation system loop independently while assuring both controller and closed loop system stability. The individual actuation system loops' integrator windup protection can then be combined to provide integrator windup protection for the entire system. This technique is applied to an H∞-based multivariable control designed for a linear model of an advanced afterburning turbofan engine. The resulting transient characteristics are examined for the integrated system while encountering single and multiple actuation limits.

  1. Perspective: Codesign for materials science: An optimal learning approach

    NASA Astrophysics Data System (ADS)

    Lookman, Turab; Alexander, Francis J.; Bishop, Alan R.

    2016-05-01

    A key element of materials discovery and design is to learn from available data and prior knowledge to guide the next experiments or calculations in order to focus in on materials with targeted properties. We suggest that the tight coupling and feedback between experiments, theory and informatics demands a codesign approach, very reminiscent of computational codesign involving software and hardware in computer science. This requires dealing with a constrained optimization problem in which uncertainties are used to adaptively explore and exploit the predictions of a surrogate model to search the vast high dimensional space where the desired material may be found.
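
    One common instance of such uncertainty-guided exploration is expected improvement on a surrogate model, sketched below in Python with a tiny Gaussian-process regression over a one-dimensional "composition" space. The kernel, the hidden property function, and the sampled points are assumptions for illustration only, not the paper's method.

      # Sketch of the explore/exploit step: fit a tiny Gaussian-process surrogate to
      # a few evaluated "materials" (here 1-D compositions) and pick the next
      # experiment by expected improvement. Kernel and test function are assumed.
      import numpy as np
      from scipy.stats import norm

      def kernel(a, b, ls=0.15):
          return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

      f = lambda x: np.sin(6 * x) + 0.5 * x          # hidden property to minimize
      X = np.array([0.1, 0.5, 0.9])                  # experiments done so far
      y = f(X)

      grid = np.linspace(0, 1, 201)                  # candidate compositions
      K = kernel(X, X) + 1e-8 * np.eye(len(X))
      Ks = kernel(grid, X)
      alpha = np.linalg.solve(K, y)
      mu = Ks @ alpha                                # posterior mean
      var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
      sigma = np.sqrt(np.clip(var, 1e-12, None))     # posterior uncertainty

      fmin = y.min()
      z = (fmin - mu) / sigma
      ei = (fmin - mu) * norm.cdf(z) + sigma * norm.pdf(z)   # expected improvement
      print("next experiment at x =", round(grid[np.argmax(ei)], 3))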

  2. Optimal active power dispatch by network flow approach

    SciTech Connect

    Carvalho, M.F. ); Soares, S.; Ohishi, T. )

    1988-11-01

    In this paper the optimal active power dispatch problem is formulated as a nonlinear capacitated network flow problem with additional linear constraints. Transmission flow limits and both Kirchhoff's laws are taken into account. The problem is solved by a Generalized Upper Bounding technique that takes advantage of the network flow structure of the problem. The new approach has potential applications to power system problems such as economic dispatch, load supplying capability, minimum load shedding, and generation-transmission reliability. The paper also reviews the use of transportation models for power system analysis. A detailed illustrative example is presented.
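
    A linearized toy version of dispatch as network flow can be written directly with networkx; unlike the paper's formulation, it enforces only flow balance and line limits (the first Kirchhoff law), not the voltage-law constraints or nonlinear costs. All generator, load, and line data below are invented.

      # Linearized illustration of dispatch as a min-cost network flow using
      # networkx. Negative node demand = injection (generator), positive = load (MW).
      import networkx as nx

      G = nx.DiGraph()
      G.add_node("gen1", demand=-60)
      G.add_node("gen2", demand=-40)
      G.add_node("load1", demand=70)
      G.add_node("load2", demand=30)

      # capacity = line limit (MW); weight = cost per MW transferred.
      G.add_edge("gen1", "load1", capacity=50, weight=2)
      G.add_edge("gen1", "load2", capacity=50, weight=4)
      G.add_edge("gen2", "load1", capacity=40, weight=3)
      G.add_edge("gen2", "load2", capacity=40, weight=1)

      flow = nx.min_cost_flow(G)
      for u, targets in flow.items():
          for v, f in targets.items():
              if f:
                  print(f"{u} -> {v}: {f} MW")
      print("total cost:", nx.cost_of_flow(G, flow))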

  3. Optimized Chemical Separation and Measurement by TE TIMS Using Carburized Filaments for Uranium Isotope Ratio Measurements Applied to Plutonium Chronometry.

    PubMed

    Sturm, Monika; Richter, Stephan; Aregbe, Yetunde; Wellum, Roger; Prohaska, Thomas

    2016-06-21

    An optimized method is described for U/Pu separation and subsequent measurement of the amount contents of uranium isotopes by total evaporation (TE) TIMS with a double filament setup combined with filament carburization for age determination of plutonium samples. The use of carburized filaments improved the signal behavior for total evaporation TIMS measurements of uranium. Elevated uranium ion formation by passive heating during rhenium signal optimization at the start of the total evaporation measurement procedure was found to result from byproducts of the separation procedure deposited on the filament. This was avoided by using carburized filaments. Hence, loss of sample before the actual TE data acquisition was prevented, and automated measurement sequences could be accomplished. Furthermore, separation of residual plutonium in the separated uranium fraction was achieved directly on the filament by use of the carburized filaments. Although the analytical approach was originally tailored to achieve reliable results only for the (238)Pu/(234)U, (239)Pu/(235)U, and (240)Pu/(236)U chronometers, the optimization of the procedure additionally allowed the use of the (242)Pu/(238)U isotope amount ratio as a highly sensitive indicator for residual uranium present in the sample which is not of radiogenic origin. The sample preparation method described in this article has been successfully applied for the age determination of CRM NBS 947 and other sulfate and oxide plutonium samples. PMID:27240571

  4. Adaptive sequentially space-filling metamodeling applied in optimal water quantity allocation at basin scale

    NASA Astrophysics Data System (ADS)

    Mousavi, S. Jamshid; Shourian, M.

    2010-03-01

    Global optimization models in many problems suffer from high computational costs due to the need to run high-fidelity simulation models for objective function evaluations. Metamodeling is a useful approach to dealing with this problem, in which a fast surrogate model replaces the detailed simulation model. However, training the surrogate model needs enough input-output data; in the absence of observed data, each training point must be obtained by running the simulation model, which may still cause computational difficulties. In this paper a new metamodeling approach called adaptive sequentially space filling (ASSF) is presented, by which the regions in the search space that need more training data are sequentially identified and the process of design of experiments is performed adaptively. Performance of the ASSF approach is tested on a benchmark function optimization problem and on optimum basin-scale water allocation problems, in which the MODSIM river basin decision support system is approximated. Results show the ASSF model, with fewer actual function evaluations, is able to find solutions comparable to those of other metamodeling techniques using random sampling and evolution control strategies.

  5. Aircraft wing structural design optimization based on automated finite element modelling and ground structure approach

    NASA Astrophysics Data System (ADS)

    Yang, Weizhu; Yue, Zhufeng; Li, Lei; Wang, Peiyan

    2016-01-01

    An optimization procedure combining an automated finite element modelling (AFEM) technique with a ground structure approach (GSA) is proposed for structural layout and sizing design of aircraft wings. The AFEM technique, based on CATIA VBA scripting and PCL programming, is used to generate models automatically considering the arrangement of inner systems. GSA is used for local structural topology optimization. The design procedure is applied to a high-aspect-ratio wing. The arrangement of the integral fuel tank, landing gear and control surfaces is considered. For the landing gear region, a non-conventional initial structural layout is adopted. The positions of components, the number of ribs and local topology in the wing box and landing gear region are optimized to obtain a minimum structural weight. Constraints include tank volume, strength, buckling and aeroelastic parameters. The results show that the combined approach leads to a greater weight saving, i.e. 26.5%, compared with three additional optimizations based on individual design approaches.

  6. Optimization techniques applied to passive measures for in-orbit spacecraft survivability

    NASA Technical Reports Server (NTRS)

    Mog, Robert A.; Helba, Michael J.; Hill, Janeil B.

    1992-01-01

    The purpose of this research is to provide Space Station Freedom protective structures design insight through the coupling of design/material requirements, hypervelocity impact phenomenology, meteoroid and space debris environment sensitivities, optimization techniques and operations research strategies, and mission scenarios. The goals of the research are: (1) to develop a Monte Carlo simulation tool which will provide top level insight for Space Station protective structures designers; (2) to develop advanced shielding concepts relevant to Space Station Freedom using unique multiple bumper approaches; and (3) to investigate projectile shape effects on protective structures design.

  7. An optimization approach for fitting canonical tensor decompositions.

    SciTech Connect

    Dunlavy, Daniel M.; Acar, Evrim; Kolda, Tamara Gibson

    2009-02-01

    Tensor decompositions are higher-order analogues of matrix decompositions and have proven to be powerful tools for data analysis. In particular, we are interested in the canonical tensor decomposition, otherwise known as the CANDECOMP/PARAFAC decomposition (CPD), which expresses a tensor as the sum of component rank-one tensors and is used in a multitude of applications such as chemometrics, signal processing, neuroscience, and web analysis. The task of computing the CPD, however, can be difficult. The typical approach is based on alternating least squares (ALS) optimization, which can be remarkably fast but is not very accurate. Previously, nonlinear least squares (NLS) methods have also been recommended; existing NLS methods are accurate but slow. In this paper, we propose the use of gradient-based optimization methods. We discuss the mathematical calculation of the derivatives and further show that they can be computed efficiently, at the same cost as one iteration of ALS. Computational experiments demonstrate that the gradient-based optimization methods are much more accurate than ALS and orders of magnitude faster than NLS.
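
    For reference, the ALS baseline the authors compare against fits each factor matrix in turn while holding the others fixed; a minimal numpy sketch for a third-order tensor follows, with a random rank-3 tensor standing in for real data.

      # Minimal CP-ALS sketch for a rank-R decomposition of a 3-way tensor, the
      # baseline the paper compares gradient-based methods against. numpy only.
      import numpy as np

      rng = np.random.default_rng(5)
      I, J, K, R = 10, 8, 6, 3

      # Build a random rank-R tensor from known factors.
      A0, B0, C0 = rng.random((I, R)), rng.random((J, R)), rng.random((K, R))
      T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)

      def khatri_rao(U, V):
          """Column-wise Kronecker product, shape (|U|*|V|, R)."""
          return (U[:, None, :] * V[None, :, :]).reshape(-1, U.shape[1])

      A, B, C = (rng.random((n, R)) for n in (I, J, K))
      for it in range(100):
          # Solve for each factor in turn, holding the other two fixed.
          A = np.reshape(T, (I, -1)) @ np.linalg.pinv(khatri_rao(B, C).T)
          B = np.moveaxis(T, 1, 0).reshape(J, -1) @ np.linalg.pinv(khatri_rao(A, C).T)
          C = np.moveaxis(T, 2, 0).reshape(K, -1) @ np.linalg.pinv(khatri_rao(A, B).T)

      T_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
      print("relative error:", np.linalg.norm(T - T_hat) / np.linalg.norm(T))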

  8. Silanization of glass chips—A factorial approach for optimization

    NASA Astrophysics Data System (ADS)

    Vistas, Cláudia R.; Águas, Ana C. P.; Ferreira, Guilherme N. M.

    2013-12-01

    Silanization of glass chips with 3-mercaptopropyltrimethoxysilane (MPTS) was investigated and optimized to generate a high-quality layer with well-oriented thiol groups. A full factorial design was used to evaluate the influence of silane concentration and reaction time. The stabilization of the silane monolayer by thermal curing was also investigated, and a disulfide reduction step was included to fully regenerate the thiol-modified surface function. Fluorescence analysis and water contact angle measurements were used to quantitatively assess the chemical modifications, wettability and quality of modified chip surfaces throughout the silanization, curing and reduction steps. The factorial design enables a systematic approach for the optimization of the glass chip silanization process. The optimal conditions for the silanization were incubation of the chips in a 2.5% MPTS solution for 2 h, followed by a curing process at 110 °C for 2 h and a reduction step with 10 mM dithiothreitol for 30 min at 37 °C. For these conditions the surface density of functional thiol groups was 4.9 × 10¹³ molecules/cm², which is similar to the expected maximum coverage obtained from theoretical estimations based on the projected molecular area (∼5 × 10¹³ molecules/cm²).

  9. Applying Dynamical Systems Theory to Optimize Libration Point Orbit Stationkeeping Maneuvers for WIND

    NASA Technical Reports Server (NTRS)

    Brown, Jonathan M.; Petersen, Jeremy D.

    2014-01-01

    NASA's WIND mission has been operating in a large amplitude Lissajous orbit in the vicinity of the interior libration point of the Sun-Earth/Moon system since 2004. Regular stationkeeping maneuvers are required to maintain the orbit due to the instability around the collinear libration points. Historically these stationkeeping maneuvers have been performed by applying an incremental change in velocity, or Δv, along the spacecraft-Sun vector as projected into the ecliptic plane. Previous studies have shown that the magnitude of libration point stationkeeping maneuvers can be minimized by applying the Δv in the direction of the local stable manifold found using dynamical systems theory. This paper presents the analysis of this new maneuver strategy, which shows that the magnitude of stationkeeping maneuvers can be decreased by 5 to 25 percent, depending on the location in the orbit where the maneuver is performed. The implementation of the optimized maneuver method into operations is discussed and results are presented for the first two optimized stationkeeping maneuvers executed by WIND.

  10. A statistical approach to optimizing concrete mixture design.

    PubMed

    Ahmad, Shamsad; Alghamdi, Saeid A

    2014-01-01

    A step-by-step statistical approach is proposed to obtain optimum proportioning of concrete mixtures using the data obtained through a statistically planned experimental program. The utility of the proposed approach for optimizing the design of concrete mixture is illustrated considering a typical case in which trial mixtures were considered according to a full factorial experiment design involving three factors and their three levels (3³). A total of 27 concrete mixtures with three replicates (81 specimens) were considered by varying the levels of key factors affecting compressive strength of concrete, namely, water/cementitious materials ratio (0.38, 0.43, and 0.48), cementitious materials content (350, 375, and 400 kg/m³), and fine/total aggregate ratio (0.35, 0.40, and 0.45). The experimental data were utilized to carry out analysis of variance (ANOVA) and to develop a polynomial regression model for compressive strength in terms of the three design factors considered in this study. The developed statistical model was used to show how optimization of concrete mixtures can be carried out with different possible options. PMID:24688405
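
    The response-surface step can be sketched as follows in Python: fit a quadratic polynomial in the three mixture factors by least squares and scan a grid for the predicted optimum. The strength values below are simulated from an assumed trend, not the paper's measurements.

      # Sketch of the response-surface step: fit a quadratic polynomial in the three
      # mixture factors by least squares and scan a grid for the optimum. Strength
      # values are simulated, not the paper's measurements.
      import itertools
      import numpy as np

      rng = np.random.default_rng(6)
      wc = [0.38, 0.43, 0.48]            # water/cementitious ratio
      cm = [350, 375, 400]               # cementitious content (kg/m^3)
      fa = [0.35, 0.40, 0.45]            # fine/total aggregate ratio
      runs = np.array(list(itertools.product(wc, cm, fa)))

      def features(X):
          w, c, f = X.T
          return np.column_stack([np.ones(len(X)), w, c, f, w*c, w*f, c*f, w*w, c*c, f*f])

      # Simulated strengths (MPa): falls with w/c, rises with content.
      strength = (90 - 120*runs[:, 0] + 0.05*runs[:, 1] - 10*runs[:, 2]
                  + rng.normal(0, 1.0, len(runs)))

      beta, *_ = np.linalg.lstsq(features(runs), strength, rcond=None)

      grid = np.array(list(itertools.product(np.linspace(0.38, 0.48, 11),
                                             np.linspace(350, 400, 11),
                                             np.linspace(0.35, 0.45, 11))))
      pred = features(grid) @ beta
      best = grid[np.argmax(pred)]
      print("predicted optimum (w/c, content, fine/total):", np.round(best, 3))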

  12. Optimal subinterval selection approach for power system transient stability simulation

    DOE PAGESBeta

    Kim, Soobae; Overbye, Thomas J.

    2015-10-21

    Power system transient stability analysis requires an appropriate integration time step to avoid numerical instability as well as to reduce computational demands. For fast system dynamics, which vary more rapidly than what the time step covers, a fraction of the time step, called a subinterval, is used. However, the optimal value of this subinterval is not easily determined because the analysis of the system dynamics might be required. This selection is usually made from engineering experience, and perhaps trial and error. This paper proposes an optimal subinterval selection approach for power system transient stability analysis, which is based on modal analysis using a single machine infinite bus (SMIB) system. Fast system dynamics are identified with the modal analysis and the SMIB system is used focusing on fast local modes. An appropriate subinterval time step from the proposed approach can reduce the computational burden and achieve accurate simulation responses as well. As a result, the performance of the proposed method is demonstrated with the GSO 37-bus system.
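
    The selection principle can be illustrated with a toy computation: linearize an SMIB system, extract its fastest oscillatory mode, and size the subinterval to resolve that mode. The parameter values below are assumptions for illustration only.

    ```python
    # Minimal sketch of modal subinterval sizing for an SMIB system
    # (all parameter values are assumed, not taken from the paper).
    import numpy as np

    M = 0.0265   # effective inertia 2H/omega_s (assumed)
    D = 0.01     # damping coefficient (assumed)
    K = 1.2      # synchronizing coefficient E*V/X * cos(delta0) (assumed)

    # linearized swing dynamics, state x = [rotor angle dev., speed dev.]
    A = np.array([[0.0, 1.0],
                  [-K / M, -D / M]])

    eigvals = np.linalg.eigvals(A)
    f_max = max(abs(eigvals.imag)) / (2 * np.pi)   # fastest local mode, Hz

    points_per_cycle = 10                          # resolution heuristic
    dt_sub = 1.0 / (points_per_cycle * f_max)
    print(f"fastest mode: {f_max:.2f} Hz -> subinterval ~ {dt_sub*1e3:.1f} ms")
    ```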

  14. Correction of linear-array lidar intensity data using an optimal beam shaping approach

    NASA Astrophysics Data System (ADS)

    Xu, Fan; Wang, Yuanqing; Yang, Xingyu; Zhang, Bingqing; Li, Fenfang

    2016-08-01

    The linear-array lidar has recently been developed and applied for its superiority of vertically non-scanning operation, large field of view, high sensitivity and high precision. The beam shaper is the key component for linear-array detection. However, traditional beam shaping approaches can hardly satisfy the requirement of obtaining unbiased and complete backscattered intensity data. The required beam distribution should be roughly oblate U-shaped rather than Gaussian or uniform. Thus, an optimal beam shaping approach is proposed in this paper. By employing a pair of conical lenses and a cylindrical lens behind the beam expander, the expanded Gaussian laser is shaped to a line-shaped beam whose intensity distribution is more consistent with the required distribution. To provide a better fit to the requirement, an off-axis method is adopted. The design of the optimal beam shaping module is mathematically explained and experimental verification of the module performance is also presented in this paper. The experimental results indicate that the optimal beam shaping approach can effectively correct the intensity image and provide a ~30% gain in detection area over the traditional approach, thus improving the imaging quality of the linear-array lidar.

  15. A systems biology approach to radiation therapy optimization.

    PubMed

    Brahme, Anders; Lind, Bengt K

    2010-05-01

    During the last 20 years, the field of cellular and not least molecular radiation biology has developed substantially and can today describe the response of heterogeneous tumors and organized normal tissues to radiation therapy quite well. An increased understanding of the sub-cellular and molecular response is leading to a more general systems biological approach to radiation therapy and treatment optimization. It is interesting that most of the characteristics of the tissue infrastructure, such as the vascular system and the degree of hypoxia, have to be considered to get an accurate description of tumor and normal tissue responses to ionizing radiation. In the limited space available, only a brief description of some of the most important concepts and processes is possible, starting from the key functional genomics pathways of the cell that are not only responsible for tumor development but also for the response of the cells to radiation therapy. The key mechanisms for cellular damage and damage repair are described. It is furthermore discussed how these processes can be exploited to inactivate the tumor without severely damaging surrounding normal tissues using suitable radiation modalities like intensity-modulated radiation therapy (IMRT) or light ions. The use of such methods may lead to a truly scientific approach to radiation therapy optimization, particularly when in vivo predictive assays of radiation responsiveness become clinically available on a larger scale. Brief examples of the efficiency of IMRT are also given, showing how sensitive normal tissues can be spared at the same time as highly curative doses are delivered to a tumor that is often radiation resistant and located near organs at risk. This new approach maximizes the probability of eradicating the tumor while, at the same time, minimizing adverse reactions in sensitive normal tissues as far as possible using IMRT with photons and light ions. PMID:20191284

  16. A simple approach to metal hydride alloy optimization

    NASA Technical Reports Server (NTRS)

    Lawson, D. D.; Miller, C.; Landel, R. F.

    1976-01-01

    Certain metals and related alloys can combine with hydrogen in a reversible fashion, so that on being heated they release a portion of the gas. Such materials may find application in the large-scale storage of hydrogen. Metals and alloys that show high dissociation pressure at low temperatures and low endothermic heat of dissociation, and are therefore desirable for hydrogen storage, give values of the Hildebrand-Scott solubility parameter that lie between 100-118 Hildebrands (Ref. 1), close to that of dissociated hydrogen. All of the less practical storage systems give much lower values of the solubility parameter. By using the Hildebrand solubility parameter as a criterion and applying the mixing rule to combinations of known alloys and solid solutions, correlations are made to optimize alloy compositions and maximize hydrogen storage capacity.
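
    The screening idea reduces to a linear mixing rule over component solubility parameters, keeping compositions that land in the cited 100-118 Hildebrand window. The sketch below uses hypothetical component values, not measured ones.

    ```python
    # Hedged illustration of solubility-parameter screening via a linear
    # mixing rule; the two component values are placeholders, not data.
    import numpy as np

    delta = {"alloy_A": 95.0, "alloy_B": 125.0}   # hypothetical Hildebrands

    for x in np.linspace(0.0, 1.0, 11):
        # mixing rule: composition-weighted average of component parameters
        mix = x * delta["alloy_A"] + (1 - x) * delta["alloy_B"]
        if 100.0 <= mix <= 118.0:
            print(f"x = {x:.1f}: delta_mix = {mix:.1f} Hildebrands (candidate)")
    ```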

  17. A stochastic optimization approach for integrated urban water resource planning.

    PubMed

    Huang, Y; Chen, J; Zeng, S; Sun, F; Dong, X

    2013-01-01

    Urban water is facing the challenges of both scarcity and water quality deterioration. Consideration of nonconventional water resources has increasingly become essential over the last decade in urban water resource planning. In addition, rapid urbanization and economic development have led to increasingly uncertain water demand and fragile water infrastructures. Planning of urban water resources thus requires not only an integrated consideration of both conventional and nonconventional urban water resources, including reclaimed wastewater and harvested rainwater, but also the ability to design under gross future uncertainties for better reliability. This paper developed an integrated nonlinear stochastic optimization model for urban water resource evaluation and planning in order to optimize urban water flows. It accounted for not only water quantity but also water quality from different sources and for different uses with different costs. The model was successfully applied to a case study in Beijing, which is facing a significant water shortage. The results reveal how various urban water resources could be cost-effectively allocated by different planning alternatives and how their reliabilities would change. PMID:23552255
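
    As a deliberately simplified, deterministic stand-in for the stochastic model described above, the allocation core can be posed as a cost-minimizing program over several sources. All figures below are illustrative assumptions.

    ```python
    # Toy allocation of three urban water sources (surface, reclaimed, rain)
    # to meet demand at minimum cost; a deterministic stand-in, not the
    # paper's stochastic nonlinear model. Numbers are assumed.
    from scipy.optimize import linprog

    cost = [0.8, 0.4, 0.3]                 # unit cost per source (assumed)
    availability = [500.0, 200.0, 80.0]    # source caps (assumed)
    demand = 600.0                         # total demand (assumed)

    res = linprog(c=cost,
                  A_ub=[[-1.0, -1.0, -1.0]], b_ub=[-demand],  # meet demand
                  bounds=list(zip([0.0] * 3, availability)))
    print("allocation:", res.x, "total cost:", res.fun)
    ```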

  18. New Approaches to HSCT Multidisciplinary Design and Optimization

    NASA Technical Reports Server (NTRS)

    Schrage, D. P.; Craig, J. I.; Fulton, R. E.; Mistree, F.

    1996-01-01

    The successful development of a capable and economically viable high speed civil transport (HSCT) is perhaps one of the most challenging tasks in aeronautics for the next two decades. At its heart it is fundamentally the design of a complex engineered system that has significant societal, environmental and political impacts. As such it presents a formidable challenge to all areas of aeronautics, and it is therefore a particularly appropriate subject for research in multidisciplinary design and optimization (MDO). In fact, it is starkly clear that without the availability of powerful and versatile multidisciplinary design, analysis and optimization methods, the design, construction and operation of an HSCT simply cannot be achieved. The present research project is focused on the development and evaluation of MDO methods that, while broader and more general in scope, are particularly appropriate to the HSCT design problem. The research aims not only to develop the basic methods but also to apply them to relevant examples from the NASA HSCT R&D effort. The research involves a three-year effort aimed first at describing the HSCT MDO problem, next at developing the problem formulation, and finally at solving a significant portion of the problem.

  19. Discovery and Optimization of Materials Using Evolutionary Approaches.

    PubMed

    Le, Tu C; Winkler, David A

    2016-05-25

    Materials science is undergoing a revolution, generating valuable new materials such as flexible solar panels, biomaterials and printable tissues, new catalysts, polymers, and porous materials with unprecedented properties. However, the number of potentially accessible materials is immense. Artificial evolutionary methods such as genetic algorithms, which explore large, complex search spaces very efficiently, can be applied to the identification and optimization of novel materials more rapidly than by physical experiments alone. Machine learning models can augment experimental measurements of materials fitness to accelerate identification of useful and novel materials in vast materials composition or property spaces. This review discusses the problems of large materials spaces and the types of evolutionary algorithms employed to identify or optimize materials; explains how materials can be represented mathematically as genomes; describes fitness landscapes and mutation operators commonly employed in materials evolution; and provides a comprehensive summary of published research on the use of evolutionary methods to generate new catalysts, phosphors, and a range of other materials. The review identifies the potential for evolutionary methods to revolutionize a wide range of manufacturing, medical, and materials-based industries. PMID:27171499
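
    The evolutionary loop described here (genome, fitness, selection, crossover, mutation) is compact enough to sketch directly. Below, material "genomes" are composition vectors and the fitness function is a synthetic placeholder standing in for an experiment or machine-learning surrogate.

    ```python
    # Toy genetic algorithm over composition "genomes"; the fitness is a
    # synthetic placeholder, not a real materials property model.
    import numpy as np

    rng = np.random.default_rng(1)

    def fitness(x):          # placeholder for experiment / ML surrogate
        return -np.sum((x - np.array([0.2, 0.5, 0.3]))**2)

    def normalize(p):        # keep compositions on the simplex (sum to 1)
        p = np.clip(p, 1e-9, None)
        return p / p.sum(axis=-1, keepdims=True)

    pop = normalize(rng.random((40, 3)))
    for gen in range(100):
        f = np.array([fitness(x) for x in pop])
        parents = pop[np.argsort(f)[-20:]]                  # truncation selection
        i, j = rng.integers(0, 20, (2, 40))
        w = rng.random((40, 1))
        children = normalize(w * parents[i] + (1 - w) * parents[j])   # crossover
        pop = normalize(children + rng.normal(0, 0.02, children.shape))  # mutation

    best = pop[np.argmax([fitness(x) for x in pop])]
    print("best composition found:", np.round(best, 3))
    ```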

  20. An Airway Network Flow Assignment Approach Based on an Efficient Multiobjective Optimization Framework

    PubMed Central

    Guan, Xiangmin; Zhang, Xuejun; Zhu, Yanbo; Sun, Dengfeng; Lei, Jiaxing

    2015-01-01

    To reduce airspace congestion and flight delays simultaneously, this paper formulates the airway network flow assignment (ANFA) problem as a multiobjective optimization model and presents a new multiobjective optimization framework to solve it. Firstly, an effective multi-island parallel evolution algorithm with multiple evolution populations is employed to improve the optimization capability. Secondly, the nondominated sorting genetic algorithm II is applied for each population. In addition, a cooperative coevolution algorithm is adapted to divide the ANFA problem into several low-dimensional biobjective optimization problems that are easier to deal with. Finally, in order to maintain the diversity of solutions and to avoid prematurity, a dynamic adjustment operator based on solution congestion degree is specifically designed for the ANFA problem. Simulation results using real traffic data from the China air route network and daily flight plans demonstrate that the proposed approach can improve the solution quality effectively, showing superiority over existing approaches such as the multiobjective genetic algorithm, the well-known multiobjective evolutionary algorithm based on decomposition, and a cooperative coevolution multiobjective algorithm, as well as other parallel evolution algorithms with different migration topology. PMID:26180840
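
    The core building block of NSGA-II used here is nondominated sorting. The sketch below implements the first-front extraction for two generic minimization objectives (think congestion and delay); it is a generic illustration, not the paper's code.

    ```python
    # First Pareto front for a set of bi-objective solutions (minimization).
    import numpy as np

    def nondominated_front(F):
        """Return indices of solutions not dominated by any other point."""
        n = len(F)
        keep = []
        for i in range(n):
            dominated = any(np.all(F[j] <= F[i]) and np.any(F[j] < F[i])
                            for j in range(n) if j != i)
            if not dominated:
                keep.append(i)
        return keep

    # e.g. columns = (congestion, delay); both to be minimized
    F = np.array([[3.0, 5.0], [2.0, 6.0], [4.0, 4.0], [3.0, 4.0], [5.0, 1.0]])
    print(nondominated_front(F))   # -> [1, 3, 4]
    ```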

  1. Numerical optimization approaches of single-pulse conduction laser welding by beam shape tailoring

    NASA Astrophysics Data System (ADS)

    Sundqvist, J.; Kaplan, A. F. H.; Shachaf, L.; Brodsky, A.; Kong, C.; Blackburn, J.; Assuncao, E.; Quintino, L.

    2016-04-01

    While circular laser beams are usually applied in laser welding, for certain applications tailoring of the laser beam shape, e.g. by diffractive optical elements, can optimize the process. A case where overlap conduction mode welding should be used to produce a C-shaped joint was studied. For the dimensions studied in this paper, the weld joint deviated significantly from the C-shape of the single-pulse laser beam. Because of the complex heat flow interactions, the process requires optimization. Three approaches for extracting quantitative indicators, both for understanding the essential heat flow contributions and for optimizing the C-shape of the weld and of the laser beam, were studied and compared. While integral energy properties through a control volume and temperature gradients at key locations only partially describe the heat flow behaviour, the geometrical properties of the melt pool isotherm proved to be the most reliable method for optimization. While pronouncing the C-ends was not sufficient, an additional enlargement of the laser beam produced the desired C-shaped weld joint. The approach is analysed and the potential for generalization is discussed.

  3. Optimization of glycerol fed-batch fermentation in different reactor states: a variable kinetic parameter approach.

    PubMed

    Xie, Dongming; Liu, Dehua; Zhu, Haoli; Zhang, Jianan

    2002-05-01

    To optimize the fed-batch processes of glycerol fermentation in different reactor states, typical bioreactors including a 500-mL shaking flask, 600-mL and 15-L airlift loop reactors, and a 5-L stirred vessel were investigated. It was found that by re-estimating the values of only two variable kinetic parameters associated with physical transport phenomena in a reactor, the macrokinetic model of glycerol fermentation proposed in previous work could describe the batch processes in different reactor states well. This variable kinetic parameter (VKP) approach was further applied to model-based optimization of discrete-pulse feed (DPF) strategies of both glucose and corn steep slurry for glycerol fed-batch fermentation. The experimental results showed that, compared with the feed strategies determined by limited experimental optimization in previous work, the DPF strategies with adjusted VKPs could improve glycerol productivity by at least 27% in the scale-down and scale-up reactor states. The approach proposed appears promising for further modeling and optimization of glycerol fermentation or similar bioprocesses at larger scales. PMID:12049203

  4. Model reduction for chemical kinetics: An optimization approach

    SciTech Connect

    Petzold, L.; Zhu, W.

    1999-04-01

    The kinetics of a detailed chemically reacting system can potentially be very complex. Although the chemist may be interested in only a few species, the reaction model almost always involves a much larger number of species. Some of those species are radicals, which are very reactive and can be important intermediaries in the reaction scheme. A large number of elementary reactions can occur among the species; some of these reactions are fast and some are slow. The aim of simplified kinetics modeling is to derive the simplest reaction system which retains the essential features of the full system. An optimization-based method for reducing the number of species and reactions in a chemical kinetics model is described. Numerical results for several reaction mechanisms illustrate the potential of this approach.

  5. Approaches of Russian oil companies to optimal capital structure

    NASA Astrophysics Data System (ADS)

    Ishuk, T.; Ulyanova, O.; Savchitz, V.

    2015-11-01

    Oil companies play a vital role in the Russian economy. Demand for hydrocarbon products will keep increasing over the coming decades, along with population growth and social needs. A shift away from the raw-material orientation of the Russian economy and a transition to an innovation-driven development path do not preclude the development of the oil industry in the future. Moreover, society expects this sector to lead the Russian economy onto the road of innovative development through neo-industrialization. Achieving this requires both government action and effective capital management by companies. To form an optimal capital structure, it is necessary to minimize the cost of capital, reduce specific risks within existing limits, and maximize profitability. The capital structure analysis of Russian and foreign oil companies shows different approaches, reasons, and conditions and, consequently, different equity-to-debt relationships and costs of capital, which demands an effective capital management strategy.
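
    The trade-off sketched here between the cost of equity and the cost of debt is usually quantified through the weighted average cost of capital (WACC); a standard textbook computation is shown below, with purely illustrative figures rather than data for any real company.

    ```python
    # Standard WACC computation; all input figures are illustrative.
    def wacc(equity, debt, cost_equity, cost_debt, tax_rate):
        """Weighted average cost of capital with the debt tax shield."""
        v = equity + debt
        return (equity / v) * cost_equity + (debt / v) * cost_debt * (1 - tax_rate)

    # e.g. 60% equity at 14%, 40% debt at 8%, 20% tax rate (assumed numbers)
    print(f"WACC = {wacc(60e9, 40e9, 0.14, 0.08, 0.20):.2%}")
    ```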

  6. Optimizing Dendritic Cell-Based Approaches for Cancer Immunotherapy

    PubMed Central

    Datta, Jashodeep; Terhune, Julia H.; Lowenfeld, Lea; Cintolo, Jessica A.; Xu, Shuwen; Roses, Robert E.; Czerniecki, Brian J.

    2014-01-01

    Dendritic cells (DC) are professional antigen-presenting cells uniquely suited for cancer immunotherapy. They induce primary immune responses, potentiate the effector functions of previously primed T-lymphocytes, and orchestrate communication between innate and adaptive immunity. The remarkable diversity of cytokine activation regimens, DC maturation states, and antigen-loading strategies employed in current DC-based vaccine design reflects an evolving, but incomplete, understanding of optimal DC immunobiology. In the clinical realm, existing DC-based cancer immunotherapy efforts have yielded encouraging but inconsistent results. Despite the recent U.S. Food and Drug Administration (FDA) approval of DC-based sipuleucel-T for metastatic castration-resistant prostate cancer, clinically effective DC immunotherapy as monotherapy for a majority of tumors remains a distant goal. Recent work has identified strategies that may allow for more potent “next-generation” DC vaccines. Additionally, multimodality approaches incorporating DC-based immunotherapy may improve clinical outcomes. PMID:25506283

  7. [OPTIMAL APPROACH TO COMBINED TREATMENT OF PATIENTS WITH UROGENITAL PAPILLOMATOSIS].

    PubMed

    Breusov, A A; Kulchavenya, E V; Brizhatyukl, E V; Filimonov, P N

    2015-01-01

    The review analyzed 59 sources of domestic and foreign literature on the use of the immunomodulator izoprinozin in treating patients infected with human papillomavirus (HPV), together with the results of the authors' own experience. The high prevalence of HPV and its role in the development of cervical cancer are shown, and the mechanisms of HPV development and of host protection against this infection are described. The authors present approaches to the treatment of HPV-infected patients, with particular attention to izoprinozin. Isoprinosine belongs to the immunomodulators with antiviral activity. It inhibits the replication of viral DNA and RNA by binding to cell ribosomes and changing their stereochemical structure. HPV infection, especially in the early stages, may be successfully treated up to the complete elimination of the virus. Inosine pranobex (izoprinozin), having dual action and the most abundant evidence base, may be recognized as the optimal treatment option. PMID:26859953

  8. Structural Query Optimization in Native XML Databases: A Hybrid Approach

    NASA Astrophysics Data System (ADS)

    Haw, Su-Cheng; Lee, Chien-Sing

    As XML (eXtensible Mark-up Language) is gaining popularity for data exchange over the Web, querying XML data has become an important issue to be addressed. In native XML databases (NXD), XML documents are usually modeled as trees and XML queries are typically specified as path expressions. The primitive structural relationships are Parent-Child (P-C), Ancestor-Descendant (A-D), sibling and ordered query. Thus, a suitable and compact labeling scheme is crucial to identify these relationships and hence to process queries efficiently. We propose a novel labeling scheme consisting of <self-level:parent> to support all these relationships efficiently. Besides, we adopt the decomposition-matching-merging approach for structural query processing and propose a hybrid query optimization technique, TwigINLAB, to process and optimize twig query evaluation. Experimental results indicate that TwigINLAB can process all types of XML queries 15% better than the TwigStack algorithm in terms of execution time in most test cases.
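
    One plausible reading of a <self-level:parent> label is that each node stores its own id, its depth, and its parent's id; the structural checks then become simple comparisons. The sketch below is our interpretation for illustration, not the authors' exact scheme.

    ```python
    # Hedged sketch: structural-relationship checks over a hypothetical
    # (self id, level, parent id) label, one plausible reading of the scheme.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Label:
        self_id: int
        level: int
        parent: int

    def is_parent_child(a: Label, b: Label) -> bool:
        return b.parent == a.self_id and b.level == a.level + 1

    def is_sibling(a: Label, b: Label) -> bool:
        return a.parent == b.parent and a.self_id != b.self_id

    def is_ancestor(a: Label, b: Label, by_id: dict) -> bool:
        # A-D needs a walk up parent links with this kind of label;
        # region-encoding labels avoid the walk at higher storage cost.
        node = b
        while node.level > a.level:
            node = by_id[node.parent]
        return node.self_id == a.self_id

    root, a, b = Label(1, 0, 0), Label(2, 1, 1), Label(3, 2, 2)
    by_id = {1: root, 2: a, 3: b}
    assert is_parent_child(a, b) and is_ancestor(root, b, by_id)
    ```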

  9. Design optimization for cost and quality: The robust design approach

    NASA Technical Reports Server (NTRS)

    Unal, Resit

    1990-01-01

    Designing reliable, low cost, and operable space systems has become the key to future space operations. Designing high quality space systems at low cost is an economic and technological challenge to the designer. A systematic and efficient way to meet this challenge is Robust Design, a method of design optimization for performance, quality, and cost. Robust Design consists of: making system performance insensitive to material and subsystem variation, thus allowing the use of less costly materials and components; making designs less sensitive to variations in the operating environment, thus improving reliability and reducing operating costs; and using a structured development process so that engineering time is used most productively. The objective in Robust Design is to select the best combination of controllable design parameters so that the system is most robust to uncontrollable noise factors. The Robust Design methodology uses a mathematical tool called an orthogonal array, from design-of-experiments theory, to study a large number of decision variables with a significantly small number of experiments. Robust Design also uses a statistical measure of performance, called a signal-to-noise ratio, from electrical control theory, to evaluate the level of performance and the effect of noise factors. The purpose here is to investigate the Robust Design methodology for improving quality and cost, demonstrate its application with an example, and suggest its use as an integral part of the space system design process.
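
    The two tools named above, orthogonal arrays and signal-to-noise ratios, combine into a short worked micro-example: run an L9 array over three 3-level factors, score each run with the larger-the-better S/N ratio, and compare mean S/N per factor level. Response values below are synthetic.

    ```python
    # Taguchi-style analysis over an L9(3^3) orthogonal array with a
    # larger-the-better S/N ratio; responses are synthetic placeholders.
    import numpy as np

    # each row is a run; columns are factor level indices (0, 1, 2)
    L9 = np.array([[0, 0, 0], [0, 1, 1], [0, 2, 2],
                   [1, 0, 1], [1, 1, 2], [1, 2, 0],
                   [2, 0, 2], [2, 1, 0], [2, 2, 1]])

    def sn_larger_is_better(y):
        """Taguchi S/N ratio when larger responses are better."""
        y = np.asarray(y, dtype=float)
        return -10.0 * np.log10(np.mean(1.0 / y**2))

    rng = np.random.default_rng(0)
    base = 50 + 5 * L9[:, 0] - 3 * L9[:, 2]            # synthetic true effect
    responses = base[:, None] + rng.normal(0, 1.0, (9, 3))  # 3 noise replicates

    sn = np.array([sn_larger_is_better(row) for row in responses])
    for factor in range(3):
        means = [sn[L9[:, factor] == lvl].mean() for lvl in range(3)]
        print(f"factor {factor}: mean S/N per level = {np.round(means, 2)}")
    ```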

  10. Optimization methods of the net emission computation applied to cylindrical sodium vapor plasma

    SciTech Connect

    Hadj Salah, S.; Hajji, S.; Ben Hamida, M. B.; Charrada, K.

    2015-01-15

    An optimization method based on a physical analysis of the temperature profile and the different terms in the radiative transfer equation is developed to reduce the computation time of the net emission. This method has been applied to a cylindrical discharge in sodium vapor. Numerical results show a relative error in spectral flux density values lower than 5% with respect to an exact solution, whereas the computation time is about 10 orders of magnitude less. This method is followed by a spectral method based on the rearrangement of the line profiles. Results are shown for a Lorentzian profile and demonstrate a relative error lower than 10% with respect to the reference method and a gain in computation time of about 20 orders of magnitude.

  11. An improved ant colony optimization approach for optimization of process planning.

    PubMed

    Wang, JinFeng; Fan, XiaoLiang; Ding, Haimin

    2014-01-01

    Computer-aided process planning (CAPP) is an important interface between computer-aided design (CAD) and computer-aided manufacturing (CAM) in computer-integrated manufacturing environments (CIMs). In this paper, the process planning problem is described based on a weighted graph, and an ant colony optimization (ACO) approach is improved to deal with it effectively. The weighted graph consists of nodes, directed arcs, and undirected arcs, which denote operations, precedence constraints among operations, and the possible visited paths among operations, respectively. The ant colony traverses the necessary nodes on the graph to achieve the optimal solution with the objective of minimizing total production costs (TPCs). A pheromone updating strategy proposed in this paper is incorporated into the standard ACO; it includes a Global Update Rule and a Local Update Rule. A simple method of controlling the repeated number of identical process plans is designed to avoid local convergence. A case study has been carried out to examine the influence of various ACO parameters on system performance. Extensive comparative experiments have been carried out to validate the feasibility and efficiency of the proposed approach. PMID:25097874
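
    The two pheromone rules named above can be sketched generically: a local rule that evaporates a traversed arc as each ant moves, and a global rule that reinforces the best plan found so far. Constants and the cost model below are assumptions, not the paper's values.

    ```python
    # Generic sketch of ACO local/global pheromone updates (assumed constants).
    import numpy as np

    n_ops = 5                                   # operations = graph nodes
    rho_local, rho_global, tau0 = 0.1, 0.2, 1.0
    tau = np.full((n_ops, n_ops), tau0)         # pheromone on arcs

    def local_update(i, j):
        """Slightly evaporate a traversed arc to push ants to explore."""
        tau[i, j] = (1 - rho_local) * tau[i, j] + rho_local * tau0

    def global_update(best_plan, best_cost):
        """Evaporate everywhere, then reinforce the best-so-far plan."""
        tau[:, :] *= (1 - rho_global)
        for i, j in zip(best_plan, best_plan[1:]):
            tau[i, j] += rho_global / best_cost

    local_update(0, 2)
    global_update([0, 2, 4, 1, 3], best_cost=42.0)
    ```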

  13. Genetic algorithm applied to the optimization of quantum cascade lasers with second harmonic generation

    SciTech Connect

    Gajić, A.; Radovanović, J.; Milanović, V.; Indjin, D.; Ikonić, Z.

    2014-02-07

    A computational model for the optimization of the second order optical nonlinearities in GaInAs/AlInAs quantum cascade laser structures is presented. The set of structure parameters that lead to improved device performance was obtained through the implementation of the Genetic Algorithm. In the following step, the linear and second harmonic generation power were calculated by self-consistently solving the system of rate equations for carriers and photons. This rate equation system included both stimulated and simultaneous double photon absorption processes that occur between the levels relevant for second harmonic generation, and material-dependent effective mass, as well as band nonparabolicity, were taken into account. The developed method is general, in the sense that it can be applied to any higher order effect which requires the photon density equation to be included. Specifically, we have addressed the optimization of the active region of a double quantum well In0.53Ga0.47As/Al0.48In0.52As structure and presented its output characteristics.

  14. An integrated approach for optimal design of micro gas turbine combustors

    NASA Astrophysics Data System (ADS)

    Fuligno, Luca; Micheli, Diego; Poloni, Carlo

    2009-06-01

    This work presents an approach for the optimized design of small gas turbine combustors that integrates a 0-D code, CFD analyses and an advanced game-theory multi-objective optimization algorithm. The output of the 0-D code is a baseline design of the combustor, given the required fuel characteristics, the basic geometry (tubular or annular) and the combustion concept (i.e. lean premixed primary zone or diffusive processes). For the optimization of the baseline design, a simplified parametric CAD/mesher model is then defined and submitted to a CFD code. Free parameters of the optimization process are the position and size of the liner hole arrays, their total area and the shape of the exit duct, while the objectives are the minimization of NOx emissions, pressure losses and combustor exit Pattern Factor. A 3D simulation of the optimized geometry completes the design procedure. As a first demonstrative example, the integrated design process was applied to a tubular combustion chamber with a lean premixed primary zone for a recuperative methane-fuelled small gas turbine of the 100 kW class.

  15. A hybrid approach using chaotic dynamics and global search algorithms for combinatorial optimization problems

    NASA Astrophysics Data System (ADS)

    Igeta, Hideki; Hasegawa, Mikio

    Chaotic dynamics have been effectively applied to improve various heuristic algorithms for combinatorial optimization problems in many studies. Currently, the most common chaotic optimization scheme is to drive heuristic solution search algorithms applicable to large-scale problems by chaotic neurodynamics that include the tabu effect of the tabu search. Alternatively, meta-heuristic algorithms are used for combinatorial optimization by combining a neighboring-solution search algorithm, such as tabu, gradient, or another search method, with a global search algorithm, such as genetic algorithms (GA), ant colony optimization (ACO), or others. In these hybrid approaches, ACO has effectively optimized the solutions of many benchmark problems in the quadratic assignment problem library. In this paper, we propose a novel hybrid method that combines an effective chaotic search algorithm, which outperforms the tabu search, with global search algorithms such as ACO and GA. Our results show that the proposed chaotic hybrid algorithm has better performance than the conventional chaotic search and conventional hybrid algorithms. In addition, we show that the chaotic search algorithm combined with ACO performs better than when combined with GA.

  16. Optimization of floodplain monitoring sensors through an entropy approach

    NASA Astrophysics Data System (ADS)

    Ridolfi, E.; Yan, K.; Alfonso, L.; Di Baldassarre, G.; Napolitano, F.; Russo, F.; Bates, P. D.

    2012-04-01

    To support the decision-making processes of flood risk management and long-term floodplain planning, a significant issue is the availability of data to build appropriate and reliable models. Often the data required for model building, calibration and validation are not sufficient or available. A unique opportunity is offered nowadays by globally available data, which can be freely downloaded from the internet. However, there remains the question of what the real potential of those global remote sensing data, characterized by different accuracies, is for global inundation monitoring and how to integrate them with inundation models. In order to monitor a reach of the River Dee (UK), a network of cheap wireless sensors (GridStix) was deployed both in the channel and in the floodplain. These sensors measure the water depth, supplying the input data for flood mapping. Besides their accuracy and reliability, their location represents a big issue, the purpose being to provide as much information as possible with as little redundancy as possible. In order to update the layout, the initial number of six sensors was increased to create a redundant network over the area. Through an entropy approach, the most informative and least redundant sensors were then chosen from among all of them. First, a simple raster-based inundation model (LISFLOOD-FP) is used to generate a synthetic GridStix data set of water stages. The Digital Elevation Model (DEM) used for hydraulic model building is the globally and freely available SRTM DEM. Second, the information content of each sensor is compared by evaluating its marginal entropy. Those with a low marginal entropy are excluded from the process because of their low capability to provide information. Then the number of sensors is optimized by considering a Multi-Objective Optimization Problem (MOOP) with two objectives, namely maximization of the joint entropy (a measure of the information content) and minimization of redundancy among the selected sensors.
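
    The marginal-entropy screening step is easy to sketch: discretize each sensor's water-stage series and rank sensors by entropy. The series below are synthetic stand-ins for the LISFLOOD-FP output described above.

    ```python
    # Rank sensors by marginal entropy of their (discretized) stage series;
    # the series are synthetic stand-ins for hydraulic-model output.
    import numpy as np

    def marginal_entropy(series, bins=20):
        counts, _ = np.histogram(series, bins=bins)
        p = counts[counts > 0] / counts.sum()
        return -np.sum(p * np.log2(p))          # bits

    rng = np.random.default_rng(3)
    stages = {f"sensor_{k}": rng.gamma(2.0, 1.0 + 0.3 * k, 500)
              for k in range(6)}

    ranked = sorted(stages, key=lambda s: marginal_entropy(stages[s]),
                    reverse=True)
    print("most informative first:", ranked)
    ```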

  17. MSE optimal bit-rate allocation in JPEG2000 Part 2 compression applied to a 3D data set

    NASA Astrophysics Data System (ADS)

    Kosheleva, Olga M.; Cabrera, Sergio D.; Usevitch, Bryan E.; Aguirre, Alberto; Vidal, Edward, Jr.

    2004-10-01

    A bit rate allocation (BRA) strategy is needed to optimally compress three-dimensional (3-D) data on a per-slice basis, treating it as a collection of two-dimensional (2-D) slices/components. This approach is compatible with the framework of JPEG2000 Part 2, which includes the option of pre-processing the slices with a decorrelation transform in the cross-component direction so that slices of transform coefficients are compressed. In this paper, we illustrate the impact of a recently developed inter-slice rate-distortion optimal bit-rate allocation approach that is applicable to this compression system. The approach exploits the MSE optimality of all JPEG2000 bit streams for all slices when each is produced in the quality progressive mode. Each bit stream can be used to produce a rate-distortion curve (RDC) for each slice that is MSE optimal at each bit rate of interest. The inter-slice allocation approach uses all RDCs for all slices to optimally select an overall optimal set of bit rates for all the slices using a constrained optimization procedure. The optimization is conceptually similar to Post-Compression Rate-Distortion optimization that is used within JPEG2000 to optimize bit rates allocated to codeblocks. Results are presented for two types of data sets: a 3-D computed tomography (CT) medical image, and a 3-D meteorological data set derived from a particular modeling program. For comparison purposes, compression results are also illustrated for the traditional log-variance approach and for a uniform allocation strategy. The approach is illustrated using two decorrelation transforms (the Karhunen-Loève transform and the discrete wavelet transform) for which the inter-slice allocation scheme has the most impact.
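
    A standard way to realize this kind of constrained allocation over per-slice RD curves is greedy equal-slope (Lagrangian) allocation: repeatedly give the next rate increment to the slice with the largest marginal distortion reduction. The curves below are synthetic (D proportional to 2^(-2R)), standing in for measured RDCs.

    ```python
    # Greedy equal-slope bit allocation across per-slice RD curves;
    # curves are synthetic stand-ins for measured JPEG2000 RDCs.
    import heapq
    import numpy as np

    rates = np.linspace(0.0, 4.0, 41)              # candidate rates, 0.1 bpp steps
    scales = [4.0, 1.0, 0.25]                      # per-slice distortion scales
    rdc = [a * 2.0**(-2.0 * rates) for a in scales]  # one MSE curve per slice

    total_steps = 60                               # budget: 6.0 bpp in 0.1 steps
    alloc = [0] * len(rdc)
    heap = [(-(rdc[s][0] - rdc[s][1]), s) for s in range(len(rdc))]
    heapq.heapify(heap)
    for _ in range(total_steps):
        _, s = heapq.heappop(heap)                 # best marginal MSE reduction
        alloc[s] += 1
        if alloc[s] + 1 < len(rates):
            gain = rdc[s][alloc[s]] - rdc[s][alloc[s] + 1]
            heapq.heappush(heap, (-gain, s))
    print("bpp per slice:", [rates[k] for k in alloc])
    ```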

  18. APPLIED CHEMICAL ECOLOGY OF THE ORIENTAL FRUIT MOTH: OPTIMIZING THE USE OF HAND-APPLIED DISPENSERS FOR MATING DISRUPTION

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The use of synthetic sex pheromones for mating disruption has been widely adopted as an environmentally safe alternative to broad-spectrum insecticides to control many lepidopteran pest species [1]. Among the controlled release devices for insect pheromones, hand-applied dispensers are the most comm...

  19. Particle Swarm Optimization Applied to EEG Source Localization of Somatosensory Evoked Potentials.

    PubMed

    Shirvany, Yazdan; Mahmood, Qaiser; Edelvik, Fredrik; Jakobsson, Stefan; Hedstrom, Anders; Persson, Mikael

    2014-01-01

    One of the most important steps in presurgical diagnosis of medically intractable epilepsy is to find the precise location of the epileptogenic foci. Electroencephalography (EEG) is a noninvasive tool commonly used at epilepsy surgery centers for presurgical diagnosis. In this paper, a modified particle swarm optimization (MPSO) method is used to solve the EEG source localization problem. The method is applied to noninvasive EEG recordings of somatosensory evoked potentials (SEPs) for a healthy subject. A 1 mm hexahedral finite element volume conductor model of the subject's head was generated using T1-weighted magnetic resonance imaging data. Special consideration was made to accurately model the skull and cerebrospinal fluid. An exhaustive search pattern and the MPSO method were then applied to the peak of the averaged SEP data and both identified the same region of the somatosensory cortex as the location of the SEP source. A clinical expert independently identified the expected source location, further corroborating the source analysis methods. The MPSO converged to the global minimum with significantly lower computational complexity compared to the exhaustive search method, which required almost 3700 times more evaluations. PMID:24122569
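
    For readers unfamiliar with the underlying optimizer, a generic PSO (standard inertia-weight form, not the paper's modified variant) is a few dozen lines; here the EEG forward-model misfit is replaced by a placeholder cost over a candidate dipole position.

    ```python
    # Generic inertia-weight PSO; the objective is a placeholder for the
    # EEG forward-model misfit of a candidate dipole position.
    import numpy as np

    def pso(cost, dim, n=30, iters=200, lo=-1.0, hi=1.0, w=0.7, c1=1.5, c2=1.5):
        rng = np.random.default_rng(0)
        x = rng.uniform(lo, hi, (n, dim))
        v = np.zeros((n, dim))
        pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
        g = pbest[np.argmin(pbest_f)]
        for _ in range(iters):
            r1, r2 = rng.random((2, n, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = np.clip(x + v, lo, hi)
            f = np.array([cost(p) for p in x])
            improved = f < pbest_f
            pbest[improved], pbest_f[improved] = x[improved], f[improved]
            g = pbest[np.argmin(pbest_f)]
        return g, pbest_f.min()

    misfit = lambda p: np.sum((p - 0.3)**2)   # placeholder objective
    print(pso(misfit, dim=3))
    ```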

  20. Optimizing algal cultivation & productivity : an innovative, multidiscipline, and multiscale approach.

    SciTech Connect

    Murton, Jaclyn K.; Hanson, David T.; Turner, Tom; Powell, Amy Jo; James, Scott Carlton; Timlin, Jerilyn Ann; Scholle, Steven; August, Andrew; Dwyer, Brian P.; Ruffing, Anne; Jones, Howland D. T.; Ricken, James Bryce; Reichardt, Thomas A.

    2010-04-01

    Progress in algal biofuels has been limited by significant knowledge gaps in algal biology, particularly as they relate to scale-up. To address this we are investigating how culture composition dynamics (light as well as biotic and abiotic stressors) describe key biochemical indicators of algal health: growth rate, photosynthetic electron transport, and lipid production. Our approach combines traditional algal physiology with genomics, bioanalytical spectroscopy, chemical imaging, remote sensing, and computational modeling to provide an improved fundamental understanding of algal cell biology across multiple culture scales. This work spans investigations from the single-cell level to ensemble measurements of algal cell cultures at the laboratory benchtop to large greenhouse scale (175 gal). We will discuss the advantages of this novel, multidisciplinary strategy and emphasize the importance of developing an integrated toolkit to provide sensitive, selective methods for detecting early fluctuations in algal health, productivity, and population diversity. Progress in several areas will be summarized, including identification of spectroscopic signatures for algal culture composition, stress level, and lipid production, enabled by non-invasive spectroscopic monitoring of the photosynthetic and photoprotective pigments at the single-cell and bulk-culture scales. Early experiments compare and contrast the well-studied green alga Chlamydomonas with two potential production strains of microalgae, Nannochloropsis and Dunaliella, under optimal and stressed conditions. This integrated approach has the potential for broad impact on algal biofuels and bioenergy, and several of these opportunities will be discussed.

  1. Dynamic optimization of ISR sensors using a risk-based reward function applied to ground and space surveillance scenarios

    NASA Astrophysics Data System (ADS)

    DeSena, J. T.; Martin, S. R.; Clarke, J. C.; Dutrow, D. A.; Newman, A. J.

    2012-06-01

    As the number and diversity of sensing assets available for intelligence, surveillance and reconnaissance (ISR) operations continues to expand, the limited ability of human operators to effectively manage, control and exploit the ISR ensemble is exceeded, leading to reduced operational effectiveness. Automated support both in the processing of voluminous sensor data and in sensor asset control can relieve the burden on human operators to support operation of larger ISR ensembles. In dynamic environments it is essential to react quickly to current information to avoid stale, sub-optimal plans. Our approach is to apply the principles of feedback control to ISR operations, "closing the loop" from the sensor collections through automated processing to ISR asset control. Previous work by the authors demonstrated non-myopic multiple platform trajectory control using a receding horizon controller in a closed feedback loop with a multiple hypothesis tracker applied to multi-target search and track simulation scenarios in the ground and space domains. This paper presents extensions in both size and scope of the previous work, demonstrating closed-loop control, involving both platform routing and sensor pointing, of a multisensor, multi-platform ISR ensemble tasked with providing situational awareness and performing search, track and classification of multiple moving ground targets in irregular warfare scenarios. The closed-loop ISR system is fully realized using distributed, asynchronous components that communicate over a network. The closed-loop ISR system has been exercised via a networked simulation test bed against a scenario in the Afghanistan theater implemented using high-fidelity terrain and imagery data. In addition, the system has been applied to space surveillance scenarios requiring tracking of space objects where current deliberative, manually intensive processes for managing sensor assets are insufficiently responsive. Simulation experiment results are presented.

  2. Applying Computer Adaptive Testing to Optimize Online Assessment of Suicidal Behavior: A Simulation Study

    PubMed Central

    de Vries, Anton LM; de Groot, Marieke H; de Keijser, Jos; Kerkhof, Ad JFM

    2014-01-01

    Background: The Internet is used increasingly for both suicide research and prevention. To optimize online assessment of suicidal patients, there is a need for short, good-quality tools to assess elevated risk of future suicidal behavior. Computer adaptive testing (CAT) can be used to reduce response burden and improve accuracy, and make the available pencil-and-paper tools more appropriate for online administration. Objective: The aim was to test whether an item response-based computer adaptive simulation can be used to reduce the length of the Beck Scale for Suicide Ideation (BSS). Methods: The data used for our simulation were obtained from a large multicenter trial in The Netherlands: the Professionals in Training to STOP suicide (PITSTOP suicide) study. We applied a principal components analysis (PCA), confirmatory factor analysis (CFA), and a graded response model (GRM), and simulated a CAT. Results: The scores of 505 patients were analyzed. Psychometric analyses showed the questionnaire to be unidimensional with good internal consistency. The computer adaptive simulation showed that for the estimation of elevated risk of future suicidal behavior 4 items (instead of the full 19) were sufficient, on average. Conclusions: This study demonstrated that CAT can be applied successfully to reduce the length of the Dutch version of the BSS. We argue that the use of CAT can improve the accuracy and the response burden when assessing the risk of future suicidal behavior online. Because CAT can be daunting for clinicians and applied scientists, we offer a concrete example of our computer adaptive simulation of the Dutch version of the BSS at the end of the paper. PMID:25213259
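
    The adaptive loop itself is simple to illustrate. The sketch below uses a 2PL model for brevity instead of the paper's graded response model, with simulated item parameters and responses; the selection logic (give the item with maximum Fisher information at the current ability estimate, then update the estimate) is analogous.

    ```python
    # Illustrative CAT loop: maximum-information item selection under a 2PL
    # model (a simplification of the paper's GRM); all data are simulated.
    import numpy as np

    rng = np.random.default_rng(42)
    a = rng.uniform(1.0, 2.5, 19)       # discriminations (simulated)
    b = rng.normal(0.0, 1.0, 19)        # difficulties (simulated)
    theta_true = 1.0

    def p_correct(theta, k):
        return 1.0 / (1.0 + np.exp(-a[k] * (theta - b[k])))

    def fisher_info(theta, k):
        p = p_correct(theta, k)
        return a[k]**2 * p * (1.0 - p)

    theta, administered, responses = 0.0, [], []
    for _ in range(4):                  # ~4 items sufficed in the study
        k = max((i for i in range(19) if i not in administered),
                key=lambda i: fisher_info(theta, i))
        administered.append(k)
        responses.append(float(rng.random() < p_correct(theta_true, k)))
        # one-step Newton (ML score) update of the ability estimate
        score = sum((r - p_correct(theta, j)) * a[j]
                    for j, r in zip(administered, responses))
        info = sum(fisher_info(theta, j) for j in administered)
        theta += score / info
    print("items given:", administered, "theta estimate:", round(theta, 2))
    ```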

  3. A modular approach to large-scale design optimization of aerospace systems

    NASA Astrophysics Data System (ADS)

    Hwang, John T.

    Gradient-based optimization and the adjoint method form a synergistic combination that enables the efficient solution of large-scale optimization problems. Though the gradient-based approach struggles with non-smooth or multi-modal problems, the capability to efficiently optimize up to tens of thousands of design variables provides a valuable design tool for exploring complex tradeoffs and finding unintuitive designs. However, the widespread adoption of gradient-based optimization is limited by the implementation challenges for computing derivatives efficiently and accurately, particularly in multidisciplinary and shape design problems. This thesis addresses these difficulties in two ways. First, to deal with the heterogeneity and integration challenges of multidisciplinary problems, this thesis presents a computational modeling framework that solves multidisciplinary systems and computes their derivatives in a semi-automated fashion. This framework is built upon a new mathematical formulation developed in this thesis that expresses any computational model as a system of algebraic equations and unifies all methods for computing derivatives using a single equation. The framework is applied to two engineering problems: the optimization of a nanosatellite with 7 disciplines and over 25,000 design variables; and simultaneous allocation and mission optimization for commercial aircraft involving 330 design variables, 12 of which are integer variables handled using the branch-and-bound method. In both cases, the framework makes large-scale optimization possible by reducing the implementation effort and code complexity. The second half of this thesis presents a differentiable parametrization of aircraft geometries and structures for high-fidelity shape optimization. Existing geometry parametrizations are not differentiable, or they are limited in the types of shape changes they allow. This is addressed by a novel parametrization that smoothly interpolates aircraft

  4. A jazz-based approach for optimal setting of pressure reducing valves in water distribution networks

    NASA Astrophysics Data System (ADS)

    De Paola, Francesco; Galdiero, Enzo; Giugni, Maurizio

    2016-05-01

    This study presents a model for valve setting in water distribution networks (WDNs), with the aim of reducing the level of leakage. The approach is based on the harmony search (HS) optimization algorithm. The HS mimics a jazz improvisation process able to find the best solutions, in this case corresponding to valve settings in a WDN. The model also interfaces with the improved version of a popular hydraulic simulator, EPANET 2.0, to check the hydraulic constraints and to evaluate the performances of the solutions. Penalties are introduced in the objective function in case of violation of the hydraulic constraints. The model is applied to two case studies, and the obtained results in terms of pressure reductions are comparable with those of competitive metaheuristic algorithms (e.g. genetic algorithms). The results demonstrate the suitability of the HS algorithm for water network management and optimization.
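
    A bare-bones harmony search of the kind used here fits in a short function: a harmony memory, a memory-consideration rate (HMCR), a pitch-adjustment rate (PAR), and replacement of the worst harmony. The EPANET-based hydraulic evaluation is replaced below by a placeholder objective over valve settings.

    ```python
    # Generic harmony search; the placeholder objective stands in for the
    # EPANET-evaluated leakage/penalty function used in the paper.
    import numpy as np

    def harmony_search(cost, dim, lo, hi, hms=20, hmcr=0.9, par=0.3,
                       bw=0.05, iters=2000, seed=0):
        rng = np.random.default_rng(seed)
        hm = rng.uniform(lo, hi, (hms, dim))           # harmony memory
        f = np.array([cost(h) for h in hm])
        for _ in range(iters):
            new = np.empty(dim)
            for d in range(dim):
                if rng.random() < hmcr:                # draw from memory...
                    new[d] = hm[rng.integers(hms), d]
                    if rng.random() < par:             # ...with pitch adjustment
                        new[d] += rng.uniform(-bw, bw) * (hi - lo)
                else:                                  # or improvise randomly
                    new[d] = rng.uniform(lo, hi)
            new = np.clip(new, lo, hi)
            worst, fn = np.argmax(f), cost(new)
            if fn < f[worst]:                          # replace worst harmony
                hm[worst], f[worst] = new, fn
        return hm[np.argmin(f)], f.min()

    target = np.array([0.4, 0.6, 0.5])                 # placeholder objective
    best, val = harmony_search(lambda s: np.sum((s - target)**2), 3, 0.0, 1.0)
    print(best, val)
    ```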

  5. Applying Bayesian Item Selection Approaches to Adaptive Tests Using Polytomous Items

    ERIC Educational Resources Information Center

    Penfield, Randall D.

    2006-01-01

    This study applied the maximum expected information (MEI) and the maximum posterior-weighted information (MPI) approaches of computer adaptive testing item selection to the case of a test using polytomous items following the partial credit model. The MEI and MPI approaches are described. A simulation study compared the efficiency of ability…

  6. Applying the Cultural Formulation Approach to Career Counseling with Latinas/os

    ERIC Educational Resources Information Center

    Flores, Lisa Y.; Ramos, Karina; Kanagui, Marlen

    2010-01-01

    In this article, the authors present two hypothetical cases, one of a Mexican American female college student and one of a Mexican immigrant adult male, and apply a culturally sensitive approach to career assessment and career counseling with each of these clients. Drawing from Leong, Hardin, and Gupta's cultural formulation approach (CFA) to…

  7. Eddy Currents applied to de-tumbling of space debris: feasibility analysis, design and optimization aspects

    NASA Astrophysics Data System (ADS)

    Ortiz Gómez, Natalia; Walker, Scott J. I.

    Existing studies on the evolution of the space debris population show that both mitigation measures and active debris removal methods are necessary in order to prevent the current population from growing. Active debris removal methods, which require contact with the target, show complications if the target is rotating at high speeds. Observed rotations go up to 50 deg/s, combined with precession and nutation motions. “Natural” rotational damping in upper stages has been observed for some debris objects. This phenomenon occurs due to the eddy currents induced by the Earth's magnetic field in the predominantly conductive materials of these man-made rotating objects. The idea presented in this paper is to subject the satellite to an enhanced magnetic field in order to subdue it and damp its rotation, thus allowing for its subsequent de-orbiting phase. The braking method that is proposed has the advantage of avoiding any kind of mechanical contact with the target. A deployable structure with a magnetic coil at its end is used to induce the necessary braking torques on the target. This way, the induced magnetic field is created far away from the chaser's main body, avoiding undesirable effects on its instruments. This paper focuses on the overall design of the system, and the parameters considered are: the braking time, the power required, the mass of the deployable structure and the magnetic coil system, the size of the coil, the materials selection and the distance to the target. The different equations that link all these variables together are presented. Nevertheless, these equations lead to several variables which make it possible to approach the engineering design as an optimization problem. Given that only a few variables remain, no sophisticated numerical methods are called for, and a simple graphical approach can be used to display the optimum solutions. Some parameters are open to future refinements as the optimization problem must be contemplated globally in

  8. Convergence behavior of multireference perturbation theory: Forced degeneracy and optimization partitioning applied to the beryllium atom

    NASA Astrophysics Data System (ADS)

    Finley, James P.; Chaudhuri, Rajat K.; Freed, Karl F.

    1996-07-01

    High-order multireference perturbation theory is applied to the ¹S states of the beryllium atom using a reference (model) space composed of the |1s²2s²⟩ and |1s²2p²⟩ configuration-state functions (CSF's), a system that is known to yield divergent expansions using Møller-Plesset and Epstein-Nesbet partitioning methods. Computations of the eigenvalues are made through 40th order using forced degeneracy (FD) partitioning and the recently introduced optimization (OPT) partitioning. The former forces the 2s and 2p orbitals to be degenerate in zeroth order, while the latter chooses optimal zeroth-order energies of the (few) most important states. Our methodology employs simple models for understanding and suggesting remedies for unsuitable choices of reference spaces and partitioning methods. By examining a two-state model composed of only the |1s²2p²⟩ and |1s²2s3s⟩ states of the beryllium atom, it is demonstrated that the full computation with 1323 CSF's can converge only if the zeroth-order energy of the |1s²2s3s⟩ Rydberg state from the orthogonal space lies below the zeroth-order energy of the |1s²2p²⟩ CSF from the reference space. Thus convergence in this case requires a zeroth-order spectral overlap between the orthogonal and reference spaces. The FD partitioning is not capable of generating this type of spectral overlap and thus yields a divergent expansion. However, the expansion is actually asymptotically convergent, with divergent behavior not displayed until the 11th order, because the |1s²2s3s⟩ Rydberg state is only weakly coupled with the |1s²2p²⟩ CSF and because these states are energetically well separated in zeroth order. The OPT partitioning chooses the correct zeroth-order energy ordering and thus yields a convergent expansion that is also very accurate in low orders compared to the exact solution within the basis.

  9. Structural approaches to spin glasses and optimization problems

    NASA Astrophysics Data System (ADS)

    de Sanctis, Luca

    We introduce the concept of Random Multi-Overlap Structure (RaMOSt) as a generalization of the one introduced by M. Aizenman et al. for non-diluted spin glasses. We use this concept to find generalized bounds for the free energy of the Viana-Bray model of diluted spin glasses and to formulate and prove the Extended Variational Principle that implicitly provides the free energy of the model. Then we exhibit a theorem for the limiting RaMOSt, analogous to the one found by F. Guerra for the Sherrington-Kirkpatrick model, that describes some stability properties of the model. We also show how our technique can be used to prove the existence of the thermodynamic limit of the free energy. We then propose an ultrametric breaking of replica symmetry for diluted spin glasses in the framework of Random Multi-Overlap Structures (RaMOSt). Such a proposal is closer to the Parisi theory for non-diluted spin glasses than the theory based on the iterative approach. Our approach allows one to formulate an ansatz in which the Broken Replica Symmetry trial function depends on a set of numbers, over which one has to take the infimum (as opposed to a nested chain of probability distributions). Our scheme suggests that the order parameter is determined by the probability distribution of the multi-overlap in a similar sense as in the non-diluted case, and it is not necessarily a functional. Such results are then extended to the K-SAT and p-XOR-SAT optimization problems, and to the spherical mean field spin glass. The ultrametric structure exhibits a factorization property similar to that of the optimal structures for the Viana-Bray model. The present work paves the way to a revisited Parisi theory for diluted spin systems. Moreover, it emphasizes some structural analogies among different models, which also seem plausible for models that still escape good mathematical control. This structural analysis seems quite promising both mathematically and physically.

  10. Optimizing neural networks for river flow forecasting - Evolutionary Computation methods versus the Levenberg-Marquardt approach

    NASA Astrophysics Data System (ADS)

    Piotrowski, Adam P.; Napiorkowski, Jarosław J.

    2011-09-01

    Evolutionary Computation-based algorithms. The Levenberg-Marquardt optimization must be considered the most efficient one due to its speed. Its drawback of possibly sticking in a poor local optimum can be overcome by applying a multi-start approach.

  11. Calculation of a double reactive azeotrope using stochastic optimization approaches

    NASA Astrophysics Data System (ADS)

    Mendes Platt, Gustavo; Pinheiro Domingos, Roberto; Oliveira de Andrade, Matheus

    2013-02-01

    A homogeneous reactive azeotrope is a thermodynamic coexistence condition of two phases under chemical and phase equilibrium, where the compositions of both phases (in the Ung-Doherty sense) are equal. This kind of nonlinear phenomenon arises in real-world situations and has applications in the chemical and petrochemical industries. The reactive azeotrope calculation is modeled as a nonlinear algebraic system with phase equilibrium, chemical equilibrium and azeotropy equations. This nonlinear system can exhibit more than one solution, corresponding to a double reactive azeotrope. The robust calculation of reactive azeotropes can be conducted by several approaches, such as interval-Newton/generalized bisection algorithms and hybrid stochastic-deterministic frameworks. In this paper, we investigate the numerical aspects of the calculation of reactive azeotropes using two metaheuristics: the Luus-Jaakola adaptive random search and the Firefly algorithm. Moreover, we present results for a system (of industrial interest) with more than one azeotrope, the system isobutene/methanol/methyl-tert-butyl-ether (MTBE). We present convergence patterns for both algorithms, illustrating - in a bidimensional subdomain - the identification of reactive azeotropes. A strategy for the calculation of multiple roots in nonlinear systems is also applied. The results indicate that both algorithms are suitable and robust when applied to reactive azeotrope calculations for this "challenging" nonlinear system.
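
    The Luus-Jaakola adaptive random search named above is straightforward to sketch: recast root finding as minimization of ||F(x)||^2 and repeatedly sample in a contracting region around the incumbent best point. The toy 2x2 system below, which has two roots like the double-azeotrope case, stands in for the MTBE equilibrium model:

```python
import numpy as np

def F(x):
    return np.array([x[0] ** 2 + x[1] ** 2 - 1.0,   # unit circle
                     x[1] - x[0] ** 2])             # parabola: two roots

def luus_jaakola(lo, hi, n_outer=200, n_inner=20, contraction=0.95, seed=0):
    """Minimize ||F(x)||^2 by sampling in a shrinking box around the best."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    best = rng.uniform(lo, hi)
    f_best = np.sum(F(best) ** 2)
    size = hi - lo                          # current search-region size
    for _ in range(n_outer):
        for _ in range(n_inner):
            cand = best + size * rng.uniform(-0.5, 0.5, size=best.size)
            f = np.sum(F(cand) ** 2)
            if f < f_best:
                best, f_best = cand, f
        size *= contraction                 # shrink the region
    return best, f_best

root, resid = luus_jaakola(lo=[-2, -2], hi=[2, 2])
print("root:", root, "||F||^2:", resid)
```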

  12. Optimizing denominator data estimation through a multimodel approach.

    PubMed

    Bryssinckx, Ward; Ducheyne, Els; Leirs, Herwig; Hendrickx, Guy

    2014-05-01

    To assess the risk of (zoonotic) disease transmission in developing countries, decision makers generally rely on distribution estimates of animals from survey records or projections of historical enumeration results. Given the high cost of large-scale surveys, the sample size is often restricted and the accuracy of estimates is therefore low, especially when high spatial resolution is required. This study explores possibilities of improving the accuracy of livestock distribution maps without additional samples, using spatial modelling based on regression tree forest models developed from subsets of the Uganda 2008 Livestock Census data and several covariates. The accuracy of these spatial models, as well as the accuracy of an ensemble of a spatial model and a direct estimate, was compared to direct estimates and "true" livestock figures based on the entire dataset. The new approach is shown to effectively increase livestock estimate accuracy (median relative error decrease from 0.166 to 0.037 for total sample sizes of 80-1,600 animals, respectively). This outcome suggests that the accuracy levels obtained with direct estimates can indeed be achieved with lower sample sizes and the multimodel approach presented here, indicating a more efficient use of financial resources. PMID:24893035

  13. Application of an Optimal Tuner Selection Approach for On-Board Self-Tuning Engine Models

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Armstrong, Jeffrey B.; Garg, Sanjay

    2012-01-01

    An enhanced design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented in this paper. It specifically addresses the under-determined estimation problem, in which there are more unknown parameters than available sensor measurements. This work builds upon an existing technique for systematically selecting a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. While the existing technique was optimized for open-loop engine operation at a fixed design point, in this paper an alternative formulation is presented that enables the technique to be optimized for an engine operating under closed-loop control throughout the flight envelope. The theoretical Kalman filter mean squared estimation error at a steady-state closed-loop operating point is derived, and the tuner selection approach applied to minimize this error is discussed. A technique for constructing a globally optimal tuning parameter vector, which enables full-envelope application of the technology, is also presented, along with design steps for adjusting the dynamic response of the Kalman filter state estimates. Results from the application of the technique to linear and nonlinear aircraft engine simulations are presented and compared to the conventional approach of tuner selection. The new methodology is shown to yield a significant improvement in on-line Kalman filter estimation accuracy.
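
    The quantity being minimized, the theoretical steady-state Kalman estimation error, can be evaluated for a candidate design by solving the filter's discrete algebraic Riccati equation. A minimal sketch with toy matrices (this evaluates the error for a given design; it is not the paper's tuner-selection procedure itself):

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Toy discrete-time model: x_{k+1} = A x_k + w,  y_k = C x_k + v
A = np.array([[0.95, 0.10],
              [0.00, 0.90]])
C = np.array([[1.0, 0.0]])                  # one sensor, two states
Q = 0.01 * np.eye(2)                        # process-noise covariance
R = np.array([[0.04]])                      # measurement-noise covariance

# The a priori steady-state error covariance P solves the filter DARE
P = solve_discrete_are(A.T, C.T, Q, R)
K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)   # steady-state Kalman gain
print("steady-state MSE (trace of P):", np.trace(P))
```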

  14. Numerical and Experimental Approach for the Optimal Design of a Dual Plate Under Ballistic Impact

    NASA Astrophysics Data System (ADS)

    Yoo, Jeonghoon; Chung, Dong-Teak; Park, Myung Soo

    To predict the behavior of a dual plate composed of 5052-aluminum and 1002-cold rolled steel under ballistic impact, numerical and experimental approaches are attempted. For an accurate numerical simulation of the impact phenomena, the appropriate selection of key parameter values based on numerical or experimental tests is critical. This study focuses not only on the optimization technique using numerical simulation but also on the numerical and experimental procedures used to obtain the required parameter values for the simulation. The Johnson-Cook model is used to simulate the mechanical behavior, and simplified experimental and numerical approaches are performed to obtain the material properties of the model. The element erosion scheme for robust simulation of the ballistic impact problem is applied by adjusting the element erosion criteria of each material based on numerical and experimental results. The adequate mesh size and aspect ratio are chosen based on parametric studies. Plastic energy is suggested as a response representing the strength of the plate for optimization under dynamic loading. The optimized thickness of the dual plate is obtained to resist ballistic impact without penetration as well as to minimize the total weight.
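
    For reference, the Johnson-Cook flow stress named above has the standard form sigma = (A + B*eps^n) * (1 + C*ln(eps_dot*)) * (1 - T*^m). A small sketch with hypothetical parameters (not calibrated values for 5052-Al or 1002-CR steel):

```python
import numpy as np

def johnson_cook_stress(eps_p, eps_dot, T,
                        A, B, n, C, m,
                        eps_dot_ref=1.0, T_room=293.0, T_melt=880.0):
    """Johnson-Cook flow stress:
       sigma = (A + B*eps_p**n) * (1 + C*ln(eps_dot/eps_dot_ref))
               * (1 - T_hom**m),  T_hom = (T - T_room)/(T_melt - T_room).
    """
    t_hom = np.clip((T - T_room) / (T_melt - T_room), 0.0, 1.0)
    return ((A + B * eps_p ** n)
            * (1.0 + C * np.log(eps_dot / eps_dot_ref))
            * (1.0 - t_hom ** m))

# Hypothetical parameters loosely typical of an aluminium alloy:
sigma = johnson_cook_stress(eps_p=0.10, eps_dot=1.0e3, T=350.0,
                            A=170e6, B=400e6, n=0.3, C=0.015, m=1.0)
print(f"flow stress ~ {float(sigma) / 1e6:.0f} MPa")
```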

  15. Peak Capacity Optimization in Comprehensive Two Dimensional Liquid Chromatography: A Practical Approach

    PubMed Central

    Gu, Haiwei; Huang, Yuan; Carr, Peter W.

    2010-01-01

    In this work we develop a practical approach to optimization in comprehensive two dimensional liquid chromatography (LC×LC) which incorporates the important under-sampling correction and is based on the previously developed gradient implementation of the Poppe approach to optimizing peak capacity. The Poppe method allows the determination of the column length, flow rate as well as initial and final eluent compositions that maximize the peak capacity at a given gradient time. It was assumed that gradient elution is applied in both dimensions and that various practical constraints are imposed on both the initial and final mobile phase composition in the first dimension separation. It was convenient to consider four different classes of solute sets differing in their retention properties. The major finding of this study is that the under-sampling effect is very important and causes some unexpected results including the important counter-intuitive observation that under certain conditions the optimum effective LC×LC peak capacity is obtained when the first dimension is deliberately run under sub-optimal conditions. PMID:21145554
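
    The under-sampling correction referred to above is commonly written as the Davis-Stoll-Carr broadening factor; a minimal sketch, assuming that form of the correction:

```python
import numpy as np

def effective_2d_peak_capacity(n1, n2, t_s, sigma1):
    """Under-sampling-corrected LCxLC peak capacity.

    Uses the average first-dimension broadening factor
        beta = sqrt(1 + 0.21 * (t_s / sigma1)**2)
    (Davis-Stoll-Carr), where t_s is the sampling (second-dimension
    cycle) time and sigma1 the first-dimension peak standard deviation.
    """
    beta = np.sqrt(1.0 + 0.21 * (t_s / sigma1) ** 2)
    return n1 * n2 / beta

# Example: n1 = 60, n2 = 25, 15 s sampling of peaks with sigma1 = 10 s
print(effective_2d_peak_capacity(60, 25, t_s=15.0, sigma1=10.0))
```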

  16. A multiobjective optimization approach to the operation and investment of the national energy and transportation systems

    NASA Astrophysics Data System (ADS)

    Ibanez, Eduardo

    Most U.S. energy usage is for electricity production and vehicle transportation, two interdependent infrastructures. The strength and number of the interdependencies will increase rapidly as hybrid electric transportation systems, including plug-in hybrid electric vehicles and hybrid electric trains, become more prominent. There are several new energy supply technologies reaching maturity, accelerated by public concern over global warming. The National Energy and Transportation Planning Tool (NETPLAN) is the implementation of the long-term investment and operation model for the transportation and energy networks. An evolutionary approach with an underlying fast linear optimization is in place to determine the solutions with the best investment portfolios in terms of cost, resiliency and sustainability, i.e., the solutions that form the Pareto front. The popular NSGA-II algorithm is used as the base for the multiobjective optimization, and metrics are developed to evaluate the energy and transportation portfolios. An integrated approach to resiliency is presented, allowing the evaluation of high-consequence events, like hurricanes or widespread blackouts. A scheme to parallelize the multiobjective solver is presented, along with a decomposition method for the cost minimization program. The modular and data-driven design of the software is presented. The modeling tool is applied in a numerical example to optimize the national investment in energy and transportation over the next 40 years.
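
    The core operation behind NSGA-II is non-dominated sorting, i.e., extracting the Pareto front of a population. A minimal sketch for minimization across all objectives (illustrative numbers, not NETPLAN output):

```python
import numpy as np

def pareto_front(costs):
    """Return indices of non-dominated rows of `costs` (lower is better)."""
    costs = np.asarray(costs, float)
    keep = []
    for i, c in enumerate(costs):
        # i is dominated if some row is <= c everywhere and < c somewhere
        dominated = np.any(np.all(costs <= c, axis=1) &
                           np.any(costs < c, axis=1))
        if not dominated:
            keep.append(i)
    return keep

# cost, resiliency penalty, sustainability penalty for four portfolios
portfolios = [[3.0, 2.0, 1.0],
              [2.0, 3.0, 2.0],
              [4.0, 4.0, 4.0],    # dominated by the first row
              [1.0, 5.0, 3.0]]
print("Pareto-optimal portfolios:", pareto_front(portfolios))
```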

  17. Optimal management of substrates in anaerobic co-digestion: An ant colony algorithm approach.

    PubMed

    Verdaguer, Marta; Molinos-Senante, María; Poch, Manel

    2016-04-01

    Sewage sludge (SWS) is inevitably produced in urban wastewater treatment plants (WWTPs). The treatment of SWS on site at small WWTPs is not economical; therefore, the SWS is typically transported to an alternative SWS treatment center. There is increased interest in the use of anaerobic digestion (AnD) with co-digestion as an SWS treatment alternative. Although the availability of different co-substrates has been ignored in most previous studies, it is an essential issue for the optimization of AnD co-digestion. In a pioneering approach, this paper applies an Ant-Colony-Optimization (ACO) algorithm that maximizes the generation of biogas through AnD co-digestion in order to optimize the discharge of organic waste from different waste sources in real time. An empirical application is developed based on a virtual case study that involves organic waste from urban WWTPs and agrifood activities. The results illustrate the dominant role of toxicity levels in selecting contributions to the AnD input. The methodology and case study proposed in this paper demonstrate the usefulness of the ACO approach in supporting a decision process that contributes to improving the sustainability of organic waste and SWS management. PMID:26868846

  18. Towards an Optimal Multi-Method Paleointensity Approach

    NASA Astrophysics Data System (ADS)

    de Groot, L. V.; Biggin, A. J.; Langereis, C. G.; Dekkers, M. J.

    2014-12-01

    Our recently proposed 'multi-method paleointensity approach' consists of at least IZZI-Thellier, MSP-DSC and pseudo-Thellier experiments, complemented with Microwave Thellier experiments for key flows or ages. All results are scrutinized by strict selection criteria to accept only the most reliable paleointensities. This approach yielded reliable estimates of the paleofield for ~70% of all cooling units sampled on Hawaii - an exceptionally high number for a paleointensity study on lavas. Furthermore, the credibility of the obtained results is greatly enhanced if more methods mutually agree within their experimental uncertainties. To further assess the success rate of this new approach, we applied it to two collections of (sub-)recent lavas from Tenerife and Gran Canaria (20 cooling units), and Terceira (Azores, 18 cooling units). Although the mineralogy and rock-magnetic properties of many of these flows seemed less favorable for paleointensity techniques compared to the Hawaiian samples, the multi-method paleointensity approach again yielded reliable estimates for 60-70% of all cooling units. One of the methods, the newly calibrated pseudo-Thellier method, proved to be an important element of our new paleointensity approach, yielding reliable estimates for ~50% of the Hawaiian lavas sampled. Its applicability to other volcanic edifices, however, remained questionable. The results from the Canarian and Azorean volcanic edifices provide further constraints on this method's potential. For lavas that are rock-magnetically (i.e. susceptibility-vs-temperature behavior) akin to Hawaiian lavas, the same selection criterion and calibration formula yielded successful results - testifying to the veracity of this new paleointensity method. Besides methodological advances, our new record for the Canary Islands also has geomagnetic implications. It reveals a dramatic increase in the intensity of the Earth's magnetic field from ~1250 to ~720 BC, reaching a maximum VADM of ~125 ZAm

  19. A comparison of statistical approaches used for the optimization of soluble protein expression in Escherichia coli.

    PubMed

    Papaneophytou, Christos; Kontopidis, George

    2016-04-01

    During a discovery project of potential inhibitors for three proteins, TNF-α, RANKL and HO-1, implicated in the pathogenesis of rheumatoid arthritis, significant amounts of purified proteins were required. The application of statistically designed experiments for screening and optimization of induction conditions allows rapid identification of the important factors and interactions between them. We have previously used response surface methodology (RSM) for the optimization of soluble expression of TNF-α and RANKL. In this work, we initially applied RSM for the optimization of recombinant HO-1, and a 91% increase in protein production was achieved. Subsequently, we slightly modified a published incomplete factorial approach (called IF1) in order to evaluate the effect of three expression variables (bacterial strains, induction temperatures and culture media) on the soluble expression levels of the three tested proteins. However, the soluble expression yields of TNF-α and RANKL obtained by the IF1 method were significantly lower (<50%) than those obtained by RSM. We further modified the IF1 approach by replacing the culture media with induction times, and the resulting method, called IF-STT (Incomplete Factorial-Strain/Temperature/Time), was validated using the three proteins. Interestingly, the soluble expression levels of the three proteins obtained by IF-STT were only 1.2-fold lower than those obtained by RSM. Although RSM is probably the best approach for optimization of biological processes, IF-STT is faster, examines the most important factors (bacterial strain, temperature and time) influencing soluble protein expression in a single experiment, and can be used in any recombinant protein expression project as a starting point. PMID:26721705
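
    The design-enumeration step behind such screening approaches is simple to sketch; the levels below are illustrative, not the published IF-STT design:

```python
from itertools import product

# Candidate levels for strain, induction temperature and induction time
strains = ["BL21(DE3)", "Rosetta(DE3)", "C41(DE3)"]
temperatures_C = [18, 25, 37]
times_h = [4, 8, 16]

# A full factorial enumerates every combination; an incomplete factorial
# would run only a balanced subset of these rows.
design = list(product(strains, temperatures_C, times_h))
print(f"{len(design)} runs")            # 27 for the full factorial
for run in design[:3]:
    print(run)
```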

  20. Optimization of rifamycin B fermentation in shake flasks via a machine-learning-based approach.

    PubMed

    Bapat, Prashant M; Wangikar, Pramod P

    2004-04-20

    Rifamycin B is an important polyketide antibiotic used in the treatment of tuberculosis and leprosy. We present results on medium optimization for Rifamycin B production via a barbital insensitive mutant strain of Amycolatopsis mediterranei S699. Machine-learning approaches such as Genetic algorithm (GA), Neighborhood analysis (NA) and Decision Tree technique (DT) were explored for optimizing the medium composition. Genetic algorithm was applied as a global search algorithm while NA was used for a guided local search and to develop medium predictors. The fermentation medium for Rifamycin B consisted of nine components. A large number of distinct medium compositions are possible by variation of concentration of each component. This presents a large combinatorial search space. Optimization was achieved within five generations via GA as well as NA. These five generations consisted of 178 shake-flask experiments, which is a small fraction of the search space. We detected multiple optima in the form of 11 distinct medium combinations. These medium combinations provided over 600% improvement in Rifamycin B productivity. Genetic algorithm performed better in optimizing fermentation medium as compared to NA. The Decision Tree technique revealed the media-media interactions qualitatively in the form of sets of rules for medium composition that give high as well as low productivity. PMID:15052640

  1. On a New Optimization Approach for the Hydroforming of Defects-Free Tubular Metallic Parts

    NASA Astrophysics Data System (ADS)

    Caseiro, J. F.; Valente, R. A. F.; Andrade-Campos, A.; Jorge, R. M. Natal

    2011-05-01

    In the hydroforming of tubular metallic components, process parameters (internal pressure, axial feed and counter-punch position) must be carefully set in order to avoid defects in the final part. If, on one hand, excessive pressure may lead to thinning and bursting during forming, on the other hand insufficient pressure may lead to an inadequate filling of the die. Similarly, excessive axial feeding may lead to the formation of wrinkles, whilst an inadequate one may cause thinning and, consequently, bursting. These apparently contradictory targets are virtually impossible to achieve without trial-and-error procedures in industry, unless optimization approaches are formulated and implemented for complex parts. In this sense, an optimization algorithm based on differential evolutionary techniques is presented here, capable of being applied in the determination of adequate process parameters for the hydroforming of metallic tubular components of complex geometries. The Hybrid Differential Evolution Particle Swarm Optimization (HDEPSO) algorithm, combining the advantages of a number of well-known distinct optimization strategies, acts along with a general-purpose implicit finite element software, and is based on the definition of wrinkling and thinning indicators. If defects are detected, the algorithm automatically corrects the process parameters and new numerical simulations are performed in real time. In the end, the algorithm proved to be robust and computationally cost-effective, thus providing a valid design tool for the forming of defect-free components in industry [1].

  2. Molecular tailoring approach for geometry optimization of large molecules: Energy evaluation and parallelization strategies

    NASA Astrophysics Data System (ADS)

    Ganesh, V.; Dongare, Rameshwar K.; Balanarayan, P.; Gadre, Shridhar R.

    2006-09-01

    A linear-scaling scheme for estimating the electronic energy, gradients, and Hessian of a large molecule at ab initio level of theory based on fragment set cardinality is presented. With this proposition, a general, cardinality-guided molecular tailoring approach (CG-MTA) for ab initio geometry optimization of large molecules is implemented. The method employs energy gradients extracted from fragment wave functions, enabling computations otherwise impractical on PC hardware. Further, the method is readily amenable to large scale coarse-grain parallelization with minimal communication among nodes, resulting in a near-linear speedup. CG-MTA is applied for density-functional-theory-based geometry optimization of a variety of molecules including α-tocopherol, taxol, γ-cyclodextrin, and two conformations of polyglycine. In the tests performed, energy and gradient estimates obtained from CG-MTA during optimization runs show an excellent agreement with those obtained from actual computation. Accuracy of the Hessian obtained employing CG-MTA provides good hope for the application of Hessian-based geometry optimization to large molecules.

  3. New Approaches to HSCT Multidisciplinary Design and Optimization

    NASA Technical Reports Server (NTRS)

    Schrage, Daniel P.; Craig, James I.; Fulton, Robert E.; Mistree, Farrokh

    1999-01-01

    New approaches to MDO have been developed and demonstrated during this project on a particularly challenging aeronautics problem: HSCT Aeroelastic Wing Design. Tackling this problem required the integration of resources and collaboration from three Georgia Tech laboratories: ASDL, SDL, and PPRL, along with close coordination and participation from industry. Its success can also be attributed to the close interaction and involvement of fellows from the NASA Multidisciplinary Analysis and Optimization (MAO) program, which was going on in parallel and provided additional resources to work the very complex, multidisciplinary problem, along with the methods being developed. The development of the Integrated Design Engineering Simulator (IDES) and its initial demonstration is a necessary first step in transitioning the methods and tools developed to larger industrial-sized problems of interest. It also provides a framework for the implementation and demonstration of the methodology. Attachment: Appendix A - List of publications. Appendix B - Year 1 report. Appendix C - Year 2 report. Appendix D - Year 3 report. Appendix E - accompanying CDROM.

  4. Optimization of convective fin systems: a holistic approach

    NASA Astrophysics Data System (ADS)

    Sasikumar, M.; Balaji, C.

    A numerical analysis of natural convection heat transfer and entropy generation from an array of vertical fins, standing on a horizontal duct, with turbulent fluid flow inside, has been carried out. The analysis takes into account the variation of base temperature along the duct, traditionally ignored by most studies on such problems. One-dimensional fin equation is solved using a second order finite difference scheme for each of the fins in the system and this, in conjunction with the use of turbulent flow correlations for duct, is used to obtain the temperature distribution along the duct. The influence of the geometric and thermal parameters, which are normally employed in the design of a thermal system, has been studied. Correlations are developed for (i) the total heat transfer rate per unit mass of the fin system (ii) total entropy generation rate and (iii) fin height, as a function of the geometric parameters of the fin system. Optimal dimensions of the fin system for (i) maximum heat transfer rate per unit mass and (ii) minimum total entropy generation rate are obtained using Genetic Algorithm. As expected, these optima do not match. An approach to a `holistic' design that takes into account both these criteria has also been presented.

  5. Improved Broadband Liner Optimization Applied to the Advanced Noise Control Fan

    NASA Technical Reports Server (NTRS)

    Nark, Douglas M.; Jones, Michael G.; Sutliff, Daniel L.; Ayle, Earl; Ichihashi, Fumitaka

    2014-01-01

    The broadband component of fan noise has grown in relevance with the utilization of increased bypass ratio and advanced fan designs. Thus, while the attenuation of fan tones remains paramount, the ability to simultaneously reduce broadband fan noise levels has become more desirable. This paper describes improvements to a previously established broadband acoustic liner optimization process using the Advanced Noise Control Fan rig as a demonstrator. Specifically, in-duct attenuation predictions with a statistical source model are used to obtain optimum impedance spectra over the conditions of interest. The predicted optimum impedance information is then used with acoustic liner modeling tools to design liners aimed at producing impedance spectra that most closely match the predicted optimum values. Design selection is based on an acceptance criterion that provides the ability to apply increased weighting to specific frequencies and/or operating conditions. Constant-depth, double-degree of freedom and variable-depth, multi-degree of freedom designs are carried through design, fabrication, and testing to validate the efficacy of the design process. Results illustrate the value of the design process in concurrently evaluating the relative costs/benefits of these liner designs. This study also provides an application for demonstrating the integrated use of duct acoustic propagation/radiation and liner modeling tools in the design and evaluation of novel broadband liner concepts for complex engine configurations.

  6. Finite-basis correction applied to the optimized effective potential within the FLAPW method

    NASA Astrophysics Data System (ADS)

    Friedrich, Christoph; Betzinger, Markus; Blügel, Stefan

    2011-03-01

    The optimized-effective-potential (OEP) method is a special technique to construct local exchange-correlation (xc) potentials from general orbital-dependent xc energy functionals for density-functional theory. Recently, we showed that particular care must be taken to construct local potentials within the all-electron full-potential augmented-plane-wave (FLAPW) approach. In fact, we found that the LAPW basis had to be converged to an accuracy that was far beyond that in calculations using conventional functionals, leading to a very high computational cost. This could be traced back to the convergence behavior of the density response function: only a highly converged basis lends the density enough flexibility to react adequately to changes of the potential. In this work we derive a numerical correction for the response function, which vanishes in the limit of an infinite, complete basis. It is constructed in the atomic spheres from the response of the basis functions themselves to changes of the potential. We show that such a finite-basis correction reduces the computational demand of OEP calculations considerably. We also discuss a similar correction scheme for GW calculations.

  7. Parameter Estimation of Ion Current Formulations Requires Hybrid Optimization Approach to Be Both Accurate and Reliable

    PubMed Central

    Loewe, Axel; Wilhelms, Mathias; Schmid, Jochen; Krause, Mathias J.; Fischer, Fathima; Thomas, Dierk; Scholz, Eberhard P.; Dössel, Olaf; Seemann, Gunnar

    2016-01-01

    Computational models of cardiac electrophysiology have provided insights into arrhythmogenesis and paved the way toward tailored therapies in recent years. To fully leverage in silico models in future research, however, these models need to be adapted to reflect pathologies, genetic alterations, or pharmacological effects. A common approach is to leave the structure of established models unaltered and estimate the values of a set of parameters. Today’s high-throughput patch clamp data acquisition methods require robust, unsupervised algorithms that estimate parameters both accurately and reliably. In this work, two classes of optimization approaches are evaluated: gradient-based trust-region-reflective and derivative-free particle swarm algorithms. Using synthetic input data and different ion current formulations from the Courtemanche et al. electrophysiological model of human atrial myocytes, we show that neither of the two schemes alone succeeds in meeting all requirements. Sequential combination of the two algorithms did improve the performance to some extent but not satisfactorily. Thus, we propose a novel hybrid approach coupling the two algorithms in each iteration. This hybrid approach yielded very accurate estimates with minimal dependency on the initial guess using synthetic input data for which a ground truth parameter set exists. When applied to measured data, the hybrid approach yielded the best fit, again with minimal variation. Using the proposed algorithm, a single run is sufficient to estimate the parameters. The degree of superiority over the other investigated algorithms in terms of accuracy and robustness depended on the type of current. In contrast to the non-hybrid approaches, the proposed method proved to be optimal for data of arbitrary signal to noise ratio. The hybrid algorithm proposed in this work provides an important tool to integrate experimental data into computational models both accurately and robustly, allowing one to assess the often non
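
    A minimal sketch of the hybrid idea, coupling a particle swarm step with a trust-region-reflective refinement of the swarm's best in each iteration, on a toy exponential-decay fit (not the Courtemanche ion-current formulations):

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 40)
truth = np.array([1.2, 3.0])
data = truth[0] * np.exp(-truth[1] * t)        # synthetic "measurement"

def residuals(p):
    return p[0] * np.exp(-p[1] * t) - data

def cost(p):                                   # same convention as SciPy:
    return 0.5 * np.sum(residuals(p) ** 2)     # cost = 0.5 * sum(r^2)

lo, hi = np.zeros(2), np.array([5.0, 10.0])
x = rng.uniform(lo, hi, (15, 2))               # 15 particles
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
g = pbest[pbest_f.argmin()].copy()

for _ in range(30):
    # --- PSO step (inertia 0.7, cognitive/social weights 1.5) ---
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
    x = np.clip(x + v, lo, hi)
    f = np.array([cost(p) for p in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    # --- gradient-based (trust-region-reflective) refinement ---
    sol = least_squares(residuals, pbest[pbest_f.argmin()],
                        bounds=(lo, hi), method="trf")
    if sol.cost < pbest_f.min():
        worst = pbest_f.argmax()               # reinject refined point
        pbest[worst], pbest_f[worst] = sol.x, sol.cost
    g = pbest[pbest_f.argmin()].copy()

print("estimate:", g, "true:", truth)
```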

  8. Improvement of remotely sensed vegetation coverage in heterogeneous environments with an optimal zoning approach

    NASA Astrophysics Data System (ADS)

    Li, Ru; Yue, Yuemin

    2015-08-01

    High spatial heterogeneity is a major source of uncertainty in the accurate monitoring of vegetation coverage. In this study, an optimal zoning approach that divides the whole heterogeneous image into relatively homogeneous segments was proposed to reduce the effects of high heterogeneity on vegetation coverage estimation. By combining the spectral similarity of adjacent pixels with the spatial autocorrelation of the segments, the optimal zoning approach accounted for intrasegment uniformity and intersegment disparity, improving the image segmentation. In comparison, vegetation coverage in the highly heterogeneous karst environments tended to be underestimated by the normalized difference vegetation index (NDVI) and overestimated by the normalized difference vegetation index-spectral mixture analysis (NDVI-SMA) model. Hence, when applying remote sensing to highly heterogeneous environments, the influence of high heterogeneity should not be ignored. Our study indicates that the proposed model, using the NDVI-SMA model with improved segmentation, ameliorates the effects of highly heterogeneous environments on the extraction of vegetation coverage from hyperspectral imagery. The proposed approach is useful for obtaining accurate estimates of vegetation coverage not only in karst environments but also in other environments with high heterogeneity.
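
    For reference, the two per-pixel quantities compared above reduce to simple formulas: NDVI = (NIR - R)/(NIR + R), and a two-endmember linear unmixing gives the vegetation fraction in closed form. A sketch with illustrative reflectances:

```python
import numpy as np

def ndvi(nir, red):
    return (nir - red) / (nir + red)

def vegetation_fraction(pixel, em_veg, em_soil):
    """Least-squares fraction f minimizing ||f*veg + (1-f)*soil - pixel||."""
    d = em_veg - em_soil
    f = np.dot(pixel - em_soil, d) / np.dot(d, d)
    return float(np.clip(f, 0.0, 1.0))

pixel = np.array([0.08, 0.30])      # [red, nir] reflectance of one pixel
veg   = np.array([0.04, 0.45])      # vegetation endmember
soil  = np.array([0.20, 0.25])      # bare-soil endmember
print("NDVI:", ndvi(pixel[1], pixel[0]))
print("vegetation fraction:", vegetation_fraction(pixel, veg, soil))
```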

  9. A Technical and Economic Optimization Approach to Exploring Offshore Renewable Energy Development in Hawaii

    SciTech Connect

    Larson, Kyle B.; Tagestad, Jerry D.; Perkins, Casey J.; Oster, Matthew R.; Warwick, M.; Geerlofs, Simon H.

    2015-09-01

    This study was conducted with the support of the U.S. Department of Energy’s (DOE’s) Wind and Water Power Technologies Office (WWPTO) as part of ongoing efforts to minimize key risks and reduce the cost and time associated with permitting and deploying ocean renewable energy. The focus of the study was to discuss a possible approach to exploring scenarios for ocean renewable energy development in Hawaii that attempts to optimize future development based on technical, economic, and policy criteria. The goal of the study was not to identify potentially suitable or feasible locations for development, but to discuss how such an approach may be developed for a given offshore area. Hawaii was selected for this case study due to the complex nature of the energy climate there and DOE’s ongoing involvement in supporting marine spatial planning for the West Coast. Primary objectives of the study included 1) discussing the political and economic context for ocean renewable energy development in Hawaii, especially with respect to how inter-island transmission may affect the future of renewable energy development in Hawaii; 2) applying a Geographic Information System (GIS) approach that has been used to assess the technical suitability of offshore renewable energy technologies in Washington, Oregon, and California to Hawaii’s offshore environment; and 3) formulating a mathematical model for exploring scenarios for ocean renewable energy development in Hawaii that seeks to optimize technical and economic suitability within the context of Hawaii’s existing energy policy and planning.

  10. A new optimization approach for shell and tube heat exchangers by using electromagnetism-like algorithm (EM)

    NASA Astrophysics Data System (ADS)

    Abed, Azher M.; Abed, Issa Ahmed; Majdi, Hasan Sh.; Al-Shamani, Ali Najah; Sopian, K.

    2016-02-01

    This study proposes a new procedure for the optimal design of shell and tube heat exchangers. The electromagnetism-like algorithm is applied to reduce heat exchanger capital cost and to design a compact, high-performance heat exchanger with effective use of the allowable pressure drop (cost of the pump). An optimization algorithm is then utilized to determine the optimal values of both the geometric design parameters and the maximum allowable pressure drop by pursuing the minimization of a total cost function. A computer code is developed for optimal shell and tube heat exchanger design. Different test cases are solved to demonstrate the effectiveness and ability of the proposed algorithm. Results are also compared with those obtained by other approaches available in the literature. The comparisons indicate that the proposed design procedure can be successfully applied in the optimal design of shell and tube heat exchangers. In particular, in the examined cases reductions of total cost of up to 30, 29, and 56.15% compared with the original design, and up to 18, 5.5 and 7.4% compared with other approaches, are observed for case studies 1, 2 and 3, respectively. The economic optimization resulting from the proposed design procedure is especially relevant when a compact, high-performance unit of moderate volume and cost is needed.

  11. A multi-resolution approach for optimal mass transport

    NASA Astrophysics Data System (ADS)

    Dominitz, Ayelet; Angenent, Sigurd; Tannenbaum, Allen

    2007-09-01

    Optimal mass transport is an important technique with numerous applications in econometrics, fluid dynamics, automatic control, statistical physics, shape optimization, expert systems, and meteorology. Motivated by certain problems in image registration and medical image visualization, in this note, we describe a simple gradient descent methodology for computing the optimal L2 transport mapping which may be easily implemented using a multiresolution scheme. We also indicate how the optimal transport map may be computed on the sphere. A numerical example is presented illustrating our ideas.

  12. Utility Theory for Evaluation of Optimal Process Condition of SAW: A Multi-Response Optimization Approach

    SciTech Connect

    Datta, Saurav; Biswas, Ajay; Bhaumik, Swapan; Majumdar, Gautam

    2011-01-17

    A multi-objective optimization problem has been solved in order to estimate an optimal process environment, consisting of an optimal parametric combination, to achieve desired quality indicators (related to bead geometry) of a submerged arc weld of mild steel. The quality indicators selected in the study were bead height, penetration depth, bead width and percentage dilution. The Taguchi method followed by the utility concept has been adopted to evaluate the optimal process condition achieving the multiple objective requirements of the desired weld quality.
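
    The utility-concept aggregation can be sketched as: compute a Taguchi signal-to-noise (S/N) ratio per response, rescale each S/N to a utility value, and combine the utilities with preference weights. The numbers below are illustrative, not the paper's weld data:

```python
import numpy as np

def sn_larger_is_better(y):   # e.g. penetration depth
    return -10.0 * np.log10(np.mean(1.0 / np.asarray(y, float) ** 2))

def sn_smaller_is_better(y):  # e.g. bead-width deviation
    return -10.0 * np.log10(np.mean(np.asarray(y, float) ** 2))

def utility(sn, sn_min, sn_max, u_min=0.0, u_max=9.0):
    """Linear rescaling of an S/N ratio onto a 0-9 utility scale."""
    return u_min + (u_max - u_min) * (sn - sn_min) / (sn_max - sn_min)

# S/N of one trial's replicates, rescaled within the observed S/N range,
# then combined with weights 0.6 / 0.4 (all values illustrative):
u_depth = utility(sn_larger_is_better([4.1, 4.3, 4.0]), sn_min=10.0, sn_max=14.0)
u_width = utility(sn_smaller_is_better([0.4, 0.5]), sn_min=5.0, sn_max=9.0)
overall = 0.6 * u_depth + 0.4 * u_width
print(f"overall utility: {overall:.2f}")
```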

  13. A hybrid gene selection approach for microarray data classification using cellular learning automata and ant colony optimization.

    PubMed

    Vafaee Sharbaf, Fatemeh; Mosafer, Sara; Moattar, Mohammad Hossein

    2016-06-01

    This paper proposes an approach for gene selection in microarray data. The proposed approach consists of a primary filter stage using the Fisher criterion, which reduces the initial set of genes and hence the search space and time complexity. Then, a wrapper approach based on cellular learning automata (CLA) optimized with the ant colony method (ACO) is used to find the set of features that improves the classification accuracy. CLA is applied due to its capability to learn and model complicated relationships. The features selected in the last phase are evaluated using the ROC curve, and the smallest, most effective feature subset is determined. The classifiers evaluated in the proposed framework are K-nearest neighbor, support vector machine and naïve Bayes. The proposed approach is evaluated on 4 microarray datasets. The evaluations confirm that the proposed approach can find the smallest subset of genes while approaching the maximum accuracy. PMID:27154739
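
    The primary filter stage is easy to sketch: rank genes by the Fisher criterion F_j = (mu1_j - mu2_j)^2 / (s1_j^2 + s2_j^2) and keep the top-ranked ones (synthetic data shown):

```python
import numpy as np

def fisher_scores(X, y):
    """X: samples x genes expression matrix; y: binary class labels."""
    X1, X2 = X[y == 0], X[y == 1]
    num = (X1.mean(axis=0) - X2.mean(axis=0)) ** 2
    den = X1.var(axis=0) + X2.var(axis=0)
    return num / (den + 1e-12)            # guard against zero variance

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 500))            # 40 samples, 500 genes
y = np.repeat([0, 1], 20)
X[y == 1, :5] += 2.0                      # make 5 genes informative
top = np.argsort(fisher_scores(X, y))[::-1][:10]
print("top genes:", top)                  # the informative genes rank first
```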

  14. Micro-resonator loss computation using conformal transformation and active-lasing FDTD approach and applications to tangential/radial output waveguide optimization I: Analytical approach

    NASA Astrophysics Data System (ADS)

    Li, Xiangyu; Ou, Fang; Huang, Yingyan; Ho, Seng-Tiong

    2013-03-01

    Understanding the physics of the loss mechanisms of optical microresonators is important for knowing how to use them optimally in various applications. In this three-paper series, we utilize both an analytical method (the Conformal Transformation Approach) and a numerical method (the Active-Lasing Finite-Difference Time-Domain method) to study the resonator loss and cavity quality "Q" factor, and apply them to optimize the radial/tangential waveguide coupling design. Both approaches demonstrate good agreement in their common region of applicability. In Part I, we review and expand on the conformal transformation method to show how an exact solution of the radiation loss for the case of a cylindrical micro-resonator under both TE and TM polarizations can be obtained. We also show how the method can be extended to the microdisk case.

  15. Hybrid Metaheuristic Approach for Nonlocal Optimization of Molecular Systems.

    PubMed

    Dresselhaus, Thomas; Yang, Jack; Kumbhar, Sadhana; Waller, Mark P

    2013-04-01

    Accurate modeling of molecular systems requires a good knowledge of the structure; therefore, conformation searching/optimization is a routine necessity in computational chemistry. Here we present a hybrid metaheuristic optimization (HMO) algorithm, which combines ant colony optimization (ACO) and particle swarm optimization (PSO) for the optimization of molecular systems. The HMO implementation meta-optimizes the parameters of the ACO algorithm on-the-fly via the coupled PSO algorithm. The ACO parameters were optimized on a set of small difluorinated polyenes, where the parameters exhibited small variance as the size of the molecule increased. The HMO algorithm was validated by searching for the closed form of around 100 molecular balances. Compared to the gradient-based optimized molecular balance structures, the HMO algorithm was able to find low-energy conformations with an 87% success rate. Finally, the computational effort for generating low-energy conformation(s) for the phenylalanyl-glycyl-glycine tripeptide was approximately 60 CPU hours with the ACO algorithm, in comparison to 4 CPU years required for an exhaustive brute-force calculation. PMID:26583559

  16. An analysis of the optimal multiobjective inventory clustering decision with small quantity and great variety inventory by applying a DPSO.

    PubMed

    Wang, Shen-Tsu; Li, Meng-Hua

    2014-01-01

    When an enterprise has thousands of varieties in its inventory, the use of a single management method is unlikely to be feasible. A better way to manage this problem is to categorise inventory items into several clusters according to inventory decisions and to use different management methods for different clusters. The present study applies DPSO (dynamic particle swarm optimisation) to the problem of clustering inventory items. Without requiring prior inventory knowledge, inventory items are automatically clustered into a near-optimal number of clusters. The obtained clustering results should satisfy the inventory objective equation, which consists of different objectives such as total cost, backorder rate, demand relevance, and inventory turnover rate. This study integrates the above four objectives into a multiobjective equation and inputs the actual inventory items of the enterprise into DPSO. In comparison with other clustering methods, the proposed method can consider different objectives, obtain an overall better solution, and achieve better convergence results and inventory decisions. PMID:25197713

  18. An effective approach to optimizing the parameters of complex thermal power plants

    NASA Astrophysics Data System (ADS)

    Kler, A. M.; Zharkov, P. V.; Epishkin, N. O.

    2016-03-01

    A new approach has been developed to solve optimization problems for the continuous parameters of thermal power plants. It is based on organizing the optimization so that the system of equations describing the thermal power plant is solved only at the endpoint of the optimization process. Using the example of optimizing the parameters of a coal-fired power unit with ultra-supercritical steam parameters, the efficiency of the proposed approach is demonstrated and compared with the previously used one, in which the system of equations was solved at each iteration of the optimization process.

  19. Simultaneous optimization by neuro-genetic approach for analysis of plant materials by laser induced breakdown spectroscopy

    NASA Astrophysics Data System (ADS)

    Nunes, Lidiane Cristina; da Silva, Gilmare Antônia; Trevizan, Lilian Cristina; Santos Júnior, Dario; Poppi, Ronei Jesus; Krug, Francisco José

    2009-06-01

    A simultaneous optimization strategy based on a neuro-genetic approach is proposed for the selection of laser induced breakdown spectroscopy operational conditions for the simultaneous determination of macro-nutrients (Ca, Mg and P), micro-nutrients (B, Cu, Fe, Mn and Zn), Al and Si in plant samples. A laser induced breakdown spectroscopy system equipped with a 10 Hz Q-switched Nd:YAG laser (12 ns, 532 nm, 140 mJ) and an Echelle spectrometer with an intensified charge-coupled device was used. Integration time gate, delay time, amplification gain and number of pulses were optimized. Pellets of spinach leaves (NIST 1570a) were employed as laboratory samples. In order to find a model that could correlate laser induced breakdown spectroscopy operational conditions with compromise high peak areas of all elements simultaneously, a Bayesian Regularized Artificial Neural Network approach was employed. Subsequently, a genetic algorithm was applied to find optimal conditions for the neural network model, in an approach called neuro-genetic. A single laser induced breakdown spectroscopy working condition that maximizes the peak areas of all elements simultaneously was obtained with the following optimized parameters: 9.0 µs integration time gate, 1.1 µs delay time, 225 (a.u.) amplification gain and 30 accumulated laser pulses. The proposed approach is a useful and suitable tool for the optimization of such a complex analytical problem.

  20. TH-C-BRD-10: An Evaluation of Three Robust Optimization Approaches in IMPT Treatment Planning

    SciTech Connect

    Cao, W; Randeniya, S; Mohan, R; Zaghian, M; Kardar, L; Lim, G; Liu, W

    2014-06-15

    Purpose: Various robust optimization approaches have been proposed to ensure the robustness of intensity modulated proton therapy (IMPT) in the face of uncertainty. In this study, we aim to investigate the performance of three classes of robust optimization approaches regarding plan optimality and robustness. Methods: Three robust optimization models were implemented in our in-house IMPT treatment planning system: 1) L2 optimization based on worst-case dose; 2) L2 optimization based on a minmax objective; and 3) L1 optimization with constraints on all uncertain doses. The first model was solved by an L-BFGS algorithm; the second was solved by a gradient projection algorithm; and the third was solved by an interior point method. One nominal scenario and eight maximum uncertainty scenarios (proton range over and under 3.5%, and setup error of 5 mm in the x, y, z directions) were considered in optimization. Dosimetric measurements of the optimized plans from the three approaches were compared for four prostate cancer patients retrospectively selected at our institution. Results: For the nominal scenario, all three optimization approaches yielded the same coverage of the clinical target volume (CTV), and the L2 worst-case approach demonstrated better rectum and bladder sparing than the others. For the uncertainty scenarios, the L1 approach resulted in the most robust CTV coverage against uncertainties, while the plans from the L2 worst-case approach were less robust than the others. In addition, we observed that the number of scanning spots with positive MUs from the L2 approaches was approximately twice that from the L1 approach. This indicates that L1 optimization may lead to more efficient IMPT delivery. Conclusion: Our study indicated that the L1 approach best conserved the target coverage in the face of uncertainty, but its resulting OAR sparing was slightly inferior to the other two approaches.
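
    A minimal sketch of the worst-case-dose idea in approach 1): penalize the voxel-wise minimum dose in the target and the voxel-wise maximum dose elsewhere, taken over all uncertainty scenarios (toy array sizes, not the in-house system):

```python
import numpy as np

def worst_case_objective(dose, presc, target, oar_tol):
    """dose: (n_scenarios, n_voxels).  Quadratic penalty on the voxel-wise
    worst case over scenarios: minimum dose inside the target (underdose),
    maximum dose outside it (overdose above the tolerance)."""
    d_min, d_max = dose.min(axis=0), dose.max(axis=0)
    under = np.sum(np.minimum(d_min[target] - presc, 0.0) ** 2)
    over = np.sum(np.maximum(d_max[~target] - oar_tol, 0.0) ** 2)
    return under + over

rng = np.random.default_rng(0)
dose = rng.uniform(0.85, 1.10, size=(9, 100))   # 9 scenarios, 100 voxels
target = np.zeros(100, dtype=bool)
target[:60] = True                              # first 60 voxels = CTV
print(worst_case_objective(dose, presc=1.0, target=target, oar_tol=0.5))
```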

  1. Flower pollination algorithm: A novel approach for multiobjective optimization

    NASA Astrophysics Data System (ADS)

    Yang, Xin-She; Karamanoglu, Mehmet; He, Xingshi

    2014-09-01

    Multiobjective design optimization problems require multiobjective optimization techniques to solve, and it is often very challenging to obtain high-quality Pareto fronts accurately. In this article, the recently developed flower pollination algorithm (FPA) is extended to solve multiobjective optimization problems. The proposed method is used to solve a set of multiobjective test functions and two bi-objective design benchmarks, and a comparison of the proposed algorithm with other algorithms has been made, which shows that the FPA is efficient with a good convergence rate. Finally, the importance for further parametric studies and theoretical analysis is highlighted and discussed.

  2. A Simultaneous Approach to Optimizing Treatment Assignments with Mastery Scores. Research Report 89-5.

    ERIC Educational Resources Information Center

    Vos, Hans J.

    An approach to simultaneous optimization of assignments of subjects to treatments followed by an end-of-mastery test is presented using the framework of Bayesian decision theory. Focus is on demonstrating how rules for the simultaneous optimization of sequences of decisions can be found. The main advantages of the simultaneous approach, compared…

  3. Clinical Evaluation of Direct Aperture Optimization When Applied to Head-And-Neck IMRT

    SciTech Connect

    Jones, Stephen; Williams, Matthew

    2008-04-01

    Direct Machine Parameter Optimization (DMPO) is a leaf segmentation program released as an optional item of the Pinnacle planning system (Philips Radiation Oncology Systems, Milpitas, CA); it is based on the principles of direct aperture optimization where the size, shape, and weight of individual segments are optimized to produce an intensity modulated radiation treatment (IMRT) plan. In this study, we compare DMPO to the traditional method of IMRT planning, in which intensity maps are optimized prior to conversion into deliverable multileaf collimator (MLC) apertures, and we determine if there was any dosimetric improvement, treatment efficiency gain, or planning advantage provided by the use of DMPO. Eleven head-and-neck patients treated with IMRT had treatment plans generated using each optimization method. For each patient, the same planning parameters were used for each optimization method. All calculations were performed using Pinnacle version 7.6c software and treatments were delivered using a step-and-shoot IMRT method on a Varian 2100EX linear accelerator equipped with a 120-leaf Millennium MLC (Varian Medical Systems, Palo Alto, CA). Each plan was assessed based on the calculation time, a conformity index, the composite objective value used in the optimization, the number of segments, monitor units (MUs), and treatment time. The results showed DMPO to be superior to the traditional optimization method in all areas. Considerable advantages were observed in the dosimetric quality of DMPO plans, which also required 32% less time to calculate, 42% fewer MUs, and 35% fewer segments than the conventional optimization method. These reductions translated directly into a 29% decrease in treatment times. While considerable gains were observed in planning and treatment efficiency, they were specific to our institution, and the impact of direct aperture optimization on plan quality and workflow will be dependent on the planning parameters, planning system, and

  4. Evaluation of multi-algorithm optimization approach in multi-objective rainfall-runoff calibration

    NASA Astrophysics Data System (ADS)

    Shafii, M.; de Smedt, F.

    2009-04-01

    Calibration of rainfall-runoff models is one of the issues in which hydrologists have been interested over the past decades. Because of the multi-objective nature of rainfall-runoff calibration, and due to advances in computational power, population-based optimization techniques are increasingly being applied to multi-objective calibration schemes. Over recent years, such methods have proven to be powerful search methods for this purpose, especially when there are a large number of calibration parameters. However, the application of these methods is often criticised on the grounds that it is not possible to develop a single algorithm that is always efficient for different problems. Therefore, more recent efforts have focused on the development of simultaneous multiple optimization algorithms to overcome this drawback. This paper applies one of the most recent population-based multi-algorithm approaches, named AMALGAM, to multi-objective rainfall-runoff calibration in a distributed hydrological model, WetSpa. This algorithm merges the strengths of different optimization algorithms and has thus proven to be more efficient than other methods. In order to evaluate this issue, comparison between the results of this paper and those previously reported using a standard multi-objective evolutionary algorithm would be the next step of this study.

  5. Optimization of the ASPN Process to Bright Nitriding of Woodworking Tools Using the Taguchi Approach

    NASA Astrophysics Data System (ADS)

    Walkowicz, J.; Staśkiewicz, J.; Szafirowicz, K.; Jakrzewski, D.; Grzesiak, G.; Stępniak, M.

    2013-02-01

    The subject of this research is the optimization of the parameters of the Active Screen Plasma Nitriding (ASPN) process for high speed steel planing knives used in woodworking. The Taguchi approach was applied for the development of the plan of experiments and the elaboration of the obtained experimental results. The optimized ASPN parameters were: process duration, composition and pressure of the gaseous atmosphere, the substrate BIAS voltage and the substrate temperature. The results of the optimization procedure were verified by the tools' behavior in the sharpening operation performed under normal industrial conditions. The ASPN technology proved to be extremely suitable for nitriding woodworking planing tools, which, because of their specific geometry, in particular extremely sharp wedge angles, could not be successfully nitrided using the conventional direct current plasma nitriding method. The research carried out proved that the values of the fracture toughness coefficient K Ic correlate with the maximum spalling depths of the cutting edge measured after sharpening, and may therefore be used as a measure of nitrided planing knife quality. Based on this criterion the optimum parameters of the ASPN process for nitriding high speed planing knives were determined.

  6. The Contribution of Applied Social Sciences to Obesity Stigma-Related Public Health Approaches

    PubMed Central

    Bombak, Andrea E.

    2014-01-01

    Obesity is viewed as a major public health concern, and obesity stigma is pervasive. Such marginalization renders obese persons a “special population.” Weight bias arises in part due to popular sources' attribution of obesity causation to individual lifestyle factors. This may not accurately reflect the experiences of obese individuals or their perspectives on health and quality of life. A powerful role may exist for applied social scientists, such as anthropologists or sociologists, in exploring the lived and embodied experiences of this largely discredited population. This novel research may aid in public health intervention planning. Through these studies, applied social scientists could help develop a nonstigmatizing, salutogenic approach to public health that accurately reflects the health priorities of all individuals. Such an approach would call upon applied social science's strengths in investigating the mundane, problematizing the “taken for granted” and developing emic (insiders') understandings of marginalized populations. PMID:24782921

  7. Improved understanding of the searching behavior of ant colony optimization algorithms applied to the water distribution design problem

    NASA Astrophysics Data System (ADS)

    Zecchin, A. C.; Simpson, A. R.; Maier, H. R.; Marchi, A.; Nixon, J. B.

    2012-09-01

    Evolutionary algorithms (EAs) have been applied successfully to many water resource problems, such as system design, management decision formulation, and model calibration. The performance of an EA with respect to a particular problem type is dependent on how effectively its internal operators balance the exploitation/exploration trade-off to iteratively find solutions of an increasing quality. For a given problem, different algorithms are observed to produce a variety of different final performances, but there have been surprisingly few investigations into characterizing how the different internal mechanisms alter the algorithm's searching behavior, in both the objective and decision space, to arrive at this final performance. This paper presents metrics for analyzing the searching behavior of ant colony optimization algorithms, a particular type of EA, for the optimal water distribution system design problem, which is a classical NP-hard problem in civil engineering. Using the proposed metrics, behavior is characterized in terms of three different attributes: (1) the effectiveness of the search in improving its solution quality and entering into optimal or near-optimal regions of the search space, (2) the extent to which the algorithm explores as it converges to solutions, and (3) the searching behavior with respect to the feasible and infeasible regions. A range of case studies is considered, where a number of ant colony optimization variants are applied to a selection of water distribution system optimization problems. The results demonstrate the utility of the proposed metrics to give greater insight into how the internal operators affect each algorithm's searching behavior.
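
    The searching behavior probed by these metrics comes from two coupled mechanisms: pheromone-biased construction of solutions and evaporation/reinforcement of the pheromone. A minimal ACO sketch for a discrete pipe-sizing caricature (the cost function is a toy stand-in for a hydraulic simulation):

```python
import numpy as np

rng = np.random.default_rng(0)
n_pipes = 6
diameters = np.array([0.1, 0.2, 0.3, 0.4])       # candidate sizes (m)
tau = np.ones((n_pipes, diameters.size))         # pheromone trails

def cost(choice):            # toy: material cost + head-loss penalty
    d = diameters[choice]
    return np.sum(d ** 2) + 5.0 * np.sum(np.maximum(0.15 - d, 0.0))

best_choice, best_cost = None, np.inf
for _ in range(100):                             # iterations
    for _ in range(10):                          # ants per iteration
        p = tau / tau.sum(axis=1, keepdims=True) # pheromone-biased choice
        choice = np.array([rng.choice(diameters.size, p=p[i])
                           for i in range(n_pipes)])
        c = cost(choice)
        if c < best_cost:
            best_choice, best_cost = choice, c
    tau *= 0.9                                   # evaporation
    tau[np.arange(n_pipes), best_choice] += 1.0 / best_cost  # reinforce

print("best diameters:", diameters[best_choice], "cost:", best_cost)
```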

  8. An inverse dynamics approach to trajectory optimization for an aerospace plane

    NASA Technical Reports Server (NTRS)

    Lu, Ping

    1992-01-01

    An inverse dynamics approach for trajectory optimization is proposed. This technique can be useful in many difficult trajectory optimization and control problems. The application of the approach is exemplified by ascent trajectory optimization for an aerospace plane. Both minimum-fuel and minimax types of performance indices are considered. When rocket augmentation is available for ascent, it is shown that accurate orbital insertion can be achieved through the inverse control of the rocket in the presence of disturbances.

  9. A methodological integrated approach to optimize a hydrogeological engineering work

    NASA Astrophysics Data System (ADS)

    Loperte, A.; Satriani, A.; Bavusi, M.; Cerverizzo, G.

    2012-04-01

    The use of geoelectrical surveys in hydraulic engineering is well known in the literature. However, despite the large number of successful applications, geophysics is still often not considered, for several reasons: poor knowledge of its potential performance, difficulties in practical implementation, and cost limitations. In this work, an integrated study of non-invasive (geoelectrical) and direct surveys is described, aimed at identifying a subsoil foundation on which a watertight concrete structure could be set up to protect the purifier of Senise, a small town in the Basilicata Region (Southern Italy). The purifier, used by several villages, is located in a particularly dangerous hydrogeological position, as it is very close to the Sinni river, which has been dammed for many years by the Monte Cotugno dam. During the rainiest periods, the river could flood the purifier, causing the drainage of waste waters into the Monte Cotugno artificial lake. The purifier is located in Pliocene-Calabrian clay and clay-marly formations covered by a layer, about 10 m thick, of alluvial gravelly-sandy materials carried by the Sinni river. Electrical resistivity tomography acquired with the Wenner-Schlumberger array proved meaningful for identifying the depth of the impermeable clays with high accuracy. In particular, the geoelectrical acquisition, oriented along the long side of the purifier, was carried out using a multielectrode system with 48 electrodes spaced 2 m apart, leading to an achievable investigation depth of about 15 m. The subsequent direct surveys confirmed this depth, so that it was possible to set up the foundation concrete structure with precision to protect the purifier. It is worth noting that the use of this methodological approach allowed a remarkable economic saving, as it made it possible to correct the wrong information regarding the depth of the impermeable clays that was previously available.

  10. A two-stage sequential linear programming approach to IMRT dose optimization

    PubMed Central

    Zhang, Hao H; Meyer, Robert R; Wu, Jianzhou; Naqvi, Shahid A; Shi, Leyuan; D’Souza, Warren D

    2010-01-01

    The conventional IMRT planning process involves two stages: the first consists of fast but approximate idealized pencil-beam dose calculations and dose optimization, and the second consists of discretization of the intensity maps followed by intensity-map segmentation and a more accurate final dose calculation corresponding to physical beam apertures. Consequently, there can be differences between the presumed dose distribution corresponding to pencil-beam calculations and optimization and a more accurately computed dose distribution corresponding to beam segments that takes into account collimator-specific effects. IMRT optimization is computationally expensive and has therefore led to the use of heuristic approaches (e.g., simulated annealing and genetic algorithms) that do not encompass a global view of the solution space. We modify the traditional two-stage IMRT optimization process by augmenting the second stage with accurate Monte Carlo-based kernel-superposition dose calculations corresponding to beam apertures, combined with an exact mathematical-programming-based sequential optimization approach that uses linear programming (SLP). Our approach was tested on three challenging clinical test cases with multileaf collimator constraints corresponding to two vendors. We compared our approach to the conventional IMRT planning approach, a direct-aperture approach and a segment-weight optimization approach. Our results in all three cases indicate that the SLP approach outperformed the other approaches, achieving superior critical structure sparing. Convergence of our approach is also demonstrated. Finally, our approach has also been integrated with a commercial treatment planning system and may be utilized clinically. PMID:20071764
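
    A minimal sketch of the sequential linear programming mechanic named above, on an invented toy dose model (a small linear dose matrix, not a clinical system): linearize the nonlinear objective at the current beamlet weights, solve an LP within a box trust region with scipy, and accept the step or shrink the region.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Toy "dose" model: d = A @ w (dose at 3 voxels from 4 beamlet weights).
    A = np.array([[1.0, 0.2, 0.1, 0.0],
                  [0.3, 1.0, 0.4, 0.1],
                  [0.0, 0.3, 1.0, 0.5]])
    target = np.array([2.0, 2.0, 0.5])      # last voxel is a critical structure

    def f(w):                                # nonlinear (quadratic) objective
        r = A @ w - target
        return r @ r

    def grad(w):
        return 2 * A.T @ (A @ w - target)

    w = np.full(4, 0.5)
    trust = 0.25
    for it in range(30):
        g = grad(w)
        # LP subproblem: minimise g . dw inside a box trust region, keeping w + dw >= 0.
        res = linprog(c=g, bounds=[(max(-trust, -wi), trust) for wi in w])
        dw = res.x
        if f(w + dw) < f(w):                 # accept the step, else shrink the region
            w = w + dw
        else:
            trust *= 0.5
        if trust < 1e-6:
            break
    print(w, f(w))
    ```

    In the paper's setting the objective and constraints come from the accurate aperture-based dose calculation, so each LP carries far more structure, but the linearize-solve-trust loop is the same.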

  11. Efficient global optimization applied to wind tunnel evaluation-based optimization for improvement of flow control by plasma actuators

    NASA Astrophysics Data System (ADS)

    Kanazaki, Masahiro; Matsuno, Takashi; Maeda, Kengo; Kawazoe, Hiromitsu

    2015-09-01

    A kriging-based genetic algorithm called efficient global optimization (EGO) was employed to optimize the parameters for the operating conditions of plasma actuators. The aerodynamic performance was evaluated by wind tunnel testing to overcome the disadvantages of time-consuming numerical simulations. The proposed system was used on two design problems for the power supply of a plasma actuator. The first case was the drag minimization problem around a semicircular cylinder; here, the inhibitory effect on flow separation was also observed. The second case was the lift maximization problem around a circular cylinder. This case resembled an aerofoil design, because a circular cylinder can work as an aerofoil when the flow circulation is controlled by plasma actuators with four design parameters; applicability to the multi-variable design problem was also investigated. Based on these results, optimum designs and global design information were obtained while drastically reducing the number of experiments required compared to a full factorial experiment.
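
    A minimal sketch of an EGO loop of the kind described, with an analytic toy function standing in for a wind-tunnel run: fit a kriging (Gaussian process) surrogate to the experiments so far, pick the next experiment by maximizing expected improvement, and repeat. The kernel, candidate grid and budget are arbitrary choices, not the paper's setup.

    ```python
    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(1)

    def wind_tunnel(x):
        """Stand-in for one wind-tunnel evaluation (here an analytic toy function)."""
        return np.sin(3 * x) + 0.1 * x ** 2

    X = rng.uniform(0, 3, 5).reshape(-1, 1)            # initial experiments
    y = wind_tunnel(X).ravel()

    cand = np.linspace(0, 3, 301).reshape(-1, 1)
    for it in range(15):                               # each loop = one new experiment
        gp = GaussianProcessRegressor(kernel=RBF(), normalize_y=True).fit(X, y)
        mu, sd = gp.predict(cand, return_std=True)
        imp = y.min() - mu
        z = np.divide(imp, sd, out=np.zeros_like(sd), where=sd > 0)
        ei = imp * norm.cdf(z) + sd * norm.pdf(z)      # expected improvement
        x_new = cand[ei.argmax()]
        X = np.vstack([X, [x_new]])
        y = np.append(y, wind_tunnel(x_new))

    print(X[y.argmin()], y.min())
    ```

    The expected-improvement criterion is what lets EGO spend so few physical experiments: it trades off the surrogate's predicted value against its uncertainty when choosing the next run.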

  12. New approach for automatic recognition of melanoma in profilometry: optimized feature selection using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Handels, Heinz; Ross, Th; Kreusch, J.; Wolff, H. H.; Poeppl, S. J.

    1998-06-01

    A new approach to computer-supported recognition of melanoma and naevocytic naevi based on high-resolution skin surface profiles is presented. Profiles are generated by sampling an area of 4 X 4 mm2 at a resolution of 125 sample points per mm with a laser profilometer at a vertical resolution of 0.1 micrometers. With image analysis algorithms, Haralick's texture parameters, Fourier features and features based on fractal analysis are extracted. In order to improve classification performance, a subsequent feature selection process is applied to determine the best possible subset of features. Genetic algorithms are employed for the feature selection process, and the results of different approaches are compared. As the quality measure for feature subsets, the error rate of the nearest neighbor classifier estimated with the leaving-one-out method is used. In comparison to heuristic strategies and greedy algorithms, genetic algorithms show the best results for the feature selection problem. After feature selection, several architectures of feed-forward neural networks with error back-propagation are evaluated. Classification performance of the neural classifier is optimized using different topologies, learning parameters and pruning algorithms. The best neural classifier achieved an error rate of 4.5% and was found after network pruning. The best overall result, an error rate of 2.3%, was obtained with the nearest neighbor classifier.
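
    A minimal sketch of the selection scheme described above, on synthetic data: a genetic algorithm over bit-mask feature subsets whose fitness is the leave-one-out error of a 1-nearest-neighbor classifier. Data dimensions, GA settings and operators are illustrative assumptions, not the paper's configuration.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Toy data: 40 profiles, 12 texture features, 2 classes; 4 features are informative.
    n, d = 40, 12
    labels = rng.integers(0, 2, n)
    X = rng.normal(size=(n, d))
    X[:, :4] += labels[:, None] * 1.5                 # informative features

    def loo_nn_error(mask):
        """Leave-one-out error of the 1-nearest-neighbour classifier on a feature subset."""
        if not mask.any():
            return 1.0
        Z = X[:, mask]
        D = np.linalg.norm(Z[:, None] - Z[None, :], axis=-1)
        np.fill_diagonal(D, np.inf)                   # exclude the left-out sample itself
        return np.mean(labels[D.argmin(axis=1)] != labels)

    pop = rng.integers(0, 2, (30, d)).astype(bool)    # random feature subsets
    for gen in range(40):
        fit = np.array([loo_nn_error(m) for m in pop])
        parents = pop[fit.argsort()[:10]]             # truncation selection
        children = []
        for _ in range(len(pop)):
            a, b = parents[rng.integers(10)], parents[rng.integers(10)]
            cut = rng.integers(1, d)
            child = np.concatenate([a[:cut], b[cut:]])    # one-point crossover
            flip = rng.random(d) < 0.05                   # bit-flip mutation
            children.append(child ^ flip)
        pop = np.array(children)

    best = pop[np.argmin([loo_nn_error(m) for m in pop])]
    print(best.astype(int), loo_nn_error(best))
    ```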

  13. Simulation-Based Approach for Site-Specific Optimization of Hydrokinetic Turbine Arrays

    NASA Astrophysics Data System (ADS)

    Sotiropoulos, F.; Chawdhary, S.; Yang, X.; Khosronejad, A.; Angelidis, D.

    2014-12-01

    A simulation-based approach has been developed to enable site-specific optimization of tidal and current turbine arrays in real-life waterways. The computational code is based on the St. Anthony Falls Laboratory Virtual StreamLab (VSL3D), which is able to carry out high-fidelity simulations of turbulent flow and sediment transport processes in rivers and streams taking into account the arbitrary geometrical complexity characterizing natural waterways. The computational framework can be used either in turbine-resolving mode, to take into account all geometrical details of the turbine, or with the turbines parameterized as actuator disks or actuator lines. Locally refined grids are employed to dramatically increase the resolution of the simulation and enable efficient simulations of multi-turbine arrays. Turbine/sediment interactions are simulated using the coupled hydro-morphodynamic module of VSL3D. The predictive capabilities of the resulting computational framework will be demonstrated by applying it to simulate turbulent flow past a tri-frame configuration of hydrokinetic turbines in a rigid-bed turbulent open channel flow as well as turbines mounted on mobile bed open channels to investigate turbine/sediment interactions. The utility of the simulation-based approach for guiding the optimal development of turbine arrays in real-life waterways will also be discussed and demonstrated. This work was supported by NSF grant IIP-1318201. Simulations were carried out at the Minnesota Supercomputing Institute.

  14. Successive equimarginal approach for optimal design of a pump and treat system

    NASA Astrophysics Data System (ADS)

    Guo, Xiaoniu; Zhang, Chuan-Mian; Borthwick, John C.

    2007-08-01

    An economic concept-based optimization method is developed for groundwater remediation design. Design of a pump and treat (P&T) system is viewed as a resource allocation problem constrained by specified cleanup criteria. An optimal allocation of resources requires that the equimarginal principle, a fundamental economic principle, must hold. The proposed method is named successive equimarginal approach (SEA), which continuously shifts a pumping rate from a less effective well to a more effective one until equal marginal productivity for all units is reached. Through the successive process, the solution evenly approaches the multiple inequality constraints that represent the specified cleanup criteria in space and in time. The goal is to design an equal protection system so that the distributed contaminant plumes can be equally contained without bypass and overprotection is minimized. SEA is a hybrid of the gradient-based method and the deterministic heuristics-based method, which allows flexibility in dealing with multiple inequality constraints without using a penalty function and in balancing computational efficiency with robustness. This method was applied to design a large-scale P&T system for containment of multiple plumes at the former Blaine Naval Ammunition Depot (NAD) site, near Hastings, Nebraska. To evaluate this method, the SEA results were also compared with those using genetic algorithms.
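
    A minimal sketch of the equimarginal mechanic described above: repeatedly shift a small pumping increment from the well with the lowest marginal effectiveness to the one with the highest until the marginal productivities equalize. The diminishing-returns effectiveness model is an invented stand-in for the groundwater simulation used in the paper.

    ```python
    import numpy as np

    # Stand-in marginal-effectiveness model with diminishing returns per well,
    # effectiveness_i(q) = a_i * exp(-q / s_i); a real application would obtain
    # these from a groundwater simulation of plume capture.
    a = np.array([1.0, 0.8, 1.4, 0.6])      # well productivity coefficients
    s = np.array([50.0, 80.0, 40.0, 90.0])
    q = np.full(4, 25.0)                     # initial pumping rates, total held fixed

    def marginal(q):
        return a * np.exp(-q / s)

    step = 0.5
    for it in range(10000):
        m = marginal(q)
        hi, lo = m.argmax(), m.argmin()
        if m[hi] - m[lo] < 1e-4:             # equimarginal condition reached
            break
        # Shift pumping from the least effective well to the most effective one.
        shift = min(step, q[lo])
        q[lo] -= shift
        q[hi] += shift

    print(q.round(2), marginal(q).round(4))
    ```

    Because each well's marginal effectiveness falls as its rate rises, the shifts are self-correcting and the rates converge to the allocation where no reallocation can improve containment, which is exactly the equimarginal principle.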

  15. Applying the Taguchi Method to River Water Pollution Remediation Strategy Optimization

    PubMed Central

    Yang, Tsung-Ming; Hsu, Nien-Sheng; Chiu, Chih-Chiang; Wang, Hsin-Ju

    2014-01-01

    Optimization methods usually obtain the travel direction of the solution by substituting the solutions into the objective function. However, if the solution space is too large, this search method may be time consuming. In order to address this problem, this study incorporated the Taguchi method into the solution space search process of the optimization method, and used the characteristics of the Taguchi method to sequence the effects of the variation of decision variables on the system. Based on the level of effect, this study determined the impact factor of decision variables and the optimal solution for the model. The integration of the Taguchi method and the solution optimization method successfully obtained the optimal solution of the optimization problem, while significantly reducing the solution computing time and enhancing the river water quality. The results suggested that the basin with the greatest water quality improvement effectiveness is the Dahan River. Under the optimal strategy of this study, the severe pollution length was reduced from 18 km to 5 km. PMID:24739765
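
    A minimal sketch of the Taguchi mechanics invoked here, with invented responses: evaluate an L4 orthogonal array over three two-level factors, compute a larger-is-better signal-to-noise ratio per run, and rank the factors by their main effect on the S/N ratio to identify the dominant decision variables.

    ```python
    import numpy as np

    # L4 orthogonal array: 3 two-level factors covered in only 4 runs (levels coded 0/1).
    L4 = np.array([[0, 0, 0],
                   [0, 1, 1],
                   [1, 0, 1],
                   [1, 1, 0]])

    # Hypothetical water-quality improvement (%) from two repetitions of each run.
    y = np.array([[62.0, 60.0],
                  [71.0, 69.0],
                  [66.0, 64.0],
                  [80.0, 78.0]])

    # Larger-is-better S/N ratio: -10 log10( mean(1/y^2) ).
    sn = -10 * np.log10(np.mean(1.0 / y ** 2, axis=1))

    # Main effect of each factor = mean S/N at level 1 minus mean S/N at level 0;
    # the factor with the largest |effect| dominates the system response.
    for fct in range(3):
        effect = sn[L4[:, fct] == 1].mean() - sn[L4[:, fct] == 0].mean()
        print(f"factor {fct}: effect = {effect:+.3f} dB")
    ```

    The orthogonal array is what shrinks the solution-space search: instead of testing all 2^3 combinations, four balanced runs suffice to rank the factors, which is the time saving the study exploits.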

  17. Optimal dosage of chlorhexidine digluconate in chemical plaque control when applied by the oral irrigator.

    PubMed

    Lang, N P; Ramseier-Grossmann, K

    1981-06-01

    Chlorhexidine digluconate for chemical plaque control was tested in different concentrations using a fractionated jet oral irrigator. The inhibition of plaque formation and the prevention of gingival inflammation were evaluated in a double-blind study. During a 10-day period of abstinence from any mechanical oral hygiene procedures, the pattern of plaque formation and gingivitis development under the influence of chemical plaque control was analyzed. As a positive control, one group rinsed twice daily with 30 ml of a 0.2% chlorhexidine solution, while a group applying 600 ml of a placebo solution served as a negative control. Forty dental students and assistants with plaque-free dentitions and healthy gingival tissues were divided into four groups. After the 10-day period of no oral hygiene, a recovery period of 11 days with perfect oral hygiene was instituted. This experiment was repeated three times so that a total of 10 concentrations in the irrigator, the control rinsing and the placebo control could be evaluated. Daily application of 600 ml of a 0.001% (6 mg), 0.0033% (20 mg), 0.005% (30 mg), 0.01% (60 mg), 0.02% (120 mg), 0.05% (300 mg) and 0.1% (600 mg) and 400 ml of a 0.015% (60 mg), twice 400 ml of a 0.015% (120 mg) and 400 ml of a 0.02% (80 mg) solution of chlorhexidine was tested. At the start of each experimental period (day 0), after 3, 7 and 10 days, and 11 days after resuming oral hygiene procedures, plaque accumulation was determined using the Plaque Index System (Silness & Löe 1964) and the development of gingivitis was evaluated according to the criteria of the Gingival Index System (Löe & Silness 1963). The results suggested that one daily irrigator application of 400 ml of a 0.02% chlorhexidine solution was the optimal and lowest concentration and dose for complete inhibition of dental plaque. PMID:6947985

  18. A multiobjective optimization approach for combating Aedes aegypti using chemical and biological alternated step-size control.

    PubMed

    Dias, Weverton O; Wanner, Elizabeth F; Cardoso, Rodrigo T N

    2015-11-01

    Dengue epidemics, among the most important viral diseases worldwide, can be prevented by combating the transmission vector Aedes aegypti. In support of this aim, this article analyzes the Dengue vector control problem in a multiobjective optimization approach, in which the intention is to minimize both social and economic costs, using a dynamic mathematical model representing the mosquito population. The problem consists of finding optimal alternated step-size control policies combining chemical control (via application of insecticides) and biological control (via insertion of sterile males produced by irradiation). All the optimal policies consist of applying insecticides just at the beginning of the season and then keeping the mosquitoes at an acceptable level by spreading a small amount of sterile males into the environment. The optimization model analysis is driven by the use of genetic algorithms. Finally, a statistical test shows that the multiobjective approach is effective in achieving the same effect as variations in the cost parameters. Then, using the proposed methodology, it is possible to find, in a single run, given a decision maker, the optimal number of days and the respective amounts in which each control strategy must be applied, according to the tradeoff between using more insecticide with fewer transmitting mosquitoes or more sterile males with more transmitting mosquitoes. PMID:26362231

  19. a Hybrid Approach of Neural Network with Particle Swarm Optimization for Tobacco Pests Prediction

    NASA Astrophysics Data System (ADS)

    Lv, Jiake; Wang, Xuan; Xie, Deti; Wei, Chaofu

    Forecasting pest emergence levels plays a significant role in regional crop planting and management. The accuracy, which derives from the accuracy of the forecasting approach used, determines the economics of operating the pest prediction system. Conventional methods, including time series, regression analysis or ARMA models, entail exogenous input together with a number of assumptions. The use of neural networks has been shown to be a cost-effective technique, but their training, usually with the back-propagation algorithm or other gradient algorithms, suffers from drawbacks such as very slow convergence and easy entrapment in a local minimum. This paper presents a hybrid approach of a neural network with particle swarm optimization for improving the accuracy of predictions. The approach is applied to forecast the Alternaria alternata (Keissl.) emergence level of Wulong County, one of the most important tobacco planting areas in Chongqing. A traditional ARMA model and a BP neural network are investigated as comparison baselines. The experimental results show that the proposed approach can achieve better prediction performance.
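
    A minimal sketch of the hybrid described above, on synthetic data: the weights of a small feed-forward network are optimized by a standard global-best particle swarm instead of back-propagation, so no gradients are needed and entrapment in poor local minima is mitigated. Network size, swarm constants and the data are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Toy pest data: 2 inputs (e.g. temperature, humidity indices) -> emergence level.
    Xd = rng.uniform(-1, 1, (60, 2))
    yd = np.tanh(1.5 * Xd[:, 0] - Xd[:, 1]) + rng.normal(0, 0.05, 60)

    H = 5                                    # hidden units; weights flattened to a vector
    DIM = 2 * H + H + H + 1                  # W1 (2xH), b1 (H), W2 (H), b2 (1)

    def mse(w):
        W1 = w[:2 * H].reshape(2, H); b1 = w[2 * H:3 * H]
        W2 = w[3 * H:4 * H];          b2 = w[-1]
        pred = np.tanh(Xd @ W1 + b1) @ W2 + b2
        return np.mean((pred - yd) ** 2)

    # Standard global-best PSO over the network weight vector.
    N = 30
    pos = rng.normal(0, 0.5, (N, DIM))
    vel = np.zeros((N, DIM))
    pbest = pos.copy()
    pbest_f = np.array([mse(p) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()

    for it in range(300):
        r1, r2 = rng.random((N, DIM)), rng.random((N, DIM))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        f = np.array([mse(p) for p in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()

    print(mse(gbest))
    ```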

  20. A weighted optimization approach to time-of-flight sensor fusion.

    PubMed

    Schwarz, Sebastian; Sjostrom, Marten; Olsson, Roger

    2014-01-01

    Acquiring scenery depth is a fundamental task in computer vision, with many applications in manufacturing, surveillance, or robotics relying on accurate scenery information. Time-of-flight cameras can provide depth information in real time and overcome shortcomings of traditional stereo analysis. However, they provide limited spatial resolution, and sophisticated upscaling algorithms are sought after. In this paper, we present a sensor fusion approach to time-of-flight super resolution, based on the combination of depth and texture sources. Unlike other texture-guided approaches, we interpret the depth upscaling process as a weighted energy optimization problem. Three different weights are introduced, employing different available sensor data. The individual weights address object boundaries in depth, depth sensor noise, and temporal consistency. Applied in consecutive order, they form three weighting strategies for time-of-flight super resolution. Objective evaluations show advantages in depth accuracy and for depth image based rendering compared with state-of-the-art depth upscaling. Subjective view synthesis evaluation shows a significant increase in viewer preference, by a factor of four, in stereoscopic viewing conditions. To the best of our knowledge, this is the first extensive subjective test performed on time-of-flight depth upscaling. Objective and subjective results prove the suitability of our approach to time-of-flight super resolution for depth scenery capture. PMID:24184728
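
    A minimal sketch of the weighted-energy idea, not the paper's exact weights or solver: the upscaled depth minimizes a data term plus a smoothness term whose neighbor weights collapse across texture edges (standing in for the boundary weight), solved by simple Jacobi sweeps on an invented toy scene.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Toy scene: 16x16 texture image and noisy depth observations; a joint
    # depth/texture edge runs down the middle of the frame.
    H, W, lam = 16, 16, 4.0
    tex = np.zeros((H, W)); tex[:, W // 2:] = 1.0
    true_depth = np.where(tex > 0.5, 2.0, 1.0)
    d0 = true_depth + rng.normal(0, 0.05, (H, W))     # noisy depth measurements
    conf = np.full((H, W), 1.0)                        # data weight (noise confidence)

    # Smoothness weight between neighbours: small across texture edges, so depth
    # discontinuities are preserved while flat regions are denoised.
    def edge_w(a, b):
        return np.exp(-10.0 * np.abs(a - b))

    d = d0.copy()
    for it in range(200):                              # Jacobi sweeps on the energy
        num = conf * d0
        den = conf.copy()
        for shift in ((0, 1), (0, -1), (1, 0), (-1, 0)):
            dn = np.roll(d, shift, axis=(0, 1))        # np.roll wraps at borders; fine for a toy
            tn = np.roll(tex, shift, axis=(0, 1))
            w = lam * edge_w(tex, tn)
            num += w * dn
            den += w
        d = num / den

    print(np.abs(d - true_depth).mean())
    ```

    Each Jacobi update is the per-pixel minimizer of the quadratic energy with its neighbours held fixed; the paper's noise and temporal-consistency weights would enter through `conf` and an additional term over consecutive frames.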

  1. Method developments approaches in supercritical fluid chromatography applied to the analysis of cosmetics.

    PubMed

    Lesellier, E; Mith, D; Dubrulle, I

    2015-12-01

    Analyses of complex samples of cosmetics, such as creams or lotions, are generally achieved by HPLC. These analyses often require multistep gradients, due to the presence of compounds with a large range of polarity. For instance, the bioactive compounds may be polar, while the matrix contains lipid components that are rather non-polar, as cosmetic formulations are usually oil-water emulsions. Supercritical fluid chromatography (SFC) uses mobile phases composed of carbon dioxide and organic co-solvents, allowing for good solubility of both the active compounds and the matrix excipients. Moreover, the classical and well-known properties of these mobile phases yield fast analyses and ensure rapid method development. However, due to the large number of stationary phases available for SFC and to the varied additional parameters acting both on retention and separation factors (co-solvent nature and percentage, temperature, backpressure, flow rate, column dimensions and particle size), a simplified approach can be followed to ensure fast method development. First, suitable stationary phases should be carefully selected for an initial screening, and then the other operating parameters can be limited to the co-solvent nature and percentage, maintaining the oven temperature and back-pressure constant. To describe simple method development guidelines in SFC, three sample applications are discussed in this paper: UV-filters (sunscreens) in sunscreen cream, glyceryl caprylate in eye liner and caffeine in eye serum. Firstly, five stationary phases (ACQUITY UPC(2)) are screened with isocratic elution conditions (10% methanol in carbon dioxide). Complementarity of the stationary phases is assessed based on our spider diagram classification, which compares a large number of stationary phases based on five molecular interactions. Secondly, the one or two best stationary phases are retained for further optimization of the mobile phase composition, with isocratic elution conditions or, when

  2. Taguchi Method Applied in Optimization of Shipley SJR 5740 Positive Resist Deposition

    NASA Technical Reports Server (NTRS)

    Hui, A.; Blosiu, J. O.; Wiberg, D. V.

    1998-01-01

    Taguchi Methods of Robust Design presents a way to optimize output process performance through an organized set of experiments by using orthogonal arrays. Analysis of variance and signal-to-noise ratio is used to evaluate the contribution of each of the process controllable parameters in the realization of the process optimization. In the photoresist deposition process, there are numerous controllable parameters that can affect the surface quality and thickness of the final photoresist layer.

  3. Evolutionary methods for multidisciplinary optimization applied to the design of UAV systems

    NASA Astrophysics Data System (ADS)

    Gonzalez, L. F.; Periaux, J.; Damp, L.; Srinivas, K.

    2007-10-01

    The implementation and use of a framework in which engineering optimization problems can be analysed are described. In the first part, the foundations of the framework and the hierarchical asynchronous parallel multi-objective evolutionary algorithms (HAPMOEAs) are presented. These are based upon evolution strategies and incorporate the concepts of multi-objective optimization, hierarchical topology, asynchronous evaluation of candidate solutions, and parallel computing. The methodology is presented first and the potential of HAPMOEAs for solving multi-criteria optimization problems is demonstrated on test case problems of increasing difficulty. In the second part of the article several recent applications of multi-objective and multidisciplinary optimization (MO) are described. These illustrate the capabilities of the framework and methodology for the design of UAV and UCAV systems. The application presented deals with a two-objective (drag and weight) UAV wing plan-form optimization. The basic concepts are refined and more sophisticated software and design tools with low- and high-fidelity CFD and FEA models are introduced. Various features described in the text are used to meet the challenge in optimization presented by these test cases.

  4. A PERFECT MATCH CONDITION FOR POINT-SET MATCHING PROBLEMS USING THE OPTIMAL MASS TRANSPORT APPROACH

    PubMed Central

    CHEN, PENGWEN; LIN, CHING-LONG; CHERN, I-LIANG

    2013-01-01

    We study the performance of optimal mass transport-based methods applied to point-set matching problems. The present study, which is based on the L2 mass transport cost, states that perfect matches always occur when the product of the point-set cardinality and the norm of the curl of the non-rigid deformation field does not exceed some constant. This analytic result is justified by a numerical study of matching two sets of pulmonary vascular tree branch points whose displacement is caused by the lung volume changes in the same human subject. The nearly perfect match performance verifies the effectiveness of this mass transport-based approach. PMID:23687536
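
    A hedged LaTeX rendering of the stated condition, in notation assumed here (n the point-set cardinality, v the non-rigid deformation field, C the constant from the L2 transport-cost analysis); the paper's own symbols and the precise norm may differ:

    ```latex
    % Perfect-match condition as paraphrased from the abstract (notation assumed):
    %   n : cardinality of the point set
    %   v : non-rigid deformation field displacing the points
    %   C : a constant arising from the L2 transport-cost analysis
    \[
      n \,\bigl\| \nabla \times v \bigr\| \;\le\; C
      \quad\Longrightarrow\quad
      \text{the optimal $L^2$ mass transport yields a perfect match.}
    \]
    ```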

  5. A General Multidisciplinary Turbomachinery Design Optimization system Applied to a Transonic Fan

    NASA Astrophysics Data System (ADS)

    Nemnem, Ahmed Mohamed Farid

    The blade geometry design process is integral to the development and advancement of compressors and turbines in gas generators or aeroengines. A new airfoil section design capability has been added to an open-source parametric 3D blade design tool. Curvature of the meanline is controlled using B-splines to create the airfoils. The curvature is analytically integrated to derive the angles, and the meanline is obtained by integrating the angles. A smooth thickness distribution is then added to the airfoil to guarantee a smooth shape while maintaining a prescribed thickness distribution. A leading-edge B-spline definition has also been implemented to achieve customized airfoil leading edges, which guarantees smoothness with parametric eccentricity and droop. An automated turbomachinery design and optimization system has been created. An existing splittered transonic fan is used as the test and reference case; this design is more general than a conventional design, giving access to the other design methodology. The whole mechanical and aerodynamic design loop is automated for the optimization process. The flow path and the geometrical properties of the rotor are initially created using the axisymmetric design and analysis code (T-AXI). The main and splitter blades are parametrically designed with the geometry builder (3DBGB) using the newly added curvature technique. The solid model of the rotor sector with periodic boundaries, combining the main blade and splitter and including the hub, fillets and tip clearance, is created using MATLAB code directly connected to SolidWorks. A mechanical optimization is performed with DAKOTA (developed by DOE) to reduce the mass of the blades while keeping maximum stress as a constraint with a safety factor. A genetic algorithm followed by a numerical gradient optimization strategy is used in the mechanical optimization. The splittered transonic fan blade mass is reduced by 2.6% while constraining the maximum stress.
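
    A minimal sketch of the two-step integration described above, with an arbitrary smooth curvature law standing in for the tool's B-spline parameterization: integrate curvature to obtain the blade angle distribution, then integrate the angle (via its tangent) to obtain the meanline.

    ```python
    import numpy as np

    # Airfoil meanline from a prescribed curvature distribution:
    # curvature -> angle -> meanline, by successive trapezoidal integration.
    n = 200
    x = np.linspace(0.0, 1.0, n)                 # normalized chordwise coordinate
    kappa = -1.2 * (1 - x)                        # toy curvature distribution

    # angle(x) = inlet angle + integral of curvature (inlet angle is an assumption).
    angle_le = np.deg2rad(40.0)
    theta = angle_le + np.concatenate(
        [[0.0], np.cumsum(0.5 * (kappa[1:] + kappa[:-1]) * np.diff(x))])

    # y(x) = integral of tan(theta) dx (trapezoidal rule again).
    slope = np.tan(theta)
    y = np.concatenate(
        [[0.0], np.cumsum(0.5 * (slope[1:] + slope[:-1]) * np.diff(x))])

    print(np.rad2deg(theta[0]), np.rad2deg(theta[-1]))   # inlet/outlet metal angles
    ```

    Working in curvature space is what guarantees smoothness: any continuous curvature law integrates to a continuously turning angle distribution and hence a kink-free meanline.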

  6. Oxidation of low calorific value gases -- Applying optimization techniques to combustor design

    SciTech Connect

    Gemmen, R.S.

    1998-07-01

    The design of an optimal air-staged combustor for the oxidation of a low calorific value gas mixture is presented. The focus is on the residual fuel emitted from the anode of a molten carbonate fuel-cell. Both experimental and numerical results are presented. The simplified numerical model considers a series of plug-flow-reactor sections, with the possible addition of a perfectly-stirred-reactor. The parameter used for optimization, Z, is the sum of fuel-component molar flow rates leaving a particular combustor section. An optimized air injection profile is one that minimizes Z for a given combustor length and inlet condition. Since a mathematical proof describing the significance of global interactions remains lacking, the numerical model employs both a Local optimization procedure and a Global optimization procedure. The sensitivity of Z to variations in the air injection profile and inlet temperature is also examined. The results show that oxidation of the anode exhaust gas is possible with low pollutant emissions.

  7. A Simulation of Optimal Foraging: The Nuts and Bolts Approach.

    ERIC Educational Resources Information Center

    Thomson, James D.

    1980-01-01

    Presents a mechanical model for an ecology laboratory that introduces the concept of optimal foraging theory. Describes the physical model which includes a board studded with protruding machine bolts that simulate prey, and blindfolded students who simulate either generalist or specialist predator types. Discusses the theoretical model and data…

  8. Demonstration of structural optimization applied to wind-tunnel model design

    NASA Astrophysics Data System (ADS)

    French, Mark; Kolonay, Raymond M.

    1992-10-01

    Results are presented which indicate that using structural optimization to design wind-tunnel models can result in a procedure that matches design stiffnesses well enough to be very useful in sizing the structures of aeroelastic models. The design procedure that is presented demonstrates that optimization can be useful in the design of aeroelastically scaled wind-tunnel models. The resulting structure effectively models an aeroelastically tailored composite wing with a simple aluminum beam structure, a structure that should be inexpensive to manufacture compared with a composite one.

  9. Optimizing technology investments: a broad mission model approach

    NASA Technical Reports Server (NTRS)

    Shishko, R.

    2003-01-01

    A long-standing problem in NASA is how to allocate scarce technology development resources across advanced technologies in order to best support a large set of future potential missions. Within NASA, two orthogonal paradigms have received attention in recent years: the real-options approach and the broad mission model approach. This paper focuses on the latter.

  10. A rule-based systems approach to spacecraft communications configuration optimization

    NASA Technical Reports Server (NTRS)

    Rash, James L.; Wong, Yen F.; Cieplak, James J.

    1988-01-01

    An experimental rule-based system for optimizing user spacecraft communications configurations was developed at NASA to support mission planning for spacecraft that obtain telecommunications services through NASA's Tracking and Data Relay Satellite System. Designated Expert for Communications Configuration Optimization (ECCO), and implemented in the OPS5 production system language, the system has shown the validity of a rule-based systems approach to this optimization problem. The development of ECCO and the incremental optimization method on which it is based are discussed. A test case using hypothetical mission data is included to demonstrate the optimization concept.

  11. Using genomic prediction to characterize environments and optimize prediction accuracy in applied breeding data

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Simulation and empirical studies of genomic selection (GS) show accuracies sufficient to generate rapid annual genetic gains. It also shifts the focus from the evaluation of lines to the evaluation of alleles. Consequently, new methods should be developed to optimize the use of large historic multi-...

  12. A Hierarchical Adaptive Approach to Optimal Experimental Design

    PubMed Central

    Kim, Woojae; Pitt, Mark A.; Lu, Zhong-Lin; Steyvers, Mark; Myung, Jay I.

    2014-01-01

    Experimentation is at the core of research in the behavioral and neural sciences, yet observations can be expensive and time-consuming to acquire (e.g., MRI scans, responses from infant participants). A major interest of researchers is designing experiments that lead to maximal accumulation of information about the phenomenon under study with the fewest possible number of observations. In addressing this challenge, statisticians have developed adaptive design optimization methods. This letter introduces a hierarchical Bayes extension of adaptive design optimization that provides a judicious way to exploit two complementary schemes of inference (with past and future data) to achieve even greater accuracy and efficiency in information gain. We demonstrate the method in a simulation experiment in the field of visual perception. PMID:25149697

  13. A new approach to optimization-based defibrillation.

    PubMed

    Muzdeka, S; Barbieri, E

    2001-01-01

    The purpose of this paper is to develop a new model for optimal cardiac defibrillation, based on simultaneous minimization of energy consumption and defibrillation time requirements. In order to generate optimal defibrillation waveforms that will accomplish the objective stated above, one parameter, rho, has been introduced as part of the performance measure to weigh the relative importance of time and energy. All the results of this theoretical study have been obtained for the proposed model, under the assumption that cardiac tissue can be represented by a simple parallel resistor-capacitor circuit. It is well known from modern control theory that the selection of a numerical value of the weight factor is a matter of subjective judgment on the part of the designer. However, it has been shown that defining a cost function can help in selecting a value for rho. Some results of the mathematical development of the algorithm and computer simulations are included in the paper. PMID:11347410
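
    A toy numeric illustration of the weighting idea, under stated assumptions: the load is the parallel RC circuit mentioned above, the waveform is restricted to constant-current pulses so that the threshold-crossing time and dissipated energy have closed forms, and a composite cost J = rho * energy + (1 - rho) * time is swept over the pulse amplitude for several values of rho. All component values are illustrative, and this is not the paper's algorithm.

    ```python
    import numpy as np

    # RC membrane model: a constant-current pulse I charges the membrane to
    # V(t) = I*R*(1 - exp(-t/(R*C))); the pulse must reach a threshold Vth.
    R, C, Vth = 50.0, 1.0e-4, 1.0        # illustrative values only

    def charge_time(I):
        return -R * C * np.log(1.0 - Vth / (I * R))

    def energy(I):
        return I ** 2 * R * charge_time(I)   # resistive dissipation over the pulse

    # Composite cost J = rho*energy + (1 - rho)*time (units mixed deliberately;
    # rho absorbs the scaling).  Sweep the current amplitude for several rho.
    I_grid = np.linspace(1.05 * Vth / R, 10 * Vth / R, 400)
    for rho in (0.1, 0.5, 0.9):
        J = rho * energy(I_grid) + (1.0 - rho) * charge_time(I_grid)
        I_opt = I_grid[J.argmin()]
        print(f"rho={rho}: I*={I_opt:.4f} A, t*={charge_time(I_opt)*1e3:.2f} ms")
    ```

    Larger rho penalizes energy and pushes the optimum toward weaker, longer pulses; smaller rho penalizes time and favors stronger, shorter ones, which is the trade-off the weight factor encodes.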

  14. A free boundary approach to shape optimization problems

    PubMed Central

    Bucur, D.; Velichkov, B.

    2015-01-01

    The analysis of shape optimization problems involving the spectrum of the Laplace operator, such as isoperimetric inequalities, has seen a series of interesting developments in recent years, essentially as a consequence of the infusion of free boundary techniques. The main focus of this paper is to show how the analysis of a general shape optimization problem of spectral type can be reduced to the analysis of particular free boundary problems. In this survey article, we give an overview of some very recent technical tools, the so-called shape sub- and supersolutions, and show how to use them for the minimization of spectral functionals involving the eigenvalues of the Dirichlet Laplacian, under a volume constraint. PMID:26261362

  15. A genetic algorithm approach in interface and surface structure optimization

    SciTech Connect

    Zhang, Jian

    2010-01-01

    The thesis is divided into two parts. In the first part, a global optimization method is developed for interface and surface structure optimization. Two prototype systems are chosen for study: one is Si[001] symmetric tilted grain boundaries and the other is the Ag/Au-induced Si(111) surface. It is found that the genetic algorithm is very efficient in finding lowest-energy structures in both cases. Not only can existing structures from experiments be reproduced, but many new structures can also be predicted using the genetic algorithm. It is thus shown that the genetic algorithm is an extremely powerful tool for material structure prediction. The second part of the thesis is devoted to the explanation of an experimental observation of thermal radiation from three-dimensional tungsten photonic crystal structures. The experimental results seemed astounding and confusing, yet the theoretical models in the paper reveal the physical insight behind the phenomena and reproduce the experimental results well.

  16. Aircraft optimization by a system approach: Achievements and trends

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1992-01-01

    Recently emerging methodology for optimal design of aircraft treated as a system of interacting physical phenomena and parts is examined. The methodology is found to coalesce into methods for hierarchic, non-hierarchic, and hybrid systems all dependent on sensitivity analysis. A separate category of methods has also evolved independent of sensitivity analysis, hence suitable for discrete problems. References and numerical applications are cited. Massively parallel computer processing is seen as enabling technology for practical implementation of the methodology.

  17. Improving Discrete-Sensitivity-Based Approach for Practical Design Optimization

    NASA Technical Reports Server (NTRS)

    Baysal, Oktay; Cordero, Yvette; Pandya, Mohagna J.

    1997-01-01

    In developing automated methodologies for simulation-based optimal shape design, their accuracy, efficiency and practicality are the defining factors of their success. To that end, four recent improvements to the building blocks of such a methodology, intended for more practical design optimization, are reported. First, in addition to a polynomial-based parameterization, a partial differential equation (PDE) based parameterization was shown to be a practical tool for a number of reasons. Second, an alternative has been incorporated to one of the tedious phases of developing such a methodology, namely, the automatic differentiation of the computer code for the flow analysis in order to generate the sensitivities. Third, by extending the methodology to thin-layer Navier-Stokes (TLNS) based flow simulations, more accurate flow physics was made available. However, the computer storage requirement for a shape optimization of a practical configuration with high-fidelity simulations (TLNS and dense-grid based simulations) required substantial computational resources. Therefore, the final improvement reported herein responded to this point by including the alternating-direction-implicit (ADI) based system solver as an alternative to the preconditioned biconjugate gradient (PbCG) and other direct solvers.

  18. Bayesian approach for counting experiment statistics applied to a neutrino point source analysis

    NASA Astrophysics Data System (ADS)

    Bose, D.; Brayeur, L.; Casier, M.; de Vries, K. D.; Golup, G.; van Eijndhoven, N.

    2013-12-01

    In this paper we present a model independent analysis method following Bayesian statistics to analyse data from a generic counting experiment and apply it to the search for neutrinos from point sources. We discuss a test statistic defined following a Bayesian framework that will be used in the search for a signal. In case no signal is found, we derive an upper limit without the introduction of approximations. The Bayesian approach allows us to obtain the full probability density function for both the background and the signal rate. As such, we have direct access to any signal upper limit. The upper limit derivation compares directly with a frequentist approach and is robust in the case of low-counting observations. Furthermore, it also allows one to account for previous upper limits obtained by other analyses via the concept of prior information, without the need for the ad hoc application of trial factors. To investigate the validity of the presented Bayesian approach, we have applied this method to the public IceCube 40-string configuration data for 10 nearby blazars and we have obtained a flux upper limit, which is in agreement with the upper limits determined via a frequentist approach. Furthermore, the upper limit obtained compares well with the previously published result of IceCube, using the same data set.
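
    A minimal sketch of the counting-experiment calculation described, assuming a flat prior on the signal rate and a known background (the paper's full treatment, including prior information from earlier analyses, is richer): the Poisson posterior is integrated numerically without approximations and read off at the desired credibility.

    ```python
    import numpy as np

    # Bayesian upper limit for a counting experiment: observe n events with known
    # expected background b; flat prior on the signal rate s >= 0.  The posterior
    # is p(s | n) proportional to (s + b)^n * exp(-(s + b)).
    n_obs, b = 4, 3.2                                  # illustrative numbers

    s = np.linspace(0.0, 40.0, 20001)
    log_post = n_obs * np.log(s + b) - (s + b)         # log-likelihood, flat prior
    post = np.exp(log_post - log_post.max())
    post /= np.trapz(post, s)                          # normalised posterior density

    cdf = np.cumsum(post) * (s[1] - s[0])
    upper90 = s[np.searchsorted(cdf, 0.90)]            # 90% credible upper limit
    print(f"90% upper limit on the signal rate: {upper90:.2f} events")
    ```

    Prior information from an earlier analysis would simply replace the flat prior in `log_post`, which is the mechanism the paper uses to fold in previously published limits without trial factors.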

  19. Quality by design approach for optimizing the formulation and physical properties of extemporaneously prepared orodispersible films.

    PubMed

    Visser, J Carolina; Dohmen, Willem M C; Hinrichs, Wouter L J; Breitkreutz, Jörg; Frijlink, Henderik W; Woerdenbag, Herman J

    2015-05-15

    The quality by design (QbD) approach was applied for optimizing the formulation of extemporaneously prepared orodispersible films (ODFs) using Design-Expert® Software. The starting formulation was based on earlier experiments and contained the film forming agents hypromellose and carbomer 974P and the plasticizer glycerol (Visser et al., 2015). Trometamol and disodium EDTA were added to stabilize the solution. To optimize this formulation, a quality target product profile was established in which critical quality attributes (CQAs) such as mechanical properties and disintegration time were defined and quantified. As critical process parameters (CPPs) that were evaluated for their effect on the CQAs, the percentage of hypromellose, the percentage of glycerol and the drying time were chosen. Response surface methodology (RSM) was used to evaluate the effects of the CPPs on the CQAs of the final product. The main factor affecting tensile strength and Young's modulus was the percentage of glycerol. Elongation at break was mainly influenced by the drying temperature. Disintegration time was found to be sensitive to the percentage of hypromellose. From the results a design space could be created. As long as the formulation and process variables remain within this design space, a product is obtained with the desired characteristics that meets all set quality requirements. PMID:25746737

  20. Optimal robust control of drug delivery in cancer chemotherapy: a comparison between three control approaches.

    PubMed

    Moradi, Hamed; Vossoughi, Gholamreza; Salarieh, Hassan

    2013-10-01

    During the drug delivery process in chemotherapy, both cancer cells and normal healthy cells may be killed. In this paper, three mathematical cell-kill models, including the log-kill hypothesis, the Norton-Simon hypothesis and the Emax hypothesis, are considered. Three control approaches, including optimal linear regulation, nonlinear optimal control based on variation of extremals and H∞-robust control based on μ-synthesis, are developed. An appropriate cost function is defined such that the amount of required drug is minimized while the tumor volume is reduced. For the first time, the performance of the system is investigated and compared for the three control strategies, applied to the three nonlinear models of the process. In addition, their efficiency is compared in the presence of model parametric uncertainties. It is observed that in the presence of model uncertainties, the controller designed based on variation of extremals is more efficient than the linear regulation controller. However, H∞-robust control is more efficient in improving the robust performance of the uncertain models, with faster tumor reduction and minimum drug usage. PMID:23891423

  1. Niching Methods: Speciation Theory Applied for Multi-modal Function Optimization

    NASA Astrophysics Data System (ADS)

    Shir, Ofer M.; Bäck, Thomas

    While contemporary Evolutionary Algorithms (EAs) excel in various types of optimizations, their generalization to speciational subpopulations is much needed upon their deployment to multi-modal landscapes, mainly due to the typical loss of population diversity. The resulting techniques, known as niching methods, are the main focus of this chapter, which will provide the motivation, pose the problem both from the biological as well as computational perspectives, and describe algorithmic solutions. Biologically inspired by organic speciation processes, and armed with real-world incentive to obtain multiple solutions for better decision making, we shall present here the application of certain bioprocesses to multi-modal function optimization, by means of a broad overview of the existing work in the field, as well as a detailed description of specific test cases.

  2. Optimization of liquid scintillation measurements applied to smears and aqueous samples collected in industrial environments

    NASA Astrophysics Data System (ADS)

    Chapon, Arnaud; Pigrée, Gilbert; Putmans, Valérie; Rogel, Gwendal

    Searching for low-energy β contaminations in industrial environments requires Liquid Scintillation Counting. This indirect measurement method demands fine control from sampling to the measurement itself. Thus, in this paper, we focus on the definition of a measurement method, as generic as possible, for the characterization of both smears and aqueous samples. That includes the choice of consumables, sampling methods, optimization of counting parameters and definition of energy windows, using the maximization of a Figure of Merit. Detection limits are then calculated considering these optimized parameters. For this purpose, we used PerkinElmer Tri-Carb counters. Nevertheless, except for results relative to some parameters specific to PerkinElmer, most of the results presented here can be extended to other counters.
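
    A minimal sketch of the window-optimization step, assuming the common liquid-scintillation figure of merit FOM = E^2/B (E the counting efficiency in the window, B the background rate it admits) and an invented toy spectrum: scan candidate energy windows and keep the one maximizing the FOM, which also minimizes the detection limit for a fixed counting time.

    ```python
    import numpy as np

    # Toy spectra on 200 channels: a low-energy beta source and a background.
    ch = np.arange(200)
    source_eff = np.exp(-((ch - 40) / 25.0) ** 2)      # source response per channel
    source_eff /= source_eff.sum()
    background = 0.5 + 0.002 * ch                       # background rate per channel

    # Figure of Merit for a window [lo, hi): FOM = E^2 / B.
    best = (0, 0, -np.inf)
    for lo in range(0, 190, 2):
        for hi in range(lo + 2, 200, 2):
            E = source_eff[lo:hi].sum() * 100.0         # efficiency in %
            B = background[lo:hi].sum()
            fom = E ** 2 / B
            if fom > best[2]:
                best = (lo, hi, fom)

    lo, hi, fom = best
    print(f"optimal window: channels [{lo}, {hi}), FOM = {fom:.1f}")
    ```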

  3. Optimization techniques applied to passive measures for in-orbit spacecraft survivability

    NASA Technical Reports Server (NTRS)

    Mog, Robert A.; Price, D. Marvin

    1991-01-01

    Spacecraft designers have always been concerned about the effects of meteoroid impacts on mission safety. The engineering solution to this problem has generally been to erect a bumper or shield placed outboard from the spacecraft wall to disrupt/deflect the incoming projectiles. Spacecraft designers have a number of tools at their disposal to aid in the design process. These include hypervelocity impact testing, analytic impact predictors, and hydrodynamic codes. Analytic impact predictors generally provide the best quick-look estimate of design tradeoffs. The most complete way to determine the characteristics of an analytic impact predictor is through optimization of the protective structures design problem formulated with the predictor of interest. Space Station Freedom protective structures design insight is provided through the coupling of design/material requirements, hypervelocity impact phenomenology, meteoroid and space debris environment sensitivities, optimization techniques and operations research strategies, and mission scenarios. Major results are presented.

  4. LARES: an artificial chemical process approach for optimization.

    PubMed

    Irizarry, Roberto

    2004-01-01

    This article introduces a new global optimization procedure called LARES. LARES is based on the concept of an artificial chemical process (ACP), a new paradigm which is described in this article. The algorithm's performance was studied using a test bed with a wide spectrum of problems, including random multi-modal problem generators, random LSAT problem generators with various degrees of epistasis, and a test bed of real-valued functions with different degrees of multi-modality, discontinuity and flatness. In all cases studied, LARES performed very well in terms of robustness and efficiency. PMID:15768524

  5. A mathematical approach to optimal selection of dose values in the additive dose method of EPR dosimetry

    SciTech Connect

    Hayes, R.B.; Haskell, E.H.; Kenner, G.H.

    1996-01-01

    Additive dose methods commonly used in electron paramagnetic resonance (EPR) dosimetry are time consuming and labor intensive. We have developed a mathematical approach for determining the optimal spacing of applied doses and the number of spectra which should be taken at each dose level. Expected uncertainties in the data points are assumed to be normally distributed with a fixed standard deviation, and linearity of the dose response is also assumed. The optimum spacing and number of points necessary for minimal error can be estimated, as can the likely error in the resulting estimate. When low doses are being estimated for tooth enamel samples, the optimal spacing is shown to be a concentration of points near the zero-dose value, with fewer spectra taken at a single high dose value within the range of known linearity. Optimization of the analytical process results in increased accuracy and sample throughput.
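
    A small simulation in the spirit of the analysis, under the stated assumptions (normal errors with fixed standard deviation, linear dose response): with a fixed budget of spectra, compare an evenly spaced design against one concentrating points near zero dose with fewer at a single high dose, scoring each by the scatter of the recovered dose (the fitted line's x-intercept magnitude). All numbers are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Additive-dose simulation: signal = a*(D0 + added_dose) + noise.
    # The recovered dose estimate is the fitted line's x-intercept magnitude.
    D0, a, sigma, n_trials = 5.0, 1.0, 0.3, 20000

    designs = {
        "even spacing":      np.repeat(np.linspace(0, 50, 6), 2),   # 12 spectra
        "clustered 0 + max": np.array([0.0] * 9 + [50.0] * 3),      # 12 spectra
    }

    for name, doses in designs.items():
        est = np.empty(n_trials)
        for t in range(n_trials):
            y = a * (D0 + doses) + rng.normal(0, sigma, doses.size)
            slope, intercept = np.polyfit(doses, y, 1)
            est[t] = intercept / slope          # recovered D0
        print(f"{name:20s}: std of dose estimate = {est.std():.3f}")
    ```

    Comparing the two standard deviations is exactly the design question the paper answers analytically: where to place a fixed number of spectra so the extrapolated dose has minimal error.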

  6. A combined NLP-differential evolution algorithm approach for the optimization of looped water distribution systems

    NASA Astrophysics Data System (ADS)

    Zheng, Feifei; Simpson, Angus R.; Zecchin, Aaron C.

    2011-08-01

    This paper proposes a novel optimization approach for the least cost design of looped water distribution systems (WDSs). Three distinct steps are involved in the proposed optimization approach. In the first step, the shortest-distance tree within the looped network is identified using the Dijkstra graph theory algorithm, for which an extension is proposed to find the shortest-distance tree for multisource WDSs. In the second step, a nonlinear programming (NLP) solver is employed to optimize the pipe diameters for the shortest-distance tree (chords of the shortest-distance tree are allocated the minimum allowable pipe sizes). Finally, in the third step, the original looped water network is optimized using a differential evolution (DE) algorithm seeded with diameters in the proximity of the continuous pipe sizes obtained in step two. As such, the proposed optimization approach combines the traditional deterministic optimization technique of NLP with the emerging evolutionary algorithm DE via the proposed network decomposition. The proposed methodology has been tested on four looped WDSs with the number of decision variables ranging from 21 to 454. Results obtained show the proposed approach is able to find optimal solutions with significantly less computational effort than other optimization techniques.
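
    A minimal end-to-end sketch of the three steps on an invented five-node network, with a toy cost standing in for a hydraulic solver (a real application would call something like EPANET): Dijkstra for the shortest-distance tree, a continuous NLP over the tree pipes with chords held at the minimum size, then scipy's differential evolution seeded near the NLP diameters.

    ```python
    import numpy as np
    from scipy.sparse import csr_matrix
    from scipy.sparse.csgraph import dijkstra
    from scipy.optimize import minimize, differential_evolution

    rng = np.random.default_rng(7)

    # Toy looped network: 5 nodes, 6 pipes; node 0 is the source.
    edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3), (2, 4)]
    length = np.array([100.0, 120.0, 90.0, 110.0, 80.0, 70.0])
    D_MIN, D_MAX = 0.10, 0.60

    # Step 1: shortest-distance tree from the source via Dijkstra.
    W = np.zeros((5, 5))
    for (i, j), L in zip(edges, length):
        W[i, j] = W[j, i] = L
    dist, pred = dijkstra(csr_matrix(W), indices=0, return_predecessors=True)
    tree = {tuple(sorted((int(pred[j]), j))) for j in range(1, 5)}
    on_tree = np.array([tuple(sorted(e)) in tree for e in edges])
    idx_tree = np.where(on_tree)[0]

    # Toy cost: capital cost rises with diameter; a head-loss penalty (~1/d^5,
    # Darcy/Hazen-Williams-like) rewards larger pipes on long runs.
    def cost(d):
        return np.sum(15.0 * d ** 1.5 * length) + np.sum(0.05 * length / d ** 5)

    # Step 2: continuous NLP over tree pipes only (chords held at minimum size).
    def full_d(dt):
        d = np.full(6, D_MIN)
        d[idx_tree] = dt
        return d

    res = minimize(lambda dt: cost(full_d(dt)), x0=np.full(idx_tree.size, 0.3),
                   bounds=[(D_MIN, D_MAX)] * idx_tree.size)

    # Step 3: DE over the whole looped network, population seeded near the NLP result.
    seed = np.clip(full_d(res.x) + rng.normal(0, 0.03, (20, 6)), D_MIN, D_MAX)
    result = differential_evolution(cost, bounds=[(D_MIN, D_MAX)] * 6, init=seed)
    print(result.x.round(3), result.fun)
    ```

    The seeding is the key coupling: DE starts in the neighborhood of the NLP optimum rather than from a random population, which is what yields the reported savings in computational effort.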

  7. A new approach for structural health monitoring by applying anomaly detection on strain sensor data

    NASA Astrophysics Data System (ADS)

    Trichias, Konstantinos; Pijpers, Richard; Meeuwissen, Erik

    2014-03-01

    Structural Health Monitoring (SHM) systems help to monitor critical infrastructures (bridges, tunnels, etc.) remotely and provide up-to-date information about their physical condition. In addition, they help to predict the structure's life and required maintenance in a cost-efficient way. Typically, inspection data give insight into the structural health. The global structural behavior, and predominantly the structural loading, is generally measured with vibration and strain sensors. Acoustic emission sensors are increasingly used for measuring global crack activity near critical locations. In this paper, we present a procedure for local structural health monitoring by applying Anomaly Detection (AD) on data from strain sensors placed in the expected crack path. Sensor data are analyzed by automatic anomaly detection in order to find crack activity at an early stage. This approach targets the monitoring of critical structural locations, such as welds, near which strain sensors can be applied during construction, and/or locations with limited inspection possibilities during structural operation. We investigate several anomaly detection techniques to detect changes in statistical properties indicating structural degradation. The most effective one is a novel polynomial fitting technique, which tracks slow changes in sensor data. Our approach has been tested on a representative test structure (bridge deck) in a lab environment, under constant and variable amplitude fatigue loading. In both cases, the evolving cracks at the monitored locations were successfully detected, autonomously, by our AD monitoring tool.
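
    A minimal sketch of the polynomial-fitting idea on a synthetic strain record: fit a low-order polynomial in sliding windows, learn the coefficient statistics on an assumed healthy period, and flag windows whose coefficients drift beyond a z-score threshold. Window length, degree and threshold are illustrative assumptions, not the paper's tuned values.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    # Synthetic strain signal: slow seasonal drift + noise, with a change in trend
    # at t = 700 standing in for the onset of crack activity near the sensor.
    t = np.arange(1000.0)
    strain = 50 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 2.0, t.size)
    strain[700:] += 0.15 * (t[700:] - 700)              # slowly growing anomaly

    # Sliding-window polynomial fits (50% overlap).
    WIN, DEG = 100, 2
    coefs = []
    for start in range(0, t.size - WIN, WIN // 2):
        sl = slice(start, start + WIN)
        coefs.append(np.polyfit(t[sl] - t[sl][0], strain[sl], DEG))
    coefs = np.array(coefs)

    baseline = coefs[:8]                                 # assumed healthy reference
    mu, sd = baseline.mean(axis=0), baseline.std(axis=0) + 1e-9
    score = np.abs((coefs - mu) / sd).max(axis=1)        # max z-score over coefficients

    for k, s_ in enumerate(score):
        if s_ > 4.0:
            print(f"anomaly flagged in window starting at t = {k * (WIN // 2)}")
            break
    ```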

  8. Spectral Approach to Optimal Estimation of the Global Average Temperature.

    NASA Astrophysics Data System (ADS)

    Shen, Samuel S. P.; North, Gerald R.; Kim, Kwang-Y.

    1994-12-01

    Making use of EOF analysis and statistical optimal averaging techniques, the problem of random sampling error in estimating the global average temperature by a network of surface stations has been investigated. The EOF representation makes it unnecessary to use simplified empirical models of the correlation structure of temperature anomalies. If an adjustable weight is assigned to each station according to the criterion of minimum mean-square error, a formula for this error can be derived that consists of a sum of contributions from successive EOF modes. The EOFs were calculated from both observed data and a noise-forced EBM for the problem of one-year and five-year averages. The mean square statistical sampling error depends on the spatial distribution of the stations, length of the averaging interval, and the choice of the weight for each station data stream. Examples used here include four symmetric configurations of 4 × 4, 6 × 4, 9 × 7, and 20 × 10 stations and the Angell-Korshover configuration. Comparisons with the 100-yr U.K. dataset show that correlations for the time series of the global temperature anomaly average between the full dataset and this study's sparse configurations are rather high. For example, the 63-station Angell-Korshover network with uniform weighting explains 92.7% of the total variance, whereas the same network with optimal weighting can lead to 97.8% explained total variance of the U.K. dataset.
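
    A minimal sketch of the optimal-averaging step described above: with C the error covariance of the station anomaly records, the weights minimizing the mean-square error of the weighted average under the constraint sum(w) = 1 follow in closed form from Lagrange multipliers. The covariance below is synthetic, not EOF-derived from real station data.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    # Optimal weighting of N station records for an average-temperature estimate:
    # minimise w^T C w subject to sum(w) = 1; Lagrange multipliers give
    # w = C^{-1} 1 / (1^T C^{-1} 1).
    N = 20
    A = rng.normal(size=(N, N))
    C = A @ A.T / N + 0.1 * np.eye(N)        # synthetic positive-definite covariance

    ones = np.ones(N)
    Cinv1 = np.linalg.solve(C, ones)
    w_opt = Cinv1 / (ones @ Cinv1)

    mse_uniform = ones @ C @ ones / N ** 2    # error of plain uniform weighting
    mse_optimal = w_opt @ C @ w_opt
    print(f"uniform MSE = {mse_uniform:.4f}, optimal MSE = {mse_optimal:.4f}")
    ```

    The gap between the two printed errors is the quantity the study reports as explained variance gained by optimal over uniform weighting of the sparse station networks.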

  9. Spectral approach to optimal estimation of the global average temperature

    SciTech Connect

    Shen, S.S.P.; North, G.R.; Kim, K.Y.

    1994-12-01

    Making use of EOF analysis and statistical optimal averaging techniques, the problem of random sampling error in estimating the global average temperature by a network of surface stations has been investigated. The EOF representation makes it unnecessary to use simplified empirical models of the correlation structure of temperature anomalies. If an adjustable weight is assigned to each station according to the criterion of minimum mean-square error, a formula for this error can be derived that consists of a sum of contributions from successive EOF modes. The EOFs were calculated from both observed data and a noise-forced EBM for the problem of one-year and five-year averages. The mean square statistical sampling error depends on the spatial distribution of the stations, length of the averaging interval, and the choice of the weight for each station data stream. Examples used here include four symmetric configurations of 4 X 4, 5 X 4, 9 X 7, and 20 X 10 stations and the Angell-Korshover configuration. Comparisons with the 100-yr U.K. dataset show that correlations for the time series of the global temperature anomaly average between the full dataset and this study's sparse configurations are rather high. For example, the 63-station Angell-Korshover network with uniform weighting explains 92.7% of the total variance, whereas the same network with optimal weighting can lead to 97.8% explained total variance of the U.K. dataset. 27 refs., 5 figs., 4 tabs.

  10. High direct drive illumination uniformity achieved by multi-parameter optimization approach: a case study of Shenguang III laser facility.

    PubMed

    Tian, Chao; Chen, Jia; Zhang, Bo; Shan, Lianqiang; Zhou, Weimin; Liu, Dongxiao; Bi, Bi; Zhang, Feng; Wang, Weiwu; Zhang, Baohan; Gu, Yuqiu

    2015-05-01

    The uniformity of the compression driver is of fundamental importance for inertial confinement fusion (ICF). In this paper, the illumination uniformity on a spherical capsule during the initial imprinting phase directly driven by laser beams has been considered. We aim to explore methods to achieve high direct drive illumination uniformity on laser facilities designed for indirect drive ICF. There are many parameters that would affect the irradiation uniformity, such as Polar Direct Drive displacement quantity, capsule radius, laser spot size and intensity distribution within a laser beam. A novel approach to reduce the root mean square illumination non-uniformity based on multi-parameter optimizing approach (particle swarm optimization) is proposed, which enables us to obtain a set of optimal parameters over a large parameter space. Finally, this method is applied to improve the direct drive illumination uniformity provided by Shenguang III laser facility and the illumination non-uniformity is reduced from 5.62% to 0.23% for perfectly balanced beams. Moreover, beam errors (power imbalance and pointing error) are taken into account to provide a more practical solution and results show that this multi-parameter optimization approach is effective. PMID:25969321

  11. Design and optimization of bilayered tablet of Hydrochlorothiazide using the Quality-by-Design approach

    PubMed Central

    Dholariya, Yatin N; Bansod, Yogesh B; Vora, Rahul M; Mittal, Sandeep S; Shirsat, Ajinath Eknath; Bhingare, Chandrashekhar L

    2014-01-01

    Aim: The aim of the present study is to develop an optimized bilayered tablet using Hydrochlorothiazide (HCTZ) as a model drug candidate, using the quality by design (QbD) approach. Introduction and Method: The bilayered tablet gives biphasic drug release through a loading dose, prepared using croscarmellose sodium, a superdisintegrant, and a maintenance dose using several viscosity grades of hydrophilic polymers. The fundamental principle of QbD is to demonstrate understanding and control of pharmaceutical processes so as to deliver high quality pharmaceutical products with wide opportunities for continuous improvement. Risk assessment was carried out, and subsequently a 2(2) factorial design in duplicate was selected to carry out the design of experiments (DOE) for evaluating the interactions and effects of the design factors on the critical quality attributes. The design space was obtained by applying DOE and multivariate analysis, so as to ensure the desired disintegration time (DT) and drug release are achieved. The bilayered tablets were evaluated for hardness, thickness, friability, drug content uniformity and in vitro drug dissolution. Result: The optimized formulation obtained from the design space exhibits a DT of around 70 s, while the DR T95% (time required to release 95% of the drug) was about 720 min. Kinetic studies of the formulations revealed that erosion is the predominant mechanism for drug release. Conclusion: From the obtained results, it was concluded that the independent variables have a significant effect on the dependent responses, which can be deduced from half-normal plots, Pareto charts and response surface graphs. The predicted values matched well with the experimental values, and the results demonstrate the feasibility of the design model in the development and optimization of the HCTZ bilayered tablet. PMID:25006554

  12. Input estimation for drug discovery using optimal control and Markov chain Monte Carlo approaches.

    PubMed

    Trägårdh, Magnus; Chappell, Michael J; Ahnmark, Andrea; Lindén, Daniel; Evans, Neil D; Gennemark, Peter

    2016-04-01

    Input estimation is employed in cases where it is desirable to recover the form of an input function which cannot be directly observed and for which there is no model for the generating process. In pharmacokinetic and pharmacodynamic modelling, input estimation in linear systems (deconvolution) is well established, while the nonlinear case is largely unexplored. In this paper, a rigorous definition of the input-estimation problem is given, and the choices involved in terms of modelling assumptions and estimation algorithms are discussed. In particular, the paper covers Maximum a Posteriori estimates using techniques from optimal control theory, and full Bayesian estimation using Markov Chain Monte Carlo (MCMC) approaches. These techniques are implemented using the optimisation software CasADi, and applied to two example problems: one where the oral absorption rate and bioavailability of the drug eflornithine are estimated using pharmacokinetic data from rats, and one where energy intake is estimated from body-mass measurements of mice exposed to monoclonal antibodies targeting the fibroblast growth factor receptor (FGFR) 1c. The results from the analysis are used to highlight the strengths and weaknesses of the methods used when applied to sparsely sampled data. The presented methods for optimal control are fast and robust, and can be recommended for use in drug discovery. The MCMC-based methods can have long running times and require more expertise from the user. The rigorous definition together with the illustrative examples and suggestions for software serve as a highly promising starting point for application of input-estimation methods to problems in drug discovery. PMID:26932466
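
    As a sketch of the fully Bayesian branch, the random-walk Metropolis sampler below targets a hypothetical log_posterior(theta) built from a pharmacokinetic model and priors over a parameterized input function; the paper's own implementation uses CasADi and more elaborate machinery:

      import numpy as np

      def metropolis(log_posterior, theta0, n_samples=10_000, step=0.1, seed=0):
          """Random-walk Metropolis sampler over input-function parameters."""
          rng = np.random.default_rng(seed)
          theta = np.asarray(theta0, dtype=float)
          lp = log_posterior(theta)
          samples = []
          for _ in range(n_samples):
              proposal = theta + step * rng.standard_normal(theta.shape)
              lp_prop = log_posterior(proposal)
              if np.log(rng.random()) < lp_prop - lp:    # accept with prob min(1, ratio)
                  theta, lp = proposal, lp_prop
              samples.append(theta.copy())
          return np.array(samples)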

  13. An Optimization-Based Approach to Injector Element Design

    NASA Technical Reports Server (NTRS)

    Tucker, P. Kevin; Shyy, Wei; Vaidyanathan, Rajkumar; Turner, Jim (Technical Monitor)

    2000-01-01

    An injector optimization methodology, method i, is used to investigate optimal design points for gaseous oxygen/gaseous hydrogen (GO2/GH2) injector elements. A swirl coaxial element and an unlike impinging element (a fuel-oxidizer-fuel triplet) are used to facilitate the study. The elements are optimized in terms of design variables such as fuel pressure drop, deltaP(sub f), oxidizer pressure drop, deltaP(sub o), combustor length, L(sub comb), and full cone swirl angle, theta (for the swirl element), or impingement half-angle, alpha (for the impinging element), at a given mixture ratio and chamber pressure. Dependent variables such as energy release efficiency, ERE, wall heat flux, Q(sub w), injector heat flux, Q(sub inj), relative combustor weight, W(sub rel), and relative injector cost, C(sub rel), are calculated and then correlated with the design variables. An empirical design methodology is used to generate these responses for both element types. Method i is then used to generate response surfaces for each dependent variable for both types of elements. Desirability functions based on dependent variable constraints are created and used to facilitate development of composite response surfaces representing the five dependent variables in terms of the input variables. Three examples illustrating the utility and flexibility of method i are discussed in detail for each element type. First, joint response surfaces are constructed by sequentially adding dependent variables. Optimum designs are identified after the addition of each variable, and the effect each variable has on the element design is illustrated. This stepwise demonstration also highlights the importance of including variables such as weight and cost early in the design process. Secondly, using the composite response surface that includes all five dependent variables, unequal weights are assigned to emphasize certain variables relative to others. Here, method i is used to enable objective trade studies on design issues
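
    A sketch of the desirability-function idea used to combine the five responses: each dependent variable is mapped onto [0, 1] and the maps are merged by a weighted geometric mean. The response names, targets and weights below are illustrative assumptions, not values from the study:

      import numpy as np

      def d_smaller_is_better(y, target, upper):
          """Linear desirability: 1 at or below the target, 0 at the upper limit."""
          return float(np.clip((upper - y) / (upper - target), 0.0, 1.0))

      def composite_desirability(d_values, weights=None):
          """Weighted geometric mean of individual desirabilities."""
          d = np.asarray(d_values, dtype=float)
          w = np.ones_like(d) if weights is None else np.asarray(weights, dtype=float)
          return float(np.exp(np.sum(w * np.log(np.maximum(d, 1e-12))) / w.sum()))

      # e.g. wall heat flux, injector heat flux, relative weight, relative cost, plus a
      # "larger is better" map (not shown) for energy release efficiency:
      # D = composite_desirability([d_ERE, d_Qw, d_Qinj, d_Wrel, d_Crel], [1, 1, 1, 2, 2])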

  14. Particle Swarm Optimization Approach in a Consignment Inventory System

    NASA Astrophysics Data System (ADS)

    Sharifyazdi, Mehdi; Jafari, Azizollah; Molamohamadi, Zohreh; Rezaeiahari, Mandana; Arshizadeh, Rahman

    2009-09-01

    Consignment Inventory (CI) is a kind of inventory which is in the possession of the customer, but is still owned by the supplier. This creates a condition of shared risk whereby the supplier risks the capital investment associated with the inventory while the customer risks dedicating retail space to the product. This paper considers both the vendor's and the retailers' costs in an integrated model. The vendor here is a warehouse which stores one type of product and supplies it at the same wholesale price to multiple retailers, who then sell the product in independent markets at retail prices. Our main aim is to design a CI system which generates minimum costs for the two parties. A Particle Swarm Optimization (PSO) algorithm is developed to calculate the proper values. Finally, a sensitivity analysis is performed to examine the effects of each parameter on the decision variables, and the performance of PSO is compared with a genetic algorithm.

  15. Thorough approach to measurement uncertainty analysis applied to immersed heat exchanger testing

    SciTech Connect

    Farrington, R B; Wells, C V

    1986-04-01

    This paper discusses the value of an uncertainty analysis, describes how to determine measurement uncertainty, and then details the sources of error in instrument calibration, data acquisition, and data reduction for a particular experiment. Methods are discussed for determining both the systematic (or bias) error and the random (or precision) error in an experiment. The detailed analysis is applied to two sets of conditions in measuring the effectiveness of an immersed coil heat exchanger. It shows the value of such an analysis, as well as an approach to reducing overall measurement uncertainty and improving the experiment. The paper outlines how to perform an uncertainty analysis and then provides a detailed example of how to apply the methods discussed. The authors hope this paper will encourage researchers and others to become more concerned with their measurement processes and to report measurement uncertainty with all of their test results.
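
    A sketch of the standard combination of the two error types described above, using a root-sum-square of the bias limit and a Student-t multiple of the precision error of the mean; the convention and the numbers are illustrative:

      import math
      from scipy import stats

      def total_uncertainty(bias, std_dev, n, confidence=0.95):
          """Combine systematic (bias) and random (precision) error by root-sum-square."""
          t = stats.t.ppf(0.5 + confidence / 2, df=n - 1)   # two-sided t value
          precision = t * std_dev / math.sqrt(n)            # random error of the mean
          return math.sqrt(bias**2 + precision**2)

      # e.g. a temperature measurement with 0.2 K bias limit, 0.15 K scatter, 20 readings
      print(total_uncertainty(bias=0.2, std_dev=0.15, n=20))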

  16. Learning About Dying and Living: An Applied Approach to End-of-Life Communication.

    PubMed

    Pagano, Michael P

    2016-08-01

    The purpose of this article is to expand on prior research in end-of-life communication and death and dying communication apprehension by describing a unique course that utilizes a hospice setting and an applied, service-learning approach. This essay describes and discusses both the students' and my experiences over a 7-year period from 2008 through 2014. The courses taught during this time frame provided an opportunity to analyze students' responses, experiences, and discoveries across semesters/years and cocultures. This unique, 3-credit, 14-week, service-learning, end-of-life communication course was developed to give students an opportunity to learn the theories of this field of study and to apply that knowledge through volunteer experiences via interactions with dying patients and their families. Seven years of the author's notes, plus three electronically submitted reflection essays from each of the 91 students (273 documents in total) across four courses/years, served as the data for this study. According to the students, verbally in class discussions and in numerous writing assignments, this course helped lower their death and dying communication apprehension and increased their willingness to interact with hospice patients and their families. Furthermore, the students' final research papers clearly demonstrated how the service-learning approach allowed them to apply classroom learning, and their interactions with dying patients and their families at the hospice, to their analyses of end-of-life communication theories and behaviors. The results of these classes suggest that other difficult-topic courses (e.g., domestic violence, addiction, etc.) might benefit from a similar pedagogical approach. PMID:26789660

  17. Optimal processing for gel electrophoresis images: Applying Monte Carlo Tree Search in GelApp.

    PubMed

    Nguyen, Phi-Vu; Ghezal, Ali; Hsueh, Ya-Chih; Boudier, Thomas; Gan, Samuel Ken-En; Lee, Hwee Kuan

    2016-08-01

    In biomedical research, gel band size estimation in electrophoresis analysis is a routine process. To facilitate and automate this process, numerous software tools have been released, notably the GelApp mobile app. However, band detection accuracy is limited by a detection algorithm that cannot adapt to variations in the input images. To address this, we used the Monte Carlo Tree Search with Upper Confidence Bound (MCTS-UCB) method to efficiently search for optimal image processing pipelines for the band detection task, thereby improving the segmentation algorithm. Incorporating this into GelApp, we report a significant enhancement of gel band detection accuracy of 55.9 ± 2.0% for protein polyacrylamide gels, and 35.9 ± 2.5% for DNA SYBR green agarose gels. This implementation is a proof-of-concept demonstrating MCTS-UCB as a strategy to optimize general image segmentation. The improved version of GelApp (GelApp 2.0) is freely available on both the Google Play Store (Android) and the Apple App Store (iOS). PMID:27251892
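
    At the core of MCTS-UCB is the upper-confidence-bound rule for choosing which candidate image-processing operation to expand next. The sketch below shows that rule in isolation; the node statistics, operation names and exploration constant are illustrative assumptions:

      import math

      def ucb1_select(children, total_visits, c=math.sqrt(2)):
          """Pick the child maximizing mean reward plus an exploration bonus."""
          def score(node):
              if node["visits"] == 0:
                  return float("inf")                      # try unvisited operations first
              exploit = node["reward"] / node["visits"]    # mean segmentation score so far
              explore = c * math.sqrt(math.log(total_visits) / node["visits"])
              return exploit + explore
          return max(children, key=score)

      ops = [{"name": "median_blur",        "reward": 6.1, "visits": 9},
             {"name": "adaptive_threshold", "reward": 4.8, "visits": 6},
             {"name": "morph_open",         "reward": 0.0, "visits": 0}]
      print(ucb1_select(ops, total_visits=15)["name"])     # -> morph_open (unvisited)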

  18. Applying Reflective Middleware Techniques to Optimize a QoS-enabled CORBA Component Model Implementation

    NASA Technical Reports Server (NTRS)

    Wang, Nanbor; Parameswaran, Kirthika; Kircher, Michael; Schmidt, Douglas

    2003-01-01

    Although existing CORBA specifications, such as Real-time CORBA and CORBA Messaging, address many end-to-end quality-of-service (QoS) properties, they do not define strategies for configuring these properties into applications flexibly, transparently, and adaptively. Therefore, application developers must make these configuration decisions manually and explicitly, which is tedious, error-prone, and often sub-optimal. Although the recently adopted CORBA Component Model (CCM) does define a standard configuration framework for packaging and deploying software components, conventional CCM implementations focus on functionality rather than adaptive quality-of-service, which makes them unsuitable for next-generation applications with demanding QoS requirements. This paper presents three contributions to the study of middleware for QoS-enabled component-based applications. It outlines reflective middleware techniques designed to adaptively (1) select optimal communication mechanisms, (2) manage QoS properties of CORBA components in their containers, and (3) (re)configure selected component executors dynamically. Based on our ongoing research on CORBA and the CCM, we believe the application of reflective techniques to component middleware will provide a dynamically adaptive and (re)configurable framework for COTS software that is well-suited for the QoS demands of next-generation applications.

  19. Applying Reflective Middleware Techniques to Optimize a QoS-enabled CORBA Component Model Implementation

    NASA Technical Reports Server (NTRS)

    Wang, Nanbor; Kircher, Michael; Schmidt, Douglas C.

    2000-01-01

    Although existing CORBA specifications, such as Real-time CORBA and CORBA Messaging, address many end-to-end quality-of-service (QoS) properties, they do not define strategies for configuring these properties into applications flexibly, transparently, and adaptively. Therefore, application developers must make these configuration decisions manually and explicitly, which is tedious, error-prone, and often sub-optimal. Although the recently adopted CORBA Component Model (CCM) does define a standard configuration framework for packaging and deploying software components, conventional CCM implementations focus on functionality rather than adaptive quality-of-service, which makes them unsuitable for next-generation applications with demanding QoS requirements. This paper presents three contributions to the study of middleware for QoS-enabled component-based applications. It outlines reflective middleware techniques designed to adaptively: (1) select optimal communication mechanisms, (2) manage QoS properties of CORBA components in their containers, and (3) (re)configure selected component executors dynamically. Based on our ongoing research on CORBA and the CCM, we believe the application of reflective techniques to component middleware will provide a dynamically adaptive and (re)configurable framework for COTS software that is well-suited for the QoS demands of next-generation applications.

  20. Experiences in applying optimization techniques to configurations for the Control Of Flexible Structures (COFS) Program

    NASA Technical Reports Server (NTRS)

    Walsh, Joanne L.

    1988-01-01

    Optimization procedures are developed to systematically provide closely-spaced vibration frequencies. A general-purpose finite-element program for eigenvalue and sensitivity analyses is combined with formal mathematical programming techniques. Results are presented for three studies. The first study uses a simple model to obtain a design with two pairs of closely-spaced frequencies. Two formulations are developed: an objective function-based formulation and constraint-based formulation for the frequency spacing. It is found that conflicting goals are handled better by a constraint-based formulation. The second study uses a detailed model to obtain a design with one pair of closely-spaced frequencies while satisfying requirements on local member frequencies and manufacturing tolerances. Two formulations are developed. Both the constraint-based and the objective function-based formulations perform reasonably well and converge to the same results. However, no feasible design solution exists which satisfies all design requirements for the choices of design variables and the upper and lower design variable values used. More design freedom is needed to achieve a fully satisfactory design. The third study is part of a redesign activity in which a detailed model is used. The use of optimization in this activity allows investigation of numerous options (such as number of bays, material, minimum diagonal wall thicknesses) in a relatively short time. The procedure provides data for judgments on the effects of different options on the design.

  1. Optimization of spatial light distribution through genetic algorithms for vision systems applied to quality control

    NASA Astrophysics Data System (ADS)

    Castellini, P.; Cecchini, S.; Stroppa, L.; Paone, N.

    2015-02-01

    The paper presents an adaptive illumination system for image quality enhancement in vision-based quality control systems. In particular, a spatial modulation of illumination intensity is proposed in order to improve image quality, thus compensating for different target scattering properties, local reflections and fluctuations of ambient light. The desired spatial modulation of illumination is obtained by a digital light projector, used to illuminate the scene with an arbitrary spatial distribution of light intensity, designed to improve feature extraction in the region of interest. The spatial distribution of illumination is optimized by running a genetic algorithm. An image quality estimator is used to close the feedback loop and to stop iterations once the desired image quality is reached. The technique proves particularly valuable for optimizing the spatial illumination distribution in the region of interest, with the remarkable capability of the genetic algorithm to adapt the light distribution to very different target reflectivity and ambient conditions. The final objective of the proposed technique is the improvement of the matching score in the recognition of parts through matching algorithms, and hence the improvement of the diagnoses made by machine vision-based quality inspections. The procedure has been validated both by a numerical model and by an experimental test, referring to a significant problem of quality control for the washing machine manufacturing industry: the recognition of a metallic clamp. Its applicability to other domains is also presented, specifically for the visual inspection of shoes with retro-reflective tape and T-shirts with paillettes.
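
    A minimal genetic-algorithm loop of the kind described, assuming a hypothetical image_quality(pattern) estimator that closes the feedback loop over a projected illumination pattern coded as a vector in [0, 1]^dim; selection, crossover and mutation here are the simplest real-coded variants, not the authors' exact operators:

      import numpy as np

      def ga_optimize(fitness, dim, pop_size=40, generations=100, mut_sigma=0.05, seed=0):
          """Maximize fitness over illumination patterns coded as vectors in [0, 1]^dim."""
          rng = np.random.default_rng(seed)
          pop = rng.random((pop_size, dim))
          for _ in range(generations):
              f = np.array([fitness(p) for p in pop])
              parents = pop[np.argsort(f)[::-1][:pop_size // 2]]     # truncation selection
              alpha = rng.random((pop_size // 2, 1))
              kids = alpha * parents + (1 - alpha) * parents[::-1]   # arithmetic crossover
              kids += mut_sigma * rng.standard_normal(kids.shape)    # Gaussian mutation
              pop = np.clip(np.vstack([parents, kids]), 0.0, 1.0)
          f = np.array([fitness(p) for p in pop])
          return pop[f.argmax()]

      # best_pattern = ga_optimize(image_quality, dim=64)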

  2. Doehlert experimental design applied to optimization of light emitting textile structures

    NASA Astrophysics Data System (ADS)

    Oguz, Yesim; Cochrane, Cedric; Koncar, Vladan; Mordon, Serge R.

    2016-07-01

    A light emitting fabric (LEF) has been developed for photodynamic therapy (PDT) for the treatment of dermatologic diseases such as Actinic Keratosis (AK). A successful PDT requires homogeneous and reproducible light with controlled power and wavelength on the treated skin area. Due to the shape of the human body, traditional PDT with external light sources is unable to deliver homogeneous light everywhere on the skin (head vertex, hand, etc.). For better light delivery homogeneity, plastic optical fibers (POFs) have been woven into textile in order to emit the injected light laterally. Previous studies confirmed that the light power can be locally controlled by modifying the radius of the POF macro-bendings within the textile structure. The objective of this study is to optimize the distribution of macro-bendings over the LEF surface in order to increase the light intensity (mW/cm²) and to guarantee the best possible light delivery homogeneity over the LEF, two goals that are often contradictory. Fifteen experiments were carried out with a Doehlert experimental design involving Response Surface Methodology (RSM). The proposed models are fitted to the experimental data to enable the optimal setup of the warp yarn tensions.

  3. A data mining approach to optimize pellets manufacturing process based on a decision tree algorithm.

    PubMed

    Ronowicz, Joanna; Thommes, Markus; Kleinebudde, Peter; Krysiński, Jerzy

    2015-06-20

    The present study is focused on a thorough analysis of the cause-effect relationships between pellet formulation characteristics (pellet composition as well as process parameters) and the selected quality attribute of the final product. The quality of the pellets was expressed by their shape, using the aspect ratio value. A data matrix for chemometric analysis consisted of 224 pellet formulations prepared with eight different active pharmaceutical ingredients and several various excipients, using different extrusion/spheronization process conditions. The data set contained 14 input variables (both formulation and process variables) and one output variable (pellet aspect ratio). A tree regression algorithm consistent with the Quality by Design concept was applied to obtain deeper understanding and knowledge of the formulation and process parameters affecting the final pellet sphericity. A clear, interpretable set of decision rules was generated. The spheronization speed, spheronization time, number of holes and water content of the extrudate were recognized as the key factors influencing pellet aspect ratio. The most spherical pellets were achieved by using a large number of holes during extrusion, a high spheronizer speed and a longer spheronization time. The described data mining approach enhances knowledge about the pelletization process and simultaneously facilitates the search for the optimal process conditions necessary to achieve ideally spherical pellets, resulting in good flow characteristics. This data mining approach can be considered by industrial formulation scientists to support rational decision making in the field of pellet technology. PMID:25835791
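
    A sketch of the same kind of tree regression with scikit-learn; the file name and column names are hypothetical stand-ins for the study's 14 formulation and process variables:

      import pandas as pd
      from sklearn.tree import DecisionTreeRegressor, export_text

      df = pd.read_csv("pellet_formulations.csv")          # assumed data file
      X = df[["spheronization_speed", "spheronization_time",
              "n_holes", "water_content"]]                 # four of the input variables
      y = df["aspect_ratio"]                               # output variable

      tree = DecisionTreeRegressor(max_depth=4, min_samples_leaf=10).fit(X, y)
      print(export_text(tree, feature_names=list(X.columns)))   # readable decision rules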

  4. Optimal groundwater remediation design of pump and treat systems via a simulation-optimization approach and firefly algorithm

    NASA Astrophysics Data System (ADS)

    Javad Kazemzadeh-Parsi, Mohammad; Daneshmand, Farhang; Ahmadfard, Mohammad Amin; Adamowski, Jan; Martel, Richard

    2015-01-01

    In the present study, an optimization approach based on the firefly algorithm (FA) is combined with a finite element simulation method (FEM) to determine the optimum design of pump and treat remediation systems. Three multi-objective functions in which pumping rate and clean-up time are design variables are considered and the proposed FA-FEM model is used to minimize operating costs, total pumping volumes and total pumping rates in three scenarios while meeting water quality requirements. The groundwater lift and contaminant concentration are also minimized through the optimization process. The obtained results show the applicability of the FA in conjunction with the FEM for the optimal design of groundwater remediation systems. The performance of the FA is also compared with the genetic algorithm (GA) and the FA is found to have a better convergence rate than the GA.

  5. Optimal Diagnostic Approaches for Patients with Suspected Small Bowel Disease

    PubMed Central

    Kim, Jae Hyun; Moon, Won

    2016-01-01

    While the domain of gastrointestinal endoscopy has made great strides over the last several decades, endoscopic assessment of the small bowel continues to be challenging. Recently, with the development of new technology including video capsule endoscopy, device-assisted enteroscopy, and computed tomography/magnetic resonance enterography, a more thorough investigation of the small bowel is possible. In this article, we review the systematic approach for patients with suspected small bowel disease based on these advanced endoscopic and imaging systems. PMID:27334413

  6. Optical scatterometry with analytic approaches applied to periodic nano-arrays including anisotropic layers

    NASA Astrophysics Data System (ADS)

    Abdulhalim, I.

    2007-06-01

    Optical scatterometry is being used as a powerful technique for the measurement of sub-wavelength periodic structures. It is based on measuring the scattered signal and solving the inverse scattering problem. For periodic nano-arrays with feature sizes less than 100 nm, it is possible to simplify the electromagnetic simulations using the Rytov near quasi-static approximation, valid for feature periods only a few times smaller than the wavelength. This is shown to be adequate for the determination of the structure parameters from the zero-order reflected or transmitted waves and their polarization or ellipsometric properties. This approach is validated on lamellar nano-scale grating photoresist lines on a Si substrate. A formulation for structures containing anisotropic multilayers is presented using the 4×4 matrix approach.

  7. A rule-based systems approach to spacecraft communications configuration optimization

    NASA Technical Reports Server (NTRS)

    Rash, James L.; Wong, Yen F.; Cieplak, James J.

    1988-01-01

    An experimental rule-based system for optimizing user spacecraft communications configurations was developed at NASA to support mission planning for spacecraft that obtain telecommunications services through NASA's Tracking and Data Relay Satellite System. Designated Expert for Communications Configuration Optimization (ECCO), and implemented in the OPS5 production system language, the system has shown the validity of a rule-based systems approach to this optimization problem. The development of ECCO and the incremental optimization method on which it is based are discussed. A test case using hypothetical mission data is included to demonstrate the optimization concept.

  8. A rule-based systems approach to spacecraft communications configuration optimization

    NASA Astrophysics Data System (ADS)

    Rash, James L.; Wong, Yen F.; Cieplak, James J.

    An experimental rule-based system for optimizing user spacecraft communications configurations was developed at NASA to support mission planning for spacecraft that obtain telecommunications services through NASA's Tracking and Data Relay Satellite System. Designated Expert for Communications Configuration Optimization (ECCO), and implemented in the OPS5 production system language, the system has shown the validity of a rule-based systems approach to this optimization problem. The development of ECCO and the incremental optimization method on which it is based are discussed. A test case using hypothetical mission data is included to demonstrate the optimization concept.

  9. A new approach to the Pontryagin maximum principle for nonlinear fractional optimal control problems

    NASA Astrophysics Data System (ADS)

    Ali, Hegagi M.; Pereira, Fernando Lobo; Gama, Sílvio M. A.

    2016-09-01

    In this paper, we discuss a new general formulation of fractional optimal control problems whose performance index is in the fractional integral form and whose dynamics are given by a set of fractional differential equations in the Caputo sense. We use a new approach to prove necessary conditions of optimality, in the form of the Pontryagin maximum principle, for fractional nonlinear optimal control problems. Moreover, a new method based on a generalization of the Mittag-Leffler function is used to solve this class of fractional optimal control problems. A simple example is provided to illustrate the effectiveness of our main result.
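
    For reference, the two standard definitions underlying such formulations are the Caputo fractional derivative and the two-parameter Mittag-Leffler function:

      {}^{C}D_{t}^{\alpha} f(t)
          = \frac{1}{\Gamma(n-\alpha)} \int_{0}^{t}
            \frac{f^{(n)}(\tau)}{(t-\tau)^{\alpha+1-n}} \, d\tau,
          \qquad n-1 < \alpha < n,

      E_{\alpha,\beta}(z) = \sum_{k=0}^{\infty} \frac{z^{k}}{\Gamma(\alpha k + \beta)}.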

  10. Reformulation linearization technique based branch-and-reduce approach applied to regional water supply system planning

    NASA Astrophysics Data System (ADS)

    Lan, Fujun; Bayraksan, Güzin; Lansey, Kevin

    2016-03-01

    A regional water supply system design problem that determines pipe and pump design parameters and water flows over a multi-year planning horizon is considered. A non-convex nonlinear model is formulated and solved by a branch-and-reduce global optimization approach. The lower bounding problem is constructed via a three-pronged effort that involves transforming the space of certain decision variables, polyhedral outer approximations, and the Reformulation Linearization Technique (RLT). Range reduction techniques are employed systematically to speed up convergence. Computational results demonstrate the efficiency of the proposed algorithm and, in particular, the critical role that range reduction techniques can play in RLT-based branch-and-bound methods. Results also indicate that using reclaimed water not only saves freshwater sources but is also a cost-effective non-potable water source in arid regions. Supplemental data for this article can be accessed at http://dx.doi.org/10.1080/0305215X.2015.1016508.

  11. A digital approach for phase measurement applied to delta-t tuneup procedure

    SciTech Connect

    Aiello, G.

    1993-05-01

    Beam energy and phase in a Linac are important parameters to be measured in order to tune the machine. They can be calculated from the time of flight of a beam bunch over a known distance between two locations, and by comparing the phase of a cavity to the beam phase. In both cases, the phase difference between two signals must be measured in order to get the required information. The electronics used for this measurement must meet stringent requirements: high bandwidth, good accuracy and resolution have always been a challenge for classical analog solutions. A digital approach has been investigated, which provides good resolution, accuracy independent of the phase difference value, and good repeatability and reliability. Numerical analyses have been performed, showing the system's optimal performance and limitations. A prototype has been tested in the laboratory, which confirms the predicted performance and proves the system's feasibility.

  13. Optimal speech motor control and token-to-token variability: a Bayesian modeling approach.

    PubMed

    Patri, Jean-François; Diard, Julien; Perrier, Pascal

    2015-12-01

    The remarkable capacity of the speech motor system to adapt to various speech conditions is due to an excess of degrees of freedom, which enables producing similar acoustical properties with different sets of control strategies. To explain how the central nervous system selects one of the possible strategies, a common approach, in line with optimal motor control theories, is to model speech motor planning as the solution of an optimality problem based on cost functions. Despite the success of this approach, one of its drawbacks is the intrinsic contradiction between the concept of optimality and the observed experimental intra-speaker token-to-token variability. The present paper proposes an alternative approach by formulating feedforward optimal control in a probabilistic Bayesian modeling framework. This is illustrated by controlling a biomechanical model of the vocal tract for speech production and by comparing it with an existing optimal control model (GEPPETO). The essential elements of this optimal control model are presented first. From them the Bayesian model is constructed in a progressive way. Performance of the Bayesian model is evaluated based on computer simulations and compared to the optimal control model. This approach is shown to be appropriate for solving the speech planning problem while accounting for variability in a principled way. PMID:26497359

  14. Optimization of the Operation of Green Buildings applying the Facility Management

    NASA Astrophysics Data System (ADS)

    Somorová, Viera

    2014-06-01

    Nowadays, the field of civil engineering shows an upward trend towards environmental sustainability. It relates mainly to the achievement of energy efficiency and to emission reduction throughout the whole life cycle of a building, i.e. in the course of its implementation, use and liquidation. These requirements are fulfilled, to a large extent, by green buildings. The characteristic features of green buildings are primarily the highly sophisticated technical and technological equipment installed therein. Such sophisticated systems of technological equipment also require sophisticated management. From this point of view, facility management has all the prerequisites to meet this requirement. The paper aims to define facility management as an effective method that enables the optimization of the management of supporting activities by creating conditions for the optimum operation of green buildings, viewed from the aspect of environmental conditions.

  15. Excited-State Geometry Optimization with the Density Matrix Renormalization Group, as Applied to Polyenes.

    PubMed

    Hu, Weifeng; Chan, Garnet Kin-Lic

    2015-07-14

    We describe and extend the formalism of state-specific analytic density matrix renormalization group (DMRG) energy gradients, first used by Liu et al. [J. Chem. Theor. Comput. 2013, 9, 4462]. We introduce a DMRG wave function maximum overlap following technique to facilitate state-specific DMRG excited-state optimization. Using DMRG configuration interaction (DMRG-CI) gradients, we relax the low-lying singlet states of a series of trans-polyenes up to C20H22. Using the relaxed excited-state geometries, as well as correlation functions, we elucidate the exciton, soliton, and bimagnon ("single-fission") character of the excited states, and find evidence for a planar conical intersection. PMID:26575737

  16. Applying Business Process Re-engineering Patterns to optimize WS-BPEL Workflows

    NASA Astrophysics Data System (ADS)

    Buys, Jonas; de Florio, Vincenzo; Blondia, Chris

    With the advent of XML-based SOA, WS-BPEL quickly became a widely accepted standard for modeling business processes. Though SOA is said to embrace the principle of business agility, BPEL process definitions are still manually crafted into their final executable version. While SOA has proven to be a giant leap forward in building flexible IT systems, this static BPEL workflow model is somewhat paradoxical given the need for real business agility and should be enhanced to better sustain continual process evolution. In this paper, we point out the potential of adding business intelligence, with respect to business process re-engineering patterns, to the system to allow for automatic business process optimization. Furthermore, we point out that BPR macro-rules could be implemented by leveraging micro-techniques from computer science. We present some practical examples that illustrate the benefits of such adaptive process models and our preliminary findings.

  17. Optimization in multidimensional gas chromatography applying quantitative analysis via a stable isotope dilution assay.

    PubMed

    Schmarr, Hans-Georg; Slabizki, Petra; Legrum, Charlotte

    2013-08-01

    Trace-level analyses in complex matrices benefit from heart-cut multidimensional gas chromatographic (MDGC) separations and quantification via a stable isotope dilution assay. Minimizing the potential transfer of co-eluting matrix compounds from the first-dimension ((1)D) separation into the second-dimension separation requires narrow cut-windows. Knowledge about the nature of the isotope effect in the separation of labeled and unlabeled compounds allows conditions to be chosen that ideally result in co-elution in the (1)D separation. Since the isotope effect strongly depends on the interactions of the analytes with the stationary phase, an appropriate separation column polarity is mandatory for isotopic co-elution. With 3-alkyl-2-methoxypyrazines and an ionic liquid stationary phase as an example, optimization of the MDGC method is demonstrated and critical aspects of narrow cut-window definition are discussed. PMID:23732869

  18. Optimization or Simulation? Comparison of approaches to reservoir operation on the Senegal River

    NASA Astrophysics Data System (ADS)

    Raso, Luciano; Bader, Jean-Claude; Pouget, Jean-Christophe; Malaterre, Pierre-Olivier

    2015-04-01

    The design of reservoir operation rules traditionally follows two approaches: optimization and simulation. In simulation, the analyst hypothesizes operation rules and selects them by what-if analysis, based on the effects of model simulations on different objective indicators. In optimization, the analyst selects operational objective indicators, obtaining operation rules as an output. Optimized rules guarantee optimality, but they often require further model simplification and can be hard to communicate. Selecting the most appropriate approach depends on the system under analysis and on the analyst's expertise and objectives. We present the advantages and disadvantages of both approaches, and we test them on the design of operation rules for the Manantali reservoir on the Senegal River, West Africa, comparing their performance in attaining the system objectives. Objective indicators are defined a priori, in order to quantify the system performance. Results from this application are not universally generalizable, but they allow us to draw conclusions about this system and to give further information on the application of both approaches.

  19. A simulation-optimization approach to retrieve reservoir releasing strategies under the trade-off objectives considering flooding, sedimentation, turbidity and water supply during typhoons

    NASA Astrophysics Data System (ADS)

    Huang, C. L.; Hsu, N. S.; Yeh, W. W. G.; You, G. J. Y.

    2014-12-01

    This study develops a simulation-optimization approach for retrieving optimal multi-layer reservoir conjunctive release strategies considering the natural hazards of sedimentation, turbidity and flooding during typhoon invasion. The purposes of the developed approach are: (1) to apply a WASP-based fluid dynamic sediment concentration simulation model, together with the developed method for extracting ideal releasing practice, to search for the optimal initial solution for optimization; and (2) to construct a surrogate sediment concentration simulation model which is embedded in the optimization model. In this study, the optimization model is solved by tabu search, and the optimized releasing hydrograph is then used for construction of the decision model. The study applies the Adaptive Network-based Fuzzy Inference System (ANFIS) and a Real-time Recurrent Learning Neural Network (RTRLNN) as construction tools for the total suspended solids concentration simulation model. The developed approach is applied to the Shihmen Reservoir basin, Taiwan. The assessment indices of the operational outcome of multi-purpose multi-layer conjunctive releasing are the maximum sediment concentration at Yuan-Shan weir, the sediment removal ratio, the highest water level at Shan-Yin Bridge, and the final water level in Shihmen reservoir. The analysis and optimization results show the following: (1) multi-layer releasing during the stages before flood arrival and before peak flow possesses high potential for flood detention and sedimentation control, and during the stages after peak flow, for turbidity control and storage; (2) the error tolerance and adaptability of ANFIS are superior, so the ANFIS-based sediment concentration simulation model surpasses the RTRLNN-based model in simulating the mechanism and characteristics of sediment transport; and (3) the developed approach can effectively and automatically retrieve the optimal multi-layer releasing strategies under the trade-off between flooding, sedimentation, turbidity and water supply.

  20. The 15-meter antenna performance optimization using an interdisciplinary approach

    NASA Technical Reports Server (NTRS)

    Grantham, William L.; Schroeder, Lyle C.; Bailey, Marion C.; Campbell, Thomas G.

    1988-01-01

    A 15-meter diameter deployable antenna has been built and is being used as an experimental test system with which to develop interdisciplinary controls, structures, and electromagnetics technology for large space antennas. The program objective is to study interdisciplinary issues important in optimizing large space antenna performance for a variety of potential users. The 15-meter antenna utilizes a hoop column structural concept with a gold-plated molybdenum mesh reflector. One feature of the design is the use of adjustable control cables to improve the paraboloid reflector shape. Manual adjustment of the cords after initial deployment improved surface smoothness from the as-built accuracy of 0.140 in. RMS to 0.070 in. RMS. Preliminary structural dynamics tests and near-field electromagnetic tests were made. The antenna is now being modified for further testing. Modifications include the addition of a precise motorized control cord adjustment system to make the reflector surface smoother and an adaptive feed for electronic compensation of reflector surface distortions. Although the previous test results show good agreement between calculated and measured values, additional work is needed to study modelling limits for each discipline, evaluate the potential of adaptive feed compensation, and study closed-loop control performance in a dynamic environment.

  1. A Formal Approach to Empirical Dynamic Model Optimization and Validation

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G; Morelli, Eugene A.; Kenny, Sean P.; Giesy, Daniel P.

    2014-01-01

    A framework was developed for the optimization and validation of empirical dynamic models subject to an arbitrary set of validation criteria. The validation requirements imposed upon the model, which may involve several sets of input-output data and arbitrary specifications in time and frequency domains, are used to determine if model predictions are within admissible error limits. The parameters of the empirical model are estimated by finding the parameter realization for which the smallest of the margins of requirement compliance is as large as possible. The uncertainty in the value of this estimate is characterized by studying the set of model parameters yielding predictions that comply with all the requirements. Strategies are presented for bounding this set, studying its dependence on admissible prediction error set by the analyst, and evaluating the sensitivity of the model predictions to parameter variations. This information is instrumental in characterizing uncertainty models used for evaluating the dynamic model at operating conditions differing from those used for its identification and validation. A practical example based on the short period dynamics of the F-16 is used for illustration.
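
    A sketch of the max-min estimation step described above: the smallest requirement-compliance margin is maximized over the model parameters. The margin functions g_i(p) (positive when requirement i is met) and the starting point are hypothetical:

      import numpy as np
      from scipy.optimize import minimize

      def fit_by_worst_margin(margins, p0):
          """Find parameters maximizing the smallest requirement-compliance margin."""
          worst = lambda p: min(g(p) for g in margins)     # smallest margin at p
          res = minimize(lambda p: -worst(p), p0, method="Nelder-Mead")
          return res.x, worst(res.x)

      # Hypothetical margins built from time- and frequency-domain error limits:
      # p_hat, margin = fit_by_worst_margin([g_time, g_freq], p0=np.zeros(4))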

  2. Optimization Approaches for Designing a Novel 4-Bit Reversible Comparator

    NASA Astrophysics Data System (ADS)

    Zhou, Ri-gui; Zhang, Man-qun; Wu, Qian; Li, Yan-cheng

    2013-02-01

    Reversible logic is a rapidly developing research field of recent years, which has received much attention for computing with minimal energy consumption. This paper constructs a new 4×4 reversible gate called the ZRQ gate to build a quantum adder and subtractor. Meanwhile, a novel 1-bit reversible comparator using the proposed ZRQC module, built on the ZRQ gate, is proposed with the minimum number of reversible gates and quantum costs. In addition, this paper presents a novel 4-bit reversible comparator based on the 1-bit reversible comparator. One of the vital goals in optimizing reversible logic is to design reversible logic circuits with the minimum number of parameters. The proposed reversible comparators achieve superiority in terms of the number of reversible gates, input constants, garbage outputs, unit delays and quantum costs compared with existing circuits. Finally, MATLAB simulation software is used to test and verify the correctness of the proposed 4-bit reversible comparator.

  3. Dynamic Range Size Analysis of Territorial Animals: An Optimality Approach.

    PubMed

    Tao, Yun; Börger, Luca; Hastings, Alan

    2016-10-01

    Home range sizes of territorial animals are often observed to vary periodically in response to seasonal changes in foraging opportunities. Here we develop the first mechanistic model focused on the temporal dynamics of home range expansion and contraction in territorial animals. We demonstrate how simple movement principles can lead to a rich suite of range size dynamics, by balancing foraging activity with defensive requirements and incorporating optimal behavioral rules into mechanistic home range analysis. Our heuristic model predicts three general temporal patterns that have been observed in empirical studies across multiple taxa. First, a positive correlation between age and territory quality promotes shrinking home ranges over an individual's lifetime, with maximal range size variability shortly before the adult stage. Second, poor sensory information, low population density, and large resource heterogeneity may all independently facilitate range size instability. Finally, aggregation behavior toward forage-rich areas helps produce divergent home range responses between individuals from different age classes. This model has broad applications for addressing important unknowns in animal space use, with potential applications also in conservation and health management strategies. PMID:27622879

  4. Precision and the approach to optimality in quantum annealing processors

    NASA Astrophysics Data System (ADS)

    Johnson, Mark W.

    The last few years have seen both a significant technological advance towards the practical application of, and a growing scientific interest in, the underlying behaviour of quantum annealing (QA) algorithms. A series of commercially available QA processors, most recently the D-Wave 2X™ 1000-qubit processor, have provided a valuable platform for empirical study of QA at a non-trivial scale. From this it has become clear that misspecification of Hamiltonian parameters is an important performance consideration, both for the goal of studying the underlying physics of QA and for that of building a practical and useful QA processor. The empirical study of the physics of QA requires a way to look beyond Hamiltonian misspecification. Recently, a solver metric called 'time-to-target' was proposed as a way to compare quantum annealing processors to classical heuristic algorithms. This approach puts emphasis on analyzing a solver's short-time approach to the ground state. In this presentation I will review the processor technology, based on superconducting flux qubits, and some of the known sources of error in Hamiltonian specification. I will then discuss recent advances in reducing Hamiltonian specification error, as well as review the time-to-target metric and empirical results analyzed in this way.

  5. Optimization Approaches for Designing Quantum Reversible Arithmetic Logic Unit

    NASA Astrophysics Data System (ADS)

    Haghparast, Majid; Bolhassani, Ali

    2016-03-01

    Reversible logic has emerged as a promising alternative for applications in low-power design and quantum computation in recent years, due to its ability to reduce power dissipation, an important research area in low-power VLSI and ULSI design. Many important contributions have been made in the literature towards reversible implementations of arithmetic and logical structures; however, there have not been many efforts directed towards efficient approaches for designing reversible Arithmetic Logic Units (ALUs). In this study, three efficient approaches are presented and their implementations in the design of reversible ALUs are demonstrated. Three new designs of a reversible one-digit arithmetic logic unit for quantum arithmetic are presented in this article. The paper provides explicit constructions of reversible ALUs effecting basic arithmetic operations with respect to the minimization of cost metrics. Architectures are proposed in which each block is realized using elementary quantum logic gates. Then, reversible implementations of the proposed designs are analyzed and evaluated. The results demonstrate that the proposed designs are cost-effective compared with their existing counterparts. All the designs are at the nanometric scale.

  6. New aspects of developing a dry powder inhalation formulation applying the quality-by-design approach.

    PubMed

    Pallagi, Edina; Karimi, Keyhaneh; Ambrus, Rita; Szabó-Révész, Piroska; Csóka, Ildikó

    2016-09-10

    The current work outlines the application of an up-to-date, regulatory-based pharmaceutical quality management method, applied as a new development concept in the process of formulating dry powder inhalation systems (DPIs). According to the Quality by Design (QbD) methodology and Risk Assessment (RA) thinking, a mannitol-based co-spray-dried formulation was produced as a model dosage form with meloxicam as the model active agent. The concept and the elements of the QbD approach (regarding its systemic, scientific, risk-based, holistic, and proactive nature, with defined steps for pharmaceutical development), as well as the experimental drug formulation (including the technological parameters assessed and the methods and processes applied), are described in the current paper. Findings of the QbD-based theoretical prediction and the results of the experimental development are compared and presented. Characteristics of the developed end-product correlated with the predictions, and all data were confirmed by the relevant results of the in vitro investigations. These results support the importance of using the QbD approach in new drug formulation and demonstrate its usability in the early development process of DPIs. This innovative formulation technology and product appear to have great potential in pulmonary drug delivery. PMID:27386791

  7. Academic Departmental Management: An Application of an Interactive Multicriterion Optimization Approach.

    ERIC Educational Resources Information Center

    Geoffrion, A. M.; And Others

    This paper presents the conceptual development and application of a new interactive approach for multicriterion optimization to the aggregate operating problem of an academic department. This approach provides a mechanism for assisting an administrator in determining resource allocation decisions and only requires local trade-off and preference…

  8. A simple reliability-based topology optimization approach for continuum structures using a topology description function

    NASA Astrophysics Data System (ADS)

    Liu, Jie; Wen, Guilin; Zhi Zuo, Hao; Qing, Qixiang

    2016-07-01

    The structural configuration obtained by deterministic topology optimization may represent a low reliability level and lead to a high failure rate. Therefore, it is necessary to take reliability into account for topology optimization. By integrating reliability analysis into topology optimization problems, a simple reliability-based topology optimization (RBTO) methodology for continuum structures is investigated in this article. The two-layer nesting involved in RBTO, which is time consuming, is decoupled by the use of a particular optimization procedure. A topology description function approach (TOTDF) and a first order reliability method are employed for topology optimization and reliability calculation, respectively. The problem of the non-smoothness inherent in TOTDF is dealt with using two different smoothed Heaviside functions and the corresponding topologies are compared. Numerical examples demonstrate the validity and efficiency of the proposed improved method. In-depth discussions are also presented on the influence of different structural reliability indices on the final layout.
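
    One commonly used smoothed Heaviside of the kind referred to (not necessarily either of the two variants compared in the article) is the hyperbolic-tangent regularization with smoothing parameter epsilon:

      H_{\epsilon}(x) \approx \tfrac{1}{2}\left(1 + \tanh\frac{x}{\epsilon}\right),
      \qquad \lim_{\epsilon \to 0^{+}} H_{\epsilon}(x) = H(x).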

  9. An analytical approach for gain optimization in multimode fiber Raman amplifiers.

    PubMed

    Zhou, Junhe

    2014-09-01

    In this paper, an analytical approach is proposed to minimize the mode-dependent gain as well as the wavelength-dependent gain of multimode fiber Raman amplifiers (MFRAs). It is shown that the optimal power integrals at the corresponding modes and wavelengths can be obtained by the non-negative least squares method (NNLSM). The corresponding input pump powers can be calculated afterwards using the shooting method. It is demonstrated that if the power overlap integrals are not wavelength dependent, the optimization can be further simplified by decomposing the optimization problem into two sub-optimization problems, i.e. the optimization of the gain ripple with respect to the modes, and with respect to the wavelengths. The optimization results closely match those in recent publications. PMID:25321517
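
    A sketch of the non-negative least squares step, assuming a hypothetical matrix A of power overlap integrals mapping pump powers to modal/spectral gains and a flat target gain vector g; scipy's nnls enforces the physical constraint that pump powers cannot be negative:

      import numpy as np
      from scipy.optimize import nnls

      # Rows: (signal mode, wavelength) pairs; columns: pump configurations (hypothetical).
      A = np.array([[0.9, 0.3, 0.1],
                    [0.7, 0.5, 0.2],
                    [0.4, 0.6, 0.5],
                    [0.2, 0.4, 0.9]])
      g = np.full(4, 10.0)                  # flat 10 dB target gain

      p, residual = nnls(A, g)              # pump powers constrained to p >= 0
      print(p, residual)                    # residual reflects the remaining gain ripple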

  10. An approach to structure/control simultaneous optimization for large flexible spacecraft

    NASA Technical Reports Server (NTRS)

    Onoda, Junjiro; Haftka, Raphael T.

    1987-01-01

    This paper presents an approach to the simultaneous optimal design of a structure and control system for large flexible spacecraft, based on a realistic objective function and constraints. The weight or total cost of the structure and control system is minimized subject to constraints on the magnitude of the response to a given disturbance, involving both rigid-body and elastic modes. A nested optimization technique is developed to solve the combined problem. As an example, a simple beam-like spacecraft under a steady-state white-noise disturbance force is investigated and some optimization results are presented. In the numerical examples, the stiffness distribution, location of the controller, and control gains are optimized. Direct feedback and linear quadratic optimal control laws are used, with both inertial and noninertial disturbing forces. It is shown that the total cost is sensitive to the overall structural stiffness, so that simultaneous optimization of the structure and control system is indeed useful.

  11. A hybrid simulation-optimization approach for solving the areal groundwater pollution source identification problems

    NASA Astrophysics Data System (ADS)

    Ayvaz, M. Tamer

    2016-07-01

    In this study, a new simulation-optimization approach is proposed for solving the areal groundwater pollution source identification problem, which is an ill-posed inverse problem. In the simulation part of the proposed approach, groundwater flow and pollution transport processes are simulated by modeling the given aquifer system with the MODFLOW and MT3DMS models. The developed simulation model is then integrated into a newly proposed hybrid optimization model in which a binary genetic algorithm and a generalized reduced gradient method are used together. This is a novel approach, employed for the first time in areal pollution source identification problems. The objective of the proposed hybrid optimization approach is to simultaneously identify the spatial distributions and input concentrations of the unknown areal groundwater pollution sources by using a limited number of pollution concentration time series at the monitoring well locations. The applicability of the proposed simulation-optimization approach is evaluated on a hypothetical aquifer model for different pollution source distributions. Furthermore, model performance is evaluated for measurement error conditions, different genetic algorithm parameter combinations, different numbers and locations of the monitoring wells, and different heterogeneous hydraulic conductivity fields. The results indicate that the proposed simulation-optimization approach may be an effective way to solve areal groundwater pollution source identification problems.

  12. Optimizing hereditary angioedema management through tailored treatment approaches.

    PubMed

    Nasr, Iman H; Manson, Ania L; Al Wahshi, Humaid A; Longhurst, Hilary J

    2016-01-01

    Hereditary angioedema (HAE) is a rare but serious and potentially life threatening autosomal dominant condition caused by low or dysfunctional C1 esterase inhibitor (C1-INH) or uncontrolled contact pathway activation. Symptoms are characterized by spontaneous, recurrent attacks of subcutaneous or submucosal swellings typically involving the face, tongue, larynx, extremities, genitalia or bowel. The prevalence of HAE is estimated to be 1:50,000 without known racial differences. It causes psychological stress as well as significant socioeconomic burden. Early treatment and prevention of attacks are associated with better patient outcome and lower socioeconomic burden. New treatments and a better evidence base for management are emerging which, together with a move from hospital-centered to patient-centered care, will enable individualized, tailored treatment approaches. PMID:26496459

  13. Code to Optimize Load Sharing of Split-Torque Transmissions Applied to the Comanche Helicopter

    NASA Technical Reports Server (NTRS)

    1995-01-01

    Most helicopters now in service have a transmission with a planetary design. Studies have shown that some helicopters would be lighter and more reliable if they had a transmission with a split-torque design instead. However, a split-torque design has never been used by a U.S. helicopter manufacturer because there has been no proven method to ensure equal sharing of the load among the multiple load paths. The Sikorsky/Boeing team has chosen to use a split-torque transmission for the U.S. Army's Comanche helicopter, and Sikorsky Aircraft is designing and manufacturing the transmission. To help reduce the technical risk of fielding this helicopter, NASA and the Army have done the research jointly in cooperation with Sikorsky Aircraft. A theory was developed that equal load sharing could be achieved by proper configuration of the geartrain, and a computer code was completed in-house at the NASA Lewis Research Center to calculate this optimal configuration.

  14. Experiences in applying optimization techniques to configurations for the Control of Flexible Structures (COFS) program

    NASA Technical Reports Server (NTRS)

    Walsh, Joanne L.

    1989-01-01

    Optimization procedures are developed to systematically provide closely-spaced vibration frequencies. A general-purpose finite-element program for eigenvalue and sensitivity analyses is combined with formal mathematical programming techniques. Results are presented for three studies. The first study uses a simple model to obtain a design with two pairs of closely-spaced frequencies. Two formulations are developed: an objective function-based formulation and a constraint-based formulation for the frequency spacing. It is found that conflicting goals are handled better by a constraint-based formulation. The second study uses a detailed model to obtain a design with one pair of closely-spaced frequencies while satisfying requirements on local member frequencies and manufacturing tolerances. Two formulations are developed. Both the constraint-based and the objective function-based formulations perform reasonably well and converge to the same results. However, no feasible design solution exists which satisfies all design requirements for the chosen design variables and their upper and lower bounds. More design freedom is needed to achieve a fully satisfactory design. The third study is part of a redesign activity in which a detailed model is used.

  15. Factorial design applied to the optimization of lipid composition of topical antiherpetic nanoemulsions containing isoflavone genistein

    PubMed Central

    Argenta, Débora Fretes; de Mattos, Cristiane Bastos; Misturini, Fabíola Dallarosa; Koester, Leticia Scherer; Bassani, Valquiria Linck; Simões, Cláudia Maria Oliveira; Teixeira, Helder Ferreira

    2014-01-01

    The aim of this study was to optimize topical nanoemulsions containing genistein by means of a 2³ full factorial design based on physicochemical properties and skin retention. The experimental arrangement was constructed using oil type (isopropyl myristate or castor oil), phospholipid type (distearoylphosphatidylcholine [DSPC] or dioleoylphosphatidylcholine [DOPC]), and ionic cosurfactant type (oleic acid or oleylamine) as independent variables. The analysis of variance showed a third-order effect for particle size, polydispersity index, and skin retention of genistein. Nanoemulsions composed of isopropyl myristate/DOPC/oleylamine showed the smallest diameter and the highest genistein amount in porcine ear skin, whereas the formulation composed of isopropyl myristate/DSPC/oleylamine exhibited the lowest polydispersity index. Thus, these two formulations were selected for further studies. The formulations presented positive ζ potential values (>25 mV) and genistein content close to 100% (at 1 mg/mL). The incorporation of genistein in nanoemulsions significantly increased the retention of this isoflavone in the epidermis and dermis, especially when the formulation composed of isopropyl myristate/DOPC/oleylamine was used. These results were supported by confocal images. Such formulations exhibited antiherpetic activity in vitro against herpes simplex virus 1 (strain KOS) and herpes simplex virus 2 (strain 333). Taken together, the results show that the genistein-loaded nanoemulsions developed in this study are promising options in herpes treatment. PMID:25336951
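
    A 2³ full factorial design of this kind enumerates all eight combinations of the three two-level factors. The factor names below are taken from the abstract; the enumeration itself is a generic sketch, not the authors' analysis code.

        # Enumerate the eight runs of a 2^3 full factorial design over the
        # three formulation factors named in the abstract.
        from itertools import product

        factors = {
            "oil": ["isopropyl myristate", "castor oil"],
            "phospholipid": ["DSPC", "DOPC"],
            "cosurfactant": ["oleic acid", "oleylamine"],
        }
        for i, run in enumerate(product(*factors.values()), 1):
            print(f"run {i}: " + ", ".join(run))   # 2**3 = 8 formulations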

  16. Current experience with applying the GRADE approach to public health interventions: an empirical study

    PubMed Central

    2013-01-01

    Background The Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach has been adopted by many national and international organisations as a systematic and transparent framework for evidence-based guideline development. With reference to an ongoing debate in the literature and within public health organisations, this study reviews current experience with the GRADE approach in rating the quality of evidence in the field of public health and identifies challenges encountered. Methods We conducted semi-structured interviews with individuals/groups that have applied the GRADE approach in the context of systematic reviews or guidelines in the field of public health, as well as with representatives of groups or organisations that actively decided against its use. We initially contacted potential participants by email. Responses were obtained by telephone interview or email, and written interview summaries were validated with participants. We analysed data across individual interviews to distil common themes and challenges. Results Based on 25 responses, we undertook 18 interviews and obtained 15 in-depth responses relating to specific systematic reviews or guideline projects; a majority of the latter were contributed by groups within the World Health Organization. All respondents that have used the GRADE approach appreciated the systematic and transparent process of assessing the quality of the evidence. However, respondents reported a range of minor and major challenges relating to complexity of public health interventions, choice of outcomes and outcome measures, ability to discriminate between different types of observational studies, use of non-epidemiological evidence, GRADE terminology and the GRADE and guideline development process. Respondents’ suggestions to make the approach more applicable to public health interventions included revisiting terminology, offering better guidance on how to apply GRADE to complex interventions and

  17. On the preventive management of sediment-related sewer blockages: a combined maintenance and routing optimization approach.

    PubMed

    Fontecha, John E; Akhavan-Tabatabaei, Raha; Duque, Daniel; Medaglia, Andrés L; Torres, María N; Rodríguez, Juan Pablo

    2016-01-01

    In this work we tackle the problem of planning and scheduling preventive maintenance (PM) of sediment-related sewer blockages in a set of geographically distributed sites that are subject to non-deterministic failures. To solve the problem, we extend a combined maintenance and routing (CMR) optimization approach, a procedure based on two components: (a) a maintenance model is used to determine the optimal time to perform PM operations for each site, and (b) a mixed integer program-based split procedure is proposed to route a set of crews (e.g., sewer cleaners, vehicles equipped with winches or rods, and dump trucks) in order to perform PM operations at a near-optimal minimum expected cost. We applied the proposed CMR optimization approach to two (out of five) operative zones in the city of Bogotá (Colombia), where more than 100 maintenance operations per zone must be scheduled on a weekly basis. Comparing the CMR against the current maintenance plan, we obtained more than 50% cost savings in 90% of the sites. PMID:27438233

  18. A practical approach for applying best practices in behavioural interventions to injury prevention

    PubMed Central

    Jacobsohn, Lela

    2010-01-01

    Behavioural science when combined with engineering, epidemiology and other disciplines creates a full picture of the often fragmented injury puzzle and informs comprehensive solutions. To assist efforts to include behavioural science in injury prevention strategies, this paper presents a methodological tutorial that aims to introduce best practices in behavioural intervention development and testing to injury professionals new to behavioural science. This tutorial attempts to bridge research to practice through the presentation of a practical, systematic, six-step approach that borrows from established frameworks in health promotion and disease prevention. Central to the approach is the creation of a programme theory that links a theoretically grounded, empirically tested behaviour change model to intervention components and their evaluation. Serving as a compass, a programme theory allows for systematic focusing of resources on the likely most potent behavioural intervention components and directs evaluation of intervention impact and implementation. For illustration, the six-step approach is applied to the creation of a new peer-to-peer campaign, Ride Like a Friend/Drive Like You Care, to promote safe teen driver and passenger behaviours. PMID:20363817

  19. Fixed structure compensator design using a constrained hybrid evolutionary optimization approach.

    PubMed

    Ghosh, Subhojit; Samanta, Susovon

    2014-07-01

    This paper presents an efficient technique for designing a fixed-order compensator for the current-mode control architecture of DC-DC converters. The compensator design is formulated as an optimization problem that seeks to attain a set of frequency-domain specifications. The highly nonlinear nature of the optimization problem demands a global search technique that is independent of the initial parameterization. In this regard, the optimization problem is solved using a hybrid evolutionary optimization approach, chosen for its simple structure, faster execution time, and greater probability of achieving the global solution. The proposed algorithm combines a population search based optimization approach, Particle Swarm Optimization (PSO), with a local search based method. The op-amp dynamics have been incorporated during the design process. Considering the limitations of a fixed structure compensator in achieving loop bandwidth higher than a certain threshold, the proposed approach also determines the op-amp bandwidth that would be able to achieve it. The effectiveness of the proposed approach in meeting the desired frequency-domain specifications is experimentally tested on a peak current-mode controlled DC-DC buck converter. PMID:24768082
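
    The hybrid idea above, a global particle-swarm pass followed by a local polish, can be sketched generically. The cost function below is a placeholder; in the paper it would encode frequency-domain specifications (bandwidth, margins) of the compensated loop, which are not reproduced here.

        # Generic sketch of a PSO global search followed by a local search
        # stage. `cost` is a hypothetical placeholder for the paper's
        # frequency-domain penalty on the compensator parameters.
        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(1)

        def cost(p):
            return float(np.sum((p - np.array([2.0, -1.0, 0.5])) ** 2))

        DIM, N, ITERS = 3, 30, 100
        x = rng.uniform(-5, 5, (N, DIM)); v = np.zeros((N, DIM))
        pbest, pval = x.copy(), np.array([cost(p) for p in x])
        gbest = pbest[pval.argmin()]

        for _ in range(ITERS):
            r1, r2 = rng.random((2, N, DIM))
            v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
            x = x + v
            val = np.array([cost(p) for p in x])
            improved = val < pval
            pbest[improved], pval[improved] = x[improved], val[improved]
            gbest = pbest[pval.argmin()]

        polished = minimize(cost, gbest, method="Nelder-Mead").x  # local stage
        print(polished)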

  20. Fast engineering optimization: A novel highly effective control parameterization approach for industrial dynamic processes.

    PubMed

    Liu, Ping; Li, Guodong; Liu, Xinggao

    2015-09-01

    Control vector parameterization (CVP) is an important approach to engineering optimization for industrial dynamic processes. However, its major defect, the low optimization efficiency caused by repeatedly solving the relevant differential equations in the generated nonlinear programming (NLP) problem, limits its wide application. A novel, highly effective control parameterization approach, fast-CVP, is proposed to improve the optimization efficiency for industrial dynamic processes; it employs costate gradient formulae and a fast approximate scheme for solving the differential equations in dynamic process simulation. Three well-known engineering optimization benchmark problems for industrial dynamic processes are demonstrated as illustrations. The research results show that the proposed fast approach achieves fine performance: at least 90% of the computation time can be saved in contrast to the traditional CVP method, which reveals the effectiveness of the proposed fast engineering optimization approach for industrial dynamic processes. PMID:26117286
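
    The CVP idea itself, replacing the continuous control u(t) by a finite set of parameters (here, piecewise-constant levels) and optimizing them through repeated ODE solves, can be shown on a toy problem. This sketch is not one of the paper's benchmarks and omits the fast approximation scheme; it only illustrates the parameterization.

        # Control vector parameterization on a toy problem: minimize
        # x(T)^2 + integral of u^2 for dx/dt = -x + u, by optimizing
        # piecewise-constant control levels u_1..u_K.
        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import minimize

        T, K = 2.0, 6                       # horizon and number of segments
        edges = np.linspace(0.0, T, K + 1)

        def objective(u_levels):
            def rhs(t, y):
                seg = min(np.searchsorted(edges, t, side="right") - 1, K - 1)
                u = u_levels[seg]
                return [-y[0] + u, u ** 2]  # state and running cost
            sol = solve_ivp(rhs, (0.0, T), [1.0, 0.0], rtol=1e-8)
            xT, running_cost = sol.y[:, -1]
            return xT ** 2 + running_cost

        res = minimize(objective, np.zeros(K), method="Nelder-Mead")
        print("optimal control levels:", np.round(res.x, 3))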

  1. An Informatics Approach to Demand Response Optimization in Smart Grids

    SciTech Connect

    Simmhan, Yogesh; Aman, Saima; Cao, Baohua; Giakkoupis, Mike; Kumbhare, Alok; Zhou, Qunzhi; Paul, Donald; Fern, Carol; Sharma, Aditya; Prasanna, Viktor K

    2011-03-03

    Power utilities are increasingly rolling out “smart” grids with the ability to track consumer power usage in near real-time using smart meters that enable bidirectional communication. However, the true value of smart grids is unlocked only when the veritable explosion of data that will become available is ingested, processed, analyzed and translated into meaningful decisions. These include the ability to forecast electricity demand, respond to peak load events, and improve sustainable use of energy by consumers, and are made possible by energy informatics. Information and software system techniques for a smarter power grid include pattern mining and machine learning over complex events and integrated semantic information, distributed stream processing for low-latency response, cloud platforms for scalable operations, and privacy policies to mitigate information leakage in an information-rich environment. Such an informatics approach is being used in the DoE-sponsored Los Angeles Smart Grid Demonstration Project, and the resulting software architecture will lead to an agile and adaptive Los Angeles Smart Grid.

  2. Applying operations research to optimize a novel population management system for cancer screening

    PubMed Central

    Zai, Adrian H; Kim, Seokjin; Kamis, Arnold; Hung, Ken; Ronquillo, Jeremiah G; Chueh, Henry C; Atlas, Steven J

    2014-01-01

    Objective To optimize a new visit-independent, population-based cancer screening system (TopCare) by using operations research techniques to simulate changes in patient outreach staffing levels (delegates, navigators), modifications to user workflow within the information technology (IT) system, and changes in cancer screening recommendations. Materials and methods TopCare was modeled as a multiserver, multiphase queueing system. Simulation experiments implemented the queueing network model following a next-event time-advance mechanism, in which systematic adjustments were made to staffing levels, IT workflow settings, and cancer screening frequency in order to assess their impact on overdue screenings per patient. Results TopCare reduced the average number of overdue screenings per patient from 1.17 at inception to 0.86 during simulation to 0.23 at steady state. Increases in the workforce improved the effectiveness of TopCare. In particular, increasing the delegate or navigator staff level by one person improved screening completion rates by 1.3% or 12.2%, respectively. In contrast, changes in the amount of time a patient entry stays on delegate and navigator lists had little impact on overdue screenings. Finally, lengthening the screening interval increased efficiency within TopCare by decreasing overdue screenings at the patient level, resulting in a smaller number of overdue patients needing delegates for screening and a higher fraction of screenings completed by delegates. Conclusions Simulating the impact of changes in staffing, system parameters, and clinical inputs on the effectiveness and efficiency of care can inform the allocation of limited resources in population management. PMID:24043318
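
    The simulation engine described above is a next-event time-advance loop over a multiserver queueing network: keep a time-ordered event list, repeatedly pop the earliest event, update system state, and schedule successor events. The sketch below is a minimal single-server illustration of that mechanism, not the TopCare model; the arrival and service rates are arbitrary placeholders.

        # Minimal next-event time-advance simulation: one server, Poisson
        # arrivals, exponential service. Illustrates the mechanism only.
        import heapq, random

        random.seed(0)
        t_end, rate_in, rate_srv = 1000.0, 1.0, 1.2
        events = [(random.expovariate(rate_in), "arrival")]
        clock, queue, busy, served = 0.0, 0, False, 0

        while events and clock < t_end:
            clock, kind = heapq.heappop(events)
            if kind == "arrival":
                heapq.heappush(events, (clock + random.expovariate(rate_in), "arrival"))
                if busy:
                    queue += 1
                else:
                    busy = True
                    heapq.heappush(events, (clock + random.expovariate(rate_srv), "departure"))
            else:  # departure
                served += 1
                if queue:
                    queue -= 1
                    heapq.heappush(events, (clock + random.expovariate(rate_srv), "departure"))
                else:
                    busy = False

        print(f"served {served} customers in {clock:.0f} time units")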

  3. An expert system approach to the optimal design of single-junction and multijunction tandem solar cells

    SciTech Connect

    Yeh, C.S.

    1988-01-01

    The use of an expert system approach for the optimal design of single-junction and multijunction solar cells is a potential new design tool in photovoltaics. This study presents results of a comprehensive investigation of this new design method. To facilitate the realistic optimal design of two-terminal monolithic single-junction and multijunction tandem solar cells, a rule-based system was established by adopting the experimental data and/or semi-empirical formulae used today for the design parameters. A numerical simulation based on displacement damage theory was carried out to study the degradation of AlGaAs/GaAs solar cells after proton or electron irradiation. The damage constant of the minority-carrier diffusion length, an important design parameter of a solar cell for space applications, was calculated. An efficient Box complex optimization technique with minor modifications is analyzed and applied to accelerate the convergence rate of the algorithm. Design rules were implemented in order to reduce the search space of the optimal design and to make a compromise in the tradeoff between conflicting selection criteria. Realistic optimal designs of these solar cells were obtained from the expert system, verified, and then compared with state-of-the-art technology.

  4. A multiobjective ant colony optimization approach for scheduling environmental flow management alternatives with application to the River Murray, Australia

    NASA Astrophysics Data System (ADS)

    Szemis, J. M.; Dandy, G. C.; Maier, H. R.

    2013-10-01

    In regulated river systems, such as the River Murray in Australia, the efficient use of water to preserve and restore biota in the river, wetlands, and floodplains is of concern for water managers. Available management options include the timing of river flow releases and operation of wetland flow control structures. However, the optimal scheduling of these environmental flow management alternatives is a difficult task, since there are generally multiple wetlands and floodplains with a range of species, as well as a large number of management options that need to be considered. Consequently, this problem is a multiobjective optimization problem aimed at maximizing ecological benefit while minimizing water allocations within the infrastructure constraints of the system under consideration. This paper presents a multiobjective optimization framework, which is based on a multiobjective ant colony optimization approach, for developing optimal trade-offs between water allocation and ecological benefit. The framework is applied to a reach of the River Murray in South Australia. Two studies are formulated to assess the impact of (i) upstream system flow constraints and (ii) additional regulators on this trade-off. The results indicate that unless the system flow constraints are relaxed, there is limited additional ecological benefit as allocation increases. Furthermore, the use of regulators can increase ecological benefits while using less water. The results illustrate the utility of the framework since the impact of flow control infrastructure on the trade-offs between water allocation and ecological benefit can be investigated, thereby providing valuable insight to managers.

  5. Geometry Control System for Exploratory Shape Optimization Applied to High-Fidelity Aerodynamic Design of Unconventional Aircraft

    NASA Astrophysics Data System (ADS)

    Gagnon, Hugo

    This thesis represents a step forward to bring geometry parameterization and control on par with the disciplinary analyses involved in shape optimization, particularly high-fidelity aerodynamic shape optimization. Central to the proposed methodology is the non-uniform rational B-spline, used here to develop a new geometry generator and geometry control system applicable to the aerodynamic design of both conventional and unconventional aircraft. The geometry generator adopts a component-based approach, where any number of predefined but modifiable (parametric) wing, fuselage, junction, etc., components can be arbitrarily assembled to generate the outer mold line of aircraft geometry. A unique Python-based user interface incorporating an interactive OpenGL windowing system is proposed. Together, these tools allow for the generation of high-quality, C2 continuous (or higher), and customized aircraft geometry with fast turnaround. The geometry control system tightly integrates shape parameterization with volume mesh movement using a two-level free-form deformation approach. The framework is augmented with axial curves, which are shown to be flexible and efficient at parameterizing wing systems of arbitrary topology. A key aspect of this methodology is that very large shape deformations can be achieved with only a few, intuitive control parameters. Shape deformation consumes a few tenths of a second on a single processor and surface sensitivities are machine accurate. The geometry control system is implemented within an existing aerodynamic optimizer comprising a flow solver for the Euler equations and a sequential quadratic programming optimizer. Gradients are evaluated exactly with discrete-adjoint variables. The algorithm is first validated by recovering an elliptical lift distribution on a rectangular wing, and then demonstrated through the exploratory shape optimization of a three-pronged feathered winglet leading to a span efficiency of 1.22 under a height

  6. A sensory- and consumer-based approach to optimize cheese enrichment with grape skin powders.

    PubMed

    Torri, L; Piochi, M; Marchiani, R; Zeppa, G; Dinnella, C; Monteleone, E

    2016-01-01

    The present study aimed to develop a sensory- and consumer-based approach to optimize cheese enrichment with grape skin powders (GSP). A combined sensory evaluation approach, involving a descriptive test and an affective test, was applied to evaluate the effect of the addition of grape skin powders from 2 grape varieties (Barbera and Chardonnay) at different levels [0.8, 1.6, and 2.4%; weight (wt) powder/wt curd] on the sensory properties and consumer acceptability of innovative soft cow milk cheeses. The experimental plan comprised 7 products: 6 fortified prototypes (Barbera and Chardonnay at rates of 0.8, 1.6, and 2.4%) and a control sample, with 1 wk of ripening. By means of a free choice profile, 21 cheese experts described the sensory properties of the prototypes. A central location test with 90 consumers was subsequently conducted to assess the acceptability of the samples. The GSP enrichment strongly affected the sensory properties of the innovative products, mainly in terms of appearance and texture. Fortified samples were typically described as having a marbled aspect (violet or brown as a function of the grape variety) and increased granularity, sourness, saltiness, and astringency. The fortification also contributed certain vegetable sensations perceived at low intensity (grassy, cereal, nuts) and some potentially negative sensations (earthy, animal, winy, varnish). The white color, homogeneous dough, compact and elastic texture, and presence of lactic flavors were the positive drivers of preference. On the contrary, the marbled aspect, granularity, sandiness, sourness, saltiness, and astringency negatively affected cheese acceptability for amounts of powder exceeding 0.8 and 1.6% for the Barbera and Chardonnay prototypes, respectively. Therefore, the amount of powder proved to be a critical parameter for liking of the fortified cheeses and a discriminant between the 2 varieties. Reducing the GSP particle size and improving the GSP

  7. Wind Tunnel Management and Resource Optimization: A Systems Modeling Approach

    NASA Technical Reports Server (NTRS)

    Jacobs, Derya A.; Aasen, Curtis A.

    2000-01-01

    Time, money, and personnel are becoming increasingly scarce resources within government agencies due to a reduction in funding and the desire to demonstrate responsible economic efficiency. The ability of an organization to plan and schedule resources effectively can provide the necessary leverage to improve productivity, provide continuous support to all projects, and ensure flexibility in a rapidly changing environment. Without adequate internal controls, the organization is forced to rely on external support, waste precious resources, and risk an inefficient response to change. Management systems must be developed and applied that strive to maximize the utility of existing resources in order to achieve the goal of "faster, cheaper, better". An area of concern within NASA Langley Research Center was the scheduling, planning, and resource management of the Wind Tunnel Enterprise operations. Nine wind tunnels make up the Enterprise. Prior to this research, these wind tunnel groups did not employ a rigorous or standardized management planning system. In addition, each wind tunnel unit operated from a position of autonomy, with little coordination of clients, resources, or project control. For operating and planning purposes, each wind tunnel operating unit must balance inputs from a variety of sources. Although each unit is managed by individual Facility Operations groups, other stakeholders influence wind tunnel operations. These groups include, for example, the various researchers and clients who use the facility, the Facility System Engineering Division (FSED) tasked with wind tunnel repair and upgrade, the Langley Research Center (LaRC) Fabrication (FAB) group which fabricates repair parts and provides test model upkeep, the NASA and LaRC Strategic Plans, and unscheduled use of the facilities by important clients. Expanding these influences horizontally through nine wind tunnel operations and vertically along the NASA management structure greatly increases the

  8. A decomposition approach for optimal management of groundwater resources and irrigated agriculture in arid coastal regions

    NASA Astrophysics Data System (ADS)

    Grundmann, Jens; Schütze, Niels; Heck, Vera

    2013-04-01

    To ensure optimal sustainable water resources management in arid coastal environments, we develop a new simulation-based integrated water management system. It aims at achieving the best possible solutions for groundwater withdrawals for agricultural and municipal water use, including saline water management, together with a substantial increase in the water use efficiency of irrigated agriculture. To achieve a robust and fast operation of the management system, it unites process modelling with artificial intelligence tools and evolutionary optimisation techniques for managing both the water quality and the water quantity of a strongly coupled groundwater-agriculture system. However, such systems are characterized by a large number of decision variables if abstraction schemes, cropping patterns and cultivated acreages are optimised simultaneously for multiple years. Therefore, we apply the principle of decomposition to separate the original large optimisation problem into smaller, independent optimisation problems which finally allow for a faster and more reliable solution. First, within an inner optimisation loop, cropping patterns and cultivated acreages are optimised to achieve the most profitable agricultural production for a given amount of water. Thereby, the behaviour of farms is described by crop-water production functions which can be derived analytically. Second, within an outer optimisation loop, a simulation-based optimisation is performed to find optimal groundwater abstraction patterns by coupling an evolutionary optimisation algorithm with an artificial neural network for modelling the aquifer response, including the seawater interface. We demonstrate the decomposition approach with an exemplary application to the south Batinah region in the Sultanate of Oman, which is affected by saltwater intrusion into a coastal aquifer system due to excessive groundwater withdrawal for irrigated agriculture. We show the effectiveness of our methodology for the evaluation
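
    The decomposition turns one large problem into a nested pair: an inner problem allocating a given water volume across crops (solvable in closed form for suitable crop-water production functions) and an outer search over abstraction volumes. The toy sketch below illustrates that structure only; the square-root profit functions and cost values are assumptions for illustration, not the paper's functions.

        # Toy bi-level structure: outer search over total abstraction Q, inner
        # analytic allocation of Q across crops with assumed concave profit
        # functions profit_i(w) = a_i * sqrt(w).
        import numpy as np

        a = np.array([3.0, 2.0, 1.5])    # crop-specific profit coefficients
        pump_cost = 0.15                 # cost per unit of groundwater abstracted

        def inner_profit(Q):
            # With profit_i = a_i*sqrt(w_i), equating marginal profits gives
            # the closed-form optimal split w_i = Q * a_i^2 / sum(a^2).
            w = Q * a**2 / np.sum(a**2)
            return float(np.sum(a * np.sqrt(w)))

        Qs = np.linspace(0.0, 50.0, 501)  # outer loop: scan abstraction volumes
        net = [inner_profit(Q) - pump_cost * Q for Q in Qs]
        Q_best = Qs[int(np.argmax(net))]
        print(f"best abstraction: {Q_best:.1f}, net benefit: {max(net):.2f}")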

  9. Applying the Principles of Systems Engineering and Project Management to Optimize Scientific Research

    NASA Astrophysics Data System (ADS)

    Peterkin, Adria J.

    2016-01-01

    Systems Engineering is an interdisciplinary practice that analyzes different facets of a proposed area in order to develop and design an efficient system guided by the principles and constraints of the science community. When working within an institution on quantitative and analytical scientific theory, it is important to make sure that all parts of a system correlate in a structured and systematic manner, so that areas of intricacy are prevented or quickly deduced. My research focused on interpreting and implementing Systems Engineering techniques in the construction, integration and operation of a NASA Radio Jove Kit to observe Jupiter radio emissions. Jupiter's emissions occur at very low frequencies, so the telescope had to be able to receive below 39.5 MHz. The projected outcome was to receive long L-burst and short S-burst signals; however, during the time of observation Jupiter was in conjunction with the Sun. We therefore connected the receiver built from the NASA Radio Jove Kit to the Karl Jansky telescope in an effort to listen to solar flares as well; nonetheless, we were unable to identify these signals and ultimately determined that they were noise. The overall project was a success in that we were able to apply and comprehend the principles of Systems Engineering to facilitate the build.

  10. An approach to the multi-axis problem in manual control. [optimal pilot model]

    NASA Technical Reports Server (NTRS)

    Harrington, W. W.

    1977-01-01

    The multiaxis control problem is addressed within the context of the optimal pilot model. The problem is developed to provide efficient adaptation of the optimal pilot model to complex aircraft systems and real world, multiaxis tasks. This is accomplished by establishing separability of the longitudinal and lateral control problems subject to the constraints of multiaxis attention and control allocation. Control solution adaptation to the constrained single axis attention allocations is provided by an optimal control frequency response algorithm. An algorithm is developed to solve the multiaxis control problem. The algorithm is then applied to an attitude hold task for a bare airframe fighter aircraft case with interesting multiaxis properties.

  11. A graph-based ant colony optimization approach for process planning.

    PubMed

    Wang, JinFeng; Fan, XiaoLiang; Wan, Shuting

    2014-01-01

    The complex process planning problem is modeled as a combinatorial optimization problem with constraints in this paper. An ant colony optimization (ACO) approach has been developed to deal with the process planning problem by simultaneously considering activities such as sequencing operations, selecting manufacturing resources, and determining setup plans to achieve the optimal process plan. A weighted directed graph is constructed to describe the operations, the precedence constraints between operations, and the possible paths between operation nodes. A representation of a process plan is described based on the weighted directed graph. The ant colony traverses the necessary nodes on the graph to achieve the optimal solution with the objective of minimizing total production costs (TPC). Two cases have been carried out to study the influence of various ACO parameters on system performance. Extensive comparative experiments have been conducted to demonstrate the feasibility and efficiency of the proposed approach. PMID:24995355
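
    In ACO over a weighted directed graph of this kind, each ant builds a node sequence step by step, choosing successors with probability proportional to pheromone^α times heuristic^β, while pheromone evaporates and is reinforced along the best tours. The compact sketch below uses a small random cost matrix and illustrative parameter values; it is not the paper's implementation, where nodes would be machining operations and arc weights production costs.

        # Compact ant colony optimization over a small complete directed graph.
        import numpy as np

        rng = np.random.default_rng(2)
        N, ANTS, ITERS, ALPHA, BETA, RHO = 8, 20, 50, 1.0, 2.0, 0.5
        cost = rng.uniform(1.0, 10.0, (N, N)); np.fill_diagonal(cost, np.inf)
        tau = np.ones((N, N))                      # pheromone matrix

        best_tour, best_cost = None, np.inf
        for _ in range(ITERS):
            for _ in range(ANTS):
                node, tour = 0, [0]
                while len(tour) < N:
                    mask = np.ones(N, bool); mask[tour] = False
                    w = (tau[node] ** ALPHA) * (1.0 / cost[node]) ** BETA * mask
                    node = rng.choice(N, p=w / w.sum())
                    tour.append(node)
                c = sum(cost[a, b] for a, b in zip(tour, tour[1:]))
                if c < best_cost:
                    best_tour, best_cost = tour, c
            tau *= 1.0 - RHO                       # evaporation
            for a, b in zip(best_tour, best_tour[1:]):
                tau[a, b] += 1.0 / best_cost       # reinforce the best tour

        print(best_tour, round(best_cost, 2))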

  12. Medium optimization of protease production by Brevibacterium linens DSM 20158, using statistical approach

    PubMed Central

    Shabbiri, Khadija; Adnan, Ahmad; Jamil, Sania; Ahmad, Waqar; Noor, Bushra; Rafique, H.M.

    2012-01-01

    Various cultivation parameters were optimized for the production of extracellular protease by Brevibacterium linens DSM 20158 grown under solid state fermentation conditions using a statistical approach. The cultivation variables were screened by the Plackett–Burman design, and four significant variables (soybean meal, wheat bran, (NH4)2SO4, and inoculum size) were further optimized via central composite design (CCD) using a response surface methodological approach. Using the optimal factors (soybean meal 12.0 g, wheat bran 8.50 g, (NH4)2SO4 0.45 g, and inoculum size 3.50%), the rate of protease production was found to be twofold higher in the optimized medium as compared to the unoptimized reference medium. PMID:24031928
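
    Response surface methodology of this kind fits a second-order polynomial to responses measured at the CCD points and then locates the optimum of the fitted surface. The two-factor sketch below uses synthetic data, not the protease measurements, and is only a generic illustration of that fit.

        # Fit a second-order response surface
        #   y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
        # to central-composite-design points (synthetic response values).
        import numpy as np

        ax = np.sqrt(2.0)   # axial distance for a two-factor CCD, coded units
        X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
                      [-ax, 0], [ax, 0], [0, -ax], [0, ax],
                      [0, 0], [0, 0], [0, 0]])
        rng = np.random.default_rng(3)
        y = (5 + 1.2 * X[:, 0] + 0.8 * X[:, 1] - 0.9 * X[:, 0] ** 2
             - 0.6 * X[:, 1] ** 2 + 0.3 * X[:, 0] * X[:, 1]
             + rng.normal(0, 0.05, len(X)))        # synthetic measurements

        A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                             X[:, 0] * X[:, 1], X[:, 0] ** 2, X[:, 1] ** 2])
        b = np.linalg.lstsq(A, y, rcond=None)[0]

        # Stationary point of the fitted quadratic: solve grad y = 0.
        H = np.array([[2 * b[4], b[3]], [b[3], 2 * b[5]]])
        x_star = np.linalg.solve(H, -b[1:3])
        print("coded optimum:", np.round(x_star, 3))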

  13. A Graph-Based Ant Colony Optimization Approach for Process Planning

    PubMed Central

    Wang, JinFeng; Fan, XiaoLiang; Wan, Shuting

    2014-01-01

    The complex process planning problem is modeled as a combinatorial optimization problem with constraints in this paper. An ant colony optimization (ACO) approach has been developed to deal with the process planning problem by simultaneously considering activities such as sequencing operations, selecting manufacturing resources, and determining setup plans to achieve the optimal process plan. A weighted directed graph is constructed to describe the operations, the precedence constraints between operations, and the possible paths between operation nodes. A representation of a process plan is described based on the weighted directed graph. The ant colony traverses the necessary nodes on the graph to achieve the optimal solution with the objective of minimizing total production costs (TPC). Two cases have been carried out to study the influence of various ACO parameters on system performance. Extensive comparative experiments have been conducted to demonstrate the feasibility and efficiency of the proposed approach. PMID:24995355

  14. Push-through direct injection NMR: an optimized automation method applied to metabolomics.

    PubMed

    Teng, Quincy; Ekman, Drew R; Huang, Wenlin; Collette, Timothy W

    2012-05-01

    There is a pressing need to increase the throughput of NMR analysis in fields such as metabolomics and drug discovery. Direct injection (DI) NMR automation is recognized to have the potential to meet this need due to its suitability for integration with the 96-well plate format. However, DI NMR has not been widely used as a result of some insurmountable technical problems; namely: carryover contamination, sample diffusion (causing reduction of spectral sensitivity), and line broadening caused by entrapped air bubbles. Several variants of DI NMR, such as flow injection analysis (FIA) and microflow NMR, have been proposed to address one or more of these issues, but not all of them. The push-through direct injection technique reported here overcomes all of these problems. The method recovers samples after NMR analysis, uses a "brush-wash" routine to eliminate carryover, includes a procedure to push wash solvent out of the flow cell via the outlet to prevent sample diffusion, and employs an injection valve to avoid air bubbles. Herein, we demonstrate the robustness, efficiency, and lack of carryover characteristics of this new method, which is ideally suited for relatively high throughput analysis of the complex biological tissue extracts used in metabolomics, as well as many other sample types. While simple in concept and setup, this new method provides a substantial improvement over current approaches. PMID:22434060

  15. Crossover versus Mutation: A Comparative Analysis of the Evolutionary Strategy of Genetic Algorithms Applied to Combinatorial Optimization Problems

    PubMed Central

    Osaba, E.; Carballedo, R.; Diaz, F.; Onieva, E.; de la Iglesia, I.; Perallos, A.

    2014-01-01

    Since their first formulation, genetic algorithms (GAs) have been one of the most widely used techniques to solve combinatorial optimization problems. The basic structure of GAs is well known to the scientific community, and thanks to their easy application and good performance, GAs are the focus of a lot of research work annually. Although throughout history there have been many studies analyzing various concepts of GAs, few studies in the literature objectively analyze the influence of using blind crossover operators for combinatorial optimization problems. For this reason, this paper conducts an in-depth study of their influence. The study is based on a comparison of nine techniques applied to four well-known combinatorial optimization problems. Six of the techniques are GAs with different configurations, and the remaining three are evolutionary algorithms that focus exclusively on the mutation process. Finally, to perform a reliable comparison of these results, a statistical study of them is made, performing the normal distribution z-test. PMID:25165731

  16. Optimal Flight for Ground Noise Reduction in Helicopter’s Landing Approach

    NASA Astrophysics Data System (ADS)

    Tsuchiya, Takeshi; Ishii, Hirokazu; Uchida, Junichi; Gomi, Hiromi; Matayoshi, Naoki; Okuno, Yoshinori

    This study aims to obtain optimal flights of a helicopter that reduce ground noise in its landing approach using an optimization technique, and to conduct flight tests to confirm the effectiveness of the optimal solutions. Past experiments of JAXA (Japan Aerospace Exploration Agency) show that the noise of the helicopter varies significantly according to its flight conditions, especially depending on the flight path angle. We therefore build a simple noise model of the helicopter, in which the level of the noise generated from a point sound source is a function only of the flight path angle. By using equations of motion for flight in a vertical plane, we define optimal control problems for minimizing noise levels measured at points on the ground surface, and obtain optimal controls for specified initial altitudes, flight constraints, and wind conditions. The obtained optimal flights avoid the flight path angles that generate large noise and decrease the flight time, differing from the conventional flight. Finally, we verify the validity of the optimal flight patterns by flight experiments. The actual flights following the optimal ones also result in noise reduction, which shows the effectiveness of the optimization.

  17. A new dynamic approach for statistical optimization of GNSS radio occultation bending angles for optimal climate monitoring utility

    NASA Astrophysics Data System (ADS)

    Li, Y.; Kirchengast, G.; Scherllin-Pirscher, B.; Wu, S.; Schwaerz, M.; Fritzer, J.; Zhang, S.; Carter, B. A.; Zhang, K.

    2013-12-01

    Global Navigation Satellite System (GNSS)-based radio occultation (RO) is a satellite remote sensing technique providing accurate profiles of the Earth's atmosphere for weather and climate applications. Above about 30 km altitude, however, statistical optimization is a critical process for initializing the RO bending angles in order to optimize the climate monitoring utility of the retrieved atmospheric profiles. Here we introduce an advanced dynamic statistical optimization algorithm, which uses bending angles from multiple days of European Centre for Medium-Range Weather Forecasts (ECMWF) short-range forecast and analysis fields, together with averaged observed bending angles, to obtain background profiles and associated error covariance matrices with geographically varying background uncertainty estimates on a daily updated basis. The new algorithm is evaluated against the existing Wegener Center Occultation Processing System version 5.4 (OPSv5.4) algorithm, using several days of simulated MetOp and observed CHAMP and COSMIC data, for January and July conditions. We find the following for the new method's performance compared to OPSv5.4: (1) it significantly reduces random errors (standard deviations), down to about half their size, and leaves smaller or about equal residual systematic errors (biases) in the optimized bending angles; (2) the dynamic (daily) estimate of the background error correlation matrix alone already improves the optimized bending angles; (3) the subsequently retrieved refractivity profiles and atmospheric (temperature) profiles benefit from improved error characteristics, especially above about 30 km. Based on these encouraging results, we are working to employ similar dynamic error covariance estimation also for the observed bending angles and to apply the method to full months and subsequently to entire climate data records.
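
    Statistical optimization of this kind combines an observed profile with a background profile, weighted by their error covariances; in the standard optimal-estimation form, x_opt = x_b + B(B + O)^{-1}(y - x_b), where B and O are the background and observation error covariance matrices. The sketch below is a small synthetic illustration of that combination, not the OPS algorithm itself; all profile and covariance shapes are assumptions.

        # Optimal linear combination of an observed bending-angle profile y and
        # a background profile x_b (synthetic 1-D example):
        #     x_opt = x_b + B (B + O)^{-1} (y - x_b)
        import numpy as np

        n = 50
        levels = np.linspace(30, 80, n)                 # altitude grid, km
        x_b = np.exp(-levels / 8.0)                     # background profile
        rng = np.random.default_rng(4)
        y = x_b * (1 + 0.02 * rng.standard_normal(n))   # noisy observation

        # Background errors correlated in altitude; observation errors grow
        # with altitude (both assumed forms, for illustration).
        dz = np.abs(levels[:, None] - levels[None, :])
        B = (0.05 * x_b)[:, None] * (0.05 * x_b)[None, :] * np.exp(-dz / 6.0)
        O = np.diag((0.02 * x_b * (levels / 30.0)) ** 2)

        K = B @ np.linalg.inv(B + O)                    # gain matrix
        x_opt = x_b + K @ (y - x_b)
        print("rms departure from background:", np.linalg.norm(x_opt - x_b))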

  18. The systems approach for applying artificial intelligence to space station automation (Invited Paper)

    NASA Astrophysics Data System (ADS)

    Grose, Vernon L.

    1985-12-01

    The progress of technology is marked by fragmentation -- dividing research and development into ever narrower fields of specialization. Ultimately, specialists know everything about nothing. And hope for integrating those slender slivers of specialty into a whole fades. Without an integrated, all-encompassing perspective, technology becomes applied in a lopsided and often inefficient manner. A decisionary model, developed and applied for NASA's Chief Engineer toward establishment of commercial space operations, can be adapted to the identification, evaluation, and selection of optimum application of artificial intelligence for space station automation -- restoring wholeness to a situation that is otherwise chaotic due to increasing subdivision of effort. Issues such as functional assignments for space station task, domain, and symptom modules can be resolved in a manner understood by all parties rather than just the person with assigned responsibility -- and ranked by overall significance to mission accomplishment. Ranking is based on the three basic parameters of cost, performance, and schedule. This approach has successfully integrated many diverse specialties in situations like worldwide terrorism control, coal mining safety, medical malpractice risk, grain elevator explosion prevention, offshore drilling hazards, and criminal justice resource allocation -- all of which would have otherwise been subject to "squeaky wheel" emphasis and support of decision-makers.

  19. A 'cheap' optimal control approach to estimate muscle forces in musculoskeletal systems.

    PubMed

    Menegaldo, Luciano Luporini; de Toledo Fleury, Agenor; Weber, Hans Ingo

    2006-01-01

    This paper presents a new method to estimate muscle forces in musculoskeletal systems, based on the inverse dynamics of a multi-body system combined with optimal control. The redundant actuator problem is solved by minimizing a time-integral cost function, augmented with a torque-tracking error function, and muscle dynamics is considered through differential constraints. The method is compared, on a previously implemented human posture control problem, to a Forward Dynamics Optimal Control approach and to classical static optimization with two different objective functions. The new method provides very similar muscle force patterns to the forward dynamics solution, but the computational cost is much smaller and the numerical robustness is increased. The results achieved suggest that this method is more accurate for muscle force prediction than static optimization, and can be used as a numerically 'cheap' alternative to forward dynamics and optimal control in some applications. PMID:16033695

  20. Mathematic simulation of soil-vegetation condition and land use structure applying basin approach

    NASA Astrophysics Data System (ADS)

    Mishchenko, Natalia; Shirkin, Leonid; Krasnoshchekov, Alexey

    2016-04-01

    Anthropogenic transformation of ecosystems is basically connected to changes in land use structure and human impact on soil fertility. The research objective is to simulate the stationary state of river basin ecosystems. Materials and Methods. The basin approach has been applied in the research. The basins of small rivers within the Klyazma River catchment, situated in the central part of the Russian Plain, have been chosen as research objects. The analysis is carried out using integrated characteristics of ecosystem functioning and mathematical simulation methods. To design the mathematical simulator, functional simulation methods and principles based on regression, correlation and factor analysis have been applied. Results. Mathematical simulation resulted in defining the possible stationary conditions of the "phytocenosis-soil" system in coordinates of phytomass, phytoproductivity and soil humus content. Ecosystem productivity is determined not only by vegetation photosynthesis activity but also by the area ratio of forest and meadow phytocenoses. Local maxima attached to certain phytomass values and soil humus contents have been identified on the basin phytoproductivity distribution diagram. We explain the local maxima by a synergetic effect that appears at a definite ratio of forest to meadow phytocenoses: in this case, the maximum phytomass for the whole area is higher than the sum of the maximum phytomass values of the forest and meadow phytocenoses taken separately. An efficient ratio of natural forest and meadow phytocenoses has been defined for the Klyazma basin. Conclusion. Mathematical simulation methods assist in forecasting ecosystem conditions under various changes in land use structure. Nowadays, overgrowing of abandoned agricultural lands is highly relevant for the Russian Federation. Simulation results demonstrate that the natural ratio of forest and meadow phytocenoses will be restored as these lands are overgrown.

  1. Optimization of magnetosome production by Acidithiobacillus ferrooxidans using desirability function approach.

    PubMed

    Yan, Lei; Zhang, Shuang; Liu, Hetao; Wang, Weidong; Chen, Peng; Li, Hongyu

    2016-02-01

    The present study aimed to resolve the conflict between cell growth and magnetosome formation in Acidithiobacillus ferrooxidans (A. ferrooxidans) batch experiments by applying response surface methodology (RSM) integrated with a desirability function approach. The effects of several operating parameters on cell growth (OD600) and magnetosome production (Cmag) were evaluated. The maximum overall desirability (D) of 0.923 was achieved at an iron concentration of 125.07 mM, a shake speed of 122.37 rpm and a nitrogen concentration of 2.40 g/L. Correspondingly, the OD600 and Cmag were 0.522 and 1.196, respectively. A confirmation experiment verified that the optimum OD600 and Cmag obtained were in good agreement with the predicted values. Inductively coupled plasma atomic emission spectrometry (ICP-AES) and transmission electron microscopy (TEM) analyses revealed that the production of magnetosomes could be improved via optimization. X-ray diffraction (XRD) showed that the magnetosomes are magnetite. The results indicate that RSM with a desirability function is a useful technique to maximize OD600 and Cmag simultaneously. PMID:26652427
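
    The desirability approach maps each response onto a [0, 1] scale and maximizes their geometric mean D; the reported D of 0.923 is such a composite. The minimal Derringer-type sketch below uses the OD600 and Cmag values quoted in the abstract, but the lower/upper acceptance limits are illustrative assumptions, not the study's actual ranges.

        # Derringer-type desirability: scale each response to [0, 1] and
        # combine with a geometric mean. Limits below are assumptions.
        import numpy as np

        def d_larger_is_better(y, low, high, s=1.0):
            # 0 below `low`, 1 above `high`, power-law ramp in between.
            return float(np.clip((y - low) / (high - low), 0.0, 1.0) ** s)

        d_od = d_larger_is_better(0.522, low=0.2, high=0.6)    # growth OD600
        d_cmag = d_larger_is_better(1.196, low=0.5, high=1.3)  # magnetosome Cmag
        D = (d_od * d_cmag) ** 0.5                             # overall desirability
        print(round(D, 3))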

  2. Analytical approach to cross-layer protocol optimization in wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Hortos, William S.

    2008-04-01

    In the distributed operations of route discovery and maintenance, strong interaction occurs across mobile ad hoc network (MANET) protocol layers. Quality of service (QoS) requirements of multimedia service classes must be satisfied by the cross-layer protocol, along with minimization of the distributed power consumption at nodes and along routes, subject to battery-limited energy constraints. In previous work by the author, cross-layer interactions in the MANET protocol are modeled in terms of a set of concatenated design parameters and associated resource levels by multivariate point processes (MVPPs). Determination of the "best" cross-layer design is carried out using the optimal control of martingale representations of the MVPPs. In contrast to the competitive interaction among nodes in a MANET for multimedia services using limited resources, the interaction among the nodes of a wireless sensor network (WSN) is distributed and collaborative, based on the processing of data from a variety of sensors at nodes to satisfy common mission objectives. Sensor data originates at the nodes at the periphery of the WSN, is successively transported to other nodes for aggregation based on information-theoretic measures of correlation, and is ultimately sent as information to one or more destination (decision) nodes. The "multimedia services" in the MANET model are replaced by multiple types of sensors, e.g., audio, seismic, imaging, thermal, etc., at the nodes; the QoS metrics associated with MANETs become those associated with the quality of fused information flow, i.e., throughput, delay, packet error rate, data correlation, etc. Significantly, the essential analytical approach to MANET cross-layer optimization, now based on the MVPPs for discrete random events occurring in the WSN, can be applied to develop the stochastic characteristics and optimality conditions for cross-layer designs of sensor network protocols. Functional dependencies of WSN performance metrics are described in

  3. Biologically optimized helium ion plans: calculation approach and its in vitro validation

    NASA Astrophysics Data System (ADS)

    Mairani, A.; Dokic, I.; Magro, G.; Tessonnier, T.; Kamp, F.; Carlson, D. J.; Ciocca, M.; Cerutti, F.; Sala, P. R.; Ferrari, A.; Böhlen, T. T.; Jäkel, O.; Parodi, K.; Debus, J.; Abdollahi, A.; Haberer, T.

    2016-06-01

    Treatment planning studies on the biological effect of raster-scanned helium ion beams should be performed, together with their experimental verification, before their clinical application at the Heidelberg Ion Beam Therapy Center (HIT). For this purpose, we introduce a novel calculation approach based on integrating data-driven biological models in our Monte Carlo treatment planning (MCTP) tool. Dealing with a mixed radiation field, the biological effect of the primary 4He ion beams, of the secondary 3He and 4He (Z = 2) fragments and of the produced protons, deuterons and tritons (Z = 1) has to be taken into account. A spread-out Bragg peak (SOBP) in water, representative of a clinically relevant scenario, has been biologically optimized with the MCTP and then delivered at HIT. Predictions of cell survival and RBE for a tumor cell line, characterized by (α/β)_ph = 5.4 Gy, have been successfully compared against measured clonogenic survival data. The mean absolute survival variation (μ_ΔS) between model predictions and experimental data was 5.3% ± 0.9%. A sensitivity study, i.e. quantifying the variation of the estimations for the studied plan as a function of the applied phenomenological modelling approach, has been performed. The feasibility of a simpler biological modelling based on dose-averaged LET (linear energy transfer) has been tested. Moreover, comparisons with biophysical models such as the local effect model (LEM) and the repair-misrepair-fixation (RMF) model were performed. μ_ΔS values for the LEM and the RMF model were, respectively, 4.5% ± 0.8% and 5.8% ± 1.1%. The satisfactory agreement found in this work for the studied SOBP, representative of a clinically relevant scenario, suggests that the introduced approach could be applied for an accurate estimation of the biological effect for helium ion radiotherapy.
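
    The survival predictions above rest on the standard linear-quadratic (LQ) model, S(D) = exp(-αD - βD²). The sketch below only illustrates that model; the photon α value and the helium-ion enhancement factor are assumptions for illustration, and only the quoted (α/β)_ph = 5.4 Gy comes from the abstract.

        # Linear-quadratic cell survival: S(D) = exp(-alpha*D - beta*D^2).
        import numpy as np

        alpha_beta_ph = 5.4              # Gy, (alpha/beta)_ph from the abstract
        alpha_ph = 0.2                   # 1/Gy, assumed for illustration
        beta_ph = alpha_ph / alpha_beta_ph

        def survival(dose, alpha, beta):
            return np.exp(-alpha * dose - beta * dose ** 2)

        alpha_ion = 2.0 * alpha_ph       # illustrative helium-ion enhancement
        for D in (1.0, 2.0, 4.0):
            s_ph = survival(D, alpha_ph, beta_ph)
            s_ion = survival(D, alpha_ion, beta_ph)
            print(f"D = {D} Gy: photon S = {s_ph:.3f}, ion S = {s_ion:.3f}")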

  4. Biologically optimized helium ion plans: calculation approach and its in vitro validation.

    PubMed

    Mairani, A; Dokic, I; Magro, G; Tessonnier, T; Kamp, F; Carlson, D J; Ciocca, M; Cerutti, F; Sala, P R; Ferrari, A; Böhlen, T T; Jäkel, O; Parodi, K; Debus, J; Abdollahi, A; Haberer, T

    2016-06-01

    Treatment planning studies on the biological effect of raster-scanned helium ion beams should be performed, together with their experimental verification, before their clinical application at the Heidelberg Ion Beam Therapy Center (HIT). For this purpose, we introduce a novel calculation approach based on integrating data-driven biological models in our Monte Carlo treatment planning (MCTP) tool. Dealing with a mixed radiation field, the biological effect of the primary 4He ion beams, of the secondary 3He and 4He (Z = 2) fragments and of the produced protons, deuterons and tritons (Z = 1) has to be taken into account. A spread-out Bragg peak (SOBP) in water, representative of a clinically relevant scenario, has been biologically optimized with the MCTP and then delivered at HIT. Predictions of cell survival and RBE for a tumor cell line, characterized by (α/β)_ph = 5.4 Gy, have been successfully compared against measured clonogenic survival data. The mean absolute survival variation (μ_ΔS) between model predictions and experimental data was 5.3% ± 0.9%. A sensitivity study, i.e. quantifying the variation of the estimations for the studied plan as a function of the applied phenomenological modelling approach, has been performed. The feasibility of a simpler biological modelling based on dose-averaged LET (linear energy transfer) has been tested. Moreover, comparisons with biophysical models such as the local effect model (LEM) and the repair-misrepair-fixation (RMF) model were performed. μ_ΔS values for the LEM and the RMF model were, respectively, 4.5% ± 0.8% and 5.8% ± 1.1%. The satisfactory agreement found in this work for the studied SOBP, representative of a clinically relevant scenario, suggests that the introduced approach could be applied for an accurate estimation of the biological effect for helium ion radiotherapy. PMID:27203864

  5. Next-Generation Mitogenomics: A Comparison of Approaches Applied to Caecilian Amphibian Phylogeny.

    PubMed

    Maddock, Simon T; Briscoe, Andrew G; Wilkinson, Mark; Waeschenbach, Andrea; San Mauro, Diego; Day, Julia J; Littlewood, D Tim J; Foster, Peter G; Nussbaum, Ronald A; Gower, David J

    2016-01-01

    Mitochondrial genome (mitogenome) sequences are being generated with increasing speed due to the advances of next-generation sequencing (NGS) technology and associated analytical tools. However, detailed comparisons to explore the utility of alternative NGS approaches applied to the same taxa have not been undertaken. We compared a 'traditional' Sanger sequencing method with two NGS approaches (shotgun sequencing and non-indexed, multiplex amplicon sequencing) on four different sequencing platforms (Illumina's HiSeq and MiSeq, Roche's 454 GS FLX, and Life Technologies' Ion Torrent) to produce seven (near-) complete mitogenomes from six species that form a small radiation of caecilian amphibians from the Seychelles. The fastest, most accurate method of obtaining mitogenome sequences that we tested was direct sequencing of genomic DNA (shotgun sequencing) using the MiSeq platform. Bayesian inference and maximum likelihood analyses using seven different partitioning strategies were unable to resolve compellingly all phylogenetic relationships among the Seychelles caecilian species, indicating the need for additional data in this case. PMID:27280454

  6. Quantum Operator Approach Applied to the Position-Dependent Mass Schrödinger Equation

    NASA Astrophysics Data System (ADS)

    Ovando, G.; Peña, J. J.; Morales, J.

    2014-03-01

    In this work, the quantum operator approach is applied to both the position-dependent mass Schrödinger equation (PDMSE) and the Schrödinger equation with constant mass (CMSE). This enables us to find the factorization operators that relate both Hamiltonians by means of a kinetic energy operator that follows the proposal of Morrow and Brownstein. With this approach it is possible to find the exactly-solvable PDMSE for any values of the parameters α and γ in the von Roos Hamiltonian. Hence, our proposal can be considered a unified treatment of the PDMSE, because it contains as particular cases the kinetic energy operators of various authors, such as BenDaniel-Duke, Gora-Williams, Zhu-Kroemer and Li-Kuhn, among others. To show the usefulness of our result, we present the solvable PDMSE that comes from the harmonic oscillator potential model for the CMSE. The proposal is general and can easily be extended to other potential models and mass distributions, which will be given in the extended paper.
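
    For reference, the von Roos kinetic-energy operator referred to above can be written, in one common convention (a sketch of the standard form, not a result of this paper):

        \hat{T} = \frac{1}{4}\left( m^{\alpha}\,\hat{p}\,m^{\beta}\,\hat{p}\,m^{\gamma}
                  + m^{\gamma}\,\hat{p}\,m^{\beta}\,\hat{p}\,m^{\alpha} \right),
        \qquad \alpha + \beta + \gamma = -1,

    where m = m(x) and \hat{p} = -i\hbar\,d/dx. Particular parameter choices recover the orderings named in the abstract, e.g. BenDaniel-Duke (\alpha = \gamma = 0, \beta = -1), Zhu-Kroemer (\alpha = \gamma = -1/2, \beta = 0), Gora-Williams (\alpha = -1, \beta = \gamma = 0) and Li-Kuhn (\alpha = 0, \beta = \gamma = -1/2).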

  7. Next-Generation Mitogenomics: A Comparison of Approaches Applied to Caecilian Amphibian Phylogeny

    PubMed Central

    Maddock, Simon T.; Briscoe, Andrew G.; Wilkinson, Mark; Waeschenbach, Andrea; San Mauro, Diego; Day, Julia J.; Littlewood, D. Tim J.; Foster, Peter G.; Nussbaum, Ronald A.; Gower, David J.

    2016-01-01

    Mitochondrial genome (mitogenome) sequences are being generated with increasing speed due to the advances of next-generation sequencing (NGS) technology and associated analytical tools. However, detailed comparisons to explore the utility of alternative NGS approaches applied to the same taxa have not been undertaken. We compared a ‘traditional’ Sanger sequencing method with two NGS approaches (shotgun sequencing and non-indexed, multiplex amplicon sequencing) on four different sequencing platforms (Illumina’s HiSeq and MiSeq, Roche’s 454 GS FLX, and Life Technologies’ Ion Torrent) to produce seven (near-) complete mitogenomes from six species that form a small radiation of caecilian amphibians from the Seychelles. The fastest, most accurate method of obtaining mitogenome sequences that we tested was direct sequencing of genomic DNA (shotgun sequencing) using the MiSeq platform. Bayesian inference and maximum likelihood analyses using seven different partitioning strategies were unable to resolve compellingly all phylogenetic relationships among the Seychelles caecilian species, indicating the need for additional data in this case. PMID:27280454

  8. A knowledge-based approach to improving optimization techniques in system planning

    NASA Technical Reports Server (NTRS)

    Momoh, J. A.; Zhang, Z. Z.

    1990-01-01

    A knowledge-based (KB) approach to improving mathematical programming techniques used in the system planning environment is presented. The KB system assists in selecting appropriate optimization algorithms, objective functions, constraints, and parameters. The scheme is implemented by integrating symbolic computation of rules derived from operators' and planners' experience and is used with generalized optimization packages. The KB optimization software package is capable of improving the overall planning process, including the correction of given violations. The method was demonstrated on a large-scale power system discussed in the paper.

  9. Uncertainty Optimization Applied to the Monte Carlo Analysis of Planetary Entry Trajectories

    NASA Technical Reports Server (NTRS)

    Olds, John; Way, David

    2001-01-01

    …a trial-and-error approach to reconcile discrepancies. Therefore, an improvement to the Monte Carlo analysis is needed that will allow the problem to be worked in reverse. In this way, the largest allowable dispersions that achieve the required mission objectives can be determined quantitatively.

  10. Time-optimal three-axis reorientation of asymmetric rigid spacecraft via homotopic approach

    NASA Astrophysics Data System (ADS)

    Li, Jing

    2016-05-01

    This paper investigates the time-optimal rest-to-rest three-axis reorientation of asymmetric rigid spacecraft. First, time-optimal solutions for the inertially symmetric rigid spacecraft (ISRS) three-axis reorientation are briefly reviewed. By utilizing the initial costates and reorientation time of the ISRS time-optimal solution, the homotopic approach is introduced to solve the asymmetric rigid spacecraft time-optimal three-axis reorientation problem. The main merit is that the homotopic approach can start automatically and reliably, which would facilitate the real-time generation of open-loop time-optimal solutions for attitude slewing maneuvers. Finally, numerical examples are given to illustrate the performance of the proposed method. For principal axis reorientation, numerical results and analytical derivations show that multiple time-optimal solutions exist, and relations between them are given. For the generic reorientation problem, although a mathematically rigorous proof is not yet available, numerical results also indicate the existence of multiple time-optimal solutions.

  11. A Kriging surrogate model coupled in simulation-optimization approach for identifying release history of groundwater sources

    NASA Astrophysics Data System (ADS)

    Zhao, Ying; Lu, Wenxi; Xiao, Chuanning

    2016-02-01

    As the incidence frequency of groundwater pollution increases, many methods that identify the source characteristics of pollutants are being developed. In this study, a simulation-optimization approach was applied to determine the duration and magnitude of pollutant sources. Such problems are time-consuming because the optimization model requires thousands of runs of the simulation model. To address this challenge, a Kriging surrogate model was proposed to increase computational efficiency. The accuracy, time consumption, and robustness of the Kriging model were tested on both homogeneous and non-uniform media, as well as under steady-state and transient flow and transport conditions. The results of three hypothetical cases demonstrate that the Kriging model can solve groundwater contaminant source identification problems of the kind encountered at field sites with a high degree of accuracy and short computation times, and is thus very robust.
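
    As a rough illustration of the surrogate idea (not the authors' code), the sketch below trains a Gaussian-process (Kriging) surrogate on a few expensive forward-model runs and then optimizes the cheap surrogate instead of the simulator. The forward model, the two source parameters, their ranges, and the library choices are all assumptions made for the sketch.

    ```python
    # Sketch: Kriging (Gaussian-process) surrogate replacing an expensive
    # transport simulator inside a source-identification search.
    # 'expensive_simulation' and its (duration, magnitude) parameters are
    # hypothetical stand-ins for the paper's groundwater model.
    import numpy as np
    from scipy.optimize import differential_evolution
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def expensive_simulation(theta):
        """Placeholder forward model: misfit between simulated and
        observed concentrations for a source (duration, magnitude)."""
        duration, magnitude = theta
        return (duration - 3.0) ** 2 + (magnitude - 50.0) ** 2 / 100.0

    # 1. Train the surrogate on a small design of expensive runs.
    rng = np.random.default_rng(0)
    X_train = rng.uniform([0.0, 0.0], [10.0, 100.0], size=(60, 2))
    y_train = np.array([expensive_simulation(x) for x in X_train])
    surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=[1.0, 10.0]),
                                         normalize_y=True).fit(X_train, y_train)

    # 2. Optimize the cheap surrogate instead of the simulator.
    result = differential_evolution(
        lambda x: surrogate.predict(x.reshape(1, -1))[0],
        bounds=[(0.0, 10.0), (0.0, 100.0)], seed=0)
    print("estimated source parameters:", result.x)
    ```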

  12. A multi-label, semi-supervised classification approach applied to personality prediction in social media.

    PubMed

    Lima, Ana Carolina E S; de Castro, Leandro Nunes

    2014-10-01

    Social media allow web users to create and share content pertaining to different subjects, exposing their activities, opinions, feelings and thoughts. In this context, online social media has attracted the interest of data scientists seeking to understand behaviours and trends, whilst collecting statistics for social sites. One potential application for these data is personality prediction, which aims to understand a user's behaviour within social media. Traditional personality prediction relies on users' profiles, their status updates, the messages they post, etc. Here, a personality prediction system for social media data is introduced that differs from most approaches in the literature, in that it works with groups of texts, instead of single texts, and does not take users' profiles into account. Also, the proposed approach extracts meta-attributes from texts and does not work directly with the content of the messages. The set of possible personality traits is taken from the Big Five model and allows the problem to be characterised as a multi-label classification task. The problem is then transformed into a set of five binary classification problems and solved by means of a semi-supervised learning approach, due to the difficulty in annotating the massive amounts of data generated in social media. In our implementation, the proposed system was trained with three well-known machine-learning algorithms, namely a Naïve Bayes classifier, a Support Vector Machine, and a Multilayer Perceptron neural network. The system was applied to predict the personality of Tweets taken from three datasets available in the literature, and resulted in an approximately 83% accurate prediction, with some of the personality traits presenting better individual classification rates than others. PMID:24969690
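
    The problem transformation described here is easy to sketch. Below, five independent binary classifiers (one per Big Five trait) are trained following the binary-relevance idea; the toy feature matrix, the random labels, and the choice of Gaussian Naïve Bayes are illustrative stand-ins for the paper's meta-attributes, datasets, and semi-supervised training.

    ```python
    # Sketch of the binary-relevance transformation: the multi-label
    # Big Five task becomes five independent binary problems.
    import numpy as np
    from sklearn.naive_bayes import GaussianNB

    TRAITS = ["openness", "conscientiousness", "extraversion",
              "agreeableness", "neuroticism"]

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 10))          # toy meta-attributes per text group
    Y = rng.integers(0, 2, size=(200, 5))   # one binary label per trait

    # One classifier per trait (binary relevance).
    models = {t: GaussianNB().fit(X, Y[:, j]) for j, t in enumerate(TRAITS)}

    x_new = rng.normal(size=(1, 10))
    prediction = {t: int(m.predict(x_new)[0]) for t, m in models.items()}
    print(prediction)   # e.g. {'openness': 1, 'conscientiousness': 0, ...}
    ```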

  13. Maximum Caliber: a variational approach applied to two-state dynamics.

    PubMed

    Stock, Gerhard; Ghosh, Kingshuk; Dill, Ken A

    2008-05-21

    We show how to apply a general theoretical approach to nonequilibrium statistical mechanics, called Maximum Caliber, originally suggested by E. T. Jaynes [Annu. Rev. Phys. Chem. 31, 579 (1980)], to a problem of two-state dynamics. Maximum Caliber is a variational principle for dynamics in the same spirit that Maximum Entropy is a variational principle for equilibrium statistical mechanics. The central idea is to compute a dynamical partition function, a sum of weights over all microscopic paths, rather than over microstates. We illustrate the method on the simple problem of two-state dynamics, A<-->B, first for a single particle, then for M particles. Maximum Caliber gives a unified framework for deriving all the relevant dynamical properties, including the microtrajectories and all the moments of the time-dependent probability density. While it can readily be used to derive the traditional master equation and the Langevin results, it goes beyond them in also giving trajectory information. For example, we derive the Langevin noise distribution rather than assuming it. As a general approach to solving nonequilibrium statistical mechanics dynamical problems, Maximum Caliber has some advantages: (1) It is partition-function-based, so we can draw insights from similarities to equilibrium statistical mechanics. (2) It is trajectory-based, so it gives more dynamical information than population-based approaches like master equations; this is particularly important for few-particle and single-molecule systems. (3) It gives an unambiguous way to relate flows to forces, which has traditionally posed challenges. (4) Like Maximum Entropy, it may be useful for data analysis, specifically for time-dependent phenomena. PMID:18500851

  14. Maximum Caliber: A variational approach applied to two-state dynamics

    NASA Astrophysics Data System (ADS)

    Stock, Gerhard; Ghosh, Kingshuk; Dill, Ken A.

    2008-05-01

    We show how to apply a general theoretical approach to nonequilibrium statistical mechanics, called Maximum Caliber, originally suggested by E. T. Jaynes [Annu. Rev. Phys. Chem. 31, 579 (1980)], to a problem of two-state dynamics. Maximum Caliber is a variational principle for dynamics in the same spirit that Maximum Entropy is a variational principle for equilibrium statistical mechanics. The central idea is to compute a dynamical partition function, a sum of weights over all microscopic paths, rather than over microstates. We illustrate the method on the simple problem of two-state dynamics, A ↔B, first for a single particle, then for M particles. Maximum Caliber gives a unified framework for deriving all the relevant dynamical properties, including the microtrajectories and all the moments of the time-dependent probability density. While it can readily be used to derive the traditional master equation and the Langevin results, it goes beyond them in also giving trajectory information. For example, we derive the Langevin noise distribution rather than assuming it. As a general approach to solving nonequilibrium statistical mechanics dynamical problems, Maximum Caliber has some advantages: (1) It is partition-function-based, so we can draw insights from similarities to equilibrium statistical mechanics. (2) It is trajectory-based, so it gives more dynamical information than population-based approaches like master equations; this is particularly important for few-particle and single-molecule systems. (3) It gives an unambiguous way to relate flows to forces, which has traditionally posed challenges. (4) Like Maximum Entropy, it may be useful for data analysis, specifically for time-dependent phenomena.
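
    The central construction both records describe can be written compactly. The block below is a generic Maximum Caliber sketch in the standard form used in the MaxCal literature; the constraint functions s_j and multipliers λ_j are illustrative notation, not quantities taken from the paper.

    ```latex
    % Maximize the path entropy (caliber) over trajectory probabilities
    % p_Gamma subject to normalization and path-averaged constraints:
    \mathcal{C} \;=\; -\sum_{\Gamma} p_{\Gamma}\ln p_{\Gamma}
      \;-\; \sum_{j}\lambda_{j}\Big(\sum_{\Gamma} p_{\Gamma}\,s_{j}(\Gamma)-\langle s_{j}\rangle\Big).
    % The maximum assigns Boltzmann-like weights to whole trajectories,
    p_{\Gamma} \;=\; \frac{e^{-\sum_{j}\lambda_{j}s_{j}(\Gamma)}}{Q_{d}},
    \qquad
    Q_{d} \;=\; \sum_{\Gamma} e^{-\sum_{j}\lambda_{j}s_{j}(\Gamma)},
    % where Q_d is the dynamical partition function; moments of the
    % dynamical observables follow from derivatives of ln Q_d.
    ```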

  15. Distribution function approach to redshift space distortions. Part IV: perturbation theory applied to dark matter

    SciTech Connect

    Vlah, Zvonimir; Seljak, Uroš; Baldauf, Tobias; McDonald, Patrick; Okumura, Teppei E-mail: seljak@physik.uzh.ch E-mail: teppei@ewha.ac.kr

    2012-11-01

    We develop a perturbative approach to redshift space distortions (RSD) using the phase space distribution function approach and apply it to the dark matter redshift space power spectrum and its moments. RSD can be written as a sum over density weighted velocity moment correlators, with the lowest order being density, momentum density and stress energy density. We use standard and extended perturbation theory (PT) to determine their auto and cross correlators, comparing them to N-body simulations. We show which of the terms can be modeled well with the standard PT and which need additional terms that include higher order corrections which cannot be modeled in PT. Most of these additional terms are related to the small scale velocity dispersion effects, the so-called finger-of-god (FoG) effects, which affect some, but not all, of the terms in this expansion, and which can be approximately modeled using a simple physically motivated ansatz such as the halo model. We point out that there are several velocity dispersions that enter into the detailed RSD analysis with very different amplitudes, which can be approximately predicted by the halo model. In contrast to previous models our approach systematically includes all of the terms at a given order in PT and provides a physical interpretation for the small scale dispersion values. We investigate the RSD power spectrum as a function of μ, the cosine of the angle between the Fourier mode and the line of sight, focusing on the lowest order powers of μ and the multipole moments which dominate the observable RSD power spectrum. Overall we find considerable success in modeling many, but not all, of the terms in this expansion. This is similar to the situation in real space, but predicting the power spectrum in redshift space is more difficult because of the explicit influence of small scale dispersion type effects in RSD, which extend to very large scales.
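
    As a point of reference for the expansion described above, the block below records the textbook lowest-order (Kaiser) limit that any such moment expansion must recover on large scales; it is standard material, not a formula quoted from the paper.

    ```latex
    % Kaiser limit of the redshift-space dark matter power spectrum:
    P^{s}(k,\mu) \;=\; \big(1+f\mu^{2}\big)^{2}\,P_{\delta\delta}(k),
    % where f is the linear growth rate and mu the cosine of the angle to
    % the line of sight; the momentum density and stress energy correlators
    % contribute the higher powers of mu and the FoG-type corrections.
    ```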

  16. A biarc-based shape optimization approach to reduce stress concentration effects

    NASA Astrophysics Data System (ADS)

    Meng, Liang; Zhang, Wei-Hong; Zhu, Ji-Hong; Xia, Liang

    2014-06-01

    In order to avoid stress concentration, the shape boundary must be properly designed via shape optimization. Traditional shape optimization approaches eliminate the stress concentration effect by using free-form curves to represent the design boundaries, without taking machinability into consideration. In most numerical control (NC) machines, linear and circular interpolations are used to generate the tool path. Non-circular curves, such as non-uniform rational B-splines (NURBS), need other, more advanced interpolation functions to formulate the tool path. Forming a circular tool path by approximating the optimal free-curve boundary with arcs or biarcs is another option. However, both approaches come at the cost of a sharp expansion of program code and, consequently, long machining times. Motivated by the success of recent research on biarcs, a reliable shape optimization approach is proposed in this work to directly optimize the shape boundaries with biarcs, while the efficiency and precision of the traditional method are preserved. Finally, the approach is validated by several illustrative examples.

  17. A new optomechanical structural optimization approach: coupling FEA and raytracing sensitivity matrices

    NASA Astrophysics Data System (ADS)

    Riva, M.

    2012-09-01

    The design of astronomical instruments is growing in size and complexity with the advent of ELT-class telescopes. The availability of new structural materials, such as composites, calls for more robust and reliable numerical design tools. This paper presents a new opto-mechanical optimization approach built on a previously developed integrated design framework. The idea is to reduce the number of iterations in a multi-variable structural optimization by taking advantage of the embedded sensitivity routines available both in FEA software and in raytracing software. This approach reduces the iteration count mainly in cases with a high number of structural design variables.

  18. Optimal design of experiments applied to headspace solid phase microextraction for the quantification of vicinal diketones in beer through gas chromatography-mass spectrometric detection.

    PubMed

    Leça, João M; Pereira, Ana C; Vieira, Ana C; Reis, Marco S; Marques, José C

    2015-08-01

    Vicinal diketones, namely diacetyl (DC) and pentanedione (PN), are compounds naturally found in beer that play a key role in the definition of its aroma. In lager beer, they are responsible for off-flavors (buttery flavor) and therefore their presence and quantification is of paramount importance to beer producers. Aiming at developing an accurate quantitative monitoring scheme to follow these off-flavor compounds during beer production and in the final product, the headspace solid-phase microextraction (HS-SPME) analytical procedure was tuned through experiments planned in an optimal way and the final settings were fully validated. Optimal design of experiments (O-DOE) is a computational, statistically oriented approach for designing experiments that are most informative according to a well-defined criterion. This methodology was applied for HS-SPME optimization, leading to the following optimal extraction conditions for the quantification of VDK: use of a CAR/PDMS fiber, 5 ml of sample in a 20 ml vial, and 5 min of pre-incubation time followed by 25 min of extraction at 30 °C, with agitation. The validation of the final analytical methodology was performed using a matrix-matched calibration, in order to minimize matrix effects. The following key features were obtained: linearity (R(2) > 0.999, both for diacetyl and 2,3-pentanedione), high sensitivity (LOD of 0.92 μg L(-1) and 2.80 μg L(-1), and LOQ of 3.30 μg L(-1) and 10.01 μg L(-1), for diacetyl and 2,3-pentanedione, respectively), recoveries of approximately 100% and suitable precision (repeatability and reproducibility lower than 3% and 7.5%, respectively). The applicability of the methodology was fully confirmed through an independent analysis of several beer samples, with analyte concentrations ranging from 4 to 200 μg L(-1). PMID:26320791
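
    For readers unfamiliar with optimal design of experiments, the toy sketch below shows the core computation: choosing, from a candidate set, the subset of runs that maximizes the determinant of the information matrix (D-optimality). The two-factor linear model with interaction and the six-run budget are invented for illustration; the study's actual factors and software are not reproduced.

    ```python
    # Sketch of D-optimal design selection by exhaustive subset search.
    import itertools
    import numpy as np

    # Candidate runs: 3 coded levels each for, say, time t and temperature T.
    levels = [-1.0, 0.0, 1.0]
    candidates = np.array([(1.0, t, T, t * T)      # model: intercept, t, T, t*T
                           for t, T in itertools.product(levels, levels)])

    def d_criterion(rows):
        """Determinant of the information matrix X'X for the chosen runs."""
        X = candidates[list(rows)]
        return np.linalg.det(X.T @ X)

    # Exhaustive search over all 6-run designs (only 84 subsets here).
    best = max(itertools.combinations(range(len(candidates)), 6),
               key=d_criterion)
    print("D-optimal 6-run design (coded t, T):\n", candidates[list(best)][:, 1:3])
    ```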

  19. Estimating Soil Thermal Properties from Land Surface Temperature Measurements Using Ant Colony Optimization Approach

    NASA Astrophysics Data System (ADS)

    Zamani, K.; Madadgar, S.; Bateni, S.

    2012-12-01

    Soil thermal conductivity and volumetric heat capacity are crucial parameters in land surface hydrology and hydro-climatology. There are several techniques (e.g., heat-source probe, borehole relaxation, and heat-dissipation sensors) for in situ measurement of soil thermal properties. These methods are generally expensive and labor-intensive. In a departure from these in situ approaches, regression-based techniques have been developed to estimate soil thermal properties. They require several input variables, such as soil texture, water content, and organic content, which are typically unavailable. To overcome the aforementioned drawbacks of these methods, a new approach is developed to estimate soil thermal properties from sequences of land surface temperature (LST) measurements. Herein, LST measurements are the only required input. An objective function describing the misfit between LST simulated by the heat diffusion equation and the corresponding observations is minimized using the Ant Colony Optimization (ACO) technique in order to find the optimum values for the soil thermal properties. The performance of the model is initially tested on a single-layer (homogeneous) soil setup, and then a generalized multi-layer soil column is explored with two, five, and ten equal-thickness sub-layers to account for inhomogeneity in the soil slab. The developed model is applied to data from the First International Satellite Land Surface Climatology Project (ISLSCP) Field Experiment in the summers of 1987 and 1988. The soil thermal properties retrieved by ACO are used to solve the heat diffusion equation and estimate soil temperature within the soil slab. The soil temperature estimates show relatively good agreement with observations, suggesting that the proposed technique can reliably estimate soil thermal properties.
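
    In compact form, the inversion described above amounts to the following; the notation (λ for conductivity, C for volumetric heat capacity) is ours, not the authors'.

    ```latex
    % Forward model: 1-D heat diffusion through the soil slab,
    \frac{\partial T}{\partial t} \;=\; \frac{\lambda}{C}\,\frac{\partial^2 T}{\partial z^2},
    % and the ACO search minimizes the misfit between simulated and
    % observed land surface temperatures over N time steps:
    J(\lambda, C) \;=\; \sum_{i=1}^{N}\big[\,T_{\mathrm{sim}}(0, t_i;\lambda, C) - T_{\mathrm{obs}}(t_i)\,\big]^2 .
    ```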

  20. An Efficacious Multi-Objective Fuzzy Linear Programming Approach for Optimal Power Flow Considering Distributed Generation

    PubMed Central

    Warid, Warid; Hizam, Hashim; Mariun, Norman; Abdul-Wahab, Noor Izzri

    2016-01-01

    This paper proposes a new formulation for the multi-objective optimal power flow (MOOPF) problem for meshed power networks considering distributed generation. An efficacious multi-objective fuzzy linear programming optimization (MFLP) algorithm is proposed to solve the aforementioned problem with and without considering the distributed generation (DG) effect. A variant combination of objectives is considered for simultaneous optimization, including power loss, voltage stability, and shunt capacitors MVAR reserve. Fuzzy membership functions for these objectives are designed with extreme targets, whereas the inequality constraints are treated as hard constraints. The multi-objective fuzzy optimal power flow (OPF) formulation was converted into a crisp OPF in a successive linear programming (SLP) framework and solved using an efficient interior point method (IPM). To test the efficacy of the proposed approach, simulations are performed on the IEEE 30-bus and IEEE 118-bus test systems. The MFLP optimization is solved for several optimization cases. The obtained results are compared with those presented in the literature. A unique solution with high satisfaction of the assigned targets is obtained. Results demonstrate the effectiveness of the proposed MFLP technique in terms of solution optimality and rapid convergence. Moreover, the results indicate that using the optimal DG location with the MFLP algorithm provides the solution with the highest quality. PMID:26954783

  1. An Efficacious Multi-Objective Fuzzy Linear Programming Approach for Optimal Power Flow Considering Distributed Generation.

    PubMed

    Warid, Warid; Hizam, Hashim; Mariun, Norman; Abdul-Wahab, Noor Izzri

    2016-01-01

    This paper proposes a new formulation for the multi-objective optimal power flow (MOOPF) problem for meshed power networks considering distributed generation. An efficacious multi-objective fuzzy linear programming optimization (MFLP) algorithm is proposed to solve the aforementioned problem with and without considering the distributed generation (DG) effect. A variant combination of objectives is considered for simultaneous optimization, including power loss, voltage stability, and shunt capacitors MVAR reserve. Fuzzy membership functions for these objectives are designed with extreme targets, whereas the inequality constraints are treated as hard constraints. The multi-objective fuzzy optimal power flow (OPF) formulation was converted into a crisp OPF in a successive linear programming (SLP) framework and solved using an efficient interior point method (IPM). To test the efficacy of the proposed approach, simulations are performed on the IEEE 30-bus and IEEE 118-bus test systems. The MFLP optimization is solved for several optimization cases. The obtained results are compared with those presented in the literature. A unique solution with high satisfaction of the assigned targets is obtained. Results demonstrate the effectiveness of the proposed MFLP technique in terms of solution optimality and rapid convergence. Moreover, the results indicate that using the optimal DG location with the MFLP algorithm provides the solution with the highest quality. PMID:26954783
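
    A minimal max-min fuzzy LP in the spirit of the formulation above is sketched below: each (linearized) objective gets a linear membership function, the memberships are bounded below by a common satisfaction level, and the LP maximizes that level. The two toy objectives, their target ranges, and the single hard constraint are illustrative, not the paper's OPF model.

    ```python
    # Max-min fuzzy linear program solved as an ordinary LP.
    from scipy.optimize import linprog

    # Variables z = [x1, x2, lam]. Objectives to minimize:
    #   f1 = x1 + 2*x2, target range [0, 10]
    #   f2 = 3*x1 - x2, target range [-5, 5]
    # Membership mu_i >= lam  <=>  f_i + lam*(f_max - f_min) <= f_max.
    A_ub = [[1.0,  2.0, 10.0],    # f1 + 10*lam <= 10
            [3.0, -1.0, 10.0],    # f2 + 10*lam <= 5
            [-1.0, -1.0, 0.0]]    # hard constraint: x1 + x2 >= 2
    b_ub = [10.0, 5.0, -2.0]
    res = linprog(c=[0.0, 0.0, -1.0],             # maximize lam
                  A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, 4), (0, 4), (0, 1)])
    x1, x2, lam = res.x
    print(f"x = ({x1:.2f}, {x2:.2f}), satisfaction = {lam:.2f}")  # lam = 0.60
    ```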

  2. A generalised chemical precipitation modelling approach in wastewater treatment applied to calcite.

    PubMed

    Mbamba, Christian Kazadi; Batstone, Damien J; Flores-Alsina, Xavier; Tait, Stephan

    2015-01-01

    Process simulation models used across the wastewater industry have inherent limitations due to over-simplistic descriptions of important physico–chemical reactions, especially for mineral solids precipitation. As part of the efforts towards a larger Generalized Physicochemical Modelling Framework, the present study aims to identify a broadly applicable precipitation modelling approach. The study uses two experimental platforms applied to calcite precipitating from synthetic aqueous solutions to identify and validate the model approach. Firstly, dynamic pH titration tests are performed to define the baseline model approach. Constant Composition Method (CCM) experiments are then used to examine influence of environmental factors on the baseline approach. Results show that the baseline model should include precipitation kinetics (not be quasi-equilibrium), should include a 1st order effect of the mineral particulate state (Xcryst) and, for calcite, have a 2nd order dependency (exponent n = 2.05 ± 0.29) on thermodynamic supersaturation (σ). Parameter analysis indicated that the model was more tolerant to a fast kinetic coefficient (kcryst) and so, in general, it is recommended that a large kcryst value be nominally selected where insufficient process data is available. Zero seed (self nucleating) conditions were effectively represented by including arbitrarily small amounts of mineral phase in the initial conditions. Both of these aspects are important for wastewater modelling, where knowledge of kinetic coefficients is usually not available, and it is typically uncertain which precipitates are actually present. The CCM experiments confirmed the baseline model, particularly the dependency on supersaturation. Temperature was also identified as an influential factor that should be corrected for via an Arrhenius-style correction of kcryst. The influence of magnesium (a common and representative added impurity) on kcryst was found to be significant but was considered
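
    Read literally, the baseline model identified above corresponds to a rate law of the following form; the notation is ours, and the Arrhenius correction is written in one common parameterization rather than quoted from the paper.

    ```latex
    % Precipitation rate: first order in the mineral particulate state,
    % n-th order in thermodynamic supersaturation (n ~ 2 for calcite),
    r_{\mathrm{precip}} \;=\; k_{\mathrm{cryst}}\, X_{\mathrm{cryst}}\, \sigma^{\,n},
    \qquad n \approx 2,
    % with Arrhenius-style temperature correction of the rate coefficient:
    k_{\mathrm{cryst}}(T) \;=\; k_{\mathrm{cryst}}(T_{\mathrm{ref}})\,
      \exp\!\Big[\tfrac{E_a}{R}\Big(\tfrac{1}{T_{\mathrm{ref}}} - \tfrac{1}{T}\Big)\Big].
    ```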

  3. JAVA implemented MSE optimal bit-rate allocation applied to 3-D hyperspectral imagery using JPEG2000 compression

    NASA Astrophysics Data System (ADS)

    Melchor, J. L., Jr.; Cabrera, S. D.; Aguirre, A.; Kosheleva, O. M.; Vidal, E., Jr.

    2005-08-01

    This paper describes an efficient algorithm and its Java implementation for a recently developed mean-squared error (MSE) rate-distortion optimal (RDO) inter-slice bit-rate allocation (BRA) scheme applicable to the JPEG2000 Part 2 (J2KP2) framework. Its performance is illustrated on hyperspectral imagery data using the J2KP2 with the Karhunen-Loève transform (KLT) for decorrelation. The results are contrasted with those obtained using the traditional log-variance-based BRA method and with the original RDO algorithm. The implementation has been developed as a Java plug-in to be incorporated into our evolving multi-dimensional data compression software tool denoted CompressMD. The RDO approach to BRA uses discrete rate distortion curves (RDCs) for each slice of transform coefficients. The generation of each point on an RDC requires a full decompression of that slice; therefore, the efficient version minimizes the number of RDC points needed from each slice by using a localized coarse-to-fine approach denoted RDOEfficient. The scheme is illustrated in detail using a subset of 10 bands of hyperspectral imagery data and is contrasted with the original RDO implementation and the traditional (log-variance) method of BRA, showing that better results are obtained with the RDO methods. The three schemes are also tested on two hyperspectral imagery data sets with all bands present: the Cuprite radiance data from AVIRIS and a set derived from the Hyperion satellite. The results from the RDO and RDOEfficient are very close to each other in the MSE sense, indicating that the adaptive approach can find almost the same BRA solution. Surprisingly, the traditional method also performs very close to the RDO methods, indicating that it is very close to being optimal for these types of data sets.
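
    The inter-slice allocation problem that an RDO scheme solves can be stated compactly in standard rate-distortion form; this is a textbook formulation, not a quotation from the paper.

    ```latex
    % Choose rates r_i for the K slices to minimize total distortion
    % under a bit budget R,
    \min_{r_1,\dots,r_K} \;\sum_{i=1}^{K} D_i(r_i)
    \quad \text{s.t.} \quad \sum_{i=1}^{K} r_i \le R,
    % which, for convex rate-distortion curves, is attained where all
    % slices operate at (approximately) equal distortion-rate slopes:
    \frac{\partial D_i}{\partial r_i} \approx -\lambda \quad \text{for all } i .
    ```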

  4. Applying a Bayesian Approach to Identification of Orthotropic Elastic Constants from Full Field Displacement Measurements

    NASA Astrophysics Data System (ADS)

    Gogu, C.; Yin, W.; Haftka, R.; Ifju, P.; Molimard, J.; Le Riche, R.; Vautrin, A.

    2010-06-01

    A major challenge in the identification of material properties is handling different sources of uncertainty in the experiment and in the modelling of the experiment for estimating the resulting uncertainty in the identified properties. Numerous improvements in identification methods have provided increasingly accurate estimates of various material properties. However, characterizing the uncertainty in the identified properties is still relatively crude. Different material properties obtained from a single test are not obtained with the same confidence. Typically, the highest uncertainty is associated with the properties to which the experiment is least sensitive. In addition, the uncertainty in different properties can be strongly correlated, so that obtaining only variance estimates may be misleading. A possible approach for handling the different sources of uncertainty and estimating the uncertainty in the identified properties is the Bayesian method. This method was introduced in the late 1970s in the context of identification [1] and has since been applied to different problems, notably identification of elastic constants from plate vibration experiments [2]-[4]. The applications of the method to these classical pointwise tests involved only a small number of measurements (typically ten natural frequencies in the previously cited vibration test), which facilitated the application of the Bayesian approach. For identifying elastic constants, full field strain or displacement measurements provide a high number of measured quantities (one measurement per image pixel) and hence a promise of smaller uncertainties in the properties. However, the high number of measurements also represents a major computational challenge in applying the Bayesian approach to full field measurements. To address this challenge we propose an approach based on the proper orthogonal decomposition (POD) of the full fields in order to drastically reduce their dimensionality. POD is
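
    A minimal numerical sketch of the proposed reduction step follows, with synthetic low-rank snapshots standing in for the measured full-field displacements; the sizes, the hidden rank, and the 99% energy threshold are all assumptions of the sketch.

    ```python
    # POD (via SVD) compressing full-field measurements to a few modes.
    import numpy as np

    rng = np.random.default_rng(2)
    n_pixels, n_snapshots = 10_000, 40
    basis = rng.normal(size=(n_pixels, 3))                # hidden low-rank structure
    snapshots = basis @ rng.normal(size=(3, n_snapshots)) # one column per field
    snapshots += 0.01 * rng.normal(size=snapshots.shape)  # measurement noise

    # POD modes = left singular vectors of the snapshot matrix.
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    k = int(np.searchsorted(energy, 0.99) + 1)   # modes for 99% of the energy

    # Each 10^4-pixel field is now summarized by k coefficients.
    coeffs = U[:, :k].T @ snapshots
    print(f"reduced {n_pixels} measurements/field to {k} POD coefficients")
    ```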

  5. An analytical approach to the problem of inverse optimization with additive objective functions: an application to human prehension

    PubMed Central

    Pesin, Yakov B.; Niu, Xun; Latash, Mark L.

    2010-01-01

    We consider the problem of what is being optimized in human actions with respect to various aspects of human movements and different motor tasks. From the mathematical point of view this problem consists of finding an unknown objective function given the values at which it reaches its minimum. This problem is called the inverse optimization problem. Until now the main approach to this problem has been the cut-and-try method, which consists of introducing an objective function and checking how well it reflects the experimental data. Using this approach, different objective functions have been proposed for the same motor action. In the current paper we focus on inverse optimization problems with additive objective functions and linear constraints. Such problems are typical in human movement science. The problem of muscle (or finger) force sharing is an example. For such problems we obtain sufficient conditions for uniqueness and propose a method for determining the objective functions. To illustrate our method we analyze the problem of force sharing among the fingers in a grasping task. We estimate the objective function from the experimental data and show that it can predict the force-sharing pattern for a vast range of external forces and torques applied to the grasped object. The resulting objective function is quadratic with essentially non-zero linear terms. PMID:19902213
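
    To make the forward direction of this setup concrete, here is a minimal numeric sketch of how an identified additive quadratic objective predicts the force-sharing pattern under a single linear constraint; the coefficient values and the total-force constraint are hypothetical, not taken from the paper's experiments.

    ```python
    # Closed-form force sharing for an additive quadratic objective
    # sum_i (a_i f_i^2 + b_i f_i) subject to sum_i f_i = F.
    import numpy as np

    a = np.array([1.0, 1.5, 2.0, 2.5])    # quadratic coefficients (per finger)
    b = np.array([0.5, -0.2, 0.1, 0.3])   # essentially non-zero linear terms
    F = 20.0                              # total force to be shared

    # Stationarity: 2*a_i*f_i + b_i = lam  =>  f_i = (lam - b_i) / (2*a_i);
    # the multiplier lam is fixed by the constraint sum_i f_i = F.
    lam = (F + np.sum(b / (2 * a))) / np.sum(1 / (2 * a))
    f = (lam - b) / (2 * a)
    print("finger forces:", np.round(f, 2), "sum =", round(f.sum(), 2))
    ```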

  6. A general sequential Monte Carlo method based optimal wavelet filter: A Bayesian approach for extracting bearing fault features

    NASA Astrophysics Data System (ADS)

    Wang, Dong; Sun, Shilong; Tse, Peter W.

    2015-02-01

    A general sequential Monte Carlo method, particularly a general particle filter, has recently attracted much attention in prognostics because it can estimate on-line the posterior probability density functions of the states used in a state space model without making restrictive assumptions. In this paper, the general particle filter is introduced to optimize a wavelet filter for extracting bearing fault features. The major innovation of this paper is that a joint posterior probability density function of the wavelet parameters is represented by a set of random particles with their associated weights, which is seldom reported. Once the joint posterior probability density function of the wavelet parameters is derived, the approximately optimal center frequency and bandwidth can be determined and used to perform optimal wavelet filtering for extracting bearing fault features. Two case studies are investigated to illustrate the effectiveness of the proposed method. The results show that the proposed method provides a Bayesian approach to extracting bearing fault features. Additionally, the proposed method can be generalized by using different wavelet functions and metrics, and can be applied more widely to any other situation in which optimal wavelet filtering is required.

  7. PhysioSoft--an approach in applying computer technology in biofeedback procedures.

    PubMed

    Havelka, Mladen; Havelka, Juraj; Delimar, Marko

    2009-09-01

    The paper presents a description of an original biofeedback computer program called PhysioSoft. It was designed on the basis of experience in the development of biofeedback techniques by an interdisciplinary team of experts from the Department of Health Psychology of the University of Applied Health Studies, the Faculty of Electrical Engineering and Computing, University of Zagreb, and "Mens Sana", a private biofeedback practice in Zagreb. Interest in the possibility of producing direct, voluntary effects on autonomic body functions has grown steadily as the Cartesian model of the body-mind relationship has been abandoned. The psychosomatic approach and studies carried out in the 1950s, together with research on conditioned and operant learning, demonstrated the close interdependence between the physical and the mental, and also the possibility of training individuals to consciously act on their autonomic physiological functions. This new knowledge led to the development of biofeedback techniques around the 1970s and has been the basis of many studies indicating the significance of biofeedback techniques in clinical practice for many symptoms of health disorders. The digitalization of biofeedback instruments and the development of user-friendly computer software enable the use of biofeedback at the individual level as an efficient procedure for a patient's active approach to the care of his or her own health. As new user-friendly computer software makes biofeedback instruments widely accessible, the authors have designed the PhysioSoft computer program as a contribution to the development and broad use of biofeedback. PMID:19860110

  8. Optimal Surface Segmentation in Volumetric Images—A Graph-Theoretic Approach

    PubMed Central

    Li, Kang; Wu, Xiaodong; Chen, Danny Z.; Sonka, Milan

    2008-01-01

    Efficient segmentation of globally optimal surfaces representing object boundaries in volumetric data sets is important and challenging in many medical image analysis applications. We have developed an optimal surface detection method capable of simultaneously detecting multiple interacting surfaces, in which the optimality is controlled by the cost functions designed for individual surfaces and by several geometric constraints defining the surface smoothness and interrelations. The method solves the surface segmentation problem by transforming it into computing a minimum s-t cut in a derived arc-weighted directed graph. The proposed algorithm has a low-order polynomial time complexity and is computationally efficient. It has been extensively validated on more than 300 computer-synthetic volumetric images, 72 CT-scanned data sets of different-sized plexiglas tubes, and tens of medical images spanning various imaging modalities. In all cases, the approach yielded highly accurate results. Our approach can be readily extended to higher-dimensional image segmentation. PMID:16402624

  9. A novel surrogate-based approach for optimal design of electromagnetic-based circuits

    NASA Astrophysics Data System (ADS)

    Hassan, Abdel-Karim S. O.; Mohamed, Ahmed S. A.; Rabie, Azza A.; Etman, Ahmed S.

    2016-02-01

    A new geometric design centring approach for optimal design of central processing unit-intensive electromagnetic (EM)-based circuits is introduced. The approach uses norms related to the probability distribution of the circuit parameters to find distances from a point to the feasible region boundaries by solving nonlinear optimization problems. Based on these normed distances, the design centring problem is formulated as a max-min optimization problem. A convergent iterative boundary search technique is exploited to find the normed distances. To alleviate the computation cost associated with the EM-based circuits design cycle, space-mapping (SM) surrogates are used to create a sequence of iteratively updated feasible region approximations. In each SM feasible region approximation, the centring process using normed distances is implemented, leading to a better centre point. The process is repeated until a final design centre is attained. Practical examples are given to show the effectiveness of the new design centring method for EM-based circuits.

  10. Development of a Compound Optimization Approach Based on Imperialist Competitive Algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Qimei; Yang, Zhihong; Wang, Yong

    In this paper, an improved approach is developed for the imperialist competitive algorithm to achieve greater performance. The Nelder-Mead simplex method is applied to execute alternately with the original procedures of the algorithm. The approach is tested on twelve widely used benchmark functions and is also compared with other related studies. It is shown that the proposed approach has a faster convergence rate, better search ability, and higher stability than the original algorithm and other related methods.

  11. Sequential Model-Based Parameter Optimization: an Experimental Investigation of Automated and Interactive Approaches

    NASA Astrophysics Data System (ADS)

    Hutter, Frank; Bartz-Beielstein, Thomas; Hoos, Holger H.; Leyton-Brown, Kevin; Murphy, Kevin P.

    This work experimentally investigates model-based approaches for optimizing the performance of parameterized randomized algorithms. Such approaches build a response surface model and use this model for finding good parameter settings of the given algorithm. We evaluated two methods from the literature that are based on Gaussian process models: sequential parameter optimization (SPO) (Bartz-Beielstein et al. 2005) and sequential Kriging optimization (SKO) (Huang et al. 2006). SPO performed better "out-of-the-box," whereas SKO was competitive when response values were log transformed. We then investigated key design decisions within the SPO paradigm, characterizing the performance consequences of each. Based on these findings, we propose a new version of SPO, dubbed SPO+, which extends SPO with a novel intensification procedure and a log-transformed objective function. In a domain for which performance results for other (model-free) parameter optimization approaches are available, we demonstrate that SPO+ achieves state-of-the-art performance. Finally, we compare this automated parameter tuning approach to an interactive, manual process that makes use of classical

  12. AN OPTIMAL ADAPTIVE LOCAL GRID REFINEMENT APPROACH TO MODELING CONTAMINANT TRANSPORT

    EPA Science Inventory

    A Lagrangian-Eulerian method with an optimal adaptive local grid refinement is used to model contaminant transport equations. Application of this approach to two bench-mark problems indicates that it completely resolves difficulties of peak clipping, numerical diffusion, and spuri...

  13. A SAND approach based on cellular computation models for analysis and optimization

    NASA Astrophysics Data System (ADS)

    Canyurt, O. E.; Hajela, P.

    2007-06-01

    Genetic algorithms (GAs) have received considerable recent attention in problems of design optimization. The mechanics of population-based search in GAs are highly amenable to implementation on parallel computers. The present article describes a fine-grained model of parallel GA implementation that derives from a cellular-automata-like computation. The central idea behind the cellular genetic algorithm (CGA) approach is to treat the GA population as being distributed over a 2-D grid of cells, with each member of the population occupying a particular cell and defining the state of that cell. Evolution of the cell state is tantamount to updating the design information contained in a cell site and, as in cellular automata computations, takes place on the basis of local interaction with neighbouring cells. A special focus of the article is in the use of cellular automata (CA)-based models for structural analysis in conjunction with the CGA approach to optimization. In such an approach, the analysis and optimization are evolved simultaneously in a unified cellular computational framework. The article describes the implementation of this approach and examines its efficiency in the context of representative structural optimization problems.

  14. A Scalable and Robust Multi-Agent Approach to Distributed Optimization

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan

    2005-01-01

    Modularizing a large optimization problem so that the solutions to the subproblems provide a good overall solution is a challenging problem. In this paper we present a multi-agent approach to this problem based on aligning the agent objectives with the system objectives, obviating the need to impose external mechanisms to achieve collaboration among the agents. This approach naturally addresses scaling and robustness issues by ensuring that the agents do not rely on the reliable operation of other agents. We test this approach on the difficult distributed optimization problem of imperfect device subset selection [Challet and Johnson, 2002]. In this problem, there are n devices, each of which has a "distortion", and the task is to find the subset of those n devices that minimizes the average distortion. Our results show that in large systems (1000 agents) the proposed approach provides improvements of over an order of magnitude over both traditional optimization methods and traditional multi-agent methods. Furthermore, the results show that even in extreme cases of agent failures (i.e., half the agents fail midway through the simulation) the system remains coordinated and still outperforms a failure-free and centralized optimization algorithm.

  15. Applying clustering approach in predictive uncertainty estimation: a case study with the UNEEC method

    NASA Astrophysics Data System (ADS)

    Dogulu, Nilay; Solomatine, Dimitri; Lal Shrestha, Durga

    2014-05-01

    Within the context of flood forecasting, assessment of predictive uncertainty has become a necessity for most modelling studies in operational hydrology. There are several uncertainty analysis and/or prediction methods available in the literature; however, most of them rely on normality and homoscedasticity assumptions for the model residuals occurring in reproducing the observed data. This study focuses on a statistical method that analyzes model residuals without such assumptions, based on a clustering approach: Uncertainty Estimation based on local Errors and Clustering (UNEEC). The aim of this work is to provide a comprehensive evaluation of the UNEEC method's performance with respect to the clustering approach employed within its methodology. This is done by analyzing the normality of model residuals and comparing uncertainty analysis results (for the 50% and 90% confidence levels) with those obtained from the uniform interval and quantile regression methods. An important part of the basis by which the methods are compared is the analysis of data clusters representing different hydrometeorological conditions. The validation measures used are PICP, MPI, ARIL and NUE where necessary. A new validation measure linking the prediction interval to the (hydrological) model quality, the weighted mean prediction interval (WMPI), is also proposed for comparing the methods more effectively. The case study is the Brue catchment, located in the South West of England. A different parametrization of the method than in its previous application in Shrestha and Solomatine (2008) is used, i.e., past error values are considered in addition to discharge and effective rainfall. The results show that UNEEC's notable characteristic, i.e., applying clustering to predictor data in which catchment behaviour information is encapsulated, contributes to the increased accuracy of the method's results for varying flow conditions. Besides, classifying data so that extreme flow events are individually
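
    The clustering idea at the heart of this approach is easy to sketch: cluster the predictor space, then compute empirical error quantiles per cluster so the interval width adapts to the hydrometeorological conditions. The data, the cluster count, and the 90% level below are illustrative; this is not the UNEEC code, which uses fuzzy clustering and machine-learning interval models.

    ```python
    # Cluster-conditional prediction intervals from model residuals.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(3)
    X = rng.normal(size=(1000, 2))            # predictors: discharge, rainfall
    residuals = X[:, 0] * rng.normal(scale=0.5, size=1000)  # heteroscedastic

    labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
    intervals = {c: (np.quantile(residuals[labels == c], 0.05),
                     np.quantile(residuals[labels == c], 0.95))
                 for c in range(4)}
    for c, (lo, hi) in intervals.items():
        print(f"cluster {c}: 90% error band [{lo:+.2f}, {hi:+.2f}]")
    ```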

  16. An approach for optimal allocation of safety resources: using the knapsack problem to take aggregated cost-efficient preventive measures.

    PubMed

    Reniers, Genserik L L; Sörensen, Kenneth

    2013-11-01

    On the basis of the combination of the well-known knapsack problem and a widely used risk management technique in organizations (that is, the risk matrix), an approach was developed to carry out a cost-benefit analysis for making prevention investment decisions efficiently. Using the knapsack problem as a model and combining it with a well-known technique to solve this problem, bundles of prevention measures are prioritized based on their costs and benefits within a predefined prevention budget. Those bundles showing the highest efficiencies, and within a given budget, are identified from a wide variety of possible alternatives. Hence, the approach allows for an optimal allocation of safety resources, does not require any highly specialized information, and can therefore easily be applied by any organization using the risk matrix as a risk-ranking tool. PMID:23551066
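
    A minimal sketch of the underlying model may help: the classic 0/1 knapsack dynamic program below selects the bundle of measures with the greatest total benefit within a fixed budget. The costs, benefits, and budget are invented for illustration; the paper's risk-matrix-derived benefit scores are not reproduced.

    ```python
    # Classic 0/1 knapsack DP for selecting prevention measures.
    def knapsack(costs, benefits, budget):
        """Returns (best total benefit, indices of chosen measures)."""
        n = len(costs)
        best = [[0.0] * (budget + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            for w in range(budget + 1):
                best[i][w] = best[i - 1][w]
                if costs[i - 1] <= w:
                    take = best[i - 1][w - costs[i - 1]] + benefits[i - 1]
                    best[i][w] = max(best[i][w], take)
        # Backtrack to recover the chosen measures.
        chosen, w = [], budget
        for i in range(n, 0, -1):
            if best[i][w] != best[i - 1][w]:
                chosen.append(i - 1)
                w -= costs[i - 1]
        return best[n][budget], sorted(chosen)

    costs    = [30, 10, 60, 20, 40]     # cost of each measure (k EUR)
    benefits = [50, 14, 90, 39, 55]     # expected risk reduction
    print(knapsack(costs, benefits, budget=100))   # -> (158.0, [0, 1, 3, 4])
    ```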

  17. Hybrid particle swarm optimization and tabu search approach for selecting genes for tumor classification using gene expression data.

    PubMed

    Shen, Qi; Shi, Wei-Min; Kong, Wei

    2008-02-01

    Gene expression data are characterized by thousands, even tens of thousands, of measured genes on only a few tissue samples. This can lead to possible overfitting and the curse of dimensionality, or even to a complete failure in the analysis of microarray data. Gene selection is an important component of gene expression-based tumor classification systems. In this paper, we develop a hybrid particle swarm optimization (PSO) and tabu search (HPSOTS) approach to gene selection for tumor classification. The incorporation of tabu search (TS) as a local improvement procedure enables the algorithm HPSOTS to escape local optima and show satisfactory performance. The proposed approach is applied to three different microarray data sets. Moreover, we compare the performance of HPSOTS on these datasets to that of stepwise selection and the pure TS and PSO algorithms. It has been demonstrated that HPSOTS is a useful tool for gene selection and mining high-dimensional data. PMID:18093877

  18. Applied tagmemics: A heuristic approach to the use of graphic aids in technical writing

    NASA Technical Reports Server (NTRS)

    Brownlee, P. P.; Kirtz, M. K.

    1981-01-01

    In technical report writing, two needs must be met if reports are to be usable by an audience: the language needs and the technical needs of that particular audience. A heuristic analysis helps to decide the most suitable format for information; that is, whether the information should be presented verbally or visually. The report writing process should be seen as an organic whole which can be divided and subdivided according to the writer's purpose, but which always functions as a totality. The tagmemic heuristic, because it itself follows a process of deconstructing and reconstructing information, lends itself to being a useful approach to the teaching of technical writing. The language and technical needs of the audience are analyzed by applying the abstract questions this heuristic asks to specific parts of the report, by examining the viability of the solution within the givens of the corporate structure, and by deciding which graphic or verbal format will best suit the writer's purpose. By following such a method, answers which are both specific and thorough in their range of application are found.

  19. Distribution function approach to redshift space distortions. Part V: perturbation theory applied to dark matter halos

    NASA Astrophysics Data System (ADS)

    Vlah, Zvonimir; Seljak, Uroš; Okumura, Teppei; Desjacques, Vincent

    2013-10-01

    Numerical simulations show that redshift space distortions (RSD) introduce strong scale dependence in the power spectra of halos, with ten percent deviations relative to linear theory predictions even on relatively large scales (k < 0.1h/Mpc) and even in the absence of satellites (which induce Fingers-of-God, FoG, effects). If unmodeled these effects prevent one from extracting cosmological information from RSD surveys. In this paper we use Eulerian perturbation theory (PT) and Eulerian halo biasing model and apply it to the distribution function approach to RSD, in which RSD is decomposed into several correlators of density weighted velocity moments. We model each of these correlators using PT and compare the results to simulations over a wide range of halo masses and redshifts. We find that with an introduction of a physically motivated halo biasing, and using dark matter power spectra from simulations, we can reproduce the simulation results at a percent level on scales up to k ~ 0.15h/Mpc at z = 0, without the need to have free FoG parameters in the model.

  20. Applying the WHO strategic approach to strengthening first and second trimester abortion services in Mongolia.

    PubMed

    Tsogt, Bazarragchaa; Seded, Khishgee; Johnson, Brooke R

    2008-05-01

    Abortion was made legal on request in Mongolia in 1989, following the collapse of the socialist regime, and later bound by a range of regulations. Concerned about the high number of abortions and inadequate quality of care in abortion services, the Ministry of Health applied the World Health Organization's Strategic Approach to issues related to abortion and contraception in 2003. The aim was to develop policies and programmes to reduce unintended pregnancies, mitigate complications from unsafe abortion, and improve the quality of abortion and contraception services for all socio-economic groups, including adolescents. This paper describes the changes that arose from a strategic assessment, highlighting the introduction of mifepristone-misoprostol for second trimester abortion. The aim was to replace mini-caesarean section and intra-uterine injection of Rivanol (ethacridine lactate), so that second trimester abortions could take place earlier than at 20 weeks gestation. National standards and guidelines for comprehensive abortion care were developed, the national pre-service training curriculum was harmonized with the new guidelines, at least one-third of the country's obstetrician-gynaecologists were trained in manual vacuum aspiration and medical abortion, and three model comprehensive abortion care units were established to provide high quality services to women, high quality training for providers and serve as nodes for further scaling up. PMID:18772093

  1. Distribution function approach to redshift space distortions. Part V: perturbation theory applied to dark matter halos

    SciTech Connect

    Vlah, Zvonimir; Seljak, Uroš; Okumura, Teppei; Desjacques, Vincent E-mail: seljak@physik.uzh.ch E-mail: Vincent.Desjacques@unige.ch

    2013-10-01

    Numerical simulations show that redshift space distortions (RSD) introduce strong scale dependence in the power spectra of halos, with ten percent deviations relative to linear theory predictions even on relatively large scales (k < 0.1h/Mpc) and even in the absence of satellites (which induce Fingers-of-God, FoG, effects). If unmodeled these effects prevent one from extracting cosmological information from RSD surveys. In this paper we use Eulerian perturbation theory (PT) and Eulerian halo biasing model and apply it to the distribution function approach to RSD, in which RSD is decomposed into several correlators of density weighted velocity moments. We model each of these correlators using PT and compare the results to simulations over a wide range of halo masses and redshifts. We find that with an introduction of a physically motivated halo biasing, and using dark matter power spectra from simulations, we can reproduce the simulation results at a percent level on scales up to k ∼ 0.15h/Mpc at z = 0, without the need to have free FoG parameters in the model.

  2. Optimal design of sewer networks using cellular automata-based hybrid methods: Discrete and continuous approaches

    NASA Astrophysics Data System (ADS)

    Afshar, M. H.; Rohani, M.

    2012-01-01

    In this article, cellular automata-based hybrid methods are proposed for the optimal design of sewer networks, and their performance is compared with some common heuristic search methods. The problem of optimal design of sewer networks is first decomposed into two sub-optimization problems, which are solved iteratively in a two-stage manner. In the first stage, the pipe diameters of the network are assumed fixed and the nodal cover depths of the network are determined by solving a nonlinear sub-optimization problem. A cellular automata (CA) method is used for the solution of the optimization problem, with the network nodes considered as the cells and their cover depths as the cell states. In the second stage, the nodal cover depths calculated in the first stage are fixed and the pipe diameters are calculated by solving a second nonlinear sub-optimization problem. Once again a CA method is used to solve the optimization problem of the second stage, with the pipes considered as the CA cells and their corresponding diameters as the cell states. Two different updating rules are derived and used for the CA of the second stage, depending on the treatment of the pipe diameters. In the continuous approach, the pipe diameters are considered as continuous variables and the corresponding updating rule is derived mathematically from the original objective function of the problem. In the discrete approach, however, an ad hoc updating rule is derived and used, taking into account the discrete nature of the pipe diameters. The proposed methods are used to optimally solve two sewer network problems and the results are presented and compared with those obtained by other methods. The results show that the proposed CA-based hybrid methods are more efficient and effective than the most powerful search methods considered in this work.

  3. Responsive Leadership in Social Services: A Practical Approach for Optimizing Engagement and Performance.

    PubMed

    Lewis, Sarah

    2016-03-01

    Responsive Leadership in Social Services: A Practical Approach for Optimizing Engagement and Performance emphasizes the importance of effective supervision as a key component of quality leadership. The Responsive Leadership Approach considers employee needs, values, goals, and strengths to optimize worker performance. It is posited that when leaders integrate and operationalize the meaning embedded in the "employee story," they improve employee engagement and work performance as well as advance their own leadership ability. Discovery tools such as the Key Performance Motivators Scale, Preferred Leadership Profile, and Strengths Index are provided. The impact of operationalizing important values and using a strengths-based approach on organizational climate and employee morale is explored. Active listening and empathic response are discussed as practical methods to discover employee meaning. Techniques for dealing with "difficult" employees and undesirable attitudes and behaviors are described. This book is a valuable resource for developing the leadership capacities of first-time and experienced health and social services supervisors. PMID:26724310

  4. Comparison of optimization-based approaches to imaging spectroscopic inversion in coastal waters

    NASA Astrophysics Data System (ADS)

    Filippi, Anthony M.; Mishonov, Andrey

    2005-06-01

    The United States Navy has recently shifted focus from open-ocean warfare to joint operations in optically complex nearshore regions. Accurately estimating bathymetry and water column inherent optical properties (IOPs) from passive remotely sensed imagery can be an important facilitator of naval operations. Lee et al. developed a semianalytical model that describes the relationship between shallow-water bottom depth, IOPs, and subsurface and above-surface reflectance. They also developed a nonlinear optimization-based technique that estimates bottom depth and IOPs using only measured spectral remote sensing reflectance as input. While quite effective, the inversion can lose accuracy when applied to noisy field data. In this research, the nonlinear optimization-based Lee et al. inversion algorithm was used as a baseline method, and it provided the framework for a proposed hybrid evolutionary/classical optimization approach to hyperspectral data processing. All aspects of the proposed implementation were held constant with those of Lee et al., except that a hybrid evolutionary/classical optimizer (HECO) was substituted for the nonlinear method. HECO required more computer-processing time. In addition, HECO is nondeterministic, and its termination strategy is heuristic. However, the HECO method makes no assumptions regarding the mathematical form of the problem functions. Also, whereas smooth nonlinear optimization is only guaranteed to find a locally optimal solution, HECO has a higher probability of finding a more globally optimal result. While the HECO-acquired results are not provably optimal, we have empirically found that for certain variables, HECO does provide estimates comparable to nonlinear optimization (e.g., bottom albedo at 550 nm).

  5. A Bayesian approach to optimal sensor placement for structural health monitoring with application to active sensing

    NASA Astrophysics Data System (ADS)

    Flynn, Eric B.; Todd, Michael D.

    2010-05-01

    This paper introduces a novel approach for optimal sensor and/or actuator placement for structural health monitoring (SHM) applications. Starting from a general formulation of Bayes risk, we derive a global optimality criterion within a detection theory framework. The optimal configuration is then established as the one that minimizes the expected total presence of either type I or type II error during the damage detection process. While the approach is suitable for many sensing/actuation SHM processes, we focus on the example of active sensing using guided ultrasonic waves by implementing an appropriate statistical model of the wave propagation and feature extraction process. This example implements both pulse-echo and pitch-catch actuation schemes and takes into account line-of-sight visibility and non-uniform damage probabilities over the monitored structure. The optimization space is searched using a genetic algorithm with a time-varying mutation rate. We provide three actuator/sensor placement test problems and discuss the optimal solutions generated by the algorithm.
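
    The search strategy lends itself to a compact illustration. The sketch below is a hypothetical genetic algorithm with a time-varying (decaying) mutation rate over candidate sensor sites; the fitness function is a placeholder for the Bayes-risk evaluation, which in the paper requires the full wave-propagation and detection-statistics model.

```python
import random

N_SITES, N_SENSORS, POP, GENS = 40, 5, 30, 60

def bayes_risk(layout):
    """Placeholder for the expected detection error of a layout. The real
    criterion evaluates type I/II error under the wave-propagation model;
    here we simply reward well-spread sensors (smaller is better)."""
    s = sorted(layout)
    return -min(b - a for a, b in zip(s, s[1:]))

def mutate(layout, rate):
    """With probability `rate`, move one sensor to a random unoccupied site."""
    layout = list(layout)
    if random.random() < rate:
        free = [s for s in range(N_SITES) if s not in layout]
        layout[random.randrange(len(layout))] = random.choice(free)
    return layout

population = [random.sample(range(N_SITES), N_SENSORS) for _ in range(POP)]
for gen in range(GENS):
    rate = 0.05 + 0.5 * (1.0 - gen / GENS)   # time-varying mutation rate
    population.sort(key=bayes_risk)          # best (lowest risk) first
    elite = population[: POP // 2]
    population = elite + [mutate(random.choice(elite), rate) for _ in elite]

best = min(population, key=bayes_risk)
print("best layout:", sorted(best))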

  6. A new systems approach to optimizing investments in gas production and distribution

    SciTech Connect

    Dougherty, E.L.

    1983-03-01

    This paper presents a new analytical approach for determining the optimal sequence of investments to make in each year of an extended planning horizon in each of a group of reservoirs producing gas and gas liquids through an interconnected trunkline network and a gas processing plant. The optimality criterion is to maximize net present value while satisfying fixed offtake requirements for dry gas, but with no limits on gas liquids production. The planning problem is broken into n + 2 separate but interrelated subproblems: gas reservoir development and production, gas flow in a trunkline gathering system, and plant separation activities to remove undesirable gas (CO2) or to recover valuable liquid components. The optimal solution for each subproblem depends upon the optimal solutions for all of the other subproblems, so the overall optimal solution is obtained iteratively. The iteration technique used is based upon a combination of heuristics and the decomposition algorithm of mathematical programming. Each subproblem is solved once during each overall iteration. In addition to presenting some mathematical details of the solution approach, this paper describes a computer system which has been developed to obtain solutions.

  7. Finding Bayesian Optimal Designs for Nonlinear Models: A Semidefinite Programming-Based Approach

    PubMed Central

    Duarte, Belmiro P. M.; Wong, Weng Kee

    2014-01-01

    This paper uses semidefinite programming (SDP) to construct Bayesian optimal designs for nonlinear regression models. The setup here extends the formulation of the optimal design problem as an SDP problem from linear to nonlinear models. Gaussian quadrature formulas (GQF) are used to compute the expectation in the Bayesian design criterion, such as D-, A- or E-optimality. As an illustrative example, we demonstrate the approach using the power-logistic model and compare the results with those in the literature. Additionally, we investigate how the optimal design is impacted by different discretising schemes for the design space, different amounts of uncertainty in the parameter values, different choices of GQF, and different prior distributions for the vector of model parameters, including normal priors with and without correlated components. Further applications to find Bayesian D-optimal designs with two regressors for a logistic model and a two-variable generalised linear model with a gamma distributed response are discussed, and some limitations of our approach are noted. PMID:26512159

  8. A Neuroscience Approach to Optimizing Brain Resources for Human Performance in Extreme Environments

    PubMed Central

    Paulus, Martin P.; Potterat, Eric G.; Taylor, Marcus K.; Van Orden, Karl F.; Bauman, James; Momen, Nausheen; Padilla, Genieleah A.; Swain, Judith L.

    2009-01-01

    Extreme environments requiring optimal cognitive and behavioral performance occur in a wide variety of situations ranging from complex combat operations to elite athletic competitions. Although a large literature characterizes psychological and other aspects of individual differences in performances in extreme environments, virtually nothing is known about the underlying neural basis for these differences. This review summarizes the cognitive, emotional, and behavioral consequences of exposure to extreme environments, discusses predictors of performance, and builds a case for the use of neuroscience approaches to quantify and understand optimal cognitive and behavioral performance. Extreme environments are defined as an external context that exposes individuals to demanding psychological and/or physical conditions, and which may have profound effects on cognitive and behavioral performance. Examples of these types of environments include combat situations, Olympic-level competition, and expeditions in extreme cold, at high altitudes, or in space. Optimal performance is defined as the degree to which individuals achieve a desired outcome when completing goal-oriented tasks. It is hypothesized that individual variability with respect to optimal performance in extreme environments depends on a well “contextualized” internal body state that is associated with an appropriate potential to act. This hypothesis can be translated into an experimental approach that may be useful for quantifying the degree to which individuals are particularly suited to performing optimally in demanding environments. PMID:19447132

  9. Hybrid Sequencing Approach Applied to Human Fecal Metagenomic Clone Libraries Revealed Clones with Potential Biotechnological Applications

    PubMed Central

    Džunková, Mária; D’Auria, Giuseppe; Pérez-Villarroya, David; Moya, Andrés

    2012-01-01

    Natural environments represent an incredible source of microbial genetic diversity. Discovery of novel biomolecules involves biotechnological methods that often require the design and implementation of biochemical assays to screen clone libraries. However, when an assay is applied to thousands of clones, one may eventually end up with very few positive clones which, in most cases, have to be "domesticated" for downstream characterization and application, and this makes screening both laborious and expensive. The negative clones, which are not considered by the selected assay, may also have biotechnological potential; unfortunately, however, they remain unexplored. Knowledge of the clone sequences provides important clues about the potential biotechnological application of the clones in the library; however, sequencing clones one by one would be very time-consuming and expensive. In this study, we characterized the first metagenomic clone library from the feces of a healthy human volunteer, using a method based on 454 pyrosequencing coupled with clone-by-clone Sanger end-sequencing. Instead of whole individual clone sequencing, we sequenced 358 clones in a pool. The medium-large insert (7–15 kb) cloning strategy allowed us to assemble these clones correctly, and to assign the clone ends to maintain the link between the position of a living clone in the library and the annotated contig from the 454 assembly. Finally, we found several open reading frames (ORFs) with previously described potential medical application. The proposed approach allows planning ad hoc biochemical assays for the clones of interest, and the appropriate sub-cloning strategy for gene expression in suitable vectors/hosts. PMID:23082187

  10. A new approach for investigating venom function applied to venom calreticulin in a parasitoid wasp.

    PubMed

    Siebert, Aisha L; Wheeler, David; Werren, John H

    2015-12-01

    A new method is developed to investigate functions of venom components, using venom gene RNA interference knockdown in the venomous animal coupled with RNA sequencing in the envenomated host animal. The vRNAi/eRNA-Seq approach is applied to the venom calreticulin component (v-crc) of the parasitoid wasp Nasonia vitripennis. Parasitoids are common, venomous animals that inject venom proteins into host insects, where they modulate physiology and metabolism to produce a better food resource for the parasitoid larvae. vRNAi/eRNA-Seq indicates that v-crc acts to suppress expression of innate immune cell response, enhance expression of clotting genes in the host, and up-regulate cuticle genes. V-crc KD also results in an increased melanization reaction immediately following envenomation. We propose that v-crc inhibits innate immune response to parasitoid venom and reduces host bleeding during adult and larval parasitoid feeding. Experiments do not support the hypothesis that v-crc is required for the developmental arrest phenotype observed in envenomated hosts. We propose that an important role for some venom components is to reduce (modulate) the exaggerated effects of other venom components on target host gene expression, physiology, and survival, and term this venom mitigation. A model is developed that uses vRNAi/eRNA-Seq to quantify the contribution of individual venom components to total venom phenotypes, and to define different categories of mitigation by individual venoms on host gene expression. Mitigating functions likely contribute to the diversity of venom proteins in parasitoids and other venomous organisms. PMID:26359852

  11. Heat treatment optimization of alumina/aluminum metal matrix composites using the Taguchi approach

    SciTech Connect

    Saigal, A.; Leisk, G. )

    1992-03-01

    The paper describes the use of the Taguchi approach for optimizing the heat treatment process of alumina-reinforced Al-6061 metal-matrix composites (MMCs). It is shown that the use of the Taguchi method makes it possible to test a great number of factors simultaneously and to provide a statistical data base that can be used for sensitivity and optimization studies. The results of plotting S/N values versus vol pct, solutionizing time, aging time, and aging temperature showed that the solutionizing time and the aging temperature significantly affect both the yield and the ultimate tensile strength of alumina/Al MMCs.

  12. Dual-energy approach to contrast-enhanced mammography using the balanced filter method: Spectral optimization and preliminary phantom measurement

    SciTech Connect

    Saito, Masatoshi

    2007-11-15

    Dual-energy contrast agent-enhanced mammography is a technique of demonstrating breast cancers obscured by a cluttered background resulting from the contrast between soft tissues in the breast. The technique has usually been implemented by exploiting two exposures to different x-ray tube voltages. In this article, another dual-energy approach using the balanced filter method without switching the tube voltages is described. For the spectral optimization of dual-energy mammography using the balanced filters, we applied a theoretical framework reported by Lemacks et al. [Med. Phys. 29, 1739-1751 (2002)] to calculate the signal-to-noise ratio (SNR) in an iodinated contrast agent subtraction image. This permits the selection of beam parameters such as tube voltage and balanced filter material, and the optimization of the latter's thickness with respect to some critical quantity, in this case mean glandular dose. For an imaging system with a 0.1 mm thick CsI:Tl scintillator, we predict that the optimal tube voltage would be 45 kVp for a tungsten anode using zirconium, iodine, and neodymium balanced filters. A mean glandular dose of 1.0 mGy is required to obtain an SNR of 5 in order to detect 1.0 mg/cm² iodine in the resulting clutter-free image of a 5 cm thick breast composed of 50% adipose and 50% glandular tissue. In addition to spectral optimization, we carried out phantom measurements to demonstrate the present dual-energy approach for obtaining a clutter-free image, which preferentially shows iodine, of a breast phantom comprising three major components: acrylic spheres, olive oil, and an iodinated contrast agent. The detection of iodine details on the cluttered background originating from the contrast between acrylic spheres and olive oil is analogous to the task of distinguishing contrast agents in a mixture of glandular and adipose tissues.

  13. Approach for optimization of the color rendering index of light mixtures.

    PubMed

    Lin, Ku Chin

    2010-07-01

    The general CIE color rendering index (CRI) of light is an important index to evaluate the quality of illumination. However, because of the complexity of measuring the rendering ability under designated constraints, an approach for general mathematical formulation and global optimization of the rendering ability of light emitting diode (LED) light mixtures is difficult to develop. This study is mainly devoted to developing a mathematical formulation and a numerical method for the CRI optimization. The method is developed based on the so-called complex method [Computer J. 8, 42 (1965); G. V. Reklaitis et al., Engineering Optimization: Methods and Applications (Wiley, 1983)] with modifications. It is first applied to 3-color light mixtures and then extended to a hierarchical and iterative structure for higher-order light mixtures. The optimization is studied under the constraints of bounded relative intensities of the light mixture, a designated correlated color temperature (CCT), and the required approximate white of the light mixture. The problems of inconsistent constraints and solutions are addressed. The CRI is a complicated function of the relative intensities of the compound illuminators of the mixture. The proposed method requires no derivatives of the function and is well suited to the optimization. This is demonstrated by simulation for RGBW LED light mixtures. The results show that global and unique convergence to the optimum within the required tolerances for CRI and spatial dispersivity is always achieved. PMID:20596135

  14. Comparison of Ensemble and Adjoint Approaches to Variational Optimization of Observational Arrays

    NASA Astrophysics Data System (ADS)

    Nechaev, D.; Panteleev, G.; Yaremchuk, M.

    2015-12-01

    Comprehensive monitoring of the circulation in the Chukchi Sea and Bering Strait is one of the key prerequisites of a successful long-term forecast of the Arctic Ocean state. Since the number of continuously maintained observational platforms is restricted by logistical and political constraints, the configuration of such an observing system should be guided by an objective strategy that optimizes the observing system coverage, design, and the expenses of monitoring. The presented study addresses the optimization of a system consisting of a limited number of observational platforms with respect to reducing the uncertainties in monitoring the volume/freshwater/heat transports through a set of key sections in the Chukchi Sea and Bering Strait. Variational algorithms for the optimization of observational arrays are verified in a test bed of 4Dvar-optimized summer-fall circulations in the Pacific sector of the Arctic Ocean. The results of an optimization approach based on a low-dimensional ensemble of model solutions are compared against a more conventional algorithm involving application of the tangent linear and adjoint models. Special attention is paid to the computational efficiency and portability of the optimization procedure.

  15. A System-Oriented Approach for the Optimal Control of Process Chains under Stochastic Influences

    NASA Astrophysics Data System (ADS)

    Senn, Melanie; Schäfer, Julian; Pollak, Jürgen; Link, Norbert

    2011-09-01

    Process chains in manufacturing consist of multiple connected processes in terms of dynamic systems. The properties of a product passing through such a process chain are influenced by the transformation of each single process. There exist various methods for the control of individual processes, such as classical state controllers from cybernetics or function mapping approaches realized by statistical learning. These controllers ensure that a desired state is obtained at process end despite variations in the input and disturbances. The interactions between the single processes are thereby neglected, but play an important role in the optimization of the entire process chain. We divide the overall optimization into two phases: (1) the solution of the optimization problem by Dynamic Programming to find the optimal control variable values for each process for any encountered end state of its predecessor, and (2) the application of the optimal control variables at runtime for the detected initial process state. The optimization problem is solved by selecting adequate control variables for each process in the chain backwards, based on predefined quality requirements for the final product. To demonstrate the proposed concept, we have chosen a process chain from sheet metal manufacturing with simplified transformation functions.
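
    The two-phase scheme can be illustrated with a toy chain. In the sketch below (all transfer functions, costs, and discretizations are invented for illustration), phase 1 runs backward Dynamic Programming to tabulate the optimal control for every process and every possible incoming state, and phase 2 simply looks up the control for the state actually observed at runtime.

```python
STATES = [0, 1, 2]        # coarse product-property classes (toy)
CONTROLS = [0, 1]         # low/high process setting (toy)

def step(k, s, u):
    """Toy transformation of process k; a real chain would use learned or
    physical transfer functions."""
    return min(2, max(0, s + u - (k % 2)))

def stage_cost(k, s, u):
    return float(u)                        # e.g. energy cost of the action

def terminal_cost(s):
    return 0.0 if s == 2 else 10.0         # quality requirement on the product

def solve(n_processes):
    """Phase 1: backward DP tabulates the optimal control for ANY state."""
    V = {s: terminal_cost(s) for s in STATES}
    policy = []
    for k in reversed(range(n_processes)):
        pk = {s: min(CONTROLS,
                     key=lambda u: stage_cost(k, s, u) + V[step(k, s, u)])
              for s in STATES}
        V = {s: stage_cost(k, s, pk[s]) + V[step(k, s, pk[s])] for s in STATES}
        policy.insert(0, pk)
    return policy

# Phase 2: at runtime, look up the control for the detected incoming state.
state = 0
for k, pk in enumerate(solve(4)):
    u = pk[state]
    print(f"process {k}: state={state} -> control={u}")
    state = step(k, state, u)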

  16. An auto-adaptive optimization approach for targeting nonpoint source pollution control practices

    NASA Astrophysics Data System (ADS)

    Chen, Lei; Wei, Guoyuan; Shen, Zhenyao

    2015-10-01

    To address the computationally intensive and technically complex control of nonpoint source pollution, the traditional genetic algorithm was modified into an auto-adaptive pattern, and a new framework was proposed by integrating this new algorithm with a watershed model and an economic module. Although conceptually simple and comprehensive, the proposed algorithm searches automatically for Pareto-optimal solutions without complex calibration of the optimization parameters. The model was applied in a case study in a typical watershed of the Three Gorges Reservoir area, China. The results indicated that the evolutionary process of the optimization was improved by the incorporation of auto-adaptive parameters. In addition, the proposed algorithm outperformed state-of-the-art existing algorithms in terms of convergence ability and computational efficiency. At the same cost level, solutions with greater pollutant reductions could be identified. From a scientific viewpoint, the proposed algorithm could be extended to other watersheds to provide cost-effective configurations of best management practices (BMPs).

  17. An exact approach to direct aperture optimization in IMRT treatment planning

    NASA Astrophysics Data System (ADS)

    Men, Chunhua; Romeijn, H. Edwin; Caner Taşkın, Z.; Dempsey, James F.

    2007-12-01

    We consider the problem of intensity-modulated radiation therapy (IMRT) treatment planning using direct aperture optimization. While this problem has been relatively well studied in recent years, most approaches employ a heuristic approach to the generation of apertures. In contrast, we use an exact approach that explicitly formulates the fluence map optimization (FMO) problem as a convex optimization problem in terms of all multileaf collimator (MLC) deliverable apertures and their associated intensities. However, the number of deliverable apertures, and therefore the number of decision variables and constraints in the new problem formulation, is typically enormous. To overcome this, we use an iterative approach that employs a subproblem whose optimal solution either provides a suitable aperture to add to a given pool of allowable apertures or concludes that the current solution is optimal. We are able to handle standard consecutiveness, interdigitation and connectedness constraints that may be imposed by the particular MLC system used, as well as jaws-only delivery. Our approach has the additional advantage that it can explicitly account for transmission of dose through the part of an aperture that is blocked by the MLC system, yielding a more precise assessment of the treatment plan than what is possible using a traditional beamlet-based FMO problem. Finally, we develop and test two stopping rules that can be used to identify treatment plans of high clinical quality that are deliverable very efficiently. Tests on clinical head-and-neck cancer cases showed the efficacy of our approach, yielding treatment plans comparable in quality to plans obtained by the traditional method with a reduction of more than 75% in the number of apertures and a reduction of more than 50% in beam-on time, with only a modest increase in computational effort. The results also show that delivery efficiency is very insensitive to the addition of traditional MLC constraints; however, jaws

  18. A Simulation/Optimization approach to manage groundwater resources in the Gaza aquifer (Palestinian Territories) under climate change conditions

    NASA Astrophysics Data System (ADS)

    Dentoni, Marta; Qahman, Khalid; Deidda, Roberto; Paniconi, Claudio; Lecca, Giuditta

    2013-04-01

    The Gaza aquifer is the main source of water for agricultural, domestic, and industrial uses in the Gaza Strip. The rapid increase on water demand due to continuous population growth has led to water scarcity and contamination by seawater intrusion (SWI). Furthermore, current projections of future climatic conditions (IPCC, 2007) point to potential decreases in available water, both inflows and outflows. A numerical assessment of SWI in the Gaza coastal aquifer under climate induced changes has been carried out by means of the CODESA-3D model of density-dependent variably saturated flow and salt transport in groundwaters. After integrating available data on climatology, geology, geomorphology, hydrology, hydrogeology, soil use, and groundwater exploitation relative to the period 1935-2010, the calibrated and validated model was used to simulate the response of the hydrological basin to actual and future scenarios of climate change obtained from different regional circulation models. The results clearly show that, if current pumping rates are maintained, seawater intrusion will worsen. To manage sustainable aquifer development under effective recharge operations and water quality constraints, a decision support system based on a simulation/optimization (S/O) approach was applied to the Gaza study site. The S/O approach is based on coupling the CODESA-3D model with the Carroll's Genetic Algorithm Driver. The optimization model incorporates two conflicting objectives using a penalty method: maximizing pumping rates from the aquifer wells while limiting the salinity of the water withdrawn. The resulting coastal aquifer management model was applied over a 30-year time period to identify the optimum spatial distribution of pumping rates at the control wells. The optimized solution provides for a general increase in water table levels and a decrease in the total extracted salt mass while keeping total abstraction rates relatively constant, with reference to non-optimized

  19. Nonsmooth optimization approaches to VDA of models with on/off parameterizations: Theoretical issues

    NASA Astrophysics Data System (ADS)

    Jiang, Zhu; Kamachi, Masafumi; Guangqing, Zhou

    2002-05-01

    Some variational data assimilation problems of time- and space-discrete models with on/off parameterizations can be regarded as nonsmooth optimization problems. Some theoretical issues related to those problems are systematically addressed. One of the basic concepts in nonsmooth optimization is the subgradient, a generalized notion of the gradient of the cost function. First it is shown that the concept of the subgradient leads to a clear definition of the adjoint variables in the conventional adjoint model at singular points caused by on/off switches. Using an illustrative example of a multi-layer diffusion model with convective adjustment, it is proved that the solution of the conventional adjoint model cannot be interpreted as a Gateaux derivative or a directional derivative at singular points, but can be interpreted as a subgradient of the cost function. Two existing smooth optimization approaches are then reviewed which are used in current data assimilation practice. The first approach is the conventional adjoint model plus smooth optimization algorithms. Some conditions under which the approach can converge to the minimum are discussed. The other is the smoothing and regularization approach, which removes some thresholds in physical parameterizations. Two nonsmooth optimization approaches are also reviewed. One is the subgradient method, which uses the conventional adjoint model. The method is convergent, but very slow. The other, the bundle method, is more efficient. The main idea of the bundle method is to use the minimum-norm vector of the subdifferential, which is the convex hull of all subgradients, as the descent direction. However, finding all subgradients is very difficult in general. Therefore bundle methods are modified to use only one subgradient that can be calculated by the conventional adjoint model. In order to develop an efficient bundle method, a set-valued adjoint model, as a generalization of the conventional adjoint model, is proposed. It
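
    A minimal sketch of the subgradient method described above, on a toy cost with an on/off switch: the "adjoint" returns a single element of the subdifferential even at the kink, and a diminishing step size gives the slow but convergent behaviour noted in the text. The cost function and step schedule are illustrative, not from the paper.

```python
def cost(x):
    """Nonsmooth toy cost with an on/off switch at x = 1 (cf. a convective
    adjustment threshold). Its minimizer is exactly the kink x = 1."""
    return (x - 2.0) ** 2 + 3.0 * max(0.0, x - 1.0)

def one_subgradient(x):
    """What a conventional adjoint provides: ONE element of the
    subdifferential, even at the singular point x = 1."""
    g = 2.0 * (x - 2.0)
    if x >= 1.0:
        g += 3.0          # at x == 1 any value in [0, 3] would also be valid
    return g

x, best_x, best_f = 4.0, 4.0, cost(4.0)
for k in range(1, 500):
    x -= (0.5 / k) * one_subgradient(x)   # diminishing step: convergent, slow
    if cost(x) < best_f:                  # not a descent method, so track best
        best_x, best_f = x, cost(x)

print(f"approx minimizer {best_x:.3f} (true 1.0), cost {best_f:.3f}")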

  20. Direct approach for bioprocess optimization in a continuous flat-bed photobioreactor system.

    PubMed

    Kwon, Jong-Hee; Rögner, Matthias; Rexroth, Sascha

    2012-11-30

    Application of photosynthetic micro-organisms, such as cyanobacteria and green algae, for carbon-neutral energy production raises the need for cost-efficient photobiological processes. Optimization of these processes requires permanent control of many independent and mutually dependent parameters, for which a continuous cultivation approach has significant advantages. As central factors such as the cell density can be kept constant by turbidostatic control, light intensity and iron content, both of which strongly influence photosynthetic activity and hence productivity, can be optimized. Here we introduce an engineered low-cost 5 L flat-plate photobioreactor in combination with a simple and efficient optimization procedure for continuous photo-cultivation of microalgae. Based on direct determination of the growth rate at constant cell densities and continuous measurement of O₂ evolution, stress conditions and their effect on photosynthetic productivity can be directly observed. PMID:22789478

  1. Enhanced index tracking modeling in portfolio optimization with mixed-integer programming approach

    NASA Astrophysics Data System (ADS)

    Siew, Lam Weng; Jaaman, Saiful Hafizah Hj.; Ismail, Hamizun bin

    2014-09-01

    Enhanced index tracking is a popular form of portfolio management in stock market investment. Enhanced index tracking aims to construct an optimal portfolio that generates excess return over the return achieved by the stock market index, without purchasing all of the stocks that make up the index. The objective of this paper is to construct an optimal portfolio using a mixed-integer programming model which adopts a regression approach in order to generate a higher portfolio mean return than the stock market index return. In this study, the data consist of 24 component stocks of the Malaysian market index, the FTSE Bursa Malaysia Kuala Lumpur Composite Index, from January 2010 until December 2012. The results of this study show that the optimal portfolio of the mixed-integer programming model is able to generate a higher mean return than the FTSE Bursa Malaysia Kuala Lumpur Composite Index return while selecting only 30% of the total stock market index components.
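
    A simplified sketch of such a cardinality-constrained model, assuming the PuLP package with its bundled CBC solver is available: it minimizes the L1 tracking error against synthetic returns while holding at most K of N stocks. The paper's regression-based objective for excess return differs; this shows only the structural skeleton (binary selection variables linked to continuous weights).

```python
import random
import pulp

random.seed(1)
T, N, K = 24, 10, 3                      # months, stocks, cardinality (~30%)
index_r = [random.gauss(0.005, 0.03) for _ in range(T)]
stock_r = [[ir + random.gauss(0.002, 0.02) for _ in range(N)] for ir in index_r]

prob = pulp.LpProblem("index_tracking", pulp.LpMinimize)
w = [pulp.LpVariable(f"w{i}", 0, 1) for i in range(N)]          # weights
y = [pulp.LpVariable(f"y{i}", cat="Binary") for i in range(N)]  # selection
d = [pulp.LpVariable(f"d{t}", 0) for t in range(T)]             # |deviation|

prob += pulp.lpSum(d)                        # minimize L1 tracking error
prob += pulp.lpSum(w) == 1                   # fully invested
prob += pulp.lpSum(y) <= K                   # hold at most K stocks
for i in range(N):
    prob += w[i] <= y[i]                     # weight only if selected
for t in range(T):
    dev = pulp.lpSum(w[i] * stock_r[t][i] for i in range(N)) - index_r[t]
    prob += d[t] >= dev
    prob += d[t] >= -dev                     # linearized absolute value

prob.solve(pulp.PULP_CBC_CMD(msg=False))
picked = [i for i in range(N) if y[i].value() > 0.5]
print("selected stocks:", picked,
      "weights:", [round(w[i].value(), 3) for i in picked])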

  2. A holistic approach towards optimal planning of hybrid renewable energy systems: Combining hydroelectric and wind energy

    NASA Astrophysics Data System (ADS)

    Dimas, Panagiotis; Bouziotas, Dimitris; Efstratiadis, Andreas; Koutsoyiannis, Demetris

    2014-05-01

    Hydropower with pumped storage is a proven technology with very high efficiency that offers a unique large-scale energy buffer. Energy storage is employed by pumping water upstream to take advantage of excess produced energy (e.g., at night) and then retrieving this water to generate hydropower during demand peaks. Excess energy occurs due to other renewables (wind, solar) whose power fluctuates in an uncontrollable manner. By integrating these with hydroelectric plants with pumped storage facilities we can form autonomous hybrid renewable energy systems. Their optimal planning and management requires a holistic approach in which uncertainty is properly represented. In this context, a novel framework is proposed, based on stochastic simulation and optimization. It is tested on an existing hydrosystem in Greece, considering its combined operation with a hypothetical wind power system, for which we seek the optimal design that ensures the most beneficial performance of the overall scheme.

  3. Towards an optimal multidisciplinary approach to breast cancer treatment for older women.

    PubMed

    Thavarajah, Nemica; Menjak, Ines; Trudeau, Maureen; Mehta, Rajin; Wright, Frances; Leahey, Angela; Ellis, Janet; Gallagher, Damian; Moore, Jennifer; Bristow, Bonnie; Kay, Noreen; Szumacher, Ewa

    2015-01-01

    The treatment of breast cancer presents specific concerns that are unique to the needs of older female patients. While treatment of early breast cancer does not vary greatly with age, the optimal management of older women with breast cancer often requires complex interdisciplinary supportive care due to multiple comorbidities. This article reviews optimal approaches to breast cancer in women 65 years and older from an interdisciplinary perspective. A literature review was conducted using MEDLINE and EMBASE, choosing articles concentrating on the management of older breast cancer patients from the point of view of several disciplines, including geriatrics, radiation oncology, medical oncology, surgical oncology, psychooncology, palliative care, nursing, and social work. This patient population requires interprofessional collaboration from the time of diagnosis, throughout treatment, and into the recovery period. Thus, we recommend an interdisciplinary program dedicated to the treatment of older women with breast cancer to optimize their cancer care. PMID:26897863

  4. Ant Colony Optimization Approaches to Clustering of Lung Nodules from CT Images

    PubMed Central

    Gopalakrishnan, Ravichandran C.; Kuppusamy, Veerakumar

    2014-01-01

    Lung cancer is becoming a threat to mankind. Applying machine learning algorithms for the detection and segmentation of irregularly shaped lung nodules remains a remarkable milestone in CT scan image analysis research. In this paper, we apply the ACO algorithm for lung nodule detection. We have compared its performance against three other algorithms, namely, the Otsu algorithm, the watershed algorithm, and global region based segmentation. In addition, we suggest a novel approach which involves variations of ACO, namely, refined ACO, logical ACO, and variant ACO. Variant ACO shows better reduction in false positives. In addition, we propose a black circular neighborhood approach to detect nodule centers from the edge detected image. Genetic algorithm based clustering is performed to cluster the nodules based on intensity, shape, and size. The performance of the overall approach is compared with hierarchical clustering to establish the improvement achieved by the proposed approach. PMID:25525455

  5. Ant colony optimization approaches to clustering of lung nodules from CT images.

    PubMed

    Gopalakrishnan, Ravichandran C; Kuppusamy, Veerakumar

    2014-01-01

    Lung cancer is becoming a threat to mankind. Applying machine learning algorithms for the detection and segmentation of irregularly shaped lung nodules remains a remarkable milestone in CT scan image analysis research. In this paper, we apply the ACO algorithm for lung nodule detection. We have compared its performance against three other algorithms, namely, the Otsu algorithm, the watershed algorithm, and global region based segmentation. In addition, we suggest a novel approach which involves variations of ACO, namely, refined ACO, logical ACO, and variant ACO. Variant ACO shows better reduction in false positives. In addition, we propose a black circular neighborhood approach to detect nodule centers from the edge detected image. Genetic algorithm based clustering is performed to cluster the nodules based on intensity, shape, and size. The performance of the overall approach is compared with hierarchical clustering to establish the improvement achieved by the proposed approach. PMID:25525455

  6. A new Monte Carlo-based treatment plan optimization approach for intensity modulated radiation therapy.

    PubMed

    Li, Yongbao; Tian, Zhen; Shi, Feng; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun

    2015-04-01

    Intensity-modulated radiation treatment (IMRT) plan optimization needs beamlet dose distributions. Pencil-beam or superposition/convolution type algorithms are typically used because of their high computational speed. However, inaccurate beamlet dose distributions may mislead the optimization process and hinder the resulting plan quality. To solve this problem, the Monte Carlo (MC) simulation method has been used to compute all beamlet doses prior to the optimization step. The conventional approach samples the same number of particles from each beamlet. Yet this is not the optimal use of MC in this problem. In fact, there are beamlets that have very small intensities after solving the plan optimization problem. For those beamlets, it may be possible to use fewer particles in dose calculations to increase efficiency. Based on this idea, we have developed a new MC-based IMRT plan optimization framework that iteratively performs MC dose calculation and plan optimization. At each dose calculation step, the particle numbers for beamlets were adjusted based on the beamlet intensities obtained by solving the plan optimization problem in the last iteration step. We modified a GPU-based MC dose engine to allow simultaneous computations of a large number of beamlet doses. To test the accuracy of our modified dose engine, we compared the dose from a broad beam and the summed beamlet doses in this beam in an inhomogeneous phantom. Agreement within 1% for the maximum difference and 0.55% for the average difference was observed. We then validated the proposed MC-based optimization schemes in one lung IMRT case. It was found that the conventional scheme required 10^6 particles from each beamlet to achieve an optimization result that differed from the ground truth by 3% in the fluence map and 1% in dose. In contrast, the proposed scheme achieved the same level of accuracy with on average 1.2 × 10^5 particles per beamlet. Correspondingly, the computation

  7. A new Monte Carlo-based treatment plan optimization approach for intensity modulated radiation therapy

    NASA Astrophysics Data System (ADS)

    Li, Yongbao; Tian, Zhen; Shi, Feng; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun

    2015-04-01

    Intensity-modulated radiation treatment (IMRT) plan optimization needs beamlet dose distributions. Pencil-beam or superposition/convolution type algorithms are typically used because of their high computational speed. However, inaccurate beamlet dose distributions may mislead the optimization process and hinder the resulting plan quality. To solve this problem, the Monte Carlo (MC) simulation method has been used to compute all beamlet doses prior to the optimization step. The conventional approach samples the same number of particles from each beamlet. Yet this is not the optimal use of MC in this problem. In fact, there are beamlets that have very small intensities after solving the plan optimization problem. For those beamlets, it may be possible to use fewer particles in dose calculations to increase efficiency. Based on this idea, we have developed a new MC-based IMRT plan optimization framework that iteratively performs MC dose calculation and plan optimization. At each dose calculation step, the particle numbers for beamlets were adjusted based on the beamlet intensities obtained by solving the plan optimization problem in the last iteration step. We modified a GPU-based MC dose engine to allow simultaneous computations of a large number of beamlet doses. To test the accuracy of our modified dose engine, we compared the dose from a broad beam and the summed beamlet doses in this beam in an inhomogeneous phantom. Agreement within 1% for the maximum difference and 0.55% for the average difference was observed. We then validated the proposed MC-based optimization schemes in one lung IMRT case. It was found that the conventional scheme required 10^6 particles from each beamlet to achieve an optimization result that differed from the ground truth by 3% in the fluence map and 1% in dose. In contrast, the proposed scheme achieved the same level of accuracy with on average 1.2 × 10^5 particles per beamlet. Correspondingly, the computation time
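
    The adaptive allocation idea can be sketched independently of any dose engine. Below, both the MC dose calculation and the fluence-map optimization are crude stand-ins; the point is the loop that reassigns the particle budget in proportion to the current beamlet intensities, with a small floor so weak beamlets still get sampled.

```python
import numpy as np

rng = np.random.default_rng(0)
n_beamlets, total_particles = 50, 1_000_000

def mc_dose(particles_per_beamlet):
    """Stand-in for the GPU MC engine: returns beamlet doses whose noise
    shrinks as 1/sqrt(N), mimicking Monte Carlo statistics."""
    true_dose = np.linspace(0.2, 1.0, n_beamlets)
    noise = rng.standard_normal(n_beamlets) / np.sqrt(
        np.maximum(particles_per_beamlet, 1))
    return true_dose + 0.1 * noise

def optimize_intensities(dose):
    """Stand-in for the fluence-map optimization: a soft-threshold that
    zeroes out weak beamlets and normalizes the rest."""
    x = np.clip(dose - 0.5, 0.0, None)
    return x / x.sum()

# Start with a uniform particle budget, then reallocate by intensity.
alloc = np.full(n_beamlets, total_particles // n_beamlets, dtype=float)
for it in range(5):
    dose = mc_dose(alloc)
    intensity = optimize_intensities(dose)
    # Assign particles in proportion to intensity, with a small floor
    # so no beamlet is starved entirely.
    alloc = (0.95 * intensity + 0.05 / n_beamlets) * total_particles

print(f"{(intensity > 0).sum()} active beamlets receive "
      f"{alloc[intensity > 0].sum() / total_particles:.0%} of the budget")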

  8. ALOHA: a novel probability fusion approach for scoring multi-parameter drug-likeness during the lead optimization stage of drug discovery.

    PubMed

    Debe, Derek A; Mamidipaka, Ravindra B; Gregg, Robert J; Metz, James T; Gupta, Rishi R; Muchmore, Steven W

    2013-09-01

    Automated lead optimization helper application (ALOHA) is a novel fitness scoring approach for small molecule lead optimization. ALOHA employs a series of generalized Bayesian models trained from public and proprietary pharmacokinetic; absorption, distribution, metabolism, and excretion (ADME); and toxicology data to determine regions of chemical space that are likely to have excellent drug-like properties. The input to ALOHA is a list of molecules, and the output is a set of individual probabilities as well as an overall probability that each of the molecules will pass a panel of user-selected assays. In addition to providing a summary of how and when to apply ALOHA, this paper discusses the validation of ALOHA's Bayesian models and probability fusion approach. Most notably, ALOHA is demonstrated to discriminate between members of the same chemical series with strong statistical significance, suggesting that ALOHA can be used effectively to select compound candidates for synthesis and progression at the lead optimization stage of drug discovery. PMID:24113765
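
    ALOHA's exact fusion rule is not spelled out above; under a naive independence assumption, panel-level fusion reduces to a product of per-assay pass probabilities, as in this sketch (assay names and values hypothetical):

```python
from math import prod

def fuse(assay_probs):
    """Overall probability that a molecule passes ALL assays in the panel,
    assuming (as a simplification) independence between assays."""
    return prod(assay_probs)

# Hypothetical per-assay pass probabilities from Bayesian models for one molecule.
molecule = {"solubility": 0.92, "microsomal_stability": 0.71,
            "hERG": 0.85, "CYP3A4": 0.78}
print("per-assay:", molecule)
print(f"overall pass probability: {fuse(molecule.values()):.2f}")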

  9. Modeling, simulation and optimization approaches for design of lightweight car body structures

    NASA Astrophysics Data System (ADS)

    Kiani, Morteza

    Simulation-based design optimization and the finite element method are used in this research to investigate weight reduction of car body structures made of metallic and composite materials under different design criteria. Besides crashworthiness in full frontal, offset frontal, and side impact scenarios, vibration frequencies, static stiffness, and joint rigidity are also considered. Energy absorption at the component level is used to study the effectiveness of carbon fiber reinforced polymer (CFRP) composite material with consideration of different failure criteria. A global-local design strategy is introduced and applied to multi-objective optimization of car body structures with CFRP components. Multiple example problems involving the analysis of full-vehicle crash and body-in-white models are used to examine the effect of material substitution and the choice of design criteria on weight reduction. The results of this study show that car body structures optimized for crashworthiness alone may not meet the vibration criterion. Moreover, optimized car body structures with CFRP components can be lighter, with crashworthiness superior to that of the baseline and the optimized metallic structures.

  10. Numerical approach of collision avoidance and optimal control on robotic manipulators

    NASA Technical Reports Server (NTRS)

    Wang, Jyhshing Jack

    1990-01-01

    Collision-free optimal motion and trajectory planning for robotic manipulators is solved by the sequential gradient restoration algorithm. Numerical examples of a two degree-of-freedom (DOF) robotic manipulator are presented to demonstrate the effectiveness of the optimization technique and the obstacle avoidance scheme. The obstacle is deliberately placed midway along, or even further inside, the previous no-obstacle optimal trajectory. In the minimum-time case, the trajectory grazes the obstacle and the minimum-time motion successfully avoids it. The minimum time is longer for the obstacle-avoidance cases than for the case without an obstacle. The obstacle avoidance scheme can deal with multiple obstacles of any ellipsoidal form by using artificial potential fields as penalty functions via distance functions. The method is promising for solving collision-free optimal control problems in robotics and can be applied to robotic manipulators of any DOF, with any performance indices, as well as to mobile robots. Since this method generates the optimum solution based on the Pontryagin extremum principle, rather than on assumptions, the results provide a benchmark against which other optimization techniques can be measured.
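
    The penalty construction can be sketched directly. Below, a hypothetical ellipsoidal obstacle contributes zero penalty outside its surface and a rapidly growing penalty inside, and the penalized cost of a discretized path is its length plus the summed penalties; the weights and geometry are illustrative.

```python
import numpy as np

def ellipsoid_penalty(p, center, semi_axes, weight=100.0):
    """Artificial-potential-field penalty via an ellipsoid distance function:
    zero outside the obstacle, growing as the point moves inside it."""
    q = (np.asarray(p) - center) / semi_axes
    dist = np.dot(q, q) - 1.0          # < 0 inside, 0 on surface, > 0 outside
    return weight * max(0.0, -dist) ** 2

def path_cost(points, center, semi_axes):
    """Toy performance index for a discretized trajectory: path length plus
    obstacle penalties at each waypoint."""
    length = sum(np.linalg.norm(b - a) for a, b in zip(points, points[1:]))
    penalty = sum(ellipsoid_penalty(p, center, semi_axes) for p in points)
    return length + penalty

pts = [np.array([x, 0.4 * x * (2 - x)]) for x in np.linspace(0, 2, 21)]
print(path_cost(pts, center=np.array([1.0, 0.3]),
                semi_axes=np.array([0.3, 0.2])))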

  11. A multidating approach applied to historical slackwater flood deposits of the Gardon River, SE France

    NASA Astrophysics Data System (ADS)

    Dezileau, L.; Terrier, B.; Berger, J. F.; Blanchemanche, P.; Latapie, A.; Freydier, R.; Bremond, L.; Paquier, A.; Lang, M.; Delgado, J. L.

    2014-06-01

    A multidating approach was carried out on slackwater flood deposits, preserved in valley side rock cave and terrace, of the Gardon River in Languedoc, southeast France. Lead-210, caesium-137, and geochemical analysis of mining-contaminated slackwater flood sediments have been used to reconstruct the history of these flood deposits. These age controls were combined with the continuous record of Gardon flow since 1890, and the combined records were then used to assign ages to slackwater deposits. The stratigraphic records of terrace GE and cave GG were excellent examples to illustrate the effects of erosion/preservation in a context of a progressively self-censoring, vertically accreting sequence. The sedimentary flood record of the terrace GE located at 10 m above the channel bed is complete for years post-1958 but incomplete before. During the 78-year period 1880-1958, 25 floods of a sufficient magnitude (> 1450 m³/s) have covered the terrace. Since 1958, however, the frequency of inundation of the deposits has been lower: only 5 or 6 floods in 52 years have been large enough to exceed the necessary threshold discharge (> 1700 m³/s). The progressive increase of the threshold discharge and the reduced frequency of inundation at the terrace could allow stabilization of the vegetation cover and improve protection against erosion from subsequent large magnitude flood events. The sedimentary flood record seems complete for cave GG located at 15 m above the channel bed. Here, the low frequency of events would have enabled a high degree of stabilization of the sedimentary flood record, rendering the deposits less susceptible to erosion. Radiocarbon dating is used in this study and compared to the other dating techniques. Eighty percent of radiocarbon dates on charcoals were considerably older than those obtained by the other techniques in the terrace. On the other hand, radiocarbon dating on seeds provided better results. This discrepancy between radiocarbon dates on

  12. A Computer-Assisted Approach for Conducting Information Technology Applied Instructions

    ERIC Educational Resources Information Center

    Chu, Hui-Chun; Hwang, Gwo-Jen; Tsai, Pei Jin; Yang, Tzu-Chi

    2009-01-01

    The growing popularity of computer and network technologies has attracted researchers to investigate the strategies and the effects of information technology applied instructions. Previous research has not only demonstrated the benefits of applying information technologies to the learning process, but has also revealed the difficulty of applying…

  13. An Optimal Control Approach for an Overall Cryogenic Plant Under Pulsed Heat Loads

    NASA Astrophysics Data System (ADS)

    Palaćın, Luis Gómez; Bradu, Benjamin; Viñuela, Enrique Blanco; Maekawa, Ryuji; Chalifour, Michel

    This work deals with the optimal management of a cryogenic plant composed of parallel refrigeration plants, which provide supercritical helium to pulsed heat loads. First, a data reconciliation approach is proposed to estimate precisely the refrigerator variables necessary to deduce the efficiency of each refrigerator. Second, taking these efficiencies into account, an optimal operation of the system is proposed and studied. Finally, while minimizing the power consumption of the refrigerators, the control system maintains stable operation of the cryoplant under pulsed heat loads. The management of the refrigerators is carried out by an upper control layer, which balances the relative production of cooling power in each refrigerator. In addition, this upper control layer deals with the mitigation of malfunctions and faults in the system. The proposed approach has been validated using a dynamic model of the cryoplant developed with the EcosimPro software, based on first principles (mass and energy balances) and thermo-hydraulic equations.

  14. An integrated approach of topology optimized design and selective laser melting process for titanium implants materials.

    PubMed

    Xiao, Dongming; Yang, Yongqiang; Su, Xubin; Wang, Di; Sun, Jianfeng

    2013-01-01

    Load-bearing bone implant materials should have sufficient stiffness and large porosity; these properties interact, since larger porosity causes lower mechanical properties. This paper seeks the maximum-stiffness architecture under a specified volume-fraction constraint using a topology optimization approach; that is, maximum porosity is achieved for predefined stiffness properties. The effective elastic moduli of conventional cubic and topology-optimized scaffolds were calculated using the finite element analysis (FEA) method; also, specimens with porosities of 41.1%, 50.3%, 60.2% and 70.7%, respectively, were fabricated by the Selective Laser Melting (SLM) process and were tested in compression. Results showed that the computational effective elastic modulus of the optimized scaffolds was approximately 13% higher than that of the cubic scaffolds, while the experimental stiffness values were 76% lower than the computational ones. The combination of the topology optimization approach and the SLM process is promising for the development of titanium implant materials in consideration of both porosity and mechanical stiffness. PMID:23988713

  15. Real-time PCR probe optimization using design of experiments approach.

    PubMed

    Wadle, S; Lehnert, M; Rubenwolf, S; Zengerle, R; von Stetten, F

    2016-03-01

    Primer and probe sequence designs are among the most critical input factors in real-time polymerase chain reaction (PCR) assay optimization. In this study, we present the use of statistical design of experiments (DOE) approach as a general guideline for probe optimization and more specifically focus on design optimization of label-free hydrolysis probes that are designated as mediator probes (MPs), which are used in reverse transcription MP PCR (RT-MP PCR). The effect of three input factors on assay performance was investigated: distance between primer and mediator probe cleavage site; dimer stability of MP and target sequence (influenza B virus); and dimer stability of the mediator and universal reporter (UR). The results indicated that the latter dimer stability had the greatest influence on assay performance, with RT-MP PCR efficiency increased by up to 10% with changes to this input factor. With an optimal design configuration, a detection limit of 3-14 target copies/10 μl reaction could be achieved. This improved detection limit was confirmed for another UR design and for a second target sequence, human metapneumovirus, with 7-11 copies/10 μl reaction detected in an optimum case. The DOE approach for improving oligonucleotide designs for real-time PCR not only produces excellent results but may also reduce the number of experiments that need to be performed, thus reducing costs and experimental times. PMID:27077046
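
    As a flavour of the DOE workflow, the sketch below builds a two-level full factorial over the three coded factors and fits a main-effects linear model; the response values are invented, whereas in the study they would come from RT-MP PCR efficiency measurements. The largest coefficient magnitude flags the dominant factor.

```python
import itertools
import numpy as np

# Two-level full factorial over the three input factors studied above
# (coded -1/+1); the responses here are invented for illustration.
factors = ["primer_probe_distance", "MP_target_stability", "mediator_UR_stability"]
design = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)
response = np.array([78, 80, 79, 82, 84, 86, 88, 95], dtype=float)  # % efficiency

# Fit the main-effects linear model y = b0 + sum(b_i * x_i) by least squares.
X = np.column_stack([np.ones(len(design)), design])
coef, *_ = np.linalg.lstsq(X, response, rcond=None)

for name, b in zip(factors, coef[1:]):
    print(f"{name:25s} effect per coded unit: {b:+.2f}")
# The largest |coefficient| identifies the dominant factor, analogous to the
# mediator/UR dimer stability found dominant in the study.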

  16. Surface Roughness Optimization of Polyamide-6/Nanoclay Nanocomposites Using Artificial Neural Network: Genetic Algorithm Approach

    PubMed Central

    Moghri, Mehdi; Omidi, Mostafa; Farahnakian, Masoud

    2014-01-01

    During the past decade, polymer nanocomposites have attracted considerable investment in research and development worldwide. One of the key factors that affect the quality of polymer nanocomposite products in machining is surface roughness. To obtain high-quality products and reduce machining costs, it is very important to determine the optimal machining conditions so as to achieve enhanced machining performance. The objective of this paper is to develop a predictive model using a combined design of experiments and artificial intelligence approach for the optimization of surface roughness in milling of polyamide-6 (PA-6) nanocomposites. A surface roughness predictive model was developed in terms of the milling parameters (spindle speed and feed rate) and nanoclay (NC) content using an artificial neural network (ANN). As the present study deals with a relatively small number of data points obtained from a full factorial design, application of a genetic algorithm (GA) for ANN training is considered an appropriate approach for developing an accurate and robust ANN model. In the optimization phase, a GA is used in conjunction with the explicit nonlinear function derived from the ANN to determine the optimal milling parameters for minimization of surface roughness for each PA-6 nanocomposite. PMID:24578636

  17. A computational approach for understanding immune response to multiple epitopes based on optimal control formulation

    NASA Astrophysics Data System (ADS)

    Zhao, Xiaopeng; Yang, Ruoting; Zhang, Mingjun; Xia, Henian

    2010-12-01

    The immune system does not respond with equal probability to every epitope of an invader. We investigate the immune system's decision-making process using optimal control principles. Mathematically, this formulation requires the solution of a two-point boundary-value problem, which is a challenging task, especially when the control variables are bounded. In this work, we develop a computational approach based on the shooting technique for bounded optimal control problems. We then utilize the computational approach to carry out extensive numerical studies on a simple immune response model with two competing controls. Numerical solutions demonstrate that the results of optimal control depend on the objective function, the limitations on the control inputs, as well as the amounts of peptides. Moreover, the state space of peptides can be divided into different regions according to the properties of the solutions. The developed algorithm not only provides a useful tool for understanding the decision-making strategies of the immune system but can also be utilized to solve other complex optimal control problems.
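
    The shooting idea for such two-point boundary-value problems can be sketched on a scalar problem with a bounded control: guess the unknown initial costate, integrate state and costate forward, and adjust the guess until the terminal costate condition is met. The dynamics and cost below are illustrative, not the immune response model of the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Toy bounded optimal-control problem: minimize the integral of (x^2 + u^2)/2
# subject to x' = -x + u with u in [-0.5, 0.5]. Pontryagin's principle gives
# u = clip(-lam, -0.5, 0.5) and lam' = lam - x, with lam(T) = 0.
# The unknown to "shoot" for is the initial costate lam(0).
T, x0 = 3.0, 2.0

def rhs(t, z):
    x, lam = z
    u = np.clip(-lam, -0.5, 0.5)        # bounded control from the Hamiltonian
    return [-x + u, lam - x]

def terminal_costate(lam0):
    """Shooting residual: lam(T) for a guessed lam(0); we need it to vanish."""
    sol = solve_ivp(rhs, (0.0, T), [x0, lam0], rtol=1e-8)
    return sol.y[1, -1]

lam0 = brentq(terminal_costate, 0.0, 10.0)   # bracket chosen by inspection
sol = solve_ivp(rhs, (0.0, T), [x0, lam0], rtol=1e-8)
print(f"lam(0) = {lam0:.4f}, x(T) = {sol.y[0, -1]:.4f}")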

  18. Real-time PCR probe optimization using design of experiments approach

    PubMed Central

    Wadle, S.; Lehnert, M.; Rubenwolf, S.; Zengerle, R.; von Stetten, F.

    2015-01-01

    Primer and probe sequence designs are among the most critical input factors in real-time polymerase chain reaction (PCR) assay optimization. In this study, we present the use of statistical design of experiments (DOE) approach as a general guideline for probe optimization and more specifically focus on design optimization of label-free hydrolysis probes that are designated as mediator probes (MPs), which are used in reverse transcription MP PCR (RT-MP PCR). The effect of three input factors on assay performance was investigated: distance between primer and mediator probe cleavage site; dimer stability of MP and target sequence (influenza B virus); and dimer stability of the mediator and universal reporter (UR). The results indicated that the latter dimer stability had the greatest influence on assay performance, with RT-MP PCR efficiency increased by up to 10% with changes to this input factor. With an optimal design configuration, a detection limit of 3–14 target copies/10 μl reaction could be achieved. This improved detection limit was confirmed for another UR design and for a second target sequence, human metapneumovirus, with 7–11 copies/10 μl reaction detected in an optimum case. The DOE approach for improving oligonucleotide designs for real-time PCR not only produces excellent results but may also reduce the number of experiments that need to be performed, thus reducing costs and experimental times. PMID:27077046

  19. Pectin extraction from quince (Cydonia oblonga) pomace applying alternative methods: effect of process variables and preliminary optimization.

    PubMed

    Brown, Valeria Anahí; Lozano, Jorge E; Genovese, Diego Bautista

    2014-03-01

    The objectives of this study were to introduce alternative methods in the process of pectin extraction from quince pomace, to determine the effect of selected process variables (factors) on the obtained pectin, and to perform a preliminary optimization of the process. A fractional factorial experimental design was applied, where six factors were considered: quince pomace pretreatment (washing vs blanching), drying method (hot air vs LPSSD), acid extraction conditions (pH, temperature, and time), and pectin extract concentration method (vacuum evaporation vs ultrafiltration). The effects of these factors and their interactions on pectin yield (Y: 0.2-34.2 mg/g), GalA content (44.5-76.2%), and DM (47.5-90.9%) were determined. For these three responses, extraction pH was the main effect, but it was involved in two- and three-factor interactions. Regarding the alternative methods, LPSSD was required for maximum Y and GalA, and ultrafiltration for maximum GalA and DM. Response models were used to predict the optimum process conditions (quince blanching, pomace drying by LPSSD, acid extraction at pH 2.20, 80 °C, 3 h, and concentration under vacuum) to simultaneously maximize Y (25.2 mg/g), GalA (66.3%), and DM (66.4%). PMID:23733815

  20. Optimal contribution selection applied to the Norwegian and the North-Swedish cold-blooded trotter - a feasibility study.

    PubMed

    Olsen, H F; Meuwissen, T; Klemetsdal, G

    2013-06-01

    The aim of this study was to examine how to apply optimal contribution selection (OCS) in the Norwegian and the North-Swedish cold-blooded trotter and to give practical recommendations for the future. OCS was implemented using the software Gencont with overlapping generations and selected a few, but young, sires, as these turn over the generations faster and thus are less related to the mare candidates. In addition, a number of Swedish sires were selected, as they were less related to the selection candidates. We concluded that implementing OCS is feasible for selecting sires (there is no selection on mares), and we recommend that the number of available sire candidates be continuously updated because of, among other things, deaths and geldings. In addition, considering only sire candidates with a phenotype above the average of their year class would allow selection candidates from many year classes to be included and would circumvent the current limitation on the number of selection candidates in Gencont (approximately 3000). The results showed that the mare candidates can well be those mated the previous year. OCS will dynamically recruit young stallions and manage the culling or renewal of annual breeding permits for stallions that have been previously approved. For the annual mating proportion per sire, a constraint in accordance with the maximum that a sire can mate naturally is recommended. PMID:23679942

  1. Transmission Expansion Planning - A Multiyear Dynamic Approach Using a Discrete Evolutionary Particle Swarm Optimization Algorithm

    NASA Astrophysics Data System (ADS)

    Rocha, M. C.; Saraiva, J. T.

    2012-10-01

    The basic objective of Transmission Expansion Planning (TEP) is to schedule a number of transmission projects along an extended planning horizon, minimizing the network construction and operational costs while satisfying the requirement of delivering power safely and reliably to load centres along the horizon. This principle is quite simple, but the complexity of the problem and its impact on society transform TEP into a challenging issue. This paper describes a new approach to solving the dynamic TEP problem, based on an improved discrete integer version of the Evolutionary Particle Swarm Optimization (EPSO) meta-heuristic algorithm. The paper includes sections describing in detail the enhanced EPSO approach, the mathematical formulation of the TEP problem, including the objective function and the constraints, and a section devoted to the application of the developed approach to this problem. Finally, the use of the developed approach is illustrated using a case study based on the IEEE 24-bus 38-branch test system.
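
    A plain binary PSO with a sigmoid transfer function conveys the discrete flavour (the paper's EPSO additionally evolves its strategic weights, which is omitted here); the candidate-line costs, capacities, and demand are invented:

```python
import math
import random

# Toy TEP flavour: choose which of 8 candidate lines to build (binary vector)
# to minimize investment cost plus a penalty for unserved demand.
COST = [3, 5, 2, 7, 4, 6, 3, 5]
CAPACITY = [10, 15, 8, 20, 12, 18, 9, 14]
DEMAND = 45

def fitness(x):
    cap = sum(c for c, b in zip(CAPACITY, x) if b)
    invest = sum(c for c, b in zip(COST, x) if b)
    return invest + 10.0 * max(0, DEMAND - cap)   # penalized objective

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

n, swarm_size, iters = 8, 20, 100
X = [[random.randint(0, 1) for _ in range(n)] for _ in range(swarm_size)]
V = [[0.0] * n for _ in range(swarm_size)]
pbest = [x[:] for x in X]
gbest = min(X, key=fitness)[:]

for _ in range(iters):
    for i in range(swarm_size):
        for j in range(n):
            V[i][j] = (0.7 * V[i][j]
                       + 1.5 * random.random() * (pbest[i][j] - X[i][j])
                       + 1.5 * random.random() * (gbest[j] - X[i][j]))
            # Discrete update: the velocity sets the PROBABILITY of a 1-bit.
            X[i][j] = 1 if random.random() < sigmoid(V[i][j]) else 0
        if fitness(X[i]) < fitness(pbest[i]):
            pbest[i] = X[i][:]
    gbest = min(pbest, key=fitness)[:]

print("build plan:", gbest, "cost:", fitness(gbest))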

  2. Comparison of penalty functions on a penalty approach to mixed-integer optimization

    NASA Astrophysics Data System (ADS)

    Francisco, Rogério B.; Costa, M. Fernanda P.; Rocha, Ana Maria A. C.; Fernandes, Edite M. G. P.

    2016-06-01

    In this paper, we present a comparative study involving several penalty functions that can be used in a penalty approach for globally solving bound mixed-integer nonlinear programming (bMINLP) problems. The penalty approach relies on a continuous reformulation of the bMINLP problem by adding a particular penalty term to the objective function. A penalty function based on the `erf' function is proposed. The continuous nonlinear optimization problems are sequentially solved by the population-based firefly algorithm. Preliminary numerical experiments are carried out in order to analyze the quality of the produced solutions, when compared with other penalty functions available in the literature.
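
    To make the penalty reformulation concrete, here is a minimal sketch in which non-integrality is charged to the objective through an erf-based distance-to-integer term whose weight grows across a sequence of continuous solves. The penalty shape, the toy objective, and the use of a generic local solver in place of the paper's firefly algorithm are all assumptions for illustration.

        import numpy as np
        from math import erf
        from scipy.optimize import minimize

        def integrality_penalty(x, int_idx, sharpness=10.0):
            # erf-based term: zero at integer points, saturating quickly away from
            # them (a sketch; the exact form used in the paper may differ).
            return sum(erf(sharpness * abs(xi - round(xi))) for xi in x[int_idx])

        def objective(x):
            # Toy bMINLP objective; x[1] is required to be integer.
            return (x[0] - 1.3) ** 2 + (x[1] - 2.7) ** 2

        int_idx = [1]
        x0 = np.array([0.0, 0.0])
        for mu in [1.0, 4.0, 16.0, 64.0, 256.0]:      # sequentially tightened penalty
            res = minimize(lambda x, m=mu: objective(x) + m * integrality_penalty(x, int_idx),
                           x0, bounds=[(-5, 5), (-5, 5)], method="L-BFGS-B")
            x0 = res.x
        print(x0)   # x[0] stays near 1.3; x[1] is driven to the nearest integer, 3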

  3. CNS Multiparameter Optimization Approach: Is it in Accordance with Occam's Razor Principle?

    PubMed

    Raevsky, Oleg A

    2016-04-01

    A detailed analysis of the possibility of using the Multiparameter Optimization (MPO) approach for CNS/non-CNS classification of drugs was carried out. This work has shown that MPO descriptors are able to describe only the part of chemical transport into the CNS connected with transmembrane diffusion. Hence the "intuitive" CNS MPO approach, with its arbitrary selection of descriptors and calculation of score functions, its search for classification thresholds, and its absence of any chemometric procedures, leads to rather modest accuracy of CNS/non-CNS classification models. PMID:27491918

  4. An optimal control approach to pilot/vehicle analysis and Neal-Smith criteria

    NASA Technical Reports Server (NTRS)

    Bacon, B. J.; Schmidt, D. K.

    1984-01-01

    The approach of Neal and Smith was merged with the advances in pilot modeling by means of optimal control techniques. While confirming the findings of Neal and Smith, a methodology that explicitly includes the pilot's objective in attitude tracking was developed. More importantly, the method yields the required system bandwidth along with a better pilot model directly applicable to closed-loop analysis of systems of any order.

  5. Nodal Fermi surface pocket approaching an optimal quantum critical point in YBCO

    NASA Astrophysics Data System (ADS)

    Sebastian, Suchitra; Tan, Beng; Lonzarich, Gilbert; Ramshaw, Brad; Harrison, Neil; Balakirev, Fedor; Mielke, Chuck; Sabok, S.; Dabrowski, B.; Liang, Ruixing; Bonn, Doug; Hardy, Walter

    2014-03-01

    I present new quantum oscillation measurements over the entire underdoped regime in YBa2Cu3O6+x and YBa2Cu4O8 using ultra-high magnetic fields to destroy superconductivity and access the normal ground state. A robust small nodal Fermi surface created by charge order is found to extend over the entire underdoped range, exhibiting quantum critical signatures approaching optimal doping.

  6. A Novel Synthesis of Computational Approaches Enables Optimization of Grasp Quality of Tendon-Driven Hands

    PubMed Central

    Inouye, Joshua M.; Kutch, Jason J.; Valero-Cuevas, Francisco J.

    2013-01-01

    We propose a complete methodology to find the full set of feasible grasp wrenches and the corresponding wrench-direction-independent grasp quality for a tendon-driven hand with arbitrary design parameters. Monte Carlo simulations on two representative designs combined with multiple linear regression identified the parameters with the greatest potential to increase this grasp metric. This synthesis of computational approaches now enables the systematic design, evaluation, and optimization of tendon-driven hands. PMID:23335864

  7. Nanocarriers for optimizing the balance between interfollicular permeation and follicular uptake of topically applied clobetasol to minimize adverse effects.

    PubMed

    Mathes, C; Melero, A; Conrad, P; Vogt, T; Rigo, L; Selzer, D; Prado, W A; De Rossi, C; Garrigues, T M; Hansen, S; Guterres, S S; Pohlmann, A R; Beck, R C R; Lehr, C-M; Schaefer, U F

    2016-02-10

    The treatment of various hair disorders has become a central focus of good dermatologic patient care as it affects men and women all over the world. For many inflammatory-based scalp diseases, glucocorticoids are an essential part of treatment, even though they are known to cause systemic as well as local adverse effects when applied topically. Therefore, efficient targeting and avoidance of these side effects are of utmost importance. Optimizing the balance between drug release, interfollicular permeation, and follicular uptake may allow minimizing these adverse events and simultaneously improve drug delivery, given that one succeeds in targeting a sustained release formulation to the hair follicle. To test this hypothesis, three types of polymeric nanocarriers (nanospheres, nanocapsules, lipid-core nanocapsules) for the potent glucocorticoid clobetasol propionate (CP) were prepared. They all exhibited a sustained release of drug, as was desired. The particles were formulated as a dispersion and hydrogel and (partially) labeled with Rhodamine B for quantification purposes. Follicular uptake was investigated using the Differential Stripping method and was found to be highest for nanocapsules in dispersion after application of massage. Moreover, the active ingredient (CP) as well as the nanocarrier (Rhodamine B-labeled polymer) recovered in the hair follicle were measured simultaneously, revealing an equivalent uptake of both. In contrast, only negligible amounts of CP could be detected in the hair follicle when applied as free drug in solution or hydrogel, regardless of any massage. Skin permeation experiments using heat-separated human epidermis mounted in Franz Diffusion cells revealed equivalently reduced transdermal permeability for all nanocarriers in comparison to application of the free drug. Combining these results, nanocapsules formulated as an aqueous dispersion and applied by massage appear to be a good candidate to maximize follicular targeting and minimize drug

  8. A new approach to trajectory optimization based on direct transcription and differential flatness

    NASA Astrophysics Data System (ADS)

    Poustini, Mohammad Javad; Esmaelzadeh, Reza; Adami, Amirhossein

    2015-02-01

    The objective of the present paper is to introduce a reliable method to produce an optimal trajectory in the presence of all limitations and constraints. Direct transcription has been employed to convert the trajectory optimization problem into a nonlinear programming problem by discretizing the profiles of the state and control parameters and solving a constrained problem. Differential flatness, as a complementary theory, makes it possible to model the optimization problem in a lower-dimensional space through the definition of flat variables. Several curvilinear functions have been used to approximate flat variables, each with its own benefits and disadvantages; accuracy, complexity and the number of needed points are examples of the related issues. A new approach is developed based on an indirect approximation of the flat variables, which decreases the number of optimization variables and the computational cost while preserving the needed accuracy. The proposed method uses a third-order approximation of the flat variables, obtained by integrating a linear function of the acceleration profile. The new method is implemented on the terminal area energy management phase of a reusable launch vehicle. Test results show that the suggested method, as compared with other conventional methods, requires lower computational effort in terms of the number of iterations and function evaluations, while providing a more accurate optimal solution.
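
    The direct-transcription step itself is easy to sketch: discretize the state and control profiles on a grid, impose the dynamics as equality (defect) constraints, and hand the result to an NLP solver. The double-integrator problem, trapezoidal defects, and generic SLSQP solver below are illustrative assumptions; the paper goes further and shrinks the variable set through the flatness parametrization.

        import numpy as np
        from scipy.optimize import minimize

        # Direct transcription of a toy double-integrator trajectory problem:
        # x(t), v(t), u(t) live on N nodes; dynamics become defect constraints.
        N, T = 21, 1.0
        dt = T / (N - 1)

        def unpack(z):
            return z[:N], z[N:2*N], z[2*N:]          # x, v, u

        def objective(z):
            _, _, u = unpack(z)
            return dt * np.sum(u ** 2)               # minimize control energy

        def defects(z):
            x, v, u = unpack(z)
            dx = x[1:] - x[:-1] - dt * 0.5 * (v[1:] + v[:-1])    # trapezoidal
            dv = v[1:] - v[:-1] - dt * 0.5 * (u[1:] + u[:-1])
            ends = [x[0], v[0], x[-1] - 1.0, v[-1]]  # rest-to-rest, x: 0 -> 1
            return np.concatenate([dx, dv, ends])

        res = minimize(objective, np.zeros(3 * N),
                       constraints={"type": "eq", "fun": defects}, method="SLSQP")
        x_opt, v_opt, u_opt = unpack(res.x)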

  9. Conceptual design optimization study

    NASA Technical Reports Server (NTRS)

    Hollowell, S. J.; Beeman, E. R., II; Hiyama, R. M.

    1990-01-01

    The feasibility of applying multilevel functional decomposition and optimization techniques to conceptual design of advanced fighter aircraft was investigated. Applying the functional decomposition techniques to the conceptual design phase appears to be feasible. The initial implementation of the modified design process will optimize wing design variables. A hybrid approach, combining functional decomposition techniques for generation of aerodynamic and mass properties linear sensitivity derivatives with existing techniques for sizing mission performance and optimization, is proposed.

  10. A modular approach to intensity-modulated arc therapy optimization with noncoplanar trajectories.

    PubMed

    Papp, Dávid; Bortfeld, Thomas; Unkelbach, Jan

    2015-07-01

    Utilizing noncoplanar beam angles in volumetric modulated arc therapy (VMAT) has the potential to combine the benefits of arc therapy, such as short treatment times, with the benefits of noncoplanar intensity modulated radiotherapy (IMRT) plans, such as improved organ sparing. Recently, vendors introduced treatment machines that allow for simultaneous couch and gantry motion during beam delivery to make noncoplanar VMAT treatments possible. Our aim is to provide a reliable optimization method for noncoplanar isocentric arc therapy plan optimization. The proposed solution is modular in the sense that it can incorporate different existing beam angle selection and coplanar arc therapy optimization methods. Treatment planning is performed in three steps. First, a number of promising noncoplanar beam directions are selected using an iterative beam selection heuristic; these beams serve as anchor points of the arc therapy trajectory. In the second step, continuous gantry/couch angle trajectories are optimized using a simple combinatorial optimization model to define a beam trajectory that efficiently visits each of the anchor points. Treatment time is controlled by limiting the time the beam needs to trace the prescribed trajectory. In the third and final step, an optimal arc therapy plan is found along the prescribed beam trajectory. In principle any existing arc therapy optimization method could be incorporated into this step; for this work we use a sliding window VMAT algorithm. The approach is demonstrated using two particularly challenging cases. The first one is a lung SBRT patient whose planning goals could not be satisfied with fewer than nine noncoplanar IMRT fields when the patient was treated in the clinic. The second one is a brain tumor patient, where the target volume overlaps with the optic nerves and the chiasm and it is directly adjacent to the brainstem. Both cases illustrate that the large number of angles utilized by isocentric noncoplanar VMAT plans

  11. Development of a new adaptive ordinal approach to continuous-variable probabilistic optimization.

    SciTech Connect

    Romero, Vicente José; Chen, Chun-Hung (George Mason University, Fairfax, VA)

    2006-11-01

    A very general and robust approach to solving continuous-variable optimization problems involving uncertainty in the objective function is through the use of ordinal optimization. At each step in the optimization problem, improvement is based only on a relative ranking of the uncertainty effects on local design alternatives, rather than on precise quantification of the effects. One simply asks "Is that alternative better or worse than this one?", not "HOW MUCH better or worse is that alternative than this one?" The answer to the latter question requires precise characterization of the uncertainty, with the corresponding sampling/integration expense for precise resolution. However, in this report we demonstrate correct decision-making in a continuous-variable probabilistic optimization problem despite extreme vagueness in the statistical characterization of the design options. We present a new adaptive ordinal method for probabilistic optimization in which the trade-off between computational expense and vagueness in the uncertainty characterization can be conveniently managed in various phases of the optimization problem to make cost-effective stepping decisions in the design space. Spatial correlation of uncertainty in the continuous-variable design space is exploited to dramatically increase method efficiency. Under many circumstances the method appears to have favorable robustness and cost-scaling properties relative to other probabilistic optimization methods, and it uniquely has mechanisms for quantifying and controlling error likelihood in design-space stepping decisions. The method is asymptotically convergent to the true probabilistic optimum, so it could be useful as a reference standard against which the efficiency and robustness of other methods can be compared, analogous to the role that Monte Carlo simulation plays in uncertainty propagation.
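
    The ranking-only idea fits in a few lines: each candidate step in the design space is accepted or rejected purely on a noisy "better or worse?" comparison, never on a precise estimate of the objective. The quadratic objective, the noise level, and the fixed sample size below are illustrative assumptions; the report's method adaptively manages the sampling effort and exploits spatial correlation of the uncertainty.

        import numpy as np

        rng = np.random.default_rng(1)

        def noisy_objective(x, n_samples):
            # Uncertain objective: true value (x - 2)**2 observed through heavy noise.
            return (x - 2.0) ** 2 + rng.standard_normal(n_samples)

        def prefer(x_a, x_b, n_samples=8):
            # Ordinal comparison: asks only which alternative looks better,
            # never by how much, so a handful of samples suffices.
            return noisy_objective(x_a, n_samples).mean() < noisy_objective(x_b, n_samples).mean()

        x, step = 0.0, 0.5
        for _ in range(100):
            candidate = x + rng.choice([-step, step])
            if prefer(candidate, x):      # move only on a favorable ranking
                x = candidate
        print(x)   # drifts toward the true optimum at x = 2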

  12. A modular approach to intensity-modulated arc therapy optimization with noncoplanar trajectories

    NASA Astrophysics Data System (ADS)

    Papp, Dávid; Bortfeld, Thomas; Unkelbach, Jan

    2015-07-01

    Utilizing noncoplanar beam angles in volumetric modulated arc therapy (VMAT) has the potential to combine the benefits of arc therapy, such as short treatment times, with the benefits of noncoplanar intensity modulated radiotherapy (IMRT) plans, such as improved organ sparing. Recently, vendors introduced treatment machines that allow for simultaneous couch and gantry motion during beam delivery to make noncoplanar VMAT treatments possible. Our aim is to provide a reliable optimization method for noncoplanar isocentric arc therapy plan optimization. The proposed solution is modular in the sense that it can incorporate different existing beam angle selection and coplanar arc therapy optimization methods. Treatment planning is performed in three steps. First, a number of promising noncoplanar beam directions are selected using an iterative beam selection heuristic; these beams serve as anchor points of the arc therapy trajectory. In the second step, continuous gantry/couch angle trajectories are optimized using a simple combinatorial optimization model to define a beam trajectory that efficiently visits each of the anchor points. Treatment time is controlled by limiting the time the beam needs to trace the prescribed trajectory. In the third and final step, an optimal arc therapy plan is found along the prescribed beam trajectory. In principle any existing arc therapy optimization method could be incorporated into this step; for this work we use a sliding window VMAT algorithm. The approach is demonstrated using two particularly challenging cases. The first one is a lung SBRT patient whose planning goals could not be satisfied with fewer than nine noncoplanar IMRT fields when the patient was treated in the clinic. The second one is a brain tumor patient, where the target volume overlaps with the optic nerves and the chiasm and it is directly adjacent to the brainstem. Both cases illustrate that the large number of angles utilized by isocentric noncoplanar VMAT plans

  13. A divide and conquer approach to determine the Pareto frontier for optimization of protein engineering experiments

    PubMed Central

    He, Lu; Friedman, Alan M.; Bailey-Kellogg, Chris

    2016-01-01

    In developing improved protein variants by site-directed mutagenesis or recombination, there are often competing objectives that must be considered in designing an experiment (selecting mutations or breakpoints): stability vs. novelty, affinity vs. specificity, activity vs. immunogenicity, and so forth. Pareto optimal experimental designs make the best trade-offs between competing objectives. Such designs are not “dominated”; i.e., no other design is better than a Pareto optimal design for one objective without being worse for another objective. Our goal is to produce all the Pareto optimal designs (the Pareto frontier), in order to characterize the trade-offs and suggest designs most worth considering, but to avoid explicitly considering the large number of dominated designs. To do so, we develop a divide-and-conquer algorithm, PEPFR (Protein Engineering Pareto FRontier), that hierarchically subdivides the objective space, employing appropriate dynamic programming or integer programming methods to optimize designs in different regions. This divide-and-conquer approach is efficient in that the number of divisions (and thus calls to the optimizer) is directly proportional to the number of Pareto optimal designs. We demonstrate PEPFR with three protein engineering case studies: site-directed recombination for stability and diversity via dynamic programming, site-directed mutagenesis of interacting proteins for affinity and specificity via integer programming, and site-directed mutagenesis of a therapeutic protein for activity and immunogenicity via integer programming. We show that PEPFR is able to effectively produce all the Pareto optimal designs, discovering many more designs than previous methods. The characterization of the Pareto frontier provides additional insights into the local stability of design choices as well as global trends leading to trade-offs between competing criteria. PMID:22180081
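
    The key efficiency property, optimizer calls proportional to the number of Pareto-optimal designs, can be illustrated with a simple sequential sweep over a finite design set (assumed here in place of PEPFR's hierarchical subdivision of the objective space; the oracle stands in for the paper's dynamic-programming or integer-programming solvers).

        import random

        random.seed(0)

        # Synthetic two-objective design space, e.g. (instability, immunogenicity),
        # both to be minimized.
        designs = [(random.random(), random.random()) for _ in range(200)]

        def oracle(c):
            # Stand-in for a DP/IP optimizer: among designs with f1 < c,
            # return the one with minimal f2 (ties broken by smaller f1).
            feasible = [d for d in designs if d[0] < c]
            return min(feasible, key=lambda d: (d[1], d[0])) if feasible else None

        frontier, c = [], float("inf")
        while (d := oracle(c)) is not None:   # one oracle call per frontier point
            frontier.append(d)
            c = d[0]
        print(len(frontier), frontier[:3])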

  14. Model-free approach to optimal signal light timing for system-wide traffic control

    SciTech Connect

    Spall, J.C.; Chin, D.C.

    1994-12-31

    A long-standing problem in traffic engineering is to optimize the flow of vehicles through a given road network. Improving the timing of the traffic signals at intersections in the network is generally the most powerful and cost-effective means of achieving this goal. However, because of the many complex aspects of a traffic system (human behavioral considerations, vehicle flow interactions within the network, weather effects, traffic accidents, long-term (e.g., seasonal) variation, etc.), it has been notoriously difficult to determine the optimal signal light timing. This is especially the case on a system-wide (multiple intersection) basis. Much of this difficulty has stemmed from the need to build extremely complex open-loop models of the traffic dynamics as a component of the control strategy. This paper presents a fundamentally different approach to optimal light timing that eliminates the need for such an open-loop model. The approach is based on a neural network (or other function approximator) serving as the basis for the control law, with the weight estimation occurring in closed-loop mode via the simultaneous perturbation stochastic approximation (SPSA) algorithm. Since the SPSA algorithm requires only loss function measurements (no gradients of the loss function), no open-loop model is required for the weight estimation. The approach is illustrated by simulation on a six-intersection network with moderate congestion and stochastic, nonlinear effects.
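
    A compact sketch of the model-free idea: SPSA estimates a descent direction from just two noisy loss measurements per iteration, whatever the number of timing parameters, so no open-loop traffic model is ever built. The quadratic stand-in loss, the three-parameter timing vector, and the gain values are assumptions for illustration (the paper estimates neural-network weights with the same update).

        import numpy as np

        rng = np.random.default_rng(0)

        def traffic_loss(theta):
            # Stand-in for a measured system-wide loss (e.g., mean vehicle delay):
            # analytically unknown and observed with noise; no gradients available.
            return np.sum((theta - np.array([30.0, 45.0, 25.0])) ** 2) + rng.normal(0, 5)

        theta = np.array([40.0, 40.0, 40.0])   # illustrative signal timings (s)
        a, c, alpha, gamma = 2.0, 3.0, 0.602, 0.101
        for k in range(1, 501):
            ak, ck = a / k ** alpha, c / k ** gamma
            delta = rng.choice([-1.0, 1.0], size=theta.shape)   # Bernoulli perturbation
            # Two measurements estimate the whole gradient, regardless of dimension.
            g = (traffic_loss(theta + ck * delta)
                 - traffic_loss(theta - ck * delta)) / (2 * ck * delta)
            theta -= ak * g
        print(theta)   # approaches the (unknown) optimum [30, 45, 25]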

  15. A Pragmatic Approach to Applied Ethics in Sport and Related Physical Activity.

    ERIC Educational Resources Information Center

    Zeigler, Earle F.

    Arguing that there is still no single, noncontroversial foundation on which the world's present multi-structure of ethics can be built, this paper examines a scientific ethics approach. It is postulated that in North American culture, the approach to instruction in ethics for youth is haphazard at best. Society does not provide an adequate means…

  16. Administrative Technology and the School Executive: Applying the Systems Approach to Educational Administration.

    ERIC Educational Resources Information Center

    Knezevich, Stephen J., Ed.

    In this era of rapid social change, educational administrators have discovered that new approaches to problem solving and decision making are needed. Systems analysis could afford a promising approach to administrative problems by providing a number of systematic techniques designed to sharpen administrative decision making, enhance efficiency,…

  17. Developing and Applying Green Building Technology in an Indigenous Community: An Engaged Approach to Sustainability Education

    ERIC Educational Resources Information Center

    Riley, David R.; Thatcher, Corinne E.; Workman, Elizabeth A.

    2006-01-01

    Purpose: This paper aims to disseminate an innovative approach to sustainability education in construction-related fields in which teaching, research, and service are integrated to provide a unique learning experience for undergraduate students, faculty members, and community partners. Design/methodology/approach: The paper identifies the need for…

  18. Optimal shift duration and sequence: recommended approach for short-term emergency response activations for public health and emergency management.

    PubMed

    Burgess, Paula A

    2007-04-01

    Since September 11, 2001, and the consequent restructuring of the US preparedness and response activities, public health workers are increasingly called on to activate a temporary round-the-clock staffing schedule. These workers may have to make key decisions that could significantly impact the health and safety of the public. The unique physiological demands of rotational shift work and night shift work have the potential to negatively impact decisionmaking ability. A responsible, evidence-based approach to scheduling applies the principles of circadian physiology, as well as unique individual physiologies and preferences. Optimal scheduling would use a clockwise (morning-afternoon-night) rotational schedule: limiting night shifts to blocks of 3, limiting shift duration to 8 hours, and allowing 3 days of recuperation after night shifts. PMID:17413074

  19. Optimal Shift Duration and Sequence: Recommended Approach for Short-Term Emergency Response Activations for Public Health and Emergency Management

    PubMed Central

    Burgess, Paula A.

    2007-01-01

    Since September 11, 2001, and the consequent restructuring of the US preparedness and response activities, public health workers are increasingly called on to activate a temporary round-the-clock staffing schedule. These workers may have to make key decisions that could significantly impact the health and safety of the public. The unique physiological demands of rotational shift work and night shift work have the potential to negatively impact decisionmaking ability. A responsible, evidence-based approach to scheduling applies the principles of circadian physiology, as well as unique individual physiologies and preferences. Optimal scheduling would use a clockwise (morning-afternoon-night) rotational schedule: limiting night shifts to blocks of 3, limiting shift duration to 8 hours, and allowing 3 days of recuperation after night shifts. PMID:17413074

  20. A small perturbation based optimization approach for the frequency placement of high aspect ratio wings

    NASA Astrophysics Data System (ADS)

    Goltsch, Mandy

    Design denotes the transformation of an identified need to its physical embodiment in a traditionally iterative approach of trial and error. Conceptual design plays a prominent role, but an almost infinite number of possible solutions at the outset of design necessitates fast evaluations. The corresponding practice of empirical equations and low-fidelity analyses becomes obsolete in the light of novel concepts. Ever increasing system complexity and resource scarcity mandate new approaches to adequately capture system characteristics. Contemporary concerns in atmospheric science and homeland security created an operational need for unconventional configurations. Unmanned long-endurance flight at high altitudes offers a unique showcase for the exploration of new design spaces and the incidental deficit of conceptual modeling and simulation capabilities. Structural and aerodynamic performance requirements necessitate lightweight materials and high aspect ratio wings, resulting in distinct structural and aeroelastic response characteristics that stand in close correlation with natural vibration modes. The present research effort revolves around the development of an efficient and accurate optimization algorithm for high aspect ratio wings subject to natural frequency constraints. Foundational cornerstones are beam dimensional reduction and modal perturbation redesign. Local and global analyses inherent to the former suggest corresponding levels of local and global optimization. The present approach departs from this suggestion. It introduces local-level surrogate models to capacitate a methodology that consists of multi-level analyses feeding into a single-level optimization. The innovative heart of the new algorithm originates in small perturbation theory. A sequence of small perturbation solutions allows the optimizer to make incremental movements within the design space. It enables a directed search that is free of costly gradients. System matrices are decomposed

  1. Fuel moisture content estimation: a land-surface modelling approach applied to African savannas

    NASA Astrophysics Data System (ADS)

    Ghent, D.; Spessa, A.; Kaduk, J.; Balzter, H.

    2009-04-01

    Despite the importance of fire to the global climate system, in terms of emissions from biomass burning, ecosystem structure and function, and changes to surface albedo, current land-surface models do not adequately estimate key variables affecting fire ignition and propagation. Fuel moisture content (FMC) is considered one of the most important of these variables (Chuvieco et al., 2004). Biophysical models, with appropriate plant functional type parameterisations, are the most viable option to adequately predict FMC over continental scales at high temporal resolution. However, the complexity of plant-water interactions, and the variability associated with short-term climate changes, means it is one of the most difficult fire variables to quantify and predict. Our work attempts to resolve this issue using a combination of satellite data and biophysical modelling applied to Africa. The approach we take is to represent live FMC as a surface dryness index, expressed as the ratio between the Normalised Difference Vegetation Index (NDVI) and land-surface temperature (LST). It has been argued in previous studies (Sandholt et al., 2002; Snyder et al., 2006) that this ratio displays a statistically stronger correlation to FMC than either variable considered separately. In this study, simulated FMC is constrained through the assimilation of remotely sensed LST and NDVI data into the land-surface model JULES (Joint UK Land Environment Simulator). Previous modelling studies of fire activity in African savannas, such as Lehsten et al. (2008), have reported significant levels of uncertainty associated with the simulations. This uncertainty is important because African savannas are among the most frequently burnt ecosystems and are a major source of greenhouse trace gases and aerosol emissions (Scholes et al., 1996). Furthermore, regional climate model studies indicate that many parts of the African savannas will experience drier and warmer conditions in the future

  2. An efficient hybrid approach for multiobjective optimization of water distribution systems

    NASA Astrophysics Data System (ADS)

    Zheng, Feifei; Simpson, Angus R.; Zecchin, Aaron C.

    2014-05-01

    An efficient hybrid approach for the design of water distribution systems (WDSs) with multiple objectives is described in this paper. The objectives are the minimization of the network cost and maximization of the network resilience. A self-adaptive multiobjective differential evolution (SAMODE) algorithm has been developed, in which control parameters are automatically adapted by means of evolution instead of the presetting of fine-tuned parameter values. In the proposed method, a graph algorithm is first used to decompose a looped WDS into a shortest-distance tree (T) or forest, and chords (Ω). The original two-objective optimization problem is then approximated by a series of single-objective optimization problems of the T to be solved by nonlinear programming (NLP), thereby providing an approximate Pareto optimal front for the original whole network. Finally, the solutions at the approximate front are used to seed the SAMODE algorithm to find an improved front for the original entire network. The proposed approach is compared with two other conventional full-search optimization methods (the SAMODE algorithm and the NSGA-II) that seed the initial population with purely random solutions based on three case studies: a benchmark network and two real-world networks with multiple demand loading cases. Results show that (i) the proposed NLP-SAMODE method consistently generates better-quality Pareto fronts than the full-search methods with significantly improved efficiency; and (ii) the proposed SAMODE algorithm (no parameter tuning) exhibits better performance than the NSGA-II with calibrated parameter values in efficiently offering optimal fronts.
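
    To make the self-adaptive ingredient concrete, the sketch below runs a single-objective differential evolution in which each individual carries its own F and CR, and newly sampled parameter values survive only when they produce a winning trial. The jDE-style resampling rule and the toy sphere objective are assumptions for illustration; SAMODE's multiobjective ranking, the graph decomposition, and the NLP seeding are not reproduced here.

        import numpy as np

        rng = np.random.default_rng(2)

        def cost(x):
            return np.sum(x ** 2)   # toy stand-in for the network-cost objective

        n, dim = 30, 8
        pop = rng.uniform(-5, 5, (n, dim))
        F = rng.uniform(0.1, 1.0, n)    # per-individual control parameters that
        CR = rng.uniform(0.0, 1.0, n)   # adapt by evolution instead of hand-tuning
        fit = np.array([cost(x) for x in pop])

        for _ in range(300):
            for i in range(n):
                # Occasionally resample F and CR; they persist only on success.
                Fi = rng.uniform(0.1, 1.0) if rng.random() < 0.1 else F[i]
                CRi = rng.random() if rng.random() < 0.1 else CR[i]
                r1, r2, r3 = rng.choice([j for j in range(n) if j != i], 3, replace=False)
                mutant = pop[r1] + Fi * (pop[r2] - pop[r3])
                cross = rng.random(dim) < CRi
                cross[rng.integers(dim)] = True     # guarantee one mutant gene
                trial = np.where(cross, mutant, pop[i])
                f = cost(trial)
                if f <= fit[i]:
                    pop[i], fit[i], F[i], CR[i] = trial, f, Fi, CRi
        print(fit.min())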

  3. Characteristics of Computational Thinking about the Estimation of the Students in Mathematics Classroom Applying Lesson Study and Open Approach

    ERIC Educational Resources Information Center

    Promraksa, Siwarak; Sangaroon, Kiat; Inprasitha, Maitree

    2014-01-01

    The objectives of this research were to study and analyze the characteristics of computational thinking about the estimation of the students in mathematics classroom applying lesson study and open approach. Members of target group included 4th grade students of 2011 academic year of Choomchon Banchonnabot School. The Lesson plan used for data…

  4. Exploring the Dynamics of Policy Interaction: Feedback among and Impacts from Multiple, Concurrently Applied Policy Approaches for Promoting Collaboration

    ERIC Educational Resources Information Center

    Fuller, Boyd W.; Vu, Khuong Minh

    2011-01-01

    The prisoner's dilemma and stag hunt games, as well as the apparent benefits of collaboration, have motivated governments to promote more frequent and effective collaboration through a variety of policy approaches. Sometimes, multiple kinds of policies are applied concurrently, and yet little is understood about how these policies might interact…

  5. A New Combinatorial Optimization Approach for Integrated Feature Selection Using Different Datasets: A Prostate Cancer Transcriptomic Study

    PubMed Central

    Puthiyedth, Nisha; Riveros, Carlos; Berretta, Regina; Moscato, Pablo

    2015-01-01

    Background The joint study of multiple datasets has become a common technique for increasing statistical power in detecting biomarkers obtained from smaller studies. The approach generally followed is based on the fact that as the total number of samples increases, we expect to have greater power to detect associations of interest. This methodology has been applied to genome-wide association and transcriptomic studies due to the availability of datasets in the public domain. While this approach is well established in biostatistics, the introduction of new combinatorial optimization models to address this issue has not been explored in depth. In this study, we introduce a new model for the integration of multiple datasets and we show its application in transcriptomics. Methods We propose a new combinatorial optimization problem that addresses the core issue of biomarker detection in integrated datasets. Optimal solutions for this model deliver a feature selection from a panel of prospective biomarkers. The model we propose is a generalised version of the (α,β)-k-Feature Set problem. We illustrate the performance of this new methodology via a challenging meta-analysis task involving six prostate cancer microarray datasets. The results are then compared to the popular RankProd meta-analysis tool and to what can be obtained by analysing the individual datasets by statistical and combinatorial methods alone. Results Application of the integrated method resulted in a more informative signature than the rank-based meta-analysis or individual dataset results, and overcomes problems arising from real world datasets. The set of genes identified is highly significant in the context of prostate cancer. The method used does not rely on homogenisation or transformation of values to a common scale, and at the same time is able to capture markers associated with subgroups of the disease. PMID:26106884

  6. Discriminating between rival biochemical network models: three approaches to optimal experiment design

    PubMed Central

    2010-01-01

    Background The success of molecular systems biology hinges on the ability to use computational models to design predictive experiments, and ultimately unravel underlying biological mechanisms. A problem commonly encountered in the computational modelling of biological networks is that alternative, structurally different models of similar complexity fit a set of experimental data equally well. In this case, more than one molecular mechanism can explain available data. In order to rule out the incorrect mechanisms, one needs to invalidate incorrect models. At this point, new experiments maximizing the difference between the measured values of alternative models should be proposed and conducted. Such experiments should be optimally designed to produce data that are most likely to invalidate incorrect model structures. Results In this paper we develop methodologies for the optimal design of experiments with the aim of discriminating between different mathematical models of the same biological system. The first approach determines the 'best' initial condition that maximizes the L2 (energy) distance between the outputs of the rival models. In the second approach, we maximize the L2-distance of the outputs by designing the optimal external stimulus (input) profile of unit L2-norm. Our third method uses optimized structural changes (corresponding, for example, to parameter value changes reflecting gene knock-outs) to achieve the same goal. The numerical implementation of each method is considered in an example, signal processing in starving Dictyostelium amœbæ. Conclusions Model-based design of experiments improves both the reliability and the efficiency of biochemical network model discrimination. This opens the way to model invalidation, which can be used to perfect our understanding of biochemical networks. Our general problem formulation together with the three proposed experiment design methods give the practitioner new tools for a systems biology approach to
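
    The first of the three designs is easy to sketch: choose the initial condition that maximizes the L2 (energy) distance between the outputs of two rival models. The two toy kinetics and the bound on the initial condition below are assumptions; the paper works with biochemical network models and also optimizes input profiles and structural perturbations.

        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import minimize_scalar

        # Two rival first-order models of the same observable.
        def model_a(t, y): return [-0.5 * y[0]]
        def model_b(t, y): return [-0.5 * y[0] + 0.1 * y[0] ** 2]

        def neg_l2_distance(y0):
            t = np.linspace(0, 10, 200)
            ya = solve_ivp(model_a, (0, 10), [y0], t_eval=t).y[0]
            yb = solve_ivp(model_b, (0, 10), [y0], t_eval=t).y[0]
            return -np.sum((ya - yb) ** 2) * (t[1] - t[0])   # discretized -||ya - yb||^2

        res = minimize_scalar(neg_l2_distance, bounds=(0.0, 4.0), method="bounded")
        print(res.x)   # the initial condition that best separates the rival models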

  7. An alternative approach for neural network evolution with a genetic algorithm: crossover by combinatorial optimization.

    PubMed

    García-Pedrajas, Nicolás; Ortiz-Boyer, Domingo; Hervás-Martínez, César

    2006-05-01

    In this work we present a new approach to the crossover operator in the genetic evolution of neural networks. The most widely used evolutionary computation paradigm for neural network evolution is evolutionary programming. This paradigm is usually preferred due to the problems caused by the application of crossover to neural network evolution. However, crossover is the most innovative operator within the field of evolutionary computation. One of the most notorious problems with the application of crossover to neural networks is known as the permutation problem. This problem occurs due to the fact that the same network can be represented in a genetic coding by many different codifications. Our approach modifies the standard crossover operator to take into account the special features of the individuals to be mated. We present a new model for mating individuals that considers the structure of the hidden layer and redefines the crossover operator. As each hidden node represents a non-linear projection of the input variables, we approach the crossover as a problem of combinatorial optimization. We can formulate the problem as the extraction of a subset of near-optimal projections to create the hidden layer of the new network. This new approach is compared to a classical crossover in 25 real-world problems with excellent performance. Moreover, the networks obtained are much smaller than those obtained with the classical crossover operator. PMID:16343847
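
    A skeletal version of the set-based view: pool the two parents' hidden units and keep a near-optimal subset, treating the hidden layer as an unordered collection of projections so the permutation problem never arises. The per-node relevance score and the greedy top-k selection below are simplifying assumptions standing in for the paper's combinatorial optimization.

        import numpy as np

        rng = np.random.default_rng(3)

        def node_score(w_in, w_out, X, y):
            # Crude per-node relevance: correlation of the node's output with the
            # target, weighted by the magnitude of its outgoing connection.
            h = np.tanh(X @ w_in)
            return abs(np.corrcoef(h, y)[0, 1]) * abs(w_out)

        def crossover(parent_a, parent_b, k, X, y):
            # Pool both parents' hidden units (an unordered set of projections)
            # and keep a near-optimal subset of k for the child's hidden layer.
            pool = parent_a + parent_b           # list of (w_in, w_out) pairs
            return sorted(pool, key=lambda nw: -node_score(*nw, X, y))[:k]

        # Toy data and two random parents with 4 hidden nodes each.
        X = rng.standard_normal((100, 5))
        y = np.tanh(X @ rng.standard_normal(5))
        make_parent = lambda: [(rng.standard_normal(5), rng.standard_normal()) for _ in range(4)]
        child = crossover(make_parent(), make_parent(), 4, X, y)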

  8. On the practical convergence of coda-based correlations: a window optimization approach

    NASA Astrophysics Data System (ADS)

    Chaput, J.; Clerc, V.; Campillo, M.; Roux, P.; Knox, H.

    2016-02-01

    We present a novel optimization approach to improve the convergence of interstation coda correlation functions towards the medium's empirical Green's function. For two stations recording a series of impulsive events in a multiply scattering medium, we explore the impact of coda window selection through a Markov chain Monte Carlo scheme, with the aim of generating a gather of correlation functions that is the most coherent and symmetric over events, thus recovering intuitive elements of the interstation Green's function without any nonlinear post-processing techniques. This approach is tested here on a 2-D acoustic finite difference model, where a much improved correlation function is obtained, as well as on a database of small impulsive icequakes recorded on Erebus Volcano, Antarctica, where similarly robust results are shown. The average coda solutions, as deduced from the posterior probability distributions of the optimization, are further representative of the scattering strength of the medium, with stronger scattering resulting in a slightly delayed overall coda sampling. The recovery of singly scattered arrivals in the coda of correlation functions is also shown to be possible through this approach, and surface wave reflections from outer craters on Erebus Volcano were mapped in this fashion. We also note that, due to the improvement of correlation functions over subsequent events, this approach can further be used to improve the resolution of passive temporal monitoring.

  9. Vertical and lateral flight optimization algorithm and missed approach cost calculation

    NASA Astrophysics Data System (ADS)

    Murrieta Mendoza, Alejandro

    Flight trajectory optimization is seen as a way of reducing flight costs, fuel burn, and the emissions generated by fuel consumption. The objective of this work is to find the optimal trajectory between two points. To find the optimal trajectory, the parameters of weight, cost index, initial coordinates, and meteorological conditions along the route are provided to the algorithm. This algorithm finds the trajectory for which the global cost is the most economical. The global cost is a compromise between fuel burned and flight time; it is determined using a cost index that assigns a cost in terms of fuel to the flight time. The optimization is achieved by calculating a candidate optimal cruise trajectory profile from all the combinations available in the aircraft performance database. With this candidate cruise profile, more cruise profiles are calculated taking into account the climb and descent costs. During cruise, step climbs are evaluated to optimize the trajectory. The different trajectories are compared and the most economical one is defined as the optimal vertical navigation profile. From the optimal vertical navigation profile, different lateral routes are tested. Taking advantage of the meteorological influence, the algorithm looks for the lateral navigation trajectory for which the global cost is the most economical. That route is then selected as the optimal lateral navigation profile. The meteorological data were obtained from Environment Canada. The new way proposed in this work of obtaining data from the Environment Canada grid resulted in an important reduction in computation time compared with other methods such as bilinear interpolation. The algorithm developed here was evaluated on two different aircraft: the Lockheed L-1011 and the Sukhoi Russian regional jet. The algorithm was developed in MATLAB, and the validation was performed using Flight-Sim by Presagis and the FMS CMA-9000 by CMC Electronics - Esterline. At the end of this work a
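
    The global-cost compromise driving the search is simple to state in code: the cost index converts flight time into an equivalent fuel quantity, and candidate profiles are ranked by the combined figure. The profile table and CI value below are invented numbers, not data from the aircraft performance databases used in the thesis.

        # Global cost: fuel burned plus flight time converted to fuel via the
        # cost index (CI, here in kg of fuel per minute). Values are illustrative.
        def global_cost(fuel_kg, time_min, cost_index):
            return fuel_kg + cost_index * time_min

        # Candidate cruise profiles (altitude, Mach) -> (fuel burned, flight time).
        profiles = {
            ("FL350", "M0.80"): (5200.0, 340.0),
            ("FL370", "M0.78"): (5050.0, 352.0),
            ("FL390", "M0.82"): (5150.0, 335.0),
        }
        ci = 2.5
        best = min(profiles, key=lambda p: global_cost(*profiles[p], ci))
        print(best, global_cost(*profiles[best], ci))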

  10. A Structured Approach to Teaching Applied Problem Solving through Technology Assessment.

    ERIC Educational Resources Information Center

    Fischbach, Fritz A.; Sell, Nancy J.

    1986-01-01

    Describes an approach to problem solving based on real-world problems. Discusses problem analysis and definitions, preparation of briefing documents, solution finding techniques (brainstorming and synectics), solution evaluation and judgment, and implementation. (JM)

  11. A “Reverse-Schur” Approach to Optimization With Linear PDE Constraints: Application to Biomolecule Analysis and Design

    PubMed Central

    Bardhan, Jaydeep P.; Altman, Michael D.

    2009-01-01

    We present a partial-differential-equation (PDE)-constrained approach for optimizing a molecule’s electrostatic interactions with a target molecule. The approach, which we call reverse-Schur co-optimization, can be more than two orders of magnitude faster than the traditional approach to electrostatic optimization. The efficiency of the co-optimization approach may enhance the value of electrostatic optimization for ligand-design efforts–in such projects, it is often desirable to screen many candidate ligands for their viability, and the optimization of electrostatic interactions can improve ligand binding affinity and specificity. The theoretical basis for electrostatic optimization derives from linear-response theory, most commonly continuum models, and simple assumptions about molecular binding processes. Although the theory has been used successfully to study a wide variety of molecular binding events, its implications have not yet been fully explored, in part due to the computational expense associated with the optimization. The co-optimization algorithm achieves improved performance by solving the optimization and electrostatic simulation problems simultaneously, and is applicable to both unconstrained and constrained optimization problems. Reverse-Schur co-optimization resembles other well-known techniques for solving optimization problems with PDE constraints. Model problems as well as realistic examples validate the reverse-Schur method, and demonstrate that our technique and alternative PDE-constrained methods scale very favorably compared to the standard approach. Regularization, which ordinarily requires an explicit representation of the objective function, can be included using an approximate Hessian calculated using the new BIBEE/P (boundary-integral-based electrostatics estimation by preconditioning) method. PMID:23055839

  12. Conceptual design optimization of rectilinear building frames: A knapsack problem approach

    NASA Astrophysics Data System (ADS)

    Sharafi, Pezhman; Teh, Lip H.; Hadi, Muhammad N. S.

    2015-10-01

    This article presents an automated technique for preliminary layout (conceptual design) optimization of rectilinear, orthogonal building frames in which the shape of the building plan, the number of bays and the size of unsupported spans are variables. It adopts the knapsack problem as the applied combinatorial optimization problem, and describes how the conceptual design optimization problem can be generally modelled as the unbounded multi-constraint multiple knapsack problem. It discusses some special cases, which can be modelled more efficiently as the single knapsack problem, the multiple-choice knapsack problem or the multiple knapsack problem. A knapsack contains sub-rectangles that define the floor plan and the location of columns. Particular conditions or preferences for the conceptual design can be incorporated as constraints on the knapsacks and/or sub-rectangles. A bi-objective knapsack problem is defined with the aim of obtaining a conceptual design having minimum cost and maximum plan regularity (minimum structural eccentricity). A multi-objective ant colony algorithm is formulated to solve the combinatorial optimization problem. A numerical example is included to demonstrate the application of the present method and the robustness of the algorithm.
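
    For the simplest of the special cases mentioned, a short dynamic program conveys the mechanics: an unbounded knapsack in which a repeatable "item" might be a sub-rectangle width filling a fixed frontage, with its value a regularity score. The numbers are illustrative; the article's general model is a multi-constraint multiple knapsack solved with a multi-objective ant colony algorithm.

        def unbounded_knapsack(capacity, items):
            # Unbounded knapsack DP: each (weight, value) item may repeat, as a
            # floor plan may reuse the same sub-rectangle type several times.
            best = [0] * (capacity + 1)
            for c in range(1, capacity + 1):
                for w, v in items:
                    if w <= c:
                        best[c] = max(best[c], best[c - w] + v)
            return best[capacity]

        # Toy example: sub-rectangle widths filling a 12 m frontage.
        print(unbounded_knapsack(12, [(3, 4), (4, 6), (5, 7)]))   # -> 18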

  13. Optimizing the spectrofluorimetric determination of cefdinir through a Taguchi experimental design approach.

    PubMed

    Abou-Taleb, Noura Hemdan; El-Wasseef, Dalia Rashad; El-Sherbiny, Dina Tawfik; El-Ashry, Saadia Mohamed

    2016-05-01

    The aim of this work is to optimize a spectrofluorimetric method for the determination of cefdinir (CFN) using the Taguchi method. The proposed method is based on the oxidative coupling reaction of CFN and cerium(IV) sulfate. The quenching effect of CFN on the fluorescence of the produced cerous ions is measured at an emission wavelength (λem) of 358 nm after excitation (λex) at 301 nm. The Taguchi orthogonal array L9 (3^4) was designed to determine the optimum reaction conditions. The results were analyzed using the signal-to-noise (S/N) ratio and analysis of variance (ANOVA). The optimal experimental conditions obtained from this study were 1 mL of 0.2% MBTH, 0.4 mL of 0.25% Ce(IV), a reaction time of 10 min and methanol as the diluting solvent. The calibration plot displayed a good linear relationship over the range of 0.5-10.0 µg/mL. The proposed method was successfully applied to the determination of CFN in bulk powder and pharmaceutical dosage forms. The results are in good agreement with those obtained using the comparison method. Finally, the Taguchi method provided a systematic and efficient methodology for this optimization, with considerably less effort than would be required for other optimization techniques. Copyright © 2015 John Wiley & Sons, Ltd. PMID:26456088
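
    The analysis step of such a design is short enough to sketch: each factor's preferred level is the one with the best mean signal-to-noise ratio over the nine runs of the orthogonal array. The response values are invented and a larger-is-better S/N form is assumed purely for illustration; the appropriate S/N form for a quenching-based measurement may differ.

        import numpy as np

        # Standard L9(3^4) array: each row gives the level (0-2) of four factors.
        L9 = np.array([[0,0,0,0],[0,1,1,1],[0,2,2,2],
                       [1,0,1,2],[1,1,2,0],[1,2,0,1],
                       [2,0,2,1],[2,1,0,2],[2,2,1,0]])
        y = np.array([41., 38., 45., 52., 49., 47., 50., 55., 48.])  # illustrative responses

        sn = 10 * np.log10(y ** 2)   # larger-is-better S/N with one replicate per run
        for factor in range(4):
            means = [sn[L9[:, factor] == lvl].mean() for lvl in range(3)]
            print(f"factor {factor}: best level {int(np.argmax(means))}")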

  14. Systematic analysis of protein–detergent complexes applying dynamic light scattering to optimize solutions for crystallization trials

    SciTech Connect

    Meyer, Arne; Hussein, Rana; Brognaro, Hevila

    2015-01-01

    Application of in situ dynamic light scattering to solutions of protein–detergent complexes permits characterization of these complexes in samples as small as 2 µl in volume. Detergents are widely used for the isolation and solubilization of membrane proteins to support crystallization and structure determination. Detergents are amphiphilic molecules that form micelles once the characteristic critical micelle concentration (CMC) is achieved and can solubilize membrane proteins by the formation of micelles around them. The results of a study of micelle formation, observed by in situ dynamic light-scattering (DLS) analyses performed on selected detergent solutions using a newly designed advanced hardware device, are presented. DLS was initially applied in situ to detergent samples with a total volume of approximately 2 µl. When measured with DLS, pure detergents show a monodisperse radial distribution in water at concentrations exceeding the CMC. A series of all-trans n-alkyl-β-D-maltopyranosides, from n-hexyl to n-tetradecyl, was used in the investigations. The results obtained verify that the application of DLS in situ is capable of distinguishing differences in the hydrodynamic radii of micelles formed by detergents differing in length by only a single CH₂ group in their aliphatic tails. Subsequently, DLS was applied to investigate the distribution of hydrodynamic radii of membrane proteins and selected water-insoluble proteins in the presence of detergent micelles. The results confirm that stable protein–detergent complexes were prepared for (i) bacteriorhodopsin and (ii) FetA in complex with a ligand, as examples of transmembrane proteins. A fusion of maltose-binding protein and the Duck hepatitis B virus X protein was added to this investigation as an example of a non-membrane-associated protein with low water solubility. The increased solubility of this protein in the presence of detergent could be monitored, as well as the progress of proteolytic

  15. Private pediatric neuropsychology practice multimodal treatment of ADHD: an applied approach.

    PubMed

    Beljan, Paul; Bree, Kathleen D; Reuter, Alison E F; Reuter, Scott D; Wingers, Laura

    2014-01-01

    As neuropsychologists and psychologists specializing in the assessment and treatment of pediatric mental health concerns, one of the most prominent diagnoses we encounter is attention-deficit hyperactivity disorder (ADHD). Following a pediatric neuropsychological evaluation, parents often request recommendations for treatment. This article addresses our approach to the treatment of ADHD from the private practice perspective. We will review our primary treatment methodology as well as integrative and alternative treatment approaches. PMID:25010085

  16. Ice particle mass-dimensional parameter retrieval and uncertainty analysis using an Optimal Estimation framework applied to in situ data

    NASA Astrophysics Data System (ADS)

    Xu, Zhuocan; Mace, Jay; Avalone, Linnea; Wang, Zhien

    2015-04-01

    The extreme variability of ice particle habits in precipitating clouds affects our understanding of these cloud systems in every aspect (i.e., radiative transfer, dynamics, precipitation rate, etc.) and largely contributes to the uncertainties in the model representation of related processes. Ice particle mass-dimensional power law relationships, M = a·D^b, are commonly assumed in models and retrieval algorithms, while very little knowledge exists regarding the uncertainties of these M-D parameters in real-world situations. In this study, we apply Optimal Estimation (OE) methodology to infer the ice particle mass-dimensional relationship from ice particle size distributions and bulk water contents independently measured on board the University of Wyoming King Air during the Colorado Airborne Multi-Phase Cloud Study (CAMPS). We also utilize W-band radar reflectivity obtained on the same platform (King Air), offering a further constraint on this ill-posed problem (Heymsfield et al. 2010). In addition to the values of the retrieved M-D parameters, the associated uncertainties are conveniently acquired in the OE framework, within the limitations of assumed Gaussian statistics. We find, given the constraints provided by the bulk water measurement and in situ radar reflectivity, that the relative uncertainty of the mass-dimensional power law prefactor (a) is approximately 80% and the relative uncertainty of the exponent (b) is 10-15%. With this level of uncertainty, the forward model uncertainty in radar reflectivity would be on the order of 4 dB, or a factor of approximately 2.5 in ice water content. The implication of this finding is that inferences of bulk water from either remote or in situ measurements of particle spectra cannot be more certain than this when the mass-dimensional relationships are not known a priori, which is almost never the case.
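
    The heart of the retrieval can be shown with the power law alone: in log space M = a·D^b becomes linear, so a and b, and under Gaussian assumptions their uncertainties, follow from a plain least-squares fit. Everything below (sizes, true parameters, noise level) is synthetic; the study's Optimal Estimation additionally folds in the bulk-water and radar-reflectivity constraints with full error covariances and priors.

        import numpy as np

        rng = np.random.default_rng(4)

        # Synthetic "measurements": M = a * D**b with multiplicative noise.
        D = np.logspace(-4, -2, 50)          # particle sizes (m), illustrative
        a_true, b_true = 0.05, 2.1
        M = a_true * D ** b_true * np.exp(0.1 * rng.standard_normal(D.size))

        # Linear fit in log space: log M = log a + b log D.
        A = np.column_stack([np.ones(D.size), np.log(D)])
        coef, *_ = np.linalg.lstsq(A, np.log(M), rcond=None)
        a_hat, b_hat = np.exp(coef[0]), coef[1]

        # Parameter covariance from the residual variance (Gaussian assumption).
        resid = np.log(M) - A @ coef
        cov = np.linalg.inv(A.T @ A) * resid.var(ddof=2)
        print(a_hat, b_hat, np.sqrt(np.diag(cov)))   # estimates and std. errors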

  17. Experimental design applied to the optimization of pyrolysis and atomization temperatures for As measurement in water samples by GFAAS

    NASA Astrophysics Data System (ADS)

    Ávila, Akie K.; Araujo, Thiago O.; Couto, Paulo R. G.; Borges, Renata M. H.

    2005-10-01

    In general, research experimentation is used mainly when new methodologies are being developed or existing ones are being improved. The characteristics of any method depend on its factors or components. The techniques of experimental planning and analysis are basically used to improve the analytical conditions of methods, to reduce experimental labour to a minimum of tests, and to optimize the use of resources (reagents, time of analysis, availability of the equipment, operator time, etc.). These techniques are applied by identifying the variables (control factors) of a process that have the most influence on the response of the parameters of interest, by attributing values to the influential variables of the process so that the variability of the response is minimal, or the obtained value (quality parameter) is very close to the nominal value, and by attributing values to the influential variables of the process so that the effects of uncontrollable variables are reduced. In this central composite design (CCD), four permanent modifiers (Pd, Ir, W and Rh) and one combined permanent modifier, W+Ir, were studied. The study selected two factors, pyrolysis and atomization temperatures, at five different levels for all the possible combinations. The pyrolysis temperatures with the different permanent modifiers varied from 600 °C to 1600 °C, with hold times of 25 s, while atomization temperatures ranged between 1900 °C and 2280 °C. The characteristic masses for As were in the range of 31 pg to 81 pg. Assuming the best conditions obtained from the CCD, it was possible to estimate the measurement uncertainty of As determination in water samples. The results showed that, considering the main uncertainty sources, such as the repeatability of measurement inherent in the equipment, the calibration curve (which evaluates the adjustment of the mathematical model to the results) and the calibration standards concentrations, the values obtained were similar to international

  18. A Projector-Embedding Approach for Multiscale Coupled-Cluster Calculations Applied to Citrate Synthase.

    PubMed

    Bennie, Simon J; van der Kamp, Marc W; Pennifold, Robert C R; Stella, Martina; Manby, Frederick R; Mulholland, Adrian J

    2016-06-14

    Projector-based embedding has recently emerged as a robust multiscale method for the calculation of various electronic molecular properties. We present the coupling of projector embedding with quantum mechanics/molecular mechanics modeling and apply it for the first time to an enzyme-catalyzed reaction. Using projector-based embedding, we combine coupled-cluster theory, density-functional theory (DFT), and molecular mechanics to compute energies for the proton abstraction from acetyl-coenzyme A by citrate synthase. By embedding correlated ab initio methods in DFT we eliminate functional sensitivity and obtain high-accuracy profiles in a procedure that is straightforward to apply. PMID:27159381

  19. Subsurface water parameters: optimization approach to their determination from remotely sensed water color data.

    PubMed

    Jain, S C; Miller, J R

    1976-04-01

    A method, using an optimization scheme, has been developed for the interpretation of spectral albedo (or spectral reflectance) curves obtained from remotely sensed water color data. This method uses a two-flow model of the radiation flow and solves for the albedo. Optimization fitting of predicted to observed reflectance data is performed by a quadratic interpolation method for the variables chlorophyll concentration and scattering coefficient. The technique is applied to airborne water color data obtained from the Kawartha Lakes, the Sargasso Sea, and the Nova Scotia coast. The modeled spectral albedo curves are compared to those obtained experimentally, and the computed optimum water parameters are compared to ground truth values. It is shown that the backscattered spectral signal contains information that can be interpreted to give quantitative estimates of the chlorophyll concentration and turbidity in the waters studied. PMID:20165093
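
    A toy version of the fit: model the spectral albedo with a two-flow-style ratio R ≈ bb/(a + bb), let absorption depend on chlorophyll concentration and backscattering on a scattering coefficient, and fit the two parameters to an observed spectrum. The spectral shapes and magnitudes are invented stand-ins, and a generic least-squares solver replaces the paper's quadratic interpolation scheme.

        import numpy as np
        from scipy.optimize import minimize

        lam = np.linspace(400, 700, 31)                     # wavelength (nm)
        a_water = 0.01 * np.exp((lam - 400) / 120)          # rises toward the red
        a_chl = 0.04 * np.exp(-((lam - 440) / 60) ** 2)     # blue absorption peak

        def reflectance(C, B):
            # Two-flow-style albedo: backscattering over (absorption + backscattering).
            a = a_water + C * a_chl
            bb = B * (550 / lam)
            return bb / (a + bb)

        observed = reflectance(3.0, 0.02) \
            + 0.002 * np.random.default_rng(5).standard_normal(lam.size)

        res = minimize(lambda p: np.sum((reflectance(*p) - observed) ** 2),
                       x0=[1.0, 0.01], bounds=[(0.01, 50), (1e-4, 1.0)])
        print(res.x)   # recovered (chlorophyll, scattering) near (3.0, 0.02)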

  20. Multi-objective optimization approach for cost management during product design at the conceptual phase

    NASA Astrophysics Data System (ADS)

    Durga Prasad, K. G.; Venkata Subbaiah, K.; Narayana Rao, K.

    2014-03-01

    Effective cost management during the conceptual design phase of a product is essential to develop a product with minimum cost and the desired quality. The integration of the methodologies of quality function deployment (QFD), value engineering (VE) and target costing (TC) could be applied to the continuous improvement of any product during product development. To optimize customer satisfaction and the total cost of a product, a mathematical model is established in this paper. This model integrates QFD, VE and TC in a multi-objective optimization framework. A case study on a domestic refrigerator is presented to show the performance of the proposed model. Goal programming is adopted to attain the goals of maximum customer satisfaction and minimum product cost.
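
    A minimal goal-programming sketch of that final step: deviation variables absorb under- or over-achievement of a customer-satisfaction goal and a target cost, and a linear solver minimizes the unwanted deviations. The coefficients, goals, and bounds are invented for illustration; in the paper these terms come from the QFD, VE and TC analyses.

        import numpy as np
        from scipy.optimize import linprog

        # Variables: x1, x2 (design attribute levels), then d1-, d1+, d2-, d2+.
        # Goals: satisfaction 5*x1 + 3*x2 >= 40, cost 2*x1 + 4*x2 <= 20.
        c = np.array([0, 0, 1, 0, 0, 1])        # minimize under-satisfaction + cost overrun
        A_eq = np.array([[5, 3, 1, -1, 0, 0],   # satisfaction + d1- - d1+ = 40
                         [2, 4, 0, 0, 1, -1]])  # cost         + d2- - d2+ = 20
        b_eq = np.array([40, 20])
        bounds = [(0, 10)] * 2 + [(0, None)] * 4
        res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
        print(res.x[:2], res.fun)   # attribute levels and total unwanted deviation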