Homotopic approach and pseudospectral method applied jointly to low thrust trajectory optimization
NASA Astrophysics Data System (ADS)
Guo, Tieding; Jiang, Fanghua; Li, Junfeng
2012-02-01
The homotopic approach and the pseudospectral method are two popular techniques for low thrust trajectory optimization. A hybrid scheme is proposed in this paper by combining the above two together to cope with various difficulties encountered when they are applied separately. Explicitly, a smooth energy-optimal problem is first discretized by the pseudospectral method, leading to a nonlinear programming problem (NLP). Costates, especially their initial values, are then estimated from Karush-Kuhn-Tucker (KKT) multipliers of this NLP. Based upon these estimated initial costates, homotopic procedures are initiated efficiently and the desirable non-smooth fuel-optimal results are finally obtained by continuing the smooth energy-optimal results through a homotopic algorithm. Two main difficulties, one due to absence of reasonable initial costates when the homotopic procedures are being initiated and the other due to discontinuous bang-bang controls when the pseudospectral method is applied to the fuel-optimal problem, are both resolved successfully. Numerical results of two scenarios are presented in the end, demonstrating feasibility and well performance of this hybrid technique.
Optimal control theory (OWEM) applied to a helicopter in the hover and approach phase
NASA Technical Reports Server (NTRS)
Born, G. J.; Kai, T.
1975-01-01
A major difficulty in the practical application of linear-quadratic regulator theory is how to choose the weighting matrices in quadratic cost functions. The control system design with optimal weighting matrices was applied to a helicopter in the hover and approach phase. The weighting matrices were calculated to extremize the closed loop total system damping subject to constraints on the determinants. The extremization is really a minimization of the effects of disturbances, and interpreted as a compromise between the generalized system accuracy and the generalized system response speed. The trade-off between the accuracy and the response speed is adjusted by a single parameter, the ratio of determinants. By this approach an objective measure can be obtained for the design of a control system. The measure is to be determined by the system requirements.
Further Development of an Optimal Design Approach Applied to Axial Magnetic Bearings
NASA Technical Reports Server (NTRS)
Bloodgood, V. Dale, Jr.; Groom, Nelson J.; Britcher, Colin P.
2000-01-01
Classical design methods involved in magnetic bearings and magnetic suspension systems have always had their limitations. Because of this, the overall effectiveness of a design has always relied heavily on the skill and experience of the individual designer. This paper combines two approaches that have been developed to aid the accuracy and efficiency of magnetostatic design. The first approach integrates classical magnetic circuit theory with modern optimization theory to increase design efficiency. The second approach uses loss factors to increase the accuracy of classical magnetic circuit theory. As an example, an axial magnetic thrust bearing is designed for minimum power.
Hill, K.
1988-06-01
The use of energy (calories) as the currency to be maximized per unit time in Optimal Foraging Models is considered in light of data on several foraging groups. Observations on the Ache, Cuiva, and Yora foragers suggest men do not attempt to maximize energetic return rates, but instead often concentration on acquiring meat resources which provide lower energetic returns. The possibility that this preference is due to the macronutrient composition of hunted and gathered foods is explored. Indifference curves are introduced as a means of modeling the tradeoff between two desirable commodities, meat (protein-lipid) and carbohydrate, and a specific indifference curve is derived using observed choices in five foraging situations. This curve is used to predict the amount of meat that Mbuti foragers will trade for carbohydrate, in an attempt to test the utility of the approach.
Asplund, Erik; Kluener, Thorsten
2012-03-28
In this paper, control of open quantum systems with emphasis on the control of surface photochemical reactions is presented. A quantum system in a condensed phase undergoes strong dissipative processes. From a theoretical viewpoint, it is important to model such processes in a rigorous way. In this work, the description of open quantum systems is realized within the surrogate Hamiltonian approach [R. Baer and R. Kosloff, J. Chem. Phys. 106, 8862 (1997)]. An efficient and accurate method to find control fields is optimal control theory (OCT) [W. Zhu, J. Botina, and H. Rabitz, J. Chem. Phys. 108, 1953 (1998); Y. Ohtsuki, G. Turinici, and H. Rabitz, J. Chem. Phys. 120, 5509 (2004)]. To gain control of open quantum systems, the surrogate Hamiltonian approach and OCT, with time-dependent targets, are combined. Three open quantum systems are investigated by the combined method, a harmonic oscillator immersed in an ohmic bath, CO adsorbed on a platinum surface, and NO adsorbed on a nickel oxide surface. Throughout this paper, atomic units, i.e., ({Dirac_h}/2{pi})=m{sub e}=e=a{sub 0}= 1, have been used unless otherwise stated.
Data Understanding Applied to Optimization
NASA Technical Reports Server (NTRS)
Buntine, Wray; Shilman, Michael
1998-01-01
The goal of this research is to explore and develop software for supporting visualization and data analysis of search and optimization. Optimization is an ever-present problem in science. The theory of NP-completeness implies that the problems can only be resolved by increasingly smarter problem specific knowledge, possibly for use in some general purpose algorithms. Visualization and data analysis offers an opportunity to accelerate our understanding of key computational bottlenecks in optimization and to automatically tune aspects of the computation for specific problems. We will prototype systems to demonstrate how data understanding can be successfully applied to problems characteristic of NASA's key science optimization tasks, such as central tasks for parallel processing, spacecraft scheduling, and data transmission from a remote satellite.
Dynamic programming in applied optimization problems
NASA Astrophysics Data System (ADS)
Zavalishchin, Dmitry
2015-11-01
Features of the use dynamic programming in applied problems are investigated. In practice such problems as finding the critical paths in network planning and control, finding the optimal supply plan in transportation problem, objects territorial distribution are traditionally solved by special methods of operations research. It should be noted that the dynamic programming is not provided computational advantages, but facilitates changes and modifications of tasks. This follows from the Bellman's optimality principle. The features of the multistage decision processes construction in applied problems are provided.
Applying new optimization algorithms to more predictive control
Wright, S.J.
1996-03-01
The connections between optimization and control theory have been explored by many researchers and optimization algorithms have been applied with success to optimal control. The rapid pace of developments in model predictive control has given rise to a host of new problems to which optimization has yet to be applied. Concurrently, developments in optimization, and especially in interior-point methods, have produced a new set of algorithms that may be especially helpful in this context. In this paper, we reexamine the relatively simple problem of control of linear processes subject to quadratic objectives and general linear constraints. We show how new algorithms for quadratic programming can be applied efficiently to this problem. The approach extends to several more general problems in straightforward ways.
Applying Squeaky-Wheel Optimization Schedule Airborne Astronomy Observations
NASA Technical Reports Server (NTRS)
Frank, Jeremy; Kuerklue, Elif
2004-01-01
We apply the Squeaky Wheel Optimization (SWO) algorithm to the problem of scheduling astronomy observations for the Stratospheric Observatory for Infrared Astronomy, an airborne observatory. The problem contains complex constraints relating the feasibility of an astronomical observation to the position and time at which the observation begins, telescope elevation limits, special use airspace, and available fuel. Solving the problem requires making discrete choices (e.g. selection and sequencing of observations) and continuous ones (e.g. takeoff time and setting up observations by repositioning the aircraft). The problem also includes optimization criteria such as maximizing observing time while simultaneously minimizing total flight time. Previous approaches to the problem fail to scale when accounting for all constraints. We describe how to customize SWO to solve this problem, and show that it finds better flight plans, often with less computation time, than previous approaches.
Applying optimization software libraries to engineering problems
NASA Technical Reports Server (NTRS)
Healy, M. J.
1984-01-01
Nonlinear programming, preliminary design problems, performance simulation problems trajectory optimization, flight computer optimization, and linear least squares problems are among the topics covered. The nonlinear programming applications encountered in a large aerospace company are a real challenge to those who provide mathematical software libraries and consultation services. Typical applications include preliminary design studies, data fitting and filtering, jet engine simulations, control system analysis, and trajectory optimization and optimal control. Problem sizes range from single-variable unconstrained minimization to constrained problems with highly nonlinear functions and hundreds of variables. Most of the applications can be posed as nonlinearly constrained minimization problems. Highly complex optimization problems with many variables were formulated in the early days of computing. At the time, many problems had to be reformulated or bypassed entirely, and solution methods often relied on problem-specific strategies. Problems with more than ten variables usually went unsolved.
Cancer Behavior: An Optimal Control Approach
Gutiérrez, Pedro J.; Russo, Irma H.; Russo, J.
2009-01-01
With special attention to cancer, this essay explains how Optimal Control Theory, mainly used in Economics, can be applied to the analysis of biological behaviors, and illustrates the ability of this mathematical branch to describe biological phenomena and biological interrelationships. Two examples are provided to show the capability and versatility of this powerful mathematical approach in the study of biological questions. The first describes a process of organogenesis, and the second the development of tumors. PMID:22247736
Optimization methods applied to hybrid vehicle design
NASA Technical Reports Server (NTRS)
Donoghue, J. F.; Burghart, J. H.
1983-01-01
The use of optimization methods as an effective design tool in the design of hybrid vehicle propulsion systems is demonstrated. Optimization techniques were used to select values for three design parameters (battery weight, heat engine power rating and power split between the two on-board energy sources) such that various measures of vehicle performance (acquisition cost, life cycle cost and petroleum consumption) were optimized. The apporach produced designs which were often significant improvements over hybrid designs already reported on in the literature. The principal conclusions are as follows. First, it was found that the strategy used to split the required power between the two on-board energy sources can have a significant effect on life cycle cost and petroleum consumption. Second, the optimization program should be constructed so that performance measures and design variables can be easily changed. Third, the vehicle simulation program has a significant effect on the computer run time of the overall optimization program; run time can be significantly reduced by proper design of the types of trips the vehicle takes in a one year period. Fourth, care must be taken in designing the cost and constraint expressions which are used in the optimization so that they are relatively smooth functions of the design variables. Fifth, proper handling of constraints on battery weight and heat engine rating, variables which must be large enough to meet power demands, is particularly important for the success of an optimization study. Finally, the principal conclusion is that optimization methods provide a practical tool for carrying out the design of a hybrid vehicle propulsion system.
NASA Astrophysics Data System (ADS)
Mason, Graeme F.; Chu, Wen-Jang; Hetherington, Hoby P.
1997-05-01
In this report, a procedure to optimize inversion-recovery times, in order to minimize the uncertainty in the measuredT1from 2-point multislice images of the human brain at 4.1 T, is discussed. The 2-point, 40-slice measurement employed inversion-recovery delays chosen based on the minimization of noise-based uncertainties. For comparison of the measuredT1values and uncertainties, 10-point, 3-slice measurements were also acquired. The measuredT1values using the 2-point method were 814, 1361, and 3386 ms for white matter, gray matter, and cerebral spinal fluid, respectively, in agreement with the respectiveT1values of 817, 1329, and 3320 ms obtained using the 10-point measurement. The 2-point, 40-slice method was used to determine theT1in the cortical gray matter, cerebellar gray matter, caudate nucleus, cerebral peduncle, globus pallidus, colliculus, lenticular nucleus, base of the pons, substantia nigra, thalamus, white matter, corpus callosum, and internal capsule.
Multiobjective Optimization Using a Pareto Differential Evolution Approach
NASA Technical Reports Server (NTRS)
Madavan, Nateri K.; Biegel, Bryan A. (Technical Monitor)
2002-01-01
Differential Evolution is a simple, fast, and robust evolutionary algorithm that has proven effective in determining the global optimum for several difficult single-objective optimization problems. In this paper, the Differential Evolution algorithm is extended to multiobjective optimization problems by using a Pareto-based approach. The algorithm performs well when applied to several test optimization problems from the literature.
HPC CLOUD APPLIED TO LATTICE OPTIMIZATION
Sun, Changchun; Nishimura, Hiroshi; James, Susan; Song, Kai; Muriki, Krishna; Qin, Yong
2011-03-18
As Cloud services gain in popularity for enterprise use, vendors are now turning their focus towards providing cloud services suitable for scientific computing. Recently, Amazon Elastic Compute Cloud (EC2) introduced the new Cluster Compute Instances (CCI), a new instance type specifically designed for High Performance Computing (HPC) applications. At Berkeley Lab, the physicists at the Advanced Light Source (ALS) have been running Lattice Optimization on a local cluster, but the queue wait time and the flexibility to request compute resources when needed are not ideal for rapid development work. To explore alternatives, for the first time we investigate running the Lattice Optimization application on Amazon's new CCI to demonstrate the feasibility and trade-offs of using public cloud services for science.
Optimal quantisation applied to digital holographic data
NASA Astrophysics Data System (ADS)
Shortt, Alison E.; Naughton, Thomas J.; Javidi, Bahram
2005-06-01
Digital holography is an inherently three-dimensional (3D) technique for the capture of real-world objects. Many existing 3D imaging and processing techniques are based on the explicit combination of several 2D perspectives (or light stripes, etc.) through digital image processing. The advantage of recording a hologram is that multiple 2D perspectives can be optically combined in parallel, and in a constant number of steps independent of the hologram size. Although holography and its capabilities have been known for many decades, it is only very recently that digital holography has been practically investigated due to the recent development of megapixel digital sensors with sufficient spatial resolution and dynamic range. The applications of digital holography could include 3D television, virtual reality, and medical imaging. If these applications are realised, compression standards will have to be defined. We outline the techniques that have been proposed to date for the compression of digital hologram data and show that they are comparable to the performance of what in communication theory is known as optimal signal quantisation. We adapt the optimal signal quantisation technique to complex-valued 2D signals. The technique relies on knowledge of the histograms of real and imaginary values in the digital holograms. Our digital holograms of 3D objects are captured using phase-shift interferometry.
Portfolio optimization using median-variance approach
NASA Astrophysics Data System (ADS)
Wan Mohd, Wan Rosanisah; Mohamad, Daud; Mohamed, Zulkifli
2013-04-01
Optimization models have been applied in many decision-making problems particularly in portfolio selection. Since the introduction of Markowitz's theory of portfolio selection, various approaches based on mathematical programming have been introduced such as mean-variance, mean-absolute deviation, mean-variance-skewness and conditional value-at-risk (CVaR) mainly to maximize return and minimize risk. However most of the approaches assume that the distribution of data is normal and this is not generally true. As an alternative, in this paper, we employ the median-variance approach to improve the portfolio optimization. This approach has successfully catered both types of normal and non-normal distribution of data. With this actual representation, we analyze and compare the rate of return and risk between the mean-variance and the median-variance based portfolio which consist of 30 stocks from Bursa Malaysia. The results in this study show that the median-variance approach is capable to produce a lower risk for each return earning as compared to the mean-variance approach.
Applying neural networks to optimize instrumentation performance
Start, S.E.; Peters, G.G.
1995-06-01
Well calibrated instrumentation is essential in providing meaningful information about the status of a plant. Signals from plant instrumentation frequently have inherent non-linearities, may be affected by environmental conditions and can therefore cause calibration difficulties for the people who maintain them. Two neural network approaches are described in this paper for improving the accuracy of a non-linear, temperature sensitive level probe ised in Expermental Breeder Reactor II (EBR-II) that was difficult to calibrate.
Perturbation approach applied to modal diffraction methods.
Bischoff, Joerg; Hehl, Karl
2011-05-01
Eigenvalue computation is an important part of many modal diffraction methods, including the rigorous coupled wave approach (RCWA) and the Chandezon method. This procedure is known to be computationally intensive, accounting for a large proportion of the overall run time. However, in many cases, eigenvalue information is already available from previous calculations. Some of the examples include adjacent slices in the RCWA, spectral- or angle-resolved scans in optical scatterometry and parameter derivatives in optimization. In this paper, we present a new technique that provides accurate and highly reliable solutions with significant improvements in computational time. The proposed method takes advantage of known eigensolution information and is based on perturbation method. PMID:21532698
Variable-complexity optimization applied to airfoil design
NASA Astrophysics Data System (ADS)
Thokala, Praveen; Martins, Joaquim R. R. A.
2007-04-01
Variable-complexity methods are applied to aerodynamic shape design problems with the objective of reducing the total computational cost of the optimization process. Two main strategies are employed: the use of different levels of fidelity in the analysis models (variable fidelity) and the use of different sets of design variables (variable parameterization). Variable-fidelity methods with three different types of corrections are implemented and applied to a set of two-dimensional airfoil optimization problems that use computational fluid dynamics for the analysis. Variable parameterization is also used to solve the same problems. Both strategies are shown to reduce the computational cost of the optimization.
HEURISTIC OPTIMIZATION AND ALGORITHM TUNING APPLIED TO SORPTIVE BARRIER DESIGN
While heuristic optimization is applied in environmental applications, ad-hoc algorithm configuration is typical. We use a multi-layer sorptive barrier design problem as a benchmark for an algorithm-tuning procedure, as applied to three heuristics (genetic algorithms, simulated ...
Multiobjective optimization approach: thermal food processing.
Abakarov, A; Sushkov, Y; Almonacid, S; Simpson, R
2009-01-01
The objective of this study was to utilize a multiobjective optimization technique for the thermal sterilization of packaged foods. The multiobjective optimization approach used in this study is based on the optimization of well-known aggregating functions by an adaptive random search algorithm. The applicability of the proposed approach was illustrated by solving widely used multiobjective test problems taken from the literature. The numerical results obtained for the multiobjective test problems and for the thermal processing problem show that the proposed approach can be effectively used for solving multiobjective optimization problems arising in the food engineering field. PMID:20492109
Genetic Algorithms Applied to Multi-Objective Aerodynamic Shape Optimization
NASA Technical Reports Server (NTRS)
Holst, Terry L.
2004-01-01
A genetic algorithm approach suitable for solving multi-objective optimization problems is described and evaluated using a series of aerodynamic shape optimization problems. Several new features including two variations of a binning selection algorithm and a gene-space transformation procedure are included. The genetic algorithm is suitable for finding pareto optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. A new masking array capability is included allowing any gene or gene subset to be eliminated as decision variables from the design space. This allows determination of the effect of a single gene or gene subset on the pareto optimal solution. Results indicate that the genetic algorithm optimization approach is flexible in application and reliable. The binning selection algorithms generally provide pareto front quality enhancements and moderate convergence efficiency improvements for most of the problems solved.
Optimal Statistical Approach to Optoacoustic Image Reconstruction
NASA Astrophysics Data System (ADS)
Zhulina, Yulia V.
2000-11-01
An optimal statistical approach is applied to the task of image reconstruction in photoacoustics. The physical essence of the task is as follows: Pulse laser irradiation induces an ultrasound wave on the inhomogeneities inside the investigated volume. This acoustic wave is received by the set of receivers outside this volume. It is necessary to reconstruct a spatial image of these inhomogeneities. Developed mathematical techniques of the radio location theory are used for solving the task. An algorithm of maximum likelihood is synthesized for the image reconstruction. The obtained algorithm is investigated by digital modeling. The number of receivers and their disposition in space are arbitrary. Results of the synthesis are applied to noninvasive medical diagnostics (breast cancer). The capability of the algorithm is tested on real signals. The image is built with use of signals obtained in vitro . The essence of the algorithm includes (i) summing of all signals in the image plane with the transform from the time coordinates of signals to the spatial coordinates of the image and (ii) optimal spatial filtration of this sum. The results are shown in the figures.
Genetic Algorithms Applied to Multi-Objective Aerodynamic Shape Optimization
NASA Technical Reports Server (NTRS)
Holst, Terry L.
2005-01-01
A genetic algorithm approach suitable for solving multi-objective problems is described and evaluated using a series of aerodynamic shape optimization problems. Several new features including two variations of a binning selection algorithm and a gene-space transformation procedure are included. The genetic algorithm is suitable for finding Pareto optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. A new masking array capability is included allowing any gene or gene subset to be eliminated as decision variables from the design space. This allows determination of the effect of a single gene or gene subset on the Pareto optimal solution. Results indicate that the genetic algorithm optimization approach is flexible in application and reliable. The binning selection algorithms generally provide Pareto front quality enhancements and moderate convergence efficiency improvements for most of the problems solved.
Applying Mathematical Optimization Methods to an ACT-R Instance-Based Learning Model
Said, Nadia; Engelhart, Michael; Kirches, Christian; Körkel, Stefan; Holt, Daniel V.
2016-01-01
Computational models of cognition provide an interface to connect advanced mathematical tools and methods to empirically supported theories of behavior in psychology, cognitive science, and neuroscience. In this article, we consider a computational model of instance-based learning, implemented in the ACT-R cognitive architecture. We propose an approach for obtaining mathematical reformulations of such cognitive models that improve their computational tractability. For the well-established Sugar Factory dynamic decision making task, we conduct a simulation study to analyze central model parameters. We show how mathematical optimization techniques can be applied to efficiently identify optimal parameter values with respect to different optimization goals. Beyond these methodological contributions, our analysis reveals the sensitivity of this particular task with respect to initial settings and yields new insights into how average human performance deviates from potential optimal performance. We conclude by discussing possible extensions of our approach as well as future steps towards applying more powerful derivative-based optimization methods. PMID:27387139
A Multiple Approach to Evaluating Applied Academics.
ERIC Educational Resources Information Center
Wang, Changhua; Owens, Thomas
The Boeing Company is involved in partnerships with Washington state schools in the area of applied academics. Over the past 3 years, Boeing offered grants to 57 high schools to implement applied mathematics, applied communication, and principles of technology courses. Part 1 of this paper gives an overview of applied academics by examining what…
Optimization of coupled systems: A critical overview of approaches
NASA Technical Reports Server (NTRS)
Balling, R. J.; Sobieszczanski-Sobieski, J.
1994-01-01
A unified overview is given of problem formulation approaches for the optimization of multidisciplinary coupled systems. The overview includes six fundamental approaches upon which a large number of variations may be made. Consistent approach names and a compact approach notation are given. The approaches are formulated to apply to general nonhierarchic systems. The approaches are compared both from a computational viewpoint and a managerial viewpoint. Opportunities for parallelism of both computation and manpower resources are discussed. Recommendations regarding the need for future research are advanced.
Bayesian approach to global discrete optimization
Mockus, J.; Mockus, A.; Mockus, L.
1994-12-31
We discuss advantages and disadvantages of the Bayesian approach (average case analysis). We present the portable interactive version of software for continuous global optimization. We consider practical multidimensional problems of continuous global optimization, such as optimization of VLSI yield, optimization of composite laminates, estimation of unknown parameters of bilinear time series. We extend Bayesian approach to discrete optimization. We regard the discrete optimization as a multi-stage decision problem. We assume that there exists some simple heuristic function which roughly predicts the consequences of the decisions. We suppose randomized decisions. We define the probability of the decision by the randomized decision function depending on heuristics. We fix this function with exception of some parameters. We repeat the randomized decision several times at the fixed values of those parameters and accept the best decision as the result. We optimize the parameters of the randomized decision function to make the search more efficient. Thus we reduce the discrete optimization problem to the continuous problem of global stochastic optimization. We solve this problem by the Bayesian methods of continuous global optimization. We describe the applications to some well known An problems of discrete programming, such as knapsack, traveling salesman, and scheduling.
Noise tolerant illumination optimization applied to display devices
NASA Astrophysics Data System (ADS)
Cassarly, William J.; Irving, Bruce
2005-02-01
Display devices have historically been designed through an iterative process using numerous hardware prototypes. This process is effective but the number of iterations is limited by the time and cost to make the prototypes. In recent years, virtual prototyping using illumination software modeling tools has replaced many of the hardware prototypes. Typically, the designer specifies the design parameters, builds the software model, predicts the performance using a Monte Carlo simulation, and uses the performance results to repeat this process until an acceptable design is obtained. What is highly desired, and now possible, is to use illumination optimization to automate the design process. Illumination optimization provides the ability to explore a wider range of design options while also providing improved performance. Since Monte Carlo simulations are often used to calculate the system performance but those predictions have statistical uncertainty, the use of noise tolerant optimization algorithms is important. The use of noise tolerant illumination optimization is demonstrated by considering display device designs that extract light using 2D paint patterns as well as 3D textured surfaces. A hybrid optimization approach that combines a mesh feedback optimization with a classical optimizer is demonstrated. Displays with LED sources and cold cathode fluorescent lamps are considered.
Applying SF-Based Genre Approaches to English Writing Class
ERIC Educational Resources Information Center
Wu, Yan; Dong, Hailin
2009-01-01
By exploring genre approaches in systemic functional linguistics and examining the analytic tools that can be applied to the process of English learning and teaching, this paper seeks to find a way of applying genre approaches to English writing class.
Approaches for Informing Optimal Dose of Behavioral Interventions
King, Heather A.; Maciejewski, Matthew L.; Allen, Kelli D.; Yancy, William S.; Shaffer, Jonathan A.
2015-01-01
Background There is little guidance about to how select dose parameter values when designing behavioral interventions. Purpose The purpose of this study is to present approaches to inform intervention duration, frequency, and amount when (1) the investigator has no a priori expectation and is seeking a descriptive approach for identifying and narrowing the universe of dose values or (2) the investigator has an a priori expectation and is seeking validation of this expectation using an inferential approach. Methods Strengths and weaknesses of various approaches are described and illustrated with examples. Results Descriptive approaches include retrospective analysis of data from randomized trials, assessment of perceived optimal dose via prospective surveys or interviews of key stakeholders, and assessment of target patient behavior via prospective, longitudinal, observational studies. Inferential approaches include nonrandomized, early-phase trials and randomized designs. Conclusions By utilizing these approaches, researchers may more efficiently apply resources to identify the optimal values of dose parameters for behavioral interventions. PMID:24722964
BEM4I applied to shape optimization problems
NASA Astrophysics Data System (ADS)
Zapletal, Jan; Merta, Michal; Čermák, Martin
2016-06-01
Shape optimization problems are one of the areas where the boundary element method can be applied efficiently. We present the application of the BEM4I library developed at IT4Innovations to a class of free surface Bernoulli problems in 3D. Apart from the boundary integral formulation of the related state and adjoint boundary value problems we present an implementation of a general scheme for the treatment of similar problems.
An effective model for ergonomic optimization applied to a new automotive assembly line
NASA Astrophysics Data System (ADS)
Duraccio, Vincenzo; Elia, Valerio; Forcina, Antonio
2016-06-01
An efficient ergonomic optimization can lead to a significant improvement in production performance and a considerable reduction of costs. In the present paper new model for ergonomic optimization is proposed. The new approach is based on the criteria defined by National Institute of Occupational Safety and Health and, adapted to Italian legislation. The proposed model provides an ergonomic optimization, by analyzing ergonomic relations between manual work in correct conditions. The model includes a schematic and systematic analysis method of the operations, and identifies all possible ergonomic aspects to be evaluated. The proposed approach has been applied to an automotive assembly line, where the operation repeatability makes the optimization fundamental. The proposed application clearly demonstrates the effectiveness of the new approach.
NASA Astrophysics Data System (ADS)
Evtushenko, Yu. G.; Posypkin, M. A.
2013-02-01
The nonuniform covering method is applied to multicriteria optimization problems. The ɛ-Pareto set is defined, and its properties are examined. An algorithm for constructing an ɛ-Pareto set with guaranteed accuracy ɛ is described. The efficiency of implementing this approach is discussed, and numerical results are presented.
Applying a gaming approach to IP strategy.
Gasnier, Arnaud; Vandamme, Luc
2010-02-01
Adopting an appropriate IP strategy is an important but complex area, particularly in the pharmaceutical and biotechnology sectors, in which aspects such as regulatory submissions, high competitive activity, and public health and safety information requirements limit the amount of information that can be protected effectively through secrecy. As a result, and considering the existing time limits for patent protection, decisions on how to approach IP in these sectors must be made with knowledge of the options and consequences of IP positioning. Because of the specialized nature of IP, it is necessary to impart knowledge regarding the options and impact of IP to decision-makers, whether at the level of inventors, marketers or strategic business managers. This feature review provides some insight on IP strategy, with a focus on the use of a new 'gaming' approach for transferring the skills and understanding needed to make informed IP-related decisions; the game Patentopolis is discussed as an example of such an approach. Patentopolis involves interactive activities with IP-related business decisions, including the exploitation and enforcement of IP rights, and can be used to gain knowledge on the impact of adopting different IP strategies. PMID:20127561
Applying simulation to optimize plastic molded optical parts
NASA Astrophysics Data System (ADS)
Jaworski, Matthew; Bakharev, Alexander; Costa, Franco; Friedl, Chris
2012-10-01
Optical injection molded parts are used in many different industries including electronics, consumer, medical and automotive due to their cost and performance advantages compared to alternative materials such as glass. The injection molding process, however, induces elastic (residual stress) and viscoelastic (flow orientation stress) deformation into the molded article which alters the material's refractive index to be anisotropic in different directions. Being able to predict and correct optical performance issues associated with birefringence early in the design phase is a huge competitive advantage. This paper reviews how to apply simulation analysis of the entire molding process to optimize manufacturability and part performance.
Quantum Resonance Approach to Combinatorial Optimization
NASA Technical Reports Server (NTRS)
Zak, Michail
1997-01-01
It is shown that quantum resonance can be used for combinatorial optimization. The advantage of the approach is in independence of the computing time upon the dimensionality of the problem. As an example, the solution to a constraint satisfaction problem of exponential complexity is demonstrated.
Applied topology optimization of vibro-acoustic hearing instrument models
NASA Astrophysics Data System (ADS)
Søndergaard, Morten Birkmose; Pedersen, Claus B. W.
2014-02-01
Designing hearing instruments remains an acoustic challenge as users request small designs for comfortable wear and cosmetic appeal and at the same time require sufficient amplification from the device. First, to ensure proper amplification in the device, a critical design challenge in the hearing instrument is to minimize the feedback between the outputs (generated sound and vibrations) from the receiver looping back into the microphones. Secondly, the feedback signal is minimized using time consuming trial-and-error design procedures for physical prototypes and virtual models using finite element analysis. In the present work it is demonstrated that structural topology optimization of vibro-acoustic finite element models can be used to both sufficiently minimize the feedback signal and to reduce the time consuming trial-and-error design approach. The structural topology optimization of a vibro-acoustic finite element model is shown for an industrial full scale model hearing instrument.
Mathematical Modelling: A New Approach to Teaching Applied Mathematics.
ERIC Educational Resources Information Center
Burghes, D. N.; Borrie, M. S.
1979-01-01
Describes the advantages of mathematical modeling approach in teaching applied mathematics and gives many suggestions for suitable material which illustrates the links between real problems and mathematics. (GA)
Pitfalls and optimal approaches to diagnose melioidosis.
Kingsley, Paul Vijay; Arunkumar, Govindakarnavar; Tipre, Meghan; Leader, Mark; Sathiakumar, Nalini
2016-06-01
Melioidosis is a severe and fatal infectious disease in the tropics and subtropics. It presents as a febrile illness with protean manifestation ranging from chronic localized infection to acute fulminant septicemia with dissemination of infection to multiple organs characterized by abscesses. Pneumonia is the most common clinical presentation. Because of the wide range of clinical presentations, physicians may often misdiagnose and mistreat the disease for tuberculosis, pneumonia or other pyogenic infections. The purpose of this paper is to present common pitfalls in diagnosis and provide optimal approaches to enable early diagnosis and prompt treatment of melioidosis. Melioidosis may occur beyond the boundaries of endemic areas. There is no pathognomonic feature specific to a diagnosis of melioidosis. In endemic areas, physicians need to expand the diagnostic work-up to include melioidosis when confronted with clinical scenarios of pyrexia of unknown origin, progressive pneumonia or sepsis. Radiological imaging is an integral part of the diagnostic workup. Knowledge of the modes of transmission and risk factors will add support in clinically suspected cases to initiate therapy. In situations of clinically highly probable or possible cases where laboratory bacteriological confirmation is not possible, applying evidence-based criteria and empirical treatment with antimicrobials is recommended. It is of prime importance that patients undergo the full course of antimicrobial therapy to avoid relapse and recurrence. Early diagnosis and appropriate management is crucial in reducing serious complications leading to high mortality, and in preventing recurrences of the disease. Thus, there is a crucial need for promoting awareness among physicians at all levels and for improved diagnostic microbiology services. Further, the need for making the disease notifiable and/or initiating melioidosis registries in endemic countries appears to be compelling. PMID:27262061
A Multidisciplinary Optimization Platform Applied to Steel Constructions
NASA Astrophysics Data System (ADS)
Benanane, Abdelkader; Caperaa, Serge; Said Bekkouche, M.; Kerdal, Djamel
The design of the complex objects such as the buildings was always organized in levels of design since the preliminary phases to the final phases. If the approach by levels allows the designers to practise a precise and progressive definition to objects, it does not allow leading nevertheless to an optimal design of the projects. Consequently, we propose in this study, an original modelling allowing representing effectively all the process of the multidisciplinary design by the exchange of textual files (technical data and knowledge) between the different disciplines of the civil engineering (geotechnical studies, reinforced concrete and structural studies). We use for the loops of optimization the Monte-Carlo method because of its great robustness since it is based on the use of random numbers and the statistical tools, because it also puts up with any form of a function objective and allows to hold account easily constraints of optimization. The studies of test cases carried out on simple structures emphasize very significant and very promising variations as well as the dimensioning that the global cost.
Multidisciplinary Approach to Linear Aerospike Nozzle Optimization
NASA Technical Reports Server (NTRS)
Korte, J. J.; Salas, A. O.; Dunn, H. J.; Alexandrov, N. M.; Follett, W. W.; Orient, G. E.; Hadid, A. H.
1997-01-01
A model of a linear aerospike rocket nozzle that consists of coupled aerodynamic and structural analyses has been developed. A nonlinear computational fluid dynamics code is used to calculate the aerodynamic thrust, and a three-dimensional fink-element model is used to determine the structural response and weight. The model will be used to demonstrate multidisciplinary design optimization (MDO) capabilities for relevant engine concepts, assess performance of various MDO approaches, and provide a guide for future application development. In this study, the MDO problem is formulated using the multidisciplinary feasible (MDF) strategy. The results for the MDF formulation are presented with comparisons against sequential aerodynamic and structural optimized designs. Significant improvements are demonstrated by using a multidisciplinary approach in comparison with the single- discipline design strategy.
Multiobjective genetic approach for optimal control of photoinduced processes
Bonacina, Luigi; Extermann, Jerome; Rondi, Ariana; Wolf, Jean-Pierre; Boutou, Veronique
2007-08-15
We have applied a multiobjective genetic algorithm to the optimization of multiphoton-excited fluorescence. Our study shows the advantages that this approach can offer to experiments based on adaptive shaping of femtosecond pulses. The algorithm outperforms single-objective optimizations, being totally independent from the bias of user defined parameters and giving simultaneous access to a large set of feasible solutions. The global inspection of their ensemble represents a powerful support to unravel the connections between pulse spectral field features and excitation dynamics of the sample.
Camara, Malick; Breuil, Philippe; Briand, Danick; Viricelle, Jean-Paul; Pijolat, Christophe; de Rooij, Nico F
2015-04-21
This paper presents the optimization of a micro gas preconcentrator (μ-GP) system applied to atmospheric pollution monitoring, with the help of a complete modeling of the preconcentration cycle. Two different approaches based on kinetic equations are used to illustrate the behavior of the micro gas preconcentrator for given experimental conditions. The need for high adsorption flow and heating rate and for low desorption flow and detection volume is demonstrated in this paper. Preliminary to this optimization, the preconcentration factor is discussed and a definition is proposed. PMID:25810264
Yi, Q; Quinlan, D
2004-03-05
Optimizing compilers have a long history of applying loop transformations to C and Fortran scientific applications. However, such optimizations are rare in compilers for object-oriented languages such as C++ or Java, where loops operating on user-defined types are left unoptimized due to their unknown semantics. Our goal is to reduce the performance penalty of using high-level object-oriented abstractions. We propose an approach that allows the explicit communication between programmers and compilers. We have extended the traditional Fortran loop optimizations with an open interface. Through this interface, we have developed techniques to automatically recognize and optimize user-defined array abstractions. In addition, we have developed an adapted constant-propagation algorithm to automatically propagate properties of abstractions. We have implemented these techniques in a C++ source-to-source translator and have applied them to optimize several kernels written using an array-class library. Our experimental results show that using our approach, applications using high-level abstractions can achieve comparable, and in cases superior, performance to that achieved by efficient low-level hand-written codes.
A Bayesian approach to optimizing cryopreservation protocols
2015-01-01
Cryopreservation is beset with the challenge of protocol alignment across a wide range of cell types and process variables. By taking a cross-sectional assessment of previously published cryopreservation data (sample means and standard errors) as preliminary meta-data, a decision tree learning analysis (DTLA) was performed to develop an understanding of target survival using optimized pruning methods based on different approaches. Briefly, a clear direction on the decision process for selection of methods was developed with key choices being the cooling rate, plunge temperature on the one hand and biomaterial choice, use of composites (sugars and proteins as additional constituents), loading procedure and cell location in 3D scaffolding on the other. Secondly, using machine learning and generalized approaches via the Naïve Bayes Classification (NBC) method, these metadata were used to develop posterior probabilities for combinatorial approaches that were implicitly recorded in the metadata. These latter results showed that newer protocol choices developed using probability elicitation techniques can unearth improved protocols consistent with multiple unidimensionally-optimized physical protocols. In conclusion, this article proposes the use of DTLA models and subsequently NBC for the improvement of modern cryopreservation techniques through an integrative approach. PMID:26131379
Optimization approaches for planning external beam radiotherapy
NASA Astrophysics Data System (ADS)
Gozbasi, Halil Ozan
Cancer begins when cells grow out of control as a result of damage to their DNA. These abnormal cells can invade healthy tissue and form tumors in various parts of the body. Chemotherapy, immunotherapy, surgery and radiotherapy are the most common treatment methods for cancer. According to American Cancer Society about half of the cancer patients receive a form of radiation therapy at some stage. External beam radiotherapy is delivered from outside the body and aimed at cancer cells to damage their DNA making them unable to divide and reproduce. The beams travel through the body and may damage nearby healthy tissue unless carefully planned. Therefore, the goal of treatment plan optimization is to find the best system parameters to deliver sufficient dose to target structures while avoiding damage to healthy tissue. This thesis investigates optimization approaches for two external beam radiation therapy techniques: Intensity-Modulated Radiation Therapy (IMRT) and Volumetric-Modulated Arc Therapy (VMAT). We develop automated treatment planning technology for IMRT that produces several high-quality treatment plans satisfying provided clinical requirements in a single invocation and without human guidance. A novel bi-criteria scoring based beam selection algorithm is part of the planning system and produces better plans compared to those produced using a well-known scoring-based algorithm. Our algorithm is very efficient and finds the beam configuration at least ten times faster than an exact integer programming approach. Solution times range from 2 minutes to 15 minutes which is clinically acceptable. With certain cancers, especially lung cancer, a patient's anatomy changes during treatment. These anatomical changes need to be considered in treatment planning. Fortunately, recent advances in imaging technology can provide multiple images of the treatment region taken at different points of the breathing cycle, and deformable image registration algorithms can
LP based approach to optimal stable matchings
Teo, Chung-Piaw; Sethuraman, J.
1997-06-01
We study the classical stable marriage and stable roommates problems using a polyhedral approach. We propose a new LP formulation for the stable roommates problem. This formulation is non-empty if and only if the underlying roommates problem has a stable matching. Furthermore, for certain special weight functions on the edges, we construct a 2-approximation algorithm for the optimal stable roommates problem. Our technique uses a crucial geometry of the fractional solutions in this formulation. For the stable marriage problem, we show that a related geometry allows us to express any fractional solution in the stable marriage polytope as convex combination of stable marriage solutions. This leads to a genuinely simple proof of the integrality of the stable marriage polytope. Based on these ideas, we devise a heuristic to solve the optimal stable roommates problem. The heuristic combines the power of rounding and cutting-plane methods. We present some computational results based on preliminary implementations of this heuristic.
Optimization of a Solar Photovoltaic Applied to Greenhouses
NASA Astrophysics Data System (ADS)
Nakoul, Z.; Bibi-Triki, N.; Kherrous, A.; Bessenouci, M. Z.; Khelladi, S.
The global energy consumption and in our country is increasing. The bulk of world energy comes from fossil fuels, whose reserves are doomed to exhaustion and are the leading cause of pollution and global warming through the greenhouse effect. This is not the case of renewable energy that are inexhaustible and from natural phenomena. For years, unanimously, solar energy is in the first rank of renewable energies .The study of energetic aspect of a solar power plant is the best way to find the optimum of its performances. The study on land with real dimensions requires a long time and therefore is very costly, and more results are not always generalizable. To avoid these drawbacks we opted for a planned study on computer only, using the software 'Matlab' by modeling different components for a better sizing and simulating all energies to optimize profitability taking into account the cost. The result of our work applied to sites of Tlemcen and Bouzareah led us to conclude that the energy required is a determining factor in the choice of components of a PV solar power plant.
Optimal trading strategies—a time series approach
NASA Astrophysics Data System (ADS)
Bebbington, Peter A.; Kühn, Reimer
2016-05-01
Motivated by recent advances in the spectral theory of auto-covariance matrices, we are led to revisit a reformulation of Markowitz’ mean-variance portfolio optimization approach in the time domain. In its simplest incarnation it applies to a single traded asset and allows an optimal trading strategy to be found which—for a given return—is minimally exposed to market price fluctuations. The model is initially investigated for a range of synthetic price processes, taken to be either second order stationary, or to exhibit second order stationary increments. Attention is paid to consequences of estimating auto-covariance matrices from small finite samples, and auto-covariance matrix cleaning strategies to mitigate against these are investigated. Finally we apply our framework to real world data.
A Simulation Optimization Approach to Epidemic Forecasting.
Nsoesie, Elaine O; Beckman, Richard J; Shashaani, Sara; Nagaraj, Kalyani S; Marathe, Madhav V
2013-01-01
Reliable forecasts of influenza can aid in the control of both seasonal and pandemic outbreaks. We introduce a simulation optimization (SIMOP) approach for forecasting the influenza epidemic curve. This study represents the final step of a project aimed at using a combination of simulation, classification, statistical and optimization techniques to forecast the epidemic curve and infer underlying model parameters during an influenza outbreak. The SIMOP procedure combines an individual-based model and the Nelder-Mead simplex optimization method. The method is used to forecast epidemics simulated over synthetic social networks representing Montgomery County in Virginia, Miami, Seattle and surrounding metropolitan regions. The results are presented for the first four weeks. Depending on the synthetic network, the peak time could be predicted within a 95% CI as early as seven weeks before the actual peak. The peak infected and total infected were also accurately forecasted for Montgomery County in Virginia within the forecasting period. Forecasting of the epidemic curve for both seasonal and pandemic influenza outbreaks is a complex problem, however this is a preliminary step and the results suggest that more can be achieved in this area. PMID:23826222
Optimization approaches to nonlinear model predictive control
Biegler, L.T. . Dept. of Chemical Engineering); Rawlings, J.B. . Dept. of Chemical Engineering)
1991-01-01
With the development of sophisticated methods for nonlinear programming and powerful computer hardware, it now becomes useful and efficient to formulate and solve nonlinear process control problems through on-line optimization methods. This paper explores and reviews control techniques based on repeated solution of nonlinear programming (NLP) problems. Here several advantages present themselves. These include minimization of readily quantifiable objectives, coordinated and accurate handling of process nonlinearities and interactions, and systematic ways of dealing with process constraints. We motivate this NLP-based approach with small nonlinear examples and present a basic algorithm for optimization-based process control. As can be seen this approach is a straightforward extension of popular model-predictive controllers (MPCs) that are used for linear systems. The statement of the basic algorithm raises a number of questions regarding stability and robustness of the method, efficiency of the control calculations, incorporation of feedback into the controller and reliable ways of handling process constraints. Each of these will be treated through analysis and/or modification of the basic algorithm. To highlight and support this discussion, several examples are presented and key results are examined and further developed. 74 refs., 11 figs.
Essays on Applied Resource Economics Using Bioeconomic Optimization Models
NASA Astrophysics Data System (ADS)
Affuso, Ermanno
With rising demographic growth, there is increasing interest in analytical studies that assess alternative policies to provide an optimal allocation of scarce natural resources while ensuring environmental sustainability. This dissertation consists of three essays in applied resource economics that are interconnected methodologically within the agricultural production sector of Economics. The first chapter examines the sustainability of biofuels by simulating and evaluating an agricultural voluntary program that aims to increase the land use efficiency in the production of biofuels of first generation in the state of Alabama. The results show that participatory decisions may increase the net energy value of biofuels by 208% and reduce emissions by 26%; significantly contributing to the state energy goals. The second chapter tests the hypothesis of overuse of fertilizers and pesticides in U.S. peanut farming with respect to other inputs and address genetic research to reduce the use of the most overused chemical input. The findings suggest that peanut producers overuse fungicide with respect to any other input and that fungi resistant genetically engineered peanuts may increase the producer welfare up to 36.2%. The third chapter implements a bioeconomic model, which consists of a biophysical model and a stochastic dynamic recursive model that is used to measure potential economic and environmental welfare of cotton farmers derived from a rotation scheme that uses peanut as a complementary crop. The results show that the rotation scenario would lower farming costs by 14% due to nitrogen credits from prior peanut land use and reduce non-point source pollution from nitrogen runoff by 6.13% compared to continuous cotton farming.
Mixed finite element formulation applied to shape optimization
NASA Technical Reports Server (NTRS)
Rodrigues, Helder; Taylor, John E.; Kikuchi, Noboru
1988-01-01
The development presented introduces a general form of mixed formulation for the optimal shape design problem. The associated optimality conditions are easily obtained without resorting to highly elaborate mathematical developments. Also, the physical significance of the adjoint problem is clearly defined with this formulation.
Optimality of collective choices: a stochastic approach.
Nicolis, S C; Detrain, C; Demolin, D; Deneubourg, J L
2003-09-01
Amplifying communication is a characteristic of group-living animals. This study is concerned with food recruitment by chemical means, known to be associated with foraging in most ant colonies but also with defence or nest moving. A stochastic approach of collective choices made by ants faced with different sources is developed to account for the fluctuations inherent to the recruitment process. It has been established that ants are able to optimize their foraging by selecting the most rewarding source. Our results not only confirm that selection is the result of a trail modulation according to food quality but also show the existence of an optimal quantity of laid pheromone for which the selection of a source is at the maximum, whatever the difference between the two sources might be. In terms of colony size, large colonies more easily focus their activity on one source. Moreover, the selection of the rich source is more efficient if many individuals lay small quantities of pheromone, instead of a small group of individuals laying a higher trail amount. These properties due to the stochasticity of the recruitment process can be extended to other social phenomena in which competition between different sources of information occurs. PMID:12909251
Optimized perturbation theory applied to factorization scheme dependence
NASA Astrophysics Data System (ADS)
Stevenson, P. M.; Politzer, H. David
We reconsider the application of the "optimization" procedure to the problem of factorization scheme dependence in finite-order QCD calculations. The main difficulty encountered in a previous analysis disappears once an algebraic error is corrected.
An Optimal Guidance Law Applied to Quadrotor Using LQR Method
NASA Astrophysics Data System (ADS)
Jafari, Hamidreza; Zareh, Mehran; Roshanian, Jafar; Nikkhah, Amirali
The optimal guidance law of an autonomous four-rotor helicopter, called the Quadrotor, using linear quadratic regulators (LQR) is presented in this paper. The dynamic equations of the Quadrotor are considered nonlinear so to find an LQR controller, it is necessary that these equations be linearized in different operation points. Due to importance of energy consumption in Quadrotors, minimum energy is selected as the optimal criteria.
Das, B; Meirovitch, H; Navon, I M
2003-07-30
Energy minimization plays an important role in structure determination and analysis of proteins, peptides, and other organic molecules; therefore, development of efficient minimization algorithms is important. Recently, Morales and Nocedal developed hybrid methods for large-scale unconstrained optimization that interlace iterations of the limited-memory BFGS method (L-BFGS) and the Hessian-free Newton method (Computat Opt Appl 2002, 21, 143-154). We test the performance of this approach as compared to those of the L-BFGS algorithm of Liu and Nocedal and the truncated Newton (TN) with automatic preconditioner of Nash, as applied to the protein bovine pancreatic trypsin inhibitor (BPTI) and a loop of the protein ribonuclease A. These systems are described by the all-atom AMBER force field with a dielectric constant epsilon = 1 and a distance-dependent dielectric function epsilon = 2r, where r is the distance between two atoms. It is shown that for the optimal parameters the hybrid approach is typically two times more efficient in terms of CPU time and function/gradient calculations than the two other methods. The advantage of the hybrid approach increases as the electrostatic interactions become stronger, that is, in going from epsilon = 2r to epsilon = 1, which leads to a more rugged and probably more nonlinear potential energy surface. However, no general rule that defines the optimal parameters has been found and their determination requires a relatively large number of trial-and-error calculations for each problem. PMID:12820130
Gálvez, Akemi; Iglesias, Andrés
2013-01-01
Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. Firstly, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but precomputed with a coarse approximation scheme. Secondly, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to optimization of data parameterization; then, the knot vector is refined by using De Boor's method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently. PMID:24376380
Learning approach to sampling optimization: Applications in astrodynamics
NASA Astrophysics Data System (ADS)
Henderson, Troy Allen
A new, novel numerical optimization algorithm is developed, tested, and used to solve difficult numerical problems from the field of astrodynamics. First, a brief review of optimization theory is presented and common numerical optimization techniques are discussed. Then, the new method, called the Learning Approach to Sampling Optimization (LA) is presented. Simple, illustrative examples are given to further emphasize the simplicity and accuracy of the LA method. Benchmark functions in lower dimensions are studied and the LA is compared, in terms of performance, to widely used methods. Three classes of problems from astrodynamics are then solved. First, the N-impulse orbit transfer and rendezvous problems are solved by using the LA optimization technique along with derived bounds that make the problem computationally feasible. This marriage between analytical and numerical methods allows an answer to be found for an order of magnitude greater number of impulses than are currently published. Next, the N-impulse work is applied to design periodic close encounters (PCE) in space. The encounters are defined as an open rendezvous, meaning that two spacecraft must be at the same position at the same time, but their velocities are not necessarily equal. The PCE work is extended to include N-impulses and other constraints, and new examples are given. Finally, a trajectory optimization problem is solved using the LA algorithm and comparing performance with other methods based on two models---with varying complexity---of the Cassini-Huygens mission to Saturn. The results show that the LA consistently outperforms commonly used numerical optimization algorithms.
A global optimization approach to multi-polarity sentiment analysis.
Li, Xinmiao; Li, Jing; Wu, Yukeng
2015-01-01
Following the rapid development of social media, sentiment analysis has become an important social media mining technique. The performance of automatic sentiment analysis primarily depends on feature selection and sentiment classification. While information gain (IG) and support vector machines (SVM) are two important techniques, few studies have optimized both approaches in sentiment analysis. The effectiveness of applying a global optimization approach to sentiment analysis remains unclear. We propose a global optimization-based sentiment analysis (PSOGO-Senti) approach to improve sentiment analysis with IG for feature selection and SVM as the learning engine. The PSOGO-Senti approach utilizes a particle swarm optimization algorithm to obtain a global optimal combination of feature dimensions and parameters in the SVM. We evaluate the PSOGO-Senti model on two datasets from different fields. The experimental results showed that the PSOGO-Senti model can improve binary and multi-polarity Chinese sentiment analysis. We compared the optimal feature subset selected by PSOGO-Senti with the features in the sentiment dictionary. The results of this comparison indicated that PSOGO-Senti can effectively remove redundant and noisy features and can select a domain-specific feature subset with a higher-explanatory power for a particular sentiment analysis task. The experimental results showed that the PSOGO-Senti approach is effective and robust for sentiment analysis tasks in different domains. By comparing the improvements of two-polarity, three-polarity and five-polarity sentiment analysis results, we found that the five-polarity sentiment analysis delivered the largest improvement. The improvement of the two-polarity sentiment analysis was the smallest. We conclude that the PSOGO-Senti achieves higher improvement for a more complicated sentiment analysis task. We also compared the results of PSOGO-Senti with those of the genetic algorithm (GA) and grid search method. From
A Global Optimization Approach to Multi-Polarity Sentiment Analysis
Li, Xinmiao; Li, Jing; Wu, Yukeng
2015-01-01
Following the rapid development of social media, sentiment analysis has become an important social media mining technique. The performance of automatic sentiment analysis primarily depends on feature selection and sentiment classification. While information gain (IG) and support vector machines (SVM) are two important techniques, few studies have optimized both approaches in sentiment analysis. The effectiveness of applying a global optimization approach to sentiment analysis remains unclear. We propose a global optimization-based sentiment analysis (PSOGO-Senti) approach to improve sentiment analysis with IG for feature selection and SVM as the learning engine. The PSOGO-Senti approach utilizes a particle swarm optimization algorithm to obtain a global optimal combination of feature dimensions and parameters in the SVM. We evaluate the PSOGO-Senti model on two datasets from different fields. The experimental results showed that the PSOGO-Senti model can improve binary and multi-polarity Chinese sentiment analysis. We compared the optimal feature subset selected by PSOGO-Senti with the features in the sentiment dictionary. The results of this comparison indicated that PSOGO-Senti can effectively remove redundant and noisy features and can select a domain-specific feature subset with a higher-explanatory power for a particular sentiment analysis task. The experimental results showed that the PSOGO-Senti approach is effective and robust for sentiment analysis tasks in different domains. By comparing the improvements of two-polarity, three-polarity and five-polarity sentiment analysis results, we found that the five-polarity sentiment analysis delivered the largest improvement. The improvement of the two-polarity sentiment analysis was the smallest. We conclude that the PSOGO-Senti achieves higher improvement for a more complicated sentiment analysis task. We also compared the results of PSOGO-Senti with those of the genetic algorithm (GA) and grid search method. From
Applying a Constructivist and Collaborative Methodological Approach in Engineering Education
ERIC Educational Resources Information Center
Moreno, Lorenzo; Gonzalez, Carina; Castilla, Ivan; Gonzalez, Evelio; Sigut, Jose
2007-01-01
In this paper, a methodological educational proposal based on constructivism and collaborative learning theories is described. The suggested approach has been successfully applied to a subject entitled "Computer Architecture and Engineering" in a Computer Science degree in the University of La Laguna in Spain. This methodology is supported by two…
Focus Groups: A Practical and Applied Research Approach for Counselors
ERIC Educational Resources Information Center
Kress, Victoria E.; Shoffner, Marie F.
2007-01-01
Focus groups are becoming a popular research approach that counselors can use as an efficient, practical, and applied method of gathering information to better serve clients. In this article, the authors describe focus groups and their potential usefulness to professional counselors and researchers. Practical implications related to the use of…
Optimizing communication satellites payload configuration with exact approaches
NASA Astrophysics Data System (ADS)
Stathakis, Apostolos; Danoy, Grégoire; Bouvry, Pascal; Talbi, El-Ghazali; Morelli, Gianluigi
2015-12-01
The satellite communications market is competitive and rapidly evolving. The payload, which is in charge of applying frequency conversion and amplification to the signals received from Earth before their retransmission, is made of various components. These include reconfigurable switches that permit the re-routing of signals based on market demand or because of some hardware failure. In order to meet modern requirements, the size and the complexity of current communication payloads are increasing significantly. Consequently, the optimal payload configuration, which was previously done manually by the engineers with the use of computerized schematics, is now becoming a difficult and time consuming task. Efficient optimization techniques are therefore required to find the optimal set(s) of switch positions to optimize some operational objective(s). In order to tackle this challenging problem for the satellite industry, this work proposes two Integer Linear Programming (ILP) models. The first one is single-objective and focuses on the minimization of the length of the longest channel path, while the second one is bi-objective and additionally aims at minimizing the number of switch changes in the payload switch matrix. Experiments are conducted on a large set of instances of realistic payload sizes using the CPLEX® solver and two well-known exact multi-objective algorithms. Numerical results demonstrate the efficiency and limitations of the ILP approach on this real-world problem.
Genetic algorithm parameter optimization: applied to sensor coverage
NASA Astrophysics Data System (ADS)
Sahin, Ferat; Abbate, Giuseppe
2004-08-01
Genetic Algorithms are powerful tools, which when set upon a solution space will search for the optimal answer. These algorithms though have some associated problems, which are inherent to the method such as pre-mature convergence and lack of population diversity. These problems can be controlled with changes to certain parameters such as crossover, selection, and mutation. This paper attempts to tackle these problems in GA by having another GA controlling these parameters. The values for crossover parameter are: one point, two point, and uniform. The values for selection parameters are: best, worst, roulette wheel, inside 50%, outside 50%. The values for the mutation parameter are: random and swap. The system will include a control GA whose population will consist of different parameters settings. While this GA is attempting to find the best parameters it will be advancing into the search space of the problem and refining the population. As the population changes due to the search so will the optimal parameters. For every control GA generation each of the individuals in the population will be tested for fitness by being run through the problem GA with the assigned parameters. During these runs the population used in the next control generation is compiled. Thus, both the issue of finding the best parameters and the solution to the problem are attacked at the same time. The goal is to optimize the sensor coverage in a square field. The test case used was a 30 by 30 unit field with 100 sensor nodes. Each sensor node had a coverage area of 3 by 3 units. The algorithm attempts to optimize the sensor coverage in the field by moving the nodes. The results show that the control GA will provide better results when compared to a system with no parameter changes.
Execution of Multidisciplinary Design Optimization Approaches on Common Test Problems
NASA Technical Reports Server (NTRS)
Balling, R. J.; Wilkinson, C. A.
1997-01-01
A class of synthetic problems for testing multidisciplinary design optimization (MDO) approaches is presented. These test problems are easy to reproduce because all functions are given as closed-form mathematical expressions. They are constructed in such a way that the optimal value of all variables and the objective is unity. The test problems involve three disciplines and allow the user to specify the number of design variables, state variables, coupling functions, design constraints, controlling design constraints, and the strength of coupling. Several MDO approaches were executed on two sample synthetic test problems. These approaches included single-level optimization approaches, collaborative optimization approaches, and concurrent subspace optimization approaches. Execution results are presented, and the robustness and efficiency of these approaches an evaluated for these sample problems.
A multiple objective optimization approach to quality control
NASA Technical Reports Server (NTRS)
Seaman, Christopher Michael
1991-01-01
The use of product quality as the performance criteria for manufacturing system control is explored. The goal in manufacturing, for economic reasons, is to optimize product quality. The problem is that since quality is a rather nebulous product characteristic, there is seldom an analytic function that can be used as a measure. Therefore standard control approaches, such as optimal control, cannot readily be applied. A second problem with optimizing product quality is that it is typically measured along many dimensions: there are many apsects of quality which must be optimized simultaneously. Very often these different aspects are incommensurate and competing. The concept of optimality must now include accepting tradeoffs among the different quality characteristics. These problems are addressed using multiple objective optimization. It is shown that the quality control problem can be defined as a multiple objective optimization problem. A controller structure is defined using this as the basis. Then, an algorithm is presented which can be used by an operator to interactively find the best operating point. Essentially, the algorithm uses process data to provide the operator with two pieces of information: (1) if it is possible to simultaneously improve all quality criteria, then determine what changes to the process input or controller parameters should be made to do this; and (2) if it is not possible to improve all criteria, and the current operating point is not a desirable one, select a criteria in which a tradeoff should be made, and make input changes to improve all other criteria. The process is not operating at an optimal point in any sense if no tradeoff has to be made to move to a new operating point. This algorithm ensures that operating points are optimal in some sense and provides the operator with information about tradeoffs when seeking the best operating point. The multiobjective algorithm was implemented in two different injection molding scenarios
An Iterative Approach for the Optimization of Pavement Maintenance Management at the Network Level
Torres-Machí, Cristina; Chamorro, Alondra; Videla, Carlos; Yepes, Víctor
2014-01-01
Pavement maintenance is one of the major issues of public agencies. Insufficient investment or inefficient maintenance strategies lead to high economic expenses in the long term. Under budgetary restrictions, the optimal allocation of resources becomes a crucial aspect. Two traditional approaches (sequential and holistic) and four classes of optimization methods (selection based on ranking, mathematical optimization, near optimization, and other methods) have been applied to solve this problem. They vary in the number of alternatives considered and how the selection process is performed. Therefore, a previous understanding of the problem is mandatory to identify the most suitable approach and method for a particular network. This study aims to assist highway agencies, researchers, and practitioners on when and how to apply available methods based on a comparative analysis of the current state of the practice. Holistic approach tackles the problem considering the overall network condition, while the sequential approach is easier to implement and understand, but may lead to solutions far from optimal. Scenarios defining the suitability of these approaches are defined. Finally, an iterative approach gathering the advantages of traditional approaches is proposed and applied in a case study. The proposed approach considers the overall network condition in a simpler and more intuitive manner than the holistic approach. PMID:24741352
Applying Genetic Algorithms To Query Optimization in Document Retrieval.
ERIC Educational Resources Information Center
Horng, Jorng-Tzong; Yeh, Ching-Chang
2000-01-01
Proposes a novel approach to automatically retrieve keywords and then uses genetic algorithms to adapt the keyword weights. Discusses Chinese text retrieval, term frequency rating formulas, vector space models, bigrams, the PAT-tree structure for information retrieval, query vectors, and relevance feedback. (Author/LRW)
A multiple objective optimization approach to aircraft control systems design
NASA Technical Reports Server (NTRS)
Tabak, D.; Schy, A. A.; Johnson, K. G.; Giesy, D. P.
1979-01-01
The design of an aircraft lateral control system, subject to several performance criteria and constraints, is considered. While in the previous studies of the same model a single criterion optimization, with other performance requirements expressed as constraints, has been pursued, the current approach involves a multiple criteria optimization. In particular, a Pareto optimal solution is sought.
Self-Adaptive Stepsize Search Applied to Optimal Structural Design
NASA Astrophysics Data System (ADS)
Nolle, L.; Bland, J. A.
Structural engineering often involves the design of space frames that are required to resist predefined external forces without exhibiting plastic deformation. The weight of the structure and hence the weight of its constituent members has to be as low as possible for economical reasons without violating any of the load constraints. Design spaces are usually vast and the computational costs for analyzing a single design are usually high. Therefore, not every possible design can be evaluated for real-world problems. In this work, a standard structural design problem, the 25-bar problem, has been solved using self-adaptive stepsize search (SASS), a relatively new search heuristic. This algorithm has only one control parameter and therefore overcomes the drawback of modern search heuristics, i.e. the need to first find a set of optimum control parameter settings for the problem at hand. In this work, SASS outperforms simulated-annealing, genetic algorithms, tabu search and ant colony optimization.
Weaknesses in Applying a Process Approach in Industry Enterprises
NASA Astrophysics Data System (ADS)
Kučerová, Marta; Mĺkva, Miroslava; Fidlerová, Helena
2012-12-01
The paper deals with a process approach as one of the main principles of the quality management. Quality management systems based on process approach currently represents one of a proofed ways how to manage an organization. The volume of sales, costs and profit levels are influenced by quality of processes and efficient process flow. As results of the research project showed, there are some weaknesses in applying of the process approach in the industrial routine and it has been often only a formal change of the functional management to process management in many organizations in Slovakia. For efficient process management it is essential that companies take attention to the way how to organize their processes and seek for their continuous improvement.
Applying a weed risk assessment approach to GM crops.
Keese, Paul K; Robold, Andrea V; Myers, Ruth C; Weisman, Sarah; Smith, Joe
2014-12-01
Current approaches to environmental risk assessment of genetically modified (GM) plants are modelled on chemical risk assessment methods, which have a strong focus on toxicity. There are additional types of harms posed by plants that have been extensively studied by weed scientists and incorporated into weed risk assessment methods. Weed risk assessment uses robust, validated methods that are widely applied to regulatory decision-making about potentially problematic plants. They are designed to encompass a broad variety of plant forms and traits in different environments, and can provide reliable conclusions even with limited data. The knowledge and experience that underpin weed risk assessment can be harnessed for environmental risk assessment of GM plants. A case study illustrates the application of the Australian post-border weed risk assessment approach to a representative GM plant. This approach is a valuable tool to identify potential risks from GM plants. PMID:24046097
Group Counseling Optimization: A Novel Approach
NASA Astrophysics Data System (ADS)
Eita, M. A.; Fahmy, M. M.
A new population-based search algorithm, which we call Group Counseling Optimizer (GCO), is presented. It mimics the group counseling behavior of humans in solving their problems. The algorithm is tested using seven known benchmark functions: Sphere, Rosenbrock, Griewank, Rastrigin, Ackley, Weierstrass, and Schwefel functions. A comparison is made with the recently published comprehensive learning particle swarm optimizer (CLPSO). The results demonstrate the efficiency and robustness of the proposed algorithm.
Robust Bayesian decision theory applied to optimal dosage.
Abraham, Christophe; Daurès, Jean-Pierre
2004-04-15
We give a model for constructing an utility function u(theta,d) in a dose prescription problem. theta and d denote respectively the patient state of health and the dose. The construction of u is based on the conditional probabilities of several variables. These probabilities are described by logistic models. Obviously, u is only an approximation of the true utility function and that is why we investigate the sensitivity of the final decision with respect to the utility function. We construct a class of utility functions from u and approximate the set of all Bayes actions associated to that class. Then, we measure the sensitivity as the greatest difference between the expected utilities of two Bayes actions. Finally, we apply these results to weighing up a chemotherapy treatment of lung cancer. This application emphasizes the importance of measuring robustness through the utility of decisions rather than the decisions themselves. PMID:15057878
Applying riding-posture optimization on bicycle frame design.
Hsiao, Shih-Wen; Chen, Rong-Qi; Leng, Wan-Lee
2015-11-01
Customization design is a trend for developing a bicycle in recent years. Thus, the comfort of riding a bike is an important factor that should be paid much attention to while developing a bicycle. From the viewpoint of ergonomics, the concept of "fitting object to the human body" is designed into the bicycle frame in this study. Firstly, the important feature points of riding posture were automatically detected by the image processing method. In the measurement process, the best riding posture was identified experimentally, thus the positions of feature points and joint angles of human body were obtained. Afterwards, according to the measurement data, three key points: the handlebar, the saddle and the crank center, were identified and applied to the frame design of various bicycle types. Lastly, this study further proposed a frame size table for common bicycle types, which is helpful for the designer to design a bicycle. PMID:26154206
An optimization approach and its application to compare DNA sequences
NASA Astrophysics Data System (ADS)
Liu, Liwei; Li, Chao; Bai, Fenglan; Zhao, Qi; Wang, Ying
2015-02-01
Studying the evolutionary relationship between biological sequences has become one of the main tasks in bioinformatics research by means of comparing and analyzing the gene sequence. Many valid methods have been applied to the DNA sequence alignment. In this paper, we propose a novel comparing method based on the Lempel-Ziv (LZ) complexity to compare biological sequences. Moreover, we introduce a new distance measure and make use of the corresponding similarity matrix to construct phylogenic tree without multiple sequence alignment. Further, we construct phylogenic tree for 24 species of Eutherian mammals and 48 countries of Hepatitis E virus (HEV) by an optimization approach. The results indicate that this new method improves the efficiency of sequence comparison and successfully construct phylogenies.
New approaches to the design optimization of hydrofoils
NASA Astrophysics Data System (ADS)
Beyhaghi, Pooriya; Meneghello, Gianluca; Bewley, Thomas
2015-11-01
Two simulation-based approaches are developed to optimize the design of hydrofoils for foiling catamarans, with the objective of maximizing efficiency (lift/drag). In the first, a simple hydrofoil model based on the vortex-lattice method is coupled with a hybrid global and local optimization algorithm that combines our Delaunay-based optimization algorithm with a Generalized Pattern Search. This optimization procedure is compared with the classical Newton-based optimization method. The accuracy of the vortex-lattice simulation of the optimized design is compared with a more accurate and computationally expensive LES-based simulation. In the second approach, the (expensive) LES model of the flow is used directly during the optimization. A modified Delaunay-based optimization algorithm is used to maximize the efficiency of the optimization, which measures a finite-time averaged approximation of the infinite-time averaged value of an ergodic and stationary process. Since the optimization algorithm takes into account the uncertainty of the finite-time averaged approximation of the infinite-time averaged statistic of interest, the total computational time of the optimization algorithm is significantly reduced. Results from the two different approaches are compared.
A comparison of two closely-related approaches to aerodynamic design optimization
NASA Technical Reports Server (NTRS)
Shubin, G. R.; Frank, P. D.
1991-01-01
Two related methods for aerodynamic design optimization are compared. The methods, called the implicit gradient approach and the variational (or optimal control) approach, both attempt to obtain gradients necessary for numerical optimization at a cost significantly less than that of the usual black-box approach that employs finite difference gradients. While the two methods are seemingly quite different, they are shown to differ (essentially) in that the order of discretizing the continuous problem, and of applying calculus, is interchanged. Under certain circumstances, the two methods turn out to be identical. We explore the relationship between these methods by applying them to a model problem for duct flow that has many features in common with transonic flow over an airfoil. We find that the gradients computed by the variational method can sometimes be sufficiently inaccurate to cause the optimization to fail.
Russian Loanword Adaptation in Persian; Optimal Approach
ERIC Educational Resources Information Center
Kambuziya, Aliye Kord Zafaranlu; Hashemi, Eftekhar Sadat
2011-01-01
In this paper we analyzed some of the phonological rules of Russian loanword adaptation in Persian, on the view of Optimal Theory (OT) (Prince & Smolensky, 1993/2004). It is the first study of phonological process on Russian loanwords adaptation in Persian. By gathering about 50 current Russian loanwords, we selected some of them to analyze. We…
Hobbs, Robert F; Wahl, Richard L; Frey, Eric C; Kasamon, Yvette; Song, Hong; Huang, Peng; Jones, Richard J; Sgouros, George
2014-01-01
Combination treatment is a hallmark of cancer therapy. Although the rationale for combination radiopharmaceutical therapy was described in the mid ‘90s, such treatment strategies have only been implemented clinically recently, and without a rigorous methodology for treatment optimization. Radiobiological and quantitative imaging-based dosimetry tools are now available that enable rational implementation of combined targeted radiopharmaceutical therapy. Optimal implementation should simultaneously account for radiobiological normal organ tolerance while optimizing the ratio of two different radiopharmaceuticals required to maximize tumor control. We have developed such a methodology and applied it to hypothetical myeloablative treatment of non-hodgkin’s lymphoma (NHL) patients using 131I-tositumomab and 90Y-ibritumomab tiuxetan. Methods The range of potential administered activities (AA) is limited by the normal organ maximum tolerated biologic effective doses (MTBEDs) arising from the combined radiopharmaceuticals. Dose limiting normal organs are expected to be the lungs for 131I-tositumomab and the liver for 90Y-ibritumomab tiuxetan in myeloablative NHL treatment regimens. By plotting the limiting normal organ constraints as a function of the AAs and calculating tumor biological effective dose (BED) along the normal organ MTBED limits, the optimal combination of activities is obtained. The model was tested using previously acquired patient normal organ and tumor kinetic data and MTBED values taken from the literature. Results The average AA values based solely on normal organ constraints was (19.0 ± 8.2) GBq with a range of 3.9 – 36.9 GBq for 131I-tositumomab, and (2.77 ± 1.64) GBq with a range of 0.42 – 7.54 GBq for 90Y-ibritumomab tiuxetan. Tumor BED optimization results were calculated and plotted as a function of AA for 5 different cases, established using patient normal organ kinetics for the two radiopharmaceuticals. Results included AA ranges
A Novel Particle Swarm Optimization Approach for Grid Job Scheduling
NASA Astrophysics Data System (ADS)
Izakian, Hesam; Tork Ladani, Behrouz; Zamanifar, Kamran; Abraham, Ajith
This paper represents a Particle Swarm Optimization (PSO) algorithm, for grid job scheduling. PSO is a population-based search algorithm based on the simulation of the social behavior of bird flocking and fish schooling. Particles fly in problem search space to find optimal or near-optimal solutions. In this paper we used a PSO approach for grid job scheduling. The scheduler aims at minimizing makespan and flowtime simultaneously. Experimental studies show that the proposed novel approach is more efficient than the PSO approach reported in the literature.
Molecular Approaches for Optimizing Vitamin D Supplementation.
Carlberg, Carsten
2016-01-01
Vitamin D can be synthesized endogenously within UV-B exposed human skin. However, avoidance of sufficient sun exposure via predominant indoor activities, textile coverage, dark skin at higher latitude, and seasonal variations makes the intake of vitamin D fortified food or direct vitamin D supplementation necessary. Vitamin D has via its biologically most active metabolite 1α,25-dihydroxyvitamin D and the transcription factor vitamin D receptor a direct effect on the epigenome and transcriptome of many human tissues and cell types. Different interpretation of results from observational studies with vitamin D led to some dispute in the field on the desired optimal vitamin D level and the recommended daily supplementation. This chapter will provide background on the epigenome- and transcriptome-wide functions of vitamin D and will outline how this insight may be used for determining of the optimal vitamin D status of human individuals. These reflections will lead to the concept of a personal vitamin D index that may be a better guideline for an optimized vitamin D supplementation than population-based recommendations. PMID:26827955
MATERIAL SHAPE OPTIMIZATION FOR FIBER REINFORCED COMPOSITES APPLYING A DAMAGE FORMULATION
NASA Astrophysics Data System (ADS)
Kato, Junji; Ramm, Ekkehard; Terada, Kenjiro; Kyoya, Takashi
The present contribution deals with an optimization strategy of fiber reinforced composites. Although the methodical concept is very general we concentrate on Fiber Reinforced Concrete with a complex failure mechanism resulting from material brittleness of both constituents matrix and fibers. The purpose of the present paper is to improve the structural ductility of the fiber reinforced composites applying an optimization method with respect to the geometrical layout of continuous long textile fibers. The method proposed is achieved by applying a so-called embedded reinforcement formulation. This methodology is extended to a damage formulation in order to represent a realistic structural behavior. For the optimization problem a gradient-based optimization scheme is assumed. An optimality criteria method is applied because of its numerically high efficiency and robustness. The performance of the method is demonstrated by a series of numerical examples; it is verified that the ductility can be substantially improved.
Applying a Modified Triad Approach to Investigate Wastewater lines
Pawlowicz, R.; Urizar, L.; Blanchard, S.; Jacobsen, K.; Scholfield, J.
2006-07-01
Approximately 20 miles of wastewater lines are below grade at an active military Base. This piping network feeds or fed domestic or industrial wastewater treatment plants on the Base. Past wastewater line investigations indicated potential contaminant releases to soil and groundwater. Further environmental assessment was recommended to characterize the lines because of possible releases. A Remedial Investigation (RI) using random sampling or use of sampling points spaced at predetermined distances along the entire length of the wastewater lines, however, would be inefficient and cost prohibitive. To accomplish RI goals efficiently and within budget, a modified Triad approach was used to design a defensible sampling and analysis plan and perform the investigation. The RI task was successfully executed and resulted in a reduced fieldwork schedule, and sampling and analytical costs. Results indicated that no major releases occurred at the biased sampling points. It was reasonably extrapolated that since releases did not occur at the most likely locations, then the entire length of a particular wastewater line segment was unlikely to have contaminated soil or groundwater and was recommended for no further action. A determination of no further action was recommended for the majority of the waste lines after completing the investigation. The modified Triad approach was successful and a similar approach could be applied to investigate wastewater lines on other United States Department of Defense or Department of Energy facilities. (authors)
Scalar and Multivariate Approaches for Optimal Network Design in Antarctica
NASA Astrophysics Data System (ADS)
Hryniw, Natalia
Observations are crucial for weather and climate, not only for daily forecasts and logistical purposes, for but maintaining representative records and for tuning atmospheric models. Here scalar theory for optimal network design is expanded in a multivariate framework, to allow for optimal station siting for full field optimization. Ensemble sensitivity theory is expanded to produce the covariance trace approach, which optimizes for the trace of the covariance matrix. Relative entropy is also used for multivariate optimization as an information theory approach for finding optimal locations. Antarctic surface temperature data is used as a testbed for these methods. Both methods produce different results which are tied to the fundamental physical parameters of the Antarctic temperature field.
Optimization techniques applied to passive measures for in-orbit spacecraft survivability
NASA Technical Reports Server (NTRS)
Mog, Robert A.; Price, D. Marvin
1987-01-01
Optimization techniques applied to passive measures for in-orbit spacecraft survivability, is a six-month study, designed to evaluate the effectiveness of the geometric programming (GP) optimization technique in determining the optimal design of a meteoroid and space debris protection system for the Space Station Core Module configuration. Geometric Programming was found to be superior to other methods in that it provided maximum protection from impact problems at the lowest weight and cost.
Stevenson's optimized perturbation theory applied to factorization and mass scheme dependence
NASA Astrophysics Data System (ADS)
David Politzer, H.
1982-01-01
The principles of the optimized perturbation theory proposed by Stevenson to deal with coupling constant scheme dependence are applied to the problem of factorization scheme dependence in inclusive hadron reactions. Similar considerations allow the optimization of problems with mass dependence. A serious shortcoming of the procedure, common to all applications, is discussed.
A system approach to aircraft optimization
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw
1991-01-01
Mutual couplings among the mathematical models of physical phenomena and parts of a system such as an aircraft complicate the design process because each contemplated design change may have a far reaching consequence throughout the system. Techniques are outlined for computing these influences as system design derivatives useful for both judgemental and formal optimization purposes. The techniques facilitate decomposition of the design process into smaller, more manageable tasks and they form a methodology that can easily fit into existing engineering organizations and incorporate their design tools.
[Approaches to the optimization of medical services for the population].
Babanov, S A
2001-01-01
Describes modern approaches to optimization of medical care of the population under conditions of finance deficiency. Expenditure cutting is evaluated from viewpoint of "proof" medicine (allotting finances for concrete patients and services). PMID:11515111
Optimization approaches to volumetric modulated arc therapy planning.
Unkelbach, Jan; Bortfeld, Thomas; Craft, David; Alber, Markus; Bangert, Mark; Bokrantz, Rasmus; Chen, Danny; Li, Ruijiang; Xing, Lei; Men, Chunhua; Nill, Simeon; Papp, Dávid; Romeijn, Edwin; Salari, Ehsan
2015-03-01
Volumetric modulated arc therapy (VMAT) has found widespread clinical application in recent years. A large number of treatment planning studies have evaluated the potential for VMAT for different disease sites based on the currently available commercial implementations of VMAT planning. In contrast, literature on the underlying mathematical optimization methods used in treatment planning is scarce. VMAT planning represents a challenging large scale optimization problem. In contrast to fluence map optimization in intensity-modulated radiotherapy planning for static beams, VMAT planning represents a nonconvex optimization problem. In this paper, the authors review the state-of-the-art in VMAT planning from an algorithmic perspective. Different approaches to VMAT optimization, including arc sequencing methods, extensions of direct aperture optimization, and direct optimization of leaf trajectories are reviewed. Their advantages and limitations are outlined and recommendations for improvements are discussed. PMID:25735291
Optimization approaches to volumetric modulated arc therapy planning
Unkelbach, Jan Bortfeld, Thomas; Craft, David; Alber, Markus; Bangert, Mark; Bokrantz, Rasmus; Chen, Danny; Li, Ruijiang; Xing, Lei; Men, Chunhua; Nill, Simeon; Papp, Dávid; Romeijn, Edwin; Salari, Ehsan
2015-03-15
Volumetric modulated arc therapy (VMAT) has found widespread clinical application in recent years. A large number of treatment planning studies have evaluated the potential for VMAT for different disease sites based on the currently available commercial implementations of VMAT planning. In contrast, literature on the underlying mathematical optimization methods used in treatment planning is scarce. VMAT planning represents a challenging large scale optimization problem. In contrast to fluence map optimization in intensity-modulated radiotherapy planning for static beams, VMAT planning represents a nonconvex optimization problem. In this paper, the authors review the state-of-the-art in VMAT planning from an algorithmic perspective. Different approaches to VMAT optimization, including arc sequencing methods, extensions of direct aperture optimization, and direct optimization of leaf trajectories are reviewed. Their advantages and limitations are outlined and recommendations for improvements are discussed.
RF cavity design exploiting a new derivative-free trust region optimization approach
Hassan, Abdel-Karim S.O.; Abdel-Malek, Hany L.; Mohamed, Ahmed S.A.; Abuelfadl, Tamer M.; Elqenawy, Ahmed E.
2014-01-01
In this article, a novel derivative-free (DF) surrogate-based trust region optimization approach is proposed. In the proposed approach, quadratic surrogate models are constructed and successively updated. The generated surrogate model is then optimized instead of the underlined objective function over trust regions. Truncated conjugate gradients are employed to find the optimal point within each trust region. The approach constructs the initial quadratic surrogate model using few data points of order O(n), where n is the number of design variables. The proposed approach adopts weighted least squares fitting for updating the surrogate model instead of interpolation which is commonly used in DF optimization. This makes the approach more suitable for stochastic optimization and for functions subject to numerical error. The weights are assigned to give more emphasis to points close to the current center point. The accuracy and efficiency of the proposed approach are demonstrated by applying it to a set of classical bench-mark test problems. It is also employed to find the optimal design of RF cavity linear accelerator with a comparison analysis with a recent optimization technique. PMID:26644929
RF cavity design exploiting a new derivative-free trust region optimization approach.
Hassan, Abdel-Karim S O; Abdel-Malek, Hany L; Mohamed, Ahmed S A; Abuelfadl, Tamer M; Elqenawy, Ahmed E
2015-11-01
In this article, a novel derivative-free (DF) surrogate-based trust region optimization approach is proposed. In the proposed approach, quadratic surrogate models are constructed and successively updated. The generated surrogate model is then optimized instead of the underlined objective function over trust regions. Truncated conjugate gradients are employed to find the optimal point within each trust region. The approach constructs the initial quadratic surrogate model using few data points of order O(n), where n is the number of design variables. The proposed approach adopts weighted least squares fitting for updating the surrogate model instead of interpolation which is commonly used in DF optimization. This makes the approach more suitable for stochastic optimization and for functions subject to numerical error. The weights are assigned to give more emphasis to points close to the current center point. The accuracy and efficiency of the proposed approach are demonstrated by applying it to a set of classical bench-mark test problems. It is also employed to find the optimal design of RF cavity linear accelerator with a comparison analysis with a recent optimization technique. PMID:26644929
Applying the J-optimal channelized quadratic observer to SPECT myocardial perfusion defect detection
NASA Astrophysics Data System (ADS)
Kupinski, Meredith K.; Clarkson, Eric; Ghaly, Michael; Frey, Eric C.
2016-03-01
To evaluate performance on a perfusion defect detection task from 540 image pairs of myocardial perfusion SPECT image data we apply the J-optimal channelized quadratic observer (J-CQO). We compare AUC values of the linear Hotelling observer and J-CQO when the defect location is fixed and when it occurs in one of two locations. As expected, when the location is fixed a single channels maximizes AUC; location variability requires multiple channels to maximize the AUC. The AUC is estimated from both the projection data and reconstructed images. J-CQO is quadratic since it uses the first- and second- order statistics of the image data from both classes. The linear data reduction by the channels is described by an L x M channel matrix and in prior work we introduced an iterative gradient-based method for calculating the channel matrix. The dimensionality reduction from M measurements to L channels yields better estimates of these sample statistics from smaller sample sizes, and since the channelized covariance matrix is L x L instead of M x M, the matrix inverse is easier to compute. The novelty of our approach is the use of Jeffrey's divergence (J) as the figure of merit (FOM) for optimizing the channel matrix. We previously showed that the J-optimal channels are also the optimum channels for the AUC and the Bhattacharyya distance when the channel outputs are Gaussian distributed with equal means. This work evaluates the use of J as a surrogate FOM (SFOM) for AUC when these statistical conditions are not satisfied.
Optimality approaches to describe characteristic fluvial patterns on landscapes
Paik, Kyungrock; Kumar, Praveen
2010-01-01
Mother Nature has left amazingly regular geomorphic patterns on the Earth's surface. These patterns are often explained as having arisen as a result of some optimal behaviour of natural processes. However, there is little agreement on what is being optimized. As a result, a number of alternatives have been proposed, often with little a priori justification with the argument that successful predictions will lend a posteriori support to the hypothesized optimality principle. Given that maximum entropy production is an optimality principle attempting to predict the microscopic behaviour from a macroscopic characterization, this paper provides a review of similar approaches with the goal of providing a comparison and contrast between them to enable synthesis. While assumptions of optimal behaviour approach a system from a macroscopic viewpoint, process-based formulations attempt to resolve the mechanistic details whose interactions lead to the system level functions. Using observed optimality trends may help simplify problem formulation at appropriate levels of scale of interest. However, for such an approach to be successful, we suggest that optimality approaches should be formulated at a broader level of environmental systems' viewpoint, i.e. incorporating the dynamic nature of environmental variables and complex feedback mechanisms between fluvial and non-fluvial processes. PMID:20368257
An Efficient Approach to Obtain Optimal Load Factors for Structural Design
Bojórquez, Juan
2014-01-01
An efficient optimization approach is described to calibrate load factors used for designing of structures. The load factors are calibrated so that the structural reliability index is as close as possible to a target reliability value. The optimization procedure is applied to find optimal load factors for designing of structures in accordance with the new version of the Mexico City Building Code (RCDF). For this aim, the combination of factors corresponding to dead load plus live load is considered. The optimal combination is based on a parametric numerical analysis of several reinforced concrete elements, which are designed using different load factor values. The Monte Carlo simulation technique is used. The formulation is applied to different failure modes: flexure, shear, torsion, and compression plus bending of short and slender reinforced concrete elements. Finally, the structural reliability corresponding to the optimal load combination proposed here is compared with that corresponding to the load combination recommended by the current Mexico City Building Code. PMID:25133232
An efficient approach to obtain optimal load factors for structural design.
Bojórquez, Juan; Ruiz, Sonia E
2014-01-01
An efficient optimization approach is described to calibrate load factors used for designing of structures. The load factors are calibrated so that the structural reliability index is as close as possible to a target reliability value. The optimization procedure is applied to find optimal load factors for designing of structures in accordance with the new version of the Mexico City Building Code (RCDF). For this aim, the combination of factors corresponding to dead load plus live load is considered. The optimal combination is based on a parametric numerical analysis of several reinforced concrete elements, which are designed using different load factor values. The Monte Carlo simulation technique is used. The formulation is applied to different failure modes: flexure, shear, torsion, and compression plus bending of short and slender reinforced concrete elements. Finally, the structural reliability corresponding to the optimal load combination proposed here is compared with that corresponding to the load combination recommended by the current Mexico City Building Code. PMID:25133232
Annular flow optimization: A new integrated approach
Maglione, R.; Robotti, G.; Romagnoli, R.
1997-07-01
During the drilling stage of an oil and gas well the hydraulic circuit of the mud assumes great importance with respect to most of the numerous and various constituting parts (mostly in the annular sections). Each of them has some points to be satisfied in order to guarantee both the safety of the operations and the performance optimization of each of the single elements of the circuit. The most important tasks for the annular part of the drilling hydraulic circuit are the following: (1) Maximum available pressure to the last casing shoe; (2) avoid borehole wall erosions; and (3) guarantee the hole cleaning. A new integrated system considering all the elements of the annular part of the drilling hydraulic circuit and the constraints imposed from each of them has been realized. In this way the family of the flow parameters (mud rheology and pump rate) satisfying simultaneously all the variables of the annular section has been found. Finally two examples regarding a standard and narrow annular section (slim hole) will be reported, showing briefly all the steps of the calculations until reaching the optimum flow parameters family (for that operational condition of drilling) that satisfies simultaneous all the flow parameters limitations imposed by the elements of the annular section circuit.
Optimization methods applied to the aerodynamic design of helicopter rotor blades
NASA Technical Reports Server (NTRS)
Walsh, Joanne L.; Bingham, Gene J.; Riley, Michael F.
1987-01-01
Described is a formal optimization procedure for helicopter rotor blade design which minimizes hover horsepower while assuring satisfactory forward flight performance. The approach is to couple hover and forward flight analysis programs with a general-purpose optimization procedure. The resulting optimization system provides a systematic evaluation of the rotor blade design variables and their interaction, thus reducing the time and cost of designing advanced rotor blades. The paper discusses the basis for and details of the overall procedure, describes the generation of advanced blade designs for representative Army helicopters, and compares design and design effort with those from the conventional approach which is based on parametric studies and extensive cross-plots.
Comparative Properties of Collaborative Optimization and Other Approaches to MDO
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia M.; Lewis, Robert Michael
1999-01-01
We, discuss criteria by which one can classify, analyze, and evaluate approaches to solving multidisciplinary design optimization (MDO) problems. Central to our discussion is the often overlooked distinction between questions of formulating MDO problems and solving the resulting computational problem. We illustrate our general remarks by comparing several approaches to MDO that have been proposed.
Comparative Properties of Collaborative Optimization and other Approaches to MDO
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia M.; Lewis, Robert Michael
1999-01-01
We discuss criteria by which one can classify, analyze, and evaluate approaches to solving multidisciplinary design optimization (MDO) problems. Central to our discussion is the often overlooked distinction between questions of formulating MDO problems and solving the resulting computational problem. We illustrate our general remarks by comparing several approaches to MDO that have been proposed.
A collective neurodynamic optimization approach to bound-constrained nonconvex optimization.
Yan, Zheng; Wang, Jun; Li, Guocheng
2014-07-01
This paper presents a novel collective neurodynamic optimization method for solving nonconvex optimization problems with bound constraints. First, it is proved that a one-layer projection neural network has a property that its equilibria are in one-to-one correspondence with the Karush-Kuhn-Tucker points of the constrained optimization problem. Next, a collective neurodynamic optimization approach is developed by utilizing a group of recurrent neural networks in framework of particle swarm optimization by emulating the paradigm of brainstorming. Each recurrent neural network carries out precise constrained local search according to its own neurodynamic equations. By iteratively improving the solution quality of each recurrent neural network using the information of locally best known solution and globally best known solution, the group can obtain the global optimal solution to a nonconvex optimization problem. The advantages of the proposed collective neurodynamic optimization approach over evolutionary approaches lie in its constraint handling ability and real-time computational efficiency. The effectiveness and characteristics of the proposed approach are illustrated by using many multimodal benchmark functions. PMID:24705545
NASA Astrophysics Data System (ADS)
Takemiya, Tetsushi
, and that (2) the AMF terminates optimization erroneously when the optimization problems have constraints. The first problem is due to inaccuracy in computing derivatives in the AMF, and the second problem is due to erroneous treatment of the trust region ratio, which sets the size of the domain for an optimization in the AMF. In order to solve the first problem of the AMF, automatic differentiation (AD) technique, which reads the codes of analysis models and automatically generates new derivative codes based on some mathematical rules, is applied. If derivatives are computed with the generated derivative code, they are analytical, and the required computational time is independent of the number of design variables, which is very advantageous for realistic aerospace engineering problems. However, if analysis models implement iterative computations such as computational fluid dynamics (CFD), which solves system partial differential equations iteratively, computing derivatives through the AD requires a massive memory size. The author solved this deficiency by modifying the AD approach and developing a more efficient implementation with CFD, and successfully applied the AD to general CFD software. In order to solve the second problem of the AMF, the governing equation of the trust region ratio, which is very strict against the violation of constraints, is modified so that it can accept the violation of constraints within some tolerance. By accepting violations of constraints during the optimization process, the AMF can continue optimization without terminating immaturely and eventually find the true optimum design point. With these modifications, the AMF is referred to as "Robust AMF," and it is applied to airfoil and wing aerodynamic design problems using Euler CFD software. The former problem has 21 design variables, and the latter 64. In both problems, derivatives computed with the proposed AD method are first compared with those computed with the finite
A data-intensive approach to mechanistic elucidation applied to chiral anion catalysis
Milo, Anat; Neel, Andrew J.; Toste, F. Dean; Sigman, Matthew S.
2015-01-01
Knowledge of chemical reaction mechanisms can facilitate catalyst optimization, but extracting that knowledge from a complex system is often challenging. Here we present a data-intensive method for deriving and then predictively applying a mechanistic model of an enantioselective organic reaction. As a validating case study, we selected an intramolecular dehydrogenative C-N coupling reaction, catalyzed by chiral phosphoric acid derivatives, in which catalyst-substrate association involves weak, non-covalent interactions. Little was previously understood regarding the structural origin of enantioselectivity in this system. Catalyst and substrate substituent effects were probed by systematic physical organic trend analysis. Plausible interactions between the substrate and catalyst that govern enantioselectivity were identified and supported experimentally, indicating that such an approach can afford an efficient means of leveraging mechanistic insight to optimize catalyst design. PMID:25678656
A simple approach for predicting time-optimal slew capability
NASA Astrophysics Data System (ADS)
King, Jeffery T.; Karpenko, Mark
2016-03-01
The productivity of space-based imaging satellite sensors to collect images is directly related to the agility of the spacecraft. Increasing the satellite agility, without changing the attitude control hardware, can be accomplished by using optimal control to design shortest-time maneuvers. The performance improvement that can be obtained using optimal control is tied to the specific configuration of the satellite, e.g. mass properties and reaction wheel array geometry. Therefore, it is generally difficult to predict performance without an extensive simulation study. This paper presents a simple idea for estimating the agility enhancement that can be obtained using optimal control without the need to solve any optimal control problems. The approach is based on the concept of the agility envelope, which expresses the capability of a spacecraft in terms of a three-dimensional agility volume. Validation of this new approach is conducted using both simulation and on-orbit data.
Departures From Optimality When Pursuing Multiple Approach or Avoidance Goals
2016-01-01
This article examines how people depart from optimality during multiple-goal pursuit. The authors operationalized optimality using dynamic programming, which is a mathematical model used to calculate expected value in multistage decisions. Drawing on prospect theory, they predicted that people are risk-averse when pursuing approach goals and are therefore more likely to prioritize the goal in the best position than the dynamic programming model suggests is optimal. The authors predicted that people are risk-seeking when pursuing avoidance goals and are therefore more likely to prioritize the goal in the worst position than is optimal. These predictions were supported by results from an experimental paradigm in which participants made a series of prioritization decisions while pursuing either 2 approach or 2 avoidance goals. This research demonstrates the usefulness of using decision-making theories and normative models to understand multiple-goal pursuit. PMID:26963081
Departures from optimality when pursuing multiple approach or avoidance goals.
Ballard, Timothy; Yeo, Gillian; Neal, Andrew; Farrell, Simon
2016-07-01
This article examines how people depart from optimality during multiple-goal pursuit. The authors operationalized optimality using dynamic programming, which is a mathematical model used to calculate expected value in multistage decisions. Drawing on prospect theory, they predicted that people are risk-averse when pursuing approach goals and are therefore more likely to prioritize the goal in the best position than the dynamic programming model suggests is optimal. The authors predicted that people are risk-seeking when pursuing avoidance goals and are therefore more likely to prioritize the goal in the worst position than is optimal. These predictions were supported by results from an experimental paradigm in which participants made a series of prioritization decisions while pursuing either 2 approach or 2 avoidance goals. This research demonstrates the usefulness of using decision-making theories and normative models to understand multiple-goal pursuit. (PsycINFO Database Record PMID:26963081
NASA Astrophysics Data System (ADS)
Bera, Sasadhar; Mukherjee, Indrajit
2010-10-01
Ensuring quality of a product is rarely based on observations of a single quality characteristic. Generally, it is based on observations of family of properties, so-called `multiple responses'. These multiple responses are often interacting and are measured in variety of units. Due to presence of interaction(s), overall optimal conditions for all the responses rarely result from isolated optimal condition of individual response. Conventional optimization techniques, such as design of experiment, linear and nonlinear programmings are generally recommended for single response optimization problems. Applying any of these techniques for multiple response optimization problem may lead to unnecessary simplification of the real problem with several restrictive model assumptions. In addition, engineering judgements or subjective ways of decision making may play an important role to apply some of these conventional techniques. In this context, a synergistic approach of desirability functions and metaheuristic technique is a viable alternative to handle multiple response optimization problems. Metaheuristics, such as simulated annealing (SA) and particle swarm optimization (PSO), have shown immense success to solve various discrete and continuous single response optimization problems. Instigated by those successful applications, this chapter assesses the potential of a Nelder-Mead simplex-based SA (SIMSA) and PSO to resolve varied multiple response optimization problems. The computational results clearly indicate the superiority of PSO over SIMSA for the selected problems.
Target-classification approach applied to active UXO sites
NASA Astrophysics Data System (ADS)
Shubitidze, F.; Fernández, J. P.; Shamatava, Irma; Barrowes, B. E.; O'Neill, K.
2013-06-01
This study is designed to illustrate the discrimination performance at two UXO active sites (Oklahoma's Fort Sill and the Massachusetts Military Reservation) of a set of advanced electromagnetic induction (EMI) inversion/discrimination models which include the orthonormalized volume magnetic source (ONVMS), joint diagonalization (JD), and differential evolution (DE) approaches and whose power and flexibility greatly exceed those of the simple dipole model. The Fort Sill site is highly contaminated by a mix of the following types of munitions: 37-mm target practice tracers, 60-mm illumination mortars, 75-mm and 4.5'' projectiles, 3.5'', 2.36'', and LAAW rockets, antitank mine fuzes with and without hex nuts, practice MK2 and M67 grenades, 2.5'' ballistic windshields, M2A1-mines with/without bases, M19-14 time fuzes, and 40-mm practice grenades with/without cartridges. The site at the MMR site contains targets of yet different sizes. In this work we apply our models to EMI data collected using the MetalMapper (MM) and 2 × 2 TEMTADS sensors. The data for each anomaly are inverted to extract estimates of the extrinsic and intrinsic parameters associated with each buried target. (The latter include the total volume magnetic source or NVMS, which relates to size, shape, and material properties; the former includes location, depth, and orientation). The estimated intrinsic parameters are then used for classification performed via library matching and the use of statistical classification algorithms; this process yielded prioritized dig-lists that were submitted to the Institute for Defense Analyses (IDA) for independent scoring. The models' classification performance is illustrated and assessed based on these independent evaluations.
Optimal purchasing of raw materials: A data-driven approach
Muteki, K.; MacGregor, J.F.
2008-06-15
An approach to the optimal purchasing of raw materials that will achieve a desired product quality at a minimum cost is presented. A PLS (Partial Least Squares) approach to formulation modeling is used to combine databases on raw material properties and on past process operations and to relate these to final product quality. These PLS latent variable models are then used in a sequential quadratic programming (SQP) or mixed integer nonlinear programming (MINLP) optimization to select those raw-materials, among all those available on the market, the ratios in which to combine them and the process conditions under which they should be processed. The approach is illustrated for the optimal purchasing of metallurgical coals for coke making in the steel industry.
A Surrogate Approach to the Experimental Optimization of Multielement Airfoils
NASA Technical Reports Server (NTRS)
Otto, John C.; Landman, Drew; Patera, Anthony T.
1996-01-01
The incorporation of experimental test data into the optimization process is accomplished through the use of Bayesian-validated surrogates. In the surrogate approach, a surrogate for the experiment (e.g., a response surface) serves in the optimization process. The validation step of the framework provides a qualitative assessment of the surrogate quality, and bounds the surrogate-for-experiment error on designs "near" surrogate-predicted optimal designs. The utility of the framework is demonstrated through its application to the experimental selection of the trailing edge ap position to achieve a design lift coefficient for a three-element airfoil.
A Model for Applying Lexical Approach in Teaching Russian Grammar.
ERIC Educational Resources Information Center
Gettys, Serafima
The lexical approach to teaching Russian grammar is explained, an instructional sequence is outlined, and a classroom study testing the effectiveness of the approach is reported. The lexical approach draws on research on cognitive psychology, second language acquisition theory, and research on learner language. Its bases in research and its…
A Numerical Optimization Approach for Tuning Fuzzy Logic Controllers
NASA Technical Reports Server (NTRS)
Woodard, Stanley E.; Garg, Devendra P.
1998-01-01
This paper develops a method to tune fuzzy controllers using numerical optimization. The main attribute of this approach is that it allows fuzzy logic controllers to be tuned to achieve global performance requirements. Furthermore, this approach allows design constraints to be implemented during the tuning process. The method tunes the controller by parameterizing the membership functions for error, change-in-error and control output. The resulting parameters form a design vector which is iteratively changed to minimize an objective function. The minimal objective function results in an optimal performance of the system. A spacecraft mounted science instrument line-of-sight pointing control is used to demonstrate results.
Universal Approach to Optimal Photon Storage in Atomic Media
Gorshkov, Alexey V.; Andre, Axel; Lukin, Mikhail D.; Fleischhauer, Michael; Soerensen, Anders S.
2007-03-23
We present a universal physical picture for describing storage and retrieval of photon wave packets in a {lambda}-type atomic medium. This physical picture encompasses a variety of different approaches to pulse storage ranging from adiabatic reduction of the photon group velocity and pulse-propagation control via off-resonant Raman fields to photon-echo-based techniques. Furthermore, we derive an optimal control strategy for storage and retrieval of a photon wave packet of any given shape. All these approaches, when optimized, yield identical maximum efficiencies, which only depend on the optical depth of the medium.
Optimized variable source-profile approach for source apportionment
NASA Astrophysics Data System (ADS)
Marmur, Amit; Mulholland, James A.; Russell, Armistead G.
An expanded chemical mass balance (CMB) approach for PM 2.5 source apportionment is presented in which both the local source compositions and corresponding contributions are determined from ambient measurements and initial estimates of source compositions using a global-optimization mechanism. Such an approach can serve as an alternative to using predetermined (measured) source profiles, as traditionally used in CMB applications, which are not always representative of the region and/or time period of interest. Constraints based on ranges of typical source profiles are used to ensure that the compositions identified are representative of sources and are less ambiguous than the factors/sources identified by typical factor analysis (FA) techniques. Gas-phase data (SO 2, CO and NO y) are also used, as these data can assist in identifying sources. Impacts of identified sources are then quantified by minimizing the weighted-error between apportioned and measured levels of the fitting species. This technique was applied to a dataset of PM 2.5 measurements at the former Atlanta Supersite (Jefferson Street site), to apportion PM 2.5 mass into nine source categories. Good agreement is found when these source impacts are compared with those derived based on measured source profiles as well as those derived using a current FA technique, Positive Matrix Factorization. The proposed method can be used to assess the representativeness of measured source-profiles and to help identify those profiles that may be in significant error, as well as to quantify uncertainties in source-impact estimates, due in part to uncertainties in source compositions.
Geomorphological Approach to Glacial and Snow Modeling applied to Hydrology
NASA Astrophysics Data System (ADS)
Gsell, P.; Le Moine, N.; Ribstein, P.
2012-12-01
Hydrological modeling of mountainous watershed has specific problems due to the effect of ice and snow cover in a certain range of altitude. The representation of the snow and ice storage dyanmics is a main issue for the understanding of mountainous hydrosystems mechanisms for future and past climate. That's also an operational concern for watersheds equipped with hydroelectric dams, whose dimensioning and electric capacity evaluation rely on a good understanding of ice-snow dynamics, in particular for a lapse of several years. The objective of the study is to get ahead, at a theoretical view, in a way in between classical representation used in hydrological models (infinity of ice stock) and 3D ice tongues modeling describing explicitly viscous glacier evolution at a river basin scale. Geomorphology will be used in this approach. Noticing that glaciers, at a catchment scale, take the drainage system as a geometrical framework, an axe of our study lies on the coupling of the probabilistic description of the river network with determinist glacier models using concepts that already have been used in hydrology modeling like Geomorphological Instantaneous Unitary Hydrogram. By analogy, a simplified glacier model (Shallow Ice Approximation or Minimal Glacier Models) will be put together as a transfer function to simulate large scale ablation and ice front dynamics. In our study, we analyze the distribution of upstream area for a dataset of 78 river basins in the Southern Rocky Mountains. In a certain range of scale and under a few assumptions, we use a statistic model for river networks description that we adapt by considering relief by linking hypsometry and morphology. The model developed P(A>a,z) allow us to identify any site of the river network from a DEM analysis via elevation z and upstream area a fields with the help of 2 parameters. The 3D consideration may be relevant for hydrologic implications as production function usually increases with relief. This model
A general optimization method applied to a vdW-DF functional for water
NASA Astrophysics Data System (ADS)
Fritz, Michelle; Soler, Jose M.; Fernandez-Serra, Marivi
In particularly delicate systems, like liquid water, ab initio exchange and correlation functionals are simply not accurate enough for many practical applications. In these cases, fitting the functional to reference data is a sensible alternative to empirical interatomic potentials. However, a global optimization requires functional forms that depend on many parameters and the usual trial and error strategy becomes cumbersome and suboptimal. We have developed a general and powerful optimization scheme called data projection onto parameter space (DPPS) and applied it to the optimization of a van der Waals density functional (vdW-DF) for water. In an arbitrarily large parameter space, DPPS solves for vector of unknown parameters for a given set of known data, and poorly sampled subspaces are determined by the physically-motivated functional shape of ab initio functionals using Bayes' theory. We present a new GGA exchange functional that has been optimized with the DPPS method for 1-body, 2-body, and 3-body energies of water systems and results from testing the performance of the optimized functional when applied to the calculation of ice cohesion energies and ab initio liquid water simulations. We found that our optimized functional improves the description of both liquid water and ice when compared to other versions of GGA exchange.
Optimized perturbation theory applied to jet cross sections in e + e - annihilation
NASA Astrophysics Data System (ADS)
Kramer, G.; Lampe, B.
1988-03-01
The optimized perturbation theory proposed by Stevenson to deal with coupling constant scheme dependence is applied to the calculation of π tot and jet multiplicities in e + e - annihilation. The results are compared with those of simple perturbation theory and with recent experimental cluster multiplicities.
NASA Astrophysics Data System (ADS)
Tsuchiya, Takeshi; Ishii, Hirokazu; Uchida, Junichi; Gomi, Hiromi; Matayoshi, Naoki; Okuno, Yoshinori
This study aims to obtain the optimal flights of a helicopter that reduce ground noise during landing approach with an optimization technique, and to conduct flight tests for confirming the effectiveness of the optimal solutions. Past experiments of Japan Aerospace Exploration Agency (JAXA) show that the noise of a helicopter varies significantly according to its flight conditions, especially depending on the flight path angle. We therefore build a simple noise model for a helicopter, in which the level of the noise generated from a point sound source is a function only of the flight path angle. Using equations of motion for flight in a vertical plane, we define optimal control problems for minimizing noise levels measured at points on the ground surface, and obtain optimal controls for specified initial altitudes, flight constraints, and wind conditions. The obtained optimal flights avoid the flight path angle which generates large noise and decrease the flight time, which are different from conventional flight. Finally, we verify the validity of the optimal flight patterns through flight experiments. The actual flights following the optimal paths resulted in noise reduction, which shows the effectiveness of the optimization.
The optimality of potential rescaling approaches in land data assimilation
Technology Transfer Automated Retrieval System (TEKTRAN)
It is well-known that systematic differences exist between modeled and observed realizations of hydrological variables like soil moisture. Prior to data assimilation, these differences must be removed in order to obtain an optimal analysis. A number of rescaling approaches have been proposed for rem...
Successive linear optimization approach to the dynamic traffic assignment problem
Ho, J.K.
1980-11-01
A dynamic model for the optimal control of traffic flow over a network is considered. The model, which treats congestion explicitly in the flow equations, gives rise to nonlinear, nonconvex mathematical programming problems. It has been shown for a piecewise linear version of this model that a global optimum is contained in the set of optimal solutions of a certain linear program. A sufficient condition for optimality which implies that a global optimum can be obtained by successively optimizing at most N + 1 objective functions for the linear program, where N is the number of time periods in the planning horizon is presented. Computational results are reported to indicate the efficiency of this approach.
New approaches to optimization in aerospace conceptual design
NASA Technical Reports Server (NTRS)
Gage, Peter J.
1995-01-01
Aerospace design can be viewed as an optimization process, but conceptual studies are rarely performed using formal search algorithms. Three issues that restrict the success of automatic search are identified in this work. New approaches are introduced to address the integration of analyses and optimizers, to avoid the need for accurate gradient information and a smooth search space (required for calculus-based optimization), and to remove the restrictions imposed by fixed complexity problem formulations. (1) Optimization should be performed in a flexible environment. A quasi-procedural architecture is used to conveniently link analysis modules and automatically coordinate their execution. It efficiently controls a large-scale design tasks. (2) Genetic algorithms provide a search method for discontinuous or noisy domains. The utility of genetic optimization is demonstrated here, but parameter encodings and constraint-handling schemes must be carefully chosen to avoid premature convergence to suboptimal designs. The relationship between genetic and calculus-based methods is explored. (3) A variable-complexity genetic algorithm is created to permit flexible parameterization, so that the level of description can change during optimization. This new optimizer automatically discovers novel designs in structural and aerodynamic tasks.
Technology Transfer Automated Retrieval System (TEKTRAN)
Drift of aerially applied crop protection and production materials is studied using a novel simulation-based approach. This new approach first studies many factors that can potentially contribute to downwind deposition from aerial spray application to narrow down the major contributing factors. An o...
PARETO: A novel evolutionary optimization approach to multiobjective IMRT planning
Fiege, Jason; McCurdy, Boyd; Potrebko, Peter; Champion, Heather; Cull, Andrew
2011-09-15
Purpose: In radiation therapy treatment planning, the clinical objectives of uniform high dose to the planning target volume (PTV) and low dose to the organs-at-risk (OARs) are invariably in conflict, often requiring compromises to be made between them when selecting the best treatment plan for a particular patient. In this work, the authors introduce Pareto-Aware Radiotherapy Evolutionary Treatment Optimization (pareto), a multiobjective optimization tool to solve for beam angles and fluence patterns in intensity-modulated radiation therapy (IMRT) treatment planning. Methods: pareto is built around a powerful multiobjective genetic algorithm (GA), which allows us to treat the problem of IMRT treatment plan optimization as a combined monolithic problem, where all beam fluence and angle parameters are treated equally during the optimization. We have employed a simple parameterized beam fluence representation with a realistic dose calculation approach, incorporating patient scatter effects, to demonstrate feasibility of the proposed approach on two phantoms. The first phantom is a simple cylindrical phantom containing a target surrounded by three OARs, while the second phantom is more complex and represents a paraspinal patient. Results: pareto results in a large database of Pareto nondominated solutions that represent the necessary trade-offs between objectives. The solution quality was examined for several PTV and OAR fitness functions. The combination of a conformity-based PTV fitness function and a dose-volume histogram (DVH) or equivalent uniform dose (EUD) -based fitness function for the OAR produced relatively uniform and conformal PTV doses, with well-spaced beams. A penalty function added to the fitness functions eliminates hotspots. Comparison of resulting DVHs to those from treatment plans developed with a single-objective fluence optimizer (from a commercial treatment planning system) showed good correlation. Results also indicated that pareto shows
AI approach to optimal var control with fuzzy reactive loads
Abdul-Rahman, K.H.; Shahidehpour, S.M.; Daneshdoost, M.
1995-02-01
This paper presents an artificial intelligence (AI) approach to the optimal reactive power (var) control problem. The method incorporates the reactive load uncertainty in optimizing the overall system performance. The artificial neural network (ANN) enhanced by fuzzy sets is used to determine the memberships of control variables corresponding to the given load values. A power flow solution will determine the corresponding state of the system. Since the resulting system state may not be feasible in real-time, a heuristic method based on the application of sensitivities in expert system is employed to refine the solution with minimum adjustments of control variables. Test cases and numerical results demonstrate the applicability of the proposed approach. Simplicity, processing speed and ability to model load uncertainties make this approach a viable option for on-line var control.
Effects of optimism on creativity under approach and avoidance motivation
Icekson, Tamar; Roskes, Marieke; Moran, Simone
2014-01-01
Focusing on avoiding failure or negative outcomes (avoidance motivation) can undermine creativity, due to cognitive (e.g., threat appraisals), affective (e.g., anxiety), and volitional processes (e.g., low intrinsic motivation). This can be problematic for people who are avoidance motivated by nature and in situations in which threats or potential losses are salient. Here, we review the relation between avoidance motivation and creativity, and the processes underlying this relation. We highlight the role of optimism as a potential remedy for the creativity undermining effects of avoidance motivation, due to its impact on the underlying processes. Optimism, expecting to succeed in achieving success or avoiding failure, may reduce negative effects of avoidance motivation, as it eases threat appraisals, anxiety, and disengagement—barriers playing a key role in undermining creativity. People experience these barriers more under avoidance than under approach motivation, and beneficial effects of optimism should therefore be more pronounced under avoidance than approach motivation. Moreover, due to their eagerness, approach motivated people may even be more prone to unrealistic over-optimism and its negative consequences. PMID:24616690
Pal, Partha S; Kar, R; Mandal, D; Ghoshal, S P
2015-11-01
This paper presents an efficient approach to identify different stable and practically useful Hammerstein models as well as unstable nonlinear process along with its stable closed loop counterpart with the help of an evolutionary algorithm as Colliding Bodies Optimization (CBO) optimization algorithm. The performance measures of the CBO based optimization approach such as precision, accuracy are justified with the minimum output mean square value (MSE) which signifies that the amount of bias and variance in the output domain are also the least. It is also observed that the optimization of output MSE in the presence of outliers has resulted in a very close estimation of the output parameters consistently, which also justifies the effective general applicability of the CBO algorithm towards the system identification problem and also establishes the practical usefulness of the applied approach. Optimum values of the MSEs, computational times and statistical information of the MSEs are all found to be the superior as compared with those of the other existing similar types of stochastic algorithms based approaches reported in different recent literature, which establish the robustness and efficiency of the applied CBO based identification scheme. PMID:26362314
Shape Optimization and Supremal Minimization Approaches in Landslides Modeling
Hassani, Riad Ionescu, Ioan R. Lachand-Robert, Thomas
2005-10-15
The steady-state unidirectional (anti-plane) flow for a Bingham fluid is considered. We take into account the inhomogeneous yield limit of the fluid, which is well adjusted to the description of landslides. The blocking property is analyzed and we introduce the safety factor which is connected to two optimization problems in terms of velocities and stresses. Concerning the velocity analysis the minimum problem in Bv({omega}) is equivalent to a shape-optimization problem. The optimal set is the part of the land which slides whenever the loading parameter becomes greater than the safety factor. This is proved in the one-dimensional case and conjectured for the two-dimensional flow. For the stress-optimization problem we give a stream function formulation in order to deduce a minimum problem in W{sup 1,{infinity}}({omega}) and we prove the existence of a minimizer. The L{sup p}({omega}) approximation technique is used to get a sequence of minimum problems for smooth functionals. We propose two numerical approaches following the two analysis presented before.First, we describe a numerical method to compute the safety factor through equivalence with the shape-optimization problem.Then the finite-element approach and a Newton method is used to obtain a numerical scheme for the stress formulation. Some numerical results are given in order to compare the two methods. The shape-optimization method is sharp in detecting the sliding zones but the convergence is very sensitive to the choice of the parameters. The stress-optimization method is more robust, gives precise safety factors but the results cannot be easily compiled to obtain the sliding zone.
Applying Digital Sensor Technology: A Problem-Solving Approach
ERIC Educational Resources Information Center
Seedhouse, Paul; Knight, Dawn
2016-01-01
There is currently an explosion in the number and range of new devices coming onto the technology market that use digital sensor technology to track aspects of human behaviour. In this article, we present and exemplify a three-stage model for the application of digital sensor technology in applied linguistics that we have developed, namely,…
Defining and Applying a Functionality Approach to Intellectual Disability
ERIC Educational Resources Information Center
Luckasson, R.; Schalock, R. L.
2013-01-01
Background: The current functional models of disability do not adequately incorporate significant changes of the last three decades in our understanding of human functioning, and how the human functioning construct can be applied to clinical functions, professional practices and outcomes evaluation. Methods: The authors synthesise current…
Teaching Social Science Research: An Applied Approach Using Community Resources.
ERIC Educational Resources Information Center
Gilliland, M. Janice; And Others
A four-week summer project for 100 rural tenth graders in the University of Alabama's Biomedical Sciences Preparation Program (BioPrep) enabled students to acquire and apply social sciences research skills. The students investigated drinking water quality in three rural Alabama counties by interviewing local officials, health workers, and…
Experimental and applied approaches to control Salmonella in broiler processing
Technology Transfer Automated Retrieval System (TEKTRAN)
Control of Salmonella on poultry meat should ideally include efforts from the breeder farm to the fully processed and further processed product on through consumer education. In the U.S. regulatory scrutiny is often applied at the chill tank. Therefore, processing parameters are an important compo...
Optimal control of underactuated mechanical systems: A geometric approach
NASA Astrophysics Data System (ADS)
Colombo, Leonardo; Martín De Diego, David; Zuccalli, Marcela
2010-08-01
In this paper, we consider a geometric formalism for optimal control of underactuated mechanical systems. Our techniques are an adaptation of the classical Skinner and Rusk approach for the case of Lagrangian dynamics with higher-order constraints. We study a regular case where it is possible to establish a symplectic framework and, as a consequence, to obtain a unique vector field determining the dynamics of the optimal control problem. These developments will allow us to develop a new class of geometric integrators based on discrete variational calculus.
Adaptive Wing Camber Optimization: A Periodic Perturbation Approach
NASA Technical Reports Server (NTRS)
Espana, Martin; Gilyard, Glenn
1994-01-01
Available redundancy among aircraft control surfaces allows for effective wing camber modifications. As shown in the past, this fact can be used to improve aircraft performance. To date, however, algorithm developments for in-flight camber optimization have been limited. This paper presents a perturbational approach for cruise optimization through in-flight camber adaptation. The method uses, as a performance index, an indirect measurement of the instantaneous net thrust. As such, the actual performance improvement comes from the integrated effects of airframe and engine. The algorithm, whose design and robustness properties are discussed, is demonstrated on the NASA Dryden B-720 flight simulator.
Sequential activation of metabolic pathways: a dynamic optimization approach.
Oyarzún, Diego A; Ingalls, Brian P; Middleton, Richard H; Kalamatianos, Dimitrios
2009-11-01
The regulation of cellular metabolism facilitates robust cellular operation in the face of changing external conditions. The cellular response to this varying environment may include the activation or inactivation of appropriate metabolic pathways. Experimental and numerical observations of sequential timing in pathway activation have been reported in the literature. It has been argued that such patterns can be rationalized by means of an underlying optimal metabolic design. In this paper we pose a dynamic optimization problem that accounts for time-resource minimization in pathway activation under constrained total enzyme abundance. The optimized variables are time-dependent enzyme concentrations that drive the pathway to a steady state characterized by a prescribed metabolic flux. The problem formulation addresses unbranched pathways with irreversible kinetics. Neither specific reaction kinetics nor fixed pathway length are assumed.In the optimal solution, each enzyme follows a switching profile between zero and maximum concentration, following a temporal sequence that matches the pathway topology. This result provides an analytic justification of the sequential activation previously described in the literature. In contrast with the existent numerical approaches, the activation sequence is proven to be optimal for a generic class of monomolecular kinetics. This class includes, but is not limited to, Mass Action, Michaelis-Menten, Hill, and some Power-law models. This suggests that sequential enzyme expression may be a common feature of metabolic regulation, as it is a robust property of optimal pathway activation. PMID:19412635
Mohanasundaram, Ranganathan; Periasamy, Pappampalayam Sanmugam
2015-01-01
The current high profile debate with regard to data storage and its growth have become strategic task in the world of networking. It mainly depends on the sensor nodes called producers, base stations, and also the consumers (users and sensor nodes) to retrieve and use the data. The main concern dealt here is to find an optimal data storage position in wireless sensor networks. The works that have been carried out earlier did not utilize swarm intelligence based optimization approaches to find the optimal data storage positions. To achieve this goal, an efficient swam intelligence approach is used to choose suitable positions for a storage node. Thus, hybrid particle swarm optimization algorithm has been used to find the suitable positions for storage nodes while the total energy cost of data transmission is minimized. Clustering-based distributed data storage is utilized to solve clustering problem using fuzzy-C-means algorithm. This research work also considers the data rates and locations of multiple producers and consumers to find optimal data storage positions. The algorithm is implemented in a network simulator and the experimental results show that the proposed clustering and swarm intelligence based ODS strategy is more effective than the earlier approaches. PMID:25734182
Mohanasundaram, Ranganathan; Periasamy, Pappampalayam Sanmugam
2015-01-01
The current high profile debate with regard to data storage and its growth have become strategic task in the world of networking. It mainly depends on the sensor nodes called producers, base stations, and also the consumers (users and sensor nodes) to retrieve and use the data. The main concern dealt here is to find an optimal data storage position in wireless sensor networks. The works that have been carried out earlier did not utilize swarm intelligence based optimization approaches to find the optimal data storage positions. To achieve this goal, an efficient swam intelligence approach is used to choose suitable positions for a storage node. Thus, hybrid particle swarm optimization algorithm has been used to find the suitable positions for storage nodes while the total energy cost of data transmission is minimized. Clustering-based distributed data storage is utilized to solve clustering problem using fuzzy-C-means algorithm. This research work also considers the data rates and locations of multiple producers and consumers to find optimal data storage positions. The algorithm is implemented in a network simulator and the experimental results show that the proposed clustering and swarm intelligence based ODS strategy is more effective than the earlier approaches. PMID:25734182
Rajora, Manik; Zou, Pan; Yang, Yao Guang; Fan, Zhi Wen; Chen, Hung Yi; Wu, Wen Chieh; Li, Beizhi; Liang, Steven Y
2016-01-01
It can be observed from the experimental data of different processes that different process parameter combinations can lead to the same performance indicators, but during the optimization of process parameters, using current techniques, only one of these combinations can be found when a given objective function is specified. The combination of process parameters obtained after optimization may not always be applicable in actual production or may lead to undesired experimental conditions. In this paper, a split-optimization approach is proposed for obtaining multiple solutions in a single-objective process parameter optimization problem. This is accomplished by splitting the original search space into smaller sub-search spaces and using GA in each sub-search space to optimize the process parameters. Two different methods, i.e., cluster centers and hill and valley splitting strategy, were used to split the original search space, and their efficiency was measured against a method in which the original search space is split into equal smaller sub-search spaces. The proposed approach was used to obtain multiple optimal process parameter combinations for electrochemical micro-machining. The result obtained from the case study showed that the cluster centers and hill and valley splitting strategies were more efficient in splitting the original search space than the method in which the original search space is divided into smaller equal sub-search spaces. PMID:27625978
Laser therapy applying the differential approaches and biophotometrical elements
NASA Astrophysics Data System (ADS)
Mamedova, F. M.; Akbarova, Ju. A.; Umarova, D. A.; Yudin, G. A.
1995-04-01
The aim of the present paper is the presentation of biophotometrical data obtained from various anatomic-topographical mouth areas to be used for the development of differential approaches to laser therapy in dentistry. Biophotometrical measurements were carried out using a portative biophotometer, as a portion of a multifunctional equipping system of laser therapy, acupuncture and biophotometry referred to as 'Aura-laser'. The results of biophotometrical measurements allow the implementation of differential approaches to laser therapy of parodontitis and mucous mouth tissue taking their clinic form and rate of disease into account.
RePAMO: Recursive Perturbation Approach for Multimodal Optimization
NASA Astrophysics Data System (ADS)
Dasgupta, Bhaskar; Divya, Kotha; Mehta, Vivek Kumar; Deb, Kalyanmoy
2013-09-01
In this article, a strategy is presented to exploit classical algorithms for multimodal optimization problems, which recursively applies any suitable local optimization method, in the present case Nelder and Mead's simplex search method, in the search domain. The proposed method follows a systematic way to restart the algorithm. The idea of climbing the hills and sliding down to the neighbouring valleys is utilized. The implementation of the algorithm finds local minima as well as maxima. The concept of perturbing the minimum/maximum in several directions and restarting the algorithm for maxima/minima is introduced. The method performs favourably in comparison to other global optimization methods. The results of this algorithm, named RePAMO, are compared with the GA-clearing and ASMAGO techniques in terms of the number of function evaluations. Based on the results, it has been found that the RePAMO outperforms GA clearing and ASMAGO by a significant margin.
Applying Socio-Semiotics to Organizational Communication: A New Approach.
ERIC Educational Resources Information Center
Cooren, Francois
1999-01-01
Argues that a socio-semiotic approach to organizational communication opens up a middle course leading to a reconciliation of the functionalist and interpretive movements. Outlines and illustrates three premises to show how they enable scholars to reconceptualize the opposition between functionalism and interpretivism. Concludes that organizations…
Dialogical Approach Applied in Group Counselling: Case Study
ERIC Educational Resources Information Center
Koivuluhta, Merja; Puhakka, Helena
2013-01-01
This study utilizes structured group counselling and a dialogical approach to develop a group counselling intervention for students beginning a computer science education. The study assesses the outcomes of group counselling from the standpoint of the development of the students' self-observation. The research indicates that group counselling…
Tennis: Applied Examples of a Game-Based Teaching Approach
ERIC Educational Resources Information Center
Crespo, Miguel; Reid, Machar M.; Miley, Dave
2004-01-01
In this article, the authors reveal that tennis has been increasingly taught with a tactical model or game-based approach, which emphasizes learning through practice in match-like drills and actual play, rather than in practicing strokes for exact technical execution. Its goal is to facilitate the player's understanding of the tactical, physical…
Applied Ethics and the Humanistic Tradition: A Comparative Curricula Approach.
ERIC Educational Resources Information Center
Deonanan, Carlton R.; Deonanan, Venus E.
This research work investigates the problem of "Leadership, and the Ethical Dimension: A Comparative Curricula Approach." The research problem is investigated from the academic areas of (1) philosophy; (2) comparative curricula; (3) subject matter areas of English literature and intellectual history; (4) religion; and (5) psychology. Different…
Badawi, Mariam A; El-Khordagui, Labiba K
2014-07-16
Emulsion electrospinning is a multifactorial process used to generate nanofibers loaded with hydrophilic drugs or macromolecules for diverse biomedical applications. Emulsion electrospinnability is greatly impacted by the emulsion pharmaceutical attributes. The aim of this study was to apply a quality by design (QbD) approach based on design of experiments as a risk-based proactive approach to achieve predictable critical quality attributes (CQAs) in w/o emulsions for electrospinning. Polycaprolactone (PCL)-thickened w/o emulsions containing doxycycline HCl were formulated using a Span 60/sodium lauryl sulfate (SLS) emulsifier blend. The identified emulsion CQAs (stability, viscosity and conductivity) were linked with electrospinnability using a 3(3) factorial design to optimize emulsion composition for phase stability and a D-optimal design to optimize stable emulsions for viscosity and conductivity after shifting the design space. The three independent variables, emulsifier blend composition, organic:aqueous phase ratio and polymer concentration, had a significant effect (p<0.05) on emulsion CQAs, the emulsifier blend composition exerting prominent main and interaction effects. Scanning electron microscopy (SEM) of emulsion-electrospun NFs and desirability functions allowed modeling of emulsion CQAs to predict electrospinnable formulations. A QbD approach successfully built quality in electrospinnable emulsions, allowing development of hydrophilic drug-loaded nanofibers with desired morphological characteristics. PMID:24704153
A Control Engineering Approach for Designing an Optimized Treatment Plan for Fibromyalgia
Deshpande, Sunil; Nandola, Naresh N.; Rivera, Daniel E.; Younger, Jarred
2011-01-01
Control engineering offers a systematic and efficient means for optimizing the effectiveness of behavioral interventions. In this paper, we present an approach to develop dynamical models and subsequently, hybrid model predictive control schemes for assigning optimal dosages of naltrexone as treatment for a chronic pain condition known as fibromyalgia. We apply system identification techniques to develop models from daily diary reports completed by participants of a naltrexone intervention trial. The dynamic model serves as the basis for applying model predictive control as a decision algorithm for automated dosage selection of naltrexone in the face of the external disturbances. The categorical/discrete nature of the dosage assignment creates a need for hybrid model predictive control (HMPC) schemes. Simulation results that include conditions of significant plant-model mismatch demonstrate the performance and applicability of hybrid predictive control for optimized adaptive interventions for fibromyalgia treatment involving naltrexone. PMID:22034548
SU-F-BRD-13: Quantum Annealing Applied to IMRT Beamlet Intensity Optimization
Nazareth, D; Spaans, J
2014-06-15
Purpose: We report on the first application of quantum annealing (QA) to the process of beamlet intensity optimization for IMRT. QA is a new technology, which employs novel hardware and software techniques to address various discrete optimization problems in many fields. Methods: We apply the D-Wave Inc. proprietary hardware, which natively exploits quantum mechanical effects for improved optimization. The new QA algorithm, running on this hardware, is most similar to simulated annealing, but relies on natural processes to directly minimize the free energy of a system. A simple quantum system is slowly evolved into a classical system, representing the objective function. To apply QA to IMRT-type optimization, two prostate cases were considered. A reduced number of beamlets were employed, due to the current QA hardware limitation of ∼500 binary variables. The beamlet dose matrices were computed using CERR, and an objective function was defined based on typical clinical constraints, including dose-volume objectives. The objective function was discretized, and the QA method was compared to two standard optimization Methods: simulated annealing and Tabu search, run on a conventional computing cluster. Results: Based on several runs, the average final objective function value achieved by the QA was 16.9 for the first patient, compared with 10.0 for Tabu and 6.7 for the SA. For the second patient, the values were 70.7 for the QA, 120.0 for Tabu, and 22.9 for the SA. The QA algorithm required 27–38% of the time required by the other two methods. Conclusion: In terms of objective function value, the QA performance was similar to Tabu but less effective than the SA. However, its speed was 3–4 times faster than the other two methods. This initial experiment suggests that QA-based heuristics may offer significant speedup over conventional clinical optimization methods, as quantum annealing hardware scales to larger sizes.
Optimal Decision Stimuli for Risky Choice Experiments: An Adaptive Approach
Cavagnaro, Daniel R.; Gonzalez, Richard; Myung, Jay I.; Pitt, Mark A.
2014-01-01
Collecting data to discriminate between models of risky choice requires careful selection of decision stimuli. Models of decision making aim to predict decisions across a wide range of possible stimuli, but practical limitations force experimenters to select only a handful of them for actual testing. Some stimuli are more diagnostic between models than others, so the choice of stimuli is critical. This paper provides the theoretical background and a methodological framework for adaptive selection of optimal stimuli for discriminating among models of risky choice. The approach, called Adaptive Design Optimization (ADO), adapts the stimulus in each experimental trial based on the results of the preceding trials. We demonstrate the validity of the approach with simulation studies aiming to discriminate Expected Utility, Weighted Expected Utility, Original Prospect Theory, and Cumulative Prospect Theory models. PMID:24532856
Optimal Decision Stimuli for Risky Choice Experiments: An Adaptive Approach.
Cavagnaro, Daniel R; Gonzalez, Richard; Myung, Jay I; Pitt, Mark A
2013-02-01
Collecting data to discriminate between models of risky choice requires careful selection of decision stimuli. Models of decision making aim to predict decisions across a wide range of possible stimuli, but practical limitations force experimenters to select only a handful of them for actual testing. Some stimuli are more diagnostic between models than others, so the choice of stimuli is critical. This paper provides the theoretical background and a methodological framework for adaptive selection of optimal stimuli for discriminating among models of risky choice. The approach, called Adaptive Design Optimization (ADO), adapts the stimulus in each experimental trial based on the results of the preceding trials. We demonstrate the validity of the approach with simulation studies aiming to discriminate Expected Utility, Weighted Expected Utility, Original Prospect Theory, and Cumulative Prospect Theory models. PMID:24532856
The GRG approach for large-scale optimization
Drud, A.
1994-12-31
The Generalized Reduced Gradient (GRG) algorithm for general Nonlinear Programming (NLP) has been used successfully for over 25 years. The ideas of the original GRG algorithm have been modified and have absorbed developments in unconstrained optimization, linear programming, sparse matrix techniques, etc. The talk will review the essential aspects of the GRG approach and will discuss current development trends, especially related to very large models. Examples will be based on the CONOPT implementation.
Optimized probabilistic quantum processors: A unified geometric approach 1
NASA Astrophysics Data System (ADS)
Bergou, Janos; Bagan, Emilio; Feldman, Edgar
Using probabilistic and deterministic quantum cloning, and quantum state separation as illustrative examples we develop a complete geometric solution for finding their optimal success probabilities. The method is related to the approach that we introduced earlier for the unambiguous discrimination of more than two states. In some cases the method delivers analytical results, in others it leads to intuitive and straightforward numerical solutions. We also present implementations of the schemes based on linear optics employing few-photon interferometry
Particle Swarm and Ant Colony Approaches in Multiobjective Optimization
NASA Astrophysics Data System (ADS)
Rao, S. S.
2010-10-01
The social behavior of groups of birds, ants, insects and fish has been used to develop evolutionary algorithms known as swarm intelligence techniques for solving optimization problems. This work presents the development of strategies for the application of two of the popular swarm intelligence techniques, namely the particle swarm and ant colony methods, for the solution of multiobjective optimization problems. In a multiobjective optimization problem, the objectives exhibit a conflicting nature and hence no design vector can minimize all the objectives simultaneously. The concept of Pareto-optimal solution is used in finding a compromise solution. A modified cooperative game theory approach, in which each objective is associated with a different player, is used in this work. The applicability and computational efficiencies of the proposed techniques are demonstrated through several illustrative examples involving unconstrained and constrained problems with single and multiple objectives and continuous and mixed design variables. The present methodologies are expected to be useful for the solution of a variety of practical continuous and mixed optimization problems involving single or multiple objectives with or without constraints.
Computational approaches for microalgal biofuel optimization: a review.
Koussa, Joseph; Chaiboonchoe, Amphun; Salehi-Ashtiani, Kourosh
2014-01-01
The increased demand and consumption of fossil fuels have raised interest in finding renewable energy sources throughout the globe. Much focus has been placed on optimizing microorganisms and primarily microalgae, to efficiently produce compounds that can substitute for fossil fuels. However, the path to achieving economic feasibility is likely to require strain optimization through using available tools and technologies in the fields of systems and synthetic biology. Such approaches invoke a deep understanding of the metabolic networks of the organisms and their genomic and proteomic profiles. The advent of next generation sequencing and other high throughput methods has led to a major increase in availability of biological data. Integration of such disparate data can help define the emergent metabolic system properties, which is of crucial importance in addressing biofuel production optimization. Herein, we review major computational tools and approaches developed and used in order to potentially identify target genes, pathways, and reactions of particular interest to biofuel production in algae. As the use of these tools and approaches has not been fully implemented in algal biofuel research, the aim of this review is to highlight the potential utility of these resources toward their future implementation in algal research. PMID:25309916
Optimal synchronization of Kuramoto oscillators: A dimensional reduction approach
NASA Astrophysics Data System (ADS)
Pinto, Rafael S.; Saa, Alberto
2015-12-01
A recently proposed dimensional reduction approach for studying synchronization in the Kuramoto model is employed to build optimal network topologies to favor or to suppress synchronization. The approach is based in the introduction of a collective coordinate for the time evolution of the phase locked oscillators, in the spirit of the Ott-Antonsen ansatz. We show that the optimal synchronization of a Kuramoto network demands the maximization of the quadratic function ωTL ω , where ω stands for the vector of the natural frequencies of the oscillators and L for the network Laplacian matrix. Many recently obtained numerical results can be reobtained analytically and in a simpler way from our maximization condition. A computationally efficient hill climb rewiring algorithm is proposed to generate networks with optimal synchronization properties. Our approach can be easily adapted to the case of the Kuramoto models with both attractive and repulsive interactions, and again many recent numerical results can be rederived in a simpler and clearer analytical manner.
Volume reconstruction optimization for tomo-PIV algorithms applied to experimental data
NASA Astrophysics Data System (ADS)
Martins, Fabio J. W. A.; Foucaut, Jean-Marc; Thomas, Lionel; Azevedo, Luis F. A.; Stanislas, Michel
2015-08-01
Tomographic PIV is a three-component volumetric velocity measurement technique based on the tomographic reconstruction of a particle distribution imaged by multiple camera views. In essence, the performance and accuracy of this technique is highly dependent on the parametric adjustment and the reconstruction algorithm used. Although synthetic data have been widely employed to optimize experiments, the resulting reconstructed volumes might not have optimal quality. The purpose of the present study is to offer quality indicators that can be applied to data samples in order to improve the quality of velocity results obtained by the tomo-PIV technique. The methodology proposed can potentially lead to significantly reduction in the time required to optimize a tomo-PIV reconstruction, also leading to better quality velocity results. Tomo-PIV data provided by a six-camera turbulent boundary-layer experiment were used to optimize the reconstruction algorithms according to this methodology. Velocity statistics measurements obtained by optimized BIMART, SMART and MART algorithms were compared with hot-wire anemometer data and velocity measurement uncertainties were computed. Results indicated that BIMART and SMART algorithms produced reconstructed volumes with equivalent quality as the standard MART with the benefit of reduced computational time.
A Model-Based Prognostics Approach Applied to Pneumatic Valves
NASA Technical Reports Server (NTRS)
Daigle, Matthew J.; Goebel, Kai
2011-01-01
Within the area of systems health management, the task of prognostics centers on predicting when components will fail. Model-based prognostics exploits domain knowledge of the system, its components, and how they fail by casting the underlying physical phenomena in a physics-based model that is derived from first principles. Uncertainty cannot be avoided in prediction, therefore, algorithms are employed that help in managing these uncertainties. The particle filtering algorithm has become a popular choice for model-based prognostics due to its wide applicability, ease of implementation, and support for uncertainty management. We develop a general model-based prognostics methodology within a robust probabilistic framework using particle filters. As a case study, we consider a pneumatic valve from the Space Shuttle cryogenic refueling system. We develop a detailed physics-based model of the pneumatic valve, and perform comprehensive simulation experiments to illustrate our prognostics approach and evaluate its effectiveness and robustness. The approach is demonstrated using historical pneumatic valve data from the refueling system.
A Model Driven Engineering Approach Applied to Master Data Management
NASA Astrophysics Data System (ADS)
Menet, Ludovic; Lamolle, Myriam
The federation of data sources and the definition of pivot models are strongly interrelated topics. This paper explores a mediation solution based on XML architecture and the concept of Master Data Management. In this solution, pivot models use the standard XML Schema allowing the definition of complex data structures. The introduction of a MDE approach is a means to make modeling easier. We use UML as an abstract modeling layer. UML is a modeling object language, which is more and more used and recognized as a standard in the software engineering field, which makes it an ideal candidate for the modeling of XML Schema models. In this purpose we introduce features of the UML formalism, through profiles, to facilitate the definition and the exchange of models.
Total Risk Approach in Applying PRA to Criticality Safety
Huang, S T
2005-03-24
As nuclear industry continues marching from an expert-base support to more procedure-base support, it is important to revisit the total risk concept to criticality safety. A key objective of criticality safety is to minimize total criticality accident risk. The purpose of this paper is to assess key constituents of total risk concept pertaining to criticality safety from an operations support perspective and to suggest a risk-informed means of utilizing criticality safety resources for minimizing total risk. A PRA methodology was used to assist this assessment. The criticality accident history was assessed to provide a framework for our evaluation. In supporting operations, the work of criticality safety engineers ranges from knowing the scope and configurations of a proposed operation, performing criticality hazards assessment to derive effective controls, assisting in training operators, response to floor questions, surveillance to ensure implementation of criticality controls, and response to criticality mishaps. In a compliance environment, the resource of criticality safety engineers is increasingly being directed towards tedious documentation effort to meet some regulatory requirements to the effect of weakening the floor support for criticality safety. By applying a fault tree model to identify the major contributors of criticality accidents, a total risk picture is obtained to address relative merits of various actions. Overall, human failure is the key culprit in causing criticality accidents. Factors such as failure to follow procedures, lacks of training, lack of expert support at the floor level etc. are main contributors. Other causes may include lack of effective criticality controls such as inadequate criticality safety evaluation. Not all of the causes are equally important in contributing to criticality mishaps. Applying the limited resources to strengthen the weak links would reduce risk more than continuing emphasis on the strong links of
[Statistical Process Control applied to viral genome screening: experimental approach].
Reifenberg, J M; Navarro, P; Coste, J
2001-10-01
During the National Multicentric Study concerning the introduction of NAT for HCV and HIV-1 viruses in blood donation screening which was supervised by the Medical and Scientific departments of the French Blood Establishment (Etablissement français du sang--EFS), Transcription-Mediated transcription Amplification (TMA) technology (Chiron/Gen Probe) was experimented in the Molecular Biology Laboratory of Montpellier, EFS Pyrénées-Méditerranée. After a preliminary phase of qualification of the material and training of the technicians, routine screening of homologous blood and apheresis donations using this technology was applied for two months. In order to evaluate the different NAT systems, exhaustive daily operations and data were registered. Among these, the luminescence results expressed as RLU of the positive and negative calibrators and the associated internal controls were analysed using Control Charts, Statistical Process Control methods, which allow us to display rapidly process drift and to anticipate the appearance of incidents. This study demonstrated the interest of these quality control methods, mainly used for industrial purposes, to follow and to increase the quality of any transfusion process. it also showed the difficulties of the post-investigations of uncontrolled sources of variations of a process which was experimental. Such tools are in total accordance with the new version of the ISO 9000 norms which are particularly focused on the use of adapted indicators for processes control, and could be extended to other transfusion activities, such as blood collection and component preparation. PMID:11729395
Applying a cloud computing approach to storage architectures for spacecraft
NASA Astrophysics Data System (ADS)
Baldor, Sue A.; Quiroz, Carlos; Wood, Paul
As sensor technologies, processor speeds, and memory densities increase, spacecraft command, control, processing, and data storage systems have grown in complexity to take advantage of these improvements and expand the possible missions of spacecraft. Spacecraft systems engineers are increasingly looking for novel ways to address this growth in complexity and mitigate associated risks. Looking to conventional computing, many solutions have been executed to solve both the problem of complexity and heterogeneity in systems. In particular, the cloud-based paradigm provides a solution for distributing applications and storage capabilities across multiple platforms. In this paper, we propose utilizing a cloud-like architecture to provide a scalable mechanism for providing mass storage in spacecraft networks that can be reused on multiple spacecraft systems. By presenting a consistent interface to applications and devices that request data to be stored, complex systems designed by multiple organizations may be more readily integrated. Behind the abstraction, the cloud storage capability would manage wear-leveling, power consumption, and other attributes related to the physical memory devices, critical components in any mass storage solution for spacecraft. Our approach employs SpaceWire networks and SpaceWire-capable devices, although the concept could easily be extended to non-heterogeneous networks consisting of multiple spacecraft and potentially the ground segment.
Applying the community partnership approach to human biology research.
Ravenscroft, Julia; Schell, Lawrence M; Cole, Tewentahawih'tha'
2015-01-01
Contemporary human biology research employs a unique skillset for biocultural analysis. This skillset is highly appropriate for the study of health disparities because disparities result from the interaction of social and biological factors over one or more generations. Health disparities research almost always involves disadvantaged communities owing to the relationship between social position and health in stratified societies. Successful research with disadvantaged communities involves a specific approach, the community partnership model, which creates a relationship beneficial for researcher and community. Paramount is the need for trust between partners. With trust established, partners share research goals, agree on research methods and produce results of interest and importance to all partners. Results are shared with the community as they are developed; community partners also provide input on analyses and interpretation of findings. This article describes a partnership-based, 20 year relationship between community members of the Akwesasne Mohawk Nation and researchers at the University at Albany. As with many communities facing health disparity issues, research with Native Americans and indigenous peoples generally is inherently politicized. For Akwesasne, the contamination of their lands and waters is an environmental justice issue in which the community has faced unequal exposure to, and harm by environmental toxicants. As human biologists engage in more partnership-type research, it is important to understand the long term goals of the community and what is at stake so the research circle can be closed and 'helicopter' style research avoided. PMID:25380288
Applying electrical utility least-cost approach to transportation planning
McCoy, G.A.; Growdon, K.; Lagerberg, B.
1994-09-01
Members of the energy and environmental communities believe that parallels exist between electrical utility least-cost planning and transportation planning. In particular, the Washington State Energy Strategy Committee believes that an integrated and comprehensive transportation planning process should be developed to fairly evaluate the costs of both demand-side and supply-side transportation options, establish competition between different travel modes, and select the mix of options designed to meet system goals at the lowest cost to society. Comparisons between travel modes are also required under the Intermodal Surface Transportation Efficiency Act (ISTEA). ISTEA calls for the development of procedures to compare demand management against infrastructure investment solutions and requires the consideration of efficiency, socioeconomic and environmental factors in the evaluation process. Several of the techniques and approaches used in energy least-cost planning and utility peak demand management can be incorporated into a least-cost transportation planning methodology. The concepts of avoided plants, expressing avoidable costs in levelized nominal dollars to compare projects with different on-line dates and service lives, the supply curve, and the resource stack can be directly adapted from the energy sector.
New Approach to Ultrasonic Spectroscopy Applied to Flywheel Rotors
NASA Technical Reports Server (NTRS)
Harmon, Laura M.; Baaklini, George Y.
2002-01-01
Flywheel energy storage devices comprising multilayered composite rotor systems are being studied extensively for use in the International Space Station. A flywheel system includes the components necessary to store and discharge energy in a rotating mass. The rotor is the complete rotating assembly portion of the flywheel, which is composed primarily of a metallic hub and a composite rim. The rim may contain several concentric composite rings. This article summarizes current ultrasonic spectroscopy research of such composite rings and rims and a flat coupon, which was manufactured to mimic the manufacturing of the rings. Ultrasonic spectroscopy is a nondestructive evaluation (NDE) method for material characterization and defect detection. In the past, a wide bandwidth frequency spectrum created from a narrow ultrasonic signal was analyzed for amplitude and frequency changes. Tucker developed and patented a new approach to ultrasonic spectroscopy. The ultrasonic system employs a continuous swept-sine waveform and performs a fast Fourier transform on the frequency spectrum to create the spectrum resonance spacing domain, or fundamental resonant frequency. Ultrasonic responses from composite flywheel components were analyzed at Glenn to assess this NDE technique for the quality assurance of flywheel applications.
González-Rodríguez, Maria Luisa; Mouram, Imane; Cózar-Bernal, Ma Jose; Villasmil, Sheila; Rabasco, Antonio M
2012-10-01
Niosomes formulated from different nonionic surfactants (Span® 60, Brij® 72, Span® 80, or Eumulgin® B 2) with cholesterol (CH) molar ratios of 3:1 or 4:1 with respect to surfactant were prepared with different sumatriptan amount (10 and 15 mg) and stearylamine (SA). Thin-film hydration method was employed to produce the vesicles, and the time lapsed to hydrate the lipid film (1 or 24 h) was introduced as variable. These factors were selected as variables and their levels were introduced into two L18 orthogonal arrays. The aim was to optimize the manufacturing conditions by applying Taguchi methodology. Response variables were vesicle size, zeta potential (Z), and drug entrapment. From Taguchi analysis, drug concentration and the time until the hydration were the most influencing parameters on size, being the niosomes made with Span® 80 the smallest vesicles. The presence of SA into the vesicles had a relevant influence on Z values. All the factors except the surfactant-CH ratio had an influence on the encapsulation. Formulations were optimized by applying the marginal means methodology. Results obtained showed a good correlation between mean and signal-to-noise ratio parameters, indicating the feasibility of the robust methodology to optimize this formulation. Also, the extrusion process exerted a positive influence on the drug entrapment. PMID:22806266
Portfolio optimization in enhanced index tracking with goal programming approach
NASA Astrophysics Data System (ADS)
Siew, Lam Weng; Jaaman, Saiful Hafizah Hj.; Ismail, Hamizun bin
2014-09-01
Enhanced index tracking is a popular form of passive fund management in stock market. Enhanced index tracking aims to generate excess return over the return achieved by the market index without purchasing all of the stocks that make up the index. This can be done by establishing an optimal portfolio to maximize the mean return and minimize the risk. The objective of this paper is to determine the portfolio composition and performance using goal programming approach in enhanced index tracking and comparing it to the market index. Goal programming is a branch of multi-objective optimization which can handle decision problems that involve two different goals in enhanced index tracking, a trade-off between maximizing the mean return and minimizing the risk. The results of this study show that the optimal portfolio with goal programming approach is able to outperform the Malaysia market index which is FTSE Bursa Malaysia Kuala Lumpur Composite Index because of higher mean return and lower risk without purchasing all the stocks in the market index.
General approach and scope. [rotor blade design optimization
NASA Technical Reports Server (NTRS)
Adelman, Howard M.; Mantay, Wayne R.
1989-01-01
This paper describes a joint activity involving NASA and Army researchers at the NASA Langley Research Center to develop optimization procedures aimed at improving the rotor blade design process by integrating appropriate disciplines and accounting for all of the important interactions among the disciplines. The disciplines involved include rotor aerodynamics, rotor dynamics, rotor structures, airframe dynamics, and acoustics. The work is focused on combining these five key disciplines in an optimization procedure capable of designing a rotor system to satisfy multidisciplinary design requirements. Fundamental to the plan is a three-phased approach. In phase 1, the disciplines of blade dynamics, blade aerodynamics, and blade structure will be closely coupled, while acoustics and airframe dynamics will be decoupled and be accounted for as effective constraints on the design for the first three disciplines. In phase 2, acoustics is to be integrated with the first three disciplines. Finally, in phase 3, airframe dynamics will be fully integrated with the other four disciplines. This paper deals with details of the phase 1 approach and includes details of the optimization formulation, design variables, constraints, and objective function, as well as details of discipline interactions, analysis methods, and methods for validating the procedure.
Unsteady Adjoint Approach for Design Optimization of Flapping Airfoils
NASA Technical Reports Server (NTRS)
Lee, Byung Joon; Liou, Meng-Sing
2012-01-01
This paper describes the work for optimizing the propulsive efficiency of flapping airfoils, i.e., improving the thrust under constraining aerodynamic work during the flapping flights by changing their shape and trajectory of motion with the unsteady discrete adjoint approach. For unsteady problems, it is essential to properly resolving time scales of motion under consideration and it must be compatible with the objective sought after. We include both the instantaneous and time-averaged (periodic) formulations in this study. For the design optimization with shape parameters or motion parameters, the time-averaged objective function is found to be more useful, while the instantaneous one is more suitable for flow control. The instantaneous objective function is operationally straightforward. On the other hand, the time-averaged objective function requires additional steps in the adjoint approach; the unsteady discrete adjoint equations for a periodic flow must be reformulated and the corresponding system of equations solved iteratively. We compare the design results from shape and trajectory optimizations and investigate the physical relevance of design variables to the flapping motion at on- and off-design conditions.
Mouse genetic approaches applied to the normal tissue radiation response
Haston, Christina K.
2012-01-01
The varying responses of inbred mouse models to radiation exposure present a unique opportunity to dissect the genetic basis of radiation sensitivity and tissue injury. Such studies are complementary to human association studies as they permit both the analysis of clinical features of disease, and of specific variants associated with its presentation, in a controlled environment. Herein I review how animal models are studied to identify specific genetic variants influencing predisposition to radiation-induced traits. Among these radiation-induced responses are documented strain differences in repair of DNA damage and in extent of tissue injury (in the lung, skin, and intestine) which form the base for genetic investigations. For example, radiation-induced DNA damage is consistently greater in tissues from BALB/cJ mice, than the levels in C57BL/6J mice, suggesting there may be an inherent DNA damage level per strain. Regarding tissue injury, strain specific inflammatory and fibrotic phenotypes have been documented for principally, C57BL/6 C3H and A/J mice but a correlation among responses such that knowledge of the radiation injury in one tissue informs of the response in another is not evident. Strategies to identify genetic differences contributing to a trait based on inbred strain differences, which include linkage analysis and the evaluation of recombinant congenic (RC) strains, are presented, with a focus on the lung response to irradiation which is the only radiation-induced tissue injury mapped to date. Such approaches are needed to reveal genetic differences in susceptibility to radiation injury, and also to provide a context for the effects of specific genetic variation uncovered in anticipated clinical association studies. In summary, mouse models can be studied to uncover heritable variation predisposing to specific radiation responses, and such variations may point to pathways of importance to phenotype development in the clinic. PMID:22891164
Geophysical approaches applied in the ancient theatre of Demetriada, Volos
NASA Astrophysics Data System (ADS)
Sarris, Apostolos; Papadopoulos, Nikos; Déderix, Sylviane; Salvi, Maria-Christina
2013-08-01
The city of Demetriada was constructed around 294-292 BC and became a stronghold of the Macedonian navy fleet, whereas in the Roman period it experienced significant growth and blossoming. The ancient theatre of the town was constructed at the same time with the foundation of the city, without being used for 2 centuries (1st ce. BC - 1st ce. A.D.) and being completely abandoned after the 4th ce. A.D., to be used only as a quarry for extraction of building material for Christian basilicas in the area. The theatre was found in 1809 and excavations took place in various years since 1907. Geophysical approaches were exploited recently in an effort to map the subsurface of the surrounding area of the theatre and help the reconstruction works of it. Magnetic gradiometry, Ground Penetrating Radar (GPR) and Electrical Resistivity Tomogrpahy (ERT) techniques were employed for mapping the area of the orchestra and the scene of the theatre, together with the area extending to the south of the theatre. A number of features were recognized by the magnetic techniques including older excavation trenches and the pilar of the stoa of the proscenium. The different occupation phases of the area have been manifested through the employment of tomographic and stratigraphic geophysical techniques like three-dimensional ERT and GPR. Architectural orthogonal structures aligned in a S-N direction have been correlated to the already excavated buildings of the ceramic workshop. The workshop seems to expand in a large section of the area which was probably constructed after the final abandonment of the theatre.
Optimal multiyear management of a water supply system under uncertainty: Robust counterpart approach
NASA Astrophysics Data System (ADS)
Housh, Mashor; Ostfeld, Avi; Shamir, Uri
2011-10-01
In this paper, the robust counterpart (RC) approach (Ben-Tal et al., 2009) is applied to optimize management of a water supply system (WSS) fed from aquifers and desalination plants. The water is conveyed through a network to meet desired consumptions, where the aquifers recharges are uncertain. The objective is to minimize the net present value cost of multiyear operation, satisfying operational and physical constraints. The RC is a min-max guided approach, which converts the original problem into a deterministic equivalent problem, requiring only that the uncertain parameters resides within a user-defined uncertainty set. The robust policy obtained by the RC approach is compared with polices obtained by other decision-making approaches including stochastic approaches.
Applying ILT mask synthesis for co-optimizing design rules and DSA process characteristics
NASA Astrophysics Data System (ADS)
Dam, Thuc; Stanton, William
2014-03-01
During early stage development of a DSA process, there are many unknown interactions between design, DSA process, RET, and mask synthesis. The computational resolution of these unknowns can guide development towards a common process space whereby manufacturing success can be evaluated. This paper will demonstrate the use of existing Inverse Lithography Technology (ILT) to co-optimize the multitude of parameters. ILT mask synthesis will be applied to a varied hole design space in combination with a range of DSA model parameters under different illumination and RET conditions. The design will range from 40 nm pitch doublet to random DSA designs with larger pitches, while various effective DSA characteristics of shrink bias and corner smoothing will be assumed for the DSA model during optimization. The co-optimization of these design parameters and process characteristics under different SMO solutions and RET conditions (dark/bright field tones and binary/PSM mask types) will also help to provide a complete process mapping of possible manufacturing options. The lithographic performances for masks within the optimized parameter space will be generated to show a common process space with the highest possibility for success.
A second law approach to exhaust system optimization
Primus, R.J.
1984-01-01
A model has been constructed that applies second law analysis to a Fanno formulation of the exhaust process of a turbocharged diesel engine. The model has been used to quantify available energy destruction at the valve and in the manifold and to study the influence of various system parameters on the relative magnitude of these exhaust system losses. The model formulation and its application to the optimization of the exhaust manifold diameter is discussed. Data are then presented which address the influence of the manifold friction, turbine efficiency, turbine power extraction, valve flow area, compression ratio, speed, load and air-fuel ratio on the available energy destruction in the exhaust system.
SolOpt: A Novel Approach to Solar Rooftop Optimization
Lisell, L.; Metzger, I.; Dean, J.
2011-01-01
Traditionally Photovoltaic Technology (PV) and Solar Hot Water Technology (SHW) have been designed with separate design tools, making it difficult to determine the appropriate mix of PV and SHW. A new tool developed at the National Renewable Energy Laboratory changes how the analysis is conducted through an integrated approach based on the life cycle cost effectiveness of each system. With 10 inputs someone with only basic knowledge of the building can simulate energy production from PV and SHW, and predict the optimal sizes of the systems. The user can select from four optimization criteria currently available: Greenhouse Gas Reduction, Net-Present Value, Renewable Energy Production, and Discounted Payback Period. SolOpt provides unique analysis capabilities that aren't currently available in any other software programs. Validation results with industry accepted tools for both SHW and PV are presented.
Optimal approach to quantum communication using dynamic programming.
Jiang, Liang; Taylor, Jacob M; Khaneja, Navin; Lukin, Mikhail D
2007-10-30
Reliable preparation of entanglement between distant systems is an outstanding problem in quantum information science and quantum communication. In practice, this has to be accomplished by noisy channels (such as optical fibers) that generally result in exponential attenuation of quantum signals at large distances. A special class of quantum error correction protocols, quantum repeater protocols, can be used to overcome such losses. In this work, we introduce a method for systematically optimizing existing protocols and developing more efficient protocols. Our approach makes use of a dynamic programming-based searching algorithm, the complexity of which scales only polynomially with the communication distance, letting us efficiently determine near-optimal solutions. We find significant improvements in both the speed and the final-state fidelity for preparing long-distance entangled states. PMID:17959783
A new integrated approach to seismic network optimization
NASA Astrophysics Data System (ADS)
Tramelli, A.; De Natale, G.; Troise, C.; Orazi, M.
2012-04-01
a new one, considering different earthquake positions and different noise levels for the station sites. The optimization for moment tensor solutions is also implemented, by formally defining the inverse problem in matrix form. The algorithms are then tested and applied to optimize the network of Campi Flegrei.
Haber, Eldad
2014-03-17
The focus of research was: Developing adaptive mesh for the solution of Maxwell's equations; Developing a parallel framework for time dependent inverse Maxwell's equations; Developing multilevel methods for optimization problems with inequal- ity constraints; A new inversion code for inverse Maxwell's equations in the 0th frequency (DC resistivity); A new inversion code for inverse Maxwell's equations in low frequency regime. Although the research concentrated on electromagnetic forward and in- verse problems the results of the research was applied to the problem of image registration.
An Optimized Integrator Windup Protection Technique Applied to a Turbofan Engine Control
NASA Technical Reports Server (NTRS)
Watts, Stephen R.; Garg, Sanjay
1995-01-01
This paper introduces a new technique for providing memoryless integrator windup protection which utilizes readily available optimization software tools. This integrator windup protection synthesis provides a concise methodology for creating integrator windup protection for each actuation system loop independently while assuring both controller and closed loop system stability. The individual actuation system loops' integrator windup protection can then be combined to provide integrator windup protection for the entire system. This technique is applied to an H(exp infinity) based multivariable control designed for a linear model of an advanced afterburning turbofan engine. The resulting transient characteristics are examined for the integrated system while encountering single and multiple actuation limits.
Perspective: Codesign for materials science: An optimal learning approach
NASA Astrophysics Data System (ADS)
Lookman, Turab; Alexander, Francis J.; Bishop, Alan R.
2016-05-01
A key element of materials discovery and design is to learn from available data and prior knowledge to guide the next experiments or calculations in order to focus in on materials with targeted properties. We suggest that the tight coupling and feedback between experiments, theory and informatics demands a codesign approach, very reminiscent of computational codesign involving software and hardware in computer science. This requires dealing with a constrained optimization problem in which uncertainties are used to adaptively explore and exploit the predictions of a surrogate model to search the vast high dimensional space where the desired material may be found.
Optimal active power dispatch by network flow approach
Carvalho, M.F. ); Soares, S.; Ohishi, T. )
1988-11-01
In this paper the optimal active power dispatch problem is formulated as a nonlinear capacitated network flow problem with additional linear constraints. Transmission flow limits and both Kirchhoff's laws are taken into account. The problem is solved by a Generalized Upper Bounding technique that takes advantage of the network flow structure of the problem. The new approach has potential applications on power systems problems such as economic dispatch, load supplying capability, minimum load shedding, and generation-transmission reliability. The paper also reviews the use of transportation models for power system analysis. A detailed illustrative example is presented.
Sturm, Monika; Richter, Stephan; Aregbe, Yetunde; Wellum, Roger; Prohaska, Thomas
2016-06-21
An optimized method is described for U/Pu separation and subsequent measurement of the amount contents of uranium isotopes by total evaporation (TE) TIMS with a double filament setup combined with filament carburization for age determination of plutonium samples. The use of carburized filaments improved the signal behavior for total evaporation TIMS measurements of uranium. Elevated uranium ion formation by passive heating during rhenium signal optimization at the start of the total evaporation measurement procedure was found to be a result from byproducts of the separation procedure deposited on the filament. This was avoided using carburized filaments. Hence, loss of sample before the actual TE data acquisition was prevented, and automated measurement sequences could be accomplished. Furthermore, separation of residual plutonium in the separated uranium fraction was achieved directly on the filament by use of the carburized filaments. Although the analytical approach was originally tailored to achieve reliable results only for the (238)Pu/(234)U, (239)Pu/(235)U, and (240)Pu/(236)U chronometers, the optimization of the procedure additionally allowed the use of the (242)Pu/(238)U isotope amount ratio as a highly sensitive indicator for residual uranium present in the sample, which is not of radiogenic origin. The sample preparation method described in this article has been successfully applied for the age determination of CRM NBS 947 and other sulfate and oxide plutonium samples. PMID:27240571
NASA Astrophysics Data System (ADS)
Mousavi, S. Jamshid; Shourian, M.
2010-03-01
Global optimization models in many problems suffer from high computational costs due to the need for performing high-fidelity simulation models for objective function evaluations. Metamodeling is a useful approach to dealing with this problem in which a fast surrogate model replaces the detailed simulation model. However, training of the surrogate model needs enough input-output data which in case of absence of observed data, each of them must be obtained by running the simulation model and may still cause computational difficulties. In this paper a new metamodeling approach called adaptive sequentially space filling (ASSF) is presented by which the regions in the search space that need more training data are sequentially identified and the process of design of experiments is performed adaptively. Performance of the ASSF approach is tested against a benchmark function optimization problem and optimum basin-scale water allocation problems, in which the MODSIM river basin decision support system is approximated. Results show the ASSF model with fewer actual function evaluations is able to find comparable solutions to other metamodeling techniques using random sampling and evolution control strategies.
NASA Astrophysics Data System (ADS)
Yang, Weizhu; Yue, Zhufeng; Li, Lei; Wang, Peiyan
2016-01-01
An optimization procedure combining an automated finite element modelling (AFEM) technique with a ground structure approach (GSA) is proposed for structural layout and sizing design of aircraft wings. The AFEM technique, based on CATIA VBA scripting and PCL programming, is used to generate models automatically considering the arrangement of inner systems. GSA is used for local structural topology optimization. The design procedure is applied to a high-aspect-ratio wing. The arrangement of the integral fuel tank, landing gear and control surfaces is considered. For the landing gear region, a non-conventional initial structural layout is adopted. The positions of components, the number of ribs and local topology in the wing box and landing gear region are optimized to obtain a minimum structural weight. Constraints include tank volume, strength, buckling and aeroelastic parameters. The results show that the combined approach leads to a greater weight saving, i.e. 26.5%, compared with three additional optimizations based on individual design approaches.
Optimization techniques applied to passive measures for in-orbit spacecraft survivability
NASA Technical Reports Server (NTRS)
Mog, Robert A.; Helba, Michael J.; Hill, Janeil B.
1992-01-01
The purpose of this research is to provide Space Station Freedom protective structures design insight through the coupling of design/material requirements, hypervelocity impact phenomenology, meteoroid and space debris environment sensitivities, optimization techniques and operations research strategies, and mission scenarios. The goals of the research are: (1) to develop a Monte Carlo simulation tool which will provide top level insight for Space Station protective structures designers; (2) to develop advanced shielding concepts relevant to Space Station Freedom using unique multiple bumper approaches; and (3) to investigate projectile shape effects on protective structures design.
An optimization approach for fitting canonical tensor decompositions.
Dunlavy, Daniel M.; Acar, Evrim; Kolda, Tamara Gibson
2009-02-01
Tensor decompositions are higher-order analogues of matrix decompositions and have proven to be powerful tools for data analysis. In particular, we are interested in the canonical tensor decomposition, otherwise known as the CANDECOMP/PARAFAC decomposition (CPD), which expresses a tensor as the sum of component rank-one tensors and is used in a multitude of applications such as chemometrics, signal processing, neuroscience, and web analysis. The task of computing the CPD, however, can be difficult. The typical approach is based on alternating least squares (ALS) optimization, which can be remarkably fast but is not very accurate. Previously, nonlinear least squares (NLS) methods have also been recommended; existing NLS methods are accurate but slow. In this paper, we propose the use of gradient-based optimization methods. We discuss the mathematical calculation of the derivatives and further show that they can be computed efficiently, at the same cost as one iteration of ALS. Computational experiments demonstrate that the gradient-based optimization methods are much more accurate than ALS and orders of magnitude faster than NLS.
Silanization of glass chips—A factorial approach for optimization
NASA Astrophysics Data System (ADS)
Vistas, Cláudia R.; Águas, Ana C. P.; Ferreira, Guilherme N. M.
2013-12-01
Silanization of glass chips with 3-mercaptopropyltrimethoxysilane (MPTS) was investigated and optimized to generate a high-quality layer with well-oriented thiol groups. A full factorial design was used to evaluate the influence of silane concentration and reaction time. The stabilization of the silane monolayer by thermal curing was also investigated, and a disulfide reduction step was included to fully regenerate the thiol-modified surface function. Fluorescence analysis and water contact angle measurements were used to quantitatively assess the chemical modifications, wettability and quality of modified chip surfaces throughout the silanization, curing and reduction steps. The factorial design enables a systematic approach for the optimization of glass chips silanization process. The optimal conditions for the silanization were incubation of the chips in a 2.5% MPTS solution for 2 h, followed by a curing process at 110 °C for 2 h and a reduction step with 10 mM dithiothreitol for 30 min at 37 °C. For these conditions the surface density of functional thiol groups was 4.9 × 1013 molecules/cm2, which is similar to the expected maximum coverage obtained from the theoretical estimations based on projected molecular area (∼5 × 1013 molecules/cm2).
NASA Technical Reports Server (NTRS)
Brown, Jonathan M.; Petersen, Jeremy D.
2014-01-01
NASA's WIND mission has been operating in a large amplitude Lissajous orbit in the vicinity of the interior libration point of the Sun-Earth/Moon system since 2004. Regular stationkeeping maneuvers are required to maintain the orbit due to the instability around the collinear libration points. Historically these stationkeeping maneuvers have been performed by applying an incremental change in velocity, or (delta)v along the spacecraft-Sun vector as projected into the ecliptic plane. Previous studies have shown that the magnitude of libration point stationkeeping maneuvers can be minimized by applying the (delta)v in the direction of the local stable manifold found using dynamical systems theory. This paper presents the analysis of this new maneuver strategy which shows that the magnitude of stationkeeping maneuvers can be decreased by 5 to 25 percent, depending on the location in the orbit where the maneuver is performed. The implementation of the optimized maneuver method into operations is discussed and results are presented for the first two optimized stationkeeping maneuvers executed by WIND.
A statistical approach to optimizing concrete mixture design.
Ahmad, Shamsad; Alghamdi, Saeid A
2014-01-01
A step-by-step statistical approach is proposed to obtain optimum proportioning of concrete mixtures using the data obtained through a statistically planned experimental program. The utility of the proposed approach for optimizing the design of concrete mixture is illustrated considering a typical case in which trial mixtures were considered according to a full factorial experiment design involving three factors and their three levels (3(3)). A total of 27 concrete mixtures with three replicates (81 specimens) were considered by varying the levels of key factors affecting compressive strength of concrete, namely, water/cementitious materials ratio (0.38, 0.43, and 0.48), cementitious materials content (350, 375, and 400 kg/m(3)), and fine/total aggregate ratio (0.35, 0.40, and 0.45). The experimental data were utilized to carry out analysis of variance (ANOVA) and to develop a polynomial regression model for compressive strength in terms of the three design factors considered in this study. The developed statistical model was used to show how optimization of concrete mixtures can be carried out with different possible options. PMID:24688405
A Statistical Approach to Optimizing Concrete Mixture Design
Alghamdi, Saeid A.
2014-01-01
A step-by-step statistical approach is proposed to obtain optimum proportioning of concrete mixtures using the data obtained through a statistically planned experimental program. The utility of the proposed approach for optimizing the design of concrete mixture is illustrated considering a typical case in which trial mixtures were considered according to a full factorial experiment design involving three factors and their three levels (33). A total of 27 concrete mixtures with three replicates (81 specimens) were considered by varying the levels of key factors affecting compressive strength of concrete, namely, water/cementitious materials ratio (0.38, 0.43, and 0.48), cementitious materials content (350, 375, and 400 kg/m3), and fine/total aggregate ratio (0.35, 0.40, and 0.45). The experimental data were utilized to carry out analysis of variance (ANOVA) and to develop a polynomial regression model for compressive strength in terms of the three design factors considered in this study. The developed statistical model was used to show how optimization of concrete mixtures can be carried out with different possible options. PMID:24688405
Optimal subinterval selection approach for power system transient stability simulation
Kim, Soobae; Overbye, Thomas J.
2015-10-21
Power system transient stability analysis requires an appropriate integration time step to avoid numerical instability as well as to reduce computational demands. For fast system dynamics, which vary more rapidly than what the time step covers, a fraction of the time step, called a subinterval, is used. However, the optimal value of this subinterval is not easily determined because the analysis of the system dynamics might be required. This selection is usually made from engineering experiences, and perhaps trial and error. This paper proposes an optimal subinterval selection approach for power system transient stability analysis, which is based on modalmore » analysis using a single machine infinite bus (SMIB) system. Fast system dynamics are identified with the modal analysis and the SMIB system is used focusing on fast local modes. An appropriate subinterval time step from the proposed approach can reduce computational burden and achieve accurate simulation responses as well. As a result, the performance of the proposed method is demonstrated with the GSO 37-bus system.« less
Optimal subinterval selection approach for power system transient stability simulation
Kim, Soobae; Overbye, Thomas J.
2015-10-21
Power system transient stability analysis requires an appropriate integration time step to avoid numerical instability as well as to reduce computational demands. For fast system dynamics, which vary more rapidly than what the time step covers, a fraction of the time step, called a subinterval, is used. However, the optimal value of this subinterval is not easily determined because the analysis of the system dynamics might be required. This selection is usually made from engineering experiences, and perhaps trial and error. This paper proposes an optimal subinterval selection approach for power system transient stability analysis, which is based on modal analysis using a single machine infinite bus (SMIB) system. Fast system dynamics are identified with the modal analysis and the SMIB system is used focusing on fast local modes. An appropriate subinterval time step from the proposed approach can reduce computational burden and achieve accurate simulation responses as well. As a result, the performance of the proposed method is demonstrated with the GSO 37-bus system.
Correction of linear-array lidar intensity data using an optimal beam shaping approach
NASA Astrophysics Data System (ADS)
Xu, Fan; Wang, Yuanqing; Yang, Xingyu; Zhang, Bingqing; Li, Fenfang
2016-08-01
The linear-array lidar has been recently developed and applied for its superiority of vertically non-scanning, large field of view, high sensitivity and high precision. The beam shaper is the key component for the linear-array detection. However, the traditional beam shaping approaches can hardly satisfy our requirement for obtaining unbiased and complete backscattered intensity data. The required beam distribution should roughly be oblate U-shaped rather than Gaussian or uniform. Thus, an optimal beam shaping approach is proposed in this paper. By employing a pair of conical lenses and a cylindrical lens behind the beam expander, the expanded Gaussian laser was shaped to a line-shaped beam whose intensity distribution is more consistent with the required distribution. To provide a better fit to the requirement, off-axis method is adopted. The design of the optimal beam shaping module is mathematically explained and the experimental verification of the module performance is also presented in this paper. The experimental results indicate that the optimal beam shaping approach can effectively correct the intensity image and provide ~30% gain of detection area over traditional approach, thus improving the imaging quality of linear-array lidar.
A systems biology approach to radiation therapy optimization.
Brahme, Anders; Lind, Bengt K
2010-05-01
During the last 20 years, the field of cellular and not least molecular radiation biology has been developed substantially and can today describe the response of heterogeneous tumors and organized normal tissues to radiation therapy quite well. An increased understanding of the sub-cellular and molecular response is leading to a more general systems biological approach to radiation therapy and treatment optimization. It is interesting that most of the characteristics of the tissue infrastructure, such as the vascular system and the degree of hypoxia, have to be considered to get an accurate description of tumor and normal tissue responses to ionizing radiation. In the limited space available, only a brief description of some of the most important concepts and processes is possible, starting from the key functional genomics pathways of the cell that are not only responsible for tumor development but also responsible for the response of the cells to radiation therapy. The key mechanisms for cellular damage and damage repair are described. It is further more discussed how these processes can be brought to inactivate the tumor without severely damaging surrounding normal tissues using suitable radiation modalities like intensity-modulated radiation therapy (IMRT) or light ions. The use of such methods may lead to a truly scientific approach to radiation therapy optimization, particularly when invivo predictive assays of radiation responsiveness becomes clinically available at a larger scale. Brief examples of the efficiency of IMRT are also given showing how sensitive normal tissues can be spared at the same time as highly curative doses are delivered to a tumor that is often radiation resistant and located near organs at risk. This new approach maximizes the probability to eradicate the tumor, while at the same time, adverse reactions in sensitive normal tissues are as far as possible minimized using IMRT with photons and light ions. PMID:20191284
A simple approach to metal hydride alloy optimization
NASA Technical Reports Server (NTRS)
Lawson, D. D.; Miller, C.; Landel, R. F.
1976-01-01
Certain metals and related alloys can combine with hydrogen in a reversible fashion, so that on being heated, they release a portion of the gas. Such materials may find application in the large scale storage of hydrogen. Metal and alloys which show high dissociation pressure at low temperatures, and low endothermic heat of dissociation, and are therefore desirable for hydrogen storage, give values of the Hildebrand-Scott solubility parameter that lie between 100-118 Hildebrands, (Ref. 1), close to that of dissociated hydrogen. All of the less practical storage systems give much lower values of the solubility parameter. By using the Hildebrand solubility parameter as a criterion, and applying the mixing rule to combinations of known alloys and solid solutions, correlations are made to optimize alloy compositions and maximize hydrogen storage capacity.
A stochastic optimization approach for integrated urban water resource planning.
Huang, Y; Chen, J; Zeng, S; Sun, F; Dong, X
2013-01-01
Urban water is facing the challenges of both scarcity and water quality deterioration. Consideration of nonconventional water resources has increasingly become essential over the last decade in urban water resource planning. In addition, rapid urbanization and economic development has led to an increasing uncertain water demand and fragile water infrastructures. Planning of urban water resources is thus in need of not only an integrated consideration of both conventional and nonconventional urban water resources including reclaimed wastewater and harvested rainwater, but also the ability to design under gross future uncertainties for better reliability. This paper developed an integrated nonlinear stochastic optimization model for urban water resource evaluation and planning in order to optimize urban water flows. It accounted for not only water quantity but also water quality from different sources and for different uses with different costs. The model successfully applied to a case study in Beijing, which is facing a significant water shortage. The results reveal how various urban water resources could be cost-effectively allocated by different planning alternatives and how their reliabilities would change. PMID:23552255
New Approaches to HSCT Multidisciplinary Design and Optimization
NASA Technical Reports Server (NTRS)
Schrage, D. P.; Craig, J. I.; Fulton, R. E.; Mistree, F.
1996-01-01
The successful development of a capable and economically viable high speed civil transport (HSCT) is perhaps one of the most challenging tasks in aeronautics for the next two decades. At its heart it is fundamentally the design of a complex engineered system that has significant societal, environmental and political impacts. As such it presents a formidable challenge to all areas of aeronautics, and it is therefore a particularly appropriate subject for research in multidisciplinary design and optimization (MDO). In fact, it is starkly clear that without the availability of powerful and versatile multidisciplinary design, analysis and optimization methods, the design, construction and operation of im HSCT simply cannot be achieved. The present research project is focused on the development and evaluation of MDO methods that, while broader and more general in scope, are particularly appropriate to the HSCT design problem. The research aims to not only develop the basic methods but also to apply them to relevant examples from the NASA HSCT R&D effort. The research involves a three year effort aimed first at the HSCT MDO problem description, next the development of the problem, and finally a solution to a significant portion of the problem.
Discovery and Optimization of Materials Using Evolutionary Approaches.
Le, Tu C; Winkler, David A
2016-05-25
Materials science is undergoing a revolution, generating valuable new materials such as flexible solar panels, biomaterials and printable tissues, new catalysts, polymers, and porous materials with unprecedented properties. However, the number of potentially accessible materials is immense. Artificial evolutionary methods such as genetic algorithms, which explore large, complex search spaces very efficiently, can be applied to the identification and optimization of novel materials more rapidly than by physical experiments alone. Machine learning models can augment experimental measurements of materials fitness to accelerate identification of useful and novel materials in vast materials composition or property spaces. This review discusses the problems of large materials spaces, the types of evolutionary algorithms employed to identify or optimize materials, and how materials can be represented mathematically as genomes, describes fitness landscapes and mutation operators commonly employed in materials evolution, and provides a comprehensive summary of published research on the use of evolutionary methods to generate new catalysts, phosphors, and a range of other materials. The review identifies the potential for evolutionary methods to revolutionize a wide range of manufacturing, medical, and materials based industries. PMID:27171499
Guan, Xiangmin; Zhang, Xuejun; Zhu, Yanbo; Sun, Dengfeng; Lei, Jiaxing
2015-01-01
Considering reducing the airspace congestion and the flight delay simultaneously, this paper formulates the airway network flow assignment (ANFA) problem as a multiobjective optimization model and presents a new multiobjective optimization framework to solve it. Firstly, an effective multi-island parallel evolution algorithm with multiple evolution populations is employed to improve the optimization capability. Secondly, the nondominated sorting genetic algorithm II is applied for each population. In addition, a cooperative coevolution algorithm is adapted to divide the ANFA problem into several low-dimensional biobjective optimization problems which are easier to deal with. Finally, in order to maintain the diversity of solutions and to avoid prematurity, a dynamic adjustment operator based on solution congestion degree is specifically designed for the ANFA problem. Simulation results using the real traffic data from China air route network and daily flight plans demonstrate that the proposed approach can improve the solution quality effectively, showing superiority to the existing approaches such as the multiobjective genetic algorithm, the well-known multiobjective evolutionary algorithm based on decomposition, and a cooperative coevolution multiobjective algorithm as well as other parallel evolution algorithms with different migration topology. PMID:26180840
Numerical optimization approaches of single-pulse conduction laser welding by beam shape tailoring
NASA Astrophysics Data System (ADS)
Sundqvist, J.; Kaplan, A. F. H.; Shachaf, L.; Brodsky, A.; Kong, C.; Blackburn, J.; Assuncao, E.; Quintino, L.
2016-04-01
While circular laser beams are usually applied in laser welding, for certain applications tailoring of the laser beam shape, e.g. by diffractive optical elements, can optimize the process. A case where overlap conduction mode welding should be used to produce a C-shaped joint was studied. For the dimensions studied in this paper, the weld joint deviated significantly from the C-shape of the single-pulse laser beam. Because of the complex heat flow interactions, the process requires optimization. Three approaches for extracting quantitative indicators for understanding the essential heat flow contributions process and for optimizing the C-shape of the weld and of the laser beam were studied and compared. While integral energy properties through a control volume and temperature gradients at key locations only partially describe the heat flow behaviour, the geometrical properties of the melt pool isotherm proved to be the most reliable method for optimization. While pronouncing the C-ends was not sufficient, an additional enlargement of the laser beam produced the desired C-shaped weld joint. The approach is analysed and the potential for generalization is discussed.
Guan, Xiangmin; Zhang, Xuejun; Zhu, Yanbo; Sun, Dengfeng; Lei, Jiaxing
2015-01-01
Considering reducing the airspace congestion and the flight delay simultaneously, this paper formulates the airway network flow assignment (ANFA) problem as a multiobjective optimization model and presents a new multiobjective optimization framework to solve it. Firstly, an effective multi-island parallel evolution algorithm with multiple evolution populations is employed to improve the optimization capability. Secondly, the nondominated sorting genetic algorithm II is applied for each population. In addition, a cooperative coevolution algorithm is adapted to divide the ANFA problem into several low-dimensional biobjective optimization problems which are easier to deal with. Finally, in order to maintain the diversity of solutions and to avoid prematurity, a dynamic adjustment operator based on solution congestion degree is specifically designed for the ANFA problem. Simulation results using the real traffic data from China air route network and daily flight plans demonstrate that the proposed approach can improve the solution quality effectively, showing superiority to the existing approaches such as the multiobjective genetic algorithm, the well-known multiobjective evolutionary algorithm based on decomposition, and a cooperative coevolution multiobjective algorithm as well as other parallel evolution algorithms with different migration topology. PMID:26180840
Xie, Dongming; Liu, Dehua; Zhu, Haoli; Zhang, Jianan
2002-05-01
To optimize the fed-batch processes of glycerol fermentation in different reactor states, typical bioreactors including 500-mL shaking flask, 600-mL and 15-L airlift loop reactor, and 5-L stirred vessel were investigated. It was found that by reestimating the values of only two variable kinetic parameters associated with physical transport phenomena in a reactor, the macrokinetic model of glycerol fermentation proposed in previous work could describe well the batch processes in different reactor states. This variable kinetic parameter (VKP) approach was further applied to model-based optimization of discrete-pulse feed (DPF) strategies of both glucose and corn steep slurry for glycerol fed-batch fermentation. The experimental results showed that, compared with the feed strategies determined just by limited experimental optimization in previous work, the DPF strategies with VKPs adjusted could improve glycerol productivity at least by 27% in the scale-down and scale-up reactor states. The approach proposed appeared promising for further modeling and optimization of glycerol fermentation or the similar bioprocesses in larger scales. PMID:12049203
Model reduction for chemical kinetics: An optimization approach
Petzold, L.; Zhu, W.
1999-04-01
The kinetics of a detailed chemically reacting system can potentially be very complex. Although the chemist may be interested in only a few species, the reaction model almost always involves a much larger number of species. Some of those species are radicals, which are very reactive species and can be important intermediaries in the reaction scheme. A large number of elementary reactions can occur among the species; some of these reactions are fast and some are slow. The aim of simplified kinetics modeling is to derive the simplest reaction system which retains the essential features of the full system. An optimization-based method for reduction of the number of species and reactions in chemical kinetics model is described. Numerical results for several reaction mechanisms illustrate the potential of this approach.
Approaches of Russian oil companies to optimal capital structure
NASA Astrophysics Data System (ADS)
Ishuk, T.; Ulyanova, O.; Savchitz, V.
2015-11-01
Oil companies play a vital role in Russian economy. Demand for hydrocarbon products will be increasing for the nearest decades simultaneously with the population growth and social needs. Change of raw-material orientation of Russian economy and the transition to the innovative way of the development do not exclude the development of oil industry in future. Moreover, society believes that this sector must bring the Russian economy on to the road of innovative development due to neo-industrialization. To achieve this, the government power as well as capital management of companies are required. To make their optimal capital structure, it is necessary to minimize the capital cost, decrease definite risks under existing limits, and maximize profitability. The capital structure analysis of Russian and foreign oil companies shows different approaches, reasons, as well as conditions and, consequently, equity capital and debt capital relationship and their cost, which demands the effective capital management strategy.
Optimizing Dendritic Cell-Based Approaches for Cancer Immunotherapy
Datta, Jashodeep; Terhune, Julia H.; Lowenfeld, Lea; Cintolo, Jessica A.; Xu, Shuwen; Roses, Robert E.; Czerniecki, Brian J.
2014-01-01
Dendritic cells (DC) are professional antigen-presenting cells uniquely suited for cancer immunotherapy. They induce primary immune responses, potentiate the effector functions of previously primed T-lymphocytes, and orchestrate communication between innate and adaptive immunity. The remarkable diversity of cytokine activation regimens, DC maturation states, and antigen-loading strategies employed in current DC-based vaccine design reflect an evolving, but incomplete, understanding of optimal DC immunobiology. In the clinical realm, existing DC-based cancer immunotherapy efforts have yielded encouraging but inconsistent results. Despite recent U.S. Federal and Drug Administration (FDA) approval of DC-based sipuleucel-T for metastatic castration-resistant prostate cancer, clinically effective DC immunotherapy as monotherapy for a majority of tumors remains a distant goal. Recent work has identified strategies that may allow for more potent “next-generation” DC vaccines. Additionally, multimodality approaches incorporating DC-based immunotherapy may improve clinical outcomes. PMID:25506283
[OPTIMAL APPROACH TO COMBINED TREATMENT OF PATIENTS WITH UROGENITAL PAPILLOMATOSIS].
Breusov, A A; Kulchavenya, E V; Brizhatyukl, E V; Filimonov, P N
2015-01-01
The review analyzed 59 sources of domestic and foreign literature on the use of immunomodulator izoprinozin in treating patients infected with human papilloma virus, and the results of their own experience. The high prevalence of HPV and its role in the development of cervical cancer are shown, the mechanisms of HPV development and the host protection from this infection are described. The authors present approaches to the treatment of HPV-infected patients with particular attention to izoprinozin. Isoprinosine belongs to immunomodulators with antiviral activity. It inhibits the replication of viral DNA and RNA by binding to cell ribosomes and changing their stereochemical structure. HPV infection, especially in the early stages, may be successfully cured till the complete elimination of the virus. Inosine Pranobex (izoprinozin) having dual action and the most abundant evidence base, may be recognized as the optimal treatment option. PMID:26859953
Structural Query Optimization in Native XML Databases: A Hybrid Approach
NASA Astrophysics Data System (ADS)
Haw, Su-Cheng; Lee, Chien-Sing
As XML (eXtensible Mark-up Language) is gaining its popularity in data exchange over the Web, querying XML data has become an important issue to be addressed. In native XML databases (NXD), XML documents are usually modeled as trees and XML queries are typically specified in path expression. The primitive structural relationships are Parent-Child (P-C), Ancestor-Descendant (A-D), sibling and ordered query. Thus, a suitable and compact labeling scheme is crucial to identify these relationships and henceforth to process the query efficiently. We propose a novel labeling scheme consisting of < self-level:parent> to support all these relationships efficiently. Besides, we adopt the decomposition-matching-merging approach for structural query processing and propose a hybrid query optimization technique, TwigINLAB to process and optimize the twig query evaluation. Experimental results indicate that TwigINLAB can process all types of XML queries 15% better than the TwigStack algorithm in terms of execution time in most test cases.
Design optimization for cost and quality: The robust design approach
NASA Technical Reports Server (NTRS)
Unal, Resit
1990-01-01
Designing reliable, low cost, and operable space systems has become the key to future space operations. Designing high quality space systems at low cost is an economic and technological challenge to the designer. A systematic and efficient way to meet this challenge is a new method of design optimization for performance, quality, and cost, called Robust Design. Robust Design is an approach for design optimization. It consists of: making system performance insensitive to material and subsystem variation, thus allowing the use of less costly materials and components; making designs less sensitive to the variations in the operating environment, thus improving reliability and reducing operating costs; and using a new structured development process so that engineering time is used most productively. The objective in Robust Design is to select the best combination of controllable design parameters so that the system is most robust to uncontrollable noise factors. The robust design methodology uses a mathematical tool called an orthogonal array, from design of experiments theory, to study a large number of decision variables with a significantly small number of experiments. Robust design also uses a statistical measure of performance, called a signal-to-noise ratio, from electrical control theory, to evaluate the level of performance and the effect of noise factors. The purpose is to investigate the Robust Design methodology for improving quality and cost, demonstrate its application by the use of an example, and suggest its use as an integral part of space system design process.
Optimization methods of the net emission computation applied to cylindrical sodium vapor plasma
Hadj Salah, S. Hajji, S.; Ben Hamida, M. B.; Charrada, K.
2015-01-15
An optimization method based on a physical analysis of the temperature profile and different terms in the radiative transfer equation is developed to reduce the time computation of the net emission. This method has been applied for the cylindrical discharge in sodium vapor. Numerical results show a relative error of spectral flux density values lower than 5% with an exact solution, whereas the computation time is about 10 orders of magnitude less. This method is followed by a spectral method based on the rearrangement of the lines profile. Results are shown for Lorentzian profile and they demonstrated a relative error lower than 10% with the reference method and gain in computation time about 20 orders of magnitude.
An improved ant colony optimization approach for optimization of process planning.
Wang, JinFeng; Fan, XiaoLiang; Ding, Haimin
2014-01-01
Computer-aided process planning (CAPP) is an important interface between computer-aided design (CAD) and computer-aided manufacturing (CAM) in computer-integrated manufacturing environments (CIMs). In this paper, process planning problem is described based on a weighted graph, and an ant colony optimization (ACO) approach is improved to deal with it effectively. The weighted graph consists of nodes, directed arcs, and undirected arcs, which denote operations, precedence constraints among operation, and the possible visited path among operations, respectively. Ant colony goes through the necessary nodes on the graph to achieve the optimal solution with the objective of minimizing total production costs (TPCs). A pheromone updating strategy proposed in this paper is incorporated in the standard ACO, which includes Global Update Rule and Local Update Rule. A simple method by controlling the repeated number of the same process plans is designed to avoid the local convergence. A case has been carried out to study the influence of various parameters of ACO on the system performance. Extensive comparative experiments have been carried out to validate the feasibility and efficiency of the proposed approach. PMID:25097874
An Improved Ant Colony Optimization Approach for Optimization of Process Planning
Wang, JinFeng; Fan, XiaoLiang; Ding, Haimin
2014-01-01
Computer-aided process planning (CAPP) is an important interface between computer-aided design (CAD) and computer-aided manufacturing (CAM) in computer-integrated manufacturing environments (CIMs). In this paper, process planning problem is described based on a weighted graph, and an ant colony optimization (ACO) approach is improved to deal with it effectively. The weighted graph consists of nodes, directed arcs, and undirected arcs, which denote operations, precedence constraints among operation, and the possible visited path among operations, respectively. Ant colony goes through the necessary nodes on the graph to achieve the optimal solution with the objective of minimizing total production costs (TPCs). A pheromone updating strategy proposed in this paper is incorporated in the standard ACO, which includes Global Update Rule and Local Update Rule. A simple method by controlling the repeated number of the same process plans is designed to avoid the local convergence. A case has been carried out to study the influence of various parameters of ACO on the system performance. Extensive comparative experiments have been carried out to validate the feasibility and efficiency of the proposed approach. PMID:25097874
Gajić, A.; Radovanović, J. Milanović, V.; Indjin, D.; Ikonić, Z.
2014-02-07
A computational model for the optimization of the second order optical nonlinearities in GaInAs/AlInAs quantum cascade laser structures is presented. The set of structure parameters that lead to improved device performance was obtained through the implementation of the Genetic Algorithm. In the following step, the linear and second harmonic generation power were calculated by self-consistently solving the system of rate equations for carriers and photons. This rate equation system included both stimulated and simultaneous double photon absorption processes that occur between the levels relevant for second harmonic generation, and material-dependent effective mass, as well as band nonparabolicity, were taken into account. The developed method is general, in the sense that it can be applied to any higher order effect, which requires the photon density equation to be included. Specifically, we have addressed the optimization of the active region of a double quantum well In{sub 0.53}Ga{sub 0.47}As/Al{sub 0.48}In{sub 0.52}As structure and presented its output characteristics.
An integrated approach for optimal design of micro gas turbine combustors
NASA Astrophysics Data System (ADS)
Fuligno, Luca; Micheli, Diego; Poloni, Carlo
2009-06-01
The present work presents an approach for the optimized design of small gas turbine combustors, that integrates a 0-D code, CFD analyses and an advanced game theory multi-objective optimization algorithm. The output of the 0-D code is a baseline design of the combustor, given the required fuel characteristics, the basic geometry (tubular or annular) and the combustion concept (i.e. lean premixed primary zone or diffusive processes). For the optimization of the baseline design a simplified parametric CAD/mesher model is then defined and submitted to a CFD code. Free parameters of the optimization process are position and size of the liner hole arrays, their total area and the shape of the exit duct, while different objectives are the minimization of NOx emissions, pressure losses and combustor exit Pattern Factor. A 3D simulation of the optimized geometry completes the design procedure. As a first demonstrative example, the integrated design process was applied to a tubular combustion chamber with a lean premixed primary zone for a recuperative methane-fuelled small gas turbine of the 100 kW class.
NASA Astrophysics Data System (ADS)
Igeta, Hideki; Hasegawa, Mikio
Chaotic dynamics have been effectively applied to improve various heuristic algorithms for combinatorial optimization problems in many studies. Currently, the most used chaotic optimization scheme is to drive heuristic solution search algorithms applicable to large-scale problems by chaotic neurodynamics including the tabu effect of the tabu search. Alternatively, meta-heuristic algorithms are used for combinatorial optimization by combining a neighboring solution search algorithm, such as tabu, gradient, or other search method, with a global search algorithm, such as genetic algorithms (GA), ant colony optimization (ACO), or others. In these hybrid approaches, the ACO has effectively optimized the solution of many benchmark problems in the quadratic assignment problem library. In this paper, we propose a novel hybrid method that combines the effective chaotic search algorithm that has better performance than the tabu search and global search algorithms such as ACO and GA. Our results show that the proposed chaotic hybrid algorithm has better performance than the conventional chaotic search and conventional hybrid algorithms. In addition, we show that chaotic search algorithm combined with ACO has better performance than when combined with GA.
Optimization of floodplain monitoring sensors through an entropy approach
NASA Astrophysics Data System (ADS)
Ridolfi, E.; Yan, K.; Alfonso, L.; Di Baldassarre, G.; Napolitano, F.; Russo, F.; Bates, P. D.
2012-04-01
To support the decision making processes of flood risk management and long term floodplain planning, a significant issue is the availability of data to build appropriate and reliable models. Often the required data for model building, calibration and validation are not sufficient or available. A unique opportunity is offered nowadays by the globally available data, which can be freely downloaded from internet. However, there remains the question of what is the real potential of those global remote sensing data, characterized by different accuracies, for global inundation monitoring and how to integrate them with inundation models. In order to monitor a reach of the River Dee (UK), a network of cheap wireless sensors (GridStix) was deployed both in the channel and in the floodplain. These sensors measure the water depth, supplying the input data for flood mapping. Besides their accuracy and reliability, their location represents a big issue, having the purpose of providing as much information as possible and at the same time as low redundancy as possible. In order to update their layout, the initial number of six sensors has been increased up to create a redundant network over the area. Through an entropy approach, the most informative and the least redundant sensors have been chosen among all. First, a simple raster-based inundation model (LISFLOOD-FP) is used to generate a synthetic GridStix data set of water stages. The Digital Elevation Model (DEM) used for hydraulic model building is the globally and freely available SRTM DEM. Second, the information content of each sensor has been compared by evaluating their marginal entropy. Those with a low marginal entropy are excluded from the process because of their low capability to provide information. Then the number of sensors has been optimized considering a Multi-Objective Optimization Problem (MOOP) with two objectives, namely maximization of the joint entropy (a measure of the information content) and
MSE optimal bit-rate allocation in JPEG2000 Part 2 compression applied to a 3D data set
NASA Astrophysics Data System (ADS)
Kosheleva, Olga M.; Cabrera, Sergio D.; Usevitch, Bryan E.; Aguirre, Alberto; Vidal, Edward, Jr.
2004-10-01
A bit rate allocation (BRA) strategy is needed to optimally compress three-dimensional (3-D) data on a per-slice basis, treating it as a collection of two-dimensional (2-D) slices/components. This approach is compatible with the framework of JPEG2000 Part 2 which includes the option of pre-processing the slices with a decorrelation transform in the cross-component direction so that slices of transform coefficients are compressed. In this paper, we illustrate the impact of a recently developed inter-slice rate-distortion optimal bit-rate allocation approach that is applicable to this compression system. The approach exploits the MSE optimality of all JPEG2000 bit streams for all slices when each is produced in the quality progressive mode. Each bit stream can be used to produce a rate-distortion curve (RDC) for each slice that is MSE optimal at each bit rate of interest. The inter-slice allocation approach uses all RDCs for all slices to optimally select an overall optimal set of bit rates for all the slices using a constrained optimization procedure. The optimization is conceptually similar to Post-Compression Rate-Distortion optimization that is used within JPEG2000 to optimize bit rates allocated to codeblocks. Results are presented for two types of data sets: a 3-D computed tomography (CT) medical image, and a 3-D metereological data set derived from a particular modeling program. For comparison purposes, compression results are also illustrated for the traditional log-variance approach and for a uniform allocation strategy. The approach is illustrated using two decorrelation tranforms (the Karhunen Loeve transform, and the discrete wavelet transform) for which the inter-slice allocation scheme has the most impact.
Technology Transfer Automated Retrieval System (TEKTRAN)
The use of synthetic sex pheromones for mating disruption has been widely adopted as an environmentally safe alternative to broad-spectrum insecticides to control many lepidopteran pest species [1]. Among the controlled release devices for insect pheromones, hand-applied dispensers are the most comm...
Particle Swarm Optimization Applied to EEG Source Localization of Somatosensory Evoked Potentials.
Shirvany, Yazdan; Mahmood, Qaiser; Edelvik, Fredrik; Jakobsson, Stefan; Hedstrom, Anders; Persson, Mikael
2014-01-01
One of the most important steps in presurgical diagnosis of medically intractable epilepsy is to find the precise location of the epileptogenic foci. Electroencephalography (EEG) is a noninvasive tool commonly used at epilepsy surgery centers for presurgical diagnosis. In this paper, a modified particle swarm optimization (MPSO) method is used to solve the EEG source localization problem. The method is applied to noninvasive EEG recording of somatosensory evoked potentials (SEPs) for a healthy subject. A 1 mm hexahedra finite element volume conductor model of the subject's head was generated using T1-weighted magnetic resonance imaging data. Special consideration was made to accurately model the skull and cerebrospinal fluid. An exhaustive search pattern and the MPSO method were then applied to the peak of the averaged SEP data and both identified the same region of the somatosensory cortex as the location of the SEP source. A clinical expert independently identified the expected source location, further corroborating the source analysis methods. The MPSO converged to the global minima with significantly lower computational complexity compared to the exhaustive search method that required almost 3700 times more evaluations. PMID:24122569
Murton, Jaclyn K.; Hanson, David T.; Turner, Tom; Powell, Amy Jo; James, Scott Carlton; Timlin, Jerilyn Ann; Scholle, Steven; August, Andrew; Dwyer, Brian P.; Ruffing, Anne; Jones, Howland D. T.; Ricken, James Bryce; Reichardt, Thomas A.
2010-04-01
Progress in algal biofuels has been limited by significant knowledge gaps in algal biology, particularly as they relate to scale-up. To address this we are investigating how culture composition dynamics (light as well as biotic and abiotic stressors) describe key biochemical indicators of algal health: growth rate, photosynthetic electron transport, and lipid production. Our approach combines traditional algal physiology with genomics, bioanalytical spectroscopy, chemical imaging, remote sensing, and computational modeling to provide an improved fundamental understanding of algal cell biology across multiple cultures scales. This work spans investigations from the single-cell level to ensemble measurements of algal cell cultures at the laboratory benchtop to large greenhouse scale (175 gal). We will discuss the advantages of this novel, multidisciplinary strategy and emphasize the importance of developing an integrated toolkit to provide sensitive, selective methods for detecting early fluctuations in algal health, productivity, and population diversity. Progress in several areas will be summarized including identification of spectroscopic signatures for algal culture composition, stress level, and lipid production enabled by non-invasive spectroscopic monitoring of the photosynthetic and photoprotective pigments at the single-cell and bulk-culture scales. Early experiments compare and contrast the well-studied green algae chlamydomonas with two potential production strains of microalgae, nannochloropsis and dunnaliella, under optimal and stressed conditions. This integrated approach has the potential for broad impact on algal biofuels and bioenergy and several of these opportunities will be discussed.
NASA Astrophysics Data System (ADS)
DeSena, J. T.; Martin, S. R.; Clarke, J. C.; Dutrow, D. A.; Newman, A. J.
2012-06-01
As the number and diversity of sensing assets available for intelligence, surveillance and reconnaissance (ISR) operations continues to expand, the limited ability of human operators to effectively manage, control and exploit the ISR ensemble is exceeded, leading to reduced operational effectiveness. Automated support both in the processing of voluminous sensor data and sensor asset control can relieve the burden of human operators to support operation of larger ISR ensembles. In dynamic environments it is essential to react quickly to current information to avoid stale, sub-optimal plans. Our approach is to apply the principles of feedback control to ISR operations, "closing the loop" from the sensor collections through automated processing to ISR asset control. Previous work by the authors demonstrated non-myopic multiple platform trajectory control using a receding horizon controller in a closed feedback loop with a multiple hypothesis tracker applied to multi-target search and track simulation scenarios in the ground and space domains. This paper presents extensions in both size and scope of the previous work, demonstrating closed-loop control, involving both platform routing and sensor pointing, of a multisensor, multi-platform ISR ensemble tasked with providing situational awareness and performing search, track and classification of multiple moving ground targets in irregular warfare scenarios. The closed-loop ISR system is fullyrealized using distributed, asynchronous components that communicate over a network. The closed-loop ISR system has been exercised via a networked simulation test bed against a scenario in the Afghanistan theater implemented using high-fidelity terrain and imagery data. In addition, the system has been applied to space surveillance scenarios requiring tracking of space objects where current deliberative, manually intensive processes for managing sensor assets are insufficiently responsive. Simulation experiment results are presented
de Vries, Anton LM; de Groot, Marieke H; de Keijser, Jos; Kerkhof, Ad JFM
2014-01-01
Background The Internet is used increasingly for both suicide research and prevention. To optimize online assessment of suicidal patients, there is a need for short, good-quality tools to assess elevated risk of future suicidal behavior. Computer adaptive testing (CAT) can be used to reduce response burden and improve accuracy, and make the available pencil-and-paper tools more appropriate for online administration. Objective The aim was to test whether an item response–based computer adaptive simulation can be used to reduce the length of the Beck Scale for Suicide Ideation (BSS). Methods The data used for our simulation was obtained from a large multicenter trial from The Netherlands: the Professionals in Training to STOP suicide (PITSTOP suicide) study. We applied a principal components analysis (PCA), confirmatory factor analysis (CFA), a graded response model (GRM), and simulated a CAT. Results The scores of 505 patients were analyzed. Psychometric analyses showed the questionnaire to be unidimensional with good internal consistency. The computer adaptive simulation showed that for the estimation of elevation of risk of future suicidal behavior 4 items (instead of the full 19) were sufficient, on average. Conclusions This study demonstrated that CAT can be applied successfully to reduce the length of the Dutch version of the BSS. We argue that the use of CAT can improve the accuracy and the response burden when assessing the risk of future suicidal behavior online. Because CAT can be daunting for clinicians and applied scientists, we offer a concrete example of our computer adaptive simulation of the Dutch version of the BSS at the end of the paper. PMID:25213259
A modular approach to large-scale design optimization of aerospace systems
NASA Astrophysics Data System (ADS)
Hwang, John T.
Gradient-based optimization and the adjoint method form a synergistic combination that enables the efficient solution of large-scale optimization problems. Though the gradient-based approach struggles with non-smooth or multi-modal problems, the capability to efficiently optimize up to tens of thousands of design variables provides a valuable design tool for exploring complex tradeoffs and finding unintuitive designs. However, the widespread adoption of gradient-based optimization is limited by the implementation challenges for computing derivatives efficiently and accurately, particularly in multidisciplinary and shape design problems. This thesis addresses these difficulties in two ways. First, to deal with the heterogeneity and integration challenges of multidisciplinary problems, this thesis presents a computational modeling framework that solves multidisciplinary systems and computes their derivatives in a semi-automated fashion. This framework is built upon a new mathematical formulation developed in this thesis that expresses any computational model as a system of algebraic equations and unifies all methods for computing derivatives using a single equation. The framework is applied to two engineering problems: the optimization of a nanosatellite with 7 disciplines and over 25,000 design variables; and simultaneous allocation and mission optimization for commercial aircraft involving 330 design variables, 12 of which are integer variables handled using the branch-and-bound method. In both cases, the framework makes large-scale optimization possible by reducing the implementation effort and code complexity. The second half of this thesis presents a differentiable parametrization of aircraft geometries and structures for high-fidelity shape optimization. Existing geometry parametrizations are not differentiable, or they are limited in the types of shape changes they allow. This is addressed by a novel parametrization that smoothly interpolates aircraft
A jazz-based approach for optimal setting of pressure reducing valves in water distribution networks
NASA Astrophysics Data System (ADS)
De Paola, Francesco; Galdiero, Enzo; Giugni, Maurizio
2016-05-01
This study presents a model for valve setting in water distribution networks (WDNs), with the aim of reducing the level of leakage. The approach is based on the harmony search (HS) optimization algorithm. The HS mimics a jazz improvisation process able to find the best solutions, in this case corresponding to valve settings in a WDN. The model also interfaces with the improved version of a popular hydraulic simulator, EPANET 2.0, to check the hydraulic constraints and to evaluate the performances of the solutions. Penalties are introduced in the objective function in case of violation of the hydraulic constraints. The model is applied to two case studies, and the obtained results in terms of pressure reductions are comparable with those of competitive metaheuristic algorithms (e.g. genetic algorithms). The results demonstrate the suitability of the HS algorithm for water network management and optimization.
Applying Bayesian Item Selection Approaches to Adaptive Tests Using Polytomous Items
ERIC Educational Resources Information Center
Penfield, Randall D.
2006-01-01
This study applied the maximum expected information (MEI) and the maximum posterior-weighted information (MPI) approaches of computer adaptive testing item selection to the case of a test using polytomous items following the partial credit model. The MEI and MPI approaches are described. A simulation study compared the efficiency of ability…
Applying the Cultural Formulation Approach to Career Counseling with Latinas/os
ERIC Educational Resources Information Center
Flores, Lisa Y.; Ramos, Karina; Kanagui, Marlen
2010-01-01
In this article, the authors present two hypothetical cases, one of a Mexican American female college student and one of a Mexican immigrant adult male, and apply a culturally sensitive approach to career assessment and career counseling with each of these clients. Drawing from Leong, Hardin, and Gupta's cultural formulation approach (CFA) to…
NASA Astrophysics Data System (ADS)
Ortiz Gómez, Natalia; Walker, Scott J. I.
Existent studies on the evolution of the space debris population show that both mitigation measures and active debris removal methods are necessary in order to prevent the current population from growing. Active debris removal methods, which require contact with the target, show complications if the target is rotating at high speeds. Observed rotations go up to 50 deg/s combined with precession and nutation motions. “Natural” rotational damping in upper stages has been observed for some debris objects. This phenomenon occurs due to the eddy currents induced by the Earth’s magnetic field in the predominantly conductive materials of these man made rotating objects. The idea presented in this paper is to submit the satellite to an enhanced magnetic field in order to subdue it and damp its rotation, thus allowing for its subsequent de-orbiting phase. The braking method that is proposed has the advantage of avoiding any kind of mechanical contact with the target. A deployable structure with a magnetic coil at its end is used to induce the necessary braking torques on the target. This way, the induced magnetic field is created far away from the chaseŕs main body avoiding undesirable effects on its instruments. This paper focuses on the overall design of the system and the parameters considered are: the braking time, the power required, the mass of the deployable structure and the magnetic coil system, the size of the coil, the materials selection and distance to the target. The different equations that link all these variables together are presented. Nevertheless, these equations lead to several variables which make it possible to approach the engineering design as an optimization problem. Given that only a few variables remain, no sophisticated numerical methods are called for, and a simple graphical approach can be used to display the optimum solutions. Some parameters are open to future refinements as the optimization problem must be contemplated globally in
NASA Astrophysics Data System (ADS)
Finley, James P.; Chaudhuri, Rajat K.; Freed, Karl F.
1996-07-01
High-order multireference perturbation theory is applied to the 1S states of the beryllium atom using a reference (model) space composed of the \\|1s22s2> and the \\|1s22p2> configuration-state functions (CSF's), a system that is known to yield divergent expansions using Mo/ller-Plesset and Epstein-Nesbet partitioning methods. Computations of the eigenvalues are made through 40th order using forced degeneracy (FD) partitioning and the recently introduced optimization (OPT) partitioning. The former forces the 2s and 2p orbitals to be degenerate in zeroth order, while the latter chooses optimal zeroth-order energies of the (few) most important states. Our methodology employs simple models for understanding and suggesting remedies for unsuitable choices of reference spaces and partitioning methods. By examining a two-state model composed of only the \\|1s22p2> and \\|1s22s3s> states of the beryllium atom, it is demonstrated that the full computation with 1323 CSF's can converge only if the zeroth-order energy of the \\|1s22s3s> Rydberg state from the orthogonal space lies below the zeroth-order energy of the \\|1s22p2> CSF from the reference space. Thus convergence in this case requires a zeroth-order spectral overlap between the orthogonal and reference spaces. The FD partitioning is not capable of generating this type of spectral overlap and thus yields a divergent expansion. However, the expansion is actually asymptotically convergent, with divergent behavior not displayed until the 11th order because the \\|1s22s3s> Rydberg state is only weakly coupled with the \\|1s22p2> CSF and because these states are energetically well separated in zeroth order. The OPT partitioning chooses the correct zeroth-order energy ordering and thus yields a convergent expansion that is also very accurate in low orders compared to the exact solution within the basis.
Structural approaches to spin glasses and optimization problems
NASA Astrophysics Data System (ADS)
de Sanctis, Luca
We introduce the concept of Random Multi-Overlap Structure (RaMOSt) as a generalization of the one introduced by M. Aizenman et al. for non-diluted spin glasses. We use this concept to find generalized bounds for the free energy of the Viana-Bray model of diluted spin glasses and to formulate and prove the Extended Variational Principle that implicitly provides the free energy of the model. Then we exhibit a theorem for the limiting RaMOSt, analogous to the one found by F. Guerra for the Sherrington-Kirkpatrick model, that describes some stability properties of the model. We also show how our technique can be used to prove the existence of the thermodynamic limit of the free energy. We then propose an ultrametric breaking of replica symmetry for diluted spin glasses in the framework of Random Multi-Overlap Structures (RaMOSt). Such a proposal is closer to the Parisi theory for non-diluted spin glasses than the theory based on the iterative approach. Our approach allows to formulate an ansatz in which the Broken Replica Symmetry trial function depends on a set of numbers, over which one has to take the infimum (as opposed to a nested chain of probabilty distributions). Our scheme suggests that the order parameter is determined by the probability distribution of the multi-overlap in a similar sense as in the non-diluted case, and it is not necessarily a functional. Such results are then extended to the K-SAT and p-XOR-SAT optimization problems, and to the spherical mean field spin glass. The ultrametric structure exhibits a factorization property similar to the one of the optimal structures for the Viana-Bray model. The present work paves the way to a revisited Parisi theory for diluted spin systems. Moreover, it emphasizes some structural analogies among different models, which also seem to be plausible for models that still escape good mathematical control. This structural analysis seems quite promising both mathematically and physically.
NASA Astrophysics Data System (ADS)
Piotrowski, Adam P.; Napiorkowski, Jarosław J.
2011-09-01
Evolutionary Computation-based algorithms. The Levenberg-Marquardt optimization must be considered as the most efficient one due to its speed. Its drawback due to possible sticking in poor local optimum can be overcome by applying a multi-start approach.
Calculation of a double reactive azeotrope using stochastic optimization approaches
NASA Astrophysics Data System (ADS)
Mendes Platt, Gustavo; Pinheiro Domingos, Roberto; Oliveira de Andrade, Matheus
2013-02-01
An homogeneous reactive azeotrope is a thermodynamic coexistence condition of two phases under chemical and phase equilibrium, where compositions of both phases (in the Ung-Doherty sense) are equal. This kind of nonlinear phenomenon arises from real world situations and has applications in chemical and petrochemical industries. The modeling of reactive azeotrope calculation is represented by a nonlinear algebraic system with phase equilibrium, chemical equilibrium and azeotropy equations. This nonlinear system can exhibit more than one solution, corresponding to a double reactive azeotrope. The robust calculation of reactive azeotropes can be conducted by several approaches, such as interval-Newton/generalized bisection algorithms and hybrid stochastic-deterministic frameworks. In this paper, we investigate the numerical aspects of the calculation of reactive azeotropes using two metaheuristics: the Luus-Jaakola adaptive random search and the Firefly algorithm. Moreover, we present results for a system (with industrial interest) with more than one azeotrope, the system isobutene/methanol/methyl-tert-butyl-ether (MTBE). We present convergence patterns for both algorithms, illustrating - in a bidimensional subdomain - the identification of reactive azeotropes. A strategy for calculation of multiple roots in nonlinear systems is also applied. The results indicate that both algorithms are suitable and robust when applied to reactive azeotrope calculations for this "challenging" nonlinear system.
Optimizing denominator data estimation through a multimodel approach.
Bryssinckx, Ward; Ducheyne, Els; Leirs, Herwig; Hendrickx, Guy
2014-05-01
To assess the risk of (zoonotic) disease transmission in developing countries, decision makers generally rely on distribution estimates of animals from survey records or projections of historical enumeration results. Given the high cost of large-scale surveys, the sample size is often restricted and the accuracy of estimates is therefore low, especially when spatial high-resolution is applied. This study explores possibilities of improving the accuracy of livestock distribution maps without additional samples using spatial modelling based on regression tree forest models, developed using subsets of the Uganda 2008 Livestock Census data, and several covariates. The accuracy of these spatial models as well as the accuracy of an ensemble of a spatial model and direct estimate was compared to direct estimates and "true" livestock figures based on the entire dataset. The new approach is shown to effectively increase the livestock estimate accuracy (median relative error decrease of 0.166-0.037 for total sample sizes of 80-1,600 animals, respectively). This outcome suggests that the accuracy levels obtained with direct estimates can indeed be achieved with lower sample sizes and the multimodel approach presented here, indicating a more efficient use of financial resources. PMID:24893035
Application of an Optimal Tuner Selection Approach for On-Board Self-Tuning Engine Models
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Armstrong, Jeffrey B.; Garg, Sanjay
2012-01-01
An enhanced design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented in this paper. It specific-ally addresses the under-determined estimation problem, in which there are more unknown parameters than available sensor measurements. This work builds upon an existing technique for systematically selecting a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. While the existing technique was optimized for open-loop engine operation at a fixed design point, in this paper an alternative formulation is presented that enables the technique to be optimized for an engine operating under closed-loop control throughout the flight envelope. The theoretical Kalman filter mean squared estimation error at a steady-state closed-loop operating point is derived, and the tuner selection approach applied to minimize this error is discussed. A technique for constructing a globally optimal tuning parameter vector, which enables full-envelope application of the technology, is also presented, along with design steps for adjusting the dynamic response of the Kalman filter state estimates. Results from the application of the technique to linear and nonlinear aircraft engine simulations are presented and compared to the conventional approach of tuner selection. The new methodology is shown to yield a significant improvement in on-line Kalman filter estimation accuracy.
Numerical and Experimental Approach for the Optimal Design of a Dual Plate Under Ballistic Impact
NASA Astrophysics Data System (ADS)
Yoo, Jeonghoon; Chung, Dong-Teak; Park, Myung Soo
To predict the behavior of a dual plate composed of 5052-aluminum and 1002-cold rolled steel under ballistic impact, numerical and experimental approaches are attempted. For the accurate numerical simulation of the impact phenomena, the appropriate selection of the key parameter values based on numerical or experimental tests are critical. This study is focused on not only the optimization technique using the numerical simulation but also numerical and experimental procedures to obtain the required parameter values in the simulation. The Johnson-Cook model is used to simulate the mechanical behaviors, and the simplified experimental and the numerical approaches are performed to obtain the material properties of the model. The element erosion scheme for the robust simulation of the ballistic impact problem is applied by adjusting the element erosion criteria of each material based on numerical and experimental results. The adequate mesh size and the aspect ratio are chosen based on parametric studies. Plastic energy is suggested as a response representing the strength of the plate for the optimization under dynamic loading. Optimized thickness of the dual plate is obtained to resist the ballistic impact without penetration as well as to minimize the total weight.
Gu, Haiwei; Huang, Yuan; Carr, Peter W.
2010-01-01
In this work we develop a practical approach to optimization in comprehensive two dimensional liquid chromatography (LC×LC) which incorporates the important under-sampling correction and is based on the previously developed gradient implementation of the Poppe approach to optimizing peak capacity. The Poppe method allows the determination of the column length, flow rate as well as initial and final eluent compositions that maximize the peak capacity at a given gradient time. It was assumed that gradient elution is applied in both dimensions and that various practical constraints are imposed on both the initial and final mobile phase composition in the first dimension separation. It was convenient to consider four different classes of solute sets differing in their retention properties. The major finding of this study is that the under-sampling effect is very important and causes some unexpected results including the important counter-intuitive observation that under certain conditions the optimum effective LC×LC peak capacity is obtained when the first dimension is deliberately run under sub-optimal conditions. PMID:21145554
NASA Astrophysics Data System (ADS)
Ibanez, Eduardo
Most U.S. energy usage is for electricity production and vehicle transportation, two interdependent infrastructures. The strength and number of the interdependencies will increase rapidly as hybrid electric transportation systems, including plug-in hybrid electric vehicles and hybrid electric trains, become more prominent. There are several new energy supply technologies reaching maturity, accelerated by public concern over global warming. The National Energy and Transportation Planning Tool (NETPLAN) is the implementation of the long-term investment and operation model for the transportation and energy networks. An evolutionary approach with underlying fast linear optimization are in place to determine the solutions with the best investment portfolios in terms of cost, resiliency and sustainability, i.e., the solutions that form the Pareto front. The popular NSGA-II algorithm is used as the base for the multiobjective optimization and metrics are developed for to evaluate the energy and transportation portfolios. An integrating approach to resiliency is presented, allowing the evaluation of high-consequence events, like hurricanes or widespread blackouts. A scheme to parallelize the multiobjective solver is presented, along with a decomposition method for the cost minimization program. The modular and data-driven design of the software is presented. The modeling tool is applied in a numerical example to optimize the national investment in energy and transportation in the next 40 years.
Optimal management of substrates in anaerobic co-digestion: An ant colony algorithm approach.
Verdaguer, Marta; Molinos-Senante, María; Poch, Manel
2016-04-01
Sewage sludge (SWS) is inevitably produced in urban wastewater treatment plants (WWTPs). The treatment of SWS on site at small WWTPs is not economical; therefore, the SWS is typically transported to an alternative SWS treatment center. There is increased interest in the use of anaerobic digestion (AnD) with co-digestion as an SWS treatment alternative. Although the availability of different co-substrates has been ignored in most of the previous studies, it is an essential issue for the optimization of AnD co-digestion. In a pioneering approach, this paper applies an Ant-Colony-Optimization (ACO) algorithm that maximizes the generation of biogas through AnD co-digestion in order to optimize the discharge of organic waste from different waste sources in real-time. An empirical application is developed based on a virtual case study that involves organic waste from urban WWTPs and agrifood activities. The results illustrate the dominate role of toxicity levels in selecting contributions to the AnD input. The methodology and case study proposed in this paper demonstrate the usefulness of the ACO approach in supporting a decision process that contributes to improving the sustainability of organic waste and SWS management. PMID:26868846
Towards an Optimal Multi-Method Paleointensity Approach
NASA Astrophysics Data System (ADS)
de Groot, L. V.; Biggin, A. J.; Langereis, C. G.; Dekkers, M. J.
2014-12-01
Our recently proposed 'multi-method paleointensity approach' consists of at least IZZI-Thellier, MSP-DSC and pseudo-Thellier experiments, complemented with Microwave Thellier experiments for key flows or ages. All results are scrutinized by strict selection criteria to accept only the most reliable paleointensities. This approach yielded reliable estimates of the paleofield for ~70% of all cooling units sampled on Hawaii - an exceptionally high number for a paleointensity study on lavas. Furthermore the credibility of the obtained results is greatly enhanced if more methods mutually agree with in their experimental uncertainties. To further assess the success rate of this new approach, we applied it to two collections of (sub-)recent lavas from Tenerife and Gran Canaria (20 cooling units), and Terceira (Azores, 18 cooling units). Although the mineralogy and rock-magnetic properties of much of these flows seemed less favorable for paleointensity techniques compared to the Hawaiian samples, again the multi-method paleointensity approach yielded reliable estimates for 60-70% of all cooling units. One of the methods, the newly calibrated pseudo-Thellier method, proved to be an important element of our new paleointensity approach yielding reliable estimates for ~50% of the Hawaiian lavas sampled. Its applicability to other volcanic edifices, however, remained questionable. The results from the Canarian and Azorean volcanic edifices provide further constraints on this method's potential. For lavas that are rock-magnetically (i.e. susceptibility-vs-temperature behavior) akin to Hawaiian lavas, the same selection criterion and calibration formula yielded successful results - testifying to the veracity of this new paleointensity method. Besides methodological advances our new record for the Canary Islands also has geomagnetic implications. It reveals a dramatic increase in the intensity of the Earth's magnetic field from ~1250 to ~720 BC, reaching a maximum VADM of ~125 ZAm
Papaneophytou, Christos; Kontopidis, George
2016-04-01
During a discovery project of potential inhibitors for three proteins, TNF-α, RANKL and HO-1, implicated in the pathogenesis of rheumatoid arthritis, significant amounts of purified proteins were required. The application of statistically designed experiments for screening and optimization of induction conditions allows rapid identification of the important factors and interactions between them. We have previously used response surface methodology (RSM) for the optimization of soluble expression of TNF-α and RANKL. In this work, we initially applied RSM for the optimization of recombinant HO-1 and a 91% increase of protein production was achieved. Subsequently, we slightly modified a published incomplete factorial approach (called IF1) in order to evaluate the effect of three expression variables (bacterial strains, induction temperatures and culture media) on soluble expression levels of the three tested proteins. However, soluble expression yields of TNF-α and RANKL obtained by the IF1 method were significantly lower (<50%) than those obtained by RSM. We further modified the IF1 approach by replacing the culture media with induction times and the resulted method called IF-STT (Incomplete Factorial-Stain/Temperature/Time) was validated using the three proteins. Interestingly, soluble expression levels of the three proteins obtained by IF-STT were only 1.2-fold lower than those obtained by RSM. Although RSM is probably the best approach for optimization of biological processes, the IF-STT is faster, it examines the most important factors (bacterial strain, temperature and time) influencing protein soluble expression in a single experiment, and can be used in any recombinant protein expression project as a starting point. PMID:26721705
Optimization of rifamycin B fermentation in shake flasks via a machine-learning-based approach.
Bapat, Prashant M; Wangikar, Pramod P
2004-04-20
Rifamycin B is an important polyketide antibiotic used in the treatment of tuberculosis and leprosy. We present results on medium optimization for Rifamycin B production via a barbital insensitive mutant strain of Amycolatopsis mediterranei S699. Machine-learning approaches such as Genetic algorithm (GA), Neighborhood analysis (NA) and Decision Tree technique (DT) were explored for optimizing the medium composition. Genetic algorithm was applied as a global search algorithm while NA was used for a guided local search and to develop medium predictors. The fermentation medium for Rifamycin B consisted of nine components. A large number of distinct medium compositions are possible by variation of concentration of each component. This presents a large combinatorial search space. Optimization was achieved within five generations via GA as well as NA. These five generations consisted of 178 shake-flask experiments, which is a small fraction of the search space. We detected multiple optima in the form of 11 distinct medium combinations. These medium combinations provided over 600% improvement in Rifamycin B productivity. Genetic algorithm performed better in optimizing fermentation medium as compared to NA. The Decision Tree technique revealed the media-media interactions qualitatively in the form of sets of rules for medium composition that give high as well as low productivity. PMID:15052640
On a New Optimization Approach for the Hydroforming of Defects-Free Tubular Metallic Parts
NASA Astrophysics Data System (ADS)
Caseiro, J. F.; Valente, R. A. F.; Andrade-Campos, A.; Jorge, R. M. Natal
2011-05-01
In the hydroforming of tubular metallic components, process parameters (internal pressure, axial feed and counter-punch position) must be carefully set in order to avoid defects in the final part. If, on one hand, excessive pressure may lead to thinning and bursting during forming, on the other hand insufficient pressure may lead to an inadequate filling of the die. Similarly, an excessive axial feeding may lead to the formation of wrinkles, whilst an inadequate one may cause thinning and, consequentially, bursting. These apparently contradictory targets are virtually impossible to achieve without trial-and-error procedures in industry, unless optimization approaches are formulated and implemented for complex parts. In this sense, an optimization algorithm based on differentialevolutionary techniques is presented here, capable of being applied in the determination of the adequate process parameters for the hydroforming of metallic tubular components of complex geometries. The Hybrid Differential Evolution Particle Swarm Optimization (HDEPSO) algorithm, combining the advantages of a number of well-known distinct optimization strategies, acts along with a general purpose implicit finite element software, and is based on the definition of a wrinkling and thinning indicators. If defects are detected, the algorithm automatically corrects the process parameters and new numerical simulations are performed in real time. In the end, the algorithm proved to be robust and computationally cost-effective, thus providing a valid design tool for the conformation of defects-free components in industry [1].
NASA Astrophysics Data System (ADS)
Ganesh, V.; Dongare, Rameshwar K.; Balanarayan, P.; Gadre, Shridhar R.
2006-09-01
A linear-scaling scheme for estimating the electronic energy, gradients, and Hessian of a large molecule at ab initio level of theory based on fragment set cardinality is presented. With this proposition, a general, cardinality-guided molecular tailoring approach (CG-MTA) for ab initio geometry optimization of large molecules is implemented. The method employs energy gradients extracted from fragment wave functions, enabling computations otherwise impractical on PC hardware. Further, the method is readily amenable to large scale coarse-grain parallelization with minimal communication among nodes, resulting in a near-linear speedup. CG-MTA is applied for density-functional-theory-based geometry optimization of a variety of molecules including α-tocopherol, taxol, γ-cyclodextrin, and two conformations of polyglycine. In the tests performed, energy and gradient estimates obtained from CG-MTA during optimization runs show an excellent agreement with those obtained from actual computation. Accuracy of the Hessian obtained employing CG-MTA provides good hope for the application of Hessian-based geometry optimization to large molecules.
New Approaches to HSCT Multidisciplinary Design and Optimization
NASA Technical Reports Server (NTRS)
Schrage, Daniel P.; Craig, James I.; Fulton, Robert E.; Mistree, Farrokh
1999-01-01
New approaches to MDO have been developed and demonstrated during this project on a particularly challenging aeronautics problem- HSCT Aeroelastic Wing Design. To tackle this problem required the integration of resources and collaboration from three Georgia Tech laboratories: ASDL, SDL, and PPRL, along with close coordination and participation from industry. Its success can also be contributed to the close interaction and involvement of fellows from the NASA Multidisciplinary Analysis and Optimization (MAO) program, which was going on in parallel, and provided additional resources to work the very complex, multidisciplinary problem, along with the methods being developed. The development of the Integrated Design Engineering Simulator (IDES) and its initial demonstration is a necessary first step in transitioning the methods and tools developed to larger industrial sized problems of interest. It also provides a framework for the implementation and demonstration of the methodology. Attachment: Appendix A - List of publications. Appendix B - Year 1 report. Appendix C - Year 2 report. Appendix D - Year 3 report. Appendix E - accompanying CDROM.
Optimization of convective fin systems: a holistic approach
NASA Astrophysics Data System (ADS)
Sasikumar, M.; Balaji, C.
A numerical analysis of natural convection heat transfer and entropy generation from an array of vertical fins, standing on a horizontal duct, with turbulent fluid flow inside, has been carried out. The analysis takes into account the variation of base temperature along the duct, traditionally ignored by most studies on such problems. One-dimensional fin equation is solved using a second order finite difference scheme for each of the fins in the system and this, in conjunction with the use of turbulent flow correlations for duct, is used to obtain the temperature distribution along the duct. The influence of the geometric and thermal parameters, which are normally employed in the design of a thermal system, has been studied. Correlations are developed for (i) the total heat transfer rate per unit mass of the fin system (ii) total entropy generation rate and (iii) fin height, as a function of the geometric parameters of the fin system. Optimal dimensions of the fin system for (i) maximum heat transfer rate per unit mass and (ii) minimum total entropy generation rate are obtained using Genetic Algorithm. As expected, these optima do not match. An approach to a `holistic' design that takes into account both these criteria has also been presented.
Improved Broadband Liner Optimization Applied to the Advanced Noise Control Fan
NASA Technical Reports Server (NTRS)
Nark, Douglas M.; Jones, Michael G.; Sutliff, Daniel L.; Ayle, Earl; Ichihashi, Fumitaka
2014-01-01
The broadband component of fan noise has grown in relevance with the utilization of increased bypass ratio and advanced fan designs. Thus, while the attenuation of fan tones remains paramount, the ability to simultaneously reduce broadband fan noise levels has become more desirable. This paper describes improvements to a previously established broadband acoustic liner optimization process using the Advanced Noise Control Fan rig as a demonstrator. Specifically, in-duct attenuation predictions with a statistical source model are used to obtain optimum impedance spectra over the conditions of interest. The predicted optimum impedance information is then used with acoustic liner modeling tools to design liners aimed at producing impedance spectra that most closely match the predicted optimum values. Design selection is based on an acceptance criterion that provides the ability to apply increased weighting to specific frequencies and/or operating conditions. Constant-depth, double-degree of freedom and variable-depth, multi-degree of freedom designs are carried through design, fabrication, and testing to validate the efficacy of the design process. Results illustrate the value of the design process in concurrently evaluating the relative costs/benefits of these liner designs. This study also provides an application for demonstrating the integrated use of duct acoustic propagation/radiation and liner modeling tools in the design and evaluation of novel broadband liner concepts for complex engine configurations.
Finite-basis correction applied to the optimized effective potential within the FLAPW method
NASA Astrophysics Data System (ADS)
Friedrich, Christoph; Betzinger, Markus; Blügel, Stefan
2011-03-01
The optimized-effective-potential (OEP) method is a special technique to construct local exchange-correlation (xc) potentials from general orbital-dependent xc energy functionals for density-functional theory. Recently, we showed that particular care must be taken to construct local potentials within the all-electron full-potential augmented-plane-wave (FLAPW) approach. In fact, we found that the LAPW basis had to be converged to an accuracy that was far beyond that in calculations using conventional functionals, leading to a very high computational cost. This could be traced back to the convergence behavior of the density response function: only a highly converged basis lends the density enough flexibility to react adequately to changes of the potential. In this work we derive a numerical correction for the response function, which vanishes in the limit of an infinite, complete basis. It is constructed in the atomic spheres from the response of the basis functions themselves to changes of the potential. We show that such a finite-basis correction reduces the computational demand of OEP calculations considerably. We also discuss a similar correction scheme for GW calculations.
Loewe, Axel; Wilhelms, Mathias; Schmid, Jochen; Krause, Mathias J.; Fischer, Fathima; Thomas, Dierk; Scholz, Eberhard P.; Dössel, Olaf; Seemann, Gunnar
2016-01-01
Computational models of cardiac electrophysiology provided insights into arrhythmogenesis and paved the way toward tailored therapies in the last years. To fully leverage in silico models in future research, these models need to be adapted to reflect pathologies, genetic alterations, or pharmacological effects, however. A common approach is to leave the structure of established models unaltered and estimate the values of a set of parameters. Today’s high-throughput patch clamp data acquisition methods require robust, unsupervised algorithms that estimate parameters both accurately and reliably. In this work, two classes of optimization approaches are evaluated: gradient-based trust-region-reflective and derivative-free particle swarm algorithms. Using synthetic input data and different ion current formulations from the Courtemanche et al. electrophysiological model of human atrial myocytes, we show that neither of the two schemes alone succeeds to meet all requirements. Sequential combination of the two algorithms did improve the performance to some extent but not satisfactorily. Thus, we propose a novel hybrid approach coupling the two algorithms in each iteration. This hybrid approach yielded very accurate estimates with minimal dependency on the initial guess using synthetic input data for which a ground truth parameter set exists. When applied to measured data, the hybrid approach yielded the best fit, again with minimal variation. Using the proposed algorithm, a single run is sufficient to estimate the parameters. The degree of superiority over the other investigated algorithms in terms of accuracy and robustness depended on the type of current. In contrast to the non-hybrid approaches, the proposed method proved to be optimal for data of arbitrary signal to noise ratio. The hybrid algorithm proposed in this work provides an important tool to integrate experimental data into computational models both accurately and robustly allowing to assess the often non
NASA Astrophysics Data System (ADS)
Li, Ru; Yue, Yuemin
2015-08-01
The high spatial heterogeneity forms a major uncertainty in accurately monitoring of vegetation coverage. In this study, an optimal zoning approach with dividing the whole heterogeneous image into relatively homogeneously segments was proposed to reduce the effects of high heterogeneity on vegetation coverage estimation. With the combination of the spectral similarity of the adjacent pixels and spatial autocorrelation of the segments, the optimal zoning approach accounted for the intrasegment uniformity and intersegment disparity of improved image segmentation. In comparison, vegetation coverage in the highly heterogeneous karst environments tended to be underestimated by the normalized difference vegetation index (NDVI) and overestimated by the normalized difference vegetation index-spectral mixture analysis (NDVI-SMA) model. Hence, when applying remote sensing for highly heterogeneous environments, the influence of high heterogeneity should not be ignored. Our study indicates that the proposed model, using NDVI-SMA model with improved segmentation, is found to ameliorate the effects of the highly heterogeneous environments on the extraction of vegetation coverage from hyperspectral imagery. The proposed approach is useful for obtaining accurate estimations of vegetation coverage in not only karst environments but also other environments with high heterogeneity.
Larson, Kyle B.; Tagestad, Jerry D.; Perkins, Casey J.; Oster, Matthew R.; Warwick, M.; Geerlofs, Simon H.
2015-09-01
This study was conducted with the support of the U.S. Department of Energy’s (DOE’s) Wind and Water Power Technologies Office (WWPTO) as part of ongoing efforts to minimize key risks and reduce the cost and time associated with permitting and deploying ocean renewable energy. The focus of the study was to discuss a possible approach to exploring scenarios for ocean renewable energy development in Hawaii that attempts to optimize future development based on technical, economic, and policy criteria. The goal of the study was not to identify potentially suitable or feasible locations for development, but to discuss how such an approach may be developed for a given offshore area. Hawaii was selected for this case study due to the complex nature of the energy climate there and DOE’s ongoing involvement to support marine spatial planning for the West Coast. Primary objectives of the study included 1) discussing the political and economic context for ocean renewable energy development in Hawaii, especially with respect to how inter-island transmission may affect the future of renewable energy development in Hawaii; 2) applying a Geographic Information System (GIS) approach that has been used to assess the technical suitability of offshore renewable energy technologies in Washington, Oregon, and California, to Hawaii’s offshore environment; and 3) formulate a mathematical model for exploring scenarios for ocean renewable energy development in Hawaii that seeks to optimize technical and economic suitability within the context of Hawaii’s existing energy policy and planning.
NASA Astrophysics Data System (ADS)
Abed, Azher M.; Abed, Issa Ahmed; Majdi, Hasan Sh.; Al-Shamani, Ali Najah; Sopian, K.
2016-02-01
This study proposes a new procedure for optimal design of shell and tube heat exchangers. The electromagnetism-like algorithm is applied to save on heat exchanger capital cost and designing a compact, high performance heat exchanger with effective use of the allowable pressure drop (cost of the pump). An optimization algorithm is then utilized to determine the optimal values of both geometric design parameters and maximum allowable pressure drop by pursuing the minimization of a total cost function. A computer code is developed for the optimal shell and tube heat exchangers. Different test cases are solved to demonstrate the effectiveness and ability of the proposed algorithm. Results are also compared with those obtained by other approaches available in the literature. The comparisons indicate that a proposed design procedure can be successfully applied in the optimal design of shell and tube heat exchangers. In particular, in the examined cases a reduction of total costs up to 30, 29, and 56.15 % compared with the original design and up to 18, 5.5 and 7.4 % compared with other approaches for case study 1, 2 and 3 respectively, are observed. In this work, economic optimization resulting from the proposed design procedure are relevant especially when the size/volume is critical for high performance and compact unit, moderate volume and cost are needed.