Improving Robot Locomotion Through Learning Methods for Expensive Black-Box Systems
2013-11-01
development of a class of "gradient-free" optimization techniques; these include local approaches, such as a Nelder-Mead simplex search (cf. [73]), and global approaches, such as the Non-dominated Sorting Genetic Algorithm. Note that this simple method differs from the Nelder-Mead constrained nonlinear optimization method [73].
NASA Astrophysics Data System (ADS)
Vaz, Miguel; Luersen, Marco A.; Muñoz-Rojas, Pablo A.; Trentin, Robson G.
2016-04-01
Application of optimization techniques to the identification of inelastic material parameters has increased substantially in recent years. The complex stress-strain paths and high nonlinearity typical of this class of problems require the development of robust and efficient inverse-problem techniques able to account for an irregular topography of the fitness surface. Within this framework, this work investigates the application of the gradient-based Sequential Quadratic Programming method, the Nelder-Mead downhill simplex algorithm, Particle Swarm Optimization (PSO), and a global-local PSO-Nelder-Mead hybrid scheme to the identification of inelastic parameters based on a deep drawing operation. The hybrid technique proved to be the best strategy, combining PSO's ability to approach the basin of attraction of the global minimum with the efficiency of the Nelder-Mead algorithm in locating the minimum itself.
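The global-local division of labor described above can be sketched generically: a plain PSO run locates the basin of the global minimum, and Nelder-Mead then polishes the best particle. The Rastrigin stand-in and the PSO constants below are illustrative assumptions, not the paper's actual objective or settings.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def rastrigin(x):
    # Multimodal stand-in for the expensive identification objective.
    x = np.asarray(x)
    return 10.0 * x.size + np.sum(x ** 2 - 10.0 * np.cos(2 * np.pi * x))

def pso(f, dim=2, n=30, iters=200, lo=-5.12, hi=5.12, w=0.7, c1=1.5, c2=1.5):
    # Plain global-best PSO: returns the best position found.
    x = rng.uniform(lo, hi, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()
    pval = np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(f, 1, x)
        improved = val < pval
        pbest[improved], pval[improved] = x[improved], val[improved]
        g = pbest[pval.argmin()].copy()
    return g

x0 = pso(rastrigin)                          # global phase: locate the basin
res = minimize(rastrigin, x0, method="Nelder-Mead",
               options={"xatol": 1e-10, "fatol": 1e-12})  # local polish
print(res.x, res.fun)
```

The local polish can only improve on the point the swarm hands over, which is the essential property of such global-local hybrids.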
2015-08-18
Genetic Algorithm ... Nelder-Mead Simplex ... Nelder-Mead optimizing readout frequency, power and time (Python controller, Arroyo TEC).
NASA Astrophysics Data System (ADS)
Sudhakar, N.; Rajasekar, N.; Akhil, Saya; Jyotheeswara Reddy, K.
2017-11-01
The boost converter is the most desirable DC-DC power converter for renewable energy applications because of its favorable continuous input current characteristics. On the other hand, these DC-DC converters, being practical nonlinear systems, are prone to several types of nonlinear phenomena including bifurcation, quasiperiodicity, intermittency and chaos. These undesirable effects have to be controlled to maintain normal periodic operation of the converter and to ensure its stability. This paper presents an effective solution to control chaos in a solar-fed DC-DC boost converter, since the converter experiences a wide range of input power variation that leads to chaotic phenomena. Control of chaos is achieved using optimal circuit parameters obtained through the Nelder-Mead Enhanced Bacterial Foraging Optimization Algorithm. The optimization yields suitable parameters in minimum computational time. The results are compared with traditional methods, and the obtained results ensure operation of the converter within the controllable region.
Pillai, Nikhil; Craig, Morgan; Dokoumetzidis, Aristeidis; Schwartz, Sorell L; Bies, Robert; Freedman, Immanuel
2018-06-19
In mathematical pharmacology, models are constructed to confer a robust method for optimizing treatment. The predictive capability of pharmacological models depends heavily on the ability to track the system and to accurately determine parameters with reference to the sensitivity in projected outcomes. To closely track chaotic systems, one may choose to apply chaos synchronization. An advantageous byproduct of this methodology is the ability to quantify model parameters. In this paper, we illustrate the use of chaos synchronization combined with Nelder-Mead search to estimate parameters of the well-known Kirschner-Panetta model of IL-2 immunotherapy from noisy data. Chaos synchronization with Nelder-Mead search is shown to provide more accurate and reliable estimates than Nelder-Mead search based on an extended least squares (ELS) objective function. Our results underline the strength of this approach to parameter estimation and provide a broader framework of parameter identification for nonlinear models in pharmacology. Copyright © 2018 Elsevier Ltd. All rights reserved.
Guided particle swarm optimization method to solve general nonlinear optimization problems
NASA Astrophysics Data System (ADS)
Abdelhalim, Alyaa; Nakata, Kazuhide; El-Alem, Mahmoud; Eltawil, Amr
2018-04-01
The development of hybrid algorithms is becoming an important topic in the global optimization research area. This article proposes a new technique in hybridizing the particle swarm optimization (PSO) algorithm and the Nelder-Mead (NM) simplex search algorithm to solve general nonlinear unconstrained optimization problems. Unlike traditional hybrid methods, the proposed method hybridizes the NM algorithm inside the PSO to improve the velocities and positions of the particles iteratively. The new hybridization considers the PSO algorithm and NM algorithm as one heuristic, not in a sequential or hierarchical manner. The NM algorithm is applied to improve the initial random solution of the PSO algorithm and iteratively in every step to improve the overall performance of the method. The performance of the proposed method was tested over 20 optimization test functions with varying dimensions. Comprehensive comparisons with other methods in the literature indicate that the proposed solution method is promising and competitive.
Ramadas, Gisela C V; Rocha, Ana Maria A C; Fernandes, Edite M G P
2015-01-01
This paper addresses the challenging task of computing multiple roots of a system of nonlinear equations. A repulsion algorithm that invokes the Nelder-Mead (N-M) local search method and uses a penalty-type merit function based on the error function, known as 'erf', is presented. In the N-M algorithm context, different strategies are proposed to enhance the quality of the solutions and improve the overall efficiency. The main goal of this paper is to use a two-level factorial design of experiments to analyze the statistical significance of the observed differences in selected performance criteria produced when testing different strategies in the N-M based repulsion algorithm.
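A repulsion-type multistart in this spirit can be sketched as follows: Nelder-Mead minimizes an 'erf'-based merit of the residuals, and a penalty bump is added around each root already found so that later searches are pushed elsewhere. The example system, the repulsion radius and the penalty weight are illustrative assumptions, not the paper's settings.

```python
import math
import numpy as np
from scipy.optimize import minimize

def F(x):
    # Example system: a circle of radius 2 intersected with the line y = x;
    # roots at (sqrt(2), sqrt(2)) and (-sqrt(2), -sqrt(2)).
    return np.array([x[0] ** 2 + x[1] ** 2 - 4.0, x[0] - x[1]])

def merit(x, found, beta=50.0, rho=0.5):
    m = sum(math.erf(abs(r)) for r in F(x))        # erf-based merit
    for z in found:                                # repulsion around known roots
        d = np.linalg.norm(np.asarray(x) - z)
        if d < rho:
            m += beta * (rho - d)
    return m

rng = np.random.default_rng(1)
roots = []
for _ in range(30):
    x0 = rng.uniform(-3.0, 3.0, 2)
    res = minimize(merit, x0, args=(roots,), method="Nelder-Mead",
                   options={"xatol": 1e-12, "fatol": 1e-12, "maxiter": 2000})
    # Accept only genuine, previously unseen roots.
    if np.linalg.norm(F(res.x)) < 1e-5 and all(
            np.linalg.norm(res.x - z) > 1e-3 for z in roots):
        roots.append(res.x)
print(len(roots), roots)
```

Because erf saturates for large residuals, far-away starts may stall on a merit plateau; those runs simply fail the residual check and are discarded, while the repulsion terms keep successful runs from rediscovering the same root.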
NASA Astrophysics Data System (ADS)
Rocha, Ana Maria A. C.; Costa, M. Fernanda P.; Fernandes, Edite M. G. P.
2016-12-01
This article presents a shifted hyperbolic penalty function and proposes an augmented Lagrangian-based algorithm for non-convex constrained global optimization problems. Convergence to an ε-global minimizer is proved. At each iteration k, the algorithm requires the εk-global minimization of a bound constrained optimization subproblem, where the sequence {εk} converges to ε. The subproblems are solved by a stochastic population-based metaheuristic that relies on the artificial fish swarm paradigm and a two-swarm strategy. To enhance the speed of convergence, the algorithm invokes the Nelder-Mead local search with a dynamically defined probability. Numerical experiments with benchmark functions and engineering design problems are presented. The results show that the proposed shifted hyperbolic augmented Lagrangian compares favorably with other deterministic and stochastic penalty-based methods.
Numerical simulations of detonation propagation in gaseous fuel-air mixtures
NASA Astrophysics Data System (ADS)
Honhar, Praveen; Kaplan, Carolyn; Houim, Ryan; Oran, Elaine
2017-11-01
Unsteady multidimensional numerical simulations of detonation propagation and survival in mixtures of fuel (hydrogen or methane) diluted with air were carried out with a fully compressible Navier-Stokes solver using a simplified chemical-diffusive model (CDM). The CDM was derived using a genetic algorithm combined with the Nelder-Mead optimization algorithm and reproduces physically correct laminar flame and detonation properties. The cases studied are overdriven detonations propagating through confined media, with or without gradients in composition. Results from the simulations confirm that the survival of the detonation depends on the channel height. In addition, the simulations show that the propagation of the detonation waves depends on the steepness of the composition gradients.
Directed Bee Colony Optimization Algorithm to Solve the Nurse Rostering Problem.
Rajeswari, M; Amudhavel, J; Pothula, Sujatha; Dhavachelvan, P
2017-01-01
The Nurse Rostering Problem (NRP) is an NP-hard combinatorial scheduling problem of assigning a set of nurses to shifts per day while considering both hard and soft constraints. A novel metaheuristic technique is required for solving it. This work proposes a metaheuristic called the Directed Bee Colony Optimization Algorithm using the Modified Nelder-Mead Method for solving the NRP. To solve the NRP, the authors used a multiobjective mathematical programming model and proposed a methodology for the adaptation of a Multiobjective Directed Bee Colony Optimization (MODBCO). MODBCO is used successfully for solving multiobjective scheduling problems. It integrates deterministic local search, a multiagent particle system environment, and the honey bee decision-making process. The performance of the algorithm is assessed using the standard INRC2010 dataset, which reflects many real-world cases varying in size and complexity. The experimental analysis uses statistical tools to show the uniqueness of the algorithm on the assessment criteria.
Automated optimization of water-water interaction parameters for a coarse-grained model.
Fogarty, Joseph C; Chiu, See-Wing; Kirby, Peter; Jakobsson, Eric; Pandit, Sagar A
2014-02-13
We have developed an automated parameter optimization software framework (ParOpt) that implements the Nelder-Mead simplex algorithm and applied it to a coarse-grained polarizable water model. The model employs a tabulated, modified Morse potential with decoupled short- and long-range interactions incorporating four water molecules per interaction site. Polarizability is introduced by the addition of a harmonic angle term defined among three charged points within each bead. The target function for parameter optimization was based on the experimental density, surface tension, electric field permittivity, and diffusion coefficient. The model was validated by comparison of statistical quantities with experimental observation. We found very good performance of the optimization procedure and good agreement of the model with experiment.
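The shape of such a multi-property target function can be sketched as a weighted sum of squared relative deviations of model-predicted properties from experiment, minimized with Nelder-Mead. The `predict` surrogate, the target values and the unit weights below are hypothetical stand-ins for the actual coarse-grained simulation and the paper's weighting.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative experimental targets (density, surface tension, diffusion).
TARGETS = {"density": 997.0, "surface_tension": 72.0, "diffusion": 2.3}
WEIGHTS = {"density": 1.0, "surface_tension": 1.0, "diffusion": 1.0}

def predict(theta):
    # Hypothetical smooth surrogate in place of running the water model.
    a, b = theta
    return {"density": 900.0 + 50.0 * a,
            "surface_tension": 60.0 + 8.0 * b,
            "diffusion": 1.0 + 0.5 * a + 0.4 * b}

def target_function(theta):
    # Weighted sum of squared relative deviations from experiment.
    p = predict(theta)
    return sum(WEIGHTS[k] * ((p[k] - TARGETS[k]) / TARGETS[k]) ** 2
               for k in TARGETS)

res = minimize(target_function, x0=[1.0, 1.0], method="Nelder-Mead",
               options={"xatol": 1e-10, "fatol": 1e-14})
print(res.x, res.fun)
```

With more properties than free parameters the residual does not vanish; the simplex search settles on the least-squares compromise among the conflicting targets, which mirrors the multi-observable fitting situation in the paper.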
NASA Astrophysics Data System (ADS)
Campo, Lorenzo; Castelli, Fabio; Caparrini, Francesca
2010-05-01
The modern distributed hydrological models allow the representation of the different surface and subsurface phenomena with great accuracy and high spatial and temporal resolution. Such complexity requires, in general, an equally accurate parametrization. A number of approaches have been followed in this respect, from simple local search methods (like the Nelder-Mead algorithm), which minimize a cost function representing some distance between the model's output and the available measurements, to more complex approaches like dynamic filters (such as the Ensemble Kalman Filter) that carry out assimilation of the observations. In this work the first approach was followed in order to compare the performances of three different direct search algorithms on the calibration of a distributed hydrological balance model. The direct search family can be defined as the category of algorithms that make no use of derivatives of the cost function (which is, in general, a black box), and it encompasses a large number of possible approaches. The main benefit of this class of methods is that they don't require changes in the implementation of the numerical codes to be calibrated. The first algorithm is the classical Nelder-Mead, often used in many applications and utilized here as the reference. The second is a GSS (Generating Set Search) algorithm, built to guarantee conditions of global convergence and suitable for the parallel, multi-start implementation presented here. The third is the EGO (Efficient Global Optimization) algorithm, which is particularly suitable for calibrating black-box cost functions whose evaluation requires expensive computational resources (like a hydrological simulation). EGO minimizes the number of evaluations of the cost function by balancing the need to minimize a response surface that approximates the problem against the need to improve the approximation by sampling where the prediction error may be high.
The hydrological model to be calibrated was MOBIDIC, a complete distributed balance model developed at the Department of Civil and Environmental Engineering of the University of Florence. A discussion comparing the effectiveness of the different algorithms on several case studies of basins in Central Italy is provided.
Likelihood-based confidence intervals for estimating floods with given return periods
NASA Astrophysics Data System (ADS)
Martins, Eduardo Sávio P. R.; Clarke, Robin T.
1993-06-01
This paper discusses aspects of the calculation of likelihood-based confidence intervals for T-year floods, with particular reference to (1) the two-parameter gamma distribution; (2) the Gumbel distribution; (3) the two-parameter log-normal distribution, and other distributions related to the normal by Box-Cox transformations. Calculation of the confidence limits is straightforward using the Nelder-Mead algorithm with a constraint incorporated, although care is necessary to ensure convergence either of the Nelder-Mead algorithm or of the Newton-Raphson calculation of maximum-likelihood estimates. The methods are illustrated using records from 18 gauging stations in the basin of the River Itajai-Acu, State of Santa Catarina, southern Brazil. A small and restricted simulation compared likelihood-based confidence limits with those given by use of the central limit theorem; for the same confidence probability, the likelihood-based confidence limits were wider than those of the central limit theorem, which failed more frequently to contain the true quantile being estimated. The paper discusses possible applications of likelihood-based confidence intervals in other areas of hydrological analysis.
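The fitting step underlying such intervals can be sketched for case (2): Nelder-Mead minimizes the negative Gumbel log-likelihood, and the T-year flood follows from the fitted parameters. The synthetic record below is an illustrative stand-in for a gauging-station series; the paper's likelihood-based confidence limits would then come from profiling this likelihood under a constraint on the quantile.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)
flows = rng.gumbel(loc=30.0, scale=5.0, size=2000)   # synthetic annual maxima

def gumbel_nll(theta, x):
    # Negative log-likelihood of the Gumbel (max) distribution.
    mu, beta = theta
    if beta <= 0:
        return np.inf                                # keep the simplex in-bounds
    z = (x - mu) / beta
    return x.size * np.log(beta) + np.sum(z + np.exp(-z))

# Moment-based starting point, then derivative-free ML estimation.
res = minimize(gumbel_nll, x0=[np.mean(flows), np.std(flows)],
               args=(flows,), method="Nelder-Mead")
mu_hat, beta_hat = res.x

T = 100
x_T = mu_hat - beta_hat * np.log(-np.log(1 - 1 / T))  # T-year flood quantile
print(mu_hat, beta_hat, x_T)
```

Returning `np.inf` for non-positive scale is a simple way to incorporate the constraint mentioned above without leaving the unconstrained Nelder-Mead framework.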
NASA Astrophysics Data System (ADS)
Roy Choudhury, Raja; Roy Choudhury, Arundhati; Kanti Ghose, Mrinal
2013-01-01
A semi-analytical model with three optimizing parameters and a novel non-Gaussian function as the fundamental modal field solution has been proposed to arrive at an accurate solution for predicting various propagation parameters of graded-index fibers with less computational burden than numerical methods. In our semi-analytical formulation, the core parameter U, whose objective landscape is usually uncertain, noisy or even discontinuous, is optimized by the Nelder-Mead method of nonlinear unconstrained minimization, an efficient and compact direct search method that needs no derivative information. Three optimizing parameters are included in the formulation of the fundamental modal field of an optical fiber to make it more flexible and accurate than other available approximations. Employing a variational technique, Petermann I and II spot sizes have been evaluated for triangular and trapezoidal-index fibers with the proposed fundamental modal field. It has been demonstrated that the results of the proposed solution match the numerical results over a wide range of normalized frequencies. This approximation can also be used in the study of doped and nonlinear fiber amplifiers.
Optimization of wastewater treatment plant operation for greenhouse gas mitigation.
Kim, Dongwook; Bowen, James D; Ozelkan, Ertunga C
2015-11-01
This study deals with the determination of optimal operation of a wastewater treatment system for minimizing greenhouse gas emissions, operating costs, and pollution loads in the effluent. To do this, an integrated performance index that includes three objectives was established to assess system performance. The ASMN_G model was used to perform system optimization aimed at determining a set of operational parameters that can satisfy three different objectives. The complex nonlinear optimization problem was simulated using the Nelder-Mead Simplex optimization algorithm. A sensitivity analysis was performed to identify influential operational parameters on system performance. The results obtained from the optimization simulations for six scenarios demonstrated that there are apparent trade-offs among the three conflicting objectives. The best optimized system simultaneously reduced greenhouse gas emissions by 31%, reduced operating cost by 11%, and improved effluent quality by 2% compared to the base case operation. Copyright © 2015 Elsevier Ltd. All rights reserved.
An evolutionary firefly algorithm for the estimation of nonlinear biological model parameters.
Abdullah, Afnizanfaizal; Deris, Safaai; Anwar, Sohail; Arjunan, Satya N V
2013-01-01
The development of accurate computational models of biological processes is fundamental to computational systems biology. These models are usually represented by mathematical expressions that rely heavily on the system parameters. The measurement of these parameters is often difficult. Therefore, they are commonly estimated by fitting the predicted model to the experimental data using optimization methods. The complexity and nonlinearity of the biological processes pose a significant challenge, however, to the development of accurate and fast optimization methods. We introduce a new hybrid optimization method incorporating the Firefly Algorithm and the evolutionary operation of the Differential Evolution method. The proposed method improves solutions by neighbourhood search using evolutionary procedures. Testing our method on models for the arginine catabolism and the negative feedback loop of the p53 signalling pathway, we found that it estimated the parameters with high accuracy and within a reasonable computation time compared to well-known approaches, including Particle Swarm Optimization, Nelder-Mead, and Firefly Algorithm. We have also verified the reliability of the parameters estimated by the method using an a posteriori practical identifiability test.
The Use of the Nelder-Mead Method in Determining Projection Parameters for Globe Photographs
NASA Astrophysics Data System (ADS)
Gede, M.
2009-04-01
A photo of a terrestrial or celestial globe can be handled as a map. The only hard issue is its projection: the so-called Tilted Perspective Projection, which, if the optical axis of the photo intersects the globe's centre, simplifies to the Vertical Near-Side Perspective Projection. When georeferencing such a photo, the exact parameters of the projection are also needed. These parameters depend on the position of the viewpoint of the camera. Several hundred globe photos had to be georeferenced during the Virtual Globes Museum project, which made it necessary to automate the calculation of the projection parameters. The author developed a program for this task which uses the Nelder-Mead method to find the optimum parameters when a set of control points is given as input. The Nelder-Mead method is a numerical algorithm for minimizing a function in a many-dimensional space. The function in the present application is the average error of the control points calculated from the current values of the parameters. The parameters are the geographical coordinates of the projection centre, the image coordinates of the same point, the rotation of the projection, the height of the perspective point and the scale of the photo (calculated in pixels/km). The program reads Global Mapper's Ground Control Point (.GCP) file format as input and creates projection description files (.PRJ) for the same software. The initial values of the geographical coordinates of the projection centre are calculated as the average of the control points, while the other parameters are set to experimental values which represent the most common circumstances of taking a globe photograph. The algorithm runs until the change of the parameters sinks below a pre-defined limit. The minimum search can be refined by using the previous result parameter set as new initial values. This paper introduces the calculation mechanism and examples of its usage.
Other possible uses of the method are also discussed.
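The parameter search described above can be sketched generically: Nelder-Mead minimizes the mean control-point error under the current parameter vector. For brevity the stand-in below uses a scale/rotation/offset model in place of the full Vertical Near-Side Perspective Projection, and all coordinates and noise levels are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def project(params, lonlat):
    # Hypothetical stand-in projection: scale, rotation and image offset.
    s, theta, tx, ty = params
    c, si = np.cos(theta), np.sin(theta)
    R = np.array([[c, -si], [si, c]])
    return s * lonlat @ R.T + np.array([tx, ty])

def mean_error(params, lonlat, pixels):
    # Average control-point error: the quantity the simplex search drives down.
    return np.mean(np.linalg.norm(project(params, lonlat) - pixels, axis=1))

rng = np.random.default_rng(7)
lonlat = rng.uniform(-60.0, 60.0, (12, 2))                  # control points
true_params = np.array([3.0, 0.2, 400.0, 300.0])
pixels = project(true_params, lonlat) + rng.normal(0.0, 0.3, (12, 2))

# Initial guesses mirror the recipe above: offsets from the average of the
# control points, the remaining parameters from typical "experimental" values.
x0 = [1.0, 0.1, pixels[:, 0].mean(), pixels[:, 1].mean()]
res = minimize(mean_error, x0, args=(lonlat, pixels), method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-8})
print(res.x, res.fun)
```

Once converged, the residual mean error sits near the noise floor of the control-point measurements, and the result vector can seed a refinement run exactly as the paper suggests.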
Dictionary Indexing of Electron Channeling Patterns.
Singh, Saransh; De Graef, Marc
2017-02-01
The dictionary-based approach to the indexing of diffraction patterns is applied to electron channeling patterns (ECPs). The main ingredients of the dictionary method are introduced, including the generalized forward projector (GFP), the relevant detector model, and a scheme to uniformly sample orientation space using the "cubochoric" representation. The GFP is used to compute an ECP "master" pattern. Derivative-free optimization algorithms, including the Nelder-Mead simplex and bound optimization by quadratic approximation (BOBYQA), are used to determine the correct detector parameters and to refine the orientation obtained from the dictionary approach. The indexing method is applied to poly-silicon and shows excellent agreement with the calibrated values. Finally, it is shown that the method results in a mean disorientation error of 1.0° with 0.5° SD for a range of detector parameters.
MONSS: A multi-objective nonlinear simplex search approach
NASA Astrophysics Data System (ADS)
Zapotecas-Martínez, Saúl; Coello Coello, Carlos A.
2016-01-01
This article presents a novel methodology for dealing with continuous box-constrained multi-objective optimization problems (MOPs). The proposed algorithm adopts a nonlinear simplex search scheme in order to obtain multiple elements of the Pareto optimal set. The search is directed by a well-distributed set of weight vectors, each of which defines a scalarization problem that is solved by deforming a simplex according to the movements described by Nelder and Mead's method. Considering an MOP with n decision variables, the simplex is constructed using n+1 solutions which minimize different scalarization problems defined by n+1 neighbor weight vectors. All solutions found in the search are used to update a set of solutions considered to be the minima for each separate problem. In this way, the proposed algorithm collectively obtains multiple trade-offs among the different conflicting objectives, while maintaining a proper representation of the Pareto optimal front. In this article, it is shown that a well-designed strategy using just mathematical programming techniques can be competitive with respect to the state-of-the-art multi-objective evolutionary algorithms against which it was compared.
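The decomposition idea can be sketched on Schaffer's classic bi-objective problem: each weight vector defines a Tchebycheff scalarization that a separate Nelder-Mead run minimizes, and together the minimizers trace the Pareto front. MONSS itself constructs and deforms simplices over neighboring weight vectors; this sketch runs independent searches purely to illustrate the scalarization step.

```python
import numpy as np
from scipy.optimize import minimize

def objectives(x):
    # Schaffer problem (n = 1): Pareto-optimal set is x in [0, 2].
    x = float(np.atleast_1d(x)[0])
    return np.array([x ** 2, (x - 2.0) ** 2])

def tchebycheff(x, w, z_star=np.zeros(2)):
    # Weighted Tchebycheff scalarization with ideal point z*.
    return np.max(w * np.abs(objectives(x) - z_star))

front = []
for w1 in np.linspace(0.05, 0.95, 10):       # well-distributed weight vectors
    w = np.array([w1, 1.0 - w1])
    res = minimize(tchebycheff, x0=[1.0], args=(w,), method="Nelder-Mead")
    front.append(objectives(res.x))
print(front)
```

Every minimizer lands on the Pareto front (here characterized by sqrt(f1) + sqrt(f2) = 2), so sweeping the weights yields the set of trade-offs that MONSS maintains collectively.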
NASA Astrophysics Data System (ADS)
Sochi, Taha
2016-09-01
Several deterministic and stochastic multi-variable global optimization algorithms (Conjugate Gradient, Nelder-Mead, Quasi-Newton and a stochastic global search) are investigated in conjunction with the energy minimization principle to resolve the pressure and volumetric flow rate fields in single ducts and networks of interconnected ducts. The algorithms are tested with seven types of fluid: Newtonian, power law, Bingham, Herschel-Bulkley, Ellis, Ree-Eyring and Casson. The results obtained from all these algorithms for all these types of fluid agree very well with the analytically derived solutions obtained from the traditional methods based on the conservation principles and fluid constitutive relations. The results confirm and generalize the findings of our previous investigations that the energy minimization principle is at the heart of flow dynamics systems. The investigation also enriches the methods of computational fluid dynamics for solving the flow fields in tubes and networks for various types of Newtonian and non-Newtonian fluids.
NASA Astrophysics Data System (ADS)
Otake, Y.; Murphy, R. J.; Grupp, R. B.; Sato, Y.; Taylor, R. H.; Armand, M.
2015-03-01
A robust atlas-to-subject registration using a statistical deformation model (SDM) is presented. The SDM uses statistics of voxel-wise displacement learned from pre-computed deformation vectors of a training dataset. This allows an atlas instance to be directly translated into an intensity volume and compared with a patient's intensity volume. Rigid and nonrigid transformation parameters were simultaneously optimized via the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), with image similarity used as the objective function. The algorithm was tested on CT volumes of the pelvis from 55 female subjects. A performance comparison of the CMA-ES and Nelder-Mead downhill simplex optimization algorithms with the mutual information and normalized cross correlation similarity metrics was conducted. Simulation studies using synthetic subjects were performed, as well as leave-one-out cross validation studies. Both studies suggested that mutual information and CMA-ES achieved the best performance. The leave-one-out test demonstrated 4.13 mm error with respect to the true displacement field, and 26,102 function evaluations in 180 seconds, on average.
Error analysis and correction of lever-type stylus profilometer based on Nelder-Mead Simplex method
NASA Astrophysics Data System (ADS)
Hu, Chunbing; Chang, Suping; Li, Bo; Wang, Junwei; Zhang, Zhongyu
2017-10-01
Due to its high measurement accuracy and wide range of applications, lever-type stylus profilometry is commonly used in industrial research areas. However, the error caused by the lever structure has a great influence on the profile measurement; this paper therefore analyzes the error of high-precision, large-range lever-type stylus profilometry. The errors are corrected by the Nelder-Mead simplex method, and the results are verified by spherical surface calibration. The results show that this method can effectively reduce the measurement error and improve the accuracy of the stylus profilometry in large-scale measurement.
Numerical Optimization Strategy for Determining 3D Flow Fields in Microfluidics
NASA Astrophysics Data System (ADS)
Eden, Alex; Sigurdson, Marin; Mezic, Igor; Meinhart, Carl
2015-11-01
We present a hybrid experimental-numerical method for generating 3D flow fields from 2D PIV experimental data. An optimization algorithm is applied to a theory-based simulation of an alternating current electrothermal (ACET) micromixer in conjunction with 2D PIV data to generate an improved representation of 3D steady state flow conditions. These results can be used to investigate mixing phenomena. Experimental conditions were simulated using COMSOL Multiphysics to solve the temperature and velocity fields, as well as the quasi-static electric fields. The governing equations were based on a theoretical model for ac electrothermal flows. A Nelder-Mead optimization algorithm was used to achieve a better fit by minimizing the error between 2D PIV experimental velocity data and numerical simulation results at the measurement plane. By applying this hybrid method, the normalized RMS velocity error between the simulation and experimental results was reduced by more than an order of magnitude. The optimization algorithm altered 3D fluid circulation patterns considerably, providing a more accurate representation of the 3D experimental flow field. This method can be generalized to a wide variety of flow problems. This research was supported by the Institute for Collaborative Biotechnologies through grant W911NF-09-0001 from the U.S. Army Research Office.
Li, Ji; Hu, Guoqing; Zhou, Yonghong; Zou, Chong; Peng, Wei; Alam SM, Jahangir
2017-01-01
As a high performance-cost ratio solution for differential pressure measurement, piezo-resistive differential pressure sensors are widely used in engineering processes. However, their performance is severely affected by the environmental temperature and the static pressure applied to them. In order to correct the non-linear measuring characteristics of the piezo-resistive differential pressure sensor, compensation actions should consider both aspects together. Advantages such as nonlinear approximation capability, highly desirable generalization ability and computational efficiency make the kernel extreme learning machine (KELM) a practical approach for this critical task. Since the KELM model is intrinsically sensitive to the regularization parameter and the kernel parameter, a searching scheme combining the coupled simulated annealing (CSA) algorithm and the Nelder-Mead simplex algorithm is adopted to find an optimal KELM parameter set. A calibration experiment at different working pressure levels was conducted within the temperature range to assess the proposed method. In comparison with other compensation models such as the back-propagation neural network (BP), radial basis function neural network (RBF), particle swarm optimization optimized support vector machine (PSO-SVM), particle swarm optimization optimized least squares support vector machine (PSO-LSSVM) and extreme learning machine (ELM), the compensation results show that the presented compensation algorithm exhibits a more satisfactory performance with respect to temperature compensation and synthetic compensation problems. PMID:28422080
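The two hyper-parameters being searched play the same role as those of kernel ridge regression, so the Nelder-Mead step of the scheme can be sketched with a small numpy implementation: the simplex tunes (log C, log gamma) against held-out error. The data, the split and the single fixed start are illustrative assumptions; the paper couples this local step with coupled simulated annealing rather than running it alone.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
X = rng.uniform(-3.0, 3.0, (120, 1))
y = np.sin(X[:, 0]) + rng.normal(0.0, 0.05, 120)
Xtr, ytr, Xva, yva = X[:80], y[:80], X[80:], y[80:]

def rbf(A, B, gamma):
    # Gaussian kernel matrix between row sets A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def val_error(theta):
    # Held-out MSE of a kernel ridge fit for hyper-parameters exp(theta).
    logC, loggamma = theta
    C, gamma = np.exp(logC), np.exp(loggamma)
    K = rbf(Xtr, Xtr, gamma)
    alpha = np.linalg.solve(K + np.eye(len(Xtr)) / C, ytr)   # ridge fit
    pred = rbf(Xva, Xtr, gamma) @ alpha
    return np.mean((pred - yva) ** 2)

res = minimize(val_error, x0=[0.0, 0.0], method="Nelder-Mead")
print(np.exp(res.x), res.fun)
```

Searching in log-space keeps both parameters positive without explicit constraints, which is a common trick when handing regularization and kernel widths to an unconstrained simplex search.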
NASA Astrophysics Data System (ADS)
Joung, InSuk; Kim, Jong Yun; Gross, Steven P.; Joo, Keehyoung; Lee, Jooyoung
2018-02-01
Many problems in science and engineering can be formulated as optimization problems. One way to solve them is to develop tailored, problem-specific approaches. As such development is challenging, an alternative is to develop good generally-applicable algorithms, which are easy to apply, typically function robustly, and reduce development time. Here we provide a description of one such algorithm, Conformational Space Annealing (CSA), along with its Python version, PyCSA. We previously applied it to many optimization problems including protein structure prediction and graph community detection. To demonstrate its utility, we have applied PyCSA to two continuous test functions, namely the Ackley and Eggholder functions. In addition, to show that PyCSA generalizes to any type of objective function, we demonstrate how it can be applied to a discrete objective function, namely a parameter optimization problem. Based on the benchmarking results for the three problems, the performance of CSA is shown to be better than or similar to that of the most popular optimization method, simulated annealing. For continuous objective functions we found that L-BFGS-B was the best performing local optimization method, while for the discrete objective function Nelder-Mead was the best. The current version of PyCSA can be run in parallel at the coarse-grained level by carrying out multiple independent local optimizations separately. The source code of PyCSA is available from http://lee.kias.re.kr.
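To give a flavor of the continuous benchmark mentioned above, the sketch below runs a single Nelder-Mead local search on the 2-D Ackley function via SciPy. It is an illustrative stand-in, not PyCSA itself: the starting point is arbitrary, and CSA would manage an annealed population of many such local searches.

```python
import numpy as np
from scipy.optimize import minimize

def ackley(x):
    # 2-D Ackley test function; global minimum f(0, 0) = 0, surrounded
    # by many shallow local minima that trap purely local searches.
    x = np.asarray(x, dtype=float)
    a, b, c = 20.0, 0.2, 2.0 * np.pi
    n = x.size
    return (-a * np.exp(-b * np.sqrt((x ** 2).sum() / n))
            - np.exp(np.cos(c * x).sum() / n) + a + np.e)

# One Nelder-Mead local search from an arbitrary nearby start point.
res = minimize(ackley, x0=[0.5, -0.3], method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-8})
print(res.x, res.fun)
```

A single run like this only finds the local minimum of whichever basin the start point lies in, which is exactly why a global wrapper such as CSA is needed on multimodal functions.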
Simulations of Flame Acceleration and DDT in Mixture Composition Gradients
NASA Astrophysics Data System (ADS)
Zheng, Weilin; Kaplan, Carolyn; Houim, Ryan; Oran, Elaine
2017-11-01
Unsteady, multidimensional, fully compressible numerical simulations of methane-air in an obstructed channel with spatial gradients in equivalence ratio have been carried out to determine the effects of the gradients on flame acceleration and transition to detonation. Results for gradients perpendicular to the propagation direction are considered here. A calibrated, optimized chemical-diffusive model that reproduces correct flame and detonation properties for methane-air over a range of equivalence ratios was derived by combining a genetic algorithm with a Nelder-Mead optimization scheme. Inhomogeneous mixtures of methane-air resulted in slower flame acceleration and a longer distance to DDT. Detonations were more likely to decouple into a flame and a shock under sharper concentration gradients. Detailed analyses of temperature and equivalence ratio showed that vertical gradients can greatly affect the formation of the hot spots that initiate detonation by changing the strength of the leading shock wave and the local equivalence ratio near the base of obstacles. This work is supported by the Alpha Foundation (Grant No. AFC215-20).
Optimization of Angular-Momentum Biases of Reaction Wheels
NASA Technical Reports Server (NTRS)
Lee, Clifford; Lee, Allan
2008-01-01
RBOT [RWA Bias Optimization Tool (wherein RWA signifies Reaction Wheel Assembly)] is a computer program for computing angular-momentum biases for the reaction wheels used to point a spacecraft in the various directions required for scientific observations. RBOT is currently deployed to support the Cassini mission, preventing operation of the reaction wheels at unsafely high speeds while minimizing time spent in the undesirable low-speed range, where elasto-hydrodynamic lubrication films in the bearings become ineffective, leading to premature bearing failure. The problem is formulated as a constrained optimization in which the maximum wheel speed is imposed as a hard constraint and a cost functional penalizes operation below a low-speed threshold, increasing as speed decreases. The optimization problem is solved using a parametric search routine known as the Nelder-Mead simplex algorithm. To increase computational efficiency for extended operation involving large quantities of data, the algorithm is designed to (1) use large time increments during intervals when spacecraft attitudes or rates of rotation are nearly stationary, (2) use sinusoidal-approximation sampling to model repeated long periods of Earth-point rolling maneuvers, reducing computational load, and (3) utilize an efficient equation to obtain wheel-rate profiles as functions of initial wheel biases, based on conservation of angular momentum (in an inertial frame) using pre-computed terms.
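A toy version of the bias search described above can be written as a penalized Nelder-Mead problem. This is a sketch under invented numbers, not RBOT: three wheels, made-up momentum histories, a large penalty enforcing the hard speed limit, and a soft cost counting time spent in the low-speed band.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical per-wheel momentum histories demanded by attitude control
# over one maneuver; three wheels, invented amplitudes (arbitrary units).
t = np.linspace(0.0, 1.0, 200)
h = np.vstack([300 * np.sin(2 * np.pi * t),
               200 * np.cos(2 * np.pi * t),
               100 * np.sin(4 * np.pi * t)])

W_MAX, W_LOW = 900.0, 250.0  # hard speed limit / undesirable low-speed threshold

def cost(bias):
    # Wheel speeds that follow from the chosen initial biases.
    w = np.abs(bias[:, None] + h)
    hard = 1e6 * np.clip(w - W_MAX, 0.0, None).sum()  # penalty: never exceed W_MAX
    soft = np.clip(W_LOW - w, 0.0, None).sum()        # cost: time spent too slow
    return hard + soft

res = minimize(cost, x0=np.array([400.0, 400.0, 400.0]), method="Nelder-Mead")
print(res.x, res.fun)
```

The penalty weight turns the hard constraint into a steep wall on the cost surface, which Nelder-Mead handles without needing gradients; the soft term then drives the biases out of the low-speed band.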
Determining Spacecraft Reaction Wheel Friction Parameters
NASA Technical Reports Server (NTRS)
Sarani, Siamak
2009-01-01
Software was developed to characterize the drag in each of the Cassini spacecraft's Reaction Wheel Assemblies (RWAs) and so determine the RWA friction parameters. The tool measures the drag torque of the RWAs not only at high spin rates (greater than 250 RPM), but also at low spin rates (less than 250 RPM), where the elastohydrodynamic boundary layer in the bearings is absent. RWA rate and drag-torque profiles as functions of time are collected via telemetry once every 4 seconds and once every 8 seconds, respectively. Intermediate processing steps single out the coast-down regions. A nonlinear model for the drag torque as a function of RWA spin rate is incorporated in order to characterize the low-spin-rate regime. The tool then uses a nonlinear parameter optimization algorithm based on the Nelder-Mead simplex method to determine the viscous coefficient, the Dahl friction, and the two parameters that account for the low-spin-rate behavior.
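A coast-down fit of this kind might look like the following sketch, using synthetic data and an assumed exponential low-rate term. It is not the actual Cassini model: the functional form, the constant Dahl-like offset (valid for positive spin rates only), and all parameter values are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def drag(w, c, td, a, b):
    # Drag-torque model: viscous term c*w, a constant Dahl-like offset td,
    # and a hypothetical low-rate term a*exp(-w/b) standing in for the loss
    # of the elastohydrodynamic film (all for positive spin rates w).
    return c * w + td + a * np.exp(-w / b)

# Synthetic "coast-down" data from invented true parameters plus noise.
w_data = np.linspace(5.0, 2000.0, 120)              # spin rate, RPM
true_p = (1e-4, 0.01, 0.02, 80.0)
tau_data = drag(w_data, *true_p) + 1e-4 * rng.standard_normal(w_data.size)

def sse(p):
    # Sum of squared residuals between modeled and measured drag torque.
    return float(((drag(w_data, *p) - tau_data) ** 2).sum())

res = minimize(sse, x0=[5e-5, 0.02, 0.01, 50.0], method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-10, "fatol": 1e-12})
print(res.x, res.fun)
```

Because the model is nonlinear in b, a derivative-free simplex search is a reasonable choice here; a gradient method would need either analytic derivatives or finite differences on noisy data.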
Optimization Methods in Sherpa
NASA Astrophysics Data System (ADS)
Siemiginowska, Aneta; Nguyen, Dan T.; Doe, Stephen M.; Refsdal, Brian L.
2009-09-01
Forward fitting is a standard technique used to model X-ray data. A statistic, usually weighted chi^2 or a Poisson likelihood (e.g. Cash), is minimized in the fitting process to obtain the best-fit model parameters. Astronomical models often have complex forms with many parameters that can be correlated (e.g. an absorbed power law). Minimization is not trivial in such a setting, as the statistical parameter space becomes multimodal and finding the global minimum is hard. Standard minimization algorithms can be found in many libraries of scientific functions, but they are usually tuned to specific classes of functions. Sherpa, however, designed as a general fitting and modeling application, requires very robust optimization methods that can be applied to a variety of astronomical data (X-ray spectra, images, timing, optical data, etc.). We developed several optimization algorithms in Sherpa targeting a wide range of minimization problems. Two local minimization methods were built: the Levenberg-Marquardt algorithm, obtained from the MINPACK subroutine LMDIF and modified to achieve the required robustness, and the Nelder-Mead simplex method, implemented in-house based on variations of the algorithm described in the literature. A global-search Monte Carlo method has been implemented following the differential evolution algorithm presented by Storn and Price (1997). We will present the methods in Sherpa and discuss their usage cases, focusing on the application to Chandra data with both 1D and 2D examples. This work is supported by NASA contract NAS8-03060 (CXC).
Detection of Fiber Layer-Up Lamination Order of CFRP Composite Using Thermal-Wave Radar Imaging
NASA Astrophysics Data System (ADS)
Wang, Fei; Liu, Junyan; Liu, Yang; Wang, Yang; Gong, Jinlong
2016-09-01
In this paper, thermal-wave radar imaging (TWRI) is used as a nondestructive inspection method to evaluate carbon-fiber-reinforced-polymer (CFRP) composites. An inverse methodology that combines TWRI with a numerical optimization technique is proposed to determine the fiber layer-up lamination sequence of an anisotropic CFRP composite. A 7-layer CFRP laminate [0°/45°/90°/0°]_s is heated by a chirp-modulated Gaussian laser beam, and the finite element method (FEM) is employed to calculate the temperature field of the laminate. Lock-in correlation between the reference chirp signal and the thermal-wave signal yields the TWRI phase image, and the least-squares method is applied to construct a cost function measuring the squared difference between the inspected and numerically calculated TWRI phases. A hybrid algorithm that combines simulated annealing with the Nelder-Mead simplex search method is employed to minimize this cost function and find the globally optimal layer-up sequence of the CFRP composite under the given discrete and constraint conditions. The results show the feasibility of estimating the fiber layer-up lamination sequence of a CFRP composite.
McGinitie, Teague M; Ebrahimi-Najafabadi, Heshmatollah; Harynuk, James J
2014-01-17
A new method for estimating the thermodynamic parameters ΔH(T0), ΔS(T0), and ΔCP for use in thermodynamic modeling of GC×GC separations has been developed. The method is an alternative to the traditional isothermal separations required to fit a three-parameter thermodynamic model to retention data. Herein, a non-linear optimization technique, the Nelder-Mead simplex algorithm, is used to estimate the parameters from a series of temperature-programmed separations. With this method, the time required to obtain estimates of the thermodynamic parameters for a series of analytes is significantly reduced. The new method allows precise predictions of retention time, with an average error of only 0.2 s for 1D separations. Predictions for GC×GC separations were also in agreement with experimental measurements, having an average relative error of 0.37% for (1)tr and 2.1% for (2)tr.
Agent-based station for on-line diagnostics by self-adaptive laser Doppler vibrometry
NASA Astrophysics Data System (ADS)
Serafini, S.; Paone, N.; Castellini, P.
2013-12-01
A self-adaptive diagnostic system based on laser vibrometry is proposed for quality control of mechanical defects by vibration testing; it is developed for appliances at the end of an assembly line, but its characteristics are generally suited to testing most types of electromechanical products. It consists of a laser Doppler vibrometer, equipped with scanning mirrors and a camera, which implements self-adaptive behaviour to optimize the measurement. The system is conceived as a Quality Control Agent (QCA) and is part of a Multi Agent System that supervises the whole production line. The QCA behaviour is defined so as to minimize measurement uncertainty during the on-line tests and to compensate for target mis-positioning under the guidance of a vision system. The best measurement conditions are reached by maximizing the amplitude of the optical Doppler beat signal (signal quality) and consequently minimizing uncertainty. In this paper, the optimization strategy for measurement enhancement achieved by the downhill simplex (Nelder-Mead) algorithm and its effect on signal quality are discussed. Tests on a washing machine under controlled operating conditions allow the efficacy of the method to be evaluated; a significant reduction of noise on vibration velocity spectra is observed. Results from on-line tests are presented, which demonstrate the potential of the system for industrial quality control.
Testing calibration routines for LISFLOOD, a distributed hydrological model
NASA Astrophysics Data System (ADS)
Pannemans, B.
2009-04-01
Traditionally, hydrological models are considered difficult to calibrate: their high non-linearity results in rugged and rough response surfaces where calibration algorithms easily get stuck in local minima. For the calibration of distributed hydrological models two extra factors play an important role: on the one hand they are often computationally costly, restricting the feasible number of model runs; on the other hand their distributed nature smooths the response surface, facilitating the search for a global minimum. Lisflood is a distributed hydrological model currently used for the European Flood Alert System - EFAS (Van der Knijff et al., 2008). Its upcoming recalibration over more than 200 catchments, each with an average runtime of 2-3 minutes, proved a perfect occasion to put several existing calibration algorithms to the test. The tested routines are Downhill Simplex (DHS, Nelder and Mead, 1965), SCEUA (Duan et al., 1993), SCEM (Vrugt et al., 2003) and AMALGAM (Vrugt et al., 2008); they were evaluated on their capability to converge efficiently onto the global minimum and on the spread of the solutions found in repeated runs. The routines were let loose on a simple hyperbolic function, on a Lisflood catchment using model output as observation, and on two Lisflood catchments using real observations (one on the river Inn in the Alps, the other along the downstream stretch of the Elbe). On the mathematical problem and on the catchment with synthetic observations, DHS proved the fastest and most efficient in finding a solution. SCEUA and AMALGAM are slower, but while SCEUA keeps converging on the exact solution, AMALGAM slows down after about 600 runs. For the Lisflood models with real observations, AMALGAM (a hybrid algorithm that combines several other algorithms; we used CMA, PSO and GA) came out of the tests as the fastest, giving comparable results in consecutive runs.
However, some more work is needed to tweak the stopping criteria. SCEUA is a bit slower, but has very transparent stopping rules. Both have closed in on the minima after about 600 runs. On convergence speed, DHS is equalled only by SCEUA, but the stopping criteria we have applied so far are too strict, causing it to stop too early. SCEM converges 5-6 times more slowly; this is a high price for the parameter uncertainty analysis that is done simultaneously. The ease with which all algorithms find the same optimum suggests that we are dealing with a smooth and relatively simple response surface. This leaves room for other deterministic calibration algorithms that are smarter than DHS at sliding downhill. PEST seems promising, but so far we haven't managed to get it running with LISFLOOD. • Duan, Q.; Gupta, V. & Sorooshian, S., 1993, Shuffled complex evolution approach for effective and efficient global minimization, J. Optim. Theory Appl., 76, 501-521 • Nelder, J. & Mead, R., 1965, A simplex method for function minimization, Comput. J., 7, 308-313 • Van der Knijff, J. M.; Younis, J. & De Roo, A. P. J., 2008, LISFLOOD: a GIS-based distributed model for river basin scale water balance and flood simulation, International Journal of Geographical Information Science • Vrugt, J.; Gupta, H.; Bouten, W. & Sorooshian, S., 2003, A Shuffled Complex Evolution Metropolis algorithm for optimization and uncertainty assessment of hydrologic model parameters, Water Resour. Res., 39 • Vrugt, J.; Robinson, B. & Hyman, J., 2008, Self-Adaptive Multimethod Search for Global Optimization in Real-Parameter Spaces, IEEE Trans. Evol. Comput.,
Optimization of design parameters of low-energy buildings
NASA Astrophysics Data System (ADS)
Vala, Jiří; Jarošová, Petra
2017-07-01
Evaluation of temperature development and of the related consumption of energy required for heating, air-conditioning, etc., in low-energy buildings requires a proper physical analysis covering heat conduction, convection and radiation, including the beam and diffuse components of solar radiation, on all building parts and interfaces. The system approach and the Fourier multiplicative decomposition, together with the finite element technique, offer the possibility of inexpensive and robust numerical and computational analysis of the corresponding direct problems, as well as of optimization problems with several design variables, using the Nelder-Mead simplex method. A practical example demonstrates the correlation between such numerical simulations and a time series of measurements of energy consumption for a small family house in Ostrov u Macochy (35 km north of Brno).
Optimization techniques for integrating spatial data
Herzfeld, U.C.; Merriam, D.F.
1995-01-01
Two optimization techniques to predict a spatial variable from any number of related spatial variables are presented. The applicability of the two methods to petroleum-resource assessment is tested in a mature oil province of the Midcontinent (USA). Information on petroleum productivity, usually not directly accessible, is related indirectly to geological, geophysical, petrographical, and other observable data. This paper presents two approaches based on constructing a multivariate spatial model from the available data to determine a relationship for prediction. In the first approach, the variables are combined into a spatial model by an algebraic map-comparison/integration technique. Optimal weights for the map-comparison function are determined by the Nelder-Mead downhill simplex algorithm in multiple dimensions. Geologic knowledge is necessary to provide a first guess of the weights to start the automated search, because the solution is not unique. In the second approach, active-set optimization for linear prediction of the target under positivity constraints is applied. Here, the procedure appears to select one variable from each data type (structural, isopachous, and petrophysical), eliminating data redundancy. Automating the determination of optimum combinations of different variables by applying optimization techniques is a valuable extension of the algebraic map-comparison/integration approach to analyzing spatial data. Because of their capability to handle multivariate data sets and to partially retain geographical information, the approaches can be useful in mineral-resource exploration. © 1995 International Association for Mathematical Geology.
4D Cone-beam CT reconstruction using a motion model based on principal component analysis
Staub, David; Docef, Alen; Brock, Robert S.; Vaman, Constantin; Murphy, Martin J.
2011-01-01
Purpose: To provide a proof of concept validation of a novel 4D cone-beam CT (4DCBCT) reconstruction algorithm and to determine the best methods to train and optimize the algorithm. Methods: The algorithm animates a patient fan-beam CT (FBCT) with a patient-specific parametric motion model in order to generate a time series of deformed CTs (the reconstructed 4DCBCT) that track the motion of the patient anatomy on a voxel-by-voxel scale. The motion model is constrained by requiring that projections cast through the deformed CT time series match the projections of the raw patient 4DCBCT. The motion model uses a basis of eigenvectors that are generated via principal component analysis (PCA) of a training set of displacement vector fields (DVFs) that approximate patient motion. The eigenvectors are weighted by a parameterized function of the patient breathing trace recorded during 4DCBCT. The algorithm is demonstrated and tested via numerical simulation. Results: The algorithm is shown to produce accurate reconstruction results for the most complicated simulated motion, in which voxels move with a pseudo-periodic pattern and relative phase shifts exist between voxels. The tests show that principal component eigenvectors trained on DVFs from a novel 2D/3D registration method give substantially better results than eigenvectors trained on DVFs obtained by conventionally registering 4DCBCT phases reconstructed via filtered backprojection. Conclusions: Proof of concept testing has validated the 4DCBCT reconstruction approach for the types of simulated data considered. In addition, the authors found the 2D/3D registration approach to be their best choice for generating the DVF training set, and the Nelder-Mead simplex algorithm the most robust optimization routine. PMID:22149852
Restless Tuneup of High-Fidelity Qubit Gates
NASA Astrophysics Data System (ADS)
Rol, M. A.; Bultink, C. C.; O'Brien, T. E.; de Jong, S. R.; Theis, L. S.; Fu, X.; Luthi, F.; Vermeulen, R. F. L.; de Sterke, J. C.; Bruno, A.; Deurloo, D.; Schouten, R. N.; Wilhelm, F. K.; DiCarlo, L.
2017-04-01
We present a tuneup protocol for qubit gates with tenfold speedup over traditional methods reliant on qubit initialization by energy relaxation. This speedup is achieved by constructing a cost function for Nelder-Mead optimization from real-time correlation of nondemolition measurements interleaving gate operations without pause. Applying the protocol on a transmon qubit achieves 0.999 average Clifford fidelity in one minute, as independently verified using randomized benchmarking and gate-set tomography. The adjustable sensitivity of the cost function allows the detection of fractional changes in the gate error with a nearly constant signal-to-noise ratio. The restless concept demonstrated can be readily extended to the tuneup of two-qubit gates and measurement operations.
Calibration of a dual-PTZ camera system for stereo vision
NASA Astrophysics Data System (ADS)
Chang, Yau-Zen; Hou, Jung-Fu; Tsao, Yi Hsiang; Lee, Shih-Tseng
2010-08-01
In this paper, we propose a calibration process for the intrinsic and extrinsic parameters of dual-PTZ camera systems. The calibration is based on a complete definition of six coordinate systems fixed at the image planes and at the pan and tilt rotation axes of the cameras. Misalignments between estimated and ideal coordinates of image corners are formed into cost values minimized by the Nelder-Mead simplex optimization method. Experimental results show that the system is able to obtain the 3D coordinates of objects with a consistent accuracy of 1 mm when the distance between the dual-PTZ camera set and the objects is between 0.9 and 1.1 meters.
Optimal design of a piezoelectric transducer for exciting guided wave ultrasound in rails
NASA Astrophysics Data System (ADS)
Ramatlo, Dineo A.; Wilke, Daniel N.; Loveday, Philip W.
2017-02-01
An existing Ultrasonic Broken Rail Detection System installed in South Africa on a heavy-duty railway line is currently being upgraded to include defect detection and location. To accomplish this, an ultrasonic piezoelectric transducer is required that strongly excites a guided wave mode with energy concentrated in the web of the rail (the web mode). A previous study demonstrated that the recently developed SAFE-3D (Semi-Analytical Finite Element - 3 Dimensional) method can effectively predict the guided waves excited by a resonant piezoelectric transducer. In this study, the SAFE-3D model is used in the design optimization of a rail web transducer. A bound-constrained optimization problem was formulated to maximize the energy transmitted by the transducer in the web mode when driven by a pre-defined excitation signal. Dimensions of the transducer components were selected as the three design variables. A Latin-hypercube-sampled design of experiments, requiring a total of 500 SAFE-3D analyses in the design space, was employed in a response-surface-based optimization approach. The Nelder-Mead optimization algorithm was then used to find an optimal transducer design on the constructed response surface. The radial basis function response surface was first verified by comparing a number of predicted responses against the computed SAFE-3D responses. The performance of the optimal transducer predicted by the optimization algorithm on the response surface was also verified to be sufficiently accurate using SAFE-3D. The computational advantages of SAFE-3D in optimal transducer design are noteworthy, as more than 500 analyses were performed. The optimal design was then manufactured, and experimental measurements were used to validate the predicted performance. The adopted design method has demonstrated the capability to automate the design of transducers for a particular rail cross-section and frequency range.
NASA Astrophysics Data System (ADS)
Silva, Guilherme Augusto Lopes da; Nicoletti, Rodrigo
2017-06-01
This work focuses on the placement of natural frequencies of beams to desired frequency regions. More specifically, we investigate the effects of combining mode shapes to shape a beam to change its natural frequencies, both numerically and experimentally. First, we present a parametric analysis of a shaped beam and we analyze the resultant effects for different boundary conditions and mode shapes. Second, we present an optimization procedure to find the optimum shape of the beam for desired natural frequencies. In this case, we adopt the Nelder-Mead simplex search method, which allows a broad search of the optimum shape in the solution domain. Finally, the obtained results are verified experimentally for a clamped-clamped beam in three different optimization runs. Results show that the method is effective in placing natural frequencies at desired values (experimental results lie within a 10% error to the expected theoretical ones). However, the beam must be axially constrained to have the natural frequencies changed.
Asteroid mass estimation using Markov-chain Monte Carlo
NASA Astrophysics Data System (ADS)
Siltala, Lauri; Granvik, Mikael
2017-11-01
Estimates for asteroid masses are based on their gravitational perturbations on the orbits of other objects such as Mars, spacecraft, or other asteroids and/or their satellites. In the case of asteroid-asteroid perturbations, this leads to an inverse problem in at least 13 dimensions where the aim is to derive the mass of the perturbing asteroid(s) and six orbital elements for both the perturbing asteroid(s) and the test asteroid(s) based on astrometric observations. We have developed and implemented three different mass estimation algorithms utilizing asteroid-asteroid perturbations: the very rough 'marching' approximation, in which the asteroids' orbital elements are not fitted, thereby reducing the problem to a one-dimensional estimation of the mass, an implementation of the Nelder-Mead simplex method, and most significantly, a Markov-chain Monte Carlo (MCMC) approach. We describe each of these algorithms with particular focus on the MCMC algorithm, and present example results using both synthetic and real data. Our results agree with the published mass estimates, but suggest that the published uncertainties may be misleading as a consequence of using linearized mass-estimation methods. Finally, we discuss remaining challenges with the algorithms as well as future plans.
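The MCMC idea reduces, in a deliberately simplified one-parameter form, to a random-walk Metropolis chain over the perturber mass. This is a toy stand-in: the quadratic "perturbation model" and all numbers below are invented for illustration and bear no relation to the 13-dimensional orbit-fitting problem the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in: the observed along-track displacement of a test asteroid
# is assumed to grow quadratically in time, scaled by the perturber mass m.
times = np.linspace(0.0, 10.0, 50)

def signal(m):
    return m * times ** 2

m_true, sigma = 3.0, 0.5
obs = signal(m_true) + sigma * rng.standard_normal(times.size)

def log_post(m):
    # Log-posterior: Gaussian likelihood with a flat prior on m > 0.
    if m <= 0.0:
        return -np.inf
    r = obs - signal(m)
    return -0.5 * float((r ** 2).sum()) / sigma ** 2

# Random-walk Metropolis chain over the single mass parameter.
m, lp = 1.0, log_post(1.0)
chain = []
for _ in range(5000):
    prop = m + 0.05 * rng.standard_normal()
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
        m, lp = prop, lp_prop
    chain.append(m)

post = np.array(chain[1000:])                  # discard burn-in samples
print(post.mean(), post.std())
```

Unlike a point estimate from Nelder-Mead, the retained samples give a full posterior for the mass, which is what lets the authors assess whether linearized uncertainty estimates are misleading.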
NASA Astrophysics Data System (ADS)
Ren, Jiyun; Menon, Geetha; Sloboda, Ron
2013-04-01
Although the Manchester system is still extensively used to prescribe dose in brachytherapy (BT) for locally advanced cervix cancer, many radiation oncology centers are transitioning to 3D image-guided BT, owing to the excellent anatomy definition offered by modern imaging modalities. As automatic dose optimization is highly desirable for 3D image-based BT, this study comparatively evaluates the performance of two optimization methods used in BT treatment planning—Nelder-Mead simplex (NMS) and simulated annealing (SA)—for a cervix BT computer simulation model incorporating a Manchester-style applicator. Eight model cases were constructed based on anatomical structure data (for high risk-clinical target volume (HR-CTV), bladder, rectum and sigmoid) obtained from measurements on fused MR-CT images for BT patients. D90 and V100 for HR-CTV, D2cc for organs at risk (OARs), dose to point A, conformation index and the sum of dwell times within the tandem and ovoids were calculated for optimized treatment plans designed to treat the HR-CTV in a highly conformal manner. Compared to the NMS algorithm, SA was found to be superior as it could perform optimization starting from a range of initial dwell times, while the performance of NMS was strongly dependent on their initial choice. SA-optimized plans also exhibited lower D2cc to OARs, especially the bladder and sigmoid, and reduced tandem dwell times. For cases with smaller HR-CTV having good separation from adjoining OARs, multiple SA-optimized solutions were found which differed markedly from each other and were associated with different choices for initial dwell times. Finally and importantly, the SA method yielded plans with lower dwell time variability compared with the NMS method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blaut, Arkadiusz; Babak, Stanislav; Krolak, Andrzej
We present the data analysis methods used in the detection and parameter estimation of gravitational-wave signals from white dwarf binaries in the mock LISA data challenge. Our main focus is on the analysis of challenge 3.1, where the gravitational-wave signals from more than 6×10^7 Galactic binaries were added to simulated Gaussian instrumental noise. The majority of the signals at low frequencies are not resolved individually. The confusion between the signals is strongly reduced at frequencies above 5 mHz. Our basic data analysis procedure is the maximum likelihood detection method. We filter the data through the template bank at the first step of the search, then refine the parameters using the Nelder-Mead algorithm, remove the strongest signal found, and repeat the procedure. We reliably detect, and accurately estimate the parameters of, more than ten thousand signals from white dwarf binaries.
A Simulation Optimization Approach to Epidemic Forecasting
Nsoesie, Elaine O.; Beckman, Richard J.; Shashaani, Sara; Nagaraj, Kalyani S.; Marathe, Madhav V.
2013-01-01
Reliable forecasts of influenza can aid in the control of both seasonal and pandemic outbreaks. We introduce a simulation optimization (SIMOP) approach for forecasting the influenza epidemic curve. This study represents the final step of a project aimed at using a combination of simulation, classification, statistical and optimization techniques to forecast the epidemic curve and infer underlying model parameters during an influenza outbreak. The SIMOP procedure combines an individual-based model and the Nelder-Mead simplex optimization method. The method is used to forecast epidemics simulated over synthetic social networks representing Montgomery County in Virginia, Miami, Seattle and surrounding metropolitan regions. The results are presented for the first four weeks. Depending on the synthetic network, the peak time could be predicted within a 95% CI as early as seven weeks before the actual peak. The peak infected and total infected were also accurately forecasted for Montgomery County in Virginia within the forecasting period. Forecasting of the epidemic curve for both seasonal and pandemic influenza outbreaks is a complex problem; however, this is a preliminary step, and the results suggest that more can be achieved in this area. PMID:23826222
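A toy analogue of the SIMOP idea can be written with a compartmental model standing in for the individual-based simulator (the parameter values and population size below are hypothetical): Nelder-Mead adjusts transmission and recovery rates until the simulated curve matches the observed one, and the fitted model then supplies the forecast.

```python
# Nelder-Mead calibration of a simple SIR model to an epidemic curve.
import numpy as np
from scipy.optimize import minimize

def sir_curve(beta, gamma, n_days=60, N=1e5, i0=10.0):
    """Daily infected counts from an Euler-stepped SIR model."""
    S, I, R = N - i0, i0, 0.0
    out = []
    for _ in range(n_days):
        new_inf = beta * S * I / N
        new_rec = gamma * I
        S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
        out.append(I)
    return np.array(out)

true_beta, true_gamma = 0.30, 0.10          # "unknown" ground truth
observed = sir_curve(true_beta, true_gamma)

def sse(params):
    beta, gamma = params
    if beta <= 0 or gamma <= 0:
        return 1e12                          # crude positivity guard
    return np.sum((sir_curve(beta, gamma) - observed) ** 2)

fit = minimize(sse, x0=[0.5, 0.2], method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-8, "maxiter": 2000})
print(fit.x)
```

In the paper's setting the inner simulation is a stochastic agent-based model over a synthetic social network rather than this deterministic SIR curve.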
Nazemi, S Majid; Amini, Morteza; Kontulainen, Saija A; Milner, Jaques S; Holdsworth, David W; Masri, Bassam A; Wilson, David R; Johnston, James D
2017-01-01
Quantitative computed tomography-based subject-specific finite element modeling has potential to clarify the role of subchondral bone alterations in knee osteoarthritis initiation, progression, and pain. However, it is unclear what density-modulus equation(s) should be applied with subchondral cortical and subchondral trabecular bone when constructing finite element models of the tibia. Using a novel approach applying neural networks, optimization, and back-calculation against in situ experimental testing results, the objective of this study was to identify subchondral-specific equations that optimized finite element predictions of local structural stiffness at the proximal tibial subchondral surface. Thirteen proximal tibial compartments were imaged via quantitative computed tomography. Imaged bone mineral density was converted to elastic moduli using multiple density-modulus equations (93 total variations) then mapped to corresponding finite element models. For each variation, root mean squared error was calculated between finite element prediction and in situ measured stiffness at 47 indentation sites. Resulting errors were used to train an artificial neural network, which provided an unlimited number of model variations, with corresponding error, for predicting stiffness at the subchondral bone surface. Nelder-Mead optimization was used to identify optimum density-modulus equations for predicting stiffness. Finite element modeling predicted 81% of experimental stiffness variance (with 10.5% error) using optimized equations for subchondral cortical and trabecular bone differentiated at a density of 0.5 g/cm³. In comparison with published density-modulus relationships, optimized equations offered improved predictions of local subchondral structural stiffness. Further research is needed with anisotropy inclusion, a smaller voxel size and de-blurring algorithms to improve predictions. Copyright © 2016 Elsevier Ltd. All rights reserved.
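The back-calculation step can be sketched with a power-law density-modulus relation of the common form E = a·ρ^b, recovered from synthetic stiffness data by Nelder-Mead; the coefficient values below are illustrative stand-ins, not the study's optimized equations.

```python
# Nelder-Mead recovery of power-law density-modulus coefficients.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
rho = rng.uniform(0.2, 1.8, size=47)        # densities, g/cm^3 (synthetic)
E_obs = 6850.0 * rho ** 1.49                # "measured" moduli, MPa (assumed)

def rmse(params):
    a, b = params
    return np.sqrt(np.mean((a * rho ** b - E_obs) ** 2))

fit = minimize(rmse, x0=[5000.0, 1.0], method="Nelder-Mead",
               options={"xatol": 1e-6, "fatol": 1e-6, "maxiter": 5000})
print(fit.x)
```

In the paper this objective is evaluated through the neural-network surrogate of the finite element pipeline rather than directly, which is what makes the optimization affordable.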
Non-ambiguous recovery of Biot poroelastic parameters of cellular panels using ultrasonic waves
NASA Astrophysics Data System (ADS)
Ogam, Erick; Fellah, Z. E. A.; Sebaa, Naima; Groby, J.-P.
2011-03-01
The inverse problem of the recovery of the poroelastic parameters of open-cell soft plastic foam panels is solved by employing transmitted ultrasonic waves (USW) and the Biot-Johnson-Koplik-Champoux-Allard (BJKCA) model. It is shown by constructing the objective functional given by the total square of the difference between predictions from the BJKCA interaction model and experimental data obtained with transmitted USW that the inverse problem is ill-posed, since the functional exhibits several local minima and maxima. In order to solve this problem, which is beyond the capability of most off-the-shelf iterative nonlinear least squares optimization algorithms (such as the Levenberg-Marquardt or Nelder-Mead simplex methods), simple strategies are developed. The recovered acoustic parameters are compared with those obtained using simpler interaction models and a method employing asymptotic phase velocity of the transmitted USW. The retrieved elastic moduli are validated by solving an inverse vibration spectroscopy problem with data obtained from beam-like specimens cut from the panels using an equivalent solid elastodynamic model as estimator. The phase velocities are reconstructed using computed, measured resonance frequencies and a time-frequency decomposition of transient waves induced in the beam specimen. These confirm that the elastic parameters recovered using vibration are valid over the frequency range of study.
Development of a Compound Optimization Approach Based on Imperialist Competitive Algorithm
NASA Astrophysics Data System (ADS)
Wang, Qimei; Yang, Zhihong; Wang, Yong
In this paper, an improved approach is developed for the imperialist competitive algorithm to achieve greater performance. The Nelder-Mead simplex method is applied alternately with the original procedures of the algorithm. The approach is tested on twelve widely used benchmark functions and is also compared with other related studies. It is shown that the proposed approach has a faster convergence rate, better search ability, and higher stability than the original algorithm and other related methods.
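The general hybridization pattern, global exploration alternating with Nelder-Mead local refinement, can be sketched as follows; this uses a plain random sampler as the global stage, not the imperialist competitive algorithm itself.

```python
# Global sampling followed by Nelder-Mead polishing of the incumbent.
import numpy as np
from scipy.optimize import minimize

def rastrigin(x):
    x = np.asarray(x)
    return 10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

rng = np.random.default_rng(1)
candidates = rng.uniform(-5.12, 5.12, size=(200, 2))   # global exploration
best = min(candidates, key=rastrigin)                  # pick incumbent

polished = minimize(rastrigin, x0=best, method="Nelder-Mead")  # local stage
print(polished.x, polished.fun)
```

In the paper's scheme the two stages alternate repeatedly, so the local polish also feeds improved solutions back into the population.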
A Comparison of Trajectory Optimization Methods for the Impulsive Minimum Fuel Rendezvous Problem
NASA Technical Reports Server (NTRS)
Hughes, Steven P.; Mailhe, Laurie M.; Guzman, Jose J.
2003-01-01
In this paper we present a comparison of trajectory optimization approaches for the minimum-fuel rendezvous problem. Both indirect and direct methods are compared for a variety of test cases. The indirect approach is based on primer vector theory. The direct approaches are implemented numerically and include Sequential Quadratic Programming (SQP), quasi-Newton, and Nelder-Mead simplex methods. Several cost function parameterizations are considered for the direct approach. We choose one direct approach that appears to be the most flexible. Both the direct and indirect methods are applied to a variety of test cases which are chosen to demonstrate the performance of each method in different flight regimes. The first test case is a simple circular-to-circular coplanar rendezvous. The second test case is an elliptic-to-elliptic line-of-apsides rotation. The final test case is an orbit phasing maneuver sequence in a highly elliptic orbit. For each test case we present a comparison of the performance of all methods we consider in this paper.
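A minimal direct-method sketch (simplified Clohessy-Wiltshire relative dynamics, not the paper's formulation, with invented orbit and state values): for a coplanar two-impulse rendezvous the total delta-v becomes a function of the transfer time alone, and Nelder-Mead searches over that single variable.

```python
# Two-impulse CW rendezvous: minimize total delta-v over transfer time.
import numpy as np
from scipy.optimize import minimize

n = 0.0011                          # mean motion, rad/s (assumed LEO-like)
r0 = np.array([1000.0, -2000.0])    # radial / along-track offset, m
v0 = np.array([0.0, 0.5])           # initial relative velocity, m/s

def cw_blocks(t):
    s, c = np.sin(n * t), np.cos(n * t)
    Prr = np.array([[4 - 3 * c, 0.0], [6 * (s - n * t), 1.0]])
    Prv = np.array([[s / n, 2 * (1 - c) / n],
                    [2 * (c - 1) / n, (4 * s - 3 * n * t) / n]])
    Pvr = np.array([[3 * n * s, 0.0], [6 * n * (c - 1), 0.0]])
    Pvv = np.array([[c, 2 * s], [-2 * s, 4 * c - 3]])
    return Prr, Prv, Pvr, Pvv

def total_dv(t):
    t = float(np.atleast_1d(t)[0])
    if t < 60.0:
        return 1e9                                 # penalty keeps t physical
    Prr, Prv, Pvr, Pvv = cw_blocks(t)
    v_dep = np.linalg.solve(Prv, -Prr @ r0)        # velocity after burn 1
    v_arr = Pvr @ r0 + Pvv @ v_dep                 # arrival velocity (burn 2 nulls it)
    return np.linalg.norm(v_dep - v0) + np.linalg.norm(v_arr)

res = minimize(total_dv, x0=[1800.0], method="Nelder-Mead")
print(res.x[0], res.fun)
```

The indirect (primer vector) approach would instead derive optimality conditions analytically; the direct parameterization above is what makes simplex-type methods applicable.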
Asteroid mass estimation using Markov-Chain Monte Carlo techniques
NASA Astrophysics Data System (ADS)
Siltala, Lauri; Granvik, Mikael
2016-10-01
Estimates for asteroid masses are based on their gravitational perturbations on the orbits of other objects such as Mars, spacecraft, or other asteroids and/or their satellites. In the case of asteroid-asteroid perturbations, this leads to a 13-dimensional inverse problem where the aim is to derive the mass of the perturbing asteroid and six orbital elements for both the perturbing asteroid and the test asteroid using astrometric observations. We have developed and implemented three different mass estimation algorithms utilizing asteroid-asteroid perturbations in the OpenOrb asteroid-orbit-computation software: the very rough 'marching' approximation, in which the asteroid orbits are fixed at a given epoch, reducing the problem to a one-dimensional estimation of the mass; an implementation of the Nelder-Mead simplex method; and, most significantly, a Markov-chain Monte Carlo (MCMC) approach. We will introduce each of these algorithms with particular focus on the MCMC algorithm, and present example results for both synthetic and real data. Our results agree with the published mass estimates, but suggest that the published uncertainties may be misleading as a consequence of using linearized mass-estimation methods. Finally, we discuss remaining challenges with the algorithms as well as future plans, particularly in connection with ESA's Gaia mission.
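The contrast between the simplex and MCMC approaches can be sketched on a toy one-parameter mass problem (a Gaussian stand-in likelihood, not OpenOrb's orbit model): Nelder-Mead returns a single best-fit value, while a random-walk Metropolis sampler maps out the posterior and hence a non-linearized uncertainty.

```python
# Point estimate via Nelder-Mead vs. posterior via random-walk Metropolis.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
true_mass = 4.7                                    # arbitrary units (toy)
obs = true_mass + rng.normal(0.0, 0.5, size=40)    # noisy "perturbation" data

def neg_log_post(m):
    m = np.atleast_1d(m)[0]
    return 0.5 * np.sum((obs - m) ** 2) / 0.5 ** 2  # flat prior assumed

# Point estimate
mle = minimize(neg_log_post, x0=[1.0], method="Nelder-Mead").x[0]

# Random-walk Metropolis: accept with prob exp(-increase in neg-log-posterior)
chain, m = [], 1.0
for _ in range(20000):
    prop = m + rng.normal(0.0, 0.2)
    if np.log(rng.uniform()) < neg_log_post(m) - neg_log_post(prop):
        m = prop
    chain.append(m)
posterior = np.array(chain[5000:])   # discard burn-in
print(mle, posterior.mean(), posterior.std())
```

The posterior spread is what the linearized methods criticized above approximate, and what the full MCMC treatment measures directly.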
A Monte Carlo simulation based inverse propagation method for stochastic model updating
NASA Astrophysics Data System (ADS)
Bao, Nuo; Wang, Chunjie
2015-08-01
This paper presents an efficient stochastic model updating method based on statistical theory. Significant parameters are selected by implementing F-test evaluation and design of experiments, and an incomplete fourth-order polynomial response surface model (RSM) is then developed. Exploiting the RSM combined with Monte Carlo simulation (MCS) reduces the amount of calculation, and rapid random sampling becomes possible. The inverse uncertainty propagation is given by the equally weighted sum of mean and covariance-matrix objective functions. The mean and covariance of the parameters are estimated synchronously by minimizing the weighted objective function through a hybrid of particle-swarm and Nelder-Mead simplex optimization, thus achieving better correlation between simulation and test. Numerical examples of a three-degree-of-freedom mass-spring system under different conditions and a GARTEUR assembly structure validate the feasibility and effectiveness of the proposed method.
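The surrogate idea can be sketched with a quadratic response surface in place of the paper's incomplete quartic, and a made-up "expensive" model: fit the surface to a small design of experiments, then push Monte Carlo samples through the cheap surface instead of the model itself.

```python
# Response-surface surrogate plus Monte Carlo propagation (toy model).
import numpy as np

def expensive_model(k1, k2):
    # stand-in for a costly finite-element response (invented)
    return np.sqrt(k1 + 0.5 * k2) + 0.01 * k1 * k2

# Design of experiments: a small grid of model runs
k1g, k2g = np.meshgrid(np.linspace(8, 12, 5), np.linspace(4, 6, 5))
X = np.column_stack([np.ones(k1g.size), k1g.ravel(), k2g.ravel(),
                     k1g.ravel() ** 2, k2g.ravel() ** 2,
                     k1g.ravel() * k2g.ravel()])
coef, *_ = np.linalg.lstsq(X, expensive_model(k1g, k2g).ravel(), rcond=None)

def rsm(k1, k2):
    return (coef[0] + coef[1] * k1 + coef[2] * k2
            + coef[3] * k1 ** 2 + coef[4] * k2 ** 2 + coef[5] * k1 * k2)

# Monte Carlo sampling through the surrogate instead of the model
rng = np.random.default_rng(3)
k1s = rng.normal(10.0, 0.5, 50000)
k2s = rng.normal(5.0, 0.25, 50000)
mc_mean = rsm(k1s, k2s).mean()
print(mc_mean)
```

In the paper's method this surrogate sits inside the outer particle-swarm/Nelder-Mead loop, which repeatedly needs the propagated statistics.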
McGinitie, Teague M; Ebrahimi-Najafabadi, Heshmatollah; Harynuk, James J
2014-02-21
A new method for calibrating thermodynamic data to be used in the prediction of analyte retention times is presented. The method allows thermodynamic data collected on one column to be used in making predictions across columns of the same stationary phase but with varying geometries. This calibration is essential, as slight variances in the column inner diameter and stationary phase film thickness, between columns or as a column ages, will adversely affect the accuracy of predictions. The calibration technique uses a Grob standard mixture along with a Nelder-Mead simplex algorithm and a previously developed model of GC retention times, based on a three-parameter thermodynamic model, to estimate both the inner diameter and the stationary phase film thickness. The calibration method is highly successful, with the predicted retention times for a set of alkanes, ketones and alcohols having an average error of 1.6 s across three columns. Copyright © 2014 Elsevier B.V. All rights reserved.
An adaptable, low cost test-bed for unmanned vehicle systems research
NASA Astrophysics Data System (ADS)
Goppert, James M.
2011-12-01
An unmanned vehicle systems test-bed has been developed. The test-bed has been designed to accommodate hardware changes and various vehicle types and algorithms. The creation of this test-bed allows research teams to focus on algorithm development and employ a common well-tested experimental framework. The ArduPilotOne autopilot was developed to provide the necessary level of abstraction for multiple vehicle types. The autopilot was also designed to be highly integrated with the Mavlink protocol for Micro Air Vehicle (MAV) communication. Mavlink is the native protocol for QGroundControl, a MAV ground control program. Features were added to QGroundControl to accommodate outdoor usage. Next, the Mavsim toolbox was developed for Scicoslab to allow hardware-in-the-loop testing, control design and analysis, and estimation algorithm testing and verification. In order to obtain linear models of aircraft dynamics, the JSBSim flight dynamics engine was extended to use a probabilistic Nelder-Mead simplex method. The JSBSim aircraft dynamics were compared with collected wind-tunnel data. Finally, a structured methodology for successive loop closure control design is proposed. This methodology is demonstrated along with the rest of the test-bed tools on a quadrotor, a fixed wing RC plane, and a ground vehicle. Test results for the ground vehicle are presented.
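Linearization requires first locating an equilibrium (trim) point, and a simplex search is one way to find it. Here is a toy trim search in that spirit, with hypothetical aerodynamic coefficients rather than JSBSim's: Nelder-Mead drives the lift and pitch-moment residuals to zero.

```python
# Trim search: find angle of attack and elevator angle that null residuals.
import numpy as np
from scipy.optimize import minimize

CL0, CLa, CLde = 0.25, 4.5, 0.35     # assumed lift coefficients (per rad)
Cm0, Cma, Cmde = 0.05, -0.7, -1.1    # assumed pitch-moment coefficients
CL_req = 0.6                         # lift coefficient for level flight

def trim_residual(x):
    alpha, delta_e = x
    CL = CL0 + CLa * alpha + CLde * delta_e
    Cm = Cm0 + Cma * alpha + Cmde * delta_e
    return (CL - CL_req) ** 2 + Cm ** 2

res = minimize(trim_residual, x0=[0.0, 0.0], method="Nelder-Mead",
               options={"xatol": 1e-10, "fatol": 1e-12, "maxiter": 2000})
alpha_trim, de_trim = res.x
print(np.degrees(alpha_trim), np.degrees(de_trim))
```

With a full nonlinear flight-dynamics model the residuals are not linear in the controls, which is where a derivative-free method like Nelder-Mead earns its keep.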
NASA Astrophysics Data System (ADS)
Sharqawy, Mostafa H.
2016-12-01
Pore network models (PNM) of Berea and Fontainebleau sandstones were constructed using nonlinear programming (NLP) and optimization methods. The constructed PNMs are considered digital representations of the rock samples, built by matching the macroscopic properties of the porous media, and are used to conduct fluid transport simulations including single- and two-phase flow. The PNMs consisted of cubic networks of randomly distributed pore and throat sizes with various connectivity levels. The networks were optimized such that the upper and lower bounds of the pore sizes are determined using the capillary tube bundle model and the Nelder-Mead method instead of guessing them, which reduces the optimization computational time significantly. An open-source PNM framework was employed to conduct transport and percolation simulations such as invasion percolation and Darcian flow. The PNM model was subsequently used to compute the macroscopic properties: porosity, absolute permeability, specific surface area, breakthrough capillary pressure, and primary drainage curve. The pore networks were optimized to allow the simulation results for the macroscopic properties to be in excellent agreement with the experimental measurements. This study demonstrates that nonlinear programming and optimization methods provide a promising approach for pore network modeling when computed tomography imaging may not be readily available.
Highly Compact Circulators in Square-Lattice Photonic Crystal Waveguides
Jin, Xin; Ouyang, Zhengbiao; Wang, Qiong; Lin, Mi; Wen, Guohua; Wang, Jingjing
2014-01-01
We propose, demonstrate and investigate highly compact circulators with ultra-low insertion loss in square-lattice square-rod photonic-crystal waveguides. Only a single magneto-optical square rod is required to be inserted into the cross center of the waveguides, making the structure very compact and ultra-efficient. The square rods around the center defect rod are replaced by several right-angled-triangle rods, reducing the insertion loss further and improving the isolation as well. By choosing a linear-dispersion region and considering the mode patterns in the square magneto-optical rod, the operating mechanism of the circulator is analyzed. By applying the finite-element method together with the Nelder-Mead optimization method, an extremely low insertion loss of 0.02 dB for the transmitted wave and an ultra-high isolation of 46 dB∼48 dB for the isolated port are obtained. The idea presented can be applied to build circulators in different wavebands, e.g., microwave or terahertz. PMID:25415417
The cancellous bone multiscale morphology-elasticity relationship.
Agić, Ante; Nikolić, Vasilije; Mijović, Budimir
2006-06-01
The effective-property relations of cancellous bone are analysed at multiple scales across two aspects: the properties of a representative volume element at the microscale, and a statistical measure of trabecular trajectory orientation at the mesoscale. Anisotropy of the microstructure is described by a fabric tensor measure, with the trajectory orientation tensor as the bridging-scale connection. The scattered measured data (elastic modulus, trajectory orientation, apparent density) from compression tests are fitted by a stochastic interpolation procedure. The engineering constants of the elasticity tensor are estimated by a least-squares fitting procedure in multidimensional space using the Nelder-Mead simplex. The multiaxial failure surface in strain space is constructed and interpolated by a modified super-ellipsoid.
NASA Astrophysics Data System (ADS)
Vijay Alagappan, A.; Narasimha Rao, K. V.; Krishna Kumar, R.
2015-02-01
Tyre models are a prerequisite for any vehicle dynamics simulation. Tyre models range from the simplest mathematical models that consider only the cornering stiffness to a complex set of formulae. Among all the steady-state tyre models that are in use today, the Magic Formula tyre model is unique and most popular. Though the Magic Formula tyre model is widely used, obtaining the model coefficients from either the experimental or the simulation data is not straightforward due to its nonlinear nature and the presence of a large number of coefficients. A common procedure used for this extraction is the least-squares minimisation that requires considerable experience for initial guesses. Various researchers have tried different algorithms, namely, gradient and Newton-based methods, differential evolution, artificial neural networks, etc. The issues involved in all these algorithms are setting bounds or constraints, sensitivity of the parameters, the features of the input data such as the number of points, noisy data, experimental procedure used such as slip angle sweep or tyre measurement (TIME) procedure, etc. The extracted Magic Formula coefficients are affected by these variants. This paper highlights the issues that are commonly encountered in obtaining these coefficients with different algorithms, namely, least-squares minimisation using trust region algorithms, Nelder-Mead simplex, pattern search, differential evolution, particle swarm optimisation, cuckoo search, etc. A key observation is that not all the algorithms give the same Magic Formula coefficients for a given data. The nature of the input data and the type of the algorithm decide the set of the Magic Formula tyre model coefficients.
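A hedged illustration of the extraction problem: the lateral-force Magic Formula y = D·sin(C·arctan(B·x − E·(B·x − arctan(B·x)))) is fitted to synthetic data with Nelder-Mead. The coefficient values and the initial guess below are invented for the demo; as the paragraph above notes, a poor initial guess can change which coefficient set the search returns.

```python
# Nelder-Mead extraction of Magic Formula coefficients from synthetic data.
import numpy as np
from scipy.optimize import minimize

def magic(x, B, C, D, E):
    Bx = B * x
    return D * np.sin(C * np.arctan(Bx - E * (Bx - np.arctan(Bx))))

alpha = np.linspace(-0.3, 0.3, 61)           # slip-angle sweep, rad
truth = (8.0, 1.4, 4000.0, -0.2)             # B, C, D, E (assumed values)
Fy = magic(alpha, *truth)                    # "measured" lateral force, N

def sse(p):
    return np.sum((magic(alpha, *p) - Fy) ** 2)

good = minimize(sse, x0=[7.5, 1.35, 3900.0, -0.15], method="Nelder-Mead",
                options={"maxiter": 20000, "xatol": 1e-10, "fatol": 1e-10})
print(good.x, good.fun)
```

Repeating the fit from a distant starting point (or with noisy data) is an easy way to observe the sensitivity to initial guesses that the paragraph describes.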
Imfit: A Fast, Flexible Program for Astronomical Image Fitting
NASA Astrophysics Data System (ADS)
Erwin, Peter
2014-08-01
Imfit is an open-source astronomical image-fitting program specialized for galaxies but potentially useful for other sources, which is fast, flexible, and highly extensible. Its object-oriented design allows new types of image components (2D surface-brightness functions) to be easily written and added to the program. Image functions provided with Imfit include Sersic, exponential, and Gaussian galaxy decompositions along with Core-Sersic and broken-exponential profiles, elliptical rings, and three components that perform line-of-sight integration through 3D luminosity-density models of disks and rings seen at arbitrary inclinations. Available minimization algorithms include Levenberg-Marquardt, Nelder-Mead simplex, and Differential Evolution, allowing trade-offs between speed and decreased sensitivity to local minima in the fit landscape. Minimization can be done using the standard chi^2 statistic (using either data or model values to estimate per-pixel Gaussian errors, or else user-supplied error images) or the Cash statistic; the latter is particularly appropriate for cases of Poisson data in the low-count regime. The C++ source code for Imfit is available under the GNU Public License.
An approach to unbiased subsample interpolation for motion tracking.
McCormick, Matthew M; Varghese, Tomy
2013-04-01
Accurate subsample displacement estimation is necessary for ultrasound elastography because of the small deformations that occur and the subsequent application of a derivative operation on local displacements. Many of the commonly used subsample estimation techniques introduce significant bias errors. This article addresses a reduced-bias approach to subsample displacement estimation that consists of a two-dimensional windowed-sinc interpolation with numerical optimization. It is shown that a Welch or Lanczos window with a Nelder-Mead simplex or regular-step gradient-descent optimization is well suited for this purpose. Little improvement results from a sinc window radius greater than four data samples. The strain signal-to-noise ratio (SNR) obtained in a uniformly elastic phantom is compared with that of other parabolic and cosine interpolation methods; it is found that the strain SNR is improved over parabolic interpolation from 11.0 to 13.6 in the axial direction and 0.7 to 1.1 in the lateral direction for an applied 1% axial deformation. The improvement was most significant for small strains and displacement tracking in the lateral direction. This approach does not rely on special properties of the image or similarity function, which is demonstrated by its effectiveness with the application of a previously described regularization technique.
Automated Calibration For Numerical Models Of Riverflow
NASA Astrophysics Data System (ADS)
Fernandez, Betsaida; Kopmann, Rebekka; Oladyshkin, Sergey
2017-04-01
Calibration of numerical models has been fundamental to all types of hydro-system modeling from its beginnings, as a way to approximate the parameters that can mimic the overall system behavior. Thus, an assessment of different deterministic and stochastic optimization methods is undertaken to compare their robustness, computational feasibility, and global search capacity. The uncertainty of the most suitable methods is also analyzed. These optimization methods minimize an objective function that comprises synthetic measurements and simulated data. Synthetic measurement data replace the observed data set to guarantee an existing parameter solution. The input data for the objective function derive from a hydro-morphological dynamics numerical model which represents a 180-degree bend channel. The hydro-morphological numerical model shows a high level of ill-posedness in the mathematical problem. The minimization of the objective function by the different candidate methods indicates a failure of some of the gradient-based methods, such as Newton Conjugate Gradient and BFGS. Others reveal partial convergence, such as Nelder-Mead, Polak-Ribière, L-BFGS-B, Truncated Newton Conjugate Gradient, and Trust-Region Newton Conjugate Gradient. Further ones yield parameter solutions that range outside the physical limits, such as Levenberg-Marquardt and LeastSquareRoot. Moreover, there is a significant computational demand for genetic optimization methods, such as Differential Evolution and Basin-Hopping, as well as for brute-force methods. The deterministic Sequential Least Squares Programming and the stochastic Bayesian inference methods present the best optimization results. Keywords: automated calibration of hydro-morphological dynamic numerical models, Bayesian inference theory, deterministic optimization methods.
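A minimal version of this benchmarking loop can be written with SciPy, with the Rosenbrock function standing in for the (expensive) hydro-morphological objective; the candidate list echoes a few of the methods named above.

```python
# Comparing several scipy.optimize methods on a common objective.
import numpy as np
from scipy.optimize import minimize, rosen

x0 = np.array([-1.2, 1.0])
results = {}
for method in ["Nelder-Mead", "BFGS", "L-BFGS-B", "Powell", "TNC", "CG"]:
    res = minimize(rosen, x0, method=method)
    results[method] = (res.fun, np.linalg.norm(res.x - np.array([1.0, 1.0])))

for method, (fun, err) in results.items():
    print(f"{method:12s} f={fun:.3e}  |x - x*|={err:.3e}")
```

On a real ill-posed calibration problem the picture changes markedly, which is precisely the point of the assessment described above.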
Proposal of Evolutionary Simplex Method for Global Optimization Problem
NASA Astrophysics Data System (ADS)
Shimizu, Yoshiaki
To make agile decisions in a rational manner, the role of optimization engineering has drawn increasing attention under diversified customer demand. With this point of view, in this paper we propose a new evolutionary method serving as an optimization technique in the paradigm of optimization engineering. The developed method has prospects for globally solving the various complicated problems appearing in real-world applications. It evolves from the conventional Nelder and Mead simplex method by borrowing ideas from recent meta-heuristic methods such as PSO. After describing an algorithm to handle linear inequality constraints effectively, we validate the effectiveness of the proposed method through comparison with other methods using several benchmark problems.
Determination of elastic moduli from measured acoustic velocities.
Brown, J Michael
2018-06-01
Methods are evaluated for the solution of the inverse problem associated with determining elastic moduli for crystals of arbitrary symmetry from elastic wave velocities measured in many crystallographic directions. A package of MATLAB functions provides a robust and flexible environment for analysis of ultrasonic, Brillouin, or Impulsive Stimulated Light Scattering datasets. Three inverse algorithms are considered: the gradient-based methods of Levenberg-Marquardt and Backus-Gilbert, and a non-gradient-based (Nelder-Mead) simplex approach. Several data types are considered: body wave velocities alone, surface wave velocities plus a side constraint on X-ray-diffraction-based axes compressibilities, or joint body and surface wave velocities. The numerical algorithms are validated through comparisons with prior published results and through analysis of synthetic datasets. Although all approaches succeed in finding low-misfit solutions, the Levenberg-Marquardt method consistently demonstrates effectiveness and computational efficiency. However, linearized gradient-based methods, when applied to a strongly non-linear problem, may not adequately converge to the global minimum. The simplex method, while slower, is less susceptible to being trapped in local misfit minima. A "multi-start" strategy (initiate searches from more than one initial guess) provides better assurance that global minima have been located. Numerical estimates of parameter uncertainties based on Monte Carlo simulations are compared to formal uncertainties based on covariance calculations. Copyright © 2018 Elsevier B.V. All rights reserved.
Determining index of refraction from polarimetric hyperspectral radiance measurements
NASA Astrophysics Data System (ADS)
Martin, Jacob A.; Gross, Kevin C.
2015-09-01
Polarimetric hyperspectral imaging (P-HSI) combines two of the most common remote sensing modalities. This work leverages the combination of these techniques to improve material classification. Classifying and identifying materials requires parameters which are invariant to changing viewing conditions, and most often a material's reflectivity or emissivity is used. Measuring these most often requires assumptions be made about the material and atmospheric conditions. Combining both polarimetric and hyperspectral imaging, we propose a method to remotely estimate the index of refraction of a material. In general, this is an underdetermined problem because both the real and imaginary components of index of refraction are unknown at every spectral point. By modeling the spectral variation of the index of refraction using a few parameters, however, the problem can be made overdetermined. A number of different functions can be used to describe this spectral variation, and some are discussed here. Reducing the number of spectral parameters to fit allows us to add parameters which estimate atmospheric downwelling radiance and transmittance. Additionally, the object temperature is added as a fit parameter. The set of these parameters that best replicate the measured data is then found using a bounded Nelder-Mead simplex search algorithm. Other search algorithms are also examined and discussed. Results show that this technique has promise but also some limitations, which are the subject of ongoing work.
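A toy version of the retrieval (no atmosphere or polarimetry; all values invented for illustration): the spectral index of refraction is modeled with two parameters, emissivity follows from the normal-incidence Fresnel reflectance, and (n0, n1, T) are fitted to synthetic radiance with a bounded Nelder-Mead search, echoing the parameter-reduction idea above.

```python
# Bounded Nelder-Mead fit of a parameterized index of refraction plus T.
import numpy as np
from scipy.optimize import minimize

lam = np.linspace(8.0, 12.0, 80)             # wavelength, micrometres

def planck(lam_um, T):
    # spectral radiance, W/(m^2 sr um), lam in um
    return 1.19104e8 / lam_um ** 5 / np.expm1(14387.8 / (lam_um * T))

def radiance(params):
    n0, n1, T = params
    n = n0 + n1 * (lam - 10.0)               # assumed linear spectral model
    eps = 1.0 - ((n - 1.0) / (n + 1.0)) ** 2  # Fresnel, normal incidence
    return eps * planck(lam, T)

truth = np.array([1.5, 0.05, 300.0])
measured = radiance(truth)                   # synthetic "measurement"

def sse(p):
    return np.sum((radiance(p) - measured) ** 2)

res = minimize(sse, x0=[1.45, 0.04, 298.0], method="Nelder-Mead",
               bounds=[(1.0, 3.0), (-0.5, 0.5), (200.0, 400.0)],
               options={"maxiter": 20000, "xatol": 1e-10, "fatol": 1e-12})
print(res.x)
```

The full problem adds downwelling radiance, transmittance and polarimetric constraints as further fit parameters, but the bounded-simplex machinery is the same.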
Multi-species beam hardening calibration device for x-ray microtomography
NASA Astrophysics Data System (ADS)
Evershed, Anthony N. Z.; Mills, David; Davis, Graham
2012-10-01
Impact-source X-ray microtomography (XMT) is a widely-used benchtop alternative to synchrotron radiation microtomography. Since X-rays from a tube are polychromatic, however, greyscale `beam hardening' artefacts are produced by the preferential absorption of low-energy photons in the beam path. A multi-material `carousel' test piece was developed to offer a wider range of X-ray attenuations from well-characterised filters than single-material step wedges can produce practically, and optimization software was developed to produce a beam hardening correction by use of the Nelder-Mead optimization method, tuned for specimens composed of other materials (such as hydroxyapatite [HA] or barium for dental applications). The carousel test piece produced calibration polynomials reliably and with a significantly smaller discrepancy between the calculated and measured attenuations than the calibration step wedge previously in use. An immersion tank was constructed and used to simplify multi-material samples in order to negate the beam hardening effect of low atomic number materials within the specimen when measuring mineral concentration of higher-Z regions. When scanned in water at an acceleration voltage of 90 kV, a Scanco AG hydroxyapatite / poly(methyl methacrylate) calibration phantom closely approximates a single-material system, producing accurate hydroxyapatite concentration measurements. This system can then be corrected for beam hardening for the material of interest.
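A simplified version of the correction being calibrated (a two-energy toy beam rather than a full tube spectrum, with invented attenuation coefficients): Nelder-Mead finds polynomial coefficients that map the measured polychromatic attenuation back to a value proportional to thickness.

```python
# Beam-hardening linearization: fit a correction polynomial by Nelder-Mead.
import numpy as np
from scipy.optimize import minimize

thickness = np.linspace(0.1, 5.0, 25)        # filter thicknesses, mm
mu_lo, mu_hi = 1.2, 0.4                      # per-mm, low/high energy (toy)
I = 0.5 * np.exp(-mu_lo * thickness) + 0.5 * np.exp(-mu_hi * thickness)
A = -np.log(I)                               # measured polychromatic attenuation

def nonlinearity(c):
    corrected = c[0] * A + c[1] * A ** 2 + c[2] * A ** 3
    return np.sum((corrected - mu_hi * thickness) ** 2)  # target: linear in t

res = minimize(nonlinearity, x0=[1.0, 0.0, 0.0], method="Nelder-Mead",
               options={"maxiter": 10000, "xatol": 1e-10, "fatol": 1e-12})
corrected = res.x[0] * A + res.x[1] * A ** 2 + res.x[2] * A ** 3
print(res.x)
```

The carousel's role in the paper is to supply many well-characterised (thickness, attenuation) pairs like the synthetic ones above, over a wider attenuation range than a step wedge.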
McElcheran, Clare E.; Yang, Benson; Anderson, Kevan J. T.; Golenstani-Rad, Laleh; Graham, Simon J.
2015-01-01
Deep Brain Stimulation (DBS) is increasingly used to treat a variety of brain diseases by sending electrical impulses to deep brain nuclei through long, electrically conductive leads. Magnetic resonance imaging (MRI) of patients pre- and post-implantation is desirable to target and position the implant, to evaluate possible side-effects and to examine DBS patients who have other health conditions. Although MRI is the preferred modality for pre-operative planning, MRI post-implantation is limited due to the risk of high local power deposition, and therefore tissue heating, at the tip of the lead. The localized power deposition arises from currents induced in the leads caused by coupling with the radiofrequency (RF) transmission field during imaging. In the present work, parallel RF transmission (pTx) is used to tailor the RF electric field to suppress coupling effects. Electromagnetic simulations were performed for three pTx coil configurations with 2, 4, and 8-elements, respectively. Optimal input voltages to minimize coupling, while maintaining RF magnetic field homogeneity, were determined for all configurations using a Nelder-Mead optimization algorithm. Resulting electric and magnetic fields were compared to that of a 16-rung birdcage coil. Experimental validation was performed with a custom-built 4-element pTx coil. In simulation, 95-99% reduction of the electric field at the tip of the lead was observed between the various pTx coil configurations and the birdcage coil. Maximal reduction in E-field was obtained with the 8-element pTx coil. Magnetic field homogeneity was comparable to the birdcage coil for the 4- and 8-element pTx configurations. In experiment, a temperature increase of 2±0.15°C was observed at the tip of the wire using the birdcage coil, whereas negligible increase (0.2±0.15°C) was observed with the optimized pTx system. 
Although further research is required, these initial results suggest that the concept of optimizing pTx to reduce DBS heating effects holds considerable promise. PMID:26237218
NASA Astrophysics Data System (ADS)
McElcheran, Clare
Deep Brain Stimulation (DBS) is increasingly used to treat a variety of brain diseases by sending electrical impulses to deep brain nuclei through long, electrically conductive leads. Magnetic resonance imaging (MRI) of patients pre- and post-implantation is desirable to target and position the implant, to evaluate possible side-effects and to examine DBS patients who have other health conditions. Although MRI is the preferred modality for pre-operative planning, MRI post-implantation is limited due to the risk of high local power deposition, and therefore tissue heating, at the tip of the lead. The localized power deposition arises from currents induced in the leads caused by coupling with the radiofrequency (RF) transmission field during imaging. In this thesis, parallel RF transmission (pTx) is used to tailor the RF electric field to suppress coupling effects. Three pTx coil configurations with 2-elements, 4-elements, and 8-elements, respectively, were investigated. Optimal input voltages to minimize coupling, while maintaining RF magnetic field homogeneity, were determined using a Nelder-Mead optimization algorithm. Resulting electric and magnetic fields were compared to that of a 16-rung birdcage coil. Experimental validation was performed with a custom-built 4-element pTx coil. Three cases were investigated to develop and evaluate this technique. First, a Proof-of-Concept study was performed to investigate the case of a simple, uniform cylindrical phantom with a straight, perfectly conducting wire. Second, a heterogeneous subject with bilateral, curved implanted wires was investigated. Finally, the third case investigated realistic patient lead-trajectories obtained from intra-operative CT scans. In all three cases, specific absorption rate (SAR), a metric used to quantify power deposition which results in heating, was reduced by over 90%. Maximal reduction in SAR was obtained with the 8-element pTx coil. 
Magnetic field homogeneity was comparable to the birdcage coil for the 4- and 8-element pTx configurations. Although further research is required before clinical implementation, these initial results suggest that the concept of optimizing pTx to reduce DBS heating effects holds considerable promise.
NASA Astrophysics Data System (ADS)
Buchanan, James L.; Gilbert, Robert P.; Ou, Miao-jung Y.
2011-12-01
Estimating the parameters of an elastic or poroelastic medium from reflected or transmitted acoustic data is an important but difficult problem. Use of the Nelder-Mead simplex method to minimize an objective function measuring the discrepancy between some observable and its value calculated from a model for a trial set of parameters has been tried by several authors. In this paper, the difficulty with this direct approach, namely the existence of numerous local minima of the objective function, is documented for the in vitro experiment in which a specimen in a water tank is subjected to an ultrasonic pulse. An indirect approach, based on the numerical solution of the equations for a set of 'effective' velocities and transmission coefficients, is then observed empirically to ameliorate the difficulties posed by the direct approach.
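The local-minima problem this abstract documents is easy to reproduce on a toy objective, and a common (if crude) mitigation is multistart Nelder-Mead. The objective below is invented for illustration; it is not the poroelastic misfit of the paper.

```python
import numpy as np
from scipy.optimize import minimize

def objective(p):
    x = p[0]
    # Toy misfit: many local minima superposed on a broad bowl.
    return np.sin(3.0 * x) + 0.1 * (x - 2.0) ** 2

# A single Nelder-Mead run is trapped by whichever basin the start lies in;
# restarting from a coarse grid of initial guesses and keeping the best
# result recovers the global basin.
starts = np.linspace(-4.0, 8.0, 13)
best = min((minimize(objective, [s], method="Nelder-Mead") for s in starts),
           key=lambda r: r.fun)
```

The paper's indirect approach is a more principled fix, reformulating the problem so the objective is better behaved rather than brute-forcing the starts.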
NASA Astrophysics Data System (ADS)
Kanka, Jiri
2012-06-01
A fiber-optic long-period grating (LPG) operating near the dispersion turning point of its phase matching curve (PMC), referred to as a turn-around-point (TAP) LPG, is known to be extremely sensitive to external parameters. Moreover, in a TAP LPG the phase matching condition can be almost satisfied over a large spectral range, yielding broadband LPG operation. TAP LPGs have been investigated notably for use as broadband mode converters and biosensors. So far, TAP LPGs have been realized in specially designed or post-processed conventional fibers, but not yet in photonic crystal fibers (PCFs), which allow a great degree of freedom in engineering the fiber's dispersion properties through control of the PCF structural parameters. We have developed a design optimization technique for TAP PCF LPGs employing the finite element method for PCF modal analysis in combination with the Nelder-Mead simplex method for minimizing an objective function based on target-specific PCF properties. Using this tool we have designed TAP PCF LPGs for specified wavelength ranges and refractive indices of the medium in the air holes. Possible TAP PCF-LPG operational regimes (dual resonance, broadband mode conversion and transmitted-intensity-based operation) will be demonstrated numerically. The potential and limitations of TAP PCF-LPGs for evanescent chemical and biochemical sensing will be assessed.
Algorithm for retrieving vegetative canopy and leaf parameters from multi- and hyperspectral imagery
NASA Astrophysics Data System (ADS)
Borel, Christoph
2009-05-01
In recent years hyperspectral data have been used to retrieve information about vegetative canopies, such as leaf area index and canopy water content. For the environmental scientist these two parameters are valuable, but there is potentially more information to be gained as high spatial resolution data become available. We developed an Amoeba (Nelder-Mead or simplex) based program to fit a measured reflectance spectrum by inverting a vegetative canopy radiosity model coupled with a leaf (PROSPECT5) reflectance model and a model for the background reflectance (e.g. soil, water, leaf litter). The PROSPECT5 leaf model has five parameters: leaf structure parameter Nstru, chlorophyll a+b concentration Cab, carotenoid content Car, equivalent water thickness Cw and dry matter content Cm. The canopy model has two parameters: total leaf area index (LAI) and number of layers. The background reflectance model is either a single reflectance spectrum, from a spectral library or derived from a bare-area pixel in an image, or a linear mixture of soil spectra. We summarize the radiosity model of a layered canopy and give references for the leaf/needle models. The method is then tested on simulated and measured data. We investigate the uniqueness, limitations and accuracy of the retrieved parameters as a function of canopy parameters (low, medium and high leaf area index), spectral resolution (32- to 211-band hyperspectral), sensor noise and initial conditions.
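The inversion loop this abstract describes can be sketched with a drastically simplified stand-in for the radiosity/PROSPECT5 models: a two-parameter toy canopy model (LAI and a water-content-like parameter) whose parameters are recovered from a synthetic spectrum by Nelder-Mead. Every coefficient below is invented.

```python
import numpy as np
from scipy.optimize import minimize

wl = np.linspace(400.0, 2400.0, 50)              # wavelengths, nm
k_abs = 1e-3 * (wl - 400.0) / 2000.0 + 1e-4      # toy absorption coefficients
soil = 0.25 * np.ones_like(wl)                   # flat toy soil background

def canopy_reflectance(lai, cw):
    gap = np.exp(-0.5 * lai)                     # toy canopy gap fraction
    leaf = 0.45 * np.exp(-cw * k_abs * 1e4)      # toy leaf reflectance vs water
    return gap * soil + (1.0 - gap) * leaf

observed = canopy_reflectance(3.0, 0.02)         # synthetic "measurement"

def cost(p):
    return np.sum((canopy_reflectance(*p) - observed) ** 2)

# Amoeba/Nelder-Mead inversion from a deliberately poor initial guess.
res = minimize(cost, x0=[1.0, 0.01], method="Nelder-Mead")
```

The real program replaces `canopy_reflectance` with the layered radiosity model and adds the remaining PROSPECT5 parameters to the search vector.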
NASA Astrophysics Data System (ADS)
Stock, Joachim W.; Kitzmann, Daniel; Patzer, A. Beate C.; Sedlmayr, Erwin
2018-06-01
For the calculation of complex neutral/ionized gas phase chemical equilibria, we present a semi-analytical, versatile and efficient computer program called FastChem. The applied method is based on the solution of a system of coupled nonlinear (and linear) algebraic equations in many variables, namely the law of mass action and the element conservation equations including charge balance. Specifically, the system of equations is decomposed into a set of coupled nonlinear equations in one variable each, which are solved analytically whenever feasible to reduce computation time. Notably, the electron density is determined using the method of Nelder and Mead at low temperatures. The program is written in object-oriented C++, which makes it easy to couple the code with other programs, although a stand-alone version is provided. FastChem can be used in parallel or sequentially and is available under the GNU General Public License version 3 at https://github.com/exoclime/FastChem together with several sample applications. The code has been successfully validated against previous studies and its convergence behavior has been tested even for extreme physical parameter ranges, down to 100 K and up to 1000 bar. FastChem converges stably and robustly even in the most demanding chemical situations, which have sometimes posed extreme challenges for previous algorithms.
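The one-variable electron-density step can be illustrated with a toy charge-balance problem: a single Saha-like ionization relation whose squared charge imbalance is minimized by Nelder-Mead. The constants and the functional form are invented for illustration; FastChem's actual equations are far richer.

```python
import numpy as np
from scipy.optimize import minimize

N_TOT = 1.0    # toy total number density of an ionizable species
A = 0.01       # toy ionization constant (stand-in for a Saha factor)

def charge_imbalance(p):
    ne = p[0]
    n_ion = N_TOT * A / (A + ne)   # toy ion density at electron density ne
    return (n_ion - ne) ** 2       # squared deviation from charge neutrality

# One-dimensional Nelder-Mead search for the neutral electron density.
res = minimize(charge_imbalance, x0=[0.5], method="Nelder-Mead")
```

For this toy relation the neutrality condition reduces to a quadratic with root ne ≈ 0.0951, which the simplex search recovers without derivatives.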
Automatic Generation of Algorithms for the Statistical Analysis of Planetary Nebulae Images
NASA Technical Reports Server (NTRS)
Fischer, Bernd
2004-01-01
Analyzing data sets collected in experiments or by observations is a core scientific activity. Typically, experimental and observational data are fraught with uncertainty, and the analysis is based on a statistical model of the conjectured underlying processes. The large data volumes collected by modern instruments make computer support indispensable for this. Consequently, scientists spend significant amounts of their time on the development and refinement of data analysis programs. AutoBayes [GF+02, FS03] is a fully automatic synthesis system for generating statistical data analysis programs. Externally, it looks like a compiler: it takes an abstract problem specification and translates it into executable code. Its input is a concise description of a data analysis problem in the form of a statistical model as shown in Figure 1; its output is optimized and fully documented C/C++ code which can be linked dynamically into the Matlab and Octave environments. Internally, however, it is quite different: AutoBayes derives a customized algorithm implementing the given model using a schema-based process, and then further refines and optimizes the algorithm into code. A schema is a parameterized code template with associated semantic constraints which define and restrict the template's applicability. The schema parameters are instantiated in a problem-specific way during synthesis as AutoBayes checks the constraints against the original model or, recursively, against emerging sub-problems. The AutoBayes schema library contains problem decomposition operators (which are justified by theorems in a formal logic in the domain of Bayesian networks) as well as machine learning algorithms (e.g., EM, k-Means) and numeric optimization methods (e.g., Nelder-Mead simplex, conjugate gradient). AutoBayes augments this schema-based approach by symbolic computation to derive closed-form solutions whenever possible.
This is a major advantage over other statistical data analysis systems, which use numerical approximations even in cases where closed-form solutions exist. AutoBayes is implemented in Prolog and comprises approximately 75,000 lines of code. In this paper, we take one typical scientific data analysis problem, analyzing planetary nebulae images taken by the Hubble Space Telescope, and show how AutoBayes can be used to automate the implementation of the necessary analysis programs. We initially follow the analysis described by Knuth and Hajian [KH02] and use AutoBayes to derive code for the published models. We show the details of the code derivation process, including the symbolic computations and automatic integration of library procedures, and compare the results of the automatically generated and manually implemented code. We then go beyond the original analysis and use AutoBayes to derive code for a simple image segmentation procedure based on a mixture model which can be used to automate a manual preprocessing step. Finally, we combine the original approach with the simple segmentation, which yields a more detailed analysis. This also demonstrates that AutoBayes makes it easy to combine different aspects of data analysis.
NASA Astrophysics Data System (ADS)
Onken, U.; Rarey-Nies, J.; Gmehling, J.
1989-05-01
The Dortmund Data Bank (DDB) was started in 1973 with the intention to employ the vast store of vapor-liquid equilibrium (VLE) data from the literature for the development of models for the prediction of VLE. From the beginning, the structure of the DDB has been organized in such a way that it was possible to take advantage of the full potential of electronic computers. With the experience gained in fitting and processing VLE data, we extended the DDB system to other types of mixture properties, i.e., liquid-liquid equilibria (LLE), gas solubilities (GLE), activity coefficients at infinite dilution γ∞, heats of mixing ( h E), and excess heat capacities. Besides the files for mixture properties, the DDB contains pure-component data and program packages for various applications. New experimental data are checked for consistency before they are stored. For data retrieval user-specified search masks can be used. The data files are available via an online data service and through the Dechema Chemistry Data Series. For the purpose of data correlation and model testing, parameter fitting is performed with an optimization routine (Nelder-Mead). In the past years the DDB system has been successfully employed for the development of prediction methods for VLE, LLE, GLE, γ∞, and h E (UNIFAC, mod. UNIFAC, etc.).
Mars Entry Atmospheric Data System Modelling and Algorithm Development
NASA Technical Reports Server (NTRS)
Karlgaard, Christopher D.; Beck, Roger E.; O'Keefe, Stephen A.; Siemers, Paul; White, Brady; Engelund, Walter C.; Munk, Michelle M.
2009-01-01
The Mars Entry Atmospheric Data System (MEADS) is being developed as part of the Mars Science Laboratory (MSL), Entry, Descent, and Landing Instrumentation (MEDLI) project. The MEADS project involves installing an array of seven pressure transducers linked to ports on the MSL forebody to record the surface pressure distribution during atmospheric entry. These measured surface pressures are used to generate estimates of atmospheric quantities based on modeled surface pressure distributions. In particular, the quantities to be estimated from the MEADS pressure measurements include the total pressure, dynamic pressure, Mach number, angle of attack, and angle of sideslip. Secondary objectives are to estimate atmospheric winds by coupling the pressure measurements with the on-board Inertial Measurement Unit (IMU) data. This paper provides details of the algorithm development, MEADS system performance based on calibration, and uncertainty analysis for the aerodynamic and atmospheric quantities of interest. The work presented here is part of the MEDLI performance pre-flight validation and will culminate with processing flight data after Mars entry in 2012.
NASA Technical Reports Server (NTRS)
Miller, J. Houston; Clarke, Greg B.; Melroy, Hilary; Ott, Lesley; Steel, Emily Wilson
2014-01-01
In a collaboration between NASA GSFC and GWU, a low-cost surface instrument is being developed that can continuously monitor key carbon cycle gases in the atmospheric column: carbon dioxide (CO2) and methane (CH4). The instrument is based on a miniaturized laser heterodyne radiometer (LHR) using near infrared (NIR) telecom lasers. Despite relatively weak absorption line strengths in this spectral region, spectrally resolved atmospheric column absorptions for these two molecules fall in the range of 60-80%, and thus sensitive and precise measurements of column concentrations are possible. In the last year, the instrument was deployed for field measurements at Park Falls, Wisconsin; Castle Airport near Atwater, California; and at the NOAA Mauna Loa Observatory in Hawaii. For each subsequent campaign, improvement in the figures of merit for the instrument has been observed. In the latest work the absorbance noise is approaching 0.002 optical density (OD) noise on a 1.8 OD signal. An overview of the measurement campaigns and the data retrieval algorithm for the calculation of column concentrations will be presented. For light transmission through the atmosphere, it is necessary to account for the variation of pressure, temperature, composition, and refractive index through the atmosphere, which are all functions of latitude, longitude, time of day, altitude, etc. For temperature, pressure, and humidity profiles with altitude we use the Modern-Era Retrospective Analysis for Research and Applications (MERRA) data. Spectral simulation is accomplished by integrating short-path segments along the trajectory using the SpecSyn spectral simulation suite developed at GW. Column concentrations are extracted by minimizing residuals between the observed and modeled spectra using the Nelder-Mead simplex algorithm. 
We will also present an assessment of uncertainty in the reported concentrations from assumptions made in the meteorological data, LHR instrument and tracker noise, and radio frequency bandwidth and describe additional future goals in instrument development and deployment target
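The retrieval step described above (minimizing observed-minus-modeled residuals over column concentration) can be sketched on a one-line toy problem. The Gaussian line shape and the "true" column value 0.35 are invented; the real retrieval uses SpecSyn line-by-line spectra and MERRA profiles.

```python
import numpy as np
from scipy.optimize import minimize

nu = np.linspace(-1.0, 1.0, 101)        # relative frequency grid
line = np.exp(-nu**2 / 0.02)            # toy normalized absorption line shape
observed = np.exp(-0.35 * line)         # synthetic transmission, "true" column 0.35

def misfit(p):
    # Beer-Lambert transmission for a trial column amount p[0].
    modeled = np.exp(-p[0] * line)
    return np.sum((modeled - observed) ** 2)

# Nelder-Mead retrieval of the column amount from the spectrum.
res = minimize(misfit, x0=[0.1], method="Nelder-Mead")
```

In the real instrument the same residual minimization runs per retrieval window, with the forward model replacing the single synthetic line.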
NASA Astrophysics Data System (ADS)
Döring, Michael; Kobashi, Takuro; Kindler, Philippe; Guillevic, Myriam; Leuenberger, Markus
2016-04-01
In order to study Northern Hemisphere (NH) climate interactions and variability, access to high-resolution surface temperature records of the Greenland ice sheet is essential. For example, understanding the causes of changes in the strength of the Atlantic meridional overturning circulation (AMOC) and the related effects for the NH [Broecker et al. (1985); Rahmstorf (2002)], or the origin and processes leading to the so-called Dansgaard-Oeschger events under glacial conditions [Johnsen et al. (1992); Dansgaard et al. (1982)], demands accurate and reproducible temperature data. To reveal the surface temperature history, it is suitable to use the isotopic composition of nitrogen (δ15N) of ancient air extracted from ice cores drilled on the Greenland ice sheet. The measured δ15N record of an ice core can be used as a paleothermometer because the isotopic composition of nitrogen in the atmosphere is nearly constant on orbital timescales and changes only through firn processes [Severinghaus et al. (1998); Mariotti (1983)]. To reconstruct the surface temperature for a specific drilling site, firn models describing gas and temperature diffusion throughout the ice sheet are required. For this, an existing firn densification and heat diffusion model [Schwander et al. (1997)] is used. Thereby, a theoretical δ15N record is generated for different temperature and accumulation-rate scenarios and compared with measurement data in terms of the mean square error (MSE), which finally leads to an optimization problem, namely finding the minimal MSE. The goal of the presented study is a Matlab-based automation of this inverse modelling procedure. The crucial point is to find the temperature and accumulation-rate input time series which minimize the MSE. For that, we follow two approaches. The first is a Monte Carlo type input generator which varies each point in the input time series and calculates the MSE.
Then the solutions that fulfil a given limit, or the best solutions for a given number of iterations, are saved and used as new input for the next model run. This procedure is repeated until the MSE falls below a given threshold (e.g. the analytical error of the measurement data). For the second approach, different Matlab-based derivative-free optimization algorithms (DFOAs), i.a. the Nelder-Mead simplex method [Lagarias et al. (1998)], are studied in an adaptation of the manual method of Kindler et al. (2013). Here the DFOAs are used to find the values of the temperature sensitivity and offset, used to calculate the surface temperature from the oxygen isotope records of the ice core water samples, that minimize the MSE. Finally, a comparison to surface temperature records obtained with other methods, for glacial as well as Holocene data, is planned. References: Broecker, W. S., Peteet, D., and Rind, D. (1985). Does the ocean-atmosphere system have more than one stable mode of operation? Nature, 315(6014):21-26. Dansgaard, W., Clausen, H., Gundestrup, N., Hammer, C., Johnsen, S., Kristinsdottir, P., and Reeh, N. (1982). A new Greenland deep ice core. Science, 218(4579):1273-1277. Johnsen, S. J., Clausen, H. B., Dansgaard, W., Fuhrer, K., Gundestrup, N., Hammer, C. U., Iversen, P., Jouzel, J., Stauffer, B., and Steffensen, J. P. (1992). Irregular glacial interstadials recorded in a new Greenland ice core. Nature, 359:311-313. Kindler, P., Guillevic, M., Baumgartner, M., Schwander, J., Landais, A., and Leuenberger, M. (2013). NGRIP temperature reconstruction from 10 to 120 kyr b2k. Clim. Past, 9:4099-4143. Lagarias, J. C., Reeds, J. A., Wright, M. H., and Wright, P. E. (1998). Convergence properties of the Nelder-Mead simplex method in low dimensions. SIAM Journal on Optimization, 9(1):112-147. Mariotti, A. (1983). Atmospheric nitrogen is a reliable standard for natural 15N abundance measurements. Nature, 303:685-687. Rahmstorf, S. (2002). 
Ocean circulation and climate during the past 120,000 years. Nature,419(6903):207-214. Severinghaus, J. P., Sowers, T., Brook, E. J., Alley, R. B., and Bender, M. L. (1998). Timing of abrupt climate change at the end of the Younger Dryas interval from thermally fractionated gases in polar ice. Nature, 391:141-146. Schwander, J., Sowers, T., Barnola, J., Blunier, T., Fuchs, A., and Malaizé, B. (1997). Age scale of the air in the summit ice: implication for glacial-interglacial temperature change. J. Geophys. Res-Atmos., 102(D16):19483-19493.
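The first (Monte Carlo) approach of this abstract reduces to a simple accept-if-better perturbation loop over the input time series. The sketch below uses a four-point stand-in series with arbitrary target values; the real target is the measured δ15N record run through the firn model.

```python
import random

def mse(series, target):
    # Mean square error between a trial input series and the target record.
    return sum((s - t) ** 2 for s, t in zip(series, target)) / len(series)

target = [0.30, 0.28, 0.27, 0.25]     # stand-in "measured" record (arbitrary)
current = [0.0, 0.0, 0.0, 0.0]        # initial input-series guess
best = mse(current, target)

random.seed(0)
for _ in range(5000):
    # Perturb every point of the series and keep only improving trials.
    trial = [c + random.gauss(0.0, 0.05) for c in current]
    score = mse(trial, target)
    if score < best:
        current, best = trial, score
```

The DFOA approach replaces this random walk with structured simplex moves, which is why it typically needs far fewer forward-model evaluations.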
Gough, S; Flynn, O; Hack, C J; Marchant, R
1996-09-01
The use of molasses as a substrate for ethanol production by the thermotolerant yeast Kluyveromyces marxianus var. marxianus was investigated at 45 degrees C. A maximum ethanol concentration of 7.4% (v/v) was produced from unsupplemented molasses at a concentration of 23% (v/v). The effect on ethanol production of increasing the sucrose concentration in 23% (v/v) molasses was determined. Increased sucrose concentration had a detrimental effect on the final ethanol produced similar to that of an increase in molasses concentration, indicating that the effect may be due to increased osmotic activity rather than to other components of the molasses. The optimum concentrations of the supplements nitrogen, magnesium, potassium and fatty acid for the maximum ethanol production rate were determined using the Nelder and Mead (Computer J 7:308-313, 1965) simplex optimisation method. The optimum concentrations were 0.576 g l⁻¹ magnesium sulphate, 0.288 g l⁻¹ potassium dihydrogen phosphate and 0.36% (v/v) linseed oil. Added nitrogen in the form of ammonium sulphate did not affect the ethanol production rate.
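Simplex optimisation of supplement concentrations can be sketched numerically by treating the reported optima as the peak of a hypothetical smooth response surface; the surface itself is invented, since in the study each "function evaluation" was a fermentation experiment.

```python
import numpy as np
from scipy.optimize import minimize

# Reported optima used as the peak of a hypothetical response surface:
OPT = np.array([0.576, 0.288, 0.36])   # MgSO4 (g/l), KH2PO4 (g/l), linseed oil (% v/v)

def neg_rate(c):
    # Stand-in for the negative ethanol production rate (to be minimized).
    return float(np.sum(((c - OPT) / OPT) ** 2))

# Nelder-Mead drives the three supplement levels toward the optimum.
res = minimize(neg_rate, x0=[0.3, 0.2, 0.2], method="Nelder-Mead")
```

Because each evaluation is an experiment, the simplex method's frugality with function calls is exactly what makes it attractive for this kind of medium optimisation.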
Internal rotation of 13 low-mass low-luminosity red giants in the Kepler field
NASA Astrophysics Data System (ADS)
Triana, S. A.; Corsaro, E.; De Ridder, J.; Bonanno, A.; Pérez Hernández, F.; García, R. A.
2017-06-01
Context. The Kepler space telescope has provided time series of red giants of such unprecedented quality that a detailed asteroseismic analysis becomes possible. For a limited set of about a dozen red giants, the observed oscillation frequencies obtained by peak-bagging together with the most recent pulsation codes allowed us to reliably determine the core/envelope rotation ratio. The results so far show that the current models are unable to reproduce the rotation ratios, predicting higher values than what is observed and thus indicating that an efficient angular momentum transport mechanism should be at work. Here we provide an asteroseismic analysis of a sample of 13 low-luminosity low-mass red giant stars observed by Kepler during its first nominal mission. These targets form a subsample of the 19 red giants studied previously, which not only have a large number of extracted oscillation frequencies, but also unambiguous mode identifications. Aims: We aim to extend the sample of red giants for which internal rotation ratios obtained by theoretical modeling of peak-bagged frequencies are available. We also derive the rotation ratios using different methods, and compare the results of these methods with each other. Methods: We built seismic models using a grid search combined with a Nelder-Mead simplex algorithm and obtained rotation averages employing Bayesian inference and inversion methods. We compared these averages with those obtained using a previously developed model-independent method. Results: We find that the cores of the red giants in this sample are rotating 5 to 10 times faster than their envelopes, which is consistent with earlier results. The rotation rates computed from the different methods show good agreement for some targets, while some discrepancies exist for others.
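The "grid search combined with a Nelder-Mead simplex" strategy mentioned in the Methods can be sketched on a toy two-parameter χ² surface: a coarse grid locates a promising basin, then the simplex refines the minimum. The χ² function below is invented, not a stellar-model fit.

```python
import itertools
import numpy as np
from scipy.optimize import minimize

def chi2(p):
    m, a = p   # toy stand-ins for two stellar-model parameters
    return (m - 1.2) ** 2 + (a - 4.5) ** 2 + 0.1 * np.sin(5.0 * m) ** 2

# Stage 1: coarse grid search over the parameter box.
grid = itertools.product(np.linspace(0.8, 1.6, 5), np.linspace(1.0, 8.0, 8))
start = min(grid, key=chi2)

# Stage 2: local Nelder-Mead refinement from the best grid node.
res = minimize(chi2, start, method="Nelder-Mead")
```

The grid stage guards against the simplex being trapped far from the global minimum, while the simplex stage recovers the resolution the coarse grid lacks.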
Ramanujam, Nedunchelian; Kaliappan, Manivannan
2016-01-01
Nowadays, automatic multidocument text summarization systems can successfully retrieve summary sentences from the input documents, but they have many limitations, such as inaccurate extraction of essential sentences, low coverage, poor coherence among the sentences, and redundancy. This paper introduces a new timestamp approach with a Naïve Bayesian classification approach for multidocument text summarization. The timestamp gives the summary an ordered look, which achieves a coherent-looking summary. It extracts the more relevant information from the multiple documents. A scoring strategy is also used to calculate scores for the words to obtain the word frequency. Linguistic quality is estimated in terms of readability and comprehensibility. In order to show the efficiency of the proposed method, this paper presents a comparison between the proposed method and the existing MEAD algorithm. The timestamp procedure is also applied to the MEAD algorithm and the results are compared with the proposed method. The results show that the proposed method takes less time than the existing MEAD algorithm to execute the summarization process. Moreover, the proposed method achieves better precision, recall, and F-score than the existing clustering with lexical chaining approach. PMID:27034971
Regularization of nonlinear decomposition of spectral x-ray projection images.
Ducros, Nicolas; Abascal, Juan Felipe Perez-Juste; Sixou, Bruno; Rit, Simon; Peyrin, Françoise
2017-09-01
Exploiting the x-ray measurements obtained in different energy bins, spectral computed tomography (CT) has the ability to recover the 3-D description of a patient in a material basis. This may be achieved by solving two subproblems, namely the material decomposition and the tomographic reconstruction problems. In this work, we address the material decomposition of spectral x-ray projection images, which is a nonlinear ill-posed problem. Our main contribution is to introduce a material-dependent spatial regularization in the projection domain. The decomposition problem is solved iteratively using a Gauss-Newton algorithm that can benefit from fast linear solvers. A Matlab implementation is available online. The proposed regularized weighted least squares Gauss-Newton algorithm (RWLS-GN) is validated on numerical simulations of a thorax phantom made of up to five materials (soft tissue, bone, lung, adipose tissue, and gadolinium), which is scanned with a 120 kV source and imaged by a 4-bin photon counting detector. To evaluate the performance of our algorithm, different scenarios are created by varying the number of incident photons, the concentration of the marker and the configuration of the phantom. The RWLS-GN method is compared to the reference maximum likelihood Nelder-Mead algorithm (ML-NM). The convergence of the proposed method and its dependence on the regularization parameter are also studied. We show that material decomposition is feasible with the proposed method and that it converges in a few iterations. Material decomposition with ML-NM was very sensitive to noise, leading to decomposed images highly affected by noise and artifacts even in the best-case scenario. The proposed method was less sensitive to noise and improved the contrast-to-noise ratio of the gadolinium image. Results were superior to those provided by ML-NM in terms of image quality, and decomposition was 70 times faster. 
For the assessed experiments, material decomposition was possible with the proposed method when the number of incident photons was equal to or larger than 10⁵ and when the marker concentration was equal to or larger than 0.03 g·cm⁻³. The proposed method efficiently solves the nonlinear decomposition problem for spectral CT, which opens up new possibilities such as material-specific regularization in the projection domain and a parallelization framework in which projections are solved in parallel. © 2017 American Association of Physicists in Medicine.
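The Gauss-Newton iteration at the heart of RWLS-GN can be illustrated on a minimal two-parameter nonlinear least-squares problem. The exponential model, data, and starting point are invented; the paper's version adds weights and the material-dependent regularization term.

```python
import numpy as np

# Toy two-parameter nonlinear model: y(t) = a * exp(-b t).
t = np.linspace(0.0, 2.0, 20)
data = 2.0 * np.exp(-1.5 * t)          # noise-free synthetic measurements

def residual(p):
    a, b = p
    return a * np.exp(-b * t) - data

def jacobian(p):
    a, b = p
    e = np.exp(-b * t)
    return np.column_stack([e, -a * t * e])

p = np.array([1.5, 1.2])               # starting guess
for _ in range(30):
    # Gauss-Newton update: solve the normal equations of the linearized problem.
    J, r = jacobian(p), residual(p)
    p = p - np.linalg.solve(J.T @ J, J.T @ r)
```

Each step only requires a linear solve, which is why the method "can benefit from fast linear solvers", in contrast to the derivative-free ML-NM baseline.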
Generating High-Resolution Lake Bathymetry over Lake Mead using the ICESat-2 Airborne Simulator
NASA Astrophysics Data System (ADS)
Li, Y.; Gao, H.; Jasinski, M. F.; Zhang, S.; Stoll, J.
2017-12-01
Precise lake bathymetry (i.e., elevation/contour) mapping is essential for optimal decision making in water resources management. Although the advancement of remote sensing has made it possible to monitor global reservoirs from space, most existing studies focus on estimating the elevation, area, and storage of reservoirs, not on estimating the bathymetry. This limitation is attributed to the low spatial resolution of satellite altimeters. With the significant enhancement of ICESat-2 (the Ice, Cloud, and land Elevation Satellite-2), which is scheduled to launch in 2018, producing satellite-based bathymetry becomes feasible. Here we present a pilot study for deriving the bathymetry of Lake Mead by combining Landsat area estimations with airborne elevation data from the prototype of ICESat-2, the Multiple Altimeter Beam Experimental Lidar (MABEL). First, an ISODATA classifier was adopted to extract the lake area from Landsat images over the period from 1982 to 2017. Then the lake area classifications were paired with MABEL elevations to establish an area-elevation (AE) relationship, which in turn was applied to the classification contour map to obtain the bathymetry. Finally, the Lake Mead bathymetry image was embedded into the Shuttle Radar Topography Mission (SRTM) Digital Elevation Model (DEM) to replace the existing constant values. Validation against sediment survey data indicates that the bathymetry derived in this study is reliable. This algorithm has the potential to generate global lake bathymetry when ICESat-2 data become available after next year's launch.
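The area-elevation (AE) step can be sketched as a simple curve fit: pair classified lake areas with lidar water-surface elevations, fit a smooth AE relationship, and evaluate it at the area of each classification contour. All numbers below are hypothetical, not Lake Mead data.

```python
import numpy as np

# Hypothetical paired samples: classified lake area (km^2) vs lidar elevation (m).
area = np.array([300.0, 350.0, 400.0, 450.0, 500.0])
elev = np.array([330.0, 340.0, 349.0, 357.0, 364.0])

# Fit a quadratic area-elevation (AE) relationship.
coeffs = np.polyfit(area, elev, 2)

# Assign an elevation to each area contour extracted from the classifications.
contour_areas = np.array([320.0, 480.0])
bathy_elev = np.polyval(coeffs, contour_areas)
```

Stacking the contour polygons, each tagged with its AE-derived elevation, yields the bathymetric surface that is then embedded into the SRTM DEM.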
Kahoun, David; Rezková, Sona; Veskrnová, Katerina; Královský, Josef; Holcapek, Michal
2008-08-15
The objective of this study was the determination of 25 phenolic compounds in different mead (honey wine) samples using high performance liquid chromatography (HPLC) with coulometric-array detection and, in the case of hydroxymethylfurfural, with UV detection. The method was optimized with respect to both the separation selectivity of the individual phenolic compounds and the maximum sensitivity achievable with electrochemical detection. Method development included optimization of the mobile phase composition, pH value, gradient elution conditions, and flow rate using a window-diagram approach. The developed method was used to determine limits of detection and limits of quantitation for the individual compounds. The linearity of the calibration curves and the accuracy and precision (intra- and inter-day) at three concentration levels (low, middle, and high) were verified. Mead samples were diluted with the mobile phase at ratios of 1:1 to 1:50 depending on the concentration and filtered through a PTFE filter without any other sample pre-treatment. Phenolic compound concentrations were determined in 50 real mead samples and correlated with mead composition and hydroxymethylfurfural concentration. The most frequently occurring compounds were protocatechuic acid and vanillic acid (both present in 98% of samples); the least frequent were (+)-catechin (10% of samples) and sinapic acid (12% of samples). Vanillin and ethylvanillin, which are used as artificial additives for taste improvement, were found in 60% and 42% of samples, respectively. The hydroxymethylfurfural concentration, an indicator of honey quality, ranged from 2.47 to 158 mg/L. The method is applicable to the determination of 25 phenolic compounds in mead, honey, and related natural samples.
Use of «MLCM3» software for flash flood forecasting
NASA Astrophysics Data System (ADS)
Sokolova, Daria; Kuzmin, Vadim
2017-04-01
Accurate and timely flash flood forecasting, especially in ungauged and poorly gauged basins, is one of the most important and challenging problems facing the international hydrological community. Under a changing climate and variable anthropogenic impact on river basins, and given the low density of the surface hydrometeorological network, flash flood forecasting based on "traditional" physically based, conceptual, or statistical hydrological models often becomes inefficient. Unfortunately, most river basins in Russia are poorly gauged or ungauged; a lack of hydrogeological data is also typical, especially in remote regions of Siberia. However, the developing economy and the need for population safety require warnings based on reliable forecasts. For this purpose, a new hydrological model, MLCM3 (Multi-Layer Conceptual Model, 3rd generation), has been developed at the Russian State Hydrometeorological University. MLCM3 is a rainfall-runoff model with a flexible structure and a high level of conceptualization. Model forcing includes precipitation and evaporation data, coming primarily from NWP model output. Water reaches the outlet through several layers; the number of layers, two parameters for each layer (thickness and infiltration rate), and the surface flow velocity (active when the top layer is saturated) are optimized. The main advantage of MLCM3, in comparison to the Sacramento Soil Moisture Accounting model (SAC-SMA), the Australian Water Balance Model (AWBM), the Soil Moisture Accounting and Routing (SMAR) model, and similar models, is that its automatic calibration is very fast and efficient with less input information. For instance, compared to SAC-SMA, which is calibrated using either the Shuffled Complex Evolution algorithm (SCE-UA) or Stepwise Line Search (SLS), automatically calibrated MLCM3 gives better or comparable results without using any a priori data or substantial processor resources.
This advantage allows MLCM3 to be used for very fast streamflow prediction in many basins. When assimilated NWP model output is used to force the model, forecast accuracy is acceptable and sufficient for automatic warnings. In comparison to the 2nd generation of the model, a very useful new option has been added: it is now possible to set a variable infiltration rate for the top layer, which is quite promising for spring flood modeling. (More numerical experiments with snow melting remain to be performed; the results will be reported later.) New software for MLCM3 was recently developed, with familiar and understandable options. The model input can be formed in manual or automatic mode. Manual or automatic calibration can be performed using an optimization algorithm developed specifically for this model, the Nelder-Mead algorithm, or SLS. For calibration, the multi-scale objective function (MSOF) proposed by Koren is used; it has shown very high efficiency when the model forcing data carry high uncertainty. Other objective functions can also be used, such as the mean square error and the Nash-Sutcliffe criterion. The model has shown good results in more than 50 tested basins.
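To illustrate the calibration step, here is a minimal sketch of Nelder-Mead calibration of a toy single-layer conceptual runoff model with SciPy. The model, its two parameters, and the RMSE objective are simplified stand-ins: MLCM3's multi-layer structure and Koren's MSOF are not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
rain = rng.gamma(2.0, 2.0, size=200)   # synthetic precipitation forcing

def linear_reservoir(params, rain):
    """Toy single-layer conceptual model: outflow coefficient k and
    infiltration cap f (stand-ins for MLCM3's layer parameters)."""
    k, f = params
    storage, flow = 0.0, []
    for p in rain:
        storage += min(p, f)           # infiltration-limited input
        q = k * storage                # linear outflow
        storage -= q
        flow.append(q)
    return np.array(flow)

# "Observed" flow generated from known parameters, to be recovered below.
obs = linear_reservoir([0.3, 3.0], rain)

def rmse(params):
    k, f = params
    if not (0.0 < k < 1.0 and f > 0.0):   # keep the simplex in a physical range
        return 1e9
    return float(np.sqrt(np.mean((linear_reservoir(params, rain) - obs) ** 2)))

res = minimize(rmse, x0=[0.5, 1.0], method="Nelder-Mead")
print(res.x)   # approaches the true parameters [0.3, 3.0]
```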
Spacing trials using the Nelder Wheel
Walter B. Mark
1983-01-01
The Nelder Wheel is a single tree systematic experimental design. Its major application is for plantation spacing experiments. The design allows for the testing of a number of spacings in a small area. Data obtained is useful in determining the response of stem diameter and crown diameter to spacing. Data is not compatible with data from conventional plots unless...
Data Retrieval Algorithm and Uncertainty Analysis for a Miniaturized, Laser Heterodyne Radiometer
NASA Astrophysics Data System (ADS)
Miller, J. H.; Melroy, H.; Wilson, E. L.; Clarke, G. B.
2013-12-01
In a collaboration between NASA Goddard Space Flight Center and George Washington University, a low-cost surface instrument is being developed that can continuously monitor key carbon cycle gases in the atmospheric column: carbon dioxide (CO2) and methane (CH4). The instrument is based on a miniaturized laser heterodyne radiometer (LHR) using near-infrared (NIR) telecom lasers. Despite relatively weak absorption line strengths in this spectral region, spectrally resolved atmospheric column absorptions for these two molecules fall in the range of 60-80%, so sensitive and precise measurements of column concentrations are possible. Further, because the LHR technique has the potential for sub-Doppler spectral resolution, the possibility exists for interrogating line shapes to extract altitude profiles of the greenhouse gases. From late 2012 through 2013, the instrument was deployed for a variety of field measurements, including at Park Falls, Wisconsin; Castle Airport near Atwater, California; and the NOAA Mauna Loa Observatory in Hawaii. With each subsequent campaign, improvement in the instrument's figures of merit (notably spectral sweep time and absorbance noise) has been observed; the absorbance noise is approaching 0.002 optical density (OD) on a 1.8 OD signal. This presentation gives an overview of the measurement campaigns in the context of the data retrieval algorithm under development at GW for calculating column concentrations from them. For light transmission through the atmosphere, it is necessary to account for the variation of pressure, temperature, composition, and refractive index through the atmosphere, all of which are functions of latitude, longitude, time of day, altitude, etc. In our initial work, we began with coding developed under the LOWTRAN and MODTRAN programs by the AFOSR (and others).
We also assumed temperature and pressure profiles from the 1976 US Standard Atmosphere and used the US Naval Observatory database for local zenith angle calculations to initialize path trajectory calculations. In the newest version of the retrieval algorithm, the Python module PySolar is used for the path geometry calculations. For temperature, pressure, and humidity profiles with altitude, we use Modern-Era Retrospective Analysis for Research and Applications (MERRA) data, compiled every 6 hours. Spectral simulation is accomplished by integrating short path segments along the trajectory using the SpecSyn spectral simulation suite developed at GW. Column concentrations are extracted by minimizing residuals between the observed and modeled spectra using the Nelder-Mead simplex algorithm as implemented in the SciPy Python module. We will also present an assessment of the uncertainty in the reported concentrations arising from assumptions in the meteorological data, LHR instrument and tracker noise, and radio frequency bandwidth, and describe future goals for instrument development and deployment targets.
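The retrieval step (Nelder-Mead minimization of observed-minus-modeled residuals via SciPy) can be sketched with a toy forward model. The single Gaussian absorption line below is an illustrative stand-in for the SpecSyn simulation, and all numbers are synthetic.

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic frequency grid and a single Gaussian absorption line whose
# depth scales with column concentration (a stand-in for the real
# line-by-line forward model).
nu = np.linspace(-5.0, 5.0, 400)

def model_spectrum(column, center, width):
    return np.exp(-column * np.exp(-((nu - center) / width) ** 2))

rng = np.random.default_rng(1)
observed = model_spectrum(1.8, 0.2, 1.1) + rng.normal(0.0, 0.002, nu.size)

def residual(params):
    column, center, width = params
    return float(np.sum((model_spectrum(column, center, width) - observed) ** 2))

# Nelder-Mead simplex minimization of the residuals, as in the abstract.
res = minimize(residual, x0=[1.0, 0.0, 1.0], method="Nelder-Mead")
print(res.x)   # approaches the true values [1.8, 0.2, 1.1]
```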
Chaudhuri, Shomesh E; Merfeld, Daniel M
2013-03-01
Psychophysics generally relies on estimating a subject's ability to perform a specific task as a function of an observed stimulus. For threshold studies, the fitted functions are called psychometric functions. While fitting psychometric functions to data acquired using adaptive sampling procedures (e.g., "staircase" procedures), investigators have encountered a bias in the spread ("slope" or "threshold") parameter that has been attributed to the serial dependency of the adaptive data. Using simulations, we confirm this bias for cumulative Gaussian parametric maximum likelihood fits on data collected via adaptive sampling procedures, and then present a bias-reduced maximum likelihood fit that substantially reduces the bias without reducing the precision of the spread parameter estimate and without reducing the accuracy or precision of the other fit parameters. As a separate topic, we explain how to implement this bias reduction technique using generalized linear model fits as well as other numeric maximum likelihood techniques such as the Nelder-Mead simplex. We then provide a comparison of the iterative bootstrap and observed information matrix techniques for estimating parameter fit variance from adaptive sampling procedure data sets. The iterative bootstrap technique is shown to be slightly more accurate; however, the observed information technique executes in a small fraction (0.005 %) of the time required by the iterative bootstrap technique, which is an advantage when a real-time estimate of parameter fit variance is required.
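As a minimal sketch of the underlying fitting approach, the code below performs a cumulative-Gaussian maximum likelihood fit via the Nelder-Mead simplex on synthetic yes/no data; the bias-reduction modification described in the abstract is not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Synthetic responses from a cumulative-Gaussian observer (mu=0.5, sigma=1.0).
rng = np.random.default_rng(2)
stim = rng.uniform(-3.0, 3.0, 300)
resp = rng.random(300) < norm.cdf(stim, loc=0.5, scale=1.0)

def neg_log_lik(params):
    mu, log_sigma = params            # log-sigma keeps the spread positive
    p = norm.cdf(stim, loc=mu, scale=np.exp(log_sigma))
    p = np.clip(p, 1e-9, 1.0 - 1e-9)  # guard the logarithms
    return -float(np.sum(np.where(resp, np.log(p), np.log(1.0 - p))))

fit = minimize(neg_log_lik, x0=[0.0, 0.0], method="Nelder-Mead")
mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
print(mu_hat, sigma_hat)
```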
76 FR 2579 - Safety Zone; Lake Mead Intake Construction, Lake Mead, Boulder City, NV
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-14
...-AA00 Safety Zone; Lake Mead Intake Construction, Lake Mead, Boulder City, NV AGENCY: Coast Guard, DHS... waters of Lake Mead in support of the construction project for Lake Mead's Intake 3 during the first 6... blasting operations for the placement of a water intake pipe in Lake Mead during the first 6 months of 2011...
75 FR 13232 - Safety Zone; Lake Mead Intake Construction, Lake Mead, Boulder City, NV
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-19
...-AA00 Safety Zone; Lake Mead Intake Construction, Lake Mead, Boulder City, NV AGENCY: Coast Guard, DHS... waters of Lake Mead in support of the construction project for Lake Mead's Intake 3. This safety zone is... for the placement of an Intake Pipe from Lake Mead throughout 2010. This safety zone is necessary to...
IMFIT: A FAST, FLEXIBLE NEW PROGRAM FOR ASTRONOMICAL IMAGE FITTING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erwin, Peter; Universitäts-Sternwarte München, Scheinerstrasse 1, D-81679 München
2015-02-01
I describe a new, open-source astronomical image-fitting program called IMFIT, specialized for galaxies but potentially useful for other sources, which is fast, flexible, and highly extensible. A key characteristic of the program is an object-oriented design that allows new types of image components (two-dimensional surface-brightness functions) to be easily written and added to the program. Image functions provided with IMFIT include the usual suspects for galaxy decompositions (Sérsic, exponential, Gaussian), along with Core-Sérsic and broken-exponential profiles, elliptical rings, and three components that perform line-of-sight integration through three-dimensional luminosity-density models of disks and rings seen at arbitrary inclinations. Available minimization algorithms include Levenberg-Marquardt, Nelder-Mead simplex, and Differential Evolution, allowing trade-offs between speed and decreased sensitivity to local minima in the fit landscape. Minimization can be done using the standard χ² statistic (using either data or model values to estimate per-pixel Gaussian errors, or else user-supplied error images) or Poisson-based maximum-likelihood statistics; the latter approach is particularly appropriate for cases of Poisson data in the low-count regime. I show that fitting low-signal-to-noise ratio galaxy images using χ² minimization and individual-pixel Gaussian uncertainties can lead to significant biases in fitted parameter values, which are avoided if a Poisson-based statistic is used; this is true even when Gaussian read noise is present.
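The low-count bias the author describes can be reproduced with a toy one-parameter fit: estimating a constant level from Poisson data via χ² with data-based Gaussian errors versus the Poisson (Cash) likelihood. This is an illustrative sketch, not IMFIT's implementation.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Low-count Poisson "image": a constant true level of 2 counts per pixel.
rng = np.random.default_rng(3)
true_level = 2.0
data = rng.poisson(true_level, 10000)

def chi2(m):
    # chi^2 with per-pixel Gaussian variances estimated from the data itself.
    sigma2 = np.maximum(data, 1)
    return float(np.sum((data - m) ** 2 / sigma2))

def cash(m):
    # Poisson maximum-likelihood (Cash) statistic, up to a model-free constant.
    return float(2.0 * np.sum(m - data * np.log(m)))

m_chi2 = minimize_scalar(chi2, bounds=(0.1, 10.0), method="bounded").x
m_cash = minimize_scalar(cash, bounds=(0.1, 10.0), method="bounded").x
print(m_chi2, m_cash)   # the chi^2 estimate sits below the Cash estimate
```

The χ² fit is biased low because data-based variances systematically downweight upward-fluctuating pixels, while the Cash fit recovers the sample mean.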
33 CFR 165.T11-281 - Safety Zone; Lake Mead Intake Construction; Lake Mead, Boulder City, NV.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 33 Navigation and Navigable Waters 2 2010-07-01 2010-07-01 false Safety Zone; Lake Mead Intake Construction; Lake Mead, Boulder City, NV. 165.T11-281 Section 165.T11-281 Navigation and Navigable Waters... Coast Guard District § 165.T11-281 Safety Zone; Lake Mead Intake Construction; Lake Mead, Boulder City...
2017-02-01
Ames Women's Influence Network (WIN) Hidden Figures talk with "Computers" Carolyn Hofstetter and Carol Mead co-sponsored by the AAAG. Clockwise: Jack Boyd, Miss Mead (daughter of Carol Mead), Carol Mead, and Carolyn Hofstetter
Gómez, Pablo; Patel, Rita R.; Alexiou, Christoph; Bohr, Christopher; Schützenberger, Anne
2017-01-01
Motivation: The human voice is generated in the larynx by the two oscillating vocal folds. Owing to the limited space and accessibility of the larynx, detailed endoscopic investigation of the actual phonatory process is challenging; hence the biomechanics of the human phonatory process are not yet fully understood. We therefore adapt a mathematical model of the vocal folds to recorded vocal fold oscillations to quantify gender- and age-related differences expressed by the computed biomechanical model parameters. Methods: The vocal fold dynamics are visualized by laryngeal high-speed videoendoscopy (4000 fps). A total of 33 healthy young subjects (16 females, 17 males) and 11 elderly subjects (5 females, 6 males) were recorded. A numerical two-mass model is adapted to the recorded vocal fold oscillations by varying the model masses, stiffness, and subglottal pressure. For adapting the model to the recorded vocal fold dynamics, three different optimization algorithms (Nelder–Mead, Particle Swarm Optimization, and Simulated Bee Colony) in combination with three cost functions were assessed for applicability. Gender differences and age-related kinematic differences reflected by the model parameters were analyzed. Results and conclusion: The biomechanical model in combination with numerical optimization techniques allowed phonatory behavior to be simulated and the laryngeal parameters involved to be quantified. All three optimization algorithms showed promising results; however, only one cost function proved suitable for this optimization task. The resulting model parameters reflect the phonatory biomechanics well for men and women and show quantitative age- and gender-specific differences. The model parameters for younger females and males showed lower subglottal pressures, lower stiffness, and higher masses than those of the corresponding elderly groups.
Females exhibited higher subglottal pressures, smaller oscillation masses, and larger stiffness than the corresponding similar-aged male groups. Optimizing numerical models toward vocal fold oscillations is useful for identifying the underlying laryngeal components controlling the phonatory process. PMID:29121085
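As an illustrative sketch of the model-adaptation step, the code below fits two parameters of a toy single-mass oscillator (a drastic simplification of the two-mass model) to a synthetic "recorded" trajectory via Nelder-Mead; all quantities and units are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for the two-mass vocal fold model: a single oscillating mass
# with fixed (hypothetical) mass m0. Stiffness k sets the frequency and a
# pressure-like parameter P sets the amplitude; both are recovered from a
# synthetic trajectory, mirroring the model-adaptation step.
m0 = 0.1                                  # assumed fixed mass (arbitrary units)
t = np.linspace(0.0, 0.01, 40)            # 10 ms sampled at 4000 fps

def trajectory(P, k):
    return (P / k) * np.cos(np.sqrt(k / m0) * t)

recorded = trajectory(800.0, 2.0e5)       # synthetic "high-speed video" data

def cost(params):
    P, k = params
    if P <= 0.0 or k <= 0.0:              # keep the simplex in a physical range
        return 1e9
    # residuals scaled up so they exceed Nelder-Mead's default tolerances
    return float(np.sum(((trajectory(P, k) - recorded) * 1e3) ** 2))

fit = minimize(cost, x0=[600.0, 1.8e5], method="Nelder-Mead",
               options={"maxfev": 2000, "fatol": 1e-12, "xatol": 1e-3})
P_hat, k_hat = fit.x
print(P_hat, k_hat)
```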
77 FR 58438 - Notice of Request To Release Airport Property
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-20
... DEPARTMENT OF TRANSPORTATION Federal Aviation Administration Notice of Request To Release Airport... Release Airport Property at the Meade Municipal Airport (MEJ), Meade, KS. SUMMARY: The FAA proposes to rule and invites public comment on the release of land at the Meade Municipal Airport (MEJ), Meade...
Global parameter estimation for thermodynamic models of transcriptional regulation.
Suleimenov, Yerzhan; Ay, Ahmet; Samee, Md Abul Hassan; Dresch, Jacqueline M; Sinha, Saurabh; Arnosti, David N
2013-07-15
Deciphering the mechanisms involved in gene regulation holds the key to understanding the control of central biological processes, including human disease, population variation, and the evolution of morphological innovations. New experimental techniques including whole genome sequencing and transcriptome analysis have enabled comprehensive modeling approaches to study gene regulation. In many cases, it is useful to be able to assign biological significance to the inferred model parameters, but such interpretation should take into account features that affect these parameters, including model construction and sensitivity, the type of fitness calculation, and the effectiveness of parameter estimation. This last point is often neglected, as estimation methods are often selected for historical reasons or for computational ease. Here, we compare the performance of two parameter estimation techniques broadly representative of local and global approaches, namely, a quasi-Newton/Nelder-Mead simplex (QN/NMS) method and a covariance matrix adaptation-evolutionary strategy (CMA-ES) method. The estimation methods were applied to a set of thermodynamic models of gene transcription applied to regulatory elements active in the Drosophila embryo. In terms of overall fit, the global CMA-ES method performed significantly better than the local QN/NMS method on high-quality data sets, but this difference was negligible on lower-quality data sets with increased noise or on data sets simplified by stringent thresholding. Our results suggest that the choice of parameter estimation technique for evaluating gene expression models depends on the quality of the data, the nature of the models, and the aims of the modeling effort. Copyright © 2013 Elsevier Inc. All rights reserved.
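The local-versus-global contrast can be illustrated on a standard multimodal test function. Here SciPy's differential evolution stands in for CMA-ES (which is not bundled with SciPy), so this is a sketch of the general point rather than the paper's exact comparison.

```python
import numpy as np
from scipy.optimize import minimize, differential_evolution

def rastrigin(x):
    """Standard multimodal test function; global minimum 0 at the origin."""
    x = np.asarray(x)
    return float(10.0 * x.size + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x)))

# A local simplex search started far from the optimum gets trapped in a
# nearby local minimum; a global method escapes the local basins.
x0 = np.array([3.2, -2.7])
local = minimize(rastrigin, x0, method="Nelder-Mead")
global_ = differential_evolution(rastrigin, bounds=[(-5.12, 5.12)] * 2, seed=0)

print(local.fun, global_.fun)   # local value stays well above the global one
```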
2017-02-01
Ames Women's Influence Network (WIN) Hidden Figures talk with "Computers" Carolyn Hofstetter and Carol Mead co-sponsored by the AAAG. Group photo Front Row left to right; Carolyn Hofstetter, Jack Boyd, Carol Mead Middle Row: Kathy Lee, Annette Randall, Trincella Lewis, Ann Mead (daughter to Carol Mead), Vanessa Kuroda, Netti Halcomb Roozeboom Back Row; Dr Barbara Miller, Dr Wendy Okolo, Denise Snow, Leedjia Svec, Erika Rodriquez, Rhonda Baker, Ray Gilstrap, Glenn Bugos
ERIC Educational Resources Information Center
Biesta, Gert J. J.
1999-01-01
George Mead's posthumously published works express a genuine philosophy of education. This paper contributes to the reconstruction of Mead's educational philosophy, examining a typescript of student notes from his course on philosophy of education at the University of Chicago. The essay discusses the typescript against the backdrop of Mead's…
33 CFR 162.220 - Hoover Dam, Lake Mead, and Lake Mohave (Colorado River), Ariz.-Nev.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 33 Navigation and Navigable Waters 2 2011-07-01 2011-07-01 false Hoover Dam, Lake Mead, and Lake... REGULATIONS § 162.220 Hoover Dam, Lake Mead, and Lake Mohave (Colorado River), Ariz.-Nev. (a) Lake Mead and... the axis of Hoover Dam and that portion of Lake Mohave (Colorado River) extending 4,500 feet...
33 CFR 162.220 - Hoover Dam, Lake Mead, and Lake Mohave (Colorado River), Ariz.-Nev.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 33 Navigation and Navigable Waters 2 2012-07-01 2012-07-01 false Hoover Dam, Lake Mead, and Lake... REGULATIONS § 162.220 Hoover Dam, Lake Mead, and Lake Mohave (Colorado River), Ariz.-Nev. (a) Lake Mead and... the axis of Hoover Dam and that portion of Lake Mohave (Colorado River) extending 4,500 feet...
33 CFR 162.220 - Hoover Dam, Lake Mead, and Lake Mohave (Colorado River), Ariz.-Nev.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 33 Navigation and Navigable Waters 2 2013-07-01 2013-07-01 false Hoover Dam, Lake Mead, and Lake... REGULATIONS § 162.220 Hoover Dam, Lake Mead, and Lake Mohave (Colorado River), Ariz.-Nev. (a) Lake Mead and... the axis of Hoover Dam and that portion of Lake Mohave (Colorado River) extending 4,500 feet...
33 CFR 162.220 - Hoover Dam, Lake Mead, and Lake Mohave (Colorado River), Ariz.-Nev.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 33 Navigation and Navigable Waters 2 2014-07-01 2014-07-01 false Hoover Dam, Lake Mead, and Lake... REGULATIONS § 162.220 Hoover Dam, Lake Mead, and Lake Mohave (Colorado River), Ariz.-Nev. (a) Lake Mead and... the axis of Hoover Dam and that portion of Lake Mohave (Colorado River) extending 4,500 feet...
Franchetti, Palmarisa; Cappellacci, Loredana; Pasqualini, Michela; Petrelli, Riccardo; Jayaprakasan, Vetrichelvan; Jayaram, Hiremagalur N; Boyd, Donald B; Jain, Manojkumar D; Grifantini, Mario
2005-03-15
Thiazole-4-carboxamide adenine dinucleotide (TAD) analogues T-2'-MeAD (1) and T-3'-MeAD (2), containing a methyl group at the ribose 2'-C- and 3'-C-position of the adenosine moiety, respectively, were prepared as potential selective human inosine monophosphate dehydrogenase (IMPDH) type II inhibitors. The synthesis of the heterodinucleotides was carried out by CDI-catalyzed coupling of unprotected 2'-C-methyl- or 3'-C-methyl-adenosine 5'-monophosphate with 2',3'-O-isopropylidene-tiazofurin 5'-monophosphate, followed by deisopropylidenation. Biological evaluation of dinucleotides 1 and 2 as inhibitors of recombinant human IMPDH type I and type II showed good activity. Inhibition of both isoenzymes by T-2'-MeAD and T-3'-MeAD was noncompetitive with respect to the NAD substrate. Binding of T-3'-MeAD was comparable to that of the parent compound TAD, while T-2'-MeAD proved to be a weaker inhibitor. However, no significant difference was found in the inhibition of the two IMPDH isoenzymes. T-2'-MeAD and T-3'-MeAD were found to inhibit the growth of K562 cells (IC50 values of 30.7 and 65.0 μM, respectively).
Effects of Mead Wort Heat Treatment on the Mead Fermentation Process and Antioxidant Activity.
Czabaj, Sławomir; Kawa-Rygielska, Joanna; Kucharska, Alicja Z; Kliks, Jarosław
2017-05-14
The effects of mead wort heat treatment on the mead fermentation process and antioxidant activity were tested. The experiment was conducted with two different honeys (multiflorous and honeydew) collected from the Lower Silesia region of Poland. Heat treatment was performed using either a traditional technique (gentle boiling) or the more commonly used pasteurization, alongside an untreated control. During the experiment, fermentation dynamics were monitored using high performance liquid chromatography with refractive index detection (HPLC-RID). Total antioxidant capacity (TAC) and total phenolic content (TPC) were estimated for the worts and meads using UV/Vis spectrophotometric analysis, and the formation of 5-hydroxymethylfurfural (HMF) was monitored by HPLC. Heat treatment had a great impact on the final antioxidant capacity of the meads.
G. H. Mead in the history of sociological ideas.
da Silva, Filipe Carreira
2006-01-01
My aim is to discuss the history of the reception of George Herbert Mead's ideas in sociology. After discussing the methodological debate between presentism and historicism, I address the interpretations of those responsible for Mead's inclusion in the sociological canon: Herbert Blumer, Jürgen Habermas, and Hans Joas. In the concluding section, I assess these reconstructions of Mead's thought and suggest an alternative more consistent with my initial methodological remarks. In particular, I advocate a reconstruction of Mead's ideas that apprehends simultaneously its evolution over time and its thematic breadth. Such a historically minded reconstruction can be not only a useful corrective to possible anachronisms incurred by contemporary social theorists, but also a fruitful resource for their theory-building endeavors. Only then can meaningful and enriching dialogue with Mead begin. Copyright 2006 Wiley Periodicals, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
N /A
2003-10-27
The U.S. Highway 93 (U.S. 93) Hoover Dam Bypass Project calls for the U.S. Department of Energy (DOE) Western Area Power Administration (Western) to remove its Arizona and Nevada (A&N) Switchyard. As a result of this action, Western must reconfigure its existing electrical transmission system in the Hoover Dam area. Western proposes to double-circuit a portion of the Hoover-Mead No.5 and No.7 230-kV Transmission Lines with the Henderson-Mead No.1 Transmission Line (see Figure 1-1). Double-circuiting is the placement of two separate electrical circuits, typically in the form of three separate conductors or bundles of conductors, on the same set of transmission line structures. The old Henderson-Hoover 230-kV Transmission Line would become the new Henderson-Mead No.1 and would extend approximately eight miles to connect with the Mead Substation. Western owns, operates, and maintains the Hoover-Mead No.5 and No.7, and Henderson-Hoover electrical power transmission lines. Additionally, approximately 0.25 miles of new right-of-way (ROW) would be needed for the Henderson-Mead No.1 when it transfers from double-circuiting with the Hoover-Mead No.7 to the Hoover-Mead No.5 at the Boulder City Tap. The proposed project would also involve a new transmission line ROW and structures where the Henderson-Mead No.1 will split from the Hoover-Mead No.5 and enter the northeast corner of the Mead Substation. Lastly, Western has proposed adding fiber optic overhead ground wire from the Hoover Power Plant to the Mead Substation on the Henderson-Mead No.1 and Hoover-Mead No.5 and No.7 Transmission Lines. The proposed project includes replacing existing transmission line tower structures, installing new structures, and adding new electrical conductors and fiber optic cables.
As a consequence of these activities, ground disturbance may result from grading areas for structure placement, constructing new roads, improving existing roads for vehicle and equipment access, and from installing structures, conductors, and fiber optic cables. Project construction activities would be conducted within the existing 200-foot transmission line ROW and 50-foot access road ROW, although new spur access roads could occur outside of existing ROWs. As lead Federal agency for this action under the National Environmental Policy Act (NEPA), Western must ensure that adverse environmental effects on Federal and non-Federal lands and resources are avoided or minimized. This Environmental Assessment (EA) is intended to be a concise public document that assesses the probable and known impacts to the environment from Western's Proposed Action and alternatives, and reaches a conclusion about the significance of the impacts. This EA was prepared in compliance with NEPA regulations published by the Council on Environmental Quality (40 CFR 1500-1508) and implementing procedures of the Department of Energy (10 CFR 1021).
NASA Technical Reports Server (NTRS)
1982-01-01
Lake Mead, Nevada (36.0N, 114.5W), where the water from the Colorado River empties after its 273-mile journey through the Grand Canyon of Arizona, is the subject of this photo. Other features of interest are Hoover Dam on the south shore of Lake Mead, where cheap hydroelectric power is secondary to the water resources made available in this northern desert region, and the resort city of Las Vegas, just to the west of Lake Mead.
A synthesis of aquatic science for management of Lakes Mead and Mohave
Rosen, Michael R.; Turner, Kent; Goodbred, Steven L.; Miller, Jennell M.
2012-01-01
Lakes Mead and Mohave, the centerpieces of Lake Mead National Recreation Area, provide many significant benefits that have made the modern development of the Southwestern United States possible. Lake Mead is the largest reservoir by volume in the nation and provides critical water storage for more than 25 million people in three Western States (California, Arizona, and Nevada). Storage within Lake Mead supplies drinking water, hydropower for major cities including Las Vegas, Phoenix, Los Angeles, Tucson, and San Diego, and irrigation for more than 2.5 million acres of cropland. Lake Mead is arguably the most important reservoir in the nation because of its size and the services it delivers to the Western United States. This Circular includes seven chapters. Chapter 1 provides a short summary of the overall findings and management implications for Lakes Mead and Mohave that can guide the reader through the rest of the Circular. Chapter 2 introduces the environmental setting and characteristics of Lakes Mead and Mohave and provides a brief management context for the lakes within the Colorado River system, as well as overviews of the geological bedrock and sediment accumulations of the lakes. Chapter 3 summarizes the operational and hydrologic characteristics of Lakes Mead and Mohave. Chapter 4 provides information on water quality, including discussion of the monitoring of contaminants and sediments within the reservoirs. Chapter 5 describes aquatic biota and wildlife, including food-web dynamics, plankton, invertebrates, fish, aquatic birds, and aquatic vegetation. Chapter 6 outlines threats and stressors to the health of Lake Mead aquatic ecosystems, including a range of environmental contaminants, invasive species, and climate change.
Chapter 7 provides a more detailed summary of the overall findings presented in Chapter 1, along with a more detailed discussion of the associated management implications, additional research, and monitoring needs.
2017-02-01
Ames Women's Influence Network (WIN) Hidden Figures talk with "Computers" Carolyn Hofstetter and Carol Mead co-sponsored by the AAAG. Left to right Computers Carolyn Hofstetter, Carol Mead and Jack Boyd
2017-02-01
Ames Women's Influence Network (WIN) Hidden Figures talk with "Computers" Carolyn Hofstetter and Carol Mead co-sponsored by the AAAG. Left to right Barbara Miller, Ames EEO, Computers Carolyn Hofstetter and Carol Mead
2017-02-01
Ames Women's Influence Network (WIN) Hidden Figures talk with "Computers" Carolyn Hofstetter and Carol Mead co-sponsored by the AAAG. Left to right Barbara Miller, Ames EEO, Computers Carolyn Hofstetter and Carol Mead
76 FR 75917 - Post Office Closing
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-05
... POSTAL REGULATORY COMMISSION [Docket No. A2012-63; Order No. 1004] Post Office Closing AGENCY... the closing of the Fort Meade, South Dakota post office has been filed. It identifies preliminary... Postal Service's determination to close the Fort Meade post office in Fort Meade, South Dakota. The...
2017-02-01
Ames Women's Influence Network (WIN) Hidden Figures talk with "Computers" Carolyn Hofstetter and Carol Mead co-sponsored by the AAAG. Jack Boyd talking about working in the same 6-ft w.t. group as Carol Mead.
George Herbert Mead's Contribution to the Philosophy of American Education.
ERIC Educational Resources Information Center
Renger, Paul, III
1980-01-01
George Herbert Mead's general philosophy showed that he regarded the development of distinctively human behavior as essentially the result of an individual's meaningful participation in the social process of the community to which he belongs. Mead believed that education was a social process involving meaningful interaction and communication…
NASA Technical Reports Server (NTRS)
1973-01-01
Lake Mead, Nevada (36.0N, 114.5W), where the water from the Colorado River empties after its 273-mile journey through the Grand Canyon of Arizona, is the subject of this photo. Other features of interest are Hoover Dam on the south shore of Lake Mead, where cheap hydroelectric power is secondary to the water resources made available in this northern desert region, and the resort city of Las Vegas, just to the west of Lake Mead. In this harsh desert environment, color infrared photography readily penetrates haze, detecting and portraying vegetation as shades of red.
Patriot/Medium Extended Air Defense System Combined Aggregate Program (Patriot/MEADS CAP)
2013-12-01
States, Germany, and Italy to replace the U.S. Patriot air defense systems, Patriot and Hawk systems in Germany, and the Nike system in Italy. The MEADS...Appropriation: RDT&E Contract Name Design & Development Contractor MEADS International Inc . Contractor Location 5600 W Sand Lake Road Orlando, FL 32819
2017-02-01
Ames Women's Influence Network (WIN) Hidden Figures talk with "Computers" Carolyn Hofstetter and Carol Mead co-sponsored by the AAAG. Left to right Barbara Miller, Ames EEO, Computers Carolyn Hofstetter and Carol Mead; talking to Carolyn Hofstetter is Arlene Spencer
A Social Behaviorist Interpretation of the Meadian "I."
ERIC Educational Resources Information Center
Lewis, J. David
1979-01-01
Argues that the two most common interpretations of the Meadian "I" in George Herbert Mead's theory of the social self have failed to place the concept within the context of Mead's philosophy of social behaviorism. Stresses also that Mead's social behaviorist theory is applicable to sociological inquiry when it is properly…
Educating Communal Agents: Building on the Perspectivism of G.H. Mead
ERIC Educational Resources Information Center
Martin, Jack
2007-01-01
In their search for more communal forms of agency that might guide education, contemporary educational psychologists have mostly neglected the theorizing of George Herbert Mead. In this essay, Jack Martin aims to remedy such oversight by interpreting Mead's social-psychological and educational theorizing of selfhood and agency through the lenses…
ERIC Educational Resources Information Center
Franzosa, Susan Douglas
1984-01-01
Explores the implications of Mead's philosophic social psychology for current disputes concerning the nature of the scientific in educational studies. Mead's contextualization of the knower and the known is found to be compatible with a contemporary critique of positivist paradigms and a critical reconceptualization of educational inquiry…
75 FR 427 - Notice of Continuation of Visitor Services
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-05
... under 36 CFR 51.23. Under the provisions of current concession contracts and pending the completion of... & Golden Gate National Recreation William Hontalas. Area. LAME001-73 Rex G. Maughan & Ruth G. Maughan Area...... Lake Mead National Recreation Area LAME002-82 Lake Mead RV Village, LLC Lake Mead National Recreation...
NASA Astrophysics Data System (ADS)
Mita, Masatoshi
2005-04-01
Resumption of meiosis in starfish oocytes is induced by the natural maturation-inducing hormone, 1-methyladenine (1-MeAde). Oocyte maturation is also induced by the disulfide-reducing agent, dithiothreitol (DTT). Previous studies have shown that 1-MeAde controls meiosis by interacting with its receptors, which are located exclusively on the oocyte plasma membrane. However, little is known about the mechanism of oocyte maturation induced by DTT. Thus, this study examined whether DTT interacts with 1-MeAde receptors to induce oocyte maturation. When oocytes were treated with Triton X-100, they failed to respond to 1-MeAde and DTT. Although the Triton X-100-treated oocytes recovered the capacity to respond to 1-MeAde during incubation in seawater, they remained unresponsive to DTT during seawater incubations. These results suggest that DTT does not interact with 1-MeAde receptors to induce oocyte maturation in starfish. It is possible that a protein essential for mediating DTT-induced maturation is eliminated from the oocyte surface following Triton X-100 treatment.
Dorman, James W; Steinberg, Spencer M
2010-02-01
We report here a derivatization headspace method for the analysis of inorganic iodine in water. Samples from Lake Mead, the Las Vegas Wash, and Las Vegas tap water were examined. Lake Mead and the Las Vegas Wash contained a mixture of both iodide and iodate. The average concentration of total inorganic iodine (TII) for Lake Mead was approximately 90 nM with an iodide-to-iodate ratio of approximately 1. The Las Vegas Wash had a higher TII concentration (approximately 160 nM) and a higher iodide-to-iodate ratio (approximately 2). The TII concentration for tap water was close to that of Lake Mead (approximately 90 nM); however, tap water contained no detectable iodide as a result of ozonation and chlorine treatment, which converts all of the iodide to iodate.
1973-06-22
SL2-03-192 (22 June 1973) --- Lake Mead, Nevada (36.0N, 114.5W), where the water from the Colorado River empties after its 273-mile journey through the Grand Canyon of Arizona, is the subject of this photo. Other features of interest are Hoover Dam on the south shore of Lake Mead, where cheap hydroelectric power is secondary to the water resources made available in this northern desert region, and the resort city of Las Vegas, just to the west of Lake Mead. In this harsh desert environment, color infrared photography readily penetrates haze, detecting and portraying vegetation as shades of red. Photo credit: NASA
Langenheim, V.E.; Davidson, J.G.; Anderson, M.L.; Blank, H.R.
1999-01-01
The U.S. Geological Survey (USGS) collected 811 gravity stations on the Lake Mead 30' by 60' quadrangle from October 1997 to September 1999. These data were collected in support of geologic mapping of the Lake Mead quadrangle. In addition to these new data, gravity stations were compiled from a number of sources. These stations were reprocessed according to the reduction method described below and used for the new data. Density and magnetic susceptibility measurements were also performed on more than 250 rock samples. The Lake Mead quadrangle extends from 36° to 36°30' north latitude and from 114° to 115° west longitude. It spans most of Lake Mead (see index map, below), the largest manmade lake in the United States, and includes most of the Lake Mead National Recreation Area. Its geology is very complex; Mesozoic thrust faults are exposed in the Muddy Mountains, Precambrian crystalline basement rocks are exhumed in tilted fault blocks near Gold Butte, extensive Tertiary volcanism is evident in the Black Mountains, and strike-slip faults of the right-lateral Las Vegas Valley shear zone and the left-lateral Lake Mead fault system meet near the Gale Hills. These gravity data and physical property measurements will aid in the 3-dimensional characterization of structure and stratigraphy in the quadrangle as part of the Las Vegas Urban Corridor mapping project.
33 CFR 162.220 - Hoover Dam, Lake Mead, and Lake Mohave (Colorado River), Ariz.-Nev.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Mohave (Colorado River), Ariz.-Nev. 162.220 Section 162.220 Navigation and Navigable Waters COAST GUARD... REGULATIONS § 162.220 Hoover Dam, Lake Mead, and Lake Mohave (Colorado River), Ariz.-Nev. (a) Lake Mead and... the axis of Hoover Dam and that portion of Lake Mohave (Colorado River) extending 4,500 feet...
NASA Astrophysics Data System (ADS)
Smith, C. G.; Cable, J. E.; Martin, J. B.; Roy, M.
2008-05-01
Pore water distributions of 222Rn (t1/2 = 3.83 d), obtained during two sampling trips 9-12 May 2005 and 6-8 May 2006, are used to determine spatial and temporal variations of fluid discharge from a seepage face located along the mainland shoreline of Indian River Lagoon, Florida. Porewater samples were collected from a 30 m transect of multi-level piezometers and analyzed for 222Rn via liquid scintillation counting; the mean of triplicate measurements was used to represent the porewater 222Rn activities. Sediment samples were collected from five vibracores (0, 10, 17.5, 20, and 30 m offshore) and emanation rates of 222Rn (sediment supported) were determined using a standard cryogenic extraction technique. A conceptual 222Rn transport model and subsequent numerical model were developed based on the vertical distribution of dissolved and sediment-supported 222Rn and applicable processes occurring along the seepage face (e.g. advection, diffusion, and nonlocal exchange). The model was solved inversely with the addition of two Monte Carlo (MC) simulations to increase the statistical reliability of three parameters: fresh groundwater seepage velocity (v), irrigation intensity (α0), and irrigation attenuation (α1). The first MC simulation ensures that the Nelder-Mead minimization algorithm converges on a global minimum of the merit function and that the parameter estimates are consistent within this global minimum. The second MC simulation provides 90% confidence intervals on the parameter estimates using the measured 222Rn activity variance. Fresh groundwater seepage velocities obtained from the model decrease linearly with distance from the shoreline; seepage velocities range between 0.6 and 42.2 cm d-1. Based on this linear relationship, the terminus of the fresh groundwater seepage is approximately 25 m offshore, and total fresh groundwater discharges for the May-2005 and May-2006 sampling trips are 1.16 and 1.45 m3 d-1 m-1 of shoreline, respectively.
We hypothesize that the 25% increase in specific discharge between May-2005 and May-2006 reflects higher recharge via precipitation to the Surficial aquifer during the highly active 2005 Atlantic hurricane season. Irrigation rates generally decrease offshore for both sampling periods, ranging between 4.9 and 85.7 cm d-1. Physical and biological mechanisms responsible for the observed irrigation likely include density-driven convection, wave pumping, and bio-irrigation. The inclusion of both advective and nonlocal exchange processes in the model permits the separation of submarine groundwater discharge into fresh submarine groundwater discharge (seepage velocities) and (re)circulated lagoon water (as irrigation).
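The global-then-local search described above, a Monte Carlo restart loop wrapped around Nelder-Mead, can be sketched as follows. The merit function here is a hypothetical stand-in for the study's 222Rn misfit, and the restart count and parameter bounds are illustrative assumptions, not values from the paper:

```python
import numpy as np
from scipy.optimize import minimize

def merit(params):
    # Hypothetical stand-in for the 222Rn transport misfit; the real merit
    # function compares simulated and measured pore-water 222Rn activities.
    v, a0, a1 = params  # seepage velocity, irrigation intensity, attenuation
    return (v - 2.0)**2 + (a0 - 0.5)**2 + (a1 - 1.5)**2 + 0.1*np.sin(5.0*v)**2

rng = np.random.default_rng(42)
best = None
# First Monte Carlo loop: restart Nelder-Mead from random starting points so
# the search settles into the global minimum basin rather than a local one.
for _ in range(50):
    x0 = rng.uniform(0.0, 5.0, size=3)
    res = minimize(merit, x0, method='Nelder-Mead')
    if best is None or res.fun < best.fun:
        best = res
# (A second Monte Carlo loop, perturbing the data by its measured variance
# and refitting, would yield the 90% confidence intervals on the estimates.)
```

The sine term gives the stand-in function the local minima that motivate the restarts; a single Nelder-Mead run from a poor start can get trapped in one of them.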
Jacobs, Glenn
2009-01-01
This analysis assesses the factors underlying Charles Horton Cooley's place in the sociological canon as they relate to George Herbert Mead's puzzling diatribe, echoed in secondary accounts, against Cooley's social psychology and view of the self, published scarcely a year after his death. The illocutionary act of publishing his critique stands as an effort to project the image of Mead's intellectual self and enhance his standing among sociologists within and outside the orbit of the University of Chicago. It expressed Mead's ambivalence toward his precursor Cooley, whose influence he never fully acknowledged. In addition, it typifies the contending fractal distinctions of the scientifically discursive versus literary styles of Mead and Cooley, who both founded the interpretive sociological tradition. The contrasting styles and attitudes toward writing of the two figures are discussed, and their implications for the problems of scale that have stymied the symbolic interactionist tradition are explored.
A sociohistorical examination of George Herbert Mead's approach to science education.
Edwards, Michelle L
2016-07-01
Although George Herbert Mead is widely known for his social psychological work, his views on science education also represent a significant, yet sometimes overlooked contribution. In a speech delivered in March 1906 entitled "The Teaching of Science in College," Mead calls for cultural courses on the sciences, such as sociology of science or history of science courses, to increase the relevancy of natural and physical science courses for high school and university students. These views reflect Mead's perspective on a number of traditional dualisms, including objectivity versus subjectivity and the social sciences versus natural and physical sciences. Taking a sociohistorical outlook, I identify the context behind Mead's approach to science education, which includes three major influences: (1) German intellectual thought and the Methodenstreit debate, (2) pragmatism and Darwin's theory of evolution, and (3) social reform efforts in Chicago and the General Science Movement. © The Author(s) 2014.
George Herbert Mead's Lecture on Philosophy of Education at the University of Chicago (1910-1911).
ERIC Educational Resources Information Center
Biesta, Gert J. J.
This paper recounts the influence of two of the great educational philosophers of this century, John Dewey and George Herbert Mead. Both men came to the University of Chicago from teaching at the University of Michigan. The men were life-long personal friends and professional colleagues. Although Mead published little during his life, his…
Getting an "Inside": The Role of Objects in Mead's Theory of Self.
ERIC Educational Resources Information Center
McCarthy, E. Doyle
The paper examines George Herbert Mead's account of the individual's relation to the physical world. Mead (1863-1931) taught social psychology and philosophy at the University of Chicago from 1893-1931 and is best known for his theory of self. This theory maintains that the self is formed in a particular historical context and that it includes…
Using Multi-Objective Optimization to Explore Robust Policies in the Colorado River Basin
NASA Astrophysics Data System (ADS)
Alexander, E.; Kasprzyk, J. R.; Zagona, E. A.; Prairie, J. R.; Jerla, C.; Butler, A.
2017-12-01
The long term reliability of water deliveries in the Colorado River Basin has degraded due to the imbalance of growing demand and dwindling supply. The Colorado River meanders 1,450 miles across a watershed that covers seven US states and Mexico and is an important cultural, economic, and natural resource for nearly 40 million people. Its complex operating policy is based on the "Law of the River," which has evolved since the Colorado River Compact in 1922. In recent (2007) refinements to address shortage reductions and coordinated operations of Lakes Powell and Mead, thousands of scenarios were explored with stakeholders to identify operating guidelines that could ultimately be agreed on. This study explores a different approach to searching for robust operating policies to inform the policy making process. The Colorado River Simulation System (CRSS), a long-term water management simulation model implemented in RiverWare, is combined with the Borg multi-objective evolutionary algorithm (MOEA) to solve an eight-objective problem formulation. Basin-wide performance metrics are closely tied to system health by incorporating critical reservoir pool elevations and the duration, frequency, and quantity of shortage reductions in the objective set. For example, an objective to minimize the frequency that Lake Powell falls below the minimum power pool elevation of 3,490 feet for Glen Canyon Dam protects a vital economic and renewable energy source for the southwestern US. The decision variables correspond to operating tiers in Lakes Powell and Mead that drive the implementation of various shortage and release policies, thus affecting system performance. The result will be a set of non-dominated solutions that can be compared with respect to their trade-offs based on the various objectives. These could inform policy making processes by eliminating dominated solutions and revealing robust solutions that could remain hidden under conventional analysis.
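The final filtering step, keeping only the non-dominated solutions, is independent of CRSS and the Borg MOEA and can be sketched with a plain Pareto filter. The objective vectors below are hypothetical, and all objectives are assumed to be minimized:

```python
import numpy as np

def pareto_front(objectives):
    """Return a boolean mask of non-dominated rows (all objectives minimized)."""
    obj = np.asarray(objectives, dtype=float)
    keep = np.ones(len(obj), dtype=bool)
    for i in range(len(obj)):
        # Row j dominates row i if j is no worse everywhere and better somewhere.
        dominates_i = np.all(obj <= obj[i], axis=1) & np.any(obj < obj[i], axis=1)
        if dominates_i.any():
            keep[i] = False
    return keep

# Hypothetical 2-objective values (e.g., shortage frequency, power-pool risk)
objs = [[1.0, 2.0], [2.0, 1.0], [2.0, 2.0], [3.0, 3.0]]
mask = pareto_front(objs)  # only the first two rows form the trade-off set
```

This quadratic-time filter is fine for inspecting a solution archive; MOEAs such as Borg maintain the non-dominated set incrementally during the search itself.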
Raccoon ecological management area: partnership between Forest Service research and Mead Corporation
Daniel Yaussy; Wayne Lashbrook; Walt Smith
1997-01-01
The Chief of the Forest Service and the Chief Executive Officer of Mead Corporation signed a Memorandum of Understanding (MOU) that created the Raccoon Ecological Management Area (REMA). This MOU designated nearly 17,000 acres as a special area to be co-managed by Mead and the Forest Service. The REMA is a working forest that continues to produce timber and pulpwood for...
The Economic Costs of a Shrinking Lake Mead: a Spatial Hedonic Analysis
NASA Astrophysics Data System (ADS)
Singh, A.; Saphores, J. D.
2017-12-01
Persistent arid conditions and population growth in the Southwest have taken a toll on the Colorado River. This has led to substantial drawdowns of many water reservoirs around the Southwest, and especially of Lake Mead, which is Las Vegas' main source of drinking water. Due to its importance, Lake Mead has received a great deal of media attention about its "bathtub ring" and the exposure of rock that used to be underwater. Drops in water levels have caused some local marinas to close, thereby affecting the aesthetic and recreational value of Lake Mead, which is located in the country's largest National Recreation Area (NRA), and surrounded by protected land. Although a rich literature analyzes how water quality impacts real estate values, relatively few studies have examined how dropping water levels are capitalized in surrounding residential properties. In this context, the goal of this study is to quantify how Lake Mead's water level changes are reflected in changes in local property values, an important source of tax income for any community. Since Lake Mead is the primary attraction within its recreation area, we are also concerned with how this recreation area, which is a few miles southeast of Las Vegas, is capitalized in real estate values of the Las Vegas metropolitan area as few valuation studies have examined how proximity to national parks influences residential property value. We estimate spatial hedonic and geographically weighted regression models of single family residences to delineate the value of proximity to the Lake Mead NRA and to understand how this value changed with Lake Mead's water levels. Our explanatory variables include common structural characteristics, fixed effects to account for unobserved locally constant characteristics, and specific variables such as distance to the Las Vegas strip and to downtown casinos. 
Because the sharpest declines in Lake Mead water levels happened in 2010 (NASA, 2010) and winter 2016 saw an unexpected increase in water levels, we analyze home sales and variations in water levels from 2010 through mid-2017.
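As a sketch of the hedonic idea only (not the study's actual spatial or geographically weighted specification), an ordinary least-squares fit on synthetic data shows how a proximity coefficient is recovered from log prices. All variable names, ranges, and coefficient values here are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
# Invented covariates: house size, distance to the Lake Mead NRA, lake level
sqft = rng.uniform(1000.0, 4000.0, n)
dist_nra = rng.uniform(1.0, 30.0, n)         # miles (hypothetical)
lake_level = rng.uniform(1075.0, 1220.0, n)  # feet (hypothetical)
log_price = (11.0 + 3e-4*sqft - 0.01*dist_nra + 1e-3*lake_level
             + rng.normal(0.0, 0.05, n))     # synthetic hedonic surface

# Hedonic OLS: log price regressed on property and location characteristics
X = np.column_stack([np.ones(n), sqft, dist_nra, lake_level])
beta, *_ = np.linalg.lstsq(X, log_price, rcond=None)
# beta[2] recovers the (negative) premium for proximity to the NRA
```

The study's spatial hedonic and GWR models additionally let coefficients vary over space and absorb unobserved local effects; plain OLS is only the baseline form of the argument.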
NASA Technical Reports Server (NTRS)
1994-01-01
The crewmen assigned to the STS-64 mission include: Astronaut Richard N. Richards (center front), mission commander; L. Blaine Hammond Jr. (front left), pilot; and Susan J. Helms (front right), mission specialist. On the back row, from left to right, are Mark C. Lee, Jerry M. Linenger and Carl J. Meade, all mission specialists. All but Lee and Meade are wearing launch and entry suits. Lee and Meade are wearing extravehicular mobility units (EMUs).
ERIC Educational Resources Information Center
Jones, Charlotte
George Herbert Mead's theory of mind, self, and society is synthesized in this paper, as is the extension of that basic theory by Peter Berger and Thomas Luckmann. The paper argues that Mead's functionalist perspective, while rich and internally consistent, is naive in that it lacks a theory of institutions, and it shows how Berger and Luckmann's…
NASA Astrophysics Data System (ADS)
Furbish, Dean Russel
The purpose of this study is to examine pragmatist constructivism as a science education referent for adult learners. Specifically, this study seeks to determine whether George Herbert Mead's doctrine, which conflates pragmatist learning theory and philosophy of natural science, might facilitate (a) scientific concept acquisition, (b) learning scientific methods, and (c) preparation of learners for careers in science and science-related areas. A philosophical examination of Mead's doctrine in light of these three criteria has determined that pragmatist constructivism is not a viable science education referent for adult learners. Mead's pragmatist constructivism does not portray scientific knowledge or scientific methods as they are understood by practicing scientists themselves, that is, according to scientific realism. Thus, employment of pragmatist constructivism does not adequately prepare future practitioners for careers in science-related areas. Mead's metaphysics does not allow him to commit to the existence of the unobservable objects of science such as molecular cellulose or mosquito-borne malarial parasites. Mead's anti-realist metaphysics also affects his conception of scientific methods. Because Mead does not commit existentially to the unobservable objects of realist science, Mead's science does not seek to determine what causal role if any the hypothetical objects that scientists routinely posit while theorizing might play in observable phenomena. Instead, constructivist pragmatism promotes subjective epistemology and instrumental methods. The implication for learning science is that students are encouraged to derive scientific concepts based on a combination of personal experience and personal meaningfulness. Contrary to pragmatist constructivism, however, scientific concepts do not arise inductively from subjective experience driven by personal interests. 
The broader implication of this study for adult education is that the philosophically laden claims of constructivist learning theories need to be identified and assessed independently of any empirical support that these learning theories might enjoy. This in turn calls for educational experiences for graduate students of education that incorporate philosophical understanding such that future educators might be able to recognize and weigh the philosophically laden claims of adult learning theories.
View of Lake Mead and Las Vegas, Nevada area from Sklyab
NASA Technical Reports Server (NTRS)
1973-01-01
A vertical view of the Lake Mead and Las Vegas, Nevada area as photographed from Earth orbit by one of the six lenses of the Itek-furnished S190-A Multispectral Photographic Facility Experiment aboard the Skylab space station. Lake Mead is water of the Colorado River impounded by Hoover Dam. Most of the land in the picture is Nevada, however, a part of the northwest corner of Arizona can be seen.
Evaporation from Lake Mead, Arizona and Nevada, 1997-99
Westenburg, Craig L.; DeMeo, Guy A.; Tanko, Daron J.
2006-01-01
Lake Mead is one of a series of large Colorado River reservoirs operated and maintained by the Bureau of Reclamation. The Colorado River system of reservoirs and diversions is an important source of water for millions of people in seven Western States and Mexico. The U.S. Geological Survey, in cooperation with the Bureau of Reclamation, conducted a study from 1997 to 1999 to estimate evaporation from Lake Mead. For this study, micrometeorological and hydrologic data were collected continually from instrumented platforms deployed at four locations on the lake: open-water areas of Boulder Basin, Virgin Basin, and Overton Arm, and a protected cove in Boulder Basin. Data collected at the platforms were used to estimate Lake Mead evaporation by solving an energy-budget equation. The average annual evaporation rate at open-water stations from January 1998 to December 1999 was 7.5 feet. Because the spatial variation of monthly and annual evaporation rates was minimal for the open-water stations, a single open-water station in Boulder Basin would provide data that are adequate to estimate evaporation from Lake Mead.
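In its simplest Bowen-ratio form (a simplification of the full energy budget solved in the study), the evaporation estimate reduces to partitioning available energy between latent and sensible heat. The flux values below are illustrative, not measurements from the Lake Mead platforms:

```python
LAMBDA_V = 2.45e6  # latent heat of vaporization, J/kg (near 20 degrees C)
RHO_W = 1000.0     # density of water, kg/m^3

def daily_evaporation_mm(net_radiation, heat_storage, bowen_ratio):
    """Open-water evaporation (mm/day) from daily-mean fluxes in W/m^2.

    Latent heat flux = (Rn - G) / (1 + beta), where beta is the Bowen
    ratio (sensible/latent heat); dividing by rho_w * lambda_v converts
    the energy flux into an equivalent depth of evaporated water.
    """
    latent_flux = (net_radiation - heat_storage) / (1.0 + bowen_ratio)
    evap_m_per_s = latent_flux / (RHO_W * LAMBDA_V)
    return evap_m_per_s * 86400.0 * 1000.0  # m/s -> mm/day

evap = daily_evaporation_mm(150.0, 30.0, 0.2)  # roughly 3.5 mm/day
```

For a large, deep reservoir the heat-storage term G is substantial and seasonally lagged, which is why the study's platforms measured it directly rather than assuming a value.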
The Effect of the Nunn-McCurdy Amendment on Unit-Cost-Growth of Defense Acquisition Projects
2010-07-01
Congressional testimony: “I consider Virginia Class cost-reduction efforts a model for all our ships, submarines, and aircraft” (Roughhead, 2009...PATRIOT/MEADS CAP- MISSLE 2004 DE STRYKER 2004 PdE WIN-T INCREMENT 1 2007 PdE WIN-T INCREMENT 2 2007 DE Subtot~l Navy: ADS (ANJWOR-3) 2005 DE AGM...JAVELIN JLENS LONGBOW APACHE LUH PATRIOT PAC-3 PATRIOT/MEADS CAP - FIRE UNIT PATRIOT/MEADS CAP - MISSLE STRYKER WIN-T INCREMENT 1 WIN-T
Efficient Levenberg-Marquardt minimization of the maximum likelihood estimator for Poisson deviates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Laurence, T; Chromy, B
2009-11-10
Histograms of counted events are Poisson distributed, but are typically fitted without justification using nonlinear least squares fitting. The more appropriate maximum likelihood estimator (MLE) for Poisson distributed data is seldom used. We extend the use of the Levenberg-Marquardt algorithm commonly used for nonlinear least squares minimization for use with the MLE for Poisson distributed data. In so doing, we remove any excuse for not using this more appropriate MLE. We demonstrate the use of the algorithm and the superior performance of the MLE using simulations and experiments in the context of fluorescence lifetime imaging. Scientists commonly form histograms of counted events from their data, and extract parameters by fitting to a specified model. Assuming that the probability of occurrence for each bin is small, event counts in the histogram bins will be distributed according to the Poisson distribution. We develop here an efficient algorithm for fitting event counting histograms using the maximum likelihood estimator (MLE) for Poisson distributed data, rather than the non-linear least squares measure. This algorithm is a simple extension of the common Levenberg-Marquardt (L-M) algorithm, is simple to implement, quick and robust. Fitting using a least squares measure is most common, but it is the maximum likelihood estimator only for Gaussian-distributed data. Non-linear least squares methods may be applied to event counting histograms in cases where the number of events is very large, so that the Poisson distribution is well approximated by a Gaussian. However, it is not easy to satisfy this criterion in practice, which requires a large number of events. It has been well-known for years that least squares procedures lead to biased results when applied to Poisson-distributed data; a recent paper providing extensive characterization of these biases in exponential fitting is given.
The more appropriate measure based on the maximum likelihood estimator (MLE) for the Poisson distribution is also well known, but has not become generally used. This is primarily because, in contrast to non-linear least squares fitting, there has been no quick, robust, and general fitting method. In the field of fluorescence lifetime spectroscopy and imaging, there have been some efforts to use this estimator through minimization routines such as Nelder-Mead optimization, exhaustive line searches, and Gauss-Newton minimization. Minimization based on specific one- or multi-exponential models has been used to obtain quick results, but this procedure does not allow the incorporation of the instrument response, and is not generally applicable to models found in other fields. Methods for using the MLE for Poisson-distributed data have been published by the wider spectroscopic community, including iterative minimization schemes based on Gauss-Newton minimization. The slow acceptance of these procedures for fitting event counting histograms may also be explained by the use of the ubiquitous, fast Levenberg-Marquardt (L-M) fitting procedure for fitting non-linear models using least squares fitting (simple searches obtain approximately 10,000 references; this doesn't include those who use it but don't know they are using it). The benefits of L-M include a seamless transition between Gauss-Newton minimization and downward gradient minimization through the use of a regularization parameter. This transition is desirable because Gauss-Newton methods converge quickly, but only within a limited domain of convergence; on the other hand, the downward gradient methods have a much wider domain of convergence, but converge extremely slowly nearer the minimum. L-M has the advantages of both procedures: relative insensitivity to initial parameters and rapid convergence. Scientists, when wanting an answer quickly, will fit data using L-M, get an answer, and move on.
Only those that are aware of the bias issues will bother to fit using the more appropriate MLE for Poisson deviates. However, since there is a simple, analytical formula for the appropriate MLE measure for Poisson deviates, it is inexcusable that least squares estimators are used almost exclusively when fitting event counting histograms. There have been ways found to use successive non-linear least squares fitting to obtain similarly unbiased results, but this procedure is justified by simulation, must be re-tested when conditions change significantly, and requires two successive fits. There is a great need for a fitting routine for the MLE estimator for Poisson deviates that has convergence domains and rates comparable to the non-linear least squares L-M fitting. We show in this report that a simple way to achieve that goal is to use the L-M fitting procedure not to minimize the least squares measure, but the MLE for Poisson deviates.
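One concrete way to realize this idea, sketched here with SciPy's MINPACK-based L-M driver rather than the authors' own implementation, is to hand L-M the square roots of the per-bin Poisson deviance terms: the sum of squared "residuals" then equals the Poisson MLE measure 2*sum(m - n + n*ln(n/m)), so the unmodified L-M machinery minimizes the MLE measure instead of least squares. The single-exponential-plus-background model and the simulated data are illustrative:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 200)

def model(params, t):
    amp, tau, bkg = params
    return amp*np.exp(-t/tau) + bkg

counts = rng.poisson(model((100.0, 2.0, 5.0), t))  # simulated count histogram

def poisson_deviance_residuals(params):
    m = np.maximum(model(params, t), 1e-12)  # guard against m <= 0
    n = counts
    # Per-bin deviance: m - n + n*ln(n/m); the log term is zero where n == 0.
    with np.errstate(divide='ignore', invalid='ignore'):
        log_term = np.where(n > 0, n*np.log(n/m), 0.0)
    d = np.clip(m - n + log_term, 0.0, None)  # analytically >= 0
    return np.sqrt(2.0*d)

# L-M minimizes sum(residuals**2), which here equals the Poisson MLE measure
fit = least_squares(poisson_deviance_residuals, x0=(50.0, 1.0, 1.0), method='lm')
```

The unsigned square-root form loses the sign of each residual, which does not matter for minimizing the sum of squares; signed deviance residuals can be used instead if the structure of the Jacobian matters.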
Beard, L.S.; Anderson, R.E.; Block, D.L.; Bohannon, R.G.; Brady, R.J.; Castor, S.B.; Duebendorfer, E.M.; Faulds, J.E.; Felger, T.J.; Howard, K.A.; Kuntz, M.A.; Williams, V.S.
2007-01-01
The geologic map of the Lake Mead 30' x 60' quadrangle was completed for the U.S. Geological Survey's Las Vegas Urban Corridor Project and the National Parks Project, National Cooperative Geologic Mapping Program. Lake Mead, which occupies the northern part of the Lake Mead National Recreation Area (LAME), mostly lies within the Lake Mead quadrangle and provides recreation for about nine million visitors annually. The lake was formed by damming of the Colorado River by Hoover Dam in 1939. The recreation area and surrounding Bureau of Land Management lands face increasing public pressure from rapid urban growth in the Las Vegas area to the west. This report provides baseline earth science information that can be used in future studies of hazards, groundwater resources, mineral and aggregate resources, and of the distribution of soils and vegetation. The preliminary report presents a geologic map and GIS database of the Lake Mead quadrangle and a description and correlation of map units. The final report will include cross-sections and interpretive text. The geology was compiled from many sources, both published and unpublished, including significant new mapping that was conducted specifically for this compilation. Geochronologic data from published sources, as well as preliminary unpublished 40Ar/39Ar ages that were obtained for this report, have been used to refine the ages of formal Tertiary stratigraphic units and define new informal Tertiary sedimentary and volcanic units.
32 CFR Appendix B to Part 518 - Addressing FOIA Requests
Code of Federal Regulations, 2011 CFR
2011-07-01
... Avenue, Fort George G. Meade, MD 20755-5360. (6) Records involving debarred or suspended contractors..., ATTN: IAMG-CIC-FOI/PO, 4552 Pike Road, Fort George G. Meade, MD 20755-5995. (k) Inspector General...
32 CFR Appendix B to Part 518 - Addressing FOIA Requests
Code of Federal Regulations, 2014 CFR
2014-07-01
... Avenue, Fort George G. Meade, MD 20755-5360. (6) Records involving debarred or suspended contractors..., ATTN: IAMG-CIC-FOI/PO, 4552 Pike Road, Fort George G. Meade, MD 20755-5995. (k) Inspector General...
32 CFR Appendix B to Part 518 - Addressing FOIA Requests
Code of Federal Regulations, 2012 CFR
2012-07-01
... Avenue, Fort George G. Meade, MD 20755-5360. (6) Records involving debarred or suspended contractors..., ATTN: IAMG-CIC-FOI/PO, 4552 Pike Road, Fort George G. Meade, MD 20755-5995. (k) Inspector General...
32 CFR Appendix B to Part 518 - Addressing FOIA Requests
Code of Federal Regulations, 2013 CFR
2013-07-01
... Avenue, Fort George G. Meade, MD 20755-5360. (6) Records involving debarred or suspended contractors..., ATTN: IAMG-CIC-FOI/PO, 4552 Pike Road, Fort George G. Meade, MD 20755-5995. (k) Inspector General...
Improvements to Air Force Strategic Basing Decisions
2016-01-01
Fort Lee, VA 10,200 4,600 14,800 138,000 Fort Lewis, WA 13,500 17,400 30,900 3,422,000 Fort Meade, MD 7,000 4,200 11,200 2,512,000 Fort Sam...Aberdeen Proving Grounds, Fort Meade, and the Bethesda National Naval Medical Center were scheduled to add over 12,000 personnel in total. The state of...Influences on Air Force Basing Decisions According to Edward Mead Earle, “Strategy is the art of controlling and utilizing the resources of a nation—or a
1988-03-29
PERIOD COVERED A Comparison of the Operational Art of George G. Meade and Robert E. Lee During the Period Jun 1863 Indiv study project to March...publication until it has been cleared by the appropriate military Service or government agency. A Comparison Of The Operational Art Of George Gordon...and/or Dist Special. ABSTRACT AUTHOR: THOMAS A. GREEN, LTC, AV. TITLE: A Comparison of the Operational Art of George Gordon Meade and Robert Edward
Infant formula companies battle for breast.
1996-01-01
The infant formula manufacturer Mead Johnson has filed a lawsuit in Ontario courts against its competitor Ross Abbott for false advertising of its new Similac brand of infant formula. Mead Johnson contends that the Ross Abbott advertisement of Similac as providing benefits similar to mother's milk is false and misleading. Breast feeding specialists agree with Mead Johnson's claim. Yet, one year earlier, Mead Johnson claimed that its own infant formula is modeled after mother's milk. The Infant Feeding Action Coalition (INFACT) Canada had complained to the Competition Bureau, which called for Mead Johnson to cease its claim. Court documents reveal that both companies disregard the World Health Organization (WHO) International Code of Marketing of Breast Milk Substitutes. This code prohibits manufacturers from advertising directly to pregnant women and mothers. Two Ross Abbott spokespersons have different responses to their advertising practices: increasing emphasis on consumer promotion and support of the principle and objective of the WHO Code. Both companies have sought the support of health professionals in Canada. In July 1996 Mead Johnson sent letters to about 7000 clinicians declaring "as someone who cares about infant health and nutrition as much as we do" and "...the most alarming concern is that, although there is no scientific basis for such claims, mothers believe them to be true." Ross Abbott responded to these letters by sending physicians letters declaring "Our business is built on trust, and we assure you that you may trust in Similac Advance and the benefits we have ascribed to it." The two companies will meet again in court on September 30, 1996.
Ten years of research on the MeadWestvaco Wildlife and Ecosystem Research Forest
P.D. Keyser; W.M. Ford; W.M. Ford
2005-01-01
Contains 90 citations and annotations of publications and final reports that describe research conducted on or in association with the MeadWestvaco Wildlife and Ecosystem Research Forest in West Virginia from 1994 through 2004.
Stress field rotation or block rotation: An example from the Lake Mead fault system
NASA Technical Reports Server (NTRS)
Ron, Hagai; Nur, Amos; Aydin, Atilla
1990-01-01
The Coulomb criterion, as applied by Anderson (1951), has been widely used as the basis for inferring paleostresses from in situ fault slip data, assuming that faults are optimally oriented relative to the tectonic stress direction. Consequently, if stress direction is fixed during deformation, so must be the faults. Freund (1974) has shown that faults, when arranged in sets, must generally rotate as they slip. Nur et al. (1986) showed how sufficiently large rotations require the development of new sets of faults which are more favorably oriented to the principal direction of stress. This leads to the appearance of multiple fault sets in which older faults are offset by younger ones, both having the same sense of slip. Consequently, correct paleostress analysis must include the possible effect of fault and material rotation, in addition to stress field rotation. The combined effects of stress field rotation and material rotation were investigated in the Lake Mead Fault System (LMFS), especially in the Hoover Dam area. Fault inversion results imply an apparent 60 degrees clockwise (CW) rotation of the stress field since mid-Miocene time. In contrast, structural data from the rest of the Great Basin suggest only a 30 degrees CW stress field rotation. By incorporating paleomagnetic and seismic evidence, the 30 degrees discrepancy can be neatly resolved. Based on paleomagnetic declination anomalies, it is inferred that slip on NW trending right lateral faults caused a local 30 degrees counter-clockwise (CCW) rotation of blocks and faults in the Lake Mead area. Consequently, the inferred 60 degrees CW rotation of the stress field in the LMFS consists of an actual 30 degrees CW rotation of the stress field (as for the entire Great Basin) plus a local 30 degrees CCW material rotation of the LMFS fault blocks.
Stress field rotation or block rotation: An example from the Lake Mead fault system
NASA Astrophysics Data System (ADS)
Ron, Hagai; Nur, Amos; Aydin, Atilla
1990-02-01
The Coulomb criterion, as applied by Anderson (1951), has been widely used as the basis for inferring paleostresses from in situ fault slip data, assuming that faults are optimally oriented relative to the tectonic stress direction. Consequently, if stress direction is fixed during deformation, so must be the faults. Freund (1974) has shown that faults, when arranged in sets, must generally rotate as they slip. Nur et al. (1986) showed how sufficiently large rotations require the development of new sets of faults which are more favorably oriented to the principal direction of stress. This leads to the appearance of multiple fault sets in which older faults are offset by younger ones, both having the same sense of slip. Consequently, correct paleostress analysis must include the possible effect of fault and material rotation, in addition to stress field rotation. The combined effects of stress field rotation and material rotation were investigated in the Lake Mead Fault System (LMFS), especially in the Hoover Dam area. Fault inversion results imply an apparent 60 degrees clockwise (CW) rotation of the stress field since mid-Miocene time. In contrast, structural data from the rest of the Great Basin suggest only a 30 degrees CW stress field rotation. By incorporating paleomagnetic and seismic evidence, the 30 degrees discrepancy can be neatly resolved. Based on paleomagnetic declination anomalies, it is inferred that slip on NW trending right lateral faults caused a local 30 degrees counter-clockwise (CCW) rotation of blocks and faults in the Lake Mead area. Consequently, the inferred 60 degrees CW rotation of the stress field in the LMFS consists of an actual 30 degrees CW rotation of the stress field (as for the entire Great Basin) plus a local 30 degrees CCW material rotation of the LMFS fault blocks.
77 FR 42713 - Notice of Intent to Grant an Exclusive License; PadJack, Inc.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-20
... Security Agency Technology Transfer Program, 9800 Savage Road, Suite 6541, Fort George G. Meade, MD 20755... Savage Road, Suite 6541, Fort George G. Meade, MD 20755-6541, telephone (443) 479-9569. Dated: July 17...
NASA Astrophysics Data System (ADS)
Castle, S.; Famiglietti, J. S.; Reager, J. T.; Thomas, B.
2012-12-01
Over the past decade the Western US has experienced extreme drought conditions, which have affected both agricultural and urban areas. An example of water infrastructure impacted by these droughts is Lake Mead, the largest reservoir in the United States when at full capacity, which provides water and energy for several states in the Western US. Once Lake Mead falls below the critical elevation of 1050 feet above sea level, Hoover Dam, the structure that created Lake Mead by damming flow within the Colorado River, will stop producing energy for Las Vegas. The Gravity Recovery and Climate Experiment (GRACE) satellites, launched in 2002, have proven successful for monitoring changes in water storage over large areas, and give hydrologists a first-ever picture of how total water storage is changing spatially and temporally within large regions. Given the importance of the Colorado River in meeting the water demands of several neighboring regions, including Southern California, it is vital to understand how water is transported and managed throughout the basin. In this research, we use hydrologic remote sensing to characterize the human and natural water balance of the Colorado River basin and Lake Mead. The research will include quantifying the amount of Colorado River water delivered to Southern California, coupling the GRACE Total Water Storage signal of the Upper and Lower Colorado River with Landsat-TM satellite imagery and the areal extent of Lake Mead water storage, and combining these data to determine the current status of water availability in the Western US. We consider water management and policy changes necessary for sustainable water practices including human water use, hydropower, and ecosystem services in arid regions throughout the Western US.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-15
..., California, Grey's Mountain Ecosystem Restoration Project AGENCY: Forest Service, USDA. ACTION: Notice of... a series of ecological restoration treatments, north of the community of Bass Lake, California, south of Soquel Meadow, east of Nelder Grove Historical Area and west of Graham Mountain. Treatment...
A technical report on structural evaluation of the Meade County reinforced concrete bridge.
DOT National Transportation Integrated Search
2009-01-01
This is a technical report on the first phase of the evaluation of the Meade County reinforced concrete bridge. : The first three chapters introduce the main problem and provide a general review of the existing evaluation : methods and the procedures...
Plague and contagionism in eighteenth-century England: the role of Richard Mead.
Zuckerman, Arnold
2004-01-01
An epidemic of plague in Marseilles in 1720 and the fear that it would spread to England led to the passing of a new quarantine act. First, however, the government sought medical advice from Dr. Richard Mead (1673-1754), which took the form of A Short Discourse Concerning Pestilential Contagion, and the Methods to Be Used to Prevent It. This tract was a contribution to the contagion concept of disease at a time when it had not yet become part of the medical mainstream as an explanation for certain epidemic diseases. Critical works appeared almost immediately attacking Mead's ideas. The Short Discourse went through nine editions, the last in 1744. In the last two editions there are further elaborations of his earlier views and references to Newton's Optics and the ether theory. Some of Mead's practical recommendations for dealing with the plague, should it enter the country, were relatively new. References to his plague tract appeared in a number of medical and nonmedical works well beyond his lifetime.
Bottom Sediment as a Source of Organic Contaminants in Lake Mead, Nevada, USA
Treated wastewater effluent from Las Vegas, Nevada, and surrounding communities flows through Las Vegas Wash (LVW) into the Lake Mead National Recreation Area at Las Vegas Bay (LVB). Lake sediment is a likely sink for many hydrophobic synthetic organic compounds (SOCs); however,...
Rosen, Michael R.; Alvarez, David A.; Goodbred, Steven L.; Leiker, Thomas J.; Patino, Reynaldo
2009-01-01
compounds (SOCs) at pg L-1 concentrations. Semi-permeable membrane devices and POCIS were deployed in Lake Mead, at two sites in Las Vegas Wash, at four sites across Lake Mead, and in the Colorado River downstream from Hoover Dam. Concentrations of hydrophobic SOCs were highest in Las Vegas Wash downstream from waste water and urban inputs and at 8 m depth in Las Vegas Bay (LVB) where Las Vegas Wash enters Lake Mead. The distribution of hydrophobic SOCs showed a lateral distribution across 10 km of Lake Mead from LVB to Boulder Basin. To assess possible vertical gradients of SOCs, SPMDs were deployed at 4-m intervals in 18 m of water in LVB. Fragrances and legacy SOCs were found at the greatest concentrations at the deepest depth. The vertical gradient of SOCs indicated that contaminants were generally confined to within 6 m of the lake bottom during the deployment interval. The high SOC concentrations, warmer water temperatures, and higher total dissolved solids concentrations at depth are indicative of a plume of Las Vegas Wash water moving along the lake bottom. The lateral and vertical distribution of SOCs is discussed in the context of other studies that have shown impaired health of fish exposed to SOCs.
A Constructive Task in Religious Education: Making Christian Selves.
ERIC Educational Resources Information Center
Anderson, E. Byron
1998-01-01
Discusses the self as socially constructed by comparing George Herbert Mead and Robert Kegan and then addresses the construction of the religious self by focusing on Mead, Kegan, and George Lindbeck. Considers the "theonomous self" (identity related to God). Concludes by discussing ecclesial practices that correspond to the theonomous…
Mead and Dewey: Thematic Connections on Educational Topics.
ERIC Educational Resources Information Center
Dennis, Lawrence J.; Stickel, George W.
1981-01-01
Common themes emerge from the writings of John Dewey and George Herbert Mead on four educational topics discussed here: (1) play; (2) science teaching; (3) history teaching; and (4) industrial education. Both men deplored the fragmentation of education and believed moral insight could be furthered through social understanding, science, and…
DOT National Transportation Integrated Search
2009-01-01
Meade County Bridge is a two-lane highway reinforced concrete bridge with two girders, each with 20 continuous spans. The bridge was built in 1965. It has been reported that in the early years of the bridge's service period, a considerable number of cracks ...
Self-consolidating concrete repairs on Interstate 25 bridge abutments north of Mead.
DOT National Transportation Integrated Search
2013-07-01
In August of 2011 CDOT performed maintenance on Interstate 25 bridges D-17-DA and DB on I-25 north of Mead, CO. : The maintenance was performed using self-consolidating concrete (SCC), and the methods were based on a study performed : by the Colorado...
Margaret Mead's Early Fieldwork: Methods and Implications for Education.
ERIC Educational Resources Information Center
Kincheloe, Teresa Scott
1980-01-01
Reviews the early career of Margaret Mead (1928-1942) and study methods she used in Samoa, New Guinea, and Bali. Particular attention is paid to her examinations of sex roles and her own experiences as a female scientist. (Part of a theme issue on anthropological methods in educational research.) (SJL)
DOT National Transportation Integrated Search
2004-04-19
The Federal Aviation Administration (FAA), in cooperation with the National Park Service (NPS), has initiated the development of an Air Tour Management Plan (ATMP) for Lake Mead National Recreation Area (LAME) pursuant to the National Parks Air Tour ...
Persistence of echimidine, a hepatotoxic pyrrolizidine alkaloid, from honey into mead
USDA-ARS?s Scientific Manuscript database
Honey produced by bees foraging on Echium plantagineum is known to contain dehydropyrrolizidine alkaloids characteristic of the plant. Following a prolific growth of E. plantagineum in the wake of Australian bushfires, two samples of mead, a fermented drink made from honey, and the honey used to pre...
Further advances in predicting species distributions
Gretchen G. Moisen; Thomas C. Edwards; Patrick E. Osborne
2006-01-01
In 2001, a workshop focused on the use of generalized linear models (GLM: McCullagh and Nelder, 1989) and generalized additive models (GAM: Hastie and Tibshirani, 1986, 1990) for predicting species distributions was held in Riederalp, Switzerland. This topic led to the publication of special issues in Ecological Modelling (Guisan et al., 2002) and Biodiversity and...
2015-07-20
Lake Mead supplies water for Arizona, California, Mexico, and other western states. On June 23, the water level fell to 1075 feet, a record low. In 2000, for comparison, the water level was at 1214 feet. A 15-year drought and increased demands for water are to blame for the critical status of the water supply. The difference in 15 years is seen in this pair of images of the western part of Lake Mead, acquired June 21, 2000 by Landsat 7, and June 21, 2015 by ASTER. The images cover an area of 22.5 x 28.5 km, and are located at 36.1 degrees north, 114.7 degrees west. http://photojournal.jpl.nasa.gov/catalog/PIA19731
Fort Meade demonstration test LEDS in freezer rooms, fiber optics in display cases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parker, Steven; Parker, Graham B.
2008-10-25
Demonstration projects at Fort George G. Meade, MD, substituted LED lighting for incandescent bulbs in commissary walk-in freezers and fiber optic lighting in reach-in display cases. The goal was to reduce energy consumption, and the results were positive. Journal article published in Public Works Digest.
78 FR 25259 - Notice of Intent To Grant an Exclusive License; Integrata Security, LLC
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-30
..., 9800 Savage Road, Suite 6848, Fort George G. Meade, MD 20755-6848. FOR FURTHER INFORMATION CONTACT: Marian T. Roche, Director, Technology Transfer Program, 9800 Savage Road, Suite 6848, Fort George G. Meade, MD 20755-6848, telephone (443) 634-3514. Dated: April 24, 2013. Aaron Siegel, Alternate OSD...
Contorting Identities: Figuring Literacy and Identity in Adolescent Worlds
ERIC Educational Resources Information Center
Quinlan, A.; Curtin, A.
2017-01-01
This paper explores connections and disconnects between identity and literacy for a group of adolescents in a second level classroom setting. We build on Mead and Vygotsky's conceptualisations of identity formation as an intricate emergent happening constantly formed/reformed by people, in their interactions with others [Mead, G. H. 1999.…
Consuming Anomie: Children and Global Commercial Culture. Research Note
ERIC Educational Resources Information Center
Langer, Beryl
2005-01-01
This article locates George Herbert Mead's account of self-formation in the context of global consumer capitalism, in which the "generalized other" is constructed as a desiring consumer. It argues for a sociology of consumer childhood that, via Mead, takes children's agency as a given and explores the implications of their interaction with the…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-08
... Coated Board, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes Request for Blanket Section 204 Authorization This is a supplemental notice in the above-referenced proceeding of MeadWestvaco Coated Board, LLC's application for market-based rate authority, with an accompanying rate schedule...
75 FR 5115 - Temporary Concession Contract for Lake Mead National Recreation Area, AZ/NV
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-01
... National Recreation Area, AZ/NV AGENCY: National Park Service, Department of the Interior. ACTION: Notice of intention to award temporary concession contract for Lake Mead National Recreation Area. SUMMARY: Pursuant to 36 CFR 51.24, public notice is hereby given that the National Park Service intends to award a...
van Goor, S A; Schaafsma, A; Erwich, J J H M; Dijck-Brouwer, D A J; Muskiet, F A J
2010-01-01
We showed that docosahexaenoic acid (DHA) supplementation during pregnancy and lactation was associated with more mildly abnormal (MA) general movements (GMs) in the infants. Since this finding was unexpected and inter-individual DHA intakes are highly variable, we explored the relationship between GM quality and erythrocyte DHA, arachidonic acid (AA), DHA/AA and Mead acid in 57 infants of this trial. MA GMs were inversely related to AA, associated with Mead acid, and associated with DHA/AA in a U-shaped manner. These relationships may indicate dependence of newborn AA status on synthesis from linoleic acid. This becomes restricted during the intrauterine period by abundant de novo synthesis of oleic and Mead acids from glucose, consistent with reduced insulin sensitivity during the third trimester. The descending part of the U-shaped relation between MA GMs and DHA/AA probably indicates DHA shortage next to AA shortage. The ascending part may reflect a different developmental trajectory that is not necessarily unfavorable. Copyright 2009 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Protzman, G.M.; Mitra, G.
The emplacement history of a thrust sheet is recorded by the strain accumulated in its hanging wall and footwall. Detailed studies of second order structures and analysis of strain due to pressure solution and plastic deformation allow the authors to determine the deformation history of the Meade thrust in the Idaho-Wyoming thrust belt. Emplacement of the Meade thrust was accompanied by the formation of a series of second order en echelon folds in the footwall. Temporal relations based on detailed structural studies show that these folds, which are confined to the Jurassic Twin Creek Formation, formed progressively in front of the advancing Meade thrust and were successively truncated and overridden by footwall imbricates of the Meade thrust. The Twin Creek Formation in both the hanging wall and footwall of the Meade thrust is penetratively deformed, with a well developed pressure solution cleavage. In addition, plastic strain is recorded by deformed Pentacrinus within fossil hash layers in the Twin Creek. Much of this penetrative deformation took place early in the history of the thrust sheet as layer parallel shortening, and the cleavage and deformed fossils behaved passively during subsequent folding and faulting. The later stages of deformation may be sequentially removed through balancing techniques to track successive steps in the deformation. This strain history, which is typical of an internal thrust sheet, is partly controlled by the lithologies involved, timing between successive thrusts, and the amount of interaction between major faults.
Boyd, Robert A.; Furlong, Edward T.
2002-01-01
The U.S. Geological Survey and the National Park Service conducted a reconnaissance study to investigate the occurrence of selected human-health pharmaceutical compounds in water samples collected from Lake Mead on the Colorado River and Las Vegas Wash, a waterway used to transport treated wastewater from the Las Vegas metropolitan area to Lake Mead. Current research indicates many of these compounds can bioaccumulate and may adversely affect aquatic organisms by disrupting physiological processes, impairing reproductive functions, increasing cancer rates, contributing to the development of antibiotic-resistant strains of bacteria, and acting in undesirable ways when mixed with other substances. These compounds may be present in effluent because a high percentage of prescription and non-prescription drugs used for human-health purposes are excreted from the body as a mixture of parent compounds and degraded metabolite compounds; also, they can be released to the environment when unused products are discarded by way of toilets, sinks, and trash in landfills. Thirteen of 33 targeted compounds were detected in at least one water sample collected between October 2000 and August 2001. All concentrations were less than or equal to 0.20 micrograms per liter. The most frequently detected compounds in samples from Las Vegas Wash were caffeine, carbamazepine (used to treat epilepsy), cotinine (a metabolite of nicotine), and dehydronifedipine (a metabolite of the antianginal Procardia). Less frequently detected compounds in samples collected from Las Vegas Wash were antibiotics (clarithromycin, erythromycin, sulfamethoxazole, and trimethoprim), acetaminophen (an analgesic and anti-inflammatory), cimetidine (used to treat ulcers), codeine (a narcotic and analgesic), diltiazem (an antihypertensive), and 1,7-dimethylxanthine (a metabolite of caffeine). Fewer compounds were detected in samples collected from Lake Mead than from Las Vegas Wash. 
Caffeine was detected in all samples collected from Lake Mead. Other compounds detected in samples collected from Lake Mead were acetaminophen, carbamazepine, cotinine, 1,7-dimethylxanthine, and sulfamethoxazole.
Reinterpreting Internalization and Agency through G. H. Mead's Perspectival Realism
ERIC Educational Resources Information Center
Martin, Jack
2006-01-01
Toward the end of his life, George Herbert Mead developed a theory of perspectives that may be used to reinterpret his social, developmental psychology. This paper attempts such a reinterpretation, leading to the emergence of a theory of perspective taking in early childhood that looks quite different from that which is assumed in most extant work…
36 CFR 7.48 - Lake Mead National Recreation Area.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 36 Parks, Forests, and Public Property 1 2012-07-01 2012-07-01 false Lake Mead National Recreation Area. 7.48 Section 7.48 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE... watercraft at a speed in excess of flat wake speed within 200 feet of any beach occupied by bathers, boats at...
36 CFR 7.48 - Lake Mead National Recreation Area.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 36 Parks, Forests, and Public Property 1 2011-07-01 2011-07-01 false Lake Mead National Recreation Area. 7.48 Section 7.48 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE... watercraft at a speed in excess of flat wake speed within 200 feet of any beach occupied by bathers, boats at...
36 CFR 7.48 - Lake Mead National Recreation Area.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 36 Parks, Forests, and Public Property 1 2014-07-01 2014-07-01 false Lake Mead National Recreation Area. 7.48 Section 7.48 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE... watercraft at a speed in excess of flat wake speed within 200 feet of any beach occupied by bathers, boats at...
36 CFR 7.48 - Lake Mead National Recreation Area.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 36 Parks, Forests, and Public Property 1 2010-07-01 2010-07-01 false Lake Mead National Recreation Area. 7.48 Section 7.48 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE... watercraft at a speed in excess of flat wake speed within 200 feet of any beach occupied by bathers, boats at...
36 CFR 7.48 - Lake Mead National Recreation Area.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 36 Parks, Forests, and Public Property 1 2013-07-01 2013-07-01 false Lake Mead National Recreation Area. 7.48 Section 7.48 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE... watercraft at a speed in excess of flat wake speed within 200 feet of any beach occupied by bathers, boats at...
Revolution and Counter-Revolution: Network Mobilization to Preserve Public Education in Wisconsin
ERIC Educational Resources Information Center
Kelley, Carolyn; Mead, Julie
2017-01-01
In this article, Kelley and Mead consider changes in the policymaking process in Wisconsin before the election of Governor Walker, in the early years following his election, and in the months preceding passage of the 2015-17 biennial budget. Kelley and Mead argue that in Wisconsin, serious and significant attacks to public education motivated by…
AmeriFlux US-Ne3 Mead - rainfed maize-soybean rotation site
Suyker, Andy [University of Nebraska - Lincoln
2016-01-01
This is the AmeriFlux version of the carbon flux data for the site US-Ne3 Mead - rainfed maize-soybean rotation site. Site Description - The study site is one of three fields (all located within 1.6 km of each other) at the University of Nebraska Agricultural Research and Development Center near Mead, Nebraska. While the other two sites are equipped with irrigation systems, this site relies on rainfall. A tillage operation (disking) was done just prior to the 2001 planting to homogenize the top 0.1 m of soil, incorporate P and K fertilizers, as well as previously accumulated surface residues. Since initiation of the study in 2001, this site has been under no-till management.
George Herbert Mead and the Allen controversy at the University of Wisconsin.
Cook, Gary A
2007-01-01
This essay uses previously unpublished correspondence of George Herbert Mead to tell the story of his involvement in the aftermath of a political dispute that took place at the University of Wisconsin during the years 1914-1915. It seeks thereby to clarify the historical significance of an article he published on this controversy in late 1915. Taken together with relevant information about the educational activities of William H. Allen of the New York Bureau of Municipal Research, Mead's correspondence and article throw helpful light upon his understanding of how an educational survey of a university should proceed; they also show how he went about the task of evaluating a failed attempt at such a survey. (c) 2007 Wiley Periodicals, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schultheisz, D.; Ward, L.
1994-04-01
This report presents the results of the Community Environmental Response Facilitation Act (CERFA) investigation conducted by Environmental Resources Management (ERM) at Fort George G. Meade (FGGM), a U.S. Government property selected for closure by the Base Realignment and Closure (BRAC) Commission. Under CERFA, Federal agencies are required to expeditiously identify real property that can be immediately reused and redeveloped. Satisfying this objective requires the identification of real property where no hazardous substances or petroleum products, regulated by the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA), were stored for one year or more, known to have been released, or disposed. Fort George G. Meade, CERFA, Base closure, BRAC.
Mars Entry Atmospheric Data System Modeling, Calibration, and Error Analysis
NASA Technical Reports Server (NTRS)
Karlgaard, Christopher D.; VanNorman, John; Siemers, Paul M.; Schoenenberger, Mark; Munk, Michelle M.
2014-01-01
The Mars Science Laboratory (MSL) Entry, Descent, and Landing Instrumentation (MEDLI)/Mars Entry Atmospheric Data System (MEADS) project installed seven pressure ports through the MSL Phenolic Impregnated Carbon Ablator (PICA) heatshield to measure heatshield surface pressures during entry. These measured surface pressures are used to generate estimates of atmospheric quantities based on modeled surface pressure distributions. In particular, the quantities to be estimated from the MEADS pressure measurements include the dynamic pressure, angle of attack, and angle of sideslip. This report describes the calibration of the pressure transducers utilized to reconstruct the atmospheric data and associated uncertainty models, pressure modeling and uncertainty analysis, and system performance results. The results indicate that the MEADS pressure measurement system hardware meets the project requirements.
WE-AB-BRA-12: Virtual Endoscope Tracking for Endoscopy-CT Image Registration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ingram, W; Rao, A; Wendt, R
Purpose: The use of endoscopy in radiotherapy will remain limited until we can register endoscopic video to CT using standard clinical equipment. In this phantom study we tested a registration method using virtual endoscopy to measure CT-space positions from endoscopic video. Methods: Our phantom is a contorted clay cylinder with 2-mm-diameter markers in the luminal surface. These markers are visible on both CT and endoscopic video. Virtual endoscope images were rendered from a polygonal mesh created by segmenting the phantom’s luminal surface on CT. We tested registration accuracy by tracking the endoscope’s 6-degree-of-freedom coordinates frame-to-frame in a video recorded asmore » it moved through the phantom, and using these coordinates to measure CT-space positions of markers visible in the final frame. To track the endoscope we used the Nelder-Mead method to search for coordinates that render the virtual frame most similar to the next recorded frame. We measured the endoscope’s initial-frame coordinates using a set of visible markers, and for image similarity we used a combination of mutual information and gradient alignment. CT-space marker positions were measured by projecting their final-frame pixel addresses through the virtual endoscope to intersect with the mesh. Registration error was quantified as the distance between this intersection and the marker’s manually-selected CT-space position. Results: Tracking succeeded for 6 of 8 videos, for which the mean registration error was 4.8±3.5mm (24 measurements total). The mean error in the axial direction (3.1±3.3mm) was larger than in the sagittal or coronal directions (2.0±2.3mm, 1.7±1.6mm). In the other 2 videos, the virtual endoscope got stuck in a false minimum. Conclusion: Our method can successfully track the position and orientation of an endoscope, and it provides accurate spatial mapping from endoscopic video to CT. 
This method will serve as a foundation for an endoscopy-CT registration framework that is clinically valuable and requires no specialized equipment.
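The tracking step described above — searching for the pose coordinates that render the virtual frame most similar to the recorded one — can be sketched with a generic downhill-simplex minimizer. The sketch below is illustrative only: `render_landmarks` and the sum-of-squares `dissimilarity` are hypothetical stand-ins for the study's virtual-endoscopy renderer and its mutual-information/gradient-alignment similarity, reduced to a toy 3-DOF pose so the example is self-contained.

```python
import math

def nelder_mead(f, x0, step=0.25, tol=1e-10, max_iter=1000):
    """Minimal downhill-simplex minimizer with the standard coefficients."""
    alpha, gamma, rho, sigma = 1.0, 2.0, 0.5, 0.5
    n = len(x0)
    # Initial simplex: x0 plus one perturbed vertex per dimension.
    simplex = [list(x0)] + [
        [x0[j] + (step if j == i else 0.0) for j in range(n)] for i in range(n)
    ]
    for _ in range(max_iter):
        simplex.sort(key=f)
        if abs(f(simplex[-1]) - f(simplex[0])) < tol:
            break
        worst = simplex[-1]
        centroid = [sum(v[i] for v in simplex[:-1]) / n for i in range(n)]
        refl = [centroid[i] + alpha * (centroid[i] - worst[i]) for i in range(n)]
        if f(simplex[0]) <= f(refl) < f(simplex[-2]):
            simplex[-1] = refl                                   # reflect
        elif f(refl) < f(simplex[0]):
            exp = [centroid[i] + gamma * (refl[i] - centroid[i]) for i in range(n)]
            simplex[-1] = exp if f(exp) < f(refl) else refl      # expand
        else:
            contr = [centroid[i] + rho * (worst[i] - centroid[i]) for i in range(n)]
            if f(contr) < f(worst):
                simplex[-1] = contr                              # contract
            else:                                                # shrink toward best
                best = simplex[0]
                simplex = [best] + [
                    [best[i] + sigma * (v[i] - best[i]) for i in range(n)]
                    for v in simplex[1:]
                ]
    return min(simplex, key=f)

# Hypothetical stand-in for "render the virtual frame at this pose":
# rigidly transform two fixed landmarks by a toy 3-DOF pose (x, y, heading).
LANDMARKS = [(2.0, 0.0), (0.0, 3.0)]

def render_landmarks(pose):
    x, y, th = pose
    c, s = math.cos(th), math.sin(th)
    return [(c * px - s * py + x, s * px + c * py + y) for px, py in LANDMARKS]

def dissimilarity(pose, observed):
    """Stand-in for negated image similarity: landmark sum of squared errors."""
    return sum((a - u) ** 2 + (b - v) ** 2
               for (a, b), (u, v) in zip(render_landmarks(pose), observed))

true_pose = [0.4, -0.2, 0.15]
observed = render_landmarks(true_pose)            # the "recorded frame"
estimate = nelder_mead(lambda p: dissimilarity(p, observed), [0.0, 0.0, 0.0])
```

On this smooth toy objective the simplex converges to the true pose; the real image-similarity surface is far less well behaved, which is consistent with the authors' report that the search occasionally stalled in a false minimum.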
UMCS feasibility study for Fort George G. Meade. Volume 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1996-12-01
Fort George G. Meade selected eighty-three (83) of the approximately 1,500 buildings on the base to be included in the UMCS Feasibility Study. The purpose of the study is to evaluate the feasibility of replacing the existing analog-based Energy Monitoring and Control System (EMCS) with a new distributed-process Monitoring and Control System (UMCS).
To investigate the potential for contaminants in Las Vegas Wash (LW) influent to produce effects indicative of endocrine disruption in vivo, adult male and female common carp were exposed in cages for 42-48 d at four sites and two reference locations in Lake Mead.
2005-01-01
gives a needed feel for the tragedy of war. In short, staff ride leaders must choose a limited number of tactical details and human-interest stories...adjutant, was a Seneca Indian. 315 4. Meade, often irritable, was particularly frustrated at Cold Harbor. Although Meade did not publicly voice his
In this first large-scale study of mercury (Hg) in Lake Mead, USA, the nation's largest man-made reservoir, total-Hg concentrations were determined in the skeletal muscle of 339 fish collected during the Fall of 1998 and the Spring of 1999. Five species of fish representing ...
Publications - GMC 350 | Alaska Division of Geological & Geophysical
in the NPRA, Alaska as follows: Ikpikpuk #1 core, cuttings, and sidewall core; South Meade #1 cuttings; Brontosaurus #1 cuttings and core; Walakpa #1 core; and Walakpa #2 cuttings and core
The astrological roots of mesmerism.
Schaffer, Simon
2010-06-01
Franz Anton Mesmer's 1766 thesis on the influence of the planets on the human body, in which he first publicly presented his account of the harmonic forces at work in the microcosm, was substantially copied from the London physician Richard Mead's early eighteenth century tract on solar and lunar effects on the body. The relation between the two texts poses intriguing problems for the historiography of medical astrology: Mesmer's use of Mead has been taken as a sign of the Vienna physician's enlightened modernity while Mead's use of astro-meteorology has been seen as evidence of the survival of antiquated astral medicine in the eighteenth century. Two aspects of this problem are discussed. First, French critics of mesmerism in the 1780s found precedents for animal magnetism in the work of Paracelsus, Fludd and other early modern writers; in so doing, they began to develop a sophisticated history for astrology and astro-meteorology. Second, the close relations between astro-meteorology and Mead's project illustrate how the environmental medical programmes emerged. The making of a history for astrology accompanied the construction of various models of the relation between occult knowledge and its contexts in the enlightenment.
AmeriFlux US-Ne2 Mead - irrigated maize-soybean rotation site
Suyker, Andy [University of Nebraska - Lincoln]
2016-01-01
This is the AmeriFlux version of the carbon flux data for the site US-Ne2 Mead - irrigated maize-soybean rotation site. Site Description - The study site is one of three fields (all located within 1.6 km of each other) at the University of Nebraska Agricultural Research and Development Center near Mead, Nebraska. This site is irrigated with a center pivot system. Prior to the initiation of the study, the irrigated site had a 10-yr history of maize-soybean rotation under no-till. A tillage operation (disking) was done just prior to the 2001 planting to homogenize the top 0.1 m of soil, incorporate P and K fertilizers, as well as previously accumulated surface residues. Since this tillage operation, the site has been under no-till management.
View of Lake Mead and Las Vegas, Nevada area from Skylab
1973-08-01
SL3-28-059 (July-September 1973) --- A vertical view of the Lake Mead and Las Vegas, Nevada area as photographed from Earth orbit by one of the six lenses of the Itek-furnished S190-A Multispectral Photographic Facility Experiment aboard the Skylab space station. Lake Mead is water of the Colorado River impounded by Hoover Dam. Most of the land in the picture is Nevada. However, a part of the northwest corner of Arizona can be seen. Federal agencies participating with NASA on the EREP project are the Departments of Agriculture, Commerce, Interior, the Environmental Protection Agency and the Corps of Engineers. All EREP photography is available to the public through the Department of Interior's Earth Resources Observations Systems Data Center, Sioux Falls, South Dakota, 57198. Photo credit: NASA
NASA Astrophysics Data System (ADS)
Justet, L.; Beard, S.
2010-12-01
Hot springs and seeps discharging into Black Canyon (BC) along the Colorado River in north Colorado River Valley (CRV) support endemic riparian ecosystems in the Lake Mead National Recreation Area. Increases in groundwater development in southern NV and northwestern AZ may impact spring discharge. Sources of spring discharge in BC were evaluated using geochemical methods. Kinematic analysis and geologic mapping of structures associated with BC springs were used to evaluate structural controls on groundwater flow in BC. Geochemical analysis indicates groundwater discharge near Hoover Dam (HD) and along the faulted edge of the Boulder City Pluton is derived from Lake Mead, high δ87Sr Proterozoic or Tertiary crystalline rock and, possibly, Tertiary sedimentary rock. Reducing conditions indicated by 234U/238U and δ34S concentrations suggest the groundwater is confined and/or derived from greater depths while carbon isotopes indicate the groundwater is old. Lighter δD and δO-18, modern tritium concentrations, post-Dam U disequilibrium ages, and occurrence of anthropogenic perchlorate support the presence of a young Lake Mead component. South of the pluton, the Lake Mead component is absent. More oxidizing conditions in this part of BC, indicated by the U and S isotope concentrations, suggest the groundwater is less confined and/or derived from shallower depths compared to groundwater discharging near HD. Older apparent groundwater ages and heavier δD and δO-18 values south of the pluton indicate slower flow paths from a lower elevation or latitude source. Clarifying the nature of groundwater flow in eastern NV, the analyses indicate that hydraulic connection between the regional carbonate aquifer and BC is unlikely. Instead, the data indicate sources of BC springs are derived relatively locally in CRV and, possibly, south Lake Mead Valley. 
Results of the geologic and kinematic analyses indicate faults that formed from the interaction of E-W extension related to the AZ extensional corridor and NW-SE transtension related to the Lake Mead shear zone are the main controls on groundwater flow in the vicinity of HD and the Boulder City Pluton. Most groundwater in BC appears to discharge along the NW-striking Palm Tree fault that parallels the northern edge of the pluton. Supported by trends in chemistry, an alignment of similar-elevation springs along a N-S striking fault that extends the length of west BC may be a flow path for groundwater from north BC to south of the pluton. South of the pluton, dikes intrude many of the faults and appear to act as flow barriers. Groundwater in this part of BC may flow through stacked layers of brecciated volcanic rock prevalent in the area. Flow from laterally adjacent valleys into BC would have to cross a N-S structural fabric that is not favored kinematically. Existing information implies an overall absence of significant surface discharge in BC prior to construction of HD. This indicates that the head created by impoundment of the Colorado River has likely pushed old, slow-moving groundwater through CRV and, possibly, south Lake Mead Valley, to the surface in BC where it discharges as springs and seeps.
Delusions of Liberty: Rethinking Democracy Promotion in American Grand Strategy
2016-06-01
States’ first debates surrounding intervention in support of democratic uprisings. Edward Mead Earle describes American sentiment in 1821: “All educated...theories regarding human or state behavior are not unconditional or absolute in 3 Edward Mead ...property and to punish those who tread upon the rights of others.40 Thomas Jefferson’s rationale of popular revolt against King George III’s rule
Collecting to Win: ISR for Strategic Effect
2014-02-13
Cryptologic History (Ft. George G. Meade , MD: National Security Agency, 1992), 1-143; Matthew M. Aid, The Secret Sentry: The Untold History of the National...United States Cryptologic History. Ft. George G. Meade , MD: National Security Agency, 1992. Norris, Pat. Spies in the Sky: Surveillance Satellites in...Submitted to the Faculty In Partial Fulfillment of the Graduation Requirements Advisor: Dr. Herbert L. Frandsen, Jr. 13 February 2014
Publications - GMC 299 | Alaska Division of Geological & Geophysical
conventional NPRA core samples from the following wells: U.S. Navy Meade T. W. #1 (600'-2366'); Husky Oil NPR Operations North Kalikpik T. W. #1 (6992.5'); and Husky Oil NPR Operations Seabee T. W. #1 (6547.6')
Linder, G.; Little, E.E.
2009-01-01
The analysis and characterization of competing risks for water resources rely on a wide spectrum of tools to evaluate hazards and risks associated with their management. For example, waters of the lower Colorado River stored in reservoirs such as Lake Mead present a wide range of competing risks related to water quantity and water quality. These risks are often interdependent and complicated by competing uses of source waters for sustaining biological resources and for supporting a range of agricultural, municipal, recreational, and industrial uses. USGS is currently conducting a series of interdisciplinary case-studies on water quality of Lake Mead and its source waters. In this case-study we examine selected constituents potentially entering the Lake Mead system, particularly endocrine disrupting chemicals (EDCs). Worldwide, a number of environmental EDCs have been identified that affect reproduction, development, and adaptive behaviors in a wide range of organisms. Many EDCs are minimally affected by current treatment technologies and occur in treated sewage effluents. Several EDCs have been detected in Lake Mead, and several substances have been identified that are of concern because of potential impacts to the aquatic biota, including the sport fishery of Lake Mead and endangered razorback suckers (Xyrauchen texanus) that occur in the Colorado River system. For example, altered biomarkers relevant to reproduction and thyroid function in fishes have been observed and may be predictive of impaired metabolism and development. Few studies, however, have addressed whether such EDC-induced responses observed in the field have an ecologically significant effect on the reproductive success of fishes. To identify potential linkages between EDCs and species of management concern, the risk analysis and characterization in this reconnaissance study focused on effects (and attendant uncertainties) that might be expressed by exposed populations. 
In addition, risk reduction measures that may be of interest to resource managers are considered relative to emerging contaminants in treated effluents, interdependencies among biological resources at risk, and uses of reservoir waters derived from multiple inflows of widely varying qualities. © 2009 ASCE.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Toraya, Tetsuo; Kida, Tetsuo; Kuyama, Atsushi
Starfish oocytes are arrested at the prophase stage of the first meiotic division in the ovary and resume meiosis by the stimulus of 1-methyladenine (1-MeAde), the oocyte maturation-inducing hormone of starfish. Putative 1-MeAde receptors on the oocyte surface have been suggested, but have not yet been biochemically characterized. Immunophotoaffinity labeling, i.e., photoaffinity labeling combined with immunochemical detection, was attempted to detect unknown 1-MeAde binders, including putative maturation-inducing hormone receptors, in starfish oocytes. When the oocyte crude membrane fraction or its Triton X-100/EDTA extract was incubated with N6-[6-(5-azido-2-nitrobenzoyl)aminohexyl]carboxamidomethyl-1-methyladenine and then photo-irradiated, followed by western blotting with an antibody that was raised against a 1-MeAde hapten, a single band with Mr of 47.5 K was detected. The band was lost when the extract was heated at 100 °C. A similar 47.5 K band was detected in the crude membrane fraction of testis as well. Upon labeling with whole cells, this band was detected in immature and maturing oocytes, but only faintly in mature oocytes. As judged from these results, this 1-MeAde binder might be a possible candidate for the starfish maturation-inducing hormone receptors. - Highlights: • Synthesis of photoaffinity labeling reagents for 1-methyladenine binders of starfish. • Immunochemical detection of photoaffinity-labeled 1-methyladenine binders. • Immunophotoaffinity labeling of a 47.5 K 1-methyladenine binder in oocytes and testis. • A possible candidate of oocyte maturation-inducing hormone receptors of starfish.
Rosen, Michael R; Alvarez, David A; Goodbred, Steven L; Leiker, Thomas J; Patiño, Reynaldo
2010-01-01
The delineation of lateral and vertical gradients of organic contaminants in lakes is hampered by low concentrations and nondetection of many organic compounds in water. Passive samplers (semipermeable membrane devices [SPMDs] and polar organic chemical integrative samplers [POCIS]) are well suited for assessing gradients because they can detect synthetic organic compounds (SOCs) at pg L(-1) concentrations. Semipermeable membrane devices and POCIS were deployed in Lake Mead, at two sites in Las Vegas Wash, at four sites across Lake Mead, and in the Colorado River downstream from Hoover Dam. Concentrations of hydrophobic SOCs were highest in Las Vegas Wash downstream from waste water and urban inputs and at 8 m depth in Las Vegas Bay (LVB) where Las Vegas Wash enters Lake Mead. The distribution of hydrophobic SOCs showed a lateral distribution across 10 km of Lake Mead from LVB to Boulder Basin. To assess possible vertical gradients of SOCs, SPMDs were deployed at 4-m intervals in 18 m of water in LVB. Fragrances and legacy SOCs were found at the greatest concentrations at the deepest depth. The vertical gradient of SOCs indicated that contaminants were generally confined to within 6 m of the lake bottom during the deployment interval. The high SOC concentrations, warmer water temperatures, and higher total dissolved solids concentrations at depth are indicative of a plume of Las Vegas Wash water moving along the lake bottom. The lateral and vertical distribution of SOCs is discussed in the context of other studies that have shown impaired health of fish exposed to SOCs.
Huebner, Daniel R
2012-01-01
Mind, Self, and Society, the posthumously published volume by which George Herbert Mead is primarily known, poses acute problems of interpretation so long as scholarship does not consider the actual process of its construction. This paper utilizes extensive archival correspondence and notes in order to analyze this process in depth. The analysis demonstrates that the published form of the book is the result of a consequential interpretive process in which social actors manipulated textual documents within given practical constraints over a course of time. The paper contributes to scholarship on Mead by indicating how this process made possible certain understandings of his social psychology and by relocating the materials that make up the single published text within the disparate contexts from which they were originally drawn. © 2012 Wiley Periodicals, Inc.
Beck, R.A.; Rettig, A.J.; Ivenso, C.; Eisner, Wendy R.; Hinkel, Kenneth M.; Jones, Benjamin M.; Arp, C.D.; Grosse, G.; Whiteman, D.
2010-01-01
Ice formation and breakup on Arctic rivers strongly influence river flow, sedimentation, river ecology, winter travel, and subsistence fishing and hunting by Alaskan Natives. We use time-series ground imagery of the Meade River to examine the process at high temporal and spatial resolution. Freezeup from complete liquid cover to complete ice cover of the Meade River at Atqasuk, Alaska in the fall of 2008 occurred in less than three days, between 28 September and 2 October 2008. Breakup in 2009 occurred in less than two hours, between 23:47 UTC on 23 May 2009 and 01:27 UTC on 24 May 2009. Breakup in 2009 and 2010 was of the thermal style, in contrast to the mechanical style observed in 1966, and is consistent with a warming Arctic. © 2010 Taylor & Francis.
Astronaut Carl Meade mans pilots station during trajectory control exercise
1994-09-12
STS064-22-024 (9-20 Sept. 1994) --- With a manual and lap top computer in front of him, astronaut Carl J. Meade, STS-64 mission specialist, supports operations with the Trajectory Control Sensor (TCS) aboard the Earth-orbiting space shuttle Discovery. For this exercise, Meade temporarily mans the pilot's station on the forward flight deck. The TCS is the work of a team of workers at NASA's Johnson Space Center. Data gathered during this flight was expected to prove valuable in designing and developing a sensor for use during the rendezvous and mating phases of orbiter missions to the space station. For this demonstration, the Shuttle Pointed Autonomous Research Tool for Astronomy 201 (SPARTAN 201) was used as the target vehicle during release and retrieval operations. Photo credit: NASA or National Aeronautics and Space Administration
Optimal configuration of power grid sources based on optimal particle swarm algorithm
NASA Astrophysics Data System (ADS)
Wen, Yuanhua
2018-04-01
In order to optimize the configuration of power grid sources, an improved particle swarm optimization algorithm is proposed. First, the concepts of multi-objective optimization and the Pareto solution set are introduced. Then the performance of the classical genetic algorithm, the classical particle swarm optimization algorithm, and the improved particle swarm optimization algorithm is analyzed, and the three algorithms are simulated. Comparison of the test results demonstrates the improved algorithm's superiority in convergence and optimization performance, which lays the foundation for the subsequent micro-grid power optimization configuration solution.
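As a concrete reference for the particle swarm machinery compared above, here is a minimal global-best PSO in Python. It is a generic textbook sketch, not the paper's improved variant (which the abstract does not specify); the inertia weight and acceleration coefficients are common defaults chosen for illustration.

```python
import random

def pso(f, dim, bounds, n_particles=30, iters=300, w=0.7, c1=1.5, c2=1.5):
    """Global-best PSO minimizing f over a box; returns (best position, value)."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                     # per-particle best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]    # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # inertia + cognitive pull (own best) + social pull (swarm best)
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

random.seed(1)
best, best_val = pso(lambda x: sum(xi * xi for xi in x), dim=2, bounds=(-5.0, 5.0))
```

A multi-objective version of the kind the paper describes would replace the single scalar `f` with Pareto-dominance bookkeeping over an archive of non-dominated solutions.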
Technical Report for Proposed Ordnance Clearance at Fort George G. Meade
1991-03-01
Potomac Group (including the Patapsco, Arundel Clay, and Patuxent Formations), the Magothy Formation and the Patuxent River terraces and associated...alluvium. The youngest geologic unit in theI stratigraphic sequence underlying Fort Meade is the Magothy Formation of Late Cretaceous age. This formation...Department of the Army, 1981). The Magothy Formation unconformably overlies the sediments of the Lower Cretaceous Potomac Group. i The formations of the
Lisa L. Stillings; Michael C. Amacher
2010-01-01
Phosphorite from the Meade Peak Phosphatic Shale member of the Permian Phosphoria Formation has been mined in southeastern Idaho since 1906. Dumps of waste rock from mining operations contain high concentrations of Se which readily leach into nearby streams and wetlands. While the most common mineralogical residence of Se in the phosphatic shale is elemental Se, Se(0...
Snyder, Darin C; Delmore, James E; Tranter, Troy; Mann, Nick R; Abbott, Michael L; Olson, John E
2012-08-01
Fractionation of the two longer-lived radioactive cesium isotopes ((135)Cs and (137)Cs) produced by above-ground nuclear tests has been measured and used to clarify the dispersal mechanisms of cesium deposited in the area between the Nevada Nuclear Security Site and Lake Mead in the southwestern United States. Fractionation of these isotopes arises because the 135-decay chain requires several days to completely decay to (135)Cs, whereas the 137-decay chain takes less than one hour to decay to (137)Cs. Since the Cs precursors are gases (iodine and xenon), the (135)Cs plume was deposited farther downwind than the (137)Cs plume. Sediment core samples were obtained from the Las Vegas arm of Lake Mead, sub-sampled, and analyzed for (135)Cs/(137)Cs ratios by thermal ionization mass spectrometry. The layers proved to have nearly identical, highly fractionated isotope ratios. This information is consistent with a model in which the cesium was initially deposited onto the land area draining into Lake Mead, and the composite from all of the above-ground shots was subsequently washed into Lake Mead by high-intensity rain and wind storms, producing a layering of Cs activity in which each layer is a portion of the composite. Copyright © 2012 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Liu, Xing
2008-01-01
The proportional odds (PO) model, which is also called the cumulative odds model (Agresti, 1996, 2002; Armstrong & Sloan, 1989; Long, 1997; Long & Freese, 2006; McCullagh, 1980; McCullagh & Nelder, 1989; Powers & Xie, 2000; O'Connell, 2006), is one of the most commonly used models for the analysis of ordinal categorical data and comes from the class…
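A minimal sketch of what the proportional-odds model computes may help: each cumulative split P(Y ≤ j | x) gets its own threshold θ_j but shares a single slope vector β across all splits (the "proportional odds" assumption). The thresholds and slope below are made-up numbers for illustration, not estimates from any dataset.

```python
import math

def cumulative_probs(thresholds, beta, x):
    # logit P(Y <= j | x) = theta_j - beta'x : one slope vector shared
    # across all J-1 cumulative splits (the proportional-odds assumption).
    eta = sum(b * xi for b, xi in zip(beta, x))
    return [1.0 / (1.0 + math.exp(-(t - eta))) for t in thresholds]

def category_probs(thresholds, beta, x):
    # Differences of successive cumulative probabilities give the
    # probability of each ordinal category.
    cum = cumulative_probs(thresholds, beta, x) + [1.0]
    return [cum[0]] + [cum[j] - cum[j - 1] for j in range(1, len(cum))]

# Hypothetical 3-category model with a single predictor.
thresholds = [-1.0, 0.5]
beta = [0.8]
probs = category_probs(thresholds, beta, [0.3])
```

By construction the per-category probabilities sum to one, and the cumulative probabilities are nondecreasing in j whenever the thresholds are ordered.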
Army Drug Development Program. Phase 1. Clinical Testing
1981-02-01
drug administration, the subjects fasted from 2400 to 0600, at which time they were given 360 ml Sustacal (Mead Johnson product ...Such factors as time of day, meals, alcohol, other drugs, and lack of proper sleep may affect the level of drug in your blood on...Sustacal (Mead Johnson product ) containing a total of 360 calories. Subjects ingested a single oral 750 mg dose of WR 180,409•H3PO4 in
America’s Transitional Capacity: War, Systems, and Globalization
2013-05-23
alleviate human suffering. Amartya Sen supports the Wilsonian philosophy by pointing out that 44Mead, 131. 45Ibid., 103. 46Mead, 164-7. 19...E. Fiasco: the American Military Adventure in Iraq, 2003 to 2005. Reprint ed. New York, NY: Penguin Books, 2007. Sen , Amartya . Development as...they produced a large naval forces in order to expand trade markets and ensure 47Amartya Sen , Development as Freedom (New York: Anchor Books, 1999
Duchan, J F
2000-01-01
We have long treated communication and social assessment as related but separate domains. Theorizing by George Herbert Mead on "the social self" offers an alternative to this conceptual separation and a means of evaluating children's social interaction, social participation, and communication simultaneously. This article describes Mead's thinking and presents a framework for assessing children's social reciprocity, interactive stances, and role participation as they participate in everyday life contexts.
Why UW: Factoring in the Decision Point for Unconventional Warfare
2012-12-01
political, economic, ideological, and technological elements into the concept of strategy has existed since ancient times. See Edward Mead Earle...declassified CIA records. Irene Gendzier, states that “Known CIA agents Miles Copeland and Stephen Meade were directly involved in the Syria coup of 1947...State Herbert Hoover Jr. gave full approval.98 The only note of caution came later from the CIA operations chief, Frank Wisner, who insisted that no
A History of U.S. Military Conceptual Solutions to the Uncertainty of Expeditions
2016-06-10
War I, the Army appointed COL Samuel D. Rockenbach as commandant of the Tank School in Camp Meade , Maryland.4 In this role, COL Rockenbach intended to...France, led by Captain George S. Patton, Jr., which proved prolific both in the extent to which it influenced American thinking regarding tanks and...Tank In 1928, Secretary of War Dwight Davis directed the Army to establish an experimental armored force at Camp Meade , Maryland. This experimental
Autohypnosis and Trance Dance in Bali.
Haley, Jay; Richeport-Haley, Madeleine
2015-01-01
A masterpiece of historical importance, this paper recounts Jay and Madeleine Haley's trip to Bali nearly 50 years after Gregory Bateson and Margaret Mead first went there. The Haleys met several of the same individuals who greeted Bateson and Mead and made a film they entitled "Dance and Trance of Balinese Children." This is a fascinating document of a culture and society so different from our own and the technique of dance and trance used to regulate emotion and violence.
LEVELS OF SYNTHETIC MUSK COMPOUNDS IN ...
To test the ruggedness of a newly developed analytical method for synthetic musks, a 1-year monthly monitoring of synthetic musks in water and biota was conducted for Lake Mead (near Las Vegas, Nevada) as well as for combined sewage-dedicated effluent streams feeding Lake Mead. Data obtained from analyses of combined effluent streams from three municipal sewage treatment plants, from the effluent-receiving lake water, and from whole carp (Cyprinus carpio) tissue, indicated bioconcentration of synthetic musks in carp (1400-4500 pg/g). That same data were evaluated for the prediction of levels of synthetic musk compounds in fish, using values from the source (sewage treatment plant effluent [STP]). This study confirmed the presence of polycyclic and nitro musks in STP effluent, Lake Mead water, and carp. The concentrations of the polycyclic and nitro musks found in Lake Mead carp were considerably lower than previous studies in Germany, other European countries, and Japan. The carp samples were found to have mostly the mono-amino-metabolites of the nitro musks and intact polycyclic musks, principally HHCB (Galaxolide®) and AHTN (Tonalide®). Finally, the determination of sufficiently high levels of Galaxolide® and 4-amino musk xylene in STP effluent may be used to infer the presence of trace levels of other classes of musk compounds in the lake water. To be presented is an overview of the chemistry, the monitoring methodology, and the statistical evaluation of con
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1995-12-01
This Site Inspection Addendum Report has been prepared to address the Site Inspection portion of the Feasibility Study (FS) and Remedial Investigation/Site Inspection (RI/SI) activities at Fort George G. Meade. It has been prepared under Delivery Order No. 009 and a Change Order dated July 15, 1993, for the U.S. Army Environmental Center (USAEC), formerly known as the U.S. Army Toxic and Hazardous Materials Agency (USATHAMA). This report fulfills the requirements of deliverable ELIN A004 under Delivery Order 0009 of the Total Environmental Program Support (TEPS) contract DAAA15-91-D-0016. The purpose of this Site Inspection Addendum (SIA) report is to report the findings of Arthur D. Little's SIA investigation. The overall purpose of an SI is to evaluate whether chemical releases or potential contamination have occurred at suspected sites and to determine if further investigation is warranted. This study is an addendum to a previous SI and addresses data gaps remaining from or identified in that document. Six sites at Fort George G. Meade (FGGM) are included in the SIA: the DPDO Salvage Yard and Transformer Storage Area; the Fire Training Area; the Helicopter Hangar Area; Inactive Landfill No. 2; the Ordnance Demolition Area; and Soldiers Lake.
Wang, Ying-Fang; Tsai, Perng-Jy; Chen, Chun-Wan; Chen, Da-Ren; Dai, Yu-Tung
2011-12-30
The aims of the present study were to measure size distributions and estimate workers' exposure concentrations of oil mist nanoparticles in three selected workplaces (the forming, threading, and heat treating areas) of a fastener manufacturing plant by using a modified electrical aerosol detector (MEAD). The results were further compared with those obtained simultaneously from a nanoparticle surface area monitor (NSAM) and a scanning mobility particle sizer (SMPS) for validation purposes. Results show that oil mist nanoparticles in the three selected process areas were formed mainly through evaporation and condensation. The measured size distributions of nanoparticles were consistently unimodal. The estimated fraction of nanoparticles deposited in the alveolar (AV) region was consistently much higher than that in the head airway (HD) and tracheobronchial (TB) regions on both number and surface area concentration bases. However, a significant difference was found in the estimated fraction of nanoparticles deposited in each region when different exposure metrics were used. Comparable results were obtained from the NSAM and the MEAD. After normalization, no significant difference was found between the results obtained from the SMPS and the MEAD. It is concluded that the obtained MEAD results are suitable for assessing oil mist nanoparticle exposures. Copyright © 2011 Elsevier B.V. All rights reserved.
An efficient algorithm for function optimization: modified stem cells algorithm
NASA Astrophysics Data System (ADS)
Taherdangkoo, Mohammad; Paziresh, Mahsa; Yazdi, Mehran; Bagheri, Mohammad Hadi
2013-03-01
In this paper, we propose an optimization algorithm based on the intelligent behavior of stem cell swarms in reproduction and self-organization. Optimization algorithms such as the Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO) and the Artificial Bee Colony (ABC) algorithm can give near-optimal solutions to linear and non-linear problems in many applications; however, in some cases they can become trapped in local optima. The Stem Cells Algorithm (SCA) is an optimization algorithm inspired by the natural behavior of stem cells in evolving themselves into new and improved cells. The SCA avoids the local optima problem successfully. In this paper, we have made small changes to the implementation of this algorithm to obtain improved performance over previous versions. Using a series of benchmark functions, we assess the performance of the proposed algorithm and compare it with that of the other aforementioned optimization algorithms. The obtained results prove the superiority of the Modified Stem Cells Algorithm (MSCA).
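The abstract does not specify the MSCA update rules, so they are not reproduced here; what can be sketched is the benchmark setup such comparisons rely on. Below is the classic Rastrigin function, a multimodal surface whose grid of local minima causes exactly the trapping described above, paired with a naive random-search baseline (an assumed stand-in for illustration, not one of the algorithms compared in the paper).

```python
import math
import random

def rastrigin(x):
    # Classic multimodal benchmark: global minimum f(0, ..., 0) = 0,
    # surrounded by a lattice of local minima that trap local searches.
    return 10.0 * len(x) + sum(xi * xi - 10.0 * math.cos(2.0 * math.pi * xi)
                               for xi in x)

def random_search(f, dim, bounds, evals=2000, seed=0):
    # Naive baseline: keep the best of `evals` uniform samples from the box.
    rng = random.Random(seed)
    lo, hi = bounds
    best_x, best_val = None, float("inf")
    for _ in range(evals):
        x = [rng.uniform(lo, hi) for _ in range(dim)]
        v = f(x)
        if v < best_val:
            best_x, best_val = x, v
    return best_x, best_val

best_x, best_val = random_search(rastrigin, dim=2, bounds=(-5.12, 5.12))
```

Benchmark studies like the one described typically report, for each algorithm, the best and mean objective values reached on such functions over repeated runs.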
1992-07-09
STS050-255-027 (25 June-9 July 1992) --- Payload specialist Eugene H. Trinh, left, and astronaut Carl J. Meade, mission specialist, go to work in the U.S. Microgravity Laboratory (USML-1) science module as the blue shift crew takes over from the red. Trinh is working with an experiment at the Drop Physics Module (DPM) and Meade prepares to monitor an experiment in the Glovebox. The two joined four other astronauts and a second scientist from the private sector for 14 days of scientific data-gathering.
A waterborne outbreak of hepatitis A in Meade County, Kentucky.
Bergeisen, G H; Hinds, M W; Skaggs, J W
1985-01-01
In November 1982, Meade County, Kentucky health officials noted a sudden increase in the incidence of hepatitis A. Using a standardized interview of 73 cases (68 serologically confirmed), and 85 controls (all negative for antibody to hepatitis A virus), the most important risk factor identified was household use of untreated water from a single spring. A dose-response relationship was found for consumption of unboiled spring water. Water samples taken from the spring during the outbreak were contaminated with fecal coliforms. PMID:3966622
Christopher Litvay; Alan Rudie; Peter Hart
2003-01-01
An Excel spreadsheet developed to solve the ion-exchange equilibrium in wood pulps has been linked by dynamic data exchange to WinGEMS and used to model the non-process elements in the hardwood bleach plant of the Mead/Westvaco Evandale mill. Pulp and filtrate samples were collected from the diffusion washers and final wash press of the bleach plant. A WinGEMS model of...
Testing a Novel 3D Printed Radiographic Imaging Device for Use in Forensic Odontology.
Newcomb, Tara L; Bruhn, Ann M; Giles, Bridget; Garcia, Hector M; Diawara, Norou
2017-01-01
There are specific challenges related to forensic dental radiology and difficulties in aligning X-ray equipment to teeth of interest. Researchers used 3D printing to create a new device, the combined holding and aiming device (CHAD), to address the positioning limitations of current dental X-ray devices. Participants (N = 24) used the CHAD, soft dental wax, and a modified external aiming device (MEAD) to determine device preference, radiographer's efficiency, and technique errors. Each participant exposed six X-rays per device for a total of 432 X-rays scored. A significant difference was found at the 0.05 level between the three devices (p = 0.0015), with the MEAD having the least amount of total errors and soft dental wax taking the least amount of time. Total errors were highest when participants used soft dental wax; both the MEAD and the CHAD performed best overall. Further research in forensic dental radiology and use of holding devices is needed. © 2016 American Academy of Forensic Sciences.
Appel, Toby A
2014-01-01
Kate Campbell Hurd-Mead (1867–1941), a leader among second-generation women physicians in America, became a pioneer historian of women in medicine in the 1930s. The coalescence of events in her personal life, the declining status of women in medicine, and the growing significance of the new and relatively open field of history of medicine all contributed to this transformation in her career. While she endeavored to become part of the community of male physicians who wrote medical history, her primary identity remained that of a “medical woman.” For Hurd-Mead, the history of women in the past not only filled a vital gap in scholarship but served practical ends that she had earlier pursued by other means—those of inspiring and advancing the careers of women physicians of the present day, promoting organizations of women physicians, and advocating for equality of opportunity in the medical profession.
Protecting the Elderly: Federal Agencies’ Role Concerning Questionable Marketing Practices.
1987-08-26
1986, the Commission conducted studies of and investigated both health-related and nonhealth-related activities affecting older Americans. For example… Report to the Chairman, Select Committee on Aging, House of Representatives; General Accounting Office, Washington, DC; August 1987.
A Survey of Distributed Optimization and Control Algorithms for Electric Power Systems
Molzahn, Daniel K.; Dorfler, Florian K.; Sandberg, Henrik; ...
2017-07-25
Historically, centrally computed algorithms have been the primary means of power system optimization and control. With increasing penetrations of distributed energy resources requiring optimization and control of power systems with many controllable devices, distributed algorithms have been the subject of significant research interest. Here, this paper surveys the literature of distributed algorithms with applications to optimization and control of power systems. In particular, this paper reviews distributed algorithms for offline solution of optimal power flow (OPF) problems as well as online algorithms for real-time solution of OPF, optimal frequency control, optimal voltage control, and optimal wide-area control problems.
Warehouse stocking optimization based on dynamic ant colony genetic algorithm
NASA Astrophysics Data System (ADS)
Xiao, Xiaoxu
2018-04-01
In view of the various orders of FAW (First Automotive Works) International Logistics Co., Ltd., the SLP method is used to optimize the layout of the warehousing units in the enterprise, thus the warehouse logistics is optimized and the external processing speed of the order is improved. In addition, the relevant intelligent algorithms for optimizing the stocking route problem are analyzed. The ant colony algorithm and genetic algorithm which have good applicability are emphatically studied. The parameters of ant colony algorithm are optimized by genetic algorithm, which improves the performance of ant colony algorithm. A typical path optimization problem model is taken as an example to prove the effectiveness of parameter optimization.
Celik, Yuksel; Ulker, Erkan
2013-01-01
Marriage in honey bees optimization (MBO) is a metaheuristic optimization algorithm inspired by the mating and fertilization process of honey bees, and is a form of swarm intelligence optimization. In this study we propose improved marriage in honey bees optimization (IMBO), which adds a Lévy flight algorithm for the queen's mating flight and a neighborhood search for improving the worker drones. The IMBO algorithm's performance and its success are tested on six well-known unconstrained test functions and compared with other metaheuristic optimization algorithms. PMID:23935416
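The Lévy flight step used for the queen's mating flight can be sketched as follows. This is a minimal illustration assuming Mantegna's algorithm for generating Lévy-stable step lengths (the IMBO paper may use a different formulation); the `beta` value and the 0.01 scale factor are illustrative choices, not the paper's parameters.

```python
import math
import random

def levy_step(beta=1.5, rng=random):
    """Draw one Levy-distributed step length via Mantegna's algorithm.

    beta is the stability index (1 < beta <= 2); smaller beta gives
    heavier tails, i.e. occasional very long jumps that help the
    search escape local optima.
    """
    # Mantegna's scaling factor sigma_u for the numerator sample
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta
                  * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.gauss(0.0, sigma_u)   # heavy-tailed numerator
    v = rng.gauss(0.0, 1.0)      # unit-variance denominator
    return u / abs(v) ** (1 / beta)

# Example: perturb a candidate solution with small Levy-distributed moves
random.seed(0)
point = [0.5, -0.2]
new_point = [x + 0.01 * levy_step() for x in point]
```

Most steps are small, but the heavy tail occasionally produces a long jump, which is the property that motivates Lévy flights in metaheuristics.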
Multimodal optimization by using hybrid of artificial bee colony algorithm and BFGS algorithm
NASA Astrophysics Data System (ADS)
Anam, S.
2017-10-01
Optimization has become one of the important fields in mathematics. Many problems in engineering and science can be formulated as optimization problems, and they may have many local optima. The optimization problem with many local optima, known as a multimodal optimization problem, asks how to find the global solution. Several metaheuristic methods have been proposed to solve multimodal optimization problems, such as Particle Swarm Optimization (PSO), the Genetic Algorithm (GA), and the Artificial Bee Colony (ABC) algorithm. The performance of the ABC algorithm is better than or similar to that of other population-based algorithms, with the advantage of employing fewer control parameters. The ABC algorithm also has the advantages of strong robustness, fast convergence and high flexibility. However, it has the disadvantage of premature convergence in the later search period, and the accuracy of the optimal value sometimes cannot meet the requirements. The Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm is a good iterative method for finding a local optimum, and it compares favorably with other local optimization methods. Based on the advantages of the ABC algorithm and the BFGS algorithm, this paper proposes a hybrid of the two to solve the multimodal optimization problem. In the first step, the ABC algorithm is run to find a point; in the second step, that point is used as the initial point of the BFGS algorithm. The results show that the hybrid method can overcome the problems of the basic ABC algorithm for almost all test functions. However, if the shape of the function is flat, the proposed method does not work well.
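The two-step structure described above can be sketched as follows. Both phases are simplified stand-ins, not the paper's implementation: a plain random scout search replaces the full ABC phase, and finite-difference gradient descent replaces BFGS; the sphere objective and all parameter values are illustrative assumptions.

```python
import random

def sphere(x):
    """Benchmark objective: global minimum 0 at the origin."""
    return sum(v * v for v in x)

def global_phase(f, dim, n_scouts=200, bound=5.0, rng=random):
    """Crude stand-in for the ABC global phase: scout random points
    and keep the best one found (a real ABC also refines food sources)."""
    best = [rng.uniform(-bound, bound) for _ in range(dim)]
    for _ in range(n_scouts):
        cand = [rng.uniform(-bound, bound) for _ in range(dim)]
        if f(cand) < f(best):
            best = cand
    return best

def local_phase(f, x, lr=0.1, steps=200, h=1e-6):
    """Stand-in for BFGS: plain finite-difference gradient descent."""
    x = list(x)
    for _ in range(steps):
        grad = []
        for i in range(len(x)):
            xp = list(x)
            xp[i] += h
            grad.append((f(xp) - f(x)) / h)   # forward-difference gradient
        x = [xi - lr * g for xi, g in zip(x, grad)]
    return x

random.seed(1)
seed_point = global_phase(sphere, dim=3)   # step 1: global search
solution = local_phase(sphere, seed_point) # step 2: local refinement
```

The key design point is the hand-off: the global phase only needs to land in the basin of the global minimum, after which the local method converges quickly and accurately.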
Interior search algorithm (ISA): a novel approach for global optimization.
Gandomi, Amir H
2014-07-01
This paper presents the interior search algorithm (ISA) as a novel method for solving optimization tasks. The proposed ISA is inspired by interior design and decoration. The algorithm is different from other metaheuristic algorithms and provides new insight for global optimization. The proposed method is verified using some benchmark mathematical and engineering problems commonly used in the area of optimization. ISA results are further compared with well-known optimization algorithms. The results show that the ISA is efficiently capable of solving optimization problems. The proposed algorithm can outperform the other well-known algorithms. Further, the proposed algorithm is very simple and it only has one parameter to tune. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
1988-09-01
report for the Reelfoot Lake, Tennessee, study was submitted to HQUSACF on 13 November 1986. An Issue Resolution Conference was held on 6 April 1988… and 50 miles in length (north-south direction) with 1,590 square miles of drainage area at its mouth at Lake Mead, a man-made lake created by Hoover Dam… northwest part of the drainage area and flows generally southeast for about 45 miles to Lake Mead. Numerous ephemeral and generally…
NASA Astrophysics Data System (ADS)
Wihartiko, F. D.; Wijayanti, H.; Virgantari, F.
2018-03-01
The Genetic Algorithm (GA) is a common algorithm used to solve optimization problems with an artificial intelligence approach, as is the Particle Swarm Optimization (PSO) algorithm. The two algorithms have different advantages and disadvantages when applied to the optimization of the Model Integer Programming for Bus Timetabling Problem (MIPBTP), in which the optimal number of trips must be found subject to various constraints. The comparison results show that the PSO algorithm is superior in terms of complexity, accuracy, iteration count and program simplicity in finding the optimal solution.
NASA Astrophysics Data System (ADS)
Hayana Hasibuan, Eka; Mawengkang, Herman; Efendi, Syahril
2017-12-01
The Particle Swarm Optimization algorithm is used in this research to optimize the feature weights of the Voting Feature Interval 5 (VFI5) algorithm, so that we can find a model for using the PSO algorithm with VFI5. Optimizing feature weights on diabetes or dyspepsia data is considered important because it is closely tied to people's lives: an inaccuracy in determining the most dominant feature weights in such data could have fatal consequences. Using the PSO algorithm, accuracy on fold 1 increased from 92.31% to 96.15%, a gain of 3.8%; on fold 2, the VFI5 accuracy of 92.52% was matched by the PSO-optimized algorithm, so accuracy was unchanged; on fold 3, accuracy increased from 85.19% to 96.29%, a gain of 11%. The total accuracy across all three trials increased by 14%. In general, the Particle Swarm Optimization algorithm succeeded in increasing the accuracy on several folds; it can therefore be concluded that the PSO algorithm is well suited to optimizing the VFI5 classification algorithm.
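For reference, the core PSO update used in studies like this one can be sketched on a toy objective. This generic global-best PSO is not the paper's implementation; the sphere function and all parameter values are illustrative assumptions.

```python
import random

def sphere(x):
    """Toy objective: global minimum 0 at the origin."""
    return sum(v * v for v in x)

def pso(f, dim, n_particles=20, iters=100, w=0.7, c1=1.4, c2=1.4,
        bound=5.0, seed=0):
    """Minimal global-best PSO: each particle is pulled toward its own
    best position (c1 term) and the swarm's best position (c2 term),
    with inertia weight w damping the previous velocity."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-bound, bound) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]           # personal bests
    gbest = min(pbest, key=f)             # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

best = pso(sphere, dim=4)
```

In a feature-weighting application such as the one above, `f` would instead evaluate classification accuracy under a candidate weight vector, with everything else unchanged.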
Beard, Sue; Campagna, David J.; Anderson, R. Ernest
2010-01-01
The Lake Mead fault system is a northeast-striking, 130-km-long zone of left-slip in the southeast Great Basin, active from before 16 Ma to Quaternary time. The northeast end of the Lake Mead fault system in the Virgin Mountains of southeast Nevada and northwest Arizona forms a partitioned strain field comprising kinematically linked northeast-striking left-lateral faults, north-striking normal faults, and northwest-striking right-lateral faults. Major faults bound large structural blocks whose internal strain reflects their position within a left step-over of the left-lateral faults. Two north-striking large-displacement normal faults, the Lakeside Mine segment of the South Virgin–White Hills detachment fault and the Piedmont fault, intersect the left step-over from the southwest and northeast, respectively. The left step-over in the Lake Mead fault system therefore corresponds to a right-step in the regional normal fault system.Within the left step-over, displacement transfer between the left-lateral faults and linked normal faults occurs near their junctions, where the left-lateral faults become oblique and normal fault displacement decreases away from the junction. Southward from the center of the step-over in the Virgin Mountains, down-to-the-west normal faults splay northward from left-lateral faults, whereas north and east of the center, down-to-the-east normal faults splay southward from left-lateral faults. Minimum slip is thus in the central part of the left step-over, between east-directed slip to the north and west-directed slip to the south. 
Attenuation faults parallel or subparallel to bedding cut Lower Paleozoic rocks and are inferred to be early structures that accommodated footwall uplift during the initial stages of extension.Fault-slip data indicate oblique extensional strain within the left step-over in the South Virgin Mountains, manifested as east-west extension; shortening is partitioned between vertical for extension-dominated structural blocks and south-directed for strike-slip faults. Strike-slip faults are oblique to the extension direction due to structural inheritance from NE-striking fabrics in Proterozoic crystalline basement rocks.We hypothesize that (1) during early phases of deformation oblique extension was partitioned to form east-west–extended domains bounded by left-lateral faults of the Lake Mead fault system, from ca. 16 to 14 Ma. (2) Beginning ca. 13 Ma, increased south-directed shortening impinged on the Virgin Mountains and forced uplift, faulting, and overturning along the north and west side of the Virgin Mountains. (3) By ca. 10 Ma, initiation of the younger Hen Spring to Hamblin Bay fault segment of the Lake Mead fault system accommodated westward tectonic escape, and the focus of south-directed shortening transferred to the western Lake Mead region. The shift from early partitioned oblique extension to south-directed shortening may have resulted from initiation of right-lateral shear of the eastern Walker Lane to the west coupled with left-lateral shear along the eastern margin of the Great Basin.
Adaptive cockroach swarm algorithm
NASA Astrophysics Data System (ADS)
Obagbuwa, Ibidun C.; Abidoye, Ademola P.
2017-07-01
An adaptive cockroach swarm optimization (ACSO) algorithm is proposed in this paper to strengthen the existing cockroach swarm optimization (CSO) algorithm. The ruthless component of the CSO algorithm is modified by employing a blend-crossover predator-prey evolution method, which helps the algorithm prevent population collapse, maintain population diversity and create an adaptive search in each iteration. The performance of the proposed algorithm on 16 global optimization benchmark function problems was evaluated and compared with the existing CSO, cuckoo search, differential evolution, particle swarm optimization and artificial bee colony algorithms.
Evaluating Water Supply and Water Quality Management Options for Las Vegas Valley
NASA Astrophysics Data System (ADS)
Ahmad, S.
2007-05-01
The ever increasing population in Las Vegas is generating a huge demand for water supply on the one hand and a need for infrastructure to collect and treat the wastewater on the other. Current plans to address water demand include importing water from the Muddy and Virgin Rivers and northern counties, desalination of seawater with a trade-payoff in California, water banking in Arizona and California, and more intense water conservation efforts in the Las Vegas Valley (LVV). Water and wastewater in the LVV are intrinsically related because treated wastewater effluent is returned to Lake Mead, the drinking water source for the Valley, to get a return credit, thereby augmenting Nevada's water allocation from the Colorado River. The return of treated wastewater, however, is a major contributor of nutrients and other as-yet unregulated pollutants to Lake Mead. Parameters that influence the quantity of water include the growth of the permanent and transient population (i.e., tourists), indoor and outdoor water use, wastewater generation, wastewater reuse, water conservation, and return flow credits. The water quality of Lake Mead and the Colorado River is affected by the level of treatment of wastewater, urban runoff, groundwater seepage, and a few industrial inputs. We developed an integrated simulation model, using a system dynamics modeling approach, to account for both water quantity and quality in the LVV. The model captures the interrelationships among the many variables that influence both water quantity and water quality. The model provides a valuable tool for understanding past, present and future pathways of water and its constituents in the LVV. The model is calibrated and validated using the available data on water quantity (flows at water and wastewater treatment facilities and return water credit flow rates) and water quality parameters (TDS and phosphorus concentrations).
We used the model to explore important questions: a) What would be the effect of the water transported from the northern counties on the water supply and water quality of Lake Mead? b) What would be the impact of increased reuse of wastewater on return credits? c) What would be the effect of treating runoff water on the load of nutrients to Lake Mead?
NASA Astrophysics Data System (ADS)
Li, Y.; Acharya, K.; Chen, D.; Stone, M.; Yu, Z.; Young, M.; Zhu, J.; Shafer, D. S.; Warwick, J. J.
2009-12-01
Sustained drought in the western United States since 2000 has led to a significant drop (about 35 meters) in the water level of Lake Mead, the largest reservoir by volume in the United States. The drought, combined with rapid urban development in southern Nevada and the emergence of invasive species, has threatened the water quality and ecological processes in Lake Mead. A three-dimensional hydrodynamic model, the Environmental Fluid Dynamics Code (EFDC), was applied to investigate lake circulation and temperature stratification in parts of Lake Mead (Las Vegas Bay and Boulder Basin) under changing water levels. Besides the inflow from Las Vegas Wash and the Colorado River, the model considered atmospheric changes as well as the boundary conditions restricted by the operation of Hoover Dam. The model was calibrated and verified using observed data, including water level, velocity, and temperature, from 2003 and 2005. The model was applied to study the hydrodynamic processes at water level 366.8 m (year 2000) and at water level 338.2 m (year 2008). The high-stage simulation described the pre-drought lake hydrodynamic processes while the low-stage simulation highlighted the drawdown impact on such processes. The results showed that both inflow and the wind-driven mixing process played major roles in the thermal stratification and lake circulation in both cases. However, the atmospheric boundary played a more important role than inflow temperature in the thermal stratification of Lake Mead during water level decline. Further, the thermal stratification regime and flow circulation pattern in shallow lake regions (e.g., the Boulder Basin area) were most impacted. The temperature of the lake at the high stage was more sensitive to inflow temperatures than at the low stage. Furthermore, flow velocities decreased with the decreasing water level due to a reduction in wind impacts, particularly in shallow areas of the lake.
Such changes in temperature and lake current due to present drought have a strong influence on contaminant and nutrient dynamics and ecosystem of the lake.
A chaos wolf optimization algorithm with self-adaptive variable step-size
NASA Astrophysics Data System (ADS)
Zhu, Yong; Jiang, Wanlu; Kong, Xiangdong; Quan, Lingxiao; Zhang, Yongshun
2017-10-01
To explore the problem of parameter optimization for complex nonlinear functions, a chaos wolf optimization algorithm (CWOA) with a self-adaptive variable step-size was proposed. The algorithm is based on the swarm intelligence of the wolf pack, fully simulating the predation behavior and prey-distribution habits of wolves. It possesses three intelligent behaviors: migration, summons and siege. The "winner-take-all" competition rule and the "survival-of-the-fittest" update mechanism are also characteristics of the algorithm. Moreover, it combines the strategies of self-adaptive variable step-size search and chaos optimization. The CWOA was applied to the parameter optimization of twelve typical, complex nonlinear functions, and the obtained results were compared with many existing algorithms, including the classical genetic algorithm, the particle swarm optimization algorithm and the leader wolf pack search algorithm. The investigation results indicate that the CWOA possesses preferable optimization ability, with advantages in optimization accuracy and convergence rate, and demonstrates high robustness and global searching ability.
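The chaos-optimization component of algorithms like CWOA typically relies on a chaotic map to generate well-spread, non-repeating candidate values. The sketch below assumes the logistic map, a common choice in chaos-based metaheuristics; the abstract does not specify which map the paper uses, so the map, its parameters, and the population-initialization use are all illustrative assumptions.

```python
def logistic_map(x0=0.37, r=4.0, n=100):
    """Generate n values of the logistic map x <- r*x*(1-x).
    At r = 4 the map is chaotic on (0, 1), producing a well-spread,
    non-repeating stream suitable for seeding a search population."""
    xs = []
    x = x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

def chaotic_population(size, dim, low, high):
    """Map chaotic values in (0, 1) onto the search box [low, high]^dim."""
    stream = logistic_map(n=size * dim)
    return [[low + (high - low) * stream[i * dim + d] for d in range(dim)]
            for i in range(size)]

# Example: seed 5 candidate wolves in a 3-dimensional search space
pop = chaotic_population(size=5, dim=3, low=-10.0, high=10.0)
```

Compared with uniform random initialization, chaotic initialization is often argued to cover the search box more evenly, which is the motivation for combining it with swarm search.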
New knowledge-based genetic algorithm for excavator boom structural optimization
NASA Astrophysics Data System (ADS)
Hua, Haiyan; Lin, Shuwen
2014-03-01
Because existing genetic algorithms make insufficient use of knowledge to guide the complex optimal search, they fail to effectively solve the excavator boom structural optimization problem. To improve optimization efficiency and quality, a new knowledge-based real-coded genetic algorithm is proposed. A dual evolution mechanism combining knowledge evolution with a genetic algorithm is established to extract, handle and utilize shallow and deep implicit constraint knowledge to cyclically guide the optimal search of the genetic algorithm. Based on this dual evolution mechanism, knowledge evolution and population evolution can be connected by knowledge influence operators to improve the configurability of knowledge and genetic operators. New knowledge-based selection, crossover and mutation operators are then proposed to integrate optimal-process knowledge and domain culture to guide the excavator boom structural optimization. Eight testing algorithms, which include different genetic operators, are taken as examples to solve the structural optimization of a medium-sized excavator boom. A comparison of the optimization results shows that the algorithm including all the new knowledge-based genetic operators improves the evolutionary rate and searching ability more remarkably than the other testing algorithms, which demonstrates the effectiveness of knowledge for guiding the optimal search. The proposed knowledge-based genetic algorithm, combining multi-level knowledge evolution with numerical optimization, provides a new effective method for solving complex engineering optimization problems.
Fireworks algorithm for mean-VaR/CVaR models
NASA Astrophysics Data System (ADS)
Zhang, Tingting; Liu, Zhifeng
2017-10-01
Intelligent algorithms have been widely applied to portfolio optimization problems. In this paper, we introduce a novel intelligent algorithm, named fireworks algorithm, to solve the mean-VaR/CVaR model for the first time. The results show that, compared with the classical genetic algorithm, fireworks algorithm not only improves the optimization accuracy and the optimization speed, but also makes the optimal solution more stable. We repeat our experiments at different confidence levels and different degrees of risk aversion, and the results are robust. It suggests that fireworks algorithm has more advantages than genetic algorithm in solving the portfolio optimization problem, and it is feasible and promising to apply it into this field.
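For context, the risk measures inside a mean-VaR/CVaR portfolio model can be evaluated by historical simulation. The sketch below is a minimal illustration of that evaluation only (the paper's exact model and the fireworks algorithm itself are not shown in the abstract); the loss sample and confidence level are toy values, and the quantile index uses one simple convention among several.

```python
def var_cvar(losses, alpha=0.95):
    """Historical-simulation VaR and CVaR of a loss sample.

    VaR at level alpha is (approximately) the loss exceeded with
    probability 1 - alpha; CVaR is the average of the losses at or
    beyond the VaR, so CVaR >= VaR always holds.
    """
    ordered = sorted(losses)
    # Index of the alpha-quantile (one simple convention; others exist)
    idx = min(round(alpha * len(ordered)), len(ordered) - 1)
    var = ordered[idx]
    tail = ordered[idx:]                 # the worst (1 - alpha) fraction
    cvar = sum(tail) / len(tail)
    return var, cvar

# Toy loss sample: 100 equally likely scenarios with losses 1..100
losses = list(range(1, 101))
var, cvar = var_cvar(losses, alpha=0.95)
```

In a mean-VaR/CVaR optimization, an algorithm such as fireworks or GA would repeatedly call a function like this on the loss distribution implied by each candidate portfolio's weights.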
1994-09-09
KENNEDY SPACE CENTER, FLA. - The turbulent weather common to a Florida afternoon in the summer subsides into a serene canopy of cornflower blue, and a manmade "bird" takes flight. The Space Shuttle Discovery soars skyward from Launch Pad 39B on Mission STS-64 at 6:22:35 p.m. EDT, Sept. 9. On board are a crew of six: Commander Richard N. Richards; Pilot L. Blaine Hammond Jr.; and Mission Specialists Mark C. Lee, Carl J. Meade, Susan J. Helms and Dr. J.M. Linenger. Payloads for the flight include the Lidar In-Space Technology Experiment (LITE), the Shuttle Pointed Autonomous Research Tool for Astronomy-201 (SPARTAN-201) and the Robot Operated Material Processing System (ROMPS). Mission Specialists Lee and Meade also are scheduled to perform an extravehicular activity during the 64th Shuttle mission.
Smith, Alexander C; Cronan, John E
2014-11-01
In Escherichia coli, synthesis of the malonyl coenzyme A (malonyl-CoA) required for membrane lipid synthesis is catalyzed by acetyl-CoA carboxylase, a large complex composed of four subunits. The subunit proteins are needed in a defined stoichiometry, and it remains unclear how such production is achieved since the proteins are encoded at three different loci. Meades and coworkers (G. Meades, Jr., B. K. Benson, A. Grove, and G. L. Waldrop, Nucleic Acids Res. 38:1217-1227, 2010, doi:http://dx.doi.org/10.1093/nar/gkp1079) reported that coordinated production of the AccA and AccD subunits is due to a translational repression mechanism exerted by the proteins themselves. The AccA and AccD subunits form the carboxyltransferase (CT) heterotetramer that catalyzes the second partial reaction of acetyl-CoA carboxylase. Meades et al. reported that CT tetramers bind the central portions of the accA and accD mRNAs and block their translation in vitro. However, long mRNA molecules (500 to 600 bases) were required for CT binding, but such long mRNA molecules devoid of ribosomes seemed unlikely to exist in vivo. This, plus problematical aspects of the data reported by Meades and coworkers, led us to perform in vivo experiments to test CT tetramer-mediated translational repression of the accA and accD mRNAs. We report that increased levels of CT tetramer have no detectable effect on translation of the CT subunit mRNAs. Copyright © 2014, American Society for Microbiology. All Rights Reserved.
NASA Astrophysics Data System (ADS)
Long, Kim Chenming
Real-world engineering optimization problems often require the consideration of multiple conflicting and noncommensurate objectives, subject to nonconvex constraint regions in a high-dimensional decision space. Further challenges occur for combinatorial multiobjective problems in which the decision variables are not continuous. Traditional multiobjective optimization methods of operations research, such as weighting and epsilon constraint methods, are ill-suited to solving these complex, multiobjective problems. This has given rise to the application of a wide range of metaheuristic optimization algorithms, such as evolutionary, particle swarm, simulated annealing, and ant colony methods, to multiobjective optimization. Several multiobjective evolutionary algorithms have been developed, including the strength Pareto evolutionary algorithm (SPEA) and the non-dominated sorting genetic algorithm (NSGA), for determining the Pareto-optimal set of non-dominated solutions. Although numerous researchers have developed a wide range of multiobjective optimization algorithms, there is a continuing need to construct computationally efficient algorithms with an improved ability to converge to globally non-dominated solutions along the Pareto-optimal front for complex, large-scale, multiobjective engineering optimization problems. This is particularly important when the multiple objective functions and constraints of the real-world system cannot be expressed in explicit mathematical representations. This research presents a novel metaheuristic evolutionary algorithm for complex multiobjective optimization problems, which combines the metaheuristic tabu search algorithm with the evolutionary algorithm (TSEA), as embodied in genetic algorithms. TSEA is successfully applied to bicriteria (i.e., structural reliability and retrofit cost) optimization of the aircraft tail structure fatigue life, which increases its reliability by prolonging fatigue life. 
A comparison for this application of the proposed algorithm, TSEA, with several state-of-the-art multiobjective optimization algorithms reveals that TSEA outperforms these algorithms by providing retrofit solutions with greater reliability for the same costs (i.e., closer to the Pareto-optimal front) after the algorithms are executed for the same number of generations. This research also demonstrates that TSEA competes with and, in some situations, outperforms state-of-the-art multiobjective optimization algorithms such as NSGA II and SPEA 2 when applied to classic bicriteria test problems in the technical literature and other complex, sizable real-world applications. The successful implementation of TSEA contributes to the safety of aeronautical structures by providing a systematic way to guide aircraft structural retrofitting efforts, as well as a potentially useful algorithm for a wide range of multiobjective optimization problems in engineering and other fields.
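The multiobjective algorithms discussed above (TSEA, NSGA II, SPEA 2) all compare candidate solutions by Pareto dominance. A minimal dominance filter that extracts the non-dominated set can be sketched as follows; it assumes minimization of both objectives, and the bicriteria example values are made up for illustration.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Bicriteria example, e.g. (retrofit cost, unreliability), both minimized
points = [(1.0, 9.0), (2.0, 7.0), (3.0, 8.0), (4.0, 4.0), (5.0, 5.0)]
front = pareto_front(points)
```

Here (3.0, 8.0) is dominated by (2.0, 7.0) and (5.0, 5.0) by (4.0, 4.0), so the surviving points form the Pareto front. Production algorithms use faster sorting schemes, but the dominance test itself is exactly this comparison.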
An Orthogonal Evolutionary Algorithm With Learning Automata for Multiobjective Optimization.
Dai, Cai; Wang, Yuping; Ye, Miao; Xue, Xingsi; Liu, Hailin
2016-12-01
Research on multiobjective optimization problems has become one of the hottest topics in intelligent computation. In order to improve the search efficiency of an evolutionary algorithm and maintain the diversity of solutions, in this paper the learning automata (LA) is first used for quantization orthogonal crossover (QOX), and a new fitness function based on decomposition is proposed to achieve these two purposes. Based on these, an orthogonal evolutionary algorithm with LA for complex multiobjective optimization problems with continuous variables is proposed. The experimental results show that in continuous states, the proposed algorithm is able to achieve accurate Pareto-optimal sets and wide Pareto-optimal fronts efficiently. Moreover, a comparison with several well-known existing algorithms (nondominated sorting genetic algorithm II, decomposition-based multiobjective evolutionary algorithm, decomposition-based multiobjective evolutionary algorithm with an ensemble of neighborhood sizes, multiobjective optimization by LA, and multiobjective immune algorithm with nondominated neighbor-based selection) on 15 multiobjective benchmark problems shows that the proposed algorithm is able to find more accurate and more evenly distributed Pareto-optimal fronts than the compared ones.
Honey Bees Inspired Optimization Method: The Bees Algorithm.
Yuce, Baris; Packianather, Michael S; Mastrocinque, Ernesto; Pham, Duc Truong; Lambiase, Alfredo
2013-11-06
Optimization algorithms are search methods whose goal is to find an optimal solution to a problem, satisfying one or more objective functions, possibly subject to a set of constraints. Studies of social animals and social insects have resulted in a number of computational models of swarm intelligence. Within these swarms, collective behavior is usually very complex: the collective behavior of a swarm of social organisms emerges from the behaviors of the individuals of that swarm. Researchers have developed biologically inspired optimization methods such as Genetic Algorithms, Particle Swarm Optimization, and Ant Colony Optimization. The aim of this paper is to describe an optimization algorithm called the Bees Algorithm, inspired by the natural foraging behavior of honey bees, to find the optimal solution. The algorithm combines an exploitative neighborhood search with a random explorative search. After an explanation of the natural foraging behavior of honey bees, the basic Bees Algorithm and its improved versions are described and implemented to optimize several benchmark functions, and the results are compared with those obtained with different optimization algorithms. The results show that the Bees Algorithm offers advantages over other optimization methods, depending on the nature of the problem.
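To make the exploitative-versus-explorative split concrete, here is a minimal, hedged sketch of the basic Bees Algorithm in Python. All function and parameter names (scout count, patch size, recruit count) are illustrative assumptions, not taken from the paper:

```python
import random

def bees_algorithm(f, bounds, n_scouts=20, n_best=5, n_recruits=10,
                   patch=0.1, iters=100, seed=0):
    """Minimize f over a 1-D box with a basic Bees Algorithm sketch."""
    rng = random.Random(seed)
    lo, hi = bounds
    scouts = [rng.uniform(lo, hi) for _ in range(n_scouts)]
    for _ in range(iters):
        scouts.sort(key=f)
        new = []
        # Exploitative neighbourhood search around the best sites
        for site in scouts[:n_best]:
            recruits = [min(hi, max(lo, site + rng.uniform(-patch, patch)))
                        for _ in range(n_recruits)]
            new.append(min(recruits + [site], key=f))
        # Random explorative search for the remaining bees
        new += [rng.uniform(lo, hi) for _ in range(n_scouts - n_best)]
        scouts = new
    return min(scouts, key=f)

# Example: the minimum of (x - 2)^2 lies at x = 2
x_best = bees_algorithm(lambda x: (x - 2) ** 2, (-10, 10))
```

The best sites recruit extra bees for a local patch search, while the remaining scouts keep sampling the space at random, mirroring the two search modes described above.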
AmeriFlux US-Ne1 Mead - irrigated continuous maize site
Suyker, Andy [University of Nebraska - Lincoln]
2016-01-01
This is the AmeriFlux version of the carbon flux data for the site US-Ne1 Mead - irrigated continuous maize site. Site Description - The study site is one of three fields (all located within 1.6 km of each other) at the University of Nebraska Agricultural Research and Development Center near Mead, Nebraska. This site is irrigated with a center pivot system. Prior to the initiation of the study, the irrigated site had a 10-yr history of maize-soybean rotation under no-till. A tillage operation (disking) was done just prior to the 2001 planting to homogenize the top 0.1 m of soil, incorporate P and K fertilizers, as well as previously accumulated surface residues. Since the tillage operation, the site has been under no-till management until the harvest of 2005. Following harvest, a conservation-plow tillage operation was initiated where a small amount of N fertilizer is sprayed on the residue immediately prior to the plow operation. Approximately 1/3 of the crop residue is left on the surface. The post-harvest conservation-plow operation continues as the current practice.
Recent vegetation changes along the Colorado River between Glen Canyon Dam and Lake Mead, Arizona
Turner, Raymond Marriner; Karpiscak, Martin M.
1980-01-01
Vegetation changes in the canyon of the Colorado River between Glen Canyon Dam and Lake Mead were studied by comparing photographs taken prior to completion of Glen Canyon Dam in 1963 with photographs taken afterwards at the same sites. In general, the older pictures show an absence of riparian plants along the banks of the river. The newer photographs of each pair were taken in 1972 through 1976 and reveal an increased density of many plant species. Exotic species, such as saltcedar and camel-thorn, and native riparian plants such as sandbar willow, arrowweed, desert broom and cattail, now form a new riparian community along much of the channel of the Colorado River between Glen Canyon Dam and Lake Mead. The matched photographs also reveal that changes have occurred in the amount of sand and silt deposited along the banks. Detailed maps are presented showing distribution of 25 plant species along the reach of the Colorado River studied. Data showing changes in the hydrologic regime since completion of Glen Canyon Dam are presented. (Kosco-USGS)
First solar radio spectrometer deployed in Scotland, UK
NASA Astrophysics Data System (ADS)
Monstein, Christian
2012-10-01
A new Callisto solar radio spectrometer system has recently been installed and set into operation at Acre Road Observatory, a facility of University of Glasgow, Scotland UK. There has been an Observatory associated with Glasgow University since 1757, and they presently occupy two different sites. The main observatory ('Acre Road') is close to the Garscube Estate on the outskirts of the city of Glasgow. The outstation ('Cochno', housing the big 20 inch Grubb Parsons telescope) is located farther out at a darker site in the Kilpatrick Hills. The Acre Road Observatory comprises teaching and research labs, a workshop, the main dome housing the 16 inch Meade, the solar dome, presently housing the 12 inch Meade, a transit house containing the transit telescope, a 3m HI radio telescope and a 408 MHz pulsar telescope. They also have 10 and 8 inch Meade telescopes and several 5 inch Celestron instruments. There is a small planetarium beneath the solar dome. The new Callisto instrument is mainly foreseen for scientific solar burst observations as well as for student projects and for 'bad-weather' outreach activities.
NASA Astrophysics Data System (ADS)
Singh, R.; Verma, H. K.
2013-12-01
This paper presents a teaching-learning-based optimization (TLBO) algorithm to solve parameter identification problems in the design of digital infinite impulse response (IIR) filters. TLBO-based filter modelling is applied in simulations to calculate the parameters of an unknown plant. Unlike other heuristic search algorithms, TLBO is free of algorithm-specific parameters. In this paper, big bang-big crunch (BB-BC) optimization and PSO algorithms are also applied to the filter design for comparison. The unknown filter parameters are treated as a vector to be optimized by these algorithms, which are implemented in MATLAB. Experimental results show that TLBO estimates the filter parameters more accurately than the BB-BC optimization algorithm and converges faster than PSO. TLBO is preferable where accuracy matters more than convergence speed.
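TLBO's "algorithm-specific parameter-less" character comes from its two phases, which need only a population size and an iteration count. A rough Python sketch of the general method (illustrative only, not the authors' filter-design code):

```python
import random

def tlbo(f, dim, bounds, n=20, iters=100, seed=6):
    """Teaching-learning-based optimization sketch: a teacher phase
    pulls learners toward the best solution relative to the class
    mean, and a learner phase lets pairs of learners teach each
    other. Candidates are accepted greedily."""
    rng = random.Random(seed)
    lo, hi = bounds
    clamp = lambda v: min(hi, max(lo, v))
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    for _ in range(iters):
        teacher = min(pop, key=f)
        mean = [sum(p[d] for p in pop) / n for d in range(dim)]
        for i in range(n):
            # Teacher phase: move toward the teacher, away from the mean
            tf = rng.choice([1, 2])  # teaching factor
            cand = [clamp(pop[i][d] + rng.random()
                          * (teacher[d] - tf * mean[d]))
                    for d in range(dim)]
            if f(cand) < f(pop[i]):
                pop[i] = cand
            # Learner phase: learn from a random classmate
            j = rng.randrange(n)
            sign = 1 if f(pop[j]) < f(pop[i]) else -1
            cand = [clamp(pop[i][d] + rng.random() * sign
                          * (pop[j][d] - pop[i][d]))
                    for d in range(dim)]
            if f(cand) < f(pop[i]):
                pop[i] = cand
    return min(pop, key=f)

# Sphere function: the global minimum is at the origin
best = tlbo(lambda x: sum(v * v for v in x), dim=2, bounds=(-5, 5))
```

Note that beyond population size and iteration count there is nothing to tune, which is the property the abstract highlights.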
Queue and stack sorting algorithm optimization and performance analysis
NASA Astrophysics Data System (ADS)
Qian, Mingzhu; Wang, Xiaobao
2018-04-01
Sorting algorithms are among the basic operations in a variety of software development, and data structures courses cover many kinds of sorting algorithms. The performance of a sorting algorithm is directly related to the efficiency of the software. Much research has gone into optimizing sorting algorithms for better efficiency; here the authors further study a sorting algorithm that combines queues with stacks. The algorithm mainly relies on alternating between the storage properties of queues and stacks, thus avoiding the large number of exchange or move operations required by traditional sorts. Continuing from the existing work, the algorithm is improved and optimized with a focus on reducing time complexity; the time complexity, space complexity, and stability of the algorithm are also studied. The experimental results show that the improvements are effective and that the improved and optimized algorithm is more practical.
Research on particle swarm optimization algorithm based on optimal movement probability
NASA Astrophysics Data System (ADS)
Ma, Jianhong; Zhang, Han; He, Baofeng
2017-01-01
The particle swarm optimization (PSO) algorithm improves control precision and has great application value in fields such as neural network training and fuzzy system control. When the traditional particle swarm algorithm is used to train feed-forward neural networks, its search efficiency is low and it easily falls into local convergence. An improved particle swarm optimization algorithm based on error back-propagation gradient descent is therefore proposed. The particles are ranked by fitness so that the optimization problem is considered as a whole, and error back-propagation gradient descent is used to train the BP neural network. Particles update their velocities and positions according to their individual best and the global best; by making the particles learn more from the social optimum and less from their own optimum, they can avoid falling into local optima, while gradient information accelerates PSO's local search ability and improves search efficiency. Simulation results show that the algorithm converges rapidly toward the global optimal solution in its initial stage and then stays close to it; the algorithm has faster convergence speed and better search performance for the same running time, and in particular improves search efficiency in the later stages.
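For reference, the canonical PSO velocity and position update that such improved variants build on can be sketched as follows (a generic illustration; the back-propagation hybrid described above is not included, and the inertia/acceleration values are conventional defaults, not from the paper):

```python
import random

def pso(f, dim, bounds, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimize f with a basic particle swarm: each particle is pulled
    toward its own best-known position and the swarm's global best."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=f)
    return gbest

# Sphere function: the global minimum is at the origin
best = pso(lambda x: sum(v * v for v in x), dim=3, bounds=(-5, 5))
```

The `c1` term is the "cognitive" pull toward a particle's own best, and `c2` the "social" pull toward the global best; the variant above biases search by tuning exactly these two influences.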
NASA Astrophysics Data System (ADS)
Shirazi, Abolfazl
2016-10-01
This article introduces a new method to optimize finite-burn orbital manoeuvres based on a modified evolutionary algorithm. Optimization is carried out based on conversion of the orbital manoeuvre into a parameter optimization problem by assigning inverse tangential functions to the changes in direction angles of the thrust vector. The problem is analysed using boundary delimitation in a common optimization algorithm. A method is introduced to achieve acceptable values for optimization variables using nonlinear simulation, which results in an enlarged convergence domain. The presented algorithm benefits from high optimality and fast convergence time. A numerical example of a three-dimensional optimal orbital transfer is presented and the accuracy of the proposed algorithm is shown.
A Modified Particle Swarm Optimization Technique for Finding Optimal Designs for Mixture Models
Wong, Weng Kee; Chen, Ray-Bing; Huang, Chien-Chih; Wang, Weichung
2015-01-01
Particle Swarm Optimization (PSO) is a meta-heuristic algorithm that has been shown to be successful in solving a wide variety of real and complicated optimization problems in engineering and computer science. This paper introduces a projection based PSO technique, named ProjPSO, to efficiently find different types of optimal designs, or nearly optimal designs, for mixture models with and without constraints on the components, and also for related models, like the log contrast models. We also compare the modified PSO performance with Fedorov's algorithm, a popular algorithm used to generate optimal designs, Cocktail algorithm, and the recent algorithm proposed by [1]. PMID:26091237
Sidky, Emil Y.; Jørgensen, Jakob H.; Pan, Xiaochuan
2012-01-01
The primal-dual optimization algorithm developed in Chambolle and Pock (CP), 2011 is applied to various convex optimization problems of interest in computed tomography (CT) image reconstruction. This algorithm allows for rapid prototyping of optimization problems for the purpose of designing iterative image reconstruction algorithms for CT. The primal-dual algorithm is briefly summarized in the article, and its potential for prototyping is demonstrated by explicitly deriving CP algorithm instances for many optimization problems relevant to CT. An example application modeling breast CT with low-intensity X-ray illumination is presented. PMID:22538474
Production scheduling with ant colony optimization
NASA Astrophysics Data System (ADS)
Chernigovskiy, A. S.; Kapulin, D. V.; Noskova, E. E.; Yamskikh, T. N.; Tsarev, R. Yu
2017-10-01
An optimum solution of the production scheduling problem for manufacturing processes at an enterprise is crucial, as it allows the required amount of production to be obtained within a specified time frame. An optimum production schedule can be found using a variety of optimization or scheduling algorithms. Ant colony optimization is a well-known technique for solving global multi-objective optimization problems. In this article, the authors present a solution of the production scheduling problem by means of an ant colony optimization algorithm. A case study estimating the algorithm's efficiency against some other production scheduling algorithms is presented, along with the advantages of the ant colony optimization algorithm and its beneficial effect on the manufacturing process.
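As a hedged illustration of the mechanism (not the authors' scheduler), ant colony optimization can be shown on a toy shortest-path problem: ants build paths probabilistically using pheromone levels, and pheromone evaporates and is then reinforced in proportion to path quality. The graph and parameters are invented for the example:

```python
import random

def aco_shortest_path(graph, start, end, n_ants=20, iters=50,
                      evap=0.5, seed=2):
    """Sketch of ant colony optimization for a shortest path.
    graph: {node: {neighbor: edge_cost}}. Pheromone (tau) biases the
    probabilistic choice of the next node; evaporation plus deposits
    proportional to 1/path_cost reinforce short paths."""
    rng = random.Random(seed)
    tau = {(u, v): 1.0 for u in graph for v in graph[u]}
    best_path, best_cost = None, float("inf")
    for _ in range(iters):
        paths = []
        for _ in range(n_ants):
            node, path, cost = start, [start], 0.0
            while node != end:
                nbrs = [v for v in graph[node] if v not in path]
                if not nbrs:
                    break  # dead end; abandon this ant
                weights = [tau[(node, v)] / graph[node][v] for v in nbrs]
                node = rng.choices(nbrs, weights=weights)[0]
                cost += graph[path[-1]][node]
                path.append(node)
            if node == end:
                paths.append((path, cost))
                if cost < best_cost:
                    best_path, best_cost = path, cost
        # Evaporate, then deposit pheromone along completed paths
        tau = {e: (1 - evap) * t for e, t in tau.items()}
        for path, cost in paths:
            for u, v in zip(path, path[1:]):
                tau[(u, v)] += 1.0 / cost
    return best_path, best_cost

g = {"A": {"B": 1, "C": 4}, "B": {"C": 1, "D": 5},
     "C": {"D": 1}, "D": {}}
path, cost = aco_shortest_path(g, "A", "D")
```

Scheduling applications replace the path-construction step with job-to-machine assignments, but the evaporate-and-deposit pheromone cycle is the same.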
Optimization of Straight Cylindrical Turning Using Artificial Bee Colony (ABC) Algorithm
NASA Astrophysics Data System (ADS)
Prasanth, Rajanampalli Seshasai Srinivasa; Hans Raj, Kandikonda
2017-04-01
The artificial bee colony (ABC) algorithm, which mimics the intelligent foraging behavior of honey bees, is increasingly gaining acceptance in the field of process optimization, as it is capable of handling nonlinearity, complexity and uncertainty. Straight cylindrical turning is a complex and nonlinear machining process which involves the selection of appropriate cutting parameters that affect the quality of the workpiece. This paper presents the estimation of optimal cutting parameters of the straight cylindrical turning process using the ABC algorithm. The ABC algorithm is first tested on four benchmark problems of numerical optimization and its performance is compared with the genetic algorithm (GA) and the ant colony optimization (ACO) algorithm. Results indicate that the rate of convergence of the ABC algorithm is better than that of GA and ACO. The ABC algorithm is then used to predict optimal cutting parameters, such as cutting speed, feed rate, depth of cut and tool nose radius, to achieve a good surface finish. Results indicate that the ABC algorithm estimated a surface finish comparable to that of a real-coded genetic algorithm and a differential evolution algorithm.
Ten Statisticians and Their Impacts for Psychologists.
Wright, Daniel B
2009-11-01
Although psychologists frequently use statistical procedures, they are often unaware of the statisticians most associated with these procedures. Learning more about the people will aid understanding of the techniques. In this article, I present a list of 10 prominent statisticians: David Cox, Bradley Efron, Ronald Fisher, Leo Goodman, John Nelder, Jerzy Neyman, Karl Pearson, Donald Rubin, Robert Tibshirani, and John Tukey. I then discuss their key contributions and impact for psychology, as well as some aspects of their nonacademic lives. © 2009 Association for Psychological Science.
2015-08-01
Likelihood Estimation Method for Completely Separated and Quasi-Completely Separated Data for a Dose-Response Model (ECBC-TN-068). Kyong H. Park and Steven J. Lagan, Research and Technology Directorate, August 2015; approved for public release. References cited include: McCullagh, P.; Nelder, J.A. Generalized Linear Models, 2nd ed.; Chapman and Hall: London, 1989; and Johnston, J. Econometric Methods, 3rd ed.; McGraw...
The Use of Shrinkage Techniques in the Estimation of Attrition Rates for Large Scale Manpower Models
1988-07-27
...an autoregressive model combined with a linear program that solves for the coefficients using MAD, but this success has diminished with time (Rowe...). References cited include: "Harrison-Stevens Forecasting and the Multiprocess Dynamic Linear Model", The American Statistician, v. 40, pp. 129-135, 1986; Box, G. E. P. and ... 1950; McCullagh, P. and Nelder, J., Generalized Linear Models, Chapman and Hall, 1983; McKenzie, E., "General Exponential Smoothing and the..."
Firefly Algorithm, Lévy Flights and Global Optimization
NASA Astrophysics Data System (ADS)
Yang, Xin-She
Nature-inspired algorithms such as Particle Swarm Optimization and Firefly Algorithm are among the most powerful algorithms for optimization. In this paper, we intend to formulate a new metaheuristic algorithm by combining Lévy flights with the search strategy via the Firefly Algorithm. Numerical studies and results suggest that the proposed Lévy-flight firefly algorithm is superior to existing metaheuristic algorithms. Finally implications for further research and wider applications will be discussed.
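A minimal sketch of the Lévy-flight firefly idea: dimmer fireflies move toward brighter ones with distance-decaying attractiveness, plus a Lévy-distributed step that supplies occasional long jumps. The step generator follows Mantegna's algorithm; all parameter values here are illustrative assumptions, not taken from the paper:

```python
import math
import random

def levy_step(rng, beta=1.5):
    """Mantegna's algorithm for a Levy-stable step of index beta."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))
             ) ** (1 / beta)
    u = rng.gauss(0, sigma)
    v = rng.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def firefly(f, bounds, n=25, iters=150, alpha=0.2, gamma=0.1, seed=3):
    """Levy-flight firefly sketch on a 1-D problem: each firefly moves
    toward every brighter (lower-cost) firefly, attractiveness decaying
    with squared distance, plus a Levy jump scaled by alpha."""
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [rng.uniform(lo, hi) for _ in range(n)]
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if f(xs[j]) < f(xs[i]):  # j is brighter
                    r = abs(xs[i] - xs[j])
                    attract = math.exp(-gamma * r * r)
                    xs[i] += (attract * (xs[j] - xs[i])
                              + alpha * levy_step(rng))
                    xs[i] = min(hi, max(lo, xs[i]))
    return min(xs, key=f)

# Convex test function with its minimum at x = -1
x = firefly(lambda v: (v + 1) ** 2, (-5, 5))
```

The heavy-tailed Lévy steps are what distinguish this variant from the plain Firefly Algorithm: most moves are small local refinements, but rare large jumps help escape local basins.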
A new hybrid meta-heuristic algorithm for optimal design of large-scale dome structures
NASA Astrophysics Data System (ADS)
Kaveh, A.; Ilchi Ghazaan, M.
2018-02-01
In this article a hybrid algorithm based on a vibrating particles system (VPS) algorithm, multi-design variable configuration (Multi-DVC) cascade optimization, and an upper bound strategy (UBS) is presented for global optimization of large-scale dome truss structures. The new algorithm is called MDVC-UVPS in which the VPS algorithm acts as the main engine of the algorithm. The VPS algorithm is one of the most recent multi-agent meta-heuristic algorithms mimicking the mechanisms of damped free vibration of single degree of freedom systems. In order to handle a large number of variables, cascade sizing optimization utilizing a series of DVCs is used. Moreover, the UBS is utilized to reduce the computational time. Various dome truss examples are studied to demonstrate the effectiveness and robustness of the proposed method, as compared to some existing structural optimization techniques. The results indicate that the MDVC-UVPS technique is a powerful search and optimization method for optimizing structural engineering problems.
STS-64 Extravehicular activity (EVA) training view in WETF
1994-08-10
S94-39775 (August 1994) --- Astronaut Carl J. Meade, STS-64 mission specialist, listens to ground monitors during a simulation of a spacewalk scheduled for his September mission. Meade, who shared the rehearsal in the Johnson Space Center's (JSC) Weightless Environment Training Facility (WET-F) pool with crewmate astronaut Mark C. Lee, is equipped with a training version of new extravehicular activity (EVA) hardware called the Simplified Aid for EVA Rescue (SAFER) system. The hardware includes a mobility-aiding back harness and a chest-mounted hand control module. Photo credit: NASA or National Aeronautics and Space Administration
STS-64 Extravehicular activity (EVA) training view in WETF
1994-08-10
S94-39762 (August 1994) --- Astronaut Carl J. Meade, STS-64 mission specialist, listens to ground monitors prior to a simulation of a spacewalk scheduled for his September mission. Meade, who shared the rehearsal in Johnson Space Center's (JSC) Weightless Environment Training Facility (WET-F) pool with crewmate astronaut Mark C. Lee (out of frame), is equipped with a training version of new extravehicular activity (EVA) hardware called the Simplified Aid for EVA Rescue (SAFER) system. The hardware includes a mobility-aiding back harness and a chest-mounted hand control module. Photo credit: NASA or National Aeronautics and Space Administration
Machining Parameters Optimization using Hybrid Firefly Algorithm and Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Farahlina Johari, Nur; Zain, Azlan Mohd; Haszlinna Mustaffa, Noorfa; Udin, Amirmudin
2017-09-01
The Firefly Algorithm (FA) is a metaheuristic algorithm inspired by the flashing behavior of fireflies and the phenomenon of bioluminescent communication; in this research the algorithm is used to optimize the machining parameters (feed rate, depth of cut, and spindle speed). The algorithm is hybridized with Particle Swarm Optimization (PSO) to discover better solutions when exploring the search space. The objective function from previous research is used to optimize the machining parameters in the turning operation. The optimal machining cutting parameters estimated by FA that lead to minimum surface roughness are validated using an ANOVA test.
Salcedo-Sanz, S; Del Ser, J; Landa-Torres, I; Gil-López, S; Portilla-Figueras, J A
2014-01-01
This paper presents a novel bioinspired algorithm to tackle complex optimization problems: the coral reefs optimization (CRO) algorithm. The CRO algorithm artificially simulates a coral reef, where different corals (namely, solutions to the optimization problem considered) grow and reproduce in coral colonies, fighting for space in the reef by choking out other corals. This fight for space, along with the specific characteristics of the corals' reproduction, produces a robust metaheuristic algorithm shown to be powerful for solving hard optimization problems. In this research the CRO algorithm is tested on several continuous and discrete benchmark problems, as well as in practical application scenarios (i.e., optimum mobile network deployment and off-shore wind farm design). The obtained results confirm the excellent performance of the proposed algorithm and open a line of research for its further application to real-world problems.
A Comparative Study of Optimization Algorithms for Engineering Synthesis.
1983-03-01
The ADS program demonstrates the flexibility a design engineer would have in selecting an optimization algorithm best suited to solve a particular problem. The ADS library of design optimization algorithms was developed by Vanderplaats in response to the first...
Particle Swarm Optimization Toolbox
NASA Technical Reports Server (NTRS)
Grant, Michael J.
2010-01-01
The Particle Swarm Optimization Toolbox is a library of evolutionary optimization tools developed in the MATLAB environment. The algorithms contained in the library include a genetic algorithm (GA), a single-objective particle swarm optimizer (SOPSO), and a multi-objective particle swarm optimizer (MOPSO). Development focused on both the SOPSO and MOPSO; a GA was included mainly for comparison purposes, and the particle swarm optimizers appeared to perform better for a wide variety of optimization problems. All algorithms are capable of performing unconstrained and constrained optimization. The particle swarm optimizers are capable of performing single- and multi-objective optimization. The SOPSO and MOPSO algorithms are based on swarming theory and bird-flocking patterns to search the trade space for the optimal solution or optimal trade in competing objectives. The MOPSO generates Pareto fronts for objectives that are in competition. A GA, based on Darwinian evolutionary theory, is also included in the library. The GA consists of individuals that form a population in the design space. The population mates to form offspring at new locations in the design space. These offspring contain traits from both parents; the algorithm relies on this combination of parental traits to produce solutions that improve on either of the original parents. As the algorithm progresses, individuals that hold these optimal traits will emerge as the optimal solutions. Due to the generic design of all the optimization algorithms, each algorithm interfaces with a user-supplied objective function. This function serves as a "black box" to the optimizers: its only purpose is to evaluate solutions provided by the optimizers. Hence, the user-supplied function can be a numerical simulation, an analytical function, etc., since its specific detail is of no concern to the optimizer.
These algorithms were originally developed to support entry trajectory and guidance design for the Mars Science Laboratory mission but may be applied to any optimization problem.
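The mate-and-inherit loop described above can be sketched generically (a toy real-coded GA in Python, not the toolbox's MATLAB implementation; the operators and parameter values are illustrative choices):

```python
import random

def ga_minimize(f, bounds, pop_size=30, gens=60, mut=0.1, seed=4):
    """Tiny real-coded GA: binary tournament selection, blend
    crossover so offspring mix traits of both parents, occasional
    Gaussian mutation, and elitism to retain the best individual."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        def pick():  # binary tournament: better of two random parents
            a, b = rng.sample(pop, 2)
            return a if f(a) < f(b) else b
        children = []
        for _ in range(pop_size):
            p1, p2 = pick(), pick()
            w = rng.random()
            child = w * p1 + (1 - w) * p2      # crossover: blend parents
            if rng.random() < mut:             # occasional mutation
                child += rng.gauss(0, 0.5)
            children.append(min(hi, max(lo, child)))
        # Elitism: carry the best individual into the next generation
        children[0] = min(pop, key=f)
        pop = children
    return min(pop, key=f)

# Example: the minimum of (x - 3)^2 lies at x = 3
best = ga_minimize(lambda x: (x - 3) ** 2, (-10, 10))
```

The "offspring contain traits from both parents" idea is the blend-crossover line: each child is a random convex combination of its two parents.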
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Xiaobiao; Safranek, James
2014-09-01
Nonlinear dynamics optimization is carried out for a low emittance upgrade lattice of SPEAR3 in order to improve its dynamic aperture and Touschek lifetime. Two multi-objective optimization algorithms, a genetic algorithm and a particle swarm algorithm, are used for this study. The performance of the two algorithms are compared. The result shows that the particle swarm algorithm converges significantly faster to similar or better solutions than the genetic algorithm and it does not require seeding of good solutions in the initial population. These advantages of the particle swarm algorithm may make it more suitable for many accelerator optimization applications.
Asteroid orbital inversion using uniform phase-space sampling
NASA Astrophysics Data System (ADS)
Muinonen, K.; Pentikäinen, H.; Granvik, M.; Oszkiewicz, D.; Virtanen, J.
2014-07-01
We review statistical inverse methods for asteroid orbit computation from a small number of astrometric observations and short time intervals of observations. With the help of Markov-chain Monte Carlo methods (MCMC), we present a novel inverse method that utilizes uniform sampling of the phase space for the orbital elements. The statistical orbital ranging method (Virtanen et al. 2001, Muinonen et al. 2001) set out to resolve the long-lasting challenges in the initial computation of orbits for asteroids. The ranging method starts from the selection of a pair of astrometric observations. Thereafter, the topocentric ranges and angular deviations in R.A. and Decl. are randomly sampled. The two Cartesian positions allow for the computation of orbital elements and, subsequently, the computation of ephemerides for the observation dates. Candidate orbital elements are included in the sample of accepted elements if the χ^2-value between the observed and computed observations is within a pre-defined threshold. The sample orbital elements obtain weights based on a certain debiasing procedure. When the weights are available, the full sample of orbital elements allows probabilistic assessments of, e.g., object classification and ephemeris computation, as well as the computation of collision probabilities. The MCMC ranging method (Oszkiewicz et al. 2009; see also Granvik et al. 2009) replaces the original sampling algorithm described above with a proposal probability density function (p.d.f.), and a chain of sample orbital elements results in the phase space. MCMC ranging is based on a bivariate Gaussian p.d.f. for the topocentric ranges, and allows the sampling to focus on the phase-space domain with most of the probability mass. In the virtual-observation MCMC method (Muinonen et al. 2012), the proposal p.d.f. for the orbital elements is chosen to mimic the a posteriori p.d.f.
for the elements: first, random errors are simulated for each observation, resulting in a set of virtual observations; second, corresponding virtual least-squares orbital elements are derived using the Nelder-Mead downhill simplex method; third, repeating the procedure two times allows for a computation of a difference for two sets of virtual orbital elements; and, fourth, this orbital-element difference constitutes a symmetric proposal in a random-walk Metropolis-Hastings algorithm, avoiding the explicit computation of the proposal p.d.f. In a discrete approximation, the allowed proposals coincide with the differences that are based on a large number of pre-computed sets of virtual least-squares orbital elements. The virtual-observation MCMC method is thus based on the characterization of the relevant volume in the orbital-element phase space. Here we utilize MCMC to map the phase-space domain of acceptable solutions. We can make use of the proposal p.d.f.s from the MCMC ranging and virtual-observation methods. The present phase-space mapping produces, upon convergence, a uniform sampling of the solution space within a pre-defined χ^2-value. The weights of the sampled orbital elements are then computed on the basis of the corresponding χ^2-values. The present method resembles the original ranging method. On one hand, MCMC mapping is insensitive to local extrema in the phase space and efficiently maps the solution space. This is somewhat contrary to the MCMC methods described above. On the other hand, MCMC mapping can suffer from producing a small number of sample elements with small χ^2-values, in resemblance to the original ranging method. We apply the methods to example near-Earth, main-belt, and transneptunian objects, and highlight the utilization of the methods in the data processing and analysis pipeline of the ESA Gaia space mission.
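As a generic illustration of the random-walk Metropolis-Hastings step on which these MCMC methods rest (a one-dimensional toy target, not the orbital-element implementation; the target and step size are invented for the example):

```python
import math
import random

def metropolis(log_post, x0, step=0.5, n=20000, seed=5):
    """Random-walk Metropolis-Hastings with a symmetric Gaussian
    proposal: a move x -> x' is accepted with probability
    min(1, p(x') / p(x)), worked in log space for stability."""
    rng = random.Random(seed)
    x, chain = x0, []
    for _ in range(n):
        xp = x + rng.gauss(0, step)
        if math.log(rng.random() + 1e-300) < log_post(xp) - log_post(x):
            x = xp  # accept the proposal; otherwise keep x
        chain.append(x)
    return chain

# Standard normal target: chain mean ~ 0, variance ~ 1
chain = metropolis(lambda x: -0.5 * x * x, x0=0.0)
mean = sum(chain) / len(chain)
var = sum((c - mean) ** 2 for c in chain) / len(chain)
```

Because the proposal is symmetric, the proposal density cancels from the acceptance ratio, which is exactly the property the virtual-observation method exploits to avoid computing the proposal p.d.f. explicitly.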
A fast optimization algorithm for multicriteria intensity modulated proton therapy planning.
Chen, Wei; Craft, David; Madden, Thomas M; Zhang, Kewu; Kooy, Hanne M; Herman, Gabor T
2010-09-01
To describe a fast projection algorithm for optimizing intensity modulated proton therapy (IMPT) plans and to describe and demonstrate the use of this algorithm in multicriteria IMPT planning. The authors develop a projection-based solver for a class of convex optimization problems and apply it to IMPT treatment planning. The speed of the solver permits its use in multicriteria optimization, where several optimizations are performed which span the space of possible treatment plans. The authors describe a plan database generation procedure which is customized to the requirements of the solver. The optimality precision of the solver can be specified by the user. The authors apply the algorithm to three clinical cases: a pancreas case, an esophagus case, and a tumor along the rib cage. Detailed analysis of the pancreas case shows that the algorithm is orders of magnitude faster than industry-standard general purpose algorithms (MOSEK's interior point optimizer, primal simplex optimizer, and dual simplex optimizer). Additionally, the projection solver has almost no memory overhead. The speed and guaranteed accuracy of the algorithm make it suitable for use in multicriteria treatment planning, which requires the computation of several diverse treatment plans. Additionally, given the low memory overhead of the algorithm, the method can be extended to include multiple geometric instances and proton range possibilities for robust optimization.
Multidisciplinary Optimization of a Transport Aircraft Wing using Particle Swarm Optimization
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw; Venter, Gerhard
2002-01-01
The purpose of this paper is to demonstrate the application of particle swarm optimization to a realistic multidisciplinary optimization test problem. The paper's new contributions to multidisciplinary optimization is the application of a new algorithm for dealing with the unique challenges associated with multidisciplinary optimization problems, and recommendations as to the utility of the algorithm in future multidisciplinary optimization applications. The selected example is a bi-level optimization problem that demonstrates severe numerical noise and has a combination of continuous and truly discrete design variables. The use of traditional gradient-based optimization algorithms is thus not practical. The numerical results presented indicate that the particle swarm optimization algorithm is able to reliably find the optimum design for the problem presented here. The algorithm is capable of dealing with the unique challenges posed by multidisciplinary optimization as well as the numerical noise and truly discrete variables present in the current example problem.
Ckmeans.1d.dp: Optimal k-means Clustering in One Dimension by Dynamic Programming.
Wang, Haizhou; Song, Mingzhou
2011-12-01
The heuristic k-means algorithm, widely used for cluster analysis, does not guarantee optimality. We developed a dynamic programming algorithm for optimal one-dimensional clustering. The algorithm is implemented as an R package called Ckmeans.1d.dp. We demonstrate its advantage in optimality and runtime over the standard iterative k-means algorithm.
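The dynamic programming recurrence can be sketched directly: on sorted data, cluster boundaries are contiguous, so the optimal cost of clustering the first i points into j clusters decomposes over the last cluster. A minimal O(k·n²) Python version follows (the released Ckmeans.1d.dp package uses a faster formulation; this sketch only illustrates the recurrence on prefix sums).

```python
def ckmeans_1d(x, k):
    """Optimal 1-D k-means (minimum within-cluster sum of squares) by DP."""
    x = sorted(x)
    n = len(x)
    # prefix sums give O(1) within-cluster sum of squared deviations
    s = [0.0] * (n + 1)
    sq = [0.0] * (n + 1)
    for i, v in enumerate(x):
        s[i + 1] = s[i] + v
        sq[i + 1] = sq[i] + v * v

    def sse(i, j):  # cost of one cluster over x[i..j-1] (half-open)
        tot = s[j] - s[i]
        return (sq[j] - sq[i]) - tot * tot / (j - i)

    INF = float("inf")
    cost = [[INF] * (k + 1) for _ in range(n + 1)]  # cost[i][j]: first i pts, j clusters
    back = [[0] * (k + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for j in range(1, k + 1):
        for i in range(j, n + 1):
            for t in range(j - 1, i):               # last cluster is x[t..i-1]
                c = cost[t][j - 1] + sse(t, i)
                if c < cost[i][j]:
                    cost[i][j], back[i][j] = c, t
    # recover cluster boundaries by backtracking
    bounds, i = [], n
    for j in range(k, 0, -1):
        t = back[i][j]
        bounds.append((t, i))
        i = t
    return cost[n][k], sorted(bounds)
```

Unlike Lloyd-style iteration, the result is provably optimal regardless of initialization.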
NASA Astrophysics Data System (ADS)
Dharmaseelan, Anoop; Adistambha, Keyne D.
2015-05-01
Fuel cost accounts for 40 percent of the operating cost of an airline. Fuel cost can be minimized by planning a flight on optimized routes. The routes can be optimized by searching for the best connections based on the cost function defined by the airline. The algorithm most commonly used to optimize route search is Dijkstra's. Dijkstra's algorithm produces a static result, and the time taken for the search is relatively long. This paper experiments with a new algorithm to optimize route search, which combines the principles of simulated annealing and genetic algorithms. The experimental results of the route search are shown to be computationally fast and accurate compared with timings from a genetic algorithm. The new algorithm is well suited to the random-routing feature that is highly sought by many regional operators.
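A simulated-annealing route search of the kind described can be sketched as follows; the cost function, cooling schedule, and swap move are illustrative assumptions, and the genetic-algorithm crossover that the paper's hybrid mixes in is omitted.

```python
import math
import random

def sa_route(cost, n, iters=5000, t0=1.0, alpha=0.999, seed=1):
    """Simulated-annealing search over visit orders of n waypoints, where
    cost(order) is an airline-defined route cost function."""
    rng = random.Random(seed)
    cur = list(range(n))
    rng.shuffle(cur)
    cur_c = cost(cur)
    best, best_c = cur[:], cur_c
    t = t0
    for _ in range(iters):
        i, j = rng.sample(range(n), 2)
        cand = cur[:]
        cand[i], cand[j] = cand[j], cand[i]     # neighbor move: swap two waypoints
        c = cost(cand)
        # accept improvements always; accept worse routes with Boltzmann probability
        if c < cur_c or rng.random() < math.exp((cur_c - c) / t):
            cur, cur_c = cand, c
            if c < best_c:
                best, best_c = cand[:], c
        t *= alpha                              # geometric cooling
    return best, best_c
```

Unlike Dijkstra's deterministic search, each run explores routes stochastically, which supports the random-routing behavior the abstract mentions.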
Miller, Todd S.; Bugliosi, Edward F.; Reddy, James E.
2008-01-01
The Meads Creek valley encompasses 70 square miles of predominantly forested uplands in the upper Susquehanna River drainage basin. The valley, which was listed as a Priority Waterbody by the New York State Department of Environmental Conservation in 2004, is prone to periodic flooding, mostly in its downstream end, where development is occurring most rapidly. Hydraulic characteristics of the unconsolidated valley-fill aquifer were evaluated, and seepage rates in losing and gaining tributaries were calculated or estimated, in an effort to delineate the aquifer geometry and identify the factors that contribute to flooding. Results indicated that (1) Meads Creek gained about 61 cubic feet of flow per second (about 6.0 cubic feet per second per mile of stream channel) from ground-water discharge and inflow from tributaries in its 10.2-mile reach between the northernmost and southernmost measurement sites; (2) major tributaries in the northern part of the valley are not significant sources of recharge to the aquifer; and (3) major tributaries in the central and southern part of the valley provide recharge to the aquifer. The ground-water portion of streamflow in Meads Creek (excluding tributary inflow) was 11.3 cubic feet per second (ft3/s) in the central part of the valley and 17.2 ft3/s in the southern part - a total of 28.5 ft3/s. Ground-water levels were measured in 29 wells finished in unconfined deposits for construction of a potentiometric-surface map to depict directions of ground-water flow within the valley. In general, ground water flows from the edges of the valley toward Meads Creek and ultimately discharges to it. The horizontal hydraulic gradient for the entire 12-mile-long aquifer averages about 30 feet per mile, whereas the gradient in the southern fourth of the valley averages about half that - about 17 feet per mile. 
A water budget for the aquifer indicated that 28 percent of recharge was derived from precipitation that falls on the aquifer, 32 percent was from losing reaches of tributaries, 38 percent was unchanneled flow from hillsides that slope toward the valley (this estimate includes runoff and shallow ground-water inflow from till and bedrock), and the remaining 2 percent was from deep ground-water inflow from till and bedrock to the sides and bottom of the aquifer. Nearly all (94 percent) of the water discharged from the aquifer is equivalent to the streamflow gain in Meads Creek; the remaining 6 percent discharges as deep outflow to unconsolidated deposits in the Cohocton River valley. Several characteristics of the Meads Creek valley may contribute to flooding in the downstream area: (1) the southward decrease in the ground-water gradient impedes the ability of the aquifer to transmit water southward and can cause water levels to rise, (2) a high water table, typically only 5 to 10 feet below land surface, results in little storage capacity to absorb water from large storms, (3) a downstream narrowing of the valley impedes the southward flow of ground water and can cause water levels to rapidly rise during periods of prolonged or heavy precipitation, and (4) the upland slopes (till-covered bedrock) produce rapid runoff that recharges the aquifer. The combined effect of these conditions limits the ability of the aquifer to transmit sudden, large increases in recharge from precipitation and thereby provides a high potential for flooding in the southern third of the valley.
First Fourteen Years of Lake Mead
Thomas, Harold E.
1954-01-01
This circular summarizes the results of recent studies of Lake Mead and its environs. Area-capacity tables, prepared on the basis of a hydrographic survey of the lake in 1948-49, show that the capacity of the reservoir was reduced 4.9 percent during the first 14 years after Hoover Dam was completed, but the usable capacity was reduced only 3.2 percent. Practically all of this reduction was caused by accumulation of sediment in the reservoir. Studies of inflow and outflow indicate that the reservoir has a total storage capacity about 12 percent greater than that shown by the area-capacity table, because of 'bank' storage, or ground-water storage in the bottom and sides of the reservoir. Thus the total capacity in 1949 was greater than the quantity shown by the original area-capacity table, even though large quantities of sediment had been deposited in the reservoir during the 14 years. According to computations of the volume and weight of the accumulated sediment, about 2,000 million tons were deposited in the reservoir by the Colorado River in 14 years; this is within 2 percent of the amount calculated from measurements of the suspended sediment carried by the inflowing rivers. It is estimated that the sediment capacity of the reservoir, when filled to the level of the permanent spillway crest, is about 75,000 million tons. The sediment contributed by the Colorado River averages about 45 percent sand and 55 percent silt and clay. If the sediment carried by the river in the years 1926-50 represents the long-term average rate of accumulation in Lake Mead, it will be a century before the sediment at the dam reaches the level of the lowest gates in the intake towers, and more than 4 centuries before the reservoir is filled with sediment to the level of the permanent spillway crest.
The rate of sedimentation since the first year of Lake Mead (1935) has been about 20 percent lower, and if that rate continues in the future, the life of the reservoir will be correspondingly greater. Construction of upstream reservoirs to capture some of the inflowing sediment, or transportation of sediment in the outflow through Hoover Dam, would also increase the life of the reservoir. In the first 12 years of Lake Mead, the dissolved mineral matter in the outflowing water was significantly greater than the average in the inflowing water, owing in part to solution of gypsum and rock salt from the bed of the reservoir. Currently the increased dissolved solids in the outflowing water can be accounted for almost entirely by evaporation from the reservoir, which is about 5 to 7 percent of the annual inflow. The water from Lake Mead is habitually of better quality than that diverted from the river for irrigation prior to regulation by Hoover Dam, because it represents an average of the poor water of low stages and the excellent water from melting snow. Geodetic surveys of the Lake Mead area show that the weight of water has caused subsidence of the earth's crust amounting to about 120 millimeters at Hoover Dam, and an even greater amount in the principal area of storage in the reservoir.
Wang, Jie-sheng; Li, Shu-xia; Song, Jiang-di
2015-01-01
In order to improve the convergence velocity and optimization accuracy of the cuckoo search (CS) algorithm for solving function optimization problems, a new improved cuckoo search algorithm based on repeat-cycle asymptotic self-learning and self-evolving disturbance (RC-SSCS) is proposed. A disturbance operation is added to the algorithm by constructing a disturbance factor, to make a more careful and thorough search near the nest locations. In order to select a reasonable repeat-cycled disturbance number, a further study on the choice of the number of disturbances is made. Finally, six typical test functions are adopted to carry out simulation experiments, and the algorithm of this paper is compared with two typical swarm intelligence algorithms: the particle swarm optimization (PSO) algorithm and the artificial bee colony (ABC) algorithm. The results show that the improved cuckoo search algorithm has better convergence velocity and optimization accuracy.
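For orientation, a plain cuckoo search (Lévy flights around the best nest, plus abandonment of poor nests) can be sketched as below; the paper's RC-SSCS disturbance operator is not reproduced, and all parameter values are generic defaults rather than the authors' settings.

```python
import math
import random

def cuckoo_search(f, bounds, n_nests=15, iters=300, pa=0.25, seed=0):
    """Minimize f with a basic cuckoo search over the box `bounds`."""
    rng = random.Random(seed)

    def clip(x):
        return [min(max(v, lo), hi) for v, (lo, hi) in zip(x, bounds)]

    def levy():
        # Mantegna's algorithm for a Levy-stable step length, beta = 1.5
        beta = 1.5
        sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
                 / (math.gamma((1 + beta) / 2) * beta
                    * 2 ** ((beta - 1) / 2))) ** (1 / beta)
        u, v = rng.gauss(0, sigma), rng.gauss(0, 1)
        return u / abs(v) ** (1 / beta)

    nests = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_nests)]
    fit = [f(x) for x in nests]
    for _ in range(iters):
        best = min(range(n_nests), key=lambda i: fit[i])
        for i in range(n_nests):
            # Levy flight scaled by the distance to the current best nest
            step = 0.01 * levy()
            cand = clip([x + step * (x - b)
                         for x, b in zip(nests[i], nests[best])])
            fc = f(cand)
            j = rng.randrange(n_nests)
            if fc < fit[j]:                       # replace a random nest if better
                nests[j], fit[j] = cand, fc
        best = min(range(n_nests), key=lambda i: fit[i])
        for i in range(n_nests):                  # abandon a fraction pa of nests,
            if i != best and rng.random() < pa:   # keeping the best one (elitism)
                nests[i] = [rng.uniform(lo, hi) for lo, hi in bounds]
                fit[i] = f(nests[i])
    best = min(range(n_nests), key=lambda i: fit[i])
    return nests[best], fit[best]
```

The paper's disturbance factor would add extra localized perturbations around the nests on top of this baseline.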
Modified artificial bee colony algorithm for reactive power optimization
NASA Astrophysics Data System (ADS)
Sulaiman, Noorazliza; Mohamad-Saleh, Junita; Abro, Abdul Ghani
2015-05-01
Bio-inspired algorithms (BIAs) implemented to solve various optimization problems have shown promising results, which are very important in today's severely complex real world. The Artificial Bee Colony (ABC) algorithm, a kind of BIA, has demonstrated tremendous results compared to other optimization algorithms. This paper presents a new modified ABC algorithm, referred to as JA-ABC3, with the aim of enhancing convergence speed and avoiding premature convergence. The proposed algorithm has been simulated on ten commonly used benchmark functions. Its performance has also been compared with other existing ABC variants. To justify its robust applicability, the proposed algorithm has been tested on the reactive power optimization problem. The results show that the proposed algorithm has superior performance to other existing ABC variants, e.g. GABC, BABC1, BABC2, BsfABC and IABC, in terms of convergence speed. Furthermore, the proposed algorithm has also demonstrated excellent performance in solving the reactive power optimization problem.
Caldwell, Timothy J; Rosen, Michael R.; Chandra, Sudeep; Acharya, Kumud; Caires, Andrea M; Davis, Clinton J.; Thaw, Melissa; Webster, Daniel M.
2015-01-01
Invasive quagga (Dreissena bugensis) and zebra (Dreissena polymorpha) mussels have rapidly spread throughout North America. Understanding the relationships between environmental variables and quagga mussels during the early stages of invasion will help management strategies and allow researchers to predict patterns of future invasions. Quagga mussels were detected in Lake Mead, NV/AZ, in 2007; we monitored early invasion dynamics in 3 basins (Boulder Basin, Las Vegas Bay, Overton Arm) bi-annually from 2008 to 2011. Mean quagga density increased over time during the first year of monitoring and stabilized for the subsequent two years at the whole-lake scale (8 to 132 individuals·m-2, geometric mean), in Boulder Basin (73 to 875 individuals·m-2), and in Overton Arm (2 to 126 individuals·m-2). In Las Vegas Bay, quagga mussel density was low (9 to 44 individuals·m-2), which was correlated with high sediment metal concentrations and warmer (> 30°C) water temperatures associated with that basin. Carbon content in the sediment increased with depth in Lake Mead, and during some sampling periods quagga density was also positively correlated with depth, but more research is required to determine the significance of this interaction. Laboratory growth experiments suggested that food quantity may limit quagga growth in Boulder Basin, indicating an opportunity for population expansion in this basin if primary productivity were to increase; this was not the case in Overton Arm. Overall quagga mussel density in Lake Mead is highly variable and patchy, suggesting that temperature, sediment size, sediment metal concentrations, and sediment carbon content all contribute to mussel distribution patterns. Quagga mussel density in the soft sediment of Lake Mead expanded during initial colonization and began to stabilize approximately 3 years after the initial invasion.
Water-quality characteristics of stormwater runoff in Rapid City, South Dakota, 2008-14
Hoogestraat, Galen K.
2015-01-01
For the Arrowhead and Meade-Hawthorne sites, event-mean concentrations typically exceeded the TSS and bacteria beneficial-use criteria for Rapid Creek by 1–2 orders of magnitude. Comparing the two drainage basins, median TSS event-mean concentrations were more than two times greater at the Meade-Hawthorne outlet (520 milligrams per liter) than the Arrowhead outlet (200 milligrams per liter). Median fecal coliform bacteria event-mean concentrations also were greater at the Meade-Hawthorne outlet site (30,000 colony forming units per 100 milliliters) than the Arrowhead outlet site (17,000 colony forming units per 100 milliliters). A comparison to relevant standards indicates that stormwater runoff from the Downtown drainage basin exceeded criteria for bacteria and TSS, but concentrations generally were below standards for nutrients and metals. Stormwater-quality conditions from the Downtown drainage basin outfalls were similar to or better than stormwater-quality conditions observed in the Arrowhead and Meade-Hawthorne drainage basins. Three wetland channels located at the outlet of the Downtown drainage basin were evaluated for their pollutant reduction capability. Mean reductions in TSS and lead concentrations were greater than 40 percent for all three wetland channels. Total nitrogen, phosphorus, copper, and zinc concentrations also were reduced by at least 20 percent at all three wetlands. Fecal coliform bacteria concentrations typically were reduced by about 21 and 36 percent at the 1st and 2nd Street wetlands, respectively, but the reduction at the 3rd Street wetland channel was nearly zero percent. Total wetland storage volume affected pollutant reductions because TSS, phosphorus, and ammonia reductions were greatest in the wetland with the greatest volume. Chloride concentrations typically increased from inflow to outflow at the 2nd and 3rd Street wetland channels.
A new optimized GA-RBF neural network algorithm.
Jia, Weikuan; Zhao, Dean; Shen, Tian; Su, Chunyang; Hu, Chanli; Zhao, Yuyan
2014-01-01
When confronting complex problems, the radial basis function (RBF) neural network has the advantages of adaptivity and self-learning ability, but it is difficult to determine the number of hidden-layer neurons, and the ability to learn the weights from the hidden layer to the output layer is limited; these deficiencies easily lead to decreased learning ability and recognition precision. Aiming at this problem, we propose a new optimized RBF neural network algorithm based on a genetic algorithm (the GA-RBF algorithm), which uses a genetic algorithm to optimize the weights and structure of the RBF neural network. It adopts a new hybrid encoding scheme: binary encoding is used for the number of hidden-layer neurons, and real encoding for the connection weights, so that the number of hidden-layer neurons and the connection weights are optimized simultaneously. However, the connection-weight optimization is not complete; we use the least mean square (LMS) algorithm for further learning, and finally obtain the new algorithm model. Using two UCI standard data sets to test the new algorithm, the results show that the new algorithm improves operating efficiency in dealing with complex problems and also improves recognition precision, which proves that the new algorithm is valid.
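A minimal sketch of the LMS weight-update stage described above, for a one-input RBF network with fixed Gaussian centers; the centers, width, and learning rate here are illustrative assumptions, and the genetic stage that evolves the hidden-layer size and weights in the paper's GA-RBF is omitted.

```python
import math
import random

def train_rbf(xs, ys, centers, width=1.0, lr=0.1, epochs=200, seed=0):
    """LMS training of the hidden-to-output weights of a 1-D RBF network."""
    rng = random.Random(seed)
    w = [rng.uniform(-0.1, 0.1) for _ in centers]   # output weights
    b = 0.0                                         # output bias

    def phi(x):
        # Gaussian hidden-layer activations
        return [math.exp(-((x - c) ** 2) / (2 * width ** 2)) for c in centers]

    for _ in range(epochs):
        for x, y in zip(xs, ys):
            h = phi(x)
            err = y - (sum(wi * hi for wi, hi in zip(w, h)) + b)
            # LMS rule: nudge each weight along the error times its activation
            w = [wi + lr * err * hi for wi, hi in zip(w, h)]
            b += lr * err

    def predict(x):
        return sum(wi * hi for wi, hi in zip(w, phi(x))) + b
    return predict
```

In the paper's scheme, the GA would first choose how many centers exist and seed the weights; LMS then refines them as above.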
Metaheuristic Optimization and its Applications in Earth Sciences
NASA Astrophysics Data System (ADS)
Yang, Xin-She
2010-05-01
A common but challenging task in modelling geophysical and geological processes is to handle massive data and to minimize certain objectives. This can essentially be considered as an optimization problem, and thus many new efficient metaheuristic optimization algorithms can be used. In this paper, we will introduce some modern metaheuristic optimization algorithms such as genetic algorithms, harmony search, the firefly algorithm, particle swarm optimization and simulated annealing. We will also discuss how these algorithms can be applied to various applications in earth sciences, including nonlinear least-squares, support vector machines, Kriging, inverse finite element analysis, and data mining. We will present a few examples to show how different problems can be reformulated as optimization. Finally, we will make some recommendations for choosing various algorithms to suit various problems.
VDLLA: A virtual daddy long-legs optimization
NASA Astrophysics Data System (ADS)
Yaakub, Abdul Razak; Ghathwan, Khalil I.
2016-08-01
Swarm intelligence provides strong optimization algorithms based on the biological behavior of insects or animals. The success of any optimization algorithm depends on the balance between exploration and exploitation. In this paper, we present a new swarm intelligence algorithm with virtual behavior, based on the daddy long-legs spider (VDLLA). In VDLLA, each agent (spider) has nine positions, which represent the legs of the spider, and each position represents one solution. The proposed VDLLA is tested on four standard functions using mean fitness, median fitness and standard deviation. The results of the proposed VDLLA have been compared against Particle Swarm Optimization (PSO), Differential Evolution (DE) and the Bat Inspired Algorithm (BA). Additionally, a t-test has been conducted to show the significant difference between the proposed algorithm and the others. VDLLA showed very promising results on benchmark test functions for unconstrained optimization problems and significantly improved on the original swarm algorithms.
A Novel Particle Swarm Optimization Algorithm for Global Optimization
Wang, Chun-Feng; Liu, Kui
2016-01-01
Particle Swarm Optimization (PSO) is a recently developed optimization method, which has attracted the interest of researchers in various areas due to its simplicity and effectiveness, and many variants have been proposed. In this paper, a novel Particle Swarm Optimization algorithm is presented, in which the information of the best neighbor of each particle and the best particle of the entire population in the current iteration is considered. Meanwhile, to avoid premature convergence, an abandonment mechanism is used. Furthermore, to improve the global convergence speed of our algorithm, a chaotic search is adopted around the best solution of the current iteration. To verify the performance of our algorithm, standard test functions have been employed. The experimental results show that the algorithm is much more robust and efficient than some existing Particle Swarm Optimization algorithms.
NASA Astrophysics Data System (ADS)
Xu, Quan-Li; Cao, Yu-Wei; Yang, Kun
2018-03-01
Ant Colony Optimization (ACO) is among the most widely used artificial intelligence algorithms at present. This study introduces the principle and mathematical model of the ACO algorithm in solving the Vehicle Routing Problem (VRP) and designs a vehicle routing optimization model based on ACO; a vehicle routing optimization simulation system was then developed in the C++ programming language, and sensitivity analyses, estimations and improvements of the three key parameters of ACO were carried out. The results indicate that the ACO algorithm designed in this paper can efficiently solve the rational planning and optimization of vehicle routes, that different values of the key parameters have a significant influence on the performance and optimization effects of the algorithm, and that the improved algorithm is unlikely to converge prematurely to local optima and has good robustness.
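The pheromone mechanics behind such a model can be sketched on the simpler travelling-salesman case (the paper's system handles the full VRP with additional constraints; the parameter values here, including the alpha/beta/rho the paper tunes, are illustrative defaults).

```python
import random

def ant_colony_tsp(dist, n_ants=10, iters=100, alpha=1.0, beta=3.0,
                   rho=0.5, q=1.0, seed=0):
    """Ant-colony tour construction on a symmetric distance matrix `dist`."""
    rng = random.Random(seed)
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]           # pheromone levels
    best_tour, best_len = None, float("inf")
    for _ in range(iters):
        tours = []
        for _ in range(n_ants):
            start = rng.randrange(n)
            tour, unvisited = [start], set(range(n)) - {start}
            while unvisited:
                i = tour[-1]
                # edge attractiveness: pheromone^alpha * (1/distance)^beta
                weights = [(j, (tau[i][j] ** alpha) * ((1.0 / dist[i][j]) ** beta))
                           for j in unvisited]
                total = sum(w for _, w in weights)
                r, acc = rng.random() * total, 0.0
                for j, w in weights:              # roulette-wheel selection
                    acc += w
                    if acc >= r:
                        tour.append(j)
                        unvisited.discard(j)
                        break
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        # evaporate, then deposit pheromone in proportion to tour quality
        for i in range(n):
            for j in range(n):
                tau[i][j] *= (1 - rho)
        for tour, length in tours:
            for k in range(n):
                a, b = tour[k], tour[(k + 1) % n]
                tau[a][b] += q / length
                tau[b][a] += q / length
    return best_tour, best_len
```

The three key parameters the study analyzes map directly onto alpha (pheromone weight), beta (heuristic weight), and rho (evaporation rate) above.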
Optimization of High-Dimensional Functions through Hypercube Evaluation
Abiyev, Rahib H.; Tunay, Mustafa
2015-01-01
A novel learning algorithm for solving global numerical optimization problems is proposed. The proposed learning algorithm is an intensive stochastic search method based on the evaluation and optimization of a hypercube, and is called the hypercube optimization (HO) algorithm. The HO algorithm comprises an initialization and evaluation process, a displacement-shrink process, and a searching space process. The initialization and evaluation process initializes the solutions and evaluates them in a given hypercube. The displacement-shrink process determines the displacement and evaluates the objective function at the new points, and the searching space process determines the next hypercube using certain rules and evaluates the new solutions. The algorithms for these processes have been designed and presented in the paper. The designed HO algorithm is tested on specific benchmark functions. The simulations of the HO algorithm have been performed for the optimization of functions of 1000, 5000, or even 10,000 dimensions. The comparative simulation results with other approaches demonstrate that the proposed algorithm is a potential candidate for the optimization of both low- and high-dimensional functions.
Fong, Simon; Deb, Suash; Yang, Xin-She; Zhuang, Yan
2014-01-01
Traditional K-means clustering algorithms have the drawback of getting stuck at local optima that depend on the random values of the initial centroids. Optimization algorithms have their advantages in guiding iterative computation to search for global optima while avoiding local optima. The algorithms help speed up the clustering process by converging on a global optimum early, with multiple search agents in action. Inspired by nature, some contemporary optimization algorithms, including the Ant, Bat, Cuckoo, Firefly, and Wolf search algorithms, mimic swarming behavior, allowing them to cooperatively steer towards an optimal objective within a reasonable time. It is known that these so-called nature-inspired optimization algorithms have their own characteristics as well as pros and cons in different applications. When these algorithms are combined with the K-means clustering mechanism to enhance its clustering quality by avoiding local optima and finding global optima, the new hybrids are anticipated to produce unprecedented performance. In this paper, we report the results of our evaluation experiments on the integration of nature-inspired optimization methods into K-means algorithms. In addition to the standard metrics for evaluating clustering quality, the extended K-means algorithms empowered by nature-inspired optimization methods are applied to image segmentation as a case study.
Wong, Ling Ai; Shareef, Hussain; Mohamed, Azah; Ibrahim, Ahmad Asrul
2014-01-01
This paper presents the application of an enhanced opposition-based firefly algorithm to obtaining the optimal battery energy storage system (BESS) sizing in a photovoltaic-generation-integrated radial distribution network, in order to mitigate the voltage rise problem. Initially, the performance of the original firefly algorithm is enhanced by utilizing opposition-based learning and introducing an inertia weight. After evaluating the performance of the enhanced opposition-based firefly algorithm (EOFA) on fifteen benchmark functions, it is adopted to determine the optimal size for the BESS. Two optimization processes are conducted: the first aims to obtain the optimal battery output power on an hourly basis, and the second aims to obtain the optimal BESS capacity by considering the state-of-charge constraint of the BESS. The effectiveness of the proposed method is validated by applying the algorithm to the 69-bus distribution system and by comparing the performance of EOFA with the conventional firefly algorithm and the gravitational search algorithm. Results show that EOFA has the best comparative performance in terms of mitigating the voltage rise problem.
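The two enhancements named in the abstract, opposition-based initialization and a decaying inertia-style randomization weight, can be grafted onto a textbook firefly algorithm as follows; this is a sketch of those ingredients, not the authors' exact EOFA, and all parameter values are assumptions.

```python
import math
import random

def firefly(f, bounds, n=20, iters=150, beta0=1.0, gamma=0.01, alpha=0.2, seed=0):
    """Firefly minimization with opposition-based initialization and a
    linearly decaying randomization weight."""
    rng = random.Random(seed)
    # opposition-based initialization: keep the better of each point
    # and its "opposite" reflected through the box center
    pop = []
    for _ in range(n):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        opp = [lo + hi - v for v, (lo, hi) in zip(x, bounds)]
        pop.append(x if f(x) <= f(opp) else opp)
    fit = [f(x) for x in pop]
    for t in range(iters):
        w = 1.0 - t / iters                     # decaying randomization weight
        for i in range(n):
            for j in range(n):
                if fit[j] < fit[i]:             # move firefly i toward brighter j
                    r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                    beta = beta0 * math.exp(-gamma * r2)  # attractiveness
                    pop[i] = [min(max(a + beta * (b - a)
                                      + alpha * w * (rng.random() - 0.5), lo), hi)
                              for a, b, (lo, hi) in zip(pop[i], pop[j], bounds)]
                    fit[i] = f(pop[i])
        # the brightest firefly never moves, so the incumbent only improves
    k = min(range(n), key=lambda i: fit[i])
    return pop[k], fit[k]
```

Opposition-based sampling gives the swarm a better-spread start, and the decaying weight shifts the search from exploration to refinement, the same intent as the EOFA modifications.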
Multiple sequence alignment using multi-objective based bacterial foraging optimization algorithm.
Rani, R Ranjani; Ramyachitra, D
2016-12-01
Multiple sequence alignment (MSA) is a widespread approach in computational biology and bioinformatics. MSA deals with how sequences of nucleotides and amino acids are aligned with the fewest possible gaps between them, which reveals the functional, evolutionary and structural relationships among the sequences. Still, the computation of MSA is a challenging task when it comes to providing accurate and statistically significant alignments. In this work, the Bacterial Foraging Optimization (BFO) algorithm was employed to align biological sequences, resulting in a non-dominated optimal solution. It employs multiple objectives: maximization of similarity, non-gap percentage and conserved blocks, and minimization of gap penalty. The BAliBASE 3.0 benchmark database was utilized to examine the proposed algorithm against other methods. In this paper, two algorithms are proposed: a Hybrid Genetic Algorithm with Artificial Bee Colony (GA-ABC) and the Bacterial Foraging Optimization Algorithm. It was found that the Hybrid Genetic Algorithm with Artificial Bee Colony performed better than the existing optimization algorithms, but the conserved blocks were not obtained using GA-ABC. BFO was then used for the alignment, and the conserved blocks were obtained. The proposed Multi-Objective Bacterial Foraging Optimization Algorithm (MO-BFO) was compared with the widely used MSA methods Clustal Omega, Kalign, MUSCLE, MAFFT, Genetic Algorithm (GA), Ant Colony Optimization (ACO), Artificial Bee Colony (ABC), Particle Swarm Optimization (PSO) and the Hybrid Genetic Algorithm with Artificial Bee Colony (GA-ABC). The final results show that the proposed MO-BFO algorithm yields better alignments than most widely used methods.
Use of the Hotelling observer to optimize image reconstruction in digital breast tomosynthesis
Sánchez, Adrian A.; Sidky, Emil Y.; Pan, Xiaochuan
2015-01-01
We propose an implementation of the Hotelling observer that can be applied to the optimization of linear image reconstruction algorithms in digital breast tomosynthesis. The method is based on considering information within a specific region of interest, and it is applied to the optimization of algorithms for detectability of microcalcifications. Several linear algorithms are considered: simple back-projection, filtered back-projection, back-projection filtration, and Λ-tomography. The optimized algorithms are then evaluated through the reconstruction of phantom data. The method appears robust across algorithms and parameters and leads to the generation of algorithm implementations which subjectively appear optimized for the task of interest.
Hossein-Zadeh, Navid Ghavi
2016-08-01
The aim of this study was to compare seven non-linear mathematical models (Brody, Wood, Dhanoa, Sikka, Nelder, Rook and Dijkstra) to examine their efficiency in describing the lactation curves for milk fat to protein ratio (FPR) in Iranian buffaloes. Data were 43 818 test-day records for FPR from the first three lactations of Iranian buffaloes which were collected on 523 dairy herds in the period from 1996 to 2012 by the Animal Breeding Center of Iran. Each model was fitted to monthly FPR records of buffaloes using the non-linear mixed model procedure (PROC NLMIXED) in SAS and the parameters were estimated. The models were tested for goodness of fit using Akaike's information criterion (AIC), Bayesian information criterion (BIC) and log maximum likelihood (-2 Log L). The Nelder and Sikka mixed models provided the best fit of lactation curve for FPR in the first and second lactations of Iranian buffaloes, respectively. However, Wood, Dhanoa and Sikka mixed models provided the best fit of lactation curve for FPR in the third parity buffaloes. Evaluation of first, second and third lactation features showed that all models, except for Dijkstra model in the third lactation, under-predicted test time at which daily FPR was minimum. On the other hand, minimum FPR was over-predicted by all equations. Evaluation of the different models used in this study indicated that non-linear mixed models were sufficient for fitting test-day FPR records of Iranian buffaloes.
Portfolio optimization by using linear programming models based on genetic algorithms
NASA Astrophysics Data System (ADS)
Sukono; Hidayat, Y.; Lesmana, E.; Putra, A. S.; Napitupulu, H.; Supian, S.
2018-01-01
In this paper, we discuss investment portfolio optimization using a linear programming model based on genetic algorithms. It is assumed that the portfolio risk is measured by absolute standard deviation, and that each investor has a risk tolerance for the investment portfolio. To solve the investment portfolio optimization problem, the issue is formulated as a linear programming model. Furthermore, the optimum solution of the linear program is determined using a genetic algorithm. As a numerical illustration, we analyze some of the stocks traded on the capital market in Indonesia. Based on the analysis, it is shown that the portfolio optimization performed by the genetic algorithm approach produces a more efficient portfolio than the portfolio optimization performed by a linear programming algorithm approach. Therefore, genetic algorithms can be considered an alternative for determining the optimal investment portfolio, particularly with linear programming models.
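A genetic-algorithm treatment of an absolute-deviation portfolio model can be sketched as below; the return scenarios, penalty weight, and GA operators are illustrative assumptions, not the paper's data or exact formulation.

```python
import random

def ga_portfolio(returns, target, pop_size=40, gens=200, mut=0.1, seed=0):
    """GA search for long-only portfolio weights minimizing the mean absolute
    deviation of portfolio return, with a penalty for missing `target`
    expected return. `returns` is a list of per-period asset-return vectors."""
    rng = random.Random(seed)
    n_assets = len(returns[0])
    means = [sum(r[j] for r in returns) / len(returns) for j in range(n_assets)]

    def normalize(w):
        s = sum(w)
        return [v / s for v in w]                  # weights sum to 1

    def fitness(w):
        port = [sum(wj * rj for wj, rj in zip(w, r)) for r in returns]
        mu = sum(port) / len(port)
        mad = sum(abs(p - mu) for p in port) / len(port)  # absolute-deviation risk
        exp_ret = sum(wj * mj for wj, mj in zip(w, means))
        # penalty weight 100 is an arbitrary choice for this sketch
        return mad + 100.0 * max(0.0, target - exp_ret)

    pop = [normalize([rng.random() for _ in range(n_assets)])
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 2]               # keep the better half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]   # arithmetic crossover
            if rng.random() < mut:                 # mutate one weight
                k = rng.randrange(n_assets)
                child[k] = max(1e-9, child[k] + rng.gauss(0, 0.2))
            children.append(normalize(child))
        pop = elite + children
    best = min(pop, key=fitness)
    return best, fitness(best)
```

The penalty term plays the role of the return constraint in the linear programming formulation, so the GA searches the same feasible region without needing an LP solver.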
A Comprehensive Review of Swarm Optimization Algorithms
2015-01-01
Many swarm optimization algorithms have been introduced since the early 1960s, from Evolutionary Programming to the most recent, Grey Wolf Optimization. All of these algorithms have demonstrated their potential to solve many optimization problems. This paper provides an in-depth survey of well-known optimization algorithms. Selected algorithms are briefly explained and compared with each other comprehensively through experiments conducted using thirty well-known benchmark functions. Their advantages and disadvantages are also discussed. A number of statistical tests are then carried out to determine the significant performances. The results indicate the overall advantage of Differential Evolution (DE), closely followed by Particle Swarm Optimization (PSO), compared with the other considered approaches. PMID:25992655
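As a point of reference for the survey's finding, the classic DE/rand/1/bin variant of Differential Evolution can be sketched on a standard benchmark such as the sphere function; the parameter values below are conventional defaults, not the paper's experimental settings:

```python
import random

def sphere(x):
    """A standard benchmark function: global minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def differential_evolution(f, dim=5, bounds=(-5.0, 5.0), np_=30,
                           F=0.5, CR=0.9, generations=400, seed=0):
    """Minimal DE/rand/1/bin: mutate with a scaled difference of two random
    vectors, binomially cross with the target, keep the better of the two."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(np_)]
    fit = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(np_):
            a, b, c = rng.sample([j for j in range(np_) if j != i], 3)
            jrand = rng.randrange(dim)              # forces at least one swap
            trial = [pop[a][k] + F * (pop[b][k] - pop[c][k])
                     if (rng.random() < CR or k == jrand) else pop[i][k]
                     for k in range(dim)]
            ft = f(trial)
            if ft <= fit[i]:                        # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
    best = min(range(np_), key=lambda i: fit[i])
    return pop[best], fit[best]
```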
Privacy Preservation in Distributed Subgradient Optimization Algorithms.
Lou, Youcheng; Yu, Lean; Wang, Shouyang; Yi, Peng
2017-07-31
In this paper, some privacy-preserving features for distributed subgradient optimization algorithms are considered. Most existing distributed algorithms focus mainly on algorithm design and convergence analysis, but not on the protection of agents' privacy, which is becoming an increasingly important issue in applications involving sensitive information. We first show that the distributed subgradient synchronous homogeneous-stepsize algorithm is not privacy preserving, in the sense that a malicious agent can asymptotically discover other agents' subgradients by transmitting untrue estimates to its neighbors. A distributed subgradient asynchronous heterogeneous-stepsize projection algorithm is then proposed, and its convergence and optimality are established. In contrast to the synchronous homogeneous-stepsize algorithm, in the new algorithm agents make their optimization updates asynchronously with heterogeneous stepsizes. The two introduced mechanisms, projection and asynchronous heterogeneous-stepsize optimization, guarantee that agents' privacy is effectively protected.
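A stylized sketch of the update pattern described above: asynchronous wake-ups, agent-specific (heterogeneous) diminishing stepsizes, and a projection step onto a common constraint set. The scalar local costs, the complete-graph averaging, and the stepsize schedule are illustrative assumptions; the precise conditions under which the paper establishes exact optimality are given in the paper itself.

```python
import random

CENTERS = [1.0, 3.0, 5.0, 7.0]        # hypothetical local costs f_i(x) = (x - c_i)^2
BOX = (0.0, 10.0)                     # common constraint set for the projection

def project(x):
    return min(max(x, BOX[0]), BOX[1])

def run(iters=4000, seed=0):
    rng = random.Random(seed)
    n = len(CENTERS)
    x = [rng.uniform(*BOX) for _ in range(n)]
    for t in range(1, iters + 1):
        i = rng.randrange(n)           # asynchronous: one agent wakes up
        avg = sum(x) / n               # consensus over (complete-graph) neighbors
        step = (0.5 + 0.1 * i) / t     # heterogeneous, diminishing stepsize
        grad = 2.0 * (x[i] - CENTERS[i])   # subgradient of the local cost
        x[i] = project(avg - step * grad)
    return x
```

Over time the agents reach approximate consensus on a point inside the constraint set shaped by all local costs.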
Geologic map of the Mead quadrangle (V-21), Venus
Campbell, Bruce A.; Clark, David A.
2006-01-01
The Magellan spacecraft orbited Venus from August 10, 1990, until it plunged into the Venusian atmosphere on October 12, 1994. Magellan Mission objectives included (1) improving the knowledge of the geological processes, surface properties, and geologic history of Venus by analysis of surface radar characteristics, topography, and morphology and (2) improving the knowledge of the geophysics of Venus by analysis of Venusian gravity. The Mead quadrangle (V-21) of Venus is bounded by lat 0 deg and 25 deg N., long 30 deg and 60 deg E. This quadrangle is one of 62 covering Venus at 1:5,000,000 scale. Named for the largest crater on Venus, the quadrangle is dominated by effusive volcanic deposits associated with five major coronae in eastern Eistla Regio (Didilia, Pavlova, Calakomana, Isong, and Ninmah), corona-like tectonic features, and Disani Corona. The southern extremity of Bell Regio, marked by lava flows from Nyx Mons, north of the map area, forms the north-central part of the quadrangle. The shield volcanoes Kali, Dzalarhons, and Ptesanwi Montes lie south and southwest of the large corona-related flow field. Lava flows from sources east of Mead crater flood low-lying areas along the east edge of the quadrangle.
Particle swarm optimization: an alternative in marine propeller optimization?
NASA Astrophysics Data System (ADS)
Vesting, F.; Bensow, R. E.
2018-01-01
This article deals with improving and evaluating the performance of two evolutionary algorithm approaches for automated engineering design optimization. Here a marine propeller design with constraints on cavitation nuisance is the intended application. For this purpose, the particle swarm optimization (PSO) algorithm is adapted for multi-objective optimization and constraint handling for use in propeller design. Three PSO algorithms are developed and tested for the optimization of four commercial propeller designs for different ship types. The results are evaluated by interrogating the generation medians and the Pareto front development. The same propellers are also optimized utilizing the well-established NSGA-II genetic algorithm to provide benchmark results. The authors' PSO algorithms deliver results comparable to NSGA-II, but converge earlier and enhance the solution in terms of constraint violations.
Segmentation of MRI Brain Images with an Improved Harmony Searching Algorithm.
Yang, Zhang; Shufan, Ye; Li, Guo; Weifeng, Ding
2016-01-01
The harmony searching (HS) algorithm is an optimization search algorithm currently applied to many practical problems. HS iteratively revises the variables stored in the harmony memory, together with the probabilities with which candidate values are chosen, until the iterations converge to an optimal result. Accordingly, this study proposed a modified algorithm to improve its efficiency. First, a rough set algorithm was employed to improve the convergence and accuracy of the HS algorithm. Then, the optimal value was obtained using the improved HS algorithm and employed as the initial value of a fuzzy clustering algorithm for segmenting magnetic resonance imaging (MRI) brain images. Experimental results showed that the improved HS algorithm attained better convergence and more accurate results than the original HS algorithm. In our study, the MRI image segmentation effect of the improved algorithm was superior to that of the original fuzzy clustering method. PMID:27403428
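A minimal sketch of the baseline HS loop the study builds on, with its three standard moves (memory consideration, pitch adjustment, random selection) and worst-harmony replacement; the parameter values are conventional defaults, and the paper's rough-set modification is not shown:

```python
import random

def harmony_search(f, dim=2, bounds=(-5.0, 5.0), hms=10,
                   hmcr=0.9, par=0.3, bw=0.1, iters=3000, seed=0):
    """Minimal harmony search minimizing f over a box."""
    rng = random.Random(seed)
    lo, hi = bounds
    memory = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    scores = [f(h) for h in memory]
    for _ in range(iters):
        new = []
        for k in range(dim):
            if rng.random() < hmcr:            # memory consideration
                v = memory[rng.randrange(hms)][k]
                if rng.random() < par:         # pitch adjustment
                    v += rng.uniform(-bw, bw)
            else:                              # random selection
                v = rng.uniform(lo, hi)
            new.append(min(max(v, lo), hi))
        s = f(new)
        worst = max(range(hms), key=lambda i: scores[i])
        if s < scores[worst]:                  # revise the harmony memory
            memory[worst], scores[worst] = new, s
    best = min(range(hms), key=lambda i: scores[i])
    return memory[best], scores[best]
```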
Optimal design of low-density SNP arrays for genomic prediction: algorithm and applications
USDA-ARS?s Scientific Manuscript database
Low-density (LD) single nucleotide polymorphism (SNP) arrays provide a cost-effective solution for genomic prediction and selection, but algorithms and computational tools are needed for their optimal design. A multiple-objective, local optimization (MOLO) algorithm was developed for design of optim...
Skull removal in MR images using a modified artificial bee colony optimization algorithm.
Taherdangkoo, Mohammad
2014-01-01
Removal of the skull from brain Magnetic Resonance (MR) images is an important preprocessing step required for other image analysis techniques such as brain tissue segmentation. In this paper, we propose a new algorithm based on the Artificial Bee Colony (ABC) optimization algorithm to remove the skull region from brain MR images. We modify the ABC algorithm using a different strategy for initializing the coordinates of scout bees and their direction of search. Moreover, we impose an additional constraint on the ABC algorithm to avoid the creation of discontinuous regions. Our algorithm successfully removed the skull in all of a sample of de-identified MR brain images acquired from different scanner models. Compared with previously introduced, well-known optimization algorithms such as Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO), the proposed algorithm demonstrates superior results and computational performance, suggesting its potential for clinical applications.
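For context, the unmodified ABC loop that the paper's variant builds on can be sketched on a generic continuous test function; this is the standard employed/onlooker/scout structure, not the paper's skull-stripping initialization or continuity constraint:

```python
import random

def abc_minimize(f, dim=2, bounds=(-5.0, 5.0), n_sources=15,
                 limit=30, iters=2000, seed=0):
    """Minimal standard ABC: employed bees explore neighbors, onlookers
    reinforce good sources fitness-proportionally, scouts restart stale ones."""
    rng = random.Random(seed)
    lo, hi = bounds
    src = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_sources)]
    val = [f(s) for s in src]
    trials = [0] * n_sources

    def neighbor(i):
        j = rng.randrange(n_sources - 1)
        j = j if j < i else j + 1              # random partner source, j != i
        k = rng.randrange(dim)
        cand = src[i][:]
        cand[k] += rng.uniform(-1, 1) * (src[i][k] - src[j][k])
        cand[k] = min(max(cand[k], lo), hi)
        v = f(cand)
        if v < val[i]:
            src[i], val[i], trials[i] = cand, v, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_sources):             # employed bee phase
            neighbor(i)
        fit = [1.0 / (1.0 + v) for v in val]   # assumes f >= 0 on this problem
        for _ in range(n_sources):             # onlooker phase
            i = rng.choices(range(n_sources), weights=fit)[0]
            neighbor(i)
        for i in range(n_sources):             # scout phase
            if trials[i] > limit:
                src[i] = [rng.uniform(lo, hi) for _ in range(dim)]
                val[i], trials[i] = f(src[i]), 0
    best = min(range(n_sources), key=lambda i: val[i])
    return src[best], val[best]
```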
Hybrid algorithms for fuzzy reverse supply chain network design.
Che, Z H; Chiang, Tzu-An; Kuo, Y C; Cui, Zhihua
2014-01-01
In consideration of capacity constraints, fuzzy defect ratio, and fuzzy transport loss ratio, this paper establishes an optimized decision model for production planning and distribution in a multiphase, multiproduct reverse supply chain that handles defects returned to original manufacturers. It also develops hybrid algorithms, namely Particle Swarm Optimization-Genetic Algorithm (PSO-GA), Genetic Algorithm-Simulated Annealing (GA-SA), and Particle Swarm Optimization-Simulated Annealing (PSO-SA), for solving the optimized model. A case study of a multiphase, multiproduct reverse supply chain network demonstrates the suitability of the decision model and the applicability of the algorithms. The hybrid algorithms showed excellent solving capability when compared with the original GA and PSO methods. PMID:24892057
Hybrid cryptosystem RSA - CRT optimization and VMPC
NASA Astrophysics Data System (ADS)
Rahmadani, R.; Mawengkang, H.; Sutarman
2018-03-01
Hybrid cryptosystems combine symmetric and asymmetric algorithms, exploiting the encryption/decryption speed of symmetric algorithms while using asymmetric algorithms to secure the symmetric keys. In this paper we propose a hybrid cryptosystem that combines the symmetric algorithm VMPC with an optimized RSA-CRT asymmetric algorithm. The RSA-CRT optimization speeds up decryption by recovering the plaintext with the dp and p key components only, so no full CRT recombination needs to be performed. The VMPC algorithm is efficient in software implementations and reduces the known weaknesses in RC4 key generation. The results show that the hybrid cryptosystem combining optimized RSA-CRT and VMPC is faster than the hybrid cryptosystems RSA-VMPC and RSA-CRT-VMPC. Keywords: cryptography, RSA, RSA-CRT, VMPC, hybrid cryptosystem.
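The RSA-CRT shortcut described above can be illustrated with toy parameters. The primes below are tiny and insecure, chosen only to make the arithmetic visible; note that decryption from dp and p alone is correct only when the plaintext, as an integer, is smaller than p:

```python
# Toy RSA-CRT key setup (requires Python 3.8+ for pow(e, -1, m)).
p, q = 1009, 1013                 # toy primes; real keys use 1024+ bit primes
n = p * q
e = 17
d = pow(e, -1, (p - 1) * (q - 1))
dp, dq = d % (p - 1), d % (q - 1)
q_inv = pow(q, -1, p)

def decrypt_crt(c):
    """Full CRT decryption: two half-size exponentiations plus recombination."""
    m1 = pow(c, dp, p)
    m2 = pow(c, dq, q)
    h = (q_inv * (m1 - m2)) % p
    return m2 + h * q

def decrypt_dp_only(c):
    """The shortcut from the abstract: correct whenever the plaintext < p."""
    return pow(c, dp, p)

m = 123                            # small message, m < p by construction
c = pow(m, e, n)
```

Both decryptions recover m here because m < p; the full CRT path is the general case.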
Pokharel, Shyam; Rana, Suresh; Blikenstaff, Joseph; Sadeghi, Amir; Prestidge, Bradley
2013-07-08
The purpose of this study is to investigate the effectiveness of the HIPO planning and optimization algorithm for real-time prostate HDR brachytherapy. This study consists of 20 patients who underwent ultrasound-based real-time HDR brachytherapy of the prostate using the treatment planning system called Oncentra Prostate (SWIFT version 3.0). The treatment plans for all patients were optimized using inverse dose-volume histogram-based optimization followed by graphical optimization (GRO) in real time. GRO is manual manipulation of isodose lines slice by slice, so the quality of the plan heavily depends on planner expertise and experience. The data for all patients were retrieved later, and treatment plans were created and optimized using the HIPO algorithm with the same set of dose constraints, number of catheters, and set of contours as in the real-time plans. The HIPO algorithm is a hybrid because it combines both stochastic and deterministic algorithms. The stochastic algorithm, called simulated annealing, searches the optimal catheter distributions for a given set of dose objectives. The deterministic algorithm, called dose-volume histogram-based optimization (DVHO), optimizes the three-dimensional dose distribution quickly by moving straight downhill once it is in the advantageous region of the search space given by the stochastic algorithm. The PTV receiving 100% of the prescription dose (V100) was 97.56% and 95.38% with GRO and HIPO, respectively. The mean dose (D(mean)) and minimum dose to 10% volume (D10) for the urethra, rectum, and bladder were all statistically lower with HIPO compared to GRO using Student's paired t-test at the 5% significance level. HIPO can provide treatment plans with comparable target coverage to that of GRO with a reduction in dose to the critical structures.
Improved hybrid optimization algorithm for 3D protein structure prediction.
Zhou, Changjun; Hou, Caixia; Wei, Xiaopeng; Zhang, Qiang
2014-07-01
A new improved hybrid optimization algorithm, the PGATS algorithm, based on the toy off-lattice model, is presented for three-dimensional protein structure prediction problems. The algorithm combines particle swarm optimization (PSO), the genetic algorithm (GA), and tabu search (TS). Several improvement strategies are also adopted: a stochastic disturbance factor is added to the particle swarm optimization to improve its search ability; the crossover and mutation operations of the genetic algorithm are changed to a random linear method; and the tabu search algorithm is improved by appending a mutation operator. Through this combination of strategies and algorithms, protein structure prediction (PSP) in a 3D off-lattice model is achieved. The PSP problem is NP-hard, but it can be treated as a global optimization problem with many extrema and many parameters; this is the theoretical principle of the hybrid optimization algorithm proposed in this paper. The algorithm combines local search and global search, which overcomes the shortcomings of any single algorithm and gives full play to the advantages of each. The method is verified on the standard benchmark sequences, Fibonacci sequences and real protein sequences. Experiments show that the proposed method outperforms single algorithms on the accuracy of the calculated protein sequence energy value, which proves it an effective way to predict protein structure.
NASA Astrophysics Data System (ADS)
Bansal, Shonak; Singh, Arun Kumar; Gupta, Neena
2017-02-01
In real life, multi-objective engineering design problems are very tough and time-consuming optimization problems due to their high degree of nonlinearity, complexity and inhomogeneity. Nature-inspired multi-objective optimization algorithms are now becoming popular for solving such problems. This paper proposes an original multi-objective Bat algorithm (MOBA) and its extended form, a novel parallel hybrid multi-objective Bat algorithm (PHMOBA), to generate shortest-length Golomb rulers, called optimal Golomb ruler (OGR) sequences, at a reasonable computation time. OGRs find application in optical wavelength division multiplexing (WDM) systems as a channel-allocation algorithm to reduce four-wave mixing (FWM) crosstalk. The performance of both proposed algorithms in generating OGRs for optical WDM channel allocation is compared with that of existing classical computing and nature-inspired algorithms, including extended quadratic congruence (EQC), search algorithm (SA), genetic algorithms (GAs), biogeography based optimization (BBO) and big bang-big crunch (BB-BC) optimization. Simulations conclude that the proposed parallel hybrid multi-objective Bat algorithm works more efficiently than the original multi-objective Bat algorithm and the other existing algorithms, with higher convergence and success rates. The efficiency improvement of the proposed PHMOBA in generating OGRs up to 20 marks, in terms of ruler length and total optical channel bandwidth (TBW), is 100%, versus 85% for the original MOBA. Finally, the implications for further research are also discussed.
Multiobjective optimization of temporal processes.
Song, Zhe; Kusiak, Andrew
2010-06-01
This paper presents a dynamic predictive-optimization framework of a nonlinear temporal process. Data-mining (DM) and evolutionary strategy algorithms are integrated in the framework for solving the optimization model. DM algorithms learn dynamic equations from the process data. An evolutionary strategy algorithm is then applied to solve the optimization problem guided by the knowledge extracted by the DM algorithm. The concept presented in this paper is illustrated with the data from a power plant, where the goal is to maximize the boiler efficiency and minimize the limestone consumption. This multiobjective optimization problem can be either transformed into a single-objective optimization problem through preference aggregation approaches or into a Pareto-optimal optimization problem. The computational results have shown the effectiveness of the proposed optimization framework.
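One concrete instance of the framework's preference-aggregation route, collapsing two objectives into one by weights and optimizing with a (mu + lambda) evolution strategy, can be sketched as follows. The two objective functions are hypothetical stand-ins, not the boiler-efficiency and limestone models learned from the plant data:

```python
import random

def f_efficiency(x):  return -(x[0] - 1.0) ** 2   # stand-in: peak at x0 = 1
def f_consumption(x): return (x[1] - 2.0) ** 2    # stand-in: valley at x1 = 2

def aggregated(x, w=(0.5, 0.5)):
    """Weighted-sum aggregation: maximize efficiency == minimize its negative."""
    return w[0] * (-f_efficiency(x)) + w[1] * f_consumption(x)

def evolution_strategy(mu=5, lam=20, sigma=0.3, gens=200, seed=0):
    """(mu + lambda) ES: mutate parents, select the best of parents + offspring."""
    rng = random.Random(seed)
    parents = [[rng.uniform(-5, 5), rng.uniform(-5, 5)] for _ in range(mu)]
    for _ in range(gens):
        offspring = []
        for _ in range(lam):
            p = rng.choice(parents)
            offspring.append([v + rng.gauss(0, sigma) for v in p])
        pool = parents + offspring
        pool.sort(key=aggregated)
        parents = pool[:mu]                        # elitist (mu + lambda) selection
    return parents[0]
```

Varying the weights traces out different compromise solutions, which is the aggregation idea in miniature.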
Fu, Xingang; Li, Shuhui; Fairbank, Michael; Wunsch, Donald C; Alonso, Eduardo
2015-09-01
This paper investigates how to train a recurrent neural network (RNN) using the Levenberg-Marquardt (LM) algorithm as well as how to implement optimal control of a grid-connected converter (GCC) using an RNN. To successfully and efficiently train an RNN using the LM algorithm, a new forward accumulation through time (FATT) algorithm is proposed to calculate the Jacobian matrix required by the LM algorithm. This paper explores how to incorporate FATT into the LM algorithm. The results show that the combination of the LM and FATT algorithms trains RNNs better than the conventional backpropagation through time algorithm. This paper presents an analytical study on the optimal control of GCCs, including theoretically ideal optimal and suboptimal controllers. To overcome the inapplicability of the optimal GCC controller under practical conditions, a new RNN controller with an improved input structure is proposed to approximate the ideal optimal controller. The performance of an ideal optimal controller and a well-trained RNN controller was compared in close to real-life power converter switching environments, demonstrating that the proposed RNN controller can achieve close to ideal optimal control performance even under low sampling rate conditions. The excellent performance of the proposed RNN controller under challenging and distorted system conditions further indicates the feasibility of using an RNN to approximate optimal control in practical applications.
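The damped LM update at the core of the training scheme can be shown on a tiny least-squares problem. The paper's contribution (FATT) concerns computing the Jacobian of an RNN through time; here the Jacobian is analytic for a hypothetical two-parameter model y = a*exp(b*t), but the update rule, dw = (J^T J + mu I)^{-1} J^T r with mu raised on failure and lowered on success, is the same:

```python
import math

T = [0.0, 0.5, 1.0, 1.5, 2.0]
Y = [2.0 * math.exp(0.8 * t) for t in T]           # synthetic targets, a=2, b=0.8

def residuals(a, b):
    return [a * math.exp(b * t) - y for t, y in zip(T, Y)]

def jacobian(a, b):
    # d r_i / d a = exp(b t_i); d r_i / d b = a t_i exp(b t_i)
    return [[math.exp(b * t), a * t * math.exp(b * t)] for t in T]

def solve2(A, g):
    """Cramer's rule for the 2x2 system A x = g."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(A[1][1] * g[0] - A[0][1] * g[1]) / det,
            (A[0][0] * g[1] - A[1][0] * g[0]) / det]

def levenberg_marquardt(a=1.0, b=0.1, mu=1e-3, iters=100):
    cost = sum(r * r for r in residuals(a, b))
    for _ in range(iters):
        J, r = jacobian(a, b), residuals(a, b)
        JtJ = [[sum(J[i][p] * J[i][q] for i in range(len(J))) for q in (0, 1)]
               for p in (0, 1)]
        Jtr = [sum(J[i][p] * r[i] for i in range(len(J))) for p in (0, 1)]
        A = [[JtJ[0][0] + mu, JtJ[0][1]], [JtJ[1][0], JtJ[1][1] + mu]]
        da, db = solve2(A, Jtr)
        na, nb = a - da, b - db
        new_cost = sum(rr * rr for rr in residuals(na, nb))
        if new_cost < cost:              # accept step, trust the model more
            a, b, cost, mu = na, nb, new_cost, mu * 0.5
        else:                            # reject step, damp harder
            mu *= 2.0
    return a, b
```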
Chaos Quantum-Behaved Cat Swarm Optimization Algorithm and Its Application in the PV MPPT
Nie, Xiaohua; Wang, Wei; Nie, Haoyao
2017-01-01
The Cat Swarm Optimization (CSO) algorithm was put forward in 2006. Despite a faster convergence speed than the Particle Swarm Optimization (PSO) algorithm, the application of CSO is greatly limited by "premature convergence," that is, the possibility of trapping in a local optimum when dealing with nonlinear optimization problems with many local extrema. To surmount these shortcomings, a Chaos Quantum-behaved Cat Swarm Optimization (CQCSO) algorithm is proposed in this paper. First, Quantum-behaved Cat Swarm Optimization (QCSO) improves the accuracy of the CSO algorithm; because QCSO can still fall into a local optimum in its later stages, CQCSO further introduces a tent map for jumping out of local optima. Second, CQCSO has been applied to five different test functions, showing higher accuracy and less time consumption than CSO and QCSO. Finally, a photovoltaic MPPT model and experimental platform were established, and a global maximum power point tracking control strategy was achieved by the CQCSO algorithm, whose effectiveness and efficiency have been verified by both simulation and experiment. PMID:29181020
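The tent map used for the chaotic perturbation is simple to state; a minimal sketch of the sequence it generates, which a CQCSO-style algorithm can map over the search range to jump solutions out of local optima:

```python
def tent(x):
    """Tent map on (0, 1): piecewise-linear, fully chaotic for slope 2."""
    return 2.0 * x if x < 0.5 else 2.0 * (1.0 - x)

def chaotic_sequence(x0=0.37, n=5):
    """Iterate the tent map from a seed in (0, 1)."""
    seq, x = [], x0
    for _ in range(n):
        x = tent(x)
        seq.append(x)
    return seq
```

Scaling each value to the decision-variable bounds yields a chaotic (rather than uniform-random) perturbation of a stalled solution.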
New Dandelion Algorithm Optimizes Extreme Learning Machine for Biomedical Classification Problems
Li, Xiguang; Zhao, Liang; Gong, Changqing; Liu, Xiaojing
2017-01-01
Inspired by the behavior of dandelion sowing, a novel swarm intelligence algorithm, the dandelion algorithm (DA), is proposed for global optimization of complex functions in this paper. In DA, the dandelion population is divided into two subpopulations, and different subpopulations undergo different sowing behaviors. Moreover, another sowing method is designed to jump out of local optima. To demonstrate the validity of DA, we compare the proposed algorithm with other existing algorithms, including the bat algorithm, particle swarm optimization, and the enhanced fireworks algorithm. Simulations show that the proposed algorithm performs much better than the other algorithms. The proposed algorithm is also applied to optimize an extreme learning machine (ELM) for biomedical classification problems, with considerable effect. Finally, we use different fusion methods to form different fusion classifiers, which achieve higher accuracy and better stability to some extent. PMID:29085425
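The ELM being tuned can be sketched in its vanilla form: hidden-layer weights are drawn at random and fixed, and only the output weights are solved, here by ridge-regularized normal equations on a small synthetic regression task. The biomedical data and the DA-based tuning are not reproduced; all sizes and seeds below are illustrative:

```python
import math
import random

def solve(A, b):
    """Gaussian elimination with partial pivoting for a dense linear system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            fac = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= fac * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def elm_fit(xs, ys, hidden=15, ridge=1e-6, seed=0):
    """ELM for 1-D regression: random tanh features, closed-form output weights."""
    rng = random.Random(seed)
    W = [(rng.uniform(-2, 2), rng.uniform(-2, 2)) for _ in range(hidden)]
    H = [[math.tanh(a * x + c) for a, c in W] for x in xs]
    # ridge-regularized normal equations: (H^T H + ridge I) beta = H^T y
    HtH = [[sum(H[i][p] * H[i][q] for i in range(len(xs)))
            + (ridge if p == q else 0.0) for q in range(hidden)]
           for p in range(hidden)]
    Hty = [sum(H[i][p] * ys[i] for i in range(len(xs))) for p in range(hidden)]
    beta = solve(HtH, Hty)
    return lambda x: sum(bb * math.tanh(a * x + c) for bb, (a, c) in zip(beta, W))
```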
Improved mine blast algorithm for optimal cost design of water distribution systems
NASA Astrophysics Data System (ADS)
Sadollah, Ali; Guen Yoo, Do; Kim, Joong Hoon
2015-12-01
The design of water distribution systems is a large class of combinatorial, nonlinear optimization problems with complex constraints such as conservation of mass and energy equations. Since feasible solutions are often extremely complex, traditional optimization techniques are insufficient. Recently, metaheuristic algorithms have been applied to this class of problems because they are highly efficient. In this article, a recently developed optimizer called the mine blast algorithm (MBA) is considered. The MBA is improved and coupled with the hydraulic simulator EPANET to find the optimal cost design for water distribution systems. The performance of the improved mine blast algorithm (IMBA) is demonstrated using the well-known Hanoi, New York tunnels and Balerma benchmark networks. Optimization results obtained using IMBA are compared to those using MBA and other optimizers in terms of their minimum construction costs and convergence rates. For the complex Balerma network, IMBA offers the cheapest network design compared to other optimization algorithms.
Linear antenna array optimization using flower pollination algorithm.
Saxena, Prerna; Kothari, Ashwin
2016-01-01
Flower pollination algorithm (FPA) is a new nature-inspired evolutionary algorithm used to solve multi-objective optimization problems. The aim of this paper is to introduce FPA to the electromagnetics and antenna community for the optimization of linear antenna arrays. FPA is applied for the first time to linear arrays to obtain optimized antenna positions that achieve an array pattern with minimum side lobe level along with placement of deep nulls in desired directions. Various design examples illustrate the use of FPA for linear antenna array optimization, and the results are validated by benchmarking against other state-of-the-art, nature-inspired evolutionary algorithms such as particle swarm optimization, ant colony optimization and cat swarm optimization. The results suggest that in most cases FPA outperforms the other evolutionary algorithms, and at times it yields a similar performance.
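A minimal FPA loop, with the usual switch between global pollination via Lévy flights (drawn with Mantegna's algorithm) and local pollination, can be sketched on a generic test function; the antenna-array objective (side lobe level with null constraints) is replaced by a simple placeholder:

```python
import math
import random

def levy(rng, beta=1.5):
    """Mantegna's algorithm for a Levy-stable step length."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u, v = rng.gauss(0, sigma), rng.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def flower_pollination(f, dim=2, bounds=(-5.0, 5.0), n=20,
                       p_switch=0.8, iters=1500, seed=0):
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    fit = [f(x) for x in pop]
    g = pop[min(range(n), key=lambda i: fit[i])][:]
    for _ in range(iters):
        for i in range(n):
            if rng.random() < p_switch:        # global pollination: Levy flight
                cand = [pop[i][k] + levy(rng) * (g[k] - pop[i][k])
                        for k in range(dim)]
            else:                              # local pollination
                j, k2 = rng.sample(range(n), 2)
                eps = rng.random()
                cand = [pop[i][k] + eps * (pop[j][k] - pop[k2][k])
                        for k in range(dim)]
            cand = [min(max(v, lo), hi) for v in cand]
            fc = f(cand)
            if fc < fit[i]:                    # greedy acceptance
                pop[i], fit[i] = cand, fc
                if fc < f(g):
                    g = cand[:]
    return g, f(g)
```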
A new chaotic multi-verse optimization algorithm for solving engineering optimization problems
NASA Astrophysics Data System (ADS)
Sayed, Gehad Ismail; Darwish, Ashraf; Hassanien, Aboul Ella
2018-03-01
Multi-verse optimization algorithm (MVO) is one of the recent meta-heuristic optimization algorithms. The main inspiration of this algorithm came from multi-verse theory in physics. However, MVO, like most optimization algorithms, suffers from a low convergence rate and entrapment in local optima. In this paper, a new chaotic multi-verse optimization algorithm (CMVO) is proposed to overcome these problems. The proposed CMVO is applied to 13 benchmark functions and 7 well-known design problems in the engineering and mechanical field, namely, three-bar truss, speed reducer design, pressure vessel, spring design, welded beam, rolling element bearing and multiple disc clutch brake. In the current study, a modified feasible-based mechanism is employed to handle constraints. In this mechanism, four rules were used to handle the specific constraint problem through maintaining a balance between feasible and infeasible solutions. Moreover, 10 well-known chaotic maps are used to improve the performance of MVO. The experimental results showed that CMVO outperforms other meta-heuristic optimization algorithms on most of the optimization problems. Also, the results reveal that the sine chaotic map is the most appropriate map to significantly boost MVO's performance.
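The feasible-based constraint-handling mechanism mentioned above can be illustrated by a pairwise comparison rule in the spirit of Deb's rules (the paper's exact four-rule mechanism may differ): feasible beats infeasible, two feasible solutions compare by objective, and two infeasible solutions by total constraint violation:

```python
def violation(gs):
    """Total violation for inequality constraints written as g(x) <= 0."""
    return sum(max(0.0, g) for g in gs)

def better(a, b):
    """a, b are (objective, [constraint values]) pairs; True if a wins."""
    fa, ga = a
    fb, gb = b
    va, vb = violation(ga), violation(gb)
    if va == 0.0 and vb == 0.0:
        return fa < fb            # both feasible: lower objective wins
    if va == 0.0 or vb == 0.0:
        return va == 0.0          # feasibility beats infeasibility
    return va < vb                # both infeasible: smaller violation wins
```

Used as the selection comparator inside any population-based optimizer, this biases the search toward the feasible region without a hand-tuned penalty weight.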
Speed and convergence properties of gradient algorithms for optimization of IMRT.
Zhang, Xiaodong; Liu, Helen; Wang, Xiaochun; Dong, Lei; Wu, Qiuwen; Mohan, Radhe
2004-05-01
Gradient algorithms are the most commonly employed search methods in the routine optimization of IMRT plans. It is well known that local minima can exist for dose-volume-based and biology-based objective functions. The purpose of this paper is to compare the relative speed of different gradient algorithms, to investigate the strategies for accelerating the optimization process, to assess the validity of these strategies, and to study the convergence properties of these algorithms for dose-volume and biological objective functions. With these aims in mind, we implemented Newton's, conjugate gradient (CG), and steepest descent (SD) algorithms for dose-volume- and EUD-based objective functions. Our implementation of Newton's algorithm approximates the second derivative matrix (Hessian) by its diagonal. The standard SD algorithm and the CG algorithm with "line minimization" were also implemented. In addition, we investigated the use of a variation of the CG algorithm, called the "scaled conjugate gradient" (SCG) algorithm. To accelerate the optimization process, we investigated the validity of the use of a "hybrid optimization" strategy, in which approximations to calculated dose distributions are used during most of the iterations. Published studies have indicated that getting trapped in local minima is not a significant problem. To investigate this issue further, we first obtained, by trial and error, and starting with uniform intensity distributions, the parameters of the dose-volume- or EUD-based objective functions which produced IMRT plans that satisfied the clinical requirements. Using the resulting optimized intensity distributions as the initial guess, we investigated the possibility of getting trapped in a local minimum. For most of the results presented, we used a lung cancer case. To illustrate the generality of our methods, the results for a prostate case are also presented.
For both dose-volume- and EUD-based objective functions, Newton's method far outperforms other algorithms in terms of speed. The SCG algorithm, which avoids expensive "line minimization," can speed up the standard CG algorithm by at least a factor of 2. For the same initial conditions, all algorithms converge essentially to the same plan. However, we demonstrate that for any of the algorithms studied, starting with previously optimized intensity distributions as the initial guess but for different objective function parameters, the solution frequently gets trapped in local minima. We found that the initial intensity distribution obtained from IMRT optimization utilizing objective function parameters, which favor a specific anatomic structure, would lead to a local minimum corresponding to that structure. Our results indicate that from among the gradient algorithms tested, Newton's method appears to be the fastest by far. Different gradient algorithms have the same convergence properties for dose-volume- and EUD-based objective functions. The hybrid dose calculation strategy is valid and can significantly accelerate the optimization process. The degree of acceleration achieved depends on the type of optimization problem being addressed (e.g., IMRT optimization, intensity modulated beam configuration optimization, or objective function parameter optimization). Under special conditions, gradient algorithms will get trapped in local minima, and reoptimization, starting with the results of previous optimization, will lead to solutions that are generally not significantly different from the local minimum.
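The speed gap the study reports between Newton-type and steepest-descent updates is easy to see on an ill-scaled quadratic. The diagonal-Hessian step below mirrors the paper's approximation of the Hessian by its diagonal, though the objective is a toy stand-in for an IMRT cost function and the curvatures are hypothetical:

```python
# f(x) = 0.5 * (a1*x1^2 + a2*x2^2): a deliberately ill-scaled quadratic.
A = (1.0, 100.0)                    # hypothetical curvatures, badly scaled

def grad(x):
    return [A[k] * x[k] for k in range(2)]

def steepest_descent(x, lr=0.009, iters=200):
    """One global stepsize; it must stay below 2/max(A) for stability,
    so the flat direction (curvature 1) converges very slowly."""
    for _ in range(iters):
        g = grad(x)
        x = [x[k] - lr * g[k] for k in range(2)]
    return x

def diagonal_newton(x, iters=200):
    """Scale each coordinate by its diagonal curvature; for a diagonal
    quadratic this solves the problem in a single step."""
    for _ in range(iters):
        g = grad(x)
        x = [x[k] - g[k] / A[k] for k in range(2)]
    return x
```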
A Library of Optimization Algorithms for Organizational Design
2005-01-01
Levchuk, Georgiy M.; Levchuk, Yuri N.; Luo, Jie; ...
This paper presents a library of algorithms to solve a broad range of optimization problems arising in the ... normative design of organizations to execute a specific mission. The use of specific optimization algorithms for different phases of the design process ... (Grants N00014-98-1-0465 and N00014-00-1-0101.)
Genetic Algorithms Applied to Multi-Objective Aerodynamic Shape Optimization
NASA Technical Reports Server (NTRS)
Holst, Terry L.
2004-01-01
A genetic algorithm approach suitable for solving multi-objective optimization problems is described and evaluated using a series of aerodynamic shape optimization problems. Several new features including two variations of a binning selection algorithm and a gene-space transformation procedure are included. The genetic algorithm is suitable for finding Pareto optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. A new masking array capability is included allowing any gene or gene subset to be eliminated as decision variables from the design space. This allows determination of the effect of a single gene or gene subset on the Pareto optimal solution. Results indicate that the genetic algorithm optimization approach is flexible in application and reliable. The binning selection algorithms generally provide Pareto front quality enhancements and moderate convergence efficiency improvements for most of the problems solved.
First-order convex feasibility algorithms for x-ray CT
Sidky, Emil Y.; Jørgensen, Jakob S.; Pan, Xiaochuan
2013-01-01
Purpose: Iterative image reconstruction (IIR) algorithms in computed tomography (CT) are based on algorithms for solving a particular optimization problem. Design of the IIR algorithm, therefore, is aided by knowledge of the solution to the optimization problem on which it is based. Oftentimes, however, it is impractical to achieve an accurate solution to the optimization problem of interest, which complicates the design of IIR algorithms. This issue is particularly acute for CT with a limited angular-range scan, which leads to poorly conditioned system matrices and difficult-to-solve optimization problems. In this paper, we develop IIR algorithms which solve a certain type of optimization problem called convex feasibility. The convex feasibility approach can provide alternatives to unconstrained optimization approaches and at the same time allow for rapidly convergent algorithms for their solution, thereby facilitating the IIR algorithm design process. Methods: An accelerated version of the Chambolle-Pock (CP) algorithm is adapted to various convex feasibility problems of potential interest to IIR in CT. One of the proposed problems is seen to be equivalent to least-squares minimization, and two other problems provide alternatives to penalized least-squares minimization. Results: The accelerated CP algorithms are demonstrated on a simulation of circular fan-beam CT with a limited scanning arc of 144°. The CP algorithms are seen in the empirical results to converge to the solutions of their respective convex feasibility problems. Conclusions: Formulation of convex feasibility problems can provide a useful alternative to unconstrained optimization when designing IIR algorithms for CT. The approach is amenable to recent methods for accelerating first-order algorithms, which may be particularly useful for CT with limited angular-range scanning. The present paper demonstrates the methodology, and future work will illustrate its utility in actual CT applications. PMID:23464295
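As a toy illustration of the convex feasibility formulation (finding any point in the intersection of convex sets), the classic alternating-projections scheme can be sketched on two intervals. This is only the underlying idea, not the accelerated Chambolle-Pock algorithm the paper develops:

```python
def project_interval(x, lo, hi):
    """Euclidean projection of a scalar onto the interval [lo, hi]."""
    return max(lo, min(hi, x))

def pocs(x0, projectors, iters=100):
    """Alternating projections: cyclically project onto each convex set."""
    x = x0
    for _ in range(iters):
        for proj in projectors:
            x = proj(x)
    return x

# Find a point in the intersection [0, 4] and [3, 10], i.e. in [3, 4]
x = pocs(8.0, [lambda t: project_interval(t, 0.0, 4.0),
               lambda t: project_interval(t, 3.0, 10.0)])
```

When the intersection is nonempty, any fixed point of the cycle lies in every set, which is exactly the convex-feasibility notion of a "solution".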
A hybrid Jaya algorithm for reliability-redundancy allocation problems
NASA Astrophysics Data System (ADS)
Ghavidel, Sahand; Azizivahed, Ali; Li, Li
2018-04-01
This article proposes an efficient improved hybrid Jaya algorithm based on time-varying acceleration coefficients (TVACs) and the learning phase introduced in teaching-learning-based optimization (TLBO), named the LJaya-TVAC algorithm, for solving various types of nonlinear mixed-integer reliability-redundancy allocation problems (RRAPs) and standard real-parameter test functions. The RRAPs include series, series-parallel, complex (bridge) and overspeed protection systems. The search power of the proposed LJaya-TVAC algorithm for finding the optimal solutions is first tested on standard real-parameter unimodal and multi-modal functions with dimensions of 30-100, and then on various types of nonlinear mixed-integer RRAPs. The results are compared with the original Jaya algorithm and the best results reported in the recent literature. The optimal results obtained with the proposed LJaya-TVAC algorithm demonstrate better optimization performance relative to the original Jaya algorithm and the other reported optimal results.
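For reference, the plain (unhybridized) Jaya update moves each candidate toward the current best and away from the current worst solution, with greedy acceptance. A minimal sketch on a sphere function; the TVAC and learning-phase enhancements of LJaya-TVAC are not reproduced here:

```python
import random

def jaya(f, bounds, pop_size=20, iters=200, seed=1):
    """Plain Jaya: move toward the best solution and away from the worst,
    with greedy acceptance (minimization)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(iters):
        scores = [f(x) for x in pop]
        best = pop[scores.index(min(scores))]
        worst = pop[scores.index(max(scores))]
        for i, x in enumerate(pop):
            cand = [min(max(x[j]
                            + rng.random() * (best[j] - abs(x[j]))
                            - rng.random() * (worst[j] - abs(x[j])), lo), hi)
                    for j, (lo, hi) in enumerate(bounds)]
            if f(cand) <= f(x):   # keep the candidate only if it is no worse
                pop[i] = cand
    return min(pop, key=f)

sol = jaya(lambda x: sum(v * v for v in x), [(-5.0, 5.0)] * 2)
```

A notable selling point of Jaya is that, unlike PSO or DE, it has no algorithm-specific control parameters beyond population size and iteration count.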
NASA Astrophysics Data System (ADS)
Telban, Robert J.
While the performance of flight simulator motion system hardware has advanced substantially, the development of the motion cueing algorithm, the software that transforms simulated aircraft dynamics into realizable motion commands, has not kept pace. To address this, new human-centered motion cueing algorithms were developed. A revised "optimal algorithm" uses time-invariant filters developed by optimal control, incorporating human vestibular system models. The "nonlinear algorithm" is a novel approach, also formulated by optimal control, that can additionally be updated in real time. It incorporates a new integrated visual-vestibular perception model that includes both visual and vestibular sensation and the interaction between the stimuli. Its time-varying control law requires the matrix Riccati equation to be solved in real time by a neurocomputing approach. Preliminary pilot testing resulted in the optimal algorithm incorporating a new otolith model, producing improved motion cues. The nonlinear algorithm's vertical mode produced a motion cue with a time-varying washout, sustaining small cues for longer durations and washing out large cues more quickly than the optimal algorithm. The inclusion of the integrated perception model improved the responses to longitudinal and lateral cues, and the false cues observed with the NASA adaptive algorithm were absent. Because the resulting sensation was judged unsatisfactory, an augmented turbulence cue was added to the vertical mode of both the optimal and nonlinear algorithms. The relative effectiveness of the algorithms in simulating aircraft maneuvers was assessed with an eleven-subject piloted performance test conducted on the NASA Langley Visual Motion Simulator (VMS). Two methods, the quasi-objective NASA Task Load Index (TLX) and power spectral density analysis of pilot control, were used to assess pilot workload. TLX analysis reveals, in most cases, less workload and less variation among pilots with the nonlinear algorithm.
Control input analysis shows that pilot-induced oscillations on a straight-in approach are less prevalent with the nonlinear algorithm than with the optimal algorithm. The augmented turbulence cues increased workload on an offset approach that the pilots deemed more realistic compared to the NASA adaptive algorithm. The takeoff with engine failure showed the least roll activity for the nonlinear algorithm, and the least rudder pedal activity for the optimal algorithm.
Using Animal Instincts to Design Efficient Biomedical Studies via Particle Swarm Optimization.
Qiu, Jiaheng; Chen, Ray-Bing; Wang, Weichung; Wong, Weng Kee
2014-10-01
Particle swarm optimization (PSO) is an increasingly popular metaheuristic algorithm for solving complex optimization problems. Its popularity is due to its repeated successes in finding an optimum, or a near-optimal solution, for problems in many applied disciplines. The algorithm makes no assumptions about the function to be optimized, and for biomedical experiments like those presented here, PSO typically finds the optimal solution within a few seconds of CPU time on a garden-variety laptop. We apply PSO to find various types of optimal designs for several problems in the biological sciences and compare PSO's performance with that of the differential evolution algorithm, another popular metaheuristic algorithm in the engineering literature.
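The canonical global-best PSO update behind such searches combines inertia with attraction toward personal and global bests. A minimal sketch on a generic sphere objective, not the optimal-design criteria used in the paper:

```python
import random

def pso(f, bounds, n=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Canonical global-best PSO for minimization (a minimal sketch)."""
    rng = random.Random(seed)
    dim = len(bounds)
    X = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P = [x[:] for x in X]                    # personal best positions
    g = min(P, key=f)[:]                     # global best position
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (g[d] - X[i][d]))
                X[i][d] += V[i][d]
            if f(X[i]) < f(P[i]):
                P[i] = X[i][:]
                if f(P[i]) < f(g):
                    g = P[i][:]
    return g

best = pso(lambda x: sum(v * v for v in x), [(-10.0, 10.0)] * 2)
```

With convergent parameter settings (here w = 0.7, c1 = c2 = 1.5), a run of this size on a smooth low-dimensional objective indeed finishes in well under a second on an ordinary laptop.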
A global optimization algorithm inspired in the behavior of selfish herds.
Fausto, Fernando; Cuevas, Erik; Valdivia, Arturo; González, Adrián
2017-10-01
In this paper, a novel swarm optimization algorithm called the Selfish Herd Optimizer (SHO) is proposed for solving global optimization problems. SHO is based on the simulation of the widely observed selfish herd behavior manifested by individuals within a herd of animals subjected to some form of predation risk. In SHO, individuals emulate the predatory interactions between groups of prey and predators through two types of search agents: the members of a selfish herd (the prey) and a pack of hungry predators. Depending on its classification as either prey or predator, each individual is guided by a set of unique evolutionary operators inspired by this prey-predator relationship. These unique traits allow SHO to improve the balance between exploration and exploitation without altering the population size. To illustrate the proficiency and robustness of the proposed method, it is compared with other well-known evolutionary optimization approaches, namely Particle Swarm Optimization (PSO), Artificial Bee Colony (ABC), the Firefly Algorithm (FA), Differential Evolution (DE), Genetic Algorithms (GA), the Crow Search Algorithm (CSA), the Dragonfly Algorithm (DA), the Moth-flame Optimization Algorithm (MOA) and the Sine Cosine Algorithm (SCA). The comparison examines several standard benchmark functions commonly considered in the literature on evolutionary algorithms. The experimental results show the remarkable performance of the proposed approach against the other compared methods, and as such SHO proves to be an excellent alternative for solving global optimization problems.
Optimal Budget Allocation for Sample Average Approximation
2011-06-01
...an optimization algorithm applied to the sample average problem. We examine the convergence rate of the estimator as the computing budget tends to...regime for the optimization algorithm. Sample average approximation (SAA) is a frequently used approach to solving stochastic programs...appealing due to its simplicity and the fact that a large number of standard optimization algorithms are often available to optimize the resulting sample...
Techniques for shuttle trajectory optimization
NASA Technical Reports Server (NTRS)
Edge, E. R.; Shieh, C. J.; Powers, W. F.
1973-01-01
The application of recently developed function-space Davidon-type techniques to the shuttle ascent trajectory optimization problem is discussed along with an investigation of the recently developed PRAXIS algorithm for parameter optimization. At the outset of this analysis, the major deficiency of the function-space algorithms was their potential storage problems. Since most previous analyses of the methods were with relatively low-dimension problems, no storage problems were encountered. However, in shuttle trajectory optimization, storage is a problem, and this problem was handled efficiently. Topics discussed include: the shuttle ascent model and the development of the particular optimization equations; the function-space algorithms; the operation of the algorithm and typical simulations; variable final-time problem considerations; and a modification of Powell's algorithm.
NASA Astrophysics Data System (ADS)
Knypiński, Łukasz
2017-12-01
In this paper, an algorithm for the optimization of the excitation system of line-start permanent magnet synchronous motors is presented. As the basis of this algorithm, software was developed in the Borland Delphi environment. The software consists of two independent modules: an optimization solver and a module including the mathematical model of a synchronous motor with self-start ability. The optimization module contains the bat algorithm procedure. The mathematical model of the motor was developed in the Ansys Maxwell environment. In order to determine the functional parameters of the motor, additional scripts in the Visual Basic language were developed. Selected results of the optimization calculation are presented and compared with results from the particle swarm optimization algorithm.
NASA Astrophysics Data System (ADS)
Rahnamay Naeini, M.; Sadegh, M.; AghaKouchak, A.; Hsu, K. L.; Sorooshian, S.; Yang, T.
2017-12-01
Meta-heuristic optimization algorithms have gained a great deal of attention in a wide variety of fields. The simplicity and flexibility of these algorithms, along with their robustness, make them attractive tools for solving optimization problems. Different optimization methods, however, hold algorithm-specific strengths and limitations. The performance of each individual algorithm obeys the "No-Free-Lunch" theorem, which means that no single algorithm can consistently outperform all others across all possible optimization problems. From a user's perspective, it is a tedious process to compare, validate, and select the best-performing algorithm for a specific problem or set of test cases. In this study, we introduce a new hybrid optimization framework, entitled Shuffled Complex-Self Adaptive Hybrid EvoLution (SC-SAHEL), which combines the strengths of different evolutionary algorithms (EAs) in a parallel computing scheme and allows users to select the algorithm most suitable to the problem at hand. The concept of SC-SAHEL is to execute different EAs as separate parallel search cores and let all participating EAs compete during the course of the search. The newly developed SC-SAHEL algorithm is designed to automatically select the best-performing algorithm for the given optimization problem. It is effective in finding the global optimum of several strenuous benchmark test functions and computationally efficient compared to individual EAs. We benchmark the proposed SC-SAHEL algorithm on 29 conceptual test functions and two real-world case studies, one hydropower reservoir model and one hydrological model (SAC-SMA). Results show that the proposed framework outperforms individual EAs in an absolute majority of the test problems, and can provide results competitive with the fittest EA, along with more comprehensive information during the search.
The proposed framework is also flexible for merging additional EAs, boundary-handling techniques, and sampling schemes, and has good potential to be used in Water-Energy system optimal operation and management.
Load Frequency Control of AC Microgrid Interconnected Thermal Power System
NASA Astrophysics Data System (ADS)
Lal, Deepak Kumar; Barisal, Ajit Kumar
2017-08-01
In this paper, a microgrid (MG) power generation system is interconnected with a single-area reheat thermal power system for a load frequency control study. A new meta-heuristic optimization algorithm, the Moth-Flame Optimization (MFO) algorithm, is applied to evaluate optimal gains of fuzzy-based proportional-integral-derivative (PID) controllers. The system's dynamic performance is studied by comparing the results with MFO-optimized classical PI/PID controllers. The system's performance is also investigated with a fuzzy PID controller optimized by the recently developed grey wolf optimizer (GWO) algorithm, which has proven its superiority over other previously developed algorithms in many interconnected power systems.
Direct adaptive performance optimization of subsonic transports: A periodic perturbation technique
NASA Technical Reports Server (NTRS)
Espana, Martin D.; Gilyard, Glenn
1995-01-01
Aircraft performance can be optimized at the flight condition by using available redundancy among actuators. Effective use of this potential allows improved performance beyond limits imposed by design compromises. Optimization based on nominal models does not result in the best performance of the actual aircraft at the actual flight condition. An adaptive algorithm for optimizing performance parameters, such as speed or fuel flow, in flight based exclusively on flight data is proposed. The algorithm is inherently insensitive to model inaccuracies and measurement noise and biases and can optimize several decision variables at the same time. An adaptive constraint controller integrated into the algorithm regulates the optimization constraints, such as altitude or speed, without requiring any prior knowledge of the autopilot design. The algorithm has a modular structure which allows easy incorporation (or removal) of optimization constraints or decision variables into the optimization problem. An important part of the contribution is the development of analytical tools enabling convergence analysis of the algorithm and the establishment of simple design rules. The fuel-flow minimization and velocity maximization modes of the algorithm are demonstrated on the NASA Dryden B-720 nonlinear flight simulator for the single- and multi-effector optimization cases.
Jiang, Ailian; Zheng, Lihong
2018-03-29
Low cost, high reliability and easy maintenance are key criteria in the design of routing protocols for wireless sensor networks (WSNs). This paper investigates the existing ant colony optimization (ACO)-based WSN routing algorithms and the minimum hop count WSN routing algorithms by reviewing their strengths and weaknesses. We also consider the critical factors of WSNs, such as the energy constraint of sensor nodes, network load balancing and dynamic network topology. We then propose a hybrid routing algorithm that integrates ACO and a minimum hop count scheme. The proposed algorithm is able to find the optimal routing path with minimal total energy consumption and balanced energy consumption on each node. The algorithm is uniquely advantageous in searching for the optimal path, balancing the network load, and maintaining the network topology. The WSN model and the proposed algorithm have been implemented using C++. Extensive simulation results show that our algorithm outperforms several other WSN routing algorithms in aspects including the rate of convergence, the success rate in searching for the global optimal solution, and the network lifetime.
Ping, Bo; Su, Fenzhen; Meng, Yunshan
2016-01-01
In this study, an improved Data INterpolating Empirical Orthogonal Functions (DINEOF) algorithm for the determination of missing values in a spatio-temporal dataset is presented. Compared with the ordinary DINEOF algorithm, the improved algorithm does not require the iterative reconstruction procedure, run to convergence for every candidate number of EOF modes, that is used to determine the optimal EOF mode; the convergence criterion is reached only once. Moreover, in the ordinary DINEOF algorithm, after the optimal EOF mode is determined, the initial matrix with missing data is iteratively reconstructed based on that optimal EOF mode until the reconstruction converges. However, the optimal EOF mode may not be the best EOF for some of the reconstructed matrices generated in the intermediate steps. Hence, instead of using a single EOF to fill in the missing data, in the improved algorithm the optimal EOFs for reconstruction are variable (because the optimal EOFs are variable, the improved algorithm is called the VE-DINEOF algorithm in this study). To validate the accuracy of the VE-DINEOF algorithm, a sea surface temperature (SST) dataset is reconstructed using the DINEOF, I-DINEOF (proposed in 2015) and VE-DINEOF algorithms. Four parameters (Pearson correlation coefficient, signal-to-noise ratio, root-mean-square error, and mean absolute difference) are used as measures of reconstruction accuracy. Compared with the DINEOF and I-DINEOF algorithms, the VE-DINEOF algorithm can significantly enhance the accuracy of reconstruction and shorten the computational time.
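The core DINEOF idea, iteratively reconstructing missing entries from the leading EOF (singular) modes, can be sketched with a single mode and a pure-Python power iteration. The real algorithm also cross-validates the number of retained modes and, in the VE variant, lets that choice vary between iterations; none of that is reproduced here:

```python
def rank1_fill(data, mask, outer=60, power=30):
    """Fill entries where mask is True by iterating a one-mode (leading-EOF)
    reconstruction: the core idea behind DINEOF-style gap filling."""
    m, n = len(data), len(data[0])
    X = [row[:] for row in data]
    obs = [X[i][j] for i in range(m) for j in range(n) if not mask[i][j]]
    start = sum(obs) / len(obs)              # initial guess: observed mean
    for i in range(m):
        for j in range(n):
            if mask[i][j]:
                X[i][j] = start
    for _ in range(outer):
        # power iteration for the leading singular triple (s, u, v) of X
        v = [1.0] * n
        for _ in range(power):
            u = [sum(X[i][j] * v[j] for j in range(n)) for i in range(m)]
            norm_u = sum(x * x for x in u) ** 0.5
            u = [x / norm_u for x in u]
            v = [sum(X[i][j] * u[i] for i in range(m)) for j in range(n)]
            s = sum(x * x for x in v) ** 0.5
            v = [x / s for x in v]
        for i in range(m):                   # update only the gaps
            for j in range(n):
                if mask[i][j]:
                    X[i][j] = s * u[i] * v[j]
    return X

# A rank-1 "field" with one entry treated as missing
truth = [[a * b for b in (4.0, 5.0, 6.0)] for a in (1.0, 2.0, 3.0)]
mask = [[False] * 3 for _ in range(3)]
mask[1][2] = True
filled = rank1_fill(truth, mask)
```

Because the synthetic field is exactly rank 1, the iteration recovers the held-out entry (2 * 6 = 12) to high precision.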
Pei, Yan
2015-01-01
We present and discuss the philosophy and methodology of chaotic evolution, which is theoretically supported by chaos theory. We introduce four chaotic systems, namely the logistic map, tent map, Gaussian map, and Hénon map, into a well-designed chaotic evolution algorithm framework to implement several chaotic evolution (CE) algorithms. By comparing our previously proposed CE algorithm with the logistic map against two canonical differential evolution (DE) algorithms, we analyse and discuss the optimization performance of the CE algorithm. An investigation of the relationship between the optimization capability of the CE algorithm and the distribution characteristics of the underlying chaotic system is conducted and analysed. From the evaluation results, we find that the distribution of the chaotic system is an essential factor influencing the optimization performance of the CE algorithm. We also propose a new interactive evolutionary computation (IEC) algorithm, interactive chaotic evolution (ICE), which replaces the fitness function with a real human in the CE algorithm framework; a paired-comparison-based mechanism underlies the CE search scheme in nature. A simulation experiment with a pseudo-IEC user is conducted to evaluate the proposed ICE algorithm. The results indicate that the ICE algorithm can obtain significantly better performance than, or the same performance as, interactive DE. Some open topics on CE, ICE, the fusion of these optimization techniques, algorithmic notation, and others are presented and discussed. PMID:25879067
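Of the four maps mentioned, the logistic map is the simplest to state; at r = 4 it behaves chaotically on (0, 1), and CE-style algorithms use such sequences in place of uniform random draws. An illustrative sketch:

```python
def logistic_map(x0=0.7, r=4.0, n=5):
    """Iterate the logistic map x_{t+1} = r * x_t * (1 - x_t)."""
    seq, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        seq.append(x)
    return seq

seq = logistic_map()   # first value: 4 * 0.7 * 0.3 = 0.84
```

The resulting sequence is deterministic yet non-repeating and densely fills (0, 1), which is the distribution characteristic the abstract identifies as decisive for CE performance.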
Jin, Junchen
2016-01-01
The shunting schedule of electric multiple units depot (SSED) is one of the essential plans for high-speed train maintenance activities. This paper presents a 0-1 programming model to address the problem of determining an optimal SSED through automatic computing. The objective of the model is to minimize the number of shunting movements and the constraints include track occupation conflicts, shunting routes conflicts, time durations of maintenance processes, and shunting running time. An enhanced particle swarm optimization (EPSO) algorithm is proposed to solve the optimization problem. Finally, an empirical study from Shanghai South EMU Depot is carried out to illustrate the model and EPSO algorithm. The optimization results indicate that the proposed method is valid for the SSED problem and that the EPSO algorithm outperforms the traditional PSO algorithm on the aspect of optimality. PMID:27436998
Investigations of quantum heuristics for optimization
NASA Astrophysics Data System (ADS)
Rieffel, Eleanor; Hadfield, Stuart; Jiang, Zhang; Mandra, Salvatore; Venturelli, Davide; Wang, Zhihui
We explore the design of quantum heuristics for optimization, focusing on the quantum approximate optimization algorithm, a metaheuristic developed by Farhi, Goldstone, and Gutmann. We develop specific instantiations of the quantum approximate optimization algorithm for a variety of challenging combinatorial optimization problems. Through theoretical analyses and numeric investigations of select problems, we provide insight into parameter setting and Hamiltonian design for quantum approximate optimization algorithms and related quantum heuristics, and into their implementation on hardware realizable in the near term.
Optimization of multi-objective micro-grid based on improved particle swarm optimization algorithm
NASA Astrophysics Data System (ADS)
Zhang, Jian; Gan, Yang
2018-04-01
The paper presents a multi-objective optimal configuration model for an independent micro-grid with the aims of economy and environmental protection. The Pareto solution set can be obtained by solving the multi-objective optimization configuration model of the micro-grid with the improved particle swarm algorithm. The feasibility of the improved particle swarm optimization algorithm for the multi-objective model is verified, which provides an important reference for multi-objective optimization of independent micro-grids.
[George Herbert Mead. Thought as the conversation of interior gestures].
Quéré, Louis
2010-01-01
For George Herbert Mead, thinking amounts to holding an "inner conversation of gestures". Such a conception does not seem especially original at first glance. What makes it truly original is the "social-behavioral" approach of which it is a part, and particularly two ideas. The first is that the conversation in question is a conversation of gestures or attitudes; the second, that thought and reflexive intelligence arise from the internalization of an external process supported by the social mechanism of communication: that of conduct organization. It is thus important to understand what distinguishes these ideas from those of the founder of behavioral psychology, John B. Watson, for whom thinking amounts to nothing other than subvocal speech.
Study program for encapsulation materials interface for low cost silicon solar array
NASA Technical Reports Server (NTRS)
Kaelble, D. H.; Mansfeld, F. B.; Lunsden, J. B., III; Leung, C.
1980-01-01
An atmospheric corrosion model was developed and verified by five months of corrosion rate and climatology data acquired at the Mead, Nebraska LSA test site. Atmospheric corrosion rate monitors (ACM) show that moisture condensation probability and ionic conduction at the corroding surface or interface are controlling factors in corrosion rate. Protection of the corroding surface by encapsulant was shown by the ACM recordings to be maintained, independent of climatology, over the five months outdoor exposure period. The macroscopic corrosion processes which occur at Mead are shown to be reproduced in the climatology simulator. Controlled experiments with identical moisture and temperature aging cycles show that UV radiation causes corrosion while UV shielding inhibits LSA corrosion.
Multiobjective Optimization Using a Pareto Differential Evolution Approach
NASA Technical Reports Server (NTRS)
Madavan, Nateri K.; Biegel, Bryan A. (Technical Monitor)
2002-01-01
Differential Evolution is a simple, fast, and robust evolutionary algorithm that has proven effective in determining the global optimum for several difficult single-objective optimization problems. In this paper, the Differential Evolution algorithm is extended to multiobjective optimization problems by using a Pareto-based approach. The algorithm performs well when applied to several test optimization problems from the literature.
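The single-objective DE/rand/1/bin scheme this work extends can be sketched compactly; the Pareto-based multiobjective extension replaces the greedy scalar comparison below with dominance-based selection, which is not shown here:

```python
import random

def de(f, bounds, pop_size=20, iters=150, F=0.8, CR=0.9, seed=3):
    """Classic single-objective DE/rand/1/bin (minimization), a minimal sketch."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            jrand = rng.randrange(dim)       # ensure at least one mutated gene
            trial = [a[d] + F * (b[d] - c[d])
                     if (rng.random() < CR or d == jrand) else pop[i][d]
                     for d in range(dim)]
            trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]
            if f(trial) <= f(pop[i]):        # greedy one-to-one selection
                pop[i] = trial
    return min(pop, key=f)

best = de(lambda x: sum(v * v for v in x), [(-5.0, 5.0)] * 2)
```

The differential mutation step a + F * (b - c) is what gives DE its self-scaling behavior: step sizes shrink automatically as the population converges.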
Enhanced Particle Swarm Optimization Algorithm: Efficient Training of ReaxFF Reactive Force Fields.
Furman, David; Carmeli, Benny; Zeiri, Yehuda; Kosloff, Ronnie
2018-06-12
Particle swarm optimization (PSO) is a powerful metaheuristic population-based global optimization algorithm. However, when it is applied to nonseparable objective functions, its performance on multimodal landscapes is significantly degraded. Here we show that a significant improvement in the search quality and efficiency on multimodal functions can be achieved by enhancing the basic rotation-invariant PSO algorithm with isotropic Gaussian mutation operators. The new algorithm demonstrates superior performance across several nonlinear, multimodal benchmark functions compared with the rotation-invariant PSO algorithm and the well-established simulated annealing and sequential one-parameter parabolic interpolation methods. A search for the optimal set of parameters for the dispersion interaction model in the ReaxFF-lg reactive force field was carried out with respect to accurate DFT-TS calculations. The resulting optimized force field accurately describes the equations of state of several high-energy molecular crystals where such interactions are of crucial importance. The improved algorithm also presents better performance compared to a genetic algorithm optimization method in the optimization of the parameters of a ReaxFF-lg correction model. The computational framework is implemented in a stand-alone C++ code that allows the straightforward development of ReaxFF reactive force fields.
Optimal Doppler centroid estimation for SAR data from a quasi-homogeneous source
NASA Technical Reports Server (NTRS)
Jin, M. Y.
1986-01-01
This correspondence briefly describes two Doppler centroid estimation (DCE) algorithms, provides a performance summary for these algorithms, and presents the experimental results. These algorithms include that of Li et al. (1985) and a newly developed one that is optimized for quasi-homogeneous sources. The performance enhancement achieved by the optimal DCE algorithm is clearly demonstrated by the experimental results.
Hybrid Nested Partitions and Math Programming Framework for Large-scale Combinatorial Optimization
2010-03-31
optimization problems: 1) exact algorithms and 2) metaheuristic algorithms. This project will integrate concepts from these two technologies to develop...optimal solutions within an acceptable amount of computation time, and 2) metaheuristic algorithms such as genetic algorithms, tabu search, and the...integer programming decomposition approaches, such as Dantzig-Wolfe decomposition and Lagrangian relaxation, and metaheuristics such as the Nested...
Hull Form Design and Optimization Tool Development
2012-07-01
global minimum. The algorithm accomplishes this by using a method known as metaheuristics, which allows the algorithm to examine a large area by...further development of these tools, including the implementation and testing of a new optimization algorithm, the improvement of a rapid hull form...under the 2012 Naval Research Enterprise Intern Program.
Impact of Chaos Functions on Modern Swarm Optimizers.
Emary, E; Zawbaa, Hossam M
2016-01-01
Exploration and exploitation are two essential components of any optimization algorithm. Too much exploration leads to oscillation and slow convergence, while too much exploitation causes premature convergence and may leave the optimizer stuck in local minima. Therefore, balancing the rates of exploration and exploitation over the optimization lifetime is a challenge. This study evaluates the impact of using chaos-based control of exploration/exploitation rates against the systematic native control. Three modern algorithms were used in the study, namely the grey wolf optimizer (GWO), antlion optimizer (ALO) and moth-flame optimizer (MFO), in the domain of machine learning for feature selection. Results on a set of standard machine learning data, using a set of assessment indicators, show an advance in optimization performance when using chaotically varying, repeated periods of declining exploration rates over systematically decreased exploration rates.
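The contrast under study can be made concrete: GWO-style optimizers systematically decay an exploration coefficient (commonly from 2 to 0), whereas chaos-based control modulates that envelope with a chaotic map. A hedged sketch; the exact schedules used in the paper may differ:

```python
def linear_schedule(t, T, a0=2.0):
    """Systematic native control: exploration coefficient decays linearly."""
    return a0 * (1.0 - t / T)

def chaotic_schedule(t, T, a0=2.0, x0=0.7):
    """Chaos-based control: modulate the decaying envelope with a logistic map."""
    x = x0
    for _ in range(t + 1):
        x = 4.0 * x * (1.0 - x)              # chaotic logistic-map value in (0, 1)
    return a0 * (1.0 - t / T) * x
```

The chaotic variant repeatedly revisits both high and low exploration rates inside the declining envelope, which is the "variational repeated periods" behavior the study credits with the performance gain.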
PS-FW: A Hybrid Algorithm Based on Particle Swarm and Fireworks for Global Optimization
Chen, Shuangqing; Wei, Lixin; Guan, Bing
2018-01-01
Particle swarm optimization (PSO) and the fireworks algorithm (FWA) are two recently developed optimization methods which have been applied in various areas due to their simplicity and efficiency. However, when applied to high-dimensional optimization problems, the PSO algorithm may be trapped in local optima owing to its lack of powerful global exploration capability, and the fireworks algorithm may fail to converge in some cases because of the relatively low local exploitation efficiency of its noncore fireworks. In this paper, a hybrid algorithm called PS-FW is presented, in which the modified operators of FWA are embedded into the solving process of PSO. In the iteration process, an abandonment and supplement mechanism is adopted to balance the exploration and exploitation ability of PS-FW, and a modified explosion operator and a novel mutation operator are proposed to speed up global convergence and to avoid prematurity. To verify the performance of the proposed PS-FW algorithm, 22 high-dimensional benchmark functions were employed, and it was compared with the PSO, FWA, stdPSO, CPSO, CLPSO, FIPS, Frankenstein, and ALWPSO algorithms. Results show that the PS-FW algorithm is an efficient, robust, and fast-converging optimization method for solving global optimization problems. PMID:29675036
An algorithmic framework for multiobjective optimization.
Ganesan, T; Elamvazuthi, I; Shaari, Ku Zilati Ku; Vasant, P
2013-01-01
Multiobjective (MO) optimization is an emerging field which is increasingly being encountered in many fields globally. Various metaheuristic techniques such as differential evolution (DE), genetic algorithm (GA), gravitational search algorithm (GSA), and particle swarm optimization (PSO) have been used in conjunction with scalarization techniques such as weighted sum approach and the normal-boundary intersection (NBI) method to solve MO problems. Nevertheless, many challenges still arise especially when dealing with problems with multiple objectives (especially in cases more than two). In addition, problems with extensive computational overhead emerge when dealing with hybrid algorithms. This paper discusses these issues by proposing an alternative framework that utilizes algorithmic concepts related to the problem structure for generating efficient and effective algorithms. This paper proposes a framework to generate new high-performance algorithms with minimal computational overhead for MO optimization.
NASA Astrophysics Data System (ADS)
Zhuang, Yufei; Huang, Haibin
2014-02-01
A hybrid algorithm combining the particle swarm optimization (PSO) algorithm with the Legendre pseudospectral method (LPM) is proposed for solving the time-optimal trajectory planning problem of underactuated spacecraft. In the initial phase of the search, an initialization generator is constructed with the PSO algorithm because of its strong global searching ability and robustness to random initial values; however, the PSO algorithm converges slowly near the global optimum. Therefore, when the change in the fitness function falls below a predefined value, the search is switched to the LPM to accelerate the process. With the solutions obtained by the PSO algorithm as a set of proper initial guesses, the hybrid algorithm can find a global optimum more quickly and accurately. Results of 200 Monte Carlo simulations demonstrate that the proposed hybrid PSO-LPM algorithm outperforms both the PSO algorithm and the LPM alone in terms of global searching capability and convergence rate. Moreover, the PSO-LPM algorithm is also robust to random initial values.
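The switching logic described above can be sketched as follows. The trajectory cost and the Legendre pseudospectral stage are problem-specific, so a simple sphere objective and a shrinking coordinate search stand in for them here; only the PSO-then-local handoff mirrors the abstract:

```python
import random

def sphere(x):  # stand-in objective; the paper's is a trajectory cost
    return sum(v * v for v in x)

def hybrid_minimize(f, dim=3, swarm=20, iters=200, eps=1e-6, seed=1):
    """Global PSO phase; once the improvement in the best fitness falls
    below eps, hand the incumbent to a local refinement stage (a simple
    shrinking coordinate search standing in for the LPM step)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    pf = [f(p) for p in pos]
    gf = min(pf)
    gbest = pbest[pf.index(gf)][:]
    prev = float("inf")
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            fi = f(pos[i])
            if fi < pf[i]:
                pf[i], pbest[i] = fi, pos[i][:]
                if fi < gf:
                    gf, gbest = fi, pos[i][:]
        if prev - gf < eps:      # stalled: switch to the local stage
            break
        prev = gf
    step = 0.1
    while step > 1e-9:           # local refinement of the incumbent
        improved = False
        for d in range(dim):
            for s in (step, -step):
                cand = gbest[:]
                cand[d] += s
                fc = f(cand)
                if fc < gf:
                    gf, gbest, improved = fc, cand, True
        if not improved:
            step *= 0.5
    return gbest, gf

best_x, best_f = hybrid_minimize(sphere)
```

The design point is the trigger: the global method supplies a basin-of-attraction estimate cheaply, and the local method is only started once the global search stops paying for itself.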
A hybrid artificial bee colony algorithm for numerical function optimization
NASA Astrophysics Data System (ADS)
Alqattan, Zakaria N.; Abdullah, Rosni
2015-02-01
The Artificial Bee Colony (ABC) algorithm, introduced by Karaboga in 2005, is one of the swarm intelligence algorithms. It is a meta-heuristic optimization search algorithm inspired by the intelligent foraging behavior of honey bees in nature. Its unique search process has made it competitive with other search algorithms in the area of optimization, such as the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). However, the ABC's local search process and its solution improvement (bee movement) equation still have some weaknesses. The ABC is good at avoiding entrapment at local optima, but it spends much of its time searching around unpromising, randomly selected solutions. Inspired by PSO, we propose a Hybrid Particle-movement ABC algorithm, called HPABC, which adapts the particle movement process to improve the exploration of the original ABC algorithm. Numerical benchmark functions were used to test the HPABC algorithm experimentally. The results illustrate that the HPABC algorithm outperforms the ABC algorithm in most of the experiments (75% better in accuracy and over 3 times faster).
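The contrast between the canonical ABC perturbation and a PSO-style movement can be sketched as below; the particle-movement equation shown is an assumed generic form for illustration, not necessarily the exact HPABC update:

```python
import random

def abc_move(x_i, x_k, j, rng):
    """Canonical ABC candidate (Karaboga's search equation): perturb
    dimension j of x_i toward or away from a random neighbour x_k."""
    phi = rng.uniform(-1.0, 1.0)
    v = x_i[:]
    v[j] = x_i[j] + phi * (x_i[j] - x_k[j])
    return v

def particle_move(x_i, p_best, g_best, rng, c1=1.5, c2=1.5):
    """Assumed PSO-style replacement (a sketch of the HPABC idea):
    pull the candidate toward both the bee's own best and the global
    best in every dimension, as a particle update would."""
    return [x_i[j]
            + c1 * rng.random() * (p_best[j] - x_i[j])
            + c2 * rng.random() * (g_best[j] - x_i[j])
            for j in range(len(x_i))]
```

The structural difference is visible directly: the ABC move changes one dimension relative to a random neighbour, while the particle move changes all dimensions and is biased toward known good solutions rather than random ones.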
Wang, Peng; Zhu, Zhouquan; Huang, Shuai
2013-01-01
This paper presents a novel biologically inspired metaheuristic algorithm called seven-spot ladybird optimization (SLO). The SLO is inspired by recent discoveries on the foraging behavior of a seven-spot ladybird. In this paper, the performance of the SLO is compared with that of the genetic algorithm, particle swarm optimization, and artificial bee colony algorithms by using five numerical benchmark functions with multimodality. The results show that SLO has the ability to find the best solution with a comparatively small population size and is suitable for solving optimization problems with lower dimensions.
Optimal cost design of water distribution networks using a decomposition approach
NASA Astrophysics Data System (ADS)
Lee, Ho Min; Yoo, Do Guen; Sadollah, Ali; Kim, Joong Hoon
2016-12-01
Water distribution network decomposition, which is an engineering approach, is adopted to increase the efficiency of obtaining the optimal cost design of a water distribution network using an optimization algorithm. This study applied the source tracing tool in EPANET, which is a hydraulic and water quality analysis model, to the decomposition of a network to improve the efficiency of the optimal design process. The proposed approach was tested by carrying out the optimal cost design of two water distribution networks, and the results were compared with other optimal cost designs derived from previously proposed optimization algorithms. The proposed decomposition approach using the source tracing technique enables the efficient decomposition of an actual large-scale network, and the results can be combined with the optimal cost design process using an optimization algorithm. This proves that the final design in this study is better than those obtained with other previously proposed optimization algorithms.
NASA Astrophysics Data System (ADS)
La Foy, Roderick; Vlachos, Pavlos
2011-11-01
An optimally designed MLOS tomographic reconstruction algorithm for use in 3D PIV and PTV applications is analyzed. Using a set of optimized reconstruction parameters, the reconstructions produced by the MLOS algorithm are shown to be comparable to reconstructions produced by the MART algorithm for a range of camera geometries, camera numbers, and particle seeding densities. The resultant velocity field error calculated using PIV and PTV algorithms is further minimized by applying both pre- and post-processing to the reconstructed data sets.
Application of Improved APO Algorithm in Vulnerability Assessment and Reconstruction of Microgrid
NASA Astrophysics Data System (ADS)
Xie, Jili; Ma, Hailing
2018-01-01
Artificial Physics Optimization (APO) has good global search ability and avoids the premature convergence seen in the PSO algorithm, offering stable, fast convergence and good robustness. On the basis of the vector-model APO, a reactive power optimization algorithm based on an improved APO algorithm is proposed for the static structure and dynamic operating characteristics of a microgrid. A simulation test carried out on the IEEE 30-bus system shows that the algorithm has better efficiency and accuracy than other optimization algorithms.
Focusing light through random photonic layers by four-element division algorithm
NASA Astrophysics Data System (ADS)
Fang, Longjie; Zhang, Xicheng; Zuo, Haoyi; Pang, Lin
2018-02-01
The propagation of waves in turbid media is a fundamental problem of optics with vast applications. Optical phase optimization approaches for focusing light through turbid media with phase control algorithms have been widely studied in recent years owing to the rapid development of the spatial light modulator. Existing approaches include element-based algorithms (the stepwise sequential algorithm and the continuous sequential algorithm) and whole-element optimization approaches (the partitioning algorithm, the transmission matrix approach, and the genetic algorithm). The advantage of element-based approaches is that the phase contribution of each element is very clear; however, because the intensity contribution of a single element to the focal point is small, especially when the number of elements is large, determining the optimal phase for a single element is difficult. In other words, the signal-to-noise ratio of the measurement is weak, and the optimization may settle at a local maximum. In whole-element optimization approaches, all elements are employed in the optimization, which improves the signal-to-noise ratio; however, because more randomness is introduced into the process, these optimizations take more time to converge than the element-based approaches. Building on the advantages of both families, we propose the four-element division algorithm (FEDA). Comparisons with the existing approaches show that FEDA takes only one third of the measurement time to reach the optimum, which means that FEDA is promising for practical applications such as deep tissue imaging.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bartlett, Roscoe
2010-03-31
GlobiPack contains a small collection of optimization globalization algorithms. These algorithms are used by optimization and various nonlinear equation solver algorithms, serving as the line-search procedure for Newton and quasi-Newton optimization and nonlinear equation solver methods. They are standard published 1-D line search algorithms such as those described in Nocedal and Wright, Numerical Optimization, 2nd edition, 2006. One set of algorithms was copied and refactored from the existing open-source Trilinos package MOOCHO, where the line search code is used to globalize SQP methods. This software is generic to any mathematical optimization problem where smooth derivatives exist; it has no connection to any specific application.
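One of the standard 1-D line searches of the kind this package collects (Nocedal and Wright, ch. 3) is backtracking under the Armijo sufficient-decrease condition; a minimal sketch:

```python
# Backtracking line search with the Armijo condition: shrink the step
# alpha until f(x + alpha p) <= f(x) + c * alpha * (grad f . p).

def armijo_backtracking(f, grad_f, x, p, alpha0=1.0, c=1e-4, rho=0.5):
    fx = f(x)
    slope = sum(g * d for g, d in zip(grad_f(x), p))  # directional derivative
    alpha = alpha0
    while f([xi + alpha * pi for xi, pi in zip(x, p)]) > fx + c * alpha * slope:
        alpha *= rho
    return alpha

# Usage: quadratic f(x) = x.x with the steepest-descent direction p = -grad f.
f = lambda x: sum(v * v for v in x)
g = lambda x: [2.0 * v for v in x]
x0 = [3.0, -4.0]
alpha = armijo_backtracking(f, g, x0, [-gi for gi in g(x0)])
```

For a descent direction (negative slope) the loop always terminates, since sufficiently small steps satisfy the condition; this is the globalization guarantee Newton-type solvers rely on.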
Crustal structure between Lake Mead, Nevada, and Mono Lake, California
Johnson, Lane R.
1964-01-01
Interpretation of a reversed seismic-refraction profile between Lake Mead, Nevada, and Mono Lake, California, indicates velocities of 6.15 km/sec for the upper layer of the crust, 7.10 km/sec for an intermediate layer, and 7.80 km/sec for the uppermost mantle. Phases interpreted to be reflections from the top of the intermediate layer and the Mohorovicic discontinuity were used with the refraction data to calculate depths. The depth to the Moho increases from about 30 km near Lake Mead to about 40 km near Mono Lake. Variations in arrival times provide evidence for fairly sharp flexures in the Moho. Offsets in the Moho of 4 km at one point and 2 1/2 km at another correspond to large faults at the surface, and it is suggested that fracture zones in the upper crust may displace the Moho and extend into the upper mantle. The phase P appears to be an extension of the reflection from the top of the intermediate layer beyond the critical angle. Bouguer gravity, computed for the seismic model of the crust, is in good agreement with the measured Bouguer gravity. Thus a model of the crustal structure is presented which is consistent with three semi-independent sources of geophysical data: seismic-refraction, seismic-reflection, and gravity.
Inhibitory mechanism of l-glutamic acid on spawning of the starfish Patiria (Asterina) pectinifera.
Mita, Masatoshi
2017-03-01
l-Glutamic acid was previously identified as an inhibitor of spawning in the starfish Patiria (Asterina) pectinifera; this study examined how l-glutamic acid works. Oocyte release from ovaries of P. pectinifera occurred after germinal vesicle breakdown (GVBD) and follicular envelope breakdown (FEBD) when gonads were incubated ex vivo with either relaxin-like gonad-stimulating peptide (RGP) or 1-methyladenine (1-MeAde). l-Glutamic acid blocked this spawning phenotype, causing the mature oocytes to remain within the ovaries. Neither RGP-induced 1-MeAde production in ovarian follicle cells nor 1-MeAde-induced GVBD and FEBD was affected by l-glutamic acid. l-Glutamic acid may act through metabotropic receptors in the ovaries to inhibit spawning, as l-(+)-2-amino-4-phosphonobutyric acid, an agonist for metabotropic glutamate receptors, also inhibited spawning induced by 1-MeAde. Application of acetylcholine (ACH) to ovaries under inhibitory conditions with l-glutamic acid, however, brought about spawning, possibly by inducing contraction of the ovarian wall to discharge mature oocytes from the ovaries concurrently with GVBD and FEBD. Thus, l-glutamic acid may inhibit ACH secretion from gonadal nerve cells in the ovary. Mol. Reprod. Dev. 84: 246-256, 2017. © 2017 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Yu, Wan-Ting; Yu, Hong-yi; Du, Jian-Ping; Wang, Ding
2018-04-01
The Direct Position Determination (DPD) algorithm has been demonstrated to achieve better accuracy when the signal waveforms are known. However, the signal waveform is difficult to know completely in an actual positioning process. To solve this problem, we propose a DPD method for digitally modulated signals based on an improved particle swarm optimization algorithm. First, a DPD model is established for known modulation signals and a cost function is obtained over the symbol estimates. Second, as optimizing the cost function is a nonlinear integer optimization problem, an improved Particle Swarm Optimization (PSO) algorithm is employed for the optimal symbol search. Simulations show the higher position accuracy of the proposed DPD method and the convergence of the fitness function under different inertia weights and population sizes. On the one hand, the proposed algorithm can take full advantage of the signal feature to improve positioning accuracy. On the other hand, the improved PSO algorithm improves the efficiency of the symbol search by nearly one hundred times while achieving a globally optimal solution.
NASA Technical Reports Server (NTRS)
Rash, James L.
2010-01-01
NASA's space data-communications infrastructure, the Space Network and the Ground Network, provide scheduled (as well as some limited types of unscheduled) data-communications services to user spacecraft via orbiting relay satellites and ground stations. An implementation of the methods and algorithms disclosed herein will be a system that produces globally optimized schedules with not only optimized service delivery by the space data-communications infrastructure but also optimized satisfaction of all user requirements and prescribed constraints, including radio frequency interference (RFI) constraints. Evolutionary search, a class of probabilistic strategies for searching large solution spaces, constitutes the essential technology in this disclosure. Also disclosed are methods and algorithms for optimizing the execution efficiency of the schedule-generation algorithm itself. The scheduling methods and algorithms as presented are adaptable to accommodate the complexity of scheduling the civilian and/or military data-communications infrastructure. Finally, the problem itself, and the methods and algorithms, are generalized and specified formally, with applicability to a very broad class of combinatorial optimization problems.
Li, Jianjun; Zhang, Rubo; Yang, Yu
2017-01-01
This paper studies a distributed task planning model for multiple autonomous underwater vehicles (MAUV). A scroll time domain quantum artificial bee colony (STDQABC) optimization algorithm is proposed to solve for the multi-AUV optimal task planning scheme. In the uncertain marine environment, the rolling time domain control technique is used to perform numerical optimization over a narrowed time range. Rolling time domain control is one of the better task planning techniques; it can greatly reduce the computational workload and realize a tradeoff among AUV dynamics, environment, and cost. Finally, a simulation experiment was performed to evaluate the distributed task planning performance of the scroll time domain quantum bee colony optimization algorithm. The simulation results demonstrate that the STDQABC algorithm converges faster than the QABC and ABC algorithms in terms of both iterations and running time. The STDQABC algorithm can effectively improve MAUV distributed task planning performance, complete the task goal, and obtain an approximately optimal solution. PMID:29186166
Evaluation of Genetic Algorithm Concepts Using Model Problems. Part 2; Multi-Objective Optimization
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Pulliam, Thomas H.
2003-01-01
A genetic algorithm approach suitable for solving multi-objective optimization problems is described and evaluated using a series of simple model problems. Several new features including a binning selection algorithm and a gene-space transformation procedure are included. The genetic algorithm is suitable for finding pareto optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. Results indicate that the genetic algorithm optimization approach is flexible in application and extremely reliable, providing optimal results for all optimization problems attempted. The binning algorithm generally provides pareto front quality enhancements and moderate convergence efficiency improvements for most of the model problems. The gene-space transformation procedure provides a large convergence efficiency enhancement for problems with non-convoluted pareto fronts and a degradation in efficiency for problems with convoluted pareto fronts. The most difficult problems (multi-mode search spaces with a large number of genes and convoluted pareto fronts) require a large number of function evaluations for GA convergence, but always converge.
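The core operation any such multi-objective GA relies on, extracting the non-dominated (pareto-optimal) subset of a population, can be sketched as follows (minimization of every objective is assumed; the selection and binning details of the paper are not reproduced):

```python
# Pareto-front extraction for a population of objective vectors.

def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep exactly the points that no other point dominates."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Usage: two objectives; (3, 3) is dominated by (2, 3) and is filtered out.
pop = [(1.0, 5.0), (2.0, 3.0), (3.0, 3.0), (4.0, 1.0), (2.5, 2.5)]
front = pareto_front(pop)
```

This naive filter is O(n^2) per call; GA implementations typically amortize it with non-dominated sorting, but the dominance test itself is the same.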
A new improved artificial bee colony algorithm for ship hull form optimization
NASA Astrophysics Data System (ADS)
Huang, Fuxin; Wang, Lijue; Yang, Chi
2016-04-01
The artificial bee colony (ABC) algorithm is a relatively new swarm intelligence-based optimization algorithm. Its simplicity of implementation, relatively few parameter settings and promising optimization capability make it widely used in different fields. However, it has problems of slow convergence due to its solution search equation. Here, a new solution search equation based on a combination of the elite solution pool and the block perturbation scheme is proposed to improve the performance of the algorithm. In addition, two different solution search equations are used by employed bees and onlooker bees to balance the exploration and exploitation of the algorithm. The developed algorithm is validated by a set of well-known numerical benchmark functions. It is then applied to optimize two ship hull forms with minimum resistance. The tested results show that the proposed new improved ABC algorithm can outperform the ABC algorithm in most of the tested problems.
Cui, Huanqing; Shu, Minglei; Song, Min; Wang, Yinglong
2017-03-01
Localization is a key technology in wireless sensor networks. Faced with the challenges of the sensors' memory, computational constraints, and limited energy, particle swarm optimization has been widely applied in the localization of wireless sensor networks, demonstrating better performance than other optimization methods. In particle swarm optimization-based localization algorithms, the variants and parameters should be chosen elaborately to achieve the best performance. However, there is a lack of guidance on how to choose these variants and parameters. Further, there is no comprehensive performance comparison among particle swarm optimization algorithms. The main contribution of this paper is three-fold. First, it surveys the popular particle swarm optimization variants and particle swarm optimization-based localization algorithms for wireless sensor networks. Secondly, it presents parameter selection of nine particle swarm optimization variants and six types of swarm topologies by extensive simulations. Thirdly, it comprehensively compares the performance of these algorithms. The results show that the particle swarm optimization with constriction coefficient using ring topology outperforms other variants and swarm topologies, and it performs better than the second-order cone programming algorithm.
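The constriction-coefficient variant singled out above replaces the inertia weight with Clerc and Kennedy's factor; a minimal sketch of the factor and the corresponding velocity update (ring-topology bookkeeping is omitted):

```python
import math

def constriction(c1=2.05, c2=2.05):
    """Clerc-Kennedy constriction factor:
    chi = 2 / |2 - phi - sqrt(phi^2 - 4 phi)|, with phi = c1 + c2 > 4."""
    phi = c1 + c2
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

def velocity_update(v, x, pbest, gbest, r1, r2, c1=2.05, c2=2.05):
    """Constriction-type PSO velocity update; r1, r2 are uniform(0,1)
    draws supplied by the caller."""
    chi = constriction(c1, c2)
    return [chi * (vi + c1 * r1 * (pb - xi) + c2 * r2 * (gb - xi))
            for vi, xi, pb, gb in zip(v, x, pbest, gbest)]

chi = constriction()  # about 0.7298 for the standard phi = 4.1
```

The factor damps velocities analytically, which is why this variant needs no velocity clamping; that is one reason it tends to be a safe default in surveys like this one.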
Lin, Kuan-Cheng; Hsieh, Yi-Hsiu
2015-10-01
The classification and analysis of data is an important issue in today's research. Selecting a suitable set of features makes it possible to classify an enormous quantity of data quickly and efficiently. Feature selection is generally viewed as a problem of feature subset selection, such as combination optimization problems. Evolutionary algorithms using random search methods have proven highly effective in obtaining solutions to problems of optimization in a diversity of applications. In this study, we developed a hybrid evolutionary algorithm based on endocrine-based particle swarm optimization (EPSO) and artificial bee colony (ABC) algorithms in conjunction with a support vector machine (SVM) for the selection of optimal feature subsets for the classification of datasets. The results of experiments using specific UCI medical datasets demonstrate that the accuracy of the proposed hybrid evolutionary algorithm is superior to that of basic PSO, EPSO and ABC algorithms, with regard to classification accuracy using subsets with a reduced number of features.
A Swarm Optimization Genetic Algorithm Based on Quantum-Behaved Particle Swarm Optimization.
Sun, Tao; Xu, Ming-Hai
2017-01-01
Quantum-behaved particle swarm optimization (QPSO) algorithm is a variant of the traditional particle swarm optimization (PSO). The QPSO that was originally developed for continuous search spaces outperforms the traditional PSO in search ability. This paper analyzes the main factors that impact the search ability of QPSO and converts the particle movement formula to the mutation condition by introducing the rejection region, thus proposing a new binary algorithm, named swarm optimization genetic algorithm (SOGA), because it is more like genetic algorithm (GA) than PSO in form. SOGA has crossover and mutation operator as GA but does not need to set the crossover and mutation probability, so it has fewer parameters to control. The proposed algorithm was tested with several nonlinear high-dimension functions in the binary search space, and the results were compared with those from BPSO, BQPSO, and GA. The experimental results show that SOGA is distinctly superior to the other three algorithms in terms of solution accuracy and convergence.
An improved grey wolf optimizer algorithm for the inversion of geoelectrical data
NASA Astrophysics Data System (ADS)
Li, Si-Yu; Wang, Shu-Ming; Wang, Peng-Fei; Su, Xiao-Lu; Zhang, Xin-Song; Dong, Zhi-Hui
2018-05-01
The grey wolf optimizer (GWO) is a novel bionics algorithm inspired by the social rank and prey-seeking behaviors of grey wolves. The GWO algorithm is easy to implement because of its basic concept, simple formulas, and small number of parameters. This paper develops a GWO algorithm with a nonlinear convergence factor and an adaptive location-updating strategy and applies this improved grey wolf optimizer (IGWO) algorithm to geophysical inversion problems involving magnetotelluric (MT), DC resistivity, and induced polarization (IP) methods. Numerical tests in MATLAB 2010b on forward-modeled data and observed data show that the IGWO algorithm can find the global minimum and rarely sinks into local minima. For further study, results inverted using the IGWO are contrasted with those of particle swarm optimization (PSO) and the simulated annealing (SA) algorithm. The comparison reveals that the IGWO and PSO perform similarly, and both balance exploration and exploitation better than the SA for a given number of iterations.
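The nonlinear convergence factor can be sketched as below; the quadratic form is an assumption for illustration, since the abstract does not print the paper's exact formula:

```python
# GWO's convergence factor a scales the random coefficient A = 2*a*r - a
# that decides between exploration (|A| > 1) and exploitation (|A| < 1).

def a_linear(t, t_max):
    """Standard GWO: a decreases linearly from 2 to 0."""
    return 2.0 * (1.0 - t / t_max)

def a_nonlinear(t, t_max):
    """Assumed nonlinear variant: quadratic decay keeps a larger for
    longer (more early exploration), then drops quickly late in the run
    (faster final exploitation)."""
    return 2.0 * (1.0 - (t / t_max) ** 2)
```

Both schedules share the endpoints (2 at the start, 0 at the end); only the path between them, and hence the exploration/exploitation split, differs.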
Transonic Wing Shape Optimization Using a Genetic Algorithm
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Pulliam, Thomas H.; Kwak, Dochan (Technical Monitor)
2002-01-01
A method for aerodynamic shape optimization based on a genetic algorithm approach is demonstrated. The algorithm is coupled with a transonic full potential flow solver and is used to optimize the flow about transonic wings including multi-objective solutions that lead to the generation of pareto fronts. The results indicate that the genetic algorithm is easy to implement, flexible in application and extremely reliable.
Solving TSP problem with improved genetic algorithm
NASA Astrophysics Data System (ADS)
Fu, Chunhua; Zhang, Lijun; Wang, Xiaojing; Qiao, Liying
2018-05-01
The TSP is a typical NP-hard problem. The optimization of the vehicle routing problem (VRP) and of city pipeline layouts can be cast as TSP instances, so solving the TSP efficiently is very important. The genetic algorithm (GA) is one of the ideal methods for solving it, but the standard genetic algorithm has some limitations. We improve the selection operator of the genetic algorithm and introduce an elite retention strategy to ensure the quality of the selection operation. In the mutation operation, adaptive operator selection improves the quality and diversity of the search results. After a chromosome has evolved, a one-way evolutionary reverse operation is added, which gives the offspring more opportunities to inherit high-quality parental genes and improves the algorithm's ability to search for the optimal solution.
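The described improvements (elite retention, tournament-style selection, and the segment-reversal operation) can be sketched as follows; the instance, parameters, and the swap mutation are illustrative choices, not the paper's exact design:

```python
import random

def tour_length(tour, dist):
    """Length of the closed tour under the distance matrix dist."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def reverse_op(tour, rng):
    """Reverse operation: invert a random segment of the tour
    (a 2-opt-style move that preserves the permutation)."""
    i, j = sorted(rng.sample(range(len(tour)), 2))
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

def ga_tsp(dist, pop_size=30, gens=200, elite=2, seed=3):
    rng = random.Random(seed)
    n = len(dist)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda t: tour_length(t, dist))
        nxt = pop[:elite]                      # elite retention
        while len(nxt) < pop_size:
            parent = min(rng.sample(pop, 3),   # tournament selection
                         key=lambda t: tour_length(t, dist))
            child = reverse_op(parent, rng)    # reverse operation
            if rng.random() < 0.2:             # mutation: swap two cities
                a, b = rng.sample(range(n), 2)
                child[a], child[b] = child[b], child[a]
            nxt.append(child)
        pop = nxt
    return min(pop, key=lambda t: tour_length(t, dist))

# Usage: 6 cities on a line; the shortest closed tour has length 10.
pts = [0, 1, 2, 3, 4, 5]
dist = [[abs(a - b) for b in pts] for a in pts]
best = ga_tsp(dist)
```

Elite retention guarantees the best tour never regresses between generations, which is the monotonicity the abstract's "elite retention strategy" buys.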
System Design under Uncertainty: Evolutionary Optimization of the Gravity Probe-B Spacecraft
NASA Technical Reports Server (NTRS)
Pullen, Samuel P.; Parkinson, Bradford W.
1994-01-01
This paper discusses the application of evolutionary random-search algorithms (simulated annealing and genetic algorithms) to the problem of spacecraft design under performance uncertainty. Traditionally, spacecraft performance uncertainty has been measured by reliability. Published algorithms for reliability optimization are seldom used in practice because they oversimplify reality. The algorithm developed here uses random-search optimization to allow us to model the problem more realistically. Monte Carlo simulations are used to evaluate the objective function for each trial design solution. These methods have been applied to the Gravity Probe-B (GP-B) spacecraft being developed at Stanford University for launch in 1999. Results of the algorithm developed here for GP-B are shown, and their implications for design optimization by evolutionary algorithms are discussed.
Wang, Jun; Zhou, Bihua; Zhou, Shudao
2016-01-01
This paper proposes an improved cuckoo search (ICS) algorithm to estimate the parameters of chaotic systems. To improve the optimization capability of the basic cuckoo search (CS) algorithm, orthogonal design and a simulated annealing operation are incorporated into the CS algorithm to enhance its exploitation ability. The proposed algorithm is then used to estimate the parameters of the Lorenz chaotic system and the Chen chaotic system under noiseless and noisy conditions, respectively. The numerical results demonstrate that the algorithm can estimate parameters with high accuracy and reliability. Finally, the results are compared with those of the CS algorithm, the genetic algorithm, and the particle swarm optimization algorithm, and the comparison demonstrates that the method is efficient and superior. PMID:26880874
Houseknecht, David W.; Bird, Kenneth J.; O'Sullivan, Paul
2011-01-01
A broad, post-mid-Cretaceous uplift is defined in the northern National Petroleum Reserve in Alaska (NPRA) by regional truncation of Cretaceous strata, thermal maturity patterns, and amounts of exhumation estimated from sonic logs. Apatite fission-track (AFT) analysis of samples from three wells (South Meade No. 1, Topagoruk No. 1, and Ikpikpuk No. 1) across the eastern flank of the uplift indicates Tertiary cooling followed by Quaternary heating. Results from all three wells indicate that cooling, presumably caused by uplift and erosion, started about 75-65 Ma (latest Cretaceous-earliest Tertiary) and continued through the Tertiary Period. Data from South Meade indicate more rapid cooling after about 35-15 Ma (latest Eocene-middle Miocene) followed by a significant increase in subsurface temperature during the Quaternary, probably the result of increased heat flow. Data from Topagoruk and Ikpikpuk include subtle evidence of accelerated cooling starting in the latest Eocene-middle Miocene and possible evidence of increased temperature during the Quaternary. Subsurface temperature perturbations related to the insulating effect of permafrost may have been responsible for the Quaternary temperature increase at Topagoruk and Ikpikpuk and may have been a contributing factor at South Meade. Multiple lines of geologic evidence suggest that the magnitude of exhumation resulting from uplift and erosion is 5,000-6,500 ft at South Meade, 4,000-5,500 ft at Topagoruk, and 2,500-4,000 ft at Ikpikpuk. The results from these wells help to define the broad geometry of the uplift, which increases in magnitude from less than 1,000 ft at the Colville River delta to perhaps more than 7,000 ft along the northwestern coast of NPRA, between Point Barrow and Peard Bay. Neither the origin nor the offshore extent of the uplift, west and north of the NPRA coast, has been determined.
Bai, Mingsian R; Hsieh, Ping-Ju; Hur, Kur-Nan
2009-02-01
The performance of the minimum mean-square error noise reduction (MMSE-NR) algorithm in conjunction with time-recursive averaging (TRA) for noise estimation is found to be very sensitive to the choice of two recursion parameters. To address this problem in a more systematic manner, this paper proposes an optimization method to efficiently search for the optimal parameters of the MMSE-TRA-NR algorithm. The objective function is based on a regression model, whereas the optimization process is carried out with the simulated annealing algorithm, which is well suited to problems with many local optima. Another NR algorithm proposed in the paper employs linear prediction coding as a preprocessor for extracting the correlated portion of human speech. Objective and subjective tests were undertaken to compare the optimized MMSE-TRA-NR algorithm with several conventional NR algorithms. The results of the subjective tests were processed using analysis of variance to assess statistical significance. A post hoc test, Tukey's Honestly Significant Difference, was conducted to further assess the pairwise differences between the NR algorithms.
Yao, Ke-Han; Jiang, Jehn-Ruey; Tsai, Chung-Hsien; Wu, Zong-Syun
2017-08-20
This paper investigates how to efficiently charge sensor nodes in a wireless rechargeable sensor network (WRSN) with radio frequency (RF) chargers to make the network sustainable. An RF charger is assumed to be equipped with a uniform circular array (UCA) of 12 antennas with radius λ, where λ is the RF wavelength. The UCA can steer most RF energy in a target direction to charge a specific WRSN node by the beamforming technology. Two evolutionary algorithms (EAs) using the evolution strategy (ES), namely the Evolutionary Beamforming Optimization (EBO) algorithm and the Evolutionary Beamforming Optimization Reseeding (EBO-R) algorithm, are proposed to nearly optimize the power ratio of the UCA beamforming peak side lobe (PSL) and the main lobe (ML) aimed at the given target direction. The proposed algorithms are simulated for performance evaluation and are compared with a related algorithm, called Particle Swarm Optimization Gravitational Search Algorithm-Explore (PSOGSA-Explore), to show their superiority.
Wang, Jie-Sheng; Han, Shuang
2015-01-01
For predicting the key technology indicators (concentrate grade and tailings recovery rate) of the flotation process, a feed-forward neural network (FNN) based soft-sensor model optimized by a hybrid algorithm combining the particle swarm optimization (PSO) algorithm and the gravitational search algorithm (GSA) is proposed. Although GSA has good optimization capability, it converges slowly and easily falls into local optima. In this paper, the velocity vector and position vector of GSA are therefore adjusted by the PSO algorithm in order to improve its convergence speed and prediction accuracy. Finally, the proposed hybrid algorithm is adopted to optimize the parameters of the FNN soft-sensor model. Simulation results show that the model has better generalization and prediction accuracy for the concentrate grade and tailings recovery rate, meeting the online soft-sensing requirements of real-time control in the flotation process. PMID:26583034
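The core idea, steering GSA's velocity update with PSO-style terms, can be sketched generically as follows; the coefficients (`c1`, `c2`, the decaying gravity `G`, and the inertia schedule `w`) are assumed values for illustration, not the authors' exact formulation:

```python
import random

def psogsa(f, bounds, n=20, iters=100, G0=1.0, c1=0.5, c2=1.5, seed=2):
    """Sketch of a PSO-GSA hybrid: GSA gravitational acceleration drives
    exploration while a PSO-style global-best term speeds convergence."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    gbest, gbest_f = None, float("inf")
    for t in range(iters):
        fit = [f(x) for x in pos]
        b = min(range(n), key=fit.__getitem__)
        if fit[b] < gbest_f:
            gbest, gbest_f = pos[b][:], fit[b]
        # masses: better fitness -> larger mass (minimisation)
        worst, best = max(fit), min(fit)
        span = (worst - best) or 1.0
        m = [(worst - fi) / span + 1e-9 for fi in fit]
        s = sum(m)
        m = [mi / s for mi in m]
        G = G0 * (1 - t / iters)                    # decaying gravity
        w = 0.9 - 0.5 * t / iters                   # decreasing inertia weight
        for i in range(n):
            acc = [0.0] * dim
            for j in range(n):
                if i == j:
                    continue
                dist = sum((p - q) ** 2 for p, q in zip(pos[i], pos[j])) ** 0.5 + 1e-9
                for d in range(dim):
                    acc[d] += rng.random() * G * m[j] * (pos[j][d] - pos[i][d]) / dist
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * acc[d]
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = max(bounds[d][0], min(bounds[d][1], pos[i][d] + vel[i][d]))
    return gbest, gbest_f
```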
Genetic Algorithms Applied to Multi-Objective Aerodynamic Shape Optimization
NASA Technical Reports Server (NTRS)
Holst, Terry L.
2005-01-01
A genetic algorithm approach suitable for solving multi-objective problems is described and evaluated using a series of aerodynamic shape optimization problems. Several new features including two variations of a binning selection algorithm and a gene-space transformation procedure are included. The genetic algorithm is suitable for finding Pareto optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. A new masking array capability is included allowing any gene or gene subset to be eliminated as decision variables from the design space. This allows determination of the effect of a single gene or gene subset on the Pareto optimal solution. Results indicate that the genetic algorithm optimization approach is flexible in application and reliable. The binning selection algorithms generally provide Pareto front quality enhancements and moderate convergence efficiency improvements for most of the problems solved.
A Modified Mean Gray Wolf Optimization Approach for Benchmark and Biomedical Problems.
Singh, Narinder; Singh, S B
2017-01-01
A modified variant of the gray wolf optimization algorithm, namely, the mean gray wolf optimization algorithm, has been developed by modifying the position update (encircling behavior) equations of the gray wolf optimization algorithm. The proposed variant has been tested on 23 standard well-known benchmark test functions (unimodal, multimodal, and fixed-dimension multimodal), and the performance of the modified variant has been compared with particle swarm optimization and gray wolf optimization. The proposed algorithm has also been applied to the classification of 5 data sets to check the feasibility of the modified variant. The results obtained are compared with many other meta-heuristic approaches, i.e., gray wolf optimization, particle swarm optimization, population-based incremental learning, ant colony optimization, etc. The results show that the modified variant is able to find the best solutions in terms of high classification accuracy and improved local optima avoidance.
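For reference, one step of the baseline GWO encircling/position update, which the "mean" variant modifies, can be sketched as follows; this is the standard three-leader averaging form only, and the paper's modified mean-based equations are not reproduced here:

```python
import random

def gwo_step(wolves, fitness, a, rng=random):
    """One standard grey wolf optimizer position update (minimisation).

    The three best wolves (alpha, beta, delta) each propose an encircling
    target; every wolf moves to the average of the three proposals."""
    order = sorted(range(len(wolves)), key=fitness.__getitem__)
    alpha, beta, delta = wolves[order[0]], wolves[order[1]], wolves[order[2]]
    new = []
    for x in wolves:
        pos = []
        for d in range(len(x)):
            cand = []
            for leader in (alpha, beta, delta):
                A = 2 * a * rng.random() - a      # |A| shrinks as a -> 0
                C = 2 * rng.random()
                D = abs(C * leader[d] - x[d])     # distance to the leader
                cand.append(leader[d] - A * D)
            pos.append(sum(cand) / 3.0)
        new.append(pos)
    return new
```

A driver loop typically decreases `a` linearly from 2 to 0 over the iterations, shifting the swarm from exploration to exploitation.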
Laney, R.L.
1981-01-01
The study is a geohydrologic reconnaissance of about 170 square miles in the Lake Mead National Recreation Area from Las Vegas Wash to Opal Mountain, Nevada. The study is one of a series that describes the geohydrology of the recreation area and that identifies areas where water supplies can be developed. Precipitation in this arid area is about 5 inches per year. Streamflow is seasonal and extremely variable except for that in the Colorado River, which adjoins the area. Pan evaporation is more than 20 times greater than precipitation; therefore, regional ground-water supplies are meager except near the Colorado River, Lake Mead, and Lake Mohave. Large ground-water supplies can be developed near the river and lakes, and much smaller supplies may be obtained in a few favorable locations farther from the river and lakes. Ground water in most of the areas probably contains more than 1,000 milligrams per liter of dissolved solids, but water that contains less than 1,000 milligrams per liter of dissolved solids can be obtained within about 1 mile of the lakes. Crystalline rocks of metamorphic, intrusive, and volcanic origin crop out in the area. These rocks are overlain by conglomerate and mudstone of the Muddy Creek Formation, gravel and conglomerate of the older alluvium, and sand and gravel of the Chemehuevi Formation and younger alluvium. The crystalline rocks, where sufficiently fractured, yield water to springs and would yield small amounts of water to favorably located wells. The poorly cemented and more permeable beds of the older alluvium, Chemehuevi Formation, and younger alluvium are the better potential aquifers, particularly along the Colorado River and Lakes Mead and Mohave. Thermal springs in the gorge of the Colorado River south of Hoover Dam discharge at least 2,580 acre-feet per year of water from the volcanic rocks and metamorphic and plutonic rocks. The discharge is much greater than could be infiltrated in the drainage basin above the springs.
Transbasin movement of ground water probably occurs, and perhaps the larger part of the spring discharge is underflow from Eldorado Valley. The more favorable sites for ground-water development are along the shores of Lakes Mead and Mohave and are the Fire Mountain, Opal Mountain to Aztec Wash, and Hemenway Wash sites. Wells yielding several hundred gallons per minute of water of acceptable chemical quality can be developed at these sites. (USGS)
A Novel Hybrid Firefly Algorithm for Global Optimization.
Zhang, Lina; Liu, Liqiang; Yang, Xin-She; Dai, Yuntao
2016-01-01
Global optimization is challenging to solve due to its nonlinearity and multimodality. Traditional algorithms such as the gradient-based methods often struggle to deal with such problems and one of the current trends is to use metaheuristic algorithms. In this paper, a novel hybrid population-based global optimization algorithm, called hybrid firefly algorithm (HFA), is proposed by combining the advantages of both the firefly algorithm (FA) and differential evolution (DE). FA and DE are executed in parallel to promote information sharing among the population and thus enhance searching efficiency. In order to evaluate the performance and efficiency of the proposed algorithm, a diverse set of selected benchmark functions are employed and these functions fall into two groups: unimodal and multimodal. The experimental results show better performance of the proposed algorithm compared to the original version of the firefly algorithm (FA), differential evolution (DE) and particle swarm optimization (PSO) in the sense of avoiding local minima and increasing the convergence rate.
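The FA/DE combination can be illustrated with a simplified serial sketch in which each generation applies a greedy firefly attraction move and then a DE/rand/1/bin trial, keeping whichever point is better (the paper runs FA and DE in parallel to share information; parameter values here are illustrative assumptions):

```python
import math
import random

def hfa(f, bounds, n=20, iters=150, beta0=1.0, gamma=1.0, alpha=0.2,
        F=0.5, CR=0.9, seed=3):
    """Simplified serial sketch of a firefly/differential-evolution hybrid."""
    rng = random.Random(seed)
    dim = len(bounds)
    clip = lambda x: [max(lo, min(hi, v)) for v, (lo, hi) in zip(x, bounds)]
    pop = [clip([rng.uniform(lo, hi) for lo, hi in bounds]) for _ in range(n)]
    fit = [f(x) for x in pop]
    for _ in range(iters):
        for i in range(n):
            # FA: move firefly i toward every brighter (better) firefly j
            x = pop[i][:]
            for j in range(n):
                if fit[j] < fit[i]:
                    r2 = sum((a - b) ** 2 for a, b in zip(x, pop[j]))
                    beta = beta0 * math.exp(-gamma * r2)   # attractiveness
                    x = [xi + beta * (xj - xi) + alpha * (rng.random() - 0.5)
                         for xi, xj in zip(x, pop[j])]
            x = clip(x)
            fx = f(x)
            if fx < fit[i]:
                pop[i], fit[i] = x, fx
            # DE/rand/1/bin trial vector, greedy selection
            a, b, c = rng.sample([k for k in range(n) if k != i], 3)
            jr = rng.randrange(dim)
            trial = [pop[a][d] + F * (pop[b][d] - pop[c][d])
                     if (rng.random() < CR or d == jr) else pop[i][d]
                     for d in range(dim)]
            trial = clip(trial)
            ft = f(trial)
            if ft < fit[i]:
                pop[i], fit[i] = trial, ft
    bi = min(range(n), key=fit.__getitem__)
    return pop[bi], fit[bi]
```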
A preliminary study to metaheuristic approach in multilayer radiation shielding optimization
NASA Astrophysics Data System (ADS)
Arif Sazali, Muhammad; Rashid, Nahrul Khair Alang Md; Hamzah, Khaidzir
2018-01-01
Metaheuristics are high-level algorithmic concepts that can be used to develop heuristic optimization algorithms. One of their applications is to find optimal or near-optimal solutions to combinatorial optimization problems (COPs) such as scheduling, vehicle routing, and timetabling. Combinatorial optimization deals with finding optimal combinations or permutations of a given set of problem components when exhaustive search is not feasible. A radiation shield made of several layers of different materials can be regarded as a COP. The time taken to optimize the shield may be too high when several parameters are involved, such as the number of materials, the thickness of layers, and the arrangement of materials. Metaheuristics can be applied to reduce the optimization time, trading guaranteed optimal solutions for near-optimal solutions obtained in a comparably short amount of time. The application of metaheuristics to radiation shield optimization is lacking. In this paper, we present a review of the suitability of using metaheuristics in multilayer shielding design, specifically the genetic algorithm and the ant colony optimization (ACO) algorithm. We also propose an optimization model based on the ACO method.
NASA Astrophysics Data System (ADS)
Kazemzadeh Azad, Saeid
2018-01-01
In spite of considerable research work on the development of efficient algorithms for discrete sizing optimization of steel truss structures, only a few studies have addressed non-algorithmic issues affecting the general performance of algorithms. For instance, an important question is whether starting the design optimization from a feasible solution is fruitful or not. This study is an attempt to investigate the effect of seeding the initial population with feasible solutions on the general performance of metaheuristic techniques. To this end, the sensitivity of recently proposed metaheuristic algorithms to the feasibility of initial candidate designs is evaluated through practical discrete sizing of real-size steel truss structures. The numerical experiments indicate that seeding the initial population with feasible solutions can improve the computational efficiency of metaheuristic structural optimization algorithms, especially in the early stages of the optimization. This paves the way for efficient metaheuristic optimization of large-scale structural systems.
Xie, Rui; Wan, Xianrong; Hong, Sheng; Yi, Jianxin
2017-06-14
The performance of a passive radar network can be greatly improved by an optimal radar network structure. Generally, radar network structure optimization consists of two aspects, namely the placement of receivers in suitable places and the selection of appropriate illuminators. The present study investigates issues concerning the joint optimization of receiver placement and illuminator selection for a passive radar network. Firstly, the required radar cross section (RCS) for target detection is chosen as the performance metric, and the joint optimization model boils down to the partition p-center problem (PPCP). The PPCP is then solved by a proposed bisection algorithm. The key of the bisection algorithm lies in solving the partition set covering problem (PSCP), which can be solved by a hybrid algorithm developed by coupling convex optimization with a greedy dropping algorithm. In the end, the performance of the proposed algorithm is validated via numerical simulations.
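The greedy element of such a set-covering solver can be illustrated with the classic greedy covering heuristic; note that the paper couples convex optimization with a greedy *dropping* step rather than the covering form sketched here:

```python
def greedy_set_cover(universe, subsets):
    """Greedy approximation to set covering: repeatedly pick the subset
    that covers the most still-uncovered elements."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        i = max(range(len(subsets)), key=lambda j: len(uncovered & subsets[j]))
        if not uncovered & subsets[i]:
            raise ValueError("universe cannot be covered by the given subsets")
        chosen.append(i)
        uncovered -= subsets[i]
    return chosen
```

The greedy choice achieves the well-known ln(n) approximation guarantee for set cover.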
NASA Astrophysics Data System (ADS)
Arya, L. D.; Koshti, Atul
2018-05-01
This paper investigates Distributed Generation (DG) capacity optimization at locations based on incremental voltage sensitivity criteria for a sub-transmission network. The Modified Shuffled Frog Leaping optimization Algorithm (MSFLA) has been used to optimize the DG capacity. An induction generator model of DG (wind-based generating units) has been considered for the study. The standard IEEE 30-bus test system has been considered for the above study. The obtained results are also validated by the shuffled frog leaping algorithm and a modified version of bare bones particle swarm optimization (BBExp). The performance of MSFLA has been found to be more efficient than the other two algorithms for the real power loss minimization problem.
NASA Technical Reports Server (NTRS)
Madavan, Nateri K.
2004-01-01
Differential Evolution (DE) is a simple, fast, and robust evolutionary algorithm that has proven effective in determining the global optimum for several difficult single-objective optimization problems. The DE algorithm has recently been extended to multiobjective optimization problems by using a Pareto-based approach. In this paper, a Pareto DE algorithm is applied to multiobjective aerodynamic shape optimization problems that are characterized by computationally expensive objective function evaluations. To reduce the computational expense, the algorithm is coupled with generalized response surface meta-models based on artificial neural networks. Results are presented for some test optimization problems from the literature to demonstrate the capabilities of the method.
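The Pareto-based selection underlying such a DE extension rests on a dominance test; a minimal sketch for minimisation problems:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimisation):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(points):
    """Return the Pareto-optimal subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]
```

In a Pareto DE loop, a trial vector replaces its parent only if it is not dominated; the final archive is filtered with `nondominated`.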
NASA Astrophysics Data System (ADS)
Mohamed, Najihah; Lutfi Amri Ramli, Ahmad; Majid, Ahmad Abd; Piah, Abd Rahni Mt
2017-09-01
A metaheuristic algorithm called Harmony Search (HS) is widely applied to parameter optimization in many areas. HS is a derivative-free real-parameter optimization algorithm that draws inspiration from the musical improvisation process of searching for a perfect state of harmony. This paper proposes a Modified Harmony Search (MHS) for solving optimization problems, which employs concepts from the genetic algorithm and particle swarm optimization to generate new solution vectors, enhancing the performance of the HS algorithm. The performances of MHS and HS are investigated on ten benchmark optimization problems in order to make a comparison reflecting the efficiency of MHS in terms of final accuracy, convergence speed, and robustness.
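The basic HS improvisation step that MHS builds on (memory consideration, pitch adjustment, random selection) can be sketched as follows; the `hmcr`, `par`, and bandwidth values are illustrative assumptions:

```python
import random

def harmony_search(f, bounds, hms=10, iters=500, hmcr=0.9, par=0.3,
                   bw=0.05, seed=4):
    """Sketch of basic Harmony Search: each new vector is improvised from
    memory (prob. hmcr), optionally pitch-adjusted (prob. par), else random."""
    rng = random.Random(seed)
    hm = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    fit = [f(x) for x in hm]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:
                v = hm[rng.randrange(hms)][d]              # memory consideration
                if rng.random() < par:                     # pitch adjustment
                    v += bw * (hi - lo) * (2 * rng.random() - 1)
            else:
                v = rng.uniform(lo, hi)                    # random selection
            new.append(max(lo, min(hi, v)))
        fn = f(new)
        worst = max(range(hms), key=fit.__getitem__)
        if fn < fit[worst]:                                # replace worst harmony
            hm[worst], fit[worst] = new, fn
    best = min(range(hms), key=fit.__getitem__)
    return hm[best], fit[best]
```

MHS-style modifications typically replace the memory-consideration step with recombination of several stored harmonies or bias it toward the current best, in the spirit of GA crossover and PSO attraction.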
Diffraction-limited lucky imaging with a 12" commercial telescope
NASA Astrophysics Data System (ADS)
Baptista, Brian J.
2014-08-01
Here we demonstrate a novel lucky imaging camera which is designed to produce diffraction-limited imaging using small telescopes similar to ones used by many academic institutions for outreach and/or student training. We present a design that uses a Meade 12" SCT paired with an Andor iXon fast readout EMCCD. The PSF of the telescope is matched to the pixel size of the EMCCD by adding a simple, custom-fabricated, intervening optical system. We demonstrate performance of the system by observing both astronomical and terrestrial targets. The astronomical application requires simpler data reconstruction techniques as compared to the terrestrial case. We compare different lucky imaging registration and reconstruction algorithms for use with this imager for both astronomical and terrestrial targets. We also demonstrate how this type of instrument would be useful for both undergraduate and graduate student training. As an instructional aide, the instrument can provide a hands-on approach for teaching instrument design, standard data reduction techniques, lucky imaging data processing, and high resolution imaging concepts.
A multiobjective optimization algorithm is applied to a groundwater quality management problem involving remediation by pump-and-treat (PAT). The multiobjective optimization framework uses the niched Pareto genetic algorithm (NPGA) and is applied to simultaneously minimize the...
Structural damage identification using an enhanced thermal exchange optimization algorithm
NASA Astrophysics Data System (ADS)
Kaveh, A.; Dadras, A.
2018-03-01
The recently developed thermal exchange optimization (TEO) algorithm is enhanced and applied to a damage detection problem. An offline parameter tuning approach is utilized to set the internal parameters of the TEO, resulting in the enhanced thermal exchange optimization (ETEO) algorithm. The damage detection problem is defined as an inverse problem, and ETEO is applied to a wide range of structures. Several scenarios with noisy and noise-free modal data are tested, and the locations and extents of damage are identified with good accuracy.
HEURISTIC OPTIMIZATION AND ALGORITHM TUNING APPLIED TO SORPTIVE BARRIER DESIGN
While heuristic optimization is applied in environmental applications, ad-hoc algorithm configuration is typical. We use a multi-layer sorptive barrier design problem as a benchmark for an algorithm-tuning procedure, as applied to three heuristics (genetic algorithms, simulated ...
NASA Astrophysics Data System (ADS)
Fu, Liyue; Song, Aiguo
2018-02-01
In order to improve the measurement precision of a six-axis force/torque sensor for robots, a BP decoupling algorithm optimized by a genetic algorithm (the GA-BP algorithm) is proposed in this paper. The weights and thresholds of a BP neural network with a 6-10-6 topology are optimized by the GA to decouple the six-axis force/torque sensor. By comparison with traditional decoupling algorithms, namely computing the pseudo-inverse of the calibration matrix and the classical BP algorithm, the results validate the good decoupling performance of the GA-BP algorithm, with reduced coupling errors.
A novel metaheuristic for continuous optimization problems: Virus optimization algorithm
NASA Astrophysics Data System (ADS)
Liang, Yun-Chia; Rodolfo Cuevas Juarez, Josue
2016-01-01
A novel metaheuristic for continuous optimization problems, named the virus optimization algorithm (VOA), is introduced and investigated. VOA is an iterative, population-based method that imitates the behaviour of viruses attacking a living cell. The number of viruses grows at each replication and is controlled by an immune system (a so-called 'antivirus') to prevent the explosive growth of the virus population. The viruses are divided into two classes (strong and common) to balance the exploitation and exploration effects. The performance of the VOA is validated through a set of eight benchmark functions, which are also subject to rotation and shifting effects to test its robustness. Extensive comparisons were conducted with over 40 well-known metaheuristic algorithms and their variations, such as artificial bee colony, artificial immune system, differential evolution, evolutionary programming, evolutionary strategy, genetic algorithm, harmony search, invasive weed optimization, memetic algorithm, particle swarm optimization and simulated annealing. The results showed that the VOA is a viable solution for continuous optimization.
NASA Astrophysics Data System (ADS)
Aydogdu, Ibrahim
2017-03-01
In this article, a new version of a biogeography-based optimization algorithm with Levy flight distribution (LFBBO) is introduced and used for the optimum design of reinforced concrete cantilever retaining walls under seismic loading. The cost of the wall is taken as the objective function, which is minimized under the constraints implemented by the American Concrete Institute (ACI 318-05) design code and geometric limitations. The influence of peak ground acceleration (PGA) on optimal cost is also investigated. The solution of the problem is attained by the LFBBO algorithm, which is developed by adding Levy flight distribution to the mutation part of the biogeography-based optimization (BBO) algorithm. Five design examples, two of which are taken from literature studies, are optimized in the study. The results are compared to test the performance of the LFBBO and BBO algorithms, and to determine the influence of the seismic load and PGA on the optimal cost of the wall.
Optimal pattern synthesis for speech recognition based on principal component analysis
NASA Astrophysics Data System (ADS)
Korsun, O. N.; Poliyev, A. V.
2018-02-01
An algorithm for building an optimal pattern for the purpose of automatic speech recognition, which increases the probability of correct recognition, is developed and presented in this work. The optimal pattern forming is based on the decomposition of an initial pattern into principal components, which reduces the dimension of the multi-parameter optimization problem. At the next step, training samples are introduced and the optimal estimates for the principal component decomposition coefficients are obtained by a numeric parameter optimization algorithm. Finally, we consider experimental results that show the improvement in speech recognition achieved by the proposed optimization algorithm.
Pulmonary Nodule Recognition Based on Multiple Kernel Learning Support Vector Machine-PSO
Li, Yang; Zhu, Zhichuan; Hou, Alin; Zhao, Qingdong; Liu, Liwei; Zhang, Lijuan
2018-01-01
Pulmonary nodule recognition is the core module of lung CAD. The Support Vector Machine (SVM) algorithm has been widely used in pulmonary nodule recognition, and the Multiple Kernel Learning Support Vector Machine (MKL-SVM) algorithm has achieved good results therein. Based on grid search, however, the MKL-SVM algorithm needs a long optimization time in the course of parameter optimization; its identification accuracy also depends on the fineness of the grid. In this paper, swarm intelligence is introduced and Particle Swarm Optimization (PSO) is combined with the MKL-SVM algorithm to form the MKL-SVM-PSO algorithm, so as to realize rapid global optimization of the parameters. In order to obtain the global optimal solution, different inertia weights such as constant inertia weight, linear inertia weight, and nonlinear inertia weight are applied to pulmonary nodule recognition. The experimental results show that the model training time of the proposed MKL-SVM-PSO algorithm is only 1/7 of the training time of the MKL-SVM grid search algorithm, achieving a better recognition effect. Moreover, the Euclidean norm of the normalized error vector is proposed to measure the proximity between the average fitness curve and the optimal fitness curve after convergence. Through statistical analysis of the averages of 20 runs with different inertia weights, it can be seen that dynamic inertia weights are superior to the constant inertia weight in the MKL-SVM-PSO algorithm. Among the dynamic inertia weights, the parameter optimization time of the nonlinear inertia weight is shorter, and the average fitness value after convergence is much closer to the optimal fitness value than with the linear inertia weight. Besides, a better nonlinear inertia weight is verified. PMID:29853983
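The three inertia-weight schedules compared above can be sketched as simple functions of the iteration counter; the power-law form used for the nonlinear weight below is an illustrative assumption, since the paper's exact nonlinear expression is not reproduced here:

```python
def inertia_constant(t, T, w=0.7):
    """Constant inertia weight: independent of the iteration counter."""
    return w

def inertia_linear(t, T, w_max=0.9, w_min=0.4):
    """Linearly decreasing inertia weight, from w_max at t=0 to w_min at t=T."""
    return w_max - (w_max - w_min) * t / T

def inertia_nonlinear(t, T, w_max=0.9, w_min=0.4, k=2.0):
    """Nonlinear (here: power-law) decreasing inertia weight; decays faster
    than the linear schedule for k > 1, favouring earlier exploitation."""
    return w_min + (w_max - w_min) * (1 - t / T) ** k
```

In a PSO velocity update `v = w(t) * v + c1*r1*(pbest - x) + c2*r2*(gbest - x)`, a larger early weight promotes exploration and a smaller late weight promotes convergence.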
Research on cutting path optimization of sheet metal parts based on ant colony algorithm
NASA Astrophysics Data System (ADS)
Wu, Z. Y.; Ling, H.; Li, L.; Wu, L. H.; Liu, N. B.
2017-09-01
In view of the disadvantages of current cutting path optimization methods for sheet metal parts, a new method based on the ant colony algorithm is proposed in this paper. The cutting path optimization problem of sheet metal parts is taken as the research object, and the essence and optimization goal of the problem are presented. The traditional serial cutting constraint rule is improved, and a cutting constraint rule allowing cross cutting is proposed. The contour lines of the parts are discretized and a mathematical model of cutting path optimization is established, converting the problem into a selection problem over the contour lines of the parts. The ant colony algorithm is used to solve this problem, and the principle and steps of the algorithm are analyzed.
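The two core ACO operations, probabilistic selection biased by pheromone and heuristic desirability, and evaporate-then-deposit pheromone updates, can be sketched generically; `alpha`, `beta`, `rho`, and `q` are the usual ACO parameters, not values from the paper:

```python
import random

def aco_select(pheromone, heuristic, alpha=1.0, beta=2.0, rng=random):
    """One ACO construction step: choose an option with probability
    proportional to pheromone^alpha * heuristic^beta (roulette wheel)."""
    weights = [p ** alpha * h ** beta for p, h in zip(pheromone, heuristic)]
    total = sum(weights)
    r = rng.random() * total
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(weights) - 1          # numerical safety fallback

def evaporate_and_deposit(pheromone, chosen, rho=0.1, q=1.0, cost=1.0):
    """Global pheromone update: evaporate everywhere, then deposit an
    amount inversely proportional to the tour cost on the chosen option."""
    for i in range(len(pheromone)):
        pheromone[i] *= (1 - rho)
    pheromone[chosen] += q / cost
    return pheromone
```

For the cutting-path problem, the "options" would be candidate contour segments reachable from the current cutter position, with the heuristic taken as, for example, the inverse of the rapid-traverse distance.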
Algorithms for the optimization of RBE-weighted dose in particle therapy.
Horcicka, M; Meyer, C; Buschbacher, A; Durante, M; Krämer, M
2013-01-21
We report on various algorithms used for the nonlinear optimization of RBE-weighted dose in particle therapy. Concerning the dose calculation, carbon ions are considered and biological effects are calculated by the Local Effect Model. Taking biological effects fully into account requires iterative methods to solve the optimization problem. We implemented several additional algorithms in GSI's treatment planning system TRiP98, like the BFGS algorithm and the method of conjugate gradients, in order to investigate their computational performance. We modified textbook iteration procedures to improve the convergence speed. The performance of the algorithms is presented as convergence in terms of iterations and computation time. We found that the Fletcher-Reeves variant of the method of conjugate gradients is the algorithm with the best computational performance. With this algorithm we could speed up computation times by a factor of 4 compared to the method of steepest descent, which was used before. With our new methods it is possible to optimize complex treatment plans in a few minutes, leading to good dose distributions. At the end we discuss future goals concerning dose optimization issues in particle therapy which might benefit from fast optimization solvers.
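The Fletcher-Reeves variant singled out above can be sketched in generic form with a backtracking Armijo line search; TRiP98's actual implementation and line search are not shown here:

```python
def fletcher_reeves(f, grad, x0, iters=100, tol=1e-10):
    """Sketch of nonlinear conjugate gradients with the Fletcher-Reeves beta,
    a backtracking Armijo line search, and a steepest-descent restart."""
    x = list(x0)
    g = grad(x)
    d = [-gi for gi in g]
    for _ in range(iters):
        gg = sum(gi * gi for gi in g)
        if gg < tol:
            break
        slope = sum(gi * di for gi, di in zip(g, d))
        if slope >= 0:                         # not a descent direction: restart
            d = [-gi for gi in g]
            slope = -gg
        step, fx = 1.0, f(x)
        # backtracking until the Armijo sufficient-decrease condition holds
        while f([xi + step * di for xi, di in zip(x, d)]) > fx + 1e-4 * step * slope:
            step *= 0.5
            if step < 1e-12:
                break
        x = [xi + step * di for xi, di in zip(x, d)]
        g_new = grad(x)
        beta = sum(gi * gi for gi in g_new) / gg   # Fletcher-Reeves beta
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        g = g_new
    return x
```

Unlike steepest descent, successive search directions are conjugate with respect to the local curvature, which is the source of the speedup reported above.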
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tumuluru, Jaya Shankar; McCulloch, Richard Chet James
In this work a new hybrid genetic algorithm was developed which combines a rudimentary adaptive steepest ascent hill climbing algorithm with a sophisticated evolutionary algorithm in order to optimize complex multivariate design problems. By combining a highly stochastic algorithm (evolutionary) with a simple deterministic optimization algorithm (adaptive steepest ascent), computational resources are conserved and the solution converges rapidly when compared to either algorithm alone. In genetic algorithms, natural selection is mimicked by random events such as breeding and mutation. In the adaptive steepest ascent algorithm, each variable is perturbed by a small amount and the variable that caused the most improvement is incremented by a small step. If the direction of most benefit is exactly opposite of the previous direction with the most benefit, then the step size is reduced by a factor of 2; thus the step size adapts to the terrain. A graphical user interface was created in MATLAB to provide an interface between the hybrid genetic algorithm and the user. Additional features such as bounding the solution space and weighting the objective functions individually are also built into the interface. The algorithm developed was tested to optimize the functions developed for a wood pelleting process. Using process variables (such as feedstock moisture content, die speed, and preheating temperature) pellet properties were appropriately optimized. Specifically, variables were found which maximized unit density, bulk density, tapped density, and durability while minimizing pellet moisture content and specific energy consumption. The time and computational resources required for the optimization were dramatically decreased using the hybrid genetic algorithm when compared to MATLAB's native evolutionary optimization tool.
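The adaptive steepest-ascent component described above can be sketched as follows; the perturbation size, iteration budget, and stopping rule are illustrative assumptions:

```python
def adaptive_ascent(f, x0, step=0.5, iters=200, min_step=1e-6):
    """Adaptive steepest-ascent hill climb (maximisation): perturb each
    variable by +/- step, move along the most improving coordinate, and
    halve the step when the best direction reverses or none improves."""
    x = list(x0)
    fx = f(x)
    last_dir = None
    for _ in range(iters):
        best_gain, best_move = 0.0, None
        for i in range(len(x)):
            for sign in (+1.0, -1.0):
                cand = x[:]
                cand[i] += sign * step
                gain = f(cand) - fx
                if gain > best_gain:
                    best_gain, best_move = gain, (i, sign)
        if best_move is None:
            step *= 0.5                       # no improving direction: refine
            if step < min_step:
                break
            continue
        i, sign = best_move
        if last_dir == (i, -sign):            # direction flipped: smaller steps
            step *= 0.5
        x[i] += sign * step
        fx = f(x)
        last_dir = (i, sign)
    return x, fx
```

In the hybrid scheme, such a climber polishes the best candidates found by the evolutionary search each generation.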
Discrete size optimization of steel trusses using a refined big bang-big crunch algorithm
NASA Astrophysics Data System (ADS)
Hasançebi, O.; Kazemzadeh Azad, S.
2014-01-01
This article presents a methodology for design optimization of steel truss structures based on a refined big bang-big crunch (BB-BC) algorithm. It is shown that the standard formulation of the BB-BC algorithm occasionally falls short of producing acceptable solutions to problems in discrete size optimum design of steel trusses. A reformulation of the algorithm is proposed and implemented for design optimization of various discrete truss structures according to American Institute of Steel Construction Allowable Stress Design (AISC-ASD) specifications. Furthermore, the performance of the proposed BB-BC algorithm is compared to its standard version as well as to other well-known metaheuristic techniques. The numerical results confirm the efficiency of the proposed algorithm in practical design optimization of truss structures.
2013-01-01
[Truncated record] Intelligently selecting waveform parameters using adaptive algorithms; the adaptive algorithms optimize the waveform parameters based on (1) the EM … the environment. Subject terms: cognitive radar, adaptive sensing, spectrum sensing, multi-objective optimization, genetic algorithms, machine learning. Figures include a detection and classification block diagram and a genetic algorithm block diagram.
Biocontrol of Pathogens in the Meat Chain
NASA Astrophysics Data System (ADS)
Burgess, Catherine M.; Rivas, Lucia; McDonnell, Mary J.; Duffy, Geraldine
Bacterial foodborne zoonotic diseases are of major concern, impacting public health and causing economic losses for the agricultural-food sector and the wider society. In the United States (US) alone, foodborne illness from pathogens is responsible for 76 million cases of illness each year (Mead et al., 1999). Salmonella, Campylobacter jejuni, Enterohaemorrhagic Escherichia coli (EHEC; predominantly serotype O157:H7), and Listeria monocytogenes are the most prevalent foodborne bacterial pathogens reported in the developed world (United States Department of Agriculture, 2001). The importance of meat and meat products as a vehicle of foodborne zoonotic pathogens cannot be overstated (Center for Disease Control, 2006; Gillespie, O'Brien, Adak, Cheasty, & Willshaw, 2005; Mazick, Ethelberg, Nielsen, Molbak, & Lisby, 2006; Mead et al., 2006).
NASA Technical Reports Server (NTRS)
Gaines, G. B.; Thomas, R. E.; Noel, G. T.; Shilliday, T. S.; Wood, V. E.; Carmichael, D. C.
1979-01-01
Potential long-term degradation modes for the two types of modules in the Mead array were determined and judgments were made as to those environmental stresses and combinations of stresses which accelerate the degradation of the power output. Hierarchical trees representing the severity of effects of stresses (test conditions) on eleven individual degradation modes were constructed and were pruned of tests judged to be nonessential. Composites of those trees were developed so that there is now one pruned tree covering eight degradation modes, another covering two degradation modes, and a third covering one degradation mode. These three composite trees form the basis for selection of test conditions in the final test plan which is now being prepared.
Integrated controls design optimization
Lou, Xinsheng; Neuschaefer, Carl H.
2015-09-01
A control system (207) for optimizing a chemical looping process of a power plant includes an optimizer (420), an income algorithm (230), a cost algorithm (225), and chemical looping process models. The process models are used to predict the process outputs from process input variables. Some of the process input and output variables are related to the income of the plant, while others are related to the cost of plant operations. The income algorithm (230) provides an income input to the optimizer (420) based on a plurality of input parameters (215) of the power plant. The cost algorithm (225) provides a cost input to the optimizer (420) based on a plurality of output parameters (220) of the power plant. The optimizer (420) determines an optimized operating parameter solution based on at least one of the income input and the cost input, and supplies the optimized operating parameter solution to the power plant.
NASA Technical Reports Server (NTRS)
Venter, Gerhard; Sobieszczanski-Sobieski, Jaroslaw
2002-01-01
The purpose of this paper is to show how the search algorithm known as particle swarm optimization performs. Here, particle swarm optimization is applied to structural design problems, but the method has a much wider range of possible applications. The paper's new contributions are improvements to the particle swarm optimization algorithm and conclusions and recommendations as to the utility of the algorithm. Results of numerical experiments for both continuous and discrete applications are presented in the paper. The results indicate that the particle swarm optimization algorithm does locate the constrained minimum design in continuous applications with very good precision, albeit at a much higher computational cost than that of a typical gradient-based optimizer. However, the true potential of particle swarm optimization is primarily in applications with discrete and/or discontinuous functions and variables. Additionally, particle swarm optimization has the potential of efficient computation with very large numbers of concurrently operating processors.
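A minimal global-best particle swarm loop, for orientation only: the inertia and acceleration coefficients below are common textbook defaults, not the paper's improved settings, and the sphere objective is an illustrative assumption.

```python
import numpy as np

# Global-best PSO: each particle is pulled toward its own best position and
# the swarm's best position, with inertia w damping the velocity.
def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds[0], float), np.array(bounds[1], float)
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()          # swarm-best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                # keep particles in bounds
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, float(pbest_f.min())

# Example: minimize the sphere function, optimum 0 at the origin.
best_x, best_f = pso(lambda z: float(np.sum(z * z)), ([-5, -5], [5, 5]))
```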
A Danger-Theory-Based Immune Network Optimization Algorithm
Li, Tao; Xiao, Xin; Shi, Yuanquan
2013-01-01
Existing artificial immune optimization algorithms reflect a number of shortcomings, such as premature convergence and poor local search ability. This paper proposes a danger-theory-based immune network optimization algorithm, named dt-aiNet. The danger theory emphasizes that danger signals generated from changes of environments will guide different levels of immune responses, and the areas around danger signals are called danger zones. By defining the danger zone to calculate danger signals for each antibody, the algorithm adjusts antibodies' concentrations through its own danger signals and then triggers immune responses of self-regulation. So the population diversity can be maintained. Experimental results show that the algorithm has more advantages in the solution quality and diversity of the population. Compared with influential optimization algorithms, CLONALG, opt-aiNet, and dopt-aiNet, the algorithm has smaller error values and higher success rates and can find solutions to meet the accuracies within the specified function evaluation times. PMID:23483853
Algorithm comparison for schedule optimization in MR fingerprinting.
Cohen, Ouri; Rosen, Matthew S
2017-09-01
In MR Fingerprinting, the flip angles and repetition times are chosen according to a pseudorandom schedule. In previous work, we have shown that maximizing the discrimination between different tissue types by optimizing the acquisition schedule allows reductions in the number of measurements required. The ideal optimization algorithm for this application remains unknown, however. In this work we examine several different optimization algorithms to determine the one best suited for optimizing MR Fingerprinting acquisition schedules. Copyright © 2017 Elsevier Inc. All rights reserved.
A Particle Swarm Optimization-Based Approach with Local Search for Predicting Protein Folding.
Yang, Cheng-Hong; Lin, Yu-Shiun; Chuang, Li-Yeh; Chang, Hsueh-Wei
2017-10-01
The hydrophobic-polar (HP) model is commonly used for predicting protein folding structures and hydrophobic interactions. This study developed a particle swarm optimization (PSO)-based algorithm combined with local search algorithms; specifically, the high exploration PSO (HEPSO) algorithm (which can execute global search processes) was combined with three local search algorithms (hill-climbing algorithm, greedy algorithm, and Tabu table), yielding the proposed HE-L-PSO algorithm. By using 20 known protein structures, we evaluated the performance of the HE-L-PSO algorithm in predicting protein folding in the HP model. The proposed HE-L-PSO algorithm exhibited favorable performance in predicting both short and long amino acid sequences with high reproducibility and stability, compared with seven reported algorithms. The HE-L-PSO algorithm yielded optimal solutions for all predicted protein folding structures. All HE-L-PSO-predicted protein folding structures possessed a hydrophobic core that is similar to normal protein folding.
Chaotic particle swarm optimization with mutation for classification.
Assarzadeh, Zahra; Naghsh-Nilchi, Ahmad Reza
2015-01-01
In this paper, a chaotic particle swarm optimization with a mutation-based classifier (mutation-based classifier particle swarm optimization) is proposed to classify patterns of different classes in the feature space. The introduced mutation operators and chaotic sequences allow us to overcome the problem of early convergence to a local minimum associated with particle swarm optimization algorithms. That is, the mutation operator sharpens the convergence and tunes the best possible solution. Furthermore, to remove irrelevant data and reduce the dimensionality of medical datasets, a feature selection approach using a binary version of the proposed particle swarm optimization is introduced. To demonstrate the effectiveness of the proposed classifier, mutation-based classifier particle swarm optimization, it was evaluated on three data classification sets, namely Wisconsin diagnostic breast cancer, Wisconsin breast cancer, and heart-statlog, with different feature vector dimensions. The proposed algorithm is compared with different classifier algorithms, including k-nearest neighbor, as a conventional classifier, and particle swarm classifier, genetic algorithm, and imperialist competitive algorithm classifier, as more sophisticated ones. The performance of each classifier was evaluated by calculating the accuracy, sensitivity, specificity, and Matthews correlation coefficient. The experimental results show that the mutation-based classifier particle swarm optimization unequivocally performs better than all the compared algorithms.
A theoretical comparison of evolutionary algorithms and simulated annealing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, W.E.
1995-08-28
This paper theoretically compares the performance of simulated annealing and evolutionary algorithms. Our main result is that under mild conditions a wide variety of evolutionary algorithms can be shown to have greater performance than simulated annealing after a sufficiently large number of function evaluations. This class of EAs includes variants of evolution strategies and evolutionary programming, the canonical genetic algorithm, as well as a variety of genetic algorithms that have been applied to combinatorial optimization problems. The proof of this result is based on a performance analysis of a very general class of stochastic optimization algorithms, which has implications for the performance of a variety of other optimization algorithms.
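For contrast with the evolutionary algorithms analyzed here, a textbook simulated annealing loop looks like this. The neighbourhood width, cooling rate, and multimodal test function are illustrative assumptions, not the paper's analysis setting.

```python
import math
import random

# Simulated annealing: accept downhill moves always, uphill moves with
# Boltzmann probability exp(-delta/t), while t follows a geometric cooling schedule.
def simulated_annealing(f, x0, t0=1.0, cooling=0.995, steps=5000, seed=1):
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, best_f = x, fx
    t = t0
    for _ in range(steps):
        y = x + rng.gauss(0.0, 0.5)          # random neighbour of the current point
        fy = f(y)
        if fy < fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < best_f:
                best, best_f = x, fx
        t *= cooling                          # cool down
    return best, best_f

# A 1-D function with local minima; the global minimum is f(0) = 0.
best, best_f = simulated_annealing(lambda z: z * z + 2.0 * (1 - math.cos(3 * z)), 4.0)
```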
NASA Astrophysics Data System (ADS)
Qyyum, Muhammad Abdul; Long, Nguyen Van Duc; Minh, Le Quang; Lee, Moonyong
2018-01-01
Design optimization of the single mixed refrigerant (SMR) natural gas liquefaction (LNG) process involves highly non-linear interactions between decision variables, constraints, and the objective function. These non-linear interactions lead to irreversibilities that deteriorate the energy efficiency of the LNG process. In this study, a simple and highly efficient hybrid modified coordinate descent (HMCD) algorithm is proposed for the optimization of the natural gas liquefaction process. The single mixed refrigerant process was modeled in Aspen Hysys® and then connected to a Microsoft Visual Studio environment. The proposed algorithm provided improved results compared to existing methodologies in finding the optimal condition of the complex mixed refrigerant natural gas liquefaction process. By applying it, the SMR process can be designed with a specific compression power of 0.2555 kW, equivalent to a 44.3% energy saving compared to the base case; the coefficient of performance (COP) can be enhanced by up to 34.7%. The proposed optimization algorithm provides a deep understanding of the optimization of the liquefaction process from both technical and numerical perspectives. In addition, the HMCD algorithm can be employed for any mixed-refrigerant-based liquefaction process in the natural gas industry.
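The core of a coordinate descent search can be conveyed with a bare cyclic variant using a shrinking step. This is a simplified stand-in, not the paper's hybrid modified (HMCD) variant, whose exact modifications are not reproduced; step sizes and the test function are assumptions.

```python
import numpy as np

# Cyclic coordinate descent: try a +/- step on each coordinate in turn,
# accept any improvement, and shrink the step when a full sweep fails.
def coordinate_descent(f, x0, step=1.0, shrink=0.5, sweeps=60):
    x = np.array(x0, dtype=float)
    for _ in range(sweeps):
        improved = False
        for i in range(len(x)):
            for sign in (+1.0, -1.0):
                trial = x.copy()
                trial[i] += sign * step
                if f(trial) < f(x):
                    x = trial
                    improved = True
                    break                 # move on to the next coordinate
        if not improved:
            step *= shrink                # no coordinate move helped: refine
    return x

# Example: minimize (x-3)^2 + (y+1)^2 starting from the origin.
x = coordinate_descent(lambda v: (v[0] - 3) ** 2 + (v[1] + 1) ** 2, [0.0, 0.0])
# → converges to [3.0, -1.0]
```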
Gradient Optimization for Analytic conTrols - GOAT
NASA Astrophysics Data System (ADS)
Assémat, Elie; Machnes, Shai; Tannor, David; Wilhelm-Mauch, Frank
Quantum optimal control becomes a necessary step in a number of studies in the quantum realm. Recent experimental advances showed that superconducting qubits can be controlled with an impressive accuracy. However, most of the standard optimal control algorithms are not designed to manage such high accuracy. To tackle this issue, a novel quantum optimal control algorithm has been introduced: the Gradient Optimization for Analytic conTrols (GOAT). It avoids the piecewise-constant approximation of the control pulse used by standard algorithms, which allows an efficient implementation of very high accuracy optimization. It also includes a novel method to compute the gradient that provides many advantages, e.g., the absence of backpropagation and a natural route to optimizing the robustness of the control pulses. This talk will present the GOAT algorithm and a few applications to transmon systems.
Evolutionary Dynamic Multiobjective Optimization Via Kalman Filter Prediction.
Muruganantham, Arrchana; Tan, Kay Chen; Vadakkepat, Prahlad
2016-12-01
Evolutionary algorithms are effective in solving static multiobjective optimization problems resulting in the emergence of a number of state-of-the-art multiobjective evolutionary algorithms (MOEAs). Nevertheless, the interest in applying them to solve dynamic multiobjective optimization problems has only been tepid. Benchmark problems, appropriate performance metrics, as well as efficient algorithms are required to further the research in this field. One or more objectives may change with time in dynamic optimization problems. The optimization algorithm must be able to track the moving optima efficiently. A prediction model can learn the patterns from past experience and predict future changes. In this paper, a new dynamic MOEA using Kalman filter (KF) predictions in decision space is proposed to solve the aforementioned problems. The predictions help to guide the search toward the changed optima, thereby accelerating convergence. A scoring scheme is devised to hybridize the KF prediction with a random reinitialization method. Experimental results and performance comparisons with other state-of-the-art algorithms demonstrate that the proposed algorithm is capable of significantly improving the dynamic optimization performance.
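The decision-space prediction idea can be illustrated with a 1-D constant-velocity Kalman filter tracking a drifting optimum, the kind of per-variable predictor that can seed a population near the moved optimum. The process and measurement noise levels here are illustrative assumptions, not the paper's settings.

```python
import numpy as np

# Constant-velocity Kalman filter over state [position, velocity]; only the
# position is observed. Returns the one-step-ahead position prediction after
# each observation.
def kf_track(observations, q=1e-3, r=1e-2):
    F = np.array([[1.0, 1.0], [0.0, 1.0]])    # position advances by velocity
    H = np.array([[1.0, 0.0]])                # we observe position only
    Q, R = q * np.eye(2), np.array([[r]])
    s = np.zeros(2)                           # state estimate
    P = np.eye(2)                             # state covariance
    preds = []
    for z in observations:
        s = F @ s                             # predict
        P = F @ P @ F.T + Q
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
        s = s + (K @ (np.array([z]) - H @ s)).ravel()  # update with observation
        P = (np.eye(2) - K @ H) @ P
        preds.append(float((F @ s)[0]))       # one-step-ahead position
    return preds

# An optimum drifting by +0.1 per step, observed with small Gaussian noise.
true = [0.1 * t for t in range(30)]
rng = np.random.default_rng(0)
preds = kf_track([p + rng.normal(0, 0.05) for p in true])
# The final prediction should sit near the next true position, 3.0.
```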
An optimized algorithm for multiscale wideband deconvolution of radio astronomical images
NASA Astrophysics Data System (ADS)
Offringa, A. R.; Smirnov, O.
2017-10-01
We describe a new multiscale deconvolution algorithm that can also be used in a multifrequency mode. The algorithm only affects the minor clean loop. In single-frequency mode, the minor loop of our improved multiscale algorithm is over an order of magnitude faster than the casa multiscale algorithm, and produces results of similar quality. For multifrequency deconvolution, a technique named joined-channel cleaning is used. In this mode, the minor loop of our algorithm is two to three orders of magnitude faster than casa msmfs. We extend the multiscale mode with automated scale-dependent masking, which allows structures to be cleaned below the noise. We describe a new scale-bias function for use in multiscale cleaning. We test a second deconvolution method that is a variant of the moresane deconvolution technique, and uses a convex optimization technique with isotropic undecimated wavelets as dictionary. On simple well-calibrated data, the convex optimization algorithm produces visually more representative models. On complex or imperfect data, the convex optimization algorithm has stability issues.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beltran, C; Kamal, H
Purpose: To provide a multicriteria optimization algorithm for intensity modulated radiation therapy using pencil proton beam scanning. Methods: Intensity modulated radiation therapy using pencil proton beam scanning requires efficient optimization algorithms to overcome the uncertainties in the Bragg peak locations. This work is focused on optimization algorithms that are based on Monte Carlo simulation of the treatment planning and use the weights and the dose volume histogram (DVH) control points to steer toward desired plans. The proton beam treatment planning process based on single-objective optimization (representing a weighted sum of multiple objectives) usually leads to time-consuming iterations involving treatment planning team members. We provide a time-efficient multicriteria optimization algorithm developed to run on an NVIDIA GPU (Graphical Processing Unit) cluster. The running time of the multicriteria optimization algorithm benefits from up-sampling of the CT voxel size in the calculations without loss of fidelity. Results: We will present preliminary results of multicriteria optimization for intensity modulated proton therapy based on DVH control points, showing optimization results for a phantom case and a brain tumor case. Conclusion: The multicriteria optimization of intensity modulated radiation therapy using pencil proton beam scanning provides a novel tool for treatment planning. Work supported by a grant from Varian Inc.
Tsou, Tsung-Shan
2007-03-30
This paper introduces an exploratory way to determine how variance relates to the mean in generalized linear models. This novel method employs the robust likelihood technique introduced by Royall and Tsou. A urinary data set collected by Ginsberg et al. and the fabric data set analysed by Lee and Nelder are considered to demonstrate the applicability and simplicity of the proposed technique. Application of the proposed method can easily reveal a mean-variance relationship that would generally go unnoticed, or that would require more complex modelling to detect. Copyright (c) 2006 John Wiley & Sons, Ltd.
75 FR 13144 - South Dakota Disaster #SD-00028
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-18
.... Small Business Administration, Processing and Disbursement Center, 14925 Kingsport Road, Fort Worth, TX..., Hyde, Jerauld, Mccook, Mcpherson, Meade, Perkins, Potter, Roberts, Sully, Turner, Walworth, Ziebach...
75 FR 32820 - Kentucky Disaster Number KY-00032
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-09
.... Small Business Administration, Processing and Disbursement Center, 14925 Kingsport Road, Fort Worth, TX..., Marshall, Mccracken, Mclean, Meade, Trigg, Union, Webster. Illinois: Hardin, Massac, Pope. Indiana...
Xiao, Feng; Kong, Lingjiang; Chen, Jian
2017-06-01
A rapid-search algorithm to improve the beam-steering efficiency of a liquid crystal optical phased array is proposed and experimentally demonstrated in this paper. The proposed algorithm, in which the steering efficiency is taken as the objective function and the controlling voltage codes are the optimization variables, consists of a detection stage and a construction stage. It optimizes the steering efficiency in the detection stage and adjusts its search direction adaptively in the construction stage to avoid getting caught in a wrong search space. Simulations were conducted to compare the proposed algorithm with the widely used pattern-search algorithm, using convergence rate and optimized efficiency as criteria. Beam-steering optimization experiments were performed to verify the validity of the proposed method.
Wang, Xue; Wang, Sheng; Ma, Jun-Jie
2007-01-01
The effectiveness of wireless sensor networks (WSNs) depends on the coverage and target detection probability provided by dynamic deployment, which is usually supported by the virtual force (VF) algorithm. However, in the VF algorithm, the virtual force exerted by stationary sensor nodes will hinder the movement of mobile sensor nodes. Particle swarm optimization (PSO) is introduced as another dynamic deployment algorithm, but in this case the computation time required is the big bottleneck. This paper proposes a dynamic deployment algorithm which is named “virtual force directed co-evolutionary particle swarm optimization” (VFCPSO), since this algorithm combines the co-evolutionary particle swarm optimization (CPSO) with the VF algorithm, whereby the CPSO uses multiple swarms to optimize different components of the solution vectors for dynamic deployment cooperatively and the velocity of each particle is updated according to not only the historical local and global optimal solutions, but also the virtual forces of sensor nodes. Simulation results demonstrate that the proposed VFCPSO is competent for dynamic deployment in WSNs and has better performance with respect to computation time and effectiveness than the VF, PSO and VFPSO algorithms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cao Daliang; Earl, Matthew A.; Luan, Shuang
2006-04-15
A new leaf-sequencing approach has been developed that is designed to reduce the number of required beam segments for step-and-shoot intensity modulated radiation therapy (IMRT). This approach to leaf sequencing is called continuous-intensity-map-optimization (CIMO). Using a simulated annealing algorithm, CIMO seeks to minimize differences between the optimized and sequenced intensity maps. Two distinguishing features of the CIMO algorithm are (1) CIMO does not require that each optimized intensity map be clustered into discrete levels and (2) CIMO is not rule-based but rather simultaneously optimizes both the aperture shapes and weights. To test the CIMO algorithm, ten IMRT patient cases were selected (four head-and-neck, two pancreas, two prostate, one brain, and one pelvis). For each case, the optimized intensity maps were extracted from the Pinnacle³ treatment planning system. The CIMO algorithm was applied, and the optimized aperture shapes and weights were loaded back into Pinnacle. A final dose calculation was performed using Pinnacle's convolution/superposition based dose calculation. On average, the CIMO algorithm provided a 54% reduction in the number of beam segments as compared with Pinnacle's leaf sequencer. The plans sequenced using the CIMO algorithm also provided improved target dose uniformity and a reduced discrepancy between the optimized and sequenced intensity maps. For ten clinical intensity maps, comparisons were performed between the CIMO algorithm and the power-of-two reduction algorithm of Xia and Verhey [Med. Phys. 25(8), 1424-1434 (1998)]. When the constraints of a Varian Millennium multileaf collimator were applied, the CIMO algorithm resulted in a 26% reduction in the number of segments. For an Elekta multileaf collimator, the CIMO algorithm resulted in a 67% reduction in the number of segments. An average leaf-sequencing time of less than one minute per beam was observed.
Discrete Bat Algorithm for Optimal Problem of Permutation Flow Shop Scheduling
Luo, Qifang; Zhou, Yongquan; Xie, Jian; Ma, Mingzhi; Li, Liangliang
2014-01-01
A discrete bat algorithm (DBA) is proposed for the optimal permutation flow shop scheduling problem (PFSP). Firstly, the discrete bat algorithm is constructed based on the idea of the basic bat algorithm, dividing the whole scheduling problem into many subscheduling problems to which the NEH heuristic is then applied. Secondly, some subsequences are operated on with certain probability in the pulse emission and loudness phases, and an intensive virtual population neighborhood search is integrated into the discrete bat algorithm to further improve its performance. Finally, the experimental results show the suitability and efficiency of the discrete bat algorithm for the optimal permutation flow shop scheduling problem. PMID:25243220
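The NEH heuristic used above to seed the subscheduling problems can be sketched directly: order jobs by decreasing total processing time, then insert each job at the position that minimizes the partial makespan. The processing-time matrix below is a made-up 4-job, 3-machine instance.

```python
# Makespan of a job sequence in a permutation flow shop; p[j][m] is job j's
# processing time on machine m.
def makespan(seq, p):
    m = len(p[0])
    t = [0] * m                          # running completion time per machine
    for j in seq:
        t[0] += p[j][0]
        for k in range(1, m):
            t[k] = max(t[k], t[k - 1]) + p[j][k]
    return t[-1]

# NEH constructive heuristic.
def neh(p):
    jobs = sorted(range(len(p)), key=lambda j: -sum(p[j]))   # longest jobs first
    seq = [jobs[0]]
    for j in jobs[1:]:
        # try every insertion position and keep the one with smallest makespan
        seq = min(
            (seq[:i] + [j] + seq[i:] for i in range(len(seq) + 1)),
            key=lambda s: makespan(s, p),
        )
    return seq

p = [[3, 4, 6], [5, 5, 2], [1, 4, 2], [6, 2, 1]]   # 4 jobs x 3 machines (toy data)
seq = neh(p)
# NEH finds a sequence with makespan 18 here, versus 20 for the identity order.
```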
Stochastic optimization algorithms for barrier dividend strategies
NASA Astrophysics Data System (ADS)
Yin, G.; Song, Q. S.; Yang, H.
2009-01-01
This work focuses on finding optimal barrier policy for an insurance risk model when the dividends are paid to the share holders according to a barrier strategy. A new approach based on stochastic optimization methods is developed. Compared with the existing results in the literature, more general surplus processes are considered. Precise models of the surplus need not be known; only noise-corrupted observations of the dividends are used. Using barrier-type strategies, a class of stochastic optimization algorithms are developed. Convergence of the algorithm is analyzed; rate of convergence is also provided. Numerical results are reported to demonstrate the performance of the algorithm.
A graph decomposition-based approach for water distribution network optimization
NASA Astrophysics Data System (ADS)
Zheng, Feifei; Simpson, Angus R.; Zecchin, Aaron C.; Deuerlein, Jochen W.
2013-04-01
A novel optimization approach for water distribution network design is proposed in this paper. Using graph theory algorithms, a full water network is first decomposed into different subnetworks based on the connectivity of the network's components. The original whole network is simplified to a directed augmented tree, in which the subnetworks are substituted by augmented nodes and directed links are created to connect them. Differential evolution (DE) is then employed to optimize each subnetwork based on the sequence specified by the assigned directed links in the augmented tree. Rather than optimizing the original network as a whole, the subnetworks are sequentially optimized by the DE algorithm. A solution choice table is established for each subnetwork (except for the subnetwork that includes a supply node) and the optimal solution of the original whole network is finally obtained by use of the solution choice tables. Furthermore, a preconditioning algorithm is applied to the subnetworks to produce an approximately optimal solution for the original whole network. This solution specifies promising regions for the final optimization algorithm to further optimize the subnetworks. Five water network case studies are used to demonstrate the effectiveness of the proposed optimization method. A standard DE algorithm (SDE) and a genetic algorithm (GA) are applied to each case study without network decomposition to enable a comparison with the proposed method. The results show that the proposed method consistently outperforms the SDE and GA (both with tuned parameters) in terms of both the solution quality and efficiency.
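The differential evolution applied to each subnetwork can be sketched as classic DE/rand/1/bin. Population size, F, and CR below are common defaults rather than the paper's tuned parameters, and the shifted-sphere objective is an illustrative stand-in for a network design cost.

```python
import numpy as np

# DE/rand/1/bin: mutate with a scaled difference of two random members,
# binomially cross with the parent, and keep the trial if it is no worse.
def differential_evolution(f, lo, hi, dim=2, pop_size=30, F=0.8, CR=0.9,
                           gens=200, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    cost = np.array([f(p) for p in pop])
    for _ in range(gens):
        for i in range(pop_size):
            r1, r2, r3 = rng.choice([j for j in range(pop_size) if j != i],
                                    3, replace=False)
            mutant = np.clip(pop[r1] + F * (pop[r2] - pop[r3]), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True       # guarantee one mutated gene
            trial = np.where(cross, mutant, pop[i])
            trial_cost = f(trial)
            if trial_cost <= cost[i]:             # greedy one-to-one selection
                pop[i], cost[i] = trial, trial_cost
    best = int(np.argmin(cost))
    return pop[best], float(cost[best])

# Example: minimize a sphere shifted to (1, 1).
best, best_cost = differential_evolution(
    lambda z: float(np.sum((z - 1.0) ** 2)), -5.0, 5.0)
```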
Variational Trajectory Optimization Tool Set: Technical description and user's manual
NASA Technical Reports Server (NTRS)
Bless, Robert R.; Queen, Eric M.; Cavanaugh, Michael D.; Wetzel, Todd A.; Moerder, Daniel D.
1993-01-01
The algorithms that comprise the Variational Trajectory Optimization Tool Set (VTOTS) package are briefly described. The VTOTS is a software package for solving nonlinear constrained optimal control problems from a wide range of engineering and scientific disciplines. The VTOTS package was specifically designed to minimize the amount of user programming; in fact, for problems that may be expressed in terms of analytical functions, the user needs only to define the problem in terms of symbolic variables. This version of the VTOTS does not support tabular data; thus, problems must be expressed in terms of analytical functions. The VTOTS package consists of two methods for solving nonlinear optimal control problems: a time-domain finite-element algorithm and a multiple shooting algorithm. These two algorithms, under the VTOTS package, may be run independently or jointly. The finite-element algorithm generates approximate solutions, whereas the shooting algorithm provides a more accurate solution to the optimization problem. A user's manual, some examples with results, and a brief description of the individual subroutines are included.
Self-adaptive multi-objective harmony search for optimal design of water distribution networks
NASA Astrophysics Data System (ADS)
Choi, Young Hwan; Lee, Ho Min; Yoo, Do Guen; Kim, Joong Hoon
2017-11-01
In multi-objective optimization computing, it is important to assign suitable parameters to each optimization problem to obtain better solutions. In this study, a self-adaptive multi-objective harmony search (SaMOHS) algorithm is developed to apply the parameter-setting-free technique, which is an example of a self-adaptive methodology. The SaMOHS algorithm attempts to remove some of the inconvenience from parameter setting and selects the most adaptive parameters during the iterative solution search process. To verify the proposed algorithm, an optimal least cost water distribution network design problem is applied to three different target networks. The results are compared with other well-known algorithms such as multi-objective harmony search and the non-dominated sorting genetic algorithm-II. The efficiency of the proposed algorithm is quantified by suitable performance indices. The results indicate that SaMOHS can be efficiently applied to the search for Pareto-optimal solutions in a multi-objective solution space.
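A bare-bones single-objective harmony search makes visible the parameters (HMCR, PAR, bandwidth) that SaMOHS adapts during the run; here they are fixed, and the objective and bounds are illustrative assumptions rather than a water network design cost.

```python
import numpy as np

# Harmony search: build each new solution component-wise from memory
# (with optional pitch adjustment) or at random, and replace the worst
# stored harmony when the new one is better.
def harmony_search(f, lo, hi, dim=2, hms=20, hmcr=0.9, par=0.3,
                   bw=0.05, iters=3000, seed=0):
    rng = np.random.default_rng(seed)
    hm = rng.uniform(lo, hi, size=(hms, dim))          # harmony memory
    cost = np.array([f(h) for h in hm])
    for _ in range(iters):
        new = np.empty(dim)
        for d in range(dim):
            if rng.random() < hmcr:                    # memory consideration
                new[d] = hm[rng.integers(hms), d]
                if rng.random() < par:                 # pitch adjustment
                    new[d] += rng.uniform(-bw, bw)
            else:                                      # random selection
                new[d] = rng.uniform(lo, hi)
        new = np.clip(new, lo, hi)
        c = f(new)
        worst = int(np.argmax(cost))
        if c < cost[worst]:
            hm[worst], cost[worst] = new, c
    best = int(np.argmin(cost))
    return hm[best], float(cost[best])

best, best_cost = harmony_search(lambda z: float(np.sum(z * z)), -10.0, 10.0)
```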
Yao, Ke-Han; Jiang, Jehn-Ruey; Tsai, Chung-Hsien; Wu, Zong-Syun
2017-01-01
This paper investigates how to efficiently charge sensor nodes in a wireless rechargeable sensor network (WRSN) with radio frequency (RF) chargers to make the network sustainable. An RF charger is assumed to be equipped with a uniform circular array (UCA) of 12 antennas with the radius λ, where λ is the RF wavelength. The UCA can steer most RF energy in a target direction to charge a specific WRSN node by the beamforming technology. Two evolutionary algorithms (EAs) using the evolution strategy (ES), namely the Evolutionary Beamforming Optimization (EBO) algorithm and the Evolutionary Beamforming Optimization Reseeding (EBO-R) algorithm, are proposed to nearly optimize the power ratio of the UCA beamforming peak side lobe (PSL) and the main lobe (ML) aimed at the given target direction. The proposed algorithms are simulated for performance evaluation and are compared with a related algorithm, called Particle Swarm Optimization Gravitational Search Algorithm-Explore (PSOGSA-Explore), to show their superiority. PMID:28825648
Optimized data fusion for K-means Laplacian clustering
Yu, Shi; Liu, Xinhai; Tranchevent, Léon-Charles; Glänzel, Wolfgang; Suykens, Johan A. K.; De Moor, Bart; Moreau, Yves
2011-01-01
Motivation: We propose a novel algorithm to combine multiple kernels and Laplacians for clustering analysis. The new algorithm is formulated on a Rayleigh quotient objective function and is solved as a bi-level alternating minimization procedure. Using the proposed algorithm, the coefficients of kernels and Laplacians can be optimized automatically. Results: Three variants of the algorithm are proposed. The performance is systematically validated on two real-life data fusion applications. The proposed Optimized Kernel Laplacian Clustering (OKLC) algorithms perform significantly better than other methods. Moreover, the coefficients of kernels and Laplacians optimized by OKLC show some correlation with the rank of performance of individual data source. Though in our evaluation the K values are predefined, in practical studies, the optimal cluster number can be consistently estimated from the eigenspectrum of the combined kernel Laplacian matrix. Availability: The MATLAB code of algorithms implemented in this paper is downloadable from http://homes.esat.kuleuven.be/~sistawww/bioi/syu/oklc.html. Contact: shiyu@uchicago.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:20980271
Particle swarm optimization - Genetic algorithm (PSOGA) on linear transportation problem
NASA Astrophysics Data System (ADS)
Rahmalia, Dinita
2017-08-01
The Linear Transportation Problem (LTP) is a constrained optimization problem in which we minimize cost subject to the balance between total supply and total demand. Exact methods such as the northwest corner, Vogel, Russell, and minimal cost methods have been applied to approach the optimal solution. In this paper, we use a heuristic, Particle Swarm Optimization (PSO), to solve the linear transportation problem for any number of decision variables. In addition, we combine the mutation operator of the Genetic Algorithm (GA) with PSO to improve the optimal solution. This method is called Particle Swarm Optimization - Genetic Algorithm (PSOGA). The simulations show that PSOGA can improve on the optimal solution produced by PSO.
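The hybridization described above, PSO velocity updates combined with a GA-style mutation on particle positions, can be sketched as follows; this is an illustrative implementation under assumed parameter values, not the authors' code.

```python
import random

def psoga(cost, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, pm=0.1):
    """PSO with a GA-style mutation operator applied to particle positions."""
    pos = [[random.uniform(0, 1) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                     # personal bests
    gbest = min(pbest, key=cost)[:]                 # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
                # GA mutation operator: random reset with probability pm
                if random.random() < pm:
                    pos[i][d] = random.uniform(0, 1)
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=cost)[:]
    return gbest
```

The mutation step injects diversity that plain PSO lacks once the swarm contracts, which is the mechanism the abstract credits for the improved solutions.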
Fast optimization of glide vehicle reentry trajectory based on genetic algorithm
NASA Astrophysics Data System (ADS)
Jia, Jun; Dong, Ruixing; Yuan, Xuejun; Wang, Chuangwei
2018-02-01
An optimization method for reentry trajectories based on a genetic algorithm is presented to meet the needs of reentry trajectory optimization for glide vehicles. The dynamic model of the glide vehicle during the reentry period is established. Considering constraints on heat flux, dynamic pressure, overload, etc., the optimization of the reentry trajectory is investigated using a genetic algorithm. The simulation shows that the method presented in this paper is effective for optimizing glide-vehicle reentry trajectories. Its efficiency and speed are comparable with those reported in the references. The optimization results satisfy all constraints, and fast online optimization is made possible by preprocessing offline samples.
2012-01-01
Background Multi-target therapeutics has been shown to be effective for treating complex diseases, and currently, it is a common practice to combine multiple drugs to treat such diseases to optimize the therapeutic outcomes. However, considering the huge number of possible ways to mix multiple drugs at different concentrations, it is practically difficult to identify the optimal drug combination through exhaustive testing. Results In this paper, we propose a novel stochastic search algorithm, called the adaptive reference update (ARU) algorithm, that can provide an efficient and systematic way for optimizing multi-drug cocktails. The ARU algorithm iteratively updates the drug combination to improve its response, where the update is made by comparing the response of the current combination with that of a reference combination, based on which the beneficial update direction is predicted. The reference combination is continuously updated based on the drug response values observed in the past, thereby adapting to the underlying drug response function. To demonstrate the effectiveness of the proposed algorithm, we evaluated its performance based on various multi-dimensional drug functions and compared it with existing algorithms. Conclusions Simulation results show that the ARU algorithm significantly outperforms existing stochastic search algorithms, including the Gur Game algorithm. In fact, the ARU algorithm can more effectively identify potent drug combinations and it typically spends fewer iterations for finding effective combinations. Furthermore, the ARU algorithm is robust to random fluctuations and noise in the measured drug response, which makes the algorithm well-suited for practical drug optimization applications. PMID:23134742
Bacanin, Nebojsa; Tuba, Milan
2014-01-01
Portfolio optimization (selection) problem is an important and hard optimization problem that, with the addition of necessary realistic constraints, becomes computationally intractable. Nature-inspired metaheuristics are appropriate for solving such problems; however, literature review shows that there are very few applications of nature-inspired metaheuristics to portfolio optimization problem. This is especially true for swarm intelligence algorithms which represent the newer branch of nature-inspired algorithms. No application of any swarm intelligence metaheuristics to cardinality constrained mean-variance (CCMV) portfolio problem with entropy constraint was found in the literature. This paper introduces modified firefly algorithm (FA) for the CCMV portfolio model with entropy constraint. Firefly algorithm is one of the latest, very successful swarm intelligence algorithm; however, it exhibits some deficiencies when applied to constrained problems. To overcome lack of exploration power during early iterations, we modified the algorithm and tested it on standard portfolio benchmark data sets used in the literature. Our proposed modified firefly algorithm proved to be better than other state-of-the-art algorithms, while introduction of entropy diversity constraint further improved results.
Range image registration based on hash map and moth-flame optimization
NASA Astrophysics Data System (ADS)
Zou, Li; Ge, Baozhen; Chen, Lei
2018-03-01
Over the past decade, evolutionary algorithms (EAs) have been introduced to solve range image registration problems because of their robustness and high precision. However, EA-based range image registration algorithms are time-consuming. To reduce the computational time, an EA-based range image registration algorithm using hash map and moth-flame optimization is proposed. In this registration algorithm, a hash map is used to avoid over-exploitation in registration process. Additionally, we present a search equation that is better at exploration and a restart mechanism to avoid being trapped in local minima. We compare the proposed registration algorithm with the registration algorithms using moth-flame optimization and several state-of-the-art EA-based registration algorithms. The experimental results show that the proposed algorithm has a lower computational cost than other algorithms and achieves similar registration precision.
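One generic way to realize the hash-map idea, avoiding repeated evaluation of a costly fitness function on near-identical candidates, is a memoizing wrapper keyed on rounded candidate vectors. The sketch below is an assumed illustration, not the authors' implementation:

```python
def make_cached_fitness(fitness, decimals=4):
    """Wrap an expensive fitness function with a hash-map cache.

    Candidate vectors are rounded before hashing so near-identical
    candidates generated during the search hit the same cache entry,
    preventing over-exploitation of already-evaluated regions.
    """
    cache = {}
    stats = {"calls": 0, "hits": 0}

    def cached(candidate):
        key = tuple(round(x, decimals) for x in candidate)
        stats["calls"] += 1
        if key in cache:
            stats["hits"] += 1
        else:
            cache[key] = fitness(candidate)
        return cache[key]

    return cached, stats
```

In a registration setting the candidate vector would be the pose parameters (rotation and translation) and the fitness the point-to-point alignment error.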
Ye, Fei; Lou, Xin Yuan; Sun, Lin Fu
2017-01-01
This paper proposes a new support vector machine (SVM) optimization scheme based on an improved chaotic fruit fly optimization algorithm (FOA) with a mutation strategy to simultaneously perform parameter tuning for the SVM and feature selection. In the improved FOA, a chaotic particle initializes the fruit fly swarm location and replaces the distance expression used by the fruit fly to find the food source. In addition, the proposed mutation strategy uses two distinct generative mechanisms for new food sources at the osphresis phase, allowing the algorithm to search for the optimal solution both in the whole solution space and within the local solution space containing the fruit fly swarm location. In an evaluation on a group of ten benchmark problems, the proposed algorithm's performance is compared with that of other well-known algorithms, and the results support the superiority of the proposed algorithm. Moreover, this algorithm is successfully applied in an SVM to perform both parameter tuning and feature selection to solve real-world classification problems. This method, called the chaotic fruit fly optimization algorithm (CIFOA)-SVM, has been shown to be a more robust and effective optimization method than other well-known methods, particularly for the medical diagnosis problem and the credit card problem.
Ma, Changxi; Hao, Wei; Pan, Fuquan; Xiang, Wang
2018-01-01
Route optimization of hazardous materials transportation is one of the basic steps in ensuring the safety of hazardous materials transportation. The optimization scheme may be a security risk if road screening is not completed before the distribution route is optimized. For road screening issues of hazardous materials transportation, a road screening algorithm of hazardous materials transportation is built based on genetic algorithm and Levenberg-Marquardt neural network (GA-LM-NN) by analyzing 15 attributes data of each road network section. A multi-objective robust optimization model with adjustable robustness is constructed for the hazardous materials transportation problem of single distribution center to minimize transportation risk and time. A multi-objective genetic algorithm is designed to solve the problem according to the characteristics of the model. The algorithm uses an improved strategy to complete the selection operation, applies partial matching cross shift and single ortho swap methods to complete the crossover and mutation operation, and employs an exclusive method to construct Pareto optimal solutions. Studies show that the sets of hazardous materials transportation road can be found quickly through the proposed road screening algorithm based on GA-LM-NN, whereas the distribution route Pareto solutions with different levels of robustness can be found rapidly through the proposed multi-objective robust optimization model and algorithm.
A Guiding Evolutionary Algorithm with Greedy Strategy for Global Optimization Problems
Cao, Leilei; Xu, Lihong; Goodman, Erik D.
2016-01-01
A Guiding Evolutionary Algorithm (GEA) with greedy strategy for global optimization problems is proposed. Inspired by Particle Swarm Optimization, the Genetic Algorithm, and the Bat Algorithm, the GEA was designed to retain some advantages of each method while avoiding some disadvantages. In contrast to the usual Genetic Algorithm, each individual in GEA is crossed with the current global best one instead of a randomly selected individual. The current best individual served as a guide to attract offspring to its region of genotype space. Mutation was added to offspring according to a dynamic mutation probability. To increase the capability of exploitation, a local search mechanism was applied to new individuals according to a dynamic probability of local search. Experimental results show that GEA outperformed the other three typical global optimization algorithms with which it was compared. PMID:27293421
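The three ingredients described in the abstract (crossover with the current global best, a dynamic mutation probability, and a probabilistic local search) can be combined in a minimal sketch like the following; the decay schedules and operators here are assumptions, not the published GEA:

```python
import random

def gea(cost, dim, pop_size=30, iters=200, bounds=(-5.0, 5.0)):
    """Guiding-EA sketch: every individual is crossed with the current
    global best; mutation and local-search probabilities vary over time."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    best = min(pop, key=cost)
    for t in range(iters):
        p_mut = 0.2 * (1 - t / iters)   # dynamic mutation probability (decays)
        p_loc = 0.1 * (t / iters)       # local search used more in late phase
        new_pop = []
        for ind in pop:
            # crossover with the global best instead of a random mate
            child = [b if random.random() < 0.5 else x for b, x in zip(best, ind)]
            child = [x + random.gauss(0, 0.5) if random.random() < p_mut else x
                     for x in child]
            if random.random() < p_loc:  # small perturbation for exploitation
                child = [x + random.gauss(0, 0.05) for x in child]
            new_pop.append(min((child, ind), key=cost))  # greedy replacement
        pop = new_pop
        best = min(pop + [best], key=cost)
    return best
```

The greedy replacement keeps each lineage monotonically improving, which is the "greedy strategy" of the title.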
NASA Astrophysics Data System (ADS)
Salcedo-Sanz, S.; Camacho-Gómez, C.; Magdaleno, A.; Pereira, E.; Lorenzana, A.
2017-04-01
In this paper we tackle a problem of optimal design and location of Tuned Mass Dampers (TMDs) for structures subjected to earthquake ground motions, using a novel meta-heuristic algorithm. Specifically, the Coral Reefs Optimization (CRO) with Substrate Layer (CRO-SL) is proposed as a competitive co-evolution algorithm with different exploration procedures within a single population of solutions. The proposed approach is able to solve the TMD design and location problem, by exploiting the combination of different types of searching mechanisms. This promotes a powerful evolutionary-like algorithm for optimization problems, which is shown to be very effective in this particular problem of TMDs tuning. The proposed algorithm's performance has been evaluated and compared with several reference algorithms in two building models with two and four floors, respectively.
Adaptive particle swarm optimization for optimal orbital elements of binary stars
NASA Astrophysics Data System (ADS)
Attia, Abdel-Fattah
2016-12-01
The paper presents adaptive particle swarm optimization (APSO) as an alternative method for determining the optimal orbital elements of the star η Bootis of MK type G0 IV. The proposed algorithm transforms the problem of finding periodic orbits into the problem of detecting the global minimizers of a function, to obtain a best fit of the Keplerian and phase curves. The experimental results demonstrate that the proposed APSO approach is generally more accurate than standard particle swarm optimization (PSO) and other published optimization algorithms in terms of solution accuracy, convergence speed, and algorithm reliability.
Comparison of genetic algorithm methods for fuel management optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeChaine, M.D.; Feltus, M.A.
1995-12-31
The CIGARO system was developed for genetic-algorithm fuel management optimization. Tests were performed to find the best probability for the fuel-location swap mutation operator and to compare the genetic algorithm to a truly random search method. Tests showed that the fuel swap probability should be between 0% and 10%; a 50% probability definitely hampered the optimization. The genetic algorithm performed significantly better than the random search method, which did not even satisfy the peak normalized power constraint.
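A fuel-location swap mutation of the kind tuned here can be sketched generically; the operator below exchanges two core positions with a given probability (an illustration, not the CIGARO code):

```python
import random

def swap_mutation(layout, p_swap):
    """Location-swap mutation: with probability p_swap, exchange the
    contents of two randomly chosen core positions. The layout remains
    a permutation of the same fuel assemblies."""
    layout = layout[:]  # do not mutate the caller's list
    if random.random() < p_swap:
        i, j = random.sample(range(len(layout)), 2)
        layout[i], layout[j] = layout[j], layout[i]
    return layout
```

The abstract's finding is that p_swap in the 0-0.1 range works best, while 0.5 disrupts too many good layouts per generation.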
Multiple shooting algorithms for jump-discontinuous problems in optimal control and estimation
NASA Technical Reports Server (NTRS)
Mook, D. J.; Lew, Jiann-Shiun
1991-01-01
Multiple shooting algorithms are developed for jump-discontinuous two-point boundary value problems arising in optimal control and optimal estimation. Examples illustrating the origin of such problems are given to motivate the development of the solution algorithms. The algorithms convert the necessary conditions, consisting of differential equations and transversality conditions, into algebraic equations. The solution of the algebraic equations provides exact solutions for linear problems. The existence and uniqueness of the solution are proved.
Aerodynamic Shape Optimization Using A Real-Number-Encoded Genetic Algorithm
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Pulliam, Thomas H.
2001-01-01
A new method for aerodynamic shape optimization using a genetic algorithm with real number encoding is presented. The algorithm is used to optimize three different problems, a simple hill climbing problem, a quasi-one-dimensional nozzle problem using an Euler equation solver and a three-dimensional transonic wing problem using a nonlinear potential solver. Results indicate that the genetic algorithm is easy to implement and extremely reliable, being relatively insensitive to design space noise.
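The abstract does not spell out the real-number crossover operator; a common choice for real-encoded genetic algorithms is BLX-alpha (blend) crossover, sketched below as an assumed example rather than the paper's operator:

```python
import random

def blend_crossover(parent_a, parent_b, alpha=0.5):
    """BLX-alpha crossover for real-number-encoded chromosomes: each child
    gene is drawn uniformly from the interval spanned by the parent genes,
    extended by alpha times its width on each side."""
    child = []
    for a, b in zip(parent_a, parent_b):
        lo, hi = min(a, b), max(a, b)
        span = hi - lo
        child.append(random.uniform(lo - alpha * span, hi + alpha * span))
    return child
```

Real encodings like this avoid the discretization noise of binary encodings, which is consistent with the reported insensitivity to design-space noise.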
GAMBIT: A Parameterless Model-Based Evolutionary Algorithm for Mixed-Integer Problems.
Sadowski, Krzysztof L; Thierens, Dirk; Bosman, Peter A N
2018-01-01
Learning and exploiting problem structure is one of the key challenges in optimization. This is especially important for black-box optimization (BBO) where prior structural knowledge of a problem is not available. Existing model-based Evolutionary Algorithms (EAs) are very efficient at learning structure in both the discrete, and in the continuous domain. In this article, discrete and continuous model-building mechanisms are integrated for the Mixed-Integer (MI) domain, comprising discrete and continuous variables. We revisit a recently introduced model-based evolutionary algorithm for the MI domain, the Genetic Algorithm for Model-Based mixed-Integer opTimization (GAMBIT). We extend GAMBIT with a parameterless scheme that allows for practical use of the algorithm without the need to explicitly specify any parameters. We furthermore contrast GAMBIT with other model-based alternatives. The ultimate goal of processing mixed dependences explicitly in GAMBIT is also addressed by introducing a new mechanism for the explicit exploitation of mixed dependences. We find that processing mixed dependences with this novel mechanism allows for more efficient optimization. We further contrast the parameterless GAMBIT with Mixed-Integer Evolution Strategies (MIES) and other state-of-the-art MI optimization algorithms from the General Algebraic Modeling System (GAMS) commercial algorithm suite on problems with and without constraints, and show that GAMBIT is capable of solving problems where variable dependences prevent many algorithms from successfully optimizing them.
Finite Set Control Transcription for Optimal Control Applications
2009-05-01
Excerpts: categories of optimization algorithms; the Finite Set Control Transcription (FSCT) method relies on a Nonlinear Programming (NLP) algorithm, such as SNOPT (hereafter called the optimizer); artificial neural networks, genetic algorithms, or combinations thereof are used for analysis, with a biological neural network cited as an example.
ERIC Educational Resources Information Center
Tran, Huu-Khoa; Chiou, Juing-Shian; Peng, Shou-Tao
2016-01-01
In this paper, the feasibility of Genetic Algorithm Optimization (GAO) education software based on a Fuzzy Logic Controller (GAO-FLC) for simulating the flight motion control of Unmanned Aerial Vehicles (UAVs) is investigated. The generated flight trajectories integrate the fuzzy controller Scaling Factor (SF) gains optimized by the GAO algorithm. The…
Comparison and optimization of radar-based hail detection algorithms in Slovenia
NASA Astrophysics Data System (ADS)
Stržinar, Gregor; Skok, Gregor
2018-05-01
Four commonly used radar-based hail detection algorithms are evaluated and optimized in Slovenia. The algorithms are verified against ground observations of hail at manned stations in the period between May and August, from 2002 to 2010. The algorithms are optimized by determining the optimal values of all possible algorithm parameters. A number of different contingency-table-based scores are evaluated with a combination of Critical Success Index and frequency bias proving to be the best choice for optimization. The best performance indexes are given by Waldvogel and the severe hail index, followed by vertically integrated liquid and maximum radar reflectivity. Using the optimal parameter values, a hail frequency climatology map for the whole of Slovenia is produced. The analysis shows that there is a considerable variability of hail occurrence within the Republic of Slovenia. The hail frequency ranges from almost 0 to 1.7 hail days per year with an average value of about 0.7 hail days per year.
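The two contingency-table scores combined for the optimization are standard: with hits a, false alarms b, and misses c, the Critical Success Index is a/(a+b+c) and the frequency bias is (a+b)/(a+c). A minimal sketch:

```python
def csi(hits, misses, false_alarms):
    """Critical Success Index: hits / (hits + misses + false alarms).

    Ranges from 0 (no skill) to 1 (perfect); ignores correct negatives."""
    return hits / (hits + misses + false_alarms)

def frequency_bias(hits, misses, false_alarms):
    """Frequency bias: forecast event count over observed event count.

    1.0 means the algorithm detects hail as often as it is observed;
    values above 1 indicate over-detection, below 1 under-detection."""
    return (hits + false_alarms) / (hits + misses)
```

Optimizing CSI alone tends to tolerate over-detection, which is why the study pairs it with the frequency bias.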
Mohamed, Ahmed F; Elarini, Mahdi M; Othman, Ahmed M
2014-05-01
One of the most recent optimization techniques applied to the optimal design of photovoltaic system to supply an isolated load demand is the Artificial Bee Colony Algorithm (ABC). The proposed methodology is applied to optimize the cost of the PV system including photovoltaic, a battery bank, a battery charger controller, and inverter. Two objective functions are proposed: the first one is the PV module output power which is to be maximized and the second one is the life cycle cost (LCC) which is to be minimized. The analysis is performed based on measured solar radiation and ambient temperature measured at Helwan city, Egypt. A comparison between ABC algorithm and Genetic Algorithm (GA) optimal results is done. Another location is selected which is Zagazig city to check the validity of ABC algorithm in any location. The ABC is more optimal than GA. The results encouraged the use of the PV systems to electrify the rural sites of Egypt.
Hybrid-optimization strategy for the communication of large-scale Kinetic Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Wu, Baodong; Li, Shigang; Zhang, Yunquan; Nie, Ningming
2017-02-01
The parallel Kinetic Monte Carlo (KMC) algorithm based on domain decomposition has been widely used in large-scale physical simulations. However, the communication overhead of the parallel KMC algorithm is critical and severely degrades overall performance and scalability. In this paper, we present a hybrid optimization strategy to reduce the communication overhead of parallel KMC simulations. We first propose a communication aggregation algorithm to reduce the total number of messages and eliminate communication redundancy. Then, we use shared memory to reduce the memory-copy overhead of intra-node communication. Finally, we optimize the communication scheduling using neighborhood collective operations. We demonstrate the scalability and high performance of our hybrid optimization strategy through both theoretical and experimental analysis. Results show that the optimized KMC algorithm exhibits better performance and scalability than the well-known open-source library SPPARKS. On a 32-node Xeon E5-2680 cluster (640 cores in total), the optimized algorithm reduces the communication time by 24.8% compared with SPPARKS.
Genetic algorithms for multicriteria shape optimization of induction furnace
NASA Astrophysics Data System (ADS)
Kůs, Pavel; Mach, František; Karban, Pavel; Doležel, Ivo
2012-09-01
In this contribution we deal with multi-criteria shape optimization of an induction furnace. We want to find shape parameters of the furnace such that two different criteria are optimized. Since they cannot be optimized simultaneously, instead of a single optimum we find a set of partially optimal designs, the so-called Pareto front. We compare two different approaches to the optimization, one using the nonlinear conjugate gradient method and the second using a variant of a genetic algorithm. As can be seen from the numerical results, the genetic algorithm seems to be the right choice for this problem. The direct problem (a coupled problem consisting of magnetic and heat fields) is solved using our own code, Agros2D. It uses finite elements of higher order, leading to a fast and accurate solution of a relatively complicated coupled problem. It also provides advanced scripting support, allowing us to prepare a parametric model of the furnace and simply incorporate various types of optimization algorithms.
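Extracting the Pareto front from a set of evaluated designs, the "set of partially optimal designs" mentioned above, reduces to filtering non-dominated points. A generic sketch for minimized objectives (illustrative, not the Agros2D tooling):

```python
def pareto_front(points):
    """Return the non-dominated subset for minimization of all objectives.

    A point p dominates q if p is no worse than q in every objective and
    strictly better in at least one."""
    def dominates(p, q):
        return (all(a <= b for a, b in zip(p, q))
                and any(a < b for a, b in zip(p, q)))
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

For two criteria, as in the furnace problem, each element of the returned front represents a design where improving one criterion necessarily worsens the other.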
Reservoir-induced deformation and continental rheology in vicinity of Lake Mead, Nevada
NASA Astrophysics Data System (ADS)
Kaufmann, Georg; Amelung, Falk
2000-07-01
Lake Mead is a large reservoir in Nevada, formed by the construction of the 221-m-high Hoover Dam in the Black Canyon of the Colorado River. The lake encompasses an area of 635 km², and the total volume of the reservoir is 35.5 km³. Filling started in February 1935. On the basis of a first-order leveling in 1935, several levelings were carried out to measure the deformation induced by the load of the reservoir. Subsidence in the central parts of the lake relative to the first leveling was around 120 mm (1941), 218 mm (1950), and 200 mm (1963). The subsidence pattern clearly shows relaxation of the underlying basement due to the water load of the lake, which ceased after 1950. Modeling of the relaxation process by means of layered, viscoelastic, compressible flat Earth models with a detailed representation of the spatial and temporal distribution of the water load shows that the thickness of the elastic crust underneath Lake Mead is 30±3 km. The data are also consistent with a 10-km-thick elastic upper crust and a 20-km-thick viscoelastic lower crust, with 10^20 Pa s as a lower bound for its viscosity. The subcrust has an average viscosity of 10^(18±0.2) Pa s, a surprisingly low value. The leveling data constrain the viscosity profile down to ~200 km depth.
Gondolellid conodonts and depositional setting of the Phosphoria Formation
Wardlaw, Bruce R.
2015-01-01
The Phosphoria Formation and related rocks were deposited over an 8.9 m.y. interval beginning approximately 274.0 Ma and ending approximately 265.1 Ma. The Meade Peak Phosphatic Shale Member was deposited in southeastern Idaho and adjacent Wyoming over 5.4 m.y. from approximately 273.2 to 268.6 Ma. The Retort Phosphatic Shale Member was deposited in southwestern Montana and west-central Wyoming over 1.3 m.y. from approximately 267.4 to 266.1 Ma. The base of the Roadian Stage of the Middle Permian occurs within the lower phosphate zone of the Meade Peak. The base of the Wordian Stage occurs within the upper phosphate zone of the Meade Peak. The presence of a cool-water brachiopod fauna, cool-water conodont faunas, and the absence of fusulinids throughout the Phosphoria basin indicate the presence of pervasive cool, upwelling waters. Acritarchs are intimately associated with phosphorites and phosphatic shales and may have been the primary organic producer to help drive phosphate production. The gondolellid conodont fauna of the Phosphoria Formation links a geographic cline of Jinogondolella nankingensis from the Delaware basin, West Texas, to the Sverdrup basin, Canadian Arctic, and shows distinct differentiation in species distribution, as do other conodont groups, within the Phosphoria basin. Ten species and two subspecies of gondolellid conodonts are recognized from the Phosphoria Formation and related rocks that belong to Mesogondolella and Jinogondolella.
Automated Lead Optimization of MMP-12 Inhibitors Using a Genetic Algorithm.
Pickett, Stephen D; Green, Darren V S; Hunt, David L; Pardoe, David A; Hughes, Ian
2011-01-13
Traditional lead optimization projects involve long synthesis and testing cycles, favoring extensive structure-activity relationship (SAR) analysis and molecular design steps, in an attempt to limit the number of cycles that a project must run to optimize a development candidate. Microfluidic-based chemistry and biology platforms, with cycle times of minutes rather than weeks, lend themselves to unattended autonomous operation. The bottleneck in the lead optimization process is therefore shifted from synthesis or test to SAR analysis and design. As such, the way is open to an algorithm-directed process, without the need for detailed user data analysis. Here, we present results of two synthesis and screening experiments, undertaken using traditional methodology, to validate a genetic algorithm optimization process for future application to a microfluidic system. The algorithm has several novel features that are important for the intended application. For example, it is robust to missing data and can suggest compounds for retest to ensure reliability of optimization. The algorithm is first validated on a retrospective analysis of an in-house library embedded in a larger virtual array of presumed inactive compounds. In a second, prospective experiment with MMP-12 as the target protein, 140 compounds are submitted for synthesis over 10 cycles of optimization. Comparison is made to the results from the full combinatorial library that was synthesized manually and tested independently. The results show that compounds selected by the algorithm are heavily biased toward the more active regions of the library, while the algorithm is robust to both missing data (compounds where synthesis failed) and inactive compounds. This publication places the full combinatorial library and biological data into the public domain with the intention of advancing research into algorithm-directed lead optimization methods.
NASA Technical Reports Server (NTRS)
Rash, James
2014-01-01
NASA's space data-communications infrastructure (the Space Network and the Ground Network) provides scheduled (as well as some limited types of unscheduled) data-communications services to user spacecraft. The Space Network operates several orbiting geostationary platforms (the Tracking and Data Relay Satellite System (TDRSS)), each with its own service-delivery antennas onboard. The Ground Network operates service-delivery antennas at ground stations located around the world. Together, these networks enable data transfer between user spacecraft and their mission control centers on Earth. Scheduling data-communications events for spacecraft that use the NASA communications infrastructure (the relay satellites and the ground stations) can be accomplished today with software having an operational heritage dating from the 1980s or earlier. An implementation of the scheduling methods and algorithms disclosed and formally specified herein will produce globally optimized schedules with not only optimized service delivery by the space data-communications infrastructure but also optimized satisfaction of all user requirements and prescribed constraints, including radio frequency interference (RFI) constraints. Evolutionary algorithms, a class of probabilistic strategies for searching large solution spaces, are the essential technology invoked and exploited in this disclosure. Also disclosed are secondary methods and algorithms for optimizing the execution efficiency of the schedule-generation algorithms themselves. The scheduling methods and algorithms as presented are adaptable to accommodate the complexity of scheduling the civilian and/or military data-communications infrastructure within the expected range of future users and space- or ground-based service-delivery assets. Finally, the problem itself, and the methods and algorithms, are generalized and specified formally.
The generalized methods and algorithms are applicable to a very broad class of combinatorial-optimization problems that encompasses, among many others, the problem of generating optimal space-data communications schedules.
An Efficient Optimization Method for Solving Unsupervised Data Classification Problems.
Shabanzadeh, Parvaneh; Yusof, Rubiyah
2015-01-01
Unsupervised data classification (or clustering) analysis is one of the most useful tools and a descriptive task in data mining that seeks to classify homogeneous groups of objects based on similarity, and it is used in many medical disciplines and various applications. In general, there is no single algorithm that is suitable for all types of data, conditions, and applications. Each algorithm has its own advantages, limitations, and deficiencies. Hence, research into novel and effective approaches for unsupervised data classification is still active. In this paper, a heuristic algorithm, the Biogeography-Based Optimization (BBO) algorithm, which is inspired by the natural distribution of species in biogeography, was adapted for data clustering problems by modifying the main operators of the BBO algorithm. Similar to other population-based algorithms, the BBO algorithm starts with an initial population of candidate solutions to an optimization problem and an objective function that is calculated for them. To evaluate the performance of the proposed algorithm, an assessment was carried out on six medical and real-life datasets, and the algorithm was compared with eight well-known and recent unsupervised data classification algorithms. Numerical results demonstrate that the proposed evolutionary optimization algorithm is efficient for unsupervised data classification.
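The migration step that distinguishes BBO from other population-based methods can be sketched as follows; this is a generic illustration using a linear migration model, not the authors' modified clustering operators:

```python
import random

def bbo_migration(pop, fitness):
    # Rank habitats by fitness (best first, minimization); better habitats
    # get higher emigration and lower immigration rates (linear model).
    n = len(pop)
    order = sorted(range(n), key=lambda i: fitness(pop[i]))
    rank = {idx: r for r, idx in enumerate(order)}
    new_pop = []
    for i, habitat in enumerate(pop):
        lam = rank[i] / (n - 1)                  # immigration rate in [0, 1]
        new_habitat = habitat[:]
        for d in range(len(habitat)):
            if random.random() < lam:
                # Roulette-select an emigrating habitat by emigration rate.
                mu = [1 - rank[j] / (n - 1) for j in range(n)]
                total = sum(mu)
                r, acc = random.uniform(0, total), 0.0
                for j in range(n):
                    acc += mu[j]
                    if acc >= r:
                        new_habitat[d] = pop[j][d]
                        break
        new_pop.append(new_habitat)
    return new_pop

random.seed(5)
sphere = lambda p: sum(t * t for t in p)          # toy objective
pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(10)]
out = bbo_migration(pop, sphere)
```

Because the best-ranked habitat has immigration rate zero, good solutions spread their features through the population without being overwritten themselves.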
Fuel management optimization using genetic algorithms and expert knowledge
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeChaine, M.D.; Feltus, M.A.
1996-09-01
The CIGARO fuel management optimization code based on genetic algorithms is described and tested. The test problem optimized the core lifetime for a pressurized water reactor with a penalty function constraint on the peak normalized power. A bit-string genotype encoded the loading patterns, and genotype bias was reduced with additional bits. Expert knowledge about fuel management was incorporated into the genetic algorithm. Regional crossover exchanged physically adjacent fuel assemblies and improved the optimization slightly. Biasing the initial population toward a known priority table significantly improved the optimization.
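A bit-string genetic algorithm with an initial population biased toward expert knowledge, as described above, can be sketched like this; the fitness function, bias pattern, and probabilities are illustrative assumptions, not the CIGARO code:

```python
import random

def biased_population(size, n_bits, bias, p_follow=0.7):
    # Each individual is a bit string; with probability p_follow a bit
    # copies the expert-knowledge bias pattern, otherwise it is random.
    return [
        [bias[i] if random.random() < p_follow else random.randint(0, 1)
         for i in range(n_bits)]
        for _ in range(size)
    ]

def evolve(fitness, pop, generations=50, p_mut=0.01):
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: len(pop) // 2]            # elitist selection
        children = []
        while len(children) < len(pop) - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(a))     # one-point crossover
            child = a[:cut] + b[cut:]
            # Bit-flip mutation via XOR with a Bernoulli draw.
            children.append([bit ^ (random.random() < p_mut) for bit in child])
        pop = parents + children
    return max(pop, key=fitness)

# Toy problem: maximize the number of ones (stand-in for core lifetime).
random.seed(0)
best = evolve(sum, biased_population(20, 16, [1] * 16))
```

The biased initializer plays the role of the priority table mentioned above: the search starts near a known-good region instead of from purely random loading patterns.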
Network-optimized congestion pricing : a parable, model and algorithm
DOT National Transportation Integrated Search
1995-05-31
This paper recites a parable, formulates a model and devises an algorithm for optimizing tolls on a road network. Such tolls induce an equilibrium traffic flow that is at once system-optimal and user-optimal. The parable introduces the network-wide c...
2017-01-01
In this paper, we propose a new automatic hyperparameter selection approach for determining the optimal network configuration (network structure and hyperparameters) for deep neural networks using particle swarm optimization (PSO) in combination with a steepest gradient descent algorithm. In the proposed approach, network configurations are coded as a set of real-number m-dimensional vectors serving as the individuals of the PSO algorithm in the search procedure. During the search procedure, the PSO algorithm is employed to search for optimal network configurations via the particles moving in a finite search space, and the steepest gradient descent algorithm is used to train the DNN classifier with a few training epochs (to find a local optimal solution) during the population evaluation of PSO. After the optimization scheme, the steepest gradient descent algorithm is performed with more epochs, using the final solutions (pbest and gbest) of the PSO algorithm, to train a final ensemble model and individual DNN classifiers, respectively. The local search ability of the steepest gradient descent algorithm and the global search capabilities of the PSO algorithm are exploited to determine an optimal solution that is close to the global optimum. We conducted several experiments on hand-written characters and biological activity prediction datasets to show that the DNN classifiers trained with the network configurations expressed by the final solutions of the PSO algorithm, employed to construct an ensemble model and individual classifiers, outperform the random approach in terms of generalization performance. Therefore, the proposed approach can be regarded as an alternative tool for automatic network structure and parameter selection for deep neural networks. PMID:29236718
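The PSO search loop underlying approaches like this one can be sketched as follows; the inertia and acceleration coefficients are conventional textbook values, and the sphere objective stands in for the (far more expensive) DNN training evaluation:

```python
import random

def pso(f, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5,
        lo=-5.0, hi=5.0):
    # Positions encode candidate solutions (e.g. network hyperparameters).
    x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in x]                      # personal best positions
    pbest_val = [f(p) for p in x]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                x[i][d] = min(hi, max(lo, x[i][d] + v[i][d]))  # clamp to bounds
            val = f(x[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = x[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = x[i][:], val
    return gbest, gbest_val

random.seed(1)
best, best_val = pso(lambda p: sum(t * t for t in p), dim=3)
```

In the paper's setting, evaluating `f` would mean training the DNN for a few epochs with the decoded configuration, which is exactly why keeping the swarm and iteration counts small matters.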
Optimally stopped variational quantum algorithms
NASA Astrophysics Data System (ADS)
Vinci, Walter; Shabani, Alireza
2018-04-01
Quantum processors promise a paradigm shift in high-performance computing which needs to be assessed by accurate benchmarking measures. In this article, we introduce a benchmark for the variational quantum algorithm (VQA), recently proposed as a heuristic algorithm for small-scale quantum processors. In VQA, a classical optimization algorithm guides the processor's quantum dynamics to yield the best solution for a given problem. A complete assessment of the scalability and competitiveness of VQA should take into account both the quality and the time of dynamics optimization. The method of optimal stopping, employed here, provides such an assessment by explicitly including time as a cost factor. Here, we showcase this measure for benchmarking VQA as a solver for some quadratic unconstrained binary optimization problems. Moreover, we show that a better choice for the cost function of the classical routine can significantly improve the performance of the VQA and even improve its scaling properties.
NASA Astrophysics Data System (ADS)
Venkateswara Rao, B.; Kumar, G. V. Nagesh; Chowdary, D. Deepak; Bharathi, M. Aruna; Patra, Stutee
2017-07-01
This paper presents a new metaheuristic algorithm, the Cuckoo Search Algorithm (CSA), for solving the optimal power flow (OPF) problem with minimization of real power generation cost. The CSA is found to be highly efficient for solving single-objective optimal power flow problems. The performance of the CSA is tested on the IEEE 57-bus test system with real power generation cost minimization as the objective function. The Static VAR Compensator (SVC) is one of the best shunt-connected devices in the Flexible Alternating Current Transmission System (FACTS) family. It is capable of controlling the voltage magnitudes of buses by injecting reactive power into the system. In this paper, the SVC is integrated into the CSA-based optimal power flow to optimize the real power generation cost, and it is used to improve the voltage profile of the system. The CSA gives better results than the genetic algorithm (GA) both without and with the SVC.
Application of Particle Swarm Optimization Algorithm in the Heating System Planning Problem
Ma, Rong-Jiang; Yu, Nan-Yang; Hu, Jun-Yi
2013-01-01
Based on the life cycle cost (LCC) approach, this paper presents an integral mathematical model and a particle swarm optimization (PSO) algorithm for the heating system planning (HSP) problem. The proposed mathematical model minimizes the cost of the heating system as the objective for a given life cycle time. Owing to the particularities of the HSP problem, the general particle swarm optimization algorithm was improved. An actual case study was calculated to check its feasibility in practical use. The results show that the improved particle swarm optimization (IPSO) algorithm solves the HSP problem better than the basic PSO algorithm. Moreover, the results also show the potential to provide useful information for decisions in the practical planning process. Therefore, it is believed that if this approach is applied correctly and in combination with other elements, it can become a powerful and effective optimization tool for the HSP problem. PMID:23935429
Optimization of wireless sensor networks based on chicken swarm optimization algorithm
NASA Astrophysics Data System (ADS)
Wang, Qingxi; Zhu, Lihua
2017-05-01
In order to reduce the energy consumption of wireless sensor networks and improve network lifetime, a clustering routing protocol for wireless sensor networks based on the chicken swarm optimization algorithm is proposed. Building on the LEACH protocol, cluster formation and cluster-head selection are improved using the chicken swarm optimization algorithm, and the positions of individuals that fall into local optima are updated by Levy flight, enhancing population diversity and ensuring the global search capability of the algorithm. The new protocol balances usage across network nodes, avoiding the early death of intensively used nodes and improving the lifetime of the wireless sensor network. Simulation experiments show that the protocol outperforms the LEACH protocol in energy consumption, and also outperforms a clustering routing protocol based on particle swarm optimization.
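The Levy-flight perturbation used to free individuals stuck in local optima is commonly implemented with Mantegna's algorithm; a minimal sketch follows, with an illustrative step scale rather than the paper's exact update rule:

```python
import math
import random

def levy_step(beta=1.5):
    # Mantegna's algorithm: draws a heavy-tailed step length whose
    # distribution approximates a Levy stable law with index beta.
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta
                * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def escape_local_optimum(position, best, scale=0.01):
    # Perturb a stagnated individual relative to the best-known solution;
    # occasional large Levy steps jump out of the local basin.
    return [p + scale * levy_step() * (p - b) for p, b in zip(position, best)]

random.seed(2)
moved = escape_local_optimum([1.0, 2.0, 3.0], [0.5, 1.5, 2.5])
```

Most steps are small (local refinement), but the heavy tail occasionally produces a long jump, which is what restores diversity when the swarm stagnates.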
Particle swarm optimization based space debris surveillance network scheduling
NASA Astrophysics Data System (ADS)
Jiang, Hai; Liu, Jing; Cheng, Hao-Wen; Zhang, Yao
2017-02-01
The increasing number of space debris has created an orbital debris environment that poses increasing impact risks to existing space systems and human space flights. For the safety of in-orbit spacecrafts, we should optimally schedule surveillance tasks for the existing facilities to allocate resources in a manner that most significantly improves the ability to predict and detect events involving affected spacecrafts. This paper analyzes two criteria that mainly affect the performance of a scheduling scheme and introduces an artificial intelligence algorithm into the scheduling of tasks of the space debris surveillance network. A new scheduling algorithm based on the particle swarm optimization algorithm is proposed, which can be implemented in two different ways: individual optimization and joint optimization. Numerical experiments with multiple facilities and objects are conducted based on the proposed algorithm, and simulation results have demonstrated the effectiveness of the proposed algorithm.
A general optimality criteria algorithm for a class of engineering optimization problems
NASA Astrophysics Data System (ADS)
Belegundu, Ashok D.
2015-05-01
An optimality criteria (OC)-based algorithm for optimization of a general class of nonlinear programming (NLP) problems is presented. The algorithm is only applicable to problems where the objective and constraint functions satisfy certain monotonicity properties. For multiply constrained problems which satisfy these assumptions, the algorithm is attractive compared with existing NLP methods as well as prevalent OC methods, as the latter involve computationally expensive active set and step-size control strategies. The fixed point algorithm presented here is applicable not only to structural optimization problems but also to certain problems that occur in resource allocation and inventory models. Convergence aspects are discussed. The fixed point update, or resizing formula, is given physical significance, which brings out a strength-and-trim feature. The number of function evaluations remains independent of the number of variables, allowing the efficient solution of problems with a large number of variables.
Ren, Tao; Zhang, Chuan; Lin, Lin; Guo, Meiting; Xie, Xionghang
2014-01-01
We address the scheduling problem for a no-wait flow shop to optimize total completion time with release dates. With the tool of asymptotic analysis, we prove that the objective values of two SPTA-based algorithms converge to the optimal value for sufficiently large-sized problems. To further enhance the performance of the SPTA-based algorithms, an improvement scheme based on local search is provided for moderate scale problems. A new lower bound is presented for evaluating the asymptotic optimality of the algorithms. Numerical simulations demonstrate the effectiveness of the proposed algorithms. PMID:24764774
NASA Astrophysics Data System (ADS)
Abdeh-Kolahchi, A.; Satish, M.; Datta, B.
2004-05-01
A state-of-the-art groundwater monitoring network design is introduced. The method combines groundwater flow and transport results with a Genetic Algorithm (GA) optimization to identify optimal monitoring well locations. Optimization theory uses different techniques to find a set of parameter values that minimize or maximize objective functions. The suggested optimal groundwater monitoring network design is based on the objective of maximizing the probability of tracking a transient contamination plume by determining sequential monitoring locations. The MODFLOW and MT3DMS models, included as separate modules within the Groundwater Modeling System (GMS), are used to develop three-dimensional groundwater flow and contaminant transport simulations. The groundwater flow and contamination simulation results are introduced as input to the optimization model, which uses a Genetic Algorithm (GA) to identify the optimal groundwater monitoring network design from several candidate monitoring locations. The monitoring network design model uses a Genetic Algorithm with binary variables representing potential monitoring locations. As the number of decision variables and constraints increases, the nonlinearity of the objective function also increases, making it difficult to obtain optimal solutions. The genetic algorithm is an evolutionary global optimization technique capable of finding the optimal solution for many complex problems. In this study, the ability of the GA approach to find the global optimal solution to a groundwater monitoring network design problem involving 18.4 × 10^18 feasible solutions will be discussed. However, to ensure the efficiency of the solution process and the global optimality of the solution obtained using GA, appropriate GA parameter values must be specified.
The sensitivity of genetic algorithm parameters such as the random number seed, crossover probability, mutation probability, and elitism is discussed for the solution of the monitoring network design problem.
Comparison of evolutionary algorithms for LPDA antenna optimization
NASA Astrophysics Data System (ADS)
Lazaridis, Pavlos I.; Tziris, Emmanouil N.; Zaharis, Zaharias D.; Xenos, Thomas D.; Cosmas, John P.; Gallion, Philippe B.; Holmes, Violeta; Glover, Ian A.
2016-08-01
A novel approach to broadband log-periodic antenna design is presented, in which some of the most powerful evolutionary algorithms are applied and compared for the optimal design of wire log-periodic dipole arrays (LPDA) using the Numerical Electromagnetics Code. The target is to achieve an optimal antenna design with respect to maximum gain, gain flatness, front-to-rear ratio (F/R) and standing wave ratio. The parameters of the LPDA optimized are the dipole lengths, the spacing between the dipoles, and the dipole wire diameters. The evolutionary algorithms compared are Differential Evolution (DE), Particle Swarm Optimization (PSO), Taguchi, Invasive Weed Optimization (IWO), and Adaptive Invasive Weed Optimization (ADIWO). Superior performance is achieved by the IWO (best results) and PSO (fast convergence) algorithms.
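As a minimal reference for one of the compared algorithms, a DE/rand/1/bin generation can be sketched as follows; F and CR are conventional defaults, and the sphere function stands in for the antenna cost function evaluated by the electromagnetics code:

```python
import random

def de_step(pop, f, F=0.5, CR=0.9):
    # One generation of DE/rand/1/bin: mutate, crossover, greedy select.
    new_pop = []
    for i, target in enumerate(pop):
        a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
        j_rand = random.randrange(len(target))    # guarantee one mutated gene
        trial = [a[d] + F * (b[d] - c[d])
                 if (random.random() < CR or d == j_rand) else target[d]
                 for d in range(len(target))]
        new_pop.append(trial if f(trial) <= f(target) else target)
    return new_pop

random.seed(3)
sphere = lambda p: sum(t * t for t in p)
pop = [[random.uniform(-5, 5) for _ in range(4)] for _ in range(15)]
for _ in range(200):
    pop = de_step(pop, sphere)
best = min(pop, key=sphere)
```

In the antenna problem the decision vector would hold dipole lengths, spacings, and wire diameters, with `f` returning a weighted cost over gain, F/R, and SWR.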
Optimal recombination in genetic algorithms for flowshop scheduling problems
NASA Astrophysics Data System (ADS)
Kovalenko, Julia
2016-10-01
The optimal recombination problem consists in finding the best possible offspring as a result of a recombination operator in a genetic algorithm, given two parent solutions. We prove NP-hardness of the optimal recombination for various variants of the flowshop scheduling problem with makespan criterion and criterion of maximum lateness. An algorithm for solving the optimal recombination problem for permutation flowshop problems is built, using enumeration of perfect matchings in a special bipartite graph. The algorithm is adapted for the classical flowshop scheduling problem and for the no-wait flowshop problem. It is shown that the optimal recombination problem for the permutation flowshop scheduling problem is solvable in polynomial time for almost all pairs of parent solutions as the number of jobs tends to infinity.
Application of genetic algorithm in modeling on-wafer inductors for up to 110 GHz
NASA Astrophysics Data System (ADS)
Liu, Nianhong; Fu, Jun; Liu, Hui; Cui, Wenpu; Liu, Zhihong; Liu, Linlin; Zhou, Wei; Wang, Quan; Guo, Ao
2018-05-01
In this work, the genetic algorithm has been introduced into parameter extraction for on-wafer inductors for up to 110 GHz millimeter-wave operation, and nine independent parameters of the equivalent circuit model are optimized together. With the genetic algorithm, the model with the optimized parameters gives a better fitting accuracy than with the preliminary parameters obtained without optimization. In particular, the fitting accuracy of the Q value achieves a significant improvement after the optimization.
Social Emotional Optimization Algorithm for Nonlinear Constrained Optimization Problems
NASA Astrophysics Data System (ADS)
Xu, Yuechun; Cui, Zhihua; Zeng, Jianchao
The nonlinear programming problem is an important branch of operational research and has been successfully applied to various real-life problems. In this paper, a new approach called the Social Emotional Optimization Algorithm (SEOA) is used to solve this problem; it is a new swarm intelligence technique that simulates human behavior guided by emotion. Simulation results show that the social emotional optimization algorithm proposed in this paper is effective and efficient for nonlinear constrained programming problems.
Chaotic Particle Swarm Optimization with Mutation for Classification
Assarzadeh, Zahra; Naghsh-Nilchi, Ahmad Reza
2015-01-01
In this paper, a chaotic particle swarm optimization with mutation-based classifier particle swarm optimization is proposed to classify patterns of different classes in the feature space. The introduced mutation operators and chaotic sequences allow us to overcome the problem of early convergence into a local minimum associated with particle swarm optimization algorithms. That is, the mutation operator sharpens the convergence and tunes the best possible solution. Furthermore, to remove irrelevant data and reduce the dimensionality of medical datasets, a feature selection approach using a binary version of the proposed particle swarm optimization is introduced. In order to demonstrate the effectiveness of our proposed classifier, mutation-based classifier particle swarm optimization, it is evaluated on three classification datasets, namely Wisconsin diagnostic breast cancer, Wisconsin breast cancer, and heart-statlog, with different feature vector dimensions. The proposed algorithm is compared with different classifier algorithms including k-nearest neighbor, as a conventional classifier, and particle swarm classifier, genetic algorithm, and imperialist competitive algorithm classifier, as more sophisticated ones. The performance of each classifier was evaluated by calculating the accuracy, sensitivity, specificity and Matthews correlation coefficient. The experimental results show that the mutation-based classifier particle swarm optimization unequivocally performs better than all the compared algorithms. PMID:25709937
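Chaotic sequences for PSO are commonly generated with the fully chaotic logistic map, substituted for the uniform random draws in the velocity update; a minimal sketch (the specific map and parameters are an assumption, not necessarily the authors' choice):

```python
def logistic_map(x0=0.7, n=10, r=4.0):
    # Fully chaotic logistic map (r = 4): deterministic but aperiodic,
    # ergodic over (0, 1), so it can replace uniform random draws in the
    # PSO velocity update to improve search diversity.
    seq, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        seq.append(x)
    return seq

seq = logistic_map()
```

Because successive values are deterministic yet non-repeating, the swarm samples the coefficient space more evenly than with pseudo-random numbers, which is the usual motivation for chaotic PSO variants.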
Capitanescu, F; Rege, S; Marvuglia, A; Benetto, E; Ahmadi, A; Gutiérrez, T Navarrete; Tiruta-Barna, L
2016-07-15
Empowering decision makers with cost-effective solutions for reducing the environmental burden of industrial processes, at both design and operation stages, is nowadays a major worldwide concern. The paper addresses this issue for the sector of drinking water production plants (DWPPs), seeking optimal solutions that trade off operation cost and life cycle assessment (LCA)-based environmental impact while satisfying outlet water quality criteria. This leads to a challenging bi-objective constrained optimization problem, which relies on a computationally expensive, intricate process-modelling simulator of the DWPP and has to be solved with a limited computational budget. Since mathematical programming methods are unusable in this case, the paper examines the performance in tackling these challenges of six off-the-shelf state-of-the-art global meta-heuristic optimization algorithms suitable for such simulation-based optimization, namely the Strength Pareto Evolutionary Algorithm (SPEA2), Non-dominated Sorting Genetic Algorithm (NSGA-II), Indicator-Based Evolutionary Algorithm (IBEA), Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D), Differential Evolution (DE), and Particle Swarm Optimization (PSO). The results of optimization reveal that good reductions in both the operating cost and the environmental impact of the DWPP can be obtained. Furthermore, NSGA-II outperforms the other competing algorithms while MOEA/D and DE perform unexpectedly poorly.
AI-BL1.0: a program for automatic on-line beamline optimization using the evolutionary algorithm.
Xi, Shibo; Borgna, Lucas Santiago; Zheng, Lirong; Du, Yonghua; Hu, Tiandou
2017-01-01
In this report, AI-BL1.0, an open-source Labview-based program for automatic on-line beamline optimization, is presented. The optimization algorithms used in the program are Genetic Algorithm and Differential Evolution. Efficiency was improved by use of a strategy known as Observer Mode for Evolutionary Algorithm. The program was constructed and validated at the XAFCA beamline of the Singapore Synchrotron Light Source and 1W1B beamline of the Beijing Synchrotron Radiation Facility.
Belief Propagation Algorithm for Portfolio Optimization Problems
Shinzato, Takashi; Yasuda, Muneki
2015-01-01
The typical behavior of optimal solutions to portfolio optimization problems with absolute deviation and expected shortfall models using replica analysis was pioneeringly estimated by S. Ciliberti et al. [Eur. Phys. B. 57, 175 (2007)]; however, they have not yet developed an approximate derivation method for finding the optimal portfolio with respect to a given return set. In this study, an approximation algorithm based on belief propagation for the portfolio optimization problem is presented using the Bethe free energy formalism, and the consistency of the numerical experimental results of the proposed algorithm with those of replica analysis is confirmed. Furthermore, the conjecture of H. Konno and H. Yamazaki, that the optimal solutions with the absolute deviation model and with the mean-variance model have the same typical behavior, is verified using replica analysis and the belief propagation algorithm. PMID:26305462
Nash equilibrium and multi criterion aerodynamic optimization
NASA Astrophysics Data System (ADS)
Tang, Zhili; Zhang, Lianhe
2016-06-01
Game theory, and in particular its Nash Equilibrium (NE), has been gaining importance in solving Multi Criterion Optimization (MCO) problems in engineering over the past decade. The solution of an MCO problem can be viewed as a NE under the concept of competitive games. This paper surveys and proposes four efficient algorithms for calculating the NE of an MCO problem. Existence and equivalence of the solution are analyzed and proved in the paper based on the fixed point theorem. A specific virtual symmetric Nash game is also presented to set up an optimization strategy for single-objective optimization problems. Two numerical examples are presented to verify the proposed algorithms. One is the optimization of mathematical functions, illustrating the detailed numerical procedures of the algorithms; the other is aerodynamic drag reduction of a civil transport wing-fuselage configuration using the virtual game. The successful application validates the efficiency of the algorithms in solving complex aerodynamic optimization problems.
Thrust stand evaluation of engine performance improvement algorithms in an F-15 airplane
NASA Technical Reports Server (NTRS)
Conners, Timothy R.
1992-01-01
Results are presented from the evaluation of the performance seeking control (PSC) optimization algorithm developed by Smith et al. (1990) for F-15 aircraft, which optimizes the quasi-steady-state performance of an F100 derivative turbofan engine for several modes of operation. The PSC algorithm uses an onboard software engine model that calculates thrust, stall margin, and other unmeasured variables for use in the optimization. Comparisons are presented between the load cell measurements, PSC onboard model thrust calculations, and posttest state variable model computations. Actual performance improvements using the PSC algorithm are presented for its various modes. The results of using the PSC algorithm are compared with similar test case results using the HIDEC algorithm.
An Effective Hybrid Evolutionary Algorithm for Solving the Numerical Optimization Problems
NASA Astrophysics Data System (ADS)
Qian, Xiaohong; Wang, Xumei; Su, Yonghong; He, Liu
2018-04-01
There are many different algorithms for solving complex optimization problems. Each algorithm has been applied successfully to some optimization problems, but not efficiently to others. In this paper, the Cauchy mutation and the multi-parent hybrid operator are combined to propose a hybrid evolutionary algorithm based on communication (Mixed Evolutionary Algorithm based on Communication), hereinafter referred to as CMEA. The basic idea of the CMEA algorithm is that the initial population is divided into two subpopulations. Cauchy mutation operators and multi-parent crossover operators are applied to the two subpopulations, which evolve in parallel until the stopping conditions are met. When the subpopulations are reorganized, individuals are exchanged together with their information. The algorithm flow is given and the performance of the algorithm is compared using a number of standard test functions. Simulation results show that this algorithm converges significantly faster than the FEP (Fast Evolutionary Programming) algorithm, has good global convergence and stability, and is superior to the other compared algorithms.
Lee, Jong-Seok; Park, Cheol Hoon
2010-08-01
We propose a novel stochastic optimization algorithm, hybrid simulated annealing (SA), to train hidden Markov models (HMMs) for visual speech recognition. In our algorithm, SA is combined with a local optimization operator that substitutes a better solution for the current one to improve the convergence speed and the quality of solutions. We mathematically prove that the sequence of the objective values converges in probability to the global optimum in the algorithm. The algorithm is applied to train HMMs that are used as visual speech recognizers. While the popular training method of HMMs, the expectation-maximization algorithm, achieves only local optima in the parameter space, the proposed method can perform global optimization of the parameters of HMMs and thereby obtain solutions yielding improved recognition performance. The superiority of the proposed algorithm to the conventional ones is demonstrated via isolated word recognition experiments.
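The hybrid scheme, simulated annealing plus a local operator that always keeps an improving neighbor, can be sketched on a generic continuous objective; the cooling schedule, neighborhood, and objective here are illustrative assumptions rather than the HMM training setup of the paper:

```python
import math
import random

def hybrid_sa(f, x0, t0=1.0, alpha=0.95, iters=500, step=0.5):
    # Simulated annealing with a greedy local-improvement rule layered on
    # top of the usual Metropolis acceptance criterion.
    x, fx, t = x0[:], f(x0), t0
    for _ in range(iters):
        cand = [xi + random.gauss(0, step) for xi in x]   # random neighbor
        fc = f(cand)
        # Local optimization operator: always substitute an improving
        # neighbor; otherwise accept with the Metropolis probability.
        if fc < fx or random.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
        t *= alpha                                        # geometric cooling
    return x, fx

random.seed(4)
x, fx = hybrid_sa(lambda p: sum(t * t for t in p), [3.0, -2.0])
```

In the paper's application, `x` would be the HMM parameter vector and `f` the (negated) training likelihood; the greedy substitution is what speeds convergence relative to plain SA, while the Metropolis step preserves the global search property.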