Global smoothing and continuation for large-scale molecular optimization
More, J.J.; Wu, Zhijun
1995-10-01
We discuss the formulation of optimization problems that arise in the study of distance geometry, ionic systems, and molecular clusters. We show that continuation techniques based on global smoothing are applicable to these molecular optimization problems, and we outline the issues that must be resolved in the solution of large-scale molecular optimization problems.
NASA Astrophysics Data System (ADS)
Mikhalev, A. S.; Rouban, A. I.
2016-04-01
Algorithms are constructed for the global minimization of non-differentiable functions over sets of mixed variables: continuous variables and discrete variables with unordered possible values. The optimization method is based on selective averaging of the required variables, on adaptive adjustment of the size of the admissible domain of trial movements, and on the use of relative values of the minimized functions. The presence of discrete variables leads to the solution of a sequence of global minimization problems for functions in the space of continuous variables only, in the presence of: (1) inequality constraints specific to each problem; or (2) inequality constraints common to all problems (i.e., when the inequality-constraint functions do not depend on the discrete variables). In the first case, discrete variables with unordered non-numeric values lead to a sequence of global minimization problems for multiextremal functions over continuous variables subject to inequality constraints, and the best of the resulting optima is selected. In the second case, all minimized functions are combined at each sampling point into a single multiextremal function, which is then minimized over the continuous variables.
NASA Astrophysics Data System (ADS)
Tsoulos, Ioannis G.; Lagaris, Isaac E.
2006-01-01
A new stochastic method for locating the global minimum of a multidimensional function inside a rectangular hyperbox is presented. A sampling technique is employed that makes use of the procedure known as grammatical evolution. The method can be considered a "genetic" modification of the Controlled Random Search procedure due to Price. The user may code the objective function either in C++ or in Fortran 77. We offer a comparison of the new method with others of similar structure by presenting results of computational experiments on a set of test functions. Program summary: Title of program: GenPrice. Catalogue identifier: ADWP. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADWP. Program available from: CPC Program Library, Queen's University of Belfast, N. Ireland. Computer for which the program is designed and others on which it has been tested: the tool is designed to be portable to all systems running the GNU C++ compiler. Installation: University of Ioannina, Greece. Programming language used: GNU-C++, GNU-C, GNU Fortran-77. Memory required to execute with typical data: 200 KB. No. of bits in a word: 32. No. of processors used: 1. Has the code been vectorized or parallelized?: no. No. of lines in distributed program, including test data, etc.: 13 135. No. of bytes in distributed program, including test data, etc.: 78 512. Distribution format: tar.gz. Nature of physical problem: A multitude of problems in science and engineering are often reduced to minimizing a function of many variables. There are instances where a local optimum does not correspond to the desired physical solution, and hence the search for a better solution is required. Local optimization techniques are frequently trapped in local minima. Global optimization is hence the appropriate tool. For example, when solving a nonlinear system of equations via optimization, employing a "least squares" type of objective, one may encounter many local minima that do not correspond to solutions, i.e. minima with values
Towards continuous global measurements and optimal emission estimates of NF3
NASA Astrophysics Data System (ADS)
Arnold, T.; Muhle, J.; Salameh, P.; Harth, C.; Ivy, D. J.; Weiss, R. F.
2011-12-01
We present an analytical method for the continuous in situ measurement of nitrogen trifluoride (NF3), an anthropogenic gas with a global warming potential of ~16800 over a 100-year time horizon. NF3 is not included in national emissions reporting inventories under the United Nations Framework Convention on Climate Change (UNFCCC). However, it is a rapidly emerging greenhouse gas due to emission from a growing number of manufacturing facilities with increasing output and modern end-use applications, namely in microcircuit etching and in the production of flat panel displays and thin-film photovoltaic cells. Despite success in measuring the most volatile long-lived halogenated species such as CF4, the Medusa preconcentration GC/MS system of Miller et al. (2008) is unable to detect NF3 under remote operation. Using altered techniques of gas separation and chromatography after initial preconcentration, we are now able to make continuous atmospheric measurements of NF3 with average precisions < 1.5% (1 s.d.) for modern background air samples. Most notably, the suite of gases previously measured by Medusa (the significant halogenated species listed under both the Montreal and Kyoto Protocols) can also be quantified from the same sample. Our technique was used to extend the most recent atmospheric measurements into 2011 and to complete the background Southern Hemispheric trend over the past three decades using samples from the Cape Grim Air Archive. Using these latest results and those from Weiss et al. (2008), we present optimised annual emission estimates using a 2D atmospheric transport model (AGAGE 12-box model) and an inverse method (Rigby et al., 2011). We calculate emissions during 2010 of 7.6 +/- 1.3 kt (equivalent to 13 million metric tons of CO2), which is estimated to be around 6% of the total NF3 produced. Emission factors are shown to have reduced over the last decade; however, rising production and end-use have caused the average global atmospheric concentration
Software for global optimization
Mockus, L.
1994-12-31
The interactive graphical software that implements numerical methods and other techniques to solve global optimization problems is presented. The Bayesian approach to optimization is the underlying idea of the numerical methods used. The software is designed to solve deterministic and stochastic problems of different complexity and with many variables. It includes global and local optimization methods for differentiable and nondifferentiable functions. The implemented numerical techniques for global optimization vary from simple Monte-Carlo simulation to Bayesian methods by J. Mockus and extrapolation-theory-based methods by Zilinskas. Local optimization techniques include the simplex method of Nelder and Mead, the nonlinear programming method of Schittkowski, and the method of stochastic approximation with Bayesian step-size control by J. Mockus. The software is interactive: it allows the user to start and stop a chosen method of global or local optimization, define and change its parameters, and examine the solution process. Output from the solution process is both numerical and graphical. Currently available graphical features are the projection of the objective function onto a chosen plane and a convergence plot. Both features let the user easily observe the solution process and interactively modify it. More features can be added in a standard way. It is up to the user how many graphical and numerical output features to activate or deactivate at any given time. The software is implemented in C++ using X Windows as the graphical platform.
Homotopy optimization methods for global optimization.
Dunlavy, Daniel M.; O'Leary, Dianne P.
2005-12-01
We define a new method for global optimization, the Homotopy Optimization Method (HOM). This method differs from previous homotopy and continuation methods in that its aim is to find a minimizer for each of a set of values of the homotopy parameter, rather than to follow a path of minimizers. We define a second method, called HOPE, by allowing HOM to follow an ensemble of points obtained by perturbation of previous ones. We relate this new method to standard methods such as simulated annealing and show under what circumstances it is superior. We present results of extensive numerical experiments demonstrating the performance of HOM and HOPE.
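The core HOM idea (minimize the homotopy function at each of a ladder of parameter values, warm-starting from the previous minimizer) can be sketched as follows. This is an invented one-dimensional illustration, not the authors' implementation; the functions, step counts, and learning rate are assumptions:

```python
# Homotopy H(x, t) = (1 - t) * g(x) + t * f(x): g is an easy convex function,
# f is the target; we minimize H at t = 0, 0.1, ..., 1, warm-starting each stage.

def f(x):                      # target: double well with minima near +/- 1/sqrt(2)
    return x**4 - x**2

def g(x):                      # easy convex start function
    return (x - 0.5)**2

def grad(h, x, eps=1e-6):      # central-difference derivative
    return (h(x + eps) - h(x - eps)) / (2 * eps)

def hom(f, g, steps=10, iters=200, lr=0.05):
    x = 0.5                    # minimizer of g
    for k in range(steps + 1):
        t = k / steps
        H = lambda z: (1 - t) * g(z) + t * f(z)
        for _ in range(iters): # descend on the current homotopy function
            x -= lr * grad(H, x)
    return x

x_star = hom(f, g)
```

Because each stage starts from the previous minimizer, the method tracks the well of f that the deformation leads into (here the positive well near 1/sqrt(2)), rather than following a single path of minimizers.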
Bayesian approach to global discrete optimization
Mockus, J.; Mockus, A.; Mockus, L.
1994-12-31
We discuss advantages and disadvantages of the Bayesian approach (average-case analysis). We present a portable interactive version of software for continuous global optimization. We consider practical multidimensional problems of continuous global optimization, such as optimization of VLSI yield, optimization of composite laminates, and estimation of unknown parameters of bilinear time series. We extend the Bayesian approach to discrete optimization. We regard discrete optimization as a multi-stage decision problem. We assume that there exists some simple heuristic function which roughly predicts the consequences of the decisions. We suppose randomized decisions. We define the probability of a decision by a randomized decision function depending on the heuristics. We fix this function with the exception of some parameters. We repeat the randomized decision several times at fixed values of those parameters and accept the best decision as the result. We optimize the parameters of the randomized decision function to make the search more efficient. Thus we reduce the discrete optimization problem to a continuous problem of global stochastic optimization. We solve this problem by the Bayesian methods of continuous global optimization. We describe applications to some well-known NP-hard problems of discrete programming, such as knapsack, traveling salesman, and scheduling.
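The reduction described in this abstract (tuning a continuous parameter of a randomized, heuristic-driven decision rule, then keeping the best of several randomized runs) can be illustrated with a minimal knapsack sketch. This is not the authors' software; the instance, the density heuristic, and the grid over the parameter tau (standing in for a Bayesian continuous optimizer) are all invented for illustration:

```python
import math, random

# Items: (value, weight); heuristic h = value density guides randomized decisions.
ITEMS = [(60, 10), (100, 20), (120, 30)]
CAPACITY = 50

def randomized_knapsack(tau, rng):
    """One randomized construction: pick items with probability ~ exp(tau * density)."""
    remaining, chosen, weight = list(range(len(ITEMS))), [], 0
    while True:
        feas = [i for i in remaining if weight + ITEMS[i][1] <= CAPACITY]
        if not feas:
            break
        w = [math.exp(tau * ITEMS[i][0] / ITEMS[i][1]) for i in feas]
        i = rng.choices(feas, weights=w)[0]
        chosen.append(i)
        weight += ITEMS[i][1]
        remaining.remove(i)
    return sum(ITEMS[i][0] for i in chosen), weight

def tune_tau(taus=(0.0, 0.5, 1.0, 2.0), repeats=200, seed=0):
    """Outer continuous problem: choose the parameter tau of the randomized
    decision function; repeat the randomized decision and keep the best."""
    rng = random.Random(seed)
    best = (0, 0, None)
    for tau in taus:
        for _ in range(repeats):
            v, w = randomized_knapsack(tau, rng)
            if v > best[0]:
                best = (v, w, tau)
    return best

value, weight, tau = tune_tau()
```

At tau = 0 the decisions are uniform (pure exploration); large tau approaches the deterministic greedy heuristic. On this instance the pure density-greedy rule yields 160, while the tuned randomized rule finds the optimum 220.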
Dengue: a continuing global threat
Guzman, Maria G.; Halstead, Scott B.; Artsob, Harvey; Buchy, Philippe; Farrar, Jeremy; Gubler, Duane J.; Hunsperger, Elizabeth; Kroeger, Axel; Margolis, Harold S.; Martínez, Eric; Nathan, Michael B.; Pelegrino, Jose Luis; Simmons, Cameron; Yoksan, Sutee; Peeling, Rosanna W.
2014-01-01
Dengue fever and dengue haemorrhagic fever are important arthropod-borne viral diseases. Each year, there are ~50 million dengue infections and ~500,000 individuals are hospitalized with dengue haemorrhagic fever, mainly in Southeast Asia, the Pacific and the Americas. Illness is produced by any of the four dengue virus serotypes. A global strategy aimed at increasing the capacity for surveillance and outbreak response, changing behaviours and reducing the disease burden using integrated vector management in conjunction with early and accurate diagnosis has been advocated. Antiviral drugs and vaccines that are currently under development could also make an important contribution to dengue control in the future. PMID:21079655
Enhancing Polyhedral Relaxations for Global Optimization
ERIC Educational Resources Information Center
Bao, Xiaowei
2009-01-01
During the last decade, global optimization has attracted a lot of attention due to the increased practical need for obtaining global solutions and the success in solving many global optimization problems that were previously considered intractable. In general, the central question of global optimization is to find an optimal solution to a given…
Intervals in evolutionary algorithms for global optimization
Patil, R.B.
1995-05-01
Optimization is of central concern to a number of disciplines. Interval arithmetic methods for global optimization provide us with (guaranteed) verified results. These methods are mainly restricted to classes of objective functions that are twice differentiable, and they use a simple strategy of eliminating and splitting larger regions of the search space in the global optimization process. An approach that combines the efficient strategy of interval global optimization methods with the robustness of evolutionary algorithms is proposed. In the proposed approach, search begins with randomly created interval vectors with interval widths equal to the whole domain. Before the beginning of the evolutionary process, the fitness of these interval parameter vectors is defined by evaluating the objective function at the center of the initial interval vectors. In the subsequent evolutionary process, the local optimization process returns an estimate of the bounds of the objective function over the interval vectors. Though these bounds may not be correct at the beginning, due to large interval widths and complicated function properties, the process of reducing interval widths over time, together with a selection approach similar to simulated annealing, helps in estimating reasonably correct bounds as the population evolves. The interval parameter vectors at these estimated bounds (local optima) are then subjected to crossover and mutation operators. This evolutionary process continues for a predetermined number of generations in search of the global optimum.
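The interval strategy of eliminating and splitting regions that this abstract builds on can be sketched in isolation. The following is a minimal interval branch-and-bound for one hand-coded function (a natural interval extension written by hand; the function, box, and tolerance are invented, and no rigorous rounding is done, so this is illustrative rather than verified):

```python
import heapq

# Natural interval extension of f(x) = (x - 2)**2 + 1 on [a, b].
def f_interval(a, b):
    lo, hi = a - 2, b - 2                 # interval for (x - 2)
    if lo <= 0 <= hi:                     # squaring an interval containing 0
        sq = (0.0, max(lo * lo, hi * hi))
    else:
        sq = (min(lo * lo, hi * hi), max(lo * lo, hi * hi))
    return sq[0] + 1, sq[1] + 1

def f(x):
    return (x - 2)**2 + 1

def interval_bnb(a, b, tol=1e-6):
    """Branch-and-bound: discard boxes whose interval lower bound exceeds the
    best known upper bound; split the most promising box otherwise."""
    best_ub, best_x = f((a + b) / 2), (a + b) / 2
    heap = [(f_interval(a, b)[0], a, b)]
    while heap:
        lb, a, b = heapq.heappop(heap)
        if lb > best_ub or b - a < tol:   # eliminated or converged
            continue
        m = (a + b) / 2
        if f(m) < best_ub:                # midpoint sharpens the upper bound
            best_ub, best_x = f(m), m
        for lo2, hi2 in ((a, m), (m, b)): # split and push the halves
            lb2 = f_interval(lo2, hi2)[0]
            if lb2 <= best_ub:
                heapq.heappush(heap, (lb2, lo2, hi2))
    return best_x, best_ub

x_star, val = interval_bnb(-10.0, 10.0)
```

Boxes whose guaranteed lower bound exceeds the incumbent are eliminated without further evaluation; this is the pruning behaviour the hybrid method above emulates with estimated (rather than guaranteed) bounds.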
Global optimization methods for engineering design
NASA Technical Reports Server (NTRS)
Arora, Jasbir S.
1990-01-01
The problem is to find a global minimum for Problem P. Necessary and sufficient conditions are available for local optimality. However, a global solution can be assured only under the assumption of convexity of the problem. If the constraint set S is compact and the cost function is continuous on it, existence of a global minimum is guaranteed. However, in view of the fact that no global optimality conditions are available, a global solution can be found only by an exhaustive search to satisfy the inequality. The exhaustive search can be organized in such a way that the entire design space need not be searched for the solution; this reduces the computational burden somewhat. It is concluded that the zooming algorithm for global optimization appears to be a good alternative to stochastic methods, although more testing is needed and a general, robust, and efficient local minimizer is required. IDESIGN, which is based on a sequential quadratic programming algorithm, was used in all numerical calculations; since the feasible set keeps shrinking, a good algorithm to find an initial feasible point is required. Such algorithms need to be developed and evaluated.
Building a global business continuity programme.
Lazcano, Michael
2014-01-01
Business continuity programmes provide an important function within organisations, especially when aligned with and supportive of the organisation's goals, objectives and organisational culture. Continuity programmes for large, complex international organisations, unlike those for compact national companies, are more difficult to design, build, implement and maintain. Programmes for international organisations require attention to structural design, support across organisational leadership and hierarchy, seamless integration with the organisation's culture, measured success and demonstrated value. This paper details practical, but sometimes overlooked considerations for building successful global business continuity programmes. PMID:24854730
Multiplier-continuation algorithms for constrained optimization
NASA Technical Reports Server (NTRS)
Lundberg, Bruce N.; Poore, Aubrey B.; Bing, Yang
1989-01-01
Several path-following algorithms are described that are based on combining three smooth penalty functions (the quadratic penalty for equality constraints, and the quadratic loss and log barrier for inequality constraints), their modern counterparts (augmented Lagrangian or multiplier methods), sequential quadratic programming, and predictor-corrector continuation. In the first phase of this methodology, one minimizes the unconstrained or linearly constrained penalty function or augmented Lagrangian. A homotopy path generated from these functions is then followed to optimality using efficient predictor-corrector continuation methods. The continuation steps are asymptotic to those taken by sequential quadratic programming, which can be used in the final steps. Numerical test results show the method to be efficient, robust, and a competitive alternative to sequential quadratic programming.
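The quadratic-penalty path that such methods follow can be sketched on a toy problem. This is an invented illustration of the first phase only (no predictor-corrector step, no multiplier update); the problem, penalty ladder, and step sizes are assumptions:

```python
# Minimize f(x, y) = x**2 + y**2 subject to x + y = 1 by tracing minimizers of
# P(x, y; mu) = f + mu * (x + y - 1)**2 as mu increases, warm-starting each
# stage from the previous one. The exact stage minimizer is x = y = mu/(1+2mu),
# which approaches the constrained optimum (0.5, 0.5) as mu grows.

def penalty_path(mus=(1.0, 10.0, 100.0, 1000.0), iters=5000):
    x = y = 0.0                                  # unconstrained minimizer of f
    for mu in mus:
        lr = 0.4 / (1.0 + 4.0 * mu)              # step size safe for this stage's curvature
        for _ in range(iters):
            c = x + y - 1.0                      # constraint residual
            gx = 2.0 * x + 2.0 * mu * c          # gradient of P
            gy = 2.0 * y + 2.0 * mu * c
            x, y = x - lr * gx, y - lr * gy
    return x, y

x, y = penalty_path()
```

Warm-starting is what makes the ladder cheap: each stage begins near its own minimizer, which is the behaviour a predictor-corrector continuation method exploits systematically.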
A novel metaheuristic for continuous optimization problems: Virus optimization algorithm
NASA Astrophysics Data System (ADS)
Liang, Yun-Chia; Rodolfo Cuevas Juarez, Josue
2016-01-01
A novel metaheuristic for continuous optimization problems, named the virus optimization algorithm (VOA), is introduced and investigated. VOA is an iterative, population-based method that imitates the behaviour of viruses attacking a living cell. The number of viruses grows at each replication and is controlled by an immune system (a so-called 'antivirus') to prevent the explosive growth of the virus population. The viruses are divided into two classes (strong and common) to balance the exploitation and exploration effects. The performance of the VOA is validated on a set of eight benchmark functions, which are also subjected to rotation and shifting effects to test its robustness. Extensive comparisons were conducted with over 40 well-known metaheuristic algorithms and their variants, such as artificial bee colony, artificial immune system, differential evolution, evolutionary programming, evolutionary strategy, genetic algorithm, harmony search, invasive weed optimization, memetic algorithm, particle swarm optimization and simulated annealing. The results showed that the VOA is a viable solution for continuous optimization.
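The replicate-then-trim mechanism described above can be sketched with a toy loop. This is not the published VOA; the replication counts, the decaying step size, and the test function are invented stand-ins for the strong/common split and the antivirus control:

```python
import random

def sphere(p):
    return sum(v * v for v in p)

def voa_sketch(dim=2, pop=10, strong=3, iters=100, seed=1):
    """Toy virus-optimization-style loop: strong members replicate more, and an
    'antivirus' step trims the grown population back to a fixed size."""
    rng = random.Random(seed)
    viruses = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop)]
    sigma = 1.0
    for _ in range(iters):
        viruses.sort(key=sphere)
        offspring = []
        for rank, v in enumerate(viruses):
            n = 5 if rank < strong else 1          # strong viruses replicate more
            for _ in range(n):
                offspring.append([x + rng.gauss(0, sigma) for x in v])
        viruses = sorted(viruses + offspring, key=sphere)[:pop]  # antivirus trim
        sigma *= 0.95                              # intensify the search over time
    return sphere(viruses[0])

best = voa_sketch()
```

Keeping parents in the trim step makes the best objective value monotonically non-increasing, so the population cannot lose its incumbent while the replication pressure explores around it.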
Global optimality of extremals: An example
NASA Technical Reports Server (NTRS)
Kreindler, E.; Newman, F.
1980-01-01
The question of the existence and location of Darboux points is crucial for minimally sufficient conditions for global optimality and for computation of optimal trajectories. A numerical investigation is presented of the Darboux points and their relationship with conjugate points for a problem of minimum fuel, constant velocity, and horizontal aircraft turns to capture a line. This simple second order optimal control problem shows that ignoring the possible existence of Darboux points may play havoc with the computation of optimal trajectories.
Global and Local Optimization Algorithms for Optimal Signal Set Design
Kearsley, Anthony J.
2001-01-01
The problem of choosing an optimal signal set for non-Gaussian detection was reduced to a smooth inequality constrained mini-max nonlinear programming problem by Gockenbach and Kearsley. Here we consider the application of several optimization algorithms, both global and local, to this problem. The most promising results are obtained when special-purpose sequential quadratic programming (SQP) algorithms are embedded into stochastic global algorithms.
Optimal directed searches for continuous gravitational waves
NASA Astrophysics Data System (ADS)
Ming, Jing; Krishnan, Badri; Papa, Maria Alessandra; Aulbert, Carsten; Fehrmann, Henning
2016-03-01
Wide parameter space searches for long-lived continuous gravitational wave signals are computationally limited. It is therefore critically important that the available computational resources are used rationally. In this paper we consider directed searches, i.e., targets for which the sky position is known accurately but the frequency and spin-down parameters are completely unknown. Given a list of such potential astrophysical targets, we therefore need to prioritize. On which target(s) should we spend scarce computing resources? What parameter space region in frequency and spin-down should we search through? Finally, what is the optimal search setup that we should use? In this paper we present a general framework that allows us to solve all three of these problems. This framework is based on maximizing the probability of making a detection subject to a constraint on the maximum available computational cost. We illustrate the method for a simplified problem.
Enlightening Globalization: An Opportunity for Continuing Education
ERIC Educational Resources Information Center
Reimers, Fernando
2009-01-01
Globalization presents a new social context for educational institutions from elementary schools to universities. In response to this new context, schools and universities are slowly changing their ways. These changes range from altering the curriculum so that students understand the process of globalization itself, or developing competencies…
Global Design Optimization for Fluid Machinery Applications
NASA Technical Reports Server (NTRS)
Shyy, Wei; Papila, Nilay; Tucker, Kevin; Vaidyanathan, Raj; Griffin, Lisa
2000-01-01
Recent experiences in utilizing the global optimization methodology, based on polynomial and neural network techniques, for fluid machinery design are summarized. Global optimization methods can utilize information collected from various sources and by different tools. These methods offer multi-criterion optimization, handle the existence of multiple design points and trade-offs via insight into the entire design space, can easily perform tasks in parallel, and are often effective in filtering the noise intrinsic to numerical and experimental data. Another advantage is that these methods do not need to calculate the sensitivity of each design variable locally. However, a successful application of the global optimization method needs to address issues related to data requirements as the number of design variables increases, and to methods for predicting model performance. Examples selected from rocket propulsion components, including a supersonic turbine, an injector element, and a turbulent flow diffuser, are used to illustrate the usefulness of the global optimization method.
Global source optimization for MEEF and OPE
NASA Astrophysics Data System (ADS)
Matsui, Ryota; Noda, Tomoya; Aoyama, Hajime; Kita, Naonori; Matsuyama, Tomoyuki; Flagello, Donis
2013-04-01
This work describes freeform source optimization considering the mask error enhancement factor (MEEF), optical proximity effect (OPE), process window, and hardware-specific constraints. Our algorithm allows users to define the maximum allowed MEEF and OPE error as constraints without defining weights among the metrics. We also consider hardware-specific constraints, so that the optimized source is suitable for realization in Nikon's Intelligent Illumination hardware. Our approach utilizes a global optimization procedure to arrive at a freeform source shape solution, and since each source grid point is assigned as a variable, the source solution encompasses the maximum number of degrees of freedom.
Electronic neural networks for global optimization
NASA Technical Reports Server (NTRS)
Thakoor, A. P.; Moopenn, A. W.; Eberhardt, S.
1990-01-01
An electronic neural network with feedback architecture, implemented in analog custom VLSI is described. Its application to problems of global optimization for dynamic assignment is discussed. The convergence properties of the neural network hardware are compared with computer simulation results. The neural network's ability to provide optimal or near optimal solutions within only a few neuron time constants, a speed enhancement of several orders of magnitude over conventional search methods, is demonstrated. The effect of noise on the circuit dynamics and the convergence behavior of the neural network hardware is also examined.
Global search acceleration in the nested optimization scheme
NASA Astrophysics Data System (ADS)
Grishagin, Vladimir A.; Israfilov, Ruslan A.
2016-06-01
A multidimensional unconstrained global optimization problem with an objective function satisfying a Lipschitz condition is considered. For solving this problem, a dimensionality reduction approach based on the nested optimization scheme is used. This scheme reduces the initial multidimensional problem to a family of one-dimensional subproblems that are Lipschitzian as well, and thus allows univariate methods to be applied to multidimensional optimization. For two well-known one-dimensional methods of Lipschitz optimization, modifications that accelerate the search when the objective function is continuously differentiable in a vicinity of the global minimum are considered and compared. Results of computational experiments on a conventional test class of multiextremal functions confirm the efficiency of the modified methods.
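The nested scheme itself can be sketched in two dimensions: the outer problem minimizes g(x) = min_y f(x, y), and every evaluation of g is an inner univariate minimization. In this invented illustration a ternary search stands in for the univariate Lipschitz methods the abstract discusses, and the convex test function is an assumption:

```python
def ternary_min(h, a, b, iters=80):
    """Univariate minimizer (assumes h is unimodal on [a, b])."""
    for _ in range(iters):
        m1, m2 = a + (b - a) / 3, b - (b - a) / 3
        if h(m1) < h(m2):
            b = m2
        else:
            a = m1
    return (a + b) / 2

def f(x, y):
    return (x - 1.0)**2 + (y + 2.0)**2

def nested_min(ax, bx, ay, by):
    # Outer objective: g(x) = min over y of f(x, y), itself solved by a 1-D method.
    g = lambda x: f(x, ternary_min(lambda y: f(x, y), ay, by))
    x_star = ternary_min(g, ax, bx)
    y_star = ternary_min(lambda y: f(x_star, y), ay, by)
    return x_star, y_star

x_star, y_star = nested_min(-5, 5, -5, 5)
```

The price of the reduction is visible here: every outer function evaluation costs a full inner search, which is why accelerating the univariate methods, as the abstract proposes, pays off multiplicatively.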
Global Optimality of the Successive Maxbet Algorithm.
ERIC Educational Resources Information Center
Hanafi, Mohamed; ten Berge, Jos M. F.
2003-01-01
It is known that the Maxbet algorithm, which is an alternative to the method of generalized canonical correlation analysis and Procrustes analysis, may converge to local maxima. Discusses an eigenvalue criterion that is sufficient, but not necessary, for global optimality of the successive Maxbet algorithm. (SLD)
A Novel Particle Swarm Optimization Algorithm for Global Optimization
Wang, Chun-Feng; Liu, Kui
2016-01-01
Particle Swarm Optimization (PSO) is a recently developed optimization method which has attracted the interest of researchers in various areas due to its simplicity and effectiveness, and many variants have been proposed. In this paper, a novel Particle Swarm Optimization algorithm is presented, in which the information of the best neighbor of each particle and the best particle of the entire population in the current iteration is considered. Meanwhile, to avoid premature convergence, an abandonment mechanism is used. Furthermore, to improve the global convergence speed of the algorithm, a chaotic search is applied to the best solution of the current iteration. To verify the performance of the algorithm, standard test functions have been employed. The experimental results show that the algorithm is much more robust and efficient than some existing Particle Swarm Optimization algorithms. PMID:26955387
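The neighbour-information idea can be illustrated with a basic ring-topology PSO sketch. This is not the paper's algorithm (it omits the abandonment mechanism and the chaotic search); the parameters, topology, and test function are invented for illustration:

```python
import random

def sphere(p):
    return sum(v * v for v in p)

def pso(dim=2, n=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=3):
    """Local-best PSO: each particle is drawn toward its own best position and
    toward the best position found by its immediate ring neighbours."""
    rng = random.Random(seed)
    x = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    v = [[0.0] * dim for _ in range(n)]
    pbest = [list(p) for p in x]
    pval = [sphere(p) for p in x]
    for _ in range(iters):
        for i in range(n):
            # best personal position among particle i and its two ring neighbours
            ring = [(i - 1) % n, i, (i + 1) % n]
            nb = min(ring, key=lambda j: pval[j])
            for d in range(dim):
                v[i][d] = (w * v[i][d]
                           + c1 * rng.random() * (pbest[i][d] - x[i][d])
                           + c2 * rng.random() * (pbest[nb][d] - x[i][d]))
                x[i][d] += v[i][d]
            fx = sphere(x[i])
            if fx < pval[i]:
                pval[i], pbest[i] = fx, list(x[i])
    return min(pval)

best = pso()
```

Using a neighbour's best instead of the single global best slows information flow through the swarm, which is the standard way such variants trade convergence speed for resistance to premature convergence.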
Global time optimal motions of robotic manipulators in the presence of obstacles
NASA Technical Reports Server (NTRS)
Shiller, Zvi; Dubowsky, Steven
1988-01-01
A practical method for obtaining the global time-optimal motions of robotic manipulators is presented. This method takes into account the nonlinear manipulator dynamics, actuator constraints, joint limits, and obstacles. Previously developed methods for optimizing manipulator motions along given paths, together with a local path optimization, are utilized. A set of best paths is obtained first in a global search over the manipulator workspace, using graph search and hierarchical pruning techniques. These paths are used as initial conditions for a continuous path optimization to yield the global optimal motion. Examples of optimized motions of a six-degree-of-freedom manipulator, operating in a three-dimensional space with obstacles, are presented.
Global optimization algorithm for heat exchanger networks
Quesada, I.; Grossmann, I.E. )
1993-03-01
This paper deals with the global optimization of heat exchanger networks with fixed topology. It is shown that if linear area cost functions are assumed, as well as arithmetic mean driving force temperature differences in networks with isothermal mixing, the corresponding nonlinear programming (NLP) optimization problem involves linear constraints and a sum of linear fractional functions in the objective which are nonconvex. A rigorous algorithm is proposed that is based on a convex NLP underestimator that involves linear and nonlinear estimators for fractional and bilinear terms which provide a tight lower bound to the global optimum. This NLP problem is used within a spatial branch and bound method for which branching rules are given. Basic properties of the proposed method are presented, and its application is illustrated with several example problems. The results show that the proposed method requires only a few nodes in the branch and bound search.
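The linear estimators for bilinear terms mentioned above are, in the standard construction, the McCormick envelopes. The following sketch (an invented numerical check, not the paper's algorithm) shows the two underestimating planes for x*y on a box and the two properties that make them useful in spatial branch and bound:

```python
# On the box x in [xL, xU], y in [yL, yU] the bilinear term satisfies
#   x*y >= xL*y + x*yL - xL*yL   and   x*y >= xU*y + x*yU - xU*yU,
# so the max of the two planes is a convex piecewise-linear underestimator.

def mccormick_under(x, y, xL, xU, yL, yU):
    return max(xL * y + x * yL - xL * yL,
               xU * y + x * yU - xU * yU)

xL, xU, yL, yU = 0.0, 2.0, 1.0, 3.0

# Property 1: the underestimator never exceeds x*y anywhere on the box.
gap_ok = all(mccormick_under(x, y, xL, xU, yL, yU) <= x * y + 1e-12
             for x in [xL + k * 0.1 for k in range(21)]
             for y in [yL + k * 0.1 for k in range(21)])

# Property 2: it is exact at all four box corners, so the lower bound
# tightens as branching shrinks the boxes.
tight = all(abs(mccormick_under(cx, cy, xL, xU, yL, yU) - cx * cy) < 1e-12
            for cx in (xL, xU) for cy in (yL, yU))
```

Corner exactness is what drives convergence of the branch-and-bound scheme: as boxes shrink toward a point, the envelope gap goes to zero and the lower bound matches the true objective.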
Global optimization of bilinear engineering design models
Grossmann, I.; Quesada, I.
1994-12-31
Recently Quesada and Grossmann have proposed a global optimization algorithm for solving NLP problems involving linear fractional and bilinear terms. This model has been motivated by a number of applications in process design. The proposed method relies on the derivation of a convex NLP underestimator problem that is used within a spatial branch and bound search. This paper explores the use of alternative bounding approximations for constructing the underestimator problem. These are applied in the global optimization of problems arising in different engineering areas and for which different relaxations are proposed depending on the mathematical structure of the models. These relaxations include linear and nonlinear underestimator problems. Reformulations that generate additional estimator functions are also employed. Examples from process design, structural design, portfolio investment and layout design are presented.
Competing intelligent search agents in global optimization
Streltsov, S.; Vakili, P.; Muchnik, I.
1996-12-31
In this paper we present a new search methodology that we view as a development of the intelligent-agent approach to the analysis of complex systems. The main idea is to consider the search process as a competition mechanism between concurrent adaptive intelligent agents. Agents cooperate in achieving a common search goal and at the same time compete with each other for computational resources. We propose a statistical selection approach to resource allocation between agents that leads to simple and, on average, efficient index allocation policies. We use global optimization as the most general setting that encompasses many types of search problems, and show how the proposed selection policies can be used to improve and combine various global optimization methods.
Solving global optimization problems on GPU cluster
NASA Astrophysics Data System (ADS)
Barkalov, Konstantin; Gergel, Victor; Lebedev, Ilya
2016-06-01
The paper contains the results of an investigation of a parallel global optimization algorithm combined with a dimension reduction scheme. This allows solving multidimensional problems by reducing them to data-independent subproblems of smaller dimension that are solved in parallel. The new element implemented in this research is the use of several graphics accelerators at different computing nodes. The paper also includes results of solving problems from the well-known multiextremal test class GKLS on the Lobachevsky supercomputer using tens of thousands of GPU cores.
Global optimization of actively morphing flapping wings
NASA Astrophysics Data System (ADS)
Ghommem, Mehdi; Hajj, Muhammad R.; Mook, Dean T.; Stanford, Bret K.; Beran, Philip S.; Snyder, Richard D.; Watson, Layne T.
2012-08-01
We consider active shape morphing to optimize the flight performance of flapping wings. To this end, we combine a three-dimensional version of the unsteady vortex lattice method (UVLM) with a deterministic global optimization algorithm to identify the optimal kinematics that maximize the propulsive efficiency under lift and thrust constraints. The UVLM applies only to incompressible, inviscid flows where the separation lines are known a priori. Two types of morphing parameterization are investigated here: trigonometric and spline-based. The results show that the spline-based morphing, which requires the specification of more design variables, yields a significant improvement in propulsive efficiency. Furthermore, we note that the average value of the lift coefficient in the optimized kinematics remained equal to the value in the baseline case (without morphing). This indicates that morphing is most efficiently used to generate thrust and not to increase lift beyond the basic value obtained by flapping alone. Moreover, our study yields optimal efficiencies comparable to those obtained in previous studies based on gradient-based optimization, but completely different design points (especially for the spline-based morphing), which would indicate that the design space associated with the flapping kinematics is very complex.
On optimal nonlinear estimation. I - Continuous observation.
NASA Technical Reports Server (NTRS)
Lo, J. T.
1973-01-01
A generalization of Bucy's (1965) representation theorem is obtained under very weak hypotheses. The generalized theorem is shown to play the same role in general optimal estimation for an arbitrary random process as the Bucy theorem does in optimal filtering for a diffusion process. At least for the models considered, it is pointed out that all sequential estimation problems can be reduced to the problem of filtering. Hence, filtering theory is seen to represent the core of estimation theory, and is believed to define the direction in which future research should be focused.
Tabu search method with random moves for globally optimal design
NASA Astrophysics Data System (ADS)
Hu, Nanfang
1992-09-01
Optimum engineering design problems are usually formulated as non-convex optimization problems in continuous variables. Because of the absence of convexity structure, they can have multiple minima, and global optimization becomes difficult. Traditional methods of optimization, such as penalty methods, can often be trapped at a local optimum. A tabu search method with random moves is introduced to solve these problems approximately. Its reliability and efficiency are examined with the help of standard test functions. Analysis of the implementations shows that this method is easy to use and requires no derivative information. It outperforms the random search method and a composite genetic algorithm. In particular, it is applied to minimum weight design examples of a three-bar truss, coil springs, a Z-section and a channel section. For the channel section, the optimal design obtained with the tabu search method with random moves weighed 26.14 percent less than that of the SUMT method.
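The abstract gives no implementation details, but its core idea (random candidate moves filtered by a tabu list of recently visited points, so the search can climb out of local minima) can be sketched as follows. All names, parameter values, and the acceptance rule are our assumptions, not the paper's method:

```python
import random

def tabu_search(f, bounds, n_iter=2000, n_moves=20, step=0.5,
                tabu_len=20, tabu_radius=0.1, seed=0):
    """Minimize f over a box with tabu search using random moves.

    A candidate is 'tabu' if it lies within tabu_radius of a recently
    visited point; the best non-tabu candidate is accepted even if it
    worsens f, which lets the search leave local minima."""
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for lo, hi in bounds]
    best_x, best_f = x[:], f(x)
    tabu = [x[:]]                        # FIFO list of recent points
    for _ in range(n_iter):
        candidates = []
        for _ in range(n_moves):
            # random Gaussian move, clipped to the box
            y = [min(hi, max(lo, xi + rng.gauss(0.0, step)))
                 for xi, (lo, hi) in zip(x, bounds)]
            if all(sum((yi - ti) ** 2 for yi, ti in zip(y, t)) > tabu_radius ** 2
                   for t in tabu):
                candidates.append(y)
        if not candidates:
            continue
        x = min(candidates, key=f)       # accept best non-tabu move
        tabu.append(x[:])
        if len(tabu) > tabu_len:
            tabu.pop(0)
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x[:], fx
    return best_x, best_f

# usage: sphere test function on [-5, 5]^2
xb, fb = tabu_search(lambda v: sum(vi * vi for vi in v), [(-5.0, 5.0)] * 2)
```

No derivative information is used anywhere, which matches the abstract's observation that the method is derivative-free.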
Globally Optimal Segmentation of Permanent-Magnet Systems
NASA Astrophysics Data System (ADS)
Insinga, A. R.; Bjørk, R.; Smith, A.; Bahl, C. R. H.
2016-06-01
Permanent-magnet systems are widely used for generation of magnetic fields with specific properties. The reciprocity theorem, an energy-equivalence principle in magnetostatics, can be employed to calculate the optimal remanent flux density of the permanent-magnet system, given any objective functional that is linear in the magnetic field. This approach, however, yields a continuously varying remanent flux density, while in practical applications, magnetic assemblies are realized by combining uniformly magnetized segments. The problem of determining the optimal shape of each of these segments remains unsolved. We show that the problem of optimal segmentation of a two-dimensional permanent-magnet assembly with respect to a linear objective functional can be reduced to the problem of piecewise linear approximation of a plane curve by perimeter maximization. Once the problem has been cast into this form, the globally optimal solution can be easily computed employing dynamic programming.
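The paper's reduction is to piecewise linear approximation of a plane curve by perimeter maximization, solved by dynamic programming. As a toy illustration of that DP step only (our simplification: an open polyline through ordered curve samples, not the paper's actual segment geometry):

```python
import math

def max_perimeter_polyline(pts, k):
    """best[m][j]: maximum length of a polyline from pts[0] to pts[j]
    using exactly m segments; an O(k * n^2) dynamic program."""
    n = len(pts)
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    NEG = float("-inf")
    best = [[NEG] * n for _ in range(k + 1)]
    choice = [[-1] * n for _ in range(k + 1)]
    best[0][0] = 0.0
    for m in range(1, k + 1):
        for j in range(1, n):
            for i in range(j):
                if best[m - 1][i] == NEG:
                    continue
                val = best[m - 1][i] + dist(pts[i], pts[j])
                if val > best[m][j]:
                    best[m][j], choice[m][j] = val, i
    verts, j = [n - 1], n - 1            # backtrack the chosen vertices
    for m in range(k, 0, -1):
        j = choice[m][j]
        verts.append(j)
    return best[k][n - 1], verts[::-1]

# usage: 4 chords through 21 samples of a half circle; by concavity of
# the chord length 2*sin(theta/2), equal arcs maximize the perimeter
pts = [(math.cos(i * math.pi / 20), math.sin(i * math.pi / 20))
       for i in range(21)]
L, verts = max_perimeter_polyline(pts, 4)
```

Because each subproblem depends only on shorter prefixes of the sample sequence, the DP is guaranteed to return the globally optimal vertex placement, which is the property the abstract exploits.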
Periodic optimization of continuous microbial growth processes.
Abulesz, E M; Lyberatos, G
1987-06-01
Steady-state operation of continuous bioreactors is not necessarily the optimum type of operation. The method of pi-criterion is used in this work to determine whether periodic variation of the dilution rate can enhance the performance of continuous fermentation processes. It is found that the presence of time delay in the dynamic response of the chemostat renders a periodic operation of bioreactors, used for biomass production, superior to any steady-state operation. Also, employing Williams' structured model it is shown that cycling improves the average protein productivity. PMID:18576558
Global Grazing Systems: Their Continuing Importance in Meeting Global Demand
NASA Astrophysics Data System (ADS)
Davis, K. F.; D'Odorico, P.
2014-12-01
Animal production exerts significant demand on land, water and food resources and is an extensive means by which humans modify natural systems. Demand for animal source foods has more than tripled over the past 50 years due to population growth and dietary change. To meet this demand, livestock intensification (e.g. concentrated animal feeding operations) has increased and with it the water, nitrogen and carbon footprints of animal production. However, grass-fed systems continue to contribute significantly to overall animal production. To date, little is known about the contributions of grass- and grain-fed systems to animal calorie production, how this has changed through time and to what extent these two systems are sensitive to climate. Using a calorie-based approach we hypothesize that grain-fed systems are increasing in importance (with serious implications for water and nutrient demand) and that rangeland productivity is correlated with rainfall. Our findings show that grass-fed systems have made up the majority of animal calorie production since 1960 but that the relative contribution of grain-fed systems has increased (from 27% to 49%). This rapid transition towards grain-fed animal production is largely a result of changing dietary demand, as we found that the growth of grass-fed production only kept pace with population growth. On a regional scale, we find that Asia has been the major contributor to the increase in grass-fed animal calorie production and that Africa has undergone the most drastic transition from grass-fed to grain-fed dependence. Finally, as expected we see a positive relationship between rangeland productivity and precipitation and a shift from dairy- to meat-dominated production going from drier to wetter climates. This study represents a new means of analyzing the food security of animal products and an important step in understanding the historic trends of animal production, their relation to climate, their prospects for the future and their
On Global Optimal Sailplane Flight Strategy
NASA Technical Reports Server (NTRS)
Sander, G. J.; Litt, F. X.
1979-01-01
The derivation and interpretation of the necessary conditions that a sailplane cross-country flight has to satisfy to achieve the maximum global flight speed is considered. Simple rules are obtained for two specific meteorological models. The first one uses concentrated lifts of various strengths and unequal distance. The second one takes into account finite, nonuniform space amplitudes for the lifts and allows, therefore, for dolphin style flight. In both models, altitude constraints consisting of upper and lower limits are shown to be essential to model realistic problems. Numerical examples illustrate the difference with existing techniques based on local optimality conditions.
Wu, Zong-Sheng; Fu, Wei-Ping; Xue, Ru
2015-01-01
The teaching-learning-based optimization (TLBO) algorithm, proposed in recent years, simulates the teaching-learning phenomenon of a classroom to effectively solve global optimization of multidimensional, linear, and nonlinear problems over continuous spaces. In this paper, an improved teaching-learning-based optimization algorithm is presented, called nonlinear inertia weighted teaching-learning-based optimization (NIWTLBO). This algorithm introduces a nonlinear inertia weighted factor into the basic TLBO to control the memory rate of learners, and uses a dynamic inertia weighted factor to replace the original random number in the teacher and learner phases. The proposed algorithm is tested on a number of benchmark functions, and performance comparisons are provided against the basic TLBO and some other well-known optimization algorithms. The experimental results show that the proposed algorithm has a faster convergence rate and better performance than the basic TLBO and the other algorithms. PMID:26421005
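One plausible reading of the modification described above can be sketched as follows, with the greedy selection step of basic TLBO retained. The inertia schedule, the placement of the weight, and all parameter values are our assumptions, not the NIWTLBO specification:

```python
import random

def niwtlbo(f, bounds, pop=30, iters=200, seed=1):
    """Sketch of TLBO with a nonlinear inertia weight (assumed schedule)."""
    rng = random.Random(seed)
    d = len(bounds)
    clip = lambda x: [min(hi, max(lo, xi))
                      for xi, (lo, hi) in zip(x, bounds)]
    X = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    F = [f(x) for x in X]
    for t in range(iters):
        w = 1.0 - (t / iters) ** 2          # nonlinear inertia weight (assumption)
        teacher = X[min(range(pop), key=lambda i: F[i])]
        mean = [sum(x[j] for x in X) / pop for j in range(d)]
        for i in range(pop):
            TF = rng.choice((1, 2))         # teaching factor
            # teacher phase: w replaces the uniform random step factor
            y = clip([X[i][j] + w * (teacher[j] - TF * mean[j])
                      for j in range(d)])
            fy = f(y)
            if fy < F[i]:                   # greedy selection, as in basic TLBO
                X[i], F[i] = y, fy
            # learner phase: move toward a better classmate, away from a worse one
            k = rng.randrange(pop)
            if k == i:
                continue
            sgn = 1.0 if F[k] < F[i] else -1.0
            y = clip([X[i][j] + sgn * w * (X[k][j] - X[i][j])
                      for j in range(d)])
            fy = f(y)
            if fy < F[i]:
                X[i], F[i] = y, fy
    b = min(range(pop), key=lambda i: F[i])
    return X[b], F[b]

# usage: shifted sphere function, minimum at (1, 1)
xb, fb = niwtlbo(lambda v: sum((vi - 1.0) ** 2 for vi in v),
                 [(-5.0, 5.0)] * 2)
```

The decaying weight shrinks step sizes over time, which is one way to read "controlling the memory rate of learners": early generations explore, late generations refine.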
Global Design Optimization for Aerodynamics and Rocket Propulsion Components
NASA Technical Reports Server (NTRS)
Shyy, Wei; Papila, Nilay; Vaidyanathan, Rajkumar; Tucker, Kevin; Turner, James E. (Technical Monitor)
2000-01-01
Modern computational and experimental tools for aerodynamics and propulsion applications have matured to a stage where they can provide substantial insight into engineering processes involving fluid flows, and can be fruitfully utilized to help improve the design of practical devices. In particular, rapid and continuous development in aerospace engineering demands that new design concepts be regularly proposed to meet goals for increased performance, robustness and safety while concurrently decreasing cost. To date, the majority of the effort in design optimization of fluid dynamics has relied on gradient-based search algorithms. Global optimization methods can utilize the information collected from various sources and by different tools. These methods offer multi-criterion optimization, handle the existence of multiple design points and trade-offs via insight into the entire design space, can easily perform tasks in parallel, and are often effective in filtering the noise intrinsic to numerical and experimental data. However, a successful application of the global optimization method needs to address issues related to data requirements with an increase in the number of design variables, and methods for predicting the model performance. In this article, we review recent progress made in establishing suitable global optimization techniques employing neural network and polynomial-based response surface methodologies. Issues addressed include techniques for construction of the response surface, design of experiment techniques for supplying information in an economical manner, optimization procedures and multi-level techniques, and assessment of relative performance between polynomials and neural networks. Examples drawn from wing aerodynamics, turbulent diffuser flows, gas-gas injectors, and supersonic turbines are employed to help demonstrate the issues involved in an engineering design context. Both the usefulness of the existing knowledge to aid current design
NASA Astrophysics Data System (ADS)
Gao, Wei
2016-05-01
The objective function of displacement back analysis for rock parameters in underground engineering is a highly complicated nonlinear function with multiple humps. Global optimization methods can solve this problem well. However, many numerical simulations must be performed during the optimization process, which is very time consuming. Therefore, it is important to improve the computational efficiency of optimization back analysis. To this end, a new global optimization method, immunized continuous ant colony optimization, is proposed; it improves continuous ant colony optimization using basic principles of artificial immune systems and evolutionary algorithms. Based on this new global optimization method, a new displacement optimization back analysis for rock parameters is proposed. The computational performance of the new back analysis is verified through a numerical example and a real engineering example. The results show that the new method obtains suitable rock mass parameters with higher accuracy and less effort than previous methods. Moreover, the new back analysis is very robust.
LDRD Final Report: Global Optimization for Engineering Science Problems
HART,WILLIAM E.
1999-12-01
For a wide variety of scientific and engineering problems the desired solution corresponds to an optimal set of objective function parameters, where the objective function measures a solution's quality. The main goal of the LDRD ''Global Optimization for Engineering Science Problems'' was the development of new robust and efficient optimization algorithms that can be used to find globally optimal solutions to complex optimization problems. This SAND report summarizes the technical accomplishments of this LDRD, discusses lessons learned and describes open research issues.
Continuously Optimized Reliable Energy (CORE) Microgrid: Models & Tools (Fact Sheet)
Not Available
2013-07-01
This brochure describes Continuously Optimized Reliable Energy (CORE), a trademarked process NREL employs to produce conceptual microgrid designs. This systems-based process enables designs to be optimized for economic value, energy surety, and sustainability. Capabilities NREL offers in support of microgrid design are explained.
Global Optimization Techniques for Fluid Flow and Propulsion Devices
NASA Technical Reports Server (NTRS)
Shyy, Wei; Papila, Nilay; Vaidyanathan, Raj; Tucker, Kevin; Griffin, Lisa; Dorney, Dan; Huber, Frank; Tran, Ken; Turner, James E. (Technical Monitor)
2001-01-01
This viewgraph presentation gives an overview of global optimization techniques for fluid flow and propulsion devices. Details are given on the need, characteristics, and techniques for global optimization. The techniques include response surface methodology (RSM), neural networks and back-propagation neural networks, design of experiments, face centered composite design (FCCD), orthogonal arrays, outlier analysis, and design optimization.
Quantifying the likelihood of a continued hiatus in global warming
NASA Astrophysics Data System (ADS)
Roberts, C. D.; Palmer, M. D.; McNeall, D.; Collins, M.
2015-04-01
Since the end of the twentieth century, global mean surface temperature has not risen as rapidly as predicted by global climate models (GCMs). This discrepancy has become known as the global warming 'hiatus' and a variety of mechanisms have been proposed to explain the observed slowdown in warming. Focusing on internally generated variability, we use pre-industrial control simulations from an observationally constrained ensemble of GCMs and a statistical approach to evaluate the expected frequency and characteristics of variability-driven hiatus periods and their likelihood of future continuation. Given an expected forced warming trend of ~0.2 K per decade, our constrained ensemble of GCMs implies that the probability of a variability-driven 10-year hiatus is ~10%, but less than 1% for a 20-year hiatus. Although the absolute probability of a 20-year hiatus is small, the probability that an existing 15-year hiatus will continue another five years is much higher (up to 25%). Therefore, given the recognized contribution of internal climate variability to the reduced rate of global warming during the past 15 years, we should not be surprised if the current hiatus continues until the end of the decade. Following the termination of a variability-driven hiatus, we also show that there is an increased likelihood of accelerated global warming associated with release of heat from the sub-surface ocean and a reversal of the phase of decadal variability in the Pacific Ocean.
SamACO: variable sampling ant colony optimization algorithm for continuous optimization.
Hu, Xiao-Min; Zhang, Jun; Chung, Henry Shu-Hung; Li, Yun; Liu, Ou
2010-12-01
An ant colony optimization (ACO) algorithm offers algorithmic techniques for optimization by simulating the foraging behavior of a group of ants to perform incremental solution constructions and to realize a pheromone laying-and-following mechanism. Although ACO was first designed for solving discrete (combinatorial) optimization problems, the ACO procedure is also applicable to continuous optimization. This paper presents a new way of extending ACO to continuous optimization problems by focusing on continuous variable sampling as the key to transforming ACO from discrete to continuous optimization. The proposed SamACO algorithm consists of three major steps, i.e., the generation of candidate variable values for selection, the ants' solution construction, and the pheromone update process. The distinct characteristics of SamACO are the cooperation of a novel sampling method for discretizing the continuous search space and an efficient incremental solution construction method based on the sampled values. The performance of SamACO is tested using continuous numerical functions with unimodal and multimodal features. Compared with some state-of-the-art algorithms, including traditional ant-based algorithms and representative computational intelligence algorithms for continuous optimization, the performance of SamACO is competitive and promising. PMID:20371409
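A much-simplified sketch in the spirit of the three steps listed above (candidate-value generation, pheromone-biased construction, pheromone update); the sampling and refresh rules here are our assumptions, not the SamACO specification:

```python
import random

def aco_continuous(f, bounds, ants=20, cand=10, iters=100, rho=0.1, seed=2):
    """Continuous optimization via ACO on sampled candidate values."""
    rng = random.Random(seed)
    d = len(bounds)
    # step 1: candidate values per variable, each with a pheromone weight
    vals = [[rng.uniform(lo, hi) for _ in range(cand)] for lo, hi in bounds]
    tau = [[1.0] * cand for _ in range(d)]
    best_x, best_f = None, float("inf")
    for _ in range(iters):
        # step 2: ants build solutions by pheromone-biased selection
        sols = []
        for _ in range(ants):
            idx = [rng.choices(range(cand), weights=tau[j])[0]
                   for j in range(d)]
            sols.append((f([vals[j][idx[j]] for j in range(d)]), idx))
        fi, idx = min(sols)
        if fi < best_f:
            best_f = fi
            best_x = [vals[j][idx[j]] for j in range(d)]
        # step 3: evaporation plus reinforcement of the iteration-best ant
        for j in range(d):
            tau[j] = [(1.0 - rho) * t for t in tau[j]]
            tau[j][idx[j]] += 1.0
            # refresh the least-attractive candidate near the best value
            wj = min(range(cand), key=lambda c: tau[j][c])
            lo, hi = bounds[j]
            vals[j][wj] = min(hi, max(lo, best_x[j]
                                      + rng.gauss(0.0, 0.1 * (hi - lo))))
            tau[j][wj] = max(tau[j])
    return best_x, best_f

# usage: sphere function on [-5, 5]^2
xb, fb = aco_continuous(lambda v: sum(vi * vi for vi in v),
                        [(-5.0, 5.0)] * 2)
```

The refresh step is what makes the discretization adaptive: the candidate grid migrates toward promising regions instead of staying fixed.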
An approximation based global optimization strategy for structural synthesis
NASA Technical Reports Server (NTRS)
Sepulveda, A. E.; Schmit, L. A.
1991-01-01
A global optimization strategy for structural synthesis based on approximation concepts is presented. The methodology involves the solution of a sequence of highly accurate approximate problems using a global optimization algorithm. The global optimization algorithm implemented consists of a branch and bound strategy based on the interval evaluation of the objective function and constraint functions, combined with a local feasible directions algorithm. The approximate design optimization problems are constructed using first order approximations of selected intermediate response quantities in terms of intermediate design variables. Some numerical results for example problems are presented to illustrate the efficacy of the design procedure set forth.
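The branch-and-bound core (interval lower bounds used to prune boxes that cannot contain a better solution) can be sketched independently of the structural-synthesis setting. The interval extension below is hand-coded for a toy quadratic; the paper's approximate problems and feasible-directions step are not modeled:

```python
def branch_and_bound(f_point, f_interval, box, tol=1e-3):
    """Interval branch and bound: f_interval(box) must return rigorous
    lower/upper bounds of f over the box. Boxes whose lower bound
    exceeds the best known value are discarded."""
    mid0 = [(lo + hi) / 2 for lo, hi in box]
    best_x, best_f = mid0, f_point(mid0)
    work = [list(box)]
    while work:
        b = work.pop()
        lo_f, _ = f_interval(b)
        if lo_f > best_f:                # cannot contain anything better
            continue
        mid = [(l + h) / 2 for l, h in b]
        fm = f_point(mid)
        if fm < best_f:
            best_x, best_f = mid, fm
        widest = max(range(len(b)), key=lambda j: b[j][1] - b[j][0])
        l, h = b[widest]
        if h - l < tol:                  # box resolved to tolerance
            continue
        m = (l + h) / 2
        for half in ((l, m), (m, h)):    # bisect the widest dimension
            nb = list(b)
            nb[widest] = half
            work.append(nb)
    return best_x, best_f

def isq(l, h):
    """Interval square: tight range of x^2 for x in [l, h]."""
    if l <= 0.0 <= h:
        return 0.0, max(l * l, h * h)
    return min(l * l, h * h), max(l * l, h * h)

# usage: f(x, y) = (x - 1)^2 + (y + 2)^2, global minimum at (1, -2)
def f_point(x):
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2

def f_interval(b):
    (xl, xh), (yl, yh) = b
    lo1, hi1 = isq(xl - 1.0, xh - 1.0)
    lo2, hi2 = isq(yl + 2.0, yh + 2.0)
    return lo1 + lo2, hi1 + hi2

xb, fb = branch_and_bound(f_point, f_interval, [(-5.0, 5.0), (-5.0, 5.0)])
```

Because the interval bounds are rigorous, the pruning never discards the global minimizer, which is the guarantee a pure local method cannot offer.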
Parameter-space correlations of the optimal statistic for continuous gravitational-wave detection
Pletsch, Holger J.
2008-11-15
The phase parameters of matched-filtering searches for continuous gravitational-wave signals are sky position, frequency, and frequency time-derivatives. The space of these parameters features strong global correlations in the optimal detection statistic. For observation times smaller than 1 yr, the orbital motion of the Earth leads to a family of global-correlation equations which describes the 'global maximum structure' of the detection statistic. The solution to each of these equations is a different hypersurface in parameter space. The expected detection statistic is maximal at the intersection of these hypersurfaces. The global maximum structure of the detection statistic from stationary instrumental-noise artifacts is also described by the global-correlation equations. This permits the construction of a veto method which excludes false candidate events.
GMG: A Guaranteed, Efficient Global Optimization Algorithm for Remote Sensing.
D'Helon, CD
2004-08-18
The monocular passive ranging (MPR) problem in remote sensing consists of identifying the precise range of an airborne target (missile, plane, etc.) from its observed radiance. This inverse problem may be set as a global optimization problem (GOP) whereby the difference between the observed and model predicted radiances is minimized over the possible ranges and atmospheric conditions. Using additional information about the error function between the predicted and observed radiances of the target, we developed GMG, a new algorithm to find the Global Minimum with a Guarantee. The new algorithm transforms the original continuous GOP into a discrete search problem, thereby guaranteeing to find the position of the global minimum in a reasonably short time. The algorithm is first applied to the golf course problem, which serves as a litmus test for its performance in the presence of both complete and degraded additional information. GMG is further assessed on a set of standard benchmark functions and then applied to various realizations of the MPR problem.
GMG - A guaranteed global optimization algorithm: Application to remote sensing
D'Helon, Cassius; Protopopescu, Vladimir A; Wells, Jack C; Barhen, Jacob
2007-01-01
We investigate the role of additional information in reducing the computational complexity of the global optimization problem (GOP). Following this approach, we develop GMG -- an algorithm to find the Global Minimum with a Guarantee. The new algorithm breaks up an originally continuous GOP into a discrete (grid) search problem followed by a descent problem. The discrete search identifies the basin of attraction of the global minimum, after which the actual location of the minimizer is found upon applying a descent algorithm. The algorithm is first applied to the golf course problem, which serves as a litmus test for its performance in the presence of both complete and degraded additional information. GMG is further assessed on a set of standard benchmark functions. We then illustrate the performance of the validated algorithm on a simple realization of the monocular passive ranging (MPR) problem in remote sensing, which consists of identifying the range of an airborne target (missile, plane, etc.) from its observed radiance. This inverse problem is set as a GOP whereby the difference between the observed and model predicted radiances is minimized over the possible ranges and atmospheric conditions. We solve the GOP using GMG and report on the performance of the algorithm.
Aerodynamic design optimization by using a continuous adjoint method
NASA Astrophysics Data System (ADS)
Luo, JiaQi; Xiong, JunTao; Liu, Feng
2014-07-01
This paper presents the fundamentals of a continuous adjoint method and the applications of this method to the aerodynamic design optimization of both external and internal flows. The general formulation of the continuous adjoint equations and the corresponding boundary conditions is derived. With the adjoint method, the complete gradient information needed in the design optimization can be obtained by solving the governing flow equations and the corresponding adjoint equations only once for each cost function, regardless of the number of design parameters. An inverse design of an airfoil is first performed to study the accuracy of the adjoint gradient and the effectiveness of the adjoint method as an inverse design method. Then the method is used to perform a series of single and multiple point design optimization problems involving the drag reduction of airfoil, wing, and wing-body configurations, and the aerodynamic performance improvement of turbine and compressor blade rows. The results demonstrate that the continuous adjoint method can efficiently and significantly improve the aerodynamic performance of the design in a shape optimization problem.
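The economy claimed above (the complete gradient from one flow solve plus one adjoint solve per cost function, regardless of the number of design parameters) is easiest to see in a discrete analogue. The sketch below is ours, not the paper's continuous formulation: the "flow" is a linear system A(a) u = b whose matrix depends affinely on design parameters a.

```python
import numpy as np

def adjoint_gradient(A0, As, b, u_target, a):
    """State equation A(a) u = b with A(a) = A0 + sum_k a_k As[k];
    cost J(a) = 0.5 * ||u(a) - u_target||^2.

    Differentiating the state equation gives du/da_k = -A^{-1} As[k] u,
    so with the adjoint solve A^T lam = u - u_target we get
    dJ/da_k = -lam^T As[k] u for every k at once."""
    A = A0 + sum(ak * Ak for ak, Ak in zip(a, As))
    u = np.linalg.solve(A, b)                   # one state solve
    lam = np.linalg.solve(A.T, u - u_target)    # one adjoint solve
    grad = np.array([-(lam @ (Ak @ u)) for Ak in As])
    return u, grad
```

Two linear solves deliver the gradient with respect to all parameters; a finite-difference gradient would instead need one extra state solve per parameter.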
Strategies for Global Optimization of Temporal Preferences
NASA Technical Reports Server (NTRS)
Morris, Paul; Morris, Robert; Khatib, Lina; Ramakrishnan, Sailesh
2004-01-01
A temporal reasoning problem can often be naturally characterized as a collection of constraints with associated local preferences for the times that make up the admissible values for those constraints. Globally preferred solutions to such problems emerge as a result of well-defined operations that compose and order temporal assignments. The overall objective of this work is a characterization of different notions of global preference, and the identification of tractable sub-classes of temporal reasoning problems incorporating these notions. This paper extends previous results by refining the class of useful notions of global temporal preference that are associated with problems admitting tractable solution techniques. This paper also answers the hitherto open question of whether solutions that are globally preferred under a Utilitarian criterion can be found tractably.
WFH: closing the global gap--achieving optimal care.
Skinner, Mark W
2012-07-01
For 50 years, the World Federation of Hemophilia (WFH) has been working globally to close the gap in care and to achieve Treatment for All patients, men and women, with haemophilia and other inherited bleeding disorders, regardless of where they might live. The WFH estimates that more than one in 1000 men and women has a bleeding disorder equating to 6,900,000 worldwide. To close the gap in care between developed and developing nations a continued focus on the successful strategies deployed heretofore will be required. However, in response to the rapid advances in treatment and emerging therapeutic advances on the horizon it will also require fresh approaches and renewed strategic thinking. It is difficult to predict what each therapeutic advance on the horizon will mean for the future, but there is no doubt that we are in a golden age of research and development, which has the prospect of revolutionizing treatment once again. An improved understanding of "optimal" treatment is fundamental to the continued evolution of global care. The challenges of answering government and payer demands for evidence-based medicine, and cost justification for the introduction and enhancement of treatment, are ever-present and growing. To sustain and improve care it is critical to build the body of outcome data for individual patients, within haemophilia treatment centers (HTCs), nationally, regionally and globally. Emerging therapeutic advances (longer half-life therapies and gene transfer) should not be justified or brought to market based only on the notion that they will be economically more affordable, although that may be the case, but rather more importantly that they will be therapeutically more advantageous. Improvements in treatment adherence, reductions in bleeding frequency (including microhemorrhages), better management of trough levels, and improved health outcomes (including quality of life) should be the foremost considerations. As part of a new WFH strategic plan
GenMin: An enhanced genetic algorithm for global optimization
NASA Astrophysics Data System (ADS)
Tsoulos, Ioannis G.; Lagaris, I. E.
2008-06-01
A new method that employs grammatical evolution and a stopping rule for finding the global minimum of a continuous multidimensional, multimodal function is considered. The genetic algorithm used is a hybrid genetic algorithm in conjunction with a local search procedure. We list results from numerical experiments with a series of test functions and we compare with other established global optimization methods. The accompanying software accepts objective functions coded either in Fortran 77 or in C++. Program summaryProgram title: GenMin Catalogue identifier: AEAR_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEAR_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 35 810 No. of bytes in distributed program, including test data, etc.: 436 613 Distribution format: tar.gz Programming language: GNU-C++, GNU-C, GNU Fortran 77 Computer: The tool is designed to be portable in all systems running the GNU C++ compiler Operating system: The tool is designed to be portable in all systems running the GNU C++ compiler RAM: 200 KB Word size: 32 bits Classification: 4.9 Nature of problem: A multitude of problems in science and engineering are often reduced to minimizing a function of many variables. There are instances that a local optimum does not correspond to the desired physical solution and hence the search for a better solution is required. Local optimization techniques are frequently trapped in local minima. Global optimization is hence the appropriate tool. For example, solving a nonlinear system of equations via optimization, employing a least squares type of objective, one may encounter many local minima that do not correspond to solutions (i.e. they are far from zero). Solution method: Grammatical evolution and a stopping rule. Running time: Depending on the
Duan, Hai-Bin; Xu, Chun-Fang; Xing, Zhi-Hui
2010-02-01
In this paper, a novel hybrid of the Artificial Bee Colony (ABC) and Quantum Evolutionary Algorithm (QEA) is proposed for solving continuous optimization problems. ABC is adopted to increase the local search capacity as well as the randomness of the populations. In this way, the improved QEA can escape premature convergence and find the optimal value. To show the performance of the proposed hybrid QEA with ABC, a number of experiments are carried out on a set of well-known benchmark continuous optimization problems, and the results are compared with two other QEAs: the QEA with classical crossover operation, and the QEA with 2-crossover strategy. The experimental comparison demonstrates that the proposed hybrid ABC and QEA approach is feasible and effective in solving complex continuous optimization problems. PMID:20180252
Applications of parallel global optimization to mechanics problems
NASA Astrophysics Data System (ADS)
Schutte, Jaco Francois
Global optimization of complex engineering problems, with a high number of variables and local minima, requires sophisticated algorithms with global search capabilities and high computational efficiency. With the growing availability of parallel processing, it makes sense to address these requirements by increasing the parallelism in optimization strategies. This study proposes three methods of concurrent processing. The first method entails exploiting the structure of population-based global algorithms such as the stochastic Particle Swarm Optimization (PSO) algorithm and the Genetic Algorithm (GA). As a demonstration of how such an algorithm may be adapted for concurrent processing we modify and apply the PSO to several mechanical optimization problems on a parallel processing machine. Desirable PSO algorithm features such as insensitivity to design variable scaling and modest sensitivity to algorithm parameters are demonstrated. A second approach to parallelism and improving algorithm efficiency is by utilizing multiple optimizations. With this method a budget of fitness evaluations is distributed among several independent sub-optimizations in place of a single extended optimization. Under certain conditions this strategy obtains a higher combined probability of converging to the global optimum than a single optimization which utilizes the full budget of fitness evaluations. The third and final method of parallelism addressed in this study is the use of quasiseparable decomposition, which is applied to decompose loosely coupled problems. This yields several sub-problems of lesser dimensionality which may be concurrently optimized with reduced effort.
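The first method above (exploiting the per-generation independence of fitness evaluations in a population-based algorithm) can be sketched with a basic synchronous PSO whose evaluations are farmed out to a worker pool. Parameter values are standard textbook choices, not the study's, and the pool here is thread-based for portability:

```python
import random
from concurrent.futures import ThreadPoolExecutor

def pso(f, bounds, swarm=20, iters=150, w=0.72, c1=1.49, c2=1.49,
        workers=4, seed=3):
    """Synchronous PSO; each generation's fitness evaluations are
    independent, so they are mapped over a worker pool."""
    rng = random.Random(seed)
    d = len(bounds)
    X = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(swarm)]
    V = [[0.0] * d for _ in range(swarm)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        F = list(pool.map(f, X))             # parallel fitness evaluation
        P, Pf = [x[:] for x in X], F[:]      # personal bests
        g = min(range(swarm), key=lambda i: Pf[i])
        G, Gf = P[g][:], Pf[g]               # global best
        for _ in range(iters):
            for i in range(swarm):
                for j in range(d):
                    V[i][j] = (w * V[i][j]
                               + c1 * rng.random() * (P[i][j] - X[i][j])
                               + c2 * rng.random() * (G[j] - X[i][j]))
                    X[i][j] = min(bounds[j][1],
                                  max(bounds[j][0], X[i][j] + V[i][j]))
            F = list(pool.map(f, X))         # one generation, in parallel
            for i in range(swarm):
                if F[i] < Pf[i]:
                    P[i], Pf[i] = X[i][:], F[i]
            g = min(range(swarm), key=lambda i: Pf[i])
            if Pf[g] < Gf:                   # synchronous best update
                G, Gf = P[g][:], Pf[g]
    return G, Gf

# usage: shifted sphere, minimum at (-2, -2)
G, Gf = pso(lambda v: sum((vi + 2.0) ** 2 for vi in v), [(-5.0, 5.0)] * 2)
```

For expensive simulation-based fitness functions the same structure works with a process pool or MPI; the synchronous update is what keeps the parallel and serial variants algorithmically identical.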
Modeling and Global Optimization of DNA separation
Fahrenkopf, Max A.; Ydstie, B. Erik; Mukherjee, Tamal; Schneider, James W.
2014-01-01
We develop a non-convex non-linear programming problem that determines the minimum run time to resolve different lengths of DNA using a gel-free, micelle end-labeled free-solution electrophoresis separation method. Our optimization framework allows for efficient determination of the utility of different DNA separation platforms and enables the identification of the optimal operating conditions for these DNA separation devices. The non-linear programming problem requires a model for signal spacing and signal width, which is known for many DNA separation methods. As a case study, we show how our approach is used to determine the optimal run conditions for micelle end-labeled free-solution electrophoresis and examine the trade-offs between a single capillary system and a parallel capillary system. Parallel capillaries are shown to be beneficial only for DNA lengths above 230 bases using a polydisperse micelle end-label; otherwise, single capillaries produce faster separations. PMID:24764606
Global search algorithm for optimal control
NASA Technical Reports Server (NTRS)
Brocker, D. H.; Kavanaugh, W. P.; Stewart, E. C.
1970-01-01
Random-search algorithm employs local and global properties to solve two-point boundary value problem in Pontryagin maximum principle for either fixed or variable end-time problems. Mixed boundary value problem is transformed to an initial value problem. Mapping between initial and terminal values utilizes hybrid computer.
An adaptive ant colony system algorithm for continuous-space optimization problems.
Li, Yan-jun; Wu, Tie-jun
2003-01-01
Ant colony algorithms comprise a novel category of evolutionary computation methods for optimization problems, especially for sequencing-type combinatorial optimization problems. An adaptive ant colony algorithm is proposed in this paper to tackle continuous-space optimization problems, using a new objective-function-based heuristic pheromone assignment approach for pheromone update to filter solution candidates. Global optimal solutions can be reached more rapidly by self-adjusting the path searching behaviors of the ants according to objective values. The performance of the proposed algorithm is compared with a basic ant colony algorithm and a Sequential Quadratic Programming (SQP) approach in solving two benchmark problems with multiple extremes. The results indicated that the efficiency and reliability of the proposed algorithm were greatly improved. PMID:12656341
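The continuous-space pheromone idea can be illustrated with a generic archive-based ant colony sketch (in the spirit of ACO-for-continuous-domains methods, not the paper's exact adaptive algorithm; all parameter names and values here are illustrative):

```python
import math
import random

def continuous_aco(f, bounds, archive_size=10, ants=20, q=0.2, xi=0.85,
                   iters=200, seed=1):
    """Archive-based ant colony optimization for continuous domains: an
    archive of good solutions acts as the pheromone model, and ants sample
    Gaussians centred on rank-weighted archive members."""
    rng = random.Random(seed)
    archive = sorted(
        ([rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(archive_size)),
        key=f)
    # rank-based weights: better archive members attract more ants
    weights = [math.exp(-(r * r) / (2.0 * (q * archive_size) ** 2))
               for r in range(archive_size)]
    for _ in range(iters):
        for _ in range(ants):
            k = rng.choices(range(archive_size), weights=weights)[0]
            x = []
            for d, (lo, hi) in enumerate(bounds):
                # spread proportional to the archive's dispersion in dimension d
                sigma = xi * sum(abs(a[d] - archive[k][d]) for a in archive) \
                        / (archive_size - 1)
                x.append(min(hi, max(lo, rng.gauss(archive[k][d], sigma + 1e-12))))
            archive.append(x)
        archive = sorted(archive, key=f)[:archive_size]  # pheromone update
    return archive[0], f(archive[0])
```

The shrinking per-dimension sigma plays the role of pheromone evaporation: as the archive concentrates, sampling narrows around promising regions.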
Hölder Continuity and Injectivity of Optimal Maps
NASA Astrophysics Data System (ADS)
Figalli, Alessio; Kim, Young-Heon; McCann, Robert J.
2013-09-01
Consider transportation of one distribution of mass onto another, chosen to optimize the total expected cost, where the cost per unit mass transported from x to y is given by a smooth function c(x, y). If the source density f+(x) is bounded away from zero and infinity in an open region U' ⊂ ℝ^n, and the target density f−(y) is bounded away from zero and infinity on its support V̄ ⊂ ℝ^n, which is strongly c-convex with respect to U', and the transportation cost c satisfies the (A3)_w condition of Trudinger and Wang (Ann Sc Norm Super Pisa Cl Sci 5, 8(1):143-174, 2009), we deduce the local Hölder continuity and injectivity of the optimal map inside U' (so that the associated potential u belongs to C^{1,α}_loc(U')). Here the exponent α > 0 depends only on the dimension and the bounds on the densities, but not on c. Our result provides a crucial step in the low/interior regularity setting: in a sequel (Figalli et al., J Eur Math Soc (JEMS), 1131-1166, 2013), we use it to establish regularity of optimal maps with respect to the Riemannian distance squared on arbitrary products of spheres. Three key tools are introduced in the present paper. Namely, we first find a transformation that under (A3)_w makes c-convex functions level-set convex (as was also obtained independently from us by Liu (Calc Var Partial Diff Eq 34:435-451, 2009)). We then derive new Alexandrov-type estimates for the level-set convex c-convex functions, and a topological lemma showing that optimal maps do not mix the interior with the boundary. This topological lemma, which does not require (A3)_w, is needed by Figalli and Loeper (Calc Var Partial Diff Eq 35:537-550, 2009) to conclude the continuity of optimal maps in two dimensions. In higher dimensions, if the densities f± are Hölder continuous, our result permits continuous differentiability of the map inside U' (in fact, C^{2,α}_loc regularity of the associated potential) to be deduced from the work
Globally optimal trial design for local decision making.
Eckermann, Simon; Willan, Andrew R
2009-02-01
Value of information methods allow decision makers to identify efficient trial designs following a principle of maximizing the expected value to decision makers of information from potential trial designs relative to their expected cost. However, in health technology assessment (HTA) the restrictive assumption has been made that, prospectively, there is only expected value of sample information from research commissioned within a jurisdiction. This paper extends the framework for optimal trial design and decision making within a jurisdiction to allow for optimal trial design across jurisdictions. This is illustrated in identifying an optimal trial design for decision making across the US, the UK and Australia for early versus late external cephalic version for pregnant women presenting in the breech position. The expected net gain from locally optimal trial designs of US$0.72M is shown to increase to US$1.14M with a globally optimal trial design. In general, the proposed method of globally optimal trial design improves on optimal trial design within jurisdictions by: (i) reflecting the global value of non-rival information; (ii) allowing optimal allocation of trial sample across jurisdictions; (iii) avoiding market failure associated with free-rider effects, sub-optimal spreading of fixed costs and heterogeneity of trial information with multiple trials. PMID:18435429
Nonlinear Global Optimization Using Curdling Algorithm
Energy Science and Technology Software Center (ESTSC)
1996-03-01
An algorithm for performing curdling optimization, a derivative-free, grid-refinement approach to nonlinear optimization, was developed and implemented in software. This approach overcomes a number of deficiencies in existing approaches. Most notably, it finds extremal regions rather than only single extremal points. The program is interactive and collects information on control parameters and constraints using menus. For up to four dimensions, function convergence is displayed graphically. Because the algorithm does not compute derivatives, gradients, or vectors, it is numerically stable. It can find all the roots of a polynomial in one pass. It is an inherently parallel algorithm. Constraints are handled as being initially fuzzy, but become tighter with each iteration.
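The grid-refinement idea behind finding extremal regions (rather than single points) can be sketched in one dimension: evaluate on a grid, keep the best fraction of cells, and subdivide each survivor. This is an illustrative reconstruction under stated assumptions, not the ESTSC code:

```python
def grid_refine(f, lo, hi, cells=16, keep=0.2, rounds=6):
    """Derivative-free grid refinement in one dimension: evaluate f at cell
    midpoints, keep the best fraction of cells as candidate extremal regions,
    and subdivide each survivor.  The result is a set of intervals bracketing
    minima -- extremal regions rather than single extremal points."""
    regions = [(float(lo), float(hi))]
    for _ in range(rounds):
        scored = []
        for a, b in regions:
            h = (b - a) / cells
            for k in range(cells):
                mid = a + (k + 0.5) * h
                scored.append((f(mid), a + k * h, a + (k + 1) * h))
        scored.sort()  # best (lowest) midpoint values first
        regions = [(a, b) for _, a, b in scored[:max(1, int(keep * len(scored)))]]
    return regions
```

On a double-well such as f(x) = (x² − 1)², the surviving intervals cluster around both minima at x = ±1, illustrating how the method retains multiple extremal regions in one pass.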
Optimal continuous variable quantum teleportation protocol for realistic settings
NASA Astrophysics Data System (ADS)
Luiz, F. S.; Rigolin, Gustavo
2015-03-01
We show the optimal setup that allows Alice to teleport coherent states |α⟩ to Bob with the greatest fidelity (efficiency) when one takes into account two realistic assumptions. The first is the fact that in any actual implementation of the continuous variable teleportation protocol (CVTP), Alice and Bob necessarily share non-maximally entangled states (two-mode finitely squeezed states). The second assumes that Alice's pool of possible coherent states to be teleported to Bob does not cover the whole complex plane (|α| < ∞). The optimal strategy is achieved by tuning three parameters in the original CVTP, namely, Alice's beam splitter transmittance and Bob's displacements in position and momentum implemented on the teleported state. These slight changes in the protocol are currently easy to implement and, as we show, give a considerable gain in performance for a variety of possible pools of input states available to Alice.
Neural network training with global optimization techniques.
Yamazaki, Akio; Ludermir, Teresa B
2003-04-01
This paper presents an approach of using Simulated Annealing and Tabu Search for the simultaneous optimization of neural network architectures and weights. The problem considered is the odor recognition in an artificial nose. Both methods have produced networks with high classification performance and low complexity. Generalization has been improved by using the backpropagation algorithm for fine tuning. The combination of simple and traditional search methods has shown to be very suitable for generating compact and efficient networks. PMID:12923920
Dispositional optimism and terminal decline in global quality of life.
Zaslavsky, Oleg; Palgi, Yuval; Rillamas-Sun, Eileen; LaCroix, Andrea Z; Schnall, Eliezer; Woods, Nancy F; Cochrane, Barbara B; Garcia, Lorena; Hingle, Melanie; Post, Stephen; Seguin, Rebecca; Tindle, Hilary; Shrira, Amit
2015-06-01
We examined whether dispositional optimism relates to change in global quality of life (QOL) as a function of either chronological age or years to impending death. We used a sample of 2,096 deceased postmenopausal women from the Women's Health Initiative clinical trials who were enrolled in the 2005-2010 Extension Study and for whom at least one global QOL and optimism measure were analyzed. Growth curve models were examined. Competing models were contrasted using model fit criteria. On average, levels of global QOL decreased with both higher age and closer proximity to death (e.g., M(score) = 7.7 eight years prior to death vs. M(score) = 6.1 one year prior to death). A decline in global QOL was better modeled as a function of distance to death (DtD) than as a function of chronological age (Bayesian information criterion [BIC](DtD) = 22,964.8 vs. BIC(age) = 23,322.6). Optimism was a significant correlate of both linear (estimate(DtD) = -0.01, SE(DtD) = 0.005; p = 0.004) and quadratic (estimate(DtD) = -0.006, SE(DtD) = 0.002; p = 0.004) terminal decline in global QOL, so that death-related decline in global QOL was steeper among those with a high level of optimism than among those with a low level of optimism. We found that dispositional optimism helps to maintain a positive psychological perspective in the face of age-related decline. Optimists maintained higher QOL compared with pessimists when death-related trajectories were considered; however, the gap between those with high optimism and those with low optimism progressively attenuated with closer proximity to death, to the point that it became nonsignificant at the time of death. PMID:25938553
Geophysical Inversion With Multi-Objective Global Optimization Methods
NASA Astrophysics Data System (ADS)
Lelièvre, Peter; Bijani, Rodrigo; Farquharson, Colin
2016-04-01
We are investigating the use of Pareto multi-objective global optimization (PMOGO) methods to solve numerically complicated geophysical inverse problems. PMOGO methods can be applied to highly nonlinear inverse problems, to those where derivatives are discontinuous or simply not obtainable, and to those where multiple minima exist in the problem space. PMOGO methods generate a suite of solutions that minimize multiple objectives (e.g. data misfits and regularization terms) in a Pareto-optimal sense. This allows a more complete assessment of the possibilities and provides opportunities to calculate statistics regarding the likelihood of particular model features. We are applying PMOGO methods to four classes of inverse problems. The first are discrete-body problems where the inversion determines values of several parameters that define the location, orientation, size and physical properties of an anomalous body represented by a simple shape, for example a sphere, ellipsoid, cylinder or cuboid. A PMOGO approach can determine not only the optimal shape parameters for the anomalous body but also the optimal shape itself. Furthermore, when one expects several anomalous bodies in the subsurface, a PMOGO inversion approach can determine an optimal number of parameterized bodies. The second class of inverse problems consists of standard mesh-based problems where the physical property values in each cell are treated as continuous variables. The third class comprises lithological inversions, which are also mesh-based but in which cells can only take discrete physical property values corresponding to known or assumed rock units. In the fourth class, surface geometry inversions, we consider a fundamentally different type of problem in which a model comprises wireframe surfaces representing contacts between rock units. The physical properties of each rock unit remain fixed while the inversion controls the position of the contact surfaces via control nodes. Surface geometry inversion can be
Optimizing human activity patterns using global sensitivity analysis
Hickmann, Kyle S.; Mniszewski, Susan M.; Del Valle, Sara Y.; Hyman, James M.
2014-01-01
Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule’s regularity for a population. We show how to tune an activity’s regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. We use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations. PMID:25580080
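The harmony search step used for tuning can be sketched as a minimal HS minimizer (a generic textbook variant, not the authors' DASim-coupled implementation; the parameter names hms, hmcr, par and bw follow common HS usage and the values are illustrative):

```python
import random

def harmony_search(f, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.05,
                   iters=2000, seed=0):
    """Minimal harmony search: improvise a new solution component-wise from a
    memory of good solutions (rate hmcr), pitch-adjust it with probability
    par, and replace the worst memory entry whenever the new one is better."""
    rng = random.Random(seed)
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [f(x) for x in memory]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:
                v = memory[rng.randrange(hms)][d]        # memory consideration
                if rng.random() < par:                   # pitch adjustment
                    v += rng.uniform(-1.0, 1.0) * bw * (hi - lo)
            else:
                v = rng.uniform(lo, hi)                  # random improvisation
            new.append(min(hi, max(lo, v)))
        s = f(new)
        worst = max(range(hms), key=scores.__getitem__)
        if s < scores[worst]:
            memory[worst], scores[worst] = new, s
    best = min(range(hms), key=scores.__getitem__)
    return memory[best], scores[best]
```

In the tuning setting described above, f would be the (emulated) discrepancy between a schedule's SampEn and its target value, restricted to the parameters flagged as influential by the sensitivity analysis.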
Optimizing human activity patterns using global sensitivity analysis
Fairchild, Geoffrey; Hickmann, Kyle S.; Mniszewski, Susan M.; Del Valle, Sara Y.; Hyman, James M.
2013-12-10
Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule’s regularity for a population. We show how to tune an activity’s regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. Here we use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Finally, though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations.
Nallasivam, Ulaganathan; Shah, Vishesh H.; Shenvi, Anirudh A.; Huff, Joshua; Tawarmalani, Mohit; Agrawal, Rakesh
2016-02-10
We present a general Global Minimization Algorithm (GMA) to identify basic or thermally coupled distillation configurations that require the least vapor duty under minimum reflux conditions for separating any ideal or near-ideal multicomponent mixture into a desired number of product streams. In this algorithm, global optimality is guaranteed by modeling the system using Underwood equations and reformulating the resulting constraints to bilinear inequalities. The speed of convergence to the globally optimal solution is increased by using appropriate feasibility- and optimality-based variable-range reduction techniques and by developing valid inequalities. As a result, the GMA can be coupled with already developed techniques that enumerate basic and thermally coupled distillation configurations, to provide, for the first time, a global optimization based rank-list of distillation configurations.
Communication: Optimal parameters for basin-hopping global optimization based on Tsallis statistics
Shang, C.; Wales, D. J.
2014-08-21
A fundamental problem associated with global optimization is the large free energy barrier for the corresponding solid-solid phase transitions in systems with multi-funnel energy landscapes. To address this issue we consider the Tsallis weight instead of the Boltzmann weight to define the acceptance ratio for basin-hopping global optimization. Benchmarks for atomic clusters show that using the optimal Tsallis weight can improve the efficiency by roughly a factor of two. We present a theory for choosing the optimal parameters of the Tsallis weighting, and demonstrate that its predictions are verified for each of the test cases.
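The acceptance rule described above can be sketched directly from the standard Tsallis form; this is a generic illustration of the weighting (with illustrative parameter names), not the authors' benchmark implementation:

```python
import math
import random

def tsallis_weight(delta_e, temperature, q):
    """Tsallis weight W_q(E) = [1 - (1 - q) * E / T]_+ ** (1 / (1 - q)).
    In the limit q -> 1 this reduces to the Boltzmann weight exp(-E/T)."""
    arg = 1.0 - (1.0 - q) * delta_e / temperature
    return 0.0 if arg <= 0.0 else arg ** (1.0 / (1.0 - q))

def tsallis_accept(delta_e, temperature, q, rng=random):
    """Basin-hopping acceptance test: always accept downhill steps, and
    accept uphill steps with probability given by the Tsallis weight."""
    return delta_e <= 0.0 or rng.random() < tsallis_weight(delta_e, temperature, q)
```

Swapping this test in place of the usual Metropolis criterion changes how readily the search escapes a funnel: for q > 1 the weight has a heavier tail than the Boltzmann factor, so large uphill steps are accepted more often.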
Similarity-based global optimization of buildings in urban scene
NASA Astrophysics Data System (ADS)
Zhu, Quansheng; Zhang, Jing; Jiang, Wanshou
2013-10-01
In this paper, an approach for the similarity-based global optimization of buildings in urban scenes is presented. In the past, most research concentrated on single building reconstruction, making it difficult to reconstruct reliable models from noisy or incomplete point clouds. To obtain a better result, a new trend is to utilize the similarity among the buildings. Therefore, a new similarity detection and global optimization strategy is adopted to correct local-fitting geometric errors. Firstly, a hierarchical structure that consists of geometric, topological and semantic features is constructed to represent complex roof models. Secondly, similar roof models can be detected by combining primitive structure and connection similarities. Finally, the global optimization strategy is applied to preserve the consistency and precision of similar roof structures. Moreover, non-local consolidation is adopted to detect small roof parts. The experiments reveal that the proposed method can obtain convincing roof models and improve the reconstruction quality of 3D buildings in urban scenes.
A Memetic Algorithm for Global Optimization of Multimodal Nonseparable Problems.
Zhang, Geng; Li, Yangmin
2016-06-01
Avoiding entrapment in local optima is a major challenge, especially when facing high-dimensional nonseparable problems where the interdependencies among vector elements are unknown. In order to improve the performance of optimization algorithms, a novel memetic algorithm (MA) called cooperative particle swarm optimizer-modified harmony search (CPSO-MHS) is proposed in this paper, where the CPSO is used for local search and the MHS for global search. The CPSO, as a local search method, uses 1-D swarms to search each dimension separately and thus converges fast. Besides, it can obtain global optimum elements according to our experimental results and analyses. MHS implements the global search by recombining different vector elements and extracting global optimum elements. The interaction between local search and global search creates a set of local search zones, where global optimum elements reside within the search space. The CPSO-MHS algorithm is tested and compared with seven other optimization algorithms on a set of 28 standard benchmarks. Meanwhile, some MAs are also compared according to the results derived directly from their corresponding references. The experimental results demonstrate a good performance of the proposed CPSO-MHS algorithm in solving multimodal nonseparable problems. PMID:26292352
Orbit design and optimization based on global telecommunication performance metrics
NASA Technical Reports Server (NTRS)
Lee, Seungwon; Lee, Charles H.; Kerridge, Stuart; Cheung, Kar-Ming; Edwards, Charles D.
2006-01-01
The orbit selection of telecommunications orbiters is one of the critical design processes and should be guided by global telecom performance metrics and mission-specific constraints. In order to aid the orbit selection, we have coupled the Telecom Orbit Analysis and Simulation Tool (TOAST) with genetic optimization algorithms. As a demonstration, we have applied the developed tool to select an optimal orbit for general Mars telecommunications orbiters with the constraint of being a frozen orbit. While a typical optimization goal is to minimize telecommunications downtime, several relevant performance metrics are examined: 1) area-weighted average gap time, 2) global maximum of local maximum gap time, 3) global maximum of local minimum gap time. Optimal solutions are found with each of the metrics. Common and different features among the optimal solutions as well as the advantages and disadvantages of each metric are presented. The optimal solutions are compared with several candidate orbits that were considered during the development of Mars Telecommunications Orbiter.
Restarted local search algorithms for continuous black box optimization.
Pošík, Petr; Huyer, Waltraud
2012-01-01
Several local search algorithms for real-valued domains (axis parallel line search, Nelder-Mead simplex search, Rosenbrock's algorithm, quasi-Newton method, NEWUOA, and VXQR) are described and thoroughly compared in this article, embedding them in a multi-start method. Their comparison aims (1) to help the researchers from the evolutionary community to choose the right opponent for their algorithm (to choose an opponent that would constitute a hard-to-beat baseline algorithm), (2) to describe individual features of these algorithms and show how they influence the algorithm on different problems, and (3) to provide inspiration for the hybridization of evolutionary algorithms with these local optimizers. The recently proposed Comparing Continuous Optimizers (COCO) methodology was adopted as the basis for the comparison. The results show that in low dimensional spaces, the old method of Nelder and Mead is still the most successful among those compared, while in spaces of higher dimensions, it is better to choose an algorithm based on quadratic modeling, such as NEWUOA or a quasi-Newton method. PMID:22779407
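The simplest of the compared methods, an axis-parallel line search embedded in a multi-start wrapper, can be sketched as follows (a generic reconstruction with illustrative step-control parameters, not the benchmarked implementations):

```python
import random

def axis_line_search(f, x, lo, hi, step=0.5, shrink=0.5, tol=1e-6):
    """One local search: probe +/- step along each axis, move on improvement,
    and halve the step whenever no axis yields an improvement."""
    fx = f(x)
    while step > tol:
        improved = False
        for d in range(len(x)):
            for delta in (step, -step):
                y = list(x)
                y[d] = min(hi[d], max(lo[d], y[d] + delta))
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
                    break
        if not improved:
            step *= shrink
    return x, fx

def multistart(f, bounds, restarts=10, seed=0):
    """Restarted local search: run the local method from random starting
    points and keep the best local optimum found."""
    rng = random.Random(seed)
    lo = [b[0] for b in bounds]
    hi = [b[1] for b in bounds]
    best = None
    for _ in range(restarts):
        x0 = [rng.uniform(l, h) for l, h in zip(lo, hi)]
        x, fx = axis_line_search(f, x0, lo, hi)
        if best is None or fx < best[1]:
            best = (x, fx)
    return best
```

The same wrapper applies unchanged to any of the other local optimizers in the comparison; only the inner `axis_line_search` call would be swapped out.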
Dutta, Amit K; Tan, Jasmine; Napadensky, Boris; Zydney, Andrew L; Shinkazh, Oleg
2016-03-01
Recent studies have demonstrated that continuous countercurrent tangential chromatography (CCTC) can effectively purify monoclonal antibodies from clarified cell culture fluid. CCTC has the potential to overcome many of the limitations of conventional packed bed protein A chromatography. This paper explores the optimization of CCTC in terms of product yield, impurity removal, overall productivity, and buffer usage. Modeling was based on data from bench-scale process development and CCTC experiments for protein A capture of two clarified Chinese Hamster Ovary cell culture feedstocks containing monoclonal antibodies provided by industrial partners. The impact of resin binding capacity and kinetics, as well as staging strategy and buffer recycling, was assessed. It was found that optimal staging in the binding step provides better yield and increases overall system productivity by 8-16%. Utilization of higher number of stages in the wash and elution steps can lead to significant decreases in buffer usage (∼40% reduction) as well as increased removal of impurities (∼2 log greater removal). Further reductions in buffer usage can be obtained by recycling of buffer in the wash and regeneration steps (∼35%). Preliminary results with smaller particle size resins show that the productivity of the CCTC system can be increased by 2.5-fold up to 190 g of mAb/L of resin/hr due to the reduction in mass transfer limitations in the binding step. These results provide a solid framework for designing and optimizing CCTC technology for capture applications. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 32:430-439, 2016. PMID:26914276
Application of clustering global optimization to thin film design problems.
Lemarchand, Fabien
2014-03-10
Refinement techniques usually calculate an optimized local solution, which is strongly dependent on the initial formula used for the thin film design. In the present study, a clustering global optimization method is used which can iteratively change this initial formula, thereby progressing further than in the case of local optimization techniques. A wide panel of local solutions is found using this procedure, resulting in a large range of optical thicknesses. The efficiency of this technique is illustrated by two thin film design problems, in particular an infrared antireflection coating, and a solar-selective absorber coating. PMID:24663856
A global optimization paradigm based on change of measures.
Sarkar, Saikat; Roy, Debasish; Vasu, Ram Mohan
2015-07-01
A global optimization framework, COMBEO (Change Of Measure Based Evolutionary Optimization), is proposed. An important aspect in the development is a set of derivative-free additive directional terms, obtainable through a change of measures en route to the imposition of any stipulated conditions aimed at driving the realized design variables (particles) to the global optimum. The generalized setting offered by the new approach also enables several basic ideas, used with other global search methods such as the particle swarm or the differential evolution, to be rationally incorporated in the proposed set-up via a change of measures. The global search may be further aided by imparting to the directional update terms additional layers of random perturbations such as 'scrambling' and 'selection'. Depending on the precise choice of the optimality conditions and the extent of random perturbation, the search can be readily rendered either greedy or more exploratory. As numerically demonstrated, the new proposal appears to provide for a more rational, more accurate and, in some cases, a faster alternative to many available evolutionary optimization schemes. PMID:26587268
Simple proof of the global optimality of the Hohmann transfer
NASA Technical Reports Server (NTRS)
Prussing, John E.
1992-01-01
The case of two-impulse transfer between coplanar circular orbits is considered. The global optimality of the Hohmann transfer among the class of two-impulse transfers is proved via ordinary calculus by using the familiar orbital elements, eccentricity e and parameter (semilatus rectum) p. It is noted that this proof is simpler than existing proofs in the literature.
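The two-impulse Hohmann transfer whose optimality is proved above follows directly from the vis-viva equation; the sketch below (with Earth's gravitational parameter as an assumed default, in km³/s²) returns the total delta-v between coplanar circular orbits of radii r1 and r2:

```python
import math

def hohmann_dv(r1, r2, mu=398600.4418):
    """Total delta-v (km/s) for a two-impulse Hohmann transfer between
    coplanar circular orbits of radii r1 and r2 (km).  Both burns are
    tangential: one at transfer perigee, one at transfer apogee."""
    a = 0.5 * (r1 + r2)                      # transfer-ellipse semi-major axis
    v1 = math.sqrt(mu / r1)                  # circular speed at r1
    v2 = math.sqrt(mu / r2)                  # circular speed at r2
    vp = math.sqrt(mu * (2.0 / r1 - 1.0 / a))  # vis-viva: transfer perigee speed
    va = math.sqrt(mu * (2.0 / r2 - 1.0 / a))  # vis-viva: transfer apogee speed
    return abs(vp - v1) + abs(v2 - va)
```

For a LEO-to-GEO transfer (r1 = 6678 km, r2 = 42164 km) this evaluates to roughly 3.9 km/s, the figure the proof shows cannot be beaten by any other two-impulse transfer between the same circular orbits.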
Global Optimal Trajectory in Chaos and NP-Hardness
NASA Astrophysics Data System (ADS)
Latorre, Vittorio; Gao, David Yang
This paper presents an unconventional theory and method for solving general nonlinear dynamical systems. Instead of the direct iterative methods, the discretized nonlinear system is first formulated as a global optimization problem via the least squares method. A newly developed canonical duality theory shows that this nonconvex minimization problem can be solved deterministically in polynomial time if a global optimality condition is satisfied. The so-called pseudo-chaos produced by linear iterative methods is mainly due to the intrinsic accumulation of numerical errors. Otherwise, the global optimization problem could be NP-hard and the nonlinear system can be truly chaotic. A conjecture is proposed, which reveals the connection between chaos in nonlinear dynamics and NP-hardness in computer science. The methodology and the conjecture are verified by applications to the well-known logistic equation, a forced memristive circuit and the Lorenz system. Computational results show that the canonical duality theory can be used to identify chaotic systems and to obtain realistic global optimal solutions in nonlinear dynamical systems. The method and results presented in this paper should bring some new insights into nonlinear dynamical systems and NP-hardness in computational complexity theory.
Improved Particle Swarm Optimization for Global Optimization of Unimodal and Multimodal Functions
NASA Astrophysics Data System (ADS)
Basu, Mousumi
2015-07-01
Particle swarm optimization (PSO) performs well for low-dimensional and less complicated problems but fails to locate global minima for complex multi-minima functions. This paper proposes an improved particle swarm optimization (IPSO) which introduces Gaussian random variables into the velocity term. This improves search efficiency and guarantees a high probability of obtaining the global optimum without significantly impairing the speed of convergence or the simplicity of the structure of particle swarm optimization. The algorithm is experimentally validated on 17 benchmark functions and the results demonstrate good performance of the IPSO in solving unimodal and multimodal problems. Its high performance is verified by comparison with two popular PSO variants.
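One plausible reading of "Gaussian random variables in the velocity term" is to replace the uniform stochastic factors of the standard velocity update with absolute Gaussian draws; the sketch below makes that assumption explicit and is an illustration, not the paper's exact IPSO scheme:

```python
import random

def ipso(f, bounds, swarm=30, iters=300, w=0.72, c1=1.49, c2=1.49, seed=3):
    """PSO sketch where the stochastic factors in the velocity update are
    |N(0, 1)| draws instead of uniform(0, 1) draws -- an illustrative
    assumption about the 'Gaussian velocity term', not the published IPSO."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [list(p) for p in pos]
    pval = [f(p) for p in pos]
    g = min(range(swarm), key=pval.__getitem__)
    gbest, gval = list(pbest[g]), pval[g]
    for _ in range(iters):
        for i in range(swarm):
            for d, (lo, hi) in enumerate(bounds):
                r1, r2 = abs(rng.gauss(0.0, 1.0)), abs(rng.gauss(0.0, 1.0))
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            s = f(pos[i])
            if s < pval[i]:
                pbest[i], pval[i] = list(pos[i]), s
                if s < gval:
                    gbest, gval = list(pos[i]), s
    return gbest, gval
```

The heavier tail of the Gaussian factors occasionally produces large velocity excursions, which is one way such a modification can help the swarm escape local minima of multimodal functions.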
Obstetricians’ Opinions of the Optimal Caesarean Rate: A Global Survey
Cavallaro, Francesca L.; Cresswell, Jenny A.; Ronsmans, Carine
2016-01-01
Background The debate surrounding the optimal caesarean rate has been ongoing for several decades, with the WHO recommending an “acceptable” rate of 5–15% since 1997, despite a weak evidence base. Global expert opinion from obstetric care providers on the optimal caesarean rate has not been documented. The objective of this study was to examine providers’ opinions of the optimal caesarean rate worldwide, among all deliveries and within specific sub-groups of deliveries. Methods A global online survey of medical doctors who had performed at least one caesarean in the last five years was conducted between August 2013 and January 2014. Respondents were asked to report their opinion of the optimal caesarean rate—defined as the caesarean rate that would minimise poor maternal and perinatal outcomes—at the population level and within specific sub-groups of deliveries (including women with demographic and clinical risk factors for caesareans). Median reported optimal rates and corresponding inter-quartile ranges (IQRs) were calculated for the sample, and stratified according to national caesarean rate, institutional caesarean rate, facility level, and respondent characteristics. Results Responses were collected from 1,057 medical doctors from 96 countries. The median reported optimal caesarean rate was 20% (IQR: 15–30%) for all deliveries. Providers in private for-profit facilities and in facilities with high institutional rates reported optimal rates of 30% or above, while those in Europe, in public facilities and in facilities with low institutional rates reported rates of 15% or less. Reported optimal rates were lowest among low-risk deliveries and highest for Absolute Maternal Indications (AMIs), with wide IQRs observed for most categories other than AMIs. Conclusions Three-quarters of respondents reported an optimal caesarean rate above the WHO 15% upper threshold. There was substantial variation in responses, highlighting a lack of consensus around
Hybrid methods using genetic algorithms for global optimization.
Renders, J M; Flasse, S P
1996-01-01
This paper discusses the trade-off between accuracy, reliability and computing time in global optimization. Particular compromises provided by traditional methods (the Quasi-Newton and Nelder-Mead simplex methods) and genetic algorithms are addressed and illustrated by a particular application in the field of nonlinear system identification. Subsequently, new hybrid methods are designed, combining principles from genetic algorithms and "hill-climbing" methods in order to find a better compromise in the trade-off. Inspired by biology, and especially by the manner in which living beings adapt to their environment, these hybrid methods involve two interwoven levels of optimization, namely evolution (genetic algorithms) and individual learning (Quasi-Newton), which cooperate in a global process of optimization. One of these hybrid methods appears to join the group of state-of-the-art global optimization methods: it combines the reliability properties of genetic algorithms with the accuracy of the Quasi-Newton method, while requiring a computation time only slightly higher than the latter. PMID:18263027
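A minimal sketch of the two interwoven levels, with a simple coordinate hill-climb standing in for the Quasi-Newton "individual learning" step. The population size, operators and step sizes are illustrative assumptions, not the authors' settings.

```python
import random

def memetic_minimize(f, dim, pop_size=20, generations=50, bounds=(-5.0, 5.0)):
    """Two-level search: GA-style evolution, with a short local 'learning'
    step applied to each offspring (a coordinate hill-climb stands in for
    the Quasi-Newton refinement used in the paper)."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]

    def learn(x, step=0.1, iters=20):          # individual learning
        x = list(x)
        for _ in range(iters):
            for i in range(dim):
                for d in (step, -step):
                    trial = list(x)
                    trial[i] += d
                    if f(trial) < f(x):
                        x = trial
        return x

    for _ in range(generations):               # evolution
        pop.sort(key=f)
        parents = pop[:pop_size // 2]          # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            child = [(ai + bi) / 2 + random.gauss(0, 0.1)  # crossover + mutation
                     for ai, bi in zip(a, b)]
            children.append(learn(child))
        pop = parents + children
    return min(pop, key=f)

best = memetic_minimize(lambda x: sum(t * t for t in x), dim=2)
```

The key design point from the paper survives the simplification: evolution explores globally while each individual is locally refined before competing, so the population carries locally optimized candidates rather than raw sample points.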
Automated parameterization of intermolecular pair potentials using global optimization techniques
NASA Astrophysics Data System (ADS)
Krämer, Andreas; Hülsmann, Marco; Köddermann, Thorsten; Reith, Dirk
2014-12-01
In this work, different global optimization techniques are assessed for the automated development of molecular force fields, as used in molecular dynamics and Monte Carlo simulations. The search for suitable force field parameters is treated as a mathematical minimization problem. Intricate problem characteristics such as extremely costly and even abortive simulations, noisy simulation results, and especially multiple local minima naturally lead to the use of sophisticated global optimization algorithms. Five diverse algorithms (pure random search, recursive random search, CMA-ES, differential evolution, and taboo search) are compared to our own tailor-made solution named CoSMoS. CoSMoS is an automated workflow. It models the parameters' influence on the simulation observables to detect a globally optimal set of parameters. It is shown how and why this approach is superior to other algorithms. Applied to suitable test functions and simulations for phosgene, CoSMoS effectively reduces the number of required simulations and real time for the optimization task.
Global optimization in systems biology: stochastic methods and their applications.
Balsa-Canto, Eva; Banga, J R; Egea, J A; Fernandez-Villaverde, A; de Hijas-Liste, G M
2012-01-01
Mathematical optimization is at the core of many problems in systems biology: (1) as the underlying hypothesis for model development, (2) in model identification, or (3) in the computation of optimal stimulation procedures to synthetically achieve a desired biological behavior. These problems are usually formulated as nonlinear programming problems (NLPs) with dynamic and algebraic constraints. However, the nonlinear and highly constrained nature of systems biology models, together with the usually large number of decision variables, can make their solution a daunting task, therefore calling for efficient and robust optimization techniques. Here, we present novel global optimization methods and software tools such as cooperative enhanced scatter search (eSS), AMIGO, or DOTcvpSB, and illustrate their possibilities in the context of modeling, including model identification and stimulation design, in systems biology. PMID:22161343
Landsat Data Continuity Mission (LDCM) - Optimizing X-Band Usage
NASA Technical Reports Server (NTRS)
Garon, H. M.; Gal-Edd, J. S.; Dearth, K. W.; Sank, V. I.
2010-01-01
The NASA version of the low-density parity check (LDPC) 7/8-rate code, shortened to the dimensions of (8160, 7136), has been implemented as the forward error correction (FEC) scheme for the Landsat Data Continuity Mission (LDCM). This is the first flight application of this code. In order to place a 440 Msps link within the 375 MHz-wide X band, we found it necessary to heavily bandpass filter the satellite transmitter output. Despite the significant amplitude and phase distortions that accompanied the spectral truncation, the mission-required BER is maintained at < 10^-12 with less than 2 dB of implementation loss. We utilized a band-pass filter designed ostensibly to replicate the link distortions to demonstrate link design viability. The same filter was then used to optimize the adaptive equalizer in the receiver employed at the terminus of the downlink. The excellent results we obtained could be directly attributed to the implementation of the LDPC code and the amplitude and phase compensation provided in the receiver. Similar results were obtained with receivers from several vendors.
Analysis of optimal and near-optimal continuous-thrust transfer problems in general circular orbit
NASA Astrophysics Data System (ADS)
Kéchichian, Jean A.
2009-09-01
A pair of practical problems in optimal continuous-thrust transfer in general circular orbit is analyzed within the context of analytic averaging for rapid computations leading to near-optimal solutions. The first problem addresses the minimum-time transfer between inclined circular orbits by proposing an analytic solution based on a split-sequence strategy in which the equatorial inclination and node controls are done separately by optimally selecting the intermediate orbit size at the sequence switch point that results in the minimum-time transfer. The consideration of the equatorial inclination and node state variables besides the orbital velocity variable is needed to further account for the important J2 perturbation that precesses the orbit plane during the transfer, unlike the thrust-only case in which it is sufficient to consider the relative inclination and velocity variables thus reducing the dimensionality of the system equations. Further extensions of the split-sequence strategy with analytic J2 effect are thus possible for equal computational ease. The second problem addresses the maximization of the equatorial inclination in fixed time by adopting a particular thrust-averaging scheme that controls only the inclination and velocity variables, leaving the node at the mercy of the J2 precession, providing robust fast-converging codes that lead to efficient near-optimal solutions. Example transfers for both sets of problems are solved showing near-optimal features as far as transfer time is concerned, by directly comparing the solutions to "exact" purely numerical counterparts that rely on precision integration of the raw unaveraged system dynamics with continuously varying thrust vector orientation in three-dimensional space.
Globally consistent registration of terrestrial laser scans via graph optimization
NASA Astrophysics Data System (ADS)
Theiler, Pascal Willy; Wegner, Jan Dirk; Schindler, Konrad
2015-11-01
In this paper we present a framework for the automatic registration of multiple terrestrial laser scans. The proposed method can handle arbitrary point clouds with reasonable pairwise overlap, without knowledge about their initial orientation and without the need for artificial markers or other specific objects. The framework is divided into a coarse and a fine registration part, which each start with pairwise registration and then enforce consistent global alignment across all scans. While we put forward a complete, functional registration system, the novel contribution of the paper lies in the coarse global alignment step. Merging multiple scans into a consistent network creates loops along which the relative transformations must add up. We pose the task of finding a global alignment as picking the best candidates from a set of putative pairwise registrations, such that they satisfy the loop constraints. This yields a discrete optimization problem that can be solved efficiently with modern combinatorial methods. Having found a coarse global alignment in this way, the framework proceeds by pairwise refinement with standard ICP, followed by global refinement to evenly spread the residual errors. The framework was tested on six challenging, real-world datasets. The discrete global alignment step effectively detects, removes and corrects failures of the pairwise registration procedure, finally producing a globally consistent coarse scan network which can be used as initial guess for the highly non-convex refinement. Our overall system reaches success rates close to 100% at acceptable runtimes < 1 h, even in challenging conditions such as scanning in the forest.
Efficient global optimization of a limited parameter antenna design
NASA Astrophysics Data System (ADS)
O'Donnell, Teresa H.; Southall, Hugh L.; Kaanta, Bryan
2008-04-01
Efficient Global Optimization (EGO) is a competent evolutionary algorithm suited for problems with limited design parameters and expensive cost functions. Many electromagnetics problems, including some antenna designs, fall into this class, as complex electromagnetics simulations can take substantial computational effort. This makes simple evolutionary algorithms such as genetic algorithms or particle swarms very time-consuming for design optimization, as many iterations of large populations are usually required. When physical experiments are necessary to perform tradeoffs or determine effects which may not be simulated, use of these algorithms is simply not practical at all due to the large numbers of measurements required. In this paper we first present a brief introduction to the EGO algorithm. We then present the parasitic superdirective two-element array design problem and results obtained by applying EGO to obtain the optimal element separation and operating frequency to maximize the array directivity. We compare these results to both the optimal solution and results obtained by performing a similar optimization using the Nelder-Mead downhill simplex method. Our results indicate that, unlike the Nelder-Mead algorithm, the EGO algorithm did not become stuck in local minima but rather found the area of the correct global minimum. However, our implementation did not always drill down into the precise minimum and the addition of a local search technique seems to be indicated.
New methods for large scale local and global optimization
NASA Astrophysics Data System (ADS)
Byrd, Richard; Schnabel, Robert
1994-07-01
We have pursued all three topics described in the proposal during this research period. A large amount of effort has gone into the development of large scale global optimization methods for molecular configuration problems. We have developed new general purpose methods that combine efficient stochastic global optimization techniques with several new, more deterministic techniques that account for most of the computational effort, and the success, of the methods. We have applied our methods to Lennard-Jones problems with up to 75 atoms, to water clusters with up to 31 molecules, and to polymers with up to 58 amino acids. The results appear to be the best so far by general purpose optimization methods, and appear to be leading to some interesting chemistry issues. Our research on the second topic, tensor methods, has addressed several areas. We have designed and implemented tensor methods for large sparse systems of nonlinear equations and nonlinear least squares, and have obtained excellent test results on a wide range of problems. We have also developed new tensor methods for nonlinearly constrained optimization problems, and have obtained promising theoretical and preliminary computational results. Finally, on the third topic, limited memory methods for large scale optimization, we have developed and implemented new, extremely efficient limited memory methods for bound constrained problems, and new limited memory trust region methods, both using our recently developed compact representations for quasi-Newton matrices. Computational test results for both methods are promising.
p-MEMPSODE: Parallel and irregular memetic global optimization
NASA Astrophysics Data System (ADS)
Voglis, C.; Hadjidoukas, P. E.; Parsopoulos, K. E.; Papageorgiou, D. G.; Lagaris, I. E.; Vrahatis, M. N.
2015-12-01
A parallel memetic global optimization algorithm suitable for shared memory multicore systems is proposed and analyzed. The considered algorithm combines two well-known and widely used population-based stochastic algorithms, namely Particle Swarm Optimization and Differential Evolution, with two efficient and parallelizable local search procedures. The sequential version of the algorithm was first introduced as MEMPSODE (MEMetic Particle Swarm Optimization and Differential Evolution) and published in the CPC program library. We exploit the inherent and highly irregular parallelism of the memetic global optimization algorithm by means of a dynamic and multilevel approach based on the OpenMP tasking model. In our case, tasks correspond to local optimization procedures or simple function evaluations. Parallelization occurs at each iteration step of the memetic algorithm without affecting its searching efficiency. The proposed implementation, for the same random seed, reaches the same solution irrespective of whether it is executed sequentially or in parallel. Extensive experimental evaluation has been performed in order to illustrate the speedup achieved on a shared-memory multicore server.
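The tasking idea can be sketched in Python, with each individual's local refinement submitted as an independent task. The local solver and all parameters below are placeholders, and, unlike p-MEMPSODE, this simple thread-based sketch is not seed-deterministic across schedules.

```python
import random
from concurrent.futures import ThreadPoolExecutor

def local_search(f, x, step=0.05, iters=200):
    # cheap stochastic hill-climb standing in for the memetic local solver
    fx = f(x)
    for _ in range(iters):
        trial = [v + random.uniform(-step, step) for v in x]
        ft = f(trial)
        if ft < fx:
            x, fx = trial, ft
    return fx, x

def parallel_memetic_step(f, population, workers=4):
    """One memetic iteration: every individual's local refinement runs as an
    independent task, mirroring the OpenMP tasking idea on Python threads."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(lambda x: local_search(f, x), population))
    return sorted(results)          # best-first (fitness, point) pairs

pop = [[random.uniform(-2, 2) for _ in range(2)] for _ in range(8)]
refined = parallel_memetic_step(lambda x: sum(t * t for t in x), pop)
```

The local searches share no state, which is exactly the irregular, embarrassingly parallel workload the paper maps onto OpenMP tasks; the swarm/DE update step would then run sequentially between such parallel phases.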
A deterministic global optimization using smooth diagonal auxiliary functions
NASA Astrophysics Data System (ADS)
Sergeyev, Yaroslav D.; Kvasov, Dmitri E.
2015-04-01
In many practical decision-making problems, the functions involved in the optimization process are black-box, with unknown analytical representations, and hard to evaluate. In this paper, a global optimization problem is considered where both the goal function f(x) and its gradient f′(x) are black-box functions. It is supposed that f′(x) satisfies the Lipschitz condition over the search hyperinterval with an unknown Lipschitz constant K. A new deterministic 'Divide-the-Best' algorithm based on efficient diagonal partitions and smooth auxiliary functions is proposed in its basic version, its convergence conditions are studied, and numerical experiments executed on eight hundred test functions are presented.
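The paper's smooth-auxiliary-function scheme is more elaborate, but the underlying "divide the interval with the best bound" idea can be illustrated with the classical one-dimensional Lipschitz lower bound on f itself. This bisection variant, the test function and the constant K are illustrative; endpoint values are re-evaluated rather than cached for brevity.

```python
import math

def lipschitz_minimize(f, a, b, K, n_iter=60):
    """1-D Lipschitz global search: repeatedly split the interval whose
    lower bound (f(l)+f(r))/2 - K*(r-l)/2 is smallest (a simplification
    of the diagonal, smooth-auxiliary-function scheme in the paper)."""
    def lower(iv):
        # Lipschitz lower bound on f over [l, r] from its endpoint values
        l, r = iv
        return (f(l) + f(r)) / 2 - K * (r - l) / 2

    intervals = [(a, b)]
    best_x, best_f = a, f(a)
    if f(b) < best_f:
        best_x, best_f = b, f(b)
    for _ in range(n_iter):
        l, r = min(intervals, key=lower)   # 'Divide the Best' interval
        intervals.remove((l, r))
        m = (l + r) / 2
        if f(m) < best_f:
            best_x, best_f = m, f(m)
        intervals += [(l, m), (m, r)]
    return best_x, best_f

# multi-minima test: sin(3x) + 0.1x on [0, 6]; |f'| <= 3.1, so K = 3.2 is valid
x, fx = lipschitz_minimize(lambda t: math.sin(3 * t) + 0.1 * t, 0.0, 6.0, K=3.2)
```

Because the lower bound of any interval containing the global minimizer never exceeds the true minimum, such intervals keep getting subdivided, which is what guarantees global (not merely local) convergence.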
Imperialist competitive algorithm combined with chaos for global optimization
NASA Astrophysics Data System (ADS)
Talatahari, S.; Farahmand Azar, B.; Sheikholeslami, R.; Gandomi, A. H.
2012-03-01
A novel chaotic improved imperialist competitive algorithm (CICA) is presented for global optimization. The ICA is a new meta-heuristic optimization developed based on a socio-politically motivated strategy and contains two main steps: the movement of the colonies and the imperialistic competition. Here different chaotic maps are utilized to improve the movement step of the algorithm. Seven different chaotic maps are investigated and the Logistic and Sinusoidal maps are found as the best choices. Comparing the new algorithm with the other ICA-based methods demonstrates the superiority of the CICA for the benchmark functions.
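The Logistic map, one of the two maps the paper found best, generates a chaotic stream in (0, 1) that can stand in for uniform random draws in the colony-movement step. A minimal sketch follows; r = 4 and the seed are conventional choices for full chaos, not values taken from the paper.

```python
def logistic_map_stream(x0=0.7, r=4.0):
    """Chaotic number stream from the Logistic map x <- r*x*(1-x);
    for r = 4 the orbit is chaotic and stays in (0, 1), so successive
    values can replace U(0,1) draws in the colonies' movement step."""
    x = x0
    while True:
        x = r * x * (1.0 - x)
        yield x

chaos = logistic_map_stream()
sample = [next(chaos) for _ in range(5)]
```

The deterministic but aperiodic orbit gives better coverage of (0, 1) than some pseudo-random generators for short sequences, which is the usual rationale for chaos-enhanced metaheuristics like CICA.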
Global optimization using the y-ybar diagram
NASA Astrophysics Data System (ADS)
Brown, Daniel M.
1991-12-01
Software is under development at Teledyne Brown Engineering to represent a lens configuration as a y-ybar or Delano diagram. The program determines third-order Seidel and chromatic aberrations for each configuration. It performs a global search through all valid permutations of configuration space and determines, to within a step increment of the space, the configuration with smallest third-order aberrations. The program was developed to generate first-order optical layouts which promised to reach global minima during subsequent conventional optimization. Other operations allowed by the program are: add or delete surfaces, couple surfaces (for Mangin mirrors), shift the stop position, and display first-order properties and the optical layout (surface radii and thicknesses) for subsequent entry into a conventional lens-design program with automatic optimization. Algorithms for performing some of the key functions, not covered by previous authors, are discussed in this paper.
Multi-fidelity global design optimization including parallelization potential
NASA Astrophysics Data System (ADS)
Cox, Steven Edward
The DIRECT global optimization algorithm is a relatively new space partitioning algorithm designed to determine the globally optimal design within a designated design space. This dissertation examines the applicability of the DIRECT algorithm to two classes of design problems: unimodal functions where small amplitude, high frequency fluctuations in the objective function make optimization difficult; and multimodal functions where multiple local optima are formed by the underlying physics of the problem (as opposed to minor fluctuations in the analysis code). DIRECT is compared with two other multistart local optimization techniques on two polynomial test problems and one engineering conceptual design problem. Three modifications to the DIRECT algorithm are proposed to increase the effectiveness of the algorithm. The DIRECT-BP algorithm is presented which alters the way DIRECT searches the neighborhood of the current best point as optimization progresses. The algorithm reprioritizes which points to analyze at each iteration. This is to encourage analysis of points that surround the best point but that are farther away than the points selected by the DIRECT algorithm. This increases the robustness of the DIRECT search and provides more information on the characteristics of the neighborhood of the point selected as the global optimum. A multifidelity version of the DIRECT algorithm is proposed to reduce the cost of optimization using DIRECT. By augmenting expensive high-fidelity analysis with cheap low-fidelity analysis, the optimization can be performed with fewer high-fidelity analyses. Two correction schemes are examined using high- and low-fidelity results at one point to correct the low-fidelity result at a nearby point. This corrected value is then used in place of a high-fidelity analysis by the DIRECT algorithm. In this way the number of high-fidelity analyses required is reduced and the optimization becomes less expensive. Finally the DIRECT algorithm is
Asynchronous global optimization techniques for medium and large inversion problems
Pereyra, V.; Koshy, M.; Meza, J.C.
1995-04-01
We discuss global optimization procedures adequate for seismic inversion problems. We explain how to save function evaluations (which may involve large scale ray tracing or other expensive operations) by creating a data base of information on what parts of parameter space have already been inspected. It is also shown how a correct parallel implementation using PVM speeds up the process almost linearly with respect to the number of processors, provided that the function evaluations are expensive enough to offset the communication overhead.
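The evaluation database can be sketched as a thin memoizing wrapper around the expensive objective. Rounding coordinates to fixed precision is an illustrative way to identify "already inspected" points; the authors' actual database is more sophisticated than this.

```python
class CachedObjective:
    """Wraps an expensive objective with a database of past evaluations,
    so re-inspecting the same region of parameter space costs nothing
    (a minimal stand-in for the paper's evaluation database)."""
    def __init__(self, f, digits=6):
        self.f, self.digits = f, digits
        self.cache = {}
        self.calls = 0                      # count of true (expensive) evaluations

    def __call__(self, x):
        key = tuple(round(v, self.digits) for v in x)
        if key not in self.cache:
            self.calls += 1                 # only here do we pay for ray tracing etc.
            self.cache[key] = self.f(list(key))
        return self.cache[key]

obj = CachedObjective(lambda x: sum(v * v for v in x))
obj([1.0, 2.0]); obj([1.0, 2.0]); obj([3.0, 0.0])   # second call hits the cache
```

In a parallel PVM-style setting, the same idea extends naturally: workers evaluate uncached points while a shared store prevents duplicated expensive work, which is why speedup stays near-linear when evaluations dominate communication.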
Emergence of Global Shape Processing Continues through Adolescence
ERIC Educational Resources Information Center
Scherf, K. Suzanne; Behrmann, Marlene; Kimchi, Ruth; Luna, Beatriz
2009-01-01
The developmental trajectory of perceptual organization in humans is unclear. This study investigated perceptual grouping abilities across a wide age range (8-30 years) using a classic compound letter global/local (GL) task and a more fine-grained microgenetic prime paradigm (MPP) with both few- and many-element hierarchical displays. In the GL…
Globally optimal surface mapping for surfaces with arbitrary topology.
Li, Xin; Bao, Yunfan; Guo, Xiaohu; Jin, Miao; Gu, Xianfeng; Qin, Hong
2008-01-01
Computing smooth and optimal one-to-one maps between surfaces of the same topology is a fundamental problem in computer graphics, and such a method provides a ubiquitous tool for geometric modeling and data visualization. Its vast variety of applications includes shape registration/matching, shape blending, material/data transfer, data fusion, information reuse, etc. The mapping quality is typically measured in terms of angular distortions among different shapes. This paper proposes and develops a novel quasi-conformal surface mapping framework to globally minimize the stretching energy inevitably introduced between two different shapes. The existing state-of-the-art inter-surface mapping techniques only afford local optimization either on surface patches via boundary cutting or on the simplified base domain, lacking rigorous mathematical foundation and analysis. We design and articulate an automatic variational algorithm that can reach the global distortion minimum for surface mapping between shapes of arbitrary topology, and our algorithm is solely founded upon the intrinsic geometry structure of surfaces. To the best of our knowledge, this is the first attempt towards numerically computing globally optimal maps. Consequently, our mapping framework offers a powerful computational tool for graphics and visualization tasks such as data and texture transfer, shape morphing, and shape matching. PMID:18467756
Multidisciplinary optimization of controlled space structures with global sensitivity equations
NASA Technical Reports Server (NTRS)
Padula, Sharon L.; James, Benjamin B.; Graves, Philip C.; Woodard, Stanley E.
1991-01-01
A new method for the preliminary design of controlled space structures is presented. The method coordinates standard finite element structural analysis, multivariable controls, and nonlinear programming codes and allows simultaneous optimization of the structures and control systems of a spacecraft. Global sensitivity equations are a key feature of this method. The preliminary design of a generic geostationary platform is used to demonstrate the multidisciplinary optimization method. Fifteen design variables are used to optimize truss member sizes and feedback gain values. The goal is to reduce the total mass of the structure and the vibration control system while satisfying constraints on vibration decay rate. Incorporating the nonnegligible mass of actuators causes an essential coupling between structural design variables and control design variables. The solution of the demonstration problem is an important step toward a comprehensive preliminary design capability for structures and control systems. Use of global sensitivity equations helps solve optimization problems that have a large number of design variables and a high degree of coupling between disciplines.
Proposal of Evolutionary Simplex Method for Global Optimization Problem
NASA Astrophysics Data System (ADS)
Shimizu, Yoshiaki
To support agile, rational decision-making under diversified customer demand, the role of optimization engineering has become increasingly important. With this point of view, in this paper we propose a new evolutionary method serving as an optimization technique within the paradigm of optimization engineering. The developed method shows promise for globally solving the various complicated problems that appear in real-world applications. It evolves from the conventional Nelder-Mead simplex method by borrowing ideas from recent meta-heuristics such as PSO. We also describe an algorithm for handling linear inequality constraints effectively, and we validate the effectiveness of the proposed method through comparison with other methods on several benchmark problems.
A Modified Differential Evolution Algorithm with Cauchy Mutation for Global Optimization
NASA Astrophysics Data System (ADS)
Ali, Musrrat; Pant, Millie; Singh, Ved Pal
Differential Evolution (DE) is a powerful yet simple evolutionary algorithm for the optimization of real-valued, multimodal functions. DE is generally considered a reliable, accurate and robust optimization technique. However, the algorithm suffers from premature convergence, a slow convergence rate and large computational time when optimizing computationally expensive objective functions. Therefore, an attempt to speed up DE is considered necessary. This research introduces modified differential evolution (MDE), a modification to DE that enhances the convergence rate without compromising solution quality. In the MDE algorithm, if an individual fails to improve its performance for a specified number of consecutive generations, a new point is generated using Cauchy mutation. MDE is compared with the original DE on a test bed of functions. It is found that MDE requires less computational effort to locate the global optimal solution.
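A sketch of the stagnation rule on top of standard DE/rand/1/bin follows. The stall limit, the Cauchy scale and the choice to restart around the current best are illustrative assumptions where the abstract is silent.

```python
import math
import random

def cauchy(scale=1.0):
    # standard Cauchy deviate via inverse-CDF sampling
    return scale * math.tan(math.pi * (random.random() - 0.5))

def mde_minimize(f, dim, pop_size=20, gens=100, F=0.5, CR=0.9,
                 stall_limit=5, bounds=(-5.0, 5.0)):
    """DE/rand/1/bin plus the abstract's rule: an individual that fails to
    improve for `stall_limit` consecutive generations is regenerated by a
    Cauchy perturbation (parameters here are illustrative)."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    stall = [0] * pop_size
    best_f = min(fit)
    best_x = list(pop[fit.index(best_f)])
    for _ in range(gens):
        leader = pop[fit.index(min(fit))]
        for i in range(pop_size):
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            j_rand = random.randrange(dim)
            trial = [a[k] + F * (b[k] - c[k])
                     if (random.random() < CR or k == j_rand) else pop[i][k]
                     for k in range(dim)]
            ft = f(trial)
            if ft < fit[i]:
                pop[i], fit[i], stall[i] = trial, ft, 0
            else:
                stall[i] += 1
                if stall[i] >= stall_limit:      # Cauchy restart near the leader
                    pop[i] = [v + cauchy(0.1) for v in leader]
                    fit[i], stall[i] = f(pop[i]), 0
        if min(fit) < best_f:
            best_f = min(fit)
            best_x = list(pop[fit.index(best_f)])
    return best_f, best_x

best_f, best_x = mde_minimize(lambda x: sum(t * t for t in x), dim=3)
```

The heavy tail of the Cauchy distribution is the point of the modification: most restarts land near the perturbed point, but occasional large jumps re-inject diversity that plain DE loses as the population contracts.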
Global structural optimizations of surface systems with a genetic algorithm
Chuang, Feng-Chuan
2005-05-01
Global structural optimizations with a genetic algorithm were performed for atomic cluster and surface systems including aluminum atomic clusters, Si magic clusters on the Si(111) 7 x 7 surface, silicon high-index surfaces, and Ag-induced Si(111) reconstructions. First, the global structural optimizations of neutral aluminum clusters Al_n (n up to 23) were performed using a genetic algorithm coupled with a tight-binding potential. Second, a genetic algorithm in combination with tight-binding and first-principles calculations was used to study the structures of magic clusters on the Si(111) 7 x 7 surface. Extensive calculations show that the magic cluster observed in scanning tunneling microscopy (STM) experiments consists of eight Si atoms. Simulated STM images of the Si magic cluster exhibit a ring-like feature similar to STM experiments. Third, a genetic algorithm coupled with a highly optimized empirical potential was used to determine the lowest-energy structures of high-index semiconductor surfaces. The lowest-energy structures of Si(105) and Si(114) were determined successfully. The results for Si(105) and Si(114) are reported within the framework of the highly optimized empirical potential and first-principles calculations. Finally, a genetic algorithm coupled with Si and Ag tight-binding potentials was used to search for Ag-induced Si(111) reconstructions at various Ag and Si coverages. The optimized structural models of the √3 x √3, 3 x 1, and 5 x 2 phases were reported using first-principles calculations. A novel model is found to have lower surface energy than the proposed double-honeycomb chained (DHC) model for both the Au/Si(111) 5 x 2 and Ag/Si(111) 5 x 2 systems.
A global optimization approach to multi-polarity sentiment analysis.
Li, Xinmiao; Li, Jing; Wu, Yukeng
2015-01-01
Following the rapid development of social media, sentiment analysis has become an important social media mining technique. The performance of automatic sentiment analysis primarily depends on feature selection and sentiment classification. While information gain (IG) and support vector machines (SVM) are two important techniques, few studies have optimized both approaches in sentiment analysis. The effectiveness of applying a global optimization approach to sentiment analysis remains unclear. We propose a global optimization-based sentiment analysis (PSOGO-Senti) approach to improve sentiment analysis with IG for feature selection and SVM as the learning engine. The PSOGO-Senti approach utilizes a particle swarm optimization algorithm to obtain a global optimal combination of feature dimensions and parameters in the SVM. We evaluate the PSOGO-Senti model on two datasets from different fields. The experimental results showed that the PSOGO-Senti model can improve binary and multi-polarity Chinese sentiment analysis. We compared the optimal feature subset selected by PSOGO-Senti with the features in the sentiment dictionary. The results of this comparison indicated that PSOGO-Senti can effectively remove redundant and noisy features and can select a domain-specific feature subset with a higher-explanatory power for a particular sentiment analysis task. The experimental results showed that the PSOGO-Senti approach is effective and robust for sentiment analysis tasks in different domains. By comparing the improvements of two-polarity, three-polarity and five-polarity sentiment analysis results, we found that the five-polarity sentiment analysis delivered the largest improvement. The improvement of the two-polarity sentiment analysis was the smallest. We conclude that the PSOGO-Senti achieves higher improvement for a more complicated sentiment analysis task. We also compared the results of PSOGO-Senti with those of the genetic algorithm (GA) and grid search method. From
NASA Astrophysics Data System (ADS)
Paasche, H.; Tronicke, J.
2012-04-01
In many near-surface geophysical applications, multiple tomographic data sets are routinely acquired to explore subsurface structures and parameters. Linking the model generation process of multi-method geophysical data sets can significantly reduce ambiguities in geophysical data analysis and model interpretation. Most geophysical inversion approaches rely on local search optimization methods used to find an optimal model in the vicinity of a user-given starting model, so the final solution may critically depend on the initial model. Alternatively, global optimization (GO) methods have been used to invert geophysical data. They explore the solution space in more detail and determine the optimal model independently of the starting model. Additionally, they can be used to find sets of optimal models, allowing a further analysis of model parameter uncertainties. Here we employ particle swarm optimization (PSO) to realize the global optimization of tomographic data. PSO is an emergent method based on swarm intelligence, characterized by fast and robust convergence towards optimal solutions. The fundamental principle of PSO is inspired by nature: the algorithm mimics the behavior of a flock of birds searching for food in a search space. In PSO, a number of particles cruise a multi-dimensional solution space striving to find optimal model solutions explaining the acquired data. The particles communicate their positions and success and direct their movement according to the position of the currently most successful particle of the swarm. The success of a particle, i.e. the quality of the model currently found by a particle, must be uniquely quantifiable to identify the swarm leader. When jointly inverting disparate data sets, the optimization solution has to satisfy multiple optimization objectives, at least one for each data set. Unique determination of the most successful particle currently leading the swarm is then not possible. Instead, only statements about the Pareto
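The particle dynamics described above (velocities steered toward each particle's personal best and the swarm leader's position) can be sketched in a minimal, generic form. This is a textbook PSO for a box-constrained scalar function, not the tomographic inversion code of the paper; all names and parameter values are illustrative:

```python
import random

def pso(f, bounds, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer for a scalar function f over box bounds."""
    dim = len(bounds)
    # Initialize particle positions uniformly in the box; velocities start at zero.
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # each particle's best-known position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # current swarm leader
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Velocity update: inertia + pull toward personal and global bests.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
                # Keep particles inside the search box.
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d], lo), hi)
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:               # new swarm leader found
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

random.seed(0)  # fixed seed for reproducibility of this demo
best_x, best_f = pso(lambda x: sum(v * v for v in x), [(-5, 5)] * 3)
```

On the 3-D sphere function the swarm collapses onto the origin; in a multi-objective joint inversion, as the abstract notes, the scalar comparison `val < gbest_val` is exactly what breaks down.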
Solving Globally-Optimal Threading Problems in "Polynomial-Time"
Uberbacher, E.C.; Xu, D.; Xu, Y.
1999-04-12
Computational protein threading is a powerful technique for recognizing native-like folds of a protein sequence from a protein fold database. In this paper, we present an improved algorithm (over our previous work) for solving the globally-optimal threading problem, and illustrate how the computational complexity and the fold recognition accuracy of the algorithm change as the cutoff distance for pairwise interactions changes. For a given fold of m residues and M core secondary structures (or simply cores) and a protein sequence of n residues, the algorithm is guaranteed to find a sequence-fold alignment (threading) that is globally optimal, measured collectively by (1) the singleton match fitness, (2) pairwise interaction preference, and (3) alignment gap penalties, in O(mn + MnN^(1.5C-1)) time and O(mn + nN^(C-1)) space. C, which we term the topological complexity of a fold, is a value that characterizes the overall structure of the considered pairwise interactions in the fold, which are typically determined by a specified cutoff distance between the beta-carbon atoms of a pair of amino acids in the fold. C is typically a small positive integer. N represents the maximum number of possible alignments between an individual core of the fold and the protein sequence when its neighboring cores are already aligned, and its value is significantly less than n. When interacting amino acids are required to see each other, C is bounded from above by a small integer no matter how large the cutoff distance is. This indicates that the protein threading problem is polynomial-time solvable if the condition that interacting amino acids see each other is sufficient for accurate fold recognition. A number of extensions have been made to our basic threading algorithm to allow finding a globally-optimal threading under various constraints, including consistency with (1) specified secondary structures (both cores and loops), (2) disulfide bonds, (3) active sites, etc.
PROSPECT: A Computer System for Globally-Optimal Threading
Xu, D.; Xu, Y.
1999-08-06
This paper presents a new computer system, PROSPECT, for protein threading. PROSPECT employs an energy function that consists of three additive terms: (1) a singleton fitness term, (2) a distance-dependent pairwise-interaction preference term, and (3) an alignment gap penalty; it currently uses FSSP as its threading template database. PROSPECT uses a divide-and-conquer algorithm to find an alignment between a query protein sequence and a protein fold template, which is guaranteed to be globally optimal for its energy function. The threading algorithm presented here significantly improves the computational efficiency of our previously-published algorithm, which makes PROSPECT a practical tool even for large protein threading problems. Mathematically, PROSPECT finds a globally-optimal threading between a query sequence of n residues and a fold template of m residues and M core secondary structures in O(nm + MnN^(1.5C-1)) time and O(nm + nN^(C-1)) space, where C, which we term the topological complexity of the template fold, is a value that characterizes the overall structure of the considered pairwise interactions in the fold, and N represents the maximum number of possible alignments between an individual core of the fold and the query sequence when its neighboring cores are already aligned. PROSPECT allows a user to incorporate known biological constraints about the query sequence during the threading process. For given constraints, the system finds a globally-optimal threading that satisfies the constraints. Currently PROSPECT can deal with constraints that reflect geometrical relationships among residues in disulfide bonds or active sites, or relationships determined by the NOE constraints of (low-resolution) NMR spectral data.
Global optimization of minority game by intelligent agents
NASA Astrophysics Data System (ADS)
Xie, Yan-Bo; Wang, Bing-Hong; Hu, Chin-Kun; Zhou, Tao
2005-10-01
We propose a new model of the minority game with intelligent agents who use a trial-and-error method to make choices, such that the standard deviation σ² and the total loss in this model reach the theoretical minimum values in the long-time limit and the global optimization of the system is reached. This suggests that economic systems can self-organize into a highly optimized state through agents who make decisions based on inductive thinking, limited knowledge, and capabilities. When other kinds of agents are also present, the simulation results and analytic calculations show that the intelligent agents can gain profits from producers and are much more competent than the noise traders and conventional agents in the original minority games proposed by Challet and Zhang.
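As a reference point for the σ² discussed above: with purely random (coin-toss) agents, the fluctuation of the attendance A is simply the binomial variance N/4, which the paper's intelligent agents drive down toward the theoretical minimum. A minimal simulation of that random baseline (all names and parameter values are illustrative, not taken from the paper):

```python
import random

def minority_game_sigma2(n_agents=101, rounds=5000, seed=1):
    """Estimate sigma^2 = Var(A), where A is the number of agents choosing
    side 1 in each round, when every agent chooses at random. This coin-toss
    baseline gives sigma^2 close to N/4; adaptive agents in minority-game
    models reduce sigma^2 below this value."""
    rng = random.Random(seed)
    attendances = []
    for _ in range(rounds):
        # Each agent flips a fair coin independently.
        a = sum(rng.choice((0, 1)) for _ in range(n_agents))
        attendances.append(a)
    mean = sum(attendances) / rounds
    return sum((a - mean) ** 2 for a in attendances) / rounds

sigma2 = minority_game_sigma2()
# For 101 random agents, sigma^2 should be near N/4 = 25.25.
```

Any strategy whose measured σ² sits below this N/4 line is coordinating better than chance, which is the yardstick against which the model's "highly optimized state" is judged.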
An Adaptive Unified Differential Evolution Algorithm for Global Optimization
Qiang, Ji; Mitchell, Chad
2014-11-03
In this paper, we propose a new adaptive unified differential evolution algorithm for single-objective global optimization. Instead of the multiple mutation strategies proposed in conventional differential evolution algorithms, this algorithm employs a single equation unifying multiple strategies into one expression. It has the virtue of mathematical simplicity and also provides users the flexibility for broader exploration of the space of mutation operators. By making all control parameters in the proposed algorithm self-adaptively evolve during the process of optimization, it frees the application users from the burden of choosing appropriate control parameters and also improves the performance of the algorithm. In numerical tests using thirteen basic unimodal and multimodal functions, the proposed adaptive unified algorithm shows promising performance in comparison to several conventional differential evolution algorithms.
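The unified single-equation mutation operator itself is not reproduced in the abstract. For orientation, the conventional DE/rand/1/bin strategy that such unified formulations subsume can be sketched as follows (a generic textbook implementation with assumed parameter values, not the authors' code):

```python
import random

def de(f, bounds, pop_size=30, iters=300, F=0.8, CR=0.9, seed=2):
    """Classic DE/rand/1/bin: mutate with a scaled difference of two random
    population members, binomially cross over with the target, keep the
    better of target and trial (greedy selection)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(iters):
        for i in range(pop_size):
            # Pick three distinct donors r1, r2, r3, all different from i.
            r1, r2, r3 = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)      # guarantees at least one mutated gene
            trial = pop[i][:]
            for j in range(dim):
                if rng.random() < CR or j == jrand:
                    # Mutation: base vector plus scaled difference vector.
                    trial[j] = pop[r1][j] + F * (pop[r2][j] - pop[r3][j])
                    lo, hi = bounds[j]
                    trial[j] = min(max(trial[j], lo), hi)
            c = f(trial)
            if c <= cost[i]:                # greedy one-to-one selection
                pop[i], cost[i] = trial, c
    b = min(range(pop_size), key=lambda i: cost[i])
    return pop[b], cost[b]

best_x, best_f = de(lambda x: sum(v * v for v in x), [(-5, 5)] * 5)
```

The control parameters F and CR are exactly the ones the adaptive variant above evolves self-adaptively instead of fixing by hand.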
A Unified Differential Evolution Algorithm for Global Optimization
Qiang, Ji; Mitchell, Chad
2014-06-24
In this paper, we propose a new unified differential evolution (uDE) algorithm for single-objective global optimization. Instead of selecting among multiple mutation strategies as in the conventional differential evolution algorithm, this algorithm employs a single equation as the mutation strategy. It has the virtue of mathematical simplicity and also provides users the flexibility for broader exploration of different mutation strategies. Numerical tests using twelve basic unimodal and multimodal functions show promising performance of the proposed algorithm in comparison to conventional differential evolution algorithms.
hp-Pseudospectral method for solving continuous-time nonlinear optimal control problems
NASA Astrophysics Data System (ADS)
Darby, Christopher L.
2011-12-01
In this dissertation, a direct hp-pseudospectral method for approximating the solution to nonlinear optimal control problems is proposed. The hp-pseudospectral method utilizes a variable number of approximating intervals and variable-degree polynomial approximations of the state within each interval. Using the hp-discretization, the continuous-time optimal control problem is transcribed to a finite-dimensional nonlinear programming problem (NLP). The differential-algebraic constraints of the optimal control problem are enforced at a finite set of collocation points, where the collocation points are either the Legendre-Gauss or Legendre-Gauss-Radau quadrature points. These sets of points are chosen because they correspond to high-accuracy Gaussian quadrature rules for approximating the integral of a function. Moreover, the Runge phenomenon for high-degree Lagrange polynomial approximations to the state is avoided by using these points. The key features of the hp-method include computational sparsity associated with low-order polynomial approximations and rapid convergence rates associated with higher-degree polynomial approximations. Consequently, the hp-method is both highly accurate and computationally efficient. Two hp-adaptive algorithms are developed that demonstrate the utility of the hp-approach. The algorithms are shown to accurately approximate the solution to general continuous-time optimal control problems in a computationally efficient manner without a priori knowledge of the solution structure. The hp-algorithms are compared empirically against local (h) and global (p) collocation methods over a wide range of problems and are found to be more efficient and more accurate. The hp-pseudospectral approach developed in this research not only provides a high-accuracy approximation to the state and control of an optimal control problem, but also provides high-accuracy approximations to the costate of the optimal control problem. The costate is approximated by
Human-Like Rule Optimization for Continuous Domains
NASA Astrophysics Data System (ADS)
Hadzic, Fedja; Dillon, Tharam S.
When using machine learning techniques for data mining purposes, one of the main requirements is that the learned rule set is represented in a comprehensible form. Simpler rules are preferred, as they are expected to perform better on unseen data. At the same time, the rules should be specific enough that the misclassification rate is kept to a minimum. In this paper we present a rule optimizing technique motivated by psychological studies of human concept learning. The technique allows reasoning to happen both at a higher level of abstraction and at a lower level of detail in order to optimize the rule set. Information stored at the higher level supports optimizing processes such as rule splitting, merging and deleting, while information stored at the lower level supports determining the attribute relevance for a particular rule. Attributes detected as irrelevant can be removed, and attributes previously removed as irrelevant can be reintroduced if necessary. The method is evaluated on rules extracted from publicly available real-world datasets using different classifiers, and the results demonstrate the effectiveness of the presented rule optimizing technique.
Optimizing a global alignment of protein interaction networks
Chindelevitch, Leonid; Ma, Cheng-Yu; Liao, Chung-Shou; Berger, Bonnie
2013-01-01
Motivation: The global alignment of protein interaction networks is a widely studied problem. It is an important first step in understanding the relationship between the proteins in different species and identifying functional orthologs. Furthermore, it can provide useful insights into the species’ evolution. Results: We propose a novel algorithm, PISwap, for optimizing global pairwise alignments of protein interaction networks, based on a local optimization heuristic that has previously demonstrated its effectiveness for a variety of other intractable problems. PISwap can begin with different types of network alignment approaches and then iteratively adjust the initial alignments by incorporating network topology information, trading it off for sequence information. In practice, our algorithm efficiently refines other well-studied alignment techniques with almost no additional time cost. We also show the robustness of the algorithm to noise in protein interaction data. In addition, the flexible nature of this algorithm makes it suitable for different applications of network alignment. This algorithm can yield interesting insights into the evolutionary dynamics of related species. Availability: Our software is freely available for non-commercial purposes from our Web site, http://piswap.csail.mit.edu/. Contact: bab@csail.mit.edu or csliao@ie.nthu.edu.tw Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24048352
Spectral Approach to Optimal Estimation of the Global Average Temperature.
NASA Astrophysics Data System (ADS)
Shen, Samuel S. P.; North, Gerald R.; Kim, Kwang-Y.
1994-12-01
Making use of EOF analysis and statistical optimal averaging techniques, the problem of random sampling error in estimating the global average temperature by a network of surface stations has been investigated. The EOF representation makes it unnecessary to use simplified empirical models of the correlation structure of temperature anomalies. If an adjustable weight is assigned to each station according to the criterion of minimum mean-square error, a formula for this error can be derived that consists of a sum of contributions from successive EOF modes. The EOFs were calculated from both observed data and a noise-forced EBM for the problem of one-year and five-year averages. The mean square statistical sampling error depends on the spatial distribution of the stations, length of the averaging interval, and the choice of the weight for each station data stream. Examples used here include four symmetric configurations of 4 × 4, 6 × 4, 9 × 7, and 20 × 10 stations and the Angell-Korshover configuration. Comparisons with the 100-yr U.K. dataset show that correlations for the time series of the global temperature anomaly average between the full dataset and this study's sparse configurations are rather high. For example, the 63-station Angell-Korshover network with uniform weighting explains 92.7% of the total variance, whereas the same network with optimal weighting can lead to 97.8% explained total variance of the U.K. dataset.
New Algorithms for Global Optimization and Reaction Path Determination.
Weber, D; Bellinger, D; Engels, B
2016-01-01
We present new schemes to improve the convergence of an important global optimization problem and to determine reaction pathways (RPs) between identified minima. These methods have been implemented in the CAST program (Conformational Analysis and Search Tool). The first part of this chapter shows how to improve the convergence of the Monte Carlo with minimization (MCM, also known as basin hopping) method when applied to optimizing water clusters or aqueous solvation shells using a simple model. Since random movement on the potential energy surface (PES) is an integral part of MCM, we propose to employ a hydrogen-bonding-based algorithm for its improvement. We compare the results obtained for random dihedral movement and for the proposed random rigid-body water molecule movement, giving evidence that a specific adaptation of the distortion process greatly improves the convergence of the method. The second part concerns the determination of RPs in clusters between conformational arrangements and for reactions. Besides standard approaches like the nudged elastic band method, we focus on a new algorithm developed especially for global reaction path search, called Pathopt. We started with argon clusters, a typical benchmark system possessing a flat PES, and then stepwise increased the magnitude and directionality of the interactions. We then calculated pathways for a water cluster and characterized them by frequency calculations. Within our calculations, we were able to show that in addition to local pathways, further pathways can be found which possess additional features. PMID:27497166
Parallel global optimization with the particle swarm algorithm
Schutte, J. F.; Reinbolt, J. A.; Fregly, B. J.; Haftka, R. T.; George, A. D.
2007-01-01
Present day engineering optimization problems often impose large computational demands, resulting in long solution times even on a modern high-end processor. To obtain enhanced computational throughput and global search capability, we detail the coarse-grained parallelization of an increasingly popular global search method, the particle swarm optimization (PSO) algorithm. Parallel PSO performance was evaluated using two categories of optimization problems possessing multiple local minima—large-scale analytical test problems with computationally cheap function evaluations and medium-scale biomechanical system identification problems with computationally expensive function evaluations. For load-balanced analytical test problems formulated using 128 design variables, speedup was close to ideal and parallel efficiency above 95% for up to 32 nodes on a Beowulf cluster. In contrast, for load-imbalanced biomechanical system identification problems with 12 design variables, speedup plateaued and parallel efficiency decreased almost linearly with increasing number of nodes. The primary factor affecting parallel performance was the synchronization requirement of the parallel algorithm, which dictated that each iteration must wait for completion of the slowest fitness evaluation. When the analytical problems were solved using a fixed number of swarm iterations, a single population of 128 particles produced a better convergence rate than did multiple independent runs performed using sub-populations (8 runs with 16 particles, 4 runs with 32 particles, or 2 runs with 64 particles). These results suggest that (1) parallel PSO exhibits excellent parallel performance under load-balanced conditions, (2) an asynchronous implementation would be valuable for real-life problems subject to load imbalance, and (3) larger population sizes should be considered when multiple processors are available. PMID:17891226
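The speedup and parallel-efficiency figures quoted above, and the synchronization penalty behind the load-imbalance result, follow from simple definitions that can be made concrete (the numbers below are illustrative, not the paper's measurements):

```python
def parallel_metrics(t_serial, t_parallel, n_nodes):
    """Speedup S = T_serial / T_parallel and parallel efficiency E = S / p."""
    speedup = t_serial / t_parallel
    return speedup, speedup / n_nodes

def synchronous_iteration_time(eval_times):
    """A synchronous parallel PSO iteration waits for the slowest fitness
    evaluation: the barrier makes iteration time max(eval_times), which is
    the load-imbalance penalty described in the abstract."""
    return max(eval_times)

# Hypothetical timings: 64 s serial run, 2.1 s on 32 nodes.
s, e = parallel_metrics(t_serial=64.0, t_parallel=2.1, n_nodes=32)

# Balanced loads: every evaluation takes 1 s, so the barrier costs nothing.
balanced = synchronous_iteration_time([1.0] * 12)
# Imbalanced loads: one 4 s evaluation dominates the whole iteration.
imbalanced = synchronous_iteration_time([1.0] * 11 + [4.0])
```

With one straggler, eleven nodes idle for three quarters of the iteration, which is why an asynchronous implementation is suggested for load-imbalanced problems.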
Mechanical optimization of superconducting cavities in continuous wave operation
NASA Astrophysics Data System (ADS)
Posen, Sam; Liepe, Matthias
2012-02-01
Several planned accelerator facilities call for hundreds of elliptical cavities operating CW with low effective beam loading, and therefore require cavities that have been mechanically optimized to operate at high QL by minimizing df/dp, the sensitivity to microphonics detuning from fluctuations in helium pressure. Without such an optimization, the facilities would suffer either power costs driven up by millions of dollars or an extremely high per-cavity trip rate. ANSYS simulations used to predict df/dp are presented, as well as a model that illustrates the factors contributing to this parameter in elliptical cavities. For the Cornell Energy Recovery Linac (ERL) main linac cavity, df/dp is found to range from 2.5 to 17.4 Hz/mbar, depending on the radius of the stiffening rings, with minimal df/dp for very small or very large radii. For the Cornell ERL injector cavity, simulations predict a df/dp of 124 Hz/mbar, which fits well within the range of measurements performed with the injector cryomodule. Several methods for reducing df/dp are proposed, including decreasing the diameter of the tuner bellows and increasing the stiffness of the end dishes and the tuner. Using measurements from a TESLA Test Facility cavity as the baseline, if both of these measures were implemented and the stiffening rings were optimized, simulations indicate that df/dp would be reduced from ~30 Hz/mbar to just 2.9 Hz/mbar, and the power required to maintain the accelerating field would be reduced by an order of magnitude. Finally, other consequences of optimizing the stiffening ring radius are investigated. It is found that stiffening rings larger than 70% of the iris-equator distance make the cavity impossible to tune. Small rings, on the other hand, leave the cavity susceptible to plastic deformation during handling and have lower-frequency mechanical resonances, which is undesirable for active compensation of microphonics. Additional simulations of Lorentz force detuning are discussed, and
NASA Astrophysics Data System (ADS)
Hamza, Karim; Shalaby, Mohamed
2014-09-01
This article presents a framework for simulation-based design optimization of computationally expensive problems, where economizing the generation of sample designs is highly desirable. One popular approach for such problems is efficient global optimization (EGO), where an initial set of design samples is used to construct a kriging model, which is then used to generate new 'infill' sample designs at regions of the search space where there is high expectancy of improvement. This article attempts to address one of the limitations of EGO, where the generation of infill samples can become a difficult optimization problem in its own right, as well as to allow the generation of multiple samples at a time in order to take advantage of parallel computing in the evaluation of the new samples. The proposed approach is tested on analytical functions, and then applied to the vehicle crashworthiness design of a full Geo Metro model undergoing frontal crash conditions.
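The "expectancy of improvement" that drives infill sampling in EGO is conventionally the expected-improvement criterion, computed from the kriging predictor's mean and standard deviation at a candidate point. A minimal sketch under that standard assumption (the article's exact infill formulation may differ):

```python
import math

def expected_improvement(mu, sigma, f_best):
    """Expected improvement (for minimization) of a candidate whose kriging
    prediction is mu with standard deviation sigma, relative to the best
    observed value f_best: EI = (f_best - mu) * Phi(z) + sigma * phi(z),
    with z = (f_best - mu) / sigma."""
    if sigma <= 0.0:
        # No predictive uncertainty: improvement is deterministic.
        return max(f_best - mu, 0.0)
    z = (f_best - mu) / sigma
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)   # standard normal pdf
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))          # standard normal cdf
    return (f_best - mu) * Phi + sigma * phi

# A candidate predicted slightly worse than f_best but highly uncertain can
# still beat a confident, marginal improvement -- the exploration/exploitation
# trade-off built into EI.
ei_uncertain = expected_improvement(mu=1.05, sigma=0.5, f_best=1.0)
ei_confident = expected_improvement(mu=0.99, sigma=0.01, f_best=1.0)
```

Maximizing EI over the design space is itself a multimodal optimization problem, which is precisely the EGO limitation the article targets.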
Global Optimization and Broadband Analysis Software for Interstellar Chemistry (GOBASIC)
NASA Astrophysics Data System (ADS)
Rad, Mary L.; Zou, Luyao; Sanders, James L.; Widicus Weaver, Susanna L.
2016-01-01
Context. Broadband receivers that operate at millimeter and submillimeter frequencies necessitate the development of new tools for spectral analysis and interpretation. Simultaneous, global, multimolecule, multicomponent analysis is necessary to accurately determine the physical and chemical conditions from line-rich spectra that arise from sources like hot cores. Aims: We aim to provide a robust and efficient automated analysis program to meet the challenges presented by the large spectral datasets produced by radio telescopes. Methods: We have written a program in the MATLAB numerical computing environment for simultaneous global analysis of broadband line surveys. The Global Optimization and Broadband Analysis Software for Interstellar Chemistry (GOBASIC) program uses the simplifying assumption of local thermodynamic equilibrium (LTE) for spectral analysis to determine molecular column density, temperature, and velocity information. Results: GOBASIC achieves simultaneous, multimolecule, multicomponent fitting for broadband spectra. The number of components that can be analyzed at once is limited only by the available computational resources. Analysis of subsequent sets of molecules or components is performed iteratively while taking the previous fits into account. All features of a given molecule across the entire window are fitted at once, which is preferable to the rotation diagram approach because global analysis is less sensitive to blended features and noise features in the spectra. In addition, the fitting method used in GOBASIC is insensitive to the initial conditions chosen, the fitting is automated, and fitting can be performed in a parallel computing environment. These features make GOBASIC a valuable improvement over previously available LTE analysis methods. A copy of the software is available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/585/A23
Adjusting process count on demand for petascale global optimization
Sosonkina, Masha; Watson, Layne T.; Radcliffe, Nicholas R.; Haftka, Rafael T.; Trosset, Michael W.
2012-11-23
There are many challenges that need to be met before efficient and reliable computation at the petascale is possible. Many scientific and engineering codes running at the petascale are likely to be memory intensive, which makes thrashing a serious problem for many petascale applications. One way to overcome this challenge is to use a dynamic number of processes, so that the total amount of memory available for the computation can be increased on demand. This paper describes modifications made to the massively parallel global optimization code pVTdirect in order to allow for a dynamic number of processes. In particular, the modified version of the code monitors memory use and spawns new processes if the amount of available memory is determined to be insufficient. The primary design challenges are discussed, and performance results are presented and analyzed.
Two-level global optimization for image segmentation
NASA Astrophysics Data System (ADS)
Pan, He-Ping
Domain-independent image segmentation is considered here as a global optimization problem: to seek the simplest description of a given input image in terms of coherent closed regions. The approach consists of two levels of processing, pixel-level and region-level, both based on the Minimum Description Length principle. Pixel-level processing leads to the formation of atomic regions, which are then labelled. In region-level processing, neighbouring regions are merged into larger ones using an explicit attributed graph evolution mechanism. Processing at both levels stops automatically without using any heuristic control parameters. Experiments are carried out with a number of images of different scene types. Parallel implementation of region-level processing is the most difficult problem to be solved for the operational application of this approach.
Gradient gravitational search: An efficient metaheuristic algorithm for global optimization.
Dash, Tirtharaj; Sahu, Prabhat K
2015-05-30
The adaptation of novel techniques developed in the field of computational chemistry to solve problems involving large and flexible molecules is taking center stage with regard to algorithmic efficiency, computational cost, and accuracy. In this article, the gradient-based gravitational search (GGS) algorithm, which uses analytical gradients for fast minimization to the next local minimum, is reported. Its efficiency as a metaheuristic approach has also been compared with that of Gradient Tabu Search and other algorithms such as Gravitational Search, Cuckoo Search, and Backtracking Search for global optimization. Moreover, the GGS approach has also been applied to computational chemistry problems of finding the minimum potential energy of two-dimensional and three-dimensional off-lattice protein models. The simulation results reveal the relative stability and physical accuracy of the protein models with efficient computational cost. PMID:25779670
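The analytical-gradient local relaxation that GGS couples with gravitational-search global moves can be illustrated by plain fixed-step gradient descent (a generic sketch, not the authors' implementation; the step size, tolerance, and test function are assumptions):

```python
def gradient_descent(f, grad, x0, lr=0.1, tol=1e-8, max_iter=10000):
    """Fixed-step gradient descent to the nearest local minimum, the kind of
    analytical-gradient relaxation a global metaheuristic can apply after
    each of its stochastic moves."""
    x = list(x0)
    for _ in range(max_iter):
        g = grad(x)
        # Stop when the squared gradient norm is negligible (stationary point).
        if sum(gi * gi for gi in g) < tol:
            break
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

# Quadratic bowl: the analytical gradient of sum(x_i^2) is 2*x.
xmin = gradient_descent(lambda x: sum(v * v for v in x),
                        lambda x: [2.0 * v for v in x],
                        x0=[3.0, -4.0])
```

Replacing a random local probe with such a gradient step is what lets a hybrid like GGS snap each candidate onto a nearby minimum cheaply, leaving the metaheuristic to hop between basins.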
Optimized continuous pharmaceutical manufacturing via model-predictive control.
Rehrl, Jakob; Kruisz, Julia; Sacher, Stephan; Khinast, Johannes; Horn, Martin
2016-08-20
This paper demonstrates the application of model-predictive control to a feeding blending unit used in continuous pharmaceutical manufacturing. The goal of this contribution is, on the one hand, to highlight the advantages of the proposed concept compared to conventional PI controllers, and, on the other hand, to present a step-by-step guide for controller synthesis. The derivation of the required mathematical plant model is given in detail, and all the steps required to develop a model-predictive controller are shown. Compared to conventional concepts, the proposed approach makes it convenient to consider constraints (e.g. mass hold-up in the blender) and offers a straightforward, easy-to-tune controller setup. The concept is implemented in a simulation environment. In order to realize it on a real system, additional aspects (e.g., state estimation, measurement equipment) will have to be investigated. PMID:27317987
Robust model predictive control for optimal continuous drug administration.
Sopasakis, Pantelis; Patrinos, Panagiotis; Sarimveis, Haralambos
2014-10-01
In this paper the model predictive control (MPC) technology is used for tackling the optimal drug administration problem. The important advantage of MPC compared to other control technologies is that it explicitly takes into account the constraints of the system. In particular, for drug treatments of living organisms, MPC can guarantee satisfaction of the minimum toxic concentration (MTC) constraints. A whole-body physiologically-based pharmacokinetic (PBPK) model serves as the dynamic prediction model of the system after it is formulated as a discrete-time state-space model. Only plasma measurements are assumed to be measured on-line. The rest of the states (drug concentrations in other organs and tissues) are estimated in real time by designing an artificial observer. The complete system (observer and MPC controller) is able to drive the drug concentration to the desired levels at the organs of interest, while satisfying the imposed constraints, even in the presence of modelling errors, disturbances and noise. A case study on a PBPK model with 7 compartments, constraints on 5 tissues and a variable drug concentration set-point illustrates the efficiency of the methodology in drug dosing control applications. The proposed methodology is also tested in an uncertain setting and proves successful in presence of modelling errors and inaccurate measurements. PMID:24986530
ABCluster: the artificial bee colony algorithm for cluster global optimization.
Zhang, Jun; Dolg, Michael
2015-10-01
Global optimization of cluster geometries is of fundamental importance in chemistry and an interesting problem in applied mathematics. In this work, we introduce a relatively new swarm intelligence algorithm, i.e. the artificial bee colony (ABC) algorithm proposed in 2005, to this field. It is inspired by the foraging behavior of a bee colony, and only three parameters are needed to control it. We applied it to several potential functions of quite different nature, i.e., the Coulomb-Born-Mayer, Lennard-Jones, Morse, Z and Gupta potentials. The benchmarks reveal that for long-ranged potentials the ABC algorithm is very efficient in locating the global minimum, while for short-ranged ones it is sometimes trapped into a local minimum funnel on a potential energy surface of large clusters. We have released an efficient, user-friendly, and free program "ABCluster" to realize the ABC algorithm. It is a black-box program for non-experts as well as experts and might become a useful tool for chemists to study clusters. PMID:26327507
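The three phases of the basic ABC algorithm (employed bees, onlooker bees, scouts) can be sketched as follows. A simple sphere function stands in for a cluster potential-energy surface; the colony size, cycle count, and abandonment limit are illustrative, not ABCluster's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):                      # stand-in for a cluster potential energy
    return float(np.sum(x * x))

def abc_minimize(f, dim=4, n_food=20, iters=300, limit=50, lo=-5.0, hi=5.0):
    """Basic artificial bee colony: employed, onlooker and scout phases."""
    X = rng.uniform(lo, hi, (n_food, dim))
    fit = np.array([f(x) for x in X])
    trials = np.zeros(n_food, dtype=int)

    def try_neighbour(i):
        k = rng.integers(n_food - 1)
        k = k + (k >= i)                        # random partner != i
        j = rng.integers(dim)
        v = X[i].copy()
        v[j] += rng.uniform(-1, 1) * (X[i, j] - X[k, j])
        v = np.clip(v, lo, hi)
        fv = f(v)
        if fv < fit[i]:                         # greedy selection
            X[i], fit[i], trials[i] = v, fv, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_food):                 # employed bees
            try_neighbour(i)
        p = 1.0 / (1.0 + fit); p /= p.sum()     # onlookers prefer good sources
        for i in rng.choice(n_food, n_food, p=p):
            try_neighbour(i)
        worn = np.argmax(trials)                # scout abandons a stale source
        if trials[worn] > limit:
            X[worn] = rng.uniform(lo, hi, dim)
            fit[worn] = f(X[worn]); trials[worn] = 0
    return X[np.argmin(fit)], float(fit.min())

best_x, best_f = abc_minimize(sphere)
```

Note how few control parameters appear (colony size, abandonment limit, cycle count), which is the property the abstract highlights.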
Optimizing global liver function in radiation therapy treatment planning
NASA Astrophysics Data System (ADS)
Wu, Victor W.; Epelman, Marina A.; Wang, Hesheng; Romeijn, H. Edwin; Feng, Mary; Cao, Yue; Ten Haken, Randall K.; Matuszak, Martha M.
2016-09-01
Liver stereotactic body radiation therapy (SBRT) patients differ in both pre-treatment liver function (e.g. due to degree of cirrhosis and/or prior treatment) and radiosensitivity, leading to high variability in potential liver toxicity with similar doses. This work investigates three treatment planning optimization models that minimize risk of toxicity: two consider both voxel-based pre-treatment liver function and local-function-based radiosensitivity with dose; one considers only dose. Each model optimizes a different objective function (varying in complexity of capturing the influence of dose on liver function) subject to the same dose constraints, and all are tested on 2D synthesized and 3D clinical cases. The normal-liver-based objective functions are the linearized equivalent uniform dose (ℓEUD) (conventional 'ℓEUD model'), the so-called perfusion-weighted ℓEUD (fEUD) (proposed 'fEUD model'), and post-treatment global liver function (GLF) (proposed 'GLF model'), predicted by a new liver-perfusion-based dose-response model. The resulting ℓEUD, fEUD, and GLF plans delivering the same target ℓEUD are compared with respect to their post-treatment function and various dose-based metrics. Voxel-based portal venous liver perfusion, used as a measure of local function, is computed using DCE-MRI. In cases used in our experiments, the GLF plan preserves up to 4.6% (7.5%) more liver function than the fEUD (ℓEUD) plan does in 2D cases, and up to 4.5% (5.6%) in 3D cases. The GLF and fEUD plans worsen the ℓEUD of functional liver on average by 1.0 Gy and 0.5 Gy in 2D and 3D cases, respectively. Liver perfusion information can be used during treatment planning to minimize the risk of toxicity by improving expected GLF; the degree of benefit varies with perfusion pattern. Although fEUD model optimization is computationally inexpensive and
Romero, Vicente Jose; Ayon, Douglas V.; Chen, Chun-Hung
2003-09-01
A very general and robust approach to solving optimization problems involving probabilistic uncertainty is the use of probabilistic ordinal optimization. At each step in the optimization, improvement is based only on a relative ranking of the probabilistic merits of local design alternatives, rather than on crisp quantification of the alternatives. Thus, we simply ask 'Is that alternative better or worse than this one?' to some required level of statistical confidence, not 'HOW MUCH better or worse is that alternative than this one?'. In this paper we illustrate an elementary application of probabilistic ordinal concepts in a 2-D optimization problem. Two uncertain variables contribute to uncertainty in the response function. We use a simple coordinate pattern search non-gradient-based optimizer to step toward the statistical optimum in the design space. We also discuss more sophisticated implementations, and some of the advantages and disadvantages versus non-ordinal approaches for optimization under uncertainty.
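The ordinal idea (rank, never quantify) combined with coordinate pattern search can be sketched on a toy 2-D problem. The response function, noise level, replication count, and confidence threshold below are invented for illustration; they are not the paper's test case.

```python
import random
random.seed(1)

NOISE = 0.05  # std-dev of response noise (illustrative)

def response(x, y):
    """Noisy objective: true optimum at (2, -1)."""
    return (x - 2) ** 2 + (y + 1) ** 2 + random.gauss(0.0, NOISE)

def is_better(p, q, n=60, conf=0.7):
    """Ordinal test: is design p better than q, to the required confidence?
    We only count wins in paired noisy replications - never 'by how much'."""
    wins = sum(response(*p) < response(*q) for _ in range(n))
    return wins / n > conf

def pattern_search(start, step=1.0, min_step=0.05):
    """Coordinate pattern search driven purely by ordinal comparisons."""
    x = list(start)
    while step > min_step:
        moved = False
        for i in (0, 1):
            for d in (+step, -step):
                trial = x[:]; trial[i] += d
                if is_better(tuple(trial), tuple(x)):
                    x = trial; moved = True
        if not moved:
            step *= 0.5       # refine the mesh when no poll point wins
    return x

opt = pattern_search((0.0, 0.0))
```

When the noise swamps the difference between two nearby designs, `is_better` returns False and the mesh shrinks, which is exactly the ordinal stopping behaviour described above.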
Environmental optimization of continuous flow ozonation for urban wastewater reclamation.
Rodríguez, Antonio; Muñoz, Iván; Perdigón-Melón, José A; Carbajo, José B; Martínez, María J; Fernández-Alba, Amadeo R; García-Calvo, Eloy; Rosal, Roberto
2012-10-15
Wastewater samples from the secondary clarifier of two treatment plants were spiked in the microgram-to-tens-of-microgram per liter range with diuron (herbicide), ibuprofen and diclofenac (anti-inflammatory drugs), sulfamethoxazole and erythromycin (antibiotics), bezafibrate and gemfibrozil (lipid regulators), atenolol (β-blocker), carbamazepine (anti-epileptic), hydrochlorothiazide (diuretic), caffeine (stimulant) and N-acetyl-4-amino-antipyrine, a metabolite of the antipyretic drug dipyrone. They were subsequently ozonated in continuous flow using 1.2 L lab-scale bubble columns. The concentration of all spiking compounds was monitored in the outlet stream. The effects of varying ozone input, expressed as energy per unit volume, and water flow rate, and of using a single or double column were studied in relation to the efficiency of ozone usage and the degree of pollutant depletion. The ozone dosage required to treat both wastewaters with pollutant depletion of >90% was in the 5.5-8.5 mg/L range with ozone efficiencies greater than 80%, depending on the type of wastewater and the operating conditions. This represented 100-200 mol of ozone transferred per mole of pollutant removed. Direct and indirect environmental impacts of ozonation were assessed according to Life Cycle Assessment, a technique that helped identify the most effective treatments in terms of potential toxicity reduction, as well as of toxicity reduction per unit mass of greenhouse-gas emissions, which were used as an indicator of environmental efficiency. A trade-off between environmental effectiveness (toxicity reduction) and greenhouse-gas emissions was observed, since maximizing toxicity removal required relatively high ozone doses and thus led to higher greenhouse-gas emissions. There is also an environmental trade-off between effectiveness and efficiency. Our results indicate that an efficient use of ozone was not compatible with full pollutant removal. PMID:22922131
Josiński, Henryk; Kostrzewa, Daniel; Michalczuk, Agnieszka; Switoński, Adam
2014-01-01
This paper introduces an expanded version of the Invasive Weed Optimization algorithm (exIWO) distinguished by the hybrid strategy of the search space exploration proposed by the authors. The algorithm is evaluated by solving three well-known optimization problems: minimization of numerical functions, feature selection, and the Mona Lisa TSP Challenge as one of the instances of the traveling salesman problem. The achieved results are compared with analogous outcomes produced by other optimization methods reported in the literature. PMID:24955420
Arasomwan, Martins Akugbe; Adewumi, Aderemi Oluyinka
2013-01-01
The linear decreasing inertia weight (LDIW) strategy was introduced to improve the performance of the original particle swarm optimization (PSO). However, the LDIW-PSO algorithm is known to suffer from premature convergence when solving complex (multipeak) optimization problems, because particles lack sufficient momentum for exploitation as the algorithm approaches its terminal point. Researchers have tried to address this shortcoming by modifying LDIW-PSO or proposing new PSO variants, some of which have been claimed to outperform LDIW-PSO. The major goal of this paper is to establish experimentally that LDIW-PSO is highly efficient if its parameters are properly set. First, an experiment was conducted to acquire a percentage value of the search-space limits from which to compute the particle velocity limits in LDIW-PSO, based on commonly used benchmark global optimization problems. Second, using the experimentally obtained values, five well-known benchmark optimization problems were used to show the outstanding performance of LDIW-PSO over some of its competitors which had previously claimed superiority over it. Two other recent PSO variants with different inertia weight strategies were also compared with LDIW-PSO, with the latter outperforming both in the simulation experiments conducted. PMID:24324383
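The mechanism the abstract centers on, a linearly decreasing inertia weight with velocity limits set as a fraction of the search-space range, can be sketched in a few lines. The weight schedule (0.9 to 0.4) is the commonly cited one; the velocity fraction and swarm constants here are illustrative, not the values the paper derives experimentally.

```python
import numpy as np

rng = np.random.default_rng(42)

def sphere(x):
    return float(np.sum(x * x))

def ldiw_pso(f, dim=5, n=30, iters=200, lo=-10.0, hi=10.0,
             w_start=0.9, w_end=0.4, c1=2.0, c2=2.0, v_frac=0.15):
    """PSO with linearly decreasing inertia weight; velocities are clamped
    to v_frac of the search-space range (the tuning parameter the paper
    obtains experimentally)."""
    v_max = v_frac * (hi - lo)
    X = rng.uniform(lo, hi, (n, dim))
    V = rng.uniform(-v_max, v_max, (n, dim))
    P = X.copy()                                  # personal bests
    pf = np.array([f(x) for x in X])
    g = P[np.argmin(pf)].copy()                   # global best

    for t in range(iters):
        w = w_start - (w_start - w_end) * t / (iters - 1)   # LDIW schedule
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        V = w * V + c1 * r1 * (P - X) + c2 * r2 * (g - X)
        V = np.clip(V, -v_max, v_max)
        X = np.clip(X + V, lo, hi)
        fx = np.array([f(x) for x in X])
        better = fx < pf
        P[better], pf[better] = X[better], fx[better]
        g = P[np.argmin(pf)].copy()
    return g, float(pf.min())

g_best, g_val = ldiw_pso(sphere)
```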
The L∞-constrained global optimal histogram equalization technique for real time imaging
NASA Astrophysics Data System (ADS)
Ren, Qiongwei; Niu, Yi; Liu, Lin; Jiao, Yang; Shi, Guangming
2015-08-01
Although current imaging sensors can achieve 12-bit or higher precision, current display devices and the commonly used digital image formats are still only 8-bit. This mismatch causes significant waste of sensor precision and loss of information when storing and displaying images. For better use of the precision budget, tone mapping operators have to be used to map the high-precision data into low-precision digital images adaptively. In this paper, the classic histogram equalization tone mapping operator is reexamined in the sense of optimization. We point out that the traditional histogram equalization technique and its variants are fundamentally flawed in that they suffer from local optimum problems. To overcome this drawback, we remodel the histogram equalization tone mapping task based on graph theory, which achieves globally optimal solutions. Another advantage of the graph-based modeling is that tone-continuity is also modeled as a vital constraint in our approach, which suppresses the annoying boundary artifacts of the traditional approaches. In addition, we propose a novel dynamic programming technique to solve the histogram equalization problem in real time. Experimental results show that the proposed tone-preserved globally optimal histogram equalization technique outperforms the traditional approaches by exhibiting more subtle details in the foreground while preserving the smoothness of the background.
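For context, the conventional baseline the paper improves on, CDF-based histogram equalization mapping 12-bit sensor data to 8-bit output, looks like this. This is the classic operator only, not the paper's graph-based global formulation; the synthetic frame is an assumption.

```python
import numpy as np

def hist_equalize(img12, out_levels=256):
    """Map 12-bit data to 8-bit via the classic cumulative-histogram
    tone curve (the baseline criticized above for local-optimum issues)."""
    hist = np.bincount(img12.ravel(), minlength=4096)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf /= cdf[-1]                          # normalize CDF to [0, 1]
    lut = np.round(cdf * (out_levels - 1)).astype(np.uint8)
    return lut[img12], lut

rng = np.random.default_rng(7)
img = rng.integers(0, 4096, (64, 64))       # synthetic 12-bit frame
out, lut = hist_equalize(img)
```

Because the CDF is nondecreasing, the resulting look-up table is monotone, which is the tone-continuity property the paper promotes from incidental behaviour to an explicit constraint.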
Global and Local Sparse Subspace Optimization for Motion Segmentation
NASA Astrophysics Data System (ADS)
Yang, M. Ying; Feng, S.; Ackermann, H.; Rosenhahn, B.
2015-08-01
In this paper, we propose a new framework for segmenting feature-based moving objects under the affine subspace model. Since feature trajectories in practice are high-dimensional and contain a lot of noise, we first apply sparse PCA to represent the original trajectories with a low-dimensional global subspace, which consists of the orthogonal sparse principal vectors. Subsequently, local subspace separation is achieved by automatically searching for the sparse representation of the nearest neighbors of each projected data point. In order to refine the local subspace estimation, we propose an error estimate that encourages projected data spanning the same local subspace to be clustered together. In the end, the segmentation of different motions is achieved through spectral clustering on an affinity matrix constructed from both the error estimate and the sparse neighbor optimization. We test our method extensively and compare it with state-of-the-art methods on the Hopkins 155 dataset. The results show that our method is comparable with other motion segmentation methods, and in many cases exceeds them in terms of precision and computation time.
Equivalence between entanglement and the optimal fidelity of continuous variable teleportation.
Adesso, Gerardo; Illuminati, Fabrizio
2005-10-01
We devise the optimal form of Gaussian resource states enabling continuous-variable teleportation with maximal fidelity. We show that a nonclassical optimal fidelity of N-user teleportation networks is necessary and sufficient for N-party entangled Gaussian resources, yielding an estimator of multipartite entanglement. The entanglement of teleportation is equivalent to the entanglement of formation in a two-user protocol, and to the localizable entanglement in a multiuser one. Finally, we show that the continuous-variable tangle, quantifying entanglement sharing in three-mode Gaussian states, is defined operationally in terms of the optimal fidelity of a tripartite teleportation network. PMID:16241708
Li, Xingyuan; He, Zhili; Zhou, Jizhong
2005-01-01
The specificity of an oligonucleotide for microarray hybridization can be predicted by its sequence identity to non-targets, its continuous stretch to non-targets, and/or its binding free energy to non-targets. Most currently available programs use only one or two of these criteria and may therefore select ‘false’ specific oligonucleotides or miss ‘true’ optimal probes in a considerable proportion of cases. We have developed a software tool, called CommOligo, using new algorithms and all three criteria for selection of optimal oligonucleotide probes. A series of filters, including sequence identity, free energy, continuous stretch, GC content, self-annealing, distance to the 3′-untranslated region (3′-UTR) and melting temperature (Tm), are used to check each candidate oligonucleotide. Sequence identity is calculated based on gapped global alignments. A traversal algorithm is used to generate alignments for free energy calculation. The optimal Tm interval is determined based on probe candidates that have passed all other filters. Final probes are picked using a combination of user-configurable piece-wise linear functions and an iterative process. The thresholds for the identity, stretch and free energy filters are determined automatically from experimental data by an accessory software tool, CommOligo_PE (CommOligo Parameter Estimator). The program was used to design probes for both whole-genome and highly homologous sequence data. CommOligo and CommOligo_PE are freely available to academic users upon request. PMID:16246912
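Three of the simpler filters can be illustrated directly. The Wallace-rule Tm is a crude stand-in for CommOligo's thermodynamic machinery, and the thresholds in `passes_filters` are invented defaults, not the tool's parameters.

```python
def gc_content(seq):
    """Fraction of G/C bases in the oligo."""
    s = seq.upper()
    return (s.count("G") + s.count("C")) / len(s)

def wallace_tm(seq):
    """Wallace-rule melting temperature, Tm = 4(G+C) + 2(A+T) degC
    (a rough estimate, valid for short oligos only)."""
    s = seq.upper()
    gc = s.count("G") + s.count("C")
    return 4 * gc + 2 * (len(s) - gc)

def longest_common_stretch(probe, nontarget):
    """Length of the longest contiguous substring shared with a
    non-target - the 'continuous stretch' specificity criterion."""
    best = 0
    for i in range(len(probe)):
        for j in range(i + best + 1, len(probe) + 1):
            if probe[i:j] in nontarget:
                best = j - i
            else:
                break        # extending a failed stretch cannot succeed
    return best

def passes_filters(probe, nontarget, gc_lo=0.40, gc_hi=0.60,
                   tm_lo=50, tm_hi=65, max_stretch=15):
    """Thresholds here are illustrative, not CommOligo's defaults."""
    return (gc_lo <= gc_content(probe) <= gc_hi
            and tm_lo <= wallace_tm(probe) <= tm_hi
            and longest_common_stretch(probe, nontarget) <= max_stretch)
```

In the real tool these hard thresholds are learned from experimental data (by CommOligo_PE) and combined through piece-wise linear scoring rather than a boolean AND.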
Optimal control of nonlinear continuous-time systems in strict-feedback form.
Zargarzadeh, Hassan; Dierks, Travis; Jagannathan, Sarangapani
2015-10-01
This paper proposes a novel optimal tracking control scheme for nonlinear continuous-time systems in strict-feedback form with uncertain dynamics. The optimal tracking problem is transformed into an equivalent optimal regulation problem through a feedforward adaptive control input that is generated by modifying the standard backstepping technique. Subsequently, a neural network-based optimal control scheme is introduced to estimate the cost, or value function, over an infinite horizon for the resulting nonlinear continuous-time systems in affine form when the internal dynamics are unknown. The estimated cost function is then used to obtain the optimal feedback control input; therefore, the overall optimal control input for the nonlinear continuous-time system in strict-feedback form includes the feedforward plus the optimal feedback terms. It is shown that the estimated cost function minimizes the Hamilton-Jacobi-Bellman estimation error in a forward-in-time manner without using any value or policy iterations. Finally, optimal output feedback control is introduced through the design of a suitable observer. Lyapunov theory is utilized to show the overall stability of the proposed schemes without requiring an initial admissible controller. Simulation examples are provided to validate the theoretical results. PMID:26111400
Liu, Liqiang; Dai, Yuntao; Gao, Jinyu
2014-01-01
Ant colony optimization for continuous domains is a major research direction within ant colony optimization. In this paper, we propose a distribution model of ant colony foraging, through analysis of the relationship between the position distribution and the food source in the process of ant colony foraging. We design a continuous-domain optimization algorithm based on the model and give the form of solutions for the algorithm, the distribution model of the pheromone, the update rules for ant colony positions, and the processing method for constraint conditions. The algorithm's performance was evaluated on a set of unconstrained optimization test functions and a set of constrained optimization test functions, and the results were compared with those of other algorithms to verify the correctness and effectiveness of the proposed algorithm. PMID:24955402
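A common way to carry pheromone into continuous domains, sketched below, is the archive-based scheme (in the spirit of ACO_R): the "pheromone" is a set of Gaussian kernels centred on an archive of good solutions. This is a generic illustration of the family, not the authors' specific foraging distribution model, and all constants are assumed.

```python
import numpy as np

rng = np.random.default_rng(3)

def sphere(x):
    return float(np.sum(x * x))

def aco_continuous(f, dim=3, archive=10, ants=20, iters=200,
                   q=0.2, xi=0.85, lo=-5.0, hi=5.0):
    """Archive-based continuous ACO: each coordinate of a new ant is drawn
    from a Gaussian centred on a rank-selected archive solution, with a
    width taken from the spread of the archive (the pheromone model)."""
    S = rng.uniform(lo, hi, (archive, dim))
    fit = np.array([f(s) for s in S])
    order = np.argsort(fit); S, fit = S[order], fit[order]
    ranks = np.arange(1, archive + 1)
    w = np.exp(-(ranks - 1) ** 2 / (2 * (q * archive) ** 2))
    w /= w.sum()                                  # rank-based weights

    for _ in range(iters):
        new = np.empty((ants, dim))
        for a in range(ants):
            g = rng.choice(archive, p=w)          # pick a guide solution
            for j in range(dim):
                sigma = xi * np.mean(np.abs(S[:, j] - S[g, j]))
                new[a, j] = np.clip(rng.normal(S[g, j], sigma + 1e-12), lo, hi)
        nf = np.array([f(s) for s in new])
        S = np.vstack([S, new])                   # keep best `archive` of all
        fit = np.concatenate([fit, nf])
        order = np.argsort(fit)[:archive]
        S, fit = S[order], fit[order]
    return S[0], float(fit[0])

best, best_f = aco_continuous(sphere)
```

Box constraints are handled here by simple clipping; the paper describes its own constraint-processing method.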
Rational design and optimization of fed-batch and continuous fermentations.
Zhang, Wenhui; Inan, Mehmet; Meagher, Michael M
2007-01-01
This chapter provides rational approaches to the design and optimization of fed-batch and continuous fermentations of both Mut+ and Muts (methanol utilization plus and slow) Pichia pastoris strains. The methods are described in detail for the glycerol batch, glycerol fed-batch, transition, and methanol fed-batch/mixed-feed/continuous stirred tank reactor (CSTR) phases of the process, based on glycerol and methanol consumption models. Cell density, broth volume, substrate feed rate, and the length of each phase are rationally designed to conduct runs with selected parameters for optimizing a process. The optimization is anchored by the impact of the specific growth rate/dilution rate (for CSTRs) on productivity. Equations for simulation of a process with optimal parameters are derived for an optimal process design. This protocol can be used as a practical manual for process development of a P. pastoris recombinant fermentation, and also as a reference for fermentation of other microorganisms. PMID:17951634
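The core of rational fed-batch design is the exponential feed profile that holds the specific growth rate at a chosen μ. A textbook sketch under simplifying assumptions (yield-limited growth, maintenance neglected, all fed substrate converted to biomass) is shown below; the numbers are illustrative, not from the protocol.

```python
import math

# Illustrative design parameters (not values from the protocol)
mu   = 0.10   # desired specific growth rate, 1/h
X0V0 = 50.0   # initial total biomass, g
Yxs  = 0.5    # biomass yield on substrate, g/g
Sf   = 500.0  # substrate concentration in the feed, g/L

def feed_rate(t):
    """Exponential feed F(t) = (mu*X0*V0)/(Yxs*Sf) * exp(mu*t), in L/h,
    chosen so the supplied substrate exactly supports growth at mu."""
    return mu * X0V0 / (Yxs * Sf) * math.exp(mu * t)

def simulate(t_end=10.0, dt=1e-3):
    """Euler integration of d(XV)/dt = Yxs*Sf*F(t): all fed substrate is
    converted to biomass (maintenance neglected)."""
    xv, t = X0V0, 0.0
    while t < t_end - 1e-12:
        xv += dt * Yxs * Sf * feed_rate(t)
        t += dt
    return xv

biomass = simulate()
expected = X0V0 * math.exp(mu * 10.0)   # analytic solution for this model
```

Under these assumptions the total biomass grows exactly exponentially at μ, which is what anchors the productivity-versus-growth-rate trade-off discussed above.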
The global dynamics of a discrete juvenile-adult model with continuous and seasonal reproduction.
Ackleh, Azmy S; Chiquet, Ross A
2009-03-01
A general discrete juvenile-adult population model with time-dependent birth rate and nonlinear survivorship rates is studied. When breeding is continuous, it is shown that the model has a unique globally asymptotically stable positive equilibrium provided the net reproductive number is larger than one. If it is smaller than one, then the extinction equilibrium is globally asymptotically stable. When breeding is seasonal, it is shown that there exists a unique globally asymptotically stable periodic solution provided the net reproductive number is larger than one. When this value is less than one, the population goes to extinction. Conditions on the birth rate where the population with seasonal breeding survives while the population with continuous breeding becomes extinct are provided. PMID:22880823
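The persistence/extinction threshold can be illustrated with one concrete instance of such a model. The functional forms below (constant birth rate, density-dependent juvenile survivorship, constant adult survivorship) are invented for illustration; they are not the paper's general model.

```python
def simulate(b, s1=0.5, sa=0.5, steps=300, j0=0.1, a0=0.1):
    """Discrete juvenile-adult model with continuous (constant) breeding:
       J[t+1] = b * A[t]
       A[t+1] = s1/(1 + J[t]) * J[t] + sa * A[t]
    with density-dependent juvenile survivorship (illustrative form)."""
    J, A = j0, a0
    for _ in range(steps):
        J, A = b * A, (s1 / (1.0 + J)) * J + sa * A
    return A

def net_reproductive_number(b, s1=0.5, sa=0.5):
    """Expected lifetime offspring at low density: a newborn reaches
    adulthood w.p. s1, then survives an average 1/(1-sa) seasons."""
    return b * s1 / (1.0 - sa)

persists = simulate(b=2.0)     # R0 = 2   -> positive equilibrium
dies_out = simulate(b=0.4)     # R0 = 0.4 -> extinction equilibrium
```

With these parameters the R0 = 2 population settles at a positive equilibrium (J = 1, A = 0.5 for this form), while the R0 = 0.4 population decays to the extinction equilibrium, matching the dichotomy stated in the abstract.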
Zou, Feng; Chen, Debao; Wang, Jiangtao
2016-01-01
An improved teaching-learning-based optimization algorithm combined with the social character of PSO (TLBO-PSO), which considers the teacher's influence on the students and the mean grade of the class, is proposed in this paper to find global solutions of function optimization problems. In this method, the teacher phase of TLBO is modified; the new position of an individual is determined by its old position, the mean position, and the best position of the current generation. The method overcomes the disadvantage that the evolution of the original TLBO might stop when the mean position of the students equals the position of the teacher. To decrease the computational cost of the algorithm, the process of removing duplicate individuals in the original TLBO is not adopted in the improved algorithm. Moreover, the probability of local convergence of the improved method is decreased by a mutation operator. The effectiveness of the proposed method is tested on some benchmark functions, and the results are competitive with respect to some other methods. PMID:27057157
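One possible reading of the modified teacher phase is sketched below: the update combines the classic TLBO teacher term with a PSO-like pull towards the current best, so learning does not stall when the class mean coincides with the teacher. This is an interpretation of the description, not the authors' exact update rule, and all constants are assumed.

```python
import numpy as np

rng = np.random.default_rng(5)

def sphere(x):
    return float(np.sum(x * x))

def tlbo_pso(f, dim=5, n=20, iters=150, lo=-10.0, hi=10.0, p_mut=0.1):
    """TLBO whose teacher phase adds a social pull towards the best
    individual, plus a mutation operator against local convergence."""
    X = rng.uniform(lo, hi, (n, dim))
    fit = np.array([f(x) for x in X])
    for _ in range(iters):
        teacher = X[np.argmin(fit)]
        mean = X.mean(axis=0)
        for i in range(n):                           # modified teacher phase
            tf = rng.integers(1, 3)                  # teaching factor 1 or 2
            new = (X[i]
                   + rng.random(dim) * (teacher - tf * mean)   # teacher term
                   + rng.random(dim) * (teacher - X[i]))       # social term
            if rng.random() < p_mut:                 # mutation operator
                new[rng.integers(dim)] = rng.uniform(lo, hi)
            new = np.clip(new, lo, hi)
            fn = f(new)
            if fn < fit[i]:                          # greedy acceptance
                X[i], fit[i] = new, fn
        for i in range(n):                           # learner phase
            k = (i + 1 + rng.integers(n - 1)) % n    # random classmate != i
            sign = 1.0 if fit[k] < fit[i] else -1.0
            new = np.clip(X[i] + sign * rng.random(dim) * (X[k] - X[i]), lo, hi)
            fn = f(new)
            if fn < fit[i]:
                X[i], fit[i] = new, fn
    return float(fit.min())

best = tlbo_pso(sphere)
```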
Global optimization of fuel consumption in rendezvous scenarios by the method of interval analysis
NASA Astrophysics Data System (ADS)
Ma, Hongliang; Xu, Shijie
2015-03-01
To reduce the optimal but large Δv of the fixed-short-time two-impulse Lambert rendezvous between two spacecraft in coplanar circular orbits, a three-impulse Lambert rendezvous optimized via interval analysis (IA) is proposed in this paper. The goal of the optimization is to minimize the velocity increment of the fixed-short-time three-impulse Lambert rendezvous. The IA optimization algorithm is formulated for rendezvous optimization problems with multiple uncertain variables and strong nonlinearity and nonconvexity. Numerical examples of the time-open, coplanar-circular-orbit, multiple-revolution Lambert rendezvous with an optimized parking time are first undertaken to validate the feasibility of the IA algorithm, by comparing its results with those of the globally optimal Hohmann transfer. The results indicate that the globally optimal parameters of the time-open coplanar-circular-orbit multiple-revolution Lambert rendezvous can be obtained by IA, and that the initial separation angle of two spacecraft with different orbit radii can be adjusted to obtain a globally optimal, small Δv by allocating an optimal parking time. For the fixed-short-time two-impulse Lambert rendezvous problem, in which there is not sufficient time to adjust the separation angle through a parking time as in the open-time problem, a three-impulse Lambert rendezvous involving multiple optimization variables is then formulated, and the variables are optimized by IA to obtain an optimal, small Δv. Numerical simulation indicates that the optimal, small Δv of the fixed-short-time three-impulse Lambert rendezvous can be obtained using the IA optimization algorithm.
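The benchmark against which the Lambert solutions are compared, the Hohmann transfer, has a simple closed-form Δv. This is standard astrodynamics, not code from the paper; the LEO-to-GEO example radii are illustrative.

```python
import math

MU_EARTH = 398600.4418  # Earth's gravitational parameter, km^3/s^2

def hohmann_dv(r1, r2, mu=MU_EARTH):
    """Total two-impulse delta-v (km/s) for a Hohmann transfer between
    coplanar circular orbits of radius r1 and r2 (km)."""
    a_t = 0.5 * (r1 + r2)                        # transfer semi-major axis
    v1 = math.sqrt(mu / r1)                      # circular speed at r1
    v2 = math.sqrt(mu / r2)                      # circular speed at r2
    vp = math.sqrt(mu * (2.0 / r1 - 1.0 / a_t))  # transfer perigee speed
    va = math.sqrt(mu * (2.0 / r2 - 1.0 / a_t))  # transfer apogee speed
    return abs(vp - v1) + abs(v2 - va)

# Example: LEO (300 km altitude) to GEO
dv = hohmann_dv(6678.0, 42164.0)
```

The Hohmann Δv is globally optimal for two-impulse transfers between coplanar circular orbits (within the usual radius-ratio limits), which is why it serves as the reference in the time-open examples above.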
Fournier, René; Mohareb, Amir
2016-01-14
We devised a global optimization (GO) strategy for optimizing molecular properties with respect to both geometry and chemical composition. A relative index of thermodynamic stability (RITS) is introduced to allow meaningful energy comparisons between different chemical species. We use the RITS by itself, or in combination with another calculated property, to create an objective function F to be minimized. Including the RITS in the definition of F ensures that the solutions have some degree of thermodynamic stability. We illustrate how the GO strategy works with three test applications, with F calculated in the framework of Kohn-Sham Density Functional Theory (KS-DFT) with the Perdew-Burke-Ernzerhof exchange-correlation functional. First, we searched the composition and configuration space of C_mH_nN_pO_q (m = 0-4, n = 0-10, p = 0-2, q = 0-2, and 2 ≤ m + n + p + q ≤ 12) for stable molecules. The GO discovered familiar molecules like N2, CO2, acetic acid, acetonitrile, ethane, and many others, after a small number (5000) of KS-DFT energy evaluations. Second, we carried out a GO of the geometry of Cu_mSn_n^+ (m = 1, 2 and n = 9-12). A single GO run produced the same low-energy structures found in an earlier study where each Cu_mSn_n^+ species had been optimized separately. Finally, we searched bimetallic clusters A_mB_n (3 ≤ m + n ≤ 6, A, B = Li, Na, Al, Cu, Ag, In, Sn, Pb) for species and configurations having a low RITS and a large highest occupied molecular orbital (MO) to lowest unoccupied MO energy gap (Eg). We found seven bimetallic clusters with Eg > 1.5 eV. PMID:26772561
Direct approach for bioprocess optimization in a continuous flat-bed photobioreactor system.
Kwon, Jong-Hee; Rögner, Matthias; Rexroth, Sascha
2012-11-30
The application of photosynthetic micro-organisms, such as cyanobacteria and green algae, to carbon-neutral energy production raises the need for cost-efficient photobiological processes. Optimization of these processes requires permanent control of many independent and mutually dependent parameters, for which a continuous cultivation approach has significant advantages. As central factors such as cell density can be kept constant by turbidostatic control, light intensity and iron content, both of which strongly affect photosynthetic productivity, can be optimized. Here we introduce an engineered low-cost 5 L flat-plate photobioreactor in combination with a simple and efficient optimization procedure for continuous photo-cultivation of microalgae. Based on direct determination of the growth rate at constant cell densities and continuous measurement of O₂ evolution, stress conditions and their effect on photosynthetic productivity can be directly observed. PMID:22789478
Continuation of the NVAP Global Water Vapor Data Sets for Pathfinder Science Analysis
NASA Technical Reports Server (NTRS)
VonderHaar, Thomas H.; Engelen, Richard J.; Forsythe, John M.; Randel, David L.; Ruston, Benjamin C.; Woo, Shannon; Dodge, James (Technical Monitor)
2001-01-01
This annual report covers August 2000 - August 2001 under NASA contract NASW-0032, entitled "Continuation of the NVAP (NASA's Water Vapor Project) Global Water Vapor Data Sets for Pathfinder Science Analysis". NASA has created a list of Earth Science Research Questions which are outlined by Asrar, et al. Particularly relevant to NVAP are the following questions: (a) How are global precipitation, evaporation, and the cycling of water changing? (b) What trends in atmospheric constituents and solar radiation are driving global climate? (c) How well can long-term climatic trends be assessed or predicted? Water vapor is a key greenhouse gas, and an understanding of its behavior is essential in global climate studies. Therefore, NVAP plays a key role in addressing the above climate questions by creating a long-term global water vapor dataset and by updating the dataset with recent advances in satellite instrumentation. The NVAP dataset produced from 1988-1998 has found wide use in the scientific community. Studies of interannual variability are particularly important. A recent paper by Simpson, et al. that examined the NVAP dataset in detail has shown that its relative accuracy is sufficient for the variability studies that contribute toward meeting NASA's goals. In the past year, we have made steady progress towards continuing production of this high-quality dataset as well as performing our own investigations of the data. This report summarizes the past year's work on production of the NVAP dataset and presents results of analyses we have performed in the past year.
Global Optimization of Low-Thrust Interplanetary Trajectories Subject to Operational Constraints
NASA Technical Reports Server (NTRS)
Englander, Jacob A.; Vavrina, Matthew A.; Hinckley, David
2016-01-01
Low-thrust interplanetary space missions are highly complex and there can be many locally optimal solutions. While several techniques exist to search for globally optimal solutions to low-thrust trajectory design problems, they are typically limited to unconstrained trajectories. The operational design community in turn has largely avoided using such techniques and has primarily focused on accurate constrained local optimization combined with grid searches and intuitive design processes at the expense of efficient exploration of the global design space. This work is an attempt to bridge the gap between the global optimization and operational design communities by presenting a mathematical framework for global optimization of low-thrust trajectories subject to complex constraints including the targeting of planetary landing sites, a solar range constraint to simplify the thermal design of the spacecraft, and a real-world multi-thruster electric propulsion system that must switch thrusters on and off as available power changes over the course of a mission.
Continuous Firefly Algorithm for Optimal Tuning of PID Controller in AVR System
NASA Astrophysics Data System (ADS)
Bendjeghaba, Omar
2014-01-01
This paper presents a tuning approach based on the continuous firefly algorithm (CFA) to obtain the proportional-integral-derivative (PID) controller parameters in an automatic voltage regulator (AVR) system. In the tuning process, the CFA is iterated to reach optimal or near-optimal PID controller parameters, with the main goal of improving the AVR step-response characteristics. Conducted simulations show the effectiveness and efficiency of the proposed approach. Furthermore, the proposed approach can improve the dynamics of the AVR system. Compared with particle swarm optimization (PSO), the new CFA tuning method achieves better control-system performance in terms of time-domain specifications and set-point tracking.
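The CFA loop described above can be sketched in a few lines. The following is a minimal, illustrative implementation: a toy first-order plant stands in for a real AVR model, the cost is an ITAE-style criterion, and the plant time constant, gain bounds, and firefly parameters (`beta0`, `gamma`, `alpha`) are all hypothetical choices, not values from the paper.

```python
import math
import random

def step_cost(kp, ki, kd, dt=0.01, T=3.0, tau=0.5):
    """ITAE-style cost of a PID loop around a toy first-order plant
    dy/dt = (-y + u)/tau tracking a unit step (hypothetical stand-in
    for the AVR model)."""
    y = integ = cost = t = 0.0
    prev_err = 1.0
    while t < T:
        err = 1.0 - y
        if abs(err) > 1e6:          # penalize unstable gain combinations
            return 1e9
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        y += dt * (-y + u) / tau
        cost += t * abs(err) * dt   # integral of time-weighted |error|
        prev_err = err
        t += dt
    return cost

def firefly_pid(n=12, iters=30, beta0=1.0, gamma=1.0, alpha=0.1, seed=1):
    """Continuous firefly search over (kp, ki, kd) in [0, 5]^3."""
    rng = random.Random(seed)
    lo, hi = 0.0, 5.0
    pop = [[rng.uniform(lo, hi) for _ in range(3)] for _ in range(n)]
    cost = [step_cost(*p) for p in pop]
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if cost[j] < cost[i]:   # firefly i moves toward brighter j
                    r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    pop[i] = [min(hi, max(lo, a + beta * (b - a)
                              + alpha * (rng.random() - 0.5)))
                              for a, b in zip(pop[i], pop[j])]
                    cost[i] = step_cost(*pop[i])
        alpha *= 0.97                   # shrink the random-walk component
    k = min(range(n), key=lambda m: cost[m])
    return pop[k], cost[k]

gains, c = firefly_pid()
```

The attractiveness term `beta0 * exp(-gamma * r2)` is the standard firefly move; a real AVR study would replace `step_cost` with a simulation of the generator, exciter, amplifier, and sensor transfer functions.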
Summers, R. J.; Boudreaux, D. P.; Srinivasan, V. R.
1979-01-01
Steady-state continuous culture was used to optimize lean chemically defined media for a Cellulomonas sp. and Bacillus cereus strain T. Both organisms were extremely sensitive to variations in trace-metal concentrations. However, medium optimization by this technique proved rapid, and multifactor screening was easily conducted by using a minimum of instrumentation. The optimized media supported critical dilution rates of 0.571 and 0.467 h−1 for Cellulomonas and Bacillus, respectively. These values approximated maximum growth rate values observed in batch culture. PMID:16345417
Optimization of flow control devices in a single-strand slab continuous casting tundish
NASA Astrophysics Data System (ADS)
Ding, Ning; Bao, Yan-Ping; Sun, Qi-Song; Wang, Li-Feng
2011-06-01
The optimization of flow control devices in a single-strand slab continuous casting tundish was carried out by physical modeling, and the optimized scheme was presented. With the optimal tundish configuration, the minimum residence time of liquid steel was increased by a factor of 1.4, the peak concentration time was increased by 97%, and the dead volume fraction was decreased by 72%. A mathematical model of molten steel flow in the tundish was established using the computational fluid dynamics package Fluent. The velocity field, concentration field, and residence time distribution (RTD) curves of molten steel flow before and after optimization were obtained. Experimental results showed that a reasonable configuration of flow control devices can improve the fluid flow characteristics in the tundish. The results of industrial application show that the nonmetallic inclusion area ratio in cast slabs is decreased by 32% with the optimal tundish configuration.
Nonlinear Global Optimization Using Curdling Algorithm in Mathematica Environment
Energy Science and Technology Software Center (ESTSC)
1997-08-05
An algorithm for nonlinear optimization based on a derivative-free, grid-refinement approach was developed and implemented in software as OPTIMIZE. This approach overcomes a number of deficiencies in existing approaches. Most notably, it finds extremal regions rather than only single extremal points. The program is interactive and collects information on control parameters and constraints using menus. For up to two (and potentially three) dimensions, function convergence is displayed graphically. Because the algorithm does not compute derivatives, gradients, or vectors, it is numerically stable. It can find all the roots of a polynomial in one pass. It is an inherently parallel algorithm. OPTIMIZE-M is a modification of OPTIMIZE designed for use within the Mathematica environment created by Wolfram Research.
Time dependence of breakdown in a global fiber-bundle model with continuous damage
Moral, L.; Moreno, Y.; Gomez, J. B.; Pacheco, A. F.
2001-06-01
A time-dependent global fiber-bundle model of fracture with continuous damage is formulated in terms of a set of coupled nonlinear differential equations. A first integral of this set is analytically obtained. The time evolution of the system is studied by applying a discrete probabilistic method. Several results are discussed emphasizing their differences with the standard time-dependent model. The results obtained show that with this simple model a variety of experimental observations can be qualitatively reproduced.
New Tabu Search based global optimization methods: outline of algorithms and study of efficiency.
Stepanenko, Svetlana; Engels, Bernd
2008-04-15
The study presents two new nonlinear global optimization routines: the Gradient Only Tabu Search (GOTS) and the Tabu Search with Powell's Algorithm (TSPA). They are based on the Tabu Search strategy, which tries to determine the global minimum of a function by a steepest descent-mildest ascent strategy. The new algorithms are explained, and their efficiency is compared with other approaches by determining the global minima of various well-known test functions of varying dimensionality. These tests show that in most cases the GOTS converges much faster than global optimizers taken from the literature. The efficiency of the TSPA is comparable to that of genetic algorithms. PMID:17910004
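As an illustration of the underlying steepest descent-mildest ascent idea (not the authors' GOTS or TSPA implementations), a generic tabu search over coordinate moves might look like the sketch below; the Rastrigin test function, step size, tabu-list length, and escape kick are arbitrary choices for the sketch.

```python
import math
import random

def rastrigin(x):
    """Multimodal test function with global minimum 0 at the origin."""
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi)
                             for xi in x)

def tabu_search(f, x0, step=0.1, iters=400, tabu_len=25, seed=0):
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    best, fbest = list(x), fx
    tabu = []
    for _ in range(iters):
        # neighbours: one step up or down along each coordinate
        cands = []
        for i in range(len(x)):
            for d in (-step, step):
                y = list(x)
                y[i] += d
                cands.append(y)
        cands.sort(key=f)   # steepest descent, or mildest ascent if all worse
        for y in cands:
            key = tuple(round(v, 2) for v in y)
            # take the best non-tabu move; aspiration overrides the tabu list
            if key not in tabu or f(y) < fbest:
                x, fx = y, f(y)
                tabu.append(key)
                if len(tabu) > tabu_len:
                    tabu.pop(0)          # forget the oldest tabu entry
                break
        if fx < fbest:
            best, fbest = list(x), fx
        else:
            # small random kick when no progress is made
            x = [v + step * (rng.random() - 0.5) for v in x]
            fx = f(x)
    return best, fbest

best, fb = tabu_search(rastrigin, [2.0, -2.0])
```

Accepting the mildest ascending move when no descending move exists is what lets the search climb out of the local minimum it starts in, while the tabu list prevents it from immediately sliding back.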
Albano Farias, L.; Stephany, J.
2010-12-15
We analyze the statistics of observables in continuous-variable (CV) quantum teleportation in the formalism of the characteristic function. We derive expressions for average values of output-state observables, in particular cumulants, which are additive in terms of the input state and the resource of teleportation. Working with a general class of teleportation resources, the squeezed-Bell-like states, which may be optimized in a free parameter for better teleportation performance, we discuss the relation between resources optimal for fidelity and those optimal for different observable averages. We obtain the values of the free parameter of the squeezed-Bell-like states which optimize the central moments and cumulants up to fourth order. For the cumulants, the distortion between in and out states due to teleportation depends only on the resource. We obtain optimal parameters Δ_(2)^opt and Δ_(4)^opt for the second- and fourth-order cumulants, which do not depend on the squeezing of the resource. The second-order central moments, which are equal to the second-order cumulants, and the photon number average are also optimized by the resource with Δ_(2)^opt. We show that the optimal fidelity resource, which has been found previously to depend on the characteristics of the input, approaches for high squeezing the resource that optimizes the second-order moments. A similar behavior is obtained for the resource that optimizes the photon statistics, which is treated here using the sum of the squared differences in photon probabilities of input versus output states as the distortion measure. This is interpreted naturally to mean that the distortions associated with second-order moments dominate the behavior of the output state for large squeezing of the resource. Optimal fidelity resources and optimal photon statistics resources are compared, and it is shown that for mixtures of Fock states both resources are equivalent.
Optimization of cascade blade mistuning. II - Global optimum and numerical optimization
NASA Technical Reports Server (NTRS)
Nissim, E.; Haftka, R. T.
1985-01-01
The values of the mistuning which yield the most stable eigenvectors are analytically determined, using the simplified equations of motion which were developed in Part I of this work. It is shown that random mistunings, if large enough, may lead to the maximal stability, whereas the alternate mistunings cannot. The problem of obtaining maximum stability for minimal mistuning is formulated, based on numerical optimization techniques. Several local minima are obtained using different starting mistuning vectors. The starting vectors which lead to the global minimum are identified. It is analytically shown that all minima appear in multiplicities which are equal to the number of compressor blades. The effect of mistuning on the flutter speed is studied using both an optimum mistuning vector and an alternate mistuning vector. Effects of mistunings in elastic axis locations are shown to have a negligible effect on the eigenvalues. Finally, it is shown that any general two-dimensional bending-torsion system can be reduced to an equivalent uncoupled torsional system.
Optimal Detection of Global Warming using Temperature Profiles
NASA Technical Reports Server (NTRS)
Leroy, Stephen S.
1997-01-01
Optimal fingerprinting is applied to estimate the amount of time it would take to detect warming by increased concentrations of carbon dioxide in monthly averages of temperature profiles over the Indian Ocean.
Development of a new adaptive ordinal approach to continuous-variable probabilistic optimization.
Romero, Vicente José; Chen, Chun-Hung (George Mason University, Fairfax, VA)
2006-11-01
A very general and robust approach to solving continuous-variable optimization problems involving uncertainty in the objective function is through the use of ordinal optimization. At each step in the optimization problem, improvement is based only on a relative ranking of the uncertainty effects on local design alternatives, rather than on precise quantification of the effects. One simply asks "Is that alternative better or worse than this one?" rather than "How much better or worse is that alternative than this one?" The answer to the latter question requires precise characterization of the uncertainty, with the corresponding sampling/integration expense for precise resolution. However, in this report we demonstrate correct decision-making in a continuous-variable probabilistic optimization problem despite extreme vagueness in the statistical characterization of the design options. We present a new adaptive ordinal method for probabilistic optimization in which the trade-off between computational expense and vagueness in the uncertainty characterization can be conveniently managed in various phases of the optimization problem to make cost-effective stepping decisions in the design space. Spatial correlation of uncertainty in the continuous-variable design space is exploited to dramatically increase method efficiency. Under many circumstances the method appears to have favorable robustness and cost-scaling properties relative to other probabilistic optimization methods, and it uniquely has mechanisms for quantifying and controlling error likelihood in design-space stepping decisions. The method is asymptotically convergent to the true probabilistic optimum, so it could be useful as a reference standard against which the efficiency and robustness of other methods can be compared, analogous to the role that Monte Carlo simulation plays in uncertainty propagation.
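The ranking-only idea is easy to sketch: decide between two noisy design alternatives by counting pairwise wins rather than estimating either mean precisely. The objective, noise level, step size, and sample counts below are hypothetical, and this is a one-dimensional toy rather than the report's adaptive method.

```python
import random

def noisy_objective(x, rng):
    """Hypothetical design objective: true value x**2 observed with noise."""
    return x * x + rng.gauss(0.0, 0.5)

def ordinal_better(x_a, x_b, n_pairs=25, seed=0):
    """Answer 'is A better (lower) than B?' by counting pairwise wins,
    never quantifying HOW MUCH better either alternative is."""
    rng = random.Random(seed)
    wins = sum(noisy_objective(x_a, rng) < noisy_objective(x_b, rng)
               for _ in range(n_pairs))
    return wins > n_pairs // 2

def ordinal_descent(x0, step=0.25, iters=40, seed=1):
    """Step toward whichever neighbour ranks better than the incumbent."""
    x = x0
    for k in range(iters):
        for cand in (x - step, x + step):
            if ordinal_better(cand, x, seed=seed + k):
                x = cand
                break
    return x

x_star = ordinal_descent(3.0)
```

Far from the optimum the ranking is almost always correct even with few samples, which is exactly the cost advantage the report describes; near the optimum the comparisons become ambiguous and the iterate simply hovers.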
On a global aerodynamic optimization of a civil transport aircraft
NASA Technical Reports Server (NTRS)
Savu, G.; Trifu, O.
1991-01-01
An aerodynamic optimization procedure developed to minimize the drag to lift ratio of an aircraft configuration: wing - body - tail, in accordance with engineering restrictions, is described. An algorithm developed to search a hypersurface with 18 dimensions, which define an aircraft configuration, is discussed. The results, when considered from the aerodynamic point of view, indicate the optimal configuration is one that combines a lifting fuselage with a canard.
Efficient Parallel Global Optimization for High Resolution Hydrologic and Climate Impact Models
NASA Astrophysics Data System (ADS)
Shoemaker, C. A.; Mueller, J.; Pang, M.
2013-12-01
High-resolution hydrologic models are typically computationally expensive, requiring many minutes or perhaps hours for one simulation. Optimization can be used with these models for parameter estimation or for analyzing management alternatives. However, optimization of these computationally expensive simulations requires algorithms that can obtain accurate answers with relatively few simulations to avoid infeasibly long computation times. We have developed a number of efficient parallel algorithms and software codes for optimization of expensive problems with multiple local minima. This is open-source software we are distributing; it runs in Matlab and Python, and has been run on the Yellowstone supercomputer. The talk will briefly discuss the characteristics of the problem (e.g., the presence of integer as well as continuous variables, the number of dimensions, the availability of parallel/grid computing, the number of simulations that can be allowed to find a solution, etc.) that determine which algorithms are most appropriate for each type of problem. A major application of this optimization software is parameter estimation for nonlinear hydrologic models, including contaminant transport in the subsurface (e.g., for groundwater remediation or multi-phase flow for carbon sequestration), nutrient transport in watersheds, and climate models. We will present results for carbon sequestration plume monitoring (multi-phase, multi-constituent), for groundwater remediation, and for the CLM climate model. The carbon sequestration example is based on the Frio CO2 field site, and the groundwater example is a 50,000-acre remediation site (with a model requiring about 1 hour per simulation). Parallel speed-ups are excellent in most cases, and our serial and parallel algorithms tend to outperform alternative methods on complex, computationally expensive simulations that have multiple local minima.
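The core loop of surrogate-assisted optimization of an expensive simulation can be sketched as follows. This toy version substitutes a cheap inverse-distance-weighted surrogate for the radial-basis-function surrogates used in such software, and a fast analytic test function (`expensive_sim`) stands in for an hour-long model run; the evaluation budget and candidate counts are arbitrary.

```python
import math
import random

def expensive_sim(x):
    """Analytic stand-in for an expensive (e.g., hour-long) model run."""
    return sum((xi - 0.3) ** 2 for xi in x) + 0.5 * math.sin(8 * x[0])

def idw_surrogate(pts, vals, x, p=2.0):
    """Inverse-distance-weighted predictor (cheap stand-in for an RBF)."""
    num = den = 0.0
    for pt, v in zip(pts, vals):
        d2 = sum((a - b) ** 2 for a, b in zip(pt, x))
        if d2 < 1e-12:
            return v                      # exact hit on a sampled point
        w = d2 ** (-p / 2)
        num += w * v
        den += w
    return num / den

def surrogate_optimize(dim=2, budget=60, n_init=10, n_cand=200, seed=3):
    rng = random.Random(seed)
    pts = [[rng.random() for _ in range(dim)] for _ in range(n_init)]
    vals = [expensive_sim(p) for p in pts]
    while len(pts) < budget:
        incumbent = pts[min(range(len(pts)), key=lambda i: vals[i])]
        # candidates: local perturbations of the incumbent plus global samples
        cands = [[min(1.0, max(0.0, b + rng.gauss(0.0, 0.1)))
                  for b in incumbent] for _ in range(n_cand // 2)]
        cands += [[rng.random() for _ in range(dim)]
                  for _ in range(n_cand // 2)]
        # spend one real simulation on the candidate the surrogate likes best
        x = min(cands, key=lambda c: idw_surrogate(pts, vals, c))
        pts.append(x)
        vals.append(expensive_sim(x))
    k = min(range(len(pts)), key=lambda i: vals[i])
    return pts[k], vals[k]

x_best, v_best = surrogate_optimize()
```

In a parallel variant, several candidates would be evaluated simultaneously per iteration; here the loop is serial for clarity.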
NASA Astrophysics Data System (ADS)
Afshar, M. H.; Rohani, M.
2012-01-01
In this article, cellular automata based hybrid methods are proposed for the optimal design of sewer networks, and their performance is compared with some common heuristic search methods. The problem of optimal design of sewer networks is first decomposed into two sub-optimization problems which are solved iteratively in a two-stage manner. In the first stage, the pipe diameters of the network are assumed fixed and the nodal cover depths of the network are determined by solving a nonlinear sub-optimization problem. A cellular automata (CA) method is used for the solution of this optimization problem, with the network nodes considered as the cells and their cover depths as the cell states. In the second stage, the nodal cover depths calculated in the first stage are fixed and the pipe diameters are calculated by solving a second nonlinear sub-optimization problem. Once again a CA method is used to solve the optimization problem of the second stage, with the pipes considered as the CA cells and their corresponding diameters as the cell states. Two different updating rules are derived and used for the CA of the second stage, depending on the treatment of the pipe diameters. In the continuous approach, the pipe diameters are considered as continuous variables and the corresponding updating rule is derived mathematically from the original objective function of the problem. In the discrete approach, however, an ad hoc updating rule is derived and used, taking into account the discrete nature of the pipe diameters. The proposed methods are used to optimally solve two sewer network problems, and the results are presented and compared with those obtained by other methods. The results show that the proposed CA-based hybrid methods are more efficient and effective than the most powerful search methods considered in this work.
NASA Astrophysics Data System (ADS)
Younis, Adel; Dong, Zuomin
2012-07-01
Surrogate-based modeling is an effective search method for global design optimization over well-defined areas using complex and computationally intensive analysis and simulation tools. However, identifying the appropriate surrogate models and their suitable areas remains a challenge that requires extensive human intervention. In this work, a new global optimization algorithm, namely the Mixed Surrogate and Space Elimination (MSSE) method, is introduced. Representative surrogate models, including Quadratic Response Surface, Radial Basis Function, and Kriging, are mixed with different weight ratios to form an adaptive metamodel with the best tested performance. The approach divides the field of interest into several unimodal regions; identifies and ranks the regions that likely contain the global minimum; fits the weighted surrogate models over each promising region using additional design-experiment data points from Latin Hypercube Designs and adjusts the weights according to the performance of each model; identifies its minimum and removes the processed region; and moves to the next most promising region until all regions are processed and the global optimum is identified. The proposed algorithm was tested using several benchmark problems for global optimization and compared with several widely used space-exploration global optimization algorithms, showing reduced computation effort, robust performance, and comparable search accuracy, making the proposed method an excellent tool for computationally intensive global design optimization problems.
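The idea of weighting several surrogates by their tested performance can be illustrated with a one-dimensional sketch that mixes two simple models (nearest-neighbour and piecewise-linear, standing in for the response-surface/RBF/Kriging trio) with weights set by inverse leave-one-out error. This is an assumption-laden toy, not the MSSE algorithm itself.

```python
import bisect

def nn_surrogate(xs, ys, x):
    """Nearest-neighbour predictor (stand-in for one surrogate family)."""
    k = min(range(len(xs)), key=lambda i: abs(xs[i] - x))
    return ys[k]

def lin_surrogate(xs, ys, x):
    """Piecewise-linear predictor with flat extrapolation; xs must be sorted."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    j = bisect.bisect_left(xs, x)
    t = (x - xs[j - 1]) / (xs[j] - xs[j - 1])
    return ys[j - 1] + t * (ys[j] - ys[j - 1])

def mixed_surrogate(xs, ys, x):
    """Blend the predictors, weighting each by inverse leave-one-out
    squared error: the better a model predicts held-out samples, the
    larger its share of the mixture."""
    models = (nn_surrogate, lin_surrogate)
    errs = []
    for m in models:
        e = 1e-12
        for i in range(len(xs)):
            xs_i, ys_i = xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:]
            e += (m(xs_i, ys_i, xs[i]) - ys[i]) ** 2
        errs.append(e)
    w = [1.0 / e for e in errs]
    s = sum(w)
    return sum(wi / s * m(xs, ys, x) for wi, m in zip(w, models))

# data from y = x**2, predict at x = 1.5
pred = mixed_surrogate([0.0, 1.0, 2.0, 3.0], [0.0, 1.0, 4.0, 9.0], 1.5)
```

On this smooth data the linear model earns the larger weight, so the mixture's prediction sits closer to the interpolated value than to the nearest sample.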
NASA Astrophysics Data System (ADS)
Miquelez, Teresa; Bengoetxea, Endika; Mendiburu, Alexander; Larranaga, Pedro
2007-12-01
This paper introduces an evolutionary computation method that applies Bayesian classifiers to optimization problems. This approach is based on Estimation of Distribution Algorithms (EDAs), in which Bayesian or Gaussian networks are applied to the evolution of a population of individuals (i.e. potential solutions to the optimization problem) in order to improve the quality of the individuals of the next generation. Our new approach, called the Evolutionary Bayesian Classifier-based Optimization Algorithm (EBCOA), employs Bayesian classifiers instead of Bayesian or Gaussian networks in order to evolve individuals to a fitter population. In brief, EBCOAs are characterized by applying Bayesian classification techniques - usually applied to supervised classification problems - to optimization in continuous domains. We propose and review in this paper different Bayesian classifiers for implementing our EBCOA method, focusing particularly on EBCOAs applying naive Bayes, semi-naive Bayes, and tree-augmented naive Bayes classifiers. This work presents an in-depth study of the behavior of these algorithms on classical optimization problems in continuous domains. The different parameters used for tuning the performance of the algorithms are discussed, and a comprehensive overview of their influence is provided. We also present experimental results comparing this new method with other state-of-the-art approaches of the evolutionary computation field for continuous domains, such as Evolution Strategies (ES) and Estimation of Distribution Algorithms (EDAs).
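A heavily simplified, EBCOA-flavoured loop can be sketched as follows: label the best third of the population as the "good" class, fit a per-variable Gaussian model to it (a crude stand-in for the paper's Bayesian classifiers), and sample the next generation from that model. The population size, truncation fraction, and sphere test function are arbitrary choices for the sketch.

```python
import random
import statistics

def sphere(x):
    return sum(xi * xi for xi in x)

def ebcoa_sketch(f, dim=3, pop_size=40, iters=30, seed=7):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5.0, 5.0) for _ in range(dim)]
           for _ in range(pop_size)]
    for _ in range(iters):
        pop.sort(key=f)
        good = pop[:pop_size // 3]        # the 'good' class
        mu = [statistics.mean(g[i] for g in good) for i in range(dim)]
        sd = [statistics.stdev(g[i] for g in good) + 1e-6
              for i in range(dim)]
        # elitism: keep the good class, resample the rest from its model
        pop = good + [[rng.gauss(mu[i], sd[i]) for i in range(dim)]
                      for _ in range(pop_size - len(good))]
    pop.sort(key=f)
    return pop[0], f(pop[0])

best, fb = ebcoa_sketch(sphere)
```

The full EBCOA additionally models a "bad" class and samples where the classifier discriminates in favour of the good class; the sketch keeps only the good-class density to stay short.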
Vrabie, Draguna; Lewis, Frank
2009-04-01
In this paper we present, in a continuous-time framework, an online approach to direct adaptive optimal control with infinite-horizon cost for nonlinear systems. The algorithm converges online to the optimal control solution without knowledge of the internal system dynamics. Closed-loop dynamic stability is guaranteed throughout. The algorithm is based on a reinforcement learning scheme, namely policy iteration, and makes use of neural networks, in an actor/critic structure, to parametrically represent the control policy and the performance of the control system. The two neural networks are trained to express the optimal controller and the optimal cost function which describes the infinite-horizon control performance. Convergence of the algorithm is proven under the realistic assumption that the two neural networks do not provide perfect representations for the nonlinear control and cost functions. The result is a hybrid control structure which involves a continuous-time controller and a supervisory adaptation structure which operates based on data sampled from the plant and from the continuous-time performance dynamics. Such a control structure is unlike any standard form of controller previously seen in the literature. Simulation results, obtained considering two second-order nonlinear systems, are provided. PMID:19362449
Sartelli, Massimo; Weber, Dieter G; Ruppé, Etienne; Bassetti, Matteo; Wright, Brian J; Ansaloni, Luca; Catena, Fausto; Coccolini, Federico; Abu-Zidan, Fikri M; Coimbra, Raul; Moore, Ernest E; Moore, Frederick A; Maier, Ronald V; De Waele, Jan J; Kirkpatrick, Andrew W; Griffiths, Ewen A; Eckmann, Christian; Brink, Adrian J; Mazuski, John E; May, Addison K; Sawyer, Rob G; Mertz, Dominik; Montravers, Philippe; Kumar, Anand; Roberts, Jason A; Vincent, Jean-Louis; Watkins, Richard R; Lowman, Warren; Spellberg, Brad; Abbott, Iain J; Adesunkanmi, Abdulrashid Kayode; Al-Dahir, Sara; Al-Hasan, Majdi N; Agresta, Ferdinando; Althani, Asma A; Ansari, Shamshul; Ansumana, Rashid; Augustin, Goran; Bala, Miklosh; Balogh, Zsolt J; Baraket, Oussama; Bhangu, Aneel; Beltrán, Marcelo A; Bernhard, Michael; Biffl, Walter L; Boermeester, Marja A; Brecher, Stephen M; Cherry-Bukowiec, Jill R; Buyne, Otmar R; Cainzos, Miguel A; Cairns, Kelly A; Camacho-Ortiz, Adrian; Chandy, Sujith J; Che Jusoh, Asri; Chichom-Mefire, Alain; Colijn, Caroline; Corcione, Francesco; Cui, Yunfeng; Curcio, Daniel; Delibegovic, Samir; Demetrashvili, Zaza; De Simone, Belinda; Dhingra, Sameer; Diaz, José J; Di Carlo, Isidoro; Dillip, Angel; Di Saverio, Salomone; Doyle, Michael P; Dorj, Gereltuya; Dogjani, Agron; Dupont, Hervé; Eachempati, Soumitra R; Enani, Mushira Abdulaziz; Egiev, Valery N; Elmangory, Mutasim M; Ferrada, Paula; Fitchett, Joseph R; Fraga, Gustavo P; Guessennd, Nathalie; Giamarellou, Helen; Ghnnam, Wagih; Gkiokas, George; Goldberg, Staphanie R; Gomes, Carlos Augusto; Gomi, Harumi; Guzmán-Blanco, Manuel; Haque, Mainul; Hansen, Sonja; Hecker, Andreas; Heizmann, Wolfgang R; Herzog, Torsten; Hodonou, Adrien Montcho; Hong, Suk-Kyung; Kafka-Ritsch, Reinhold; Kaplan, Lewis J; Kapoor, Garima; Karamarkovic, Aleksandar; Kees, Martin G; Kenig, Jakub; Kiguba, Ronald; Kim, Peter K; Kluger, Yoram; Khokha, Vladimir; Koike, Kaoru; Kok, Kenneth Y Y; Kong, Victory; Knox, Matthew C; Inaba, Kenji; Isik, Arda; Iskandar, 
Katia; Ivatury, Rao R; Labbate, Maurizio; Labricciosa, Francesco M; Laterre, Pierre-François; Latifi, Rifat; Lee, Jae Gil; Lee, Young Ran; Leone, Marc; Leppaniemi, Ari; Li, Yousheng; Liang, Stephen Y; Loho, Tonny; Maegele, Marc; Malama, Sydney; Marei, Hany E; Martin-Loeches, Ignacio; Marwah, Sanjay; Massele, Amos; McFarlane, Michael; Melo, Renato Bessa; Negoi, Ionut; Nicolau, David P; Nord, Carl Erik; Ofori-Asenso, Richard; Omari, AbdelKarim H; Ordonez, Carlos A; Ouadii, Mouaqit; Pereira Júnior, Gerson Alves; Piazza, Diego; Pupelis, Guntars; Rawson, Timothy Miles; Rems, Miran; Rizoli, Sandro; Rocha, Claudio; Sakakhushev, Boris; Sanchez-Garcia, Miguel; Sato, Norio; Segovia Lohse, Helmut A; Sganga, Gabriele; Siribumrungwong, Boonying; Shelat, Vishal G; Soreide, Kjetil; Soto, Rodolfo; Talving, Peep; Tilsed, Jonathan V; Timsit, Jean-Francois; Trueba, Gabriel; Trung, Ngo Tat; Ulrych, Jan; van Goor, Harry; Vereczkei, Andras; Vohra, Ravinder S; Wani, Imtiaz; Uhl, Waldemar; Xiao, Yonghong; Yuan, Kuo-Ching; Zachariah, Sanoop K; Zahar, Jean-Ralph; Zakrison, Tanya L; Corcione, Antonio; Melotti, Rita M; Viscoli, Claudio; Viale, Perluigi
2016-01-01
Intra-abdominal infections (IAIs) are an important cause of morbidity and are frequently associated with poor prognosis, particularly in high-risk patients. The cornerstones of the management of complicated IAIs are timely, effective source control combined with appropriate antimicrobial therapy. Empiric antimicrobial therapy is important in the management of intra-abdominal infections and must be broad enough to cover all likely organisms, because inappropriate initial antimicrobial therapy is associated with poor patient outcomes and the development of bacterial resistance. The overuse of antimicrobials is widely accepted as a major driver of some emerging infections (such as C. difficile), of the selection of resistant pathogens in individual patients, and of the continued development of antimicrobial resistance globally. The growing emergence of multi-drug-resistant organisms and the limited development of new agents available to counteract them have caused an impending crisis with alarming implications, especially with regard to Gram-negative bacteria. An international task force from 79 different countries has joined this project by sharing a document on the rational use of antimicrobials for patients with IAIs. The project has been termed AGORA (Antimicrobials: A Global Alliance for Optimizing their Rational Use in Intra-Abdominal Infections). The authors hope that AGORA, involving many of the world's leading experts, can actively raise awareness among health workers and can improve prescribing behavior in treating IAIs. PMID:27429642
The Tunneling Method for Global Optimization in Multidimensional Scaling.
ERIC Educational Resources Information Center
Groenen, Patrick J. F.; Heiser, Willem J.
1996-01-01
A tunneling method for global minimization in multidimensional scaling is introduced and adjusted for multidimensional scaling with general Minkowski distances. The method alternates a local search step with a tunneling step in which a different configuration is sought with the same STRESS value. (SLD)
Protein structure prediction using global optimization by basin-hopping with NMR shift restraints
NASA Astrophysics Data System (ADS)
Hoffmann, Falk; Strodel, Birgit
2013-01-01
Computational methods that utilize chemical shifts to produce protein structures at atomic resolution have recently been introduced. In the current work, we exploit chemical shifts by combining the basin-hopping (BH) approach to global optimization with chemical shift restraints using a penalty function. For three peptides, we demonstrate that this approach allows us to find near-native structures from fully extended structures within 10 000 basin-hopping steps. The effect of adding chemical shift restraints is that the α and β secondary structure elements form within 1000 basin-hopping steps, after which the orientation of the secondary structure elements, which produces the tertiary contacts, is driven by the underlying protein force field. We further show that our chemical-shift-restraint BH approach also works for incomplete chemical shift assignments, where the information from only one chemical shift type is considered. For the proper implementation of chemical shift restraints in the basin-hopping approach, we determined the optimal weight of the chemical shift penalty energy with respect to the CHARMM force field in conjunction with the FACTS solvation model employed in this study. In order to speed up the local energy minimization procedure, we developed a function which continuously decreases the width of the chemical shift penalty function as the minimization progresses. We conclude that the basin-hopping approach with chemical shift restraints is a promising method for protein structure prediction.
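The restrained basin-hopping scheme can be sketched generically: random hop, local minimization, Metropolis acceptance, with the objective being a force-field term plus a quadratic restraint penalty. The two-dimensional "energy" surface, the restraint target, the penalty weight, and the crude coordinate-descent minimizer below are all hypothetical stand-ins for CHARMM/FACTS and the chemical-shift penalty.

```python
import math
import random

def force_field(x):
    """Hypothetical multi-minimum 'energy' surface (stand-in for CHARMM)."""
    return (math.sin(3 * x[0]) + (x[0] - 0.5) ** 2
            + math.cos(2 * x[1]) + x[1] ** 2)

def shift_penalty(x, target=(0.9, 1.4), weight=2.0):
    """Quadratic restraint pulling toward hypothetical 'experimental' values."""
    return weight * sum((a - b) ** 2 for a, b in zip(x, target))

def total_energy(x):
    return force_field(x) + shift_penalty(x)

def local_min(f, x, step=0.05, iters=150):
    """Crude coordinate-descent local minimizer."""
    for _ in range(iters):
        for i in range(len(x)):
            for d in (step, -step):
                y = list(x)
                y[i] += d
                if f(y) < f(x):
                    x = y
    return x

def basin_hopping(f, x0, hops=50, hop_size=1.0, T=1.0, seed=5):
    rng = random.Random(seed)
    x = local_min(f, list(x0))
    best, fbest = list(x), f(x)
    for _ in range(hops):
        # random hop followed by local minimization
        y = local_min(f, [v + rng.uniform(-hop_size, hop_size) for v in x])
        # Metropolis acceptance on the minimized energies
        if f(y) < f(x) or rng.random() < math.exp((f(x) - f(y)) / T):
            x = y
        if f(x) < fbest:
            best, fbest = list(x), f(x)
    return best, fbest

best, fb = basin_hopping(total_energy, [3.0, -3.0])
```

The penalty biases every local minimization toward the restraint target, mirroring how the chemical-shift term steers secondary-structure formation while the force field settles the remaining degrees of freedom.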
Local-global analysis of crack growth in continuously reinforced ceramic matrix composites
NASA Technical Reports Server (NTRS)
Ballarini, Roberto; Ahmed, Shamim
1989-01-01
This paper describes the development of a mathematical model for predicting the strength and micromechanical failure characteristics of continuously reinforced ceramic matrix composites. The local-global analysis models the vicinity of a propagating crack tip as a local heterogeneous region (LHR) consisting of spring-like representations of the matrix, fibers, and interfaces. Parametric studies are conducted to investigate the effects of LHR size, component properties, and interface conditions on the strength and the sequence of failure processes in the unidirectional composite system.
Global prediction of continuous hydrocarbon accumulations in self-sourced reservoirs
Eoff, Jennifer D.
2012-01-01
This report was first presented as an abstract in poster format at the American Association of Petroleum Geologists (AAPG) 2012 Annual Convention and Exhibition, April 22-25, Long Beach, Calif., as Search and Discovery Article no. 90142. Shale resource plays occur in predictable tectonic settings within similar orders of magnitude of eustatic events. A conceptual model for predicting the presence of resource-quality shales is essential for evaluating components of continuous petroleum systems. Basin geometry often distinguishes self-sourced resource plays from conventional plays. Intracratonic or intrashelf foreland basins at active margins are the predominant depositional settings among those explored for the development of self-sourced continuous accumulations, whereas source rocks associated with conventional accumulations typically were deposited in rifted passive margin settings (or other cratonic environments). Generally, the former are associated with the assembly of supercontinents, and the latter often resulted during or subsequent to the breakup of landmasses. Spreading rates, climate, and eustasy are influenced by these global tectonic events, such that deposition of self-sourced reservoirs occurred during periods characterized by rapid plate reconfiguration, predominantly greenhouse climate conditions, and in areas adjacent to extensive carbonate sedimentation. Combined tectonic histories, eustatic curves, and paleogeographic reconstructions may be useful in global predictions of organic-rich shale accumulations suitable for continuous resource development. Accumulation of marine organic material is attributed to upwellings that enhance productivity and oxygen-minimum bottom waters that prevent destruction of organic matter. The accumulation of potential self-sourced resources can be attributed to slow sedimentation rates in rapidly subsiding (incipient, flexural) foreland basins, while flooding of adjacent carbonate platforms and other cratonic highs
Global stability and optimal control of an SIRS epidemic model on heterogeneous networks
NASA Astrophysics Data System (ADS)
Chen, Lijuan; Sun, Jitao
2014-09-01
In this paper, we consider an SIRS epidemic model with vaccination on heterogeneous networks. By constructing suitable Lyapunov functions, global stability of the disease-free equilibrium and the endemic equilibrium of the model is investigated. We also present the first study of an optimally controlled SIRS epidemic model on complex networks. We show that an optimal control exists for the control problem. Finally, some examples are presented to show the global stability and the efficiency of this optimal control. These results can help in adopting pragmatic treatment strategies for diseases in structured populations.
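A homogeneous mean-field simplification of such a model (dropping the network heterogeneity) shows the two regimes the stability analysis distinguishes. The rates below are arbitrary illustrative values, with vaccination modelled as a constant rate `psi` moving susceptibles directly to the recovered class.

```python
def sirs_step(s, i, r, beta, gamma, delta, psi, dt):
    """One Euler step of mean-field SIRS with vaccination rate psi
    (S -> R) and immunity-loss rate delta (R -> S)."""
    ds = -beta * s * i + delta * r - psi * s
    di = beta * s * i - gamma * i
    dr = gamma * i - delta * r + psi * s
    return s + dt * ds, i + dt * di, r + dt * dr

def endemic_level(beta, gamma=1.0, delta=0.2, psi=0.0, T=200.0, dt=0.01):
    """Integrate to (near) steady state and return the infected fraction."""
    s, i, r = 0.99, 0.01, 0.0
    for _ in range(int(T / dt)):
        s, i, r = sirs_step(s, i, r, beta, gamma, delta, psi, dt)
    return i

i_no_vax = endemic_level(beta=2.0)        # R0 = 2: endemic equilibrium
i_vax = endemic_level(beta=2.0, psi=1.0)  # vaccination drives R_eff below 1
```

With these rates, vaccination lowers the steady-state susceptible fraction enough that the infection dies out, which is the disease-free globally stable regime; without it the trajectory settles at a positive endemic level.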
Quadruped Robot Locomotion using a Global Optimization Stochastic Algorithm
NASA Astrophysics Data System (ADS)
Oliveira, Miguel; Santos, Cristina; Costa, Lino; Ferreira, Manuel
2011-09-01
The problem of tuning the parameters of nonlinear dynamical systems such that the attained results are good ones is a relevant one. This article describes the development of a gait optimization system that allows a fast but stable quadruped robot crawl gait. We combine bio-inspired Central Pattern Generators (CPGs) and Genetic Algorithms (GAs). CPGs are modelled as autonomous differential equations that generate the necessary limb movement to perform the required walking gait. The GA finds parameterizations of the CPG parameters which attain good gaits in terms of speed, vibration, and stability. Moreover, two constraint-handling techniques based on tournament selection and a repairing mechanism are embedded in the GA to solve the proposed constrained optimization problem and make the search more efficient. The experimental results, performed on a simulated Aibo robot, demonstrate that our approach allows low vibration with high velocity and a wide stability margin for a quadruped slow crawl gait.
NASA Astrophysics Data System (ADS)
Yang, Xiong; Liu, Derong; Wang, Ding
2014-03-01
In this paper, an adaptive reinforcement learning-based solution is developed for the infinite-horizon optimal control problem of constrained-input continuous-time nonlinear systems in the presence of nonlinearities with unknown structures. Two different types of neural networks (NNs) are employed to approximate the solution of the Hamilton-Jacobi-Bellman equation. That is, a recurrent NN is constructed to identify the unknown dynamical system, and two feedforward NNs are used as the actor and the critic to approximate the optimal control and the optimal cost, respectively. Based on this framework, the action NN and the critic NN are tuned simultaneously, without requiring knowledge of the system drift dynamics. Moreover, by using Lyapunov's direct method, the weights of the action NN and the critic NN are guaranteed to be uniformly ultimately bounded, while keeping the closed-loop system stable. Simulation results are presented to demonstrate the effectiveness of the approach.
Optimization of focusing through scattering media using the continuous sequential algorithm
NASA Astrophysics Data System (ADS)
Thompson, J. V.; Hokr, B. H.; Yakovlev, V. V.
2016-01-01
The ability to control the propagation of light through scattering media is essential for atmospheric optics, astronomy, biomedical imaging, and remote sensing. The optimization of focusing light through a scattering medium is of particular interest for highly scattering materials. Optical wavefront shaping plays a critical role in optimizing such propagation; however, the enormous space of adjustable parameters makes the overall task complicated. Here, we propose and experimentally evaluate several variations on the standard continuous sequential algorithm (CSA) that hold the promise of revealing new, faster, and more efficient optimization algorithms for selecting an optical wavefront to focus light through a scattering medium. We demonstrate that the order in which pixels are chosen in the CSA can lead to a two-fold decrease in the number of iterations required to reach a given enhancement.
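The core CSA loop is easy to sketch in simulation: each wavefront segment's phase is stepped through a discrete set while the others are held fixed, and the value giving the brightest focus is kept. Everything below, including the random complex transmission vector standing in for the scattering medium and the parameter values, is an illustrative assumption, not the authors' experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_phases = 64, 16
# Hypothetical complex transmission coefficients coupling each SLM pixel
# to the target focal spot (unknown to the algorithm in a real experiment).
t = rng.standard_normal(n_pixels) + 1j * rng.standard_normal(n_pixels)

def focus_intensity(phases):
    # Focal intensity is the squared magnitude of the phased pixel sum.
    return abs(np.sum(t * np.exp(1j * phases))) ** 2

def csa(pixel_order):
    # Single CSA pass; the pixel ordering is the tunable choice the
    # abstract reports can halve the iterations to a given enhancement.
    phases = np.zeros(n_pixels)
    candidates = 2 * np.pi * np.arange(n_phases) / n_phases
    for p in pixel_order:
        def with_phase(c, p=p):
            trial = phases.copy()
            trial[p] = c
            return trial
        phases[p] = max(candidates, key=lambda c: focus_intensity(with_phase(c)))
    return focus_intensity(phases)
```

Running `csa(np.arange(n_pixels))` yields a focal intensity far above the unshaped baseline `focus_intensity(np.zeros(n_pixels))`; comparing different `pixel_order` permutations reproduces the ordering experiment in simulation.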
NASA Astrophysics Data System (ADS)
Hu, Li-Yun; Liao, Zeyang; Ma, Shengli; Zubairy, M. Suhail
2016-03-01
We introduce three tunable parameters to optimize the fidelity of quantum teleportation with continuous variables in a nonideal scheme. By using the characteristic-function formalism, we present the condition that the teleportation fidelity is independent of the amplitude of input coherent states for any entangled resource. Then we investigate the effects of tunable parameters on the fidelity with or without the presence of the environment and imperfect measurements by analytically deriving the expression of fidelity for three different input coherent-state distributions. It is shown that, for the linear distribution, the optimization with three tunable parameters is the best one with respect to single- and two-parameter optimization. Our results reveal the usefulness of tunable parameters for improving the fidelity of teleportation and the ability against decoherence.
A reconciliation of local and global models for bone remodeling through optimization theory.
Subbarayan, G; Bartel, D L
2000-02-01
Remodeling rules with either a global or a local mathematical form have been proposed for load-bearing bones in the literature. In the local models, the bone architecture (shape, density) is related to the strains/energies sensed at any point in the bone, while in the global models, a criterion believed to be applicable to the whole bone is used. In the present paper, a local remodeling rule with a strain "error" form is derived as the necessary condition for the optimum of a global remodeling criterion, suggesting that many of the local error-driven remodeling rules may have corresponding global optimization-based criteria. The global criterion proposed in the present study is a trade-off between the cost of metabolic growth and use, mathematically represented by the mass, and the cost of failure, mathematically represented by the total strain energy. The proposed global criterion is shown to be related to the optimality criteria methods of structural optimization by the equivalence of the model solution and the fully stressed solution for statically determinate structures. In related work, the global criterion is applied to simulate the strength recovery in bones with screw holes left behind after removal of fracture fixation plates. The results predicted by the model are shown to be in good agreement with experimental results, leading to the conclusion that load-bearing bones are structures with optimal shape and property for their function. PMID:10790832
NASA Astrophysics Data System (ADS)
Ghorbani, Mehrdad; Assadian, Nima
2013-12-01
In this study, the gravitational perturbations of the Sun and other planets are modeled in the dynamics near the Earth-Moon Lagrange points, and optimal continuous and discrete station-keeping maneuvers are found to maintain spacecraft about these points. The most critical perturbation effects near the Earth-Moon L1 and L2 Lagrange points are the ellipticity of the Moon's orbit and the Sun's gravity, respectively. These perturbations deviate the spacecraft from its nominal orbit and have been modeled through a restricted five-body problem (R5BP) formulation compatible with the circular restricted three-body problem (CR3BP). Continuous control or impulsive maneuvers can compensate for the deviation and keep the spacecraft on the closed orbit about the Lagrange point. The continuous control has been computed using a linear quadratic regulator (LQR) and is compared with nonlinear programming (NP). Multiple shooting (MS) has been used for the computation of impulsive maneuvers to keep the trajectory closed, and subsequently an optimized MS (OMS) method and a multiple impulses optimization (MIO) method have been introduced, which minimize the sum of the multiple impulses. In these two methods the spacecraft is allowed to deviate from the nominal orbit; however, its trajectory should close on itself. In this manner, some closed or nearly closed trajectories around the Earth-Moon Lagrange points are found that require almost zero station-keeping effort.
A New Continuous-Time Equality-Constrained Optimization to Avoid Singularity.
Quan, Quan; Cai, Kai-Yuan
2016-02-01
In equality-constrained optimization, a standard regularity assumption is often associated with feasible point methods, namely, that the gradients of the constraints are linearly independent. In practice, this regularity assumption may be violated. In order to avoid such a singularity, a new projection matrix is proposed, based on which a feasible point method for continuous-time equality-constrained optimization is developed. First, the equality constraint is transformed into a continuous-time dynamical system whose solutions always satisfy the equality constraint. Second, a new projection matrix without singularity is proposed to realize the transformation. An update law (or, equivalently, a controller) is subsequently designed to decrease the objective function along the solutions of the transformed continuous-time dynamical system. The invariance principle is then applied to analyze the behavior of the solutions. Furthermore, the proposed method is modified to address cases in which solutions do not satisfy the equality constraint. Finally, the proposed optimization approach is applied to three examples to demonstrate its effectiveness. PMID:26415188
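The feasible-point idea can be sketched numerically. The code below uses the standard pseudo-inverse projection with Tikhonov damping as a stand-in for the paper's singularity-free projection matrix (which is not reproduced here): the discretized flow descends the objective in the tangent space of the constraint while a Newton-style correction pulls iterates back onto it.

```python
import numpy as np

def constrained_flow(grad_f, g, jac_g, x0, eta=1e-2, eps=1e-8, steps=3000):
    # Discretized flow: x' = -P(x) grad_f(x) - J^T (J J^T + eps I)^{-1} g(x),
    # with damped projection P = I - J^T (J J^T + eps I)^{-1} J.
    x = np.asarray(x0, dtype=float)
    n = len(x)
    for _ in range(steps):
        J = np.atleast_2d(jac_g(x))
        m = J.shape[0]
        M = np.linalg.inv(J @ J.T + eps * np.eye(m))
        P = np.eye(n) - J.T @ M @ J
        # descend f within the tangent space, pull back toward g(x) = 0
        x = x - eta * (P @ grad_f(x)) - J.T @ (M @ np.atleast_1d(g(x)))
    return x

# Example: minimize x1^2 + x2^2 subject to x1 + x2 = 1 -> (0.5, 0.5)
x_star = constrained_flow(
    grad_f=lambda x: 2 * x,
    g=lambda x: x[0] + x[1] - 1.0,
    jac_g=lambda x: np.array([[1.0, 1.0]]),
    x0=np.array([2.0, -3.0]),
)
```

The damping constant `eps` keeps the matrix inverse well defined even when the constraint gradients lose rank, which is the failure mode the paper's new projection is designed to avoid.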
A Lyapunov-Based Extension to Particle Swarm Dynamics for Continuous Function Optimization
Bhattacharya, Sayantani; Konar, Amit; Das, Swagatam; Han, Sang Yong
2009-01-01
The paper proposes three alternative extensions to the classical global-best particle swarm optimization dynamics, and compares their relative performance with the standard particle swarm algorithm. The first extension, which readily follows from the well-known Lyapunov's stability theorem, provides a mathematical basis of the particle dynamics with a guaranteed convergence at an optimum. The inclusion of local and global attractors to this dynamics leads to faster convergence speed and better accuracy than the classical one. The second extension augments the velocity adaptation equation by a negative randomly weighted positional term of individual particle, while the third extension considers the negative positional term in place of the inertial term. Computer simulations further reveal that the last two extensions outperform both the classical and the first extension in terms of convergence speed and accuracy. PMID:22303158
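For reference, the classical global-best particle swarm dynamics that all three extensions modify can be sketched as follows; this is a generic textbook variant, and the parameter values (`w`, `c1`, `c2`) are illustrative defaults, not those of the paper.

```python
import random

def gbest_pso(f, dim, bounds, n_particles=30, iters=300, w=0.7, c1=1.5, c2=1.5):
    lo, hi = bounds
    xs = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]        # personal best positions
    gbest = min(pbest, key=f)[:]      # global best position
    for _ in range(iters):
        for i, x in enumerate(xs):
            for d in range(dim):
                # inertia + attraction to personal and global attractors
                vs[i][d] = (w * vs[i][d]
                            + c1 * random.random() * (pbest[i][d] - x[d])
                            + c2 * random.random() * (gbest[d] - x[d]))
                x[d] = min(hi, max(lo, x[d] + vs[i][d]))
            if f(x) < f(pbest[i]):
                pbest[i] = x[:]
        gbest = min(pbest, key=f)[:]
    return gbest
```

The paper's second and third extensions modify the velocity line above (adding or substituting a negative randomly weighted positional term); the first replaces the dynamics with a Lyapunov-derived update guaranteeing convergence.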
Wagner, Caroline S; Park, Han Woo; Leydesdorff, Loet
2015-01-01
Global collaboration continues to grow as a share of all scientific cooperation, measured as coauthorships of peer-reviewed, published papers. The percent of all scientific papers that are internationally coauthored has more than doubled in 20 years, and they account for all the growth in output among the scientifically advanced countries. Emerging countries, particularly China, have increased their participation in global science, in part by doubling their spending on R&D; they are increasingly likely to appear as partners on internationally coauthored scientific papers. Given the growth of connections at the international level, it is helpful to examine the phenomenon as a communications network and to consider the network as a new organization on the world stage that adds to and complements national systems. When examined as interconnections across the globe over two decades, a global network has grown denser but not more clustered, meaning there are many more connections but they are not grouping into exclusive 'cliques'. This suggests that power relationships are not reproducing those of the political system. The network has the features of an open system, attracting productive scientists to participate in international projects. National governments could gain efficiencies and influence by developing policies and strategies designed to maximize network benefits, a model different from those designed for national systems. PMID:26196296
Optimal Compensation with Hidden Action and Lump-Sum Payment in a Continuous-Time Model
Cvitanic, Jaksa; Wan, Xuhu; Zhang, Jianfeng
2009-02-15
We consider a problem of finding optimal contracts in continuous time, when the agent's actions are unobservable by the principal, who pays the agent with a one-time payoff at the end of the contract. We fully solve the case of quadratic cost and separable utility, for general utility functions. The optimal contract is, in general, a nonlinear function of the final outcome only, while in the previously solved cases, for exponential and linear utility functions, the optimal contract is linear in the final output value. In a specific example we compute, the principal's first-best utility is infinite, while it becomes finite with hidden action, which is increasing in the value of the output. In the second part of the paper we formulate a general mathematical theory for the problem. We apply the stochastic maximum principle to give necessary conditions for optimal contracts. Sufficient conditions are hard to establish, but we suggest a way to check sufficiency using non-convex optimization.
Optimal Estimates of Global Terrestrial GPP from Fluorescence and DGVMs
NASA Astrophysics Data System (ADS)
Parazoo, Nicholas; Bowman, Kevin; Fisher, Joshua; Frankenberg, Christian; Jones, Dylan; Cescatti, Alessandro; Perez-Priego, Oscar; Wohlfahrt, Georg; Montagnani, Leonardo
2014-05-01
Changes in the processes that control terrestrial carbon uptake are highly uncertain but likely to have a significant influence on future atmospheric CO2 levels. RECCAP aims to improve process understanding by reconciling fluxes from top-down CO2 inversions and bottom-up estimates from an ensemble of DGVMs. As these models are typically used in projections of climate change, a key part of this effort is benchmarking the models and evaluating drivers of net carbon exchange within the current climate. Of particular importance are the spatial distribution and time rate of change of GPP. Recent advances in the remote sensing of solar-induced chlorophyll fluorescence open up a new possibility to directly measure planetary photosynthesis at spatially resolved scales. Here, we discuss a new methodology for estimating GPP and its uncertainty from an optimal combination of an ensemble of DGVMs from the TRENDY project with satellite-based fluorescence observations from GOSAT. Prior uncertainty is estimated from the spread of the DGVMs and updated through assimilation of fluorescence. We evaluate optimized fluxes against flux tower data in N. America, Europe, and S. America, benchmark TRENDY models using updated uncertainty estimates, and examine changes in the structure of the seasonal cycle. We find this methodology provides a novel way to evaluate models used in climate projections.
NASA Astrophysics Data System (ADS)
Vaziri Yazdi Pin, Mohammad
Electric power distribution systems are the last high-voltage link in the chain of production, transport, and delivery of electric energy, the fundamental goals of which are to supply the users' demand safely, reliably, and economically. The number of circuit miles traversed by distribution feeders, in the form of visible overhead or embedded underground lines, far exceeds that of all other bulk transport circuitry in the transmission system. Development and expansion of the distribution systems, similar to other systems, is directly proportional to the growth in demand and requires careful planning. While growth of electric demand has recently slowed through efforts in the area of energy management, the need for continued expansion seems inevitable for the near future. Distribution system expansions are also independent of current issues facing both the suppliers and the consumers of electrical energy. For example, deregulation, as an attempt to promote competition by giving more choices to the consumers, will impact the suppliers' planning strategies but cannot limit the demand growth or the system expansion in the global sense. Curiously, despite technological advancements and a 40-year history of contributions in the area, many of the major utilities still rely on experience and resort to rudimentary techniques when planning expansions. A comprehensive literature review of the contributions and careful analyses of the proposed algorithms for distribution expansion confirmed that the problem is a complex, multistage, and multiobjective one for which a practical solution remains to be developed. In this research, based on the 15-year experience of a utility engineer, the practical expansion problem has been clearly defined and the existing deficiencies in the previous work identified and analyzed. The expansion problem has been formulated as a multistage planning problem in line with a natural course of development and industry
A Global Optimization Methodology for Rocket Propulsion Applications
NASA Technical Reports Server (NTRS)
2001-01-01
While the response surface (RS) method is an effective method in engineering optimization, its accuracy is often affected by the use of a limited number of data points for model construction. In this chapter, the issues related to the accuracy of RS approximations and possible ways of improving the RS model using appropriate treatments, including the iteratively re-weighted least squares (IRLS) technique and radial-basis neural networks, are investigated. A main interest is to identify ways to offer added capabilities for the RS method to selectively improve accuracy in regions of importance. An example is to target the high-efficiency region of a fluid machinery design space so that the predictive power of the RS can be maximized where it matters most. Analytical models based on polynomials, with controlled levels of noise, are used to assess the performance of these techniques.
NASA Astrophysics Data System (ADS)
Yang, Dixiong; Liu, Zhenjun; Zhou, Jilei
2014-04-01
Chaos optimization algorithms (COAs) usually utilize a chaotic map, such as the Logistic map, to generate pseudo-random numbers that are mapped to the design variables for global optimization. Many existing studies have indicated that COAs can escape from local minima more easily than classical stochastic optimization algorithms. This paper reveals the inherent mechanism behind the high efficiency and superior performance of COAs, from a new perspective: the probability distribution property and search speed of the chaotic sequences generated by different chaotic maps. The statistical property and search speed of chaotic sequences are represented by the probability density function (PDF) and the Lyapunov exponent, respectively. Meanwhile, the computational performances of hybrid chaos-BFGS algorithms based on eight one-dimensional chaotic maps with different PDFs and Lyapunov exponents are compared, where BFGS is a quasi-Newton method for local optimization. Moreover, several multimodal benchmark examples illustrate that the probability distribution property and search speed of chaotic sequences from different chaotic maps significantly affect the global searching capability and optimization efficiency of COAs. To achieve high efficiency, it is recommended to adopt a chaotic map that generates chaotic sequences with a uniform or nearly uniform probability distribution and a large Lyapunov exponent.
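Both quantities the paper uses to characterize a map are easy to compute for the Logistic map at μ = 4, whose Lyapunov exponent is known analytically to be ln 2; its invariant density, 1/(π√(x(1−x))), is notably non-uniform, which is precisely why maps with flatter densities are recommended above.

```python
import math

def logistic_orbit(x0, n, mu=4.0):
    # Iterate x_{k+1} = mu * x_k * (1 - x_k); values stay in [0, 1].
    xs = []
    x = x0
    for _ in range(n):
        x = mu * x * (1 - x)
        xs.append(x)
    return xs

def lyapunov_logistic(x0, n=200000, mu=4.0):
    # Lyapunov exponent = orbit average of log|f'(x)|, with
    # f'(x) = mu * (1 - 2x) for the Logistic map.
    x, total = x0, 0.0
    for _ in range(n):
        total += math.log(abs(mu * (1.0 - 2.0 * x)) + 1e-300)
        x = mu * x * (1 - x)
    return total / n
```

A COA would feed `logistic_orbit` values, rescaled to the design-variable bounds, into the objective function, restarting a local method such as BFGS from the most promising points.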
A Guiding Evolutionary Algorithm with Greedy Strategy for Global Optimization Problems
Cao, Leilei; Xu, Lihong; Goodman, Erik D.
2016-01-01
A Guiding Evolutionary Algorithm (GEA) with greedy strategy for global optimization problems is proposed. Inspired by Particle Swarm Optimization, the Genetic Algorithm, and the Bat Algorithm, the GEA was designed to retain some advantages of each method while avoiding some disadvantages. In contrast to the usual Genetic Algorithm, each individual in GEA is crossed with the current global best one instead of a randomly selected individual. The current best individual served as a guide to attract offspring to its region of genotype space. Mutation was added to offspring according to a dynamic mutation probability. To increase the capability of exploitation, a local search mechanism was applied to new individuals according to a dynamic probability of local search. Experimental results show that GEA outperformed the other three typical global optimization algorithms with which it was compared. PMID:27293421
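The distinguishing move, crossing every individual with the current global best under a dynamic mutation probability, can be sketched as below. This is a simplified reading of the abstract: the local-search component is omitted, and the mutation schedule, step size, and all other parameters are our assumptions.

```python
import random

def gea_sketch(f, dim, lo, hi, pop_size=30, gens=300):
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    best = min(pop, key=f)
    for g in range(gens):
        p_mut = 0.5 * (1.0 - g / gens)   # assumed decreasing mutation schedule
        nxt = []
        for ind in pop:
            # Key GEA idea: cross with the global best, not a random mate.
            child = [x if random.random() < 0.5 else b for x, b in zip(ind, best)]
            child = [min(hi, max(lo, c + random.gauss(0.0, 0.1)))
                     if random.random() < p_mut else c
                     for c in child]
            nxt.append(child)
        pop = nxt
        cand = min(pop, key=f)
        if f(cand) < f(best):
            best = cand[:]
    return best
```

The guide-toward-best crossover concentrates offspring around the incumbent's region of genotype space, while the decaying mutation probability shifts the search from exploration to exploitation.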
NASA Technical Reports Server (NTRS)
Childs, A. G.
1971-01-01
A discrete steepest ascent method which allows controls that are not piecewise constant (for example, it allows all continuous piecewise linear controls) was derived for the solution of optimal programming problems. This method is based on the continuous steepest ascent method of Bryson and Denham and new concepts introduced by Kelley and Denham in their development of compatible adjoints for taking into account the effects of numerical integration. The method is a generalization of the algorithm suggested by Canon, Cullum, and Polak, with the details of the gradient computation given. The discrete method was compared with the continuous method for an aerodynamics problem for which an analytic solution is given by Pontryagin's maximum principle, and numerical results are presented. The discrete method converges more rapidly than the continuous method at first, but then, for some undetermined reason, loses its exponential convergence rate. A comparison was also made with the algorithm of Canon, Cullum, and Polak using piecewise constant controls. This algorithm is very competitive with the continuous algorithm.
A policy iteration approach to online optimal control of continuous-time constrained-input systems.
Modares, Hamidreza; Naghibi Sistani, Mohammad-Bagher; Lewis, Frank L
2013-09-01
This paper is an effort towards developing an online learning algorithm to find the optimal control solution for continuous-time (CT) systems subject to input constraints. The proposed method is based on the policy iteration (PI) technique which has recently evolved as a major technique for solving optimal control problems. Although a number of online PI algorithms have been developed for CT systems, none of them take into account the input constraints caused by actuator saturation. In practice, however, ignoring these constraints leads to performance degradation or even system instability. In this paper, to deal with the input constraints, a suitable nonquadratic functional is employed to encode the constraints into the optimization formulation. Then, the proposed PI algorithm is implemented on an actor-critic structure to solve the Hamilton-Jacobi-Bellman (HJB) equation associated with this nonquadratic cost functional in an online fashion. That is, two coupled neural network (NN) approximators, namely an actor and a critic are tuned online and simultaneously for approximating the associated HJB solution and computing the optimal control policy. The critic is used to evaluate the cost associated with the current policy, while the actor is used to find an improved policy based on information provided by the critic. Convergence to a close approximation of the HJB solution as well as stability of the proposed feedback control law are shown. Simulation results of the proposed method on a nonlinear CT system illustrate the effectiveness of the proposed approach. PMID:23706414
NASA Astrophysics Data System (ADS)
Lv, Yongfeng; Na, Jing; Yang, Qinmin; Wu, Xing; Guo, Yu
2016-01-01
An online adaptive optimal control is proposed for continuous-time nonlinear systems with completely unknown dynamics, which is achieved by developing a novel identifier-critic-based approximate dynamic programming algorithm with a dual neural network (NN) approximation structure. First, an adaptive NN identifier is designed to obviate the requirement of complete knowledge of system dynamics, and a critic NN is employed to approximate the optimal value function. Then, the optimal control law is computed based on the information from the identifier NN and the critic NN, so that the actor NN is not needed. In particular, a novel adaptive law design method with the parameter estimation error is proposed to online update the weights of both identifier NN and critic NN simultaneously, which converge to small neighbourhoods around their ideal values. The closed-loop system stability and the convergence to small vicinity around the optimal solution are all proved by means of the Lyapunov theory. The proposed adaptation algorithm is also improved to achieve finite-time convergence of the NN weights. Finally, simulation results are provided to exemplify the efficacy of the proposed methods.
Method for using global optimization to the estimation of surface-consistent residual statics
Reister, David B.; Barhen, Jacob; Oblow, Edward M.
2001-01-01
An efficient method for generating residual statics corrections to compensate for surface-consistent static time shifts in stacked seismic traces. The method includes a step of framing the residual static corrections as a global optimization problem in a parameter space. The method also includes decoupling the global optimization problem involving all seismic traces into several one-dimensional problems. The method further utilizes a Stochastic Pijavskij Tunneling search to eliminate regions in the parameter space where a global minimum is unlikely to exist so that the global minimum may be quickly discovered. The method finds the residual statics corrections by maximizing the total stack power. The stack power is a measure of seismic energy transferred from energy sources to receivers.
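The decoupling into one-dimensional subproblems and the stack-power objective can be illustrated with a toy cyclic coordinate sweep; the Stochastic Pijavskij Tunneling step that prunes the parameter space is not reproduced here, and the wavelet/shift setup is purely illustrative.

```python
import numpy as np

def stack_power(traces, shifts):
    # Power of the stacked trace: sum of squares of the aligned trace sum.
    stacked = sum(np.roll(tr, int(s)) for tr, s in zip(traces, shifts))
    return float(np.sum(stacked ** 2))

def residual_statics(traces, max_shift=5, sweeps=3):
    # Cyclically solve one decoupled 1-D problem per trace: pick the static
    # shift for trace i that maximizes total stack power, others held fixed.
    shifts = np.zeros(len(traces), dtype=int)
    for _ in range(sweeps):
        for i in range(len(traces)):
            def power_with(s, i=i):
                trial = shifts.copy()
                trial[i] = s
                return stack_power(traces, trial)
            shifts[i] = max(range(-max_shift, max_shift + 1), key=power_with)
    return shifts
```

Because each 1-D update evaluates the current shift among its candidates, the stack power is non-decreasing across the sweep, mirroring how maximizing total stack power aligns the traces up to a common global shift.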
Liu, Haorui; Yi, Fengyan; Yang, Heli
2016-01-01
The shuffled frog leaping algorithm (SFLA) easily falls into local optima when solving multi-optimum function optimization problems, which impacts its accuracy and convergence speed. This paper therefore presents a grouped SFLA for solving continuous optimization problems, combined with the cloud model's capability of transforming between qualitative and quantitative representations. The algorithm divides the definition domain into several groups and gives each group a set of frogs. The frogs of each region search within their memeplex, and during the search the algorithm uses an “elite strategy” to update the location information of existing elite frogs through the cloud model algorithm. This method narrows the search space and effectively mitigates entrapment in local optima; thus convergence speed and accuracy can be significantly improved. The results of computer simulation confirm this conclusion. PMID:26819584
Experimental comparison of six population-based algorithms for continuous black box optimization.
Pošík, Petr; Kubalík, Jiří
2012-01-01
Six population-based methods for real-valued black box optimization are thoroughly compared in this article. One of them, Nelder-Mead simplex search, is rather old, but still a popular technique of direct search. The remaining five (POEMS, G3PCX, Cauchy EDA, BIPOP-CMA-ES, and CMA-ES) are more recent and came from the evolutionary computation community. The recently proposed comparing continuous optimizers (COCO) methodology was adopted as the basis for the comparison. The results show that BIPOP-CMA-ES reaches the highest success rates and is often also quite fast. The results of the remaining algorithms are mixed, but Cauchy EDA and POEMS are usually slow. PMID:22708972
Confidence intervals for the symmetry point: an optimal cutpoint in continuous diagnostic tests.
López-Ratón, Mónica; Cadarso-Suárez, Carmen; Molanes-López, Elisa M; Letón, Emilio
2016-01-01
Continuous diagnostic tests are often used for discriminating between healthy and diseased populations. For this reason, it is useful to select an appropriate discrimination threshold. There are several optimality criteria: the North-West corner, the Youden index, the concordance probability and the symmetry point, among others. In this paper, we focus on the symmetry point that maximizes simultaneously the two types of correct classifications. We construct confidence intervals for this optimal cutpoint and its associated specificity and sensitivity indexes using two approaches: one based on the generalized pivotal quantity and the other on empirical likelihood. We perform a simulation study to check the practical behaviour of both methods and illustrate their use by means of three real biomedical datasets on melanoma, prostate cancer and coronary artery disease. PMID:26756550
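On empirical samples, the symmetry point itself (before any interval construction) reduces to finding the threshold where specificity and sensitivity cross. A minimal sketch, assuming higher marker values indicate disease:

```python
import numpy as np

def symmetry_point(healthy, diseased):
    # Choose the cutpoint c minimizing |specificity(c) - sensitivity(c)|,
    # i.e. where the two correct-classification rates are maximized together.
    cuts = np.unique(np.concatenate([healthy, diseased]))
    spec = np.array([np.mean(healthy <= c) for c in cuts])   # true negatives
    sens = np.array([np.mean(diseased > c) for c in cuts])   # true positives
    i = int(np.argmin(np.abs(spec - sens)))
    return float(cuts[i]), float(spec[i]), float(sens[i])
```

For two equal-variance normal populations the symmetry point lies at the midpoint of the means; the paper's contribution is constructing confidence intervals around this estimate via generalized pivotal quantities or empirical likelihood.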
Hurtado, F J; Kaiser, A S; Zamora, B
2015-03-15
Continuous stirred tank reactors (CSTR) are widely used in wastewater treatment plants to reduce the organic matter and microorganisms present in sludge by anaerobic digestion. The present study carries out a numerical analysis of the fluid-dynamic behaviour of a CSTR in order to optimize the process energetically. The sludge flow inside the digester tank, the residence time distribution, and the active volume of the reactor under different criteria are characterized. The effects of the design and power of the mixing system on the active volume of the CSTR are analyzed. The numerical model is solved under non-steady conditions by examining the evolution of the flow during the stop and restart of the mixing system. An intermittent regime of the mixing system, which kept the active volume between 94% and 99%, is achieved. The results obtained can lead to the eventual energy optimization of the mixing system of the CSTR. PMID:25635665
NASA Astrophysics Data System (ADS)
Shoemaker, C. A.; Singh, A.
2008-12-01
This paper will describe some new optimization algorithms and their application to hydrologic models. The approaches include a parallel version of a new heuristic algorithm combined with tabu search, and a mathematically derived global optimization method based on trust region methods. The goal of these methods is to find optimal solutions to calibration and design problems with relatively few simulations or (in a parallel environment) relatively little wallclock time. This is important because it is currently not feasible to apply global optimization methods like genetic algorithms to computationally expensive simulation models, such as partial differential equations with many nodes in groundwater, since the thousands of simulations needed to evaluate the objective/fitness function are impractical. Results of the application of the algorithms to complex models of groundwater contamination and phosphorus transport in watersheds will be presented.
Optimization of global model composed of radial basis functions using the term-ranking approach
Cai, Peng; Tao, Chao; Liu, Xiao-Jun
2014-03-15
A term-ranking method is put forward to optimize a global model composed of radial basis functions in order to improve the predictability of the model. The effectiveness of the proposed method is examined by numerical simulation and experimental data. Numerical simulations indicate that this method can significantly lengthen the prediction time and decrease the Bayesian information criterion of the model. The application to a real voice signal shows that the optimized global model can capture more of the predictable component in chaos-like voice data and simultaneously reduce the predictable component (periodic pitch) in the residual signal.
GOSIM: A multi-scale iterative multiple-point statistics algorithm with global optimization
NASA Astrophysics Data System (ADS)
Yang, Liang; Hou, Weisheng; Cui, Chanjie; Cui, Jie
2016-04-01
Most current multiple-point statistics (MPS) algorithms are based on a sequential simulation procedure, during which grid values are updated according to local data events. Because the realization is updated only once during the sequential process, errors that occur while updating data events cannot be corrected, and error accumulation during simulation decreases the realization quality. Aimed at improving simulation quality, this study presents an MPS algorithm based on global optimization, called GOSIM. An objective function is defined to represent the dissimilarity between a realization and the training image (TI), and GOSIM minimizes it with a multi-scale EM-like iterative method that contains an E-step and an M-step in each iteration. The E-step searches for the TI patterns that are most similar to the realization and match the conditioning data; a modified PatchMatch algorithm is used to accelerate this search. The M-step updates the realization based on the most similar patterns found in the E-step and matches the global statistics of the TI. For categorical data simulation, k-means clustering is used to transform the obtained continuous realization into a categorical one. Qualitative and quantitative comparisons of GOSIM, MS-CCSIM and SNESIM suggest that GOSIM has better pattern reproduction ability for both unconditional and conditional simulations. A sensitivity analysis illustrates that pattern size significantly impacts time costs and simulation quality. In conditional simulations, the weights of conditioning data should be as small as possible to maintain good simulation quality. The study shows that large iteration counts at coarser scales increase simulation quality, while small iteration counts at finer scales significantly reduce simulation time.
NASA Astrophysics Data System (ADS)
Ulybyshev, S. Yu.
2016-07-01
We present a method for designing nonuniform satellite systems for continuous global coverage using a combination of equatorial and near-polar satellite segments in circular orbits. Equations are derived to determine the basic design parameters of the satellite system itself and the conditions of its closure at the joint of the near-polar and equatorial segments. We analyze specific features of near-polar and equatorial satellite systems and their advantages and disadvantages compared with the existing classes of near-polar phased and kinematically correct satellite systems. We estimate the minimum required number of spacecraft in satellite systems for a given fold of coverage and present calculated dependences for classes of near-polar phased and equatorial satellite systems with different types of closure. For the class of kinematically correct satellite systems, we analyze the characteristics of systems with a minimum spacecraft flight height and show how the number of satellites in the orbital plane depends on the flight height for different folds of coverage. We present examples of the best near-polar equatorial satellite systems of global coverage for different folds, as well as a class of satellite systems with a fixed number of spacecraft and orbital planes.
NASA Astrophysics Data System (ADS)
Mastellone, Daniela; Fedi, Maurizio; Ialongo, Simone; Paoletti, Valeria
2014-12-01
Many methods have been used to upward continue potential field data. Most techniques employ the Fast Fourier Transform, which is an accurate, quick way to compute level-to-level upward continuation, or spatially varying scale filters for level-to-draped surfaces. We here propose a new continuation approach based on the minimum-length solution of the inverse potential field problem, which we call Volume Continuation (VOCO). For real data, the VOCO is obtained as the regularized solution to the Tikhonov problem. We tested our method on several synthetic examples involving all types of upward and downward continuation (level-to-level, level-to-draped, draped-to-level, draped-to-draped). We also employed the technique to upward continue the high-resolution draped aeromagnetic data of Ischia Island, in Southern Italy, to a constant height (2500 m a.s.l.). We found that, on average, the results are consistent with the aeromagnetic regional data measured at the same altitude. The main feature of our method is that it does not only provide continued data over a specified surface, but yields a whole volume of upward continuation: since the continued data refer to a volume, any surface within the volume may easily be picked to obtain upward continuation to different surfaces. This approach, based on inversion of the measured data, tends to be especially advantageous over the classical techniques when dealing with draped-to-level upward continuation. It is also useful for obtaining a more stable downward continuation and for continuing noisy data. The inversion procedure involved in the method implies moderate computational costs, which are well compensated by obtaining a 3D set of upward continued data and high quality results.
MEC--a near-optimal online reinforcement learning algorithm for continuous deterministic systems.
Zhao, Dongbin; Zhu, Yuanheng
2015-02-01
In this paper, the first probably approximately correct (PAC) algorithm for continuous deterministic systems that does not rely on knowledge of the system dynamics is proposed. It combines the state aggregation technique with the efficient exploration principle, and makes efficient use of online observed samples. We use a grid to partition the continuous state space into cells in which samples are stored. A near-upper Q operator is defined to produce a near-upper Q function from the samples in each cell. The corresponding greedy policy effectively balances exploration and exploitation. Through rigorous analysis, we prove a polynomial bound on the number of timesteps in which the algorithm executes non-optimal actions. After finitely many steps, the final policy is near-optimal in the PAC framework. The implementation requires no knowledge of the system and has low computational complexity. Simulation studies confirm that it performs better than other similar PAC algorithms. PMID:25474812
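The state-aggregation idea can be sketched with a generic grid-based Q-learning toy on a deterministic 1-D system. This is only an illustration of aggregating a continuous state space into cells; it is not the paper's MEC algorithm, its near-upper Q operator, or its PAC exploration machinery, and all dynamics and constants below are invented.

```python
import numpy as np

N_CELLS, N_ACTIONS, GAMMA = 20, 2, 0.9
rng = np.random.default_rng(0)

def cell(x, lo=-1.0, hi=1.0):
    """Grid cell index of continuous state x."""
    return min(N_CELLS - 1, max(0, int((x - lo) / (hi - lo) * N_CELLS)))

def step(x, a):
    """Deterministic 1-D dynamics: move left/right; reward favors x = 0."""
    nx = float(np.clip(x + (0.1 if a == 1 else -0.1), -1.0, 1.0))
    return nx, -abs(nx)

Q = np.zeros((N_CELLS, N_ACTIONS))
x = -0.9
for _ in range(5000):
    a = int(np.argmax(Q[cell(x)]))        # greedy w.r.t. aggregated Q
    if rng.random() < 0.1:                # light exploration
        a = int(rng.integers(N_ACTIONS))
    nx, r = step(x, a)
    Q[cell(x), a] += 0.5 * (r + GAMMA * Q[cell(nx)].max() - Q[cell(x), a])
    x = nx

# Near the left boundary the learned policy should move right (action 1)
policy_left = int(np.argmax(Q[cell(-0.9)]))
```

The cell resolution trades off sample reuse against aggregation error, which is exactly the tension the paper's analysis quantifies.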
An intelligent factory-wide optimal operation system for continuous production process
NASA Astrophysics Data System (ADS)
Ding, Jinliang; Chai, Tianyou; Wang, Hongfeng; Wang, Junwei; Zheng, Xiuping
2016-03-01
In this study, a novel intelligent factory-wide operation system for a continuous production process is designed to optimise the entire production process, which consists of multiple units; furthermore, this system is developed using process operational data to avoid the complexity of mathematical modelling of the continuous production process. The data-driven approach aims to specify the structure of the optimal operation system; in particular, the operational data of the process are used to formulate each part of the system. In this context, the domain knowledge of process engineers is utilised, and a closed-loop dynamic optimisation strategy, which combines feedback, performance prediction, feed-forward, and dynamic tuning schemes into a framework, is employed. The effectiveness of the proposed system has been verified using industrial experimental results.
An adaptive metamodel-based global optimization algorithm for black-box type problems
NASA Astrophysics Data System (ADS)
Jie, Haoxiang; Wu, Yizhong; Ding, Jianwan
2015-11-01
In this article, an adaptive metamodel-based global optimization (AMGO) algorithm is presented to solve unconstrained black-box problems. In the AMGO algorithm, a hybrid model composed of kriging and an augmented radial basis function (RBF) is used as the surrogate model, and the weight factors of the hybrid model are selected adaptively during the optimization process. To balance local and global search, a sub-optimization problem is constructed during each iteration to determine the new iterate. In numerical experiments, six standard two-dimensional test functions are used to show the distributions of the iterates. The AMGO algorithm is also tested on seven well-known benchmark optimization problems and contrasted with three representative metamodel-based optimization methods: efficient global optimization (EGO), Gutmann-RBF, and the hybrid and adaptive metamodel (HAM). The test results demonstrate the efficiency and robustness of the proposed method. The AMGO algorithm is finally applied to the structural design of the import and export chamber of a cycloid gear pump, achieving satisfactory results.
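A minimal 1-D sketch conveys the metamodel loop: fit a cheap surrogate to the points evaluated so far, minimize the surrogate, and spend the next expensive evaluation there. This uses a plain Gaussian-RBF interpolant with purely greedy infill; the actual AMGO hybrid kriging/RBF model, its adaptive weights, and its balancing sub-optimization are more elaborate, and the test function here is invented.

```python
import numpy as np

rng = np.random.default_rng(0)
expensive = lambda x: (x - 0.7) ** 2 + 0.1 * np.sin(8 * x)  # "black box"

def rbf_fit_predict(xs, ys, xq, eps=3.0):
    """Fit a Gaussian-RBF interpolant to (xs, ys) and predict at xq."""
    phi = lambda r: np.exp(-(eps * r) ** 2)
    A = phi(np.abs(xs[:, None] - xs[None, :]))
    w = np.linalg.solve(A + 1e-10 * np.eye(len(xs)), ys)  # jittered solve
    return phi(np.abs(xq[:, None] - xs[None, :])) @ w

xs = np.array([0.0, 0.5, 1.0])            # initial design points
ys = expensive(xs)
grid = np.linspace(0.0, 1.0, 401)
for _ in range(8):                        # iterative infill
    xn = grid[np.argmin(rbf_fit_predict(xs, ys, grid))]
    if np.min(np.abs(xs - xn)) < 1e-9:    # already sampled: explore randomly
        xn = grid[rng.integers(len(grid))]
    xs, ys = np.append(xs, xn), np.append(ys, expensive(xn))

best_x = xs[np.argmin(ys)]                # best point found so far
```

Pure exploitation of the surrogate can stall at already-sampled points, which is why real infill criteria (expected improvement, bumpiness measures) mix in exploration; the random fallback above is a crude stand-in for that.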
Global search and optimization for free-return Earth-Mars cyclers
NASA Astrophysics Data System (ADS)
Russell, Ryan Paul
A planetary cycler trajectory is a periodic orbit that shuttles a spaceship indefinitely between two or more planets, ideally using no powered maneuvers. Recently, the cycler concept has been revived as an alternative to more traditional human-crewed Mars missions. This dissertation investigates a class of idealized Earth-Mars cyclers composed of Earth-to-Earth free-return trajectories patched together with gravity-assisted flybys. A systematic method is presented to identify all feasible free-return trajectories following an arbitrary gravity-assisted flyby. The multiple-revolution Lambert's problem is solved in the context of half-rev, full-rev, and generic returns. The solutions are expressed geometrically, and the resulting velocity diagram is a mission-planning tool with applications including, but not limited to, Earth-Mars cyclers. Two different global search methods are then developed and applied, taking advantage of all three types of free-return solutions. The first method results in twenty-four ballistic cyclers with periods of two to four synodic periods, ninety-two ballistic cyclers with periods of five or six synodic periods, and hundreds of near-ballistic cyclers. Most of the solutions are previously undocumented. The second, more generalized method searches only for the more practical cyclers with repeat times of three synodic periods or less. This global approach uses combinatorial analysis and minimax optimization to identify 203 promising, mostly new, ballistic or near-ballistic cyclers. Finally, the feasibility of accurate ephemeris versions of the promising idealized cyclers is demonstrated. An efficient optimization method that utilizes analytic gradients is developed for long-duration, ballistic, patched-conic trajectories with multiple flybys. The approach is applied at every step of a continuation method that transitions the simple model solutions to accurate ephemeris solutions. Hundreds of ballistic launch opportunities for
SfM with MRFs: discrete-continuous optimization for large-scale structure from motion.
Crandall, David J; Owens, Andrew; Snavely, Noah; Huttenlocher, Daniel P
2013-12-01
Recent work in structure from motion (SfM) has built 3D models from large collections of images downloaded from the Internet. Many approaches to this problem use incremental algorithms that solve progressively larger bundle adjustment problems. These incremental techniques scale poorly as the image collection grows, and can suffer from drift or local minima. We present an alternative framework for SfM based on finding a coarse initial solution using hybrid discrete-continuous optimization and then improving that solution using bundle adjustment. The initial optimization step uses a discrete Markov random field (MRF) formulation, coupled with a continuous Levenberg-Marquardt refinement. The formulation naturally incorporates various sources of information about both the cameras and points, including noisy geotags and vanishing point (VP) estimates. We test our method on several large-scale photo collections, including one with measured camera positions, and show that it produces models that are similar to or better than those produced by incremental bundle adjustment, but more robustly and in a fraction of the time. PMID:24136425
Optimizing the Concentration and Bolus of a Drug Delivered by Continuous Infusion
Thall, Peter F.; Szabo, Aniko; Nguyen, Hoang Q.; Amlie-Lefond, Catherine M.; Zaidat, Osama O.
2011-01-01
We consider treatment regimes in which an agent is administered continuously at a specified concentration until either a response is achieved or a predetermined maximum infusion time is reached. Response is an event defined to characterize therapeutic efficacy. A portion of the maximum planned total amount administered is given as an initial bolus. For such regimes, the amount of the agent received by the patient depends on the time to response. An additional complication when response is evaluated periodically rather than continuously is that the response time is interval censored. We address the problem of designing a clinical trial in which such response time data and a binary indicator of toxicity are used together to jointly optimize the concentration and the size of the bolus. We propose a sequentially adaptive Bayesian design that chooses the optimal treatment for successive patients by maximizing the posterior mean utility of the joint efficacy-toxicity outcome. The methodology is illustrated by a trial in which tissue plasminogen activator is infused intra-arterially as rapid treatment for acute ischemic stroke. PMID:21401568
NASA Astrophysics Data System (ADS)
Li, Yuan; Gosálvez, Miguel A.; Pal, Prem; Sato, Kazuo; Xing, Yan
2015-05-01
We combine the particle swarm optimization (PSO) method and the continuous cellular automaton (CCA) in order to simulate deep reactive ion etching (DRIE), also known as the Bosch process. By considering a generic growth/etch process, the proposed PSO-CCA method provides a general, integrated procedure to optimize the parameter values of any theoretical model conceived to describe the corresponding experiments, which are simulated by the CCA method. To stress the flexibility of the PSO-CCA method, two different theoretical models of the DRIE process are used, namely, the ballistic transport and reaction (BTR) model and the reactant concentration (RC) model. DRIE experiments are designed and conducted to compare the simulation results with experiments on different machines and under different process conditions. Previously reported experimental data are also considered to further test the flexibility of the proposed method. The agreement between the simulations and experiments strongly indicates that the PSO-CCA method can be used to adjust the theoretical parameters using a limited amount of experimental data. The proposed method has the potential to be applied to the modeling and optimization of other growth/etch processes.
NASA Astrophysics Data System (ADS)
Paul, Bryan
Waveform design that allows for a wide variety of frequency modulation (FM) has proven benefits. However, dictionary-based optimization is limited, and gradient search methods are often intractable. A new method is proposed that uses differential evolution to design waveforms whose instantaneous frequencies (IFs) are cubic FM functions with coefficients constrained to the surface of the three-dimensional unit sphere. Cubic IF functions subsume well-known IF functions such as linear, quadratic-monomial, and cubic-monomial IFs. In addition, any nonlinear IF function sufficiently approximated by a third-order Taylor series over the unit time sequence can be represented in this space. Analog methods for generating polynomial IF waveforms are well established, allowing for practical implementation in real-world systems. By constraining the search space to these waveforms of interest, alternative optimization methods such as differential evolution can be used to optimize tracking performance in a variety of radar environments. While information-theoretic results exist for simplified tracking models and finite waveform dictionaries, continuous waveform design in high-SNR, narrowband, cluttered environments is explored here.
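The sphere-constrained search can be sketched as follows. Assumptions not taken from the source: a three-coefficient cubic IF law b1*t + b2*t^2 + b3*t^3 on the unit interval, peak autocorrelation sidelobe as a stand-in objective (the dissertation optimizes tracking performance), and a hand-rolled differential evolution that re-projects every candidate onto the unit sphere.

```python
import numpy as np

def if_waveform(coeffs, n=128):
    """Unit-energy waveform whose IF is b1*t + b2*t**2 + b3*t**3 on [0, 1]."""
    b1, b2, b3 = np.asarray(coeffs, float) / np.linalg.norm(coeffs)
    t = np.linspace(0.0, 1.0, n)
    phase = 2 * np.pi * (b1 * t**2 / 2 + b2 * t**3 / 3 + b3 * t**4 / 4)
    s = np.exp(1j * phase)
    return s / np.linalg.norm(s)

def psl(coeffs):
    """Peak autocorrelation sidelobe (toy surrogate for tracking cost)."""
    s = if_waveform(coeffs)
    r = np.abs(np.correlate(s, s, mode="full"))
    r[len(s) - 1] = 0.0                     # mask the zero-lag mainlobe
    return float(r.max())

def de_on_sphere(f, dim=3, pop=16, iters=40, seed=0):
    """Differential evolution with candidates re-projected to the unit sphere."""
    rng = np.random.default_rng(seed)
    P = rng.normal(size=(pop, dim))
    P /= np.linalg.norm(P, axis=1, keepdims=True)   # start on the sphere
    fit = np.array([f(p) for p in P])
    for _ in range(iters):
        for i in range(pop):
            a, b, c = P[rng.choice(pop, 3, replace=False)]
            trial = a + 0.7 * (b - c)               # mutation
            trial /= np.linalg.norm(trial)          # keep coefficients on sphere
            ft = f(trial)
            if ft < fit[i]:                         # greedy selection
                P[i], fit[i] = trial, ft
    return P[np.argmin(fit)], fit.min()

best_coeffs, best_psl = de_on_sphere(psl)
```

Normalizing the mutated vector is the simplest way to keep the population feasible on a sphere; crossover is omitted for brevity.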
Optimization of partial nitritation in a continuous flow internal loop airlift reactor.
Jin, Ren-Cun; Xing, Bao-Shan; Ni, Wei-Min
2013-11-01
In the present study, the performance of the partial nitritation (PN) process in a continuous flow internal loop airlift reactor was optimized by applying the response surface method (RSM). The purpose of this work was to find the optimal combination of influent ammonium (NH4(+)-Ninf), dissolved oxygen (DO) and the alkalinity/ammonium ratio (Alk/NH4(+)-N) with respect to the effluent nitrite to ammonium molar ratio and nitrite accumulation ratio. Based on the RSM results, the reduced cubic model and the quadratic model developed for the responses indicated that the optimal conditions were a DO content of 1.1-2.1 mg L(-1), an Alk/NH4(+)-N ratio of 3.30-5.69 and an NH4(+)-Ninf content of 608-1039 mg L(-1). The results of confirmation trials were close to the predictions of the developed models. Furthermore, three types of alkali were comparatively explored for use in the PN process, and bicarbonate was found to be the best alkalinity source. PMID:24012847
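The RSM workflow behind such an optimization can be shown generically: fit a full quadratic response surface to designed experiments by least squares and solve for its stationary point. The data below are synthetic stand-ins for two coded factors (think DO and the Alk/NH4(+)-N ratio), not the reactor measurements from the study.

```python
import numpy as np

# Synthetic "experiments": two coded factors and a noisy measured response
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(30, 2))
response = 10 - (X[:, 0] - 0.3) ** 2 - 2 * (X[:, 1] + 0.2) ** 2
y = response + rng.normal(scale=0.01, size=30)

# Design matrix for the full quadratic response surface
x1, x2 = X[:, 0], X[:, 1]
A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])
b0, b1, b2, b12, b11, b22 = np.linalg.lstsq(A, y, rcond=None)[0]

# Stationary point of the fitted surface: solve grad f = 0
H = np.array([[2 * b11, b12], [b12, 2 * b22]])
opt = np.linalg.solve(H, -np.array([b1, b2]))   # near (0.3, -0.2), a maximum here
```

Checking the eigenvalues of H tells you whether the stationary point is a maximum, minimum, or saddle, which is part of the standard RSM canonical analysis.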
Optimization Strategies for Single-Stage, Multi-Stage and Continuous ADRs
NASA Technical Reports Server (NTRS)
Shirron, Peter J.
2014-01-01
Adiabatic Demagnetization Refrigerators (ADR) have many advantages that are prompting a resurgence in their use in spaceflight and laboratory applications. They are solid-state coolers capable of very high efficiency and very wide operating range. However, their low energy storage density translates to larger mass for a given cooling capacity than is possible with other refrigeration techniques. The interplay between refrigerant mass and other parameters such as magnetic field and heat transfer points in multi-stage ADRs gives rise to a wide parameter space for optimization. This paper first presents optimization strategies for single ADR stages, focusing primarily on obtaining the largest cooling capacity per stage mass, then discusses the optimization of multi-stage and continuous ADRs in the context of the coordinated heat transfer that must occur between stages. The goal for the latter is usually to obtain the largest cooling power per mass or volume, but there can also be many secondary objectives, such as limiting instantaneous heat rejection rates and producing intermediate temperatures for cooling of other instrument components.
Mostafaei, M; Ghobadian, B; Barzegar, M; Banakar, A
2015-11-01
This paper evaluates and optimizes the continuous production of biodiesel from waste cooking oil. In this research work, methanol was used as the reactant and potassium hydroxide as the catalyst, and response surface methodology was employed. Using a central composite experimental design (CCED), the effects of factors such as irradiation distance, probe diameter, ultrasonic amplitude, vibration pulse and material flow rate into the reactor on reaction yield were studied to optimize the process. The results showed that all of the considered parameters significantly affect the reaction efficiency. The optimum combination was an irradiation distance of 75 mm, a probe diameter of 28 mm, an ultrasonic amplitude of 56%, a vibration pulse of 62% and a flow rate of 50 ml/min, which gave a reaction yield of 91.6% and an energy consumption of 102.8 W. To verify this optimized combination, three tests were carried out; they showed an average efficiency of 91.12% and a power consumption of 102.4 W, in close agreement with the model's predictions. PMID:26186820
Autonomous Modelling of X-ray Spectra Using Robust Global Optimization Methods
NASA Astrophysics Data System (ADS)
Rogers, Adam; Safi-Harb, Samar; Fiege, Jason
2015-08-01
The standard approach to model fitting in X-ray astronomy is by means of local optimization methods. However, these local optimizers suffer from a number of problems, such as a tendency for the fit parameters to become trapped in local minima, and can require an involved process of detailed user intervention to guide them through the optimization process. In this work we introduce a general GUI-driven global optimization method for fitting models to X-ray data, written in MATLAB, which searches for optimal models with minimal user interaction. We directly interface with the commonly used XSPEC libraries to access the full complement of pre-existing spectral models that describe a wide range of physics appropriate for modelling astrophysical sources, including supernova remnants and compact objects. Our algorithm is powered by the Ferret genetic algorithm and Locust particle swarm optimizer from the Qubist Global Optimization Toolbox, which are robust at finding families of solutions and identifying degeneracies. This technique will be particularly instrumental for multi-parameter models and high-fidelity data. In this presentation, we provide details of the code and use our techniques to analyze X-ray data obtained from a variety of astrophysical sources.
Zhong, Shangping; Chen, Tianshun; He, Fengying; Niu, Yuzhen
2014-09-01
For a practical pattern classification task solved by kernel methods, the computing time is mainly spent on kernel learning (or training). However, current kernel learning approaches are based on local optimization techniques and struggle to achieve good time performance, especially for large datasets, so existing algorithms cannot easily be extended to large-scale tasks. In this paper, we present a fast Gaussian kernel learning method that solves a specially structured global optimization (SSGO) problem. We optimize the Gaussian kernel function using the formulated kernel target alignment criterion, which is a difference of increasing (d.i.) functions. Through a power-transformation based convexification method, the objective criterion can be represented as a difference of convex (d.c.) functions with a fixed power-transformation parameter, and the objective programming problem can then be converted to an SSGO problem: globally minimizing a concave function over a convex set. The SSGO problem is classical and readily solvable. Thus, to find the global optimal solution efficiently, we can adopt the improved Hoffman's outer approximation method, which does not need to repeat the search procedure from different starting points to locate the best local minimum. The proposed method can also be proven to converge to the global solution for any classification task. We evaluate the proposed method on twenty benchmark datasets and compare it with four other Gaussian kernel learning methods. Experimental results show that the proposed method stably achieves both good time efficiency and good classification performance. PMID:24929345
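The kernel-target alignment criterion itself is easy to state: it is the normalized Frobenius inner product between the Gram matrix and the ideal target kernel yy^T. The sketch below evaluates it on synthetic two-class data to pick a Gaussian width from a small grid; this simple grid search illustrates only the criterion, not the paper's d.c. decomposition or outer-approximation solver.

```python
import numpy as np

def alignment(K, y):
    """Kernel-target alignment between Gram matrix K and labels y in {-1, +1}."""
    Y = np.outer(y, y)                          # ideal target kernel
    return float(np.sum(K * Y) / (np.linalg.norm(K) * np.linalg.norm(Y)))

def gaussian_K(X, gamma):
    """Gaussian Gram matrix K_ij = exp(-gamma * ||x_i - x_j||^2)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Two well-separated synthetic classes
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.3, (20, 2)), rng.normal(1, 0.3, (20, 2))])
y = np.r_[-np.ones(20), np.ones(20)]

gammas = [0.01, 0.1, 1.0, 10.0]
best_gamma = max(gammas, key=lambda g: alignment(gaussian_K(X, g), y))
```

Too small a width makes K nearly all-ones and too large a width makes it nearly the identity; alignment is low in both extremes, which is why it works as a width-selection criterion.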
Climate, Agriculture, Energy and the Optimal Allocation of Global Land Use
NASA Astrophysics Data System (ADS)
Steinbuks, J.; Hertel, T. W.
2011-12-01
The allocation of the world's land resources over the course of the next century has become a pressing research question. Continuing population increases, improving and land-intensive diets amongst the poorest populations in the world, increasing production of biofuels, and rapid urbanization in developing countries are all competing for land, even as the world looks to land resources to supply more environmental services. The latter include biodiversity and natural lands, as well as forests and grasslands devoted to carbon sequestration. All of this is taking place in the context of faster-than-expected climate change, which is altering the biophysical environment for land-related activities. The goal of the paper is to determine the optimal profile for global land use in the context of growing commercial demands for food and forest products, increasing non-market demands for ecosystem services, and more stringent GHG mitigation targets. We then seek to assess how the uncertainty associated with the underlying biophysical and economic processes influences this optimal profile of land use, in light of the potential irreversibility of these decisions. We develop a dynamic, long-run, forward-looking partial equilibrium framework in which the societal objective function being maximized places value on food production, liquid fuels (including biofuels), timber production, forest carbon and biodiversity. Given the importance of land-based emissions to any GHG mitigation strategy, as well as the potential impacts of climate change itself on the productivity of land in agriculture, forestry and ecosystem services, we aim to identify the optimal allocation of the world's land resources over the course of the next century in the face of alternative GHG constraints. The forestry sector is characterized by multiple forest vintages, which add considerable computational complexity in the context of this dynamic analysis. In order to solve this model efficiently, we have employed the
NASA Astrophysics Data System (ADS)
Shabbir, Faisal; Omenzetter, Piotr
2014-04-01
Much effort is devoted nowadays to deriving accurate finite element (FE) models to be used for structural health monitoring, damage detection and assessment. However, the formation of an FE model representative of the original structure is a difficult task. Model updating is a branch of optimization which calibrates the FE model by comparing the modal properties of the actual structure with those of the FE predictions. As the number of experimental measurements is usually much smaller than the number of uncertain parameters, and, consequently, not all uncertain parameters are selected for model updating, different local minima may exist in the solution space; experimental noise further exacerbates the problem. The attainment of a global solution in a multi-dimensional search space is a challenging problem. Global optimization algorithms (GOAs) have received interest in the previous decade for this problem, but no GOA can ensure detection of the global minimum either. To counter this problem, a combination of a GOA with the sequential niche technique (SNT), which systematically searches the whole solution space, is proposed in this research. A dynamically tested full-scale pedestrian bridge is taken as a case study. Two different GOAs, namely particle swarm optimization (PSO) and a genetic algorithm (GA), are investigated in combination with SNT, and their results are compared in terms of efficiency in detecting global minima. The systematic search finds different solutions in the search space, thus increasing the confidence that the global minimum has been located.
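The sequential niche idea can be sketched in one dimension: after each minimum is located, the objective is penalized in its neighbourhood so the next search is pushed into a different basin. In this toy a dense grid search stands in for the GA/PSO, and a simple additive penalty stands in for the multiplicative derating functions used in the actual SNT; the two-well function is invented for illustration.

```python
import numpy as np

def sequential_niche_1d(objective, lo, hi, n_minima=2, radius=0.5, n_grid=2001):
    """Locate several minima by penalizing neighbourhoods of those found."""
    xs = np.linspace(lo, hi, n_grid)
    found = []
    for _ in range(n_minima):
        vals = np.asarray(objective(xs), dtype=float)
        for m in found:                          # derate near known minima
            near = np.abs(xs - m) < radius
            vals[near] += 1e3 * (1.0 - np.abs(xs[near] - m) / radius)
        found.append(float(xs[np.argmin(vals)]))
    return found

f = lambda x: (x ** 2 - 1.0) ** 2                # wells at x = -1 and x = +1
minima = sorted(sequential_niche_1d(f, -2.0, 2.0))
```

The niche radius is the sensitive parameter: too small and the same basin is rediscovered, too large and a neighbouring basin is masked, which matches the tuning issues reported for SNT in the literature.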
NASA Astrophysics Data System (ADS)
Jacobson, Gloria; Rella, Chris; Farinas, Alejandro
2014-05-01
Technological advancement of instrumentation in atmospheric and other geoscience disciplines over the past decade has led to a shift from discrete sample analysis to continuous, in-situ monitoring. Standard error analysis used for discrete measurements is not sufficient to assess and compare the error contributions of noise and drift from continuous-measurement instruments, and a different statistical analysis approach should be applied. The Allan standard deviation analysis technique, developed for atomic clock stability assessment by David W. Allan [1], can be effectively and gainfully applied to continuous measurement instruments; for example, P. Werle et al. have applied these techniques to signal averaging for atmospheric monitoring by Tunable Diode-Laser Absorption Spectroscopy (TDLAS) [2]. This presentation will build on, and translate, prior foundational publications to provide contextual definitions and guidelines for the practical application of this analysis technique to continuous scientific measurements. The specific example of a Picarro G2401 Cavity Ringdown Spectroscopy (CRDS) analyzer used for continuous atmospheric monitoring of CO2, CH4 and CO will be used to define the basic features of the Allan deviation, assess factors affecting the analysis, and explore the translation from time series to Allan deviation plot for different types of instrument noise (white noise, linear drift, and interpolated data). In addition, the application of the Allan deviation to optimize and predict the performance of different calibration schemes will be presented. Even though this presentation uses the specific example of the Picarro G2401 CRDS analyzer for atmospheric monitoring, the objective is to present the information such that it can be successfully applied to other instrument sets and disciplines. [1] D.W. Allan, "Statistics of Atomic Frequency Standards," Proc. IEEE, vol. 54, pp. 221-230, Feb 1966 [2] P. Werle, R. Mücke, F. Slemr, "The Limits
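The Allan deviation described above can be computed directly from a regularly sampled series. The sketch below is generic code (not Picarro's implementation) and checks the signature behaviour for white noise, where the Allan deviation falls as tau^(-1/2).

```python
import numpy as np

def allan_deviation(y, taus, dt=1.0):
    """Overlapping Allan deviation of a regularly sampled series y.

    y    : 1-D array of measurements sampled every dt seconds
    taus : iterable of averaging times (multiples of dt)
    """
    y = np.asarray(y, dtype=float)
    out = []
    for tau in taus:
        m = int(round(tau / dt))              # samples per averaging window
        if m < 1 or 2 * m >= len(y):
            out.append(np.nan)
            continue
        cs = np.cumsum(np.insert(y, 0, 0.0))  # prefix sums
        avg = (cs[m:] - cs[:-m]) / m          # all window means, stride 1
        diff = avg[m:] - avg[:-m]             # adjacent (overlapping) windows
        out.append(np.sqrt(0.5 * np.mean(diff ** 2)))
    return np.array(out)

# White noise (unit variance): expect ~1, ~1/sqrt(10), ~1/sqrt(100)
rng = np.random.default_rng(0)
adev = allan_deviation(rng.normal(size=100_000), taus=[1, 10, 100])
```

Plotting `adev` against `taus` on log-log axes gives the familiar Allan plot: a -1/2 slope for white noise, and a turn-up at long tau when drift dominates, which is what the presentation uses to choose averaging and calibration intervals.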
On computing the global time-optimal motions of robotic manipulators in the presence of obstacles
NASA Technical Reports Server (NTRS)
Shiller, Zvi; Dubowsky, Steven
1991-01-01
A method for computing the time-optimal motions of robotic manipulators is presented that considers the nonlinear manipulator dynamics, actuator constraints, joint limits, and obstacles. The optimization problem is reduced to a search for the time-optimal path in the n-dimensional position space. A small set of near-optimal paths is first efficiently selected from a grid, using a branch-and-bound search and a series of lower-bound estimates on the traveling time along a given path. These paths are further optimized with a local path optimization to yield the global optimal solution. Obstacles are considered by eliminating the collision points from the tessellated space and by adding a penalty function to the motion time in the local optimization. The computational efficiency of the method stems from the reduced dimensionality of the searched space and from combining the grid search with a local optimization. The method is demonstrated in several examples for two- and six-degree-of-freedom manipulators with obstacles.
PollyNET: a global network of automated Raman-polarization lidars for continuous aerosol profiling
NASA Astrophysics Data System (ADS)
Baars, H.; Kanitz, T.; Engelmann, R.; Althausen, D.; Heese, B.; Komppula, M.; Preißler, J.; Tesche, M.; Ansmann, A.; Wandinger, U.; Lim, J.-H.; Ahn, J. Y.; Stachlewska, I. S.; Amiridis, V.; Marinou, E.; Seifert, P.; Hofer, J.; Skupin, A.; Schneider, F.; Bohlmann, S.; Foth, A.; Bley, S.; Pfüller, A.; Giannakaki, E.; Lihavainen, H.; Viisanen, Y.; Hooda, R. K.; Pereira, S.; Bortoli, D.; Wagner, F.; Mattis, I.; Janicka, L.; Markowicz, K. M.; Achtert, P.; Artaxo, P.; Pauliquevis, T.; Souza, R. A. F.; Sharma, V. P.; van Zyl, P. G.; Beukes, J. P.; Sun, J. Y.; Rohwer, E. G.; Deng, R.; Mamouri, R. E.; Zamorano, F.
2015-10-01
A global vertically resolved aerosol data set covering more than 10 years of observations at more than 20 measurement sites distributed from 63° N to 52° S and 72° W to 124° E has been achieved within the Raman and polarization lidar network PollyNET. This network consists of portable, remote-controlled multiwavelength-polarization-Raman lidars (Polly) for automated and continuous 24/7 observations of clouds and aerosols. PollyNET is an independent, voluntary, and scientific network. All Polly lidars feature a standardized instrument design and apply unified calibration, quality control, and data analysis. The observations are processed in near-real time without manual intervention, and are presented online at http://polly.tropos.de. The paper gives an overview of the observations on four continents and two research vessels obtained with eight Polly systems. The specific aerosol types at these locations (mineral dust, smoke, dust-smoke and other dusty mixtures, urban haze, and volcanic ash) are identified by their Ångström exponent, lidar ratio, and depolarization ratio. The vertical aerosol distribution at the PollyNET locations is discussed on the basis of more than 55 000 automatically retrieved 30 min particle backscatter coefficient profiles at 532 nm. A seasonal analysis of measurements at selected sites revealed typical and extraordinary aerosol conditions as well as seasonal differences. These studies show the potential of PollyNET to support the establishment of a global aerosol climatology that covers the entire troposphere.
Optimal Design of Grid-Stiffened Composite Panels Using Global and Local Buckling Analysis
Ambur, D.R.; Jaunky, N.; Knight, N.F. Jr.
1996-04-01
A design strategy for optimal design of composite grid-stiffened panels subjected to global and local buckling constraints is developed using a discrete optimizer. An improved smeared stiffener theory is used for the global buckling analysis. Local buckling of skin segments is assessed using a Rayleigh-Ritz method that accounts for material anisotropy and transverse shear flexibility. The local buckling of stiffener segments is also assessed. Design variables are the axial and transverse stiffener spacing, stiffener height and thickness, skin laminate, and stiffening configuration. The design optimization process is adapted to identify the lightest-weight stiffening configuration and pattern for grid stiffened composite panels given the overall panel dimensions, design in-plane loads, material properties, and boundary conditions of the grid-stiffened panel.
Optimal Design of Grid-Stiffened Composite Panels Using Global and Local Buckling Analysis
NASA Technical Reports Server (NTRS)
Ambur, Damodar R.; Jaunky, Navin; Knight, Norman F., Jr.
1996-01-01
A design strategy for optimal design of composite grid-stiffened panels subjected to global and local buckling constraints is developed using a discrete optimizer. An improved smeared stiffener theory is used for the global buckling analysis. Local buckling of skin segments is assessed using a Rayleigh-Ritz method that accounts for material anisotropy and transverse shear flexibility. The local buckling of stiffener segments is also assessed. Design variables are the axial and transverse stiffener spacing, stiffener height and thickness, skin laminate, and stiffening configuration. The design optimization process is adapted to identify the lightest-weight stiffening configuration and pattern for grid stiffened composite panels given the overall panel dimensions, design in-plane loads, material properties, and boundary conditions of the grid-stiffened panel.
Isolated particle swarm optimization with particle migration and global best adoption
NASA Astrophysics Data System (ADS)
Tsai, Hsing-Chih; Tyan, Yaw-Yauan; Wu, Yun-Wu; Lin, Yong-Huang
2012-12-01
Isolated particle swarm optimization (IPSO) segregates particles into several sub-swarms in order to improve global optimization ability. In this study, particle migration and global best adoption (gbest adoption) are used to improve IPSO. Particle migration allows particles to travel among sub-swarms, based on the fitness of the sub-swarms. Gbest adoption allows sub-swarms to consult the gbest, either proportionally or probabilistically, after a certain number of iterations; these variants are termed gbest replacing and gbest sharing, respectively. Three well-known benchmark functions are utilized to determine the parameter settings of the IPSO, and 13 benchmark functions are then used to study its performance. Computational experience demonstrates that the designed IPSO is superior to the original particle swarm optimization (PSO) in terms of the accuracy and stability of the results when isolation, particle migration and gbest sharing are involved.
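As a rough illustration of the isolation and migration ideas in this abstract, the sketch below runs several independent sub-swarms, each guided by its own local best, and periodically moves each sub-swarm's best particle into a randomly chosen sub-swarm. All coefficients, the migration rule, and the test function are illustrative assumptions, not the authors' settings.

```python
import random

def sphere(x):
    """Simple multidimensional test function with minimum 0 at the origin."""
    return sum(v * v for v in x)

def isolated_pso(f, dim=5, n_swarms=4, swarm_size=10, iters=200,
                 migrate_every=25, seed=1):
    """Toy isolated PSO: sub-swarms evolve independently around their own
    local best; every `migrate_every` iterations each sub-swarm's best
    particle migrates to a randomly selected sub-swarm."""
    rng = random.Random(seed)
    swarms = []
    for _ in range(n_swarms):
        ps = []
        for _ in range(swarm_size):
            x = [rng.uniform(-5.0, 5.0) for _ in range(dim)]
            ps.append({"x": x[:], "v": [0.0] * dim, "p": x[:], "pf": f(x)})
        swarms.append(ps)
    g_x, g_f = None, float("inf")
    for t in range(iters):
        for ps in swarms:
            if not ps:
                continue
            lbest = min(ps, key=lambda q: q["pf"])  # sub-swarm's own best
            for q in ps:
                for d in range(dim):
                    q["v"][d] = (0.7 * q["v"][d]
                                 + 1.5 * rng.random() * (q["p"][d] - q["x"][d])
                                 + 1.5 * rng.random() * (lbest["p"][d] - q["x"][d]))
                    q["x"][d] += q["v"][d]
                fx = f(q["x"])
                if fx < q["pf"]:
                    q["p"], q["pf"] = q["x"][:], fx
                if fx < g_f:
                    g_x, g_f = q["x"][:], fx
        if (t + 1) % migrate_every == 0:
            for i, ps in enumerate(swarms):  # migration step
                j = rng.randrange(n_swarms)
                if j != i and len(ps) > 1:
                    best = min(ps, key=lambda q: q["pf"])
                    ps.remove(best)
                    swarms[j].append(best)
    return g_x, g_f

best_x, best_f = isolated_pso(sphere)
print(best_f)
```

The isolation keeps sub-swarms from collapsing onto one basin too early, while migration lets good solutions spread; the paper's gbest replacing/sharing rules add a further, controlled flow of global information.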
Namiki, Ryo; Koashi, Masato; Imoto, Nobuyuki
2006-03-15
We investigate the security of continuous-variable quantum key distribution using coherent states and reverse reconciliation against Gaussian individual attacks based on an optimal Gaussian 1 → 2 cloning machine. We provide an implementation of the optimal Gaussian individual attack. We also find a Bell-measurement attack which works without delayed choice of measurements and has better performance than the cloning attack.
Chaotic Teaching-Learning-Based Optimization with Lévy Flight for Global Numerical Optimization.
He, Xiangzhu; Huang, Jida; Rao, Yunqing; Gao, Liang
2016-01-01
Recently, teaching-learning-based optimization (TLBO), as one of the emerging nature-inspired heuristic algorithms, has attracted increasing attention. In order to enhance its convergence rate and prevent it from getting stuck in local optima, a novel metaheuristic has been developed in this paper, where particular characteristics of the chaos mechanism and Lévy flight are introduced to the basic framework of TLBO. The new algorithm is tested on several large-scale nonlinear benchmark functions with different characteristics and compared with other methods. Experimental results show that the proposed algorithm outperforms other algorithms and achieves a satisfactory improvement over TLBO. PMID:26941785
Chaotic Teaching-Learning-Based Optimization with Lévy Flight for Global Numerical Optimization
He, Xiangzhu; Huang, Jida; Rao, Yunqing; Gao, Liang
2016-01-01
Recently, teaching-learning-based optimization (TLBO), as one of the emerging nature-inspired heuristic algorithms, has attracted increasing attention. In order to enhance its convergence rate and prevent it from getting stuck in local optima, a novel metaheuristic has been developed in this paper, where particular characteristics of the chaos mechanism and Lévy flight are introduced to the basic framework of TLBO. The new algorithm is tested on several large-scale nonlinear benchmark functions with different characteristics and compared with other methods. Experimental results show that the proposed algorithm outperforms other algorithms and achieves a satisfactory improvement over TLBO. PMID:26941785
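A common way to realize the Lévy-flight perturbation mentioned in this abstract is Mantegna's algorithm, which generates heavy-tailed steps from two Gaussian draws. The sketch below follows that standard recipe; the paper's exact Lévy and chaos mechanisms may differ, so treat this as a generic illustration.

```python
import math
import random

def levy_step(beta=1.5, rng=random):
    """Draw one heavy-tailed Lévy-flight step via Mantegna's algorithm:
    step = u / |v|^(1/beta) with u ~ N(0, sigma_u^2), v ~ N(0, 1)."""
    num = math.gamma(1.0 + beta) * math.sin(math.pi * beta / 2.0)
    den = math.gamma((1.0 + beta) / 2.0) * beta * 2.0 ** ((beta - 1.0) / 2.0)
    sigma_u = (num / den) ** (1.0 / beta)
    u = rng.gauss(0.0, sigma_u)
    v = rng.gauss(0.0, 1.0)
    return u / abs(v) ** (1.0 / beta)

# A learner position perturbed this way makes mostly small moves with
# occasional long jumps, which is what helps TLBO leave local optima.
rng = random.Random(42)
steps = [levy_step(1.5, rng) for _ in range(10000)]
```

The occasional very long jumps (the heavy tail) are the exploration mechanism; a plain Gaussian perturbation of matched typical size would lack them.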
A coupled global-local shell model with continuous interlaminar shear stresses
NASA Astrophysics Data System (ADS)
Gruttmann, F.; Wagner, W.; Knust, G.
2016-02-01
In this paper layered composite shells subjected to static loading are considered. The theory is based on a multi-field functional, whose associated Euler-Lagrange equations include, besides the global shell equations formulated in stress resultants, the local in-plane equilibrium in terms of stresses and a constraint which enforces the correct shape of warping through the thickness. Within a four-node element the warping displacements are interpolated with layerwise cubic functions in the thickness direction and a constant shape throughout the element reference surface. Elimination of stress, warping and Lagrange parameters on the element level leads to a mixed hybrid shell element with 5 or 6 nodal degrees of freedom. The implementation in a finite element program is simple. The computed interlaminar shear stresses are automatically continuous at the layer boundaries. The stress boundary conditions at the outer surfaces are also fulfilled, and the integrals of the shear stresses coincide exactly with the independently interpolated shear forces without the introduction of further constraints. The essential feature of the element formulation is that it leads to the usual shell degrees of freedom, which allows application of standard boundary or symmetry conditions and computation of shell structures with intersections.
A Bell-Curved Based Algorithm for Mixed Continuous and Discrete Structural Optimization
NASA Technical Reports Server (NTRS)
Kincaid, Rex K.; Weber, Michael; Sobieszczanski-Sobieski, Jaroslaw
2001-01-01
An evolutionary based strategy utilizing two normal distributions to generate children is developed to solve mixed integer nonlinear programming problems. This Bell-Curve Based (BCB) evolutionary algorithm is similar in spirit to (mu + mu) evolutionary strategies and evolutionary programs but with fewer parameters to adjust and no mechanism for self adaptation. First, a new version of BCB to solve purely discrete optimization problems is described and its performance tested against a tabu search code for an actuator placement problem. Next, the performance of a combined version of discrete and continuous BCB is tested on 2-dimensional shape problems and on a minimum weight hub design problem. In the latter case the discrete portion is the choice of the underlying beam shape (I, triangular, circular, rectangular, or U).
Vercruysse, J; Peeters, E; Fonteyne, M; Cappuyns, P; Delaet, U; Van Assche, I; De Beer, T; Remon, J P; Vervaet, C
2015-01-01
Since small scale is key for the successful introduction of continuous techniques in the pharmaceutical industry, allowing their use during formulation development and process optimization, it is essential to determine whether product quality is similar when small quantities of material are processed compared to the continuous processing of larger quantities. Therefore, the aim of this study was to investigate whether material processed in a single cell of the six-segmented fluid bed dryer of the ConsiGma™-25 system (a continuous twin screw granulation and drying system introduced by GEA Pharma Systems, Collette™, Wommelgem, Belgium) is predictive of granule and tablet quality during full-scale manufacturing when all drying cells are filled. Furthermore, the performance of the ConsiGma™-1 system (a mobile laboratory unit) was evaluated and compared to the ConsiGma™-25 system. A premix of two active ingredients, powdered cellulose, maize starch, pregelatinized starch and sodium starch glycolate was granulated with distilled water. After drying and milling (1000 μm, 800 rpm), granules were blended with magnesium stearate and compressed using a Modul™ P tablet press (tablet weight: 430 mg, main compression force: 12 kN). Single-cell experiments using the ConsiGma™-25 system and ConsiGma™-1 system were performed in triplicate. Additionally, a 1 h continuous run using the ConsiGma™-25 system was executed. Process outcomes (torque, barrel wall temperature, product temperature during drying) and granule (residual moisture content, particle size distribution, bulk and tapped density, Hausner ratio, friability) as well as tablet (hardness, friability, disintegration time and dissolution) quality attributes were evaluated. During the 1 h continuous run, a stabilization period was observed for torque and barrel wall temperature due to initial layering of the screws and the screw chamber walls with material. Consequently, slightly deviating
Optimal dose-finding designs with correlated continuous and discrete responses.
Fedorov, Valerii; Wu, Yuehui; Zhang, Rongmei
2012-02-10
In dose-finding clinical studies, it is common that multiple endpoints are of interest. For instance, in phase I/II studies, efficacy and toxicity are often the primary endpoints, which are observed simultaneously and which need to be evaluated together. Motivated by this, we confine ourselves to bivariate responses and focus on the most analytically difficult case: a mixture of continuous and categorical responses. We adopt the bivariate probit dose-response model and quantify our goal by a utility function. We study locally optimal designs, two-stage optimal designs, and fully adaptive designs under different ethical and cost constraints in the experiments. We assess the performance of two-stage designs and fully adaptive designs via simulations. Our simulations suggest that the two-stage designs are as efficient as and may be more efficient than the fully adaptive designs if there is a moderate sample size in the initial stage. In addition, two-stage designs are easier to construct and implement and thus can be a useful approach in practice. PMID:22162014
A comparative study of expected improvement-assisted global optimization with different surrogates
NASA Astrophysics Data System (ADS)
Wang, Hu; Ye, Fan; Li, Enying; Li, Guangyao
2016-08-01
Efficient global optimization (EGO) uses a surrogate uncertainty estimator called the expected improvement (EI) to guide the selection of the next sampling candidates. Theoretically, any modelling method can be integrated with the EI criterion. To improve the convergence rate, a multi-surrogate efficient global optimization (MSEGO) method was suggested. In practice, EI-based optimization methods with different surrogates show widely divergent characteristics, so it is important to choose the most suitable algorithm for a given problem. For this purpose, four single-surrogate efficient global optimizations (SSEGOs) and an MSEGO involving four surrogates are investigated. According to numerical tests, both the SSEGOs and the MSEGO are feasible for weakly nonlinear problems. However, they are not robust for strongly nonlinear problems, especially multimodal and high-dimensional ones. Moreover, to investigate the feasibility of EGO in practice, a material identification benchmark is designed to demonstrate the performance of EGO methods. According to the tests in this study, the kriging EGO is generally the most robust method.
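The expected-improvement criterion that EGO maximizes has a well-known closed form when the surrogate predicts a Gaussian at each candidate point. A minimal sketch, using the minimization convention and our own variable names:

```python
import math

def expected_improvement(mu, sigma, f_min):
    """Closed-form EI for a surrogate predicting N(mu, sigma^2) at a
    candidate point, given the best objective value f_min observed so far:
    EI = (f_min - mu) * Phi(z) + sigma * phi(z), z = (f_min - mu) / sigma."""
    if sigma <= 0.0:
        return max(f_min - mu, 0.0)  # no predictive uncertainty left
    z = (f_min - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return (f_min - mu) * cdf + sigma * pdf

# EI rewards both a low predicted mean (exploitation) and a large
# predictive uncertainty (exploration):
print(expected_improvement(0.5, 0.1, 1.0))  # mean well below f_min
print(expected_improvement(1.2, 0.5, 1.0))  # mean worse than f_min, but uncertain
```

Because EI stays positive wherever sigma is positive, even points with a poor predicted mean can be sampled, which is exactly the behavior that makes its performance depend so strongly on the surrogate's uncertainty estimates.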
Efficient algorithms for multidimensional global optimization in genetic mapping of complex traits
Ljungberg, Kajsa; Mishchenko, Kateryna; Holmgren, Sverker
2010-01-01
We present a two-phase strategy for optimizing a multidimensional, nonconvex function arising during genetic mapping of quantitative traits. Such traits are believed to be affected by multiple so-called quantitative trait loci (QTL), and searching for d QTL results in a d-dimensional optimization problem with a large number of local optima. We combine the global algorithm DIRECT with a number of local optimization methods that accelerate the final convergence, and adapt the algorithms to problem-specific features. We also improve the evaluation of the QTL mapping objective function to enable exploitation of the smoothness properties of the optimization landscape. Our best two-phase method is demonstrated to be accurate in at least six dimensions and up to ten times faster than currently used QTL mapping algorithms. PMID:21918629
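The two-phase idea, a global exploration stage that hands its best point to a fast local refiner, can be sketched as follows. The uniform random sampler and the shrinking coordinate descent below are deliberately simple stand-ins for DIRECT and the paper's local solvers, and the test surface is illustrative rather than a QTL objective.

```python
import random

def two_phase_minimize(f, bounds, n_global=200, seed=3):
    """Toy two-phase search: phase 1 explores globally by random sampling,
    phase 2 refines the best sample with shrinking coordinate steps."""
    rng = random.Random(seed)
    dim = len(bounds)
    # Phase 1: global exploration by uniform random sampling.
    best_f, best_x = float("inf"), None
    for _ in range(n_global):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        fx = f(x)
        if fx < best_f:
            best_f, best_x = fx, x
    # Phase 2: local refinement; halve all steps when no move improves.
    step = [0.25 * (hi - lo) for lo, hi in bounds]
    while max(step) > 1e-6:
        improved = False
        for d in range(dim):
            for s in (step[d], -step[d]):
                cand = best_x[:]
                cand[d] = min(max(cand[d] + s, bounds[d][0]), bounds[d][1])
                fc = f(cand)
                if fc < best_f:
                    best_f, best_x, improved = fc, cand, True
        if not improved:
            step = [0.5 * s for s in step]
    return best_x, best_f

# Smooth 2-D test surface with its minimum at (1, -2).
def bowl(x):
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2

x, fx = two_phase_minimize(bowl, [(-5.0, 5.0), (-5.0, 5.0)])
print(x, fx)
```

The division of labor mirrors the paper's design: the global phase only needs to land in the right basin, after which the cheap local phase supplies the final accuracy.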
NASA Astrophysics Data System (ADS)
Wang, Xuewu; Shi, Yingpan; Ding, Dongyan; Gu, Xingsheng
2016-02-01
Spot-welding robots have a wide range of applications in manufacturing industries. A welding task usually contains many weld joints, and a reasonable welding path traversing these joints has a significant impact on welding efficiency. Traditional manual path planning can handle a few weld joints effectively, but when the number of weld joints is large it becomes time consuming, inefficient, and unable to guarantee an optimal path. A double-global-optimum genetic algorithm-particle swarm optimization (GA-PSO) method based on the GA and PSO algorithms is proposed to solve the welding robot path planning problem, where the shortest collision-free path is the optimization criterion. Besides algorithm effectiveness analysis and verification, the simulation results indicate that the algorithm has strong searching ability and practicality, and is suitable for welding robot path planning.
Orbit optimization of Chang'E-2 by global adjustment using images of the moon
NASA Astrophysics Data System (ADS)
Yan, Wei; Liu, Jianjun; Ren, Xin; Wang, Fenfei; Wang, Wenrui; Li, Chunlai
2015-12-01
The orbit accuracy of the Chang'E-2 (CE-2) lunar probe is one of the most critical factors for a seamless mosaic of the global lunar topographic map. During the production of the CE-2 global lunar topographic map, a maximum deviation of kilometer magnitude existed in the horizontal direction between homologous points of neighboring images, while the maximum height deviation of these points reached several hundred meters. This phenomenon indicates that the current orbit determination results of CE-2 cannot truly reflect the relative position relationship between the probe and lunar surface features. Against this background, global adjustment using images of the moon should be carried out to solve this problem. In this paper, the influence of the current CE-2 orbit accuracy on the production of a global lunar topographic map is analyzed, based on an introduction to the CE-2 observation data, including images and orbit data. Additionally, key technologies and technical processes of global adjustment using high-resolution, large-volume CE-2 images are researched. Finally, the orbit optimization of CE-2 after global adjustment is analyzed, and the accuracy of the resulting CE-2 global lunar topographic map is assessed for validation.
Global Optimization of Interplanetary Trajectories in the Presence of Realistic Mission Constraints
NASA Technical Reports Server (NTRS)
Hinckley, David, Jr.; Englander, Jacob; Hitt, Darren
2015-01-01
Interplanetary missions are often subject to difficult constraints, such as the solar phase angle upon arrival at the destination, the arrival velocity, and flyby altitudes. Preliminary design of such missions is often conducted by solving the unconstrained problem and then filtering away solutions which do not naturally satisfy the constraints. However, this can bias the search into non-advantageous regions of the solution space, so it can be better to conduct preliminary design with the full set of constraints imposed. In this work two stochastic global search methods are developed which are well suited to the constrained global interplanetary trajectory optimization problem.
NASA Astrophysics Data System (ADS)
Igeta, Hideki; Hasegawa, Mikio
Chaotic dynamics have been effectively applied to improve various heuristic algorithms for combinatorial optimization problems in many studies. Currently, the most widely used chaotic optimization scheme is to drive heuristic solution search algorithms applicable to large-scale problems by chaotic neurodynamics, which includes the tabu effect of the tabu search. Alternatively, meta-heuristic algorithms perform combinatorial optimization by combining a neighboring-solution search algorithm, such as tabu, gradient, or another search method, with a global search algorithm, such as genetic algorithms (GA), ant colony optimization (ACO), or others. Among these hybrid approaches, ACO has effectively optimized the solutions of many benchmark problems in the quadratic assignment problem library. In this paper, we propose a novel hybrid method that combines a chaotic search algorithm, which has better performance than the tabu search, with global search algorithms such as ACO and GA. Our results show that the proposed chaotic hybrid algorithm has better performance than the conventional chaotic search and conventional hybrid algorithms. In addition, we show that the chaotic search algorithm combined with ACO performs better than when combined with GA.
NASA Astrophysics Data System (ADS)
Kanazaki, Masahiro; Matsuno, Takashi; Maeda, Kengo; Kawazoe, Hiromitsu
2015-09-01
A kriging-based genetic algorithm called efficient global optimization (EGO) was employed to optimize the operating-condition parameters of plasma actuators. The aerodynamic performance was evaluated by wind tunnel testing to overcome the disadvantages of time-consuming numerical simulations. The proposed system was applied to two design problems for the power supply of a plasma actuator. The first case was the drag minimization problem around a semicircular cylinder; here, the inhibitory effect on flow separation was also observed. The second case was the lift maximization problem around a circular cylinder. This case is similar to aerofoil design, because a circular cylinder can act as an aerofoil when the plasma actuators, described by four design parameters, control the flow circulation; it also served to investigate applicability to multi-variate design problems. Based on these results, optimum designs and global design information were obtained while drastically reducing the number of experiments required compared to a full factorial experiment.
Lens Design: An Attempt to Use `Escape Function' as a Tool in Global Optimization
NASA Astrophysics Data System (ADS)
Isshiki, Masaki; Ono, Hiroki; Nakadate, Suezou
1995-01-01
In designing lenses with the damped least squares method, the solution obtained by the optimization routine is a local minimum of the merit function. To get out of this minimum and seek a different solution, we propose to use an ‘escape function’ as an additional controlled operand of the lens system. Experiments were made on simple models of the merit function and the advantage of this technique was ascertained. We also implemented this algorithm in OSLO SIX (lens design software by Sinclair Optics) by means of CCL (C-compatible language) and applied it to actual lens design. Experiments convinced us that the method is an effective tool for global optimization.
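The escape-function idea can be illustrated on a one-dimensional merit function with two basins: after a local search settles in one minimum, a Gaussian bump added at that minimum raises the basin so a second search slides into the other one. The bump shape, the toy merit function, and the naive descent routine below are all our illustrative assumptions, not the authors' formulation.

```python
import math

def escape(x, x_loc, height=4.0, width=0.5):
    """Gaussian 'escape function' centred on the local minimum found so far;
    adding it to the merit function lifts that basin (constants illustrative)."""
    return height * math.exp(-((x - x_loc) ** 2) / (2.0 * width ** 2))

def local_min(f, x0, step=0.5, tol=1e-6):
    """Naive 1-D descent: move to the better neighbour, halve the step when
    neither neighbour improves (a stand-in for damped least squares)."""
    x, fx = x0, f(x0)
    while step > tol:
        fc, xc = min((f(x + s), x + s) for s in (step, -step))
        if fc < fx:
            x, fx = xc, fc
        else:
            step *= 0.5
    return x

# Toy merit function with two minima; the basin near x = -1 is deeper.
def merit(x):
    return (x * x - 1.0) ** 2 + 0.2 * x

x1 = local_min(merit, 0.9)                              # trapped near +1
x2 = local_min(lambda x: merit(x) + escape(x, x1), x1)  # escapes toward -1
print(x1, x2)
```

For the escape to work, the bump's curvature at its centre must exceed the merit function's there (roughly height/width^2 large enough); otherwise the lifted point remains a local minimum and the optimizer stays put.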
The fully actuated traffic control problem solved by global optimization and complementarity
NASA Astrophysics Data System (ADS)
Ribeiro, Isabel M.; de Lurdes de Oliveira Simões, Maria
2016-02-01
Global optimization and complementarity are used to determine the signal timing for fully actuated traffic control, regarding effective green and red times in each cycle. The average values of these parameters can be used to estimate the control delay of vehicles. In this article, a two-phase queuing system for a signalized intersection is outlined, based on the principle of minimizing the total waiting time of the vehicles. The underlying model results in a linear program with linear complementarity constraints, solved by a sequential complementarity algorithm. Departure rates of vehicles during green and yellow periods were treated as deterministic, while arrival rates were assumed to follow a Poisson distribution. Several traffic scenarios were created and solved. The numerical results reveal that it is possible to use global optimization and complementarity over a reasonable number of cycles and to determine effective green and red times efficiently for a signalized intersection.
Guo, Y C; Wang, H; Wu, H P; Zhang, M Q
2015-01-01
To address the defects of large mean square error (MSE) and slow convergence speed of the constant modulus algorithm (CMA) when equalizing multi-modulus signals, a multi-modulus algorithm (MMA) based on global artificial fish swarm (GAFS) intelligent optimization of DNA encoding sequences (GAFS-DNA-MMA) was proposed. To improve the convergence rate and reduce the MSE, the proposed algorithm adopted an encoding method based on DNA nucleotide chains to provide a possible solution to the problem. Furthermore, the GAFS algorithm, with its fast convergence and global search ability, was used to find the best sequence. The real and imaginary parts of the initial optimal weight vector of the MMA were obtained through DNA coding of the best sequence. The simulation results show that the proposed algorithm has a faster convergence speed and smaller MSE in comparison with the CMA, the MMA, and the AFS-DNA-MMA. PMID:26782395
NASA Astrophysics Data System (ADS)
de Pascale, P.; Vasile, M.; Casotto, S.
The design of interplanetary trajectories requires the solution of an optimization problem, which has traditionally been solved by resorting to various local optimization techniques. All such approaches, apart from the specific method employed (direct or indirect), require an initial guess, which deeply influences the convergence to the optimal solution. The recent developments in low-thrust propulsion have widened the perspectives of exploration of the Solar System, while at the same time increasing the difficulty of the trajectory design process. Continuous-thrust transfers, typically characterized by multiple spiraling arcs, have a broad number of design parameters, and thanks to the flexibility offered by such engines, they typically turn out to have a multi-modal domain with a consequently larger number of optimal solutions. Thus the definition of first guesses is even more challenging, particularly for a broad search over the design parameters, and it requires an extensive investigation of the domain in order to locate the largest number of optimal candidate solutions and possibly the global optimal one. In this paper a tool for the preliminary definition of interplanetary transfers with coast-thrust arcs and multiple swing-bys is presented. This goal is achieved by combining a novel methodology for the description of low-thrust arcs with a global optimization algorithm based on a hybridization of an evolutionary step and a deterministic step. Low-thrust arcs are described in a 3D model, in order to account for the beneficial effects of low-thrust propulsion on changes of inclination, resorting to a new methodology based on an inverse method. The two-point boundary value problem (TPBVP) associated with a thrust arc is solved by imposing a properly parameterized evolution of the orbital parameters, by which the acceleration required to follow the given trajectory with respect to the constraints set is obtained simply through
Multifactorial global search algorithm in the problem of optimizing a reactive force field
NASA Astrophysics Data System (ADS)
Stepanova, M. M.; Shefov, K. S.; Slavyanov, S. Yu.
2016-04-01
We present a new multifactorial global search algorithm (MGSA) and check the operability of the algorithm on the Michalewicz and Rastrigin functions. We discuss the choice of an objective function and additional search criteria in the context of the problem of reactive force field (ReaxFF) optimization and study the ranking of the ReaxFF parameters together with their impact on the objective function.
Ringed Seal Search for Global Optimization via a Sensitive Search Model.
Saadi, Younes; Yanto, Iwan Tri Riyadi; Herawan, Tutut; Balakrishnan, Vimala; Chiroma, Haruna; Risnumawan, Anhar
2016-01-01
The efficiency of a metaheuristic algorithm for global optimization is based on its ability to search and find the global optimum. However, a good search often requires a balance between exploration and exploitation of the search space. In this paper, a new metaheuristic algorithm called Ringed Seal Search (RSS) is introduced. It is inspired by the natural behavior of the seal pup. The algorithm mimics the seal pup's movement behavior and its ability to search for and choose the best lair to escape predators. The scenario starts once the seal mother gives birth to a new pup in a birthing lair that is constructed for this purpose. The seal pup strategy consists of searching for and selecting the best lair by performing a random walk to find a new lair. Affected by the sensitive nature of seals to external noise emitted by predators, the random walk of the seal pup takes two different search states, a normal state and an urgent state. In the normal state, the pup performs an intensive search between closely adjacent lairs; this movement is modeled via a Brownian walk. In the urgent state, the pup leaves the proximity area and performs an extensive search to find a new lair among sparse targets; this movement is modeled via a Lévy walk. The switch between these two states is triggered by the random noise emitted by predators. The algorithm keeps switching between normal and urgent states until the global optimum is reached. Tests and validations were performed using fifteen benchmark test functions to compare the performance of RSS with other baseline algorithms. The results show that RSS is more efficient than Genetic Algorithm, Particle Swarm Optimization and Cuckoo Search in terms of convergence rate to the global optimum. The RSS shows an improvement in terms of balance between exploration (extensive) and exploitation (intensive) of the search space. The RSS can efficiently mimic seal pups behavior to find best lair and provide a new algorithm to be used in global
Ringed Seal Search for Global Optimization via a Sensitive Search Model
Saadi, Younes; Yanto, Iwan Tri Riyadi; Herawan, Tutut; Balakrishnan, Vimala; Chiroma, Haruna; Risnumawan, Anhar
2016-01-01
The efficiency of a metaheuristic algorithm for global optimization is based on its ability to search and find the global optimum. However, a good search often requires a balance between exploration and exploitation of the search space. In this paper, a new metaheuristic algorithm called Ringed Seal Search (RSS) is introduced. It is inspired by the natural behavior of the seal pup. The algorithm mimics the seal pup's movement behavior and its ability to search for and choose the best lair to escape predators. The scenario starts once the seal mother gives birth to a new pup in a birthing lair that is constructed for this purpose. The seal pup strategy consists of searching for and selecting the best lair by performing a random walk to find a new lair. Affected by the sensitive nature of seals to external noise emitted by predators, the random walk of the seal pup takes two different search states, a normal state and an urgent state. In the normal state, the pup performs an intensive search between closely adjacent lairs; this movement is modeled via a Brownian walk. In the urgent state, the pup leaves the proximity area and performs an extensive search to find a new lair among sparse targets; this movement is modeled via a Lévy walk. The switch between these two states is triggered by the random noise emitted by predators. The algorithm keeps switching between normal and urgent states until the global optimum is reached. Tests and validations were performed using fifteen benchmark test functions to compare the performance of RSS with other baseline algorithms. The results show that RSS is more efficient than Genetic Algorithm, Particle Swarm Optimization and Cuckoo Search in terms of convergence rate to the global optimum. The RSS shows an improvement in terms of balance between exploration (extensive) and exploitation (intensive) of the search space. The RSS can efficiently mimic seal pups behavior to find best lair and provide a new algorithm to be used in global
Algorithms for optimized maximum entropy and diagnostic tools for analytic continuation
NASA Astrophysics Data System (ADS)
Bergeron, Dominic; Tremblay, A.-M. S.
2016-08-01
Analytic continuation of numerical data obtained in imaginary time or frequency has become an essential part of many branches of quantum computational physics. It is, however, an ill-conditioned procedure and thus a hard numerical problem. The maximum-entropy approach, based on Bayesian inference, is the most widely used method to tackle that problem. Although the approach is well established and among the most reliable and efficient ones, useful developments of the method and of its implementation are still possible. In addition, while a few free software implementations are available, a well-documented, optimized, general purpose, and user-friendly software dedicated to that specific task is still lacking. Here we analyze all aspects of the implementation that are critical for accuracy and speed and present a highly optimized approach to maximum entropy. Original algorithmic and conceptual contributions include (1) numerical approximations that yield a computational complexity that is almost independent of temperature and spectrum shape (including sharp Drude peaks in broad background, for example) while ensuring quantitative accuracy of the result whenever precision of the data is sufficient, (2) a robust method of choosing the entropy weight α that follows from a simple consistency condition of the approach and the observation that information- and noise-fitting regimes can be identified clearly from the behavior of χ2 with respect to α , and (3) several diagnostics to assess the reliability of the result. Benchmarks with test spectral functions of different complexity and an example with an actual physical simulation are presented. Our implementation, which covers most typical cases for fermions, bosons, and response functions, is available as an open source, user-friendly software.
Economic optimization of a global strategy to address the pandemic threat.
Pike, Jamison; Bogich, Tiffany; Elwood, Sarah; Finnoff, David C; Daszak, Peter
2014-12-30
Emerging pandemics threaten global health and economies and are increasing in frequency. Globally coordinated strategies to combat pandemics, similar to current strategies that address climate change, are largely adaptive, in that they attempt to reduce the impact of a pathogen after it has emerged. However, as with climate change, mitigation strategies have been developed that include programs to reduce the underlying drivers of pandemics, particularly animal-to-human disease transmission. Here, we use real options economic modeling to analyze current globally coordinated adaptation strategies for pandemic prevention. We show that they would be optimally implemented within 27 y to reduce the annual rise of emerging infectious disease events by 50% at an estimated one-time cost of approximately $343.7 billion. We then analyze World Bank data on multilateral "One Health" pandemic mitigation programs. We find that, because most pandemics have animal origins, mitigation is a more cost-effective policy than business-as-usual adaptation programs, saving between $344.0 billion and $360.3 billion over the next 100 y if implemented today. We conclude that globally coordinated pandemic prevention policies need to be enacted urgently to be optimally effective and that strategies to mitigate pandemics by reducing the impact of their underlying drivers are likely to be more effective than business as usual. PMID:25512538
NASA Technical Reports Server (NTRS)
Jaunky, N.; Ambur, D. R.; Knight, N. F., Jr.
1998-01-01
A design strategy for optimal design of composite grid-stiffened cylinders subjected to global and local buckling constraints and strength constraints was developed using a discrete optimizer based on a genetic algorithm. An improved smeared stiffener theory was used for the global analysis. Local buckling of skin segments was assessed using a Rayleigh-Ritz method that accounts for material anisotropy. The local buckling of stiffener segments was also assessed. Constraints on the axial membrane strain in the skin and stiffener segments were imposed to include strength criteria in the grid-stiffened cylinder design. Design variables used in this study were the axial and transverse stiffener spacings, stiffener height and thickness, skin laminate stacking sequence and stiffening configuration, where stiffening configuration is a design variable that indicates the combination of axial, transverse and diagonal stiffeners in the grid-stiffened cylinder. The design optimization process was adapted to identify the best suited stiffening configurations and stiffener spacings for grid-stiffened composite cylinders with the length and radius of the cylinder, the design in-plane loads and material properties as inputs. The effect of having axial membrane strain constraints in the skin and stiffener segments in the optimization process is also studied for selected stiffening configurations.
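A discrete optimizer of the kind described — a genetic algorithm searching over discrete design variables such as stiffening configuration codes — can be sketched generically. The tournament selection, one-point crossover, and mutation rates below are illustrative assumptions, and the toy objective is a stand-in for the buckling/strength analysis, not the paper's implementation:

```python
import random

def ga_minimize(fitness, n_genes, gene_vals, pop_size=30, gens=60,
                p_mut=0.1, seed=0):
    """Bare-bones genetic algorithm over discrete design vectors
    (tournament selection, one-point crossover, random-reset mutation)."""
    random.seed(seed)
    pop = [[random.choice(gene_vals) for _ in range(n_genes)]
           for _ in range(pop_size)]
    for _ in range(gens):
        def tourney():
            a, b = random.sample(pop, 2)
            return a if fitness(a) < fitness(b) else b
        nxt = [min(pop, key=fitness)]                 # elitism: keep the best
        while len(nxt) < pop_size:
            p1, p2 = tourney(), tourney()
            cut = random.randrange(1, n_genes)        # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [random.choice(gene_vals) if random.random() < p_mut else g
                     for g in child]
            nxt.append(child)
        pop = nxt
    return min(pop, key=fitness)

# Toy stand-in objective: distance to a preferred discrete design vector.
target = [2, 0, 3, 1, 2, 3]
best = ga_minimize(lambda x: sum(a != b for a, b in zip(x, target)),
                   n_genes=6, gene_vals=[0, 1, 2, 3])
```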
NASA Technical Reports Server (NTRS)
Jaunky, Navin; Knight, Norman F., Jr.; Ambur, Damodar R.
1998-01-01
A design strategy for optimal design of composite grid-stiffened cylinders subjected to global and local buckling constraints and strength constraints is developed using a discrete optimizer based on a genetic algorithm. An improved smeared stiffener theory is used for the global analysis. Local buckling of skin segments is assessed using a Rayleigh-Ritz method that accounts for material anisotropy. The local buckling of stiffener segments is also assessed. Constraints on the axial membrane strain in the skin and stiffener segments are imposed to include strength criteria in the grid-stiffened cylinder design. Design variables used in this study are the axial and transverse stiffener spacings, stiffener height and thickness, skin laminate stacking sequence, and stiffening configuration, where the stiffening configuration is a design variable that indicates the combination of axial, transverse, and diagonal stiffeners in the grid-stiffened cylinder. The design optimization process is adapted to identify the best suited stiffening configurations and stiffener spacings for grid-stiffened composite cylinders with the length and radius of the cylinder, the design in-plane loads, and material properties as inputs. The effect of having axial membrane strain constraints in the skin and stiffener segments in the optimization process is also studied for selected stiffening configurations.
NASA Astrophysics Data System (ADS)
Shoemaker, Christine; Espinet, Antoine; Pang, Min
2015-04-01
Models of complex environmental systems can be computationally expensive because they must describe the dynamic interactions of many components over a sizeable time period. Diagnostics of these systems can include forward simulations of calibrated models under uncertainty and analysis of alternatives for systems management. This discussion will focus on applications of new surrogate optimization and uncertainty analysis methods to environmental models that can enhance our ability to extract information and understanding. For complex models, optimization and especially uncertainty analysis can require a large number of model simulations, which is not feasible for computationally expensive models. Surrogate response surfaces can be used in Global Optimization and Uncertainty methods to obtain accurate answers with far fewer model evaluations, which makes these methods practical for computationally expensive models for which conventional methods are not feasible. In this paper we will discuss the application of the SOARS surrogate method for estimating Bayesian posterior density functions for model parameters for a TOUGH2 model of geologic carbon sequestration. We will also briefly discuss a new parallel surrogate global optimization algorithm, applied to two groundwater remediation sites, that was implemented on a supercomputer with up to 64 processors. The applications will illustrate the use of these methods to predict the impact of monitoring and management on subsurface contaminants.
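The core surrogate-optimization loop — fit a cheap response surface to past evaluations, let it propose the next point, evaluate the expensive model only there, refit — can be sketched with a Gaussian radial-basis-function surrogate. This is a generic, purely exploitative sketch (real methods such as SOARS also balance exploration), not the SOARS implementation:

```python
import numpy as np

def rbf_fit(X, y, eps=1.0):
    # Gaussian RBF interpolant of the evaluated points; a generic surrogate.
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    Phi = np.exp(-(eps * r) ** 2)
    w = np.linalg.solve(Phi + 1e-10 * np.eye(len(X)), y)   # jitter for stability
    return lambda x: np.exp(-(eps * np.linalg.norm(X - x, axis=-1)) ** 2) @ w

def surrogate_minimize(f, bounds, n_init=8, n_iter=20, n_cand=500, seed=0):
    """Evaluate the expensive f sparingly: fit a surrogate to past evaluations,
    evaluate f at the candidate the surrogate predicts best, and refit."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    X = rng.uniform(lo, hi, size=(n_init, len(bounds)))
    y = np.array([f(x) for x in X])
    for _ in range(n_iter):
        s = rbf_fit(X, y)
        cand = rng.uniform(lo, hi, size=(n_cand, len(bounds)))
        x_new = cand[np.argmin([s(c) for c in cand])]      # surrogate is cheap
        X = np.vstack([X, x_new])
        y = np.append(y, f(x_new))                         # one expensive call
    return X[np.argmin(y)], y.min()

# Usage on a cheap stand-in for an expensive simulation model:
f = lambda x: np.sum((x - 0.3) ** 2)
x_best, f_best = surrogate_minimize(f, bounds=[(-2, 2), (-2, 2)])
```

Only 28 calls to `f` are made in total, which is the point when each call is an hours-long simulation.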
SU-E-J-130: Automating Liver Segmentation Via Combined Global and Local Optimization
Li, Dengwang; Wang, Jie; Kapp, Daniel S.; Xing, Lei
2015-06-15
Purpose: The aim of this work is to develop a robust algorithm for accurate segmentation of the liver with special attention paid to the problems of fuzzy edges and tumors. Methods: 200 CT images were collected from a radiotherapy treatment planning system. 150 datasets were selected as the panel data for the shape dictionary and parameter estimation. The remaining 50 datasets were used as test images. In our study liver segmentation was formulated as an optimization process of an implicit function. The liver region was optimized via local and global optimization during iterations. Our method consists of five steps: 1) The livers from the panel data were segmented manually by physicians, and we then estimated the parameters of GMM (Gaussian mixture model) and MRF (Markov random field). A shape dictionary was built by utilizing the 3D liver shapes. 2) The outlines of chest and abdomen were located according to rib structure in the input images, and the liver region was initialized based on GMM. 3) The liver shape for each 2D slice was adjusted using MRF within the neighborhood of the liver edge for local optimization. 4) The 3D liver shape was corrected by employing SSR (sparse shape representation) based on the liver shape dictionary for global optimization. Furthermore, H-PSO (Hybrid Particle Swarm Optimization) was employed to solve the SSR equation. 5) The corrected 3D liver was divided into 2D slices as input data for the third step. The iteration was repeated between the local optimization and global optimization until it satisfied the stopping conditions (maximum iterations and changing rate). Results: The experiments indicated that our method performed well even for CT images with fuzzy edges and tumors. Compared with physician-delineated results, the segmentation accuracy with the 50 test datasets (VOE, volume overlap percentage) was on average 91%–95%. Conclusion: The proposed automatic segmentation method provides a sensible technique for segmentation of CT images. This work is
Quantifying and visualizing variations in sets of images using continuous linear optimal transport
NASA Astrophysics Data System (ADS)
Kolouri, Soheil; Rohde, Gustavo K.
2014-03-01
Modern advancements in imaging devices have enabled us to explore the subcellular structure of living organisms and extract vast amounts of information. However, interpreting the biological information mined in the captured images is not a trivial task. Utilizing predetermined numerical features is usually the only hope for quantifying this information. Nonetheless, direct visual or biological interpretation of results obtained from these selected features is non-intuitive and difficult. In this paper, we describe an automatic method for modeling visual variations in a set of images, which allows for direct visual interpretation of the most significant differences, without the need for predefined features. The method is based on a linearized version of the continuous optimal transport (OT) metric, which provides a natural linear embedding for the image data set, in which a linear combination of images leads to a visually meaningful image. This enables us to apply linear geometric data analysis techniques such as principal component analysis and linear discriminant analysis in the linearly embedded space and visualize the most prominent modes, as well as the most discriminant modes of variations, in the dataset. Using the continuous OT framework, we are able to analyze variations in shape and texture in a set of images utilizing each image at full resolution, which otherwise cannot be done by existing methods. The proposed method is applied to a set of nuclei images segmented from Feulgen stained liver tissues in order to investigate the major visual differences in chromatin distribution of Fetal-Type Hepatoblastoma (FHB) cells compared to the normal cells.
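The linear-embedding idea has a particularly simple 1-D analogue: representing each distribution by its quantile function makes the Euclidean distance between embeddings equal the Wasserstein-2 (OT) distance, so PCA/LDA become meaningful in the embedded space. The sketch below is this toy 1-D stand-in for the paper's continuous-OT embedding of full images, with illustrative Gaussian sample sets:

```python
import numpy as np

def lot_embed(sample_sets, grid_size=200):
    """Embed each 1-D sample set as its quantile function on a fixed grid.
    In 1-D, the L2 distance between quantile functions equals the
    Wasserstein-2 distance, so linear methods apply in this embedding."""
    q = (np.arange(grid_size) + 0.5) / grid_size
    return np.array([np.quantile(s, q) for s in sample_sets])

rng = np.random.default_rng(0)
sets = [rng.normal(mu, 1.0, 4000) for mu in (0.0, 0.5, 3.0)]  # shifted Gaussians
emb = lot_embed(sets)

# Embedding distances track W2: shifting a Gaussian by mu costs exactly mu.
d01 = np.sqrt(np.mean((emb[0] - emb[1]) ** 2))
d02 = np.sqrt(np.mean((emb[0] - emb[2]) ** 2))
```

Linear combinations of rows of `emb` correspond to interpolated (displacement-averaged) distributions, which is what makes PCA modes in this space visually interpretable.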
2014-01-01
Background Optimization is the key to solving many problems in computational biology. Global optimization methods, which provide a robust methodology, and metaheuristics in particular have proven to be the most efficient methods for many applications. Despite their utility, there is a limited availability of metaheuristic tools. Results We present MEIGO, an R and Matlab optimization toolbox (also available in Python via a wrapper of the R version), that implements metaheuristics capable of solving diverse problems arising in systems biology and bioinformatics. The toolbox includes the enhanced scatter search method (eSS) for continuous nonlinear programming (cNLP) and mixed-integer nonlinear programming (MINLP) problems, and variable neighborhood search (VNS) for Integer Programming (IP) problems. Additionally, the R version includes BayesFit for parameter estimation by Bayesian inference. The eSS and VNS methods can be run on a single thread or in parallel using a cooperative strategy. The code is supplied under GPLv3 and is available at http://www.iim.csic.es/~gingproc/meigo.html. Documentation and examples are included. The R package has been submitted to BioConductor. We evaluate MEIGO against optimization benchmarks, and illustrate its applicability to a series of case studies in bioinformatics and systems biology where it outperforms other state-of-the-art methods. Conclusions MEIGO provides a free, open-source platform for optimization that can be applied to multiple domains of systems biology and bioinformatics. It includes efficient state-of-the-art metaheuristics, and its open and modular structure allows the addition of further methods. PMID:24885957
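The variable neighborhood search (VNS) idea mentioned above — alternate local search with "shaking" perturbations of growing size, recentering whenever an improvement is found — can be sketched generically for integer decision vectors. This is a bare-bones illustration of the VNS scheme, not MEIGO's API or implementation:

```python
import random

def vns_minimize(f, x0, k_max=3, iters=200, seed=0):
    """Bare-bones variable neighborhood search over integer vectors."""
    random.seed(seed)

    def shake(x, k):
        y = list(x)
        for _ in range(k):                       # perturb k coordinates
            i = random.randrange(len(y))
            y[i] += random.choice((-1, 1))
        return y

    def local_search(x):
        improved = True
        while improved:                          # +/-1 coordinate descent
            improved = False
            for i in range(len(x)):
                for d in (-1, 1):
                    y = list(x)
                    y[i] += d
                    if f(y) < f(x):
                        x, improved = y, True
        return x

    best = local_search(x0)
    for _ in range(iters):
        k = 1
        while k <= k_max:
            cand = local_search(shake(best, k))
            if f(cand) < f(best):
                best, k = cand, 1                # recenter, restart neighborhoods
            else:
                k += 1                           # escalate the shaking size
    return best

# Usage: a toy separable integer objective with optimum at (3, -2).
best = vns_minimize(lambda x: (x[0] - 3) ** 2 + (x[1] + 2) ** 2, [0, 0])
```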
Tian, Chao; Liu, Shengchun
2016-02-22
We propose a simple and robust phase demodulation algorithm for two-shot fringe patterns with random phase shifts. Based on a smoothness assumption, the phase to be recovered is decomposed into a linear combination of finite terms of orthogonal polynomials, and the expansion coefficients and the phase shift are exhaustively searched through global optimization. The technique is insensitive to noise or defects, and is capable of retrieving phase from low fringe-number (less than one) or low-frequency interferograms. It can also cope with interferograms with very small phase shifts. The retrieved phase is continuous and no further phase unwrapping process is required. The method is expected to be promising for processing interferograms with regular fringes, which are common in optical shop testing. Computer simulation and experimental results are presented to demonstrate the performance of the algorithm. PMID:26906984
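The two-shot principle can be illustrated in a much-simplified setting: with normalized patterns i1 = cos(φ) and i2 = cos(φ + δ), each candidate shift δ implies a sin(φ) estimate, and the true δ is the one for which sin² + cos² stays closest to 1. The sketch below assumes noiseless, normalized fringes and a phase within (−π, π); the paper additionally models background/modulation with orthogonal polynomials and searches their coefficients globally:

```python
import numpy as np

def demodulate_two_shot(i1, i2, deltas=np.linspace(0.05, np.pi - 0.05, 400)):
    """Recover phase from two normalized fringe patterns with unknown shift:
    for each candidate delta, solve sin(phi) = (i1*cos(delta) - i2)/sin(delta)
    and keep the delta minimizing the deviation of sin^2 + cos^2 from 1."""
    def residual(d):
        s = (i1 * np.cos(d) - i2) / np.sin(d)     # implied sin(phi) for shift d
        return np.mean((s ** 2 + i1 ** 2 - 1.0) ** 2)
    d_best = min(deltas, key=residual)
    s = (i1 * np.cos(d_best) - i2) / np.sin(d_best)
    return np.arctan2(s, i1), d_best              # continuous phase, no unwrapping

# Simulated low-frequency interferogram with a small, smooth phase.
x = np.linspace(-1, 1, 512)
phi = 2.0 * x ** 2 + 0.5 * x
delta_true = 0.7
phase, delta = demodulate_two_shot(np.cos(phi), np.cos(phi + delta_true))
```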
Perera, Marlon; Lawrentschuk, Nathan; Romanic, Diana; Papa, Nathan; Bolton, Damien
2015-01-01
Background Journal clubs are an essential tool in promoting clinical evidence-based medical education to all medical and allied health professionals. Twitter represents a public, microblogging forum that can facilitate traditional journal club requirements, while also reaching a global audience and enabling participation in discussion with study authors and colleagues. Objective The aim of the current study was to evaluate the current state of social media–facilitated journal clubs, specifically Twitter, as an example of continuing professional development. Methods A systematic review of literature databases (Medline, Embase, CINAHL, Web of Science, ERIC via ProQuest) was performed according to Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. A systematic search of Twitter, the followers of identified journal clubs, and Symplur was also performed. Demographic and monthly tweet data were extracted from Twitter and Symplur. All manuscripts related to Twitter-based journal clubs were included. Statistical analyses were performed in MS Excel and STATA. Results From a total of 469 citations, 11 manuscripts were included and referred to five Twitter-based journal clubs (#ALiEMJC, #BlueJC, #ebnjc, #urojc, #meded). A Twitter-based journal club search yielded 34 potential hashtags/accounts, of which 24 were included in the final analysis. The median duration of activity was 11.75 (interquartile range [IQR] 19.9, SD 10.9) months, with 7 now inactive. The median number of followers and participants was 374 (IQR 574) and 157 (IQR 272), respectively. An overall increase in the establishment of active Twitter-based journal clubs was observed, resulting in an exponential increase in total cumulative tweets (R²=.98), and tweets per month (R²=.72). Cumulative tweets for specific journal clubs increased linearly, with @ADC_JC, @EBNursingBMJ, @igsjc, @iurojc, and @NephJC showing the greatest rate of change, as well as total impressions per month since
MODIS-VIIRS Continuity: The Impact of Spatial Sampling on Global Land (Level-2) Products
NASA Astrophysics Data System (ADS)
Pahlevan, N.; Devadiga, S.; Lin, G.; Wolfe, R. E.; Roman, M. O.; Xiong, X.
2014-12-01
The Visible Infrared Imaging Radiometer Suite (VIIRS) onboard Suomi-NPP (S-NPP) has been providing daily global observations of the Earth surface since early 2012. With the decade-long observations made by MODIS onboard Terra and Aqua, one of the goals of the S-NPP mission is to provide continuity in producing land products that have been generated using heritage MODIS observations. The Land Data Operational Products Evaluation (LDOPE) team uses MODIS-derived products to evaluate land products obtained from VIIRS top-of-atmosphere (TOA) measurements generated through the Land Product Evaluation and Analysis Tool Element (LPEATE). However, due to inherent differences in their observation methods and the corresponding algorithms and post-processing techniques, the products generated from MODIS and VIIRS retain some discrepancies. Amongst all the differences between the two wide-swath radiometers, this study aims at analyzing the impact of differences in the corresponding spatial sampling. In particular, the VIIRS unique sampling scheme can introduce relative biases when comparing products (or observations) obtained from the two sensors. We use Landsat-8's Operational Land Imager (Level-1T data) scenes acquired within a set of 10 × 10 degree tiles, i.e., the "Golden tiles" (used for evaluation purposes by LDOPE), to examine how the discrepancies in the spatial responses manifest in measured radiances on a daily basis (for 16 days). The (band-detector averaged) prelaunch Line Spread Functions (LSFs) were used to represent spatial responses for each sensor. Although the impact of differences in sensors' spatial responses depends heavily on the spatial heterogeneity of a region-of-interest, the initial results, on average, indicate up to 0.8% and 5% difference (at the swath level) in the TOA radiances and TOA-based NDVI, respectively. The disparity (calculated for three sample scenes collected over the Golden sites) differs for different days (orbital configurations) and for
On the proper treatment of grid sensitivities in continuous adjoint methods for shape optimization
NASA Astrophysics Data System (ADS)
Kavvadias, I. S.; Papoutsis-Kiachagias, E. M.; Giannakoglou, K. C.
2015-11-01
The continuous adjoint method for shape optimization problems, in flows governed by the Navier-Stokes equations, can be formulated in two different ways, each of which leads to a different expression for the sensitivity derivatives of the objective function with respect to the control variables. The first formulation leads to an expression including only boundary integrals; it, thus, has low computational cost but, when used with coarse grids, its accuracy becomes questionable. The second formulation comprises a sum of boundary and field integrals; due to the field integrals, it has noticeably higher computational cost, though obtaining higher accuracy. In this paper, the equivalence of the two formulations is revisited from the mathematical and, particularly, the numerical point of view. Internal and external aerodynamics cases, in which the objective function is either the total pressure losses or the force exerted on a solid body, are examined and differences in the computed gradients are discussed. After identifying the reason behind these discrepancies, the adjoint formulation is enhanced by the adjoint to a (hypothetical) grid displacement model and the new approach is proved to reproduce the accuracy of the second adjoint formulation while maintaining the low cost of the first one.
Jerish Joyner, J; Yadav, B K
2015-12-01
Black gram kernels with three initial moisture contents (10, 14 & 18 % w.b.) were steam treated in a continuous steaming unit at three inlet steam pressures (2, 3 & 4 kg/cm²) for three grain residence times (2, 4 & 6 min) in order to determine the best treatment condition for maximizing the dhal yield while limiting the colour change to an acceptable range. The dhal yield, dehulling loss and the colour difference (ΔE*) of the dehulled dhal were found to vary respectively from 56.4 to 78.8 %, 30.8 to 8.6 % and 2.1 to 9.5 with increased severity of treatment. Optimization was done in order to obtain higher dhal yield while limiting the colour difference (ΔE*) within the acceptable range, i.e., 2.0 to 3.5, using response surface methodology. The best condition was obtained with the samples having 13.1 % initial moisture treated at 4 kg/cm² for about 6 min to achieve a dhal yield of 71.2 % and dehulling loss of 15.5 %. PMID:26604354
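Response surface methodology of the kind used above boils down to fitting a second-order polynomial to the factorial data and optimizing the fitted surface over the experimental region. The sketch below is a generic RSM illustration; the toy "yield" function and all its coefficients are invented stand-ins, not the paper's measurements:

```python
import numpy as np
from itertools import product

def fit_quadratic(X, y):
    """Least-squares second-order response surface:
    y ~ b0 + sum_i b_i x_i + sum_{i<=j} b_ij x_i x_j."""
    def features(x):
        f = [1.0] + list(x)
        f += [x[i] * x[j] for i in range(len(x)) for j in range(i, len(x))]
        return f
    A = np.array([features(x) for x in X])
    b, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda x: np.array(features(x)) @ b

# Full-factorial design over (moisture % w.b., steam pressure kg/cm2, time min).
X = np.array(list(product((10, 14, 18), (2, 3, 4), (2, 4, 6))), dtype=float)
true = lambda x: 75 - 0.5 * (x[0] - 13) ** 2 + 3 * x[1] + 1.5 * x[2] - 0.2 * x[1] * x[2]
y = np.array([true(x) for x in X])          # toy "dhal yield" responses

model = fit_quadratic(X, y)
# Maximize the predicted yield over a fine grid inside the experimental region.
grid = list(product(np.linspace(10, 18, 33), (2, 3, 4), np.linspace(2, 6, 17)))
best = max(grid, key=lambda x: model(np.array(x)))
```

A constrained version would restrict the grid to points whose predicted ΔE* model stays within the acceptable range before maximizing yield.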
NASA Astrophysics Data System (ADS)
Eastes, R.; Andersson, L.; McClintock, W.; Aksnes, A.; Anderson, D.; Burns, A.; Codrescu, M.; Daniell, R.; Eparvier, F.; Harvey, J.; Krywonos, A.; Lankton, M.; Lumpe, J.; Prölss, G.; Richmond, A.; Rusch, D.; Solomon, S.; Strickland, D.; Woods, T.
2006-12-01
Observations by the Global-scale Observations of the Limb and Disk (GOLD) experiment will provide both context for in-situ measurements made on the "Radiation Belt Mappers" and information necessary for understanding changes in the radiation belts. GOLD will produce ultraviolet (UV) images of the Earth from a geostationary satellite. It will give near real-time information, on time scales of an hour to a day, about the response of the ionosphere-thermosphere to the influence of the magnetosphere and to variations in solar irradiance. Examples of information GOLD can provide include the boundaries in the magnetosphere from auroral locations and the electric field strengths (throughout the day) in the equatorial region from observations of the equatorial arcs. Such information is needed for understanding variability in the radiation belts.
Optimization of murine small intestine leukocyte isolation for global immune phenotype analysis.
Goodyear, Andrew W; Kumar, Ajay; Dow, Steven; Ryan, Elizabeth P
2014-03-01
New efforts to understand complex interactions between diet, gut microbiota, and intestinal immunity emphasize the need for a standardized murine protocol that has been optimized for the isolation of lamina propria immune cells. In this study multiple mouse strains including BALB/c, 129S6/Sv/EvTac and ICR mice were utilized to develop an optimal protocol for global analysis of lamina propria leukocytes. Incubation temperature was found to significantly improve epithelial cell removal, while changes in media formulation had minor effects. Tissue weight was an effective method for normalization of solution volumes and incubation times. Collagenase digestion in combination with thermolysin was identified as the optimal method for release of leukocytes from tissues and global immunophenotyping, based on the criteria of minimizing marker cleavage, improving cell viability, and reagent cost. The effects of collagenase in combination with dispase or thermolysin on individual cell surface markers revealed diverse marker specific effects. Aggressive formulations cleaved CD8α, CD138, and B220 from the cell surface, and resulted in relatively higher expression levels of CD3, γδ TCR, CD5, DX5, Ly6C, CD11b, CD11c, MHC-II and CD45. Improved collagenase digestion significantly improved viability and reduced debris formation, eliminating the need for density gradient purification. Finally, we demonstrate that two different digestion protocols yield significant differences in detection of CD4(+) and CD8(+) T cells, NK cells, monocytes and interdigitating DC (iDC) populations, highlighting the importance and impact of cell collection protocols on assay outputs. The optimized protocol described herein will help assure the reproducibility and robustness of global assessment of lamina propria immune responses. Moreover, this technique may be applied to isolation of leukocytes from the entire gastrointestinal tract. PMID:24508527
Gutowski, William J.; Prusa, Joseph M.; Smolarkiewicz, Piotr K.
2012-05-08
This project had goals of advancing the performance capabilities of the numerical general circulation model EULAG and using it to produce a fully operational atmospheric global climate model (AGCM) that can employ either static or dynamic grid stretching for targeted phenomena. The resulting AGCM combined EULAG's advanced dynamics core with the "physics" of the NCAR Community Atmospheric Model (CAM). The effort discussed below shows how we improved model performance and tested both EULAG and the coupled CAM-EULAG in several ways to demonstrate the grid stretching and the ability to simulate very well a wide range of scales, that is, multi-scale capability. We leveraged our effort through interaction with an international EULAG community that has collectively developed new features and applications of EULAG, which we exploited for our own work summarized here. Overall, the work contributed to over 40 peer-reviewed publications and over 70 conference/workshop/seminar presentations, many of them invited.
3a. EULAG Advances
EULAG is a non-hydrostatic, parallel computational model for all-scale geophysical flows. EULAG's name derives from its two computational options: EULerian (flux form) or semi-LAGrangian (advective form). The model combines nonoscillatory forward-in-time (NFT) numerical algorithms with a robust elliptic Krylov solver. A signature feature of EULAG is that it is formulated in generalized time-dependent curvilinear coordinates. In particular, this enables grid adaptivity. In total, these features give EULAG novel advantages over many existing dynamical cores. For EULAG itself, numerical advances included refining boundary conditions and filters for optimizing model performance in polar regions. We also added flexibility to the model's underlying formulation, allowing it to work with the pseudo-compressible equation set of Durran in addition to EULAG's standard anelastic formulation. Work in collaboration with others also extended the demonstrated range of
NASA Astrophysics Data System (ADS)
Sun, Jianhua; Wan, Li
2005-08-01
Convergence dynamics of Cohen-Grossberg neural networks (CGNNs) with continuously distributed delays are discussed. Without assuming the differentiability and monotonicity of activation functions, the differentiability of amplification functions or the symmetry of synaptic interconnection weights, by skilfully constructing suitable Lyapunov functionals and employing inequality techniques, three sets of easily verifiable delay-independent criteria to guarantee the global exponential stability of a unique equilibrium point are given; moreover, by constructing a Poincaré mapping, another three sets of easily verifiable delay-independent criteria to assure the existence and global exponential stability of periodic solutions are obtained. Six examples are given to illustrate the theoretical results.
Song, Qiankun; Yan, Huan; Zhao, Zhenjiang; Liu, Yurong
2016-09-01
This paper investigates the stability problem for a class of impulsive complex-valued neural networks with both asynchronous time-varying and continuously distributed delays. By employing the idea of vector Lyapunov functions, M-matrix theory and inequality techniques, several sufficient conditions are obtained to ensure the global exponential stability of the equilibrium point. When the impulsive effects are not considered, several sufficient conditions are also given to guarantee the existence, uniqueness and global exponential stability of the equilibrium point. Two examples are given to illustrate the effectiveness and lower level of conservatism of the proposed criteria in comparison with some existing results. PMID:27239891
Optimal Multi-scale Demand-side Management for Continuous Power-Intensive Processes
NASA Astrophysics Data System (ADS)
Mitra, Sumit
With the advent of deregulation in electricity markets and an increasing share of intermittent power generation sources, the profitability of industrial consumers that operate power-intensive processes has become directly linked to the variability in energy prices. Thus, for industrial consumers that are able to adjust to the fluctuations, time-sensitive electricity prices (as part of so-called Demand-Side Management (DSM) in the smart grid) offer potential economic incentives. In this thesis, we introduce optimization models and decomposition strategies for the multi-scale Demand-Side Management of continuous power-intensive processes. On an operational level, we derive a mode formulation for scheduling under time-sensitive electricity prices. The formulation is applied to air separation plants and cement plants to minimize the operating cost. We also describe how a mode formulation can be used for industrial combined heat and power plants that are co-located at integrated chemical sites to increase operating profit by adjusting their steam and electricity production according to their inherent flexibility. Furthermore, a robust optimization formulation is developed to address the uncertainty in electricity prices by accounting for correlations and multiple ranges in the realization of the random variables. On a strategic level, we introduce a multi-scale model that provides an understanding of the value of flexibility of the current plant configuration and the value of additional flexibility in terms of retrofits for Demand-Side Management under product demand uncertainty. The integration of multiple time scales leads to large-scale two-stage stochastic programming problems, for which we need to apply decomposition strategies in order to obtain a good solution within a reasonable amount of time. Hence, we describe two decomposition schemes that can be applied to solve two-stage stochastic programming problems: First, a hybrid bi-level decomposition scheme with
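The operational-level idea — shifting production of a continuous process into cheap-price hours — can be illustrated with a minimal greedy schedule. This toy sketch ignores ramping, storage, and mode-transition constraints (which the thesis's mode formulation does handle), and the prices are invented:

```python
def schedule_production(prices, total, rate_max, rate_min=0.0):
    """Meet a total production requirement at minimum electricity cost by
    running at full rate in the cheapest hours first (no ramping limits)."""
    plan = [rate_min] * len(prices)
    remaining = total - rate_min * len(prices)
    for h in sorted(range(len(prices)), key=lambda h: prices[h]):
        add = min(rate_max - rate_min, remaining)
        plan[h] += add
        remaining -= add
        if remaining <= 0:
            break
    return plan

prices = [30, 80, 20, 60, 25, 90]          # $/MWh over six hours (illustrative)
plan = schedule_production(prices, total=12, rate_max=4)
cost = sum(p * q for p, q in zip(prices, plan))
```

With ramping or minimum-uptime constraints this greedy rule no longer suffices, which is precisely why the thesis resorts to mixed-integer mode formulations.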
Global-Local Analysis and Optimization of a Composite Civil Tilt-Rotor Wing
NASA Technical Reports Server (NTRS)
Rais-Rohani, Masound
1999-01-01
This report gives highlights of an investigation on the design and optimization of a thin composite wing box structure for a civil tilt-rotor aircraft. Two different concepts are considered for the cantilever wing: (a) a thin monolithic skin design, and (b) a thick sandwich skin design. Each concept is examined with three different skin ply patterns based on various combinations of 0, +/-45, and 90 degree plies. The global-local technique is used in the analysis and optimization of the six design models. The global analysis is based on a finite element model of the wing-pylon configuration while the local analysis uses a uniformly supported plate representing a wing panel. Design allowables include those on vibration frequencies, panel buckling, and material strength. The design optimization problem is formulated as one of minimizing the structural weight subject to strength, stiffness, and dynamic constraints. Six different loading conditions based on three different flight modes are considered in the design optimization. The results of this investigation reveal that of all the loading conditions the one corresponding to the rolling pull-out in the airplane mode is the most stringent. Also the frequency constraints are found to drive the skin thickness limits, rendering the buckling constraints inactive. The optimum skin ply pattern for the monolithic skin concept is found to be ((0/±45/90/(0/90)₂)ₛ)ₛ, while for the sandwich skin concept the optimal ply pattern is found to be ((0/±45/90)₂ₛ)ₛ.
Optimizing rice yields while minimizing yield-scaled global warming potential.
Pittelkow, Cameron M; Adviento-Borbe, Maria A; van Kessel, Chris; Hill, James E; Linquist, Bruce A
2014-05-01
To meet growing global food demand with limited land and reduced environmental impact, agricultural greenhouse gas (GHG) emissions are increasingly evaluated with respect to crop productivity, i.e., on a yield-scaled as opposed to area basis. Here, we compiled available field data on CH4 and N2O emissions from rice production systems to test the hypothesis that in response to fertilizer nitrogen (N) addition, yield-scaled global warming potential (GWP) will be minimized at N rates that maximize yields. Within each study, yield N surplus was calculated to estimate deficit or excess N application rates with respect to the optimal N rate (defined as the N rate at which maximum yield was achieved). Relationships between yield N surplus and GHG emissions were assessed using linear and nonlinear mixed-effects models. Results indicate that yields increased in response to increasing N surplus when moving from deficit to optimal N rates. At N rates contributing to a yield N surplus, N2O and yield-scaled N2O emissions increased exponentially. In contrast, CH4 emissions were not impacted by N inputs. Accordingly, yield-scaled CH4 emissions decreased with N addition. Overall, yield-scaled GWP was minimized at optimal N rates, decreasing by 21% compared to treatments without N addition. These results are unique compared to aerobic cropping systems in which N2O emissions are the primary contributor to GWP, meaning yield-scaled GWP may not necessarily decrease for aerobic crops when yields are optimized by N fertilizer addition. Balancing gains in agricultural productivity with climate change concerns, this work supports the concept that high rice yields can be achieved with minimal yield-scaled GWP through optimal N application rates. Moreover, additional improvements in N use efficiency may further reduce yield-scaled GWP, thereby strengthening the economic and environmental sustainability of rice systems. PMID:24115565
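The yield-scaled GWP metric itself is a short calculation: per-area CH4 and N2O emissions converted to CO2-equivalents and divided by grain yield. The 100-year GWP factors and the field numbers below are illustrative assumptions (chosen to mimic the qualitative pattern described), not values from the meta-analysis:

```python
GWP_CH4, GWP_N2O = 25.0, 298.0   # kg CO2-eq per kg gas (100-yr horizon, assumed)

def yield_scaled_gwp(ch4_kg_ha, n2o_kg_ha, yield_kg_ha):
    area_gwp = ch4_kg_ha * GWP_CH4 + n2o_kg_ha * GWP_N2O   # kg CO2-eq / ha
    return area_gwp / yield_kg_ha                          # kg CO2-eq / kg grain

# CH4 is roughly flat in N rate, N2O rises sharply past the optimal rate,
# and yield saturates -- so yield-scaled GWP bottoms out at the optimal N rate.
zero_n  = yield_scaled_gwp(200.0, 0.2, 4000.0)   # no N addition
optimal = yield_scaled_gwp(200.0, 0.5, 7000.0)   # ~optimal N rate
surplus = yield_scaled_gwp(200.0, 2.0, 7200.0)   # excess N (yield N surplus)
```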
Global optimization approaches for finding the atomic structure of surfaces and nanowires
NASA Astrophysics Data System (ADS)
Ciobanu, Cristian
2007-03-01
In the cluster structure community, global optimization methods are common tools for seeking the structure of molecular and atomic clusters. The large number of local minima of the potential energy surface (PES) of these clusters, and the fact that these local minima proliferate exponentially with the number of atoms in the cluster, simply demand the use of fast stochastic methods to find the optimum atomic configuration. Therefore, most of the development work has come from (and mostly stayed within) the cluster structure community. Partly due to wide availability and landmark successes of scanning tunneling microscopy (STM) and other high resolution microscopy techniques, finding the structure of periodically reconstructed semiconductor surfaces was not generally posed as a problem of stochastic optimization until recently [1], when we have shown that high-index semiconductor surfaces can have a rather large number of local minima with such low surface energies that the identification of the global minimum becomes problematic. We have therefore set out to develop global optimization methods for systems other than clusters, focusing on periodic systems in one and two dimensions, as such systems currently occupy a central place in the field of nanoscience. In this talk, we review some of our recent work on global optimization methods (the parallel-tempering Monte Carlo method [1] and the genetic algorithm [2]) and show examples/results from two main problem categories: (a) the two-dimensional problem of determining the atomic configuration of clean semiconductor surfaces [1,2], and (b) finding the structure of freestanding nanowires [3]. While focused mainly on atomic structure, our account will show examples of how these development efforts contributed to elucidating several physical problems, and we will attempt to make a case for widespread use of these methods for structural problems in one and two dimensions. [1] C.V. Ciobanu and C. Predescu, Reconstruction
Research on global path planning based on ant colony optimization for AUV
NASA Astrophysics Data System (ADS)
Wang, Hong-Jian; Xiong, Wei
2009-03-01
Path planning is an important issue for autonomous underwater vehicles (AUVs) traversing an unknown environment such as a sea floor, a jungle, or the outer planets. For this paper, global path planning using large-scale chart data was studied, and the principles of ant colony optimization (ACO) were applied. This paper introduced the idea of a visibility graph based on the grid workspace model. It also proposed a series of pheromone-updating rules for the ACO planning algorithm. The operational steps of the ACO algorithm are proposed as a model for a global path planning method for AUVs. To mimic the process of smoothing a planned path, a cutting operator and an insertion-point operator were designed. Simulation results demonstrated that the ACO algorithm is suitable for global path planning. The approach has many advantages, including that the operating path of the AUV can be quickly optimized and is shorter, safer, and smoother. The prototype system successfully demonstrated the feasibility of the concept, proving it can be applied to surveys of unstructured, unmanned environments.
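The pheromone mechanics behind such a planner can be sketched compactly. The following is a minimal, generic ACO shortest-path search on a weighted graph, not the paper's AUV planner: the graph, the inverse-cost heuristic, and all constants (alpha, beta, rho, q) are illustrative assumptions.

```python
import random

def ant_colony_shortest_path(graph, start, goal, n_ants=20, n_iters=50,
                             alpha=1.0, beta=2.0, rho=0.5, q=1.0, seed=0):
    """Minimal ACO on a weighted digraph given as {node: {neighbor: cost}}."""
    rng = random.Random(seed)
    tau = {(u, v): 1.0 for u in graph for v in graph[u]}  # pheromone levels
    best_path, best_cost = None, float("inf")
    for _ in range(n_iters):
        walks = []
        for _ in range(n_ants):
            node, path, visited = start, [start], {start}
            while node != goal:
                choices = [v for v in graph[node] if v not in visited]
                if not choices:          # dead end: discard this ant
                    path = None
                    break
                # weight edges by pheromone^alpha * (1/cost)^beta
                weights = [tau[(node, v)] ** alpha *
                           (1.0 / graph[node][v]) ** beta for v in choices]
                node = rng.choices(choices, weights=weights)[0]
                path.append(node)
                visited.add(node)
            if path:
                cost = sum(graph[a][b] for a, b in zip(path, path[1:]))
                walks.append((path, cost))
                if cost < best_cost:
                    best_path, best_cost = path, cost
        # evaporation, then deposit proportional to path quality
        for edge in tau:
            tau[edge] *= (1.0 - rho)
        for path, cost in walks:
            for edge in zip(path, path[1:]):
                tau[edge] += q / cost
    return best_path, best_cost
```

The paper's cutting and insertion-point smoothing operators would act on the returned path as a post-processing step; they are not reproduced here.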
CH4 parameter estimation in CLM4.5bgc using surrogate global optimization
NASA Astrophysics Data System (ADS)
Müller, J.; Paudel, R.; Shoemaker, C. A.; Woodbury, J.; Wang, Y.; Mahowald, N.
2015-10-01
Over the Anthropocene, atmospheric methane has increased dramatically. Wetlands are one of the major sources of methane to the atmosphere, but the role of changes in wetland emissions is not well understood. The Community Land Model (CLM) of the Community Earth System Models contains a module to estimate methane emissions from natural wetlands and rice paddies. Our comparison of CH4 emission observations at 16 sites around the planet reveals, however, that there are large discrepancies between the CLM predictions and the observations. The goal of our study is to adjust the model parameters in order to minimize the root mean squared error (RMSE) between model predictions and observations. These parameters have been selected based on a sensitivity analysis. Because of the cost associated with running the CLM simulation (15 to 30 min on the Yellowstone Supercomputing Facility), only relatively few simulations can be afforded in the search for a near-optimal solution within an acceptable time. Our results indicate that the parameter estimation problem has multiple local minima. Hence, we use a computationally efficient global optimization algorithm that uses a radial basis function (RBF) surrogate model to approximate the objective function. We use the information from the RBF to select parameter values that are most promising with respect to improving the objective function value. We show with pseudo data that our optimization algorithm is able to make excellent progress with respect to decreasing the RMSE. Using the true CH4 emission observations for optimizing the parameters, we are able to significantly reduce the overall RMSE between observations and model predictions by about 50%. The methane emission predictions of the CLM using the optimized parameters agree better with the observed methane emission data in northern and tropical latitudes. With the optimized parameters, the methane emission predictions are higher in northern latitudes than when the default parameters are
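The surrogate idea, fitting a cheap RBF model to the expensive simulations already run and evaluating the expensive function only at the model's most promising candidate, can be sketched as follows. This is a generic illustration assuming a cubic RBF interpolant and uniform random candidate sampling; the authors' candidate-selection strategy is considerably more elaborate.

```python
import numpy as np

def rbf_surrogate_minimize(f, bounds, n_init=6, n_iter=20, seed=0):
    """Surrogate optimization sketch with a cubic RBF interpolant.

    Each step fits s(x) = sum_i w_i ||x - x_i||^3 to all evaluated points,
    scores a batch of random candidates on the cheap model, and spends one
    expensive evaluation of f on the best-scoring candidate.
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    X = rng.uniform(lo, hi, size=(n_init, len(lo)))   # initial design
    y = np.array([f(x) for x in X])
    for _ in range(n_iter):
        # fit the cubic RBF weights from the interpolation conditions
        D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1) ** 3
        w = np.linalg.lstsq(D, y, rcond=None)[0]
        cand = rng.uniform(lo, hi, size=(200, len(lo)))
        Dc = np.linalg.norm(cand[:, None, :] - X[None, :, :], axis=-1) ** 3
        x_next = cand[np.argmin(Dc @ w)]   # most promising under the model
        X = np.vstack([X, x_next])
        y = np.append(y, f(x_next))        # one expensive evaluation
    return X[np.argmin(y)], y.min()
```

In the paper's setting, `f` would wrap a full CLM simulation, so the 26 evaluations here stand in for days of supercomputer time.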
Liang, Faming; Cheng, Yichen; Lin, Guang
2014-06-13
Simulated annealing has been widely used in the solution of optimization problems. As is well known, simulated annealing cannot be guaranteed to locate the global optima unless a logarithmic cooling schedule is used. However, the logarithmic cooling schedule is so slow that the required CPU time is rarely affordable. This paper proposes a new stochastic optimization algorithm, the so-called simulated stochastic approximation annealing algorithm, which is a combination of simulated annealing and the stochastic approximation Monte Carlo algorithm. Under the framework of stochastic approximation Markov chain Monte Carlo, it is shown that the new algorithm can work with a cooling schedule in which the temperature can decrease much faster than in the logarithmic cooling schedule, e.g., a square-root cooling schedule, while guaranteeing the global optima to be reached when the temperature tends to zero. The new algorithm has been tested on a few benchmark optimization problems, including feed-forward neural network training and protein folding. The numerical results indicate that the new algorithm can significantly outperform simulated annealing and other competitors.
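The square-root schedule contrasted with the logarithmic one can be illustrated with a bare-bones Metropolis annealer. This is a generic sketch of plain simulated annealing under the faster schedule, not the paper's simulated stochastic approximation annealing algorithm; the test function and all constants are illustrative assumptions.

```python
import math
import random

def simulated_annealing(f, x0, t0=10.0, n_steps=20000, step=0.5, seed=0):
    """Metropolis annealer with the square-root schedule t_k = t0/sqrt(k+1),
    which cools much faster than the logarithmic t_k = t0/log(k+e)."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for k in range(n_steps):
        t = t0 / math.sqrt(k + 1)              # square-root cooling
        cand = x + rng.uniform(-step, step)
        fc = f(cand)
        # always accept downhill; accept uphill with Boltzmann probability
        if fc <= fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
    return best, fbest
```

The paper's contribution is precisely that, within the stochastic approximation MCMC framework, such a fast schedule retains the convergence guarantee that plain annealing only enjoys under logarithmic cooling.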
Comparison of global optimization approaches for robust calibration of hydrologic model parameters
NASA Astrophysics Data System (ADS)
Jung, I. W.
2015-12-01
Robustness of the calibrated parameters of hydrologic models is necessary to provide a reliable prediction of future watershed behavior under varying climate conditions. This study investigated calibration performance according to the length of the calibration period, the objective function, the hydrologic model structure, and the optimization method. To do this, the combination of three global optimization methods (i.e., SCE-UA, Micro-GA, and DREAM) and four hydrologic models (i.e., SAC-SMA, GR4J, HBV, and PRMS) was tested with different calibration periods and objective functions. Our results showed that the three global optimization methods provided similar calibration performance under different calibration periods, objective functions, and hydrologic models. However, using the index of agreement, normalized root mean square error, or Nash-Sutcliffe efficiency as the objective function yielded better performance than using the correlation coefficient or percent bias. Calibration performance for calibration periods ranging from one to seven years was hard to generalize, because the four hydrologic models have different levels of complexity and different years contain different information in the hydrological observations. Acknowledgements This research was supported by a grant (14AWMP-B082564-01) from the Advanced Water Management Research Program funded by the Ministry of Land, Infrastructure and Transport of the Korean government.
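The objective functions compared in the study are standard goodness-of-fit measures; three of them can be sketched as follows (the study's exact formulations, e.g. the normalization used in the NRMSE, may differ from these common conventions).

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 matches the mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def nrmse(obs, sim):
    """Root mean square error normalized by the range of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return np.sqrt(np.mean((obs - sim) ** 2)) / (obs.max() - obs.min())

def percent_bias(obs, sim):
    """Signed percentage tendency to over- (positive) or underestimate."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(sim - obs) / np.sum(obs)
```

In a calibration loop, an optimizer such as SCE-UA would minimize `nrmse` (or maximize `nash_sutcliffe`) over the model's parameter vector.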
A Globally Optimal Particle Tracking Technique for Stereo Imaging Velocimetry Experiments
NASA Technical Reports Server (NTRS)
McDowell, Mark
2008-01-01
An important phase of any Stereo Imaging Velocimetry experiment is particle tracking. Particle tracking seeks to identify and characterize the motion of individual particles entrained in a fluid or air experiment. We analyze a cylindrical chamber filled with water and seeded with density-matched particles. In every four-frame sequence, we identify a particle track by assigning a unique track label for each camera image. The conventional approach to particle tracking is an exhaustive tree search using greedy algorithms to reduce search times. However, these types of algorithms are not optimal, due to a cascade effect of incorrect decisions upon adjacent tracks. We examine the use of a guided evolutionary neural net with simulated annealing to arrive at a globally optimal assignment of tracks. The net is guided both by the minimization of the search space through the use of prior limiting assumptions about valid tracks and by a strategy which seeks to avoid high-energy intermediate states which can trap the net in a local minimum. A stochastic search algorithm is used in place of back-propagation of error to further reduce the chance of being trapped in an energy well. Global optimization is achieved by minimizing an objective function which includes both track smoothness and particle-image utilization parameters. In this paper we describe our model and present our experimental results. We compare our results with a nonoptimizing, predictive tracker and obtain an average increase in valid track yield of 27 percent.
Lemmel, S A; Heimsch, R C; Edwards, L L
1979-02-01
The yeasts Candida utilis and Saccharomycopsis fibuliger were propagated as a source of single-cell protein in a continuous, mixed, aerobic, single-stage cultivation on blancher water generated during potato processing. A series of steady-state experiments based on a two-level factorial design, half-replicate modified with an intermediate experiment, was performed to determine the effect of pH, 3.8 to 4.8; dissolved oxygen, 42 to 80% saturation; dilution rate, 0.17 to 0.31 h(-1); and temperature, 27 to 32 degrees C on the amount of carbon consumed, the rate of carbon consumption (R(c)), the amount of reducing sugar consumed, the rate of sugar consumption (R(g)), the amount of protein produced, the rate of protein production (R(p)), the yield from carbon, and the yield from reducing sugar. The results were analyzed by stepwise multiple regression and Fisher's least significant difference test. Analyses showed that high dilution rates resulted in increased R(c), R(g), and R(p) and indicated that a rate of 0.31 h(-1) was below the critical dilution rate. A temperature of 32 degrees C increased the amount of carbon consumed by 34%. A pH of 4.3 to 4.8 increased the amount of protein produced. The yield from carbon was constant, and the relatively high yield from reducing sugar indicated that other substrates were consumed. Dissolved oxygen was in excess at 42% saturation and above. Since C. utilis predominated in the mixed cultures and amylase production appeared to be limited, a single-stage fermentation lacked efficiency. The experimental design allowed preliminary optimization of major environmental variables with relatively few experiments and provided a basis for future kinetic studies. PMID:35096
NASA Astrophysics Data System (ADS)
Mohamad, Sannay
2001-11-01
Convergence dynamics of continuous-time bidirectional neural networks with constant transmission delays are studied. Without assuming the symmetry of synaptic connection weights and the monotonicity and differentiability of activation functions, Lyapunov functionals and Halanay-type inequalities are constructed and employed to derive delay independent sufficient conditions under which the continuous-time networks converge exponentially to the equilibria associated with temporally uniform external inputs to the networks. Discrete-time analogues of the continuous-time networks are formulated and we study their dynamical characteristics. It is shown that the convergence dynamics of the continuous-time networks are preserved by the discrete-time analogues without any restriction on the discretization step-size. Several examples are given to illustrate the advantages of the discrete-time analogues in numerically simulating the continuous-time networks.
Vector direction of filled function method on solving unconstrained global optimization problem
NASA Astrophysics Data System (ADS)
Napitupulu, Herlina; Mohd, Ismail Bin
2016-02-01
The filled function method is one of the deterministic methods for solving global minimization problems. A filled function algorithm generally consists of two main phases. The first phase obtains a local minimizer of the objective function; the second obtains a minimizer or saddle point of the filled function. In the second phase, the vector direction plays an important role in finding a stationary point of the filled function, by assisting the search in escaping from the neighborhood of the current minimizer of the objective function found in the first phase. In this paper, we introduce a parameter-free filled function and some typical vector directions to be applied in the filled function algorithm. The algorithm is applied to some benchmark test functions. General computational and numerical results are presented to show the performance of each vector direction in the filled function method for solving two-dimensional unconstrained global optimization problems.
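The two phases can be illustrated in a toy one-dimensional form. The sketch below uses Ge's classic exponential filled function as a stand-in for the paper's parameter-free function, which is not reproduced here; the trial directions, step sizes, and the crude fixed-step local search are all illustrative assumptions.

```python
import math

def filled_function_1d_sketch(f, x0, directions=(1.0, -1.0), r=1.0,
                              rho=1.0, step=0.1, max_steps=200):
    """Two-phase filled-function sketch in 1-D.

    Phase 1 finds a local minimizer x* of f by crude descent. Phase 2 walks
    along each trial vector direction while Ge's filled function
        P(x) = exp(-(x - x*)^2 / rho^2) / (r + f(x))
    keeps decreasing, and restarts phase 1 as soon as a point below f(x*)
    is found (i.e., a lower basin has been entered).
    """
    def local_min(x, h=0.01):
        for _ in range(10000):
            if f(x + h) < f(x):
                x += h
            elif f(x - h) < f(x):
                x -= h
            else:
                break
        return x

    x_star = local_min(x0)                      # phase 1
    improved = True
    while improved:
        improved = False
        f_star = f(x_star)
        P = lambda z: math.exp(-(z - x_star) ** 2 / rho ** 2) / (r + f(z))
        for d in directions:                    # phase 2: trial directions
            x = x_star + d * step
            for _ in range(max_steps):
                x_next = x + d * step
                if f(x_next) < f_star:          # entered a lower basin
                    cand = local_min(x_next)    # phase 1 from the new basin
                    if f(cand) < f_star:
                        x_star, improved = cand, True
                    break
                if P(x_next) >= P(x):           # filled function stops falling
                    break
                x = x_next
            if improved:
                break
    return x_star, f(x_star)
```

On a tilted double well, the walk along the direction pointing toward the deeper basin descends P across the barrier and triggers the restart, which is exactly the escape mechanism the abstract attributes to the vector direction.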
Global Optimization of N-Maneuver, High-Thrust Trajectories Using Direct Multiple Shooting
NASA Technical Reports Server (NTRS)
Vavrina, Matthew A.; Englander, Jacob A.; Ellison, Donald H.
2015-01-01
The performance of impulsive, gravity-assist trajectories often improves with the inclusion of one or more maneuvers between flybys. However, grid-based scans over the entire design space can become computationally intractable for even one deep-space maneuver, and few global search routines are capable of an arbitrary number of maneuvers. To address this difficulty a trajectory transcription allowing for any number of maneuvers is developed within a multi-objective, global optimization framework for constrained, multiple gravity-assist trajectories. The formulation exploits a robust shooting scheme and analytic derivatives for computational efficiency. The approach is applied to several complex, interplanetary problems, achieving notable performance without a user-supplied initial guess.
2012-01-01
Background The estimation of parameter values for mathematical models of biological systems is an optimization problem that is particularly challenging due to the nonlinearities involved. One major difficulty is the existence of multiple minima into which standard optimization methods may fall during the search. Deterministic global optimization methods overcome this limitation, ensuring convergence to the global optimum within a desired tolerance. Global optimization techniques are usually classified into stochastic and deterministic. The former typically lead to lower CPU times but offer no guarantee of convergence to the global minimum in a finite number of iterations. In contrast, deterministic methods provide solutions of a given quality (i.e., optimality gap), but tend to lead to large computational burdens. Results This work presents a deterministic outer approximation-based algorithm for the global optimization of dynamic problems arising in the parameter estimation of models of biological systems. Our approach, which offers a theoretical guarantee of convergence to the global minimum, is based on reformulating the set of ordinary differential equations into an equivalent set of algebraic equations through the use of orthogonal collocation methods, giving rise to a nonconvex nonlinear programming (NLP) problem. This nonconvex NLP is decomposed into two hierarchical levels: a master mixed-integer linear programming problem (MILP) that provides a rigorous lower bound on the optimal solution, and a reduced-space slave NLP that yields an upper bound. The algorithm iterates between these two levels until a termination criterion is satisfied. Conclusion The capabilities of our approach were tested in two benchmark problems, in which the performance of our algorithm was compared with that of the commercial global optimization package BARON. The proposed strategy produced near optimal solutions (i.e., within a desired tolerance) in a fraction of the CPU time required by
Xu, Gongxian; Liu, Ying; Gao, Qunwang
2016-02-10
This paper deals with multi-objective optimization of the continuous bio-dissimilation process of glycerol to 1,3-propanediol. In order to maximize the production rate of 1,3-propanediol, maximize the conversion rate of glycerol to 1,3-propanediol, maximize the conversion rate of glycerol, and minimize the concentration of the by-product ethanol, we first propose six new multi-objective optimization models that can simultaneously optimize any two of the four objectives above. Then these multi-objective optimization problems are solved by using the weighted-sum and normal-boundary intersection methods respectively. Both the Pareto filter algorithm and removal criteria are used to remove those non-Pareto optimal points obtained by the normal-boundary intersection method. The results show that the normal-boundary intersection method can successfully obtain the approximate Pareto optimal sets of all the proposed multi-objective optimization problems, while the weighted-sum approach cannot achieve the overall Pareto optimal solutions of some multi-objective problems. PMID:26704728
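The weighted-sum scalarization and the Pareto filtering step can be sketched as follows; the normal-boundary intersection method is not shown, and the objectives here are generic stand-ins, not the bioprocess model. Note that a weighted-sum scan can miss points on nonconvex portions of a Pareto front, which is consistent with the abstract's finding that it fails to recover some fronts.

```python
def pareto_filter(points):
    """Keep only non-dominated objective vectors (all objectives minimized)."""
    return [p for p in points
            if not any(q != p and all(qi <= pi for qi, pi in zip(q, p))
                       for q in points)]

def weighted_sum_front(f1, f2, xs, n_weights=11):
    """Scan w*f1 + (1-w)*f2 over candidate decisions xs and collect the
    objective pairs of the scalarized optima (both objectives minimized)."""
    front = set()
    for k in range(n_weights):
        w = k / (n_weights - 1)
        best = min(xs, key=lambda x: w * f1(x) + (1 - w) * f2(x))
        front.add((round(f1(best), 6), round(f2(best), 6)))
    return sorted(front)
```

For the convex bi-objective toy problem below, every scalarized optimum is Pareto optimal, so the filter leaves the scanned front unchanged.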
Lithological and Surface Geometry Joint Inversions Using Multi-Objective Global Optimization Methods
NASA Astrophysics Data System (ADS)
Lelièvre, Peter; Bijani, Rodrigo; Farquharson, Colin
2016-04-01
surfaces are set to a priori values. The inversion is tasked with calculating the geometry of the contact surfaces instead of some piecewise distribution of properties in a mesh. Again, no coupling measure is required and joint inversion is simplified. Both of these inverse problems involve high nonlinearity and discontinuous or non-obtainable derivatives. They can also involve multiple minima. Hence, one cannot apply the standard descent-based local minimization methods used to solve typical minimum-structure inversions. Instead, we are applying Pareto multi-objective global optimization (PMOGO) methods, which generate a suite of solutions that minimize multiple objectives (e.g. data misfits and regularization terms) in a Pareto-optimal sense. Providing a suite of models, as opposed to a single model that minimizes a weighted sum of objectives, allows a more complete assessment of the possibilities and avoids the often difficult choice of how to weight each objective. While there are definite advantages to PMOGO joint inversion approaches, the methods come with significantly increased computational requirements. We are researching various strategies to ameliorate these computational issues, including parallelization and problem dimension reduction.
Using support vector machine and dynamic parameter encoding to enhance global optimization
NASA Astrophysics Data System (ADS)
Zheng, Z.; Chen, X.; Liu, C.; Huang, K.
2016-05-01
This study presents an approach which combines support vector machine (SVM) and dynamic parameter encoding (DPE) to enhance the run-time performance of global optimization with time-consuming fitness function evaluations. SVMs are used as surrogate models to partly substitute for fitness evaluations. To reduce the computation time and guarantee correct convergence, this work proposes a novel strategy to adaptively adjust the number of fitness evaluations needed according to the approximate error of the surrogate model. Meanwhile, DPE is employed to compress the solution space, so that it not only accelerates the convergence but also decreases the approximate error. Numerical results of optimizing a few benchmark functions and an antenna in a practical application are presented, which verify the feasibility, efficiency and robustness of the proposed approach.
NASA Astrophysics Data System (ADS)
Do, Khac Duc
2015-03-01
This paper presents a design of optimal controllers with respect to a meaningful cost function to force an underactuated omni-directional intelligent navigator (ODIN) under unknown constant environmental loads to track a reference trajectory in two-dimensional space. Motivated by the vehicle's steering practice, the yaw angle regarded as a virtual control plus the surge thrust force are used to force the position of the vehicle to globally track its reference trajectory. The control design is based on several recent results developed for inverse optimal control and stability analysis of nonlinear systems, a new design of bounded disturbance observers, and backstepping and Lyapunov's direct methods. Both state- and output-feedback control designs are addressed. Simulations are included to illustrate the effectiveness of the proposed results.
Lee, JongHyup; Pak, Dohyun
2016-01-01
For practical deployment of wireless sensor networks (WSN), WSNs construct clusters, where a sensor node communicates with other nodes in its cluster, and a cluster head supports connectivity between the sensor nodes and a sink node. In hybrid WSNs, cluster heads have cellular network interfaces for global connectivity. However, when WSNs are active and the load of cellular networks is high, the optimal assignment of cluster heads to base stations becomes critical. Therefore, in this paper, we propose a game theoretic model to find the optimal assignment of base stations for hybrid WSNs. Since the communication and energy cost differs between cellular systems, we devise two game models, for TDMA/FDMA and CDMA systems, employing power prices to adapt to the varying efficiency of recent wireless technologies. The proposed model is defined under the assumption of an ideal sensing field, but our evaluation shows that the proposed model is more adaptive and energy efficient than local selections. PMID:27589743
A genetic algorithm for first principles global structure optimization of supported nano structures
Vilhelmsen, Lasse B.; Hammer, Bjørk
2014-07-28
We present a newly developed publicly available genetic algorithm (GA) for global structure optimisation within atomic scale modeling. The GA is focused on optimizations using first principles calculations, but it works equally well with empirical potentials. The implementation is described and benchmarked through a detailed statistical analysis employing averages across many independent runs of the GA. This analysis focuses on the practical use of GAs, with a description of optimal parameters to use. New results for the adsorption of M8 clusters (M = Ru, Rh, Pd, Ag, Pt, Au) on the stoichiometric rutile TiO2(110) surface are presented, showing the power of automated structure prediction and highlighting the diversity of metal cluster geometries at the atomic scale.
Inversion of seismological data using a controlled random search global optimization technique
NASA Astrophysics Data System (ADS)
Shanker, K.; Mohan, C.; Khattri, K. N.
1991-11-01
Inversion problems in seismology deal with the estimation of the location and the time of occurrence of an earthquake from observations of the arrival time of the body waves. These problems can be regarded as non-linear optimization problems in which the objective function to be minimized is the discrepancy between the recorded arrival times and the calculated arrival times at a prescribed set of observation stations, as a function of the hypocentral parameters and the wave speed structure of the Earth. The objective of the present paper is to demonstrate the effectiveness of a controlled random search algorithm of global optimization (Shanker and Mohan, 1987; Mohan and Shanker, 1988) in solving such types of inversion problems. The performance of the algorithm has been tested on earthquake arrival time data of earthquakes recorded in the vicinity of local networks in the Garhwal Kumaon region of the Himalayas.
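A controlled random search iteration in the spirit of Price's method (and of the Shanker-Mohan variant cited above) can be sketched as follows; this is a generic CRS-style reflection scheme, not the authors' exact algorithm, and details such as the secondary trial step of later CRS variants are omitted.

```python
import random

def controlled_random_search(f, bounds, pop_size=50, max_iter=5000, seed=0):
    """CRS-style global search: maintain a population, reflect a randomly
    chosen point through the centroid of other random points, and replace
    the current worst point whenever the trial improves on it."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    vals = [f(p) for p in pop]
    for _ in range(max_iter):
        worst = max(range(pop_size), key=vals.__getitem__)
        chosen = rng.sample(range(pop_size), dim + 1)
        centroid = [sum(pop[i][j] for i in chosen[:-1]) / dim
                    for j in range(dim)]
        # reflect the last chosen point through the centroid of the others
        trial = [2.0 * centroid[j] - pop[chosen[-1]][j] for j in range(dim)]
        if all(lo <= t <= hi for t, (lo, hi) in zip(trial, bounds)):
            f_trial = f(trial)
            if f_trial < vals[worst]:
                pop[worst], vals[worst] = trial, f_trial
    best = min(range(pop_size), key=vals.__getitem__)
    return pop[best], vals[best]
```

In the hypocenter application, `f` would be the misfit between recorded and calculated arrival times as a function of the hypocentral parameters.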
Global geometry optimization of silicon clusters described by three empirical potentials
NASA Astrophysics Data System (ADS)
Yoo, S.; Zeng, X. C.
2003-07-01
The "basin-hopping" global optimization technique developed by Wales and Doye is employed to study the global minima of silicon clusters Sin(3⩽n⩽30) with three empirical potentials: the Stillinger-Weber (SW), the modified Stillinger-Weber (MSW), and the Gong potentials. For the small-sized SW and Gong clusters (3⩽n⩽15), it is found that the global minima obtained with the basin-hopping method are identical to those reported by using the genetic algorithm [Iwamatsu, J. Chem. Phys. 112, 10976 (2000)], as well as those by using molecular dynamics and the steepest-descent quench (SDQ) method [Feuston, Kalia, and Vashishta, Phys. Rev. B 37, 6297 (1988)]. However, for the mid-sized SW clusters (16⩽n⩽20), the global minima obtained differ from those based on the SDQ method, e.g., the appearance of the endohedral atom with fivefold coordination starting at n=17, as opposed to n=19. For larger SW clusters (20⩽n⩽30), it is found that the "bulklike" endohedral atom with tetrahedral coordination starts at n=20. In particular, the overall structural features of SW Si21, Si23, Si25, and Si28 are nearly identical to the MSW counterparts. With the SW Si21 as the starting structure, a geometry optimization at the B3LYP/6-31G(d) level of density-functional theory yields an isomer similar to the ground-state isomer of Si21 reported by Pederson et al. [Phys. Rev. B 54, 2863 (1996)].
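The basin-hopping idea (random hop, local quench, Metropolis acceptance on the quenched energies) can be shown in a toy one-dimensional form; the double-well function in the test stands in for a cluster potential energy surface and is of course not the Stillinger-Weber potential, and the hop size, temperature, and quench routine are illustrative assumptions.

```python
import math
import random

def basin_hopping(f, x0, n_hops=200, step=1.0, t=1.0, seed=0):
    """Toy 1-D basin-hopping: random hop, deterministic local quench,
    Metropolis acceptance applied to the quenched function values."""
    rng = random.Random(seed)

    def quench(x, h=0.01):
        # crude fixed-step descent to the bottom of the current basin
        for _ in range(10000):
            if f(x + h) < f(x):
                x += h
            elif f(x - h) < f(x):
                x -= h
            else:
                break
        return x

    x = quench(x0)
    best, fbest = x, f(x)
    for _ in range(n_hops):
        cand = quench(x + rng.uniform(-step, step))
        if f(cand) < f(x) or rng.random() < math.exp((f(x) - f(cand)) / t):
            x = cand
        if f(x) < fbest:
            best, fbest = x, f(x)
    return best, fbest
```

Because acceptance is decided on quenched values, the transformed landscape seen by the Metropolis walk is a staircase of basin energies, which is what makes the method effective on rugged cluster potential energy surfaces.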
Electronic neural network for solving traveling salesman and similar global optimization problems
NASA Technical Reports Server (NTRS)
Thakoor, Anilkumar P. (Inventor); Moopenn, Alexander W. (Inventor); Duong, Tuan A. (Inventor); Eberhardt, Silvio P. (Inventor)
1993-01-01
This invention is a novel high-speed neural network based processor for solving the 'traveling salesman' and other global optimization problems. It comprises a novel hybrid architecture employing a binary synaptic array whose embodiment incorporates the fixed rules of the problem, such as the number of cities to be visited. The array is prompted by analog voltages representing variables such as distances. The processor incorporates two interconnected feedback networks, each of which solves part of the problem independently and simultaneously, yet which exchange information dynamically.
Response of snow-dependent hydrologic extremes to continued global warming
Diffenbaugh, Noah S.; Scherer, Martin; Ashfaq, Moetasim
2013-01-01
Snow accumulation is critical for water availability in the northern hemisphere (1,2), raising concern that global warming could have important impacts on natural and human systems in snow-dependent regions (1,3). Although regional hydrologic changes have been observed (e.g., 1,3–5), the time of emergence of extreme changes in snow accumulation and melt remains a key unknown for assessing climate change impacts (3,6,7). We find that the CMIP5 global climate model ensemble exhibits an imminent shift towards low snow years in the northern hemisphere, with areas of western North America, northeastern Europe, and the Greater Himalaya showing the strongest emergence during the near-term decades and at 2°C global warming. The occurrence of extremely low snow years becomes widespread by the late-21st century, as do the occurrences of extremely high early-season snowmelt and runoff (implying increasing flood risk), and extremely low late-season snowmelt and runoff (implying increasing water stress). Our results suggest that many snow-dependent regions of the northern hemisphere are likely to experience increasing stress from low snow years within the next three decades, and from extreme changes in snow-dominated water resources if global warming exceeds 2°C above the pre-industrial baseline. PMID:24015153
2012-01-01
Background In workforces that are traditionally mobile and have long lead times for new supply, such as health, effective global indicators of tertiary education are increasingly essential. Difficulties with transportability of qualifications and cross-accreditation are now recognised as key barriers to meeting the rapidly shifting international demands for health care providers. The plethora of mixed education and service arrangements poses challenges for employers and regulators, let alone patients, in determining equivalence of training and competency between individuals, institutions and geographical locations. Discussion This paper outlines the shortfall of the current indicators in assisting the process of global certification and competency recognition in the health care workforce. Using Organisation for Economic Cooperation and Development (OECD) data, we highlight how international standardisation in the tertiary education sector is problematic for the global health workforce. Through a series of case studies, we then describe a model which enables institutions to compare themselves internally and with others internationally, using bespoke or prioritised parameters rather than standards. Summary The mobility of the global health workforce means that transportability of qualifications is an increasing area of concern. Valid qualifications based on workplace learning and assessment require at least some variables to be benchmarked in order to judge performance. PMID:22776517
Response of snow-dependent hydrologic extremes to continued global warming.
Diffenbaugh, Noah S; Scherer, Martin; Ashfaq, Moetasim
2013-04-01
Snow accumulation is critical for water availability in the northern hemisphere (1,2), raising concern that global warming could have important impacts on natural and human systems in snow-dependent regions (1,3). Although regional hydrologic changes have been observed (e.g., (1,3-5)), the time of emergence of extreme changes in snow accumulation and melt remains a key unknown for assessing climate change impacts (3,6,7). We find that the CMIP5 global climate model ensemble exhibits an imminent shift towards low snow years in the northern hemisphere, with areas of western North America, northeastern Europe, and the Greater Himalaya showing the strongest emergence during the near-term decades and at 2°C global warming. The occurrence of extremely low snow years becomes widespread by the late-21st century, as do the occurrences of extremely high early-season snowmelt and runoff (implying increasing flood risk), and extremely low late-season snowmelt and runoff (implying increasing water stress). Our results suggest that many snow-dependent regions of the northern hemisphere are likely to experience increasing stress from low snow years within the next three decades, and from extreme changes in snow-dominated water resources if global warming exceeds 2°C above the pre-industrial baseline. PMID:24015153
NASA Technical Reports Server (NTRS)
Sabaka, T. J.; Rowlands, D. D.; Luthcke, S. B.; Boy, J.-P.
2010-01-01
We describe Earth's mass flux from April 2003 through November 2008 by deriving a time series of mascons on a global 2° x 2° equal-area grid at 10 day intervals. We estimate the mass flux directly from K-band range rate (KBRR) data provided by the Gravity Recovery and Climate Experiment (GRACE) mission. Using regularized least squares, we take into account the underlying process dynamics through continuous space- and time-correlated constraints. In addition, we place the mascon approach in the context of other filtering techniques, showing its equivalence to anisotropic, nonsymmetric filtering, least squares collocation, and Kalman smoothing. We produce mascon time series from KBRR data that have and have not been corrected (forward modeled) for hydrological processes and find that the former produce superior results in oceanic areas by minimizing signal leakage from strong sources on land. By exploiting the structure of the spatiotemporal constraints, we are able to use a much more efficient (in storage and computation) inversion algorithm based upon the conjugate gradient method. This allows us to apply continuous rather than piecewise continuous time-correlated constraints, which, as we show via global maps and comparisons with ocean-bottom pressure gauges, produce time series with reduced random variance and full systematic signal. Finally, we present a preferred global model, a hybrid whose oceanic portions are derived using forward modeling of hydrology but whose land portions are not, and thus represent a pure GRACE-derived signal.
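The storage and computation advantage of the conjugate gradient inversion mentioned above can be illustrated in miniature. The sketch below is a toy, not the mascon estimator: the matrix sizes and regularization weight are invented. It solves a regularized least-squares problem while touching the design matrix only through matrix-vector products, so the regularized normal matrix is never formed explicitly.

```python
import numpy as np

def cg_regularized_lstsq(A, b, lam=0.1, tol=1e-10, max_iter=500):
    """Solve min ||Ax-b||^2 + lam*||x||^2 by conjugate gradients on the
    normal equations (A^T A + lam*I)x = A^T b, touching A only through
    matrix-vector products so the normal matrix is never assembled."""
    n = A.shape[1]
    matvec = lambda v: A.T @ (A @ v) + lam * v
    x = np.zeros(n)
    r = A.T @ b - matvec(x)             # initial residual
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))       # invented sizes, not GRACE dimensions
b = rng.standard_normal(50)
x = cg_regularized_lstsq(A, b, lam=0.1)
x_ref = np.linalg.solve(A.T @ A + 0.1 * np.eye(20), A.T @ b)  # dense check
```

The iterative solution matches the dense normal-equations solve, while needing only products with A and its transpose.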
Implementation of a near-optimal global set point control method in a DDC controller
Cascia, M.A.
2000-07-01
A near-optimal global set point control method that can be implemented in an energy management system's (EMS) DDC controller is described in this paper. Mathematical models are presented for the power consumption of electric chillers, hot water boilers, chilled and hot water pumps, and air handler fans, which allow the calculation of near-optimal chilled water, hot water, and coil discharge air set points to minimize power consumption, based on data collected by the EMS. Also optimized are the differential and static pressure set points for the variable speed pumps and fans. A pilot test of this control methodology was implemented for a cooling plant at a pharmaceutical manufacturing facility near Dallas, Texas. Data collected at this site showed good agreement between the actual power consumed by the chillers, chilled water pumps, and air handlers and that predicted by the models. An approximate model was developed to calculate real-time power savings in the DDC controller. A third-party energy accounting program was used to track savings due to the near-optimal control, and results show a monthly kWh reduction ranging from 3% to 14%.
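A minimal sketch of the near-optimal set point idea, under invented assumptions (the quadratic power models, their coefficients, and the single chilled-water set point variable are hypothetical, not the paper's models): chiller power falls and air-side power rises as the chilled-water set point increases, so total plant power has an interior minimum that a DDC controller could locate with a coarse search.

```python
import numpy as np

# Hypothetical quadratic power models (kW) as functions of the chilled-water
# set point T (deg C); coefficients are invented for illustration only.
def chiller_kw(T):
    return 250.0 - 20.0 * T + 0.5 * T**2   # chiller works harder at low T

def airside_kw(T):
    return 50.0 + 0.9 * T**2               # fans/pumps work harder at high T

# coarse search over the feasible set-point range, as a controller might run
grid = np.linspace(4.0, 10.0, 61)
total = chiller_kw(grid) + airside_kw(grid)
T_opt = grid[int(np.argmin(total))]        # near-optimal set point
savings = total.max() - total.min()        # spread across the feasible range
```

With these coefficients the total power curve bottoms out near 7.1 °C, strictly inside the search range, which is the situation where set point optimization pays off.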
Solving Continuous-Time Optimal-Control Problems with a Spreadsheet.
ERIC Educational Resources Information Center
Naevdal, Eric
2003-01-01
Explains how optimal control problems can be solved with a spreadsheet, such as Microsoft Excel. Suggests the method can be used by students, teachers, and researchers as a tool to find numerical solutions for optimal control problems. Provides several examples that range from simple to advanced. (JEH)
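The spreadsheet approach puts one control per cell, computes the objective as a formula over those cells, and lets the Solver adjust them. A rough Python analogue under stated assumptions (the cake-eating problem, log utility, and step size below are illustrative, not an example from the article):

```python
import numpy as np

# "Spreadsheet" optimal control: the controls live in an array (one cell per
# period), the objective is a formula over those cells, and a simple
# numerical optimizer plays the role of the spreadsheet Solver.
W, T = 10.0, 5                      # total "cake" and number of periods
c = np.linspace(0.5, 3.0, T)        # initial guesses in the control "cells"

for _ in range(5000):
    grad = 1.0 / c                  # gradient of the objective sum(log(c_t))
    c = c + 0.01 * grad             # ascent step on each cell
    c = c * (W / c.sum())           # re-impose the budget constraint
# with log utility and no discounting, the optimum spreads the cake evenly
```

The rescaling step plays the role of the Solver's equality constraint; the known optimum (equal consumption W/T in every period) makes the result easy to check.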
NASA Astrophysics Data System (ADS)
Acciavatti, Raymond J.; Maidment, Andrew D. A.
2012-03-01
In digital breast tomosynthesis (DBT), a reconstruction of the breast is generated from projections acquired over a limited range of x-ray tube angles. There are two principal schemes for acquiring projections: continuous tube motion and step-and-shoot motion. Although continuous tube motion has the benefit of reducing patient motion by lowering scan time, it has the drawback of introducing blurring artifacts due to focal spot motion. The purpose of this work is to determine the optimal scan time that best balances this trade-off. To this end, the filtered backprojection reconstruction of a sinusoidal input is calculated. At various frequencies, the optimal scan time is determined as the value that maximizes the modulation of the reconstruction. Although prior authors have studied the dependence of the modulation on focal spot motion, this work is unique in also modeling patient motion. It is shown that because continuous tube motion and patient motion have competing influences on whether scan time should be long or short, the modulation is maximized by an intermediate scan time. This optimal scan time decreases with object velocity and increases with exposure time. To optimize step-and-shoot motion, we calculate the scan time for which the modulation attains the maximum value achievable in a comparable system with continuous tube motion. This scan time provides a threshold below which the benefits of step-and-shoot motion are justified. In conclusion, this work optimizes scan time in DBT systems with patient motion and either continuous tube motion or step-and-shoot motion by maximizing the modulation of the reconstruction.
A Global Scale 30m Water Surface Detection Optimized and Validated for Landsat 8
NASA Astrophysics Data System (ADS)
Pekel, J. F.; Cottam, A.; Clerici, M.; Belward, A.; Dubois, G.; Bartholome, E.; Gorelick, N.
2014-12-01
Life on Earth as we know it is impossible without water. Its importance to biological diversity, human well-being and the very functioning of the Earth system cannot be overstressed, but we have remarkably little detailed knowledge concerning the spatial and temporal distribution of this vital resource. Earth-observing satellites operating with high temporal revisits yet moderate spatial resolution have provided global datasets documenting spatial and temporal changes to water bodies on the Earth's surface. Landsat 8 has a data acquisition strategy such that global coverage of all land surfaces now occurs more frequently than from any preceding Landsat mission, and provides 30 m resolution data. Whilst not the last word in temporal sampling, this presents a basis for mapping and monitoring changes to global surface water resources at unprecedented levels of spatial detail. In this paper we provide a first 30 m resolution global synthesis of surface water occurrence; we document permanent water surfaces, seasonal water surfaces and always-dry surfaces. These products have been derived by optimizing for Landsat 8 a methodology previously developed for use with moderate-resolution MODIS imagery. The approach is based on a transformation of RGB color space into HSV, combined with a sequence of cloud, topographic and temperature masks. Analysis at the global scale used the Google Earth Engine platform applied to all Landsat 8 acquisitions between June 2013 and June 2014. Systematic validation has been performed and demonstrates our ability to map surface water. Our method can be applied to other Landsat missions, offering the potential to document changes in surface water over three decades; our study shows examples illustrating the capacity to map new water surfaces and ephemeral water surfaces in addition to the three previous classes. Thanks to an optimized data acquisition strategy, a full, free and open data policy and the processing capacity of the GEE global land
Liu, Derong; Wang, Ding; Li, Hongliang
2014-02-01
In this paper, using a neural-network-based online learning optimal control approach, a novel decentralized control strategy is developed to stabilize a class of continuous-time nonlinear interconnected large-scale systems. First, optimal controllers of the isolated subsystems are designed with cost functions reflecting the bounds of the interconnections. Then, it is proven that the decentralized control strategy for the overall system can be established by adding appropriate feedback gains to the optimal control policies of the isolated subsystems. Next, an online policy iteration algorithm is presented to solve the Hamilton-Jacobi-Bellman equations related to the optimal control problem. By constructing a set of critic neural networks, the cost functions can be obtained approximately, followed by the control policies. Furthermore, the dynamics of the estimation errors of the critic networks are verified to be uniformly ultimately bounded. Finally, a simulation example is provided to illustrate the effectiveness of the presented decentralized control scheme. PMID:24807039
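For the special case of linear dynamics and quadratic cost, the policy iteration loop described above (policy evaluation followed by policy improvement) has a classical model-based form, Kleinman's algorithm, in which each evaluation step is a Lyapunov solve. The sketch below uses that simplification with an invented double-integrator example; the paper's neural-network critic handles the general nonlinear case, so this is an analogy, not the paper's method.

```python
import numpy as np

def solve_lyapunov(Acl, M):
    """Solve Acl^T P + P Acl = -M via the Kronecker/vec identity."""
    n = Acl.shape[0]
    I = np.eye(n)
    L = np.kron(I, Acl.T) + np.kron(Acl.T, I)      # column-major vec
    vecP = np.linalg.solve(L, -M.flatten(order="F"))
    return vecP.reshape(n, n, order="F")

def policy_iteration_lqr(A, B, Q, R, K0, iters=20):
    """Kleinman policy iteration: policy evaluation is a Lyapunov solve,
    policy improvement is K <- R^{-1} B^T P."""
    K = K0
    for _ in range(iters):
        Acl = A - B @ K                            # closed loop under K
        P = solve_lyapunov(Acl, Q + K.T @ R @ K)   # evaluate current policy
        K = np.linalg.solve(R, B.T @ P)            # improve the policy
    return K, P

A = np.array([[0.0, 1.0], [0.0, 0.0]])             # double integrator
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.eye(1)
K0 = np.array([[1.0, 1.0]])                        # any stabilizing start
K, P = policy_iteration_lqr(A, B, Q, R, K0)
# converges to the algebraic-Riccati solution K = [1, sqrt(3)]
```

Each iteration evaluates the current policy exactly and then improves it, which is the structure the critic networks approximate when the dynamics are nonlinear.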
NASA Astrophysics Data System (ADS)
Schmidt, Lennart; Krischer, Katharina
2015-06-01
We study an oscillatory medium with a nonlinear global coupling that gives rise to a harmonic mean-field oscillation with constant amplitude and frequency. Two types of cluster states are found, each undergoing a symmetry-breaking transition towards a related chimera state. We demonstrate that the diffusional coupling is non-essential for these complex dynamics. Furthermore, we investigate localized turbulence and discuss whether it can be categorized as a chimera state.
Response of snow-dependent hydrologic extremes to continued global warming
Diffenbaugh, Noah; Scherer, Martin; Ashfaq, Moetasim
2012-01-01
Snow accumulation is critical for water availability in the Northern Hemisphere (refs 1,2), raising concern that global warming could have important impacts on natural and human systems in snow-dependent regions (refs 1,3). Although regional hydrologic changes have been observed (for example, refs 1,3-5), the time of emergence of extreme changes in snow accumulation and melt remains a key unknown for assessing climate-change impacts (refs 3,6,7). We find that the CMIP5 global climate model ensemble exhibits an imminent shift towards low snow years in the Northern Hemisphere, with areas of western North America, northeastern Europe and the Greater Himalaya showing the strongest emergence during the near-term decades and at 2°C global warming. The occurrence of extremely low snow years becomes widespread by the late twenty-first century, as do the occurrences of extremely high early-season snowmelt and runoff (implying increasing flood risk), and extremely low late-season snowmelt and runoff (implying increasing water stress). Our results suggest that many snow-dependent regions of the Northern Hemisphere are likely to experience increasing stress from low snow years within the next three decades, and from extreme changes in snow-dominated water resources if global warming exceeds 2°C above the pre-industrial baseline.
NASA Astrophysics Data System (ADS)
Shaltev, M.
2016-02-01
The search for continuous gravitational waves in a wide parameter space at a fixed computing cost is most efficiently done with semicoherent methods, e.g., StackSlide, due to the prohibitive computing cost of the fully coherent search strategies. Prix and Shaltev [Phys. Rev. D 85, 084010 (2012)] have developed a semianalytic method for finding optimal StackSlide parameters at a fixed computing cost under ideal data conditions, i.e., gapless data and a constant noise floor. In this work, we consider more realistic conditions by allowing for gaps in the data and changes in the noise level. We show how the sensitivity optimization can be decoupled from the data selection problem. To find optimal semicoherent search parameters, we apply a numerical optimization using as an example the semicoherent StackSlide search. We also describe three different data selection algorithms. Thus, the outcome of the numerical optimization consists of the optimal search parameters and the selected data set. We first test the numerical optimization procedure under ideal conditions and show that we can reproduce the results of the analytical method. Then we gradually relax the conditions on the data and find that a compact data selection algorithm yields higher sensitivity compared to a greedy data selection procedure.
Hierarchical Grid-based Multi-People Tracking-by-Detection With Global Optimization.
Chen, Lili; Wang, Wei; Panin, Giorgio; Knoll, Alois
2015-11-01
We present a hierarchical grid-based, globally optimal tracking-by-detection approach to track an unknown number of targets in complex and dense scenarios, particularly addressing the challenges of complex interaction and mutual occlusion. Frame-by-frame detection is performed by hierarchical likelihood grids, matching shape templates through a fast oriented distance transform. To allow recovery from misdetections, common heuristics such as nonmaxima suppression within observations are eschewed. Within a discretized state space, the data association problem is formulated as a grid-based network flow model, resulting in a convex problem cast into integer linear programming form, yielding a globally optimal solution. In addition, we show how a behavior cue (body orientation) can be integrated into our association affinity model, providing valuable hints for resolving ambiguities between crossing trajectories. Unlike traditional motion-based approaches, we estimate body orientation by a hybrid methodology that combines the merits of motion-based and 3D appearance-based orientation estimation, and is thus capable of dealing also with still-standing or slowly moving targets. The performance of our method is demonstrated through experiments on a large variety of benchmark video sequences, including both indoor and outdoor scenarios. PMID:26151936
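At its core, tracking-by-detection associates tracks with detections by minimizing a global cost. A brute-force sketch of that objective follows (the cost matrix is a toy; practical systems, including the paper, solve the same objective with a min-cost network flow or integer linear program rather than enumerating permutations):

```python
import numpy as np
from itertools import permutations

def optimal_association(cost):
    """Globally optimal one-to-one matching of tracks to detections by
    exhaustive enumeration; real trackers scale this same global objective
    with a network-flow / integer-linear-programming formulation."""
    n = cost.shape[0]
    best_cost, best_perm = np.inf, None
    for perm in permutations(range(n)):
        c = sum(cost[i, j] for i, j in enumerate(perm))
        if c < best_cost:
            best_cost, best_perm = c, perm
    return best_perm, best_cost

# toy affinity costs between 3 tracks (rows) and 3 detections (columns)
cost = np.array([[4., 1., 3.],
                 [2., 0., 5.],
                 [3., 2., 2.]])
perm, total_cost = optimal_association(cost)   # track i -> detection perm[i]
```

Enumerating all n! assignments guarantees global optimality but only works for tiny n, which is exactly why the paper's polynomial-time flow formulation matters.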
Global design optimization for an axial-flow tandem pump based on surrogate method
NASA Astrophysics Data System (ADS)
Li, D. H.; Zhao, Y.; Wang, G. Y.
2013-12-01
A tandem pump, unlike a multistage pump, has no guide vanes between impellers. Better cavitation performance and a significant reduction of the axial geometry scale are important for high-speed propulsion. This study presents a global design optimization method based on a surrogate method for an axial-flow tandem pump to enhance two trade-off performances: energy performance and cavitation performance. At the same time, interactions between the impellers and their impacts on the performances are analyzed. The fixed blade angles of the two impellers and the phase angle are taken as design variables. Efficiency and the minimum average pressure coefficient (MAPC) on the axial sectional surface in the front impeller are the objective functions, which represent energy and cavitation performances well. Different surrogate models are constructed, and Global Sensitivity Analysis and the Pareto Front method are used. The results show that: 1) the influence of the phase angle on the performances is negligible compared with the other two design variables; 2) the impact ratio of the fixed blade angles of the two impellers on efficiency is the same as their designed loading distribution, which is 4:6; 3) the optimization enhances the trade-off performances well: efficiency is improved by 0.6%, and the MAPC is improved by 4.5%.
CH4 parameter estimation in CLM4.5bgc using surrogate global optimization
NASA Astrophysics Data System (ADS)
Müller, J.; Paudel, R.; Shoemaker, C. A.; Woodbury, J.; Wang, Y.; Mahowald, N.
2015-01-01
Over the Anthropocene, methane has increased dramatically. Wetlands are one of the major sources of methane to the atmosphere, but the role of changes in wetland emissions is not well understood. The Community Land Model (CLM) of the Community Earth System Models contains a module to estimate methane emissions from natural wetlands and rice paddies. Our comparison of CH4 emission observations at 16 sites around the planet reveals, however, that there are large discrepancies between the CLM predictions and the observations. The goal of our study is to adjust the model parameters in order to minimize the root mean squared error (RMSE) between model predictions and observations. These parameters have been selected based on a sensitivity analysis. Because of the cost associated with running the CLM simulation (15 to 30 min on the Yellowstone Supercomputing Facility), only relatively few simulations can be allowed in order to find a near-optimal solution within an acceptable time. Our results indicate that the parameter estimation problem has multiple local minima. Hence, we use a computationally efficient global optimization algorithm that uses a radial basis function (RBF) surrogate model to approximate the objective function. We use the information from the RBF to select parameter values that are most promising with respect to improving the objective function value. We show with pseudo data that our optimization algorithm is able to make excellent progress with respect to decreasing the RMSE. Using the true CH4 emission observations for optimizing the parameters, we are able to significantly reduce the overall RMSE between observations and model predictions by about 50%. The CLM predictions with the optimized parameters agree more closely with the observed data in northern and tropical latitudes than those using the default parameters, and the emission predictions are higher than with default settings in northern latitudes and lower in the tropics.
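The surrogate strategy can be sketched in one dimension: spend a small budget of expensive evaluations, fit a cheap radial-basis-function interpolant, and let the surrogate propose each new evaluation point. Everything below (the cubic RBF with linear tail, the test function standing in for the expensive RMSE objective, the budgets) is an illustrative assumption, not the CLM4.5 setup.

```python
import numpy as np

def rbf_fit(X, y):
    """Fit a 1-D cubic RBF interpolant s(x) = sum_i w_i|x - x_i|^3 + a x + b."""
    n = X.size
    Phi = np.abs(X[:, None] - X[None, :]) ** 3
    P = np.column_stack([X, np.ones(n)])
    A = np.block([[Phi, P], [P.T, np.zeros((2, 2))]])
    coef = np.linalg.solve(A, np.concatenate([y, np.zeros(2)]))
    return coef[:n], coef[n:]

def rbf_eval(x, X, w, ab):
    return (np.abs(x[:, None] - X[None, :]) ** 3) @ w + ab[0] * x + ab[1]

def surrogate_minimize(f, lo, hi, n_init=6, budget=30, seed=0):
    """Evaluate the expensive f a few times, fit the cheap surrogate, and
    let the surrogate propose where to spend each remaining evaluation."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, n_init)
    y = np.array([f(x) for x in X])
    for _ in range(budget - n_init):
        w, ab = rbf_fit(X, y)
        cand = rng.uniform(lo, hi, 4000)
        # keep candidates away from existing points so the fit stays stable
        cand = cand[np.min(np.abs(cand[:, None] - X[None, :]), axis=1) > 1e-3]
        x_new = cand[np.argmin(rbf_eval(cand, X, w, ab))]
        X, y = np.append(X, x_new), np.append(y, f(x_new))
    return X[np.argmin(y)], y.min()

f = lambda x: np.sin(3 * x) + 0.1 * x**2   # cheap stand-in for the model RMSE
x_best, f_best = surrogate_minimize(f, -3.0, 3.0)
```

Only 30 evaluations of f are ever made; all the search effort goes into the surrogate, which is the point when each true evaluation is a 15 to 30 minute model run.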
ERIC Educational Resources Information Center
Ekanem, Ekpenyong E.; Ekpiken, William E.
2013-01-01
Continuous assessment is an important management tool for transforming university education. Although this policy employed measurable criteria to retain students' interest and objectivity, most academic staff of Nigerian universities lack basic knowledge and skills in test construction and interpretation and are thus, ineffective in continuous…
The Global Challenge in Basic Education: Why Continued Investment in Basic Education Is Important
ERIC Educational Resources Information Center
Mertaugh, Michael T.; Jimenez, Emmanuel Y.; Patrinos, Harry A.
2009-01-01
This paper documents the importance of continued investment in basic education and argues that investments need to be carefully targeted to address the constraints that limit the coverage and quality of education if they are to provide expected benefits. Part I begins with a discussion of the returns to investment in education. Part II then…
Export dynamics as an optimal growth problem in the network of global economy.
Caraglio, Michele; Baldovin, Fulvio; Stella, Attilio L
2016-01-01
We analyze export data aggregated at the world global level for 219 classes of products over a period of 39 years. Our main goal is to set up a dynamical model to identify and quantify plausible mechanisms by which the evolutions of the various exports affect each other. This is pursued through a stochastic differential description, partly inspired by approaches used in population dynamics or directed polymers in random media. We outline a complex network of transfer rates which describes how resources are shifted between different product classes, and determines how casual favorable conditions for one export can spread to the other ones. A calibration procedure allows us to fit four free model parameters such that the dynamical evolution becomes consistent with the average growth, the fluctuations, and the ranking of the export values observed in real data. Growth crucially depends on the balance between maintaining and shifting resources to different exports, as in an explore-exploit problem. Remarkably, the calibrated parameters warrant a close-to-maximum growth rate under the transient conditions realized in the period covered by the data, implying an optimal self-organization of the global export. According to the model, major structural changes in the global economy take tens of years. PMID:27530505
Song, Zhaoliang; Parr, Jeffrey F.; Guo, Fengshan
2013-01-01
The occlusion of carbon (C) by phytoliths, the recalcitrant silicified structures deposited within plant tissues, is an important persistent C sink mechanism for croplands and other grass-dominated ecosystems. By constructing a silica content-phytolith content transfer function and calculating the magnitude of the phytolith C sink in global croplands from relevant crop production data, this study investigated the present and potential phytolith C sinks in global croplands and their contribution to the cropland C balance, in order to understand the cropland C cycle and enhance long-term C sequestration in croplands. Our results indicate that the phytolith sink annually sequesters 26.35±10.22 Tg of carbon dioxide (CO2) and may contribute 40±18% of the global net cropland soil C sink for 1961–2100. Rice (25%), wheat (19%) and maize (23%) are the dominant contributing crop species to this phytolith C sink. Continentally, the main contributors are Asia (49%), North America (17%) and Europe (16%). The sink has tripled since 1961, mainly due to fertilizer application and irrigation. Cropland phytolith C sinks may be further enhanced by adopting cropland management practices such as optimization of cropping systems and fertilization. PMID:24066067
Export dynamics as an optimal growth problem in the network of global economy
Caraglio, Michele; Baldovin, Fulvio; Stella, Attilio L.
2016-01-01
We analyze export data aggregated at the world global level for 219 classes of products over a period of 39 years. Our main goal is to set up a dynamical model to identify and quantify plausible mechanisms by which the evolutions of the various exports affect each other. This is pursued through a stochastic differential description, partly inspired by approaches used in population dynamics or directed polymers in random media. We outline a complex network of transfer rates which describes how resources are shifted between different product classes, and determines how casual favorable conditions for one export can spread to the other ones. A calibration procedure allows us to fit four free model parameters such that the dynamical evolution becomes consistent with the average growth, the fluctuations, and the ranking of the export values observed in real data. Growth crucially depends on the balance between maintaining and shifting resources to different exports, as in an explore-exploit problem. Remarkably, the calibrated parameters warrant a close-to-maximum growth rate under the transient conditions realized in the period covered by the data, implying an optimal self-organization of the global export. According to the model, major structural changes in the global economy take tens of years. PMID:27530505
NASA Astrophysics Data System (ADS)
Hew, Y. M.; Linscott, I.; Close, S.
2015-12-01
Meteoroids and orbital debris, collectively referred to as hypervelocity impactors, travel at between 7 and 72 km/s in free space. Upon impact onto a spacecraft, the conversion of kinetic energy to ionization and vaporization occurs within a very brief timescale and results in a small, dense expanding plasma with a very strong optical flash. The radio frequency (RF) emission produced by this plasma can potentially lead to electrical anomalies within the spacecraft. In addition, space weather, such as solar activity and background plasma, can establish spacecraft conditions that exacerbate the damage done by these impacts. By studying the emission spectrum of the impact flash, we aim to infer the properties of the impact-generated gas cloud and plasma. The flash emitted during ground-based hypervelocity impact tests has long been expected to encode characteristics of the impact-generated plasma, such as plasma temperature and density. This paper presents a method for time-resolved plasma temperature estimation using three-color visible-band photometry data with a global pattern search optimization method. The equilibrium temperature of the plasma can be estimated using an optical model that accounts for both the line emission and the continuum emission from the plasma. Using a global pattern-search-based optimizer, the model can isolate the contribution of the continuum emission from that of the line emission, and the plasma temperature can thus be estimated. Prior to the optimization step, a Gaussian process is applied to extract the optical emission signal from the noisy background. The resulting temperature and line-to-continuum weighting factor are consistent with the spectrum of the impactor material and the current literature.
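The continuum part of such a temperature estimate can be sketched with pure blackbody photometry: the ratios of three band intensities depend only on temperature, so a one-dimensional grid ("pattern") search over temperature recovers it. The band centers and the temperature below are invented, and line emission is ignored entirely, so this is a simplification of the paper's optical model, not a reproduction of it.

```python
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23   # Planck, light speed, Boltzmann

def planck(lam, T):
    """Blackbody spectral radiance at wavelength lam (m), temperature T (K)."""
    return (2.0 * H * C**2 / lam**5) / np.expm1(H * C / (lam * KB * T))

bands = np.array([450e-9, 550e-9, 650e-9])  # assumed blue/green/red centers
T_true = 4200.0                             # invented plasma temperature
meas = planck(bands, T_true)
meas_ratio = meas / meas.sum()              # ratios cancel the unknown scale

grid = np.arange(1000.0, 10000.0, 1.0)      # brute "pattern" search over T
resid = [np.sum((planck(bands, T) / planck(bands, T).sum() - meas_ratio) ** 2)
         for T in grid]
T_est = grid[int(np.argmin(resid))]
```

Normalizing each set of band intensities removes the unknown emissivity and geometry factor, so the fit is over temperature alone.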
Combined satellite systems for continuous global coverage in equatorial and polar circular orbits
NASA Astrophysics Data System (ADS)
Ulybyshev, S. Yu.
2015-07-01
A method is presented for designing nonuniform satellite systems for continuous global coverage using a combination of equatorial and polar satellite groupings. Equations are derived for determining the basic design parameters of the entire satellite system and the conditions of its closure at the joint of the polar and equatorial segments. We analyze the characteristic features of such systems and their advantages and disadvantages in comparison with the best-known types of polar phased and kinematically correct satellite systems. We consider versions of the nonuniform satellite systems with different flight altitudes and numbers of spacecraft in the equatorial and polar planes, and present numerical examples.
Wu, Xiaodong; Dou, Xin; Wahle, Andreas; Sonka, Milan
2011-03-01
Efficient segmentation of globally optimal surfaces in volumetric images is a central problem in many medical image analysis applications. Intraclass variance has been successfully utilized for object segmentation, for instance, in the Chan-Vese model, especially for images without prominent edges. In this paper, we study the optimization problem of detecting a region (volume) between two coupled smooth surfaces by minimizing the intraclass variance using an efficient polynomial-time algorithm. Our algorithm is based on the shape probing technique in computational geometry and computes a sequence of minimum-cost closed sets in a derived parametric graph. The method has been validated on computer-synthetic volumetric images and in X-ray CT-scanned datasets of plexiglas tubes of known sizes. Its applicability to clinical data sets was also demonstrated. In all cases, the approach yielded highly accurate results. We believe that the developed technique is of interest on its own. We expect that it can shed some light on solving other important optimization problems arising in medical imaging. Furthermore, we report an approximation algorithm which runs much faster than the exact algorithm while yielding highly comparable segmentation accuracy. PMID:21118766
NASA Astrophysics Data System (ADS)
Huang, Zhipeng; Gao, Lihong; Wang, Yangwei; Wang, Fuchi
2016-06-01
The Johnson-Cook (J-C) constitutive model is widely used in finite element simulation because it expresses the relationship between stress and strain in a simple way. In this paper, a cluster global optimization algorithm is proposed to determine the J-C constitutive model parameters of materials. A set of assumed parameters is used to verify the accuracy of the procedure, and the parameters of two materials (401 steel and 823 steel) are determined. Results show that the procedure is reliable and effective. The relative error between the optimized and assumed parameters is no more than 4.02%, and the relative error between the optimized and assumed stress is 0.2 × 10⁻⁵ %. The J-C constitutive parameters can be determined more precisely and quickly than with the traditional manual procedure. Furthermore, all the parameters can be determined simultaneously using several curves obtained under different experimental conditions. A strategy is also proposed to accurately determine the constitutive parameters.
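At the reference strain rate and temperature, the J-C model reduces to sigma = A + B*eps^n, and for any fixed hardening exponent n the parameters A and B follow from linear least squares, so a global search over n alone suffices. The sketch below uses that separable structure with invented parameter values (not the 401 or 823 steel data, and not the paper's cluster algorithm):

```python
import numpy as np

def jc_stress(eps, A, B, n):
    """J-C flow stress at the reference strain rate and temperature, where
    the rate and thermal factors both reduce to 1: sigma = A + B*eps^n."""
    return A + B * eps**n

# synthetic "experimental" curve from assumed (not measured) parameters
eps = np.linspace(0.02, 0.5, 25)
A_true, B_true, n_true = 792.0, 510.0, 0.26
sigma = jc_stress(eps, A_true, B_true, n_true)

# separable global search: for each candidate n, A and B follow from linear
# least squares, so only the hardening exponent n needs a 1-D scan
best = (np.inf, None)
for n in np.arange(0.05, 0.95, 0.001):
    M = np.column_stack([np.ones_like(eps), eps**n])
    AB = np.linalg.lstsq(M, sigma, rcond=None)[0]
    sse = np.sum((M @ AB - sigma) ** 2)
    if sse < best[0]:
        best = (sse, (AB[0], AB[1], n))
sse, (A_fit, B_fit, n_fit) = best
```

Exploiting the linearity in A and B collapses a three-parameter global search into a cheap one-dimensional scan, recovering the assumed parameters essentially exactly on noise-free data.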
Globally Optimal Base Station Clustering in Interference Alignment-Based Multicell Networks
NASA Astrophysics Data System (ADS)
Brandt, Rasmus; Mochaourab, Rami; Bengtsson, Mats
2016-04-01
Coordinated precoding based on interference alignment is a promising technique for improving the throughputs in future wireless multicell networks. In small networks, all base stations can typically jointly coordinate their precoding. In large networks, however, base station clustering is necessary due to the otherwise overwhelmingly high channel state information (CSI) acquisition overhead. In this work, we provide a branch and bound algorithm for finding the globally optimal base station clustering. The algorithm is mainly intended for benchmarking existing suboptimal clustering schemes. We propose a general model for the user throughputs, which only depends on the long-term CSI statistics. The model assumes intracluster interference alignment and is able to account for the CSI acquisition overhead. By enumerating a search tree using a best-first search and pruning subtrees in which the optimal solution provably cannot be, the proposed method converges to the optimal solution. The pruning is done using specifically derived bounds, which exploit some assumed structure in the throughput model. It is empirically shown that the proposed method has an average complexity which is orders of magnitude lower than that of exhaustive search.
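The best-first branch and bound pattern (expand the node with the best optimistic bound, prune subtrees whose bound cannot beat the incumbent) is easiest to see on a toy 0/1 knapsack rather than the paper's clustering model; the instance below is invented and the bound is the standard fractional relaxation, not the throughput bounds derived in the paper.

```python
import heapq

def knapsack_bnb(values, weights, cap):
    """Best-first branch and bound: always expand the node with the best
    optimistic (fractional-relaxation) bound and prune subtrees whose
    bound cannot beat the incumbent solution."""
    order = sorted(range(len(values)), key=lambda i: -values[i] / weights[i])

    def bound(idx, val, room):               # optimistic value from a node
        for i in order[idx:]:
            if weights[i] <= room:
                room -= weights[i]
                val += values[i]
            else:
                return val + values[i] * room / weights[i]
        return val

    best = 0
    heap = [(-bound(0, 0, cap), 0, 0, cap)]  # (-bound, idx, value, room)
    while heap:
        nb, idx, val, room = heapq.heappop(heap)
        if -nb <= best or idx == len(order):
            continue                         # pruned or fully expanded
        i = order[idx]
        if weights[i] <= room:               # branch 1: take item i
            nval, nroom = val + values[i], room - weights[i]
            best = max(best, nval)
            heapq.heappush(heap, (-bound(idx + 1, nval, nroom),
                                  idx + 1, nval, nroom))
        # branch 2: skip item i
        heapq.heappush(heap, (-bound(idx + 1, val, room), idx + 1, val, room))
    return best

best_value = knapsack_bnb([60, 100, 120], [10, 20, 30], 50)  # optimum: 220
```

The incumbent tightens as better complete solutions are found, so whole subtrees are discarded without enumeration, which is where the orders-of-magnitude savings over exhaustive search come from.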
Searchlight Correlation Detectors: Optimal Seismic Monitoring Using Regional and Global Networks
NASA Astrophysics Data System (ADS)
Gibbons, Steven J.; Kværna, Tormod; Näsholm, Sven Peter
2015-04-01
The sensitivity of correlation detectors increases greatly when the outputs from multiple seismic traces are considered. For single-array monitoring, a zero-offset stack of individual correlation traces will provide significant noise suppression and enhanced sensitivity for a source region surrounding the hypocenter of the master event. The extent of this region is limited only by the decrease in waveform similarity with increasing hypocenter separation. When a regional or global network of arrays and/or 3-component stations is employed, the zero-offset approach is only optimal when the master and detected events are co-located exactly. In many monitoring situations, including nuclear test sites and geothermal fields, events may be separated by up to many hundreds of meters while still retaining sufficient waveform similarity for correlation detection on single channels. However, the traveltime differences resulting from the hypocenter separation may result in significant beam loss on the zero-offset stack and a deployment of many beams for different hypothetical source locations in geographical space is required. The beam deployment necessary for optimal performance of the correlation detectors is determined by an empirical network response function which is most easily evaluated using the auto-correlation functions of the waveform templates from the master event. The correlation detector beam deployments for providing optimal network sensitivity for the North Korea nuclear test site are demonstrated for both regional and teleseismic monitoring configurations.
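A zero-offset correlation stack can be sketched as follows: correlate the master-event template against each channel, then average the correlation traces so that noise is suppressed while a co-located event stacks coherently. The synthetic template, noise level, and arrival sample below are assumptions for illustration, not real seismic data.

```python
import numpy as np

def corr_trace(data, tmpl):
    """Pearson correlation of a master-event template with a continuous
    trace at every lag (a simple single-channel correlation detector)."""
    nt = tmpl.size
    tn = (tmpl - tmpl.mean()) / tmpl.std()
    out = np.empty(data.size - nt + 1)
    for k in range(out.size):
        win = data[k:k + nt]
        out[k] = np.dot((win - win.mean()) / (win.std() + 1e-12), tn) / nt
    return out

rng = np.random.default_rng(1)
tmpl = np.sin(2 * np.pi * 0.05 * np.arange(100)) * np.hanning(100)
traces = []
for _ in range(3):                    # three channels observing one event
    d = 0.5 * rng.standard_normal(1000)
    d[300:400] += tmpl                # identical arrival lag on each channel
    traces.append(corr_trace(d, tmpl))
stack = np.mean(traces, axis=0)       # zero-offset stack
detection = int(np.argmax(stack))     # peak marks the detected event
```

Because the arrival lag is identical on every channel, the zero-offset stack is optimal here; the abstract's point is that hypocenter separation breaks this alignment, requiring a deployment of correlation beams over hypothetical source locations instead.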
Selection of Thermal Worst-Case Orbits via Modified Efficient Global Optimization
NASA Technical Reports Server (NTRS)
Moeller, Timothy M.; Wilhite, Alan W.; Liles, Kaitlin A.
2014-01-01
Efficient Global Optimization (EGO) was used to select orbits with worst-case hot and cold thermal environments for the Stratospheric Aerosol and Gas Experiment (SAGE) III. The SAGE III system thermal model had changed substantially since the previous selection of worst-case orbits (which did not use the EGO method), so the selections were revised to ensure that the worst cases are still captured. The EGO method consists of first conducting an initial set of parametric runs, generated with a space-filling Design of Experiments (DoE) method, then fitting a surrogate model to the data and searching for points of maximum Expected Improvement (EI) at which to conduct additional runs. The general EGO method was modified by using a multi-start optimizer to identify multiple new test points at each iteration. This modification facilitates parallel computing and decreases the burden of user interaction when the optimizer code is not integrated with the model. Thermal worst-case orbits for SAGE III were successfully identified and shown by direct comparison to be more severe than those identified in the previous selection. The EGO method is a useful tool for this application and can result in computational savings if the initial DoE is selected appropriately.
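The Expected Improvement acquisition at the heart of EGO has a closed form under a Gaussian surrogate. A minimal sketch (for minimization, using only the surrogate's predicted mean and standard deviation at a candidate point), plus a crude top-k selection standing in for the paper's multi-start modification:

```python
import math

def expected_improvement(mu, sigma, f_best):
    """EI of a candidate with surrogate prediction N(mu, sigma^2), given the
    best (lowest) objective value f_best observed so far."""
    if sigma <= 0.0:                       # no predictive uncertainty
        return max(f_best - mu, 0.0)
    z = (f_best - mu) / sigma
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))   # standard normal CDF
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return (f_best - mu) * cdf + sigma * pdf

def top_candidates(preds, f_best, k):
    """Simplified stand-in for the multi-start modification: return the k
    candidate points of highest EI, to be evaluated in parallel.
    preds maps candidate -> (mu, sigma)."""
    ranked = sorted(preds, reverse=True,
                    key=lambda x: expected_improvement(*preds[x], f_best))
    return ranked[:k]
```

EI rewards both promising means and large uncertainty, which is what drives the balance between exploiting the surrogate and exploring sparsely sampled regions of the orbit-parameter space.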
GalaxyDock2: protein-ligand docking using beta-complex and global optimization.
Shin, Woong-Hee; Kim, Jae-Kwan; Kim, Deok-Soo; Seok, Chaok
2013-11-15
In this article, an enhanced version of the GalaxyDock protein-ligand docking program is introduced. GalaxyDock performs conformational space annealing (CSA) global optimization to find the optimal binding pose of a ligand both in the rigid-receptor mode and the flexible-receptor mode. Binding pose prediction has been improved compared to the earlier version by the efficient generation of high-quality initial conformations for CSA using a predocking method based on a beta-complex derived from the Voronoi diagram of receptor atoms. Binding affinity prediction has also been enhanced by using the optimal combination of energy components, while taking into consideration the energy of the unbound ligand state. The new version has been tested in terms of binding mode prediction, binding affinity prediction, and virtual screening on several benchmark sets, showing improved performance over the previous version and over AutoDock, on which the GalaxyDock energy function is based. GalaxyDock2 also performs better than or comparably to other state-of-the-art docking programs. GalaxyDock2 is freely available at http://galaxy.seoklab.org/softwares/galaxydock.html. PMID:24108416
NASA Technical Reports Server (NTRS)
Kurits, Inna; Lewis, M. J.; Hamner, M. P.; Norris, Joseph D.
2007-01-01
Heat transfer rates are an extremely important consideration in the design of hypersonic vehicles such as atmospheric reentry vehicles. This paper describes the development of a data reduction methodology to evaluate global heat transfer rates using surface temperature-time histories measured with the temperature sensitive paint (TSP) system at AEDC Hypervelocity Wind Tunnel 9. As a part of this development effort, a scale model of the NASA Crew Exploration Vehicle (CEV) was painted with TSP and multiple sequences of high resolution images were acquired during a five-run test program. Heat transfer calculation from TSP data in Tunnel 9 is challenging due to relatively long run times, a high Reynolds number environment, and the desire to utilize the typical stainless steel wind tunnel models used for force and moment testing. An approach to reduce TSP data into convective heat flux was developed, taking these conditions into consideration. Surface temperatures from high quality quantitative global temperature maps acquired with the TSP system were then used as input to the algorithm. A preliminary comparison of the heat flux calculated using the TSP surface temperature data with the value calculated using standard thermocouple data is reported.
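The Tunnel 9 reduction algorithm itself must handle long run times and finite wall thickness, but the classical starting point for turning a surface temperature-time history into heat flux is the one-dimensional semi-infinite conduction (Cook-Felderman) formula. A sketch with assumed stainless-steel properties, self-checked against the exact constant-flux solution:

```python
import math

def cook_felderman(times, temps, rho_c_k):
    """Heat flux q(t_n) from a surface temperature history, assuming 1D
    conduction into a semi-infinite solid. rho_c_k is the product
    density * specific heat * thermal conductivity of the wall material."""
    coef = 2.0 * math.sqrt(rho_c_k / math.pi)
    q = [0.0]
    for n in range(1, len(times)):
        s = 0.0
        for i in range(1, n + 1):
            s += (temps[i] - temps[i - 1]) / (
                math.sqrt(times[n] - times[i - 1]) + math.sqrt(times[n] - times[i]))
        q.append(coef * s)
    return q

# Self-check: a constant flux q0 into a semi-infinite wall gives
# T(t) = T0 + 2*q0*sqrt(t / (pi * rho_c_k)), so feeding that history back
# through the reduction should recover q0.
RHO_C_K = 7900.0 * 480.0 * 15.0   # assumed stainless-steel rho*c*k (SI units)
q0 = 5.0e4                        # 50 kW/m^2
times = [i * 0.0025 for i in range(401)]   # 1 s of data at 400 Hz
temps = [2.0 * q0 * math.sqrt(t / (math.pi * RHO_C_K)) for t in times]
q = cook_felderman(times, temps, RHO_C_K)
```

The discrete sum is exact for a piecewise-linear temperature history, which is why the recovered flux sits within a percent or so of the imposed constant value.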
Optimal experimental designs for dose-response studies with continuous endpoints.
Holland-Letz, Tim; Kopp-Schneider, Annette
2015-11-01
In most areas of clinical and preclinical research, the required sample size determines the costs and effort of any project; thus, optimizing sample size is of primary importance. The experimental design of dose-response studies is determined by the number and choice of dose levels as well as the allocation of sample size to each level. The experimental design of toxicological studies tends to be motivated by convention. Statistical optimal design theory, however, allows the setting of experimental conditions (dose levels, measurement times, etc.) in a way which minimizes the number of measurements and subjects needed to obtain the desired precision of the results. While the general theory is well established, the mathematical complexity of the problem has so far prevented widespread use of these techniques in practical studies. This paper explains the concepts of statistical optimal design theory with a minimum of mathematical terminology and uses these concepts to generate concrete usable D-optimal experimental designs for dose-response studies on the basis of three common dose-response functions in toxicology: log-logistic, log-normal and Weibull functions with four parameters each. The resulting designs usually require control plus only three dose levels and are quite intuitively plausible. The optimal designs are compared to traditional designs such as the typical setup of cytotoxicity studies for 96-well plates. As the optimal design depends on prior estimates of the dose-response function parameters, it is shown what loss of efficiency occurs if the parameters for design determination are misspecified, and how Bayes optimal designs can improve the situation. PMID:25155192
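The D-optimality machinery reduces to maximizing the log-determinant of the Fisher information matrix assembled from outer products of the model gradient at the prior parameter guess. A small sketch for the four-parameter log-logistic mean; the parameter values, candidate dose grid, and brute-force search over four support points are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np
from itertools import combinations

def grad_4pl(x, b, c, d, e):
    """Gradient of the 4-parameter log-logistic mean
    mu(x) = c + (d - c) / (1 + (x/e)**b) with respect to (b, c, d, e)."""
    if x == 0.0:
        return np.array([0.0, 0.0, 1.0, 0.0])   # limit as x -> 0 (for b > 0)
    u = (x / e) ** b
    den = (1.0 + u) ** 2
    return np.array([
        -(d - c) * u * np.log(x / e) / den,   # d mu / d b
        1.0 - 1.0 / (1.0 + u),                # d mu / d c
        1.0 / (1.0 + u),                      # d mu / d d
        (d - c) * u * b / (e * den),          # d mu / d e
    ])

def log_det_info(doses, theta):
    """D-criterion: log det of the equal-weight Fisher information matrix."""
    M = sum(np.outer(g, g) for g in (grad_4pl(x, *theta) for x in doses))
    M = M / len(doses)
    sign, ld = np.linalg.slogdet(M)
    return ld if sign > 0 else -np.inf

# Prior guess: slope b, high-dose asymptote c, zero-dose response d, ED50 e.
theta0 = (1.5, 0.1, 1.0, 10.0)
candidates = [0.0] + list(np.geomspace(0.1, 1000.0, 15))
best = max(combinations(candidates, 4), key=lambda ds: log_det_info(ds, theta0))
```

With four parameters, four support points (control plus three dose levels, as the abstract notes) are enough for a non-singular information matrix, and the brute-force optimum dominates any hand-picked naive spacing on the same grid.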
Comparison of batch and continuous multi-column protein A capture processes by optimal design.
Baur, Daniel; Angarita, Monica; Müller-Späth, Thomas; Steinebach, Fabian; Morbidelli, Massimo
2016-07-01
Multi-column capture processes show several advantages compared to batch capture. It is, however, not evident exactly how many columns one should use. To investigate this issue, twin-column CaptureSMB, 3- and 4-column periodic counter-current chromatography (PCC), and single-column batch capture are numerically optimized and compared in terms of process performance for capturing a monoclonal antibody using protein A chromatography. Optimization is carried out with respect to productivity and capacity utilization (amount of product loaded per cycle compared to the maximum amount possible), while keeping yield and purity constant. For a wide range of process parameters, all three multi-column processes show similar maximum capacity utilization and perform significantly better than batch. When maximizing productivity, the CaptureSMB process shows optimal performance, except at high feed titers, where batch chromatography can reach higher productivity values than the multi-column processes due to the complete decoupling of the loading and elution steps, albeit at a large cost in terms of capacity utilization. In terms of trade-off, i.e. how much the capacity utilization decreases with increasing productivity, CaptureSMB is optimal for low and high feed titers, whereas the 3-column process is optimal in an intermediate region. Using these findings, the most suitable process can be chosen for different production scenarios. PMID:26992151
Prusa, Joseph
2012-05-08
This project had goals of advancing the performance capabilities of the numerical general circulation model EULAG and using it to produce a fully operational atmospheric global climate model (AGCM) that can employ either static or dynamic grid stretching for targeted phenomena. The resulting AGCM combined EULAG's advanced dynamics core with the physics of the NCAR Community Atmospheric Model (CAM). The effort discussed below shows how we improved model performance and tested both EULAG and the coupled CAM-EULAG in several ways to demonstrate grid stretching and the ability to simulate a wide range of scales very well, that is, multi-scale capability. We leveraged our effort through interaction with an international EULAG community that has collectively developed new features and applications of EULAG, which we exploited for our own work summarized here. Overall, the work contributed to over 40 peer-reviewed publications and over 70 conference/workshop/seminar presentations, many of them invited.
NASA Astrophysics Data System (ADS)
Hafner, K.; Davis, J. P.; Wilson, D.; Woodward, R.
2015-12-01
The Global Seismographic Network (GSN) is a 151-station, globally distributed permanent network of state-of-the-art seismological and geophysical sensors that is the result of an ongoing successful partnership between IRIS, the USGS, the University of California at San Diego, NSF and numerous host institutions worldwide. In recent years, the GSN has standardized its dataloggers to the Quanterra Q330HR data acquisition system at all but three stations. Current equipment modernization efforts are focused on the development of a new very broadband borehole sensor to replace failing KS-54000 instruments and on replacing the aging Streckeisen STS-1 surface instruments at many GSN stations. Aging of GSN equipment and discoveries of quality problems with GSN data (e.g., the long period response of the STS-1 sensors) have led the GSN to place major emphasis on quantifying, validating and maintaining data quality. This has resulted in the implementation of the MUSTANG and DQA systems for analyzing GSN data quality, which enable both network operators and data end users to quickly characterize the performance of stations and networks. We will present summary data quality metrics for the GSN as obtained via these quality assurance tools. Data from the GSN are used not only for research, but on a daily basis as part of the operational missions of the USGS NEIC, NOAA tsunami warning centers, and the Comprehensive Nuclear-Test-Ban-Treaty Organization, as well as other organizations. The primary challenges for the GSN include maintaining these operational capabilities while simultaneously developing and replacing the primary borehole sensors, replacing as needed the primary vault sensors, maintaining high quality data and repairing station infrastructure, all during a period of very tight federal budgets. We will provide an overview of the operational status of the GSN, with a particular emphasis on the status of the primary borehole and vault sensors.
NASA Astrophysics Data System (ADS)
Kavvadias, I. S.; Papoutsis-Kiachagias, E. M.; Dimitrakopoulos, G.; Giannakoglou, K. C.
2015-11-01
In this article, the gradient of aerodynamic objective functions with respect to design variables, in problems governed by the incompressible Navier-Stokes equations coupled with the k-ω SST turbulence model, is computed using the continuous adjoint method, for the first time. Shape optimization problems for minimizing drag, in external aerodynamics (flows around isolated airfoils), or viscous losses in internal aerodynamics (duct flows) are considered. Sensitivity derivatives computed with the proposed adjoint method are compared to those computed with finite differences or a continuous adjoint variant based on the frequently used assumption of frozen turbulence; the latter proves the need for differentiating the turbulence model. Geometries produced by optimization runs performed with sensitivities computed by the proposed method and the 'frozen turbulence' assumption are also compared to quantify the gain from formulating and solving the adjoint to the turbulence model equations.
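Differentiating the turbulence model itself is beyond a snippet, but the paper's validation step, comparing adjoint sensitivities against finite differences, is easy to illustrate on a discrete toy problem: one adjoint solve delivers the gradient for which finite differencing would need repeated "flow" solves. The matrices below are invented stand-ins for a discretized flow operator and its design-variable sensitivity:

```python
import numpy as np

rng = np.random.default_rng(0)
A0 = rng.normal(size=(5, 5)) + 5.0 * np.eye(5)   # stand-in discretized operator
A1 = rng.normal(size=(5, 5))                     # its sensitivity to one design variable
b = rng.normal(size=5)

def objective(alpha):
    """'Flow solve' A(alpha) u = b followed by the objective J = u.u."""
    u = np.linalg.solve(A0 + alpha * A1, b)
    return u @ u

def adjoint_gradient(alpha):
    """dJ/dalpha = -lambda^T (dA/dalpha) u, with the adjoint system
    A^T lambda = dJ/du = 2u. One extra linear solve, independent of the
    number of design variables."""
    A = A0 + alpha * A1
    u = np.linalg.solve(A, b)
    lam = np.linalg.solve(A.T, 2.0 * u)          # single adjoint solve
    return -lam @ (A1 @ u)
```

Central differencing `objective` reproduces the adjoint gradient to several digits, which is exactly the kind of consistency check the article performs (and which fails there when turbulence is frozen).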
Optimizing Orbit-Instrument Configuration for Global Precipitation Mission (GPM) Satellite Fleet
NASA Technical Reports Server (NTRS)
Smith, Eric A.; Adams, James; Baptista, Pedro; Haddad, Ziad; Iguchi, Toshio; Im, Eastwood; Kummerow, Christian; Einaudi, Franco (Technical Monitor)
2001-01-01
Following the scientific success of the Tropical Rainfall Measuring Mission (TRMM) spearheaded by a group of NASA and NASDA scientists, their external scientific collaborators, and additional investigators within the European Union's TRMM Research Program (EUROTRMM), there has been substantial progress towards the development of a new internationally organized, global scale, and satellite-based precipitation measuring mission. The highlights of this newly developing mission are a greatly expanded scope of measuring capability and a more diversified set of science objectives. The mission is called the Global Precipitation Mission (GPM). Notionally, GPM will be a constellation-type mission involving a fleet of nine satellites. In this fleet, one member is referred to as the "core" spacecraft flown in an approximately 70 degree inclined non-sun-synchronous orbit, somewhat similar to TRMM in that it carries both a multi-channel polarized passive microwave radiometer (PMW) and a radar system, but in this case it will be a dual frequency Ku-Ka band radar system enabling explicit measurements of microphysical DSD properties. The remaining fleet members are eight orbit-synchronized, sun-synchronous "constellation" spacecraft, each carrying some type of multi-channel PMW radiometer, enabling no worse than 3-hour diurnal sampling over the entire globe. In this configuration the "core" spacecraft serves as a high quality reference platform for training and calibrating the PMW rain retrieval algorithms used with the "constellation" radiometers. Within NASA, GPM has advanced to the pre-formulation phase, which has enabled the initiation of a set of science and technology studies that will help lead to the final mission design some time in the 2003 period. This presentation first provides an overview of the notional GPM program and mission design, including its organizational and programmatic concepts, scientific agenda, expected instrument package, and basic flight
"Best of Change" Continued...What's Ahead for Higher Education: Opportunities for Optimism.
ERIC Educational Resources Information Center
Bowen, Howard R.
1994-01-01
A 1984 essay, originally published at the outset of a recession, finds that, although significant problems can be predicted for higher education, there is also cause for celebration: institutions of higher education serve the nation well, and public attitudes are positive. Based on this stability, a guarded optimism about higher education's future…
Neural Network-Based Adaptive Optimal Controller - A Continuous-Time Formulation
NASA Astrophysics Data System (ADS)
Vrabie, Draguna; Lewis, Frank; Levine, Daniel
We present a new online adaptive control scheme, for partially unknown nonlinear systems, which converges to the optimal state-feedback control solution for input-affine nonlinear systems. The main features of the algorithm map onto the characteristics of the reward-based decision-making process in the mammal brain.
Automated reconstruction of dendritic and axonal trees by global optimization with geometric priors.
Türetken, Engin; González, Germán; Blum, Christian; Fua, Pascal
2011-09-01
We present a novel probabilistic approach to fully automated delineation of tree structures in noisy 2D images and 3D image stacks. Unlike earlier methods that rely mostly on local evidence, ours builds a set of candidate trees over many different subsets of points likely to belong to the optimal tree and then chooses the best one according to a global objective function that combines image evidence with geometric priors. Since the best tree does not necessarily span all the points, the algorithm is able to eliminate false detections while retaining the correct tree topology. Manually annotated brightfield micrographs, retinal scans and the DIADEM challenge datasets are used to evaluate the performance of our method. We used the DIADEM metric to quantitatively evaluate the topological accuracy of the reconstructions and showed that the use of the geometric regularization yields a substantial improvement. PMID:21573886
Multi-view stereo image synthesis using binocular symmetry-based global optimization
NASA Astrophysics Data System (ADS)
Kim, Hak Gu; Jung, Yong Ju; Yoon, Soo Sung; Ro, Yong Man
2015-03-01
This paper presents a new multi-view stereo image synthesis method using binocular symmetric hole filling. In autostereoscopic displays, multi-view synthesis is needed to provide multiple perspectives of the same scene, as viewed from multiple viewing positions. In the warped image at a distant virtual viewpoint, it is difficult to generate visually plausible multi-view stereo images since very large hole regions (i.e., disoccluded regions) can be induced. In addition, binocular asymmetry between the synthesized left-eye and right-eye images is one of the critical factors leading to visual discomfort in stereoscopic viewing. In this paper, we maintain binocular symmetry by using the already filled regions in an adjacent view. The proposed method introduces binocular symmetric hole filling based on global optimization for binocular symmetry in the synthesized multi-view stereo images. The experimental results showed that the proposed method outperformed the existing methods.
NASA Technical Reports Server (NTRS)
Crassidis, John L.; Lightsey, E. Glenn; Markley, F. Landis
1998-01-01
In this paper, a new and efficient algorithm is developed for attitude determination from Global Positioning System signals. The new algorithm is derived from a generalized nonlinear predictive filter for nonlinear systems. It uses a one-time-step-ahead approach to propagate a simple kinematics model for attitude determination. The advantages of the new algorithm over previously developed methods include: it provides optimal attitudes even for coplanar baseline configurations; it guarantees convergence even for poor initial conditions; it is non-iterative; and it is computationally efficient. These advantages clearly make the new algorithm well suited to on-board applications. The performance of the new algorithm is tested on a dynamic hardware simulator. Results indicate that the new algorithm accurately estimates the attitude of a moving vehicle and provides robust attitude estimates even when other methods, such as a linearized least-squares approach, fail due to poor initial starting conditions.
Globally optimal rotation alignment of spherical surfaces with associated scalar values
NASA Astrophysics Data System (ADS)
Pan, Rongjiang; Skala, Vaclav; Müller, Rolf
2013-09-01
We propose a new global optimization algorithm based on controlled random search techniques for the rotational alignment of spherical surfaces with associated scalar values. To reduce the distortion in correspondence and increase efficiency, the spherical surface is first re-sampled using a geodesic sphere. The rotation in space is represented using the modified Rodrigues parameters. Correspondence between two spherical surfaces is implemented in the parametric domain. We applied the method to the alignment of beam patterns computed from the outer ear shapes of bats. The proposed method is compared with other approaches such as principal component analysis (PCA), exhaustive search in the discrete space of rotations defined by Euler angles, and direct search using uniform samples over the special orthogonal group of rotations in 3D space. Experimental results demonstrate that the rotation alignment obtained using the proposed algorithm has a high degree of precision and gives the best results among the four approaches.
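A compact sketch of the rotation parametrization plus a shrinking-domain random search (a crude stand-in for the controlled random search procedure). The point set is synthetic, and for brevity the objective matches rotated sample points directly rather than resampled scalar values:

```python
import numpy as np

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def mrp_to_rotation(sigma):
    """Rotation matrix from modified Rodrigues parameters
    sigma = e * tan(theta/4), via the equivalent unit quaternion."""
    s2 = float(sigma @ sigma)
    q0 = (1.0 - s2) / (1.0 + s2)
    S = skew(2.0 * np.asarray(sigma) / (1.0 + s2))
    return np.eye(3) + 2.0 * q0 * S + 2.0 * S @ S

# Synthetic alignment task: recover a known rotation of points on the sphere.
rng = np.random.default_rng(1)
pts = rng.normal(size=(50, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
R_true = mrp_to_rotation(np.array([0.2, -0.1, 0.3]))
target = pts @ R_true.T

# Shrinking-domain random search over MRP space: sample around the incumbent,
# then halve the search box (loosely mimicking controlled random search).
best, best_err = np.zeros(3), np.inf
center, width = np.zeros(3), 1.0
for stage in range(8):
    for _ in range(500):
        sigma = center + rng.uniform(-width, width, 3)
        err = np.linalg.norm(pts @ mrp_to_rotation(sigma).T - target)
        if err < best_err:
            best_err, best = err, sigma
    center, width = best, 0.5 * width
R_est = mrp_to_rotation(best)
```

The quaternion route keeps the conversion singularity-free for rotations short of 360 degrees, and the returned matrix is orthogonal by construction.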
Dill, K.A.; Phillips, A.T.; Rosen, J.B.
1997-12-01
Proteins require specific three-dimensional conformations to function properly. These "native" conformations result primarily from intramolecular interactions between the atoms in the macromolecule, and also intermolecular interactions between the macromolecule and the surrounding solvent. Although the folding process can be quite complex, the instructions guiding this process are specified by the one-dimensional primary sequence of the protein or nucleic acid: external factors, such as helper (chaperone) proteins, present at the time of folding have no effect on the final state of the protein. Many denatured proteins spontaneously refold into functional conformations once denaturing conditions are removed. Indeed, the existence of a unique native conformation, in which residues distant in sequence but close in proximity exhibit a densely packed hydrophobic core, suggests that this three-dimensional structure is largely encoded within the sequential arrangement of these specific amino acids. In any case, the native structure is often the conformation at the global minimum energy. In addition to the unique native (minimum energy) structure, other less stable structures exist as well, each with a corresponding potential energy. These structures, in conjunction with the native structure, make up an energy landscape that can be used to characterize various aspects of the protein structure. 22 refs., 10 figs., 2 tabs.
NASA Astrophysics Data System (ADS)
Schlutz, Juergen; Hufenbach, Bernhard; Laurini, Kathy; Spiero, Francois
2016-07-01
Future space exploration goals call for sending humans and robots beyond low Earth orbit and establishing sustained access to destinations such as the Moon, asteroids and Mars. Space agencies participating in the International Space Exploration Coordination Group (ISECG) are discussing an international approach for achieving these goals, documented in ISECG's Global Exploration Roadmap (GER). The GER reference scenario reflects a step-wise evolution of critical capabilities from ISS to missions in the lunar vicinity in preparation for the journey of humans to Mars. As ISECG agencies advance their individual planning, they also advance the mission themes and reference architecture of the GER to consolidate common goals, near-term mission scenarios and initial opportunities for collaboration. In this context, particular focus has been given to the:
- Better understanding and further refinement of cislunar infrastructure and potential lunar transportation architecture
- Interaction with international science communities to identify and articulate the scientific opportunities of the near-term exploration mission themes
- Coordination and consolidation of interest in lunar polar volatiles prospecting and potential for in-situ resource utilisation
- Identification and articulation of the benefits from exploration and the technology transfer activities
The paper discusses the ongoing roadmapping activity of the ISECG agencies. It provides an insight into the status of the above activities and an outlook towards the evolution of the GER that is currently foreseen in the 2017 timeframe.
Local-global analysis of crack growth in continuously reinforced ceramic matrix composites
NASA Technical Reports Server (NTRS)
Ballarini, Roberto; Ahmed, Shamin
1988-01-01
The development of a mathematical model for predicting the strength and micromechanical failure characteristics of continuously reinforced ceramic matrix composites is described. The local-global analysis models the vicinity of a propagating crack tip as a local heterogeneous region (LHR) consisting of spring-like representations of the matrix, fibers, and interfaces. This region is embedded in an anisotropic continuum (representing the bulk composite), which is modeled by conventional finite elements. Parametric studies are conducted to investigate the effects of LHR size, component properties, interface conditions, etc. on the strength and sequence of the failure processes in the unidirectional composite system. The results are compared with those predicted by the models developed by Marshall et al. (1985) and by Budiansky et al. (1986).
Guo, Chengan; Yang, Qingshan
2015-07-01
Finding the optimal solution to the constrained l0-norm minimization problems in the recovery of compressive sensed signals is an NP-hard problem, and it usually requires intractable combinatorial search to obtain the globally optimal solution, unless other objective functions (e.g., the l1 norm or lp norm) are used for approximate solutions or greedy search methods are used for locally optimal solutions (e.g., the orthogonal matching pursuit type algorithms). In this paper, a neurodynamic optimization method is proposed to solve the l0-norm minimization problems for obtaining the global optimum using a recurrent neural network (RNN) model. For the RNN model, a group of modified Gaussian functions are constructed and their sum is taken as the objective function for approximating the l0 norm and for optimization. The constructed objective function sets up a convexity condition under which the neurodynamic system is guaranteed to obtain the globally convergent optimal solution. An adaptive adjustment scheme is developed to further improve the performance of the optimization algorithm. Extensive experiments are conducted to test the proposed approach, and the results validate the effectiveness of the new method. PMID:25122603
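A minimal sketch of the Gaussian-sum idea. It follows the well-known smoothed-l0 annealing recipe with plain shrink-and-project steps rather than the paper's recurrent network dynamics, and the problem sizes, smoothing schedule, and step rule are illustrative assumptions:

```python
import numpy as np

def l0_surrogate(x, delta):
    """Sum of inverted Gaussians: approximately counts entries with
    |x_i| >> delta, approaching the l0 'norm' as delta -> 0."""
    return float(np.sum(1.0 - np.exp(-x**2 / (2.0 * delta**2))))

# Sparse recovery demo: find the sparsest x satisfying A x = b.
rng = np.random.default_rng(0)
A = rng.normal(size=(10, 20))
x_true = np.zeros(20)
x_true[[3, 11]] = [1.0, -0.7]
b = A @ x_true

A_pinv = np.linalg.pinv(A)
x = A_pinv @ b                               # minimum-l2 feasible start
for delta in 2.0 * 0.7 ** np.arange(12):     # anneal the smoothing width
    for _ in range(15):
        # Descend the Gaussian-sum surrogate: entries well below delta are
        # shrunk toward zero, entries well above delta are left alone.
        x = x - x * np.exp(-x**2 / (2.0 * delta**2))
        # Project back onto the constraint set {x : A x = b}.
        x = x - A_pinv @ (A @ x - b)
```

Annealing `delta` from large to small lets the smooth surrogate first guide the iterate globally and then sharpen toward the combinatorial l0 objective, which is the same intuition as the convexity condition discussed in the abstract.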
Liu, Derong; Wang, Ding; Wang, Fei-Yue; Li, Hongliang; Yang, Xiong
2014-12-01
In this paper, the infinite-horizon optimal robust guaranteed cost control of continuous-time uncertain nonlinear systems is investigated using neural-network-based online solution of the Hamilton-Jacobi-Bellman (HJB) equation. By establishing an appropriate bounded function and defining a modified cost function, the optimal robust guaranteed cost control problem is transformed into an optimal control problem. It can be observed that the optimal cost function of the nominal system is nothing but the optimal guaranteed cost of the original uncertain system. A critic neural network is constructed to facilitate the solution of the modified HJB equation corresponding to the nominal system. More importantly, an additional stabilizing term is introduced to help verify stability; it reinforces the updating process of the weight vector and reduces the requirement of an initial stabilizing control. The uniform ultimate boundedness of the closed-loop system is analyzed by using the Lyapunov approach as well. Two simulation examples are provided to verify the effectiveness of the present control approach. PMID:25415951
Local search for optimal global map generation using mid-decadal landsat images
Khatib, L.; Gasch, J.; Morris, R.; Covington, S.
2007-01-01
NASA and the US Geological Survey (USGS) are seeking to generate a map of the entire globe using Landsat 5 Thematic Mapper (TM) and Landsat 7 Enhanced Thematic Mapper Plus (ETM+) sensor data from the "mid-decadal" period of 2004 through 2006. The global map is comprised of thousands of scene locations and, for each location, tens of different images of varying quality to choose from. Furthermore, it is desirable that images of adjacent scenes be acquired close together in time, to avoid obvious discontinuities due to seasonal changes. These characteristics make it desirable to formulate an automated solution to the problem of generating the complete map. This paper formulates the Global Map Generator problem as a Constraint Optimization Problem (GMG-COP) and describes an approach to solving it using local search. Preliminary results of running the algorithm on image data sets are summarized. The results suggest a significant improvement in map quality using constraint-based solutions. Copyright © 2007, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
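The GMG-COP flavour described above can be sketched as coordinate-descent local search over per-scene image choices, with a cost trading image quality (say, cloud cover) against acquisition-date mismatch between adjacent scenes. The toy candidates, grid, and weight below are invented, not the actual Landsat data model:

```python
from collections import defaultdict

def total_cost(assign, cands, edges, w):
    """Cloud-cover sum plus weighted date mismatch across adjacent scenes.
    cands[s] lists (cloud_cover, acquisition_day) options for scene s."""
    cost = sum(cands[s][a][0] for s, a in enumerate(assign))
    cost += w * sum(abs(cands[s][assign[s]][1] - cands[t][assign[t]][1])
                    for s, t in edges)
    return cost

def local_search(cands, edges, w=0.1):
    nbrs = defaultdict(list)
    for s, t in edges:
        nbrs[s].append(t)
        nbrs[t].append(s)
    # Greedy start: each scene independently takes its least-cloudy image.
    assign = [min(range(len(c)), key=lambda a: c[a][0]) for c in cands]
    improved = True
    while improved:                      # coordinate descent over scenes
        improved = False
        for s in range(len(cands)):
            def local(a):
                cc, day = cands[s][a]
                return cc + w * sum(abs(day - cands[t][assign[t]][1])
                                    for t in nbrs[s])
            best = min(range(len(cands[s])), key=local)
            if local(best) < local(assign[s]) - 1e-12:
                assign[s] = best
                improved = True
    return assign

# 2x2 toy map: cloud-free images exist but with wildly different dates, while
# slightly cloudy images all share one date. Local moves escape the greedy start.
cands = [[(0, 0), (1, 100)], [(0, 300), (1, 100)],
         [(0, 300), (1, 100)], [(0, 0), (1, 100)]]
edges = [(0, 1), (0, 2), (1, 3), (2, 3)]
assign = local_search(cands, edges)
```

Here the greedy per-scene choice incurs a huge seasonal-discontinuity penalty, and a few local repair moves reach the coordinated, far cheaper assignment, which is the behaviour the constraint-based formulation is after.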
Optimizing global CO concentrations and emissions based on DART/CAM-CHEM
NASA Astrophysics Data System (ADS)
Gaubert, B.; Arellano, A. F.; Barre, J.; Worden, H. M.; Emmons, L. K.; Wiedinmyer, C.; Anderson, J. L.; Deeter, M. N.; Mizzi, A. P.; Edwards, D. P.
2014-12-01
Atmospheric Carbon Monoxide (CO) is an important trace gas in tropospheric chemistry through its impact on the oxidizing capacity of the troposphere, as a precursor of ozone, and as a good tracer of combustion from both anthropogenic sources and wildfires. We will investigate the potential of assimilating TERRA/MOPITT observations to constrain the regional to global CO budget using DART (Data Assimilation Research Testbed) together with the global Community Atmospheric Model (CAM-Chem). DART/CAM-Chem is based on an ensemble adjustment Kalman filter (EAKF) framework which facilitates statistical estimation of error correlations between chemical states (CO and related species) and parameters (including sources) in the model, using the ensemble statistics derived from dynamical and chemical perturbations in the model. Here, we estimate CO emissions within DART/CAM-Chem using a state augmentation approach where CO emissions are added to the CO state vector being analyzed. We compare these optimized emissions to estimates derived from a traditional Bayesian synthesis inversion using the CO analyses (assimilated CO states) as observational constraints. The spatio-temporal distribution of CO and other chemical species will be compared to profile measurements from aircraft and other satellite instruments (e.g., INTEX-B, ARCTAS).
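State augmentation is easy to demonstrate with a toy one-box CO model: append the unknown, constant emission to the state, and let the ensemble covariance between CO and emission carry observation information into the emission estimate. A perturbed-observation ensemble Kalman filter stands in for DART's deterministic EAKF, and all numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
E_TRUE, DECAY, OBS_SD = 5.0, 0.9, 0.5
N = 200   # ensemble size

co_true = 0.0
ens = np.stack([rng.normal(0.0, 1.0, N),      # CO concentration members
                rng.normal(2.0, 2.0, N)])      # augmented state: emission members

for t in range(80):
    # Forecast: one-box CO budget, each member using its own emission guess.
    co_true = DECAY * co_true + E_TRUE
    ens[0] = DECAY * ens[0] + ens[1]
    y = co_true + rng.normal(0.0, OBS_SD)      # synthetic MOPITT-like CO obs

    # Analysis: ensemble Kalman update of the augmented state [CO, E].
    dev = ens - ens.mean(axis=1, keepdims=True)
    var_hx = dev[0] @ dev[0] / (N - 1) + OBS_SD**2
    gain = (dev @ dev[0]) / (N - 1) / var_hx   # gains for CO and for E
    innov = y + rng.normal(0.0, OBS_SD, N) - ens[0]
    ens += gain[:, None] * innov[None, :]

co_est, e_est = ens[0].mean(), ens[1].mean()
```

Although only CO is observed, the cross-covariance row of the gain steadily pulls the emission ensemble toward the true source strength, which is precisely the mechanism the state augmentation approach relies on at global scale.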
NASA Astrophysics Data System (ADS)
Mitroi, V.; de Coninck, A.; Vinçon-Leite, B.; Deroubaix, J.-F.
2014-09-01
The (re)construction of ecological continuity is stated as one of the main objectives of the European Water Framework Directive for watershed management in Europe. Analysing the social, political, technical and scientific processes characterising the implementation of different ecological continuity projects in two adjacent peri-urban territories in Ile-de-France, we observed science-driven approaches that disregard the social contexts. We show that, in urbanized areas, ecological continuity requires not only substantial technical and ecological expertise, but also social and political participation in the definition of a common vision and action plan. Because this is a challenge for both technical water management institutions and "classical" ecological policies, we propose some social science contributions for dealing with ecological unpredictability and for reconsidering stakeholder resistance to this kind of project.
Optimal estimation for global ground-level fine particulate matter concentrations
NASA Astrophysics Data System (ADS)
Donkelaar, Aaron; Martin, Randall V.; Spurr, Robert J. D.; Drury, Easan; Remer, Lorraine A.; Levy, Robert C.; Wang, Jun
2013-06-01
We develop an optimal estimation (OE) algorithm based on top-of-atmosphere reflectances observed by the MODIS satellite instrument to retrieve near-surface fine particulate matter (PM2.5). The GEOS-Chem chemical transport model is used to provide prior information for the Aerosol Optical Depth (AOD) retrieval and to relate total column AOD to PM2.5. We adjust the shape of the GEOS-Chem relative vertical extinction profiles by comparison with lidar retrievals from the CALIOP satellite instrument. Surface reflectance relationships used in the OE algorithm are indexed by land type. Error quantities needed for this OE algorithm are inferred by comparison with AOD observations taken by a worldwide network of sun photometers (AERONET), and extended globally based upon aerosol speciation and cross correlation for simulated values, and upon land type for observational values. Significant agreement in PM2.5 is found over North America for 2005 (slope = 0.89; r = 0.82; 1-σ error = 1 µg/m3 + 27%), with improved coverage and correlation relative to previous work for the same region and time period, although certain subregions, such as the San Joaquin Valley of California, are better represented by previous estimates. Independently derived error estimates of the OE PM2.5 values over North America (±(2.5 µg/m3 + 31%)) and Europe (±(3.5 µg/m3 + 30%)) are corroborated by comparison with in situ observations, although the global error estimate (±(3.0 µg/m3 + 35%)) may be underestimated. Global population-weighted PM2.5 at 50% relative humidity is estimated as 27.8 µg/m3 at 0.1° × 0.1° resolution.
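At its core, the OE retrieval is a variance-weighted Bayesian combination of the model prior with the reflectance-derived measurement, followed by a model-based conversion to PM2.5. A scalar sketch; the prior, measurement, variances, and AOD-to-PM2.5 ratio are all illustrative numbers, not values from the paper:

```python
def optimal_estimate(x_a, s_a, y, k, s_e):
    """Scalar optimal estimation: prior x_a with variance s_a, and a
    measurement y = k * x + noise with noise variance s_e.
    Returns the posterior mean and variance."""
    s_hat = 1.0 / (1.0 / s_a + k * k / s_e)
    x_hat = s_hat * (x_a / s_a + k * y / s_e)
    return x_hat, s_hat

# Toy AOD retrieval followed by conversion to PM2.5 via a hypothetical
# CTM-derived ratio eta = PM2.5 / AOD.
aod_hat, var_hat = optimal_estimate(x_a=0.20, s_a=0.01, y=0.30, k=1.0, s_e=0.01)
pm25 = 80.0 * aod_hat   # eta = 80 µg/m3 per unit AOD (invented)
```

With equal prior and measurement variances the posterior mean sits halfway between them, and the posterior variance is always smaller than either input variance, which is why carefully inferred error quantities (as in the AERONET comparison above) matter so much to the retrieval.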
Optimization of gas dynamic and power parameters for continuous nuclear pumped laser
NASA Astrophysics Data System (ADS)
Korzenev, A. N.; Sizov, A. N.
2008-01-01
Optimization studies of the optical and power performance of nuclear pumped lasers are performed. It is shown that increasing the laser-mixture pumping speed from 7 to 30 m/s and reducing the laser channel width from 2 to 1 cm increases the average specific energy input by a factor of 1.6 and narrows the refraction-factor measurement interval by a factor of 4.
Need for coordinated programs to improve global health by optimizing salt and iodine intake.
Campbell, Norm R C; Dary, Omar; Cappuccio, Francesco P; Neufeld, Lynnette M; Harding, Kim B; Zimmermann, Michael B
2012-10-01
High dietary salt is a major cause of increased blood pressure, the leading risk factor for death worldwide. The World Health Organization (WHO) has recommended that salt intake be less than 5 g/day, a goal that only a small proportion of people achieve. Iodine deficiency can cause cognitive and motor impairment and, if severe, hypothyroidism with serious mental and growth retardation. More than 2 billion people worldwide are at risk of iodine deficiency. Preventing iodine deficiency by using salt fortified with iodine is a major global public health success. Programs to reduce dietary salt are technically compatible with programs to prevent iodine deficiency through salt fortification. However, for populations to fully benefit from optimum intake of salt and iodine, the programs must be integrated. This review summarizes the scientific basis for salt reduction and iodine fortification programs, the compatibility of the programs, and the steps that need to be taken by the WHO, national governments, and nongovernmental organizations to ensure that populations fully benefit from optimal intake of salt and iodine. Specifically, expert groups must be convened to help countries implement integrated programs, and context-specific case studies of successfully integrated programs and lessons learned need to be compiled and disseminated. Integrated surveillance programs will be more efficient and will enhance current efforts to optimize intake of iodine and salt. For populations to fully benefit, governments need to place a high priority on integrating these two important public health programs. PMID:23299289
A hybrid cuckoo search algorithm with Nelder Mead method for solving global optimization problems.
Ali, Ahmed F; Tawhid, Mohamed A
2016-01-01
The cuckoo search algorithm is a promising population-based metaheuristic that has been applied to many real-life problems. In this paper, we propose a new cuckoo search algorithm that combines cuckoo search with the Nelder-Mead method in order to solve integer and minimax optimization problems. We call the proposed algorithm the hybrid cuckoo search and Nelder-Mead method (HCSNM). HCSNM starts the search by applying standard cuckoo search for a number of iterations; the best solution obtained is then passed to the Nelder-Mead algorithm as an intensification step in order to accelerate the search and overcome the slow convergence of standard cuckoo search. The proposed algorithm balances the global exploration of cuckoo search with the deep exploitation of the Nelder-Mead method. We test the HCSNM algorithm on seven integer programming problems and ten minimax problems, and compare it against eight algorithms for solving integer programming problems and seven algorithms for solving minimax problems. The experimental results show the efficiency of the proposed algorithm and its ability to solve integer and minimax optimization problems in reasonable time. PMID:27217988
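The two-phase structure described in the abstract (global metaheuristic, then local intensification) is easy to sketch. The Python below is an illustrative toy, not the authors' HCSNM code: a bare-bones cuckoo-search-style global phase whose best nest seeds SciPy's Nelder-Mead local search; the sphere objective and all parameter values are assumptions for demonstration.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def sphere(x):
    # Toy objective with a unique global minimum at the origin
    return float(np.sum(x**2))

def cuckoo_search(f, dim, n_nests=15, iters=200, pa=0.25, bounds=(-5.0, 5.0)):
    """Bare-bones cuckoo search: heavy-tailed random steps toward the best
    nest, plus abandonment of a fraction pa of the worst nests."""
    lo, hi = bounds
    nests = rng.uniform(lo, hi, size=(n_nests, dim))
    fitness = np.array([f(x) for x in nests])
    for _ in range(iters):
        best = nests[np.argmin(fitness)]
        # Cauchy steps stand in for the Levy flights of the full algorithm
        step = rng.standard_cauchy(size=nests.shape) * 0.01
        trial = np.clip(nests + step * (nests - best), lo, hi)
        trial_fit = np.array([f(x) for x in trial])
        improved = trial_fit < fitness
        nests[improved], fitness[improved] = trial[improved], trial_fit[improved]
        # Abandon the worst nests and rebuild them at random
        n_bad = int(pa * n_nests)
        worst = np.argsort(fitness)[-n_bad:]
        nests[worst] = rng.uniform(lo, hi, size=(n_bad, dim))
        fitness[worst] = [f(x) for x in nests[worst]]
    return nests[np.argmin(fitness)]

# HCSNM-style hybrid step: global exploration, then Nelder-Mead intensification
x0 = cuckoo_search(sphere, dim=5)
result = minimize(sphere, x0, method="Nelder-Mead")
```

The design point is the hand-off: the metaheuristic only needs to land in the right basin, after which the simplex search converges far faster than continued global sampling would.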
Bell-Curve Genetic Algorithm for Mixed Continuous and Discrete Optimization Problems
NASA Technical Reports Server (NTRS)
Kincaid, Rex K.; Griffith, Michelle; Sykes, Ruth; Sobieszczanski-Sobieski, Jaroslaw
2002-01-01
In this manuscript we have examined an extension of BCB that encompasses a mix of continuous and quasi-discrete, as well as truly discrete, applications. We began by testing two refinements to the discrete version of BCB. The testing of midpoint versus fitness (Tables 1 and 2) proved inconclusive. The testing of discrete normal tails versus standard mutation was conclusive and demonstrated that the discrete normal tails are better. Next, we implemented these refinements in a combined continuous and discrete BCB and compared the performance of two discrete distance measures on the hub problem. Here we found that when "order does matter" it pays to take it into account.
Multiple actor-critic structures for continuous-time optimal control using input-output data.
Song, Ruizhuo; Lewis, Frank; Wei, Qinglai; Zhang, Hua-Guang; Jiang, Zhong-Ping; Levine, Dan
2015-04-01
In industrial process control, there may be multiple performance objectives, depending on salient features of the input-output data. Aiming at this situation, this paper proposes multiple actor-critic structures to obtain the optimal control via input-output data for unknown nonlinear systems. The shunting inhibitory artificial neural network (SIANN) is used to classify the input-output data into one of several categories. Different performance measure functions may be defined for disparate categories. The approximate dynamic programming algorithm, which contains a model module, a critic network, and an action network, is used to establish the optimal control in each category. A recurrent neural network (RNN) model is used to reconstruct the unknown system dynamics from input-output data. Neural networks are used to approximate the critic and action networks. It is proven that the model error and the closed-loop unknown system are uniformly ultimately bounded. Simulation results demonstrate the performance of the proposed optimal control scheme for the unknown nonlinear system. PMID:25730830
NASA Astrophysics Data System (ADS)
Mulder, W. A.; Shamasundar, R.
2016-07-01
We consider isotropic elastic wave propagation with continuous mass-lumped finite elements on tetrahedra with explicit time stepping. These elements require higher-order polynomials in their interior to preserve accuracy after mass lumping and are only known up to degree 3. Global assembly of the symmetric stiffness matrix is a natural approach but requires large memory. Local assembly on the fly, in the form of matrix-vector products per element at each time step, has a much smaller memory footprint. With dedicated expressions for local assembly, our code ran about 1.3 times faster for degree 2 and 1.9 times for degree 3 on a simple homogeneous test problem, using 24 cores. This is similar to the acoustic case. For a more realistic problem, the gain in efficiency was a factor 2.5 for degree 2 and 3 for degree 3. For the lowest degree, the linear element, the expressions for both the global and local assembly can be further simplified. In that case, global assembly is more efficient than local assembly. Among the three degrees, the element of degree 3 is the most efficient in terms of accuracy at a given cost.
NASA Technical Reports Server (NTRS)
Shepperd, Stanley W.
1988-01-01
A family of functions involving integrals of universal functions is introduced. These functions have some interesting mathematical properties including the fact that they may be expressed as Gaussian continued fractions. A unique method of performing the integration is demonstrated which indicates why these functions may be important in the variation of Kepler's equation.
ERIC Educational Resources Information Center
Foley, Greg
2011-01-01
Continuous feed and bleed ultrafiltration, modeled with the gel polarization model for the limiting flux, is shown to provide a rich source of non-linear algebraic equations that can be readily solved using numerical and graphical techniques familiar to undergraduate students. We present a variety of numerical problems in the design, analysis, and…
Pivot method for global optimization: A study of structures and phase changes in water clusters
NASA Astrophysics Data System (ADS)
Nigra, Pablo Fernando
In this thesis, we have carried out a study of water clusters. The research work has been developed in two stages. In the first stage, we have investigated the properties of water clusters at zero temperature by means of global optimization. The clusters were modeled by using two well-known pairwise potentials having distinct characteristics. One is the Matsuoka-Clementi-Yoshimine potential (MCY), an ab initio fitted function based on a rigid-molecule model; the other is the Stillinger-Rahman potential (SR), an empirical function based on a flexible-molecule model. The algorithm used for the global optimization of the clusters was the pivot method, which was developed in our group. The results have shown that, under certain conditions, the pivot method may yield optimized structures which are related to one another in such a way that they seem to form structural families. The structures in a family can be thought of as formed from the aggregation of single units. The particular types of structures we have found are quasi-one-dimensional tubes built from stacking cyclic units such as tetramers, pentamers, and hexamers. The binding energies of these tubes form sequences that span smooth curves with clear asymptotic behavior; therefore, we have also studied the sequences by applying the Bulirsch-Stoer (BST) algorithm to accelerate convergence. In the second stage of the research work, we have studied the thermodynamic properties of a typical water cluster at finite temperatures. The selected cluster was the water octamer, which exhibits a definite solid-liquid phase change. The water octamer also has several low-lying cubic structures with large energetic barriers that cause ergodicity breaking in regular Monte Carlo simulations. For that reason we have simulated the octamer using parallel tempering Monte Carlo combined with the multihistogram method. This has permitted us to calculate the heat capacity from very low temperatures up to T = 230 K. We
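Parallel tempering, invoked above as the cure for ergodicity breaking, can be sketched in a few lines. The toy below samples a 1-D double-well energy standing in for the octamer's barrier-separated landscape; the temperature ladder, step size, and step counts are illustrative assumptions, not values from the thesis.

```python
import math
import random

random.seed(1)

def energy(x):
    # Double-well potential: minima at x = +/-1, barrier at x = 0
    return (x**2 - 1.0)**2

def parallel_tempering(temps, steps=20000, step_size=0.4):
    """Minimal replica-exchange (parallel tempering) Metropolis sampler.
    Hot replicas cross barriers freely; accepted swaps hand those states
    down to the cold replicas, restoring ergodicity."""
    xs = [0.0 for _ in temps]  # all replicas start on the barrier top
    for _ in range(steps):
        # Ordinary Metropolis update within each replica
        for i, T in enumerate(temps):
            prop = xs[i] + random.uniform(-step_size, step_size)
            dE = energy(prop) - energy(xs[i])
            if dE <= 0 or random.random() < math.exp(-dE / T):
                xs[i] = prop
        # Attempt a swap between a random adjacent temperature pair,
        # accepted with probability min(1, exp((b_i - b_j)(E_i - E_j)))
        i = random.randrange(len(temps) - 1)
        delta = (1.0 / temps[i] - 1.0 / temps[i + 1]) * (
            energy(xs[i]) - energy(xs[i + 1]))
        if delta >= 0 or random.random() < math.exp(delta):
            xs[i], xs[i + 1] = xs[i + 1], xs[i]
    return xs

states = parallel_tempering(temps=[0.05, 0.2, 0.8])
```

After the run, the coldest replica sits near one of the two wells rather than trapped on the barrier, which is precisely the behavior a single low-temperature Metropolis chain fails to guarantee.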
Recursive Ant Colony Global Optimization: a new technique for the inversion of geophysical data
NASA Astrophysics Data System (ADS)
Gupta, D. K.; Gupta, J. P.; Arora, Y.; Singh, U. K.
2011-12-01
We present a new method called the Recursive Ant Colony Global Optimization (RACO) technique, a modified form of general ACO, which can be used to find the best solutions to inversion problems in geophysics. RACO simulates the social behaviour of ants to find the best path between the nest and the food source. A new term, depth, has been introduced, which controls the extent of recursion. A selected number of cities qualify for the successive depth. The results of one depth are used to construct the models for the next depth, and the range of values for each of the parameters is reduced without any change to the number of models. The three additional steps performed after each depth are pheromone tracking, pheromone updating and city selection. One advantage of RACO over ACO is that if a problem has multiple solutions, pheromone accumulation will take place at more than one city, leading to the formation of multiple nested ACO loops within the ACO loop of the previous depth. Also, while the convergence of ACO is almost linear, RACO shows exponential convergence and hence is faster than ACO. RACO also improves on some other global optimization techniques in that it does not require initial values to be assigned to the model parameters. The method has been tested on some mathematical functions, synthetic self-potential (SP) and synthetic gravity data. The obtained results reveal the efficiency and practicability of the method. The method is found to be efficient enough to solve the problems of SP and gravity anomalies due to a horizontal cylinder, a sphere, an inclined sheet and multiple idealized bodies buried inside the earth. These anomalies with and without noise were inverted using the RACO algorithm. The obtained results were compared with those obtained from the conventional methods, and it was found that the RACO results are more accurate. Finally this optimization technique was applied to real field data collected over the Surda
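The depth-based range reduction at the heart of RACO can be illustrated with a toy recursion. The sketch below substitutes plain random sampling for the ant-colony step and uses a 1-D objective; all names, parameters, and the shrink factor are illustrative assumptions, not taken from the paper.

```python
import random

random.seed(42)

def objective(x):
    # Toy 1-D misfit with a single global minimum at x = 2.5
    return (x - 2.5)**2

def recursive_search(f, lo, hi, depth=6, n_trials=200, shrink=0.25):
    """Toy version of RACO's depth idea: sample the current parameter range,
    then recurse on a shrunken range centered on the best sample, keeping the
    number of trial models constant at every depth. (Plain random sampling
    stands in for the ant-colony/pheromone step of the real algorithm.)"""
    best_x = min((random.uniform(lo, hi) for _ in range(n_trials)), key=f)
    if depth == 1:
        return best_x
    half = (hi - lo) * shrink / 2.0
    return recursive_search(f, max(lo, best_x - half), min(hi, best_x + half),
                            depth - 1, n_trials, shrink)

x_best = recursive_search(objective, lo=-10.0, hi=10.0)
```

Each depth narrows the search interval by a fixed factor while reusing the same sampling budget, which is why the nested scheme converges geometrically in range width, mirroring the exponential convergence claimed for RACO over plain ACO.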
Uauy, Ricardo; Corvalan, Camila; Dangour, Alan D
2009-02-01
Optimal health and well-being are now considered the true measures of human development. Integrated strategies for infant, child and adult nutrition are required that take a life-course perspective to achieve life-long health. The major nutrition challenges faced today include: (a) addressing the pending burden of undernutrition (low birth weight, severe wasting, stunting and Zn, retinol, Fe, iodine and folic acid deficits) affecting those individuals living in conditions of poverty and deprivation; (b) preventing nutrition-related chronic diseases (obesity, diabetes, CVD, some forms of cancer and osteoporosis) that, except in sub-Saharan Africa, are the main causes of death and disability globally. This challenge requires a life-course perspective as effective prevention starts before conception and continues at each stage of life. While death is unavoidable, premature death and disability can be postponed by providing the right amount and quality of food and by maintaining an active life; (c) delaying or avoiding, via appropriate nutrition and physical activity interventions, the functional declines associated with advancing age. To help tackle these challenges, it is proposed that the term 'malnutrition in all its forms', which encompasses the full spectrum of nutritional disorders, should be used to engender a broader understanding of global nutrition problems. This term may prove particularly helpful when interacting with policy makers and the public. Finally, a greater effort by the UN agencies and private and public development partners is called for to strengthen local, regional and international capacity to support the much needed change in policy and programme activities focusing on all forms of malnutrition with a unified agenda. PMID:19012808
Optimizing Global Coronal Magnetic Field Models Using Image-based Constraints
NASA Astrophysics Data System (ADS)
Jones, Shaela I.; Davila, Joseph M.; Uritsky, Vadim
2016-04-01
The coronal magnetic field directly or indirectly affects a majority of the phenomena studied in the heliosphere. It provides energy for coronal heating, controls the release of coronal mass ejections, and drives heliospheric and magnetospheric activity, yet the coronal magnetic field itself has proven difficult to measure. This difficulty has prompted a decades-long effort to develop accurate, timely, models of the field—an effort that continues today. We have developed a method for improving global coronal magnetic field models by incorporating the type of morphological constraints that could be derived from coronal images. Here we report promising initial tests of this approach on two theoretical problems, and discuss opportunities for application.
Using R for Global Optimization of a Fully-distributed Hydrologic Model at Continental Scale
NASA Astrophysics Data System (ADS)
Zambrano-Bigiarini, M.; Zajac, Z.; Salamon, P.
2013-12-01
Nowadays, hydrologic model simulations are widely used to better understand hydrologic processes and to predict extreme events such as floods and droughts. In particular, the spatially distributed LISFLOOD model is currently used for flood forecasting at the Pan-European scale, within the European Flood Awareness System (EFAS). Several model parameters cannot be directly measured, and they need to be estimated through calibration in order to constrain simulated discharges to their observed counterparts. In this work we describe how the free software 'R' has been used as a single environment to pre-process hydro-meteorological data, to carry out global optimization, and to post-process calibration results in Europe. Historical daily discharge records were pre-processed for 4062 stream gauges, with a different amount and distribution of data at each one. The hydroTSM, raster and sp R packages were used to select ca. 700 stations with an adequate spatio-temporal coverage. Selected stations span a wide range of hydro-climatic characteristics, from arid and ET-dominated watersheds in the Iberian Peninsula to snow-dominated watersheds in Scandinavia. Nine parameters were selected for calibration based on previous expert knowledge. Customized R scripts were used to extract observed time series for each catchment and to prepare the input files required to fully set up its calibration. The hydroPSO package was then used to carry out a single-objective global optimization on each selected catchment, using the Standard Particle Swarm 2011 (SPSO-2011) algorithm. Among the many goodness-of-fit measures available in the hydroGOF package, the Nash-Sutcliffe efficiency was used to drive the optimization. User-defined functions were developed for reading model outputs and passing them to the calibration engine. The long computational time required to finish the calibration at continental scale was partially alleviated by using 4 multi-core machines (with both GNU
Closing the loop from continuous M-health monitoring to fuzzy logic-based optimized recommendations.
Benharref, Abdelghani; Serhani, Mohamed Adel; Nujum, Al Ramzana
2014-01-01
Continuous sensing of health metrics can generate a massive amount of data. Generating clinically validated recommendations out of these data for patients under monitoring is of prime importance to protect them from the risk of falling into severe health degradation. Physicians can also be supported with automated recommendations that benefit from historical data and increasing learning cycles. In this paper, we propose a fuzzy expert system that relies on data collected from continuous monitoring. The monitoring scheme implements preprocessing of data for better data analytics, while the analytics implement a loopback feature in order to constantly improve the fuzzy rules, the knowledge base, and the generated recommendations. Together, these techniques reduced data quantity and improved both data quality and the proposed recommendations. We evaluate our solution through a series of experiments, and the results we have obtained show that our fuzzy expert system, combined with the intelligent monitoring and analytic techniques, provides high accuracy of collected data and valid advice. PMID:25570547
NASA Astrophysics Data System (ADS)
Portnoy, David; Feuerbach, Robert; Heimberg, Jennifer
2011-10-01
Today there is a tremendous amount of interest in systems that can detect radiological or nuclear threats. Many of these systems operate in extremely high-throughput situations where delays caused by false alarms can have a significant negative impact. Thus, calculating the tradeoff between detection rates and false alarm rates is critical for their successful operation. Receiver operating characteristic (ROC) curves have long been used to depict this tradeoff. The methodology was first developed in the field of signal detection, and in recent years it has been used increasingly in machine learning and data mining applications. It follows that this methodology could be applied to radiological/nuclear threat detection systems. However, many of these systems do not fit into the classic principles of statistical detection theory because they tend to lack tractable likelihood functions and have many parameters, which, in general, do not have a one-to-one correspondence with the detection classes. This work proposes a strategy to overcome these problems by empirically finding parameter values that maximize the probability of detection for a selected number of probabilities of false alarm. To find these parameter values, a statistical global optimization technique that seeks to estimate portions of a ROC curve is proposed. The optimization combines elements of simulated annealing with elements of genetic algorithms. Genetic algorithms were chosen because they can reduce the risk of getting stuck in local minima. However, classic genetic algorithms operate on arrays of Boolean values, or bit strings, so simulated annealing is employed to perform mutation in the genetic algorithm. The presented initial results were generated using an isotope identification algorithm developed at the Johns Hopkins University Applied Physics Laboratory. The algorithm has 12 parameters: 4 real-valued and 8 Boolean. A simulated dataset was used for the optimization study; the "threat" set of spectra
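The hybrid described above, a genetic algorithm whose mutation step borrows its acceptance rule from simulated annealing, can be sketched on a toy bit-string problem. Everything here is an illustrative assumption, not the APL identification algorithm: the target string stands in for a good parameter setting, and the fitness function stands in for "probability of detection at a chosen false-alarm rate."

```python
import math
import random

random.seed(7)

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]  # hypothetical "good" setting

def fitness(bits):
    # Stand-in for probability of detection at a fixed false-alarm rate
    return sum(b == t for b, t in zip(bits, TARGET)) / len(TARGET)

def anneal_mutate(bits, temp):
    """Simulated-annealing-style mutation: a random bit flip is kept if it
    helps, or with probability exp(dF / temp) if it hurts."""
    i = random.randrange(len(bits))
    child = list(bits)
    child[i] ^= 1
    d = fitness(child) - fitness(bits)
    if d >= 0 or random.random() < math.exp(d / temp):
        return child
    return list(bits)

def ga_sa(pop_size=20, gens=150, t0=0.5):
    """Elitist GA with one-point crossover and annealed bit-flip mutation."""
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for g in range(gens):
        temp = t0 * (1.0 - g / gens) + 1e-3   # linear cooling schedule
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(TARGET))   # one-point crossover
            children.append(anneal_mutate(a[:cut] + b[cut:], temp))
        pop = parents + children
    return max(pop, key=fitness)

best = ga_sa()
```

Early in the run the high temperature lets mutations accept fitness losses, giving the barrier-crossing behavior of annealing; as the temperature cools, mutation becomes nearly greedy and the population settles on the best bit pattern found.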
Xu, Dong; Zhang, Yang
2012-01-01
Ab initio protein folding is one of the major unsolved problems in computational biology, owing to the difficulties in force field design and conformational search. We developed a novel program, QUARK, for template-free protein structure prediction. Query sequences are first broken into fragments of 1–20 residues, and multiple fragment structures are retrieved at each position from unrelated experimental structures. Full-length structure models are then assembled from fragments using replica-exchange Monte Carlo simulations, which are guided by a composite knowledge-based force field. A number of novel energy terms and Monte Carlo movements are introduced, and their particular contributions to enhancing the efficiency of both the force field and the search engine are analyzed in detail. The QUARK prediction procedure is described and tested on the structure modeling of 145 non-homologous proteins. Although no global templates are used and all fragments from experimental structures with template modeling score (TM-score) >0.5 are excluded, QUARK can successfully construct 3D models of correct folds in one-third of cases for short proteins up to 100 residues. In the ninth community-wide Critical Assessment of protein Structure Prediction (CASP9) experiment, the QUARK server outperformed the second and third best servers by 18% and 47% based on the cumulative Z-score of global distance test-total (GDT-TS) scores in the free modeling (FM) category. Although ab initio protein folding remains a significant challenge, these data demonstrate new progress towards the solution of the most important problem in the field. PMID:22411565
Deriase, S F; Farahat, L M; El-Batal, A I
2001-01-01
In the present study, the optimal parameters for the highest ethanol productivity in a Kluyveromyces lactis immobilized-cell bioreactor were obtained using the method of Lagrange multipliers. Growing yeast cells immobilized in a PVA:HEMA (7%:10%, w/w) hydrogel copolymer carrier produced by radiation polymerization were used in a packed-bed column reactor for the continuous production of ethanol from lactose at different concentrations (50, 100 and 150 g L-1). The results indicate that volumetric ethanol productivity is influenced by substrate concentration and dilution rate. The highest value, 7.17 g L-1 h-1, is obtained at the higher lactose concentration (150 g L-1) in the feed medium and a dilution rate of 0.3 h-1. The same results have been obtained through the application of the "LINGO" software for mathematical optimization. PMID:11518393
2012-01-01
Background An increasing number of emergency medicine (EM) residency training programs have residents interested in participating in clinical rotations in other countries. However, the policies that each individual training program applies to this process are different. To our knowledge, little has been done in the standardization of these experiences to help EM residency programs with the evaluation, administration and implementation of a successful global health clinical elective experience. The objective of this project was to assess the current status of EM global health electives at residency training programs and to establish recommendations from educators in EM on the best methodology to implement successful global health electives. Methods During the 2011 Council of Emergency Medicine Residency Directors (CORD) Academic Assembly, participants met to address this issue in a mediated discussion session and working group. Session participants examined data previously obtained via the CORD online listserve, discussed best practices in global health applications, evaluations and partnerships, and explored possible solutions to some of the challenges. In addition a survey was sent to CORD members prior to the 2011 Academic Assembly to evaluate the resources and processes for EM residents’ global experiences. Results Recommendations included creating a global health working group within the organization, optimizing a clearinghouse of elective opportunities for residents and standardizing elective application materials, site evaluations and resident assessment/feedback methods. The survey showed that 71.4% of respondents have global health partnerships and electives. However, only 36.7% of programs require pre-departure training, and only 20% have formal competency requirements for these global health electives. Conclusions A large number of EM training programs have global health experiences available, but these electives and the trainees may benefit from
OPTIMAL STRATEGIES FOR CONTINUOUS GRAVITATIONAL WAVE DETECTION IN PULSAR TIMING ARRAYS
Ellis, J. A.; Siemens, X.; Creighton, J. D. E.
2012-09-10
Supermassive black hole binaries (SMBHBs) are expected to emit a continuous gravitational wave signal in the pulsar timing array (PTA) frequency band (10{sup -9} to 10{sup -7} Hz). The development of data analysis techniques aimed at efficient detection and characterization of these signals is critical to the gravitational wave detection effort. In this paper, we leverage methods developed for LIGO continuous gravitational wave searches and explore the use of the F-statistic for such searches in pulsar timing data. Babak and Sesana have used this approach in the context of PTAs to show that one can resolve multiple SMBHB sources in the sky. Our work improves on several aspects of prior continuous wave search methods developed for PTA data analysis. The algorithm is implemented fully in the time domain, which naturally deals with the irregular sampling typical of PTA data and avoids spectral leakage problems associated with frequency domain methods. We take into account the fitting of the timing model and have generalized our approach to deal with both correlated and uncorrelated colored noise sources. We also develop an incoherent detection statistic that maximizes over all pulsar-dependent contributions to the likelihood. To test the effectiveness and sensitivity of our detection statistics, we perform a number of Monte Carlo simulations. We produce sensitivity curves for PTAs of various configurations and outline an implementation of a fully functional data analysis pipeline. Finally, we present a derivation of the likelihood maximized over the gravitational wave phases at the pulsar locations, which results in a vast reduction of the search parameter space.
Chapman, Julia L; Serinel, Yasmina; Marshall, Nathaniel S; Grunstein, Ronald R
2016-09-01
Excessive daytime sleepiness (EDS) is common in obstructive sleep apnea (OSA), but it is also common in the general population. When sleepiness remains after continuous positive airway pressure (CPAP) treatment of OSA, comorbid conditions or permanent brain injury before CPAP therapy may be the cause of the residual sleepiness. There is currently no broad approach to treating residual EDS in patients with OSA. Individual assessment must be made of comorbid conditions and medications, and of lifestyle factors that may be contributing to the sleepiness. Modafinil and armodafinil are the only pharmacologic agents indicated for residual sleepiness in these patients. PMID:27542881
NASA Astrophysics Data System (ADS)
Zhang, X.; Cai, X.; Zhu, T.
2013-12-01
Biofuel production has boomed in recent years due to its potential contributions to energy sustainability, environmental improvement and economic opportunities. Production of biofuels not only competes for land and water with food production, but also directly pushes up food prices when crops such as maize and sugarcane are used as biofuel feedstock. Meanwhile, international trade of agricultural commodities exports and imports water and land resources in virtual form among different regions, balances overall water and land demands against resource endowments, and provides a promising solution to the increasingly severe food-energy competition. This study investigates how to optimize water and land resource use for overall welfare at the global scale in the framework of 'virtual resources'. In contrast to partial equilibrium models that usually simulate trade year by year, this optimization model explores the ideal world in which malnourishment is minimized under optimal resource use and trade flows. Comparing the optimal production and trade patterns with historical data can provide meaningful implications regarding how to utilize water and land resources more efficiently and how trade flows would change for overall welfare at the global scale. Valuable insights are obtained into the interactions among food, water and bioenergy systems. A global hydro-economic optimization model is developed, integrating agricultural production, market demands (food, feed, fuel and other), and resource and environmental constraints. Preliminary results show that with a 'free market' mechanism and optimized use of land and water resources, the malnourished population can be reduced by as much as 65% compared to the 2000 historical value. Expected results include: 1) optimal trade paths to achieve global malnourishment minimization, 2) how water and land resources constrain local supply, and 3) how policy affects the trade pattern as well as resource use. Furthermore, impacts of
Donner, René; Menze, Bjoern H; Bischof, Horst; Langs, Georg
2013-12-01
The accurate localization of anatomical landmarks is a challenging task, often solved by domain specific approaches. We propose a method for the automatic localization of landmarks in complex, repetitive anatomical structures. The key idea is to combine three steps: (1) a classifier for pre-filtering anatomical landmark positions that (2) are refined through a Hough regression model, together with (3) a parts-based model of the global landmark topology to select the final landmark positions. During training landmarks are annotated in a set of example volumes. A classifier learns local landmark appearance, and Hough regressors are trained to aggregate neighborhood information to a precise landmark coordinate position. A non-parametric geometric model encodes the spatial relationships between the landmarks and derives a topology which connects mutually predictive landmarks. During the global search we classify all voxels in the query volume, and perform regression-based agglomeration of landmark probabilities to highly accurate and specific candidate points at potential landmark locations. We encode the candidates' weights together with the conformity of the connecting edges to the learnt geometric model in a Markov Random Field (MRF). By solving the corresponding discrete optimization problem, the most probable location for each model landmark is found in the query volume. We show that this approach is able to consistently localize the model landmarks despite the complex and repetitive character of the anatomical structures on three challenging data sets (hand radiographs, hand CTs, and whole body CTs), with a median localization error of 0.80 mm, 1.19 mm and 2.71 mm, respectively. PMID:23664450
Visual Feedback of Continuous Bedside Pressure Mapping to Optimize Effective Patient Repositioning
Scott, Ronald G.; Thurman, Kristen M.
2014-01-01
Objective: To evaluate the effectiveness of a new bedside pressure mapping technology for patient repositioning in a long-term acute care hospital. Approach: Bedside caregivers repositioned patients to the best of their abilities, using pillows and positioning aids without the visual feedback from a continuous bedside pressure mapping (CBPM) system. Once positioned, caregivers were shown the image from the CBPM system and allowed to make further adjustments to the patient position. Data from the CBPM device, in the form of visual screenshots and peak pressure values, were obtained after each repositioning phase. Caregivers provided feedback on repositioning with and without the CBPM system. Results: Screenshots displayed lower pressures when the visual feedback from the CBPM systems was utilized by caregivers. Lower peak pressure measurements were also evident when caregivers utilized the image from the CBPM systems. Overall, caregivers felt the system enabled more effective patient positioning and increased the quality of care they provided their patients. Innovation: This is the first bedside pressure mapping device to be continuously used in a clinical setting to provide caregivers and patients visual, instant feedback of pressure, thereby enhancing repositioning and offloading practices. Conclusion: With the visual feedback from the pressure mapping systems, caregivers were able to more effectively reposition patients, decreasing exposure to damaging high pressures. PMID:24804157
NASA Astrophysics Data System (ADS)
Scirè Mammano, Giovanni; Dragoni, Eugenio
2015-04-01
A relatively unexplored but extremely attractive field for the application of shape memory technology is the area of rotary actuators, especially for generating continuous rotations. This paper deals with a novel design of a rotary motor based on SMA wires and overrunning clutches which features high output torque and boundless angular stroke in a compact package. The concept uses a long SMA wire wound round a low-friction cylindrical drum upon which the wire can contract and extend with minimum effort and limited space demand. Since the drum is fitted to the output shaft by means of an overrunning clutch, the output shaft rotates unidirectionally despite the sequence of contraction-elongation cycles of the wire. Following a design procedure developed in a former paper, a six-stage miniature prototype is built and tested, showing excellent performance in terms of torque, speed and power density. Characteristic performances of the motor are as follows: size envelope = 48×22×30 mm3; maximum torque = 20 Nmm; specific torque = 6.31×10-4 Nmm/mm3; rotation per module = 15 deg; continuous speed (unloaded) = 4 rpm.
Covariance and crossover matrix guided differential evolution for global numerical optimization.
Li, YongLi; Feng, JinFu; Hu, JunHua
2016-01-01
Differential evolution (DE) is an efficient and robust evolutionary algorithm and has wide application in various science and engineering fields. DE is sensitive to the selection of mutation and crossover strategies and their associated control parameters. However, the structure and implementation of DEs are becoming more complex because of the diverse mutation and crossover strategies that use distinct parameter settings during the different stages of the evolution. A novel strategy is used in this study to improve the crossover and mutation operations. The crossover matrix, instead of a crossover operator and its control parameter CR, is proposed to implement the function of the crossover operation. Meanwhile, a Gaussian distribution centered on the best individuals found in each generation is sampled based on the proposed covariance matrix, which is generated between the best individual and several better individuals. An improved mutation operator based on the crossover matrix is randomly selected to generate the trial population. This operator is used to generate high-quality solutions to improve the capability of exploitation and enhance the preference for exploration. In addition, the memory population is randomly chosen from the previous generation and used to control the search direction in the novel mutation strategy. Accordingly, the diversity of the population is improved. Thus, CCDE, a novel, efficient, and simple DE variant, is presented in this paper. CCDE has been tested on 30 benchmarks and 5 real-world optimization problems from the IEEE Congress on Evolutionary Computation (CEC) 2014 and CEC 2011, respectively. Experimental and statistical results demonstrate the effectiveness of CCDE for global numerical and engineering optimization. CCDE can solve the test benchmark functions and engineering problems more successfully than the other DE variants and algorithms from CEC 2014. PMID:27512635
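For readers unfamiliar with the baseline that CCDE modifies, a minimal sketch of the classic DE/rand/1/bin scheme might look as follows. This is the textbook algorithm only; CCDE's crossover matrix and covariance-guided sampling are not shown:

```python
import random

def de_rand_1_bin(f, bounds, pop_size=20, F=0.5, CR=0.9, gens=200, seed=0):
    """Classic DE/rand/1/bin for minimization. CCDE replaces the scalar CR
    with a crossover matrix and adds covariance-guided sampling (not shown)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == jrand:              # binomial crossover
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])  # rand/1 mutation
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)
                else:
                    v = pop[i][j]
                trial.append(v)
            ft = f(trial)
            if ft <= fit[i]:                                     # greedy selection
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]

# Minimize the sphere function on [-5, 5]^3
x, fx = de_rand_1_bin(lambda v: sum(t * t for t in v), [(-5.0, 5.0)] * 3)
```

The sensitivity to CR and F noted in the abstract is visible even here: both appear as fixed scalars applied uniformly across dimensions and generations.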
Efficiency of Pareto joint inversion of 2D geophysical data using global optimization methods
NASA Astrophysics Data System (ADS)
Miernik, Katarzyna; Bogacz, Adrian; Kozubal, Adam; Danek, Tomasz; Wojdyła, Marek
2016-04-01
Pareto joint inversion of two or more sets of data is a promising new tool of modern geophysical exploration. In the first stage of our investigation we created software enabling execution of forward solvers for two geophysical methods (2D magnetotelluric and gravity) as well as inversion, with the possibility of constraining the solution with seismic data. In the MT forward solver, Helmholtz equations were solved by the finite element method with Dirichlet boundary conditions. The gravity forward solver was based on Talwani's algorithm. To limit the dimensionality of the solution space we decided to describe the model as sets of polygons, using the Sharp Boundary Interface (SBI) approach. The main inversion engine was created using a Particle Swarm Optimization (PSO) algorithm adapted to handle two or more target functions and to prevent acceptance of solutions which are non-realistic or incompatible with the Pareto scheme. Each inversion run generates a single Pareto solution, which can be added to the Pareto front. The PSO inversion engine was parallelized using the OpenMP standard, which enabled the code to run with a practically unlimited number of threads at once, significantly decreasing the computing time of the inversion process. Furthermore, computing efficiency increases with the number of PSO iterations. In this contribution we analyze the efficiency of the created software, taking into consideration details of the chosen global optimization engine used as the main joint minimization engine. Additionally, we study how much computational time can be saved by different methods of parallelization applied to both the forward solvers and the inversion algorithm. All tests were done for 2D magnetotelluric and gravity data based on real geological media. The obtained results show that even on relatively modest mid-range computational infrastructure the proposed inversion solution can be applied in practice and used for real-life problems of geophysical inversion and interpretation.
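As context for the inversion engine described above, a minimal single-objective, global-best PSO sketch is shown below. The paper's engine extends this to two or more target functions with Pareto acceptance, which is not reproduced here; the objective and bounds are illustrative:

```python
import random

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal global-best PSO for a single objective (minimization)."""
    rng = random.Random(seed)
    dim = len(bounds)
    x = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(p) for p in x]           # personal bests
    pfit = [f(p) for p in x]
    g = min(range(n_particles), key=lambda i: pfit[i])
    gbest, gfit = list(pbest[g]), pfit[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for j in range(dim):
                r1, r2 = rng.random(), rng.random()
                v[i][j] = (w * v[i][j]
                           + c1 * r1 * (pbest[i][j] - x[i][j])
                           + c2 * r2 * (gbest[j] - x[i][j]))
                lo, hi = bounds[j]
                x[i][j] = min(max(x[i][j] + v[i][j], lo), hi)
            fx = f(x[i])
            if fx < pfit[i]:
                pbest[i], pfit[i] = list(x[i]), fx
                if fx < gfit:
                    gbest, gfit = list(x[i]), fx
    return gbest, gfit

best, val = pso(lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2,
                [(-5.0, 5.0), (-5.0, 5.0)])
```

The inner particle loop is embarrassingly parallel, which is why an OpenMP-style parallelization of the swarm update pays off so directly.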
Rigla, Mercedes
2011-01-01
Although current systems for continuous glucose monitoring (CGM) are the result of progressive technological improvement, and although a beneficial effect on glucose control has been demonstrated, few patients are using them. Something similar has happened to telemedicine (TM); in spite of the long-term experience, which began in the early 1980s, no TM system has been widely adopted, and presential visits are still almost the only way diabetologists and patients communicate. The hypothesis developed in this article is that neither CGM nor TM will ever be routinely implemented separately, and their consideration as essential elements for standard diabetes care will one day come from their integration as parts of a telemedical monitoring platform. This platform, which should include artificial intelligence for giving decision support to patients and physicians, will represent the core of a more complex global agent for diabetes care, which will provide control algorithms and risk analysis among other essential functions. PMID:21303626
NASA Astrophysics Data System (ADS)
Lera, Daniela; Sergeyev, Yaroslav D.
2015-06-01
In this paper, the global optimization problem min_{y∈S} F(y) with S being a hyperinterval in R^N and F(y) satisfying the Lipschitz condition with an unknown Lipschitz constant is considered. It is supposed that the function F(y) can be multiextremal, non-differentiable, and given as a 'black-box'. To attack the problem, a new global optimization algorithm based on the following two ideas is proposed and studied both theoretically and numerically. First, the new algorithm uses numerical approximations to space-filling curves to reduce the original Lipschitz multi-dimensional problem to a univariate one satisfying the Hölder condition. Second, at each iteration the algorithm applies a new geometric technique working with a number of possible Hölder constants chosen from a set of values varying from zero to infinity, so that ideas introduced in the popular DIRECT method can be used in Hölder global optimization. Convergence conditions of the resulting deterministic global optimization method are established. Numerical experiments carried out on several hundreds of test functions show quite a promising performance of the new algorithm in comparison with its direct competitors.
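The univariate problems produced by such a reduction are classically attacked with sawtooth lower bounds. The sketch below is the Piyavskii-Shubert method with a known, assumed-valid Lipschitz constant L, shown only to illustrate the family of geometric techniques involved; it is not the authors' Hölder-based algorithm, which works without a known constant:

```python
def piyavskii(f, a, b, L, iters=60):
    """Piyavskii-Shubert univariate global minimization: repeatedly evaluate
    f at the minimizer of the piecewise-linear (sawtooth) lower bound built
    from a Lipschitz constant L, assumed valid for f on [a, b]."""
    pts = sorted([(a, f(a)), (b, f(b))])
    for _ in range(iters):
        best_x, best_lb = None, float("inf")
        for (x1, f1), (x2, f2) in zip(pts, pts[1:]):
            # Minimizer and value of the sawtooth bound between adjacent samples
            xm = 0.5 * (x1 + x2) + (f1 - f2) / (2.0 * L)
            lb = 0.5 * (f1 + f2) - 0.5 * L * (x2 - x1)
            if lb < best_lb:
                best_x, best_lb = xm, lb
        pts.append((best_x, f(best_x)))
        pts.sort()
    return min(pts, key=lambda p: p[1])

# |d/dt (t - 0.7)^2| <= 2.6 on [0, 2], so L = 4 is a valid Lipschitz constant
x, fx = piyavskii(lambda t: (t - 0.7) ** 2, 0.0, 2.0, L=4.0)
```

Replacing the single L with a set of candidate constants from zero to infinity is precisely the DIRECT-style idea the abstract refers to.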
Optimal estimation of regional N2O emissions using a three-dimensional global model
NASA Astrophysics Data System (ADS)
Huang, J.; Golombek, A.; Prinn, R.
2004-12-01
In this study, we use the MATCH (Model of Atmospheric Transport and Chemistry) model and Kalman filtering techniques to optimally estimate N2O emissions from seven source regions around the globe. The MATCH model was used with NCEP assimilated winds at T62 resolution (192 longitude by 94 latitude surface grid, and 28 vertical levels) from July 1st 1996 to December 31st 2000. The average concentrations of N2O in the lowest four layers of the model were then compared with the monthly mean observations from six national/global networks (AGAGE, CMDL (HATS), CMDL (CCGG), CSIRO, CSIR and NIES), at 48 surface sites. A 12-month-running-mean smoother was applied to both the model results and the observations, due to the fact that the model was not able to reproduce the very small observed seasonal variations. The Kalman filter was then used to solve for the time-averaged regional emissions of N2O for January 1st 1997 to June 30th 2000. The inversions assume that the model stratospheric destruction rates, which lead to a global N2O lifetime of 130 years, are correct. It also assumes normalized emission spatial distributions from each region based on previous studies. We conclude that the global N2O emission flux is about 16.2 TgN/yr, with {34.9±1.7%} from South America and Africa, {34.6±1.5%} from South Asia, {13.9±1.5%} from China/Japan/South East Asia, {8.0±1.9%} from all oceans, {6.4±1.1%} from North America and North and West Asia, {2.6±0.4%} from Europe, and {0.9±0.7%} from New Zealand and Australia. The errors here include the measurement standard deviation, calibration differences among the six groups, grid volume/measurement site mis-match errors estimated from the model, and a procedure to account approximately for the modeling errors.
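As a toy illustration of the Kalman filtering step used in such inversions (a scalar filter for a single static emission rate, far simpler than the MATCH-based multi-region estimation above; all numbers are made up), one might write:

```python
import random

def kalman_constant(obs, x0, p0, r):
    """Scalar Kalman filter for a constant (static) state observed with
    measurement noise variance r: here, a single time-averaged emission rate."""
    x, p = x0, p0
    for z in obs:
        k = p / (p + r)        # Kalman gain
        x = x + k * (z - x)    # measurement update
        p = (1.0 - k) * p      # posterior variance shrinks with each observation
    return x, p

# Hypothetical noisy observations of a true rate of 16.2 (units arbitrary)
rng = random.Random(0)
obs = [16.2 + rng.gauss(0.0, 0.5) for _ in range(200)]
est, var = kalman_constant(obs, x0=10.0, p0=100.0, r=0.25)
```

The shrinking posterior variance is the mechanism by which the regional flux uncertainties quoted above (e.g. ±1.7%) emerge from many noisy station-months of data.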
Leslie, P; Jung, R T; Isles, T E; Baty, J; Newton, R W; Illingworth, P
1986-01-01
To assess the role of insulin in the control of body weight, energy expenditure was measured by indirect calorimetry in eight patients of normal weight with type I diabetes, initially while poorly controlled during conventional insulin treatment and later during optimal glycaemic control achieved by using the continuous subcutaneous insulin infusion pump. Their response to seven days of fat supplementation was also assessed and the results compared with those in eight non-diabetic subjects. After a mean of 5.3 months of continuous subcutaneous insulin infusion the diabetic subjects had gained on average 3.5 kg. In the poorly controlled diabetic state the resting metabolic rate was raised but decreased by a mean of 374 kJ (90 kcal) per 24 hours with optimal glycaemic control. The thermic response to infused noradrenaline was reduced by 59% in the diabetic subjects, was not improved by continuous subcutaneous insulin infusion, but was improved when three of the subjects were given metformin in addition. The diabetic subjects had no abnormality in the thermic response to a meal while taking their usual diabetic diet. During fat supplementation, however, this thermic response was reduced when glycaemic control was poor but not when control was precise. Fat supplementation did not alter the resting metabolic rate or the reduced noradrenergic thermic response in the diabetic subjects. These findings suggest that precise glycaemic control could produce weight gain if energy intake remained unaltered, for diabetic subjects do not compensate for the decrease in metabolic rate by an increase in noradrenergic and dietary thermic responses. Also, precise glycaemic control using continuous subcutaneous insulin infusion does not correct all the metabolic abnormalities of diabetes mellitus. PMID:3094802
Optimization of scrap tire pyrolysis using a continuous-feed steam environment
Burrell, T.W.; Frank, S.R.; Rich, M.L.
1995-12-01
Estimates of the generation of scrap tires produced in the United States are on the order of 2 million tons per year. Although these tires contain a high percentage of useful hydrocarbons, steel and carbon black, approximately 70% are not effectively recycled. Recently, pyrolytic recycling of scrap tires (thermal decomposition in the absence of O{sub 2}) has been receiving renewed interest because of its ability to produce valuable hydrocarbon products. We have developed a process which permits continuous-feed processing of scrap tires in a non-combustible steam environment. This system utilizes a soft seal system that operates at atmospheric pressure while minimizing any fugitive emissions. This process increases the efficiency and control of present approaches by lowering the energy requirements while maximizing the collection of valuable products. Initial bench-scale results will be presented.
Shan, Hai; Yasuda, Toshiyuki; Ohkura, Kazuhiro
2015-06-01
The artificial bee colony (ABC) algorithm is one of the most popular swarm intelligence algorithms, inspired by the foraging behavior of honeybee colonies. To improve the convergence ability and the speed of finding the best solution, and to control the balance between exploration and exploitation, we propose a self-adaptive hybrid enhanced ABC algorithm in this paper. To evaluate the performance of the standard ABC, best-so-far ABC (BsfABC), incremental ABC (IABC), and the proposed ABC algorithms, we implemented numerical optimization problems based on the IEEE Congress on Evolutionary Computation (CEC) 2014 test suite. Our experimental results show the comparative performance of the standard ABC, BsfABC, IABC, and the proposed ABC algorithms. According to the results, we conclude that the proposed ABC algorithm is competitive with state-of-the-art modified ABC algorithms such as the BsfABC and IABC algorithms on the benchmark problems defined by the CEC 2014 test suite with dimension sizes of 10, 30, and 50, respectively. PMID:25982071
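A bare-bones version of the standard ABC algorithm that the enhanced variants build on can be sketched as follows. This is a textbook sketch, not the proposed self-adaptive hybrid; the selection weights assume a non-negative objective:

```python
import random

def abc_minimize(f, bounds, n_food=15, limit=30, iters=300, seed=2):
    """Bare-bones artificial bee colony for minimization; assumes a
    non-negative objective so 1/(1+f) works as an onlooker weight."""
    rng = random.Random(seed)
    dim = len(bounds)
    new_source = lambda: [rng.uniform(lo, hi) for lo, hi in bounds]
    food = [new_source() for _ in range(n_food)]
    fit = [f(x) for x in food]
    trials = [0] * n_food

    def try_neighbor(i):
        k = rng.choice([j for j in range(n_food) if j != i])
        d = rng.randrange(dim)
        cand = list(food[i])
        cand[d] += rng.uniform(-1.0, 1.0) * (food[i][d] - food[k][d])
        lo, hi = bounds[d]
        cand[d] = min(max(cand[d], lo), hi)
        fc = f(cand)
        if fc < fit[i]:
            food[i], fit[i], trials[i] = cand, fc, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_food):                  # employed-bee phase
            try_neighbor(i)
        weights = [1.0 / (1.0 + fi) for fi in fit]
        for _ in range(n_food):                  # onlooker phase
            try_neighbor(rng.choices(range(n_food), weights=weights)[0])
        for i in range(n_food):                  # scout phase: abandon stale sources
            if trials[i] > limit:
                food[i] = new_source()
                fit[i], trials[i] = f(food[i]), 0

    best = min(range(n_food), key=lambda i: fit[i])
    return food[best], fit[best]

x, fx = abc_minimize(lambda v: sum(t * t for t in v), [(-5.0, 5.0)] * 2)
```

The exploration/exploitation balance the abstract targets lives in the three phases: employed and onlooker bees exploit known sources, scouts re-explore.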
NASA Astrophysics Data System (ADS)
Wei, Qing-Lai; Song, Rui-Zhuo; Sun, Qiu-Ye; Xiao, Wen-Dong
2015-09-01
This paper presents an off-policy integral reinforcement learning (IRL) algorithm to obtain the optimal tracking control of unknown chaotic systems. Off-policy IRL can learn the solution of the Hamilton-Jacobi-Bellman (HJB) equation from the system data generated by an arbitrary control. Moreover, off-policy IRL can be regarded as a direct learning method, which avoids the identification of system dynamics. In this paper, the performance index function is first given based on the system tracking error and control error. For solving the HJB equation, an off-policy IRL algorithm is proposed. It is proven that the iterative control makes the tracking error system asymptotically stable, and the iterative performance index function is convergent. Simulation study demonstrates the effectiveness of the developed tracking control method. Project supported by the National Natural Science Foundation of China (Grant Nos. 61304079 and 61374105), the Beijing Natural Science Foundation, China (Grant Nos. 4132078 and 4143065), the China Postdoctoral Science Foundation (Grant No. 2013M530527), the Fundamental Research Funds for the Central Universities, China (Grant No. FRF-TP-14-119A2), and the Open Research Project from State Key Laboratory of Management and Control for Complex Systems, China (Grant No. 20150104).
Denault, J.; Guillemenet, J.
1996-12-31
The objective of this work was to optimize the processing conditions of polypropylene/carbon, PP/C, and polypropylene/glass, PP/G, composites. The effects of molding parameters such as molding temperature, residence time, and cooling rate on the tensile performance of PP/C and PP/G were investigated. It is well known that the mechanical performance of composites based on a thermoplastic matrix such as polypropylene is closely related to the crystalline morphology, which is dependent on the thermal history. Since the compression molding process involves the kinetic behavior of systems undergoing phase transformations under non-isothermal conditions, the crystallization behavior of the PP matrix in the presence of carbon and glass fibers was investigated under non-isothermal conditions. The effects of processing temperature, residence time and cooling rate on the crystallization temperature, degree of crystallinity, crystallization rate and kinetics of crystallization were analyzed. The tensile behavior of the {+-}45{degrees} laminates of PP/C and PP/G and their interfacial properties were evaluated as a function of molding parameters. The variation in the tensile strength of the {+-}45{degrees} laminates as a function of molding temperature was found to show three distinct regions: the tensile strength first increases with molding temperature, attains a plateau region, and finally decreases at high molding temperature. DSC analysis, done in order to simulate phase transformation under non-isothermal conditions, also revealed similar behavior, suggesting a close relationship between mechanical performance and matrix properties.
Rice, J.A.; Hazelton, C.S.; Haun, M.J.
1996-12-31
Blackglas{trademark} matrix/ceramic fiber composites are being developed for an Rf transmission window application. The window must have a low dielectric loss factor to improve energy efficiency and reduce internal heating. Requirements also include moderate strength up to 500{degrees}C and high thermal shock resistance. The intent is to replace the current alumina window with a composite part which has comparable low loss characteristics, enhanced mechanical properties, and a non-catastrophic failure mode. While it is easy to fabricate a composite with a carbon interfacial coating which meets the mechanical property specifications, the dielectric loss factor is several orders of magnitude too high. Alternate coatings can be applied that meet the electrical specifications but which fail in a brittle manner at very low loads. This paper will present the authors' efforts at optimizing both the electrical and mechanical properties of these composites. Modification of the matrix, fibers, and interface through proper material selection and processing can achieve the desired electrical characteristics while maintaining acceptable strength.
Optimization of a Continuous Hybrid Impeller Mixer via Computational Fluid Dynamics
Othman, N.; Kamarudin, S. K.; Takriff, M. S.; Rosli, M. I.; Engku Chik, E. M. F.; Meor Adnan, M. A. K.
2014-01-01
This paper presents the preliminary steps required for conducting experiments to obtain the optimal operating conditions of a hybrid impeller mixer and to determine the residence time distribution (RTD) using computational fluid dynamics (CFD). In this paper, impeller speed and clearance parameters are examined. The hybrid impeller mixer consists of a single Rushton turbine mounted above a single pitched blade turbine (PBT). Four impeller speeds, 50, 100, 150, and 200 rpm, and four impeller clearances, 25, 50, 75, and 100 mm, were the operation variables used in this study. CFD was utilized to initially screen the parameter ranges to reduce the number of actual experiments needed. Afterward, the residence time distribution (RTD) was determined using the respective parameters. Finally, the Fluent-predicted RTD and the experimentally measured RTD were compared. The CFD investigations revealed that an impeller speed of 50 rpm and an impeller clearance of 25 mm were not viable for experimental investigations and were thus eliminated from further analyses. The determination of RTD using a k-ε turbulence model was performed using CFD techniques. The multiple reference frame (MRF) was implemented and a steady state was initially achieved followed by a transient condition for RTD determination. PMID:25170524
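Once a tracer concentration curve is available (from CFD or from experiment), the RTD is conventionally summarized by its moments. The sketch below is standard RTD moment analysis, not the paper's Fluent-based procedure; the ideal-CSTR tracer curve is a made-up validation case:

```python
import math

def rtd_moments(t, c):
    """Mean residence time and variance from a pulse-tracer curve C(t),
    using trapezoidal integration (standard RTD moment analysis)."""
    def trapz(y):
        return sum(0.5 * (y[i] + y[i + 1]) * (t[i + 1] - t[i])
                   for i in range(len(t) - 1))
    area = trapz(c)
    e = [ci / area for ci in c]                    # E(t), the normalized RTD
    tm = trapz([ti * ei for ti, ei in zip(t, e)])  # mean residence time
    var = trapz([(ti - tm) ** 2 * ei for ti, ei in zip(t, e)])
    return tm, var

# Ideal CSTR with tau = 2: E(t) = exp(-t/tau)/tau, so mean = 2, variance = 4
t = [i * 0.01 for i in range(3001)]
c = [math.exp(-ti / 2.0) for ti in t]
tm, var = rtd_moments(t, c)
```

Comparing such moments between the CFD-predicted and measured tracer curves is one simple way to quantify the agreement the abstract reports.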
Lihoreau, Mathieu; Ings, Thomas C; Chittka, Lars; Reynolds, Andy M
2016-01-01
Simulated annealing is a powerful stochastic search algorithm for locating a global maximum that is hidden among many poorer local maxima in a search space. It is frequently implemented in computers working on complex optimization problems but until now has not been directly observed in nature as a searching strategy adopted by foraging animals. We analysed high-speed video recordings of the three-dimensional searching flights of bumblebees (Bombus terrestris) made in the presence of large or small artificial flowers within a 0.5 m3 enclosed arena. Analyses of the three-dimensional flight patterns in both conditions reveal signatures of simulated annealing searches. After leaving a flower, bees tend to scan back and forth past that flower before making prospecting flights (loops), whose length increases over time. The search pattern becomes gradually more expansive and culminates when another rewarding flower is found. Bees then scan back and forth in the vicinity of the newly discovered flower and the process repeats. This looping search pattern, in which flight step lengths are typically power-law distributed, provides a relatively simple yet highly efficient strategy for pollinators such as bees to find best quality resources in complex environments made of multiple ephemeral feeding sites with nutritionally variable rewards. PMID:27459948
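The computational procedure the bees' flights resemble can be stated compactly. Below is a textbook simulated-annealing sketch for a rugged one-dimensional landscape; the objective, step size, and cooling schedule are made up for illustration:

```python
import math, random

def simulated_annealing(f, x0, step=1.0, t0=10.0, cooling=0.995, iters=5000, seed=3):
    """Textbook simulated annealing for 1-D minimization: uphill moves are
    accepted with probability exp(-delta/T) while the temperature T decays."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)
        fc = f(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling                      # geometric cooling schedule
    return best, fbest

# A made-up rugged landscape: many local minima, global minimum at x = 0
rough = lambda x: x * x + 3.0 * math.sin(5.0 * x) ** 2
x, fx = simulated_annealing(rough, x0=8.0)
```

Early on, the high temperature lets the search hop freely between basins (analogous to the bees' long exploratory loops); as T decays the walk settles into the best basin found, mirroring the tightening scans around a newly discovered flower.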
Jenny, Richard M; Jasper, Micah N; Simmons, Otto D; Shatalov, Max; Ducoste, Joel J
2015-10-15
Alternative disinfection sources such as ultraviolet light (UV) are being pursued to inactivate pathogenic microorganisms such as Cryptosporidium and Giardia, while simultaneously reducing the risk of exposure to carcinogenic disinfection by-products (DBPs) in drinking water. UV-LEDs offer a UV disinfection source that contains no mercury, has the potential for long lifetimes, is robust, and has a high degree of design flexibility. However, the increased flexibility in design options will add a substantial level of complexity when developing a UV-LED reactor, particularly with regards to reactor shape, size, spatial orientation of light, and germicidal emission wavelength. Anticipating that LEDs are the future of UV disinfection, new methods are needed for designing such reactors. In this research study, the evaluation of a new design paradigm using a point-of-use UV-LED disinfection reactor has been performed. ModeFrontier, a numerical optimization platform, was coupled with COMSOL Multiphysics, a computational fluid dynamics (CFD) software package, to generate an optimized UV-LED continuous flow reactor. Three optimality conditions were considered: 1) a single-objective analysis minimizing input supply power while achieving at least 2.0 log10 inactivation of Escherichia coli ATCC 11229; and 2) two multi-objective analyses (one of which maximized the log10 inactivation of E. coli ATCC 11229 and minimized the supply power). All tests were completed at a flow rate of 109 mL/min and 92% UVT (measured at 254 nm). The numerical solution for the first objective was validated experimentally using biodosimetry. The optimal design predictions displayed good agreement with the experimental data and contained several non-intuitive features, particularly with the UV-LED spatial arrangement, where the lights were unevenly populated throughout the reactor. The optimal designs may not have been developed by experienced designers due to the increased degrees of
NASA Technical Reports Server (NTRS)
Dunn, D.; Lusignan, B.
1972-01-01
A set of analytical capabilities that are needed to assess the role satellite communications technology will play in public and other services was developed. It is user-oriented in that it starts from descriptions of user demand and develops the ability to estimate the cost of satisfying that demand with the lowest cost communications system. To ensure that the analysis could cope with the complexities of real users, two services were chosen as examples: continuing professional education and medical services. Telecommunications costs are affected greatly by demographic factors, involving the distribution of users in urban areas and distances between towns in rural regions. For this reason the analytical tools were exercised on sample locations. San Jose, California and Denver, Colorado were used to represent an urban area and the Rocky Mountain states were used to represent a rural region. In assessing the range of satellite system costs, two example coverage areas were considered, one appropriate to cover the contiguous forty-eight states, a second appropriate to cover about one-third that area.
Segmentation of bone structures in 3D CT images based on continuous max-flow optimization
NASA Astrophysics Data System (ADS)
Pérez-Carrasco, J. A.; Acha-Piñero, B.; Serrano, C.
2015-03-01
In this paper an algorithm to carry out the automatic segmentation of bone structures in 3D CT images has been implemented. Automatic segmentation of bone structures is of special interest for radiologists and surgeons to analyze bone diseases or to plan surgical interventions. This task is very complicated, as bones usually present intensities overlapping with those of surrounding tissues. This overlapping is mainly due to the composition of bones and to the presence of diseases such as osteoarthritis, osteoporosis, etc. Moreover, segmentation of bone structures is a very time-consuming task due to the 3D nature of the bones. Usually, this segmentation is implemented manually or with algorithms using simple techniques such as thresholding, thus providing poor results. In this paper gray-level information and 3D statistical information have been combined to be used as input to a continuous max-flow algorithm. Twenty CT images have been tested and different coefficients have been computed to assess the performance of our implementation. Dice and sensitivity values above 0.91 and 0.97, respectively, were obtained. A comparison with level sets and thresholding techniques has been carried out and our results outperformed them in terms of accuracy.
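The Dice and sensitivity figures quoted above are computed from the predicted and ground-truth binary masks. A minimal sketch of both metrics (the mask values here are a toy example, not the paper's data) is:

```python
def dice_and_sensitivity(pred, truth):
    """Dice coefficient and sensitivity between two flattened binary masks."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    dice = 2.0 * tp / (2 * tp + fp + fn)   # overlap-weighted agreement
    sens = tp / (tp + fn)                  # fraction of true bone recovered
    return dice, sens

# Toy 5-voxel example: one bone voxel is missed by the segmentation
d, s = dice_and_sensitivity([1, 1, 0, 1, 0], [1, 1, 1, 1, 0])
```

On this toy input the missed voxel costs sensitivity (3 of 4 true voxels found) more than Dice, which also credits the absence of false positives.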
NASA Astrophysics Data System (ADS)
Barranco García, Javier; Gilardoni, Simone
2011-03-01
The proton beams used for the fixed target physics at the Super Proton Synchrotron (SPS) are extracted from the Proton Synchrotron (PS) by a multiturn technique called continuous transfer (CT). During the CT extraction, large losses are observed in locations where the machine aperture should be large enough to accommodate the circulating beam. This limits the maximum intensity deliverable due to the induced stray radiation outside the PS tunnel. Scattered particles from the interaction with the electrostatic septum are identified as the possible source of these losses. This article presents a detailed study aiming to understand the origin of losses and propose possible cures. The simulations could reproduce accurately the beam loss pattern measured in real machine operation and determine the beam shaving, intrinsic to the extraction process, as the cause for the unexpected losses. Since these losses are unavoidable, the proposed solution implies a new optics scheme displacing the losses to a region with better shielding. New simulations demonstrate the satisfactory performance of the new extraction optics and its suitability to be implemented in the machine. Finally, beam loss measurements in these new operation conditions confirmed the previous simulation results.
Nolph, K D; Keshaviah, P; Emerson, P; Van Stone, J C; Twardowski, Z J; Khanna, R; Moore, H L; Collins, A; Edward, A
1995-01-01
Recent studies suggest that the relationship of the net normalized protein catabolic rate (which is the normalized protein equivalent of nitrogen appearance [nPNA]) to the weekly clearance of urea normalized to total body water (Kt/V urea) in patients on continuous ambulatory peritoneal dialysis (CAPD) is curvilinear, rather than linear, as has been thought. The authors have reexamined the relationship of nPNA to weekly Kt/V urea in a CAPD population by cross-sectional analysis to see if the curvilinear definition of the relationship is as good as or better than the usual linear description. They also examined this relationship in the hemodialysis populations at the Dialysis Clinics Inc. in Columbia, Missouri, and in the Renal Kidney Disease Program in Minneapolis, Minnesota. It seems obvious that there should be a plateau of nPNA in each therapy because extension of linear regressions would predict protein intakes of normal individuals exceeding 8 g/kg/body weight/day. The authors compared their findings to other published results. Intuitively and analytically, the curvilinear relationships seem likely. The authors observed that the nPNA plateau is achieved at lower Kt/V in patients on CAPD than in those on hemodialysis, which is compatible with the peak concentration hypothesis. Asymptotes for CAPD and hemodialysis are similar. Weekly Kt/V urea requirements to achieve nPNA values at 95% of the asymptote are greater than those usually delivered. However, such nearly complete elimination of uremic appetite suppression may not be practical or necessary for achieving acceptable nutritional status and long-term survival in most patients. Optimum therapy may be well above adequate therapy relative to minimizing appetite suppression by uremia. PMID:8573843
Dutta, Samrat; Patchaikani, Prem Kumar; Behera, Laxmidhar
2016-07-01
This paper presents a single-network adaptive critic-based controller for continuous-time systems with unknown dynamics in a policy iteration (PI) framework. It is assumed that the unknown dynamics can be estimated using the Takagi-Sugeno-Kang fuzzy model with arbitrary precision. The successful implementation of a PI scheme depends on the effective learning of critic network parameters. Network parameters must stabilize the system in each iteration in addition to approximating the critic and the cost. It is found that critic updates according to the Hamilton-Jacobi-Bellman formulation sometimes lead to instability of the closed-loop system. In the proposed work, a novel critic network parameter update scheme is adopted, which not only approximates the critic at the current iteration but also provides feasible solutions that keep the policy stable in the next step of training, by combining a Lyapunov-based linear matrix inequalities approach with PI. The critic modeling technique presented here is the first of its kind to address this issue. Although the convergence of PI has been discussed in multiple studies, to the best of our knowledge none of them focuses on the effect of critic network parameters on the convergence. The computational complexity of the proposed algorithm is reduced to the order of (Fz)^(n-1), where n is the fuzzy state dimensionality and Fz is the number of fuzzy zones in the state space. The genetic algorithm toolbox of MATLAB is used to search for stable parameters while minimizing the training error. The proposed algorithm also provides a way to solve for the initial stable control policy in the PI scheme. The algorithm is validated through a real-time experiment on a commercial robotic manipulator. Results show that the algorithm successfully finds stable critic network parameters in real time for a highly nonlinear system. PMID:26259150
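The evaluate-improve loop at the heart of any PI scheme can be illustrated on a small discrete MDP (a simplified stand-in for the paper's continuous-time, fuzzy-critic setting; the transition matrices and rewards below are invented for illustration):

```python
import numpy as np

# Minimal policy iteration on a two-state, two-action MDP.
# P[a] is the transition matrix under action a, R[a] the reward vector.
P = {0: np.array([[0.9, 0.1], [0.2, 0.8]]),
     1: np.array([[0.1, 0.9], [0.7, 0.3]])}
R = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 2.0])}
gamma = 0.9
policy = np.zeros(2, dtype=int)          # initial policy

for _ in range(50):
    # Policy evaluation: solve (I - gamma * P_pi) v = r_pi exactly.
    P_pi = np.array([P[policy[s]][s] for s in range(2)])
    r_pi = np.array([R[policy[s]][s] for s in range(2)])
    v = np.linalg.solve(np.eye(2) - gamma * P_pi, r_pi)
    # Policy improvement: act greedily with respect to the evaluated critic.
    q = np.array([[R[a][s] + gamma * P[a][s] @ v for a in (0, 1)]
                  for s in range(2)])
    new_policy = q.argmax(axis=1)
    if np.array_equal(new_policy, policy):
        break                            # policy is stable: done
    policy = new_policy
```

In the continuous-time fuzzy setting of the paper, the exact linear solve in the evaluation step is replaced by learning the critic network parameters, which is precisely where the stability issue discussed above arises.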
NASA Astrophysics Data System (ADS)
Lin, Y. S.; Medlyn, B. E.; Duursma, R.; Prentice, I. C.; Wang, H.
2014-12-01
Stomatal conductance (gs) is a key land surface attribute as it links transpiration, the dominant component of global land evapotranspiration and a key element of the global water cycle, and photosynthesis, the driving force of the global carbon cycle. Despite the pivotal role of gs in predictions of global water and carbon cycles, a global-scale database and an associated globally applicable model of gs that allow predictions of stomatal behaviour are lacking. We present a unique database of globally distributed gs obtained in the field for a wide range of plant functional types (PFTs) and biomes. We employed a model of optimal stomatal conductance to assess differences in stomatal behaviour, and estimated the model slope coefficient, g1, which is directly related to the marginal carbon cost of water, for each dataset. We found that g1 varies considerably among PFTs, with evergreen savanna trees having the largest g1 (least conservative water use), followed by C3 grasses and crops, angiosperm trees, gymnosperm trees, and C4 grasses. Amongst angiosperm trees, species with higher wood density had a higher marginal carbon cost of water, as predicted by the theory underpinning the optimal stomatal model. There was an interactive effect between temperature and moisture availability on g1: in wet environments, g1 was largest in high-temperature environments, as indexed by the mean annual temperature during the period when temperatures are above 0 °C (Tm), but g1 did not vary with Tm across dry environments. We examine whether these differences in leaf-scale behaviour are reflected in ecosystem-scale differences in water-use efficiency. These findings provide a robust theoretical framework for understanding and predicting the behaviour of stomatal conductance across biomes and across PFTs that can be applied to regional, continental and global-scale modelling of productivity and ecohydrological processes in a future changing climate.
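The optimal stomatal model referred to here is commonly written in the unified form of Medlyn et al. (2011), where the slope g1 is the fitted parameter. A minimal sketch follows; the numeric inputs are illustrative, not values from the database:

```python
import math

def stomatal_conductance(A, Ca, D, g1, g0=0.0):
    """Unified optimal stomatal model (Medlyn et al. 2011 form):
    gs = g0 + 1.6 * (1 + g1 / sqrt(D)) * A / Ca
    A  : net assimilation rate (umol m-2 s-1)
    Ca : atmospheric CO2 concentration (umol mol-1)
    D  : vapour pressure deficit (kPa)
    g1 : fitted slope, related to the marginal carbon cost of water
    Returns gs in mol m-2 s-1.
    """
    return g0 + 1.6 * (1.0 + g1 / math.sqrt(D)) * A / Ca

# A larger g1 (less conservative water use) yields a higher gs at the
# same assimilation rate and VPD; the g1 values here are hypothetical.
gs_savanna = stomatal_conductance(A=15.0, Ca=400.0, D=1.5, g1=5.8)
gs_c4 = stomatal_conductance(A=15.0, Ca=400.0, D=1.5, g1=1.6)
```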
A GPS-Based Pitot-Static Calibration Method Using Global Output-Error Optimization
NASA Technical Reports Server (NTRS)
Foster, John V.; Cunningham, Kevin
2010-01-01
Pressure-based airspeed and altitude measurements for aircraft typically require calibration of the installed system to account for pressure sensing errors such as those due to local flow field effects. In some cases, calibration is used to meet requirements such as those specified in Federal Aviation Regulation Part 25. Several methods are used for in-flight pitot-static calibration, including tower fly-by, pacer aircraft, and trailing cone methods. In the 1990s, the introduction of satellite-based positioning systems to the civilian market enabled new in-flight calibration methods based on accurate ground speed measurements provided by Global Positioning Systems (GPS). Use of GPS for airspeed calibration has many advantages such as accuracy, ease of portability (e.g. hand-held) and the flexibility of operating in airspace without the limitations of test range boundaries or ground telemetry support. The current research was motivated by the need for a rapid and statistically accurate method for in-flight calibration of pitot-static systems for remotely piloted, dynamically scaled research aircraft. Current calibration methods were deemed not practical for this application because of confined test range size and the limited flight time available for each sortie. A method was developed that uses high-data-rate measurements of static and total pressure, together with GPS-based ground speed measurements, to compute the pressure errors over a range of airspeeds. The novel aspect of this approach is the use of system identification methods that rapidly compute optimal pressure error models with defined confidence intervals in near-real time. This method has been demonstrated in flight tests and has shown 2-σ bounds of approximately 0.2 kts with an order of magnitude reduction in test time over other methods. As part of this experiment, a unique database of wind measurements was acquired concurrently with the flight experiments, for the purpose of experimental validation of the
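As a sketch of the idea (not the flight-test implementation): given GPS-derived airspeed-error samples, a low-order error model can be fitted by linear least squares and 2-σ parameter bounds derived from the residual covariance. All signals below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
v_ind = np.linspace(40.0, 120.0, 60)          # indicated airspeed, kts
true_err = 0.5 + 0.02 * (v_ind - 80.0)        # assumed "true" error model
v_err_meas = true_err + rng.normal(0.0, 0.2, v_ind.size)  # noisy samples

# Linear least squares for the model err = c0 + c1 * (v - 80)
X = np.column_stack([np.ones_like(v_ind), v_ind - 80.0])
coef, res, *_ = np.linalg.lstsq(X, v_err_meas, rcond=None)
sigma2 = res[0] / (v_ind.size - 2)            # residual variance estimate
cov = sigma2 * np.linalg.inv(X.T @ X)         # parameter covariance
two_sigma = 2.0 * np.sqrt(np.diag(cov))       # 2-sigma parameter bounds
```

The output-error method described in the abstract is more general (it handles dynamic models and near-real-time updates), but the confidence-interval logic is of this flavor.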
Eckermann, Simon; Willan, Andrew R
2013-05-01
Risk sharing arrangements relate to adjusting payments for new health technologies given evidence of their performance over time. Such arrangements rely on prospective information regarding the incremental net benefit of the new technology, and its use in practice. However, once the new technology has been adopted in a particular jurisdiction, randomized clinical trials within that jurisdiction are likely to be infeasible and unethical in the cases where they would be most helpful, i.e. where current evidence indicates positive but uncertain incremental health and net monetary benefit. Informed patients in these cases would likely be reluctant to participate in a trial, preferring instead to receive the new technology with certainty. Consequently, informing risk sharing arrangements within a jurisdiction is problematic given the infeasibility of collecting prospective trial data. To overcome such problems, we demonstrate that global trials facilitate trialling post adoption, leading to more complete and robust risk sharing arrangements that mitigate the impact of costs of reversal on the expected value of information in jurisdictions that adopt while a global trial is undertaken. More generally, optimally designed global trials offer distinct advantages over locally optimal solutions for decision makers and manufacturers alike: avoiding opportunity costs of delay in jurisdictions that adopt; overcoming barriers to evidence collection; and improving levels of expected implementation. Further, the greater strength and translatability of evidence across jurisdictions inherent in optimal global trial design reduces the barriers to translation across jurisdictions that are characteristic of local trials. Consequently, efficiently designed global trials better align the interests of decision makers and manufacturers, increasing the feasibility of risk sharing and the expected strength of evidence over local trials, up until the point that current evidence is globally sufficient. PMID:23529209
Geometric and electronic structure of mixed metal-semiconductor clusters from global optimization
NASA Astrophysics Data System (ADS)
Hagelberg, Frank; Wu, Jianhua
2006-03-01
In addition to pure metal and semiconductor clusters, hybrid species that contain both types of constituents occur at the metal-semiconductor interface. Thus, clusters of the form Cu(x)Si(y) were detected by mass spectrometry [1]. In this contribution, the geometric and energetic features of Me(m)Si(7-m) (Me=Cu and Li) clusters are discussed. The choice of these systems is motivated by the structural similarity of the pure Si(7), Li(7), and Cu(7) systems, which all stabilize in D(5h) symmetry. On the other hand, Li and Cu, representing the alkali group (IA) and the noble metal group (IB) of the periodic table, are expected to display strongly differing behavior when integrated into a Si(n) cluster, resulting in different ground state geometries for the cases Me = Li and Me = Cu. Addressing this problem by means of geometry optimization requires, in view of the large number of possible atomic permutations for Me(m)Si(7-m) with 0 < m < 7, the use of a global search algorithm. Equilibrium geometries are obtained by simulated annealing within the Nosé thermostat framework. It is observed that Cu(m)Si(7-m) clusters with m < 6 tend towards ground state geometries derived from the D(5h) prototype. For Li(m)Si(7-m), the Li(m) subsystem is found to adsorb on the framework of the Si(7-m) dianion. [1] J.J. Scherer, J.B. Pau, C.P. Collier, A. O'Keefe, and R.J. Saykally, J. Chem. Phys. 103, 9187 (1995).
Developments of global greenhouse gas retrieval algorithm based on Optimal Estimation Method
NASA Astrophysics Data System (ADS)
Kim, W. V.; Kim, J.; Lee, H.; Jung, Y.; Boesch, H.
2013-12-01
Since the industrial revolution, the atmospheric carbon dioxide concentration has increased drastically over the last 250 years. It is still increasing, and a concentration of more than 400 ppm was recently measured at the Mauna Loa observatory for the first time, a value regarded as an important milestone. Understanding the sources, emissions, transport and sinks of global carbon dioxide is therefore unprecedentedly important. Currently, the Total Carbon Column Observing Network (TCCON) observes CO2 concentrations with ground-based instruments. However, the sites are few in number and concentrated in Europe and North America. Remote sensing of CO2 can compensate for these limitations. The Greenhouse Gases Observing SATellite (GOSAT), launched in 2009, measures the column density of CO2, and further satellites are planned for launch in the coming years. GOSAT provides valuable measurement data, but its low spatial resolution and the poor success rate of retrievals in the presence of aerosol and cloud restrict the results to covering less than half of the globe. To improve data availability, accurate aerosol information is necessary, especially for the East Asia region, where aerosol concentrations are higher than elsewhere. As a first step, we are developing a CO2 retrieval algorithm based on the optimal estimation method with VLIDORT, a vector discrete ordinate radiative transfer model. The prototype algorithm, developed from various combinations of state vectors to find the best combination, shows appropriate results and good agreement with TCCON measurements. To reduce computational cost, low-stream interpolation is applied in the model simulation, drastically reducing the simulation time. In further work, the GOSAT CO2 retrieval algorithm will be combined with the accurate GOSAT-CAI aerosol retrieval algorithm to obtain more accurate results, especially for East Asia.
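The core linear optimal-estimation (Rodgers-type) update underlying such retrieval algorithms can be sketched as follows; the toy two-element state and covariances are illustrative only:

```python
import numpy as np

def oem_step(y, K, x_a, S_a, S_e):
    """One linear optimal-estimation step:
    x_hat = x_a + (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1 (y - K x_a)
    y : measurement vector, K : Jacobian (weighting functions),
    x_a : a priori state, S_a / S_e : prior / measurement covariances.
    Returns the retrieved state and its posterior covariance."""
    S_e_inv = np.linalg.inv(S_e)
    S_hat = np.linalg.inv(K.T @ S_e_inv @ K + np.linalg.inv(S_a))
    x_hat = x_a + S_hat @ K.T @ S_e_inv @ (y - K @ x_a)
    return x_hat, S_hat

# Toy example: a perfectly observed two-element state with a loose prior;
# the retrieval is pulled from the prior almost all the way to the truth.
K = np.eye(2)
x_true = np.array([400.0, 1.0])
y = K @ x_true
x_a = np.array([390.0, 0.5])
x_hat, S_hat = oem_step(y, K, x_a, 100.0 * np.eye(2), 0.01 * np.eye(2))
```

A real CO2 retrieval iterates this step (Gauss-Newton) with a radiative transfer model such as VLIDORT supplying the forward simulation and Jacobians.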
Hustedt, E J; Smirnov, A I; Laub, C F; Cobb, C E; Beth, A H
1997-01-01
For immobilized nitroxide spin-labels with a well-defined interprobe geometry, resolved dipolar splittings can be observed in continuous wave electron paramagnetic resonance (CW-EPR) spectra for interelectron distances as large as 30 Å using perdeuterated probes. In this work, algorithms are developed for calculating CW-EPR spectra of immobilized, dipolar coupled nitroxides, and then used to define the limits of sensitivity to the interelectron distance as a function of geometry and microwave frequency. Second, the CW-EPR spectra of Nε-spin-labeled coenzyme NAD+ bound to microcrystalline, tetrameric glyceraldehyde-3-phosphate dehydrogenase (GAPDH) have been collected at 9.8, 34, and 94 GHz. These data have been analyzed, using a combination of simulated annealing and global analysis, to obtain a unique fit to the data. The values of the internitroxide distance and the five angles defining the relative orientation of the two nitroxides are in reasonable agreement with a molecular model built from the known crystal structure. Finally, the effect of rigid body isotropic rotational diffusion on the CW-EPR spectra of dipolar coupled nitroxides has been investigated using an algorithm based on Brownian dynamics trajectories. These calculations demonstrate the sensitivity of CW-EPR spectra to dipolar coupling in the presence of rigid body rotational diffusion. PMID:9083690
Rich, Benjamin; Moodie, Erica E M; Stephens, David A
2016-05-01
There have been considerable advances in the methodology for estimating dynamic treatment regimens, and for the design of sequential trials that can be used to collect unconfounded data to inform such regimens. However, relatively little attention has been paid to how such methodology could be used to advance understanding of optimal treatment strategies in a continuous dose setting, even though it is often the case that considerable patient heterogeneity in drug response along with a narrow therapeutic window may necessitate the tailoring of dosing over time. Such is the case with warfarin, a common oral anticoagulant. We propose novel, realistic simulation models based on pharmacokinetic-pharmacodynamic properties of the drug that can be used to evaluate potentially optimal dosing strategies. Our results suggest that this methodology can lead to a dosing strategy that performs well both within and across populations with different pharmacokinetic characteristics, and may assist in the design of randomized trials by narrowing the list of potential dosing strategies to those which are most promising. PMID:26537297
Sancho-Parramon, Jordi; Ferré-Borrull, Josep; Bosch, Salvador; Ferrara, Maria Christina
2003-03-01
We present a procedure for the optical characterization of thin-film stacks from spectrophotometric data. The procedure overcomes the intrinsic limitations arising in the numerical determination of many parameters from reflectance or transmittance spectra measurements. The key point is to use all the information available from the manufacturing process in a single global optimization process. The method is illustrated by a case study of solgel applications. PMID:12638889
Korenromp, Eline L.; Glaziou, Philippe; Fitzpatrick, Christopher; Floyd, Katherine; Hosseini, Mehran; Raviglione, Mario; Atun, Rifat; Williams, Brian
2012-01-01
Background The Global Plan to Stop TB estimates funding required in low- and middle-income countries to achieve TB control targets set by the Stop TB Partnership within the context of the Millennium Development Goals. We estimate the contribution and impact of Global Fund investments under various scenarios of allocations across interventions and regions. Methodology/Principal Findings Using Global Plan assumptions on expected cases and mortality, we estimate treatment costs and mortality impact for diagnosis and treatment for drug-sensitive and multidrug-resistant TB (MDR-TB), including antiretroviral treatment (ART) during DOTS for HIV-co-infected patients, for four country groups, overall and for the Global Fund investments. In 2015, China and India account for 24% of funding need, Eastern Europe and Central Asia (EECA) for 33%, sub-Saharan Africa (SSA) for 20%, and other low- and middle-income countries for 24%. Scale-up of MDR-TB treatment, especially in EECA, drives an increasing global TB funding need – an essential investment to contain the mortality burden associated with MDR-TB and future disease costs. Funding needs rise fastest in SSA, reflecting increasing coverage need of improved TB/HIV management, which saves most lives per dollar spent in the short term. The Global Fund is expected to finance 8–12% of Global Plan implementation costs annually. Lives saved through Global Fund TB support within the available funding envelope could increase 37% if allocations shifted from current regional demand patterns to a prioritized scale-up of improved TB/HIV treatment and secondly DOTS, both mainly in Africa − with EECA region, which has disproportionately high per-patient costs, funded from alternative resources. Conclusions/Significance These findings, alongside country funding gaps, domestic funding and implementation capacity and equity considerations, should inform strategies and policies for international donors, national governments and disease
NASA Astrophysics Data System (ADS)
Matott, L. S.; Gray, G. A.
2011-12-01
Pump-and-treat systems are a common strategy for groundwater remediation, wherein a system of extraction wells is installed at an affected site to address pollutant migration. In this context, the likely performance of candidate remedial systems is often assessed using groundwater flow modeling. When linked with an optimizer, these models can be utilized to identify a least-cost system design that nonetheless satisfies remediation goals. Moreover, the resulting design problems serve as important tools in the development and testing of optimization algorithms. For example, consider EAGLS (Evolutionary Algorithm Guiding Local Search), a recently developed derivative-free simulation-optimization code that seeks to efficiently solve nonlinear problems by hybridizing local and global search techniques. The EAGLS package was designed to specifically target mixed variable problems and has a limited ability to intelligently adapt its behavior to given problem characteristics. For instance, to solve problems in which there are no discrete or integer variables, the EAGLS code defaults to a multi-start asynchronous parallel pattern search. Therefore, to better understand the behavior of EAGLS, the algorithm was applied to a representative dual-plume pump-and-treat containment problem. A series of numerical experiments were performed involving four different formulations of the underlying pump-and-treat optimization problem, namely: (1) optimization of pumping rates, given fixed number of wells at fixed locations; (2) optimization of pumping rates and locations of a fixed number of wells; (3) optimization of pumping rates and number of wells at fixed locations; and (4) optimization of pumping rates, locations, and number of wells. Comparison of the performance of the EAGLS software with alternative search algorithms across different problem formulations yielded new insights for improving the EAGLS algorithm and enhancing its adaptive behavior.
Order-Constrained Solutions in K-Means Clustering: Even Better than Being Globally Optimal
ERIC Educational Resources Information Center
Steinley, Douglas; Hubert, Lawrence
2008-01-01
This paper proposes an order-constrained K-means cluster analysis strategy, and implements that strategy through an auxiliary quadratic assignment optimization heuristic that identifies an initial object order. A subsequent dynamic programming recursion is applied to optimally subdivide the object set subject to the order constraint. We show that…
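The dynamic-programming recursion for optimally subdividing an ordered object set can be sketched as follows (a generic Fisher-style exact segmentation under a squared-error criterion, not the authors' implementation):

```python
def sse(xs):
    """Within-cluster sum of squared deviations from the mean."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs)

def order_constrained_kmeans(xs, K):
    """Once the objects are ordered, clusters must be contiguous segments,
    so the optimal K-way split is found exactly by recursion on the
    position of the last segment boundary."""
    n = len(xs)
    INF = float("inf")
    # cost[k][i] = best total SSE for splitting xs[:i] into k segments
    cost = [[INF] * (n + 1) for _ in range(K + 1)]
    cost[0][0] = 0.0
    for k in range(1, K + 1):
        for i in range(k, n + 1):
            cost[k][i] = min(cost[k - 1][j] + sse(xs[j:i])
                             for j in range(k - 1, i))
    return cost[K][n]

# Two well-separated ordered groups: the optimal 2-split has zero SSE.
best = order_constrained_kmeans([1.0, 1.0, 1.0, 9.0, 9.0, 9.0], 2)
```

The contiguity constraint is what makes a globally optimal solution tractable here, in contrast to unconstrained K-means.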
NASA Astrophysics Data System (ADS)
Fan, Shu-Kai S.; Chang, Ju-Ming
2010-05-01
This article presents a novel parallel multi-swarm optimization (PMSO) algorithm with the aim of enhancing the search ability of standard single-swarm PSOs for global optimization of very large-scale multimodal functions. Different from the existing multi-swarm structures, the multiple swarms work in parallel, and the search space is partitioned evenly and dynamically assigned in a weighted manner via the roulette wheel selection (RWS) mechanism. This parallel, distributed framework of the PMSO algorithm is developed based on a master-slave paradigm, which is implemented on a cluster of PCs using message passing interface (MPI) for information interchange among swarms. The PMSO algorithm handles multiple swarms simultaneously and each swarm performs PSO operations of its own independently. In particular, one swarm is designated for global search and the others are for local search. The first part of the experimental comparison is made among the PMSO, standard PSO, and two state-of-the-art algorithms (CTSS and CLPSO) in terms of various un-rotated and rotated benchmark functions taken from the literature. In the second part, the proposed multi-swarm algorithm is tested on large-scale multimodal benchmark functions up to 300 dimensions. The results of the PMSO algorithm show great promise in solving high-dimensional problems.
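Each swarm in such a multi-swarm scheme runs a standard PSO kernel of its own; a minimal single-swarm sketch (the MPI master-slave layer and the RWS-based space partitioning are omitted, and all parameter values are conventional defaults rather than the paper's settings):

```python
import numpy as np

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal particle swarm optimization for minimizing f over a box."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, lo.size))   # positions
    v = np.zeros_like(x)                              # velocities
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()                # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g, float(pbest_f.min())

# 5-dimensional sphere function as a smoke test
lo, hi = np.full(5, -5.0), np.full(5, 5.0)
best_x, best_f = pso(lambda z: float(np.sum(z * z)), (lo, hi))
```

In the PMSO design, many such kernels run concurrently on subregions of the search space, exchanging information through the master process.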
ERIC Educational Resources Information Center
Boyd, Donna J., Ed.
These proceedings record the addresses, concurrent sessions, and business meetings of the annual meeting of the Association for Continuing Higher Education (ACHE). Part 1 consists of three addresses: "World Collaboration for a Global Perspective" (Beverly Cassara); "When Chaos Is the Solution: A Paradigm for 21st Century Mandates and Strategies"…
NASA Astrophysics Data System (ADS)
Theos, F. V.; Lagaris, I. E.; Papageorgiou, D. G.
2004-05-01
We present two sequential and one parallel global optimization codes, that belong to the stochastic class, and an interface routine that enables the use of the Merlin/MCL environment as a non-interactive local optimizer. This interface proved extremely important, since it provides flexibility, effectiveness and robustness to the local search task that is in turn employed by the global procedures. We demonstrate the use of the parallel code to a molecular conformation problem. Program summaryTitle of program: PANMIN Catalogue identifier: ADSU Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADSU Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Computer for which the program is designed and others on which it has been tested: PANMIN is designed for UNIX machines. The parallel code runs on either shared memory architectures or on a distributed system. The code has been tested on a SUN Microsystems ENTERPRISE 450 with four CPUs, and on a 48-node cluster under Linux, with both the GNU g77 and the Portland group compilers. The parallel implementation is based on MPI and has been tested with LAM MPI and MPICH Installation: University of Ioannina, Greece Programming language used: Fortran-77 Memory required to execute with typical data: Approximately O( n2) words, where n is the number of variables No. of bits in a word: 64 No. of processors used: 1 or many Has the code been vectorised or parallelized?: Parallelized using MPI No. of bytes in distributed program, including test data, etc.: 147163 No. of lines in distributed program, including the test data, etc.: 14366 Distribution format: gzipped tar file Nature of physical problem: A multitude of problems in science and engineering are often reduced to minimizing a function of many variables. There are instances that a local optimum does not correspond to the desired physical solution and hence the search for a better solution is required. Local optimization techniques can be
Jarrar, Mu’taman; Rahman, Hamzah Abdul; Don, Mohammad Sobri
2016-01-01
Background and Objective: Demand for health care services has significantly increased, while the quality of healthcare and patient safety have become national and international priorities. This paper aims to identify the gaps and the current initiatives for optimizing the quality of care and patient safety in Malaysia. Design: Review of the current literature. Highly cited articles were used as the basis to retrieve and review the current initiatives for optimizing the quality of care and patient safety. The country health plan of the Ministry of Health (MOH) Malaysia and the MOH Malaysia Annual Reports were reviewed. Results: The MOH has set four strategies for optimizing quality and sustaining quality of life. The 10th Malaysia Health Plan promotes the theme "1 Care for 1 Malaysia" in order to sustain the quality of care. Despite these efforts, the total number of complaints received by the medico-legal section of the MOH Malaysia is increasing. The current global initiatives indicate that quality performance factors generally belong to three main categories: patient-related, staffing-related, and working-environment-related. Conclusions: There is no single intervention for optimizing quality of care to maintain patient safety. Multidimensional efforts and interventions are recommended in order to optimize the quality of care and patient safety in Malaysia. PMID:26755459
NASA Astrophysics Data System (ADS)
Tsoukalas, Ioannis; Kossieris, Panagiotis; Efstratiadis, Andreas; Makropoulos, Christos
2015-04-01
In water resources optimization problems, calculating the objective function usually requires first running a simulation model and then evaluating its outputs. In several cases, however, long simulation times may pose significant barriers to the optimization procedure. Often, to obtain a solution within a reasonable time, the user has to substantially restrict the allowable number of function evaluations, thus terminating the search much earlier than the problem's complexity requires. A promising strategy for addressing these shortcomings is the use of surrogate modelling techniques within global optimization algorithms. Here we introduce the Surrogate-Enhanced Evolutionary Annealing-Simplex (SE-EAS) algorithm, which couples the strengths of surrogate modelling with the effectiveness and efficiency of the EAS method. The algorithm combines three different optimization approaches (evolutionary search, simulated annealing and the downhill simplex search scheme), in which key decisions are partially guided by numerical approximations of the objective function. The performance of the proposed algorithm is benchmarked against other surrogate-assisted algorithms in both theoretical and practical applications (i.e. test functions and hydrological calibration problems, respectively), within a limited budget of trials (from 100 to 1000). Results reveal the significant potential of using SE-EAS in challenging optimization problems involving time-consuming simulations.
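The surrogate-assisted interplay (evaluate the expensive model at a few points, fit a cheap approximation, let the approximation propose the next trial) can be illustrated with a deliberately simple one-dimensional quadratic surrogate; SE-EAS itself uses far more sophisticated search operators and surrogate models:

```python
import numpy as np

def expensive(x):
    """Stand-in for a slow simulation model (here an analytic function)."""
    return (x - 2.3) ** 2 + 1.0

# Seed the surrogate with a few expensive evaluations.
xs = [-4.0, 0.0, 4.0]
ys = [expensive(x) for x in xs]

for _ in range(5):
    # Fit a quadratic surrogate y ~ c0 + c1*x + c2*x^2 by least squares.
    A = np.column_stack([np.ones(len(xs)), xs, np.square(xs)])
    c0, c1, c2 = np.linalg.lstsq(A, ys, rcond=None)[0]
    # Propose the surrogate's minimizer (fall back to a perturbation
    # of the incumbent if the fit is not convex).
    x_new = -c1 / (2.0 * c2) if c2 > 0 else xs[int(np.argmin(ys))] + 0.5
    xs.append(float(x_new))
    ys.append(expensive(x_new))          # one expensive call per iteration

best_x = xs[int(np.argmin(ys))]
```

The point of the approach is visible even in this toy: most of the search effort is spent on the cheap surrogate, and the expensive model is queried only at promising points.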
Mutation-Based Artificial Fish Swarm Algorithm for Bound Constrained Global Optimization
NASA Astrophysics Data System (ADS)
Rocha, Ana Maria A. C.; Fernandes, Edite M. G. P.
2011-09-01
The herein presented mutation-based artificial fish swarm (AFS) algorithm includes mutation operators to prevent the algorithm from falling into local solutions, to diversify the search, and to accelerate convergence to the global optimum. Three mutation strategies are introduced into the AFS algorithm to define the trial points that emerge from the random, leaping and searching behaviors. Computational results show that the new algorithm outperforms other well-known stochastic global solution methods.
Mazumder, Jahirul; Zhu, Jingxu; Bassi, Amarjeet S; Ray, Ajay K
2009-08-01
Like most real-life processes, the operation of a liquid-solid circulating fluidized bed (LSCFB) system for continuous protein recovery involves several objectives, such as maximization of the production rate and recovery of protein and minimization of the amount of solid ion-exchange resin required, all of which need to be optimized simultaneously. In this article, multiobjective optimization of an LSCFB system for continuous protein recovery was carried out using an experimentally validated mathematical model to find the scope for further improvements in its operation. The elitist non-dominated sorting genetic algorithm with its jumping gene adaptation was used to solve a number of bi- and tri-objective optimization problems. The optimization resulted in Pareto-optimal solutions, which provide a broad range of non-dominated choices due to the conflicting effects of the operating parameters on the system performance indicators. Significant improvements were achieved; for example, the production rate at optimal operation increased by 33%, using 11% less solid, compared with reported experimental results at the same recovery level. The effects of operating variables on the optimal solutions are discussed in detail. The multiobjective optimization study reported here can easily be extended to the improvement of LSCFB systems for other applications. PMID:19378264
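The building block of elitist non-dominated sorting is the Pareto-dominance test; a minimal sketch with hypothetical (production rate, resin used) design points, maximizing the first objective and minimizing the second:

```python
def dominates(a, b):
    """a dominates b iff a is no worse in both objectives and strictly
    better in at least one (maximize a[0], minimize a[1])."""
    return (a[0] >= b[0] and a[1] <= b[1]) and (a[0] > b[0] or a[1] < b[1])

def pareto_front(points):
    """Keep only the non-dominated points."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical designs: (production rate, resin used)
designs = [(100, 50), (120, 60), (110, 45), (90, 40), (105, 55)]
front = pareto_front(designs)
```

In NSGA-II-style algorithms this test drives the ranking of the population into successive non-dominated fronts, from which the next generation is selected.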
Gálvez, Akemi; Iglesias, Andrés; Cabellos, Luis
2014-01-01
The problem of data fitting is very important in many theoretical and applied fields. In this paper, we consider the problem of optimizing a weighted Bayesian energy functional for data fitting by using global-support approximating curves. By global-support curves we mean curves expressed as a linear combination of basis functions whose support is the whole domain of the problem, as opposed to other common approaches in CAD/CAM and computer graphics driven by piecewise functions (such as B-splines and NURBS) that provide local control of the shape of the curve. Our method applies a powerful nature-inspired metaheuristic algorithm called cuckoo search, introduced recently to solve optimization problems. A major advantage of this method is its simplicity: cuckoo search requires only two parameters, many fewer than other metaheuristic approaches, so the parameter tuning becomes a very simple task. The paper shows that this new approach can be successfully used to solve our optimization problem. To check the performance of our approach, it has been applied to five illustrative examples of different types, including open and closed 2D and 3D curves that exhibit challenging features, such as cusps and self-intersections. Our results show that the method performs pretty well, being able to solve our minimization problem in an astonishingly straightforward way. PMID:24977175
NASA Astrophysics Data System (ADS)
Leaci, Paola; Prix, Reinhard
2015-05-01
We derive simple analytic expressions for the (coherent and semicoherent) phase metrics of continuous-wave sources in low-eccentricity binary systems for the two regimes of long and short segments compared to the orbital period. The resulting expressions correct and extend previous results found in the literature. We present results of extensive Monte Carlo studies comparing metric mismatch predictions against the measured loss of detection statistics for binary parameter offsets. The agreement is generally found to be within ~10%-30%. For an application of the metric template expressions, we estimate the optimal achievable sensitivity of an Einstein@Home directed search for Scorpius X-1, under the assumption of sufficiently small spin wandering. We find that such a search, using data from the upcoming advanced detectors, would be able to beat the torque-balance level [R. V. Wagoner, Astrophys. J. 278, 345 (1984); L. Bildsten, Astrophys. J. 501, L89 (1998)] up to a frequency of ~500-600 Hz, if orbital eccentricity is well constrained, and up to a frequency of ~160-200 Hz for more conservative assumptions about the uncertainty on orbital eccentricity.
Ellison, Chad M.; Perricone, Matthew; Faraone, Kevin M. (Honeywell FM&T, Kansas City, MO); Roach, Robert Allen; Norris, Jerome T.
2007-02-01
Nd:YAG laser joining is a high energy density (HED) process that can produce high-speed, low-heat-input welds with a high depth-to-width aspect ratio. This is optimized by formation of a "keyhole" in the weld pool resulting from the high vapor pressures associated with laser interaction with the metallic substrate. It is generally accepted that pores form in HED welds due to the instability and frequent collapse of the keyhole. In order to maintain an open keyhole, weld pool forces must be balanced such that vapor pressure and weld pool inertia forces are in equilibrium. Travel speed and laser beam power largely control the way these forces are balanced, as do the welding mode (Continuous Wave or Square Wave) and the shielding gas type. A study into the phenomenon of weld pool porosity in 304L stainless steel was conducted to better understand and predict how welding parameters impact the weld pool dynamics that lead to pore formation. This work is intended to aid in development and verification of a finite element computer model of weld pool fluid flow dynamics being developed in parallel efforts and assist in weld development activities for the W76 and future RRW programs.
NASA Astrophysics Data System (ADS)
Bentley, Julie L.; Olson, Craig; Youngworth, Richard N.
2010-11-01
Historically, a thorough grounding in aberration theory was the only path to successful lens design, both for developing starting layouts and for design improvement. Modern global optimizers, however, allow the lens designer to easily generate multiple solutions to a single design problem without understanding the crucial importance of aberrations and how they determine the full design potential. Compared to pure numerical optimization, aberration theory applied during the lens design process gives the designer a much firmer grasp of the overall design limitations and possibilities. Among other benefits, aberrations provide excellent insight into tolerance sensitivity and manufacturability of the underlying design form. We explore multiple examples of how applying aberration theory to lens design can improve the entire lens design process. Example systems include simple UV, visible, and IR refractive lenses; much more complicated refractive systems requiring field curvature balance; and broadband zoom lenses.
Ait Moussa, Abdellah; Jassemnejad, Bahaeddin
2014-05-01
Nanocomposites with high-aspect ratio fillers attract enormous attention because of the superior physical properties of the composite over the parent matrix. Nanocomposites with functionalized graphene as fillers did not produce the high thermal conductivity expected due to the high interfacial thermal resistance between the functional groups and graphene flakes. We report here a robust and efficient technique that identifies the configuration of the functionalities for improved thermal conductivity. The method combines linearization of the interatomic interactions, calculation, and optimization of the thermal conductivity using the globalized and bounded Nelder-Mead algorithm. PMID:25353920
NASA Astrophysics Data System (ADS)
Ait moussa, Abdellah; Jassemnejad, Bahaeddin
2014-05-01
Nanocomposites with high-aspect ratio fillers attract enormous attention because of the superior physical properties of the composite over the parent matrix. Nanocomposites with functionalized graphene as fillers did not produce the high thermal conductivity expected due to the high interfacial thermal resistance between the functional groups and graphene flakes. We report here a robust and efficient technique that identifies the configuration of the functionalities for improved thermal conductivity. The method combines linearization of the interatomic interactions, calculation, and optimization of the thermal conductivity using the globalized and bounded Nelder-Mead algorithm.
Ding, Yongsheng; Cheng, Lijun; Pedrycz, Witold; Hao, Kuangrong
2015-10-01
A new global nonlinear predictor with a particle swarm-optimized interval support vector regression (PSO-ISVR) is proposed to address three issues (viz., kernel selection, model optimization, kernel method speed) encountered when applying SVR in the presence of large data sets. The novel prediction model can reduce the SVR computing overhead by dividing the input space and adaptively selecting the optimized kernel functions to obtain optimal SVR parameters by PSO. To quantify the quality of the predictor, its generalization performance and execution speed are investigated based on statistical learning theory. In addition, experiments using synthetic data as well as the stock volume weighted average price are reported to demonstrate the effectiveness of the developed models. The experimental results show that the proposed PSO-ISVR predictor can improve the computational efficiency and the overall prediction accuracy compared with the results produced by the SVR and other regression methods. The proposed PSO-ISVR provides an important tool for nonlinear regression analysis of big data. PMID:25974954
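As a companion illustration, a bare-bones particle-swarm optimizer of the kind used above to tune SVR hyper-parameters. The inertia and acceleration coefficients below are common textbook defaults, not the paper's settings, and the objective is generic.

```python
import random

def pso(f, dim, bounds, n=20, w=0.7, c1=1.5, c2=1.5, iters=100, seed=1):
    """Minimal PSO sketch: velocity update blends inertia, a pull toward each
    particle's personal best, and a pull toward the swarm's global best."""
    rng = random.Random(seed)
    lo, hi = bounds
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    v = [[0.0] * dim for _ in range(n)]
    pbest = [xi[:] for xi in x]
    pval = [f(xi) for xi in x]
    g = min(range(n), key=pval.__getitem__)
    gbest, gval = pbest[g][:], pval[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                v[i][d] = (w * v[i][d]
                           + c1 * rng.random() * (pbest[i][d] - x[i][d])
                           + c2 * rng.random() * (gbest[d] - x[i][d]))
                x[i][d] = min(hi, max(lo, x[i][d] + v[i][d]))
            fx = f(x[i])
            if fx < pval[i]:            # update personal best
                pbest[i], pval[i] = x[i][:], fx
                if fx < gval:           # update global best
                    gbest, gval = x[i][:], fx
    return gbest, gval
```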
NASA Astrophysics Data System (ADS)
Ladefoged, Claes N.; Benoit, Didier; Law, Ian; Holm, Søren; Kjær, Andreas; Højgaard, Liselotte; Hansen, Adam E.; Andersen, Flemming L.
2015-10-01
The reconstruction of PET brain data in a PET/MR hybrid scanner is challenging in the absence of transmission sources, where MR images are used for MR-based attenuation correction (MR-AC). The main challenge of MR-AC is to separate bone and air, as neither have a signal in traditional MR images, and to assign the correct linear attenuation coefficient to bone. The ultra-short echo time (UTE) MR sequence was proposed as a basis for MR-AC as this sequence shows a small signal in bone. The purpose of this study was to develop a new clinically feasible MR-AC method with patient specific continuous-valued linear attenuation coefficients in bone that provides accurate reconstructed PET image data. A total of 164 [18F]FDG PET/MR patients were included in this study, of which 10 were used for training. MR-AC was based on either standard CT (reference), UTE or our method (RESOLUTE). The reconstructed PET images were evaluated in the whole brain, as well as regionally in the brain using a ROI-based analysis. Our method segments air, brain, cerebral spinal fluid, and soft tissue voxels on the unprocessed UTE TE images, and uses a mapping of R2* values to CT Hounsfield Units (HU) to measure the density in bone voxels. The average error of our method in the brain was 0.1% and less than 1.2% in any region of the brain. On average 95% of the brain was within ±10% of PETCT, compared to 72% when using UTE. The proposed method is clinically feasible, reducing both the global and local errors on the reconstructed PET images, as well as limiting the number and extent of the outliers.
Ladefoged, Claes N; Benoit, Didier; Law, Ian; Holm, Søren; Kjær, Andreas; Højgaard, Liselotte; Hansen, Adam E; Andersen, Flemming L
2015-10-21
The reconstruction of PET brain data in a PET/MR hybrid scanner is challenging in the absence of transmission sources, where MR images are used for MR-based attenuation correction (MR-AC). The main challenge of MR-AC is to separate bone and air, as neither have a signal in traditional MR images, and to assign the correct linear attenuation coefficient to bone. The ultra-short echo time (UTE) MR sequence was proposed as a basis for MR-AC as this sequence shows a small signal in bone. The purpose of this study was to develop a new clinically feasible MR-AC method with patient specific continuous-valued linear attenuation coefficients in bone that provides accurate reconstructed PET image data. A total of 164 [18F]FDG PET/MR patients were included in this study, of which 10 were used for training. MR-AC was based on either standard CT (reference), UTE or our method (RESOLUTE). The reconstructed PET images were evaluated in the whole brain, as well as regionally in the brain using a ROI-based analysis. Our method segments air, brain, cerebral spinal fluid, and soft tissue voxels on the unprocessed UTE TE images, and uses a mapping of R2* values to CT Hounsfield Units (HU) to measure the density in bone voxels. The average error of our method in the brain was 0.1% and less than 1.2% in any region of the brain. On average 95% of the brain was within ±10% of PETCT, compared to 72% when using UTE. The proposed method is clinically feasible, reducing both the global and local errors on the reconstructed PET images, as well as limiting the number and extent of the outliers. PMID:26422177
Multi-objective global optimization of a butterfly valve using genetic algorithms.
Corbera, Sergio; Olazagoitia, José Luis; Lozano, José Antonio
2016-07-01
A butterfly valve is a type of valve typically used for isolating or regulating flow where the closing mechanism takes the form of a disc. For a long time, the attention of many researchers has focused on carrying out structural (FEM) and computational fluid dynamics (CFD) analysis in order to increase the performance of this type of flow-control device. This paper proposes a novel multi-objective approach for the design optimization of a butterfly valve using advanced genetic algorithms based on Pareto dominance. Firstly, after defining the need for this study and analyzing previous papers on the subject, the initial butterfly valve is presented and the initial fluid and structural analyses are carried out. Secondly, the optimization problem is defined and the optimization strategy is presented. The design variables are identified and a parameterization model of the valve is made. Thirdly, initial design candidates are generated by DOE and design optimization using genetic algorithms is performed. In this part of the process, structural and CFD analyses are run for each candidate simultaneously. The optimization process involves various types of software, and Python scripts are needed for their interaction and the connection of all steps. Finally, a set of optimal solutions is obtained, and the optimum design, which provides a 65.4% stress reduction, a 5% mass reduction and an 11.3% flow increase, is selected in accordance with manufacturer preferences. Validation of the results is provided by comparing experimental test results with the values obtained for the initial design. The results demonstrate the capability and potential of the proposed methodology. PMID:27056745
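The Pareto-dominance test at the heart of such genetic algorithms is compact. A sketch for minimization objectives (e.g. stress, mass, and negated flow for the valve), independent of the paper's actual toolchain:

```python
def dominates(a, b):
    """Pareto dominance for minimization: a dominates b if it is no worse in
    every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```

The GA then selects preferentially from the non-dominated set, yielding the trade-off front from which a design is picked by preference, as in the valve study.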
An Effective Hybrid Firefly Algorithm with Harmony Search for Global Numerical Optimization
Guo, Lihong; Wang, Gai-Ge; Wang, Heqi; Wang, Dinan
2013-01-01
A hybrid metaheuristic approach obtained by hybridizing harmony search (HS) and the firefly algorithm (FA), namely HS/FA, is proposed for function optimization. In HS/FA, the exploration of HS and the exploitation of FA are fully leveraged, so HS/FA converges faster than either HS or FA alone. Also, a top-fireflies scheme is introduced to reduce running time, and HS is used to mutate fireflies during the update step. The HS/FA method is verified on various benchmarks. In the experiments, HS/FA outperforms the standard FA and eight other optimization methods. PMID:24348137
Global Optimization of Interplanetary Trajectories in the Presence of Realistic Mission Constraints
NASA Technical Reports Server (NTRS)
Hinckley, David; Englander, Jacob; Hitt, Darren
2015-01-01
- Single trial evaluations
- Trial creation by phase-wise GA-style or DE-inspired recombination
- Bin repository structure (requires an initialization period)
- Non-exclusionary kill distance
- Population collapse mechanic
- Main loop: creation (probabilistic switch between GA and DE creation types), local optimization, submission to the repository; repeat
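The trial-creation step outlined above, a probabilistic switch between GA-style and DE-style recombination, can be sketched as follows (p_ga and F are illustrative defaults, not values from the mission-design work):

```python
import random

def make_trial(pop, p_ga=0.5, F=0.8, rng=None):
    """Create one trial vector by switching probabilistically between a
    GA-style crossover and a DE-style differential mutation."""
    rng = rng or random.Random()
    a, b, c = rng.sample(pop, 3)
    if rng.random() < p_ga:
        # GA-style creation: uniform crossover of two parents
        return [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
    # DE-style creation: base vector plus scaled difference vector
    return [x + F * (y - z) for x, y, z in zip(a, b, c)]
```

Each trial would then be locally optimized and submitted to the bin repository, per the loop above.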
Daily Time Step Refinement of Optimized Flood Control Rule Curves for a Global Warming Scenario
NASA Astrophysics Data System (ADS)
Lee, S.; Fitzgerald, C.; Hamlet, A. F.; Burges, S. J.
2009-12-01
Pacific Northwest temperatures have warmed by 0.8 °C since 1920 and are predicted to further increase in the 21st century. Simulated streamflow timing shifts associated with climate change have been found in past research to degrade water resources system performance in the Columbia River Basin when using existing system operating policies. To adapt to these hydrologic changes, optimized flood control operating rule curves were developed in a previous study using a hybrid optimization-simulation approach which rebalanced flood control and reservoir refill at a monthly time step. For the climate change scenario, use of the optimized flood control curves restored reservoir refill capability without increasing flood risk. Here we extend the earlier studies using a detailed daily time step simulation model applied over a somewhat smaller portion of the domain (encompassing Libby, Duncan, and Corra Linn dams, and Kootenai Lake) to evaluate and refine the optimized flood control curves derived from monthly time step analysis. Moving from a monthly to daily analysis, we found that the timing of flood control evacuation needed adjustment to avoid unintended outcomes affecting Kootenai Lake. We refined the flood rule curves derived from monthly analysis by creating a more gradual evacuation schedule, but kept the timing and magnitude of maximum evacuation the same as in the monthly analysis. After these refinements, the performance at monthly time scales reported in our previous study proved robust at daily time scales. Due to a decrease in July storage deficits, additional benefits such as more revenue from hydropower generation and more July and August outflow for fish augmentation were observed when the optimized flood control curves were used for the climate change scenario.
Kamph, Jerome Henri; Robinson, Darren; Wetter, Michael
2009-09-01
There is an increasing interest in the use of computer algorithms to identify combinations of parameters which optimise the energy performance of buildings. For such problems, the objective function can be multi-modal and needs to be approximated numerically using building energy simulation programs. As these programs contain iterative solution algorithms, they introduce discontinuities in the numerical approximation to the objective function. Metaheuristics often work well for such problems, but their convergence to a global optimum cannot be established formally. Moreover, different algorithms tend to be suited to particular classes of optimization problems. To shed light on this issue we compared the performance of two metaheuristics, the hybrid CMA-ES/HDE and the hybrid PSO/HJ, in minimizing standard benchmark functions and real-world building energy optimization problems of varying complexity. From this we find that the CMA-ES/HDE performs well on more complex objective functions, but that the PSO/HJ more consistently identifies the global minimum for simpler objective functions. Both identified similar values in the objective functions arising from energy simulations, but with different combinations of model parameters. This may suggest that the objective function is multi-modal. The algorithms also correctly identified some non-intuitive parameter combinations that were caused by a simplified control sequence of the building energy system that does not represent actual practice, further reinforcing their utility.
NASA Astrophysics Data System (ADS)
Naser, Samer Fahim
The design of an extractant molecule for use in liquid-liquid extraction, traditionally a combinatorial optimization problem, has been solved using continuous optimization. UNIFAC, a thermodynamic group contribution method which allows the calculation of an activity coefficient of a component from its chemical structure, was used as the basis for all calculations. A computer system was developed which employs a three step procedure. First, the error in the liquid-liquid equilibrium relations resulting from the specification of a target separation criteria is minimized by continuously varying the functional groups in the design group pool. Second, the theoretical molecule obtained from the first step is used as a starting point to optimize up to seven separation criteria by variation of functional groups and mole fractions to obtain the optimum theoretical extractant molecule which satisfies the equilibrium relations. Third, the theoretical molecule is used to generate alternative extractant molecules which contain integer functional group values only. Numeric molecular structure constraints were developed which help maintain the feasibility of molecules in the first two steps, and allow the rejection of infeasible molecules in the third step. These constraints include limits on boiling point and molecular weight. The system developed was successfully tested on several separation problems and has suggested extractants as good or better than ones currently in use. This is the first reported use of continuous optimization in molecular design. For large design pools, this approach, as opposed to combinatorial optimization, is several orders of magnitude faster.
Leverrier, Anthony; Grangier, Philippe
2010-06-15
In this article, we give a simple proof of the fact that the optimal collective attacks against continuous-variable quantum key distribution with a Gaussian modulation are Gaussian attacks. Our proof, which makes use of symmetry properties of the protocol in phase space, is particularly relevant for the finite-key analysis of the protocol and therefore for practical applications.
A global earthquake discrimination scheme to optimize ground-motion prediction equation selection
Garcia, Daniel; Wald, David J.; Hearne, Michael
2012-01-01
We present a new automatic earthquake discrimination procedure to determine in near-real time the tectonic regime and seismotectonic domain of an earthquake, its most likely source type, and the corresponding ground-motion prediction equation (GMPE) class to be used in the U.S. Geological Survey (USGS) Global ShakeMap system. This method makes use of the Flinn–Engdahl regionalization scheme, seismotectonic information (plate boundaries, global geology, seismicity catalogs, and regional and local studies), and the source parameters available from the USGS National Earthquake Information Center in the minutes following an earthquake to give the best estimation of the setting and mechanism of the event. Depending on the tectonic setting, additional criteria based on hypocentral depth, style of faulting, and regional seismicity may be applied. For subduction zones, these criteria include the use of focal mechanism information and detailed interface models to discriminate among outer-rise, upper-plate, interface, and intraslab seismicity. The scheme is validated against a large database of recent historical earthquakes. Though developed to assess GMPE selection in Global ShakeMap operations, we anticipate a variety of uses for this strategy, from real-time processing systems to any analysis involving tectonic classification of sources from seismic catalogs.
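A rule-based discriminator of this kind reduces to a small decision tree. A toy sketch follows; the regime names and the 50 km depth threshold are illustrative stand-ins, not the USGS criteria:

```python
def gmpe_class(regime, depth_km, near_interface=False):
    """Toy GMPE-class selector in the spirit of the scheme above
    (thresholds and labels are hypothetical)."""
    if regime == "stable_continental":
        return "SCR"                       # stable-continental-region GMPEs
    if regime == "subduction":
        if near_interface and depth_km < 50:
            return "subduction_interface"
        if depth_km >= 50:
            return "subduction_intraslab"
        return "crustal_active"            # shallow upper-plate event
    return "crustal_active"                # default: active crustal GMPEs
```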
NASA Astrophysics Data System (ADS)
Chevrot, Sébastien; Martin, Roland; Komatitsch, Dimitri
2012-12-01
Wavelets are extremely powerful for compressing the information contained in finite-frequency sensitivity kernels and tomographic models. This interesting property opens the perspective of reducing the size of global tomographic inverse problems by one to two orders of magnitude. However, introducing wavelets into global tomographic problems raises the problem of computing fast wavelet transforms in spherical geometry. Using a Cartesian cubed-sphere mapping, which grids the surface of the sphere with six blocks or 'chunks', we define a new algorithm to implement fast wavelet transforms with the lifting scheme. This algorithm is simple and flexible, and can handle any family of discrete orthogonal or bi-orthogonal wavelets. Since wavelet coefficients are local in space and scale, aliasing effects resulting from a parametrization with global functions such as spherical harmonics are avoided. The sparsity of tomographic models expanded in wavelet bases implies that it is possible to exploit the power of compressed sensing to retrieve Earth's internal structures optimally. This approach involves minimizing a combination of an ℓ2 norm for data residuals and an ℓ1 norm for model wavelet coefficients, which can be achieved through relatively minor modifications of the algorithms that are currently used to solve the tomographic inverse problem.
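The mixed ℓ2/ℓ1 objective described above is commonly minimized by iterative soft thresholding (ISTA). A generic NumPy sketch, with a dense matrix standing in for the tomographic forward operator (this is not the authors' solver):

```python
import numpy as np

def ista(A, d, lam, step, iters=500):
    """Minimize (1/2)||A m - d||_2^2 + lam * ||m||_1 over wavelet
    coefficients m by iterative soft thresholding."""
    m = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ m - d)      # gradient of the quadratic misfit
        z = m - step * grad           # plain gradient step
        # soft-threshold: the proximal operator of the l1 penalty
        m = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return m
```

The step size must satisfy step ≤ 1/||A||² for convergence; lam trades data fit against sparsity of the wavelet coefficients.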
ERIC Educational Resources Information Center
Kapur, Nitin A.; Windish, Donna M.
2011-01-01
Contradictory data exist regarding optimal methods and instruments for intimate partner violence (IPV) screening in primary care settings. The purpose of this study was to determine the optimal method and screening instrument for IPV among men and women in a primary-care resident clinic. We conducted a cross-sectional study at an urban, academic,…
Laguzet, Laetitia; Turinici, Gabriel
2015-05-01
This work focuses on optimal vaccination policies for a Susceptible-Infected-Recovered (SIR) model; the impact of the disease is minimized with respect to the vaccination strategy. The problem is formulated as an optimal control problem and we show that the value function is the unique viscosity solution of a Hamilton-Jacobi-Bellman (HJB) equation. This makes it possible to find the best vaccination policy. At odds with the existing literature, it is seen that the value function is not always smooth (sometimes only Lipschitz) and the optimal vaccination policies are not unique. Moreover, we rigorously analyze the situation when vaccination can be modeled as instantaneous (with respect to the time evolution of the epidemic) and identify the globally optimal solutions. Numerical applications illustrate the theoretical results. In addition, pertussis vaccination in adults is considered from two perspectives: first, the maximization of DALYs averted in the presence of vaccine side effects; then, the impact of herd immunity on the cost-effectiveness analysis is discussed using a concrete example. PMID:25771436
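For orientation, a forward-Euler sketch of the controlled SIR dynamics with a constant vaccination rate u. The parameter values are illustrative, and a constant u is a simplification of the paper's time-varying optimal policies:

```python
def simulate_sir(beta=0.3, gamma=0.1, u=0.0, s0=0.99, i0=0.01, T=200.0, dt=0.1):
    """Integrate S' = -beta*S*I - u*S, I' = beta*S*I - gamma*I,
    R' = gamma*I + u*S by forward Euler; return final state and peak infection."""
    s, i, r = s0, i0, 1.0 - s0 - i0
    peak = i
    for _ in range(int(T / dt)):
        ds = -beta * s * i - u * s     # vaccination moves S directly to R
        di = beta * s * i - gamma * i
        dr = gamma * i + u * s
        s, i, r = s + dt * ds, i + dt * di, r + dt * dr
        peak = max(peak, i)
    return s, i, r, peak
```

Even this crude model shows the core trade-off the optimal control problem formalizes: vaccination depletes susceptibles and flattens the infection peak at the cost of vaccinating people who would never have been infected.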
Loizzo, Joseph
2009-08-01
This overview surveys the new optimism about the aging mind/brain, focusing on the potential for self-regulation practices to advance research in stress-protection and optimal health. It reviews recent findings and offers a research framework. The review links the age-related biology of stress and regeneration to the variability of mind/brain function found under a range of conditions from trauma to enrichment. The framework maps this variation along a biphasic continuum from atrophic dysfunction to peak performance. It adopts the concept of allostatic load as a measure of the wear-and-tear caused by stress, and environmental enrichment as a measure of the use-dependent enhancement caused by positive reinforcement. It frames the dissociation, aversive affect and stereotyped reactions linked with stress as cognitive, affective and behavioral forms of allostatic drag; and the association, positive affect, and creative responses in enrichment as forms of allostatic lift. It views the human mind/brain as a heterarchy of higher intelligence systems that shift between a conservative, egocentric mode heightening self-preservation and memory and a generative, altruistic mode heightening self-correction and learning. Cultural practices like meditation and psychotherapy work by teaching the self-regulation of shifts from the conservative to the generative mode. This involves a systems shift from allostatic drag to allostatic lift, minimizing wear-and-tear and optimizing plasticity and learning. For cultural practices to speed research and application, a universal typology is needed. This framework includes a typology aligning current brain models of stress and learning with traditional Indo-Tibetan models of meditative stress-cessation and learning enrichment. PMID:19743554
Optimization of semi-global stereo matching for hardware module implementation
NASA Astrophysics Data System (ADS)
Roszkowski, Mikołaj
2014-11-01
Stereo vision is one of the most intensively studied areas in the field of computer vision. It allows the creation of a 3D model of a scene given two images of the scene taken with optical cameras. Although the number of stereo algorithms keeps increasing, not many are suitable candidates for hardware implementations that could guarantee real-time processing in embedded systems. One such algorithm is semi-global matching, which balances well the quality of the disparity map against computational complexity. However, it still has quite high memory requirements, which can be a problem if low-cost FPGAs are to be used, because they often suffer from low external DRAM throughput. In this article, a few methods to reduce both the complexity of the semi-global matching algorithm and its memory usage, and thus the required bandwidth, are proposed. First of all, it is shown that a simple pyramid matching scheme can be used to efficiently reduce the number of disparities checked per pixel. Secondly, a method of dividing the image into independent blocks is proposed, which reduces the amount of memory required by the algorithm. Finally, the exact requirements for the bandwidth and the size of the on-chip memories are given.
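The pyramid idea can be sketched in a few lines: a disparity map computed at half resolution bounds the per-pixel search window at full resolution, so only a small window of disparities is evaluated instead of the full range. The function name and the fixed refinement radius are this sketch's assumptions:

```python
import numpy as np

def refine_search_range(coarse_disp, d_max, radius=2):
    """Given a disparity map from a half-resolution pyramid level, return
    per-pixel (lo, hi) search windows for the full-resolution level.
    The coarse disparity is doubled (scale change), upsampled 2x, and padded
    by a small radius, so only 2*radius+1 disparities are checked per pixel
    instead of d_max+1."""
    fine = 2 * np.repeat(np.repeat(coarse_disp, 2, axis=0), 2, axis=1)
    lo = np.clip(fine - radius, 0, d_max)
    hi = np.clip(fine + radius, 0, d_max)
    return lo, hi
```

With d_max = 63 and radius = 2, this cuts the matching cost volume per pixel from 64 entries to 5, which directly shrinks the on-chip memory and DRAM bandwidth the article is concerned with.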
Optimal integer resolution for attitude determination using global positioning system signals
NASA Technical Reports Server (NTRS)
Crassidis, John L.; Markley, F. Landis; Lightsey, E. Glenn
1998-01-01
In this paper, a new motion-based algorithm for GPS integer ambiguity resolution is derived. The first step of this algorithm converts the reference sightline vectors into body frame vectors. This is accomplished by an optimal vectorized transformation of the phase difference measurements. The result of this transformation leads to the conversion of the integer ambiguities to vectorized biases. This essentially converts the problem to the familiar magnetometer-bias determination problem, for which an optimal and efficient solution exists. Also, the formulation in this paper is re-derived to provide a sequential estimate, so that a suitable stopping condition can be found during the vehicle motion. The advantages of the new algorithm include: it does not require an a priori estimate of the vehicle's attitude; it provides an inherent integrity check using a covariance-type expression; and it can sequentially estimate the ambiguities during the vehicle motion. The only disadvantage of the new algorithm is that it requires at least three non-coplanar baselines. The performance of the new algorithm is tested on a dynamic hardware simulator.
Bayesian Segmentation of Atrium Wall Using Globally-Optimal Graph Cuts on 3D Meshes
Veni, Gopalkrishna; Fu, Zhisong; Awate, Suyash P.; Whitaker, Ross T.
2014-01-01
Efficient segmentation of the left atrium (LA) wall from delayed enhancement MRI is challenging due to inconsistent contrast, combined with noise, and high variation in atrial shape and size. We present a surface-detection method that is capable of extracting the atrial wall by computing an optimal a-posteriori estimate. This estimation is done on a set of nested meshes, constructed from an ensemble of segmented training images, and graph cuts on an associated multi-column, proper-ordered graph. The graph/mesh is a part of a template/model that has an associated set of learned intensity features. When this mesh is overlaid onto a test image, it produces a set of costs which lead to an optimal segmentation. The 3D mesh has an associated weighted, directed multi-column graph with edges that encode smoothness and inter-surface penalties. Unlike previous graph-cut methods that impose hard constraints on the surface properties, the proposed method follows from a Bayesian formulation resulting in soft penalties on spatial variation of the cuts through the mesh. The novelty of this method also lies in the construction of proper-ordered graphs on complex shapes for choosing among distinct classes of base shapes for automatic LA segmentation. We evaluate the proposed segmentation framework on simulated and clinical cardiac MRI. PMID:24684007
ERIC Educational Resources Information Center
Rafiq, Azhar; Merrell, Ronald C.
2005-01-01
Health care practices continue to evolve with technological advances integrating computer applications and patient information management into telemedicine systems. Telemedicine can be broadly defined as the use of information technology to provide patient care and share clinical information from one geographic location to another. Telemedicine…
Pillardy, Jarosław; Czaplewski, Cezary; Liwo, Adam; Lee, Jooyoung; Ripoll, Daniel R.; Kaźmierkiewicz, Rajmund; Ołdziej, Stanisław; Wedemeyer, William J.; Gibson, Kenneth D.; Arnautova, Yelena A.; Saunders, Jeff; Ye, Yuan-Jie; Scheraga, Harold A.
2001-01-01
Recent improvements of a hierarchical ab initio or de novo approach for predicting both α and β structures of proteins are described. The united-residue energy function used in this procedure includes multibody interactions from a cumulant expansion of the free energy of polypeptide chains, with their relative weights determined by Z-score optimization. The critical initial stage of the hierarchical procedure involves a search of conformational space by the conformational space annealing (CSA) method, followed by optimization of an all-atom model. The procedure was assessed in a recent blind test of protein structure prediction (CASP4). The resulting lowest-energy structures of the target proteins (ranging in size from 70 to 244 residues) agreed with the experimental structures in many respects. The entire experimental structure of a cyclic α-helical protein of 70 residues was predicted to within 4.3 Å α-carbon (Cα) rms deviation (rmsd) whereas, for other α-helical proteins, fragments of roughly 60 residues were predicted to within 6.0 Å Cα rmsd. Whereas β structures can now be predicted with the new procedure, the success rate for α/β- and β-proteins is lower than that for α-proteins at present. For the β portions of α/β structures, the Cα rmsd's are less than 6.0 Å for contiguous fragments of 30–40 residues; for one target, three fragments (of length 10, 23, and 28 residues, respectively) formed a compact part of the tertiary structure with a Cα rmsd less than 6.0 Å. Overall, these results constitute an important step toward the ab initio prediction of protein structure solely from the amino acid sequence. PMID:11226239
ERIC Educational Resources Information Center
Smolensky, Paul; Goldrick, Matthew; Mathis, Donald
2014-01-01
Mental representations have continuous as well as discrete, combinatorial properties. For example, while predominantly discrete, phonological representations also vary continuously; this is reflected by gradient effects in instrumental studies of speech production. Can an integrated theoretical framework address both aspects of structure? The…
Ona, Ofelia; Facelli, Julio C.; Bazterra, Victor E.; Caputo, Maria C.; Ferraro, Marta B.
2005-11-15
The results of ab initio global optimizations of the structures of Si_nH (n=4-10) atomic clusters using a parallel genetic algorithm are presented. By driving the global search with the parallel implementation of the genetic algorithm, and using density functional theory as implemented in the Car-Parrinello molecular dynamics code to calculate atomic cluster energies and perform the local optimization of their structures, we have been able to demonstrate that it is possible to perform global optimizations of the structure of atomic clusters using ab initio methods. The results show that this approach is able to find many structures that were not previously reported in the literature. Moreover, in most cases the new structures have considerably lower energies than those previously known. The results clearly demonstrate that these calculations are now possible and, in spite of their larger computational demands, provide more reliable results.
Sabesan, Shivkumar; Chakravarthy, Niranjan; Tsakalis, Kostas; Pardalos, Panos; Iasemidis, Leon
2009-01-01
Epileptic seizures are manifestations of intermittent spatiotemporal transitions of the human brain from chaos to order. Measures of chaos, namely maximum Lyapunov exponents (STL(max)), from dynamical analysis of the electroencephalograms (EEGs) at critical sites of the epileptic brain, progressively converge (diverge) before (after) epileptic seizures, a phenomenon that has been called dynamical synchronization (desynchronization). This dynamical synchronization/desynchronization has already constituted the basis for the design and development of systems for long-term (tens of minutes), on-line, prospective prediction of epileptic seizures. Also, the criterion for the changes in the time constants of the observed synchronization/desynchronization at seizure points has been used to show resetting of the epileptic brain in patients with temporal lobe epilepsy (TLE), a phenomenon that implicates a possible homeostatic role for the seizures themselves to restore normal brain activity. In this paper, we introduce a new criterion to measure this resetting that utilizes changes in the level of observed synchronization/desynchronization. We compare the sensitivity of this criterion for detecting resetting with that of the previous criterion, which was based on the time constants of the observed synchronization/desynchronization. Next, we test the robustness of the resetting phenomena in terms of the utilized measures of EEG dynamics by a comparative study involving STL(max), a measure of phase (ϕ(max)) and a measure of energy (E) using both criteria (i.e. the level and time constants of the observed synchronization/desynchronization). The measures are estimated from intracranial electroencephalographic (iEEG) recordings with subdural and depth electrodes from two patients with focal temporal lobe epilepsy and a total of 43 seizures. Techniques from optimization theory, in particular quadratic bivalent programming, are applied to optimize the performance of the three measures in detecting preictal entrainment. It is
NASA Astrophysics Data System (ADS)
Selva, D.
2014-10-01
Requirements from the different disciplines of the Earth sciences on satellite missions have become considerably more stringent in the past decade, while budgets in space organizations have not increased to support the implementation of new systems meeting these requirements. At the same time, new technologies such as optical communications, electrical propulsion, nanosatellite technology, and new commercial agents and models such as hosted payloads are now available. The technical and programmatic environment is thus ideal to conduct architectural studies that look with renewed breadth and adequate depth at the myriad of possible new architectures for Earth Observing Systems. Such studies are challenging tasks, since they require formidable amounts of data and expert knowledge in order to be conducted. Indeed, trade-offs between hundreds or thousands of requirements from different disciplines need to be considered, and millions of combinations of instrument technologies and orbits are possible. This paper presents a framework and tool to support the exploration of such large architectural tradespaces. The framework can be seen as a model-based, executable science traceability matrix that can be used to compare the relative value of millions of different possible architectures. It is demonstrated with an operational climate-centric case study. Ultimately, this framework can be used to assess opportunities for international collaboration and look at architectures for a global Earth observing system, including space, air, and ground assets.
Guiding automated NMR structure determination using a global optimization metric, the NMR DP score.
Huang, Yuanpeng Janet; Mao, Binchen; Xu, Fei; Montelione, Gaetano T
2015-08-01
ASDP is an automated NMR NOE assignment program. It uses a distinct bottom-up topology-constrained network anchoring approach for NOE interpretation, with 2D, 3D and/or 4D NOESY peak lists and resonance assignments as input, and generates unambiguous NOE constraints for iterative structure calculations. ASDP is designed to function interactively with various structure determination programs that use distance restraints to generate molecular models. In the CASD-NMR project, ASDP was tested and further developed using blinded NMR data, including resonance assignments, either raw or manually-curated (refined) NOESY peak list data, and in some cases (15)N-(1)H residual dipolar coupling data. In these blinded tests, in which the reference structure was not available until after structures were generated, the fully-automated ASDP program performed very well on all targets using both the raw and refined NOESY peak list data. Improvements of ASDP relative to its predecessor program for automated NOESY peak assignments, AutoStructure, were driven by challenges provided by these CASD-NMR data. These algorithmic improvements include (1) using a global metric of structural accuracy, the discriminating power score, for guiding model selection during the iterative NOE interpretation process, and (2) identifying incorrect NOESY cross peak assignments caused by errors in the NMR resonance assignment list. These improvements provide a more robust automated NOESY analysis program, ASDP, with the unique capability of being utilized with alternative structure generation and refinement programs including CYANA, CNS, and/or Rosetta. PMID:26081575
Optimal Lβ-Control for the Global Cauchy Problem of the Relativistic Vlasov-Poisson System
NASA Astrophysics Data System (ADS)
Young, Brent
2011-12-01
Recently, M.K.-H. Kiessling and A.S. Tahvildar-Zadeh proved that a unique global classical solution to the relativistic Vlasov-Poisson system exists whenever the positive, integrable initial datum is spherically symmetric, compactly supported in momentum space, vanishes on characteristics with vanishing angular momentum, and for β⩾3/2 has Lβ-norm strictly below a positive, critical value Cβ. Everything else being equal, data leading to finite time blow-up can be found with Lβ-norm surpassing Cβ for any β>1, with Cβ>0 if and only if β⩾3/2. In their paper, the critical value for β=3/2 is calculated explicitly, while the value for all other β is merely characterized as the infimum of a functional over an appropriate function space. In this work, the existence of minimizers is established, and the exact expression of Cβ is calculated in terms of the famous Lane-Emden functions. Numerical computations of Cβ are presented along with some elementary asymptotics near the critical exponent 3/2.
The q-G method: A q-version of the Steepest Descent method for global optimization.
Soterroni, Aline C; Galski, Roberto L; Scarabello, Marluce C; Ramos, Fernando M
2015-01-01
In this work, the q-Gradient (q-G) method, a q-version of the Steepest Descent method, is presented. The main idea behind the q-G method is the use of the negative of the q-gradient vector of the objective function as the search direction. The q-gradient vector, or simply the q-gradient, is a generalization of the classical gradient vector based on the concept of Jackson's derivative from the q-calculus. Its use provides the algorithm an effective mechanism for escaping from local minima. The q-G method reduces to the Steepest Descent method when the parameter q tends to 1. The algorithm has three free parameters and it is implemented so that the search process gradually shifts from global exploration in the beginning to local exploitation in the end. We evaluated the q-G method on 34 test functions, and compared its performance with 34 optimization algorithms, including derivative-free algorithms and the Steepest Descent method. Our results show that the q-G method is competitive and has a great potential for solving multimodal optimization problems. PMID:26543781
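The Jackson derivative behind the q-gradient admits a compact sketch. The following is a hedged illustration, not the authors' implementation: the q-schedule, step size, and zero-coordinate fallback are illustrative assumptions. It shows how the scheme reduces to classical steepest descent as q tends to 1.

```python
import numpy as np

def q_partial(f, x, i, q):
    """Jackson q-derivative of f along coordinate i:
    D_q f = (f(..., q*x_i, ...) - f(x)) / ((q - 1) * x_i)."""
    if x[i] == 0.0:
        # q-derivative is undefined at 0; fall back to a finite difference
        xh = x.copy()
        xh[i] = 1e-8
        return (f(xh) - f(x)) / 1e-8
    xq = x.copy()
    xq[i] = q * x[i]
    return (f(xq) - f(x)) / ((q - 1.0) * x[i])

def q_gradient_descent(f, x0, q0=1.5, decay=0.98, step=0.1, iters=200):
    """Move along the negative q-gradient; shrinking q toward 1 shifts the
    search from global exploration to classical steepest descent."""
    x = np.asarray(x0, dtype=float)
    q = q0
    for _ in range(iters):
        g = np.array([q_partial(f, x, i, q) for i in range(x.size)])
        x = x - step * g
        q = 1.0 + (q - 1.0) * decay  # q -> 1: recover the classical gradient
    return x
```

On a quadratic the q-gradient is the usual gradient evaluated "stretched" away from the current point, which is what gives the method its local-minimum-escaping behavior early on.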
NASA Astrophysics Data System (ADS)
Cai, X.; Zhang, X.; Zhu, T.
2014-12-01
Global food security is constrained by local and regional land and water availability, as well as other agricultural input limitations and inappropriate national and global regulations. In a theoretical context, this study assumes that optimal water and land uses in local food production to maximize food security and social welfare at the global level can be driven by global trade. It follows the context of "virtual resources trade", i.e., utilizing international trade of agricultural commodities to reduce dependency on local resources, and achieves land and water savings in the world. An optimization model based on the partial equilibrium of agriculture is developed for the analysis, including local commodity production and land and water resources constraints, demand by country, and global food market. Through the model, the marginal values (MVs) of social welfare for water and land at the level of so-called food production units (i.e., sub-basins with similar agricultural production conditions) are derived and mapped in the world. In this presentation, we will introduce the model structure, explain the meaning of MVs at the local level and their distribution around the world, and discuss the policy implications for global communities to enhance global food security. In particular, we will examine the economic values of water and land under different world targets of food security (e.g., number of malnourished population or children in a future year). In addition, we will discuss opportunities for improved data to strengthen such global modeling exercises.
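The marginal values mapped in such a model are shadow prices: the change in optimal welfare per extra unit of resource endowment. A hedged toy illustration (all crop numbers are hypothetical; the real model is a multi-region partial-equilibrium program) recovers an MV by finite differences on a single-constraint allocation:

```python
def max_welfare(water, crops):
    """Greedy fill by welfare per unit of water -- optimal for this
    single-resource toy. crops: (value $/t, water use per t, demand cap t)."""
    welfare = 0.0
    for value, use, cap in sorted(crops, key=lambda c: c[0] / c[1], reverse=True):
        tons = min(cap, water / use)
        welfare += value * tons
        water -= use * tons
    return welfare

def marginal_value(water, crops, h=1e-6):
    """Shadow price of water: d(optimal welfare)/d(endowment)."""
    return (max_welfare(water + h, crops) - max_welfare(water, crops)) / h

# Hypothetical crops: while water binds, its MV equals the welfare density of
# the marginal crop; once demand caps leave slack water, the MV drops to zero.
crops = [(300.0, 2.0, 40.0), (120.0, 1.5, 60.0)]
```

With 100 units of water the marginal unit feeds the second crop (120/1.5 = 80 of welfare per unit); with 200 units the constraint is slack and the MV is zero.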
Skilton, Ryan A; Parrott, Andrew J; George, Michael W; Poliakoff, Martyn; Bourne, Richard A
2013-10-01
The use of automated continuous flow reactors is described, with real-time online Fourier transform infrared spectroscopy (FT-IR) analysis to enable rapid optimization of reaction yield using a self-optimizing feedback algorithm. This technique has been applied to the solvent-free methylation of 1-pentanol with dimethyl carbonate using a γ-alumina catalyst. Calibration of the FT-IR signal was performed using gas chromatography to enable quantification of yield over a wide variety of flow rates and temperatures. The use of FT-IR as a real-time analytical technique resulted in an order of magnitude reduction in the time and materials required compared to previous studies. This permitted a wide exploration of the parameter space to provide process understanding and validation of the optimization algorithms. PMID:24067568
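A self-optimizing loop of this kind can be sketched as hill-climbing on a live response signal. The response surface and step schedule below are invented stand-ins (the study optimized yield from online FT-IR measurements, and its actual algorithm is not reproduced here):

```python
import numpy as np

def reactor_yield(T, tau):
    """Mock response surface standing in for the live FT-IR yield signal
    (assumption: a smooth optimum near 140 C and 4 min residence time)."""
    return 0.9 * np.exp(-((T - 140.0) / 25.0) ** 2 - ((tau - 4.0) / 2.0) ** 2)

def self_optimize(T=100.0, tau=2.0, iters=400, step=1.0, decay=0.99):
    """Feedback loop: probe the response locally, take a fixed-length step
    up the measured gradient, and shrink the step as the optimum nears."""
    for _ in range(iters):
        g = np.array([
            (reactor_yield(T + 1.0, tau) - reactor_yield(T - 1.0, tau)) / 2.0,
            (reactor_yield(T, tau + 0.1) - reactor_yield(T, tau - 0.1)) / 0.2,
        ])
        n = np.linalg.norm(g)
        if n < 1e-12:
            break  # flat response: stop probing
        T, tau = np.array([T, tau]) + step * g / n
        step *= decay
    return T, tau
```

In the real system each `reactor_yield` call corresponds to setting flow rates and temperature and waiting for a calibrated FT-IR reading, which is why minimizing the number of probes matters.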
William J. Gutowski; Joseph M. Prusa, Piotr K. Smolarkiewicz
2012-04-09
This project had goals of advancing the performance capabilities of the numerical general circulation model EULAG and using it to produce a fully operational atmospheric global climate model (AGCM) that can employ either static or dynamic grid stretching for targeted phenomena. The resulting AGCM combined EULAG's advanced dynamics core with the 'physics' of the NCAR Community Atmospheric Model (CAM). Effort discussed below shows how we improved model performance and tested both EULAG and the coupled CAM-EULAG in several ways to demonstrate the grid stretching and ability to simulate very well a wide range of scales, that is, multi-scale capability. We leveraged our effort through interaction with an international EULAG community that has collectively developed new features and applications of EULAG, which we exploited for our own work summarized here. Overall, the work contributed to over 40 peer-reviewed publications and over 70 conference/workshop/seminar presentations, many of them invited.
Chao, Ming; Wei, Jie; Li, Tianfang; Yuan, Yading; Rosenzweig, Kenneth E; Lo, Yeh-Chi
2016-04-21
We present a study of extracting respiratory signals from cone beam computed tomography (CBCT) projections within the framework of the Amsterdam Shroud (AS) technique. Acquired prior to the radiotherapy treatment, CBCT projections were preprocessed for contrast enhancement by converting the original intensity images to attenuation images with which the AS image was created. An adaptive robust z-normalization filtering was applied to further augment the weak oscillating structures locally. From the enhanced AS image, the respiratory signal was extracted using a two-step optimization approach to effectively reveal the large-scale regularity of the breathing signals. CBCT projection images from five patients acquired with the Varian Onboard Imager on the Clinac iX System Linear Accelerator (Varian Medical Systems, Palo Alto, CA) were employed to assess the proposed technique. Stable breathing signals can be reliably extracted using the proposed algorithm. Reference waveforms obtained using an air bellows belt (Philips Medical Systems, Cleveland, OH) were exported and compared to those with the AS based signals. The average errors for the enrolled patients between the estimated breath per minute (bpm) and the reference waveform bpm can be as low as -0.07 with the standard deviation 1.58. The new algorithm outperformed the original AS technique for all patients by 8.5% to 30%. The impact of gantry rotation on the breathing signal was assessed with data acquired with a Quasar phantom (Modus Medical Devices Inc., London, Canada) and found to be minimal on the signal frequency. The new technique developed in this work will provide a practical solution to rendering markerless breathing signal using the CBCT projections for thoracic and abdominal patients. PMID:27008349
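The "adaptive robust z-normalization" step can be read as a sliding-window z-score built on outlier-resistant statistics. A hedged sketch using median/MAD in a window (the paper's exact filter may differ; window length and constants are assumptions):

```python
import numpy as np

def robust_z(signal, win=15):
    """Sliding-window z-score using median and MAD instead of mean/std, so
    isolated outliers do not distort the local normalization."""
    s = np.asarray(signal, dtype=float)
    out = np.empty_like(s)
    half = win // 2
    for i in range(len(s)):
        w = s[max(0, i - half): i + half + 1]
        med = np.median(w)
        mad = max(np.median(np.abs(w - med)), 1e-12)  # guard: constant window
        out[i] = (s[i] - med) / (1.4826 * mad)  # 1.4826 scales MAD to ~sigma
    return out
```

Applied column-wise to an Amsterdam Shroud image, this kind of filter boosts the weak oscillating structures while suppressing slowly varying background.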
NASA Astrophysics Data System (ADS)
Clifford, S. M.; Delemere, W.; Gogineni, P. S.
2011-12-01
The MARSIS and SHARAD orbital radar sounders have given tantalizing glimpses of the subsurface of Mars. But, to be accommodated aboard spacecraft with a number of other high-level investigations, MARSIS and SHARAD had to accept some compromises in instrument design and operation that have limited their potential capabilities. Here we describe a proposal for a new Mars orbital radar mission, the Mars Global Subsurface Sounder (MGSS), that is solely dedicated to subsurface sounding, allowing it to achieve maximum spatial resolution and penetration depth through an optimized orbit, antenna design, increased power and significantly improved signal-to-noise ratio. The chief science goals of this mission are to investigate the stratigraphic and structural evolution of the Martian subsurface and polar layered deposits (PLD), as well as the distribution and state of subsurface water (whether as a liquid or as massive ice deposits), through the acquisition of a 3-D radar map to depths ranging from 1 km, in lithic environments, up to 4 km in the PLD. The MARSIS and SHARAD radar investigations have provided clear demonstrations of the capabilities of deep-sounding radar to conduct similar investigations. The MGSS is expected to significantly improve on this performance by taking advantage of a spacecraft and mission optimized for radar sounding. Over its 2-year mission duration, the MGSS will be able to compile a global 3-D map of local variations in dielectric properties, with a horizontal resolution of ~1 km and vertical resolution of ~10-20 m. MGSS is a dual-band radar sounder that operates at 1-6 MHz and 15-25 MHz. 2-D SAR processing is used to maximize both along- and cross-track resolution and clutter suppression, while onboard along-track processing minimizes the downlink data rate. The spacecraft has sufficient mass margin to incorporate shielding to minimize signal degradation by electromagnetic interference and maximize the signal-to-noise ratio. The orbit of
Caproni, A.; Toffoli, R. T.; Monteiro, H.; Abraham, Z.; Teixeira, D. M.
2011-07-20
We present a new technique for obtaining model fittings to very long baseline interferometric images of astrophysical jets. The method minimizes a performance function proportional to the sum of the squared difference between the model and observed images. The model image is constructed by summing N{sub s} elliptical Gaussian sources characterized by six parameters: two-dimensional peak position, peak intensity, eccentricity, amplitude, and orientation angle of the major axis. We present results for the fitting of two main benchmark jets: the first constructed from three individual Gaussian sources, the second formed by five Gaussian sources. Both jets were analyzed by our cross-entropy technique in finite and infinite signal-to-noise regimes, the background noise chosen to mimic that found in interferometric radio maps. Those images were constructed to simulate most of the conditions encountered in interferometric images of active galactic nuclei. We show that the cross-entropy technique is capable of recovering the parameters of the sources with a similar accuracy to that obtained from the very traditional Astronomical Image Processing System Package task IMFIT when the image is relatively simple (e.g., few components). For more complex interferometric maps, our method displays superior performance in recovering the parameters of the jet components. Our methodology is also able to show quantitatively the number of individual components present in an image. An additional application of the cross-entropy technique to a real image of a BL Lac object is shown and discussed. Our results indicate that our cross-entropy model-fitting technique must be used in situations involving the analysis of complex emission regions having more than three sources, even though it is substantially slower than current model-fitting tasks (at least 10,000 times slower for a single processor, depending on the number of sources to be optimized). As in the case of any model fitting
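The cross-entropy method itself is compact: sample candidate parameter vectors from a Gaussian, keep the lowest-cost elites, and refit the Gaussian to them. A generic, hedged sketch follows (the authors fit six-parameter elliptical Gaussian source models against observed images; the cost function here is a stand-in, and the sample sizes are illustrative):

```python
import numpy as np

def cross_entropy_min(f, mu, sigma, n=100, elite_frac=0.2, iters=60, seed=0):
    """Cross-entropy minimization: sample n candidates from N(mu, sigma),
    keep the elite fraction with the lowest cost, refit mu and sigma."""
    rng = np.random.default_rng(seed)
    mu = np.array(mu, dtype=float)
    sigma = np.array(sigma, dtype=float)
    k = max(2, int(n * elite_frac))
    for _ in range(iters):
        X = rng.normal(mu, sigma, size=(n, len(mu)))
        scores = np.array([f(x) for x in X])
        elites = X[np.argsort(scores)[:k]]
        mu = elites.mean(axis=0)
        sigma = elites.std(axis=0) + 1e-6  # small floor keeps sampling alive
    return mu
```

In the paper's setting, `f` would render the model image from the source parameters and return the sum of squared differences against the observed map, which is why each iteration is expensive.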
Bussamara, Roberta; Dall'Agnol, Luciane; Schrank, Augusto; Fernandes, Kátia Flávia; Vainstein, Marilene Henning
2012-01-01
This study aimed to develop an optimal continuous process for lipase immobilization in a bed reactor in order to investigate the possibility of large-scale production. An extracellular lipase of Pseudozyma hubeiensis (strain HB85A) was immobilized by adsorption onto a polystyrene-divinylbenzene support. Furthermore, response surface methodology (RSM) was employed to optimize enzyme immobilization and evaluate the optimum temperature and pH for free and immobilized enzyme. The optimal immobilization conditions observed were 150 min incubation time, pH 4.76, and an enzyme/support ratio of 1282 U/g support. Optimal activity temperature for free and immobilized enzyme was found to be 68°C and 52°C, respectively. Optimal activity pH for free and immobilized lipase was pH 4.6 and 6.0, respectively. Lipase immobilization resulted in improved enzyme stability in the presence of nonionic detergents, at high temperatures, at acidic and neutral pH, and at high concentrations of organic solvents such as 2-propanol, methanol, and acetone. PMID:22315670
DBD reactor design and optimization in continuous AP-PECVD from HMDSO/N2/N2O mixture
NASA Astrophysics Data System (ADS)
Hotmar, Petr; Caquineau, Hubert; Cozzolino, Raphaël; Gherardi, Nicolas
2016-02-01
Dielectric barrier discharge (DBD) deposition of thin films is increasingly studied as a promising alternative to other non-thermal processes such as low-pressure plasma-enhanced chemical vapor deposition (PECVD) or wet-coating. In this paper we demonstrate how optimizing gas injection in the DBD results in an improvement in the reactor performance. We propose to confine the precursor gas close to the deposition substrate by an additional gas flow. The performance of this design is studied through simulation of mass transport. To optimize the deposited thickness, gas cost and reactor clogging, we assess the influence of the confinement, total gas flow rate and DBD length. The confinement is found to reduce reactor clogging, even for long DBD, and increase the deposit thickness. This increase in thickness requires a proportionate increase in the gas flow rate, making the gas cost the main limitation of the proposed design. We show, however, that by fine-tuning the operating conditions a beneficial compromise can be obtained between the three optimization objectives.
Häfele, W; Sassin, W
1979-05-01
A global energy system, the energy-distribution sub-system of the worldwide supranational system, is conceptualized and analyzed. Its many interconnections are examined and traced back to their source to determine the major elements of this global energy system. Long-term trends are emphasized. The analysis begins with a discussion of the local systems that resulted from the deployment of technology in the mid-nineteenth century, continues with a description of the global system based on oil that has existed for the past two decades, and ends with a scenario implying that an energy transition will occur in the future in which use of coal, nuclear, and solar energy will predominate. A major problem for the future will be the management of this energy transition. The optimal use of global resources and the efficient management of this transition will require a stable and persistent global order. PMID:464990
NASA Astrophysics Data System (ADS)
Simpson, J. J.; Taflove, A.
2005-12-01
We report a finite-difference time-domain (FDTD) computational solution of Maxwell's equations [1] that models the possibility of detecting and characterizing ionospheric disturbances above seismic regions. Specifically, we study anomalies in Schumann resonance spectra in the extremely low frequency (ELF) range below 30 Hz as observed in Japan caused by a hypothetical cylindrical ionospheric disturbance above Taiwan. We consider excitation of the global Earth-ionosphere waveguide by lightning in three major thunderstorm regions of the world: Southeast Asia, South America (Amazon region), and Africa. Furthermore, we investigate varying geometries and characteristics of the ionospheric disturbance above Taiwan. The FDTD technique used in this study enables a direct, full-vector, three-dimensional (3-D) time-domain Maxwell's equations calculation of round-the-world ELF propagation accounting for arbitrary horizontal as well as vertical geometrical and electrical inhomogeneities and anisotropies of the excitation, ionosphere, lithosphere, and oceans. Our entire-Earth model grids the annular lithosphere-atmosphere volume within 100 km of sea level, and contains over 6,500,000 grid-points (63 km laterally between adjacent grid points, 5 km radial resolution). We use our recently developed spherical geodesic gridding technique having a spatial discretization best described as resembling the surface of a soccer ball [2]. The grid is comprised entirely of hexagonal cells except for a small fixed number of pentagonal cells needed for completion. Grid-cell areas and locations are optimized to yield a smoothly varying area difference between adjacent cells, thereby maximizing numerical convergence. We compare our calculated results with measured data prior to the Chi-Chi earthquake in Taiwan as reported by Hayakawa et al. [3]. Acknowledgement: This work was suggested by Dr. Masashi Hayakawa, University of Electro-Communications, Chofugaoka, Chofu, Tokyo. References [1] A
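At its core, FDTD advances staggered electric and magnetic fields in a leapfrog loop. A minimal 1-D vacuum sketch in normalized units follows; it shows only the update pattern, nothing like the paper's 3-D geodesic Earth-ionosphere grid, and the grid size, source, and Courant number are illustrative choices:

```python
import numpy as np

def fdtd_1d(steps=300, nx=200, src=100):
    """1-D vacuum Yee/FDTD scheme: Ez and Hy live on staggered grids and are
    updated in leapfrog fashion; endpoints act as perfectly conducting walls."""
    Ez = np.zeros(nx)
    Hy = np.zeros(nx - 1)
    S = 0.5  # Courant number; stability in 1-D requires S <= 1
    for t in range(steps):
        Hy += S * (Ez[1:] - Ez[:-1])          # curl E advances H a half step
        Ez[1:-1] += S * (Hy[1:] - Hy[:-1])    # curl H advances E a half step
        Ez[src] += np.exp(-((t - 30) / 10.0) ** 2)  # soft Gaussian source
    return Ez
```

The 3-D spherical-geodesic solver in the paper follows the same leapfrog idea, but with curl operators evaluated over hexagonal/pentagonal cell faces rather than a Cartesian stencil.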
NASA Astrophysics Data System (ADS)
Carroll, Rosemary W. H.; Pohll, Greg M.; Earman, Sam; Hershey, Ronald L.
2007-10-01
Summary: As part of a larger study to estimate groundwater recharge volumes in the area of the eastern Nevada Test Site (NTS), the discrete-state compartment model (DSCM) of [Campana, M.E., 1975. Finite-state models of transport phenomena in hydrologic systems, PhD Dissertation, University of Arizona, Tucson] was re-coded to simulate steady-state groundwater concentrations of a conservative tracer. It was then dynamically linked with the shuffled complex evolution (SCE) optimization algorithm [Duan, Q., Sorooshian, S., Gupta, V., 1992. Effective and efficient global optimization for conceptual rainfall-runoff models. Water Resources Research 28(4), 1015-1031], in which both flow direction and magnitude were adjusted to minimize errors in predicted tracer concentrations. Code validation on a simple four-celled model showed the algorithm consistent in model predictions and capable of reproducing expected cell outflows with relatively little error. The DSCM-SCE code was then applied to a 15-basin (cell) eastern NTS model developed for the DSCM. Auto-calibration of the NTS model was run under two modeling scenarios: (a) assuming known groundwater flow directions and solving only for magnitudes, and (b) solving for both groundwater flow directions and magnitudes. The SCE is a fairly robust algorithm, unlike simulated annealing or modified Gauss-Newton approaches. The DSCM-SCE improves upon its original counterpart by being more user-friendly and by auto-calibrating complex models in minutes to hours. While the DSCM-SCE can provide numerical support to a working hypothesis, it cannot definitively define a flow system based solely on δD values given few hydrogeologic constraints on boundary conditions and cell-to-cell interactions.
NASA Astrophysics Data System (ADS)
Chen, Fang; Chang, Honglong; Yuan, Weizheng; Wilcock, Reuben; Kraft, Michael
2012-10-01
This paper describes a novel multiobjective parameter optimization method based on a genetic algorithm (GA) for the design of a sixth-order continuous-time, force feedback band-pass sigma-delta modulator (BP-ΣΔM) interface for the sense mode of a MEMS gyroscope. The design procedure starts by deriving a parameterized Simulink model of the BP-ΣΔM gyroscope interface. The system parameters are then optimized by the GA. Consequently, the optimized design is tested for robustness by a Monte Carlo analysis to find a solution that is both optimal and robust. System level simulations result in a signal-to-noise ratio (SNR) larger than 90 dB in a bandwidth of 64 Hz with a 200° s-1 angular rate input signal; the noise floor is about -100 dBV Hz-1/2. The simulations are compared to measured data from a hardware implementation. For zero input rotation with the gyroscope operating at atmospheric pressure, the spectrum of the output bitstream shows an obvious band-pass noise shaping and a deep notch at the gyroscope resonant frequency. The noise floor of measured power spectral density (PSD) of the output bitstream agrees well with simulation of the optimized system level model. The bias stability, rate sensitivity and nonlinearity of the gyroscope controlled by an optimized BP-ΣΔM closed-loop interface are 34.15° h-1, 22.3 mV °-1 s-1, 98 ppm, respectively. This compares to a simple open-loop interface for which the corresponding values are 89° h-1, 14.3 mV °-1 s-1, 7600 ppm, and a nonoptimized BP-ΣΔM closed-loop interface with corresponding values of 60° h-1, 17 mV °-1 s-1, 200 ppm.
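A real-coded GA of the kind used for such parameter searches can be sketched briefly. This single-objective toy (tournament selection, blend crossover, annealed Gaussian mutation) is an illustration only; the paper's optimizer aggregates several objectives (SNR, noise floor) and adds a Monte Carlo robustness check before accepting a design:

```python
import numpy as np

def ga_minimize(f, bounds, pop=60, gens=80, mut=0.1, seed=1):
    """Real-coded GA: tournament selection, blend crossover, and Gaussian
    mutation whose scale decays so late generations fine-tune, not explore."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    P = rng.uniform(lo, hi, size=(pop, len(lo)))
    best, best_cost = None, np.inf
    for _ in range(gens):
        cost = np.array([f(x) for x in P])
        i = int(np.argmin(cost))
        if cost[i] < best_cost:
            best, best_cost = P[i].copy(), float(cost[i])
        children = []
        for _ in range(pop):
            a = min(rng.integers(pop, size=2), key=lambda j: cost[j])  # tournament
            b = min(rng.integers(pop, size=2), key=lambda j: cost[j])
            w = rng.random(len(lo))
            child = w * P[a] + (1.0 - w) * P[b]      # blend crossover
            mask = rng.random(len(lo)) < 0.2         # mutate ~20% of genes
            child += mask * rng.normal(0.0, mut * (hi - lo))
            children.append(np.clip(child, lo, hi))
        P = np.asarray(children)
        mut *= 0.97                                  # anneal mutation scale
    return best
```

In the paper each fitness evaluation is a Simulink run of the parameterized modulator model, so the population and generation counts are chosen to keep the total simulation budget tractable.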
Arai, Hidenao; Nishigaki, Koichi; Nemoto, Naoto; Suzuki, Miho; Husimi, Yuzuru
2014-01-01
The norovirus RNA replicase (NV3D(pol), 56 kDa, single chain monomeric protein) can amplify double-stranded (ds) RNA isothermally. It will play an alternative role in the in vitro evolution against traditional Qβ RNA replicase, which cannot amplify dsRNA and consists of four subunits, three of which are borrowed from host E.coli. In order to identify the optimal 3'-terminal sequence of the RNA template for NV3D(pol), an in vitro selection using the serial transfer was performed for a random library having the 3'-terminal sequence of ---UUUUUUNNNN-3'. The population landscape on the 4-dimensional sequence space of the 17(th) round of transfer gave a main peak around ---CAAC-3'. In the preceding studies on the batch amplification reaction starting from a single-stranded RNA, a template with 3'-terminal C-stretch was amplified effectively. It was confirmed that in the batch amplification the ---CCC-3' was much more effective than the ---CAAC-3', but in the serial transfer condition in which the ----CAAC-3' was sustained stably, the ---CCC-3' was washed out. Based on these results we proposed the existence of the "shuttle mode" replication of dsRNA. We also proposed the optimal terminal sequences of RNA for in vitro evolution with NV3D(pol). PMID:27493494
NASA Technical Reports Server (NTRS)
Zimetbaum, P. J.; Kim, K. Y.; Josephson, M. E.; Goldberger, A. L.; Cohen, D. J.
1998-01-01
BACKGROUND: Continuous-loop event recorders are widely used for the evaluation of palpitations, but the optimal duration of monitoring is unknown. OBJECTIVE: To determine the yield, timing, and incremental cost-effectiveness of each week of event monitoring for palpitations. DESIGN: Prospective cohort study. PATIENTS: 105 consecutive outpatients referred for the placement of a continuous-loop event recorder for the evaluation of palpitations. MEASUREMENTS: Diagnostic yield, incremental cost, and cost-effectiveness for each week of monitoring. RESULTS: The diagnostic yield of continuous-loop event recorders was 1.04 diagnoses per patient in week 1, 0.15 diagnoses per patient in week 2, and 0.01 diagnoses per patient in week 3 and beyond. Over time, the cost-effectiveness ratio increased from $98 per new diagnosis in week 1 to $576 per new diagnosis in week 2 and $5832 per new diagnosis in week 3. CONCLUSIONS: In patients referred for evaluation of palpitations, the diagnostic yield of continuous-loop event recording decreases rapidly after 2 weeks of monitoring. A 2-week monitoring period is reasonably cost-effective for most patients and should be the standard period for continuous-loop event recording for the evaluation of palpitations.
Tsetsarkin, Konstantin A; Chen, Rubing; Yun, Ruimei; Rossi, Shannan L; Plante, Kenneth S; Guerbois, Mathilde; Forrester, Naomi; Perng, Guey Chuen; Sreekumar, Easwaran; Leal, Grace; Huang, Jing; Mukhopadhyay, Suchetana; Weaver, Scott C
2014-01-01
Host species-specific fitness landscapes largely determine the outcome of host switching during pathogen emergence. Using chikungunya virus (CHIKV) to study adaptation to a mosquito vector, we evaluated mutations associated with recently evolved sub-lineages. Multiple Aedes albopictus-adaptive fitness peaks became available after CHIKV acquired an initial adaptive (E1-A226V) substitution, permitting rapid lineage diversification observed in nature. All second-step mutations involved replacements by glutamine or glutamic acid of E2 glycoprotein amino acids in the acid-sensitive region, providing a framework to anticipate additional A. albopictus-adaptive mutations. The combination of second-step adaptive mutations into a single, 'super-adaptive' fitness peak also predicted the future emergence of CHIKV strains with even greater transmission efficiency in some current regions of endemic circulation, followed by their likely global spread. PMID:24933611
NASA Astrophysics Data System (ADS)
Zhang, N.; Chen, F. Y.; Wu, X. Q.
2015-07-01
The structure of the 38-atom Ag-Cu cluster is studied using a combination of a genetic algorithm global optimization technique and density functional theory (DFT) calculations. The atomistic models demonstrate that the truncated octahedral (TO) Ag32Cu6 core-shell cluster is less stable than the polyicosahedral (pIh) Ag32Cu6 core-shell cluster, and the DFT calculations agree with this result, so the newly found pIh Ag32Cu6 core-shell cluster is further investigated for potential application to O2 dissociation in the oxygen reduction reaction (ORR). The activation energy barrier for O2 dissociation on the pIh Ag32Cu6 core-shell cluster is 0.715 eV, where the d-band center is -3.395 eV and the density of states at the Fermi energy level is maximal for the favorable adsorption site, indicating that the catalytic activity is attributable to a maximal charge transfer between an oxygen molecule and the pIh Ag32Cu6 core-shell cluster. This work revises the earlier idea that Ag32Cu6 core-shell nanoparticles are not suitable as ORR catalysts and confirms that the Ag-Cu nanoalloy is a potential candidate to substitute for noble Pt-based catalysts in alkaline fuel cells.
NASA Astrophysics Data System (ADS)
Bos, Brent J.; Howard, Joseph M.; Young, Philip J.; Gracey, Renee; Seals, Lenward T.; Ohl, Raymond G.
2012-09-01
During cryogenic vacuum testing of the James Webb Space Telescope (JWST) Integrated Science Instrument Module (ISIM), the global alignment of the ISIM with respect to the designed interface of the JWST optical telescope element (OTE) will be measured through a series of optical characterization tests. These tests will determine the locations and orientations of the JWST science instrument projected focal surfaces and entrance pupils with respect to their corresponding OTE optical interfaces. Thermal, finite element and optical modeling will then be used to predict the on-orbit optical performance of the observatory. If any optical performance non-compliances are identified, the ISIM will be adjusted to improve its performance. If this becomes necessary, ISIM has a variety of adjustments that can be made. The lengths of the six kinematic mount struts that attach the ISIM to the OTE can be modified and five science instrument focus positions and two pupil positions can be individually adjusted as well. In order to understand how to manipulate the ISIM’s degrees of freedom properly and to prepare for the ISIM flight model testing, we have completed a series of optical-mechanical analyses to develop and identify the best approaches for bringing a non-compliant ISIM Element back into compliance. During this work several unknown misalignment scenarios were produced and the simulated optical performance metrics were input into various mathematical modeling and optimization tools to determine how the ISIM degrees of freedom should be adjusted to provide the best overall optical performance.
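Compensation calculations of the kind described above are often posed as a linear least-squares problem over a sensitivity matrix relating performance metrics to adjustable degrees of freedom. The sketch below is a generic illustration under that assumption (the matrix, dimensions and degree-of-freedom values are made up), not NASA's actual tool chain.

```python
# Hypothetical sketch: solve for compensator adjustments in the least-squares
# sense, given a linear sensitivity matrix S mapping degrees of freedom (dof)
# to optical performance errors. All numbers here are illustrative.
import numpy as np

rng = np.random.default_rng(4)
S = rng.normal(size=(12, 6))     # 12 metrics vs. 6 strut-length dofs (made up)
dof_true = np.array([0.3, -0.1, 0.05, 0.2, -0.25, 0.0])
error = S @ dof_true             # simulated non-compliant performance metrics

# Least-squares solve: which dof adjustment best explains the observed error?
dof_fix, *_ = np.linalg.lstsq(S, error, rcond=None)
# Applying -dof_fix to the struts would null the modeled error.
```

With more metrics than degrees of freedom the system is overdetermined, and the pseudoinverse solution minimizes the residual performance error rather than zeroing every metric exactly.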
Global optimization study of small (10 ≤ N ≤ 120) Pd clusters supported on MgO(100).
Rossi, G; Mottet, C; Nita, F; Ferrando, R
2006-04-13
Experimental evidence suggests that Pd clusters on MgO, known to be good reaction catalysts, have face-centered cubic (fcc) epitaxial structures. The structure of such clusters is the result of the interplay of Pd-Pd and Pd-substrate bonds, the former inclined to favor icosahedral (Ih) and decahedral (Dh)-like structures, the latter tending to place Pd atoms on top of oxygen sites, according to an epitaxial stacking. This paper shows the results of a basin-hopping global optimization procedure applied to free and MgO-supported Pd clusters in the size range 10 ≤ N ≤ 120. Pd-MgO interactions are modeled by an analytical function fitted to ab initio results, while Pd-Pd interactions are modeled by a semiempirical potential. Besides the tight-binding Rosato-Guillopé-Legrand (RGL) potential, we have adopted a modified version of RGL that better reproduces the experimental surface energy of palladium, modifying the attractive part of the Pd atoms' potential energy. We have compared the two potential models: the RGL potential favors clusters with epitaxial arrangements, so that cluster structures are epitaxial fcc over almost the entire size range considered. On the contrary, the alternative potential model preserves some Ih-like characteristics typical of the free Pd clusters, and it suggests that a transition from Ih-like to epitaxial structures can take place at about 100 atoms. PMID:16599522
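Basin-hopping, as used in the study above, combines random perturbation of the current structure, local energy minimization, and a Metropolis acceptance test. A minimal generic sketch on a one-dimensional double-well function (not a Pd-MgO potential; the function and all parameters are illustrative):

```python
# Generic basin-hopping sketch: perturb, locally relax, accept/reject.
# The double-well "energy" stands in for a cluster potential-energy surface.
import math
import random

def energy(x):
    # Two minima; the tilt term makes the one near x = -1 the global minimum.
    return (x * x - 1.0) ** 2 + 0.2 * x

def local_minimize(x, step=1e-3, iters=2000):
    # Plain gradient descent with a numerical derivative (local relaxation).
    for _ in range(iters):
        grad = (energy(x + 1e-6) - energy(x - 1e-6)) / 2e-6
        x -= step * grad
    return x

def basin_hop(x0=1.0, hops=50, temp=0.5, seed=3):
    rng = random.Random(seed)
    x = local_minimize(x0)
    best_x, best_e = x, energy(x)
    for _ in range(hops):
        trial = local_minimize(x + rng.uniform(-1.5, 1.5))  # perturb + relax
        de = energy(trial) - energy(x)
        if de < 0 or rng.random() < math.exp(-de / temp):   # Metropolis test
            x = trial
        if energy(x) < best_e:
            best_x, best_e = x, energy(x)
    return best_x

x_star = basin_hop()   # expected to settle in the global basin near x = -1.02
```

Because every trial point is relaxed to a local minimum before the acceptance test, the walk effectively hops between basins rather than sampling the raw landscape, which is what makes the method effective for cluster structures.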
Deshpande, Nandini; Hewston, Patricia; Yoshikawa, Mika
2015-04-01
The ability to safely perform cognitive-motor dual-tasks is critical for the independence of older adults. We compared age-associated differences in global and segmental control during dual-task walking in sub-optimal sensory conditions. Thirteen young (YA) and 13 healthy older (OA) adults walked a straight pathway with a cognitive dual-task of walking-while-talking (WT) or no WT under four sensory conditions. On randomly selected trials, visual and vestibular inputs were manipulated using blurring goggles (BV) and Galvanic Vestibular Stimulation (GVS), respectively. Gait speed decreased more in YA than in OA during WT. Gait speed increased with GVS under normal vision but not with BV. Step length decreased considerably with WT. Trunk roll significantly decreased only in OA with GVS during WT. Head roll significantly decreased with GVS regardless of age. The results indicate that GVS-induced adaptations depended on the available visual information. YA reduced their gait speed more than OA to achieve a similar pace to safely perform WT. GVS led both age groups to reduce head movement. However, with the addition of WT during GVS, OA also stiffened their trunk. Therefore, with increased attentional demands healthy OA employed different compensatory strategies than YA to maintain postural control. PMID:25617991
NASA Astrophysics Data System (ADS)
Zhen, Wu; Wanji, Chen
2010-04-01
A C0-type global-local higher-order theory including interlaminar stress continuity is proposed in this paper for cross-ply laminated composite and sandwich plates; it satisfies a priori the continuity conditions of transverse shear stresses at interfaces. Moreover, the total number of unknowns in the model is independent of the number of layers. Compared with other higher-order theories that satisfy the interlaminar continuity of transverse shear stresses, the merit of the proposed model is that the first derivatives of the transverse displacement w have been removed from the in-plane displacement fields, so that only C0 interpolation functions are required in its finite element implementation. To verify the model, a C0 three-node triangular element is used for bending analysis of laminated composite and sandwich plates. It should be noted that all variables in the present model are discretized using only linear interpolation functions within an element. Numerical results show that the C0 plate element based on the present theory can accurately calculate transverse shear stresses without any postprocessing, and the results agree well with those obtained from the C1-type higher-order theory. Compared with the C1 plate bending element, the present element is simple, convenient to use, and sufficiently accurate.
Dong, Futao; Du, Linxiu; Liu, Xianghua; Xue, Fei
2013-10-15
The influence of Mn, S and B contents on the microstructural characteristics, mechanical properties and hydrogen trapping ability of low-carbon Al-killed enamel steel was investigated. The materials were produced and processed in a laboratory, and ultra-fast continuous annealing was performed using a continuous annealing simulator. It was found that increasing the Mn and S contents in the steel can improve its hydrogen trapping ability, which is attributed to refined ferrite grains, more dispersed cementite and added MnS inclusions. Nevertheless, it deteriorates the mechanical properties of the steel sheet. Addition of trace boron results in both good mechanical properties and significantly improved hydrogen trapping ability. The boron, combined with nitrogen, segregating at grain boundaries, cementite and MnS inclusions, provides a higher amount of attractive hydrogen trapping sites and raises the activation energy for hydrogen desorption from them. - Highlights: • We study microstructures and properties in low-carbon Al-killed enamel steel. • Hydrogen diffusion coefficients are measured to reflect fish-scale resistance. • Manganese improves hydrogen trapping ability but decreases deep-drawing ability. • Boron improves both hydrogen trapping ability and deep-drawing ability. • Both excellent mechanical properties and fish-scale resistance can be obtained simultaneously.
NASA Astrophysics Data System (ADS)
Glyavin, M. Yu.; Denisov, G. G.; Zapevalov, V. E.; Kuftin, A. N.; Manuilov, V. N.; Soluyanova, E. A.; Sedov, A. S.; Kholoptsev, V. V.; Chirkov, A. V.
2016-02-01
We present the results of developing the main units of a gyrotron operated in the continuous-wave regime with a generation frequency of 0.26 THz. To improve selection of the operating mode in an oversized electrodynamic system, the gyrotron works at the fundamental cyclotron harmonic, which presupposes the use of a cryomagnet with a maximum magnetic field of 10 T, one that does not require filling with liquid helium. The results of optimizing the electron-optical system, the cavity, and the quasi-optical converter of the output radiation are presented, and the control system developed for the gyrotron setup is described.
NASA Astrophysics Data System (ADS)
Merder, Tomasz; Warzecha, Marek
2012-08-01
The main differences in the transient zone extent between the individual strands of a former industrial six-strand tundish configuration are the basis for undertaking this study. The aim of this study was to improve the casting conditions by proposing optimal equipment for the tundish working space. For economic reasons, only variants with different baffle configurations were considered; this was also dictated by the simplicity of the construction and the possibility of its implementation by the operating steel mill. In the current study, industrial plant measurements and mathematical modeling were used. Industrial experimental data were used to diagnose the current state of the industrial tundish and then to validate the numerical simulations. After this, the influence of different baffle configurations installed in the tundish on the steel flow characteristics was modeled mathematically. Residence time distribution (RTD) curves were plotted, and individual flow shares for the investigated tundish were estimated from these curves. Finally, the industrial plant was rebuilt according to the numerical results and additional plant measurements were performed. A result of introducing the baffles into the tundish working space was a reduction of the transient zone extent. The results indicate an increasing share of the dispersed plug flow and a decreasing share of the dead volume flow, with a practically unchanged share of well-mixed volume flow in the modified tundish.
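Flow-share estimation from RTD curves rests on normalizing the measured tracer response to a density E(t) and taking its moments. A minimal sketch of that normalization and the mean residence time, using a synthetic plug-flow-plus-mixed-tank response rather than the plant measurements:

```python
# Sketch of RTD post-processing: normalize a tracer curve C(t) to E(t),
# then compute the mean residence time from the first moment.
import numpy as np

def rtd_mean(t, c):
    dt = t[1] - t[0]                 # uniform sampling assumed
    area = c.sum() * dt              # total tracer response (Riemann sum)
    e = c / area                     # RTD density E(t); integrates to ~1
    return e, (t * e).sum() * dt     # mean residence time (first moment)

# Synthetic response: plug-flow delay followed by a well-mixed exponential
# tail -- the idealization behind plug / well-mixed / dead volume splits.
t = np.linspace(0.0, 120.0, 1201)
tau_plug, tau_mix = 5.0, 10.0
c = np.where(t < tau_plug, 0.0, np.exp(-(t - tau_plug) / tau_mix))

e, t_mean = rtd_mean(t, c)   # t_mean should be close to tau_plug + tau_mix
```

Comparing the measured mean residence time against the nominal volume-over-flow-rate value is what allows dead volume and plug-flow shares to be split out for each strand.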
Thompson, Kimberly M; Duintjer Tebbens, Radboud J
2016-07-01
Managing the dynamics of vaccine supply and demand represents a significant challenge with very high stakes. Insufficient vaccine supplies can necessitate rationing, lead to preventable adverse health outcomes, delay the achievements of elimination or eradication goals, and/or pose reputation risks for public health authorities and/or manufacturers. This article explores the dynamics of global vaccine supply and demand to consider the opportunities to develop and maintain optimal global vaccine stockpiles for universal vaccines, characterized by large global demand (for which we use measles vaccines as an example), and nonuniversal (including new and niche) vaccines (for which we use oral cholera vaccine as an example). We contrast our approach with other vaccine stockpile optimization frameworks previously developed for the United States pediatric vaccine stockpile to address disruptions in supply and global emergency response vaccine stockpiles to provide on-demand vaccines for use in outbreaks. For measles vaccine, we explore the complexity that arises due to different formulations and presentations of vaccines, consideration of rubella, and the context of regional elimination goals. We conclude that global health policy leaders and stakeholders should procure and maintain appropriate global vaccine rotating stocks for measles and rubella vaccine now to support current regional elimination goals, and should probably also do so for other vaccines to help prevent and control endemic or epidemic diseases. This work suggests the need to better model global vaccine supplies to improve efficiency in the vaccine supply chain, ensure adequate supplies to support elimination and eradication initiatives, and support progress toward the goals of the Global Vaccine Action Plan. PMID:25109229
Cao, Ping; Müller, Tobias K H; Ketterer, Benedikt; Ewert, Stephanie; Theodosiou, Eirini; Thomas, Owen R T; Franzreb, Matthias
2015-07-17
Continued advance of a new temperature-controlled chromatography system, comprising a column filled with thermoresponsive stationary phase and a travelling cooling zone reactor (TCZR), is described. Nine copolymer grafted thermoresponsive cation exchangers (thermoCEX) with different balances of thermoresponsive (N-isopropylacrylamide), hydrophobic (N-tert-butylacrylamide) and negatively charged (acrylic acid) units were fashioned from three cross-linked agarose media differing in particle size and pore dimensions. Marked differences in grafted copolymer composition on finished supports were sourced to base matrix hydrophobicity. In batch binding tests with lactoferrin, maximum binding capacity (qmax) increased strongly as a function of charge introduced, but became increasingly independent of temperature, as the ability of the tethered copolymer networks to switch between extended and collapsed states was lost. ThermoCEX formed from Sepharose CL-6B (A2), Superose 6 Prep Grade (B2) and Superose 12 Prep Grade (C1) under identical conditions displayed the best combination of thermoresponsiveness (qmax,50°C/qmax,10°C ratios of 3.3, 2.2 and 2.8 for supports 'A2', 'B2' and 'C1' respectively) and lactoferrin binding capacity (qmax,50°C ∼ 56, 29 and 45 mg/g for supports 'A2', 'B2' and 'C1' respectively), and were selected for TCZR chromatography. With the cooling zone in its parked position, thermoCEX filled columns were saturated with lactoferrin at a binding temperature of 35°C, washed with equilibration buffer, before initiating the first of 8 or 12 consecutive movements of the cooling zone along the column at 0.1 mm/s. A reduction in particle diameter (A2→B2) enhanced lactoferrin desorption, while one in pore diameter (B2→C1) had the opposite effect. In subsequent TCZR experiments conducted with thermoCEX 'B2' columns continuously fed with lactoferrin or 'lactoferrin+bovine serum albumin' whilst simultaneously moving the cooling zone, lactoferrin was
Godman, Brian; Malmström, Rickard E.; Diogene, Eduardo; Jayathissa, Sisira; McTaggart, Stuart; Cars, Thomas; Alvarez-Madrazo, Samantha; Baumgärtel, Christoph; Brzezinska, Anna; Bucsics, Anna; Campbell, Stephen; Eriksson, Irene; Finlayson, Alexander; Fürst, Jurij; Garuoliene, Kristina; Gutiérrez-Ibarluzea, Iñaki; Hviding, Krystyna; Herholz, Harald; Joppi, Roberta; Kalaba, Marija; Laius, Ott; Malinowska, Kamila; Pedersen, Hanne B.; Markovic-Pekovic, Vanda; Piessnegger, Jutta; Selke, Gisbert; Sermet, Catherine; Spillane, Susan; Tomek, Dominik; Vončina, Luka; Vlahović-Palčevski, Vera; Wale, Janet; Wladysiuk, Magdalena; van Woerkom, Menno; Zara, Corinne; Gustafsson, Lars L.
2014-01-01
Background: There are potential conflicts between authorities and companies over funding new premium-priced drugs, especially where there are effectiveness, safety and/or budget concerns. Dabigatran, a new oral anticoagulant for the prevention of stroke in patients with non-valvular atrial fibrillation (AF), exemplifies this issue. Whilst new effective treatments are needed, there are issues in the elderly with dabigatran due to variable drug concentrations, no known antidote and dependence on renal elimination. Published studies showed dabigatran to be cost-effective but there are budget concerns given the prevalence of AF. These concerns resulted in extensive activities pre- to post-launch to manage its introduction. Objective: To (i) review authority activities across countries, (ii) use the findings to develop new models to better manage the entry of new drugs, and (iii) review the implications based on post-launch activities. Methodology: (i) Descriptive review and appraisal of activities regarding dabigatran, (ii) development of guidance for key stakeholder groups through an iterative process, (iii) refining guidance following post-launch studies. Results: A plethora of activities was undertaken to manage dabigatran, including extensive pre-launch activities, risk sharing arrangements, prescribing restrictions and monitoring of prescribing post launch. Reimbursement has been denied in some countries due to concerns with its budget impact and/or excessive bleeding. Development of a new model and future guidance is proposed to better manage the entry of new drugs, centering on three pillars of pre-, peri-, and post-launch activities. Post-launch activities include increasing use of patient registries to monitor the safety and effectiveness of new drugs in clinical practice. Conclusion: Models for introducing new drugs are essential to optimize their prescribing, especially where concerns exist. Without such models, new drugs may be withdrawn prematurely and/or struggle for funding. PMID
Chow, Yvonne; Tu, Wang Yung; Wang, David; Ng, Daphne H P; Lee, Yuan Kun
2015-10-01
The microalga Dunaliella tertiolecta synthesizes intracellular glycerol as an osmoticum to counteract external osmotic pressure in high saline environments. The species has recently been found to release and accumulate extracellular glycerol, making it a suitable candidate for sustainable industrial glycerol production if a sufficiently high product titre yield can be achieved. While macronutrients such as nitrogen and phosphorus are essential and well understood, this study seeks to understand the influence of the micronutrient profile on glycerol production. The effects of metallic elements calcium, magnesium, manganese, zinc, cobalt, copper, and iron, as well as boron, on glycerol production as well as cell growth were quantified. The relationship between cell density and glycerol productivity was also determined. Statistically, manganese recorded the highest improvement in glycerol production as well as cell growth. Further experiments showed that manganese availability was associated with higher superoxide dismutase formation, thus suggesting that glycerol production is negatively affected by oxidative stress and the manganese bound form of this enzyme is required in order to counteract reactive oxygen species in the cells. A minimum concentration of 8.25 × 10(-5) g L(-1) manganese was sufficient to overcome this problem and achieve 10 g L(-1) extracellular glycerol, compared to 4 g L(-1) without the addition of manganese. Unlike cell growth, extracellular glycerol production was found to be negatively affected by the amount of calcium present in the normal growth medium, most likely due to the lower cell permeability at high calcium concentrations. The inhibitory effects of iron also affected extracellular glycerol production more significantly than cell growth and several antagonistic interaction effects between various micronutrients were observed. This study indicates how the optimization of these small amounts of nutrients in a two
NASA Astrophysics Data System (ADS)
Klein, Abel
2013-11-01
We prove a unique continuation principle for spectral projections of Schrödinger operators. We consider a Schrödinger operator H = -Δ + V on , and let H_Λ denote its restriction to a finite box Λ with either Dirichlet or periodic boundary condition. We prove unique continuation estimates of the type χ_I(H_Λ) W χ_I(H_Λ) ≥ κ χ_I(H_Λ) with κ > 0 for appropriate potentials W ≥ 0 and intervals I. As an application, we obtain optimal Wegner estimates at all energies for a class of non-ergodic random Schrödinger operators with alloy-type random potentials ('crooked' Anderson Hamiltonians). We also prove optimal Wegner estimates at the bottom of the spectrum with the expected dependence on the disorder (the Wegner estimate improves as the disorder increases), a new result even for the usual (ergodic) Anderson Hamiltonian. These estimates are applied to prove localization at high disorder for Anderson Hamiltonians in a fixed interval at the bottom of the spectrum.
Zhang, Jilie; Zhang, Huaguang; Liu, Zhenwei; Wang, Yingchun
2015-07-01
In this paper, we consider the problem of developing a controller for continuous-time nonlinear systems where the equations governing the system are unknown. Using measurements, two new online schemes are presented for synthesizing a controller without building or assuming a model for the system, based on adaptive dynamic programming (ADP). To circumvent the requirement of prior knowledge of the system, a precompensator is introduced to construct an augmented system. The corresponding Hamilton-Jacobi-Bellman (HJB) equation is solved by adaptive dynamic programming, which combines a least-squares technique, a neural network approximator and a policy iteration (PI) algorithm. The main idea of our method is to sample the state, state derivative and input in order to update the weights of the neural network by the least-squares technique. The update process is implemented in the framework of PI. Finally, several examples are given to illustrate the effectiveness of our schemes. PMID:25704057
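The least-squares-inside-policy-iteration idea can be illustrated on a toy problem. The sketch below is a deliberately simplified discrete-time scalar analogue (an assumed pedagogical stand-in, not the paper's continuous-time neural-network scheme): the critic weight of a quadratic value function V(x) = w x² is fit by least squares from sampled transitions, and the policy gain is then improved.

```python
# Toy policy iteration with least-squares policy evaluation on a scalar
# linear-quadratic problem. The plant (A, B) is treated as unknown for the
# evaluation step, which uses only sampled transitions; the improvement step
# uses B directly here for brevity (a fully model-free scheme estimates it).
import numpy as np

A, B = 0.9, 0.5        # "unknown" plant, used only to generate samples

def sample_transitions(k, n=50, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, n)
    u = -k * x                          # current policy u = -k x
    return x, u, A * x + B * u          # measured next states

def policy_iteration(k0=0.0, sweeps=20):
    k = k0
    for _ in range(sweeps):
        x, u, x_next = sample_transitions(k)
        # Least-squares policy evaluation of the Bellman equation
        #   w x^2 = (x^2 + u^2) + w x_next^2  =>  w (x^2 - x_next^2) = cost.
        phi = x ** 2 - x_next ** 2
        w = phi.dot(x ** 2 + u ** 2) / phi.dot(phi)
        # Policy improvement: minimize u^2 + w (A x + B u)^2 over u.
        k = B * w * A / (1 + B ** 2 * w)
    return k, w

k, w = policy_iteration()
```

Each sweep alternates the two PI steps: a data-driven evaluation (the least-squares fit of w) and an improvement (the new gain k), mirroring the structure of the ADP schemes the abstract describes.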
Lu, Lei; Xin, Xin; Lu, Hang; Zhu, Liao-dong; Xie, Si-jian; Wu, Yong
2015-10-01
The mature aerobic granular sludge (AGS) was inoculated in a continuous-flow joint constructor reactor to treat low chemical oxygen demand/nitrogen (COD/N) ratio sewage. The effects of aeration intensity and hydraulic retention time (HRT) on the denitrification and phosphorus removal efficiencies and the granular sludge stabilization were investigated. When the aeration intensity was 300 mL x min(-1) (superficial air upflow velocity of 1.2 cm x s(-1)) and the HRT was 7.5 h, the average removal efficiencies of COD, TN and TP were 76.34%, 51.23% and 53.70%, respectively. The mixed liquor suspended solids (MLSS) concentration was only about 2 000 mg x L(-1), the sludge volume index (SVI) was below 50 mL x g(-1), and the AGS exhibited complete forms and good settling performance. Additionally, the low COD/N ratio sewage could promote the production of extracellular polymeric substances (EPS) by the AGS, and the PN proteins in the EPS played a pivotal role in the maintenance of AGS stabilization. PMID:26841612
NASA Astrophysics Data System (ADS)
Stackhouse, P. W.; Mikovitz, J. C.; Cox, S. J.; Zhang, T.; Perez, R.; Schlemmer, J.; Sengupta, M.; Knapp, K. R.
2014-12-01
As renewable energy systems become more prevalent, improved global, long-term, up-to-date records are needed to better understand and quantify the solar resource and its variability. Toward this end, a project involving NASA, DOE NREL, SUNY-Albany and the NOAA National Climatic Data Center (NCDC) was initiated to provide NREL with a solar resource mapping production system for improved depiction of global long-term solar resources that provides the capacity for continual updates. This new production system is made possible by the efforts of NOAA and NASA to completely reprocess the International Satellite Cloud Climatology Project (ISCCP) data set, which provides satellite visible and infrared radiances together with retrieved cloud and surface properties on a 3-hourly basis beginning from July 1983 at an effective 10 km resolution. Thus, working with SUNY and NCDC, NASA will develop and test an improved production system that will yield an operational production system for NREL to continually update the Earth's solar resource. In this presentation, we provide a general overview of the project together with samples of the new solar irradiance mapped data products and comparisons to surface measurements at various locations across the world. Here, a three-year prototype of the anticipated ISCCP data set, called GridSat, is used to assess the algorithms and demonstrate the production system. GridSat maps together cross-calibrated visible and IR reflectances from all the world's geosynchronous satellites at 10 km resolution every 3 hours. The results are shown and discussed in comparison to existing solar data products. Additionally, the solar irradiance values are compared to measurements from various Baseline Surface Radiation Network sites and other high-quality surface measurements. The statistics of the agreement between the measurements and the new satellite estimates are also reviewed. The team is now testing a beta release of the revised ISCCP data set through the NOAA
Dusek, Jaromir; Dohnal, Michal; Snehota, Michal; Sobotkova, Martina; Ray, Chittaranjan; Vogel, Tomas
2015-01-01
The fate of pesticides in tropical soils is still not understood as well as it is for soils in temperate regions. In this study, water flow and transport of bromide tracer and five pesticides (atrazine, imazaquin, sulfometuron methyl, S-metolachlor, and imidacloprid) through an undisturbed soil column of tropical Oxisol were analyzed using a one-dimensional numerical model. The numerical model is based on Richards' equation for solving water flow, and the advection-dispersion equation for solving solute transport. Data from a laboratory column leaching experiment were used in the uncertainty analysis using a global optimization methodology to evaluate the model's sensitivity to transport parameters. All pesticides were found to be relatively mobile (sorption distribution coefficients lower than 2 cm(3) g(-1)). Experimental data indicated significant non-conservative behavior of bromide tracer. All pesticides, with the exception of imidacloprid, were found less persistent (degradation half-lives smaller than 45 days). Three of the five pesticides (atrazine, sulfometuron methyl, and S-metolachlor) were better described by the linear kinetic sorption model, while the breakthrough curves of imazaquin and imidacloprid were more appropriately approximated using nonlinear instantaneous sorption. Sensitivity analysis suggested that the model is most sensitive to sorption distribution coefficient. The prediction limits contained most of the measured points of the experimental breakthrough curves, indicating adequate model concept and model structure for the description of transport processes in the soil column under study. Uncertainty analysis using a physically-based Monte Carlo modeling of pesticide fate and transport provides useful information for the evaluation of chemical leaching in Hawaii soils. PMID:25703186
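The advection-dispersion transport that underlies such breakthrough-curve fitting, here with linear equilibrium sorption folded into a retardation factor R (so R ∂c/∂t = D ∂²c/∂x² − v ∂c/∂x), can be stepped with a simple explicit scheme. The sketch below is a generic illustration; the grid and parameter values are made up, not fitted to the Oxisol column in the study.

```python
# Explicit finite-difference step for 1-D advection-dispersion with linear
# equilibrium sorption: central-difference dispersion, upwind advection.
# Retardation R = 1 + rho_b * Kd / theta (linear sorption isotherm).
import numpy as np

def step(c, dx, dt, v, D, R):
    c_new = c.copy()                 # far outlet node held at its old value
    c_new[1:-1] = c[1:-1] + (dt / R) * (
        D * (c[2:] - 2.0 * c[1:-1] + c[:-2]) / dx ** 2
        - v * (c[1:-1] - c[:-2]) / dx
    )
    return c_new

nx, dx, dt = 101, 0.5, 0.01          # 50 cm column; illustrative units (cm, d)
v, D, R = 5.0, 2.0, 2.0              # pore velocity, dispersion, retardation
c = np.zeros(nx)
for _ in range(1000):                # simulate to t = 10
    c[0] = 1.0                       # constant-concentration inlet boundary
    c = step(c, dx, dt, v, D, R)
c[0] = 1.0
# The retarded front sits near x = v * t / R = 25 cm; larger Kd (larger R)
# slows the front, which is why the fit is most sensitive to sorption.
```

The explicit scheme is stable here because both the diffusion number D·dt/(R·dx²) and the Courant number v·dt/(R·dx) are well below their limits; a production code would use an implicit or flux-limited scheme instead.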
Uriel, Nir; Morrison, Kerry A; Garan, Arthur R; Kato, Tomoko; Yuzefpolskaya, Melana; Latif, Farhana; Restaino, Susan W; Mancini, Donna M; Flannery, Margaret; Takayama, Hiroo; John, Ranjit; Colombo, Paolo C; Naka, Yoshifumi; Jorde, Ulrich P
2012-01-01
Objective: Develop a novel approach to optimizing continuous flow left ventricular assist device (CF-LVAD) function and diagnosing device malfunctions. Background: In CF-LVAD patients, the dynamic interaction of device speed, left and right ventricular decompression, and valve function can be assessed during an echocardiography-monitored speed ramp-test. Methods: We devised a unique ramp-test protocol to be routinely done at the time of discharge for speed optimization and/or if device malfunction was suspected. The patient’s left ventricular end diastolic dimension (LVEDD), frequency of aortic valve (AV) opening, valvular insufficiency, blood pressure, and CF-LVAD parameters were recorded at increments of 400 rpm from 8,000 rpm to 12,000 rpm. The results at each speed setting were plotted, and linear function slopes for LVEDD, PI, and power were calculated. Results: Fifty-two ramp-tests from 39 patients were prospectively collected and analyzed. Twenty-eight ramp-tests were performed for speed optimization, and speed was changed in 17 (61%) with a mean absolute value adjustment of 424±211 rpm. Seventeen patients had ramp-tests performed for suspected device thrombosis, and 10 tests were suspicious for device thrombosis; these patients were then treated with intensified anticoagulation and/or device exchange/emergent transplant. Device thrombosis was confirmed in 8/10 cases at the time of emergent device exchange or transplant. All patients with device thrombosis, but none of the remaining patients, had an LVEDD slope > −0.16. Conclusion: Ramp-tests facilitated optimal speed changes and device malfunction detection, and may be used to monitor the effects of therapeutic interventions and need for surgical intervention in CF-LVAD patients. PMID:23040584
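The slope calculation described above reduces to a linear fit of LVEDD against the stepped speed settings. A sketch follows; the LVEDD values are invented, and only the −0.16 cutoff comes from the abstract:

```python
import numpy as np

def lvedd_slope(speeds_rpm, lvedd_cm):
    """Linear slope of LVEDD per speed step, as plotted in the ramp-test protocol."""
    steps = np.arange(len(speeds_rpm))
    slope, _ = np.polyfit(steps, lvedd_cm, 1)
    return slope

speeds = np.arange(8000, 12001, 400)            # 8,000 to 12,000 rpm in 400-rpm steps
unloading = np.linspace(6.5, 4.0, len(speeds))  # normal pump: LV decompresses with speed
no_change = np.full(len(speeds), 6.4)           # thrombosed pump: LVEDD barely moves

assert lvedd_slope(speeds, unloading) < -0.16   # normal unloading
assert lvedd_slope(speeds, no_change) > -0.16   # suspicious for device thrombosis
```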
NASA Astrophysics Data System (ADS)
Gleason, D. M.
An optimally estimated earth gravity model (EGM), consisting of a set of geopotential coefficients through a maximum degree and order of 360, has been created from a global set of 259,200 30 arc min by 30 arc min surface mean gravity anomalies. The model is optimal in the sense that its derivation follows the principles of least-squares collocation, which results in the coefficients' error variance/covariance matrix having a minimal trace value. This paper presents: (1) an overview of the mathematical and geodetic principles behind the construction of the model, (2) a discussion on the practical concerns and problems associated with the implementation of these principles on a present-day high speed computer, (3) a brief description of the global 30 arc min input mean anomaly file used, (4) an analysis of the statistical properties of the coefficients and their accuracies, and (5) a prognosis for the future.
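A hedged sketch of the least-squares collocation estimate underlying such a model: the signal s at prediction points is estimated from noisy observations t as s_hat = C_st (C_tt + D)^-1 t, the linear estimate whose error covariance has minimal trace. The Gaussian covariance function and all numerical values below are illustrative assumptions, not the model's actual covariances:

```python
import numpy as np

def collocate(x_obs, t_obs, x_pred, noise_var, corr_len=1.0, signal_var=1.0):
    """Least-squares collocation: s_hat = C_st @ (C_tt + D)^-1 @ t."""
    # Illustrative Gaussian covariance between point sets a and b.
    cov = lambda a, b: signal_var * np.exp(-((a[:, None] - b[None, :]) / corr_len) ** 2)
    C_tt = cov(x_obs, x_obs) + noise_var * np.eye(len(x_obs))   # + noise covariance D
    C_st = cov(x_pred, x_obs)
    return C_st @ np.linalg.solve(C_tt, t_obs)

# Recover a smooth signal at an unobserved point from noisy-free samples.
x = np.linspace(0.0, 5.0, 20)
pred = collocate(x, np.sin(x), np.array([2.5]), noise_var=1e-4)
```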
NASA Astrophysics Data System (ADS)
Lanning, Oliver J.; Habershon, Scott; Harris, Kenneth D. M.; Johnston, Roy L.; Kariuki, Benson M.; Tedesco, Emilio; Turner, Giles W.
2000-02-01
Two global optimization problems of current interest in solid state sciences are crystal structure prediction (optimization of structure with respect to computed energy) and direct-space techniques for crystal structure solution from powder diffraction data (optimization of structure with respect to R-factor). As the energy and R-factor hypersurfaces are based on the same parameter space but have differing characteristics, there is a direct opportunity to blend these approaches together in the definition of a hybrid hypersurface. A strategy for combining R-factor and energy within a direct-space method for structure solution from powder diffraction data is proposed. Normalized energy and normalized R-factor functions are defined, and are combined using a sliding weighting function to give a hybrid figure-of-merit G, which behaves as energy when energy is high (thus using energy to guide the calculation towards energetically plausible structures) and gives increasing importance to R-factor as lower energies are approached. This concept of a `guiding function' may be widely applicable in other global optimization problems.
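The 'guiding function' idea above can be sketched as a weighted blend of the normalized energy and normalized R-factor, with a sliding weight that favours energy when energy is high and shifts toward R-factor as lower energies are approached. The logistic weight, its crossover, and its steepness below are illustrative assumptions, not the authors' exact functional form:

```python
import numpy as np

def guiding_g(e_norm, r_norm, crossover=0.5, steepness=10.0):
    """Hybrid figure-of-merit G from normalized energy and R-factor in [0, 1]."""
    # Sliding weight: w -> 1 at high energy (energy-guided), w -> 0 at low energy.
    w = 1.0 / (1.0 + np.exp(-steepness * (e_norm - crossover)))
    return w * e_norm + (1.0 - w) * r_norm
```

At high energy G tracks the energy (steering toward plausible structures); at low energy G tracks the R-factor (steering toward agreement with the diffraction data).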
Zhu, Guangzhi; Zhu, Xiao; Zhu, Changhong; Shang, Jianli
2012-09-10
This article presents the fundamental principles of operational performance of a continuous wave (cw) thin-disk laser with multiple disks in one resonator. Based on the model of an end-pumped Yb:YAG thin-disk laser with nonuniform temperature distribution, the effect of the multiple disks in one resonator is considered. The analytic expressions are derived to analyze the laser output intensity, laser intensity in the resonator, threshold intensity, and the optical efficiency of a thin-disk laser with multiple disks arranged in series. The dependence of output coupler reflectivity and the number of thin disks on various parameters are investigated, which are useful to determine the optimal output coupler reflectivity of the thin-disk lasers and control the laser intensity in the resonator. PMID:22968282
NASA Astrophysics Data System (ADS)
Reynolds, C. A.; Jackson, T. J.; Rawls, W. J.
2000-12-01
Spatial soil water-holding capacities were estimated for the Food and Agriculture Organization (FAO) digital Soil Map of the World (SMW) by employing continuous pedotransfer functions (PTF) within global pedon databases and linking these results to the SMW. The procedure first estimated representative soil properties for the FAO soil units by statistical analyses and taxotransfer depth algorithms [Food and Agriculture Organization (FAO), 1996]. The representative soil properties estimated for two layers of depths (0-30 and 30-100 cm) included particle-size distribution, dominant soil texture, organic carbon content, coarse fragments, bulk density, and porosity. After representative soil properties for the FAO soil units were estimated, these values were substituted into three different PTF models by Rawls et al. [1982], Saxton et al. [1986], and Batjes [1996a]. The Saxton PTF model was finally selected to calculate available water content because it only required particle-size distribution data and its results closely agreed with the Rawls and Batjes PTF models that used both particle-size distribution and organic matter data. Soil water-holding capacities were then estimated by multiplying the available water content by the soil layer thickness and integrating over an effective crop root depth of 1 m or less (i.e., less where shallow impermeable layers were encountered) and another soil depth data layer of 2.5 m or less.
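The final integration step described above (available water content times layer thickness, summed over the effective root depth) can be sketched as follows. The AWC values are invented; a real run would take them from a pedotransfer function such as Saxton's:

```python
def water_holding_capacity(layers, root_depth_cm):
    """layers: list of (top_cm, bottom_cm, awc) tuples ordered by depth,
    where awc is available water content in cm water per cm soil."""
    whc = 0.0
    for top, bottom, awc in layers:
        eff = max(0.0, min(bottom, root_depth_cm) - top)  # thickness inside root zone
        whc += awc * eff
    return whc                                            # cm of plant-available water

# The two SMW depth layers (0-30 and 30-100 cm) with illustrative AWC values.
layers = [(0, 30, 0.15), (30, 100, 0.12)]
print(water_holding_capacity(layers, 100))                # 0.15*30 + 0.12*70 = 12.9 cm
```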
Kameyama, Shumpei; Imaki, Masaharu; Hirano, Yoshihito; Ueno, Shinichi; Kawakami, Shuji; Sakaizawa, Daisuke; Kimura, Toshiyoshi; Nakajima, Masakatsu
2011-05-10
A feasibility study is carried out on a 1.6 μm continuous-wave modulation laser absorption spectrometer system for measurement of global CO2 concentration from a satellite. The studies are performed for wavelength selection and both systematic and random error analyses. The systematic error in the differential absorption optical depth (DAOD) is mainly caused by the temperature estimation error, surface pressure estimation error, altitude estimation error, and ON wavelength instability. The systematic errors caused by unwanted backscattering from background aerosols and dust aerosols can be reduced to less than 0.26% by using a modulation frequency of around 200 kHz, when backscatter coefficients of these unwanted backscattering have a simple profile on altitude. The influence of backscattering from cirrus clouds is much larger than that of dust aerosols. The transmission power required to reduce the random error in the DAOD to 0.26% is determined by the signal-to-noise ratio and the carrier-to-noise ratio calculations. For a satellite altitude of 400 km and receiving aperture diameter of 1 m, the required transmission power is approximately 18 W and 70 W when albedo is 0.31 and 0.08, respectively; the total measurement time in this case is 4 s, which corresponds to a horizontal resolution of 28 km. PMID:21556107
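The error analysis above concerns the differential absorption optical depth. As a hedged sketch of the standard integrated-path retrieval (not the authors' processing chain), the one-way DAOD follows from the ratio of energy-normalized OFF- and ON-wavelength returns over the two-way path; the example powers are invented:

```python
import math

def daod(p_on, p_off, e_on=1.0, e_off=1.0):
    """One-way differential absorption optical depth from received powers p_on/p_off,
    normalized by transmitted energies e_on/e_off; the factor 1/2 accounts for the
    two-way (satellite-to-surface-and-back) path."""
    return 0.5 * math.log((p_off / e_off) / (p_on / e_on))

# ON return attenuated by exp(-2) relative to OFF implies a one-way DAOD of 1.
print(daod(math.exp(-2.0), 1.0))  # 1.0
```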
Arun, C; Sivashanmugam, P
2015-10-01
Reuse and management of organic solid waste reduce its environmental impact on human health and generate economic value through products for current and novel applications. Garbage enzyme is one such product, produced by fermentation of organic solid waste, and it can be used as liquid fertilizer, as an antimicrobial agent, for treatment of domestic wastewater, for municipal and industrial sludge treatment, etc. Semi-continuous production of garbage enzyme in large quantities, in minimal time, and at lower cost is needed to cater for the treatment of increasing quantities of industrial waste activated sludge. This necessitates a parameter for monitoring and control in scaling up the current process on a semi-continuous basis. In the present study an RP-HPLC (Reversed Phase-High Performance Liquid Chromatography) method is used for quantification of standard organic acids at optimized conditions: a column oven temperature of 30°C, pH 2.7, and a 0.7 ml/min flow rate of the mobile phase (potassium dihydrogen phosphate in water at 50 mM concentration). Garbage enzyme solutions collected at 15, 30, 45, 60, 75 and 90 days were used as samples to determine the concentrations of organic acids. Among these, the 90th-day sample showed the maximum concentration, 78.14 g/l of acetic acid in garbage enzyme, whereas the concentrations of the other organic acids decreased when compared to the 15th-day sample. This result confirms that matured garbage enzyme contains a higher concentration of acetic acid, which can thus be used as a monitoring parameter for semi-continuous production of garbage enzyme at large scale. PMID:26205805
NASA Astrophysics Data System (ADS)
Auluck, S. K. H.
2014-12-01
Dense plasma focus (DPF) is known to produce highly energetic ions, electrons and plasma environment which can be used for breeding short-lived isotopes, plasma nanotechnology and other material processing applications. Commercial utilization of DPF in such areas would need a design tool that can be deployed in an automatic search for the best possible device configuration for a given application. The recently revisited (Auluck 2013 Phys. Plasmas 20 112501) Gratton-Vargas (GV) two-dimensional analytical snowplow model of plasma focus provides a numerical formula for dynamic inductance of a Mather-type plasma focus fitted to thousands of automated computations, which enables the construction of such a design tool. This inductance formula is utilized in the present work to explore global optimization, based on first-principles optimality criteria, in a four-dimensional parameter-subspace of the zero-resistance GV model. The optimization process is shown to reproduce the empirically observed constancy of the drive parameter over eight decades in capacitor bank energy. The optimized geometry of plasma focus normalized to the anode radius is shown to be independent of voltage, while the optimized anode radius is shown to be related to capacitor bank inductance.
Melby, Melissa K; Loh, Lawrence C; Evert, Jessica; Prater, Christopher; Lin, Henry; Khan, Omar A
2016-05-01
Increasing demand for global health education in medical training has driven the growth of educational programs predicated on a model of short-term medical service abroad. Almost two-thirds of matriculating medical students expect to participate in a global health experience during medical school, continuing into residency and early careers. Despite positive intent, such short-term experiences in global health (STEGHs) may exacerbate global health inequities and even cause harm. Growing out of the "medical missions" tradition, contemporary participation continues to evolve. Ethical concerns and other disciplinary approaches, such as public health and anthropology, can be incorporated to increase effectiveness and sustainability, and to shift the culture of STEGHs from focusing on trainees and their home institutions to also considering benefits in host communities and nurturing partnerships. The authors propose four core principles to guide ethical development of educational STEGHs: (1) skills building in cross-cultural effectiveness and cultural humility, (2) bidirectional participatory relationships, (3) local capacity building, and (4) long-term sustainability. Application of these principles highlights the need for assessment of STEGHs: data collection that allows transparent comparisons, standards of quality, bidirectionality of agreements, defined curricula, and ethics that meet both host and sending countries' standards and needs. To capture the enormous potential of STEGHs, a paradigm shift in the culture of STEGHs is needed to ensure that these experiences balance training level, personal competencies, medical and cross-cultural ethics, and educational objectives to minimize harm and maximize benefits for all involved. PMID:26630608
NASA Astrophysics Data System (ADS)
Chandra, Rishabh
Partial differential equation-constrained combinatorial optimization (PDECCO) problems are a mixture of continuous and discrete optimization problems. PDECCO problems have discrete controls, but since the partial differential equations (PDE) are continuous, the optimization space is continuous as well. Such problems have several applications, such as gas/water network optimization, traffic optimization, micro-chip cooling optimization, etc. Currently, no efficient classical algorithm which guarantees a global minimum for PDECCO problems exists. A new mapping has been developed that transforms PDECCO problems that have only linear PDEs as constraints into quadratic unconstrained binary optimization (QUBO) problems that can be solved using an adiabatic quantum optimizer (AQO). The mapping is efficient: it scales polynomially with the size of the PDECCO problem, requires only one PDE solve to form the QUBO problem, and, if the QUBO problem is solved correctly and efficiently on an AQO, guarantees a global optimal solution for the original PDECCO problem.
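The target of the mapping described above is the QUBO form: minimize x^T Q x over binary x. As an illustrative sketch, the brute-force search below stands in for the adiabatic quantum optimizer, and the 3-variable Q matrix is an invented example, not one produced by the PDECCO mapping:

```python
import itertools

def solve_qubo(Q):
    """Exhaustively minimize x^T Q x over binary vectors x (feasible for small n)."""
    n = len(Q)
    best_x, best_e = None, float("inf")
    for bits in itertools.product((0, 1), repeat=n):
        e = sum(Q[i][j] * bits[i] * bits[j] for i in range(n) for j in range(n))
        if e < best_e:
            best_x, best_e = bits, e
    return best_x, best_e

Q = [[-1, 2, 0],   # diagonal entries act as linear terms since x*x = x for binaries
     [0, -1, 2],   # off-diagonal entries penalize turning on adjacent variables
     [0, 0, -1]]
print(solve_qubo(Q))  # ((1, 0, 1), -2)
```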
NASA Astrophysics Data System (ADS)
Fang, Xiaohua; Ma, Yingjuan; Brain, David; Dong, Yaxue; Lillis, Robert
2015-12-01
We present a time-dependent MHD study of the controlling effects of the Mars crustal field on atmospheric escape. We calculate globally integrated planetary ion loss rates under quiet solar conditions considering the continuous rotation of crustal anomalies with the planet. It is found that the rotating crustal field plays an important role in controlling atmospheric escape. Significant time variation of ˜20% and ˜50% is observed during the entire rotation period for O+ and for O2+ and CO2+, respectively. The control is exerted mainly through two processes. First, the crustal magnetic pressure over the subsolar regime controls solar wind penetration and mass loading and therefore the escaping planetary ion source. There is a strong negative correlation between the magnetic pressure and ion loss, with a time lag of <1 h for O+ and ˜2.5 h for O2+ and CO2+. Second, the crustal magnetic pressure near the terminator region controls the cross-section area between the induced magnetospheric boundary and 100 km altitude at the terminator. The change in day-night connection regulates the extent to which planetary ions created on the dayside can be ultimately carried away by the solar wind and escape Mars. There is a strong positive correlation between the cross-section area and ion loss, with no significant time lag. As the planet rotates, the dayside process and the terminator process work together to control the total amount of escaping planetary ions. However, their relative importance changes with the local time of the strong crustal field region.
Kanjilal, Baishali; Noshadi, Iman; Bautista, Eddy J; Srivastava, Ranjan; Parnas, Richard S
2015-03-01
1,3-propanediol (1,3-PD) was produced with a robust fermentation process using waste glycerol feedstock from biodiesel production and a soil-based bacterial inoculum. An iterative inoculation method was developed to achieve independence from soil and selectively breed bacterial populations capable of glycerol metabolism to 1,3-PD. The inoculum showed high resistance to impurities in the feedstock. 1,3-PD selectivity and yield in batch fermentations was optimized by appropriate nutrient compositions and pH control. The batch yield of 1,3-PD was maximized to ~0.7 mol/mol for industrial glycerol which was higher than that for pure glycerin. 16S rDNA sequencing results show a systematic selective enrichment of 1,3-PD producing bacteria with iterative inoculation and subsequent process control. A statistical design of experiments was carried out on industrial glycerol batches to optimize conditions, which were used to run two continuous flow stirred-tank reactor (CSTR) experiments over a period of >500 h each. A detailed analysis of steady states at three dilution rates is presented. Enhanced specific 1,3-PD productivity was observed with faster dilution rates due to lower levels of solvent degeneration. 1,3-PD productivity, specific productivity, and yield of 1.1 g/l hr, 1.5 g/g hr, and 0.6 mol/mol of glycerol were obtained at a dilution rate of 0.1 h-1, which is bettered only by pure strains in pure glycerin feeds. PMID:25480510
Cruz-Monteagudo, Maykel; Borges, Fernanda; Cordeiro, M Natália D S
2008-11-15
Up to now, very few reports have been published concerning the application of multiobjective optimization (MOOP) techniques to quantitative structure-activity relationship (QSAR) studies. However, none reports the optimization of objectives related directly to the desired pharmaceutical profile of the drug. In this work, for the first time, a MOOP method based on Derringer's desirability function is proposed that allows conducting global QSAR studies considering simultaneously the pharmacological, pharmacokinetic and toxicological profile of a set of molecule candidates. The usefulness of the method is demonstrated by applying it to the simultaneous optimization of the analgesic, antiinflammatory, and ulcerogenic properties of a library of fifteen 3-(3-methylphenyl)-2-substituted amino-3H-quinazolin-4-one compounds. The levels of the predictor variables concurrently producing the best possible compromise between these properties are found and used to design a set of new optimized drug candidates. Our results also suggest the relevant role of the bulkiness of alkyl substituents on the C-2 position of the quinazoline ring over the ulcerogenic properties for this family of compounds. Finally, and most importantly, the desirability-based MOOP method proposed is a valuable tool and shall aid in the future rational design of novel successful drugs. PMID:18452123
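The Derringer-desirability aggregation underlying the MOOP approach above can be sketched as follows: each predicted property is mapped to a desirability d in [0, 1] (the larger-is-better transform is shown), and candidates are ranked by the geometric mean D of their desirabilities. The targets and property values below are invented, not from the study:

```python
import math

def desirability(y, low, high, s=1.0):
    """Derringer one-sided (larger-is-better) desirability of response y."""
    if y <= low:
        return 0.0
    if y >= high:
        return 1.0
    return ((y - low) / (high - low)) ** s

def overall_d(props, specs):
    """Geometric mean of per-property desirabilities; zero if any property fails."""
    ds = [desirability(y, lo, hi) for y, (lo, hi) in zip(props, specs)]
    return math.prod(ds) ** (1.0 / len(ds))

# Two hypothetical properties, each halfway to its target: D = sqrt(0.5 * 0.5) = 0.5.
print(overall_d([5.0, 0.5], [(0, 10), (0, 1)]))  # 0.5
```

The geometric mean enforces the compromise: a candidate that completely fails any single property gets D = 0 regardless of how good the others are.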
Hyde, Jason R; Bourne, Richard A; Noda, Isao; Stephenson, Phil; Poliakoff, Martyn
2004-11-01
A new approach for optimization and monitoring of continuous reactions has been developed using 2D correlation methods for the analysis of GC data (2DCOR-GC). 2DCOR-GC maps are obtained following perturbation of the system that allow the effect of changing reaction parameters such as time, temperature, pressure, or concentration to be both monitored and sequenced with regard to changes in the raw GC data. In this paper, we describe the application of the 2DCOR-GC technique to monitoring the reverse water-gas shift reaction in scCO2. 2DCOR-GC is combined with FT-IR data to validate the methodology. We also report the application of 2DCOR-GC to probe the mechanism of the alkylation of m-cresol with isopropyl alcohol in scCO2 using Nafion SAC-13 as the catalyst. These results identify coeluting peaks that could easily be missed without exhaustive method development. PMID:15516110
Assured Optimism in a Scottish Girls' School: Habitus and the (Re)production of Global Privilege
ERIC Educational Resources Information Center
Forbes, Joan; Lingard, Bob
2015-01-01
This paper examines how high levels of social-cultural connectedness and academic excellence, inflected by gender and social class, constitute a particular school habitus of "assured optimism" at an elite Scottish girls' school. In Bourdieuian terms, Dalrymple is a "forcing ground" for the "intense cultivation"…
Castillo, Edward; Castillo, Richard; Fuentes, David; Guerrero, Thomas
2014-01-01
Purpose: Block matching is a well-known strategy for estimating corresponding voxel locations between a pair of images according to an image similarity metric. Though robust to issues such as image noise and large magnitude voxel displacements, the estimated point matches are not guaranteed to be spatially accurate. However, the underlying optimization problem solved by the block matching procedure is similar in structure to the class of optimization problem associated with B-spline based registration methods. By exploiting this relationship, the authors derive a numerical method for computing a global minimizer to a constrained B-spline registration problem that incorporates the robustness of block matching with the global smoothness properties inherent to B-spline parameterization. Methods: The method reformulates the traditional B-spline registration problem as a basis pursuit problem describing the minimal l1-perturbation to block match pairs required to produce a B-spline fitting error within a given tolerance. The sparsity pattern of the optimal perturbation then defines a voxel point cloud subset on which the B-spline fit is a global minimizer to a constrained variant of the B-spline registration problem. As opposed to traditional B-spline algorithms, the optimization step involving the actual image data is addressed by block matching. Results: The performance of the method is measured in terms of spatial accuracy using ten inhale/exhale thoracic CT image pairs (available for download at www.dir-lab.com) obtained from the COPDgene dataset and corresponding sets of expert-determined landmark point pairs. The results of the validation procedure demonstrate that the method can achieve a high spatial accuracy on a significantly complex image set. Conclusions: The proposed methodology is demonstrated to achieve a high spatial accuracy and is generalizable in that it can employ any displacement field parameterization described as a least squares fit to block match
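The block-matching step the method above builds on can be sketched in toy form: for a block in the fixed image, find the displacement minimizing the sum of squared differences within a search window of the moving image. The images, block size, and search radius are invented; real registrations operate on 3D volumes with more robust similarity metrics:

```python
import numpy as np

def match_block(fixed, moving, top, left, size=8, search=4):
    """Return (dy, dx) minimizing SSD between a fixed-image block and the moving image."""
    block = fixed[top:top + size, left:left + size]
    best, best_ssd = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if 0 <= y and 0 <= x and y + size <= moving.shape[0] and x + size <= moving.shape[1]:
                ssd = np.sum((moving[y:y + size, x:x + size] - block) ** 2)
                if ssd < best_ssd:
                    best, best_ssd = (dy, dx), ssd
    return best

# A known rigid shift of a random image is recovered exactly.
rng = np.random.default_rng(0)
fixed = rng.random((32, 32))
moving = np.roll(fixed, (2, 3), axis=(0, 1))
print(match_block(fixed, moving, 10, 10))  # (2, 3)
```

Such per-block estimates are exactly the "block match pairs" that the basis pursuit formulation then perturbs minimally in the l1 sense to admit a smooth B-spline fit.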
Chen, Tinggui; Xiao, Renbin
2014-01-01
Artificial bee colony (ABC) algorithm, inspired by the intelligent foraging behavior of honey bees, was proposed by Karaboga. It has been shown to be superior to some conventional intelligent algorithms such as genetic algorithm (GA), ant colony optimization (ACO), and particle swarm optimization (PSO). However, the ABC still has some limitations. For example, ABC can easily get trapped in the local optimum when handling functions that have a narrow curving valley, a high eccentric ellipse, or complex multimodal functions. As a result, we proposed an enhanced ABC algorithm called EABC by introducing a self-adaptive searching strategy and artificial immune network operators to improve the exploitation and exploration. The simulation results tested on a suite of unimodal or multimodal benchmark functions illustrate that the EABC algorithm outperforms ACO, PSO, and the basic ABC in most of the experiments. PMID:24772023
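A minimal sketch of the basic ABC loop the EABC variant builds on: employed and onlooker bees perturb food sources along one dimension via v = x_i + phi*(x_i - x_k), onlookers pick sources proportionally to fitness, and scouts replace exhausted sources. All parameters and the sphere test function are illustrative, not from the paper:

```python
import random

def abc_minimize(f, dim, bounds, n_sources=20, limit=30, iters=200, seed=0):
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_sources)]
    F = [f(x) for x in X]
    trials = [0] * n_sources
    fit = lambda v: 1.0 / (1.0 + v) if v >= 0 else 1.0 + abs(v)

    def try_neighbor(i):
        # v_d = x_id + phi * (x_id - x_kd) for a random partner k and dimension d.
        k = rng.choice([j for j in range(n_sources) if j != i])
        d = rng.randrange(dim)
        v = X[i][:]
        v[d] = min(hi, max(lo, v[d] + rng.uniform(-1, 1) * (v[d] - X[k][d])))
        fv = f(v)
        if fv < F[i]:
            X[i], F[i], trials[i] = v, fv, 0    # greedy selection
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_sources):              # employed bees
            try_neighbor(i)
        probs = [fit(v) for v in F]
        total = sum(probs)
        for _ in range(n_sources):              # onlookers: fitness-proportional choice
            r, acc = rng.uniform(0, total), 0.0
            for i, p in enumerate(probs):
                acc += p
                if acc >= r:
                    try_neighbor(i)
                    break
        for i in range(n_sources):              # scouts abandon exhausted sources
            if trials[i] > limit:
                X[i] = [rng.uniform(lo, hi) for _ in range(dim)]
                F[i], trials[i] = f(X[i]), 0
    return min(zip(F, X))

best_f, best_x = abc_minimize(lambda x: sum(v * v for v in x), dim=3, bounds=(-5, 5))
```

On the 3-dimensional sphere function this converges close to the global minimum at the origin; the "narrow curving valley" failure mode the abstract mentions appears on functions such as Rosenbrock's.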
Wang, Nan; Khan, Wajahatullah; Smith, Donald L.
2012-01-01
Lipo-chitooligosaccharides (LCOs), signal compounds produced by N2-fixing rhizobacteria after isoflavone induction, initiate nodule formation in host legumes. Given LCOs' structural similarity to pathogen-response-eliciting chitin oligomers, foliar application of LCOs was tested for ability to induce stress-related genes under optimal growth conditions. In order to study the effects of LCO foliar spray under stressed conditions, soybean (Glycine max) seedlings grown at optimal temperature were transferred to sub-optimal temperature. After a 5-day acclimation period, the first trifoliate leaves were sprayed with 10−7 M LCO (NodBj-V (C18∶1, MeFuc)) purified from genistein-induced Bradyrhizobium japonicum culture, and harvested at 0 and 48 h following treatment. Microarray analysis was performed using Affymetrix GeneChip® Soybean Genome Arrays. Compared to the control at 48 h after LCO treatment, a total of 147 genes were differentially expressed as a result of LCO treatment, including a number of stress-related genes and transcription factors. In addition, during the 48 h time period following foliar spray application, over a thousand genes exhibited differential expression, including hundreds of those specific to the LCO-treated plants. Our results indicated that the dynamic soybean foliar transcriptome was highly responsive to LCO treatment. Quantitative real-time PCR (qPCR) validated the microarray data. PMID:22348109
Tokarik, Monika; Sjöberg, Folke; Balik, Martin; Pafcuga, Igor; Broz, Ludomir
2013-01-01
This pilot trial aims at gaining support for the optimization of acute burn resuscitation through noninvasive continuous real-time hemodynamic monitoring using arterial pulse contour analysis. A group of 21 burned patients meeting preliminary criteria (age range 18-75 years with second- to third-degree burns and TBSA ≥10-75%) was randomized during 2010. Hemodynamic monitoring through lithium dilution cardiac output was used in 10 randomized patients (LiDCO group), whereas those without LiDCO monitoring were defined as the control group. The modified Brooke/Parkland formula as a starting resuscitative formula, balanced crystalloids as the initial solutions, and urine output of 0.5 ml/kg/hr as a crucial value of adequate intravascular filling were used in both groups. Additionally, the volume and vasopressor/inotropic support were based on dynamic preload parameters in the LiDCO group in the case of circulatory instability and oliguria. Statistical analysis was done using t-tests. Within the first 24 hours postburn, a significantly lower consumption of crystalloids was registered in the LiDCO group (P = .04). The fluid balance under LiDCO control in combination with hourly diuresis contributed to reducing the cumulative fluid balance by approximately 10% compared with fluid management based on standard monitoring parameters. The amount of applied solutions in the LiDCO group came closer to the Brooke formula, whereas the urine output was at the same level in both groups (0.8 ml/kg/hr). The new finding in this study is that when fluid resuscitation is based on arterial waveform analysis, the initial fluid volume provided was significantly lower than that delivered on the basis of physician-directed fluid resuscitation (by urine output and mean arterial pressure). PMID:23511280
Yan, Dahai; Peng, Zheng; Liu, Yuqiang; Li, Li; Huang, Qifei; Xie, Minghui; Wang, Qi
2015-01-01
The consumption of milk in China is increasing as living standards rapidly improve, and huge amounts of aseptic composite milk packaging waste are being generated. Aseptic composite packaging is composed of paper, polyethylene, and aluminum. It is difficult to separate the polyethylene and aluminum, so most of the waste is currently sent to landfill or incinerated with other municipal solid waste, meaning that enormous amounts of resources are wasted. A wet process technique for separating the aluminum and polyethylene from the composite materials after the paper had been removed from the original packaging waste was studied. The separation efficiency achieved using different separation reagents was compared, different separation mechanisms were explored, and the impacts of a range of parameters, such as the reagent concentration, temperature, and liquid-solid ratio, on the separation time and aluminum loss ratio were studied. Methanoic acid was found to be the optimal separation reagent, and the suitable conditions were a reagent concentration of 2-4 mol/L, a temperature of 60-80°C, and a liquid-solid ratio of 30 L/kg. These conditions allowed aluminum and polyethylene to be separated in less than 30 min, with an aluminum loss ratio of less than 3%. A mass balance was produced for the aluminum-polyethylene separation system, and a control technique was developed to keep the ion concentrations in the reaction system stable. This allowed a continuous industrial-scale process for separating aluminum and polyethylene to be developed, and a demonstration facility with a capacity of 50 t/d was built. The demonstration facility gave polyethylene and aluminum recovery rates of more than 98% and more than 72%, respectively. Separating 1 t of aluminum-polyethylene composite packaging material gave a profit of 1769 Yuan, meaning that an effective method for recycling aseptic composite packaging waste was achieved. PMID:25458854
Optimization of the design of a crucible for a SiC sublimation growth system using a global model
NASA Astrophysics Data System (ADS)
Chen, X. J.; Liu, L. J.; Tezuka, H.; Usuki, Y.; Kakimoto, K.
2008-04-01
Induction heating, the temperature field, and the growth rate for a silicon carbide sublimation growth system were calculated using a global simulation model. The effects of the crucible shape on the temperature distribution and growth rate were investigated. The thickness of the substrate holder, the distance between the powder and the substrate, and the angle between the crucible wall and the powder free surface were found to be important for the growth rate and crystal quality. Finally, a curved powder free surface was also studied. The results indicate that using a curved powder free surface is also an effective method for obtaining a higher growth rate.
Zumla, Alimuddin; Alagaili, Abdulaziz N; Cotten, Matthew; Azhar, Esam I
2016-01-01
Media and World Health Organization (WHO) attention on Zika virus transmission at the 2016 Rio Olympic Games and the 2015 Ebola virus outbreak in West Africa diverted the attention of global public health authorities from other lethal infectious diseases with epidemic potential. Mass gatherings such as the annual Hajj pilgrimage hosted by the Kingdom of Saudi Arabia attract huge crowds from all continents, creating high-risk conditions for the rapid global spread of infectious diseases. The highly lethal Middle East respiratory syndrome coronavirus (MERS-CoV) remains on the WHO list of top emerging diseases likely to cause major epidemics. The 2015 MERS-CoV outbreak in South Korea, imported from the Middle East by a South Korean businessman, in which 184 MERS cases including 33 deaths occurred within 2 months, was a wake-up call for the global community to refocus attention on MERS-CoV and other emerging and re-emerging infectious diseases with epidemic potential. The international donor community and Middle Eastern countries should make available resources for, and make a serious commitment to, taking forward a "One Health" global network for proactive surveillance, rapid detection, and prevention of MERS-CoV and other epidemic infectious disease threats. PMID:27604081
NASA Astrophysics Data System (ADS)
Kvaerna, Tormod; Gibbons, Steven; Fyen, Jan; Roth, Michael
2014-05-01
The IMS infrasound array I37NO near Bardufoss in northern Norway became operational in October 2013 and was certified on December 19, 2013. The 10-element array has an aperture of approximately 1.5 km and is deployed in low-lying woodland about 2.5 degrees north of the Arctic Circle. Its location in the European Arctic means that the array fills an important gap in the global IMS infrasound monitoring network. In addition, I37NO significantly extends the network of infrasound stations in northern Norway, Sweden, Finland, and Russia operated by NORSAR, the Swedish Institute for Space Physics, and the Kola Regional Seismological Center in Apatity. The geometry is based on the highly successful classical design for regional seismic arrays, with sensors arranged in two approximately concentric rings surrounding a central site. A 4-site subarray with an aperture of approximately 450 meters, comprising the central element and the inner ring of 3 sites, provides an excellent array response function and detection capability for relatively high frequency (2-4 Hz) signals. Such signals are usually generated by events at distances within 1000 km and often lack energy in the lower frequency bands for which the larger aperture arrays provide signal coherence. These so-called regional signals are of increasing importance in civil applications, and the need to characterize the infrasonic wavefield over these distances is increasingly important in the remote monitoring of natural hazards. I37NO will provide good characterization of Ground Truth industrial and military explosions in the region which are well-constrained by seismic data. The full array aperture provides excellent backazimuth and slowness resolution for lower frequency signals, and it is anticipated that I37NO will contribute significantly to the detection and association of signals on a global scale. Already within the first few months of operation, we have examples of high-quality recordings from meteors, accidental
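The array response function mentioned above can be sketched for a concentric-ring geometry like I37NO's. The coordinates below are a hypothetical 8-element layout (a central site, an inner ring of 3, and an outer ring of 4), not the real I37NO geometry; the response is the standard delay-and-sum beam magnitude |R(s)| = |(1/N) Σ_j exp(2πi f s·r_j)|.

```python
# Plane-wave array response sketch for a concentric-ring infrasound
# array. ASSUMPTION: the ring radii and element counts below are
# hypothetical, chosen only to echo the two-ring-plus-center design.
import numpy as np

def ring(radius_m, n, phase=0.0):
    """Coordinates (m) of n sensors evenly spaced on a ring."""
    ang = phase + 2 * np.pi * np.arange(n) / n
    return np.c_[radius_m * np.cos(ang), radius_m * np.sin(ang)]

# Center element + inner ring (3 sites) + outer ring (4 sites).
coords = np.vstack([[[0.0, 0.0]], ring(225.0, 3), ring(750.0, 4)])

def array_response(coords, slowness, freq_hz):
    """Beam magnitude at one slowness vector (s/m) and frequency (Hz)."""
    phases = 2 * np.pi * freq_hz * coords @ slowness
    return abs(np.exp(1j * phases).mean())

# With zero slowness mismatch all traces stack coherently: response = 1.
print(array_response(coords, np.array([0.0, 0.0]), 2.0))
```

Scanning `array_response` over a grid of slowness vectors at 2-4 Hz shows how the small inner subarray avoids the spatial aliasing that the full 1.5 km aperture would suffer at those frequencies.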
Hierarchical models and iterative optimization of hybrid systems
NASA Astrophysics Data System (ADS)
Rasina, Irina V.; Baturina, Olga V.; Nasatueva, Soelma N.
2016-06-01
A class of hybrid control systems based on a two-level discrete-continuous model is considered. The concept of this model was proposed and developed in preceding works as a concretization of the general multi-step system with related optimality conditions. A new iterative optimization procedure for such systems is developed, based on localization of the global optimality conditions via contraction of the control set.
Luthra, Suman A; Hodge, Ian M; Pikal, Michael J
2008-09-01
The purpose of this research was to investigate the effect of annealing on the molecular mobility in lyophilized glasses using differential scanning calorimetry (DSC) and isothermal microcalorimetry (IMC) techniques. A second objective that emerged was a systematic study of the unusual pre-T(g) thermal events that were observed during DSC warming scans after annealing. Aspartame lyophilized with three different excipients (sucrose, trehalose, and polyvinylpyrrolidone (PVP)) was studied. The aim of this work was to quantify the decrease in mobility in amorphous lyophilized aspartame formulations upon systematic postlyophilization annealing. DSC scans of the aspartame:sucrose formulation (T(g) = 73 degrees C) showed the presence of a pre-T(g) endotherm which disappeared upon annealing. Aspartame:trehalose (T(g) = 112 degrees C) and aspartame:PVP (T(g) = 100 degrees C) showed a broad exotherm before T(g), and annealing caused the appearance of endothermic peaks before T(g). This work also employed IMC to measure the global molecular mobility, represented by the structural relaxation time (tau(beta)), in both un-annealed and annealed formulations. The effect of annealing on the enthalpy relaxation of lyophilized glasses, as measured by DSC and IMC, was consistent with the behavior predicted using the Tool-Narayanaswamy-Moynihan (TNM) phenomenology (Luthra et al., 2007, in press). The results show that the systems annealed at T(g) -15 degrees C to T(g) -20 degrees C have the lowest molecular mobility. PMID:18200533
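Enthalpy relaxation of the kind measured above is commonly modeled with the Kohlrausch-Williams-Watts (KWW) stretched exponential, which also underlies the TNM phenomenology the abstract cites. The sketch below uses that standard form with illustrative parameters; tau and beta are not fitted values from this study.

```python
# KWW stretched-exponential relaxation sketch, phi(t) = exp(-(t/tau)^beta).
# ASSUMPTION: tau = 100 h and beta = 0.5 are illustrative parameters,
# not values measured for the aspartame formulations.
import math

def kww(t_hours, tau_hours, beta):
    """Fraction of the enthalpy relaxation remaining after time t."""
    return math.exp(-((t_hours / tau_hours) ** beta))

# At t = tau the remaining fraction is exp(-1), independent of beta.
phi = kww(100.0, 100.0, 0.5)
print(round(phi, 4))
```

Fitting tau and beta to IMC heat-flow data at several annealing temperatures is one way to quantify the mobility decrease the study reports for annealing at T(g) -15 to T(g) -20 degrees C.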