Strength Pareto particle swarm optimization and hybrid EA-PSO for multi-objective optimization.
Elhossini, Ahmed; Areibi, Shawki; Dony, Robert
2010-01-01
This paper proposes an efficient particle swarm optimization (PSO) technique that can handle multi-objective optimization problems. It is based on the strength Pareto approach originally used in evolutionary algorithms (EA). The modified particle swarm algorithm is used to build three hybrid EA-PSO algorithms for solving different multi-objective optimization problems. The algorithm and its hybrid forms are tested on seven benchmarks from the literature, and the results are compared with the strength Pareto evolutionary algorithm (SPEA2) and a competitive multi-objective PSO (MO-PSO) using several metrics. The proposed algorithm converges more slowly than the other algorithms but requires less CPU time. Combining PSO with evolutionary algorithms leads to superior hybrid algorithms that outperform SPEA2, MO-PSO, and the proposed strength Pareto PSO on different metrics.
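Strength Pareto methods rank candidate solutions by how many others they dominate. A minimal sketch of Pareto dominance and a SPEA-style strength assignment, for illustration only (this is the generic idea, not the authors' exact formulation):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def strength_values(pop):
    """SPEA-style strength: the number of solutions each individual dominates."""
    return [sum(dominates(p, q) for j, q in enumerate(pop) if j != i)
            for i, p in enumerate(pop)]

# Toy two-objective population: only (2, 2) dominates another point, (3, 3)
pts = [(1, 4), (2, 2), (3, 3), (4, 1)]
strengths = strength_values(pts)
```

In SPEA-like algorithms these strength values feed into the fitness used to select archive members and guide the swarm or population.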
Abdelkarim, Noha; Mohamed, Amr E; El-Garhy, Ahmed M; Dorrah, Hassen T
2016-01-01
The two-coupled distillation column process is a physically complicated system in many aspects. Specifically, the nested interrelationship between system inputs and outputs constitutes one of the significant challenges in control system design. Such a process is usually decoupled into several input/output pairings (loops) so that a single controller can be assigned to each loop. In this research, the Brain Emotional Learning Based Intelligent Controller (BELBIC) forms the control structure for each decoupled loop. The paper's main objective is to develop a parameterization technique for the decoupling and control schemes that ensures robust control behavior. To this end, the Bacterial Swarm Optimization (BSO) technique is utilized to minimize the sum of the integral time-weighted squared errors (ITSEs) over all control loops. This optimization technique is a hybrid of the Particle Swarm and Bacterial Foraging algorithms. According to the simulation results, the hybridized technique incurs a low computational burden while achieving high decoupling and control accuracy. Moreover, analysis of the proposed BELBIC shows a remarkable improvement in time-domain behavior and robustness over the conventional PID controller.
NASA Astrophysics Data System (ADS)
Ghanei, A.; Assareh, E.; Biglari, M.; Ghanbarzadeh, A.; Noghrehabadi, A. R.
2014-10-01
Many studies of shell and tube heat exchangers (STHE) have been performed, but the multi-objective particle swarm optimization (PSO) technique has not previously been applied to them. This paper presents a thermal-economic multi-objective optimization of an STHE using PSO. For the optimal design, the STHE was first modeled thermally using the effectiveness-number of transfer units (ε-NTU) method, while the Bell-Delaware procedure was applied to estimate its shell-side heat transfer coefficient and pressure drop. The multi-objective PSO (MOPSO) method was applied to maximize effectiveness (heat recovery) and minimize total cost as the two objective functions. The optimal designs form a set of multiple optimum solutions, called 'Pareto optimal solutions'. To assess the accuracy of the algorithm, a comparison is made between MOPSO and the non-dominated sorting genetic algorithm (NSGA-II) developed for the same problem.
CPG Network Optimization for a Biomimetic Robotic Fish via PSO.
Yu, Junzhi; Wu, Zhengxing; Wang, Ming; Tan, Min
2016-09-01
In this brief, we investigate the parameter optimization of a central pattern generator (CPG) network that governs forward and backward swimming of a fully untethered, multijoint biomimetic robotic fish. Since the CPG parameters are tightly linked to the propulsive performance of the robotic fish, we propose a method for determining relatively optimized control parameters. Within the framework of evolutionary computation, we use a combination of a dynamic model and the particle swarm optimization (PSO) algorithm to seek CPG characteristic parameters for enhanced performance. The PSO-based optimization scheme is validated with extensive experiments conducted on the actual robotic fish. Notably, the optimized results are shown to be superior to previously reported forward and backward swimming speeds.
Application of PSO algorithm in short-term optimization of reservoir operation.
SaberChenari, Kazem; Abghari, Hirad; Tabari, Hossein
2016-12-01
The optimization of the operation of existing water systems such as dams is very important for water resource planning and management, especially in arid and semi-arid lands. Owing to budget and operational water resource limitations and environmental problems, operation optimization is gradually replacing conventional operating policies. The operation optimization of water systems is a complex, nonlinear, multi-constraint, and multidimensional problem that needs robust techniques. In this article, particle swarm optimization (PSO) was adopted to solve the operation problem of the multipurpose Mahabad reservoir dam in northwest Iran. The objective function is to minimize the difference between monthly downstream demand and release. The method was applied considering inflow reduction probabilities for four scenarios of normal and drought conditions. The results showed that in most scenarios for normal and drought conditions, the water release obtained by the PSO model was equal to the downstream demand, and the reservoir volume decreased with the inflow reduction probabilities. The PSO model performed well in minimizing reservoir water loss, and this operation policy can be an appropriate policy for the reservoir under drought conditions.
Nery, Gesner A; Martins, Márcio A F; Kalid, Ricardo
2014-03-01
This paper describes the development of a method to optimally tune constrained MPC algorithms with model uncertainty. The proposed method is formulated by using the worst-case control scenario, which is characterized by the Morari resiliency index and the condition number, and a given nonlinear multi-objective performance criterion. The resulting constrained mixed-integer nonlinear optimization problem is solved on the basis of a modified version of the particle swarm optimization technique, because of its effectiveness in dealing with this kind of problem. The performance of this PSO-based tuning method is evaluated through its application to the well-known Shell heavy oil fractionator process.
Modeling of urban growth using cellular automata (CA) optimized by Particle Swarm Optimization (PSO)
Khalilnia, M. H.; Ghaemirad, T.; Abbaspour, R. A.
2013-09-01
In this paper, two satellite images of Tehran, the capital city of Iran, taken by TM and ETM+ in 1988 and 2010, are used as the base information layers to study changes in the urban patterns of this metropolis. The urban growth patterns of Tehran are extracted over the 1988-2010 period using cellular automata, with logistic regression functions as the transition functions. Furthermore, the weighting coefficients of the parameters affecting urban growth, i.e. distance from urban centers, distance from rural centers, distance from agricultural centers, and neighborhood effects, are selected using PSO. To evaluate the prediction results, the percent correct match index is calculated. According to the results, by combining optimization techniques with the cellular automata model, urban growth patterns can be predicted with accuracy up to 75%.
An approach for reliability analysis of industrial systems using PSO and IFS technique.
Garg, Harish; Rani, Monica
2013-11-01
The main objective of this paper is to present a technique for computing the membership functions of an intuitionistic fuzzy set (IFS) from imprecise, uncertain, and vague data. In the literature so far, membership functions of an IFS have been computed via fuzzy arithmetic operations on the collected data and hence contain a wide range of uncertainty. It is therefore necessary to optimize these spreads by formulating a nonlinear optimization problem using ordinary arithmetic operations instead of fuzzy operations. Particle swarm optimization (PSO) is used for constructing the membership functions. Sensitivity and performance analyses have also been conducted to find the critical component of the system. Finally, the computed results are compared with existing results. The suggested framework is illustrated with the help of a case study.
PSO-based support vector machine with cuckoo search technique for clinical disease diagnoses.
Liu, Xiaoyong; Fu, Hui
2014-01-01
Disease diagnosis is conducted with a machine learning method. We propose a novel machine learning method that hybridizes the support vector machine (SVM), particle swarm optimization (PSO), and cuckoo search (CS). The new method consists of two stages: first, a CS-based approach for parameter optimization of the SVM is developed to find good initial kernel-function parameters; then PSO is applied to continue SVM training and find the best SVM parameters. Experimental results indicate that the proposed CS-PSO-SVM model achieves better classification accuracy and F-measure than PSO-SVM and GA-SVM. We therefore conclude that the proposed method is very efficient compared with previously reported algorithms.
ANN-PSO Integrated Optimization Methodology for Intelligent Control of MMC Machining
Chandrasekaran, Muthumari; Tamang, Santosh
2016-06-01
Metal matrix composites (MMC) show improved properties compared with non-reinforced alloys and have found increasing application in the automotive and aerospace industries. The selection of optimum machining parameters to produce components of the desired surface roughness is of great concern, considering the quality and economy of the manufacturing process. In this study, a surface roughness prediction model for turning Al-SiCp MMC is developed using an artificial neural network (ANN). Three turning parameters, viz. spindle speed (N), feed rate (f) and depth of cut (d), were taken as input neurons, and surface roughness was the output neuron. An ANN architecture of 3-5-1 is found to be optimum, and the model predicts with an average percentage error of 7.72%. The particle swarm optimization (PSO) technique is used to optimize the parameters for minimum machining time. The innovative aspect of this work is the development of an integrated ANN-PSO optimization method for intelligent control of the MMC machining process, applicable to manufacturing industries. The robustness of the method shows its superiority in obtaining optimum cutting parameters satisfying the desired surface roughness, with better convergence in a minimum number of iterations.
Classification of EMG signals using PSO optimized SVM for diagnosis of neuromuscular disorders.
Subasi, Abdulhamit
2013-06-01
The support vector machine (SVM) is an extensively used machine learning method with many biomedical signal classification applications. In this study, a novel PSO-SVM model is proposed that hybridizes particle swarm optimization (PSO) and the SVM to improve EMG signal classification accuracy. The optimization mechanism involves kernel parameter setting in the SVM training procedure, which significantly influences classification accuracy. The experiments were conducted on EMG signals to classify them as normal, neurogenic, or myopathic. In the proposed method, the EMG signals were decomposed into frequency sub-bands using the discrete wavelet transform (DWT), and a set of statistical features was extracted from these sub-bands to represent the distribution of wavelet coefficients. The obtained results clearly validate the superiority of the SVM compared with conventional machine learning methods, and suggest that further significant enhancement in classification accuracy can be achieved by the proposed PSO-SVM classification system. The PSO-SVM yielded an overall accuracy of 97.41% on 1200 EMG signals selected from 27 subject records, against 96.75%, 95.17%, and 94.08% for the SVM, k-NN, and RBF classifiers, respectively. PSO-SVM is thus developed as an efficient tool in which various SVMs can be used conveniently as its core for the diagnosis of neuromuscular disorders.
Gravity inversion of a fault by Particle swarm optimization (PSO).
Toushmalani, Reza
2013-01-01
Particle swarm optimization (PSO) is a heuristic global optimization algorithm based on swarm intelligence, originating from research on the movement behavior of bird flocks and fish schools. In this paper we introduce and apply this method to a gravity inverse problem: determining the shape of a fault whose gravity anomaly is known. Application of the proposed algorithm to this problem demonstrates its capability to deal with difficult optimization problems. The technique proved to work efficiently when tested on a number of models.
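The velocity and position updates underlying PSO can be sketched as follows (a generic global-best PSO minimizing a test function; the parameter values are common defaults, not those used in any of the papers above):

```python
import random

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal global-best particle swarm optimizer (minimization)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + cognitive pull + social pull
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Example: minimize the sphere function in 3-D
best, best_val = pso(lambda x: sum(t * t for t in x), dim=3)
```

The inertia weight w and the acceleration coefficients c1 and c2 are the controlling parameters whose selection several of the papers in this collection revisit.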
A Hybrid PSO-BFGS Strategy for Global Optimization of Multimodal Functions.
Shutao Li; Mingkui Tan; Tsang, I W; Kwok, James Tin-Yau
2011-08-01
Particle swarm optimizer (PSO) is a powerful optimization algorithm that has been applied to a variety of problems. It can, however, suffer from premature convergence and slow convergence rate. Motivated by these two problems, a hybrid global optimization strategy combining PSOs with a modified Broyden-Fletcher-Goldfarb-Shanno (BFGS) method is presented in this paper. The modified BFGS method is integrated into the context of the PSOs to improve the particles' local search ability. In addition, in conjunction with the territory technique, a reposition technique to maintain the diversity of particles is proposed to improve the global search ability of PSOs. One advantage of the hybrid strategy is that it can effectively find multiple local solutions or global solutions to the multimodal functions in a box-constrained space. Based on these local solutions, a reconstruction technique can be adopted to further estimate better solutions. The proposed method is compared with several recently developed optimization algorithms on a set of 20 standard benchmark problems. Experimental results demonstrate that the proposed approach can obtain high-quality solutions on multimodal function optimization problems.
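The hybrid idea above, global PSO search punctuated by local refinement of the best particle, can be sketched as follows. Note the local step here is a crude finite-difference gradient descent standing in for the paper's modified BFGS; step sizes and schedules are illustrative assumptions:

```python
import random

def local_refine(f, x, steps=50, lr=0.1, h=1e-5):
    """Crude local search via finite-difference gradient descent.
    A stand-in for the paper's modified BFGS step (illustrative only)."""
    x = x[:]
    for _ in range(steps):
        fx = f(x)
        grad = []
        for d in range(len(x)):
            xp = x[:]
            xp[d] += h
            grad.append((f(xp) - fx) / h)   # forward-difference gradient
        x = [xi - lr * g for xi, g in zip(x, grad)]
    return x

def hybrid_pso(f, dim, n=20, iters=60, refine_every=15, seed=2):
    """Global-best PSO with periodic local refinement of the best particle."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-3, 3) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pbv = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pbv[i])
    gbest, gbv = pbest[g][:], pbv[g]
    for t in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pbv[i]:
                pbest[i], pbv[i] = pos[i][:], v
                if v < gbv:
                    gbest, gbv = pos[i][:], v
        if (t + 1) % refine_every == 0:       # periodic local refinement
            cand = local_refine(f, gbest)
            if f(cand) < gbv:
                gbest, gbv = cand, f(cand)
    return gbest, gbv
```

The swarm supplies global exploration while the local step sharpens convergence near promising basins, which is the division of labor the PSO-BFGS strategy exploits.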
Handayani, D.; Nuraini, N.; Tse, O.; Saragih, R.; Naiborhu, J.
2016-04-01
PSO is a computational optimization method motivated by the social behavior of organisms, such as bird flocking, fish schooling, and human social relations, and is one of the most important swarm intelligence algorithms. In this study, we analyze the convergence of PSO when it is applied to the within-host dengue infection treatment model simulated in our earlier research. We used the PSO method to construct the initial adjoint equation and to solve a control problem. The dependence of the control input on the continuity of the objective function, and the method's ability to adapt to a dynamic environment, motivate this convergence analysis. The analysis yields parameter conditions that ensure convergence of the numerical simulations of this model using PSO.
Self-modeling curve resolution (SMCR) by particle swarm optimization (PSO).
Shinzawa, Hideyuki; Jiang, Jian-Hui; Iwahashi, Makio; Noda, Isao; Ozaki, Yukihiro
2007-07-09
Particle swarm optimization (PSO) combined with alternating least squares (ALS) is introduced into self-modeling curve resolution (SMCR) in this study for effective initial estimation. The proposed method searches for the concentration profiles or pure spectra that give the best resolution result by PSO. SMCR sometimes yields insufficient resolution by becoming trapped in a local minimum when initial estimates are poor. The proposed method reduces this undesirable effect of local minima in SMCR owing to the advantages of PSO. Moreover, a new criterion based on the global phase angle is proposed for more effective SMCR performance. It takes full advantage of the data structure; that is, a sequential change with respect to a perturbation can be considered in SMCR with this criterion. To demonstrate its potential, SMCR by PSO is applied to concentration-dependent near-infrared (NIR) spectra of mixture solutions of oleic acid (OA) and ethanol. Its curve resolution performance is compared with that of SMCR with evolving factor analysis (EFA). The results show that SMCR by PSO yields significantly better curve resolution than EFA and is less sensitive to local minima, so it can be an effective new tool for curve resolution analysis.
Zou, Feng; Chen, Debao; Wang, Jiangtao
2016-01-01
An improved teaching-learning-based optimization combined with the social character of PSO (TLBO-PSO), which considers the teacher's influence on the students and the mean grade of the class, is proposed in this paper to find global solutions of function optimization problems. In this method, the teacher phase of TLBO is modified: the new position of an individual is determined by its old position, the mean position, and the best position of the current generation. The method overcomes the disadvantage that the evolution of the original TLBO might stall when the mean position of the students equals the position of the teacher. To decrease the computational cost of the algorithm, the process of removing duplicate individuals in the original TLBO is not adopted in the improved algorithm. Moreover, the probability of local convergence of the improved method is decreased by a mutation operator. The effectiveness of the proposed method is tested on several benchmark functions, and the results are competitive with those of some other methods.
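For readers unfamiliar with TLBO, the teacher phase the abstract modifies can be sketched as follows. This is one plausible reading of a standard teacher phase with greedy acceptance, not the authors' exact modified update:

```python
import random

def teacher_phase(pop, f, rng):
    """One TLBO teacher-phase sweep (minimization): each learner moves
    using its old position, the class mean, and the teacher (current
    best); a move is kept only if it improves the learner (greedy
    acceptance). Illustrative sketch; the paper's update rule differs."""
    n, dim = len(pop), len(pop[0])
    teacher = min(pop, key=f)
    mean = [sum(p[d] for p in pop) / n for d in range(dim)]
    out = []
    for p in pop:
        tf = rng.choice([1, 2])  # teaching factor
        cand = [p[d] + rng.random() * (teacher[d] - tf * mean[d])
                for d in range(dim)]
        out.append(cand if f(cand) < f(p) else p)
    return out

# Usage: iterate the sweep on a 2-D sphere function
rng = random.Random(4)
sphere = lambda x: sum(t * t for t in x)
pop = [[rng.uniform(-5, 5) for _ in range(2)] for _ in range(20)]
start = min(sphere(p) for p in pop)
for _ in range(100):
    pop = teacher_phase(pop, sphere, rng)
```

The stall the abstract mentions is visible here: when the mean position equals the teacher's position and tf = 1, the step term vanishes, which is what the TLBO-PSO modification is designed to avoid.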
Wang, Jie-sheng; Li, Shu-xia; Gao, Jie
2014-01-01
To meet the real-time fault diagnosis and optimization monitoring requirements of the polymerization kettle in the polyvinyl chloride (PVC) resin production process, a fault diagnosis strategy based on the self-organizing map (SOM) neural network is proposed. First, a mapping between the polymerization process data and the fault patterns is established by analyzing the production technology of the polymerization kettle equipment. The particle swarm optimization (PSO) algorithm, with a new dynamic adjustment method for the inertia weight, is adopted to optimize the structural parameters of the SOM neural network. The fault pattern classification of the polymerization kettle equipment realizes the nonlinear mapping from a given symptom set to the fault set. Finally, fault diagnosis simulation experiments are conducted using industrial on-site historical data of the polymerization kettle, and the results show that the proposed PSO-SOM fault diagnosis strategy is effective.
An Enhanced PSO-Based Clustering Energy Optimization Algorithm for Wireless Sensor Network.
Vimalarani, C; Subramanian, R; Sivanandam, S N
2016-01-01
A wireless sensor network (WSN) is a network formed from a large number of sensor nodes positioned in an application environment to monitor physical entities in a target area, for example in temperature, water level, and pressure monitoring, health care, and various military applications. Sensor nodes are mostly equipped with self-supported battery power, through which they perform their operations and communicate with neighboring nodes. To maximize the lifetime of wireless sensor networks, energy conservation measures are essential for improving their performance. This paper proposes an Enhanced PSO-Based Clustering Energy Optimization (EPSO-CEO) algorithm for wireless sensor networks, in which clustering and cluster head selection are done using the particle swarm optimization (PSO) algorithm so as to minimize the power consumption in the WSN. Performance metrics are evaluated, and the results are compared with a competitive clustering algorithm to validate the reduction in energy consumption.
Zhang, Bing; Sun, Xu; Gao, Lian-Ru; Yang, Li-Na
2011-09-01
To address the inaccuracy of endmember extraction caused by abnormal noise in the data during the mixed-pixel decomposition process, particle swarm optimization (PSO), a swarm intelligence algorithm, was introduced and improved in the present paper. By redefining the position and velocity representations and the data-updating strategies, a discrete particle swarm optimization (D-PSO) algorithm is proposed, which makes it possible to search for solutions in a discrete space and thereby solve combinatorial optimization problems. In addition, by defining objective functions and feasible solution spaces, endmember extraction is converted into a combinatorial optimization problem that can be solved by D-PSO. A detailed flow for applying D-PSO to endmember extraction is given, and experiments on simulated and real data verify the algorithm's flexibility in handling data with abnormal noise and the reliability of the endmember extraction. Furthermore, the influence of different parameters on the algorithm's performance is analyzed thoroughly.
Prediction of O-glycosylation Sites Using Random Forest and GA-Tuned PSO Technique.
Hassan, Hebatallah; Badr, Amr; Abdelhalim, M B
2015-01-01
O-glycosylation is one of the main types of mammalian protein glycosylation; it occurs at particular serine (S) or threonine (T) sites. Several O-glycosylation site predictors have been developed; however, a need for even better prediction tools remains. One challenge in training the classifiers is that the available datasets are highly imbalanced, which makes the classification accuracy for the minority class unsatisfactory. In our previous work, we proposed a new classification approach based on particle swarm optimization (PSO) and random forest (RF) that addresses the imbalanced dataset problem. The PSO parameter settings in the training process impact the classification accuracy. Thus, in this paper, we optimize the parameters of the PSO algorithm using a genetic algorithm in order to increase the classification accuracy. Our proposed genetic-algorithm-based approach shows better performance, in terms of the area under the receiver operating characteristic curve, than existing predictors. In addition, we implemented a glycosylation predictor tool based on this approach and demonstrated that it can successfully identify candidate glycosylation sites in a case-study protein.
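Meta-optimizing PSO's controlling parameters with a genetic algorithm can be sketched as below. This toy version tunes (w, c1, c2) by scoring each candidate on a short PSO run over a sphere function; the paper instead scores parameters by the accuracy of an RF-based classifier, and all numeric choices here are assumptions:

```python
import random

def pso_run(w, c1, c2, f, dim=2, n=10, iters=40, seed=0):
    """Short deterministic PSO run; returns the best value found."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pb = [p[:] for p in pos]
    pbv = [f(p) for p in pos]
    gb = min(pb, key=f)[:]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pb[i][d] - pos[i][d])
                             + c2 * rng.random() * (gb[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pbv[i]:
                pb[i], pbv[i] = pos[i][:], v
        gb = min(pb, key=f)[:]
    return f(gb)

def ga_tune(f, gens=10, pop_size=8, seed=3):
    """Tiny real-coded GA over the PSO parameters (w, c1, c2):
    elitist truncation selection, blend crossover, Gaussian mutation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(0.1, 1.0), rng.uniform(0.5, 2.5), rng.uniform(0.5, 2.5)]
           for _ in range(pop_size)]
    score = lambda genes: pso_run(*genes, f)
    for _ in range(gens):
        pop.sort(key=score)
        elite = pop[:pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            children.append([(x + y) / 2 + rng.gauss(0, 0.05)
                             for x, y in zip(a, b)])
        pop = elite + children
    return min(pop, key=score)
```

The outer GA treats the inner optimizer as a black box, which is why the same scheme transfers to tuning PSO for classifier training.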
Luitel, Bipul; Venayagamoorthy, Ganesh Kumar
2010-06-01
Training a single simultaneous recurrent neural network (SRN) to learn all outputs of a multiple-input-multiple-output (MIMO) system is a difficult problem. A new training algorithm developed from combined concepts of swarm intelligence and quantum principles is presented. The training algorithm is called particle swarm optimization with quantum infusion (PSO-QI). To improve the effectiveness of learning, a two-step learning approach is introduced in the training. The objective of the learning in the first step is to find the optimal set of weights in the SRN considering all output errors. In the second step, the objective is to maximize the learning of each output dynamics by fine tuning the respective SRN output weights. To demonstrate the effectiveness of the PSO-QI training algorithm and the two-step learning approach, two examples of an SRN learning MIMO systems are presented. The first example is learning a benchmark MIMO system and the second one is the design of a wide area monitoring system for a multimachine power system. From the results, it is observed that SRNs can effectively learn MIMO systems when trained using the PSO-QI algorithm and the two-step learning approach.
Trajectory planning of free-floating space robot using Particle Swarm Optimization (PSO)
Wang, Mingming; Luo, Jianjun; Walter, Ulrich
2015-07-01
This paper investigates the application of the particle swarm optimization (PSO) strategy to trajectory planning of a kinematically redundant space robot in free-floating mode. Owing to path-dependent dynamic singularities, the volume of the available workspace of the space robot is limited, and enormous joint velocities are required when such singularities are met. To overcome this effect, the direct kinematics equations, in conjunction with PSO, are employed for trajectory planning of the free-floating space robot. The joint trajectories are parametrized with Bézier curves to simplify the calculation. A constrained PSO scheme with adaptive inertia weight is implemented to find the optimal joint trajectories while the specified objectives and imposed constraints are satisfied. The proposed method is not sensitive to the singularity issue owing to the use of forward kinematic equations. Simulation results are presented for trajectory planning of a 7-degree-of-freedom (DOF) redundant manipulator mounted on a free-floating spacecraft, and they demonstrate the effectiveness of the proposed method.
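The Bézier parameterization is what shrinks the search to a handful of decision variables per joint. A minimal sketch of a scalar joint trajectory evaluated by de Casteljau's algorithm (the interior control-point values below are hypothetical stand-ins for what PSO would optimize):

```python
def bezier(ctrl, t):
    """Evaluate a Bézier curve with scalar control points at t in [0, 1]
    using de Casteljau's algorithm (repeated linear interpolation)."""
    pts = list(ctrl)
    while len(pts) > 1:
        pts = [(1 - t) * a + t * b for a, b in zip(pts, pts[1:])]
    return pts[0]

# A joint trajectory theta(t): the end control points pin the boundary
# joint angles, the interior ones are the PSO decision variables.
theta0, thetaf = 0.0, 1.2            # boundary joint angles (rad), assumed
ctrl = [theta0, 0.5, 0.9, thetaf]    # interior values are hypothetical
traj = [bezier(ctrl, k / 50) for k in range(51)]
```

Because the curve always passes through its first and last control points, the boundary conditions hold for every candidate the swarm proposes, so only the interior points need to be searched.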
Optimal placement of active braces by using PSO algorithm in near- and far-field earthquakes
Mastali, M.; Kheyroddin, A.; Samali, B.; Vahdani, R.
2016-03-01
One of the most important issues in tall buildings is the lateral resistance of the load-bearing systems against applied loads such as earthquake, wind, and blast. Dual systems comprising core wall systems (single- or multi-cell core) and moment-resisting frames are used as resistance systems in tall buildings. In addition to the adequate stiffness provided by the dual system, most tall buildings may have to rely on various control systems to reduce the level of unwanted motions stemming from severe dynamic loads. One of the main challenges in effectively controlling the motion of a structure is the limitation in optimally distributing the required control devices along the height of the structure. In this paper, concrete shear walls are used as a secondary resistance system at three different heights, together with actuators installed in the braces. The optimal actuator positions are found using the PSO algorithm as well as arbitrarily. The control performance of buildings with actuator placement optimized by the PSO algorithm is assessed and compared with that of arbitrary placement, using both near- and far-field ground motions from the Kobe and Chi-Chi earthquakes.
Improving a HMM-based off-line handwriting recognition system using MME-PSO optimization
Hamdani, Mahdi; El Abed, Haikal; Hamdani, Tarek M.; Märgner, Volker; Alimi, Adel M.
2011-01-01
One of the non-trivial steps in the development of a classifier is the design of its architecture. This paper presents a new algorithm, Multi Models Evolvement (MME), using particle swarm optimization (PSO). The algorithm is a modified version of basic PSO and is used for the unsupervised design of hidden Markov model (HMM) based architectures. The proposed algorithm is applied to an Arabic handwriting recognizer based on discrete-probability HMMs. After the optimization of their architectures, the HMMs are trained with the Baum-Welch algorithm. The validation of the system is based on the IfN/ENIT database. The performance of the developed approach is compared with the systems that participated in the 2005 competition on Arabic handwriting recognition at the International Conference on Document Analysis and Recognition (ICDAR). The final system is a combination of an optimized HMM with 6 other HMMs obtained by simple variation of the number of states. An absolute improvement of 6% in word recognition rate, to about 81%, is achieved compared with the basic system (ARAB-IfN). The proposed recognizer also outperforms most of the known state-of-the-art systems.
Castellano, T.; De Palma, L.; Laneve, D.; Strippoli, V.; Cuccovilllo, A.; Prudenzano, F.; Dimiccoli, V.; Losito, O.; Prisco, R.
2015-07-01
A homemade computer code for designing a Side-Coupled Linear Accelerator (SCL) is written. It integrates a simplified model of SCL tanks with the Particle Swarm Optimization (PSO) algorithm. The main aim of the code is to obtain useful guidelines for the design of Linear Accelerator (LINAC) resonant cavities. The design procedure, assisted by this approach, seems very promising, allowing future improvements towards the optimization of actual accelerating geometries. (authors)
Optimization of the shell compliance by the thickness changing using PSO
NASA Astrophysics Data System (ADS)
Szczepanik, M.; Poteralski, A.; Kalinowski, M.
2016-11-01
The paper is devoted to the application of particle swarm methods and the finite element method to the optimization of 2-D structures (plane stress/strain, bending plates and shells). The shape, topology and material or thickness of the structures are optimized for stress, strain and volume criteria. The numerical examples demonstrate that the method based on particle swarm computation is an effective technique for computer-aided optimal design.
Inversion of residual gravity anomalies using tuned PSO
NASA Astrophysics Data System (ADS)
Roshan, Ravi; Singh, Upendra Kumar
2017-02-01
Many kinds of particle swarm optimization (PSO) techniques are now available, and various efforts have been made to solve linear and non-linear as well as one-dimensional and multi-dimensional problems with geophysical data. Particle swarm optimization is a metaheuristic optimization method that requires an intelligent initial guess and a suitable selection of controlling parameters (i.e. inertia weight and acceleration coefficients) for better convergence to the global minimum. The proposed technique, tuned PSO, is an improved version of PSO in which efforts have been made to choose the controlling parameters; these parameters were selected after analysing the responses of various trial runs on synthetic gravity anomalies over various geological sources. The applicability and efficacy of the proposed method is tested and validated using synthetic gravity anomalies over various source geometries. Finally, tuned PSO is applied to field residual gravity anomalies from two different geological terrains to find the model parameters, namely the amplitude coefficient factor (A), shape factor (q) and depth (z). The results have been compared with published results obtained by different methods and show excellent agreement with the true model parameters. The results also show that the proposed approach is not only superior to the other methods but that the strategy enhances its exploration capability. Thus tuned PSO is an efficient and more robust technique for achieving an optimal solution with minimal error.
PSO-Based Smart Grid Application for Sizing and Optimization of Hybrid Renewable Energy Systems
Mohamed, Mohamed A.; Eltamaly, Ali M.; Alolah, Abdulrahman I.
2016-01-01
This paper introduces an optimal sizing algorithm for a hybrid renewable energy system using a smart grid load management application based on the available generation. This algorithm aims to maximize the system energy production and meet the load demand with minimum cost and highest reliability. The system is formed by a photovoltaic array, wind turbines, storage batteries, and a diesel generator as a backup source of energy. Demand profile shaping, one of the smart grid applications, is introduced in this paper using load-shifting-based load priority. Particle swarm optimization is used in this algorithm to determine the optimum size of the system components. The results obtained from this algorithm are compared with those from the iterative optimization technique to assess the adequacy of the proposed algorithm. The study in this paper is performed in some of the remote areas in Saudi Arabia and can be expanded to any similar regions around the world. Numerous valuable results are extracted from this study that could help researchers and decision makers. PMID:27513000
Hybrid PID and PSO-based control for electric power assist steering system for electric vehicle
NASA Astrophysics Data System (ADS)
Hanifah, R. A.; Toha, S. F.; Ahmad, S.
2013-12-01
Electric power assist steering (EPAS) systems play an important role in enhancing the driving performance of a vehicle with their energy-conserving features. This paper presents a hybrid PID (Proportional-Integral-Derivative) and particle swarm optimization (PSO) based control scheme to minimize energy consumption for EPAS. This single-objective optimization scheme uses the PSO technique to search for the best gain parameters of the PID controller. The fast tuning feature of this optimal PID controller produces high-quality solutions. Simulation results show the performance and effectiveness of the hybrid PSO-PID controller as opposed to the conventional PID controller.
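The fitness evaluation at the heart of such PSO-based PID tuning can be illustrated with a minimal sketch: simulate a candidate (kp, ki, kd) on a simple plant and return an integral error cost for the swarm to minimize. The first-order plant, time constants and squared-error cost below are illustrative assumptions, not the EPAS model of the paper:

```python
def pid_step_cost(kp, ki, kd, dt=0.01, t_end=5.0, tau=1.0):
    """Hypothetical PSO fitness for PID tuning: simulate a unit step on a
    first-order plant tau*y' + y = u (Euler discretization) and return the
    integral of squared error; a PSO run would minimize this over gains."""
    y, integ, prev_e, cost = 0.0, 0.0, 1.0, 0.0
    for _ in range(int(t_end / dt)):
        e = 1.0 - y                      # unit step reference
        integ += e * dt
        deriv = (e - prev_e) / dt
        u = kp * e + ki * integ + kd * deriv
        y += dt * (u - y) / tau          # plant: tau*y' = u - y
        prev_e = e
        cost += e * e * dt               # integral of squared error (ISE)
    return cost
```

A swarm would call this cost for each particle's (kp, ki, kd) position; gains that track the step faster and remove steady-state error score lower.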
NASA Astrophysics Data System (ADS)
Zhang, Enlai; Hou, Liang; Shen, Chao; Shi, Yingliang; Zhang, Yaxiang
2016-01-01
To better solve the complex non-linear problem between subjective sound quality evaluation results and objective psychoacoustic parameters, a method for predicting sound quality is put forward using a back propagation neural network (BPNN) based on particle swarm optimization (PSO), in which the initial weights and thresholds of the BP network neurons are optimized through PSO. To verify the effectiveness and accuracy of this approach, the noise signals of B-Class vehicles from idle speed to 120 km/h, measured with an artificial head, are taken as the target. In addition, this paper describes a subjective evaluation experiment on sound quality annoyance inside the vehicles using a grade evaluation method, by which the annoyance of each sample is obtained. Using the Artemis software, the main objective psychoacoustic parameters of each noise sample are calculated. These parameters include loudness, sharpness, roughness, fluctuation, tonality, articulation index (AI) and A-weighted sound pressure level. Furthermore, three evaluation models with the same artificial neural network (ANN) structure are built: the standard BPNN model, the genetic algorithm-back-propagation neural network (GA-BPNN) model and the PSO-back-propagation neural network (PSO-BPNN) model. After network training and evaluation prediction with the three models based on the experimental data, the results show that the PSO-BPNN method achieves convergence more quickly and improves the prediction accuracy of sound quality, which can further lay a foundation for the control of sound quality inside vehicles.
hydroPSO: A Versatile Particle Swarm Optimisation R Package for Calibration of Environmental Models
NASA Astrophysics Data System (ADS)
Zambrano-Bigiarini, M.; Rojas, R.
2012-04-01
Particle Swarm Optimisation (PSO) is a recent and powerful population-based stochastic optimisation technique inspired by the social behaviour of bird flocking, which shares similarities with other evolutionary techniques such as Genetic Algorithms (GA). In PSO, however, each individual of the population, known as a particle in PSO terminology, adjusts its flying trajectory over the multi-dimensional search space according to its own experience (best-known personal position) and that of its neighbours in the swarm (best-known local position). PSO has recently received a surge of attention given its flexibility, ease of programming, low memory and CPU requirements, and efficiency. Despite these advantages, PSO may still get trapped in sub-optimal solutions, or suffer from swarm explosion or premature convergence. Thus, the development of enhancements to the "canonical" PSO is an active area of research. To date, several modifications to the canonical PSO have been proposed in the literature, resulting in a large and dispersed collection of codes and algorithms which might well be used for similar if not identical purposes. In this work we present hydroPSO, a platform-independent R package implementing several enhancements to the canonical PSO that we consider of utmost importance to bring this technique to the attention of a broader community of scientists and practitioners. hydroPSO is model-independent, allowing the user to interface any model code with the calibration engine without having to invest considerable effort in customizing PSO to a new calibration problem. Some of the controlling options to fine-tune hydroPSO are: four alternative topologies, several types of inertia weight, time-variant acceleration coefficients, time-variant maximum velocity, regrouping of particles when premature convergence is detected, different types of boundary conditions and many others. Additionally, hydroPSO implements recent PSO variants such as: Improved Particle Swarm
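The canonical personal-best/neighbourhood-best update described above can be sketched in a few lines. The sketch below is a generic global-best PSO with commonly used constriction-style coefficients (w = 0.729, c1 = c2 = 1.49445); it is an illustration of the canonical algorithm, not the hydroPSO implementation:

```python
import random

def pso(f, dim, bounds, n_particles=30, iters=200,
        w=0.729, c1=1.49445, c2=1.49445, seed=0):
    """Minimal gbest PSO sketch: each particle is pulled toward its own
    best-known position and the swarm's best-known position."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                       # personal best positions
    pbest = [f(x) for x in X]                   # personal best values
    g = min(range(n_particles), key=lambda i: pbest[i])
    G, gbest = P[g][:], pbest[g]                # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (P[i][d] - X[i][d])
                           + c2 * r2 * (G[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))  # clamp to bounds
            fx = f(X[i])
            if fx < pbest[i]:
                pbest[i], P[i] = fx, X[i][:]
                if fx < gbest:
                    gbest, G = fx, X[i][:]
    return G, gbest
```

For example, minimizing the 3-D sphere function `lambda x: sum(v*v for v in x)` over [-5, 5] converges close to the origin within a couple hundred iterations.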
NASA Astrophysics Data System (ADS)
Astuty; Haryono, T.
2016-04-01
Transmission expansion planning (TEP) is one of the issues that must be addressed when large-scale power generation is added to an existing power system. Optimization needs to be conducted to obtain a solution that is optimal both technically and economically. Several mathematical methods have been applied to provide optimal allocation of new transmission lines, such as genetic algorithms, particle swarm optimization and tabu search. This paper proposes a novel binary particle swarm optimization (NBPSO) to determine which transmission lines should be added to the existing power system. There are two scenarios in this simulation: the first considers transmission power losses and the second disregards them. The NBPSO method successfully obtains the optimal solution in a short computation time. Compared to the first scenario, the second scenario, which disregards power losses, requires fewer new lines but produces high power losses that make the cost extremely high.
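Binary PSO variants for such line-selection problems generally build on a sigmoid position update (the generic Kennedy-Eberhart discrete rule, not necessarily the exact NBPSO of this paper): each candidate line is a bit, and the sigmoid of its velocity gives the probability that the bit is set.

```python
import math, random

def binary_update(velocity, rng):
    """Generic binary-PSO position rule: sigmoid(velocity) is the
    probability that the decision bit (e.g. 'build this line') is 1."""
    s = 1.0 / (1.0 + math.exp(-velocity))
    return 1 if rng.random() < s else 0
```

Strongly positive velocities make the bit almost surely 1, strongly negative ones almost surely 0; velocities near zero leave the bit essentially random.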
NASA Astrophysics Data System (ADS)
Rama Mohan Rao, A.; Anandakumar, Ganesh
2007-12-01
Setting up a health monitoring system for large-scale civil engineering structures requires a large number of sensors and the placement of these sensors is of great significance for such spatially separated large structures. In this paper, we present an optimal sensor placement (OSP) algorithm by treating OSP as a combinatorial optimization problem which is solved using a swarm intelligence technique called particle swarm optimization (PSO). We propose a new hybrid PSO algorithm by combining a self-configurable PSO with the Nelder-Mead algorithm to solve this rather difficult combinatorial problem of OSP. The proposed algorithm aims precisely to achieve the best identification of modal frequencies and mode shapes. Numerical experiments have been carried out by considering civil engineering structures to evaluate the performance of the proposed swarm-intelligence-based OSP algorithm. Numerical studies indicate that the proposed hybrid PSO algorithm generates sensor configurations superior to the conventional iterative information-based approaches which have been popularly used for large structures. Further, the proposed hybrid PSO algorithm exhibits superior convergence characteristics when compared to other PSO counterparts.
Particle swarm optimization for complex nonlinear optimization problems
NASA Astrophysics Data System (ADS)
Alexandridis, Alex; Famelis, Ioannis Th.; Tsitouras, Charalambos
2016-06-01
This work presents the application of a technique belonging to evolutionary computation, namely particle swarm optimization (PSO), to complex nonlinear optimization problems. To be more specific, a PSO optimizer is set up and applied to the derivation of Runge-Kutta pairs for the numerical solution of initial value problems. The effect of critical PSO operational parameters on the performance of the proposed scheme is thoroughly investigated.
Optimization of Damavand Tokamak Poloidal Field Coils Positions and Currents with PSO Algorithm
NASA Astrophysics Data System (ADS)
Mohammadi, M.; Dini, F.; Amrollahi, R.
2012-04-01
Poloidal field coils are used to maintain equilibrium in both small and large tokamaks. Since the poloidal magnetic field is a complex function of the current density and the position of the coils, a change in any of these parameters can have a strong effect on the confinement and the magnetohydrodynamic parameters. On the other hand, considering the continuity of the current and the positions of the coils, the search space is so large that taking all possible configurations into account becomes practically impossible. A method should therefore be used that is able to optimize the position and current of the coils without searching the whole space. This paper seeks to develop a new method of deriving the plasma parameters in which a combination of two methods, neural networks and Particle Swarm Optimization, is used to optimize the position and current of the poloidal field coils in the Damavand tokamak. Since no special topology is assumed in the employed methods, the approach can readily be used to study any other tokamak.
NASA Astrophysics Data System (ADS)
Jude Hemanth, Duraisamy; Umamaheswari, Subramaniyan; Popescu, Daniela Elena; Naaji, Antoanela
2016-01-01
Image steganography is one of the ever-growing computational approaches that has found application in many fields. Frequency domain techniques are highly preferred for image steganography applications. However, there are significant drawbacks associated with these techniques. In transform-based approaches, the secret data is embedded in a random manner in the transform coefficients of the cover image. These transform coefficients may not be optimal in terms of stego image quality and embedding capacity. In this work, the application of Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) has been explored in the context of determining the optimal coefficients in these transforms. Frequency domain transforms such as the Bandelet Transform (BT) and the Finite Ridgelet Transform (FRIT) are used in combination with GA and PSO to improve the efficiency of the image steganography system.
Mekhmoukh, Abdenour; Mokrani, Karim
2015-11-01
In this paper, a new image segmentation method based on Particle Swarm Optimization (PSO) and outlier rejection combined with level sets is proposed. A traditional approach to the segmentation of Magnetic Resonance (MR) images is the Fuzzy C-Means (FCM) clustering algorithm. The membership function of this conventional algorithm is sensitive to outliers and does not integrate the spatial information in the image. The algorithm is very sensitive to noise and inhomogeneities in the image; moreover, it depends on the initialization of the cluster centers. To improve the outlier rejection and to reduce the noise sensitivity of the conventional FCM clustering algorithm, a novel extended FCM algorithm for image segmentation is presented. In general, in the FCM algorithm the initial cluster centers are chosen randomly; with the help of the PSO algorithm the cluster centers are chosen optimally. Our algorithm also takes into consideration spatial neighborhood information. These priors are used in the cost function to be optimized. For MR images, the resulting fuzzy clustering is used to set the initial level set contour. The results confirm the effectiveness of the proposed algorithm.
PSO-tuned PID controller for coupled tank system via priority-based fitness scheme
NASA Astrophysics Data System (ADS)
Jaafar, Hazriq Izzuan; Hussien, Sharifah Yuslinda Syed; Selamat, Nur Asmiza; Abidin, Amar Faiz Zainal; Aras, Mohd Shahrieel Mohd; Nasir, Mohamad Na'im Mohd; Bohari, Zul Hasrizal
2015-05-01
Coupled Tank Systems (CTS) are widely used in industry, especially in chemical process industries. The overall process requires liquids to be pumped, stored in a tank and pumped again to another tank. Nevertheless, the level of liquid in each tank needs to be controlled and the flow between the two tanks must be regulated. This paper presents the development of an optimal PID controller for controlling the desired liquid level of the CTS. Two Particle Swarm Optimization (PSO) methods are tested for optimizing the PID controller parameters: standard Particle Swarm Optimization (PSO) and Particle Swarm Optimization with a Priority-based Fitness Scheme (PFPSO). Simulation is conducted within the Matlab environment to verify the performance of the system in terms of settling time (Ts), steady-state error (SSE) and overshoot (OS). It is demonstrated that PSO with the Priority-based Fitness Scheme (PFPSO) is a promising technique for controlling the desired liquid level and improves system performance compared with standard PSO.
NASA Astrophysics Data System (ADS)
Izzuan Jaafar, Hazriq; Mohd Ali, Nursabillilah; Mohamed, Z.; Asmiza Selamat, Nur; Faiz Zainal Abidin, Amar; Jamian, J. J.; Kassim, Anuar Mohamed
2013-12-01
This paper presents the development of optimal PID and PD controllers for controlling a nonlinear gantry crane system. The proposed Binary Particle Swarm Optimization (BPSO) algorithm, which uses a Priority-based Fitness Scheme, is adopted to obtain five optimal controller gains. The optimal gains are tested on a control structure that combines PID and PD controllers to examine system responses, including trolley displacement and payload oscillation. The dynamic model of the gantry crane system is derived using the Lagrange equation. Simulation is conducted within the Matlab environment to verify the performance of the system in terms of settling time (Ts), steady-state error (SSE) and overshoot (OS). The proposed technique demonstrates that implementing the Priority-based Fitness Scheme in BPSO is effective and able to move the trolley as fast as possible to the various desired positions.
Fuzzified PSO Algorithm for OPF with FACTS Devices in Interconnected Power Systems
NASA Astrophysics Data System (ADS)
Jothi Swaroopan, N. M.; Somasundaram, P.
This paper presents a new, computationally efficient, improved stochastic algorithm for solving Optimal Power Flow (OPF) in interconnected power systems with FACTS devices. The proposed technique is based on a fuzzy logic strategy incorporated into the Particle Swarm Optimization (PSO) algorithm, and is hence named Fuzzified PSO (FPSO). The FACTS devices considered here include the Static Var Compensator (SVC), Static Synchronous Compensator (STATCOM), Thyristor Controlled Series Capacitor (TCSC) and Unified Power Flow Controller (UPFC). The proposed method is tested on the single-area IEEE 30-bus system and on interconnected two-area systems. The optimal solutions obtained using Evolutionary Programming (EP), PSO and FPSO are compared and analyzed. The analysis reveals that the proposed algorithm is relatively simple, efficient and reliable.
Hybrid PSO-ASVR-based method for data fitting in the calibration of infrared radiometer.
Yang, Sen; Li, Chengwei
2016-06-01
The present paper describes a hybrid particle swarm optimization-adaptive support vector regression (PSO-ASVR)-based method for data fitting in the calibration of infrared radiometer. The proposed hybrid PSO-ASVR-based method is based on PSO in combination with Adaptive Processing and Support Vector Regression (SVR). The optimization technique involves setting parameters in the ASVR fitting procedure, which significantly improves the fitting accuracy. However, its use in the calibration of infrared radiometer has not yet been widely explored. Bearing this in mind, the PSO-ASVR-based method, which is based on the statistical learning theory, is successfully used here to get the relationship between the radiation of a standard source and the response of an infrared radiometer. Main advantages of this method are the flexible adjustment mechanism in data processing and the optimization mechanism in a kernel parameter setting of SVR. Numerical examples and applications to the calibration of infrared radiometer are performed to verify the performance of PSO-ASVR-based method compared to conventional data fitting methods.
Comparative analysis of PSO algorithms for PID controller tuning
NASA Astrophysics Data System (ADS)
Štimac, Goranka; Braut, Sanjin; Žigulić, Roberto
2014-09-01
The active magnetic bearing (AMB) suspends the rotating shaft and maintains it in a levitated position by applying controlled electromagnetic forces on the rotor in the radial and axial directions. Although various control methods are developing rapidly, the PID control strategy is still the most widely used in many applications, including AMBs. To tune the PID controller, a particle swarm optimization (PSO) method is applied. A comparative analysis of PSO algorithms is carried out, where two algorithms, namely (1) PSO with linearly decreasing inertia weight (LDW-PSO) and (2) PSO with the constriction factor approach (CFA-PSO), are independently tested for different PID structures. The computer simulations are carried out with the aim of minimizing an objective function defined as the integral of time multiplied by the absolute value of error (ITAE). To validate the performance of the analyzed PSO algorithms, one-axis and two-axis radial rotor/active magnetic bearing systems are examined. The results show that the PSO algorithms are effective and easily implemented, providing stable convergence and good computational efficiency for different PID structures for the rotor/AMB systems. Moreover, the PSO algorithms prove easy to use for controller tuning for both SISO and MIMO systems, which consider the system delay and the interference between the horizontal and vertical rotor axes.
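The two parameter strategies compared above have simple closed forms. The sketch below uses the values commonly quoted in the PSO literature (inertia weight decreasing from 0.9 to 0.4, and c1 = c2 = 2.05 for the constriction factor); these defaults are illustrative, not taken from this paper:

```python
import math

def ldw_inertia(t, t_max, w_start=0.9, w_end=0.4):
    """LDW-PSO schedule: inertia weight decreases linearly over the run,
    favoring exploration early and exploitation late."""
    return w_start - (w_start - w_end) * t / t_max

def constriction_factor(c1=2.05, c2=2.05):
    """Clerc-Kennedy constriction factor chi for CFA-PSO; the closed form
    below requires phi = c1 + c2 > 4."""
    phi = c1 + c2
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))
```

With the defaults above, the constriction factor evaluates to roughly 0.7298, which multiplies the whole velocity update in CFA-PSO.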
NASA Astrophysics Data System (ADS)
Yang, Yue; Wen, Jian; Chen, Xiaofei
2015-07-01
In this paper, we apply particle swarm optimization (PSO), an artificial intelligence technique, to velocity calibration in microseismic monitoring. We ran simulations with four 1-D layered velocity models and three different initial model ranges. The results using the basic PSO algorithm were reliable and accurate for simple models, but unsuccessful for complex models. We propose the staged shrinkage strategy (SSS) for the PSO algorithm. The SSS-PSO algorithm produced robust inversion results and had a fast convergence rate. We investigated the effects of PSO's velocity clamping factor in terms of the algorithm reliability and computational efficiency. The velocity clamping factor had little impact on the reliability and efficiency of basic PSO, whereas it had a large effect on the efficiency of SSS-PSO. Reassuringly, SSS-PSO exhibits marginal reliability fluctuations, which suggests that it can be confidently implemented.
Utilization of PSO algorithm in estimation of water level change of Lake Beysehir
NASA Astrophysics Data System (ADS)
Buyukyildiz, Meral; Tezel, Gulay
2015-12-01
In this study, unlike the backpropagation algorithm, which gets trapped in local best solutions, the usefulness of the particle swarm optimization (PSO) algorithm, a population-based optimization technique with a global search feature inspired by the behavior of bird flocks, in determining the parameters of support vector machines (SVM) and adaptive network-based fuzzy inference system (ANFIS) methods was investigated. For this purpose, the performances of hybrid PSO-ɛ support vector regression (PSO-ɛSVR) and PSO-ANFIS models were studied to estimate the water level change of Lake Beysehir in Turkey. The change in water level was also estimated using the generalized regression neural network (GRNN) method, an iterative training procedure. Root mean square error (RMSE), mean absolute error (MAE), and the coefficient of determination (R²) were used to compare the obtained results. Efforts were made to estimate water level change (L) using different input combinations of monthly inflow-lost flow (I), precipitation (P), evaporation (E), and outflow (O). According to the obtained results, the methods other than PSO-ANN generally showed significantly similar performances to each other. The PSO-ɛSVR method, with the values minMAE = 0.0052 m, maxMAE = 0.04 m, and medianMAE = 0.0198 m; minRMSE = 0.0070 m, maxRMSE = 0.0518 m, and medianRMSE = 0.0241 m; minR² = 0.9169, maxR² = 0.9995, and medianR² = 0.9909 for the I-P-E-O combination in the testing period, proved superior to the other methods in forecasting the water level change of Lake Beysehir. The PSO-ANN models were the least successful models in all combinations.
Supervised hybrid feature selection based on PSO and rough sets for medical diagnosis.
Inbarani, H Hannah; Azar, Ahmad Taher; Jothi, G
2014-01-01
Medical datasets are often characterized by a large number of disease measurements and a relatively small number of patient records. Not all of these measurements (features) are important; some are irrelevant or noisy. These features may be especially harmful in the case of relatively small training sets, where irrelevancy and redundancy are harder to evaluate. On the other hand, this extreme number of features carries the problem of memory usage in order to represent the dataset. Feature Selection (FS) is a solution that involves finding a subset of prominent features to improve predictive accuracy and to remove the redundant features. Thus, the learning model receives a concise structure without forfeiting the predictive accuracy built by using only the selected prominent features. Therefore, nowadays, FS is an essential part of knowledge discovery. In this study, new supervised feature selection methods based on hybridization with Particle Swarm Optimization (PSO), PSO-based Relative Reduct (PSO-RR) and PSO-based Quick Reduct (PSO-QR), are presented for disease diagnosis. The experimental results on several standard medical datasets prove the efficiency of the proposed technique as well as its enhancements over the existing feature selection techniques.
Spectrophotometric determination of synthetic colorants using PSO-GA-ANN.
Benvidi, Ali; Abbasi, Saleheh; Gharaghani, Sajjad; Dehghan Tezerjani, Marzieh; Masoum, Saeed
2017-04-01
Four common food colorants, namely tartrazine, sunset yellow, ponceau 4R and methyl orange, are simultaneously quantified without prior chemical separation. In this study, an effective artificial neural network (ANN) method is designed for modeling multicomponent absorbance data in the presence of shifts or changes of peak shapes in spectroscopic analysis. Gradient descent methods such as the Levenberg-Marquardt function are usually used to determine the parameters of an ANN. However, these methods may provide inappropriate parameters. In this paper, we propose a combination of genetic algorithms (GA) and particle swarm optimization (PSO) to optimize the parameters of the ANN, and the algorithm is then used to model the relationship between the absorbance data and the concentration of the analytes. The hybrid algorithm has the benefits of both the PSO and GA techniques. The performance of this algorithm is compared to the performance of PSO-ANN, PC-ANN and an ANN based on the Levenberg-Marquardt function. The obtained results revealed that the designed model can accurately determine colorant concentrations in real and synthetic samples. According to the observations, it is clear that the proposed hybrid method is a powerful tool for estimating the concentration of food colorants with a high degree of overlap using a nonlinear artificial neural network.
Distributed Adaptive Particle Swarm Optimizer in Dynamic Environment
Cui, Xiaohui; Potok, Thomas E
2007-01-01
In the real world, we have to frequently deal with searching and tracking an optimal solution in a dynamical and noisy environment. This demands that the algorithm not only find the optimal solution but also track the trajectory of the changing solution. Particle Swarm Optimization (PSO) is a population-based stochastic optimization technique, which can find an optimal, or near optimal, solution to a numerical and qualitative problem. In PSO algorithm, the problem solution emerges from the interactions between many simple individual agents called particles, which make PSO an inherently distributed algorithm. However, the traditional PSO algorithm lacks the ability to track the optimal solution in a dynamic and noisy environment. In this paper, we present a distributed adaptive PSO (DAPSO) algorithm that can be used for tracking a non-stationary optimal solution in a dynamically changing and noisy environment.
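A common way to detect such environment changes, shown here as an illustrative sketch rather than the paper's DAPSO mechanism, is to re-evaluate a memorized best solution: if its fitness no longer matches the stored value, the landscape has shifted and swarm memories are stale.

```python
def detect_change(f, best_pos, best_val, tol=1e-9):
    """Hypothetical change-detection trick for dynamic optimization:
    re-evaluate the stored best position; a fitness drift beyond `tol`
    signals that the environment changed and memories should be reset."""
    return abs(f(best_pos) - best_val) > tol
```

On a change signal, a tracker would typically reset personal bests (and possibly re-randomize part of the swarm) so particles do not chase an obsolete optimum.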
A Novel Modification of PSO Algorithm for SML Estimation of DOA
Chen, Haihua; Li, Shibao; Liu, Jianhang; Liu, Fen; Suzuki, Masakiyo
2016-01-01
This paper addresses the issue of reducing the computational complexity of Stochastic Maximum Likelihood (SML) estimation of Direction-of-Arrival (DOA). The SML algorithm is well known for its high accuracy of DOA estimation in sensor array signal processing. However, its computational complexity is very high because the estimation of the SML criterion is a multi-dimensional non-linear optimization problem. As a result, it is hard to apply the SML algorithm to real systems. The Particle Swarm Optimization (PSO) algorithm is considered a rather efficient method for multi-dimensional non-linear optimization problems in DOA estimation. However, the conventional PSO algorithm suffers from two defects, namely, too many particles and too many iterations. Therefore, the computational complexity of SML estimation using the conventional PSO algorithm is still rather high. To overcome these two defects and to reduce the computational complexity further, this paper proposes a novel modification of the conventional PSO algorithm for SML estimation, which we call the Joint-PSO algorithm. The core idea of the modification is to use the solution of Estimation of Signal Parameters via Rotational Invariance Techniques (ESPRIT) and the stochastic Cramer-Rao bound (CRB) to determine a novel initialization space. Since this initialization space is already close to the solution of SML, fewer particles and fewer iterations are needed. As a result, the computational complexity can be greatly reduced. In simulations, we compare the proposed algorithm with the conventional PSO algorithm, the classic Alternating Minimization (AM) algorithm and the Genetic Algorithm (GA). Simulation results show that our proposed algorithm is one of the most efficient solving algorithms and shows great potential for the application of SML in real systems. PMID:27999377
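The narrowed-initialization idea can be illustrated generically: scatter the swarm in a small box around a coarse prior estimate (such as an ESPRIT solution) instead of the full search space, so fewer particles and iterations are needed. The function below is a hypothetical sketch; its name and parameters are illustrative, not from the paper:

```python
import random

def init_near_estimate(estimate, radius, n_particles, seed=0):
    """Sketch of a narrowed PSO initialization space: particles are drawn
    uniformly from a box of half-width `radius` centered on a coarse
    prior estimate of the solution (e.g. from a cheap closed-form method)."""
    rng = random.Random(seed)
    return [[rng.uniform(e - radius, e + radius) for e in estimate]
            for _ in range(n_particles)]
```

Because every particle starts near the coarse estimate, the swarm only has to refine the solution locally rather than explore the whole parameter space.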
An effective PSO-based memetic algorithm for flow shop scheduling.
Liu, Bo; Wang, Ling; Jin, Yi-Hui
2007-02-01
This paper proposes an effective particle swarm optimization (PSO)-based memetic algorithm (MA) for the permutation flow shop scheduling problem (PFSSP) with the objective of minimizing the maximum completion time, which is a typical non-deterministic polynomial-time (NP) hard combinatorial optimization problem. In the proposed PSO-based MA (PSOMA), both PSO-based search operators and some special local search operators are designed to balance the exploration and exploitation abilities. In particular, the PSOMA applies the evolutionary search mechanism of PSO, which is characterized by individual improvement, population cooperation, and competition, to effectively perform exploration. On the other hand, the PSOMA utilizes several adaptive local searches to perform exploitation. First, to make PSO suitable for solving the PFSSP, a ranked-order value rule based on random key representation is presented to convert the continuous position values of particles to job permutations. Second, to generate an initial swarm with a certain quality and diversity, the well-known Nawaz-Enscore-Ham (NEH) heuristic is incorporated into the initialization of the population. Third, to balance the exploration and exploitation abilities, after the standard PSO-based search operation, a new local search technique named NEH_1 insertion is probabilistically applied to some good particles selected by a roulette wheel mechanism with a specified probability. Fourth, to enrich the search behaviors and avoid premature convergence, a simulated annealing (SA)-based local search with multiple different neighborhoods is designed and incorporated into the PSOMA. Meanwhile, an effective adaptive meta-Lamarckian learning strategy is employed to decide which neighborhood to use in the SA-based local search. Finally, to further enhance the exploitation ability, a pairwise-based local search is applied after the SA-based search. Simulation results based on benchmarks demonstrate the effectiveness of the proposed PSOMA.
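The ranked-order value (ROV) decoding mentioned above maps a particle's continuous position vector to a job permutation. A minimal sketch of one common form of this rule follows (assuming the job with the smallest position value is scheduled first, with ties broken by job index; the paper's exact rule may differ in detail).

```python
def ranked_order_value(position):
    """Decode a continuous position vector into a job permutation:
    the job with the smallest position value is scheduled first,
    then the next smallest, and so on. Ties break by job index."""
    return [j for j, _ in sorted(enumerate(position), key=lambda t: (t[1], t[0]))]
```

For example, a particle at position [0.8, 0.1, 0.5] decodes to the schedule job 1, job 2, job 0, so standard continuous PSO updates can drive a combinatorial search.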
NASA Astrophysics Data System (ADS)
Zhan, Liwei; Li, Chengwei
2017-02-01
A hybrid PSO-SVM-based model is proposed to predict the friction coefficient between aircraft tire and coating. The presented hybrid model combines a support vector machine (SVM) with the particle swarm optimization (PSO) technique. SVM has been successfully adopted to solve regression problems. Its regression accuracy depends strongly on the choice of parameters such as the regularization constant C, the RBF kernel parameter γ, and the epsilon parameter ε of the SVM training procedure. However, SVM-based prediction of the friction coefficient between aircraft tire and coating has yet to be explored. The experiment reveals that drop height and tire rotational speed are the factors affecting the friction coefficient. Bearing this in mind, the friction coefficient can be predicted with the hybrid PSO-SVM-based model from the measured friction coefficient between aircraft tire and coating. To compare regression accuracy, a grid search (GS) method and a genetic algorithm (GA) are used to optimize the relevant parameters (C, γ, and ε), respectively. Regression accuracy is measured by the coefficient of determination (R²). The results show that the hybrid PSO-RBF-SVM-based model has better accuracy than the GS-RBF-SVM- and GA-RBF-SVM-based models. The agreement of this model (PSO-RBF-SVM) with the experimental data confirms its good performance.
Ye, Zhiwei; Wang, Mingwei; Hu, Zhengbing; Liu, Wei
2015-01-01
Image enhancement is an important procedure in image processing and analysis. This paper presents a new technique, using a modified measure and a blending of cuckoo search and particle swarm optimization (CS-PSO), to adaptively enhance low-contrast images. Contrast enhancement is obtained by global transformation of the input intensities; the method employs the incomplete Beta function as the transformation function and a novel criterion for measuring image quality that considers three factors: threshold, entropy value, and gray-level probability density of the image. The enhancement process is a nonlinear optimization problem with several constraints. CS-PSO is utilized to maximize the objective fitness criterion in order to enhance the contrast and detail in an image by adapting the parameters of a novel extension of a local enhancement technique. The performance of the proposed method has been compared with existing techniques such as linear contrast stretching, histogram equalization, and evolutionary-computing-based image enhancement methods like the backtracking search algorithm, differential search algorithm, genetic algorithm, and particle swarm optimization in terms of processing time and image quality. Experimental results demonstrate that the proposed method is robust and adaptive and exhibits better performance than the other methods considered in the paper.
Early Mission Design of Transfers to Halo Orbits via Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Abraham, Andrew J.; Spencer, David B.; Hart, Terry J.
2016-06-01
Particle Swarm Optimization (PSO) is used to prune the search space of a low-thrust trajectory transfer from a high-altitude, Earth orbit to a Lagrange point orbit in the Earth-Moon system. Unlike a gradient based approach, this evolutionary PSO algorithm is capable of avoiding undesirable local minima. The PSO method is extended to a "local" version and uses a two dimensional search space that is capable of reducing the computation run-time by an order of magnitude when compared with published work. A technique for choosing appropriate PSO parameters is demonstrated and an example of an optimized trajectory is discussed.
Structured Robust Loop shaping control for HIMAT System Using PSO
NASA Astrophysics Data System (ADS)
Kaitwanidvilai, Somyot; Jangwanitlert, Anuwat; Parnichkun, Manukid
2009-01-01
Robust loop shaping control is a feasible method for designing a robust controller; however, the controller designed by this method is complicated and difficult to implement in practice. To overcome this problem, this paper proposes a new design technique for a fixed-structure robust loop shaping controller for a highly maneuverable airplane, HIMAT. The performance and robust stability conditions of the designed system satisfying H∞ loop shaping control are formulated as the objective function of the optimization problem. The Particle Swarm Optimization (PSO) technique is adopted to solve this problem and to obtain the control parameters of the proposed controller. Simulation results demonstrate that the proposed approach is numerically efficient and leads to performance comparable to that of the other method.
NASA Astrophysics Data System (ADS)
Tofighi, Elham; Mahdizadeh, Amin
2016-09-01
This paper addresses the problem of automatic tuning of weighting coefficients for the nonlinear model predictive control (NMPC) of wind turbines. The choice of weighting coefficients in NMPC is critical due to their explicit impact on the efficiency of wind turbine control. Classically, these weights are selected based on an intuitive understanding of the system dynamics and control objectives. Such empirical methods, however, may not yield optimal solutions, especially as the number of parameters to be tuned and the nonlinearity of the system increase. In this paper, the problem of determining weighting coefficients for the cost function of the NMPC controller is formulated as a two-level optimization process in which an upper-level PSO-based optimization computes the weighting coefficients for the lower-level NMPC controller, which generates control signals for the wind turbine. The proposed method is implemented to tune the weighting coefficients of an NMPC controller driving the NREL 5-MW wind turbine. The results are compared with similar simulations for a manually tuned NMPC controller. The comparison verifies the improved performance of the controller with weights computed by the PSO-based technique.
pso@autodock: a fast flexible molecular docking program based on Swarm intelligence.
Namasivayam, Vigneshwaran; Günther, Robert
2007-12-01
In the quest for novel therapeutics, molecular docking methods have proven to be valuable tools for screening large libraries of compounds to determine the interactions of potential drugs with target proteins. A widely used docking approach is the simulation of the docking process guided by a binding energy function. On the basis of the molecular docking program AutoDock, we present pso@autodock as a tool for fast flexible molecular docking. Our novel Particle Swarm Optimization (PSO) algorithms, varCPSO and varCPSO-ls, are suited for rapid docking of highly flexible ligands. Thus, a ligand with 23 rotatable bonds was successfully docked within as few as 100,000 computing steps (rmsd = 0.87 Å), which corresponds to only 10% of the computing time demanded by AutoDock. In comparison to other docking techniques such as GOLD 3.0, DOCK 6.0, FlexX 2.2.0, AutoDock 3.05, and SODOCK, pso@autodock provides the smallest rmsd values for 12 of 37 protein-ligand complexes. Its average rmsd value of 1.4 Å is significantly lower than those obtained with the other docking programs, which are all above 2.0 Å. Thus, pso@autodock is suggested as a highly efficient docking program, in terms of speed and quality, for flexible peptide-protein docking and virtual screening studies.
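The rmsd figures quoted above score a docked pose against a reference pose over matched atom coordinates. For reference, the standard alignment-free form of that measure is simply:

```python
import math

def rmsd(coords_a, coords_b):
    """Root-mean-square deviation between two equally sized 3-D coordinate
    sets with a fixed atom correspondence (no superposition/alignment step,
    as is usual when comparing a docked pose to the crystal pose)."""
    assert len(coords_a) == len(coords_b) and coords_a
    sq = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
             for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b))
    return math.sqrt(sq / len(coords_a))
```

A pose within roughly 2 Å rmsd of the reference is conventionally counted as a successful docking, which is why the 1.4 Å average above is notable.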
Multi-variable control of chaos using PSO-based minimum entropy control
NASA Astrophysics Data System (ADS)
Sadeghpour, Mehdi; Salarieh, Hassan; Vossoughi, Gholamreza; Alasty, Aria
2011-06-01
The minimum entropy (ME) control is a chaos control technique which causes chaotic behavior to vanish by stabilizing the unstable periodic orbits of the system without using a mathematical model of the system. In this technique, a controller, normally a delayed feedback controller, with an adjustable parameter such as the feedback gain is used, and the adjustable parameter is determined such that the entropy of the system is minimized. This paper proposes PSO-based multi-variable ME control, in which two or more control parameters are adjusted concurrently, either in a single control input or in multiple control inputs. Thus it is possible to use two or more feedback terms in the delayed feedback controller and adjust their gains; the multi-variable ME control can also be used in multi-input systems. The minimizing engine in this technique is the particle swarm optimizer. Using online PSO, the PSO-based multi-variable ME control technique is applied to stabilize the 1-cycle fixed points of the logistic map, the Hénon map, and the chaotic Duffing system. The results exhibit the good effectiveness and performance of this controller.
Classification of Two Class Motor Imagery Tasks Using Hybrid GA-PSO Based K-Means Clustering.
Suraj; Tiwari, Purnendu; Ghosh, Subhojit; Sinha, Rakesh Kumar
2015-01-01
Transferring the brain-computer interface (BCI) from laboratory conditions to real-world applications requires the BCI to operate asynchronously, without any time constraint. The high level of dynamism in the electroencephalogram (EEG) signal motivates the use of evolutionary algorithms (EA). Motivated by these two facts, in this work a hybrid GA-PSO based K-means clustering technique has been used to distinguish two-class motor imagery (MI) tasks. The proposed hybrid GA-PSO based K-means clustering is found to outperform genetic algorithm (GA) and particle swarm optimization (PSO) based K-means clustering techniques in terms of both accuracy and execution time. The lower execution time of the hybrid GA-PSO technique makes it suitable for real-time BCI applications. Time-frequency representation (TFR) techniques have been used to extract features from the signal under investigation, and the feature vector is formed relying on the concepts of event-related synchronization (ERS) and desynchronization (ERD).
NASA Astrophysics Data System (ADS)
Sameen, Maher Ibrahim; Pradhan, Biswajeet
2016-06-01
In this study, we propose a novel built-up spectral index developed by using the particle swarm optimization (PSO) technique for Worldview-2 images. PSO was used to select the relevant bands from the eight spectral bands of the Worldview-2 image, which were then used for index development. Multiobjective optimization was used to minimize the number of selected spectral bands and to maximize the classification accuracy. The results showed that the most important and relevant spectral bands among the eight bands for built-up area extraction are band 4 (yellow) and band 7 (NIR1). Using those relevant spectral bands, the final spectral index was formulated by developing a normalized band ratio. Validation of the classification result showed that our novel spectral index performs well compared to the existing WV-BI index. The accuracy assessment showed that the proposed spectral index could extract built-up areas from a Worldview-2 image with an area under the curve (AUC) of 0.76, indicating the effectiveness of the developed spectral index. Further improvement could be achieved by using several datasets during the index development process to ensure the transferability of the index to other datasets and study areas.
Chen, Shyi-Ming; Manalu, Gandhi Maruli Tua; Pan, Jeng-Shyang; Liu, Hsiang-Chuan
2013-06-01
In this paper, we present a new method for fuzzy forecasting based on two-factors second-order fuzzy-trend logical relationship groups and particle swarm optimization (PSO) techniques. First, we fuzzify the historical training data of the main factor and the secondary factor, respectively, to form two-factors second-order fuzzy logical relationships. Then, we group the two-factors second-order fuzzy logical relationships into two-factors second-order fuzzy-trend logical relationship groups. Then, we obtain the optimal weighting vector for each fuzzy-trend logical relationship group by using PSO techniques to perform the forecasting. We also apply the proposed method to forecast the Taiwan Stock Exchange Capitalization Weighted Stock Index and the NTD/USD exchange rates. The experimental results show that the proposed method gets better forecasting performance than the existing methods.
A modified PSO structure resulting in high exploration ability with convergence guaranteed.
Chen, Xin; Li, Yangmin
2007-10-01
Particle swarm optimization (PSO) is a population-based stochastic recursion procedure which simulates the social behavior of a swarm of ants or a school of fish. Based upon the general representation of individual particles, this paper introduces a decreasing coefficient into the updating principle, so that PSO can be viewed as a regular stochastic approximation algorithm. To improve exploration ability, a random velocity is added to the velocity update in order to balance exploration behavior and convergence rate across different optimization problems. To emphasize the role of this additional velocity, the modified PSO paradigm is named PSO with controllable random exploration velocity (PSO-CREV). Its convergence is proved using Lyapunov theory on stochastic processes. From the proof, some properties brought by the stochastic components are obtained, such as "divergence before convergence" and "controllable exploration." Finally, a series of benchmarks is used to verify the feasibility of PSO-CREV.
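The paper's exact PSO-CREV update law is not reproduced here; the following is only an illustrative one-dimensional sketch of the general idea, a canonical velocity update plus an additive random exploration velocity whose magnitude decays over iterations. All parameter values (w, c1, c2, eps0, decay) are hypothetical.

```python
import random

def crev_velocity(v, x, pbest, gbest, t, w=0.7, c1=1.5, c2=1.5,
                  eps0=0.5, decay=0.99, rng=random):
    """One-dimensional velocity update with an additive random exploration
    velocity whose bound eps shrinks over iterations t (illustrative only,
    not the exact PSO-CREV law from the paper)."""
    eps = eps0 * decay ** t                    # decreasing exploration bound
    r1, r2 = rng.random(), rng.random()
    explore = rng.uniform(-eps, eps)           # controllable random exploration
    return (w * v + c1 * r1 * (pbest - x)
            + c2 * r2 * (gbest - x) + explore)
```

Early on the exploration term dominates and keeps the swarm searching broadly ("divergence before convergence"); as eps decays the update approaches the standard PSO recursion and the swarm can settle.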
Optimal multiobjective design of digital filters using spiral optimization technique.
Ouadi, Abderrahmane; Bentarzi, Hamid; Recioui, Abdelmadjid
2013-01-01
The multiobjective design of digital filters using spiral optimization technique is considered in this paper. This new optimization tool is a metaheuristic technique inspired by the dynamics of spirals. It is characterized by its robustness, immunity to local optima trapping, relative fast convergence and ease of implementation. The objectives of filter design include matching some desired frequency response while having minimum linear phase; hence, reducing the time response. The results demonstrate that the proposed problem solving approach blended with the use of the spiral optimization technique produced filters which fulfill the desired characteristics and are of practical use.
Technique and staining optimization for leucoconcentration.
Pierrez, J; Guerci, A; Guerci, O
1987-09-01
In clinical applications of cytometry, it is important to obtain cell suspensions rapidly and with as little cytological alteration as possible. A procedure has been developed to prepare cell suspensions for flow cytometric analysis. The leucoconcentration technique, first described by Herbeuval for cytologic analysis, has been modified for application in cytometry. This technique involves saponin lysis of the red cells of peripheral blood or bone marrow samples that have previously been fixed with a picric acid alcohol solution. Cells in suspension are not shifted and their tinctorial affinity is not modified. The cells are then stained with mithramycin. Each parameter defined by Crissman has been analyzed to determine the best staining conditions. The availability of leucoconcentration with mithramycin DNA staining permits determination of the cell cycle with fine resolution.
Optimal Multiobjective Design of Digital Filters Using Taguchi Optimization Technique
NASA Astrophysics Data System (ADS)
Ouadi, Abderrahmane; Bentarzi, Hamid; Recioui, Abdelmadjid
2014-01-01
The multiobjective design of digital filters using the powerful Taguchi optimization technique is considered in this paper. This relatively new optimization tool has been recently introduced to the field of engineering and is based on orthogonal arrays. It is characterized by its robustness, immunity to local optima trapping, relative fast convergence and ease of implementation. The objectives of filter design include matching some desired frequency response while having minimum linear phase; hence, reducing the time response. The results demonstrate that the proposed problem solving approach blended with the use of the Taguchi optimization technique produced filters that fulfill the desired characteristics and are of practical use.
A survey of compiler optimization techniques
NASA Technical Reports Server (NTRS)
Schneck, P. B.
1972-01-01
Major optimization techniques of compilers are described and grouped into three categories: machine dependent, architecture dependent, and architecture independent. Machine-dependent optimizations tend to be local and are performed upon short spans of generated code, using particular properties of an instruction set to reduce the time or space required by a program. Architecture-dependent optimizations are global and are performed while generating code. These optimizations consider the structure of a computer, but not its detailed instruction set. Architecture-independent optimizations are also global but are based on analysis of the program flow graph and the dependencies among statements of the source program. A conceptual review of a universal optimizer that performs architecture-independent optimizations at the source-code level is also presented.
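An architecture-independent, source-level optimization of the kind the survey describes can be illustrated with a toy constant-folding pass. This is a modern Python sketch for illustration only (the survey itself predates such tooling); it folds binary operations on constants bottom-up over an expression tree.

```python
import ast
import operator

# Fold only a safe subset of binary operators.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

class ConstantFolder(ast.NodeTransformer):
    """Replace BinOp nodes whose operands are both constants with the
    computed constant, working bottom-up so nested folds compose."""
    def visit_BinOp(self, node):
        self.generic_visit(node)  # fold children first
        op = OPS.get(type(node.op))
        if (op and isinstance(node.left, ast.Constant)
                and isinstance(node.right, ast.Constant)):
            return ast.copy_location(
                ast.Constant(op(node.left.value, node.right.value)), node)
        return node

def fold(expr_src):
    """Parse an expression, constant-fold it, and return the new source."""
    tree = ConstantFolder().visit(ast.parse(expr_src, mode="eval"))
    return ast.unparse(ast.fix_missing_locations(tree))
```

For example, `fold("2 * 3 + x")` rewrites the expression to `"6 + x"`: the fold needs no knowledge of the target machine or its instruction set, which is exactly what makes it architecture independent.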
NASA Astrophysics Data System (ADS)
Jain, Narender Kumar; Nangia, Uma; Jain, Aishwary
2016-06-01
In this paper, the multiobjective economic load dispatch (MELD) problem, considering generation cost and transmission losses, has been formulated using the priority goal programming (PGP) technique. In this formulation, the equality constraint has been handled by the inclusion of a penalty parameter K. It has been observed that fixing its value at 1,000 keeps the equality constraint within limits. The non-inferior set for the IEEE 5, 14 and 30-bus systems has been generated by the Particle Swarm Optimization (PSO) technique. The best compromise solution has been chosen as the one which gives equal percentage savings for both objectives.
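The penalty treatment of the power-balance equality constraint can be sketched as a penalized objective of the kind described. The quadratic cost functions below are hypothetical placeholders; only the penalty weight K = 1000 comes from the abstract.

```python
def penalized_dispatch_cost(p, cost_fns, demand, losses, k=1000.0):
    """Generation cost plus a quadratic penalty (weight K) on the
    power-balance equality constraint sum(P) = demand + losses,
    so an unconstrained optimizer such as PSO can handle it."""
    cost = sum(fn(pi) for fn, pi in zip(cost_fns, p))
    imbalance = sum(p) - (demand + losses)
    return cost + k * imbalance ** 2
```

With K this large, any dispatch that violates the balance by even a small amount is heavily penalized, which is how the equality constraint is kept within limits during the PSO search.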
A non-linear UAV altitude PSO-PD control
NASA Astrophysics Data System (ADS)
Orlando, Calogero
2015-12-01
In this work, a nonlinear model-based approach is presented for the altitude stabilization of a hexarotor unmanned aerial vehicle (UAV). The mathematical model and control of the hexacopter airframe are presented. To stabilize the system along the vertical direction, a Proportional-Derivative (PD) controller is considered. A particle swarm optimization (PSO) approach is used to select the optimal parameters of the control algorithm, taking into account different objective functions. Simulation sets are performed for the nonlinear system to show how the PSO-tuned PD controller drives the altitude error along the Z earth direction to zero.
Particle Swarm Optimization for inverse modeling of solute transport in fractured gneiss aquifer.
Abdelaziz, Ramadan; Zambrano-Bigiarini, Mauricio
2014-08-01
Particle Swarm Optimization (PSO) has received considerable attention as a global optimization technique from scientists of different disciplines around the world. In this article, we illustrate how to use PSO for inverse modeling of a coupled flow and transport groundwater model (MODFLOW2005-MT3DMS) in a fractured gneiss aquifer. In particular, the hydroPSO R package is used as the optimization engine, because it has been specifically designed to calibrate environmental, hydrological and hydrogeological models. In addition, hydroPSO implements the latest Standard Particle Swarm Optimization algorithm (SPSO-2011), with an adaptive random topology and rotational invariance constituting the main advancements over previous PSO versions. A tracer test conducted in the experimental field at TU Bergakademie Freiberg (Germany) is used as a case study. A double-porosity approach is used to simulate the solute transport in the fractured gneiss aquifer. Tracer concentrations obtained with hydroPSO were in good agreement with the corresponding observations, as measured by a high value of the coefficient of determination and a low sum of squared residuals. Several graphical outputs automatically generated by hydroPSO provided useful insights to assess the quality of the calibration results. It was found that hydroPSO required a small number of model runs to reach the region of the global optimum, and it proved to be both an effective and efficient optimization technique to calibrate the movement of solute transport over time in a fractured aquifer. In addition, the parallel feature of hydroPSO reduced the total computation time of the inverse modeling process to as little as an eighth of the time required without it. This work provides a first attempt to demonstrate the capability and versatility of hydroPSO as an optimizer of a coupled flow and transport model for contaminant migration.
Multidisciplinary design optimization using multiobjective formulation techniques
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi; Pagaldipti, Narayanan S.
1995-01-01
This report addresses the development of a multidisciplinary optimization procedure using an efficient semi-analytical sensitivity analysis technique and multilevel decomposition for the design of aerospace vehicles. A semi-analytical sensitivity analysis procedure is developed for calculating computational grid sensitivities and aerodynamic design sensitivities. The accuracy and efficiency of the sensitivity analysis procedure are established through comparison of the results with those obtained using a finite difference technique. The developed sensitivity analysis technique is then used within a multidisciplinary optimization procedure for designing aerospace vehicles. The optimization problem, with the integration of aerodynamics and structures, is decomposed into two levels. Optimization is performed for improved aerodynamic performance at the first level and improved structural performance at the second level. Aerodynamic analysis is performed by solving the three-dimensional parabolized Navier-Stokes equations. A nonlinear programming technique and an approximate analysis procedure are used for optimization. The procedure developed is applied to design the wing of a high speed aircraft. Results obtained show significant improvements in the aircraft aerodynamic and structural performance when compared to a reference or baseline configuration. The use of the semi-analytical sensitivity technique provides significant computational savings.
Chen, Shyi-Ming; Hsin, Wen-Chyuan
2015-07-01
In this paper, we propose a new weighted fuzzy interpolative reasoning method for sparse fuzzy rule-based systems based on the slopes of fuzzy sets. We also propose a particle swarm optimization (PSO)-based weights-learning algorithm to automatically learn the optimal weights of the antecedent variables of fuzzy rules for weighted fuzzy interpolative reasoning. We apply the proposed weighted fuzzy interpolative reasoning method using the proposed PSO-based weights-learning algorithm to deal with the computer activity prediction problem, the multivariate regression problems, and the time series prediction problems. The experimental results show that the proposed weighted fuzzy interpolative reasoning method using the proposed PSO-based weights-learning algorithm outperforms the existing methods for dealing with the computer activity prediction problem, the multivariate regression problems, and the time series prediction problems.
Shape optimization techniques for musical instrument design
NASA Astrophysics Data System (ADS)
Henrique, Luis; Antunes, Jose; Carvalho, Joao S.
2002-11-01
The design of musical instruments is still mostly based on empirical knowledge and costly experimentation. One interesting improvement is the shape optimization of resonating components, given a number of constraints (allowed parameter ranges, shape smoothness, etc.), so that vibrations occur at specified modal frequencies. Each admissible geometrical configuration generates an error between computed eigenfrequencies and the target set. Typically, error surfaces present many local minima, corresponding to suboptimal designs. This difficulty can be overcome using global optimization techniques, such as simulated annealing. However, these methods are greedy in terms of the number of function evaluations required. Thus, the computational effort can be unacceptable when complex problems, such as bell optimization, are tackled. These issues are addressed in this paper, and a method for improving optimization procedures is proposed. Instead of using the local geometric parameters as search variables, the system geometry is modeled in terms of truncated series of orthogonal space functions, and optimization is performed on their amplitude coefficients. Fourier series and orthogonal polynomials are typical such functions. This technique considerably reduces the number of search variables and has the potential for significant computational savings in complex problems. It is illustrated by optimizing the shapes of both current and uncommon marimba bars.
Multiobjective optimization techniques for structural design
NASA Technical Reports Server (NTRS)
Rao, S. S.
1984-01-01
Multiobjective programming techniques are important in the design of complex structural systems whose quality depends generally on a number of different and often conflicting objective functions which cannot be combined into a single design objective. The applicability of multiobjective optimization techniques is studied with reference to simple design problems. Specifically, the parameter optimization of a cantilever beam with a tip mass and a three-degree-of-freedom vibration isolation system, and the trajectory optimization of a cantilever beam, are considered. The solutions of these multicriteria design problems are attempted using global criterion, utility function, game theory, goal programming, goal attainment, bounded objective function, and lexicographic methods. It has been observed that the game theory approach required the maximum computational effort, but in all cases it yielded better optimum solutions with a proper balance of the various objective functions.
PSO algorithm enhanced with Lozi Chaotic Map - Tuning experiment
Pluhacek, Michal; Senkerik, Roman; Zelinka, Ivan
2015-03-10
This paper investigates the effect of tuning the control parameters of the Lozi chaotic map, employed as a chaotic pseudo-random number generator for the particle swarm optimization algorithm. Three benchmark functions are selected from the IEEE CEC 2013 competition benchmark set. The Lozi map is extensively tuned and the performance of PSO is evaluated.
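The Lozi map referenced above is simple enough to sketch. The following is a minimal illustration of how such a map can serve as a chaotic pseudo-random number generator for PSO; the parameter values a = 1.7 and b = 0.5 (a classic chaotic regime), the initial condition, the transient skip, and the min-max normalization into [0, 1] are illustrative assumptions, not the tuned settings studied in the paper.

```python
def lozi_map(n, a=1.7, b=0.5, x0=0.1, y0=0.1, skip=100):
    """Iterate the Lozi map, discard a transient, return n raw x-values."""
    x, y = x0, y0
    xs = []
    for i in range(n + skip):
        # Lozi map: x' = 1 - a|x| + y,  y' = b x
        x, y = 1.0 - a * abs(x) + y, b * x
        if i >= skip:
            xs.append(x)
    return xs

def chaotic_uniform(n, **kw):
    """Min-max normalize the chaotic sequence into [0, 1]."""
    xs = lozi_map(n, **kw)
    lo, hi = min(xs), max(xs)
    return [(v - lo) / (hi - lo) for v in xs]
```

In a chaos-driven PSO, values drawn this way would replace the uniform r1, r2 draws in the velocity update.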
Optimization techniques for integrating spatial data
Herzfeld, U.C.; Merriam, D.F.
1995-01-01
Two optimization techniques to predict a spatial variable from any number of related spatial variables are presented. The applicability of the two different methods for petroleum-resource assessment is tested in a mature oil province of the Midcontinent (USA). The information on petroleum productivity, usually not directly accessible, is related indirectly to geological, geophysical, petrographical, and other observable data. This paper presents two approaches based on construction of a multivariate spatial model from the available data to determine a relationship for prediction. In the first approach, the variables are combined into a spatial model by an algebraic map-comparison/integration technique. Optimal weights for the map-comparison function are determined by the Nelder-Mead downhill simplex algorithm in multidimensions. Geologic knowledge is necessary to provide a first guess of the weights to start the automated procedure, because the solution is not unique. In the second approach, active set optimization for linear prediction of the target under positivity constraints is applied. Here, the procedure seems to select one variable from each data type (structure, isopachous, and petrophysical), eliminating data redundancy. Automating the determination of optimum combinations of different variables by applying optimization techniques is a valuable extension of the algebraic map-comparison/integration approach to analyzing spatial data. Because of the capability of handling multivariate data sets and partial retention of geographical information, the approaches can be useful in mineral-resource exploration. © 1995 International Association for Mathematical Geology.
Software for the grouped optimal aggregation technique
NASA Technical Reports Server (NTRS)
Brown, P. M.; Shaw, G. W. (Principal Investigator)
1982-01-01
The grouped optimal aggregation technique produces minimum-variance, unbiased estimates of acreage and production for countries, zones (states), or any designated collection of acreage strata. It uses yield predictions, historical acreage information, and direct acreage estimates from satellite data. The acreage strata are grouped in such a way that the ratio model over historical acreage provides a smaller variance than if the model were applied to each individual stratum. An optimal weighting matrix based on historical acreages provides the link between incomplete direct acreage estimates and the total, current acreage estimate.
Optimization of printing techniques for electrochemical biosensors
NASA Astrophysics Data System (ADS)
Zainuddin, Ahmad Anwar; Mansor, Ahmad Fairuzabadi Mohd; Rahim, Rosminazuin Ab; Nordin, Anis Nurashikin
2017-03-01
Electrochemical biosensors show great promise for point-of-care applications due to their low cost, portability and compatibility with microfluidics. The miniature size of these sensors provides advantages in terms of sensitivity and specificity and allows them to be mass produced in arrays. The most reliable fabrication technique for these sensors is lithography followed by metal deposition using sputtering or chemical vapor deposition techniques. This technique, usually performed in a cleanroom, requires expensive masking followed by deposition. Recently, cheaper printing techniques such as screen printing and ink-jet printing have become popular due to their low cost, ease of fabrication, and mask-less nature. In this paper, two different printing techniques, namely inkjet and screen printing, are demonstrated for an electrochemical biosensor. For the ink-jet printing technique, the optimization of key printing parameters, such as pulse voltage, drop spacing, waveform settings, in-house temperature, and cure annealing, for obtaining high-quality droplets is discussed. These factors are compared with screen-printing parameters such as mesh size, emulsion thickness, minimum line spacing, and curing times. The reliability and reproducibility of the sensors are evaluated using the scotch tape test, resistivity, and profilometer measurements. It was found that inkjet printing is superior because it is mask-less, has a minimum resolution of 100 µm compared to 200 µm for screen printing, and has a higher reproducibility rate of 90% compared to 78% for screen printing.
Position control of nonlinear hydraulic system using an improved PSO based PID controller
NASA Astrophysics Data System (ADS)
Ye, Yi; Yin, Chen-Bo; Gong, Yue; Zhou, Jun-jing
2017-01-01
This paper addresses the position control of the valve-controlled cylinder system employed in hydraulic excavators. Nonlinearities such as dead zone, saturation, discharge coefficient and friction that exist in the system are highlighted during the mathematical modeling. On this basis, a simulation model is established and then validated against experiments. Aiming to achieve excellent position control performance, an improved particle swarm optimization (PSO) algorithm is presented to search for the optimal proportional-integral-derivative (PID) controller gains for the nonlinear hydraulic system. The proposed algorithm is a hybrid based on the standard PSO algorithm, with the addition of selection and crossover operators from the genetic algorithm to enhance searching efficiency. Furthermore, a nonlinear decreasing scheme for the inertia weight of the improved PSO algorithm is adopted to balance the global exploration and local exploitation abilities of the particles. A co-simulation platform combining the simulation model with the improved-PSO-tuned PID controller is then developed. Comparisons of the improved PSO, standard PSO, and Phase Margin (PM) tuning methods are carried out with three position references (step signal, ramp signal, and sinusoidal wave) using the co-simulation platform. The results demonstrate that the improved PSO algorithm performs well in PID control for positioning of the nonlinear hydraulic system.
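The nonlinear decreasing inertia-weight scheme mentioned above can be sketched as a simple schedule. The power-law form, the exponent, and the endpoint values below are common choices in the PSO literature, not the exact curve used by the authors.

```python
def inertia_weight(t, t_max, w_start=0.9, w_end=0.4, power=1.2):
    """Inertia weight at iteration t, decaying nonlinearly from w_start to w_end.

    A large weight early on favors global exploration; a small weight late
    favors local exploitation around the best-known solutions.
    """
    frac = t / t_max
    return w_end + (w_start - w_end) * (1.0 - frac) ** power
```

With power = 1 this reduces to the familiar linearly decreasing schedule; power > 1 keeps the weight higher for longer before dropping off.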
Optimal piezoelectric switching technique for vibration damping
NASA Astrophysics Data System (ADS)
Neubauer, Marcus; Oleskiewicz, Robert
2007-04-01
This paper describes piezoelectric switching techniques for vibration damping. The dynamical behaviour of a piezoceramic connected to a switching LR shunt and the dissipated energy are obtained using a fundamental piezoelectric model. All calculations are performed in a normalized way and highlight the influence of the electromechanical coupling coefficient of the piezoceramic and the shunt parameters. For the first time, a precise result for the dynamics of a shunted piezoceramic is derived. The analytic results are used to determine the optimal switching sequence and external branch parameters in order to maximize the damping performance. The results are validated by measurements on a clamped beam.
Machine Learning Techniques in Optimal Design
NASA Technical Reports Server (NTRS)
Cerbone, Giuseppe
1992-01-01
Many important applications can be formalized as constrained optimization tasks. For example, we are studying the engineering domain of two-dimensional (2-D) structural design. In this task, the goal is to design a structure of minimum weight that bears a set of loads. A solution to a design problem in which there is a single load (L) and two stationary support points (S1 and S2), consisting of four members, E1, E2, E3, and E4, that connect the load to the support points, is discussed. In principle, optimal solutions to problems of this kind can be found by numerical optimization techniques. However, in practice [Vanderplaats, 1984] these methods are slow and they can produce different local solutions whose quality (ratio to the global optimum) varies with the choice of starting points. Hence, their applicability to real-world problems is severely restricted. To overcome these limitations, we propose to augment numerical optimization by first performing a symbolic compilation stage to produce: (a) objective functions that are faster to evaluate and that depend less on the choice of the starting point and (b) selection rules that associate problem instances to a set of recommended solutions. These goals are accomplished by successive specializations of the problem class and of the associated objective functions. In the end, this process reduces the problem to a collection of independent functions that are fast to evaluate, that can be differentiated symbolically, and that represent smaller regions of the overall search space. However, the specialization process can produce a large number of sub-problems. This is overcome by inductively deriving selection rules which associate problems to small sets of specialized independent sub-problems. Each set of candidate solutions is chosen to minimize a cost function which expresses the tradeoff between the quality of the solution that can be obtained from the sub-problem and the time it takes to produce it. The overall solution
Cache Energy Optimization Techniques For Modern Processors
Mittal, Sparsh
2013-01-01
and veterans in the field of cache power management. It will help graduate students, CAD tool developers and designers in understanding the need of energy efficiency in modern computing systems. Further, it will be useful for researchers in gaining insights into algorithms and techniques for micro-architectural and system-level energy optimization using dynamic cache reconfiguration. We sincerely believe that the ``food for thought'' presented in this book will inspire the readers to develop even better ideas for designing ``green'' processors of tomorrow.
Performance of Multi-chaotic PSO on a shifted benchmark functions set
Pluhacek, Michal; Senkerik, Roman; Zelinka, Ivan
2015-03-10
In this paper the performance of the Multi-chaotic PSO algorithm is investigated using two shifted benchmark functions. The purpose of shifted benchmark functions is to simulate time-variant real-world problems. The results of the chaotic PSO are compared with the canonical version of the algorithm. It is concluded that the multi-chaotic approach can lead to better results in the optimization of shifted functions.
A Micro-GA Embedded PSO Feature Selection Approach to Intelligent Facial Emotion Recognition.
Mistry, Kamlesh; Zhang, Li; Neoh, Siew Chin; Lim, Chee Peng; Fielding, Ben
2016-04-21
This paper proposes a facial expression recognition system using evolutionary particle swarm optimization (PSO)-based feature optimization. The system first employs modified local binary patterns, which conduct horizontal and vertical neighborhood pixel comparison, to generate a discriminative initial facial representation. Then, a PSO variant embedded with the concept of a micro genetic algorithm (mGA), called mGA-embedded PSO, is proposed to perform feature optimization. It incorporates a nonreplaceable memory, a small-population secondary swarm, a new velocity updating strategy, a subdimension-based in-depth local facial feature search, and a cooperation of local exploitation and global exploration search mechanism to mitigate the premature convergence problem of conventional PSO. Multiple classifiers are used for recognizing seven facial expressions. Based on a comprehensive study using within- and cross-domain images from the extended Cohn Kanade and MMI benchmark databases, respectively, the empirical results indicate that our proposed system outperforms other state-of-the-art PSO variants, conventional PSO, classical GA, and other related facial expression recognition models reported in the literature by a significant margin.
An Integrated Method Based on PSO and EDA for the Max-Cut Problem.
Lin, Geng; Guan, Jian
2016-01-01
The max-cut problem is an NP-hard combinatorial optimization problem with many real-world applications. In this paper, we propose an integrated method based on particle swarm optimization and estimation of distribution algorithm (PSO-EDA) for solving the max-cut problem. The integrated algorithm overcomes the shortcomings of particle swarm optimization and the estimation of distribution algorithm. To enhance the performance of the PSO-EDA, a fast local search procedure is applied. In addition, a path relinking procedure is developed to intensify the search. To evaluate the performance of PSO-EDA, extensive experiments were carried out on two sets of benchmark instances with 800 to 20,000 vertices from the literature. Computational results and comparisons show that PSO-EDA significantly outperforms the existing PSO-based and EDA-based algorithms for the max-cut problem. Compared with other best performing algorithms, PSO-EDA is able to find very competitive results in terms of solution quality.
Particle swarm optimization for the clustering of wireless sensors
NASA Astrophysics Data System (ADS)
Tillett, Jason C.; Rao, Raghuveer M.; Sahin, Ferat; Rao, T. M.
2003-07-01
Clustering is necessary for data aggregation, hierarchical routing, optimizing sleep patterns, election of extremal sensors, optimizing coverage and resource allocation, reuse of frequency bands and codes, and conserving energy. Optimal clustering is typically an NP-hard problem. Solutions to NP-hard problems involve searches through vast spaces of possible solutions. Evolutionary algorithms have been applied successfully to a variety of NP-hard problems. We explore one such approach, Particle Swarm Optimization (PSO), an evolutionary programming technique where a 'swarm' of test solutions, analogous to a natural swarm of bees, ants or termites, is allowed to interact and cooperate to find the best solution to the given problem. We use the PSO approach to cluster sensors in a sensor network. The energy efficiency of our clustering in a data-aggregation type sensor network deployment is tested using a modified LEACH-C code. The PSO technique with a recursive bisection algorithm is tested against random search and simulated annealing; the PSO technique is shown to be robust. We further investigate developing a distributed version of the PSO algorithm for optimally clustering a wireless sensor network.
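Several of the entries above build on the same canonical global-best PSO loop. A minimal sketch, shown here minimizing the 2-D sphere function, may clarify the mechanics; the coefficients w = 0.729 and c1 = c2 = 1.494 are standard constriction-style defaults, not values taken from any one of the cited papers.

```python
import random

def pso_minimize(f, dim=2, n_particles=30, iters=200, lo=-5.0, hi=5.0, seed=1):
    """Global-best PSO: each particle is pulled toward its own best position
    (cognitive term) and the swarm's best position (social term)."""
    rng = random.Random(seed)
    w, c1, c2 = 0.729, 1.494, 1.494
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]                       # personal bests
    pval = [f(xi) for xi in x]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]                # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                x[i][d] += v[i][d]
            fi = f(x[i])
            if fi < pval[i]:
                pval[i], pbest[i] = fi, x[i][:]
                if fi < gval:
                    gval, gbest = fi, x[i][:]
    return gbest, gval

sphere = lambda p: sum(c * c for c in p)  # toy objective with optimum at origin
```

Variants in the surrounding abstracts modify this skeleton: chaotic maps replace the r1, r2 draws, the inertia weight w is scheduled over time, or genetic operators are interleaved between iterations.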
A PSO-Based Approach for Pathway Marker Identification From Gene Expression Data.
Mandal, Monalisa; Mondal, Jyotirmay; Mukhopadhyay, Anirban
2015-09-01
In this article, a new and robust pathway activity inference scheme is proposed from gene expression data using Particle Swarm Optimization (PSO). From microarray gene expression data, the corresponding pathway information of the genes is collected from a public database. For identifying the pathway markers, the expression values of the genes in each pathway, termed the pathway activity, are summarized. To measure the goodness of a pathway activity vector, the t-score is widely used in the existing literature. The weakness of existing techniques for inferring pathway activity is that they consider all the member genes of a pathway. In reality, however, not all the member genes may be significant to the corresponding pathway; only the genes that are responsible in the corresponding pathway should be included. Motivated by this, in the proposed method, important genes with respect to each pathway are identified using PSO. The objective is to maximize the average t-score. For the pathway activities inferred from different percentages of significant pathways, the average absolute t-scores are plotted. In addition, the top 50% pathway markers are evaluated using 10-fold cross validation and their performance is compared with that of other existing techniques. The biological relevance of the results is also studied.
Techniques for optimizing inerting in electron processors
NASA Astrophysics Data System (ADS)
Rangwalla, I. J.; Korn, D. J.; Nablo, S. V.
1993-07-01
The design of an "inert gas" distribution system in an electron processor must satisfy a number of requirements. The first of these is the elimination or control of beam-produced ozone and NOx, which can be transported from the process zone by the product into the work area. Since the tolerable levels for O3 in occupied areas around the processor are <0.1 ppm, good control techniques are required, involving either recombination of the O3 in the beam-heated process zone or exhausting and dilution of the gas at the processor exit. The second requirement of the inerting system is to provide a suitable environment for completing efficient, free-radical-initiated addition polymerization. In this case, the competition between radical loss through de-excitation and that from O2 quenching must be understood. This group has used gas chromatographic analysis of electron-cured coatings to study the trade-offs of delivered dose, dose rate, and O2 concentration in the process zone, so that the tolerable ranges of parameter excursions can be determined for production quality control purposes. These techniques are described for an ink/coating system on paperboard, where a broad range of process parameters (D, Ġ, O2) has been studied. It is then shown how the technique is used to optimize the use of higher-purity (10-100 ppm O2) nitrogen gas for inerting, in combination with lower-purity (2-20,000 ppm O2) non-cryogenically produced gas, such as from membrane or pressure-swing-adsorption generators.
A PSO-PID quaternion model based trajectory control of a hexarotor UAV
NASA Astrophysics Data System (ADS)
Artale, Valeria; Milazzo, Cristina L. R.; Orlando, Calogero; Ricciardello, Angela
2015-12-01
A quaternion-based trajectory controller for a prototype Unmanned Aerial Vehicle (UAV) is discussed in this paper. The dynamics of the UAV, specifically a hexarotor, is described in terms of quaternions instead of the usual Euler angle parameterization. As far as UAV flight management is concerned, the method implemented here consists of two main steps: trajectory and attitude control via Proportional-Integral-Derivative (PID) and Proportional-Derivative (PD) techniques, respectively, and the application of the Particle Swarm Optimization (PSO) method to tune the PID and PD parameters. The optimization minimizes an objective function related to the error with respect to a reference trajectory. Numerical simulations support and validate the proposed method.
Optimizing correlation techniques for improved earthquake location
Schaff, D.P.; Bokelmann, G.H.R.; Ellsworth, W.L.; Zanzerkia, E.; Waldhauser, F.; Beroza, G.C.
2004-01-01
Earthquake location using relative arrival time measurements can lead to dramatically reduced location errors and a view of fault-zone processes with unprecedented detail. There are two principal reasons why this approach reduces location errors. The first is that the use of differenced arrival times to solve for the vector separation of earthquakes removes from the earthquake location problem much of the error due to unmodeled velocity structure. The second reason, on which we focus in this article, is that waveform cross correlation can substantially reduce measurement error. While cross correlation has long been used to determine relative arrival times with subsample precision, we extend correlation measurements to less similar waveforms, and we introduce a general quantitative means to assess when correlation data provide an improvement over catalog phase picks. We apply the technique to local earthquake data from the Calaveras Fault in northern California. Tests for an example streak of 243 earthquakes demonstrate that relative arrival times with normalized cross correlation coefficients as low as approximately 70%, interevent separation distances as large as 2 km, and magnitudes up to 3.5 as recorded on the Northern California Seismic Network are more precise than relative arrival times determined from catalog phase data. Also discussed are improvements made to the correlation technique itself. We find that for large time offsets, our implementation of time-domain cross correlation is often more robust and that it recovers more observations than the cross-spectral approach. Longer time windows give better results than shorter ones. Finally, we explain how thresholds and empirical weighting functions may be derived to optimize the location procedure for any given region of interest, taking advantage of the respective strengths of diverse correlation and catalog phase data on different length scales.
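The core measurement the article relies on, picking a relative arrival time from the peak of a time-domain cross-correlation, can be sketched as follows. The synthetic wavelet in the test and the absence of windowing are illustrative simplifications; real use would window around the catalog phase pick.

```python
import numpy as np

def cc_lag(a, b):
    """Return (lag of b relative to a in samples, normalized peak coefficient).

    The lag maximizing the cross-correlation is the relative arrival time;
    the normalized coefficient measures waveform similarity (1.0 = identical).
    """
    c = np.correlate(b, a, mode="full")
    k = int(np.argmax(c))
    coeff = c[k] / (np.linalg.norm(a) * np.linalg.norm(b))
    # In 'full' mode, index (len(a) - 1) corresponds to zero lag.
    return k - (len(a) - 1), float(coeff)
```

Subsample precision, as used in the article, would additionally interpolate (e.g. a parabola fit) around the integer-lag peak.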
Introducing the fractional order robotic Darwinian PSO
NASA Astrophysics Data System (ADS)
Couceiro, Micael S.; Martins, Fernando M. L.; Rocha, Rui P.; Ferreira, Nuno M. F.
2012-11-01
The Darwinian Particle Swarm Optimization (DPSO) is an evolutionary algorithm that extends Particle Swarm Optimization using natural selection to enhance the ability to escape from sub-optimal solutions. An extension of the DPSO to multi-robot applications has recently been proposed and denoted the Robotic Darwinian PSO (RDPSO), benefiting from the dynamical partitioning of the whole population of robots and hence decreasing the amount of required information exchange among robots. This paper further extends the previously proposed algorithm using fractional calculus concepts to control the convergence rate, while considering the robots' dynamical characteristics. Moreover, to improve the convergence analysis of the RDPSO, an adjustment of the fractional coefficient based on mobile robot constraints is presented and experimentally assessed with 2 real platforms. Afterwards, this novel fractional-order RDPSO is evaluated on 12 physical robots and further explored using a larger population of 100 simulated mobile robots within a larger scenario. Experimental results show that changing the fractional coefficient does not significantly improve the final solution but has a significant influence on the convergence time because of its inherent memory property.
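The fractional-calculus extension described above replaces the single inertia term with a short memory of past velocities. A sketch of the truncated Grünwald-Letnikov weighting commonly used in fractional-order PSO variants follows; the four-term truncation and the helper names are assumptions for illustration, not the exact formulation of the paper.

```python
def fractional_coeffs(alpha):
    """Weights applied to v[t], v[t-1], v[t-2], v[t-3] (truncated
    Grünwald-Letnikov expansion of a fractional derivative of order alpha)."""
    return [
        alpha,
        alpha * (1 - alpha) / 2.0,
        alpha * (1 - alpha) * (2 - alpha) / 6.0,
        alpha * (1 - alpha) * (2 - alpha) * (3 - alpha) / 24.0,
    ]

def fractional_velocity(v_hist, attract, alpha=0.6):
    """New velocity from a most-recent-first history of past velocities plus
    the usual cognitive/social attraction term."""
    w = fractional_coeffs(alpha)
    memory = sum(wi * vi for wi, vi in zip(w, v_hist))
    return memory + attract
```

Note the limiting behavior: at alpha = 1 all memory weights beyond the first vanish, recovering the standard PSO update; smaller alpha spreads weight over older velocities, which is the "inherent memory property" the abstract mentions.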
FIPSDock: a new molecular docking technique driven by fully informed swarm optimization algorithm.
Liu, Yu; Zhao, Lei; Li, Wentao; Zhao, Dongyu; Song, Miao; Yang, Yongliang
2013-01-05
The accurate prediction of protein-ligand binding is of great importance for rational drug design. We present herein a novel docking algorithm called FIPSDock, which implements a variant of the Fully Informed Particle Swarm (FIPS) optimization method and adopts the newly developed energy function of the AutoDock 4.20 suite for solving flexible protein-ligand docking problems. The search ability and docking accuracy of FIPSDock were first evaluated by multiple cognate docking experiments. In a benchmarking test for 77 protein/ligand complex structures derived from the GOLD benchmark set, FIPSDock obtained a success rate of 93.5% and outperformed several docking programs, including particle swarm optimization (PSO)@AutoDock, SODOCK, AutoDock, DOCK, Glide, GOLD, FlexX, Surflex, and MolDock. More importantly, FIPSDock was evaluated against PSO@AutoDock, SODOCK, and the AutoDock 4.20 suite by cross-docking experiments of 74 protein-ligand complexes among eight protein targets (CDK2, ESR1, F2, MAPK14, MMP8, MMP13, PDE4B, and PDE5A) derived from the Sutherland cross-docking set. Remarkably, FIPSDock was superior to PSO@AutoDock, SODOCK, and AutoDock in seven out of eight cross-docking experiments. The results reveal that the FIPS algorithm might be more suitable than conventional genetic-algorithm-based methods for dealing with highly flexible docking problems.
Winglet design using multidisciplinary design optimization techniques
NASA Astrophysics Data System (ADS)
Elham, Ali; van Tooren, Michel J. L.
2014-10-01
A quasi-three-dimensional aerodynamic solver is integrated with a semi-analytical structural weight estimation method inside a multidisciplinary design optimization framework to design and optimize a winglet for a passenger aircraft. The winglet is optimized for minimum drag and minimum structural weight. The Pareto front between those two objective functions is found applying a genetic algorithm. The aircraft minimum take-off weight and the aircraft minimum direct operating cost are used to select the best winglets among those on the Pareto front.
Acoustic emission location on aluminum alloy structure by using FBG sensors and PSO method
NASA Astrophysics Data System (ADS)
Lu, Shizeng; Jiang, Mingshun; Sui, Qingmei; Dong, Huijun; Sai, Yaozhang; Jia, Lei
2016-04-01
Acoustic emission location is important for finding structural cracks and ensuring structural safety. In this paper, an acoustic emission location method using fiber Bragg grating (FBG) sensors and a particle swarm optimization (PSO) algorithm was investigated. Four FBG sensors were used to form a sensing network to detect the acoustic emission signals. According to the signals, the quadrilateral-array location equations were established. By analyzing the propagation characteristics of the acoustic emission signals, the solution of the location equations was converted into an optimization problem. Thus, acoustic emission location can be achieved by using an improved PSO algorithm, realized through the information fusion of multiple standard PSOs, to solve the optimization problem. Finally, an acoustic emission location system was established and verified on an aluminum alloy plate. The experimental results showed that the average location error was 0.010 m. This paper provides a reliable method for acoustic emission location in aluminum alloy structures.
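The conversion of the location equations into an optimization problem can be illustrated concretely: for a trial source position, compare predicted differences of arrival time between sensor pairs against the measured ones, and minimize the squared residual. In this sketch a plain grid search stands in for the improved PSO, and the sensor layout, plate size, and wave speed are illustrative assumptions, not the paper's experimental values.

```python
import itertools, math

SENSORS = [(0.0, 0.0), (0.5, 0.0), (0.5, 0.5), (0.0, 0.5)]  # four sensors (m)
V = 5000.0  # assumed wave speed in the plate (m/s)

def arrival_times(src):
    """Travel time from a source position to each sensor."""
    return [math.hypot(src[0] - sx, src[1] - sy) / V for sx, sy in SENSORS]

def residual(trial, measured):
    """Sum of squared time-difference-of-arrival residuals over sensor pairs.

    Using differences removes the unknown emission time from the problem."""
    pred = arrival_times(trial)
    return sum(((pred[i] - pred[j]) - (measured[i] - measured[j])) ** 2
               for i, j in itertools.combinations(range(len(SENSORS)), 2))

def locate(measured, step=0.005):
    """Minimize the residual over a grid of candidate source positions."""
    grid = [i * step for i in range(int(0.5 / step) + 1)]
    return min(((gx, gy) for gx in grid for gy in grid),
               key=lambda p: residual(p, measured))
```

A PSO would minimize the same `residual` objective; the grid search here just makes the formulation easy to verify.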
Assembly, checkout, and operation optimization analysis technique for complex systems
NASA Technical Reports Server (NTRS)
1968-01-01
Computerized simulation model of a launch vehicle/ground support equipment system optimizes assembly, checkout, and operation of the system. The model is used to determine performance parameters in three phases or modes: (1) systems optimization techniques, (2) operation analysis methodology, and (3) systems effectiveness analysis technique.
A mesh gradient technique for numerical optimization
NASA Technical Reports Server (NTRS)
Willis, E. A., Jr.
1973-01-01
A class of successive-improvement optimization methods in which directions of descent are defined in the state space along each trial trajectory are considered. The given problem is first decomposed into two discrete levels by imposing mesh points. Level 1 consists of running optimal subarcs between each successive pair of mesh points. For normal systems, these optimal two-point boundary value problems can be solved by following a routine prescription if the mesh spacing is sufficiently close. A spacing criterion is given. Under appropriate conditions, the criterion value depends only on the coordinates of the mesh points, and its gradient with respect to those coordinates may be defined by interpreting the adjoint variables as partial derivatives of the criterion value function. In level 2, the gradient data is used to generate improvement steps or search directions in the state space which satisfy the boundary values and constraints of the given problem.
An optimal GPS data processing technique
NASA Technical Reports Server (NTRS)
Wu, S. C.; Melbourne, W. G.
1992-01-01
A formula is derived to optimally combine dual-frequency GPS (Global Positioning System) pseudorange and carrier phase data streams into a single equivalent data stream, reducing the data volume and computing time in the filtering process for parameter estimation by a factor of four. The resulting single data stream is that of carrier phase measurements with both data noise and bias uncertainty strictly defined. With this analytical formula the single stream of equivalent GPS measurements can be efficiently formed by simple numerical calculations without any degradation in data strength. The formulation for the optimally combined GPS data and their covariances are given in closed form. Carrier phase ambiguity resolution, when feasible, is improved due to the preservation of the full data strength with the optimal data combining process.
Variable Expansion Techniques for Decomposable Optimization Problems
2011-03-05
meration or dynamic programming. Recall the edge partition problem studied by Taskin et al. above in reference 7. (Z. Caner Taskin was supported by the...Stochastic Integer Programming, August 2009. 12 2. Z. Caner Taskin, Algorithms for Solving Multi-Level Optimization Problems with Dis
Optimization Techniques for College Financial Aid Managers
ERIC Educational Resources Information Center
Bosshardt, Donald I.; Lichtenstein, Larry; Palumbo, George; Zaporowski, Mark P.
2010-01-01
In the context of a theoretical model of expected profit maximization, this paper shows how historic institutional data can be used to assist enrollment managers in determining the level of financial aid for students with varying demographic and quality characteristics. Optimal tuition pricing in conjunction with empirical estimation of…
Generating moment matching scenarios using optimization techniques
Mehrotra, Sanjay; Papp, Dávid
2013-05-16
An optimization based method is proposed to generate moment matching scenarios for numerical integration and its use in stochastic programming. The main advantage of the method is its flexibility: it can generate scenarios matching any prescribed set of moments of the underlying distribution rather than matching all moments up to a certain order, and the distribution can be defined over an arbitrary set. This allows for a reduction in the number of scenarios and allows the scenarios to be better tailored to the problem at hand. The method is based on a semi-infinite linear programming formulation of the problem that is shown to be solvable with polynomial iteration complexity. A practical column generation method is implemented. The column generation subproblems are polynomial optimization problems; however, they need not be solved to optimality. It is found that the columns in the column generation approach can be efficiently generated by random sampling. The number of scenarios generated matches a lower bound of Tchakaloff's. The rate of convergence of the approximation error is established for continuous integrands, and an improved bound is given for smooth integrands. Extensive numerical experiments are presented in which variants of the proposed method are compared to Monte Carlo and quasi-Monte Carlo methods on both numerical integration problems and stochastic optimization problems. The benefits of being able to match any prescribed set of moments, rather than all moments up to a certain order, are also demonstrated using optimization problems with 100-dimensional random vectors. Here, empirical results show that the proposed approach outperforms Monte Carlo and quasi-Monte Carlo based approaches on the tested problems.
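A tiny concrete instance of moment matching may help fix ideas: choose probabilities for three fixed scenario points so the discrete distribution reproduces a prescribed mean and second moment (here those of a standard normal). The three-point layout and closed-form solve are illustrative assumptions; the paper's method handles arbitrary moment sets and dimensions via column generation.

```python
import math

def three_point_match(mean=0.0, second_moment=1.0, spread=None):
    """Return (points, probabilities) matching the first two raw moments.

    Solves the 3x3 linear system:
      p1 + p2 + p3 = 1
      -s*p1 + s*p3 = mean
      s^2*(p1 + p3) = second_moment
    """
    s = spread if spread is not None else math.sqrt(2.0 * second_moment)
    pts = [-s, 0.0, s]
    p_outer = second_moment / (s * s)      # p1 + p3
    p3 = (p_outer + mean / s) / 2.0
    p1 = p_outer - p3
    p2 = 1.0 - p_outer
    return pts, [p1, p2, p3]
```

With the default spread, this yields probabilities (0.25, 0.5, 0.25) on points (-√2, 0, √2), a three-scenario stand-in for a standard normal that is exact for its first two moments.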
An RBF-PSO based approach for modeling prostate cancer
NASA Astrophysics Data System (ADS)
Perracchione, Emma; Stura, Ilaria
2016-06-01
Prostate cancer is one of the most common cancers in men; it grows slowly and can be diagnosed at an early stage by measuring the Prostate Specific Antigen (PSA) level. However, a relapse after the primary therapy arises in 25-30% of cases, and different growth characteristics of the new tumor are observed. In order to get a better understanding of the phenomenon, a two-parameter growth model is considered. To estimate the parameter values identifying the disease risk level, a novel approach based on combining Particle Swarm Optimization (PSO) with meshfree interpolation methods is proposed.
An improved PSO algorithm for parameter identification of nonlinear dynamic hysteretic models
NASA Astrophysics Data System (ADS)
Zhang, Junhao; Xia, Pinqi
2017-02-01
The nonlinear dynamic hysteretic models used in nonlinear dynamic analysis generally contain many model parameters which need to be identified accurately and effectively. The accuracy and effectiveness of identification depend on the complexity of the model, the number of model parameters and the proximity of the initial parameter values. The particle swarm optimization (PSO) algorithm has a random searching ability and has been widely applied to parameter identification in nonlinear dynamic hysteretic models. However, the PSO algorithm may get trapped in a local optimum and suffer premature convergence, failing to reach the true optimum. In this paper, an improved PSO algorithm for identifying parameters of nonlinear dynamic hysteretic models is presented by defining a fitness function for the hysteretic model. The improved PSO algorithm enhances the global searching ability and avoids the premature convergence of the conventional PSO algorithm. It has been applied to identify the parameters of two nonlinear dynamic hysteretic models: the Leishman-Beddoes (LB) dynamic stall model of a rotor blade and the anelastic displacement fields (ADF) model of an elastomeric damper used as a lead-lag damper in rotors. The accuracy and effectiveness of the improved PSO algorithm for identifying parameters of the LB model and the ADF model are validated by comparing the identified results with test results. The investigations indicate that increasing the number of repeated identifications is an effective way to reduce the influence of the randomness inherent in the PSO algorithm on the accuracy of the identified parameters.
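As a reference point for the identification setting above, a minimal global-best PSO in the conventional form (the variant the paper improves upon) can be sketched as follows; the test model y = a·exp(-b·t) and all coefficients are illustrative stand-ins, not the LB or ADF models.

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Global-best PSO: a minimal sketch of the conventional algorithm."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    dim = lo.size
    x = rng.uniform(lo, hi, (n_particles, dim))          # positions
    v = np.zeros((n_particles, dim))                     # velocities
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()                   # global best
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w*v + c1*r1*(pbest - x) + c2*r2*(g - x)      # velocity update
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Identify (a, b) of y = a*exp(-b*t) from synthetic noise-free data.
t = np.linspace(0, 2, 50)
y = 3.0 * np.exp(-1.5 * t)
fitness = lambda p: np.sum((p[0]*np.exp(-p[1]*t) - y)**2)
params, err = pso_minimize(fitness, [(0, 10), (0, 5)])
```

Because each run is stochastic, repeating the identification several times and keeping the best result (as the abstract recommends) is the usual safeguard.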
The analytical representation of viscoelastic material properties using optimization techniques
NASA Technical Reports Server (NTRS)
Hill, S. A.
1993-01-01
This report presents a technique to model viscoelastic material properties with a function of the form of the Prony series. Generally, the method employed to determine the function constants requires assuming values for the exponential constants of the function and then resolving the remaining constants through linear least-squares techniques. The technique presented here allows all the constants to be determined analytically through optimization techniques. This technique is employed in a computer program named PRONY and makes use of a commercially available optimization tool developed by VMA Engineering, Inc. The PRONY program was utilized to compare the technique against previously determined models for solid rocket motor TP-H1148 propellant and V747-75 Viton fluoroelastomer. In both cases, the optimization technique generated functions that modeled the test data with at least an order of magnitude better correlation. This technique has demonstrated the capability to use small or large data sets and to use data sets that have uniformly or nonuniformly spaced data pairs. The reduction of experimental data to accurate mathematical models is a vital part of most scientific and engineering research. This technique of regression through optimization can be applied to other mathematical models that are difficult to fit to experimental data through traditional regression techniques.
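The report's approach of letting the optimizer determine all constants, including the exponential time constants, can be illustrated with a generic nonlinear least-squares solve; the two-term Prony series and its coefficients below are invented for illustration and are unrelated to the TP-H1148 or V747-75 data.

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic relaxation data from a two-term Prony series (illustrative values).
t = np.logspace(-2, 2, 60)
true = dict(e_inf=1.0, e1=4.0, tau1=0.1, e2=2.0, tau2=10.0)
data = (true["e_inf"] + true["e1"]*np.exp(-t/true["tau1"])
        + true["e2"]*np.exp(-t/true["tau2"]))

def prony(p, t):
    e_inf, e1, tau1, e2, tau2 = p
    return e_inf + e1*np.exp(-t/tau1) + e2*np.exp(-t/tau2)

# Optimize ALL constants, including the exponential time constants,
# instead of fixing the tau_i and solving only a linear problem.
fit = least_squares(lambda p: prony(p, t) - data,
                    x0=[0.5, 1.0, 0.05, 1.0, 5.0],
                    bounds=(1e-6, np.inf))
```

Freeing the time constants is what distinguishes this from the traditional fixed-exponent linear fit described in the abstract.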
Acceleration techniques in the univariate Lipschitz global optimization
NASA Astrophysics Data System (ADS)
Sergeyev, Yaroslav D.; Kvasov, Dmitri E.; Mukhametzhanov, Marat S.; De Franco, Angela
2016-10-01
Univariate box-constrained Lipschitz global optimization problems are considered in this contribution. Geometric and information statistical approaches are presented. Novel, powerful local tuning and local improvement techniques are described, along with traditional ways to estimate the Lipschitz constant. The advantages of the presented local tuning and local improvement techniques are demonstrated using the operational characteristics approach for comparing deterministic global optimization algorithms on a class of 100 widely used test functions.
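A classical geometric method in this family is the Piyavskii-Shubert algorithm, on which such acceleration techniques build; a bare-bones sketch (with a known Lipschitz constant and no local tuning) might look like this, with the test function chosen arbitrarily.

```python
import math

def shubert_minimize(f, a, b, L, n_iter=60):
    """Piyavskii-Shubert: minimize f on [a, b] given a Lipschitz constant L."""
    xs = [a, b]
    fs = [f(a), f(b)]
    for _ in range(n_iter):
        # Minimum of the sawtooth lower bound in each interval of adjacent samples.
        best_i, best_lb, best_x = 0, math.inf, None
        for i in range(len(xs) - 1):
            x0, x1, f0, f1 = xs[i], xs[i+1], fs[i], fs[i+1]
            x_new = 0.5*(x0 + x1) + (f0 - f1)/(2*L)   # sawtooth intersection
            lb = 0.5*(f0 + f1) - 0.5*L*(x1 - x0)      # lower bound there
            if lb < best_lb:
                best_i, best_lb, best_x = i, lb, x_new
        xs.insert(best_i + 1, best_x)                 # evaluate where the bound is lowest
        fs.insert(best_i + 1, f(best_x))
    k = min(range(len(fs)), key=fs.__getitem__)
    return xs[k], fs[k]

# Example: f(x) = sin(3x) + 0.5x on [0, 4]; |f'(x)| <= 3.5, so L = 3.5 is valid.
x_best, f_best = shubert_minimize(lambda x: math.sin(3*x) + 0.5*x, 0.0, 4.0, L=3.5)
```

The local tuning techniques in the contribution replace the single global L with interval-wise estimates, which concentrates evaluations near the global minimizer faster.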
Optimal and suboptimal control technique for aircraft spin recovery
NASA Technical Reports Server (NTRS)
Young, J. W.
1974-01-01
An analytic investigation has been made of procedures for effecting recovery from equilibrium spin conditions for three assumed aircraft configurations. Three approaches which utilize conventional aerodynamic controls are investigated. Included are a constant control recovery mode, optimal recoveries, and a suboptimal control logic patterned after optimal recovery results. The optimal and suboptimal techniques are shown to yield a significant improvement in recovery performance over that attained by using a constant control recovery procedure.
An investigation of optimization techniques for drawing computer graphics displays
NASA Technical Reports Server (NTRS)
Stocker, F. R.
1979-01-01
Techniques for reducing vector data plotting time are studied. The choice of tolerances in optimization and the application of optimization to plots produced on real time interactive display devices are discussed. All results are developed relative to plotting packages and support hardware so that results are useful in real world situations.
Multiswarm PSO with supersized swarms - Initial performance study
NASA Astrophysics Data System (ADS)
Pluhacek, Michal; Senkerik, Roman; Zelinka, Ivan
2016-06-01
In this paper the performance of multi-swarm PSO with super-sized swarms is discussed and briefly experimentally investigated. The selection of a proper population size is crucial for the successful use of PSO. This work follows previous promising research.
A PSO-based hybrid metaheuristic for permutation flowshop scheduling problems.
Zhang, Le; Wu, Jinnan
2014-01-01
This paper investigates the permutation flowshop scheduling problem (PFSP) with the objectives of minimizing the makespan and the total flowtime, and proposes a hybrid metaheuristic based on particle swarm optimization (PSO). To enhance the exploration ability of the hybrid metaheuristic, simulated annealing hybridized with a stochastic variable neighborhood search is incorporated. To improve the search diversification of the hybrid metaheuristic, a solution replacement strategy based on path relinking is presented to replace particles that have been trapped in a local optimum. Computational results on benchmark instances show that the proposed PSO-based hybrid metaheuristic is competitive with other powerful metaheuristics in the literature.
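The objective evaluated inside any PFSP metaheuristic is the makespan of a permutation, which follows a simple completion-time recurrence; a minimal sketch with invented processing times:

```python
def makespan(perm, proc):
    """Completion time of the last job on the last machine for a permutation.
    proc[j][m] = processing time of job j on machine m."""
    n_machines = len(proc[0])
    c = [0.0] * n_machines          # running completion time on each machine
    for j in perm:
        for m in range(n_machines):
            prev = c[m - 1] if m > 0 else 0.0
            # Job j starts on machine m when both the machine is free (c[m])
            # and the job has finished on the previous machine (prev).
            c[m] = max(c[m], prev) + proc[j][m]
    return c[-1]

# Three jobs on two machines (illustrative times).
proc = [[3, 2], [1, 4], [2, 2]]
```

A metaheuristic like the one above searches over permutations to minimize this value; here the best of the six orderings achieves makespan 9.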
Phase Response Design of Recursive All-Pass Digital Filters Using a Modified PSO Algorithm.
Chang, Wei-Der
2015-01-01
This paper develops a new design scheme for the phase response of an all-pass recursive digital filter. A variant of the particle swarm optimization (PSO) algorithm is utilized for solving this kind of filter design problem. It is here called the modified PSO (MPSO) algorithm, in which an additional adjusting factor is introduced into the velocity updating formula of the algorithm in order to improve the searching ability. In the proposed method, all of the designed filter coefficients are first collected into a parameter vector, and this vector is regarded as a particle of the algorithm. The MPSO with a modified velocity formula forces all particles to move toward the optimal or near-optimal solution by minimizing a defined objective function of the optimization problem. To show the effectiveness of the proposed method, two different kinds of linear phase response design examples are illustrated and the general PSO algorithm is compared as well. The obtained results show that the MPSO is superior to the general PSO for the phase response design of digital recursive all-pass filters. PMID:26366168
42 CFR 3.110 - Assessment of PSO compliance.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 42 Public Health 1 2013-10-01 2013-10-01 false Assessment of PSO compliance. 3.110 Section 3.110... SAFETY ORGANIZATIONS AND PATIENT SAFETY WORK PRODUCT PSO Requirements and Agency Procedures § 3.110 Assessment of PSO compliance. The Secretary may request information or conduct announced or...
42 CFR 3.110 - Assessment of PSO compliance.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 42 Public Health 1 2010-10-01 2010-10-01 false Assessment of PSO compliance. 3.110 Section 3.110... SAFETY ORGANIZATIONS AND PATIENT SAFETY WORK PRODUCT PSO Requirements and Agency Procedures § 3.110 Assessment of PSO compliance. The Secretary may request information or conduct announced or...
42 CFR 3.110 - Assessment of PSO compliance.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 42 Public Health 1 2014-10-01 2014-10-01 false Assessment of PSO compliance. 3.110 Section 3.110... SAFETY ORGANIZATIONS AND PATIENT SAFETY WORK PRODUCT PSO Requirements and Agency Procedures § 3.110 Assessment of PSO compliance. The Secretary may request information or conduct announced or...
42 CFR 3.110 - Assessment of PSO compliance.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 42 Public Health 1 2012-10-01 2012-10-01 false Assessment of PSO compliance. 3.110 Section 3.110... SAFETY ORGANIZATIONS AND PATIENT SAFETY WORK PRODUCT PSO Requirements and Agency Procedures § 3.110 Assessment of PSO compliance. The Secretary may request information or conduct announced or...
42 CFR 3.110 - Assessment of PSO compliance.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 42 Public Health 1 2011-10-01 2011-10-01 false Assessment of PSO compliance. 3.110 Section 3.110... SAFETY ORGANIZATIONS AND PATIENT SAFETY WORK PRODUCT PSO Requirements and Agency Procedures § 3.110 Assessment of PSO compliance. The Secretary may request information or conduct announced or...
Optimization of detector positioning in the radioactive particle tracking technique.
Dubé, Olivier; Dubé, David; Chaouki, Jamal; Bertrand, François
2014-07-01
The radioactive particle tracking (RPT) technique is a non-intrusive experimental velocimetry and tomography technique extensively applied to the study of hydrodynamics in a great variety of systems. In this technique, arrays of scintillation detectors are used to track the motion of a single radioactive tracer particle emitting isotropic γ-rays. This work describes and applies an optimization strategy developed to find an optimal set of positions for the scintillation detectors used in the RPT technique. This strategy employs the overall resolution of the detectors as the objective function and a mesh adaptive direct search (MADS) algorithm to solve the optimization problem. More precisely, NOMAD, a C++ implementation of the MADS algorithm, is used. First, the optimization strategy is validated using simple cases with known optimal detector configurations. Next, it is applied to a three-dimensional axisymmetric system (i.e., a vertical cylinder, which could represent a fluidized bed, bubble column, or riser). The results obtained using the optimization strategy are in agreement with what was previously recommended by Roy et al. (2002) for a similar system. Finally, the optimization strategy is used for a system consisting of a partially filled cylindrical tumbler. The application of insights gained from the optimization strategy is shown to lead to a significant reduction in the error made when reconstructing the position of a tracer particle. The results of this work show that the optimization strategy developed is sensitive to both the type of objective function used and the experimental conditions. The limitations and drawbacks of the optimization strategy are also discussed.
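MADS itself (and its NOMAD implementation) is involved, but its simpler ancestor, compass/pattern search, conveys the derivative-free polling idea on which it builds; the sketch below uses an arbitrary quadratic objective rather than a detector-resolution criterion.

```python
def pattern_search(f, x0, step=1.0, shrink=0.5, tol=1e-6, max_iter=1000):
    """Compass search: a simplified relative of mesh adaptive direct search."""
    x = list(x0)
    fx = f(x)
    n = len(x)
    it = 0
    while step > tol and it < max_iter:
        it += 1
        improved = False
        for i in range(n):               # poll the 2n coordinate directions
            for s in (+step, -step):
                y = x.copy()
                y[i] += s
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
                    break
            if improved:
                break
        if not improved:
            step *= shrink               # refine the mesh after a failed poll
    return x, fx

# Minimize a smooth 2-D bowl with minimum at (1, -2) (illustrative objective).
quad = lambda p: (p[0] - 1.0)**2 + (p[1] + 2.0)**2
x_min, f_min = pattern_search(quad, [5.0, 5.0])
```

MADS generalizes this by polling over an adaptively generated dense set of directions, which gives its convergence guarantees on nonsmooth objectives like the resolution criterion above.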
Energy-Aware Multipath Routing Scheme Based on Particle Swarm Optimization in Mobile Ad Hoc Networks
Robinson, Y. Harold; Rajaram, M.
2015-01-01
Mobile ad hoc network (MANET) is a collection of autonomous mobile nodes forming an ad hoc network without fixed infrastructure. The dynamic topology of a MANET may degrade the performance of the network, and multipath selection is a challenging task for improving the network lifetime. We propose an energy-aware multipath routing scheme based on particle swarm optimization (EMPSO) that uses a continuous time recurrent neural network (CTRNN) to solve optimization problems. The CTRNN finds the optimal loop-free paths that form link-disjoint paths in a MANET. The CTRNN is used as an optimum path selection technique that produces a set of optimal paths between source and destination. In the CTRNN, the particle swarm optimization (PSO) method is primarily used for training the RNN. The proposed scheme uses reliability measures such as the transmission cost, an energy factor, and the optimal traffic ratio between source and destination to increase routing performance. In this scheme, optimal loop-free paths can be found using PSO to seek better link-quality nodes in the route discovery phase. PSO optimizes a problem by iteratively trying to improve a candidate solution with regard to a measure of quality. The proposed scheme discovers multiple loop-free paths by using the PSO technique. PMID:26819966
Process sequence optimization for digital microfluidic integration using EWOD technique
NASA Astrophysics Data System (ADS)
Yadav, Supriya; Joyce, Robin; Sharma, Akash Kumar; Sharma, Himani; Sharma, Niti Nipun; Varghese, Soney; Akhtar, Jamil
2016-04-01
Micro/nano-fluidic MEMS biosensors are devices that detect biomolecules. Emerging micro/nano-fluidic devices provide high throughput and high repeatability with very low response time and reduced device cost as compared to traditional devices. This article presents the experimental details for process sequence optimization of digital microfluidics (DMF) using "electrowetting-on-dielectric" (EWOD). Stress-free thick film deposition of silicon dioxide using PECVD and the subsequent processes for the EWOD technique have been optimized in this work.
Application of optimization techniques to vehicle design: A review
NASA Technical Reports Server (NTRS)
Prasad, B.; Magee, C. L.
1984-01-01
The work that has been done in the last decade or so in the application of optimization techniques to vehicle design is discussed. Much of the work reviewed deals with the design of body or suspension (chassis) components for reduced weight. Also reviewed are studies dealing with system optimization problems for improved functional performance, such as ride or handling. In reviewing the work on the use of optimization techniques, one notes the transition from the rare mention of the methods in the 70's to an increased effort in the early 80's. Efficient and convenient optimization and analysis tools still need to be developed so that they can be regularly applied in the early design stage of the vehicle development cycle to be most effective. Based on the reported applications, an attempt is made to assess the potential for automotive application of optimization techniques. The major issue involved remains the creation of quantifiable means of analysis to be used in vehicle design. The conventional process of vehicle design still contains much experience-based input because it has not yet proven possible to quantify all important constraints. This restraint on the part of the analysis will continue to be a major limiting factor in application of optimization to vehicle design.
Enhanced Multiobjective Optimization Technique for Comprehensive Aerospace Design. Part A
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi; Rajadas, John N.
1997-01-01
A multidisciplinary design optimization procedure which couples formal multiobjective techniques and complex analysis procedures (such as computational fluid dynamics (CFD) codes) has been developed. The procedure has been demonstrated on a specific high speed flow application involving aerodynamics and acoustics (sonic boom minimization). In order to account for multiple design objectives arising from complex performance requirements, multiobjective formulation techniques are used to formulate the optimization problem. Techniques to enhance the existing Kreisselmeier-Steinhauser (K-S) function multiobjective formulation approach have been developed. The K-S function procedure used in the proposed work transforms a constrained multiple-objective problem into an unconstrained problem, which is then solved using the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm. Weight factors are applied to each objective function during the transformation process. This enhanced procedure provides the designer with the capability to emphasize specific design objectives during the optimization process. The demonstration of the procedure utilizes a computational fluid dynamics (CFD) code which solves the three-dimensional parabolized Navier-Stokes (PNS) equations for the flow field, along with an appropriate sonic boom evaluation procedure, thus introducing both aerodynamic performance and sonic boom as the design objectives to be optimized simultaneously. Sensitivity analysis is performed using a discrete differentiation approach. An approximation technique has been used within the optimizer to improve the overall computational efficiency of the procedure in order to make it suitable for design applications in an industrial setting.
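The K-S transformation referred to above aggregates several objective or constraint values into one smooth, conservative envelope; a minimal sketch (with arbitrary scaled objective values, not the aerodynamic/acoustic objectives of the paper) is:

```python
import numpy as np

def ks(values, rho=50.0):
    """Kreisselmeier-Steinhauser envelope: a smooth, conservative max()."""
    v = np.asarray(values, float)
    m = v.max()                          # shift for numerical stability
    return m + np.log(np.exp(rho * (v - m)).sum()) / rho

# Two (already scaled) objective values at some design point.
f = [0.8, 0.5]
```

The envelope always overestimates the true maximum by at most ln(n)/rho, so increasing the draw-down factor rho tightens the approximation at the cost of a stiffer function for the BFGS optimizer.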
NASA Astrophysics Data System (ADS)
Wu, Li-Li; Zhou, Qihou H.; Chen, Tie-Jun; Liang, J. J.; Wu, Xin
2015-09-01
Simultaneous derivation of multiple ionospheric parameters from the incoherent scatter power spectra in the F1 region is difficult because the spectra have only subtle differences for different combinations of parameters. In this study, we apply a particle swarm optimizer (PSO) to incoherent scatter power spectrum fitting and compare it to the commonly used least squares fitting (LSF) technique. The PSO method is found to outperform the LSF method in practically all scenarios using simulated data. The PSO method offers the advantages of not being sensitive to initial assumptions and allowing physical constraints to be easily built into the model. When simultaneously fitting for molecular ion fraction (fm), ion temperature (Ti), and ratio of ion to electron temperature (γT), γT is largely stable. The uncertainty between fm and Ti can be described by a quadratic relationship. The significance of this result is that Ti can be retroactively corrected for data archived many years ago where the assumption of fm may not be accurate and the original power spectra are unavailable. In our discussion, we emphasize the fitting for fm, which is a difficult parameter to obtain. The PSO method is often successful in obtaining fm where LSF fails. We apply both PSO and LSF to actual observations made by the Arecibo incoherent scatter radar. The results show that the PSO method is a viable way to simultaneously determine ion and electron temperatures and the molecular ion fraction when the latter is greater than 0.3.
Li, Yongjie; Yao, Dezhong; Yao, Jonathan; Chen, Wufan
2005-08-07
Automatic beam angle selection is an important but challenging problem for intensity-modulated radiation therapy (IMRT) planning. Though many efforts have been made, it is still not very satisfactory in clinical IMRT practice because of overextensive computation of the inverse problem. In this paper, a new technique named BASPSO (Beam Angle Selection with a Particle Swarm Optimization algorithm) is presented to improve the efficiency of the beam angle optimization problem. Originally developed as a tool for simulating social behaviour, the particle swarm optimization (PSO) algorithm is a relatively new population-based evolutionary optimization technique first introduced by Kennedy and Eberhart in 1995. In the proposed BASPSO, the beam angles are optimized using PSO by treating each beam configuration as a particle (individual), and the beam intensity maps for each beam configuration are optimized using the conjugate gradient (CG) algorithm. These two optimization processes are implemented iteratively. The performance of each individual is evaluated by a fitness value calculated with a physical objective function. A population of these individuals is evolved by cooperation and competition among the individuals themselves through generations. The optimization results of a simulated case with known optimal beam angles and two clinical cases (a prostate case and a head-and-neck case) show that PSO is valid and efficient and can speed up the beam angle optimization process. Furthermore, the performance comparisons based on the preliminary results indicate that, as a whole, the PSO-based algorithm seems to outperform, or at least compete with, the GA-based algorithm in computation time and robustness. In conclusion, the reported work suggested that the introduced PSO algorithm could act as a new promising solution to the beam angle optimization problem and potentially other optimization problems in IMRT, though further studies need to be investigated.
A PSO-based rule extractor for medical diagnosis.
Hsieh, Yi-Zeng; Su, Mu-Chun; Wang, Pa-Chun
2014-06-01
One of the major bottlenecks in applying conventional neural networks to the medical field is that trained networks are very difficult to interpret in a physically meaningful way, because the learned knowledge is numerically encoded in the trained synaptic weights. In one of our previous works, we proposed a class of Hyper-Rectangular Composite Neural Networks (HRCNNs) whose synaptic weights can be interpreted as a set of crisp If-Then rules; however, a trained HRCNN may produce some ineffective If-Then rules which can only justify very few positive examples (i.e., poor generalization). This motivated us to propose a PSO-based Fuzzy Hyper-Rectangular Composite Neural Network (PFHRCNN) which applies particle swarm optimization (PSO) to trim the rules generated by a trained HRCNN while the recognition performance is not degraded and may even be improved. The performance of the proposed PFHRCNN is demonstrated on three benchmark medical databases: the liver disorders data set, the breast cancer data set and the Parkinson's disease data set.
Development of Multiobjective Optimization Techniques for Sonic Boom Minimization
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi; Rajadas, John Narayan; Pagaldipti, Naryanan S.
1996-01-01
A discrete, semi-analytical sensitivity analysis procedure has been developed for calculating aerodynamic design sensitivities. The sensitivities of the flow variables and the grid coordinates are numerically calculated using direct differentiation of the respective discretized governing equations. The sensitivity analysis techniques are adapted within a parabolized Navier Stokes equations solver. Aerodynamic design sensitivities for high speed wing-body configurations are calculated using the semi-analytical sensitivity analysis procedures. Representative results obtained compare well with those obtained using the finite difference approach and establish the computational efficiency and accuracy of the semi-analytical procedures. Multidisciplinary design optimization procedures have been developed for aerospace applications namely, gas turbine blades and high speed wing-body configurations. In complex applications, the coupled optimization problems are decomposed into sublevels using multilevel decomposition techniques. In cases with multiple objective functions, formal multiobjective formulation such as the Kreisselmeier-Steinhauser function approach and the modified global criteria approach have been used. Nonlinear programming techniques for continuous design variables and a hybrid optimization technique, based on a simulated annealing algorithm, for discrete design variables have been used for solving the optimization problems. The optimization procedure for gas turbine blades improves the aerodynamic and heat transfer characteristics of the blades. The two-dimensional, blade-to-blade aerodynamic analysis is performed using a panel code. The blade heat transfer analysis is performed using an in-house developed finite element procedure. The optimization procedure yields blade shapes with significantly improved velocity and temperature distributions. The multidisciplinary design optimization procedures for high speed wing-body configurations simultaneously
Optimal Pid Tuning for Power System Stabilizers Using Adaptive Particle Swarm Optimization Technique
NASA Astrophysics Data System (ADS)
Oonsivilai, Anant; Marungsri, Boonruang
2008-10-01
An application of an intelligent search technique to find optimal parameters of a power system stabilizer (PSS) with a proportional-integral-derivative (PID) controller for a single-machine infinite-bus system is presented. An efficient intelligent search technique, adaptive particle swarm optimization (APSO), is employed to demonstrate the usefulness of intelligent search techniques in tuning the PID-PSS parameters. The damping of system oscillations is improved by minimizing an objective function with adaptive particle swarm optimization. At the same operating point, the PID-PSS parameters are also tuned by the Ziegler-Nichols method. The performance of the proposed controller is compared to the conventional Ziegler-Nichols-tuned PID controller. The results reveal the superior effectiveness of the proposed APSO-based PID controller.
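The Ziegler-Nichols baseline used for comparison follows fixed closed-loop tuning rules; assuming the classic table (Kp = 0.6·Ku, Ti = Pu/2, Td = Pu/8) and illustrative values for the ultimate gain and period:

```python
def ziegler_nichols_pid(ku, pu):
    """Classic closed-loop Ziegler-Nichols rules for a PID controller.
    ku: ultimate gain at sustained oscillation, pu: ultimate period (s)."""
    kp = 0.6 * ku
    ti = pu / 2.0                 # integral time
    td = pu / 8.0                 # derivative time
    return kp, kp / ti, kp * td   # Kp, Ki, Kd

# Illustrative plant: sustained oscillation at Ku = 4.0 with period Pu = 2.0 s.
kp, ki, kd = ziegler_nichols_pid(ku=4.0, pu=2.0)
```

The APSO approach in the abstract instead searches the (Kp, Ki, Kd) space directly against a damping objective, which is why it can outperform these one-size-fits-all rules.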
Yang, Qin; Zou, Hong-Yan; Zhang, Yan; Tang, Li-Juan; Shen, Guo-Li; Jiang, Jian-Hui; Yu, Ru-Qin
2016-01-15
Most proteins localize to more than one organelle in a cell. Unmixing the localization patterns of proteins is critical for understanding protein functions and other vital cellular processes. Herein, a non-linear machine learning technique is proposed for the first time for protein pattern unmixing. The variable-weighted support vector machine (VW-SVM) is a demonstrably robust modeling technique with flexible and rational variable selection. When optimized by a global stochastic optimization technique, the particle swarm optimization (PSO) algorithm, VW-SVM becomes an adaptive parameter-free method for automated unmixing of protein subcellular patterns. Results obtained by pattern unmixing of a set of fluorescence microscope images of cells indicate that VW-SVM as optimized by PSO is able to extract useful pattern features by optimally rescaling each variable for non-linear SVM modeling, consequently leading to improved performance in multiplex protein pattern unmixing compared with conventional SVM and other existing pattern unmixing methods.
A method to objectively optimize coral bleaching prediction techniques
NASA Astrophysics Data System (ADS)
van Hooidonk, R. J.; Huber, M.
2007-12-01
Thermally induced coral bleaching is a global threat to coral reef health. Methodologies, e.g. the Degree Heating Week technique, have been developed to predict bleaching induced by thermal stress by utilizing remotely sensed sea surface temperature (SST) observations. These techniques can be used as a management tool for Marine Protected Areas (MPA). Predictions are valuable to decision makers and stakeholders on weekly to monthly time scales and can be employed to build public awareness and support for mitigation. The bleaching problem is only expected to worsen because global warming poses a major threat to coral reef health. Indeed, predictive bleaching methods combined with climate model output have been used to forecast the global demise of coral reef ecosystems within coming decades due to climate change. Accuracy of these predictive techniques has not been quantitatively characterized despite the critical role they play. Assessments have typically been limited, qualitative or anecdotal, or more frequently they are simply unpublished. Quantitative accuracy assessment, using well established methods and skill scores often used in meteorology and medical sciences, will enable objective optimization of existing predictive techniques. To accomplish this, we will use existing remotely sensed data sets of sea surface temperature (AVHRR and TMI), and predictive values from techniques such as the Degree Heating Week method. We will compare these predictive values with observations of coral reef health and calculate applicable skill scores (Peirce Skill Score, Hit Rate and False Alarm Rate). We will (a) quantitatively evaluate the accuracy of existing coral reef bleaching predictive methods against state-of- the-art reef health databases, and (b) present a technique that will objectively optimize the predictive method for any given location. We will illustrate this optimization technique for reefs located in Puerto Rico and the US Virgin Islands.
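The skill scores named above all derive from a 2×2 forecast contingency table; a minimal sketch with invented counts:

```python
def bleaching_skill(hits, misses, false_alarms, correct_negatives):
    """Contingency-table scores used to verify binary bleaching forecasts.
    Peirce Skill Score = hit rate minus false alarm rate."""
    h = hits / (hits + misses)                             # hit rate (POD)
    f = false_alarms / (false_alarms + correct_negatives)  # false alarm rate
    return {"hit_rate": h, "false_alarm_rate": f, "peirce": h - f}

# Illustrative verification counts for a Degree-Heating-Week-style forecast.
scores = bleaching_skill(hits=20, misses=5, false_alarms=10, correct_negatives=65)
```

Optimizing a predictive method for a given location then amounts to choosing thresholds that maximize a score such as the Peirce Skill Score over a reef health database.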
Fitting Nonlinear Curves by use of Optimization Techniques
NASA Technical Reports Server (NTRS)
Hill, Scott A.
2005-01-01
MULTIVAR is a FORTRAN 77 computer program that fits one of the members of a set of six multivariable mathematical models (five of which are nonlinear) to a multivariable set of data. The inputs to MULTIVAR include the data for the independent and dependent variables plus the user's choice of one of the models, one of the three optimization engines, and convergence criteria. Using the chosen optimization engine, MULTIVAR finds values for the parameters of the chosen model so as to minimize the sum of squares of the residuals. One of the optimization engines implements a routine, developed in 1982, that utilizes the Broyden-Fletcher-Goldfarb-Shanno (BFGS) variable-metric method for unconstrained minimization in conjunction with a one-dimensional search technique that finds the minimum of an unconstrained function by polynomial interpolation and extrapolation without first finding bounds on the solution. The second optimization engine is a faster and more robust commercially available code, denoted Design Optimization Tool, that also uses the BFGS method. The third optimization engine is a robust and relatively fast routine that implements the Levenberg-Marquardt algorithm.
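The residual-minimization setup the three engines share can be sketched in pure Python with a bare-bones Levenberg-Marquardt loop (the third engine's algorithm). The two-parameter exponential model, the data, and the damping schedule below are illustrative, not taken from MULTIVAR itself:

```python
import math

# Fit y = a*exp(b*x) by minimizing the sum of squared residuals with a
# damped Gauss-Newton (Levenberg-Marquardt) iteration.
def sse(a, b, xs, ys):
    return sum((y - a * math.exp(b * x)) ** 2 for x, y in zip(xs, ys))

def lm_fit(xs, ys, a=1.0, b=0.0, lam=1e-3, iters=100):
    for _ in range(iters):
        # Accumulate J^T J and the gradient J^T r for r_i = y_i - a*exp(b*x_i).
        J11 = J12 = J22 = g1 = g2 = 0.0
        for x, y in zip(xs, ys):
            e = math.exp(b * x)
            r = y - a * e
            da, db = -e, -a * x * e          # partial derivatives of the residual
            J11 += da * da; J12 += da * db; J22 += db * db
            g1 += da * r; g2 += db * r
        # Solve the damped normal equations (J^T J + lam*I) step = -J^T r.
        A11, A22 = J11 + lam, J22 + lam
        det = A11 * A22 - J12 * J12
        s1 = (-g1 * A22 + g2 * J12) / det
        s2 = (-g2 * A11 + g1 * J12) / det
        if sse(a + s1, b + s2, xs, ys) < sse(a, b, xs, ys):
            a, b, lam = a + s1, b + s2, lam / 10   # accept step, trust the model more
        else:
            lam *= 10                              # reject step, increase damping
    return a, b

xs = [i / 19 for i in range(20)]
ys = [2.0 * math.exp(0.5 * x) for x in xs]   # noise-free data with a=2, b=0.5
a_hat, b_hat = lm_fit(xs, ys)
```

On this noise-free problem the fitted parameters converge to the generating values a=2, b=0.5.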
FRAN and RBF-PSO as two components of a hyper framework to recognize protein folds.
Abbasi, Elham; Ghatee, Mehdi; Shiri, M E
2013-09-01
In this paper, an intelligent hyper framework is proposed to recognize a protein's fold from its amino acid sequence, a fundamental problem in bioinformatics. This framework includes several statistical and intelligent algorithms for protein classification. The main components of the proposed framework are the Fuzzy Resource-Allocating Network (FRAN) and the Radial Basis Function network based on Particle Swarm Optimization (RBF-PSO). FRAN applies a dynamic method to tune the RBF network parameters. Due to the complexity of the patterns captured in the protein dataset, FRAN classifies the proteins under fuzzy conditions. RBF-PSO, in turn, applies PSO to tune the RBF classifier. Experimental results demonstrate that FRAN improves prediction accuracy up to 51% and achieves acceptable multi-class results for protein fold prediction. Although RBF-PSO provides reasonable results for protein fold recognition up to 48%, it is weaker than FRAN in some cases. However, the proposed hyper framework provides an opportunity to use a wide range of intelligent methods and can learn from previous experiences. Thus it can avoid the weaknesses of some intelligent methods in terms of memory, computational time and static structure. Furthermore, the performance of this system can be enhanced throughout the system life-cycle.
Honey Bee Mating Optimization Vector Quantization Scheme in Image Compression
NASA Astrophysics Data System (ADS)
Horng, Ming-Huwi
Vector quantization is a powerful technique in digital image compression. Traditional widely used methods such as the Linde-Buzo-Gray (LBG) algorithm tend to generate only locally optimal codebooks. Recently, particle swarm optimization (PSO) has been adapted to obtain near-globally optimal codebooks for vector quantization. In this paper, we applied a new swarm algorithm, honey bee mating optimization, to construct the codebook for vector quantization. The proposed method is called the honey bee mating optimization based LBG (HBMO-LBG) algorithm. The results were compared with those of two other methods, the LBG and PSO-LBG algorithms. Experimental results showed that the proposed HBMO-LBG algorithm is more reliable and that the reconstructed images are of higher quality than those generated by the other two methods.
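The LBG baseline that both swarm variants improve on is a generalized Lloyd iteration: assign each sample to its nearest codeword, then move each codeword to the mean of its cell. A minimal one-dimensional sketch (samples and initial codebook are illustrative):

```python
# One-dimensional LBG / generalized Lloyd iteration. Converges to a
# locally optimal codebook, which is exactly the limitation the swarm
# methods above try to escape.
def lbg(samples, codebook, iters=20):
    for _ in range(iters):
        cells = {i: [] for i in range(len(codebook))}
        for s in samples:
            nearest = min(range(len(codebook)), key=lambda k: (s - codebook[k]) ** 2)
            cells[nearest].append(s)
        # Move each codeword to its cell's centroid; keep empty cells unchanged.
        codebook = [sum(c) / len(c) if c else codebook[i]
                    for i, c in cells.items()]
    return codebook

book = lbg([0.1, 0.2, 0.15, 0.9, 1.0, 0.95], [0.0, 1.0])
```

Here the two codewords settle at the means of the two sample clusters, roughly 0.15 and 0.95.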
A multipopulation PSO based memetic algorithm for permutation flow shop scheduling.
Liu, Ruochen; Ma, Chenlin; Ma, Wenping; Li, Yangyang
2013-01-01
The permutation flow shop scheduling problem (PFSSP) is part of production scheduling and is among the hardest combinatorial optimization problems. In this paper, a multipopulation particle swarm optimization (PSO) based memetic algorithm (MPSOMA) is proposed. In the proposed algorithm, the whole particle swarm population is divided into three subpopulations in which each particle evolves by the standard PSO; each subpopulation is then updated using a different local search scheme, such as variable neighborhood search (VNS) or an individual improvement scheme (IIS). Then, the best particle of each subpopulation is selected to construct a probabilistic model using an estimation of distribution algorithm (EDA), and three particles are sampled from the probabilistic model to update the worst individual in each subpopulation. The best particle in the entire particle swarm is used to update the global optimal solution. The proposed MPSOMA is compared with two recently proposed algorithms, namely, a PSO based memetic algorithm (PSOMA) and hybrid particle swarm optimization with estimation of distribution algorithm (PSOEDA), on 29 well-known PFSSP instances taken from the OR-Library, and the experimental results show that it is an effective approach for the PFSSP.
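The standard PSO step each subpopulation applies can be sketched on a toy continuous problem. The inertia and acceleration coefficients below are common textbook defaults, not the paper's settings, and the PFSSP itself would additionally need a permutation encoding not shown here:

```python
import random

# Global-best PSO minimizing f over [lo, hi]: each particle's velocity is
# pulled toward its own best position (pbest) and the swarm's best (gbest).
def pso(f, lo, hi, n=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n)]
    vs = [0.0] * n
    pbest = xs[:]                    # per-particle best positions
    gbest = min(xs, key=f)           # swarm-wide best position
    for _ in range(iters):
        for i in range(n):
            r1, r2 = rng.random(), rng.random()
            vs[i] = (w * vs[i] + c1 * r1 * (pbest[i] - xs[i])
                     + c2 * r2 * (gbest - xs[i]))
            xs[i] = min(max(xs[i] + vs[i], lo), hi)   # clamp to the box
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i]
                if f(xs[i]) < f(gbest):
                    gbest = xs[i]
    return gbest

best = pso(lambda x: (x - 3.0) ** 2, -10.0, 10.0)
```

On this one-dimensional quadratic the swarm converges to the minimizer at x = 3.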
An improved fuzzy c-means clustering algorithm based on shadowed sets and PSO.
Zhang, Jian; Shen, Ling
2014-01-01
To organize a wide variety of data sets automatically and acquire accurate classification, this paper presents a modified fuzzy c-means algorithm (SP-FCM) based on particle swarm optimization (PSO) and shadowed sets to perform feature clustering. SP-FCM introduces the global search property of PSO to deal with the premature convergence of conventional fuzzy clustering, utilizes the vagueness-balance property of shadowed sets to handle overlapping among clusters, and models uncertainty in class boundaries. This new method uses the Xie-Beni index as the cluster validity measure and automatically finds the optimal cluster number within a specific range, with cluster partitions that provide compact and well-separated clusters. Experiments show that the proposed approach significantly improves the clustering effect.
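For one-dimensional data the Xie-Beni validity criterion is a short computation: membership-weighted compactness divided by n times the minimum squared distance between cluster centres (smaller is better). The toy partition below is invented for illustration:

```python
# Xie-Beni index for a fuzzy partition: U[i][k] is the membership of
# sample k in cluster i, m is the fuzzifier (commonly 2).
def xie_beni(data, centres, U, m=2.0):
    n = len(data)
    compact = sum(U[i][k] ** m * (data[k] - centres[i]) ** 2
                  for i in range(len(centres)) for k in range(n))
    sep = min((centres[i] - centres[j]) ** 2
              for i in range(len(centres))
              for j in range(len(centres)) if i != j)
    return compact / (n * sep)

U = [[1.0, 1.0, 0.0, 0.0], [0.0, 0.0, 1.0, 1.0]]  # near-crisp memberships
xb = xie_beni([0.0, 0.1, 1.0, 1.1], [0.05, 1.05], U)
```

Sweeping the cluster count and keeping the partition with the smallest index is how such a measure selects the cluster number automatically.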
Cardiac gene therapy: optimization of gene delivery techniques in vivo.
Katz, Michael G; Swain, JaBaris D; White, Jennifer D; Low, David; Stedman, Hansell; Bridges, Charles R
2010-04-01
Vector-mediated cardiac gene therapy holds tremendous promise as a translatable platform technology for treating many cardiovascular diseases. The ideal technique is one that is efficient and practical, allowing for global cardiac gene expression while minimizing collateral expression in other organs. Here we survey the available in vivo vector-mediated cardiac gene delivery methods, including transcutaneous, intravascular, intramuscular, and cardiopulmonary bypass techniques, with consideration of the relative merits and deficiencies of each. Review of the available techniques suggests that an optimal method for vector-mediated gene delivery to the large animal myocardium would employ retrograde and/or anterograde transcoronary gene delivery, extended vector residence time in the coronary circulation, an increased myocardial transcapillary gradient achieved by physical methods, increased endothelial permeability induced by pharmacological agents, minimal collateral gene expression through isolation of the cardiac circulation from the systemic circulation, and low immunogenicity.
NASA Astrophysics Data System (ADS)
Aravelli, Aparna; Rao, Singiresu S.
2013-10-01
The central chilled water plant is one of the major power-consuming units of a building. Even small reductions in power consumption could achieve significant energy conservation. Hence, optimization of a chiller plant is necessary for energy savings without compromising the comfort level of the end user. The present work deals with identifying the system parameters and developing a novel formulation for a chiller plant and its optimization using a hybrid optimization technique. The optimization model formulation is based on finding an optimal mix of equipment and operating parameters in the chiller plant for minimum electrical power consumption. It takes into account the performance characteristics of the chillers, cooling towers and pumps, and optimizes the energy consumed based on the required loads and the ambient atmospheric conditions. Sequential quadratic programming combined with the modified branch and bound method was used to develop the hybrid optimization algorithm. A case study is presented for a typical chiller plant. The results indicate that the present optimization method could be a potential method of making energy savings.
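The discrete half of such a formulation, deciding which units to run, can be sketched with an exhaustive search standing in for the branch-and-bound step; the chiller capacities, powers, and load below are invented for illustration:

```python
from itertools import combinations

# Toy equipment-mix selection: choose the set of chillers whose combined
# capacity covers the load at minimum total electrical power. Exhaustive
# enumeration over on/off choices, a stand-in for branch and bound.
chillers = [  # (name, capacity in tons, power in kW when running)
    ("A", 300, 180), ("B", 500, 260), ("C", 700, 400),
]
load = 750

best = None
for r in range(1, len(chillers) + 1):
    for combo in combinations(chillers, r):
        if sum(cap for _, cap, _ in combo) >= load:       # feasibility
            power = sum(p for _, _, p in combo)
            if best is None or power < best[0]:
                best = (power, [name for name, _, _ in combo])
```

Here the cheapest feasible mix is chillers A and B (800 tons, 440 kW); a real formulation would additionally optimize the continuous operating parameters inside each discrete node, as the paper's hybrid SQP/branch-and-bound does.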
PSO based Gabor wavelet feature extraction and tracking method
NASA Astrophysics Data System (ADS)
Sun, Hongguang; Bu, Qian; Zhang, Huijie
2008-12-01
This paper studies the 2D Gabor wavelet and its application to target recognition and tracking in grey-level images. New optimization algorithms and implementation technologies are examined in both theory and practice. The Gabor wavelet's translation, orientation, and scale parameters are optimized so that the wavelet approximates a local image contour region. Sobel edge detection supplies the initial position and orientation values for the optimization, which improves convergence speed. In the wavelet feature space, a particle swarm optimization (PSO) algorithm is adopted; it ensures reliable convergence on the target, further improves convergence speed, and shortens the feature extraction time. Tests on low-contrast images, run on a VC++ simulation platform, demonstrate the feasibility and effectiveness of the algorithm. The improved Gabor wavelet method is then embedded in a tracking framework that achieves stable tracking of moving targets under rotation and affine distortion.
Techniques for developing reliability-oriented optimal microgrid architectures
NASA Astrophysics Data System (ADS)
Patra, Shashi B.
2007-12-01
Alternative generation technologies such as fuel cells, micro-turbines, solar etc. have been the focus of active research in the past decade. These energy sources are small and modular. Because of these advantages, these sources can be deployed effectively at or near locations where they are actually needed, i.e. in the distribution network. This is in contrast to the traditional electricity generation which has been "centralized" in nature. The new technologies can be deployed in a "distributed" manner. Therefore, they are also known as Distributed Energy Resources (DER). It is expected that the use of DER, will grow significantly in the future. Hence, it is prudent to interconnect the energy resources in a meshed or grid-like structure, so as to exploit the reliability and economic benefits of distributed deployment. These grids, which are smaller in scale but similar to the electric transmission grid, are known as "microgrids". This dissertation presents rational methods of building microgrids optimized for cost and subject to system-wide and locational reliability guarantees. The first method is based on dynamic programming and consists of determining the optimal interconnection between microsources and load points, given their locations and the rights of way for possible interconnections. The second method is based on particle swarm optimization. This dissertation describes the formulation of the optimization problem and the solution methods. The applicability of the techniques is demonstrated in two possible situations---design of a microgrid from scratch and expansion of an existing distribution system.
Automated parameterization of intermolecular pair potentials using global optimization techniques
NASA Astrophysics Data System (ADS)
Krämer, Andreas; Hülsmann, Marco; Köddermann, Thorsten; Reith, Dirk
2014-12-01
In this work, different global optimization techniques are assessed for the automated development of molecular force fields, as used in molecular dynamics and Monte Carlo simulations. The quest of finding suitable force field parameters is treated as a mathematical minimization problem. Intricate problem characteristics such as extremely costly and even abortive simulations, noisy simulation results, and especially multiple local minima naturally lead to the use of sophisticated global optimization algorithms. Five diverse algorithms (pure random search, recursive random search, CMA-ES, differential evolution, and taboo search) are compared to our own tailor-made solution named CoSMoS. CoSMoS is an automated workflow. It models the parameters' influence on the simulation observables to detect a globally optimal set of parameters. It is shown how and why this approach is superior to other algorithms. Applied to suitable test functions and simulations for phosgene, CoSMoS effectively reduces the number of required simulations and real time for the optimization task.
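Pure random search, the simplest algorithm in the comparison, reduces to sampling the parameter box uniformly and keeping the best point seen; the test function and evaluation budget below are illustrative:

```python
import random

# Pure random search over [lo, hi]: the baseline against which smarter
# global optimizers (CMA-ES, differential evolution, ...) are measured.
def pure_random_search(f, lo, hi, budget=2000, seed=1):
    rng = random.Random(seed)
    best_x = rng.uniform(lo, hi)
    for _ in range(budget):
        x = rng.uniform(lo, hi)
        if f(x) < f(best_x):
            best_x = x
    return best_x

x_star = pure_random_search(lambda x: (x - 1.0) ** 2, -5.0, 5.0)
```

Its weakness is exactly what motivates the paper's surrogate-assisted approach: when each function evaluation is a costly molecular simulation, a budget of thousands of blind samples is unaffordable.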
Optimization Techniques for 3D Graphics Deployment on Mobile Devices
NASA Astrophysics Data System (ADS)
Koskela, Timo; Vatjus-Anttila, Jarkko
2015-03-01
3D Internet technologies are becoming essential enablers in many application areas including games, education, collaboration, navigation and social networking. The use of 3D Internet applications with mobile devices provides location-independent access and richer use context, but also performance issues. Therefore, one of the important challenges facing 3D Internet applications is the deployment of 3D graphics on mobile devices. In this article, we present an extensive survey on optimization techniques for 3D graphics deployment on mobile devices and qualitatively analyze the applicability of each technique from the standpoints of visual quality, performance and energy consumption. The analysis focuses on optimization techniques related to data-driven 3D graphics deployment, because it supports off-line use, multi-user interaction, user-created 3D graphics and creation of arbitrary 3D graphics. The outcome of the analysis facilitates the development and deployment of 3D Internet applications on mobile devices and provides guidelines for future research.
Machine learning techniques for energy optimization in mobile embedded systems
NASA Astrophysics Data System (ADS)
Donohoo, Brad Kyoshi
Mobile smartphones and other portable battery operated embedded systems (PDAs, tablets) are pervasive computing devices that have emerged in recent years as essential instruments for communication, business, and social interactions. While performance, capabilities, and design are all important considerations when purchasing a mobile device, a long battery lifetime is one of the most desirable attributes. Battery technology and capacity has improved over the years, but it still cannot keep pace with the power consumption demands of today's mobile devices. This key limiter has led to a strong research emphasis on extending battery lifetime by minimizing energy consumption, primarily using software optimizations. This thesis presents two strategies that attempt to optimize mobile device energy consumption with negligible impact on user perception and quality of service (QoS). The first strategy proposes an application and user interaction aware middleware framework that takes advantage of user idle time between interaction events of the foreground application to optimize CPU and screen backlight energy consumption. The framework dynamically classifies mobile device applications based on their received interaction patterns, then invokes a number of different power management algorithms to adjust processor frequency and screen backlight levels accordingly. The second strategy proposes the usage of machine learning techniques to learn a user's mobile device usage pattern pertaining to spatiotemporal and device contexts, and then predict energy-optimal data and location interface configurations. By learning where and when a mobile device user uses certain power-hungry interfaces (3G, WiFi, and GPS), the techniques, which include variants of linear discriminant analysis, linear logistic regression, non-linear logistic regression, and k-nearest neighbor, are able to dynamically turn off unnecessary interfaces at runtime in order to save energy.
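Among the classifiers listed, k-nearest neighbour is the simplest to sketch for a context-to-interface decision. The features, labels, and training points below are invented for illustration, not drawn from the thesis's dataset:

```python
from collections import Counter

# Toy k-NN over (hour-of-day, at-home) context features, predicting
# whether the WiFi interface should be left on.
def knn_predict(train, query, k=3):
    ranked = sorted(train, key=lambda p: sum((a - b) ** 2
                                             for a, b in zip(p[0], query)))
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

train = [((9, 1), "wifi_on"), ((10, 1), "wifi_on"), ((8, 1), "wifi_on"),
         ((22, 0), "wifi_off"), ((23, 0), "wifi_off")]
pred = knn_predict(train, (9, 1))  # morning, at home
```

A runtime policy would then power the interface down whenever the predicted label for the current context is "off", which is the energy-saving mechanism the thesis describes.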
Imaging technique optimization of tungsten anode FFDM system
NASA Astrophysics Data System (ADS)
Chen, Biao; Smith, Andrew P.; Jing, Zhenxue; Ingal, Elena
2009-02-01
A single Mo target, or Mo/Rh or Mo/W bi-track targets with corresponding Mo and Rh filters, have provided optimal target/filter combinations for traditional screen/film systems. With the advent of full-field digital mammography, similar target/filter combinations were adopted directly for digital imaging systems with direct and indirect conversion based detectors. To reduce the average glandular dose while maintaining the clinical image quality of FFDM systems, alternative target/filter combinations have been investigated extensively to take advantage of digital detectors with high dynamic range, high detection dose efficiency, and low noise levels. This paper reports the development of a digital FFDM system equipped with a single tungsten target and rhodium and silver filters. A mathematical model was constructed to quantitatively simulate x-ray spectra, breast compositions, contrast objects, x-ray scatter distribution, grid performance, and the characteristics of an a-Se flat panel detector. Computer simulations were performed to select the kV/filter combination for different breast thicknesses and compositions by maximizing the contrast object detection dose efficiency. A set of phantom experiments was employed to optimize the x-ray techniques within the constraints of exposure time and required dose levels. A 50-micrometer rhodium filter was applied for thin and average breasts and a 50-micrometer silver filter for thicker breasts. To meet our design requirements and EUREF protocol specifications, we fine-tuned the x-ray techniques for the 0.45, 0.75, 1.0, and 1.35 mGy dose modes with regard to ACR phantom scoring and PMMA phantom SNR/CNR performance, respectively. The optimized x-ray techniques significantly reduce average glandular dose while maintaining imaging performance.
Rafieerad, A R; Bushroa, A R; Nasiri-Tabrizi, B; Kaboli, S H A; Khanahmadi, S; Amiri, Ahmad; Vadivelu, J; Yusof, F; Basirun, W J; Wasa, K
2017-05-01
Recently, robust optimization and prediction models have attracted considerable attention in surface engineering and coating techniques as a way to obtain the highest possible output values with the fewest trial-and-error experiments. In addition, given the need to find optimal values of the dependent variables, multi-objective metaheuristic models have been proposed to optimize various processes. Here, oriented mixed oxide nanotubular arrays were grown on a Ti-6Al-7Nb (Ti67) implant using physical vapor deposition magnetron sputtering (PVDMS), designed by the Taguchi method, followed by electrochemical anodization. The obtained adhesion strength and hardness of Ti67/Nb were modeled by particle swarm optimization (PSO) to predict the output performance. Based on the developed models, a multi-objective PSO (MOPSO) run aimed to find PVDMS inputs that maximize both outputs simultaneously. The resulting sputtering parameters were applied as a validation experiment and yielded higher adhesion strength and hardness of the layer interfaced with Ti67. The as-deposited Nb layers, before and after optimization, were anodized in a fluoride-based electrolyte for 300 min. To crystallize the coatings, the anodically grown mixed oxide TiO2-Nb2O5-Al2O3 nanotubes were annealed at 440°C for 30 min. FESEM observations showed that the optimized adhesive Nb interlayer led to greater homogeneity of the mixed nanotube arrays. As a result of this surface modification, the annealed anodized sample showed the best mechanical, tribological, corrosion-resistance and in-vitro bioactivity properties, with a thick bone-like apatite layer forming on the mixed oxide nanotube surface within 10 days of immersion in simulated body fluid (SBF) after MOPSO was applied. The results of this study can be effective in optimizing a variety of surface properties of nanostructured implants.
Wang, Wanlin; Zhang, Wang; Chen, Weixin; Gu, Jiajun; Liu, Qinglei; Deng, Tao; Zhang, Di
2013-01-15
The wide angular range of the treelike structure in Morpho butterfly scales was investigated by finite-difference time-domain (FDTD)/particle-swarm-optimization (PSO) analysis. Using the FDTD method, different parameters of the Morpho butterflies' treelike structure were studied and their contributions to the angular dependence were analyzed. A wide angular range was then realized with the PSO method by quantitatively designing the lamellae deviation (Δy), a crucial parameter for the angular range. A field map of the wide-range reflection over a large area was given to confirm the wide angular range. The tristimulus values and corresponding color coordinates for various viewing directions were calculated to confirm the blue color at different observation angles. The wide angular range realized by the FDTD/PSO method will assist in understanding the scientific principles involved and in designing artificial optical materials.
A Hybrid PSO-DEFS Based Feature Selection for the Identification of Diabetic Retinopathy.
Balakrishnan, Umarani; Venkatachalapathy, Krishnamurthi; Marimuthu, Girirajkumar S
2015-01-01
Diabetic Retinopathy (DR) is an eye disease, a complication of diabetes, that may cause blindness. The major cause of visual loss in diabetic patients is macular edema. To diagnose and follow up Diabetic Macular Edema (DME), the powerful Optical Coherence Tomography (OCT) technique is used for clinical assessment. Many existing methods detect DME-affected patients by estimating the fovea thickness; these methods suffer from lower accuracy and higher time complexity. To overcome these limitations, a hybrid-approach-based DR detection method is introduced in the proposed work. First, the input image is preprocessed using green channel extraction and a median filter. Subsequently, features are extracted using gradient-based descriptors, namely the Histogram of Oriented Gradients (HOG) with the Complete Local Binary Pattern (CLBP). The texture features are computed over various rotations to capture the edges. We present a hybrid feature selection that combines Particle Swarm Optimization (PSO) and Differential Evolution Feature Selection (DEFS) to minimize the time complexity. A binary Support Vector Machine (SVM) classifier categorizes the 13 normal and 75 abnormal images from 60 patients. Finally, the patients affected by DR are further classified by a Multi-Layer Perceptron (MLP). The experimental results exhibit better accuracy, sensitivity, and specificity than the existing methods.
Design of high speed proprotors using multiobjective optimization techniques
NASA Technical Reports Server (NTRS)
Mccarthy, Thomas R.; Chattopadhyay, Aditi
1993-01-01
A multidisciplinary optimization procedure is developed for the design of high speed proprotors. The objective is to simultaneously maximize the propulsive efficiency in high speed cruise without sacrificing the rotor figure of merit in hover. Since the problem involves multiple design objectives, multiobjective function formulation techniques are used. A detailed two-celled isotropic box beam is used to model the load carrying member within the rotor blade. Constraints are imposed on rotor blade aeroelastic stability in cruise, the first natural frequency in hover, and total blade weight. Both aerodynamic and structural design variables are used. The results obtained using both techniques are compared to the reference rotor and show significant aerodynamic performance improvements without sacrificing dynamic and aeroelastic stability requirements.
A novel noise optimization technique for inductively degenerated CMOS LNA
NASA Astrophysics Data System (ADS)
Zhiqing, Geng; Haiyong, Wang; Nanjian, Wu
2009-10-01
This paper proposes a novel noise optimization technique. The technique gives analytical formulae for the noise performance of inductively degenerated CMOS low noise amplifier (LNA) circuits with an ideal gate inductor for a fixed bias voltage and nonideal gate inductor for a fixed power dissipation, respectively, by mathematical analysis and reasonable approximation methods. LNA circuits with required noise figure can be designed effectively and rapidly just by using hand calculations of the proposed formulae. We design a 1.8 GHz LNA in a TSMC 0.25 μm CMOS process. The measured results show a noise figure of 1.6 dB with a forward gain of 14.4 dB at a power consumption of 5 mW, demonstrating that the designed LNA circuits can achieve low noise figure levels at low power dissipation.
Minimax Techniques For Optimizing Non-Linear Image Algebra Transforms
NASA Astrophysics Data System (ADS)
Davidson, Jennifer L.
1989-08-01
It has been well established that the Air Force Armament Technical Laboratory (AFATL) image algebra is capable of expressing all linear transformations [7]. The embedding of the linear algebra in the image algebra makes this possible. In this paper we show a relation of the image algebra to another algebraic system called the minimax algebra. This system is used extensively in economics and operations research, but until now has not been investigated for applications to image processing. The relationship is exploited to develop new optimization methods for a class of non-linear image processing transforms. In particular, a general decomposition technique for templates in this non-linear domain is presented. Template decomposition techniques are an important tool in mapping algorithms efficiently to both sequential and massively parallel architectures.
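In the minimax algebra, the multiply/add pair of linear convolution becomes add/maximum, so a grey-scale dilation serves as a one-dimensional sketch of applying a "non-linear template"; the signal and template values below are illustrative:

```python
# Max-plus "convolution" (grey-scale dilation): out[i] is the maximum of
# signal[i + j - half] + template[j], the minimax analogue of the linear
# sum of signal[i + j - half] * template[j].
def maxplus_dilate(signal, template):
    half = len(template) // 2
    out = []
    for i in range(len(signal)):
        vals = [signal[i + j - half] + template[j]
                for j in range(len(template))
                if 0 <= i + j - half < len(signal)]   # ignore out-of-range taps
        out.append(max(vals))
    return out

y = maxplus_dilate([0, 2, 1, 5, 0], [0, 1, 0])  # -> [2, 3, 5, 6, 5]
```

Template decomposition in this domain means factoring a large template into a sequence of small ones whose max-plus composition reproduces it, mirroring the separable-kernel decompositions of the linear case.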
Modiri, A; Gu, X; Sawant, A
2014-06-15
Purpose: We present a particle swarm optimization (PSO)-based 4D IMRT planning technique designed for dynamic MLC tracking delivery to lung tumors. The key idea is to utilize the temporal dimension as an additional degree of freedom, rather than a constraint, in order to achieve improved sparing of organs at risk (OARs). Methods: The target and normal structures were manually contoured on each of the ten phases of a 4DCT scan acquired from a lung SBRT patient who exhibited 1.5 cm tumor motion despite the use of abdominal compression. Ten corresponding IMRT plans were generated using the Eclipse treatment planning system. These plans served as initial guess solutions for the PSO algorithm. Fluence weights were optimized over the entire solution space, i.e., 10 phases × 12 beams × 166 control points. The size of the solution space motivated our choice of PSO, which is a highly parallelizable stochastic global optimization technique well-suited to such large problems. A summed fluence map was created using an in-house B-spline deformable image registration. Each plan was compared with a corresponding internal target volume (ITV)-based IMRT plan. Results: The PSO 4D IMRT plan yielded comparable PTV coverage and significantly higher dose sparing for parallel and serial OARs compared to the ITV-based plan. The dose sparing achieved via PSO-4DIMRT was: lung Dmean = 28%; lung V20 = 90%; spinal cord Dmax = 23%; esophagus Dmax = 31%; heart Dmax = 51%; heart Dmean = 64%. Conclusion: Truly 4D IMRT that uses the temporal dimension as an additional degree of freedom can achieve significant dose sparing of serial and parallel OARs. Given the large solution space, PSO represents an attractive, parallelizable tool to achieve globally optimal solutions for such problems. This work was supported through funding from the National Institutes of Health and Varian Medical Systems. Amit Sawant has research funding from Varian Medical Systems, VisionRT Ltd. and Elekta.
Optimized evaporation technique for leachate treatment: Small scale implementation.
Benyoucef, Fatima; Makan, Abdelhadi; El Ghmari, Abderrahman; Ouatmane, Aziz
2016-04-01
This paper introduces an optimized evaporation technique for leachate treatment. For this purpose and in order to study the feasibility and measure the effectiveness of the forced evaporation, three cuboidal steel tubs were designed and implemented. The first control-tub was installed at the ground level to monitor natural evaporation. Similarly, the second and the third tub, models under investigation, were installed respectively at the ground level (equipped-tub 1) and out of the ground level (equipped-tub 2), and provided with special equipment to accelerate the evaporation process. The obtained results showed that the evaporation rate at the equipped-tubs was much accelerated with respect to the control-tub. It was accelerated five times in the winter period, where the evaporation rate was increased from a value of 0.37 mm/day to reach a value of 1.50 mm/day. In the summer period, the evaporation rate was accelerated more than three times and it increased from a value of 3.06 mm/day to reach a value of 10.25 mm/day. Overall, the optimized evaporation technique can be applied effectively either under electric or solar energy supply, and will accelerate the evaporation rate from three to five times whatever the season temperature.
Improved CEEMDAN and PSO-SVR Modeling for Near-Infrared Noninvasive Glucose Detection.
Li, Xiaoli; Li, Chengwei
2016-01-01
Diabetes is a serious threat to human health. Thus, research on noninvasive blood glucose detection has become crucial locally and abroad. Near-infrared transmission spectroscopy has important applications in noninvasive glucose detection. Extracting useful information and selecting appropriate modeling methods can improve the robustness and accuracy of models for predicting blood glucose concentrations. Therefore, an improved signal reconstruction and calibration modeling method is proposed in this study. On the basis of improved complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) and correlative coefficient, the sensitive intrinsic mode functions are selected to reconstruct spectroscopy signals for developing the calibration model using the support vector regression (SVR) method. The radial basis function kernel is selected for SVR, and three parameters, namely, insensitive loss coefficient ε, penalty parameter C, and width coefficient γ, are identified beforehand for the corresponding model. Particle swarm optimization (PSO) is employed to optimize the simultaneous selection of the three parameters. Results of the comparison experiments using PSO-SVR and partial least squares show that the proposed signal reconstitution method is feasible and can eliminate noise in spectroscopy signals. The prediction accuracy of model using PSO-SVR method is also found to be better than that of other methods for near-infrared noninvasive glucose detection.
FPGA implementation of neuro-fuzzy system with improved PSO learning.
Karakuzu, Cihan; Karakaya, Fuat; Çavuşlu, Mehmet Ali
2016-07-01
This paper presents the first hardware implementation of a neuro-fuzzy system (NFS) with its metaheuristic learning ability on a field programmable gate array (FPGA). Metaheuristic learning of the NFS for all of its parameters is accomplished by using improved particle swarm optimization (iPSO). As a second novelty, a new functional approach, which does not require any memory or multiplier usage, is proposed for the Gaussian membership functions of the NFS. The NFS and its learning using iPSO are implemented on a Xilinx Virtex5 xc5vlx110-3ff1153, and the efficiency of the proposed implementation is tested on two dynamic system identification problems and a licence plate detection problem as a practical application. Results indicate that the proposed NFS implementation and membership function approximation are as effective as the other approaches available in the literature but require fewer hardware resources.
Automatic PSO-Based Deformable Structures Markerless Tracking in Laparoscopic Cholecystectomy
NASA Astrophysics Data System (ADS)
Djaghloul, Haroun; Batouche, Mohammed; Jessel, Jean-Pierre
An automatic and markerless tracking method for deformable structures (digestive organs) during laparoscopic cholecystectomy intervention, using particle swarm optimization (PSO) behaviour and preoperative a priori knowledge, is presented. The shape associated with the global best particles of the population determines a coarse representation of the targeted organ (the gallbladder) in monocular laparoscopic colored images. The swarm behaviour is directed by a new fitness function, optimized to improve detection and tracking performance. The function is defined by a linear combination of two terms, namely, the human a priori knowledge term (H) and the particle density term (D). Within the limits of standard PSO characteristics, experimental results on both synthetic and real data show the effectiveness and robustness of our method. Indeed, it outperforms existing methods (such as active contours, deformable models and Gradient Vector Flow) in accuracy and convergence rate, without the need for explicit initialization.
Application of multivariable search techniques to structural design optimization
NASA Technical Reports Server (NTRS)
Jones, R. T.; Hague, D. S.
1972-01-01
Multivariable optimization techniques are applied to a particular class of minimum weight structural design problems: the design of an axially loaded, pressurized, stiffened cylinder. Minimum weight designs are obtained by a variety of search algorithms: first- and second-order, elemental perturbation, and randomized techniques. An exterior penalty function approach to constrained minimization is employed. Some comparisons are made with solutions obtained by an interior penalty function procedure. In general, it would appear that an interior penalty function approach may not be as well suited to the class of design problems considered as the exterior penalty function approach. It is also shown that a combination of search algorithms will tend to arrive at an extremal design in a more reliable manner than a single algorithm. The effect of incorporating realistic geometrical constraints on stiffener cross-sections is investigated. A limited comparison is made between minimum weight cylinders designed on the basis of a linear stability analysis and cylinders designed on the basis of empirical buckling data. Finally, a technique for locating more than one extremal is demonstrated.
A technique for integrating engine cycle and aircraft configuration optimization
NASA Technical Reports Server (NTRS)
Geiselhart, Karl A.
1994-01-01
A method for conceptual aircraft design that incorporates the optimization of major engine design variables for a variety of cycle types was developed. The methodology should improve the lengthy screening process currently involved in selecting an appropriate engine cycle for a given application or mission. The new capability will allow environmental concerns such as airport noise and emissions to be addressed early in the design process. The ability to rapidly perform optimization and parametric variations using both engine cycle and aircraft design variables, and to see the impact on the aircraft, should provide insight and guidance for more detailed studies. A brief description of the aircraft performance and mission analysis program and the engine cycle analysis program that were used is given. A new method of predicting propulsion system weight and dimensions using thermodynamic cycle data, preliminary design, and semi-empirical techniques is introduced. Propulsion system performance and weights data generated by the program are compared with industry data and data generated using well established codes. The ability of the optimization techniques to locate an optimum is demonstrated and some of the problems that had to be solved to accomplish this are illustrated. Results from the application of the program to the analysis of three supersonic transport concepts installed with mixed flow turbofans are presented. The results from the application to a Mach 2.4, 5000 n.mi. transport indicate that the optimum bypass ratio is near 0.45 with less than 1 percent variation in minimum gross weight for bypass ratios ranging from 0.3 to 0.6. In the final application of the program, a low sonic boom, fixed takeoff gross weight concept that would fly at Mach 2.0 overwater and at Mach 1.6 overland is compared with a baseline concept of the same takeoff gross weight that would fly Mach 2.4 overwater and subsonically overland. The results indicate that for the design mission
What is Particle Swarm optimization? Application to hydrogeophysics (Invited)
NASA Astrophysics Data System (ADS)
Fernández Martïnez, J.; García Gonzalo, E.; Mukerji, T.
2009-12-01
Inverse problems are generally ill-posed. This yields lack of uniqueness and/or numerical instabilities. These features cause local optimization methods without prior information to provide unpredictable results, not being able to discriminate among the multiple models consistent with the end criteria. Stochastic approaches to inverse problems consist in shifting attention to the probability of existence of certain interesting subsurface structures instead of "looking for a unique model". Some well-known stochastic methods include genetic algorithms and simulated annealing. A more recent method, Particle Swarm Optimization, is a global optimization technique that has been successfully applied to solve inverse problems in many engineering fields, although its use in geosciences is still limited. Like all stochastic methods, PSO requires reasonably fast forward modeling. The basic idea behind PSO is that each model searches the model space according to its misfit history and the misfit of the other models of the swarm. PSO algorithm can be physically interpreted as a damped spring-mass system. This physical analogy was used to define a whole family of PSO optimizers and to establish criteria, based on the stability of particle swarm trajectories, to tune the PSO parameters: inertia, local and global accelerations. In this contribution we show application to different low-cost hydrogeophysical inverse problems: 1) a salt water intrusion problem using Vertical Electrical Soundings, 2) the inversion of Spontaneous Potential data for groundwater modeling, 3) the identification of Cole-Cole parameters for Induced Polarization data. We show that with this stochastic approach we are able to answer questions related to risk analysis, such as what is the depth of the salt intrusion with a certain probability, or giving probabilistic bounds for the water table depth. Moreover, these measures of uncertainty are obtained with small computational cost and time, allowing us a very
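The velocity update described above, with its inertia and local/global acceleration terms, fits in a few lines of code. The values w = 0.7, c1 = c2 = 1.5 and the 2-D Rosenbrock test function below are common textbook choices inside the usual stability region, not parameters taken from this work.

```python
import numpy as np

rng = np.random.default_rng(1)

def rosenbrock(x):
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

n, dim, iters = 30, 2, 400
w, c1, c2 = 0.7, 1.5, 1.5              # inertia, local and global accelerations

pos = rng.uniform(-2, 2, (n, dim))     # particle positions (candidate models)
vel = np.zeros((n, dim))
pbest = pos.copy()                     # each particle's best position so far
pval = np.array([rosenbrock(p) for p in pos])
g = pbest[pval.argmin()].copy()        # swarm's global best

for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
    pos = pos + vel
    f = np.array([rosenbrock(p) for p in pos])
    imp = f < pval
    pbest[imp], pval[imp] = pos[imp], f[imp]
    g = pbest[pval.argmin()].copy()

print("best misfit:", pval.min(), "at", g)
```

In an inversion setting, `rosenbrock` would be replaced by the data misfit of a forward model, which is why the method requires reasonably fast forward modeling.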
Review of optimization techniques of polygeneration systems for building applications
NASA Astrophysics Data System (ADS)
Rong, A. Y.; Su, Y.; Lahdelma, R.
2016-08-01
Polygeneration means simultaneous production of two or more energy products in a single integrated process. Polygeneration is an energy-efficient technology and plays an important role in the transition to future low-carbon energy systems. It can find wide applications in utilities, different types of industrial sectors, and building sectors. This paper mainly focuses on polygeneration applications in building sectors. The scales of polygeneration systems in building sectors range from the micro-level, for a single home building, to the large level, for residential districts. The development of polygeneration microgrids is also related to building applications. The paper aims at giving a comprehensive review of optimization techniques for designing, synthesizing and operating different types of polygeneration systems for building applications.
Design of vibration isolation systems using multiobjective optimization techniques
NASA Technical Reports Server (NTRS)
Rao, S. S.
1984-01-01
The design of vibration isolation systems is considered using multicriteria optimization techniques. The integrated values of the square of the force transmitted to the main mass and the square of the relative displacement between the main mass and the base are taken as the performance indices. The design of a three degrees-of-freedom isolation system with an exponentially decaying type of base disturbance is considered for illustration. Numerical results are obtained using the global criterion, utility function, bounded objective, lexicographic, goal programming, goal attainment and game theory methods. It is found that the game theory approach is superior in finding a better optimum solution with proper balance of the various objective functions.
Neoliberal Optimism: Applying Market Techniques to Global Health.
Mei, Yuyang
2016-09-23
Global health and neoliberalism are becoming increasingly intertwined as organizations utilize markets and profit motives to solve the traditional problems of poverty and population health. I use field work conducted over 14 months in a global health technology company to explore how the promise of neoliberalism re-envisions humanitarian efforts. In this company's vaccine refrigerator project, staff members expect their investors and their market to allow them to achieve scale and develop accountability to their users in developing countries. However, the translation of neoliberal techniques to the global health sphere falls short of the ideal, as profits are meager and purchasing power remains with donor organizations. The continued optimism in market principles amidst such a non-ideal market reveals the tenacious ideological commitment to neoliberalism in these global health projects.
Optimal technique for maximal forward rotating vaults in men's gymnastics.
Hiley, Michael J; Jackson, Monique I; Yeadon, Maurice R
2015-08-01
In vaulting a gymnast must generate sufficient linear and angular momentum during the approach and table contact to complete the rotational requirements in the post-flight phase. This study investigated the optimization of table touchdown conditions and table contact technique for the maximization of rotation potential for forwards rotating vaults. A planar seven-segment torque-driven computer simulation model of the contact phase in vaulting was evaluated by varying joint torque activation time histories to match three performances of a handspring double somersault vault by an elite gymnast. The closest matching simulation was used as a starting point to maximize post-flight rotation potential (the product of angular momentum and flight time) for a forwards rotating vault. It was found that the maximized rotation potential was sufficient to produce a handspring double piked somersault vault. The corresponding optimal touchdown configuration exhibited hip flexion, in contrast to the hyperextended configuration required for maximal height. Increasing touchdown velocity and angular momentum led to additional post-flight rotation potential. By increasing the horizontal velocity at table touchdown, within limits obtained from recorded performances, the handspring double somersault tucked with one and a half twists and the handspring triple somersault tucked became theoretically possible.
Optimal technique for deep breathing exercises after cardiac surgery.
Westerdahl, E
2015-06-01
Cardiac surgery patients often develop a restrictive pulmonary impairment and gas exchange abnormalities in the early postoperative period. Chest physiotherapy is routinely prescribed in order to reduce or prevent these complications. Besides early mobilization, positioning and shoulder girdle exercises, various breathing exercises have been implemented as a major component of postoperative care. A variety of deep breathing maneuvers are recommended to the spontaneously breathing patient to reduce atelectasis and to improve lung function in the early postoperative period. Different breathing exercises are recommended in different parts of the world, and there is no consensus about the most effective breathing technique after cardiac surgery. Arbitrary instructions are given, and recommendations on performance and duration vary between hospitals. Deep breathing exercises are a major part of this therapy, but scientific evidence for their efficacy has been lacking until recently, and there is a lack of trials describing how postoperative breathing exercises actually should be performed. The purpose of this review is to provide a brief overview of postoperative breathing exercises for patients undergoing cardiac surgery via sternotomy, and to discuss and suggest an optimal technique for the performance of deep breathing exercises.
PMSM Driver Based on Hybrid Particle Swarm Optimization and CMAC
NASA Astrophysics Data System (ADS)
Tu, Ji; Cao, Shaozhong
A novel hybrid particle swarm optimization (PSO) and cerebellar model articulation controller (CMAC) is introduced for the permanent magnet synchronous motor (PMSM) driver. PSO can simulate random learning among the individuals of a population, and CMAC can simulate the self-learning of an individual. To validate the ability and superiority of the novel algorithm, experiments and comparisons were carried out in MATLAB/SIMULINK. A comparative analysis of PSO, hybrid PSO-CMAC, and CMAC feed-forward control is also given. The results prove that the electric torque ripple and torque disturbance of the PMSM driver can be reduced by using the hybrid PSO-CMAC algorithm.
[Research on living tree volume forecast based on PSO embedding SVM].
Jiao, You-Quan; Feng, Zhong-Ke; Zhao, Li-Xi; Xu, Wei-Heng; Cao, Zhong
2014-01-01
In order to establish a volume model, living trees have to be felled and divided into many sections, which is a kind of destructive experiment; hundreds of thousands of trees have thus been felled each year in China. To solve this problem, a new method for accurate measurement of living tree volume without felling is proposed in the present paper. In the method, new measurement and calculation approaches are applied using a photoelectric theodolite with auxiliary manual measurement. The diameter at breast height and the diameter at ground level were measured manually, and diameters at other heights were obtained by the photoelectric theodolite. The volume and height of each tree were calculated by special software programmed by the authors. Zhonglin aspen No. 107 trees were selected as the experimental objects, and 400 data records were obtained. Based on these data, a nonlinear intelligent living tree volume prediction model using a particle swarm optimization algorithm with support vector machines (PSO-SVM) was established. Three hundred of the 400 data records, including tree height and diameter at breast height, were randomly selected as input data, with tree volume as output data, using the PSO-SVM toolbox of MATLAB 7.11; a tree volume model was thus obtained. The remaining 100 data records were used to test the volume model. The results show that the multiple correlation coefficient (R²) between predicted and measured values is 0.91, which is 2% higher than the value calculated by the classic Spurr binary volume model, and the mean absolute error rate was reduced by 0.44%. Compared with the Spurr binary volume model, the PSO-SVM model has self-learning and self-adaptation abilities; moreover, with high prediction accuracy, fast learning speed, and a small sample-size requirement, the PSO-SVM model has good prospects for popularization and application.
Optimization technique for problems with an inequality constraint
NASA Technical Reports Server (NTRS)
Russell, K. J.
1972-01-01
General technique uses a modified version of an existing technique termed the pattern search technique. New procedure called the parallel move strategy permits pattern search technique to be used with problems involving a constraint.
Technique to optimize magnetic response of gelatin coated magnetic nanoparticles.
Parikh, Nidhi; Parekh, Kinnari
2015-07-01
The paper describes the results of optimization of the magnetic response of a highly stable bio-functionalized magnetic nanoparticle dispersion. The concentration of gelatin during in situ co-precipitation synthesis was varied over 8, 23 and 48 mg/mL to optimize magnetic properties. This variation results in a change in crystallite size from 10.3 to 7.8 ± 0.1 nm. TEM measurement of the G3 sample shows highly crystalline spherical nanoparticles with a mean diameter of 7.2 ± 0.2 nm and a diameter distribution (σ) of 0.27. FTIR spectra show a shift of 22 cm(-1) at the C=O stretch, with the absence of N-H stretching confirming the chemical binding of gelatin on the magnetic nanoparticles. The lone electron pair of the amide group explains the mechanism of binding. TGA shows 32.8-25.2% weight loss at 350 °C, substantiating decomposition of the chemically bound gelatin. The magnetic response shows that for the 8 mg/mL gelatin concentration, the initial susceptibility and saturation magnetization are at a maximum. The cytotoxicity of the G3 sample was assessed in Normal Rat Kidney Epithelial Cells (NRK line) by MTT assay. Results show an increase in viability at all concentrations, indicating a probable stimulating action of these particles in the nontoxic range. This shows the potential of this technique for biological applications, as the coated particles are (i) superparamagnetic, (ii) highly stable in physiological media, (iii) able to carry other drugs via the free functional groups of gelatin, and (iv) non-toxic.
Rotman Lens Sidewall Design and Optimization with Hybrid Hardware/Software Based Programming
2015-01-09
This work involves developing a hybrid bio-inspired optimization algorithm which enhances the Artificial Immune System (AIS) algorithm with techniques borrowed from PSO and GA. Preliminary results with three challenging mathematical test functions were shown to be promising in terms of the performance of the enhanced AIS (EAIS) algorithm.
PSO based PI controller design for a solar charger system.
Yau, Her-Terng; Lin, Chih-Jer; Liang, Qin-Cheng
2013-01-01
Due to the global energy crisis and severe environmental pollution, the photovoltaic (PV) system has become one of the most important renewable energy sources. Many previous studies on solar charger integrated systems focus only on load charge control or on switching between Maximum Power Point Tracking (MPPT) and charge control modes. This study used a two-stage system, which allows the overall portable solar energy charging system to implement MPPT and optimal charge control of a Li-ion battery simultaneously. First, this study designs a DC/DC boost converter for solar power generation, which uses the variable step size incremental conductance method (VSINC) to enable the solar cell to track the maximum power point at any time. The voltage was exported from the DC/DC boost converter to the DC/DC buck converter, so that the voltage dropped to a proper level for charging the battery. The charging system uses the constant current/constant voltage (CC/CV) method to charge the lithium battery. In order to obtain the optimum PI charge controller parameters, this study used an intelligent algorithm to determine them. According to the simulation and experimental results, the control parameters obtained by PSO yield better performance than those obtained by genetic algorithms (GAs).
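The PSO-based PI tuning step can be illustrated with a toy closed loop. The sketch below tunes (Kp, Ki) by minimizing an integral time-weighted squared error (ITSE) of the unit-step response of a hypothetical first-order plant G(s) = 1/(0.5s + 1); the plant, gain ranges, and swarm coefficients are assumptions for illustration, not the converter model from this study.

```python
import numpy as np

rng = np.random.default_rng(4)

def itse(gains, T=5.0, dt=0.01):
    """ITSE of a unit-step response for a PI loop around G(s) = 1/(0.5s + 1)."""
    kp, ki = gains
    y, integ, cost = 0.0, 0.0, 0.0
    for step in range(int(T / dt)):
        t = step * dt
        e = 1.0 - y                    # unit-step reference minus plant output
        integ += e * dt
        u = kp * e + ki * integ        # PI control law
        y += dt * (-y + u) / 0.5       # explicit Euler update of the plant state
        cost += t * e * e * dt
    return cost

n, iters = 15, 40
pos = rng.uniform(0.1, 10.0, (n, 2))   # particles are (Kp, Ki) pairs
vel = np.zeros_like(pos)
pbest = pos.copy()
pval = np.array([itse(p) for p in pos])
g = pbest[pval.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, 2)), rng.random((n, 2))
    vel = 0.72 * vel + 1.49 * r1 * (pbest - pos) + 1.49 * r2 * (g - pos)
    pos = np.clip(pos + vel, 0.01, 20.0)   # keep gains positive and bounded
    f = np.array([itse(p) for p in pos])
    imp = f < pval
    pbest[imp], pval[imp] = pos[imp], f[imp]
    g = pbest[pval.argmin()].copy()

print("tuned (Kp, Ki):", g, " ITSE:", pval.min())
```

For this first-order plant the closed loop is stable for any positive gains, so the cost is always finite; a realistic converter model would need bounds chosen from its own stability analysis.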
An Information-Centric Approach to Autonomous Trajectory Planning Utilizing Optimal Control Techniques
Hurni, Michael A. (dissertation)
2009-09-01
...then finding a time-optimal time scaling for the path subject to the actuator limits. The direct approach searches for the trajectory directly...
CALIBRATION OF SEMI-ANALYTIC MODELS OF GALAXY FORMATION USING PARTICLE SWARM OPTIMIZATION
Ruiz, Andrés N.; Domínguez, Mariano J.; Yaryura, Yamila; Lambas, Diego García; Cora, Sofía A.; Martínez, Cristian A. Vega-; Gargiulo, Ignacio D.; Padilla, Nelson D.; Tecce, Tomás E.; Orsi, Álvaro; Arancibia, Alejandra M. Muñoz
2015-03-10
We present a fast and accurate method to select an optimal set of parameters in semi-analytic models of galaxy formation and evolution (SAMs). Our approach compares the results of a model against a set of observables applying a stochastic technique called Particle Swarm Optimization (PSO), a self-learning algorithm for localizing regions of maximum likelihood in multidimensional spaces that outperforms traditional sampling methods in terms of computational cost. We apply the PSO technique to the SAG semi-analytic model combined with merger trees extracted from a standard Lambda Cold Dark Matter N-body simulation. The calibration is performed using a combination of observed galaxy properties as constraints, including the local stellar mass function and the black hole to bulge mass relation. We test the ability of the PSO algorithm to find the best set of free parameters of the model by comparing the results with those obtained using a MCMC exploration. Both methods find the same maximum likelihood region, however, the PSO method requires one order of magnitude fewer evaluations. This new approach allows a fast estimation of the best-fitting parameter set in multidimensional spaces, providing a practical tool to test the consequences of including other astrophysical processes in SAMs.
Canonical PSO Based K-Means Clustering Approach for Real Datasets.
Dey, Lopamudra; Chakraborty, Sanjay
2014-01-01
The significance and application of the "clustering" technique are spread over various fields. Clustering is an unsupervised process in data mining, which is why the proper evaluation of the results and measuring the compactness and separability of the clusters are important issues. The procedure of evaluating the results of a clustering algorithm is known as cluster validity measurement. Different types of indices are used to solve different types of problems, and index selection depends on the kind of available data. This paper first proposes a Canonical PSO based K-means clustering algorithm, analyses some important clustering indices (intercluster, intracluster), and then evaluates the effects of those indices on real-time air pollution, wholesale customer, wine, and vehicle datasets using typical K-means, Canonical PSO based K-means, simple PSO based K-means, DBSCAN, and hierarchical clustering algorithms. The paper also describes the nature of the clusters, compares the performances of these clustering algorithms according to the validity assessment, and determines which algorithm is most desirable for producing compact clusters on these particular real-life datasets. It deals with the behaviour of these clustering algorithms with respect to validation indices and presents their evaluation results in mathematical and graphical forms. PMID:27355083
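A minimal sketch of the PSO-plus-K-means idea: each particle encodes the k cluster centroids, fitness is the within-cluster sum of squared errors (SSE), and the swarm's best centroids are refined with a few standard Lloyd (K-means) iterations. The synthetic blobs and swarm coefficients are assumptions for illustration, not the paper's datasets or its exact Canonical PSO variant.

```python
import numpy as np

rng = np.random.default_rng(2)

# Three well-separated synthetic 2-D blobs.
X = np.vstack([rng.normal(c, 0.3, size=(50, 2)) for c in [(0, 0), (4, 0), (2, 4)]])
k, dim = 3, 2

def sse(flat):
    """Within-cluster sum of squared errors for centroids packed in one vector."""
    cent = flat.reshape(k, dim)
    d2 = ((X[:, None, :] - cent[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1).sum()

# Seed each particle's centroids at k distinct data points.
n, iters = 40, 120
pos = np.array([X[rng.choice(len(X), k, replace=False)].ravel() for _ in range(n)])
vel = np.zeros_like(pos)
pbest = pos.copy()
pval = np.array([sse(p) for p in pos])
g = pbest[pval.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.72 * vel + 1.49 * r1 * (pbest - pos) + 1.49 * r2 * (g - pos)
    pos = pos + vel
    f = np.array([sse(p) for p in pos])
    imp = f < pval
    pbest[imp], pval[imp] = pos[imp], f[imp]
    g = pbest[pval.argmin()].copy()

# Refine the swarm's best centroids with a few Lloyd iterations.
cent = g.reshape(k, dim)
for _ in range(10):
    lab = ((X[:, None, :] - cent[None, :, :]) ** 2).sum(-1).argmin(1)
    cent = np.array([X[lab == j].mean(0) if np.any(lab == j) else cent[j]
                     for j in range(k)])

print("final within-cluster SSE:", sse(cent.ravel()))
```

The PSO stage reduces sensitivity to initialization, the trait typically cited for hybrid PSO/K-means schemes, while the Lloyd steps polish the local optimum.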
Optimization of Fast Dissolving Etoricoxib Tablets Prepared by Sublimation Technique
Patel, D. M.; Patel, M. M.
2008-01-01
The purpose of this investigation was to develop fast dissolving tablets of etoricoxib. Granules containing etoricoxib, menthol, crospovidone, aspartame and mannitol were prepared by a wet granulation technique. Menthol was sublimed from the granules by exposing them to vacuum. The porous granules were then compressed into tablets. Alternatively, tablets were first prepared and later exposed to vacuum. The tablets were evaluated for percentage friability and disintegration time. A 3² full factorial design was applied to investigate the combined effect of two formulation variables: the amount of menthol and the amount of crospovidone. The results of multiple regression analysis indicated that, for obtaining fast dissolving tablets, an optimum amount of menthol and a higher percentage of crospovidone should be used. Surface response plots are also presented to graphically represent the effect of the independent variables on percentage friability and disintegration time. The validity of the generated mathematical model was tested by preparing a checkpoint batch. Sublimation of menthol from tablets resulted in rapid disintegration compared with tablets prepared from granules that were exposed to vacuum. The optimized tablet formulation was compared with conventional marketed tablets for percentage drug dissolved in 30 min (Q30) and dissolution efficiency after 30 min (DE30). From the results, it was concluded that fast dissolving tablets with improved etoricoxib dissolution could be prepared by sublimation from tablets containing a suitable subliming agent. PMID:20390084
Multivariable optimization of liquid rocket engines using particle swarm algorithms
NASA Astrophysics Data System (ADS)
Jones, Daniel Ray
Liquid rocket engines are highly reliable, controllable, and efficient compared to other conventional forms of rocket propulsion. As such, they have seen wide use in the space industry and have become the standard propulsion system for launch vehicles, orbit insertion, and orbital maneuvering. Though these systems are well understood, historical optimization techniques are often inadequate due to the highly non-linear nature of the engine performance problem. In this thesis, a Particle Swarm Optimization (PSO) variant was applied to maximize the specific impulse of a finite-area combustion chamber (FAC) equilibrium flow rocket performance model by controlling the engine's oxidizer-to-fuel ratio and de Laval nozzle expansion and contraction ratios. In addition to the PSO-controlled parameters, engine performance was calculated based on propellant chemistry, combustion chamber pressure, and ambient pressure, which are provided as inputs to the program. The performance code was validated by comparison with NASA's Chemical Equilibrium with Applications (CEA) and the commercially available Rocket Propulsion Analysis (RPA) tool. Similarly, the PSO algorithm was validated by comparison with brute-force optimization, which calculates all possible solutions and subsequently determines which is the optimum. Particle Swarm Optimization was shown to be an effective optimizer capable of quick and reliable convergence for complex functions of multiple non-linear variables.
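The brute-force validation idea above — exhaustively evaluating the design space and checking that the swarm finds the same optimum — can be sketched on a toy surface. The smooth two-variable function below merely imitates an Isp(mixture ratio, expansion ratio) surface; the real fitness would come from an equilibrium-flow performance code, so the function and all numbers here are purely illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

def isp(x):
    """Toy smooth stand-in for Isp vs. (mixture ratio, expansion ratio)."""
    mr, er = x[..., 0], x[..., 1]
    return (300.0 + 40.0 * np.exp(-(mr - 2.3) ** 2 / 0.5)
                  + 15.0 * np.exp(-(er - 50.0) ** 2 / 400.0))

# Brute force: evaluate a dense grid over the design box, keep the maximum.
mr, er = np.meshgrid(np.linspace(1, 4, 400), np.linspace(10, 100, 400))
brute_best = isp(np.stack([mr, er], axis=-1)).max()

# PSO over the same box, maximizing Isp.
n, iters = 20, 120
pos = rng.uniform([1.0, 10.0], [4.0, 100.0], (n, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pval = isp(pos)
g = pbest[pval.argmax()].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, 2)), rng.random((n, 2))
    vel = 0.72 * vel + 1.49 * r1 * (pbest - pos) + 1.49 * r2 * (g - pos)
    pos = np.clip(pos + vel, [1.0, 10.0], [4.0, 100.0])
    f = isp(pos)
    imp = f > pval
    pbest[imp], pval[imp] = pos[imp], f[imp]
    g = pbest[pval.argmax()].copy()

print("grid best:", brute_best, " PSO best:", pval.max())
```

The point of the comparison is that the swarm reaches the grid's optimum with a few thousand evaluations instead of the grid's 160,000.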
NASA Astrophysics Data System (ADS)
Wang, Xuewu; Shi, Yingpan; Ding, Dongyan; Gu, Xingsheng
2016-02-01
Spot-welding robots have a wide range of applications in manufacturing industries. There are usually many weld joints in a welding task, and a reasonable welding path to traverse these weld joints has a significant impact on welding efficiency. Traditional manual path planning can handle a few weld joints effectively, but when the number of weld joints is large it is difficult to obtain the optimal path; manual planning is also time consuming, inefficient, and cannot guarantee optimality. A double global optimum genetic algorithm-particle swarm optimization (GA-PSO) approach based on the GA and PSO algorithms is proposed to solve the welding robot path planning problem, where the shortest collision-free path is used as the criterion to optimize the welding path. Besides algorithm effectiveness analysis and verification, the simulation results indicate that the algorithm has strong searching ability and practicality, and is suitable for welding robot path planning.
Evolutionary artificial neural networks by multi-dimensional particle swarm optimization.
Kiranyaz, Serkan; Ince, Turker; Yildirim, Alper; Gabbouj, Moncef
2009-12-01
In this paper, we propose a novel technique for the automatic design of Artificial Neural Networks (ANNs) by evolving to the optimal network configuration(s) within an architecture space. It is entirely based on a multi-dimensional Particle Swarm Optimization (MD PSO) technique, which re-forms the native structure of swarm particles in such a way that they can make inter-dimensional passes with a dedicated dimensional PSO process. Therefore, in a multidimensional search space where the optimum dimension is unknown, swarm particles can seek both positional and dimensional optima. This eventually removes the necessity of setting a fixed dimension a priori, which is a common drawback for the family of swarm optimizers. With the proper encoding of the network configurations and parameters into particles, MD PSO can then seek the positional optimum in the error space and the dimensional optimum in the architecture space. The optimum dimension converged at the end of a MD PSO process corresponds to a unique ANN configuration where the network parameters (connections, weights and biases) can then be resolved from the positional optimum reached on that dimension. In addition to this, the proposed technique generates a ranked list of network configurations, from the best to the worst. This is indeed a crucial piece of information, indicating what potential configurations can be alternatives to the best one, and which configurations should not be used at all for a particular problem. In this study, the architecture space is defined over feed-forward, fully-connected ANNs so as to use the conventional techniques such as back-propagation and some other evolutionary methods in this field. The proposed technique is applied over the most challenging synthetic problems to test its optimality on evolving networks and over the benchmark problems to test its generalization capability as well as to make comparative evaluations with the several competing techniques. The experimental
A new logistic dynamic particle swarm optimization algorithm based on random topology.
Ni, Qingjian; Deng, Jianming
2013-01-01
Population topology of particle swarm optimization (PSO) directly affects the dissemination of optimal information during the evolutionary process and has a significant impact on the performance of PSO. Classic static population topologies are usually used in PSO, such as fully connected topology, ring topology, star topology, and square topology. In this paper, the performance of PSO with the proposed random topologies is analyzed, and the relationship between population topology and the performance of PSO is also explored from the perspective of graph-theoretic characteristics of population topologies. Further, for a relatively new PSO variant named logistic dynamic particle swarm optimization, an extensive simulation study is presented to discuss the effectiveness of the random topology and the design strategies of population topology. Finally, the experimental data are analyzed and discussed. Regarding the design and use of population topology in PSO, some useful conclusions are proposed that can provide a basis for further discussion and research.
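The neighborhood mechanics behind a random topology can be sketched in a few lines. This is a minimal illustration under assumptions, not the paper's algorithm: `random_topology` and `neighborhood_best` are hypothetical helper names, and the informant count `k` is an assumed parameter.

```python
import random

def random_topology(n_particles, k, rng=random.Random(0)):
    """Assign each particle k informants chosen uniformly at random
    (plus itself), as in random-topology PSO variants."""
    neighbors = []
    for i in range(n_particles):
        nb = {i} | {rng.randrange(n_particles) for _ in range(k)}
        neighbors.append(sorted(nb))
    return neighbors

def neighborhood_best(i, neighbors, pbest_fitness):
    """Index of the best personal best among particle i's informants
    (minimization)."""
    return min(neighbors[i], key=lambda j: pbest_fitness[j])

topo = random_topology(6, 2)
fit = [3.0, 1.5, 4.0, 0.5, 2.0, 9.0]
lbest = neighborhood_best(0, topo, fit)
```

In a full PSO loop, `lbest` would replace the global best in the velocity update, so information spreads along the random graph rather than instantly through the whole swarm.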
Onboard Science Techniques to Optimize Science Data Retrieval from Small Spacecraft
NASA Astrophysics Data System (ADS)
Lightholder, J.; Thompson, D. R.; Huffman, W.; Boland, J.; Chien, S.; Castillo-Rogez, J.
2016-10-01
Software strategies for new onboard science techniques that optimize science return under the constraints imposed by interplanetary small spacecraft, including size, power, attitude control, and communications bandwidth.
Sorribas, Albert; Pozo, Carlos; Vilaprinyo, Ester; Guillén-Gosálbez, Gonzalo; Jiménez, Laureano; Alves, Rui
2010-09-01
Cells are natural factories that can adapt to changes in external conditions. Their adaptive responses to specific stress situations are a result of evolution. In theory, many alternative sets of coordinated changes in the activity of the enzymes of each pathway could allow for an appropriate adaptive readjustment of metabolism in response to stress. However, experimental and theoretical observations show that actual responses to specific changes follow fairly well defined patterns that suggest an evolutionary optimization of that response. Thus, it is important to identify functional effectiveness criteria that may explain why certain patterns of change in cellular components and activities during adaptive response have been preferably maintained over evolutionary time. Those functional effectiveness criteria define sets of physiological requirements that constrain the possible adaptive changes and lead to different operation principles that could explain the observed response. Understanding such operation principles can also facilitate biotechnological and metabolic engineering applications. Thus, developing methods that enable the analysis of cellular responses from the perspective of identifying operation principles may have strong theoretical and practical implications. In this paper we present one such method that was designed based on nonlinear global optimization techniques. Our methodology can be used with a special class of nonlinear kinetic models known as GMA models and it allows for a systematic characterization of the physiological requirements that may underlie the evolution of adaptive strategies.
Swarm algorithms with chaotic jumps for optimization of multimodal functions
NASA Astrophysics Data System (ADS)
Krohling, Renato A.; Mendel, Eduardo; Campos, Mauro
2011-11-01
In this article, the use of some well-known versions of particle swarm optimization (PSO), namely the canonical PSO, the bare bones PSO (BBPSO) and the fully informed particle swarm (FIPS), is investigated on multimodal optimization problems. A hybrid approach consisting of swarm algorithms combined with a jump strategy to escape from local optima is developed and tested. The jump strategy is based on the chaotic logistic map. The hybrid algorithm was tested for all three versions of PSO, and simulation results show that the addition of the jump strategy improves the performance of the swarm algorithms for most of the investigated optimization problems. A comparison with the off-the-shelf PSO with local topology (lbest model) has also been performed and indicates the superior performance of the standard PSO with chaotic jumps over the standard PSO, both using the local topology (lbest model).
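A chaotic jump of this kind is easy to sketch. The following is a hedged illustration, not the authors' implementation: `chaotic_jump` and `jump_prob` are assumed names/parameters, and the logistic map is iterated with the standard chaotic setting mu = 4.

```python
import random

def logistic_map(z, mu=4.0):
    """One iterate of the chaotic logistic map on (0, 1)."""
    return mu * z * (1.0 - z)

def chaotic_jump(position, bounds, z, jump_prob=0.1, rng=random.Random(1)):
    """With probability jump_prob, relocate each coordinate using a
    chaotic logistic-map draw scaled into the search bounds; otherwise
    keep the coordinate. Returns (new_position, new_chaos_state)."""
    lo, hi = bounds
    new_pos = []
    for x in position:
        z = logistic_map(z)           # advance the chaotic sequence
        if rng.random() < jump_prob:
            new_pos.append(lo + z * (hi - lo))
        else:
            new_pos.append(x)
    return new_pos, z

pos, z = chaotic_jump([0.0, 0.0, 0.0], (-5.0, 5.0), z=0.7)
```

The chaotic state `z` is threaded through calls so that successive jumps sample the logistic map's dense, non-repeating orbit instead of a uniform random restart.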
Sheet metal forming optimization by using surrogate modeling techniques
NASA Astrophysics Data System (ADS)
Wang, Hu; Ye, Fan; Chen, Lei; Li, Enying
2017-01-01
Surrogate assisted optimization has been widely applied in sheet metal forming design due to its efficiency. Therefore, to improve the efficiency of design and reduce the product development cycle, it is important for scholars and engineers to have some insight into the performance of each surrogate assisted optimization method and make them more flexible practically. For this purpose, the state-of-the-art surrogate assisted optimizations are investigated. Furthermore, in view of the bottleneck and development of the surrogate assisted optimization and sheet metal forming design, some important issues on the surrogate assisted optimization in support of the sheet metal forming design are analyzed and discussed, involving the description of the sheet metal forming design, off-line and online sampling strategies, space mapping algorithm, high dimensional problems, robust design, some challenges and potential feasible methods. Generally, this paper provides insightful observations into the performance and potential development of these methods in sheet metal forming design.
NASA Astrophysics Data System (ADS)
Paasche, H.; Tronicke, J.
2012-04-01
In many near surface geophysical applications multiple tomographic data sets are routinely acquired to explore subsurface structures and parameters. Linking the model generation process of multi-method geophysical data sets can significantly reduce ambiguities in geophysical data analysis and model interpretation. Most geophysical inversion approaches rely on local search optimization methods used to find an optimal model in the vicinity of a user-given starting model. The final solution may critically depend on the initial model. Alternatively, global optimization (GO) methods have been used to invert geophysical data. They explore the solution space in more detail and determine the optimal model independently of the starting model. Additionally, they can be used to find sets of optimal models allowing a further analysis of model parameter uncertainties. Here we employ particle swarm optimization (PSO) to realize the global optimization of tomographic data. PSO is an emergent method based on swarm intelligence characterized by fast and robust convergence towards optimal solutions. The fundamental principle of PSO is inspired by nature, since the algorithm mimics the behavior of a flock of birds searching for food in a search space. In PSO, a number of particles cruise a multi-dimensional solution space striving to find optimal model solutions explaining the acquired data. The particles communicate their positions and success and direct their movement according to the position of the currently most successful particle of the swarm. The success of a particle, i.e. the quality of the model currently found by a particle, must be uniquely quantifiable to identify the swarm leader. When jointly inverting disparate data sets, the optimization solution has to satisfy multiple optimization objectives, at least one for each data set. A unique determination of the most successful particle currently leading the swarm is then not possible. Instead, only statements about the Pareto
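The Pareto-based selection that multi-objective inversion requires rests on the standard dominance test, sketched generically below (this is textbook machinery, not this paper's implementation):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points)]

# With two misfit objectives (one per data set), only (3, 3) is dominated:
front = pareto_front([(1, 3), (2, 2), (3, 1), (3, 3)])
```

Because no single "best" particle exists when objectives conflict, multi-objective PSO variants typically pick the swarm leader from this non-dominated set.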
NASA Astrophysics Data System (ADS)
Wang, Hang; Zhu, Yan; Li, Wenlong; Cao, Weixing; Tian, Yongchao
2014-01-01
A regional rice (Oryza sativa) grain yield prediction technique was proposed by integration of ground-based and spaceborne remote sensing (RS) data with the rice growth model (RiceGrow) through a new particle swarm optimization (PSO) algorithm. Based on an initialization/parameterization strategy (calibration), two agronomic indicators, leaf area index (LAI) and leaf nitrogen accumulation (LNA) remotely sensed by field spectra and satellite images, were combined to serve as an external assimilation parameter and integrated with the RiceGrow model for inversion of three model management parameters, including sowing date, sowing rate, and nitrogen rate. Rice grain yield was then predicted by inputting these optimized parameters into the reinitialized model. PSO was used for the parameterization and regionalization of the integrated model and compared with the shuffled complex evolution-University of Arizona (SCE-UA) optimization algorithm. The test results showed that LAI together with LNA as the integrated parameter performed better than each alone for crop model parameter initialization. PSO also performed better than SCE-UA in terms of running efficiency and assimilation results, indicating that PSO is a reliable optimization method for assimilating RS information and the crop growth model. The integrated model also had improved precision for predicting rice grain yield.
New video projection control room is OK with PSO
Buttress, J.
1996-11-01
Public Service Company of Oklahoma (PSO) has 473,000 electricity customers across the state. While power failures are unquestionably an inconvenience to residential customers and a loss of income to the utility, power outages can have serious financial effects on the region's business community. Oil and natural gas producers, pipelines, aircraft and aerospace companies, farms, ranches and wood product producers rely on PSO to supply them with electricity. Historically, every supplier of electricity experiences and is responsible for correcting power supply failures regardless of circumstances. Therefore, to successfully serve its customers, PSO strives to identify three key pieces of information for each report of trouble it receives: Is the power off? If so, why? Approximately when will it be restored?
A Particle Swarm Optimization-Based Approach with Local Search for Predicting Protein Folding.
Yang, Cheng-Hong; Lin, Yu-Shiun; Chuang, Li-Yeh; Chang, Hsueh-Wei
2017-03-13
The hydrophobic-polar (HP) model is commonly used for predicting protein folding structures and hydrophobic interactions. This study developed a particle swarm optimization (PSO)-based algorithm combined with local search algorithms; specifically, the high exploration PSO (HEPSO) algorithm (which can execute global search processes) was combined with three local search algorithms (hill-climbing algorithm, greedy algorithm, and Tabu table), yielding the proposed HE-L-PSO algorithm. By using 20 known protein structures, we evaluated the performance of the HE-L-PSO algorithm in predicting protein folding in the HP model. The proposed HE-L-PSO algorithm exhibited favorable performance in predicting both short and long amino acid sequences with high reproducibility and stability, compared with seven reported algorithms. The HE-L-PSO algorithm yielded optimal solutions for all predicted protein folding structures. All HE-L-PSO-predicted protein folding structures possessed a hydrophobic core that is similar to normal protein folding.
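The objective such HP-model algorithms minimize is simple to state. Below is a hedged sketch of the standard 2-D square-lattice HP energy (the abstract does not specify the lattice; `hp_energy` is an illustrative name, not from the paper):

```python
def hp_energy(sequence, folding):
    """Energy of an HP-model folding on the 2-D square lattice:
    -1 for every pair of non-consecutive H residues on adjacent sites.
    `sequence` is a string over {'H','P'}; `folding` is the list of
    (x, y) lattice coordinates of each residue."""
    assert len(sequence) == len(folding)
    energy = 0
    for i in range(len(sequence)):
        for j in range(i + 2, len(sequence)):   # skip chain neighbours
            if sequence[i] == 'H' == sequence[j]:
                (xi, yi), (xj, yj) = folding[i], folding[j]
                if abs(xi - xj) + abs(yi - yj) == 1:  # lattice contact
                    energy -= 1
    return energy
```

A PSO-based folder then searches the space of self-avoiding walks for the folding with the lowest such energy, which is what produces the hydrophobic core the abstract mentions.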
NASA Astrophysics Data System (ADS)
Lin, Juan; Liu, Chenglian; Guo, Yongning
2014-10-01
The estimation of neural active sources from magnetoencephalography (MEG) data is a very critical issue for both clinical neurology and brain function research. A widely accepted source-modeling technique for MEG involves calculating a set of equivalent current dipoles (ECDs). Depth in the brain is one of the difficulties in MEG source localization. Particle swarm optimization (PSO) is widely used to solve various optimization problems. In this paper we discuss its ability and robustness in finding the global optimum at different depths of the brain when using a single equivalent current dipole (sECD) model and single time-sliced data. The results show that PSO is an effective global optimization method for MEG source localization when given one dipole at different depths.
The Platform Switching Approach to Optimize Split Crest Technique
Sammartino, G.; Cerone, V.; Gasparro, R.; Riccitiello, F.; Trosino, O.
2014-01-01
The split crest technique is a reliable procedure used simultaneously with implant positioning. In the literature, some authors describe secondary bone resorption as a postoperative complication. The authors show how platform switching can avoid secondary resorption as a complication of the split crest technique. PMID:25165586
Optimizing Basic French Skills Utilizing Multiple Teaching Techniques.
ERIC Educational Resources Information Center
Skala, Carol
This action research project examined the impact of foreign language teaching techniques on the language acquisition and retention of 19 secondary level French I students, focusing on student perceptions of the effectiveness and ease of four teaching techniques: total physical response, total physical response storytelling, literature approach,…
An optimal GPS data processing technique for precise positioning
NASA Technical Reports Server (NTRS)
Wu, Sien-Chong; Melbourne, William G.
1993-01-01
A mathematical formula to optimally combine dual-frequency GPS pseudorange and carrier phase (integrated Doppler) data streams into a single data stream is derived in closed form. The data combination reduces the data volume and computing time in the filtering process for parameter estimation by a factor of 4 while preserving the full data strength for precise positioning. The resulting single data stream is that of carrier phase measurements with both data noise and bias uncertainty strictly defined. With this mathematical formula the single stream of optimally combined GPS measurements can be efficiently formed by simple numerical calculations. Carrier phase ambiguity resolution, when feasible, is strengthened due to the preserved full data strength with the optimally combined data and the resulting longer wavelength for the ambiguity to be resolved.
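The abstract does not reproduce the derived formula. As a hedged illustration of the kind of combination involved, the well-known GRAPHIC-style half-sum of pseudorange and carrier phase exploits the fact that the first-order ionospheric delay enters the two observables with opposite signs (this is the standard textbook form, not necessarily the paper's exact combination):

```latex
% Illustrative GRAPHIC-type combination (not the paper's exact formula):
% rho = geometric range + clock terms, I = ionospheric delay,
% B = carrier-phase ambiguity bias, eps = measurement noise.
\begin{align}
P &= \rho + I + \epsilon_P, &
L &= \rho - I + B + \epsilon_L, \\
\tfrac{1}{2}\left(P + L\right)
  &= \rho + \tfrac{1}{2}B + \tfrac{1}{2}\left(\epsilon_P + \epsilon_L\right).
\end{align}
```

The result is a single phase-like data stream with a well-defined bias and reduced noise, matching the abstract's description of a combined observable with "both data noise and bias uncertainty strictly defined."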
Adjoint Techniques for Topology Optimization of Structures Under Damage Conditions
NASA Technical Reports Server (NTRS)
Akgun, Mehmet A.; Haftka, Raphael T.
2000-01-01
The objective of this cooperative agreement was to seek computationally efficient ways to optimize aerospace structures subject to damage tolerance criteria. Optimization was to involve sizing as well as topology optimization. The work was done in collaboration with Steve Scotti, Chauncey Wu and Joanne Walsh at the NASA Langley Research Center. Computation of constraint sensitivity is normally the most time-consuming step of an optimization procedure. The cooperative work first focused on this issue and implemented the adjoint method of sensitivity computation (Haftka and Gurdal, 1992) in an optimization code (runstream) written in Engineering Analysis Language (EAL). The method was implemented both for bar and plate elements including buckling sensitivity for the latter. Lumping of constraints was investigated as a means to reduce the computational cost. Adjoint sensitivity computation was developed and implemented for lumped stress and buckling constraints. Cost of the direct method and the adjoint method was compared for various structures with and without lumping. The results were reported in two papers (Akgun et al., 1998a and 1999). It is desirable to optimize topology of an aerospace structure subject to a large number of damage scenarios so that a damage tolerant structure is obtained. Including damage scenarios in the design procedure is critical in order to avoid large mass penalties at later stages (Haftka et al., 1983). A common method for topology optimization is that of compliance minimization (Bendsoe, 1995) which has not been used for damage tolerant design. In the present work, topology optimization is treated as a conventional problem aiming to minimize the weight subject to stress constraints. Multiple damage configurations (scenarios) are considered. Each configuration has its own structural stiffness matrix and, normally, requires factoring of the matrix and solution of the system of equations. Damage that is expected to be tolerated is local
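The adjoint sensitivity method referenced above can be summarized in a standard textbook form (illustrative notation, not the report's own): for a discretized structure $K u = f$ and a lumped constraint $g(u, x)$ depending on design variables $x$,

```latex
% Standard adjoint sensitivity relations (schematic notation):
\begin{align}
K\,u &= f, &
K^{T}\lambda &= \left(\frac{\partial g}{\partial u}\right)^{T}, \\
\frac{\mathrm{d}g}{\mathrm{d}x}
  &= \frac{\partial g}{\partial x}
   + \lambda^{T}\!\left(\frac{\partial f}{\partial x}
   - \frac{\partial K}{\partial x}\,u\right).
\end{align}
```

One adjoint solve is needed per constraint rather than per design variable, which is why lumping stress and buckling constraints, as described above, reduces the computational cost.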
77 FR 42737 - Patient Safety Organizations: Delisting for Cause for The Steward Group PSO
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-20
... Cause for The Steward Group PSO AGENCY: Agency for Healthcare Research and Quality (AHRQ), HHS. ACTION: Notice of delisting. SUMMARY: AHRQ has delisted The Steward Group PSO as a Patient Safety Organization... Steward Group PSO failed to respond to a Notice of Preliminary Finding of Deficiency sent by AHRQ...
Towards the novel reasoning among particles in PSO by the use of RDF and SPARQL.
Fister, Iztok; Yang, Xin-She; Ljubič, Karin; Fister, Dušan; Brest, Janez; Fister, Iztok
2014-01-01
The significant development of the Internet has posed some new challenges, and many new programming tools have been developed to address them. Today, the semantic web is a modern paradigm for representing and accessing knowledge data on the Internet. This paper uses semantic tools such as the resource description framework (RDF) and the RDF query language (SPARQL) for optimization purposes. These tools are combined with particle swarm optimization (PSO), and the selection of the best solutions depends on their fitness. Instead of the local best solution, a neighborhood of solutions can be defined for each particle and used for the calculation of the new position, based on key ideas from the semantic web domain. Preliminary results from optimizing ten benchmark functions were promising, and thus this method should be investigated further.
DyHAP: Dynamic Hybrid ANFIS-PSO Approach for Predicting Mobile Malware.
Afifi, Firdaus; Anuar, Nor Badrul; Shamshirband, Shahaboddin; Choo, Kim-Kwang Raymond
2016-01-01
To deal with the large number of malicious mobile applications (e.g. mobile malware), a number of malware detection systems have been proposed in the literature. In this paper, we propose a hybrid method to find the optimum parameters that can be used to facilitate mobile malware identification. We also present a multi-agent system architecture comprising three system agents (i.e. sniffer, extraction and selection agents) to capture and manage the pcap file for the data preparation phase. In our hybrid approach, we combine an adaptive neuro-fuzzy inference system (ANFIS) and particle swarm optimization (PSO). Evaluations using data captured on a real-world Android device and the MalGenome dataset demonstrate the effectiveness of our approach in comparison to two hybrid optimization methods, differential evolution (ANFIS-DE) and ant colony optimization (ANFIS-ACO). PMID:27611312
Optimal Use of Wire-Assisted Techniques and Precut Sphincterotomy
Lee, Tae Hoon; Park, Sang-Heum
2016-01-01
Various endoscopic techniques have been developed to overcome the difficulties in biliary or pancreatic access during endoscopic retrograde cholangiopancreatography, according to the preference of the endoscopist or the aim of the procedures. In terms of endoscopic methods, guidewire-assisted cannulation is a commonly used and well-known initial cannulation technique, or an alternative in cases of difficult cannulation. In addition, precut sphincterotomy encompasses a range of available rescue techniques, including conventional precut, precut fistulotomy, transpancreatic septotomy, and precut after insertion of pancreatic stent or pancreatic duct guidewire-guided septal precut. We present a literature review of guidewire-assisted cannulation as a primary endoscopic method and the precut technique for the facilitation of selective biliary access. PMID:27642848
Safe microburst penetration techniques: A deterministic, nonlinear, optimal control approach
NASA Technical Reports Server (NTRS)
Psiaki, Mark L.
1987-01-01
A relatively large amount of computer time was used for the calculation of an optimal trajectory, but it is subject to reduction with moderate effort. The Deterministic, Nonlinear, Optimal Control algorithm yielded excellent aircraft performance in trajectory tracking for the given microburst. It did so by varying the angle of attack to counteract the lift effects of microburst-induced airspeed variations. Throttle saturation and aerodynamic stall limits were not a problem for the case considered, proving that the aircraft's performance capabilities were not violated by the given wind field. All closed-loop control laws previously considered performed very poorly in comparison, and therefore do not come near to taking full advantage of aircraft performance.
Development and Application of Optimization Techniques for Composite Laminates.
1983-09-01
List of tables (fragment): 1. Algorithm Performance; 2. Material Properties; 3. Comparison of Approximate Strain-Sphere to Tsai-Wu … constraint, based on "smeared" laminate properties. The optimization routines are coupled to a finite element code to update the stress state as the … failure criteria with a … complete set of laminate property …
Adaptive Optimization Techniques for Large-Scale Stochastic Planning
2011-06-28
… cannot be kept longer than a few weeks. The decision maker must decide on blood-type substitutions that minimize the chance of future shortage. Because … optimal blood-type substitution is a large stochastic problem. Another application is managing water reservoirs. In this domain, an operator needs to decide … compatibility constraints among blood types, blood inventory management does not fit well the standard inventory control framework. In reservoir management …
An Optimal Cell Detection Technique for Automated Patch Clamping
NASA Technical Reports Server (NTRS)
McDowell, Mark; Gray, Elizabeth
2004-01-01
While there are several hardware techniques for the automated patch clamping of cells that describe the equipment apparatus used for patch clamping, very few explain the science behind the actual technique of locating the ideal cell for a patch clamping procedure. We present a machine vision approach to patch clamping cell selection by developing an intelligent algorithm technique that gives the user the ability to determine a good cell to patch clamp in an image within one second. This technique will aid the user in determining the best candidates for patch clamping and will ultimately save time, increase efficiency and reduce cost. The ultimate goal is to combine intelligent processing with instrumentation and controls in order to produce a complete turnkey automated patch clamping system capable of accurately and reliably patch clamping cells with a minimum amount of human intervention. We present a unique technique that identifies good patch clamping cell candidates based on feature metrics of a cell's (x, y) position, major axis length, minor axis length, area, elongation, roundness, smoothness, angle of orientation, thinness, and whether or not the cell is only partially in the field of view. A patent is pending for this research.
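Several of the listed feature metrics follow directly from a blob's pixel coordinates via second central moments. This is a generic sketch of that computation, not the paper's algorithm; `shape_metrics` and the ellipse-equivalent axis convention are assumptions for illustration.

```python
def shape_metrics(pixels):
    """Area, axis lengths and elongation of a blob from its pixel
    coordinates, via eigenvalues of the 2x2 covariance matrix
    (second central moments)."""
    n = len(pixels)
    cx = sum(x for x, _ in pixels) / n
    cy = sum(y for _, y in pixels) / n
    sxx = sum((x - cx) ** 2 for x, _ in pixels) / n
    syy = sum((y - cy) ** 2 for _, y in pixels) / n
    sxy = sum((x - cx) * (y - cy) for x, y in pixels) / n
    # eigenvalues of [[sxx, sxy], [sxy, syy]]
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    disc = max(tr * tr / 4 - det, 0.0) ** 0.5
    lam1, lam2 = tr / 2 + disc, tr / 2 - disc
    major = 4 * lam1 ** 0.5     # ellipse-equivalent axis lengths
    minor = 4 * lam2 ** 0.5
    elongation = major / minor if minor > 0 else float('inf')
    return {'area': n, 'major': major, 'minor': minor,
            'elongation': elongation}
```

A classifier can then threshold these metrics (elongation near 1, sufficient area, etc.) to rank patch-clamping candidates.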
NASA Astrophysics Data System (ADS)
Izah Anuar, Nurul; Saptari, Adi
2016-02-01
This paper addresses the types of particle representation (encoding) procedures used in a population-based stochastic optimization technique for solving scheduling problems in the job-shop manufacturing environment. It evaluates and compares the performance of different particle representation procedures in Particle Swarm Optimization (PSO) when solving Job-shop Scheduling Problems (JSP). Particle representation procedures refer to the mapping between the particle position in PSO and the scheduling solution in JSP; this mapping is an important step, since it allows each particle in PSO to represent a schedule in JSP. Three procedures, Operation and Particle Position Sequence (OPPS), random keys representation, and the random-key encoding scheme, are used in this study. These procedures have been tested on the FT06 and FT10 benchmark problems available in the OR-Library, where the objective is to minimize the makespan, using MATLAB software. Based on the experimental results, OPPS gives the best performance on both benchmark problems. The contribution of this paper is that it demonstrates to practitioners involved in complex scheduling problems that different particle representation procedures can have significant effects on the performance of PSO in solving JSP.
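The core idea of a random-key representation is that ranking a particle's continuous coordinates yields a discrete schedule. A minimal sketch, under assumptions (the operation-based job-repetition decoding shown here is one common variant; `decode_random_keys` is an illustrative name, not from the paper):

```python
def decode_random_keys(position, n_jobs, n_machines):
    """Random-key decoding for JSP (operation-based representation):
    ranking the continuous keys yields a permutation of all operations;
    taking each rank modulo n_jobs gives a job repetition list in which
    the k-th occurrence of job j means job j's k-th operation."""
    assert len(position) == n_jobs * n_machines
    ranks = sorted(range(len(position)), key=lambda i: position[i])
    return [r % n_jobs for r in ranks]
```

Because the decoding only depends on the ordering of the keys, standard continuous PSO velocity updates can be applied to the position vector unchanged, which is the appeal of this family of encodings.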
Samaan, Michael A; Weinhandl, Joshua T; Bawab, Sebastian Y; Ringleb, Stacie I
2016-12-01
Musculoskeletal modeling allows for the determination of various parameters during dynamic maneuvers by using in vivo kinematic and ground reaction force (GRF) data as inputs. Differences between experimental and model marker data and inconsistencies in the GRFs applied to these musculoskeletal models may not produce accurate simulations. Therefore, residual forces and moments are applied to these models in order to reduce these differences. Numerical optimization techniques can be used to determine optimal tracking weights of each degree of freedom of a musculoskeletal model in order to reduce differences between the experimental and model marker data as well as residual forces and moments. In this study, the particle swarm optimization (PSO) and simplex simulated annealing (SIMPSA) algorithms were used to determine optimal tracking weights for the simulation of a sidestep cut. The PSO and SIMPSA algorithms were able to produce model kinematics that were within 1.4° of experimental kinematics with residual forces and moments of less than 10 N and 18 Nm, respectively. The PSO algorithm was able to replicate the experimental kinematic data more closely and produce more dynamically consistent kinematic data for a sidestep cut compared to the SIMPSA algorithm. Future studies should use external optimization routines to determine dynamically consistent kinematic data and report the differences between experimental and model data for these musculoskeletal simulations.
Darzi, Soodabeh; Kiong, Tiong Sieh; Islam, Mohammad Tariqul; Ismail, Mahamod; Kibria, Salehin; Salem, Balasem
2014-01-01
Linear constraint minimum variance (LCMV) beamforming is one of the adaptive beamforming techniques commonly applied to cancel interfering signals and to steer a strong beam toward the desired signal through its computed weight vectors. However, the weights computed by LCMV are usually not able to form the radiation beam towards the target user precisely, nor good enough to reduce interference by placing nulls at the interference sources. It is difficult to improve and optimize the LCMV beamforming technique through a conventional empirical approach. To address this problem, artificial intelligence (AI) techniques are explored in order to enhance the LCMV beamforming ability. In this paper, particle swarm optimization (PSO), dynamic mutated artificial immune system (DM-AIS), and gravitational search algorithm (GSA) are incorporated into the existing LCMV technique in order to improve the weights of LCMV. The simulation results demonstrate that the received signal to interference and noise ratio (SINR) of the target user can be significantly improved by the integration of PSO, DM-AIS, and GSA in LCMV through the suppression of interference in undesired directions. Furthermore, the proposed GSA can be applied as a more effective technique for LCMV beamforming optimization compared to the PSO technique. The algorithms were implemented in MATLAB.
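The baseline LCMV weights the abstract refers to have a standard closed form (textbook notation; the paper refines these weights with PSO/DM-AIS/GSA rather than using them directly):

```latex
% Standard LCMV beamformer: R is the array covariance, C the constraint
% matrix (steering vectors), f the constraint response vector.
\begin{equation}
\min_{\mathbf{w}} \; \mathbf{w}^{H}\mathbf{R}\,\mathbf{w}
\quad \text{s.t.} \quad \mathbf{C}^{H}\mathbf{w} = \mathbf{f}
\;\;\Longrightarrow\;\;
\mathbf{w}_{\mathrm{LCMV}} = \mathbf{R}^{-1}\mathbf{C}
  \left(\mathbf{C}^{H}\mathbf{R}^{-1}\mathbf{C}\right)^{-1}\mathbf{f}.
\end{equation}
```

The metaheuristics then search around this solution for weight vectors that better place nulls at interference sources, with SINR as the fitness.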
Decomposition technique and optimal trajectories for the aeroassisted flight experiment
NASA Technical Reports Server (NTRS)
Miele, A.; Wang, T.; Deaton, A. W.
1990-01-01
An actual geosynchronous Earth orbit-to-low Earth orbit (GEO-to-LEO) transfer is considered with reference to the aeroassisted flight experiment (AFE) spacecraft, and optimal trajectories are determined by minimizing the total characteristic velocity. The optimization is performed with respect to the time history of the controls (angle of attack and angle of bank), the entry path inclination and the flight time being free. Two transfer maneuvers are considered: direct ascent (DA) to LEO and indirect ascent (IA) to LEO via parking Earth orbit (PEO). By taking into account certain assumptions, the complete system can be decoupled into two subsystems: one describing the longitudinal motion and one describing the lateral motion. The angle of attack history, the entry path inclination, and the flight time are determined via the longitudinal motion subsystem. In this subsystem, the difference between the instantaneous bank angle and a constant bank angle is minimized in the least square sense subject to the specified orbital inclination requirement. Both the angles of attack and the angle of bank are shown to be constant. This result has considerable importance in the design of nominal trajectories to be used in the guidance of AFE and aeroassisted orbital transfer (AOT) vehicles.
Nonparametric probability density estimation by optimization theoretic techniques
NASA Technical Reports Server (NTRS)
Scott, D. W.
1976-01-01
Two nonparametric probability density estimators are considered. The first is the kernel estimator. The problem of choosing the kernel scaling factor based solely on a random sample is addressed. An interactive mode is discussed and an algorithm proposed to choose the scaling factor automatically. The second nonparametric probability estimate uses penalty function techniques with the maximum likelihood criterion. A discrete maximum penalized likelihood estimator is proposed and is shown to be consistent in the mean square error. A numerical implementation technique for the discrete solution is discussed and examples displayed. An extensive simulation study compares the integrated mean square error of the discrete and kernel estimators. The robustness of the discrete estimator is demonstrated graphically.
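The kernel estimator at the center of this study can be sketched in a few lines. A minimal Gaussian-kernel version, with the scaling factor `h` passed explicitly (choosing `h` from the sample alone is precisely the problem the paper addresses):

```python
import math

def kernel_density(sample, h, x):
    """Gaussian kernel density estimate at point x with scaling
    factor (bandwidth) h, from a 1-D random sample."""
    n = len(sample)
    return sum(math.exp(-0.5 * ((x - xi) / h) ** 2)
               for xi in sample) / (n * h * math.sqrt(2 * math.pi))
```

Each sample point contributes a Gaussian bump of width `h`; too small an `h` produces a spiky estimate, too large an `h` oversmooths, which is why automatic selection of the scaling factor matters.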
Optimal Control Technique for Many-Body Quantum Dynamics
NASA Astrophysics Data System (ADS)
Doria, Patrick; Calarco, Tommaso; Montangero, Simone
2011-05-01
We present an efficient strategy for controlling a vast range of nonintegrable quantum many-body one-dimensional systems that can be merged with state-of-the-art tensor network simulation methods such as the density matrix renormalization group. To demonstrate its potential, we employ it to solve a major issue in current optical-lattice physics with ultracold atoms: we show how to reduce by about 2 orders of magnitude the time needed to bring a superfluid gas into a Mott insulator state, while suppressing defects by more than 1 order of magnitude as compared to current experiments [T. Stöferle, Phys. Rev. Lett. 92, 130403 (2004); doi:10.1103/PhysRevLett.92.130403]. Finally, we show that the optimal pulse is robust against atom number fluctuations.
Preliminary research on abnormal brain detection by wavelet-energy and quantum- behaved PSO.
Zhang, Yudong; Ji, Genlin; Yang, Jiquan; Wang, Shuihua; Dong, Zhengchao; Phillips, Preetha; Sun, Ping
2016-04-29
It is important to detect abnormal brains accurately and early. The wavelet-energy (WE) is a successful feature descriptor that has achieved excellent performance in various applications; hence, we proposed a new WE-based approach for automated abnormal brain detection, and reported its preliminary results in this study. The kernel support vector machine (KSVM) was used as the classifier, and quantum-behaved particle swarm optimization (QPSO) was introduced to optimize the weights of the SVM. The results based on a 5 × 5-fold cross validation showed that the performance of the proposed WE + QPSO-KSVM was superior to ``DWT + PCA + BP-NN'', ``DWT + PCA + RBF-NN'', ``DWT + PCA + PSO-KSVM'', ``WE + BPNN'', ``WE + KSVM'', and ``DWT + PCA + GA-KSVM'' w.r.t. sensitivity, specificity, and accuracy. The work provides a novel means to detect abnormal brains with excellent performance.
SYNTHESIZING OPTIMAL STRATEGIES IN PURSUIT-EVASION GAMES BY THE EPSILON TECHNIQUE
A constructive method for synthesizing optimal strategies in pursuit-evasion games is described, based on the epsilon technique of Balakrishnan. An illustrative example is worked out. (Author)
PSO-based methods for medical image registration and change assessment of pigmented skin
NASA Astrophysics Data System (ADS)
Kacenjar, Steve; Zook, Matthew; Balint, Michael
2011-03-01
…the patient's back topography. Since the skin is a deformable membrane, this process only provides an initial condition for subsequent refinements in aligning the localized topography of the skin. To achieve a refined enhancement, a Particle Swarm Optimizer (PSO) is used to optimally determine the local camera models associated with a generalized geometric transform. Here the optimization process is driven by minimizing the entropy between the multiple time-separated images. Once the camera models are corrected for local skin deformations, the images are compared using both pixel-based and region-based methods. Limits on the detectability of change are established by the fidelity with which the algorithm corrects for local skin deformation and background alterations. These limits provide essential information for establishing early-warning thresholds for melanoma detection. Key to this work is the development of a PSO alignment algorithm to perform the refined alignment of local skin topography between the time-sequenced imagery (TSI). Testing and validation of this alignment process is achieved using a forward model that produces known geometric artifacts in the images, after which the PSO algorithm is applied to demonstrate the ability to identify and correct for these artifacts. Specifically, the forward model introduces local translational, rotational, and magnification changes within the image. These geometric modifiers are expected during TSI acquisition because of the logistical difficulty of precisely aligning the patient to the image recording geometry, and their mitigation is therefore of paramount importance to any viable image registration system. This paper shows that the PSO alignment algorithm is effective in autonomously determining and mitigating these geometric modifiers. The degree of efficacy is measured by several statistically and morphologically based pre-image filtering operations applied to the TSI imagery before applying the PSO alignment algorithm.
NASA Technical Reports Server (NTRS)
Sreekanta Murthy, T.
1992-01-01
Results of the investigation of formal nonlinear programming-based numerical optimization techniques of helicopter airframe vibration reduction are summarized. The objective and constraint function and the sensitivity expressions used in the formulation of airframe vibration optimization problems are presented and discussed. Implementation of a new computational procedure based on MSC/NASTRAN and CONMIN in a computer program system called DYNOPT for optimizing airframes subject to strength, frequency, dynamic response, and dynamic stress constraints is described. An optimization methodology is proposed which is thought to provide a new way of applying formal optimization techniques during the various phases of the airframe design process. Numerical results obtained from the application of the DYNOPT optimization code to a helicopter airframe are discussed.
An Enhanced Multi-Objective Optimization Technique for Comprehensive Aerospace Design
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi; Rajadas, John N.
2000-01-01
An enhanced multiobjective formulation technique, capable of emphasizing specific objective functions during the optimization process, has been demonstrated on a complex multidisciplinary design application. The Kreisselmeier-Steinhauser (K-S) function approach, which has been used successfully in a variety of multiobjective optimization problems, has been modified using weight factors that enable the designer to emphasize specific design objectives during the optimization process. The technique has been implemented in two distinctly different problems. The first is a classical three bar truss problem and the second is a high-speed aircraft (a doubly swept wing-body configuration) application in which the multiobjective optimization procedure simultaneously minimizes the sonic boom and the drag-to-lift ratio (C(sub D)/C(sub L)) of the aircraft while maintaining the lift coefficient within prescribed limits. The results are compared with those of an equally weighted K-S multiobjective optimization. Results demonstrate the effectiveness of the enhanced multiobjective optimization procedure.
Akkasi, Abbas; Varoglu, Ekrem
2016-05-18
Named Entity Recognition (NER) is a basic step for a large number of subsequent text mining tasks in the biochemical domain. Increasing the performance of such recognition systems is of high importance and always poses a challenge. In this study, a new community-based decision-making system is proposed which aims at increasing the efficiency of NER systems in the chemical/drug name context. The Particle Swarm Optimization (PSO) algorithm is chosen as the expert selection strategy, along with the Bayesian combination method to merge the outputs of the selected classifiers as well as evaluate the fitness of the selected candidates. The proposed system performs in two steps. The first step focuses on creating various numbers of baseline classifiers for NER with different feature sets using Conditional Random Fields (CRFs). The second step involves the selection and efficient combination of the classifiers using PSO and Bayesian combination. Two comprehensive corpora from BioCreative events, namely ChemDNER and CEMP, are used for the experiments conducted. Results show that the ensembles of classifiers selected by means of the proposed approach perform better than the single best classifier as well as ensembles formed using other popular selection/combination strategies for both corpora. Furthermore, the proposed method outperforms the best performing system at the BioCreative IV ChemDNER track by achieving an F-score of 87.95%.
PSO-MISMO modeling strategy for multistep-ahead time series prediction.
Bao, Yukun; Xiong, Tao; Hu, Zhongyi
2014-05-01
Multistep-ahead time series prediction is one of the most challenging research topics in the field of time series modeling and prediction, and is continually under research. Recently, the multiple-input several multiple-outputs (MISMO) modeling strategy has been proposed as a promising alternative for multistep-ahead time series prediction, exhibiting advantages compared with the two currently dominating strategies, the iterated and the direct strategies. Built on the established MISMO strategy, this paper proposes a particle swarm optimization (PSO)-based MISMO modeling strategy, which is capable of determining the number of sub-models in a self-adaptive mode, with varying prediction horizons. Rather than deriving crisp divides with equal-sized prediction horizons as in the established MISMO, the proposed PSO-MISMO strategy, implemented with neural networks, employs a heuristic to create flexible divides with varying sizes of prediction horizons and to generate corresponding sub-models, providing considerable flexibility in model construction, which has been validated with simulated and real datasets.
Development of a serodiagnostic test for sheep scab using recombinant protein Pso o 2.
Nunn, Francesca G; Burgess, Stewart T G; Innocent, Giles; Nisbet, Alasdair J; Bates, Peter; Huntley, John F
2011-01-01
Early stages of sheep scab, the disease caused by the non-burrowing mite Psoroptes ovis, are often sub-clinical, or can be mis-diagnosed. A diagnostic test capable of detecting early disease and latent infestations is therefore highly desirable in disease control. This paper describes the design and validation of an ELISA, which incorporates a recombinant P. ovis antigen (Pso o 2), for the early detection of anti-P. ovis serum antibodies in sheep. This ELISA was evaluated using sera from sheep infested with P. ovis (n = 58) and sheep (n = 433) with no P. ovis infestation as well as sheep infected with other parasites including gastrointestinal nematodes (GIN), or chewing lice. A receiver operating characteristic (ROC) curve analysis was generated using the ELISA results for 491 sheep sera with the area under the curve (AUC) being 0.97. An optimal OD(450) cut-off of >0.06 absorbance units gave a test sensitivity of 0.93 and specificity of 0.90. The Pso o 2-based ELISA was able to detect specific antibodies to P. ovis during early experimental infestation prior to disease patency, indicating its utility for detecting sub-clinical infestation.
Optimal fractional delay-IIR filter design using cuckoo search algorithm.
Kumar, Manjeet; Rawat, Tarun Kumar
2015-11-01
This paper applies a novel global meta-heuristic optimization algorithm, the cuckoo search algorithm (CSA), to determine optimal coefficients of a fractional delay-infinite impulse response (FD-IIR) filter that meet the ideal frequency response characteristics. Since fractional delay-IIR filter design is a multi-modal optimization problem, it cannot be computed efficiently using conventional gradient-based optimization techniques. A weighted least square (WLS) based fitness function is used to improve the performance to a great extent. FD-IIR filters of different orders have been designed using the CSA. The simulation results of the proposed CSA-based approach have been compared to those of well-accepted evolutionary algorithms like the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). The performance of the CSA-based FD-IIR filter is superior to those obtained by GA and PSO. The simulation and statistical results affirm that the proposed approach using CSA outperforms GA and PSO, not only in the convergence rate but also in optimal performance of the designed FD-IIR filter (i.e., smaller magnitude error, smaller phase error, higher percentage improvement in magnitude and phase error, fast convergence rate). The absolute magnitude and phase error obtained for the designed 5th order FD-IIR filter are as low as 0.0037 and 0.0046, respectively. The percentage improvements in magnitude error for the CSA-based 5th order FD-IIR design with respect to GA and PSO are 80.93% and 74.83%, respectively, and in phase error 76.04% and 71.25%, respectively.
Local versus global optimal sports techniques in a group of athletes.
Huchez, Aurore; Haering, Diane; Holvoët, Patrice; Barbier, Franck; Begon, Mickael
2015-01-01
Various optimization algorithms have been used to achieve optimal control of sports movements. Nevertheless, no local or global optimization algorithm could be the most effective for solving all optimal control problems. This study aims at comparing local and global optimal solutions in a multistart gradient-based optimization by considering actual repetitive performances of a group of athletes performing a transition move on the uneven bars. Twenty-four trials by eight national-level female gymnasts were recorded using a motion capture system, and then multistart sequential quadratic programming optimizations were performed to obtain global optimal, local optimal and suboptimal solutions. The multistart approach combined with a gradient-based algorithm did not often find the local solution to be the best and proposed several other solutions including global optimal and suboptimal techniques. The qualitative change between actual and optimal techniques provided three directions for training: to increase hip flexion-abduction, to transfer leg and arm angular momentum to the trunk and to straighten hand path to the bar.
NASA Astrophysics Data System (ADS)
Wu, Q.; Xiong, F.; Wang, F.; Xiong, Y.
2016-10-01
In order to reduce the computational time, a fully parallel implementation of the particle swarm optimization (PSO) algorithm on a graphics processing unit (GPU) is presented. Instead of being executed on the central processing unit (CPU) sequentially, PSO is executed in parallel via the GPU on the compute unified device architecture (CUDA) platform. The processes of fitness evaluation, updating of velocity and position of all particles are all parallelized and introduced in detail. Comparative studies on the optimization of four benchmark functions and a trajectory optimization problem are conducted by running PSO on the GPU (GPU-PSO) and CPU (CPU-PSO). The impact of design dimension, number of particles and size of the thread-block in the GPU and their interactions on the computational time is investigated. The results show that the computational time of the developed GPU-PSO is much shorter than that of CPU-PSO, with comparable accuracy, which demonstrates the remarkable speed-up capability of GPU-PSO.
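For reference, a minimal sequential (CPU) version of the PSO loop whose stages the abstract parallelizes (fitness evaluation, velocity update, position update) might look like the sketch below; the coefficient values and the sphere benchmark are illustrative assumptions, not taken from the paper:

```python
import random

def pso(fitness, dim, n_particles=30, iters=200, seed=1):
    """Minimal sequential PSO. The per-particle fitness evaluations and
    velocity/position updates below are the steps a GPU version runs in parallel."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients (assumed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = fitness(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Sphere benchmark: global minimum 0 at the origin.
best, best_val = pso(lambda x: sum(v * v for v in x), dim=4)
```

Because each particle's update depends only on its own state plus the shared global best, mapping one particle (or one dimension) to one GPU thread is straightforward, which is what makes the speed-up the abstract reports possible.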
Saraswathi, Saras; Sundaram, Suresh; Sundararajan, Narasimhan; Zimmermann, Michael; Nilsen-Hamilton, Marit
2011-01-01
A combination of Integer-Coded Genetic Algorithm (ICGA) and Particle Swarm Optimization (PSO), coupled with the neural-network-based Extreme Learning Machine (ELM), is used for gene selection and cancer classification. ICGA is used with PSO-ELM to select an optimal set of genes, which is then used to build a classifier to develop an algorithm (ICGA_PSO_ELM) that can handle sparse data and sample imbalance. We evaluate the performance of ICGA-PSO-ELM and compare our results with existing methods in the literature. An investigation into the functions of the selected genes, using a systems biology approach, revealed that many of the identified genes are involved in cell signaling and proliferation. An analysis of these gene sets shows a larger representation of genes that encode secreted proteins than found in randomly selected gene sets. Secreted proteins constitute a major means by which cells interact with their surroundings. Mounting biological evidence has identified the tumor microenvironment as a critical factor that determines tumor survival and growth. Thus, the genes identified by this study that encode secreted proteins might provide important insights to the nature of the critical biological features in the microenvironment of each tumor type that allow these cells to thrive and proliferate.
Optimization of hydrostatic transmissions by means of virtual instrumentation technique
NASA Astrophysics Data System (ADS)
Ion Guta, Dragos Daniel; Popescu, Teodor Costinel; Dumitrescu, Catalin
2010-11-01
Obtaining mathematical models as close as possible to the physical phenomena which are intended to be replicated or improved helps us in deciding how to optimize them. The introduction of computers in monitoring and controlling processes caused changes in technological systems. With support from methods for identification of processes and from the power of numerical computing equipment, researchers and designers can shorten the period for development of applications in various fields by generating a solution as close as possible to reality, starting from the design stage [1]. The paper presents a hybrid modeling/simulation solution for a hydrostatic transmission with mixed adjustment. For simulation and control of the examined process we have used two distinct environments, AMESim and LabVIEW. The proposed solution allows coupling of the system's model to the software control modules developed using virtual instrumentation. The simulation network of the analyzed system was "tuned" and validated against an actual model of the process. This paper highlights some aspects regarding the energy and functional advantages of hydraulic transmissions based on adjustable volumetric machines in their primary and secondary units [2].
Sang, Jun; Zhao, Jun; Xiang, Zhili; Cai, Bin; Xiang, Hong
2015-08-05
Gyrator transform has been widely used for image encryption recently. For gyrator transform-based image encryption, the rotation angle used in the gyrator transform is one of the secret keys. In this paper, by analyzing the properties of the gyrator transform, an improved particle swarm optimization (PSO) algorithm was proposed to search for the rotation angle in a single gyrator transform. Since the gyrator transform is continuous, it is time-consuming to exhaustively search for the rotation angle, even considering the data precision in a computer. Therefore, a computational intelligence-based search may be an alternative choice. Considering the properties of severe local convergence and obvious global fluctuations of the gyrator transform, an improved PSO algorithm was proposed to suit such situations. The experimental results demonstrated that the proposed improved PSO algorithm can significantly improve the efficiency of searching for the rotation angle in a single gyrator transform. Since the gyrator transform is the foundation of image encryption in gyrator transform domains, the research on the method of searching for the rotation angle in a single gyrator transform is useful for further study of the security of such image encryption algorithms.
Calculation of free fall trajectories based on numerical optimization techniques
NASA Technical Reports Server (NTRS)
1972-01-01
The development of a means of computing free-fall (nonthrusting) trajectories from one specified point in the solar system to another specified point in the solar system in a given amount of time was studied. The problem is that of solving a two-point boundary value problem for which the initial slope is unknown. Two standard methods of attack exist for solving two-point boundary value problems. The first method is known as the initial value or shooting method. The second method of attack for two-point boundary value problems is to approximate the nonlinear differential equations by an appropriate linearized set. Parts of both boundary value problem solution techniques described above are used. A complete velocity history is guessed such that the corresponding position history satisfies the given boundary conditions at the appropriate times. An iterative procedure is then followed until the last guessed velocity history and the velocity history obtained from integrating the acceleration history agree to some specified tolerance everywhere along the trajectory.
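The shooting method described above, reduced to a one-dimensional toy problem, can be sketched as follows; the ODE y'' = -y and its boundary values are our own illustrative choices, not the free-fall dynamics of the study:

```python
import math

# Two-point BVP: y'' = -y, y(0) = 0, y(pi/2) = 1; exact solution y = sin(x).

def integrate(slope, x_end, steps=2000):
    """RK4-integrate y'' = -y from x=0 with y(0)=0, y'(0)=slope; return y(x_end)."""
    h = x_end / steps
    y, v = 0.0, slope
    for _ in range(steps):
        k1y, k1v = v, -y
        k2y, k2v = v + 0.5 * h * k1v, -(y + 0.5 * h * k1y)
        k3y, k3v = v + 0.5 * h * k2v, -(y + 0.5 * h * k2y)
        k4y, k4v = v + h * k3v, -(y + h * k3y)
        y += h * (k1y + 2 * k2y + 2 * k3y + k4y) / 6
        v += h * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
    return y

# Shooting: bisect on the unknown initial slope until the terminal
# condition y(pi/2) = 1 is met (the terminal value grows with the slope).
lo, hi = 0.0, 2.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if integrate(mid, math.pi / 2) < 1.0:
        lo = mid
    else:
        hi = mid
slope = 0.5 * (lo + hi)  # converges to y'(0) = 1
```

The iterative velocity-history scheme in the abstract plays the same role as the slope guess here: an unknown initial condition is repeatedly corrected until the boundary conditions are satisfied everywhere.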
Optimizing Sudden Passage in the Earth's-Field NMR Technique
NASA Astrophysics Data System (ADS)
Melton, B. F.; Pollak, V. L.
The equation of motion dM/dt = γ M × B(t) is solved numerically for the case B(t) = j B_p(t) + k B_e. The field B_e is a small static field, typically the earth's field. The field B_p(t) is a damped oscillation having frequency greater than, or on the order of, the precession frequency in field B_e. Such oscillation inevitably occurs at the end of the rapid cutoff of the coil current used to polarize the sample. It is assumed that B_p(t) is initially large compared to B_e, and that the magnetization M is initially along the resultant field B. This is the usual situation in the earth's-field NMR technique when the polarizing field is produced by a coil of moderate to high impedance. It is shown that, when properly damped, the transient can be used to restore the magnetization to the x-y plane, thereby maximizing the amplitude of the subsequent free precession signal. The damping required is close to critical damping, so that the problem of circuit ringing when the coil is switched to receiver mode is also eliminated.
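A direct numerical integration of the stated equation of motion dM/dt = γ M × B(t) can be sketched as below for the simplest case of a static field; the field magnitude, step size, and RK4 scheme are illustrative assumptions rather than the authors' solver:

```python
import math

GAMMA = 2.675e8  # proton gyromagnetic ratio, rad s^-1 T^-1
B_E = 5e-5       # assumed earth's-field magnitude, T

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def step(m, b, dt):
    """One RK4 step of dM/dt = gamma * M x B for a (locally) constant field b."""
    f = lambda mm: tuple(GAMMA * c for c in cross(mm, b))
    k1 = f(m)
    k2 = f(tuple(m[i] + 0.5 * dt * k1[i] for i in range(3)))
    k3 = f(tuple(m[i] + 0.5 * dt * k2[i] for i in range(3)))
    k4 = f(tuple(m[i] + dt * k3[i] for i in range(3)))
    return tuple(m[i] + dt * (k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) / 6 for i in range(3))

# Transverse magnetization precessing about a static z field:
# Mx(t) = cos(omega * t) with omega = gamma * B_e (the Larmor frequency).
m = (1.0, 0.0, 0.0)
omega = GAMMA * B_E
dt, n = 1e-8, 10000  # 100 microseconds total
for _ in range(n):
    m = step(m, (0.0, 0.0, B_E), dt)
```

A time-dependent B_p(t) (the damped transient in the abstract) enters simply by passing a different field vector to `step` at each instant.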
Hybrid Bacterial Foraging and Particle Swarm Optimization for detecting Bundle Branch Block.
Kora, Padmavathi; Kalva, Sri Ramakrishna
2015-01-01
Abnormal cardiac beat identification is a key process in the detection of heart diseases. Our present study describes a procedure for the detection of left and right bundle branch block (LBBB and RBBB) Electrocardiogram (ECG) patterns. The electrical impulses that control the cardiac beat face difficulty in moving inside the heart. This problem is termed as bundle branch block (BBB). BBB makes it harder for the heart to pump blood effectively through the heart circulatory system. ECG feature extraction is a key process in detecting heart ailments. Our present study comes up with a hybrid method combining two heuristic optimization methods: Bacterial Foraging Optimization (BFO) and Particle Swarm Optimization (PSO) for the feature selection of ECG signals. One of the major controlling forces of the BFO algorithm is the chemotactic movement of a bacterium that models a test solution. The chemotaxis process of the BFO depends on random search directions which may lead to a delay in achieving the global optimum solution. The hybrid technique, Bacterial Foraging-Particle Swarm Optimization (BFPSO), incorporates the concepts from BFO and PSO and creates individuals in a new generation. This BFPSO method performs local search through the chemotactic movement of BFO, and the global search over the entire search domain is accomplished by a PSO operator. The BFPSO feature values are given as the input for the Levenberg-Marquardt Neural Network classifier.
A technique for locating function roots and for satisfying equality constraints in optimization
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw
1991-01-01
A new technique for locating simultaneous roots of a set of functions is described. The technique is based on the property of the Kreisselmeier-Steinhauser function which descends to a minimum at each root location. It is shown that the ensuing algorithm may be merged into any nonlinear programming method for solving optimization problems with equality constraints.
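One plausible way to read this property is that the K-S function forms a smooth envelope of a set of values, so applying it to {f_k, -f_k} gives a smooth surrogate for max_k |f_k(x)| that bottoms out at a simultaneous root. The sketch below illustrates this under our own assumptions (the test functions, the draw-down parameter ρ, and the grid search are all made up for the demonstration and are not the paper's algorithm):

```python
import math

def ks(values, rho=80.0):
    """Kreisselmeier-Steinhauser envelope: a smooth overestimate of max(values).
    The max is factored out so the exponentials cannot overflow."""
    m = max(values)
    return m + math.log(sum(math.exp(rho * (v - m)) for v in values)) / rho

def ks_abs(funcs, x, rho=80.0):
    """KS applied to {f, -f} for every f: a smooth surrogate for max_k |f_k(x)|,
    which descends toward a minimum at a simultaneous root."""
    vals = []
    for f in funcs:
        vals.extend((f(x), -f(x)))
    return ks(vals, rho)

# Two functions sharing the root x = 2 (f1 also has a root at -2, f2 does not).
funcs = [lambda x: x * x - 4.0, lambda x: x - 2.0]
xs = [-3.0 + 0.001 * i for i in range(6001)]
x_min = min(xs, key=lambda x: ks_abs(funcs, x))  # lands near the common root
```

Because the surrogate is smooth, it can be handed to any gradient-based nonlinear programming method, which matches the abstract's point that the algorithm merges into such methods.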
A technique for locating function roots and for satisfying equality constraints in optimization
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.
1992-01-01
A new technique for locating simultaneous roots of a set of functions is described. The technique is based on the property of the Kreisselmeier-Steinhauser function which descends to a minimum at each root location. It is shown that the ensuing algorithm may be merged into any nonlinear programming method for solving optimization problems with equality constraints.
Chemical complexity in astrophysical simulations: optimization and reduction techniques
NASA Astrophysics Data System (ADS)
Grassi, T.; Bovino, S.; Schleicher, D.; Gianturco, F. A.
2013-05-01
Chemistry plays a key role in the evolution of the interstellar medium, so it is highly important to follow its evolution in numerical simulations. However, it can easily dominate the computational cost when applied to large systems. In this paper we discuss two approaches to reduce these costs: (i) one based on computational strategies, and (ii) one based on the properties and the topology of the chemical network. The first class of methods is more robust, while the second is meant to give important information on the structure of large, complex networks. We first discuss the numerical solvers for integrating the system of ordinary differential equations (ODE) associated with the chemical network, and then we propose a buffer method that decreases the computational time spent in solving the ODE system. We further discuss a flux-based method that allows one to determine, and then cut on the fly, the less active reactions. In addition we also present a topological approach for selecting the most probable species that will be active during the chemical evolution, thus gaining information on the chemical network that otherwise would be difficult to retrieve. This topological technique can also be used as an a priori reduction method for networks of any size. We implemented these methods into a 1D Lagrangian hydrodynamical code to test their effects: both classes lead to large computational speed-ups, ranging from ×2 to ×5. We have also tested some hybrid approaches, finding that coupling the flux method with a buffer strategy gives the best trade-off between robustness and speed-up of calculations.
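The flux-based cutting of less active reactions could, in outline, work as in the toy sketch below; the network, rate coefficients, abundances, and threshold are entirely hypothetical and not taken from the paper:

```python
# Hypothetical toy network: each reaction maps to (rate coefficient, reactants).
reactions = {
    "H + e -> H-": (1.0e-15, ["H", "e"]),
    "H- + H -> H2 + e": (1.3e-9, ["H-", "H"]),
    "H2 + H -> 3H": (1.0e-25, ["H2", "H"]),  # negligible at these abundances
}
abundances = {"H": 1.0e4, "e": 1.0e-2, "H-": 1.0e-6, "H2": 1.0e-3}

def flux(k, reactants):
    """Reaction flux: rate coefficient times the product of reactant abundances."""
    f = k
    for s in reactants:
        f *= abundances[s]
    return f

fluxes = {name: flux(k, r) for name, (k, r) in reactions.items()}
threshold = 1e-6 * max(fluxes.values())  # keep reactions within 6 decades of the peak
active = {name for name, f in fluxes.items() if f >= threshold}
```

Re-evaluating the fluxes as abundances evolve, and pruning below the threshold "on the fly", shrinks the ODE system the solver has to integrate at each step.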
Muscle optimization techniques impact the magnitude of calculated hip joint contact forces.
Wesseling, Mariska; Derikx, Loes C; de Groote, Friedl; Bartels, Ward; Meyer, Christophe; Verdonschot, Nico; Jonkers, Ilse
2015-03-01
In musculoskeletal modelling, several optimization techniques are used to calculate muscle forces, which strongly influence resultant hip contact forces (HCF). The goal of this study was to calculate muscle forces using four different optimization techniques, i.e., two different static optimization techniques, computed muscle control (CMC) and the physiological inverse approach (PIA). We investigated their subsequent effects on HCFs during gait and sit to stand and found that at the first peak in gait at 15-20% of the gait cycle, CMC calculated the highest HCFs (median 3.9 times peak GRF (pGRF)). When comparing calculated HCFs to experimental HCFs reported in literature, the former were up to 238% larger. Both static optimization techniques produced lower HCFs (median 3.0 and 3.1 pGRF), while PIA included muscle dynamics without an excessive increase in HCF (median 3.2 pGRF). The increased HCFs in CMC were potentially caused by higher muscle forces resulting from co-contraction of agonists and antagonists around the hip. Alternatively, these higher HCFs may be caused by the slightly poorer tracking of the net joint moment by the muscle moments calculated by CMC. We conclude that the use of different optimization techniques affects calculated HCFs, and static optimization approached experimental values best.
Gradient vs. approximation design optimization techniques in low-dimensional convex problems
NASA Astrophysics Data System (ADS)
Fedorik, Filip
2013-10-01
The application of Design Optimization methods in structural design offers a suitable route to efficient designs of practical problems. The implementation of optimization techniques in multiphysics software permits designers to use them in a wide range of engineering problems. These methods are usually based on modified mathematical programming techniques and/or their combinations, to improve universality and robustness across various human and technical problems. The presented paper deals with the analysis of optimization methods and tools within the frame of one- to three-dimensional strictly convex optimization problems, which represent a component of the Design Optimization module in the Ansys program. The First Order method, based on a combination of the steepest descent and conjugate gradient methods, and the Subproblem Approximation method, which uses approximation of the dependent variables' functions, are analyzed together with the Random, Sweep, Factorial and Gradient tools, wherein different characteristics of the methods are observed.
Wang, Li; Jia, Pengfei; Huang, Tailai; Duan, Shukai; Yan, Jia; Wang, Lidan
2016-01-01
An electronic nose (E-nose) is an intelligent system that we will use in this paper to distinguish three indoor pollutant gases (benzene (C6H6), toluene (C7H8), formaldehyde (CH2O)) and carbon monoxide (CO). The algorithm is a key part of an E-nose system, mainly composed of data processing and pattern recognition. In this paper, we employ a support vector machine (SVM) to distinguish indoor pollutant gases; two of its parameters need to be optimized, so in order to improve the performance of the SVM, in other words, to get a higher gas recognition rate, an effective enhanced krill herd algorithm (EKH) based on a novel decision weighting factor computing method is proposed to optimize the two SVM parameters. Krill herd (KH) is an effective method in practice; however, on occasion it cannot avoid the influence of some local best solutions, so it cannot always find the global optimum. In addition, its search ability relies fully on randomness, so it cannot always converge rapidly. To address these issues we propose an enhanced KH (EKH) to improve the global searching and convergence speed performance of KH. To obtain a more accurate model of the krill behavior, an updated crossover operator is added to the approach. We can guarantee that the krill group is diverse at the early stage of iterations and has good local searching ability at the later stage of iterations. The recognition results of EKH are compared with those of other optimization algorithms (including KH, chaotic KH (CKH), quantum-behaved particle swarm optimization (QPSO), particle swarm optimization (PSO) and genetic algorithm (GA)), and we find that EKH is better than the other considered methods. The research results verify that EKH not only significantly improves the performance of our E-nose system, but also provides a good beginning and theoretical basis for further study of other improved krill algorithms' applications in all E-nose application areas. PMID
PSO-SVM-Based Online Locomotion Mode Identification for Rehabilitation Robotic Exoskeletons
Long, Yi; Du, Zhi-Jiang; Wang, Wei-Dong; Zhao, Guang-Yu; Xu, Guo-Qiang; He, Long; Mao, Xi-Wang; Dong, Wei
2016-01-01
Locomotion mode identification is essential for the control of robotic rehabilitation exoskeletons. This paper proposes an online support vector machine (SVM) optimized by particle swarm optimization (PSO) to identify different locomotion modes to realize a smooth and automatic locomotion transition. A PSO algorithm is used to obtain the optimal parameters of the SVM for a better overall performance. Signals measured by the foot pressure sensors integrated in the insoles of wearable shoes and the MEMS-based attitude and heading reference systems (AHRS) attached on the shoes and shanks of leg segments are fused together as the input information of the SVM. Based on the chosen window whose size is 200 ms (with sampling frequency of 40 Hz), a three-layer wavelet packet analysis (WPA) is used for feature extraction, after which the kernel principal component analysis (kPCA) is utilized to reduce the dimension of the feature set and thereby the computation cost of the SVM. Since the signals come from two different types of sensors, normalization is conducted to scale the input into the interval [0, 1]. Five-fold cross validation is adopted to train the classifier, which prevents the classifier from over-fitting. Based on the SVM model obtained offline in MATLAB, an online SVM algorithm is constructed for locomotion mode identification. Experiments are performed for different locomotion modes and experimental results show the effectiveness of the proposed algorithm with an accuracy of 96.00% ± 2.45%. To improve its accuracy, a majority vote algorithm (MVA) is used for post-processing, with which the identification accuracy is better than 98.35% ± 1.65%. The proposed algorithm can be extended and employed in the field of robotic rehabilitation and assistance. PMID:27598160
PSO-SVM-Based Online Locomotion Mode Identification for Rehabilitation Robotic Exoskeletons.
Long, Yi; Du, Zhi-Jiang; Wang, Wei-Dong; Zhao, Guang-Yu; Xu, Guo-Qiang; He, Long; Mao, Xi-Wang; Dong, Wei
2016-09-02
Locomotion mode identification is essential for the control of robotic rehabilitation exoskeletons. This paper proposes an online support vector machine (SVM) optimized by particle swarm optimization (PSO) to identify different locomotion modes and realize a smooth, automatic locomotion transition. A PSO algorithm is used to obtain the optimal parameters of the SVM for better overall performance. Signals measured by the foot pressure sensors integrated in the insoles of wearable shoes and by the MEMS-based attitude and heading reference systems (AHRS) attached to the shoes and shanks of the leg segments are fused together as the input to the SVM. Based on a chosen window of 200 ms (with a sampling frequency of 40 Hz), a three-layer wavelet packet analysis (WPA) is used for feature extraction, after which kernel principal component analysis (kPCA) is applied to reduce the dimension of the feature set and thus the computation cost of the SVM. Since the signals come from two different types of sensors, normalization is performed to scale the input into the interval [0, 1]. Five-fold cross validation is adopted to train the classifier, which prevents over-fitting. Based on the SVM model obtained offline in MATLAB, an online SVM algorithm is constructed for locomotion mode identification. Experiments are performed for different locomotion modes, and the results show the effectiveness of the proposed algorithm with an accuracy of 96.00% ± 2.45%. To improve the accuracy further, a majority vote algorithm (MVA) is used for post-processing, after which the identification accuracy is better than 98.35% ± 1.65%. The proposed algorithm can be extended and employed in the field of robotic rehabilitation and assistance.
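Two of the preprocessing steps described above, min-max normalization into [0, 1] and majority-vote smoothing of the predicted labels, can be sketched in a few lines (an illustrative sketch, not the authors' code; the label names and window size are invented):

```python
from collections import Counter

def minmax_scale(values):
    """Scale a list of sensor readings into [0, 1] (min-max normalization)."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def majority_vote(labels, window=5):
    """Smooth a stream of per-window class labels: each output label is the
    most common label among the current and previous window - 1 predictions."""
    smoothed = []
    for i in range(len(labels)):
        recent = labels[max(0, i - window + 1):i + 1]
        smoothed.append(Counter(recent).most_common(1)[0][0])
    return smoothed
```

With window = 3, a single spurious "stair" prediction inside a run of "walk" labels is voted away.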
An Artificial Intelligence Technique to Generate Self-Optimizing Experimental Designs.
1983-02-01
pattern or a binary chopping technique in the space of decision variables while carrying out a sequence of controlled experiments on the strategy ... (AD-A127 764, Arizona State Univ., Tempe, Group for Computer Studies; Nicholas V...)
An Algorithmic Framework for Multiobjective Optimization
Ganesan, T.; Elamvazuthi, I.; Shaari, Ku Zilati Ku; Vasant, P.
2013-01-01
Multiobjective (MO) optimization is an emerging field which is increasingly being encountered in many fields globally. Various metaheuristic techniques such as differential evolution (DE), genetic algorithms (GA), the gravitational search algorithm (GSA), and particle swarm optimization (PSO) have been used in conjunction with scalarization techniques such as the weighted-sum approach and the normal-boundary intersection (NBI) method to solve MO problems. Nevertheless, many challenges still arise, especially when dealing with problems with more than two objectives. In addition, extensive computational overhead emerges when dealing with hybrid algorithms. This paper addresses these issues by proposing an alternative framework that utilizes algorithmic concepts related to the problem structure for generating efficient and effective algorithms with minimal computational overhead for MO optimization. PMID:24470795
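The weighted-sum scalarization mentioned above can be illustrated on a toy biobjective problem (the objectives and candidate grid are invented for illustration; a real MO solver would replace the grid search with DE, GA, GSA, or PSO):

```python
def weighted_sum_front(f1, f2, weights, candidates):
    """For each weight w, minimize w*f1 + (1-w)*f2 over a candidate grid and
    record the (f1, f2) values of the minimizer; collecting these points
    traces out (part of) the Pareto front for convex problems."""
    front = []
    for w in weights:
        best = min(candidates, key=lambda x: w * f1(x) + (1 - w) * f2(x))
        front.append((f1(best), f2(best)))
    return front

# Toy biobjective on a 1-D decision space: the two minima conflict.
f1 = lambda x: x ** 2
f2 = lambda x: (x - 2) ** 2
grid = [i / 100 for i in range(0, 201)]  # candidate solutions in [0, 2]
pts = weighted_sum_front(f1, f2, [0.0, 0.5, 1.0], grid)
```

Each weight yields one trade-off point; w = 0.5 lands midway between the two single-objective optima.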
Shabri, Ani; Samsudin, Ruhaidah
2014-01-01
Crude oil prices play a significant role in the global economy and are a key input into option pricing formulas, portfolio allocation, and risk measurement. In this paper, a hybrid model integrating wavelets and multiple linear regression (MLR) is proposed for crude oil price forecasting. In this model, the Mallat wavelet transform is first used to decompose the original time series into several subseries at different scales. Then, principal component analysis (PCA) is used to process the subseries data in the MLR for crude oil price forecasting. Particle swarm optimization (PSO) is used to select the optimal parameters of the MLR model. To assess the effectiveness of this model, the daily West Texas Intermediate (WTI) crude oil market has been used as the case study. The time series prediction capability of the WMLR model is compared with the MLR, ARIMA, and GARCH models using various statistical measures. The experimental results show that the proposed model outperforms the individual models in forecasting the crude oil price series. PMID:24895666
Mandal, Monalisa; Mukhopadhyay, Anirban
2014-01-01
The purpose of feature selection is to identify the relevant and non-redundant features of a dataset. In this article, the feature selection problem is formulated as a graph-theoretic problem in which a feature-dissimilarity graph is constructed from the data matrix. The nodes represent features, and the edges represent their dissimilarity. Nodes and edges are weighted according to each feature's relevance and the dissimilarity among features, respectively. The problem of finding relevant and non-redundant features is then mapped into the densest-subgraph finding problem. We have proposed a multiobjective particle swarm optimization (PSO)-based algorithm that simultaneously optimizes the average node-weight and average edge-weight of the candidate subgraph. The proposed algorithm is applied to identifying relevant and non-redundant disease-related genes from microarray gene expression data. The performance of the proposed method is compared with that of several other existing feature selection techniques on different real-life microarray gene expression datasets.
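The two subgraph objectives described above, average node-weight (relevance) and average edge-weight (dissimilarity), are straightforward to compute for a candidate feature subset (a minimal sketch with invented gene names and weights; the paper's PSO would maximize both values simultaneously):

```python
def subgraph_objectives(node_weight, edge_weight, subset):
    """Average node-weight and average edge-weight of the subgraph induced
    by a candidate feature subset (subset must contain at least two nodes)."""
    sub = list(subset)
    avg_node = sum(node_weight[v] for v in sub) / len(sub)
    pairs = [(u, v) for i, u in enumerate(sub) for v in sub[i + 1:]]
    avg_edge = sum(edge_weight[frozenset(p)] for p in pairs) / len(pairs)
    return avg_node, avg_edge

# Illustrative 3-gene graph: relevance per node, dissimilarity per edge.
nw = {"g1": 1.0, "g2": 2.0, "g3": 3.0}
ew = {frozenset({"g1", "g2"}): 1.0,
      frozenset({"g1", "g3"}): 2.0,
      frozenset({"g2", "g3"}): 3.0}
obj = subgraph_objectives(nw, ew, ["g1", "g2", "g3"])
```

A multiobjective PSO would encode subsets as particles and evaluate each with this pair of objectives.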
Hashim, H. A.; Abido, M. A.
2015-01-01
This paper presents a comparative study of fuzzy controller design for the twin rotor multi-input multi-output (MIMO) system (TRMS), considering the most promising evolutionary techniques: the gravitational search algorithm (GSA), particle swarm optimization (PSO), artificial bee colony (ABC), and differential evolution (DE). In this study, the gains of four fuzzy proportional-derivative (PD) controllers for the TRMS have been optimized using the considered techniques. The optimization techniques are developed to identify the optimal control parameters for system stability enhancement, to cancel high nonlinearities in the model, to reduce the coupling effect, and to drive the TRMS pitch and yaw angles into the desired tracking trajectory efficiently and accurately. The most effective technique in terms of system response under different disturbances has been investigated. In this work, it is observed that GSA is the most effective technique in terms of solution quality and convergence speed. PMID:25960738
The autoantigen Pso p27: a post-translational modification of SCCA molecules.
Iversen, Ole-Jan; Lysvand, Hilde; Hagen, Lars
2011-05-01
Pso p27 is a protein antigen expressed in psoriatic lesions and has been shown to participate in complement-activating immune complexes in the affected skin. The objective of the present study was to sequence the Pso p27 protein in an approach to identify a "Pso p27 gene". The analyses showed that Pso p27 represents several proteins with homologies to various Squamous Cell Carcinoma Antigens (SCCAs). The unique N-terminal and C-terminal ends of Pso p27, however, differ from the terminal ends of the SCCA molecules and indicate post-translational digestion of SCCA molecules by highly specific endoproteases. These structural variations may play crucial roles with respect to the immunogenicity of Pso p27 and the failure of immunological tolerance.
NASA Astrophysics Data System (ADS)
Fournier, René; Mohareb, Amir
2016-01-01
We devised a global optimization (GO) strategy for optimizing molecular properties with respect to both geometry and chemical composition. A relative index of thermodynamic stability (RITS) is introduced to allow meaningful energy comparisons between different chemical species. We use the RITS by itself, or in combination with another calculated property, to create an objective function F to be minimized. Including the RITS in the definition of F ensures that the solutions have some degree of thermodynamic stability. We illustrate how the GO strategy works with three test applications, with F calculated in the framework of Kohn-Sham Density Functional Theory (KS-DFT) with the Perdew-Burke-Ernzerhof exchange-correlation. First, we searched the composition and configuration space of CmHnNpOq (m = 0-4, n = 0-10, p = 0-2, q = 0-2, and 2 ≤ m + n + p + q ≤ 12) for stable molecules. The GO discovered familiar molecules like N2, CO2, acetic acid, acetonitrile, ethane, and many others, after a small number (5000) of KS-DFT energy evaluations. Second, we carried out a GO of the geometry of CumSnn+ (m = 1, 2 and n = 9-12). A single GO run produced the same low-energy structures found in an earlier study where each CumSnn+ species had been optimized separately. Finally, we searched bimetallic clusters AmBn (3 ≤ m + n ≤ 6, A, B = Li, Na, Al, Cu, Ag, In, Sn, Pb) for species and configurations having a low RITS and a large highest occupied molecular orbital (MO) to lowest unoccupied MO energy gap (Eg). We found seven bimetallic clusters with Eg > 1.5 eV.
A technique for optimizing electrode placement for electromyographic control of prostheses.
Walbran, Scott H; Calius, Emilio P; Dunlop, G; Anderson, Iain A
2009-01-01
We present a technique that enables optimization of Electromyographic (EMG) electrode placement for grasp recognition. Previous works have shown that sophisticated control techniques for prosthetic devices are becoming available; however the issue of electrode placement has yet to be addressed. By processing a rich field of data, it is possible to determine which of the data sets will allow for greatest accuracy in prosthetic control. Data has been collected and processed from 128 sites on a human forearm while two different grasps were performed. Using two different feature extraction techniques - integral of absolute value and differential absolute value - the difference in means between performing each grasp type has been analyzed. This resulted in several regions around the wrist and the elbow that would be optimal for this particular setup. While the optimization process has been used here for discrimination between two particular grasps, it has the potential to extend to any desired actuation pattern.
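The two feature extraction techniques named above reduce, for each electrode window, to simple sums (a minimal sketch; the paper's exact windowing and statistics are not reproduced here):

```python
def iav(window):
    """Integral of absolute value: sum of |x| over the analysis window."""
    return sum(abs(x) for x in window)

def dav(window):
    """Differential absolute value: sum of |x[i] - x[i-1]| over the window,
    a measure of sample-to-sample signal activity."""
    return sum(abs(b - a) for a, b in zip(window, window[1:]))
```

Electrode sites whose feature means differ most between the two grasps would be the candidates for optimal placement.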
Optimization technique of wavefront coding system based on ZEMAX externally compiled programs
NASA Astrophysics Data System (ADS)
Han, Libo; Dong, Liquan; Liu, Ming; Zhao, Yuejin; Liu, Xiaohua
2016-10-01
Wavefront coding is a means of athermalization for infrared imaging systems, and the design of the phase plate is key to system performance. This paper applies externally compiled ZEMAX programs to the optimization of the phase mask within the normal optical design process: the evaluation function of the wavefront coding system is defined based on the consistency of the modulation transfer function (MTF), and the speed of optimization is improved by introducing mathematical software. The user writes an external program that computes the evaluation function, exploiting the computing power of the mathematical software to find the optimal parameters of the phase mask, and convergence is accelerated through a genetic algorithm (GA); the dynamic data exchange (DDE) interface between ZEMAX and the mathematical software realizes high-speed data exchange. The optimization of a rotationally symmetric phase mask and a cubic phase mask has been completed by this method: the depth of focus increases nearly 3 times with the rotationally symmetric phase mask and up to 10 times with the cubic phase mask, the variation in the MTF decreases markedly, and the optimized system operates over a temperature range of -40° to 60°. The results show that, owing to the externally compiled functions and DDE, this optimization method makes it convenient to define unconventional optimization goals and to rapidly optimize optical systems with special properties, which is of significance for the optimization of unconventional optical systems.
Application of response surface techniques to helicopter rotor blade optimization procedure
NASA Technical Reports Server (NTRS)
Henderson, Joseph Lynn; Walsh, Joanne L.; Young, Katherine C.
1995-01-01
In multidisciplinary optimization problems, response surface techniques can be used to replace the complex analyses that define the objective function and/or constraints with simple functions, typically polynomials. In this work a response surface is applied to the design optimization of a helicopter rotor blade. In previous work, this problem has been formulated with a multilevel approach. Here, the response surface takes advantage of this decomposition and is used to replace the lower level, a structural optimization of the blade. Problems that were encountered and important considerations in applying the response surface are discussed. Preliminary results are also presented that illustrate the benefits of using the response surface.
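A minimal example of the response-surface idea described above: replace an expensive analysis with a quadratic fit through a few samples and optimize the polynomial instead (a one-variable sketch with an invented objective, not the blade model):

```python
def quad_surrogate(f, x0, x1, x2):
    """Fit a quadratic a*x^2 + b*x + c through three samples of the
    expensive analysis f, then return the surrogate's stationary point
    -b / (2a).  This stands in for replacing a costly lower-level
    optimization with a cheap polynomial response surface."""
    y0, y1, y2 = f(x0), f(x1), f(x2)
    # Divided differences give the quadratic's coefficients (Newton form).
    d1 = (y1 - y0) / (x1 - x0)
    d2 = (y2 - y1) / (x2 - x1)
    a = (d2 - d1) / (x2 - x0)
    b = d1 - a * (x0 + x1)
    return -b / (2 * a)
```

For a truly quadratic objective the surrogate's minimizer is exact; in practice the surrogate is refit as the design moves.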
ch, Sudheer; Kumar, Deepak; Prasad, Ram Kailash; Mathur, Shashi
2013-08-01
A methodology based on support vector machine and particle swarm optimization techniques (SVM-PSO) was used in this study to determine an optimal pumping rate and well location to achieve an optimal cost of an in-situ bioremediation system. In the first stage of the two stage methodology suggested for optimal in-situ bioremediation design, the optimal number of wells and their locations was determined from preselected candidate well locations. The pumping rate and well location in the first stage were subsequently optimized in the second stage of the methodology. The highly nonlinear system of equations governing in-situ bioremediation comprises the equations of flow and solute transport coupled with relevant biodegradation kinetics. A finite difference model was developed to simulate the process of in-situ bioremediation using an Alternate-Direction Implicit technique. This developed model (BIOFDM) yields the spatial and temporal distribution of contaminant concentration for predefined initial and boundary conditions. BIOFDM was later validated by comparing the simulated results with those obtained using BIOPLUME III for the case study of Shieh and Peralta (2005). The results were found to be in close agreement. Moreover, since the solution of the highly nonlinear equation otherwise requires significant computational effort, the computational burden in this study was managed within a practical time frame by replacing the BIOFDM model with a trained SVM model. Support Vector Machine which generates fast solutions in real time was considered to be a universal function approximator in the study. Apart from reducing the computational burden, this technique generates a set of near optimal solutions (instead of a single optimal solution) and creates a re-usable data base that could be used to address many other management problems. Besides this, the search for an optimal pumping pattern was directed by a simple PSO technique and a penalty parameter approach was adopted
Hybrid intelligent optimization methods for engineering problems
NASA Astrophysics Data System (ADS)
Pehlivanoglu, Yasin Volkan
quantification studies, we improved new mutation strategies and operators to provide beneficial diversity within the population. We call this new approach multi-frequency vibrational GA or PSO. It was applied to different aeronautical engineering problems in order to study its efficiency: selected benchmark test functions, inverse design of a two-dimensional (2D) airfoil in subsonic flow, optimization of a 2D airfoil in transonic flow, path planning of an autonomous unmanned aerial vehicle (UAV) over a 3D terrain environment, 3D radar cross section minimization for a 3D air vehicle, and active flow control over a 2D airfoil. As demonstrated by these test cases, the new algorithms outperform the current popular algorithms. The principal role of the multi-frequency approach is to determine which individuals or particles should be mutated, when they should be mutated, and which ones should be merged into the population. The new mutation operators, combined with a mutation strategy and an artificial intelligence method such as neural networks or fuzzy logic, provide local and global diversity during the reproduction phases of the generations. The new approach also introduces both random and controlled diversity. Because they remain population-based techniques, these methods are as robust as the plain GA or PSO algorithms. Based on the results obtained, it was concluded that the variants of the present multi-frequency vibrational GA and PSO are efficient algorithms, since they successfully avoid all local optima within relatively short optimization cycles.
Particle Swarm Optimization with Double Learning Patterns.
Shen, Yuanxia; Wei, Linna; Zeng, Chuanhua; Chen, Jian
2016-01-01
Particle swarm optimization (PSO) is an effective tool for solving optimization problems. However, PSO often suffers from premature convergence due to the rapid loss of swarm diversity. In this paper, we first analyze the motion behavior of the swarm based on the probability characteristics of the learning parameters. Then a PSO with double learning patterns (PSO-DLP) is developed, which employs a master swarm and a slave swarm with different learning patterns to achieve a trade-off between convergence speed and swarm diversity. The particles in the master swarm are encouraged to explore the search space to maintain swarm diversity, while those in the slave swarm learn from the global best particle to refine a promising solution. When the evolutionary states of the two swarms interact, an interaction mechanism is enabled. This mechanism helps the slave swarm jump out of local optima and improves the convergence precision of the master swarm. The proposed PSO-DLP is evaluated on 20 benchmark functions, including rotated multimodal and complex shifted problems. The simulation results and statistical analysis show that PSO-DLP achieves promising performance and outperforms eight PSO variants.
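A hedged sketch of the two-swarm idea (not the paper's exact PSO-DLP update rules or interaction mechanism): a "master" swarm with a high cognitive coefficient keeps exploring, while a "slave" swarm with a high social coefficient refines the shared global best:

```python
import random

def pso_two_swarms(f, dim=2, n=10, iters=200, seed=0):
    """Minimize f with two PSO swarms that use different learning patterns
    but share one global best.  Coefficients and bounds are illustrative."""
    rng = random.Random(seed)
    def new_swarm():
        pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
        vel = [[0.0] * dim for _ in range(n)]
        pbest = [p[:] for p in pos]
        return pos, vel, pbest
    swarms = [new_swarm(), new_swarm()]
    coeffs = [(1.8, 0.8), (0.8, 1.8)]   # (cognitive, social): master, slave
    gbest = min((p for s in swarms for p in s[2]), key=f)
    for _ in range(iters):
        for (pos, vel, pbest), (c1, c2) in zip(swarms, coeffs):
            for i in range(n):
                for d in range(dim):
                    vel[i][d] = (0.6 * vel[i][d]
                                 + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                                 + c2 * rng.random() * (gbest[d] - pos[i][d]))
                    pos[i][d] += vel[i][d]
                if f(pos[i]) < f(pbest[i]):
                    pbest[i] = pos[i][:]
                    if f(pbest[i]) < f(gbest):
                        gbest = pbest[i][:]
    return gbest

sphere = lambda x: sum(v * v for v in x)
best = pso_two_swarms(sphere)
```

On the 2-D sphere function both swarms drive the shared best close to the origin within a few hundred iterations.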
A multi-agent technique for contingency constrained optimal power flows
Talukdar, S.; Ramesh, V.C. (Engineering Design Research Center)
1994-05-01
This paper does three things. First, it proposes that each critical contingency in a power system be represented by a "correction time" (the time required to eliminate the violations produced by the contingency), rather than by a set of hard constraints. Second, it adds these correction times to an optimal power flow and decomposes the resulting problem into a number of smaller optimization problems. Third, it proposes a multiagent technique for solving the smaller problems in parallel. The agents encapsulate traditional optimization algorithms as well as a new algorithm, called the voyager, that generates starting points for the traditional algorithms. All the agents communicate asynchronously, meaning that they can work in parallel without ever interrupting or delaying one another. The resulting scheme has potential for handling power system contingencies and other difficult global optimization problems.
Hierarchical winner-take-all particle swarm optimization social network for neural model fitting.
Coventry, Brandon S; Parthasarathy, Aravindakshan; Sommer, Alexandra L; Bartlett, Edward L
2017-02-01
Particle swarm optimization (PSO) has gained widespread use as a general mathematical programming paradigm and seen use in a wide variety of optimization and machine learning problems. In this work, we introduce a new variant on the PSO social network and apply this method to the inverse problem of input parameter selection from recorded auditory neuron tuning curves. The topology of a PSO social network is a major contributor to optimization success. Here we propose a new social network which draws influence from winner-take-all coding found in visual cortical neurons. We show that the winner-take-all network performs exceptionally well on optimization problems with greater than 5 dimensions and runs at a lower iteration count as compared to other PSO topologies. Finally we show that this variant of PSO is able to recreate auditory frequency tuning curves and modulation transfer functions, making it a potentially useful tool for computational neuroscience models.
NASA Astrophysics Data System (ADS)
Djeffal, F.; Lakhdar, N.; Meguellati, M.; Benhaya, A.
2009-09-01
The analytical modeling of electron mobility in wurtzite gallium nitride (GaN) requires several simplifying assumptions, generally necessary to obtain compact expressions for the electron transport characteristics of GaN-based devices. Further progress in the development, design, and optimization of GaN-based devices requires new theory and modeling tools to improve the accuracy and computational time of device simulators. Recently, evolutionary techniques, namely genetic algorithms (GA) and particle swarm optimization (PSO), have attracted considerable attention among heuristic optimization techniques. In this paper, a particle swarm optimizer is implemented and compared to a genetic algorithm for the modeling and optimization of a new closed-form electron mobility model for GaN-based device design. The performance of both optimization techniques in terms of computational time and convergence rate is also compared. Our results for both techniques (PSO and GA) are tested against numerical data (Monte Carlo simulations), and good agreement is found over a wide range of temperature, doping, and applied electric field. The developed analytical models can also be incorporated into circuit simulators to study GaN-based devices without impact on computational time and data storage.
Techniques for optimal crop selection in a controlled ecological life support system
NASA Technical Reports Server (NTRS)
Mccormack, Ann; Finn, Cory; Dunsky, Betsy
1992-01-01
A Controlled Ecological Life Support System (CELSS) utilizes a plant's natural ability to regenerate air and water while being grown as a food source in a closed life support system. Current plant research is directed toward obtaining quantitative empirical data on the regenerative ability of each species of plant and the system volume and power requirements. Two techniques were adapted to optimize crop species selection while at the same time minimizing the system volume and power requirements. Each allows the level of life support supplied by the plants to be selected, as well as other system parameters. The first technique uses decision analysis in the form of a spreadsheet. The second method, which is used as a comparison with and validation of the first, utilizes standard design optimization techniques. Simple models of plant processes are used in the development of these methods.
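The decision-analysis spreadsheet technique described above amounts to weighted scoring of candidate crops (a sketch; all crop names, criteria, and numbers below are illustrative, not CELSS data):

```python
def score_crops(crops, weights):
    """Rank candidate crops by the weighted sum of per-criterion scores
    (e.g. regenerative ability vs. volume and power requirements)."""
    def total(attrs):
        return sum(weights[k] * attrs[k] for k in weights)
    return sorted(crops, key=lambda c: total(crops[c]), reverse=True)

# Hypothetical normalized scores in [0, 1]; higher is better on every axis.
crops = {
    "wheat":   {"regen": 0.9, "volume": 0.4, "power": 0.5},
    "lettuce": {"regen": 0.5, "volume": 0.9, "power": 0.8},
    "potato":  {"regen": 0.7, "volume": 0.6, "power": 0.6},
}
weights = {"regen": 0.5, "volume": 0.3, "power": 0.2}
ranking = score_crops(crops, weights)
```

Changing the criterion weights corresponds to selecting a different level of life support supplied by the plants.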
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-20
... Relinquishment From Universal Safety Solution PSO AGENCY: Agency for Healthcare Research and Quality (AHRQ), HHS.... AHRQ has accepted a notification of voluntary relinquishment from Universal Safety Solution PSO of its... the list of federally approved PSOs. AHRQ has accepted a notification from Universal Safety...
76 FR 60495 - Patient Safety Organizations: Voluntary Relinquishment From Illinois PSO
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-29
... HUMAN SERVICES Agency for Healthcare Research and Quality Patient Safety Organizations: Voluntary... from the Illinois PSO of its status as a Patient Safety Organization (PSO). The Patient Safety and Quality Improvement Act of 2005 (Patient Safety Act), Public Law 109-41, 42 U.S.C. 299b-21--b-26,...
76 FR 7854 - Patient Safety Organizations: Voluntary Delisting From Lumetra PSO
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-11
... HUMAN SERVICES Agency for Healthcare Research and Quality Patient Safety Organizations: Voluntary... PSO, a component entity of Lumetra Healthcare Solutions, of its status as a Patient Safety Organization (PSO). The Patient Safety and Quality Improvement Act of 2005 (Patient Safety Act), Public Law...
Henriques, J.A.P.; Moustacchi, E.
1981-10-01
The mode of interaction in haploid Saccharomyces cerevisiae of two PSO mutations with each other and with rad mutations affected in the excision-resynthesis (rad3), error-prone (rad6), and deoxyribonucleic acid double-strand break (rad52) repair pathways was determined for various double mutant combinations. Survival data for 8-methoxypsoralen photoaddition, 254-nm ultraviolet light, and gamma rays are presented. For 8-methoxypsoralen photoaddition, which induces both deoxyribonucleic acid interstrand cross-links and monoadditions, the PSO1 mutation is synergistic to rad3. The PSO2 mutation, which is specifically sensitive to photoaddition of psoralens, is epistatic to rad3 and demonstrates a nonepistatic interaction with rad6 and rad52. rad3 and rad6, as well as rad6 and rad52, show synergistic interactions with each other, whereas rad3 is epistatic to rad52. Consequently, it is proposed that the PSO1 and RAD3 genes govern steps in two independent pathways, the PSO1 activity leading to an intermediate which is repaired via the three independent pathways controlled by the RAD6, RAD52, and PSO2 genes. Since pso1 interacts synergistically with rad3 and rad52 and epistatically with rad6 after uv radiation, the PSO1 gene appears to belong to the RAD6 group. For gamma-ray sensitivity, pso1 is epistatic to rad6 and rad52, which suggests that this gene controls a step common to the two other independent pathways.
An Innovative Method of Teaching Electronic System Design with PSoC
ERIC Educational Resources Information Center
Ye, Zhaohui; Hua, Chengying
2012-01-01
Programmable system-on-chip (PSoC), which provides a microprocessor and programmable analog and digital peripheral functions in a single chip, is very convenient for mixed-signal electronic system design. This paper presents the experience of teaching contemporary mixed-signal electronic system design with PSoC in the Department of Automation,…
Zhang, Yong; Gong, Dun-Wei; Cheng, Jian
2017-01-01
Feature selection is an important data-preprocessing technique in classification problems such as bioinformatics and signal processing. Generally, there are some situations where a user is interested in not only maximizing the classification performance but also minimizing the cost that may be associated with features. This kind of problem is called cost-based feature selection. However, most existing feature selection approaches treat this task as a single-objective optimization problem. This paper presents the first study of multi-objective particle swarm optimization (PSO) for cost-based feature selection problems. The task of this paper is to generate a Pareto front of nondominated solutions, that is, feature subsets, to meet different requirements of decision-makers in real-world applications. In order to enhance the search capability of the proposed algorithm, a probability-based encoding technology and an effective hybrid operator, together with the ideas of the crowding distance, the external archive, and the Pareto domination relationship, are applied to PSO. The proposed PSO-based multi-objective feature selection algorithm is compared with several multi-objective feature selection algorithms on five benchmark datasets. Experimental results show that the proposed algorithm can automatically evolve a set of nondominated solutions, and it is a highly competitive feature selection method for solving cost-based feature selection problems.
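The Pareto machinery this abstract relies on (the domination relationship and the crowding distance) can be sketched in a few lines. This is a generic minimization-form illustration, not code from the paper; the sample objective vectors (classification error, feature cost) are invented.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(points):
    """Filter a list of objective vectors down to the Pareto front."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

def crowding_distance(front):
    """NSGA-II-style crowding distance; boundary points get infinity."""
    n, m = len(front), len(front[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: front[i][k])
        dist[order[0]] = dist[order[-1]] = float("inf")
        span = front[order[-1]][k] - front[order[0]][k] or 1.0
        for j in range(1, n - 1):
            dist[order[j]] += (front[order[j + 1]][k] - front[order[j - 1]][k]) / span
    return dist

# invented (error, cost) pairs: (0.15, 9.0) is dominated by (0.10, 8.0)
pts = [(0.10, 8.0), (0.20, 5.0), (0.15, 9.0), (0.30, 2.0)]
front = nondominated(pts)
```

In a full multi-objective PSO these pieces would maintain the external archive and bias leader selection toward less crowded regions of the front.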
Matott, L Shawn; Bartelt-Hunt, Shannon L; Rabideau, Alan J; Fowler, K R
2006-10-15
Although heuristic optimization techniques are increasingly applied in environmental engineering applications, algorithm selection and configuration are often approached in an ad hoc fashion. In this study, the design of a multilayer sorptive barrier system served as a benchmark problem for evaluating several algorithm-tuning procedures, as applied to three global optimization techniques (genetic algorithms, simulated annealing, and particle swarm optimization). Each design problem was configured as a combinatorial optimization in which sorptive materials were selected for inclusion in a landfill liner to minimize the transport of three common organic contaminants. Relative to multilayer sorptive barrier design, study results indicate (i) the binary-coded genetic algorithm is highly efficient and requires minimal tuning, (ii) constraint violations must be carefully integrated to avoid poor algorithm convergence, and (iii) search algorithm performance is strongly influenced by the physical-chemical properties of the organic contaminants of concern. More generally, the results suggest that formal algorithm tuning, which has not been widely applied to environmental engineering optimization, can significantly improve algorithm performance and provide insight into the physical processes that control environmental systems.
Liu, Peng; Zhou, Yi-Feng; Yang, Peng; Gao, Yan-Sha; Zhao, Gui-Ru; Ren, Shi-Yan; Li, Xian-Lun
2016-01-01
Background: The conventional venous access for cardiovascular implantable electronic device (CIED) is the subclavian vein, which is often accompanied by high complication rate. The aim of this study was to assess the efficacy and safety of optimized axillary vein technique. Methods: A total of 247 patients undergoing CIED implantation were included and assigned to the axillary vein group or the subclavian vein group randomly. Success rate of puncture and complications in the perioperative period and follow-ups were recorded. Results: The overall success rate (95.7% vs. 96.0%) and one-time success rate (68.4% vs. 66.1%) of punctures were similar between the two groups. In the subclavian vein group, pneumothorax occurred in three patients. The subclavian gaps of three patients were too tight to allow operation of the electrode lead. In contrast, there were no puncture-associated complications in the axillary vein group. In the patient follow-ups, two patients in the subclavian vein group had subclavian crush syndrome and both of them received lead replacement. The incidence of complications during the perioperative period and follow-ups of the axillary vein group and the subclavian vein group was 1.6% (2/125) and 8.2% (10/122), respectively (χ2 = 5.813, P = 0.016). Conclusion: Optimized axillary vein technique may be superior to the conventional subclavian vein technique for CIED lead placement. Trial Registration: www.clinicaltrials.gov, NCT02358551; https://clinicaltrials.gov/ct2/show/NCT02358551?term=NCT02358551& rank=1. PMID:27823994
Optimized hyper beamforming of linear antenna arrays using collective animal behaviour.
Ram, Gopi; Mandal, Durbadal; Kar, Rajib; Ghoshal, Sakti Prasad
2013-01-01
A novel optimization technique developed by mimicking collective animal behaviour (CAB) is applied to the optimal design of hyper beamforming of linear antenna arrays. Hyper beamforming is based on the sum and difference beam patterns of the array, each raised to the power of a hyperbeam exponent parameter. The optimized hyperbeam is achieved by optimization of the current excitation weights and the uniform interelement spacing. Compared to conventional hyper beamforming of a linear antenna array, real coded genetic algorithm (RGA), particle swarm optimization (PSO), and differential evolution (DE) applied to the hyper beam of the same array can achieve a reduction in sidelobe level (SLL) and the same or smaller first null beam width (FNBW), keeping the same value of the hyperbeam exponent. Further reductions of SLL and FNBW have been achieved by the proposed CAB algorithm, which, unlike RGA, PSO, and DE, finds a near global optimal solution in the present problem. The comparative optimization is illustrated through 10-, 14-, and 20-element linear antenna arrays to establish the optimization efficacy of CAB.
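For intuition about the quantities being optimized, the sketch below computes the array factor of a uniform-spaced linear array and a hyperbeam built from its sum and difference patterns. The particular way the two magnitudes are combined through the exponent u is our assumption of one common formulation, not necessarily this paper's exact definition.

```python
import cmath
import math

def array_factor(weights, d, theta):
    """Array factor of a uniform-spaced linear array; spacing d is in
    wavelengths, theta measured from broadside."""
    psi = 2 * math.pi * d * math.sin(theta)
    return sum(w * cmath.exp(1j * n * psi) for n, w in enumerate(weights))

def hyperbeam(weights, d, theta, u=0.5):
    """Assumed hyperbeam form: sum beam uses all elements in phase, the
    difference beam flips the sign of one half; magnitudes are combined
    through the exponent u (see the hedge in the lead-in)."""
    n = len(weights)
    s = abs(array_factor(weights, d, theta))
    diff_w = [w if i < n // 2 else -w for i, w in enumerate(weights)]
    dlt = abs(array_factor(diff_w, d, theta))
    return max(s**u - dlt**u, 0.0) ** (1 / u)

w = [1.0] * 10                   # uniform 10-element array, half-wave spacing
peak = hyperbeam(w, 0.5, 0.0)    # boresight: difference beam vanishes
side = hyperbeam(w, 0.5, 0.25)   # an off-axis sample, well below the peak
```

An optimizer such as CAB or PSO would perturb the excitation weights and spacing and score each candidate by the resulting sidelobe level and first null beam width.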
Henriques, J A; Moustacchi, E
1981-01-01
The mode of interaction in haploid Saccharomyces cerevisiae of two pso mutations with each other and with rad mutations affected in their excision-resynthesis (rad3), error-prone (rad6), and deoxyribonucleic acid double-strand break (rad52) repair pathways was determined for various double mutant combinations. Survival data for 8-methoxypsoralen photoaddition, 254-nm ultraviolet light and gamma rays are presented. For 8-methoxypsoralen photoaddition, which induces both deoxyribonucleic acid interstrand cross-links and monoadditions, the pso1 mutation is epistatic to the rad6, rad52, and pso2 mutations, whereas it is synergistic to rad3. The pso2 mutation, which is specifically sensitive to photoaddition of psoralens, is epistatic to rad3 and demonstrates a nonepistatic interaction with rad6 and rad52. rad3 and rad6, as well as rad6 and rad52, show synergistic interactions with each other, whereas rad3 is epistatic to rad52. Consequently, it is proposed that the PSO1 and RAD3 genes govern steps in two independent pathways, the PSO1 activity leading to an intermediate which is repaired via the three independent pathways controlled by the RAD6, RAD52, and PSO2 genes. Since pso1 interacts synergistically with rad3 and rad52 and epistatically with rad6 after UV radiation, the PSO1 gene appears to belong to the RAD6 group. For gamma ray sensitivity, pso1 is epistatic to rad6 and rad52, which suggests that this gene controls a step which is common to the two other independent pathways. PMID:7026532
Turbulent-PSO-Based Fuzzy Image Filter With No-Reference Measures for High-Density Impulse Noise.
Chou, Hsien-Hsin; Hsu, Ling-Yuan; Hu, Hwai-Tsu
2013-02-01
Digital images are often corrupted by impulsive noise during data acquisition, transmission, and processing. This paper presents a turbulent particle swarm optimization (TPSO)-based fuzzy filtering (TPFF for short) approach to remove impulse noise from highly corrupted images. The proposed fuzzy filter contains a parallel fuzzy inference mechanism, a fuzzy mean process, and a fuzzy composition process. To a certain extent, the TPFF is an improved, online version of the genetic-based algorithms that attracted considerable attention in past years. As PSO is renowned for its success rate and solution quality, the TPFF is expected to inherit these strengths. In particular, by using a no-reference Q metric, the TPSO learning is sufficient to optimize the parameters required by the TPFF. The proposed fuzzy filter can therefore cope with practical situations where the assumption of an available "ground-truth" reference does not hold. The experimental results confirm that the TPFF attains excellent quality of restored images in terms of peak signal-to-noise ratio, mean square error, and mean absolute error, even when the noise rate is above 0.5 and without the aid of noise-free images.
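Most records in this section build on the same canonical PSO update: velocity blends inertia with cognitive pull toward the particle's own best and social pull toward the swarm best. A minimal textbook sketch, without the turbulent perturbation this particular paper adds, is:

```python
import random

def pso_minimize(f, dim, n=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Canonical (gbest) PSO minimizing f over R^dim, started in [-5, 5]."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]                 # each particle's best position
    pval = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]          # swarm-wide best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pval[i]:
                pbest[i], pval[i] = pos[i][:], v
                if v < gval:
                    gbest, gval = pos[i][:], v
    return gbest, gval

# sphere function as a stand-in objective; a real filter would score the
# restored image quality here instead
best, val = pso_minimize(lambda p: sum(x * x for x in p), dim=2)
```

The turbulent variant would additionally re-randomize stagnant particles; the sketch keeps only the core update.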
Lin, Chih-Hong
2016-09-01
Because the V-belt continuously variable transmission system driven by a permanent magnet (PM) synchronous motor has many unknown nonlinear and time-varying characteristics, achieving good performance with a linear control design is a time-consuming procedure. To overcome the difficulties of linear controller design, a composite recurrent Laguerre orthogonal polynomials modified particle swarm optimization (PSO) neural network (NN) control system, which has online learning capability to adapt to the nonlinear and time-varying behavior of the system, is developed for controlling the PM synchronous motor servo-driven V-belt continuously variable transmission system with lumped nonlinear load disturbances. The composite recurrent Laguerre orthogonal polynomials NN control system consists of an inspector control, a recurrent Laguerre orthogonal polynomials NN control with an adaptation law, and a recouped control with an estimation law. Moreover, the adaptation law for the online parameters of the recurrent Laguerre orthogonal polynomials NN is derived from the Lyapunov stability theorem. Additionally, two optimal learning rates for the parameters are obtained by means of modified PSO in order to achieve better convergence. Finally, comparative experimental results are presented to demonstrate the control performance of the proposed control scheme.
Sequential design of linear quadratic state regulators via the optimal root-locus techniques
NASA Technical Reports Server (NTRS)
Shieh, L. S.; Dib, H. M.; Yates, R. E.
1988-01-01
The use of well-known root-locus techniques for sequentially finding the weighting matrices and the linear quadratic state regulators of multivariable control systems in the frequency domain is considered. This sequential design method permits the retention of some stable open-loop poles and the associated eigenvectors in the closed-loop system; it also allows some optimal closed-loop poles to be placed in a specific region of the complex plane. In addition, it provides a design procedure for determining the weighting matrices and linear quadratic state regulators for the optimal control of multivariable systems in the frequency domain.
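For intuition about the regulators discussed above, a scalar LQ problem can be solved in closed form: the weighting scalars q and r play the role of the weighting matrices, and the root-locus view is that increasing q/r pushes the closed-loop pole further into the left half-plane. The plant and numbers below are illustrative only.

```python
import math

def scalar_lqr(a, b, q, r):
    """Optimal state feedback u = -k*x for dx/dt = a*x + b*u, minimizing
    the integral of q*x**2 + r*u**2. The scalar algebraic Riccati equation
    2*a*p - (b*p)**2 / r + q = 0 is solved in closed form (positive root)."""
    p = r * (a + math.sqrt(a * a + q * b * b / r)) / (b * b)
    k = b * p / r
    return k, a - b * k          # feedback gain and closed-loop pole

# integrator plant a=0, b=1 with unit weights: gain 1, closed-loop pole -1
k, pole = scalar_lqr(a=0.0, b=1.0, q=1.0, r=1.0)
```

Re-running with a heavier state weight (say q=4) moves the pole from -1 to -2, which is exactly the pole-placement-by-weight-selection idea the abstract describes in matrix form.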
A study of optimization techniques in HDR brachytherapy for the prostate
NASA Astrophysics Data System (ADS)
Pokharel, Ghana Shyam
Based on our study, the DVH based objective function performed better than the traditional variance based objective function in creating a clinically acceptable plan when executed under identical conditions. Thirdly, we studied a multiobjective optimization strategy using both DVH and variance based objective functions. The strategy was to create several Pareto optimal solutions by scanning the clinically relevant part of the Pareto front, decoupling optimization from decision making so that the user could select the final solution from a pool of alternatives based on his or her clinical goals. The overall quality of the treatment plan improved using this approach compared to the traditional class solution approach. In fact, the final optimized plan selected using the decision engine with the DVH based objective was comparable to a typical clinical plan created by an experienced physicist. Next, we studied a hybrid technique comprising both stochastic and deterministic algorithms to optimize both dwell positions and dwell times. The simulated annealing algorithm was used to find an optimal catheter distribution, and the DVH based algorithm was used to optimize the 3D dose distribution for a given catheter distribution. This treatment planning and optimization tool was capable of producing clinically acceptable, highly reproducible treatment plans in clinically reasonable time, which makes it appealing for real time procedures. Finally, we studied the feasibility of multiobjective optimization using an evolutionary algorithm for real time HDR brachytherapy for the prostate. With properly tuned algorithm-specific parameters, it created clinically acceptable plans within clinically reasonable time; however, the algorithm was run for only a limited number of generations, which is not generally considered optimal for such algorithms.
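The DVH-based objective discussed above penalizes deviations of dose-volume points; a minimal sketch of one such point (the fractional volume receiving at least a given dose) follows, with invented voxel doses:

```python
def dvh_point(doses, level):
    """V(level): fraction of voxels receiving a dose >= level."""
    return sum(1 for d in doses if d >= level) / len(doses)

# voxel doses as a percentage of prescription (invented for illustration)
doses = [95, 102, 110, 88, 99, 105, 120, 92]
v100 = dvh_point(doses, 100)    # target coverage metric, here 4 of 8 voxels
```

A DVH-based objective would compare such points against clinical goals (e.g. "V100 of the target at least 95%") and sum the shortfalls, whereas a variance-based objective penalizes spread around the prescription dose.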
Zhang, Heng; Xiong, Zhaokun; Ji, Fangzhou; Lai, Bo; Yang, Ping
2017-06-01
Shale gas drilling flowback fluid (SGDF) generated during shale gas extraction is of great concern due to its high total dissolved solids, radioactive elements, and organic matter. To remove the toxic and refractory pollutants in SGDF and improve its biodegradability, a microscale Fe(0)/persulfate/O3 process (mFe(0)/PS/O3) was developed to pretreat this wastewater, obtained from a shale gas well in southwestern China. First, the effects of mFe(0) dosage, O3 flow rate, PS dosage, and pH on the treatment efficiency of the mFe(0)/PS/O3 process were investigated through single-factor experiments. Afterward, the optimal conditions (i.e., pH = 6.7, mFe(0) dosage = 6.74 g/L, PS = 16.89 mmol/L, O3 flow rate = 0.73 L/min) were obtained by using response surface methodology (RSM). Under the optimal conditions, high COD removal (75.3%) and a high BOD5/COD ratio (0.49) were obtained after 120 min of treatment. Moreover, compared with control experiments (i.e., mFe(0), O3, PS, mFe(0)/O3, mFe(0)/PS, O3/PS), the mFe(0)/PS/O3 system showed better pollutant removal in SGDF due to a strong synergistic effect between mFe(0), PS, and O3. In addition, the decomposition or transformation of the organic pollutants in SGDF was analyzed by GC-MS. Finally, the reaction mechanism of the mFe(0)/PS/O3 process was proposed according to the SEM-EDS and XRD results. It can be concluded that the high efficiency of the mFe(0)/PS/O3 process mainly resulted from the combined effect of direct oxidation by ozone and persulfate, heterogeneous and homogeneous catalytic oxidation, Fenton-like reaction, and adsorption. Therefore, the mFe(0)/PS/O3 process was proven to be an effective method for pretreatment of SGDF prior to biological treatment.
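The response surface methodology step used above to locate optimal conditions boils down to fitting a second-order model to experiment data and taking its stationary point. A one-factor sketch follows; the function names and data points are ours, invented for illustration, not the paper's measurements.

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_quadratic(xs, ys):
    """Least-squares fit of y = c0 + c1*x + c2*x**2 via normal equations."""
    A = [[sum(x ** (i + j) for x in xs) for j in range(3)] for i in range(3)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(3)]
    return solve(A, b)

xs = [4.0, 5.0, 6.0, 7.0, 8.0]        # candidate pH settings (invented)
ys = [55.0, 68.0, 75.0, 74.0, 66.0]   # measured COD removal, % (invented)
c0, c1, c2 = fit_quadratic(xs, ys)
best_x = -c1 / (2 * c2)               # stationary point of the fitted model
```

Real RSM designs fit this model jointly over several factors (pH, dosages, flow rate) and check curvature before trusting the stationary point as an optimum.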
Meyer, Burghard Christian; Lescot, Jean-Marie; Laplana, Ramon
2009-02-01
Two spatial optimization approaches, developed from the opposing perspectives of ecological economics and landscape planning and aimed at the definition of new distributions of farming systems and of land use elements, are compared and integrated into a general framework. The first approach, applied to a small river catchment in southwestern France, uses SWAT (Soil and Water Assessment Tool) and a weighted goal programming model in combination with a geographical information system (GIS) for the determination of optimal farming system patterns, based on selected objective functions to minimize deviations from the goals of reducing nitrogen and maintaining income. The second approach, demonstrated in a suburban landscape near Leipzig, Germany, defines a GIS-based predictive habitat model for the search of unfragmented regions suitable for hare populations (Lepus europaeus), followed by compromise optimization with the aim of planning a new habitat structure distribution for the hare. The multifunctional problem is solved by the integration of the three landscape functions ("production of cereals," "resistance to soil erosion by water," and "landscape water retention"). Through the comparison, we propose a framework for the definition of optimal land use patterns based on optimization techniques. The framework includes the main aspects to solve land use distribution problems with the aim of finding the optimal or best land use decisions. It integrates indicators, goals of spatial developments and stakeholders, including weighting, and model tools for the prediction of objective functions and risk assessments. Methodological limits of the uncertainty of data and model outcomes are stressed. The framework clarifies the use of optimization techniques in spatial planning.
Liu Wei; Li Yupeng; Li Xiaoqiang; Cao Wenhua; Zhang Xiaodong
2012-06-15
Purpose: The distal edge tracking (DET) technique in intensity-modulated proton therapy (IMPT) allows for high energy efficiency, fast and simple delivery, and simple inverse treatment planning; however, it is highly sensitive to uncertainties. In this study, the authors explored the application of DET in IMPT (IMPT-DET) and conducted robust optimization of IMPT-DET to see if the planning technique's sensitivity to uncertainties was reduced. They also compared conventional and robust optimization of IMPT-DET with three-dimensional IMPT (IMPT-3D) to gain understanding about how plan robustness is achieved. Methods: They compared the robustness of IMPT-DET and IMPT-3D plans to uncertainties by analyzing plans created for a typical prostate cancer case and a base of skull (BOS) cancer case (using data for patients who had undergone proton therapy at their institution). Spots with the highest and second highest energy layers were chosen so that the Bragg peak would be at the distal edge of the targets in IMPT-DET, using 36 equally spaced beam angles; in IMPT-3D, 3 beams with angles chosen by a beam angle optimization algorithm were planned. Dose contributions for a number of range and setup uncertainties were calculated, and a worst-case robust optimization was performed. A robust quantification technique was used to evaluate the plans' sensitivity to uncertainties. Results: When uncertainties are not accounted for in planning, the DET method is less robust to uncertainties than the 3D method but offers better normal tissue protection. Robust optimization accounting for range and setup uncertainties improved the robustness of IMPT plans; however, the findings show the extent of improvement varies. Conclusions: IMPT's sensitivity to uncertainties can be improved by using robust optimization. They found two possible mechanisms that made improvements possible: (1) a localized single-field uniform dose distribution (LSFUD) mechanism, in which the
Muehlig, Christian; Kufert, Siegfried; Bublitz, Simon; Speck, Uwe
2011-03-20
Using experimental results and numerical simulations, two measuring concepts of the laser induced deflection (LID) technique are introduced and optimized for absolute thin film absorption measurements from deep ultraviolet to IR wavelengths. For transparent optical coatings, a particular probe beam deflection direction allows the absorption measurement with virtually no influence of the substrate absorption, yielding improved accuracy compared to the common techniques of separating bulk and coating absorption. For high-reflection coatings, where substrate absorption contributions are negligible, a different probe beam deflection is chosen to achieve a better signal-to-noise ratio. Various experimental results for the two different measurement concepts are presented.
Eversion-Inversion Labral Repair and Reconstruction Technique for Optimal Suction Seal
Moreira, Brett; Pascual-Garrido, Cecilia; Chadayamurri, Vivek; Mei-Dan, Omer
2015-01-01
Labral tears are a significant cause of hip pain and are currently the most common indication for hip arthroscopy. Compared with labral debridement, labral repair has significantly better outcomes in terms of both daily activities and athletic pursuits in the setting of femoral acetabular impingement. The classic techniques described in the literature for labral repair all use loop or pass-through intrasubstance labral sutures to achieve a functional hip seal. This hip seal is important for hip stability and optimal joint biomechanics, as well as in the prevention of long-term osteoarthritis. We describe a novel eversion-inversion intrasubstance suturing technique for labral repair and reconstruction that can assist in restoration of the native labrum position by re-creating an optimal seal around the femoral head. PMID:26870648
A nonlinear interval number programming method based on RBF global optimization technique
NASA Astrophysics Data System (ADS)
Zhao, Ziheng; Han, Xu; Chao, Jiang
2010-05-01
In this paper, a new nonlinear interval-based programming (NIP) method based on radial basis function (RBF) approximation models and an RBF global search technique is proposed. In NIP, searching for the extreme responses of the objective and constraints is integrated with the main optimization, which leads to extremely low efficiency. Approximation models are commonly used to improve computational efficiency. Consequently, two inevitable problems are encountered: the first is how to obtain the global minimum and maximum in the sub-optimizations; the second is how to diminish the approximation errors on the response bounds of the system. The present method, combined with the RBF global search technique, overcomes these problems, achieving high accuracy and low computational cost simultaneously. Two numerical examples are used to test the effectiveness of the present method.
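The inner sub-optimization NIP requires, finding the bounds of a response over an interval-valued parameter, can be illustrated with a dense scan standing in for the RBF global search. The toy response function and interval below are invented for illustration.

```python
def response(x, p):
    """Toy objective with design variable x and uncertain parameter p."""
    return (x - p) ** 2 + 0.5 * p

def interval_bounds(f, x, lo, hi, samples=201):
    """Min and max of f(x, p) for p in [lo, hi], by dense scanning.
    A surrogate-assisted global search would replace this brute force."""
    vals = [f(x, lo + (hi - lo) * i / (samples - 1)) for i in range(samples)]
    return min(vals), max(vals)

# bounds of the response at design point x = 1.0 with p uncertain in [0, 2]
lo_v, hi_v = interval_bounds(response, x=1.0, lo=0.0, hi=2.0)
```

The outer optimizer then compares designs by these interval bounds (e.g. midpoint and width) rather than by a single nominal value, which is the essence of interval programming.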
Optimization of brushless direct current motor design using an intelligent technique.
Shabanian, Alireza; Tousiwas, Armin Amini Poustchi; Pourmandi, Massoud; Khormali, Aminollah; Ataei, Abdolhay
2015-07-01
This paper presents a method for the optimal design of a slotless permanent magnet brushless DC (BLDC) motor with surface-mounted magnets using an improved bee algorithm (IBA). The characteristics of the motor are expressed as functions of motor geometries. The objective function is a combination of losses, volume, and cost to be minimized simultaneously. This method is based on the capability of swarm-based algorithms to find the optimal solution. One sample case is used to illustrate the performance of the design approach and optimization technique. The IBA has better performance and speed of convergence compared with the bee algorithm (BA). Simulation results show that the proposed method achieves highly efficient performance.
Kuldeep, B; Singh, V K; Kumar, A; Singh, G K
2015-01-01
In this article, a novel approach for 2-channel linear phase quadrature mirror filter (QMF) bank design, based on a hybrid of gradient based optimization and optimization of fractional derivative constraints, is introduced. For this work, recently proposed nature inspired optimization techniques such as cuckoo search (CS), modified cuckoo search (MCS), and wind driven optimization (WDO) are explored for the design of the QMF bank. The 2-channel QMF is also designed with the particle swarm optimization (PSO) and artificial bee colony (ABC) nature inspired optimization techniques. The design problem is formulated in the frequency domain as the sum of the L2 norms of the error in the passband, stopband, and transition band at the quadrature frequency. The contribution of this work is the novel hybrid combination of gradient based optimization (the Lagrange multiplier method) and nature inspired optimization (CS, MCS, WDO, PSO, and ABC) and its use for optimizing the design problem. Performance of the proposed method is evaluated by passband error (ϕp), stopband error (ϕs), transition band error (ϕt), peak reconstruction error (PRE), stopband attenuation (As), and computational time. The design examples illustrate the effectiveness of the proposed method. Results are also compared with other existing algorithms, and it was found that the proposed method gives the best results in terms of peak reconstruction error and transition band error, while being comparable in terms of passband and stopband error. Results show that the proposed method is successful for both lower and higher order 2-channel QMF bank design. A comparative study of various nature inspired optimization techniques is also presented, and the study singles out CS as the best QMF optimization technique.
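The frequency-domain errors used above can be sketched directly: evaluate the prototype filter's magnitude response and take an L2-type error over each band. The 2-tap filter and band edges below are illustrative only, not a design from the paper.

```python
import cmath
import math

def H(h, w):
    """Magnitude of the frequency response of FIR filter h at frequency w."""
    return abs(sum(c * cmath.exp(-1j * w * n) for n, c in enumerate(h)))

def band_error(h, band, target, samples=64):
    """RMS (L2-type) error of |H| against a target level over a band."""
    lo, hi = band
    e = sum((H(h, lo + (hi - lo) * i / (samples - 1)) - target) ** 2
            for i in range(samples))
    return math.sqrt(e / samples)

h = [0.5, 0.5]                                          # 2-tap Haar lowpass
phi_p = band_error(h, (0.0, 0.4 * math.pi), 1.0)        # passband vs gain 1
phi_s = band_error(h, (0.6 * math.pi, math.pi), 0.0)    # stopband vs gain 0
```

A QMF design loop would hand a weighted sum of such band errors (plus the response error at the quadrature frequency pi/2) to the optimizer as its fitness function.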
Hanrahan, Grady; Montes, Ruthy; Gomez, Frank A
2008-01-01
A critical review of recent developments in the use of chemometric experimental design based optimization techniques in capillary electrophoresis applications is presented. Current advances have led to enhanced separation capabilities of a wide range of analytes in such areas as biological, environmental, food technology, pharmaceutical, and medical analysis. Significant developments in design, detection methodology and applications from the last 5 years (2002-2007) are reported. Furthermore, future perspectives in the use of chemometric methodology in capillary electrophoresis are considered.
Optimization techniques for FORTRAN 4 (G and H) programs written for the IBM 360 under OS
NASA Technical Reports Server (NTRS)
Dean, J. L.
1971-01-01
A fairly complete list of programming techniques available to the programmer for optimizing the execution time of production programs written in FORTRAN IV (G and H) for the IBM 360 under OS is reported. After the program has been compiled under FORTRAN H, OPT=2, the process of actually changing code begins. The bulk of the execution time of FORTRAN programs can almost always be attributed to a few loops, and primary consideration is given to these.
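The central technique, concentrating effort on the few hot loops, carries to any language. Shown here in Python rather than FORTRAN IV, hoisting a loop-invariant computation out of the hot loop looks like:

```python
import math

def naive(data, scale):
    """Same result as hoisted(), but the invariant sqrt is recomputed
    on every pass through the loop."""
    total = 0.0
    for x in data:
        total += x * math.sqrt(scale)
    return total

def hoisted(data, scale):
    """The invariant is computed once, outside the loop."""
    s = math.sqrt(scale)
    return sum(x * s for x in data)
```

Both functions return the same sum; the second simply does less work per iteration, which is exactly the kind of hand transformation the report catalogs for code the compiler does not optimize on its own.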
Wroblewski, David; Katrompas, Alexander M.; Parikh, Neel J.
2009-09-01
A method and apparatus for optimizing the operation of a power generating plant using artificial intelligence techniques. One or more decisions D are determined for at least one consecutive time increment, where at least one of the decisions D is associated with a discrete variable for the operation of a power plant device in the power generating plant. In an illustrated embodiment, the power plant device is a soot cleaning device associated with a boiler.
Application of non-linear automatic optimization techniques for calibration of HSPF.
Iskra, Igor; Droste, Ronald
2007-06-01
Development of TMDLs (total maximum daily loads) is often facilitated by using the software system BASINS (Better Assessment Science Integrating point and Nonpoint Sources). One of the key elements of BASINS is the watershed model HSPF (Hydrological Simulation Program Fortran) developed by USEPA. Calibration of HSPF is a very tedious and time-consuming task; more than 100 parameters are involved in the calibration process. In the current research, three non-linear automatic optimization techniques are applied and compared, and an efficient way to calibrate HSPF is suggested. Parameter optimization using local and global optimization techniques for the watershed model is discussed. Approaches to automatic calibration of HSPF using the nonlinear parameter estimator PEST (Parameter Estimation Tool) with its Gauss-Marquardt-Levenberg (GML) method, the Random multiple Search Method (RSM), and the Shuffled Complex Evolution method developed at the University of Arizona (SCE-UA) are presented. Sensitivity analysis was conducted, and the most and least sensitive parameters were identified. It was noted that sensitivity depends on the number of adjustable parameters: as more parameters were optimized simultaneously, a wider range of parameter values could maintain the model in a calibrated state. The impact of GML, RSM, and SCE-UA variables on the ability to find the global minimum of the objective function (OF) was studied, and the best variables are suggested. All three methods proved to be more efficient than manual HSPF calibration. Optimization results obtained by these methods are very similar, although in most cases RSM outperforms GML and SCE-UA outperforms RSM. GML is a very fast method; it can perform as well as SCE-UA when the variables are properly adjusted, the initial guess is good, and insensitive parameters are eliminated from the optimization process. SCE-UA is very robust and convenient to use. Logical definition of key variables in most cases leads to the global minimum.
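A toy stand-in for the random multiple-search calibration loop compared above: multi-start local random perturbation within box bounds, minimizing an objective function. A made-up quadratic replaces the full HSPF simulation run, and all names and bounds here are illustrative.

```python
import random

def calibration_error(params):
    """Pretend OF: distance from an (invented) 'true' parameter pair.
    A real calibration would run the watershed model and score the fit."""
    a, b = params
    return (a - 0.3) ** 2 + (b - 1.7) ** 2

def random_search(fun, bounds, n_starts=50, n_iters=200, seed=1):
    """Multi-start random search: from each random start, accept Gaussian
    perturbations (clipped to bounds) whenever they reduce the objective."""
    rng = random.Random(seed)
    best, best_val = None, float("inf")
    for _ in range(n_starts):
        p = [rng.uniform(lo, hi) for lo, hi in bounds]
        val = fun(p)
        for _ in range(n_iters):
            q = [min(hi, max(lo, x + rng.gauss(0, 0.05)))
                 for x, (lo, hi) in zip(p, bounds)]
            if fun(q) < val:
                p, val = q, fun(q)
        if val < best_val:
            best, best_val = p, val
    return best, best_val

best, err = random_search(calibration_error, bounds=[(0.0, 1.0), (0.0, 3.0)])
```

PEST's GML method would replace the random perturbations with gradient-informed steps, and SCE-UA would evolve complexes of points instead of independent starts; the objective-function interface is the same in all three.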
Luĉić, Felipe; Sánchez-Nieto, Beatriz; Caprile, Paola; Zelada, Gabriel; Goset, Karen
2013-09-06
Total skin electron irradiation (TSEI) has been used as a treatment for mycosis fungoides. Our center has implemented a modified Stanford technique with six pairs of 6 MeV adjacent electron beams, incident perpendicularly on the patient who remains lying on a translational platform, at 200 cm from the source. The purpose of this study is to perform a dosimetric characterization of this technique and to investigate its optimization in terms of energy characteristics, extension, and uniformity of the treatment field. In order to improve the homogeneity of the distribution, a custom-made polyester filter of variable thickness and a uniform PMMA degrader plate were used. It was found that the characteristics of a 9 MeV beam with an 8 mm thick degrader were similar to those of the 6 MeV beam without filter, but with an increased surface dose. The combination of the degrader and the polyester filter improved the uniformity of the distribution along the dual field (180cm long), increasing the dose at the borders of field by 43%. The optimum angles for the pair of beams were ± 27°. This configuration avoided displacement of the patient, and reduced the treatment time and the positioning problems related to the abutting superior and inferior fields. Dose distributions in the transversal plane were measured for the six incidences of the Stanford technique with film dosimetry in an anthropomorphic pelvic phantom. This was performed for the optimized treatment and compared with the previously implemented technique. The comparison showed an increased superficial dose and improved uniformity of the 85% isodose curve coverage for the optimized technique.
Optimized swimmer tracking system by a dynamic fusion of correlation and color histogram techniques
NASA Astrophysics Data System (ADS)
Benarab, D.; Napoléon, T.; Alfalou, A.; Verney, A.; Hellard, P.
2015-12-01
To design a robust swimmer tracking system, we considered two well-known tracking techniques: nonlinear joint transform correlation (NL-JTC) and the color histogram. The two techniques perform comparably well, yet both have substantial limitations; interestingly, they also show some complementarity. The correlation technique yields accurate detection but is sensitive to rotation, scale, and contour deformation, whereas the color histogram technique is robust to rotation and contour deformation but shows low accuracy and is highly sensitive to luminosity and confusing background colors. These observations suggested a dynamic fusion of the correlation plane and the color scores map. Two steps are required before this fusion. The first is the extraction of a sub-plane of correlation that describes the similarity between the reference and target images; this sub-plane has the same size as the color scores map, but the two have different value ranges. The second step is therefore the normalization of the planes to the same interval so that they can be fused. To determine the benefits of this fusion technique, we first tested it on a synthetic image containing different shapes with different colors, which allowed us to optimize the correlation plane and color histogram techniques before applying the fusion technique to real videos of swimmers in international competitions. Last, a comparative study of the dynamic fusion technique and the two classical techniques was carried out to demonstrate the efficacy of the proposed technique. The criteria of comparison were the tracking percentage; the peak-to-correlation energy (PCE), which evaluates the sharpness of the peak (accuracy); and the local standard deviation (Local-STD), which assesses the noise in the planes (robustness).
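The two-step fusion described above (extract comparable score maps, normalize them to a common interval, then fuse) can be sketched as follows. The maps, the equal weighting, and the peak locations are illustrative assumptions, not values from the paper:

```python
def normalize(plane):
    """Rescale a 2-D score map to [0, 1] so maps on different scales can be fused."""
    flat = [v for row in plane for v in row]
    lo, hi = min(flat), max(flat)
    span = (hi - lo) or 1.0          # guard against a constant map
    return [[(v - lo) / span for v in row] for row in plane]

def fuse(corr_plane, color_map, w=0.5):
    """Weighted fusion of the normalized correlation sub-plane and color scores."""
    a, b = normalize(corr_plane), normalize(color_map)
    return [[w * x + (1 - w) * y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def argmax2d(plane):
    best = max((v, i, j) for i, row in enumerate(plane) for j, v in enumerate(row))
    return best[1], best[2]

# Correlation peaks sharply at (1, 2); color scores broadly favour the same area.
corr = [[0.1, 0.2, 0.1], [0.1, 0.3, 5.0], [0.2, 0.1, 0.1]]
color = [[10, 20, 30], [15, 25, 60], [10, 15, 20]]
fused = fuse(corr, color)
print(argmax2d(fused))   # -> (1, 2): both cues agree, so the fused peak stays put
```

The interesting case is when the two cues disagree: the fusion weight then arbitrates between the correlation peak's sharpness and the color map's robustness.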
The Effects of Three PSO Genes on Induced Mutagenesis: A Novel Class of Mutationally Defective Yeast
Cassier, C.; Chanet, R.; Henriques, J. A. P.; Moustacchi, E.
1980-01-01
Reverse and forward mutation, induced by photoaddition of 8-methoxypsoralen (8-MOP) or 3-carbethoxypsoralen (3-CPs) or by ultraviolet light (UV), are reduced in three pso mutants of Saccharomyces cerevisiae. The pso1–1 strain exhibits a lower frequency of spontaneous reversion (antimutator) and is almost entirely unaffected by the three agents in both the haploid and diploid states. The pso2–1 strain shows greatly reduced frequencies of mutations induced by 8-MOP and 3-CPs plus 365 nm radiation in haploid and diploid cells. UV-induced mutations are slightly reduced, whereas survival is almost normal. The pso3–1 strain is mutable by 8-MOP and 3-CPs photoaddition only in the low-dose range. After UV treatment, survival of pso3–1 is nearly normal, whereas the frequencies of induced mutants are diminished compared to the normal PSO+. An analogue of adenine, 6-N-hydroxyaminopurine, is capable of inducing reversions in wild type, as well as in pso and rad6–1 mutant strains, indicating that this drug may act as a direct mutagen in yeast. The comparison of photoaddition of the bifunctional agent (8-MOP) with that of the monofunctional one (3-CPs) confirms that cross-links, as well as monoadditions, are mutagenic in S. cerevisiae. Recombinational repair, taking place in diploid cells or in haploid cells in G2 phase, leads to higher survival but appears to be error-free. PMID:7021318
Application of an optimization technique to a physically based erosion model
NASA Astrophysics Data System (ADS)
Santos, Celso A. G.; Srinivasan, Vajapeyam S.; Suzuki, Koichi; Watanabe, Masahiro
2003-04-01
The difficulties involved in the calibration of physically based erosion models have been partly attributable to the lack of robust optimization tools. This paper presents the essential concepts and an application of a global optimization method, known as SCE-UA (Duan et al., 1992. Water Resources Research 28(4): 1015-1031), to optimize channel and plane parameters in an erosion model; the method has recently shown promise as an effective and efficient tool for calibrating watershed models. It is based on the simplex method, with new evolution steps introduced to improve efficiency by making the simplex expand in directions of more favourable conditions, or contract if a move was taken in a direction of less favourable conditions. The physically based erosion model chosen is WESP (watershed erosion simulation program), developed by Lopes and Lane (1988. In Sediment Budgets, Bordas MP, Walling DE (eds). IAHS Publication no. 174. IAHS: Wallingford). The optimization technique was tested with field data collected in an experimental watershed located in a semi-arid region of Brazil. On the basis of these results, recommended erosion parameter values for a semi-arid region are given, which could serve as initial estimates for other similar areas.
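SCE-UA evolves complexes of points using simplex-style reflection, expansion, and contraction moves. The sketch below shows only that generic simplex step on a toy quadratic, not the full SCE-UA shuffling scheme; the test function and iteration count are illustrative:

```python
def simplex_step(simplex, f):
    """One reflect/expand/contract move of a Nelder-Mead-style simplex,
    the basic operation that SCE-UA's competitive complex evolution builds on."""
    pts = sorted(simplex, key=f)                 # best ... worst
    worst = pts[-1]
    centroid = [sum(p[i] for p in pts[:-1]) / (len(pts) - 1)
                for i in range(len(worst))]
    reflect = [c + (c - w) for c, w in zip(centroid, worst)]
    if f(reflect) < f(pts[0]):                   # better than the best: try expanding
        expand = [c + 2 * (c - w) for c, w in zip(centroid, worst)]
        new = expand if f(expand) < f(reflect) else reflect
    elif f(reflect) < f(worst):                  # an improvement: accept reflection
        new = reflect
    else:                                        # no luck: contract toward centroid
        new = [c + 0.5 * (w - c) for c, w in zip(centroid, worst)]
    return pts[:-1] + [new]

f = lambda p: (p[0] - 1) ** 2 + (p[1] + 2) ** 2  # minimum at (1, -2)
simplex = [[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]]
for _ in range(60):
    simplex = simplex_step(simplex, f)
best = min(simplex, key=f)
print(best, f(best))
```

SCE-UA's contribution on top of this step is to run several such simplexes ("complexes") in parallel and periodically shuffle their points, which is what gives the method its global-search robustness.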
2012-01-01
Background: Systems biology allows the analysis of biological systems behavior under different conditions through in silico experimentation. The possibility of perturbing biological systems in different manners calls for the design of perturbations to achieve particular goals. Examples include the design of a chemical stimulation to maximize the amplitude of a given cellular signal or to achieve a desired pattern in pattern formation systems. Such design problems can be mathematically formulated as dynamic optimization problems, which are particularly challenging when the system is described by partial differential equations. This work addresses the numerical solution of such dynamic optimization problems for spatially distributed biological systems. The usually nonlinear and large-scale nature of the mathematical models related to this class of systems and the presence of constraints on the optimization problems impose a number of difficulties, such as the presence of suboptimal solutions, which call for robust and efficient numerical techniques. Results: Here, the use of a control vector parameterization approach combined with efficient and robust hybrid global optimization methods and a reduced-order model methodology is proposed. The capabilities of this strategy are illustrated through the solution of two challenging problems: bacterial chemotaxis and the FitzHugh-Nagumo model. Conclusions: In the chemotaxis problem the objective was to efficiently compute the time-varying optimal concentration of chemoattractant at one of the spatial boundaries in order to achieve predefined cell distribution profiles. Results are in agreement with those previously published in the literature. The FitzHugh-Nagumo problem is also efficiently solved, and it illustrates very well how dynamic optimization may be used to force a system to evolve from an undesired to a desired pattern with a reduced number of actuators. The presented methodology can be used for the
Optimal control for a parallel hybrid hydraulic excavator using particle swarm optimization.
Wang, Dong-yun; Guan, Chen
2013-01-01
Optimal control using particle swarm optimization (PSO) is put forward for a parallel hybrid hydraulic excavator (PHHE). A power-train mathematical model of the PHHE is presented along with an analysis of the components' parameters. The optimal control problem is then addressed, and the PSO algorithm is introduced to deal with this nonlinear optimal control problem, which contains many inequality/equality constraints. Comparisons between the optimal control and a rule-based one are then made, and the results show that hybrids with the optimal control achieve better fuel economy. Although the PSO algorithm performs off-line optimization, it still provides a performance benchmark for the PHHE and helps give deeper insight into hybrid excavators.
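A minimal sketch of how PSO can handle such a constrained nonlinear problem follows. The quadratic "fuel use" objective, the single inequality constraint, and every coefficient are invented stand-ins for the paper's power-train model; constraints are folded in via a simple quadratic penalty, one common (but not the only) way to treat them in PSO:

```python
import random

def pso_penalty(obj, cons, bounds, n=30, iters=200, seed=0):
    """Minimal PSO minimizer; inequality constraints g(x) <= 0 are handled
    by adding a quadratic penalty to the objective."""
    rng = random.Random(seed)
    dim = len(bounds)
    def fitness(x):
        pen = sum(max(0.0, g(x)) ** 2 for g in cons)
        return obj(x) + 1e3 * pen
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pbest_f = [fitness(p) for p in pos]
    g_i = min(range(n), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g_i][:], pbest_f[g_i]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]                      # inertia
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])  # cognitive
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))    # social
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                bounds[d][1])
            f = fitness(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

# Toy stand-in: minimize "fuel use" x0^2 + x1^2 subject to x0 + x1 >= 1.
obj = lambda x: x[0] ** 2 + x[1] ** 2
cons = [lambda x: 1.0 - x[0] - x[1]]          # rewritten in g(x) <= 0 form
best, fbest = pso_penalty(obj, cons, [(-2, 2), (-2, 2)])
print(best, fbest)   # near (0.5, 0.5), on the constraint boundary
```

The penalty weight trades constraint satisfaction against objective quality; a too-small weight lets the swarm settle on infeasible points, which is one reason constrained PSO variants exist.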
Ji, Zhiwei; Wang, Bing
2014-01-01
Hepatocellular carcinoma (HCC) is one of the most common malignant tumors. Clinical symptoms attributable to HCC are usually absent, so the best therapeutic opportunities are often missed. Traditional Chinese Medicine (TCM) plays an active role in the diagnosis and treatment of HCC. In this paper, we propose a particle swarm optimization-based hierarchical feature selection (PSOHFS) model to infer potential syndromes for the diagnosis of HCC. Firstly, a hierarchical feature representation is developed as a three-layer tree: the clinical symptoms and the patient's positive score are the leaf nodes and the root of the tree, respectively, while each syndrome feature on the middle layer is extracted from a group of symptoms. Secondly, an improved PSO-based algorithm is applied in the new reduced feature space to search for an optimal syndrome subset. Based on the result of feature selection, the causal relationships between symptoms and syndromes are inferred via Bayesian networks. In our experiment, 147 symptoms were aggregated into 27 groups and 27 syndrome features were extracted. The proposed approach discovered 24 syndromes, which markedly improved the diagnosis accuracy. Finally, the Bayesian approach was applied to represent the causal relationships at both the symptom and syndrome levels. The results show that our computational model can facilitate the clinical diagnosis of HCC.
Optimized feline vitrectomy technique for therapeutic stem cell delivery to the inner retina
Jayaram, Hari; Becker, Silke; Eastlake, Karen; Jones, Megan F; Charteris, David G; Limb, G Astrid
2014-01-01
Objective: To describe an optimized surgical technique for feline vitrectomy which reduces bleeding and aids posterior gel clearance in order to facilitate stem cell delivery to the inner retina using cellular scaffolds. Procedures: Three-port pars plana vitrectomies were performed in six specific-pathogen-free domestic cats using an optimized surgical technique to improve access and minimize severe intraoperative bleeding. Results: The surgical procedure was successfully completed in all six animals. Lens-sparing vitrectomy resulted in peripheral lens touch in one of three animals but without cataract formation. Transient bleeding from sclerotomies, which was readily controlled, was seen in two of the six animals. No cases of vitreous hemorrhage, severe postoperative inflammation, retinal detachment, or endophthalmitis were observed during postoperative follow-up. Conclusions: Three-port pars plana vitrectomy can be performed successfully in the cat in a safe and controlled manner when the appropriate precautions are taken to minimize the risk of developing intraoperative hemorrhage. This technique may facilitate the use of feline models of inner retinal degeneration for the development of stem cell transplantation techniques using cellular scaffolds. PMID:24661435
NASA Astrophysics Data System (ADS)
Azadi Moghaddam, Masoud; Kolahan, Farhad
2016-12-01
Face milling is an important and common machining operation because of its versatility and capability to produce various surfaces. It is a machining process that removes material through the relative motion between a work piece and a rotating cutter with multiple cutting edges; it is an interrupted cutting operation in which the teeth of the milling cutter enter and exit the work piece during each revolution. This paper is concerned with the experimental and numerical study of face milling of AISI 1045. The proposed approach is based on statistical analysis of experimental data gathered using a Taguchi design matrix. Surface roughness is the most important performance characteristic of the face milling process. In this study, the effect of input face milling process parameters on the surface roughness of AISI 1045 steel milled parts has been studied. The input parameters are cutting speed (v), feed rate (f_z), and depth of cut (a_p). The experimental data are gathered using a Taguchi L9 design matrix. In order to establish the relations between the input and the output parameters, various regression functions have been fitted to the data based on the output characteristics. The significance of the process parameters with respect to the quality characteristics of the process was also evaluated quantitatively using the analysis of variance method. Then, statistical analysis and validation experiments were carried out to compare and select the best-fitted models. In the last section of this research, a mathematical model has been developed for surface roughness prediction using particle swarm optimization (PSO) on the basis of the experimental results. The model developed for optimization has been validated by confirmation experiments. It was found that the roughness predicted using PSO is in good agreement with the actual surface roughness.
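The modeling step (fit a regression to Taguchi L9 data, then minimize it over the parameter box) can be sketched as below. The roughness measurements are fabricated for illustration, the model is a plain linear fit by normal equations rather than the paper's candidate regressions, and because a linear model attains its box-constrained minimum at a corner, the final search reduces to checking corners instead of running a full PSO:

```python
# Hypothetical L9-style data: (v: cutting speed m/min, fz: feed mm/tooth,
# ap: depth of cut mm) -> measured roughness Ra (um). Values are illustrative only.
runs = [
    (100, 0.10, 0.5, 1.10), (100, 0.15, 1.0, 1.65), (100, 0.20, 1.5, 2.30),
    (150, 0.10, 1.0, 0.95), (150, 0.15, 1.5, 1.55), (150, 0.20, 0.5, 1.90),
    (200, 0.10, 1.5, 0.85), (200, 0.15, 0.5, 1.30), (200, 0.20, 1.0, 1.75),
]

def solve(A, b):
    """Gaussian elimination with partial pivoting, for the normal equations."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            k = M[r][c] / M[c][c]
            for j in range(c, n + 1):
                M[r][j] -= k * M[c][j]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][j] * x[j] for j in range(r + 1, n))) / M[r][r]
    return x

X = [(1.0, v, f, a) for v, f, a, _ in runs]
y = [ra for *_, ra in runs]
AtA = [[sum(xi[r] * xi[c] for xi in X) for c in range(4)] for r in range(4)]
Aty = [sum(xi[r] * yi for xi, yi in zip(X, y)) for r in range(4)]
beta = solve(AtA, Aty)          # Ra ~ b0 + b1*v + b2*fz + b3*ap

# A linear model's minimum over a box lies at a corner, so enumerate corners.
bounds = [(100, 200), (0.10, 0.20), (0.5, 1.5)]
corners = [(v, f, a) for v in bounds[0] for f in bounds[1] for a in bounds[2]]
pred = lambda p: beta[0] + beta[1]*p[0] + beta[2]*p[1] + beta[3]*p[2]
best = min(corners, key=pred)
print(beta, best, pred(best))
```

With a nonlinear regression (the paper's case), corner enumeration no longer suffices, which is exactly where a swarm-based minimizer such as PSO earns its keep.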
Zhang, Yan-jun; Zhang, Shu-guo; Fu, Guang-wei; Li, Da; Liu, Yin; Bi, Wei-hong
2012-04-01
This paper presents a novel algorithm that blends the particle swarm optimization (PSO) algorithm and the Levenberg-Marquardt (LM) algorithm according to a switching probability. It can be used to fit the pseudo-Voigt profile of the Brillouin scattering spectrum, improving the goodness of fit and the precision of frequency shift extraction. The algorithm uses PSO as the main frame. First, PSO performs the global search; after every fixed number of optimization steps, a random number rand(0, 1) is generated. If rand(0, 1) is less than or equal to a predetermined probability P, the best solution obtained by PSO is used as the initial value for the LM algorithm, LM then performs a deep local search, and its solution replaces the previous PSO best solution, after which PSO resumes the global search. If rand(0, 1) is greater than P, PSO simply continues searching until the next random number is drawn and tested. The two algorithms are used alternately to obtain an ideal global optimal solution. Simulation analysis and experimental results show that the new algorithm overcomes the shortcomings of either single algorithm and improves the goodness of fit and the precision of frequency shift extraction from the Brillouin scattering spectrum, fully demonstrating that the new method is practical and feasible.
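The probabilistic switching scheme described above can be sketched as follows. For brevity a greedy coordinate search stands in for the LM step, the spectrum is a synthetic noise-free Lorentzian rather than a pseudo-Voigt, and the parameter bounds and switching probability P are illustrative assumptions:

```python
import random

def lorentzian(nu, a, nu0, w):
    return a * (w / 2) ** 2 / ((nu - nu0) ** 2 + (w / 2) ** 2)

freqs = [10.60 + 0.002 * i for i in range(200)]             # GHz sampling grid
data = [lorentzian(nu, 1.0, 10.85, 0.06) for nu in freqs]   # synthetic spectrum
sse = lambda p: sum((lorentzian(nu, *p) - d) ** 2 for nu, d in zip(freqs, data))

def polish(p, step=0.02, rounds=25):
    """Greedy coordinate search, standing in for the paper's LM local step."""
    p = list(p)
    for _ in range(rounds):
        for d in range(3):
            for s in (step, -step):
                q = p[:]; q[d] += s
                if sse(q) < sse(p): p = q
        step *= 0.8
    return p

def hybrid(bounds, P=0.4, n=15, iters=30, seed=7):
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    vel = [[0.0] * 3 for _ in range(n)]
    pb = [p[:] for p in pos]; pbf = [sse(p) for p in pos]
    g = min(range(n), key=lambda i: pbf[i]); gb, gbf = pb[g][:], pbf[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(3):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pb[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gb[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = sse(pos[i])
            if f < pbf[i]:
                pb[i], pbf[i] = pos[i][:], f
                if f < gbf: gb, gbf = pos[i][:], f
        if rng.random() <= P:            # probabilistic hand-off to local search
            gb = polish(gb); gbf = sse(gb)
    return gb, gbf

best, err = hybrid([(0.5, 1.5), (10.6, 11.0), (0.01, 0.2)])
print(best, err)   # recovered shift should be close to 10.85 GHz
```

The design intent mirrors the paper: PSO supplies basin-finding ability, the local step supplies precision, and the probability P controls how often the expensive polish runs.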
Determination of the optimal tolerance for MLC positioning in sliding window and VMAT techniques
Hernandez, V.; Abella, R.; Calvo, J. F.; Jurado-Bruggemann, D.; Sancho, I.; Carrasco, P.
2015-04-15
Purpose: Several authors have recommended a 2 mm tolerance for multileaf collimator (MLC) positioning in sliding window treatments. In volumetric modulated arc therapy (VMAT) treatments, however, the optimal tolerance for MLC positioning remains unknown. In this paper, the authors present the results of a multicenter study to determine the optimal tolerance for both techniques. Methods: The procedure used is based on dynalog file analysis. The study was carried out using seven Varian linear accelerators from five different centers. Dynalogs were collected from over 100 000 clinical treatments and in-house software was used to compute the number of tolerance faults as a function of the user-defined tolerance. Thus, the optimal value for this tolerance, defined as the lowest achievable value, was investigated. Results: Dynalog files accurately predict the number of tolerance faults as a function of the tolerance value, especially for low fault incidences. All MLCs behaved similarly and the Millennium120 and the HD120 models yielded comparable results. In sliding window techniques, the number of beams with an incidence of hold-offs >1% rapidly decreases for a tolerance of 1.5 mm. In VMAT techniques, the number of tolerance faults sharply drops for tolerances around 2 mm. For a tolerance of 2.5 mm, less than 0.1% of the VMAT arcs presented tolerance faults. Conclusions: Dynalog analysis provides a feasible method for investigating the optimal tolerance for MLC positioning in dynamic fields. In sliding window treatments, the tolerance of 2 mm was found to be adequate, although it can be reduced to 1.5 mm. In VMAT treatments, the typically used 5 mm tolerance is excessively high. Instead, a tolerance of 2.5 mm is recommended.
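The core of the dynalog analysis (counting tolerance faults as a function of the user-defined tolerance) can be sketched with synthetic data. The Gaussian spread of leaf-position deviations below is an invented stand-in for real dynalog records, chosen only to show the sharp drop in fault incidence as the tolerance grows:

```python
import random

def fault_fraction(deviations, tolerance):
    """Fraction of recorded leaf-position deviations exceeding the tolerance,
    mimicking how dynalog analysis predicts faults versus tolerance value."""
    over = sum(1 for d in deviations if abs(d) > tolerance)
    return over / len(deviations)

# Synthetic stand-in for dynalog data: planned-vs-actual leaf positions (mm).
rng = random.Random(42)
devs = [rng.gauss(0.0, 0.7) for _ in range(10000)]

for tol in (1.0, 1.5, 2.0, 2.5):
    print(tol, fault_fraction(devs, tol))
```

Because the deviation distribution has thin tails, a modest increase in tolerance (here 1.0 to 2.5 mm) collapses the fault fraction by orders of magnitude, which is qualitatively the behavior the paper exploits to pick 1.5 mm (sliding window) and 2.5 mm (VMAT).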
Delahaye, P.; Jardin, P.; Maunoury, L.; Angot, J.; Lamy, T.; Thuillier, T.; Celona, L.; Choinski, J.; Gmaj, P.; Koivisto, H.; Kolhinen, V.; Tarvainen, O.; Vondrasek, R.; Wenander, F.
2016-02-15
The present paper summarizes the results obtained from the past few years in the framework of the Enhanced Multi-Ionization of short-Lived Isotopes for Eurisol (EMILIE) project. The EMILIE project aims at improving the charge breeding techniques with both Electron Cyclotron Resonance Ion Sources (ECRIS) and Electron Beam Ion Sources (EBISs) for European Radioactive Ion Beam (RIB) facilities. Within EMILIE, an original technique for debunching the beam from EBIS charge breeders is being developed, for making an optimal use of the capabilities of CW post-accelerators of the future facilities. Such a debunching technique should eventually resolve duty cycle and time structure issues which presently complicate the data-acquisition of experiments. The results of the first tests of this technique are reported here. In comparison with charge breeding with an EBIS, the ECRIS technique had lower performance in efficiency and attainable charge state for metallic ion beams and also suffered from issues related to beam contamination. In recent years, improvements have been made which significantly reduce the differences between the two techniques, making ECRIS charge breeding more attractive especially for CW machines producing intense beams. Upgraded versions of the Phoenix charge breeder, originally developed by LPSC, will be used at SPES and GANIL/SPIRAL. These two charge breeders have benefited from studies undertaken within EMILIE, which are also briefly summarized here.
Du, Gang; Jiang, Zhibin; Diao, Xiaodi; Yao, Yang
2012-04-01
Although the clinical pathway (CP) predefines a predictable standardized care process for a particular diagnosis or procedure, many variances may still unavoidably occur. Some key index parameters have a strong relationship with the variance-handling measures of a CP. In the real world, these problems are highly nonlinear in nature, so it is hard to develop a comprehensive mathematical model. In this paper, a rule extraction approach based on combining a hybrid genetic double multi-group cooperative particle swarm optimization (PSO) algorithm and a discrete PSO algorithm (named HGDMCPSO/DPSO) is developed to discover the previously unknown and potentially complicated nonlinear relationship between key parameters and the variance-handling measures of a CP. The extracted rules can then provide abnormal-variance handling warnings for medical professionals. Three numerical experiments, on the Iris data set from UCI, the Wisconsin breast cancer data set, and a CP variances data set for osteosarcoma preoperative chemotherapy, are used to validate the proposed method. Compared with previous research, the proposed rule extraction algorithm obtains high prediction accuracy with less computing time and more stability, and its rules are easily comprehended by users; it is thus an effective knowledge extraction tool for CP variance handling.
Wang, Shu-tao; Chen, Dong-ying; Wang, Xing-long; Wei, Meng; Wang, Zhi-fang
2015-12-01
In this paper, the fluorescence spectral properties of potassium sorbate in aqueous solution and in orange juice are studied. The results show that the fluorescence spectra of potassium sorbate differ considerably between the two solutions, but in both the fluorescence characteristic peak lies at λ(ex)/λ(em) = 375/490 nm. It can be seen from the two-dimensional fluorescence spectra that the relationship between fluorescence intensity and potassium sorbate concentration is complex and nonlinear. To determine the concentration of potassium sorbate in orange juice, a new method combining the Particle Swarm Optimization (PSO) algorithm with a Back Propagation (BP) neural network is proposed. The relative errors of two predicted concentrations are 1.83% and 1.53%, respectively, which indicates that the method is feasible. The PSO-BP neural network can accurately measure the concentration of potassium sorbate in orange juice in the range of 0.1-2.0 g · L⁻¹.
A soft self-repairing for FBG sensor network in SHM system based on PSO-SVR model reconstruction
NASA Astrophysics Data System (ADS)
Zhang, Xiaoli; Wang, Peng; Liang, Dakai; Fan, Chunfeng; Li, Cailing
2015-05-01
A structural health monitoring (SHM) system takes advantage of an array of sensors to continuously monitor a structure and provide early predictions such as damage position and damage degree. Such a system must be able to monitor the structure under any conditions, including adverse ones; it must therefore be robust and survivable, and ideally have self-repairing ability. In this study, a model reconstruction predicting algorithm based on particle swarm optimization-support vector regression (PSO-SVR) is proposed to achieve self-repairing of the fiber Bragg grating (FBG) sensor network in an SHM system. An eight-point FBG sensor SHM system was then tested on an aircraft wing box. For predicting the damage loading position on the wing box, six disabled-sensor modes were experimentally studied to verify the self-repairing ability of the FBG sensor network, and the predicting performance was compared with that of a non-reconstruction PSO-SVR model. The results indicate that the model reconstruction algorithm outperforms the non-reconstruction model: when some sensors in the FBG-based SHM system fail, the predicting performance of the model reconstruction algorithm is almost as good as when no sensor has failed. In this way, the self-repairing ability of the FBG sensor network is achieved, enhancing the reliability and survivability of the FBG-based SHM system when some FBG sensors are invalid.
NASA Astrophysics Data System (ADS)
Niu, Chun-Yang; Qi, Hong; Huang, Xing; Ruan, Li-Ming; Wang, Wei; Tan, He-Ping
2015-11-01
A hybrid least-square QR decomposition (LSQR)-particle swarm optimization (LSQR-PSO) algorithm was developed to estimate the three-dimensional (3D) temperature distributions and absorption coefficients simultaneously. The outgoing radiative intensities at the boundary surface of the absorbing media were simulated by the line-of-sight (LOS) method, which served as the input for the inverse analysis. The retrieval results showed that the 3D temperature distributions of the participating media with known radiative properties could be retrieved accurately using the LSQR algorithm, even with noisy data. For the participating media with unknown radiative properties, the 3D temperature distributions and absorption coefficients could be retrieved accurately using the LSQR-PSO algorithm even with measurement errors. It was also found that the temperature field could be estimated more accurately than the absorption coefficients. In order to gain insight into the effects on the accuracy of temperature distribution reconstruction, the selection of the detection direction and the angle between two detection directions was also analyzed. Project supported by the Major National Scientific Instruments and Equipment Development Special Foundation of China (Grant No. 51327803), the National Natural Science Foundation of China (Grant No. 51476043), and the Fund of Tianjin Key Laboratory of Civil Aircraft Airworthiness and Maintenance in Civil Aviation University of China.
Andriani, Dian; Wresta, Arini; Atmaja, Tinton Dwi; Saepudin, Aep
2014-02-01
Biogas from the anaerobic digestion of organic materials is a renewable energy resource that consists mainly of CH4 and CO2. Trace components often present in biogas are water vapor, hydrogen sulfide, siloxanes, hydrocarbons, ammonia, oxygen, carbon monoxide, and nitrogen. Considering that biogas is a clean and renewable form of energy that could well substitute for conventional sources of energy (fossil fuels), the optimization of this type of energy becomes substantial. Various optimization techniques for the biogas production process have been developed, including pretreatment, biotechnological approaches, co-digestion, and the use of serial digesters. For some applications, a certain degree of biogas purity is needed. The presence of CO2 and other trace components in biogas can affect engine performance adversely. Reducing the CO2 content significantly upgrades the quality of biogas and enhances its calorific value. Upgrading is generally performed to meet the standards for use as vehicle fuel or for injection into the natural gas grid. Different methods for biogas upgrading are used; they differ in functioning, in the required quality of the incoming gas, and in efficiency. Biogas can be purified of CO2 using pressure swing adsorption, membrane separation, or physical or chemical CO2 absorption. This paper reviews the various techniques that can be used to optimize biogas production as well as to upgrade biogas quality.
NASA Technical Reports Server (NTRS)
Olds, John R.
1992-01-01
Four methods for preliminary aerospace vehicle design are reviewed. The first three methods (classical optimization, system decomposition, and system sensitivity analysis (SSA)) employ numerical optimization techniques and numerical gradients to feed back changes in the design variables. The optimum solution is determined by stepping through a series of designs toward a final solution. Of these three, SSA is argued to be the most applicable to a large-scale highly coupled vehicle design where an accurate minimum of an objective function is required. With SSA, several tasks can be performed in parallel. The techniques of classical optimization and decomposition can be included in SSA, resulting in a very powerful design method. The Taguchi method is more of a 'smart' parametric design method that analyzes variable trends and interactions over designer specified ranges with a minimum of experimental analysis runs. Its advantages are its relative ease of use, ability to handle discrete variables, and ability to characterize the entire design space with a minimum of analysis runs.
On large-scale nonlinear programming techniques for solving optimal control problems
Faco, J.L.D.
1994-12-31
The formulation of decision problems via optimal control theory allows consideration of their dynamic structure and parameter estimation. This paper deals with techniques for choosing search directions in the iterative solution of discrete-time optimal control problems. A unified formulation incorporates nonlinear performance criteria and dynamic equations, time delays, bounded state and control variables, a free planning horizon, and a variable initial state vector. In general, such problems are characterized by a large number of variables, especially when they arise from the discretization of continuous-time optimal control or calculus of variations problems. In a GRG context, the staircase structure of the Jacobian matrix of the dynamic equations is exploited in the choice of basic and superbasic variables and when changes of basis occur along the process. The search directions of the bound-constrained nonlinear programming problem in the reduced space of the superbasic variables are computed by large-scale NLP techniques. A modified Polak-Ribiere conjugate gradient method and a limited-storage quasi-Newton BFGS method are analyzed, and modifications to deal with the bounds on the variables are suggested, based on projected gradient devices with specific linesearches. Some practical models are presented for electric generation planning and fishery management, and the application of the code GRECO - Gradient REduit pour la Commande Optimale - is discussed.
Integration of ab-initio nuclear calculation with derivative free optimization technique
Sharda, Anurag
2008-01-01
Optimization techniques are finding their inroads into the field of nuclear physics calculations, where the objective functions are very complex and computationally intensive. A vast space of parameters needs to be searched to obtain a good match between theoretical (computed) and experimental observables, such as energy levels and spectra. Manual calculation defies the scope of such complex computations and is prone to error at the same time. This body of work attempts to formulate, and implement, a design that integrates the ab initio nuclear physics code MFDn with VTDIRECT95, a Fortran 95 suite of parallel codes implementing the derivative-free optimization algorithm DIRECT. The proposed design is implemented for serial and parallel versions of the optimization technique. Experiments with the initial implementation of the design, showing good matches for several single-nucleus cases, are reported. Determination and assignment of an appropriate number of processors for the parallel integration code is implemented to increase efficiency and resource utilization in the case of multiple-nucleus parameter searches.
Optimization models and techniques for implementation and pricing of electricity markets
NASA Astrophysics Data System (ADS)
Madrigal Martinez, Marcelino
Vertically integrated electric power systems extensively use optimization models and solution techniques to guide their optimal operation and planning. The advent of electric power system restructuring has created needs for new optimization tools and for revision of the tools inherited from the vertical integration era for the market environment. This thesis presents further developments in the use of optimization models and techniques for the implementation and pricing of primary electricity markets. New models, solution approaches, and price-setting alternatives are proposed. Three different modeling groups are studied. The first modeling group considers simplified continuous and discrete models for power pool auctions driven by central cost minimization. The direct solution of the dual problems, and the use of a branch-and-bound algorithm to solve the primal, make it possible to identify the effects of disequilibrium and of different price-setting alternatives on the existence of multiple solutions. It is shown that particular pricing rules worsen the conflict of interest that arises when multiple solutions exist under disequilibrium. A price-setting alternative based on dual variables is shown to diminish this conflict. The second modeling group considers the unit commitment problem. An interior-point/cutting-plane method is proposed for the solution of the dual problem. The new method has better convergence characteristics and does not suffer from the parameter-tuning drawback of previous methods. The robustness characteristics of the interior-point/cutting-plane method, combined with a non-uniform price-setting alternative, show that the conflict of interest is diminished when multiple near-optimal solutions exist. The non-uniform price-setting alternative is compared to a classic average pricing rule. The last modeling group concerns a new type of linear network-constrained clearing system models for daily markets for power and spinning reserve. A new model and
Grid optimization and multigrid techniques for fluid flow and transport problems
NASA Astrophysics Data System (ADS)
Pardhanani, Anand L.
1992-01-01
Special numerical techniques for the efficient and accurate solution of fluid flow and certain other transport processes are investigated. These include adaptive redistribution and optimization of computational grids, multigrid techniques for convection-diffusion problems, and multigrid time-marching methods for non-stationary and nonlinear problems. The grid optimization strategy is based on constructing and minimizing a mathematical objective function which defines the desired grid properties. For convection-diffusion problems, it is demonstrated that standard multigrid techniques fail when the coarse grids violate mesh-size restrictions. A variety of alternate multigrid strategies are explored, including artificial dissipation, fine grid pre-elimination, self-adjoint formulation, defect correction, and combination with grid redistribution. Multilevel techniques are also developed for time-dependent problems, including evolution problems with non-steady or transient solutions, and steady-state problems solved with artificial time-marching. Both explicit and implicit integration schemes are investigated, and it is shown that significant performance improvements can be gained with the use of multigrid. These techniques are implemented and tested on representative model problems as well as practical applications of current research interest. The grid investigations involve optimization in model problems, and in large-scale 3-D aircraft wing-body configurations. The multigrid applications range from model convection-diffusion problems, to time-dependent problems, to coupled nonlinear problems in two major application areas. The first application involves simulating spatio-temporal patterns in a coupled, nonlinear, reaction-diffusion problem that models the behavior of the Belousov-Zhabotinskii reaction. This multi-species reaction, which exhibits intricate patterns in laboratory experiments, has attracted considerable interest in the field of nonlinear dynamics. The
Zhang, Yong-Feng; Chiang, Hsiao-Dong
2016-06-20
A novel three-stage methodology, termed the "consensus-based particle swarm optimization (PSO)-assisted Trust-Tech methodology," to find global optimal solutions for nonlinear optimization problems is presented. It is composed of Trust-Tech methods, consensus-based PSO, and local optimization methods that are integrated to compute a set of high-quality local optimal solutions that can contain the global optimal solution. The proposed methodology compares very favorably with several recently developed PSO algorithms based on a set of small-dimension benchmark optimization problems and 20 large-dimension test functions from the CEC 2010 competition. The analytical basis for the proposed methodology is also provided. Experimental results demonstrate that the proposed methodology can rapidly obtain high-quality optimal solutions that can contain the global optimal solution. The scalability of the proposed methodology is promising.
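Many entries in this collection build on the canonical global-best PSO update. As a reference point (a minimal textbook sketch, not the consensus-based or Trust-Tech variant described above), PSO minimizing a sphere benchmark might look like:

```python
import numpy as np

def pso(f, dim, n=30, iters=200, lo=-5.0, hi=5.0, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over [lo, hi]^dim with a basic global-best PSO."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n, dim))          # particle positions
    v = np.zeros((n, dim))                     # particle velocities
    pbest, pval = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pval)]                 # global best position
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(f, 1, x)
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[np.argmin(pval)]
    return g, float(pval.min())

sphere = lambda z: float(np.sum(z * z))
best, fbest = pso(sphere, dim=5)
```

The inertia weight `w` and acceleration coefficients `c1`, `c2` are common defaults, not values taken from any of the papers above.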
Optimized Scheduling Technique of Null Subcarriers for Peak Power Control in 3GPP LTE Downlink
Cho, Soobum; Park, Sang Kyu
2014-01-01
Orthogonal frequency division multiple access (OFDMA) is a key multiple access technique for the long term evolution (LTE) downlink. However, high peak-to-average power ratio (PAPR) can cause the degradation of power efficiency. The well-known PAPR reduction technique, dummy sequence insertion (DSI), can be a realistic solution because of its structural simplicity. However, the large usage of subcarriers for the dummy sequences may decrease the transmitted data rate in the DSI scheme. In this paper, a novel DSI scheme is applied to the LTE system. Firstly, we obtain the null subcarriers in single-input single-output (SISO) and multiple-input multiple-output (MIMO) systems, respectively; then, optimized dummy sequences are inserted into the obtained null subcarrier. Simulation results show that Walsh-Hadamard transform (WHT) sequence is the best for the dummy sequence and the ratio of 16 to 20 for the WHT and randomly generated sequences has the maximum PAPR reduction performance. The number of near optimal iteration is derived to prevent exhausted iterations. It is also shown that there is no bit error rate (BER) degradation with the proposed technique in LTE downlink system. PMID:24883376
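For readers unfamiliar with the PAPR metric that DSI schemes target, a small sketch computes it directly from the IFFT output. This is generic OFDM, not the LTE subcarrier layout or the WHT dummy sequences of the paper; the 256-point FFT and 200 data subcarriers are arbitrary illustrative choices:

```python
import numpy as np

def papr_db(freq_symbols, n_fft=256):
    """PAPR (dB) of one OFDM symbol: peak instantaneous power / mean power."""
    x = np.fft.ifft(freq_symbols, n=n_fft)   # time-domain OFDM symbol
    p = np.abs(x) ** 2
    return float(10.0 * np.log10(p.max() / p.mean()))

rng = np.random.default_rng(1)
sym = np.zeros(256, dtype=complex)
# random QPSK on the first 200 subcarriers; the rest stay null
sym[:200] = (rng.choice([-1.0, 1.0], 200) + 1j * rng.choice([-1.0, 1.0], 200)) / np.sqrt(2)
val = papr_db(sym)
```

A DSI-style scheme would fill some of the null subcarriers with dummy sequences chosen to lower this value.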
Yan, Ming; Wei, Ying-chun; Li, Xue-feng; Meng, Jin; Wu, Yun; Xiao, Wei
2015-10-01
The theoretical basis of alcohol precipitation process control was provided, the alcohol precipitation step was optimized, and the relationship equation was obtained. Morroniside, loganin and paeoniflorin were used as the evaluation indexes to determine the impact factors of the alcohol precipitation technique for Liuwei Dihuang decoction by Plackett-Burman experimental design, and the levels of the non-significant factors were identified. Box-Behnken response surface methodology was then used to examine how the critical process parameters influence the effect of alcohol precipitation and to derive the interactions between key process parameters and the correlation equation with the index components. By establishing and solving a quadratic regression model of the composite score, the optimum conditions for alcohol precipitation of Liuwei Dihuang decoction were determined as follows: stirring speed 580 r·min(-1), standing time 17 hours, alcohol concentration 34%, and decoction density 1.13. The response surface methodology for optimizing the alcohol precipitation technique of Liuwei Dihuang decoction is reasonable and feasible.
Cheng, Zhengjun; Zhang, Yuntao; Zhou, Changhong; Zhang, Wenjun; Gao, Shibo
2009-07-29
In the present work, the support vector machine (SVM) and Adaboost-SVM have been used to develop a classification model as a potential screening mechanism for a novel series of 5-HT(1A) selective ligands. Each compound is represented by calculated structural descriptors that encode topological features. The particle swarm optimization (PSO) and the stepwise multiple linear regression (Stepwise-MLR) methods have been used to search descriptor space and select the descriptors which are responsible for the inhibitory activity of these compounds. The model containing seven descriptors, found by Adaboost-SVM, has shown better predictive capability than the other models. The total accuracy in prediction for the training and test sets is 100.0% and 95.0% for PSO-Adaboost-SVM, 99.1% and 92.5% for PSO-SVM, 99.1% and 82.5% for Stepwise-MLR-Adaboost-SVM, 99.1% and 77.5% for Stepwise-MLR-SVM, respectively. The results indicate that Adaboost-SVM can be used as a useful modeling tool for QSAR studies.
Optimization techniques applied to passive measures for in-orbit spacecraft survivability
NASA Technical Reports Server (NTRS)
Mog, Robert A.; Price, D. Marvin
1991-01-01
Spacecraft designers have always been concerned about the effects of meteoroid impacts on mission safety. The engineering solution to this problem has generally been to erect a bumper or shield placed outboard from the spacecraft wall to disrupt/deflect the incoming projectiles. Spacecraft designers have a number of tools at their disposal to aid in the design process. These include hypervelocity impact testing, analytic impact predictors, and hydrodynamic codes. Analytic impact predictors generally provide the best quick-look estimate of design tradeoffs. The most complete way to determine the characteristics of an analytic impact predictor is through optimization of the protective structures design problem formulated with the predictor of interest. Space Station Freedom protective structures design insight is provided through the coupling of design/material requirements, hypervelocity impact phenomenology, meteoroid and space debris environment sensitivities, optimization techniques and operations research strategies, and mission scenarios. Major results are presented.
Development of a parameter optimization technique for the design of automatic control systems
NASA Technical Reports Server (NTRS)
Whitaker, P. H.
1977-01-01
Parameter optimization techniques for the design of linear automatic control systems that are applicable to both continuous and digital systems are described. The model performance index is used as the optimization criterion because of the physical insight that can be attached to it. The design emphasis is to start with the simplest system configuration that experience indicates would be practical. Design parameters are specified, and a digital computer program is used to select that set of parameter values which minimizes the performance index. The resulting design is examined, and complexity, through the use of more complex information processing or more feedback paths, is added only if performance fails to meet operational specifications. System performance specifications are assumed to be such that the desired step function time response of the system can be inferred.
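The workflow described here, specify design parameters and let a program pick the set minimizing a performance index, can be illustrated with a toy example. The first-order plant, proportional controller, and summed-squared-error index below are illustrative stand-ins, not the model performance index of the report:

```python
import numpy as np

def step_ise(k, a=0.9, b=0.1, n=200):
    """Sum of squared error of the unit-step response of the discrete
    plant x[t+1] = a*x[t] + b*u[t] under proportional control u = k*(1 - x)."""
    x, ise = 0.0, 0.0
    for _ in range(n):
        e = 1.0 - x
        ise += e * e
        x = a * x + b * k * e
    return ise

# exhaustive scan over the single design parameter, standing in for the
# report's computer search over a specified parameter set
gains = np.linspace(0.1, 10.0, 100)
costs = [step_ise(k) for k in gains]
k_best = float(gains[int(np.argmin(costs))])
```

In the report's procedure, one would inspect the resulting design and add complexity only if this minimized index still failed the operational specifications.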
Human motion planning based on recursive dynamics and optimal control techniques
NASA Technical Reports Server (NTRS)
Lo, Janzen; Huang, Gang; Metaxas, Dimitris
2002-01-01
This paper presents an efficient optimal control and recursive dynamics-based computer animation system for simulating and controlling the motion of articulated figures. A quasi-Newton nonlinear programming technique (super-linear convergence) is implemented to solve minimum torque-based human motion-planning problems. The explicit analytical gradients needed in the dynamics are derived using a matrix exponential formulation and Lie algebra. Cubic spline functions are used to make the search space for an optimal solution finite. Based on our formulations, our method is well conditioned and robust, in addition to being computationally efficient. To better illustrate the efficiency of our method, we present results of natural looking and physically correct human motions for a variety of human motion tasks involving open and closed loop kinematic chains.
A High-Level Technique for Estimation and Optimization of Leakage Power for Full Adder
NASA Astrophysics Data System (ADS)
Shrivas, Jayram; Akashe, Shyam; Tiwari, Nitesh
2013-06-01
Optimization of power is a very important issue in low-voltage and low-power applications. In this paper, we propose a power gating technique to reduce the leakage current and leakage power of a one-bit full adder. The technique uses two sleep transistors: a PMOS transistor inserted between the power supply and the pull-up network, and an NMOS transistor inserted between the pull-down network and the ground terminal. Both sleep transistors are turned on when the circuit is working in active mode and turned off when the circuit is working in standby mode. We have simulated the one-bit full adder and compared it with the power-gated version using the Cadence Virtuoso tool in 45 nm technology at 0.7 V and 27°C. By applying this technique, we have reduced the leakage current from 2.935 pA to 1.905 pA and the leakage power from 25.04 μW to 9.233 μW, a leakage power reduction of up to 63.12%.
Lubbers, R.H.
1982-01-01
The subject of this dissertation is an analysis of techniques which can be used to expedite the approach to an optimal plan for the installation of new generating units in an electric utility system, using the WASP (Wein Automatic System Planning) program. The objectives are three-fold: to present the details and results of a sensitivity study performed using WASP; to analyze and overcome the logistical problems resulting in the excessive computation time required to complete a generation expansion study using WASP; and to compare WASP's results with those of another widely used generation expansion planning program - OGP (Optimized Generation Planning) - and to analyze how the differing modelling methodologies impact those results. The first objective was accomplished by providing a detailed description of the collection and preparation of input data for a sensitivity study and by reporting on trends noted when various economic and peak load growth data were varied. Accomplishment of the second objective led to the demonstration of three techniques for expediting WASP analyses, namely: employing a yearly optimization scheme, involving the weighting of the objective function with estimated operation and capital costs incurred during a static operation period, to arrive at an initial expansion plan; modelling generating units as a single block of capacity in order to decrease computation time with little sacrifice in precision; and using the static operation period to reduce end effects of the dynamic optimization. The third objective was accomplished through the comparison of the results of a sample planning study carried out using both WASP and OGP. Despite several areas in which modelling methodologies differed, startlingly similar results were obtained.
NASA Technical Reports Server (NTRS)
Zimbelman, D. F.; Dennehy, C. J.; Welch, R. V.; Born, G. H.
1990-01-01
A predictive temperature estimation technique which can be used to drive a model of the Sunrise/Sunset thermal 'snap' disturbance torque experienced by low Earth orbiting spacecraft is described. The twice per orbit impulsive disturbance torque is attributed to vehicle passage in and out of the Earth's shadow cone (umbra), during which large flexible appendages undergo rapidly changing thermal conditions. Flexible members, in particular solar arrays, experience rapid cooling during umbra entrance (Sunset) and rapid heating during exit (Sunrise). The thermal 'snap' phenomenon has been observed during normal on-orbit operations of both the LANDSAT-4 satellite and the Communications Technology Satellite (CTS). Thermal 'snap' has also been predicted to be a dominant source of error for the TOPEX satellite. The fundamental equations used to model the Sunrise/Sunset thermal 'snap' disturbance torque for a typical solar-array-like structure will be described. For this derivation the array is assumed to be a thin, cantilevered beam. The time-varying thermal gradient is shown to be the driving force behind predicting the thermal 'snap' disturbance torque and therefore motivates the need for accurate estimates of temperature. The development of a technique to optimally estimate appendage surface temperature is highlighted. The objective analysis method used is structured on the Gauss-Markov theorem and provides an optimal temperature estimate at a prescribed location given data from a distributed thermal sensor network. The optimally estimated surface temperatures could then be used to compute the thermal gradient across the body. The estimation technique is demonstrated using a typical satellite solar array.
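The Gauss-Markov estimation step can be illustrated in miniature. Assuming independent sensors measuring a common temperature (a deliberate simplification of the distributed-network problem above), the best linear unbiased estimate reduces to inverse-variance weighting:

```python
import numpy as np

def blue_estimate(readings, variances):
    """Best linear unbiased estimate (BLUE) of a common temperature from
    independent sensors, weighting each by its inverse error variance."""
    readings = np.asarray(readings, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)
    est = float(np.sum(w * readings) / np.sum(w))
    est_var = float(1.0 / np.sum(w))    # variance of the combined estimate
    return est, est_var

# three hypothetical array-surface sensors (kelvin) with differing accuracy
t_hat, var_hat = blue_estimate([250.0, 252.0, 249.0], [4.0, 1.0, 4.0])
```

The combined variance is always smaller than that of the best individual sensor, which is what makes the estimated temperatures usable for computing thermal gradients.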
The Analysis and Design of Low Boom Configurations Using CFD and Numerical Optimization Techniques
NASA Technical Reports Server (NTRS)
Siclari, Michael J.
1999-01-01
The use of computational fluid dynamics (CFD) for the analysis of sonic booms generated by aircraft has been shown to increase the accuracy and reliability of predictions. CFD takes into account important three-dimensional and nonlinear effects that are generally neglected by modified linear theory (MLT) methods. Up to the present time, CFD methods have been primarily used for analysis or prediction. Some investigators have used CFD to impact the design of low boom configurations using trial and error methods. One investigator developed a hybrid design method using a combination of Modified Linear Theory (e.g. F-functions) and CFD to provide equivalent area due to lift driven by a numerical optimizer to redesign or modify an existing configuration to achieve a shaped sonic boom signature. A three-dimensional design methodology has not yet been developed that completely uses nonlinear methods or CFD. Constrained numerical optimization techniques have existed for some time. Many of these methods use gradients to search for the minimum of a specified objective function subject to a variety of design variable bounds, linear and nonlinear constraints. Gradient based design optimization methods require the determination of the objective function gradients with respect to each of the design variables. These optimization methods are efficient and work well if the gradients can be obtained analytically. If analytical gradients are not available, the objective gradients or derivatives with respect to the design variables must be obtained numerically. To obtain numerical gradients, say, for 10 design variables, might require anywhere from 10 to 20 objective function evaluations. Typically, 5-10 global iterations of the optimizer are required to minimize the objective function. In terms of using CFD as a design optimization tool, the numerical evaluation of gradients can require anywhere from 100 to 200 CFD computations per design for only 10 design variables. If one CFD
Optimization techniques applied to passive measures for in-orbit spacecraft survivability
NASA Technical Reports Server (NTRS)
Mog, Robert A.; Helba, Michael J.; Hill, Janeil B.
1992-01-01
The purpose of this research is to provide Space Station Freedom protective structures design insight through the coupling of design/material requirements, hypervelocity impact phenomenology, meteoroid and space debris environment sensitivities, optimization techniques and operations research strategies, and mission scenarios. The goals of the research are: (1) to develop a Monte Carlo simulation tool which will provide top level insight for Space Station protective structures designers; (2) to develop advanced shielding concepts relevant to Space Station Freedom using unique multiple bumper approaches; and (3) to investigate projectile shape effects on protective structures design.
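A Monte Carlo simulation of the kind described in goal (1) can be sketched as follows; the power-law debris diameter distribution and the fixed ballistic limit are invented stand-in values, not Space Station Freedom design numbers:

```python
import numpy as np

def penetration_probability(n=100_000, bl_mm=0.8, seed=42):
    """Monte Carlo estimate of the fraction of impacts that defeat a shield,
    assuming a power-law impactor diameter distribution and a fixed
    ballistic limit bl_mm (both illustrative assumptions)."""
    rng = np.random.default_rng(seed)
    u = rng.random(n)
    # inverse-CDF sampling of diameters d in [0.1, 10] mm, pdf ~ d**(-2.5)
    d_min, d_max, p = 0.1, 10.0, 2.5
    d = (d_min ** (1 - p) + u * (d_max ** (1 - p) - d_min ** (1 - p))) ** (1 / (1 - p))
    return float(np.mean(d > bl_mm))

prob = penetration_probability()
```

A design-level tool would replace the fixed ballistic limit with a ballistic-limit equation per shield concept and fold in the environment flux models.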
Reconstruction of plasma current profile of tokamaks using combinatorial optimization techniques
Kishimoto, Maki; Sakasai, Kaoru; Ara, Katuyuki; Suzuki, Yasuo; Fujita, Takaaki
1996-04-01
New methods to reconstruct the plasma shape and plasma current distribution from magnetic measurements are proposed. The reconstruction of the plasma current profile from magnetic measurements is regarded as an optimum allocation problem of currents into the cross section of the vacuum vessel of the tokamak. For solving this optimization problem, the authors use two types of solutions: a genetic algorithm, and a combined method of a Hopfield neural network and a genetic algorithm. The effectiveness of these methods is shown by applying them to JT-60U plasmas.
NASA Astrophysics Data System (ADS)
Kuai, Su-Lan; Hu, Xing-Fang; Haché, Alain; Truong, Vo-Van
2004-06-01
High-quality polystyrene colloidal crystals were fabricated from aqueous solutions with a vertical deposition technique. The roles of sphere size, volume fraction, relative humidity (RH), evaporation temperature and the final drying conditions in the film quality were investigated. We found that all of those parameters must be taken into account in order to achieve the highest quality for a given particle size. With particles of 300 nm in diameter, the optimal conditions were found to be a 0.1-0.2% volume fraction, an RH between 80% and 90%, an evaporation temperature near 60°C and a quasi-equilibrium drying process.
Design and optimization of stepped austempered ductile iron using characterization techniques
Hernández-Rivera, J.L.; Garay-Reyes, C.G.; Campos-Cambranis, R.E.; Cruz-Rivera, J.J.
2013-09-15
Conventional characterization techniques such as dilatometry, X-ray diffraction and metallography were used to select and optimize temperatures and times for conventional and stepped austempering. Austenitization and conventional austempering time was selected when the dilatometry graphs showed a constant expansion value. A special heat color-etching technique was applied to distinguish between the untransformed austenite and high carbon stabilized austenite which had formed during the treatments. Finally, it was found that carbide precipitation was absent during the stepped austempering in contrast to conventional austempering, on which carbide evidence was found. - Highlights: • Dilatometry helped to establish austenitization and austempering parameters. • Untransformed austenite was present even for longer processing times. • Ausferrite formed during stepped austempering caused important reinforcement effect. • Carbide precipitation was absent during stepped treatment.
New efficient optimizing techniques for Kalman filters and numerical weather prediction models
NASA Astrophysics Data System (ADS)
Famelis, Ioannis; Galanis, George; Liakatas, Aristotelis
2016-06-01
The need for accurate local environmental predictions and simulations beyond the classical meteorological forecasts has been increasing in recent years due to the great number of applications that are directly or indirectly affected: renewable energy resource assessment, natural hazards early warning systems, and questions on global warming and climate change can be listed among them. Within this framework, the utilization of numerical weather and wave prediction systems in conjunction with advanced statistical techniques that support the elimination of the model bias and the reduction of the error variability may successfully address the above issues. In the present work, new optimization methods are studied and tested in selected areas of Greece where the use of renewable energy sources is of critical importance. The added value of the proposed work is due to the solid mathematical background adopted, making use of Information Geometry and statistical techniques, new versions of Kalman filters, and state-of-the-art numerical analysis tools.
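A minimal example of the Kalman-filter bias-removal idea, a scalar random-walk bias model applied to synthetic forecasts, is sketched below (this is the generic technique, not the Information Geometry machinery of the paper; all numbers are invented):

```python
import numpy as np

def kalman_bias_filter(forecasts, observations, q=1e-4, r=0.25):
    """Scalar Kalman filter tracking a slowly drifting forecast bias b
    (random-walk variance q) from noisy errors y_t = obs_t - fc_t."""
    b, p = 0.0, 1.0                     # bias estimate and its variance
    corrected = []
    for fc, ob in zip(forecasts, observations):
        corrected.append(fc + b)        # correct with the prior bias estimate
        p += q                          # predict: bias follows a random walk
        k = p / (p + r)                 # Kalman gain
        b += k * ((ob - fc) - b)        # update with the observed error
        p *= 1.0 - k
    return np.array(corrected)

# synthetic example: a model that runs 0.5 K too cold
rng = np.random.default_rng(3)
truth = 15.0 + np.sin(np.linspace(0.0, 6.0, 120))
fcs = truth - 0.5 + 0.05 * rng.standard_normal(120)
obs = truth + 0.05 * rng.standard_normal(120)
out = kalman_bias_filter(fcs, obs)
```

After a short spin-up, the corrected series removes most of the systematic bias while leaving the signal intact.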
Sunspots and Coronal Bright Points Tracking using a Hybrid Algorithm of PSO and Active Contour Model
NASA Astrophysics Data System (ADS)
Dorotovic, I.; Shahamatnia, E.; Lorenc, M.; Rybansky, M.; Ribeiro, R. A.; Fonseca, J. M.
2014-02-01
In the last decades there has been a steady increase of high-resolution data, from ground-based and space-borne solar instruments, and also of solar data volume. These huge image archives require efficient automatic image processing software tools capable of detecting and tracking various features in the solar atmosphere. Results of application of such tools are essential for studies of solar activity evolution, climate change understanding and space weather prediction. The follow up of interplanetary and near-Earth phenomena requires, among others, automatic tracking algorithms that can determine where a feature is located, on successive images taken along the period of observation. Full-disc solar images, obtained both with the ground-based solar telescopes and the instruments onboard the satellites, provide essential observational material for solar physicists and space weather researchers for better understanding the Sun, studying the evolution of various features in the solar atmosphere, and also investigating solar differential rotation by tracking such features along time. Here we demonstrate and discuss the suitability of applying a hybrid Particle Swarm Optimization (PSO) algorithm and Active Contour model for tracking and determining the differential rotation of sunspots and coronal bright points (CBPs) on a set of selected solar images. The results obtained confirm that the proposed approach constitutes a promising tool for investigating the evolution of solar activity and also for automating tracking features on massive solar image archives.
Discrimination between Alzheimer's disease and mild cognitive impairment using SOM and PSO-SVM.
Yang, Shih-Ting; Lee, Jiann-Der; Chang, Tzyh-Chyang; Huang, Chung-Hsien; Wang, Jiun-Jie; Hsu, Wen-Chuin; Chan, Hsiao-Lung; Wai, Yau-Yau; Li, Kuan-Yi
2013-01-01
In this study, an MRI-based classification framework was proposed to distinguish patients with AD and MCI from normal participants by using multiple features and different classifiers. First, we extracted features (volume and shape) from MRI data by using a series of image processing steps. Subsequently, we applied principal component analysis (PCA) to convert a set of features of possibly correlated variables into a smaller set of values of linearly uncorrelated variables, decreasing the dimensions of the feature space. Finally, we developed a novel data mining framework combining support vector machine (SVM) and particle swarm optimization (PSO) for the AD/MCI classification. In order to compare the hybrid method with a traditional classifier, two kinds of classifiers, that is, SVM and a self-organizing map (SOM), were trained for patient classification. With the proposed framework, the classification accuracy improved up to 82.35% and 77.78% in patients with AD and MCI, respectively, and reached up to 94.12% and 88.89% in AD and MCI when the volumetric and shape features were combined and PCA was used. The present results suggest that novel multivariate methods of pattern matching reach a clinically relevant accuracy for the a priori prediction of the progression from MCI to AD.
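The PCA step of such a pipeline, reducing correlated morphometric features to a few uncorrelated components before classification, can be sketched with plain SVD; the data below are synthetic and the feature count is arbitrary:

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project X (samples x features) onto its top principal components."""
    Xc = X - X.mean(axis=0)             # center each feature
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = (s ** 2) / np.sum(s ** 2)   # variance ratio per component
    return Xc @ Vt[:n_components].T, explained[:n_components]

rng = np.random.default_rng(0)
# 50 hypothetical subjects with 10 correlated features of effective rank 2
base = rng.standard_normal((50, 2))
X = base @ rng.standard_normal((2, 10)) + 0.05 * rng.standard_normal((50, 10))
Z, ratio = pca_reduce(X, 2)
```

The reduced matrix `Z` would then be fed to the SVM, with PSO tuning the classifier hyperparameters.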
Fractional order fuzzy control of hybrid power system with renewable generation using chaotic PSO.
Pan, Indranil; Das, Saptarshi
2016-05-01
This paper investigates the operation of a hybrid power system through a novel fuzzy control scheme. The hybrid power system employs various autonomous generation systems like wind turbine, solar photovoltaic, diesel engine, fuel-cell, aqua electrolyzer etc. Other energy storage devices like the battery, flywheel and ultra-capacitor are also present in the network. A novel fractional order (FO) fuzzy control scheme is employed and its parameters are tuned with a particle swarm optimization (PSO) algorithm augmented with two chaotic maps for achieving an improved performance. This FO fuzzy controller shows better performance over the classical PID, and the integer order fuzzy PID controller in both linear and nonlinear operating regimes. The FO fuzzy controller also shows stronger robustness properties against system parameter variation and rate constraint nonlinearity, than that with the other controller structures. The robustness is a highly desirable property in such a scenario since many components of the hybrid power system may be switched on/off or may run at lower/higher power output, at different time instants.
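Chaotic-map augmentation of PSO typically replaces some uniform random draws in the velocity update with a chaotic sequence. A logistic map is one common choice (whether it is one of the paper's two maps is an assumption); a slightly sub-critical growth rate keeps the iterates strictly inside (0, 1):

```python
def logistic_map_sequence(n, x0=0.7, mu=3.99):
    """Deterministic chaotic sequence in (0, 1) from the logistic map
    x_{t+1} = mu * x_t * (1 - x_t), usable in place of uniform draws."""
    seq, x = [], x0
    for _ in range(n):
        x = mu * x * (1.0 - x)
        seq.append(x)
    return seq

chaos = logistic_map_sequence(500)
```

The appeal over a plain RNG is ergodic, non-repeating coverage of (0, 1) that can help the swarm escape premature convergence.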
Parameter extraction of solar cells using particle swarm optimization
NASA Astrophysics Data System (ADS)
Ye, Meiying; Wang, Xiaodong; Xu, Yousheng
2009-05-01
In this article, particle swarm optimization (PSO) was applied to extract the solar cell parameters from illuminated current-voltage characteristics. The performance of the PSO was compared with the genetic algorithms (GAs) for the single and double diode models. Based on synthetic and experimental current-voltage data, it has been confirmed that the proposed method can obtain higher parameter precision with better computational efficiency than the GA method. Compared with conventional gradient-based methods, even without a good initial guess, the PSO method can obtain the parameters of solar cells as close as possible to the practical parameters only based on a broad range specified for each of the parameters.
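A sketch of the underlying task: evaluate a single-diode model and fit its parameters to an I-V curve. Series and shunt resistances are neglected here for simplicity, and a crude random search stands in for the PSO of the article; all parameter values are synthetic:

```python
import numpy as np

def diode_current(v, i_ph, i_0, n_id, t=300.0):
    """Ideal single-diode model I(V) = Iph - I0*(exp(V/(n*Vt)) - 1);
    series and shunt resistances are deliberately neglected."""
    vt = 1.380649e-23 * t / 1.602176634e-19   # thermal voltage kT/q (~25.9 mV)
    return i_ph - i_0 * (np.exp(v / (n_id * vt)) - 1.0)

# synthetic "measured" curve generated from known parameters
v = np.linspace(0.0, 0.85, 50)
i_meas = diode_current(v, 3.0, 1e-9, 1.5)

def sse(p):
    i_ph, log_i0, n_id = p
    return float(np.sum((diode_current(v, i_ph, 10.0 ** log_i0, n_id) - i_meas) ** 2))

# crude random search standing in for the PSO of the article
rng = np.random.default_rng(7)
best_p, best_c = None, np.inf
for _ in range(20000):
    cand = [rng.uniform(2.5, 3.5), rng.uniform(-11.0, -7.0), rng.uniform(1.0, 2.0)]
    c = sse(cand)
    if c < best_c:
        best_p, best_c = cand, c
```

Searching log10(I0) rather than I0 itself is the usual trick for handling the many-orders-of-magnitude range of the saturation current.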
Particle Swarm Optimization with Watts-Strogatz Model
NASA Astrophysics Data System (ADS)
Zhu, Zhuanghua
Particle swarm optimization (PSO) is a popular swarm-intelligence method that simulates animal social behaviors. Recent study shows that this type of social behavior forms a complex system; however, in most variants of PSO all individuals lie in a fixed topology, which conflicts with this natural phenomenon. Therefore, in this paper a new variant of PSO combined with the Watts-Strogatz small-world topology model, called WSPSO, is proposed. In WSPSO, the topology is changed according to Watts-Strogatz rules throughout the whole evolutionary process. Simulation results show the proposed algorithm is effective and efficient.
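The Watts-Strogatz construction referenced above can be generated in a few lines; particle neighborhoods would then be read off this edge set (parameter values are illustrative, not those of WSPSO):

```python
import random

def watts_strogatz(n=30, k=4, beta=0.2, seed=5):
    """Ring lattice of n nodes, each linked to its k nearest neighbours,
    with every edge rewired to a random target with probability beta."""
    rng = random.Random(seed)
    ring = [(i, (i + j) % n) for i in range(n) for j in range(1, k // 2 + 1)]
    edges = set()
    for (u, v) in ring:
        w = v
        if rng.random() < beta:
            w = rng.randrange(n)        # rewire this edge
        # resample on self-loops or duplicate edges so |E| stays n*k/2
        while w == u or (u, w) in edges or (w, u) in edges:
            w = rng.randrange(n)
        edges.add((u, w))
    return edges

g = watts_strogatz()
```

Re-running the rewiring during the run, as WSPSO does per its abstract, amounts to regenerating or perturbing this edge set between iterations.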
NASA Astrophysics Data System (ADS)
Yu, Tzu-Yang
2009-03-01
In the distant detection of debonding in glass fiber reinforced polymer (GFRP)-retrofitted concrete systems using radar NDE techniques, revealing the presence of debonding in reconstructed images is essential to the success of the techniques. An optimization scheme based on mathematical morphology is proposed for determining the optimal measurement and processing parameters in a distant radar NDE technique for debonding detection. Inverse synthetic aperture radar (ISAR) and backprojection algorithms are applied in the technique. Measurement (incident frequency and angle) and processing (frequency bandwidth and angular range) parameters are defined in this work. Performance of the optimization scheme is validated by laboratory ISAR measurements on GFRP-retrofitted concrete cylinders using radar signals in 8-18 GHz. From the results it is shown that better detection can be achieved by optimized measurements and processing.
NASA Technical Reports Server (NTRS)
Wang, Nanbor; Parameswaran, Kirthika; Kircher, Michael; Schmidt, Douglas
2003-01-01
Although existing CORBA specifications, such as Real-time CORBA and CORBA Messaging, address many end-to-end quality-of-service (QoS) properties, they do not define strategies for configuring these properties into applications flexibly, transparently, and adaptively. Therefore, application developers must make these configuration decisions manually and explicitly, which is tedious, error-prone, and often sub-optimal. Although the recently adopted CORBA Component Model (CCM) does define a standard configuration framework for packaging and deploying software components, conventional CCM implementations focus on functionality rather than adaptive quality-of-service, which makes them unsuitable for next-generation applications with demanding QoS requirements. This paper presents three contributions to the study of middleware for QoS-enabled component-based applications. It outlines reflective middleware techniques designed to adaptively (1) select optimal communication mechanisms, (2) manage QoS properties of CORBA components in their containers, and (3) (re)configure selected component executors dynamically. Based on our ongoing research on CORBA and the CCM, we believe the application of reflective techniques to component middleware will provide a dynamically adaptive and (re)configurable framework for COTS software that is well-suited for the QoS demands of next-generation applications.
NASA Technical Reports Server (NTRS)
Wang, Nanbor; Kircher, Michael; Schmidt, Douglas C.
2000-01-01
Although existing CORBA specifications, such as Real-time CORBA and CORBA Messaging, address many end-to-end quality-of-service (QoS) properties, they do not define strategies for configuring these properties into applications flexibly, transparently, and adaptively. Therefore, application developers must make these configuration decisions manually and explicitly, which is tedious, error-prone, and often sub-optimal. Although the recently adopted CORBA Component Model (CCM) does define a standard configuration frame-work for packaging and deploying software components, conventional CCM implementations focus on functionality rather than adaptive quality-of service, which makes them unsuitable for next-generation applications with demanding QoS requirements. This paper presents three contributions to the study of middleware for QoS-enabled component-based applications. It outlines reflective middleware techniques designed to adaptively: (1) select optimal communication mechanisms, (2) man- age QoS properties of CORBA components in their containers, and (3) (re)configure selected component executors dynamically. Based on our ongoing research on CORBA and the CCM, we believe the application of reflective techniques to component middleware will provide a dynamically adaptive and (re)configurable framework for COTS software that is well-suited for the QoS demands of next-generation applications.
Optimization of MKID noise performance via readout technique for astronomical applications
NASA Astrophysics Data System (ADS)
Czakon, Nicole G.; Schlaerth, James A.; Day, Peter K.; Downes, Thomas P.; Duan, Ran P.; Gao, Jiansong; Glenn, Jason; Golwala, Sunil R.; Hollister, Matt I.; LeDuc, Henry G.; Mazin, Benjamin A.; Maloney, Philip R.; Noroozian, Omid; Nguyen, Hien T.; Sayers, Jack; Siegel, Seth; Vaillancourt, John E.; Vayonakis, Anastasios; Wilson, Philip R.; Zmuidzinas, Jonas
2010-07-01
Detectors employing superconducting microwave kinetic inductance detectors (MKIDs) can be read out by measuring changes in either the resonator frequency or dissipation. We will discuss the pros and cons of both methods, in particular, the readout method strategies being explored for the Multiwavelength Sub/millimeter Inductance Camera (MUSIC) to be commissioned at the CSO in 2010. As predicted theoretically and observed experimentally, the frequency responsivity is larger than the dissipation responsivity, by a factor of 2-4 under typical conditions. In the absence of any other noise contributions, it should be easier to overcome amplifier noise by simply using frequency readout. The resonators, however, exhibit excess frequency noise which has been ascribed to a surface distribution of two-level fluctuators sensitive to specific device geometries and fabrication techniques. Impressive dark noise performance has been achieved using modified resonator geometries employing interdigitated capacitors (IDCs). To date, our noise measurement and modeling efforts have assumed an on-resonance readout, with the carrier power set well below the nonlinear regime. Several experimental indicators suggested to us that the optimal readout technique may in fact require a higher readout power, with the carrier tuned somewhat off resonance, and that a careful systematic study of the optimal readout conditions was needed. We will present the results of such a study, and discuss the optimum readout conditions as well as the performance that can be achieved relative to BLIP.
NASA Astrophysics Data System (ADS)
Ito, Fuminori
2016-09-01
In this study, we report the optimization of a solvent evaporation technique for preparing monodisperse poly-(lactide-co-glycolide) (PLGA) nanospheres, from a mixture of solvents composed of ethanol and PVA solution. Various experimental conditions were investigated in order to control the particle size and size distribution of the nanospheres. In addition, nanospheres containing rifampicin (RFP, an antituberculosis drug) were prepared using PLGA of various molecular weights, to study the effects of RFP as a model hydrophobic drug. The results showed that a higher micro-homogenizer stirring rate facilitated the preparation of monodisperse PLGA nanospheres with a low coefficient of variation (<20%), with sizes below 200 nm. Increasing the PLGA concentration from 0.1 to 0.5 g resulted in an increase in the size of the obtained nanospheres from 130 to 174 nm. The molecular weight of PLGA had little effect on the particle sizes and particle size distributions of the nanospheres. However, the drug loading efficiencies of the obtained RFP/PLGA nanospheres decreased when the molecular weight of PLGA was increased. Based on these experiments, an optimized technique was established for the preparation of monodisperse PLGA nanospheres, using the method developed by the authors.
Reducing the impact of a desalination plant using stochastic modeling and optimization techniques
NASA Astrophysics Data System (ADS)
Alcolea, Andres; Renard, Philippe; Mariethoz, Gregoire; Bertone, François
2009-02-01
Water is critical for economic growth in coastal areas. In this context, desalination has become an increasingly important technology over the last five decades. It often has environmental side effects, especially when the input water is pumped directly from the sea via intake pipelines. However, it is generally more efficient and cheaper to desalt brackish groundwater from beach wells than to desalt seawater. Natural attenuation is also gained and hazards due to anthropogenic pollution of seawater are reduced. In order to minimize allocation and operational costs and impacts on groundwater resources, an optimum pumping network is required. Optimization techniques are often applied to this end. Because of aquifer heterogeneity, designing the optimum pumping network demands reliable characterizations of aquifer parameters. An optimum pumping network was designed, using stochastic inverse modeling together with optimization techniques, for a coastal aquifer in Oman, where a desalination plant currently pumps brackish groundwater at a rate of 1200 m³/h for a freshwater production of 504 m³/h (insufficient to satisfy the growing demand in the area). A Monte Carlo analysis of 200 simulations of transmissivity and storage coefficient fields, conditioned to the responses to tidal fluctuation and three long-term pumping tests, was performed. These simulations are physically plausible and fit the available data well. Simulated transmissivity fields are used to design the optimum pumping configuration required to increase the current pumping rate to 9000 m³/h, for a freshwater production of 3346 m³/h (more than six times larger than the existing one). For this task, new pumping wells need to be sited and their pumping rates defined. These unknowns are determined by a genetic algorithm that minimizes a function accounting for: (1) drilling, operational and maintenance costs, (2) target discharge and minimum drawdown (i.e., minimum aquifer
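The genetic-algorithm step described above can be sketched in a few lines. The cost function below is a stand-in (a fixed per-well drilling cost plus a penalty for missing the target discharge); the study's actual objective also covers operational and maintenance costs and drawdown constraints, which are omitted here.

```python
import random

random.seed(42)

def cost(rates, target=9000.0):
    # Hypothetical objective: a fixed drilling cost per active well
    # plus a penalty for missing the target total discharge.
    drilling = sum(100.0 for r in rates if r > 0)
    shortfall = abs(target - sum(rates))
    return drilling + shortfall

def evolve(n_wells=12, pop_size=30, generations=80, max_rate=1000.0):
    # Standard GA loop: truncation selection, one-point crossover,
    # uniform mutation.
    pop = [[random.uniform(0, max_rate) for _ in range(n_wells)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        parents = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_wells)   # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.3:            # mutation
                i = random.randrange(n_wells)
                child[i] = random.uniform(0, max_rate)
            children.append(child)
        pop = parents + children
    return min(pop, key=cost)

best = evolve()
```

The per-well rate bound, population size, and operator probabilities are illustrative choices, not values from the study.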
Sayara, Tahseen; Sarrà, Montserrat; Sánchez, Antoni
2010-06-01
The objective of this study was the application of the experimental design technique to optimize the conditions for the bioremediation of contaminated soil by means of composting. A low-cost material such as compost from the Organic Fraction of Municipal Solid Waste as amendment and pyrene as a model pollutant were used. The effect of three factors was considered: pollutant concentration (0.1-2 g/kg), soil:compost mixing ratio (1:0.5-1:2 w/w) and compost stability measured as respiration index (0.78, 2.69 and 4.52 mg O2 g(-1) Organic Matter h(-1)). Stable compost permitted almost complete degradation of pyrene in a short time (10 days). Results indicated that compost stability is a key parameter to optimize PAHs biodegradation. A factor analysis indicated that the optimal conditions for bioremediation after 10, 20 and 30 days of process were (1.4, 0.78, 1:1.4), (1.4, 2.18, 1:1.3) and (1.3, 2.18, 1:1.3) for concentration (g/kg), compost stability (mg O2 g(-1) Organic Matter h(-1)) and soil:compost mixing ratio, respectively.
Hernandez, Victor; Arenas, Meritxell; Müller, Katrin; Gomez, David; Bonet, Marta
2013-01-01
To assess the advantages of an optimized posterior axillary (AX) boost technique for the irradiation of supraclavicular (SC) and AX lymph nodes. Five techniques for the treatment of SC and levels I, II, and III AX lymph nodes were evaluated for 10 patients selected at random: a direct anterior field (AP); an anterior to posterior parallel pair (AP-PA); an anterior field with a posterior axillary boost (PAB); an anterior field with an anterior axillary boost (AAB); and an optimized PAB technique (OptPAB). The target coverage, hot spots, irradiated volume, and dose to organs at risk were evaluated and a statistical comparison was performed. The AP technique delivered insufficient dose to the deeper AX nodes. The AP-PA technique produced larger irradiated volumes and higher mean lung doses than the other techniques. The PAB and AAB techniques produced excessive hot spots in most cases. The OptPAB technique produced moderate hot spots while maintaining a similar planning target volume (PTV) coverage, irradiated volume, and dose to organs at risk. This optimized technique combines the advantages of the PAB and AP-PA techniques, with moderate hot spots, sufficient target coverage, and adequate sparing of normal tissues. The presented technique is simple, fast, and easy to implement in routine clinical practice and is superior to the techniques historically used for the treatment of SC and AX lymph nodes.
Proposal of Evolutionary Simplex Method for Global Optimization Problem
NASA Astrophysics Data System (ADS)
Shimizu, Yoshiaki
To make agile decisions in a rational manner, the role of optimization engineering has drawn increasing attention under diversified customer demand. From this point of view, in this paper we propose a new evolutionary method serving as an optimization technique in the paradigm of optimization engineering. The developed method has prospects for globally solving the various complicated problems appearing in real-world applications. It evolves from the conventional Nelder and Mead's Simplex method by virtue of ideas borrowed from recent meta-heuristic methods such as PSO. After describing an algorithm to handle linear inequality constraints effectively, we validate the effectiveness of the proposed method through comparison with other methods using several benchmark problems.
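The Nelder and Mead Simplex method that this proposal builds on can be sketched as follows. The PSO-style evolutionary extensions and the constraint-handling algorithm are not detailed in the abstract, so this shows only the classical baseline (reflection, expansion, contraction, shrink).

```python
def nelder_mead(f, x0, step=0.5, iters=200):
    # Minimal Nelder-Mead simplex: reflection, expansion,
    # contraction, and shrink (no constraint handling).
    n = len(x0)
    simplex = [list(x0)]
    for i in range(n):
        v = list(x0)
        v[i] += step
        simplex.append(v)
    for _ in range(iters):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        centroid = [sum(p[i] for p in simplex[:-1]) / n for i in range(n)]
        refl = [2 * centroid[i] - worst[i] for i in range(n)]
        if f(refl) < f(best):
            # Expansion: push further along the reflection direction.
            exp = [3 * centroid[i] - 2 * worst[i] for i in range(n)]
            simplex[-1] = exp if f(exp) < f(refl) else refl
        elif f(refl) < f(worst):
            simplex[-1] = refl
        else:
            contr = [0.5 * (centroid[i] + worst[i]) for i in range(n)]
            if f(contr) < f(worst):
                simplex[-1] = contr
            else:  # shrink all vertices toward the best one
                simplex = [best] + [[0.5 * (best[i] + p[i]) for i in range(n)]
                                    for p in simplex[1:]]
    return min(simplex, key=f)

sphere = lambda x: sum(xi * xi for xi in x)
xmin = nelder_mead(sphere, [3.0, -2.0])
```

The sphere test function is illustrative; the paper's benchmark problems are not specified in the abstract.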
Si, Lei; Wang, Zhongbin; Yang, Yinwei
2014-01-01
In order to efficiently and accurately adjust the shearer traction speed, a novel approach based on Takagi-Sugeno (T-S) cloud inference network (CIN) and improved particle swarm optimization (IPSO) is proposed. The T-S CIN is built through the combination of the cloud model and the T-S fuzzy neural network. Moreover, the IPSO algorithm employs a parameter automation adjustment strategy and velocity resetting to significantly improve the performance of the basic PSO algorithm in global search and fine-tuning of the solutions, and the flowchart of the proposed approach is designed. Furthermore, some simulation examples are carried out and comparison results indicate that the proposed method is feasible, efficient, and outperforms the others. Finally, an industrial application example at a coal mining face is demonstrated to illustrate the effectiveness of the proposed system. PMID:25506358
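A minimal sketch of PSO with the two improvements the abstract names is shown below: parameter automation (here, a linearly decreasing inertia weight) and velocity resetting when a particle's velocity collapses. The exact IPSO schedule and its coupling to the T-S CIN are not given in the abstract, so these details are assumptions.

```python
import random

random.seed(1)

def pso(f, dim=2, n=20, iters=100, lo=-5.0, hi=5.0):
    # PSO with a linearly decreasing inertia weight (a common
    # "parameter automation" strategy) and velocity resetting
    # when a particle's velocity collapses to near zero.
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)[:]
    for t in range(iters):
        w = 0.9 - 0.5 * t / iters          # inertia: 0.9 -> 0.4
        for i in range(n):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + 2.0 * r1 * (pbest[i][d] - pos[i][d])
                             + 2.0 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            if max(abs(v) for v in vel[i]) < 1e-8:  # velocity resetting
                vel[i] = [random.uniform(-1, 1) for _ in range(dim)]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pos[i]) < f(gbest):
                    gbest = pos[i][:]
    return gbest

sphere = lambda x: sum(xi * xi for xi in x)
best = pso(sphere)
```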
Si, Lei; Wang, Zhongbin; Liu, Xinhua; Yang, Yinwei; Zhang, Lin
2014-01-01
In order to efficiently and accurately adjust the shearer traction speed, a novel approach based on Takagi-Sugeno (T-S) cloud inference network (CIN) and improved particle swarm optimization (IPSO) is proposed. The T-S CIN is built through the combination of the cloud model and the T-S fuzzy neural network. Moreover, the IPSO algorithm employs a parameter automation adjustment strategy and velocity resetting to significantly improve the performance of the basic PSO algorithm in global search and fine-tuning of the solutions, and the flowchart of the proposed approach is designed. Furthermore, some simulation examples are carried out and comparison results indicate that the proposed method is feasible, efficient, and outperforms the others. Finally, an industrial application example at a coal mining face is demonstrated to illustrate the effectiveness of the proposed system.
NASA Astrophysics Data System (ADS)
Sabol, John M.; Wheeldon, Samuel J.; Jabri, Kadri N.
2006-03-01
With growing clinical acceptance of dual-energy chest radiography, there is increased interest in the application of dual-energy techniques to other clinical areas. This paper describes the creation and experimental validation of a poly-energetic signal-propagation model for technique optimization of new dual-energy clinical applications. The model is verified using phantom experiments simulating typical abdominal radiographic applications such as Intravenous Urography (IVU) and the detection of pelvic and sacral bone lesions or kidney stones in the presence of bowel gas. The model is composed of a spectral signal propagation component and an image-processing component. The spectral propagation component accepts detector specifications, X-ray spectra, phantom and imaging geometry as inputs, and outputs the detected signal and estimated noise. The image-processing module performs dual-energy logarithmic subtraction and returns figures-of-merit such as contrast and contrast-to-noise ratio (CNR), which are evaluated in conjunction with Monte Carlo calculations of dose. Phantoms assembled from acrylic, aluminum, and iodinated contrast-agent filled tubes were imaged using a range of kVp's and dose levels. Simulated and experimental results were compared in terms of dose, clinical suitability, and system limitations in order to yield technique recommendations that optimize one or more figures-of-merit. The model accurately describes phantom images obtained in a low scatter environment. For the visualization of iodinated vessels in the abdomen and the detection of pelvic bone lesions, both simulated and experimental results indicate that dual-energy techniques recommended by the model yield significant improvements in CNR without significant increases in patient dose as compared to conventional techniques. For example, the CNR of iodinated vessels can be doubled using two-thirds of the dose of a standard exam. Alternatively, in addition to a standard dose image, the clinician can
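The dual-energy logarithmic subtraction step the model performs can be illustrated directly. The weight w and the toy pixel intensities below are illustrative, not values from the study.

```python
import math

def log_subtract(high, low, w):
    # Weighted logarithmic subtraction of co-registered high/low-kVp
    # transmission images (lists of rows). The weight w is chosen to
    # cancel one material (e.g. soft tissue or bone) in the result.
    return [[math.log(h) - w * math.log(l) for h, l in zip(hr, lr)]
            for hr, lr in zip(high, low)]

# Toy 1x2 "images": a background pixel and an iodine-bearing pixel.
hi = [[0.80, 0.40]]
lo = [[0.70, 0.20]]
subtracted = log_subtract(hi, lo, w=0.5)
```

In practice w is calibrated per kVp pair; here it is an arbitrary example value.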
Optimization of the polarization remote-sensing techniques of the ocean
NASA Astrophysics Data System (ADS)
Krotkov, Nickolay A.; Kondranin, Timofei V.; Vasilkov, Alexander P.
1992-12-01
A numerical code has been developed to calculate Stokes parameters of the visible solar radiation, scattered in the atmosphere-ocean system. Mathematical modeling is used to examine spectral and angular (azimuth and zenith angle) variations of the degree of polarization at sea level and at different heights in the atmosphere above the sea surface. On the basis of the developed computer code, the efficiency of polarization measurements for different optical passive remote sensing techniques of the ocean has been investigated. For passive spectral measurements of water bio-productivity (chlorophyll-a, dissolved organic matter, concentration of suspended particles), the polarizer can improve the signal-to-background ratio. The magnitude of this effect and the optimum direction of the polarizer depend upon height, viewing direction, and solar zenith angle. Within the framework of the polarization remote sensing technique, the influence of the observation height and viewing direction on the results of water turbidity measurements is investigated. Optimal viewing directions for such passive polarization remote sensing techniques are discussed.
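The degree of polarization that such a code examines follows directly from the Stokes parameters; a minimal sketch using the standard definitions (not the paper's numerical code):

```python
import math

def degree_of_polarization(I, Q, U, V=0.0):
    # Degree of polarization from the Stokes parameters (I, Q, U, V):
    # DoP = sqrt(Q^2 + U^2 + V^2) / I.
    return math.sqrt(Q * Q + U * U + V * V) / I

def aop_degrees(Q, U):
    # Angle of the linear polarization plane, in degrees:
    # AoP = 0.5 * atan2(U, Q).
    return 0.5 * math.degrees(math.atan2(U, Q))

p = degree_of_polarization(1.0, 0.3, 0.4)
```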
Good techniques optimize control of oil-based mud and solids
Phelps, J.; Hoopingarner, J.
1989-02-13
Effective techniques have been developed from work on dozens of North Sea wells to minimize the amount of oil-based mud discharged to the sea while maintaining acceptable levels of solids. Pressure to reduce pollution during the course of drilling prompted the development of these techniques. They involve personnel and the optimization of mud systems and procedures. Case histories demonstrate that regulations may be met with economical techniques using existing technology. The benefits of low solids content are widely known, and are a key part of any successful mud program. Good solids control should result in lower mud costs and better drilling performance. Operators have specified high-performance shakers to accomplish this and have revised their mud programs with lower and lower allowable drilled solids percentages. This will pay off in certain areas. But with the U.K. Department of Energy regulations that went into effect Jan. 1, 1989, requiring cuttings oil discharge content (CODC) to be less than 150 g of oil/kg of dry solids discharged, oil-loss control has a higher profile in the U.K. sector of the North Sea.
Karthivashan, Govindarajan; Masarudin, Mas Jaffri; Kura, Aminu Umar; Abas, Faridah; Fakurazi, Sharida
2016-01-01
This study involves adaptation of bulk or sequential technique to load multiple flavonoids in a single phytosome, which can be termed as “flavonosome”. Three widely established and therapeutically valuable flavonoids, such as quercetin (Q), kaempferol (K), and apigenin (A), were quantified in the ethyl acetate fraction of Moringa oleifera leaves extract and were commercially obtained and incorporated in a single flavonosome (QKA–phosphatidylcholine) through four different methods of synthesis – bulk (M1) and serialized (M2) co-sonication and bulk (M3) and sequential (M4) co-loading. The study also established an optimal formulation method based on screening the synthesized flavonosomes with respect to their size, charge, polydispersity index, morphology, drug–carrier interaction, antioxidant potential through in vitro 1,1-diphenyl-2-picrylhydrazyl kinetics, and cytotoxicity evaluation against human hepatoma cell line (HepaRG). Furthermore, entrapment and loading efficiency of flavonoids in the optimal flavonosome have been identified. Among the four synthesis methods, sequential loading technique has been optimized as the best method for the synthesis of QKA–phosphatidylcholine flavonosome, which revealed an average diameter of 375.93±33.61 nm, with a zeta potential of −39.07±3.55 mV, and the entrapment efficiency was >98% for all the flavonoids, whereas the drug-loading capacity of Q, K, and A was 31.63%±0.17%, 34.51%±2.07%, and 31.79%±0.01%, respectively. The in vitro 1,1-diphenyl-2-picrylhydrazyl kinetics of the flavonoids indirectly depicts the release kinetic behavior of the flavonoids from the carrier. The QKA-loaded flavonosome had no indication of toxicity toward human hepatoma cell line as shown by the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide result, wherein even at the higher concentration of 200 µg/mL, the flavonosomes exert >85% of cell viability. These results suggest that sequential loading technique may be a
Recursive Ant Colony Global Optimization: a new technique for the inversion of geophysical data
NASA Astrophysics Data System (ADS)
Gupta, D. K.; Gupta, J. P.; Arora, Y.; Singh, U. K.
2011-12-01
We present a new method called the Recursive Ant Colony Global Optimization (RACO) technique, a modified form of general ACO, which can be used to find the best solutions to inversion problems in geophysics. RACO simulates the social behaviour of ants to find the best path between the nest and the food source. A new term, depth, has been introduced, which controls the extent of recursion. A selective number of cities get qualified for the successive depth. The results of one depth are used to construct the models for the next depth, and the range of values for each of the parameters is reduced without any change to the number of models. The three additional steps performed after each depth are pheromone tracking, pheromone updating, and city selection. One of the advantages of RACO over ACO is that if a problem has multiple solutions, then pheromone accumulation will take place at more than one city, thereby leading to the formation of multiple nested ACO loops within the ACO loop of the previous depth. Also, while the convergence of ACO is almost linear, RACO shows exponential convergence and is hence faster than ACO. RACO proves better than some other global optimization techniques, as it does not require any initial values to be assigned to the model parameters. The method has been tested on some mathematical functions, synthetic self-potential (SP) and synthetic gravity data. The obtained results reveal the efficiency and practicability of the method. The method is found to be efficient enough to solve the problems of SP and gravity anomalies due to a horizontal cylinder, a sphere, an inclined sheet and multiple idealized bodies buried inside the earth. These anomalies with and without noise were inverted using the RACO algorithm. The obtained results were compared with those obtained from the conventional methods and it was found that the RACO results are more accurate. Finally, this optimization technique was applied to real field data collected over the Surda
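A much-simplified 1-D sketch of the recursive range-narrowing idea (cities, pheromone deposit and evaporation, recursion on a reduced range) might look like the following. The deposit rule, evaporation rate, and toy misfit function are illustrative assumptions, not the paper's formulation.

```python
import random

random.seed(7)

def raco(f, lo, hi, n_cities=20, ants=100, depth=4):
    # Simplified recursive ACO for one inversion parameter: each
    # depth discretizes the current range into "cities", ants pick
    # cities with probability proportional to pheromone, low-misfit
    # cities receive more pheromone, and the next depth recurses on
    # a narrowed range around the best model found so far.
    gbest, gval = None, float("inf")
    for _ in range(depth):
        width = (hi - lo) / n_cities
        cities = [lo + (i + 0.5) * width for i in range(n_cities)]
        tau = [1.0] * n_cities                    # pheromone trails
        for _ in range(ants):
            r, acc, idx = random.uniform(0, sum(tau)), 0.0, 0
            for i, t in enumerate(tau):
                acc += t
                if r <= acc:
                    idx = i
                    break
            misfit = f(cities[idx])
            if misfit < gval:
                gbest, gval = cities[idx], misfit
            tau[idx] += 1.0 / (1.0 + misfit)      # deposit
            tau = [0.95 * t for t in tau]         # evaporation
        lo, hi = gbest - width, gbest + width     # narrow the range
    return gbest

# Toy misfit with its minimum at x = 2.5
best = raco(lambda x: (x - 2.5) ** 2, 0.0, 10.0)
```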
SNEV(Prp19/PSO4) deficiency increases PUVA-induced senescence in mouse skin.
Monteforte, Rossella; Beilhack, Georg F; Grausenburger, Reinhard; Mayerhofer, Benjamin; Bittner, Reginald; Grillari-Voglauer, Regina; Sibilia, Maria; Dellago, Hanna; Tschachler, Erwin; Gruber, Florian; Grillari, Johannes
2016-03-01
Senescent cells accumulate during ageing in various tissues and contribute to organismal ageing. However, factors that are involved in the induction of senescence in vivo are still not well understood. SNEV(Prp19/PSO4) is a multifaceted protein, known to be involved in DNA damage repair and senescence, albeit only in vitro. In this study, we used heterozygous SNEV(+/-) mice (SNEV knockout results in early embryonic lethality) and wild-type littermate controls as a model to elucidate the role of SNEV(Prp19/PSO4) in DNA damage repair and senescence in vivo. We performed PUVA treatment, consisting of 8-methoxypsoralen in combination with UVA, on mouse skin as a model system for potently inducing DNA damage, cellular senescence, and premature skin ageing. We show that SNEV(Prp19/PSO4) expression decreases during organismal ageing, while p16, a marker of ageing in vivo, increases. In response to PUVA treatment, we observed in the skin of both SNEV(+/-) and wild-type mice an increase in γ-H2AX levels, a DNA damage marker. In old SNEV(+/-) mice, this increase is accompanied by reduced epidermis thickening and an increase in p16 and collagenase levels. Thus, the DNA damage response occurring in the mouse skin upon PUVA treatment is dependent on SNEV(Prp19/PSO4) expression, and lower levels of SNEV(Prp19/PSO4), as in old SNEV(+/-) mice, result in increased cellular senescence and accelerated premature skin ageing.
NASA Astrophysics Data System (ADS)
Chao, Ming; Wei, Jie; Li, Tianfang; Yuan, Yading; Rosenzweig, Kenneth E.; Lo, Yeh-Chi
2016-04-01
We present a study of extracting respiratory signals from cone beam computed tomography (CBCT) projections within the framework of the Amsterdam Shroud (AS) technique. Acquired prior to the radiotherapy treatment, CBCT projections were preprocessed for contrast enhancement by converting the original intensity images to attenuation images with which the AS image was created. An adaptive robust z-normalization filtering was applied to further augment the weak oscillating structures locally. From the enhanced AS image, the respiratory signal was extracted using a two-step optimization approach to effectively reveal the large-scale regularity of the breathing signals. CBCT projection images from five patients acquired with the Varian Onboard Imager on the Clinac iX System Linear Accelerator (Varian Medical Systems, Palo Alto, CA) were employed to assess the proposed technique. Stable breathing signals can be reliably extracted using the proposed algorithm. Reference waveforms obtained using an air bellows belt (Philips Medical Systems, Cleveland, OH) were exported and compared to those with the AS based signals. The average errors for the enrolled patients between the estimated breaths per minute (bpm) and the reference waveform bpm can be as low as -0.07 with a standard deviation of 1.58. The new algorithm outperformed the original AS technique for all patients by 8.5% to 30%. The impact of gantry rotation on the breathing signal was assessed with data acquired with a Quasar phantom (Modus Medical Devices Inc., London, Canada) and found to be minimal on the signal frequency. The new technique developed in this work will provide a practical solution to rendering markerless breathing signals using the CBCT projections for thoracic and abdominal patients.
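The robust z-normalization step can be illustrated on a 1-D signal. The study applies an adaptive, windowed variant to the AS image, so this shows only the core median/MAD operation (assuming a nonzero MAD).

```python
import statistics

def robust_z(signal):
    # Robust z-normalization: centre on the median and scale by the
    # median absolute deviation (MAD), so large outliers do not
    # dominate the normalization (a nonzero MAD is assumed).
    med = statistics.median(signal)
    mad = statistics.median(abs(x - med) for x in signal)
    return [(x - med) / mad for x in signal]

# Toy signal with one outlier: the outlier barely affects the scale.
z = robust_z([1.0, 2.0, 3.0, 4.0, 100.0])
```

With a conventional mean/standard-deviation z-score, the single outlier would compress all the other samples toward zero; the median/MAD version leaves them on a unit scale.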
Chao, Ming; Wei, Jie; Li, Tianfang; Yuan, Yading; Rosenzweig, Kenneth E; Lo, Yeh-Chi
2017-01-01
We present a study of extracting respiratory signals from cone beam computed tomography (CBCT) projections within the framework of the Amsterdam Shroud (AS) technique. Acquired prior to the radiotherapy treatment, CBCT projections were preprocessed for contrast enhancement by converting the original intensity images to attenuation images with which the AS image was created. An adaptive robust z-normalization filtering was applied to further augment the weak oscillating structures locally. From the enhanced AS image, the respiratory signal was extracted using a two-step optimization approach to effectively reveal the large-scale regularity of the breathing signals. CBCT projection images from five patients acquired with the Varian Onboard Imager on the Clinac iX System Linear Accelerator (Varian Medical Systems, Palo Alto, CA) were employed to assess the proposed technique. Stable breathing signals can be reliably extracted using the proposed algorithm. Reference waveforms obtained using an air bellows belt (Philips Medical Systems, Cleveland, OH) were exported and compared to those with the AS based signals. The average errors for the enrolled patients between the estimated breaths per minute (bpm) and the reference waveform bpm can be as low as −0.07 with a standard deviation of 1.58. The new algorithm outperformed the original AS technique for all patients by 8.5% to 30%. The impact of gantry rotation on the breathing signal was assessed with data acquired with a Quasar phantom (Modus Medical Devices Inc., London, Canada) and found to be minimal on the signal frequency. The new technique developed in this work will provide a practical solution to rendering markerless breathing signals using the CBCT projections for thoracic and abdominal patients. PMID:27008349
Harding, D.C.; Eldred, M.S.; Witkowski, W.R.
1995-12-31
Type B radioactive material transport packages must meet strict Nuclear Regulatory Commission (NRC) regulations specified in 10 CFR 71. Type B containers include impact limiters, radiation or thermal shielding layers, and one or more containment vessels. In the past, each component was typically designed separately based on its driving constraint and the expertise of the designer. The components were subsequently assembled and the design modified iteratively until all of the design criteria were met. This approach neglects the fact that components may serve secondary purposes as well as primary ones. For example, an impact limiter's primary purpose is to act as an energy absorber and protect the contents of the package, but it can also act as a heat dissipater or insulator. Designing the component to maximize its performance with respect to both objectives can be accomplished using numerical optimization techniques.
NASA Astrophysics Data System (ADS)
Felgaer, Pablo; Britos, Paola; García-Martínez, Ramón
A Bayesian network is a directed acyclic graph in which each node represents a variable and each arc a probabilistic dependency; such networks provide a compact form for representing knowledge and flexible methods of reasoning. Obtaining a Bayesian network from data is a learning process that is divided into two steps: structural learning and parametric learning. In this paper we define an automatic learning method that optimizes Bayesian networks applied to classification, using a hybrid learning method that combines the advantages of the induction techniques of decision trees (TDIDT-C4.5) with those of Bayesian networks. The resulting method is applied to prediction in the health domain.
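The parametric-learning step, estimating a node's conditional probability table from data, can be sketched as follows. The variable names and the Laplace smoothing constant are illustrative, and the structural-learning and C4.5 hybridization steps are omitted.

```python
from collections import Counter

def learn_cpt(samples, child, parents, smoothing=1.0):
    # Parametric learning for one node of a Bayesian network:
    # estimate P(child | parents) by frequency counting with
    # Laplace smoothing. Returns {(parent_values, child_value): p}.
    joint, marg = Counter(), Counter()
    values = sorted({s[child] for s in samples})
    for s in samples:
        ctx = tuple(s[p] for p in parents)
        joint[(ctx, s[child])] += 1
        marg[ctx] += 1
    return {
        (ctx, v): (joint[(ctx, v)] + smoothing)
                  / (marg[ctx] + smoothing * len(values))
        for ctx in marg for v in values
    }

# Hypothetical binary data: "wet" depends on "rain".
data = [{"rain": 1, "wet": 1}, {"rain": 1, "wet": 1},
        {"rain": 1, "wet": 0}, {"rain": 0, "wet": 0}]
cpt = learn_cpt(data, "wet", ["rain"])
```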
Optimization of a wood dryer kiln using the mixed integer programming technique: A case study
Gustafsson, S.I.
1999-07-01
When wood is to be utilized as a raw material for furniture, buildings, etc., it must be dried from approximately 100% to 6% moisture content. This is achieved at least partly in a drying kiln. Heat for this purpose is provided by electrical means, or by steam from boilers fired with wood chips or oil. By making a close examination of monitored values from an actual drying kiln, it has been possible to optimize the use of steam and electricity using the so-called mixed integer programming technique. Owing to the operating schedule for the drying kiln, it has been necessary to divide the drying process into very short time intervals, i.e., a number of minutes. Since a drying cycle takes about two or three weeks, this presents a considerable mathematical problem that has to be solved.
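The flavor of such a kiln-scheduling MIP can be conveyed with a toy model: each short interval meets its heat demand with either steam (cheap per unit, but with a boiler startup cost on switch-on) or electricity (no startup cost). The costs and demands below are invented, and exhaustive search over the binary on/off vector stands in for a real branch-and-bound MIP solver.

```python
from itertools import product

def schedule(demand, steam_cost=2.0, elec_cost=5.0, startup=6.0):
    # Toy MIP: for each interval, a binary variable decides whether
    # the steam boiler is on. Steam is cheaper per heat unit, but
    # every switch-on incurs a startup cost; electricity has no
    # startup cost. Exhaustive enumeration finds the optimum.
    best_plan, best_cost = None, float("inf")
    for plan in product([0, 1], repeat=len(demand)):
        cost, prev = 0.0, 0
        for on, d in zip(plan, demand):
            cost += d * (steam_cost if on else elec_cost)
            if on and not prev:
                cost += startup
            prev = on
        if cost < best_cost:
            best_plan, best_cost = plan, cost
    return best_plan, best_cost

plan, total = schedule([1.0, 4.0, 4.0, 1.0, 0.5])
```

With these numbers the optimum keeps the boiler on throughout, paying the startup cost once; a real formulation would be handed to an MIP solver rather than enumerated.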
Engine Yaw Augmentation for Hybrid-Wing-Body Aircraft via Optimal Control Allocation Techniques
NASA Technical Reports Server (NTRS)
Taylor, Brian R.; Yoo, Seung-Yeun
2011-01-01
Asymmetric engine thrust was implemented in a hybrid-wing-body non-linear simulation to reduce the amount of aerodynamic surface deflection required for yaw stability and control. Hybrid-wing-body aircraft are especially susceptible to yaw surface deflection due to their decreased bare airframe yaw stability resulting from the lack of a large vertical tail aft of the center of gravity. Reduced surface deflection, especially for trim during cruise flight, could reduce the fuel consumption of future aircraft. Designed as an add-on, optimal control allocation techniques were used to create a control law that tracks total thrust and yaw moment commands with an emphasis on not degrading the baseline system. Implementation of engine yaw augmentation is shown and feasibility is demonstrated in simulation with a potential drag reduction of 2 to 4 percent. Future flight tests are planned to demonstrate feasibility in a flight environment.
Reddy, Raghu M; Guntupalli, Kalpalatha K
2007-01-01
Chronic obstructive pulmonary disease (COPD) is a major global healthcare problem. Studies vary widely in the reported frequency of mechanical ventilation in acute exacerbations of COPD. Invasive intubation and mechanical ventilation may be associated with significant morbidity and mortality. A good understanding of the airway pathophysiology and lung mechanics in COPD is necessary to appropriately manage acute exacerbations and respiratory failure. The basic pathophysiology in COPD exacerbation is the critical expiratory airflow limitation with consequent dynamic hyperinflation. These changes lead to further derangement in ventilatory mechanics, muscle function and gas exchange which may result in respiratory failure. This review discusses the altered respiratory mechanics in COPD, ways to detect these changes in a ventilated patient and formulating ventilatory techniques to optimize management of respiratory failure due to exacerbation of COPD.
Reddy, Raghu M; Guntupalli, Kalpalatha K
2007-01-01
Chronic obstructive pulmonary disease (COPD) is a major global healthcare problem. Studies vary widely in the reported frequency of mechanical ventilation in acute exacerbations of COPD. Invasive intubation and mechanical ventilation may be associated with significant morbidity and mortality. A good understanding of the airway pathophysiology and lung mechanics in COPD is necessary to appropriately manage acute exacerbations and respiratory failure. The basic pathophysiology in COPD exacerbation is the critical expiratory airflow limitation with consequent dynamic hyperinflation. These changes lead to further derangement in ventilatory mechanics, muscle function and gas exchange which may result in respiratory failure. This review discusses the altered respiratory mechanics in COPD, ways to detect these changes in a ventilated patient and formulating ventilatory techniques to optimize management of respiratory failure due to exacerbation of COPD. PMID:18268918
Benchmarking and Optimizing Techniques for Inverting Images of DIII-D Soft X-Ray Emissions
NASA Astrophysics Data System (ADS)
Chandler, E.; Unterberg, E. A.; Shafer, M. W.; Wingen, A.
2012-10-01
A tangential 2-D soft x-ray (SXR) imaging system is installed on DIII-D to directly measure the 3-D magnetic topology at the plasma edge. This diagnostic allows the study of the plasma SXR emissivity at time resolutions >=1.0 ms and spatial resolutions ~1 cm. Extracting 3-D structure from the 2-D image requires the inversion of large ill-posed matrices - a ubiquitous problem in mathematics. The goal of this work is to reduce the memory usage and computational time of the inversion to a point where image inversions can be processed between shots. We implement the Phillips-Tikhonov and Maximum Entropy regularization techniques on a parallel GPU processor. To optimize the memory demands of computing these matrices, the effects of reducing the inversion grid size and binning images are analyzed and benchmarked. Further benchmarking includes a characterization of the final image quality (with respect to numerical and instrumentation noise).
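Phillips-Tikhonov regularization itself reduces to solving (AᵀA + λI)x = Aᵀb, where λ damps the noise amplification of an ill-posed inversion. A minimal two-unknown sketch (toy matrices, not the diagnostic's geometry):

```python
def tikhonov_2x2(A, b, lam):
    # Phillips-Tikhonov regularized least squares for two unknowns:
    # solve (A^T A + lam I) x = A^T b with an explicit 2x2 inverse.
    # Larger lam trades fidelity for stability.
    ata = [[sum(A[k][i] * A[k][j] for k in range(len(A)))
            for j in range(2)] for i in range(2)]
    atb = [sum(A[k][i] * b[k] for k in range(len(A))) for i in range(2)]
    m = [[ata[0][0] + lam, ata[0][1]],
         [ata[1][0], ata[1][1] + lam]]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [(m[1][1] * atb[0] - m[0][1] * atb[1]) / det,
            (m[0][0] * atb[1] - m[1][0] * atb[0]) / det]

# Nearly collinear rows make the unregularized normal equations
# ill-conditioned; the true solution is x = [1, 1].
A = [[1.0, 1.0], [1.0, 1.001], [2.0, 2.0]]
b = [2.0, 2.001, 4.0]
x = tikhonov_2x2(A, b, lam=1e-6)
```

The real system is far larger and solved on a GPU; this only shows the regularized normal-equation form.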
Optimal sun-alignment techniques of large solar arrays in electric propulsion spacecraft
NASA Technical Reports Server (NTRS)
Meissinger, H. F.; Dailey, C. L.; Valgora, M. E.
1982-01-01
Optimum sun-alignment of large solar arrays in electric propulsion spacecraft operating in earth orbit requires periodic roll motions around the thrust axis, synchronized with the apparent conical motion of the sun line. This oscillation is sustained effectively with the aid of gravity gradient torques while only a small share of the total torque is being contributed by the attitude control system. Tuning the system for resonance requires an appropriate choice of moment-of-inertia characteristics. To minimize atmospheric drag at low orbital altitudes the solar array is oriented parallel, or nearly parallel, to the flight direction. This can increase the thrust-to-drag ratio by as much as an order of magnitude. Coupled with optimal roll orientation, this feathering technique will permit use of electric propulsion effectively at low altitudes in support of space shuttle or space station activities and in spiral ascent missions.
Getting it Right: Optimizing the Patient and Technique for the Procedure
Stammers, Alfred H.
2009-01-01
Abstract: The methodological approach to decision making in optimizing medical care has focused on using the best available evidence with the primary endpoint being patient outcome. Through this emphasis, quality becomes the quintessential factor in determining the application of medical intervention and in directing resource allocation. When evidence is inconclusive or absent, clinical judgment becomes elevated in the decision analysis schema. The paucity of well designed controlled trials in perfusion technology has resulted in a greater reliance on clinical judgment than published information. This has created an environment where significant variability exists throughout the perfusion community. The following report will discuss several reasons for this variability, and describe techniques where the preponderance of evidence is available supporting inclusion in perfusion practice. PMID:20092089
Lucero, V.; Meale, B.M.; Purser, F.E.
1990-01-01
The analysis discussed in this paper was performed as part of the buried waste remediation efforts at the Idaho National Engineering Laboratory (INEL). The specific type of remediation discussed herein involves a thermal treatment process for converting contaminated soil and waste into a stable, chemically inert form. Models of the proposed process were developed using probabilistic risk assessment (PRA) fault tree and event tree modeling techniques. The models were used to determine the appropriateness of the conceptual design by identifying potential hazards of system operations. Additional models were developed to represent the reliability aspects of the system components. By performing various sensitivity analyses with the models, optimal design modifications are being identified to substantiate an integrated, cost-effective design representing minimal risk to the environment and/or public with maximum component reliability. 4 figs.
NASA Technical Reports Server (NTRS)
Hague, D. S.; Merz, A. W.
1975-01-01
Multivariable search techniques are applied to a particular class of airfoil optimization problems: the maximization of lift and the minimization of disturbance pressure magnitude in an inviscid nonlinear flow field. A variety of multivariable search techniques contained in an existing nonlinear optimization code, AESOP, are applied to this design problem. These techniques include elementary single-parameter perturbation methods, organized searches such as steepest-descent, quadratic, and Davidon methods, randomized procedures, and a generalized search acceleration technique. The airfoil design variables are seven in number and define perturbations to the profile of an existing NACA airfoil. The relative efficiency of the techniques is compared. It is shown that elementary one-parameter-at-a-time and random techniques compare favorably with organized searches in the class of problems considered. It is also shown that significant reductions in disturbance pressure magnitude can be made while retaining reasonable lift coefficient values at low free-stream Mach numbers.
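The "elementary one-parameter-at-a-time" search mentioned above is simple to sketch. The following is a minimal, generic illustration, not the AESOP implementation; the objective, step sizes, and seven-variable setup are placeholders: perturb each design variable in turn, keep any improving move, and shrink the step when a full sweep yields no improvement.

```python
def one_at_a_time_search(f, x0, step=0.1, shrink=0.5, iters=50):
    """Elementary one-parameter-at-a-time search: perturb each design
    variable in turn, keep any move that improves the objective, and
    shrink the step when a full sweep yields no improvement."""
    x, fx = list(x0), f(x0)
    for _ in range(iters):
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):
                trial = list(x)
                trial[i] += delta
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
                    break
        if not improved:
            step *= shrink
    return x, fx

# Toy quadratic standing in for the disturbance-pressure objective,
# with seven design variables as in the abstract.
best, val = one_at_a_time_search(lambda v: sum((vi - 1.0) ** 2 for vi in v),
                                 [0.0] * 7)
```

Each sweep costs at most two objective evaluations per variable, which helps explain why such elementary methods remain competitive with organized searches when evaluations are cheap.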
A Parallel Particle Swarm Optimizer
2003-01-01
Motivated by a computationally demanding biomechanical system identification problem, we introduce a parallel implementation of a stochastic population-based optimization algorithm that exploits concurrent computation. The parallelization of the Particle Swarm Optimization (PSO) algorithm is detailed, and its performance and characteristics are demonstrated for the biomechanical system identification problem as an example.
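A synchronous parallel PSO of the kind described can be sketched as follows. This is a hedged illustration, not the paper's code: fitness evaluations within one iteration are independent, so they are dispatched to a worker pool (a production implementation would typically distribute them over MPI processes across nodes; a thread pool is used here only for brevity).

```python
import random
from concurrent.futures import ThreadPoolExecutor

def parallel_pso(f, dim, n_particles=20, iters=40, workers=4, seed=1):
    """Synchronous parallel PSO: all fitness evaluations in an
    iteration are farmed out to a worker pool, then the personal/global
    bests and velocities are updated serially."""
    rng = random.Random(seed)
    x = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest, pval = [p[:] for p in x], [float("inf")] * n_particles
    gbest, gval = None, float("inf")
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for _ in range(iters):
            fitness = list(pool.map(f, x))        # parallel section
            for i, fi in enumerate(fitness):      # serial bookkeeping
                if fi < pval[i]:
                    pbest[i], pval[i] = x[i][:], fi
                if fi < gval:
                    gbest, gval = x[i][:], fi
            for i in range(n_particles):
                for d in range(dim):
                    r1, r2 = rng.random(), rng.random()
                    v[i][d] = (0.7 * v[i][d]
                               + 1.5 * r1 * (pbest[i][d] - x[i][d])
                               + 1.5 * r2 * (gbest[d] - x[i][d]))
                    x[i][d] += v[i][d]
    return gbest, gval

# Minimize a sphere function as a stand-in for the expensive
# biomechanical objective.
gbest, gval = parallel_pso(lambda p: sum(c * c for c in p), 3)
```

The synchronous design shown here waits for the whole swarm before updating; asynchronous variants trade that determinism for better load balancing.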
Optimization of GPS water vapor tomography technique with radiosonde and COSMIC historical data
NASA Astrophysics Data System (ADS)
Ye, Shirong; Xia, Pengfei; Cai, Changsheng
2016-09-01
The near-real-time, high-spatial-resolution distribution of atmospheric water vapor is vital in numerical weather prediction. The GPS tomography technique has proved effective for three-dimensional water vapor reconstruction. In this study, the tomography processing is optimized in several respects with the aid of radiosonde and COSMIC historical data. Firstly, regional tropospheric zenith hydrostatic delay (ZHD) models are improved so that the zenith wet delay (ZWD) can be obtained at a higher accuracy. Secondly, the regional conversion factor for converting the ZWD to the precipitable water vapor (PWV) is refined. Next, we develop a new method for dividing the tomography grid with an uneven voxel height and a varied water vapor layer top. Finally, we propose a Gaussian exponential vertical interpolation method which better reflects the vertical variation characteristics of water vapor. GPS datasets collected in Hong Kong in February 2014 are employed to evaluate the optimized tomographic method against the conventional method. The radiosonde-derived and COSMIC-derived water vapor densities are used as references to evaluate the tomographic results. Using radiosonde products as references, the test results indicate that our optimized method improves the water vapor density accuracy by 15 and 12 % relative to the conventional method below and above the height of 3.75 km, respectively. Using the COSMIC products as references, the results indicate that the water vapor density accuracy is improved by 15 and 19 % below and above 3.75 km, respectively.
NASA Astrophysics Data System (ADS)
Galanis, George; Famelis, Ioannis; Kalogeri, Christina
2014-10-01
In recent years, a highly demanding framework has been established for environmental sciences and applied mathematics as a result of the needs posed by issues of interest not only to the scientific community but to today's society in general: global warming, renewable energy resources, and natural hazards can be listed among them. The research community today follows two main directions to address these problems: the utilization of environmental observations obtained from in situ or remote sensing sources, and meteorological-oceanographic simulations based on physical-mathematical models. In particular, in trying to reach credible local forecasts, these two data sources are combined by algorithms that are essentially based on optimization processes. Conventional approaches in this framework usually neglect the topological-geometrical properties of the space of the data under study by adopting least squares methods based on classical Euclidean geometry tools. In the present work, new optimization techniques are discussed that make use of methodologies from a rapidly advancing branch of applied mathematics, Information Geometry. The latter proves that the distributions of data sets are elements of non-Euclidean structures in which the underlying geometry may differ significantly from the classical one. Geometrical entities such as Riemannian metrics, distances, curvature, and affine connections are utilized to define the optimum distributions fitting the environmental data at specific areas and to form differential systems that describe the optimization procedures. The proposed methodology is clarified by an application to wind speed forecasts on the Kefalonia island, Greece.
NASA Technical Reports Server (NTRS)
Rao, R. G. S.; Ulaby, F. T.
1977-01-01
The paper examines optimal sampling techniques for obtaining accurate spatial averages of soil moisture, at various depths and for cell sizes in the range 2.5-40 acres, with a minimum number of samples. Both simple random sampling and stratified sampling procedures are used to reach a set of recommended sample sizes for each depth and for each cell size. Major conclusions from statistical sampling test results are that (1) the number of samples required decreases with increasing depth; (2) when the total number of samples cannot be prespecified or the moisture in only a single layer is of interest, then a simple random sampling procedure should be used which is based on the observed mean and SD for data from a single field; (3) when the total number of samples can be prespecified and the objective is to measure the soil moisture profile with depth, then stratified random sampling based on optimal allocation should be used; and (4) decreasing the sensor resolution cell size leads to fairly large decreases in sample sizes with stratified sampling procedures, whereas only a moderate decrease is obtained in simple random sampling procedures.
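Conclusion (3) refers to stratified random sampling with optimal allocation. Under the classical Neyman rule, the sample size for stratum h is proportional to N_h * S_h, so larger and more variable strata receive more samples. A minimal sketch (the stratum counts and standard deviations below are hypothetical, not the paper's field data):

```python
def neyman_allocation(strata, n_total):
    """Optimal (Neyman) allocation: sample size in stratum h is
    proportional to N_h * S_h, where N_h is the stratum size and
    S_h its standard deviation."""
    weights = [N * S for N, S in strata]
    total = sum(weights)
    return [round(n_total * w / total) for w in weights]

# Hypothetical soil-moisture strata: (units N_h, std. dev. S_h) per layer.
alloc = neyman_allocation([(100, 4.0), (100, 2.0), (100, 1.0), (100, 1.0)], 40)
# → [20, 10, 5, 5]
```

Note how the most variable layer gets half the budget, which is exactly why stratified allocation outperforms simple random sampling when variances differ across layers.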
Optimization of electrospinning techniques for the realization of nanofiber plastic lasers
NASA Astrophysics Data System (ADS)
Persano, L.; Moffa, M.; Fasano, V.; Montinaro, M.; Morello, G.; Resta, V.; Spadaro, D.; Gucciardi, P. G.; Maragò, O. M.; Camposeo, A.; Pisignano, D.
2016-02-01
Electrospinning technologies for the realization of active polymeric nanomaterials can be easily up-scaled, opening perspectives to industrial exploitation, and due to their versatility they can be employed to finely tailor the size, morphology and macroscopic assembly of fibers as well as their functional properties. Light-emitting or other active polymer nanofibers, made of conjugated polymers or of blends embedding chromophores or other functional dopants, are suitable for various applications in advanced photonics and sensing technologies. In particular, their almost one-dimensional geometry and finely tunable composition make them interesting materials for developing novel lasing devices. However, electrospinning techniques rely on a large variety of parameters and possible experimental geometries, and they need to be carefully optimized in order to obtain suitable topographical and photonic properties in the resulting nanostructures. Targeted features include a smooth and uniform fiber surface, dimensional control, filament alignment, enhanced light emission, and stimulated emission. Here we present various optimization strategies for electrospinning methods that we have implemented and developed for the realization of lasing architectures based on polymer nanofibers. The geometry of the resulting nanowires leads to peculiar light scattering from spun filaments and to controllable lasing characteristics.
NASA Astrophysics Data System (ADS)
Tsai, Wen-Ping; Chang, Fi-John; Chang, Li-Chiu; Herricks, Edwin E.
2015-11-01
Flow regime is the key driver of riverine ecology. This study proposes a novel hybrid methodology based on artificial intelligence (AI) techniques for quantifying riverine ecosystem requirements and delivering suitable flow regimes that sustain river and floodplain ecology through optimized reservoir operation. The approach addresses the challenge of better fitting riverine ecosystem requirements alongside existing human demands. We first explored and characterized the relationship between flow regimes and fish communities through a hybrid artificial neural network (ANN). Then the non-dominated sorting genetic algorithm II (NSGA-II) was applied to river flow management of the Shihmen Reservoir in northern Taiwan. The ecosystem requirement took the form of maximizing fish diversity, which could be estimated by the hybrid ANN. The human requirement was to provide a higher satisfaction degree of water supply. The results demonstrated that the proposed methodology could offer a number of diversified alternative strategies for reservoir operation and improve operational strategies, producing downstream flows that meet both human and ecosystem needs. The methodology is attractive to water resources managers because the wide spread of Pareto-front (optimal) solutions allows decision makers to easily determine the best compromise in the trade-off between reservoir operational strategies for human and ecosystem needs.
NASA Astrophysics Data System (ADS)
Sue-Ann, Goh; Ponnambalam, S. G.
This paper focuses on the operational issues of a Two-echelon Single-Vendor-Multiple-Buyers Supply chain (TSVMBSC) under the vendor managed inventory (VMI) mode of operation. To determine the optimal sales quantity for each buyer in a TSVMBSC, a mathematical model is formulated. From this model, the optimal sales quantity and the optimal sales price can be obtained, which in turn determine the optimal channel profit and the contract price between the vendor and buyer. All these parameters depend upon an understanding of the revenue sharing between the vendor and buyers. A Particle Swarm Optimization (PSO) algorithm is proposed for this problem. Solutions obtained from PSO are compared with the best known results reported in the literature.
Arai, Hiroaki; Suzuki, Tatsuya; Kaseda, Chosei; Takayama, Kozo
2009-06-01
The optimal solutions of theophylline tablet formulations based on datasets from 4 experimental designs (Box-Behnken design, central composite design, D-optimal design, and full factorial design) were calculated by the response surface method incorporating multivariate spline interpolation (RSM(S)). The reliability of these solutions was evaluated by a bootstrap (BS) resampling technique. The optimal solutions derived from the Box-Behnken, D-optimal, and full factorial design datasets were similar, and the distributions of the BS optimal solutions calculated for these datasets were symmetrical. Thus, the accuracy and reproducibility of the optimal solutions could be evaluated quantitatively from the deviations of these distributions. However, the distribution of the BS optimal solutions calculated for the central composite design dataset was largely asymmetrical, and basic statistical analysis of this distribution could not be conducted. The reason for this problem was considered to be the mixing of global and local optima. Therefore, self-organizing map (SOM) clustering was applied to identify the global optimal solutions. The BS optimal solutions were divided into 4 clusters by SOM clustering, the accuracy and reproducibility of the optimal solutions in each cluster were quantitatively evaluated, and the cluster containing the global optima was identified. SOM clustering was therefore considered to reinforce the BS resampling method for evaluating the reliability of optimal solutions irrespective of the dataset style.
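The bootstrap (BS) resampling idea used here is generic: resample the design dataset with replacement, refit and re-optimize, and inspect the spread of the resulting optima. A minimal sketch, with a toy one-dimensional "optimizer" (just the resampled mean) standing in for the RSM(S) fit-and-optimize step:

```python
import random
import statistics

def bootstrap_optima(data, fit_and_optimize, n_boot=200, seed=0):
    """Generic bootstrap reliability check: resample the dataset with
    replacement, re-run the fit/optimization, and collect the optima;
    the spread of the collected optima measures their reliability."""
    rng = random.Random(seed)
    optima = []
    for _ in range(n_boot):
        resample = [rng.choice(data) for _ in data]
        optima.append(fit_and_optimize(resample))
    return optima

# Toy 1-D stand-in: "optimization" = mean of the resampled responses.
data = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0]
opts = bootstrap_optima(data, statistics.fmean)
spread = statistics.stdev(opts)
```

A symmetric, tight histogram of `opts` corresponds to the reliable cases in the abstract; a multi-modal one is the signature of mixed global and local optima that motivated the SOM clustering step.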
Particle Swarm Optimization Algorithm for Optimizing Assignment of Blood in Blood Banking System
Olusanya, Micheal O.; Arasomwan, Martins A.; Adewumi, Aderemi O.
2015-01-01
This paper reports the performance of particle swarm optimization (PSO) for the assignment of blood to meet patients' blood transfusion requests. While the drive for blood donation lingers, there is a need for effective and efficient management of available blood in blood banking systems. Moreover, the inherent danger of transfusing wrong blood types to patients, unnecessary importation of blood units from external sources, and wastage of blood products due to nonusage necessitate the development of mathematical models and techniques for effective handling of blood distribution among available blood types, in order to minimize wastage and importation from external sources. This gives rise to the blood assignment problem (BAP) introduced recently in the literature. We propose queue and multiple-knapsack models with a PSO-based solution to address this challenge. Simulation is based on sets of randomly generated data that mimic the real-world population distribution of blood types. Results obtained show the efficiency of the proposed algorithm for BAP, with no blood units wasted and very low importation, where necessary, from outside the blood bank. The result can therefore serve as a benchmark and basis for decision support tools for real-life deployment. PMID:25815046
Psoriasis pathogenesis - Pso p27 is generated from SCCA1 with chymase.
Lysvand, Hilde; Hagen, Lars; Klubicka, Lidija; Slupphaug, Geir; Iversen, Ole-Jan
2014-05-01
Psoriasis is a chronic inflammatory skin disease with unknown aetiology. Infiltration of inflammatory cells as the initial event in the development of new psoriatic plaques, together with the defined inflamed areas of such lesions, argues for an immunological disease with local production of a causal antigen. The auto-antigen Pso p27 is a protein expressed in the skin lesions. We recently demonstrated that Pso p27 is homologous to the core amino acid sequences of squamous cell carcinoma antigens 1 and 2 (SCCA1/2) and that it is apparently generated from SCCA molecules by digestion with highly specific endoproteases. In this communication we demonstrate the generation of Pso p27 from SCCA1 with extracts from psoriatic scale and, even more remarkably, the generation of Pso p27 from SCCA1 in the presence of mast cell-associated chymase. These findings open up new therapeutic strategies in psoriasis and probably also in other autoimmune diseases, as Pso p27 epitopes have been detected in diseased tissues from patients with various chronic inflammatory diseases.
NASA Astrophysics Data System (ADS)
Zamora, A.; Gutierrez, A. E.; Velasco, A. A.
2014-12-01
2- and 3-dimensional models obtained from the inversion of geophysical data are widely used to represent the structural composition of the Earth and to constrain independent models obtained from other geological data (e.g. core samples, seismic surveys, etc.). However, inverse modeling of gravity data presents a very unstable and ill-posed mathematical problem, given that solutions are non-unique and small changes in parameters (position and density contrast of an anomalous body) can strongly affect the resulting model. Through the implementation of an interior-point constrained optimization technique, we improve 2-D and 3-D models of Earth structures representing known density contrasts, mapping anomalous bodies in uniform regions and boundaries between layers in layered environments. The proposed techniques are applied to synthetic data and to gravitational data obtained from the Rio Grande Rift and the Cooper Flat Mine region located in Sierra County, New Mexico. Specifically, we improve the 2- and 3-D Earth models by eliminating unacceptable solutions (those that do not satisfy the required constraints or are geologically unfeasible), thereby reducing the solution space.
Bhattacharjee, Deblina; Paul, Anand; Kim, Jeong Hong; Kim, Mucheol
2016-01-01
The analysis of leukocyte images has drawn interest from the fields of both medicine and computer vision for quite some time, and different techniques have been applied to automate the process of manual analysis and classification of such images. Manual analysis of blood samples to identify leukocytes is time-consuming and susceptible to error due to the differing morphological features of the cells. In this article, the nature-inspired plant growth simulation algorithm is applied to optimize the image processing technique of object localization in medical images of leukocytes. This paper presents a random bionic algorithm for the automated detection of white blood cells embedded in cluttered smear and stained images of blood samples, using a fitness function that measures the resemblance of a generated candidate solution to an actual leukocyte. The set of candidate solutions evolves via successive iterations as the proposed algorithm proceeds, guaranteeing their fit with the actual leukocytes outlined in the edge map of the image. The higher precision and sensitivity of the proposed scheme compared with existing methods is validated with experimental results on blood cell images. The proposed method reduces the feasible set of growth points in each iteration, thereby reducing the run time of objective function evaluation and reaching the goal state in minimum time and within the desired constraints.
Improving the performance of mass-consistent numerical models using optimization techniques
Barnard, J.C.; Wegley, H.L.; Hiester, T.R.
1985-09-01
This report describes a technique of using a mass-consistent model to derive wind speeds over a microscale region of complex terrain. A serious limitation in the use of these numerical models is that the calculated wind field is highly sensitive to some input parameters, such as those specifying atmospheric stability. Because accurate values for these parameters are not usually known, confidence in the calculated winds is low. However, values for these parameters can be found by tuning the model to existing wind observations within a microscale area. This tuning is accomplished by using a single-variable, unconstrained optimization procedure that adjusts the unknown parameters so that the error between the observed winds and the model calculations of these winds is minimized. Model verification is accomplished by using eight sets of hourly averaged wind data. These data are obtained from measurements made at approximately 30 sites covering a wind farm development in the Altamont Pass area. When the model was tuned to a small subset of the 30 sites, the wind speeds at the remaining sites were accurately determined in six of the eight cases. (The two that failed were low wind speed cases.) Therefore, when this technique is used, numerical modeling shows great promise as a tool for microscale siting of wind turbines in complex terrain.
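The single-variable, unconstrained tuning step can be illustrated with a golden-section search. This is a generic sketch, not the report's code, and the "stability parameter" error curve below is a toy stand-in for the observed-vs-modeled wind-speed error:

```python
import math

def golden_section_min(f, a, b, tol=1e-6):
    """Single-variable minimization by golden-section search over the
    bracket [a, b]; here it stands in for tuning one stability
    parameter so that model-vs-observation error is minimal."""
    invphi = (math.sqrt(5.0) - 1.0) / 2.0
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - invphi * (b - a)
        else:
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2.0

# Toy error curve with a minimum at a stability parameter of 0.3.
best = golden_section_min(lambda s: (s - 0.3) ** 2 + 1.0, 0.0, 1.0)
```

Derivative-free bracketing methods like this are a natural fit when each error evaluation requires a full mass-consistent model run.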
NASA Astrophysics Data System (ADS)
Eslami, Babak
The overall goals of this project are (i) to improve the current dynamic modes of atomic force microscopy (AFM), with a focus on multifrequency AFM measurements of soft matter in ambient air and liquid environments, and (ii) to develop a new methodology for mechanically characterizing the subsurface of soft samples, allowing users to gradually, controllably and reversibly reveal features that are buried under the surface. This dissertation includes a wide range of studies on multifrequency atomic force microscopy. Firstly, the imaging parameters (drive amplitude and frequency) of each eigenmode are studied and optimized based on the observables. Secondly, a new multifrequency AFM technique with the capability of imaging subsurface features has been developed and verified through experiments. Toward the first goal of the project, an experimental protocol for selecting the excitation frequency in air for single tapping mode and bimodal AFM is provided. Additionally, a rigorous guideline for the selection of drive frequency in ambient air and liquid environments, based on the energy quantities and the slope of the cantilever's phase response, is established. Finally, the advantage of using higher and stiffer eigenmodes for imaging soft matter has been proposed and verified experimentally. With this technique, the subsurface imaging capabilities of AFM are expanded.
Arnold, Heinz J P; Müller, Marcus; Waldhaus, Jörg; Hahn, Hartmut; Löwenheim, Hubert
2010-02-01
Whole-organ culture of a sensory organ in a rotating wall vessel bioreactor provides a powerful in vitro model for physiological and pathophysiological investigation as previously demonstrated for the postnatal inner ear. The model is of specific relevance as a tool for regeneration research. In the immature inner ear explant, the density was only 1.29 g/cm³. The high density of 1.68 g/cm³ of the functionally mature organ resulted in enhanced settling velocity and deviation from its ideal circular orbital path causing enhanced shear stress. The morphometric and physical properties, as well as the dynamic motion patterns of explants, were analyzed and numerically evaluated by an orbital path index. Application of a novel buoyancy bead technique resulted in a 6.5- to 14.8-fold reduction of the settling velocity. The deviation of the explant from its ideal circular orbital path was adjusted as indicated by an optimum value for the orbital path index (-1.0). Shear stress exerted on the inner ear explant was consequently reduced 6.4- to 15.0-fold. The culture conditions for postnatal stages were optimized, and the preconditions for transferring this in vitro model toward mature high-density stages established. This buoyancy technique may also be useful in tissue engineering of other high-density structures.
NASA Astrophysics Data System (ADS)
Turbelin, Grégory; Singh, Sarvesh Kumar; Issartel, Jean-Pierre
2014-12-01
In the event of an accidental or intentional contaminant release in the atmosphere, it is imperative, for managing emergency response, to diagnose the release parameters of the source from measured data. Reconstruction of the source information exploiting measured data is called an inverse problem. To solve such a problem, several techniques are currently being developed. The first part of this paper provides a detailed description of one of them, known as the renormalization method. This technique, proposed by Issartel (2005), has been derived using an approach different from that of standard inversion methods and gives a linear solution to the continuous Source Term Estimation (STE) problem. In the second part of this paper, the discrete counterpart of this method is presented. By using matrix notation, common in data assimilation and suitable for numerical computing, it is shown that the discrete renormalized solution belongs to a family of well-known inverse solutions (minimum weighted norm solutions), which can be computed by using the concept of generalized inverse operator. It is shown that, when the weight matrix satisfies the renormalization condition, this operator satisfies the criteria used in geophysics to define good inverses. Notably, by means of the Model Resolution Matrix (MRM) formalism, we demonstrate that the renormalized solution fulfils optimal properties for the localization of single point sources. Throughout the article, the main concepts are illustrated with data from a wind tunnel experiment conducted at the Environmental Flow Research Centre at the University of Surrey, UK.
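For a single point measurement y = hᵀx with weight matrix W, the minimum weighted-norm solution family described above reduces to x = W⁻¹h (hᵀW⁻¹h)⁻¹y. A minimal sketch with a diagonal W (the sensitivities and weights below are hypothetical, not data from the wind tunnel experiment):

```python
def min_weighted_norm_single(h, w, y):
    """Minimum weighted-norm solution of the underdetermined problem
    y = h . x for one measurement and a diagonal weight matrix
    W = diag(w):  x = W^-1 h (h^T W^-1 h)^-1 y."""
    winv_h = [hi / wi for hi, wi in zip(h, w)]        # W^-1 h
    denom = sum(hi * vi for hi, vi in zip(h, winv_h))  # h^T W^-1 h
    return [vi * y / denom for vi in winv_h]

# Hypothetical 4-cell source grid: sensitivities h and weights w.
x = min_weighted_norm_single([1.0, 2.0, 1.0, 0.5], [1.0, 1.0, 2.0, 4.0], 6.0)
```

By construction the solution reproduces the measurement exactly (h · x = y); the choice of W is what the renormalization condition constrains so that single point sources are well localized.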
3D gravity inversion and uncertainty assessment of basement relief via Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Pallero, J. L. G.; Fernández-Martínez, J. L.; Bonvalot, S.; Fudym, O.
2017-04-01
Nonlinear gravity inversion in sedimentary basins is a classical problem in applied geophysics. Although a 2D approximation is widely used, 3D models have also been proposed to better take into account the basin geometry. A common nonlinear approach to this 3D problem consists in modeling the basin as a set of right rectangular prisms with prescribed density contrast, whose depths are the unknowns. The problem is then iteratively solved via local optimization techniques from an initial model computed using some simplifications or estimated from prior geophysical models. Nevertheless, this kind of approach is highly dependent on the prior information that is used, and lacks a correct solution appraisal (nonlinear uncertainty analysis). In this paper, we use the family of global Particle Swarm Optimization (PSO) optimizers for the 3D gravity inversion and model appraisal of the solution adopted for basement relief estimation in sedimentary basins. Synthetic and real cases are illustrated, showing that robust results are obtained. Therefore, PSO seems to be a very good alternative for 3D gravity inversion and uncertainty assessment of basement relief when used in a sampling-while-optimizing approach. In this way, important geological questions can be answered probabilistically in order to perform risk assessment in the decisions that are made.
Improved Particle Swarm Optimization for Global Optimization of Unimodal and Multimodal Functions
NASA Astrophysics Data System (ADS)
Basu, Mousumi
2016-12-01
Particle swarm optimization (PSO) performs well for small-dimensional and less complicated problems but fails to locate global minima for complex multi-minima functions. This paper proposes an improved particle swarm optimization (IPSO) algorithm which introduces Gaussian random variables in the velocity term. This improves search efficiency and guarantees a high probability of obtaining the global optimum without significantly impairing the speed of convergence or the simplicity of the structure of particle swarm optimization. The algorithm is experimentally validated on 17 benchmark functions, and the results demonstrate the good performance of the IPSO in solving unimodal and multimodal problems. Its high performance is verified by comparison with two popular PSO variants.
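One plausible reading of the modification is to replace the usual uniform random factors in the velocity update with (absolute values of) Gaussian draws; the paper's exact formulation may differ, so treat this as a hedged sketch of the idea for one scalar dimension:

```python
import random

rng = random.Random(42)

def ipso_velocity(v, x, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """Velocity update with Gaussian random variables in place of the
    usual uniform ones (an illustrative reading of the IPSO abstract,
    not the paper's exact formulation). Taking absolute values keeps
    the cognitive and social terms attractive, as in standard PSO."""
    g1, g2 = abs(rng.gauss(0.0, 1.0)), abs(rng.gauss(0.0, 1.0))
    return w * v + c1 * g1 * (pbest - x) + c2 * g2 * (gbest - x)

# One update step: the particle at x=2.0 is pulled toward pbest=1.0
# and gbest=0.5, so the new velocity cannot exceed w * v = 0.07.
v_new = ipso_velocity(0.1, 2.0, 1.0, 0.5)
```

The heavier tail of the Gaussian occasionally produces large multipliers, which is the mechanism the abstract credits for improved escape from local minima.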
Optimal control of switched linear systems based on Migrant Particle Swarm Optimization algorithm
NASA Astrophysics Data System (ADS)
Xie, Fuqiang; Wang, Yongji; Zheng, Zongzhun; Li, Chuanfeng
2009-10-01
The optimal control problem for switched linear systems with internally forced switching has more constraints than that with externally forced switching. Heavy computation and slow convergence in solving this problem are major obstacles. In this paper we describe a new approach for solving this problem, called Migrant Particle Swarm Optimization (Migrant PSO). Imitating the behavior of a flock of migrant birds, the Migrant PSO applies naturally to both continuous and discrete spaces, combining a definitive optimization algorithm with a stochastic search method. The efficacy of the proposed algorithm is illustrated via a numerical example.
Jacob, Dayee; Raben, Adam; Sarkar, Abhirup; Grimm, Jimm; Simpson, Larry
2008-11-01
Purpose: To perform an independent validation of an anatomy-based inverse planning simulated annealing (IPSA) algorithm in obtaining superior target coverage and reducing the dose to the organs at risk. Method and Materials: In a recent prostate high-dose-rate brachytherapy protocol study by the Radiation Therapy Oncology Group (0321), our institution treated 20 patients between June 1, 2005 and November 30, 2006. These patients had received a high-dose-rate boost dose of 19 Gy to the prostate, in addition to an external beam radiotherapy dose of 45 Gy with intensity-modulated radiotherapy. Three-dimensional dosimetry was obtained for the following optimization schemes in the Plato Brachytherapy Planning System, version 14.3.2, using the same dose constraints for all the patients treated during this period: anatomy-based IPSA optimization, geometric optimization, and dose point optimization. Dose-volume histograms were generated for the planning target volume and organs at risk for each optimization method, from which the volume receiving at least 75% of the dose (V75%) for the rectum and bladder, the volume receiving at least 125% of the dose (V125%) for the urethra, and the total volume receiving the reference dose (V100%) and the volume receiving 150% of the dose (V150%) for the planning target volume were determined. The dose homogeneity index and conformal index for the planning target volume for each optimization technique were compared. Results: Despite suboptimal needle position in some implants, the IPSA algorithm was able to comply with the tight Radiation Therapy Oncology Group dose constraints for 90% of the patients in this study. In contrast, the compliance was only 30% for dose point optimization and only 5% for geometric optimization. Conclusions: Anatomy-based IPSA optimization proved to be the superior technique and also the fastest for reducing the dose to the organs at risk without compromising the target coverage.
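The dose-volume metrics above (V100%, V150%, and the dose homogeneity index) can be computed directly from per-voxel doses. A minimal sketch, assuming the common definition DHI = (V100 - V150) / V100; the study's precise definition and the toy dose values below are not taken from the paper:

```python
def dvh_metrics(doses, reference_dose):
    """From a per-voxel dose list, compute the volume fractions used in
    brachytherapy plan evaluation: V100 (fraction of voxels receiving
    at least the reference dose), V150, and the dose homogeneity index
    DHI = (V100 - V150) / V100."""
    n = len(doses)
    v100 = sum(d >= reference_dose for d in doses) / n
    v150 = sum(d >= 1.5 * reference_dose for d in doses) / n
    dhi = (v100 - v150) / v100 if v100 else 0.0
    return v100, v150, dhi

# Toy target voxel doses against the 19 Gy reference dose of the protocol.
v100, v150, dhi = dvh_metrics([18.0, 19.5, 20.0, 30.0, 22.0], 19.0)
```

A higher DHI indicates a more homogeneous dose in the target, which is one of the axes on which the three optimization schemes were compared.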
Automatic Parameter Tuning for the Morpheus Vehicle Using Particle Swarm Optimization
NASA Technical Reports Server (NTRS)
Birge, B.
2013-01-01
A high fidelity simulation using a PC-based Trick framework has been developed for Johnson Space Center's Morpheus test bed flight vehicle. There is an iterative development loop of refining and testing the hardware, refining the software, comparing the software simulation to hardware performance, and adjusting either or both the hardware and the simulation to extract the best performance from the hardware as well as the most realistic representation of the hardware from the software. A Particle Swarm Optimization (PSO) based technique has been developed that increases the speed and accuracy of this iterative development cycle. Parameters in software can be automatically tuned to make the simulation match real-world subsystem data from test flights. Special considerations for scale, linearity, and discontinuities can be all but ignored with this technique, allowing fast turnaround both for simulation tune-up to match hardware changes and during the test and validation phase to help identify hardware issues. Software models with insufficient control authority to match hardware test data can be immediately identified, and using this technique requires little to no specialized knowledge of optimization, freeing model developers to concentrate on spacecraft engineering. Integration of the PSO into the Morpheus development cycle is discussed, along with a case study highlighting the tool's effectiveness.
Anatomy-based transmission factors for technique optimization in portable chest x-ray
NASA Astrophysics Data System (ADS)
Liptak, Christopher L.; Tovey, Deborah; Segars, William P.; Dong, Frank D.; Li, Xiang
2015-03-01
Portable x-ray examinations often account for a large percentage of all radiographic examinations. Currently, portable examinations do not employ automatic exposure control (AEC). To aid in the design of a size-specific technique chart, acrylic slabs of various thicknesses are often used to estimate x-ray transmission for patients of various body thicknesses. This approach, while simple, does not account for patient anatomy, tissue heterogeneity, and the attenuation properties of the human body. To better account for these factors, in this work, we determined x-ray transmission factors using computational patient models that are anatomically realistic. A Monte Carlo program was developed to model a portable x-ray system. Detailed modeling was done of the x-ray spectrum, detector positioning, collimation, and source-to-detector distance. Simulations were performed using 18 computational patient models from the extended cardiac-torso (XCAT) family (9 males, 9 females; age range: 2-58 years; weight range: 12-117 kg). The ratio of air kerma at the detector with and without a patient model was calculated as the transmission factor. Our study showed that the transmission factor decreased exponentially with increasing patient thickness. For the range of patient thicknesses examined (12-28 cm), the transmission factor ranged from approximately 21% to 1.9% when the air kerma used in the calculation represented an average over the entire imaging field of view. The transmission factor ranged from approximately 21% to 3.6% when the air kerma used in the calculation represented the average signals from two discrete AEC cells behind the lung fields. These exponential relationships may be used to optimize imaging techniques for patients of various body thicknesses to aid in the design of clinical technique charts.
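The reported exponential relationship can be turned into a simple technique-chart helper. The sketch below fits T(t) = a·exp(−μt) through the two field-of-view-average endpoints quoted above (approximately 21% at 12 cm and 1.9% at 28 cm); the functional form and endpoints come from the abstract, while the interpolation helper itself is illustrative.

```python
import math

# Fit T(t) = a * exp(-mu * t) through the two reported endpoints:
# T(12 cm) ~= 21% and T(28 cm) ~= 1.9% (field-of-view average).
t1, T1 = 12.0, 0.21
t2, T2 = 28.0, 0.019
mu = math.log(T1 / T2) / (t2 - t1)      # effective attenuation coefficient, per cm
a = T1 * math.exp(mu * t1)              # extrapolated zero-thickness intercept

def transmission(t_cm):
    """Interpolated transmission factor for patient thickness t_cm (12-28 cm)."""
    return a * math.exp(-mu * t_cm)
```

A technique chart would then scale mAs inversely with `transmission(t_cm)` for each patient thickness bin.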
Optimization of FSS Band-Pass Filters Using Swarm Intelligence (Particle Swarm Optimization)
NASA Astrophysics Data System (ADS)
Wu, G.; Hansen, V.; Kreysa, E.; Gemünd, H.-P.
2006-09-01
This paper presents a novel procedure for the optimization of band-pass filters consisting of frequency selective surfaces (FSS) embedded in a dielectric. The aim is to optimize the parameters of the complete structure so that the transmission characteristics of the filters fulfill demanding filter requirements. Particle Swarm Optimization (PSO) is used as the optimization procedure. PSO is a stochastic optimization method that has been successfully applied in different areas, particularly for the optimization of non-linear problems with several objective functions. In this work, PSO is integrated into the spectral domain analysis for the calculation of complex FSS structures. The numerical computation is based on the formulation of an integral equation with the help of the spectral Green's function for layered media.
NASA Astrophysics Data System (ADS)
Hou, Rui; Yu, Junle
2011-12-01
Optical burst switching (OBS) has been regarded as the next-generation optical switching technology. In this paper, the routing problem in OBS based on the particle swarm optimization (PSO) algorithm is studied and analyzed. Simulation results indicate that the PSO-based routing algorithm outperforms the conventional shortest-path-first algorithm in both space cost and computation cost. These conclusions have theoretical significance for the improvement of OBS routing protocols.
Optimal Design of MPPT Controllers for Grid Connected Photovoltaic Array System
NASA Astrophysics Data System (ADS)
Ebrahim, M. A.; AbdelHadi, H. A.; Mahmoud, H. M.; Saied, E. M.; Salama, M. M.
2016-10-01
Integrating photovoltaic (PV) plants into the electric power system presents challenges to power system dynamic performance. These challenges stem primarily from the natural characteristics of PV plants, which differ in some respects from those of conventional plants. The most significant challenge is how to extract and regulate the maximum power from the sun. This paper presents the optimal design of the most commonly used Maximum Power Point Tracking (MPPT) techniques based on a Proportional-Integral controller tuned by Particle Swarm Optimization (PI-PSO). The techniques considered are (1) incremental conductance, (2) perturb and observe, (3) fractional short-circuit current and (4) fractional open-circuit voltage. This research work provides a comprehensive comparative study in terms of the energy availability ratio from the photovoltaic panels. The simulation results show that the proposed controllers have an impressive tracking response, and the system dynamic performance is greatly improved using the proposed controllers.
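One way to picture PI tuning by an optimizer is through the cost function being minimized. The sketch below computes the integral squared error of a PI loop around a generic first-order plant (a hypothetical stand-in, not the PV/MPPT model of the paper), with a coarse grid search standing in for the PSO.

```python
# Sketch of the fitness a PI tuner minimizes: integral squared error of a
# PI-controlled first-order plant dy/dt = (-y + u) / tau (hypothetical plant).
def ise(kp, ki, dt=0.01, t_end=5.0, tau=0.5, setpoint=1.0):
    y, integ, err_sum = 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        e = setpoint - y
        integ += e * dt
        u = kp * e + ki * integ          # PI control law
        y += dt * (-y + u) / tau         # forward-Euler plant update
        err_sum += e * e * dt            # accumulate squared tracking error
    return err_sum

# Coarse grid search standing in for PSO over (Kp, Ki):
best = min(((kp, ki) for kp in (0.5, 1, 2, 4, 8) for ki in (0.5, 1, 2, 4, 8)),
           key=lambda g: ise(*g))
```

A PSO would search the same (Kp, Ki) space continuously instead of on a grid; the cost evaluation is identical.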
Shah, Chirag; Vicini, Frank A.
2011-11-15
As more women survive breast cancer, long-term toxicities affecting their quality of life, such as lymphedema (LE) of the arm, gain importance. Although numerous studies have attempted to determine incidence rates, identify optimal diagnostic tests, enumerate efficacious treatment strategies and outline risk reduction guidelines for breast cancer-related lymphedema (BCRL), few groups have consistently agreed on any of these issues. As a result, standardized recommendations are still lacking. This review will summarize the latest data addressing all of these concerns in order to provide patients and health care providers with optimal, contemporary recommendations. Published incidence rates for BCRL vary substantially with a range of 2-65% based on surgical technique, axillary sampling method, radiation therapy fields treated, and the use of chemotherapy. Newer clinical assessment tools can potentially identify BCRL in patients with subclinical disease with prospective data suggesting that early diagnosis and management with noninvasive therapy can lead to excellent outcomes. Multiple therapies exist with treatments defined by the severity of BCRL present. Currently, the standard of care for BCRL in patients with significant LE is complex decongestive physiotherapy (CDP). Contemporary data also suggest that a multidisciplinary approach to the management of BCRL should begin prior to definitive treatment for breast cancer employing patient-specific surgical, radiation therapy, and chemotherapy paradigms that limit risks. Further, prospective clinical assessments before and after treatment should be employed to diagnose subclinical disease. In those patients who require aggressive locoregional management, prophylactic therapies and the use of CDP can help reduce the long-term sequelae of BCRL.
NASA Astrophysics Data System (ADS)
McNally-Heintzelman, Karen M.; Dawes, Judith M.; Lauto, Antonio; Parker, Anthony E.; Owen, Earl R.; Piper, James A.
1998-01-01
This study demonstrates the feasibility of the laser-solder repair technique for nerve anastomosis resulting in improved tensile strength. The welding temperature required to achieve optimal tensile strength has been identified.
Parameter tuning of PVD process based on artificial intelligence technique
NASA Astrophysics Data System (ADS)
Norlina, M. S.; Diyana, M. S. Nor; Mazidah, P.; Rusop, M.
2016-07-01
In this study, an artificial intelligence technique is proposed for the parameter tuning of a PVD process. Due to its previous adaptation to similar optimization problems, the genetic algorithm (GA) is selected to optimize the parameter tuning of the RF magnetron sputtering process. The most optimized parameter combination obtained from the GA's optimization result is expected to produce the desirable zinc oxide (ZnO) thin film from the sputtering process. The parameters involved in this study were RF power, deposition time and substrate temperature. The algorithm was tested to optimize the 25 datasets of parameter combinations. The results from the computational experiment were then compared with the actual results from the laboratory experiment. Based on the comparison, GA proved reliable for optimizing the parameter combination before the parameter tuning is done on the RF magnetron sputtering machine. In order to verify the results of the GA, the algorithm was also compared to other well-known optimization algorithms, namely particle swarm optimization (PSO) and the gravitational search algorithm (GSA). The results showed that GA was reliable in solving this RF magnetron sputtering process parameter tuning problem, and GA showed better accuracy in the optimization based on the fitness evaluation.
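A GA of the kind described can be sketched in a few lines. The fitness below is a hypothetical smooth surrogate with an assumed optimum at (200 W, 60 min, 300 °C), not a sputtering model, and the parameter ranges are assumptions for illustration only.

```python
import random

rng = random.Random(7)
# assumed ranges: RF power (W), deposition time (min), substrate temp (degC)
BOUNDS = [(50.0, 300.0), (10.0, 120.0), (25.0, 400.0)]

def fitness(ind):
    # hypothetical surrogate: peak at an assumed optimum (200 W, 60 min, 300 degC)
    target = (200.0, 60.0, 300.0)
    return -sum(((x - t) / (hi - lo)) ** 2
                for x, t, (lo, hi) in zip(ind, target, BOUNDS))

def ga(pop_size=30, gens=40, mut=0.2):
    pop = [[rng.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]           # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]   # arithmetic crossover
            if rng.random() < mut:                        # gaussian mutation, clamped
                d = rng.randrange(len(BOUNDS))
                lo, hi = BOUNDS[d]
                child[d] = min(max(child[d] + rng.gauss(0, 0.1 * (hi - lo)), lo), hi)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = ga()
```

In the study itself the fitness would come from measured or predicted film quality rather than a closed-form surrogate.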
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-17
... HUMAN SERVICES Agency for Healthcare Research and Quality Patient Safety Organizations: Voluntary... relinquishment from Peminic Inc. dba The Peminic-Greeley PSO of its status as a Patient Safety Organization (PSO). The Patient Safety and Quality Improvement Act of 2005 (Patient Safety Act), Public Law 109-41, 42...
NASA Technical Reports Server (NTRS)
MacKay, Rebecca A.; Locci, Ivan E.; Garg, Anita; Ritzert, Frank J.
2002-01-01
is a three-phase constituent composed of TCP and stringers of gamma phase in a matrix of gamma prime. An incoherent grain boundary separates the SRZ from the gamma/gamma prime microstructure of the superalloy. The SRZ is believed to form as a result of local chemistry changes in the superalloy due to the application of the diffusion aluminide bondcoat. Locally high surface stresses also appear to promote the formation of the SRZ. Thus, techniques that change the local alloy chemistry or reduce surface stresses have been examined for their effectiveness in reducing SRZ. These SRZ-reduction steps are performed on the test specimen or the turbine blade before the bondcoat is applied. Stress-relief heat treatments developed at NASA Glenn have been demonstrated to significantly reduce the amount of SRZ that develops during subsequent high-temperature exposures. Stress-relief heat treatments reduce surface stresses by recrystallizing a thin surface layer of the superalloy. However, in alloys with very high propensities to form SRZ, stress-relief heat treatments alone do not eliminate SRZ entirely. Thus, techniques that modify the local chemistry under the bondcoat have been emphasized and optimized successfully at Glenn. One such technique is carburization, which changes the local chemistry by forming submicron carbides near the surface of the superalloy. Detailed characterizations have demonstrated that the depth and uniform distribution of these carbides are enhanced when a stress-relief treatment and an appropriate surface preparation are employed in advance of the carburization treatment. Even in alloys that have the propensity to develop a continuous SRZ layer beneath the diffusion zone, the SRZ has been completely eliminated or reduced to low, manageable levels when this combination of techniques is utilized. Now that the techniques to mitigate SRZ have been established at Glenn, TCP phase formation is being emphasized in ongoing work under the UEET Program. The
A particle swarm optimization variant with an inner variable learning strategy.
Wu, Guohua; Pedrycz, Witold; Ma, Manhao; Qiu, Dishan; Li, Haifeng; Liu, Jin
2014-01-01
Although Particle Swarm Optimization (PSO) has demonstrated competitive performance in solving global optimization problems, it exhibits some limitations when dealing with optimization problems with high dimensionality and complex landscapes. In this paper, we integrate problem-oriented knowledge into the design of a PSO variant. The resulting novel PSO algorithm with an inner variable learning strategy (PSO-IVL) is particularly efficient for optimizing functions with symmetric variables. Symmetric variables of the optimized function have to satisfy a certain quantitative relation. Based on this knowledge, the inner variable learning (IVL) strategy helps the particle to inspect the relation among its inner variables, determine the exemplar variable for all other variables, and then make each variable learn from the exemplar variable in terms of their quantitative relations. In addition, we design a new trap detection and jumping out strategy to help particles escape from local optima. The trap detection operation is employed at the level of individual particles, whereas the trap jumping out strategy is adaptive in nature. Experimental simulations completed for some representative optimization functions demonstrate the excellent performance of PSO-IVL. The effectiveness of PSO-IVL underscores the usefulness of augmenting evolutionary algorithms with problem-oriented domain knowledge.
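The IVL step can be illustrated for the simplest symmetric case, where the quantitative relation is equality (the optimum has all variables equal). This is a simplified reading of the idea, not the paper's algorithm: a particle scores each of its variables individually, takes the best-scoring one as exemplar, and pulls the others toward it.

```python
# Sketch of an inner-variable-learning step for a function whose optimum has
# all variables equal. cost_1d scores an individual variable value; the
# best-scoring variable becomes the exemplar the others learn from.
def ivl_step(position, cost_1d, rate=0.5):
    exemplar = min(position, key=cost_1d)
    return [x + rate * (exemplar - x) for x in position]   # move toward exemplar

# toy symmetric objective: per-variable cost minimized at x == 3
f1 = lambda x: (x - 3.0) ** 2
p = [0.0, 2.0, 3.1, 5.0]
for _ in range(10):
    p = ivl_step(p, f1)
```

After a few steps the variables contract around the best value seen, which is the mechanism that makes the variant efficient on symmetric-variable functions.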
Ma, Denglong; Tan, Wei; Zhang, Zaoxiao; Hu, Jun
2017-03-05
In order to identify the parameters of a hazardous gas emission source in the atmosphere with limited prior information and reliable probability estimation, a hybrid algorithm coupling Tikhonov regularization with particle swarm optimization (PSO) was proposed. When the source location is known, the source strength can be estimated successfully by the common Tikhonov regularization method, but the method is invalid when the information about both source strength and location is absent. Therefore, a hybrid method combining linear Tikhonov regularization and the PSO algorithm was designed. With this method, the nonlinear inverse dispersion model was transformed to a linear form under some assumptions, and the source parameters, including source strength and location, were identified simultaneously by the linear Tikhonov-PSO regularization method. The regularization parameters were selected by the L-curve method. The estimation results with different regularization matrices showed that the confidence interval with a high-order regularization matrix is narrower than that with a zero-order regularization matrix, but the estimates of the different source parameters are close to each other for the different regularization matrices. A nonlinear Tikhonov-PSO hybrid regularization was also designed with the primary nonlinear dispersion model to estimate the source parameters. The comparison of simulated and experimental cases showed that the linear Tikhonov-PSO method with the transformed linear inverse model has higher computational efficiency than the nonlinear Tikhonov-PSO method, and the confidence intervals from the linear Tikhonov-PSO method are more reasonable than those from the nonlinear method. The estimates from the linear Tikhonov-PSO method are similar to those from the single PSO algorithm, and a reasonable confidence interval at given probability levels can additionally be given by the Tikhonov-PSO method. Therefore, the presented linear Tikhonov-PSO regularization method is a good potential method for hazardous emission source identification.
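The linear (known-location) step can be illustrated for the simplest possible case: a single scalar source strength q observed through linear sensor sensitivities, estimated by zero-order Tikhonov regularization. The sensitivities and noise values below are hypothetical, and the PSO layer that searches over candidate locations is omitted.

```python
# Zero-order Tikhonov estimate of a scalar source strength q from measured
# concentrations c_i = A_i * q + noise, where A_i are (assumed known)
# dispersion coefficients at each sensor. Normal-equations solution of
# min ||A q - c||^2 + lam^2 * q^2.
def tikhonov(A, c, lam):
    num = sum(a * y for a, y in zip(A, c))
    den = sum(a * a for a in A) + lam * lam
    return num / den

A = [0.8, 0.5, 0.3, 0.1]                       # hypothetical sensor sensitivities
q_true = 10.0
noise = [0.05, -0.03, 0.02, 0.01]              # hypothetical measurement noise
c = [a * q_true + e for a, e in zip(A, noise)]
q_hat = tikhonov(A, c, lam=0.05)
```

In the hybrid method, PSO proposes candidate locations, each location fixes the coefficients A, and the Tikhonov fit above scores the candidate; large lam visibly shrinks the estimate toward zero, which is the bias the L-curve selection balances.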
Mohamed, Ahmed F; Elarini, Mahdi M; Othman, Ahmed M
2014-05-01
One of the most recent optimization techniques applied to the optimal design of a photovoltaic system to supply an isolated load demand is the Artificial Bee Colony (ABC) algorithm. The proposed methodology is applied to optimize the cost of the PV system, including the photovoltaic modules, a battery bank, a battery charge controller, and an inverter. Two objective functions are proposed: the first is the PV module output power, which is to be maximized, and the second is the life cycle cost (LCC), which is to be minimized. The analysis is performed based on solar radiation and ambient temperature measured at Helwan city, Egypt. A comparison between the optimal results of the ABC algorithm and the Genetic Algorithm (GA) is performed. Another location, Zagazig city, is selected to check the validity of the ABC algorithm elsewhere. The ABC results are more optimal than those of the GA. The results encourage the use of PV systems to electrify the rural sites of Egypt.
NASA Astrophysics Data System (ADS)
Tsampas, P.; Roditis, G.; Papadimitriou, V.; Chatzakos, P.; Gan, Tat-Hean
2013-05-01
Increasing demand in mobile, autonomous devices has made energy harvesting a particular point of interest. Systems that can be powered by a few hundred microwatts could feature their own energy extraction module. Energy can be harvested from the environment close to the device. In particular, the conversion of ambient mechanical vibrations via piezoelectric transducers is one of the most investigated fields in energy harvesting. A technique for optimized energy harvesting using piezoelectric actuators called "Synchronized Switching Harvesting" is explored. Compared to a typical full-bridge rectifier, the proposed harvesting technique can greatly improve harvesting efficiency, even in a significantly extended frequency window around the piezoelectric actuator's resonance. In this paper, the concept of design, theoretical analysis, modeling, implementation and experimental results using CEDRAT's APA 400M-MD piezoelectric actuator are presented in detail. Moreover, we suggest design guidelines for optimum selection of the storage unit in direct relation to the characteristics of the random vibrations. From a practical aspect, the harvesting unit is based on dedicated electronics that continuously sense the charge level of the actuator's piezoelectric element. When the charge is sensed to reach a maximum, it is directed to flow rapidly into a storage unit. Special care is taken so that the electronics operate at low voltages, consuming only a very small amount of the stored energy. The final prototype developed includes the harvesting circuit, implemented with miniaturized, low cost and low consumption electronics, and a storage unit consisting of a supercapacitor array, forming a truly self-powered system drawing energy from ambient random vibrations with a wide range of characteristics.
Hedegaard, R.F.; Ho, J.; Eisert, J.
1996-12-31
Three-dimensional (3-D) geoscience volume modeling can be used to improve the efficiency of the environmental investigation and remediation process. At several unsaturated zone spill sites at two Superfund (CERCLA) sites (Military Installations) in California, all aspects of subsurface contamination have been characterized using an integrated computerized approach. With the aid of software such as LYNX GMS™, Wavefront's Data Visualizer™ and Gstools (public domain), the authors have created a central platform from which to map a contaminant plume, visualize the same plume three-dimensionally, and calculate volumes of contaminated soil or groundwater above important health risk thresholds. The developed methodology allows rapid data inspection for decisions such that the characterization process and remedial action design are optimized. By using the 3-D geoscience modeling and visualization techniques, the technical staff are able to evaluate the completeness and spatial variability of the data and conduct 3-D geostatistical predictions of contaminant and lithologic distributions. The geometry of each plume is estimated using 3-D variography on raw analyte values and indicator thresholds for the kriged model. Three-dimensional lithologic interpretation is based on either 'linked' parallel cross sections or on kriged grid estimations derived from borehole data coded with permeability indicator thresholds. Investigative borings, as well as soil vapor extraction/injection wells, are sited and excavation costs are estimated using these results. The principal advantages of the technique are the efficiency and rapidity with which meaningful results are obtained and the enhanced visualization capability, which is a desirable medium for communicating with both the technical staff and nontechnical audiences.
Optimization strategies for evaluation of brain hemodynamic parameters with qBOLD technique.
Wang, Xiaoqi; Sukstanskii, Alexander L; Yablonskiy, Dmitriy A
2013-04-01
Quantitative blood oxygenation level dependent technique provides an MRI-based method to measure tissue hemodynamic parameters such as oxygen extraction fraction and deoxyhemoglobin-containing (veins and prevenous part of capillaries) cerebral blood volume fraction. It is based on a theory of MR signal dephasing in the presence of a blood vessel network and an experimental method, gradient echo sampling of spin echo, previously proposed and validated on phantoms and animals. In vivo human studies also demonstrated the feasibility of this approach but recognized that obtaining reliable results requires a high signal-to-noise ratio in the data. In this paper, we analyze in detail the uncertainties of the quantitative blood oxygenation level dependent parameter estimates in the framework of Bayesian probability theory; namely, we examine how the estimated parameters, oxygen extraction fraction and deoxygenated cerebral blood volume fraction, depend on their "true values," signal-to-noise ratio, and data sampling strategies. On the basis of this analysis, we develop strategies for optimization of the quantitative blood oxygenation level dependent technique for deoxygenated cerebral blood volume and oxygen extraction fraction evaluation. In particular, it is demonstrated that the use of the gradient echo sampling of spin echo sequence allows a substantial decrease of measurement errors as the data are acquired on both sides of the spin echo. We test our theory on a phantom mimicking the structure of a blood vessel network. A 3D gradient echo sampling of spin echo pulse sequence is used for the acquisition of the MRI signal, which is subsequently analyzed by Bayesian Application Software. The experimental results demonstrate good agreement with theoretical predictions.
Order-2 Stability Analysis of Particle Swarm Optimization.
Liu, Qunfeng
2015-01-01
Several stability analyses and stable regions of particle swarm optimization (PSO) have been proposed previously. The assumption of stagnation and different definitions of stability are adopted in these analyses. In this paper, the order-2 stability of PSO is analyzed based on a weak stagnation assumption. A new definition of stability is proposed and an order-2 stable region is obtained. Several existing stability analyses for canonical PSO are compared, especially their definitions of stability and the corresponding stable regions. It is shown that the classical stagnation assumption is too strict and not necessary. Moreover, among all these definitions of stability, ours requires the weakest conditions, and additional conditions bring no benefit. Finally, numerical experiments are reported to show that the obtained stable region is meaningful. A new parameter combination of PSO is also shown to perform well, even better than some known best parameter combinations.
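For orientation, the classical deterministic, stagnation-based order-1 region (the kind of region the paper's weaker-assumption analysis refines; this is not the paper's new order-2 region) can be checked directly:

```python
def order1_stable(w, phi):
    """Classical (deterministic, stagnation-based) order-1 stable region for
    canonical PSO with inertia w and phi = c1 + c2."""
    return abs(w) < 1.0 and 0.0 < phi < 2.0 * (1.0 + w)

# The widely used constriction-equivalent parameters lie inside the region:
w, c1, c2 = 0.7298, 1.49618, 1.49618
inside = order1_stable(w, c1 + c2)
```

Order-2 (variance) stability imposes a strictly tighter bound on phi than this order-1 condition, which is why the choice of stability definition changes the recommended parameter region.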
NASA Astrophysics Data System (ADS)
Lahmiri, Salim
2016-02-01
Multiresolution analysis techniques, including continuous wavelet transform, empirical mode decomposition, and variational mode decomposition, are tested in the context of interest rate next-day variation prediction. In particular, multiresolution analysis techniques are used to decompose the actual interest rate variation, and a feedforward neural network is used for training and prediction. The particle swarm optimization technique is adopted to optimize the network's initial weights. For comparison purposes, an autoregressive moving average model, a random walk process, and the naive model are used as the main reference models. In order to show the feasibility of the presented hybrid models that combine multiresolution analysis techniques and a feedforward neural network optimized by particle swarm optimization, we used a set of six illustrative interest rates: Moody's seasoned Aaa corporate bond yield, Moody's seasoned Baa corporate bond yield, 3-Month, 6-Month and 1-Year treasury bills, and the effective federal funds rate. The forecasting results show that all multiresolution-based prediction systems outperform the conventional reference models on the criteria of mean absolute error, mean absolute deviation, and root mean-squared error. Therefore, it is advantageous to adopt hybrid multiresolution techniques and soft computing models to forecast interest rate daily variations as they provide good forecasting performance.
NASA Technical Reports Server (NTRS)
Walsh, Joanne L.
1989-01-01
Optimization procedures are developed to systematically provide closely-spaced vibration frequencies. A general purpose finite-element program for eigenvalue and sensitivity analyses is combined with formal mathematical programming techniques. Results are presented for three studies. The first study uses a simple model to obtain a design with two pairs of closely-spaced frequencies. Two formulations are developed: an objective function-based formulation and constraint-based formulation for the frequency spacing. It is found that conflicting goals are handled better by a constraint-based formulation. The second study uses a detailed model to obtain a design with one pair of closely-spaced frequencies while satisfying requirements on local member frequencies and manufacturing tolerances. Two formulations are developed. Both the constraint-based and the objective function-based formulations perform reasonably well and converge to the same results. However, no feasible design solution exists which satisfies all design requirements for the choices of design variables and the upper and lower design variable values used. More design freedom is needed to achieve a fully satisfactory design. The third study is part of a redesign activity in which a detailed model is used.
Sánchez Navarro, A
2005-09-01
The aim of this work is to provide a methodology to predict the potential efficacy of standard dosage schedules established for antimicrobials when used in clinical practice and administered to patients with different demographic characteristics. It is based on the application of pharmacokinetic and pharmacodynamic criteria (PK/PD analysis) to optimize the dosing of this type of drug. Pharmacokinetic parameters such as the area under the plasma concentration-time curve (AUC) or maximum plasma concentration (Cmax) can be estimated from population kinetic models for each type of patient. Microbiological information, such as the MIC value, is also required. Using the above information and applying the Monte Carlo simulation technique, the probability of achieving the recommended value of a surrogate variable related to efficacy may be estimated. The proposed methodology has been applied to levofloxacin administered to patients with different characteristics. The results reveal that this method allows us to know a priori whether or not the standard dosage is appropriate for a particular patient for whom the treatment is indicated. In summary, the proposed methodology provides a strategy for dosage individualization of antimicrobial agents that can be applied before initiating treatment, with no need for monitoring drug concentrations, leading to increased clinical efficacy as well as a decreased risk of resistance development.
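The Monte Carlo step can be sketched as a probability-of-target-attainment (PTA) calculation: draw AUC values from a lognormal population model and count how often AUC/MIC reaches a target. All numbers below (median AUC, variability, MICs, target ratio) are illustrative placeholders, not levofloxacin population values.

```python
import math
import random

def pta(auc_median, auc_cv, mic, target, n=20000, seed=3):
    """Probability of target attainment: P(AUC/MIC >= target) under a
    lognormal AUC population model (illustrative parameters only)."""
    rng = random.Random(seed)
    sigma = math.sqrt(math.log(1.0 + auc_cv ** 2))   # lognormal shape from CV
    mu = math.log(auc_median)                        # log of the median AUC
    hits = sum(1 for _ in range(n)
               if math.exp(rng.gauss(mu, sigma)) / mic >= target)
    return hits / n

p_sensitive = pta(auc_median=48.0, auc_cv=0.3, mic=0.25, target=100)
p_resistant = pta(auc_median=48.0, auc_cv=0.3, mic=2.0, target=100)
```

A schedule would be judged adequate a priori when the PTA at the pathogen's MIC distribution exceeds a chosen threshold (commonly around 90%).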
NASA Technical Reports Server (NTRS)
Granaas, Michael M.; Rhea, Donald C.
1989-01-01
In recent years the needs of ground-based researcher-analysts to access real-time engineering data in the form of processed information has expanded rapidly. Fortunately, the capacity to deliver that information has also expanded. The development of advanced display systems is essential to the success of a research test activity. Those developed at the National Aeronautics and Space Administration (NASA), Western Aeronautical Test Range (WATR), range from simple alphanumerics to interactive mapping and graphics. These unique display systems are designed not only to meet basic information display requirements of the user, but also to take advantage of techniques for optimizing information display. Future ground-based display systems will rely heavily not only on new technologies, but also on interaction with the human user and the associated productivity with that interaction. The psychological abilities and limitations of the user will become even more important in defining the difference between a usable and a useful display system. This paper reviews the requirements for development of real-time displays; the psychological aspects of design such as the layout, color selection, real-time response rate, and interactivity of displays; and an analysis of some existing WATR displays.
Optimal design of 850 nm 2×2 multimode interference polymer waveguide coupler by imprint technique
NASA Astrophysics Data System (ADS)
Shao, Yuchen; Han, Xiuyou; Han, Xiaonan; Lu, Zhili; Wu, Zhenlin; Teng, Jie; Wang, Jinyan; Morthier, Geert; Zhao, Mingshan
2016-09-01
A 2×2 optical waveguide coupler at 850 nm based on the multimode interference (MMI) structure with the polysilsesquioxanes liquid series (PSQ-Ls) polymer material and the imprint technique is presented. The influence of the structural parameters, such as the single mode condition, the waveguide spacing of input/output ports, and the width and length of the multimode waveguide, on the optical splitting performance, including the excess loss and the uniformity, is simulated by the beam propagation method. By inserting a taper section of isosceles trapezoid shape between the single mode and multimode waveguides, the optimized structural parameters for low excess loss and high uniformity are obtained, with an excess loss of −0.040 dB and a uniformity of −0.007 dB. The effect of the structure deviations induced during the imprint process on the optical splitting performance at different residual layer thicknesses is also investigated. The analysis results provide useful instructions for the waveguide device fabrication.
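The two figures of merit can be computed directly from the simulated port powers. The sketch below uses the standard definitions with a positive-loss sign convention (the paper quotes the values as negative dB, i.e. as transmission levels); the port powers are hypothetical values chosen near the optimized result, not taken from the paper's simulation.

```python
import math

def excess_loss_db(p_out, p_in=1.0):
    """Excess loss of a splitter: total output power vs. input power, in dB
    (positive-loss convention)."""
    return -10.0 * math.log10(sum(p_out) / p_in)

def uniformity_db(p_out):
    """Uniformity (imbalance) between output ports, in dB."""
    return 10.0 * math.log10(max(p_out) / min(p_out))

# hypothetical normalized output powers of a near-ideal 2x2 MMI coupler
ports = [0.4955, 0.4953]
```

With these port powers the excess loss is about 0.040 dB, matching the magnitude of the optimized design above.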
Large Scale Multi-area Static/Dynamic Economic Dispatch using Nature Inspired Optimization
NASA Astrophysics Data System (ADS)
Pandit, Manjaree; Jain, Kalpana; Dubey, Hari Mohan; Singh, Rameshwar
2016-07-01
Economic dispatch (ED) ensures that the generation allocation to the power units is carried out such that the total fuel cost is minimized and all the operating equality/inequality constraints are satisfied. Classical ED does not take transmission constraints into consideration, but in present restructured power systems the tie-line limits play a very important role in deciding operational policies. ED is a dynamic problem which is performed on-line in the central load dispatch centre with changing load scenarios. The dynamic multi-area ED (MAED) problem is more complex due to the additional tie-line, ramp-rate and area-wise power balance constraints. Nature inspired (NI) heuristic optimization methods are gaining popularity over traditional methods for complex problems. This work presents modified particle swarm optimization (PSO) based techniques in which parameter automation is effectively used to improve search efficiency by avoiding stagnation at a sub-optimal result. This work validates the performance of the PSO variants against the traditional solver GAMS for single-area as well as multi-area economic dispatch on three test cases of a large 140-unit standard test system having complex constraints.
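The core single-area ED subproblem (which the PSO variants and GAMS both solve, before tie-line and ramp constraints are added) can be sketched with quadratic costs and an equal-incremental-cost rule. The three-unit data below are illustrative, not from the 140-unit test system.

```python
# Illustrative 3-unit data: (a, b, c, Pmin, Pmax) for cost C(P) = a + b*P + c*P^2.
# The fixed cost a does not affect the dispatch itself.
UNITS = [
    (100.0, 20.0, 0.050, 10.0, 100.0),
    (120.0, 18.0, 0.070, 10.0, 120.0),
    ( 80.0, 22.0, 0.040, 10.0,  80.0),
]

def dispatch(demand, tol=1e-6):
    """Equal-incremental-cost dispatch by bisection on lambda (single area,
    no losses or tie-line limits)."""
    def alloc(lam):
        # each unit runs where marginal cost b + 2cP = lambda, clamped to limits
        return [min(max((lam - b) / (2.0 * c), pmin), pmax)
                for _, b, c, pmin, pmax in UNITS]
    lo, hi = 0.0, 100.0     # bracket assumed to contain the marginal cost
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        if sum(alloc(lam)) < demand:
            lo = lam
        else:
            hi = lam
    return alloc(0.5 * (lo + hi))

P = dispatch(150.0)
```

A PSO-based MAED replaces this closed-form rule with a population search so that non-convex costs (valve points), ramp rates, and tie-line limits can be handled in the same framework.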
Ramamoorthy, Ambika; Ramachandran, Rajeswari
2016-01-01
Power grid becomes smarter nowadays along with technological development. The benefits of the smart grid can be enhanced through the integration of renewable energy sources. In this paper, several studies have been made to reconfigure a conventional network into a smart grid. Among all the renewable sources, solar power takes the prominent position due to its availability in abundance. The proposed methodology presented in this paper is aimed at minimizing network power losses and at improving the voltage stability within the framework of system operation and security constraints in a transmission system. Locations and capacities of DGs have a significant impact on the system losses in a transmission system. In this paper, combined nature inspired algorithms are presented for optimal location and sizing of DGs. This paper proposes a two-step optimization technique in order to integrate DG. In a first step, the best size of DG is determined through PSO metaheuristics, and the results obtained through PSO are tested for reverse power flow by a negative load approach to find possible bus locations. Then, the optimal location is found by the Loss Sensitivity Factor (LSF) and weak (WK) bus methods and the results are compared. In a second step, optimal sizing of DGs is determined by PSO, GSA, and hybrid PSOGSA algorithms. Apart from optimal sizing and siting of DGs, different scenarios with the number of DGs (3, 4, and 5) and PQ capacities of DGs (P alone, Q alone, and both P and Q) are also analyzed in this paper. A detailed performance analysis is carried out on the IEEE 30-bus system to demonstrate the effectiveness of the proposed methodology. PMID:27057557
Ramamoorthy, Ambika; Ramachandran, Rajeswari
2016-01-01
Power grid becomes smarter nowadays along with technological development. The benefits of smart grid can be enhanced through the integration of renewable energy sources. In this paper, several studies have been made to reconfigure a conventional network into a smart grid. Amongst all the renewable sources, solar power takes the prominent position due to its availability in abundance. Proposed methodology presented in this paper is aimed at minimizing network power losses and at improving the voltage stability within the frame work of system operation and security constraints in a transmission system. Locations and capacities of DGs have a significant impact on the system losses in a transmission system. In this paper, combined nature inspired algorithms are presented for optimal location and sizing of DGs. This paper proposes a two-step optimization technique in order to integrate DG. In a first step, the best size of DG is determined through PSO metaheuristics and the results obtained through PSO is tested for reverse power flow by negative load approach to find possible bus locations. Then, optimal location is found by Loss Sensitivity Factor (LSF) and weak (WK) bus methods and the results are compared. In a second step, optimal sizing of DGs is determined by PSO, GSA, and hybrid PSOGSA algorithms. Apart from optimal sizing and siting of DGs, different scenarios with number of DGs (3, 4, and 5) and PQ capacities of DGs (P alone, Q alone, and P and Q both) are also analyzed and the results are analyzed in this paper. A detailed performance analysis is carried out on IEEE 30-bus system to demonstrate the effectiveness of the proposed methodology. PMID:27057557
NASA Astrophysics Data System (ADS)
Zhang, Zhi-Hua; Sheng, Zheng; Shi, Han-Qing
2015-01-01
Estimating refractivity profiles from radar sea clutter is a complex nonlinear optimization problem. To deal with its ill-posed nature, this paper proposes an inversion algorithm, particle swarm optimization with Lévy flights (LPSO), applied within the refractivity-from-clutter (RFC) technique to retrieve atmospheric ducts. PSO has many advantages in solving continuous optimization problems, but in its late period it suffers from slow convergence and low precision. We therefore integrate Lévy flights into the standard PSO algorithm to improve precision and enhance the ability to jump out of local optima. To verify the feasibility and validity of LPSO for estimating atmospheric duct parameters with the RFC method, both synthetic data and the Wallops98 experimental data are used. Numerical experiments demonstrate that the optimal solutions obtained from the hybrid algorithm are more precise and efficient. Additionally, the anti-noise ability of LPSO is analyzed to test its inversion performance; the results indicate that the LPSO algorithm has a certain anti-noise ability. Overall, the experimental results show that LPSO provides a more precise and efficient method for near-real-time inversion of atmospheric refractivity from radar clutter.
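Mantegna's algorithm is one standard way to generate the heavy-tailed Lévy steps that LPSO mixes into PSO; the sketch below illustrates the idea (the step-size scale `alpha` and the re-seeding rule are assumptions for illustration, not details from the paper).

```python
import math, random

def levy_step(rng, beta=1.5):
    """Draw one Lévy-distributed step length (Mantegna's algorithm).

    Heavy-tailed steps let a particle occasionally make a long jump,
    helping the swarm escape local optima late in the search."""
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta
                  * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.gauss(0.0, sigma_u)
    v = rng.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)

# One possible use: re-seed a stagnant particle around the global best.
rng = random.Random(7)
gbest = [1.2, -0.4]          # hypothetical global-best position
alpha = 0.1                  # step-size scale (assumed)
jumped = [g + alpha * levy_step(rng) for g in gbest]
```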
NASA Astrophysics Data System (ADS)
Rozhkov, Mikhail; Bobrov, Dmitry; Kitov, Ivan
2014-05-01
The Master Event technique is a powerful tool for Expert Technical Analysis within the CTBT framework as well as for real-time monitoring with the waveform cross-correlation (CC) (match filter) approach. The primary goal of CTBT monitoring is the detection and location of nuclear explosions; therefore, cross-correlation monitoring should be focused on finding such events. The use of physically adequate waveform templates may significantly increase the number of valid events, both natural and man-made, in the Reviewed Event Bulletin (REB) of the International Data Centre. Inadequate master-event templates may increase the number of CTBT-irrelevant events in the REB and reduce the sensitivity of the CC technique to valid events. In order to cover the entire Earth, including vast aseismic territories, with CC-based nuclear test monitoring, we conducted thorough research and defined the most appropriate real and synthetic master events representing underground explosion sources. A procedure was developed for optimizing the master-event template simulation and narrowing the classes of CC templates used in the detection and location process, based on principal and independent component analysis (PCA and ICA). Actual waveforms and metadata from the DTRA Verification Database were used to validate our approach. The detection and location results based on real and synthetic master events were compared. The prototype CC-based Global Grid monitoring system developed in the IDC during the last year was populated with different hybrid waveform templates (synthetics, synthetic components, and real components) and its performance was assessed with the world seismicity data flow, including the DPRK-2013 event. The specific features revealed in this study for the P-waves from the DPRK underground nuclear explosions (UNEs) can reduce the global detection threshold of seismic monitoring under the CTBT by 0.5 units of magnitude. This corresponds to the reduction in the test yield by a
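The waveform cross-correlation (match filter) detection underlying the Master Event technique reduces, in its simplest form, to sliding a master-event template along a continuous trace and picking the lag with the highest normalized correlation. A minimal sketch (the IDC pipeline is of course far more elaborate):

```python
import math

def normalized_cc(template, trace):
    """Return (max normalized cross-correlation coefficient, its lag)
    for a template slid along a longer trace; cc lies in [-1, 1]."""
    n = len(template)
    mt = sum(template) / n
    t0 = [v - mt for v in template]          # zero-mean template
    nt = math.sqrt(sum(v * v for v in t0))
    best_cc, best_lag = -1.0, 0
    for lag in range(len(trace) - n + 1):
        w = trace[lag:lag + n]
        mw = sum(w) / n
        w0 = [v - mw for v in w]             # zero-mean window
        nw = math.sqrt(sum(v * v for v in w0))
        if nw == 0:                          # flat window: undefined cc
            continue
        cc = sum(a * b for a, b in zip(t0, w0)) / (nt * nw)
        if cc > best_cc:
            best_cc, best_lag = cc, lag
    return best_cc, best_lag

# A scaled copy of the template buried in an otherwise quiet trace
template = [0.0, 1.0, 0.0, -1.0]
trace = [0.0, 0.0, 0.0, 0.0, 2.5, 0.0, -2.5, 0.0, 0.0, 0.0]
best_cc, best_lag = normalized_cc(template, trace)  # cc = 1.0 at lag 3
```

Because the coefficient is normalized, a template matches any amplitude-scaled copy of itself, which is what makes one master event usable for events of different sizes.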
Sung, Wen-Tsai; Chiang, Yen-Chun
2012-12-01
This study examines a wireless sensor network with real-time remote identification using the Android-based Internet of Things (HCIOT) platform in community healthcare. An improved particle swarm optimization (PSO) method is proposed to efficiently enhance the measurement precision of physiological multi-sensor data fusion in the Internet of Things (IoT) system. The improved PSO (IPSO) includes inertia weight factor design and shrinkage (constriction) factor adjustment to improve the algorithm's data fusion performance. The Android platform is employed to build multi-physiological signal processing and timely medical-care analysis. Wireless sensor network signal transmission and Internet links allow community or family members to have timely medical care network services.
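The "shrinkage factor" mentioned in this abstract usually refers to the Clerc-Kennedy constriction coefficient; a sketch of the standard computation (the paper's particular adjustment rule is not given here):

```python
import math

def constriction(c1=2.05, c2=2.05):
    """Clerc-Kennedy constriction coefficient chi for phi = c1 + c2 > 4.

    Velocities are updated as
        v = chi * (v + c1*r1*(pbest - x) + c2*r2*(gbest - x))
    which damps oscillations and guarantees convergence without an
    explicit velocity clamp."""
    phi = c1 + c2
    if phi <= 4:
        raise ValueError("constriction requires c1 + c2 > 4")
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

chi = constriction()  # about 0.7298 for the canonical c1 = c2 = 2.05
```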
78 FR 6819 - Patient Safety Organizations: Voluntary Relinquishment From the BREF PSO
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-31
... HUMAN SERVICES Agency for Healthcare Research and Quality Patient Safety Organizations: Voluntary Relinquishment From the BREF PSO AGENCY: Agency for Healthcare Research and Quality (AHRQ), HHS. ACTION: Notice of delisting. SUMMARY: The Patient Safety and Quality Improvement Act of 2005 (Patient Safety...
78 FR 70560 - Patient Safety Organizations: Voluntary Relinquishment From GE-PSO
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-26
... Quality Patient Safety Organizations: Voluntary Relinquishment From GE- PSO AGENCY: Agency for Healthcare Research and Quality (AHRQ), HHS. ACTION: Notice of delisting. SUMMARY: The Patient Safety and Quality Improvement Act of 2005 (Patient Safety Act), Public Law 109-41, 42 U.S.C. 299b-21-b-26, provides for...
76 FR 60494 - Patient Safety Organizations: Voluntary Relinquishment From HPI-PSO
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-29
... HUMAN SERVICES Agency for Healthcare Research and Quality Patient Safety Organizations: Voluntary..., a component entity of Healthcare Performance Improvement, LLC, of its status as a Patient Safety Organization (PSO). The Patient Safety and Quality Improvement Act of 2005 (Patient Safety Act), Public Law...
Affiliation, joint venture or PSO? Case studies show why provider strategies differ.
1998-03-01
Joint venture, affiliation or PSO? Here are three case studies of providers who chose different paths under Medicare risk, plus some key questions you'll want to ask of your own provider organization. Learn from these examples so you'll make the best contracting decisions.
76 FR 7853 - Patient Safety Organizations: Voluntary Delisting From HealthDataPSO
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-11
... (Patient Safety Act), Public Law 109-41, 42 U.S.C. 299b-21--b-26, provides for the formation of PSOs, which... HUMAN SERVICES Agency for Healthcare Research and Quality Patient Safety Organizations: Voluntary... from HealthDataPSO, a component entity of CCD Healthsystems and Medical Error Management, LLC, of...
76 FR 7854 - Patient Safety Organizations: Voluntary Delisting From Quality Excellence, Inc./PSO
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-11
... Medical Care, of its status as a Patient Safety Organization (PSO). The Patient Safety and Quality Improvement Act of 2005 (Patient Safety Act), Public Law 109-41, 42 U.S.C. 299b-21--b-26, provides for the... HUMAN SERVICES Agency for Healthcare Research and Quality Patient Safety Organizations:...
NASA Astrophysics Data System (ADS)
Garcia, D. Vincent Romero
Controlling the acoustical properties of sonic crystals (SCs) requires studying both the distribution of the scatterers in the structure and the intrinsic acoustical properties of the scatterers. In this work, an exhaustive analysis of the distribution of the scatterers as well as the improvement of the acoustical properties of SCs made of scatterers with absorbent and/or resonant properties is presented. Both procedures, working together or independently, provide real possibilities to control the propagation of acoustic waves through SCs. From the theoretical point of view, wave propagation through periodic and quasiperiodic structures has been analysed by means of multiple scattering theory, the plane wave expansion and the finite element method. A novel extension of the plane wave expansion that yields the complex dispersion relation for SCs is presented in this work. This technique complements the information provided by the classical methods and allows us to analyse the evanescent behaviour of the modes inside the band gaps as well as the evanescent behaviour of localized modes around point defects in SCs. The need for accurate measurements of the acoustical properties of SCs motivated the development of a novel three-dimensional acquisition system that synchronises the motion of the receiver with the acquisition of the temporal signals. A good agreement between the theoretical and experimental data is shown in this work. The joint use of optimized scatterer arrangements and the intrinsic properties of the scatterers themselves is applied to generate devices that present wide ranges of attenuated frequencies. These systems are presented as an alternative to the classic acoustic barrier, where the propagation of waves through SCs can be controlled. The results help to correctly understand the behaviour of SCs for sound localization and for the design of both waveguides and acoustic filters.
A Dynamic Optimization Technique for Siting the NASA-Clark Atlanta Urban Rain Gauge Network (NCURN)
NASA Technical Reports Server (NTRS)
Shepherd, J. Marshall; Taylor, Layi
2003-01-01
NASA satellites and ground instruments have indicated that cities like Atlanta, Georgia may create or alter rainfall. Scientists speculate that the urban heat island caused by man-made surfaces in cities impacts the heat and wind patterns that form clouds and rainfall. However, more conclusive evidence is required to substantiate findings from satellites. NASA, along with scientists at Clark Atlanta University, is implementing a dense urban rain gauge network in the metropolitan Atlanta area to support a satellite validation program called Studies of PRecipitation Anomalies from Widespread Urban Landuse (SPRAWL). SPRAWL will be conducted during the summer of 2003 to further identify and understand the impact of urban Atlanta on precipitation variability. The paper provides an overview of SPRAWL, which represents one of the more comprehensive efforts in recent years to focus exclusively on urban-impacted rainfall. The paper also introduces a novel technique for deploying rain gauges for SPRAWL. The deployment of the dense Atlanta network is unique because it utilizes Geographic Information Systems (GIS) and Decision Support Systems (DSS) to optimize deployment of the rain gauges. These computer-aided systems consider access to roads, drainage systems, tree cover, and other factors in guiding the deployment of the gauge network. GIS and DSS also provide decision-makers with additional resources and flexibility to make informed decisions while considering numerous factors. Also, the new Atlanta network and SPRAWL provide a unique opportunity to merge the high-resolution urban rain gauge network with satellite-derived rainfall products to understand how cities are changing rainfall patterns, and possibly climate.
NASA Technical Reports Server (NTRS)
Brown, Aaron J.
2015-01-01
The International Space Station's (ISS) trajectory is coordinated and executed by the Trajectory Operations and Planning (TOPO) group at NASA's Johnson Space Center. TOPO group personnel routinely generate look-ahead trajectories for the ISS that incorporate translation burns needed to maintain its orbit over the next three to twelve months. The burns are modeled as in-plane, horizontal burns, and must meet operational trajectory constraints imposed by both NASA and the Russian Space Agency. In generating these trajectories, TOPO personnel must determine the number of burns to model, each burn's Time of Ignition (TIG), and magnitude (i.e. delta-V) that meet these constraints. The current process for targeting these burns is manually intensive, and does not take advantage of more modern techniques that can reduce the workload needed to find feasible burn solutions, i.e. solutions that simply meet the constraints, or provide optimal burn solutions that minimize the total delta-V while simultaneously meeting the constraints. A two-level, hybrid optimization technique is proposed to find both feasible and globally optimal burn solutions for ISS trajectory planning. For optimal solutions, the technique breaks the optimization problem into two distinct sub-problems, one for choosing the optimal number of burns and each burn's optimal TIG, and the other for computing the minimum total delta-V burn solution that satisfies the trajectory constraints. Each of the two aforementioned levels uses a different optimization algorithm to solve one of the sub-problems, giving rise to a hybrid technique. Level 2, or the outer level, uses a genetic algorithm to select the number of burns and each burn's TIG. Level 1, or the inner level, uses the burn TIGs from Level 2 in a sequential quadratic programming (SQP) algorithm to compute a minimum total delta-V burn solution subject to the trajectory constraints. The total delta-V from Level 1 is then used as a fitness function by the genetic
The use of optimization techniques to design controlled diffusion compressor blading
NASA Technical Reports Server (NTRS)
Sanger, N. L.
1982-01-01
A method for automating compressor blade design using numerical optimization, and applied to the design of a controlled diffusion stator blade row is presented. A general purpose optimization procedure is employed, based on conjugate directions for locally unconstrained problems and on feasible directions for locally constrained problems. Coupled to the optimizer is an analysis package consisting of three analysis programs which calculate blade geometry, inviscid flow, and blade surface boundary layers. The optimizing concepts and selection of design objective and constraints are described. The procedure for automating the design of a two dimensional blade section is discussed, and design results are presented.
Diesel Engine performance improvement in a 1-D engine model using Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Karra, Prashanth
2015-12-01
A particle swarm optimization (PSO) technique was implemented to improve the engine development and optimization process by simultaneously reducing emissions and improving fuel efficiency. The optimization was performed on a 4-stroke, 4-cylinder GT-Power based 1-D diesel engine model. To achieve the multi-objective optimization, a merit function was defined that included the parameters to be optimized: nitrogen oxides (NOx), non-methane hydrocarbons (NMHC), carbon monoxide (CO), and brake-specific fuel consumption (BSFC). EPA Tier 3 emissions standards for non-road diesel engines between 37 and 75 kW of output were chosen as targets for the optimization. The combustion parameters analyzed in this study include: start of main injection, start of pilot injection, pilot fuel quantity, swirl, and tumble. The PSO was found to be very effective in quickly arriving at a solution that met the target criteria as defined in the merit function. The optimization took around 40-50 runs to find the most favourable engine operating condition under the constraints specified in the optimization. In a favourable case with a high merit function value, the NOx+NMHC and CO values were reduced to as low as 2.9 and 0.014 g/kWh, respectively. The operating conditions at this point were: 10 ATDC main SOI, -25 ATDC pilot SOI, 0.25 mg of pilot fuel, 0.45 swirl and 0.85 tumble. These results indicate that late main injections preceded by a close, small pilot injection are the most favourable conditions at the operating point tested.
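The abstract does not give the exact merit function, but a common form scalarizes the objectives by penalizing emissions above the regulatory limits and rewarding low BSFC. A hypothetical sketch (the limit and reference values are assumptions for illustration, not the paper's weights):

```python
def merit(nox_nmhc, co, bsfc,
          nox_nmhc_limit=4.7, co_limit=5.0, bsfc_ref=210.0):
    """Illustrative single-valued merit function for multi-objective
    engine tuning (hypothetical form).

    Emissions above the assumed Tier 3 style limits (g/kWh) add a
    penalty; lower brake-specific fuel consumption raises the score."""
    penalty = (max(0.0, nox_nmhc / nox_nmhc_limit - 1.0)
               + max(0.0, co / co_limit - 1.0))
    return 1000.0 / (1.0 + penalty + bsfc / bsfc_ref)

good = merit(2.9, 0.014, 200.0)   # emissions well under the limits
bad = merit(6.0, 6.0, 250.0)      # both limits exceeded, worse BSFC
```

The optimizer then simply maximizes this scalar, which is what lets a single-objective PSO handle the multi-objective trade-off.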
NASA Astrophysics Data System (ADS)
Somasundaram, P.; Muthuselvan, N. B.
This paper presents new computationally efficient improved particle swarm algorithms for solving the Security Constrained Optimal Power Flow (SCOPF) problem in power systems with the inclusion of FACTS devices. The proposed algorithms are developed based on the combined application of Gaussian and Cauchy probability distribution functions incorporated in particle swarm optimization (PSO). The power flow algorithm in the presence of a Static Var Compensator (SVC), a Thyristor Controlled Series Capacitor (TCSC) and a Unified Power Flow Controller (UPFC) has been formulated and solved. The proposed algorithms are tested on the standard IEEE 30-bus system. The analysis using PSO and modified PSO reveals that the proposed algorithms are relatively simple, efficient, reliable and suitable for real-time applications. These algorithms can provide accurate solutions with fast convergence and have the potential to be applied to other power engineering problems.
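One common way to combine Gaussian and Cauchy distributions in PSO, shown here as a hedged illustration of the idea (the paper's precise formulation may differ), is to replace the uniform random coefficients in the velocity update:

```python
import math, random

def cauchy(rng):
    """Standard Cauchy deviate via the inverse CDF (heavy-tailed)."""
    return math.tan(math.pi * (rng.random() - 0.5))

def velocity_update(v, x, pbest, gbest, rng, w=0.72, c1=1.49, c2=1.49):
    """One velocity update where the cognitive term uses |Gaussian| noise
    (fine local moves) and the social term uses |Cauchy| noise
    (occasional long jumps toward or past the global best)."""
    return [w * vd
            + c1 * abs(rng.gauss(0.0, 1.0)) * (pb - xd)
            + c2 * abs(cauchy(rng)) * (gb - xd)
            for vd, xd, pb, gb in zip(v, x, pbest, gbest)]

rng = random.Random(3)
v_new = velocity_update([0.0, 0.0], [1.0, 1.0], [0.5, 0.9], [0.0, 0.0], rng)
```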
Direct adaptive performance optimization of subsonic transports: A periodic perturbation technique
NASA Technical Reports Server (NTRS)
Espana, Martin D.; Gilyard, Glenn
1995-01-01
Aircraft performance can be optimized at the flight condition by using available redundancy among actuators. Effective use of this potential allows improved performance beyond limits imposed by design compromises. Optimization based on nominal models does not result in the best performance of the actual aircraft at the actual flight condition. An adaptive algorithm for optimizing performance parameters, such as speed or fuel flow, in flight based exclusively on flight data is proposed. The algorithm is inherently insensitive to model inaccuracies and measurement noise and biases, and can optimize several decision variables at the same time. An adaptive constraint controller integrated into the algorithm regulates the optimization constraints, such as altitude or speed, without requiring any prior knowledge of the autopilot design. The algorithm has a modular structure which allows easy incorporation (or removal) of optimization constraints or decision variables into the optimization problem. An important part of the contribution is the development of analytical tools enabling convergence analysis of the algorithm and the establishment of simple design rules. The fuel-flow minimization and velocity maximization modes of the algorithm are demonstrated on the NASA Dryden B-720 nonlinear flight simulator for the single- and multi-effector optimization cases.
Shkumat, N. A.; Siewerdsen, J. H.; Richard, S.; Paul, N. S.; Yorkston, J.; Van Metter, R.
2008-02-15
Experiments were conducted to determine optimal acquisition techniques for bone image decompositions for a prototype dual-energy (DE) imaging system. Technique parameters included the kVp pair (denoted [kVp_L/kVp_H]) and dose allocation (the proportion of dose in the low- and high-energy projections), each optimized to provide maximum signal difference-to-noise ratio in DE images. Experiments involved a chest phantom representing an average patient size and containing simulated ribs and lung nodules. Low- and high-energy kVp were varied from 60-90 and 120-150 kVp, respectively. The optimal kVp pair was determined to be [60/130] kVp, with image quality showing a strong dependence on low-kVp selection. Optimal dose allocation was approximately 0.5, i.e., an equal dose imparted by the low- and high-energy projections. The results complement earlier studies of optimal DE soft-tissue image acquisition, with differences attributed to the specific imaging task. Together, the results help to guide the development and implementation of high-performance DE imaging systems, with applications including lung nodule detection and diagnosis, pneumothorax identification, and musculoskeletal imaging (e.g., discrimination of rib fractures from metastasis).
Hepatocytes cultured in alginate microspheres: an optimized technique to study enzyme induction.
Ringel, M; von Mach, M A; Santos, R; Feilen, P J; Brulport, M; Hermes, M; Bauer, A W; Schormann, W; Tanner, B; Schön, M R; Oesch, F; Hengstler, J G
2005-01-05
) the solid alginate microspheres can be liquefied within 60 s, allowing a fast and complete harvest of hepatocytes; (ii) the alginate capsules are stable enough to withstand transport and mechanical stress; (iii) high numbers of hepatocytes can be encapsulated in short periods; (iv) defined cell numbers between 600 hepatocytes, the approximate number of cells in one capsule, and 18 × 10^6 hepatocytes, the number of hepatocytes in 6 ml of alginate, can be transferred to a culture dish or flask. Thus, encapsulated hepatocytes allow a flexible organization of experiments with respect to cell number. In conclusion, we optimized a technique for the encapsulation of hepatocytes in alginate microspheres that allows identification of enzyme induction with improved sensitivity compared to existing systems.
NASA Technical Reports Server (NTRS)
Young, Katherine C.; Sobieszczanski-Sobieski, Jaroslaw
1988-01-01
This project has two objectives. The first is to determine whether linear programming techniques can improve performance, relative to the feasible directions algorithm, when handling design optimization problems with a large number of design variables and constraints. The second is to determine whether using the Kreisselmeier-Steinhauser (KS) function to replace the constraints with one constraint will reduce the cost of the total optimization. Comparisons are made using solutions obtained with linear and non-linear methods. The results indicate that there is no cost saving in using the linear method or in using the KS function to replace constraints.
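The Kreisselmeier-Steinhauser (KS) function mentioned above folds many constraints g_i(x) <= 0 into a single smooth, conservative envelope satisfying max(g) <= KS <= max(g) + ln(m)/rho. A minimal sketch:

```python
import math

def ks(constraints, rho=50.0):
    """Kreisselmeier-Steinhauser aggregate of constraint values g_i.

    Computed in a numerically safe form by factoring out max(g) so the
    exponentials never overflow. Larger rho tracks max(g) more tightly
    at the cost of a stiffer (less smooth) function."""
    gmax = max(constraints)
    return gmax + math.log(sum(math.exp(rho * (g - gmax))
                               for g in constraints)) / rho

g = [-1.0, 0.5, 0.2]   # example constraint values; only one is violated
ks_val = ks(g)         # slightly above max(g) = 0.5
```

Enforcing the single constraint KS(x) <= 0 then conservatively enforces all m original constraints, which is exactly the reduction this abstract evaluates.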
2007-09-01
[Extraction fragment: table-of-contents entries ("Objective Space (Non-Convex Front)", "Optimization Lineage", "R&S Algorithm") interleaved with body text.] A point x* satisfies the first-order necessary conditions for Pareto optimality if there exists no feasible direction d ∈ R^n such that ∇F_k(x*)^T d ≤ 0 for all k = 1, 2, ... Then x̂ ∈ Ω meets the first-order necessary conditions (in the forms listed below) for optimality a.s.: if f is Lipschitz near x̂, then x̂ is a
Kassem, Mohamed A A; ElMeshad, Aliaa N; Fares, Ahmed R
2016-08-09
Lacidipine (LCDP) is a highly lipophilic calcium channel blocker of poor aqueous solubility, leading to poor oral absorption. This study aims to prepare and optimize LCDP nanosuspensions using the antisolvent sonoprecipitation technique to enhance the solubility and dissolution of LCDP. A three-factor, three-level Box-Behnken design was employed to optimize the formulation variables to obtain an LCDP nanosuspension of small and uniform particle size. The formulation variables were: stabilizer-to-drug ratio (A), sodium deoxycholate percentage (B), and sonication time (C). LCDP nanosuspensions were assessed for particle size, zeta potential, and polydispersity index. The formula with the highest desirability (0.969) was chosen as the optimized formula. The values of the formulation variables (A, B, and C) in the optimized nanosuspension were 1.5, 100%, and 8 min, respectively. The optimal LCDP nanosuspension had a particle size (PS) of 273.21 nm, a zeta potential (ZP) of -32.68 mV and a polydispersity index (PDI) of 0.098. The LCDP nanosuspension was characterized using X-ray powder diffraction, differential scanning calorimetry, and transmission electron microscopy. The LCDP nanosuspension showed a saturation solubility 70 times that of raw LCDP, in addition to a significantly enhanced dissolution rate due to particle size reduction and decreased crystallinity. These results suggest that the optimized LCDP nanosuspension is promising for improving the oral absorption of LCDP.
Expedite Particle Swarm Optimization Algorithm (EPSO) for Optimization of MSA
NASA Astrophysics Data System (ADS)
Rathi, Amit; Vijay, Ritu
This paper presents a new design method for a rectangular-patch microstrip antenna (MSA) using an artificial search algorithm with some constraints. The design requires two stages. In the first stage, the bandwidth of the MSA is modeled using a benchmark function. In the second stage, the output of the first stage is given as input to a modified artificial search algorithm, particle swarm optimization (PSO), which returns five parameters: dimensional width, frequency range, dielectric loss tangent, length over a ground plane with a substrate thickness, and electrical thickness. In PSO, the cognition factor and the social learning factor have an important effect on balancing local search against global search. Based on modifying these two factors, this paper presents a strategy in which the cognition (self-learning) factor dominates at the start of the process, and the social learning factor gradually gains more impact thereafter in finding the global best. The aim is to show that, under these circumstances, such modifications to PSO can give better results for the optimization of the microstrip antenna (MSA).
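The schedule described, cognition-dominant early and social-dominant late, matches the well-known time-varying acceleration coefficients (TVAC) strategy; a sketch with commonly used endpoint values (assumed, not taken from this paper):

```python
def tvac_coefficients(t, t_max, c1_init=2.5, c1_final=0.5,
                      c2_init=0.5, c2_final=2.5):
    """Linearly trade the cognition factor c1 for the social learning
    factor c2 over the run: self-experience dominates early iterations,
    swarm consensus dominates late ones."""
    frac = t / t_max
    c1 = c1_init + (c1_final - c1_init) * frac
    c2 = c2_init + (c2_final - c2_init) * frac
    return c1, c2

c1_0, c2_0 = tvac_coefficients(0, 100)     # start: cognition dominates
c1_T, c2_T = tvac_coefficients(100, 100)   # end: social learning dominates
```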
Optimizing SLN and NLC by 2^2 full factorial design: effect of homogenization technique.
Severino, Patrícia; Santana, Maria Helena A; Souto, Eliana B
2012-08-01
Solid lipid nanoparticles (SLN) and nanostructured lipid carriers (NLC) have been employed in pharmaceutics and biomedical formulations. The present study focuses on the optimization of the production process of SLN and NLC by High Shear Homogenization (HSH) and High Pressure Homogenization (HPH). To build up the surface response charts, a 2^2 full factorial design based on two independent variables was used to obtain an optimized formulation. The effects of the production process on the mean particle size, polydispersity index (PI) and zeta potential (ZP) were investigated. Optimized SLN were produced applying 20,000 rpm HSH and 500 bar HPH pressure, and optimized NLC applying 15,000 rpm HSH and 700 bar HPH pressure. This factorial design study has proven to be a useful tool in optimizing SLN (~100 nm) and NLC (~300 nm) formulations. The present results highlight the benefit of applying statistical designs in the preparation of lipid nanoparticles.
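A 2^2 full factorial design and the resulting main/interaction effect estimates can be sketched as follows (the response values are hypothetical, not data from the study):

```python
from itertools import product

def full_factorial(k):
    """All 2^k runs in coded units (-1 = low level, +1 = high level)."""
    return [list(run) for run in product((-1, 1), repeat=k)]

def effects_2x2(design, y):
    """Main effects of factors A and B and the AB interaction for a 2^2
    design: effect = (mean response at +1) - (mean response at -1)."""
    half = len(design) / 2.0
    a = sum(row[0] * yi for row, yi in zip(design, y)) / half
    b = sum(row[1] * yi for row, yi in zip(design, y)) / half
    ab = sum(row[0] * row[1] * yi for row, yi in zip(design, y)) / half
    return a, b, ab

design = full_factorial(2)        # [[-1,-1], [-1,1], [1,-1], [1,1]]
y = [10.0, 12.0, 20.0, 26.0]      # hypothetical responses (e.g. sizes)
a, b, ab = effects_2x2(design, y) # a=12.0, b=4.0, ab=2.0
```

In the study's setting, A and B would be the coded HSH speed and HPH pressure, and y the measured particle size, PI, or ZP.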
An analytic study of near terminal area optimal sequencing and flow control techniques
NASA Technical Reports Server (NTRS)
Park, S. K.; Straeter, T. A.; Hogge, J. E.
1973-01-01
Optimal flow control and sequencing of air traffic operations in the near terminal area are discussed. The near terminal area model is based on the assumptions that the aircraft enter the terminal area along precisely controlled approach paths and that the aircraft are segregated according to their near terminal area performance. Mathematical models are developed to support the optimal path generation, sequencing, and conflict resolution problems.
Comparison of Structural Optimization Techniques for a Nuclear Electric Space Vehicle
NASA Technical Reports Server (NTRS)
Benford, Andrew
2003-01-01
The purpose of this paper is to utilize the optimization method of genetic algorithms (GA) for truss design on a nuclear propulsion vehicle. Genetic algorithms are a guided, random search that mirrors Darwin's theory of natural selection and survival of the fittest. To verify the GAs' capabilities, other traditional optimization methods were used to compare the results obtained by the GAs, first on simple 2-D structures, and eventually on full-scale 3-D truss designs.
Čabala, Radomír; Bursová, Miroslava
2012-03-23
We have developed a new microextraction technique for equilibrium, non-exhaustive analyte preconcentration from aqueous solutions into organic solvents lighter than water. The key point of the method is the application of a specially designed and optimized bell-shaped extraction device (BSED). The technique has been tested and applied to the preconcentration of selected volatile and semi-volatile compounds, which were determined by gas chromatography/mass spectrometry in spiked water samples. The significant parameters of the extraction were found using chemometric procedures, and these parameters were optimized using a central composite design (CCD) for two solvents. The analyte preconcentration factors ranged from 8.3 to 161.8 (repeatability from 7 to 14%) for heptane, and from 50.0 to 105.0 (repeatability from 0 to 5%) for tert-butyl acetate. The reproducibility of the technique was within 1-8%. The limits of detection and determination were 0.1-3.3 ng mL^-1 for heptane and 0.3-10.7 ng mL^-1 for tert-butyl acetate. The new microextraction technique has been found to be a cheap, simple and flexible alternative to common procedures such as SPME or LLME. This BSED-LLME technique can also be combined with other separation methods, e.g., HPLC or CE.
NASA Astrophysics Data System (ADS)
Rahman, Md Ashiqur; Anwar, Sohel; Izadian, Afshin
2016-03-01
In this paper, a gradient-free optimization technique, namely particle swarm optimization (PSO) algorithm, is utilized to identify specific parameters of the electrochemical model of a Lithium-Ion battery with LiCoO2 cathode chemistry. Battery electrochemical model parameters are subject to change under severe or abusive operating conditions resulting in, for example, over-discharged battery, over-charged battery, etc. It is important for a battery management system to have these parameter changes fully captured in a bank of battery models that can be used to monitor battery conditions in real time. Here the PSO methodology has been successfully applied to identify four electrochemical model parameters that exhibit significant variations under severe operating conditions: solid phase diffusion coefficient at the positive electrode (cathode), solid phase diffusion coefficient at the negative electrode (anode), intercalation/de-intercalation reaction rate at the cathode, and intercalation/de-intercalation reaction rate at the anode. The identified model parameters were used to generate the respective battery models for both healthy and degraded batteries. These models were then validated by comparing the model output voltage with the experimental output voltage for the stated operating conditions. The identified Li-Ion battery electrochemical model parameters are within reasonable accuracy as evidenced by the experimental validation results.
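The parameter-identification framing, minimizing the error between model output and measured voltage with PSO, can be sketched with a toy two-parameter surrogate model standing in for the electrochemical model (everything below is illustrative; the names, model form, and bounds are assumptions):

```python
import math, random

def model(p, t):
    """Toy surrogate v(t) = p0 * exp(-p1 * t); NOT the battery model."""
    return p[0] * math.exp(-p[1] * t)

def rmse(p, data):
    """Fitness: root-mean-square error between model and measurements."""
    return math.sqrt(sum((model(p, t) - v) ** 2 for t, v in data) / len(data))

def pso_fit(data, bounds, n=30, iters=300, seed=4, w=0.72, c1=1.49, c2=1.49):
    """Gradient-free parameter identification by a compact PSO."""
    rng = random.Random(seed)
    dim = len(bounds)
    xs = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pb = [x[:] for x in xs]
    pf = [rmse(x, data) for x in xs]
    gi = min(range(n), key=lambda i: pf[i])
    gb, gf = pb[gi][:], pf[gi]
    for _ in range(iters):
        for i in range(n):
            for d, (lo, hi) in enumerate(bounds):
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pb[i][d] - xs[i][d])
                            + c2 * rng.random() * (gb[d] - xs[i][d]))
                xs[i][d] = min(hi, max(lo, xs[i][d] + vs[i][d]))
            f = rmse(xs[i], data)
            if f < pf[i]:
                pb[i], pf[i] = xs[i][:], f
                if f < gf:
                    gb, gf = xs[i][:], f
    return gb, gf

true_p = (4.0, 0.3)                                 # "unknown" parameters
data = [(t, model(true_p, t)) for t in range(10)]   # synthetic measurements
p_hat, err = pso_fit(data, bounds=[(0.0, 10.0), (0.0, 1.0)])
```

Swapping `model` for a battery voltage model and `data` for measured discharge curves gives the identification setup the abstract describes.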
NASA Astrophysics Data System (ADS)
Yadav, Basant; Ch, Sudheer; Mathur, Shashi; Adamowski, Jan
2016-12-01
In-situ bioremediation is the most common groundwater remediation procedure used for treating organically contaminated sites. A simulation-optimization approach, which incorporates a simulation model for groundwater flow and transport processes within an optimization program, can help engineers design a remediation system that best satisfies management objectives as well as regulatory constraints. In-situ bioremediation is a highly complex, non-linear process, and the modelling of such a complex system requires significant computational effort. Soft computing techniques have a flexible mathematical structure which can generalize complex nonlinear processes. In in-situ bioremediation management, a physically-based model is used for the simulation, and the simulated data are utilized by the optimization model to minimize the remediation cost. Repeatedly calling the simulator to satisfy the constraints is an extremely tedious and time-consuming process, and thus there is a need for a surrogate simulator that can reduce the computational burden. This study presents a simulation-optimization approach to achieve an accurate and cost-effective in-situ bioremediation system design for groundwater contaminated with BTEX (Benzene, Toluene, Ethylbenzene, and Xylenes) compounds. In this study, the Extreme Learning Machine (ELM) is used as a proxy simulator to replace BIOPLUME III for the simulation. The selection of ELM was made by a comparative analysis with Artificial Neural Network (ANN) and Support Vector Machine (SVM) models, as these were successfully used in previous studies of in-situ bioremediation system design. Further, a single-objective optimization problem is solved by a coupled Extreme Learning Machine (ELM)-Particle Swarm Optimization (PSO) technique to achieve the minimum cost for the in-situ bioremediation system design. The results indicate that ELM is a faster and more accurate proxy simulator than ANN and SVM. The total cost obtained by the ELM-PSO approach is held to a minimum.
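The proxy-simulator idea is what makes ELM fast: input weights and biases are drawn at random once, and only the output weights are fitted, by linear least squares. A minimal one-dimensional sketch follows; the response curve is a synthetic stand-in for the physically-based simulator's output, not BIOPLUME III data.

```python
import math, random

def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def elm_train(xs, ys, n_hidden=15, seed=1):
    """Extreme learning machine: random input weights/biases, sigmoid hidden layer,
    output weights fitted by ridge-regularized least squares (normal equations)."""
    rnd = random.Random(seed)
    w = [rnd.uniform(-4, 4) for _ in range(n_hidden)]
    b = [rnd.uniform(-4, 4) for _ in range(n_hidden)]
    H = [[1.0 / (1.0 + math.exp(-(w[j] * x + b[j]))) for j in range(n_hidden)] for x in xs]
    lam = 1e-6  # small ridge term keeps the normal equations well-posed
    A = [[sum(H[r][i] * H[r][j] for r in range(len(xs))) + (lam if i == j else 0.0)
          for j in range(n_hidden)] for i in range(n_hidden)]
    rhs = [sum(H[r][i] * ys[r] for r in range(len(xs))) for i in range(n_hidden)]
    beta = gauss_solve(A, rhs)
    return lambda x: sum(beta[j] / (1.0 + math.exp(-(w[j] * x + b[j]))) for j in range(n_hidden))

# Hypothetical stand-in for the simulator's response curve.
xs = [i / 20.0 for i in range(21)]
ys = [math.sin(3 * x) + 0.5 * x for x in xs]
proxy = elm_train(xs, ys)
max_err = max(abs(proxy(x) - y) for x, y in zip(xs, ys))
```

Because training is a single linear solve rather than iterative backpropagation, the ELM proxy can be rebuilt cheaply whenever the optimizer needs fresh training data.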
Bang, Soonam; Heo, Joon; Han, Soohee; Sohn, Hong-Gyoo
2010-01-01
Infiltration-route analysis is a military application of geospatial information system (GIS) technology. In order to find susceptible routes, optimal-path-searching algorithms are applied to minimize the cost function, which is the summed result of detection probability. The cost function was determined according to the thermal observation device (TOD) detection probability, the viewshed analysis results, and two feature layers extracted from the vector product interim terrain data. The detection probability is computed and recorded for an individual cell (50 m × 50 m), and the optimal infiltration routes are determined with A* algorithm by minimizing the summed costs on the routes from a start point to an end point. In the present study, in order to simulate the dynamic nature of a real-world problem, one thousand cost surfaces in the GIS environment were generated with randomly located TODs and randomly selected infiltration start points. Accordingly, one thousand sets of vulnerable routes for infiltration purposes could be found, which could be accumulated and presented as an infiltration vulnerability map. This application can be further utilized for both optimal infiltration routing and surveillance network design. Indeed, dynamic simulation in the GIS environment is considered to be a powerful and practical solution for optimization problems. A similar approach can be applied to the dynamic optimal routing for civil infrastructure, which requires consideration of terrain-related constraints and cost functions.
PMID:22315544
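The optimal-path search described above can be sketched with A* over a small synthetic cost surface; the grid values below are illustrative stand-ins for accumulated detection probability, not real TOD data.

```python
import heapq

def a_star(cost, start, goal):
    """A* over a 4-connected grid; cell cost = detection-probability surrogate.
    Heuristic = Manhattan distance times the minimum cell cost (admissible)."""
    rows, cols = len(cost), len(cost[0])
    cmin = min(min(row) for row in cost)
    def h(cell):
        return cmin * (abs(cell[0] - goal[0]) + abs(cell[1] - goal[1]))
    frontier = [(h(start), 0.0, start, [start])]
    seen = {}
    while frontier:
        f, g, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return g, path
        if cell in seen and seen[cell] <= g:
            continue
        seen[cell] = g
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                ng = g + cost[nr][nc]
                heapq.heappush(frontier, (ng + h((nr, nc)), ng, (nr, nc), path + [(nr, nc)]))
    return None

# Synthetic 5x5 cost surface: high values mimic cells covered by a TOD sensor.
surface = [
    [1, 1, 1, 9, 1],
    [1, 9, 1, 9, 1],
    [1, 9, 1, 1, 1],
    [1, 9, 9, 9, 1],
    [1, 1, 1, 1, 1],
]
total_cost, route = a_star(surface, (0, 0), (4, 4))
```

Repeating this search over many randomly generated surfaces, as in the study, and accumulating the resulting routes would produce the vulnerability map.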
Fan, Mengbao; Wang, Qi; Cao, Binghua; Ye, Bo; Sunny, Ali Imam; Tian, Guiyun
2016-05-07
Eddy current testing is quite a popular non-contact and cost-effective method for nondestructive evaluation of product quality and structural integrity. Excitation frequency is one of the key performance factors for defect characterization. In the literature, there are many interesting papers dealing with wide spectral content and optimal frequency in terms of detection sensitivity. However, research activity on frequency optimization with respect to characterization performances is lacking. In this paper, an investigation into optimum excitation frequency has been conducted to enhance surface defect classification performance. The influences of excitation frequency for a group of defects were revealed in terms of detection sensitivity, contrast between defect features, and classification accuracy using kernel principal component analysis (KPCA) and a support vector machine (SVM). It is observed that probe signals are the most sensitive on the whole for a group of defects when excitation frequency is set near the frequency at which maximum probe signals are retrieved for the largest defect. After the use of KPCA, the margins between the defect features are optimum from the perspective of the SVM, which adopts optimal hyperplanes for structure risk minimization. As a result, the best classification accuracy is obtained. The main contribution is that the influences of excitation frequency on defect characterization are interpreted, and experiment-based procedures are proposed to determine the optimal excitation frequency for a group of defects rather than a single defect with respect to optimal characterization performances.
PMID:27164112
Kim, Dae Wook; Kim, Sug-Whan; Burge, James H
2009-11-23
Optical surfaces can be accurately figured by computer controlled optical surfacing (CCOS) that uses well characterized sub-diameter polishing tools driven by numerically controlled (NC) machines. The motion of the polishing tool is optimized to vary the dwell time of the polisher on the workpiece according to the desired removal and the calibrated tool influence function (TIF). Operating CCOS with small and very well characterized TIF achieves excellent performance, but it takes a long time. This overall polishing time can be reduced by performing sequential polishing runs that start with large tools and finish with smaller tools. In this paper we present a variation of this technique that uses a set of different size TIFs, but the optimization is performed globally - i.e. simultaneously optimizing the dwell times and tool shapes for the entire set of polishing runs. So the actual polishing runs will be sequential, but the optimization is comprehensive. As the optimization is modified from the classical method to the comprehensive non-sequential algorithm, the performance improvement is significant. For representative polishing runs we show figuring efficiency improvement from approximately 88% to approximately 98% in terms of residual RMS (root-mean-square) surface error and from approximately 47% to approximately 89% in terms of residual RMS slope error.
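The dwell-time problem underlying CCOS can be written as nonnegative deconvolution: predicted removal is the TIF applied at each dwell position, and dwell times must stay nonnegative. The one-dimensional sketch below uses projected gradient descent on a synthetic target; it illustrates the single-tool subproblem only, not the paper's comprehensive multi-tool optimization.

```python
import math

def tif(offset, width=1.0, peak=1.0):
    """Gaussian tool influence function: removal rate at a point offset from tool center."""
    return peak * math.exp(-(offset / width) ** 2)

def solve_dwell(target, positions, iters=3000, lr=0.03):
    """Projected gradient descent on ||A d - target||^2 with d >= 0,
    where A[i][j] = tif(x_i - x_j) maps dwell at x_j to removal at x_i."""
    n = len(positions)
    A = [[tif(positions[i] - positions[j]) for j in range(n)] for i in range(n)]
    d = [0.0] * n
    for _ in range(iters):
        resid = [sum(A[i][j] * d[j] for j in range(n)) - target[i] for i in range(n)]
        for j in range(n):
            grad = 2.0 * sum(A[i][j] * resid[i] for i in range(n))
            d[j] = max(0.0, d[j] - lr * grad)  # project back onto d >= 0
    resid = [sum(A[i][j] * d[j] for j in range(n)) - target[i] for i in range(n)]
    return d, resid

xs = [0.5 * k for k in range(11)]                # surface sample points (synthetic)
target = [1.0 + 0.5 * math.sin(x) for x in xs]   # desired removal map (synthetic)
dwell, residual = solve_dwell(target, xs)
rms = math.sqrt(sum(r * r for r in residual) / len(residual))
```

The comprehensive approach in the paper extends this by stacking several TIFs of different sizes into one system and solving for all dwell maps simultaneously.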
NASA Technical Reports Server (NTRS)
Lan, C. Edward; Ge, Fuying
1989-01-01
Control system design for general nonlinear flight dynamic models is considered through numerical simulation. The design is accomplished through a numerical optimizer coupled with analysis of flight dynamic equations. The general flight dynamic equations are numerically integrated, and dynamic characteristics are then identified from the dynamic response. The design variables are determined iteratively by the optimizer to optimize a prescribed objective function which is related to desired dynamic characteristics. The generality of the method allows nonlinear aerodynamic effects and dynamic coupling to be considered in the design process. To demonstrate the method, nonlinear simulation models for F-5A and F-16 configurations are used to design dampers to satisfy specifications on flying qualities and control systems to prevent departure. The results indicate that the present method is simple in formulation and effective in satisfying the design objectives.
Jabbari, Keyvan; Azarmahd, Nazli; Babazade, Shadi; Amouheidari, Alireza
2013-04-01
Radiotherapy plays an essential role in the management of breast cancer. Three-dimensional conformal radiation therapy (3D-CRT) is applied based on 3D image information of patient anatomy. In 3D-CRT for breast cancer, one of the common techniques is the tangential technique. In this project, various parameters of the tangential and supraclavicular fields were optimized. The project was carried out on computed tomography images of 100 patients at Isfahan Milad Hospital. All patients were simulated, and all the important organs were contoured by a radiation oncologist. Two techniques in the supraclavicular region were evaluated: (1) a single field (Anterior-Posterior [AP]) with a dose of 200 cGy per fraction at 6 MV energy, which is a common technique; (2) two parallel opposed fields (AP-Posterior-Anterior [PA]), with an AP dose of 150 cGy at 6 MV and a PA dose of 50 cGy at 18 MV. In the second part of the project, the tangential fields were optimized by changing the normalization point among five points: (1) the isocenter (intersection of the gantry rotation axis and the collimator axis); (2) the middle of the thickest part of the breast, i.e., the middle of the inter-field distance (IFD); (3) the border between the lung and chest wall; (4) the physician's choice; (5) between the IFD and the isocenter. Dose distributions were compared for all patients across the different supraclavicular and tangential field methods. With the parallel opposed fields, the average lung dose was 4% more than with a single field, and the maximum received heart dose was 21.5% less than with a single field. The average dose to the planning tumor volume (PTV) in method 2 was 2% more than in method 1. In general, the AP-PA method is suggested because of its better coverage of the PTV. In the optimization of the tangential field, all methods have similar coverage of the PTV. Each method has spatial advantages and disadvantages. If it is important for the physician to reduce the dose received by the lung and heart, the fifth method is suggested since in this method the average and maximum received dose
Controller design based on μ analysis and PSO algorithm.
Lari, Ali; Khosravi, Alireza; Rajabi, Farshad
2014-03-01
In this paper an evolutionary algorithm is employed to address the controller design problem based on μ analysis. Conventional solutions to the μ synthesis problem, such as the D-K iteration method, often lead to high-order, impractical controllers. In the proposed approach, a constrained optimization problem based on μ analysis is defined, and an evolutionary approach is then employed to solve it. The goal is to achieve a more practical controller with lower order. A benchmark system, the two-tank system, is considered to evaluate the performance of the proposed approach. Simulation results show that the proposed controller performs more effectively than the high-order H(∞) controller and responds comparably to the high-order D-K iteration controller, the common solution to the μ synthesis problem.
Enabling a viable technique for the optimization of LNG carrier cargo operations
NASA Astrophysics Data System (ADS)
Alaba, Onakoya Rasheed; Nwaoha, T. C.; Okwu, M. O.
2016-09-01
In this study, we optimize the loading and discharging operations of the Liquefied Natural Gas (LNG) carrier. First, we identify the required precautions for LNG carrier cargo operations. Next, we prioritize these precautions using the analytic hierarchy process (AHP) and experts' judgments, in order to optimize the operational loading and discharging exercises of the LNG carrier, prevent system failure and human error, and reduce the risk of marine accidents. Thus, the objective of our study is to increase the level of safety during cargo operations.
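The AHP prioritization step described above reduces to extracting the principal eigenvector of a pairwise-comparison matrix and checking its consistency. A minimal sketch follows; the 3x3 judgment matrix and the precaution names in the comment are hypothetical, not the experts' actual judgments.

```python
def ahp_priorities(M, iters=100):
    """Principal eigenvector of a pairwise-comparison matrix via power iteration,
    plus Saaty's consistency index CI = (lambda_max - n) / (n - 1)."""
    n = len(M)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]
    lam = sum(sum(M[i][j] * w[j] for j in range(n)) / w[i] for i in range(n)) / n
    ci = (lam - n) / (n - 1)
    return w, ci

# Hypothetical pairwise judgments for three LNG cargo-operation precautions
# (e.g. mooring checks, ESD system test, cargo-tank pressure monitoring).
M = [
    [1.0, 3.0, 5.0],
    [1 / 3.0, 1.0, 2.0],
    [1 / 5.0, 1 / 2.0, 1.0],
]
weights, ci = ahp_priorities(M)
```

A CI well below 0.1 (relative to the random index for the matrix size) indicates the experts' judgments are acceptably consistent.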
NASA Astrophysics Data System (ADS)
Khusainov, R.; Klimchik, A.; Magid, E.
2017-01-01
The paper presents a comparison analysis of two approaches to defining leg trajectories for biped locomotion. The first operates only with the kinematic limitations of the leg joints and finds the maximum possible locomotion speed for the given limits. The second defines leg trajectories from the dynamic stability point of view and utilizes the ZMP criterion. We show that the two methods give different trajectories and demonstrate that trajectories based on pure dynamic optimization cannot be realized due to joint limits. Kinematic optimization provides an unstable solution, which can be balanced by upper body movement.
Converting PSO dynamics into complex network - Initial study
NASA Astrophysics Data System (ADS)
Pluhacek, Michal; Janostik, Jakub; Senkerik, Roman; Zelinka, Ivan
2016-06-01
This paper presents an initial study on the possibility of capturing the inner dynamics of the Particle Swarm Optimization algorithm in a complex network structure. Inspired by previous works, two different approaches for creating the complex network are presented. Visualizations of the networks are presented and commented on. Possibilities for future applications of the proposed design are given in detail.
NASA Astrophysics Data System (ADS)
Mishra, S. K.; Sahithi, V. V. D.; Rao, C. S. P.
2016-09-01
The lot sizing problem deals with finding optimal order quantities that minimize the ordering and holding costs of a product mix. When multiple items at multiple levels with all capacity restrictions are considered, the lot sizing problem becomes NP-hard. Many heuristics developed in the past have failed due to problem size, computational complexity, and time. However, the authors were successful in developing a PSO-based technique, namely the iterative improvement binary particle swarm technique, to address very large capacitated multi-item multi-level lot sizing (CMIMLLS) problems. First, a binary particle swarm optimization (BPSO) algorithm is used to find a solution in a reasonable time, and then an iterative-improvement local search mechanism is employed to improve the solution obtained by the BPSO algorithm. This hybrid mechanism of applying local search to the global solution is found to improve the quality of solutions with respect to time; thus, the IIBPSO method is found to be the best and shows excellent results.
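The BPSO-plus-local-search idea can be illustrated on a tiny single-item, uncapacitated lot sizing instance (order-or-not per period, ordering cost vs. holding cost); the demands and costs below are invented, and the real CMIMLLS formulation is far richer.

```python
import math, random

DEMAND = [20, 50, 10, 40, 30]   # demand per period (synthetic)
K, H = 100.0, 1.0               # ordering cost, per-unit per-period holding cost

def lot_cost(bits):
    """Wagner-Whitin-style cost: an order in period t covers demand until the next order."""
    if not bits[0]:
        return float("inf")     # demand in period 0 must be covered
    total = 0.0
    last = 0
    for t, b in enumerate(bits):
        if b:
            total += K
            last = t
        total += H * (t - last) * DEMAND[t]
    return total

def bpso(n_bits, fitness, n_particles=15, iters=80, seed=2):
    """Binary PSO (sigmoid of velocity = bit probability), then greedy bit-flip local search."""
    rnd = random.Random(seed)
    pos = [[rnd.random() < 0.5 for _ in range(n_bits)] for _ in range(n_particles)]
    vel = [[0.0] * n_bits for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [fitness(p) for p in pos]
    gi = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[gi][:], pbest_f[gi]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_bits):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rnd.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rnd.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = rnd.random() < 1.0 / (1.0 + math.exp(-vel[i][d]))
            f = fitness(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    improved = True              # iterative-improvement phase on the global best
    while improved:
        improved = False
        for d in range(n_bits):
            trial = gbest[:]
            trial[d] = not trial[d]
            if fitness(trial) < gbest_f:
                gbest, gbest_f, improved = trial, fitness(trial), True
    return gbest, gbest_f

plan, cost = bpso(len(DEMAND), lot_cost)
```

The bit-flip pass is the "iterative improvement" component: it polishes the swarm's global best until no single-period change in the ordering plan reduces cost.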
Carver, Charles S.; Scheier, Michael F.; Segerstrom, Suzanne C.
2010-01-01
Optimism is an individual difference variable that reflects the extent to which people hold generalized favorable expectancies for their future. Higher levels of optimism have been related prospectively to better subjective well-being in times of adversity or difficulty (i.e., controlling for previous well-being). Consistent with such findings, optimism has been linked to higher levels of engagement coping and lower levels of avoidance, or disengagement, coping. There is evidence that optimism is associated with taking proactive steps to protect one's health, whereas pessimism is associated with health-damaging behaviors. Consistent with such findings, optimism is also related to indicators of better physical health. The energetic, task-focused approach that optimists take to goals also relates to benefits in the socioeconomic world. Some evidence suggests that optimism relates to more persistence in educational efforts and to higher later income. Optimists also appear to fare better than pessimists in relationships. Although there are instances in which optimism fails to convey an advantage, and instances in which it may convey a disadvantage, those instances are relatively rare. In sum, the behavioral patterns of optimists appear to provide models of living for others to learn from. PMID:20170998
Iversen, Ole-Jan; Lysvand, Hilde; Slupphaug, Geir
2017-01-01
Autoimmune diseases are characterized by chronic inflammatory reactions localized to an organ or organ system. They are caused by loss of immunologic tolerance toward self-antigens, causing formation of autoantibodies that mistakenly attack the body's own tissues. Psoriasis is a chronic inflammatory autoimmune skin disease in which the underlying molecular mechanisms remain elusive. In this review, we present evidence accumulated through more than three decades that the serpin-derived protein Pso p27 is an autoantigen in psoriasis and probably also in other chronic inflammatory diseases. Pso p27 is derived from the serpin molecules SERPINB3 and SERPINB4 through non-canonical cleavage by mast cell chymase. In psoriasis, it is exclusively found in skin lesions and not in uninvolved skin. The serpins are cleaved into three fragments that remain associated as a Pso p27 complex with novel immunogenic properties and an increased tendency to form large aggregates compared to native SERPINB3/B4. The amount of Pso p27 is directly correlated with disease activity, and through formation of complement-activating immune complexes, Pso p27 contributes to the inflammation in the skin lesions. SERPINB3/B4 are expressed in skin fibroblasts and keratinocytes, but normally absent in mast cells. Overexpression of the serpins may be induced by inflammation and hypoxia, resulting in mast cell uptake via as yet unknown mechanisms. Here the generation and subsequent release of Pso p27 aggregates may promote an inflammatory loop that contributes to the chronicity of psoriasis and other autoimmune diseases.
NASA Astrophysics Data System (ADS)
Nath, Bikram; Mondal, Chandan Kumar
2014-08-01
We have designed and optimised a combined laser pulse using an optimal control theory-based adaptive simulated annealing technique for selective vibrational excitation and photo-dissociation. Since the proper choice of pulses for specific excitation and dissociation phenomena is very difficult, we have designed a linearly combined pulse for such processes and optimised the different parameters involved so as to obtain an efficient combined pulse. The technique frees us from choosing any arbitrary type of pulse and provides a ground for checking their suitability. We have also emphasised how the performance of the simulated annealing technique can be improved by introducing an adaptive step length for the different variables during the optimisation process, and pointed out how the initial temperature for the optimisation can be chosen by introducing a heating/cooling step to reduce the number of annealing steps, making the method cost effective.
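The two ingredients emphasised above, an adaptive step length and a probe-based choice of initial temperature, can be sketched as follows. The Rastrigin surface stands in for the pulse-parameter objective, and the adaptation rules (grow on accept, shrink on reject; temperature from a short random probe) are generic illustrations, not the authors' exact scheme.

```python
import math, random

def adaptive_sa(f, x0, step0=1.0, t0=None, iters=4000, seed=3):
    """Simulated annealing with an adaptive step length: the step grows when proposals
    are accepted and shrinks when rejected, keeping the acceptance rate moderate.
    The initial temperature is set from the spread of a short random 'heating' probe."""
    rnd = random.Random(seed)
    x, fx = x0[:], f(x0)
    if t0 is None:  # probe the landscape near x0 to pick a starting temperature
        probes = [f([xi + rnd.uniform(-1, 1) for xi in x0]) for _ in range(30)]
        t0 = max(probes) - min(probes) + 1e-12
    t, step = t0, step0
    best, fbest = x[:], fx
    for k in range(iters):
        y = [xi + rnd.uniform(-step, step) for xi in x]
        fy = f(y)
        if fy < fx or rnd.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            step *= 1.05          # accepted: explore more widely
            if fx < fbest:
                best, fbest = x[:], fx
        else:
            step *= 0.97          # rejected: refine locally
        t = t0 * 0.999 ** k       # geometric cooling
    return best, fbest

# Multimodal test surface (stand-in for the pulse-parameter objective):
def rastrigin(v):
    return sum(vi * vi - 10 * math.cos(2 * math.pi * vi) + 10 for vi in v)

best, fbest = adaptive_sa(rastrigin, [3.0, -2.5])
```

The multiplicative update self-tunes the step so the acceptance rate settles near a fixed value determined by the grow/shrink factors, which is the practical benefit of making the step adaptive.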
Application of multi-objective nonlinear optimization technique for coordinated ramp-metering
Haj Salem, Habib; Farhi, Nadir; Lebacque, Jean Patrick E-mail: nadir.frahi@ifsttar.fr
2015-03-10
This paper aims at developing a multi-objective nonlinear optimization algorithm applied to coordinated motorway ramp metering. The multi-objective function includes two components: traffic and safety. Off-line simulation studies were performed on the A4 motorway in France, including four on-ramps.
Application of Numerical Optimization Technique to Design of Forward-Curved Blades Centrifugal Fan
NASA Astrophysics Data System (ADS)
Kim, Kwang-Yong; Seo, Seoung-Jin
This paper presents a response surface optimization method using three-dimensional Navier-Stokes analysis to optimize the shape of a forward-curved blades centrifugal fan. For the numerical analysis, Reynolds-averaged Navier-Stokes equations with the k-ɛ turbulence model are discretized with finite volume approximations. In order to reduce the huge computing time due to the large number of blades in forward-curved blades centrifugal fans, the flow inside the fan is regarded as steady by introducing impeller force models. Three geometric variables, i.e., location of cut-off, radius of cut-off, and width of impeller, and one operating variable, i.e., flow rate, were selected as design variables. As a main result of the optimization, the efficiency was successfully improved, and the optimum design flow rate was found by using flow rate as one of the design variables. It was found that the optimization process provides reliable designs of this kind of fan with reasonable computing time.
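The response surface idea is to fit a low-order polynomial to a handful of expensive CFD samples and optimize the cheap surrogate instead. A one-variable sketch follows (the paper fits a multi-variable surface); the efficiency-vs-flow-rate samples are synthetic and exactly quadratic so the fitted optimum is recoverable.

```python
def fit_quadratic_surface(samples):
    """Least-squares fit of y = b0 + b1*x + b2*x^2 to (x, y) samples via normal equations,
    then the stationary point x* = -b1 / (2*b2)."""
    n = len(samples)
    sx = [sum(x ** k for x, _ in samples) for k in range(5)]   # power sums of x
    sy = [sum(y * x ** k for x, y in samples) for k in range(3)]
    A = [[sx[i + j] for j in range(3)] for i in range(3)]
    b = sy
    def det3(m):  # 3x3 determinant for Cramer's rule
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det3(A)
    coeffs = []
    for c in range(3):
        Ac = [[b[i] if j == c else A[i][j] for j in range(3)] for i in range(3)]
        coeffs.append(det3(Ac) / d)
    b0, b1, b2 = coeffs
    return coeffs, -b1 / (2 * b2)

# Synthetic efficiency-vs-flow-rate samples peaking near x = 1.2 (hypothetical units).
samples = [(x / 10.0, 0.8 - 0.05 * (x / 10.0 - 1.2) ** 2) for x in range(5, 21, 3)]
coeffs, x_opt = fit_quadratic_surface(samples)
```

With a negative quadratic coefficient the stationary point is a maximum, so x_opt plays the role of the optimum design flow rate on the fitted surface.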
NASA Technical Reports Server (NTRS)
Martini, William R.
1989-01-01
A FORTRAN computer code is described that could be used to design and optimize a free-displacer, free-piston Stirling engine similar to the RE-1000 engine made by Sunpower. The code contains options for specifying displacer and power piston motion or for allowing these motions to be calculated by a force balance. The engine load may be a dashpot, inertial compressor, hydraulic pump or linear alternator. Cycle analysis may be done by isothermal analysis or adiabatic analysis. Adiabatic analysis may be done using the Martini moving gas node analysis or the Rios second-order Runge-Kutta analysis. Flow loss and heat loss equations are included. Graphical displays of engine motions, pressures, and temperatures are included. Programming for optimizing up to 15 independent dimensions is included. Sample performance results are shown for both specified and unconstrained piston motions; these results are shown as generated by each of the two Martini analyses. Two sample optimization searches are shown using specified piston motion isothermal analysis, one for three adjustable inputs and one for four. Also, two optimization searches for calculated piston motion are presented, for three and for four adjustable inputs. The effect of leakage is evaluated. Suggestions for further work are given.
Bai, Mingsian R; Hsieh, Ping-Ju; Hur, Kur-Nan
2009-02-01
The performance of the minimum mean-square error noise reduction (MMSE-NR) algorithm in conjunction with time-recursive averaging (TRA) for noise estimation is found to be very sensitive to the choice of two recursion parameters. To address this problem in a more systematic manner, this paper proposes an optimization method to efficiently search for the optimal parameters of the MMSE-TRA-NR algorithms. The objective function is based on a regression model, whereas the optimization process is carried out with the simulated annealing algorithm, which is well suited for problems with many local optima. Another NR algorithm proposed in the paper employs linear prediction coding as a preprocessor for extracting the correlated portion of human speech. Objective and subjective tests were undertaken to compare the optimized MMSE-TRA-NR algorithm with several conventional NR algorithms. The results of the subjective tests were processed using analysis of variance to establish statistical significance. A post hoc test, Tukey's Honestly Significant Difference, was conducted to further assess the pairwise differences between the NR algorithms.
Gorman, A; Seabrook, G; Brakken, A; Dubois, M; Marn, C; Wilson, C; Jacobson, D; Liu, Y
2015-06-15
Purpose: Small surgical devices and needles are used in many surgical procedures. Conventionally, an x-ray film is taken to identify missing devices/needles if the post-procedure count is incorrect. There are no data to indicate the smallest surgical devices/needles that can be identified with digital radiography (DR), or its optimized acquisition technique. Methods: In this study, the DR equipment used is a Canon RadPro mobile with a CXDI-70c wireless DR plate, and the same DR plate on a fixed Siemens Multix unit. Small surgical devices and needles tested include Rubber Shod, Bulldog, Fogarty Hydrogrip, and needles with sizes 3-0 C-T1 through 8-0 BV175-6. They are imaged with PMMA block phantoms with thicknesses of 2-8 inches, and an abdomen phantom. Various DR techniques are used. Images are reviewed on the portable x-ray acquisition display, a clinical workstation, and a diagnostic workstation. Results: All small surgical devices and needles are visible in portable DR images with 2-8 inches of PMMA. However, when they are imaged with the abdomen phantom plus 2 inches of PMMA, needles shorter than 9.3 mm cannot be visualized at the optimized technique of 81 kV and 16 mAs. There is no significant difference in visualization with various techniques, or between the mobile and fixed radiography units. However, there is a noticeable difference in visualizing the smallest needle on a diagnostic reading workstation compared to the acquisition display of a portable x-ray unit. Conclusion: DR images should be reviewed on a diagnostic reading workstation. Using optimized DR techniques, the smallest needle that can be identified in all phantom studies is 9.3 mm long. Sample DR images of various small surgical devices/needles available on a diagnostic workstation for comparison may improve their identification. Further in vivo study is needed to confirm the optimized digital radiography technique for identification of lost small surgical devices and needles.
Selectively-informed particle swarm optimization
Gao, Yang; Du, Wenbo; Yan, Gang
2015-01-01
Particle swarm optimization (PSO) is a nature-inspired algorithm that has shown outstanding performance in solving many realistic problems. In the original PSO and most of its variants all particles are treated equally, overlooking the impact of structural heterogeneity on individual behavior. Here we employ complex networks to represent the population structure of swarms and propose a selectively-informed PSO (SIPSO), in which the particles choose different learning strategies based on their connections: a densely-connected hub particle gets full information from all of its neighbors while a non-hub particle with few connections can only follow a single yet best-performed neighbor. Extensive numerical experiments on widely-used benchmark functions show that our SIPSO algorithm remarkably outperforms the PSO and its existing variants in success rate, solution quality, and convergence speed. We also explore the evolution process from a microscopic point of view, leading to the discovery of different roles that the particles play in optimization. The hub particles guide the optimization process towards correct directions while the non-hub particles maintain the necessary population diversity, resulting in the optimum overall performance of SIPSO. These findings deepen our understanding of swarm intelligence and may shed light on the underlying mechanism of information exchange in natural swarm and flocking behaviors. PMID:25787315
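The selective-information rule described in this abstract (fully informed hubs, best-neighbor followers) can be sketched on a simple hub-and-ring network; the topology, coefficients, and test function below are illustrative choices, not the paper's exact experimental setup.

```python
import random

def sipso(f, bounds, n=20, iters=100, w=0.7, c=1.5, seed=4):
    """Selectively-informed PSO on a hub-and-ring network: particle 0 (the hub)
    is fully informed by all its neighbors' personal bests; every other particle
    follows only its single best-performing neighbor."""
    rnd = random.Random(seed)
    dim = len(bounds)
    neigh = {0: list(range(1, n))}                 # hub sees everyone
    for i in range(1, n):
        # each non-hub particle: the hub plus its two ring neighbors
        neigh[i] = [0, 1 + (i % (n - 1)), 1 + ((i - 2) % (n - 1))]
    pos = [[rnd.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pb = [p[:] for p in pos]
    pbf = [f(p) for p in pos]
    for _ in range(iters):
        for i in range(n):
            if i == 0:   # hub: average attraction toward all neighbors' personal bests
                target = [sum(pb[j][d] for j in neigh[i]) / len(neigh[i]) for d in range(dim)]
            else:        # non-hub: follow the single best-performed neighbor
                jbest = min(neigh[i], key=lambda j: pbf[j])
                target = pb[jbest]
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c * rnd.random() * (pb[i][d] - pos[i][d])
                             + c * rnd.random() * (target[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            fi = f(pos[i])
            if fi < pbf[i]:
                pb[i], pbf[i] = pos[i][:], fi
    k = min(range(n), key=lambda i: pbf[i])
    return pb[k], pbf[k]

sphere = lambda v: sum(x * x for x in v)
best, fbest = sipso(sphere, [(-5.0, 5.0)] * 3)
```

The division of labor the authors report is visible in the rule itself: the hub aggregates information and drives convergence, while sparsely connected particles track only one neighbor and so preserve diversity.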
NASA Astrophysics Data System (ADS)
Shamarokov, A. S.; Zorin, V. M.; Dai, Fam Kuang
2016-03-01
At the current stage of development of nuclear power engineering, high demands are made on nuclear power plants (NPPs), including on their economy. Under these conditions, improving the quality of NPPs requires, in particular, reasonably choosing the values of the numerous controlled parameters of the technological (heat) scheme. Furthermore, the chosen values should correspond to the economic conditions of NPP operation, which usually lie a considerable time interval beyond the point at which the parameters are chosen. The article presents a technique for optimizing the controlled parameters of the heat circuit of a steam turbine plant for the future. Its particularity is that the results are obtained as a function of a complex parameter combining the external economic and operating parameters, which remains relatively stable under a changing economic environment. The article presents the results of optimization, according to this technique, of the minimum temperature driving forces in the surface heaters of the heat regeneration system of a K-1200-6.8/50 steam turbine plant. For the optimization, the collector-screen heaters of high and low pressure developed at the OAO All-Russia Research and Design Institute of Nuclear Power Machine Building, which in the authors' opinion have certain advantages over other types of heaters, were chosen. The optimality criterion was the change in annual reduced costs for the NPP compared to the version accepted as the baseline. The influence on the solution of independent variables not included in the complex parameter was analyzed. The optimization task was solved using the alternating-variable descent method. The obtained values of the minimum temperature driving forces can guide the design of new nuclear plants with a heat circuit similar to that considered here.
NASA Astrophysics Data System (ADS)
Langton, John T.; Caroli, Joseph A.; Rosenberg, Brad
2008-04-01
To support an Effects Based Approach to Operations (EBAO), Intelligence, Surveillance, and Reconnaissance (ISR) planners must optimize collection plans within an evolving battlespace. A need exists for a decision support tool that allows ISR planners to rapidly generate and rehearse high-performing ISR plans that balance multiple objectives and constraints to address dynamic collection requirements for assessment. To meet this need we have designed an evolutionary algorithm (EA)-based "Integrated ISR Plan Analysis and Rehearsal System" (I2PARS) to support Effects-based Assessment (EBA). I2PARS supports ISR mission planning and dynamic replanning to coordinate assets and optimize their routes, allocation and tasking. It uses an evolutionary algorithm to address the large parametric space of route-finding problems which is sometimes discontinuous in the ISR domain because of conflicting objectives such as minimizing asset utilization yet maximizing ISR coverage. EAs are uniquely suited for generating solutions in dynamic environments and also allow user feedback. They are therefore ideal for "streaming optimization" and dynamic replanning of ISR mission plans. I2PARS uses the Non-dominated Sorting Genetic Algorithm (NSGA-II) to automatically generate a diverse set of high performing collection plans given multiple objectives, constraints, and assets. Intended end users of I2PARS include ISR planners in the Combined Air Operations Centers and Joint Intelligence Centers. Here we show the feasibility of applying the NSGA-II algorithm and EAs in general to the ISR planning domain. Unique genetic representations and operators for optimization within the ISR domain are presented along with multi-objective optimization criteria for ISR planning. Promising results of the I2PARS architecture design, early software prototype, and limited domain testing of the new algorithm are discussed. We also present plans for future research and development, as well as technology
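The core of NSGA-II referenced above is its fast non-dominated sort plus crowding distance. A self-contained sketch of those two building blocks follows; the toy objective pairs stand in for (asset utilization, negative ISR coverage) and are invented for illustration.

```python
def fast_nondominated_sort(points):
    """NSGA-II fast non-dominated sort for minimization: returns fronts as index lists."""
    n = len(points)
    dominated_by = [[] for _ in range(n)]   # S_p: solutions that p dominates
    dom_count = [0] * n                     # n_p: how many solutions dominate p
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    for p in range(n):
        for q in range(n):
            if dominates(points[p], points[q]):
                dominated_by[p].append(q)
            elif dominates(points[q], points[p]):
                dom_count[p] += 1
    fronts = [[p for p in range(n) if dom_count[p] == 0]]
    while fronts[-1]:
        nxt = []
        for p in fronts[-1]:
            for q in dominated_by[p]:
                dom_count[q] -= 1
                if dom_count[q] == 0:
                    nxt.append(q)
        fronts.append(nxt)
    return fronts[:-1]

def crowding_distance(points, front):
    """Crowding distance within one front (boundary points get infinity)."""
    dist = {i: 0.0 for i in front}
    m = len(points[0])
    for k in range(m):
        order = sorted(front, key=lambda i: points[i][k])
        dist[order[0]] = dist[order[-1]] = float("inf")
        span = points[order[-1]][k] - points[order[0]][k] or 1.0
        for a, i in enumerate(order[1:-1], 1):
            dist[i] += (points[order[a + 1]][k] - points[order[a - 1]][k]) / span
    return dist

# Toy objective pairs: (asset utilization, negative ISR coverage), both minimized.
pts = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5), (2.5, 2.5)]
fronts = fast_nondominated_sort(pts)
```

In the full algorithm, selection prefers lower front rank and, within a front, larger crowding distance, which is how NSGA-II maintains a diverse set of high-performing collection plans.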
Optimization of the tungsten oxide technique for measurement of atmospheric ammonia
NASA Technical Reports Server (NTRS)
Brown, Kenneth G.
1987-01-01
Hollow tubes coated with tungstic acid have been shown to be of value in the determination of ammonia and nitric acid in ambient air. Practical application of this technique was demonstrated utilizing an automated sampling system for in-flight collection and analysis of atmospheric samples. Due to time constraints these previous measurements were performed on tubes that had not been well characterized in the laboratory. As a result the experimental precision could not be accurately estimated. Since the technique was being compared to other techniques for measuring these compounds, it became necessary to perform laboratory tests which would establish the reliability of the technique. This report is a summary of these laboratory experiments as they are applied to the determination of ambient ammonia concentration.
Optimization of the Automated Spray Layer-by-Layer Technique for Thin Film Deposition
2010-06-01
The operational parameters of the automated Spray-LbL technique for thin film deposition have been investigated in order to identify their effects on film thickness and roughness. We use the automated Spray-LbL system developed at MIT by the Hammond lab to build… This interdiffusion is investigated using both the conventional dipped LbL and Spray-LbL deposition techniques. Interdiffusion is shown to be dependent
Application of Optimization Techniques to Spectrally Modulated, Spectrally Encoded Waveform Design
2008-09-01
communication applications include television (TV), AM radio, FM radio, and early cellular telephones. Digital communication techniques differ from analog… fourth generation (4G) communication systems based on Cognitive Radio (CR) and Software Defined Radio (SDR) techniques. As 4G SMSE communications… design that replaces previous trial-and-error methods. The research objective has been achieved in the sense that 4G communication design engineers now
NASA Astrophysics Data System (ADS)
Verma, Harish Kumar; Jain, Cheshta
2016-09-01
In this article, a hybrid algorithm of particle swarm optimization (PSO) with statistical parameters (HSPSO) is proposed. Basic PSO has low search precision on shifted multimodal problems because it tends to fall into local minima. The proposed approach uses statistical characteristics to update the velocity of each particle, helping particles avoid local minima and search for the global optimum with improved convergence. The performance of the newly developed algorithm is verified on various standard multimodal, multivariable, and shifted hybrid composition benchmark problems. Further, HSPSO is compared with other PSO variants on frequency control of a hybrid renewable energy system comprising a solar system, a wind system, a diesel generator, an aqua electrolyzer, and an ultracapacitor. A significant improvement in the convergence characteristics of HSPSO over the other PSO variants is observed on both the benchmark optimization problems and the renewable hybrid system problem.
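The paper's statistical velocity-update rule cannot be recovered from the abstract alone, but the canonical global-best PSO it modifies can be sketched as follows (a minimal, self-contained illustration minimizing a sphere benchmark; the function name, parameter defaults, and benchmark are conventional choices, not the authors'):

```python
import random

def pso_minimize(f, dim, n_particles=30, iters=200, seed=1,
                 w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    """Minimize f over [lo, hi]^dim with a basic global-best PSO."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                    # personal bests
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + cognitive pull + social pull
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

sphere = lambda x: sum(xi * xi for xi in x)
best, best_val = pso_minimize(sphere, dim=5)
```

In HSPSO the velocity update is additionally informed by statistical characteristics of the swarm; the baseline above uses only the stochastic cognitive and social terms.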
Singular perturbation techniques for on-line optimal flight path control
NASA Technical Reports Server (NTRS)
Calise, A. J.
1979-01-01
This paper presents a partial evaluation of the use of singular perturbation methods for developing computer algorithms for on-line optimal control of aircraft. The evaluation is based on a study of the minimum-time intercept problem using F-4 aerodynamic and propulsion data as a baseline. The extensions over previous work on this subject are that aircraft turning dynamics (in addition to position and energy dynamics) are included in the analysis, the algorithm is developed for a moving end point and is adaptive to unpredictable target maneuvers, and short-range maneuvers that do not have a cruise leg are included. Particular attention is given to identifying those quantities that can be precomputed and stored (as a function of aircraft total energy), thus greatly reducing the onboard computational load. Numerical results are given that illustrate the nature of the optimal intercept flight paths, and an estimate is given for the execution time and storage requirements of the control algorithm.
PID Controller Design Based on Global Optimization Technique with Additional Constraints
NASA Astrophysics Data System (ADS)
Ozana, Stepan; Docekal, Tomas
2016-05-01
This paper deals with the design of a PID controller using global optimization methods implemented in the Matlab environment and the Optimization Toolbox. It is based on minimization of a chosen integral criterion subject to additional requirements on control quality, such as overshoot, phase margin, and limits on the manipulated value. The objective function also respects user-defined weight coefficients for its particular terms, allowing different penalization of individual requirements that often clash with each other, such as overshoot and phase margin. The described solution is intended for continuous linear time-invariant static systems up to 4th order and is thus sufficient for most real control processes in practice.
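The core idea, minimizing an integral criterion with penalty terms for clashing requirements, can be shown with a hedged toy analogue (not the paper's Matlab setup): a PI controller on a hypothetical first-order plant dy/dt = -y + u, with an ISE criterion plus a weighted overshoot penalty, minimized by a crude grid search standing in for the global optimizer.

```python
def step_cost(kp, ki, T=10.0, dt=0.01, overshoot_w=50.0):
    """Closed-loop unit-step ISE for PI control of the plant dy/dt = -y + u,
    plus a weighted penalty on overshoot (one 'additional constraint')."""
    y, integ, cost, peak = 0.0, 0.0, 0.0, 0.0
    for _ in range(int(T / dt)):
        e = 1.0 - y                  # tracking error against a unit step
        integ += e * dt
        u = kp * e + ki * integ      # PI control law
        y += (-y + u) * dt           # Euler step of the plant
        cost += e * e * dt           # integral squared error
        peak = max(peak, y)
    return cost + overshoot_w * max(0.0, peak - 1.0)

# crude global search over an integer gain grid
best = min(((kp, ki) for kp in range(1, 11) for ki in range(1, 11)),
           key=lambda g: step_cost(*g))
```

The weight `overshoot_w` plays the role of the user-defined penalization coefficients described in the abstract.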
Application of direct inverse analogy method (DIVA) and viscous design optimization techniques
NASA Technical Reports Server (NTRS)
Greff, E.; Forbrich, D.; Schwarten, H.
1991-01-01
A direct-inverse approach to the transonic design problem was presented in its initial state at the First International Conference on Inverse Design Concepts and Optimization in Engineering Sciences (ICIDES-1). Further applications of the direct inverse analogy (DIVA) method to the design of airfoils and incremental wing improvements, together with experimental verification, are reported here. First results of a new viscous design code, also of the residual-correction type with semi-inverse boundary layer coupling, are compared with DIVA; this code may enhance the accuracy of trailing-edge design for highly loaded airfoils. Finally, the capabilities of an optimization routine coupled with the two viscous full-potential solvers are investigated in comparison to the inverse method.
NASA Technical Reports Server (NTRS)
Levy, R.; Chai, K.
1978-01-01
A description is presented of an effective optimality criterion computer design approach for member size selection to improve frequency characteristics for moderately large structure models. It is shown that the implementation of the simultaneous iteration method within a natural frequency structural design optimization provides a method which is more efficient in isolating the lowest natural frequency modes than the frequently applied Stodola method. Additional computational advantages are derived by using previously converged eigenvectors at the start of the iterations during the second and the following design cycles. Vectors with random components can be used at the first design cycle, which, in relation to the entire computer time for the design program, results in only a moderate computational penalty.
NASA Technical Reports Server (NTRS)
Zang, Thomas A.; Green, Lawrence L.
1999-01-01
A challenge for the fluid dynamics community is to adapt to and exploit the trend towards greater multidisciplinary focus in research and technology. The past decade has witnessed substantial growth in the research field of Multidisciplinary Design Optimization (MDO). MDO is a methodology for the design of complex engineering systems and subsystems that coherently exploits the synergism of mutually interacting phenomena. As evidenced by the papers that appear in the biannual AIAA/USAF/NASA/ISSMO Symposia on Multidisciplinary Analysis and Optimization, the MDO technical community focuses on vehicle and system design issues. This paper provides an overview of the MDO technology field from a fluid dynamics perspective, giving emphasis to suggestions of specific applications of recent MDO technologies that can enhance fluid dynamics research itself across the spectrum, from basic flow physics to full-configuration aerodynamics.
NASA Technical Reports Server (NTRS)
Starlinger, Alois; Duffy, Stephen F.; Palko, Joseph L.
1993-01-01
New methods are presented that utilize the optimization of goodness-of-fit statistics to estimate Weibull parameters from failure data. It is assumed that the underlying population is characterized by a three-parameter Weibull distribution. Goodness-of-fit tests are based on the empirical distribution function (EDF). The EDF is a step function, calculated from failure data, that approximates the cumulative distribution function (CDF) of the underlying population. Statistics such as the Kolmogorov-Smirnov statistic and the Anderson-Darling statistic measure the discrepancy between the EDF and the CDF. These statistics are minimized with respect to the three Weibull parameters. Due to nonlinearities encountered in the minimization process, Powell's numerical optimization procedure is applied to obtain the optimum values. Numerical examples show the applicability of these new estimation methods. The results are compared to the estimates obtained with Cooper's nonlinear regression algorithm.
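The estimation step can be illustrated with a minimal sketch: two parameters only, with the location parameter fixed at zero for brevity, and a grid search standing in for Powell's method. The sample data, grid, and tolerances are illustrative assumptions, not the paper's.

```python
import math, random

def weibull_cdf(x, beta, eta, gamma=0.0):
    """CDF of a Weibull(shape=beta, scale=eta, location=gamma)."""
    if x <= gamma:
        return 0.0
    return 1.0 - math.exp(-((x - gamma) / eta) ** beta)

def ks_stat(data, beta, eta, gamma=0.0):
    """Kolmogorov-Smirnov distance between the EDF of `data`
    and the candidate Weibull CDF."""
    xs = sorted(data)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        F = weibull_cdf(x, beta, eta, gamma)
        # EDF jumps from i/n to (i+1)/n at each data point
        d = max(d, abs(F - i / n), abs(F - (i + 1) / n))
    return d

# synthetic failure data via inverse-CDF sampling
rng = random.Random(7)
true_beta, true_eta = 2.0, 1.5
data = [true_eta * (-math.log(1.0 - rng.random())) ** (1.0 / true_beta)
        for _ in range(500)]

# crude grid minimization of the K-S statistic (stand-in for Powell's method)
beta_hat, eta_hat = min(
    ((b / 10.0, e / 10.0) for b in range(5, 41) for e in range(5, 41)),
    key=lambda p: ks_stat(data, p[0], p[1]))
```

The same structure applies with the Anderson-Darling statistic as the objective; only `ks_stat` changes.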
Taneja, Sakshi; Shilpi, Satish; Khatri, Kapil
2016-05-01
Efavirenz is a non-nucleoside reverse transcriptase inhibitor and is classified as a BCS Class II API. Its erratic oral absorption and poor bioavailability make it a potential candidate for formulation as a nanosuspension. The objective of this study was to formulate efavirenz nanosuspensions by the antisolvent precipitation-ultrasonication method, and to enhance solubility by reducing particle size to the nanometer range. The effects of different process parameters were studied and optimized with respect to particle size and polydispersity index (PDI). The optimized formulation was also subjected to lyophilization to further increase solubility and stability, and the technology is potentially suited to a range of poorly water-soluble compounds.
Research of Arc Chamber Optimization Techniques Based on Flow Field and Arc Joint Simulation
NASA Astrophysics Data System (ADS)
Zhong, Jianying; Guo, Yujing; Zhang, Hao
2016-03-01
The preliminary design of an arc chamber in a 550 kV SF6 circuit breaker was proposed in accordance with the technical requirements and design experience. The structural optimization was carried out according to no-load flow field simulation results and verified by no-load pressure measurement. Based on load simulation results, such as the temperature field variation in the arc area and the tendency of the post-arc current under different recovery voltages, a second design optimization was completed and its correctness was verified by a breaking test. Results demonstrate that the interrupting capacity of an arc chamber can be evaluated by comparing the gas medium recovery speed and the post-arc current growth rate.
Shuttle ascent trajectory optimization with function space quasi-Newton techniques
NASA Technical Reports Server (NTRS)
Edge, E. R.; Powers, W. F.
1974-01-01
A Space Shuttle ascent trajectory optimization problem from lift-off to orbital insertion is solved with a function space version of a quasi-Newton parameter optimization method developed by Broyden. The problem includes five parameter controls and one bounded-function control, two state-variable constraints, and four terminal conditions. The bounded controls are treated directly, while the remaining constraints are adjoined to the performance index (maximum payload) with penalty functions. The problem is formulated as a four-phase variational problem (lift-off, pitch-over, gravity-turn, linear tangent steering), and the appropriate gradients are developed by first-variation theory. A projection operator is introduced to aid in the interpretation of the algorithm with mixed parameter and function controls.
Mackey-Glass noisy chaotic time series prediction by a swarm-optimized neural network
NASA Astrophysics Data System (ADS)
López-Caraballo, C. H.; Salfate, I.; Lazzús, J. A.; Rojas, P.; Rivera, M.; Palma-Chilla, L.
2016-05-01
In this study, an artificial neural network (ANN) based on particle swarm optimization (PSO) was developed for time series prediction. The hybrid ANN+PSO algorithm was applied to the noiseless Mackey-Glass chaotic time series for short-term and long-term prediction. The prediction performance is evaluated and compared with similar work in the literature, particularly for the long-term forecast. We also present properties of the dynamical system via the study of the chaotic behaviour obtained from the time series prediction. This standard hybrid ANN+PSO algorithm was then complemented with a Gaussian stochastic procedure (called stochastic hybrid ANN+PSO) to obtain a new estimator of the predictions, which also allowed us to compute prediction uncertainties for the noisy Mackey-Glass chaotic time series. We study the impact of noise for three cases with a white-noise level (σN) contribution of 0.01, 0.05 and 0.1.
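The ANN+PSO estimator itself is not reproduced here, but the benchmark series the abstract refers to can be generated with a short sketch: Euler integration of the Mackey-Glass delay equation with the conventional parameters a = 0.2, b = 0.1, τ = 17. The step size, constant initial history, and additive-noise model are illustrative assumptions.

```python
import random

def mackey_glass(n, tau=17, a=0.2, b=0.1, dt=0.1, x0=1.2, noise=0.0, seed=3):
    """Euler integration of dx/dt = a*x(t-tau)/(1+x(t-tau)^10) - b*x(t),
    sampled once per unit time; optional additive white noise on the samples."""
    rng = random.Random(seed)
    lag = round(tau / dt)
    hist = [x0] * (lag + 1)          # constant initial history on [-tau, 0]
    steps_per_sample = round(1.0 / dt)
    series, x = [], x0
    for i in range(n * steps_per_sample):
        x_tau = hist[0]              # delayed state x(t - tau)
        x = x + dt * (a * x_tau / (1.0 + x_tau ** 10) - b * x)
        hist.pop(0)
        hist.append(x)
        if (i + 1) % steps_per_sample == 0:
            series.append(x + noise * rng.gauss(0.0, 1.0))
    return series

clean = mackey_glass(300)            # noiseless benchmark
noisy = mackey_glass(300, noise=0.05)
```

With τ = 17 the noiseless series is chaotic and stays within a bounded band around x = 1, which is what makes it a standard forecasting benchmark.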
NASA Astrophysics Data System (ADS)
Feijóo, Gonzalo R.; Tirapu-Azpiroz, Jaione; Rosenbluth, Alan E.; Oberai, Assad A.; Jagalur Mohan, Jayanth; Tian, Kehan; Melville, David; Gil, Dario; Lai, Kafai
2009-03-01
Near-field interference lithography is a promising variant of multiple patterning in semiconductor device fabrication that can potentially extend lithographic resolution beyond the current materials-based restrictions on the Rayleigh resolution of projection systems. With H2O as the immersion medium, non-evanescent propagation and optical design margins limit the achievable pitch to approximately 0.53λ/nH2O = 0.37λ. Non-evanescent images are constrained only by the comparatively large resist indices (typically ~1.7) to a pitch resolution of 0.5λ/nresist (typically 0.29λ). Near-field patterning can potentially exploit evanescent waves and thus achieve higher spatial resolutions. Customized near-field images can be achieved through the modulation of an incoming wavefront by what is essentially an in-situ hologram that has been formed in an upper layer during an initial patterned exposure. Contrast Enhancement Layer (CEL) techniques and Talbot near-field interferometry can be considered special cases of this approach. Since the technique relies on near-field interference effects to produce the required pattern in the resist, the shape of the grating and the design of the film stack play a significant role in the outcome. As a result, it is necessary to resort to full diffraction computations to properly simulate and optimize this process. The next logical advance for this technology is to systematically design the hologram and the incident wavefront, which is generated from a reduction mask. This task is naturally posed as an optimization problem, where the goal is to find the set of geometric and incident-wavefront parameters that yields the closest fit to a desired pattern in the resist. As the pattern becomes more complex, the number of design parameters grows, and the computational problem becomes intractable (particularly in three dimensions) without the use of advanced numerical techniques. To treat this problem effectively, specialized numerical methods have been
SVD-Based Optimal Filtering Technique for Noise Reduction in Hearing Aids Using Two Microphones
NASA Astrophysics Data System (ADS)
Maj, Jean-Baptiste; Moonen, Marc; Wouters, Jan
2002-12-01
We introduce a new SVD-based (singular value decomposition) strategy for noise reduction in hearing aids. This technique is evaluated for noise reduction in a behind-the-ear (BTE) hearing aid where two omnidirectional microphones are mounted in an endfire configuration. The behaviour of the SVD-based technique is compared to a two-stage adaptive beamformer for hearing aids developed by Vanden Berghe and Wouters (1998). The evaluation and comparison are done with a performance metric based on the speech intelligibility index (SII). The speech and noise signals are recorded in reverberant conditions at a fixed signal-to-noise ratio, and the spectrum of the noise signals is similar to the spectrum of the speech signal. The SVD-based technique works without initialization or assumptions about a look direction, unlike the two-stage adaptive beamformer. Still, for different noise scenarios, the SVD-based technique performs as well as the two-stage adaptive beamformer for a similar filter length and adaptation time of the filter coefficients. In a diffuse noise scenario, the SVD-based technique performs better than the two-stage adaptive beamformer and hence provides a more flexible and robust solution under speaker position variations and reverberant conditions.
Merwin, S.E.; Martin, J.B.; Selby, J.M.; Vallario, E.J.
1986-01-01
Six numerical examples of optimization of radiation protection are provided in the appendices of ICRP Publication 37. In each case, the calculations are based on fairly well defined parameters and assumptions that were well understood. In this paper, we have examined two numerical examples that are based on empirical data and less certain assumptions. These examples may represent typical applications of optimization principles to the evaluation of specific elements of a radiation protection program. In the first example, the optimum bioassay frequency for tritium workers was found to be once every 95 days, which compared well with ICRP Publication 10 recommendations. However, this result depended heavily on the assumption that the value of a potential undetected rem was US $1000. The second example showed that the optimum frequency for recalibrating Cutie Pie (CP) type ionization chamber survey instruments was once every 102 days, which compared well with the Hanford standard frequency of once every 90 days. This result depended largely on the assumption that an improperly operating CP instrument could lead to a serious overexposure. These examples have led us to conclude that optimization of radiation protection programs must be a very dynamic process. Examples must be recalculated as empirical data expand and improve and as the uncertainties surrounding assumptions are reduced.
NASA Astrophysics Data System (ADS)
Triplett, Michael D.; Rathman, James F.
2009-04-01
Using statistical experimental design methodologies, the solid lipid nanoparticle design space was found to be more robust than previously shown in the literature. Formulation and high-shear homogenization process effects on solid lipid nanoparticle size distribution, stability, drug loading, and drug release have been investigated. Experimentation indicated stearic acid as the optimal lipid, sodium taurocholate as the optimal cosurfactant, an optimum lecithin to sodium taurocholate ratio of 3:1, and an inverse relationship of mixing time and speed with nanoparticle size and polydispersity. Having defined the base solid lipid nanoparticle system, β-carotene was incorporated into stearic acid nanoparticles to investigate the effects of introducing a drug into the base system. The presence of β-carotene produced a significant effect on the optimal formulation and process conditions, but the design space was found to be robust enough to accommodate the drug. β-Carotene entrapment efficiency averaged 40%, and β-carotene was retained in the nanoparticles for 1 month. As demonstrated herein, solid lipid nanoparticle technology can be sufficiently robust from a design standpoint to become commercially viable.
Tai, Guangfu; Williams, Peter
2012-11-01
This paper revisits the question of mapping a probability distribution to patient unpunctuality in appointment-driven outpatient clinics, with reference to published empirical arrival data. These data indicate the possibility of interesting aberrations such as local modes and near-modes, asymmetry, and peakedness. We examine the form of some published data on patient unpunctuality and propose a mixed distribution, which we call "F3", to provide a richer representation of shape, such as in the shoulders of the distribution. The adequacy of this model is assessed in a worked example referencing a classical study, where F3 is compared against the normal and Pearson VII distributions with reference to summary statistics, graphical probability plots (P-P and Q-Q), and a range of goodness-of-fit criteria. Under this patient-arrival setting, a 2P method is proposed for optimal appointment-interval setting to minimize the waiting times of both patients and the doctor, and this 2P method is validated with a tentative simulation example. The study argues that the frequency distribution of patient unpunctuality is asymmetric in shape as a result of various types of arrival behaviour. Consequently, the optimal appointment intervals of scheduled patients, which minimize the total waiting time of patients and the doctor, are highly related to patient unpunctuality patterns, which makes the optimal appointment intervals for various patient unpunctualities predictable.
A Multi-Objective Optimization Technique to Model the Pareto Front of Organic Dielectric Polymers
NASA Astrophysics Data System (ADS)
Gubernatis, J. E.; Mannodi-Kanakkithodi, A.; Ramprasad, R.; Pilania, G.; Lookman, T.
Multi-objective optimization is an area of decision making concerned with mathematical optimization problems involving more than one objective simultaneously. Here we describe two new Monte Carlo methods for this type of optimization in the context of their application to the problem of designing polymers with more desirable dielectric and optical properties. We present results of applying these Monte Carlo methods to a two-objective problem (maximizing the total static band dielectric constant and energy gap) and a three-objective problem (maximizing the ionic and electronic contributions to the static band dielectric constant and the energy gap) for a 6-block organic polymer. Our objective functions were constructed from high-throughput DFT calculations of 4-block polymers, following the method of Sharma et al., Nature Communications 5, 4845 (2014) and Mannodi-Kanakkithodi et al., Scientific Reports, submitted. Our high-throughput and Monte Carlo methods of analysis extend to general N-block organic polymers. This work was supported in part by the LDRD DR program of the Los Alamos National Laboratory and in part by a Multidisciplinary University Research Initiative (MURI) Grant from the Office of Naval Research.
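The Monte Carlo methods are specific to the paper, but the underlying notion of a Pareto front, the non-dominated subset of candidate designs, can be shown in a few lines (the candidate (dielectric constant, gap) pairs below are hypothetical):

```python
def pareto_front(points):
    """Return the non-dominated subset of `points` when every
    objective is to be maximized (strict Pareto dominance)."""
    def dominates(p, q):
        # p dominates q: at least as good everywhere, strictly better somewhere
        return (all(a >= b for a, b in zip(p, q))
                and any(a > b for a, b in zip(p, q)))
    return [p for p in points if not any(dominates(q, p) for q in points)]

# hypothetical (dielectric constant, band gap) pairs for candidate polymers
cands = [(3.0, 1.0), (2.0, 4.0), (2.5, 3.0), (1.0, 0.5), (3.0, 2.0)]
front = pareto_front(cands)
```

Here (3.0, 1.0) is dominated by (3.0, 2.0) and drops out, leaving the three trade-off points on the front.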
A Globally Optimal Particle Tracking Technique for Stereo Imaging Velocimetry Experiments
NASA Technical Reports Server (NTRS)
McDowell, Mark
2008-01-01
An important phase of any Stereo Imaging Velocimetry experiment is particle tracking. Particle tracking seeks to identify and characterize the motion of individual particles entrained in a fluid or air experiment. We analyze a cylindrical chamber filled with water and seeded with density-matched particles. In every four-frame sequence, we identify a particle track by assigning a unique track label for each camera image. The conventional approach to particle tracking is to use an exhaustive tree-search method utilizing greedy algorithms to reduce search times. However, these types of algorithms are not optimal due to a cascade effect of incorrect decisions upon adjacent tracks. We examine the use of a guided evolutionary neural net with simulated annealing to arrive at a globally optimal assignment of tracks. The net is guided both by the minimization of the search space, through the use of prior limiting assumptions about valid tracks, and by a strategy that seeks to avoid high-energy intermediate states which can trap the net in a local minimum. A stochastic search algorithm is used in place of back-propagation of error to further reduce the chance of being trapped in an energy well. Global optimization is achieved by minimizing an objective function which includes both track smoothness and particle-image utilization parameters. In this paper we describe our model and present our experimental results. We compare our results with a non-optimizing, predictive tracker and obtain an average increase in valid track yield of 27 percent.
NASA Astrophysics Data System (ADS)
Hosseini-Bioki, M. M.; Rashidinejad, M.; Abdollahi, A.
2013-11-01
Load shedding is a crucial issue in power systems, especially in a restructured electricity environment. Market-driven load shedding in restructured power systems, associated with security as well as reliability, is investigated in this paper. A technoeconomic multi-objective function is introduced to reveal an optimal load shedding scheme considering maximum social welfare. The proposed optimization problem includes maximization of GENCOs' and loads' profits as well as the maximum loadability limit under normal and contingency conditions. Particle swarm optimization (PSO), as a heuristic optimization technique, is utilized to find an optimal load shedding scheme. In a market-driven structure, generators offer their bidding blocks while the dispatchable loads bid their price-responsive demands. An independent system operator (ISO) derives a market clearing price (MCP) while rescheduling the amount of generating power in both pre-contingency and post-contingency conditions. The proposed methodology is developed on a 3-bus system and then applied to a modified IEEE 30-bus test system. The obtained results show the effectiveness of the proposed methodology in implementing an optimal load shedding scheme that satisfies social welfare while maintaining the voltage stability margin (VSM), as demonstrated through technoeconomic analyses.
Adam, Asrul; Mohd Tumari, Mohd Zaidi; Mohamad, Mohd Saberi
2014-01-01
Electroencephalogram (EEG) signal peak detection is widely used in clinical applications. The peak point can be detected using several approaches, including time, frequency, time-frequency, and nonlinear domains, depending on various peak features from several models. However, no study has established the importance of each peak feature in contributing to a good and generalized model. In this study, feature selection and classifier parameter estimation based on particle swarm optimization (PSO) are proposed as a framework for peak detection on EEG signals in time-domain analysis. Two versions of PSO are used in the study: (1) standard PSO and (2) random asynchronous particle swarm optimization (RA-PSO). The proposed framework searches for the combination of available features that offers good peak detection and a high classification rate in the conducted experiments. The evaluation results indicate that the accuracy of the peak detection can be improved up to 99.90% and 98.59% for training and testing, respectively, compared to the framework without feature selection adaptation. Additionally, the proposed framework based on RA-PSO offers a better and more reliable classification rate than standard PSO, as it produces a low-variance model. PMID:25243236
Raja, Muhammad Asif Zahoor; Khan, Junaid Ali; Ahmad, Siraj-ul-Islam; Qureshi, Ijaz Mansoor
2012-01-01
A methodology for the solution of Painlevé equation-I is presented using a computational intelligence technique based on neural networks and particle swarm optimization hybridized with an active set algorithm. The mathematical model of the equation is developed with the help of a linear combination of feed-forward artificial neural networks that defines the unsupervised error of the model. This error is minimized subject to the availability of appropriate weights of the networks. The learning of the weights is carried out using a particle swarm optimization algorithm, used as a tool for viable global search, hybridized with an active set algorithm for rapid local convergence. The accuracy, convergence rate, and computational complexity of the scheme are analyzed based on a large number of independent runs and their comprehensive statistical analysis. Comparative studies of the results obtained are made with MATHEMATICA solutions, as well as with the variational iteration method and the homotopy perturbation method. PMID:22919371
Candiani, Gabriele; Carnevale, Claudio; Finzi, Giovanna; Pisoni, Enrico; Volta, Marialuisa
2013-08-01
To fulfill the requirements of the 2008/50 Directive, which allows member states and regional authorities to use a combination of measurement and modeling to monitor air pollution concentrations, a key approach to be properly developed and tested is data assimilation. In this paper, with a focus on regional domains, a comparison between optimal interpolation and the Ensemble Kalman Filter is shown, to highlight the strengths and drawbacks of the two techniques. These approaches can be used to implement more accurate monitoring of long-term pollution trends over a geographical domain, through an optimal combination of all the available sources of data. The two approaches are formalized and applied to a regional domain located in Northern Italy, where measured PM10 levels often exceed the EU standard limits.
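In the scalar case, the optimal-interpolation update on which the comparison rests reduces to a variance-weighted blend of model background and observation (the PM10 numbers below are hypothetical):

```python
def optimal_interpolation(background, obs, var_b, var_o):
    """Scalar optimal-interpolation (BLUE) update: blend a model
    background with an observation, weighting by error variances."""
    k = var_b / (var_b + var_o)        # Kalman-type gain in [0, 1]
    analysis = background + k * (obs - background)
    var_a = (1.0 - k) * var_b          # analysis-error variance
    return analysis, var_a

# hypothetical PM10 values (ug/m3): model says 42, a station measures 55
analysis, var_a = optimal_interpolation(42.0, 55.0, var_b=25.0, var_o=16.0)
```

The Ensemble Kalman Filter generalizes this by estimating the background error variance from an ensemble of model runs instead of prescribing it.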
Zhang, Yu; Xu, Jing-Liang; Yuan, Zhen-Hong; Qi, Wei; Liu, Yun-Yun; He, Min-Chao
2012-01-01
Two artificial intelligence techniques, namely artificial neural network (ANN) and genetic algorithm (GA), were combined as a tool for optimizing the covalent immobilization of cellulase on a smart polymer, Eudragit L-100. 1-Ethyl-3-(3-dimethylaminopropyl)carbodiimide (EDC) concentration, N-hydroxysuccinimide (NHS) concentration, and coupling time were taken as independent variables, and immobilization efficiency was taken as the response. The data of the central composite design were used to train the ANN by the back-propagation algorithm, and the result showed that the trained ANN fitted the data accurately (correlation coefficient R(2) = 0.99). A maximum immobilization efficiency of 88.76% was then found by the genetic algorithm at an EDC concentration of 0.44%, an NHS concentration of 0.37%, and a coupling time of 2.22 h, where the experimental value was 87.97 ± 6.45%. The application of ANN-based optimization by GA is quite successful.
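The GA search step of this workflow can be sketched as follows, with the trained ANN replaced by a hypothetical analytic surrogate whose maximum is placed at the abstract's reported optimum (0.44% EDC, 0.37% NHS, 2.22 h). The GA operators, settings, and bounds are conventional choices, not the authors':

```python
import random

def surrogate(x):
    """Stand-in for the trained ANN: a smooth response surface whose
    maximum (88.76) sits at the optimum reported in the abstract."""
    opt = (0.44, 0.37, 2.22)
    return 88.76 - sum((xi - oi) ** 2 for xi, oi in zip(x, opt))

def ga_maximize(f, bounds, pop_size=40, gens=60, mut=0.1, seed=5):
    """Simple real-coded GA: truncation selection, blend crossover,
    uniform mutation, with elitism via retained parents."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=f, reverse=True)
        parents = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = [(ai + bi) / 2.0 for ai, bi in zip(a, b)]  # blend crossover
            for d, (lo, hi) in enumerate(bounds):
                if rng.random() < mut:                         # uniform mutation
                    child[d] = rng.uniform(lo, hi)
            children.append(child)
        pop = parents + children
    return max(pop, key=f)

# coded ranges for EDC %, NHS %, coupling time (h) -- illustrative bounds
best = ga_maximize(surrogate, bounds=[(0.0, 1.0), (0.0, 1.0), (0.5, 4.0)])
```

In the paper's setting, `surrogate` would be the ANN trained on the central-composite-design data.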
Optimization Algorithms in Optimal Predictions of Atomistic Properties by Kriging.
Di Pasquale, Nicodemo; Davie, Stuart J; Popelier, Paul L A
2016-04-12
The machine learning method kriging is an attractive tool for constructing next-generation force fields. Kriging can accurately predict atomistic properties, which involves optimization of the so-called concentrated log-likelihood function (i.e., fitness function). The difficulty of this optimization problem quickly escalates in response to an increase in either the number of dimensions of the system considered or the size of the training set. In this article, we demonstrate and compare the use of two search algorithms, namely particle swarm optimization (PSO) and differential evolution (DE), to rapidly obtain the maximum of this fitness function. The ability of these two algorithms to find a stationary point is assessed by using the first derivative of the fitness function. Finally, the converged position obtained by PSO and DE is refined through the limited-memory Broyden-Fletcher-Goldfarb-Shanno bounded (L-BFGS-B) algorithm, which belongs to the class of quasi-Newton algorithms. We show that both PSO and DE are able to come close to the stationary point, even in high-dimensional problems. They do so in a reasonable amount of time, compared to that of the Newton and quasi-Newton algorithms, regardless of the starting position in the search space of kriging hyperparameters. The refinement through L-BFGS-B is able to give the position of the maximum to whatever precision is desired.
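The two-stage strategy, a population-based global search followed by local refinement, can be sketched in pure Python. Here DE/rand/1/bin supplies the coarse optimum and a shrinking coordinate search stands in for L-BFGS-B; a hypothetical quadratic surface replaces the concentrated log-likelihood (written as a negative log-likelihood to minimize):

```python
import random

def de_minimize(f, bounds, pop_size=20, gens=80, F=0.6, CR=0.9, seed=2):
    """Classic DE/rand/1/bin global search within box bounds."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    vals = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            trial = pop[i][:]
            jrand = rng.randrange(dim)       # guarantee one mutated gene
            for j in range(dim):
                if rng.random() < CR or j == jrand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    trial[j] = min(hi, max(lo, v))
            tv = f(trial)
            if tv <= vals[i]:                # greedy selection
                pop[i], vals[i] = trial, tv
    i = min(range(pop_size), key=lambda k: vals[k])
    return pop[i], vals[i]

def local_refine(f, x, step=0.1, tol=1e-6):
    """Shrinking coordinate search as a stand-in for L-BFGS-B refinement."""
    x, fx = x[:], f(x)
    while step > tol:
        improved = False
        for d in range(len(x)):
            for s in (step, -step):
                y = x[:]
                y[d] += s
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step *= 0.5
    return x, fx

# hypothetical negative log-likelihood surface with minimum at (1.0, -2.0)
nll = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2
coarse, _ = de_minimize(nll, bounds=[(-5.0, 5.0), (-5.0, 5.0)])
theta, val = local_refine(nll, coarse)
```

The division of labor mirrors the paper: the population stage is robust to the starting position, and the local stage delivers the final precision.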
Information System Design Methodology Based on PERT/CPM Networking and Optimization Techniques.
ERIC Educational Resources Information Center
Bose, Anindya
The dissertation attempts to demonstrate that the program evaluation and review technique (PERT)/Critical Path Method (CPM) or some modified version thereof can be developed into an information system design methodology. The methodology utilizes PERT/CPM which isolates the basic functional units of a system and sets them in a dynamic time/cost…
Technology Transfer Automated Retrieval System (TEKTRAN)
This study evaluated the impact of gas concentration and wind sensor locations on the accuracy of the backward Lagrangian stochastic inverse-dispersion technique (bLS) for measuring gas emission rates from a typical lagoon environment. Path-integrated concentrations (PICs) and 3-dimensional (3D) wi...
Toward a systematic design theory for silicon solar cells using optimization techniques
NASA Technical Reports Server (NTRS)
Misiakos, K.; Lindholm, F. A.
1986-01-01
This work is a first detailed attempt to systematize the design of silicon solar cells. Design principles follow from three theorems. Although the results hold only under low injection conditions in base and emitter regions, they hold for arbitrary doping profiles and include the effects of drift fields, high/low junctions and heavy doping concentrations of donor or acceptor atoms. Several optimal designs are derived from the theorems, one of which involves a three-dimensional morphology in the emitter region. The theorems are derived from a nonlinear differential equation of the Riccati form, the dependent variable of which is a normalized recombination particle current.
NASA Astrophysics Data System (ADS)
Nasshorudin, Dalila; Ahmad, Muhammad Syarhabil; Mamat, Awang Soh; Rosli, Suraya
2015-05-01
A solventless extraction process for Chromalaena odorata using reduced pressure and temperature has been investigated. The percentage yield of essential oil produced was calculated for each experiment under different experimental conditions. The effects of parameters such as temperature and extraction time on the yield were investigated using the Response Surface Methodology (RSM) through a Central Composite Design (CCD). Temperature and extraction time were found to have a significant effect on the yield of extract. A final essential oil yield of 0.095% could be extracted under the following optimized conditions: a temperature of 80 °C and a time of 8 hours.
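The RSM/CCD workflow the abstract uses amounts to fitting a second-order polynomial to the designed runs and solving for the stationary point of the fitted surface. The sketch below does exactly that on a face-centred composite design in coded units; the yield numbers are synthetic, not the paper's data.

```python
import numpy as np

# Face-centred central composite design in coded units (T = temperature, t = time).
pts = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
                [-1, 0], [1, 0], [0, -1], [0, 1], [0, 0]], float)

# Hypothetical "true" yield surface used only to generate responses.
yield_fn = lambda T, t: 0.09 - 0.01*(T - 0.5)**2 - 0.02*(t - 0.3)**2
Y = np.array([yield_fn(T, t) for T, t in pts])

# Second-order RSM model: y = b0 + b1*T + b2*t + b3*T^2 + b4*t^2 + b5*T*t
A = np.column_stack([np.ones(len(pts)), pts[:, 0], pts[:, 1],
                     pts[:, 0]**2, pts[:, 1]**2, pts[:, 0]*pts[:, 1]])
b, *_ = np.linalg.lstsq(A, Y, rcond=None)

# Stationary point: grad = 0  =>  [[2*b3, b5], [b5, 2*b4]] @ [T, t] = -[b1, b2]
H = np.array([[2*b[3], b[5]], [b[5], 2*b[4]]])
opt = np.linalg.solve(H, -b[1:3])          # optimum in coded units
```

Because the synthetic surface is itself quadratic, the fit recovers it exactly and the stationary point lands at the coded optimum (0.5, 0.3); with real experimental data the same machinery gives the least-squares optimum instead.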
Application of optimization techniques to spacecraft fuel usage minimization in deep space navigation
NASA Technical Reports Server (NTRS)
Wang, Tseng-Chan; Sunseri, Richard F.; Stanford, Richard H.; Gray, Donald L.; Breckheimer, Peter J.
1987-01-01
Mathematical analysis of the minimization of spacecraft fuel usage for both impulsive and finite motor burns is presented. A high precision integrated trajectory search program (SEPV) and several optimization software libraries are used to solve minimum fuel usage problems. The SEPV program has the capacity to vary either the initial spacecraft state or the finite burn parameters to acquire a specified set of target values. Several test examples for the Voyager 2 Uranus Encounter and the Galileo Jupiter Orbiter are presented to show that spacecraft fuel consumption can be minimized in targeting maneuver strategies. The fuel savings achieved by the optimum solution can be significant.
NASA Astrophysics Data System (ADS)
Chen, Xia; Hu, Hong-li; Liu, Fei; Gao, Xiang Xiang
2011-10-01
The task of image reconstruction for an electrical capacitance tomography (ECT) system is to determine the permittivity distribution, and hence the phase distribution, in a pipeline by measuring the electrical capacitances between sets of electrodes placed around its periphery. In view of the nonlinear relationship between the permittivity distribution and the capacitances, and the limited number of independent capacitance measurements, image reconstruction for ECT is a nonlinear and ill-posed inverse problem. To solve this problem, a new image reconstruction method for ECT based on a least-squares support vector machine (LS-SVM) combined with a self-adaptive particle swarm optimization (PSO) algorithm is presented. Grounded in small-sample statistical learning theory, the SVM avoids the issues that arise in artificial neural network methods, such as the difficulty of determining a network structure, over-learning, and under-learning. However, the SVM performs differently with different parameters. As a relatively new population-based evolutionary optimization technique, PSO is adopted to select these parameters effectively, with the advantages of global optimization and rapid convergence. This paper builds a 12-electrode ECT system and a pneumatic conveying platform to verify the image reconstruction algorithm. Experimental results indicate that the algorithm has good generalization ability and high image-reconstruction quality.
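The LS-SVM/PSO coupling described here can be sketched generically: training an LS-SVM reduces to solving one bordered linear system, and PSO searches over the regularization and kernel-width hyperparameters (gamma, sigma) that govern its generalization. The 1-D regression data, the validation-MSE fitness, and the search ranges below are all illustrative stand-ins for the ECT reconstruction mapping.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 1-D regression standing in for the capacitance-to-image mapping.
Xtr = np.linspace(0.0, 3.0, 40); ytr = np.sinc(Xtr) + 0.05*rng.standard_normal(40)
Xva = np.linspace(0.05, 2.95, 30); yva = np.sinc(Xva)

def lssvm_predict(gamma, sigma):
    """Train an LS-SVM (RBF kernel) on (Xtr, ytr) and predict on Xva."""
    K = np.exp(-(Xtr[:, None] - Xtr[None, :])**2 / (2.0*sigma**2))
    n = len(Xtr)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0; A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n)/gamma          # LS-SVM linear system (no QP needed)
    sol = np.linalg.solve(A, np.r_[0.0, ytr])
    b, alpha = sol[0], sol[1:]
    Kv = np.exp(-(Xva[:, None] - Xtr[None, :])**2 / (2.0*sigma**2))
    return Kv @ alpha + b

def val_mse(p):                               # p = (log10 gamma, log10 sigma)
    return float(np.mean((lssvm_predict(10**p[0], 10**p[1]) - yva)**2))

# PSO over the two hyperparameters, the role it plays in the article.
P = 15
lo, hi = np.array([-2.0, -2.0]), np.array([4.0, 1.0])
pos = rng.uniform(lo, hi, (P, 2)); vel = np.zeros((P, 2))
pbest = pos.copy(); pval = np.array([val_mse(p) for p in pos])
g = pbest[pval.argmin()].copy(); f_start = pval.min()
for _ in range(50):
    r1, r2 = rng.random((P, 2)), rng.random((P, 2))
    vel = 0.7298*vel + 1.4962*r1*(pbest - pos) + 1.4962*r2*(g - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = np.array([val_mse(p) for p in pos])
    better = f < pval
    pbest[better], pval[better] = pos[better], f[better]
    g = pbest[pval.argmin()].copy()
```

The article's "self-adaptive" PSO additionally adapts its own coefficients during the run; the fixed constriction coefficients above are the plain variant.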
Li, Zhiqi; Zhou, Wei; Zhou, Hui; Zhang, Xueping; Zhao, Jie
2013-02-01
A step phase quantization regularity between signals of different nominal frequencies is introduced in this paper. Based on this regularity, an optimized high-resolution frequency measurement technique is presented. The key features and issues of phase quantization characteristics and measurements are described. By exploiting the relationship between nominally identical or multiple-related signals with certain frequency differences, the resolution of frequency measurement is improved and the measurement range is widened. Several measurement results are provided to support the concepts with experimental evidence. The resolution of frequency measurement can reach 10^-12 s^-1 over a wide range, or higher for specific frequency signals.
NASA Technical Reports Server (NTRS)
Sable, Dan M.; Cho, Bo H.; Lee, Fred C.
1990-01-01
A detailed comparison of a boost converter, a voltage-fed, autotransformer converter, and a multimodule boost converter, designed specifically for the space platform battery discharger, is performed. Computer-based nonlinear optimization techniques are used to facilitate an objective comparison. The multimodule boost converter is shown to be the optimum topology at all efficiencies. The margin is greatest at 97 percent efficiency. The multimodule, multiphase boost converter combines the advantages of high efficiency, light weight, and ample margin on the component stresses, thus ensuring high reliability.
All-automatic swimmer tracking system based on an optimized scaled composite JTC technique
NASA Astrophysics Data System (ADS)
Benarab, D.; Napoléon, T.; Alfalou, A.; Verney, A.; Hellard, P.
2016-04-01
In this paper, an all-automatic optimized JTC-based swimmer tracking system is proposed and evaluated on a real video database drawn from national and international swimming competitions (French National Championships, Limoges 2015; FINA World Championships, Barcelona 2013 and Kazan 2015). First, we calibrate the swimming pool using the DLT algorithm (Direct Linear Transformation). DLT calculates the homography matrix given a sufficient set of correspondence points between pixel and metric coordinates; i.e., DLT takes into account the dimensions of the swimming pool and the type of the swim. Once the swimming pool is calibrated, we extract the lane. Then we apply a motion detection approach to globally detect the swimmer in this lane. Next, we apply our optimized Scaled Composite JTC, which consists of creating an adapted input plane that contains the predicted region and the head reference image. The latter is generated using a composite filter of fin images chosen from the database. The dimension of this reference is scaled according to the ratio between the head's dimension and the width of the swimming lane. Finally, the proposed approach improves on the performance of our previous tracking method by adding a detection module, thereby achieving an all-automatic swimmer tracking system.
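The DLT calibration step is standard enough to sketch: stack two linear equations per pixel-to-metric correspondence and take the null vector of the resulting matrix via SVD. The ground-truth homography and point coordinates below are synthetic, purely to exercise the routine; a real pool calibration would use surveyed lane markings instead.

```python
import numpy as np

def dlt_homography(px, world):
    """Estimate the 3x3 homography mapping pixel points px (Nx2) to metric
    points world (Nx2) from N >= 4 non-degenerate correspondences."""
    rows = []
    for (x, y), (u, v) in zip(px, world):
        # From u = (h11*x + h12*y + h13)/(h31*x + h32*y + h33), cross-multiplied:
        rows.append([-x, -y, -1, 0, 0, 0, u*x, u*y, u])
        rows.append([0, 0, 0, -x, -y, -1, v*x, v*y, v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, float))
    H = Vt[-1].reshape(3, 3)        # null vector = stacked homography entries
    return H / H[2, 2]

def apply_h(H, p):
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

# Usage with a synthetic ground-truth homography (illustrative numbers).
H_true = np.array([[1.2, 0.1, 5.0], [-0.05, 0.9, -2.0], [1e-3, 2e-3, 1.0]])
px = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 50.0], [100.0, 50.0], [30.0, 20.0]])
world = np.array([apply_h(H_true, p) for p in px])
H_est = dlt_homography(px, world)
```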
NASA Technical Reports Server (NTRS)
Elliott, Kenny B.; Ugoletti, Roberto; Sulla, Jeff
1992-01-01
The evolution and optimization of a real-time digital control system is presented. The control system is part of a testbed used to perform focused technology research on the interactions of spacecraft platform and instrument controllers with the flexible-body dynamics of the platform and platform appendages. The control system consists of Computer Automated Measurement and Control (CAMAC) standard data acquisition equipment interfaced to a workstation computer. The goal of this work is to optimize the control system's performance to support controls research using controllers with up to 50 states and frame rates above 200 Hz. The original system could support a 16-state controller operating at a rate of 150 Hz. By using simple yet effective software improvements, Input/Output (I/O) latencies and contention problems are reduced or eliminated in the control system. The final configuration can support a 16-state controller operating at 475 Hz. Effectively the control system's performance was increased by a factor of 3.
Optimal performance receiving ranging system model and realization using TCSPC technique
NASA Astrophysics Data System (ADS)
Shen, Shanshan; Chen, Qian; He, Weiji; Zhou, Pin; Gu, Guohua
2015-10-01
In this paper, the short-dead-time detection probability is introduced into the linear SNR model of a fixed-frequency multipulse-accumulation detection system. Monte Carlo simulation agrees with the theoretical model, showing that as laser power increases the SNR first rises quickly and then levels off. A range-standard-deviation model is then established, which shows for the first time that a larger dead time yields better range precision, consistent with B. I. Cantor's research. Monte Carlo and theoretical simulations both further indicate that, with increasing laser power, range precision first improves and then saturates. Experimental results show that, under a high background noise of 500,000 counts/s, the maximum SNR is obtained with an emitted laser power of about 400 µW at a 50 ms integration time, and range precision reaches an optimal level of 6 mm. The measured precision is consistently worse than the Monte Carlo results because the histograms' timing jitter is not accounted for in the simulation, whereas the experimental system has approximately 500 ps of jitter. This system jitter causes larger fluctuations in the time-stamp values, leading to worse range precision. In summary, theory and experiment both establish the optimal receiving performance, in SNR and precision, of this multipulse-accumulation detection system.
NASA Astrophysics Data System (ADS)
Pande, K. P.; Martin, E.; Gutierrez, D.; Aina, O.
1987-03-01
An optimal annealing process was developed for sintering AuGe ohmic contacts to ion-implanted semi-insulating InP substrates. Contacts were annealed using a standard furnace, a graphite strip heater, and a lamp annealer. Alloying at 375 °C for 3 min was found to be most suitable for achieving good contact morphology and the lowest contact resistivity. Of the three techniques, lamp annealing was found to give the best results when contacts were annealed under a SiO2 cap. A contact resistivity as low as 8 × 10^-6 Ω cm^2 was obtained for ion-implanted n+ layers in semi-insulating InP.
NASA Astrophysics Data System (ADS)
Erfanifard, Yousef; Stereńczak, Krzysztof; Behnia, Negin
2014-01-01
The need to estimate optimal parameters is a drawback of some classification techniques, since the parameter choice affects their performance on a given dataset and can reduce classification accuracy. The aim of this study was to optimize the combination of effective parameters of the support vector machine (SVM), artificial neural network (ANN), and object-based image analysis (OBIA) classification techniques by the Taguchi method. The optimized techniques were applied to delineate crowns of Persian oak coppice trees on UltraCam-D very high spatial resolution aerial imagery in the Zagros semiarid woodlands, Iran. The imagery was classified and the maps were assessed using receiver operating characteristic (ROC) curves and other performance metrics. The results showed that the Taguchi method is a robust approach for optimizing the combination of effective parameters in these image classification techniques. The area under the curve (AUC) showed that the optimized OBIA could discriminate tree crowns on the imagery well (AUC = 0.897), while SVM and ANN yielded slightly lower AUC performances of 0.819 and 0.850, respectively. The indices of accuracy (0.999) and precision (0.999) and the performance metrics of specificity (0.999) and sensitivity (0.999) for the optimized OBIA were higher than those of the other techniques. The optimization of the effective parameters of image classification techniques by the Taguchi method thus provided encouraging results for discriminating the crowns of Persian oak coppice trees on UltraCam-D aerial imagery in the Zagros semiarid woodlands.
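The Taguchi method's core move, as used here, is to run an orthogonal array of parameter combinations and pick each factor's best level from its marginal mean response. The sketch below does this with a standard L9(3^4) array; the per-level "accuracy" effects are made-up numbers standing in for classifier accuracy as a function of parameter settings, not values from the study.

```python
import numpy as np

# L9(3^4) orthogonal array: 9 runs covering 4 three-level factors (levels 0, 1, 2).
L9 = np.array([[0,0,0,0], [0,1,1,1], [0,2,2,2],
               [1,0,1,2], [1,1,2,0], [1,2,0,1],
               [2,0,2,1], [2,1,0,2], [2,2,1,0]])

# Hypothetical additive accuracy contribution of each factor level (illustrative).
effects = np.array([[0.70, 0.80, 0.75],
                    [0.60, 0.85, 0.70],
                    [0.78, 0.74, 0.82],
                    [0.81, 0.79, 0.80]])

acc = effects[np.arange(4), L9].mean(axis=1)   # response of each of the 9 runs

# Taguchi marginal analysis: average the response over the runs at each level
# of each factor, then keep the best level (larger-the-better response).
best = np.array([np.argmax([acc[L9[:, f] == lvl].mean() for lvl in range(3)])
                 for f in range(4)])
```

Because the array is orthogonal, each factor's marginal means differ only through that factor's own effect, which is why 9 runs suffice where a full factorial would need 81.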
Time-domain technique for optimal design of digital-filter equalizers.
NASA Technical Reports Server (NTRS)
Burlage, D. W.; Houts, R. C.
1972-01-01
A technique is presented for the design of frequency-sampling and transversal digital filters from specified unit-impulse responses. The multiplier coefficients for the digital filter are specified by the use of a linear-programming algorithm. Examples include the design of digital filters to generate intersymbol-free pulses for data transmission over ideal bandlimited channels and to equalize data transmission channels that have known unit-impulse responses.
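The linear-programming step of such a design can be illustrated with a Chebyshev (minimax) formulation: choose transversal-filter taps so that the filtered channel response matches a desired response, minimizing the worst-case deviation. The channel impulse response and target below are invented so that an exact match exists; the LP machinery is the same when it does not.

```python
import numpy as np
from scipy.optimize import linprog

h = np.array([1.0, 0.5, -0.25])          # known channel unit-impulse response (illustrative)
c_true = np.array([0.8, -0.3, 0.1, 0.05])
d = np.convolve(h, c_true)               # a target the 4-tap filter can match exactly
K, M = len(c_true), len(d)

# Convolution (Toeplitz) matrix: (A @ c)[n] = sum_k c[k] * h[n-k]
A = np.zeros((M, K))
for k in range(K):
    A[k:k+len(h), k] = h

# Chebyshev LP: minimize t subject to |A c - d| <= t elementwise.
# Variables are [c_0..c_{K-1}, t]; objective selects t.
A_ub = np.block([[A, -np.ones((M, 1))], [-A, -np.ones((M, 1))]])
b_ub = np.concatenate([d, -d])
res = linprog(np.r_[np.zeros(K), 1.0], A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)]*K + [(0, None)])
c_opt = res.x[:K]
```

Swapping d for a delayed unit pulse turns this into the paper's intersymbol-interference equalizer; the LP then trades off residual interference across all sample instants.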
Sequential design of discrete linear quadratic regulators via optimal root-locus techniques
NASA Technical Reports Server (NTRS)
Shieh, Leang S.; Yates, Robert E.; Ganesan, Sekar
1989-01-01
A sequential method employing classical root-locus techniques has been developed in order to determine the quadratic weighting matrices and discrete linear quadratic regulators of multivariable control systems. At each recursive step, an intermediate unity rank state-weighting matrix that contains some invariant eigenvectors of that open-loop matrix is assigned, and an intermediate characteristic equation of the closed-loop system containing the invariant eigenvalues is created.
Optimization of non-oxidative carbon-removal techniques by nitrogen-containing plasmas
NASA Astrophysics Data System (ADS)
Ferreira, J. A.; Tabarés, F. L.; Tafalla, D.
2009-06-01
The continuous control of tritium inventory in ITER calls for the development of new conditioning techniques [G. Federici et al., Nucl. Fus. 41 (2001) 1967]. For carbon plasma-facing components, this implies the removal of the T-rich carbon co-deposits. In the presence of strong oxygen getters, such as Be, the use of oxygen-based techniques will be discouraged. In addition, tritiated water generated by these techniques poses extra problems in terms of safety issues [G. Saji, Fus. Eng. Des. 69 (2003) 631; G. Bellanger, J.J. Rameau, Fus. Technol. 32 (1997) 196; T. Hayashi, et al., Fus. Eng. Des. 81 (2006) 1365]. In the present work, oxygen-free (nitrogen and ammonia) glow discharge plasmas for carbon film removal were investigated. The gas mixtures were fed into a DC glow discharge running in a chamber coated with a ~200 nm carbon film. The erosion rate was measured in situ by laser interferometry; RGA (Residual Gas Analysis) and CTAMS (Cryotrapping Assisted Mass Spectrometry) [J.A. Ferreira, F.L. Tabarés, J. Vac. Sci. Technol. A25(2) (2007) 246] were used for the characterization of the reaction products. Very high erosion rates (similar to those obtained in a helium-oxygen glow discharge [J.A. Ferreira et al., J. Nucl. Mater. 363-365 (2007) 252]) were recorded for the ammonia glow discharge.
Cakar, Tarik; Koker, Rasit
2015-01-01
A particle swarm optimization (PSO) algorithm has been used to solve the single-machine total weighted tardiness (SMTWT) problem with unequal release dates. Three different solution approaches were combined to find the best solutions. To build the sub-hybrid solution system, genetic algorithms (GA) and simulated annealing (SA) were used. In the sub-hybrid system, whenever the GA obtains a solution, that solution is passed to SA as an initial solution; when SA finds a better solution, it stops and returns that solution to the GA. After the GA finishes, the resulting solution is handed to PSO, which searches for a further improvement and then sends its result back to the GA. In this way, the three solution systems work together. The neurohybrid system uses PSO as the main optimizer, with SA and GA as local search tools; at each stage, the local optimizers perform exploitation around the best particle. In addition to the local search tools, a neurodominance rule (NDR) is used to improve the final solution of the hybrid-PSO system; the NDR checks sequential jobs according to the total weighted tardiness criterion. The whole system is named the neurohybrid-PSO solution system. PMID:26221134
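To make the problem and one stage of the handoff concrete, here is the SMTWT objective with release dates on a five-job toy instance, together with a simulated-annealing stage like the one that refines each sequence passed over by the GA. The instance data, cooling schedule, and swap neighbourhood are illustrative choices, not those of the paper.

```python
import math
import random

random.seed(3)

# Toy SMTWT instance with unequal release dates:
# each job is (release r, processing p, due date d, weight w). Illustrative data.
jobs = [(0, 3, 6, 2), (1, 2, 5, 3), (2, 4, 12, 1), (0, 2, 4, 4), (3, 1, 7, 2)]

def twt(seq):
    """Total weighted tardiness of a single-machine sequence, honouring releases."""
    t = cost = 0
    for j in seq:
        r, p, d, w = jobs[j]
        t = max(t, r) + p                 # job starts no earlier than its release
        cost += w * max(0, t - d)
    return cost

def sa_stage(seq, iters=2000, T0=5.0):
    """SA stage of the hybrid: refine the sequence handed over by the GA
    (here started from an arbitrary sequence) via annealed pairwise swaps."""
    cur, best = list(seq), list(seq)
    for i in range(iters):
        a, b = random.sample(range(len(cur)), 2)
        cand = list(cur)
        cand[a], cand[b] = cand[b], cand[a]
        dE = twt(cand) - twt(cur)
        T = T0 * (1.0 - i/iters) + 1e-9   # linear cooling, illustrative
        if dE <= 0 or random.random() < math.exp(-dE/T):
            cur = cand
        if twt(cur) < twt(best):
            best = list(cur)
    return best

start = list(range(len(jobs)))
refined = sa_stage(start)
```

In the full scheme this refined sequence would be injected back into the GA population, and the best survivor eventually seeded into the PSO swarm.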
Chou, Sheng-Kai; Jiau, Ming-Kai; Huang, Shih-Chia
2016-08-01
The growing ubiquity of vehicles has led to increased concerns about environmental issues. These concerns can be mitigated by implementing an effective carpool service. In an intelligent carpool system, an automated service process assists carpool participants in determining routes and matches. This is a discrete optimization problem that involves a system-wide condition as well as participants' expectations. In this paper, we solve the carpool service problem (CSP) to provide satisfactory ride matches. To this end, we developed a particle swarm carpool algorithm based on stochastic set-based particle swarm optimization (PSO). Our method introduces stochastic coding to augment traditional particles, and uses three terms to represent a particle: 1) particle position; 2) particle view; and 3) particle velocity. In this way, the set-based PSO (S-PSO) can be realized by local exploration. In the simulation and experiments, two kinds of discrete PSO, the S-PSO and a binary PSO (BPSO), together with a genetic algorithm (GA), are compared and examined using test benchmarks that simulate a real-world metropolis. We observed that the S-PSO consistently outperformed the BPSO and the GA. Moreover, our method yielded the best result in a statistical test and successfully obtained numerical results meeting the optimization objectives of the CSP.
NASA Astrophysics Data System (ADS)
de Pascale, P.; Vasile, M.; Casotto, S.
The design of interplanetary trajectories requires the solution of an optimization problem, which has traditionally been solved by resorting to various local optimization techniques. All such approaches, apart from the specific method employed (direct or indirect), require an initial guess, which deeply influences convergence to the optimal solution. Recent developments in low-thrust propulsion have widened the perspectives of exploration of the Solar System, while at the same time increasing the difficulty of the trajectory design process. Continuous-thrust transfers, typically characterized by multiple spiraling arcs, have a large number of design parameters, and thanks to the flexibility offered by such engines, they typically turn out to have a multi-modal domain with a consequently larger number of optimal solutions. The definition of first guesses is thus even more challenging, particularly for a broad search over the design parameters, and it requires an extensive investigation of the domain in order to locate the largest number of optimal candidate solutions and possibly the globally optimal one. In this paper a tool for the preliminary definition of interplanetary transfers with coast-thrust arcs and multiple swing-bys is presented. This goal is achieved by combining a novel methodology for the description of low-thrust arcs with a global optimization algorithm based on a hybridization of an evolutionary step and a deterministic step. Low-thrust arcs are described in a 3D model, in order to account for the beneficial effects of low-thrust propulsion on changes of inclination, by resorting to a new methodology based on an inverse method. The two-point boundary value problem (TPBVP) associated with a thrust arc is solved by imposing a properly parameterized evolution of the orbital parameters, by which the acceleration required to follow the given trajectory with respect to the constraint set is obtained simply through