Strength Pareto particle swarm optimization and hybrid EA-PSO for multi-objective optimization.
Elhossini, Ahmed; Areibi, Shawki; Dony, Robert
2010-01-01
This paper proposes an efficient particle swarm optimization (PSO) technique that can handle multi-objective optimization problems. It is based on the strength Pareto approach originally used in evolutionary algorithms (EA). The proposed modified particle swarm algorithm is used to build three hybrid EA-PSO algorithms for solving different multi-objective optimization problems. The algorithm and its hybrid forms are tested on seven benchmarks from the literature, and the results are compared with the strength Pareto evolutionary algorithm (SPEA2) and a competitive multi-objective PSO (MO-PSO) using several metrics. The proposed algorithm shows slower convergence than the other algorithms but requires less CPU time. Combining PSO with evolutionary algorithms leads to superior hybrid algorithms that outperform SPEA2, MO-PSO, and the proposed strength Pareto PSO on different metrics.
Mohamed, Amr E.; Dorrah, Hassen T.
2016-01-01
The two-coupled distillation column process is a physically complicated system in many aspects. In particular, the nested interrelationship between system inputs and outputs constitutes one of the significant challenges in control design. Typically, such a process is decoupled into several input/output pairings (loops), so that a single controller can be assigned to each loop. In this research, the Brain Emotional Learning Based Intelligent Controller (BELBIC) forms the control structure for each decoupled loop. The paper's main objective is to develop a parameterization technique for the decoupling and control schemes that ensures robust control behavior. To this end, the novel Bacterial Swarm Optimization (BSO) technique is used to minimize the sum of the integral time-weighted squared errors (ITSEs) over all control loops. This optimization technique is a hybrid of the Particle Swarm Optimization and Bacterial Foraging algorithms. According to the simulation results, the hybridized technique ensures a low computational burden together with high decoupling and control accuracy. Moreover, behavior analysis of the proposed BELBIC shows a remarkable improvement in time-domain behavior and robustness over a conventional PID controller. PMID:27807444
Modeling of urban growth using cellular automata (CA) optimized by Particle Swarm Optimization (PSO)
NASA Astrophysics Data System (ADS)
Khalilnia, M. H.; Ghaemirad, T.; Abbaspour, R. A.
2013-09-01
In this paper, two satellite images of Tehran, the capital of Iran, taken by TM and ETM+ in 1988 and 2010, are used as the base information layers to study changes in the urban patterns of this metropolis. The urban growth patterns of Tehran over the period between the two images are extracted using cellular automata, with logistic regression functions as transition rules. Furthermore, the weighting coefficients of the parameters affecting urban growth, i.e. distance from urban centers, distance from rural centers, distance from agricultural centers, and neighborhood effects, are selected using PSO. To evaluate the prediction results, the percent correct match index is calculated. According to the results, by combining optimization techniques with the cellular automata model, urban growth patterns can be predicted with an accuracy of up to 75%.
ANN-PSO Integrated Optimization Methodology for Intelligent Control of MMC Machining
Chandrasekaran, Muthumari; Tamang, Santosh
2016-06-01
Metal Matrix Composites (MMC) show improved properties compared with non-reinforced alloys and have found increasing application in the automotive and aerospace industries. The selection of optimum machining parameters to produce components of the desired surface roughness is of great concern, considering the quality and economy of the manufacturing process. In this study, a surface roughness prediction model for turning Al-SiCp MMC is developed using an Artificial Neural Network (ANN). Three turning parameters, viz. spindle speed (N), feed rate (f) and depth of cut (d), were taken as input neurons, with surface roughness as the output neuron. A 3-5-1 ANN architecture is found to be optimum, and the model predicts with an average percentage error of 7.72%. The Particle Swarm Optimization (PSO) technique is then used to optimize the machining parameters so as to minimize machining time. The innovative aspect of this work is the development of an integrated ANN-PSO optimization method for intelligent control of the MMC machining process applicable to manufacturing industries. The robustness of the method shows its superiority in obtaining optimum cutting parameters that satisfy the desired surface roughness, and the method has better convergence capability, requiring a minimal number of iterations.
PSO-based support vector machine with cuckoo search technique for clinical disease diagnoses.
Liu, Xiaoyong; Fu, Hui
2014-01-01
Disease diagnosis is conducted with a machine learning method. We propose a novel machine learning method that hybridizes the support vector machine (SVM), particle swarm optimization (PSO), and cuckoo search (CS). The new method consists of two stages: first, a CS-based approach to SVM parameter optimization is developed to find good initial kernel-function parameters; then, PSO is applied to continue SVM training and find the best SVM parameters. Experimental results indicate that the proposed CS-PSO-SVM model achieves better classification accuracy and F-measure than PSO-SVM and GA-SVM. We therefore conclude that the proposed method is very efficient compared with previously reported algorithms. PMID:24971382
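The two-stage search described above can be sketched generically as follows. This is an illustrative sketch, not the paper's implementation: plain random sampling stands in for cuckoo search, a shrinking Gaussian local search stands in for the PSO stage, and a simple quadratic stands in for cross-validated SVM error; all names, budgets, and bounds are assumptions.

```python
import random

def two_stage_search(objective, bounds, seed=0):
    """Stage 1: global exploration (stands in for cuckoo search) picks good
    initial parameters. Stage 2: local refinement (stands in for the PSO run)
    polishes them. `objective` would be cross-validated SVM error in the
    paper's setting; here it is any black-box function to minimize."""
    rng = random.Random(seed)
    # Stage 1: sample the (C, gamma)-style box and keep the best point.
    best = min((tuple(rng.uniform(lo, hi) for lo, hi in bounds)
                for _ in range(80)), key=objective)
    # Stage 2: greedy local refinement with a shrinking Gaussian step.
    step = [0.25 * (hi - lo) for lo, hi in bounds]
    for _ in range(200):
        cand = tuple(min(max(b + rng.gauss(0, s), lo), hi)
                     for b, s, (lo, hi) in zip(best, step, bounds))
        if objective(cand) < objective(best):
            best = cand                     # accept only improvements
        step = [s * 0.99 for s in step]     # shrink the search radius
    return best

# Stand-in objective with optimum at (1.0, 0.01).
obj = lambda p: (p[0] - 1.0) ** 2 + (p[1] - 0.01) ** 2
best_params = two_stage_search(obj, bounds=[(0.1, 10.0), (0.001, 1.0)])
```

The greedy acceptance in stage 2 guarantees the refinement never undoes the gains of the global stage, which mirrors the paper's motivation for seeding PSO with CS-found parameters.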
NASA Astrophysics Data System (ADS)
Handayani, D.; Nuraini, N.; Tse, O.; Saragih, R.; Naiborhu, J.
2016-04-01
PSO is a computational optimization method motivated by the social behavior of organisms, such as bird flocking, fish schooling, and human social relations, and is one of the most important swarm intelligence algorithms. In this study, we analyze the convergence of PSO when it is applied to the within-host dengue infection treatment model simulated in our earlier research. We used the PSO method to construct the initial adjoint equation and to solve a control problem. The dependence of the control input on the continuity of the objective function, together with PSO's ability to adapt to a dynamic environment, motivates this convergence analysis. The analysis yields parameter conditions that ensure the convergence of numerical simulations of this model using PSO.
An Effective Optimization Method for Initial Wavelength Calibration of LAMOST Based on PSO
Wang, S.; Zhu, Z. Q.; Zhu, J.; Ye, G. H.; Ye, Z. F.
2011-09-01
The initial wavelength calibration procedure of the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) consists of three steps. First, for each point in the search space near the prior calibration coefficients, the corresponding simulated arc spectrum is obtained by interpolation. Then, the cross-correlation between the simulated arc spectrum and the observed one is calculated. Finally, the result of the initial wavelength calibration is the calibration coefficient corresponding to the maximum correlation coefficient. The calibration procedure is thus essentially a multi-parameter optimization problem. Particle swarm optimization (PSO) is a stochastic global optimization algorithm based on swarm intelligence; it is easy to implement and offers high accuracy and fast convergence. Considering this excellent performance, we propose a PSO-based optimization method for the initial wavelength calibration of LAMOST and design the corresponding algorithm and calibration test experiments. The experimental results show that the proposed PSO-based algorithm outperforms an improved genetic algorithm in terms of convergence speed, solution quality, and CPU time, and is therefore a more effective method for initial wavelength calibration.
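The cross-correlation objective that the calibration maximizes can be illustrated with a toy one-line arc spectrum. This is a sketch only: the Gaussian line profile, pixel grid, and candidate centers are assumptions, not LAMOST data or the paper's interpolation scheme.

```python
import math

def normalized_cross_correlation(a, b):
    """Pearson-style correlation between a simulated and an observed spectrum;
    the calibration search maximizes this over candidate coefficients."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def simulated_arc(line_center, n=200):
    # A single Gaussian arc line; a real simulation would interpolate a
    # template spectrum for the candidate calibration coefficients.
    return [math.exp(-((i - line_center) ** 2) / 50.0) for i in range(n)]

observed = simulated_arc(100)
scores = {c: normalized_cross_correlation(simulated_arc(c), observed)
          for c in (90, 100, 110)}
```

The candidate whose simulated line matches the observed line center scores highest, which is exactly the quantity PSO (or any optimizer) would maximize over the calibration coefficients.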
Zou, Feng; Chen, Debao; Wang, Jiangtao
2016-01-01
An improved teaching-learning-based optimization algorithm combined with the social character of PSO (TLBO-PSO), which considers the influence of the teacher's behavior on the students and the mean grade of the class, is proposed in this paper to find global solutions of function optimization problems. In this method, the teacher phase of TLBO is modified: the new position of an individual is determined by its old position, the mean position, and the best position of the current generation. The method overcomes the disadvantage that the evolution of the original TLBO might stall when the mean position of the students equals the position of the teacher. To decrease the computational cost of the algorithm, the duplicate-removal step of the original TLBO is not adopted in the improved algorithm. Moreover, the probability of local convergence of the improved method is decreased by a mutation operator. The effectiveness of the proposed method is tested on several benchmark functions, and the results are competitive with those of other methods. PMID:27057157
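The modified teacher phase can be sketched as follows. The teaching factor and the PSO-style attraction term toward the best individual are assumptions about the paper's exact formulation, but they illustrate why the search no longer stalls when the class mean equals the teacher.

```python
import random

def teacher_phase_step(pos, best, mean, rng):
    """One modified teacher-phase move: the learner is pulled toward the
    teacher (best) via the mean-difference term AND directly toward the
    teacher. When mean == best and tf == 1 the first term vanishes, but the
    second term still drives progress, avoiding the original TLBO stall."""
    tf = rng.choice([1, 2])  # teaching factor
    return [x + rng.random() * (b - tf * m) + rng.random() * (b - x)
            for x, b, m in zip(pos, best, mean)]

sphere = lambda x: sum(v * v for v in x)
rng = random.Random(0)
pop = [[rng.uniform(-5.0, 5.0) for _ in range(2)] for _ in range(10)]
initial_best = min(map(sphere, pop))
for _ in range(100):
    best = min(pop, key=sphere)
    mean = [sum(c) / len(pop) for c in zip(*pop)]
    new_pop = []
    for p in pop:
        cand = teacher_phase_step(p, best, mean, rng)
        new_pop.append(cand if sphere(cand) < sphere(p) else p)  # greedy accept
    pop = new_pop
final_best = min(map(sphere, pop))
```

The greedy acceptance mirrors TLBO's rule of keeping a learner's new position only if it improves, so the population's best fitness is non-increasing by construction.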
NASA Astrophysics Data System (ADS)
Rambabu, C.; Obulesu, Y. P.; Saibabu, Ch.
2014-07-01
This work presents a particle swarm optimization (PSO) based method for solving the optimal power flow problem in power systems incorporating flexible AC transmission system controllers, such as the thyristor-controlled phase shifter, thyristor-controlled series compensator, and unified power flow controller, for security enhancement under single network contingencies. A fuzzy contingency ranking method is used in this paper and is observed to effectively eliminate the masking effect compared with other contingency ranking methods. The fuzzy-based network composite overall severity index is used as the objective to be minimized to improve the security of the power system. The proposed PSO-based optimization process is presented with a case study on the IEEE 30-bus test system to demonstrate its applicability. The results show the feasibility and potential of this new approach.
Wang, Jie-sheng; Li, Shu-xia; Gao, Jie
2014-01-01
To meet the real-time fault diagnosis and optimized monitoring requirements of the polymerization kettle in the polyvinyl chloride (PVC) resin production process, a fault diagnosis strategy based on the self-organizing map (SOM) neural network is proposed. First, a mapping between the polymerization process data and the fault patterns is established by analyzing the production technology of the polymerization kettle equipment. The particle swarm optimization (PSO) algorithm, with a new dynamical adjustment method for the inertia weights, is adopted to optimize the structural parameters of the SOM neural network. Fault pattern classification of the polymerization kettle equipment then realizes the nonlinear mapping from the symptom set to the fault set. Finally, fault diagnosis simulation experiments are conducted using industrial on-site historical data of the polymerization kettle, and the simulation results show that the proposed PSO-SOM fault diagnosis strategy is effective. PMID:25152929
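The abstract does not give the paper's specific inertia-weight adjustment rule; a widely used stand-in is the linearly decreasing schedule, sketched below for illustration only.

```python
def inertia_weight(it, max_it, w_max=0.9, w_min=0.4):
    """Linearly decreasing inertia weight: a large w early in the run favours
    global exploration, a small w late in the run favours local exploitation.
    w_max and w_min are conventional values, not the paper's settings."""
    return w_max - (w_max - w_min) * it / max_it

schedule = [inertia_weight(it, 100) for it in range(0, 101, 50)]
```

Each PSO iteration would multiply the particle's previous velocity by this weight before adding the attraction terms.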
An Enhanced PSO-Based Clustering Energy Optimization Algorithm for Wireless Sensor Network
Vimalarani, C.; Subramanian, R.; Sivanandam, S. N.
2016-01-01
A Wireless Sensor Network (WSN) is a network formed from a large number of sensor nodes positioned in an application environment to monitor physical quantities in a target area, for example temperature, water level, and pressure, with applications in health care and various military domains. Sensor nodes are typically equipped with self-contained battery power, through which they perform their operations and communicate with neighboring nodes. To maximize the lifetime of a WSN, energy conservation measures are essential for improving its performance. This paper proposes an Enhanced PSO-Based Clustering Energy Optimization (EPSO-CEO) algorithm for Wireless Sensor Networks, in which clustering and cluster head selection are performed using the Particle Swarm Optimization (PSO) algorithm with the aim of minimizing the power consumption in the WSN. The performance metrics are evaluated and the results are compared with a competitive clustering algorithm to validate the reduction in energy consumption. PMID:26881273
Prediction of O-glycosylation Sites Using Random Forest and GA-Tuned PSO Technique
Hassan, Hebatallah; Badr, Amr; Abdelhalim, MB
2015-01-01
O-glycosylation is one of the main types of mammalian protein glycosylation; it occurs at particular serine (S) or threonine (T) sites. Several O-glycosylation site predictors have been developed; however, a need for even better prediction tools remains. One challenge in training the classifiers is that the available datasets are highly imbalanced, which makes the classification accuracy for the minority class unsatisfactory. In our previous work, we proposed a new classification approach based on particle swarm optimization (PSO) and random forest (RF) that addresses the imbalanced dataset problem. The PSO parameter settings used in the training process affect the classification accuracy. Thus, in this paper, we optimize the parameters of the PSO algorithm using a genetic algorithm in order to increase the classification accuracy. The proposed genetic algorithm-based approach shows better performance, in terms of area under the receiver operating characteristic curve, than existing predictors. In addition, we implemented a glycosylation predictor tool based on this approach and demonstrated that it can successfully identify candidate glycosylation sites in a case-study protein. PMID:26244014
Trajectory planning of free-floating space robot using Particle Swarm Optimization (PSO)
Wang, Mingming; Luo, Jianjun; Walter, Ulrich
2015-07-01
This paper investigates the application of the Particle Swarm Optimization (PSO) strategy to trajectory planning of a kinematically redundant space robot in free-floating mode. Due to path-dependent dynamic singularities, the volume of the available workspace of the space robot is limited, and enormous joint velocities are required when such singularities are met. To overcome this effect, the direct kinematics equations in conjunction with PSO are employed for trajectory planning of the free-floating space robot. The joint trajectories are parametrized with Bézier curves to simplify the calculation. A constrained PSO scheme with adaptive inertia weight is implemented to find the optimal joint trajectories while specific objectives and imposed constraints are satisfied. The proposed method is not sensitive to the singularity issue owing to the use of the forward kinematic equations. Simulation results are presented for trajectory planning of a 7-degree-of-freedom (DOF) redundant manipulator mounted on a free-floating spacecraft and demonstrate the effectiveness of the proposed method.
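The Bézier parametrization of a single joint trajectory can be sketched as follows. The control-point values are illustrative; in the scheme above, PSO would search over the inner control points of one such curve per joint, with the endpoints fixed by the boundary conditions.

```python
def bezier(control_points, t):
    """Evaluate a 1-D Bézier curve at t in [0, 1] by de Casteljau's
    algorithm: repeatedly interpolate between neighbouring points."""
    pts = list(control_points)
    while len(pts) > 1:
        pts = [(1 - t) * p + t * q for p, q in zip(pts, pts[1:])]
    return pts[0]

# A joint trajectory from 0 rad to 1 rad shaped by two free inner control
# points (0.3 and 0.8 here are arbitrary values an optimizer might pick).
traj = [bezier([0.0, 0.3, 0.8, 1.0], i / 50) for i in range(51)]
```

Because the curve always passes through the first and last control points, endpoint joint angles are satisfied by construction, and only the inner points need to be optimized.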
The Optimal Operation of Multi-reservoir Floodwater Resources Control Based on GA-PSO
Huang, X.; Zhu, X.; Lian, Y.; Fang, G.; Zhu, L.
2015-12-01
Floodwater resources control operation plays an important role in reducing flood disasters, easing the contradiction between water supply and demand, and improving flood resource utilization. Based on basin safety and floodwater resources utilization with maximum benefit for floodwater optimal scheduling, an optimal operation model for multi-reservoir floodwater resources control is established. There are two objectives of floodwater resources control operation in a multi-reservoir system: the first is floodwater control safety; the second is floodwater resource utilization with maximum benefit. For the floodwater control safety target, the maximal flood peak reduction criterion is selected as the objective function, meaning that the largest reduction in peak flow is taken as the standard for judging the optimal flood control operation. For floodwater resource utilization, maximum benefit refers to making full use of the multi-reservoir capacity and accumulating as much transit floodwater as possible; in other words, for a given flood process, the target is to release as little water as possible. The model is solved by a coupled optimization method combining a genetic algorithm and particle swarm optimization (GA-PSO). GA-PSO takes PSO as a template and introduces crossover and mutation into the PSO search process in order to improve the search capability of the particles; crossover and mutation are applied to the updated particles so that they retain the characteristics of the current global best solution. Taking the Shilianghe and Anfengshan reservoirs in Jiangsu Province, China, as a case study, the results show that the optimal operation reduces the floodwater resources control pressure, while keeping nearly 81.11 million cubic meters of floodwater resources accumulated in Longlianghe river and Anfengshan
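The GA step applied to the updated particles can be sketched as follows. The arithmetic crossover, Gaussian mutation, and all rates are assumptions standing in for the paper's exact operators; the sketch only shows how GA operators slot into a PSO generation.

```python
import random

def ga_step(swarm, rng, pc=0.7, pm=0.1, sigma=0.3):
    """Apply GA operators to an already-updated PSO swarm: pairwise
    arithmetic crossover (probability pc) then per-component Gaussian
    mutation (probability pm) to diversify the particles."""
    out = []
    for a, b in zip(swarm[::2], swarm[1::2]):
        if rng.random() < pc:  # arithmetic crossover of the pair
            alpha = rng.random()
            a, b = ([alpha * x + (1 - alpha) * y for x, y in zip(a, b)],
                    [alpha * y + (1 - alpha) * x for x, y in zip(a, b)])
        for p in (a, b):       # Gaussian mutation, component-wise
            out.append([x + rng.gauss(0, sigma) if rng.random() < pm else x
                        for x in p])
    if len(swarm) % 2:         # odd swarm size: carry the last particle over
        out.append(list(swarm[-1]))
    return out

rng = random.Random(2)
swarm = [[rng.uniform(-1.0, 1.0) for _ in range(3)] for _ in range(6)]
new_swarm = ga_step(swarm, rng)
```

In the coupled GA-PSO loop this step would run after the usual velocity/position update of each generation, preserving swarm size while injecting diversity.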
NASA Astrophysics Data System (ADS)
Xu, Hongbo; Chen, Guohua
2013-02-01
This paper presents an intelligent fault identification method for rolling bearings based on a least squares support vector machine optimized by improved particle swarm optimization (IPSO-LSSVM). The method adopts a modified PSO algorithm to optimize the parameters of the LSSVM, and the optimized model is then established to identify the different fault patterns of rolling bearings. First, the original fault vibration signals are decomposed into stationary intrinsic mode functions (IMFs) by the empirical mode decomposition (EMD) method, and the extraction of energy feature indexes based on IMF energy entropy is analyzed in detail. Second, the extracted energy indexes serve as fault feature vectors input to the IPSO-LSSVM classifier for identifying the different fault patterns. Finally, a case study on rolling bearing fault identification demonstrates that the method can effectively enhance identification accuracy and convergence rate.
Optimal placement of active braces by using PSO algorithm in near- and far-field earthquakes
Mastali, M.; Kheyroddin, A.; Samali, B.; Vahdani, R.
2016-03-01
One of the most important issues in tall buildings is the lateral resistance of the load-bearing systems against applied loads such as earthquake, wind, and blast. Dual systems comprising core wall systems (single- or multi-cell cores) and moment-resisting frames are used as resistance systems in tall buildings. In addition to the adequate stiffness provided by the dual system, most tall buildings may have to rely on various control systems to reduce the level of unwanted motion stemming from severe dynamic loads. One of the main challenges in effectively controlling the motion of a structure is the limitation in optimally distributing the required control along the structure's height. In this paper, concrete shear walls are used as a secondary resistance system at three different heights, together with actuators installed in the braces. The optimal actuator positions are found using an optimized PSO algorithm as well as arbitrarily. The control performance of buildings whose actuator placement is determined by the PSO algorithm is assessed and compared with that of arbitrary placement, using both near- and far-field ground motions of the Kobe and Chi-Chi earthquakes.
Improving Cooperative PSO using Fuzzy Logic
Afsahi, Zahra; Meybodi, Mohammadreza
PSO is a population-based optimization technique that simulates the social behaviour of fish schooling or bird flocking. Two significant weaknesses of this method are, first, falling into local optima and, second, the curse of dimensionality. In this work we present FCPSO-H to overcome these weaknesses. Our approach is implemented in a cooperative PSO that employs fuzzy logic to control the acceleration coefficients in the velocity equation of each particle. The proposed approach is validated on function optimization problems from the standard literature; simulation results indicate that the approach is highly competitive, particularly in its better general convergence performance.
PSO-Based Smart Grid Application for Sizing and Optimization of Hybrid Renewable Energy Systems.
Mohamed, Mohamed A; Eltamaly, Ali M; Alolah, Abdulrahman I
2016-01-01
This paper introduces an optimal sizing algorithm for a hybrid renewable energy system using a smart grid load management application based on the available generation. The algorithm aims to maximize the system energy production and meet the load demand with minimum cost and highest reliability. The system is formed by a photovoltaic array, wind turbines, storage batteries, and a diesel generator as a backup source of energy. Demand profile shaping, as one of the smart grid applications, is introduced in this paper using load shifting based on load priority. Particle swarm optimization is used in this algorithm to determine the optimum size of the system components. The results obtained from this algorithm are compared with those from an iterative optimization technique to assess the adequacy of the proposed algorithm. The study is performed for some remote areas in Saudi Arabia and can be expanded to any similar regions around the world. Numerous valuable results are extracted from this study that could help researchers and decision makers. PMID:27513000
NASA Astrophysics Data System (ADS)
Zhang, Enlai; Hou, Liang; Shen, Chao; Shi, Yingliang; Zhang, Yaxiang
2016-01-01
To better address the complex non-linear relationship between subjective sound quality evaluation results and objective psychoacoustic parameters, a method for predicting sound quality is put forward using a back-propagation neural network (BPNN) based on particle swarm optimization (PSO), in which the initial weights and thresholds of the BP network neurons are optimized by PSO. To verify the effectiveness and accuracy of this approach, the noise signals of B-class vehicles from idle speed up to 120 km/h, measured with an artificial head, are taken as the target. In addition, this paper describes a subjective evaluation experiment on the annoyance of the sound quality inside the vehicles using a grade evaluation method, by which the annoyance of each sample is obtained. Using the Artemis software, the main objective psychoacoustic parameters of each noise sample are calculated: loudness, sharpness, roughness, fluctuation strength, tonality, articulation index (AI), and A-weighted sound pressure level. Furthermore, three evaluation models with the same artificial neural network (ANN) structure are built: the standard BPNN model, the genetic algorithm-back-propagation neural network (GA-BPNN) model, and the PSO-back-propagation neural network (PSO-BPNN) model. After network training and evaluation prediction with the three models based on the experimental data, the PSO-BPNN method is shown to converge more quickly and to improve the prediction accuracy of sound quality, laying a foundation for the control of sound quality inside vehicles.
hydroPSO: A Versatile Particle Swarm Optimisation R Package for Calibration of Environmental Models
NASA Astrophysics Data System (ADS)
Zambrano-Bigiarini, M.; Rojas, R.
2012-04-01
Particle Swarm Optimisation (PSO) is a recent and powerful population-based stochastic optimisation technique inspired by the social behaviour of bird flocking, which shares similarities with other evolutionary techniques such as Genetic Algorithms (GA). In PSO, however, each individual of the population, known as a particle in PSO terminology, adjusts its flying trajectory over the multi-dimensional search space according to its own experience (best-known personal position) and that of its neighbours in the swarm (best-known local position). PSO has recently received a surge of attention given its flexibility, ease of programming, low memory and CPU requirements, and efficiency. Despite these advantages, PSO may still get trapped in sub-optimal solutions, or suffer from swarm explosion or premature convergence. Thus, the development of enhancements to the "canonical" PSO is an active area of research. To date, several modifications to the canonical PSO have been proposed in the literature, resulting in a large and dispersed collection of codes and algorithms which might well be used for similar if not identical purposes. In this work we present hydroPSO, a platform-independent R package implementing several enhancements to the canonical PSO that we consider of utmost importance to bring this technique to the attention of a broader community of scientists and practitioners. hydroPSO is model-independent, allowing the user to interface any model code with the calibration engine without having to invest considerable effort in customizing PSO to a new calibration problem. Some of the controlling options to fine-tune hydroPSO are: four alternative topologies, several types of inertia weight, time-variant acceleration coefficients, time-variant maximum velocity, regrouping of particles when premature convergence is detected, different types of boundary conditions, and many others. Additionally, hydroPSO implements recent PSO variants such as: Improved Particle Swarm
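The velocity-and-position update described in this abstract, with each particle pulled toward its personal best and its neighbourhood best, can be sketched compactly. The following is an illustrative Python implementation with a ring ("local best") topology; the function names, parameter values, and the sphere benchmark are our own choices for illustration, not hydroPSO code.

```python
import random

def sphere(x):
    """Benchmark objective: sum of squares, minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def pso(f, dim=5, n_particles=20, iters=300, w=0.72, c1=1.49, c2=1.49, seed=1):
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(x) for x in X]          # best-known personal positions
    pbest_val = [f(x) for x in X]
    for _ in range(iters):
        for i in range(n_particles):
            # Ring topology: neighbourhood best among particle i and its two neighbours.
            hood = [(i - 1) % n_particles, i, (i + 1) % n_particles]
            lbest = pbest[min(hood, key=lambda j: pbest_val[j])]
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (lbest[d] - X[i][d]))
                X[i][d] += V[i][d]
            val = f(X[i])
            if val < pbest_val[i]:
                pbest_val[i], pbest[i] = val, list(X[i])
    k = min(range(n_particles), key=lambda j: pbest_val[j])
    return pbest[k], pbest_val[k]
```

On a smooth unimodal benchmark such as the sphere function, these commonly used inertia and acceleration values drive the swarm close to the optimum within a few hundred iterations.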
NASA Astrophysics Data System (ADS)
Yang, Yahong; Zhao, Fuqing; Hong, Yi; Yu, Dongmei
2005-12-01
Integrating process planning with scheduling, while considering the manufacturing system's capacity and cost in its workshop, is a critical issue. Concurrency between the two can also eliminate redundant processes and optimize the entire production cycle, but most integrated process planning and scheduling methods consider only the time aspects of the alternative machines when constructing schedules. In this paper, a fuzzy inference system (FIS) for choosing alternative machines in integrated process planning and scheduling of a job shop manufacturing system is presented. Instead of choosing alternative machines randomly, machines are selected based on their reliability. The mean-time-to-failure (MTF) values are input to a fuzzy inference mechanism, which outputs the machine reliability. The machine is then penalized based on the fuzzy output, so that the most reliable machine has the highest priority to be chosen. To overcome the under-utilization of machines sometimes caused by an unreliable machine, particle swarm optimization (PSO) is used to balance the load across all machines. A simulation study shows that the system can be used as an alternative way of choosing machines in integrated process planning and scheduling.
NASA Astrophysics Data System (ADS)
Astuty; Haryono, T.
2016-04-01
Transmission expansion planning (TEP) is one of the issues that must be faced when large-scale power generation is added to an existing power system. Optimization needs to be conducted to obtain a solution that is optimal both technically and economically. Several mathematical methods have been applied to find the optimal allocation of new transmission lines, such as genetic algorithms, particle swarm optimization, and tabu search. This paper proposes novel binary particle swarm optimization (NBPSO) to determine which transmission lines should be added to the existing power system. Two scenarios are simulated: the first considers transmission power losses, while the second disregards them. The NBPSO method successfully obtains the optimal solution in a short computation time. Compared with the first scenario, the second scenario requires fewer new lines but produces high power losses, which makes the cost extremely high.
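Binary PSO differs from the continuous form in that velocities are mapped through a sigmoid to bit-flip probabilities, so each position component is a 0/1 decision such as "build this line or not". The sketch below is a generic sigmoid-transfer binary PSO applied to a toy selection objective (Hamming distance to a hypothetical optimal plan); it is not the authors' NBPSO, and the objective is purely a stand-in for a real TEP cost model.

```python
import math
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]  # hypothetical optimal expansion plan

def plan_cost(bits):
    """Toy stand-in for the TEP objective: distance from the optimal plan."""
    return sum(b != t for b, t in zip(bits, TARGET))

def bpso(f, n_bits=12, n_particles=30, iters=150, w=0.72, c1=1.49, c2=1.49, seed=3):
    rng = random.Random(seed)
    sig = lambda v: 1.0 / (1.0 + math.exp(-v))
    X = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(n_particles)]
    V = [[0.0] * n_bits for _ in range(n_particles)]
    pbest = [list(x) for x in X]
    pval = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = list(pbest[g]), pval[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_bits):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                V[i][d] = max(-6.0, min(6.0, V[i][d]))  # velocity clamp
                # Sigmoid transfer: velocity sets the probability of the bit being 1.
                X[i][d] = 1 if rng.random() < sig(V[i][d]) else 0
            val = f(X[i])
            if val < pval[i]:
                pval[i], pbest[i] = val, list(X[i])
                if val < gval:
                    gval, gbest = val, list(X[i])
    return gbest, gval
```

The velocity clamp keeps the sigmoid from saturating, which would otherwise freeze bits permanently at 0 or 1.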
Particle swarm optimization for complex nonlinear optimization problems
NASA Astrophysics Data System (ADS)
Alexandridis, Alex; Famelis, Ioannis Th.; Tsitouras, Charalambos
2016-06-01
This work presents the application of a technique belonging to evolutionary computation, namely particle swarm optimization (PSO), to complex nonlinear optimization problems. To be more specific, a PSO optimizer is set up and applied to the derivation of Runge-Kutta pairs for the numerical solution of initial value problems. The effect of critical PSO operational parameters on the performance of the proposed scheme is thoroughly investigated.
MAGEE,GLEN I.
2000-08-03
Computers transfer data in a number of different ways. Whether through a serial port, a parallel port, over a modem, over an ethernet cable, or internally from a hard disk to memory, some data will be lost. To compensate for that loss, numerous error detection and correction algorithms have been developed. One of the most common error correction codes is the Reed-Solomon code, which is a special subset of BCH (Bose-Chaudhuri-Hocquenghem) linear cyclic block codes. In the AURA project, an unmanned aircraft sends the data it collects back to earth so it can be analyzed during flight and possible flight modifications made. To counter possible data corruption during transmission, the data is encoded using a multi-block Reed-Solomon implementation with a possibly shortened final block. In order to maximize the amount of data transmitted, it was necessary to reduce the computation time of a Reed-Solomon encoding to three percent of the processor's time. To achieve such a reduction, many code optimization techniques were employed. This paper outlines the steps taken to reduce the processing time of a Reed-Solomon encoding and the insight into modern optimization techniques gained from the experience.
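A minimal systematic Reed-Solomon encoder over GF(2^8) illustrates the encoding step the abstract refers to. This sketch uses the common primitive polynomial 0x11d; it is an illustrative textbook implementation, not the AURA flight code (which was heavily optimized for speed).

```python
# Arithmetic over GF(2^8), built from the primitive polynomial 0x11d.
GF_EXP = [0] * 512   # antilog table, doubled to avoid modular reduction in mul
GF_LOG = [0] * 256
_x = 1
for _i in range(255):
    GF_EXP[_i] = _x
    GF_LOG[_x] = _i
    _x <<= 1
    if _x & 0x100:
        _x ^= 0x11D
for _i in range(255, 512):
    GF_EXP[_i] = GF_EXP[_i - 255]

def gf_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return GF_EXP[GF_LOG[a] + GF_LOG[b]]

def gf_poly_mul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] ^= gf_mul(a, b)
    return r

def rs_generator_poly(nsym):
    """Generator polynomial with roots alpha^0 .. alpha^(nsym-1)."""
    g = [1]
    for i in range(nsym):
        g = gf_poly_mul(g, [1, GF_EXP[i]])
    return g

def gf_poly_mod(dividend, divisor):
    """Remainder of polynomial division (divisor assumed monic)."""
    res = list(dividend)
    for i in range(len(dividend) - len(divisor) + 1):
        coef = res[i]
        if coef:
            for j in range(1, len(divisor)):
                res[i + j] ^= gf_mul(divisor[j], coef)
    return res[-(len(divisor) - 1):]

def rs_encode(msg, nsym):
    """Systematic encoding: append nsym parity bytes so the codeword
    polynomial is divisible by the generator polynomial."""
    gen = rs_generator_poly(nsym)
    parity = gf_poly_mod(list(msg) + [0] * nsym, gen)
    return list(msg) + parity
```

Because the encoding is systematic, the original message bytes appear unchanged at the front of the codeword, and a valid codeword leaves a zero remainder when divided by the generator polynomial.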
PSO-tuned PID controller for coupled tank system via priority-based fitness scheme
NASA Astrophysics Data System (ADS)
Jaafar, Hazriq Izzuan; Hussien, Sharifah Yuslinda Syed; Selamat, Nur Asmiza; Abidin, Amar Faiz Zainal; Aras, Mohd Shahrieel Mohd; Nasir, Mohamad Na'im Mohd; Bohari, Zul Hasrizal
2015-05-01
The industrial applications of the Coupled Tank System (CTS) are widespread, especially in the chemical process industries. The overall process requires liquids to be pumped, stored in a tank, and pumped again to another tank. The level of liquid in each tank needs to be controlled, and the flow between the two tanks must be regulated. This paper presents the development of an optimal PID controller for controlling the desired liquid level of the CTS. Two Particle Swarm Optimization (PSO) methods are tested for optimizing the PID controller parameters: standard Particle Swarm Optimization (PSO) and the Priority-based Fitness Scheme in Particle Swarm Optimization (PFPSO). Simulation is conducted within the Matlab environment to verify the performance of the system in terms of settling time (Ts), steady-state error (SSE), and overshoot (OS). It is demonstrated that implementing PSO via the Priority-based Fitness Scheme (PFPSO) is a promising technique for controlling the desired liquid level and improves system performance compared with standard PSO.
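PID tuning by PSO amounts to searching the (Kp, Ki, Kd) space against a cost computed from a simulated closed-loop step response. The sketch below uses a crude first-order single-tank stand-in for the coupled tank plant and a time-weighted squared-error cost; the plant model, gain bounds, and cost choice are illustrative assumptions, and the priority-based fitness scheme itself is not reproduced here.

```python
import random

def step_cost(gains, tau=5.0, dt=0.05, t_end=20.0, setpoint=1.0):
    """Integral of time-weighted squared error (ITSE) for a PID loop around a
    first-order plant dy/dt = (u - y)/tau, a crude single-tank stand-in."""
    kp, ki, kd = gains
    y = integ = prev_e = 0.0
    cost = t = 0.0
    for _ in range(int(t_end / dt)):
        e = setpoint - y
        integ += e * dt
        deriv = (e - prev_e) / dt
        u = kp * e + ki * integ + kd * deriv
        prev_e = e
        y += (u - y) / tau * dt      # forward-Euler plant step
        t += dt
        cost += t * e * e * dt
    return cost

def pso_tune(f, bounds, n=25, iters=80, w=0.7, c1=1.5, c2=1.5, seed=7):
    """Standard global-best PSO with positions clamped to the gain bounds."""
    rng = random.Random(seed)
    dim = len(bounds)
    X = [[rng.uniform(*bounds[d]) for d in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    pbest = [list(x) for x in X]
    pval = [f(x) for x in X]
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = list(pbest[g]), pval[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d] + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                lo, hi = bounds[d]
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            val = f(X[i])
            if val < pval[i]:
                pval[i], pbest[i] = val, list(X[i])
                if val < gval:
                    gval, gbest = val, list(X[i])
    return gbest, gval

# Example call (bounds are illustrative):
# gains, cost = pso_tune(step_cost, [(0.0, 20.0), (0.0, 5.0), (0.0, 10.0)])
```

Any gain vector that destabilizes the simulated loop simply accumulates a huge cost, so the swarm naturally avoids that region without explicit constraints.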
NASA Astrophysics Data System (ADS)
Izzuan Jaafar, Hazriq; Mohd Ali, Nursabillilah; Mohamed, Z.; Asmiza Selamat, Nur; Faiz Zainal Abidin, Amar; Jamian, J. J.; Kassim, Anuar Mohamed
2013-12-01
This paper presents the development of optimal PID and PD controllers for controlling a nonlinear gantry crane system. The proposed Binary Particle Swarm Optimization (BPSO) algorithm, which uses a Priority-based Fitness Scheme, is adopted to obtain five optimal controller gains. The optimal gains are tested on a control structure that combines PID and PD controllers to examine system responses, including trolley displacement and payload oscillation. The dynamic model of the gantry crane system is derived using the Lagrange equation. Simulation is conducted within the Matlab environment to verify the performance of the system in terms of settling time (Ts), steady-state error (SSE), and overshoot (OS). The proposed technique demonstrates that implementing the Priority-based Fitness Scheme in BPSO is effective and moves the trolley as fast as possible to the various desired positions.
Hybrid PSO-ASVR-based method for data fitting in the calibration of infrared radiometer.
Yang, Sen; Li, Chengwei
2016-06-01
The present paper describes a hybrid particle swarm optimization-adaptive support vector regression (PSO-ASVR) based method for data fitting in the calibration of an infrared radiometer. The proposed method combines PSO with adaptive processing and support vector regression (SVR). The optimization technique involves setting the parameters of the ASVR fitting procedure, which significantly improves the fitting accuracy; however, its use in the calibration of infrared radiometers has not yet been widely explored. Bearing this in mind, the PSO-ASVR based method, which is grounded in statistical learning theory, is successfully used here to obtain the relationship between the radiation of a standard source and the response of an infrared radiometer. The main advantages of this method are the flexible adjustment mechanism in data processing and the optimization mechanism for the kernel parameter setting of SVR. Numerical examples and applications to the calibration of an infrared radiometer are performed to verify the performance of the PSO-ASVR based method compared with conventional data fitting methods. PMID:27370427
Utilization of PSO algorithm in estimation of water level change of Lake Beysehir
NASA Astrophysics Data System (ADS)
Buyukyildiz, Meral; Tezel, Gulay
2015-12-01
In this study, the usefulness of the particle swarm optimization (PSO) algorithm, a population-based optimization technique with a global search feature inspired by the behavior of bird flocks, was investigated for determining the parameters of support vector machine (SVM) and adaptive network-based fuzzy inference system (ANFIS) methods, in contrast to the backpropagation algorithm, which tends to converge to locally optimal solutions. For this purpose, the performances of hybrid PSO-ɛ support vector regression (PSO-ɛSVR) and PSO-ANFIS models were studied to estimate the water level change of Lake Beysehir in Turkey. The change in water level was also estimated using the generalized regression neural network (GRNN) method. Root mean square error (RMSE), mean absolute error (MAE), and the coefficient of determination (R²) were used to compare the results. Water level change (L) was estimated using different input combinations of monthly inflow-lost flow (I), precipitation (P), evaporation (E), and outflow (O). According to the results, the methods other than PSO-ANN generally showed significantly similar performance to each other. The PSO-ɛSVR method, with minMAE = 0.0052 m, maxMAE = 0.04 m, and medianMAE = 0.0198 m; minRMSE = 0.0070 m, maxRMSE = 0.0518 m, and medianRMSE = 0.0241 m; and minR² = 0.9169, maxR² = 0.9995, and medianR² = 0.9909 for the I-P-E-O combination in the testing period, was superior to the other methods in forecasting the water level change of Lake Beysehir. PSO-ANN models were the least successful in all combinations.
NASA Astrophysics Data System (ADS)
Fernández Martínez, Juan L.; García Gonzalo, Esperanza; Fernández Álvarez, José P.; Kuzma, Heidi A.; Menéndez Pérez, César O.
2010-05-01
PSO is an optimization technique inspired by the social behavior of individuals in nature (swarms) that has been successfully used in many different engineering fields. In addition, the PSO algorithm can be physically interpreted as a stochastic damped mass-spring system. This analogy has served to introduce the PSO continuous model and to deduce a whole family of PSO algorithms using different finite-differences schemes. These algorithms are characterized in terms of convergence by their respective first and second order stability regions. The performance of these new algorithms is first checked using synthetic functions showing a degree of ill-posedness similar to that found in many geophysical inverse problems having their global minimum located on a very narrow flat valley or surrounded by multiple local minima. Finally we present the application of these PSO algorithms to the analysis and solution of a VES inverse problem associated with a seawater intrusion in a coastal aquifer in southern Spain. PSO family members are successfully compared to other well known global optimization algorithms (binary genetic algorithms and simulated annealing) in terms of their respective convergence curves and the sea water intrusion depth posterior histograms.
Techniques for shuttle trajectory optimization
NASA Technical Reports Server (NTRS)
Edge, E. R.; Shieh, C. J.; Powers, W. F.
1973-01-01
The application of recently developed function-space Davidon-type techniques to the shuttle ascent trajectory optimization problem is discussed along with an investigation of the recently developed PRAXIS algorithm for parameter optimization. At the outset of this analysis, the major deficiency of the function-space algorithms was their potential storage problems. Since most previous analyses of the methods were with relatively low-dimension problems, no storage problems were encountered. However, in shuttle trajectory optimization, storage is a problem, and this problem was handled efficiently. Topics discussed include: the shuttle ascent model and the development of the particular optimization equations; the function-space algorithms; the operation of the algorithm and typical simulations; variable final-time problem considerations; and a modification of Powell's algorithm.
An effective PSO-based memetic algorithm for flow shop scheduling.
Liu, Bo; Wang, Ling; Jin, Yi-Hui
2007-02-01
This paper proposes an effective particle swarm optimization (PSO)-based memetic algorithm (MA) for the permutation flow shop scheduling problem (PFSSP) with the objective of minimizing the maximum completion time, a typical non-deterministic polynomial-time (NP) hard combinatorial optimization problem. In the proposed PSO-based MA (PSOMA), both PSO-based searching operators and special local searching operators are designed to balance the exploration and exploitation abilities. In particular, the PSOMA applies the evolutionary searching mechanism of PSO, characterized by individual improvement, population cooperation, and competition, to effectively perform exploration, while utilizing several adaptive local searches to perform exploitation. First, to make PSO suitable for solving the PFSSP, a ranked-order value rule based on a random key representation is presented to convert the continuous position values of particles into job permutations. Second, to generate an initial swarm with a degree of quality and diversity, the well-known Nawaz-Enscore-Ham (NEH) heuristic is incorporated into the initialization of the population. Third, to balance the exploration and exploitation abilities, after the standard PSO-based searching operation, a new local search technique named NEH_1 insertion is probabilistically applied to some good particles selected by a roulette wheel mechanism with a specified probability. Fourth, to enrich the searching behavior and avoid premature convergence, a simulated annealing (SA)-based local search with multiple different neighborhoods is designed and incorporated into the PSOMA. Meanwhile, an effective adaptive meta-Lamarckian learning strategy is employed to decide which neighborhood to use in the SA-based local search. Finally, to further enhance the exploitation ability, a pairwise-based local search is applied after the SA-based search. Simulation results based on benchmarks demonstrate the effectiveness
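The ranked-order value (ROV) rule mentioned above maps a particle's continuous position vector to a job permutation by ranking the components. A minimal sketch, together with a standard permutation flow shop makespan evaluator for context, might look as follows; the convention that the smallest position value is scheduled first is our own choice, since the abstract does not fix it.

```python
def ranked_order_value(position):
    """Map a particle's continuous position vector to a job permutation:
    the job whose position component is smallest is scheduled first."""
    return sorted(range(len(position)), key=lambda j: position[j])

def makespan(perm, proc_times):
    """Completion time of the last job on the last machine in a permutation
    flow shop, given proc_times[job][machine]."""
    n_machines = len(proc_times[0])
    finish = [0.0] * n_machines   # completion time of the previous job per machine
    for j in perm:
        for m in range(n_machines):
            start = max(finish[m], finish[m - 1] if m else 0.0)
            finish[m] = start + proc_times[j][m]
    return finish[-1]

# Example: position (0.8, 0.1, 0.5) decodes to the permutation [1, 2, 0],
# i.e. job 1 first (smallest value), then job 2, then job 0.
```

This decoding is what lets a continuous optimizer like PSO search a combinatorial space: the swarm moves in real-valued space while fitness is always evaluated on the decoded permutation.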
Ye, Zhiwei; Wang, Mingwei; Hu, Zhengbing; Liu, Wei
2015-01-01
Image enhancement is an important procedure of image processing and analysis. This paper presents a new technique using a modified measure and a blend of cuckoo search and particle swarm optimization (CS-PSO) to enhance low-contrast images adaptively. Contrast enhancement is obtained by a global transformation of the input intensities; the incomplete Beta function is employed as the transformation function, together with a novel criterion for measuring image quality that considers three factors: threshold, entropy value, and gray-level probability density of the image. The enhancement process is a nonlinear optimization problem with several constraints. CS-PSO is utilized to maximize the objective fitness criterion in order to enhance the contrast and detail in an image by adapting the parameters of a novel extension to a local enhancement technique. The performance of the proposed method has been compared with existing techniques such as linear contrast stretching, histogram equalization, and evolutionary-computing-based image enhancement methods such as the backtracking search algorithm, the differential search algorithm, genetic algorithms, and particle swarm optimization, in terms of processing time and image quality. Experimental results demonstrate that the proposed method is robust and adaptive and exhibits better performance than the other methods considered in the paper. PMID:25784928
Early Mission Design of Transfers to Halo Orbits via Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Abraham, Andrew J.; Spencer, David B.; Hart, Terry J.
2016-06-01
Particle Swarm Optimization (PSO) is used to prune the search space of a low-thrust trajectory transfer from a high-altitude, Earth orbit to a Lagrange point orbit in the Earth-Moon system. Unlike a gradient based approach, this evolutionary PSO algorithm is capable of avoiding undesirable local minima. The PSO method is extended to a "local" version and uses a two dimensional search space that is capable of reducing the computation run-time by an order of magnitude when compared with published work. A technique for choosing appropriate PSO parameters is demonstrated and an example of an optimized trajectory is discussed.
Goodarzi, Mohammad; Saeys, Wouter; Deeb, Omar; Pieters, Sigrid; Vander Heyden, Yvan
2013-12-01
Quantitative structure-activity relationship (QSAR) modeling was performed for imidazo[1,5-a]pyrido[3,2-e]pyrazines, which constitute a class of phosphodiesterase 10A inhibitors. Particle swarm optimization (PSO) and a genetic algorithm (GA) were used as feature selection techniques to find the most reliable molecular descriptors from a large pool. The relationship between the selected descriptors and the pIC50 activity data was modeled by linear [multiple linear regression (MLR)] and non-linear [locally weighted regression (LWR), based on both Euclidean (E) and Mahalanobis (M) distances] methods. In addition, a stepwise MLR model was built using only a limited number of quantum chemical descriptors, selected because of their correlation with the pIC50; this model did not prove satisfactory. It was concluded that the LWR model based on the Euclidean distance, applied to the descriptors selected by PSO, has the best prediction ability, although some other models behaved similarly. The root-mean-squared errors of prediction (RMSEP) for the test sets obtained by the PSO/MLR, GA/MLR, PSO/LWRE, PSO/LWRM, GA/LWRE, and GA/LWRM models were 0.333, 0.394, 0.313, 0.333, 0.421, and 0.424, respectively. The PSO-selected descriptors resulted in the best prediction models, both linear and non-linear.
Particle Swarm Optimization with Dynamic Step Length
NASA Astrophysics Data System (ADS)
Cui, Zhihua; Cai, Xingjuan; Zeng, Jianchao; Sun, Guoji
Particle swarm optimization (PSO) is a robust swarm intelligence technique inspired by bird flocking and fish schooling. Although many effective improvements have been proposed, premature convergence remains its main problem. Because each particle's movement is a continuous process and can be modelled with groups of differential equations, a new variant, particle swarm optimization with dynamic step length (PSO-DSL), with an additional control coefficient (the step length), is introduced. Absolute stability theory is then applied to analyze the stability of standard PSO; the theoretical result indicates that PSO with a constant step length cannot always be stable, which may be one reason for premature convergence. Simulation results show that PSO-DSL is effective.
NASA Astrophysics Data System (ADS)
Sameen, Maher Ibrahim; Pradhan, Biswajeet
2016-06-01
In this study, we propose a novel built-up spectral index developed using the particle swarm optimization (PSO) technique for Worldview-2 images. PSO was used to select the relevant bands from the eight spectral bands of a Worldview-2 image, which were then used for index development. Multiobjective optimization was used to minimize the number of selected spectral bands and to maximize the classification accuracy. The results showed that the most relevant spectral bands among the eight for built-up area extraction are band 4 (yellow) and band 7 (NIR1). Using these relevant spectral bands, the final spectral index was formulated as a normalized band ratio. Validation of the classification result showed that the proposed spectral index performs well compared with the existing WV-BI index. The accuracy assessment showed that the new index can extract built-up areas from Worldview-2 imagery with an area under the curve (AUC) of 0.76, indicating the effectiveness of the developed spectral index. Further improvement could be achieved by using several datasets during the index development process to ensure the transferability of the index to other datasets and study areas.
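The abstract states that the final index is a normalized band ratio of band 4 (yellow) and band 7 (NIR1) but does not quote the exact expression, so the sketch below assumes the standard normalized-difference form; the sign convention and the classification threshold are our own assumptions, not the paper's.

```python
def built_up_index(yellow, nir1):
    """Normalized band ratio of Worldview-2 band 4 (yellow) and band 7 (NIR1).
    Assumes the standard normalized-difference form, value in [-1, 1];
    the paper's exact formulation may differ."""
    denom = yellow + nir1
    return 0.0 if denom == 0 else (yellow - nir1) / denom

def classify(yellow, nir1, threshold=0.0):
    """Binary built-up mask: a pixel is flagged when the index exceeds
    the (assumed) threshold."""
    return built_up_index(yellow, nir1) > threshold
```

In practice the threshold would be chosen from the AUC analysis on labelled training pixels rather than fixed at zero.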
Polynomial optimization techniques for activity scheduling. Optimization based prototype scheduler
NASA Technical Reports Server (NTRS)
Reddy, Surender
1991-01-01
Polynomial optimization techniques for activity scheduling (an optimization-based prototype scheduler) are presented in the form of viewgraphs. The following subject areas are covered: agenda; need and viability of polynomial-time techniques for SNC (Space Network Control); an intrinsic characteristic of the SN scheduling problem; expected characteristics of the schedule; the optimization-based scheduling approach; single-resource algorithms; decomposition of multiple-resource problems; prototype capabilities, characteristics, and test results; computational characteristics; some features of the prototyped algorithms; and some related GSFC references.
Chen, Shyi-Ming; Manalu, Gandhi Maruli Tua; Pan, Jeng-Shyang; Liu, Hsiang-Chuan
2013-06-01
In this paper, we present a new method for fuzzy forecasting based on two-factors second-order fuzzy-trend logical relationship groups and particle swarm optimization (PSO) techniques. First, we fuzzify the historical training data of the main factor and the secondary factor, respectively, to form two-factors second-order fuzzy logical relationships. Then, we group these relationships into two-factors second-order fuzzy-trend logical relationship groups and obtain the optimal weighting vector for each group by using PSO techniques to perform the forecasting. We also apply the proposed method to forecast the Taiwan Stock Exchange Capitalization Weighted Stock Index and the NTD/USD exchange rates. The experimental results show that the proposed method achieves better forecasting performance than the existing methods.
Classification of Two Class Motor Imagery Tasks Using Hybrid GA-PSO Based K-Means Clustering
Suraj; Tiwari, Purnendu; Ghosh, Subhojit; Sinha, Rakesh Kumar
2015-01-01
Transferring a brain-computer interface (BCI) from laboratory conditions to real-world applications requires the BCI to operate asynchronously, without any time constraint. The high level of dynamism in the electroencephalogram (EEG) signal motivates the use of evolutionary algorithms (EAs). Motivated by these two facts, in this work a hybrid GA-PSO based K-means clustering technique has been used to distinguish two-class motor imagery (MI) tasks. The proposed hybrid GA-PSO based K-means clustering is found to outperform genetic algorithm (GA) and particle swarm optimization (PSO) based K-means clustering in terms of both accuracy and execution time. The lower execution time of the hybrid GA-PSO technique makes it suitable for real-time BCI applications. Time-frequency representation (TFR) techniques have been used to extract features of the signal under investigation. TFR-based features are extracted, and a feature vector is formed relying on the concepts of event-related synchronization (ERS) and desynchronization (ERD). PMID:25972896
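The PSO half of such a hybrid clusterer can be sketched as PSO searching directly over centroid coordinates, with the within-cluster sum of squared errors (SSE) as fitness. The GA operators of the hybrid are omitted here, and the toy data, parameter values, and names are illustrative, not the authors' EEG pipeline.

```python
import random

def sse(flat, points, k, dim):
    """Within-cluster sum of squared errors for k centroids packed in `flat`."""
    cents = [flat[i * dim:(i + 1) * dim] for i in range(k)]
    total = 0.0
    for p in points:
        total += min(sum((pi - ci) ** 2 for pi, ci in zip(p, c)) for c in cents)
    return total

def pso_kmeans(points, k=2, n=20, iters=100, w=0.72, c1=1.49, c2=1.49, seed=5):
    """Global-best PSO over flattened centroid coordinates."""
    rng = random.Random(seed)
    dim = len(points[0])
    lo = [min(p[d] for p in points) for d in range(dim)]
    hi = [max(p[d] for p in points) for d in range(dim)]
    size = k * dim
    X = [[rng.uniform(lo[d % dim], hi[d % dim]) for d in range(size)]
         for _ in range(n)]
    V = [[0.0] * size for _ in range(n)]
    pbest = [list(x) for x in X]
    pval = [sse(x, points, k, dim) for x in X]
    gi = min(range(n), key=lambda i: pval[i])
    gbest, gval = list(pbest[gi]), pval[gi]
    for _ in range(iters):
        for i in range(n):
            for d in range(size):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d] + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                X[i][d] += V[i][d]
            val = sse(X[i], points, k, dim)
            if val < pval[i]:
                pval[i], pbest[i] = val, list(X[i])
                if val < gval:
                    gval, gbest = val, list(X[i])
    return [gbest[i * dim:(i + 1) * dim] for i in range(k)], gval
```

In the full hybrid, GA crossover and mutation would periodically recombine the particles' centroid vectors to escape poor local partitions.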
Laparoscopic pyelolithotomy: optimizing surgical technique.
Salvadó, José A; Guzmán, Sergio; Trucco, Cristian A; Parra, Claudio A
2009-04-01
The classic approach to renal stone disease includes shockwave lithotripsy, ureteroscopy or percutaneous nephrolithotripsy, and, in some cases, a combination of both. The usefulness of laparoscopy in this regard remains debated. In this report and video, we present our technique of laparoscopic pyelolithotomy assisted by flexible instrumentation to achieve maximal stone clearance in a selected group of patients.
A survey of compiler optimization techniques
NASA Technical Reports Server (NTRS)
Schneck, P. B.
1972-01-01
Major optimization techniques of compilers are described and grouped into three categories: machine dependent, architecture dependent, and architecture independent. Machine-dependent optimizations tend to be local and are performed upon short spans of generated code by using particular properties of an instruction set to reduce the time or space required by a program. Architecture-dependent optimizations are global and are performed while generating code. These optimizations consider the structure of a computer, but not its detailed instruction set. Architecture-independent optimizations are also global but are based on analysis of the program flow graph and the dependencies among statements of the source program. A conceptual review of a universal optimizer that performs architecture-independent optimizations at source-code level is also presented.
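Architecture-independent optimizations of the kind surveyed here can be applied to a source-level representation alone. As a minimal illustration (not from the paper, which long predates these tools), the sketch below performs constant folding on a Python syntax tree:

```python
import ast
import operator

# arithmetic operators we are willing to fold at compile time
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

class ConstantFolder(ast.NodeTransformer):
    """Replace binary operations on literal operands with a single constant."""
    def visit_BinOp(self, node):
        self.generic_visit(node)  # fold innermost subexpressions first
        if (isinstance(node.left, ast.Constant) and isinstance(node.right, ast.Constant)
                and type(node.op) in OPS):
            try:
                value = OPS[type(node.op)](node.left.value, node.right.value)
            except (TypeError, ZeroDivisionError):
                return node  # leave ill-typed or dividing-by-zero code untouched
            return ast.copy_location(ast.Constant(value=value), node)
        return node

def fold(source: str) -> str:
    tree = ConstantFolder().visit(ast.parse(source))
    return ast.unparse(ast.fix_missing_locations(tree))

print(fold("x = 2 * 3 + y"))
```

Because the transformer works on the program's flow-graph-level representation rather than on generated instructions, the same pass applies regardless of the target machine.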
NASA Astrophysics Data System (ADS)
Jain, Narender Kumar; Nangia, Uma; Jain, Aishwary
2016-06-01
In this paper, the multiobjective economic load dispatch (MELD) problem considering generation cost and transmission losses has been formulated using the priority goal programming (PGP) technique. In this formulation, the equality constraint has been handled by including a penalty parameter K; it has been observed that fixing its value at 1,000 keeps the equality constraint within limits. The non-inferior set for the IEEE 5, 14 and 30-bus systems has been generated by the Particle Swarm Optimization (PSO) technique. The best compromise solution has been chosen as the one which gives equal percentage saving for both objectives.
Particle Swarm Optimization for inverse modeling of solute transport in fractured gneiss aquifer.
Abdelaziz, Ramadan; Zambrano-Bigiarini, Mauricio
2014-08-01
Particle Swarm Optimization (PSO) has received considerable attention as a global optimization technique from scientists of different disciplines around the world. In this article, we illustrate how to use PSO for inverse modeling of a coupled flow and transport groundwater model (MODFLOW2005-MT3DMS) in a fractured gneiss aquifer. In particular, the hydroPSO R package is used as the optimization engine, because it has been specifically designed to calibrate environmental, hydrological and hydrogeological models. In addition, hydroPSO implements the latest Standard Particle Swarm Optimization algorithm (SPSO-2011), with an adaptive random topology and rotational invariance constituting the main advancements over previous PSO versions. A tracer test conducted in the experimental field at TU Bergakademie Freiberg (Germany) is used as a case study. A double-porosity approach is used to simulate the solute transport in the fractured gneiss aquifer. Tracer concentrations obtained with hydroPSO were in good agreement with the corresponding observations, as measured by a high value of the coefficient of determination and a low sum of squared residuals. Several graphical outputs automatically generated by hydroPSO provided useful insights to assess the quality of the calibration results. It was found that hydroPSO required a small number of model runs to reach the region of the global optimum, and it proved to be both an effective and efficient optimization technique for calibrating the movement of solute transport over time in a fractured aquifer. In addition, the parallel feature of hydroPSO allowed the total computation time used in the inverse modeling process to be reduced to an eighth of the time required without that feature. This work provides a first attempt to demonstrate the capability and versatility of hydroPSO as an optimizer of a coupled flow and transport model for contaminant migration.
Evolutional Ant Colony Method Using PSO
NASA Astrophysics Data System (ADS)
Morii, Nobuto; Aiyoshi, Eitarou
The ant colony method is one of the heuristic methods capable of solving the traveling salesman problem (TSP), in which a good tour is generated by the artificial ants' probabilistic behavior. However, the generated tour length depends on the parameters describing the ants' behavior, and the best parameters for the problem to be solved are unknown. In this technical note, an evolutional strategy is presented to find the best parameters of the ant colony by using Particle Swarm Optimization (PSO) in the parameter space. Numerical simulations on benchmarks demonstrate the effectiveness of the evolutional ant colony method.
Particle Swarm Optimization Method Based on Chaotic Local Search and Roulette Wheel Mechanism
NASA Astrophysics Data System (ADS)
Xia, Xiaohua
Combining the particle swarm optimization (PSO) technique with chaotic local search (CLS) and a roulette wheel mechanism (RWM), an efficient method for solving constrained nonlinear optimization problems is presented in this paper. PSO can be viewed as the global optimizer, while the CLS and RWM are employed for the local search. Thus, the possibility of finding the global minimum in problems with many local optima is increased. The search continues until a termination criterion is satisfied. Benefiting from the fast global convergence of PSO and the effective local search ability of the CLS and RWM, the proposed method can obtain globally optimal results quickly, as tested on six benchmark optimization problems. Its improved performance compared with the standard PSO and a genetic algorithm (GA) testifies to its validity.
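The hybrid can be sketched compactly: a standard PSO drives global exploration, and a chaotic sequence perturbs the global best for local search. The sketch below uses a logistic map for the chaotic step; the paper's specific map, roulette wheel mechanism, and coefficient values are not reproduced, so treat the constants as illustrative.

```python
import random

def sphere(x):
    """Benchmark objective: global minimum 0 at the origin."""
    return sum(v * v for v in x)

def pso_cls(f, dim=2, n=20, iters=200, lo=-5.0, hi=5.0, seed=1):
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    z = rng.random()  # chaotic variable for the logistic map
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                # illustrative inertia/acceleration coefficients
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
        # chaotic local search: small logistic-map perturbation of the global best
        z = 4.0 * z * (1.0 - z)
        cand = [min(hi, max(lo, gbest[d] + 0.1 * (2.0 * z - 1.0))) for d in range(dim)]
        fc = f(cand)
        if fc < gbest_f:
            gbest, gbest_f = cand, fc
    return gbest, gbest_f

best, val = pso_cls(sphere)
print(best, val)
```

On a smooth unimodal function the chaotic step mainly refines the solution; its value shows on multimodal problems, where the deterministic-but-irregular perturbations help the best particle escape shallow basins.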
Optimization techniques for integrating spatial data
Herzfeld, U.C.; Merriam, D.F.
1995-01-01
Two optimization techniques to predict a spatial variable from any number of related spatial variables are presented. The applicability of the two different methods for petroleum-resource assessment is tested in a mature oil province of the Midcontinent (USA). The information on petroleum productivity, usually not directly accessible, is related indirectly to geological, geophysical, petrographical, and other observable data. This paper presents two approaches based on construction of a multivariate spatial model from the available data to determine a relationship for prediction. In the first approach, the variables are combined into a spatial model by an algebraic map-comparison/integration technique. Optimal weights for the map comparison function are determined by the Nelder-Mead downhill simplex algorithm in multidimensions. Geologic knowledge is necessary to provide a first guess of weights to start the automated procedure, because the solution is not unique. In the second approach, active set optimization for linear prediction of the target under positivity constraints is applied. Here, the procedure seems to select one variable from each data type (structure, isopachous, and petrophysical), eliminating data redundancy. Automating the determination of optimum combinations of different variables by applying optimization techniques is a valuable extension of the algebraic map-comparison/integration approach to analyzing spatial data. Because of the capability of handling multivariate data sets and partial retention of geographical information, the approaches can be useful in mineral-resource exploration. © 1995 International Association for Mathematical Geology.
Software for the grouped optimal aggregation technique
NASA Technical Reports Server (NTRS)
Brown, P. M.; Shaw, G. W. (Principal Investigator)
1982-01-01
The grouped optimal aggregation technique produces minimum-variance, unbiased estimates of acreage and production for countries, zones (states), or any designated collection of acreage strata. It uses yield predictions, historical acreage information, and direct acreage estimates from satellite data. The acreage strata are grouped in such a way that the ratio model over historical acreage provides a smaller variance than if the model were applied to each individual stratum. An optimal weighting matrix, based on historical acreages, provides the link between incomplete direct acreage estimates and the total current acreage estimate.
PSO algorithm enhanced with Lozi Chaotic Map - Tuning experiment
Pluhacek, Michal; Senkerik, Roman; Zelinka, Ivan
2015-03-10
In this paper, the effect of tuning the control parameters of the Lozi chaotic map, employed as a chaotic pseudo-random number generator for the particle swarm optimization algorithm, is investigated. Three different benchmark functions are selected from the IEEE CEC 2013 competition benchmark set. The Lozi map is extensively tuned and the performance of PSO is evaluated.
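The Lozi map iterates x_{n+1} = 1 - a|x_n| + y_n, y_{n+1} = b·x_n; its control parameters a and b are what the paper tunes. A minimal sketch of using it as a pseudo-random generator for PSO coefficients follows (a = 1.7, b = 0.5 is the classic chaotic setting and the normalization range is an assumption, not the paper's tuned values):

```python
class LoziPRNG:
    """Chaotic pseudo-random number generator based on the Lozi map."""

    def __init__(self, a=1.7, b=0.5, x=0.1, y=0.1):
        self.a, self.b, self.x, self.y = a, b, x, y

    def next(self):
        # RHS is evaluated before assignment, so self.b * self.x uses the old x
        self.x, self.y = 1.0 - self.a * abs(self.x) + self.y, self.b * self.x
        return self.x

    def rand01(self, lo=-2.0, hi=2.0):
        """Squash a chaotic iterate into [0, 1], the range PSO velocity
        updates expect from their random coefficients."""
        v = min(hi, max(lo, self.next()))
        return (v - lo) / (hi - lo)

gen = LoziPRNG()
sample = [gen.rand01() for _ in range(5)]
print(sample)
```

Substituting such a generator for the uniform `random()` calls in a PSO velocity update is the mechanism being tuned: different (a, b) settings change the statistics of the sequence and hence the swarm's exploration behavior.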
Language abstractions for low level optimization techniques
NASA Astrophysics Data System (ADS)
Dévai, Gergely; Gera, Zoltán; Kelemen, Zoltán
2012-09-01
In the case of performance-critical applications, programmers are often forced to write code at a low abstraction level. This leads to programs that are hard to develop and maintain because the program text is cluttered with low-level optimization tricks and is far from the algorithm it implements. Even though compilers are smart nowadays and provide the user with many automatically applied optimizations, practice shows that in some cases it is hopeless to optimize the program automatically without the programmer's knowledge. A complementary approach is to allow the programmer to fine-tune the program while providing language features that make the optimization easier. These are language abstractions that make optimization techniques explicit without adding too much syntactic noise to the program text. This paper presents such language abstractions for two well-known optimizations: bitvectors and SIMD (Single Instruction Multiple Data). The language features are implemented in the embedded domain-specific language Feldspar, which is specifically tailored for digital signal processing applications. While we present these language elements as part of Feldspar, the ideas behind them are general enough to be applied in other language definition projects as well.
The contribution of particle swarm optimization to three-dimensional slope stability analysis.
Kalatehjari, Roohollah; Rashid, Ahmad Safuan A; Ali, Nazri; Hajihassani, Mohsen
2014-01-01
Over the last few years, particle swarm optimization (PSO) has been extensively applied to various geotechnical engineering problems, including slope stability analysis. However, this contribution was limited to two-dimensional (2D) slope stability analysis. This paper applied PSO to the three-dimensional (3D) slope stability problem to determine the critical slip surface (CSS) of soil slopes. A detailed description of the adopted PSO was presented to provide a good basis for further contributions of this technique to the field of 3D slope stability problems. A general rotating ellipsoid shape was introduced as the specific particle for 3D slope stability analysis. A detailed sensitivity analysis was designed and performed to find the optimum values of the PSO parameters. Example problems were used to evaluate the applicability of PSO in determining the CSS of 3D slopes. The first example presented a comparison between the results of PSO and PLAXI-3D finite element software, and the second example compared the ability of PSO to determine the CSS of 3D slopes with other optimization methods from the literature. The results demonstrated the efficiency and effectiveness of PSO in determining the CSS of 3D soil slopes. PMID:24991652
NASA Astrophysics Data System (ADS)
Shabbir, Faisal; Omenzetter, Piotr
2014-04-01
Much effort is devoted nowadays to deriving accurate finite element (FE) models to be used for structural health monitoring, damage detection and assessment. However, formation of a FE model representative of the original structure is a difficult task. Model updating is a branch of optimization which calibrates the FE model by comparing the modal properties of the actual structure with those of the FE predictions. As the number of experimental measurements is usually much smaller than the number of uncertain parameters, and, consequently, not all uncertain parameters are selected for model updating, different local minima may exist in the solution space. Experimental noise further exacerbates the problem. The attainment of a global solution in a multi-dimensional search space is a challenging problem. Global optimization algorithms (GOAs) have received interest over the past decade as a way to solve this problem, but no GOA can ensure the detection of the global minimum either. To counter this problem, a combination of a GOA with the sequential niche technique (SNT), which systematically searches the whole solution space, has been proposed in this research. A dynamically tested full-scale pedestrian bridge is taken as a case study. Two different GOAs, namely particle swarm optimization (PSO) and the genetic algorithm (GA), are investigated in combination with SNT. The results of these GOAs are compared in terms of their efficiency in detecting global minima. The systematic search makes it possible to find different solutions in the search space, thus increasing the confidence of finding the global minimum.
Automated optimization techniques for aircraft synthesis
NASA Technical Reports Server (NTRS)
Vanderplaats, G. N.
1976-01-01
Application of numerical optimization techniques to automated conceptual aircraft design is examined. These methods are shown to be a general and efficient way to obtain quantitative information for evaluating alternative new vehicle projects. Fully automated design is compared with traditional point design methods and time and resource requirements for automated design are given. The NASA Ames Research Center aircraft synthesis program (ACSYNT) is described with special attention to calculation of the weight of a vehicle to fly a specified mission. The ACSYNT procedures for automatically obtaining sensitivity of the design (aircraft weight, performance and cost) to various vehicle, mission, and material technology parameters are presented. Examples are used to demonstrate the efficient application of these techniques.
Machine Learning Techniques in Optimal Design
NASA Technical Reports Server (NTRS)
Cerbone, Giuseppe
1992-01-01
Many important applications can be formalized as constrained optimization tasks. For example, we are studying the engineering domain of two-dimensional (2-D) structural design. In this task, the goal is to design a structure of minimum weight that bears a set of loads. A solution to a design problem in which there is a single load (L) and two stationary support points (S1 and S2), consisting of four members, E1, E2, E3, and E4, that connect the load to the support points, is discussed. In principle, optimal solutions to problems of this kind can be found by numerical optimization techniques. However, in practice [Vanderplaats, 1984] these methods are slow and they can produce different local solutions whose quality (ratio to the global optimum) varies with the choice of starting points. Hence, their applicability to real-world problems is severely restricted. To overcome these limitations, we propose to augment numerical optimization by first performing a symbolic compilation stage to produce: (a) objective functions that are faster to evaluate and that depend less on the choice of the starting point and (b) selection rules that associate problem instances with a set of recommended solutions. These goals are accomplished by successive specializations of the problem class and of the associated objective functions. In the end, this process reduces the problem to a collection of independent functions that are fast to evaluate, that can be differentiated symbolically, and that represent smaller regions of the overall search space. However, the specialization process can produce a large number of sub-problems. This is overcome by inductively deriving selection rules which associate problems with small sets of specialized independent sub-problems. Each set of candidate solutions is chosen to minimize a cost function which expresses the tradeoff between the quality of the solution that can be obtained from the sub-problem and the time it takes to produce it. The overall solution
Cache Energy Optimization Techniques For Modern Processors
Mittal, Sparsh
2013-01-01
and veterans in the field of cache power management. It will help graduate students, CAD tool developers and designers in understanding the need of energy efficiency in modern computing systems. Further, it will be useful for researchers in gaining insights into algorithms and techniques for micro-architectural and system-level energy optimization using dynamic cache reconfiguration. We sincerely believe that the ``food for thought'' presented in this book will inspire the readers to develop even better ideas for designing ``green'' processors of tomorrow.
McDaniel, R D
1999-01-01
The Balanced Budget Act of 1997 established the new Medicare+Choice program which provides a variety of alternatives to traditional Medicare Part A and Part B, including the provider sponsored organization (PSO). Over the next several years, a significant number of organizations will consider becoming a PSO. The decision requires a thorough and detailed review of critical success factors. This article outlines those factors and defines some components of a successful PSO.
Performance of Multi-chaotic PSO on a shifted benchmark functions set
Pluhacek, Michal; Senkerik, Roman; Zelinka, Ivan
2015-03-10
In this paper, the performance of the Multi-chaotic PSO algorithm is investigated using two shifted benchmark functions. The purpose of shifted benchmark functions is to simulate time-variant real-world problems. The results of the chaotic PSO are compared with the canonical version of the algorithm. It is concluded that the multi-chaotic approach can lead to better results in the optimization of shifted functions.
Particle swarm optimization for the clustering of wireless sensors
NASA Astrophysics Data System (ADS)
Tillett, Jason C.; Rao, Raghuveer M.; Sahin, Ferat; Rao, T. M.
2003-07-01
Clustering is necessary for data aggregation, hierarchical routing, optimizing sleep patterns, election of extremal sensors, optimizing coverage and resource allocation, reuse of frequency bands and codes, and conserving energy. Optimal clustering is typically an NP-hard problem. Solutions to NP-hard problems involve searches through vast spaces of possible solutions. Evolutionary algorithms have been applied successfully to a variety of NP-hard problems. We explore one such approach, Particle Swarm Optimization (PSO), an evolutionary programming technique where a 'swarm' of test solutions, analogous to a natural swarm of bees, ants or termites, is allowed to interact and cooperate to find the best solution to the given problem. We use the PSO approach to cluster sensors in a sensor network. The energy efficiency of our clustering in a data-aggregation type sensor network deployment is tested using a modified LEACH-C code. The PSO technique with a recursive bisection algorithm is tested against random search and simulated annealing; the PSO technique is shown to be robust. We further investigate developing a distributed version of the PSO algorithm for clustering optimally a wireless sensor network.
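The basic idea can be sketched by letting each PSO particle encode candidate cluster-head coordinates and scoring it by the total sensor-to-head distance, a rough proxy for transmission energy. The sensor layout, swarm coefficients, and two-cluster setup below are illustrative; the paper's recursive bisection and LEACH-C evaluation are not reproduced.

```python
import math
import random

def clustering_cost(heads, sensors):
    """Total distance from each sensor to its nearest cluster head."""
    return sum(min(math.dist(s, h) for h in heads) for s in sensors)

def pso_cluster(sensors, k=2, n=15, iters=150, seed=3):
    rng = random.Random(seed)
    dim = 2 * k  # each particle encodes (x, y) for k candidate heads

    def decode(p):
        return [(p[2 * i], p[2 * i + 1]) for i in range(k)]

    pos = [[rng.uniform(0, 10) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pb = [p[:] for p in pos]
    pbf = [clustering_cost(decode(p), sensors) for p in pos]
    gi = min(range(n), key=lambda i: pbf[i])
    gb, gbf = pb[gi][:], pbf[gi]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (0.72 * vel[i][d]
                             + 1.49 * rng.random() * (pb[i][d] - pos[i][d])
                             + 1.49 * rng.random() * (gb[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            fi = clustering_cost(decode(pos[i]), sensors)
            if fi < pbf[i]:
                pb[i], pbf[i] = pos[i][:], fi
                if fi < gbf:
                    gb, gbf = pos[i][:], fi
    return decode(gb), gbf

# two well-separated groups of sensors; heads should land near each group
sensors = [(0, 0), (1, 0), (0, 1), (9, 9), (10, 9), (9, 10)]
heads, cost = pso_cluster(sensors)
print(heads, cost)
```

Because the fitness is evaluated on candidate head positions rather than on discrete assignments, the swarm searches a continuous space, sidestepping the combinatorial explosion that makes exact clustering NP-hard.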
An Integrated Method Based on PSO and EDA for the Max-Cut Problem.
Lin, Geng; Guan, Jian
2016-01-01
The max-cut problem is an NP-hard combinatorial optimization problem with many real-world applications. In this paper, we propose an integrated method based on particle swarm optimization and estimation of distribution algorithm (PSO-EDA) for solving the max-cut problem. The integrated algorithm overcomes the shortcomings of particle swarm optimization and estimation of distribution algorithm. To enhance the performance of the PSO-EDA, a fast local search procedure is applied. In addition, a path relinking procedure is developed to intensify the search. To evaluate the performance of PSO-EDA, extensive experiments were carried out on two sets of benchmark instances with 800 to 20,000 vertices from the literature. Computational results and comparisons show that PSO-EDA significantly outperforms the existing PSO-based and EDA-based algorithms for the max-cut problem. Compared with other best performing algorithms, PSO-EDA is able to find very competitive results in terms of solution quality. PMID:26989404
Optimizing correlation techniques for improved earthquake location
Schaff, D.P.; Bokelmann, G.H.R.; Ellsworth, W.L.; Zanzerkia, E.; Waldhauser, F.; Beroza, G.C.
2004-01-01
Earthquake location using relative arrival time measurements can lead to dramatically reduced location errors and a view of fault-zone processes with unprecedented detail. There are two principal reasons why this approach reduces location errors. The first is that the use of differenced arrival times to solve for the vector separation of earthquakes removes from the earthquake location problem much of the error due to unmodeled velocity structure. The second reason, on which we focus in this article, is that waveform cross correlation can substantially reduce measurement error. While cross correlation has long been used to determine relative arrival times with subsample precision, we extend correlation measurements to less similar waveforms, and we introduce a general quantitative means to assess when correlation data provide an improvement over catalog phase picks. We apply the technique to local earthquake data from the Calaveras Fault in northern California. Tests for an example streak of 243 earthquakes demonstrate that relative arrival times with normalized cross correlation coefficients as low as approximately 70%, interevent separation distances as large as 2 km, and magnitudes up to 3.5 as recorded on the Northern California Seismic Network are more precise than relative arrival times determined from catalog phase data. Also discussed are improvements made to the correlation technique itself. We find that for large time offsets, our implementation of time-domain cross correlation is often more robust and that it recovers more observations than the cross spectral approach. Longer time windows give better results than shorter ones. Finally, we explain how thresholds and empirical weighting functions may be derived to optimize the location procedure for any given region of interest, taking advantage of the respective strengths of diverse correlation and catalog phase data on different length scales.
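The measurement step can be illustrated in a few lines: slide one waveform against the other, take the lag with the highest normalized correlation coefficient, then refine it to subsample precision with a parabolic fit around the peak. The synthetic signals below stand in for real seismograms; none of the thresholds or weighting functions from the article are reproduced.

```python
import math

def xcorr_lag(a, b):
    """Best relative lag of b versus a by time-domain normalized cross
    correlation, refined to subsample precision by a parabolic fit."""
    n = len(a)

    def cc(lag):
        pairs = [(a[i], b[i + lag]) for i in range(n) if 0 <= i + lag < n]
        ax = [p[0] for p in pairs]
        bx = [p[1] for p in pairs]
        ma, mb = sum(ax) / len(ax), sum(bx) / len(bx)
        num = sum((x - ma) * (y - mb) for x, y in pairs)
        den = math.sqrt(sum((x - ma) ** 2 for x in ax)
                        * sum((y - mb) ** 2 for y in bx))
        return num / den if den else 0.0

    best = max(range(-n // 2, n // 2), key=cc)
    # parabolic interpolation through the three points around the peak
    ym, y0, yp = cc(best - 1), cc(best), cc(best + 1)
    curv = ym - 2.0 * y0 + yp
    shift = 0.5 * (ym - yp) / curv if curv else 0.0
    return best + shift, y0

# synthetic test: b is a copy of a delayed by exactly 3 samples
a = [math.exp(-((i - 100) / 15.0) ** 2) * math.sin(0.5 * i) for i in range(200)]
b = [0.0] * 3 + a[:-3]
lag, coeff = xcorr_lag(a, b)
print(lag, coeff)
```

The returned coefficient is exactly the quantity the article thresholds: relative picks are kept only when the peak correlation indicates the waveforms are similar enough for the lag to beat a catalog pick.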
A PSO-PID quaternion model based trajectory control of a hexarotor UAV
NASA Astrophysics Data System (ADS)
Artale, Valeria; Milazzo, Cristina L. R.; Orlando, Calogero; Ricciardello, Angela
2015-12-01
A quaternion-based trajectory controller for a prototype of an Unmanned Aerial Vehicle (UAV) is discussed in this paper. The dynamics of the UAV, specifically a hexarotor, is described in terms of quaternions instead of the usual Euler-angle parameterization. As far as UAV flight management is concerned, the method implemented here consists of two main steps: trajectory and attitude control via Proportional-Integral-Derivative (PID) and Proportional-Derivative (PD) techniques, respectively, and the application of the Particle Swarm Optimization (PSO) method to tune the PID and PD parameters. The optimization results from the minimization of an objective function related to the error with respect to a prescribed trajectory. Numerical simulations support and validate the proposed method.
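The tuning loop itself is generic: simulate the closed loop for a candidate gain set, score it with a time-weighted error integral, and let PSO search the gain space. As a simplified stand-in for the hexarotor (a first-order plant, an ITSE-style cost, and illustrative swarm coefficients and gain ranges, none taken from the paper), the loop can be sketched as:

```python
import math
import random

def itse(gains, dt=0.01, steps=600, tau=1.0):
    """Integral of time-weighted squared error for a PID-controlled
    first-order plant x' = (u - x)/tau tracking a unit step."""
    kp, ki, kd = gains
    x = integ = cost = 0.0
    prev_e = 1.0  # error at t=0, so the first derivative term is zero
    for k in range(steps):
        e = 1.0 - x
        integ += e * dt
        u = kp * e + ki * integ + kd * (e - prev_e) / dt
        prev_e = e
        x += dt * (u - x) / tau
        cost += (k * dt) * e * e * dt
    return cost if math.isfinite(cost) else 1e9  # penalize unstable gain sets

BOUNDS = [(0.0, 10.0), (0.0, 10.0), (0.0, 0.5)]  # hypothetical Kp, Ki, Kd ranges

def pso_tune(n=12, iters=60, seed=7):
    rng = random.Random(seed)
    pos = [[rng.uniform(*BOUNDS[d]) for d in range(3)] for _ in range(n)]
    vel = [[0.0] * 3 for _ in range(n)]
    pb = [p[:] for p in pos]
    pbf = [itse(p) for p in pos]
    gi = min(range(n), key=lambda i: pbf[i])
    gb, gbf = pb[gi][:], pbf[gi]
    for _ in range(iters):
        for i in range(n):
            for d in range(3):
                vel[i][d] = (0.72 * vel[i][d]
                             + 1.49 * rng.random() * (pb[i][d] - pos[i][d])
                             + 1.49 * rng.random() * (gb[d] - pos[i][d]))
                lo, hi = BOUNDS[d]
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            fi = itse(pos[i])
            if fi < pbf[i]:
                pb[i], pbf[i] = pos[i][:], fi
                if fi < gbf:
                    gb, gbf = pos[i][:], fi
    return gb, gbf

gains, cost = pso_tune()
print(gains, cost)
```

Replacing `itse` with a full quaternion hexarotor simulation changes only the fitness evaluation; the swarm logic is untouched, which is what makes PSO attractive for controller tuning.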
Introducing the fractional order robotic Darwinian PSO
NASA Astrophysics Data System (ADS)
Couceiro, Micael S.; Martins, Fernando M. L.; Rocha, Rui P.; Ferreira, Nuno M. F.
2012-11-01
The Darwinian Particle Swarm Optimization (DPSO) is an evolutionary algorithm that extends Particle Swarm Optimization using natural selection to enhance the ability to escape from sub-optimal solutions. An extension of the DPSO to multi-robot applications has recently been proposed and denoted as Robotic Darwinian PSO (RDPSO), benefiting from the dynamical partitioning of the whole population of robots, hence decreasing the amount of required information exchange among robots. This paper further extends the previously proposed algorithm using fractional calculus concepts to control the convergence rate, while considering the robot dynamical characteristics. Moreover, to improve the convergence analysis of the RDPSO, an adjustment of the fractional coefficient based on mobile robot constraints is presented and experimentally assessed with 2 real platforms. Afterwards, this novel fractional-order RDPSO is evaluated on 12 physical robots and further explored using a larger population of 100 simulated mobile robots within a larger scenario. Experimental results show that changing the fractional coefficient does not significantly improve the final solution but has a significant influence on the convergence time because of its inherent memory property.
Acoustic emission location on aluminum alloy structure by using FBG sensors and PSO method
NASA Astrophysics Data System (ADS)
Lu, Shizeng; Jiang, Mingshun; Sui, Qingmei; Dong, Huijun; Sai, Yaozhang; Jia, Lei
2016-04-01
Acoustic emission location is important for finding structural cracks and ensuring structural safety. In this paper, an acoustic emission location method using fiber Bragg grating (FBG) sensors and a particle swarm optimization (PSO) algorithm was investigated. Four FBG sensors were used to form a sensing network to detect the acoustic emission signals. According to the signals, the quadrilateral-array location equations were established. By analyzing the acoustic emission signal propagation characteristics, the solution of the location equations was converted to an optimization problem. Thus, acoustic emission location can be achieved by using an improved PSO algorithm, realized through the information fusion of multiple standard PSO variants, to solve the optimization problem. Finally, an acoustic emission location system was established and verified on an aluminum alloy plate. The experimental results showed that the average location error was 0.010 m. This paper provides a reliable method for acoustic emission location in aluminum alloy structures.
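The conversion from location equations to an optimization problem can be sketched as follows: the unknown source position is scored by how well it reproduces the observed arrival-time differences across the four-sensor array, and a basic PSO minimizes that residual. Sensor coordinates, plate size, and wave speed below are hypothetical, and the paper's improved multi-swarm fusion is not reproduced.

```python
import math
import random

SENSORS = [(0.0, 0.0), (0.5, 0.0), (0.5, 0.5), (0.0, 0.5)]  # assumed FBG positions (m)
V = 5000.0  # assumed wave speed in the aluminium plate (m/s)

def tdoa_residual(p, dt_obs):
    """Sum of squared differences between observed and predicted
    arrival-time differences (sensor i minus sensor 0)."""
    t = [math.dist(p, s) / V for s in SENSORS]
    return sum((dt_obs[i] - (t[i] - t[0])) ** 2 for i in range(1, 4))

def locate(dt_obs, n=20, iters=120, seed=5):
    rng = random.Random(seed)
    pos = [[rng.uniform(0, 0.5), rng.uniform(0, 0.5)] for _ in range(n)]
    vel = [[0.0, 0.0] for _ in range(n)]
    pb = [p[:] for p in pos]
    pbf = [tdoa_residual(p, dt_obs) for p in pos]
    gi = min(range(n), key=lambda i: pbf[i])
    gb, gbf = pb[gi][:], pbf[gi]
    for _ in range(iters):
        for i in range(n):
            for d in range(2):
                vel[i][d] = (0.72 * vel[i][d]
                             + 1.49 * rng.random() * (pb[i][d] - pos[i][d])
                             + 1.49 * rng.random() * (gb[d] - pos[i][d]))
                pos[i][d] = min(0.5, max(0.0, pos[i][d] + vel[i][d]))
            fi = tdoa_residual(pos[i], dt_obs)
            if fi < pbf[i]:
                pb[i], pbf[i] = pos[i][:], fi
                if fi < gbf:
                    gb, gbf = pos[i][:], fi
    return gb

# synthetic event at (0.2, 0.3): generate noise-free arrival-time differences
src = (0.2, 0.3)
times = [math.dist(src, s) / V for s in SENSORS]
dt_obs = [times[i] - times[0] for i in range(4)]
est = locate(dt_obs)
print(est)
```

With noise-free differences the residual has a unique zero at the true source, so the swarm collapses onto it; with real measurements the same residual is simply minimized rather than zeroed.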
A mesh gradient technique for numerical optimization
NASA Technical Reports Server (NTRS)
Willis, E. A., Jr.
1973-01-01
A class of successive-improvement optimization methods in which directions of descent are defined in the state space along each trial trajectory are considered. The given problem is first decomposed into two discrete levels by imposing mesh points. Level 1 consists of running optimal subarcs between each successive pair of mesh points. For normal systems, these optimal two-point boundary value problems can be solved by following a routine prescription if the mesh spacing is sufficiently close. A spacing criterion is given. Under appropriate conditions, the criterion value depends only on the coordinates of the mesh points, and its gradient with respect to those coordinates may be defined by interpreting the adjoint variables as partial derivatives of the criterion value function. In level 2, the gradient data is used to generate improvement steps or search directions in the state space which satisfy the boundary values and constraints of the given problem.
Optimization Techniques for College Financial Aid Managers
ERIC Educational Resources Information Center
Bosshardt, Donald I.; Lichtenstein, Larry; Palumbo, George; Zaporowski, Mark P.
2010-01-01
In the context of a theoretical model of expected profit maximization, this paper shows how historic institutional data can be used to assist enrollment managers in determining the level of financial aid for students with varying demographic and quality characteristics. Optimal tuition pricing in conjunction with empirical estimation of…
Hepatic MR imaging techniques, optimization, and artifacts.
Guglielmo, Flavius F; Mitchell, Donald G; Roth, Christopher G; Deshmukh, Sandeep
2014-08-01
This article describes a basic 1.5-T hepatic magnetic resonance (MR) imaging protocol, strategies for optimizing pulse sequences while managing artifacts, the proper timing of postgadolinium 3-dimensional gradient echo sequences, and an effective order of performing pulse sequences with the goal of creating an efficient and high-quality hepatic MR imaging examination. The authors have implemented this general approach on General Electric, Philips, and Siemens clinical scanners.
Neural network training with global optimization techniques.
Yamazaki, Akio; Ludermir, Teresa B
2003-04-01
This paper presents an approach that uses Simulated Annealing and Tabu Search for the simultaneous optimization of neural network architectures and weights. The problem considered is odor recognition in an artificial nose. Both methods have produced networks with high classification performance and low complexity. Generalization has been improved by using the backpropagation algorithm for fine-tuning. The combination of simple and traditional search methods has been shown to be very suitable for generating compact and efficient networks.
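The simulated-annealing half of the approach can be sketched on a toy weight-fitting objective. The cost function, step size, and cooling schedule below are assumptions for illustration, not the paper's settings; a real run would minimize classification error on the artificial-nose data.

```python
import math
import random

random.seed(3)

# Toy stand-in for the network-training objective: the "weights" should
# approach an assumed ideal vector.
IDEAL = [0.5, -1.0, 2.0]

def cost(w):
    return sum((a - b) ** 2 for a, b in zip(w, IDEAL))

def simulated_annealing(dim=3, t0=1.0, cooling=0.995, iters=2000):
    w = [random.uniform(-3, 3) for _ in range(dim)]
    c, t = cost(w), t0
    for _ in range(iters):
        cand = [x + random.gauss(0, 0.1) for x in w]
        cc = cost(cand)
        # Accept worse moves with a temperature-dependent probability,
        # which lets the search escape local minima early on.
        if cc < c or random.random() < math.exp(-(cc - c) / t):
            w, c = cand, cc
        t *= cooling  # geometric cooling schedule
    return w

w = simulated_annealing()
print(w)  # close to [0.5, -1.0, 2.0]
```

As the temperature decays, the accept-worse probability vanishes and the search degenerates into a greedy local descent, which is what produces the final refinement.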
The analytical representation of viscoelastic material properties using optimization techniques
NASA Astrophysics Data System (ADS)
Hill, S. A.
1993-02-01
This report presents a technique to model viscoelastic material properties with a function of the form of the Prony series. Generally, the method employed to determine the function constants requires assuming values for the exponential constants of the function and then resolving the remaining constants through linear least-squares techniques. The technique presented here allows all the constants to be determined analytically through optimization techniques. This technique is employed in a computer program named PRONY and makes use of a commercially available optimization tool developed by VMA Engineering, Inc. The PRONY program was used to compare the technique against previously determined models for solid rocket motor TP-H1148 propellant and V747-75 Viton fluoroelastomer. In both cases, the optimization technique generated functions that modeled the test data with at least an order of magnitude better correlation. The technique has demonstrated the capability to use small or large data sets and to use data sets that have uniformly or nonuniformly spaced data pairs. The reduction of experimental data to accurate mathematical models is a vital part of most scientific and engineering research. This technique of regression through optimization can be applied to other mathematical models that are difficult to fit to experimental data through traditional regression techniques.
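The core idea, determining the exponential constants by optimization rather than assuming them, can be illustrated with a two-term Prony series: an outer search over trial exponential constants wraps the classical linear least-squares step. The grid of trial constants and the synthetic relaxation data are assumptions for illustration, not the PRONY program's method.

```python
import math

def prony(t, c):
    """Two-term Prony series: E(t) = E_inf + E1*exp(-t/tau1) + E2*exp(-t/tau2)."""
    e_inf, e1, tau1, e2, tau2 = c
    return e_inf + e1 * math.exp(-t / tau1) + e2 * math.exp(-t / tau2)

def linear_lsq(ts, ys, tau1, tau2):
    """With the exponents fixed, the remaining constants are linear:
    solve the 3x3 normal equations by Gauss-Jordan elimination."""
    rows = [[1.0, math.exp(-t / tau1), math.exp(-t / tau2)] for t in ts]
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    aty = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(3)]
    for i in range(3):  # naive elimination, fine for a well-posed 3x3 system
        p = ata[i][i]
        for j in range(3):
            if j != i:
                f = ata[j][i] / p
                for k in range(3):
                    ata[j][k] -= f * ata[i][k]
                aty[j] -= f * aty[i]
    return [aty[i] / ata[i][i] for i in range(3)]

def fit(ts, ys, taus):
    """Outer search over exponential constants; the classical linear
    least-squares step runs inside each trial."""
    best = None
    for tau1 in taus:
        for tau2 in taus:
            if tau2 <= tau1:
                continue
            e_inf, e1, e2 = linear_lsq(ts, ys, tau1, tau2)
            c = (e_inf, e1, tau1, e2, tau2)
            err = sum((prony(t, c) - y) ** 2 for t, y in zip(ts, ys))
            if best is None or err < best[0]:
                best = (err, c)
    return best[1]

# Synthetic relaxation data with known constants, then recover them.
true = (1.0, 2.0, 0.5, 3.0, 5.0)
ts = [0.05 * k for k in range(1, 200)]
ys = [prony(t, true) for t in ts]
c = fit(ts, ys, [0.25, 0.5, 1.0, 2.0, 5.0, 10.0])
print(c)  # tau1 = 0.5 and tau2 = 5.0 are selected; coefficients recovered
```

A production fit would search the exponents continuously (as PRONY's optimizer does) rather than on a grid, but the structure, an outer nonlinear search around an inner linear solve, is the same.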
An RBF-PSO based approach for modeling prostate cancer
NASA Astrophysics Data System (ADS)
Perracchione, Emma; Stura, Ilaria
2016-06-01
Prostate cancer is one of the most common cancers in men; it grows slowly and can be diagnosed at an early stage by dosing the Prostate Specific Antigen (PSA). However, a relapse after the primary therapy arises in 25-30% of cases, and different growth characteristics of the new tumor are observed. In order to gain a better understanding of the phenomenon, a two-parameter growth model is considered. To estimate the parameter values that identify the disease risk level, a novel approach based on combining Particle Swarm Optimization (PSO) with meshfree interpolation methods is proposed.
Surface optimization technique for MammoSite breast brachytherapy applicator
Kirk, Michael C. (e-mail: Michael_C_Kirk@rush.edu); Hsi, W. C.; Dickler, Adam; Chu, James; Dowlatshahi, Kambiz; Francescatti, Darius; Nguyen, Cam
2005-06-01
Purpose: We present a technique to optimize the dwell times and positions of a high-dose-rate ¹⁹²Ir source using the MammoSite breast brachytherapy applicator. The surface optimization method used multiple dwell positions and optimization points to conform the 100% isodose line to the surface of the planning target volume (PTV). Methods and materials: The study population consisted of 20 patients treated using the MammoSite device between October 2002 and February 2004. Treatment was delivered in 10 fractions of 3.4 Gy/fraction, twice daily, with a minimum of 6 h between fractions. The treatment of each patient was planned using three optimization techniques, and the dosimetric characteristics of the single-point, six-point, and surface optimization techniques were compared. Results: The surface optimization technique increased the PTV coverage compared with the single-point and six-point methods (mean percentage of the PTV receiving 100% of the prescription dose: 94%, 85%, and 91%, respectively). The surface, single-point, and six-point methods had mean dose homogeneity indexes of 0.62, 0.68, and 0.63 and mean full width at half maximum (FWHM) values of 189, 190, and 192 cGy/fraction, respectively. Conclusion: The surface technique provided greater coverage of the PTV than did the single-point and six-point methods. By the FWHM measure, the surface, single-point, and six-point techniques resulted in equivalent dose homogeneity.
A Hybrid Swarm Algorithm for optimizing glaucoma diagnosis.
Raja, Chandrasekaran; Gangatharan, Narayanan
2015-08-01
Glaucoma is among the most common causes of permanent blindness in humans. Because the initial symptoms are not evident, mass screening would assist early diagnosis in the vast population, and such mass screening requires an automated diagnosis technique. Our proposed automation consists of pre-processing, optimal wavelet transformation, feature extraction, and classification modules. Hyper-analytic wavelet transformation (HWT) based statistical features are extracted from fundus images; because HWT preserves phase information, it is appropriate for feature extraction. The features are then classified by a Support Vector Machine (SVM) with a radial basis function (RBF) kernel. The filter coefficients of the wavelet transformation process and the width parameter of the SVM RBF kernel are simultaneously tailored to best fit the diagnosis by a hybrid Particle Swarm algorithm. To overcome premature convergence, the random searching (ranging) and area-scanning behavior (around the optima) of a Group Search Optimizer (GSO) are embedded within the Particle Swarm Optimization (PSO) framework. We also embed a novel potential-area scanning as a preventive mechanism against premature convergence, rather than as a diagnosis and cure. This embedding does not compromise the generality and utility of PSO. In two 10-fold cross-validated test runs, the diagnostic accuracy of the proposed hybrid PSO exceeded that of conventional PSO. Furthermore, the hybrid PSO maintained the ability to explore even at later iterations, ensuring maturity in fitness. PMID:26093787
Optimization of detector positioning in the radioactive particle tracking technique.
Dubé, Olivier; Dubé, David; Chaouki, Jamal; Bertrand, François
2014-07-01
The radioactive particle tracking (RPT) technique is a non-intrusive experimental velocimetry and tomography technique extensively applied to the study of hydrodynamics in a great variety of systems. In this technique, arrays of scintillation detectors are used to track the motion of a single radioactive tracer particle emitting isotropic γ-rays. This work describes and applies an optimization strategy developed to find an optimal set of positions for the scintillation detectors used in the RPT technique. The strategy employs the overall resolution of the detectors as the objective function and a mesh adaptive direct search (MADS) algorithm to solve the optimization problem; more precisely, NOMAD, a C++ implementation of the MADS algorithm, is used. First, the optimization strategy is validated using simple cases with known optimal detector configurations. Next, it is applied to a three-dimensional axisymmetric system (i.e., a vertical cylinder, which could represent a fluidized bed, bubble column, riser, or the like). The results obtained using the optimization strategy agree with what was previously recommended by Roy et al. (2002) for a similar system. Finally, the optimization strategy is used for a system consisting of a partially filled cylindrical tumbler. Applying the insights gained from the optimization strategy is shown to lead to a significant reduction in the error made when reconstructing the position of a tracer particle. The results of this work show that the optimization strategy is sensitive to both the type of objective function used and the experimental conditions. The limitations and drawbacks of the optimization strategy are also discussed.
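The poll-and-refine idea behind MADS can be illustrated with a minimal pattern-search loop (the real NOMAD solver is far more sophisticated, with randomized poll directions and a formal mesh). The quadratic objective below is a stand-in for the detectors' overall-resolution criterion, which in practice comes from the γ-ray counting model.

```python
def overall_resolution(x):
    """Stand-in objective for the detectors' overall resolution (minimized);
    the real criterion depends on the gamma-ray count model."""
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2

def pattern_search(f, x, step=1.0, tol=1e-6):
    """Poll mesh points around the incumbent along each coordinate; refine
    the mesh (halve the step) only when no poll point improves."""
    fx = f(x)
    while step > tol:
        improved = False
        for d in range(len(x)):
            for s in (step, -step):
                y = x[:]
                y[d] += s
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step *= 0.5  # mesh refinement
    return x

best = pattern_search(overall_resolution, [0.0, 0.0])
print(best)  # converges to [1.0, -2.0]
```

Because no derivatives of the objective are needed, this family of methods suits criteria like detector resolution that are only available as black-box evaluations.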
Phase Response Design of Recursive All-Pass Digital Filters Using a Modified PSO Algorithm
Chang, Wei-Der
2015-01-01
This paper develops a new design scheme for the phase response of an all-pass recursive digital filter. A variant of the particle swarm optimization (PSO) algorithm, here called the modified PSO (MPSO) algorithm, is utilized for solving this kind of filter design problem; an additional adjusting factor is introduced into the velocity-updating formula of the algorithm in order to improve its searching ability. In the proposed method, all of the designed filter coefficients are first collected into a parameter vector, and this vector is regarded as a particle of the algorithm. The MPSO, with its modified velocity formula, forces all particles to move toward the optimal or near-optimal solution by minimizing a defined objective function of the optimization problem. To show the effectiveness of the proposed method, two different kinds of linear phase response design examples are illustrated, and the general PSO algorithm is compared as well. The obtained results show that the MPSO is superior to the general PSO for the phase response design of digital recursive all-pass filters. PMID:26366168
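The abstract does not give the exact form of the extra adjusting factor, so the sketch below assumes one plausible choice: a third attraction term in the velocity formula, pulling toward the swarm's mean personal best. A toy quadratic stands in for the phase-error objective, with the particle playing the role of the filter-coefficient vector.

```python
import random

random.seed(1)

# Toy stand-in for the phase-error criterion: the particle (the vector of
# filter coefficients) should approach an assumed target vector.
TARGET = [0.3, -0.5, 0.8]

def phase_error(c):
    return sum((a - b) ** 2 for a, b in zip(c, TARGET))

def mpso(dim=3, n=20, iters=300, w=0.6, c1=1.4, c2=1.4, c3=0.5):
    pos = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pf = [phase_error(p) for p in pos]
    gi = min(range(n), key=lambda i: pf[i])
    gbest, gf = pbest[gi][:], pf[gi]
    for _ in range(iters):
        mean_best = [sum(p[d] for p in pbest) / n for d in range(dim)]
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d])
                             # assumed extra adjusting term: a third attraction
                             # toward the swarm's mean personal best
                             + c3 * random.random() * (mean_best[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = phase_error(pos[i])
            if f < pf[i]:
                pf[i], pbest[i] = f, pos[i][:]
                if f < gf:
                    gbest, gf = pos[i][:], f
    return gbest

coeffs = mpso()
print(coeffs)  # close to [0.3, -0.5, 0.8]
```

Whatever the paper's actual factor is, the pattern is the same: the velocity formula gains one more tunable term, and everything else in the PSO loop is unchanged.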
Particle swarm optimization with recombination and dynamic linkage discovery.
Chen, Ying-Ping; Peng, Wen-Chih; Jian, Ming-Chung
2007-12-01
In this paper, we try to improve the performance of the particle swarm optimizer by incorporating the linkage concept, which is an essential mechanism in genetic algorithms, and design a new linkage identification technique called dynamic linkage discovery to address the linkage problem in real-parameter optimization problems. Dynamic linkage discovery is a costless and effective linkage recognition technique that adapts the linkage configuration by employing only the selection operator without extra judging criteria irrelevant to the objective function. Moreover, a recombination operator that utilizes the discovered linkage configuration to promote the cooperation of particle swarm optimizer and dynamic linkage discovery is accordingly developed. By integrating the particle swarm optimizer, dynamic linkage discovery, and recombination operator, we propose a new hybridization of optimization methodologies called particle swarm optimization with recombination and dynamic linkage discovery (PSO-RDL). In order to study the capability of PSO-RDL, numerical experiments were conducted on a set of benchmark functions as well as on an important real-world application. The benchmark functions used in this paper were proposed in the 2005 Institute of Electrical and Electronics Engineers Congress on Evolutionary Computation. The experimental results on the benchmark functions indicate that PSO-RDL can provide a level of performance comparable to that given by other advanced optimization techniques. In addition to the benchmark, PSO-RDL was also used to solve the economic dispatch (ED) problem for power systems, which is a real-world problem and highly constrained. The results indicate that PSO-RDL can successfully solve the ED problem for the three-unit power system and obtain the currently known best solution for the 40-unit system. PMID:18179066
Energy-Aware Multipath Routing Scheme Based on Particle Swarm Optimization in Mobile Ad Hoc Networks
Robinson, Y. Harold; Rajaram, M.
2015-01-01
A mobile ad hoc network (MANET) is a collection of autonomous mobile nodes forming an ad hoc network without fixed infrastructure. The dynamic-topology property of a MANET may degrade the performance of the network, and multipath selection is a challenging task for improving the network lifetime. We propose an energy-aware multipath routing scheme based on particle swarm optimization (EMPSO) that uses a continuous-time recurrent neural network (CTRNN) to solve optimization problems. The CTRNN finds the optimal loop-free paths that solve the link-disjoint path problem in a MANET; it is used as an optimum path selection technique that produces a set of optimal paths between source and destination. In the CTRNN, the particle swarm optimization (PSO) method is primarily used for training the RNN. The proposed scheme uses reliability measures such as the transmission cost, energy factor, and optimal traffic ratio between source and destination to increase routing performance. In this scheme, optimal loop-free paths can be found using PSO to seek better link-quality nodes in the route discovery phase; PSO optimizes a problem by iteratively trying to improve a candidate solution with regard to a measure of quality. The proposed scheme discovers multiple loop-free paths by using the PSO technique. PMID:26819966
Process sequence optimization for digital microfluidic integration using EWOD technique
NASA Astrophysics Data System (ADS)
Yadav, Supriya; Joyce, Robin; Sharma, Akash Kumar; Sharma, Himani; Sharma, Niti Nipun; Varghese, Soney; Akhtar, Jamil
2016-04-01
Micro/nano-fluidic MEMS biosensors are devices that detect biomolecules. Emerging micro/nano-fluidic devices provide high throughput and high repeatability with very low response time and reduced device cost compared to traditional devices. This article presents the experimental details for process-sequence optimization of digital microfluidics (DMF) using "electrowetting-on-dielectric" (EWOD). Stress-free thick-film deposition of silicon dioxide using PECVD and the subsequent processes for the EWOD technique have been optimized in this work.
Application of GA, PSO, and ACO algorithms to path planning of autonomous underwater vehicles
NASA Astrophysics Data System (ADS)
Aghababa, Mohammad Pourmahmood; Amrollahi, Mohammad Hossein; Borjkhani, Mehdi
2012-09-01
In this paper, an underwater vehicle was modeled with six-dimensional nonlinear equations of motion, controlled by DC motors in all degrees of freedom. Near-optimal trajectories in an energetic environment for underwater vehicles were computed using a numerical solution of a nonlinear optimal control problem (NOCP). An energy performance index, which should be minimized, was defined as the cost function. The resulting problem was a two-point boundary value problem (TPBVP). Genetic algorithm (GA), particle swarm optimization (PSO), and ant colony optimization (ACO) algorithms were applied to solve the resulting TPBVP. By applying an Euler-Lagrange equation to the NOCP, a conjugate gradient penalty method was also adopted to solve the TPBVP. The problem of energetic environments, involving some energy sources, was discussed, and some near-optimal paths were found using the GA, PSO, and ACO algorithms. Finally, the problem of collision avoidance in an energetic environment was also taken into account.
Application of optimization techniques to vehicle design: A review
NASA Technical Reports Server (NTRS)
Prasad, B.; Magee, C. L.
1984-01-01
The work done in the last decade or so on the application of optimization techniques to vehicle design is discussed. Much of the work reviewed deals with the design of body or suspension (chassis) components for reduced weight. Also reviewed are studies dealing with system optimization problems for improved functional performance, such as ride or handling. In reviewing the work on the use of optimization techniques, one notes the transition from rare mention of the methods in the 1970s to an increased effort in the early 1980s. Efficient and convenient optimization and analysis tools still need to be developed so that they can be regularly applied in the early design stage of the vehicle development cycle, where they are most effective. Based on the reported applications, an attempt is made to assess the potential for automotive application of optimization techniques. The major issue remains the creation of quantifiable means of analysis for use in vehicle design. The conventional process of vehicle design still contains much experience-based input because it has not yet proven possible to quantify all important constraints. This limitation on the analysis side will continue to be a major restraint on the application of optimization to vehicle design.
Enhanced Multiobjective Optimization Technique for Comprehensive Aerospace Design. Part A
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi; Rajadas, John N.
1997-01-01
A multidisciplinary design optimization procedure which couples formal multiobjective-based techniques with complex analysis procedures (such as computational fluid dynamics (CFD) codes) has been developed. The procedure has been demonstrated on a specific high-speed flow application involving aerodynamics and acoustics (sonic boom minimization). In order to account for multiple design objectives arising from complex performance requirements, multiobjective formulation techniques are used to formulate the optimization problem. Techniques to enhance the existing Kreisselmeier-Steinhauser (K-S) function multiobjective formulation approach have been developed. The K-S function procedure used in the proposed work transforms a constrained multiple-objective-function problem into an unconstrained problem, which is then solved using the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm. Weight factors are applied to each objective function during the transformation process. This enhanced procedure provides the designer the capability to emphasize specific design objectives during the optimization process. The demonstration of the procedure utilizes a computational fluid dynamics (CFD) code which solves the three-dimensional parabolized Navier-Stokes (PNS) equations for the flow field, along with an appropriate sonic boom evaluation procedure, thus introducing both aerodynamic performance and sonic boom as the design objectives to be optimized simultaneously. Sensitivity analysis is performed using a discrete differentiation approach. An approximation technique has been used within the optimizer to improve the overall computational efficiency of the procedure in order to make it suitable for design applications in an industrial setting.
Stochastic optimization techniques for NDE of bridges using vibration signatures
NASA Astrophysics Data System (ADS)
Yi, Jin-Hak; Feng, Maria Q.
2003-08-01
Baseline model updating is the first step in model-based non-destructive evaluation of civil infrastructure, and much research has been devoted to obtaining more reliable baseline models. In this study, heuristic optimization techniques (also called stochastic optimization techniques), including the genetic algorithm, simulated annealing, and tabu search, were investigated for constructing a reliable baseline model of an instrumented new highway bridge, and the results were compared with those of the conventional sensitivity method. The preliminary finite element model of the bridge was successfully updated to a baseline model based on measured vibration data.
Discrete particle swarm optimization for identifying community structures in signed social networks.
Cai, Qing; Gong, Maoguo; Shen, Bo; Ma, Lijia; Jiao, Licheng
2014-10-01
The modern science of networks has greatly facilitated the understanding of complex systems. Community structure is believed to be one of the notable features of complex networks representing real complicated systems. Very often, uncovering community structures in networks can be regarded as an optimization problem; thus, many evolutionary-algorithm-based approaches have been put forward. Particle swarm optimization (PSO) is an artificial intelligence algorithm inspired by social behavior such as birds flocking and fish schooling, and it has proved to be an effective optimization technique. However, PSO was originally designed for continuous optimization, which confounds its application to discrete contexts. In this paper, a novel discrete PSO algorithm is suggested for identifying community structures in signed networks. In the suggested method, the particles' status has been redesigned in discrete form so as to make PSO suitable for discrete scenarios, and the particles' updating rules have been reformulated by making use of the topology of the signed network. Extensive experiments, compared with three state-of-the-art approaches on both synthetic and real-world signed networks, demonstrate that the proposed method is effective and promising. PMID:24856248
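The discrete redesign described above can be sketched on a toy signed network: a particle becomes a vector of community labels, and the real-valued velocity update becomes probabilistic label copying from the personal and global bests. The network, copying probabilities, and fitness function below are illustrative assumptions, not the paper's definitions.

```python
import random

random.seed(2)

# Toy signed network: edges (u, v, sign). Two positive triangles {0,1,2} and
# {3,4,5} joined by negative edges, so the natural split is two communities.
EDGES = [(0, 1, 1), (1, 2, 1), (0, 2, 1), (3, 4, 1), (4, 5, 1), (3, 5, 1),
         (2, 3, -1), (0, 4, -1)]
N = 6

def fitness(labels):
    """Reward positive edges inside communities and negative edges between."""
    s = 0
    for u, v, sign in EDGES:
        same = labels[u] == labels[v]
        s += 1 if (sign > 0) == same else -1
    return s

def discrete_pso(n=30, iters=300):
    pos = [[random.randrange(2) for _ in range(N)] for _ in range(n)]
    pbest = [p[:] for p in pos]
    pf = [fitness(p) for p in pos]
    gi = max(range(n), key=lambda i: pf[i])
    gbest, gf = pbest[gi][:], pf[gi]
    for _ in range(iters):
        for i in range(n):
            for d in range(N):
                r = random.random()
                # Discrete "velocity": instead of adding a real-valued vector,
                # probabilistically copy a label from the personal or global
                # best, with a small mutation to keep diversity.
                if r < 0.4:
                    pos[i][d] = pbest[i][d]
                elif r < 0.8:
                    pos[i][d] = gbest[d]
                elif r < 0.9:
                    pos[i][d] = random.randrange(2)
            f = fitness(pos[i])
            if f > pf[i]:
                pf[i], pbest[i] = f, pos[i][:]
                if f > gf:
                    gbest, gf = pos[i][:], f
    return gbest, gf

labels, score = discrete_pso()
print(labels, score)  # {0,1,2} and {3,4,5} end up in different communities
```

On this eight-edge network the maximum fitness is 8, attained exactly when the two triangles receive different labels, so the returned score doubles as a correctness check.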
Development of Multiobjective Optimization Techniques for Sonic Boom Minimization
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi; Rajadas, John Narayan; Pagaldipti, Naryanan S.
1996-01-01
A discrete, semi-analytical sensitivity analysis procedure has been developed for calculating aerodynamic design sensitivities. The sensitivities of the flow variables and the grid coordinates are numerically calculated using direct differentiation of the respective discretized governing equations. The sensitivity analysis techniques are adapted within a parabolized Navier-Stokes equations solver. Aerodynamic design sensitivities for high-speed wing-body configurations are calculated using the semi-analytical sensitivity analysis procedures; representative results compare well with those obtained using the finite difference approach and establish the computational efficiency and accuracy of the semi-analytical procedures. Multidisciplinary design optimization procedures have been developed for aerospace applications, namely gas turbine blades and high-speed wing-body configurations. In complex applications, the coupled optimization problems are decomposed into sublevels using multilevel decomposition techniques. In cases with multiple objective functions, formal multiobjective formulations such as the Kreisselmeier-Steinhauser function approach and the modified global criteria approach have been used. Nonlinear programming techniques for continuous design variables and a hybrid optimization technique, based on a simulated annealing algorithm, for discrete design variables have been used to solve the optimization problems. The optimization procedure for gas turbine blades improves the aerodynamic and heat transfer characteristics of the blades. The two-dimensional, blade-to-blade aerodynamic analysis is performed using a panel code, and the blade heat transfer analysis is performed using an in-house finite element procedure. The optimization procedure yields blade shapes with significantly improved velocity and temperature distributions. The multidisciplinary design optimization procedures for high speed wing-body configurations simultaneously
Optimization of CT colonography technique: a practical guide.
Tolan, D J M; Armstrong, E M; Burling, D; Taylor, S A
2007-09-01
In this article we provide practical advice for optimizing computed tomography colonography (CTC) technique to help ensure that reproducible, high-quality examinations are achieved. Relevant literature is reviewed, with specific attention paid to patient information, bowel cleansing, insufflation, anti-spasmodics, patient positioning, CT technique, post-procedure care and complications, as well as practical problem-solving advice. There are many different approaches to performing CTC; our aim is not to provide a comprehensive review of the literature, but rather to present a practical and robust protocol, providing guidance particularly to those clinicians with little prior experience of the technique.
Yang, Qin; Zou, Hong-Yan; Zhang, Yan; Tang, Li-Juan; Shen, Guo-Li; Jiang, Jian-Hui; Yu, Ru-Qin
2016-01-15
Most proteins localize to more than one organelle in a cell. Unmixing the localization patterns of proteins is critical for understanding protein functions and other vital cellular processes. Herein, a non-linear machine learning technique is proposed for the first time for protein pattern unmixing. Variable-weighted support vector machine (VW-SVM) is a demonstrably robust modeling technique with flexible and rational variable selection. When optimized by a global stochastic optimization technique, the particle swarm optimization (PSO) algorithm, VW-SVM becomes an adaptive, parameter-free method for automated unmixing of protein subcellular patterns. Results obtained by pattern unmixing of a set of fluorescence microscope images of cells indicate that PSO-optimized VW-SVM is able to extract useful pattern features by optimally rescaling each variable for non-linear SVM modeling, consequently leading to improved performance in multiplex protein pattern unmixing compared with conventional SVM and other existing pattern unmixing methods.
A method to objectively optimize coral bleaching prediction techniques
NASA Astrophysics Data System (ADS)
van Hooidonk, R. J.; Huber, M.
2007-12-01
Thermally induced coral bleaching is a global threat to coral reef health. Methodologies, e.g., the Degree Heating Week technique, have been developed to predict bleaching induced by thermal stress by utilizing remotely sensed sea surface temperature (SST) observations. These techniques can be used as a management tool for Marine Protected Areas (MPAs). Predictions are valuable to decision makers and stakeholders on weekly to monthly time scales and can be employed to build public awareness and support for mitigation. The bleaching problem is only expected to worsen because global warming poses a major threat to coral reef health; indeed, predictive bleaching methods combined with climate model output have been used to forecast the global demise of coral reef ecosystems within coming decades due to climate change. Yet the accuracy of these predictive techniques has not been quantitatively characterized despite the critical role they play. Assessments have typically been limited, qualitative, or anecdotal, or, more frequently, they are simply unpublished. Quantitative accuracy assessment, using well-established methods and skill scores often used in meteorology and the medical sciences, will enable objective optimization of existing predictive techniques. To accomplish this, we will use existing remotely sensed SST data sets (AVHRR and TMI) and predictive values from techniques such as the Degree Heating Week method. We will compare these predictive values with observations of coral reef health and calculate applicable skill scores (Peirce Skill Score, Hit Rate, and False Alarm Rate). We will (a) quantitatively evaluate the accuracy of existing coral reef bleaching predictive methods against state-of-the-art reef health databases, and (b) present a technique that will objectively optimize the predictive method for any given location. We will illustrate this optimization technique for reefs located in Puerto Rico and the US Virgin Islands.
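The three skill scores named above come straight from the 2x2 contingency table of yes/no predictions versus observed bleaching; a short sketch with hypothetical counts:

```python
def skill_scores(hits, misses, false_alarms, correct_negatives):
    """Standard 2x2 verification scores for a yes/no bleaching forecast."""
    hit_rate = hits / (hits + misses)              # a.k.a. probability of detection
    false_alarm_rate = false_alarms / (false_alarms + correct_negatives)
    peirce = hit_rate - false_alarm_rate           # Peirce Skill Score
    return hit_rate, false_alarm_rate, peirce

# Hypothetical counts from one season of weekly predictions vs. observations:
h, far, pss = skill_scores(hits=18, misses=2, false_alarms=5, correct_negatives=75)
print(h, far, pss)  # hit rate 0.9, false alarm rate 0.0625, PSS 0.8375
```

A Peirce Skill Score of 0 means the predictor is no better than chance and 1 means perfect discrimination, which is what makes it a usable objective for tuning a method's thresholds per location.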
An Image Morphing Technique Based on Optimal Mass Preserving Mapping
Zhu, Lei; Yang, Yan; Haker, Steven; Tannenbaum, Allen
2013-01-01
Image morphing, or image interpolation in the time domain, deals with the metamorphosis of one image into another. In this paper, a new class of image morphing algorithms is proposed based on the theory of optimal mass transport. The L2 mass moving energy functional is modified by adding an intensity penalizing term, in order to reduce the undesired double exposure effect. It is an intensity-based approach and, thus, is parameter free. The optimal warping function is computed using an iterative gradient descent approach. This proposed morphing method is also extended to doubly connected domains using a harmonic parameterization technique, along with finite-element methods. PMID:17547128
Model reduction using new optimal Routh approximant technique
NASA Technical Reports Server (NTRS)
Hwang, Chyi; Guo, Tong-Yi; Sheih, Leang-San
1992-01-01
An optimal Routh approximant of a single-input single-output dynamic system is a reduced-order transfer function of which the denominator is obtained by the Routh approximation method while the numerator is determined by minimizing a time-response integral-squared-error (ISE) criterion. In this paper, a new elegant approach is presented for obtaining the optimal Routh approximants for linear time-invariant continuous-time systems. The approach is based on the Routh canonical expansion, which is a finite-term orthogonal series of rational basis functions, and minimization of the ISE criterion. A procedure for combining the above approach with the bilinear transformation is also presented in order to obtain the optimal bilinear Routh approximants of linear time-invariant discrete-time systems. The proposed technique is simple in formulation and is amenable to practical implementation.
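The ISE criterion minimized above is the integral of the squared difference between the full-order and reduced-order time responses. A sketch with two hypothetical first-order step responses (not the paper's systems), integrated by the trapezoidal rule:

```python
# Sketch (hypothetical systems): the ISE criterion integrates the squared
# difference between the full-order and reduced-order time responses.
# Both responses here are simple first-order exponentials chosen only for
# illustration, integrated numerically by the trapezoidal rule.
import math

def ise(y_full, y_reduced, t_end=10.0, n=10000):
    dt = t_end / n
    total = 0.0
    for i in range(n):
        t0, t1 = i * dt, (i + 1) * dt
        e0 = y_full(t0) - y_reduced(t0)
        e1 = y_full(t1) - y_reduced(t1)
        total += 0.5 * (e0 * e0 + e1 * e1) * dt  # trapezoid on e(t)^2
    return total

# step responses of 1/(s+1) and the stand-in reduced model 1/(1.1 s + 1)
full = lambda t: 1.0 - math.exp(-t)
reduced = lambda t: 1.0 - math.exp(-t / 1.1)
print(round(ise(full, reduced), 5))  # ~0.00238
```

In the optimal Routh method, the reduced denominator is fixed by the Routh table and only the numerator coefficients are varied to drive this integral down.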
Optimization of backward giant circle technique on the asymmetric bars.
Hiley, Michael J; Yeadon, Maurice R
2007-11-01
The release window for a given dismount from the asymmetric bars is the period of time within which release results in a successful dismount. Larger release windows are likely to be associated with more consistent performance because they allow a greater margin for error in timing the release. A computer simulation model was used to investigate optimum technique for maximizing release windows in asymmetric bars dismounts. The model comprised four rigid segments with the elastic properties of the gymnast and bar modeled using damped linear springs. Model parameters were optimized to obtain a close match between simulated and actual performances of three gymnasts in terms of rotation angle (1.5 degrees), bar displacement (0.014 m), and release velocities (<1%). Three optimizations to maximize the release window were carried out for each gymnast involving no perturbations, 10-ms perturbations, and 20-ms perturbations in the timing of the shoulder and hip joint movements preceding release. It was found that the optimizations robust to 20-ms perturbations produced release windows similar to those of the actual performances, whereas the windows for the unperturbed optimizations were up to twice as large. It is concluded that robustness considerations must be included in optimization studies in order to obtain realistic results and that elite performances are likely to be robust to timing perturbations of the order of 20 ms. PMID:18089928
Honey Bee Mating Optimization Vector Quantization Scheme in Image Compression
NASA Astrophysics Data System (ADS)
Horng, Ming-Huwi
Vector quantization is a powerful technique in digital image compression. Traditional widely used methods such as the Linde-Buzo-Gray (LBG) algorithm generate only locally optimal codebooks. Recently, particle swarm optimization (PSO) has been adapted to obtain a near-globally optimal codebook for vector quantization. In this paper, we apply a new swarm algorithm, honey bee mating optimization, to construct the codebook of vector quantization. The proposed method is called the honey bee mating optimization based LBG (HBMO-LBG) algorithm. The results were compared with those of the LBG and PSO-LBG algorithms. Experimental results showed that the proposed HBMO-LBG algorithm is more reliable and that the reconstructed images achieve higher quality than those generated from the other two methods.
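The LBG iteration that both PSO-LBG and HBMO-LBG build on alternates nearest-codeword assignment with centroid updates. A minimal scalar-data sketch (the swarm-based codebook search itself is not shown):

```python
# Minimal sketch of the LBG (generalized Lloyd) iteration underlying HBMO-LBG:
# assign each training vector to the nearest codeword, then move each codeword
# to the centroid of its cell. Scalar data is used for brevity; real image
# codebooks quantize pixel blocks instead.
def lbg(data, codebook, iters=20):
    for _ in range(iters):
        cells = {i: [] for i in range(len(codebook))}
        for x in data:
            i = min(range(len(codebook)), key=lambda i: (x - codebook[i]) ** 2)
            cells[i].append(x)
        # centroid update; empty cells keep their old codeword
        codebook = [sum(c) / len(c) if c else codebook[i]
                    for i, c in cells.items()]
    return codebook

data = [0.9, 1.1, 1.0, 4.8, 5.2, 5.0]
print(lbg(data, [0.0, 10.0]))  # converges to the cluster means [1.0, 5.0]
```

Because each such run only refines a local optimum, metaheuristics like PSO or honey bee mating optimization are layered on top to search over codebooks globally.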
Arasomwan, Martins Akugbe; Adewumi, Aderemi Oluyinka
2013-01-01
The linear decreasing inertia weight (LDIW) strategy was introduced to improve the performance of the original particle swarm optimization (PSO). However, the linear decreasing inertia weight PSO (LDIW-PSO) algorithm is known to suffer from premature convergence in solving complex (multipeak) optimization problems, owing to a lack of momentum for particles to perform exploitation as the algorithm approaches its terminal point. Researchers have tried to address this shortcoming by modifying LDIW-PSO or proposing new PSO variants, some of which have been claimed to outperform LDIW-PSO. The major goal of this paper is to experimentally establish that LDIW-PSO is highly efficient if its parameters are properly set. First, an experiment was conducted to acquire a percentage value of the search space limits from which to compute the particle velocity limits in LDIW-PSO, based on commonly used benchmark global optimization problems. Second, using the experimentally obtained values, five well-known benchmark optimization problems were used to show the outstanding performance of LDIW-PSO over some of its competitors which have in the past claimed superiority over it. Two other recent PSO variants with different inertia weight strategies were also compared with LDIW-PSO, with the latter outperforming both in the simulation experiments conducted. PMID:24324383
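The LDIW schedule and the velocity clamping that the paper tunes can be sketched as follows; the benchmark, parameter values, and random seed below are illustrative, not the paper's experimental setup:

```python
# Sketch of LDIW-PSO: the inertia weight decreases linearly from w_max to
# w_min, and velocities are clamped to a fraction of the search-space range
# (the percentage the paper determines experimentally). Minimizes the sphere
# function as a stand-in benchmark.
import random

def ldiw_pso(dim=2, particles=20, iters=300, lo=-5.0, hi=5.0,
             w_max=0.9, w_min=0.4, c1=2.0, c2=2.0, v_frac=0.5, seed=1):
    rng = random.Random(seed)
    f = lambda x: sum(xi * xi for xi in x)          # sphere benchmark
    v_max = v_frac * (hi - lo)                      # velocity clamp limit
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(particles)]
    V = [[0.0] * dim for _ in range(particles)]
    P = [x[:] for x in X]                           # personal bests
    g = min(P, key=f)[:]                            # global best
    for t in range(iters):
        w = w_max - (w_max - w_min) * t / iters     # linear decreasing inertia
        for i in range(particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (g[d] - X[i][d]))
                V[i][d] = max(-v_max, min(v_max, V[i][d]))
                X[i][d] = max(lo, min(hi, X[i][d] + V[i][d]))
            if f(X[i]) < f(P[i]):
                P[i] = X[i][:]
                if f(P[i]) < f(g):
                    g = P[i][:]
    return f(g)

print(ldiw_pso() < 1e-3)
```

Early iterations (large w) favor exploration; as w decays toward w_min the swarm shifts to exploitation, which is exactly where the premature-convergence criticism of LDIW-PSO arises on multipeak problems.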
A Multipopulation PSO Based Memetic Algorithm for Permutation Flow Shop Scheduling
Liu, Ruochen; Ma, Chenlin; Ma, Wenping; Li, Yangyang
2013-01-01
The permutation flow shop scheduling problem (PFSSP) is part of production scheduling and belongs to the hardest class of combinatorial optimization problems. In this paper, a multipopulation particle swarm optimization (PSO) based memetic algorithm (MPSOMA) is proposed. The whole particle swarm population is divided into three subpopulations, in each of which every particle evolves by the standard PSO; each subpopulation is then updated using a different local search scheme, namely variable neighborhood search (VNS) or an individual improvement scheme (IIS). The best particle of each subpopulation is then selected to construct a probabilistic model using an estimation of distribution algorithm (EDA), and three particles are sampled from the probabilistic model to update the worst individual in each subpopulation. The best particle in the entire swarm is used to update the global optimal solution. The proposed MPSOMA is compared with two recently proposed algorithms, namely a PSO based memetic algorithm (PSOMA) and a hybrid particle swarm optimization with estimation of distribution algorithm (PSOEDA), on 29 well-known PFSSPs taken from the OR-library, and the experimental results show that it is an effective approach for the PFSSP. PMID:24453841
Techniques for developing reliability-oriented optimal microgrid architectures
NASA Astrophysics Data System (ADS)
Patra, Shashi B.
2007-12-01
Alternative generation technologies such as fuel cells, micro-turbines, and solar generation have been the focus of active research in the past decade. These energy sources are small and modular. Because of these advantages, they can be deployed effectively at or near locations where they are actually needed, i.e. in the distribution network. This is in contrast to traditional electricity generation, which has been "centralized" in nature. The new technologies can be deployed in a "distributed" manner, and are therefore also known as Distributed Energy Resources (DER). It is expected that the use of DER will grow significantly in the future. Hence, it is prudent to interconnect the energy resources in a meshed or grid-like structure, so as to exploit the reliability and economic benefits of distributed deployment. These grids, which are smaller in scale but similar to the electric transmission grid, are known as "microgrids". This dissertation presents rational methods of building microgrids optimized for cost and subject to system-wide and locational reliability guarantees. The first method is based on dynamic programming and consists of determining the optimal interconnection between microsources and load points, given their locations and the rights of way for possible interconnections. The second method is based on particle swarm optimization. This dissertation describes the formulation of the optimization problem and the solution methods. The applicability of the techniques is demonstrated in two possible situations: design of a microgrid from scratch and expansion of an existing distribution system.
Demonstration of optimization techniques for groundwater plume remediation
Finsterle, Stefan
2000-09-01
We examined the potential use of standard optimization algorithms for the solution of aquifer remediation problems. Costs for the removal of dissolved or free-phase contaminants depend on aquifer properties, the chosen remediation technology, and operational parameters (such as the number of wells drilled and pumping rates). A cost function must be formulated that may include actual costs and hypothetical penalty costs for incomplete cleanup; the total cost function is therefore a measure of the overall effectiveness and efficiency of the proposed remediation scenario. In this study, the cost function is minimized by automatically adjusting certain operational parameters. The impact of these operational parameters on remediation is evaluated using a state-of-the-art three-phase, three-component flow and transport simulator, which is linked to nonlinear optimization routines. The report demonstrates that methods developed for automatic model calibration are capable of minimizing arbitrary cost functions. Two illustrative examples are presented. While hypothetical, these examples demonstrate that remediation costs can be substantially lowered by combining simulation and optimization techniques. The second example, on co-injection of air and steam, also makes evident the need for coupling optimization routines with an accurate state-of-the-art process simulator. Simplified models are likely to miss significant system behaviors, such as increased downward mobilization due to recondensation of contaminants during steam flooding, which can be partly suppressed by the co-injection of air.
Automated parameterization of intermolecular pair potentials using global optimization techniques
NASA Astrophysics Data System (ADS)
Krämer, Andreas; Hülsmann, Marco; Köddermann, Thorsten; Reith, Dirk
2014-12-01
In this work, different global optimization techniques are assessed for the automated development of molecular force fields, as used in molecular dynamics and Monte Carlo simulations. The quest of finding suitable force field parameters is treated as a mathematical minimization problem. Intricate problem characteristics such as extremely costly and even abortive simulations, noisy simulation results, and especially multiple local minima naturally lead to the use of sophisticated global optimization algorithms. Five diverse algorithms (pure random search, recursive random search, CMA-ES, differential evolution, and taboo search) are compared to our own tailor-made solution named CoSMoS. CoSMoS is an automated workflow. It models the parameters' influence on the simulation observables to detect a globally optimal set of parameters. It is shown how and why this approach is superior to other algorithms. Applied to suitable test functions and simulations for phosgene, CoSMoS effectively reduces the number of required simulations and real time for the optimization task.
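Among the algorithms compared, pure random search is the simplest baseline: sample parameter vectors uniformly in a box and keep the best. A sketch with a cheap stand-in objective (a real force-field run would evaluate costly molecular simulations instead):

```python
# Sketch of the simplest baseline above, pure random search over a box of
# force-field parameters. The quadratic "loss" is a cheap stand-in; in the
# real workflow each evaluation is a molecular simulation.
import random

def random_search(f, bounds, samples=5000, seed=7):
    rng = random.Random(seed)
    best_x, best_f = None, float("inf")
    for _ in range(samples):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

# stand-in loss with minimum at (1, 2) -- illustrative only
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2
x, fx = random_search(f, [(-5, 5), (-5, 5)])
print(fx < 0.5)  # True: the box is sampled densely enough to land near (1, 2)
```

The more sophisticated entrants (CMA-ES, differential evolution, CoSMoS) earn their keep precisely because each simulation call is expensive, so they must reach a good region in far fewer evaluations than blind sampling.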
Optimization Techniques for 3D Graphics Deployment on Mobile Devices
NASA Astrophysics Data System (ADS)
Koskela, Timo; Vatjus-Anttila, Jarkko
2015-03-01
3D Internet technologies are becoming essential enablers in many application areas including games, education, collaboration, navigation and social networking. The use of 3D Internet applications with mobile devices provides location-independent access and richer use context, but also performance issues. Therefore, one of the important challenges facing 3D Internet applications is the deployment of 3D graphics on mobile devices. In this article, we present an extensive survey on optimization techniques for 3D graphics deployment on mobile devices and qualitatively analyze the applicability of each technique from the standpoints of visual quality, performance and energy consumption. The analysis focuses on optimization techniques related to data-driven 3D graphics deployment, because it supports off-line use, multi-user interaction, user-created 3D graphics and creation of arbitrary 3D graphics. The outcome of the analysis facilitates the development and deployment of 3D Internet applications on mobile devices and provides guidelines for future research.
Machine learning techniques for energy optimization in mobile embedded systems
NASA Astrophysics Data System (ADS)
Donohoo, Brad Kyoshi
Mobile smartphones and other portable battery operated embedded systems (PDAs, tablets) are pervasive computing devices that have emerged in recent years as essential instruments for communication, business, and social interactions. While performance, capabilities, and design are all important considerations when purchasing a mobile device, a long battery lifetime is one of the most desirable attributes. Battery technology and capacity has improved over the years, but it still cannot keep pace with the power consumption demands of today's mobile devices. This key limiter has led to a strong research emphasis on extending battery lifetime by minimizing energy consumption, primarily using software optimizations. This thesis presents two strategies that attempt to optimize mobile device energy consumption with negligible impact on user perception and quality of service (QoS). The first strategy proposes an application and user interaction aware middleware framework that takes advantage of user idle time between interaction events of the foreground application to optimize CPU and screen backlight energy consumption. The framework dynamically classifies mobile device applications based on their received interaction patterns, then invokes a number of different power management algorithms to adjust processor frequency and screen backlight levels accordingly. The second strategy proposes the usage of machine learning techniques to learn a user's mobile device usage pattern pertaining to spatiotemporal and device contexts, and then predict energy-optimal data and location interface configurations. By learning where and when a mobile device user uses certain power-hungry interfaces (3G, WiFi, and GPS), the techniques, which include variants of linear discriminant analysis, linear logistic regression, non-linear logistic regression, and k-nearest neighbor, are able to dynamically turn off unnecessary interfaces at runtime in order to save energy.
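Of the classifiers listed, k-nearest neighbor is the easiest to sketch. The features and labels below are invented purely to illustrate predicting whether a power-hungry interface is needed from spatiotemporal context:

```python
# Toy sketch of the k-nearest-neighbor idea from the thesis: predict whether
# an interface (e.g. WiFi) is needed from spatiotemporal context, so it can
# be switched off otherwise. Training samples here are invented.
def knn_predict(train, query, k=3):
    """train: list of (feature_vector, label); returns the majority label."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda s: dist(s[0], query))[:k]
    labels = [lab for _, lab in nearest]
    return max(set(labels), key=labels.count)

# (hour_of_day, location_id) -> is WiFi needed?
train = [((9, 1), "on"), ((10, 1), "on"), ((11, 1), "on"),
         ((22, 2), "off"), ((23, 2), "off"), ((1, 2), "off")]
print(knn_predict(train, (10, 1)))  # on
```

A runtime policy would query this model with the current hour and location and disable 3G, WiFi, or GPS whenever the predicted label is "off".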
High-level power analysis and optimization techniques
NASA Astrophysics Data System (ADS)
Raghunathan, Anand
1997-12-01
This thesis combines two ubiquitous trends in the VLSI design world--the move towards designing at higher levels of design abstraction, and the increasing importance of power consumption as a design metric. Power estimation and optimization tools are becoming an increasingly important part of design flows, driven by a variety of requirements such as prolonging battery life in portable computing and communication devices, thermal considerations and system cooling and packaging costs, reliability issues (e.g. electromigration, ground bounce, and I-R drops in the power network), and environmental concerns. This thesis presents a suite of techniques to automatically perform power analysis and optimization for designs at the architecture or register-transfer, and behavior or algorithm levels of the design hierarchy. High-level synthesis refers to the process of synthesizing, from an abstract behavioral description, a register-transfer implementation that satisfies the desired constraints. High-level synthesis tools typically perform one or more of the following tasks: transformations, module selection, clock selection, scheduling, and resource allocation and assignment (also called resource sharing or hardware sharing). High-level synthesis techniques for minimizing the area, maximizing the performance, and enhancing the testability of the synthesized designs have been investigated. This thesis presents high-level synthesis techniques that minimize power consumption in the synthesized data paths. This thesis investigates the effects of resource sharing on the power consumption in the data path, provides techniques to efficiently estimate power consumption during resource sharing, and resource sharing algorithms to minimize power consumption. The RTL circuit that is obtained from the high-level synthesis process can be further optimized for power by applying power-reducing RTL transformations. This thesis presents macro-modeling and estimation techniques for switching
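The power estimates discussed rest on the standard dynamic power model P = a·C·V²·f (switching activity a, switched capacitance C, supply voltage V, clock frequency f); the thesis's RTL macro-models go well beyond this, but the base relation can be sketched as:

```python
# Standard dynamic power model underlying RTL power estimation:
# P = a * C * V^2 * f. The numeric values below are illustrative only.
def dynamic_power(activity, cap_farads, vdd, freq_hz):
    return activity * cap_farads * vdd ** 2 * freq_hz

# halving the supply voltage cuts dynamic power 4x at fixed frequency
p1 = dynamic_power(0.2, 1e-9, 3.3, 100e6)
p2 = dynamic_power(0.2, 1e-9, 1.65, 100e6)
print(round(p1 / p2, 1))  # 4.0
```

The quadratic dependence on V is why high-level transformations that enable voltage scaling, and resource-sharing choices that reduce switching activity a, dominate power savings at this level.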
Modiri, A; Gu, X; Sawant, A
2014-06-15
Purpose: We present a particle swarm optimization (PSO)-based 4D IMRT planning technique designed for dynamic MLC tracking delivery to lung tumors. The key idea is to utilize the temporal dimension as an additional degree of freedom rather than a constraint in order to achieve improved sparing of organs at risk (OARs). Methods: The target and normal structures were manually contoured on each of the ten phases of a 4DCT scan acquired from a lung SBRT patient who exhibited 1.5cm tumor motion despite the use of abdominal compression. Ten corresponding IMRT plans were generated using the Eclipse treatment planning system. These plans served as initial guess solutions for the PSO algorithm. Fluence weights were optimized over the entire solution space, i.e., 10 phases × 12 beams × 166 control points. The size of the solution space motivated our choice of PSO, which is a highly parallelizable stochastic global optimization technique that is well-suited for such large problems. A summed fluence map was created using an in-house B-spline deformable image registration. Each plan was compared with a corresponding, internal target volume (ITV)-based IMRT plan. Results: The PSO 4D IMRT plan yielded comparable PTV coverage and significantly higher dose sparing for parallel and serial OARs compared to the ITV-based plan. The dose sparing achieved via PSO-4DIMRT was: lung Dmean = 28%; lung V20 = 90%; spinal cord Dmax = 23%; esophagus Dmax = 31%; heart Dmax = 51%; heart Dmean = 64%. Conclusion: Truly 4D IMRT that uses the temporal dimension as an additional degree of freedom can achieve significant dose sparing of serial and parallel OARs. Given the large solution space, PSO represents an attractive, parallelizable tool to achieve globally optimal solutions for such problems. This work was supported through funding from the National Institutes of Health and Varian Medical Systems. Amit Sawant has research funding from Varian Medical Systems, VisionRT Ltd. and Elekta.
Optimized evaporation technique for leachate treatment: Small scale implementation.
Benyoucef, Fatima; Makan, Abdelhadi; El Ghmari, Abderrahman; Ouatmane, Aziz
2016-04-01
This paper introduces an optimized evaporation technique for leachate treatment. For this purpose, and in order to study the feasibility and measure the effectiveness of forced evaporation, three cuboidal steel tubs were designed and implemented. The first, a control tub, was installed at ground level to monitor natural evaporation. The second and third tubs, the models under investigation, were installed at ground level (equipped-tub 1) and above ground level (equipped-tub 2), respectively, and provided with special equipment to accelerate the evaporation process. The obtained results showed that the evaporation rate at the equipped tubs was much accelerated relative to the control tub. It was accelerated five times in the winter period, when the evaporation rate increased from 0.37 mm/day to 1.50 mm/day. In the summer period, the evaporation rate was accelerated more than three times, increasing from 3.06 mm/day to 10.25 mm/day. Overall, the optimized evaporation technique can be applied effectively under either electric or solar energy supply, and will accelerate the evaporation rate three to five times regardless of the seasonal temperature.
Wang, Wanlin; Zhang, Wang; Chen, Weixin; Gu, Jiajun; Liu, Qinglei; Deng, Tao; Zhang, Di
2013-01-15
The wide angular range of the treelike structure in Morpho butterfly scales was investigated by finite-difference time-domain (FDTD)/particle-swarm-optimization (PSO) analysis. Using the FDTD method, different parameters of the Morpho butterflies' treelike structure were studied and their contributions to the angular dependence were analyzed. A wide angular range was then realized by the PSO method through quantitative design of the lamellae deviation (Δy), a crucial parameter for the angular range. The field map of the wide-range reflection over a large area is given to confirm the wide angular range. The tristimulus values and corresponding color coordinates for various viewing directions were calculated to confirm the blue color at different observation angles. The wide angular range realized by the FDTD/PSO method will assist in understanding the scientific principles involved and in designing artificial optical materials.
Application of multivariable search techniques to structural design optimization
NASA Technical Reports Server (NTRS)
Jones, R. T.; Hague, D. S.
1972-01-01
Multivariable optimization techniques are applied to a particular class of minimum weight structural design problems: the design of an axially loaded, pressurized, stiffened cylinder. Minimum weight designs are obtained by a variety of search algorithms: first- and second-order, elemental perturbation, and randomized techniques. An exterior penalty function approach to constrained minimization is employed. Some comparisons are made with solutions obtained by an interior penalty function procedure. In general, it would appear that an interior penalty function approach may not be as well suited to the class of design problems considered as the exterior penalty function approach. It is also shown that a combination of search algorithms will tend to arrive at an extremal design in a more reliable manner than a single algorithm. The effect of incorporating realistic geometrical constraints on stiffener cross-sections is investigated. A limited comparison is made between minimum weight cylinders designed on the basis of a linear stability analysis and cylinders designed on the basis of empirical buckling data. Finally, a technique for locating more than one extremal is demonstrated.
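The exterior penalty function approach mentioned above folds constraint violations into the objective and drives the penalty weight upward, so the unconstrained minimizer approaches the constrained one from outside the feasible region. A toy one-variable sketch (not the stiffened-cylinder problem itself):

```python
# Sketch of an exterior penalty method: a constraint g(x) <= 0 is folded into
# the objective as r * max(0, g(x))^2 and the weight r is increased.
# Toy problem: minimize x^2 subject to x >= 1 (g(x) = 1 - x); optimum is x = 1.
def penalized_min(r, x=0.0, step=1e-4, iters=200000):
    for _ in range(iters):
        g = max(0.0, 1.0 - x)
        grad = 2 * x - 2 * r * g        # d/dx [x^2 + r * max(0, 1-x)^2]
        x -= step * grad                # plain gradient descent
    return x

for r in (1.0, 10.0, 1000.0):
    print(round(penalized_min(r), 3))  # 0.5, 0.909, 0.999 -> tends to 1.0
```

The analytic unconstrained minimizer here is x = r/(1+r), which makes the outside-in convergence visible: the iterates are always slightly infeasible, unlike an interior penalty method, which is one practical reason the authors found the exterior formulation better suited to this design class.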
A technique for integrating engine cycle and aircraft configuration optimization
NASA Technical Reports Server (NTRS)
Geiselhart, Karl A.
1994-01-01
A method for conceptual aircraft design that incorporates the optimization of major engine design variables for a variety of cycle types was developed. The methodology should improve the lengthy screening process currently involved in selecting an appropriate engine cycle for a given application or mission. The new capability will allow environmental concerns such as airport noise and emissions to be addressed early in the design process. The ability to rapidly perform optimization and parametric variations using both engine cycle and aircraft design variables, and to see the impact on the aircraft, should provide insight and guidance for more detailed studies. A brief description of the aircraft performance and mission analysis program and the engine cycle analysis program that were used is given. A new method of predicting propulsion system weight and dimensions using thermodynamic cycle data, preliminary design, and semi-empirical techniques is introduced. Propulsion system performance and weights data generated by the program are compared with industry data and data generated using well established codes. The ability of the optimization techniques to locate an optimum is demonstrated and some of the problems that had to be solved to accomplish this are illustrated. Results from the application of the program to the analysis of three supersonic transport concepts installed with mixed flow turbofans are presented. The results from the application to a Mach 2.4, 5000 n.mi. transport indicate that the optimum bypass ratio is near 0.45 with less than 1 percent variation in minimum gross weight for bypass ratios ranging from 0.3 to 0.6. In the final application of the program, a low sonic boom concept of fixed takeoff gross weight that would fly at Mach 2.0 overwater and at Mach 1.6 overland is compared with a baseline concept of the same takeoff gross weight that would fly at Mach 2.4 overwater and subsonically overland. The results indicate that for the design mission
What is Particle Swarm optimization? Application to hydrogeophysics (Invited)
NASA Astrophysics Data System (ADS)
Fernández Martínez, J.; García Gonzalo, E.; Mukerji, T.
2009-12-01
Inverse problems are generally ill-posed. This yields a lack of uniqueness and/or numerical instability. These features cause local optimization methods without prior information to provide unpredictable results, unable to discriminate among the multiple models consistent with the end criteria. Stochastic approaches to inverse problems consist of shifting attention to the probability of existence of certain interesting subsurface structures instead of "looking for a unique model". Some well-known stochastic methods include genetic algorithms and simulated annealing. A more recent method, Particle Swarm Optimization (PSO), is a global optimization technique that has been successfully applied to solve inverse problems in many engineering fields, although its use in the geosciences is still limited. Like all stochastic methods, PSO requires reasonably fast forward modeling. The basic idea behind PSO is that each model searches the model space according to its misfit history and the misfit of the other models of the swarm. The PSO algorithm can be physically interpreted as a damped spring-mass system. This physical analogy was used to define a whole family of PSO optimizers and to establish criteria, based on the stability of particle swarm trajectories, for tuning the PSO parameters: inertia and the local and global accelerations. In this contribution we show applications to different low-cost hydrogeophysical inverse problems: 1) a salt water intrusion problem using Vertical Electrical Soundings, 2) the inversion of Spontaneous Potential data for groundwater modeling, 3) the identification of Cole-Cole parameters for Induced Polarization data. We show that with this stochastic approach we are able to answer questions related to risk analysis, such as what is the depth of the salt intrusion with a certain probability, or giving probabilistic bounds for the water table depth. Moreover, these measures of uncertainty are obtained with small computational cost and time, allowing us a very
Improved CEEMDAN and PSO-SVR Modeling for Near-Infrared Noninvasive Glucose Detection.
Li, Xiaoli; Li, Chengwei
2016-01-01
Diabetes is a serious threat to human health. Thus, research on noninvasive blood glucose detection has become crucial both locally and abroad. Near-infrared transmission spectroscopy has important applications in noninvasive glucose detection. Extracting useful information and selecting appropriate modeling methods can improve the robustness and accuracy of models for predicting blood glucose concentrations. Therefore, an improved signal reconstruction and calibration modeling method is proposed in this study. On the basis of improved complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) and the correlation coefficient, the sensitive intrinsic mode functions are selected to reconstruct spectroscopy signals for developing the calibration model using the support vector regression (SVR) method. The radial basis function kernel is selected for SVR, and three parameters, namely, the insensitive loss coefficient ε, penalty parameter C, and width coefficient γ, are identified beforehand for the corresponding model. Particle swarm optimization (PSO) is employed to optimize the simultaneous selection of the three parameters. Results of the comparison experiments using PSO-SVR and partial least squares show that the proposed signal reconstruction method is feasible and can eliminate noise in spectroscopy signals. The prediction accuracy of the model using the PSO-SVR method is also found to be better than that of other methods for near-infrared noninvasive glucose detection. PMID:27635151
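The RBF kernel whose width γ is among the PSO-tuned parameters has a simple closed form; a sketch showing how γ controls locality (the points and values are illustrative):

```python
# Sketch of the RBF kernel underlying the SVR model above:
# K(x, y) = exp(-gamma * ||x - y||^2). PSO searches over (C, epsilon, gamma);
# here we only illustrate how gamma controls the kernel's locality.
import math

def rbf_kernel(x, y, gamma):
    sq = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq)

x, y = [0.0, 0.0], [1.0, 1.0]
print(round(rbf_kernel(x, y, 0.1), 3))   # 0.819: wide kernel, distant points still similar
print(round(rbf_kernel(x, y, 10.0), 3))  # 0.0:   narrow kernel, similarity decays fast
```

Too small a γ underfits (every spectrum looks alike to the model) and too large a γ overfits, which is why γ is tuned jointly with C and ε rather than fixed by hand.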
FPGA implementation of neuro-fuzzy system with improved PSO learning.
Karakuzu, Cihan; Karakaya, Fuat; Çavuşlu, Mehmet Ali
2016-07-01
This paper presents the first hardware implementation of a neuro-fuzzy system (NFS) with its metaheuristic learning ability on a field programmable gate array (FPGA). Metaheuristic learning of the NFS for all of its parameters is accomplished using improved particle swarm optimization (iPSO). As a second novelty, a new functional approach, which does not require any memory or multiplier usage, is proposed for the Gaussian membership functions of the NFS. The NFS and its learning using iPSO are implemented on a Xilinx Virtex5 xc5vlx110-3ff1153, and the efficiency of the proposed implementation is tested on two dynamic system identification problems and on a licence plate detection problem as a practical application. Results indicate that the proposed NFS implementation and membership function approximation are as effective as the other approaches available in the literature but require less hardware resources. PMID:27136666
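For reference, this is the standard Gaussian membership function that the FPGA design approximates; the paper's memory- and multiplier-free hardware approximation itself is not reproduced here:

```python
# Reference form of the Gaussian membership function that the paper's FPGA
# approach approximates: mu(x) = exp(-(x - c)^2 / (2 * sigma^2)). The hardware
# version replaces exp and the multiplications with a functional approximation.
import math

def gaussian_mf(x, center, sigma):
    return math.exp(-((x - center) ** 2) / (2.0 * sigma ** 2))

print(round(gaussian_mf(0.0, 0.0, 1.0), 3))  # 1.0 at the center
print(round(gaussian_mf(1.0, 0.0, 1.0), 3))  # 0.607 one sigma away
```

In the NFS, iPSO tunes the centers and widths (c, σ) of these membership functions along with the consequent parameters, so a cheap hardware evaluation of this curve is on the critical path of both inference and learning.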
Automatic PSO-Based Deformable Structures Markerless Tracking in Laparoscopic Cholecystectomy
NASA Astrophysics Data System (ADS)
Djaghloul, Haroun; Batouche, Mohammed; Jessel, Jean-Pierre
An automatic and markerless tracking method for deformable structures (digestive organs) during laparoscopic cholecystectomy that uses particle swarm optimization (PSO) behavior and preoperative a priori knowledge is presented. The shape associated with the global best particles of the population determines a coarse representation of the targeted organ (the gallbladder) in monocular laparoscopic color images. The swarm behavior is directed by a new fitness function, optimized to improve detection and tracking performance. The function is defined by a linear combination of two terms, namely, the human a priori knowledge term (H) and the particle density term (D). Within the limits of standard PSO characteristics, experimental results on both synthetic and real data show the effectiveness and robustness of our method. Indeed, it outperforms existing methods (such as active contours, deformable models and Gradient Vector Flow) in accuracy and convergence rate, without the need for explicit initialization.
Techniques for developing approximate optimal advanced launch system guidance
NASA Technical Reports Server (NTRS)
Feeley, Timothy S.; Speyer, Jason L.
1991-01-01
An extension to the authors' previous technique used to develop a real-time guidance scheme for the Advanced Launch System is presented. The approach is to construct an optimal guidance law based upon an asymptotic expansion associated with small physical parameters, epsilon. The trajectory of a rocket modeled as a point mass is considered with the flight restricted to an equatorial plane while reaching an orbital altitude at orbital injection speeds. The dynamics of this problem can be separated into primary effects due to thrust and gravitational forces, and perturbation effects which include the aerodynamic forces and the remaining inertial forces. An analytic solution to the reduced-order problem represented by the primary dynamics is possible. The Hamilton-Jacobi-Bellman or dynamic programming equation is expanded in an asymptotic series where the zeroth-order term (epsilon = 0) can be obtained in closed form.
Review of optimization techniques of polygeneration systems for building applications
NASA Astrophysics Data System (ADS)
Rong, A. Y.; Su, Y.; Lahdelma, R.
2016-08-01
Polygeneration means simultaneous production of two or more energy products in a single integrated process. Polygeneration is an energy-efficient technology and plays an important role in the transition to future low-carbon energy systems. It can find wide applications in utilities and in various industrial and building sectors. This paper mainly focuses on polygeneration applications in the building sector. The scales of polygeneration systems in the building sector range from the micro level for a single home to the large level for residential districts. The development of polygeneration microgrids is also related to building applications. The paper aims to give a comprehensive review of optimization techniques for designing, synthesizing and operating different types of polygeneration systems for building applications.
On improving storm surge forecasting using an adjoint optimal technique
NASA Astrophysics Data System (ADS)
Li, Yineng; Peng, Shiqiu; Yan, Jing; Xie, Lian
2013-12-01
A three-dimensional ocean model and its adjoint model are used to simultaneously optimize the initial conditions (IC) and the wind stress drag coefficient (Cd) for improving storm surge forecasting. To demonstrate the effect of this proposed method, a number of identical twin experiments (ITEs) with a prescription of different error sources and two real data assimilation experiments are performed. Results from both the idealized and real data assimilation experiments show that adjusting IC and Cd simultaneously can achieve much more improvements in storm surge forecasting than adjusting IC or Cd only. A diagnosis on the dynamical balance indicates that adjusting IC only may introduce unrealistic oscillations out of the assimilation window, which can be suppressed by the adjustment of the wind stress when simultaneously adjusting IC and Cd. Therefore, it is recommended to simultaneously adjust IC and Cd to improve storm surge forecasting using an adjoint technique.
Optimal technique for maximal forward rotating vaults in men's gymnastics.
Hiley, Michael J; Jackson, Monique I; Yeadon, Maurice R
2015-08-01
In vaulting, a gymnast must generate sufficient linear and angular momentum during the approach and table contact to complete the rotational requirements in the post-flight phase. This study investigated the optimization of table touchdown conditions and table contact technique for the maximization of rotation potential for forwards rotating vaults. A planar seven-segment torque-driven computer simulation model of the contact phase in vaulting was evaluated by varying joint torque activation time histories to match three performances of a handspring double somersault vault by an elite gymnast. The closest matching simulation was used as a starting point to maximize post-flight rotation potential (the product of angular momentum and flight time) for a forwards rotating vault. It was found that the maximized rotation potential was sufficient to produce a handspring double piked somersault vault. The corresponding optimal touchdown configuration exhibited hip flexion in contrast to the hyperextended configuration required for maximal height. Increasing touchdown velocity and angular momentum led to additional post-flight rotation potential. By increasing the horizontal velocity at table touchdown, within limits obtained from recorded performances, the handspring double somersault tucked with one and a half twists, and the handspring triple somersault tucked became theoretically possible.
Optimal exposure techniques for iodinated contrast enhanced breast CT
NASA Astrophysics Data System (ADS)
Glick, Stephen J.; Makeev, Andrey
2016-03-01
Screening for breast cancer using mammography has been very successful in the effort to reduce breast cancer mortality, and its use has largely resulted in the 30% reduction in breast cancer mortality observed since 1990 [1]. However, diagnostic mammography remains an area of breast imaging that is in great need for improvement. One imaging modality proposed for improving the accuracy of diagnostic workup is iodinated contrast-enhanced breast CT [2]. In this study, a mathematical framework is used to evaluate optimal exposure techniques for contrast-enhanced breast CT. The ideal observer signal-to-noise ratio (i.e., d') figure-of-merit is used to provide a task performance based assessment of optimal acquisition parameters under the assumptions of a linear, shift-invariant imaging system. A parallel-cascade model was used to estimate signal and noise propagation through the detector, and a realistic lesion model with iodine uptake was embedded into a structured breast background. Ideal observer performance was investigated across kVp settings, filter materials, and filter thickness. Results indicated many kVp spectra/filter combinations can improve performance over currently used x-ray spectra.
Optimal high speed CMOS inverter design using craziness based Particle Swarm Optimization Algorithm
NASA Astrophysics Data System (ADS)
De, Bishnu P.; Kar, Rajib; Mandal, Durbadal; Ghoshal, Sakti P.
2015-07-01
The inverter is the most fundamental logic gate, performing a Boolean operation on a single input variable. In this paper, an optimal design of a CMOS inverter using an improved version of the particle swarm optimization technique, called Craziness based Particle Swarm Optimization (CRPSO), is proposed. CRPSO is simple in concept, easy to implement and computationally efficient, with two main advantages: fast, near-global convergence and robust control parameters. The performance of PSO depends on its control parameters and may be affected by premature convergence and stagnation. To overcome these problems the PSO algorithm has been modified to CRPSO in this paper and is used for CMOS inverter design. In birds' flocking or fish schooling, a bird or a fish often changes direction suddenly. In the proposed technique, this sudden change of velocity is modelled by a direction reversal factor associated with the previous velocity and a "craziness" velocity factor associated with another direction reversal factor. The second term is introduced, depending on a predefined craziness probability, to maintain the diversity of particles. The performance of CRPSO is compared with the real coded genetic algorithm (RGA) and conventional PSO reported in the recent literature. CRPSO based design results are also compared with PSPICE based results. The simulation results show that CRPSO is superior to the other algorithms for the examples considered and can be efficiently used for CMOS inverter design.
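The craziness mechanism described above can be sketched in a few lines. The coefficient values and the exact placement of the direction-reversal signs below are illustrative assumptions, not the paper's exact formulation.

```python
import random

def crpso_velocity(v_prev, x, pbest, gbest, rng,
                   w=0.7, c1=1.5, c2=1.5, p_craziness=0.3, v_craziness=0.05):
    """One CRPSO velocity update for a single dimension (sketch).

    A random direction-reversal sign multiplies the inertia term, and with a
    predefined probability a small signed 'craziness' velocity is added to
    keep the swarm diverse."""
    r1, r2 = rng.random(), rng.random()
    reversal = 1.0 if rng.random() < 0.5 else -1.0   # direction reversal factor
    v = (reversal * w * v_prev
         + c1 * r1 * (pbest - x)
         + c2 * r2 * (gbest - x))
    if rng.random() < p_craziness:                   # predefined craziness probability
        sign = 1.0 if rng.random() < 0.5 else -1.0
        v += sign * v_craziness * rng.random()
    return v
```

The position then updates as x + v, exactly as in conventional PSO; only the velocity rule changes.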
Optimization technique for problems with an inequality constraint
NASA Technical Reports Server (NTRS)
Russell, K. J.
1972-01-01
The general technique uses a modified version of an existing method termed the pattern search technique. A new procedure, called the parallel move strategy, permits the pattern search technique to be used with problems involving an inequality constraint.
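The pattern search family this report builds on can be sketched as a basic Hooke-Jeeves loop: exploratory moves along each coordinate, then a pattern move through any improved point. This is a generic sketch, not the report's parallel move strategy, and the constraint-handling extension is omitted.

```python
def pattern_search(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=1000):
    """Basic Hooke-Jeeves pattern search for unconstrained minimization."""
    def explore(base, s):
        # try +/- s along each coordinate, keeping any improvement
        x, fx = list(base), f(base)
        for d in range(len(x)):
            for delta in (s, -s):
                trial = x[:]
                trial[d] += delta
                ft = f(trial)
                if ft < fx:
                    x, fx = trial, ft
                    break
        return x, fx

    x, fx = list(x0), f(x0)
    for _ in range(max_iter):
        if step < tol:
            break
        e, fe = explore(x, step)
        if fe < fx:
            # pattern move: leap through the improved point, then re-explore
            p = [2 * a - b for a, b in zip(e, x)]
            pe, fpe = explore(p, step)
            (x, fx) = (pe, fpe) if fpe < fe else (e, fe)
        else:
            step *= shrink   # no improvement: refine the mesh
    return x, fx

x, fx = pattern_search(lambda p: (p[0] - 1) ** 2 + (p[1] + 2) ** 2, [0.0, 0.0])
```

A constrained variant would additionally reject (or penalize) trial points that violate the inequality constraint.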
Technique to optimize magnetic response of gelatin coated magnetic nanoparticles.
Parikh, Nidhi; Parekh, Kinnari
2015-07-01
The paper describes the results of optimizing the magnetic response of a highly stable bio-functionalized magnetic nanoparticle dispersion. The concentration of gelatin during in situ co-precipitation synthesis was varied (8, 23 and 48 mg/mL) to optimize magnetic properties. This variation results in a change in crystallite size from 10.3 to 7.8 ± 0.1 nm. TEM measurement of the G3 sample shows highly crystalline spherical nanoparticles with a mean diameter of 7.2 ± 0.2 nm and a diameter distribution (σ) of 0.27. FTIR spectra show a shift of 22 cm(-1) at the C=O stretch with the absence of N-H stretching, confirming the chemical binding of gelatin on the magnetic nanoparticles. The concept of the lone pair electron of the amide group explains the binding mechanism. TGA shows 32.8-25.2% weight loss at 350 °C, substantiating decomposition of the chemically bound gelatin. The magnetic response shows that for the 8 mg/mL gelatin concentration, the initial susceptibility and saturation magnetization are maximal. The cytotoxicity of the G3 sample was assessed in Normal Rat Kidney Epithelial Cells (NRK line) by MTT assay. Results show an increase in viability at all concentrations, indicating a probable stimulating action of these particles in the nontoxic range. This shows the potential of the technique for biological applications, as the coated particles are (i) superparamagnetic, (ii) highly stable in physiological media, (iii) able to attach other drugs via the free functional groups of gelatin and (iv) non-toxic.
Shrestha, Roshan; Houser, Paul R.; Anantharaj, Valentine G.
2011-04-01
Precipitation products are currently available from various sources at higher spatial and temporal resolution than at any time in the past. Each of the precipitation products has its strengths and weaknesses in availability, accuracy, resolution, retrieval techniques and quality control. By merging precipitation data obtained from multiple sources, one can improve its information content by minimizing these issues. However, precipitation data merging poses challenges of scale mismatch and of accurate error and bias assessment. In this paper we present Optimal Merging of Precipitation (OMP), a new method to merge precipitation data from multiple sources that are of different spatial and temporal resolutions and accuracies. This method is a combination of scale conversion and merging-weight optimization, involving performance tracing based on Bayesian statistics and trend analysis, which yields merging weights for each precipitation data source. The weights are optimized at multiple scales to facilitate multiscale merging and better precipitation downscaling. Precipitation data used in the experiment include products from the 12-km resolution North American Land Data Assimilation (NLDAS) system, the 8-km resolution CMORPH and the 4-km resolution National Stage-IV QPE. The test cases demonstrate that the OMP method is capable of identifying better data sources and allocating them higher priority in the merging procedure, dynamically over the region and time period. This method is also effective in filtering out poor-quality data introduced into the merging process.
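As a toy illustration of merging-weight logic, inverse-error-variance weighting combines co-located estimates so that more accurate sources receive higher weight. This is a simplified stand-in for OMP's Bayesian performance-traced weights, not the paper's method.

```python
def merge_weights(error_vars):
    """Inverse-error-variance weights, normalized to sum to one."""
    inv = [1.0 / v for v in error_vars]
    s = sum(inv)
    return [x / s for x in inv]

def merge(estimates, error_vars):
    """Weighted combination of co-located precipitation estimates."""
    return sum(w * e for w, e in zip(merge_weights(error_vars), estimates))
```

A source with small error variance (e.g. a gauge-corrected product) dominates the merged value; a noisy source contributes little.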
[Research on living tree volume forecast based on PSO embedding SVM].
Jiao, You-Quan; Feng, Zhong-Ke; Zhao, Li-Xi; Xu, Wei-Heng; Cao, Zhong
2014-01-01
To establish a volume model, living trees have to be felled and divided into many sections, which is a destructive experiment; hundreds of thousands of trees are felled each year in China for this purpose. To solve this problem, a new method for accurate measurement of living tree volume without felling the tree is proposed in the present paper. In this method, new measuring and calculation procedures are applied using a photoelectric theodolite with auxiliary manual measurement. The diameter at breast height and the diameter at ground level were measured manually, and diameters at other heights were obtained by the photoelectric theodolite. The volume and height of each tree were calculated by special software programmed by the authors. Zhonglin aspen No. 107 trees were selected as the experimental objects, and 400 data records were obtained. Based on these data, a nonlinear intelligent living tree volume prediction model using a Particle Swarm Optimization algorithm with support vector machines (PSO-SVM) was established. Three hundred data records, including tree height and diameter at breast height, were randomly selected from the total of 400 records as input data, with tree volume as output data, using the PSO-SVM toolbox of Matlab 7.11; a tree volume model was thus obtained. One hundred data records were used to test the volume model. The results show that the multiple correlation coefficient (R2) between predicted and measured values is 0.91, which is 2% higher than the value calculated by the classic Spurr binary volume model, and the mean absolute error rate was reduced by 0.44%. Compared with the Spurr binary volume model, the PSO-SVM model has self-learning and self-adaptation abilities; moreover, with its high prediction accuracy, fast learning speed and small sample-size requirement, the PSO-SVM model shows good prospects and is worth wider application.
Optimization of fast dissolving etoricoxib tablets prepared by sublimation technique.
Patel, D M; Patel, M M
2008-01-01
The purpose of this investigation was to develop fast dissolving tablets of etoricoxib. Granules containing etoricoxib, menthol, crospovidone, aspartame and mannitol were prepared by a wet granulation technique. Menthol was sublimed from the granules by exposing them to vacuum. The porous granules were then compressed into tablets. Alternatively, tablets were first prepared and later exposed to vacuum. The tablets were evaluated for percentage friability and disintegration time. A 3(2) full factorial design was applied to investigate the combined effect of two formulation variables: the amounts of menthol and crospovidone. The results of multiple regression analysis indicated that, for obtaining fast dissolving tablets, an optimum amount of menthol and a higher percentage of crospovidone should be used. Response surface plots are also presented to graphically represent the effect of the independent variables on percentage friability and disintegration time. The validity of the generated mathematical model was tested by preparing a checkpoint batch. Sublimation of menthol from tablets resulted in rapid disintegration compared with tablets prepared from granules that were exposed to vacuum. The optimized tablet formulation was compared with conventional marketed tablets for percentage drug dissolved in 30 min (Q(30)) and dissolution efficiency after 30 min (DE(30)). From the results, it was concluded that fast dissolving tablets with improved etoricoxib dissolution can be prepared by sublimation from tablets containing a suitable subliming agent.
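A 3(2) full factorial design simply enumerates every combination of three levels of the two variables, giving nine runs. A sketch with placeholder level labels (the study's actual menthol and crospovidone amounts are not reproduced here):

```python
import itertools

# Placeholder level labels for the two formulation variables; the study's
# actual amounts of menthol and crospovidone are not reproduced here.
levels = {
    "menthol": ("low", "mid", "high"),
    "crospovidone": ("low", "mid", "high"),
}
runs = list(itertools.product(*levels.values()))  # all 9 runs of the 3^2 design
```

Each run is then prepared and measured, and the responses (friability, disintegration time) are fitted by multiple regression over the coded levels.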
42 CFR 3.110 - Assessment of PSO compliance.
Code of Federal Regulations, 2010 CFR
2010-10-01
... SAFETY ORGANIZATIONS AND PATIENT SAFETY WORK PRODUCT PSO Requirements and Agency Procedures § 3.110... subpart and for these purposes will be allowed to inspect the physical or virtual sites maintained...
PSO based PI controller design for a solar charger system.
Yau, Her-Terng; Lin, Chih-Jer; Liang, Qin-Cheng
2013-01-01
Due to the global energy crisis and severe environmental pollution, the photovoltaic (PV) system has become one of the most important renewable energy sources. Many previous studies on integrated solar charger systems focus only on load charge control or on switching between Maximum Power Point Tracking (MPPT) and charge control modes. This study used a two-stage system, which allows the overall portable solar energy charging system to implement MPPT and optimal charge control of a Li-ion battery simultaneously. First, this study designs a DC/DC boost converter for solar power generation, which uses the variable step size incremental conductance method (VSINC) to enable the solar cell to track the maximum power point at any time. The voltage was exported from the DC/DC boost converter to a DC/DC buck converter, so that the voltage dropped to a proper level for charging the battery. The charging system uses the constant current/constant voltage (CC/CV) method to charge the lithium battery. In order to obtain the optimum PI charge controller parameters, this study used an intelligent algorithm to determine them. According to the simulation and experimental results, the control parameters obtained by PSO have better performance than those from genetic algorithms (GAs). PMID:23766713
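The PI-gain selection step can be sketched as PSO minimizing a time-domain error integral. The first-order plant, gain bounds, and PSO coefficients below are illustrative assumptions, not the converter model from the paper, and the objective is a plain integral of squared error rather than the exact charging dynamics.

```python
import random

def ise(gains, setpoint=1.0, tau=0.5, dt=0.01, steps=500):
    """Integral of squared error for a PI loop around a first-order plant
    dx/dt = (u - x) / tau (an illustrative stand-in for the charger stage)."""
    kp, ki = gains
    x = integ = cost = 0.0
    for _ in range(steps):
        e = setpoint - x
        integ += e * dt
        u = kp * e + ki * integ          # PI control law
        x += dt * (u - x) / tau          # forward-Euler plant step
        cost += e * e * dt
    return cost

def pso(obj, bounds, n=15, iters=40, w=0.7, c1=1.5, c2=1.5, seed=7):
    """Global-best PSO over a box-bounded continuous search space."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pb, pbv = [p[:] for p in pos], [obj(p) for p in pos]
    g = min(range(n), key=lambda i: pbv[i])
    gb, gbv = pb[g][:], pbv[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pb[i][d] - pos[i][d])
                             + c2 * rng.random() * (gb[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            v = obj(pos[i])
            if v < pbv[i]:
                pb[i], pbv[i] = pos[i][:], v
                if v < gbv:
                    gb, gbv = pos[i][:], v
    return gb, gbv

(kp, ki), cost = pso(ise, bounds=[(0.0, 50.0), (0.0, 50.0)])
```

With these illustrative bounds the optimizer typically drives kp high for fast tracking and uses ki to remove the steady-state offset.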
Using Animal Instincts to Design Efficient Biomedical Studies via Particle Swarm Optimization
Qiu, Jiaheng; Chen, Ray-Bing; Wang, Weichung; Wong, Weng Kee
2014-01-01
Particle swarm optimization (PSO) is an increasingly popular metaheuristic algorithm for solving complex optimization problems. Its popularity is due to its repeated successes in finding an optimum or a near-optimal solution for problems in many applied disciplines. The algorithm makes no assumptions about the function to be optimized, and for biomedical experiments like those presented here, PSO typically finds the optimal solutions in a few seconds of CPU time on a garden-variety laptop. We apply PSO to find various types of optimal designs for several problems in the biological sciences and compare PSO performance relative to the differential evolution algorithm, another popular metaheuristic algorithm in the engineering literature. PMID:25285268
Multivariable optimization of liquid rocket engines using particle swarm algorithms
NASA Astrophysics Data System (ADS)
Jones, Daniel Ray
Liquid rocket engines are highly reliable, controllable, and efficient compared to other conventional forms of rocket propulsion. As such, they have seen wide use in the space industry and have become the standard propulsion system for launch vehicles, orbit insertion, and orbital maneuvering. Though these systems are well understood, historical optimization techniques are often inadequate due to the highly non-linear nature of the engine performance problem. In this thesis, a Particle Swarm Optimization (PSO) variant was applied to maximize the specific impulse of a finite-area combustion chamber (FAC) equilibrium flow rocket performance model by controlling the engine's oxidizer-to-fuel ratio and de Laval nozzle expansion and contraction ratios. In addition to the PSO-controlled parameters, engine performance was calculated based on propellant chemistry, combustion chamber pressure, and ambient pressure, which are provided as inputs to the program. The performance code was validated by comparison with NASA's Chemical Equilibrium with Applications (CEA) and the commercially available Rocket Propulsion Analysis (RPA) tool. Similarly, the PSO algorithm was validated by comparison with brute-force optimization, which calculates all possible solutions and subsequently determines which is the optimum. Particle Swarm Optimization was shown to be an effective optimizer capable of quick and reliable convergence for complex functions of multiple non-linear variables.
NASA Astrophysics Data System (ADS)
Wang, Xuewu; Shi, Yingpan; Ding, Dongyan; Gu, Xingsheng
2016-02-01
Spot-welding robots have a wide range of applications in manufacturing industries. There are usually many weld joints in a welding task, and a reasonable welding path to traverse these joints has a significant impact on welding efficiency. Traditional manual path planning can handle a few weld joints effectively, but when the number of joints is large it is difficult to obtain the optimal path; manual planning is also time consuming, inefficient, and cannot guarantee optimality. A double global optimum genetic algorithm-particle swarm optimization (GA-PSO), based on the GA and PSO algorithms, is proposed to solve the welding robot path planning problem, where the shortest collision-free path is used as the criterion to optimize the welding path. Besides analysis and verification of algorithm effectiveness, the simulation results indicate that the algorithm has strong searching ability and practicality, and is suitable for welding robot path planning.
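A common way to let continuous optimizers such as PSO handle the discrete visiting order in this kind of path planning problem is random-key decoding. The sketch below shows only that decoding and a path-length objective with hypothetical joint coordinates; it is not the paper's GA-PSO hybrid or its collision checking.

```python
import math

def decode_tour(keys):
    """Random-key decoding: a continuous particle position becomes a
    visiting order by sorting joint indices on their keys."""
    return sorted(range(len(keys)), key=lambda i: keys[i])

def tour_length(tour, pts):
    """Closed-path length through the weld-joint coordinates `pts`."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

# hypothetical weld-joint coordinates on a unit square
pts = [(0.0, 0.0), (0.0, 1.0), (1.0, 1.0), (1.0, 0.0)]
```

An optimizer then searches over the continuous key vectors, scoring each by `tour_length(decode_tour(keys), pts)`.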
Perceptual Dominant Color Extraction by Multidimensional Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Kiranyaz, Serkan; Uhlmann, Stefan; Ince, Turker; Gabbouj, Moncef
2010-12-01
Color is the major source of information widely used in image analysis and content-based retrieval. Extracting the dominant colors that are prominent in a visual scene is of utmost importance, since the human visual system primarily uses them for perception and similarity judgment. In this paper, we address dominant color extraction as a dynamic clustering problem and use techniques based on Particle Swarm Optimization (PSO) for finding the optimal (number of) dominant colors in a given color space, with a given distance metric and a proper validity index function. The first technique, so-called Multidimensional (MD) PSO, can seek both positional and dimensional optima. Nevertheless, MD PSO is still susceptible to premature convergence due to lack of divergence. To address this problem we then apply the Fractional Global Best Formation (FGBF) technique. In order to extract perceptually important colors and to further improve the discrimination factor for better clustering performance, an efficient color distance metric, which uses a fuzzy model for computing color (dis-)similarities over the HSV (or HSL) color space, is proposed. The comparative evaluations against the MPEG-7 dominant color descriptor show the superiority of the proposed technique.
A Novel Particle Swarm Optimization Approach for Grid Job Scheduling
NASA Astrophysics Data System (ADS)
Izakian, Hesam; Tork Ladani, Behrouz; Zamanifar, Kamran; Abraham, Ajith
This paper presents a Particle Swarm Optimization (PSO) algorithm for grid job scheduling. PSO is a population-based search algorithm based on the simulation of the social behavior of bird flocking and fish schooling. Particles fly through the problem search space to find optimal or near-optimal solutions. The proposed scheduler aims at minimizing makespan and flowtime simultaneously. Experimental studies show that the proposed approach is more efficient than the PSO approach reported in the literature.
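One way continuous PSO positions can encode a discrete schedule is to floor each task's coordinate to a machine index and let the swarm minimize makespan. The encoding, problem instance, and coefficients below are illustrative assumptions (flowtime is omitted for brevity), not the paper's formulation.

```python
import random

def makespan(assign, times, n_machines):
    """Completion time of the most loaded machine."""
    load = [0.0] * n_machines
    for t, m in zip(times, assign):
        load[m] += t
    return max(load)

def decode(pos, n_machines):
    """Floor each continuous coordinate to a machine index for that task."""
    return [min(int(p), n_machines - 1) for p in pos]

def pso_schedule(times, n_machines, n=20, iters=60, w=0.6, c1=1.6, c2=1.6, seed=3):
    rng = random.Random(seed)
    dim, hi = len(times), float(n_machines)
    obj = lambda p: makespan(decode(p, n_machines), times, n_machines)
    pos = [[rng.uniform(0.0, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pb, pbv = [p[:] for p in pos], [obj(p) for p in pos]
    g = min(range(n), key=lambda i: pbv[i])
    gb, gbv = pb[g][:], pbv[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pb[i][d] - pos[i][d])
                             + c2 * rng.random() * (gb[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], 0.0), hi - 1e-9)
            v = obj(pos[i])
            if v < pbv[i]:
                pb[i], pbv[i] = pos[i][:], v
                if v < gbv:
                    gb, gbv = pos[i][:], v
    return decode(gb, n_machines), gbv

# five tasks on two machines; the balanced split has makespan 6
assign, best = pso_schedule([3.0, 3.0, 2.0, 2.0, 2.0], 2)
```

A multi-objective version would score each particle on (makespan, flowtime) rather than makespan alone.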
Optimization techniques in molecular structure and function elucidation.
Sahinidis, Nikolaos V
2009-12-01
This paper discusses recent optimization approaches to the protein side-chain prediction problem, protein structural alignment, and molecular structure determination from X-ray diffraction measurements. The machinery employed to solve these problems has included algorithms from linear programming, dynamic programming, combinatorial optimization, and mixed-integer nonlinear programming. Many of these problems are purely continuous in nature. Yet, to this date, they have been approached mostly via combinatorial optimization algorithms that are applied to discrete approximations. The main purpose of the paper is to offer an introduction and motivate further systems approaches to these problems. PMID:20160866
NASA Astrophysics Data System (ADS)
Paasche, H.; Tronicke, J.
2012-04-01
In many near surface geophysical applications multiple tomographic data sets are routinely acquired to explore subsurface structures and parameters. Linking the model generation process of multi-method geophysical data sets can significantly reduce ambiguities in geophysical data analysis and model interpretation. Most geophysical inversion approaches rely on local search optimization methods used to find an optimal model in the vicinity of a user-given starting model, and the final solution may critically depend on that initial model. Alternatively, global optimization (GO) methods have been used to invert geophysical data. They explore the solution space in more detail and determine the optimal model independently of the starting model. Additionally, they can be used to find sets of optimal models, allowing a further analysis of model parameter uncertainties. Here we employ particle swarm optimization (PSO) to realize the global optimization of tomographic data. PSO is an emergent method based on swarm intelligence, characterized by fast and robust convergence towards optimal solutions. The fundamental principle of PSO is inspired by nature, since the algorithm mimics the behavior of a flock of birds searching for food in a search space. In PSO, a number of particles cruise a multi-dimensional solution space striving to find optimal model solutions explaining the acquired data. The particles communicate their positions and success and direct their movement according to the position of the currently most successful particle of the swarm. The success of a particle, i.e., the quality of the model currently found by a particle, must be uniquely quantifiable to identify the swarm leader. When jointly inverting disparate data sets, the optimization solution has to satisfy multiple optimization objectives, at least one for each data set. Unique determination of the most successful particle currently leading the swarm is not possible. Instead, only statements about the Pareto
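When each data set contributes its own objective, swarm leadership can only be argued via Pareto dominance. A minimal sketch of the standard dominance test and non-dominated filtering (minimization assumed; this is generic multi-objective machinery, not the authors' specific joint-inversion scheme):

```python
def dominates(a, b):
    """True if objective vector `a` Pareto-dominates `b` (minimization):
    no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]
```

In a multi-objective PSO, the "leader" is then drawn from the current front rather than being a single global best.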
NASA Astrophysics Data System (ADS)
Lin, Juan; Liu, Chenglian; Guo, Yongning
2014-10-01
The estimation of neural active sources from magnetoencephalography (MEG) data is a critical issue for both clinical neurology and brain function research. A widely accepted source-modeling technique for MEG involves calculating a set of equivalent current dipoles (ECDs). Source depth in the brain is one of the difficulties in MEG source localization. Particle swarm optimization (PSO) is widely used to solve various optimization problems. In this paper we discuss its ability and robustness in finding the global optimum at different depths of the brain when using the single equivalent current dipole (sECD) model and single time-sliced data. The results show that PSO is an effective global optimization method for MEG source localization when given one dipole at different depths.
Optimizing Basic French Skills Utilizing Multiple Teaching Techniques.
ERIC Educational Resources Information Center
Skala, Carol
This action research project examined the impact of foreign language teaching techniques on the language acquisition and retention of 19 secondary level French I students, focusing on student perceptions of the effectiveness and ease of four teaching techniques: total physical response, total physical response storytelling, literature approach,…
Adjoint Techniques for Topology Optimization of Structures Under Damage Conditions
NASA Technical Reports Server (NTRS)
Akgun, Mehmet A.; Haftka, Raphael T.
2000-01-01
The objective of this cooperative agreement was to seek computationally efficient ways to optimize aerospace structures subject to damage tolerance criteria. Optimization was to involve sizing as well as topology optimization. The work was done in collaboration with Steve Scotti, Chauncey Wu and Joanne Walsh at the NASA Langley Research Center. Computation of constraint sensitivity is normally the most time-consuming step of an optimization procedure. The cooperative work first focused on this issue and implemented the adjoint method of sensitivity computation (Haftka and Gurdal, 1992) in an optimization code (runstream) written in Engineering Analysis Language (EAL). The method was implemented both for bar and plate elements including buckling sensitivity for the latter. Lumping of constraints was investigated as a means to reduce the computational cost. Adjoint sensitivity computation was developed and implemented for lumped stress and buckling constraints. Cost of the direct method and the adjoint method was compared for various structures with and without lumping. The results were reported in two papers (Akgun et al., 1998a and 1999). It is desirable to optimize topology of an aerospace structure subject to a large number of damage scenarios so that a damage tolerant structure is obtained. Including damage scenarios in the design procedure is critical in order to avoid large mass penalties at later stages (Haftka et al., 1983). A common method for topology optimization is that of compliance minimization (Bendsoe, 1995) which has not been used for damage tolerant design. In the present work, topology optimization is treated as a conventional problem aiming to minimize the weight subject to stress constraints. Multiple damage configurations (scenarios) are considered. Each configuration has its own structural stiffness matrix and, normally, requires factoring of the matrix and solution of the system of equations. Damage that is expected to be tolerated is local
Cluster LEDs mixing optimization by lens design techniques.
Chien, Ming-Chin; Tien, Chung-Hao
2011-07-01
This paper presents a methodology analogous to a general lens design rule to optimize, step by step, the spectral power distribution of a white-light LED cluster with the highest possible color rendering and efficiency in a defined range of color temperatures. By examining a platform composed of four single-color LEDs and a phosphor-converted cool-white (CW) LED, we successfully validate the proposed algorithm and suggest the optimal operation range (correlated color temperature = 2600-8500 K) accompanied by a high color quality scale (CQS > 80 points) as well as high luminous efficiency (97% of the cluster's theoretical maximum value).
Optimal Use of Wire-Assisted Techniques and Precut Sphincterotomy
Lee, Tae Hoon; Park, Sang-Heum
2016-01-01
Various endoscopic techniques have been developed to overcome the difficulties in biliary or pancreatic access during endoscopic retrograde cholangiopancreatography, according to the preference of the endoscopist or the aim of the procedures. In terms of endoscopic methods, guidewire-assisted cannulation is a commonly used and well-known initial cannulation technique, or an alternative in cases of difficult cannulation. In addition, precut sphincterotomy encompasses a range of available rescue techniques, including conventional precut, precut fistulotomy, transpancreatic septotomy, and precut after insertion of pancreatic stent or pancreatic duct guidewire-guided septal precut. We present a literature review of guidewire-assisted cannulation as a primary endoscopic method and the precut technique for the facilitation of selective biliary access. PMID:27642848
Support Vector Machine Based on Adaptive Acceleration Particle Swarm Optimization
Abdulameer, Mohammed Hasan; Sheikh Abdullah, Siti Norul Huda; Othman, Zulaiha Ali
2014-01-01
Existing face recognition methods utilize particle swarm optimizer (PSO) and opposition based particle swarm optimizer (OPSO) to optimize the parameters of SVM. However, the utilization of random values in the velocity calculation decreases the performance of these techniques; that is, during the velocity computation, we normally use random values for the acceleration coefficients and this creates randomness in the solution. To address this problem, an adaptive acceleration particle swarm optimization (AAPSO) technique is proposed. To evaluate our proposed method, we employ both face and iris recognition based on AAPSO with SVM (AAPSO-SVM). In the face and iris recognition systems, performance is evaluated using two human face databases, YALE and CASIA, and the UBiris dataset. In this method, we initially perform feature extraction and then recognition on the extracted features. In the recognition process, the extracted features are used for SVM training and testing. During the training and testing, the SVM parameters are optimized with the AAPSO technique, and in AAPSO, the acceleration coefficients are computed using the particle fitness values. The parameters in SVM, which are optimized by AAPSO, perform efficiently for both face and iris recognition. A comparative analysis between our proposed AAPSO-SVM and the PSO-SVM technique is presented. PMID:24790584
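The core AAPSO idea described above, computing the acceleration coefficients from particle fitness values instead of drawing random multipliers, can be sketched as follows. The linear rule and the coefficient bounds are illustrative assumptions, not the paper's exact formula:

```python
def adaptive_coefficients(fit_i, fit_best, fit_worst, c_min=0.5, c_max=2.5):
    """Derive a particle's acceleration coefficients from its fitness
    instead of drawing random multipliers. r is 0 for the swarm's best
    particle and 1 for the worst; good particles lean on the social
    term, poor ones on the cognitive term. (Illustrative rule only.)"""
    r = (fit_i - fit_best) / (fit_worst - fit_best + 1e-12)
    c1 = c_min + r * (c_max - c_min)    # cognitive coefficient
    c2 = c_max - r * (c_max - c_min)    # social coefficient
    return c1, c2
```

Because the coefficients are now deterministic functions of the swarm's state, the "randomness in the solution" that the abstract attributes to stochastic coefficients is removed.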
Optimization techniques for OpenCL-based linear algebra routines
NASA Astrophysics Data System (ADS)
Kozacik, Stephen; Fox, Paul; Humphrey, John; Kuller, Aryeh; Kelmelis, Eric; Prather, Dennis W.
2014-06-01
The OpenCL standard for general-purpose parallel programming allows a developer to target highly parallel computations towards graphics processing units (GPUs), CPUs, co-processing devices, and field programmable gate arrays (FPGAs). The computationally intense domains of linear algebra and image processing have shown significant speedups when implemented in the OpenCL environment. A major benefit of OpenCL is that a routine written for one device can be run across many different devices and architectures; however, a kernel optimized for one device may not exhibit high performance when executed on a different device. For this reason, kernels must typically be hand-optimized for every target device family. Due to the large number of parameters that can affect performance, hand tuning for every possible device is impractical and often produces suboptimal results. For this work, we focused on optimizing the general matrix multiplication routine. General matrix multiplication is used as a building block for many linear algebra routines and often comprises a large portion of the run-time. Prior work has shown this routine to be a good candidate for high-performance implementation in OpenCL. We selected several candidate algorithms from the literature that are suitable for parameterization. We then developed parameterized kernels implementing these algorithms using only portable OpenCL features. Our implementation queries device information supplied by the OpenCL runtime and utilizes this as well as user input to generate a search space that satisfies device and algorithmic constraints. Preliminary results from our work confirm that optimizations are not portable from one device to the next, and show the benefits of automatic tuning. Using a standard set of tuning parameters seen in the literature for the NVIDIA Fermi architecture achieves a performance of 1.6 TFLOPS on an AMD 7970 device, while automatically tuning achieves a peak of 2.7 TFLOPS.
Optimal feedback control infinite dimensional parabolic evolution systems: Approximation techniques
NASA Technical Reports Server (NTRS)
Banks, H. T.; Wang, C.
1989-01-01
A general approximation framework is discussed for computation of optimal feedback controls in linear quadratic regulator problems for nonautonomous parabolic distributed parameter systems. This is done in the context of a theoretical framework using general evolution systems in infinite dimensional Hilbert spaces. Conditions are discussed for preservation under approximation of stabilizability and detectability hypotheses on the infinite dimensional system. The special case of periodic systems is also treated.
Asynchronous global optimization techniques for medium and large inversion problems
Pereyra, V.; Koshy, M.; Meza, J.C.
1995-04-01
We discuss global optimization procedures adequate for seismic inversion problems. We explain how to save function evaluations (which may involve large-scale ray tracing or other expensive operations) by creating a database of information on which parts of parameter space have already been inspected. It is also shown how a correct parallel implementation using PVM speeds up the process almost linearly with respect to the number of processors, provided that the function evaluations are expensive enough to offset the communication overhead.
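The evaluation-saving database described in this abstract amounts to memoizing the objective function. A minimal sketch, assuming points are matched after rounding to a fixed precision:

```python
class MemoizedObjective:
    """Wrap an expensive objective (e.g. one requiring large-scale ray
    tracing) with a database of already-inspected points. Matching
    points after rounding is an assumption of this sketch."""
    def __init__(self, f, decimals=6):
        self.f = f
        self.decimals = decimals
        self.cache = {}
        self.calls = 0                  # number of real evaluations

    def __call__(self, x):
        key = tuple(round(v, self.decimals) for v in x)
        if key not in self.cache:
            self.calls += 1             # cache miss: pay the full cost
            self.cache[key] = self.f(x)
        return self.cache[key]

obj = MemoizedObjective(lambda x: sum(v * v for v in x))
obj([1.0, 2.0]); obj([1.0, 2.0]); obj([3.0, 4.0])
# only two real evaluations: the repeated point hit the database
```

A global optimizer that revisits regions of parameter space then reuses stored values instead of re-running the expensive forward model.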
Particle swarm optimization applied to impulsive orbital transfers
NASA Astrophysics Data System (ADS)
Pontani, Mauro; Conway, Bruce A.
2012-05-01
The particle swarm optimization (PSO) technique is a population-based stochastic method developed in recent years and successfully applied in several fields of research. It mimics the unpredictable motion of bird flocks while searching for food, with the intent of determining the optimal values of the unknown parameters of the problem under consideration. At the end of the process, the best particle (i.e. the best solution with reference to the objective function) is expected to contain the globally optimal values of the unknown parameters. The central idea underlying the method is contained in the formula for velocity updating. This formula includes three terms with stochastic weights. This research applies the particle swarm optimization algorithm to the problem of optimizing impulsive orbital transfers. More specifically, the following problems are considered and solved with the PSO algorithm: (i) determination of the globally optimal two- and three-impulse transfer trajectories between two coplanar circular orbits; (ii) determination of the optimal transfer between two coplanar, elliptic orbits with arbitrary orientation; (iii) determination of the optimal two-impulse transfer between two circular, non-coplanar orbits; (iv) determination of the globally optimal two-impulse transfer between two non-coplanar elliptic orbits. Despite its intuitiveness and simplicity, the particle swarm optimization method proves to be capable of effectively solving the orbital transfer problems of interest with great numerical accuracy.
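The three-term velocity-update formula at the heart of the method can be illustrated with a toy gbest PSO on a sphere function; this is a generic sketch, not the authors' tuned orbital-transfer solver:

```python
import random

def pso_minimize(f, dim, n=20, iters=150, w=0.7, c1=1.5, c2=1.5, seed=7):
    """Minimal gbest PSO: velocity = inertia + stochastically weighted
    cognitive and social terms, as in the canonical update formula."""
    rng = random.Random(seed)
    X = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P = [x[:] for x in X]               # personal best positions
    Pf = [f(x) for x in X]              # personal best values
    gi = min(range(n), key=Pf.__getitem__)
    G, Gf = P[gi][:], Pf[gi]            # global best so far
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()      # stochastic weights
                V[i][d] = (w * V[i][d]                       # inertia
                           + c1 * r1 * (P[i][d] - X[i][d])   # cognitive
                           + c2 * r2 * (G[d] - X[i][d]))     # social
                X[i][d] += V[i][d]
            fx = f(X[i])
            if fx < Pf[i]:
                P[i], Pf[i] = X[i][:], fx
                if fx < Gf:
                    G, Gf = X[i][:], fx
    return G, Gf

best, val = pso_minimize(lambda x: sum(v * v for v in x), dim=3)
```

In an orbital-transfer setting, the decision vector would instead hold impulse magnitudes and firing times, and `f` would evaluate the total characteristic velocity.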
An Optimal Cell Detection Technique for Automated Patch Clamping
NASA Technical Reports Server (NTRS)
McDowell, Mark; Gray, Elizabeth
2004-01-01
While there are several hardware techniques for the automated patch clamping of cells that describe the equipment apparatus used for patch clamping, very few explain the science behind the actual technique of locating the ideal cell for a patch clamping procedure. We present a machine vision approach to patch clamping cell selection by developing an intelligent algorithm that gives the user the ability to determine a good cell to patch clamp in an image within one second. This technique will aid the user in determining the best candidates for patch clamping and will ultimately save time, increase efficiency, and reduce cost. The ultimate goal is to combine intelligent processing with instrumentation and controls in order to produce a complete turnkey automated patch clamping system capable of accurately and reliably patch clamping cells with a minimum amount of human intervention. We present a unique technique that identifies good patch clamping cell candidates based on feature metrics of a cell's (x, y) position, major axis length, minor axis length, area, elongation, roundness, smoothness, angle of orientation, thinness, and whether or not the cell is only partially in the field of view. A patent is pending for this research.
76 FR 7854 - Patient Safety Organizations: Voluntary Delisting From Quality Excellence, Inc./PSO
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-11
... Delisting From Quality Excellence, Inc./PSO AGENCY: Agency for Healthcare Research and Quality (AHRQ), HHS. ACTION: Notice of Delisting. SUMMARY: Quality Excellence Inc./PSO: AHRQ has accepted a notification of voluntary relinquishment from Quality Excellence Inc./PSO, a component entity of Arkansas Foundation...
DyHAP: Dynamic Hybrid ANFIS-PSO Approach for Predicting Mobile Malware
Afifi, Firdaus; Anuar, Nor Badrul; Shamshirband, Shahaboddin; Choo, Kim-Kwang Raymond
2016-01-01
To deal with the large number of malicious mobile applications (e.g. mobile malware), a number of malware detection systems have been proposed in the literature. In this paper, we propose a hybrid method to find the optimum parameters that can be used to facilitate mobile malware identification. We also present a multi-agent system architecture comprising three system agents (i.e. sniffer, extraction, and selection agents) to capture and manage the pcap file for the data preparation phase. In our hybrid approach, we combine an adaptive neuro fuzzy inference system (ANFIS) and particle swarm optimization (PSO). Evaluations using data captured on a real-world Android device and the MalGenome dataset demonstrate the effectiveness of our approach, in comparison to two hybrid optimization methods, namely differential evolution (ANFIS-DE) and ant colony optimization (ANFIS-ACO). PMID:27611312
Towards the novel reasoning among particles in PSO by the use of RDF and SPARQL.
Fister, Iztok; Yang, Xin-She; Ljubič, Karin; Fister, Dušan; Brest, Janez; Fister, Iztok
2014-01-01
The significant development of the Internet has posed new challenges, and many new programming tools have been developed to address them. Today, the semantic web is a modern paradigm for representing and accessing knowledge data on the Internet. This paper uses semantic tools such as the resource description framework (RDF) and the RDF query language SPARQL for optimization purposes. These tools are combined with particle swarm optimization (PSO), and the selection of the best solutions depends on their fitness. Instead of the local best solution, a neighborhood of solutions for each particle can be defined and used for the calculation of the new position, based on key ideas from the semantic web domain. Preliminary results on ten benchmark functions are promising, and thus this method should be investigated further. PMID:24987725
Pourjafari, Ebrahim; Mojallali, Hamed
2011-04-01
Voltage stability is one of the most challenging concerns that power utilities are confronted with, and this paper proposes a voltage control scheme based on Model Predictive Control (MPC) to overcome this kind of instability. Voltage instability is closely related to the adequacy of reactive power and the response of Under Load Tap Changers (ULTCs) to the voltage drop after the occurrence of a contingency. Therefore, the proposed method utilizes reactive power injection and tap changing to avoid voltage collapse. Considering the discrete nature of the changes in the tap ratio and in the reactive power injected by capacitor banks, the search area for the optimizer of MPC is an integer domain; consequently, a modified discrete multi-valued Particle Swarm Optimization (PSO) is used to perform this optimization. Simulation results of applying the proposed control scheme to a 4-bus system confirm its capability to prevent voltage collapse. PMID:21251650
NASA Astrophysics Data System (ADS)
Izah Anuar, Nurul; Saptari, Adi
2016-02-01
This paper addresses the types of particle representation (encoding) procedures in a population-based stochastic optimization technique for solving scheduling problems known in the job-shop manufacturing environment. It evaluates and compares the performance of different particle representation procedures in Particle Swarm Optimization (PSO) for solving Job-shop Scheduling Problems (JSP). Particle representation procedures refer to the mapping between the particle position in PSO and the scheduling solution in JSP, an essential step so that each particle in PSO can represent a schedule in JSP. Three procedures are used in this study: Operation and Particle Position Sequence (OPPS), random keys representation, and a random-key encoding scheme. These procedures were tested on the FT06 and FT10 benchmark problems available in the OR-Library, where the objective function is to minimize the makespan, using MATLAB software. Based on the experimental results, OPPS gives the best performance on both benchmark problems. The contribution of this paper is that it demonstrates to practitioners involved in complex scheduling problems that different particle representation procedures can have significant effects on the performance of PSO in solving JSP.
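Random-key decoding, one of the representations compared in this study, can be sketched as follows. The decoding rule here (argsort of the keys, job index by modulo) is one common variant and may differ in detail from the paper's scheme:

```python
def decode_random_keys(position, n_jobs, n_machines):
    """Decode a continuous PSO particle into a JSP operation sequence:
    each dimension holds a real-valued key, sorting the keys yields an
    operation order, and index mod n_jobs names the job. (One common
    random-key variant; illustrative only.)"""
    order = sorted(range(len(position)), key=position.__getitem__)
    return [i % n_jobs for i in order]

# 2 jobs x 2 machines -> 4 operations; smallest key is scheduled first
seq = decode_random_keys([0.9, 0.1, 0.4, 0.7], n_jobs=2, n_machines=2)
# seq == [1, 0, 1, 0]: job indices in key order
```

Because any real-valued particle decodes to a valid job sequence, the standard continuous PSO update can be applied unchanged, which is the appeal of random-key encodings.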
Coordination and Control of Multiple Spacecraft using Convex Optimization Techniques
NASA Astrophysics Data System (ADS)
How, Jonathan P.
2002-06-01
Formation flying of multiple spacecraft is an enabling technology for many future space science missions. These future missions will, for example, use the highly coordinated, distributed array of vehicles for earth mapping interferometers and synthetic aperture radar. This thesis presents coordination and control algorithms designed for a fleet of spacecraft. These algorithms are embedded in a hierarchical fleet architecture that includes a high-level coordinator for the fleet maneuvers used to form, re-size, or re-target the formation configuration and low-level controllers to generate and implement the individual control inputs for each vehicle. The trajectory and control problems are posed as linear programming (LP) optimizations to solve for the minimum fuel maneuvers. The combined result of the high-level coordination and low-level controllers is a very flexible optimization framework that can be used off-line to analyze aspects of a mission design and in real-time as part of an on-board autonomous formation flying control system. This thesis also investigates several critical issues associated with the implementation of this formation flying approach. In particular, modifications to the LP algorithms are presented to: include robustness to sensor noise, include actuator constraints, ensure that the optimization solutions are always feasible, and reduce the LP solution times. Furthermore, the dynamics for the control problem are analyzed in terms of two key issues: 1) what dynamics model should be used to specify the desired state to maintain a passive aperture; and 2) what dynamics model should be used in the LP to represent the motion about this state. Several linearized models of the relative dynamics are considered in this analysis, including Hill's equations for circular orbits, modified linear dynamics that partially account for the J2 effects, and Lawden's equations for eccentric orbits.
Searching for Planets using Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Chambers, John E.
2008-05-01
The Doppler radial velocity technique has been highly successful in discovering planetary-mass companions in orbit around nearby stars. A typical data set contains around one hundred instantaneous velocities for the star, spread over a period of several years, with each observation measuring only the radial component of velocity. From this data set, one would like to determine the masses and orbital parameters of the system of planets responsible for the star's reflex motion. Assuming coplanar orbits, each planet is characterized by five parameters, with an additional parameter for each telescope used to make observations, representing the instrument's velocity offset. The large number of free parameters and the relatively sparse data sets make the fitting process challenging when multiple planets are present, especially if some of these objects have low masses. Conventional approaches using periodograms often perform poorly when the orbital periods are not separated by large amounts or the longest period is comparable to the length of the data set. Here, I will describe a new approach to fitting Doppler radial velocity data sets using particle swarm optimization (PSO). I will describe how the PSO method works, and show examples of PSO fits to existing radial velocity data sets, with comparisons to published solutions and those submitted to the Systemic website (http://www.oklo.org).
Decomposition technique and optimal trajectories for the aeroassisted flight experiment
NASA Technical Reports Server (NTRS)
Miele, A.; Wang, T.; Deaton, A. W.
1990-01-01
An actual geosynchronous Earth orbit-to-low Earth orbit (GEO-to-LEO) transfer is considered with reference to the aeroassisted flight experiment (AFE) spacecraft, and optimal trajectories are determined by minimizing the total characteristic velocity. The optimization is performed with respect to the time history of the controls (angle of attack and angle of bank), the entry path inclination and the flight time being free. Two transfer maneuvers are considered: direct ascent (DA) to LEO and indirect ascent (IA) to LEO via parking Earth orbit (PEO). By taking into account certain assumptions, the complete system can be decoupled into two subsystems: one describing the longitudinal motion and one describing the lateral motion. The angle of attack history, the entry path inclination, and the flight time are determined via the longitudinal motion subsystem. In this subsystem, the difference between the instantaneous bank angle and a constant bank angle is minimized in the least square sense subject to the specified orbital inclination requirement. Both the angles of attack and the angle of bank are shown to be constant. This result has considerable importance in the design of nominal trajectories to be used in the guidance of AFE and aeroassisted orbital transfer (AOT) vehicles.
Decomposition technique and optimal trajectories for the aeroassisted flight experiment
NASA Technical Reports Server (NTRS)
Miele, A.; Wang, T.; Deaton, A. W.
1991-01-01
An actual geosynchronous earth orbit-to-low earth orbit (GEO-to-LEO) transfer is considered with reference to the aeroassisted flight experiment (AFE) spacecraft, and optimal trajectories are determined by minimizing the total characteristic velocity. The optimization is performed with respect to the time history of the controls (angle of attack and angle of bank), the entry path inclination and the flight time being free. Two transfer maneuvers are considered: direct ascent (DA) to LEO and indirect ascent (IA) to LEO via parking earth orbit (PEO). By taking into account certain assumptions, the complete system can be decoupled into two subsystems: one describing the longitudinal motion and one describing the lateral motion. The angle of attack history, the entry path inclination, and the flight time are determined via the longitudinal motion subsystem. In this subsystem, the difference between the instantaneous bank angle and a constant bank angle is minimized in the least square sense subject to the specified orbital inclination requirement. Both the angles of attack and the angle of bank are shown to be constant. This result has considerable importance in the design of nominal trajectories to be used in the guidance of AFE and aeroassisted orbital transfer (AOT) vehicles.
Nonparametric probability density estimation by optimization theoretic techniques
NASA Technical Reports Server (NTRS)
Scott, D. W.
1976-01-01
Two nonparametric probability density estimators are considered. The first is the kernel estimator. The problem of choosing the kernel scaling factor based solely on a random sample is addressed. An interactive mode is discussed and an algorithm proposed to choose the scaling factor automatically. The second nonparametric probability estimate uses penalty function techniques with the maximum likelihood criterion. A discrete maximum penalized likelihood estimator is proposed and is shown to be consistent in the mean square error. A numerical implementation technique for the discrete solution is discussed and examples displayed. An extensive simulation study compares the integrated mean square error of the discrete and kernel estimators. The robustness of the discrete estimator is demonstrated graphically.
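The kernel estimator whose scaling factor this paper studies can be sketched as a Gaussian kernel density estimate; here the bandwidth `h` is supplied by hand rather than chosen automatically from the sample as the paper proposes:

```python
import math

def kernel_density(sample, x, h):
    """Gaussian kernel density estimate at point x with bandwidth
    (scaling factor) h: an average of Gaussian bumps, one per datum."""
    norm = 1.0 / (len(sample) * h * math.sqrt(2.0 * math.pi))
    return norm * sum(math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in sample)

sample = [0.0, 1.0, 2.0]
density_at_center = kernel_density(sample, 1.0, h=0.5)
```

Too small an `h` produces a spiky, undersmoothed estimate and too large an `h` washes out structure, which is why data-driven selection of the scaling factor is the central problem addressed above.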
A technique for optimizing the design of power semiconductor devices
NASA Technical Reports Server (NTRS)
Schlegel, E. S.
1976-01-01
A technique is described that provides a basis for predicting whether any device design change will improve or degrade the unavoidable trade-off that must be made between the conduction loss and the turn-off speed of fast-switching high-power thyristors. The technique makes use of a previously reported method by which, for a given design, this trade-off was determined for a wide range of carrier lifetimes. It is shown that by extending this technique, one can predict how other design variables affect this trade-off. The results show that for relatively slow devices the design can be changed to decrease the current gains to improve the turn-off time without significantly degrading the losses. On the other hand, for devices having fast turn-off times design changes can be made to increase the current gain to decrease the losses without a proportionate increase in the turn-off time. Physical explanations for these results are proposed.
Sieh Kiong, Tiong; Tariqul Islam, Mohammad; Ismail, Mahamod; Salem, Balasem
2014-01-01
Linear constraint minimum variance (LCMV) is an adaptive beamforming technique commonly applied to cancel interfering signals and to steer a strong beam toward the desired signal through its computed weight vectors. However, the weights computed by LCMV are usually unable to form the radiation beam toward the target user precisely, nor to reduce interference adequately by placing nulls at the interference sources. It is difficult to improve and optimize the LCMV beamforming technique through a conventional empirical approach. To address this problem, artificial intelligence (AI) techniques are explored to enhance the LCMV beamforming ability. In this paper, particle swarm optimization (PSO), dynamic mutated artificial immune system (DM-AIS), and gravitational search algorithm (GSA) are incorporated into the existing LCMV technique to improve its weights. The simulation results demonstrate that the received signal-to-interference-plus-noise ratio (SINR) of the target user can be significantly improved by integrating PSO, DM-AIS, and GSA into LCMV through suppression of interference in undesired directions. Furthermore, GSA proves a more effective technique for LCMV beamforming optimization than PSO. The algorithms were implemented in MATLAB. PMID:25147859
Finger joint force minimization in pianists using optimization techniques.
Harding, D C; Brandt, K D; Hillberry, B M
1993-12-01
A numerical optimization procedure was used to determine finger positions that minimize and maximize finger tendon and joint force objective functions during piano play. A biomechanical finger model for sagittal plane motion, based on finger anatomy, was used to investigate finger tendon tensions and joint reaction forces for finger positions used in playing the piano. For commonly used piano key strike positions, flexor and intrinsic muscle tendon tensions ranged from 0.7 to 3.2 times the fingertip key strike force, while resultant inter-joint compressive forces ranged from 2 to 7 times the magnitude of the fingertip force. In general, use of a curved finger position, with a large metacarpophalangeal joint flexion angle and a small proximal interphalangeal joint flexion angle, reduces flexor tendon tension and resultant finger joint force.
NASA Technical Reports Server (NTRS)
Sreekanta Murthy, T.
1992-01-01
Results of the investigation of formal nonlinear programming-based numerical optimization techniques for helicopter airframe vibration reduction are summarized. The objective and constraint functions and the sensitivity expressions used in the formulation of airframe vibration optimization problems are presented and discussed. Implementation of a new computational procedure based on MSC/NASTRAN and CONMIN in a computer program system called DYNOPT for optimizing airframes subject to strength, frequency, dynamic response, and dynamic stress constraints is described. An optimization methodology is proposed which is thought to provide a new way of applying formal optimization techniques during the various phases of the airframe design process. Numerical results obtained from the application of the DYNOPT optimization code to a helicopter airframe are discussed.
NASA Astrophysics Data System (ADS)
Wu, Q.; Xiong, F.; Wang, F.; Xiong, Y.
2016-10-01
In order to reduce the computational time, a fully parallel implementation of the particle swarm optimization (PSO) algorithm on a graphics processing unit (GPU) is presented. Instead of being executed on the central processing unit (CPU) sequentially, PSO is executed in parallel via the GPU on the compute unified device architecture (CUDA) platform. The processes of fitness evaluation, updating of velocity and position of all particles are all parallelized and introduced in detail. Comparative studies on the optimization of four benchmark functions and a trajectory optimization problem are conducted by running PSO on the GPU (GPU-PSO) and CPU (CPU-PSO). The impact of design dimension, number of particles and size of the thread-block in the GPU and their interactions on the computational time is investigated. The results show that the computational time of the developed GPU-PSO is much shorter than that of CPU-PSO, with comparable accuracy, which demonstrates the remarkable speed-up capability of GPU-PSO.
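The particle update loop that the paper parallelizes on the GPU can be sketched sequentially. This is a generic global-best PSO with conventional inertia and acceleration coefficients, not the authors' CUDA implementation; the bounds and benchmark are illustrative:

```python
import random

def pso(objective, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best PSO: each particle is pulled toward its personal
    best (pbest) and the swarm's global best (gbest)."""
    rng = random.Random(seed)
    X = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    pbest_val = [objective(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):           # fitness evaluation and the velocity/
            for d in range(dim):               # position updates below are exactly the
                r1, r2 = rng.random(), rng.random()  # steps the paper runs per-thread
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                X[i][d] += V[i][d]
            val = objective(X[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = X[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = X[i][:], val
    return gbest, gbest_val

sphere = lambda x: sum(xi * xi for xi in x)   # benchmark: global minimum 0 at origin
best, best_val = pso(sphere, dim=4)
```

On a GPU, each particle (or each dimension) maps to a thread, which is why the speed-up in the study grows with swarm size and problem dimension.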
Optimal fractional delay-IIR filter design using cuckoo search algorithm.
Kumar, Manjeet; Rawat, Tarun Kumar
2015-11-01
This paper applies a novel global meta-heuristic optimization algorithm, the cuckoo search algorithm (CSA), to determine the optimal coefficients of a fractional delay-infinite impulse response (FD-IIR) filter that meet the ideal frequency response characteristics. Since fractional delay-IIR filter design is a multi-modal optimization problem, it cannot be solved efficiently by conventional gradient-based optimization techniques. A weighted least squares (WLS) based fitness function is used to improve the performance to a great extent. FD-IIR filters of different orders have been designed using the CSA, and the results of the proposed CSA-based approach have been compared with those of well-accepted evolutionary algorithms, namely the genetic algorithm (GA) and particle swarm optimization (PSO). The simulation and statistical results affirm that the proposed CSA-based approach outperforms GA and PSO, not only in convergence rate but also in the optimal performance of the designed FD-IIR filter (i.e., smaller magnitude error, smaller phase error, and higher percentage improvement in magnitude and phase error). The absolute magnitude and phase errors obtained for the designed 5th-order FD-IIR filter are as low as 0.0037 and 0.0046, respectively. The percentage improvements in magnitude error for the CSA-based 5th-order FD-IIR design with respect to GA and PSO are 80.93% and 74.83%, respectively, and in phase error 76.04% and 71.25%, respectively. PMID:26391486
Preliminary research on abnormal brain detection by wavelet-energy and quantum- behaved PSO.
Zhang, Yudong; Ji, Genlin; Yang, Jiquan; Wang, Shuihua; Dong, Zhengchao; Phillips, Preetha; Sun, Ping
2016-04-29
It is important to detect abnormal brains accurately and early. Wavelet energy (WE) is a successful feature descriptor that has achieved excellent performance in various applications; hence, we proposed a WE-based new approach for automated abnormal brain detection and report its preliminary results in this study. A kernel support vector machine (KSVM) was used as the classifier, and quantum-behaved particle swarm optimization (QPSO) was introduced to optimize the weights of the SVM. The results based on a 5 × 5-fold cross validation showed that the performance of the proposed WE + QPSO-KSVM was superior to ``DWT + PCA + BP-NN'', ``DWT + PCA + RBF-NN'', ``DWT + PCA + PSO-KSVM'', ``WE + BPNN'', ``WE + KSVM'', and ``DWT + PCA + GA-KSVM'' w.r.t. sensitivity, specificity, and accuracy. The work provides a novel means to detect abnormal brains with excellent performance. PMID:27163327
Cat Swarm Optimization algorithm for optimal linear phase FIR filter design.
Saha, Suman Kumar; Ghoshal, Sakti Prasad; Kar, Rajib; Mandal, Durbadal
2013-11-01
In this paper a new meta-heuristic search method, the Cat Swarm Optimization (CSO) algorithm, is applied to determine the optimal impulse response coefficients of FIR low pass, high pass, band pass and band stop filters, attempting to meet the respective ideal frequency response characteristics. CSO is modelled on the behaviour of cats and is composed of two sub-models. In CSO, one can decide how many cats are used in each iteration. Every cat has its own position composed of M dimensions, a velocity for each dimension, a fitness value representing how well the cat fits the fitness function, and a flag identifying whether the cat is in seeking mode or tracing mode. The final solution is the best position of one of the cats; CSO keeps the best solution found until the end of the iterations. The results of the proposed CSO-based approach have been compared to those of other well-known optimization methods such as the Real Coded Genetic Algorithm (RGA), standard Particle Swarm Optimization (PSO), and Differential Evolution (DE). The CSO-based results confirm the superiority of CSO for solving FIR filter design problems: the performances of the CSO-designed FIR filters have proven superior to those obtained by RGA, conventional PSO, and DE. The simulation results also demonstrate that CSO is the best optimizer among the compared techniques, not only in convergence speed but also in the optimal performance of the designed filters.
PSO-based methods for medical image registration and change assessment of pigmented skin
NASA Astrophysics Data System (ADS)
Kacenjar, Steve; Zook, Matthew; Balint, Michael
2011-03-01
's back topography. Since the skin is a deformable membrane, this process only provides an initial condition for subsequent refinements in aligning the localized topography of the skin. To achieve a refined enhancement, a Particle Swarm Optimizer (PSO) is used to optimally determine the local camera models associated with a generalized geometric transform. Here the optimization process is driven using the minimization of entropy between the multiple time-separated images. Once the camera models are corrected for local skin deformations, the images are compared using both pixel-based and regional-based methods. Limits on the detectability of change are established by the fidelity to which the algorithm corrects for local skin deformation and background alterations. These limits provide essential information in establishing early-warning thresholds for Melanoma detection. Key to this work is the development of a PSO alignment algorithm to perform the refined alignment in local skin topography between the time sequenced imagery (TSI). Test and validation of this alignment process is achieved using a forward model producing known geometric artifacts in the images and afterwards using a PSO algorithm to demonstrate the ability to identify and correct for these artifacts. Specifically, the forward model introduces local translational, rotational, and magnification changes within the image. These geometric modifiers are expected during TSI acquisition because of logistical issues to precisely align the patient to the image recording geometry and is therefore of paramount importance to any viable image registration system. This paper shows that the PSO alignment algorithm is effective in autonomously determining and mitigating these geometric modifiers. The degree of efficacy is measured by several statistically and morphologically based pre-image filtering operations applied to the TSI imagery before applying the PSO alignment algorithm. 
These trade studies show that global
Optimized digital filtering techniques for radiation detection with HPGe detectors
NASA Astrophysics Data System (ADS)
Salathe, Marco; Kihm, Thomas
2016-02-01
This paper describes state-of-the-art digital filtering techniques that are part of GEANA, an automatic data analysis software used for the GERDA experiment. The discussed filters include a novel, nonlinear correction method for ballistic deficits, which is combined with one of three shaping filters: a pseudo-Gaussian, a modified trapezoidal, or a modified cusp filter. The performance of the filters is demonstrated with a 762 g Broad Energy Germanium (BEGe) detector, produced by Canberra, that measures γ-ray lines from radioactive sources in an energy range between 59.5 and 2614.5 keV. At 1332.5 keV, together with the ballistic deficit correction method, all filters produce a comparable energy resolution of ~1.61 keV FWHM. This value is superior to those measured by the manufacturer and those found in publications with detectors of a similar design and mass. At 59.5 keV, the modified cusp filter without a ballistic deficit correction produced the best result, with an energy resolution of 0.46 keV. It is observed that the loss in resolution by using a constant shaping time over the entire energy range is small when using the ballistic deficit correction method.
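As a rough illustration of trapezoidal shaping (not the GEANA filters, and omitting the ballistic-deficit and pole-zero corrections that real preamplifier pulses need), a trapezoidal shaper can be written as an FIR convolution with a trapezoid weighting function; the rise and flat-top lengths below are arbitrary:

```python
def trapezoid_kernel(rise, flat):
    """Symmetric trapezoid: `rise` samples up, `flat` samples at 1.0, then down."""
    up = [(i + 1) / rise for i in range(rise)]
    down = [(rise - i) / rise for i in range(1, rise)]
    return up + [1.0] * flat + down

def convolve(x, k):
    """Plain full-length FIR convolution."""
    y = [0.0] * (len(x) + len(k) - 1)
    for i, xi in enumerate(x):
        for j, kj in enumerate(k):
            y[i + j] += xi * kj
    return y

# A delta pulse reproduces the kernel; the flat top is what gives the shaper
# its tolerance to variations in charge-collection time.
shaped = convolve([1.0], trapezoid_kernel(rise=4, flat=3))
```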
Calculation of free fall trajectories based on numerical optimization techniques
NASA Technical Reports Server (NTRS)
1972-01-01
The development of a means of computing free-fall (nonthrusting) trajectories from one specified point in the solar system to another specified point in the solar system in a given amount of time was studied. The problem is that of solving a two-point boundary value problem for which the initial slope is unknown. Two standard methods of attack exist for solving two-point boundary value problems. The first method is known as the initial value or shooting method. The second method of attack for two-point boundary value problems is to approximate the nonlinear differential equations by an appropriate linearized set. Parts of both boundary value problem solution techniques described above are used. A complete velocity history is guessed such that the corresponding position history satisfies the given boundary conditions at the appropriate times. An iterative procedure is then followed until the last guessed velocity history and the velocity history obtained from integrating the acceleration history agree to some specified tolerance everywhere along the trajectory.
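The first, "shooting" approach the abstract describes can be sketched on a toy boundary value problem: guess the unknown initial slope, integrate forward, and correct the guess until the terminal boundary condition is met. The equation y'' = -y and the bisection correction below are illustrative choices only (the paper's own scheme iterates on a guessed velocity history instead):

```python
def shoot(slope, n=2000):
    """Integrate the test equation y'' = -y from t = 0 to 1 with y(0) = 0 and
    y'(0) = slope, using a semi-implicit Euler step; returns y(1)."""
    h = 1.0 / n
    y, v = 0.0, slope
    for _ in range(n):
        v += -y * h
        y += v * h
    return y

def solve_bvp(target, lo=-10.0, hi=10.0):
    """Bisect on the unknown initial slope until y(1) matches the target."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if (shoot(lo) - target) * (shoot(mid) - target) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Analytically y(1) = slope * sin(1), so the recovered slope should be near
# 0.5 / sin(1) ~ 0.594.
slope = solve_bvp(0.5)
```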
A technique for locating function roots and for satisfying equality constraints in optimization
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw
1991-01-01
A new technique for locating simultaneous roots of a set of functions is described. The technique is based on the property of the Kreisselmeier-Steinhauser function which descends to a minimum at each root location. It is shown that the ensuing algorithm may be merged into any nonlinear programming method for solving optimization problems with equality constraints.
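The cited property of the Kreisselmeier-Steinhauser (KS) function can be illustrated with a toy sketch: aggregating f and -f yields a smooth analogue of |f| that descends to a minimum at each root. This demonstrates the property only, not the merged root-finding/optimization algorithm of the paper; the test function and grid search are hypothetical:

```python
import math

def ks(values, rho=50.0):
    """Kreisselmeier-Steinhauser aggregate: a smooth, conservative maximum."""
    m = max(values)
    return m + math.log(sum(math.exp(rho * (v - m)) for v in values)) / rho

def ks_abs(f, x, rho=50.0):
    """KS of {f(x), -f(x)} behaves like a smoothed |f(x)|, so it descends
    to a minimum wherever f has a root."""
    return ks([f(x), -f(x)], rho)

f = lambda x: x * x - 2.0                      # roots at +/- sqrt(2)
grid = [1.0 + 0.001 * k for k in range(1000)]  # coarse search near the positive root
x_star = min(grid, key=lambda x: ks_abs(f, x))
```

For several functions, KS over all of {f_i, -f_i} descends at simultaneous roots, which is what lets the technique be merged into a standard nonlinear programming loop.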
Sang, Jun; Zhao, Jun; Xiang, Zhili; Cai, Bin; Xiang, Hong
2015-01-01
Gyrator transform has been widely used for image encryption recently. For gyrator transform-based image encryption, the rotation angle used in the gyrator transform is one of the secret keys. In this paper, by analyzing the properties of the gyrator transform, an improved particle swarm optimization (PSO) algorithm is proposed to search for the rotation angle in a single gyrator transform. Since the gyrator transform is continuous, it is time-consuming to search the rotation angle exhaustively, even considering the data precision in a computer; a computational-intelligence-based search is therefore an attractive alternative. Considering the severe local convergence and obvious global fluctuations of the gyrator transform, an improved PSO algorithm is proposed to suit such situations. The experimental results demonstrate that the proposed improved PSO algorithm can significantly improve the efficiency of searching for the rotation angle in a single gyrator transform. Since the gyrator transform is the foundation of image encryption in gyrator transform domains, research on searching for the rotation angle in a single gyrator transform is useful for further study of the security of such image encryption algorithms. PMID:26251910
Wang, Li; Jia, Pengfei; Huang, Tailai; Duan, Shukai; Yan, Jia; Wang, Lidan
2016-01-01
An electronic nose (E-nose) is an intelligent system used in this paper to distinguish three indoor pollutant gases (benzene (C₆H₆), toluene (C₇H₈), formaldehyde (CH₂O)) and carbon monoxide (CO). The algorithm is a key part of an E-nose system, mainly composed of data processing and pattern recognition. In this paper, we employ a support vector machine (SVM) to distinguish the indoor pollutant gases; two of its parameters need to be optimized, so to improve the performance of the SVM, i.e., to obtain a higher gas recognition rate, an effective enhanced krill herd algorithm (EKH) based on a novel decision-weighting-factor computing method is proposed to optimize the two SVM parameters. Krill herd (KH) is an effective method in practice; however, on occasion it cannot escape locally best solutions and so cannot always find the global optimum. In addition, its search ability relies fully on randomness, so it cannot always converge rapidly. To address these issues we propose an enhanced KH (EKH) to improve the global search and convergence speed of KH. To obtain a more accurate model of krill behavior, an updated crossover operator is added to the approach, which guarantees that the krill group is diverse in the early iterations and has good local search ability in the later iterations. The recognition results of EKH are compared with those of other optimization algorithms (including KH, chaotic KH (CKH), quantum-behaved particle swarm optimization (QPSO), particle swarm optimization (PSO), and genetic algorithm (GA)), and EKH outperforms the other considered methods. The results verify that EKH not only significantly improves the performance of our E-nose system, but also provides a good starting point and theoretical basis for further study of other improved krill algorithms in all E-nose application areas.
NASA Astrophysics Data System (ADS)
Yamaguchi, Hideshi; Soeda, Takeshi
2015-03-01
A practical framework for an electron beam induced current (EBIC) technique has been established for conductive materials based on a numerical optimization approach. Although the conventional EBIC technique is useful for evaluating the distributions of dopants or crystal defects in semiconductor transistors, issues related to the reproducibility and quantitative capability of measurements using this technique persist. For instance, it is difficult to acquire high-quality EBIC images throughout continuous tests due to variation in operator skill or test environment. Recently, due to the evaluation of EBIC equipment performance and the numerical optimization of equipment items, the constant acquisition of high contrast images has become possible, improving the reproducibility as well as yield regardless of operator skill or test environment. The technique proposed herein is even more sensitive and quantitative than scanning probe microscopy, an imaging technique that can possibly damage the sample. The new technique is expected to benefit the electrical evaluation of fragile or soft materials along with LSI materials.
PSO-SVM-Based Online Locomotion Mode Identification for Rehabilitation Robotic Exoskeletons.
Long, Yi; Du, Zhi-Jiang; Wang, Wei-Dong; Zhao, Guang-Yu; Xu, Guo-Qiang; He, Long; Mao, Xi-Wang; Dong, Wei
2016-01-01
Locomotion mode identification is essential for the control of robotic rehabilitation exoskeletons. This paper proposes an online support vector machine (SVM) optimized by particle swarm optimization (PSO) to identify different locomotion modes and realize a smooth and automatic locomotion transition. A PSO algorithm is used to obtain the optimal parameters of the SVM for a better overall performance. Signals measured by the foot pressure sensors integrated in the insoles of wearable shoes and by the MEMS-based attitude and heading reference systems (AHRS) attached to the shoes and shanks of the leg segments are fused together as the input to the SVM. Based on a chosen window of 200 ms (with a sampling frequency of 40 Hz), a three-layer wavelet packet analysis (WPA) is used for feature extraction, after which kernel principal component analysis (kPCA) is utilized to reduce the dimension of the feature set and thereby the computation cost of the SVM. Since the signals come from two different types of sensors, normalization is conducted to scale the input into the interval [0, 1]. Five-fold cross validation is adopted to train the classifier, which prevents over-fitting. Based on the SVM model obtained offline in MATLAB, an online SVM algorithm is constructed for locomotion mode identification. Experiments are performed for different locomotion modes and the results show the effectiveness of the proposed algorithm, with an accuracy of 96.00% ± 2.45%. To improve the accuracy, a majority vote algorithm (MVA) is used for post-processing, with which the identification accuracy exceeds 98.35% ± 1.65%. The proposed algorithm can be extended and employed in the field of robotic rehabilitation and assistance. PMID:27598160
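Two of the pre- and post-processing steps named in the abstract are simple to sketch: min-max normalization into [0, 1] and a sliding-window majority vote over the classifier's label stream. The window length and the label sequence below are hypothetical choices for illustration, not the paper's data:

```python
from collections import Counter

def min_max_scale(col):
    """Scale one feature column into [0, 1] (constant columns map to 0)."""
    lo, hi = min(col), max(col)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in col]

def majority_vote(labels, window=5):
    """Smooth a stream of classifier outputs: each output is the most common
    label among the last `window` predictions."""
    out = []
    for i in range(len(labels)):
        recent = labels[max(0, i - window + 1): i + 1]
        out.append(Counter(recent).most_common(1)[0][0])
    return out

scaled = min_max_scale([2.0, 4.0, 6.0])
raw = ["walk", "walk", "stair", "walk", "walk", "walk", "stair", "stair", "stair"]
smoothed = majority_vote(raw)   # the isolated "stair" spike is voted away
```

Voting trades a short recognition delay for the jump in accuracy the paper reports.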
Shabri, Ani; Samsudin, Ruhaidah
2014-01-01
Crude oil prices play a significant role in the global economy and are a key input into option pricing formulas, portfolio allocation, and risk measurement. In this paper, a hybrid model integrating wavelet decomposition and multiple linear regression (MLR) is proposed for crude oil price forecasting. In this model, the Mallat wavelet transform is first used to decompose the original time series into several subseries at different scales. Then, principal component analysis (PCA) is used to process the subseries data in the MLR model, and particle swarm optimization (PSO) is used to select the optimal parameters of the MLR model. To assess the effectiveness of this wavelet-MLR (WMLR) model, the daily West Texas Intermediate (WTI) crude oil market has been used as the case study. The time-series prediction performance of the WMLR model is compared with the MLR, ARIMA, and GARCH models using various statistical measures. The experimental results show that the proposed model outperforms the individual models in forecasting the crude oil price series. PMID:24895666
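The decomposition step can be sketched with the simplest wavelet, Haar, standing in for the Mallat transform the paper uses (an assumption; any orthogonal wavelet follows the same split into approximation and detail subseries). The price values are made up for illustration:

```python
def haar_step(series):
    """One level of a Haar wavelet transform: split an even-length series
    into a coarse approximation and a detail subseries."""
    s = 2 ** 0.5
    approx = [(series[2 * i] + series[2 * i + 1]) / s for i in range(len(series) // 2)]
    detail = [(series[2 * i] - series[2 * i + 1]) / s for i in range(len(series) // 2)]
    return approx, detail

def haar_inverse(approx, detail):
    """Exact reconstruction of the original series from one level."""
    s = 2 ** 0.5
    out = []
    for a, d in zip(approx, detail):
        out += [(a + d) / s, (a - d) / s]
    return out

prices = [50.0, 52.0, 51.0, 55.0, 54.0, 53.0, 56.0, 58.0]
approx, detail = haar_step(prices)
recon = haar_inverse(approx, detail)
```

In the paper's pipeline, the subseries from repeated decomposition of the approximation become the regressors that PCA and the PSO-tuned MLR then work on.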
An Algorithmic Framework for Multiobjective Optimization
Ganesan, T.; Elamvazuthi, I.; Shaari, Ku Zilati Ku; Vasant, P.
2013-01-01
Multiobjective (MO) optimization is an emerging field which is increasingly being encountered in many disciplines. Various metaheuristic techniques such as differential evolution (DE), genetic algorithms (GA), the gravitational search algorithm (GSA), and particle swarm optimization (PSO) have been used in conjunction with scalarization techniques such as the weighted-sum approach and the normal-boundary intersection (NBI) method to solve MO problems. Nevertheless, many challenges still arise, especially when dealing with problems with more than two objectives. In addition, extensive computational overhead emerges when dealing with hybrid algorithms. This paper addresses these issues by proposing an alternative framework that utilizes algorithmic concepts related to the problem structure for generating efficient and effective algorithms. This paper proposes a framework to generate new high-performance algorithms with minimal computational overhead for MO optimization. PMID:24470795
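The weighted-sum scalarization mentioned above is the simplest of these techniques: each choice of weights turns the MO problem into a single-objective one, and sweeping the weights traces out (convex portions of) the Pareto front. A toy sketch with two hypothetical one-dimensional objectives:

```python
def weighted_sum(objectives, weights):
    """Scalarize a tuple of objective values with nonnegative weights."""
    return sum(w * f for w, f in zip(weights, objectives))

# Two toy objectives over a single design variable x in [0, 1]:
f1 = lambda x: x ** 2            # prefers x = 0
f2 = lambda x: (x - 1.0) ** 2    # prefers x = 1
xs = [i / 1000 for i in range(1001)]
pareto = []
for w1 in (0.0, 0.25, 0.5, 0.75, 1.0):
    # each weight vector yields one Pareto-optimal design
    best = min(xs, key=lambda x: weighted_sum((f1(x), f2(x)), (w1, 1 - w1)))
    pareto.append(round(best, 3))
```

Sweeping w1 from 0 to 1 moves the solution from the minimizer of f2 to the minimizer of f1, sampling the trade-off curve in between.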
Fournier, René; Mohareb, Amir
2016-01-14
We devised a global optimization (GO) strategy for optimizing molecular properties with respect to both geometry and chemical composition. A relative index of thermodynamic stability (RITS) is introduced to allow meaningful energy comparisons between different chemical species. We use the RITS by itself, or in combination with another calculated property, to create an objective function F to be minimized. Including the RITS in the definition of F ensures that the solutions have some degree of thermodynamic stability. We illustrate how the GO strategy works with three test applications, with F calculated in the framework of Kohn-Sham Density Functional Theory (KS-DFT) with the Perdew-Burke-Ernzerhof exchange-correlation functional. First, we searched the composition and configuration space of CmHnNpOq (m = 0-4, n = 0-10, p = 0-2, q = 0-2, and 2 ≤ m + n + p + q ≤ 12) for stable molecules. The GO discovered familiar molecules like N2, CO2, acetic acid, acetonitrile, ethane, and many others, after a small number (5000) of KS-DFT energy evaluations. Second, we carried out a GO of the geometry of CumSnn+ (m = 1, 2 and n = 9-12). A single GO run produced the same low-energy structures found in an earlier study where each CumSnn+ species had been optimized separately. Finally, we searched bimetallic clusters AmBn (3 ≤ m + n ≤ 6, A, B = Li, Na, Al, Cu, Ag, In, Sn, Pb) for species and configurations having a low RITS and a large highest occupied molecular orbital (MO) to lowest unoccupied MO energy gap (Eg). We found seven bimetallic clusters with Eg > 1.5 eV. PMID:26772561
Cost-Optimal Design of a 3-Phase Core Type Transformer by Gradient Search Technique
NASA Astrophysics Data System (ADS)
Basak, R.; Das, A.; Sensarma, A. K.; Sanyal, A. N.
2014-04-01
3-phase core type transformers are extensively used as power and distribution transformers in power systems, and their cost is a sizable proportion of the total system cost. Therefore they should be designed cost-optimally. The design methodology for reaching cost-optimality has been discussed in detail by authors like Ramamoorty, and in brief in some textbooks on electrical design. The paper gives a method for optimizing the design, in the presence of constraints specified by the customer and the regulatory authorities, through a gradient search technique. The starting point has been chosen within the allowable parameter space, and the steepest-descent path has been followed for convergence. The step length has been judiciously chosen, and the program has been maneuvered to avoid local minima. The method appears to be the best of the alternatives, as its convergence is the quickest among the optimization techniques considered.
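The gradient (steepest-descent) search described above can be sketched minimally: a central-difference gradient, a fixed step length, and clipping to the allowable parameter box. The two design variables and the quadratic cost below are purely illustrative stand-ins for the transformer design variables and cost function:

```python
def steepest_descent(cost, x0, lower, upper, step=0.1, tol=1e-8, max_iter=10000, h=1e-6):
    """Follow the steepest-descent direction, clipping to the allowed parameter box."""
    x = list(x0)
    for _ in range(max_iter):
        # central-difference estimate of the gradient
        grad = []
        for d in range(len(x)):
            xp = x[:]; xp[d] += h
            xm = x[:]; xm[d] -= h
            grad.append((cost(xp) - cost(xm)) / (2 * h))
        # fixed-step move, kept inside the constraint box
        new = [min(max(x[d] - step * grad[d], lower[d]), upper[d]) for d in range(len(x))]
        if all(abs(new[d] - x[d]) < tol for d in range(len(x))):
            break
        x = new
    return x

# Hypothetical cost over two design variables (e.g. flux density, current density):
cost = lambda p: (p[0] - 1.4) ** 2 + 2 * (p[1] - 3.0) ** 2 + 10.0
opt = steepest_descent(cost, [1.0, 2.0], [0.8, 1.5], [1.8, 4.0])
```

A real transformer cost model would replace the toy quadratic; the clipping step is where customer and regulatory constraints enter as bounds.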
Hashim, H. A.; Abido, M. A.
2015-01-01
This paper presents a comparative study of fuzzy controller design for the twin rotor multi-input multi-output (MIMO) system (TRMS) considering the most promising evolutionary techniques: the gravitational search algorithm (GSA), particle swarm optimization (PSO), artificial bee colony (ABC), and differential evolution (DE). In this study, the gains of four fuzzy proportional-derivative (PD) controllers for the TRMS have been optimized using the considered techniques. The optimization techniques are developed to identify the optimal control parameters for system stability enhancement, to cancel high nonlinearities in the model, to reduce the coupling effect, and to drive the TRMS pitch and yaw angles into the desired tracking trajectory efficiently and accurately. The most effective technique in terms of system response under different disturbances has been investigated. In this work, it is observed that GSA is the most effective technique in terms of solution quality and convergence speed. PMID:25960738
Partial-transfer absorption imaging: a versatile technique for optimal imaging of ultracold gases.
Ramanathan, Anand; Muniz, Sérgio R; Wright, Kevin C; Anderson, Russell P; Phillips, William D; Helmerson, Kristian; Campbell, Gretchen K
2012-08-01
Partial-transfer absorption imaging is a tool that enables optimal imaging of atomic clouds for a wide range of optical depths. In contrast to standard absorption imaging, the technique can be minimally destructive and can be used to obtain multiple successive images of the same sample. The technique involves transferring a small fraction of the sample from an initial internal atomic state to an auxiliary state and subsequently imaging that fraction absorptively on a cycling transition. The atoms remaining in the initial state are essentially unaffected. We demonstrate the technique, discuss its applicability, and compare its performance as a minimally destructive technique to that of phase-contrast imaging.
Mujtaba, I.M.; Macchietto, S.
1997-06-01
A computationally efficient framework is presented for the dynamic optimization of batch distillation in which chemical reaction and separation take place simultaneously. A dynamic optimization problem whose objective is to maximize the conversion of the limiting reactant (the maximum conversion problem) is formulated for a representative system, and parametric solutions of the problem are obtained. Polynomial curve-fitting techniques are then applied to the results of the dynamic optimization problem. These polynomials are used to formulate a nonlinear algebraic maximum-profit problem which can be solved extremely efficiently using a nonlinear optimization solver. This provides an efficient framework which can be used for on-line optimization of batch distillation within scheduling programs for batch processes. The method can also be easily extended to nonreactive batch distillation and to nonconventional batch distillation columns.
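The curve-fitting step can be illustrated with a toy version: fit a polynomial to a few parametric (batch time, conversion) solutions, then maximize an algebraic profit function built from it. All numbers below are hypothetical; the paper fits its polynomials to actual dynamic-optimization results.

```python
def quad_fit(pts):
    """Exact quadratic A*t^2 + B*t + C through three (t, y) points (Newton form)."""
    (t0, y0), (t1, y1), (t2, y2) = pts
    c = y0
    b = (y1 - y0) / (t1 - t0)
    a = ((y2 - y1) / (t2 - t1) - b) / (t2 - t0)
    # convert Newton form c + b(t-t0) + a(t-t0)(t-t1) to monomial coefficients
    return a, b - a * (t0 + t1), y0 - b * t0 + a * t0 * t1

# Hypothetical parametric solutions: (batch time [h], conversion of limiting reactant)
samples = [(2.0, 0.40), (5.0, 0.70), (8.0, 0.76)]
A, B, C = quad_fit(samples)
conv = lambda t: A * t * t + B * t + C

# Algebraic maximum-profit problem: revenue on conversion minus time-proportional cost
profit = lambda t: 100.0 * conv(t) - 6.0 * t   # hypothetical prices
best_t = max((i / 100 for i in range(200, 801)), key=profit)
```

The grid maximization stands in for the nonlinear solver; the point is that once the polynomial surrogate exists, the profit problem is cheap algebra rather than a dynamic simulation.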
Srinivasan, Thenmozhi; Palanisamy, Balasubramanie
2015-01-01
Techniques for clustering high-dimensional data are emerging in response to the challenges posed by noisy, poor-quality data. This paper develops a method to cluster data using a high-dimensional similarity-based PCM (SPCM) with ant colony optimization intelligence, which is effective in clustering nonspatial data without requiring the number of clusters from the user. The PCM is made similarity-based by combining it with the mountain method. Although this clustering is efficient, it is further optimized using an ant colony algorithm with swarm intelligence. A scalable clustering technique is thus obtained, and the evaluation results are checked with synthetic datasets. PMID:26495413
NASA Astrophysics Data System (ADS)
Sato, Yuki; Izui, Kazuhiro; Yamada, Takayuki; Nishiwaki, Shinji
2016-07-01
This paper proposes techniques to improve the diversity of the searching points during the optimization process in an Aggregative Gradient-based Multiobjective Optimization (AGMO) method, so that well-distributed Pareto solutions are obtained. First to be discussed is a distance constraint technique, applied among searching points in the objective space when updating design variables, that maintains a minimum distance between the points. Next, a scheme is introduced that deals with updated points that violate the distance constraint, by deleting the offending points and introducing new points in areas of the objective space where searching points are sparsely distributed. Finally, the proposed method is applied to example problems to illustrate its effectiveness.
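The distance-constraint idea above can be sketched as a greedy filter over points in the objective space: any updated point closer than a minimum distance to an already-kept point is deleted (and would then be re-seeded in a sparse region). This is an illustration of the concept, not the paper's exact update rule:

```python
import math

def enforce_min_distance(points, dmin):
    """Greedily keep points so that all kept points are at least dmin apart
    in the objective space; violating points are deleted (to be re-seeded
    later in sparse regions, as the AGMO scheme suggests)."""
    kept = []
    for p in points:
        if all(math.dist(p, q) >= dmin for q in kept):
            kept.append(p)
    return kept

# Toy 2-objective points: two near-duplicate pairs collapse to single points
pts = [(0.0, 1.0), (0.05, 0.98), (0.5, 0.5), (0.52, 0.49), (1.0, 0.0)]
kept = enforce_min_distance(pts, 0.1)
```

After filtering, only well-separated representatives remain, which is what keeps the Pareto approximation evenly spread.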
A knowledge-based approach to improving optimization techniques in system planning
NASA Technical Reports Server (NTRS)
Momoh, J. A.; Zhang, Z. Z.
1990-01-01
A knowledge-based (KB) approach to improve mathematical programming techniques used in the system planning environment is presented. The KB system assists in selecting appropriate optimization algorithms, objective functions, constraints and parameters. The scheme is implemented by integrating symbolic computation of rules derived from operator and planner's experience and is used for generalized optimization packages. The KB optimization software package is capable of improving the overall planning process which includes correction of given violations. The method was demonstrated on a large scale power system discussed in the paper.
Application of response surface techniques to helicopter rotor blade optimization procedure
NASA Technical Reports Server (NTRS)
Henderson, Joseph Lynn; Walsh, Joanne L.; Young, Katherine C.
1995-01-01
In multidisciplinary optimization problems, response surface techniques can be used to replace the complex analyses that define the objective function and/or constraints with simple functions, typically polynomials. In this work a response surface is applied to the design optimization of a helicopter rotor blade. In previous work, this problem has been formulated with a multilevel approach. Here, the response surface takes advantage of this decomposition and is used to replace the lower level, a structural optimization of the blade. Problems that were encountered and important considerations in applying the response surface are discussed. Preliminary results are also presented that illustrate the benefits of using the response surface.
NASA Astrophysics Data System (ADS)
Li, Bincheng; Welsch, Eberhard
1999-04-01
Photothermal techniques, such as probe beam deflection and thermal lens detection, have been widely used for low absorption measurement, thermal characterization, and laser-induced damage detection of optical coatings. In specially configured photothermal techniques, the probe beam either detects the photothermally induced refractive index change inside the sample via propagation through the interacting region in the measured sample, or detects the surface displacement via reflection from the deformed surface. Usually, due to the very low absorption of the sample and/or the short interaction length, a very high sensitivity is required for such applications. It is therefore important to maximize the sensitivity of each measurement by selecting an appropriate detection scheme and optimizing the performance of the selected scheme. In this paper, we first maximize the sensitivity of these photothermal techniques by configuration optimization, then compare their maximum sensitivity. The applicability of the pulsed photothermal techniques to optical coating characterization is also discussed.
A technique optimization protocol and the potential for dose reduction in digital mammography
Ranger, Nicole T.; Lo, Joseph Y.; Samei, Ehsan
2010-01-01
Digital mammography requires revisiting techniques that have been optimized for prior screen/film mammography systems. The objective of the study was to determine an optimized radiographic technique for a digital mammography system and demonstrate the potential for dose reduction in comparison to the clinically established techniques based on screen-film. An objective figure of merit (FOM) was employed to evaluate a direct-conversion amorphous selenium (a-Se) FFDM system (Siemens Mammomat NovationDR, Siemens AG Medical Solutions, Erlangen, Germany) and was derived from the quotient of the squared signal-difference-to-noise ratio to mean glandular dose, for various combinations of technique factors and breast phantom configurations including kilovoltage settings (23–35 kVp), target/filter combinations (Mo–Mo and W–Rh), breast-equivalent plastic in various thicknesses (2–8 cm) and densities (100% adipose, 50% adipose/50% glandular, and 100% glandular), and simulated mass and calcification lesions. When using a W–Rh spectrum, the optimized FOM results for the simulated mass and calcification lesions showed highly consistent trends with kVp for each combination of breast density and thickness. The optimized kVp ranged from 26 kVp for 2 cm 100% adipose breasts to 30 kVp for 8 cm 100% glandular breasts. The use of the optimized W–Rh technique compared to standard Mo–Mo techniques provided dose savings ranging from 9% for 2 cm thick, 100% adipose breasts, to 63% for 6 cm thick, 100% glandular breasts, and for breasts with a 50% adipose/50% glandular composition, from 12% for 2 cm thick breasts up to 57% for 8 cm thick breasts. PMID:20384232
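The figure of merit defined above is simply the squared signal-difference-to-noise ratio divided by the mean glandular dose, so at matched image quality a lower-dose technique scores higher. The numbers below are illustrative only, not values from the study:

```python
def figure_of_merit(signal_difference, noise, mean_glandular_dose):
    """FOM = (signal-difference-to-noise ratio)^2 / mean glandular dose,
    as defined in the abstract; units cancel in the comparison."""
    sdnr = signal_difference / noise
    return sdnr ** 2 / mean_glandular_dose

# Hypothetical comparison of two spectra giving equal image quality:
fom_mo = figure_of_merit(12.0, 3.0, 2.0)   # Mo-Mo technique, higher dose
fom_w  = figure_of_merit(12.0, 3.0, 1.2)   # W-Rh technique, lower dose
dose_saving = 1 - 1.2 / 2.0                # fractional dose reduction at equal SdNR
```

At equal SdNR, the FOM ratio is exactly the inverse dose ratio, which is how the reported percentage dose savings follow from the FOM comparison.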
Hybrid intelligent optimization methods for engineering problems
NASA Astrophysics Data System (ADS)
Pehlivanoglu, Yasin Volkan
quantification studies, we developed new mutation strategies and operators to provide beneficial diversity within the population. We call this new approach multi-frequency vibrational GA or PSO. It was applied to different aeronautical engineering problems in order to study the efficiency of these new approaches. These implementations were: applications to selected benchmark test functions; inverse design of a two-dimensional (2D) airfoil in subsonic flow; optimization of a 2D airfoil in transonic flow; path planning for an autonomous unmanned aerial vehicle (UAV) over a 3D terrain environment; radar cross-section minimization for a 3D air vehicle; and active flow control over a 2D airfoil. As demonstrated by these test cases, the new algorithms outperform the current popular algorithms. The principal role of this multi-frequency approach is to determine which individuals or particles should be mutated, when they should be mutated, and which ones should be merged into the population. The new mutation operators, when combined with a mutation strategy and an artificial intelligence method such as neural networks or fuzzy logic, provided local and global diversity during the reproduction phases of the generations. Additionally, the new approach also introduced random and controlled diversity. Because they are still population-based techniques, these methods were as robust as the plain GA or PSO algorithms. Based on the results obtained, it was concluded that the variants of the present multi-frequency vibrational GA and PSO are efficient algorithms, since they successfully avoided all local optima within relatively short optimization cycles.
Particle Swarm Optimization with Double Learning Patterns.
Shen, Yuanxia; Wei, Linna; Zeng, Chuanhua; Chen, Jian
2016-01-01
Particle Swarm Optimization (PSO) is an effective tool for solving optimization problems. However, PSO usually suffers from premature convergence due to the rapid loss of swarm diversity. In this paper, we first analyze the motion behavior of the swarm based on the probability characteristics of the learning parameters. Then a PSO with double learning patterns (PSO-DLP) is developed, which employs a master swarm and a slave swarm with different learning patterns to achieve a trade-off between convergence speed and swarm diversity. The particles in the master swarm are encouraged to explore the search space to maintain swarm diversity, while those in the slave swarm learn from the global best particle to refine a promising solution. When the evolutionary states of the two swarms interact, an interaction mechanism is enabled. This mechanism helps the slave swarm to jump out of local optima and improves the convergence precision of the master swarm. The proposed PSO-DLP is evaluated on 20 benchmark functions, including rotated multimodal and complex shifted problems. The simulation results and statistical analysis show that PSO-DLP obtains a promising performance and outperforms eight PSO variants. PMID:26858747
Human behavior-based particle swarm optimization.
Liu, Hao; Xu, Gang; Ding, Gui-Yan; Sun, Yu-Bo
2014-01-01
Particle swarm optimization (PSO) has attracted many researchers to various optimization problems, owing to its easy implementation, few tuning parameters, and acceptable performance. However, the algorithm easily becomes trapped in local optima because of the rapid loss of population diversity. Therefore, improving the performance of PSO and decreasing its dependence on parameters are two important research topics. In this paper, we present a human behavior-based PSO, called HPSO. There are two remarkable differences between PSO and HPSO. First, the global worst particle is introduced into the velocity equation of PSO and endowed with a random weight that obeys the standard normal distribution; this strategy helps trade off the exploration and exploitation abilities of PSO. Second, we eliminate the two acceleration coefficients c1 and c2 of the standard PSO (SPSO) to reduce the parameter sensitivity of the solved problems. Experimental results on 28 benchmark functions, consisting of unimodal, multimodal, rotated, and shifted high-dimensional functions, demonstrate the high performance of the proposed algorithm in terms of convergence accuracy and speed with lower computational cost.
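The modified velocity update can be sketched for a single dimension. The abstract fixes two ingredients (no c1/c2, and a global-worst term weighted by a standard-normal sample); the exact equation in the paper may differ, so the combination below, with uniform random weights on the pbest/gbest terms and repulsion from the global worst, is an assumption for illustration:

```python
import random

def hpso_velocity(v, x, pbest, gbest, gworst, w, rng):
    """One illustrative HPSO-style velocity update: pbest/gbest attraction
    without acceleration coefficients, plus a term involving the global
    worst particle weighted by a standard-normal sample (a sketch of the
    idea described in the abstract, not the paper's exact equation)."""
    r1, r2 = rng.random(), rng.random()
    n = rng.gauss(0.0, 1.0)            # random weight ~ N(0, 1)
    return (w * v
            + r1 * (pbest - x)         # attraction to personal best
            + r2 * (gbest - x)         # attraction to global best
            + n * (x - gworst))        # randomly weighted global-worst term
```

With the normal weight, the worst-particle term can push either toward or away from the worst position, which is one way such a term can balance exploration against exploitation.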
A Preliminary Evaluation of an Optimizing Technique for Use in Selecting New School Locations.
ERIC Educational Resources Information Center
Hall, Fred L.
During the past two decades, mathematical programing techniques have been widely utilized in the private sector for optimization studies in locating industrial plants, scheduling commodity flows, determining product mix, etc. However, their use in the public sector has been less extensive, partly because of the absence of a clear-cut profit motive…
Optimum Design of Aluminum Beverage Can Ends Using Structural Optimization Techniques
Yamazaki, Koetsu; Watanabe, Masato; Itoh, Ryouiti; Han, Jing; Nishiyama, Sadao
2005-08-05
This paper applies the response surface approximation method from structural optimization to the development of aluminum beverage can ends. Geometrical parameters of the end shell are selected as design variables. The analysis points in the design space are assigned using an orthogonal array in the design-of-experiments technique. A finite element analysis code is used to simulate the deformation behavior and to calculate the buckling strength and central panel displacement of the end shell under internal pressure. On the basis of the numerical analysis results, response surfaces for the buckling strength and panel growth are approximated in terms of the design variables. Using a numerical optimization program, the weight of the end shell is minimized subject to constraints on the buckling strength, panel growth suppression, and other design requirements. A numerical example of a 202 end shell optimization problem is presented.
Optimal regulator or conventional? Setup techniques for a model following simulator control system
NASA Technical Reports Server (NTRS)
Deets, D. A.
1978-01-01
The optimal regulator technique for determining simulator control system gains was compared with the conventional servo analysis approach. Practical considerations associated with airborne motion simulation using a model-following system provided the basis for comparison. The simulation fidelity specifications selected were important in evaluating the relative advantages of the two methods. Frequency responses for a JetStar aircraft following a roll-mode model were calculated digitally to illustrate the various cases. A technique for generating forward-loop lead in the optimal regulator model-following problem was developed, which increases the flexibility of that approach. It appeared to be the only way in which the optimal regulator method could meet the fidelity specifications.
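For a scalar model, the optimal regulator gain can be computed by iterating the discrete-time Riccati recursion to a fixed point; the roll-mode-like numbers below are illustrative, not taken from the report:

```python
def lqr_gain_scalar(a, b, q, r, iters=500):
    """Scalar discrete-time LQR for x[k+1] = a*x[k] + b*u[k] with cost
    sum(q*x^2 + r*u^2): iterate the Riccati recursion
        P <- q + a*P*a - (a*P*b)^2 / (r + b*P*b)
    to a fixed point, then return the state-feedback gain
        K = b*P*a / (r + b*P*b),  with u = -K*x."""
    p = q
    for _ in range(iters):
        p = q + a * p * a - (a * p * b) ** 2 / (r + b * p * b)
    return b * p * a / (r + b * p * b)

# Toy roll-mode-like discrete model x[k+1] = 0.9 x[k] + 0.1 u[k], weights q = 1, r = 0.1
k = lqr_gain_scalar(0.9, 0.1, 1.0, 0.1)
```

The resulting closed-loop pole a - b*K is pulled inside the open-loop pole, which is the regulator's trade between tracking speed and control effort set by q and r.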
NASA Astrophysics Data System (ADS)
Cheng, Jie; Qian, Zhaogang; Irani, Keki B.; Etemad, Hossein; Elta, Michael E.
1991-03-01
To meet the ever-increasing demand of the rapidly growing semiconductor manufacturing industry, it is critical to have a comprehensive methodology integrating techniques for process optimization, real-time monitoring, and adaptive process control. To this end we have accomplished an integrated knowledge-based approach combining the latest expert system technology, machine learning methods, and traditional statistical process control (SPC) techniques. This knowledge-based approach is advantageous in that it makes it possible for the tasks of process optimization and adaptive control to be performed consistently and predictably. Furthermore, this approach can be used to construct high-level, qualitative descriptions of processes and thus make process behavior easy to monitor, predict, and control. Two software packages, RIST (Rule Induction and Statistical Testing) and KARSM (Knowledge Acquisition from Response Surface Methodology), have been developed and incorporated with two commercially available packages: G2 (a real-time expert system) and ULTRAMAX (a tool for sequential process optimization).
NASA Astrophysics Data System (ADS)
Lamberson, Steven; Crossley, William
2008-10-01
This research investigates the optimization of a multifunctional structure with embedded electronic circuitry, following traditional composite laminate optimization methods. A heavily 'de-featured' finite element model provides thermal and mechanical analyses of the structure. The model places point heat sources at the surface component locations, and the optimization problem enforces strain constraints at these locations. A simple problem seeks the least-mass I-beam whose shear web contains a simple circuit, subject to strength and strain constraints. A second problem finds the lowest mass unmanned aerial vehicle (UAV) wing box configuration containing embedded circuitry subject to strength, deflection and strain constraints under two load cases. Sequential unconstrained minimization techniques and sequential quadratic programming perform the optimization; combinatorial methods are computationally impractical. Despite the model de-featuring and the use of calculus-based methods, the problem requires significant computational effort. The surface-component strain constraints result in structures with more mass than those without surface components.
Skating technique for the straights, based on the optimization of a simulation model.
Allinger, T L; Van den Bogert, A J
1997-02-01
Although experimental data have been collected to determine the skating techniques of the fastest skaters in the world, the "ideal" skating technique has not been determined (i.e., stroke time, glide time, push-off velocity, and push-off direction). The purpose of this study was to determine the skating technique that results in the fastest steady-state speed on a straight-away using optimization of a simulation model. A dynamic model of a skater was developed that included anatomical and physiological constraints: leg length, instantaneous power, and average power of a skater. Results from the model demonstrate that a number of skating techniques can be used to achieve the same steady-state speed. Increasing the average power output of a skater raises the top skating speed and decreases the range of optimal skating techniques. Increasing instantaneous power output (i.e., increasing isometric strength) increases the range of techniques a skater may use for a given speed. In the future, this model can be applied to individual skaters to determine if changes in technique or if improvements in power production are necessary to improve their steady-state skating speed. This model may be adapted to skating sports, such as speed skating, in-line skating, hockey, and cross-country skiing.
Techniques for optimal crop selection in a controlled ecological life support system
NASA Technical Reports Server (NTRS)
Mccormack, Ann; Finn, Cory; Dunsky, Betsy
1993-01-01
A Controlled Ecological Life Support System (CELSS) utilizes a plant's natural ability to regenerate air and water while being grown as a food source in a closed life support system. Current plant research is directed toward obtaining quantitative empirical data on the regenerative ability of each species of plant and the system volume and power requirements. Two techniques were adapted to optimize crop species selection while at the same time minimizing the system volume and power requirements. Each allows the level of life support supplied by the plants to be selected, as well as other system parameters. The first technique uses decision analysis in the form of a spreadsheet. The second method, which is used as a comparison with and validation of the first, utilizes standard design optimization techniques. Simple models of plant processes are used in the development of these methods.
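The spreadsheet-style decision analysis above can be sketched as a weighted decision matrix: each candidate crop gets attribute scores, the weights encode system priorities (life-support contribution versus volume and power cost), and the weighted sums rank the crops. Crop names, attributes, scores, and weights below are all hypothetical:

```python
def score_crops(crops, weights):
    """Weighted decision matrix: each crop has attribute scores in [0, 1];
    the total is the weighted sum, as a spreadsheet decision analysis computes."""
    return {name: sum(weights[a] * s for a, s in attrs.items())
            for name, attrs in crops.items()}

# Hypothetical attribute scores (higher is better) and priority weights
crops = {
    "wheat":   {"O2_output": 0.8, "water_recycled": 0.7, "volume": 0.5, "power": 0.6},
    "lettuce": {"O2_output": 0.4, "water_recycled": 0.9, "volume": 0.9, "power": 0.8},
    "potato":  {"O2_output": 0.7, "water_recycled": 0.6, "volume": 0.6, "power": 0.7},
}
weights = {"O2_output": 0.4, "water_recycled": 0.3, "volume": 0.15, "power": 0.15}
scores = score_crops(crops, weights)
best = max(scores, key=scores.get)
```

Changing the weights corresponds to selecting a different level of life support supplied by the plants, which is exactly the knob the abstract says both techniques expose.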
Feedback control for fuel-optimal descents using singular perturbation techniques
NASA Technical Reports Server (NTRS)
Price, D. B.
1984-01-01
In response to rising fuel costs and reduced profit margins for the airline companies, the optimization of the paths flown by transport aircraft has been considered. It was found that application of optimal control theory to the considered problem can result in savings in fuel, time, and direct operating costs. The best solution to the aircraft trajectory problem is an onboard real-time feedback control law. The present paper presents a technique which shows promise of becoming a part of a complete solution. The application of singular perturbation techniques to the problem is discussed, taking into account the benefits and some problems associated with them. A different technique for handling the descent part of a trajectory is also discussed.
Evolutionary techniques for sensor networks energy optimization in marine environmental monitoring
NASA Astrophysics Data System (ADS)
Grimaccia, Francesco; Johnstone, Ron; Mussetta, Marco; Pirisi, Andrea; Zich, Riccardo E.
2012-10-01
The sustainable management of coastal and offshore ecosystems, such as for example coral reef environments, requires the collection of accurate data across various temporal and spatial scales. Accordingly, monitoring systems are seen as central tools for ecosystem-based environmental management, helping on one hand to accurately describe the water column and substrate biophysical properties, and on the other hand to correctly steer sustainability policies by providing timely and useful information to decision-makers. A robust and intelligent sensor network that can adjust and be adapted to different and changing environmental or management demands would revolutionize our capacity to more accurately model, predict, and manage human impacts on our coastal, marine, and other similar environments. In this paper advanced evolutionary techniques are applied to optimize the design of an innovative energy harvesting device for marine applications. The authors implement an enhanced technique in order to exploit in the most effective way the uniqueness and peculiarities of two classical optimization approaches, Particle Swarm Optimization and Genetic Algorithms. Here, this hybrid procedure is applied to a power buoy designed for marine environmental monitoring applications in order to optimize the energy recovered from sea waves by selecting the optimal device configuration.
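The PSO-GA hybridization summarized above can be sketched in miniature as follows; the update coefficients, the crossover/mutation scheme, and the sphere test objective (standing in for the buoy's recovered-energy model) are illustrative assumptions, not the authors' implementation.

```python
import random

def hybrid_ga_pso(f, dim, n=20, iters=100, seed=0):
    """Toy PSO-GA hybrid: PSO velocity updates drive the swarm, while the
    worst half is refreshed each iteration by GA-style crossover/mutation
    of good particles (illustrative scheme only)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):  # standard PSO update (w=0.7, c1=c2=1.5)
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
        order = sorted(range(n), key=lambda i: f(pos[i]))
        half = n // 2
        for w in order[half:]:  # GA step: rebuild worst half from two good parents
            a, b = rng.sample(order[:half], 2)
            pos[w] = [pos[a][d] if rng.random() < 0.5 else pos[b][d]
                      for d in range(dim)]
            if rng.random() < 0.1:  # occasional mutation keeps diversity
                d = rng.randrange(dim)
                pos[w][d] += rng.gauss(0, 0.5)
        for i in range(n):  # refresh personal and global bests
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f

# Example: minimize a 3-D sphere function as a stand-in objective.
best, best_f = hybrid_ga_pso(lambda x: sum(v * v for v in x), dim=3)
```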
Optimized Hyper Beamforming of Linear Antenna Arrays Using Collective Animal Behaviour
Ram, Gopi; Mandal, Durbadal; Kar, Rajib; Ghoshal, Sakti Prasad
2013-01-01
A novel optimization technique developed by mimicking collective animal behaviour (CAB) is applied to the optimal design of hyper beamforming of linear antenna arrays. Hyper beamforming is based on sum and difference beam patterns of the array, each raised to the power of a hyperbeam exponent parameter. The optimized hyperbeam is achieved by optimization of current excitation weights and uniform interelement spacing. As compared to conventional hyper beamforming of a linear antenna array, real coded genetic algorithm (RGA), particle swarm optimization (PSO), and differential evolution (DE) applied to the hyper beam of the same array can achieve reduction in sidelobe level (SLL) and the same or less first null beam width (FNBW), keeping the same value of the hyperbeam exponent. Further reductions of sidelobe level (SLL) and first null beam width (FNBW) have been achieved by the proposed collective animal behaviour (CAB) algorithm. In the present problem, CAB finds a near-global optimal solution, unlike RGA, PSO, and DE. The above comparative optimization is illustrated through 10-, 14-, and 20-element linear antenna arrays to establish the optimization efficacy of CAB. PMID:23970843
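The quantities being optimized here, the array factor of a linear array and its sidelobe level (SLL), can be computed directly; the following sketch assumes a broadside array with uniform half-wavelength spacing and is not the CAB algorithm itself.

```python
import cmath
import math

def array_factor(weights, theta, spacing=0.5):
    """|AF| of a linear array; spacing in wavelengths, theta from array axis."""
    k = 2 * math.pi
    return abs(sum(w * cmath.exp(1j * k * spacing * n * math.cos(theta))
                   for n, w in enumerate(weights)))

def sidelobe_level_db(weights, samples=2000):
    """Peak sidelobe relative to the main beam, in dB (more negative = better)."""
    thetas = [math.pi * i / samples for i in range(samples + 1)]
    af = [array_factor(weights, t) for t in thetas]
    peak = max(af)
    main = af.index(peak)
    lo = main  # walk down both flanks of the main lobe to its first nulls
    while lo > 0 and af[lo - 1] < af[lo]:
        lo -= 1
    hi = main
    while hi < samples and af[hi + 1] < af[hi]:
        hi += 1
    side = max(af[:lo] + af[hi + 1:], default=0.0)
    return 20 * math.log10(side / peak)

# Uniform excitation of a 10-element array gives the familiar ~-13 dB sidelobes;
# optimizing the excitation weights is what lowers this figure.
sll = sidelobe_level_db([1.0] * 10)
```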
An Innovative Method of Teaching Electronic System Design with PSoC
ERIC Educational Resources Information Center
Ye, Zhaohui; Hua, Chengying
2012-01-01
Programmable system-on-chip (PSoC), which provides a microprocessor and programmable analog and digital peripheral functions in a single chip, is very convenient for mixed-signal electronic system design. This paper presents the experience of teaching contemporary mixed-signal electronic system design with PSoC in the Department of Automation,…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-20
... Relinquishment From Universal Safety Solution PSO AGENCY: Agency for Healthcare Research and Quality (AHRQ), HHS.... AHRQ has accepted a notification of voluntary relinquishment from Universal Safety Solution PSO of its... the list of federally approved PSOs. AHRQ has accepted a notification from Universal Safety...
76 FR 60495 - Patient Safety Organizations: Voluntary Relinquishment From Illinois PSO
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-29
... HUMAN SERVICES Agency for Healthcare Research and Quality Patient Safety Organizations: Voluntary... from the Illinois PSO of its status as a Patient Safety Organization (PSO). The Patient Safety and... PSOs, which are entities or component organizations whose mission and primary activity is to...
NASA Technical Reports Server (NTRS)
Bernstein, Dennis S.; Rosen, I. Gary
1988-01-01
The finite-dimensional approximation of the infinite-dimensional Bernstein/Hyland optimal projection theory is investigated analytically. The approach yields fixed-finite-order controllers which are optimal with respect to high-order approximating finite-dimensional plant models. The technique is illustrated by computing a sequence of first-order controllers for a one-dimensional SISO parabolic (heat/diffusion) system using a spline-based Ritz-Galerkin finite-element approximation. The numerical studies indicate convergence of the feedback gains with less than 2-percent performance degradation over full-order LQG controllers.
Liu Wei; Li Yupeng; Li Xiaoqiang; Cao Wenhua; Zhang Xiaodong
2012-06-15
Purpose: The distal edge tracking (DET) technique in intensity-modulated proton therapy (IMPT) allows for high energy efficiency, fast and simple delivery, and simple inverse treatment planning; however, it is highly sensitive to uncertainties. In this study, the authors explored the application of DET in IMPT (IMPT-DET) and conducted robust optimization of IMPT-DET to see if the planning technique's sensitivity to uncertainties was reduced. They also compared conventional and robust optimization of IMPT-DET with three-dimensional IMPT (IMPT-3D) to gain understanding about how plan robustness is achieved. Methods: They compared the robustness of IMPT-DET and IMPT-3D plans to uncertainties by analyzing plans created for a typical prostate cancer case and a base of skull (BOS) cancer case (using data for patients who had undergone proton therapy at our institution). Spots with the highest and second highest energy layers were chosen so that the Bragg peak would be at the distal edge of the targets in IMPT-DET using 36 equally spaced angle beams; in IMPT-3D, 3 beams with angles chosen by a beam angle optimization algorithm were planned. Dose contributions for a number of range and setup uncertainties were calculated, and a worst-case robust optimization was performed. A robust quantification technique was used to evaluate the plans' sensitivity to uncertainties. Results: With no uncertainties considered, the DET is less robust to uncertainties than is the 3D method but offers better normal tissue protection. Robust optimization accounting for range and setup uncertainties can improve the robustness of IMPT plans to uncertainties; however, our findings show that the extent of improvement varies. Conclusions: IMPT's sensitivity to uncertainties can be improved by using robust optimization. The authors found two possible mechanisms that made improvements possible: (1) a localized single-field uniform dose distribution (LSFUD) mechanism, in which the
Lin, Chih-Hong
2016-09-01
Because the V-belt continuously variable transmission system driven by a permanent magnet (PM) synchronous motor has many unknown nonlinear and time-varying characteristics, designing a linear controller with good performance is a time-consuming procedure. To overcome the difficulties of linear controller design, a composite recurrent Laguerre orthogonal polynomials modified particle swarm optimization (PSO) neural network (NN) control system, which has online learning capability to adapt to the nonlinear and time-varying behavior of the system, is developed for controlling the PM synchronous motor servo-driven V-belt continuously variable transmission system with lumped nonlinear load disturbances. The composite recurrent Laguerre orthogonal polynomials NN control system consists of an inspector control, a recurrent Laguerre orthogonal polynomials NN control with an adaptation law, and a recouped control with an estimation law. The adaptation law for the online parameters of the recurrent Laguerre orthogonal polynomials NN is derived from the Lyapunov stability theorem. Additionally, two optimal learning rates for the parameters are obtained by means of modified PSO to achieve better convergence. Finally, comparative experimental results are presented to demonstrate the control performance of the proposed control scheme.
Mestrovic, Ante . E-mail: amestrovic@bccancer.bc.ca; Clark, Brenda G.
2005-11-01
Purpose: To develop a method of predicting the values of dose distribution parameters of different radiosurgery techniques for treatment of arteriovenous malformation (AVM) based on internal geometric parameters. Methods and Materials: For each of 18 previously treated AVM patients, four treatment plans were created: circular collimator arcs, dynamic conformal arcs, fixed conformal fields, and intensity-modulated radiosurgery. An algorithm was developed to characterize the target and critical structure shape complexity and the position of the critical structures with respect to the target. Multiple regression was employed to establish the correlation between the internal geometric parameters and the dose distribution for different treatment techniques. The results from the model were applied to predict the dosimetric outcomes of different radiosurgery techniques and select the optimal radiosurgery technique for a number of AVM patients. Results: Several internal geometric parameters showing statistically significant correlation (p < 0.05) with the treatment planning results for each technique were identified. The target volume and the average minimum distance between the target and the critical structures were the most effective predictors for normal tissue dose distribution. The structure overlap volume with the target and the mean distance between the target and the critical structure were the most effective predictors for critical structure dose distribution. The predicted values of dose distribution parameters of different radiosurgery techniques were in close agreement with the original data. Conclusions: A statistical model has been described that successfully predicts the values of dose distribution parameters of different radiosurgery techniques and may be used to predetermine the optimal technique on a patient-to-patient basis.
Optimization of brushless direct current motor design using an intelligent technique.
Shabanian, Alireza; Tousiwas, Armin Amini Poustchi; Pourmandi, Massoud; Khormali, Aminollah; Ataei, Abdolhay
2015-07-01
This paper presents a method for the optimal design of a slotless permanent magnet brushless DC (BLDC) motor with surface mounted magnets using an improved bee algorithm (IBA). The characteristics of the motor are expressed as functions of motor geometries. The objective function is a combination of losses, volume and cost to be minimized simultaneously. This method is based on the capability of swarm-based algorithms in finding the optimal solution. One sample case is used to illustrate the performance of the design approach and optimization technique. The IBA has better performance and speed of convergence compared with the bee algorithm (BA). Simulation results show that the proposed method performs efficiently.
Tong, S.S.; Powell, D.; Goel, S. (GE Consulting Services, Albany, NY)
1992-02-01
A new software system called Engineous combines artificial intelligence and numerical methods for the design and optimization of complex aerospace systems. Engineous combines the advanced computational techniques of genetic algorithms, expert systems, and object-oriented programming with the conventional methods of numerical optimization and simulated annealing to create a design optimization environment that can be applied to computational models in various disciplines. Engineous has produced designs with higher predicted performance gains than current manual design processes, with on average a 10-to-1 reduction in turnaround time, and has yielded new insights into product design. It has been applied to the aerodynamic preliminary design of an aircraft engine turbine, concurrent aerodynamic and mechanical preliminary design of an aircraft engine turbine blade and disk, a space superconductor generator, a satellite power converter, and a nuclear-powered satellite reactor and shield. 23 refs.
Wroblewski, David; Katrompas, Alexander M.; Parikh, Neel J.
2009-09-01
A method and apparatus for optimizing the operation of a power generating plant using artificial intelligence techniques. One or more decisions D are determined for at least one consecutive time increment, where at least one of the decisions D is associated with a discrete variable for the operation of a power plant device in the power generating plant. In an illustrated embodiment, the power plant device is a soot cleaning device associated with a boiler.
Parametric Studies and Optimization of Eddy Current Techniques through Computer Modeling
Todorov, E. I.
2007-03-21
The paper demonstrates the use of computer models for parametric studies and optimization of surface and subsurface eddy current techniques. The study with high-frequency probe investigates the effect of eddy current frequency and probe shape on the detectability of flaws in the steel substrate. The low-frequency sliding probe study addresses the effect of conductivity between the fastener and the hole, frequency and coil separation distance on detectability of flaws in subsurface layers.
A technique for calculating optimal Hohmann transfers with simultaneous plane and node rotations
NASA Astrophysics Data System (ADS)
Rogers, Christopher F.
This analysis presents eight nonlinear coupled equations whose solution provides sufficient information to characterize an optimal Hohmann transfer with simultaneous plane and node rotations. It also presents auxiliary equations which help provide other information of interest. The derivations utilize spherical geometry but otherwise deal very directly with the transfer geometry and thereby remain conceptually simple. The assumptions include initial and final circular orbits and impulsive burns. Numerical results illustrate the technique.
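The transfer described above can be explored numerically; this sketch scans the split of the plane rotation between the two impulses rather than solving the paper's eight coupled equations, and the orbit radii and gravitational parameter are assumed example values.

```python
import math

MU = 398600.4418  # km^3/s^2, Earth gravitational parameter (example constant)

def hohmann_with_plane_change(r1, r2, alpha, steps=1000):
    """Total delta-v for a Hohmann transfer between circular orbits of radii
    r1, r2 (km) with total plane rotation alpha (rad), rotating s*alpha at
    the first burn and (1-s)*alpha at the second. Returns (dv_total, s)."""
    vc1 = math.sqrt(MU / r1)                 # initial circular speed
    vc2 = math.sqrt(MU / r2)                 # final circular speed
    a = 0.5 * (r1 + r2)                      # transfer-ellipse semimajor axis
    vp = math.sqrt(MU * (2 / r1 - 1 / a))    # transfer speed at r1
    va = math.sqrt(MU * (2 / r2 - 1 / a))    # transfer speed at r2

    def dv(s):
        # law of cosines combines the speed change and plane rotation per burn
        b1 = math.sqrt(vc1**2 + vp**2 - 2 * vc1 * vp * math.cos(s * alpha))
        b2 = math.sqrt(vc2**2 + va**2 - 2 * vc2 * va * math.cos((1 - s) * alpha))
        return b1 + b2

    best_s = min((i / steps for i in range(steps + 1)), key=dv)
    return dv(best_s), best_s

# Example: 300 km LEO to GEO with a 28.5 deg total plane change; the optimal
# split performs only a small fraction of the rotation at the first burn.
total, split = hohmann_with_plane_change(6678.0, 42164.0, math.radians(28.5))
```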
Kuldeep, B; Singh, V K; Kumar, A; Singh, G K
2015-01-01
In this article, a novel approach for 2-channel linear phase quadrature mirror filter (QMF) bank design based on a hybrid of gradient based optimization and optimization of fractional derivative constraints is introduced. For the purpose of this work, recently proposed nature inspired optimization techniques such as cuckoo search (CS), modified cuckoo search (MCS) and wind driven optimization (WDO) are explored for the design of QMF bank. The 2-channel QMF is also designed with the particle swarm optimization (PSO) and artificial bee colony (ABC) nature inspired optimization techniques. The design problem is formulated in the frequency domain as the sum of the L2 norm of the error in the passband, stopband and transition band at the quadrature frequency. The contribution of this work is the novel hybrid combination of gradient based optimization (Lagrange multiplier method) and nature inspired optimization (CS, MCS, WDO, PSO and ABC) and its usage for optimizing the design problem. Performance of the proposed method is evaluated by passband error (ϕp), stopband error (ϕs), transition band error (ϕt), peak reconstruction error (PRE), stopband attenuation (As) and computational time. The design examples illustrate the ingenuity of the proposed method. Results are also compared with the other existing algorithms, and it was found that the proposed method gives the best results in terms of peak reconstruction error and transition band error while being comparable in terms of passband and stopband error. Results show that the proposed method is successful for both lower and higher order 2-channel QMF bank design. A comparative study of various nature inspired optimization techniques is also presented, and the study singles out CS as the best QMF optimization technique. PMID:25034647
The L_infinity constrained global optimal histogram equalization technique for real time imaging
NASA Astrophysics Data System (ADS)
Ren, Qiongwei; Niu, Yi; Liu, Lin; Jiao, Yang; Shi, Guangming
2015-08-01
Although current imaging sensors can achieve 12-bit or higher precision, current display devices and the commonly used digital image formats are still only 8 bits. This mismatch causes significant waste of the sensor precision and loss of information when storing and displaying the images. For better usage of the precision budget, tone mapping operators have to be used to map the high-precision data into low-precision digital images adaptively. In this paper, the classic histogram equalization tone mapping operator is reexamined in the sense of optimization. We point out that the traditional histogram equalization technique and its variants are fundamentally limited by local-optimum problems. To overcome this drawback, we remodel the histogram equalization tone mapping task based on graph theory, which achieves globally optimal solutions. Another advantage of the graph-based modeling is that tone continuity is also modeled as a vital constraint in our approach, which suppresses the annoying boundary artifacts of the traditional approaches. In addition, we propose a novel dynamic programming technique to solve the histogram equalization problem in real time. Experimental results show that the proposed tone-preserved globally optimal histogram equalization technique outperforms the traditional approaches by exhibiting more subtle details in the foreground while preserving the smoothness of the background.
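For reference, the classic histogram equalization operator that the paper takes as its baseline can be written in a few lines (8-bit levels assumed):

```python
def equalize(pixels, levels=256):
    """Classic histogram equalization: map each input level through the
    normalized cumulative histogram (the baseline the paper improves on)."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, total = [], 0
    for h in hist:          # cumulative histogram
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    n = len(pixels)
    # lookup table stretching the cumulative counts over the full range
    lut = [round((c - cdf_min) / max(n - cdf_min, 1) * (levels - 1))
           for c in cdf]
    return [lut[p] for p in pixels]

# A tiny low-contrast "image" gets stretched to span the full 0..255 range.
out = equalize([52, 55, 61, 59, 79, 61, 76, 61])
```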
Optimization of liquid overlay technique to formulate heterogenic 3D co-cultures models.
Costa, Elisabete C; Gaspar, Vítor M; Coutinho, Paula; Correia, Ilídio J
2014-08-01
Three-dimensional (3D) cell culture models of solid tumors are currently having a tremendous impact in the in vitro screening of candidate anti-tumoral therapies. These 3D models provide more reliable results than those provided by standard 2D in vitro cell cultures. However, 3D manufacturing techniques need to be further optimized in order to increase the robustness of these models and provide data that can be properly correlated with the in vivo situation. Therefore, in the present study the parameters used for producing multicellular tumor spheroids (MCTS) by liquid overlay technique (LOT) were optimized in order to produce heterogeneous cellular agglomerates comprised of cancer cells and stromal cells over long periods. Spheroids were produced under highly controlled conditions, namely: (i) agarose coatings; (ii) horizontal stirring; and (iii) a known initial cell number. The simultaneous optimization of these parameters promoted the assembly of a characteristic 3D cellular organization similar to that found in solid tumors in vivo. Such improvements in the LOT technique promoted the assembly of highly reproducible, individual 3D spheroids, with a low cost of production, that can be used for future in vitro drug screening assays.
Wieberger, Florian; Kolb, Tristan; Neuber, Christian; Ober, Christopher K; Schmidt, Hans-Werner
2013-04-08
In this article we present several developed and improved combinatorial techniques to optimize processing conditions and material properties of organic thin films. The combinatorial approach allows investigations of multi-variable dependencies and is well suited to investigating organic thin films for high-performance applications. In this context we develop and establish the reliable preparation of gradients of material composition, temperature, exposure, and immersion time. Furthermore we demonstrate the smart application of combinations of composition and processing gradients to create combinatorial libraries. First a binary combinatorial library is created by applying two gradients perpendicular to each other. A third gradient is applied in very small areas and arranged matrix-like over the entire binary combinatorial library, resulting in a ternary combinatorial library. Ternary combinatorial libraries allow identifying precise trends for the optimization of multi-variable dependent processes, which is demonstrated on the lithographic patterning process. Here we verify conclusively the strong interaction and thus the interdependency of variables in the preparation and properties of complex organic thin film systems. The established gradient preparation techniques are not limited to lithographic patterning. It is possible to utilize and transfer the reported combinatorial techniques to other multi-variable dependent processes and to investigate and optimize thin film layers and devices for optical, electro-optical, and electronic applications.
Optimized swimmer tracking system by a dynamic fusion of correlation and color histogram techniques
NASA Astrophysics Data System (ADS)
Benarab, D.; Napoléon, T.; Alfalou, A.; Verney, A.; Hellard, P.
2015-12-01
To design a robust swimmer tracking system, we took into account two well-known tracking techniques: the nonlinear joint transform correlation (NL-JTC) and the color histogram. The two techniques perform comparably well, yet they both have substantial limitations. Interestingly, they also seem to show some complementarity. The correlation technique yields accurate detection but is sensitive to rotation, scale and contour deformation, whereas the color histogram technique is robust for rotation and contour deformation but shows low accuracy and is highly sensitive to luminosity and confusing background colors. These observations suggested the possibility of a dynamic fusion of the correlation plane and the color scores map. Before this fusion, two steps are required. First is the extraction of a sub-plane of correlation that describes the similarity between the reference and target images. This sub-plane has the same size as the color scores map but they have different interval values. Thus, the second step is required which is the normalization of the planes in the same interval so they can be fused. In order to determine the benefits of this fusion technique, first, we tested it on a synthetic image containing different forms with different colors. We thus were able to optimize the correlation plane and color histogram techniques before applying our fusion technique to real videos of swimmers in international competitions. Last, a comparative study of the dynamic fusion technique and the two classical techniques was carried out to demonstrate the efficacy of the proposed technique. The criteria of comparison were the tracking percentage, the peak to correlation energy (PCE), which evaluated the sharpness of the peak (accuracy), and the local standard deviation (Local-STD), which assessed the noise in the planes (robustness).
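The two preparatory steps named above amount to bringing both score maps into a common interval before combining them; this sketch uses min-max normalization and a fixed equal weighting, whereas the paper's fusion is dynamic.

```python
def normalize(plane):
    """Min-max normalize a flat score map into the common interval [0, 1]."""
    lo, hi = min(plane), max(plane)
    span = (hi - lo) or 1.0  # guard against a constant plane
    return [(v - lo) / span for v in plane]

def fuse(corr_plane, color_scores, w=0.5):
    """Normalize the correlation sub-plane and the color scores map to the
    same interval, then combine them; the equal weighting w=0.5 is an
    assumption here. Returns (index of best fused score, fused map)."""
    c = normalize(corr_plane)
    h = normalize(color_scores)
    fused = [w * a + (1 - w) * b for a, b in zip(c, h)]
    return fused.index(max(fused)), fused

# Both cues agree that position 1 is the target, so fusion picks it.
peak, scores = fuse([0.1, 0.9, 0.3], [0.2, 0.8, 0.1])
```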
2012-01-01
Background Systems biology allows the analysis of biological systems behavior under different conditions through in silico experimentation. The possibility of perturbing biological systems in different manners calls for the design of perturbations to achieve particular goals. Examples include the design of a chemical stimulation to maximize the amplitude of a given cellular signal or to achieve a desired pattern in pattern formation systems, etc. Such design problems can be mathematically formulated as dynamic optimization problems which are particularly challenging when the system is described by partial differential equations. This work addresses the numerical solution of such dynamic optimization problems for spatially distributed biological systems. The usual nonlinear and large scale nature of the mathematical models related to this class of systems and the presence of constraints on the optimization problems impose a number of difficulties, such as the presence of suboptimal solutions, which call for robust and efficient numerical techniques. Results Here, the use of a control vector parameterization approach combined with efficient and robust hybrid global optimization methods and a reduced order model methodology is proposed. The capabilities of this strategy are illustrated considering the solution of two challenging problems: bacterial chemotaxis and the FitzHugh-Nagumo model. Conclusions In the process of chemotaxis the objective was to efficiently compute the time-varying optimal concentration of chemoattractant in one of the spatial boundaries in order to achieve predefined cell distribution profiles. Results are in agreement with those previously published in the literature. The FitzHugh-Nagumo problem is also efficiently solved and it illustrates very well how dynamic optimization may be used to force a system to evolve from an undesired to a desired pattern with a reduced number of actuators. The presented methodology can be used for the
Optimal Control for a Parallel Hybrid Hydraulic Excavator Using Particle Swarm Optimization
Wang, Dong-yun; Guan, Chen
2013-01-01
Optimal control using particle swarm optimization (PSO) is put forward in a parallel hybrid hydraulic excavator (PHHE). A power-train mathematical model of PHHE is illustrated along with the analysis of components' parameters. Then, the optimal control problem is addressed, and PSO algorithm is introduced to deal with this nonlinear optimal problem which contains lots of inequality/equality constraints. Then, the comparisons between the optimal control and rule-based one are made, and the results show that hybrids with the optimal control would increase fuel economy. Although PSO algorithm is off-line optimization, still it would bring performance benchmark for PHHE and also help have a deep insight into hybrid excavators. PMID:23818832
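A common way to let PSO handle the inequality/equality constraints mentioned above is a penalty method; this toy sketch (not the paper's power-train model) penalizes constraint violations and minimizes a quadratic stand-in cost.

```python
import random

def pso_penalized(obj, cons, dim, bounds, n=30, iters=200, seed=1):
    """PSO with a penalty method for constraints: particles are charged
    rho * (sum of violations) for each constraint g(x) <= 0 returned by
    cons(x). Illustrative sketch, not the paper's solver."""
    rng = random.Random(seed)
    rho = 1e3  # penalty weight (assumed)

    def fitness(x):
        return obj(x) + rho * sum(max(0.0, g) for g in cons(x))

    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pb = [p[:] for p in pos]
    pbf = [fitness(p) for p in pos]
    gi = min(range(n), key=lambda i: pbf[i])
    gb, gbf = pb[gi][:], pbf[gi]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):  # constriction-style coefficients
                vel[i][d] = (0.72 * vel[i][d]
                             + 1.49 * rng.random() * (pb[i][d] - pos[i][d])
                             + 1.49 * rng.random() * (gb[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            fi = fitness(pos[i])
            if fi < pbf[i]:
                pb[i], pbf[i] = pos[i][:], fi
                if fi < gbf:
                    gb, gbf = pos[i][:], fi
    return gb, gbf

# Toy stand-in: minimize a fuel-like cost x0^2 + x1^2 subject to x0 + x1 >= 1;
# the constrained optimum is x0 = x1 = 0.5.
best, _ = pso_penalized(lambda x: x[0]**2 + x[1]**2,
                        lambda x: [1.0 - x[0] - x[1]],  # g(x) <= 0 form
                        dim=2, bounds=(-2.0, 2.0))
```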
NASA Astrophysics Data System (ADS)
Azadi Moghaddam, Masoud; Kolahan, Farhad
2016-12-01
Face milling is an important and common machining operation because of its versatility and capability to produce various surfaces. Face milling is a machining process of removing material by the relative motion between a work piece and a rotating cutter with multiple cutting edges. It is an interrupted cutting operation in which the teeth of the milling cutter enter and exit the work piece during each revolution. This paper is concerned with the experimental and numerical study of face milling of AISI1045. The proposed approach is based on statistical analysis of the experimental data gathered using a Taguchi design matrix. Surface roughness is the most important performance characteristic of the face milling process. In this study the effect of input face milling process parameters on the surface roughness of AISI1045 steel milled parts has been studied. The input parameters are cutting speed (v), feed rate (fz) and depth of cut (ap). The experimental data are gathered using a Taguchi L9 design matrix. In order to establish the relations between the input and the output parameters, various regression functions have been fitted on the data based on output characteristics. The significance of the process parameters on the quality characteristics of the process was also evaluated quantitatively using the analysis of variance method. Then, statistical analysis and validation experiments have been carried out to compare and select the best and most fitted models. In the last section of this research, a mathematical model has been developed for surface roughness prediction using particle swarm optimization (PSO) on the basis of experimental results. The model developed for optimization has been validated by confirmation experiments. It has been found that the predicted roughness using PSO is in good agreement with the actual surface roughness.
Determination of the optimal tolerance for MLC positioning in sliding window and VMAT techniques
Hernandez, V.; Abella, R.; Calvo, J. F.; Jurado-Bruggemann, D.; Sancho, I.; Carrasco, P.
2015-04-15
Purpose: Several authors have recommended a 2 mm tolerance for multileaf collimator (MLC) positioning in sliding window treatments. In volumetric modulated arc therapy (VMAT) treatments, however, the optimal tolerance for MLC positioning remains unknown. In this paper, the authors present the results of a multicenter study to determine the optimal tolerance for both techniques. Methods: The procedure used is based on dynalog file analysis. The study was carried out using seven Varian linear accelerators from five different centers. Dynalogs were collected from over 100 000 clinical treatments and in-house software was used to compute the number of tolerance faults as a function of the user-defined tolerance. Thus, the optimal value for this tolerance, defined as the lowest achievable value, was investigated. Results: Dynalog files accurately predict the number of tolerance faults as a function of the tolerance value, especially for low fault incidences. All MLCs behaved similarly and the Millennium120 and the HD120 models yielded comparable results. In sliding window techniques, the number of beams with an incidence of hold-offs >1% rapidly decreases for a tolerance of 1.5 mm. In VMAT techniques, the number of tolerance faults sharply drops for tolerances around 2 mm. For a tolerance of 2.5 mm, less than 0.1% of the VMAT arcs presented tolerance faults. Conclusions: Dynalog analysis provides a feasible method for investigating the optimal tolerance for MLC positioning in dynamic fields. In sliding window treatments, the tolerance of 2 mm was found to be adequate, although it can be reduced to 1.5 mm. In VMAT treatments, the typically used 5 mm tolerance is excessively high. Instead, a tolerance of 2.5 mm is recommended.
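The dynalog-based analysis amounts to counting, for each candidate tolerance, the fraction of recorded leaf-position deviations that exceed it; a minimal sketch with hypothetical deviation data:

```python
def fault_fraction(errors, tolerance):
    """Fraction of recorded MLC position samples whose |planned - actual|
    deviation exceeds the user-defined tolerance (all lengths in mm)."""
    bad = sum(1 for e in errors if abs(e) > tolerance)
    return bad / len(errors)

# Hypothetical per-sample deviations (mm), as would be read from a dynalog file.
errors = [0.2, -0.4, 1.1, 0.6, -2.3, 0.9, 0.1, -1.6]

# Fault incidence as a function of the tolerance setting, 0.5 mm to 3.0 mm:
# the incidence drops sharply as the tolerance is relaxed, which is the
# behavior the study uses to locate the lowest achievable tolerance.
curve = {t / 2: fault_fraction(errors, t / 2) for t in range(1, 7)}
```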
Zhang, Yan-jun; Zhang, Shu-guo; Fu, Guang-wei; Li, Da; Liu, Yin; Bi, Wei-hong
2012-04-01
This paper presents a novel algorithm that blends the particle swarm optimization (PSO) algorithm and the Levenberg-Marquardt (LM) algorithm probabilistically. The algorithm can be used to fit the Pseudo-Voigt profile of the Brillouin scattering spectrum, improving the goodness of fit and the precision of frequency shift extraction. It uses the PSO algorithm as its main frame. First, the PSO algorithm performs a global search; after every fixed number of optimization steps, a random number rand(0, 1) is generated. If rand(0, 1) is less than or equal to a predetermined probability P, the optimal solution obtained by the PSO algorithm is used as the initial value of the LM algorithm. The LM algorithm then performs a deep local search, and its solution replaces the previous PSO optimal solution, after which the PSO algorithm resumes the global search. If rand(0, 1) is greater than P, the PSO search simply continues until the next random number is generated and judged. The two algorithms are used alternately to obtain an ideal global optimal solution. Simulation analysis and experimental results show that the new algorithm overcomes the shortcomings of either single algorithm and improves the goodness of fit and the precision of frequency shift extraction for the Brillouin scattering spectrum, demonstrating that the new method is practical and feasible.
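The probabilistic PSO/LM alternation can be sketched in miniature. For compactness only the peak center b (the frequency shift) is fitted, and a Lorentzian line shape stands in for the Pseudo-Voigt profile; the switching probability P and all other constants are illustrative choices, not the paper's values:

```python
import random

# Lorentzian stand-in for the Pseudo-Voigt spectrum; only the shift b is fitted.
rng = random.Random(2)
xs = [i * 0.05 for i in range(-100, 101)]
B_TRUE = 0.7

def model(b, x):
    return 1.0 / (1.0 + ((x - b) / 0.8) ** 2)

ys = [model(B_TRUE, x) + rng.gauss(0.0, 0.01) for x in xs]

def sse(b):
    return sum((model(b, x) - y) ** 2 for x, y in zip(xs, ys))

def lm_refine(b, lam=1e-3, steps=20, h=1e-6):
    # Scalar Levenberg-Marquardt with a numerical Jacobian; accepts only
    # improving steps, otherwise increases the damping.
    for _ in range(steps):
        r = [model(b, x) - y for x, y in zip(xs, ys)]
        J = [(model(b + h, x) - model(b, x)) / h for x in xs]
        d = sum(j * ri for j, ri in zip(J, r)) / (sum(j * j for j in J) + lam)
        if sse(b - d) < sse(b):
            b -= d
        else:
            lam *= 10.0
    return b

P = 0.3                                          # hand-off probability (assumed)
pos = [-4.0 + i * 8.0 / 14 for i in range(15)]   # swarm spread over the range
vel = [0.0] * 15
pb, pv = pos[:], [sse(b) for b in pos]
gi = min(range(15), key=lambda i: pv[i])
gb, gv = pb[gi], pv[gi]
for _ in range(40):                              # PSO is the main frame
    for i in range(15):
        vel[i] = (0.7 * vel[i] + 1.5 * rng.random() * (pb[i] - pos[i])
                  + 1.5 * rng.random() * (gb - pos[i]))
        pos[i] += vel[i]
        v = sse(pos[i])
        if v < pv[i]:
            pb[i], pv[i] = pos[i], v
            if v < gv:
                gb, gv = pos[i], v
    if rng.random() <= P:                        # occasionally deepen with LM
        gb = lm_refine(gb)
        gv = sse(gb)
gb = lm_refine(gb)                               # final local polish
```

The global best only ever improves, so the final LM polish starts inside the basin found by the swarm.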
Delahaye, P; Galatà, A; Angot, J; Cam, J F; Traykov, E; Ban, G; Celona, L; Choinski, J; Gmaj, P; Jardin, P; Koivisto, H; Kolhinen, V; Lamy, T; Maunoury, L; Patti, G; Thuillier, T; Tarvainen, O; Vondrasek, R; Wenander, F
2016-02-01
The present paper summarizes the results obtained from the past few years in the framework of the Enhanced Multi-Ionization of short-Lived Isotopes for Eurisol (EMILIE) project. The EMILIE project aims at improving the charge breeding techniques with both Electron Cyclotron Resonance Ion Sources (ECRIS) and Electron Beam Ion Sources (EBISs) for European Radioactive Ion Beam (RIB) facilities. Within EMILIE, an original technique for debunching the beam from EBIS charge breeders is being developed, for making an optimal use of the capabilities of CW post-accelerators of the future facilities. Such a debunching technique should eventually resolve duty cycle and time structure issues which presently complicate the data-acquisition of experiments. The results of the first tests of this technique are reported here. In comparison with charge breeding with an EBIS, the ECRIS technique had lower performance in efficiency and attainable charge state for metallic ion beams and also suffered from issues related to beam contamination. In recent years, improvements have been made which significantly reduce the differences between the two techniques, making ECRIS charge breeding more attractive especially for CW machines producing intense beams. Upgraded versions of the Phoenix charge breeder, originally developed by LPSC, will be used at SPES and GANIL/SPIRAL. These two charge breeders have benefited from studies undertaken within EMILIE, which are also briefly summarized here. PMID:26932063
Optimization of image acquisition techniques for dual-energy imaging of the chest
Shkumat, N. A.; Siewerdsen, J. H.; Dhanantwari, A. C.; Williams, D. B.; Richard, S.; Paul, N. S.; Yorkston, J.; Van Metter, R.
2007-10-15
Experimental and theoretical studies were conducted to determine optimal acquisition techniques for a prototype dual-energy (DE) chest imaging system. Technique factors investigated included the selection of added x-ray filtration, kVp pair, and the allocation of dose between low- and high-energy projections, with total dose equal to or less than that of a conventional chest radiograph. Optima were computed to maximize lung nodule detectability as characterized by the signal-difference-to-noise ratio (SDNR) in DE chest images. Optimal beam filtration was determined by cascaded systems analysis of DE image SDNR for filter selections across the periodic table (Z_filter = 1-92), demonstrating the importance of differential filtration between low- and high-kVp projections and suggesting optimal high-kVp filters in the range Z_filter = 25-50. For example, added filtration of ~2.1 mm Cu, ~1.2 mm Zr, ~0.7 mm Mo, and ~0.6 mm Ag to the high-kVp beam provided optimal (and nearly equivalent) soft-tissue SDNR. Optimal kVp pair and dose allocation were investigated using a chest phantom presenting simulated lung nodules and ribs for thin, average, and thick body habitus. Low- and high-energy techniques ranged from 60-90 kVp and 120-150 kVp, respectively, with peak soft-tissue SDNR achieved at [60/120] kVp for all patient thicknesses and all levels of imaging dose. A strong dependence on the kVp of the low-energy projection was observed. Optimal allocation of dose between low- and high-energy projections was such that ~30% of the total dose was delivered by the low-kVp projection, exhibiting a fairly weak dependence on kVp pair and dose. The results have guided the implementation of a prototype DE imaging system for imaging trials in early-stage lung nodule detection and diagnosis.
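The dose-allocation optimum can be illustrated with a toy noise model. Assuming (illustratively) that the variance of each log-projection scales inversely with its dose fraction, the DE-image SDNR as a function of the low-kVp fraction has a closed-form optimum at aL/(aL + aH); the constants below are chosen only to land near the ~30% allocation reported above, not taken from the paper:

```python
import math

# Toy model: total dose split as alpha (low-kVp) and 1-alpha (high-kVp).
# Quantum-noise variance of each log-projection scales as 1/dose, so the
# DE-image variance is aL^2/alpha + aH^2/(1-alpha). aL and aH are
# illustrative per-unit-dose noise constants, not measured values.
S = 1.0                  # soft-tissue signal difference (arbitrary units)
aL, aH = 0.6, 1.4

def sdnr(alpha):
    return S / math.sqrt(aL ** 2 / alpha + aH ** 2 / (1.0 - alpha))

# Grid search over (0, 1); the analytic optimum is aL / (aL + aH).
best = max((a / 100.0 for a in range(1, 100)), key=sdnr)
```

With these constants the optimum is 0.6/(0.6 + 1.4) = 0.30, i.e. ~30% of the total dose delivered by the low-kVp projection.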
Andriani, Dian; Wresta, Arini; Atmaja, Tinton Dwi; Saepudin, Aep
2014-02-01
Biogas from the anaerobic digestion of organic materials is a renewable energy resource that consists mainly of CH4 and CO2. Trace components often present in biogas are water vapor, hydrogen sulfide, siloxanes, hydrocarbons, ammonia, oxygen, carbon monoxide, and nitrogen. Considering that biogas is a clean and renewable form of energy that could well substitute for conventional energy sources (fossil fuels), optimizing its production becomes substantial. Various optimization techniques for the biogas production process have been developed, including pretreatment, biotechnological approaches, co-digestion, and the use of serial digesters. Some applications require a certain degree of biogas purity. The presence of CO2 and other trace components in biogas can affect engine performance adversely. Reducing the CO2 content will significantly upgrade the quality of biogas and enhance its calorific value. Upgrading is generally performed to meet the standards for use as vehicle fuel or for injection into the natural gas grid. Different methods for biogas upgrading are used; they differ in their functioning, the quality conditions required of the incoming gas, and their efficiency. Biogas can be purified of CO2 using pressure swing adsorption, membrane separation, or physical or chemical CO2 absorption. This paper reviews the various techniques that can be used to optimize biogas production and to upgrade biogas quality.
On large-scale nonlinear programming techniques for solving optimal control problems
Faco, J.L.D.
1994-12-31
The formulation of decision problems via optimal control theory allows their dynamic structure and parameter estimation to be taken into account. This paper deals with techniques for choosing search directions in the iterative solution of discrete-time optimal control problems. A unified formulation incorporates nonlinear performance criteria and dynamic equations, time delays, bounded state and control variables, a free planning horizon and a variable initial state vector. Such problems are generally characterized by a large number of variables, especially when they arise from the discretization of continuous-time optimal control or calculus of variations problems. In a GRG context, the staircase structure of the Jacobian matrix of the dynamic equations is exploited in the choice of basic and superbasic variables and when changes of basis occur along the process. The search directions of the bound-constrained nonlinear programming problem in the reduced space of the superbasic variables are computed by large-scale NLP techniques. A modified Polak-Ribiere conjugate gradient method and a limited-storage quasi-Newton BFGS method are analyzed, and modifications to deal with the bounds on the variables are suggested, based on projected gradient devices with specific line searches. Some practical models are presented for electric generation planning and fishery management, and the application of the code GRECO - Gradient REduit pour la Commande Optimale - is discussed.
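The projected search-direction machinery can be sketched on a small bound-constrained problem. The objective, bounds and starting point below are illustrative; the Polak-Ribiere beta uses the PR+ restart (clamped at zero) and the backtracking line search projects trial points onto the box:

```python
# Projected Polak-Ribiere conjugate-gradient sketch for a box-constrained
# problem (illustrative objective and bounds, not one of the paper's models).
def f(x):
    return (x[0] - 3.0) ** 2 + 10.0 * (x[1] + 2.0) ** 2

def grad(x):
    return [2.0 * (x[0] - 3.0), 20.0 * (x[1] + 2.0)]

LO, HI = [0.0, 0.0], [2.0, 5.0]   # box makes both bounds active at the optimum

def project(x):
    return [min(max(xi, lo), hi) for xi, lo, hi in zip(x, LO, HI)]

x = [1.0, 4.0]
g_old = grad(x)
d = [-gi for gi in g_old]
for _ in range(200):
    # backtracking line search along d, projecting each trial onto the box
    t, fx = 1.0, f(x)
    while t > 1e-12 and f(project([xi + t * di for xi, di in zip(x, d)])) >= fx:
        t *= 0.5
    x = project([xi + t * di for xi, di in zip(x, d)])
    g = grad(x)
    # PR+ beta: Polak-Ribiere with restart when the value turns negative
    beta = max(0.0, sum(gi * (gi - goi) for gi, goi in zip(g, g_old))
               / max(sum(goi * goi for goi in g_old), 1e-12))
    d = [-gi + beta * di for gi, di in zip(g, d)]
    g_old = g
```

The unconstrained minimizer (3, -2) lies outside the box, so the iteration settles on the constrained optimum (2, 0) with both bounds active.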
Integration of ab-initio nuclear calculation with derivative free optimization technique
Sharda, Anurag
2008-01-01
Optimization techniques are finding their way into nuclear physics calculations, where the objective functions are very complex and computationally intensive. A vast parameter space must be searched to obtain a good match between theoretical (computed) and experimental observables, such as energy levels and spectra. Manual calculation cannot cope with such complexity and is prone to error at the same time. This work formulates and implements a design that integrates the ab initio nuclear physics code MFDn with the VTDIRECT95 code. VTDIRECT95 is a Fortran 95 suite of parallel code implementing the derivative-free optimization algorithm DIRECT. The proposed design is implemented for both a serial and a parallel version of the optimization technique. Experiments with the initial implementation of the design show good matches for several single-nucleus cases. Determination and assignment of an appropriate number of processors for the parallel integration code is implemented to increase efficiency and resource utilization in the case of multiple-nuclei parameter searches.
Artificial intelligent techniques for optimizing water allocation in a reservoir watershed
NASA Astrophysics Data System (ADS)
Chang, Fi-John; Chang, Li-Chiu; Wang, Yu-Chung
2014-05-01
This study proposes a systematic water allocation scheme that integrates system analysis with artificial intelligence techniques for reservoir operation, in consideration of the great hydrometeorological uncertainty, to mitigate drought impacts on the public and irrigation sectors. The AI techniques mainly include a genetic algorithm (GA) and an adaptive-network-based fuzzy inference system (ANFIS). We first derive evaluation diagrams through systematic interactive evaluations of long-term hydrological data to provide a clear simulation perspective of all possible drought conditions tagged with their corresponding water shortages; we then search for the optimal reservoir operating histogram using the GA, based on given demands and hydrological conditions, which can be regarded as the optimal base of input-output training patterns for modelling; and we finally build a suitable water allocation scheme by constructing an ANFIS model that learns the mechanism between the designed inputs (water discount rates and hydrological conditions) and outputs (two scenarios: simulated and optimized water deficiency levels). The effectiveness of the proposed approach is tested on the operation of the Shihmen Reservoir in northern Taiwan for the first paddy crop in the study area, to assess the water allocation mechanism during drought periods. We demonstrate that the proposed water allocation scheme helps water managers reliably determine a suitable discount rate on water supply for both the irrigation and public sectors, and can thus reduce the drought risk and the compensation costs induced by restricting agricultural water use.
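The GA search for an operating policy can be sketched on a toy reservoir. The inflows, demands, capacity and GA settings below are made-up illustrative numbers, not the Shihmen Reservoir data:

```python
import random

# Toy GA search for a reservoir "operating histogram": release volumes for six
# periods that track demand while respecting storage balance and capacity.
# All numbers are illustrative assumptions.
rng = random.Random(3)
INFLOW = [30, 10, 5, 8, 25, 40]
DEMAND = [20, 20, 20, 20, 20, 20]
CAP, S0 = 60.0, 30.0

def cost(releases):
    s, c = S0, 0.0
    for inflow, demand, r in zip(INFLOW, DEMAND, releases):
        r = min(r, s + inflow)           # cannot release more than available
        s = min(s + inflow - r, CAP)     # spill anything over capacity
        c += (demand - r) ** 2           # quadratic deviation from demand
    return c

def ga(pop=40, gens=120):
    popn = [[rng.uniform(0, 30) for _ in range(6)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=cost)
        elite = popn[: pop // 4]         # elitist selection
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)  # averaging crossover + mutation
            child = [(x + y) / 2 + rng.gauss(0, 1.0) for x, y in zip(a, b)]
            children.append([max(0.0, v) for v in child])
        popn = elite + children
    return min(popn, key=cost)

best = ga()
```

Here a zero-shortage policy (release 20 each period) is feasible, and the GA approaches it; in the study the resulting optimal patterns serve as training data for the ANFIS model.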
Hernandez, Wilmar
2007-01-01
In this paper, a survey of recent applications of optimal signal processing techniques to improve the performance of mechanical sensors is presented. A comparison between classical filters and optimal filters for automotive sensors is made, and the current state of the art in applying robust and optimal control and signal processing techniques to the design of the intelligent (or smart) sensors that today's cars need is presented through several experimental results, which show that the fusion of intelligent sensors and optimal signal processing techniques is clearly the way to go. However, the switch between traditional methods of designing automotive sensors and the new ones cannot be made overnight, because some open research issues remain to be solved. This paper draws attention to one of these open research issues and tries to arouse researchers' interest in the fusion of intelligent sensors and optimal signal processing techniques.
Identifying Ensembles of Signal Transduction Models using Pareto Optimal Ensemble Techniques (POETs)
Song, Sang Ok; Chakrabarti, Anirikh; Varner, Jeffrey D.
2010-01-01
Mathematical modeling of complex gene expression programs is an emerging tool for understanding disease mechanisms. However, identification of large models sometimes requires training using qualitative, conflicting or even contradictory data sets. One strategy to address this challenge is to estimate experimentally constrained model ensembles using multiobjective optimization. In this study, we used Pareto Optimal Ensemble Techniques (POETs) to identify a family of proof-of-concept signal transduction models. POETs integrate Simulated Annealing (SA) with Pareto optimality to identify models near the optimal tradeoff surface between competing training objectives. We modeled a prototypical signaling network using mass-action kinetics within an ordinary differential equation (ODE) framework (64 ODEs in total). The true model was used to generate synthetic immunoblots from which the POET algorithm identified the 117 unknown model parameters. POET generated an ensemble of signaling models, which collectively exhibited population-like behavior. For example, scaled gene expression levels were approximately normally distributed over the ensemble following the addition of extracellular ligand. Also, the ensemble recovered robust and fragile features of the true model, despite significant parameter uncertainty. Taken together, these results suggest that experimentally constrained model ensembles could capture qualitatively important network features without exact parameter information. PMID:20665647
Optimized scheduling technique of null subcarriers for peak power control in 3GPP LTE downlink.
Cho, Soobum; Park, Sang Kyu
2014-01-01
Orthogonal frequency division multiple access (OFDMA) is a key multiple access technique for the long term evolution (LTE) downlink. However, a high peak-to-average power ratio (PAPR) can degrade power efficiency. The well-known PAPR reduction technique of dummy sequence insertion (DSI) can be a realistic solution because of its structural simplicity. However, the large number of subcarriers used for the dummy sequences may decrease the transmitted data rate in the DSI scheme. In this paper, a novel DSI scheme is applied to the LTE system. First, we obtain the null subcarriers in single-input single-output (SISO) and multiple-input multiple-output (MIMO) systems, respectively; then, optimized dummy sequences are inserted into the obtained null subcarriers. Simulation results show that the Walsh-Hadamard transform (WHT) sequence is the best choice for the dummy sequence, and that a ratio of 16 to 20 between the WHT and randomly generated sequences gives the maximum PAPR reduction performance. The near-optimal number of iterations is derived to prevent exhaustive iteration. It is also shown that the proposed technique causes no bit error rate (BER) degradation in the LTE downlink system. PMID:24883376
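The dummy-sequence idea can be sketched directly: compute the PAPR of an OFDM symbol, then try candidate sequences on the null subcarriers and keep the best. The 32-subcarrier layout, QPSK dummies and candidate count below are illustrative simplifications, not the LTE grid or the WHT construction used in the paper:

```python
import cmath, math, random

# DSI sketch: QPSK data occupy 24 of 32 subcarriers, candidate dummy
# sequences fill the 8 null subcarriers, and the candidate with the lowest
# PAPR is kept. All layout choices here are illustrative assumptions.
rng = random.Random(5)
N = 32
DATA, NULL = list(range(24)), list(range(24, 32))
QPSK = [1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]

def papr_db(sym):
    x = [sum(sym[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
         for n in range(N)]                    # IDFT -> time-domain samples
    p = [abs(v) ** 2 for v in x]
    return 10.0 * math.log10(max(p) / (sum(p) / N))

base = [0j] * N
for k in DATA:
    base[k] = rng.choice(QPSK)                 # random QPSK payload
papr0 = papr_db(base)

best = papr0
for _ in range(10):                            # try 10 random dummy sequences
    sym = base[:]
    for k in NULL:
        sym[k] = rng.choice(QPSK)
    best = min(best, papr_db(sym))
```

The data subcarriers are untouched, which is why a well-chosen dummy sequence can lower the PAPR without costing data rate on those subcarriers or degrading BER.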
Yan, Ming; Wei, Ying-chun; Li, Xue-feng; Meng, Jin; Wu, Yun; Xiao, Wei
2015-10-01
A theoretical basis for control of the alcohol precipitation process was provided, the alcohol precipitation was optimized, and the governing relationship equation was obtained. The monod glycoside, loganin and paeoniflorin were used as the evaluation indexes to determine the influential factors of the alcohol precipitation technique for Liuwei Dihuang decoction by Plackett-Burman experimental design, and the levels of the non-significant factors were identified. Then, Box-Behnken response surface methodology was used to investigate how the critical process parameters influence the effect of alcohol precipitation, and to derive the interactions between key process parameters and the correlation equation with the index components. By establishing and solving a quadratic regression model of the composite score, the optimum conditions of the alcohol precipitation technique were determined as follows: stirring speed, 580 r·min⁻¹; standing time, 17 hours; alcohol concentration, 34%; density of the Liuwei Dihuang decoction, 1.13. The response surface methodology for optimizing the alcohol precipitation technique of Liuwei Dihuang decoction is reasonable and feasible.
Human motion planning based on recursive dynamics and optimal control techniques
NASA Technical Reports Server (NTRS)
Lo, Janzen; Huang, Gang; Metaxas, Dimitris
2002-01-01
This paper presents an efficient optimal control and recursive dynamics-based computer animation system for simulating and controlling the motion of articulated figures. A quasi-Newton nonlinear programming technique (super-linear convergence) is implemented to solve minimum torque-based human motion-planning problems. The explicit analytical gradients needed in the dynamics are derived using a matrix exponential formulation and Lie algebra. Cubic spline functions are used to make the search space for an optimal solution finite. Based on our formulations, our method is well conditioned and robust, in addition to being computationally efficient. To better illustrate the efficiency of our method, we present results of natural looking and physically correct human motions for a variety of human motion tasks involving open and closed loop kinematic chains.
Optimization techniques applied to passive measures for in-orbit spacecraft survivability
NASA Technical Reports Server (NTRS)
Mog, Robert A.; Price, D. Marvin
1991-01-01
Spacecraft designers have always been concerned about the effects of meteoroid impacts on mission safety. The engineering solution to this problem has generally been to erect a bumper or shield placed outboard from the spacecraft wall to disrupt/deflect the incoming projectiles. Spacecraft designers have a number of tools at their disposal to aid in the design process. These include hypervelocity impact testing, analytic impact predictors, and hydrodynamic codes. Analytic impact predictors generally provide the best quick-look estimate of design tradeoffs. The most complete way to determine the characteristics of an analytic impact predictor is through optimization of the protective structures design problem formulated with the predictor of interest. Space Station Freedom protective structures design insight is provided through the coupling of design/material requirements, hypervelocity impact phenomenology, meteoroid and space debris environment sensitivities, optimization techniques and operations research strategies, and mission scenarios. Major results are presented.
NASA Astrophysics Data System (ADS)
Sánchez, H. T.; Estrems, M.; Franco, P.; Faura, F.
2009-11-01
In recent years, the heat exchanger market has increasingly demanded new products with short cycle times, which means that both the design and manufacturing stages must be drastically shortened. The design stage can be shortened by means of CAD-based parametric design techniques. The methodology presented in this paper is based on the optimized control of the geometric parameters of a heat exchanger service chamber by means of the Application Programming Interface (API) provided by the SolidWorks CAD package. Using this implementation, a set of different design configurations of the service chamber, made of stainless steel AISI 316, is studied by means of the FE method. As a result of this study, a set of knowledge rules based on the fatigue behaviour is constructed and integrated into the design optimization process.
Development of a parameter optimization technique for the design of automatic control systems
NASA Technical Reports Server (NTRS)
Whitaker, P. H.
1977-01-01
Parameter optimization techniques for the design of linear automatic control systems that are applicable to both continuous and digital systems are described. The model performance index is used as the optimization criterion because of the physical insight that can be attached to it. The design emphasis is to start with the simplest system configuration that experience indicates would be practical. Design parameters are specified, and a digital computer program is used to select that set of parameter values which minimizes the performance index. The resulting design is examined, and complexity, through the use of more complex information processing or more feedback paths, is added only if performance fails to meet operational specifications. System performance specifications are assumed to be such that the desired step function time response of the system can be inferred.
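The model-performance-index idea can be sketched as a one-parameter search: simulate the closed loop, compare its step response with the desired reference-model response, and pick the gain minimizing the integral-squared difference. The plant, reference model and gain grid below are illustrative assumptions, not the report's systems:

```python
import math

# One-parameter model performance index: unity feedback around an assumed
# plant G(s) = 1/(s(s+1)) with proportional gain K, compared against a
# desired first-order model response 1 - exp(-2t) (all choices illustrative).
DT, T_END = 0.01, 5.0
STEPS = int(T_END / DT)

def perf_index(K):
    x1 = x2 = 0.0                      # plant output and its rate (Euler states)
    J = 0.0
    for n in range(STEPS):
        t = n * DT
        u = K * (1.0 - x1)             # proportional control of the step error
        x1 += DT * x2
        x2 += DT * (-x2 + u)
        ym = 1.0 - math.exp(-2.0 * t)  # reference-model step response
        J += DT * (x1 - ym) ** 2       # integral-squared model-following error
    return J

# simplest configuration first: a single gain, chosen by grid search
best_K = min((k * 0.1 for k in range(1, 100)), key=perf_index)
```

In the report's workflow, complexity (extra feedback paths or processing) would be added only if the response with the best simple gain still failed the operational specifications.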
NASA Astrophysics Data System (ADS)
Liu, Hanli; Pei, Tao; Zhou, Chenghu; Zhu, A.-Xing
2008-12-01
In order to enhance the spectral characteristics of features for clustering, in the experiment of wetland extraction in the Sanjiang Plain we apply a series of preprocessing approaches to the MODIS remote sensing data aimed at eliminating interference caused by other features. First, by analyzing the spectral characteristics of the data, we choose a set of multi-temporal and multi-spectral MODIS data of the Sanjiang Plain for clustering. By building and applying a mask, water areas and woodland vegetation are eliminated from the image data. Second, by enhanced Lee filtering and Minimum Noise Fraction (MNF) transformation, the data are denoised and the characteristics of wetland are markedly enhanced. After preprocessing, the fuzzy c-means clustering algorithm optimized by particle swarm optimization (PSO-FCM) is applied to the image data for wetland extraction. The experimental results show that wetland extraction by means of the PSO-FCM algorithm is reasonably accurate and effective.
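The clustering step can be sketched with the standard fuzzy c-means updates; the PSO seeding of the cluster centers is omitted here for brevity (centers start at the data extremes), and the 1-D two-cluster data are illustrative, not MODIS pixels:

```python
import random

# Core fuzzy c-means updates used inside PSO-FCM (PSO center seeding omitted).
rng = random.Random(6)
data = ([rng.gauss(0.0, 0.3) for _ in range(50)]
        + [rng.gauss(5.0, 0.3) for _ in range(50)])
C, M = 2, 2.0                          # number of clusters, fuzzifier m

centers = [min(data), max(data)]       # deterministic stand-in for PSO seeding
for _ in range(30):
    # membership update: u_ic = 1 / sum_k (d_ic / d_ik)^(2/(m-1))
    U = []
    for x in data:
        d = [max(abs(x - c), 1e-9) for c in centers]
        U.append([1.0 / sum((d[i] / d[k]) ** (2.0 / (M - 1.0)) for k in range(C))
                  for i in range(C)])
    # center update: c_i = sum_j u_ji^m x_j / sum_j u_ji^m
    centers = [sum(U[j][i] ** M * data[j] for j in range(len(data)))
               / sum(U[j][i] ** M for j in range(len(data)))
               for i in range(C)]
```

In the full PSO-FCM, the swarm searches over initial centers to avoid the poor local optima plain FCM can fall into.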
Du, Gang; Jiang, Zhibin; Diao, Xiaodi; Yao, Yang
2012-04-01
Although the clinical pathway (CP) predefines a predictable, standardized care process for a particular diagnosis or procedure, many variances may still unavoidably occur. Some key index parameters have a strong relationship with the variance-handling measures of a CP. In the real world, these problems are highly nonlinear in nature, so it is hard to develop a comprehensive mathematical model. In this paper, a rule extraction approach combining a hybrid genetic double multi-group cooperative particle swarm optimization (PSO) algorithm and a discrete PSO algorithm (named HGDMCPSO/DPSO) is developed to discover the previously unknown and potentially complicated nonlinear relationship between the key parameters and the variance-handling measures of a CP. The extracted rules can then provide abnormal-variance warnings for medical professionals. Three numerical experiments, on the Iris and Wisconsin breast cancer data sets from the UCI repository and on CP variance data for osteosarcoma preoperative chemotherapy, are used to validate the proposed method. Compared with previous research, the proposed rule extraction algorithm obtains high prediction accuracy, less computing time and more stability, and its output is easily comprehended by users; it is thus an effective knowledge extraction tool for CP variance handling.
Wang, Shu-tao; Chen, Dong-ying; Wang, Xing-long; Wei, Meng; Wang, Zhi-fang
2015-12-01
In this paper, the fluorescence spectral properties of potassium sorbate in aqueous solution and in orange juice are studied. The results show that the fluorescence spectra of potassium sorbate differ considerably between the two solutions, but in both the characteristic fluorescence peak lies at λ(ex)/λ(em) = 375/490 nm. The two-dimensional fluorescence spectra show that the relationship between fluorescence intensity and potassium sorbate concentration is complex and nonlinear. To determine the concentration of potassium sorbate in orange juice, a new method combining the particle swarm optimization (PSO) algorithm with a back-propagation (BP) neural network is proposed. The relative errors of two predicted concentrations are 1.83% and 1.53%, respectively, which indicates that the method is feasible. The PSO-BP neural network can accurately measure the concentration of potassium sorbate in orange juice in the range of 0.1-2.0 g·L⁻¹. PMID:26964248
NASA Astrophysics Data System (ADS)
Niu, Chun-Yang; Qi, Hong; Huang, Xing; Ruan, Li-Ming; Wang, Wei; Tan, He-Ping
2015-11-01
A hybrid least-square QR decomposition (LSQR)-particle swarm optimization (LSQR-PSO) algorithm was developed to estimate the three-dimensional (3D) temperature distributions and absorption coefficients simultaneously. The outgoing radiative intensities at the boundary surface of the absorbing media were simulated by the line-of-sight (LOS) method, which served as the input for the inverse analysis. The retrieval results showed that the 3D temperature distributions of the participating media with known radiative properties could be retrieved accurately using the LSQR algorithm, even with noisy data. For the participating media with unknown radiative properties, the 3D temperature distributions and absorption coefficients could be retrieved accurately using the LSQR-PSO algorithm even with measurement errors. It was also found that the temperature field could be estimated more accurately than the absorption coefficients. In order to gain insight into the effects on the accuracy of temperature distribution reconstruction, the selection of the detection direction and the angle between two detection directions was also analyzed. Project supported by the Major National Scientific Instruments and Equipment Development Special Foundation of China (Grant No. 51327803), the National Natural Science Foundation of China (Grant No. 51476043), and the Fund of Tianjin Key Laboratory of Civil Aircraft Airworthiness and Maintenance in Civil Aviation University of China.
A soft self-repairing for FBG sensor network in SHM system based on PSO-SVR model reconstruction
NASA Astrophysics Data System (ADS)
Zhang, Xiaoli; Wang, Peng; Liang, Dakai; Fan, Chunfeng; Li, Cailing
2015-05-01
A structural health monitoring (SHM) system takes advantage of an array of sensors to continuously monitor a structure and provide early predictions such as damage position and damage degree. Such a system must monitor the structure under all conditions, including adverse ones; it must therefore be robust and survivable, and ideally have a self-repairing ability. In this study, a model-reconstruction predicting algorithm based on particle swarm optimization-support vector regression (PSO-SVR) is proposed to achieve self-repairing of the fiber Bragg grating (FBG) sensor network in an SHM system. An eight-point FBG sensor SHM system is demonstrated experimentally on an aircraft wing box. For predicting the damage loading position on the wing box, six kinds of disabled modes are experimentally studied to verify the self-repairing ability of the FBG sensor network, and the predicting performance is compared with that of the non-reconstruction PSO-SVR model. The results indicate that the model-reconstruction algorithm outperforms the non-reconstruction model: if some sensors in the FBG-based SHM system fail, the predicting performance of the model-reconstruction algorithm remains almost the same as when no sensor has failed. In this way, the self-repairing ability of the FBG sensor network is achieved, and the reliability and survivability of the FBG-based SHM system are enhanced even when some FBG sensors are invalid.
A High-Level Technique for Estimation and Optimization of Leakage Power for Full Adder
NASA Astrophysics Data System (ADS)
Shrivas, Jayram; Akashe, Shyam; Tiwari, Nitesh
2013-04-01
Optimization of power is a very important issue in low-voltage and low-power applications. In this paper, we propose a power gating technique to reduce the leakage current and leakage power of a one-bit full adder. The technique uses two sleep transistors, a PMOS and an NMOS: the PMOS sleep transistor is inserted between the power supply and the pull-up network, and the NMOS sleep transistor is inserted between the pull-down network and the ground terminal. Both sleep transistors are turned on when the circuit is in active mode and turned off when it is in standby mode. We simulated the one-bit full adder with and without power gating using the Cadence Virtuoso tool in 45 nm technology at 0.7 V and 27°C. With this technique, the leakage current is reduced from 2.935 pA to 1.905 pA and the leakage power from 25.04 μW to 9.233 μW, a leakage power reduction of 63.12%.
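The headline reduction figure can be checked directly from the quoted measurements; a one-line sanity check (pure arithmetic, no circuit simulation):

```python
# Leakage power before/after power gating, as quoted in the abstract (μW)
p_before, p_after = 25.04, 9.233
reduction_pct = (p_before - p_after) / p_before * 100.0
print(round(reduction_pct, 2))  # ≈ 63.13, matching the reported 63.12% to rounding
```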
NASA Technical Reports Server (NTRS)
Zimbelman, D. F.; Dennehy, C. J.; Welch, R. V.; Born, G. H.
1990-01-01
A predictive temperature estimation technique which can be used to drive a model of the Sunrise/Sunset thermal 'snap' disturbance torque experienced by low Earth orbiting spacecraft is described. The twice-per-orbit impulsive disturbance torque is attributed to vehicle passage in and out of the Earth's shadow cone (umbra), during which large flexible appendages undergo rapidly changing thermal conditions. Flexible members, in particular solar arrays, experience rapid cooling during umbra entrance (Sunset) and rapid heating during exit (Sunrise). The thermal 'snap' phenomenon has been observed during normal on-orbit operations of both the LANDSAT-4 satellite and the Communications Technology Satellite (CTS). Thermal 'snap' has also been predicted to be a dominant source of error for the TOPEX satellite. The fundamental equations used to model the Sunrise/Sunset thermal 'snap' disturbance torque for a typical solar-array-like structure will be described. For this derivation the array is assumed to be a thin, cantilevered beam. The time-varying thermal gradient is shown to be the driving force behind predicting the thermal 'snap' disturbance torque and therefore motivates the need for accurate estimates of temperature. The development of a technique to optimally estimate appendage surface temperature is highlighted. The objective analysis method used is structured on the Gauss-Markov theorem and provides an optimal temperature estimate at a prescribed location given data from a distributed thermal sensor network. The optimally estimated surface temperatures could then be used to compute the thermal gradient across the body. The estimation technique is demonstrated using a typical satellite solar array.
Optimization techniques applied to passive measures for in-orbit spacecraft survivability
NASA Technical Reports Server (NTRS)
Mog, Robert A.; Helba, Michael J.; Hill, Janeil B.
1992-01-01
The purpose of this research is to provide Space Station Freedom protective structures design insight through the coupling of design/material requirements, hypervelocity impact phenomenology, meteoroid and space debris environment sensitivities, optimization techniques and operations research strategies, and mission scenarios. The goals of the research are: (1) to develop a Monte Carlo simulation tool which will provide top level insight for Space Station protective structures designers; (2) to develop advanced shielding concepts relevant to Space Station Freedom using unique multiple bumper approaches; and (3) to investigate projectile shape effects on protective structures design.
New efficient optimizing techniques for Kalman filters and numerical weather prediction models
NASA Astrophysics Data System (ADS)
Famelis, Ioannis; Galanis, George; Liakatas, Aristotelis
2016-06-01
The need for accurate local environmental predictions and simulations beyond classical meteorological forecasts has been increasing in recent years because of the large number of applications that are directly or indirectly affected: renewable energy resource assessment, natural hazards early warning systems, and questions of global warming and climate change can be listed among them. Within this framework, the use of numerical weather and wave prediction systems, in conjunction with advanced statistical techniques that support the elimination of model bias and the reduction of error variability, may successfully address these issues. In the present work, new optimization methods are studied and tested in selected areas of Greece where the use of renewable energy sources is of critical importance. The added value of the proposed work lies in the solid mathematical background adopted, making use of information geometry and statistical techniques, new versions of Kalman filters, and state-of-the-art numerical analysis tools.
A model based technique for the design of flight directors. [optimal control models]
NASA Technical Reports Server (NTRS)
Levison, W. H.
1973-01-01
A new technique for designing flight directors is discussed. This technique uses the optimal-control pilot/vehicle model to determine the appropriate control strategy. The dynamics of this control strategy are then incorporated into the director control laws, thereby enabling the pilot to operate at a significantly lower workload. A preliminary design of a control director for maintaining a STOL vehicle on the approach path in the presence of random air turbulence is evaluated. By selecting model parameters in terms of allowable path deviations and pilot workload levels, a set of director laws is achieved which allows improved system performance at reduced workload levels. The pilot acts essentially as a proportional controller with regard to the director signals, and control motions are compatible with those appropriate to status-only displays.
Design and optimization of stepped austempered ductile iron using characterization techniques
Hernández-Rivera, J.L.; Garay-Reyes, C.G.; Campos-Cambranis, R.E.; Cruz-Rivera, J.J.
2013-09-15
Conventional characterization techniques such as dilatometry, X-ray diffraction and metallography were used to select and optimize temperatures and times for conventional and stepped austempering. The austenitization and conventional austempering times were selected when the dilatometry graphs showed a constant expansion value. A special heat color-etching technique was applied to distinguish between the untransformed austenite and the high-carbon stabilized austenite which had formed during the treatments. Finally, it was found that carbide precipitation was absent during the stepped austempering, in contrast to conventional austempering, in which carbide evidence was found. - Highlights: • Dilatometry helped to establish austenitization and austempering parameters. • Untransformed austenite was present even for longer processing times. • Ausferrite formed during stepped austempering caused an important reinforcement effect. • Carbide precipitation was absent during the stepped treatment.
Cheng, Zhengjun; Zhang, Yuntao; Zhou, Changhong; Zhang, Wenjun; Gao, Shibo
2009-01-01
In the present work, the support vector machine (SVM) and Adaboost-SVM have been used to develop a classification model as a potential screening mechanism for a novel series of 5-HT1A selective ligands. Each compound is represented by calculated structural descriptors that encode topological features. The particle swarm optimization (PSO) and stepwise multiple linear regression (Stepwise-MLR) methods have been used to search descriptor space and select the descriptors responsible for the inhibitory activity of these compounds. The model containing seven descriptors found by Adaboost-SVM showed better predictive capability than the other models. The total accuracy in prediction for the training and test sets is 100.0% and 95.0% for PSO-Adaboost-SVM, 99.1% and 92.5% for PSO-SVM, 99.1% and 82.5% for Stepwise-MLR-Adaboost-SVM, and 99.1% and 77.5% for Stepwise-MLR-SVM, respectively. The results indicate that Adaboost-SVM can be used as a useful modeling tool for QSAR studies. PMID:20111683
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-26
... HUMAN SERVICES Agency for Healthcare Research and Quality Patient Safety Organizations: Voluntary Relinquishment From Morgridge Institute for Research PSO AGENCY: Agency for Healthcare Research and Quality (AHRQ... Safety Organizations (PSOs), which collect, aggregate, and analyze confidential information regarding...
Dinan, T.M.
1984-01-01
The objectives of this study were to: (1) determine how energy efficiency affects the resale value of homes; (2) use this information concerning the implicit price of energy efficiency to estimate the resale value of fuel-saving investments; and (3) incorporate these resale values into the investment decision process and determine the efficient investment mix for a household planning to own a given home for three alternative time periods. Two models were used to accomplish these objectives. A hedonic price model was used to determine the impact of energy efficiency on housing prices. The hedonic technique is a method used to attach implicit prices to characteristics that are not themselves bought and sold in markets, but are components of market goods. The hedonic model in this study provided an estimate of the implicit price paid for an increase in energy efficiency in homes on the Des Moines housing market. In order to determine how the length of time the home is to be owned affects the optimal investment mix, a linear programming model was used to determine the cost-minimizing investment mix for a baseline house under the assumption that it would be owned for 6, 20, and 50 years, alternatively. The results of the hedonic technique revealed that a premium is paid for energy-efficient homes in Des Moines. The results of the linear programming model reveal that the optimal fuel-saving investment mix for a home is sensitive to the time the home is to be owned.
NASA Technical Reports Server (NTRS)
Wang, Nanbor; Parameswaran, Kirthika; Kircher, Michael; Schmidt, Douglas
2003-01-01
Although existing CORBA specifications, such as Real-time CORBA and CORBA Messaging, address many end-to-end quality-of-service (QoS) properties, they do not define strategies for configuring these properties into applications flexibly, transparently, and adaptively. Therefore, application developers must make these configuration decisions manually and explicitly, which is tedious, error-prone, and often sub-optimal. Although the recently adopted CORBA Component Model (CCM) does define a standard configuration framework for packaging and deploying software components, conventional CCM implementations focus on functionality rather than adaptive quality-of-service, which makes them unsuitable for next-generation applications with demanding QoS requirements. This paper presents three contributions to the study of middleware for QoS-enabled component-based applications. It outlines reflective middleware techniques designed to adaptively (1) select optimal communication mechanisms, (2) manage QoS properties of CORBA components in their containers, and (3) (re)configure selected component executors dynamically. Based on our ongoing research on CORBA and the CCM, we believe the application of reflective techniques to component middleware will provide a dynamically adaptive and (re)configurable framework for COTS software that is well-suited for the QoS demands of next-generation applications.
NASA Technical Reports Server (NTRS)
Wang, Nanbor; Kircher, Michael; Schmidt, Douglas C.
2000-01-01
Although existing CORBA specifications, such as Real-time CORBA and CORBA Messaging, address many end-to-end quality-of-service (QoS) properties, they do not define strategies for configuring these properties into applications flexibly, transparently, and adaptively. Therefore, application developers must make these configuration decisions manually and explicitly, which is tedious, error-prone, and often sub-optimal. Although the recently adopted CORBA Component Model (CCM) does define a standard configuration framework for packaging and deploying software components, conventional CCM implementations focus on functionality rather than adaptive quality-of-service, which makes them unsuitable for next-generation applications with demanding QoS requirements. This paper presents three contributions to the study of middleware for QoS-enabled component-based applications. It outlines reflective middleware techniques designed to adaptively: (1) select optimal communication mechanisms, (2) manage QoS properties of CORBA components in their containers, and (3) (re)configure selected component executors dynamically. Based on our ongoing research on CORBA and the CCM, we believe the application of reflective techniques to component middleware will provide a dynamically adaptive and (re)configurable framework for COTS software that is well-suited for the QoS demands of next-generation applications.
Sayara, Tahseen; Sarrà, Montserrat; Sánchez, Antoni
2010-06-01
The objective of this study was the application of the experimental design technique to optimize the conditions for the bioremediation of contaminated soil by means of composting. A low-cost material, compost from the Organic Fraction of Municipal Solid Waste, was used as amendment, with pyrene as model pollutant. The effect of three factors was considered: pollutant concentration (0.1-2 g/kg), soil:compost mixing ratio (1:0.5-1:2 w/w) and compost stability measured as respiration index (0.78, 2.69 and 4.52 mg O2 g(-1) Organic Matter h(-1)). Stable compost permitted an almost complete degradation of pyrene in a short time (10 days). Results indicated that compost stability is a key parameter in optimizing PAH biodegradation. A factor analysis indicated that the optimal conditions for bioremediation after 10, 20 and 30 days of process were (1.4, 0.78, 1:1.4), (1.4, 2.18, 1:1.3) and (1.3, 2.18, 1:1.3) for concentration (g/kg), compost stability (mg O2 g(-1) Organic Matter h(-1)) and soil:compost mixing ratio, respectively.
Highly sensitive focus monitoring technique based on illumination and target co-optimization
NASA Astrophysics Data System (ADS)
Lee, Myungjun; Smith, Mark D.; Subrahmanyan, Pradeep; Levy, Ady
2016-03-01
We present a cost-effective focus monitoring technique based on the illumination and the target co-optimization. An advanced immersion scanner can provide the freeform illumination that enables the use of any kind of custom source shape by using a programmable array of thousands of individually adjustable micro-mirrors. Therefore, one can produce non-telecentricity using the asymmetric illumination in the scanner with the optimized focus target on the cost-effective binary OMOG mask. Then, the scanner focus variations directly translate into easily measurable overlay shifts in the printed pattern with high sensitivity (ΔShift/Δfocus = 60 nm/100 nm). In addition, the capability of using the freeform illumination allows us to computationally co-optimize the source and the focus target, simultaneously, generating not only vertical or horizontal shifts, but also introducing diagonal pattern shifts. The focus-induced pattern shifts can be accurately measured by standard wafer metrology tools such as CD-SEM and overlay metrology tools.
NASA Astrophysics Data System (ADS)
Andrachek, R. G.; Abbey, D.; James, S. C.; Zhang, B.; Gabriel, C.; Martin, P.; Arnold, B. W.; Woessner, W. W.
2009-12-01
Groundwater flow predictions in complex geologic environments require the use of models that represent all potentially important physical features. Oversimplification of the flow system may neglect features that are important to understanding the range of possible flow predictions. FEFLOW was used to develop an equivalent porous medium model representing all of the key features of the conceptual model describing a folded and faulted fractured rock environment located in Southern California, USA. FEFLOW allows dipping layers with layer parallel anisotropy, vertical faults, and decreasing hydraulic conductivity with depth. The variably-saturated flow model consists of 46 layers and 1,245 parameters supported by 302 point observations of heads, head differences, pumping rates and seepage flow. The flow solution was optimized using Singular Value Decomposition and Tikhonov Regularization techniques implemented in the SVD-Assist tool within PEST. Optimization improved the model fit significantly; the objective function was reduced by 80% from its initial value. The optimized model retains sufficient parameter detail to predict flow directions. Moreover, the analysis can be extended to facilitate a probability-based assessment of flow prediction uncertainty.
Reducing the impact of a desalination plant using stochastic modeling and optimization techniques
NASA Astrophysics Data System (ADS)
Alcolea, Andres; Renard, Philippe; Mariethoz, Gregoire; Bertone, François
2009-02-01
Water is critical for economic growth in coastal areas. In this context, desalination has become an increasingly important technology over the last five decades. It often has environmental side effects, especially when the input water is pumped directly from the sea via intake pipelines. However, it is generally more efficient and cheaper to desalt brackish groundwater from beach wells than to desalt seawater. Natural attenuation is also gained and hazards due to anthropogenic pollution of seawater are reduced. In order to minimize allocation and operational costs and impacts on groundwater resources, an optimum pumping network is required. Optimization techniques are often applied to this end. Because of aquifer heterogeneity, designing the optimum pumping network demands reliable characterization of aquifer parameters. An optimum pumping network was designed, using stochastic inverse modeling together with optimization techniques, for a coastal aquifer in Oman where a desalination plant currently pumps brackish groundwater at a rate of 1200 m³/h for a freshwater production of 504 m³/h (insufficient to satisfy the growing demand in the area). A Monte Carlo analysis of 200 simulations of transmissivity and storage coefficient fields, conditioned to the response to stresses of tidal fluctuation and three long-term pumping tests, was performed. These simulations are physically plausible and fit the available data well. Simulated transmissivity fields are used to design the optimum pumping configuration required to increase the current pumping rate to 9000 m³/h, for a freshwater production of 3346 m³/h (more than six times larger than the existing one). For this task, new pumping wells need to be sited and their pumping rates defined. These unknowns are determined by a genetic algorithm that minimizes a function accounting for: (1) drilling, operational and maintenance costs, (2) target discharge and minimum drawdown (i.e., minimum aquifer
Fractional order fuzzy control of hybrid power system with renewable generation using chaotic PSO.
Pan, Indranil; Das, Saptarshi
2016-05-01
This paper investigates the operation of a hybrid power system through a novel fuzzy control scheme. The hybrid power system employs various autonomous generation systems like wind turbine, solar photovoltaic, diesel engine, fuel-cell, aqua electrolyzer etc. Other energy storage devices like the battery, flywheel and ultra-capacitor are also present in the network. A novel fractional order (FO) fuzzy control scheme is employed and its parameters are tuned with a particle swarm optimization (PSO) algorithm augmented with two chaotic maps for achieving an improved performance. This FO fuzzy controller shows better performance over the classical PID, and the integer order fuzzy PID controller in both linear and nonlinear operating regimes. The FO fuzzy controller also shows stronger robustness properties against system parameter variation and rate constraint nonlinearity, than that with the other controller structures. The robustness is a highly desirable property in such a scenario since many components of the hybrid power system may be switched on/off or may run at lower/higher power output, at different time instants. PMID:25816968
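The two chaotic maps used to augment the PSO are not specified in this abstract. As an illustration, the logistic map at full chaos is a common choice in chaos-augmented PSO; its iterates can replace the uniform random draws of the velocity update. This is an assumption for illustration, not necessarily the paper's maps:

```python
def logistic_map(x):
    """One iterate of the logistic map x -> 4x(1-x), fully chaotic at r = 4."""
    return 4.0 * x * (1.0 - x)

def chaotic_sequence(n, x0=0.7):
    """Generate n chaotic values in [0, 1]; in a chaos-augmented PSO these
    stand in for the uniform random draws r1, r2 of the velocity update."""
    seq, x = [], x0
    for _ in range(n):
        x = logistic_map(x)
        seq.append(x)
    return seq
```

A second map (e.g. a tent map) could perturb the inertia weight in the same way, which is one plausible reading of "augmented with two chaotic maps".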
Chang, Chiou-Shiung; Hwang, Jing-Min; Tai, Po-An; Chang, You-Kang; Wang, Yu-Nong; Shih, Rompin; Chuang, Keh-Shih
2016-01-01
either DCA or IMRS plans, at 9.2 ± 7% and 8.2 ± 6%, respectively. Owing to the multiple-arc or multiple-beam planning designs of IMRS and VMAT, both of these techniques required higher MU delivery than DCA, with the averages being twice as high (p < 0.05). If a linear accelerator is the only modality available for establishing SRS treatment, then on this retrospective statistical evidence we recommend VMAT as the optimal technique for delivering treatment to tumors adjacent to the brainstem. PMID:27396940
Good techniques optimize control of oil-based mud and solids
Phelps, J.; Hoopingarner, J.
1989-02-13
Effective techniques have been developed from work on dozens of North Sea wells to minimize the amount of oil-based mud discharged to the sea while maintaining acceptable levels of solids. Pressure to reduce pollution during the course of drilling prompted the development of these techniques. They involve personnel and the optimization of mud systems and procedures. Case histories demonstrate that regulations may be met with economical techniques using existing technology. The benefits of low solids content are widely known, and are a key part of any successful mud program. Good solids control should result in lower mud costs and better drilling performance. Operators have specified high-performance shakers to accomplish this and have revised their mud programs with lower and lower allowable drilled-solids percentages. This will pay off in certain areas. But with the U.K. Department of Energy regulations that went into effect Jan. 1, 1989, requiring cuttings oil discharge content (CODC) to be less than 150 g of oil/kg of dry solids discharged, oil-loss control has a higher profile in the U.K. sector of the North Sea.
Dose reduction in a paediatric X-ray department following optimization of radiographic technique.
Mooney, R; Thomas, P S
1998-08-01
A survey of radiation doses to children from diagnostic radiography has been carried out in a dedicated paediatric X-ray room. Entrance surface dose (ESD) and dose-area product (DAP) per radiograph were simultaneously measured with thermoluminescent dosemeters (TLDs) and a DAP meter to provide mean dose values for separate age ranges. Results of ESD and DAP were lower than the mean values from other UK studies for all ages and radiographs, except for the infant pelvis AP radiograph. Comparison of ESD and radiographic technique with CEC quality criteria highlighted a need for reduction of dose to infants and implied an increase in tube filtration might overcome the limitations of the room's three-phase, 12-pulse generator, allowing higher tube potentials to be used on infants. Additional tube filtration of 3 mm Al was installed following assessment of dose reduction and image quality with test objects and phantoms, and confirmation from the paediatric radiologist that clinical image quality was not significantly altered. The tube potential was increased from 50 to 56 kVp for the infant pelvis AP radiograph. The resulting ESD and effective dose fell by 51% and 38%, respectively. The CEC quality criteria have proved useful as a benchmark against which technique in X-ray departments can be compared, and as such are a useful tool for optimizing radiographic technique and reducing patient dose.
Karthivashan, Govindarajan; Masarudin, Mas Jaffri; Kura, Aminu Umar; Abas, Faridah; Fakurazi, Sharida
2016-01-01
This study involves adaptation of bulk or sequential technique to load multiple flavonoids in a single phytosome, which can be termed as "flavonosome". Three widely established and therapeutically valuable flavonoids, such as quercetin (Q), kaempferol (K), and apigenin (A), were quantified in the ethyl acetate fraction of Moringa oleifera leaves extract and were commercially obtained and incorporated in a single flavonosome (QKA-phosphatidylcholine) through four different methods of synthesis - bulk (M1) and serialized (M2) co-sonication and bulk (M3) and sequential (M4) co-loading. The study also established an optimal formulation method based on screening the synthesized flavonosomes with respect to their size, charge, polydispersity index, morphology, drug-carrier interaction, antioxidant potential through in vitro 1,1-diphenyl-2-picrylhydrazyl kinetics, and cytotoxicity evaluation against human hepatoma cell line (HepaRG). Furthermore, entrapment and loading efficiency of flavonoids in the optimal flavonosome have been identified. Among the four synthesis methods, sequential loading technique has been optimized as the best method for the synthesis of QKA-phosphatidylcholine flavonosome, which revealed an average diameter of 375.93±33.61 nm, with a zeta potential of -39.07±3.55 mV, and the entrapment efficiency was >98% for all the flavonoids, whereas the drug-loading capacity of Q, K, and A was 31.63%±0.17%, 34.51%±2.07%, and 31.79%±0.01%, respectively. The in vitro 1,1-diphenyl-2-picrylhydrazyl kinetics of the flavonoids indirectly depicts the release kinetic behavior of the flavonoids from the carrier. The QKA-loaded flavonosome had no indication of toxicity toward human hepatoma cell line as shown by the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide result, wherein even at the higher concentration of 200 µg/mL, the flavonosomes exert >85% of cell viability. These results suggest that sequential loading technique may be a promising
Recursive Ant Colony Global Optimization: a new technique for the inversion of geophysical data
NASA Astrophysics Data System (ADS)
Gupta, D. K.; Gupta, J. P.; Arora, Y.; Singh, U. K.
2011-12-01
We present a new method called Recursive Ant Colony Global Optimization (RACO), a modified form of the general ACO, which can be used to find the best solutions to inversion problems in geophysics. RACO simulates the social behaviour of ants to find the best path between the nest and the food source. A new term, depth, has been introduced, which controls the extent of recursion. A selected number of cities qualify for the successive depth. The results of one depth are used to construct the models for the next depth, and the range of values for each of the parameters is reduced without any change to the number of models. The three additional steps performed after each depth are pheromone tracking, pheromone updating and city selection. One of the advantages of RACO over ACO is that if a problem has multiple solutions, pheromone accumulation will take place at more than one city, thereby leading to the formation of multiple nested ACO loops within the ACO loop of the previous depth. Also, while the convergence of ACO is almost linear, RACO shows exponential convergence and hence is faster than ACO. RACO also improves on some other global optimization techniques, as it does not require any initial values to be assigned to the model parameters. The method has been tested on some mathematical functions, synthetic self-potential (SP) and synthetic gravity data. The obtained results reveal the efficiency and practicability of the method. The method is found to be efficient enough to solve the problems of SP and gravity anomalies due to a horizontal cylinder, a sphere, an inclined sheet and multiple idealized bodies buried inside the earth. These anomalies, with and without noise, were inverted using the RACO algorithm. The obtained results were compared with those obtained from conventional methods, and it was found that the RACO results are more accurate. Finally, this optimization technique was applied to real field data collected over the Surda
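The depth recursion described in the abstract, where a few candidates qualify, each parameter range shrinks around them, and the search descends a level, can be sketched without the pheromone machinery. A simplified, stdlib-only illustration (an interpretation, not the authors' implementation; pheromone tracking and updating are omitted):

```python
import random

def recursive_search(objective, bounds, depth=3, n_candidates=50,
                     keep=5, shrink=0.5, seed=0):
    """RACO-style recursion sketch: sample candidate models ('cities'),
    keep the best few, shrink each parameter range around them, and
    recurse to the next depth with the same number of models."""
    rng = random.Random(seed)
    best = None
    for _ in range(depth):
        cands = [[rng.uniform(lo, hi) for lo, hi in bounds]
                 for _ in range(n_candidates)]
        cands.sort(key=objective)
        elite = cands[:keep]       # the cities qualified for the next depth
        best = elite[0]
        # shrink every parameter range around the elite candidates
        new_bounds = []
        for d, (lo, hi) in enumerate(bounds):
            span = (hi - lo) * shrink / 2.0
            centre = sum(c[d] for c in elite) / keep
            new_bounds.append((max(lo, centre - span), min(hi, centre + span)))
        bounds = new_bounds
    return best
```

Because each depth halves the search range while keeping the model count fixed, the resolution improves geometrically with depth, which is consistent with the exponential convergence the abstract claims for RACO.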
Si, Lei; Wang, Zhongbin; Yang, Yinwei
2014-01-01
In order to efficiently and accurately adjust the shearer traction speed, a novel approach based on a Takagi-Sugeno (T-S) cloud inference network (CIN) and improved particle swarm optimization (IPSO) is proposed. The T-S CIN is built through the combination of a cloud model and a T-S fuzzy neural network. The IPSO algorithm employs a parameter-automation adjustment strategy and velocity resetting to significantly improve the performance of the basic PSO algorithm in global search and in fine-tuning of the solutions, and the flowchart of the proposed approach is designed. Simulation examples are carried out, and the comparison results indicate that the proposed method is feasible and efficient and outperforms the others. Finally, an industrial application example at a coal mining face is demonstrated to show the effect of the proposed system. PMID:25506358
Si, Lei; Wang, Zhongbin; Liu, Xinhua; Yang, Yinwei; Zhang, Lin
2014-01-01
In order to efficiently and accurately adjust the shearer traction speed, a novel approach based on a Takagi-Sugeno (T-S) cloud inference network (CIN) and improved particle swarm optimization (IPSO) is proposed. The T-S CIN is built through the combination of a cloud model and a T-S fuzzy neural network. Moreover, the IPSO algorithm employs a parameter automation adjustment strategy and velocity resetting to significantly improve the performance of the basic PSO algorithm in global search and fine-tuning of the solutions, and the flowchart of the proposed approach is designed. Furthermore, some simulation examples are carried out and the comparison results indicate that the proposed method is feasible, efficient, and outperforms others. Finally, an industrial application example of a coal mining face is demonstrated to show the effect of the proposed system.
Chao, Ming; Wei, Jie; Li, Tianfang; Yuan, Yading; Rosenzweig, Kenneth E; Lo, Yeh-Chi
2016-04-21
We present a study of extracting respiratory signals from cone beam computed tomography (CBCT) projections within the framework of the Amsterdam Shroud (AS) technique. Acquired prior to the radiotherapy treatment, CBCT projections were preprocessed for contrast enhancement by converting the original intensity images to attenuation images, with which the AS image was created. An adaptive robust z-normalization filtering was applied to further augment the weak oscillating structures locally. From the enhanced AS image, the respiratory signal was extracted using a two-step optimization approach to effectively reveal the large-scale regularity of the breathing signals. CBCT projection images from five patients acquired with the Varian Onboard Imager on the Clinac iX System Linear Accelerator (Varian Medical Systems, Palo Alto, CA) were employed to assess the proposed technique. Stable breathing signals can be reliably extracted using the proposed algorithm. Reference waveforms obtained using an air bellows belt (Philips Medical Systems, Cleveland, OH) were exported and compared to the AS-based signals. The average error for the enrolled patients between the estimated breaths per minute (bpm) and the reference waveform bpm can be as low as -0.07, with a standard deviation of 1.58. The new algorithm outperformed the original AS technique for all patients by 8.5% to 30%. The impact of gantry rotation on the breathing signal was assessed with data acquired with a Quasar phantom (Modus Medical Devices Inc., London, Canada) and found to be minimal on the signal frequency. The new technique developed in this work will provide a practical solution for extracting markerless breathing signals from CBCT projections for thoracic and abdominal patients.
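A robust z-normalization of the kind mentioned above might look like the following one-dimensional sketch, using a local median and median absolute deviation (MAD) so that outliers do not swamp the weak oscillations. The actual filter in the paper operates on the AS image, and its exact form is an assumption here.

```python
def robust_z_normalize(column, window=15):
    """Normalize each sample by the median and MAD of a sliding window,
    which amplifies weak local oscillations while resisting outliers."""
    def median(vals):
        s = sorted(vals)
        n = len(s)
        return s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])
    out = []
    half = window // 2
    for i in range(len(column)):
        seg = column[max(0, i - half): i + half + 1]
        m = median(seg)
        # Guard against a zero MAD on locally constant data.
        mad = median([abs(v - m) for v in seg]) or 1e-9
        out.append((column[i] - m) / mad)
    return out
```

Applied column-wise to the AS image, such a filter would locally augment the oscillating diaphragm trace regardless of slowly varying background intensity.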
NASA Astrophysics Data System (ADS)
Wu, Z.; Gao, Y.; Gong, H.; Li, L.
2016-04-01
Lacking efficient methods, industry currently uses only one parameter, the fuel flow rate, to evaluate nozzle quality, which is far from satisfying the current emission regulations worldwide. By utilizing synchrotron radiation high-energy X-rays at the Shanghai Synchrotron Radiation Facility (SSRF), together with imaging techniques, 3D models of two nozzles with the same design dimensions were established, and the influence of parameter fluctuations in the azimuthal direction was analyzed in detail. Results indicate that, due to orifice misalignment, even with the same design dimension, the inlet rounding radius of the orifices differs greatly, and its fluctuation in the azimuthal direction is also large. This difference will cause variation in the flow characteristics at the orifice outlet and further affect the spray characteristics. The study also indicates that more precise investigation of, and insight into, the evaluation and optimization of diesel nozzle structural parameters are needed.
Optimization of a wood dryer kiln using the mixed integer programming technique: A case study
Gustafsson, S.I.
1999-07-01
When wood is to be utilized as a raw material for furniture, buildings, etc., it must be dried from approximately 100% to 6% moisture content. This is achieved at least partly in a drying kiln. Heat for this purpose is provided by electrical means, or by steam from boilers fired with wood chips or oil. By making a close examination of monitored values from an actual drying kiln, it has been possible to optimize the use of steam and electricity using the so-called mixed integer programming technique. Owing to the operating schedule for the drying kiln, it has been necessary to divide the drying process into very short time intervals, i.e., a number of minutes. Since a drying cycle takes about two or three weeks, this presents a considerable mathematical problem that has to be solved.
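The scheduling idea lends itself to a toy model: a binary boiler-firing decision per interval plus a continuous electric top-up of the heat demand. A real mixed integer program over thousands of minute-long intervals would need branch-and-bound or an MIP solver; the tiny horizon below allows brute force over the integer variables. All prices, the steam capacity, and the start-up cost are illustrative assumptions.

```python
from itertools import product

def optimal_heat_schedule(demand, elec_price, steam_price, steam_cap):
    """Minimize energy cost: steam is cheap but capacity-limited and a
    boiler start-up incurs a fixed cost; electricity tops up the rest."""
    start_cost = 5.0
    best = (float("inf"), None)
    n = len(demand)
    for fired in product([0, 1], repeat=n):       # binary (integer) variables
        cost, prev = 0.0, 0
        for t in range(n):
            steam = min(demand[t], steam_cap) * fired[t]
            elec = demand[t] - steam              # continuous top-up
            cost += steam * steam_price + elec * elec_price
            if fired[t] and not prev:             # boiler start-up cost
                cost += start_cost
            prev = fired[t]
        if cost < best[0]:
            best = (cost, fired)
    return best
```

The start-up cost is what forces integer variables into the model; without it, the relaxation would decouple into independent per-interval choices.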
Reddy, Raghu M; Guntupalli, Kalpalatha K
2007-01-01
Chronic obstructive pulmonary disease (COPD) is a major global healthcare problem. Studies vary widely in the reported frequency of mechanical ventilation in acute exacerbations of COPD. Invasive intubation and mechanical ventilation may be associated with significant morbidity and mortality. A good understanding of the airway pathophysiology and lung mechanics in COPD is necessary to appropriately manage acute exacerbations and respiratory failure. The basic pathophysiology in COPD exacerbation is the critical expiratory airflow limitation with consequent dynamic hyperinflation. These changes lead to further derangement in ventilatory mechanics, muscle function and gas exchange which may result in respiratory failure. This review discusses the altered respiratory mechanics in COPD, ways to detect these changes in a ventilated patient and formulating ventilatory techniques to optimize management of respiratory failure due to exacerbation of COPD. PMID:18268918
Lucero, V.; Meale, B.M.; Purser, F.E.
1990-01-01
The analysis discussed in this paper was performed as part of the buried waste remediation efforts at the Idaho National Engineering Laboratory (INEL). The specific type of remediation discussed herein involves a thermal treatment process for converting contaminated soil and waste into a stable, chemically-inert form. Models of the proposed process were developed using probabilistic risk assessment (PRA) fault tree and event tree modeling techniques. The models were used to determine the appropriateness of the conceptual design by identifying potential hazards of system operations. Additional models were developed to represent the reliability aspects of the system components. By performing various sensitivities with the models, optimal design modifications are being identified to substantiate an integrated, cost-effective design representing minimal risk to the environment and/or public with maximum component reliability. 4 figs.
Engine Yaw Augmentation for Hybrid-Wing-Body Aircraft via Optimal Control Allocation Techniques
NASA Technical Reports Server (NTRS)
Taylor, Brian R.; Yoo, Seung Yeun
2011-01-01
Asymmetric engine thrust was implemented in a hybrid-wing-body non-linear simulation to reduce the amount of aerodynamic surface deflection required for yaw stability and control. Hybrid-wing-body aircraft are especially susceptible to yaw surface deflection due to their decreased bare airframe yaw stability resulting from the lack of a large vertical tail aft of the center of gravity. Reduced surface deflection, especially for trim during cruise flight, could reduce the fuel consumption of future aircraft. Designed as an add-on, optimal control allocation techniques were used to create a control law that tracks total thrust and yaw moment commands with an emphasis on not degrading the baseline system. Implementation of engine yaw augmentation is shown and feasibility is demonstrated in simulation with a potential drag reduction of 2 to 4 percent. Future flight tests are planned to demonstrate feasibility in a flight environment.
NASA Astrophysics Data System (ADS)
Galanis, George; Famelis, Ioannis; Kalogeri, Christina
2014-10-01
In recent years a new, highly demanding framework has been set for environmental sciences and applied mathematics as a result of the needs posed by issues that are of interest not only to the scientific community but to today's society in general: global warming, renewable sources of energy and natural hazards can be listed among them. The research community today follows two main directions in order to address the above problems: the utilization of environmental observations obtained from in situ or remote sensing sources, and meteorological-oceanographic simulations based on physical-mathematical models. In particular, in trying to reach credible local forecasts, the two previous data sources are combined by algorithms that are essentially based on optimization processes. The conventional approaches in this framework usually neglect the topological-geometrical properties of the space of the data under study by adopting least squares methods based on classical Euclidean geometry tools. In the present work new optimization techniques are discussed making use of methodologies from a rapidly advancing branch of applied mathematics, Information Geometry. The latter proves that the distributions of data sets are elements of non-Euclidean structures in which the underlying geometry may differ significantly from the classical one. Geometrical entities like Riemannian metrics, distances, curvature and affine connections are utilized in order to define the optimum distributions fitting the environmental data at specific areas and to form differential systems that describe the optimization procedures. The methodology proposed is clarified by an application to wind speed forecasts on the island of Kefalonia, Greece.
Optimization of GPS water vapor tomography technique with radiosonde and COSMIC historical data
NASA Astrophysics Data System (ADS)
Ye, Shirong; Xia, Pengfei; Cai, Changsheng
2016-09-01
Near-real-time, high-spatial-resolution knowledge of the atmospheric water vapor distribution is vital in numerical weather prediction. The GPS tomography technique has proved effective for three-dimensional water vapor reconstruction. In this study, the tomography processing is optimized in a few aspects with the aid of radiosonde and COSMIC historical data. Firstly, regional tropospheric zenith hydrostatic delay (ZHD) models are improved and thus the zenith wet delay (ZWD) can be obtained at a higher accuracy. Secondly, the regional conversion factor for converting the ZWD to the precipitable water vapor (PWV) is refined. Next, we develop a new method for dividing the tomography grid with an uneven voxel height and a varied water vapor layer top. Finally, we propose a Gaussian exponential vertical interpolation method which can better reflect the vertical variation characteristics of water vapor. GPS datasets collected in Hong Kong in February 2014 are employed to evaluate the optimized tomographic method against the conventional method. The radiosonde-derived and COSMIC-derived water vapor densities are utilized as references to evaluate the tomographic results. Using radiosonde products as references, the test results obtained from our optimized method indicate that the water vapor density accuracy is improved by 15 and 12 % compared to those derived from the conventional method below the height of 3.75 km and above the height of 3.75 km, respectively. Using the COSMIC products as references, the results indicate that the water vapor density accuracy is improved by 15 and 19 % below 3.75 km and above 3.75 km, respectively.
NASA Technical Reports Server (NTRS)
Rao, R. G. S.; Ulaby, F. T.
1977-01-01
The paper examines optimal sampling techniques for obtaining accurate spatial averages of soil moisture, at various depths and for cell sizes in the range 2.5-40 acres, with a minimum number of samples. Both simple random sampling and stratified sampling procedures are used to reach a set of recommended sample sizes for each depth and for each cell size. Major conclusions from the statistical sampling test results are that (1) the number of samples required decreases with increasing depth; (2) when the total number of samples cannot be prespecified or the moisture in only a single layer is of interest, a simple random sampling procedure should be used, based on the observed mean and SD for data from a single field; (3) when the total number of samples can be prespecified and the objective is to measure the soil moisture profile with depth, stratified random sampling based on optimal allocation should be used; and (4) decreasing the sensor resolution cell size leads to fairly large decreases in sample sizes with stratified sampling procedures, whereas only a moderate decrease is obtained with simple random sampling procedures.
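Conclusion (3) refers to the classical optimal (Neyman) allocation rule for stratified sampling, which assigns samples to each stratum h in proportion to N_h * S_h, its size times its standard deviation. A minimal sketch:

```python
def neyman_allocation(total_n, stratum_sizes, stratum_sds):
    """Allocate total_n samples across strata proportionally to N_h * S_h
    (Neyman allocation), with at least one sample per stratum."""
    weights = [n * s for n, s in zip(stratum_sizes, stratum_sds)]
    total_w = sum(weights)
    return [max(1, round(total_n * w / total_w)) for w in weights]
```

Because of rounding, the allocations may not sum exactly to `total_n`; a production version would distribute the remainder, which is omitted here for brevity.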
Accart, Nathalie; Sergi, Florinda; Rooke, Ronald
2014-09-01
Organ-specific cell types are maintained by tissue homeostasis and may vary in nature and/or frequency in pathological situations. Moreover, within a cell lineage, some sub-populations, defined by combinations of cell-surface markers, may have specific functions. Dendritic cells (DCs) are the epitome of such a population as they may be subdivided into discrete sub-groups with defined functions in specific compartments of various organs. Technically, studying the distribution of DC sub-populations involves performing multiparametric immunofluorescence on well-conserved organ structures. However, immunodetection may be impaired by protein cross-linking and antigenic epitope masking caused by the use of 10% neutral-buffered formalin. To circumvent this, and to preserve good morphological tissue structure, we evaluated alternative fixatives such as Periodate Lysine Paraformaldehyde or Tris Zinc in combination with other embedding techniques. The cryosection protocols were adapted for optimal antigen detection but offered poor morphological preservation. We therefore developed a new methodology based on Tris Zinc fixative, gelatin-sucrose embedding and freezing. Using multiple DC markers, we demonstrate that this treatment is an optimal protocol for cell-surface marker detection on high-quality tissue sections. PMID:24874853
Optimization of electrospinning techniques for the realization of nanofiber plastic lasers
NASA Astrophysics Data System (ADS)
Persano, L.; Moffa, M.; Fasano, V.; Montinaro, M.; Morello, G.; Resta, V.; Spadaro, D.; Gucciardi, P. G.; Maragò, O. M.; Camposeo, A.; Pisignano, D.
2016-02-01
Electrospinning technologies for the realization of active polymeric nanomaterials can be easily up-scaled, opening perspectives for industrial exploitation, and due to their versatility they can be employed to finely tailor the size, morphology and macroscopic assembly of fibers as well as their functional properties. Light-emitting or other active polymer nanofibers, made of conjugated polymers or of blends embedding chromophores or other functional dopants, are suitable for various applications in advanced photonics and sensing technologies. In particular, their almost one-dimensional geometry and finely tunable composition make them interesting materials for developing novel lasing devices. However, electrospinning techniques rely on a large variety of parameters and possible experimental geometries, and they need to be carefully optimized in order to obtain suitable topographical and photonic properties in the resulting nanostructures. Targeted features include a smooth and uniform fiber surface, dimensional control, as well as filament alignment, enhanced light emission, and stimulated emission. Here we present various optimization strategies for electrospinning methods which have been implemented and developed by us for the realization of lasing architectures based on polymer nanofibers. The geometry of the resulting nanowires leads to peculiar light scattering from the spun filaments, and to controllable lasing characteristics.
NASA Astrophysics Data System (ADS)
Tsai, Wen-Ping; Chang, Fi-John; Chang, Li-Chiu; Herricks, Edwin E.
2015-11-01
Flow regime is the key driver of riverine ecology. This study proposes a novel hybrid methodology based on artificial intelligence (AI) techniques for quantifying riverine ecosystem requirements and delivering suitable flow regimes that sustain river and floodplain ecology through optimizing reservoir operation. This approach addresses the challenge of better fitting riverine ecosystem requirements with existing human demands. We first explored and characterized the relationship between flow regimes and fish communities through a hybrid artificial neural network (ANN). Then the non-dominated sorting genetic algorithm II (NSGA-II) was established for river flow management of the Shihmen Reservoir in northern Taiwan. The ecosystem requirement took the form of maximizing fish diversity, which could be estimated by the hybrid ANN. The human requirement was to provide a higher satisfaction degree of water supply. The results demonstrated that the proposed methodology could offer a number of diversified alternative strategies for reservoir operation and improve reservoir operational strategies, producing downstream flows that could meet both human and ecosystem needs. The wide spread of Pareto-front (optimal) solutions makes this methodology attractive to water resources managers, allowing decision makers to easily determine the best compromise through the trade-off between reservoir operational strategies for human and ecosystem needs.
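The Pareto-front machinery underlying NSGA-II rests on non-dominated sorting. A minimal sketch of extracting the first front, for two maximized objectives (e.g. fish diversity and water supply satisfaction):

```python
def pareto_front(points):
    """Return the non-dominated points (maximization in every objective):
    a point is dominated if another is at least as good everywhere and
    strictly better somewhere."""
    def dominates(a, b):
        return (all(x >= y for x, y in zip(a, b))
                and any(x > y for x, y in zip(a, b)))
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

NSGA-II repeats this sorting on the remaining points to build successive fronts and adds crowding-distance ranking within each front; only the first-front extraction is shown here.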
NASA Astrophysics Data System (ADS)
Sue-Ann, Goh; Ponnambalam, S. G.
This paper focuses on the operational issues of a Two-echelon Single-Vendor-Multiple-Buyers Supply chain (TSVMBSC) under the vendor managed inventory (VMI) mode of operation. To determine the optimal sales quantity for each buyer in the TSVMBSC, a mathematical model is formulated. Based on the optimal sales quantity, the optimal sales price can be obtained, which determines the optimal channel profit and the contract price between the vendor and the buyer. All these parameters depend upon the understanding of the revenue sharing between the vendor and the buyers. A Particle Swarm Optimization (PSO) is proposed for this problem. Solutions obtained from PSO are compared with the best known results reported in the literature.
Particle swarm optimization algorithm for optimizing assignment of blood in blood banking system.
Olusanya, Micheal O; Arasomwan, Martins A; Adewumi, Aderemi O
2015-01-01
This paper reports the performance of particle swarm optimization (PSO) for the assignment of blood to meet patients' blood transfusion requests. While the drive for blood donation lingers, there is a need for effective and efficient management of available blood in blood banking systems. Moreover, the inherent danger of transfusing wrong blood types to patients, unnecessary importation of blood units from external sources, and wastage of blood products due to nonusage necessitate the development of mathematical models and techniques for effective handling of blood distribution among available blood types, in order to minimize wastage and importation from external sources. This gives rise to the blood assignment problem (BAP) introduced recently in the literature. We propose queue and multiple-knapsack models with a PSO-based solution to address this challenge. Simulation is based on sets of randomly generated data that mimic the real-world population distribution of blood types. Results obtained show the efficiency of the proposed algorithm for the BAP, with no blood units wasted and very low importation, where necessary, from outside the blood bank. The result therefore can serve as a benchmark and basis for decision support tools for real-life deployment.
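The knapsack component of such a model can be illustrated with a standard 0/1 knapsack dynamic program: requests compete for a limited stock of compatible units, and the objective is to serve the highest total priority without exceeding stock. The queue and blood-type-compatibility layers of the paper are omitted, and the request data are hypothetical.

```python
def assign_blood(stock, requests):
    """0/1 knapsack DP: requests are (units_needed, priority) pairs; pick
    the subset maximizing served priority within the available stock."""
    # dp[c] = best total priority achievable using at most c units
    dp = [0] * (stock + 1)
    choice = [[] for _ in range(stock + 1)]
    for idx, (units, priority) in enumerate(requests):
        for c in range(stock, units - 1, -1):   # reverse: each request once
            if dp[c - units] + priority > dp[c]:
                dp[c] = dp[c - units] + priority
                choice[c] = choice[c - units] + [idx]
    return dp[stock], choice[stock]
```

In the paper this exact optimization is handled by PSO over a larger, multi-knapsack formulation; the DP above is only the single-knapsack core, useful for checking small instances.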
NASA Astrophysics Data System (ADS)
Zamora, A.; Gutierrez, A. E.; Velasco, A. A.
2014-12-01
2- and 3-Dimensional models obtained from the inversion of geophysical data are widely used to represent the structural composition of the Earth and to constrain independent models obtained from other geological data (e.g. core samples, seismic surveys, etc.). However, inverse modeling of gravity data presents a very unstable and ill-posed mathematical problem, given that solutions are non-unique and small changes in parameters (position and density contrast of an anomalous body) can highly impact the resulting model. Through the implementation of an interior-point method constrained optimization technique, we improve the 2-D and 3-D models of Earth structures representing known density contrasts mapping anomalous bodies in uniform regions and boundaries between layers in layered environments. The proposed techniques are applied to synthetic data and gravitational data obtained from the Rio Grande Rift and the Cooper Flat Mine region located in Sierra County, New Mexico. Specifically, we improve the 2- and 3-D Earth models by getting rid of unacceptable solutions (those that do not satisfy the required constraints or are geologically unfeasible) given the reduction of the solution space.
Tran, Cuong D.; Gopalsamy, Geetha L.; Mortimer, Elissa K.; Young, Graeme P.
2015-01-01
It is well recognised that zinc deficiency is a major global public health issue, particularly in young children in low-income countries with diarrhoea and environmental enteropathy. Zinc supplementation is regarded as a powerful tool to correct zinc deficiency as well as to treat a variety of physiologic and pathologic conditions. However, the dose and frequency of its use as well as the choice of zinc salt are not clearly defined regardless of whether it is used to treat a disease or correct a nutritional deficiency. We discuss the application of zinc stable isotope tracer techniques to assess zinc physiology, metabolism and homeostasis and how these can address knowledge gaps in zinc supplementation pharmacokinetics. This may help to resolve optimal dose, frequency, length of administration, timing of delivery to food intake and choice of zinc compound. It appears that long-term preventive supplementation can be administered much less frequently than daily but more research needs to be undertaken to better understand how best to intervene with zinc in children at risk of zinc deficiency. Stable isotope techniques, linked with saturation response and compartmental modelling, also have the potential to assist in the continued search for simple markers of zinc status in health, malnutrition and disease. PMID:26035248
Improving the performance of mass-consistent numerical models using optimization techniques
Barnard, J.C.; Wegley, H.L.; Hiester, T.R.
1985-09-01
This report describes a technique of using a mass-consistent model to derive wind speeds over a microscale region of complex terrain. A serious limitation in the use of these numerical models is that the calculated wind field is highly sensitive to some input parameters, such as those specifying atmospheric stability. Because accurate values for these parameters are not usually known, confidence in the calculated winds is low. However, values for these parameters can be found by tuning the model to existing wind observations within a microscale area. This tuning is accomplished by using a single-variable, unconstrained optimization procedure that adjusts the unknown parameters so that the error between the observed winds and model calculations of these winds is minimized. Model verification is accomplished by using eight sets of hourly averaged wind data. These data are obtained from measurements made at approximately 30 sites covering a wind farm development in the Altamont Pass area. When the model is tuned to a small subset of the 30 sites, an accurate determination of the wind speeds was made for the remaining sites in six of the eight cases. (The two that failed were low wind speed cases.) Therefore, when this technique is used, numerical modeling shows great promise as a tool for microscale siting of wind turbines in complex terrain.
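The tuning step described above is a one-dimensional minimization of the observation-model mismatch over the unknown stability parameter. A golden-section search is one standard choice for such a single-variable, unconstrained problem; the abstract does not name the exact method used, so this is an assumption.

```python
import math

def tune_parameter(error_of, lo, hi, tol=1e-5):
    """Golden-section search: minimize error_of(s), the mismatch between
    observed winds and mass-consistent model winds, over s in [lo, hi]."""
    g = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    c, d = b - g * (b - a), a + g * (b - a)
    while b - a > tol:
        if error_of(c) < error_of(d):
            b, d = d, c                 # minimum lies in [a, d]
            c = b - g * (b - a)
        else:
            a, c = c, d                 # minimum lies in [c, b]
            d = a + g * (b - a)
    return 0.5 * (a + b)
```

Each call to `error_of` would, in practice, run the mass-consistent model with the candidate stability parameter and return the wind speed error at the observation sites.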
NASA Astrophysics Data System (ADS)
Turbelin, Grégory; Singh, Sarvesh Kumar; Issartel, Jean-Pierre
2014-12-01
In the event of an accidental or intentional contaminant release in the atmosphere, it is imperative, for managing emergency response, to diagnose the release parameters of the source from measured data. Reconstruction of the source information exploiting measured data is called an inverse problem. To solve such a problem, several techniques are currently being developed. The first part of this paper provides a detailed description of one of them, known as the renormalization method. This technique, proposed by Issartel (2005), has been derived using an approach different from that of standard inversion methods and gives a linear solution to the continuous Source Term Estimation (STE) problem. In the second part of this paper, the discrete counterpart of this method is presented. By using matrix notation, common in data assimilation and suitable for numerical computing, it is shown that the discrete renormalized solution belongs to a family of well-known inverse solutions (minimum weighted norm solutions), which can be computed by using the concept of generalized inverse operator. It is shown that, when the weight matrix satisfies the renormalization condition, this operator satisfies the criteria used in geophysics to define good inverses. Notably, by means of the Model Resolution Matrix (MRM) formalism, we demonstrate that the renormalized solution fulfils optimal properties for the localization of single point sources. Throughout the article, the main concepts are illustrated with data from a wind tunnel experiment conducted at the Environmental Flow Research Centre at the University of Surrey, UK.
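For a single observation, the minimum weighted norm solution x = W^{-1} G^T (G W^{-1} G^T)^{-1} d mentioned above reduces to scalar arithmetic, which makes the generalized-inverse formula easy to sketch without a linear algebra library. This illustrates only the minimum-weighted-norm family; the renormalization condition on W is not implemented here.

```python
def min_weighted_norm_single_obs(g, w, d):
    """Minimum weighted norm solution of the single linear observation
    d = g . x, minimizing sum(w_j * x_j^2): here G W^{-1} G^T is the
    scalar s = sum(g_j^2 / w_j), so x_j = (g_j / w_j) * d / s."""
    s = sum(gj * gj / wj for gj, wj in zip(g, w))   # G W^{-1} G^T (scalar)
    return [(gj / wj) * d / s for gj, wj in zip(g, w)]
```

Raising the weight w_j of a component penalizes placing source mass there, which is how the choice of W shapes where the reconstructed source is localized.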
Flow and Mixture Optimization for a Fuel Stratification Engine Using PIV and PLIF Techniques
NASA Astrophysics Data System (ADS)
Li, Y.; Zhao, H.; Ma, T.
2006-07-01
This paper describes an application of PIV (particle image velocimetry) and two-tracer PLIF (planar laser-induced fluorescence) techniques to optimize the in-cylinder flow and to visualize the distribution of two fuels simultaneously for developing a fuel stratification engine. This research was carried out on a twin-spark four-valve SI engine. The PIV measurement results show that a strong tumbling flow was produced in the cylinder as the intake valves were shrouded. The flow exhibited a symmetrical distribution in the plane perpendicular to the cylinder axis from the early stage of intake until the late stage of compression. This flow pattern helps to stratify the two fuels introduced from separate ports into two lateral regions. The stratification of fuels was observed visually by the two-tracer PLIF technique. During the PLIF measurement, two tracers, 3-pentanone and N,N-dimethylaniline (DMA), were doped into two fuels, hexane and iso-octane, respectively. Their fluorescence emissions were separated by two optical band-pass filters and recorded simultaneously by a single ICCD camera via an image doubling system. The PLIF measurement results show that the two fuels were well stratified.
Optimal control of switched linear systems based on Migrant Particle Swarm Optimization algorithm
NASA Astrophysics Data System (ADS)
Xie, Fuqiang; Wang, Yongji; Zheng, Zongzhun; Li, Chuanfeng
2009-10-01
The optimal control problem for switched linear systems with internally forced switching has more constraints than with externally forced switching. Heavy computations and slow convergence in solving this problem are major obstacles. In this paper we describe a new approach for solving this problem, called Migrant Particle Swarm Optimization (Migrant PSO). Imitating the behavior of a flock of migrant birds, the Migrant PSO applies naturally to both continuous and discrete spaces, in which a definitive optimization algorithm and a stochastic search method are combined. The efficacy of the proposed algorithm is illustrated via a numerical example.
Particle Swarm Optimization with Scale-Free Interactions
Liu, Chen; Du, Wen-Bo; Wang, Wen-Xu
2014-01-01
The particle swarm optimization (PSO) algorithm, in which individuals collaborate with their interacting neighbors like bird flocking to search for the optima, has been successfully applied in a wide range of fields pertaining to searching and convergence. Here we employ a scale-free network to represent the inter-individual interactions in the population, named SF-PSO. In contrast to the traditional PSO with fully-connected topology or regular topology, the scale-free topology used in SF-PSO incorporates the diversity of individuals in searching and information dissemination ability, leading to a quite different optimization process. Systematic results with respect to several standard test functions demonstrate that SF-PSO gives rise to a better balance between convergence speed and optimum quality, accounting for its much better performance than that of the traditional PSO algorithms. We further explore the dynamical searching process microscopically, finding that the cooperation of hub nodes and non-hub nodes plays a crucial role in optimizing the convergence process. Our work may have implications for computational intelligence and complex networks. PMID:24859007
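The SF-PSO idea, replacing the global best with the best personal best inside each particle's scale-free neighborhood, can be sketched as follows. The graph generator is a simple preferential-attachment scheme, and all coefficients are illustrative assumptions rather than the paper's settings.

```python
import random

def ba_graph(n, m=2):
    """Preferential attachment: each new node links to m existing nodes
    chosen with probability proportional to degree (scale-free topology)."""
    adj = {i: set() for i in range(n)}
    targets, repeated = list(range(m)), []
    for v in range(m, n):
        for u in set(targets):
            adj[v].add(u)
            adj[u].add(v)
            repeated += [u, v]      # each endpoint repeated once per edge
        targets = random.sample(repeated, m)
    return adj

def sfpso_minimize(f, bounds, n=30, iters=200, w=0.7, c=1.5):
    """PSO where each particle follows the best personal best within its
    scale-free neighborhood instead of a single global best."""
    adj = ba_graph(n)
    dim = len(bounds)
    xs = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pbest = [list(x) for x in xs]
    pval = [f(x) for x in xs]
    for _ in range(iters):
        for i in range(n):
            hood = list(adj[i]) + [i]
            l = min(hood, key=lambda j: pval[j])   # neighborhood best
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vs[i][d] = (w * vs[i][d]
                            + c * r1 * (pbest[i][d] - xs[i][d])
                            + c * r2 * (pbest[l][d] - xs[i][d]))
                xs[i][d] = min(max(xs[i][d] + vs[i][d], bounds[d][0]), bounds[d][1])
            val = f(xs[i])
            if val < pval[i]:
                pval[i], pbest[i] = val, list(xs[i])
    k = min(range(n), key=lambda i: pval[i])
    return pbest[k], pval[k]
```

Because non-hub particles see only a few neighbors, good solutions spread more slowly than under a global best, preserving diversity; hubs then consolidate convergence, which is the hub/non-hub cooperation the abstract highlights.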
Optimized fractional cloudiness determination from five ground-based remote sensing techniques
Boers, R.; de Haij, M. J.; Wauben, W.M.F.; Baltink, Henk K.; van Ulft, L. H.; Savenije, M.; Long, Charles N.
2010-12-23
A one-year record of fractional cloudiness at 10 minute intervals was generated for the Cabauw Experimental Site for Atmospheric Research [CESAR] (51°58'N, 4°55'E) using an integrated assessment of five different observational methods. The five methods are based on active as well as passive systems and use either a hemispheric or a column remote sensing technique. The one-year instrumental cloudiness data were compared against a 30-year climatology of Observer data in the vicinity of CESAR [1971-2000]. In the intermediate 2-6 octa range, most instruments, but especially the column methods, report a lower frequency of occurrence of cloudiness than the absolute minimum values from the 30-year Observer climatology. At night, the Observer records fewer clouds in the 1-2 octa range than during the day, while the instruments registered more clouds. During daytime the Observer also records much more 7 octa cloudiness than the instruments. One column method combining a radar with a lidar outstrips all other techniques in recording cloudiness, even up to heights in excess of 9 km. This is mostly due to the high sensitivity of the radar used in the technique. A reference algorithm was designed to derive a continuous and optimized record of fractional cloudiness. Output from individual instruments was weighted according to the cloud base height reported at the observation time; the larger the height, the lower the weight. The algorithm was able to provide fractional cloudiness observations every 10 minutes for 98% of the total period of 12 months [15 May 2008 - 14 May 2009].
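The abstract states only that instrument outputs are weighted inversely with reported cloud base height; a minimal sketch assuming a simple 1/height weighting (the actual weighting function in the reference algorithm may differ):

```python
def combined_cloud_fraction(obs):
    """Weighted mean of fractional-cloudiness estimates.

    obs: list of (cloud_fraction, cloud_base_height_m) pairs, one per
    instrument. The inverse-height weights are an illustrative assumption;
    the paper states only that larger reported heights get lower weight."""
    weights = [1.0 / max(h, 1.0) for _, h in obs]     # guard against h == 0
    total = sum(weights)
    return sum(w * f for (f, _), w in zip(obs, weights)) / total
```

With this choice, an instrument reporting a 500 m cloud base counts four times as much as one reporting 2000 m.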
Cotter, Meghan M.; Whyms, Brian J.; Kelly, Michael P.; Doherty, Benjamin M.; Gentry, Lindell R.; Bersu, Edward T.; Vorperian, Houri K.
2015-01-01
The hyoid bone anchors and supports the vocal tract. Its complex shape is best studied in three dimensions, but it is difficult to capture on computed tomography (CT) images and three-dimensional volume renderings. The goal of this study was to determine the optimal CT scanning and rendering parameters to accurately measure the growth and developmental anatomy of the hyoid and to determine whether it is feasible and necessary to use these parameters in the measurement of hyoids from in vivo CT scans. Direct linear and volumetric measurements of skeletonized hyoid bone specimens were compared to corresponding CT images to determine the most accurate scanning parameters and three-dimensional rendering techniques. A pilot study was undertaken using in vivo scans from a retrospective CT database to determine feasibility of quantifying hyoid growth. Scanning parameters and rendering technique affected accuracy of measurements. Most linear CT measurements were within 10% of direct measurements; however, volume was overestimated when CT scans were acquired with a slice thickness greater than 1.25 mm. Slice-by-slice thresholding of hyoid images decreased volume overestimation. The pilot study revealed that the linear measurements tested correlate with age. A fine-tuned rendering approach applied to small slice thickness CT scans produces the most accurate measurements of hyoid bones. However, linear measurements can be accurately assessed from in vivo CT scans at a larger slice thickness. Such findings imply that investigation into the growth and development of the hyoid bone, and the vocal tract as a whole, can now be performed using these techniques. PMID:25810349
Caproni, A.; Toffoli, R. T.; Monteiro, H.; Abraham, Z.; Teixeira, D. M.
2011-07-20
We present a new technique for obtaining model fittings to very long baseline interferometric images of astrophysical jets. The method minimizes a performance function proportional to the sum of the squared difference between the model and observed images. The model image is constructed by summing N_s elliptical Gaussian sources characterized by six parameters: two-dimensional peak position, peak intensity, eccentricity, amplitude, and orientation angle of the major axis. We present results for the fitting of two main benchmark jets: the first constructed from three individual Gaussian sources, the second formed by five Gaussian sources. Both jets were analyzed by our cross-entropy technique in finite and infinite signal-to-noise regimes, the background noise chosen to mimic that found in interferometric radio maps. Those images were constructed to simulate most of the conditions encountered in interferometric images of active galactic nuclei. We show that the cross-entropy technique is capable of recovering the parameters of the sources with a similar accuracy to that obtained from the very traditional Astronomical Image Processing System Package task IMFIT when the image is relatively simple (e.g., few components). For more complex interferometric maps, our method displays superior performance in recovering the parameters of the jet components. Our methodology is also able to show quantitatively the number of individual components present in an image. An additional application of the cross-entropy technique to a real image of a BL Lac object is shown and discussed. Our results indicate that our cross-entropy model-fitting technique must be used in situations involving the analysis of complex emission regions having more than three sources, even though it is substantially slower than current model-fitting tasks (at least 10,000 times slower for a single processor, depending on the number of sources to be optimized). As in the case of any model fitting
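A sketch of the model construction under one reading of the six-parameter description (position, peak, major-axis width, eccentricity, orientation angle; the paper's exact parameterization may differ), together with the squared-residual performance function:

```python
import math

def elliptical_gaussian(x, y, x0, y0, peak, a, e, theta):
    """One elliptical Gaussian: center (x0, y0), peak value, major-axis
    std a, eccentricity e, position angle theta (radians)."""
    b = a * math.sqrt(1.0 - e * e)                    # minor-axis std
    dx, dy = x - x0, y - y0
    u = dx * math.cos(theta) + dy * math.sin(theta)   # rotate into source frame
    v = -dx * math.sin(theta) + dy * math.cos(theta)
    return peak * math.exp(-0.5 * ((u / a) ** 2 + (v / b) ** 2))

def model_image(w, h, sources):
    """Model image as the sum of N_s elliptical Gaussian components."""
    return [[sum(elliptical_gaussian(x, y, *s) for s in sources)
             for x in range(w)] for y in range(h)]

def objective(model, observed):
    """Performance function: sum of squared pixel residuals."""
    return sum((m - o) ** 2
               for mr, orow in zip(model, observed) for m, o in zip(mr, orow))
```

The cross-entropy method would then iteratively sample source-parameter sets, score them with `objective`, and refit its sampling distribution to the elite samples.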
NASA Astrophysics Data System (ADS)
Palma, Giuseppe; Bia, Pietro; Mescia, Luciano; Yano, Tetsuji; Nazabal, Virginie; Taguchi, Jun; Moréac, Alain; Prudenzano, Francesco
2014-07-01
A mid-IR amplifier consisting of a tapered chalcogenide fiber coupled to an Er-doped chalcogenide microsphere has been optimized via a particle swarm optimization (PSO) approach. More precisely, a dedicated three-dimensional numerical model, based on the coupled mode theory and solving the rate equations, has been integrated with the PSO procedure. The rate equations have included the main transitions among the erbium energy levels, the amplified spontaneous emission, and the most important secondary transitions pertaining to the ion-ion interactions. The PSO has allowed the optimal choice of the microsphere and fiber radius, taper angle, and fiber-microsphere gap in order to maximize the amplifier gain. The taper angle and the fiber-microsphere gap have been optimized to efficiently inject into the microsphere both the pump and the signal beams and to improve their spatial overlapping with the rare-earth-doped region. The employment of the PSO approach shows different attractive features, especially when many parameters have to be optimized. The numerical results demonstrate the effectiveness of the proposed approach for the design of amplifying systems. The PSO-based optimization approach has allowed the design of a microsphere-based amplifying system more efficient than a similar device designed by using a deterministic optimization method. In fact, the amplifier designed via the PSO exhibits a simulated gain G=33.7 dB, which is higher than the gain G=6.9 dB of the amplifier designed via the deterministic method.
Automatic Parameter Tuning for the Morpheus Vehicle Using Particle Swarm Optimization
NASA Technical Reports Server (NTRS)
Birge, B.
2013-01-01
A high-fidelity simulation using a PC-based Trick framework has been developed for Johnson Space Center's Morpheus test bed flight vehicle. There is an iterative development loop of refining and testing the hardware, refining the software, comparing the software simulation to hardware performance, and adjusting either or both the hardware and the simulation to extract the best performance from the hardware as well as the most realistic representation of the hardware from the software. A Particle Swarm Optimization (PSO) based technique has been developed that increases the speed and accuracy of this iterative development cycle. Parameters in software can be automatically tuned to make the simulation match real-world subsystem data from test flights. Special considerations for scale, linearity, and discontinuities can be all but ignored with this technique, allowing fast turnaround both for simulation tune-up to match hardware changes and during the test and validation phase to help identify hardware issues. Software models with insufficient control authority to match hardware test data can be immediately identified. Using this technique requires little to no specialized knowledge of optimization, freeing model developers to concentrate on spacecraft engineering. Integration of the PSO into the Morpheus development cycle will be discussed, as well as a case study highlighting the tool's effectiveness.
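The abstract does not specify the fitness used for tuning; a plausible sketch (names and normalization are assumptions, not the paper's implementation) is an RMS mismatch between simulated and recorded telemetry channels, normalized per channel so differences in scale can be "all but ignored":

```python
def tuning_cost(run_sim, telemetry, params):
    """Candidate PSO fitness for parameter tuning (illustrative).

    run_sim(params) -> dict of channel name -> list of samples, with the
    same keys and lengths as telemetry. Each channel is normalized by the
    telemetry spread so differently scaled signals contribute comparably."""
    out = run_sim(params)
    total, n = 0.0, 0
    for ch, rec in telemetry.items():
        spread = (max(rec) - min(rec)) or 1.0   # avoid divide-by-zero
        total += sum(((s - r) / spread) ** 2 for s, r in zip(out[ch], rec))
        n += len(rec)
    return (total / n) ** 0.5                   # 0.0 means a perfect match
```

A PSO then minimizes `tuning_cost` over the simulation parameters; a model that cannot be driven below some floor signals insufficient control authority.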
Shah, Chirag; Vicini, Frank A.
2011-11-15
As more women survive breast cancer, long-term toxicities affecting their quality of life, such as lymphedema (LE) of the arm, gain importance. Although numerous studies have attempted to determine incidence rates, identify optimal diagnostic tests, enumerate efficacious treatment strategies and outline risk reduction guidelines for breast cancer-related lymphedema (BCRL), few groups have consistently agreed on any of these issues. As a result, standardized recommendations are still lacking. This review will summarize the latest data addressing all of these concerns in order to provide patients and health care providers with optimal, contemporary recommendations. Published incidence rates for BCRL vary substantially with a range of 2-65% based on surgical technique, axillary sampling method, radiation therapy fields treated, and the use of chemotherapy. Newer clinical assessment tools can potentially identify BCRL in patients with subclinical disease with prospective data suggesting that early diagnosis and management with noninvasive therapy can lead to excellent outcomes. Multiple therapies exist with treatments defined by the severity of BCRL present. Currently, the standard of care for BCRL in patients with significant LE is complex decongestive physiotherapy (CDP). Contemporary data also suggest that a multidisciplinary approach to the management of BCRL should begin prior to definitive treatment for breast cancer employing patient-specific surgical, radiation therapy, and chemotherapy paradigms that limit risks. Further, prospective clinical assessments before and after treatment should be employed to diagnose subclinical disease. In those patients who require aggressive locoregional management, prophylactic therapies and the use of CDP can help reduce the long-term sequelae of BCRL.
Tuomas, V.; Jaakko, L.
2013-07-01
This article discusses the optimization of the target motion sampling (TMS) temperature treatment method, previously implemented in the Monte Carlo reactor physics code Serpent 2. The TMS method was introduced in [1] and first practical results were presented at the PHYSOR 2012 conference [2]. The method is a stochastic method for taking the effect of thermal motion into account on-the-fly in a Monte Carlo neutron transport calculation. It is based on sampling the target velocities at collision sites and then utilizing the 0 K cross sections in the target-at-rest frame for reaction sampling. The fact that the total cross section becomes a distributed quantity is handled using rejection sampling techniques. The original implementation of the TMS requires 2.0 times more CPU time in a PWR pin-cell case than a conventional Monte Carlo calculation relying on pre-broadened effective cross sections. In an HTGR case examined in this paper the overhead factor is as high as 3.6. By first changing from a multi-group to a continuous-energy implementation and then fine-tuning a parameter affecting the conservativity of the majorant cross section, it is possible to decrease the overhead factors to 1.4 and 2.3, respectively. Preliminary calculations are also made using a new and as yet incomplete optimization method in which the temperature of the basis cross section is increased above 0 K. It seems that with the new approach it may be possible to decrease the factors to as low as 1.06 and 1.33, respectively, but its functionality has not yet been proven. Therefore, these performance measures should be considered preliminary. (authors)
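The rejection-sampling idea, with a majorant (conservative upper bound) total cross section, can be sketched as generic delta tracking; here `real_sigma` stands in for the target-motion-resolved total cross section that TMS evaluates only at candidate collision sites (illustrative, not Serpent 2 code):

```python
import math
import random

def sample_collision(boundary, sigma_majorant, real_sigma, rng):
    """Delta tracking against a majorant cross section.

    Flight distances are sampled from sigma_majorant; a candidate collision
    at x is accepted with probability real_sigma(x) / sigma_majorant,
    otherwise it is a virtual collision and tracking continues."""
    x = 0.0
    while True:
        # exponential flight distance with the majorant as the rate
        x += -math.log(1.0 - rng.random()) / sigma_majorant
        if x >= boundary:
            return None                       # escaped without a collision
        if rng.random() < real_sigma(x) / sigma_majorant:
            return x                          # accepted (real) collision
```

A tighter (less conservative) majorant lowers the rejection rate at the cost of possibly underestimating the true cross section, which is exactly the performance trade-off the tuned parameter controls.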
Eichmiller, Jessica J; Miller, Loren M; Sorensen, Peter W
2016-01-01
Few studies have examined capture and extraction methods for environmental DNA (eDNA) to identify techniques optimal for detection and quantification. In this study, precipitation, centrifugation and filtration eDNA capture methods and six commercially available DNA extraction kits were evaluated for their ability to detect and quantify common carp (Cyprinus carpio) mitochondrial DNA using quantitative PCR in a series of laboratory experiments. Filtration methods yielded the most carp eDNA, and a glass fibre (GF) filter performed better than a similar pore size polycarbonate (PC) filter. Smaller pore sized filters had higher regression slopes of biomass to eDNA, indicating that they were potentially more sensitive to changes in biomass. Comparison of DNA extraction kits showed that the MP Biomedicals FastDNA SPIN Kit yielded the most carp eDNA and was the most sensitive for detection purposes, despite minor inhibition. The MoBio PowerSoil DNA Isolation Kit had the lowest coefficient of variation in extraction efficiency between lake and well water and had no detectable inhibition, making it most suitable for comparisons across aquatic environments. Of the methods tested, we recommend using a 1.5 μm GF filter, followed by extraction with the MP Biomedicals FastDNA SPIN Kit for detection. For quantification of eDNA, filtration through a 0.2-0.6 μm pore size PC filter, followed by extraction with MoBio PowerSoil DNA Isolation Kit was optimal. These results are broadly applicable for laboratory studies on carps and potentially other cyprinids. The recommendations can also be used to inform choice of methodology for field studies.
NASA Astrophysics Data System (ADS)
McNally-Heintzelman, Karen M.; Dawes, Judith M.; Lauto, Antonio; Parker, Anthony E.; Owen, Earl R.; Piper, James A.
1998-01-01
… This study demonstrates the feasibility of the laser-solder repair technique for nerve anastomosis, resulting in improved tensile strength. The welding temperature required to achieve optimal tensile strength has been identified.
Pungartnik, Cristina; Picada, Jaqueline; Brendel, Martin; Henriques, João A P
2002-03-31
The sensitivity responses of seven pso mutants of Saccharomyces cerevisiae towards the mutagens N-nitrosodiethylamine (NDEA), 1,2:7,8-diepoxyoctane (DEO), and 8-hydroxyquinoline (8HQ) further substantiated their allocation into two distinct groups: genes PSO1 (allelic to REV3), PSO2 (SNM1), PSO4 (PRP19), and PSO5 (RAD16) constitute one group in that they are involved in repair of damaged DNA or in RNA processing, whereas genes PSO6 (ERG3) and PSO7 (COX11) are related to metabolic steps protecting from oxidative stress and thus form a second group, not responsible for DNA repair. PSO3 has not yet been molecularly characterized but its pleiotropic phenotype would allow its integration into either group. The first three PSO genes of the DNA repair group and PSO3, apart from being sensitive to photo-activated psoralens, have another common phenotype: they are also involved in error-prone DNA repair. While all mutants of the DNA repair group and pso3 were sensitive to DEO and NDEA, the pso6 mutant revealed WT or near-WT resistance to these mutagens. As expected, the repair-proficient pso7-1 and cox11-Delta mutant alleles conferred high sensitivity to NDEA, a chemical known to be metabolized via redox cycling that yields hydroxylamine radicals and reactive oxygen species. All pso mutants exhibited some sensitivity to 8HQ, and again pso7-1 and cox11-Delta conferred the highest sensitivity to this drug. The double mutant snm1-Delta cox11-Delta exhibited additivity of the 8HQ and NDEA sensitivities of the single mutants, indicating that two different repair/recovery systems are involved in survival. DEO sensitivity of the double mutant was equal to or less than that of the single snm1-Delta mutant. In order to determine if there was oxidative damage to nucleotide bases by these drugs, we employed an established bacterial test with and without metabolic activation. After S9-mix biotransformation, NDEA, and to a lesser extent 8HQ, led to significantly higher mutagenesis in an Escherichia
NASA Technical Reports Server (NTRS)
MacKay, Rebecca A.; Locci, Ivan E.; Garg, Anita; Ritzert, Frank J.
2002-01-01
is a three-phase constituent composed of TCP and stringers of gamma phase in a matrix of gamma prime. An incoherent grain boundary separates the SRZ from the gamma/gamma prime microstructure of the superalloy. The SRZ is believed to form as a result of local chemistry changes in the superalloy due to the application of the diffusion aluminide bondcoat. Locally high surface stresses also appear to promote the formation of the SRZ. Thus, techniques that change the local alloy chemistry or reduce surface stresses have been examined for their effectiveness in reducing SRZ. These SRZ-reduction steps are performed on the test specimen or the turbine blade before the bondcoat is applied. Stress-relief heat treatments developed at NASA Glenn have been demonstrated to reduce significantly the amount of SRZ that develops during subsequent high-temperature exposures. Stress-relief heat treatments reduce surface stresses by recrystallizing a thin surface layer of the superalloy. However, in alloys with very high propensities to form SRZ, stress-relief heat treatments alone do not eliminate SRZ entirely. Thus, techniques that modify the local chemistry under the bondcoat have been emphasized and optimized successfully at Glenn. One such technique is carburization, which changes the local chemistry by forming submicron carbides near the surface of the superalloy. Detailed characterizations have demonstrated that the depth and uniform distribution of these carbides are enhanced when a stress-relief treatment and an appropriate surface preparation are employed in advance of the carburization treatment. Even in alloys that have the propensity to develop a continuous SRZ layer beneath the diffusion zone, the SRZ has been completely eliminated or reduced to low, manageable levels when this combination of techniques is utilized. Now that the techniques to mitigate SRZ have been established at Glenn, TCP phase formation is being emphasized in ongoing work under the UEET Program. The
Parameter tuning of PVD process based on artificial intelligence technique
NASA Astrophysics Data System (ADS)
Norlina, M. S.; Diyana, M. S. Nor; Mazidah, P.; Rusop, M.
2016-07-01
In this study, an artificial intelligence technique is proposed for the parameter tuning of a PVD process. Due to its previous adaptation to similar optimization problems, a genetic algorithm (GA) is selected to optimize the parameter tuning of the RF magnetron sputtering process. The most optimized parameter combination obtained from GA's optimization result is expected to produce the desirable zinc oxide (ZnO) thin film from the sputtering process. The parameters involved in this study were RF power, deposition time and substrate temperature. The algorithm was tested on 25 datasets of parameter combinations. The results from the computational experiment were then compared with the actual results from the laboratory experiment. Based on the comparison, GA proved reliable for optimizing the parameter combination before the parameter tuning was applied to the RF magnetron sputtering machine. To verify the result of GA, the algorithm was also compared to other well-known optimization algorithms, namely particle swarm optimization (PSO) and gravitational search algorithm (GSA). The results showed that GA was reliable in solving this RF magnetron sputtering process parameter tuning problem, and it showed better accuracy in the optimization based on the fitness evaluation.
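As an illustration of GA-based tuning over discrete settings such as RF power, deposition time and substrate temperature (the operator choices, rates and fitness below are assumptions for the sketch, not the paper's configuration):

```python
import random

def ga_optimize(fitness, grids, pop=20, gens=40, pc=0.8, pm=0.1, seed=0):
    """Pick one value per parameter grid to maximize fitness.

    Uses tournament selection, one-point crossover, per-gene mutation,
    and elitism. grids: list of candidate-value lists, one per parameter."""
    rng = random.Random(seed)
    def rand_ind():
        return [rng.randrange(len(g)) for g in grids]
    def decode(ind):
        return [g[i] for g, i in zip(grids, ind)]
    popl = [rand_ind() for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(popl, key=lambda ind: fitness(decode(ind)), reverse=True)
        nxt = scored[:2]                                   # elitism: keep best two
        while len(nxt) < pop:
            p1 = max(rng.sample(scored, 3), key=lambda i: fitness(decode(i)))
            p2 = max(rng.sample(scored, 3), key=lambda i: fitness(decode(i)))
            child = p1[:]
            if rng.random() < pc:                          # one-point crossover
                cut = rng.randrange(1, len(grids))
                child = p1[:cut] + p2[cut:]
            for k in range(len(grids)):                    # per-gene mutation
                if rng.random() < pm:
                    child[k] = rng.randrange(len(grids[k]))
            nxt.append(child)
        popl = nxt
    return decode(max(popl, key=lambda ind: fitness(decode(ind))))
```

In practice the fitness would score a deposited-film property (or a surrogate model of it) for each parameter combination.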
Hedegaard, R.F.; Ho, J.; Eisert, J.
1996-12-31
Three-dimensional (3-D) geoscience volume modeling can be used to improve the efficiency of the environmental investigation and remediation process. At several unsaturated-zone spill sites at two Superfund (CERCLA) sites (military installations) in California, all aspects of subsurface contamination have been characterized using an integrated computerized approach. With the aid of software such as LYNX GMS™, Wavefront's Data Visualizer™ and Gstools (public domain), the authors have created a central platform from which to map a contaminant plume, visualize the same plume three-dimensionally, and calculate volumes of contaminated soil or groundwater above important health-risk thresholds. The developed methodology allows rapid data inspection for decisions such that the characterization process and remedial action design are optimized. By using the 3-D geoscience modeling and visualization techniques, the technical staff are able to evaluate the completeness and spatial variability of the data and conduct 3-D geostatistical predictions of contaminant and lithologic distributions. The geometry of each plume is estimated using 3-D variography on raw analyte values and indicator thresholds for the kriged model. Three-dimensional lithologic interpretation is based either on 'linked' parallel cross sections or on kriged grid estimations derived from borehole data coded with permeability indicator thresholds. Investigative borings, as well as soil vapor extraction/injection wells, are sited and excavation costs are estimated using these results. The principal advantages of the technique are the efficiency and rapidity with which meaningful results are obtained and the enhanced visualization capability, which is a desirable medium for communicating with both technical staff and nontechnical audiences.
NASA Astrophysics Data System (ADS)
Tsampas, P.; Roditis, G.; Papadimitriou, V.; Chatzakos, P.; Gan, Tat-Hean
2013-05-01
Increasing demand for mobile, autonomous devices has made energy harvesting a particular point of interest. Systems that can be powered by a few hundred microwatts could feature their own energy extraction module. Energy can be harvested from the environment close to the device. In particular, the conversion of ambient mechanical vibrations via piezoelectric transducers is one of the most investigated fields of energy harvesting. A technique for optimized energy harvesting using piezoelectric actuators called "Synchronized Switching Harvesting" is explored. Compared to a typical full-bridge rectifier, the proposed harvesting technique can greatly improve harvesting efficiency, even in a significantly extended frequency window around the piezoelectric actuator's resonance. In this paper, the concept of design, theoretical analysis, modeling, implementation and experimental results using CEDRAT's APA 400M-MD piezoelectric actuator are presented in detail. Moreover, we suggest design guidelines for optimum selection of the storage unit in direct relation to the characteristics of the random vibrations. From a practical aspect, the harvesting unit is based on dedicated electronics that continuously sense the charge level of the actuator's piezoelectric element. When the charge is sensed to reach a maximum, it is directed to flow quickly into a storage unit. Special care is taken so that the electronics operate at low voltages, consuming a very small amount of the energy stored. The final prototype developed includes the harvesting circuit, implemented with miniaturized, low-cost and low-consumption electronics, and a storage unit consisting of a supercapacitor array, forming a truly self-powered system drawing energy from ambient random vibrations of a wide range of characteristics.
Mohamed, Ahmed F; Elarini, Mahdi M; Othman, Ahmed M
2014-05-01
One of the most recent optimization techniques applied to the optimal design of photovoltaic system to supply an isolated load demand is the Artificial Bee Colony Algorithm (ABC). The proposed methodology is applied to optimize the cost of the PV system including photovoltaic, a battery bank, a battery charger controller, and inverter. Two objective functions are proposed: the first one is the PV module output power which is to be maximized and the second one is the life cycle cost (LCC) which is to be minimized. The analysis is performed based on measured solar radiation and ambient temperature measured at Helwan city, Egypt. A comparison between ABC algorithm and Genetic Algorithm (GA) optimal results is done. Another location is selected which is Zagazig city to check the validity of ABC algorithm in any location. The ABC is more optimal than GA. The results encouraged the use of the PV systems to electrify the rural sites of Egypt.
A Particle Swarm Optimization Variant with an Inner Variable Learning Strategy
Pedrycz, Witold; Liu, Jin
2014-01-01
Although Particle Swarm Optimization (PSO) has demonstrated competitive performance in solving global optimization problems, it exhibits some limitations when dealing with optimization problems of high dimensionality and complex landscape. In this paper, we integrate some problem-oriented knowledge into the design of a certain PSO variant. The resulting novel PSO algorithm with an inner variable learning strategy (PSO-IVL) is particularly efficient for optimizing functions with symmetric variables. Symmetric variables of the optimized function have to satisfy a certain quantitative relation. Based on this knowledge, the inner variable learning (IVL) strategy helps the particle to inspect the relation among its inner variables, determine the exemplar variable for all other variables, and then make each variable learn from the exemplar variable in terms of their quantitative relations. In addition, we design a new trap-detection and jumping-out strategy to help particles escape from local optima. The trap-detection operation is employed at the level of individual particles, whereas the jumping-out strategy is adaptive in nature. Experimental simulations completed for some representative optimization functions demonstrate the excellent performance of PSO-IVL. The effectiveness of PSO-IVL underscores the usefulness of augmenting evolutionary algorithms with problem-oriented domain knowledge. PMID:24587746
Gunavathi, Chellamuthu; Premalatha, Kandasamy
2014-01-01
Feature selection in cancer classification is a central area of research in the field of bioinformatics, used to select the informative genes from the thousands of genes on a microarray. The genes are ranked based on T-statistics, signal-to-noise ratio (SNR), and F-test values. The swarm intelligence (SI) technique finds the informative genes from the top-m ranked genes. These selected genes are used for classification. In this paper, shuffled frog leaping with Lévy flight (SFLLF) is proposed for feature selection. In SFLLF, the Lévy flight is included to avoid premature convergence of the shuffled frog leaping (SFL) algorithm. The SI techniques particle swarm optimization (PSO), cuckoo search (CS), SFL, and SFLLF are used for feature selection, which identifies informative genes for classification. The k-nearest neighbour (k-NN) technique is used to classify the samples. The proposed work is applied to 10 different benchmark datasets and examined with the SI techniques. The experimental results show that the results obtained from the k-NN classifier through the SFLLF feature selection method outperform PSO, CS, and SFL. PMID:25157377
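A sketch of the SNR-based gene ranking that produces the top-m candidate genes handed to the SI methods (the exact SNR formula variant used in the paper is an assumption):

```python
import math

def snr_score(class0, class1):
    """Signal-to-noise ratio of one gene across two sample classes:
    |mu0 - mu1| / (sd0 + sd1). Small epsilon guards constant genes."""
    def stats(xs):
        m = sum(xs) / len(xs)
        sd = math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))
        return m, sd
    m0, s0 = stats(class0)
    m1, s1 = stats(class1)
    return abs(m0 - m1) / (s0 + s1 + 1e-12)

def top_m_genes(expr0, expr1, m):
    """Rank genes by SNR and keep the top m indices.

    expr0/expr1: per-class lists of samples, each sample a list of
    per-gene expression values."""
    n_genes = len(expr0[0])
    scores = [snr_score([s[g] for s in expr0], [s[g] for s in expr1])
              for g in range(n_genes)]
    return sorted(range(n_genes), key=lambda g: scores[g], reverse=True)[:m]
```

The SI search (PSO, CS, SFL or SFLLF) then selects a subset of these top-m genes, scored by k-NN classification accuracy.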
NASA Astrophysics Data System (ADS)
Lahmiri, Salim
2016-02-01
Multiresolution analysis techniques, including the continuous wavelet transform, empirical mode decomposition, and variational mode decomposition, are tested in the context of interest rate next-day variation prediction. In particular, multiresolution analysis techniques are used to decompose the actual interest rate variation, and a feedforward neural network is used for training and prediction. A particle swarm optimization technique is adopted to optimize the network's initial weights. For comparison purposes, an autoregressive moving average model, a random walk process and the naive model are used as the main reference models. In order to show the feasibility of the presented hybrid models that combine multiresolution analysis techniques and a feedforward neural network optimized by particle swarm optimization, we used a set of six illustrative interest rates: Moody's seasoned Aaa corporate bond yield, Moody's seasoned Baa corporate bond yield, 3-month, 6-month and 1-year treasury bills, and the effective federal funds rate. The forecasting results show that all multiresolution-based prediction systems outperform the conventional reference models on the criteria of mean absolute error, mean absolute deviation, and root mean-squared error. Therefore, it is advantageous to adopt hybrid multiresolution techniques and soft computing models to forecast interest rate daily variations, as they provide good forecasting performance.
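The three comparison criteria can be computed as follows (MAD is interpreted here as the mean absolute deviation of the forecast errors about their mean, which is an assumption; other definitions exist):

```python
import math

def forecast_errors(actual, predicted):
    """Return (MAE, MAD, RMSE) for a forecast series."""
    errs = [a - p for a, p in zip(actual, predicted)]
    n = len(errs)
    mae = sum(abs(e) for e in errs) / n              # mean absolute error
    mean_e = sum(errs) / n
    mad = sum(abs(e - mean_e) for e in errs) / n     # mean absolute deviation
    rmse = math.sqrt(sum(e * e for e in errs) / n)   # root mean-squared error
    return mae, mad, rmse
```

Lower values on all three criteria indicate the better prediction system.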
Ćujić, Nada; Šavikin, Katarina; Janković, Teodora; Pljevljakušić, Dejan; Zdunić, Gordana; Ibrić, Svetlana
2016-03-01
The traditional maceration method was used for the extraction of polyphenols from chokeberry (Aronia melanocarpa) dried fruit, and the effects of several extraction parameters on the total phenolics and anthocyanins contents were studied. Various solvents, particle size, solid-solvent ratio and extraction time were investigated as independent variables in a two-level factorial design. Among the examined variables, time was not a statistically important factor for the extraction of polyphenols. The optimal extraction conditions were maceration of 0.75 mm size berries in 50% ethanol, with a solid-solvent ratio of 1:20; the predicted values were 27.7 mg GAE/g for total phenolics and 0.27% for total anthocyanins. Under the selected conditions, the experimental total phenolics were 27.8 mg GAE/g, and total anthocyanins were 0.27%, in agreement with the predicted values. In addition, a complementary quantitative analysis of individual phenolic compounds was performed using an HPLC method. The study indicated that maceration is an effective and simple technique for the extraction of bioactive compounds from chokeberry fruit.
Optimal design of 850 nm 2×2 multimode interference polymer waveguide coupler by imprint technique
NASA Astrophysics Data System (ADS)
Shao, Yuchen; Han, Xiuyou; Han, Xiaonan; Lu, Zhili; Wu, Zhenlin; Teng, Jie; Wang, Jinyan; Morthier, Geert; Zhao, Mingshan
2016-09-01
A 2×2 optical waveguide coupler at 850 nm based on the multimode interference (MMI) structure with the polysilsesquioxanes liquid series (PSQ-Ls) polymer material and the imprint technique is presented. The influence of the structural parameters, such as the single mode condition, the waveguide spacing of the input/output ports, and the width and length of the multimode waveguide, on the optical splitting performance, including the excess loss and the uniformity, is simulated by the beam propagation method. By inserting a taper section of isosceles trapezoid shape between the single mode and multimode waveguides, the optimized structural parameters for low excess loss and high uniformity are obtained, with an excess loss of -0.040 dB and a uniformity of -0.007 dB. The effect of the structure deviations induced during the imprint process on the optical splitting performance at different residual layer thicknesses is also investigated. The analysis results provide useful instructions for the waveguide device fabrication.
NASA Technical Reports Server (NTRS)
Granaas, Michael M.; Rhea, Donald C.
1989-01-01
In recent years the needs of ground-based researcher-analysts to access real-time engineering data in the form of processed information has expanded rapidly. Fortunately, the capacity to deliver that information has also expanded. The development of advanced display systems is essential to the success of a research test activity. Those developed at the National Aeronautics and Space Administration (NASA), Western Aeronautical Test Range (WATR), range from simple alphanumerics to interactive mapping and graphics. These unique display systems are designed not only to meet basic information display requirements of the user, but also to take advantage of techniques for optimizing information display. Future ground-based display systems will rely heavily not only on new technologies, but also on interaction with the human user and the associated productivity with that interaction. The psychological abilities and limitations of the user will become even more important in defining the difference between a usable and a useful display system. This paper reviews the requirements for development of real-time displays; the psychological aspects of design such as the layout, color selection, real-time response rate, and interactivity of displays; and an analysis of some existing WATR displays.
NASA Astrophysics Data System (ADS)
Cherng, An-Pan
2003-03-01
Placing vibration sensors at appropriate locations plays an important role in experimental modal analysis. It is known that maximising the determinant of the Fisher information matrix (FIM) can result in an optimal configuration of sensors from a set of candidate locations. Several methods have already been proposed in the literature, such as maximising the determinant of the diagonal elements of the mode shape correlation matrix, ranking the sensor contributions by Hankel singular values (HSVs), and using perturbation theory to achieve minimum variance of estimation. The objectives of this work were to systematically analyse existing methods and to propose methods that either improve their performance or accelerate the searching process for modal parameter identification. The approach used in this article is based on the analytical formulation of the singular value decomposition (SVD) of a candidate-blocked Hankel matrix using signal subspace correlation (SSC) techniques developed earlier by the author. The SSC accounts for factors that contribute to the estimated results, such as mode shapes, damping ratios, sampling rate and matrix size (or number of data used). With the aid of SSC, it is shown that using information from mode shapes and from singular values is equivalent under certain conditions. The results of this work are not only consistent with those of existing methods, but also demonstrate a more general viewpoint on the optimisation problem. Consequently, insight into the sensor placement problem is clearly conveyed. Finally, two modified methods that inherit the merits of existing methods are proposed, and their effectiveness is demonstrated by numerical examples.
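The det(FIM) criterion mentioned above can be illustrated with a small sketch. This is a generic greedy heuristic for maximising det(FIM) over candidate rows of a mode-shape matrix, not the author's SSC-based method, and the candidate mode shapes in the usage below are invented for illustration:

```python
def det(M):
    """Determinant by Laplace expansion (fine for the few modes used here)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def greedy_fim_placement(phi, k, eps=1e-9):
    """Greedily pick k candidate locations (rows of the mode-shape matrix
    phi) maximising det(FIM), where FIM is the sum of outer products of
    the chosen rows. A small eps*I regularises the determinant until
    enough rows have been chosen to make the FIM full rank."""
    m = len(phi[0])
    A = [[eps * (i == j) for j in range(m)] for i in range(m)]
    chosen = []
    for _ in range(k):
        best, best_d, best_A = None, -1.0, None
        for c, r in enumerate(phi):
            if c in chosen:
                continue
            B = [[A[i][j] + r[i] * r[j] for j in range(m)] for i in range(m)]
            d = det(B)
            if d > best_d:
                best, best_d, best_A = c, d, B
        chosen.append(best)
        A = best_A
    return chosen
```

For example, with candidates [[1,0],[0,1],[0.1,0.1],[1,1]] and k=2, the greedy pass first takes the largest row, then the row that best complements it.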
Post, M J; Kursunoglu, S J; Hensley, G T; Chan, J C; Moskowitz, L B; Hoffman, T A
1985-11-01
A retrospective review of cranial CT scans obtained over a 4 year period in patients with acquired immunodeficiency syndrome (AIDS) and documented central nervous system (CNS) pathology is presented. The spectrum of diseases and the value of CT in detecting new, recurrent, and superimposed disease processes were determined. Fifty-one AIDS patients with confirmed CNS pathology were identified. Six of them had two coexistent diseases. Opportunistic infections predominated, especially Toxoplasma encephalitis and cryptococcal meningitis, while tumor was seen infrequently. Initial CT was positive in 76% of cases. In contrast to meningeal processes, where it was not very effective, CT was very sensitive in detecting most parenchymal disease processes. Characteristic although not pathognomonic CT patterns were found for certain diseases. Improvement or resolution of CT abnormalities in patients on medical therapy for Toxoplasma encephalitis correlated well with clinical improvement. Recurrence of CT abnormalities correlated well with medical noncompliance. The optimal contrast enhancement technique for detecting CNS pathology and for monitoring the effectiveness of medical therapy was also evaluated by a prospective study in which both immediate (IDD) and 1 hr delayed (DDD) double-dose contrast CT scans were compared. The examination found to be diagnostically superior in 30 of the 41 IDD/DDD studies was the delayed scan. It is recommended that CT be used routinely and with the 1 hr DDD scan to evaluate and follow AIDS patients with neurologic symptoms and/or signs.
Sabesan, Shivkumar; Chakravarthy, Niranjan; Tsakalis, Kostas; Pardalos, Panos; Iasemidis, Leon
2009-01-01
Epileptic seizures are manifestations of intermittent spatiotemporal transitions of the human brain from chaos to order. Measures of chaos, namely maximum Lyapunov exponents (STL(max)), from dynamical analysis of the electroencephalograms (EEGs) at critical sites of the epileptic brain, progressively converge (diverge) before (after) epileptic seizures, a phenomenon that has been called dynamical synchronization (desynchronization). This dynamical synchronization/desynchronization has already constituted the basis for the design and development of systems for long-term (tens of minutes), on-line, prospective prediction of epileptic seizures. Also, the criterion for the changes in the time constants of the observed synchronization/desynchronization at seizure points has been used to show resetting of the epileptic brain in patients with temporal lobe epilepsy (TLE), a phenomenon that implicates a possible homeostatic role for the seizures themselves to restore normal brain activity. In this paper, we introduce a new criterion to measure this resetting that utilizes changes in the level of observed synchronization/desynchronization. We compare this criterion's sensitivity of resetting with the old one based on the time constants of the observed synchronization/desynchronization. Next, we test the robustness of the resetting phenomena in terms of the utilized measures of EEG dynamics by a comparative study involving STL(max), a measure of phase (ϕ(max)) and a measure of energy (E) using both criteria (i.e. the level and time constants of the observed synchronization/desynchronization). The measures are estimated from intracranial electroencephalographic (iEEG) recordings with subdural and depth electrodes from two patients with focal temporal lobe epilepsy and a total of 43 seizures. Techniques from optimization theory, in particular quadratic bivalent programming, are applied to optimize the performance of the three measures in detecting preictal entrainment. It is
Large Scale Multi-area Static/Dynamic Economic Dispatch using Nature Inspired Optimization
NASA Astrophysics Data System (ADS)
Pandit, Manjaree; Jain, Kalpana; Dubey, Hari Mohan; Singh, Rameshwar
2016-07-01
Economic dispatch (ED) ensures that the generation allocation to the power units is carried out such that the total fuel cost is minimized and all the operating equality/inequality constraints are satisfied. Classical ED does not take transmission constraints into consideration, but in present restructured power systems the tie-line limits play a very important role in deciding operational policies. ED is a dynamic problem which is performed on-line in the central load dispatch centre under changing load scenarios. The dynamic multi-area ED (MAED) problem is more complex due to the additional tie-line, ramp-rate and area-wise power balance constraints. Nature inspired (NI) heuristic optimization methods are gaining popularity over traditional methods for complex problems. This work presents modified particle swarm optimization (PSO) based techniques in which parameter automation is effectively used to improve the search efficiency by avoiding stagnation at a sub-optimal result. This work validates the performance of the PSO variants against the traditional solver GAMS for single-area as well as multi-area economic dispatch (MAED) on three test cases of a large 140-unit standard test system having complex constraints.
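For reference, the classical single-area ED problem without tie-line or ramp constraints can be solved by equal-incremental-cost (lambda) iteration. A minimal sketch assuming quadratic unit costs a*P^2 + b*P with generation limits; the unit coefficients in the usage example are toy values, not the 140-unit test system:

```python
def lambda_dispatch(units, demand, iters=60):
    """Equal-incremental-cost (lambda-iteration) dispatch for units with
    cost a*P^2 + b*P and limits [pmin, pmax]: bisect on the system
    incremental cost lambda until total generation matches demand."""
    def alloc(lam):
        # each unit generates where its marginal cost 2*a*P + b equals lambda,
        # clipped to its limits
        return [min(pmax, max(pmin, (lam - b) / (2 * a)))
                for a, b, pmin, pmax in units]
    lo, hi = 0.0, 1000.0        # assumed bracket on lambda
    for _ in range(iters):
        lam = (lo + hi) / 2
        if sum(alloc(lam)) < demand:
            lo = lam
        else:
            hi = lam
    return alloc((lo + hi) / 2)
```

Heuristics such as PSO are brought in precisely when the extra constraints (tie-lines, ramp rates, valve-point effects) break this simple monotone structure.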
ERIC Educational Resources Information Center
Treichel, Janet
The purpose of the Optimizing Planning Techniques (OPT) for Comprehensive Systems of Guidance, Counseling, Placement, and Follow-Through project was to help local educational agencies systematically plan and efficiently operate comprehensive guidance and counseling programs. The project (1) identified planning models for comprehensive systems of…
NASA Astrophysics Data System (ADS)
Mandal, S. K.; Singh, Harshavardhan; Mahanti, G. K.; Ghatak, Rowdra
2014-10-01
This paper presents a new technique based on optimization tools to design phase-only, digitally controlled, reconfigurable antenna arrays through time modulation. In the proposed approach, the on-time durations of the time-modulated elements and the static amplitudes of the array elements are perturbed in such a way that the same on-time sequence and discrete static amplitude values for four-bit digital attenuators produce either a pencil or a flat-top beam pattern, depending on the suitable discrete phase distributions of five-bit digital phase shifters. In order to illustrate the technique, three optimization tools: differential evolution (DE), artificial bee colony (ABC), and particle swarm optimization (PSO) are employed and their performances are compared. The numerical results for a 20-element linear array are presented.
Ramamoorthy, Ambika; Ramachandran, Rajeswari
2016-01-01
The power grid is becoming smarter with ongoing technological development. The benefits of a smart grid can be enhanced through the integration of renewable energy sources. In this paper, several studies have been made to reconfigure a conventional network into a smart grid. Amongst all renewable sources, solar power takes the prominent position due to its availability in abundance. The methodology presented in this paper is aimed at minimizing network power losses and at improving the voltage stability within the framework of system operation and security constraints in a transmission system. Locations and capacities of DGs have a significant impact on system losses in a transmission system. In this paper, combined nature-inspired algorithms are presented for optimal location and sizing of DGs. This paper proposes a two-step optimization technique in order to integrate DG. In the first step, the best size of DG is determined through PSO metaheuristics, and the results obtained through PSO are tested for reverse power flow by a negative load approach to find possible bus locations. Then, the optimal location is found by Loss Sensitivity Factor (LSF) and weak (WK) bus methods, and the results are compared. In the second step, optimal sizing of DGs is determined by PSO, GSA, and hybrid PSOGSA algorithms. Apart from optimal sizing and siting of DGs, different scenarios with numbers of DGs (3, 4, and 5) and PQ capacities of DGs (P alone, Q alone, and both P and Q) are also analyzed and the results are discussed in this paper. A detailed performance analysis is carried out on the IEEE 30-bus system to demonstrate the effectiveness of the proposed methodology. PMID:27057557
A Dynamic Optimization Technique for Siting the NASA-Clark Atlanta Urban Rain Gauge Network (NCURN)
NASA Technical Reports Server (NTRS)
Shepherd, J. Marshall; Taylor, Layi
2003-01-01
NASA satellites and ground instruments have indicated that cities like Atlanta, Georgia may create or alter rainfall. Scientists speculate that the urban heat island caused by man-made surfaces in cities impacts the heat and wind patterns that form clouds and rainfall. However, more conclusive evidence is required to substantiate findings from satellites. NASA, along with scientists at Clark Atlanta University, is implementing a dense urban rain gauge network in the metropolitan Atlanta area to support a satellite validation program called Studies of PRecipitation Anomalies from Widespread Urban Landuse (SPRAWL). SPRAWL will be conducted during the summer of 2003 to further identify and understand the impact of urban Atlanta on precipitation variability. The paper provides an overview of SPRAWL, which represents one of the more comprehensive efforts in recent years to focus exclusively on urban-impacted rainfall. The paper also introduces a novel technique for deploying rain gauges for SPRAWL. The deployment of the dense Atlanta network is unique because it utilizes Geographic Information Systems (GIS) and Decision Support Systems (DSS) to optimize deployment of the rain gauges. These computer-aided systems consider access to roads, drainage systems, tree cover, and other factors in guiding the deployment of the gauge network. GIS and DSS also provide decision-makers with additional resources and flexibility to make informed decisions while considering numerous factors. Finally, the new Atlanta network and SPRAWL provide a unique opportunity to merge the high-resolution urban rain gauge network with satellite-derived rainfall products to understand how cities are changing rainfall patterns, and possibly climate.
Planar straightness error evaluation based on particle swarm optimization
NASA Astrophysics Data System (ADS)
Mao, Jian; Zheng, Huawen; Cao, Yanlong; Yang, Jiangxin
2006-11-01
The straightness error generally refers to the deviation between an actual line and an ideal line. According to the characteristics of planar straightness error evaluation, a novel method to evaluate planar straightness errors based on particle swarm optimization (PSO) is proposed. The planar straightness error evaluation problem is formulated as a nonlinear optimization problem. According to the minimum zone condition, the mathematical model of planar straightness, together with the optimal objective function and fitness function, is developed. Compared with the genetic algorithm (GA), the PSO algorithm has some advantages: it is implemented without crossover and mutation, it converges quickly, and fewer parameters need to be set. The results show that the PSO method is very suitable for nonlinear optimization problems and provides a promising new method for straightness error evaluation. It can be applied to the measured data of planar straightness obtained by three-coordinate measuring machines.
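The approach described above can be sketched as follows. This is a minimal global-best PSO with assumed inertia and acceleration weights, using the minimum-zone width max(d) - min(d) of the residuals d_i = y_i - (a*x_i + b) as the fitness to be minimised:

```python
import random

def pso_straightness(x, y, n_particles=30, iters=200, seed=0):
    """Minimum-zone straightness by PSO: find the reference line slope a
    (and intercept b) minimising the zone width of the residuals."""
    rng = random.Random(seed)
    def zone(p):                       # fitness: minimum-zone width
        a, b = p
        devs = [yi - (a * xi + b) for xi, yi in zip(x, y)]
        return max(devs) - min(devs)
    pos = [[rng.uniform(-1, 1), rng.uniform(min(y), max(y))]
           for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [zone(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    w, c1, c2 = 0.7, 1.5, 1.5          # assumed inertia/acceleration weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(2):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = zone(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f
```

For points lying exactly on a line the zone width converges toward zero and the recovered slope matches the line.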
Sung, Wen-Tsai; Chiang, Yen-Chun
2012-12-01
This study examines a wireless sensor network with real-time remote identification using the Android study of things (HCIOT) platform in community healthcare. An improved particle swarm optimization (PSO) method is proposed to efficiently enhance physiological multi-sensor data fusion measurement precision in an Internet of Things (IoT) system. The improved PSO (IPSO) includes inertia weight factor design and shrinkage (constriction) factor adjustment to improve the data fusion performance of the PSO algorithm. The Android platform is employed to build multi-physiological signal processing and timely medical care analysis. Wireless sensor network signal transmission and Internet links allow community or family members to have timely medical care network services.
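The two IPSO ingredients named above, inertia weight design and shrinkage (constriction) factor adjustment, can be sketched as follows. The constriction factor uses the standard Clerc-Kennedy form; the linear 0.9 to 0.4 inertia schedule and the sphere test function are assumptions, not the paper's settings:

```python
import math, random

def constriction(c1, c2):
    """Clerc-Kennedy constriction (shrinkage) factor; requires c1 + c2 > 4."""
    phi = c1 + c2
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

def ipso_sphere(dim=3, n=20, iters=150, seed=1):
    """Sketch of an 'improved' PSO combining a linearly decreasing inertia
    weight with the constriction factor, minimising the sphere function."""
    rng = random.Random(seed)
    c1 = c2 = 2.05
    chi = constriction(c1, c2)            # about 0.729
    f = lambda p: sum(v * v for v in p)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pb = [p[:] for p in pos]
    pbf = [f(p) for p in pos]
    gi = min(range(n), key=lambda i: pbf[i])
    gb, gbf = pb[gi][:], pbf[gi]
    for t in range(iters):
        w = 0.9 - 0.5 * t / iters         # inertia weight: 0.9 -> 0.4
        for i in range(n):
            for d in range(dim):
                vel[i][d] = chi * (w * vel[i][d]
                                   + c1 * rng.random() * (pb[i][d] - pos[i][d])
                                   + c2 * rng.random() * (gb[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            fi = f(pos[i])
            if fi < pbf[i]:
                pb[i], pbf[i] = pos[i][:], fi
                if fi < gbf:
                    gb, gbf = pos[i][:], fi
    return gb, gbf
```

The constriction factor damps the velocity update so the swarm contracts onto the best-found region without explicit velocity clamping.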
NASA Technical Reports Server (NTRS)
Brown, Aaron J.
2015-01-01
The International Space Station's (ISS) trajectory is coordinated and executed by the Trajectory Operations and Planning (TOPO) group at NASA's Johnson Space Center. TOPO group personnel routinely generate look-ahead trajectories for the ISS that incorporate the translation burns needed to maintain its orbit over the next three to twelve months. The burns are modeled as in-plane, horizontal burns, and must meet operational trajectory constraints imposed by both NASA and the Russian Space Agency. In generating these trajectories, TOPO personnel must determine the number of burns to model, each burn's Time of Ignition (TIG), and each burn's magnitude (i.e., deltaV) such that these constraints are met. The current process for targeting these burns is manually intensive, and does not take advantage of more modern techniques that can reduce the workload needed to find feasible burn solutions, i.e., solutions that simply meet the constraints, or provide optimal burn solutions that minimize the total deltaV while simultaneously meeting the constraints. A two-level, hybrid optimization technique is proposed to find both feasible and globally optimal burn solutions for ISS trajectory planning. For optimal solutions, the technique breaks the optimization problem into two distinct sub-problems: one for choosing the optimal number of burns and each burn's optimal TIG, and the other for computing the minimum total deltaV burn solution that satisfies the trajectory constraints. Each of the two levels uses a different optimization algorithm to solve one of the sub-problems, giving rise to a hybrid technique. Level 2, the outer level, uses a genetic algorithm to select the number of burns and each burn's TIG. Level 1, the inner level, uses the burn TIGs from Level 2 in a sequential quadratic programming (SQP) algorithm to compute a minimum total deltaV burn solution subject to the trajectory constraints. The total deltaV from Level 1 is then used as a fitness function by the genetic
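The two-level structure can be illustrated with a deliberately simplified sketch: a tiny GA chooses the burn count and TIGs (Level 2), and a greedy allocator stands in for the SQP inner solve (Level 1). The phase-effectiveness model, deltaV cap, and all numbers below are invented toy values, not ISS dynamics:

```python
import math, random

D_REQ, DV_MAX, PERIOD = 10.0, 3.0, 1.0    # toy problem constants (assumed)

def eff(t):
    """Assumed orbit-phase effectiveness of a burn at TIG t (toy model)."""
    return 1.0 + 0.3 * math.cos(2 * math.pi * t / PERIOD)

def inner_min_dv(tigs):
    """Level 1 stand-in: greedily solve min sum(dv) subject to
    sum(eff(t_i)*dv_i) = D_REQ and 0 <= dv_i <= DV_MAX (replaces SQP)."""
    need, total = D_REQ, 0.0
    for t in sorted(tigs, key=eff, reverse=True):
        dv = min(DV_MAX, need / eff(t))
        need -= eff(t) * dv
        total += dv
        if need <= 1e-12:
            return total
    return float("inf")                   # infeasible: too few burns

def outer_ga(pop=30, gens=60, seed=2):
    """Level 2 stand-in: tiny GA over the number of burns and their TIGs,
    using the inner solver's total deltaV as the fitness."""
    rng = random.Random(seed)
    def rand_plan():
        return [rng.uniform(0, PERIOD) for _ in range(rng.randint(1, 6))]
    plans = [rand_plan() for _ in range(pop)]
    for _ in range(gens):
        plans.sort(key=inner_min_dv)
        elite = plans[: pop // 2]
        children = []
        for p in elite:
            child = [min(PERIOD, max(0.0, t + rng.gauss(0, 0.05))) for t in p]
            if rng.random() < 0.2:        # occasionally mutate burn count
                child = (child + [rng.uniform(0, PERIOD)]
                         if rng.random() < 0.5 else child[:-1])
            children.append(child if child else rand_plan())
        plans = elite + children
    best = min(plans, key=inner_min_dv)
    return best, inner_min_dv(best)
```

In this toy setting the optimum places at least three capped burns where eff is largest (1.3), giving a total deltaV near 10/1.3, and the GA converges toward that plan.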
The use of optimization techniques to design controlled diffusion compressor blading
NASA Technical Reports Server (NTRS)
Sanger, N. L.
1982-01-01
A method for automating compressor blade design using numerical optimization is presented and applied to the design of a controlled diffusion stator blade row. A general-purpose optimization procedure is employed, based on conjugate directions for locally unconstrained problems and on feasible directions for locally constrained problems. Coupled to the optimizer is an analysis package consisting of three analysis programs which calculate blade geometry, inviscid flow, and blade surface boundary layers. The optimizing concepts and the selection of design objective and constraints are described. The procedure for automating the design of a two-dimensional blade section is discussed, and design results are presented.
Dávila Pintle, José A; Lara, Edmundo Reynoso; Iturbe Castillo, Marcelo D
2013-07-01
A criterion is presented for selecting the optimum aperture radius for the one-beam Z-scan technique (OBZT), based on the analysis of the transmittance of the aperture. A modification to the OBZT is also presented, in which the beam radius is directly measured in the far field with a rotating disk; this allows the nonlinear absorption coefficient and the nonlinear refractive index to be determined simultaneously, and is much less sensitive to wavefront distortions caused by inhomogeneities of the sample, with a negligible loss of signal-to-noise ratio. Its equivalence to the OBZT is demonstrated.
Diesel Engine performance improvement in a 1-D engine model using Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Karra, Prashanth
2015-12-01
A particle swarm optimization (PSO) technique was implemented to improve the engine development and optimization process, simultaneously reducing emissions and improving fuel efficiency. The optimization was performed on a 4-stroke, 4-cylinder, GT-Power based 1-D diesel engine model. To achieve the multi-objective optimization, a merit function was defined which included the parameters to be optimized: nitrogen oxides (NOx), non-methane hydrocarbons (NMHC), carbon monoxide (CO), and brake specific fuel consumption (BSFC). EPA Tier 3 emissions standards for non-road diesel engines between 37 and 75 kW of output were chosen as targets for the optimization. The combustion parameters analyzed in this study include: start of main injection, start of pilot injection, pilot fuel quantity, swirl, and tumble. The PSO was found to be very effective in quickly arriving at a solution that met the target criteria as defined in the merit function. The optimization took around 40-50 runs to find the most favourable engine operating condition under the constraints specified in the optimization. In a favourable case with a high merit function value, the NOx+NMHC and CO values were reduced to as low as 2.9 and 0.014 g/kWh, respectively. The operating conditions at this point were: 10 ATDC main SOI, -25 ATDC pilot SOI, 0.25 mg of pilot fuel, 0.45 swirl and 0.85 tumble. These results indicate that late main injections preceded by a close, small pilot injection are the most favourable conditions at the operating point tested.
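A merit function of the kind described above might look like the following sketch. The weighting scheme and reference BSFC are assumptions; only the Tier 3-style limits for 37-75 kW engines (4.7 g/kWh NOx+NMHC, 5.0 g/kWh CO) come from published standards:

```python
def merit(nox_nmhc, co, bsfc, targets=(4.7, 5.0), bsfc_ref=250.0):
    """Assumed multi-objective merit function: higher is better.
    Emissions above their limits (g/kWh) add a heavy penalty; lower
    BSFC (g/kWh, against an assumed reference) raises the score."""
    nox_t, co_t = targets
    penalty = (max(0.0, nox_nmhc / nox_t - 1.0)
               + max(0.0, co / co_t - 1.0))
    return 1000.0 / (bsfc / bsfc_ref + 10.0 * penalty)
```

With this shape, any constraint violation dominates the score, so the optimizer first finds emissions-feasible settings and then trades them off against fuel consumption.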
77 FR 42737 - Patient Safety Organizations: Delisting for Cause for The Steward Group PSO
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-20
... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF HEALTH AND HUMAN SERVICES Agency for Healthcare Research and Quality Patient Safety Organizations: Delisting for Cause for The Steward Group PSO AGENCY: Agency for Healthcare Research and Quality (AHRQ), HHS....
Affiliation, joint venture or PSO? Case studies show why provider strategies differ.
1998-03-01
Joint venture, affiliation or PSO? Here are three case studies of providers who chose different paths under Medicare risk, plus some key questions you'll want to ask of your own provider organization. Learn from these examples so you'll make the best contracting decisions.
76 FR 60494 - Patient Safety Organizations: Voluntary Relinquishment From HPI-PSO
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-29
... HUMAN SERVICES Agency for Healthcare Research and Quality Patient Safety Organizations: Voluntary... Organization (PSO). The Patient Safety and Quality Improvement Act of 2005 (Patient Safety Act), Public Law 109... Patient Safety Act authorizes the listing of PSOs, which are entities or component organizations...
76 FR 7854 - Patient Safety Organizations: Voluntary Delisting From Lumetra PSO
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-11
... HUMAN SERVICES Agency for Healthcare Research and Quality Patient Safety Organizations: Voluntary... Organization (PSO). The Patient Safety and Quality Improvement Act of 2005 (Patient Safety Act), Public Law 109... Patient Safety Act authorizes the listing of PSOs, which are entities or component organizations...
76 FR 7853 - Patient Safety Organizations: Voluntary Delisting From HealthDataPSO
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-11
... HUMAN SERVICES Agency for Healthcare Research and Quality Patient Safety Organizations: Voluntary... status as a Patient Safety Organization (PSO). The Patient Safety and Quality Improvement Act of 2005... organizations whose mission and primary activity is to conduct activities to improve patient safety and...
Direct adaptive performance optimization of subsonic transports: A periodic perturbation technique
NASA Technical Reports Server (NTRS)
Espana, Martin D.; Gilyard, Glenn
1995-01-01
Aircraft performance can be optimized at the flight condition by using available redundancy among actuators. Effective use of this potential allows improved performance beyond the limits imposed by design compromises. Optimization based on nominal models does not result in the best performance of the actual aircraft at the actual flight condition. An adaptive algorithm for optimizing performance parameters, such as speed or fuel flow, in flight and based exclusively on flight data is proposed. The algorithm is inherently insensitive to model inaccuracies and to measurement noise and biases, and can optimize several decision variables at the same time. An adaptive constraint controller integrated into the algorithm regulates the optimization constraints, such as altitude or speed, without requiring any prior knowledge of the autopilot design. The algorithm has a modular structure which allows easy incorporation (or removal) of optimization constraints or decision variables into the optimization problem. An important part of the contribution is the development of analytical tools enabling convergence analysis of the algorithm and the establishment of simple design rules. The fuel-flow minimization and velocity maximization modes of the algorithm are demonstrated on the NASA Dryden B-720 nonlinear flight simulator for the single- and multi-effector optimization cases.
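The periodic perturbation idea can be sketched in discrete time as a basic extremum-seeking loop: dither the decision variable sinusoidally, wash out the DC of the measured performance, demodulate with the same sinusoid to estimate the local slope, and integrate against it. All tuning constants here are assumed, not taken from the paper:

```python
import math

def extremum_seek(f, u0=0.0, a=0.2, omega=5.0, k=1.0, wh=1.0,
                  dt=0.01, steps=6000):
    """Discrete-time extremum seeking on a measured performance f(u):
    u is dithered with a*sin(omega*t); a first-order washout removes the
    DC of the measurement; demodulating the remainder with sin(omega*t)
    yields a slope estimate, which drives u downhill."""
    u, ylp = u0, f(u0)                   # ylp: low-pass state for the washout
    for n in range(steps):
        t = n * dt
        y = f(u + a * math.sin(omega * t))      # measured performance
        ylp += dt * wh * (y - ylp)              # low-pass (DC) estimate
        yh = y - ylp                            # washout: remove DC
        u -= dt * k * yh * math.sin(omega * t)  # descend the estimated slope
    return u
```

On a quadratic performance map the loop settles at the minimizer with a small residual ripple set by the dither amplitude.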
A new method for ship inner shell optimization based on parametric technique
NASA Astrophysics Data System (ADS)
Yu, Yan-Yun; Lin, Yan; Li, Kai
2015-01-01
A new method for ship inner shell optimization, called the Parametric Inner Shell Optimization Method (PISOM), is presented in this paper in order to improve both the hull performance and the design efficiency of transport ships. The foundation of PISOM is the parametric Inner Shell Plate (ISP) model, which is a fully associative model driven by dimensions. A method to create the parametric ISP model is proposed, including geometric primitives, geometric constraints, geometric constraint solving, etc. The standard optimization procedure for ship ISP optimization based on the parametric ISP model is put forward, and an efficient optimization approach for typical transport ships is developed based on this procedure. This approach takes the section area of the ISP and the other dominant parameters as variables, while all the design requirements, such as propeller immersion, fore bottom wave slap, bridge visibility, longitudinal strength, etc., are made constraints. The optimization objective is the maximum volume of the cargo oil tank/cargo hold, and a genetic algorithm is used to solve the optimization model. The method is applied to the optimization of a product oil tanker and a bulk carrier, and it proves to be effective, highly efficient, and practical for engineering use.
NASA Astrophysics Data System (ADS)
Liu, Yutong; Uberti, Mariano; Dou, Huanyu; Mosley, R. Lee; Gendelman, Howard E.; Boska, Michael D.
2009-02-01
Coregistration of in vivo magnetic resonance imaging (MRI) with histology provides validation of disease biomarker and pathobiology studies. Although thin-plate splines are widely used in such image registration, point landmark selection is error prone and often time-consuming. We present a technique to optimize landmark selection for thin-plate splines and demonstrate its usefulness in warping rodent brain MRI to histological sections. In this technique, contours are drawn on the corresponding MRI slices and images of histological sections. The landmarks are extracted from the contours by equal spacing and then optimized by minimizing a cost function consisting of the landmark displacement and the contour curvature. The technique was validated using simulation data and brain MRI-histology coregistration in a murine model of HIV-1 encephalitis. Registration error was quantified by calculating the target registration error (TRE). Without optimization, the TRE of approximately 8 pixels was stable across 20-80 landmarks. The optimized results were more accurate at low landmark numbers (TRE of approximately 2 pixels for 50 landmarks), while the accuracy decreased (TRE of approximately 8 pixels) for larger numbers of landmarks (70-80). The results demonstrate that registration accuracy decreases with increasing landmark numbers, offering more confidence in MRI-histology registration using thin-plate splines.
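One plausible reading of the displacement-plus-curvature cost can be sketched as follows, under stated assumptions: the alpha weight, the search window, and the discrete turn-angle curvature estimate are all invented for illustration and are not the paper's formulation:

```python
import math

def curvature(pts, i):
    """Discrete curvature estimate at contour vertex i (absolute turn angle)."""
    p0, p1, p2 = pts[i - 1], pts[i], pts[(i + 1) % len(pts)]
    a1 = math.atan2(p1[1] - p0[1], p1[0] - p0[0])
    a2 = math.atan2(p2[1] - p1[1], p2[0] - p1[0])
    return abs(math.atan2(math.sin(a2 - a1), math.cos(a2 - a1)))

def place_landmarks(pts, n, alpha=0.05, window=5):
    """Start from equally spaced landmarks on a closed contour, then slide
    each one along the contour, trading displacement from its equal-spacing
    slot (alpha * squared index shift) against sitting on a high-curvature
    vertex. All weights are assumed values."""
    m = len(pts)
    idx = [round(j * m / n) % m for j in range(n)]    # equal spacing
    out = []
    for i0 in idx:
        best, best_c = i0, float("inf")
        for s in range(-window, window + 1):
            i = (i0 + s) % m
            cost = alpha * s * s - curvature(pts, i)
            if cost < best_c:
                best, best_c = i, cost
        out.append(best)
    return out
```

On a densely sampled square contour, landmarks that start a few vertices away from the corners snap onto the corners, where the turn angle is largest.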
Shkumat, N. A.; Siewerdsen, J. H.; Richard, S.; Paul, N. S.; Yorkston, J.; Van Metter, R.
2008-02-15
Experiments were conducted to determine optimal acquisition techniques for bone image decompositions on a prototype dual-energy (DE) imaging system. Technique parameters included the kVp pair (denoted [kVp_L/kVp_H]) and the dose allocation (the proportion of dose in the low- and high-energy projections), each optimized to provide maximum signal difference-to-noise ratio (SDNR) in the DE images. Experiments involved a chest phantom representing an average patient size and containing simulated ribs and lung nodules. The low- and high-energy kVp were varied from 60-90 and 120-150 kVp, respectively. The optimal kVp pair was determined to be [60/130] kVp, with image quality showing a strong dependence on the low-kVp selection. The optimal dose allocation was approximately 0.5, i.e., an equal dose imparted by the low- and high-energy projections. The results complement earlier studies of optimal DE soft-tissue image acquisition, with differences attributed to the specific imaging task. Together, the results help to guide the development and implementation of high-performance DE imaging systems, with applications including lung nodule detection and diagnosis, pneumothorax identification, and musculoskeletal imaging (e.g., discrimination of rib fractures from metastases).
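The SDNR figure of merit that drives this kind of technique optimization can be illustrated with a toy weighted log-subtraction example. Everything below (the phantom, fluences, attenuation factors, and the weight grid) is an assumption for illustration, not the study's actual acquisition model.

```python
import numpy as np

rng = np.random.default_rng(0)

def de_image(i_low, i_high, w):
    """Weighted log subtraction, the standard form of dual-energy decomposition."""
    return np.log(i_high) - w * np.log(i_low)

def sdnr(img, signal_mask, background_mask):
    """Signal difference-to-noise ratio between two regions of an image."""
    s = img[signal_mask].mean()
    b = img[background_mask].mean()
    return abs(s - b) / img[background_mask].std()

# toy phantom: Poisson fluence 1000; a "rib" attenuates the low-kVp beam more
i_low = rng.poisson(1000, (64, 64)).astype(float)
i_high = rng.poisson(1000, (64, 64)).astype(float)
rib = np.zeros((64, 64), bool)
rib[28:36, :] = True
i_low[rib] *= 0.6
i_high[rib] *= 0.8

# scan the subtraction weight for maximum rib SDNR
best_w = max(np.linspace(0.1, 2.0, 39),
             key=lambda w: sdnr(de_image(i_low, i_high, w), rib, ~rib))
```

In the study itself the scanned parameters were kVp pair and dose allocation rather than the subtraction weight, but the optimize-a-figure-of-merit structure is the same.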
NASA Technical Reports Server (NTRS)
Young, Katherine C.; Sobieszczanski-Sobieski, Jaroslaw
1988-01-01
This project has two objectives. The first is to determine whether linear programming techniques can improve performance, relative to the feasible directions algorithm, when handling design optimization problems with a large number of design variables and constraints. The second is to determine whether replacing the constraints with a single Kreisselmeier-Steinhauser (KS) function constraint reduces the total cost of optimization. Comparisons are made using solutions obtained with linear and nonlinear methods. The results indicate that there is no cost saving in using the linear method or in using the KS function to replace constraints.
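The KS function mentioned above folds many constraints g_i(x) <= 0 into a single smooth constraint. A minimal sketch (the shifted form below is the numerically stable variant; the rho value is illustrative):

```python
import numpy as np

def ks_aggregate(g, rho=50.0):
    """Kreisselmeier-Steinhauser envelope of constraints g_i(x) <= 0: a smooth,
    conservative upper bound on max(g) that tightens as rho grows."""
    g = np.asarray(g, float)
    gmax = g.max()                      # shift to avoid overflow in exp
    return gmax + np.log(np.exp(rho * (g - gmax)).sum()) / rho

g = [-0.5, -0.1, 0.3]                   # three constraint values; the max is 0.3
ks = ks_aggregate(g)                    # slightly above 0.3; approaches it as rho grows
```

Because KS(g) >= max(g), enforcing KS(g) <= 0 conservatively enforces all of the original constraints with a single, differentiable function.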
Expedite Particle Swarm Optimization Algorithm (EPSO) for Optimization of MSA
NASA Astrophysics Data System (ADS)
Rathi, Amit; Vijay, Ritu
This paper presents a new design method for a rectangular-patch microstrip antenna (MSA) using an artificial search algorithm with constraints. The design requires two stages. In the first stage, the bandwidth of the MSA is modeled using a benchmark function. In the second stage, the output of the first stage is given as input to a modified artificial search algorithm, particle swarm optimization (PSO), which produces five parameters: dimension width, frequency range, dielectric loss tangent, length over a ground plane with a substrate thickness, and electrical thickness. In PSO, the cognition factor and the social learning factor have a strong effect on the balance between local and global search. Building on a modification of these two factors, this paper presents a strategy in which the cognition factor dominates at the start of the search and the social learning factor gradually gains influence thereafter, in order to locate the global best. The aim is to determine whether, under these circumstances, such modifications to PSO give better results for MSA optimization.
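The early-cognitive, late-social schedule described above resembles the well-known time-varying acceleration coefficients (TVAC) scheme for PSO. The sketch below is a generic TVAC-style PSO on a toy objective, under assumed coefficient ranges; it is not the paper's exact algorithm or antenna model.

```python
import numpy as np

rng = np.random.default_rng(1)

def pso_tvac(f, bounds, n=30, iters=200, c_start=(2.5, 0.5), c_end=(0.5, 2.5)):
    """PSO where the cognitive factor decays and the social factor grows, so the
    early search is individual (exploration) and the late search is social."""
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    x = rng.uniform(lo, hi, (n, lo.size))
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()].copy()
    for t in range(iters):
        frac = t / (iters - 1)
        c1 = c_start[0] + (c_end[0] - c_start[0]) * frac   # cognitive: 2.5 -> 0.5
        c2 = c_start[1] + (c_end[1] - c_start[1]) * frac   # social:    0.5 -> 2.5
        w = 0.9 - 0.5 * frac                               # inertia ramp-down
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

# toy objective standing in for the antenna bandwidth model
best_x, best_f = pso_tvac(lambda z: (z ** 2).sum(), ([-5, -5], [5, 5]))
```

In an antenna application, f would instead evaluate the bandwidth model from the first stage, and the position vector would hold the five design parameters.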
Comparison of Structural Optimization Techniques for a Nuclear Electric Space Vehicle
NASA Technical Reports Server (NTRS)
Benford, Andrew
2003-01-01
The purpose of this paper is to utilize the optimization method of genetic algorithms (GA) for truss design on a nuclear propulsion vehicle. Genetic algorithms are a guided, random search that mirrors Darwin's theory of natural selection and survival of the fittest. To verify the GA's capabilities, other traditional optimization methods were used to compare the results obtained by the GA, first on simple 2-D structures and eventually on full-scale 3-D truss designs.
NASA Astrophysics Data System (ADS)
St. Germain, Brad David
The development and optimization of liquid rocket engines is an integral part of space vehicle design, since most Earth-to-orbit launch vehicles to date have used liquid rockets as their main propulsion system. Rocket engine design tools range in fidelity from very simple conceptual-level tools to full computational fluid dynamics (CFD) simulations. The level of fidelity of interest in this research is a design tool that determines engine thrust and specific impulse and models the powerhead of the engine. This is the highest level of fidelity applicable to a conceptual-level design environment, where faster-running analyses are desired. The optimization of liquid rocket engines using a powerhead analysis tool is a difficult problem because it involves both continuous and discrete inputs as well as a nonlinear design space. Example continuous inputs are the main combustion chamber pressure, nozzle area ratio, engine mixture ratio, and desired thrust. Example discrete inputs are the engine cycle (staged combustion, gas generator, etc.), fuel/oxidizer combination, and engine material choices. Nonlinear optimization problems involving both continuous and discrete inputs are referred to as Mixed-Integer Nonlinear Programming (MINLP) problems. Many methods exist in the literature for solving MINLP problems; however, none is applicable to this research. All of the existing MINLP methods require the relaxation of the discrete variables as part of their analysis procedure, meaning that the discrete choices must be evaluated at non-discrete values. This is not possible with an engine powerhead design code. Therefore, a new optimization method was developed that uses modified response surface equations to provide lower bounds on the continuous design space for each unique discrete variable combination. These lower bounds are then used to solve the optimization problem efficiently. The new optimization procedure was used to find optimal rocket engine designs.
An analytic study of near terminal area optimal sequencing and flow control techniques
NASA Technical Reports Server (NTRS)
Park, S. K.; Straeter, T. A.; Hogge, J. E.
1973-01-01
Optimal flow control and sequencing of air traffic operations in the near terminal area are discussed. The near terminal area model is based on the assumptions that the aircraft enter the terminal area along precisely controlled approach paths and that the aircraft are segregated according to their near terminal area performance. Mathematical models are developed to support the optimal path generation, sequencing, and conflict resolution problems.
Reduced order techniques for sensitivity analysis and design optimization of aerospace systems
NASA Astrophysics Data System (ADS)
Parrish, Jefferson Carter
This work proposes a new method for using reduced order models in lieu of high fidelity analysis during the sensitivity analysis step of gradient based design optimization. The method offers a reduction in the computational cost of finite difference based sensitivity analysis in that context. The method relies on interpolating reduced order models which are based on proper orthogonal decomposition. The interpolation process is performed using radial basis functions and Grassmann manifold projection. It does not require additional high fidelity analyses to interpolate a reduced order model for new points in the design space. The interpolated models are used specifically for points in the finite difference stencil during sensitivity analysis. The proposed method is applied to an airfoil shape optimization (ASO) problem and a transport wing optimization (TWO) problem. The errors associated with the reduced order models themselves as well as the gradients calculated from them are evaluated. The effects of the method on the overall optimization path, computation times, and function counts are also examined. The ASO results indicate that the proposed scheme is a viable method for reducing the computational cost of these optimizations. They also indicate that the adaptive step is an effective method of improving interpolated gradient accuracy. The TWO results indicate that the interpolation accuracy can have a strong impact on optimization search direction.
NASA Astrophysics Data System (ADS)
Yan, Su; Ghasemi-Nejhad, Mehrdad N.
2003-07-01
In this paper, a model of adaptive composite panel surfaces with piezoelectric patches is built using the Rayleigh-Ritz method based on laminate theory. The inertia and stiffness of the actuators are considered in the developed model. Optimal actuator placement is desirable because piezoelectric actuators often have limited power output. Owing to its effectiveness in searching for optimal design parameters and obtaining globally optimal solutions, a genetic algorithm is applied to find optimal locations of the piezoelectric actuators for the vibration control of a smart composite beam. In addition, the effects of the population size, the crossover probability, and the mutation probability on the convergence of the genetic algorithm are investigated. Meanwhile, a linear quadratic regulator (LQR) and a disturbance observer (DOB) are employed for vibration suppression of the optimized adaptive composite beam (ACB). The experimental results show the robustness of the DOB, which successfully suppresses vibrations of the cantilevered ACB, consistent with the optimization results, in an uncertain system.
Munari, Fernanda M; Revers, Luis F; Cardone, Jacqueline M; Immich, Bruna F; Moura, Dinara J; Guecheva, Temenouga N; Bonatto, Diego; Laurino, Jomar P; Saffi, Jenifer; Brendel, Martin; Henriques, João A P
2014-01-01
By isolating putative binding partners through the two-hybrid system (THS) we further extended the characterization of the specific interstrand cross-link (ICL) repair gene PSO2 of Saccharomyces cerevisiae. Nine fusion protein products were isolated for Pso2p using THS, among them the Sak1 kinase, which interacted with the C-terminal β-CASP domain of Pso2p. Comparison of mutagen-sensitivity phenotypes of pso2Δ, sak1Δ and pso2Δsak1Δ disruptants revealed that SAK1 is necessary for complete WT-like repair. The epistatic interaction of both mutant alleles suggests that Sak1p and Pso2p act in the same pathway of controlling sensitivity to DNA-damaging agents. We also observed that Pso2p is phosphorylated by Sak1 kinase in vitro and co-immunoprecipitates with Sak1p after 8-MOP+UVA treatment. Survival data after treatment of pso2Δ, yku70Δ and yku70Δpso2Δ with nitrogen mustard, PSO2 and SAK1 with YKU70 or DNL4 single-, double- and triple mutants with 8-MOP+UVA indicated that ICL repair is independent of YKu70p and DNL4p in S. cerevisiae. Furthermore, a non-epistatic interaction was observed between MRE11, PSO2 and SAK1 genes after ICL induction, indicating that their encoded proteins act on the same substrate, but in distinct repair pathways. In contrast, an epistatic interaction was observed for PSO2 and RAD52, PSO2 and RAD50, PSO2 and XRS2 genes in 8-MOP+UVA treated exponentially growing cells. PMID:24362320
Automatic Calibration of a Semi-Distributed Hydrologic Model Using Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Bekele, E. G.; Nicklow, J. W.
2005-12-01
Hydrologic simulation models need to be calibrated and validated before they are used for operational predictions. Spatially distributed hydrologic models generally have a large number of parameters to capture the various physical characteristics of a hydrologic system. Manual calibration of such models is a tedious and daunting task, and its success depends on the subjective assessment of a particular modeler, which includes knowledge of the basic approaches and interactions in the model. To alleviate these shortcomings, an automatic calibration model, which employs an evolutionary optimization technique known as the Particle Swarm Optimizer (PSO) for parameter estimation, is developed. PSO is a heuristic search algorithm inspired by the social behavior of bird flocking and fish schooling. The newly developed calibration model is integrated into the U.S. Department of Agriculture's Soil and Water Assessment Tool (SWAT). SWAT is a physically based, semi-distributed hydrologic model developed to predict the long-term impacts of land management practices on water, sediment, and agricultural chemical yields in large, complex watersheds with varying soils, land use, and management conditions. SWAT was calibrated for streamflow and sediment concentration. The calibration process involves parameter specification, whereby sensitive model parameters are identified, and parameter estimation. Parameterization was performed to reduce the number of parameters to be calibrated. The methodology is applied to a demonstration watershed known as Big Creek, located in southern Illinois. Application results show the effectiveness of the approach, and model predictions are significantly improved.
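Automatic calibration of this kind pairs a hydrologic model with an optimizer that minimizes an error metric such as 1 - NSE (Nash-Sutcliffe efficiency). The sketch below shows that plumbing with a toy linear-reservoir "model" and a grid search standing in for PSO; the model, names, and parameter values are illustrative assumptions, not SWAT or the authors' code.

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; <= 0 is no better than the mean."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - ((obs - sim) ** 2).sum() / ((obs - obs.mean()) ** 2).sum()

def calibration_objective(params, model, forcing, obs):
    """Automatic calibration minimizes 1 - NSE of the simulated streamflow."""
    return 1.0 - nse(model(params, forcing), obs)

def toy_model(params, rain):
    """Toy linear reservoir: outflow is a fixed fraction k of current storage."""
    k, = params
    s, q = 0.0, []
    for r in rain:
        s += r
        out = k * s
        s -= out
        q.append(out)
    return np.array(q)

rain = np.array([0, 5, 10, 3, 0, 0, 8, 2, 0, 0], float)
obs = toy_model([0.35], rain)            # synthetic "observations" with known k
# a grid search stands in for PSO here; any optimizer can drive the objective
ks = np.linspace(0.05, 0.95, 91)
best_k = ks[np.argmin([calibration_objective([k], toy_model, rain, obs) for k in ks])]
```

With a real model, the parameter vector would hold the sensitive SWAT parameters identified in the specification step, and PSO would replace the grid search.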
NASA Astrophysics Data System (ADS)
Rahman, Md Ashiqur; Anwar, Sohel; Izadian, Afshin
2016-03-01
In this paper, a gradient-free optimization technique, namely the particle swarm optimization (PSO) algorithm, is utilized to identify specific parameters of the electrochemical model of a lithium-ion battery with LiCoO2 cathode chemistry. Battery electrochemical model parameters are subject to change under severe or abusive operating conditions, resulting in, for example, an over-discharged or over-charged battery. It is important for a battery management system to have these parameter changes fully captured in a bank of battery models that can be used to monitor battery conditions in real time. Here the PSO methodology has been successfully applied to identify four electrochemical model parameters that exhibit significant variation under severe operating conditions: the solid-phase diffusion coefficient at the positive electrode (cathode), the solid-phase diffusion coefficient at the negative electrode (anode), the intercalation/de-intercalation reaction rate at the cathode, and the intercalation/de-intercalation reaction rate at the anode. The identified model parameters were used to generate the respective battery models for both healthy and degraded batteries. These models were then validated by comparing the model output voltage with the experimental output voltage for the stated operating conditions. The identified Li-ion battery electrochemical model parameters are within reasonable accuracy, as evidenced by the experimental validation results.
Fan, Mengbao; Wang, Qi; Cao, Binghua; Ye, Bo; Sunny, Ali Imam; Tian, Guiyun
2016-01-01
Eddy current testing is quite a popular non-contact and cost-effective method for nondestructive evaluation of product quality and structural integrity. Excitation frequency is one of the key performance factors for defect characterization. In the literature, there are many interesting papers dealing with wide spectral content and optimal frequency in terms of detection sensitivity. However, research activity on frequency optimization with respect to characterization performances is lacking. In this paper, an investigation into optimum excitation frequency has been conducted to enhance surface defect classification performance. The influences of excitation frequency for a group of defects were revealed in terms of detection sensitivity, contrast between defect features, and classification accuracy using kernel principal component analysis (KPCA) and a support vector machine (SVM). It is observed that probe signals are the most sensitive on the whole for a group of defects when excitation frequency is set near the frequency at which maximum probe signals are retrieved for the largest defect. After the use of KPCA, the margins between the defect features are optimum from the perspective of the SVM, which adopts optimal hyperplanes for structure risk minimization. As a result, the best classification accuracy is obtained. The main contribution is that the influences of excitation frequency on defect characterization are interpreted, and experiment-based procedures are proposed to determine the optimal excitation frequency for a group of defects rather than a single defect with respect to optimal characterization performances. PMID:27164112
Carver, Charles S.; Scheier, Michael F.; Segerstrom, Suzanne C.
2010-01-01
Optimism is an individual difference variable that reflects the extent to which people hold generalized favorable expectancies for their future. Higher levels of optimism have been related prospectively to better subjective well-being in times of adversity or difficulty (i.e., controlling for previous well-being). Consistent with such findings, optimism has been linked to higher levels of engagement coping and lower levels of avoidance, or disengagement, coping. There is evidence that optimism is associated with taking proactive steps to protect one's health, whereas pessimism is associated with health-damaging behaviors. Consistent with such findings, optimism is also related to indicators of better physical health. The energetic, task-focused approach that optimists take to goals also relates to benefits in the socioeconomic world. Some evidence suggests that optimism relates to more persistence in educational efforts and to higher later income. Optimists also appear to fare better than pessimists in relationships. Although there are instances in which optimism fails to convey an advantage, and instances in which it may convey a disadvantage, those instances are relatively rare. In sum, the behavioral patterns of optimists appear to provide models of living for others to learn from. PMID:20170998
Evaluation of Optimization Methods for Hydrologic Model Calibration in Ontario Basins
NASA Astrophysics Data System (ADS)
Razavi, T.; Coulibaly, P. D.
2013-12-01
The Particle Swarm Optimization algorithm (PSO), the Shuffled Complex Evolution algorithm (SCE), the Non-Dominated Sorting Genetic Algorithm II (NSGA II), and a Monte Carlo procedure are applied to optimize the calibration of two conceptual hydrologic models, namely the Sacramento Soil Moisture Accounting (SAC-SMA) and McMaster University-Hydrologiska Byråns Vattenbalansavdelning (MAC-HBV) models. PSO, SCE, and NSGA II are inherently evolutionary computational methods with the potential of reaching the global optimum, in contrast to stochastic search procedures such as the Monte Carlo method. Spatial maps of the Nash-Sutcliffe Efficiency (NSE) for daily streamflow and the Volume Error (VE) for peak and low flows demonstrate that, for both MAC-HBV and SAC-SMA, PSO and SCE are equally superior to NSGA II and Monte Carlo for all 90 selected basins across Ontario (Canada) using 20 years (1976-1994) of hydrologic records. For peak flows, MAC-HBV with PSO generally performs better than with SCE, whereas SAC-SMA with SCE and PSO shows similar performance. For low flows, MAC-HBV with PSO performs better for most of the northern large watersheds, while SCE performs better for the southern small watersheds. The temporal variability of NSE values for daily streamflow shows that all the optimization methods perform better for the winter season than for the summer.
NASA Technical Reports Server (NTRS)
Lan, C. Edward; Ge, Fuying
1989-01-01
Control system design for general nonlinear flight dynamic models is considered through numerical simulation. The design is accomplished through a numerical optimizer coupled with analysis of the flight dynamic equations. The general flight dynamic equations are numerically integrated, and dynamic characteristics are then identified from the dynamic response. The design variables are determined iteratively by the optimizer to optimize a prescribed objective function related to the desired dynamic characteristics. The generality of the method allows nonlinear effects in aerodynamics and dynamic coupling to be considered in the design process. To demonstrate the method, nonlinear simulation models of F-5A and F-16 configurations are used to design dampers that satisfy flying-quality specifications and control systems that prevent departure. The results indicate that the present method is simple in formulation and effective in satisfying the design objectives.
Enabling a viable technique for the optimization of LNG carrier cargo operations
NASA Astrophysics Data System (ADS)
Alaba, Onakoya Rasheed; Nwaoha, T. C.; Okwu, M. O.
2016-09-01
In this study, we optimize the loading and discharging operations of the Liquefied Natural Gas (LNG) carrier. First, we identify the required precautions for LNG carrier cargo operations. Next, we prioritize these precautions using the analytic hierarchy process (AHP) and experts' judgments, in order to optimize the operational loading and discharging exercises of the LNG carrier, prevent system failure and human error, and reduce the risk of marine accidents. Thus, the objective of our study is to increase the level of safety during cargo operations.
NASA Technical Reports Server (NTRS)
Martini, William R.
1989-01-01
A FORTRAN computer code is described that could be used to design and optimize a free-displacer, free-piston Stirling engine similar to the RE-1000 engine made by Sunpower. The code contains options for specifying displacer and power piston motion or for allowing these motions to be calculated by a force balance. The engine load may be a dashpot, inertial compressor, hydraulic pump, or linear alternator. Cycle analysis may be done by isothermal or adiabatic analysis. Adiabatic analysis may be done using the Martini moving-gas-node analysis or the Rios second-order Runge-Kutta analysis. Flow loss and heat loss equations are included, as are graphical displays of engine motions, pressures, and temperatures. Programming for optimizing up to 15 independent dimensions is included. Sample performance results are shown for both specified and unconstrained piston motions, as generated by each of the two Martini analyses. Two sample optimization searches using specified piston motion and isothermal analysis are shown, one with three adjustable inputs and one with four. Two optimization searches for calculated piston motion are also presented, with three and with four adjustable inputs. The effect of leakage is evaluated. Suggestions for further work are given.
Bai, Mingsian R; Hsieh, Ping-Ju; Hur, Kur-Nan
2009-02-01
The performance of the minimum mean-square error noise reduction (MMSE-NR) algorithm in conjunction with time-recursive averaging (TRA) for noise estimation is found to be very sensitive to the choice of two recursion parameters. To address this problem in a more systematic manner, this paper proposes an optimization method to efficiently search for the optimal parameters of the MMSE-TRA-NR algorithm. The objective function is based on a regression model, while the optimization process is carried out with the simulated annealing algorithm, which is well suited to problems with many local optima. Another NR algorithm proposed in the paper employs linear predictive coding as a preprocessor for extracting the correlated portion of human speech. Objective and subjective tests were undertaken to compare the optimized MMSE-TRA-NR algorithm with several conventional NR algorithms. The results of the subjective tests were processed using analysis of variance to assess statistical significance. A post hoc test, Tukey's Honestly Significant Difference, was conducted to further assess the pairwise differences between the NR algorithms.
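Simulated annealing, as used above for the recursion parameters, accepts occasional uphill moves with a probability that shrinks as the "temperature" cools. The sketch below is generic: the toy multimodal objective, schedule, and constants are illustrative assumptions, not the paper's regression-model objective.

```python
import math
import random

random.seed(3)

def simulated_annealing(f, x0, step=0.5, t0=1.0, cooling=0.995, iters=4000):
    """Minimize f: accept uphill moves with probability exp(-delta/T); the
    geometric decay of T lets the search escape local optima early, then settle."""
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        xn = x + random.uniform(-step, step)   # random neighbor proposal
        fn = f(xn)
        if fn < fx or random.random() < math.exp(-(fn - fx) / t):
            x, fx = xn, fn
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling
    return best, fbest

# toy objective with many local minima; the global minimum is at x = 0
f = lambda x: x * x + 3.0 * (1.0 - math.cos(4.0 * x))
best, fbest = simulated_annealing(f, x0=5.0, step=2.0)
```

A greedy hill-climber started at x0 = 5 would tend to stop in a nearby local basin; the temperature schedule is what allows the search to keep moving toward the global minimum.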
Application of multi-objective nonlinear optimization technique for coordinated ramp-metering
Haj Salem, Habib; Farhi, Nadir; Lebacque, Jean Patrick
2015-03-10
This paper aims at developing a multi-objective nonlinear optimization algorithm applied to coordinated motorway ramp metering. The multi-objective function includes two components: traffic and safety. Off-line simulation studies were performed on A4 France Motorway including 4 on-ramps.
General optimization technique for high-quality community detection in complex networks
NASA Astrophysics Data System (ADS)
Sobolevsky, Stanislav; Campari, Riccardo; Belyi, Alexander; Ratti, Carlo
2014-07-01
Recent years have witnessed the development of a large body of algorithms for community detection in complex networks. Most of them are based on the optimization of objective functions, among which modularity is the most common, though a number of alternatives have been suggested in the scientific literature. We present here an effective general search strategy for the optimization of various objective functions for community detection purposes. When applied to modularity, on both real-world and synthetic networks, our search strategy substantially outperforms the best existing algorithms in terms of the final scores of the objective function. In terms of execution time for modularity optimization, this approach also outperforms most of the alternatives in the literature, with the exception of the fastest, but usually less effective, greedy algorithms. Networks of up to 30,000 nodes can be analyzed in times ranging from minutes to a few hours on average workstations, making our approach readily applicable to tasks not limited by strict time constraints but requiring the quality of partitioning to be as high as possible. Some examples demonstrate how this quality can be affected by even relatively small changes in the modularity score, stressing the importance of optimization accuracy.
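Modularity, the most common objective mentioned above, has a simple closed form: Q = (1/2m) * sum_ij (A_ij - k_i k_j / 2m) [c_i = c_j]. A minimal sketch of computing it on a toy two-community graph (the graph and the naive dense-matrix formulation are illustrative; real detectors use sparse updates):

```python
import numpy as np

def modularity(adj, labels):
    """Newman modularity of a partition of an undirected, unweighted graph."""
    adj = np.asarray(adj, float)
    m2 = adj.sum()                              # 2m: twice the edge count
    k = adj.sum(axis=1)                         # node degrees
    same = np.equal.outer(labels, labels)       # True where nodes share a community
    return ((adj - np.outer(k, k) / m2) * same).sum() / m2

# two triangles joined by a single bridge edge: a natural two-community graph
adj = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    adj[i, j] = adj[j, i] = 1
q_split = modularity(adj, [0, 0, 0, 1, 1, 1])   # the natural split scores higher
q_lump = modularity(adj, [0, 0, 0, 0, 0, 0])    # one big community scores zero
```

An optimization-based detector searches over the label assignment to maximize Q (or an alternative objective), which is what the search strategy in the paper does at scale.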
Miller, Christopher J; Antunes, Marcelo B; Sobanko, Joseph F
2015-03-01
Sound surgical technique is necessary to achieve excellent surgical outcomes. Despite the fact that dermatologists perform more office-based cutaneous surgery than any other specialty, few dermatologists have opportunities for practical instruction to improve surgical technique after residency and fellowship. This 2-part continuing medical education article will address key principles of surgical technique at each step of cutaneous reconstruction. Part I reviews incising, excising, and undermining. Objective quality control questions are proposed to provide a framework for self-assessment and continuous quality improvement.
NASA Astrophysics Data System (ADS)
Cuevas Vivas, Gabriel Francisco
A methodology to optimize enrichment distributions in Light Water Reactor (LWR) fuel assemblies is developed and tested. The optimization technique employed is the linear programming revised simplex method, and the fuel assembly's performance is evaluated with a neutron transport code that is also utilized in the calculation of sensitivity coefficients. The enrichment distribution optimization procedure begins from a single-value (flat) enrichment distribution until a target maximum local power peaking factor is achieved. The optimum rod enrichment distribution, with a maximum local power peaking factor of 1.00 and with each rod having its own enrichment, is calculated at an intermediate stage of the analysis. Later, the best locations and values for a reduced number of rod enrichments are obtained as a function of a target maximum local power peaking factor by applying sensitivity-to-change techniques. Finally, a shuffling process that assigns individual rod enrichments among the enrichment groups is performed. The relative rod power distribution is then slightly modified and the rod grouping redefined until the optimum configuration is attained. To verify the accuracy of the relative rod power distribution, a full computation with the neutron transport code using the optimum enrichment distribution is carried out. The results are compared and tested for assembly designs loaded with fresh Low Enriched Uranium (LEU) and plutonium Mixed OXide (MOX) fuels. MOX isotopics for both reactor-grade and weapons-grade plutonium were utilized to demonstrate the wide range of applicability of the optimization technique. The features of the assembly designs used for evaluation included burnable absorbers and internal water regions, and were prepared to resemble the configurations of modern assemblies utilized in commercial Boiling Water Reactors (BWRs) and Pressurized Water Reactors (PWRs). In some cases, a net improvement in the relative rod power distribution or
Selectively-informed particle swarm optimization.
Gao, Yang; Du, Wenbo; Yan, Gang
2015-01-01
Particle swarm optimization (PSO) is a nature-inspired algorithm that has shown outstanding performance in solving many realistic problems. In the original PSO and most of its variants all particles are treated equally, overlooking the impact of structural heterogeneity on individual behavior. Here we employ complex networks to represent the population structure of swarms and propose a selectively-informed PSO (SIPSO), in which the particles choose different learning strategies based on their connections: a densely-connected hub particle gets full information from all of its neighbors while a non-hub particle with few connections can only follow a single yet best-performed neighbor. Extensive numerical experiments on widely-used benchmark functions show that our SIPSO algorithm remarkably outperforms the PSO and its existing variants in success rate, solution quality, and convergence speed. We also explore the evolution process from a microscopic point of view, leading to the discovery of different roles that the particles play in optimization. The hub particles guide the optimization process towards correct directions while the non-hub particles maintain the necessary population diversity, resulting in the optimum overall performance of SIPSO. These findings deepen our understanding of swarm intelligence and may shed light on the underlying mechanism of information exchange in natural swarm and flocking behaviors. PMID:25787315
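The degree-dependent learning rule described above can be sketched in a few lines. This is a minimal illustration, not the authors' code: the degree threshold, inertia weight, and acceleration coefficient values are assumptions, and fitness is taken as lower-is-better.

```python
import random

def sipso_velocity(v, x, pbest, neighbors, pbests, fitness,
                   degree_threshold=4, w=0.72, c=1.49):
    """One-dimensional velocity update for a single particle.

    Densely-connected hub particles (degree >= threshold) are fully
    informed by all neighbors' personal bests; sparsely-connected
    particles follow only their single best-performing neighbor."""
    if len(neighbors) >= degree_threshold:  # fully-informed hub
        social = sum(c * random.random() * (pbests[j] - x)
                     for j in neighbors) / len(neighbors)
    else:  # non-hub: follow the best neighbor only
        best = min(neighbors, key=lambda j: fitness[j])
        social = c * random.random() * (pbests[best] - x)
    cognitive = c * random.random() * (pbest - x)
    return w * v + cognitive + social
```

In a full SIPSO loop this update would be applied per dimension and per particle, with the neighbor lists taken from a scale-free network over the swarm.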
NASA Astrophysics Data System (ADS)
Langton, John T.; Caroli, Joseph A.; Rosenberg, Brad
2008-04-01
To support an Effects Based Approach to Operations (EBAO), Intelligence, Surveillance, and Reconnaissance (ISR) planners must optimize collection plans within an evolving battlespace. A need exists for a decision support tool that allows ISR planners to rapidly generate and rehearse high-performing ISR plans that balance multiple objectives and constraints to address dynamic collection requirements for assessment. To meet this need we have designed an evolutionary algorithm (EA)-based "Integrated ISR Plan Analysis and Rehearsal System" (I2PARS) to support Effects-based Assessment (EBA). I2PARS supports ISR mission planning and dynamic replanning to coordinate assets and optimize their routes, allocation and tasking. It uses an evolutionary algorithm to address the large parametric space of route-finding problems, which is sometimes discontinuous in the ISR domain because of conflicting objectives such as minimizing asset utilization while maximizing ISR coverage. EAs are uniquely suited for generating solutions in dynamic environments and also allow user feedback. They are therefore ideal for "streaming optimization" and dynamic replanning of ISR mission plans. I2PARS uses the Non-dominated Sorting Genetic Algorithm (NSGA-II) to automatically generate a diverse set of high-performing collection plans given multiple objectives, constraints, and assets. Intended end users of I2PARS include ISR planners in the Combined Air Operations Centers and Joint Intelligence Centers. Here we show the feasibility of applying the NSGA-II algorithm and EAs in general to the ISR planning domain. Unique genetic representations and operators for optimization within the ISR domain are presented along with multi-objective optimization criteria for ISR planning. Promising results of the I2PARS architecture design, early software prototype, and limited domain testing of the new algorithm are discussed. We also present plans for future research and development, as well as technology
NASA Astrophysics Data System (ADS)
Shamarokov, A. S.; Zorin, V. M.; Dai, Fam Kuang
2016-03-01
At the current stage of development of nuclear power engineering, high demands are made on nuclear power plants (NPPs), including on their economic performance. Under these conditions, improving NPP quality requires, in particular, well-justified choices of the values of the numerous controlled parameters of the technological (heat) scheme. Furthermore, the chosen values should correspond to the economic conditions of NPP operation, which usually lie a considerable time interval beyond the moment at which the parameters are chosen. The article presents a technique for optimizing the controlled parameters of the heat circuit of a steam turbine plant for the future. Its distinguishing feature is that the results are obtained as functions of a complex parameter combining the external economic and operating parameters, a parameter that remains relatively stable under a changing economic environment. The article presents the results of applying this technique to optimize the minimum temperature driving forces in the surface heaters of the heat regeneration system of a K-1200-6.8/50 steam turbine plant. For optimization, the collector-screen high- and low-pressure heaters developed at the OAO All-Russia Research and Design Institute of Nuclear Power Machine Building, which, in the authors' opinion, have certain advantages over other types of heaters, were chosen. The optimality criterion in the task was the change in annual reduced costs for the NPP compared with the version accepted as the baseline. The influence on the solution of independent variables not included in the complex parameter was analyzed. The optimization task was solved using the alternating-variable descent method. The obtained values of the minimum temperature driving forces can guide the design of new nuclear plants with a heat circuit similar to that considered here.
Bréchet, Thierry; Tulkens, Henry
2009-04-01
Technological choices are multi-dimensional and thus one needs a multi-dimensional methodology to identify best available techniques. Moreover, in the presence of environmental externalities generated by productive activities, 'best' available techniques should be best from Society's point of view, not only in terms of private interests. In this paper we present a modeling framework based on methodologies appropriate to serve these two purposes, namely linear programming and internalization of external costs. We develop it as an operational decision tool, of interest for both firms and regulators, and we apply it to a plant in the lime industry. We show why, in this context, there is in general not a single best available technique (BAT), but rather a best combination of available techniques (BCAT). PMID:19108944
Optimization of the tungsten oxide technique for measurement of atmospheric ammonia
NASA Technical Reports Server (NTRS)
Brown, Kenneth G.
1987-01-01
Hollow tubes coated with tungstic acid have been shown to be of value in the determination of ammonia and nitric acid in ambient air. Practical application of this technique was demonstrated utilizing an automated sampling system for in-flight collection and analysis of atmospheric samples. Due to time constraints these previous measurements were performed on tubes that had not been well characterized in the laboratory. As a result the experimental precision could not be accurately estimated. Since the technique was being compared to other techniques for measuring these compounds, it became necessary to perform laboratory tests which would establish the reliability of the technique. This report is a summary of these laboratory experiments as they are applied to the determination of ambient ammonia concentration.
Converting PSO dynamics into complex network - Initial study
NASA Astrophysics Data System (ADS)
Pluhacek, Michal; Janostik, Jakub; Senkerik, Roman; Zelinka, Ivan
2016-06-01
This paper presents an initial study of the possibility of capturing the inner dynamics of the particle swarm optimization algorithm in a complex network structure. Inspired by previous works, two different approaches for creating the complex network are presented. Visualizations of the networks are presented and commented on. Possibilities for future applications of the proposed design are given in detail.
Taneja, Sakshi; Shilpi, Satish; Khatri, Kapil
2016-05-01
Efavirenz is a non-nucleoside reverse transcriptase inhibitor and is classified as a BCS Class II API. Its erratic oral absorption and poor bioavailability make it a potential candidate for formulation as a nanosuspension. The objective of this study was to formulate efavirenz nanosuspensions employing the antisolvent precipitation-ultrasonication method, and to enhance its solubility by reducing particle size to the nanometer range. The effects of different process parameters were studied and optimized with respect to particle size and polydispersity index (PDI). The optimized formulation was also subjected to lyophilization to further increase solubility and stability, and the technology is potentially suited to a range of poorly water-soluble compounds.
NASA Technical Reports Server (NTRS)
Starlinger, Alois; Duffy, Stephen F.; Palko, Joseph L.
1993-01-01
New methods are presented that utilize the optimization of goodness-of-fit statistics in order to estimate Weibull parameters from failure data. It is assumed that the underlying population is characterized by a three-parameter Weibull distribution. Goodness-of-fit tests are based on the empirical distribution function (EDF). The EDF is a step function, calculated using failure data, and represents an approximation of the cumulative distribution function for the underlying population. Statistics (such as the Kolmogorov-Smirnov statistic and the Anderson-Darling statistic) measure the discrepancy between the EDF and the cumulative distribution function (CDF). These statistics are minimized with respect to the three Weibull parameters. Due to nonlinearities encountered in the minimization process, Powell's numerical optimization procedure is applied to obtain the minimum values of the EDF statistics. Numerical examples show the applicability of these new estimation methods. The results are compared to the estimates obtained with Cooper's nonlinear regression algorithm.
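The EDF-based fitting idea can be sketched as follows: compute the Kolmogorov-Smirnov distance between the sample's EDF and a three-parameter Weibull CDF, then minimize it over the parameters. This is a minimal illustration in which a coarse grid search stands in for Powell's method, and all names and grid values are assumptions.

```python
import math

def weibull_cdf(x, shape, scale, loc):
    """Three-parameter Weibull CDF: F(x) = 1 - exp(-((x-loc)/scale)^shape)."""
    if x <= loc:
        return 0.0
    return 1.0 - math.exp(-((x - loc) / scale) ** shape)

def ks_statistic(data, shape, scale, loc):
    """Kolmogorov-Smirnov distance between the EDF of `data` and the CDF."""
    xs = sorted(data)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = weibull_cdf(x, shape, scale, loc)
        # EDF jumps from i/n to (i+1)/n at each sorted observation.
        d = max(d, abs((i + 1) / n - f), abs(i / n - f))
    return d

def fit_weibull_grid(data, shapes, scales, locs):
    """Coarse grid search over parameters, standing in for Powell's method."""
    return min((ks_statistic(data, b, e, g), (b, e, g))
               for b in shapes for e in scales for g in locs)
```

A gradient-free local optimizer (such as Powell's method, as in the paper) would refine the grid winner rather than enumerate candidates.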
NASA Technical Reports Server (NTRS)
Levy, R.; Chai, K.
1978-01-01
A description is presented of an effective optimality criterion computer design approach for member size selection to improve frequency characteristics for moderately large structure models. It is shown that the implementation of the simultaneous iteration method within a natural frequency structural design optimization provides a method which is more efficient in isolating the lowest natural frequency modes than the frequently applied Stodola method. Additional computational advantages are derived by using previously converged eigenvectors at the start of the iterations during the second and the following design cycles. Vectors with random components can be used at the first design cycle, which, in relation to the entire computer time for the design program, results in only a moderate computational penalty.
Application of direct inverse analogy method (DIVA) and viscous design optimization techniques
NASA Technical Reports Server (NTRS)
Greff, E.; Forbrich, D.; Schwarten, H.
1991-01-01
A direct-inverse approach to the transonic design problem was presented in its initial state at the First International Conference on Inverse Design Concepts and Optimization in Engineering Sciences (ICIDES-1). Further applications of the direct inverse analogy (DIVA) method to the design of airfoils and incremental wing improvements, together with experimental verification, are reported. First results of a new viscous design code, also of the residual-correction type but with semi-inverse boundary-layer coupling, are compared with DIVA; this coupling may enhance the accuracy of trailing-edge design for highly loaded airfoils. Finally, the capabilities of an optimization routine coupled with the two viscous full potential solvers are investigated in comparison to the inverse method.
NASA Technical Reports Server (NTRS)
Zang, Thomas A.; Green, Lawrence L.
1999-01-01
A challenge for the fluid dynamics community is to adapt to and exploit the trend towards greater multidisciplinary focus in research and technology. The past decade has witnessed substantial growth in the research field of Multidisciplinary Design Optimization (MDO). MDO is a methodology for the design of complex engineering systems and subsystems that coherently exploits the synergism of mutually interacting phenomena. As evidenced by the papers, which appear in the biannual AIAA/USAF/NASA/ISSMO Symposia on Multidisciplinary Analysis and Optimization, the MDO technical community focuses on vehicle and system design issues. This paper provides an overview of the MDO technology field from a fluid dynamics perspective, giving emphasis to suggestions of specific applications of recent MDO technologies that can enhance fluid dynamics research itself across the spectrum, from basic flow physics to full configuration aerodynamics.
NASA Astrophysics Data System (ADS)
Verma, Harish Kumar; Jain, Cheshta
2016-09-01
In this article, a hybrid algorithm of particle swarm optimization with statistical parameters (HSPSO) is proposed. Basic PSO has low search precision on shifted multimodal problems because it tends to fall into local minima. The proposed approach uses statistical characteristics to update particle velocities, helping particles avoid local minima and search for the global optimum with improved convergence. The performance of the newly developed algorithm is verified using various standard multimodal, multivariable, and shifted hybrid composition benchmark problems. Further, a comparative analysis of HSPSO against other PSO variants is carried out for frequency control of a hybrid renewable energy system comprising a solar system, wind system, diesel generator, aqua electrolyzer, and ultracapacitor. A significant improvement in the convergence characteristics of the HSPSO algorithm over other PSO variants is observed in solving both the benchmark optimization problems and the renewable hybrid system problem.
Use of the particle swarm optimization algorithm for second order design of levelling networks
NASA Astrophysics Data System (ADS)
Yetkin, Mevlut; Inal, Cevat; Yigit, Cemal Ozer
2009-08-01
The weight problem in geodetic networks can be dealt with as an optimization procedure. This classic problem of geodetic network optimization is also known as second-order design. The basic principles of geodetic network optimization are reviewed. Then the particle swarm optimization (PSO) algorithm is applied to a geodetic levelling network in order to solve the second-order design problem. PSO, which is an iterative-stochastic search algorithm in swarm intelligence, emulates the collective behaviour of bird flocking, fish schooling or bee swarming, to converge probabilistically to the global optimum. Furthermore, it is a powerful method because it is easy to implement and computationally efficient. Second-order design of a geodetic levelling network using PSO yields a practically realizable solution. It is also suitable for non-linear matrix functions that are very often encountered in geodetic network optimization. The fundamentals of the method and a numeric example are given.
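A plain global-best PSO of the kind applied above can be sketched as a generic minimizer. The objective, bounds, swarm size, and coefficient values below are placeholders, not the paper's levelling-network formulation, in which the objective would measure how well the weights reproduce a target criterion matrix.

```python
import random

def pso_minimize(f, bounds, n_particles=20, iters=200,
                 w=0.72, c1=1.49, c2=1.49, seed=1):
    """Minimal global-best PSO; returns (best_position, best_value)."""
    rng = random.Random(seed)
    dim = len(bounds)
    xs = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]                    # personal best positions
    pval = [f(x) for x in xs]                     # personal best values
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]            # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rng.random() * (gbest[d] - xs[i][d]))
                lo, hi = bounds[d]
                xs[i][d] = min(max(xs[i][d] + vs[i][d], lo), hi)  # clamp to bounds
            v = f(xs[i])
            if v < pval[i]:
                pbest[i], pval[i] = xs[i][:], v
                if v < gval:
                    gbest, gval = xs[i][:], v
    return gbest, gval
```

For second-order design, each coordinate of a particle would be one observation weight and `f` would penalize the deviation from the desired covariance (criterion) matrix.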
NASA Technical Reports Server (NTRS)
Banks, H. T.; Wang, C.
1989-01-01
A general approximation framework is discussed for computation of optimal feedback controls in linear quadratic regular problems for nonautonomous parabolic distributed parameter systems. This is done in the context of a theoretical framework using general evolution systems in infinite dimensional Hilbert spaces. Conditions are discussed for preservation under approximation of stabilizability and detectability hypotheses on the infinite dimensional system. The special case of periodic systems is also treated.
Ahmed, Ashik; Al-Amin, Rasheduzzaman; Amin, Ruhul
2014-01-01
This paper proposes the design of a Static Synchronous Series Compensator (SSSC)-based damping controller to enhance the stability of a Single Machine Infinite Bus (SMIB) system by means of the Invasive Weed Optimization (IWO) technique. A conventional PI controller is used as the SSSC damping controller, taking rotor speed deviation as the input. The damping controller parameters are tuned based on a time-integral-of-absolute-error cost function using IWO. The performance of the IWO-based controller is compared to that of a Particle Swarm Optimization (PSO)-based controller. Time-domain simulation results are presented, and the performance of the controllers under different loading conditions and fault scenarios is studied in order to illustrate the effectiveness of the IWO-based design approach. PMID:25140288
A Globally Optimal Particle Tracking Technique for Stereo Imaging Velocimetry Experiments
NASA Technical Reports Server (NTRS)
McDowell, Mark
2008-01-01
An important phase of any Stereo Imaging Velocimetry experiment is particle tracking. Particle tracking seeks to identify and characterize the motion of individual particles entrained in a fluid or air experiment. We analyze a cylindrical chamber filled with water and seeded with density-matched particles. In every four-frame sequence, we identify a particle track by assigning a unique track label for each camera image. The conventional approach to particle tracking is to use an exhaustive tree-search method utilizing greedy algorithms to reduce search times. However, these types of algorithms are not optimal due to a cascade effect of incorrect decisions upon adjacent tracks. We examine the use of a guided evolutionary neural net with simulated annealing to arrive at a globally optimal assignment of tracks. The net is guided both by the minimization of the search space through the use of prior limiting assumptions about valid tracks and by a strategy which seeks to avoid high-energy intermediate states which can trap the net in a local minimum. A stochastic search algorithm is used in place of back-propagation of error to further reduce the chance of being trapped in an energy well. Global optimization is achieved by minimizing an objective function, which includes both track smoothness and particle-image utilization parameters. In this paper we describe our model and present our experimental results. We compare our results with a nonoptimizing, predictive tracker and obtain an average increase in valid track yield of 27 percent
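The track-smoothness part of the objective described above can be illustrated with a simple squared-acceleration cost over four-frame tracks. This sketch replaces the paper's guided neural net with brute-force enumeration over a handful of candidates, which is only feasible for tiny search spaces; the cost definition is an illustrative assumption, not the paper's exact objective.

```python
def track_cost(track):
    """Smoothness cost for a candidate track: sum of squared second
    differences (discrete acceleration) of 2-D particle positions.
    Lower is smoother."""
    cost = 0.0
    for i in range(len(track) - 2):
        ax = track[i][0] - 2 * track[i + 1][0] + track[i + 2][0]
        ay = track[i][1] - 2 * track[i + 1][1] + track[i + 2][1]
        cost += ax * ax + ay * ay
    return cost

def best_assignment(tracks):
    """Pick the smoothest candidate track by exhaustive comparison
    (the paper instead uses an annealed neural net for large spaces)."""
    return min(tracks, key=track_cost)
```

In a real assignment problem, candidate tracks would be generated from particle detections in consecutive frames, and the global objective would also reward particle-image utilization.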
Demonstration of optimization techniques for groundwater plumeremediation using iTOUGH2
Finsterle, Stefan
2004-11-11
We examined the potential use of standard optimization algorithms as implemented in the inverse modeling code iTOUGH2 (Finsterle, 1999abc) for the solution of aquifer remediation problems. Costs for the removal of dissolved or free-phase contaminants depend on aquifer properties, the chosen remediation technology, and operational parameters (such as number of wells drilled and pumping rates). A cost function must be formulated that may include actual costs and hypothetical penalty costs for incomplete cleanup; the total cost function is therefore a measure of the overall effectiveness and efficiency of the proposed remediation scenario. The cost function is then minimized by automatically adjusting certain decision or operational parameters. We evaluate the impact of these operational parameters on remediation using a three-phase, three-component flow and transport simulator, which is linked to nonlinear optimization routines. We demonstrate that the methods developed for automatic model calibration are capable of minimizing arbitrary cost functions. An example of co-injection of air and steam makes evident the need for coupling optimization routines with an accurate state-of-the-art process simulator. Simplified models are likely to miss significant system behaviors such as increased downward mobilization due to recondensation of contaminants during steam flooding, which can be partly suppressed by the co-injection of air.
Mackey-Glass noisy chaotic time series prediction by a swarm-optimized neural network
NASA Astrophysics Data System (ADS)
López-Caraballo, C. H.; Salfate, I.; Lazzús, J. A.; Rojas, P.; Rivera, M.; Palma-Chilla, L.
2016-05-01
In this study, an artificial neural network (ANN) based on particle swarm optimization (PSO) was developed for time series prediction. The hybrid ANN+PSO algorithm was applied to the noiseless Mackey-Glass chaotic time series for short-term and long-term prediction. The prediction performance is evaluated and compared with similar work in the literature, particularly for the long-term forecast. We also present properties of the dynamical system via the study of the chaotic behaviour obtained from the time series prediction. This standard hybrid ANN+PSO algorithm was then complemented with a Gaussian stochastic procedure (called stochastic hybrid ANN+PSO) in order to obtain a new estimator of the predictions, which also allowed us to compute prediction uncertainties for the noisy Mackey-Glass chaotic time series. We study the impact of noise for three cases, with white noise levels (σN) of 0.01, 0.05, and 0.1.
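The Mackey-Glass benchmark series is generated from a delay differential equation, dx/dt = βx(t−τ)/(1 + x(t−τ)^n) − γx(t). A rough sketch using the standard parameter values and simple forward-Euler integration follows; the paper's exact discretization and sampling may differ (higher-order integrators with smaller steps are common).

```python
def mackey_glass(n_points, tau=17.0, beta=0.2, gamma=0.1, n=10, dt=1.0, x0=1.2):
    """Generate a Mackey-Glass series by forward-Euler integration of
    dx/dt = beta*x(t-tau)/(1 + x(t-tau)**n) - gamma*x(t)."""
    delay = int(tau / dt)
    history = [x0] * (delay + 1)  # constant history on [-tau, 0]
    series = []
    for _ in range(n_points):
        x = history[-1]
        x_tau = history[-delay - 1]               # delayed state x(t - tau)
        history.append(x + dt * (beta * x_tau / (1.0 + x_tau ** n) - gamma * x))
        series.append(history[-1])
    return series
```

With τ = 17 the series is chaotic, which is what makes long-term prediction a meaningful test for the hybrid ANN+PSO model.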
Resection of Diminutive and Small Colorectal Polyps: What Is the Optimal Technique?
Lee, Jun
2016-01-01
Colorectal polyps are classified as neoplastic or non-neoplastic on the basis of malignant potential. All neoplastic polyps should be completely removed because both the incidence of colorectal cancer and the mortality of colorectal cancer patients have been found to be strongly correlated with incomplete polypectomy. The majority of colorectal polyps discovered on diagnostic colonoscopy are diminutive and small polyps; therefore, complete resection of these polyps is very important. However, there is no consensus on a method to remove diminutive and small polyps, and various techniques have been adopted based on physician preference. The aim of this article was to review the diverse techniques used to remove diminutive and small polyps and to suggest which technique will be the most effective. PMID:27450226
NASA Astrophysics Data System (ADS)
Hosseini-Bioki, M. M.; Rashidinejad, M.; Abdollahi, A.
2013-11-01
Load shedding is a crucial issue in power systems, especially in a restructured electricity environment. Market-driven load shedding in restructured power systems, associated with security as well as reliability, is investigated in this paper. A technoeconomic multi-objective function is introduced to reveal an optimal load shedding scheme considering maximum social welfare. The proposed optimization problem includes maximum GENCOs' and loads' profits as well as the maximum loadability limit under normal and contingency conditions. Particle swarm optimization (PSO), as a heuristic optimization technique, is utilized to find an optimal load shedding scheme. In a market-driven structure, generators offer their bidding blocks while the dispatchable loads bid their price-responsive demands. An independent system operator (ISO) derives a market clearing price (MCP) while rescheduling the amount of generating power in both pre-contingency and post-contingency conditions. The proposed methodology is developed on a 3-bus system and then applied to a modified IEEE 30-bus test system. The obtained results show the effectiveness of the proposed methodology in implementing an optimal load shedding scheme that satisfies social welfare while maintaining the voltage stability margin (VSM), as demonstrated through technoeconomic analyses.
Zhang, Yu; Xu, Jing-Liang; Yuan, Zhen-Hong; Qi, Wei; Liu, Yun-Yun; He, Min-Chao
2012-01-01
Two artificial intelligence techniques, namely artificial neural network (ANN) and genetic algorithm (GA) were combined to be used as a tool for optimizing the covalent immobilization of cellulase on a smart polymer, Eudragit L-100. 1-Ethyl-3-(3-dimethyllaminopropyl) carbodiimide (EDC) concentration, N-hydroxysuccinimide (NHS) concentration and coupling time were taken as independent variables, and immobilization efficiency was taken as the response. The data of the central composite design were used to train ANN by back-propagation algorithm, and the result showed that the trained ANN fitted the data accurately (correlation coefficient R(2) = 0.99). Then a maximum immobilization efficiency of 88.76% was searched by genetic algorithm at a EDC concentration of 0.44%, NHS concentration of 0.37% and a coupling time of 2.22 h, where the experimental value was 87.97 ± 6.45%. The application of ANN based optimization by GA is quite successful.
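The GA search over the trained response surface can be sketched with a minimal real-coded genetic algorithm. Here a toy quadratic stands in for the trained ANN's predicted immobilization efficiency, and the operator choices (truncation selection, arithmetic crossover, uniform mutation) and all parameter values are assumptions, not the paper's configuration.

```python
import random

def ga_maximize(f, bounds, pop_size=30, generations=60, mut_rate=0.2, seed=3):
    """Minimal real-coded GA maximizing f over box bounds."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=f, reverse=True)
        parents = scored[:pop_size // 2]          # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]  # arithmetic crossover
            for d in range(dim):
                if rng.random() < mut_rate:              # uniform mutation
                    lo, hi = bounds[d]
                    child[d] = rng.uniform(lo, hi)
            children.append(child)
        pop = parents + children
    return max(pop, key=f)
```

In the paper's setting, `f` would be the back-propagation-trained ANN evaluated at (EDC concentration, NHS concentration, coupling time), and the GA would return the predicted optimum, here reported as 0.44%, 0.37%, and 2.22 h.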
Information System Design Methodology Based on PERT/CPM Networking and Optimization Techniques.
ERIC Educational Resources Information Center
Bose, Anindya
The dissertation attempts to demonstrate that the Program Evaluation and Review Technique (PERT)/Critical Path Method (CPM), or some modified version thereof, can be developed into an information system design methodology. The methodology utilizes PERT/CPM, which isolates the basic functional units of a system and sets them in a dynamic time/cost…
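The CPM computation at the heart of such a methodology reduces to a longest-path calculation over the task precedence network. A minimal sketch follows; the task names and durations are hypothetical.

```python
def critical_path(tasks):
    """Project duration via CPM: longest path through an acyclic task
    network given as {name: (duration, [predecessor names])}."""
    finish = {}  # memoized earliest finish time per task

    def ft(name):
        if name not in finish:
            dur, preds = tasks[name]
            finish[name] = dur + max((ft(p) for p in preds), default=0)
        return finish[name]

    return max(ft(n) for n in tasks)
```

For example, with tasks A(3), B(2, after A), C(4, after A), and D(1, after B and C), the critical path is A→C→D with a duration of 8; PERT would replace the fixed durations with three-point estimates.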
Technology Transfer Automated Retrieval System (TEKTRAN)
This study evaluated the impact of gas concentration and wind sensor locations on the accuracy of the backward Lagrangian stochastic inverse-dispersion technique (bLS) for measuring gas emission rates from a typical lagoon environment. Path-integrated concentrations (PICs) and 3-dimensional (3D) wi...
Modenese, Luca; Ceseracciu, Elena; Reggiani, Monica; Lloyd, David G
2016-01-25
A challenging aspect of subject specific musculoskeletal modeling is the estimation of muscle parameters, especially optimal fiber length and tendon slack length. In this study, the method for scaling musculotendon parameters published by Winby et al. (2008), J. Biomech. 41, 1682-1688, has been reformulated, generalized and applied to two cases of practical interest: 1) the adjustment of muscle parameters in the entire lower limb following linear scaling of a generic model and 2) their estimation "from scratch" in a subject specific model of the hip joint created from medical images. In the first case, the procedure maintained the muscles' operating range between models with mean errors below 2.3% of the reference model normalized fiber length value. In the second case, a subject specific model of the hip joint was created using segmented bone geometries and muscle volumes publicly available for a cadaveric specimen from the Living Human Digital Library (LHDL). Estimated optimal fiber lengths were found to be consistent with those of a previously published dataset for all 27 considered muscle bundles except gracilis. However, computed tendon slack lengths differed from tendon lengths measured in the LHDL cadaver, suggesting that tendon slack length should be determined via optimization in subject-specific applications. Overall, the presented methodology could adjust the parameters of a scaled model and enabled the estimation of muscle parameters in newly created subject specific models. All data used in the analyses are of public domain and a tool implementing the algorithm is available at https://simtk.org/home/opt_muscle_par.
Toward a systematic design theory for silicon solar cells using optimization techniques
NASA Technical Reports Server (NTRS)
Misiakos, K.; Lindholm, F. A.
1986-01-01
This work is a first detailed attempt to systematize the design of silicon solar cells. Design principles follow from three theorems. Although the results hold only under low injection conditions in base and emitter regions, they hold for arbitrary doping profiles and include the effects of drift fields, high/low junctions and heavy doping concentrations of donor or acceptor atoms. Several optimal designs are derived from the theorems, one of which involves a three-dimensional morphology in the emitter region. The theorems are derived from a nonlinear differential equation of the Riccati form, the dependent variable of which is a normalized recombination particle current.
NASA Technical Reports Server (NTRS)
Washburn, R. B.; Mehra, R. K.; Sajan, S.
1979-01-01
The singular perturbation theory (SPT) approximation of optimal feedback control laws is presented and methods for on-line application of these approximations are discussed. It is demonstrated that SPT control laws break down when the current state is near the terminal target state. The use of continuation methods to improve the accuracy of the SPT approximation and to obtain global solutions of two-point boundary value problems is also discussed. As an illustration, consideration is given to the minimum-time control of a supersonic aircraft for a three-dimensional intercept problem.
NASA Technical Reports Server (NTRS)
Sable, Dan M.; Cho, Bo H.; Lee, Fred C.
1990-01-01
A detailed comparison of a boost converter, a voltage-fed, autotransformer converter, and a multimodule boost converter, designed specifically for the space platform battery discharger, is performed. Computer-based nonlinear optimization techniques are used to facilitate an objective comparison. The multimodule boost converter is shown to be the optimum topology at all efficiencies. The margin is greatest at 97 percent efficiency. The multimodule, multiphase boost converter combines the advantages of high efficiency, light weight, and ample margin on the component stresses, thus ensuring high reliability.
NASA Technical Reports Server (NTRS)
Elliott, Kenny B.; Ugoletti, Roberto; Sulla, Jeff
1992-01-01
The evolution and optimization of a real-time digital control system is presented. The control system is part of a testbed used to perform focused technology research on the interactions of spacecraft platform and instrument controllers with the flexible-body dynamics of the platform and platform appendages. The control system consists of Computer Automated Measurement and Control (CAMAC) standard data acquisition equipment interfaced to a workstation computer. The goal of this work is to optimize the control system's performance to support controls research using controllers with up to 50 states and frame rates above 200 Hz. The original system could support a 16-state controller operating at a rate of 150 Hz. By using simple yet effective software improvements, Input/Output (I/O) latencies and contention problems are reduced or eliminated in the control system. The final configuration can support a 16-state controller operating at 475 Hz. Effectively the control system's performance was increased by a factor of 3.
All-automatic swimmer tracking system based on an optimized scaled composite JTC technique
NASA Astrophysics Data System (ADS)
Benarab, D.; Napoléon, T.; Alfalou, A.; Verney, A.; Hellard, P.
2016-04-01
In this paper, an all-automatic optimized JTC-based swimmer tracking system is proposed and evaluated on a real video database drawn from national and international swimming competitions (French National Championship, Limoges 2015; FINA World Championships, Barcelona 2013 and Kazan 2015). First, the swimming pool is calibrated using the DLT algorithm (Direct Linear Transformation). DLT calculates the homography matrix given a sufficient set of correspondence points between pixel and metric coordinates; in this way it takes into account the dimensions of the swimming pool and the type of swim. Once the swimming pool is calibrated, the lane is extracted. A motion detection approach is then applied to coarsely detect the swimmer in this lane. Next, our optimized Scaled Composite JTC is applied, which consists of creating an adapted input plane that contains the predicted region and the head reference image. The latter is generated using a composite filter of fin images chosen from the database, and its dimension is scaled according to the ratio between the head's dimension and the width of the swimming lane. Finally, the proposed approach improves the performance of our previous tracking method by adding a detection module, thereby achieving an all-automatic swimmer tracking system.
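The calibration step relies on the standard DLT homography estimate. As a hedged sketch (generic textbook DLT via SVD, not the authors' implementation), the matrix mapping pixel to metric coordinates can be recovered from four or more point correspondences:

```python
import numpy as np

def dlt_homography(px, mm):
    """Estimate the 3x3 homography H mapping pixel coords to metric coords
    from four or more point correspondences (standard DLT via SVD)."""
    A = []
    for (x, y), (u, v) in zip(px, mm):
        A.append([-x, -y, -1.0, 0.0, 0.0, 0.0, u * x, u * y, u])
        A.append([0.0, 0.0, 0.0, -x, -y, -1.0, v * x, v * y, v])
    # H (flattened) is the right singular vector of A with smallest singular value.
    _, _, vt = np.linalg.svd(np.asarray(A))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_h(H, pt):
    """Apply a homography to a 2-D point (homogeneous normalization)."""
    q = H @ np.array([pt[0], pt[1], 1.0])
    return q[0] / q[2], q[1] / q[2]
```

In a pool-calibration setting, `px` would hold clicked pixel positions of lane-marker intersections and `mm` their known metric positions on the pool plane.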
Liu, Xue-song; Sun, Fen-fang; Jin, Ye; Wu, Yong-jiang; Gu, Zhi-xin; Zhu, Li; Yan, Dong-lan
2015-12-01
A novel method was developed for the rapid determination of multi-indicators in corni fructus by means of near infrared (NIR) spectroscopy. Particle swarm optimization (PSO) based least squares support vector machine was investigated to increase the levels of quality control. The calibration models of moisture, extractum, morroniside and loganin were established using the PSO-LS-SVM algorithm. The performance of PSO-LS-SVM models was compared with partial least squares regression (PLSR) and back propagation artificial neural network (BP-ANN). The calibration and validation results of PSO-LS-SVM were superior to both PLS and BP-ANN. For PSO-LS-SVM models, the correlation coefficients (r) of calibrations were all above 0.942. The optimal prediction results were also achieved by PSO-LS-SVM models with the RMSEP (root mean square error of prediction) and RSEP (relative standard errors of prediction) less than 1.176 and 15.5% respectively. The results suggest that PSO-LS-SVM algorithm has a good model performance and high prediction accuracy. NIR has a potential value for rapid determination of multi-indicators in Corni Fructus.
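The PSO component of such a PSO-LS-SVM scheme can be illustrated with a minimal global-best particle swarm minimizer. This is a generic textbook sketch with illustrative parameter values, not the paper's coupling to LS-SVM hyperparameter tuning:

```python
import random

def pso(f, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal global-best PSO minimizing f over box constraints `bounds`."""
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # personal best positions
    pbest_f = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]     # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Clamp back into the search box after the move.
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f
```

In the PSO-LS-SVM setting, `f` would be a cross-validation error as a function of the LS-SVM regularization and kernel parameters.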
Optimization of thermal performance of Ranque-Hilsch vortex tube: MADM techniques
NASA Astrophysics Data System (ADS)
Devade, K. D.; Pise, A. T.
2016-08-01
Thermal performance of a vortex tube is noticeably influenced by its geometrical and operational parameters. In this study the effect of various geometrical parameters (L/D ratio: 15, 16, 17, 18; exit valve angle: 30°, 45°, 60°, 75°, 90°; cold-end orifice diameter: 5, 6, and 7 mm; tube divergence angle: 0°, 2°, 3°, 4°) and operational parameters (inlet pressure: 2 to 6 bar) on the performance of the vortex tube has been investigated experimentally. Multiple Attribute Decision Making (MADM) techniques are applied to determine the optimum combination of vortex tube parameters. Performance was analysed in terms of the cold-end temperature difference and the COP for cooling. The MADM methods applied are WSM (Weighted Sum Method), WPM (Weighted Product Method), TOPSIS (Technique for Order Preference by Similarity to Ideal Solution), and AHP (Analytical Hierarchy Process). Experimentally, the best-performing combinations are obtained for length-to-diameter ratios of 15, 16, and 17 with exit valve angles of 45°, 75°, and 90° at an orifice diameter of 5 mm and inlet pressures of 5 and 6 bar. The best COP, efficiency, and cold-end temperature difference identified by the MADM techniques are 0.245, 40.6%, and 38.3 K, respectively, for the combination of L/D = 15, 45° valve angle, 5 mm orifice diameter, and 2 bar pressure.
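The TOPSIS step among the MADM methods can be sketched generically: normalize the decision matrix, weight it, and rank alternatives by closeness to the ideal solution. The alternatives and weights below are illustrative numbers, not the experimental data:

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives by TOPSIS closeness coefficient.
    matrix[i][j]: score of alternative i on criterion j;
    benefit[j]: True if larger is better on criterion j."""
    m, n = len(matrix), len(matrix[0])
    # Vector-normalize each criterion column, then apply weights.
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    # Ideal and anti-ideal points per criterion.
    ideal = [(max if benefit[j] else min)(v[i][j] for i in range(m)) for j in range(n)]
    anti = [(min if benefit[j] else max)(v[i][j] for i in range(m)) for j in range(n)]
    d_pos = [math.sqrt(sum((v[i][j] - ideal[j]) ** 2 for j in range(n))) for i in range(m)]
    d_neg = [math.sqrt(sum((v[i][j] - anti[j]) ** 2 for j in range(n))) for i in range(m)]
    # Closeness coefficient in [0, 1]; larger means closer to the ideal.
    return [d_neg[i] / (d_pos[i] + d_neg[i]) for i in range(m)]
```

For a vortex-tube-style study, each row would be one geometric/operational combination scored on criteria such as cooling COP and cold-end temperature difference.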
Optimization of non-oxidative carbon-removal techniques by nitrogen-containing plasmas
NASA Astrophysics Data System (ADS)
Ferreira, J. A.; Tabarés, F. L.; Tafalla, D.
2009-06-01
The continuous control of tritium inventory in ITER calls for the development of new conditioning techniques [G. Federici et al., Nucl. Fus. 41 (2001) 1967]. For carbon plasma-facing components, this implies the removal of the T-rich carbon co-deposits. In the presence of strong oxygen getters, such as Be, the use of oxygen-based techniques will be discouraged. In addition, tritiated water generated by these techniques poses extra problems in terms of safety [G. Saji, Fus. Eng. Des. 69 (2003) 631; G. Bellanger, J.J. Rameau, Fus. Technol. 32 (1997) 196; T. Hayashi, et al., Fus. Eng. Des. 81 (2006) 1365]. In the present work, oxygen-free (nitrogen and ammonia) glow discharge plasmas for carbon film removal were investigated. The gas mixtures were fed into a DC glow discharge running in a chamber coated with a ~200 nm carbon film. Erosion rates were measured in situ by laser interferometry; RGA (Residual Gas Analysis) and CTAMS (Cryotrapping Assisted Mass Spectrometry) [J.A. Ferreira, F.L. Tabarés, J. Vac. Sci. Technol. A25(2) (2007) 246] were used for the characterization of the reaction products. Very high erosion rates, similar to those obtained in helium-oxygen glow discharges [J.A. Ferreira et al., J. Nucl. Mater. 363-365 (2007) 252], were recorded for the ammonia glow discharge.
NASA Astrophysics Data System (ADS)
Erfanifard, Yousef; Stereńczak, Krzysztof; Behnia, Negin
2014-01-01
The need to estimate optimal parameters is a drawback of some classification techniques, since poorly chosen parameters degrade performance on a given dataset and reduce classification accuracy. This study aimed to optimize the combination of effective parameters of support vector machine (SVM), artificial neural network (ANN), and object-based image analysis (OBIA) classification techniques using the Taguchi method. The optimized techniques were applied to delineate crowns of Persian oak coppice trees on UltraCam-D very high spatial resolution aerial imagery in the Zagros semiarid woodlands, Iran. The imagery was classified and the maps were assessed by receiver operating characteristic curves and other performance metrics. The results showed that Taguchi is a robust approach for optimizing the combination of effective parameters in these image classification techniques. The area under the curve (AUC) showed that the optimized OBIA could well discriminate tree crowns on the imagery (AUC=0.897), while SVM and ANN yielded slightly lower AUC performances of 0.819 and 0.850, respectively. The indices of accuracy (0.999) and precision (0.999) and the performance metrics of specificity (0.999) and sensitivity (0.999) in the optimized OBIA were higher than with the other techniques. The optimization of effective parameters of image classification techniques by the Taguchi method thus provided encouraging results for discriminating the crowns of Persian oak coppice trees on UltraCam-D aerial imagery in Zagros semiarid woodlands.
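The Taguchi screening idea can be sketched in miniature: run a small design over parameter levels, convert each run's response to a "larger is better" signal-to-noise ratio, and pick the level with the highest mean S/N per parameter. The two hypothetical parameters, the 2x2 design, and the accuracies below are invented for illustration (real studies use larger orthogonal arrays):

```python
import math

# Toy Taguchi-style screening of two hypothetical classifier parameters
# at two levels each (a full 2^2 design here for simplicity).
runs = [(0, 0), (0, 1), (1, 0), (1, 1)]   # level index of each parameter per run
acc = [0.81, 0.85, 0.88, 0.90]            # illustrative accuracy of each run

# "Larger is better" signal-to-noise ratio for a single replicate y:
# SN = -10 * log10(1 / y^2).
sn = [-10 * math.log10(1.0 / (y * y)) for y in acc]

best_levels = []
for p in range(2):
    # Average S/N over the runs where parameter p sat at each level.
    means = [sum(sn[r] for r in range(4) if runs[r][p] == lv) / 2.0 for lv in (0, 1)]
    best_levels.append(0 if means[0] >= means[1] else 1)
```

`best_levels` then names the recommended level per parameter; with a real orthogonal array the same per-level averaging applies, just over more columns.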
Application of fuel/time minimization techniques to route planning and trajectory optimization
NASA Technical Reports Server (NTRS)
Knox, C. E.
1984-01-01
Rising fuel costs combined with other economic pressures have resulted in industry requirements for more efficient air traffic control and airborne operations. NASA has responded with an on-going research program to investigate the requirements and benefits of new airborne guidance and pilot procedures that are compatible with advanced air traffic control systems and that will result in more fuel-efficient flight. The results of flight testing an airborne computer algorithm designed to provide either open-loop or closed-loop guidance for fuel-efficient descents, while satisfying time constraints imposed by the air traffic control system, are summarized. Some of the potential cost and fuel savings obtainable with sophisticated vertical-path optimization capabilities are described.
Compiler optimization technique for data cache prefetching using a small CAM array
Chi, C.H.
1994-12-31
With advances in compiler optimization and program flow analysis, software-assisted cache prefetching schemes using PREFETCH instructions are now possible. Although data can be prefetched accurately into the cache, the runtime overhead associated with these schemes often limits their practical use. In this paper, we propose a new scheme, called Strike-CAM Data Prefetching (SCP), to prefetch array references with constant strides accurately. Compared to current software-assisted data prefetching schemes, the SCP scheme has much lower runtime overhead without sacrificing prefetching accuracy. Our results showed that the SCP scheme is particularly suitable for compute-intensive scientific applications whose cache misses are mainly due to array references with constant strides, which can be prefetched very accurately by the SCP scheme.
Application of thermoeconomic techniques to the optimization of a rotary regenerator
Kotas, T.J.; Jassim, R.K.; Cheung, C.F.
1991-01-01
The geometry of the matrix of a rotary regenerator is optimized using the unit cost of the exergy of the warm air delivered as the objective function. The running cost is determined using different unit costs for the pressure component of exergy, E^ΔP, and for the thermal component of exergy, E^ΔT. The ratio of the two unit costs is obtained from a model of a hypothetical plant in which the two forms of exergy are generated simultaneously. In this paper the effect of variation of the principal design parameters on the unit cost of the warm air and on the heat exchange effectiveness is examined, and recommendations are made for the selection of the most appropriate parameters for a regenerator of a given capacity.
Singular perturbation techniques for real time aircraft trajectory optimization and control
NASA Technical Reports Server (NTRS)
Calise, A. J.; Moerder, D. D.
1982-01-01
The usefulness of singular perturbation methods for developing real time computer algorithms to control and optimize aircraft flight trajectories is examined. A minimum time intercept problem using F-8 aerodynamic and propulsion data is used as a baseline. This provides a framework within which issues relating to problem formulation, solution methodology and real time implementation are examined. Theoretical questions relating to separability of dynamics are addressed. With respect to implementation, situations leading to numerical singularities are identified, and procedures for dealing with them are outlined. Also, particular attention is given to identifying quantities that can be precomputed and stored, thus greatly reducing the on-board computational load. Numerical results are given to illustrate the minimum time algorithm, and the resulting flight paths. An estimate is given for execution time and storage requirements.
Application of optimization techniques to near terminal area sequencing and flow control.
NASA Technical Reports Server (NTRS)
Straeter, T. A.; Park, S. K.; Hogge, J. E.
1972-01-01
Development of an arrival air-traffic management system for a single runway. Traffic is segregated throughout most of the near terminal area according to performance characteristics. Nominal approach routes for each class of aircraft are determined by an optimization procedure. In this fashion, the nominal approach routes are dependent upon and, hence, determined by the near terminal area operating capabilities of each class of aircraft. The landing order and spacing of aircraft on the common approach path are determined so that a measure of total system deviation from the nominal landing times is minimized and safety standards are met. Delay maneuvers required to satisfy sequencing needs are then carried out in a manner dependent upon the particular class of aircraft being maneuvered. Finally, results are presented to illustrate the effects of the rate of arrivals upon a one-runway system serving three different classes of aircraft employing several different sequencing strategies and measures of total system deviation.
Simunek, J.; Nimmo, J.R.
2005-01-01
A modified version of the Hydrus software package that can directly or inversely simulate water flow in a transient centrifugal field is presented. The inverse solver for parameter estimation of the soil hydraulic parameters is then applied to multirotation transient flow experiments in a centrifuge. Using time-variable water contents measured at a sequence of several rotation speeds, soil hydraulic properties were successfully estimated by numerical inversion of transient experiments. The inverse method was then evaluated by comparing estimated soil hydraulic properties with those determined independently using an equilibrium analysis. The optimized soil hydraulic properties compared well with those determined using the equilibrium analysis and a steady-state experiment. Multirotation experiments in a centrifuge not only offer significant time savings by accelerating the flow process but also provide significantly more information for the parameter estimation procedure than multistep outflow experiments in a gravitational field. Copyright 2005 by the American Geophysical Union.
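The inverse-estimation step can be illustrated in miniature: fit the single rate parameter of a toy exponential drainage model to "observed" water contents by minimizing the sum of squared errors. The forward model and the golden-section search below are illustrative stand-ins, not the Hydrus/Richards-equation machinery:

```python
import math

def forward(k, times, theta0=0.40, theta_r=0.05):
    """Toy forward model: water content draining exponentially toward residual."""
    return [theta_r + (theta0 - theta_r) * math.exp(-k * t) for t in times]

def sse(k, times, obs):
    """Sum of squared errors between model predictions and observations."""
    return sum((m - o) ** 2 for m, o in zip(forward(k, times), obs))

def invert(times, obs, lo=0.0, hi=2.0, iters=60):
    """1-D golden-section search for the rate parameter k minimizing SSE."""
    g = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    c, d = b - g * (b - a), a + g * (b - a)
    for _ in range(iters):
        if sse(c, times, obs) < sse(d, times, obs):
            b, d = d, c
            c = b - g * (b - a)
        else:
            a, c = c, d
            d = a + g * (b - a)
    return (a + b) / 2
```

Real inversions estimate several coupled retention/conductivity parameters and use gradient-based solvers, but the structure (forward model inside a misfit minimizer) is the same.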
NASA Astrophysics Data System (ADS)
de Pascale, P.; Vasile, M.; Casotto, S.
The design of interplanetary trajectories requires the solution of an optimization problem, which has traditionally been solved by resorting to various local optimization techniques. All such approaches, apart from the specific method employed (direct or indirect), require an initial guess, which deeply influences convergence to the optimal solution. Recent developments in low-thrust propulsion have widened the perspectives of exploration of the Solar System, while at the same time increasing the difficulty of the trajectory design process. Continuous-thrust transfers, typically characterized by multiple spiraling arcs, have a large number of design parameters and, thanks to the flexibility offered by such engines, typically exhibit a multi-modal domain with a consequently larger number of optimal solutions. The definition of first guesses is thus even more challenging, particularly for a broad search over the design parameters, and requires an extensive investigation of the domain in order to locate the largest number of optimal candidate solutions and possibly the globally optimal one. In this paper a tool for the preliminary definition of interplanetary transfers with coast-thrust arcs and multiple swing-bys is presented. This goal is achieved by combining a novel methodology for the description of low-thrust arcs with a global optimization algorithm based on a hybridization of an evolutionary step and a deterministic step. Low-thrust arcs are described in a 3D model in order to account for the beneficial effects of low-thrust propulsion on inclination changes, resorting to a new methodology based on an inverse method. The two-point boundary value problem (TPBVP) associated with a thrust arc is solved by imposing a suitably parameterized evolution of the orbital parameters, by which the acceleration required to follow the given trajectory with respect to the constraint set is obtained simply through
NASA Astrophysics Data System (ADS)
Trudinger, Cathy M.; Raupach, Michael R.; Rayner, Peter J.; Kattge, Jens; Liu, Qing; Pak, Bernard; Reichstein, Markus; Renzullo, Luigi; Richardson, Andrew D.; Roxburgh, Stephen H.; Styles, Julie; Wang, Ying Ping; Briggs, Peter; Barrett, Damian; Nikolova, Sonja
2007-06-01
We describe results of a project known as OptIC (Optimisation InterComparison) for comparison of parameter estimation methods in terrestrial biogeochemical models. A highly simplified test model was used to generate pseudo-data to which noise with different characteristics was added. Participants in the OptIC project were asked to estimate the model parameters used to generate this data, and to predict model variables into the future. Ten participants contributed results using one of the following methods: Levenberg-Marquardt, adjoint, Kalman filter, Markov chain Monte Carlo and genetic algorithm. Methods differed in how they locate the minimum (gradient-descent or global search), how observations are processed (all at once or sequentially), the number of iterations used, and assumptions about the statistics (some methods assume Gaussian probability density functions; others do not). We found the different methods equally successful at estimating the parameters in our application. The biggest variation in parameter estimates arose from the choice of cost function, not the choice of optimization method. Relatively poor results were obtained when the model-data mismatch in the cost function included weights that were instantaneously dependent on noisy observations. This was the case even when the magnitude of residuals varied with the magnitude of observations. Missing data caused estimates to be more scattered, and the uncertainty of predictions increased correspondingly. All methods gave biased results when the noise was temporally correlated or non-Gaussian, or when incorrect model forcing was used. Our results highlight the need for care in choosing the error model in any optimization.
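The finding about observation-dependent weights can be reproduced in miniature: when estimating a constant from noisy data, weighting residuals by 1/y_i^2 (i.e. instantaneously by the noisy observation itself) inflates the influence of small observations and biases the estimate low relative to uniform weighting. A hedged toy demonstration, not the OptIC test model:

```python
import random

random.seed(1)
true_c, sigma, n = 10.0, 2.0, 10000
obs = [true_c + random.gauss(0.0, sigma) for _ in range(n)]

# Uniform weights: the minimizer of sum (y_i - c)^2 is the plain sample mean.
c_plain = sum(obs) / n

# Weights 1/y_i^2, instantaneously dependent on the noisy observation:
# minimizing sum ((y_i - c) / y_i)^2 has the closed-form minimizer below,
# which is biased low because small (noise-depressed) y_i get inflated weight.
c_weighted = sum(1.0 / y for y in obs) / sum(1.0 / (y * y) for y in obs)
```

With these settings `c_plain` sits near the true value 10 while `c_weighted` lands noticeably below it, mirroring the paper's caution about the error model.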
Residential fuel cell energy systems performance optimization using "soft computing" techniques
NASA Astrophysics Data System (ADS)
Entchev, Evgueniy
Stationary residential and commercial fuel cell cogeneration systems have received increasing attention from the general public due to their great potential to supply both thermal and electrical loads to dwellings. A number of field demonstration trials with grid-connected and off-grid applications are under way, and valuable and unique data are being collected to describe system performance. While the single electricity mode of operation is relatively easy to introduce, it is characterized by relatively low efficiency (20-35%). The combined heat and power generation mode is more attractive due to higher efficiency (above 60%), better resource and fuel utilization, and the advantage of a compact one-box/single-fuel approach to supplying all the energy needs of the dwelling. While commercial fuel cell cogeneration applications readily adopt the combined mode of operation, owing to the relatively stable base power/heat load throughout the day, residential fuel cell cogeneration systems face a different environment: uneven load, usually with morning and evening peaks, and a triple demand for space heating, water heating, and power occurring at almost the same time. In most cases, the fuel cell system cannot satisfy the triple demand and an additional backup heater/burner is used. The developed "soft computing" control strategy for fuel-cell-integrated systems is able to optimize combined system operation while satisfying a combination of demands. The simulation results showed that by employing a generic fuzzy logic control strategy, the power supply and thermal loads can be managed in an optimal way, satisfying homeowners' power and comfort needs.
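The fuzzy-logic load-sharing idea can be sketched with a toy Mamdani-style rule base deciding how much of the thermal demand a backup burner should cover. The membership breakpoints and rule outputs below are invented for illustration, not taken from the developed controller:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_burner_share(thermal_demand_kw):
    """Toy fuzzy controller: share of thermal load (0..1) sent to the backup
    burner; the fuel cell covers the remainder. Breakpoints are hypothetical."""
    low = tri(thermal_demand_kw, -4.0, 0.0, 4.0)
    med = tri(thermal_demand_kw, 2.0, 5.0, 8.0)
    high = tri(thermal_demand_kw, 6.0, 10.0, 14.0)
    # Rules: low demand -> share 0.0, medium -> 0.3, high -> 0.8.
    # Weighted-average (centroid-style) defuzzification over the rule outputs.
    num = low * 0.0 + med * 0.3 + high * 0.8
    den = low + med + high
    return num / den if den else 0.0
```

Between breakpoints the output blends smoothly, which is the practical appeal of a fuzzy supervisor over hard thresholds for uneven residential loads.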
Chou, Sheng-Kai; Jiau, Ming-Kai; Huang, Shih-Chia
2016-08-01
The growing ubiquity of vehicles has led to increased concerns about environmental issues. These concerns can be mitigated by implementing an effective carpool service. In an intelligent carpool system, an automated service process assists carpool participants in determining routes and matches. This is a discrete optimization problem that involves a system-wide condition as well as participants' expectations. In this paper, we solve the carpool service problem (CSP) to provide satisfactory ride matches. To this end, we developed a particle swarm carpool algorithm based on stochastic set-based particle swarm optimization (PSO). Our method introduces stochastic coding to augment traditional particles, and uses three terminologies to represent a particle: 1) particle position; 2) particle view; and 3) particle velocity. In this way, the set-based PSO (S-PSO) can be realized by local exploration. In the simulation and experiments, two kinds of discrete PSO (S-PSO and binary PSO (BPSO)) and a genetic algorithm (GA) are compared and examined using benchmarks that simulate a real-world metropolis. We observed that the S-PSO consistently outperformed the BPSO and the GA. Moreover, our method yielded the best result in a statistical test and successfully obtained numerical results for meeting the optimization objectives of the CSP.
Cakar, Tarik; Koker, Rasit
2015-01-01
A particle swarm optimization (PSO) algorithm has been used to solve the single machine total weighted tardiness (SMTWT) problem with unequal release dates. Three different solution approaches were used to find the best solutions. To build the sub-hybrid solution system, genetic algorithms (GA) and simulated annealing (SA) were combined: at any stage, GA obtains a solution, which is passed to SA as an initial solution; when SA finds a better solution, it stops and returns that solution to GA. After GA finishes, the resulting solution is given to PSO, which searches for a better one and then sends it back to GA. The three solvers thus work together. The neurohybrid system uses PSO as the main optimizer, with SA and GA serving as local search tools; at each stage, the local optimizers perform exploitation around the best particle. In addition to the local search tools, a neurodominance rule (NDR) has been used to improve the final solution of the hybrid PSO system; NDR checks sequential jobs according to the total weighted tardiness factor. The whole system is named the neurohybrid-PSO solution system. PMID:26221134
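The simulated-annealing component for the total weighted tardiness objective can be sketched generically with a swap-neighborhood SA. The instance data and cooling schedule below are illustrative, and release dates are omitted for brevity (the paper's problem includes them):

```python
import math
import random

def twt(seq, p, d, w):
    """Total weighted tardiness of a job sequence (processing, due, weight)."""
    t, total = 0, 0.0
    for j in seq:
        t += p[j]
        total += w[j] * max(0, t - d[j])
    return total

def anneal(p, d, w, T0=50.0, cooling=0.995, iters=5000, seed=0):
    """Swap-neighborhood simulated annealing minimizing total weighted tardiness."""
    rng = random.Random(seed)
    seq = list(range(len(p)))
    cur_f = twt(seq, p, d, w)
    best, best_f = seq[:], cur_f
    T = T0
    for _ in range(iters):
        i, j = rng.sample(range(len(p)), 2)
        cand = seq[:]
        cand[i], cand[j] = cand[j], cand[i]
        f = twt(cand, p, d, w)
        # Metropolis acceptance: always take improvements, sometimes worse moves.
        if f < cur_f or rng.random() < math.exp((cur_f - f) / T):
            seq, cur_f = cand, f
            if f < best_f:
                best, best_f = cand, f
        T *= cooling
    return best, best_f
```

In the hybrid scheme described above, the sequence SA returns would seed GA (and ultimately PSO) rather than being the final answer.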
Smiley Evans, Tierra; Barry, Peter A.; Gilardi, Kirsten V.; Goldstein, Tracey; Deere, Jesse D.; Fike, Joseph; Yee, JoAnn; Ssebide, Benard J; Karmacharya, Dibesh; Cranfield, Michael R.; Wolking, David; Smith, Brett; Mazet, Jonna A. K.; Johnson, Christine K.
2015-01-01
Free-ranging nonhuman primates are frequent sources of zoonotic pathogens due to their physiologic similarity to humans and, in many tropical regions, their close contact with humans. Many high-risk disease transmission interfaces have not been monitored for zoonotic pathogens due to difficulties inherent to invasive sampling of free-ranging wildlife. Non-invasive surveillance of nonhuman primates for pathogens with high potential for spillover into humans is therefore critical for understanding the disease ecology of existing zoonotic pathogen burdens and identifying communities where zoonotic diseases are likely to emerge in the future. We developed a non-invasive oral sampling technique using ropes distributed to nonhuman primates to target viruses shed in the oral cavity, which, through bite wounds and discarded food, could be transmitted to people. Optimization was performed by testing paired rope and oral swabs from laboratory colony rhesus macaques for rhesus cytomegalovirus (RhCMV) and simian foamy virus (SFV) and implementing the technique with free-ranging terrestrial and arboreal nonhuman primate species in Uganda and Nepal. Both ubiquitous DNA and RNA viruses, RhCMV and SFV, were detected in oral samples collected from ropes distributed to laboratory colony macaques, and SFV was detected in free-ranging macaques and olive baboons. Our study describes a technique that can be used for disease surveillance in free-ranging nonhuman primates and, potentially, other wildlife species when invasive sampling techniques may not be feasible. PMID:26046911
Enumeration of clostridia in goat milk using an optimized membrane filtration technique.
Reindl, Anita; Dzieciol, Monika; Hein, Ingeborg; Wagner, Martin; Zangerl, Peter
2014-10-01
A membrane filtration technique developed for counting butyric acid bacteria in cow milk was further developed for analysis of goat milk. Reduction of the sample volume, prolongation of incubation time after addition of proteolytic enzyme and detergent, and a novel step of ultrasonic treatment during incubation allowed filtration of goat milk even in the case of somatic cell counts (SCC) exceeding 10^6/mL. However, filterability was impaired in milk from goats in late lactation. In total, spore counts were assessed in 329 farm bulk goat milk samples. Membrane filtration technique counts were lower than numbers revealed by the classic most probable number technique. Thus, method-specific thresholds for milk to evaluate the risk of late blowing have to be set. As expected, the spore counts of milk samples from suppliers not feeding silage were significantly lower than the spore counts of milk samples from suppliers using silage feeds. Not only were counts different, the clostridial spore population also varied significantly. By using 16S rRNA gene PCR and gene sequencing, 342 strains from 15 clostridial species were identified. The most common Clostridium species were Clostridium tyrobutyricum (40.4%), Clostridium sporogenes (38.3%), Clostridium bifermentans (7.6%), and Clostridium perfringens (5.3%). The 2 most frequently occurring species C. tyrobutyricum and C. sporogenes accounted for 84.7% of the isolates derived from samples of suppliers feeding silage (n=288). In contrast, in samples from suppliers without silage feeding (n=55), these species were detected in only 45.5% of the isolates.
Beyth, Y.; Navot, D.; Lax, E.
1985-10-01
A simple technique is reported in which oil-soluble contrast media (OSCM) are used with hysterosalpingography to investigate infertility in women due to uterine and tubal pathology. The advantages of OSCM as compared with water-soluble contrast media (WSCM) are described. Complications caused by intravasation of the OSCM into lymph vessels and veins are avoided by clearing the media at the end of the procedure. This also results in the immediate spread of the contrast media in the pelvic cavity with the result that delayed radiographs become superfluous and the radiation dose to the genitals is reduced. 4 references, 2 figures.
NASA Technical Reports Server (NTRS)
Schilling, D. L.
1974-01-01
Digital multiplication of two waveforms using delta modulation (DM) is discussed. It is shown that while conventional multiplication of two N-bit words requires N^2 complexity, multiplication using DM requires complexity that increases linearly with N. Bounds on the signal-to-quantization noise ratio (SNR) resulting from this multiplication are determined and compared with the SNR obtained using standard multiplication techniques. The phase locked loop (PLL) system, consisting of a phase detector, voltage controlled oscillator, and a linear loop filter, is discussed in terms of its design and system advantages. Areas requiring further research are identified.
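The linear-complexity claim rests on DM representing a waveform as one bit per sample. A minimal 1-bit delta-modulation encoder/decoder (a generic sketch with an arbitrary step size, not the paper's multiplier structure) looks like:

```python
def dm_encode(signal, step=0.1):
    """1-bit delta modulation: emit +1/-1 by the sign of the tracking error."""
    approx, bits = 0.0, []
    for s in signal:
        bit = 1 if s >= approx else -1
        bits.append(bit)
        approx += bit * step   # staircase approximation chases the input
    return bits

def dm_decode(bits, step=0.1):
    """Rebuild the staircase approximation by accumulating the bit stream."""
    approx, out = 0.0, []
    for b in bits:
        approx += b * step
        out.append(approx)
    return out
```

So long as the input slope stays below `step` per sample (no slope overload), the reconstruction error is bounded by roughly one step of granular noise.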
Olama, Mohammed M.; Ma, Xiao; Killough, Stephen M.; Kuruganti, Teja; Smith, Stephen F.; Djouadi, Seddik M.
2015-03-12
In recent years, there has been great interest in using hybrid spread-spectrum (HSS) techniques for commercial applications, particularly in the Smart Grid, in addition to their inherent uses in military communications. This is because HSS can accommodate high data rates with high link integrity, even in the presence of significant multipath effects and interfering signals. A highly useful form of this transmission technique for many types of command, control, and sensing applications is the specific code-related combination of standard direct sequence modulation with fast frequency hopping, denoted hybrid DS/FFH, wherein multiple frequency hops occur within a single data-bit time. In this paper, error-probability analyses are performed for a hybrid DS/FFH system over standard Gaussian and fading-type channels, progressively including the effects from wide- and partial-band jamming, multi-user interference, and varying degrees of Rayleigh and Rician fading. In addition, an optimization approach is formulated that minimizes the bit-error performance of a hybrid DS/FFH communication system and solves for the resulting system design parameters. The optimization objective function is non-convex and can be solved by applying the Karush-Kuhn-Tucker conditions. We also present our efforts toward exploring the design, implementation, and evaluation of a hybrid DS/FFH radio transceiver using a single FPGA. Numerical and experimental results are presented under widely varying design parameters to demonstrate the adaptability of the waveform for varied harsh smart grid RF signal environments.
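For context on the fading analyses, the closed-form BPSK bit-error rates over AWGN and flat Rayleigh channels (standard textbook formulas, not the hybrid DS/FFH expressions derived in the paper) can be computed as:

```python
import math

def q_func(x):
    """Gaussian tail probability Q(x) via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def ber_bpsk_awgn(ebno_db):
    """BPSK bit-error rate over AWGN: Q(sqrt(2 Eb/N0))."""
    g = 10 ** (ebno_db / 10)
    return q_func(math.sqrt(2 * g))

def ber_bpsk_rayleigh(ebno_db):
    """Average BPSK bit-error rate over flat Rayleigh fading."""
    g = 10 ** (ebno_db / 10)
    return 0.5 * (1 - math.sqrt(g / (1 + g)))
```

The fading curve decays only inversely with Eb/N0 while the AWGN curve decays exponentially, which is why diversity mechanisms such as multiple frequency hops per bit pay off in harsh RF environments.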
NASA Technical Reports Server (NTRS)
1971-01-01
Computational techniques were developed and assimilated for the design optimization. The resulting computer program was then used to perform initial optimization and sensitivity studies on a typical thermal protection system (TPS) to demonstrate its application to the space shuttle TPS design. The program was developed in Fortran IV for the CDC 6400 but was subsequently converted to the Fortran V language to be used on the Univac 1108. The program allows for improvement and update of the performance prediction techniques. The program logic involves subroutines which handle the following basic functions: (1) a driver which calls for input, output, and communication between program and user and between the subroutines themselves; (2) thermodynamic analysis; (3) thermal stress analysis; (4) acoustic fatigue analysis; and (5) weights/cost analysis. In addition, a system total cost is predicted based on system weight and historical cost data of similar systems. Two basic types of input are provided, both of which are based on trajectory data. These are vehicle attitude (altitude, velocity, and angles of attack and sideslip), for external heat and pressure loads calculation, and heating rates and pressure loads as a function of time.
NASA Technical Reports Server (NTRS)
Murthy, T. Sreekanta; Kvaternik, Raymond G.
1991-01-01
A NASA/industry rotorcraft structural dynamics program known as Design Analysis Methods for VIBrationS (DAMVIBS) was initiated at Langley Research Center in 1984 with the objective of establishing the technology base needed by the industry for developing an advanced finite-element-based vibrations design analysis capability for airframe structures. As a part of the in-house activities contributing to that program, a study was undertaken to investigate the use of formal, nonlinear programming-based, numerical optimization techniques for airframe vibrations design work. Considerable progress has been made in connection with that study since its inception in 1985. This paper presents a unified summary of the experiences and results of that study. The formulation and solution of airframe optimization problems are discussed. Particular attention is given to describing the implementation of a new computational procedure based on MSC/NASTRAN and CONstrained function MINimization (CONMIN) in a computer program system called DYNOPT for the optimization of airframes subject to strength, frequency, dynamic response, and fatigue constraints. The results from the application of the DYNOPT program to the Bell AH-1G helicopter are presented and discussed.
Venkata Srikanth, Meka; Songa, Ambedkar Sunil; Nali, Sreenivasa Rao; Battu, Janaki Ram; Kolapalli, Venkata Ramana Murthy
2014-01-01
The objective of the present investigation was to study the applicability of the thermal sintering technique for the development of gastric floating tablets of propranolol HCl. Formulations were prepared using four independent variables, namely (i) polymer quantity, (ii) sodium bicarbonate concentration, (iii) sintering temperature and (iv) sintering time. Floating lag time and t95 were taken as dependent variables. Tablets were prepared by the direct compression method and were evaluated for physicochemical properties, in vitro buoyancy and dissolution studies. From the drug release studies, it was observed that the drug-retarding property mainly depends upon the sintering temperature and time of exposure. The statistically optimized formulation (PTSso) was characterized by Fourier transform infrared spectroscopy and differential scanning calorimetry studies, and no significant chemical interaction between drug and polymer was observed. The optimized formulation was stable under accelerated conditions for a period of six months. PTSso was evaluated in vivo for buoyancy in humans in both fed and fasted states, and the gastric residence time of the floating tablets was found to be prolonged in the fed state but not in the fasted state. The optimized formulation PTSso and the commercial formulation Ciplar LA 80 were subjected to bioavailability studies in healthy human volunteers by estimating pharmacokinetic parameters such as Cmax, Tmax, area under the curve (AUC), elimination rate constant (Kel), biological half-life (t1/2) and mean residence time (MRT). There was a significant increase in the bioavailability of propranolol HCl from the PTSso formulation, which was evident from increased AUC levels and larger MRT values than Ciplar LA 80.
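The pharmacokinetic parameters named above have standard noncompartmental definitions; for instance, AUC by the linear trapezoidal rule and MRT = AUMC/AUC can be computed as below. The concentration-time data in the test are illustrative, not the study's measurements.

```python
def auc_trapezoid(times, conc):
    """Area under the concentration-time curve (AUC) by the linear
    trapezoidal rule over the sampled interval."""
    return sum(0.5 * (conc[i] + conc[i + 1]) * (times[i + 1] - times[i])
               for i in range(len(times) - 1))

def mean_residence_time(times, conc):
    """MRT = AUMC / AUC, where AUMC is the area under the first-moment
    curve t*C(t), computed with the same trapezoidal rule."""
    aumc = auc_trapezoid(times, [t * c for t, c in zip(times, conc)])
    return aumc / auc_trapezoid(times, conc)
```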
Cardone, J M; Revers, L F; Machado, R M; Bonatto, D; Brendel, M; Henriques, J A P
2006-02-01
Complementation analysis of the pso9-1 yeast mutant strain sensitive to photoactivated mono- and bifunctional psoralens, UV-light 254 nm, and nitrosoguanidine, with pso1 to pso8 mutants, confirmed that it contains a novel pso mutation. Molecular cloning via the reverse genetics complementation approach using a yeast genomic library suggested pso9-1 to be a mutant allele of the DNA damage checkpoint control gene MEC3. Non-complementation of several sensitivity phenotypes in pso9-1/mec3Delta diploids confirmed allelism. The pso9-1 mutant allele contains a -1 frameshift mutation (deletion of one A) at nucleotide position 802 (802delA), resulting in nine different amino acid residues from that point and a premature termination. This mutation affected the binding properties of Pso9-1p, abolishing its interactions with both Rad17p and Ddc1p. Further interaction assays employing mec3 constructions lacking the last 25 and 75 amino acid carboxyl termini were also not able to maintain stable interactions. Moreover, the pso9-1 mutant strain could no longer sense DNA damage since it continued in the cell cycle after 8-MOP + UVA treatment. Taken together, these observations allowed us to propose a model for checkpoint activation generated by photo-induced adducts. PMID:16202664
Jabbari, Keyvan; Azarmahd, Nazli; Babazade, Shadi; Amouheidari, Alireza
2013-01-01
Radiotherapy plays an essential role in the management of breast cancer. Three-dimensional conformal radiation therapy (3D-CRT) is applied based on 3D image information of the patient's anatomy. In 3D-CRT for breast cancer, one of the common techniques is the tangential technique. In this project, various parameters of tangential and supraclavicular fields were optimized. The project was carried out on computed tomography images of 100 patients at Isfahan Milad Hospital. All patients were simulated and all the important organs were contoured by a radiation oncologist. Two techniques in the supraclavicular region were evaluated: (1) a single field (anterior-posterior [AP]) with a dose of 200 cGy per fraction with 6 MV energy, which is a common technique; (2) two parallel opposed fields (AP-posterior-anterior [PA]), where the AP dose was 150 cGy with 6 MV energy and the PA dose 50 cGy with 18 MV. In the second part of the project, the tangential fields were optimized by changing the normalization point among five points: (1) the isocenter (confluence of the gantry rotation axis and collimator axis); (2) the middle of the thickest part of the breast, or middle of the inter-field distance (IFD); (3) the border between the lung and chest wall; (4) the physician's choice; (5) between the IFD and the isocenter. Dose distributions were compared for all patients across the different supraclavicular and tangential field methods. With the parallel opposed fields, the average lung dose was 4% higher than with a single field, and the maximum received heart dose was 21.5% lower than with a single field. The average dose to the planning target volume (PTV) in method 2 was 2% higher than in method 1. In general, the AP-PA method is suggested because of its better coverage of the PTV. In the optimization of the tangential field, all methods have similar coverage of the PTV. Each method has spatial advantages and disadvantages. If it is important for the physician to reduce the dose received by the lung and heart, the fifth method is suggested since in this method average and maximum received dose
Scheib, Stacey A; Tanner, Edward; Green, Isabel C; Fader, Amanda N
2014-01-01
The objectives of this review were to analyze the literature describing the benefits of minimally invasive gynecologic surgery in obese women, to examine the physiologic considerations associated with obesity, and to describe surgical techniques that will enable surgeons to perform laparoscopy and robotic surgery successfully in obese patients. The Medline database was reviewed for all articles published in the English language between 1993 and 2013 containing the search terms "gynecologic laparoscopy" "laparoscopy," "minimally invasive surgery and obesity," "obesity," and "robotic surgery." The incidence of obesity is increasing in the United States, and in particular morbid obesity in women. Obesity is associated with a wide range of comorbid conditions that may affect perioperative outcomes including hypertension, atherosclerosis, angina, obstructive sleep apnea, and diabetes mellitus. In obese patients, laparoscopy or robotic surgery, compared with laparotomy, is associated with a shorter hospital stay, less postoperative pain, and fewer wound complications. Specific intra-abdominal access and trocar positioning techniques, as well as anesthetic maneuvers, improve the likelihood of success of laparoscopy in women with central adiposity. Performing gynecologic laparoscopy in the morbidly obese is no longer rare. Increases in the heaviest weight categories involve changes in clinical practice patterns. With comprehensive and thoughtful preoperative and surgical planning, minimally invasive gynecologic surgery may be performed safely and is of particular benefit in obese patients. PMID:24100146
Optimization of a generalized radial-aortic transfer function using parametric techniques.
Akalanli, Cagla; Tay, David; Cameron, James D
2016-10-01
The central aortic blood pressure (cBP) waveform, which differs from that at peripheral locations, is a clinically important parameter for assessing cardiovascular function; however, the gold standard for measuring cBP involves invasive catheter-based techniques. The difficulties associated with invasive measurements have given rise to the development of a variety of noninvasive methods. An increasingly applied method for the noninvasive derivation of cBP involves the application of transfer function (TF) techniques to a non-invasively measured radial blood pressure (BP) waveform. The purpose of the current study was to investigate the development of a general parametric model for the determination of cBP from tonometrically transduced radial BP waveforms. The study utilized simultaneously measured invasive central aortic and noninvasive radial BP waveform measurements. Data sets were available from 92 subjects, a large cohort for a study of this nature. The output error (OE) model was empirically identified as the most appropriate model structure. A generalized model was developed using a pre-specified derivation cohort and then applied to a validation data set to estimate the recognized features of the cBP waveform. While our results showed that many relevant BP parameters could be derived within acceptable limits, the estimated augmentation index (AI) displayed only a weak correlation compared to the invasively measured value, indicating that any clinical diagnosis or interpretation based on estimated AI should be undertaken with caution. PMID:27591405
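As a rough sketch of the transfer-function idea, a discrete TF H(z) = B(z)/A(z), such as an identified output-error model, is applied to the radial waveform as a difference equation. The coefficients below are placeholders for illustration, not the study's identified generalized model.

```python
def apply_transfer_function(radial, b, a):
    """Filter a sampled radial BP waveform through a discrete transfer
    function H(z) = B(z)/A(z) to estimate the central aortic waveform.
    b: numerator coefficients, a: denominator coefficients (a[0] leading)."""
    central = []
    for n in range(len(radial)):
        y = sum(b[k] * radial[n - k] for k in range(len(b)) if n - k >= 0)
        y -= sum(a[k] * central[n - k] for k in range(1, len(a)) if n - k >= 0)
        central.append(y / a[0])
    return central
```

In practice the coefficient vectors would come from system identification against simultaneously measured invasive aortic data, as described in the abstract.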
Teo, Stephanie M.; Ofori-Okai, Benjamin K.; Werley, Christopher A.; Nelson, Keith A.
2015-05-15
Multidimensional spectroscopy at visible and infrared frequencies has opened a window into the transfer of energy and quantum coherences at ultrafast time scales. For these measurements to be performed in a manageable amount of time, one spectral axis is typically recorded in a single laser shot. An analogous rapid-scanning capability for THz measurements will unlock the multidimensional toolkit in this frequency range. Here, we first review the merits of existing single-shot THz schemes and discuss their potential in multidimensional THz spectroscopy. We then introduce improved experimental designs and noise suppression techniques for the two most promising methods: frequency-to-time encoding with linear spectral interferometry and angle-to-time encoding with dual echelons. Both methods, each using electro-optic detection in the linear regime, were able to reproduce the THz temporal waveform acquired with a traditional scanning delay line. Although spectral interferometry had mediocre performance in terms of signal-to-noise, the dual echelon method was easily implemented and achieved the same level of signal-to-noise as the scanning delay line in only 4.5% of the laser pulses otherwise required (or 22 times faster). This reduction in acquisition time will compress day-long scans to hours and hence provides a practical technique for multidimensional THz measurements.
Bayesian network structure learning based on the chaotic particle swarm optimization algorithm.
Zhang, Q; Li, Z; Zhou, C J; Wei, X P
2013-01-01
The Bayesian network (BN) is a knowledge representation form, which has been proven to be valuable in gene regulatory network reconstruction because of its capability of capturing causal relationships between genes. Learning BN structures from a database is a nondeterministic polynomial time (NP)-hard problem that remains one of the most exciting challenges in machine learning. Several heuristic search techniques have been used to find better network structures. Among these algorithms, the classical K2 algorithm is the most successful. Nonetheless, the performance of the K2 algorithm is greatly affected by the prior ordering of input nodes. The proposed method in this paper is based on chaotic particle swarm optimization (CPSO) and the K2 algorithm. Because the PSO algorithm can become trapped in local minima in later evolutions, we combined the PSO algorithm with chaos theory, which has the properties of ergodicity, randomness, and regularity. Experimental results show that the proposed method can improve the convergence rate of particles and identify networks more efficiently and accurately. PMID:24222226
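A minimal sketch of the chaotic-PSO idea follows, assuming a logistic-map perturbation of the global best; the paper's exact CPSO variant and its coupling to the K2 node-ordering search are not reproduced here, and all coefficients are illustrative.

```python
import random

def logistic_map(x):
    """Chaotic logistic map x -> 4x(1-x) on (0,1); ergodic at r = 4."""
    return 4.0 * x * (1.0 - x)

def chaotic_pso(f, dim, n_particles=20, iters=100, lo=-5.0, hi=5.0):
    """Plain PSO minimization plus a chaotic local search around the
    global best, intended to help escape local minima in later iterations."""
    random.seed(1)
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    chaos = 0.7          # seed of the chaotic sequence (not 0, 0.5, or 0.75)
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            v = f(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
        # chaotic perturbation of the global best
        chaos = logistic_map(chaos)
        cand = [min(hi, max(lo, x + (chaos - 0.5) * 0.1 * (hi - lo))) for x in gbest]
        cv = f(cand)
        if cv < gbest_val:
            gbest, gbest_val = cand, cv
    return gbest, gbest_val
```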
Liu, Langechuan; Antonuk, Larry E. El-Mohri, Youcef; Zhao, Qihua; Jiang, Hao
2014-06-15
Purpose: Active matrix flat-panel imagers (AMFPIs) incorporating thick, segmented scintillators have demonstrated order-of-magnitude improvements in detective quantum efficiency (DQE) at radiotherapy energies compared to systems based on conventional phosphor screens. Such improved DQE values facilitate megavoltage cone-beam CT (MV CBCT) imaging at clinically practical doses. However, the MV CBCT performance of such AMFPIs is highly dependent on the design parameters of the scintillators. In this paper, optimization of the design of segmented scintillators was explored using a hybrid modeling technique which encompasses both radiation and optical effects. Methods: Imaging performance in terms of the contrast-to-noise ratio (CNR) and spatial resolution of various hypothetical scintillator designs was examined through a hybrid technique involving Monte Carlo simulation of radiation transport in combination with simulation of optical gain distributions and optical point spread functions. The optical simulations employed optical parameters extracted from a best fit to measurement results reported in a previous investigation of a 1.13 cm thick, 1016μm pitch prototype BGO segmented scintillator. All hypothetical designs employed BGO material with a thickness and element-to-element pitch ranging from 0.5 to 6 cm and from 0.508 to 1.524 mm, respectively. In the CNR study, for each design, full tomographic scans of a contrast phantom incorporating various soft-tissue inserts were simulated at a total dose of 4 cGy. Results: Theoretical values for contrast, noise, and CNR were found to be in close agreement with empirical results from the BGO prototype, strongly supporting the validity of the modeling technique. CNR and spatial resolution for the various scintillator designs demonstrate complex behavior as scintillator thickness and element pitch are varied—with a clear trade-off between these two imaging metrics up to a thickness of ∼3 cm. Based on these results, an
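The contrast-to-noise ratio evaluated in the phantom study is the standard image-quality metric: the difference of ROI means divided by the background noise. A minimal computation from pixel lists (values illustrative):

```python
def roi_stats(pixels):
    """Mean and (population) standard deviation of a region of interest."""
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    return mean, var ** 0.5

def contrast_to_noise_ratio(insert_pixels, background_pixels):
    """CNR: absolute difference of insert and background ROI means,
    divided by the background noise standard deviation."""
    m_i, _ = roi_stats(insert_pixels)
    m_b, s_b = roi_stats(background_pixels)
    return abs(m_i - m_b) / s_b
```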
Daling, P.S.; Indrebo, G.
1996-12-31
Several oil spill incidents during recent years have demonstrated that the physico-chemical properties of spilled oil and the effectiveness of available combat methods are, in addition to the prevailing environmental and weather conditions, key factors that determine the consequences of an oil spill. Pre-spill analyses of the feasibility and effectiveness of different response strategies, such as mechanical recovery and dispersants, for actual oils under various environmental conditions should therefore be an essential part of any oil spill contingency planning to optimize the overall "Net Environmental Benefit" of a combat operation. During the four-year research program ESCOST ("ESSO-SINTEF Coastal Oil Spill Treatment Program"), significant improvements have been made in oil spill combat methods and in tools for use in contingency planning and decision-making during oil spill operations. This paper will present an overview of the main findings obtained with respect to oil weathering and oil spill dispersant treatment.
Optimizing human reliability: Mock-up and simulation techniques in waste management
Caccamise, D.J.; Somers, C.S.; Sebok, A.L.
1992-10-01
With the new mission at Rocky Flats to decontaminate and decommission a 40-year-old nuclear weapons production facility come many interesting new challenges for human factors engineering. Because the goal at Rocky Flats is to transform the environment, the workforce that undertakes this mission will find themselves in a state of constant change, as they respond to ever-changing task demands in a constantly evolving work place. In order to achieve the flexibility necessary under these circumstances and still maintain control of human reliability issues that exist in a hazardous, radioactive work environment, Rocky Flats developed an Engineering Mock-up and Simulation Lab to plan, design, test, and train personnel for new tasks involving hazardous materials. This presentation will describe how this laboratory is used to develop equipment, tools, work processes, and procedures to optimize human reliability in the operational environment. We will discuss a particular instance in which a glovebag, large enough to house two individuals, was developed at this laboratory to protect workers as they cleaned fissile material from building ventilation duct systems.
Yao, T-T; Wang, L-K; Cheng, J-L; Hu, Y-Z; Zhao, J-H; Zhu, G-N
2015-03-01
A new approach employing a combination of pyrethroid and repellent is proposed to improve the protective efficacy of conventional pyrethroid-treated fabrics against mosquito vectors. In this context, the insecticidal and repellent efficacies of commonly used pyrethroids and repellents were evaluated by cone tests and arm-in-cage tests against Stegomyia albopicta (=Aedes albopictus) (Diptera: Culicidae). At concentrations of LD50 (estimated for pyrethroid) or ED50 (estimated for repellent), respectively, the knock-down effects of the pyrethroids or repellents were further compared. The results obtained indicated that deltamethrin and DEET were relatively more effective and thus these were selected for further study. Synergistic interaction was observed between deltamethrin and DEET at the ratios of 5 : 1, 2 : 1, 1 : 1 and 1 : 2 (but not 1 : 5). An optimal mixing ratio of 7 : 5 was then microencapsulated and adhered to fabrics using a fixing agent. Fabrics impregnated by microencapsulated mixtures gained extended washing durability compared with those treated with a conventional dipping method. Results indicated that this approach represents a promising method for the future impregnation of bednet, curtain and combat uniform materials. PMID:25429906
Xing, Changhu; Jensen, Colby; Folsom, Charles; Ban, Heng; Marshall, Douglas W.
2014-01-01
In the guarded cut-bar technique, a guard surrounding the measured sample and reference (meter) bars is temperature controlled to carefully regulate heat losses from the sample and reference bars. Guarding is typically carried out by matching the temperature profiles between the guard and the test stack of sample and meter bars. Problems arise in matching the profiles, especially when the thermal conductivities of the meter bars and of the sample differ, as is usually the case. In a previous numerical study, the applied guarding condition (guard temperature profile) was found to be an important factor in measurement accuracy. Different from the linear-matched or isothermal schemes recommended in the literature, the optimal guarding condition is dependent on the system geometry and the thermal conductivity ratio of sample to meter bar. To validate the numerical results, an experimental study was performed to investigate the resulting error under different guarding conditions using stainless steel 304 as both the sample and meter bars. The optimal guarding condition was further verified on a certified reference material, Pyroceram 9606, and on 99.95% pure iron, whose thermal conductivities are much smaller and much larger, respectively, than that of the stainless steel meter bars. Additionally, measurements were performed using three different inert gases to show the effect of the insulation effective thermal conductivity on measurement error, revealing that the low-conductivity argon gas gives the lowest error sensitivity when deviating from the optimal condition. The result of this study provides a general guideline for the specific measurement method and for methods requiring optimal guarding or insulation.
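The underlying cut-bar data reduction is a direct application of Fourier's law; the sketch below ignores the guard and the heat losses that are precisely what the study quantifies, so it is the idealized one-dimensional reduction only.

```python
def cut_bar_conductivity(k_meter, dT_top, dT_bottom, L_meter, dT_sample, L_sample):
    """Idealized axial cut-bar reduction: estimate the heat flux through the
    stack from the two meter bars via Fourier's law, q = k * dT / L,
    average the top and bottom fluxes (assuming small lateral losses),
    then solve Fourier's law for the sample: k = q * L / dT."""
    q_top = k_meter * dT_top / L_meter
    q_bottom = k_meter * dT_bottom / L_meter
    q = 0.5 * (q_top + q_bottom)
    return q * L_sample / dT_sample
```

When sample and meter bars are the same material and geometry, the reduction must return the meter-bar conductivity, which makes a convenient sanity check.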
Sui, Jing; Adali, Tülay; Pearlson, Godfrey D.; Calhoun, Vince D.
2013-01-01
Extraction of relevant features from multitask functional MRI (fMRI) data in order to identify potential biomarkers for disease, is an attractive goal. In this paper, we introduce a novel feature-based framework, which is sensitive and accurate in detecting group differences (e.g. controls vs. patients) by proposing three key ideas. First, we integrate two goal-directed techniques: coefficient-constrained independent component analysis (CC-ICA) and principal component analysis with reference (PCA-R), both of which improve sensitivity to group differences. Secondly, an automated artifact-removal method is developed for selecting components of interest derived from CC-ICA, with an average accuracy of 91%. Finally, we propose a strategy for optimal feature/component selection, aiming to identify optimal group-discriminative brain networks as well as the tasks within which these circuits are engaged. The group-discriminating performance is evaluated on 15 fMRI feature combinations (5 single features and 10 joint features) collected from 28 healthy control subjects and 25 schizophrenia patients. Results show that a feature from a sensorimotor task and a joint feature from a Sternberg working memory (probe) task and an auditory oddball (target) task are the top two feature combinations distinguishing groups. We identified three optimal features that best separate patients from controls, including brain networks consisting of temporal lobe, default mode and occipital lobe circuits, which when grouped together provide improved capability in classifying group membership. The proposed framework provides a general approach for selecting optimal brain networks which may serve as potential biomarkers of several brain diseases and thus has wide applicability in the neuroimaging research community. PMID:19457398
The use of Monte Carlo technique to optimize the dose distribution in total skin irradiation
NASA Astrophysics Data System (ADS)
Poli, M. E. R.; Pereira, S. A.; Yoriyaz, H.
2001-06-01
Cutaneous T-cell lymphoma (mycosis fungoides) is an indolent disease with a low percentage of cure. Total skin irradiation using an electron beam has become an efficient treatment of mycosis fungoides with curative intention, with success in almost 40% of the patients. In this work, we propose the use of a Monte Carlo technique to simulate the dose distribution in the patients during total skin irradiation treatments. Use was made of MCNP-4B, a well-known and established code for simulating the transport of electrons, photons and neutrons through matter, used especially in the area of reactor physics and finding increasing utility in medical physics. The goal of our work is to simulate different angles between each beam, with a fixed treatment distance, in order to obtain a uniform dose distribution in the patient.
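A toy illustration of the Monte Carlo idea, vastly simpler than MCNP-4B: sample interaction depths from an exponential attenuation law and histogram where energy is deposited. The attenuation coefficient and geometry are arbitrary, and real electron transport involves scattering physics not modeled here.

```python
import random

def mc_depth_dose(n, mu, depth_bins, max_depth, seed=7):
    """Toy Monte Carlo depth-dose estimate: sample n interaction depths
    from an exponential distribution with attenuation coefficient mu
    (per unit depth) and return the fraction of histories depositing
    in each depth bin over [0, max_depth)."""
    rng = random.Random(seed)
    hist = [0] * depth_bins
    for _ in range(n):
        d = rng.expovariate(mu)
        if d < max_depth:
            hist[int(d / max_depth * depth_bins)] += 1
    return [h / n for h in hist]
```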
Optimization of Stereotactic Radiotherapy Treatment Delivery Technique for Base-Of-Skull Meningiomas
Clark, Brenda G. Candish, Charles; Vollans, Emily; Gete, Ermias; Lee, Richard; Martin, Monty; Ma, Roy; McKenzie, Michael
2008-10-01
This study compares static conformal field (CF), intensity modulated radiotherapy (IMRT), and dynamic arcs (DA) for the stereotactic radiotherapy of base-of-skull meningiomas. Twenty-one cases of base-of-skull meningioma (median planning target volume [PTV] = 21.3 cm³) previously treated with stereotactic radiotherapy were replanned with each technique. The plans were compared for Radiation Therapy Oncology Group conformity index (CI) and homogeneity index (HI), and doses to normal structures at 6 dose values from 50.4 Gy to 5.6 Gy. The mean CI was 1.75 (CF), 1.75 (DA), and 1.66 (IMRT) (p < 0.05 when comparing IMRT to either CF or DA plans). The CI (IMRT) was inversely proportional to the size of the PTV (Spearman's rho = -0.53, p = 0.01) and at PTV sizes above 25 cm³, the CI (IMRT) was always superior to CI (DA) and CI (CF). At PTV sizes below 25 cm³, there was no significant difference in CI between each technique. There was no significant difference in HI between plans. The total volume of normal tissue receiving 50.4, 44.8, and 5.6 Gy was significantly lower when comparing IMRT to CF and DA plans (p < 0.05). There was significantly improved dose sparing for the brain stem and ipsilateral temporal lobe with IMRT but no significant difference for the optic chiasm or pituitary gland. These results demonstrate that stereotactic IMRT should be considered to treat base-of-skull meningiomas with a PTV larger than 25 cm³, due to improved conformity and normal tissue sparing, in particular for the brain stem and ipsilateral temporal lobe.
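The RTOG indices used for comparison have simple standard definitions: CI is the prescription-isodose volume divided by the target volume (ideal value 1.0), and HI is the maximum target dose divided by the prescription dose. The volumes in the test are illustrative, chosen near the reported medians, not taken from the study's plans.

```python
def rtog_conformity_index(prescription_isodose_volume, target_volume):
    """RTOG conformity index: volume enclosed by the prescription isodose
    surface divided by the planning target volume (1.0 is ideal;
    values above 1 indicate normal tissue inside the prescription dose)."""
    return prescription_isodose_volume / target_volume

def rtog_homogeneity_index(max_dose, prescription_dose):
    """RTOG homogeneity index: maximum dose within the target divided by
    the prescription dose."""
    return max_dose / prescription_dose
```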
Deters, Katherine A.; Brown, Richard S.; Boyd, James W.; Eppard, M. B.; Seaburg, Adam
2012-01-02
The size reduction of acoustic transmitters has led to a reduction in the length of incision needed to implant a transmitter. Smaller suture knot profiles and fewer sutures may be adequate for closing an incision used to surgically implant an acoustic microtransmitter. As a result, faster surgery times and reduced tissue trauma could lead to increased survival and decreased infection for implanted fish. The objective of this study was to assess the effects of five suturing techniques on mortality, tag and suture retention, incision openness, ulceration, and redness in juvenile Chinook salmon Oncorhynchus tshawytscha implanted with acoustic microtransmitters. Suturing was performed by three surgeons, and study fish were held at two water temperatures (12°C and 17°C). Mortality was low and tag retention was high for all treatments on all examination days (7, 14, 21, and 28 days post-surgery). Because there was surgeon variation in suture retention among treatments, further analyses included only the one surgeon who received feedback training in all suturing techniques. Incision openness and tissue redness did not differ among treatments. The only difference observed among treatments was in tissue ulceration. Incisions closed with a horizontal mattress pattern had more ulceration than other treatments among fish held for 28 days at 17°C. Results from this study suggest that one simple interrupted 1 × 1 × 1 × 1 suture is adequate for closing incisions on fish under most circumstances. However, in dynamic environments, two simple interrupted 1 × 1 × 1 × 1 sutures should provide adequate incision closure. Reducing bias in survival and behavior tagging studies is important when making comparisons to the migrating salmon population. Therefore, by minimizing the effects of tagging on juvenile salmon (reduced tissue trauma and reduced surgery time), researchers can more accurately estimate survival and behavior.
NASA Astrophysics Data System (ADS)
Du, Chao; Ming, Pingwen; Hou, Ming; Fu, Jie; Fu, Yunfeng; Luo, Xiaokuan; Shen, Qiang; Shao, Zhigang; Yi, Baolian
Vacuum resin impregnation has been used to prepare polymer/compressed expanded graphite (CEG) composite bipolar plates for proton exchange membrane fuel cells (PEMFCs). In this research, three different preparation techniques for the epoxy/CEG composite bipolar plate (the Compression-Impregnation method, the Impregnation-Compression method and the Compression-Impregnation-Compression method) are optimized with respect to the physical properties of the composite bipolar plates. The optimum conditions and the advantages/disadvantages of the different techniques are discussed respectively. Although having different characteristics, bipolar plates obtained by these three techniques can all meet the demands of PEMFC bipolar plates as long as the optimum conditions are selected. The Compression-Impregnation-Compression method is shown to be the optimum method because of the outstanding properties of its bipolar plates. In addition, the cell assembled with these optimum composite bipolar plates shows excellent stability after 200 h of durability testing. Therefore the composite prepared by the vacuum resin impregnation method is a promising candidate for bipolar plate materials in PEMFCs.
Douroumis, Dionysios; Scheler, Stefan; Fahr, Alfred
2008-02-01
An innovative methodology has been used for the formulation development of Cyclosporine A (CyA) nanoparticles. In the present study the static mixer technique, which is a novel method for producing nanoparticles, was employed. The formulation optimum was calculated by the modified Shepard's method (MSM), an advanced data analysis technique not previously adopted in pharmaceutical applications. Controlled precipitation was achieved by injecting the organic CyA solution rapidly into an aqueous protective solution by means of a static mixer. Furthermore, the computer-based MSM was implemented for data analysis, visualization, and application development. For the optimization studies, the gelatin/lipoid S75 amounts and the organic/aqueous phase ratio were selected as independent variables, while the obtained particle size was the dependent variable. The optimum predicted formulation was characterized by cryo-TEM microscopy, particle size measurements, stability, and in vitro release. The produced nanoparticles contain the drug in an amorphous state and decreased amounts of stabilizing agents. The dissolution rate of the lyophilized powder was significantly enhanced in the first 2 h. MSM proved capable of interpreting the data in detail and of predicting the optimum formulation with high accuracy. The static mixer technique proved capable of producing CyA nanoparticulate formulations. PMID:17853428
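Shepard's method, of which the study's MSM is a modified, locally weighted variant, interpolates scattered data by weighting known samples with inverse distance. The sketch below implements the classic form, not MSM itself, with illustrative data points.

```python
def shepard_interpolate(points, values, query, power=2.0):
    """Classic Shepard inverse-distance-weighted interpolation:
    f(q) = sum(w_i * f_i) / sum(w_i), with w_i = 1 / d(q, p_i)^power.
    If the query coincides with a data point, return that value exactly."""
    num = den = 0.0
    for p, v in zip(points, values):
        d2 = sum((a - b) ** 2 for a, b in zip(p, query))
        if d2 == 0.0:
            return v
        w = d2 ** (-power / 2.0)
        num += w * v
        den += w
    return num / den
```

In an optimization context such as the one above, the interpolated surface over the independent variables (e.g. stabilizer amounts and phase ratio) is then searched for the predicted optimum response.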
Fixed structure compensator design using a constrained hybrid evolutionary optimization approach.
Ghosh, Subhojit; Samanta, Susovon
2014-07-01
This paper presents an efficient technique for designing a fixed order compensator for compensating current mode control architecture of DC-DC converters. The compensator design is formulated as an optimization problem, which seeks to attain a set of frequency domain specifications. The highly nonlinear nature of the optimization problem demands the use of an initial parameterization independent global search technique. In this regard, the optimization problem is solved using a hybrid evolutionary optimization approach, because of its simple structure, faster execution time and greater probability in achieving the global solution. The proposed algorithm involves the combination of a population search based optimization approach i.e. Particle Swarm Optimization (PSO) and a local search based method. The op-amp dynamics have been incorporated during the design process. Considering the limitations of fixed structure compensator in achieving loop bandwidth higher than a certain threshold, the proposed approach also determines the op-amp bandwidth, which would be able to achieve the same. The effectiveness of the proposed approach in meeting the desired frequency domain specifications is experimentally tested on a peak current mode control DC-DC buck converter. PMID:24768082
Optimized energy landscape exploration using the ab initio based activation-relaxation technique
NASA Astrophysics Data System (ADS)
Machado-Charry, Eduardo; Béland, Laurent Karim; Caliste, Damien; Genovese, Luigi; Deutsch, Thierry; Mousseau, Normand; Pochet, Pascal
2011-07-01
Unbiased open-ended methods for finding transition states are powerful tools for understanding diffusion and relaxation mechanisms associated with defect diffusion, growth processes, and catalysis. They have been little used, however, in conjunction with ab initio packages, as these algorithms demand a large computational effort to generate even a single event. Here, we revisit the activation-relaxation technique (ART nouveau) and introduce a two-step convergence to the saddle point, combining the previously used Lanczos algorithm with the direct inversion in the iterative subspace (DIIS) scheme. This combination makes it possible to generate events (from an initial minimum through a saddle point to a final minimum) in a systematic fashion with a net 300-700 force evaluations per successful event. ART nouveau is coupled with BigDFT, a Kohn-Sham density functional theory (DFT) electronic structure code using a wavelet basis set with excellent efficiency in parallel computation, and applied to study the potential energy surface of C20 clusters, vacancy diffusion in bulk silicon, and the reconstruction of the 4H-SiC surface.
de la Torre, M L; Grande, J A; Aroba, J; Andujar, J M
2005-11-01
A high level of price support has favoured intensive agriculture and an increasing use of fertilisers and pesticides. This has resulted in the pollution of water and soils and damage to certain ecosystems. The target relationship that must be established between agriculture and the environment can be called "sustainable agriculture". In this work we aim to relate strawberry total yield to nitrate concentration in water at different soil depths. To achieve this objective, we used the Predictive Fuzzy Rules Generator (PreFuRGe) tool, based on fuzzy logic and data mining, by means of which the dose that balances yield against minimization of environmental damage can be determined. This determination is quite simple and is done directly from the obtained charts. The technique can be applied to other types of crops, making it possible to determine precisely the depth at which the appropriate dose of nitrate fertilizer must be applied, providing maximum yield with minimum loss of nitrates leaching through the saturated zone and polluting aquifers.
NASA Technical Reports Server (NTRS)
Mukhopadhyay, Vivek
1999-01-01
The benchmark active controls technology and wind tunnel test program at NASA Langley Research Center was started with the objective of investigating the nonlinear, unsteady aerodynamics and active flutter suppression of wings in transonic flow. This paper presents the flutter suppression control law design process, numerical nonlinear simulation, and wind tunnel test results for the NACA 0012 benchmark active control wing model. The flutter suppression control law design processes using (1) classical, (2) linear quadratic Gaussian (LQG), and (3) minimax techniques are described. A unified general formulation and solution for the LQG and minimax approaches, based on steady-state differential game theory, is presented. Design considerations for improving control law robustness and digital implementation are outlined. It was shown that simple control laws, when properly designed based on physical principles, can suppress flutter with limited control power even in the presence of transonic shocks and flow separation. In wind tunnel tests in air and in a heavy gas medium, the closed-loop flutter dynamic pressure was increased to the tunnel upper limit of 200 psf. The control law robustness and performance predictions were verified under highly nonlinear flow conditions, gain and phase perturbations, and spoiler deployment. A non-design plunge instability condition was also successfully suppressed.
Hybrid optimization techniques for the workshift and rest assignment of nursing personnel.
Valouxis, C; Housos, E
2000-10-01
In this paper, a detailed model and an efficient solution methodology for the monthly workshift and rest assignment of hospital nursing personnel are presented. A model that satisfies the rules of a typical hospital environment, based both on published research data and on local hospital requirements, is designed. A hybrid methodology that utilizes the strengths of operations research and artificial intelligence was used for the solution of the problem. In particular, an approximate integer linear programming (ILP) model is first solved and its solution is further improved using local search techniques. Finally, a tabu search strategy that uses as its neighborhood the solution space defined by the local heuristics is presented. The use of heuristics is required because one of the main user requirements, the preference for specific workstretch patterns, is not, for efficiency reasons, explicitly modeled in the ILP. In addition, for comparison and evaluation purposes the CLP-based ILOG solver was also used to solve the same problem. The inferior computational results obtained with the ILOG solver verify the speed and efficiency of the hybrid solution approach suggested in this paper. Extensive computational results are presented together with a detailed discussion of the quality, computational efficiency, and operational acceptability of the solutions. PMID:10936751
Space Shuttle propulsion parameter estimation using optimal estimation techniques, volume 1
NASA Technical Reports Server (NTRS)
1983-01-01
The mathematical developments and their computer program implementation for the Space Shuttle propulsion parameter estimation project are summarized. The estimation approach chosen is extended Kalman filtering with a modified Bryson-Frazier smoother. Its use here is motivated by the objective of obtaining better estimates than those available from filtering alone and of eliminating the lag associated with filtering. The estimation technique uses as the dynamical process the six-degree-of-freedom equations of motion, resulting in twelve state-vector elements. In addition to these, mass and solid propellant burn depth serve as "system" state elements. The "parameter" state elements can include deviations of aerodynamic coefficients, inertia, center of gravity, atmospheric wind, etc., from reference values. Propulsion parameter state elements have been included not merely as options but as the main parameter states to be estimated. The mathematical developments were completed for all these parameters. Since the system dynamics and measurement processes are nonlinear functions of the states, the mathematical developments are taken up almost entirely by the linearization of these equations as required by the estimation algorithms.
GPU-Based Optimal Control Techniques for Resistive Wall Mode Control on DIII-D
NASA Astrophysics Data System (ADS)
Clement, M.; Navratil, G. A.; Hanson, J. M.; Strait, E. J.
2014-10-01
The DIII-D tokamak can excite strong, locked or nearly locked kink modes whose rotation frequencies do not evolve quickly and are slow compared to their growth rates. To control such modes, DIII-D plans to implement a Graphical Processing Unit (GPU) based feedback control system in a low-latency architecture based on a system developed on the HBT-EP tokamak. Up to 128 local magnetic sensors will be used to extrapolate the state of the rotating kink mode, which will be used by the feedback algorithm to calculate the required currents for the internal and/or external control coils. Offline techniques for resolving the mode structure of the resistive wall mode (RWM) will be presented and compared, along with the proposed GPU implementation scheme and potential real-time estimation algorithms for RWM feedback. Work supported by the US Department of Energy under DE-FG02-07ER54917, DE-FG02-04ER54761, and DE-FC02-04ER54698.
López-Caraballo, C H; Lazzús, J A; Salfate, I; Rojas, P; Rivera, M; Palma-Chilla, L
2015-01-01
An artificial neural network (ANN) based on particle swarm optimization (PSO) was developed for time series prediction. The hybrid ANN+PSO algorithm was applied to the Mackey-Glass chaotic time series for short-term prediction x(t + 6). The prediction performance was evaluated and compared with other studies available in the literature. We also examined properties of the dynamical system via the chaotic behaviour of the predicted time series. Next, the hybrid ANN+PSO algorithm was complemented with a Gaussian stochastic procedure (called stochastic hybrid ANN+PSO) to obtain a new estimator of the predictions, which also allowed us to compute the uncertainties of the predictions for noisy Mackey-Glass chaotic time series. We thus studied the impact of noise for several cases with white noise levels (σN) from 0.01 to 0.1. PMID:26351449
Design optimization method for Francis turbine
NASA Astrophysics Data System (ADS)
Kawajiri, H.; Enomoto, Y.; Kurosawa, S.
2014-03-01
This paper presents a design optimization system coupled with CFD. The optimization algorithm of the system employs particle swarm optimization (PSO). Blade shape design is carried out using a NURBS curve defined by a series of control points. The system was applied to designing the stationary vanes and the runner of a higher-specific-speed Francis turbine. As the first step, single-objective optimization was performed on the stay vane profile; the second step was a multi-objective optimization of the runner over a wide operating range. As a result, it was confirmed that the design system is useful for the development of hydro turbines.
A Novel Flexible Inertia Weight Particle Swarm Optimization Algorithm.
Amoshahy, Mohammad Javad; Shamsi, Mousa; Sedaaghi, Mohammad Hossein
2016-01-01
Particle swarm optimization (PSO) is an evolutionary computing method based on the intelligent collective behavior of some animals. It is easy to implement and there are few parameters to adjust. The performance of the PSO algorithm depends greatly on appropriate parameter selection strategies for fine-tuning its parameters. Inertia weight (IW) is one of PSO's parameters, used to bring about a balance between the exploration and exploitation characteristics of PSO. This paper proposes a new nonlinear strategy for selecting the inertia weight, named the Flexible Exponential Inertia Weight (FEIW) strategy because, depending on the problem, an increasing or decreasing inertia weight schedule can be constructed through suitable parameter selection. The efficacy and efficiency of the PSO algorithm with the FEIW strategy (FEPSO) is validated on a suite of benchmark problems with different dimensions. FEIW is also compared with the best time-varying, adaptive, constant, and random inertia weights. Experimental results and statistical analysis prove that FEIW improves search performance in terms of both solution quality and convergence rate. PMID:27560945
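The abstract does not reproduce the FEIW formula itself, but the idea of a flexible exponential inertia-weight schedule can be sketched as follows; the functional form, parameter names, and default values here are illustrative assumptions, not the paper's exact strategy.

```python
import math

def exponential_inertia_weight(t, t_max, w_start=0.9, w_end=0.4, alpha=4.0):
    """Exponentially decaying inertia weight for iteration t out of t_max.

    Illustrative schedule only: w_start, w_end and alpha are assumed values,
    and the paper's FEIW formula may differ. A negative alpha yields an
    increasing schedule instead, as the FEIW strategy allows.
    """
    return w_end + (w_start - w_end) * math.exp(-alpha * t / t_max)
```

Early in the run a large inertia weight favors exploration; as the weight decays toward w_end, the swarm shifts toward exploitation.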
NASA Astrophysics Data System (ADS)
Ghaffari Razin, Mir Reza; Voosoghi, Behzad
2016-11-01
Wavelet neural networks (WNNs) are a new class of neural networks (NNs) developed by combining multi-layer artificial neural networks with wavelet analysis (WA). In this paper, WNNs are used for modeling and prediction of the total electron content (TEC) of the ionosphere with high spatial and temporal resolution. Generally, the back-propagation (BP) algorithm is used to train the neural network. While this algorithm proves to be very effective and robust in training many types of network structures, it suffers from certain disadvantages such as easy entrapment in a local minimum and slow convergence. To improve the performance of the WNN in the training step, the adjustment of network weights using particle swarm optimization (PSO) was proposed. The results obtained in this paper were compared with a standard NN (SNN) trained by BP (SNN-BP), an SNN trained by PSO (SNN-PSO) and a WNN trained by BP (WNN-BP). For the numerical experiments, observations collected at 36 GPS stations on 5 days of 2012 from the Iranian permanent GPS network (IPGN) are used. The average minimum relative errors at 5 test stations for WNN-PSO, WNN-BP, SNN-BP and SNN-PSO compared with GPS TEC are 10.59%, 12.85%, 13.18% and 13.75%, and the average maximum relative errors are 14.70%, 17.30%, 18.53% and 20.83%, respectively. Comparison of diurnal predicted TEC values from the WNN-PSO, SNN-BP, SNN-PSO and WNN-BP models with GPS TEC revealed that WNN-PSO provides more accurate predictions than the other methods in the test area.
Optimal Dynamic Sub-Threshold Technique for Extreme Low Power Consumption for VLSI
NASA Technical Reports Server (NTRS)
Duong, Tuan A.
2012-01-01
For the miniaturization of electronics systems, power consumption plays a key role in the realm of constraints. From the very large scale integration (VLSI) design perspective, as transistor feature size is decreased to 50 nm and below, there is a sizable increase in the number of transistors as more functional building blocks are embedded in the same chip. However, the consequent increase in power consumption (dynamic and leakage) serves as a key constraint that inhibits the advantages of transistor feature size reduction. Power consumption can be reduced by minimizing the supply voltage (for dynamic power consumption) and/or increasing the threshold voltage (V_th, for reducing leakage power). When the feature size of the transistor is reduced, the supply voltage (V_dd) and threshold voltage (V_th) are also reduced accordingly; the leakage current then becomes a larger fraction of the total power consumption. To maintain low power consumption, operating electronics at sub-threshold levels is a potentially strong contender; however, two obstacles must be faced: more leakage current per transistor causes more leakage power consumption, and response time is slow when the transistor is operated in the weak inversion region. To enable low power consumption and yet obtain high performance, the CMOS (complementary metal oxide semiconductor) transistor as a basic element is viewed and controlled as a four-terminal device (source, drain, gate, and body), as differentiated from the traditional three-terminal view (source and body, drain, and gate). This technique features multiple voltage sources to supply the dynamic control, and uses dynamic control to enable a low threshold voltage when the channel (N or P) is active, for speed enhancement, and a high threshold voltage when the channel (N or P) is inactive, to reduce the leakage current for low leakage power consumption.
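The trade-off described above follows from the standard first-order power expressions for CMOS logic; the sketch below evaluates them with illustrative component values (the capacitance, frequency, and leakage figures are assumptions, not numbers from the article).

```python
def dynamic_power(c_load, v_dd, f_clk, activity=1.0):
    """Dynamic (switching) power: P_dyn = a * C * Vdd^2 * f."""
    return activity * c_load * v_dd ** 2 * f_clk

def leakage_power(v_dd, i_leak):
    """Static power lost to leakage current: P_leak = Vdd * Ileak."""
    return v_dd * i_leak

# Illustrative numbers: scaling Vdd from 1.0 V to 0.5 V quarters the
# dynamic power (the quadratic Vdd term), which is what makes
# sub-threshold operation attractive, while the leakage term shrinks
# only linearly with Vdd.
p_nominal = dynamic_power(1e-12, 1.0, 1e9)  # 1 pF switched at 1 GHz
p_scaled = dynamic_power(1e-12, 0.5, 1e9)
```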
Dias, Francilena Maria Campos Santos; Pinzan-Vercelino, Célia Regina Maio; Tavares, Rudys Rodolfo de Jesus; Gurgel, Júlio de Araújo; Bramante, Fausto Silva; Fialho, Melissa Nogueira Proença
2015-01-01
OBJECTIVE: To compare shear bond strength of different direct bonding techniques of orthodontic brackets to acrylic resin surfaces. METHODS: The sample comprised 64 discs of chemically activated acrylic resin (CAAR) randomly divided into four groups: discs in group 1 were bonded by means of light-cured composite resin (conventional adhesive); discs in group 2 had surfaces roughened with a diamond bur followed by conventional direct bonding by means of light-cured composite resin; discs in group 3 were bonded by means of CAAR (alternative adhesive); and discs in group 4 had surfaces roughened with a diamond bur followed by direct bonding by means of CAAR. Shear bond strength values were determined after 24 hours by means of a universal testing machine at a speed of 0.5 mm/min, and compared by analysis of variance followed by post-hoc Tukey test. Adhesive remnant index (ARI) was measured and compared among groups by means of Kruskal-Wallis and Dunn tests. RESULTS: Groups 3 and 4 had significantly greater shear bond strength values in comparison to groups 1 and 2. Groups 3 and 4 yielded similar results. Group 2 showed better results when compared to group 1. In ARI analyses, groups 1 and 2 predominantly exhibited a score equal to 0, whereas groups 3 and 4 predominantly exhibited a score equal to 3. CONCLUSIONS: Direct bonding of brackets to acrylic resin surfaces using CAAR yielded better results than light-cured composite resin. Surface preparation with diamond bur only increased shear bond strength in group 2. PMID:26352846
Online Optimization Method for Operation of Generators in a Micro Grid
NASA Astrophysics Data System (ADS)
Hayashi, Yasuhiro; Miyamoto, Hideki; Matsuki, Junya; Iizuka, Toshio; Azuma, Hitoshi
Recently, numerous studies and developments on distributed generators such as photovoltaic generation systems, wind turbine generation systems, and fuel cells have been carried out against the background of global environmental issues and deregulation of the electricity market, and the technology of these distributed generators has progressed. In particular, the micro grid, which consists of several distributed generators, loads, and storage batteries, is expected to be one of the new operational frameworks for distributed generators. However, since precipitous load fluctuations occur in a micro grid because of its smaller capacity compared with the conventional power system, high-accuracy load forecasting and control schemes to balance supply and demand are needed. Namely, it is necessary to improve the precision of operation in a micro grid by observing load fluctuations and correcting the start-stop schedule and output of generators online. However, it is not easy to determine the operation schedule of each generator in a short time, because determining the start-up, shut-down, and output of each generator in a micro grid is a mixed-integer programming problem. In this paper, the authors propose an online optimization method for the optimal operation schedule of generators in a micro grid. The proposed method is based on an enumeration method and particle swarm optimization (PSO). In the proposed method, after enumerating all unit commitment patterns of each generator that satisfy the minimum up time and minimum down time constraints, the optimal schedule and output of the generators are determined under the other operational constraints using PSO. A numerical simulation is carried out for a micro grid model with five generators and a photovoltaic generation system to examine the validity of the proposed method.
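The first stage of the proposed method, enumerating all unit commitment patterns that satisfy the minimum up time and minimum down time constraints, can be sketched as below. This is a generic illustration, not the authors' code; treating the runs at the horizon boundaries as unconstrained is an assumption of this sketch.

```python
from itertools import product

def run_lengths(schedule):
    """Lengths of consecutive identical states, paired with the state."""
    runs, cur, length = [], schedule[0], 1
    for s in schedule[1:]:
        if s == cur:
            length += 1
        else:
            runs.append((cur, length))
            cur, length = s, 1
    runs.append((cur, length))
    return runs

def feasible_patterns(horizon, min_up, min_down):
    """All on/off schedules whose every interior run respects min up/down time.

    Runs touching the horizon boundary are left unconstrained here, a
    modeling assumption (the unit's state outside the horizon is unknown).
    """
    patterns = []
    for sched in product((0, 1), repeat=horizon):
        inner = run_lengths(sched)[1:-1]
        if all((length >= min_up if state else length >= min_down)
               for state, length in inner):
            patterns.append(sched)
    return patterns
```

For the full problem, the feasible pattern lists of each generator would then be handed to PSO, which sets the continuous outputs under the remaining operational constraints.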
Roundness error assessment based on particle swarm optimization
NASA Astrophysics Data System (ADS)
Zhao, J. W.; Chen, G. Q.
2005-01-01
Roundness error assessment is always a nonlinear optimization problem without constraints. The method of particle swarm optimization (PSO) is proposed to evaluate the roundness error. PSO is an evolutionary algorithm inspired by the social behavior of birds flocking in search of food. PSO regards each feasible solution as a particle (a point in n-dimensional space) and initializes a swarm of random particles in the feasible region. Each particle always tracks two best positions: the best position it has found itself and the best position found by the whole swarm. Based on the inertia weight and these two best positions, all particles update their positions and velocities, and their quality is evaluated by the fitness function. After a number of iterations, the swarm converges to an optimized solution. The reciprocal of the error assessment objective function is adopted as the fitness. In this paper the calculation procedures with PSO are given. Finally, an assessment example is used to verify the method. The results show that the proposed method offers a new way to assess other form and position errors because it can always converge to the global optimal solution.
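The update rule summarized above can be sketched as a minimal global-best PSO applied to minimum-zone roundness evaluation (the width of the radial band about a candidate circle center). The swarm size, iteration count, and coefficient values are illustrative assumptions; also, where the abstract maximizes the reciprocal of the objective as fitness, this sketch minimizes the objective directly.

```python
import math
import random

def roundness_error(center, points):
    """Minimum-zone roundness: radial band width about a candidate center."""
    radii = [math.hypot(x - center[0], y - center[1]) for x, y in points]
    return max(radii) - min(radii)

def pso_minimize(objective, bounds, n_particles=30, n_iter=200,
                 w=0.7, c1=1.5, c2=1.5, seed=1):
    """Global-best PSO: each particle tracks its personal best and the swarm best."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

Fed with measured profile points, the returned center minimizes the radial band width, whose value is the roundness error.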
Elsayed, Ibrahim; Abdelbary, Aly Ahmed; Elshafeey, Ahmed Hassen
2014-01-01
Context: Diacerein (DCN) has low aqueous solubility (3.197 mg/L) and, consequently, low oral bioavailability (35%–56%). To increase both the solubility and dissolution rate of DCN while maintaining its crystalline nature, high pressure homogenization was used but with only a few homogenization cycles preceded by a simple bottom-up technique. Methods: The nanosuspensions of DCN were prepared using a combined bottom-up/top-down technique. Different surfactants – polyvinyl alcohol, sodium deoxycholate, and sodium dodecyl sulfate – with different concentrations were used for the stabilization of the nanosuspensions. Full factorial experimental design was employed to investigate the influence of formulation variables on nanosuspension characteristics using Design-Expert® Software. Particle size (PS), zeta potential, saturation solubility, in vitro dissolution, and drug crystallinity were studied. Moreover, the in vivo performance of the optimized formula was assessed by bioavailability determination in healthy human volunteers. Results: The concentration of surfactant had a significant effect on both the PS and polydispersity index values. The 1% surfactant concentration showed the lowest PS and polydispersity index values compared with other concentrations. Both type and concentration of surfactant had significant effects on the zeta potential. Formula F8 (containing 1% sodium deoxycholate) and Formula F12 (containing 1% sodium dodecyl sulfate) had the highest desirability values (0.952 and 0.927, respectively). Hence, they were selected for further characterization. The saturated solubility and mean dissolution time, in the case of F8 and F12, were significantly higher than the coarse drug powder. Techniques utilized in the nanocrystals’ preparation had no effect on DCN crystalline state. The selected formula (F12) showed a higher bioavailability compared to the reference market product with relative bioavailability of 131.4%. Conclusion: The saturation
NASA Astrophysics Data System (ADS)
Avila, Marco A.
2015-02-01
Laser range finders (LRF) and target designators (TD) for military applications usually have stringent environmental requirements for optimal performance. Current technology and system architectures need LRF and TD lasers to function in more than one color (near-IR and eye-safe wavelengths) for multiple ground and airborne applications. In addition, these kinds of lasers need to be packaged inside a small space for portability. It is for these reasons that a folded, crossed-Porro, polarization-outcoupled resonator is usually the chosen geometry. This work explores polarization techniques to design a laser resonator cavity that works well for more than one color, sometimes without the need for actual birefringent components (i.e., waveplates), to achieve the goal of a stable laser resonator.
Liu, Qing; Wang, Tai-Yong; Yang, Xiu-Ping; Li, Kun; Gao, Li-Lan; Zhang, Chun-Qiu; Guo, Yue-Hong
2014-04-01
The unconfined compression and tension experiments of the intervertebral disc were conducted by applying an optimized digital image correlation technique, and the internal strain distribution was analysed for the disc. It was found that the axial strain values of different positions increased obviously with the increase in loads, while inner annulus fibrosus and posterior annulus fibrosus experienced higher axial strains than the outer annulus fibrosus and anterior annulus fibrosus. Deep annulus fibrosus exhibited higher compressive and tensile axial strains than superficial annulus fibrosus for the anterior region, while there was an opposite result for the posterior region. It was noted that all samples demonstrated a nonlinear stress-strain profile in the process of deforming, and an elastic region was shown once the sample was deformed beyond its toe region.
NASA Astrophysics Data System (ADS)
Jeykrishnan, J.; Vijaya Ramnath, B.; Akilesh, S.; Pradeep Kumar, R. P.
2016-09-01
In the manufacturing sector, electric discharge machining (EDM) is widely used because of its unique machining characteristics and high precision, which cannot be achieved by traditional machines. The purpose of this paper is to find the optimum machining parameters that curtail the machining time, with respect to a high material removal rate (MRR) and a low tool wear rate (TWR), by varying parameters such as current, pulse on time (Ton) and pulse off time (Toff). By conducting several dry runs using the Taguchi technique with an L9 orthogonal array (OA), the optimized parameters were found using analysis of variance (ANOVA), the error percentage was validated, and the parameter contributions to MRR and TWR were determined.
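The Taguchi analysis step described above converts each L9 run's responses into signal-to-noise (S/N) ratios (larger-is-better for MRR, smaller-is-better for TWR) and ranks factor levels by mean S/N. The sketch below shows those standard formulas; the level assignments and responses passed in would come from the experiment, which is not reproduced here.

```python
import math

def sn_larger_is_better(ys):
    """Taguchi S/N ratio for responses to maximize (e.g. MRR)."""
    return -10.0 * math.log10(sum(1.0 / y ** 2 for y in ys) / len(ys))

def sn_smaller_is_better(ys):
    """Taguchi S/N ratio for responses to minimize (e.g. TWR)."""
    return -10.0 * math.log10(sum(y ** 2 for y in ys) / len(ys))

def main_effects(levels, sn):
    """Average S/N per factor level; the level with the highest
    mean S/N is taken as optimal for that factor."""
    table = {}
    for lv, s in zip(levels, sn):
        table.setdefault(lv, []).append(s)
    return {lv: sum(v) / len(v) for lv, v in table.items()}
```

Running `main_effects` once per factor (current, Ton, Toff) over the nine S/N values yields the level combination that Taguchi analysis reports as optimal, with ANOVA then apportioning each factor's contribution.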
Fang, Ching; Liu, Ju-Tsung; Lin, Cheng-Huang
2002-07-25
The separation and on-line concentrations of lysergic acid diethylamide (LSD), iso-lysergic acid diethylamide (iso-LSD) and lysergic acid N,N-methylpropylamide (LAMPA) in human urine were investigated by capillary electrophoresis-fluorescence spectroscopy using sodium dodecyl sulfate (SDS) as an anionic surfactant. A number of parameters such as buffer pH, SDS concentration, Brij-30 concentration and the content of organic solvent used in separation, were optimized. The techniques of sweeping-micellar electrokinetic chromatography (sweeping-MEKC) and cation-selective exhaustive injection-sweep-micellar electrokinetic chromatography (CSEI-sweep-MEKC) were used for determining on-line concentrations. The advantages and disadvantages of this procedure with respect to sensitivity, precision and simplicity are discussed and compared.
Application of particle swarm optimization to interpret Rayleigh wave dispersion curves
NASA Astrophysics Data System (ADS)
Song, Xianhai; Tang, Li; Lv, Xiaochun; Fang, Hongping; Gu, Hanming
2012-09-01
Rayleigh waves have been used increasingly as an appealing tool to obtain near-surface shear (S)-wave velocity profiles. However, inversion of Rayleigh wave dispersion curves is challenging for most local-search methods due to its high nonlinearity and multimodality. In this study, we proposed and tested a new Rayleigh wave dispersion curve inversion scheme based on particle swarm optimization (PSO). PSO is a global optimization strategy that simulates the social behavior observed in a flock (swarm) of birds searching for food. A simple search strategy in PSO guides the algorithm toward the best solution through constant updating of the cognitive knowledge and social behavior of the particles in the swarm. To evaluate the calculation efficiency and stability of PSO for inversion of surface wave data, we first inverted three noise-free and three noise-corrupted synthetic data sets. Then, we made a comparative analysis with genetic algorithms (GA) and a Monte Carlo (MC) sampler, and reconstructed a histogram of model parameters sampled in a low-misfit region (less than 15% relative error) to further investigate the performance of the proposed inverse procedure. Finally, we inverted a real-world example from a waste disposal site in NE Italy to examine the applicability of PSO to Rayleigh wave dispersion curves. Results from both synthetic and field data demonstrate that particle swarm optimization can be used for quantitative interpretation of Rayleigh wave dispersion curves. PSO appears superior to GA and MC in terms of both reliability and computational effort. The great advantages of PSO are that it is fast in locating the low-misfit region and easy to implement, and there are only three parameters to tune (the inertia weight or constriction factor, and the local and global acceleration constants), with theoretical results available to explain how to tune them.
Olama, Mohammed M.; Ma, Xiao; Killough, Stephen M.; Kuruganti, Teja; Smith, Stephen F.; Djouadi, Seddik M.
2015-03-12
In recent years, there has been great interest in using hybrid spread-spectrum (HSS) techniques for commercial applications, particularly in the Smart Grid, in addition to their inherent uses in military communications. This is because HSS can accommodate high data rates with high link integrity, even in the presence of significant multipath effects and interfering signals. A highly useful form of this transmission technique for many types of command, control, and sensing applications is the specific code-related combination of standard direct sequence modulation with fast frequency hopping, denoted hybrid DS/FFH, wherein multiple frequency hops occur within a single data-bit time. In this paper, error-probability analyses are performed for a hybrid DS/FFH system over standard Gaussian and fading-type channels, progressively including the effects of wide- and partial-band jamming, multi-user interference, and varying degrees of Rayleigh and Rician fading. In addition, an optimization approach is formulated that minimizes the bit-error probability of a hybrid DS/FFH communication system and solves for the resulting system design parameters. The optimization objective function is non-convex and can be solved by applying the Karush-Kuhn-Tucker conditions. We also present our efforts toward exploring the design, implementation, and evaluation of a hybrid DS/FFH radio transceiver using a single FPGA. Numerical and experimental results are presented under widely varying design parameters to demonstrate the adaptability of the waveform for varied harsh smart grid RF signal environments.
Dumidu Wijayasekara; Milos Manic; Piyush Sabharwall; Vivek Utgikar
2011-07-01
Artificial Neural Networks (ANN) have been used in the past to predict the performance of printed circuit heat exchangers (PCHE) with satisfactory accuracy. Typically, published literature has focused on optimizing ANN using a training dataset to train the network and a testing dataset to evaluate it. Although this may produce outputs that agree with experimental results, there is a risk of over-training or over-learning the network rather than generalizing it, which should be the ultimate goal. An over-trained network is able to produce good results with the training dataset but fails when new datasets with subtle changes are introduced. In this paper we present the EBaLM-OTR (error back propagation and Levenberg-Marquardt algorithms for over training resilience) technique, which is based on a previously discussed method of selecting neural network architecture that uses a separate validation set to evaluate different network architectures based on mean square error (MSE) and standard deviation of MSE. The method uses k-fold cross validation. Therefore, in order to select the optimal architecture for the problem, the dataset is divided into three parts which are used to train, validate and test each network architecture. Each architecture is then evaluated according to its generalization capability and its ability to conform to the original data. The method proved to be a comprehensive tool in identifying the weaknesses and advantages of different network architectures. The method also highlighted the fact that the architecture with the lowest training error is not always the most generalized and is therefore not necessarily optimal. Using the method, the testing error achieved was on the order of 10^-5 to 10^-3. It was also shown that the absolute error achieved by EBaLM-OTR was an order of magnitude better than the lowest error achieved by EBaLM-THP.
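The architecture-selection loop described above, training each candidate on k-1 folds, scoring it on the held-out fold, and ranking by mean validation MSE rather than training error, can be sketched generically. The model interface (`train_fn`, `mse_fn`) is a hypothetical placeholder, not the EBaLM-OTR implementation:

```python
import statistics

def kfold_indices(n, k):
    """Partition indices 0..n-1 into k nearly equal folds."""
    folds, start = [], 0
    for i in range(k):
        size = n // k + (1 if i < n % k else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def select_architecture(candidates, data, train_fn, mse_fn, k=5):
    """Rank candidate architectures by k-fold cross-validated MSE.
    candidates: list of architecture specs; train_fn(spec, subset) -> model;
    mse_fn(model, subset) -> float.
    Returns (best_spec, mean_mse, stdev_mse)."""
    folds = kfold_indices(len(data), k)
    results = []
    for spec in candidates:
        errs = []
        for i in range(k):
            val = [data[j] for j in folds[i]]
            train = [data[j] for i2, f in enumerate(folds) if i2 != i for j in f]
            model = train_fn(spec, train)
            errs.append(mse_fn(model, val))
        results.append((spec, statistics.mean(errs), statistics.pstdev(errs)))
    # Rank by mean validation MSE: the abstract's point is that the lowest
    # *training* error does not identify the most generalized architecture.
    return min(results, key=lambda r: r[1])
```

A final held-out test set, as in the three-way split the abstract describes, would then be scored once against the selected architecture only.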
NASA Technical Reports Server (NTRS)
Parrish, R. V.; Dieudonne, J. E.; Filippas, T. A.
1971-01-01
An algorithm employing a modified sequential random perturbation, or creeping random search, was applied to the problem of optimizing the parameters of a high-energy beam transport system. The stochastic solution of the mathematical model for first-order magnetic-field expansion allows the inclusion of state-variable constraints, and the inclusion of parameter constraints allowed by the method of algorithm application eliminates the possibility of infeasible solutions. The mathematical model and the algorithm were programmed for a real-time simulation facility; thus, two important features are provided to the beam designer: (1) a strong degree of man-machine communication (even to the extent of bypassing the algorithm and applying analog-matching techniques), and (2) extensive graphics for displaying information concerning both algorithm operation and transport-system behavior. Chromatic aberration was also included in the mathematical model and in the optimization process. Results presented show this method yielding better solutions (in terms of resolution) to the particular problem than those of a standard analog program, as well as demonstrating the flexibility, in terms of elements, constraints, and chromatic aberration, allowed by user interaction with both the algorithm and the stochastic model. Examples of slit usage and a limited comparison of predicted results with actual results obtained from a 600 MeV cyclotron are given.
Schulze, Anja; Römmelt, Horst; Ehrenstein, Vera; van Strien, Rob; Praml, Georg; Küchenhoff, Helmut; Nowak, Dennis; Radon, Katja
2011-01-01
Potential adverse health effects of concentrated animal feeding operations (CAFOs), which were also shown in the authors' Lower Saxony Lung Study, are of public concern. The authors aimed to investigate the pulmonary health effects on neighboring residents, assessed using an optimized exposure-estimation technique. Annual ammonia emission was measured to assess the emission from CAFOs and from surrounding fields. The location of sampling points was optimized using cluster analysis. Individual exposure of 457 nonfarm subjects was interpolated by a weighting method. Mean estimated annual ammonia levels varied between 16 and 24 μg/m³. Higher-exposed participants were more likely to be sensitized against ubiquitous allergens than lower-exposed subjects (adjusted odds ratio [OR] 4.2; 95% confidence interval [CI] 1.2-13.2). In addition, they showed a significantly lower forced expiratory volume in 1 second (FEV₁) (adjusted mean difference in % of predicted -8%; 95% CI -13% to -3%). The authors' previous findings that CAFOs may contribute to the burden of respiratory disease were confirmed by this study. PMID:21864103
McKenzie, Elizabeth M.; Balter, Peter A.; Stingo, Francesco C.; Jones, Jimmy; Followill, David S.; Kry, Stephen F.
2014-12-15
There was no significant difference in the performance of any device between gamma criteria of 2%/2 mm, 3%/3 mm, and 5%/3 mm. Finally, optimal cutoffs (e.g., percent of pixels passing gamma) were determined for each device, and while clinical practice commonly uses a threshold of 90% of pixels passing for most cases, these results showed variability in the optimal cutoff among devices. Conclusions: IMRT QA devices have differences in their ability to accurately detect dosimetrically acceptable and unacceptable plans. Field-by-field analysis with a MapCheck device and use of the MapCheck with a MapPhan phantom while delivering at planned rotational gantry angles resulted in a significantly poorer ability to accurately sort acceptable and unacceptable plans compared with the other techniques examined. Patient-specific IMRT QA techniques in general should be thoroughly evaluated for their ability to correctly differentiate acceptable and unacceptable plans. Additionally, optimal agreement thresholds should be identified and used, as the common clinical thresholds typically worked very poorly to identify unacceptable plans.
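The abstract does not state how the per-device optimal cutoffs were determined; one standard way to pick a threshold that best separates acceptable from unacceptable plans is to maximize Youden's J (sensitivity + specificity - 1) over candidate passing-rate cutoffs. The sketch below uses that assumption, with hypothetical input names:

```python
def optimal_cutoff(passing_rates, acceptable_flags):
    """Pick the percent-pixels-passing threshold that best separates
    acceptable (True) from unacceptable (False) plans, by maximizing
    Youden's J = sensitivity + specificity - 1.
    passing_rates: gamma passing rate per plan (e.g. percent of pixels passing);
    acceptable_flags: ground-truth dosimetric acceptability per plan."""
    best = None
    for cut in sorted(set(passing_rates)):
        # a plan "passes QA" at this cutoff if its rate is >= cut
        tp = sum(1 for r, a in zip(passing_rates, acceptable_flags) if r >= cut and a)
        fn = sum(1 for r, a in zip(passing_rates, acceptable_flags) if r < cut and a)
        tn = sum(1 for r, a in zip(passing_rates, acceptable_flags) if r < cut and not a)
        fp = sum(1 for r, a in zip(passing_rates, acceptable_flags) if r >= cut and not a)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        j = sens + spec - 1.0
        if best is None or j > best[1]:
            best = (cut, j)
    return best[0]
```

Run per device, this yields a device-specific cutoff in place of the universal 90% clinical threshold, which is consistent with the variability among devices reported above.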
Choi, Soo-Young; Choi, Ho-Jung; Lee, Ki-Ja; Lee, Young-Won
2015-09-01
To establish a protocol for a multi-phase computed tomography (CT) of the canine pancreas using the bolus-tracking technique, dynamic scan and multi-phase CT were performed in six normal beagle dogs. The dynamic scan was performed for 60 sec at 1-sec intervals after the injection (4 ml/sec) of a contrast medium, and intervals from aortic enhancement appearance to aortic, pancreatic parenchymal and portal vein peaks were measured. The multi-phase CT with 3 phases was performed three times using a bolus-tracking technique. Scan delays were 0, 15 and 30 sec in the first multi-phase scan; 5, 20 and 35 sec in the second; and 10, 25 and 40 sec in the third. Attenuation values and contrast enhancement pattern were analyzed from the aorta, pancreas and portal vein. The intervals from aortic enhancement appearance to aortic, pancreatic parenchymal and portal vein peaks were 3.8 ± 0.7, 8.7 ± 0.9 and 13.3 ± 1.5 sec, respectively. The maximum attenuation values of the aorta, pancreatic parenchyma and portal vein were present at scan sections with no scan delay, a 5-sec delay and a 10-sec delay, respectively. When a multi-phase CT of the canine pancreas is triggered at aortic enhancement appearance using a bolus-tracking technique, the recommended optimal delay times of the arterial and pancreatic parenchymal phases are no scan delay and 5 sec, respectively. PMID:25843155
Applications of parallel global optimization to mechanics problems
NASA Astrophysics Data System (ADS)
Schutte, Jaco Francois
Global optimization of complex engineering problems, with a high number of variables and local minima, requires sophisticated algorithms with global search capabilities and high computational efficiency. With the growing availability of parallel processing, it makes sense to address these requirements by increasing the parallelism in optimization strategies. This study proposes three methods of concurrent processing. The first method entails exploiting the structure of population-based global algorithms such as the stochastic Particle Swarm Optimization (PSO) algorithm and the Genetic Algorithm (GA). As a demonstration of how such an algorithm may be adapted for concurrent processing, we modify and apply the PSO to several mechanical optimization problems on a parallel processing machine. Desirable PSO algorithm features such as insensitivity to design variable scaling and modest sensitivity to algorithm parameters are demonstrated. A second approach to parallelism and improving algorithm efficiency is utilizing multiple optimizations. With this method, a budget of fitness evaluations is distributed among several independent sub-optimizations in place of a single extended optimization. Under certain conditions this strategy obtains a higher combined probability of converging to the global optimum than a single optimization that utilizes the full budget of fitness evaluations. The third and final method of parallelism addressed in this study is the use of quasi-separable decomposition, which is applied to decompose loosely coupled problems. This yields several sub-problems of lesser dimensionality that may be concurrently optimized with reduced effort.
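The second strategy above, splitting a fixed evaluation budget among several independent sub-optimizations, rests on a simple probability argument: independent restarts all fail only if each one fails individually. A minimal sketch of that argument and of a budget-splitting wrapper (the `optimize_once` interface is an illustrative assumption, not the study's code):

```python
import random

def combined_success_probability(p_single, n_runs):
    """Probability that at least one of n_runs independent sub-optimizations
    finds the global optimum, given each succeeds with probability p_single."""
    return 1.0 - (1.0 - p_single) ** n_runs

def multistart(optimize_once, total_budget, n_runs, seed=0):
    """Split total_budget fitness evaluations among n_runs independent runs
    and keep the best result.
    optimize_once(budget, rng) -> (solution, objective_value)."""
    rng = random.Random(seed)
    results = [optimize_once(total_budget // n_runs, random.Random(rng.random()))
               for _ in range(n_runs)]
    return min(results, key=lambda r: r[1])
```

For example, ten sub-optimizations that each reach the global optimum 30% of the time together succeed with probability 1 - 0.7^10 ≈ 0.97; whether this beats one long run with the full budget depends on how per-run success probability degrades with the smaller budget, which is the "certain conditions" caveat in the abstract.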
Evaluation of a Particle Swarm Algorithm For Biomechanical Optimization
Schutte, Jaco F.; Koh, Byung; Reinbolt, Jeffrey A.; Haftka, Raphael T.; George, Alan D.; Fregly, Benjamin J.
2006-01-01
Optimization is frequently employed in biomechanics research to solve system identification problems, predict human movement, or estimate muscle or other internal forces that cannot be measured directly. Unfortunately, biomechanical optimization problems often possess multiple local minima, making it difficult to find the best solution. Furthermore, convergence in gradient-based algorithms can be affected by scaling to account for design variables with different length scales or units. In this study we evaluate a recently-developed version of the particle swarm optimization (PSO) algorithm to address these problems. The algorithm’s global search capabilities were investigated using a suite of difficult analytical test problems, while its scale-independent nature was proven mathematically and verified using a biomechanical test problem. For comparison, all test problems were also solved with three off-the-shelf optimization algorithms—a global genetic algorithm (GA) and multistart gradient-based sequential quadratic programming (SQP) and quasi-Newton (BFGS) algorithms. For the analytical test problems, only the PSO algorithm was successful on the majority of the problems. When compared to previously published results for the same problems, PSO was more robust than a global simulated annealing algorithm but less robust than a different, more complex genetic algorithm. For the biomechanical test problem, only the PSO algorithm was insensitive to design variable scaling, with the GA algorithm being mildly sensitive and the SQP and BFGS algorithms being highly sensitive. The proposed PSO algorithm provides a new off-the-shelf global optimization option for difficult biomechanical problems, especially those utilizing design variables with different length scales or units. PMID:16060353