CPG Network Optimization for a Biomimetic Robotic Fish via PSO.
Yu, Junzhi; Wu, Zhengxing; Wang, Ming; Tan, Min
2016-09-01
In this brief, we investigate the parameter optimization of a central pattern generator (CPG) network that governs forward and backward swimming for a fully untethered, multijoint biomimetic robotic fish. Since the CPG parameters are tightly linked to the propulsive performance of the robotic fish, we propose a method for determining near-optimal control parameters. Within the framework of evolutionary computation, we combine a dynamic model with the particle swarm optimization (PSO) algorithm to seek CPG characteristic parameters that enhance performance. The PSO-based optimization scheme is validated with extensive experiments conducted on the actual robotic fish. Notably, the optimized results are shown to be superior to previously reported forward and backward swimming speeds. PMID:26259223
ANN-PSO Integrated Optimization Methodology for Intelligent Control of MMC Machining
NASA Astrophysics Data System (ADS)
Chandrasekaran, Muthumari; Tamang, Santosh
2016-06-01
Metal Matrix Composites (MMCs) show improved properties compared with non-reinforced alloys and have found increasing application in the automotive and aerospace industries. The selection of optimum machining parameters to produce components of the desired surface roughness is of great concern for the quality and economy of the manufacturing process. In this study, a surface roughness prediction model for turning Al-SiCp MMC is developed using an Artificial Neural Network (ANN). Three turning parameters, viz. spindle speed (N), feed rate (f) and depth of cut (d), were considered as input neurons, with surface roughness as the output neuron. A 3-5-1 ANN architecture is found to be optimum, and the model predicts with an average percentage error of 7.72 %. The Particle Swarm Optimization (PSO) technique is used to optimize the parameters so as to minimize machining time. The innovative aspect of this work is the development of an integrated ANN-PSO optimization method for intelligent control of the MMC machining process applicable to manufacturing industries. The robustness of the method shows its superiority in obtaining optimum cutting parameters that satisfy the desired surface roughness, with good convergence in a minimum number of iterations.
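The ANN-surrogate-plus-PSO search described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the surrogate formulas, parameter bounds, and penalty weight are invented for the example, with a hinge penalty standing in for the surface roughness constraint that the trained ANN would supply.

```python
import random

random.seed(0)

# Hypothetical surrogate models (NOT the paper's trained ANN): machining time
# falls as speed/feed/depth rise, while roughness Ra grows with feed.
def machining_time(N, f, d):
    return 1e4 / (N * f * d)            # toy proportionality, minutes

def roughness(N, f, d):
    return 0.5 + 40.0 * f - 1e-4 * N    # toy linear model, micrometres

def fitness(x, ra_max=3.0):
    N, f, d = x
    penalty = 1e3 * max(0.0, roughness(N, f, d) - ra_max)  # constraint hinge
    return machining_time(N, f, d) + penalty

bounds = [(500, 2000), (0.05, 0.3), (0.5, 2.0)]  # N (rpm), f (mm/rev), d (mm)

def pso(fit, bounds, n=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pbest_f = [fit(p) for p in pos]
    g = min(range(n), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n):
            for j, (lo, hi) in enumerate(bounds):
                r1, r2 = random.random(), random.random()
                vel[i][j] = (w * vel[i][j]
                             + c1 * r1 * (pbest[i][j] - pos[i][j])
                             + c2 * r2 * (gbest[j] - pos[i][j]))
                # clamp each coordinate back into its machining bound
                pos[i][j] = min(max(pos[i][j] + vel[i][j], lo), hi)
            fi = fit(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f

best, best_f = pso(fitness, bounds)
```

With the toy surrogates, the optimum pushes speed and depth toward their upper bounds while the penalty caps the feed rate at the value where Ra meets the limit.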
An approach for reliability analysis of industrial systems using PSO and IFS technique.
Garg, Harish; Rani, Monica
2013-11-01
The main objective of this paper is to present a technique for computing the membership functions of an intuitionistic fuzzy set (IFS) from imprecise, uncertain and vague data. In the literature so far, membership functions of an IFS have been computed via fuzzy arithmetic operations on the collected data and hence contain a wide range of uncertainty. It is therefore necessary to optimize these spreads by formulating a nonlinear optimization problem using ordinary arithmetic operations instead of fuzzy ones. Particle swarm optimization (PSO) is used for constructing the membership functions. Sensitivity and performance analyses have also been conducted to find the critical component of the system. Finally, the computed results are compared with existing results. The suggested framework is illustrated with a case study. PMID:23867122
PSO-based support vector machine with cuckoo search technique for clinical disease diagnoses.
Liu, Xiaoyong; Fu, Hui
2014-01-01
Disease diagnosis is conducted with a machine learning method. We propose a novel machine learning method that hybridizes support vector machine (SVM), particle swarm optimization (PSO), and cuckoo search (CS). The new method consists of two stages: first, a CS-based approach for parameter optimization of the SVM is developed to find good initial kernel-function parameters; then PSO is applied to continue SVM training and find the best SVM parameters. Experimental results indicate that the proposed CS-PSO-SVM model achieves better classification accuracy and F-measure than PSO-SVM and GA-SVM. We therefore conclude that the proposed method is very efficient compared with previously reported algorithms. PMID:24971382
A Modified Particle Swarm Optimization Technique for Finding Optimal Designs for Mixture Models.
Wong, Weng Kee; Chen, Ray-Bing; Huang, Chien-Chih; Wang, Weichung
2015-01-01
Particle Swarm Optimization (PSO) is a meta-heuristic algorithm that has been shown to be successful in solving a wide variety of real and complicated optimization problems in engineering and computer science. This paper introduces a projection based PSO technique, named ProjPSO, to efficiently find different types of optimal designs, or nearly optimal designs, for mixture models with and without constraints on the components, and also for related models, like the log contrast models. We also compare the modified PSO performance with Fedorov's algorithm, a popular algorithm used to generate optimal designs, Cocktail algorithm, and the recent algorithm proposed by [1]. PMID:26091237
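The projection step that keeps each particle on the mixture simplex (component proportions nonnegative and summing to one) is central to a projection-based PSO. The paper's exact operator is not reproduced in the abstract; a standard choice, sketched below under that assumption, is the Euclidean projection onto the probability simplex via the sort-based algorithm.

```python
def project_to_simplex(v):
    """Euclidean projection of v onto {x : x_i >= 0, sum_i x_i = 1}.

    Sort-based algorithm: find the threshold theta such that
    max(v_i - theta, 0) sums to one, then shift and clip.
    """
    u = sorted(v, reverse=True)
    css = 0.0          # running cumulative sum of the sorted values
    theta = 0.0
    for i, ui in enumerate(u, start=1):
        css += ui
        t = (css - 1.0) / i
        if ui - t > 0:          # index still inside the support
            theta = t
    return [max(x - theta, 0.0) for x in v]
```

After each PSO velocity update, a particle's (possibly infeasible) position vector would be passed through `project_to_simplex` before its fitness, e.g. a design criterion for the mixture model, is evaluated.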
NASA Astrophysics Data System (ADS)
Handayani, D.; Nuraini, N.; Tse, O.; Saragih, R.; Naiborhu, J.
2016-04-01
PSO is a computational optimization method motivated by the social behavior of organisms, such as bird flocking, fish schooling and human social relations, and is one of the most important swarm intelligence algorithms. In this study, we analyze the convergence of PSO when it is applied to the within-host dengue infection treatment model simulated in our earlier research. We used the PSO method to construct the initial adjoint equation and to solve a control problem. The dependence of the control input on the continuity of the objective function, and the need to adapt to a dynamic environment, require an analysis of the convergence of PSO. This convergence analysis yields parameter conditions that ensure convergent numerical simulation results for this model using PSO.
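Convergence conditions for PSO are commonly stated via Clerc and Kennedy's constriction coefficient: with acceleration coefficients c1 + c2 = phi > 4, damping every velocity by chi = 2/|2 - phi - sqrt(phi^2 - 4*phi)| guarantees a non-divergent swarm. The sketch below is a generic illustration of that update rule on a convex test function, not the dengue-model setup.

```python
import math
import random

random.seed(1)

c1 = c2 = 2.05
phi = c1 + c2                       # phi > 4 is required for constriction
chi = 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))  # ~0.7298

def f(x):                           # simple convex objective (sphere)
    return sum(xi * xi for xi in x)

dim, n, iters = 3, 15, 200
pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
vel = [[0.0] * dim for _ in range(n)]
pbest = [p[:] for p in pos]
pbest_f = [f(p) for p in pos]
g = min(range(n), key=lambda i: pbest_f[i])
gbest, gbest_f = pbest[g][:], pbest_f[g]

for _ in range(iters):
    for i in range(n):
        for j in range(dim):
            # constricted velocity update: chi damps the whole expression
            vel[i][j] = chi * (vel[i][j]
                               + c1 * random.random() * (pbest[i][j] - pos[i][j])
                               + c2 * random.random() * (gbest[j] - pos[i][j]))
            pos[i][j] += vel[i][j]
        fi = f(pos[i])
        if fi < pbest_f[i]:
            pbest[i], pbest_f[i] = pos[i][:], fi
            if fi < gbest_f:
                gbest, gbest_f = pos[i][:], fi
```

For c1 = c2 = 2.05 the coefficient works out to about 0.7298, the value most often quoted for constriction-factor PSO.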
NASA Astrophysics Data System (ADS)
Tian, Shu; Zhao, Min
2013-03-01
To solve the difficult problem of locating single-phase ground faults in coal mine underground distribution networks, a fault location method is presented that uses an RBF network optimized by an improved PSO algorithm, based on the mapping relationship between the wavelet packet transform modulus maxima of the transient zero-sequence current in specific frequency bands of the faulted line and the fault point position. Simulation results for different transition resistances and fault distances show that the RBF network optimized by the improved PSO algorithm obtains accurate and reliable fault locations, and its fault location performance is better than that of a traditional RBF network.
Zou, Feng; Chen, Debao; Wang, Jiangtao
2016-01-01
An improved teaching-learning-based optimization combined with the social character of PSO (TLBO-PSO), which considers the influence of the teacher's behavior on the students and the mean grade of the class, is proposed in this paper to find global solutions of function optimization problems. In this method, the teacher phase of TLBO is modified: the new position of an individual is determined by its old position, the mean position, and the best position of the current generation. The method overcomes the disadvantage that the evolution of the original TLBO might stall when the mean position of the students equals the position of the teacher. To decrease the computational cost of the algorithm, the duplicate-removal step of the original TLBO is not adopted in the improved algorithm. Moreover, the probability of local convergence of the improved method is decreased by a mutation operator. The effectiveness of the proposed method is tested on several benchmark functions, and the results are competitive with those of other methods. PMID:27057157
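The modified teacher phase can be sketched as below. The exact coefficients of the paper's update are not given in the abstract, so this is an assumption-labeled illustration: the classic TLBO move toward the teacher minus the (scaled) class mean is augmented with a PSO-like pull toward the best individual, which keeps the population moving even when the mean coincides with the teacher.

```python
import random

random.seed(2)

def f(x):                       # benchmark objective (sphere, minimization)
    return sum(xi * xi for xi in x)

def teacher_phase(learners, fit):
    """One modified teacher-phase step over the whole class.

    Update terms (coefficients assumed, not from the paper):
      classic TLBO:  r * (teacher - TF * mean)
      best-pull:     r2 * (teacher - x)        # PSO-like social term
    Candidates are accepted greedily, as in standard TLBO.
    """
    n, dim = len(learners), len(learners[0])
    mean = [sum(l[j] for l in learners) / n for j in range(dim)]
    teacher = min(learners, key=fit)           # best learner acts as teacher
    out = []
    for x in learners:
        tf = random.choice((1, 2))             # teaching factor
        r, r2 = random.random(), random.random()
        cand = [x[j]
                + r * (teacher[j] - tf * mean[j])
                + r2 * (teacher[j] - x[j])
                for j in range(dim)]
        out.append(cand if fit(cand) < fit(x) else x)
    return out

cls = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(20)]
for _ in range(100):
    cls = teacher_phase(cls, f)
best = min(cls, key=f)
```

Note that even once all learners coincide (mean = teacher), a teaching factor of 2 still produces a contraction toward the optimum, so the stagnation the abstract describes is avoided.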
Wang, Jie-sheng; Li, Shu-xia; Gao, Jie
2014-01-01
To meet the real-time fault diagnosis and optimization monitoring requirements of the polymerization kettle in the polyvinyl chloride (PVC) resin production process, a fault diagnosis strategy based on the self-organizing map (SOM) neural network is proposed. First, a mapping between the polymerization process data and the fault patterns is established by analyzing the production technology of the polymerization kettle equipment. The particle swarm optimization (PSO) algorithm, with a new dynamic adjustment method for the inertia weights, is adopted to optimize the structural parameters of the SOM neural network. The fault pattern classification of the polymerization kettle equipment realizes the nonlinear mapping from a given symptom set to the fault set. Finally, fault diagnosis simulation experiments are conducted on industrial on-site historical data of the polymerization kettle, and the results show that the proposed PSO-SOM fault diagnosis strategy is effective. PMID:25152929
NASA Astrophysics Data System (ADS)
Rambabu, C.; Obulesu, Y. P.; Saibabu, Ch.
2014-07-01
This work presents a particle swarm optimization (PSO) based method to solve the optimal power flow in power systems incorporating flexible AC transmission system controllers, such as the thyristor-controlled phase shifter, thyristor-controlled series compensator and unified power flow controller, for security enhancement under single network contingencies. A fuzzy contingency ranking method is used in this paper and is observed to effectively eliminate the masking effect compared with other contingency ranking methods. The fuzzy-based network composite overall severity index is minimized as the objective to improve the security of the power system. The proposed PSO-based optimization process is presented with a case study on the IEEE 30-bus test system to demonstrate its applicability. The results show the feasibility and potential of this new approach.
An Enhanced PSO-Based Clustering Energy Optimization Algorithm for Wireless Sensor Network.
Vimalarani, C; Subramanian, R; Sivanandam, S N
2016-01-01
A Wireless Sensor Network (WSN) is a network formed from a large number of sensor nodes positioned in an application environment to monitor physical quantities in a target area, for example temperature, water level, pressure, health care, and various military applications. Sensor nodes are mostly equipped with self-supported battery power, through which they perform their operations and communicate with neighboring nodes. To maximize the lifetime of a WSN, energy conservation measures are essential for improving its performance. This paper proposes an Enhanced PSO-Based Clustering Energy Optimization (EPSO-CEO) algorithm for Wireless Sensor Networks, in which clustering and cluster head selection are performed using the Particle Swarm Optimization (PSO) algorithm with the aim of minimizing the power consumption in the WSN. The performance metrics are evaluated and the results are compared with a competitive clustering algorithm to validate the reduction in energy consumption. PMID:26881273
Prediction of O-glycosylation Sites Using Random Forest and GA-Tuned PSO Technique
Hassan, Hebatallah; Badr, Amr; Abdelhalim, MB
2015-01-01
O-glycosylation is one of the main types of mammalian protein glycosylation; it occurs at particular serine (S) or threonine (T) sites. Several O-glycosylation site predictors have been developed; however, a need for even better prediction tools remains. One challenge in training the classifiers is that the available datasets are highly imbalanced, which makes the classification accuracy for the minority class unsatisfactory. In our previous work, we proposed a new classification approach based on particle swarm optimization (PSO) and random forest (RF) that addresses the imbalanced dataset problem. The PSO parameter settings in the training process impact the classification accuracy. Thus, in this paper, we optimize the parameters of the PSO algorithm using a genetic algorithm in order to increase the classification accuracy. Our proposed genetic-algorithm-based approach shows better performance, in terms of area under the receiver operating characteristic curve, than existing predictors. In addition, we implemented a glycosylation predictor tool based on this approach and demonstrated that it can successfully identify candidate glycosylation sites in a case-study protein. PMID:26244014
Trajectory planning of free-floating space robot using Particle Swarm Optimization (PSO)
NASA Astrophysics Data System (ADS)
Wang, Mingming; Luo, Jianjun; Walter, Ulrich
2015-07-01
This paper investigates the application of the Particle Swarm Optimization (PSO) strategy to trajectory planning of a kinematically redundant space robot in free-floating mode. Due to path-dependent dynamic singularities, the volume of the available workspace of the space robot is limited, and enormous joint velocities are required when such singularities are met. To overcome this effect, the direct kinematics equations, in conjunction with PSO, are employed for trajectory planning of the free-floating space robot. The joint trajectories are parametrized with Bézier curves to simplify the calculation. A constrained PSO scheme with adaptive inertia weight is implemented to find the optimal joint trajectories while specific objectives and imposed constraints are satisfied. The proposed method is not sensitive to the singularity issue due to the use of forward kinematic equations. Simulation results are presented for trajectory planning of a 7-degree-of-freedom (DOF) redundant manipulator mounted on a free-floating spacecraft and demonstrate the effectiveness of the proposed method.
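The Bézier parametrization reduces each joint's time history to a handful of control points, which become the PSO decision variables. A sketch using De Casteljau's algorithm follows; the control-point values are hypothetical, and the endpoint property shown (the curve interpolates its first and last control points) is what lets the boundary joint angles be fixed while PSO tunes only the interior points.

```python
def bezier(ctrl, t):
    """Evaluate a Bezier curve at t in [0, 1] via De Casteljau's algorithm."""
    pts = list(ctrl)
    while len(pts) > 1:
        # repeated linear interpolation between consecutive points
        pts = [(1 - t) * a + t * b for a, b in zip(pts, pts[1:])]
    return pts[0]

# One joint trajectory q(t): endpoints fix the boundary joint angles;
# the interior control points are what a PSO particle would encode.
ctrl = [0.0, 0.4, -0.2, 1.0]            # hypothetical control points (rad)
samples = [bezier(ctrl, k / 50) for k in range(51)]
```

A PSO particle for an n-joint robot would simply concatenate the interior control points of all n such curves, and the fitness would integrate the chosen objective (e.g. base disturbance) along the sampled trajectory.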
NASA Astrophysics Data System (ADS)
Fukuyama, Yoshikazu
This paper compares particle swarm optimization (PSO) techniques for a reactive power allocation planning problem in power systems. The problem can be formulated as a mixed-integer nonlinear optimization problem (MINLP). The PSO-based methods determine a reactive power allocation strategy with continuous and discrete state variables, such as the automatic voltage regulator (AVR) operating values of electric power generators, the tap positions of on-load tap changers (OLTC) of transformers, and the amount of reactive power compensation equipment. This paper thus investigates the applicability of PSO techniques to one of the practical MINLPs in power systems. Four variations of PSO are compared: PSO with the inertia weight approach (IWA), PSO with the constriction factor approach (CFA), hybrid particle swarm optimization (HPSO) with IWA, and HPSO with CFA. The four methods are applied to the standard IEEE 14-bus system and a practical 112-bus system.
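The two update-rule families compared above differ only in how the velocity is damped. A sketch of the standard formulas follows; the customary default ranges w in [0.4, 0.9] and c1 = c2 = 2.05 are common literature defaults, not values taken from this paper.

```python
import math

def inertia_weight(it, max_it, w_max=0.9, w_min=0.4):
    """IWA: inertia weight decreased linearly over the run."""
    return w_max - (w_max - w_min) * it / max_it

def constriction_factor(c1=2.05, c2=2.05):
    """CFA (Clerc-Kennedy): requires phi = c1 + c2 > 4."""
    phi = c1 + c2
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))
```

Under IWA the weight multiplies only the previous velocity, shifting the swarm from exploration to exploitation as it decays; under CFA the factor multiplies the entire velocity expression, which is what provides the convergence guarantee.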
Optimal placement of active braces by using PSO algorithm in near- and far-field earthquakes
NASA Astrophysics Data System (ADS)
Mastali, M.; Kheyroddin, A.; Samali, B.; Vahdani, R.
2016-03-01
One of the most important issues in tall buildings is the lateral resistance of the load-bearing systems to applied loads such as earthquake, wind and blast. Dual systems comprising core wall systems (single or multi-cell cores) and moment-resisting frames are used as resistance systems in tall buildings. In addition to the adequate stiffness provided by the dual system, most tall buildings may have to rely on various control systems to reduce the level of unwanted motions stemming from severe dynamic loads. One of the main challenges in effectively controlling the motion of a structure is the limitation in optimally distributing the required control along the structure's height. In this paper, concrete shear walls are used as a secondary resistance system at three different heights, with actuators installed in the braces. The optimal actuator positions are found using the PSO algorithm as well as by arbitrary placement. The control performance of buildings whose controller placement is optimized by the PSO algorithm is assessed and compared with that of arbitrary placement, using both near- and far-field ground motions of the Kobe and Chi-Chi earthquakes.
Improving Cooperative PSO using Fuzzy Logic
NASA Astrophysics Data System (ADS)
Afsahi, Zahra; Meybodi, Mohammadreza
PSO is a population-based optimization technique that simulates the social behaviour of fish schooling or bird flocking. Two significant weaknesses of this method are, first, falling into local optima and, second, the curse of dimensionality. In this work we present FCPSO-H to overcome these weaknesses. Our approach is implemented in cooperative PSO and employs fuzzy logic to control the acceleration coefficients in the velocity equation of each particle. The proposed approach is validated on function optimization problems from the standard literature; simulation results indicate that the approach is highly competitive, specifically in its better general convergence performance.
PSO-Based Smart Grid Application for Sizing and Optimization of Hybrid Renewable Energy Systems.
Mohamed, Mohamed A; Eltamaly, Ali M; Alolah, Abdulrahman I
2016-01-01
This paper introduces an optimal sizing algorithm for a hybrid renewable energy system using a smart grid load management application based on the available generation. The algorithm aims to maximize the system energy production and meet the load demand with minimum cost and highest reliability. The system is formed by a photovoltaic array, wind turbines, storage batteries, and a diesel generator as a backup energy source. Demand profile shaping, one of the smart grid applications, is introduced using load shifting based on load priority. Particle swarm optimization is used in this algorithm to determine the optimum size of the system components. The results obtained from this algorithm are compared with those from an iterative optimization technique to assess the adequacy of the proposed algorithm. The study is performed for some remote areas in Saudi Arabia and can be extended to similar regions around the world. Numerous valuable results are extracted from this study that could help researchers and decision makers. PMID:27513000
NASA Astrophysics Data System (ADS)
Zhang, Enlai; Hou, Liang; Shen, Chao; Shi, Yingliang; Zhang, Yaxiang
2016-01-01
To better solve the complex nonlinear problem between subjective sound quality evaluation results and objective psychoacoustic parameters, a method for predicting sound quality is put forward using a back propagation neural network (BPNN) based on particle swarm optimization (PSO), in which the initial weights and thresholds of the BP network neurons are optimized by PSO. To verify the effectiveness and accuracy of this approach, the noise signals of B-class vehicles from idle speed to 120 km/h, measured by an artificial head, are taken as the target. In addition, this paper describes a subjective evaluation experiment on the sound quality annoyance inside the vehicles through a grade evaluation method, by which the annoyance of each sample is obtained. With the Artemis software, the main objective psychoacoustic parameters of each noise sample are calculated: loudness, sharpness, roughness, fluctuation strength, tonality, articulation index (AI) and A-weighted sound pressure level. Furthermore, three evaluation models with the same artificial neural network (ANN) structure are built: the standard BPNN model, the genetic algorithm-back-propagation neural network (GA-BPNN) model and the PSO-back-propagation neural network (PSO-BPNN) model. After network training and evaluation prediction on the three models using the experimental data, the PSO-BPNN method proves to converge more quickly and improve the prediction accuracy of sound quality, which can further lay a foundation for the control of sound quality inside vehicles.
NASA Astrophysics Data System (ADS)
Astuty; Haryono, T.
2016-04-01
Transmission expansion planning (TEP) is one of the issues that must be faced when large-scale power generation is added to an existing power system. Optimization needs to be conducted to reach a solution that is optimal both technically and economically. Several mathematical methods have been applied to provide the optimal allocation of new transmission lines, such as genetic algorithms, particle swarm optimization and tabu search. This paper proposes a novel binary particle swarm optimization (NBPSO) to determine which transmission lines should be added to the existing power system. There are two scenarios in this simulation: the first considers transmission power losses and the second neglects them. The NBPSO method successfully obtains the optimal solution in a short computation time. Compared to the first scenario, the second requires fewer new lines but produces higher power losses, which make its overall cost far more expensive.
hydroPSO: A Versatile Particle Swarm Optimisation R Package for Calibration of Environmental Models
NASA Astrophysics Data System (ADS)
Zambrano-Bigiarini, M.; Rojas, R.
2012-04-01
Particle Swarm Optimisation (PSO) is a recent and powerful population-based stochastic optimisation technique inspired by the social behaviour of bird flocking, which shares similarities with other evolutionary techniques such as Genetic Algorithms (GA). In PSO, however, each individual of the population, known as a particle in PSO terminology, adjusts its flying trajectory on the multi-dimensional search space according to its own experience (best-known personal position) and that of its neighbours in the swarm (best-known local position). PSO has recently received a surge of attention given its flexibility, ease of programming, and low memory and CPU requirements. Despite these advantages, PSO may still get trapped in sub-optimal solutions, or suffer from swarm explosion or premature convergence. Thus, the development of enhancements to the "canonical" PSO is an active area of research. To date, several modifications to the canonical PSO have been proposed in the literature, resulting in a large and dispersed collection of codes and algorithms which might well be used for similar if not identical purposes. In this work we present hydroPSO, a platform-independent R package implementing several enhancements to the canonical PSO that we consider of utmost importance to bring this technique to the attention of a broader community of scientists and practitioners. hydroPSO is model-independent, allowing the user to interface any model code with the calibration engine without having to invest considerable effort in customizing PSO to a new calibration problem. Some of the controlling options to fine-tune hydroPSO are: four alternative topologies, several types of inertia weight, time-variant acceleration coefficients, time-variant maximum velocity, regrouping of particles when premature convergence is detected, different types of boundary conditions and many others. Additionally, hydroPSO implements recent PSO variants such as: Improved Particle Swarm
Particle swarm optimization for complex nonlinear optimization problems
NASA Astrophysics Data System (ADS)
Alexandridis, Alex; Famelis, Ioannis Th.; Tsitouras, Charalambos
2016-06-01
This work presents the application of a technique belonging to evolutionary computation, namely particle swarm optimization (PSO), to complex nonlinear optimization problems. To be more specific, a PSO optimizer is set up and applied to the derivation of Runge-Kutta pairs for the numerical solution of initial value problems. The effect of critical PSO operational parameters on the performance of the proposed scheme is thoroughly investigated.
PSO-based multiobjective optimization with dynamic population size and adaptive local archives.
Leong, Wen-Fung; Yen, Gary G
2008-10-01
Recently, various multiobjective particle swarm optimization (MOPSO) algorithms have been developed to efficiently and effectively solve multiobjective optimization problems. However, existing MOPSO designs generally rely on estimating a fixed population size sufficient to explore the search space without incurring excessive computational complexity. To address this issue, this paper proposes the integration of a dynamic population strategy within a multiple-swarm MOPSO. The proposed algorithm is named dynamic population multiple-swarm MOPSO. An additional feature, adaptive local archives, is designed to improve the diversity within each swarm. Performance metrics and benchmark test functions are used to examine the performance of the proposed algorithm compared with that of five selected MOPSOs and two selected multiobjective evolutionary algorithms. In addition, the computational cost of the proposed algorithm is quantified and compared with that of the selected MOPSOs. The proposed algorithm shows competitive results, with improved diversity and convergence at a lower computational cost. PMID:18784011
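Archive maintenance in any MOPSO rests on Pareto dominance. Below is a minimal sketch of the dominance test and the nondominated filter an adaptive local archive would apply; this is illustrative, not the authors' archive-update procedure.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse than b in every objective and strictly better in one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def nondominated(points):
    """Keep only the points not dominated by any other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]
```

In a multiple-swarm design, each swarm would run `nondominated` over its local archive after inserting new particle positions, so the archive always approximates that swarm's portion of the Pareto front.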
Magee, Glen I.
2000-08-03
Computers transfer data in a number of different ways. Whether through a serial port, a parallel port, over a modem, over an ethernet cable, or internally from a hard disk to memory, some data will be lost. To compensate for that loss, numerous error detection and correction algorithms have been developed. One of the most common error correction codes is the Reed-Solomon code, which is a special subset of BCH (Bose-Chaudhuri-Hocquenghem) linear cyclic block codes. In the AURA project, an unmanned aircraft sends the data it collects back to earth so it can be analyzed during flight and possible flight modifications made. To counter possible data corruption during transmission, the data is encoded using a multi-block Reed-Solomon implementation with a possibly shortened final block. In order to maximize the amount of data transmitted, it was necessary to reduce the computation time of a Reed-Solomon encoding to three percent of the processor's time. To achieve such a reduction, many code optimization techniques were employed. This paper outlines the steps taken to reduce the processing time of a Reed-Solomon encoding and the insight into modern optimization techniques gained from the experience.
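The kind of table-driven finite-field arithmetic typically used to speed up Reed-Solomon encoders can be sketched as follows; the primitive polynomial 0x11d and the log/antilog table layout are common illustrative choices, not details taken from the AURA implementation:

```python
# Build GF(2^8) antilog (EXP) and log (LOG) tables for the primitive
# polynomial x^8 + x^4 + x^3 + x^2 + 1 (0x11d), a common choice for RS codes.
EXP = [0] * 512
LOG = [0] * 256
_x = 1
for _i in range(255):
    EXP[_i] = _x
    LOG[_x] = _i
    _x <<= 1
    if _x & 0x100:          # reduce modulo the primitive polynomial
        _x ^= 0x11d
for _i in range(255, 512):  # doubled table avoids a modulo in gf_mul
    EXP[_i] = EXP[_i - 255]

def gf_mul(a, b):
    """Table-driven GF(256) multiply -- the classic speed-up over bitwise
    shift-and-reduce multiplication inside a Reed-Solomon encoder loop."""
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]
```

Replacing per-symbol polynomial reduction with two table lookups and an addition is exactly the sort of optimization that cuts encoder CPU time by an order of magnitude.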
NASA Astrophysics Data System (ADS)
Mahapatra, Prasant Kumar; Sethi, Spardha; Kumar, Amod
2015-10-01
In conventional tool positioning techniques, sensors embedded in the motion stages provide accurate tool position information. In this paper, a machine vision based system and image processing technique for motion measurement of a lathe tool from two-dimensional sequential images, captured using a charge-coupled device (CCD) camera with a resolution of 250 microns, has been described. An algorithm was developed to calculate the observed distance travelled by the tool from the captured images. As expected, error was observed in the value of the distance traversed by the tool calculated from these images. Optimization of errors due to the machine vision system, calibration, environmental factors, etc. in lathe tool movement was carried out using two soft computing techniques, namely, artificial immune system (AIS) and particle swarm optimization (PSO). The results show the better capability of AIS over PSO.
Mekhmoukh, Abdenour; Mokrani, Karim
2015-11-01
In this paper, a new image segmentation method based on Particle Swarm Optimization (PSO) and outlier rejection combined with level sets is proposed. A traditional approach to the segmentation of Magnetic Resonance (MR) images is the Fuzzy C-Means (FCM) clustering algorithm. The membership function of this conventional algorithm is sensitive to outliers and does not integrate the spatial information in the image. The algorithm is very sensitive to noise and inhomogeneities in the image; moreover, it depends on cluster center initialization. To improve the outlier rejection and to reduce the noise sensitivity of the conventional FCM clustering algorithm, a novel extended FCM algorithm for image segmentation is presented. In general, in the FCM algorithm the initial cluster centers are chosen randomly; with the help of the PSO algorithm the cluster centers are chosen optimally. Our algorithm also takes into consideration the spatial neighborhood information. These priors are used in the cost function to be optimized. For MR images, the resulting fuzzy clustering is used to set the initial level set contour. The results confirm the effectiveness of the proposed algorithm. PMID:26299609
NASA Astrophysics Data System (ADS)
Izzuan Jaafar, Hazriq; Mohd Ali, Nursabillilah; Mohamed, Z.; Asmiza Selamat, Nur; Faiz Zainal Abidin, Amar; Jamian, J. J.; Kassim, Anuar Mohamed
2013-12-01
This paper presents the development of optimal PID and PD controllers for controlling the nonlinear gantry crane system. A Binary Particle Swarm Optimization (BPSO) algorithm that uses a Priority-based Fitness Scheme is adopted to obtain the five optimal controller gains. The optimal gains are tested on a control structure that combines PID and PD controllers to examine system responses, including trolley displacement and payload oscillation. The dynamic model of the gantry crane system is derived using the Lagrange equation. Simulation is conducted within the Matlab environment to verify the performance of the system in terms of settling time (Ts), steady state error (SSE) and overshoot (OS). The proposed technique demonstrates that the implementation of the Priority-based Fitness Scheme in BPSO is effective and able to move the trolley as fast as possible to the various desired positions.
PSO-tuned PID controller for coupled tank system via priority-based fitness scheme
NASA Astrophysics Data System (ADS)
Jaafar, Hazriq Izzuan; Hussien, Sharifah Yuslinda Syed; Selamat, Nur Asmiza; Abidin, Amar Faiz Zainal; Aras, Mohd Shahrieel Mohd; Nasir, Mohamad Na'im Mohd; Bohari, Zul Hasrizal
2015-05-01
Coupled Tank Systems (CTS) are widely used in industrial applications, especially in chemical process industries. The overall process requires liquids to be pumped, stored in a tank, and pumped again to another tank. Nevertheless, the liquid level in each tank needs to be controlled and the flow between the two tanks must be regulated. This paper presents the development of an optimal PID controller for controlling the desired liquid level of the CTS. Two methods of the Particle Swarm Optimization (PSO) algorithm are tested for optimizing the PID controller parameters: standard Particle Swarm Optimization (PSO) and Particle Swarm Optimization with a Priority-based Fitness Scheme (PFPSO). Simulation is conducted within the Matlab environment to verify the performance of the system in terms of settling time (Ts), steady state error (SSE) and overshoot (OS). It has been demonstrated that the implementation of PSO via the Priority-based Fitness Scheme (PFPSO) for this system is a promising technique to control the desired liquid level and improve the system performance compared with standard PSO.
OPTIMIZING EXPOSURE MEASUREMENT TECHNIQUES
The research reported in this task description addresses one of a series of interrelated NERL tasks with the common goal of optimizing the predictive power of low cost, reliable exposure measurements for the planned Interagency National Children's Study (NCS). Specifically, we w...
Hybrid PSO-ASVR-based method for data fitting in the calibration of infrared radiometer
NASA Astrophysics Data System (ADS)
Yang, Sen; Li, Chengwei
2016-06-01
The present paper describes a hybrid particle swarm optimization-adaptive support vector regression (PSO-ASVR)-based method for data fitting in the calibration of infrared radiometer. The proposed hybrid PSO-ASVR-based method is based on PSO in combination with Adaptive Processing and Support Vector Regression (SVR). The optimization technique involves setting parameters in the ASVR fitting procedure, which significantly improves the fitting accuracy. However, its use in the calibration of infrared radiometer has not yet been widely explored. Bearing this in mind, the PSO-ASVR-based method, which is based on the statistical learning theory, is successfully used here to get the relationship between the radiation of a standard source and the response of an infrared radiometer. Main advantages of this method are the flexible adjustment mechanism in data processing and the optimization mechanism in a kernel parameter setting of SVR. Numerical examples and applications to the calibration of infrared radiometer are performed to verify the performance of PSO-ASVR-based method compared to conventional data fitting methods. PMID:27370427
Comparative analysis of PSO algorithms for PID controller tuning
NASA Astrophysics Data System (ADS)
Štimac, Goranka; Braut, Sanjin; Žigulić, Roberto
2014-09-01
The active magnetic bearing(AMB) suspends the rotating shaft and maintains it in levitated position by applying controlled electromagnetic forces on the rotor in radial and axial directions. Although the development of various control methods is rapid, PID control strategy is still the most widely used control strategy in many applications, including AMBs. In order to tune PID controller, a particle swarm optimization(PSO) method is applied. Therefore, a comparative analysis of particle swarm optimization(PSO) algorithms is carried out, where two PSO algorithms, namely (1) PSO with linearly decreasing inertia weight(LDW-PSO), and (2) PSO algorithm with constriction factor approach(CFA-PSO), are independently tested for different PID structures. The computer simulations are carried out with the aim of minimizing the objective function defined as the integral of time multiplied by the absolute value of error(ITAE). In order to validate the performance of the analyzed PSO algorithms, one-axis and two-axis radial rotor/active magnetic bearing systems are examined. The results show that PSO algorithms are effective and easily implemented methods, providing stable convergence and good computational efficiency of different PID structures for the rotor/AMB systems. Moreover, the PSO algorithms prove to be easily used for controller tuning in case of both SISO and MIMO system, which consider the system delay and the interference among the horizontal and vertical rotor axes.
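The two PSO variants compared above can be sketched as follows; the start/end inertia weights and the acceleration coefficients are the values commonly cited for LDW-PSO and Clerc-Kennedy CFA-PSO, assumed here for illustration:

```python
import math

def ldw_inertia(k, k_max, w_start=0.9, w_end=0.4):
    """Linearly decreasing inertia weight (LDW-PSO): large early for
    exploration, small late for exploitation."""
    return w_start - (w_start - w_end) * k / k_max

def constriction_factor(c1=2.05, c2=2.05):
    """Clerc-Kennedy constriction factor (CFA-PSO); requires phi = c1 + c2 > 4.
    The velocity update is scaled by chi instead of using an inertia weight."""
    phi = c1 + c2
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))
```

With the default c1 = c2 = 2.05, the constriction factor evaluates to roughly 0.7298, which guarantees convergent particle dynamics without explicit velocity clamping.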
Utilization of PSO algorithm in estimation of water level change of Lake Beysehir
NASA Astrophysics Data System (ADS)
Buyukyildiz, Meral; Tezel, Gulay
2015-12-01
In this study, unlike backpropagation algorithm which gets local best solutions, the usefulness of particle swarm optimization (PSO) algorithm, a population-based optimization technique with a global search feature, inspired by the behavior of bird flocks, in determination of parameters of support vector machines (SVM) and adaptive network-based fuzzy inference system (ANFIS) methods was investigated. For this purpose, the performances of hybrid PSO-ɛ support vector regression (PSO-ɛSVR) and PSO-ANFIS models were studied to estimate water level change of Lake Beysehir in Turkey. The change in water level was also estimated using generalized regression neural network (GRNN) method, an iterative training procedure. Root mean square error (RMSE), mean absolute error (MAE), and coefficient of determination (R 2) were used to compare the obtained results. Efforts were made to estimate water level change (L) using different input combinations of monthly inflow-lost flow (I), precipitation (P), evaporation (E), and outflow (O). According to the obtained results, the other methods except PSO-ANN generally showed significantly similar performances to each other. PSO-ɛSVR method with the values of minMAE = 0.0052 m, maxMAE = 0.04 m, and medianMAE = 0.0198 m; minRMSE = 0.0070 m, maxRMSE = 0.0518 m, and medianRMSE = 0.0241 m; minR 2 = 0.9169, maxR 2 = 0.9995, medianR 2 = 0.9909 for the I-P-E-O combination in testing period became superior in forecasting water level change of Lake Beysehir than the other methods. PSO-ANN models were the least successful models in all combinations.
NASA Astrophysics Data System (ADS)
Yang, Yue; Wen, Jian; Chen, Xiaofei
2015-07-01
In this paper, we apply particle swarm optimization (PSO), an artificial intelligence technique, to velocity calibration in microseismic monitoring. We ran simulations with four 1-D layered velocity models and three different initial model ranges. The results using the basic PSO algorithm were reliable and accurate for simple models, but unsuccessful for complex models. We propose the staged shrinkage strategy (SSS) for the PSO algorithm. The SSS-PSO algorithm produced robust inversion results and had a fast convergence rate. We investigated the effects of PSO's velocity clamping factor in terms of the algorithm reliability and computational efficiency. The velocity clamping factor had little impact on the reliability and efficiency of basic PSO, whereas it had a large effect on the efficiency of SSS-PSO. Reassuringly, SSS-PSO exhibits marginal reliability fluctuations, which suggests that it can be confidently implemented.
NASA Astrophysics Data System (ADS)
Fernández Martínez, Juan L.; García Gonzalo, Esperanza; Fernández Álvarez, José P.; Kuzma, Heidi A.; Menéndez Pérez, César O.
2010-05-01
PSO is an optimization technique inspired by the social behavior of individuals in nature (swarms) that has been successfully used in many different engineering fields. In addition, the PSO algorithm can be physically interpreted as a stochastic damped mass-spring system. This analogy has served to introduce the PSO continuous model and to deduce a whole family of PSO algorithms using different finite-differences schemes. These algorithms are characterized in terms of convergence by their respective first and second order stability regions. The performance of these new algorithms is first checked using synthetic functions showing a degree of ill-posedness similar to that found in many geophysical inverse problems having their global minimum located on a very narrow flat valley or surrounded by multiple local minima. Finally we present the application of these PSO algorithms to the analysis and solution of a VES inverse problem associated with a seawater intrusion in a coastal aquifer in southern Spain. PSO family members are successfully compared to other well known global optimization algorithms (binary genetic algorithms and simulated annealing) in terms of their respective convergence curves and the sea water intrusion depth posterior histograms.
Techniques for shuttle trajectory optimization
NASA Technical Reports Server (NTRS)
Edge, E. R.; Shieh, C. J.; Powers, W. F.
1973-01-01
The application of recently developed function-space Davidon-type techniques to the shuttle ascent trajectory optimization problem is discussed along with an investigation of the recently developed PRAXIS algorithm for parameter optimization. At the outset of this analysis, the major deficiency of the function-space algorithms was their potential storage problems. Since most previous analyses of the methods were with relatively low-dimension problems, no storage problems were encountered. However, in shuttle trajectory optimization, storage is a problem, and this problem was handled efficiently. Topics discussed include: the shuttle ascent model and the development of the particular optimization equations; the function-space algorithms; the operation of the algorithm and typical simulations; variable final-time problem considerations; and a modification of Powell's algorithm.
An effective PSO-based memetic algorithm for flow shop scheduling.
Liu, Bo; Wang, Ling; Jin, Yi-Hui
2007-02-01
This paper proposes an effective particle swarm optimization (PSO)-based memetic algorithm (MA) for the permutation flow shop scheduling problem (PFSSP) with the objective to minimize the maximum completion time, which is a typical non-deterministic polynomial-time (NP) hard combinatorial optimization problem. In the proposed PSO-based MA (PSOMA), both PSO-based searching operators and some special local searching operators are designed to balance the exploration and exploitation abilities. In particular, the PSOMA applies the evolutionary searching mechanism of PSO, which is characterized by individual improvement, population cooperation, and competition to effectively perform exploration. On the other hand, the PSOMA utilizes several adaptive local searches to perform exploitation. First, to make PSO suitable for solving PFSSP, a ranked-order value rule based on random key representation is presented to convert the continuous position values of particles to job permutations. Second, to generate an initial swarm with certain quality and diversity, the famous Nawaz-Enscore-Ham (NEH) heuristic is incorporated into the initialization of population. Third, to balance the exploration and exploitation abilities, after the standard PSO-based searching operation, a new local search technique named NEH_1 insertion is probabilistically applied to some good particles selected by using a roulette wheel mechanism with a specified probability. Fourth, to enrich the searching behaviors and to avoid premature convergence, a simulated annealing (SA)-based local search with multiple different neighborhoods is designed and incorporated into the PSOMA. Meanwhile, an effective adaptive meta-Lamarckian learning strategy is employed to decide which neighborhood to be used in SA-based local search. Finally, to further enhance the exploitation ability, a pairwise-based local search is applied after the SA-based search. Simulation results based on benchmarks demonstrate the effectiveness
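The ranked-order-value decoding and the flow-shop makespan objective it is evaluated against can be sketched as follows (a minimal illustration of the two ingredients, not the PSOMA implementation):

```python
def rov(position):
    """Ranked-order-value rule: map a continuous particle position to a job
    permutation -- the job with the smallest position value is scheduled first."""
    return sorted(range(len(position)), key=lambda j: position[j])

def makespan(perm, p):
    """Maximum completion time (Cmax) of a permutation flow shop, where
    p[j][m] is the processing time of job j on machine m."""
    n_machines = len(p[0])
    c = [0.0] * n_machines          # completion time of the last job per machine
    for j in perm:
        c[0] += p[j][0]
        for m in range(1, n_machines):
            c[m] = max(c[m], c[m - 1]) + p[j][m]
    return c[-1]
```

For example, `rov([0.8, 0.1, 0.5])` yields the permutation `[1, 2, 0]`, and evaluating `makespan` on that permutation gives PSO a real-valued fitness for an inherently combinatorial problem.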
Ye, Zhiwei; Wang, Mingwei; Hu, Zhengbing; Liu, Wei
2015-01-01
Image enhancement is an important procedure of image processing and analysis. This paper presents a new technique using a modified measure and blending of cuckoo search and particle swarm optimization (CS-PSO) for low contrast images to enhance image adaptively. In this way, contrast enhancement is obtained by global transformation of the input intensities; it employs incomplete Beta function as the transformation function and a novel criterion for measuring image quality considering three factors which are threshold, entropy value, and gray-level probability density of the image. The enhancement process is a nonlinear optimization problem with several constraints. CS-PSO is utilized to maximize the objective fitness criterion in order to enhance the contrast and detail in an image by adapting the parameters of a novel extension to a local enhancement technique. The performance of the proposed method has been compared with other existing techniques such as linear contrast stretching, histogram equalization, and evolutionary computing based image enhancement methods like backtracking search algorithm, differential search algorithm, genetic algorithm, and particle swarm optimization in terms of processing time and image quality. Experimental results demonstrate that the proposed method is robust and adaptive and exhibits the better performance than other methods involved in the paper. PMID:25784928
A Random Time-Varying Particle Swarm Optimization for the Real Time Location Systems
NASA Astrophysics Data System (ADS)
Zhu, Hui; Tanabe, Yuji; Baba, Takaaki
The particle swarm optimizer (PSO) is a stochastic, population-based optimization technique that can be applied to a wide range of applications. This paper presents a random time-varying PSO algorithm, called PSO-RTVIWAC, introducing random time-varying inertia weight and acceleration coefficients to significantly improve the performance of the original algorithms. The PSO-RTVIWAC method originates from the random inertia weight (PSO-RANDIW) and time-varying acceleration coefficients (PSO-TVAC) methods. Through efficient control of search and convergence to the global optimum solution, the PSO-RTVIWAC method is capable of tracking and optimizing the position estimate in highly nonlinear real-time location systems (RTLS). Experimental results are compared with three previous PSO approaches from the literature, showing that the new optimizer significantly outperforms previous approaches. Employing only a few particles and iterations, reasonably good positioning accuracy is obtained with the PSO-RTVIWAC method. This property makes the PSO-RTVIWAC method more attractive since the computational efficiency is improved considerably, i.e. the computation can be completed in an extremely short time, which is crucial for the RTLS. By implementing a hardware design of PSO-RTVIWAC, the computations can simultaneously be performed in hardware to reduce the processing time. Due to the small number of particles and iterations, hardware resources are saved and the area cost is reduced in the FPGA implementation. An improvement in positioning accuracy is observed with the PSO-RTVIWAC method, compared with Taylor Series Expansion (TSE) and Genetic Algorithm (GA). Our experiments on the PSO-RTVIWAC to track and optimize the position estimate have demonstrated that it is especially effective in dealing with optimization functions in nonlinear dynamic environments.
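The two ingredients that PSO-RTVIWAC combines can be sketched as follows; the boundary values are the ones commonly used in the PSO-RANDIW and PSO-TVAC literature, assumed here for illustration:

```python
import random

def randiw():
    """Random inertia weight (PSO-RANDIW): drawn uniformly in [0.5, 1.0)
    each iteration, so the swarm alternates between exploring and exploiting."""
    return 0.5 + random.random() / 2.0

def tvac(k, k_max, c1_i=2.5, c1_f=0.5, c2_i=0.5, c2_f=2.5):
    """Time-varying acceleration coefficients (PSO-TVAC): the cognitive
    coefficient c1 decreases while the social coefficient c2 increases
    over the run, shifting emphasis from self-experience to the swarm."""
    frac = k / k_max
    c1 = c1_i + (c1_f - c1_i) * frac
    c2 = c2_i + (c2_f - c2_i) * frac
    return c1, c2
```

At iteration 0 the particle is dominated by its own best (c1 = 2.5, c2 = 0.5); by the final iteration the weighting has reversed, which speeds convergence onto the global best near the end of the run.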
NASA Astrophysics Data System (ADS)
Kar, Subhajit; Sharma, Kaushik Das
2010-10-01
System identification is a ubiquitous necessity for successful applications in various fields. The area of system identification can be characterized by a small number of leading principles, e.g. to look for sustainable descriptions by proper decisions in the triangle of model complexity, information content in the data, and effective validation. Particle Swarm Optimization (PSO) is a stochastic, population-based optimization algorithm; many variants of PSO have been developed since its introduction, including constrained, multiobjective, and discrete or combinatorial versions, and applications have been developed using PSO in many fields. The basic PSO algorithm implicitly utilizes a fully connected neighborhood topology. However, local neighborhood models have also long been proposed for PSO, where each particle has access to the information corresponding to its immediate neighbors, according to a certain swarm topology. In this local neighborhood model of PSO, particles have information only of their own and their nearest neighbors' bests, rather than that of the entire swarm. In the present work the basic PSO method and two of its local neighborhood variants are utilized for determining the optimal parameters of a DC motor. The results obtained from the simulation study demonstrate the usefulness of the proposed methodology.
Early Mission Design of Transfers to Halo Orbits via Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Abraham, Andrew J.; Spencer, David B.; Hart, Terry J.
2016-03-01
Particle Swarm Optimization (PSO) is used to prune the search space of a low-thrust trajectory transfer from a high-altitude, Earth orbit to a Lagrange point orbit in the Earth-Moon system. Unlike a gradient based approach, this evolutionary PSO algorithm is capable of avoiding undesirable local minima. The PSO method is extended to a "local" version and uses a two dimensional search space that is capable of reducing the computation run-time by an order of magnitude when compared with published work. A technique for choosing appropriate PSO parameters is demonstrated and an example of an optimized trajectory is discussed.
Classification of Two Class Motor Imagery Tasks Using Hybrid GA-PSO Based K-Means Clustering.
Suraj; Tiwari, Purnendu; Ghosh, Subhojit; Sinha, Rakesh Kumar
2015-01-01
Transferring the brain-computer interface (BCI) from laboratory conditions to real-world applications requires the BCI to operate asynchronously, without any time constraint. The high level of dynamism in the electroencephalogram (EEG) signal motivates the use of evolutionary algorithms (EA). Motivated by these two facts, in this work a hybrid GA-PSO based K-means clustering technique has been used to distinguish two-class motor imagery (MI) tasks. The proposed hybrid GA-PSO based K-means clustering is found to outperform genetic algorithm (GA) and particle swarm optimization (PSO) based K-means clustering techniques in terms of both accuracy and execution time. The lower execution time of the hybrid GA-PSO technique makes it suitable for real-time BCI application. Time-frequency representation (TFR) techniques have been used to extract the features of the signal under investigation. TFR-based features are extracted and, relying on the concepts of event-related synchronization (ERS) and desynchronization (ERD), the feature vector is formed. PMID:25972896
Optimal multiobjective design of digital filters using spiral optimization technique.
Ouadi, Abderrahmane; Bentarzi, Hamid; Recioui, Abdelmadjid
2013-01-01
The multiobjective design of digital filters using spiral optimization technique is considered in this paper. This new optimization tool is a metaheuristic technique inspired by the dynamics of spirals. It is characterized by its robustness, immunity to local optima trapping, relative fast convergence and ease of implementation. The objectives of filter design include matching some desired frequency response while having minimum linear phase; hence, reducing the time response. The results demonstrate that the proposed problem solving approach blended with the use of the spiral optimization technique produced filters which fulfill the desired characteristics and are of practical use. PMID:24083108
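A single step of the spiral dynamics referred to above can be sketched in two dimensions as follows; the contraction rate r and rotation angle theta are illustrative choices, not the values used in the paper:

```python
import math

def spiral_step(x, center, r=0.95, theta=math.pi / 4):
    """One 2-D spiral-dynamics step (Tamura-Yasuda model): rotate the point
    about the current best solution by theta and contract toward it by r,
    so search points trace a logarithmic spiral into the best-known region."""
    dx, dy = x[0] - center[0], x[1] - center[1]
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    return (center[0] + r * (cos_t * dx - sin_t * dy),
            center[1] + r * (sin_t * dx + cos_t * dy))
```

Because 0 < r < 1, each step strictly shrinks the distance to the spiral center while the rotation keeps sweeping new directions, which is the source of the method's resistance to local-optimum trapping.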
Composite Particle Swarm Optimizer With Historical Memory for Function Optimization.
Li, Jie; Zhang, JunQi; Jiang, ChangJun; Zhou, MengChu
2015-10-01
Particle swarm optimization (PSO) algorithm is a population-based stochastic optimization technique. It is characterized by the collaborative search in which each particle is attracted toward the global best position (gbest) in the swarm and its own best position (pbest). However, all of particles' historical promising pbests in PSO are lost except their current pbests. In order to solve this problem, this paper proposes a novel composite PSO algorithm, called historical memory-based PSO (HMPSO), which uses an estimation of distribution algorithm to estimate and preserve the distribution information of particles' historical promising pbests. Each particle has three candidate positions, which are generated from the historical memory, particles' current pbests, and the swarm's gbest. Then the best candidate position is adopted. Experiments on 28 CEC2013 benchmark functions demonstrate the superiority of HMPSO over other algorithms. PMID:26390177
A non-linear UAV altitude PSO-PD control
NASA Astrophysics Data System (ADS)
Orlando, Calogero
2015-12-01
In this work, a nonlinear model based approach is presented for the altitude stabilization of a hexarotor unmanned aerial vehicle (UAV). The mathematical model and control of the hexarotor airframe are presented. To stabilize the system along the vertical direction, a Proportional Derivative (PD) controller is considered. A particle swarm optimization (PSO) approach is used to select the optimal parameters of the control algorithm, taking into account different objective functions. Simulations are performed on the nonlinear system to show how the PSO-tuned PD controller drives the error of the position along the Z earth direction to zero.
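The PD altitude law whose gains a PSO would tune can be sketched as follows; the hover-feedforward form, the mass value, and the gain names are hypothetical illustrations, not the paper's model:

```python
def pd_thrust(z_ref, z, z_dot, kp, kd, mass=1.5, g=9.81):
    """PD altitude control for a multirotor (hypothetical sketch):
    gravity-compensating feedforward m*g plus a PD correction on the
    altitude error, with derivative action on the measured climb rate."""
    e = z_ref - z                 # altitude error
    return mass * g + kp * e - kd * z_dot
```

A PSO tuner would treat (kp, kd) as a 2-D particle position and minimize a tracking-error objective (e.g. ITAE) over simulated responses, exactly as in the PID-tuning abstracts above.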
NASA Astrophysics Data System (ADS)
Jain, Narender Kumar; Nangia, Uma; Jain, Aishwary
2016-06-01
In this paper, the multiobjective economic load dispatch (MELD) problem considering generation cost and transmission losses has been formulated using the priority goal programming (PGP) technique. In this formulation, the equality constraint has been handled by the inclusion of a penalty parameter K; it has been observed that fixing its value at 1,000 keeps the equality constraint within limits. The non-inferior set for the IEEE 5, 14 and 30-bus systems has been generated by the Particle Swarm Optimization (PSO) technique. The best compromise solution has been chosen as the one which gives equal percentage savings for both objectives.
Multidisciplinary design optimization using multiobjective formulation techniques
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi; Pagaldipti, Narayanan S.
1995-01-01
This report addresses the development of a multidisciplinary optimization procedure using an efficient semi-analytical sensitivity analysis technique and multilevel decomposition for the design of aerospace vehicles. A semi-analytical sensitivity analysis procedure is developed for calculating computational grid sensitivities and aerodynamic design sensitivities. Accuracy and efficiency of the sensitivity analysis procedure are established through comparison of the results with those obtained using a finite difference technique. The developed sensitivity analysis technique is then used within a multidisciplinary optimization procedure for designing aerospace vehicles. The optimization problem, with the integration of aerodynamics and structures, is decomposed into two levels. Optimization is performed for improved aerodynamic performance at the first level and improved structural performance at the second level. Aerodynamic analysis is performed by solving the three-dimensional parabolized Navier-Stokes equations. A nonlinear programming technique and an approximate analysis procedure are used for optimization. The procedure developed is applied to design the wing of a high-speed aircraft. Results obtained show significant improvements in the aircraft aerodynamic and structural performance when compared to a reference or baseline configuration. The use of the semi-analytical sensitivity technique provides significant computational savings.
Evolutional Ant Colony Method Using PSO
NASA Astrophysics Data System (ADS)
Morii, Nobuto; Aiyoshi, Eitarou
The ant colony method is one of the heuristic methods capable of solving the traveling salesman problem (TSP), in which a good tour is generated by the artificial ants' probabilistic behavior. However, the generated tour length depends on the parameters describing the ants' behavior, and the best parameters for the problem to be solved are unknown. In this technical note, an evolutionary strategy is presented to find the best parameters of the ant colony by using Particle Swarm Optimization (PSO) in the parameter space. Numerical simulations for benchmarks demonstrate the effectiveness of the evolutional ant colony method.
Multiple Sequence Alignment Based on Chaotic PSO
NASA Astrophysics Data System (ADS)
Lei, Xiu-Juan; Sun, Jing-Jing; Ma, Qian-Zhi
This paper introduces a new improved algorithm called chaotic PSO (CPSO), based on chaos optimization, to solve multiple sequence alignment. First, chaotic variables are generated between 0 and 1 when initializing the population, so that the particles are distributed uniformly in the solution space. Second, chaotic sequences are generated using the Logistic mapping function to perform a chaotic search and strengthen the diversity of the population. Simulation results on several benchmark data sets from BAliBase show that the improved algorithm is effective and performs well on data sets with different degrees of similarity.
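The Logistic-map machinery used for initialization can be sketched directly. The seed value, population size and search box below are arbitrary illustrations; only the map x_{k+1} = r·x_k·(1−x_k) with r = 4 is from the standard chaos-optimization setup.

```python
def logistic_sequence(x0=0.7, n=10, r=4.0):
    # chaotic sequence in (0, 1) from the Logistic map x_{k+1} = r*x_k*(1-x_k);
    # r = 4 is the fully chaotic regime. Avoid seeding at the fixed points
    # (e.g. 0, 0.5, 0.75, 1), which collapse the orbit.
    seq, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        seq.append(x)
    return seq

def chaotic_init(lo, hi, pop, dim, x0=0.7):
    # map the (0, 1) chaotic values onto the search box [lo, hi]^dim,
    # one value per particle coordinate
    flat = logistic_sequence(x0, pop * dim)
    return [[lo + flat[i * dim + d] * (hi - lo) for d in range(dim)]
            for i in range(pop)]

swarm = chaotic_init(-10.0, 10.0, pop=5, dim=3)
```

The same `logistic_sequence` stream can replace the uniform random numbers inside the PSO velocity update to drive the chaotic search phase the abstract mentions.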
Particle Swarm Optimization for inverse modeling of solute transport in fractured gneiss aquifer
NASA Astrophysics Data System (ADS)
Abdelaziz, Ramadan; Zambrano-Bigiarini, Mauricio
2014-08-01
Particle Swarm Optimization (PSO) has received considerable attention as a global optimization technique from scientists of different disciplines around the world. In this article, we illustrate how to use PSO for inverse modeling of a coupled flow and transport groundwater model (MODFLOW2005-MT3DMS) in a fractured gneiss aquifer. In particular, the hydroPSO R package is used as the optimization engine, because it has been specifically designed to calibrate environmental, hydrological and hydrogeological models. In addition, hydroPSO implements the latest Standard Particle Swarm Optimization algorithm (SPSO-2011), with an adaptive random topology and rotational invariance constituting the main advancements over previous PSO versions. A tracer test conducted in the experimental field at TU Bergakademie Freiberg (Germany) is used as a case study. A double-porosity approach is used to simulate the solute transport in the fractured gneiss aquifer. Tracer concentrations obtained with hydroPSO were in good agreement with the corresponding observations, as measured by a high value of the coefficient of determination and a low sum of squared residuals. Several graphical outputs automatically generated by hydroPSO provided useful insights to assess the quality of the calibration results. It was found that hydroPSO required a small number of model runs to reach the region of the global optimum, and it proved to be both an effective and efficient optimization technique to calibrate the movement of solute transport over time in a fractured aquifer. In addition, the parallel feature of hydroPSO allowed the total computation time of the inverse modeling process to be reduced to an eighth of that required without it. This work provides a first attempt to demonstrate the capability and versatility of hydroPSO to work as an optimizer of a coupled flow and transport model for contaminant migration. PMID:25035936
Chen, Shyi-Ming; Hsin, Wen-Chyuan
2015-07-01
In this paper, we propose a new weighted fuzzy interpolative reasoning method for sparse fuzzy rule-based systems based on the slopes of fuzzy sets. We also propose a particle swarm optimization (PSO)-based weights-learning algorithm to automatically learn the optimal weights of the antecedent variables of fuzzy rules for weighted fuzzy interpolative reasoning. We apply the proposed method, together with the PSO-based weights-learning algorithm, to the computer activity prediction problem, multivariate regression problems, and time series prediction problems. The experimental results show that it outperforms the existing methods on all three kinds of problems. PMID:25204003
A technique for optimizing grid blocks
NASA Technical Reports Server (NTRS)
Dannenhoffer, John F., III
1995-01-01
A new technique for automatically combining grid blocks of a given block-structured grid into logically-rectangular clusters which are 'optimal' is presented. This technique uses the simulated annealing optimization method to reorganize the blocks into an optimum configuration, that is, one which minimizes a user-defined objective function such as the number of clusters or the differential in the sizes of all the clusters. The clusters which result from applying the technique to two different two-dimensional configurations are presented for a variety of objective function definitions. In all cases, the automatically-generated clusters are significantly better than the original clusters. While this new technique can be applied to block-structured grids generated from any source, it is particularly useful for operating on block-structured grids containing many blocks, such as those produced by the emerging automatic block-structured grid generators.
PSO algorithm enhanced with Lozi Chaotic Map - Tuning experiment
Pluhacek, Michal; Senkerik, Roman; Zelinka, Ivan
2015-03-10
In this paper, the effect of tuning the control parameters of the Lozi Chaotic Map, employed as a chaotic pseudo-random number generator for the particle swarm optimization algorithm, is investigated. Three different benchmark functions are selected from the IEEE CEC 2013 competition benchmark set. The Lozi map is extensively tuned and the performance of PSO is evaluated.
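A chaotic pseudo-random number generator built on the Lozi map can be sketched as below. The map itself (x_{k+1} = 1 − a·|x_k| + y_k, y_{k+1} = b·x_k, classically a = 1.7, b = 0.5 — exactly the control parameters the tuning experiment varies) is standard; the seed values and the affine rescaling to [0, 1] are assumptions of this sketch, not the paper's normalization.

```python
def lozi_prng(n, a=1.7, b=0.5, x0=0.1, y0=0.1):
    # pseudo-random numbers in [0, 1] from the Lozi map:
    #   x_{k+1} = 1 - a*|x_k| + y_k,   y_{k+1} = b*x_k
    # The attractor for (a, b) = (1.7, 0.5) lies well inside [-2, 2],
    # so (x + 2) / 4 rescales it; the clamp guards against transients.
    out, x, y = [], x0, y0
    for _ in range(n):
        x, y = 1.0 - a * abs(x) + y, b * x
        out.append(min(1.0, max(0.0, (x + 2.0) / 4.0)))
    return out

# a drop-in replacement for random.random() inside a PSO velocity update
stream = lozi_prng(1000)
```

Tuning then amounts to sweeping `a` and `b` and measuring the resulting PSO performance on the benchmark set.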
Multiobjective optimization techniques for structural design
NASA Technical Reports Server (NTRS)
Rao, S. S.
1984-01-01
The multiobjective programming techniques are important in the design of complex structural systems whose quality depends generally on a number of different and often conflicting objective functions which cannot be combined into a single design objective. The applicability of multiobjective optimization techniques is studied with reference to simple design problems. Specifically, the parameter optimization of a cantilever beam with a tip mass and a three-degree-of-freedom vibration isolation system and the trajectory optimization of a cantilever beam are considered. The solutions of these multicriteria design problems are attempted by using global criterion, utility function, game theory, goal programming, goal attainment, bounded objective function, and lexicographic methods. It has been observed that the game theory approach required the maximum computational effort, but it yielded better optimum solutions with proper balance of the various objective functions in all the cases.
Optimization techniques for integrating spatial data
Herzfeld, U.C.; Merriam, D.F.
1995-01-01
Two optimization techniques to predict a spatial variable from any number of related spatial variables are presented. The applicability of the two different methods for petroleum-resource assessment is tested in a mature oil province of the Midcontinent (USA). The information on petroleum productivity, usually not directly accessible, is related indirectly to geological, geophysical, petrographical, and other observable data. This paper presents two approaches based on construction of a multivariate spatial model from the available data to determine a relationship for prediction. In the first approach, the variables are combined into a spatial model by an algebraic map-comparison/integration technique. Optimal weights for the map comparison function are determined by the Nelder-Mead downhill simplex algorithm in multidimensions. Geologic knowledge is necessary to provide a first guess of weights to start the automatization, because the solution is not unique. In the second approach, active set optimization for linear prediction of the target under positivity constraints is applied. Here, the procedure seems to select one variable from each data type (structure, isopachous, and petrophysical), eliminating data redundancy. Automating the determination of optimum combinations of different variables by applying optimization techniques is a valuable extension of the algebraic map-comparison/integration approach to analyzing spatial data. Because of the capability of handling multivariate data sets and partial retention of geographical information, the approaches can be useful in mineral-resource exploration. © 1995 International Association for Mathematical Geology.
Software for the grouped optimal aggregation technique
NASA Technical Reports Server (NTRS)
Brown, P. M.; Shaw, G. W. (Principal Investigator)
1982-01-01
The grouped optimal aggregation technique produces minimum variance, unbiased estimates of acreage and production for countries, zones (states), or any designated collection of acreage strata. It uses yield predictions, historical acreage information, and direct acreage estimates from satellite data. The acreage strata are grouped in such a way that the ratio model over historical acreage provides a smaller variance than if the model were applied to each individual stratum. An optimal weighting matrix based on historical acreages provides the link between incomplete direct acreage estimates and the total, current acreage estimate.
Language abstractions for low level optimization techniques
NASA Astrophysics Data System (ADS)
Dévai, Gergely; Gera, Zoltán; Kelemen, Zoltán
2012-09-01
In the case of performance-critical applications, programmers are often forced to write code at a low abstraction level. This leads to programs that are hard to develop and maintain because the program text is mixed up with low-level optimization tricks and is far from the algorithm it implements. Even if compilers are smart nowadays and provide the user with many automatically applied optimizations, practice shows that in some cases it is hopeless to optimize the program automatically without the programmer's knowledge. A complementary approach is to allow the programmer to fine-tune the program but provide language features that make the optimization easier. These are language abstractions that make optimization techniques explicit without adding too much syntactic noise to the program text. This paper presents such language abstractions for two well-known optimizations: bitvectors and SIMD (Single Instruction Multiple Data). The language features are implemented in the embedded domain-specific language Feldspar, which is specifically tailored for digital signal processing applications. While we present these language elements as part of Feldspar, the ideas behind them are general enough to be applied in other language definition projects as well.
The contribution of particle swarm optimization to three-dimensional slope stability analysis.
Kalatehjari, Roohollah; Rashid, Ahmad Safuan A; Ali, Nazri; Hajihassani, Mohsen
2014-01-01
Over the last few years, particle swarm optimization (PSO) has been extensively applied in various areas of geotechnical engineering, including slope stability analysis. However, this contribution was limited to two-dimensional (2D) slope stability analysis. This paper applied PSO to the three-dimensional (3D) slope stability problem to determine the critical slip surface (CSS) of soil slopes. A detailed description of the adopted PSO was presented to provide a good basis for more contributions of this technique to the field of 3D slope stability problems. A general rotating ellipsoid shape was introduced as the specific particle for 3D slope stability analysis. A detailed sensitivity analysis was designed and performed to find the optimum values of the parameters of PSO. Example problems were used to evaluate the applicability of PSO in determining the CSS of 3D slopes. The first example presented a comparison between the results of PSO and the PLAXI-3D finite element software, and the second example compared the ability of PSO to determine the CSS of 3D slopes with other optimization methods from the literature. The results demonstrated the efficiency and effectiveness of PSO in determining the CSS of 3D soil slopes. PMID:24991652
NASA Astrophysics Data System (ADS)
Shabbir, Faisal; Omenzetter, Piotr
2014-04-01
Much effort is devoted nowadays to deriving accurate finite element (FE) models to be used for structural health monitoring, damage detection and assessment. However, formation of a FE model representative of the original structure is a difficult task. Model updating is a branch of optimization which calibrates the FE model by comparing the modal properties of the actual structure with those of the FE predictions. As the number of experimental measurements is usually much smaller than the number of uncertain parameters, and, consequently, not all uncertain parameters are selected for model updating, different local minima may exist in the solution space. Experimental noise further exacerbates the problem. The attainment of a global solution in a multi-dimensional search space is a challenging problem. Global optimization algorithms (GOAs) have received interest in the previous decade to solve this problem, but no GOA can ensure the detection of the global minimum either. To counter this problem, a combination of a GOA with the sequential niche technique (SNT), which systematically searches the whole solution space, has been proposed in this research. A dynamically tested full-scale pedestrian bridge is taken as a case study. Two different GOAs, namely particle swarm optimization (PSO) and genetic algorithm (GA), are investigated in combination with SNT. The results of these GOAs are compared in terms of their efficiency in detecting global minima. The systematic search enables different solutions in the search space to be found, thus increasing the confidence of finding the global minimum.
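The core of a sequential niche technique is a derating wrapper around the objective: once a minimum is found, the fitness landscape is modified near it so the next GOA run is pushed toward unexplored basins. The conical penalty bump, its radius and height below are one common choice, assumed for illustration; the paper's exact derating function may differ.

```python
def sphere(x):
    # toy objective with its global minimum at the origin
    return sum(v * v for v in x)

def derate(f, found_minima, radius=0.5, height=1000.0):
    # SNT for minimization: add a penalty bump of the given height around
    # each already-located minimum, decaying linearly to zero at `radius`
    def g(x):
        val = f(x)
        for m in found_minima:
            d = sum((xi - mi) ** 2 for xi, mi in zip(x, m)) ** 0.5
            if d < radius:
                val += height * (radius - d) / radius
        return val
    return g

# after one GOA run locates the origin, the derated objective repels it
g = derate(sphere, [[0.0, 0.0]])
```

The next PSO or GA run then optimizes `g` instead of `f`; repeating this find-and-derate cycle is what makes the search of the whole solution space systematic.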
McDaniel, R D
1999-01-01
The Balanced Budget Act of 1997 established the new Medicare+Choice program which provides a variety of alternatives to traditional Medicare Part A and Part B, including the provider sponsored organization (PSO). Over the next several years, a significant number of organizations will consider becoming a PSO. The decision requires a thorough and detailed review of critical success factors. This article outlines those factors and defines some components of a successful PSO. PMID:10539339
Machine Learning Techniques in Optimal Design
NASA Technical Reports Server (NTRS)
Cerbone, Giuseppe
1992-01-01
Many important applications can be formalized as constrained optimization tasks. For example, we are studying the engineering domain of two-dimensional (2-D) structural design. In this task, the goal is to design a structure of minimum weight that bears a set of loads. A solution to a design problem in which there is a single load (L) and two stationary support points (S1 and S2) consists of four members, E1, E2, E3, and E4 that connect the load to the support points is discussed. In principle, optimal solutions to problems of this kind can be found by numerical optimization techniques. However, in practice [Vanderplaats, 1984] these methods are slow and they can produce different local solutions whose quality (ratio to the global optimum) varies with the choice of starting points. Hence, their applicability to real-world problems is severely restricted. To overcome these limitations, we propose to augment numerical optimization by first performing a symbolic compilation stage to produce: (a) objective functions that are faster to evaluate and that depend less on the choice of the starting point and (b) selection rules that associate problem instances to a set of recommended solutions. These goals are accomplished by successive specializations of the problem class and of the associated objective functions. In the end, this process reduces the problem to a collection of independent functions that are fast to evaluate, that can be differentiated symbolically, and that represent smaller regions of the overall search space. However, the specialization process can produce a large number of sub-problems. This is overcome by deriving inductively selection rules which associate problems to small sets of specialized independent sub-problems. Each set of candidate solutions is chosen to minimize a cost function which expresses the tradeoff between the quality of the solution that can be obtained from the sub-problem and the time it takes to produce it. The overall solution
Cache Energy Optimization Techniques For Modern Processors
Mittal, Sparsh
2013-01-01
and veterans in the field of cache power management. It will help graduate students, CAD tool developers and designers in understanding the need of energy efficiency in modern computing systems. Further, it will be useful for researchers in gaining insights into algorithms and techniques for micro-architectural and system-level energy optimization using dynamic cache reconfiguration. We sincerely believe that the ``food for thought'' presented in this book will inspire the readers to develop even better ideas for designing ``green'' processors of tomorrow.
Performance of Multi-chaotic PSO on a shifted benchmark functions set
Pluhacek, Michal; Senkerik, Roman; Zelinka, Ivan
2015-03-10
In this paper, the performance of the Multi-chaotic PSO algorithm is investigated on two shifted benchmark functions. The purpose of shifted benchmark functions is to simulate time-variant real-world problems. The results of the chaotic PSO are compared with the canonical version of the algorithm. It is concluded that the multi-chaotic approach can lead to better results in the optimization of shifted functions.
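Shifting a benchmark is a simple wrapper: g(x) = f(x − o) moves the optimum from the origin to the shift vector o, so an optimizer can no longer exploit a centered optimum. A minimal sketch (the sphere function and the shift vector are illustrative choices, not the paper's benchmark set):

```python
def sphere(x):
    # base benchmark: minimum 0 at the origin
    return sum(v * v for v in x)

def shifted(f, shift):
    # shifted benchmark g(x) = f(x - o): the optimum moves to `shift`
    return lambda x: f([xi - oi for xi, oi in zip(x, shift)])

g = shifted(sphere, [3.0, -2.0])   # optimum of g is now at (3, -2)
```

Re-generating the shift between runs (or over time) is one way to emulate the time-variant problems the abstract mentions.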
Global Optimization Techniques for Fluid Flow and Propulsion Devices
NASA Technical Reports Server (NTRS)
Shyy, Wei; Papila, Nilay; Vaidyanathan, Raj; Tucker, Kevin; Griffin, Lisa; Dorney, Dan; Huber, Frank; Tran, Ken; Turner, James E. (Technical Monitor)
2001-01-01
This viewgraph presentation gives an overview of global optimization techniques for fluid flow and propulsion devices. Details are given on the need, characteristics, and techniques for global optimization. The techniques include response surface methodology (RSM), neural networks and back-propagation neural networks, design of experiments, face centered composite design (FCCD), orthogonal arrays, outlier analysis, and design optimization.
Solving initial and boundary value problems using learning automata particle swarm optimization
NASA Astrophysics Data System (ADS)
Nemati, Kourosh; Mariyam Shamsuddin, Siti; Darus, Maslina
2015-05-01
In this article, the particle swarm optimization (PSO) algorithm is modified to use the learning automata (LA) technique for solving initial and boundary value problems. A constrained problem is converted into an unconstrained problem using a penalty method to define an appropriate fitness function, which is optimized using the LA-PSO method. This method analyses a large number of candidate solutions of the unconstrained problem with the LA-PSO algorithm to minimize an error measure, which quantifies how well a candidate solution satisfies the governing ordinary differential equations (ODEs) or partial differential equations (PDEs) and the boundary conditions. The approach is capable of solving linear and nonlinear ODEs, systems of ordinary differential equations, and linear and nonlinear PDEs. Combining the PSO algorithm with the LA technique improved both the computational efficiency and the accuracy of solving initial and boundary value problems. Numerical results demonstrate the high accuracy and efficiency of the proposed method.
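The penalty-method fitness the abstract describes can be made concrete for a toy problem: approximate y for y′ = y, y(0) = 1 on [0, 1] by a cubic polynomial, and score a candidate coefficient vector by the mean squared ODE residual plus a penalty on the initial condition. The cubic ansatz, grid size and penalty weight are assumptions of this sketch, not the paper's formulation.

```python
def ode_fitness(coeffs, n=20, penalty=100.0):
    # fitness for y' = y, y(0) = 1 on [0, 1], with the candidate solution
    # y(t) = c0 + c1*t + c2*t^2 + c3*t^3; the boundary condition enters as
    # a penalty term, turning the constrained problem into an unconstrained one
    c0, c1, c2, c3 = coeffs
    err = 0.0
    for k in range(n + 1):
        t = k / n
        y = c0 + c1 * t + c2 * t * t + c3 * t ** 3
        dy = c1 + 2 * c2 * t + 3 * c3 * t * t
        err += (dy - y) ** 2            # residual of the governing ODE
    err /= (n + 1)
    err += penalty * (c0 - 1.0) ** 2    # boundary condition y(0) = 1
    return err
```

Any PSO variant can then minimize `ode_fitness` over the coefficient space; the Taylor coefficients of e^t, (1, 1, 1/2, 1/6), already score near zero, which is what the optimizer should rediscover.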
An Integrated Method Based on PSO and EDA for the Max-Cut Problem.
Lin, Geng; Guan, Jian
2016-01-01
The max-cut problem is an NP-hard combinatorial optimization problem with many real-world applications. In this paper, we propose an integrated method based on particle swarm optimization and estimation of distribution algorithm (PSO-EDA) for solving the max-cut problem. The integrated algorithm overcomes the shortcomings of particle swarm optimization and estimation of distribution algorithm. To enhance the performance of the PSO-EDA, a fast local search procedure is applied. In addition, a path relinking procedure is developed to intensify the search. To evaluate the performance of PSO-EDA, extensive experiments were carried out on two sets of benchmark instances with 800 to 20,000 vertices from the literature. Computational results and comparisons show that PSO-EDA significantly outperforms the existing PSO-based and EDA-based algorithms for the max-cut problem. Compared with other best performing algorithms, PSO-EDA is able to find very competitive results in terms of solution quality. PMID:26989404
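The objective such a heuristic scores, and the kind of fast one-flip local search it applies to each candidate, can be sketched in a few lines. The greedy one-flip scheme below is a generic improvement step assumed for illustration; the paper's local search procedure may be more elaborate.

```python
def cut_value(edges, assignment):
    # weight of the cut induced by a 0/1 vertex assignment;
    # edges is a list of (u, v, w) triples
    return sum(w for u, v, w in edges if assignment[u] != assignment[v])

def local_search(edges, assignment):
    # one-flip local search: keep flipping any vertex whose flip
    # increases the cut, until no single flip improves it
    best = cut_value(edges, assignment)
    improved = True
    while improved:
        improved = False
        for v in range(len(assignment)):
            assignment[v] ^= 1
            val = cut_value(edges, assignment)
            if val > best:
                best, improved = val, True
            else:
                assignment[v] ^= 1   # undo a non-improving flip
    return assignment, best

# triangle with unit weights: the best cut separates one vertex, value 2
edges = [(0, 1, 1), (1, 2, 1), (0, 2, 1)]
```

In a PSO-EDA hybrid, each particle's bit-vector would be passed through such a local search before updating pbest/gbest and the EDA's probability model.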
A PSO-Based Approach for Pathway Marker Identification From Gene Expression Data.
Mandal, Monalisa; Mondal, Jyotirmay; Mukhopadhyay, Anirban
2015-09-01
In this article, a new and robust pathway activity inference scheme is proposed from gene expression data using Particle Swarm Optimization (PSO). From microarray gene expression data, the corresponding pathway information of the genes is collected from a public database. For identifying pathway markers, the expression values of each pathway's member genes are summarized into a quantity termed the pathway activity. To measure the goodness of a pathway activity vector, the t-score is widely used in the existing literature. The weakness of existing techniques for inferring pathway activity is that they consider all the member genes of a pathway; in reality, not all member genes are significant to the corresponding pathway, so only the genes that are actually responsible for the pathway's behavior should be included. Motivated by this, in the proposed method, important genes with respect to each pathway are identified using PSO. The objective is to maximize the average t-score. For the pathway activities inferred from different percentages of significant pathways, the average absolute t-scores are plotted. In addition, the top 50% pathway markers are evaluated using 10-fold cross validation and their performance is compared with that of other existing techniques. The biological relevance of the results is also studied. PMID:25935045
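The two quantities the scheme optimizes — a pathway activity (here summarized as the mean over selected member genes, a common choice assumed for this sketch) and the t-score between two sample classes (Welch form) — can be written directly; the PSO gene-subset search itself is omitted.

```python
from statistics import mean, variance

def pathway_activity(expr, genes):
    # activity of one sample: average expression over the selected member
    # genes (the subset a PSO search would choose); expr maps gene -> value
    return mean(expr[g] for g in genes)

def t_score(xs, ys):
    # two-sample (Welch) t statistic: the discriminative score that the
    # method maximizes when picking pathway markers
    return (mean(xs) - mean(ys)) / ((variance(xs) / len(xs)
                                     + variance(ys) / len(ys)) ** 0.5)
```

A PSO candidate is then a bit-vector over a pathway's member genes; its fitness is the t-score of the induced activity values across the two phenotype classes.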
Techniques for optimizing inerting in electron processors
NASA Astrophysics Data System (ADS)
Rangwalla, I. J.; Korn, D. J.; Nablo, S. V.
1993-07-01
The design of an "inert gas" distribution system in an electron processor must satisfy a number of requirements. The first of these is the elimination or control of beam-produced ozone and NOx, which can be transported from the process zone by the product into the work area. Since the tolerable levels for O3 in occupied areas around the processor are <0.1 ppm, good control techniques are required, involving either recombination of the O3 in the beam-heated process zone, or exhausting and dilution of the gas at the processor exit. The second requirement of the inerting system is to provide a suitable environment for completing efficient, free-radical-initiated addition polymerization. In this case, the competition between radical loss through de-excitation and that from O2 quenching must be understood. This group has used gas chromatographic analysis of electron-cured coatings to study the trade-offs of delivered dose, dose rate and O2 concentration in the process zone, so that the tolerable ranges of parameter excursions can be determined for production quality control purposes. These techniques are described for an ink:coating system on paperboard, where a broad range of process parameters (D, Ġ, O2) has been studied. It is then shown how the technique is used to optimize the use of higher-purity (10-100 ppm O2) nitrogen gas for inerting, in combination with lower-purity (2-20,000 ppm O2) non-cryogenically produced gas, as from a membrane or pressure swing adsorption generator.
Particle swarm optimization for the clustering of wireless sensors
NASA Astrophysics Data System (ADS)
Tillett, Jason C.; Rao, Raghuveer M.; Sahin, Ferat; Rao, T. M.
2003-07-01
Clustering is necessary for data aggregation, hierarchical routing, optimizing sleep patterns, election of extremal sensors, optimizing coverage and resource allocation, reuse of frequency bands and codes, and conserving energy. Optimal clustering is typically an NP-hard problem. Solutions to NP-hard problems involve searches through vast spaces of possible solutions. Evolutionary algorithms have been applied successfully to a variety of NP-hard problems. We explore one such approach, Particle Swarm Optimization (PSO), an evolutionary programming technique where a 'swarm' of test solutions, analogous to a natural swarm of bees, ants or termites, is allowed to interact and cooperate to find the best solution to the given problem. We use the PSO approach to cluster sensors in a sensor network. The energy efficiency of our clustering in a data-aggregation type sensor network deployment is tested using a modified LEACH-C code. The PSO technique with a recursive bisection algorithm is tested against random search and simulated annealing; the PSO technique is shown to be robust. We further investigate developing a distributed version of the PSO algorithm for optimally clustering a wireless sensor network.
Introducing the fractional order robotic Darwinian PSO
NASA Astrophysics Data System (ADS)
Couceiro, Micael S.; Martins, Fernando M. L.; Rocha, Rui P.; Ferreira, Nuno M. F.
2012-11-01
The Darwinian Particle Swarm Optimization (DPSO) is an evolutionary algorithm that extends the Particle Swarm Optimization using natural selection to enhance the ability to escape from sub-optimal solutions. An extension of the DPSO to multi-robot applications has been recently proposed and denoted as Robotic Darwinian PSO (RDPSO), benefiting from the dynamical partitioning of the whole population of robots, hence decreasing the amount of required information exchange among robots. This paper further extends the previously proposed algorithm using fractional calculus concepts to control the convergence rate, while considering the robot dynamical characteristics. Moreover, to improve the convergence analysis of the RDPSO, an adjustment of the fractional coefficient based on mobile robot constraints is presented and experimentally assessed with 2 real platforms. Afterwards, this novel fractional-order RDPSO is evaluated on 12 physical robots and further explored using a larger population of 100 simulated mobile robots within a larger scenario. Experimental results show that changing the fractional coefficient does not significantly improve the final solution but has a significant influence on the convergence time because of its inherent memory property.
A PSO-PID quaternion model based trajectory control of a hexarotor UAV
NASA Astrophysics Data System (ADS)
Artale, Valeria; Milazzo, Cristina L. R.; Orlando, Calogero; Ricciardello, Angela
2015-12-01
A quaternion-based trajectory controller for a prototype of an Unmanned Aerial Vehicle (UAV) is discussed in this paper. The dynamics of the UAV, a hexarotor in particular, is described in terms of quaternions instead of the usual Euler angle parameterization. As far as UAV flight management is concerned, the method implemented here consists of two main steps: trajectory and attitude control via Proportional-Integral-Derivative (PID) and Proportional-Derivative (PD) techniques, respectively, and the application of the Particle Swarm Optimization (PSO) method to tune the PID and PD parameters. The optimization results from the minimization of an objective function related to the error with respect to a reference trajectory. Numerical simulations support and validate the proposed method.
Optimizing correlation techniques for improved earthquake location
Schaff, D.P.; Bokelmann, G.H.R.; Ellsworth, W.L.; Zanzerkia, E.; Waldhauser, F.; Beroza, G.C.
2004-01-01
Earthquake location using relative arrival time measurements can lead to dramatically reduced location errors and a view of fault-zone processes with unprecedented detail. There are two principal reasons why this approach reduces location errors. The first is that the use of differenced arrival times to solve for the vector separation of earthquakes removes from the earthquake location problem much of the error due to unmodeled velocity structure. The second reason, on which we focus in this article, is that waveform cross correlation can substantially reduce measurement error. While cross correlation has long been used to determine relative arrival times with subsample precision, we extend correlation measurements to less similar waveforms, and we introduce a general quantitative means to assess when correlation data provide an improvement over catalog phase picks. We apply the technique to local earthquake data from the Calaveras Fault in northern California. Tests for an example streak of 243 earthquakes demonstrate that relative arrival times with normalized cross correlation coefficients as low as approximately 70%, interevent separation distances as large as 2 km, and magnitudes up to 3.5 as recorded on the Northern California Seismic Network are more precise than relative arrival times determined from catalog phase data. Also discussed are improvements made to the correlation technique itself. We find that for large time offsets, our implementation of time-domain cross correlation is often more robust and recovers more observations than the cross-spectral approach. Longer time windows give better results than shorter ones. Finally, we explain how thresholds and empirical weighting functions may be derived to optimize the location procedure for any given region of interest, taking advantage of the respective strengths of diverse correlation and catalog phase data on different length scales.
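The correlation measurement described above works by sliding one waveform against the other and picking the lag that maximizes the normalized cross-correlation coefficient. A minimal integer-lag sketch (the paper's method additionally achieves subsample precision, which is omitted here; the synthetic wavelet is illustrative only):

```python
import math

def norm_xcorr_lag(a, b, max_lag):
    """Return (lag, cc) maximizing the normalized cross-correlation
    of a[i + lag] against b[i] over the overlapping samples."""
    best_lag, best_cc = 0, -2.0
    for lag in range(-max_lag, max_lag + 1):
        pairs = [(a[i + lag], b[i]) for i in range(len(b))
                 if 0 <= i + lag < len(a)]
        if len(pairs) < 2:
            continue
        xs = [p[0] for p in pairs]
        ys = [p[1] for p in pairs]
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        num = sum((x - mx) * (y - my) for x, y in pairs)
        den = (sum((x - mx) ** 2 for x in xs)
               * sum((y - my) ** 2 for y in ys)) ** 0.5
        cc = num / den if den else 0.0
        if cc > best_cc:
            best_lag, best_cc = lag, cc
    return best_lag, best_cc

# Synthetic Gaussian-windowed wavelet and a copy delayed by 5 samples.
wave = [math.exp(-((i - 50) / 12.0) ** 2) * math.sin(0.6 * i)
        for i in range(120)]
delayed = wave[5:110]
lag, cc = norm_xcorr_lag(wave, delayed, max_lag=10)
```

The recovered lag is the relative arrival-time measurement; in practice a threshold on cc (e.g. the ~70% level discussed above) decides whether the correlation pick is trusted over the catalog pick.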
Acoustic emission location on aluminum alloy structure by using FBG sensors and PSO method
NASA Astrophysics Data System (ADS)
Lu, Shizeng; Jiang, Mingshun; Sui, Qingmei; Dong, Huijun; Sai, Yaozhang; Jia, Lei
2016-04-01
Acoustic emission location is important for finding structural cracks and ensuring structural safety. In this paper, an acoustic emission location method using fiber Bragg grating (FBG) sensors and a particle swarm optimization (PSO) algorithm was investigated. Four FBG sensors were used to form a sensing network to detect the acoustic emission signals. According to the signals, the quadrilateral array location equations were established. By analyzing the acoustic emission signal propagation characteristics, the solution of the location equations was converted into an optimization problem. Thus, acoustic emission location can be achieved by using an improved PSO algorithm, realized through information fusion of multiple standard PSOs, to solve the optimization problem. Finally, an acoustic emission location system was established and verified on an aluminum alloy plate. The experimental results showed that the average location error was 0.010 m. This paper provides a reliable method for acoustic emission location on aluminum alloy structures.
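The conversion of location equations into an optimization problem can be illustrated as follows: with known sensor positions and wave speed, the source is the point minimizing the squared residuals between measured and predicted arrival-time differences. The sensor geometry, wave speed, and the brute-force grid search below are illustrative stand-ins, not the paper's improved PSO or its actual experimental layout.

```python
import math

def locate_source(sensors, dt, v, step=0.005, bounds=(0.0, 0.5)):
    """Grid-search the point minimizing squared residuals between the
    measured arrival-time differences dt[i] (sensor i+1 vs sensor 0)
    and those predicted for wave speed v (time-difference-of-arrival)."""
    def residual(p):
        d0 = math.dist(p, sensors[0])
        return sum(((math.dist(p, s) - d0) - v * t) ** 2
                   for s, t in zip(sensors[1:], dt))
    lo, hi = bounds
    n = int(round((hi - lo) / step))
    grid = [lo + k * step for k in range(n + 1)]
    return min(((gx, gy) for gx in grid for gy in grid), key=residual)

# Quadrilateral (square) sensor array; synthesize dt from a known source.
sensors = [(0.0, 0.0), (0.4, 0.0), (0.4, 0.4), (0.0, 0.4)]
true_src = (0.10, 0.20)
v = 5000.0                                  # assumed wave speed, m/s
d0 = math.dist(true_src, sensors[0])
dt = [(math.dist(true_src, s) - d0) / v for s in sensors[1:]]
x, y = locate_source(sensors, dt, v)
```

The paper replaces this exhaustive search with an improved PSO over the same residual objective, which scales better and avoids the grid-resolution limit.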
A mesh gradient technique for numerical optimization
NASA Technical Reports Server (NTRS)
Willis, E. A., Jr.
1973-01-01
A class of successive-improvement optimization methods in which directions of descent are defined in the state space along each trial trajectory are considered. The given problem is first decomposed into two discrete levels by imposing mesh points. Level 1 consists of running optimal subarcs between each successive pair of mesh points. For normal systems, these optimal two-point boundary value problems can be solved by following a routine prescription if the mesh spacing is sufficiently close. A spacing criterion is given. Under appropriate conditions, the criterion value depends only on the coordinates of the mesh points, and its gradient with respect to those coordinates may be defined by interpreting the adjoint variables as partial derivatives of the criterion value function. In level 2, the gradient data is used to generate improvement steps or search directions in the state space which satisfy the boundary values and constraints of the given problem.
Optimal control techniques for active noise suppression
NASA Technical Reports Server (NTRS)
Banks, H. T.; Keeling, S. L.; Silcox, R. J.
1988-01-01
Active suppression of noise in a bounded enclosure is considered within the framework of optimal control theory. A sinusoidal pressure field due to exterior offending noise sources is assumed to be known in a neighborhood of interior sensors. The pressure field due to interior controlling sources is assumed to be governed by a nonhomogeneous wave equation within the enclosure and by a special boundary condition designed to accommodate frequency-dependent reflection properties of the enclosure boundary. The form of the controlling sources is determined by considering the steady-state behavior of the system, and it is established that the control strategy proposed is stable and asymptotically optimal.
Optimization Techniques for College Financial Aid Managers
ERIC Educational Resources Information Center
Bosshardt, Donald I.; Lichtenstein, Larry; Palumbo, George; Zaporowski, Mark P.
2010-01-01
In the context of a theoretical model of expected profit maximization, this paper shows how historic institutional data can be used to assist enrollment managers in determining the level of financial aid for students with varying demographic and quality characteristics. Optimal tuition pricing in conjunction with empirical estimation of…
Neural network training with global optimization techniques.
Yamazaki, Akio; Ludermir, Teresa B
2003-04-01
This paper presents an approach of using Simulated Annealing and Tabu Search for the simultaneous optimization of neural network architectures and weights. The problem considered is the odor recognition in an artificial nose. Both methods have produced networks with high classification performance and low complexity. Generalization has been improved by using the backpropagation algorithm for fine tuning. The combination of simple and traditional search methods has shown to be very suitable for generating compact and efficient networks. PMID:12923920
An RBF-PSO based approach for modeling prostate cancer
NASA Astrophysics Data System (ADS)
Perracchione, Emma; Stura, Ilaria
2016-06-01
Prostate cancer is one of the most common cancers in men; it grows slowly and can be diagnosed at an early stage by dosing the Prostate Specific Antigen (PSA). However, a relapse after the primary therapy can arise in 25-30% of cases, and different growth characteristics of the new tumor are observed. In order to get a better understanding of the phenomenon, a two-parameter growth model is considered. To estimate the parameter values identifying the disease risk level, a novel approach based on combining Particle Swarm Optimization (PSO) with meshfree interpolation methods is proposed.
Hashim, Rathiah; Noor Elaiza, Abd Khalid; Irtaza, Aun
2014-01-01
One of the major challenges for CBIR is to bridge the gap between low-level features and high-level semantics according to the needs of the user. To overcome this gap, relevance feedback (RF) coupled with support vector machines (SVM) has been applied successfully. However, when the feedback sample is small, the performance of SVM-based RF is often poor. To improve the performance of RF, this paper proposes a new technique, namely PSO-SVM-RF, which combines SVM-based RF with particle swarm optimization (PSO). The aims of this proposed technique are to enhance the performance of SVM-based RF and to minimize the user's interaction with the system by minimizing the number of RF iterations. PSO-SVM-RF was tested on the Corel photo gallery containing 10908 images. The results obtained from the experiments showed that the proposed PSO-SVM-RF achieved 100% accuracy in 8 feedback iterations for the top 10 retrievals and 80% accuracy in 6 iterations for the top 100 retrievals. This implies that the PSO-SVM-RF technique achieves a high accuracy rate within a small number of iterations. PMID:25121136
Optimization of Lamb wave inspection techniques
NASA Astrophysics Data System (ADS)
Alleyne, David N.; Cawley, Peter
Some problems associated with Lamb wave inspection techniques are briefly reviewed, and factors to be considered when selecting a practical Lamb wave inspection regime and ways to minimize possible problems are discussed. Tests on a butt-welded steel plate with simulated weld defects of different depths demonstrate that, operating below the a1 cut-off frequency with judicious selection of the testing technique, the presence of defects with depths around 30 percent of the plate thickness can be detected reliably from changes in the shape of the received waveform. The 2D Fourier transform method makes it possible to determine the amplitudes of the different propagating Lamb modes over the full frequency range of the input, yielding information which can be used for defect sizing.
Optimizing ECM techniques against monopulse acquisition and tracking radars
NASA Astrophysics Data System (ADS)
Kwon, Ki Hoon
1989-09-01
Electronic countermeasure (ECM) techniques against monopulse radars, which are generally employed in surface-to-air missile targeting systems, are presented and analyzed. These ECM techniques are classified into five categories: denial jamming, deception jamming, passive countermeasures, decoys, and destructive countermeasures. The techniques are fully discussed. It was found difficult to quantify the jamming effectiveness of individual techniques, because ECM techniques involve several complex parameters that are usually entangled together. Therefore, the methodological approach for optimizing ECM techniques is based on purely conceptual analysis of the techniques.
The analytical representation of viscoelastic material properties using optimization techniques
NASA Technical Reports Server (NTRS)
Hill, S. A.
1993-01-01
This report presents a technique to model viscoelastic material properties with a function in the form of a Prony series. Generally, the method employed to determine the function constants requires assuming values for the exponential constants of the function and then resolving the remaining constants through linear least-squares techniques. The technique presented here allows all the constants to be determined analytically through optimization techniques. This technique is employed in a computer program named PRONY and makes use of a commercially available optimization tool developed by VMA Engineering, Inc. The PRONY program was used to compare the technique against previously determined models for solid rocket motor TP-H1148 propellant and V747-75 Viton fluoroelastomer. In both cases, the optimization technique generated functions that modeled the test data with at least an order of magnitude better correlation. This technique has demonstrated the capability to use small or large data sets and data sets with uniformly or nonuniformly spaced data pairs. The reduction of experimental data to accurate mathematical models is a vital part of most scientific and engineering research. This technique of regression through optimization can be applied to other mathematical models that are difficult to fit to experimental data through traditional regression techniques.
Optimal and suboptimal control technique for aircraft spin recovery
NASA Technical Reports Server (NTRS)
Young, J. W.
1974-01-01
An analytic investigation has been made of procedures for effecting recovery from equilibrium spin conditions for three assumed aircraft configurations. Three approaches which utilize conventional aerodynamic controls are investigated. Included are a constant control recovery mode, optimal recoveries, and a suboptimal control logic patterned after optimal recovery results. The optimal and suboptimal techniques are shown to yield a significant improvement in recovery performance over that attained by using a constant control recovery procedure.
An investigation of optimization techniques for drawing computer graphics displays
NASA Technical Reports Server (NTRS)
Stocker, F. R.
1979-01-01
Techniques for reducing vector data plotting time are studied. The choice of tolerances in optimization and the application of optimization to plots produced on real time interactive display devices are discussed. All results are developed relative to plotting packages and support hardware so that results are useful in real world situations.
A Hybrid Swarm Algorithm for optimizing glaucoma diagnosis.
Raja, Chandrasekaran; Gangatharan, Narayanan
2015-08-01
Glaucoma is among the most common causes of permanent blindness in humans. Because the initial symptoms are not evident, mass screening would assist early diagnosis in the vast population. Such mass screening requires an automated diagnosis technique. Our proposed automation consists of pre-processing, optimal wavelet transformation, feature extraction, and classification modules. Hyperanalytic wavelet transformation (HWT) based statistical features are extracted from fundus images. Because HWT preserves phase information, it is appropriate for feature extraction. The features are then classified by a Support Vector Machine (SVM) with a radial basis function (RBF) kernel. The filter coefficients of the wavelet transformation process and the SVM RBF width parameter are simultaneously tailored to best fit the diagnosis by a hybrid Particle Swarm algorithm. To overcome premature convergence, the random searching (ranging) and area-scanning behavior (around the optima) of a Group Search Optimizer (GSO) are embedded within the Particle Swarm Optimization (PSO) framework. This novel potential-area scanning acts as a preventive mechanism against premature convergence, rather than a diagnose-and-cure one, and the embedding does not compromise the generality and utility of PSO. In two 10-fold cross-validated test runs, the diagnostic accuracy of the proposed hybrid PSO exceeded that of conventional PSO. Furthermore, the hybrid PSO maintained the ability to explore even at later iterations, ensuring maturity in fitness. PMID:26093787
Phase Response Design of Recursive All-Pass Digital Filters Using a Modified PSO Algorithm
Chang, Wei-Der
2015-01-01
This paper develops a new design scheme for the phase response of an all-pass recursive digital filter. A variant of the particle swarm optimization (PSO) algorithm is utilized for solving this kind of filter design problem. It is here called the modified PSO (MPSO) algorithm, in which an additional adjusting factor is introduced into the velocity updating formula of the algorithm in order to improve the searching ability. In the proposed method, all of the designed filter coefficients are first collected into a parameter vector, and this vector is regarded as a particle of the algorithm. The MPSO, with its modified velocity formula, forces all particles to move toward the optimal or near-optimal solution by minimizing a defined objective function of the optimization problem. To show the effectiveness of the proposed method, two different kinds of linear phase response design examples are illustrated, and the general PSO algorithm is compared as well. The obtained results show that the MPSO is superior to the general PSO for the phase response design of recursive all-pass digital filters. PMID:26366168
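For illustration, here is a standard PSO velocity update extended with one extra adjusting term. The abstract does not specify the form of the paper's additional factor, so the c3 term below is purely hypothetical; it simply shows where such a factor enters the update.

```python
import random

def mpso_velocity(v, x, pbest, gbest, w=0.7, c1=1.5, c2=1.5, c3=0.5,
                  rng=random):
    """One scalar velocity update: inertia, cognitive, and social terms,
    plus a hypothetical extra adjusting term (NOT the paper's exact form)."""
    r1, r2, r3 = rng.random(), rng.random(), rng.random()
    return (w * v                              # inertia
            + c1 * r1 * (pbest - x)            # cognitive pull
            + c2 * r2 * (gbest - x)            # social pull
            + c3 * r3 * (gbest - pbest))       # assumed extra adjusting term
```

When a particle sits exactly at both its personal and the global best, every attraction term vanishes and only the inertia w*v remains, regardless of the random draws.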
42 CFR 3.110 - Assessment of PSO compliance.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 42 Public Health 1 2010-10-01 2010-10-01 false Assessment of PSO compliance. 3.110 Section 3.110... SAFETY ORGANIZATIONS AND PATIENT SAFETY WORK PRODUCT PSO Requirements and Agency Procedures § 3.110 Assessment of PSO compliance. The Secretary may request information or conduct announced or...
Evaluation of stochastic reservoir operation optimization models
NASA Astrophysics Data System (ADS)
Celeste, Alcigeimes B.; Billib, Max
2009-09-01
This paper investigates the performance of seven stochastic models used to define optimal reservoir operating policies. The models are based on implicit (ISO) and explicit stochastic optimization (ESO) as well as on the parameterization-simulation-optimization (PSO) approach. The ISO models include multiple regression, two-dimensional surface modeling and a neuro-fuzzy strategy. The ESO model is the well-known and widely used stochastic dynamic programming (SDP) technique. The PSO models comprise a variant of the standard operating policy (SOP), reservoir zoning, and a two-dimensional hedging rule. The models are applied to the operation of a single reservoir damming an intermittent river in northeastern Brazil. The standard operating policy is also included in the comparison and operational results provided by deterministic optimization based on perfect forecasts are used as a benchmark. In general, the ISO and PSO models performed better than SDP and the SOP. In addition, the proposed ISO-based surface modeling procedure and the PSO-based two-dimensional hedging rule showed superior overall performance as compared with the neuro-fuzzy approach.
IR and visual image registration based on mutual information and PSO-Powell algorithm
NASA Astrophysics Data System (ADS)
Zhuang, Youwen; Gao, Kun; Miu, Xianghu
2014-11-01
Infrared and visual image registration has wide application in the fields of remote sensing and the military. Mutual information (MI) has proved effective and successful in the infrared and visual image registration process. To find the most appropriate registration parameters, optimization algorithms such as the Particle Swarm Optimization (PSO) algorithm or the Powell search method are often used. The PSO algorithm has strong global search ability and searches quickly at the beginning, but its search performance is low in the late search stage; in the image registration process it often spends a lot of time on useless search, and the solution's precision is low. The Powell search method has strong local search ability, but its performance and runtime are sensitive to the initial values; in image registration it is often obstructed by local maxima and gives wrong results. In this paper, a novel hybrid algorithm combining the PSO algorithm and the Powell search method is proposed. It combines both advantages: avoiding obstruction by local maxima and achieving higher precision. First, the PSO algorithm finds a registration parameter close to the global optimum; starting from this result, the Powell search method is then used to find a more precise registration parameter. The experimental results show that the algorithm can effectively correct the scale, rotation, and translation difference of two images. It can be a good solution for registering infrared and visible images and achieves better performance in time and precision than the traditional optimization algorithms.
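The two-stage structure (global PSO, then local Powell refinement) can be sketched as follows. Mutual information is replaced by a generic objective, and the Powell stage is approximated by a simple coordinate-descent search with step halving, so this is a structural sketch only, not the paper's implementation.

```python
import random

def hybrid_minimize(f, dim, lo, hi, seed=0):
    """Stage 1: coarse global search with a small PSO.
    Stage 2: local refinement by coordinate descent with step halving
    (a simple stand-in for Powell's method)."""
    rng = random.Random(seed)
    n = 15
    # --- Stage 1: tiny global-best PSO ---
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pval = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for _ in range(60):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pval[i]:
                pbest[i], pval[i] = pos[i][:], val
                if val < gval:
                    gbest, gval = pos[i][:], val
    # --- Stage 2: local refinement from the PSO result ---
    x, step = gbest[:], (hi - lo) / 20.0
    while step > 1e-9:
        improved = False
        for d in range(dim):
            for s in (step, -step):
                trial = x[:]
                trial[d] += s
                if f(trial) < f(x):
                    x, improved = trial, True
        if not improved:
            step /= 2.0
    return x, f(x)

# Illustrative stand-in objective (a registration cost would go here).
f = lambda p: sum((c - 1.234) ** 2 for c in p)
x, fx = hybrid_minimize(f, dim=2, lo=-5.0, hi=5.0)
```

The division of labor mirrors the paper's argument: the swarm stage is robust to local maxima but imprecise, while the local stage is precise but only trustworthy once started near the global optimum.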
Particle swarm optimization with recombination and dynamic linkage discovery.
Chen, Ying-Ping; Peng, Wen-Chih; Jian, Ming-Chung
2007-12-01
In this paper, we try to improve the performance of the particle swarm optimizer by incorporating the linkage concept, which is an essential mechanism in genetic algorithms, and design a new linkage identification technique called dynamic linkage discovery to address the linkage problem in real-parameter optimization problems. Dynamic linkage discovery is a costless and effective linkage recognition technique that adapts the linkage configuration by employing only the selection operator without extra judging criteria irrelevant to the objective function. Moreover, a recombination operator that utilizes the discovered linkage configuration to promote the cooperation of particle swarm optimizer and dynamic linkage discovery is accordingly developed. By integrating the particle swarm optimizer, dynamic linkage discovery, and recombination operator, we propose a new hybridization of optimization methodologies called particle swarm optimization with recombination and dynamic linkage discovery (PSO-RDL). In order to study the capability of PSO-RDL, numerical experiments were conducted on a set of benchmark functions as well as on an important real-world application. The benchmark functions used in this paper were proposed in the 2005 Institute of Electrical and Electronics Engineers Congress on Evolutionary Computation. The experimental results on the benchmark functions indicate that PSO-RDL can provide a level of performance comparable to that given by other advanced optimization techniques. In addition to the benchmark, PSO-RDL was also used to solve the economic dispatch (ED) problem for power systems, which is a real-world problem and highly constrained. The results indicate that PSO-RDL can successfully solve the ED problem for the three-unit power system and obtain the currently known best solution for the 40-unit system. PMID:18179066
Energy-Aware Multipath Routing Scheme Based on Particle Swarm Optimization in Mobile Ad Hoc Networks
Robinson, Y. Harold; Rajaram, M.
2015-01-01
Mobile ad hoc network (MANET) is a collection of autonomous mobile nodes forming an ad hoc network without fixed infrastructure. Dynamic topology property of MANET may degrade the performance of the network. However, multipath selection is a great challenging task to improve the network lifetime. We proposed an energy-aware multipath routing scheme based on particle swarm optimization (EMPSO) that uses continuous time recurrent neural network (CTRNN) to solve optimization problems. CTRNN finds the optimal loop-free paths to solve link disjoint paths in a MANET. The CTRNN is used as an optimum path selection technique that produces a set of optimal paths between source and destination. In CTRNN, particle swarm optimization (PSO) method is primly used for training the RNN. The proposed scheme uses the reliability measures such as transmission cost, energy factor, and the optimal traffic ratio between source and destination to increase routing performance. In this scheme, optimal loop-free paths can be found using PSO to seek better link quality nodes in route discovery phase. PSO optimizes a problem by iteratively trying to get a better solution with regard to a measure of quality. The proposed scheme discovers multiple loop-free paths by using PSO technique. PMID:26819966
Process sequence optimization for digital microfluidic integration using EWOD technique
NASA Astrophysics Data System (ADS)
Yadav, Supriya; Joyce, Robin; Sharma, Akash Kumar; Sharma, Himani; Sharma, Niti Nipun; Varghese, Soney; Akhtar, Jamil
2016-04-01
Micro/nano-fluidic MEMS biosensors are devices that detect biomolecules. Emerging micro/nano-fluidic devices provide high throughput and high repeatability with very low response time and reduced device cost compared to traditional devices. This article presents the experimental details for process sequence optimization of digital microfluidics (DMF) using "electrowetting-on-dielectric" (EWOD). Stress-free thick-film deposition of silicon dioxide using PECVD and the subsequent processes for the EWOD technique have been optimized in this work.
Application of optimization techniques to vehicle design: A review
NASA Technical Reports Server (NTRS)
Prasad, B.; Magee, C. L.
1984-01-01
The work that has been done in the last decade or so in the application of optimization techniques to vehicle design is discussed. Much of the work reviewed deals with the design of body or suspension (chassis) components for reduced weight. Also reviewed are studies dealing with system optimization problems for improved functional performance, such as ride or handling. In reviewing the work on the use of optimization techniques, one notes the transition from the rare mention of the methods in the 70's to an increased effort in the early 80's. Efficient and convenient optimization and analysis tools still need to be developed so that they can be regularly applied in the early design stage of the vehicle development cycle to be most effective. Based on the reported applications, an attempt is made to assess the potential for automotive application of optimization techniques. The major issue involved remains the creation of quantifiable means of analysis to be used in vehicle design. The conventional process of vehicle design still contains much experience-based input because it has not yet proven possible to quantify all important constraints. This restraint on the part of the analysis will continue to be a major limiting factor in application of optimization to vehicle design.
Enhanced Multiobjective Optimization Technique for Comprehensive Aerospace Design. Part A
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi; Rajadas, John N.
1997-01-01
A multidisciplinary design optimization procedure which couples formal multiobjective-based techniques and complex analysis procedures (such as computational fluid dynamics (CFD) codes) has been developed. The procedure has been demonstrated on a specific high speed flow application involving aerodynamics and acoustics (sonic boom minimization). In order to account for multiple design objectives arising from complex performance requirements, multiobjective formulation techniques are used to formulate the optimization problem. Techniques to enhance the existing Kreisselmeier-Steinhauser (K-S) function multiobjective formulation approach have been developed. The K-S function procedure used in the proposed work transforms a constrained multiple-objective-function problem into an unconstrained problem, which is then solved using the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm. Weight factors are applied to each objective function during the transformation process. This enhanced procedure provides the designer the capability to emphasize specific design objectives during the optimization process. The demonstration of the procedure utilizes a computational fluid dynamics (CFD) code which solves the three-dimensional parabolized Navier-Stokes (PNS) equations for the flow field, along with an appropriate sonic boom evaluation procedure, thus introducing both aerodynamic performance and sonic boom as the design objectives to be optimized simultaneously. Sensitivity analysis is performed using a discrete differentiation approach. An approximation technique has been used within the optimizer to improve the overall computational efficiency of the procedure in order to make it suitable for design applications in an industrial setting.
Stochastic optimization techniques for NDE of bridges using vibration signatures
NASA Astrophysics Data System (ADS)
Yi, Jin-Hak; Feng, Maria Q.
2003-08-01
Baseline model updating is the first step in model-based nondestructive evaluation of civil infrastructure. Much research has been devoted to obtaining a more reliable baseline model. In this study, heuristic optimization techniques (also called stochastic optimization techniques), including the genetic algorithm, simulated annealing, and tabu search, have been investigated for constructing a reliable baseline model for an instrumented new highway bridge, and the results were compared with those of a conventional sensitivity method. The preliminary finite element model of the bridge was successfully updated to a baseline model based on measured vibration data.
Discrete particle swarm optimization for identifying community structures in signed social networks.
Cai, Qing; Gong, Maoguo; Shen, Bo; Ma, Lijia; Jiao, Licheng
2014-10-01
The modern science of networks has greatly facilitated the understanding of complex systems. Community structure is believed to be one of the notable features of complex networks representing real complicated systems. Very often, uncovering community structures in networks can be regarded as an optimization problem; thus, many evolutionary-algorithm-based approaches have been put forward. Particle swarm optimization (PSO) is an artificial intelligence algorithm inspired by social behavior such as bird flocking and fish schooling. PSO has been proved to be an effective optimization technique. However, PSO was originally designed for continuous optimization, which confounds its application to discrete contexts. In this paper, a novel discrete PSO algorithm is suggested for identifying community structures in signed networks. In the suggested method, particle status has been redesigned in discrete form so as to make PSO proper for discrete scenarios, and particle updating rules have been reformulated by making use of the topology of the signed network. Extensive experiments compared with three state-of-the-art approaches on both synthetic and real-world signed networks demonstrate that the proposed method is effective and promising. PMID:24856248
NASA Astrophysics Data System (ADS)
Wu, Li-Li; Zhou, Qihou H.; Chen, Tie-Jun; Liang, J. J.; Wu, Xin
2015-09-01
Simultaneous derivation of multiple ionospheric parameters from the incoherent scatter power spectra in the F1 region is difficult because the spectra have only subtle differences for different combinations of parameters. In this study, we apply a particle swarm optimizer (PSO) to incoherent scatter power spectrum fitting and compare it to the commonly used least squares fitting (LSF) technique. The PSO method is found to outperform the LSF method in practically all scenarios using simulated data. The PSO method offers the advantages of not being sensitive to initial assumptions and allowing physical constraints to be easily built into the model. When simultaneously fitting for molecular ion fraction (fm), ion temperature (Ti), and the ratio of ion to electron temperature (γT), γT is largely stable. The uncertainty between fm and Ti can be described by a quadratic relationship. The significance of this result is that Ti can be retroactively corrected for data archived many years ago where the assumed fm may not be accurate and the original power spectra are unavailable. In our discussion, we emphasize the fitting of fm, which is a difficult parameter to obtain. The PSO method often succeeds in obtaining fm where LSF fails. We apply both PSO and LSF to actual observations made by the Arecibo incoherent scatter radar. The results show that the PSO method is a viable way to simultaneously determine ion and electron temperatures and the molecular ion fraction when the last is greater than 0.3.
Development of Multiobjective Optimization Techniques for Sonic Boom Minimization
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi; Rajadas, John Narayan; Pagaldipti, Naryanan S.
1996-01-01
A discrete, semi-analytical sensitivity analysis procedure has been developed for calculating aerodynamic design sensitivities. The sensitivities of the flow variables and the grid coordinates are numerically calculated using direct differentiation of the respective discretized governing equations. The sensitivity analysis techniques are adapted within a parabolized Navier-Stokes equations solver. Aerodynamic design sensitivities for high-speed wing-body configurations are calculated using the semi-analytical sensitivity analysis procedures. Representative results obtained compare well with those obtained using the finite difference approach and establish the computational efficiency and accuracy of the semi-analytical procedures. Multidisciplinary design optimization procedures have been developed for aerospace applications, namely gas turbine blades and high-speed wing-body configurations. In complex applications, the coupled optimization problems are decomposed into sublevels using multilevel decomposition techniques. In cases with multiple objective functions, formal multiobjective formulations such as the Kreisselmeier-Steinhauser function approach and the modified global criteria approach have been used. Nonlinear programming techniques for continuous design variables and a hybrid optimization technique, based on a simulated annealing algorithm, for discrete design variables have been used for solving the optimization problems. The optimization procedure for gas turbine blades improves the aerodynamic and heat transfer characteristics of the blades. The two-dimensional, blade-to-blade aerodynamic analysis is performed using a panel code. The blade heat transfer analysis is performed using an in-house developed finite element procedure. The optimization procedure yields blade shapes with significantly improved velocity and temperature distributions. The multidisciplinary design optimization procedures for high-speed wing-body configurations simultaneously
Techniques for trajectory optimization using a hybrid computer
NASA Technical Reports Server (NTRS)
Neely, P. L.
1975-01-01
The use of a hybrid computer in the solution of trajectory optimization problems is described. The solution technique utilizes the indirect method and requires iterative computation of the initial condition vector of the co-state variables. Convergence of the iteration is assisted by feedback switching and contour modification. A simulation of the method in an on-line updating scheme is presented.
Optimal Pid Tuning for Power System Stabilizers Using Adaptive Particle Swarm Optimization Technique
NASA Astrophysics Data System (ADS)
Oonsivilai, Anant; Marungsri, Boonruang
2008-10-01
An application of an intelligent search technique to find the optimal parameters of a power system stabilizer (PSS) based on a proportional-integral-derivative (PID) controller for a single-machine infinite-bus system is presented. An efficient intelligent search technique, adaptive particle swarm optimization (APSO), is employed to demonstrate the usefulness of intelligent search techniques in tuning the PID-PSS parameters. The damping of system oscillations is improved by minimizing an objective function with adaptive particle swarm optimization. At the same operating point, the PID-PSS parameters are also tuned by the Ziegler-Nichols method. The performance of the proposed controller is compared to that of the conventional Ziegler-Nichols-tuned PID controller. The results reveal the superior effectiveness of the proposed APSO-based PID controller.
Satellite tracking by combined optimal estimation and control techniques.
NASA Technical Reports Server (NTRS)
Dressler, R. M.; Tabak, D.
1971-01-01
Combined optimal estimation and control techniques are applied for the first time to satellite tracking systems. Both radio antenna and optical tracking systems of NASA are considered. The optimal estimation is accomplished using an extended Kalman filter resulting in an estimated state of the satellite and of the tracking system. This estimated state constitutes an input to the optimal controller. The optimal controller treats a linearized system with a quadratic performance index. The maximum principle is applied and a steady-state approximation to the resulting Riccati equation is obtained. A computer program, RATS, implementing this algorithm is described. A feasibility study of real-time implementation, tracking simulations, and parameter sensitivity studies are also reported.
A Prototype of Energy Saving System for Office Lighting by Using PSO and WSN
NASA Astrophysics Data System (ADS)
Si, Wa; Ogai, Harutoshi; Hirai, Katsumi; Takahashi, Hidehiro; Ogawa, Masatoshi
The purpose of this study is to develop a wireless networked lighting system for office buildings that can reduce energy consumption while meeting users' lighting preferences. By using particle swarm optimization (PSO), the system is able to optimize the dimming ratio of luminaires according to the real-time natural illumination and occupancy conditions. In this paper we build a prototype system and test its feasibility and efficiency. The prototype consists of one wireless control module, three illumination sensors, and four fluorescent lamps with dimming capability. The illumination sensors collect and send data to the control module, which runs PSO and then sets the power of the lamps according to the PSO result. Experiments in a purpose-designed office showed that the system can successfully control the illumination levels and save considerable energy.
Yang, Qin; Zou, Hong-Yan; Zhang, Yan; Tang, Li-Juan; Shen, Guo-Li; Jiang, Jian-Hui; Yu, Ru-Qin
2016-01-15
Most proteins localize to more than one organelle in a cell. Unmixing the localization patterns of proteins is critical for understanding protein functions and other vital cellular processes. Herein, a non-linear machine learning technique is proposed for the first time for protein pattern unmixing. Variable-weighted support vector machine (VW-SVM) is a demonstrably robust modeling technique with flexible and rational variable selection. When optimized by a global stochastic optimization technique, the particle swarm optimization (PSO) algorithm, VW-SVM becomes an adaptive, parameter-free method for automated unmixing of protein subcellular patterns. Results obtained by pattern unmixing of a set of fluorescence microscope images of cells indicate that VW-SVM optimized by PSO is able to extract useful pattern features by optimally rescaling each variable for non-linear SVM modeling, consequently leading to improved performance in multiplex protein pattern unmixing compared with conventional SVM and other existing pattern unmixing methods. PMID:26592652
A method to objectively optimize coral bleaching prediction techniques
NASA Astrophysics Data System (ADS)
van Hooidonk, R. J.; Huber, M.
2007-12-01
Thermally induced coral bleaching is a global threat to coral reef health. Methodologies, e.g. the Degree Heating Week technique, have been developed to predict bleaching induced by thermal stress by utilizing remotely sensed sea surface temperature (SST) observations. These techniques can be used as a management tool for Marine Protected Areas (MPA). Predictions are valuable to decision makers and stakeholders on weekly to monthly time scales and can be employed to build public awareness and support for mitigation. The bleaching problem is only expected to worsen because global warming poses a major threat to coral reef health. Indeed, predictive bleaching methods combined with climate model output have been used to forecast the global demise of coral reef ecosystems within coming decades due to climate change. Accuracy of these predictive techniques has not been quantitatively characterized despite the critical role they play. Assessments have typically been limited, qualitative or anecdotal, or more frequently they are simply unpublished. Quantitative accuracy assessment, using well established methods and skill scores often used in meteorology and medical sciences, will enable objective optimization of existing predictive techniques. To accomplish this, we will use existing remotely sensed data sets of sea surface temperature (AVHRR and TMI), and predictive values from techniques such as the Degree Heating Week method. We will compare these predictive values with observations of coral reef health and calculate applicable skill scores (Peirce Skill Score, Hit Rate and False Alarm Rate). We will (a) quantitatively evaluate the accuracy of existing coral reef bleaching predictive methods against state-of-the-art reef health databases, and (b) present a technique that will objectively optimize the predictive method for any given location. We will illustrate this optimization technique for reefs located in Puerto Rico and the US Virgin Islands.
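The skill scores named above are simple functions of a 2x2 contingency table of predicted versus observed bleaching events; a minimal sketch, with hypothetical verification counts:

```python
def peirce_skill_score(hits, misses, false_alarms, correct_negatives):
    """Peirce skill score (true skill statistic) for a 2x2 contingency
    table: hit rate minus false-alarm rate."""
    hit_rate = hits / (hits + misses)
    false_alarm_rate = false_alarms / (false_alarms + correct_negatives)
    return hit_rate - false_alarm_rate

# hypothetical verification counts for a bleaching-alert product:
# hit rate = 30/40 = 0.75, false-alarm rate = 5/60 ~ 0.083
pss = peirce_skill_score(hits=30, misses=10, false_alarms=5, correct_negatives=55)
```

A score of 1 means perfect discrimination and 0 means no skill over random forecasts, which is what makes it suitable as an objective function for tuning a bleaching threshold.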
FRAN and RBF-PSO as two components of a hyper framework to recognize protein folds.
Abbasi, Elham; Ghatee, Mehdi; Shiri, M E
2013-09-01
In this paper, an intelligent hyper framework is proposed to recognize protein folds from the amino acid sequence, a fundamental problem in bioinformatics. This framework includes several statistical and intelligent algorithms for protein classification. The main components of the proposed framework are the Fuzzy Resource-Allocating Network (FRAN) and the Radial Basis Function network based on Particle Swarm Optimization (RBF-PSO). FRAN applies a dynamic method to tune the RBF network parameters. Due to the complexity of the patterns captured in the protein dataset, FRAN classifies the proteins under fuzzy conditions. RBF-PSO applies PSO to tune the RBF classifier. Experimental results demonstrate that FRAN improves prediction accuracy up to 51% and achieves acceptable multi-class results for protein fold prediction. Although RBF-PSO provides reasonable results for protein fold recognition, up to 48%, it is weaker than FRAN in some cases. However, the proposed hyper framework provides an opportunity to use a wide range of intelligent methods and can learn from previous experience. Thus it can avoid the weaknesses of some intelligent methods in terms of memory, computational time, and static structure. Furthermore, the performance of this system can be enhanced throughout the system life-cycle. PMID:23930812
Fitting Nonlinear Curves by use of Optimization Techniques
NASA Technical Reports Server (NTRS)
Hill, Scott A.
2005-01-01
MULTIVAR is a FORTRAN 77 computer program that fits one of the members of a set of six multivariable mathematical models (five of which are nonlinear) to a multivariable set of data. The inputs to MULTIVAR include the data for the independent and dependent variables plus the user's choice of one of the models, one of the three optimization engines, and convergence criteria. By use of the chosen optimization engine, MULTIVAR finds values for the parameters of the chosen model so as to minimize the sum of squares of the residuals. One of the optimization engines implements a routine, developed in 1982, that utilizes the Broyden-Fletcher-Goldfarb-Shanno (BFGS) variable-metric method for unconstrained minimization in conjunction with a one-dimensional search technique that finds the minimum of an unconstrained function by polynomial interpolation and extrapolation without first finding bounds on the solution. The second optimization engine is a faster and more robust commercially available code, denoted Design Optimization Tool, that also uses the BFGS method. The third optimization engine is a robust and relatively fast routine that implements the Levenberg-Marquardt algorithm.
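MULTIVAR itself is a FORTRAN 77 code and its engines are not reproduced here, but the criterion it minimizes, the sum of squared residuals, combined with a simple one-dimensional search can be illustrated in miniature. The exponential model, the synthetic data, and the golden-section routine below are illustrative assumptions, not MULTIVAR's actual engines:

```python
import math

def profiled_sse(b, xs, ys):
    """For y = a*exp(b*x), the best a for a given b has a closed form;
    return that a and the resulting sum of squared residuals."""
    e = [math.exp(b * x) for x in xs]
    a = sum(yi * ei for yi, ei in zip(ys, e)) / sum(ei * ei for ei in e)
    return a, sum((a * ei - yi) ** 2 for ei, yi in zip(e, ys))

def golden_section(g, lo, hi, tol=1e-10):
    """1-D minimization by golden-section search, loosely analogous to
    the line searches used inside BFGS-style optimizers."""
    phi = (math.sqrt(5) - 1) / 2
    x1, x2 = hi - phi * (hi - lo), lo + phi * (hi - lo)
    g1, g2 = g(x1), g(x2)
    while hi - lo > tol:
        if g1 < g2:                      # minimum lies in [lo, x2]
            hi, x2, g2 = x2, x1, g1
            x1 = hi - phi * (hi - lo)
            g1 = g(x1)
        else:                            # minimum lies in [x1, hi]
            lo, x1, g1 = x1, x2, g2
            x2 = lo + phi * (hi - lo)
            g2 = g(x2)
    return (lo + hi) / 2

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [2.0 * math.exp(0.5 * x) for x in xs]   # noise-free data from a=2, b=0.5
b_hat = golden_section(lambda b: profiled_sse(b, xs, ys)[1], 0.0, 1.0)
a_hat = profiled_sse(b_hat, xs, ys)[0]
```

Because the data are noise-free, the minimized sum of squares is zero and the search recovers the generating parameters.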
NASA Astrophysics Data System (ADS)
Ge, Xinmin; Fan, Yiren; Cao, Yingchang; Wang, Yang; Cong, Yunhai; Liu, Lailei
2015-06-01
To allow peak searching and parameter estimation for geological and geophysical data with multi-peak distributions, we explore a hybrid method based on a combination of the particle swarm optimization (PSO) and generalized reduced gradient (GRG) algorithms. After characterizing peaks using the additive Gaussian function, a nonlinear objective function is established, which transforms our task into a search for optimal solutions. In this process, PSO is used to obtain the initial values, aiming for global convergence, while GRG is subsequently implemented for higher stability. Iterations are stopped when the convergence criteria are satisfied. Finally, grayscale histograms of backscattering electron images of sandstone show that the proposed algorithm performs much better than other methods such as PSO, GRG, simulated annealing and differential evolution, achieving a faster convergence speed and minimal variances.
Model reduction using new optimal Routh approximant technique
NASA Technical Reports Server (NTRS)
Hwang, Chyi; Guo, Tong-Yi; Sheih, Leang-San
1992-01-01
An optimal Routh approximant of a single-input single-output dynamic system is a reduced-order transfer function of which the denominator is obtained by the Routh approximation method while the numerator is determined by minimizing a time-response integral-squared-error (ISE) criterion. In this paper, a new elegant approach is presented for obtaining the optimal Routh approximants for linear time-invariant continuous-time systems. The approach is based on the Routh canonical expansion, which is a finite-term orthogonal series of rational basis functions, and minimization of the ISE criterion. A procedure for combining the above approach with the bilinear transformation is also presented in order to obtain the optimal bilinear Routh approximants of linear time-invariant discrete-time systems. The proposed technique is simple in formulation and is amenable to practical implementation.
A technique for noise measurement optimization with spectrum analyzers
NASA Astrophysics Data System (ADS)
Carniti, P.; Cassina, L.; Gotti, C.; Maino, M.; Pessina, G.
2015-08-01
Measuring the low noise of electronic devices with a spectrum analyzer requires particular care, as the instrument itself can add significant contributions. A low-noise amplifier (LNA) therefore needs to be connected between the source to be measured and the instrument, to mitigate the instrument's contribution when referred to the LNA input. In the present work we suggest a technique for the implementation of the LNA that allows both low-frequency noise and white noise to be optimized, obtaining outstanding performance over a very broad frequency range.
Honey Bee Mating Optimization Vector Quantization Scheme in Image Compression
NASA Astrophysics Data System (ADS)
Horng, Ming-Huwi
Vector quantization is a powerful technique in digital image compression. Traditional widely used methods such as the Linde-Buzo-Gray (LBG) algorithm always generate a locally optimal codebook. Recently, particle swarm optimization (PSO) has been adapted to obtain the near-global optimal codebook of vector quantization. In this paper, we applied a new swarm algorithm, honey bee mating optimization, to construct the codebook of vector quantization. The proposed method is called the honey bee mating optimization based LBG (HBMO-LBG) algorithm. The results were compared with those of the LBG and PSO-LBG algorithms. Experimental results showed that the proposed HBMO-LBG algorithm is more reliable and the reconstructed images have higher quality than those generated by the other two methods.
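The LBG baseline that both PSO-LBG and HBMO-LBG try to improve upon is essentially Lloyd's algorithm applied to training vectors: assign each vector to its nearest codeword, then recompute each codeword as the centroid of its cell. A minimal sketch (the toy data and parameters are assumptions):

```python
import random

def lbg_codebook(data, k, iters=20, seed=0):
    """Plain LBG (generalized Lloyd) codebook training: alternate
    nearest-codeword assignment and centroid update."""
    rng = random.Random(seed)
    code = [list(v) for v in rng.sample(data, k)]
    for _ in range(iters):
        buckets = [[] for _ in range(k)]
        for v in data:
            i = min(range(k),
                    key=lambda j: sum((a - b) ** 2 for a, b in zip(v, code[j])))
            buckets[i].append(v)
        for j, b in enumerate(buckets):
            if b:                      # empty cells keep their old codeword
                code[j] = [sum(col) / len(b) for col in zip(*b)]
    return code

# two well-separated 2-D clusters: codewords settle near the cluster means
data = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
book = lbg_codebook(data, 2)
```

Each iteration can only decrease (or keep) the total quantization distortion, which is why LBG converges but may stop at a local optimum, the weakness the swarm-based variants address.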
Arasomwan, Martins Akugbe; Adewumi, Aderemi Oluyinka
2013-01-01
The linear decreasing inertia weight (LDIW) strategy was introduced to improve on the performance of the original particle swarm optimization (PSO). However, the linear decreasing inertia weight PSO (LDIW-PSO) algorithm is known to suffer premature convergence in solving complex (multipeak) optimization problems, due to a lack of momentum for particles to exploit as the algorithm approaches its terminal point. Researchers have tried to address this shortcoming by modifying LDIW-PSO or proposing new PSO variants. Some of these variants have been claimed to outperform LDIW-PSO. The major goal of this paper is to experimentally establish that LDIW-PSO is highly efficient if its parameters are properly set. First, an experiment was conducted to acquire a percentage value of the search space limits to compute the particle velocity limits in LDIW-PSO, based on commonly used benchmark global optimization problems. Second, using the experimentally obtained values, five well-known benchmark optimization problems were used to show the outstanding performance of LDIW-PSO over some of its competitors which have in the past claimed superiority over it. Two other recent PSO variants with different inertia weight strategies were also compared with LDIW-PSO, with the latter outperforming both in the simulation experiments conducted. PMID:24324383
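The LDIW strategy itself is a one-line schedule inside an otherwise standard global-best PSO: the inertia weight w falls linearly from a start value to an end value over the run, and velocities are clamped to a fraction of the search range. A minimal continuous-PSO sketch (the sphere test function and all parameter values are illustrative choices, not the paper's experimental setup):

```python
import random

def ldiw_pso(f, bounds, n=30, iters=200,
             w_start=0.9, w_end=0.4, c1=2.0, c2=2.0, seed=1):
    """Global-best PSO with a linearly decreasing inertia weight (LDIW).
    Velocity limits are a fraction of the search range, a tunable choice."""
    rng = random.Random(seed)
    dim = len(bounds)
    vmax = [0.5 * (hi - lo) for lo, hi in bounds]
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    gi = min(range(n), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[gi][:], pbest_f[gi]
    for t in range(iters):
        w = w_start - (w_start - w_end) * t / (iters - 1)   # the LDIW schedule
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                v = (w * vel[i][d] + c1 * r1 * (pbest[i][d] - pos[i][d])
                     + c2 * r2 * (gbest[d] - pos[i][d]))
                vel[i][d] = max(-vmax[d], min(vmax[d], v))
                pos[i][d] += vel[i][d]
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f

# sphere function in 5-D: global minimum 0 at the origin
best, val = ldiw_pso(lambda x: sum(v * v for v in x), [(-10, 10)] * 5)
```

The large early w favors exploration and the small late w favors exploitation, which is exactly the trade-off the paper argues can be made to work well with properly set velocity limits.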
Optimization of backward giant circle technique on the asymmetric bars.
Hiley, Michael J; Yeadon, Maurice R
2007-11-01
The release window for a given dismount from the asymmetric bars is the period of time within which release results in a successful dismount. Larger release windows are likely to be associated with more consistent performance because they allow a greater margin for error in timing the release. A computer simulation model was used to investigate optimum technique for maximizing release windows in asymmetric bars dismounts. The model comprised four rigid segments with the elastic properties of the gymnast and bar modeled using damped linear springs. Model parameters were optimized to obtain a close match between simulated and actual performances of three gymnasts in terms of rotation angle (1.5 degrees), bar displacement (0.014 m), and release velocities (<1%). Three optimizations to maximize the release window were carried out for each gymnast involving no perturbations, 10-ms perturbations, and 20-ms perturbations in the timing of the shoulder and hip joint movements preceding release. It was found that the optimizations robust to 20-ms perturbations produced release windows similar to those of the actual performances whereas the windows for the unperturbed optimizations were up to twice as large. It is concluded that robustness considerations must be included in optimization studies in order to obtain realistic results and that elite performances are likely to be robust to timing perturbations of the order of 20 ms. PMID:18089928
NASA Astrophysics Data System (ADS)
Bera, Sasadhar; Mukherjee, Indrajit
2010-10-01
Ensuring the quality of a product is rarely based on observations of a single quality characteristic. Generally, it is based on observations of a family of properties, so-called `multiple responses'. These multiple responses are often interacting and are measured in a variety of units. Due to the presence of interaction(s), overall optimal conditions for all the responses rarely result from the isolated optimal conditions of individual responses. Conventional optimization techniques, such as design of experiments and linear and nonlinear programming, are generally recommended for single-response optimization problems. Applying any of these techniques to a multiple response optimization problem may lead to unnecessary simplification of the real problem, with several restrictive model assumptions. In addition, engineering judgement or subjective ways of decision making may play an important role in applying some of these conventional techniques. In this context, a synergistic approach of desirability functions and metaheuristic techniques is a viable alternative for handling multiple response optimization problems. Metaheuristics, such as simulated annealing (SA) and particle swarm optimization (PSO), have shown immense success in solving various discrete and continuous single-response optimization problems. Instigated by those successful applications, this chapter assesses the potential of a Nelder-Mead simplex-based SA (SIMSA) and PSO to resolve varied multiple response optimization problems. The computational results clearly indicate the superiority of PSO over SIMSA for the selected problems.
An Improved Fuzzy c-Means Clustering Algorithm Based on Shadowed Sets and PSO
Zhang, Jian; Shen, Ling
2014-01-01
To organize the wide variety of data sets automatically and acquire accurate classification, this paper presents a modified fuzzy c-means algorithm (SP-FCM) based on particle swarm optimization (PSO) and shadowed sets to perform feature clustering. SP-FCM introduces the global search property of PSO to deal with the problem of premature convergence of conventional fuzzy clustering, utilizes vagueness balance property of shadowed sets to handle overlapping among clusters, and models uncertainty in class boundaries. This new method uses Xie-Beni index as cluster validity and automatically finds the optimal cluster number within a specific range with cluster partitions that provide compact and well-separated clusters. Experiments show that the proposed approach significantly improves the clustering effect. PMID:25477953
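SP-FCM layers PSO and shadowed sets on top of the standard fuzzy c-means iteration; the core alternation that it modifies can be sketched for two clusters of 1-D data (the deterministic initialization, the data, and the parameters are illustrative assumptions, not the paper's algorithm):

```python
def fuzzy_cmeans2(data, m=2.0, iters=60):
    """Two-cluster fuzzy c-means on 1-D data: alternate the membership
    update u_ij = 1 / sum_k (d_ij/d_kj)**(2/(m-1)) with the
    membership-weighted center update."""
    centers = [min(data), max(data)]       # simple deterministic init
    n, c = len(data), 2
    for _ in range(iters):
        u = []
        for x in data:
            d = [abs(x - v) or 1e-12 for v in centers]   # guard zero distance
            u.append([1.0 / sum((d[i] / d[k]) ** (2.0 / (m - 1))
                                for k in range(c))
                      for i in range(c)])
        centers = [sum(u[j][i] ** m * data[j] for j in range(n))
                   / sum(u[j][i] ** m for j in range(n))
                   for i in range(c)]
    return sorted(centers)

# two 1-D clusters around 0 and 10: centers settle near the cluster means
ctr = fuzzy_cmeans2([0.0, 0.2, -0.1, 9.8, 10.0, 10.3])
```

The fuzzifier m controls how soft the memberships are; SP-FCM's contributions are replacing this local alternation's sensitivity to initialization with PSO's global search and handling the overlap region with shadowed sets.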
Technique Developed for Optimizing Traveling-Wave Tubes
NASA Technical Reports Server (NTRS)
Wilson, Jeffrey D.
1999-01-01
A traveling-wave tube (TWT) is an electron beam device that is used to amplify electromagnetic communication waves at radio and microwave frequencies. TWTs are critical components in deep-space probes, geosynchronous communication satellites, and high-power radar systems. Power efficiency is of paramount importance for TWTs employed in deep-space probes and communications satellites. Consequently, increasing the power efficiency of TWTs has been the primary goal of the TWT group at the NASA Lewis Research Center over the last 25 years. An in-house effort produced a technique (ref. 1) to design TWTs for optimized power efficiency. This technique is based on simulated annealing, which has an advantage over conventional optimization techniques in that it enables the best possible solution to be obtained (ref. 2). A simulated annealing algorithm was created and integrated into the NASA TWT computer model (ref. 3). The new technique almost doubled the computed conversion power efficiency of a TWT, from 7.1 to 13.5 percent (ref. 1).
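The simulated annealing idea behind the technique is generic: occasionally accept a worse candidate, with probability exp(-Δ/T), so the search can climb out of local optima as the temperature T cools. A toy 1-D sketch (the objective, schedule, and parameters are invented for illustration and have nothing to do with the TWT efficiency model):

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=1.0, cooling=0.995,
                        iters=4000, seed=0):
    """Textbook simulated annealing: accept uphill moves with
    probability exp(-delta/T) under a geometric cooling schedule."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, best_f = x, fx
    t = t0
    for _ in range(iters):
        y = x + rng.uniform(-step, step)
        fy = f(y)
        if fy < fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < best_f:
                best, best_f = x, fx
        t *= cooling
    return best, best_f

def obj(x):
    # two unequal minima: local near x = 1.35, global near x = -1.47
    return x ** 4 - 4 * x ** 2 + x

# start inside the basin of the *local* minimum and let SA escape it
best, best_f = simulated_annealing(obj, x0=1.3, step=1.0, t0=2.0)
```

A greedy descent from the same starting point would be trapped in the shallower basin; the finite acceptance probability for uphill moves is what lets the search cross the barrier while the temperature is still high.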
Lin, Wei-Qi; Jiang, Jian-Hui; Zhou, Yan-Ping; Wu, Hai-Long; Shen, Guo-Li; Yu, Ru-Qin
2007-01-30
Multilayer feedforward neural networks (MLFNNs) are important modeling techniques widely used in QSAR studies for their ability to represent nonlinear relationships between descriptors and activity. However, the problems of overfitting and premature convergence to local optima still pose great challenges in the practice of MLFNNs. To circumvent these problems, a support vector machine (SVM) based training algorithm for MLFNNs has been developed with the incorporation of particle swarm optimization (PSO). The introduction of the SVM based training mechanism imparts the developed algorithm with inherent capacity for combating the overfitting problem. Moreover, with the implementation of PSO for searching the optimal network weights, the SVM based learning algorithm shows relatively high efficiency in converging to the optima. The proposed algorithm has been evaluated using the Hansch data set. Application to QSAR studies of the activity of COX-2 inhibitors is also demonstrated. The results reveal that this technique provides superior performance to backpropagation (BP) and PSO training neural networks. PMID:17186488
Automated parameterization of intermolecular pair potentials using global optimization techniques
NASA Astrophysics Data System (ADS)
Krämer, Andreas; Hülsmann, Marco; Köddermann, Thorsten; Reith, Dirk
2014-12-01
In this work, different global optimization techniques are assessed for the automated development of molecular force fields, as used in molecular dynamics and Monte Carlo simulations. The quest of finding suitable force field parameters is treated as a mathematical minimization problem. Intricate problem characteristics such as extremely costly and even abortive simulations, noisy simulation results, and especially multiple local minima naturally lead to the use of sophisticated global optimization algorithms. Five diverse algorithms (pure random search, recursive random search, CMA-ES, differential evolution, and taboo search) are compared to our own tailor-made solution named CoSMoS. CoSMoS is an automated workflow. It models the parameters' influence on the simulation observables to detect a globally optimal set of parameters. It is shown how and why this approach is superior to other algorithms. Applied to suitable test functions and simulations for phosgene, CoSMoS effectively reduces the number of required simulations and real time for the optimization task.
Techniques for developing reliability-oriented optimal microgrid architectures
NASA Astrophysics Data System (ADS)
Patra, Shashi B.
2007-12-01
Alternative generation technologies such as fuel cells, micro-turbines, solar etc. have been the focus of active research in the past decade. These energy sources are small and modular. Because of these advantages, these sources can be deployed effectively at or near locations where they are actually needed, i.e. in the distribution network. This is in contrast to the traditional electricity generation which has been "centralized" in nature. The new technologies can be deployed in a "distributed" manner. Therefore, they are also known as Distributed Energy Resources (DER). It is expected that the use of DER, will grow significantly in the future. Hence, it is prudent to interconnect the energy resources in a meshed or grid-like structure, so as to exploit the reliability and economic benefits of distributed deployment. These grids, which are smaller in scale but similar to the electric transmission grid, are known as "microgrids". This dissertation presents rational methods of building microgrids optimized for cost and subject to system-wide and locational reliability guarantees. The first method is based on dynamic programming and consists of determining the optimal interconnection between microsources and load points, given their locations and the rights of way for possible interconnections. The second method is based on particle swarm optimization. This dissertation describes the formulation of the optimization problem and the solution methods. The applicability of the techniques is demonstrated in two possible situations---design of a microgrid from scratch and expansion of an existing distribution system.
42 CFR 3.110 - Assessment of PSO compliance.
Code of Federal Regulations, 2011 CFR
2011-10-01
... subpart and for these purposes will be allowed to inspect the physical or virtual sites maintained or... SAFETY ORGANIZATIONS AND PATIENT SAFETY WORK PRODUCT PSO Requirements and Agency Procedures § 3.110... PSO records may include patient safety work product in accordance with § 3.206(d) of this part....
Optimization Techniques for 3D Graphics Deployment on Mobile Devices
NASA Astrophysics Data System (ADS)
Koskela, Timo; Vatjus-Anttila, Jarkko
2015-03-01
3D Internet technologies are becoming essential enablers in many application areas including games, education, collaboration, navigation and social networking. The use of 3D Internet applications with mobile devices provides location-independent access and richer use context, but also performance issues. Therefore, one of the important challenges facing 3D Internet applications is the deployment of 3D graphics on mobile devices. In this article, we present an extensive survey on optimization techniques for 3D graphics deployment on mobile devices and qualitatively analyze the applicability of each technique from the standpoints of visual quality, performance and energy consumption. The analysis focuses on optimization techniques related to data-driven 3D graphics deployment, because it supports off-line use, multi-user interaction, user-created 3D graphics and creation of arbitrary 3D graphics. The outcome of the analysis facilitates the development and deployment of 3D Internet applications on mobile devices and provides guidelines for future research.
On combining Laplacian and optimization-based mesh smoothing techniques
Freitag, L.A.
1997-07-01
Local mesh smoothing algorithms have been shown to be effective in repairing distorted elements in automatically generated meshes. The simplest such algorithm is Laplacian smoothing, which moves grid points to the geometric center of incident vertices. Unfortunately, this method operates heuristically and can create invalid meshes or elements of worse quality than those contained in the original mesh. In contrast, optimization-based methods are designed to maximize some measure of mesh quality and are very effective at eliminating extremal angles in the mesh. These improvements come at a higher computational cost, however. In this article the author proposes three smoothing techniques that combine a smart variant of Laplacian smoothing with an optimization-based approach. Several numerical experiments are performed that compare the mesh quality and computational cost for each of the methods in two and three dimensions. The author finds that the combined approaches are very cost effective and yield high-quality meshes.
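The "smart" variant of Laplacian smoothing described in the article accepts the centroid move only when it does not degrade mesh quality, which is what prevents the invalid elements plain Laplacian smoothing can create. A minimal sketch with a deliberately simple stand-in quality measure (the toy mesh and the quality function are assumptions for illustration):

```python
def smart_laplacian(points, neighbors, quality, passes=5):
    """Smart Laplacian smoothing sketch: each free vertex moves to the
    centroid of its neighbors only if the mesh quality measure does not
    get worse."""
    pts = dict(points)
    for _ in range(passes):
        for v, nbrs in neighbors.items():
            cx = sum(pts[u][0] for u in nbrs) / len(nbrs)
            cy = sum(pts[u][1] for u in nbrs) / len(nbrs)
            trial = dict(pts)
            trial[v] = (cx, cy)
            if quality(trial) >= quality(pts):   # accept only non-worsening moves
                pts = trial
    return pts

# toy "mesh": one movable interior vertex ringed by four fixed corners;
# quality here is just (negative) distance of the free vertex from the
# ring center, a stand-in for a real element-quality measure
corners = {"a": (0.0, 0.0), "b": (2.0, 0.0), "c": (2.0, 2.0), "d": (0.0, 2.0)}
pts = {**corners, "m": (1.7, 0.3)}
free = {"m": ["a", "b", "c", "d"]}
q = lambda p: -((p["m"][0] - 1.0) ** 2 + (p["m"][1] - 1.0) ** 2)
out = smart_laplacian(pts, free, q)
```

In a real mesh the quality function would measure element shape (e.g. minimum angle) over the elements incident to the vertex, which is exactly where the optimization-based methods in the article spend their extra computation.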
Machine learning techniques for energy optimization in mobile embedded systems
NASA Astrophysics Data System (ADS)
Donohoo, Brad Kyoshi
Mobile smartphones and other portable battery operated embedded systems (PDAs, tablets) are pervasive computing devices that have emerged in recent years as essential instruments for communication, business, and social interactions. While performance, capabilities, and design are all important considerations when purchasing a mobile device, a long battery lifetime is one of the most desirable attributes. Battery technology and capacity has improved over the years, but it still cannot keep pace with the power consumption demands of today's mobile devices. This key limiter has led to a strong research emphasis on extending battery lifetime by minimizing energy consumption, primarily using software optimizations. This thesis presents two strategies that attempt to optimize mobile device energy consumption with negligible impact on user perception and quality of service (QoS). The first strategy proposes an application and user interaction aware middleware framework that takes advantage of user idle time between interaction events of the foreground application to optimize CPU and screen backlight energy consumption. The framework dynamically classifies mobile device applications based on their received interaction patterns, then invokes a number of different power management algorithms to adjust processor frequency and screen backlight levels accordingly. The second strategy proposes the usage of machine learning techniques to learn a user's mobile device usage pattern pertaining to spatiotemporal and device contexts, and then predict energy-optimal data and location interface configurations. By learning where and when a mobile device user uses certain power-hungry interfaces (3G, WiFi, and GPS), the techniques, which include variants of linear discriminant analysis, linear logistic regression, non-linear logistic regression, and k-nearest neighbor, are able to dynamically turn off unnecessary interfaces at runtime in order to save energy.
Emerging Techniques for Dose Optimization in Abdominal CT
Platt, Joel F.; Goodsitt, Mitchell M.; Al-Hawary, Mahmoud M.; Maturen, Katherine E.; Wasnik, Ashish P.; Pandya, Amit
2014-01-01
Recent advances in computed tomographic (CT) scanning technique such as automated tube current modulation (ATCM), optimized x-ray tube voltage, and better use of iterative image reconstruction have allowed maintenance of good CT image quality with reduced radiation dose. ATCM varies the tube current during scanning to account for differences in patient attenuation, ensuring a more homogeneous image quality, although selection of the appropriate image quality parameter is essential for achieving optimal dose reduction. Reducing the x-ray tube voltage is best suited for evaluating iodinated structures, since the effective energy of the x-ray beam will be closer to the k-edge of iodine, resulting in a higher attenuation for the iodine. The optimal kilovoltage for a CT study should be chosen on the basis of imaging task and patient habitus. The aim of iterative image reconstruction is to identify factors that contribute to noise on CT images with use of statistical models of noise (statistical iterative reconstruction) and selective removal of noise to improve image quality. The degree of noise suppression achieved with statistical iterative reconstruction can be customized to minimize the effect of altered image quality on CT images. Unlike with statistical iterative reconstruction, model-based iterative reconstruction algorithms model both the statistical noise and the physical acquisition process, allowing CT to be performed with further reduction in radiation dose without an increase in image noise or loss of spatial resolution. Understanding these recently developed scanning techniques is essential for optimization of imaging protocols designed to achieve the desired image quality with a reduced dose. © RSNA, 2014 PMID:24428277
A fuzzy optimal threshold technique for medical images
NASA Astrophysics Data System (ADS)
Thirupathi Kannan, Balaji; Krishnasamy, Krishnaveni; Pradeep Kumar Kenny, S.
2012-01-01
A new fuzzy based thresholding method for medical images, especially cervical cytology images having blob and mosaic structures, is proposed in this paper. Many existing thresholding algorithms can segment either blob or mosaic images, but no single algorithm can do both. In this paper, an input cervical cytology image is binarized and preprocessed, and the pixel value with the minimum Fuzzy Gaussian Index is identified as the optimal threshold value and used for segmentation. The proposed technique is tested on various cervical cytology images having blob or mosaic structures, compared with various existing algorithms, and shown to perform better than the existing algorithms.
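The abstract does not define the Fuzzy Gaussian Index itself, but the threshold search it describes can be sketched with a generic fuzziness measure standing in for it: each candidate grey level splits the histogram into two classes, every level gets a Gaussian-shaped membership in its own class, and the candidate minimizing the resulting fuzziness index becomes the threshold. The membership width and the Shannon-style index below are illustrative assumptions, not the paper's exact definitions.

```python
import numpy as np

def fuzzy_index_threshold(image):
    """Pick the grey level minimizing a Gaussian-style fuzziness index.

    Sketch only: membership is Gaussian in the distance to the class
    mean, and fuzziness is a Shannon-like index (0 for crisp membership).
    """
    levels = np.arange(256)
    hist, _ = np.histogram(image.ravel(), bins=256, range=(0, 256))
    best_t, best_idx = 0, np.inf
    for t in range(1, 255):
        n0, n1 = hist[:t].sum(), hist[t:].sum()
        if n0 == 0 or n1 == 0:
            continue                                  # degenerate split
        m0 = (levels[:t] * hist[:t]).sum() / n0       # background mean
        m1 = (levels[t:] * hist[t:]).sum() / n1       # foreground mean
        # Gaussian-shaped membership of each grey level in its own class
        mu = np.where(levels < t,
                      np.exp(-((levels - m0) ** 2) / (2 * 255.0 ** 2)),
                      np.exp(-((levels - m1) ** 2) / (2 * 255.0 ** 2)))
        mu = np.clip(mu, 1e-12, 1 - 1e-12)
        # fuzziness index, weighted by how many pixels sit at each level
        fuzz = -(mu * np.log(mu) + (1 - mu) * np.log(1 - mu))
        idx = (fuzz * hist).sum() / hist.sum()
        if idx < best_idx:
            best_t, best_idx = t, idx
    return best_t
```

On a cleanly bimodal image the selected threshold falls between the two modes, which is the behavior the segmentation step relies on.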
An improved PSO-SVM model for online recognition defects in eddy current testing
NASA Astrophysics Data System (ADS)
Liu, Baoling; Hou, Dibo; Huang, Pingjie; Liu, Banteng; Tang, Huayi; Zhang, Wubo; Chen, Peihua; Zhang, Guangxin
2013-12-01
Accurate and rapid recognition of defects is essential for the structural integrity and health monitoring of in-service devices using eddy current (EC) non-destructive testing. This paper introduces a novel model-free method that includes three main modules: a signal pre-processing module, a classifier module and an optimisation module. In the signal pre-processing module, a two-stage differential structure is proposed to suppress the lift-off fluctuation that could contaminate the EC signal. In the classifier module, a multi-class support vector machine (SVM) based on the one-against-one strategy is utilised for its good accuracy. In the optimisation module, the optimal parameters of the classifier are obtained by an improved particle swarm optimisation (IPSO) algorithm. The proposed IPSO technique improves the convergence performance of the basic PSO through the following strategies: nonlinear processing of the inertia weight, and the introduction of a black-hole model and a simulated annealing model with extremum disturbance. The good generalisation ability of the IPSO-SVM model has been validated by adding additional specimens to the testing set. Experiments show that the proposed algorithm can achieve higher recognition accuracy and efficiency than other well-known classifiers, and the superiority is more obvious with a smaller training set, which contributes to online application.
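The nonlinear inertia-weight idea at the core of such an IPSO can be illustrated in a few lines. Only that schedule is sketched here (quadratic decay from 0.9 to 0.4 is an assumed form); the black-hole and simulated-annealing disturbance strategies are omitted, and the swarm size and acceleration coefficients are illustrative, not the paper's settings.

```python
import numpy as np

def pso_nonlinear_inertia(f, dim=2, n=20, iters=200, seed=0):
    """Minimal PSO whose inertia weight decays nonlinearly over time:
    large early (exploration), quickly shrinking late (exploitation)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()
    pbest_f = np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    for t in range(iters):
        w = 0.4 + 0.5 * (1.0 - t / iters) ** 2   # nonlinear (quadratic) decay
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = x + v
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, float(pbest_f.min())

sphere = lambda z: float((z ** 2).sum())   # convex test function
```

On the sphere function the swarm collapses onto the origin; swapping in a classifier's cross-validation error as `f` gives the parameter-tuning use described in the abstract.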
A Hybrid PSO-DEFS Based Feature Selection for the Identification of Diabetic Retinopathy.
Balakrishnan, Umarani; Venkatachalapathy, Krishnamurthi; Marimuthu, Girirajkumar S
2015-01-01
Diabetic Retinopathy (DR) is an eye disease associated with diabetes that may cause blindness. The major cause of visual loss in diabetic patients is macular edema. To diagnose and follow up Diabetic Macular Edema (DME), the powerful Optical Coherence Tomography (OCT) technique is used for clinical assessment. Many existing methods identify DME-affected patients by estimating the foveal thickness. These methods suffer from lower accuracy and higher time complexity. To overcome these limitations, a hybrid-approach-based DR detection is introduced in the proposed work. At first, the input image is preprocessed using green channel extraction and a median filter. Subsequently, features are extracted using gradient-based features such as the Histogram of Oriented Gradients (HOG) together with the Complete Local Binary Pattern (CLBP). The texture features are computed at various rotations to capture the edges. We present a hybrid feature selection that combines Particle Swarm Optimization (PSO) and Differential Evolution Feature Selection (DEFS) to minimize the time complexity. A binary Support Vector Machine (SVM) classifier categorizes the 13 normal and 75 abnormal images from 60 patients. Finally, the patients affected by DR are further classified by a Multi-Layer Perceptron (MLP). The experimental results exhibit better accuracy, sensitivity, and specificity than the existing methods. PMID:25817547
Solving constrained optimization problems with hybrid particle swarm optimization
NASA Astrophysics Data System (ADS)
Zahara, Erwie; Hu, Chia-Hsin
2008-11-01
Constrained optimization problems (COPs), which consist of the optimization of a function subject to constraints where both the function and the constraints may be nonlinear, appear frequently in the real world. Constraint handling is one of the major concerns when solving COPs with particle swarm optimization (PSO) combined with the Nelder-Mead simplex search method (NM-PSO). This article proposes embedded constraint handling methods, comprising a gradient repair method and a constraint fitness priority-based ranking method, as a special operator in NM-PSO for dealing with constraints. Experiments on 13 benchmark problems are presented, and the NM-PSO results are compared with the best known solutions reported in the literature. Comparison with three different meta-heuristics demonstrates that NM-PSO with the embedded constraint operator is extremely effective and efficient at locating optimal solutions.
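Constraint-priority ranking of this kind can be approximated by a "feasibility first" comparison rule: feasible particles are ranked by objective value, infeasible ones after all feasible ones by total violation. The sketch below uses that rule inside a plain one-dimensional PSO; the paper's exact ranking formula, its gradient repair step, and the Nelder-Mead hybridization are not reproduced.

```python
import numpy as np

def rank_key(xi, f, cons):
    """Feasibility-first key: feasible points compare by objective,
    infeasible points come after, ordered by total violation.
    Constraints are given as g(x) <= 0."""
    viol = sum(max(0.0, g(xi)) for g in cons)
    return (viol > 0, viol if viol > 0 else f(xi))

def pso_constrained(f, cons, lo, hi, n=30, iters=300, seed=1):
    """1-D PSO whose personal/global bests are chosen by rank_key."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, n)
    v = np.zeros(n)
    pbest = x.copy()
    g_best = min(x, key=lambda xi: rank_key(xi, f, cons))
    for _ in range(iters):
        r1, r2 = rng.random(n), rng.random(n)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g_best - x)
        x = x + v
        for i in range(n):
            if rank_key(x[i], f, cons) < rank_key(pbest[i], f, cons):
                pbest[i] = x[i]
        g_best = min(pbest, key=lambda xi: rank_key(xi, f, cons))
    return g_best
```

On the toy problem "minimize (x-2)^2 subject to x >= 3" the swarm settles at the constrained optimum x = 3, on the active constraint boundary.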
Modiri, A; Gu, X; Sawant, A
2014-06-15
Purpose: We present a particle swarm optimization (PSO)-based 4D IMRT planning technique designed for dynamic MLC tracking delivery to lung tumors. The key idea is to utilize the temporal dimension as an additional degree of freedom rather than a constraint in order to achieve improved sparing of organs at risk (OARs). Methods: The target and normal structures were manually contoured on each of the ten phases of a 4DCT scan acquired from a lung SBRT patient who exhibited 1.5 cm tumor motion despite the use of abdominal compression. Ten corresponding IMRT plans were generated using the Eclipse treatment planning system. These plans served as initial guess solutions for the PSO algorithm. Fluence weights were optimized over the entire solution space, i.e., 10 phases × 12 beams × 166 control points. The size of the solution space motivated our choice of PSO, which is a highly parallelizable stochastic global optimization technique that is well-suited for such large problems. A summed fluence map was created using an in-house B-spline deformable image registration. Each plan was compared with a corresponding, internal target volume (ITV)-based IMRT plan. Results: The PSO 4D IMRT plan yielded comparable PTV coverage and significantly higher dose sparing for parallel and serial OARs compared to the ITV-based plan. The dose sparing achieved via PSO-4DIMRT was: lung Dmean = 28%; lung V20 = 90%; spinal cord Dmax = 23%; esophagus Dmax = 31%; heart Dmax = 51%; heart Dmean = 64%. Conclusion: Truly 4D IMRT that uses the temporal dimension as an additional degree of freedom can achieve significant dose sparing of serial and parallel OARs. Given the large solution space, PSO represents an attractive, parallelizable tool to achieve globally optimal solutions for such problems. This work was supported through funding from the National Institutes of Health and Varian Medical Systems. Amit Sawant has research funding from Varian Medical Systems, VisionRT Ltd. and Elekta.
High-level power analysis and optimization techniques
NASA Astrophysics Data System (ADS)
Raghunathan, Anand
1997-12-01
This thesis combines two ubiquitous trends in the VLSI design world--the move towards designing at higher levels of design abstraction, and the increasing importance of power consumption as a design metric. Power estimation and optimization tools are becoming an increasingly important part of design flows, driven by a variety of requirements such as prolonging battery life in portable computing and communication devices, thermal considerations and system cooling and packaging costs, reliability issues (e.g. electromigration, ground bounce, and I-R drops in the power network), and environmental concerns. This thesis presents a suite of techniques to automatically perform power analysis and optimization for designs at the architecture or register-transfer, and behavior or algorithm levels of the design hierarchy. High-level synthesis refers to the process of synthesizing, from an abstract behavioral description, a register-transfer implementation that satisfies the desired constraints. High-level synthesis tools typically perform one or more of the following tasks: transformations, module selection, clock selection, scheduling, and resource allocation and assignment (also called resource sharing or hardware sharing). High-level synthesis techniques for minimizing the area, maximizing the performance, and enhancing the testability of the synthesized designs have been investigated. This thesis presents high-level synthesis techniques that minimize power consumption in the synthesized data paths. This thesis investigates the effects of resource sharing on the power consumption in the data path, provides techniques to efficiently estimate power consumption during resource sharing, and resource sharing algorithms to minimize power consumption. The RTL circuit that is obtained from the high-level synthesis process can be further optimized for power by applying power-reducing RTL transformations. This thesis presents macro-modeling and estimation techniques for switching
A Deep-Cutting-Plane Technique for Reverse Convex Optimization.
Moshirvaziri, K; Amouzegar, M A
2011-08-01
A large number of problems in engineering design and in many areas of the social and physical sciences and technology lend themselves to particular instances of the problems studied in this paper. Cutting-plane methods have traditionally been used as an effective tool in devising exact algorithms for solving convex and large-scale combinatorial optimization problems. Their utilization in nonconvex optimization has also been promising. A cutting plane, essentially a hyperplane defined by a linear inequality, can be used to effectively reduce the computational effort in the search for a global solution. Each cut is generated in order to eliminate a large portion of the search domain. Thus, a deep cut is intuitively superior in that it excludes a larger set of extraneous points from consideration. This paper is concerned with the development of deep-cutting-plane techniques applied to reverse-convex programs. An upper bound and a lower bound for the optimal value are found, updated, and improved at each iteration. The algorithm terminates when the two bounds collapse or all the generated subdivisions have been fathomed. Finally, computational considerations and numerical results on a set of test problems are discussed. An illustrative example, walking through the steps of the algorithm and explaining the computational process, is presented. PMID:21296710
An optimal merging technique for high-resolution precipitation products
Houser, Paul
2011-01-01
Precipitation products are currently available from various sources at higher spatial and temporal resolution than at any time in the past. Each precipitation product has its strengths and weaknesses in availability, accuracy, resolution, retrieval technique and quality control. By merging the precipitation data obtained from multiple sources, one can improve the information content by mitigating these issues. However, precipitation data merging poses challenges of scale mismatch and of accurate error and bias assessment. In this paper we present Optimal Merging of Precipitation (OMP), a new method to merge precipitation data from multiple sources that are of different spatial and temporal resolutions and accuracies. This method is a combination of scale conversion and merging-weight optimization, involving performance tracing based on Bayesian statistics and trend analysis, which yields merging weights for each precipitation data source. The weights are optimized at multiple scales to facilitate multiscale merging and better precipitation downscaling. Precipitation data used in the experiment include products from the 12-km resolution North American Land Data Assimilation System (NLDAS), the 8-km resolution CMORPH and the 4-km resolution National Stage-IV QPE. The test cases demonstrate that the OMP method is capable of identifying the better data sources and dynamically assigning them higher priority in the merging procedure over the region and time period. The method is also effective in filtering out poor quality data introduced into the merging process.
Optimized evaporation technique for leachate treatment: Small scale implementation.
Benyoucef, Fatima; Makan, Abdelhadi; El Ghmari, Abderrahman; Ouatmane, Aziz
2016-04-01
This paper introduces an optimized evaporation technique for leachate treatment. For this purpose and in order to study the feasibility and measure the effectiveness of the forced evaporation, three cuboidal steel tubs were designed and implemented. The first control-tub was installed at the ground level to monitor natural evaporation. Similarly, the second and the third tub, models under investigation, were installed respectively at the ground level (equipped-tub 1) and out of the ground level (equipped-tub 2), and provided with special equipment to accelerate the evaporation process. The obtained results showed that the evaporation rate at the equipped-tubs was much accelerated with respect to the control-tub. It was accelerated five times in the winter period, where the evaporation rate was increased from a value of 0.37 mm/day to reach a value of 1.50 mm/day. In the summer period, the evaporation rate was accelerated more than three times and it increased from a value of 3.06 mm/day to reach a value of 10.25 mm/day. Overall, the optimized evaporation technique can be applied effectively either under electric or solar energy supply, and will accelerate the evaporation rate from three to five times whatever the season temperature. PMID:26826455
Improved CEEMDAN and PSO-SVR Modeling for Near-Infrared Noninvasive Glucose Detection
Li, Xiaoli
2016-01-01
Diabetes is a serious threat to human health. Thus, research on noninvasive blood glucose detection has become crucial locally and abroad. Near-infrared transmission spectroscopy has important applications in noninvasive glucose detection. Extracting useful information and selecting appropriate modeling methods can improve the robustness and accuracy of models for predicting blood glucose concentrations. Therefore, an improved signal reconstruction and calibration modeling method is proposed in this study. On the basis of improved complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) and the correlation coefficient, the sensitive intrinsic mode functions are selected to reconstruct the spectroscopy signals for developing the calibration model using the support vector regression (SVR) method. The radial basis function kernel is selected for SVR, and three parameters, namely, the insensitive loss coefficient ε, the penalty parameter C, and the width coefficient γ, are identified beforehand for the corresponding model. Particle swarm optimization (PSO) is employed to optimize the simultaneous selection of the three parameters. Results of comparison experiments using PSO-SVR and partial least squares show that the proposed signal reconstruction method is feasible and can eliminate noise in spectroscopy signals. The prediction accuracy of the model using the PSO-SVR method is also found to be better than that of the other methods for near-infrared noninvasive glucose detection.
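The joint hyperparameter search can be illustrated without the paper's spectroscopy data. Below, PSO searches two hyperparameters of an RBF kernel ridge regressor, used here as a numpy stand-in for SVR so the sketch needs no external ML library (the ε parameter has no ridge analogue and is dropped). The bounds, swarm settings and half-split validation are illustrative assumptions.

```python
import numpy as np

def krr_cv_error(X, y, log_gamma, log_lam):
    """Held-out error of 1-D RBF kernel ridge regression; lambda plays
    the role of the SVR penalty C, gamma the kernel width."""
    gamma, lam = 10.0 ** log_gamma, 10.0 ** log_lam
    tr, te, ytr, yte = X[::2], X[1::2], y[::2], y[1::2]
    K = np.exp(-gamma * (tr[:, None] - tr[None, :]) ** 2)
    alpha = np.linalg.solve(K + lam * np.eye(len(tr)), ytr)
    Kte = np.exp(-gamma * (te[:, None] - tr[None, :]) ** 2)
    return float(((Kte @ alpha - yte) ** 2).mean())

def pso_tune(X, y, n=12, iters=40, seed=3):
    """PSO over (log10 gamma, log10 lambda), mirroring the simultaneous
    parameter selection described in the abstract."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array([-2.0, -6.0]), np.array([2.0, 0.0])
    x = rng.uniform(lo, hi, (n, 2))
    v = np.zeros((n, 2))
    fx = np.array([krr_cv_error(X, y, *p) for p in x])
    pbest, pf = x.copy(), fx.copy()
    g = pbest[pf.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n, 2)), rng.random((n, 2))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)   # keep particles inside the bounds
        fx = np.array([krr_cv_error(X, y, *p) for p in x])
        better = fx < pf
        pbest[better], pf[better] = x[better], fx[better]
        g = pbest[pf.argmin()].copy()
    return g, float(pf.min())
```

Replacing the held-out ridge error with a cross-validated SVR loss over (ε, C, γ) recovers the PSO-SVR scheme of the abstract.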
FPGA implementation of neuro-fuzzy system with improved PSO learning.
Karakuzu, Cihan; Karakaya, Fuat; Çavuşlu, Mehmet Ali
2016-07-01
This paper presents the first hardware implementation of a neuro-fuzzy system (NFS) with metaheuristic learning ability on a field programmable gate array (FPGA). Metaheuristic learning of all NFS parameters is accomplished by using improved particle swarm optimization (iPSO). As a second novelty, a new functional approach, which does not require any memory or multiplier usage, is proposed for the Gaussian membership functions of the NFS. The NFS and its learning using iPSO are implemented on a Xilinx Virtex5 xc5vlx110-3ff1153, and the efficiency of the proposed implementation is tested on two dynamic system identification problems and a licence plate detection problem as a practical application. Results indicate that the proposed NFS implementation and membership function approximation are as effective as the other approaches available in the literature but require fewer hardware resources. PMID:27136666
Application of multivariable search techniques to structural design optimization
NASA Technical Reports Server (NTRS)
Jones, R. T.; Hague, D. S.
1972-01-01
Multivariable optimization techniques are applied to a particular class of minimum weight structural design problems: the design of an axially loaded, pressurized, stiffened cylinder. Minimum weight designs are obtained by a variety of search algorithms: first- and second-order, elemental perturbation, and randomized techniques. An exterior penalty function approach to constrained minimization is employed. Some comparisons are made with solutions obtained by an interior penalty function procedure. In general, it would appear that an interior penalty function approach may not be as well suited to the class of design problems considered as the exterior penalty function approach. It is also shown that a combination of search algorithms will tend to arrive at an extremal design in a more reliable manner than a single algorithm. The effect of incorporating realistic geometrical constraints on stiffener cross-sections is investigated. A limited comparison is made between minimum weight cylinders designed on the basis of a linear stability analysis and cylinders designed on the basis of empirical buckling data. Finally, a technique for locating more than one extremal is demonstrated.
A technique for integrating engine cycle and aircraft configuration optimization
NASA Technical Reports Server (NTRS)
Geiselhart, Karl A.
1994-01-01
A method for conceptual aircraft design that incorporates the optimization of major engine design variables for a variety of cycle types was developed. The methodology should improve the lengthy screening process currently involved in selecting an appropriate engine cycle for a given application or mission. The new capability will allow environmental concerns such as airport noise and emissions to be addressed early in the design process. The ability to rapidly perform optimization and parametric variations using both engine cycle and aircraft design variables, and to see the impact on the aircraft, should provide insight and guidance for more detailed studies. A brief description of the aircraft performance and mission analysis program and the engine cycle analysis program that were used is given. A new method of predicting propulsion system weight and dimensions using thermodynamic cycle data, preliminary design, and semi-empirical techniques is introduced. Propulsion system performance and weights data generated by the program are compared with industry data and data generated using well established codes. The ability of the optimization techniques to locate an optimum is demonstrated and some of the problems that had to be solved to accomplish this are illustrated. Results from the application of the program to the analysis of three supersonic transport concepts installed with mixed flow turbofans are presented. The results from the application to a Mach 2.4, 5000 n.mi. transport indicate that the optimum bypass ratio is near 0.45 with less than 1 percent variation in minimum gross weight for bypass ratios ranging from 0.3 to 0.6. In the final application of the program, a low sonic boom, fixed takeoff gross weight concept that would fly at Mach 2.0 overwater and at Mach 1.6 overland is compared with a baseline concept of the same takeoff gross weight that would fly Mach 2.4 overwater and subsonically overland. The results indicate that for the design mission
What is Particle Swarm optimization? Application to hydrogeophysics (Invited)
NASA Astrophysics Data System (ADS)
Fernández Martínez, J.; García Gonzalo, E.; Mukerji, T.
2009-12-01
Inverse problems are generally ill-posed. This yields lack of uniqueness and/or numerical instabilities. These features cause local optimization methods without prior information to provide unpredictable results, since they are unable to discriminate among the multiple models consistent with the end criteria. Stochastic approaches to inverse problems consist in shifting attention to the probability of existence of certain interesting subsurface structures instead of "looking for a unique model". Some well-known stochastic methods include genetic algorithms and simulated annealing. A more recent method, Particle Swarm Optimization (PSO), is a global optimization technique that has been successfully applied to solve inverse problems in many engineering fields, although its use in the geosciences is still limited. Like all stochastic methods, PSO requires reasonably fast forward modeling. The basic idea behind PSO is that each model searches the model space according to its misfit history and the misfit of the other models of the swarm. The PSO algorithm can be physically interpreted as a damped spring-mass system. This physical analogy was used to define a whole family of PSO optimizers and to establish criteria, based on the stability of particle swarm trajectories, for tuning the PSO parameters: inertia and the local and global accelerations. In this contribution we show applications to different low-cost hydrogeophysical inverse problems: 1) a salt water intrusion problem using Vertical Electrical Soundings, 2) the inversion of Spontaneous Potential data for groundwater modeling, 3) the identification of Cole-Cole parameters for Induced Polarization data. We show that with this stochastic approach we are able to answer questions related to risk analysis, such as the depth of the salt intrusion at a certain probability, or probabilistic bounds for the water table depth. Moreover, these measures of uncertainty are obtained with small computational cost and time, allowing us a very
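The damped spring-mass interpretation of PSO mentioned in the abstract is easy to verify numerically. Averaging out the random coefficients, a single particle attracted to a fixed point p follows a second-order linear recurrence whose trajectory is stable exactly when the inertia w and total acceleration phi lie inside the classical region |w| < 1, 0 < phi < 2(1 + w). This is a textbook deterministic simplification, not the authors' full derivation.

```python
def pso_particle_trajectory(w, phi, p=1.0, x0=0.0, v0=0.5, steps=200):
    """Deterministic single-particle PSO recurrence
        v <- w*v + phi*(p - x);  x <- x + v
    i.e. a damped spring-mass system pulled toward the attractor p.
    The trajectory converges to p when |w| < 1 and 0 < phi < 2*(1 + w)."""
    x, v = x0, v0
    traj = [x]
    for _ in range(steps):
        v = w * v + phi * (p - x)   # spring force toward p, damped by w
        x = x + v
        traj.append(x)
    return traj
```

With (w, phi) = (0.7, 1.0) the trajectory spirals into p; pushing phi past 2(1 + w) = 3.4 makes it diverge, which is precisely the stability boundary the authors use to tune the PSO parameters.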
Design of vibration isolation systems using multiobjective optimization techniques
NASA Technical Reports Server (NTRS)
Rao, S. S.
1984-01-01
The design of vibration isolation systems is considered using multicriteria optimization techniques. The integrated values of the square of the force transmitted to the main mass and the square of the relative displacement between the main mass and the base are taken as the performance indices. The design of a three degrees-of-freedom isolation system with an exponentially decaying type of base disturbance is considered for illustration. Numerical results are obtained using the global criterion, utility function, bounded objective, lexicographic, goal programming, goal attainment and game theory methods. It is found that the game theory approach is superior in finding a better optimum solution with proper balance of the various objective functions.
Techniques for developing approximate optimal advanced launch system guidance
NASA Technical Reports Server (NTRS)
Feeley, Timothy S.; Speyer, Jason L.
1991-01-01
An extension to the authors' previous technique used to develop a real-time guidance scheme for the Advanced Launch System is presented. The approach is to construct an optimal guidance law based upon an asymptotic expansion associated with small physical parameters, epsilon. The trajectory of a rocket modeled as a point mass is considered with the flight restricted to an equatorial plane while reaching an orbital altitude at orbital injection speeds. The dynamics of this problem can be separated into primary effects due to thrust and gravitational forces, and perturbation effects which include the aerodynamic forces and the remaining inertial forces. An analytic solution to the reduced-order problem represented by the primary dynamics is possible. The Hamilton-Jacobi-Bellman or dynamic programming equation is expanded in an asymptotic series where the zeroth-order term (epsilon = 0) can be obtained in closed form.
On improving storm surge forecasting using an adjoint optimal technique
NASA Astrophysics Data System (ADS)
Li, Yineng; Peng, Shiqiu; Yan, Jing; Xie, Lian
2013-12-01
A three-dimensional ocean model and its adjoint model are used to simultaneously optimize the initial conditions (IC) and the wind stress drag coefficient (Cd) for improving storm surge forecasting. To demonstrate the effect of this proposed method, a number of identical twin experiments (ITEs) with a prescription of different error sources and two real data assimilation experiments are performed. Results from both the idealized and real data assimilation experiments show that adjusting IC and Cd simultaneously can achieve much more improvements in storm surge forecasting than adjusting IC or Cd only. A diagnosis on the dynamical balance indicates that adjusting IC only may introduce unrealistic oscillations out of the assimilation window, which can be suppressed by the adjustment of the wind stress when simultaneously adjusting IC and Cd. Therefore, it is recommended to simultaneously adjust IC and Cd to improve storm surge forecasting using an adjoint technique.
Optimal technique for maximal forward rotating vaults in men's gymnastics.
Hiley, Michael J; Jackson, Monique I; Yeadon, Maurice R
2015-08-01
In vaulting a gymnast must generate sufficient linear and angular momentum during the approach and table contact to complete the rotational requirements in the post-flight phase. This study investigated the optimization of table touchdown conditions and table contact technique for the maximization of rotation potential for forwards rotating vaults. A planar seven-segment torque-driven computer simulation model of the contact phase in vaulting was evaluated by varying joint torque activation time histories to match three performances of a handspring double somersault vault by an elite gymnast. The closest matching simulation was used as a starting point to maximize post-flight rotation potential (the product of angular momentum and flight time) for a forwards rotating vault. It was found that the maximized rotation potential was sufficient to produce a handspring double piked somersault vault. The corresponding optimal touchdown configuration exhibited hip flexion in contrast to the hyperextended configuration required for maximal height. Increasing touchdown velocity and angular momentum lead to additional post-flight rotation potential. By increasing the horizontal velocity at table touchdown, within limits obtained from recorded performances, the handspring double somersault tucked with one and a half twists, and the handspring triple somersault tucked became theoretically possible. PMID:26026290
Optimal exposure techniques for iodinated contrast enhanced breast CT
NASA Astrophysics Data System (ADS)
Glick, Stephen J.; Makeev, Andrey
2016-03-01
Screening for breast cancer using mammography has been very successful in the effort to reduce breast cancer mortality, and its use has largely resulted in the 30% reduction in breast cancer mortality observed since 1990 [1]. However, diagnostic mammography remains an area of breast imaging that is in great need for improvement. One imaging modality proposed for improving the accuracy of diagnostic workup is iodinated contrast-enhanced breast CT [2]. In this study, a mathematical framework is used to evaluate optimal exposure techniques for contrast-enhanced breast CT. The ideal observer signal-to-noise ratio (i.e., d') figure-of-merit is used to provide a task performance based assessment of optimal acquisition parameters under the assumptions of a linear, shift-invariant imaging system. A parallel-cascade model was used to estimate signal and noise propagation through the detector, and a realistic lesion model with iodine uptake was embedded into a structured breast background. Ideal observer performance was investigated across kVp settings, filter materials, and filter thickness. Results indicated many kVp spectra/filter combinations can improve performance over currently used x-ray spectra.
Optimal high speed CMOS inverter design using craziness based Particle Swarm Optimization Algorithm
NASA Astrophysics Data System (ADS)
De, Bishnu P.; Kar, Rajib; Mandal, Durbadal; Ghoshal, Sakti P.
2015-07-01
The inverter is the most fundamental logic gate that performs a Boolean operation on a single input variable. In this paper, an optimal design of a CMOS inverter using an improved version of the particle swarm optimization technique called Craziness based Particle Swarm Optimization (CRPSO) is proposed. CRPSO is very simple in concept, easy to implement and computationally efficient, with two main advantages: it has fast, near-global convergence, and it uses nearly robust control parameters. The performance of PSO depends on its control parameters and may be influenced by premature convergence and stagnation problems. To overcome these problems, the PSO algorithm has been modified into CRPSO in this paper and is used for CMOS inverter design. In birds' flocking or fish schooling, a bird or a fish often changes direction suddenly. In the proposed technique, the sudden change of velocity is modelled by a direction reversal factor associated with the previous velocity and a "craziness" velocity factor associated with another direction reversal factor. The second condition is introduced depending on a predefined craziness probability to maintain the diversity of particles. The performance of CRPSO is compared with those of a real coded genetic algorithm (RGA) and conventional PSO reported in the recent literature. CRPSO based design results are also compared with PSPICE based results. The simulation results show that CRPSO is superior to the other algorithms for the examples considered and can be efficiently used for CMOS inverter design.
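A minimal sketch of the craziness-modified velocity update described above: with some probability a particle's freshly computed velocity is randomly sign-reversed and a small "craziness" velocity is injected to keep the swarm diverse. The paper's exact direction-reversal factors are paraphrased, and the probability and magnitude below are assumed values.

```python
import numpy as np

def crpso_velocity(v, x, pbest, gbest, rng,
                   w=0.7, c1=1.5, c2=1.5,
                   p_craz=0.3, v_craz=0.05):
    """One CRPSO-style velocity update for a swarm of shape (n, dim).

    First the conventional PSO update is applied; then, per particle,
    with probability p_craz the velocity is direction-reversed at random
    and perturbed by a craziness velocity of magnitude v_craz."""
    r1, r2 = rng.random(v.shape), rng.random(v.shape)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    crazy = rng.random(v.shape[0]) < p_craz        # per-particle draw
    sign = rng.choice([-1.0, 1.0], size=v.shape)   # direction reversal factor
    v_new[crazy] = sign[crazy] * v_new[crazy] + v_craz * sign[crazy]
    return v_new
```

Setting the craziness probability to zero reduces the rule to the conventional PSO update, so the diversity injection can be toggled or annealed during a run.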
Technique to optimize magnetic response of gelatin coated magnetic nanoparticles.
Parikh, Nidhi; Parekh, Kinnari
2015-07-01
The paper describes the results of optimization of the magnetic response of a highly stable, bio-functionalized magnetic nanoparticle dispersion. The concentration of gelatin during in situ co-precipitation synthesis was varied (8, 23 and 48 mg/mL) to optimize the magnetic properties. This variation results in a change in crystallite size from 10.3 to 7.8 ± 0.1 nm. TEM measurement of the G3 sample shows highly crystalline spherical nanoparticles with a mean diameter of 7.2 ± 0.2 nm and a diameter distribution (σ) of 0.27. FTIR spectra show a shift of 22 cm(-1) at the C=O stretch, with the absence of N-H stretching confirming the chemical binding of gelatin on the magnetic nanoparticles. The lone-pair electrons of the amide group explain the binding mechanism. TGA shows 32.8-25.2% weight loss at 350 °C, substantiating decomposition of the chemically bound gelatin. The magnetic response shows that for the 8 mg/mL gelatin concentration, the initial susceptibility and saturation magnetization are maximal. The cytotoxicity of the G3 sample was assessed in Normal Rat Kidney Epithelial Cells (NRK line) by MTT assay. Results show an increase in viability for all concentrations, indicating the probability of a stimulating action of these particles in the nontoxic range. This shows the potential of this technique for biological applications, as the coated particles are (i) superparamagnetic, (ii) highly stable in physiological media, (iii) able to bind other drugs via the free functional groups of gelatin, and (iv) non-toxic. PMID:26152511
PSO based PI controller design for a solar charger system.
Yau, Her-Terng; Lin, Chih-Jer; Liang, Qin-Cheng
2013-01-01
Due to the global energy crisis and severe environmental pollution, the photovoltaic (PV) system has become one of the most important renewable energy sources. Many previous studies on solar charger integrated systems focus only on load charge control or on switching between Maximum Power Point Tracking (MPPT) and charge control modes. This study used a two-stage system, which allows the overall portable solar energy charging system to implement MPPT and optimal charge control of a Li-ion battery simultaneously. First, this study designs a DC/DC boost converter for solar power generation, which uses the variable step size incremental conductance method (VSINC) to enable the solar cell to track the maximum power point at any time. The voltage was exported from the DC/DC boost converter to the DC/DC buck converter, so that the voltage dropped to the proper voltage for charging the battery. The charging system uses the constant current/constant voltage (CC/CV) method to charge the lithium battery. In order to obtain the optimum PI charge controller parameters, this study used an intelligent algorithm to determine them. According to the simulation and experimental results, the control parameters resulting from PSO have better performance than genetic algorithms (GAs). PMID:23766713
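Tuning PI gains with PSO, as this study does, amounts to treating the closed-loop response cost as the fitness function. The sketch below runs a plain global-best PSO over (Kp, Ki) against a toy first-order plant with an ITAE-style cost; the plant model, gain bounds, and all PSO constants are illustrative assumptions, not the charger dynamics from the paper.

```python
import random

def step_itae(kp, ki, steps=200, dt=0.01):
    """Time-weighted absolute error of a PI loop on a toy first-order
    plant dy/dt = -y + u (a stand-in for the charger's voltage loop)."""
    y, integ, cost = 0.0, 0.0, 0.0
    for k in range(steps):
        e = 1.0 - y                    # unit step reference
        integ += e * dt
        u = kp * e + ki * integ        # PI control law
        y += (-y + u) * dt             # Euler step of the plant
        cost += (k * dt) * abs(e)      # time-weighted absolute error
    return cost

def pso_tune_pi(n=20, iters=60, seed=1):
    """Global-best PSO over (Kp, Ki) in [0, 10]^2, minimizing step_itae."""
    rng = random.Random(seed)
    pts = [[rng.uniform(0, 10), rng.uniform(0, 10)] for _ in range(n)]
    vel = [[0.0, 0.0] for _ in range(n)]
    pbest = [p[:] for p in pts]
    pcost = [step_itae(*p) for p in pts]
    g = min(range(n), key=lambda i: pcost[i])
    gbest = pbest[g][:]
    for _ in range(iters):
        for i, p in enumerate(pts):
            for d in range(2):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - p[d])
                             + 1.5 * rng.random() * (gbest[d] - p[d]))
                p[d] = min(10.0, max(0.0, p[d] + vel[i][d]))  # clamp to bounds
            c = step_itae(*p)
            if c < pcost[i]:
                pbest[i], pcost[i] = p[:], c
        g = min(range(n), key=lambda i: pcost[i])
        gbest = pbest[g][:]
    return gbest, pcost[g]
```

Swapping `step_itae` for a GA-driven search over the same cost is how the paper's PSO-vs-GA comparison would be set up.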
Optimization technique for problems with an inequality constraint
NASA Technical Reports Server (NTRS)
Russell, K. J.
1972-01-01
General technique uses a modified version of an existing technique termed the pattern search technique. New procedure called the parallel move strategy permits pattern search technique to be used with problems involving a constraint.
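The underlying pattern search technique probes each coordinate direction and shrinks the step when no probe improves the objective. A minimal unconstrained sketch is below; the record's "parallel move strategy" for handling the inequality constraint is not reproduced here, and the shrink factor and tolerance are assumed values.

```python
def pattern_search(f, x0, step=1.0, shrink=0.5, tol=1e-6, max_iter=1000):
    """Minimal coordinate pattern search (Hooke-Jeeves flavour):
    probe +/- step along each axis; halve the step when no probe improves."""
    x = list(x0)
    fx = f(x)
    it = 0
    while step > tol and it < max_iter:
        improved = False
        for d in range(len(x)):
            for s in (step, -step):
                trial = x[:]
                trial[d] += s
                ft = f(trial)
                if ft < fx:            # accept the first improving probe
                    x, fx, improved = trial, ft, True
                    break
        if not improved:
            step *= shrink             # no move helped: refine the mesh
        it += 1
    return x, fx
```

A constrained variant would reject (or penalize) probes that violate the inequality before comparing objective values.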
Hybrid optimization methods for Full Waveform Inversion
NASA Astrophysics Data System (ADS)
Datta, D.; Sen, M. K.
2014-12-01
FWI is slowly becoming the mainstream method to estimate velocity models of the subsurface from seismic data. Typically it makes use of a gradient descent approach in which a model update is computed by back propagating the residual seismograms and cross correlating with the forward propagating wavefields at each grid point in the subsurface model. FWI is a local optimization technique, which requires the starting model to be very close to the true model. Because the objective function is multimodal with many local minima, the requirement of a good starting model becomes essential. A starting model is generated using travel time tomography. We propose two hybrid FWI algorithms: one generates a very good starting model for conventional FWI; the other works with a population of models and uses gradient information from multiple starting locations to guide the search. The first approach uses a sparse parameterization of the model space using non-oscillatory splines, whose coefficients are estimated using an optimization algorithm such as very fast simulated annealing (VFSA) by minimizing the misfit between the observed and synthetic data. The estimated velocity model is then used as a starting model for gradient-based FWI. This is done in the shot domain by converting the end-on marine geometry to a split spread geometry using the principle of reciprocity. The second approach uses an alternative global optimization algorithm, particle swarm optimization (PSO), in which the PSO update rules are applied. However, we employ a new gradient guided PSO that exploits the gradient information as well. This approach avoids local minima and converges faster than a conventional PSO. We demonstrate our methods with application to 2D marine data sets from offshore India. Each line comprises over 1000 shots; our hybrid methods produce geologically meaningful velocity models fairly rapidly on a GPU cluster. We show that starting with the hybrid model gives a much
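One simple way to realize a gradient-guided PSO of the kind described above is to add a descent term, weighted by an extra coefficient, to the usual velocity update. The coefficient `c3` and the exact blending below are assumptions for illustration; the paper's scheme may weight or schedule the gradient term differently.

```python
import random

def gradient_guided_pso_step(x, v, pbest, gbest, grad,
                             w=0.7, c1=1.5, c2=1.5, c3=0.5, rng=random):
    """One position/velocity update for a single particle that blends the
    usual PSO attraction terms with a local-gradient descent term.
    grad is the objective gradient evaluated at the particle's position."""
    v_new = [w * vi
             + c1 * rng.random() * (pb - xi)
             + c2 * rng.random() * (gb - xi)
             - c3 * gi                        # gradient guidance (descent direction)
             for xi, vi, pb, gb, gi in zip(x, v, pbest, gbest, grad)]
    x_new = [xi + vi for xi, vi in zip(x, v_new)]
    return x_new, v_new
```

The swarm terms keep the search global while the `-c3 * grad` term accelerates local convergence near a basin, which matches the abstract's claim of avoiding local minima yet converging faster than plain PSO.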
Shrestha, Roshan; Houser, Paul R.; Anantharaj, Valentine G.
2011-04-01
Precipitation products are currently available from various sources at higher spatial and temporal resolution than at any time in the past. Each of the precipitation products has its strengths and weaknesses in availability, accuracy, resolution, retrieval techniques and quality control. By merging the precipitation data obtained from multiple sources, one can improve the information content by minimizing these issues. However, precipitation data merging poses challenges of scale mismatch and of accurate error and bias assessment. In this paper we present Optimal Merging of Precipitation (OMP), a new method to merge precipitation data from multiple sources that are of different spatial and temporal resolutions and accuracies. This method is a combination of scale conversion and merging weight optimization, involving performance tracing based on Bayesian statistics and trend analysis, which yields merging weights for each precipitation data source. The weights are optimized at multiple scales to facilitate multiscale merging and better precipitation downscaling. Precipitation data used in the experiment include products from the 12-km resolution North American Land Data Assimilation (NLDAS) system, the 8-km resolution CMORPH and the 4-km resolution National Stage-IV QPE. The test cases demonstrate that the OMP method is capable of identifying better data sources and allocating higher priority to them in the merging procedure, dynamically over the region and time period. This method is also effective in filtering out poor quality data introduced into the merging process.
Optimization of fast dissolving etoricoxib tablets prepared by sublimation technique.
Patel, D M; Patel, M M
2008-01-01
The purpose of this investigation was to develop fast dissolving tablets of etoricoxib. Granules containing etoricoxib, menthol, crospovidone, aspartame and mannitol were prepared by a wet granulation technique. Menthol was sublimed from the granules by exposing the granules to vacuum. The porous granules were then compressed into tablets. Alternatively, tablets were first prepared and later exposed to vacuum. The tablets were evaluated for percentage friability and disintegration time. A 3² full factorial design was applied to investigate the combined effect of 2 formulation variables: amount of menthol and crospovidone. The results of multiple regression analysis indicated that, for obtaining fast dissolving tablets, an optimum amount of menthol and a higher percentage of crospovidone should be used. Surface response plots are also presented to graphically represent the effect of the independent variables on the percentage friability and disintegration time. The validity of the generated mathematical model was tested by preparing a checkpoint batch. Sublimation of menthol from tablets resulted in rapid disintegration as compared with the tablets prepared from granules that were exposed to vacuum. The optimized tablet formulation was compared with conventional marketed tablets for percentage drug dissolved in 30 min (Q30) and dissolution efficiency after 30 min (DE30). From the results, it was concluded that fast dissolving tablets with improved etoricoxib dissolution could be prepared by sublimation of tablets containing a suitable subliming agent. PMID:20390084
Calibration of Semi-analytic Models of Galaxy Formation Using Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Ruiz, Andrés N.; Cora, Sofía A.; Padilla, Nelson D.; Domínguez, Mariano J.; Vega-Martínez, Cristian A.; Tecce, Tomás E.; Orsi, Álvaro; Yaryura, Yamila; García Lambas, Diego; Gargiulo, Ignacio D.; Muñoz Arancibia, Alejandra M.
2015-03-01
We present a fast and accurate method to select an optimal set of parameters in semi-analytic models of galaxy formation and evolution (SAMs). Our approach compares the results of a model against a set of observables applying a stochastic technique called Particle Swarm Optimization (PSO), a self-learning algorithm for localizing regions of maximum likelihood in multidimensional spaces that outperforms traditional sampling methods in terms of computational cost. We apply the PSO technique to the SAG semi-analytic model combined with merger trees extracted from a standard Lambda Cold Dark Matter N-body simulation. The calibration is performed using a combination of observed galaxy properties as constraints, including the local stellar mass function and the black hole to bulge mass relation. We test the ability of the PSO algorithm to find the best set of free parameters of the model by comparing the results with those obtained using a MCMC exploration. Both methods find the same maximum likelihood region, however, the PSO method requires one order of magnitude fewer evaluations. This new approach allows a fast estimation of the best-fitting parameter set in multidimensional spaces, providing a practical tool to test the consequences of including other astrophysical processes in SAMs.
Using Animal Instincts to Design Efficient Biomedical Studies via Particle Swarm Optimization
Qiu, Jiaheng; Chen, Ray-Bing; Wang, Weichung; Wong, Weng Kee
2014-01-01
Particle swarm optimization (PSO) is an increasingly popular metaheuristic algorithm for solving complex optimization problems. Its popularity is due to its repeated successes in finding an optimum or a near optimal solution for problems in many applied disciplines. The algorithm makes no assumptions about the function to be optimized, and for biomedical experiments like those presented here, PSO typically finds the optimal solutions in a few seconds of CPU time on a garden-variety laptop. We apply PSO to find various types of optimal designs for several problems in the biological sciences and compare PSO performance relative to the differential evolution algorithm, another popular metaheuristic algorithm in the engineering literature. PMID:25285268
Augmented Lagrangian Particle Swarm Optimization in Mechanism Design
NASA Astrophysics Data System (ADS)
Sedlaczek, Kai; Eberhard, Peter
The problem of optimizing nonlinear multibody systems is in general nonlinear and nonconvex. This is especially true for the dimensional synthesis process of rigid body mechanisms, where often only local solutions might be found with gradient-based optimization methods. An attractive alternative for solving such multimodal optimization problems is the Particle Swarm Optimization (PSO) algorithm. This stochastic solution technique allows a derivative-free search for a global solution without the need for any initial design. In this work, we present an extension to the basic PSO algorithm in order to solve the problem of dimensional synthesis with nonlinear equality and inequality constraints. It utilizes the Augmented Lagrange Multiplier Method in combination with an advanced non-stationary penalty function approach that does not rely on excessively large penalty factors for sufficiently accurate solutions. Although the PSO method is even able to solve nonsmooth and discrete problems, this augmented algorithm can additionally calculate accurate Lagrange multiplier estimates for differentiable formulations, which are helpful in the analysis process of the optimization results. We demonstrate this method and show its very promising applicability to the constrained dimensional synthesis process of rigid body mechanisms.
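The augmented Lagrangian machinery the abstract describes wraps the constrained synthesis problem in an unconstrained one that PSO can minimize, then updates the multiplier estimates between swarm runs. The two helpers below sketch that outer loop for a single equality constraint h(x) = 0; the penalty schedule and the paper's non-stationary penalty function are not reproduced, so treat the forms here as the textbook baseline.

```python
def augmented_lagrangian(f, h, x, lam, rho):
    """Augmented Lagrangian for the equality constraint h(x) = 0:
        L_A(x) = f(x) + lam * h(x) + (rho / 2) * h(x)**2
    PSO would minimize this over x with lam and rho held fixed."""
    hx = h(x)
    return f(x) + lam * hx + 0.5 * rho * hx * hx

def update_multiplier(lam, rho, hx):
    """Standard first-order multiplier update applied after each inner
    (PSO) minimization, using the constraint residual hx at the new x."""
    return lam + rho * hx
```

Because the multiplier converges to the true Lagrange multiplier, moderate `rho` suffices, which is exactly the advantage over pure penalty methods that the abstract highlights.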
Multivariable optimization of liquid rocket engines using particle swarm algorithms
NASA Astrophysics Data System (ADS)
Jones, Daniel Ray
Liquid rocket engines are highly reliable, controllable, and efficient compared to other conventional forms of rocket propulsion. As such, they have seen wide use in the space industry and have become the standard propulsion system for launch vehicles, orbit insertion, and orbital maneuvering. Though these systems are well understood, historical optimization techniques are often inadequate due to the highly non-linear nature of the engine performance problem. In this thesis, a Particle Swarm Optimization (PSO) variant was applied to maximize the specific impulse of a finite-area combustion chamber (FAC) equilibrium flow rocket performance model by controlling the engine's oxidizer-to-fuel ratio and de Laval nozzle expansion and contraction ratios. In addition to the PSO-controlled parameters, engine performance was calculated based on propellant chemistry, combustion chamber pressure, and ambient pressure, which are provided as inputs to the program. The performance code was validated by comparison with NASA's Chemical Equilibrium with Applications (CEA) and the commercially available Rocket Propulsion Analysis (RPA) tool. Similarly, the PSO algorithm was validated by comparison with brute-force optimization, which calculates all possible solutions and subsequently determines which is the optimum. Particle Swarm Optimization was shown to be an effective optimizer capable of quick and reliable convergence for complex functions of multiple non-linear variables.
NASA Astrophysics Data System (ADS)
Wang, Xuewu; Shi, Yingpan; Ding, Dongyan; Gu, Xingsheng
2016-02-01
Spot-welding robots have a wide range of applications in manufacturing industries. There are usually many weld joints in a welding task, and a reasonable welding path to traverse these weld joints has a significant impact on welding efficiency. Traditional manual path planning can handle a few weld joints effectively, but it is time consuming and inefficient, and when the number of weld joints is large it cannot guarantee an optimal path. Double global optimum genetic algorithm-particle swarm optimization (GA-PSO), based on the GA and PSO algorithms, is proposed to solve the welding robot path planning problem, where the shortest collision-free paths are used as the criteria to optimize the welding path. Besides algorithm effectiveness analysis and verification, the simulation results indicate that the algorithm has strong searching ability and practicality, and is suitable for welding robot path planning.
A Novel Particle Swarm Optimization Approach for Grid Job Scheduling
NASA Astrophysics Data System (ADS)
Izakian, Hesam; Tork Ladani, Behrouz; Zamanifar, Kamran; Abraham, Ajith
This paper presents a Particle Swarm Optimization (PSO) algorithm for grid job scheduling. PSO is a population-based search algorithm based on the simulation of the social behavior of bird flocking and fish schooling. Particles fly in the problem search space to find optimal or near-optimal solutions. In this paper we use a PSO approach for grid job scheduling. The scheduler aims at minimizing makespan and flowtime simultaneously. Experimental studies show that the proposed novel approach is more efficient than the PSO approach reported in the literature.
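The two objectives named above are cheap to evaluate once a particle is decoded into a job-to-machine assignment. The sketch below assumes a simple encoding (assignment[j] = machine index for job j, jobs executed in list order per machine), which is a common choice for PSO grid schedulers but is an assumption, not the paper's exact decoding.

```python
def makespan_and_flowtime(assignment, job_times, n_machines):
    """Compute the two grid-scheduling objectives for one decoded particle:
    makespan  = latest machine finishing time,
    flowtime  = sum of individual job completion times."""
    loads = [0.0] * n_machines
    flowtime = 0.0
    for j, m in enumerate(assignment):
        loads[m] += job_times[j]      # job j completes when machine m frees up
        flowtime += loads[m]
    return max(loads), flowtime
```

A scalarization such as a weighted sum of the two values (or a Pareto ranking) would then serve as the particle fitness.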
Evolutionary artificial neural networks by multi-dimensional particle swarm optimization.
Kiranyaz, Serkan; Ince, Turker; Yildirim, Alper; Gabbouj, Moncef
2009-12-01
In this paper, we propose a novel technique for the automatic design of Artificial Neural Networks (ANNs) by evolving to the optimal network configuration(s) within an architecture space. It is entirely based on a multi-dimensional Particle Swarm Optimization (MD PSO) technique, which re-forms the native structure of swarm particles in such a way that they can make inter-dimensional passes with a dedicated dimensional PSO process. Therefore, in a multidimensional search space where the optimum dimension is unknown, swarm particles can seek both positional and dimensional optima. This eventually removes the necessity of setting a fixed dimension a priori, which is a common drawback for the family of swarm optimizers. With the proper encoding of the network configurations and parameters into particles, MD PSO can then seek the positional optimum in the error space and the dimensional optimum in the architecture space. The optimum dimension converged at the end of a MD PSO process corresponds to a unique ANN configuration where the network parameters (connections, weights and biases) can then be resolved from the positional optimum reached on that dimension. In addition to this, the proposed technique generates a ranked list of network configurations, from the best to the worst. This is indeed a crucial piece of information, indicating what potential configurations can be alternatives to the best one, and which configurations should not be used at all for a particular problem. In this study, the architecture space is defined over feed-forward, fully-connected ANNs so as to use the conventional techniques such as back-propagation and some other evolutionary methods in this field. The proposed technique is applied over the most challenging synthetic problems to test its optimality on evolving networks and over the benchmark problems to test its generalization capability as well as to make comparative evaluations with the several competing techniques. The experimental
NASA Astrophysics Data System (ADS)
Paasche, H.; Tronicke, J.
2012-04-01
In many near surface geophysical applications multiple tomographic data sets are routinely acquired to explore subsurface structures and parameters. Linking the model generation process of multi-method geophysical data sets can significantly reduce ambiguities in geophysical data analysis and model interpretation. Most geophysical inversion approaches rely on local search optimization methods used to find an optimal model in the vicinity of a user-given starting model. The final solution may critically depend on the initial model. Alternatively, global optimization (GO) methods have been used to invert geophysical data. They explore the solution space in more detail and determine the optimal model independently of the starting model. Additionally, they can be used to find sets of optimal models, allowing a further analysis of model parameter uncertainties. Here we employ particle swarm optimization (PSO) to realize the global optimization of tomographic data. PSO is an emergent method based on swarm intelligence, characterized by fast and robust convergence towards optimal solutions. The fundamental principle of PSO is inspired by nature, since the algorithm mimics the behavior of a flock of birds searching for food in a search space. In PSO, a number of particles cruise a multi-dimensional solution space striving to find optimal model solutions explaining the acquired data. The particles communicate their positions and success and direct their movement according to the position of the currently most successful particle of the swarm. The success of a particle, i.e. the quality of the model currently found by a particle, must be uniquely quantifiable to identify the swarm leader. When jointly inverting disparate data sets, the optimization solution has to satisfy multiple optimization objectives, at least one for each data set. Unique determination of the most successful particle currently leading the swarm is not possible. Instead, only statements about the Pareto
Optimization techniques in molecular structure and function elucidation.
Sahinidis, Nikolaos V
2009-12-01
This paper discusses recent optimization approaches to the protein side-chain prediction problem, protein structural alignment, and molecular structure determination from X-ray diffraction measurements. The machinery employed to solve these problems has included algorithms from linear programming, dynamic programming, combinatorial optimization, and mixed-integer nonlinear programming. Many of these problems are purely continuous in nature. Yet, to this date, they have been approached mostly via combinatorial optimization algorithms that are applied to discrete approximations. The main purpose of the paper is to offer an introduction and motivate further systems approaches to these problems. PMID:20160866
NASA Astrophysics Data System (ADS)
Chang, Yang-Lang; Liu, Jin-Nan; Chen, Yen-Lin; Chang, Wen-Yen; Hsieh, Tung-Ju; Huang, Bormin
2014-01-01
In recent years, satellite imaging technologies have resulted in an increased number of bands acquired by hyperspectral sensors, greatly advancing the field of remote sensing. Accordingly, owing to the increasing number of bands, band selection in hyperspectral imagery for dimension reduction is important. This paper presents a framework for band selection in hyperspectral imagery that uses two techniques, referred to as particle swarm optimization (PSO) band selection and the impurity function band prioritization (IFBP) method. With the PSO band selection algorithm, highly correlated bands of hyperspectral imagery can first be grouped into modules to coarsely reduce high-dimensional datasets. Then, these highly correlated band modules are analyzed with the IFBP method to finely select the most important feature bands from the hyperspectral imagery dataset. However, PSO band selection is a time-consuming procedure when the number of hyperspectral bands is very large. Hence, this paper proposes a parallel computing version of PSO, namely parallel PSO (PPSO), using a modern graphics processing unit (GPU) architecture with NVIDIA's compute unified device architecture technology to improve the computational speed of PSO processes. The natural parallelism of the proposed PPSO lies in the fact that each particle can be regarded as an independent agent. Parallel computation benefits the algorithm by providing each agent with a parallel processor. The intrinsic parallel characteristics embedded in PPSO are, therefore, suitable for parallel computation. The effectiveness of the proposed PPSO is evaluated through the use of airborne visible/infrared imaging spectrometer hyperspectral images. The performance of PPSO is validated using the supervised K-nearest neighbor classifier. The experimental results demonstrate that the proposed PPSO/IFBP band selection method can not only improve computational speed, but also offer a satisfactory classification performance.
New video projection control room is OK with PSO
Buttress, J.
1996-11-01
Public Service Company of Oklahoma (PSO) has 473,000 electricity customers across the state. While power failures are unquestionably an inconvenience to residential customers and a loss of income to the utility, power outages can have serious financial effects on the region's business community. Oil and natural gas producers, pipelines, aircraft and aerospace companies, farms, ranches and wood product producers rely on PSO to supply them with electricity. Historically, every supplier of electricity experiences and is responsible for correcting power supply failures regardless of circumstances. Therefore, to successfully serve its customers, PSO strives to identify three key pieces of information for each report of trouble it receives: Is the power off? If so, why? Approximately when will it be restored?
NASA Astrophysics Data System (ADS)
Lin, Juan; Liu, Chenglian; Guo, Yongning
2014-10-01
The estimation of neural active sources from magnetoencephalography (MEG) data is a very critical issue for both clinical neurology and brain functions research. A widely accepted source-modeling technique for MEG involves calculating a set of equivalent current dipoles (ECDs). Depth in the brain is one of the difficulties in MEG source localization. Particle swarm optimization (PSO) is widely used to solve various optimization problems. In this paper we discuss its ability and robustness to find the global optimum at different depths of the brain when using the single equivalent current dipole (sECD) model and single time sliced data. The results show that PSO is an effective global optimization method for MEG source localization when given one dipole at different depths.
Optimization techniques applied to passive measures for in-orbit spacecraft survivability
NASA Technical Reports Server (NTRS)
Mog, Robert A.; Price, D. Marvin
1987-01-01
Optimization Techniques Applied to Passive Measures for In-Orbit Spacecraft Survivability is a six-month study designed to evaluate the effectiveness of the geometric programming (GP) optimization technique in determining the optimal design of a meteoroid and space debris protection system for the Space Station Core Module configuration. Geometric programming was found to be superior to other methods in that it provided maximum protection from impact problems at the lowest weight and cost.
Adjoint Techniques for Topology Optimization of Structures Under Damage Conditions
NASA Technical Reports Server (NTRS)
Akgun, Mehmet A.; Haftka, Raphael T.
2000-01-01
The objective of this cooperative agreement was to seek computationally efficient ways to optimize aerospace structures subject to damage tolerance criteria. Optimization was to involve sizing as well as topology optimization. The work was done in collaboration with Steve Scotti, Chauncey Wu and Joanne Walsh at the NASA Langley Research Center. Computation of constraint sensitivity is normally the most time-consuming step of an optimization procedure. The cooperative work first focused on this issue and implemented the adjoint method of sensitivity computation (Haftka and Gurdal, 1992) in an optimization code (runstream) written in Engineering Analysis Language (EAL). The method was implemented both for bar and plate elements including buckling sensitivity for the latter. Lumping of constraints was investigated as a means to reduce the computational cost. Adjoint sensitivity computation was developed and implemented for lumped stress and buckling constraints. Cost of the direct method and the adjoint method was compared for various structures with and without lumping. The results were reported in two papers (Akgun et al., 1998a and 1999). It is desirable to optimize topology of an aerospace structure subject to a large number of damage scenarios so that a damage tolerant structure is obtained. Including damage scenarios in the design procedure is critical in order to avoid large mass penalties at later stages (Haftka et al., 1983). A common method for topology optimization is that of compliance minimization (Bendsoe, 1995) which has not been used for damage tolerant design. In the present work, topology optimization is treated as a conventional problem aiming to minimize the weight subject to stress constraints. Multiple damage configurations (scenarios) are considered. Each configuration has its own structural stiffness matrix and, normally, requires factoring of the matrix and solution of the system of equations. Damage that is expected to be tolerated is local
Design of high speed proprotors using multiobjective optimization techniques
NASA Technical Reports Server (NTRS)
Mccarthy, Thomas R.; Chattopadhyay, Aditi
1992-01-01
An integrated, multiobjective optimization procedure is developed for the design of high speed proprotors with the coupling of aerodynamic, dynamic, aeroelastic, and structural criteria. The objectives are to maximize propulsive efficiency in high speed cruise and rotor figure of merit in hover. Constraints are imposed on rotor blade aeroelastic stability in cruise and on total blade weight. Two different multiobjective formulation procedures, the Min Σβ and the Kreisselmeier-Steinhauser (K-S) function approaches, are used to formulate the two-objective optimization problems.
Optimizing Basic French Skills Utilizing Multiple Teaching Techniques.
ERIC Educational Resources Information Center
Skala, Carol
This action research project examined the impact of foreign language teaching techniques on the language acquisition and retention of 19 secondary level French I students, focusing on student perceptions of the effectiveness and ease of four teaching techniques: total physical response, total physical response storytelling, literature approach,…
Towards the novel reasoning among particles in PSO by the use of RDF and SPARQL.
Fister, Iztok; Yang, Xin-She; Ljubič, Karin; Fister, Dušan; Brest, Janez; Fister, Iztok
2014-01-01
The significant development of the Internet has posed new challenges, and many new programming tools have been developed to address them. Today, the semantic web is a modern paradigm for representing and accessing knowledge data on the Internet. This paper applies semantic tools such as the Resource Description Framework (RDF) and the RDF query language SPARQL for optimization purposes. These tools are combined with particle swarm optimization (PSO), and the selection of the best solutions depends on their fitness. Instead of the local best solution, a neighborhood of solutions can be defined for each particle and used for the calculation of the new position, based on key ideas from the semantic web domain. Preliminary results on ten benchmark functions were promising, and thus this method should be investigated further. PMID:24987725
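The neighborhood-based position update described in this abstract can be illustrated with a conventional ring-topology (local-best) PSO. The sketch below is a generic lbest PSO in Python, not the authors' RDF/SPARQL-driven variant; the swarm size, coefficients, and ring radius are illustrative choices.

```python
import random

def lbest_pso(f, dim, n=20, iters=200, k=1, seed=0):
    """Minimize f with a ring-topology (local-best) PSO.

    Each particle is guided by the best personal-best position found
    within its ring neighborhood of radius k, rather than by a single
    global best.
    """
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pval = [f(p) for p in pos]
    w, c1, c2 = 0.7, 1.5, 1.5
    for _ in range(iters):
        for i in range(n):
            # local best over the ring neighborhood {i-k, ..., i+k}
            nbrs = [(i + d) % n for d in range(-k, k + 1)]
            l = min(nbrs, key=lambda j: pval[j])
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (pbest[l][d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pval[i]:
                pval[i], pbest[i] = v, pos[i][:]
    b = min(range(n), key=lambda j: pval[j])
    return pbest[b], pval[b]

sphere = lambda x: sum(t * t for t in x)
best, val = lbest_pso(sphere, dim=3)
```

Replacing the ring lookup with a different neighborhood definition (as the RDF/SPARQL approach does) only changes how `nbrs` is computed.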
Pourjafari, Ebrahim; Mojallali, Hamed
2011-04-01
Voltage stability is one of the most challenging concerns that power utilities are confronted with, and this paper proposes a voltage control scheme based on Model Predictive Control (MPC) to overcome this kind of instability. Voltage instability has a close relation with the adequacy of reactive power and the response of Under Load Tap Changers (ULTCs) to the voltage drop after the occurrence of a contingency. Therefore, the proposed method utilizes reactive power injection and tap changing to avoid voltage collapse. Considering the discrete nature of the changes in the tap ratio and in the reactive power injected by capacitor banks, the search area for the optimizer of MPC will be an integer domain; consequently, a modified discrete multi-valued Particle Swarm Optimization (PSO) is used to perform this optimization. Simulation results of applying the proposed control scheme to a 4-bus system confirm its capability to prevent voltage collapse. PMID:21251650
DyHAP: Dynamic Hybrid ANFIS-PSO Approach for Predicting Mobile Malware.
Afifi, Firdaus; Anuar, Nor Badrul; Shamshirband, Shahaboddin; Choo, Kim-Kwang Raymond
2016-01-01
To deal with the large number of malicious mobile applications (e.g., mobile malware), a number of malware detection systems have been proposed in the literature. In this paper, we propose a hybrid method to find the optimum parameters that can be used to facilitate mobile malware identification. We also present a multi-agent system architecture comprising three system agents (i.e., sniffer, extraction, and selection agents) to capture and manage the pcap file for the data preparation phase. In our hybrid approach, we combine an adaptive neuro-fuzzy inference system (ANFIS) and particle swarm optimization (PSO). Evaluations using data captured on a real-world Android device and the MalGenome dataset demonstrate the effectiveness of our approach in comparison to two other hybrid optimization methods, differential evolution (ANFIS-DE) and ant colony optimization (ANFIS-ACO). PMID:27611312
Support Vector Machine Based on Adaptive Acceleration Particle Swarm Optimization
Abdulameer, Mohammed Hasan; Sheikh Abdullah, Siti Norul Huda; Othman, Zulaiha Ali
2014-01-01
Existing face recognition methods utilize particle swarm optimizer (PSO) and opposition based particle swarm optimizer (OPSO) to optimize the parameters of SVM. However, the utilization of random values in the velocity calculation decreases the performance of these techniques; that is, during the velocity computation, we normally use random values for the acceleration coefficients and this creates randomness in the solution. To address this problem, an adaptive acceleration particle swarm optimization (AAPSO) technique is proposed. To evaluate our proposed method, we employ both face and iris recognition based on AAPSO with SVM (AAPSO-SVM). In the face and iris recognition systems, performance is evaluated using two human face databases, YALE and CASIA, and the UBiris dataset. In this method, we initially perform feature extraction and then recognition on the extracted features. In the recognition process, the extracted features are used for SVM training and testing. During the training and testing, the SVM parameters are optimized with the AAPSO technique, and in AAPSO, the acceleration coefficients are computed using the particle fitness values. The parameters in SVM, which are optimized by AAPSO, perform efficiently for both face and iris recognition. A comparative analysis between our proposed AAPSO-SVM and the PSO-SVM technique is presented. PMID:24790584
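The abstract's key idea, computing the acceleration coefficients from particle fitness values rather than drawing them at random, can be sketched as follows. The mapping below is a hypothetical illustration, since the exact AAPSO formula is not given in the abstract: better-ranked particles lean on their own memory, worse-ranked ones follow the swarm.

```python
def adaptive_coefficients(fit_i, fit_best, fit_worst, c_min=0.5, c_max=2.5):
    """Map a particle's fitness rank into acceleration coefficients.

    Illustrative rule (not the published AAPSO formula): the particle's
    fitness is normalized to r in [0, 1], where 0 is the swarm's best
    and 1 its worst; c1 (cognitive) shrinks and c2 (social) grows as
    the particle's fitness degrades.
    """
    span = fit_worst - fit_best
    r = 0.0 if span == 0 else (fit_i - fit_best) / span  # 0 = best, 1 = worst
    c1 = c_max - (c_max - c_min) * r   # trust own memory when doing well
    c2 = c_min + (c_max - c_min) * r   # follow the swarm when doing badly
    return c1, c2
```

These (c1, c2) pairs would then replace the random coefficients in the standard PSO velocity update.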
Optimal feedback control of infinite dimensional parabolic evolution systems: Approximation techniques
NASA Technical Reports Server (NTRS)
Banks, H. T.; Wang, C.
1989-01-01
A general approximation framework is discussed for computation of optimal feedback controls in linear quadratic regulator problems for nonautonomous parabolic distributed parameter systems. This is done in the context of a theoretical framework using general evolution systems in infinite dimensional Hilbert spaces. Conditions are discussed for preservation under approximation of stabilizability and detectability hypotheses on the infinite dimensional system. The special case of periodic systems is also treated.
Asynchronous global optimization techniques for medium and large inversion problems
Pereyra, V.; Koshy, M.; Meza, J.C.
1995-04-01
We discuss global optimization procedures adequate for seismic inversion problems. We explain how to save function evaluations (which may involve large-scale ray tracing or other expensive operations) by creating a database of information on which parts of parameter space have already been inspected. It is also shown how a correct parallel implementation using PVM speeds up the process almost linearly with respect to the number of processors, provided that the function evaluations are expensive enough to offset the communication overhead.
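The evaluation-saving database described above can be sketched as a simple cache that skips re-evaluating parameter vectors close to ones already inspected. The tolerance-based lookup below is an illustrative stand-in for the authors' database, assuming a Euclidean closeness test.

```python
import math

class CachedObjective:
    """Cache expensive objective evaluations, reusing the stored result
    for any parameter vector within `tol` of a previous query."""

    def __init__(self, f, tol=1e-6):
        self.f, self.tol, self.store = f, tol, []
        self.calls = 0  # number of true (expensive) evaluations

    def __call__(self, x):
        for xs, v in self.store:
            if math.dist(x, xs) <= self.tol:
                return v          # this region was already inspected
        self.calls += 1
        v = self.f(x)
        self.store.append((tuple(x), v))
        return v
```

A linear scan is fine for a sketch; a production version would use a spatial index so lookups stay cheap as the store grows.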
Optimization techniques for OpenCL-based linear algebra routines
NASA Astrophysics Data System (ADS)
Kozacik, Stephen; Fox, Paul; Humphrey, John; Kuller, Aryeh; Kelmelis, Eric; Prather, Dennis W.
2014-06-01
The OpenCL standard for general-purpose parallel programming allows a developer to target highly parallel computations towards graphics processing units (GPUs), CPUs, co-processing devices, and field programmable gate arrays (FPGAs). The computationally intense domains of linear algebra and image processing have shown significant speedups when implemented in the OpenCL environment. A major benefit of OpenCL is that a routine written for one device can be run across many different devices and architectures; however, a kernel optimized for one device may not exhibit high performance when executed on a different device. For this reason kernels must typically be hand-optimized for every target device family. Due to the large number of parameters that can affect performance, hand tuning for every possible device is impractical and often produces suboptimal results. For this work, we focused on optimizing the general matrix multiplication routine. General matrix multiplication is used as a building block for many linear algebra routines and often comprises a large portion of the run-time. Prior work has shown this routine to be a good candidate for high-performance implementation in OpenCL. We selected several candidate algorithms from the literature that are suitable for parameterization. We then developed parameterized kernels implementing these algorithms using only portable OpenCL features. Our implementation queries device information supplied by the OpenCL runtime and utilizes this as well as user input to generate a search space that satisfies device and algorithmic constraints. Preliminary results from our work confirm that optimizations are not portable from one device to the next, and show the benefits of automatic tuning. Using a standard set of tuning parameters seen in the literature for the NVIDIA Fermi architecture achieves a performance of 1.6 TFLOPS on an AMD 7970 device, while automatically tuning achieves a peak of 2.7 TFLOPS
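The step of generating "a search space that satisfies device and algorithmic constraints" can be sketched as follows. The tile sizes and device limits below are hypothetical placeholders, not values from the paper: the filter keeps only GEMM tilings whose work-group size and staged local-memory footprint fit the device.

```python
from itertools import product

def candidate_configs(max_workgroup=256, local_mem_bytes=32768, dtype_size=4):
    """Enumerate (tile_m, tile_n, tile_k) GEMM tilings that satisfy
    hypothetical device limits: one work-item per C element bounds the
    work-group size, and the two staged tiles A[tile_m][tile_k] and
    B[tile_k][tile_n] must fit in local memory."""
    configs = []
    for tm, tn, tk in product([8, 16, 32, 64], repeat=3):
        if tm * tn > max_workgroup:
            continue
        if (tm * tk + tk * tn) * dtype_size > local_mem_bytes:
            continue
        configs.append((tm, tn, tk))
    return configs
```

An autotuner would then time a generated kernel for each surviving configuration on the target device and keep the fastest.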
Approach to analytically minimize the LCD moiré by image-based particle swarm optimization.
Tsai, Yu-Lin; Tien, Chung-Hao
2015-10-01
In this paper, we propose a methodology to optimize the parametric window of a liquid crystal display (LCD) system whose visual performance is degraded by pixel moiré arising between multiple periodic structures. Conventional analysis and minimization of moiré patterns are limited to a few parameters. With the proposed image-based particle swarm optimization (PSO), we enable simultaneous optimization over multiple variables. A series of experiments was conducted to validate the methodology. Due to its versatility, the proposed technique should have a promising impact on fast optimization of LCD designs with more complex configurations. PMID:26479663
An Optimal Cell Detection Technique for Automated Patch Clamping
NASA Technical Reports Server (NTRS)
McDowell, Mark; Gray, Elizabeth
2004-01-01
While there are several hardware techniques for the automated patch clamping of cells that describe the equipment apparatus used for patch clamping, very few explain the science behind the actual technique of locating the ideal cell for a patch clamping procedure. We present a machine vision approach to patch clamping cell selection by developing an intelligent algorithm that gives the user the ability to identify a good cell to patch clamp in an image within one second. This technique will aid the user in determining the best candidates for patch clamping and will ultimately save time, increase efficiency, and reduce cost. The ultimate goal is to combine intelligent processing with instrumentation and controls in order to produce a complete turnkey automated patch clamping system capable of accurately and reliably patch clamping cells with a minimum amount of human intervention. We present a unique technique that identifies good patch clamping cell candidates based on feature metrics of a cell's (x, y) position, major axis length, minor axis length, area, elongation, roundness, smoothness, angle of orientation, thinness, and whether or not the cell is only partially in the field of view. A patent is pending for this research.
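Two of the listed feature metrics have standard formulas that can be sketched directly. Roundness is taken here as the usual isoperimetric ratio and elongation as the axis-length ratio; these definitions are assumptions, since the abstract does not state which formulas the authors used.

```python
import math

def shape_metrics(area, perimeter, major_axis, minor_axis):
    """Compute two common region-shape features.

    roundness = 4*pi*A / P**2 equals 1.0 for a perfect circle and
    falls toward 0 for irregular outlines; elongation is the ratio of
    the major to the minor axis length (1.0 for a circle).
    """
    roundness = 4.0 * math.pi * area / (perimeter ** 2)
    elongation = major_axis / minor_axis
    return roundness, elongation

# sanity check on a circle of radius 10: area = pi*r^2, perimeter = 2*pi*r
r, e = shape_metrics(math.pi * 100.0, 2.0 * math.pi * 10.0, 20.0, 20.0)
```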
NASA Astrophysics Data System (ADS)
Izah Anuar, Nurul; Saptari, Adi
2016-02-01
This paper addresses the types of particle representation (encoding) procedures in a population-based stochastic optimization technique for solving scheduling problems known in the job-shop manufacturing environment. It evaluates and compares the performance of different particle representation procedures in Particle Swarm Optimization (PSO) for solving Job-shop Scheduling Problems (JSP). Particle representation procedures refer to the mapping between the particle position in PSO and the scheduling solution in JSP. This mapping is an important step, carried out so that each particle in PSO can represent a schedule in JSP. Three procedures, namely Operation and Particle Position Sequence (OPPS), random keys representation, and a random-key encoding scheme, are used in this study. These procedures have been tested on the FT06 and FT10 benchmark problems available in the OR-Library, where the objective function is to minimize the makespan, using MATLAB software. Based on the experimental results, it is discovered that OPPS gives the best performance in solving both benchmark problems. The contribution of this paper is that it demonstrates to practitioners involved in complex scheduling problems that different particle representation procedures can have significant effects on the performance of PSO in solving JSP.
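A minimal sketch of a random-key style decoding, one of the three particle representations compared above: the continuous particle position is sorted to obtain an operation priority order, and each slot is mapped back to a job so that every job appears once per machine. The modulo-based job mapping is a simplified illustrative variant, not necessarily the exact scheme used in the paper.

```python
def decode_random_keys(position, n_jobs, n_machines):
    """Decode a continuous PSO position into a JSP operation sequence.

    The particle carries one key per operation (n_jobs * n_machines
    dimensions). Sorting the keys yields a priority order over the
    slots, and slot i is assigned to job i % n_jobs, so every job
    appears exactly n_machines times in the resulting sequence.
    """
    assert len(position) == n_jobs * n_machines
    order = sorted(range(len(position)), key=lambda i: position[i])
    return [i % n_jobs for i in order]
```

Because any real-valued position decodes to a feasible operation sequence, the PSO velocity update can stay entirely in continuous space.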
Space shuttle propulsion parameter estimation using optimal estimation techniques
NASA Technical Reports Server (NTRS)
1983-01-01
The first twelve system state variables are presented with the necessary mathematical developments for incorporating them into the filter/smoother algorithm. Other state variables, e.g., aerodynamic coefficients, can be easily incorporated into the estimation algorithm as uncertain parameters, but for initial checkout purposes they are treated as known quantities. An approach for incorporating the NASA propulsion predictive model results into the optimal estimation algorithm was identified. This approach utilizes numerical derivatives and nominal predictions within the algorithm, with global iterations of the algorithm. The iterative process is terminated when the quality of the estimates no longer significantly improves.
Application of optimal data assimilation techniques in oceanography
Miller, R.N.
1996-12-31
Application of optimal data assimilation methods in oceanography is, if anything, more important than it is in numerical weather prediction, due to the sparsity of data. Here, a general framework is presented and practical examples taken from the author's work are described, with the purpose of conveying to the reader some idea of the state of the art of data assimilation in oceanography. While no attempt is made to be exhaustive, references to other lines of research are included. Major challenges to the community include the design of statistical error models and the handling of strong nonlinearity.
Decomposition technique and optimal trajectories for the aeroassisted flight experiment
NASA Technical Reports Server (NTRS)
Miele, A.; Wang, T.; Deaton, A. W.
1990-01-01
An actual geosynchronous Earth orbit-to-low Earth orbit (GEO-to-LEO) transfer is considered with reference to the aeroassisted flight experiment (AFE) spacecraft, and optimal trajectories are determined by minimizing the total characteristic velocity. The optimization is performed with respect to the time history of the controls (angle of attack and angle of bank), the entry path inclination and the flight time being free. Two transfer maneuvers are considered: direct ascent (DA) to LEO and indirect ascent (IA) to LEO via parking Earth orbit (PEO). By taking into account certain assumptions, the complete system can be decoupled into two subsystems: one describing the longitudinal motion and one describing the lateral motion. The angle of attack history, the entry path inclination, and the flight time are determined via the longitudinal motion subsystem. In this subsystem, the difference between the instantaneous bank angle and a constant bank angle is minimized in the least square sense subject to the specified orbital inclination requirement. Both the angles of attack and the angle of bank are shown to be constant. This result has considerable importance in the design of nominal trajectories to be used in the guidance of AFE and aeroassisted orbital transfer (AOT) vehicles.
Optimized distortion correction technique for echo planar imaging.
Chen, N K; Wyrwicz, A M
2001-03-01
A new phase-shifted EPI pulse sequence is described that encodes EPI phase errors due to all off-resonance factors, including B0 field inhomogeneity, eddy current effects, and gradient waveform imperfections. Combined with the previously proposed multichannel modulation postprocessing algorithm (Chen and Wyrwicz, MRM 1999;41:1206-1213), the encoded phase error information can be used to effectively remove geometric distortions in subsequent EPI scans. The proposed EPI distortion correction technique has been shown to be effective in removing distortions due to gradient waveform imperfections and phase gradient-induced eddy current effects. In addition, this new method retains advantages of the earlier method, such as simultaneous correction of different off-resonance factors without the use of a complicated phase unwrapping procedure. The effectiveness of this technique is illustrated with EPI studies on phantoms and animal subjects. Implementation in different versions of EPI sequences is also described. Magn Reson Med 45:525-528, 2001. PMID:11241714
Preliminary research on abnormal brain detection by wavelet-energy and quantum- behaved PSO.
Zhang, Yudong; Ji, Genlin; Yang, Jiquan; Wang, Shuihua; Dong, Zhengchao; Phillips, Preetha; Sun, Ping
2016-04-29
It is important to detect abnormal brains accurately and early. The wavelet-energy (WE) is a successful feature descriptor that has achieved excellent performance in various applications; hence, we proposed a WE based new approach for automated abnormal brain detection, and reported its preliminary results in this study. The kernel support vector machine (KSVM) was used as the classifier, and quantum-behaved particle swarm optimization (QPSO) was introduced to optimize the weights of the SVM. The results based on a 5 × 5-fold cross validation showed that the performance of the proposed "WE + QPSO-KSVM" was superior to "DWT + PCA + BP-NN", "DWT + PCA + RBF-NN", "DWT + PCA + PSO-KSVM", "WE + BPNN", "WE + KSVM", and "DWT + PCA + GA-KSVM" w.r.t. sensitivity, specificity, and accuracy. The work provides a novel means to detect abnormal brains with excellent performance. PMID:27163327
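The wavelet-energy (WE) feature itself can be sketched as the relative energy of each subband of a multilevel wavelet decomposition. The Haar basis and three decomposition levels below are assumptions for illustration; the abstract does not state which wavelet or depth was used.

```python
def haar_level(x):
    """One level of the orthonormal Haar DWT: approximation and detail
    halves of an even-length signal."""
    s = 2 ** -0.5
    approx = [s * (x[i] + x[i + 1]) for i in range(0, len(x), 2)]
    detail = [s * (x[i] - x[i + 1]) for i in range(0, len(x), 2)]
    return approx, detail

def wavelet_energy(x, levels=3):
    """Wavelet-energy feature vector: the relative energy of each
    detail band plus the final approximation band. Because the Haar
    transform is orthonormal, the band energies sum to the signal
    energy and the returned features sum to 1."""
    bands = []
    a = list(x)
    for _ in range(levels):
        a, d = haar_level(a)
        bands.append(sum(t * t for t in d))
    bands.append(sum(t * t for t in a))
    total = sum(bands)
    return [b / total for b in bands]
```

For images, the same idea is applied to a 2-D decomposition; the resulting short feature vector is what the KSVM classifies.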
A technique for optimizing the design of power semiconductor devices
NASA Technical Reports Server (NTRS)
Schlegel, E. S.
1976-01-01
A technique is described that provides a basis for predicting whether any device design change will improve or degrade the unavoidable trade-off that must be made between the conduction loss and the turn-off speed of fast-switching high-power thyristors. The technique makes use of a previously reported method by which, for a given design, this trade-off was determined for a wide range of carrier lifetimes. It is shown that by extending this technique, one can predict how other design variables affect this trade-off. The results show that for relatively slow devices the design can be changed to decrease the current gains to improve the turn-off time without significantly degrading the losses. On the other hand, for devices having fast turn-off times design changes can be made to increase the current gain to decrease the losses without a proportionate increase in the turn-off time. Physical explanations for these results are proposed.
PSO-based methods for medical image registration and change assessment of pigmented skin
NASA Astrophysics Data System (ADS)
Kacenjar, Steve; Zook, Matthew; Balint, Michael
2011-03-01
's back topography. Since the skin is a deformable membrane, this process only provides an initial condition for subsequent refinements in aligning the localized topography of the skin. To achieve a refined enhancement, a Particle Swarm Optimizer (PSO) is used to optimally determine the local camera models associated with a generalized geometric transform. Here the optimization process is driven using the minimization of entropy between the multiple time-separated images. Once the camera models are corrected for local skin deformations, the images are compared using both pixel-based and regional-based methods. Limits on the detectability of change are established by the fidelity to which the algorithm corrects for local skin deformation and background alterations. These limits provide essential information in establishing early-warning thresholds for Melanoma detection. Key to this work is the development of a PSO alignment algorithm to perform the refined alignment in local skin topography between the time sequenced imagery (TSI). Test and validation of this alignment process is achieved using a forward model producing known geometric artifacts in the images and afterwards using a PSO algorithm to demonstrate the ability to identify and correct for these artifacts. Specifically, the forward model introduces local translational, rotational, and magnification changes within the image. These geometric modifiers are expected during TSI acquisition because of logistical issues to precisely align the patient to the image recording geometry and is therefore of paramount importance to any viable image registration system. This paper shows that the PSO alignment algorithm is effective in autonomously determining and mitigating these geometric modifiers. The degree of efficacy is measured by several statistically and morphologically based pre-image filtering operations applied to the TSI imagery before applying the PSO alignment algorithm. These trade studies show that global
Darzi, Soodabeh; Kiong, Tiong Sieh; Islam, Mohammad Tariqul; Ismail, Mahamod; Kibria, Salehin; Salem, Balasem
2014-01-01
Linear constraint minimum variance (LCMV) is one of the adaptive beamforming techniques commonly applied to cancel interfering signals and steer or produce a strong beam to the desired signal through its computed weight vectors. However, the weights computed by LCMV are usually unable to form the radiation beam precisely towards the target user, and are not good enough to reduce the interference by placing nulls at the interference sources. It is difficult to improve and optimize the LCMV beamforming technique through a conventional empirical approach. To provide a solution to this problem, artificial intelligence (AI) techniques are explored in order to enhance the LCMV beamforming ability. In this paper, particle swarm optimization (PSO), dynamic mutated artificial immune system (DM-AIS), and gravitational search algorithm (GSA) are incorporated into the existing LCMV technique in order to improve the weights of LCMV. The simulation results demonstrate that the received signal to interference and noise ratio (SINR) of the target user can be significantly improved by the integration of PSO, DM-AIS, and GSA in LCMV through the suppression of interference in the undesired direction. Furthermore, GSA can be applied as a more effective technique for LCMV beamforming optimization compared to the PSO technique. The algorithms were implemented in MATLAB. PMID:25147859
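For reference, the classical (pre-optimization) LCMV weight vector that such AI variants refine has the closed form w = R^-1 C (C^H R^-1 C)^-1 f, which enforces the gain constraints C^H w = f exactly. A minimal numpy sketch, assuming a uniform linear array and an identity noise covariance for the demo:

```python
import numpy as np

def steering_vector(theta_deg, n=8, d=0.5):
    """Uniform linear array steering vector for arrival angle theta
    (element spacing d in wavelengths)."""
    k = np.arange(n)
    return np.exp(2j * np.pi * d * k * np.sin(np.deg2rad(theta_deg)))

def lcmv_weights(R, C, f):
    """Classical LCMV solution: w = R^-1 C (C^H R^-1 C)^-1 f."""
    Ri_C = np.linalg.solve(R, C)
    return Ri_C @ np.linalg.solve(C.conj().T @ Ri_C, f)

n = 8
C = np.column_stack([steering_vector(0.0, n), steering_vector(40.0, n)])
f = np.array([1.0, 0.0])   # unit gain toward 0 deg, hard null at 40 deg
R = np.eye(n)              # identity noise covariance for this demo
w = lcmv_weights(R, C, f)
```

The PSO/DM-AIS/GSA schemes in the paper then perturb such weights to improve the SINR beyond what the closed form achieves under imperfect covariance estimates.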
An Enhanced Multi-Objective Optimization Technique for Comprehensive Aerospace Design
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi; Rajadas, John N.
2000-01-01
An enhanced multiobjective formulation technique, capable of emphasizing specific objective functions during the optimization process, has been demonstrated on a complex multidisciplinary design application. The Kreisselmeier-Steinhauser (K-S) function approach, which has been used successfully in a variety of multiobjective optimization problems, has been modified using weight factors that enable the designer to emphasize specific design objectives during the optimization process. The technique has been implemented in two distinctly different problems. The first is a classical three-bar truss problem and the second is a high-speed aircraft (a doubly swept wing-body configuration) application in which the multiobjective optimization procedure simultaneously minimizes the sonic boom and the drag-to-lift ratio (C_D/C_L) of the aircraft while maintaining the lift coefficient within prescribed limits. The results are compared with those of an equally weighted K-S multiobjective optimization. Results demonstrate the effectiveness of the enhanced multiobjective optimization procedure.
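The K-S aggregation at the core of this approach, with one plausible form of the weight-factor modification (the paper's exact weighting is not given in the abstract), can be sketched as:

```python
import math

def ks_function(g, rho=50.0, weights=None):
    """Kreisselmeier-Steinhauser aggregate of objective/constraint
    values g_k, with optional emphasis weights w_k (one plausible form
    of the weighting described above, not the paper's exact formula):

        KS = (1/rho) * ln( sum_k exp(rho * w_k * g_k) )

    As rho grows, KS approaches max_k(w_k * g_k) from above, so
    minimizing KS drives down the worst (weighted) objective. The
    largest term is factored out before exponentiating for numerical
    stability.
    """
    weights = weights or [1.0] * len(g)
    m = max(w * gk for w, gk in zip(weights, g))
    s = sum(math.exp(rho * (w * gk - m)) for w, gk in zip(weights, g))
    return m + math.log(s) / rho
```

Raising w_k for one objective inflates its contribution to the envelope, which is how the designer "emphasizes" that objective during optimization.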
NASA Technical Reports Server (NTRS)
Sreekanta Murthy, T.
1992-01-01
Results of the investigation of formal nonlinear programming-based numerical optimization techniques for helicopter airframe vibration reduction are summarized. The objective and constraint functions and the sensitivity expressions used in the formulation of airframe vibration optimization problems are presented and discussed. Implementation of a new computational procedure based on MSC/NASTRAN and CONMIN in a computer program system called DYNOPT for optimizing airframes subject to strength, frequency, dynamic response, and dynamic stress constraints is described. An optimization methodology is proposed which is thought to provide a new way of applying formal optimization techniques during the various phases of the airframe design process. Numerical results obtained from the application of the DYNOPT optimization code to a helicopter airframe are discussed.
Optimal fractional delay-IIR filter design using cuckoo search algorithm.
Kumar, Manjeet; Rawat, Tarun Kumar
2015-11-01
This paper applies a novel global meta-heuristic optimization algorithm, the cuckoo search algorithm (CSA), to determine the optimal coefficients of a fractional delay-infinite impulse response (FD-IIR) filter that meet ideal frequency response characteristics. Since fractional delay-IIR filter design is a multi-modal optimization problem, it cannot be solved efficiently using conventional gradient based optimization techniques. A weighted least squares (WLS) based fitness function is used to improve the performance to a great extent. FD-IIR filters of different orders have been designed using the CSA. The simulation results of the proposed CSA based approach have been compared to those of well accepted evolutionary algorithms such as the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). The performance of the CSA based FD-IIR filter is superior to that obtained by GA and PSO. The simulation and statistical results affirm that the proposed CSA based approach outperforms GA and PSO, not only in convergence rate but also in the optimal performance of the designed FD-IIR filter (i.e., smaller magnitude error, smaller phase error, higher percentage improvement in magnitude and phase error, and faster convergence). The absolute magnitude and phase errors obtained for the designed 5th order FD-IIR filter are as low as 0.0037 and 0.0046, respectively. The percentage improvements in magnitude error for the CSA based 5th order FD-IIR design with respect to GA and PSO are 80.93% and 74.83%, respectively, and in phase error 76.04% and 71.25%, respectively. PMID:26391486
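A minimal cuckoo search sketch in the spirit of the CSA described above, applied to a toy sphere function rather than FD-IIR coefficient design; the Lévy exponent, step scale, and abandonment fraction are conventional illustrative values, not the paper's settings.

```python
import math, random

def cuckoo_search(f, dim, n=15, iters=300, pa=0.25, seed=1):
    """Minimize f with a basic cuckoo search: Levy-flight moves around
    the current best nest, plus abandonment of a fraction pa of nests."""
    rng = random.Random(seed)
    beta = 1.5
    # Mantegna's algorithm constant for Levy-stable steps with exponent beta
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)

    def levy():
        return rng.gauss(0, sigma) / abs(rng.gauss(0, 1)) ** (1 / beta)

    nests = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vals = [f(x) for x in nests]
    for _ in range(iters):
        bi = min(range(n), key=lambda i: vals[i])
        best = nests[bi][:]
        for i in range(n):
            # new cuckoo egg: Levy step scaled by distance to the best nest
            cand = [x + 0.01 * levy() * (x - b) for x, b in zip(nests[i], best)]
            j = rng.randrange(n)
            cv = f(cand)
            if cv < vals[j]:               # replace a randomly chosen worse nest
                nests[j], vals[j] = cand, cv
        bi = min(range(n), key=lambda i: vals[i])
        for i in range(n):                 # abandon some nests, keep the best
            if i != bi and rng.random() < pa:
                nests[i] = [rng.uniform(-5, 5) for _ in range(dim)]
                vals[i] = f(nests[i])
    bi = min(range(n), key=lambda i: vals[i])
    return nests[bi], vals[bi]

x_best, v_best = cuckoo_search(lambda x: sum(t * t for t in x), dim=2)
```

For filter design, each nest would hold the FD-IIR coefficient vector and f would be the WLS frequency-response error instead of the sphere function.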
Optimization of hydrostatic transmissions by means of virtual instrumentation technique
NASA Astrophysics Data System (ADS)
Ion Guta, Dragos Daniel; Popescu, Teodor Costinel; Dumitrescu, Catalin
2010-11-01
Obtaining mathematical models as close as possible to the physical phenomena that are to be replicated or improved helps in deciding how to optimize them. The introduction of computers for monitoring and controlling processes caused changes in technological systems. With support from methods for the identification of processes and from the power of numerical computing equipment, researchers and designers can shorten the development period of applications in various fields by generating a solution as close as possible to reality from the design stage onward [1]. The paper presents a hybrid modeling/simulation solution for a hydrostatic transmission with mixed adjustment. For simulation and control of the examined process we used two distinct environments, AMESim and LabVIEW. The proposed solution allows coupling of the system's model to software control modules developed using virtual instrumentation. The simulation network of the analyzed system was "tuned" and validated against an actual model of the process. This paper highlights some aspects regarding the energy and functional advantages of hydraulic transmissions based on adjustable volumetric machines in their primary and secondary sectors [2].
Sang, Jun; Zhao, Jun; Xiang, Zhili; Cai, Bin; Xiang, Hong
2015-01-01
The gyrator transform has been widely used for image encryption recently. For gyrator transform-based image encryption, the rotation angle used in the gyrator transform is one of the secret keys. In this paper, by analyzing the properties of the gyrator transform, an improved particle swarm optimization (PSO) algorithm is proposed to search for the rotation angle in a single gyrator transform. Since the gyrator transform is continuous, it is time-consuming to exhaustively search for the rotation angle, even considering the data precision in a computer. Therefore, a computational intelligence-based search may be an alternative choice. Considering the severe local convergence and obvious global fluctuations of the gyrator transform, an improved PSO algorithm is proposed to suit such situations. The experimental results demonstrate that the proposed improved PSO algorithm can significantly improve the efficiency of searching for the rotation angle in a single gyrator transform. Since the gyrator transform is the foundation of image encryption in gyrator transform domains, research on the method of searching for the rotation angle in a single gyrator transform is useful for further study of the security of such image encryption algorithms. PMID:26251910
Calculation of free fall trajectories based on numerical optimization techniques
NASA Technical Reports Server (NTRS)
1972-01-01
The development of a means of computing free-fall (nonthrusting) trajectories from one specified point in the solar system to another specified point in the solar system in a given amount of time was studied. The problem is that of solving a two-point boundary value problem for which the initial slope is unknown. Two standard methods of attack exist for solving two-point boundary value problems. The first method is known as the initial value or shooting method. The second method of attack for two-point boundary value problems is to approximate the nonlinear differential equations by an appropriate linearized set. Parts of both boundary value problem solution techniques described above are used. A complete velocity history is guessed such that the corresponding position history satisfies the given boundary conditions at the appropriate times. An iterative procedure is then followed until the last guessed velocity history and the velocity history obtained from integrating the acceleration history agree to some specified tolerance everywhere along the trajectory.
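The shooting approach summarized above can be illustrated on a toy two-point boundary value problem. The dynamics (constant gravity in one dimension), the search bracket, and the tolerance are illustrative assumptions, not the report's solar-system model:

```python
def integrate(v0, g=9.81, T=3.0, dt=1e-3):
    """Forward-Euler integration of y'' = -g from y(0) = 0 with initial slope v0."""
    y, v, t = 0.0, v0, 0.0
    while t < T - 1e-12:
        y += v * dt
        v -= g * dt
        t += dt
    return y  # position at the final time

def shoot(target=0.0, lo=0.0, hi=50.0, tol=1e-6):
    """Bisect on the unknown initial slope until the endpoint condition is met."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if integrate(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

v0 = shoot()   # analytic answer for y(0) = y(3) = 0 is g*T/2, about 14.715
```

Bisection works here because the endpoint position grows monotonically with the initial slope; for interplanetary trajectories the miss distance is a vector, so a multidimensional root-finder replaces the scalar bisection.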
Optimized digital filtering techniques for radiation detection with HPGe detectors
NASA Astrophysics Data System (ADS)
Salathe, Marco; Kihm, Thomas
2016-02-01
This paper describes state-of-the-art digital filtering techniques that are part of GEANA, an automatic data analysis software used for the GERDA experiment. The discussed filters include a novel, nonlinear correction method for ballistic deficits, which is combined with one of three shaping filters: a pseudo-Gaussian, a modified trapezoidal, or a modified cusp filter. The performance of the filters is demonstrated with a 762 g Broad Energy Germanium (BEGe) detector, produced by Canberra, that measures γ-ray lines from radioactive sources in an energy range between 59.5 and 2614.5 keV. At 1332.5 keV, together with the ballistic deficit correction method, all filters produce a comparable energy resolution of ~1.61 keV FWHM. This value is superior to those measured by the manufacturer and those found in publications with detectors of a similar design and mass. At 59.5 keV, the modified cusp filter without a ballistic deficit correction produced the best result, with an energy resolution of 0.46 keV. It is observed that the loss in resolution by using a constant shaping time over the entire energy range is small when using the ballistic deficit correction method.
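The trapezoidal shaping mentioned above can be sketched as a running-sum filter applied to an idealized step pulse. Pole-zero cancellation, the ballistic-deficit correction, and realistic (decaying) detector pulses are omitted, and the sample counts are illustrative:

```python
def trapezoidal_shaper(x, k, l):
    """Trapezoidal shaping of a step-like signal: rise time k samples, flat top
    l - k samples. A sketch only; real shapers add deficit/pole-zero corrections."""
    assert l >= k
    get = lambda i: x[i] if i >= 0 else 0.0
    acc, y = 0.0, []
    for i in range(len(x)):
        # running sum of a 4-term difference: equivalent to convolving with a
        # trapezoidal weighting function of rise k and total width k + l
        acc += get(i) - get(i - k) - get(i - l) + get(i - k - l)
        y.append(acc / k)   # normalize so the flat top equals the step height
    return y

step = [0.0] * 20 + [1.0] * 100          # idealized noiseless detector step
shaped = trapezoidal_shaper(step, k=16, l=48)
# the flat top reaches the step amplitude, then the output returns to baseline
```

Reading the pulse amplitude on the flat top, rather than at a single peak sample, is what makes the trapezoid tolerant of charge-collection-time variations, which is the ballistic-deficit issue the paper's correction targets.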
Hybrid Bacterial Foraging and Particle Swarm Optimization for detecting Bundle Branch Block.
Kora, Padmavathi; Kalva, Sri Ramakrishna
2015-01-01
Abnormal cardiac beat identification is a key step in the detection of heart diseases. Our present study describes a procedure for the detection of left and right bundle branch block (LBBB and RBBB) electrocardiogram (ECG) patterns. In these conditions the electrical impulses that control the cardiac beat face difficulty in moving inside the heart, a problem termed bundle branch block (BBB). BBB makes it harder for the heart to pump blood effectively through the circulatory system. ECG feature extraction is a key process in detecting heart ailments. Our present study proposes a hybrid method combining two heuristic optimization methods, Bacterial Foraging Optimization (BFO) and Particle Swarm Optimization (PSO), for the feature selection of ECG signals. One of the major controlling forces of the BFO algorithm is the chemotactic movement of a bacterium that models a test solution. The chemotaxis process of BFO depends on random search directions, which may delay reaching the global optimum solution. The hybrid technique, Bacterial Foraging-Particle Swarm Optimization (BFPSO), incorporates concepts from BFO and PSO to create the individuals of a new generation. BFPSO performs local search through the chemotactic movement of BFO, while the global search over the entire search domain is accomplished by a PSO operator. The BFPSO feature values are given as input to a Levenberg-Marquardt neural network classifier. PMID:26361582
PSO-SVM-Based Online Locomotion Mode Identification for Rehabilitation Robotic Exoskeletons.
Long, Yi; Du, Zhi-Jiang; Wang, Wei-Dong; Zhao, Guang-Yu; Xu, Guo-Qiang; He, Long; Mao, Xi-Wang; Dong, Wei
2016-01-01
Locomotion mode identification is essential for the control of robotic rehabilitation exoskeletons. This paper proposes an online support vector machine (SVM) optimized by particle swarm optimization (PSO) to identify different locomotion modes and realize a smooth and automatic locomotion transition. A PSO algorithm is used to obtain the optimal SVM parameters for a better overall performance. Signals measured by the foot pressure sensors integrated in the insoles of wearable shoes and by the MEMS-based attitude and heading reference systems (AHRS) attached to the shoes and shanks of the leg segments are fused together as the input to the SVM. Based on a chosen window of 200 ms (with a sampling frequency of 40 Hz), a three-layer wavelet packet analysis (WPA) is used for feature extraction, after which kernel principal component analysis (kPCA) is utilized to reduce the dimension of the feature set and thereby the computational cost of the SVM. Since the signals come from two different types of sensors, normalization is conducted to scale the inputs into the interval [0, 1]. Five-fold cross-validation is adopted to train the classifier, which prevents over-fitting. Based on the SVM model obtained offline in MATLAB, an online SVM algorithm is constructed for locomotion mode identification. Experiments are performed for different locomotion modes, and the results show the effectiveness of the proposed algorithm with an accuracy of 96.00% ± 2.45%. To improve the accuracy, a majority vote algorithm (MVA) is used for post-processing, with which the identification accuracy exceeds 98.35% ± 1.65%. The proposed algorithm can be extended and employed in the field of robotic rehabilitation and assistance. PMID:27598160
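The normalization step mentioned above (scaling the fused sensor inputs into [0, 1]) is a plain min-max transform; the readings below are made-up numbers, not data from the paper:

```python
def minmax_scale(values):
    """Scale a feature vector into [0, 1] by its own minimum and maximum."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]   # constant feature: map to 0 by convention
    return [(v - lo) / (hi - lo) for v in values]

pressures = [12.0, 30.0, 18.0, 24.0]   # hypothetical foot-pressure readings
scaled = minmax_scale(pressures)       # spans exactly 0.0 to 1.0
```

Scaling each sensor channel separately keeps the high-range pressure values from dominating the low-range AHRS angles inside the SVM kernel, which is the stated reason for this step.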
Pump-and-treat optimization using analytic element method flow models
NASA Astrophysics Data System (ADS)
Matott, L. Shawn; Rabideau, Alan J.; Craig, James R.
2006-05-01
Plume containment using pump-and-treat (PAT) technology continues to be a popular remediation technique for sites with extensive groundwater contamination. As such, optimization of PAT systems, where cost is minimized subject to various remediation constraints, is the focus of an important and growing body of research. While previous pump-and-treat optimization (PATO) studies have used discretized (finite element or finite difference) flow models, the present study examines the use of analytic element method (AEM) flow models. In a series of numerical experiments, two PATO problems adapted from the literature are optimized using a multi-algorithmic optimization software package coupled with an AEM flow model. The experiments apply several different optimization algorithms and explore the use of various pump-and-treat cost and constraint formulations. The results demonstrate that AEM models can be used to optimize the number, locations and pumping rates of wells in a pump-and-treat containment system. Furthermore, the results illustrate that a total outflux constraint placed along the plume boundary can be used to enforce plume containment. Such constraints are shown to be efficient and reliable alternatives to conventional particle tracking and gradient control techniques. Finally, the particle swarm optimization (PSO) technique is identified as an effective algorithm for solving pump-and-treat optimization problems. A parallel version of the PSO algorithm is shown to have linear speedup, suggesting that the algorithm is suitable for application to problems that are computationally demanding and involve large numbers of wells.
A technique for locating function roots and for satisfying equality constraints in optimization
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw
1991-01-01
A new technique for locating simultaneous roots of a set of functions is described. The technique is based on the property of the Kreisselmeier-Steinhauser function which descends to a minimum at each root location. It is shown that the ensuing algorithm may be merged into any nonlinear programming method for solving optimization problems with equality constraints.
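The cited property of the Kreisselmeier-Steinhauser function can be sketched numerically. Aggregating over both f_i and -f_i gives a smooth surrogate for max_i |f_i(x)| that descends toward zero at simultaneous roots of the set; the example functions and the grid-scan minimizer are illustrative assumptions, not the paper's algorithm:

```python
import math

def ks_envelope(fs, x, rho=100.0):
    """KS aggregate over {f_i, -f_i}: a smooth surrogate for max_i |f_i(x)|."""
    vals = [g for f in fs for g in (f(x), -f(x))]
    m = max(vals)   # subtract the max before exponentiating, for stability
    return m + math.log(sum(math.exp(rho * (v - m)) for v in vals)) / rho

# Two functions with a simultaneous root at x = 2
f1 = lambda x: x * x - 4.0
f2 = lambda x: x - 2.0

# crude 1-D scan for the minimizer of the KS envelope (hypothetical search range)
xs = [i / 1000.0 for i in range(0, 4001)]      # 0.000 .. 4.000
x_star = min(xs, key=lambda x: ks_envelope([f1, f2], x))
```

Because the surrogate is smooth, any gradient-based nonlinear programming method can drive it to its minima, which is what lets the technique be merged into an equality-constrained optimizer as the abstract describes.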
Wang, Li; Jia, Pengfei; Huang, Tailai; Duan, Shukai; Yan, Jia; Wang, Lidan
2016-01-01
An electronic nose (E-nose) is an intelligent system that we use in this paper to distinguish three indoor pollutant gases (benzene (C₆H₆), toluene (C₇H₈), formaldehyde (CH₂O)) and carbon monoxide (CO). The algorithm is a key part of an E-nose system, mainly composed of data processing and pattern recognition. In this paper, we employ a support vector machine (SVM) to distinguish the indoor pollutant gases; two of its parameters need to be optimized. To improve the performance of the SVM, in other words, to obtain a higher gas recognition rate, an effective enhanced krill herd algorithm (EKH) based on a novel decision-weighting-factor computing method is proposed to optimize the two SVM parameters. Krill herd (KH) is an effective method in practice; however, on occasion it cannot avoid the influence of local best solutions, so it cannot always find the global optimum. In addition, its search ability relies fully on randomness, so it cannot always converge rapidly. To address these issues, we propose an enhanced KH (EKH) that improves the global search and convergence speed of KH. To obtain a more accurate model of krill behavior, an updated crossover operator is added to the approach, which keeps the krill group diverse in the early iterations and gives good local search ability in the later iterations. The recognition results of EKH are compared with those of other optimization algorithms (KH, chaotic KH (CKH), quantum-behaved particle swarm optimization (QPSO), particle swarm optimization (PSO), and genetic algorithm (GA)), and EKH outperforms the other considered methods. The research results verify that EKH not only significantly improves the performance of our E-nose system, but also provides a good starting point and theoretical basis for further study of other improved krill algorithms in all E-nose application areas. PMID
NASA Astrophysics Data System (ADS)
Yamaguchi, Hideshi; Soeda, Takeshi
2015-03-01
A practical framework for an electron beam induced current (EBIC) technique has been established for conductive materials based on a numerical optimization approach. Although the conventional EBIC technique is useful for evaluating the distributions of dopants or crystal defects in semiconductor transistors, issues related to the reproducibility and quantitative capability of measurements using this technique persist. For instance, it is difficult to acquire high-quality EBIC images throughout continuous tests due to variation in operator skill or test environment. Recently, due to the evaluation of EBIC equipment performance and the numerical optimization of equipment items, the constant acquisition of high contrast images has become possible, improving the reproducibility as well as yield regardless of operator skill or test environment. The technique proposed herein is even more sensitive and quantitative than scanning probe microscopy, an imaging technique that can possibly damage the sample. The new technique is expected to benefit the electrical evaluation of fragile or soft materials along with LSI materials.
A damage identification technique based on embedded sensitivity analysis and optimization processes
NASA Astrophysics Data System (ADS)
Yang, Chulho; Adams, Douglas E.
2014-07-01
A vibration based structural damage identification method, using embedded sensitivity functions and optimization algorithms, is discussed in this work. The embedded sensitivity technique requires only measured or calculated frequency response functions to obtain the sensitivity of system responses to each component parameter. Therefore, this sensitivity analysis technique can be effectively used for the damage identification process. Optimization techniques are used to minimize the difference between the measured frequency response functions of the damaged structure and those calculated from the baseline system using embedded sensitivity functions. The amount of damage can be quantified directly in engineering units as changes in stiffness, damping, or mass. Various factors in the optimization process and structural dynamics are studied to enhance the performance and robustness of the damage identification process. This study shows that the proposed technique can improve the accuracy of damage identification with less than 2 percent error of estimation.
An Algorithmic Framework for Multiobjective Optimization
Ganesan, T.; Elamvazuthi, I.; Shaari, Ku Zilati Ku; Vasant, P.
2013-01-01
Multiobjective (MO) optimization is an emerging field which is increasingly being encountered in many fields globally. Various metaheuristic techniques such as differential evolution (DE), genetic algorithm (GA), gravitational search algorithm (GSA), and particle swarm optimization (PSO) have been used in conjunction with scalarization techniques such as weighted sum approach and the normal-boundary intersection (NBI) method to solve MO problems. Nevertheless, many challenges still arise especially when dealing with problems with multiple objectives (especially in cases more than two). In addition, problems with extensive computational overhead emerge when dealing with hybrid algorithms. This paper discusses these issues by proposing an alternative framework that utilizes algorithmic concepts related to the problem structure for generating efficient and effective algorithms. This paper proposes a framework to generate new high-performance algorithms with minimal computational overhead for MO optimization. PMID:24470795
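The weighted-sum scalarization mentioned above reduces a multiobjective problem to a family of single-objective ones. A minimal sketch on a classic convex bi-objective test problem follows; the functions and the grid minimizer are illustrative assumptions, not the paper's framework:

```python
def weighted_sum_front(f1, f2, minimize_1d, n_weights=11):
    """Sweep w in [0, 1], minimizing w*f1 + (1-w)*f2, to trace (part of)
    the Pareto front of a two-objective problem."""
    front = []
    for i in range(n_weights):
        w = i / (n_weights - 1)
        x = minimize_1d(lambda x: w * f1(x) + (1 - w) * f2(x))
        front.append((f1(x), f2(x)))
    return front

# Classic convex bi-objective test problem: f1 = x^2, f2 = (x - 2)^2
f1 = lambda x: x * x
f2 = lambda x: (x - 2.0) ** 2

def grid_min(f, lo=-1.0, hi=3.0, n=4001):
    """Dense-grid stand-in for whichever single-objective solver is plugged in."""
    return min((lo + (hi - lo) * i / (n - 1) for i in range(n)), key=f)

front = weighted_sum_front(f1, f2, grid_min)
```

Sweeping the weight recovers the convex portion of the Pareto front; on nonconvex fronts the weighted sum misses points, which is precisely where the normal-boundary intersection method named in the abstract does better.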
Shabri, Ani; Samsudin, Ruhaidah
2014-01-01
Crude oil prices play a significant role in the global economy and are a key input into option pricing formulas, portfolio allocation, and risk measurement. In this paper, a hybrid model integrating wavelets and multiple linear regression (MLR) is proposed for crude oil price forecasting. In this model, the Mallat wavelet transform is first used to decompose an original time series into several subseries at different scales. Then, principal component analysis (PCA) is used to process the subseries data in the MLR forecast. Particle swarm optimization (PSO) is used to select the optimal parameters of the MLR model. To assess the effectiveness of this model, the daily West Texas Intermediate (WTI) crude oil market has been used as the case study. The time-series prediction performance of the WMLR model is compared with the MLR, ARIMA, and GARCH models using various statistical measures. The experimental results show that the proposed model outperforms the individual models in forecasting the crude oil price series. PMID:24895666
Fournier, René; Mohareb, Amir
2016-01-14
We devised a global optimization (GO) strategy for optimizing molecular properties with respect to both geometry and chemical composition. A relative index of thermodynamic stability (RITS) is introduced to allow meaningful energy comparisons between different chemical species. We use the RITS by itself, or in combination with another calculated property, to create an objective function F to be minimized. Including the RITS in the definition of F ensures that the solutions have some degree of thermodynamic stability. We illustrate how the GO strategy works with three test applications, with F calculated in the framework of Kohn-Sham Density Functional Theory (KS-DFT) with the Perdew-Burke-Ernzerhof exchange-correlation. First, we searched the composition and configuration space of CmHnNpOq (m = 0-4, n = 0-10, p = 0-2, q = 0-2, and 2 ≤ m + n + p + q ≤ 12) for stable molecules. The GO discovered familiar molecules like N2, CO2, acetic acid, acetonitrile, ethane, and many others, after a small number (5000) of KS-DFT energy evaluations. Second, we carried out a GO of the geometry of CumSnn (+) (m = 1, 2 and n = 9-12). A single GO run produced the same low-energy structures found in an earlier study where each CumSnn (+) species had been optimized separately. Finally, we searched bimetallic clusters AmBn (3 ≤ m + n ≤ 6, A,B= Li, Na, Al, Cu, Ag, In, Sn, Pb) for species and configurations having a low RITS and large highest occupied Molecular Orbital (MO) to lowest unoccupied MO energy gap (Eg). We found seven bimetallic clusters with Eg > 1.5 eV. PMID:26772561
NASA Astrophysics Data System (ADS)
Agarwal, Reema; Köhl, Armin; Stammer, Detlef
2013-04-01
We present an application of a multivariate parameter optimization technique to a global primitive equation atmospheric GCM. The technique is based upon the Simultaneous Perturbation Stochastic Approximation (SPSA) algorithm, in which gradients of the objective function are approximated. This technique has some advantages over other optimization procedures (such as Green's function or adjoint methods), like robustness to noise in the objective function and the ability to find the actual minimum in case of multiple minima. Other useful features of the technique are its simplicity and cost effectiveness. The atmospheric GCM used is the coarse-resolution PLAnet SIMulator (PLASIM). In order to identify the parameters to be used in the optimization procedure, a series of sensitivity experiments with 12 different parameters was performed, and subsequently 5 parameters related to the cloud radiation parameterization, to which the GCM was highly sensitive, were selected. The optimization technique was applied and the selected parameters were simultaneously tuned and tested over 1-year GCM integrations. The performance of the technique is judged by the behavior of the model's cost function, which includes temperature, precipitation, humidity and flux contributions. The method is found to be useful for reducing the model's cost function against both identical-twin data and ECMWF ERA-40 reanalysis data.
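The SPSA idea summarized above — approximating the whole gradient from only two evaluations of the objective per iteration, regardless of how many parameters are tuned — can be sketched as follows. The quadratic "cost function", its noise level, and the gain constants are illustrative assumptions, not PLASIM quantities:

```python
import random

def spsa(f, x0, a=0.1, c=0.1, iters=300, seed=0):
    """Simultaneous Perturbation Stochastic Approximation: two evaluations of f
    per step estimate the full gradient, whatever the dimension of x."""
    rng = random.Random(seed)
    x = list(x0)
    for k in range(1, iters + 1):
        ak = a / k ** 0.602            # standard gain-decay exponents
        ck = c / k ** 0.101
        delta = [rng.choice((-1.0, 1.0)) for _ in x]   # Rademacher perturbation
        xp = [xi + ck * di for xi, di in zip(x, delta)]
        xm = [xi - ck * di for xi, di in zip(x, delta)]
        g = (f(xp) - f(xm)) / (2 * ck)                 # directional difference
        x = [xi - ak * g / di for xi, di in zip(x, delta)]
    return x

# Hypothetical noisy cost surface with its minimum at (1, -2)
def cost(x, rng=random.Random(42)):
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2 + 0.01 * rng.gauss(0, 1)

x = spsa(cost, [0.0, 0.0])
```

The two-evaluation property is what makes SPSA attractive for tuning a GCM, where each objective evaluation is a full model integration and per-parameter finite differences would be prohibitively expensive.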
Cost-Optimal Design of a 3-Phase Core Type Transformer by Gradient Search Technique
NASA Astrophysics Data System (ADS)
Basak, R.; Das, A.; Sensarma, A. K.; Sanyal, A. N.
2014-04-01
3-phase core type transformers are extensively used as power and distribution transformers in power systems, and their cost is a sizable proportion of the total system cost. Therefore they should be designed cost-optimally. The design methodology for reaching cost-optimality has been discussed in detail by authors like Ramamoorty, and in brief in some textbooks on electrical design. The paper gives a method for optimizing the design, in the presence of constraints specified by the customer and the regulatory authorities, through a gradient search technique. The starting point has been chosen within the allowable parameter space, and the steepest-descent path has been followed until convergence. The step length has been judiciously chosen, and the program has been maneuvered to avoid local minima. The method appears to be the best choice here, as its convergence is the quickest amongst the optimization techniques compared.
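The gradient search loop described above can be sketched with a backtracking (Armijo) step-length rule standing in for the "judiciously chosen" step. The quadratic below is a hypothetical stand-in for the transformer cost surface, not the actual design objective:

```python
def steepest_descent(f, grad, x0, lr0=1.0, tol=1e-8, max_iter=500):
    """Steepest-descent search with backtracking (Armijo) step-length control."""
    x = list(x0)
    for _ in range(max_iter):
        g = grad(x)
        gg = sum(gi * gi for gi in g)
        if gg < tol:
            break                      # gradient ~ 0: a (local) optimum
        fx, step = f(x), lr0
        # shrink the step until a sufficient decrease is achieved
        while f([xi - step * gi for xi, gi in zip(x, g)]) > fx - 1e-4 * step * gg:
            step *= 0.5
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

# Hypothetical smooth stand-in for the cost surface, minimized at (3, -1)
f = lambda x: (x[0] - 3.0) ** 2 + 10.0 * (x[1] + 1.0) ** 2
grad = lambda x: [2.0 * (x[0] - 3.0), 20.0 * (x[1] + 1.0)]
opt = steepest_descent(f, grad, [0.0, 0.0])
```

Restarting such a loop from several starting points inside the allowable parameter space is the usual way to maneuver around local minima, as the abstract describes.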
Hashim, H. A.; Abido, M. A.
2015-01-01
This paper presents a comparative study of fuzzy controller design for the twin rotor multi-input multioutput (MIMO) system (TRMS) considering most promising evolutionary techniques. These are gravitational search algorithm (GSA), particle swarm optimization (PSO), artificial bee colony (ABC), and differential evolution (DE). In this study, the gains of four fuzzy proportional derivative (PD) controllers for TRMS have been optimized using the considered techniques. The optimization techniques are developed to identify the optimal control parameters for system stability enhancement, to cancel high nonlinearities in the model, to reduce the coupling effect, and to drive TRMS pitch and yaw angles into the desired tracking trajectory efficiently and accurately. The most effective technique in terms of system response due to different disturbances has been investigated. In this work, it is observed that GSA is the most effective technique in terms of solution quality and convergence speed. PMID:25960738
A knowledge-based approach to improving optimization techniques in system planning
NASA Technical Reports Server (NTRS)
Momoh, J. A.; Zhang, Z. Z.
1990-01-01
A knowledge-based (KB) approach to improve mathematical programming techniques used in the system planning environment is presented. The KB system assists in selecting appropriate optimization algorithms, objective functions, constraints and parameters. The scheme is implemented by integrating symbolic computation of rules derived from operator and planner's experience and is used for generalized optimization packages. The KB optimization software package is capable of improving the overall planning process which includes correction of given violations. The method was demonstrated on a large scale power system discussed in the paper.
Application of response surface techniques to helicopter rotor blade optimization procedure
NASA Technical Reports Server (NTRS)
Henderson, Joseph Lynn; Walsh, Joanne L.; Young, Katherine C.
1995-01-01
In multidisciplinary optimization problems, response surface techniques can be used to replace the complex analyses that define the objective function and/or constraints with simple functions, typically polynomials. In this work a response surface is applied to the design optimization of a helicopter rotor blade. In previous work, this problem has been formulated with a multilevel approach. Here, the response surface takes advantage of this decomposition and is used to replace the lower level, a structural optimization of the blade. Problems that were encountered and important considerations in applying the response surface are discussed. Preliminary results are also presented that illustrate the benefits of using the response surface.
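The response-surface idea above — replacing an expensive analysis with a fitted polynomial and optimizing the cheap surrogate — can be sketched in one dimension. The "expensive" function and the design points are illustrative stand-ins for the blade structural optimization, not data from the paper:

```python
def fit_quadratic(xs, ys):
    """Least-squares fit of y ~ a0 + a1*x + a2*x^2 via the 3x3 normal equations."""
    Sx = [sum(x ** p for x in xs) for p in range(5)]               # power sums
    Sy = [sum(y * x ** p for x, y in zip(xs, ys)) for p in range(3)]
    A = [[Sx[0], Sx[1], Sx[2]],
         [Sx[1], Sx[2], Sx[3]],
         [Sx[2], Sx[3], Sx[4]]]
    rhs = Sy[:]
    for i in range(3):                     # Gaussian elimination, partial pivoting
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p], rhs[i], rhs[p] = A[p], A[i], rhs[p], rhs[i]
        for r in range(i + 1, 3):
            m = A[r][i] / A[i][i]
            A[r] = [arj - m * aij for arj, aij in zip(A[r], A[i])]
            rhs[r] -= m * rhs[i]
    coef = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                    # back substitution
        coef[i] = (rhs[i] - sum(A[i][j] * coef[j] for j in (1, 2) if j > i)) / A[i][i]
    return coef

# Stand-in for the expensive lower-level analysis, sampled at a few design points
expensive = lambda x: (x - 1.5) ** 2 + 2.0
xs = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
a0, a1, a2 = fit_quadratic(xs, [expensive(x) for x in xs])
x_opt = -a1 / (2.0 * a2)   # minimizer of the fitted surrogate
```

Once fitted, the surrogate is essentially free to evaluate, so the upper-level optimizer can query it thousands of times without rerunning the structural analysis.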
Srinivasan, Thenmozhi; Palanisamy, Balasubramanie
2015-01-01
Techniques for clustering high-dimensional data are emerging in response to the challenges of noisy, poor-quality data. This paper develops a method to cluster data using high-dimensional similarity-based PCM (SPCM) with ant colony optimization intelligence, which is effective in clustering nonspatial data without requiring the number of clusters from the user. The PCM is made similarity-based by combining it with the mountain method. Though this clustering is efficient, it is further optimized using an ant colony algorithm with swarm intelligence. A scalable clustering technique is thus obtained, and the evaluation results are verified on synthetic datasets. PMID:26495413
NASA Astrophysics Data System (ADS)
Sato, Yuki; Izui, Kazuhiro; Yamada, Takayuki; Nishiwaki, Shinji
2016-07-01
This paper proposes techniques to improve the diversity of the searching points during the optimization process in an Aggregative Gradient-based Multiobjective Optimization (AGMO) method, so that well-distributed Pareto solutions are obtained. First to be discussed is a distance constraint technique, applied among searching points in the objective space when updating design variables, that maintains a minimum distance between the points. Next, a scheme is introduced that deals with updated points that violate the distance constraint, by deleting the offending points and introducing new points in areas of the objective space where searching points are sparsely distributed. Finally, the proposed method is applied to example problems to illustrate its effectiveness.
NASA Technical Reports Server (NTRS)
Newsom, J. R.; Mukhopadhyay, V.
1983-01-01
A method for designing robust feedback controllers for multiloop systems is presented. Robustness is characterized in terms of the minimum singular value of the system return difference matrix at the plant input. Analytical gradients of the singular values with respect to design variables in the controller are derived. A cumulative measure of the singular values and their gradients with respect to the design variables is used with a numerical optimization technique to increase the system's robustness. Both unconstrained and constrained optimization techniques are evaluated. Numerical results are presented for a two-output drone flight control system.
NASA Astrophysics Data System (ADS)
Ch, Sudheer; Kumar, Deepak; Prasad, Ram Kailash; Mathur, Shashi
2013-08-01
A methodology based on support vector machine and particle swarm optimization techniques (SVM-PSO) was used in this study to determine an optimal pumping rate and well location to achieve an optimal cost of an in-situ bioremediation system. In the first stage of the two-stage methodology suggested for optimal in-situ bioremediation design, the optimal number of wells and their locations were determined from preselected candidate well locations. The pumping rate and well location in the first stage were subsequently optimized in the second stage of the methodology. The highly nonlinear system of equations governing in-situ bioremediation comprises the equations of flow and solute transport coupled with relevant biodegradation kinetics. A finite difference model was developed to simulate the process of in-situ bioremediation using an Alternating-Direction Implicit technique. This developed model (BIOFDM) yields the spatial and temporal distribution of contaminant concentration for predefined initial and boundary conditions. BIOFDM was later validated by comparing the simulated results with those obtained using BIOPLUME III for the case study of Shieh and Peralta (2005). The results were found to be in close agreement. Moreover, since the solution of the highly nonlinear equations otherwise requires significant computational effort, the computational burden in this study was managed within a practical time frame by replacing the BIOFDM model with a trained SVM model. Support Vector Machine, which generates fast solutions in real time, was treated as a universal function approximator in the study. Apart from reducing the computational burden, this technique generates a set of near optimal solutions (instead of a single optimal solution) and creates a reusable database that could be used to address many other management problems. Besides this, the search for an optimal pumping pattern was directed by a simple PSO technique and a penalty parameter approach was adopted
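The PSO search with a penalty-parameter approach can be sketched generically as below. This is a minimal textbook PSO, not the authors' code; the coefficient values (`w`, `c1`, `c2`), swarm size, and iteration count are assumptions.

```python
import random

def pso_minimize(cost, penalty, bounds, n_particles=20, iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal PSO; infeasibility is handled by adding a penalty term to
    the objective, as in the penalty-parameter approach."""
    rng = random.Random(seed)
    dim = len(bounds)
    f = lambda x: cost(x) + penalty(x)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Clamp positions to the box bounds.
                pos[i][d] = min(max(pos[i][d] + vel[i][d],
                                    bounds[d][0]), bounds[d][1])
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f
```

In the bioremediation setting, `cost` would be the (surrogate-evaluated) remediation cost of a pumping pattern and `penalty` would return a large value when concentration constraints are violated.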
Optimization of Heat-Sink Cooling Structure in EAST with Hydraulic Expansion Technique
NASA Astrophysics Data System (ADS)
Xu, Tiejun; Huang, Shenghong; Xie, Han; Song, Yuntao; Zhan, Ping; Ji, Xiang; Gao, Daming
2011-12-01
Considering utilization of the original chromium-bronze material, two processing techniques, hydraulic expansion and high-temperature vacuum welding, were proposed for the optimization of the heat-sink structure in EAST. The heat transfer performance of the heat-sink with and without a cooling tube was calculated, and different types of connection between the tube and the heat-sink were compared in a dedicated test. Numerical analysis shows that the diameter of the heat-sink channel can be reduced from 12 mm to 10 mm. Compared with the original sample, the thermal contact resistance between the tube and the heat-sink reduces the heat transfer performance by 10% for the welded sample and by 20% for the hydraulically expanded sample. However, the welding technique is more complicated and expensive than the hydraulic expansion technique. Both the processing technique and the heat transfer performance of the heat-sink prototype should be further considered for the optimization of the heat-sink structure in EAST.
Hybrid intelligent optimization methods for engineering problems
NASA Astrophysics Data System (ADS)
Pehlivanoglu, Yasin Volkan
quantification studies, we developed new mutation strategies and operators to provide beneficial diversity within the population. We call this new approach multi-frequency vibrational GA or PSO. These methods were applied to different aeronautical engineering problems in order to study their efficiency: selected benchmark test functions, inverse design of a two-dimensional (2D) airfoil in subsonic flow, optimization of a 2D airfoil in transonic flow, path planning of an autonomous unmanned aerial vehicle (UAV) over a 3D terrain environment, 3D radar cross section minimization for a 3D air vehicle, and active flow control over a 2D airfoil. In these test cases, we observed that the new algorithms outperform the current popular algorithms. The principal role of the multi-frequency approach is to determine which individuals or particles should be mutated, when they should be mutated, and which ones should be merged into the population. The new mutation operators, combined with a mutation strategy and an artificial intelligence method such as neural networks or fuzzy logic, provide both local and global diversity during the reproduction phases of the generations. The new approach also introduces random as well as controlled diversity. Because they remain population-based techniques, these methods are as robust as the plain GA or PSO algorithms. Based on the results obtained, it was concluded that the present multi-frequency vibrational GA and PSO variants are efficient algorithms, since they successfully avoided all local optima within relatively short optimization cycles.
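The "when to mutate" idea can be illustrated with a toy periodic mutation operator. The exact vibrational operator in the thesis differs; the sinusoidal form, `amplitude`, and `period` used here are assumptions made purely for illustration.

```python
import math
import random

def vibrational_mutation(population, amplitude, period, generation, rng=None):
    """Toy periodic ("vibrational") mutation: every `period` generations,
    spread the whole population with a sinusoidal perturbation so diversity
    is re-injected at controlled times rather than at random."""
    rng = rng or random.Random(0)
    if generation % period != 0:
        return population  # mutate only at the chosen frequency
    out = []
    for ind in population:
        phase = rng.uniform(0.0, 2.0 * math.pi)
        out.append([g + amplitude * math.sin(phase + i)
                    for i, g in enumerate(ind)])
    return out
```

The frequency (`period`) controls *when* the diversity is injected, which is the distinguishing feature the abstract attributes to the multi-frequency approach.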
NASA Astrophysics Data System (ADS)
Tabakov, P. Y.; Walker, M.
2007-01-01
Accurate optimal design solutions for most engineering structures present considerable difficulties due to the complexity and multi-modality of the functional design space. The situation is made even more complex when potential manufacturing tolerances must be accounted for in the optimizing process. The present study provides an in-depth analysis of the problem, and then a technique for determining the optimal design of engineering structures, with manufacturing tolerances in the design variables accounted for, is proposed and demonstrated. The examples used to demonstrate the technique involve the design optimization of simple fibre-reinforced laminated composite structures. The technique is simple, easy to implement and, at the same time, very efficient. It is assumed that the probability of any tolerance value occurring within the tolerance band, compared with any other, is equal, and thus it is a worst-case scenario approach. In addition, the technique is non-probabilistic. A genetic algorithm with fitness sharing, including a micro-genetic algorithm, has been found to be very suitable to use, and implemented in the technique. The numerical examples presented in the article deal with buckling load design optimization of a laminated angle-ply plate, and evaluation of the maximum burst pressure in a thick laminated anisotropic pressure vessel. Both examples clearly demonstrate the impact of manufacturing tolerances on the overall performance of a structure and emphasize the importance of accounting for such tolerances in the design optimization phase. This is particularly true of the pressure vessel. The results show that when the example tolerances are accounted for, the maximum design pressure is reduced by 60.2% (in the case of a single layer vessel), and when five layers are specified, if the nominal fibre orientations are implemented and the example tolerances are incurred during fabrication, the actual design pressure could be 64% less than predicted.
NASA Astrophysics Data System (ADS)
Hayashi, Yasuhiro; Matsuki, Junya; Kanai, Genshin
Open access to electric power transmission networks has been introduced worldwide in order to foster generation competition and customer choice. When several PPSs simultaneously request to supply power to customers under bilateral contracts, the transmission network is expected to accept as much of the requested wheeled power as possible. Total requested wheeled power can be maximized by controlling the power flow through transmission lines, and FACTS devices are well known to provide flexible line-flow control. In this paper, in order to maximize the total wheeled power simultaneously requested by several PPSs, the authors propose an algorithm to determine the optimal reactance of a TCSC (one type of FACTS device). The proposed algorithm is based on Particle Swarm Optimization (PSO), an optimization method based on swarm intelligence. In the proposed algorithm, PSO is improved to enhance its ability to find the global minimum by giving each agent different behavioral characteristics. To verify the validity of the proposed method, numerical results are shown for 6-bus and IEEE 30-bus system models.
Particle Swarm Optimization with Double Learning Patterns
Shen, Yuanxia; Wei, Linna; Zeng, Chuanhua; Chen, Jian
2016-01-01
Particle Swarm Optimization (PSO) is an effective tool for solving optimization problems. However, PSO usually suffers from premature convergence due to the rapid loss of swarm diversity. In this paper, we first analyze the motion behavior of the swarm based on the probability characteristics of the learning parameters. Then a PSO with double learning patterns (PSO-DLP) is developed, which employs a master swarm and a slave swarm with different learning patterns to achieve a trade-off between convergence speed and swarm diversity. The particles in the master swarm are encouraged to explore the search space to maintain swarm diversity, while those in the slave swarm learn from the global best particle to refine a promising solution. When the evolutionary states of the two swarms interact, an interaction mechanism is enabled. This mechanism can help the slave swarm jump out of local optima and improve the convergence precision of the master swarm. The proposed PSO-DLP is evaluated on 20 benchmark functions, including rotated multimodal and complex shifted problems. The simulation results and statistical analysis show that PSO-DLP obtains a promising performance and outperforms eight PSO variants. PMID:26858747
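The two learning patterns can be sketched as alternative velocity updates; this is a reconstruction of the idea only, and the coefficient values are assumptions rather than the paper's settings.

```python
import random

def update_velocity(v, x, pbest, gbest, pattern,
                    w=0.7, c=1.5, rng=random.Random(0)):
    """Sketch of PSO-DLP's two learning patterns: the master swarm learns
    from personal bests (diversity-keeping exploration), the slave swarm
    learns only from the global best (fast refinement)."""
    r1, r2 = rng.random(), rng.random()
    if pattern == "master":   # exploration: personal experience only
        return [w * vi + c * r1 * (pb - xi)
                for vi, xi, pb in zip(v, x, pbest)]
    else:                     # slave: exploitation of the global best
        return [w * vi + c * r2 * (gb - xi)
                for vi, xi, gb in zip(v, x, gbest)]
```

Running the two swarms with these complementary updates, plus an interaction step, is what the abstract credits for balancing diversity against convergence speed.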
A technique optimization protocol and the potential for dose reduction in digital mammography
Ranger, Nicole T.; Lo, Joseph Y.; Samei, Ehsan
2010-03-15
Digital mammography requires revisiting techniques that have been optimized for prior screen/film mammography systems. The objective of the study was to determine optimized radiographic technique for a digital mammography system and demonstrate the potential for dose reduction in comparison to the clinically established techniques based on screen-film. An objective figure of merit (FOM) was employed to evaluate a direct-conversion amorphous selenium (a-Se) FFDM system (Siemens Mammomat Novation DR, Siemens AG Medical Solutions, Erlangen, Germany) and was derived from the quotient of the squared signal-difference-to-noise ratio to mean glandular dose, for various combinations of technique factors and breast phantom configurations including kilovoltage settings (23-35 kVp), target/filter combinations (Mo-Mo and W-Rh), breast-equivalent plastic in various thicknesses (2-8 cm) and densities (100% adipose, 50% adipose/50% glandular, and 100% glandular), and simulated mass and calcification lesions. When using a W-Rh spectrum, the optimized FOM results for the simulated mass and calcification lesions showed highly consistent trends with kVp for each combination of breast density and thickness. The optimized kVp ranged from 26 kVp for 2 cm 100% adipose breasts to 30 kVp for 8 cm 100% glandular breasts. The use of the optimized W-Rh technique compared to standard Mo-Mo techniques provided dose savings ranging from 9% for 2 cm thick, 100% adipose breasts, to 63% for 6 cm thick, 100% glandular breasts, and for breasts with a 50% adipose/50% glandular composition, from 12% for 2 cm thick breasts up to 57% for 8 cm thick breasts.
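The figure of merit defined above is the squared signal-difference-to-noise ratio divided by the mean glandular dose. As a sketch (the function and argument names are assumptions; the study derives SdNR from phantom lesion contrast and image noise):

```python
def figure_of_merit(signal_mean, background_mean, noise_std,
                    mean_glandular_dose_mgy):
    """FOM = SdNR^2 / MGD: image quality per unit of radiation dose.
    Higher is better; optimizing kVp and target/filter maximizes this."""
    sdnr = (signal_mean - background_mean) / noise_std
    return sdnr ** 2 / mean_glandular_dose_mgy
```

Comparing this FOM across kVp and target/filter settings is what yields the dose-saving optimized techniques the abstract reports.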
NASA Astrophysics Data System (ADS)
Cheng, Jie; Qian, Zhaogang; Irani, Keki B.; Etemad, Hossein; Elta, Michael E.
1991-03-01
To meet the ever-increasing demand of the rapidly growing semiconductor manufacturing industry, it is critical to have a comprehensive methodology integrating techniques for process optimization, real-time monitoring, and adaptive process control. To this end we have developed an integrated knowledge-based approach combining the latest expert system technology, machine learning methods, and traditional statistical process control (SPC) techniques. This knowledge-based approach is advantageous in that it makes it possible for the tasks of process optimization and adaptive control to be performed consistently and predictably. Furthermore, this approach can be used to construct high-level, qualitative descriptions of processes and thus make process behavior easy to monitor, predict, and control. Two software packages, RIST (Rule Induction and Statistical Testing) and KARSM (Knowledge Acquisition from Response Surface Methodology), have been developed and integrated with two commercially available packages: G2 (a real-time expert system) and ULTRAMAX (a tool for sequential process optimization).
NASA Astrophysics Data System (ADS)
Papila, Nilay Uzgoren
Turbine performance directly affects engine specific impulse, thrust-to-weight ratio, and cost in a rocket propulsion system. This dissertation focuses on methodology and application of employing optimization techniques, with the neural network (NN) and polynomial-based response surface method (RSM), for supersonic turbine optimization. The research is relevant to NASA's reusable launching vehicle initiatives. It is demonstrated that accuracy of the response surface (RS) approximations can be improved with combined utilization of the NN and polynomial techniques, and higher emphases on data in regions of interests. The design of experiment methodology is critical while performing optimization in efficient and effective manners. In physical applications, both preliminary design and detailed shape design optimization are investigated. For preliminary design level, single-, two-, and three-stage turbines are considered with the number of design variables increasing from six to 11 and then to 15, in accordance with the number of stages. A major goal of the preliminary optimization effort is to balance the desire of maximizing aerodynamic performance and minimizing weight. To ascertain required predictive capability of the RSM, a two-level domain refinement approach (windowing) has been adopted. The accuracy of the predicted optimal design points based on this strategy is shown to be satisfactory. The results indicate that the two-stage turbine is the optimum configuration with the higher efficiency corresponding to smaller weights. It is demonstrated that the criteria for selecting the database exhibit significant impact on the efficiency and effectiveness of the construction of the response surface. Based on the optimized preliminary design outcome, shape optimization is performed for vanes and blades of a two-stage supersonic turbine, involving O(10) design variables. It is demonstrated that a major merit of the RS-based optimization approach is that it enables one
NASA Astrophysics Data System (ADS)
Wang, Hu; Li, Enying; Li, G. Y.
2011-03-01
This paper presents a crashworthiness design optimization method based on a metamodeling technique. Crashworthiness optimization is a highly nonlinear, large-scale problem that involves various nonlinearities, such as geometry, material, and contact, and requires a large number of expensive evaluations. In order to obtain a robust approximation efficiently, a probability-based least squares support vector regression is suggested for constructing metamodels by considering structural risk minimization. Further, to save computational cost, an intelligent sampling strategy is applied to generate sample points at the design of experiment (DOE) stage. In this paper, a cylinder and a full-vehicle frontal collision are considered. The results demonstrate that the proposed metamodel-based optimization is efficient and effective in solving crashworthiness design optimization problems.
Lenhart, S.; Protopopescu, V.; Yong, J.
1997-12-31
The authors apply optimal control techniques to find approximate solutions to an inverse problem for the acoustic wave equation. The inverse problem (assumed here to have a solution) is to determine the boundary reflection coefficient from partial measurements of the acoustic signal. The sought reflection coefficient is treated as a control and the goal--quantified by an approximate functional--is to drive the model solution close to the experimental data by adjusting this coefficient. The problem is solved by finding the optimal control that minimizes the approximate functional. Then by driving the cost of the control to zero one proves that the corresponding sequence of optimal controls represents a converging sequence of estimates for the solution of the inverse problem. Compared to classical regularization methods (e.g., Tikhonov coupled with optimization schemes), their approach yields: (i) a systematic procedure to solve inverse problems of identification type and (ii) an explicit expression for the approximations of the solution.
An Innovative Method of Teaching Electronic System Design with PSoC
ERIC Educational Resources Information Center
Ye, Zhaohui; Hua, Chengying
2012-01-01
Programmable system-on-chip (PSoC), which provides a microprocessor and programmable analog and digital peripheral functions in a single chip, is very convenient for mixed-signal electronic system design. This paper presents the experience of teaching contemporary mixed-signal electronic system design with PSoC in the Department of Automation,…
76 FR 60495 - Patient Safety Organizations: Voluntary Relinquishment From Illinois PSO
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-29
... HUMAN SERVICES Agency for Healthcare Research and Quality Patient Safety Organizations: Voluntary... from the Illinois PSO of its status as a Patient Safety Organization (PSO). The Patient Safety and... PSOs, which are entities or component organizations whose mission and primary activity is to...
Fuel optimal low thrust rendezvous with outer planets via gravity assist
NASA Astrophysics Data System (ADS)
Guo, TieDing; Jiang, FangHua; Baoyin, HeXi; LI, JunFeng
2011-04-01
Low-thrust propulsion and gravity assist (GA) are among the most promising techniques for deep space exploration. In this paper the two techniques are combined and treated comprehensively, in both modeling and numerical technique. Fuel-optimal orbit rendezvous via multiple GA is first formulated as optimal guidance with multiple interior constraints, and then the necessary optimality conditions, various transversality conditions, and stationary conditions are derived from Pontryagin's Maximum Principle (PMP). Finally, the original orbit rendezvous problem is transformed into a multiple-point boundary value problem (MPBVP). A homotopy technique, combined with global random search and Particle Swarm Optimization (PSO), is adopted to handle the numerical difficulty of solving the above MPBVP by the single shooting method. Two concluding scenarios show the merits of the present approach.
Techniques for optimal crop selection in a controlled ecological life support system
NASA Technical Reports Server (NTRS)
Mccormack, Ann; Finn, Cory; Dunsky, Betsy
1993-01-01
A Controlled Ecological Life Support System (CELSS) utilizes a plant's natural ability to regenerate air and water while being grown as a food source in a closed life support system. Current plant research is directed toward obtaining quantitative empirical data on the regenerative ability of each species of plant and the system volume and power requirements. Two techniques were adapted to optimize crop species selection while at the same time minimizing the system volume and power requirements. Each allows the level of life support supplied by the plants to be selected, as well as other system parameters. The first technique uses decision analysis in the form of a spreadsheet. The second method, which is used as a comparison with and validation of the first, utilizes standard design optimization techniques. Simple models of plant processes are used in the development of these methods.
Evolutionary techniques for sensor networks energy optimization in marine environmental monitoring
NASA Astrophysics Data System (ADS)
Grimaccia, Francesco; Johnstone, Ron; Mussetta, Marco; Pirisi, Andrea; Zich, Riccardo E.
2012-10-01
The sustainable management of coastal and offshore ecosystems, such as coral reef environments, requires the collection of accurate data across various temporal and spatial scales. Accordingly, monitoring systems are seen as central tools for ecosystem-based environmental management, helping on one hand to accurately describe the water column and substrate biophysical properties, and on the other hand to correctly steer sustainability policies by providing timely and useful information to decision-makers. A robust and intelligent sensor network that can adjust and be adapted to different and changing environmental or management demands would revolutionize our capacity to accurately model, predict, and manage human impacts on our coastal, marine, and other similar environments. In this paper advanced evolutionary techniques are applied to optimize the design of an innovative energy harvesting device for marine applications. The authors implement an enhanced technique in order to exploit in the most effective way the uniqueness and peculiarities of two classical optimization approaches, Particle Swarm Optimization and Genetic Algorithms. Here, this hybrid procedure is applied to a power buoy designed for marine environmental monitoring applications in order to maximize the energy recovered from sea waves, by selecting the optimal device configuration.
Optimization technique for improved microwave transmission from multi-solar power satellites
NASA Technical Reports Server (NTRS)
Arndt, G. D.; Kerwin, E. M.
1982-01-01
An optimization technique for generating antenna illumination tapers allows improved microwave transmission efficiencies from proposed solar power satellite (SPS) systems and minimizes sidelobe levels to meet preset environmental standards. The cumulative microwave power density levels from 50 optimized SPS systems are calculated at the centroids of each of the 3073 counties in the continental United States. These cumulative levels are compared with Environmental Protection Agency (EPA) measured levels of electromagnetic radiation in seven eastern cities. Effects of rectenna relocations upon the power levels/population exposure rates are also studied.
NASA Astrophysics Data System (ADS)
Yukawa, Masahiro; Murakoshi, Noriaki; Yamada, Isao
2006-12-01
In the stereophonic acoustic echo cancellation (SAEC) problem, fast and accurate tracking of the echo path is strongly required for stable echo cancellation. In this paper, we propose a class of efficient fast SAEC schemes with linear computational complexity (with respect to filter length). The proposed schemes are based on the pairwise optimal weight realization (POWER) technique, thus realizing a "best" strategy (in the sense of pairwise and worst-case optimization) for using the multiple-state information obtained by preprocessing. Numerical examples demonstrate that the proposed schemes significantly improve the convergence behavior compared with conventional methods in terms of system mismatch as well as echo return loss enhancement (ERLE).
NASA Astrophysics Data System (ADS)
Meyer, Burghard Christian; Lescot, Jean-Marie; Laplana, Ramon
2009-02-01
Two spatial optimization approaches, developed from the opposing perspectives of ecological economics and landscape planning and aimed at the definition of new distributions of farming systems and of land use elements, are compared and integrated into a general framework. The first approach, applied to a small river catchment in southwestern France, uses SWAT (Soil and Water Assessment Tool) and a weighted goal programming model in combination with a geographical information system (GIS) for the determination of optimal farming system patterns, based on selected objective functions to minimize deviations from the goals of reducing nitrogen and maintaining income. The second approach, demonstrated in a suburban landscape near Leipzig, Germany, defines a GIS-based predictive habitat model to search for unfragmented regions suitable for hare populations (Lepus europaeus), followed by compromise optimization with the aim of planning a new habitat structure distribution for the hare. The multifunctional problem is solved by the integration of three landscape functions ("production of cereals," "resistance to soil erosion by water," and "landscape water retention"). Through the comparison, we propose a framework for the definition of optimal land use patterns based on optimization techniques. The framework includes the main aspects of solving land use distribution problems with the aim of finding the optimal or best land use decisions. It integrates indicators, goals of spatial developments and stakeholders, including weighting, and model tools for the prediction of objective functions and risk assessments. Methodological limits arising from the uncertainty of data and model outcomes are stressed. The framework clarifies the use of optimization techniques in spatial planning.
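The weighted-goal-programming step can be illustrated by scoring candidate land-use patterns by their weighted deviations from the goals. This is a minimal sketch; the names and the absolute-deviation form are assumptions, standing in for the study's goals of reducing nitrogen while maintaining income.

```python
def goal_deviation(values, goals, weights):
    """Weighted goal programming objective: total weighted absolute
    deviation of achieved indicator values from their target goals."""
    return sum(w * abs(v - g) for v, g, w in zip(values, goals, weights))

def best_pattern(patterns, goals, weights):
    """Pick the candidate land-use pattern minimizing the deviation score."""
    return min(patterns, key=lambda p: goal_deviation(p, goals, weights))
```

For example, with each pattern described by (nitrogen load, farm income), the pattern whose weighted deviations from the nitrogen and income targets are smallest is selected.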
Solid oxide fuel cell simulation and design optimization with numerical adjoint techniques
NASA Astrophysics Data System (ADS)
Elliott, Louie C.
This dissertation reports on the application of numerical optimization techniques as applied to fuel cell simulation and design. Due to the "multi-physics" inherent in a fuel cell, which results in a highly coupled and non-linear behavior, an experimental program to analyze and improve the performance of fuel cells is extremely difficult. This program applies new optimization techniques with computational methods from the field of aerospace engineering to the fuel cell design problem. After an overview of fuel cell history, importance, and classification, a mathematical model of solid oxide fuel cells (SOFC) is presented. The governing equations are discretized and solved with computational fluid dynamics (CFD) techniques including unstructured meshes, non-linear solution methods, numerical derivatives with complex variables, and sensitivity analysis with adjoint methods. Following the validation of the fuel cell model in 2-D and 3-D, the results of the sensitivity analysis are presented. The sensitivity derivative for a cost function with respect to a design variable is found with three increasingly sophisticated techniques: finite difference, direct differentiation, and adjoint. A design cycle is performed using a simple optimization method to improve the value of the implemented cost function. The results from this program could improve fuel cell performance and lessen the world's dependence on fossil fuels.
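One of the ingredients listed above, numerical derivatives with complex variables, is the complex-step method: f'(x) ≈ Im f(x + ih)/h. Unlike finite differences it has no subtractive cancellation, so the step h can be made tiny. A minimal sketch (the function name is an assumption):

```python
def complex_step_derivative(f, x, h=1e-20):
    """Complex-step derivative: evaluate f at x + ih and take the
    imaginary part divided by h. Accurate to machine precision for
    real-analytic f, with no subtractive cancellation."""
    return f(complex(x, h)).imag / h
```

In the sensitivity-analysis setting of the dissertation, this gives reference derivatives against which the direct and adjoint methods can be verified.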
Liu Wei; Li Yupeng; Li Xiaoqiang; Cao Wenhua; Zhang Xiaodong
2012-06-15
Purpose: The distal edge tracking (DET) technique in intensity-modulated proton therapy (IMPT) allows for high energy efficiency, fast and simple delivery, and simple inverse treatment planning; however, it is highly sensitive to uncertainties. In this study, the authors explored the application of DET in IMPT (IMPT-DET) and conducted robust optimization of IMPT-DET to see if the planning technique's sensitivity to uncertainties was reduced. They also compared conventional and robust optimization of IMPT-DET with three-dimensional IMPT (IMPT-3D) to gain understanding about how plan robustness is achieved. Methods: They compared the robustness of IMPT-DET and IMPT-3D plans to uncertainties by analyzing plans created for a typical prostate cancer case and a base of skull (BOS) cancer case (using data for patients who had undergone proton therapy at our institution). Spots with the highest and second highest energy layers were chosen so that the Bragg peak would be at the distal edge of the targets in IMPT-DET using 36 equally spaced angle beams; in IMPT-3D, 3 beams with angles chosen by a beam angle optimization algorithm were planned. Dose contributions for a number of range and setup uncertainties were calculated, and a worst-case robust optimization was performed. A robust quantification technique was used to evaluate the plans' sensitivity to uncertainties. Results: With no uncertainties considered, the DET is less robust to uncertainties than is the 3D method but offers better normal tissue protection. With robust optimization to account for range and setup uncertainties, robust optimization can improve the robustness of IMPT plans to uncertainties; however, our findings show the extent of improvement varies. Conclusions: IMPT's sensitivity to uncertainties can be improved by using robust optimization. They found two possible mechanisms that made improvements possible: (1) a localized single-field uniform dose distribution (LSFUD) mechanism, in which the
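The worst-case robust optimization referred to above can be illustrated with a toy composite objective over uncertainty scenarios (the voxel doses and prescription below are hypothetical; real IMPT objectives are structure- and weight-specific):

```python
def worst_case_objective(scenario_doses, prescription):
    """Composite worst-case objective: evaluate the squared deviation from
    the prescription under every range/setup scenario and keep the worst.
    Robust optimization then minimizes this value over the beam weights."""
    return max(
        sum((d - p) ** 2 for d, p in zip(doses, prescription))
        for doses in scenario_doses
    )

# Hypothetical two-voxel target: nominal scenario vs. a range-error scenario
val = worst_case_objective([[1.0, 1.0], [0.5, 1.0]], [1.0, 1.0])
```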
Optimization of brushless direct current motor design using an intelligent technique.
Shabanian, Alireza; Tousiwas, Armin Amini Poustchi; Pourmandi, Massoud; Khormali, Aminollah; Ataei, Abdolhay
2015-07-01
This paper presents a method for the optimal design of a slotless permanent magnet brushless DC (BLDC) motor with surface-mounted magnets using an improved bee algorithm (IBA). The characteristics of the motor are expressed as functions of motor geometries. The objective function is a combination of losses, volume and cost to be minimized simultaneously. This method is based on the capability of swarm-based algorithms in finding the optimal solution. One sample case is used to illustrate the performance of the design approach and optimization technique. The IBA shows better performance and faster convergence than the standard bee algorithm (BA). Simulation results show that the proposed method achieves highly efficient performance. PMID:25841938
Eversion-Inversion Labral Repair and Reconstruction Technique for Optimal Suction Seal
Moreira, Brett; Pascual-Garrido, Cecilia; Chadayamurri, Vivek; Mei-Dan, Omer
2015-01-01
Labral tears are a significant cause of hip pain and are currently the most common indication for hip arthroscopy. Compared with labral debridement, labral repair has significantly better outcomes in terms of both daily activities and athletic pursuits in the setting of femoral acetabular impingement. The classic techniques described in the literature for labral repair all use loop or pass-through intrasubstance labral sutures to achieve a functional hip seal. This hip seal is important for hip stability and optimal joint biomechanics, as well as in the prevention of long-term osteoarthritis. We describe a novel eversion-inversion intrasubstance suturing technique for labral repair and reconstruction that can assist in restoration of the native labrum position by re-creating an optimal seal around the femoral head. PMID:26870648
An Optimized Integrator Windup Protection Technique Applied to a Turbofan Engine Control
NASA Technical Reports Server (NTRS)
Watts, Stephen R.; Garg, Sanjay
1995-01-01
This paper introduces a new technique for providing memoryless integrator windup protection which utilizes readily available optimization software tools. This integrator windup protection synthesis provides a concise methodology for creating integrator windup protection for each actuation system loop independently while assuring both controller and closed loop system stability. The individual actuation system loops' integrator windup protection can then be combined to provide integrator windup protection for the entire system. This technique is applied to an H-infinity (H∞) based multivariable control designed for a linear model of an advanced afterburning turbofan engine. The resulting transient characteristics are examined for the integrated system while encountering single and multiple actuation limits.
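For context, a generic memoryless anti-windup scheme for a single PI loop can be sketched as conditional integration (this is an illustrative clamp, not the optimization-based synthesis of the paper; the gains and limits are made up):

```python
def pi_step(integ, err, kp=1.0, ki=0.5, dt=0.1, u_min=-1.0, u_max=1.0):
    """One PI step with conditional integration: the integrator is frozen
    whenever integrating would push the output further into saturation."""
    u_unsat = kp * err + ki * integ
    u = min(u_max, max(u_min, u_unsat))
    winding_up = (u_unsat > u_max and err > 0) or (u_unsat < u_min and err < 0)
    if not winding_up:
        integ += err * dt  # integrate only when it will not wind up
    return integ, u

# A large error saturates the actuator: the integrator state stays at 0.0
state, u = pi_step(0.0, 10.0)
```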
Kuldeep, B; Singh, V K; Kumar, A; Singh, G K
2015-01-01
In this article, a novel approach for 2-channel linear phase quadrature mirror filter (QMF) bank design, based on a hybrid of gradient-based optimization and nature-inspired optimization with fractional derivative constraints, is introduced. For the purpose of this work, recently proposed nature-inspired optimization techniques such as cuckoo search (CS), modified cuckoo search (MCS) and wind driven optimization (WDO) are explored for the design of the QMF bank. The 2-channel QMF bank is also designed with the particle swarm optimization (PSO) and artificial bee colony (ABC) nature-inspired optimization techniques. The design problem is formulated in the frequency domain as the sum of the L2 norms of the errors in the passband, stopband and transition band at the quadrature frequency. The contribution of this work is the novel hybrid combination of gradient-based optimization (the Lagrange multiplier method) and nature-inspired optimization (CS, MCS, WDO, PSO and ABC) and its use for optimizing the design problem. Performance of the proposed method is evaluated by passband error (ϕp), stopband error (ϕs), transition band error (ϕt), peak reconstruction error (PRE), stopband attenuation (As) and computational time. The design examples illustrate the effectiveness of the proposed method. Results are also compared with other existing algorithms, and it was found that the proposed method gives the best results in terms of peak reconstruction error and transition band error while being comparable in terms of passband and stopband error. Results show that the proposed method is successful for both lower- and higher-order 2-channel QMF bank design. A comparative study of various nature-inspired optimization techniques is also presented, and the study singles out CS as the best QMF optimization technique. PMID:25034647
Parametric Studies and Optimization of Eddy Current Techniques through Computer Modeling
Todorov, E. I.
2007-03-21
The paper demonstrates the use of computer models for parametric studies and optimization of surface and subsurface eddy current techniques. The study with a high-frequency probe investigates the effect of eddy current frequency and probe shape on the detectability of flaws in the steel substrate. The low-frequency sliding-probe study addresses the effect of conductivity between the fastener and the hole, frequency, and coil separation distance on the detectability of flaws in subsurface layers.
Wroblewski, David; Katrompas, Alexander M.; Parikh, Neel J.
2009-09-01
A method and apparatus for optimizing the operation of a power generating plant using artificial intelligence techniques. One or more decisions D are determined for at least one consecutive time increment, where at least one of the decisions D is associated with a discrete variable for the operation of a power plant device in the power generating plant. In an illustrated embodiment, the power plant device is a soot cleaning device associated with a boiler.
The L_infinity constrained global optimal histogram equalization technique for real time imaging
NASA Astrophysics Data System (ADS)
Ren, Qiongwei; Niu, Yi; Liu, Lin; Jiao, Yang; Shi, Guangming
2015-08-01
Although current imaging sensors can achieve 12-bit or higher precision, current display devices and commonly used digital image formats are still only 8 bits. This mismatch causes significant waste of sensor precision and loss of information when storing and displaying images. For better use of the precision budget, tone mapping operators have to be used to map the high-precision data into low-precision digital images adaptively. In this paper, the classic histogram equalization tone mapping operator is reexamined in the sense of optimization. We point out that the traditional histogram equalization technique and its variants are fundamentally flawed because they suffer from local-optimum problems. To overcome this drawback, we remodel the histogram equalization tone mapping task based on graph theory, which achieves globally optimal solutions. Another advantage of the graph-based modeling is that tone continuity is also modeled as a vital constraint in our approach, which suppresses the annoying boundary artifacts of the traditional approaches. In addition, we propose a novel dynamic programming technique to solve the histogram equalization problem in real time. Experimental results show that the proposed tone-preserving global optimal histogram equalization technique outperforms the traditional approaches by exhibiting more subtle details in the foreground while preserving the smoothness of the background.
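The classic histogram equalization operator that the paper reexamines can be sketched as a CDF remapping from 12-bit codes to 8-bit output (the sample data and level counts are illustrative):

```python
def equalize_tonemap(samples, in_levels=4096, out_levels=256):
    """Classic histogram equalization: map each high-precision code value
    through the normalized cumulative histogram into the output range."""
    hist = [0] * in_levels
    for v in samples:
        hist[v] += 1
    cdf, run = [], 0
    for count in hist:
        run += count
        cdf.append(run)
    n = len(samples)
    # scale the CDF to [0, out_levels - 1]; integer division keeps codes exact
    return [min(out_levels - 1, cdf[v] * out_levels // n) for v in samples]

mapped = equalize_tonemap([0, 0, 1000, 4095])
```

This naive per-bin remapping is exactly what can merge distinct tones or tear smooth gradients, motivating the tone-continuity constraint of the paper.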
Ji, Zhiwei; Wang, Bing
2014-01-01
Hepatocellular carcinoma (HCC) is one of the most common malignant tumors. Clinical symptoms attributable to HCC are usually absent, so the best therapeutic opportunities are often missed. Traditional Chinese Medicine (TCM) plays an active role in the diagnosis and treatment of HCC. In this paper, we propose a particle swarm optimization-based hierarchical feature selection (PSOHFS) model to infer potential syndromes for the diagnosis of HCC. First, a hierarchical feature representation is developed as a three-layer tree: the clinical symptoms and the positive score of a patient are the leaf nodes and the root of the tree, respectively, while each syndrome feature on the middle layer is extracted from a group of symptoms. Second, an improved PSO-based algorithm is applied in a new reduced feature space to search for an optimal syndrome subset. Based on the result of feature selection, the causal relationships of symptoms and syndromes are inferred via Bayesian networks. In our experiment, 147 symptoms were aggregated into 27 groups and 27 syndrome features were extracted. The proposed approach discovered 24 syndromes, which markedly improved the diagnosis accuracy. Finally, the Bayesian approach was applied to represent the causal relationships at both the symptom and syndrome levels. The results show that our computational model can facilitate the clinical diagnosis of HCC. PMID:24745007
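The PSO search underlying such models follows the standard velocity and position updates; a minimal sketch on a continuous toy objective (the hierarchical feature encoding of the paper is not reproduced, and all hyperparameters are illustrative):

```python
import random

def pso(f, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimization minimizing f over [-5, 5]^dim."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]          # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Sphere function: minimum 0 at the origin
best, best_val = pso(lambda x: sum(v * v for v in x), dim=3)
```

Discrete feature-selection variants, as in PSOHFS, typically binarize the position through a sigmoid before evaluating the fitness.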
Luĉić, Felipe; Sánchez-Nieto, Beatriz; Caprile, Paola; Zelada, Gabriel; Goset, Karen
2013-01-01
Total skin electron irradiation (TSEI) has been used as a treatment for mycosis fungoides. Our center has implemented a modified Stanford technique with six pairs of 6 MeV adjacent electron beams, incident perpendicularly on the patient, who remains lying on a translational platform at 200 cm from the source. The purpose of this study is to perform a dosimetric characterization of this technique and to investigate its optimization in terms of energy characteristics, extension, and uniformity of the treatment field. In order to improve the homogeneity of the distribution, a custom-made polyester filter of variable thickness and a uniform PMMA degrader plate were used. It was found that the characteristics of a 9 MeV beam with an 8 mm thick degrader were similar to those of the 6 MeV beam without filter, but with an increased surface dose. The combination of the degrader and the polyester filter improved the uniformity of the distribution along the dual field (180 cm long), increasing the dose at the borders of the field by 43%. The optimum angles for the pair of beams were ± 27°. This configuration avoided displacement of the patient and reduced the treatment time and the positioning problems related to the abutting superior and inferior fields. Dose distributions in the transversal plane were measured for the six incidences of the Stanford technique with film dosimetry in an anthropomorphic pelvic phantom. This was performed for the optimized treatment and compared with the previously implemented technique. The comparison showed an increased superficial dose and improved uniformity of the 85% isodose curve coverage for the optimized technique. PMID:24036877
Optimized swimmer tracking system by a dynamic fusion of correlation and color histogram techniques
NASA Astrophysics Data System (ADS)
Benarab, D.; Napoléon, T.; Alfalou, A.; Verney, A.; Hellard, P.
2015-12-01
To design a robust swimmer tracking system, we considered two well-known tracking techniques: nonlinear joint transform correlation (NL-JTC) and the color histogram. The two techniques perform comparably well, yet both have substantial limitations. Interestingly, they also show some complementarity. The correlation technique yields accurate detection but is sensitive to rotation, scale and contour deformation, whereas the color histogram technique is robust to rotation and contour deformation but shows low accuracy and is highly sensitive to luminosity and confusing background colors. These observations suggested the possibility of a dynamic fusion of the correlation plane and the color scores map. Two steps are required before this fusion. The first is the extraction of a sub-plane of correlation that describes the similarity between the reference and target images; this sub-plane has the same size as the color scores map, but the two have different value ranges. The second step is therefore the normalization of both planes to the same interval so that they can be fused. To determine the benefits of this fusion technique, we first tested it on a synthetic image containing different shapes with different colors, which allowed us to optimize the correlation plane and color histogram techniques before applying the fusion technique to real videos of swimmers in international competitions. Last, a comparative study of the dynamic fusion technique and the two classical techniques was carried out to demonstrate the efficacy of the proposed technique. The criteria of comparison were the tracking percentage; the peak-to-correlation energy (PCE), which evaluated the sharpness of the peak (accuracy); and the local standard deviation (Local-STD), which assessed the noise in the planes (robustness).
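The normalize-then-fuse step described above can be sketched with min-max normalization and a fixed-weight sum (the flattened planes and the 50/50 weighting are illustrative; the fusion in the paper is dynamic):

```python
def minmax_norm(plane):
    """Rescale a flattened score plane to the interval [0, 1]."""
    lo, hi = min(plane), max(plane)
    if hi == lo:
        return [0.0] * len(plane)
    return [(v - lo) / (hi - lo) for v in plane]

def fuse(corr_plane, color_map, alpha=0.5):
    """Fuse a correlation sub-plane and a color scores map of the same size
    after bringing both to a common interval."""
    a = minmax_norm(corr_plane)
    b = minmax_norm(color_map)
    return [alpha * x + (1 - alpha) * y for x, y in zip(a, b)]

fused = fuse([0, 5, 10], [2, 2, 4])
```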
Determination of the optimal tolerance for MLC positioning in sliding window and VMAT techniques
Hernandez, V.; Abella, R.; Calvo, J. F.; Jurado-Bruggemann, D.; Sancho, I.; Carrasco, P.
2015-04-15
Purpose: Several authors have recommended a 2 mm tolerance for multileaf collimator (MLC) positioning in sliding window treatments. In volumetric modulated arc therapy (VMAT) treatments, however, the optimal tolerance for MLC positioning remains unknown. In this paper, the authors present the results of a multicenter study to determine the optimal tolerance for both techniques. Methods: The procedure used is based on dynalog file analysis. The study was carried out using seven Varian linear accelerators from five different centers. Dynalogs were collected from over 100 000 clinical treatments and in-house software was used to compute the number of tolerance faults as a function of the user-defined tolerance. Thus, the optimal value for this tolerance, defined as the lowest achievable value, was investigated. Results: Dynalog files accurately predict the number of tolerance faults as a function of the tolerance value, especially for low fault incidences. All MLCs behaved similarly and the Millennium120 and the HD120 models yielded comparable results. In sliding window techniques, the number of beams with an incidence of hold-offs >1% rapidly decreases for a tolerance of 1.5 mm. In VMAT techniques, the number of tolerance faults sharply drops for tolerances around 2 mm. For a tolerance of 2.5 mm, less than 0.1% of the VMAT arcs presented tolerance faults. Conclusions: Dynalog analysis provides a feasible method for investigating the optimal tolerance for MLC positioning in dynamic fields. In sliding window treatments, the tolerance of 2 mm was found to be adequate, although it can be reduced to 1.5 mm. In VMAT treatments, the typically used 5 mm tolerance is excessively high. Instead, a tolerance of 2.5 mm is recommended.
A soft self-repairing for FBG sensor network in SHM system based on PSO-SVR model reconstruction
NASA Astrophysics Data System (ADS)
Zhang, Xiaoli; Wang, Peng; Liang, Dakai; Fan, Chunfeng; Li, Cailing
2015-05-01
A structural health monitoring (SHM) system takes advantage of an array of sensors to continuously monitor a structure and provide early predictions such as the damage position and damage degree. Such a system must monitor the structure under all conditions, including adverse ones; it must therefore be robust and survivable, and ideally self-repairing. In this study, a model-reconstruction predicting algorithm based on particle swarm optimization-support vector regression (PSO-SVR) is proposed to achieve self-repairing of the fiber Bragg grating (FBG) sensor network in an SHM system. Furthermore, an eight-point FBG sensor SHM system was tested on an aircraft wing box. For the prediction of the damage loading position on the wing box, six kinds of disabled modes were experimentally studied to verify the self-repairing ability of the FBG sensor network, and the prediction performance was compared with that of a non-reconstruction PSO-SVR model. The results indicate that the model-reconstruction algorithm outperforms the non-reconstruction model: if some sensors fail in the FBG-based SHM system, the prediction performance of the reconstruction algorithm is almost the same as when no sensor has failed. In this way, the self-repairing ability of the FBG sensor network is achieved, enhancing the reliability and survivability of the FBG-based SHM system when some FBG sensors fail.
NASA Astrophysics Data System (ADS)
Liu, Hanli; Pei, Tao; Zhou, Chenghu; Zhu, A.-Xing
2008-12-01
To enhance the spectral characteristics of features for clustering in a wetland-extraction experiment in the Sanjiang Plain, we apply a series of preprocessing approaches to MODIS remote sensing data aimed at eliminating interference caused by other land-cover features. First, by analyzing the spectral characteristics of the data, we choose a set of multi-temporal and multi-spectral MODIS data of the Sanjiang Plain for clustering. By building and applying a mask, water areas and woodland vegetation are eliminated from the image data. Second, by Enhanced Lee filtering and Minimum Noise Fraction (MNF) transformation, the data are denoised and the wetland characteristics are markedly enhanced. After this preprocessing, the fuzzy c-means clustering algorithm optimized by particle swarm optimization (PSO-FCM) is applied to the image data for wetland extraction. The experimental results show that wetland extraction by means of the PSO-FCM algorithm is reasonably accurate and effective.
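The FCM half of PSO-FCM alternates membership and center updates; a minimal one-dimensional sketch (the PSO seeding used in the paper is omitted, and the data, initialization and cluster count are illustrative):

```python
def fuzzy_c_means(data, c=2, m=2.0, iters=50):
    """Minimal fuzzy c-means on 1-D data. Memberships follow
    u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1)); centers are u^m-weighted means."""
    # simple deterministic initialization: spread centers over the data range
    lo, hi = min(data), max(data)
    centers = [lo + (hi - lo) * i / (c - 1) for i in range(c)]
    for _ in range(iters):
        u = []
        for x in data:
            d = [abs(x - v) + 1e-12 for v in centers]  # avoid divide-by-zero
            u.append([1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0))
                                for j in range(c)) for i in range(c)])
        centers = [
            sum((u[k][i] ** m) * x for k, x in enumerate(data))
            / sum(u[k][i] ** m for k in range(len(data)))
            for i in range(c)
        ]
    return centers

centers = fuzzy_c_means([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])
```

PSO-FCM replaces the fixed initialization with swarm-searched cluster centers to escape poor local minima.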
NASA Astrophysics Data System (ADS)
Niu, Chun-Yang; Qi, Hong; Huang, Xing; Ruan, Li-Ming; Wang, Wei; Tan, He-Ping
2015-11-01
A hybrid least-square QR decomposition (LSQR)-particle swarm optimization (LSQR-PSO) algorithm was developed to estimate the three-dimensional (3D) temperature distributions and absorption coefficients simultaneously. The outgoing radiative intensities at the boundary surface of the absorbing media were simulated by the line-of-sight (LOS) method, which served as the input for the inverse analysis. The retrieval results showed that the 3D temperature distributions of the participating media with known radiative properties could be retrieved accurately using the LSQR algorithm, even with noisy data. For the participating media with unknown radiative properties, the 3D temperature distributions and absorption coefficients could be retrieved accurately using the LSQR-PSO algorithm even with measurement errors. It was also found that the temperature field could be estimated more accurately than the absorption coefficients. In order to gain insight into the effects on the accuracy of temperature distribution reconstruction, the selection of the detection direction and the angle between two detection directions was also analyzed. Project supported by the Major National Scientific Instruments and Equipment Development Special Foundation of China (Grant No. 51327803), the National Natural Science Foundation of China (Grant No. 51476043), and the Fund of Tianjin Key Laboratory of Civil Aircraft Airworthiness and Maintenance in Civil Aviation University of China.
Wang, Shu-tao; Chen, Dong-ying; Wang, Xing-long; Wei, Meng; Wang, Zhi-fang
2015-12-01
In this paper, the fluorescence spectral properties of potassium sorbate in aqueous solution and in orange juice are studied. The results show that the fluorescence spectra of potassium sorbate differ considerably between the two solutions, but the characteristic fluorescence peak lies at λ(ex)/λ(em) = 375/490 nm in both. The two-dimensional fluorescence spectra show that the relationship between fluorescence intensity and potassium sorbate concentration is complex and nonlinear. To determine the concentration of potassium sorbate in orange juice, a new method combining a Particle Swarm Optimization (PSO) algorithm with a Back Propagation (BP) neural network is proposed. The relative errors of two predicted concentrations are 1.83% and 1.53%, respectively, indicating that the method is feasible. The PSO-BP neural network can accurately measure the concentration of potassium sorbate in orange juice in the range of 0.1-2.0 g · L⁻¹. PMID:26964248
Delahaye, P; Galatà, A; Angot, J; Cam, J F; Traykov, E; Ban, G; Celona, L; Choinski, J; Gmaj, P; Jardin, P; Koivisto, H; Kolhinen, V; Lamy, T; Maunoury, L; Patti, G; Thuillier, T; Tarvainen, O; Vondrasek, R; Wenander, F
2016-02-01
The present paper summarizes the results obtained from the past few years in the framework of the Enhanced Multi-Ionization of short-Lived Isotopes for Eurisol (EMILIE) project. The EMILIE project aims at improving the charge breeding techniques with both Electron Cyclotron Resonance Ion Sources (ECRIS) and Electron Beam Ion Sources (EBISs) for European Radioactive Ion Beam (RIB) facilities. Within EMILIE, an original technique for debunching the beam from EBIS charge breeders is being developed, for making an optimal use of the capabilities of CW post-accelerators of the future facilities. Such a debunching technique should eventually resolve duty cycle and time structure issues which presently complicate the data-acquisition of experiments. The results of the first tests of this technique are reported here. In comparison with charge breeding with an EBIS, the ECRIS technique had lower performance in efficiency and attainable charge state for metallic ion beams and also suffered from issues related to beam contamination. In recent years, improvements have been made which significantly reduce the differences between the two techniques, making ECRIS charge breeding more attractive especially for CW machines producing intense beams. Upgraded versions of the Phoenix charge breeder, originally developed by LPSC, will be used at SPES and GANIL/SPIRAL. These two charge breeders have benefited from studies undertaken within EMILIE, which are also briefly summarized here. PMID:26932063
A Constrained Design Approach for NLF Airfoils by Coupling Inverse Design and Optimal Techniques
NASA Astrophysics Data System (ADS)
Deng, L.; Gao, Y. W.; Qiao, Z. D.
2011-09-01
In the present paper, a design method for natural laminar flow (NLF) airfoils with a substantial amount of natural laminar flow on both surfaces, coupling an inverse design method with an optimization technique, is developed. The N-factor method is used to design the target pressure distributions upstream of the pressure recovery region with the desired transition locations while maintaining aerodynamic constraints. The pressure in the recovery region is designed according to the Stratford separation criterion to prevent laminar separation. To improve the off-design performance of the inverse design, a multi-point inverse design is performed. An optimization technique based on response surface methodology (RSM) is used to calculate the target airfoil shapes corresponding to the designed target pressure distributions. The set of design points is selected to satisfy D-optimality, and reduced quadratic polynomial RS models without the second-order cross terms are constructed to reduce the computational cost. The design cases indicate that, with the coupling method developed in the present paper, the inverse design method can be used in multi-point design to improve off-design performance, and the designed airfoils attain the desired transition locations and maintain the aerodynamic constraints, although the thickness constraint is difficult to meet in this design procedure.
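A reduced quadratic response surface without cross terms can be fitted by least squares via the normal equations; a single-variable sketch (the sample points are illustrative; the paper fits multi-variable surfaces over D-optimal design points):

```python
def solve(A, b):
    """Tiny Gaussian elimination with partial pivoting for small systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

def fit_reduced_quadratic(xs, ys):
    """Least-squares fit of y ~ a0 + a1*x + a2*x^2 (no cross terms,
    one design variable for illustration) via the normal equations."""
    rows = [[1.0, x, x * x] for x in xs]
    AtA = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    Atb = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(3)]
    return solve(AtA, Atb)

# Data generated by y = 1 + 2*x^2, so the fit should recover [1, 0, 2]
coef = fit_reduced_quadratic([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 9.0, 19.0])
```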
NASA Technical Reports Server (NTRS)
Olds, John R.
1992-01-01
Four methods for preliminary aerospace vehicle design are reviewed. The first three methods (classical optimization, system decomposition, and system sensitivity analysis (SSA)) employ numerical optimization techniques and numerical gradients to feed back changes in the design variables. The optimum solution is determined by stepping through a series of designs toward a final solution. Of these three, SSA is argued to be the most applicable to a large-scale highly coupled vehicle design where an accurate minimum of an objective function is required. With SSA, several tasks can be performed in parallel. The techniques of classical optimization and decomposition can be included in SSA, resulting in a very powerful design method. The Taguchi method is more of a 'smart' parametric design method that analyzes variable trends and interactions over designer specified ranges with a minimum of experimental analysis runs. Its advantages are its relative ease of use, ability to handle discrete variables, and ability to characterize the entire design space with a minimum of analysis runs.
Fillinger, M F; Weaver, J B
1999-12-01
Because endovascular procedures represent an ever-increasing portion of many vascular surgery practices, many surgeons are faced with difficult choices. Endovascular procedures often require open surgery, and open surgical techniques increasingly require fluoroscopic imaging. Without good intraoperative imaging, endovascular procedures are difficult and endovascular aneurysm repair is impossible. How does one balance the need for optimal imaging against the ability to safely perform open surgical procedures, especially in the early stages of a developing endovascular program? Strategies include the use of a portable c-arm and carbon fiber table in the operating room (OR), adding a fixed imaging platform to an OR, gaining access to an angiography suite that does not meet OR requirements, and modifying it into an interventional suite that does meet operating room standards. Once the optimal equipment and facilities have been chosen, other choices must be considered. Should a radiology technician be hired? Should an interventional radiologist be available to assist or be incorporated as a routine member of the team? How will typical operating room procedures and techniques need to be altered to optimize intraoperative imaging for endovascular procedures? This article gives an overview of the many issues that arise as a vascular surgery practice evolves to incorporate complex endovascular procedures. PMID:10651460
Artificial intelligent techniques for optimizing water allocation in a reservoir watershed
NASA Astrophysics Data System (ADS)
Chang, Fi-John; Chang, Li-Chiu; Wang, Yu-Chung
2014-05-01
This study proposes a systematic water allocation scheme that integrates system analysis with artificial intelligence (AI) techniques for reservoir operation, in consideration of the great hydrometeorological uncertainty, to mitigate drought impacts on the public and irrigation sectors. The AI techniques mainly include a genetic algorithm (GA) and an adaptive network-based fuzzy inference system (ANFIS). We first derive evaluation diagrams through systematic interactive evaluations of long-term hydrological data, to provide a clear simulation perspective of all possible drought conditions tagged with their corresponding water shortages. We then search for the optimal reservoir operating histogram with the GA, based on given demands and hydrological conditions; this serves as the optimal base of input-output training patterns for modelling. Finally, we build a suitable water allocation scheme by constructing an ANFIS model that learns the mechanism between the designed inputs (water discount rates and hydrological conditions) and outputs (two scenarios: simulated and optimized water deficiency levels). The effectiveness of the proposed approach is tested on the operation of the Shihmen Reservoir in northern Taiwan for the first paddy crop in the study area, to assess the water allocation mechanism during drought periods. We demonstrate that the proposed water allocation scheme reliably helps water managers determine a suitable discount rate on water supply for both the irrigation and public sectors, and can thus reduce the drought risk and the compensation costs induced by restricting agricultural water use.
On large-scale nonlinear programming techniques for solving optimal control problems
Faco, J.L.D.
1994-12-31
The formulation of decision problems by optimal control theory allows the consideration of their dynamic structure and parameter estimation. This paper deals with techniques for choosing search directions in the iterative solution of discrete-time optimal control problems. A unified formulation incorporates nonlinear performance criteria and dynamic equations, time delays, bounded state and control variables, a free planning horizon, and a variable initial state vector. Such problems are generally characterized by a large number of variables, especially when they arise from discretization of continuous-time optimal control or calculus of variations problems. In a GRG context, the staircase structure of the Jacobian matrix of the dynamic equations is exploited in the choice of basic and superbasic variables and when changes of basis occur along the process. The search directions of the bound-constrained nonlinear programming problem in the reduced space of the superbasic variables are computed by large-scale NLP techniques. A modified Polak-Ribiere conjugate gradient method and a limited-storage quasi-Newton BFGS method are analyzed, and modifications to deal with the bounds on the variables are suggested, based on projected gradient devices with specific linesearches. Some practical models are presented for electric generation planning and fishery management, and the application of the code GRECO (Gradient REduit pour la Commande Optimale) is discussed.
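The projected gradient device mentioned in this abstract can be illustrated with a minimal sketch. This is not the GRECO code or its GRG machinery; it is only the core idea of stepping along the negative gradient and projecting back onto the box defined by the variable bounds, with a fixed step size instead of the paper's specific linesearches (the objective and bounds below are hypothetical).

```python
import numpy as np

def projected_gradient(grad, x0, lower, upper, step=0.1, iters=200):
    """Minimize a smooth function over box constraints lower <= x <= upper
    by taking gradient steps and clipping (projecting) back onto the box."""
    x = np.clip(np.asarray(x0, dtype=float), lower, upper)
    for _ in range(iters):
        x = np.clip(x - step * grad(x), lower, upper)
    return x

# Example: minimize (x0 - 3)^2 + (x1 + 1)^2 subject to 0 <= x <= 2.
grad = lambda x: np.array([2 * (x[0] - 3), 2 * (x[1] + 1)])
x_opt = projected_gradient(grad, [1.0, 1.0], lower=0.0, upper=2.0)
# The unconstrained minimum (3, -1) lies outside the box; the
# bound-constrained minimum is at x = (2, 0).
```

The clipping step is exactly the Euclidean projection onto a box, which is why bound constraints are so cheap to handle in this family of methods.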
Integration of ab-initio nuclear calculation with derivative free optimization technique
Sharda, Anurag
2008-01-01
Optimization techniques are finding their way into nuclear physics calculations, where the objective functions are very complex and computationally intensive. A vast parameter space must be searched to obtain a good match between theoretical (computed) and experimental observables, such as energy levels and spectra. Manual calculation is beyond the scope of such complex problems and is prone to error. This body of work formulates and implements a design that integrates the ab initio nuclear physics code MFDn with the VTDIRECT95 code, a Fortran 95 suite of parallel codes implementing the derivative-free optimization algorithm DIRECT. The proposed design is implemented for both serial and parallel versions of the optimization technique. Experiments with the initial implementation of the design show good matches for several single-nucleus cases. The determination and assignment of an appropriate number of processors for the parallel integration code is implemented to increase efficiency and resource utilization in the case of multi-nucleus parameter searches.
Andriani, Dian; Wresta, Arini; Atmaja, Tinton Dwi; Saepudin, Aep
2014-02-01
Biogas from anaerobic digestion of organic materials is a renewable energy resource that consists mainly of CH4 and CO2. Trace components that are often present in biogas are water vapor, hydrogen sulfide, siloxanes, hydrocarbons, ammonia, oxygen, carbon monoxide, and nitrogen. Considering that biogas is a clean and renewable form of energy that could well substitute for conventional energy sources (fossil fuels), the optimization of this type of energy becomes essential. Various optimization techniques for the biogas production process have been developed, including pretreatment, biotechnological approaches, co-digestion, and the use of serial digesters. For some applications, a certain degree of biogas purity is needed. The presence of CO2 and other trace components in biogas can affect engine performance adversely. Reducing the CO2 content will significantly upgrade the quality of biogas and enhance its calorific value. Upgrading is generally performed in order to meet the standards for use as vehicle fuel or for injection into the natural gas grid. Different methods for biogas upgrading are used; they differ in their operation, the required quality of the incoming gas, and their efficiency. Biogas can be purified from CO2 using pressure swing adsorption, membrane separation, and physical or chemical CO2 absorption. This paper reviews the various techniques that can be used to optimize biogas production as well as to upgrade biogas quality. PMID:24293277
Optimization models and techniques for implementation and pricing of electricity markets
NASA Astrophysics Data System (ADS)
Madrigal Martinez, Marcelino
Vertically integrated electric power systems extensively use optimization models and solution techniques to guide their optimal operation and planning. The advent of electric power system restructuring has created needs for new optimization tools and for revision of the tools inherited from the vertical integration era into the market environment. This thesis presents further developments in the use of optimization models and techniques for implementation and pricing of primary electricity markets. New models, solution approaches, and price-setting alternatives are proposed. Three different modeling groups are studied. The first modeling group considers simplified continuous and discrete models for power pool auctions driven by central cost minimization. The direct solution of the dual problems, and the use of a branch-and-bound algorithm to solve the primal, make it possible to identify the effects of disequilibrium and of different price-setting alternatives on the existence of multiple solutions. It is shown that particular pricing rules worsen the conflict of interest that arises when multiple solutions exist under disequilibrium. A price-setting alternative based on dual variables is shown to diminish this conflict. The second modeling group considers the unit commitment problem. An interior-point/cutting-plane method is proposed for the solution of the dual problem. The new method has better convergence characteristics and does not suffer from the parameter-tuning drawback of previous methods. The robustness characteristics of the interior-point/cutting-plane method, combined with a non-uniform price-setting alternative, show that the conflict of interest is diminished when multiple near-optimal solutions exist. The non-uniform price-setting alternative is compared to a classic average pricing rule. The last modeling group concerns a new type of linear network-constrained clearing-system model for daily markets for power and spinning reserve. A new model and
Hernandez, Wilmar
2007-01-01
This paper surveys recent applications of optimal signal processing techniques to improve the performance of mechanical sensors. A comparison between classical filters and optimal filters for automotive sensors is made, and the current state of the art in applying robust and optimal control and signal processing techniques to the design of the intelligent (or smart) sensors that today's cars need is presented through several experimental results, which show that the fusion of intelligent sensors and optimal signal processing techniques is the clear way forward. However, the switch between the traditional methods of designing automotive sensors and the new ones cannot be made overnight, because some open research issues remain to be solved. This paper draws attention to one of these open research issues and aims to arouse researchers' interest in the fusion of intelligent sensors and optimal signal processing techniques.
An optimal technique for constraint-based image restoration and reconstruction
NASA Astrophysics Data System (ADS)
Leahy, Richard M.; Goutis, Costas E.
1986-12-01
A new technique for finding an optimal feasible solution to the general image reconstruction and restoration problem is described. This method allows the use of prior knowledge of the properties of both the solution and any noise present on the data. The problem is formulated as the optimization of a cost function over the intersection of a number of convex constraint sets, each set being defined as containing those solutions consistent with a particular constraint. A duality theorem is then applied to yield a dual problem in which the unknown image is replaced by a model defined in terms of a finite-dimensional parameter vector and the kernels of the integral equations relating the data and solution. The dual problem may then be solved for the model parameters using a gradient descent algorithm. This method serves as an alternative to the primal constrained optimization and projection onto convex sets (POCS) algorithms. Problems in which this new approach is appropriate are discussed. An example is given for image reconstruction from noisy projection data; applying the dual method results in a fast nonlinear algorithm. Simulation results demonstrate the superiority of the optimal feasible solution over one obtained using a suboptimal approach.
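The POCS baseline that this abstract positions its dual method against can be sketched in a few lines: cycle through the projection operators of the convex sets until the iterate is (approximately) in their intersection. The two sets below (nonnegativity and a unit-sum hyperplane) are illustrative stand-ins, not the imaging constraints from the paper.

```python
import numpy as np

def pocs(x0, projections, iters=100):
    """Projection Onto Convex Sets: repeatedly apply each set's
    projection operator; for nonempty intersections of closed convex
    sets the iterates converge to a feasible point."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        for proj in projections:
            x = proj(x)
    return x

# Two convex sets: the nonnegative orthant, and the hyperplane sum(x) = 1.
project_nonneg = lambda x: np.maximum(x, 0.0)
project_plane = lambda x: x + (1.0 - x.sum()) / x.size  # Euclidean projection

x = pocs(np.array([-0.5, 0.8, 1.2]), [project_nonneg, project_plane])
# x is nonnegative (to numerical precision) with components summing to 1.
```

Each projection is cheap, which is the practical appeal of POCS; the dual method in the abstract trades this simplicity for an explicit optimality criterion.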
Design and optimization of a total vaporization technique coupled to solid-phase microextraction.
Rainey, Christina L; Bors, Dana E; Goodpaster, John V
2014-11-18
Solid-phase microextraction (SPME) is a popular sampling technique in which chemical compounds are collected with a sorbent-coated fiber and then desorbed into an analytical instrument such as a liquid or gas chromatograph. Typically, this technique is used to sample the headspace above a solid or liquid sample (headspace SPME), or to directly sample a liquid (immersion SPME). However, this work demonstrates an alternative approach where the sample is totally vaporized (total vaporization SPME or TV-SPME) so that analytes partition directly between the vapor phase and the SPME fiber. The implementation of this technique is demonstrated with polydimethylsiloxane-divinylbenzene (PDMS-DVB) and polyacrylate (PA) coated SPME fibers for the collection of nicotine and its metabolite cotinine in chloroform extracts. The most important method parameters were optimized using a central composite design, and this resulted in an optimal extraction temperature (96 °C), extraction time (60 min), and sample volume (120 μL). In this application, large sample volumes up to 210 μL were analyzed using a volatile solvent such as chloroform at elevated temperatures. The sensitivity of TV-SPME is nearly twice that of liquid injection for cotinine and nearly 6 times higher for nicotine. In addition, increased sampling selectivity of TV-SPME permits detection of both nicotine and cotinine in hair as biomarkers of tobacco use where in the past the detection of cotinine has not been achieved by conventional SPME. PMID:25313649
Optimized Scheduling Technique of Null Subcarriers for Peak Power Control in 3GPP LTE Downlink
Park, Sang Kyu
2014-01-01
Orthogonal frequency division multiple access (OFDMA) is a key multiple access technique for the long term evolution (LTE) downlink. However, a high peak-to-average power ratio (PAPR) can cause degradation of power efficiency. The well-known PAPR reduction technique of dummy sequence insertion (DSI) can be a realistic solution because of its structural simplicity. However, the large number of subcarriers used for the dummy sequences may decrease the transmitted data rate in the DSI scheme. In this paper, a novel DSI scheme is applied to the LTE system. First, we obtain the null subcarriers in single-input single-output (SISO) and multiple-input multiple-output (MIMO) systems, respectively; then, optimized dummy sequences are inserted into the obtained null subcarriers. Simulation results show that the Walsh-Hadamard transform (WHT) sequence is the best choice for the dummy sequence, and that a ratio of 16 to 20 between the WHT and randomly generated sequences gives the maximum PAPR reduction performance. A near-optimal number of iterations is derived to prevent exhaustive iteration. It is also shown that the proposed technique causes no bit error rate (BER) degradation in the LTE downlink system. PMID:24883376
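The PAPR metric that DSI schemes try to reduce is simple to compute: transform the subcarrier symbols to the time domain and compare peak power with average power. This is a generic illustration (random QPSK on 64 hypothetical subcarriers), not the LTE resource grid or the paper's DSI optimization.

```python
import numpy as np

def papr_db(freq_symbols):
    """Peak-to-average power ratio, in dB, of the time-domain OFDM
    signal obtained by an IFFT of the frequency-domain symbols."""
    t = np.fft.ifft(freq_symbols)
    power = np.abs(t) ** 2
    return 10 * np.log10(power.max() / power.mean())

rng = np.random.default_rng(0)
# Random QPSK symbols on 64 subcarriers (illustrative, not an LTE grid).
qpsk = (rng.choice([-1, 1], 64) + 1j * rng.choice([-1, 1], 64)) / np.sqrt(2)
print(f"PAPR: {papr_db(qpsk):.2f} dB")
```

The worst case occurs when all subcarriers add coherently: identical symbols on N subcarriers yield a PAPR of 10·log10(N) dB, which is why dummy sequences that decorrelate the subcarriers can pull the peak down.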
NASA Astrophysics Data System (ADS)
Sánchez, H. T.; Estrems, M.; Franco, P.; Faura, F.
2009-11-01
In recent years, the market for heat exchangers has been increasingly demanding new products in short cycle times, which means that both the design and manufacturing stages must be extremely reduced. The design stage can be shortened by means of CAD-based parametric design techniques. The methodology presented in this paper is based on the optimized control of the geometric parameters of a heat exchanger service chamber by means of the Application Programming Interface (API) provided by the SolidWorks CAD package. Using this implementation, a set of different design configurations of the service chamber, made of stainless steel AISI 316, is studied by means of the FE method. As a result of this study, a set of knowledge rules based on fatigue behaviour is constructed and integrated into the design optimization process.
Optimization techniques applied to passive measures for in-orbit spacecraft survivability
NASA Technical Reports Server (NTRS)
Mog, Robert A.; Price, D. Marvin
1991-01-01
Spacecraft designers have always been concerned about the effects of meteoroid impacts on mission safety. The engineering solution to this problem has generally been to erect a bumper or shield placed outboard from the spacecraft wall to disrupt/deflect the incoming projectiles. Spacecraft designers have a number of tools at their disposal to aid in the design process. These include hypervelocity impact testing, analytic impact predictors, and hydrodynamic codes. Analytic impact predictors generally provide the best quick-look estimate of design tradeoffs. The most complete way to determine the characteristics of an analytic impact predictor is through optimization of the protective structures design problem formulated with the predictor of interest. Space Station Freedom protective structures design insight is provided through the coupling of design/material requirements, hypervelocity impact phenomenology, meteoroid and space debris environment sensitivities, optimization techniques and operations research strategies, and mission scenarios. Major results are presented.
Human motion planning based on recursive dynamics and optimal control techniques
NASA Technical Reports Server (NTRS)
Lo, Janzen; Huang, Gang; Metaxas, Dimitris
2002-01-01
This paper presents an efficient optimal control and recursive dynamics-based computer animation system for simulating and controlling the motion of articulated figures. A quasi-Newton nonlinear programming technique (super-linear convergence) is implemented to solve minimum torque-based human motion-planning problems. The explicit analytical gradients needed in the dynamics are derived using a matrix exponential formulation and Lie algebra. Cubic spline functions are used to make the search space for an optimal solution finite. Based on our formulations, our method is well conditioned and robust, in addition to being computationally efficient. To better illustrate the efficiency of our method, we present results of natural looking and physically correct human motions for a variety of human motion tasks involving open and closed loop kinematic chains.
NASA Technical Reports Server (NTRS)
Zimbelman, D. F.; Dennehy, C. J.; Welch, R. V.; Born, G. H.
1990-01-01
A predictive temperature estimation technique which can be used to drive a model of the Sunrise/Sunset thermal 'snap' disturbance torque experienced by low Earth orbiting spacecraft is described. The twice-per-orbit impulsive disturbance torque is attributed to vehicle passage into and out of the Earth's shadow cone (umbra), during which large flexible appendages undergo rapidly changing thermal conditions. Flexible members, in particular solar arrays, experience rapid cooling during umbra entrance (Sunset) and rapid heating during exit (Sunrise). The thermal 'snap' phenomenon has been observed during normal on-orbit operations of both the LANDSAT-4 satellite and the Communications Technology Satellite (CTS). Thermal 'snap' has also been predicted to be a dominant source of error for the TOPEX satellite. The fundamental equations used to model the Sunrise/Sunset thermal 'snap' disturbance torque for a typical solar array like structure are described. For this derivation the array is assumed to be a thin, cantilevered beam. The time-varying thermal gradient is shown to be the driving force behind predicting the thermal 'snap' disturbance torque and therefore motivates the need for accurate estimates of temperature. The development of a technique to optimally estimate appendage surface temperature is highlighted. The objective analysis method used is structured on the Gauss-Markov theorem and provides an optimal temperature estimate at a prescribed location given data from a distributed thermal sensor network. The optimally estimated surface temperatures could then be used to compute the thermal gradient across the body. The estimation technique is demonstrated using a typical satellite solar array.
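The simplest concrete instance of a Gauss-Markov (best linear unbiased) estimate is inverse-variance weighting of redundant sensor readings. The paper's objective-analysis scheme estimates temperature at a prescribed location from a spatially distributed network, which is richer than this; the sketch below, with entirely hypothetical numbers, only shows the weighting principle.

```python
import numpy as np

def blue_estimate(readings, variances):
    """Best linear unbiased estimate of a scalar quantity observed by
    several sensors with known, independent noise variances: the
    inverse-variance weighted average (Gauss-Markov theorem)."""
    w = 1.0 / np.asarray(variances, dtype=float)
    return float(np.dot(w, readings) / w.sum())

# Three hypothetical sensors reading the same surface temperature (K);
# a smaller variance means a more trusted sensor.
temps = [251.0, 249.5, 250.2]
sigma2 = [4.0, 1.0, 2.25]
print(blue_estimate(temps, sigma2))
```

The estimate is pulled toward the low-variance sensors, and among all unbiased linear combinations of the readings it has the smallest error variance.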
The Analysis and Design of Low Boom Configurations Using CFD and Numerical Optimization Techniques
NASA Technical Reports Server (NTRS)
Siclari, Michael J.
1999-01-01
The use of computational fluid dynamics (CFD) for the analysis of sonic booms generated by aircraft has been shown to increase the accuracy and reliability of predictions. CFD takes into account important three-dimensional and nonlinear effects that are generally neglected by modified linear theory (MLT) methods. Up to the present time, CFD methods have been used primarily for analysis or prediction. Some investigators have used CFD to impact the design of low boom configurations using trial-and-error methods. One investigator developed a hybrid design method using a combination of modified linear theory (e.g., F-functions) and CFD to provide equivalent area due to lift, driven by a numerical optimizer, to redesign or modify an existing configuration to achieve a shaped sonic boom signature. A three-dimensional design methodology that completely uses nonlinear methods or CFD has not yet been developed. Constrained numerical optimization techniques have existed for some time. Many of these methods use gradients to search for the minimum of a specified objective function subject to a variety of design variable bounds and linear and nonlinear constraints. Gradient-based design optimization methods require the determination of the objective function gradients with respect to each of the design variables. These optimization methods are efficient and work well if the gradients can be obtained analytically. If analytical gradients are not available, the objective gradients or derivatives with respect to the design variables must be obtained numerically. Obtaining numerical gradients for, say, 10 design variables might require anywhere from 10 to 20 objective function evaluations. Typically, 5-10 global iterations of the optimizer are required to minimize the objective function. In terms of using CFD as a design optimization tool, the numerical evaluation of gradients can require anywhere from 100 to 200 CFD computations per design for only 10 design variables. If one CFD
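The evaluation counts quoted above follow directly from how finite-difference gradients are built: one baseline evaluation plus one perturbed evaluation per design variable for forward differences (n + 1 total), or 2n for central differences. A minimal sketch, with a cheap analytic stand-in for the expensive CFD analysis:

```python
import numpy as np

def fd_gradient(f, x, h=1e-6):
    """Forward-difference gradient: n + 1 objective evaluations for
    n design variables (1 baseline + 1 perturbed run per variable)."""
    x = np.asarray(x, dtype=float)
    f0 = f(x)
    g = np.empty_like(x)
    for i in range(x.size):
        xp = x.copy()
        xp[i] += h
        g[i] = (f(xp) - f0) / h
    return g

calls = 0
def objective(x):
    """Stand-in for one CFD analysis; here just the sphere function."""
    global calls
    calls += 1
    return float(np.sum(x ** 2))

g = fd_gradient(objective, np.ones(10))
print(calls)  # 11 evaluations for 10 design variables
```

Multiplying the 11 evaluations per gradient by the 5-10 global optimizer iterations (each needing one or more gradients and line-search evaluations) is what drives the 100-200 CFD runs per design cited in the abstract.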
A Model Optimization Approach to the Automatic Segmentation of Medical Images
NASA Astrophysics Data System (ADS)
Afifi, Ahmed; Nakaguchi, Toshiya; Tsumura, Norimichi; Miyake, Yoichi
The aim of this work is to develop an efficient medical image segmentation technique by fitting a nonlinear shape model to pre-segmented images. In this technique, kernel principal component analysis (KPCA) is used to capture the shape variations and to build the nonlinear shape model. The pre-segmentation is carried out by classifying the image pixels according to high-level texture features extracted using the over-complete wavelet packet decomposition. Additionally, the model fitting is completed using the particle swarm optimization (PSO) technique to adapt the model parameters. The proposed technique is fully automated, is able to deal with complex shape variations, can efficiently optimize the model to fit new cases, and is robust to noise and occlusion. In this paper, we demonstrate the proposed technique by applying it to liver segmentation from computed tomography (CT) scans, and the obtained results are very promising.
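Several records in this section use particle swarm optimization as their search engine. A minimal, generic PSO sketch is shown below on a toy sphere objective; the papers' actual fitness functions (shape-model fit, CPG parameters, fuzzy controller gains) are far more involved, and the coefficients here are just common textbook defaults.

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer: each particle's velocity is a
    blend of inertia, attraction to its personal best, and attraction
    to the swarm's global best."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, float(pbest_f.min())

best_x, best_f = pso(lambda p: float(np.sum(p ** 2)), dim=3)
# best_f should be very close to 0, the sphere function's minimum.
```

PSO's appeal in these applications is that it needs only fitness evaluations, no gradients, which suits black-box objectives such as a segmentation score or a simulated swimming speed.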
Optimization techniques applied to passive measures for in-orbit spacecraft survivability
NASA Technical Reports Server (NTRS)
Mog, Robert A.; Helba, Michael J.; Hill, Janeil B.
1992-01-01
The purpose of this research is to provide Space Station Freedom protective structures design insight through the coupling of design/material requirements, hypervelocity impact phenomenology, meteoroid and space debris environment sensitivities, optimization techniques and operations research strategies, and mission scenarios. The goals of the research are: (1) to develop a Monte Carlo simulation tool which will provide top level insight for Space Station protective structures designers; (2) to develop advanced shielding concepts relevant to Space Station Freedom using unique multiple bumper approaches; and (3) to investigate projectile shape effects on protective structures design.
Fractional order fuzzy control of hybrid power system with renewable generation using chaotic PSO.
Pan, Indranil; Das, Saptarshi
2016-05-01
This paper investigates the operation of a hybrid power system through a novel fuzzy control scheme. The hybrid power system employs various autonomous generation systems such as a wind turbine, solar photovoltaics, a diesel engine, a fuel cell, and an aqua electrolyzer. Other energy storage devices, such as a battery, a flywheel, and an ultra-capacitor, are also present in the network. A novel fractional order (FO) fuzzy control scheme is employed, and its parameters are tuned with a particle swarm optimization (PSO) algorithm augmented with two chaotic maps to achieve improved performance. This FO fuzzy controller shows better performance than the classical PID and the integer-order fuzzy PID controllers in both linear and nonlinear operating regimes. The FO fuzzy controller also shows stronger robustness against system parameter variation and rate-constraint nonlinearity than the other controller structures. Robustness is highly desirable in this scenario, since many components of the hybrid power system may be switched on or off, or may run at lower or higher power output, at different time instants. PMID:25816968
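The abstract does not name the two chaotic maps used to augment the PSO tuning; the logistic map is one common choice in such schemes, replacing uniform random draws in the velocity update with a deterministic but chaotic sequence. A minimal sketch of that ingredient (illustrative only):

```python
def logistic_map(x0=0.7, r=4.0, n=10):
    """Logistic map x_{k+1} = r * x_k * (1 - x_k). With r = 4 the
    sequence is chaotic on (0, 1) and can stand in for the uniform
    random coefficients r1, r2 in a PSO velocity update."""
    seq, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        seq.append(x)
    return seq

print(logistic_map(n=5))
```

The motivation usually given for chaotic augmentation is that the map's ergodic, non-repeating coverage of (0, 1) helps the swarm escape premature convergence; whether it helps depends on the problem.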
Sunspots and Coronal Bright Points Tracking using a Hybrid Algorithm of PSO and Active Contour Model
NASA Astrophysics Data System (ADS)
Dorotovic, I.; Shahamatnia, E.; Lorenc, M.; Rybansky, M.; Ribeiro, R. A.; Fonseca, J. M.
2014-02-01
In the last decades there has been a steady increase of high-resolution data, from ground-based and space-borne solar instruments, and also of solar data volume. These huge image archives require efficient automatic image processing software tools capable of detecting and tracking various features in the solar atmosphere. Results of application of such tools are essential for studies of solar activity evolution, climate change understanding and space weather prediction. The follow up of interplanetary and near-Earth phenomena requires, among others, automatic tracking algorithms that can determine where a feature is located, on successive images taken along the period of observation. Full-disc solar images, obtained both with the ground-based solar telescopes and the instruments onboard the satellites, provide essential observational material for solar physicists and space weather researchers for better understanding the Sun, studying the evolution of various features in the solar atmosphere, and also investigating solar differential rotation by tracking such features along time. Here we demonstrate and discuss the suitability of applying a hybrid Particle Swarm Optimization (PSO) algorithm and Active Contour model for tracking and determining the differential rotation of sunspots and coronal bright points (CBPs) on a set of selected solar images. The results obtained confirm that the proposed approach constitutes a promising tool for investigating the evolution of solar activity and also for automating tracking features on massive solar image archives.
New efficient optimizing techniques for Kalman filters and numerical weather prediction models
NASA Astrophysics Data System (ADS)
Famelis, Ioannis; Galanis, George; Liakatas, Aristotelis
2016-06-01
The need for accurate local environmental predictions and simulations beyond classical meteorological forecasts has been increasing in recent years due to the large number of applications that are directly or indirectly affected: renewable energy resource assessment, natural hazard early warning systems, global warming, and questions about climate change can be listed among them. Within this framework, the use of numerical weather and wave prediction systems in conjunction with advanced statistical techniques that support the elimination of model bias and the reduction of error variability may successfully address these issues. In the present work, new optimization methods are studied and tested in selected areas of Greece where the use of renewable energy sources is of critical importance. The added value of the proposed work lies in the solid mathematical background adopted, making use of information geometry and statistical techniques, new versions of Kalman filters, and state-of-the-art numerical analysis tools.
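Kalman filtering for forecast bias elimination, as used in this line of work, can be reduced to a scalar sketch: treat the systematic part of the observation-minus-forecast innovations as a slowly varying state and track it. This is a generic illustration with hypothetical noise settings and synthetic data, not the paper's filter formulation.

```python
import random

def kalman_bias_filter(innovations, q=0.01, r=1.0):
    """Scalar Kalman filter tracking a slowly varying forecast bias.
    Each innovation is (observation - model forecast); q is the
    process-noise variance (how fast the bias may drift), r the
    observation-noise variance. Returns the bias estimate history."""
    x, p = 0.0, 1.0          # initial bias estimate and its variance
    estimates = []
    for y in innovations:
        p = p + q            # predict: bias modeled as near-constant
        k = p / (p + r)      # Kalman gain
        x = x + k * (y - x)  # update with the new innovation
        p = (1 - k) * p
        estimates.append(x)
    return estimates

# Synthetic innovations scattered around a true bias of +2.0.
random.seed(1)
obs = [2.0 + random.gauss(0, 0.5) for _ in range(200)]
print(kalman_bias_filter(obs)[-1])
```

Subtracting the filtered bias from subsequent forecasts removes the systematic error while the q/r ratio controls how quickly the correction adapts to regime changes.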
Design and optimization of stepped austempered ductile iron using characterization techniques
Hernández-Rivera, J.L.; Garay-Reyes, C.G.; Campos-Cambranis, R.E.; Cruz-Rivera, J.J.
2013-09-15
Conventional characterization techniques such as dilatometry, X-ray diffraction, and metallography were used to select and optimize temperatures and times for conventional and stepped austempering. The austenitization and conventional austempering times were selected when the dilatometry graphs showed a constant expansion value. A special heat color-etching technique was applied to distinguish between the untransformed austenite and the high-carbon stabilized austenite that had formed during the treatments. Finally, it was found that carbide precipitation was absent during stepped austempering, in contrast to conventional austempering, in which evidence of carbides was found. - Highlights: • Dilatometry helped to establish austenitization and austempering parameters. • Untransformed austenite was present even for longer processing times. • Ausferrite formed during stepped austempering caused an important reinforcement effect. • Carbide precipitation was absent during the stepped treatment.
A model based technique for the design of flight directors. [optimal control models
NASA Technical Reports Server (NTRS)
Levison, W. H.
1973-01-01
A new technique for designing flight directors is discussed. This technique uses the optimal-control pilot/vehicle model to determine the appropriate control strategy. The dynamics of this control strategy are then incorporated into the director control laws, thereby enabling the pilot to operate at a significantly lower workload. A preliminary design of a control director for maintaining a STOL vehicle on the approach path in the presence of random air turbulence is evaluated. By selecting model parameters in terms of allowable path deviations and pilot workload levels, a set of director laws is achieved which allows improved system performance at reduced workload levels. The pilot acts essentially as a proportional controller with regard to the director signals, and control motions are compatible with those appropriate to status-only displays.
Techniques to reduce pain associated with hair transplantation: optimizing anesthesia and analgesia.
Nusbaum, Bernard P
2004-01-01
The importance of pain control in hair transplantation cannot be overemphasized. Adequate preoperative sedation to reduce anxiety, raise pain threshold, and induce amnesia is fundamental to minimizing operative pain. Most of the pain associated with the procedure results from injection of the local anesthetic. Once initial anesthesia is achieved, proper maintenance of anesthesia is of paramount importance especially with the trend toward larger numbers of grafts being performed in one session with prolonged operative times. The choice of local anesthetic agents, infiltration technique, optimal field blocks and nerve blocks, proper hemostasis, timely repetition of anesthesia, and use of analgesics intraoperatively, with the goal of maintaining the patient pain-free during the procedure, are fundamental. In addition, reduced pain on infiltration can be achieved with buffering and warming of the local anesthetic solution as well as techniques to decrease sensation or partially anesthetize the skin prior to injection. Techniques such as bupivacaine donor area field block in the immediate postoperative period and early administration of analgesics can greatly influence postoperative pain. Along with excellent cosmetic results attainable with modern techniques, improving patients' experiences during the surgical process will enhance the public perception of hair transplantation and will encourage prospective patients to seek this treatment modality. PMID:14979739
NASA Technical Reports Server (NTRS)
Wang, Nanbor; Parameswaran, Kirthika; Kircher, Michael; Schmidt, Douglas
2003-01-01
Although existing CORBA specifications, such as Real-time CORBA and CORBA Messaging, address many end-to-end quality-of-service (QoS) properties, they do not define strategies for configuring these properties into applications flexibly, transparently, and adaptively. Therefore, application developers must make these configuration decisions manually and explicitly, which is tedious, error-prone, and often sub-optimal. Although the recently adopted CORBA Component Model (CCM) does define a standard configuration framework for packaging and deploying software components, conventional CCM implementations focus on functionality rather than adaptive quality-of-service, which makes them unsuitable for next-generation applications with demanding QoS requirements. This paper presents three contributions to the study of middleware for QoS-enabled component-based applications. It outlines reflective middleware techniques designed to adaptively (1) select optimal communication mechanisms, (2) manage QoS properties of CORBA components in their containers, and (3) (re)configure selected component executors dynamically. Based on our ongoing research on CORBA and the CCM, we believe the application of reflective techniques to component middleware will provide a dynamically adaptive and (re)configurable framework for COTS software that is well-suited for the QoS demands of next-generation applications.
NASA Technical Reports Server (NTRS)
Wang, Nanbor; Kircher, Michael; Schmidt, Douglas C.
2000-01-01
Although existing CORBA specifications, such as Real-time CORBA and CORBA Messaging, address many end-to-end quality-of-service (QoS) properties, they do not define strategies for configuring these properties into applications flexibly, transparently, and adaptively. Therefore, application developers must make these configuration decisions manually and explicitly, which is tedious, error-prone, and often sub-optimal. Although the recently adopted CORBA Component Model (CCM) does define a standard configuration framework for packaging and deploying software components, conventional CCM implementations focus on functionality rather than adaptive quality-of-service, which makes them unsuitable for next-generation applications with demanding QoS requirements. This paper presents three contributions to the study of middleware for QoS-enabled component-based applications. It outlines reflective middleware techniques designed to adaptively: (1) select optimal communication mechanisms, (2) manage QoS properties of CORBA components in their containers, and (3) (re)configure selected component executors dynamically. Based on our ongoing research on CORBA and the CCM, we believe the application of reflective techniques to component middleware will provide a dynamically adaptive and (re)configurable framework for COTS software that is well-suited for the QoS demands of next-generation applications.
Optimization of MKID noise performance via readout technique for astronomical applications
NASA Astrophysics Data System (ADS)
Czakon, Nicole G.; Schlaerth, James A.; Day, Peter K.; Downes, Thomas P.; Duan, Ran P.; Gao, Jiansong; Glenn, Jason; Golwala, Sunil R.; Hollister, Matt I.; LeDuc, Henry G.; Mazin, Benjamin A.; Maloney, Philip R.; Noroozian, Omid; Nguyen, Hien T.; Sayers, Jack; Siegel, Seth; Vaillancourt, John E.; Vayonakis, Anastasios; Wilson, Philip R.; Zmuidzinas, Jonas
2010-07-01
Detectors employing superconducting microwave kinetic inductance detectors (MKIDs) can be read out by measuring changes in either the resonator frequency or dissipation. We will discuss the pros and cons of both methods, in particular, the readout strategies being explored for the Multiwavelength Sub/millimeter Inductance Camera (MUSIC) to be commissioned at the CSO in 2010. As predicted theoretically and observed experimentally, the frequency responsivity is larger than the dissipation responsivity, by a factor of 2-4 under typical conditions. In the absence of any other noise contributions, it should be easier to overcome amplifier noise by simply using frequency readout. The resonators, however, exhibit excess frequency noise which has been ascribed to a surface distribution of two-level fluctuators sensitive to specific device geometries and fabrication techniques. Impressive dark noise performance has been achieved using modified resonator geometries employing interdigitated capacitors (IDCs). To date, our noise measurement and modeling efforts have assumed an on-resonance readout, with the carrier power set well below the nonlinear regime. Several experimental indicators suggested to us that the optimal readout technique may in fact require a higher readout power, with the carrier tuned somewhat off resonance, and that a careful systematic study of the optimal readout conditions was needed. We will present the results of such a study, and discuss the optimum readout conditions as well as the performance that can be achieved relative to BLIP.
Reducing the impact of a desalination plant using stochastic modeling and optimization techniques
NASA Astrophysics Data System (ADS)
Alcolea, Andres; Renard, Philippe; Mariethoz, Gregoire; Bertone, François
2009-02-01
Water is critical for economic growth in coastal areas. In this context, desalination has become an increasingly important technology over the last five decades. It often has environmental side effects, especially when the input water is pumped directly from the sea via intake pipelines. However, it is generally more efficient and cheaper to desalt brackish groundwater from beach wells rather than desalting seawater. Natural attenuation is also gained and hazards due to anthropogenic pollution of seawater are reduced. In order to minimize allocation and operational costs and impacts on groundwater resources, an optimum pumping network is required. Optimization techniques are often applied to this end. Because of aquifer heterogeneity, designing the optimum pumping network demands reliable characterizations of aquifer parameters. An optimum pumping network in a coastal aquifer in Oman, where a desalination plant currently pumps brackish groundwater at a rate of 1200 m³/h for a freshwater production of 504 m³/h (insufficient to satisfy the growing demand in the area), was designed using stochastic inverse modeling together with optimization techniques. A Monte Carlo analysis of 200 simulations of transmissivity and storage coefficient fields conditioned to the response to stresses of tidal fluctuation and three long-term pumping tests was performed. These simulations are physically plausible and fit the available data well. Simulated transmissivity fields are used to design the optimum pumping configuration required to increase the current pumping rate to 9000 m³/h, for a freshwater production of 3346 m³/h (more than six times larger than the existing one). For this task, new pumping wells need to be sited and their pumping rates defined. These unknowns are determined by a genetic algorithm that minimizes a function accounting for: (1) drilling, operational and maintenance costs, (2) target discharge and minimum drawdown (i.e., minimum aquifer
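The genetic-algorithm step described above can be illustrated with a deliberately tiny sketch in which each individual encodes pumping rates for a fixed set of wells and fitness mixes operating cost with a penalty for missing a target discharge; the encoding, fitness terms, and all numbers are invented toy values, not the study's cost function:

```python
import random

# Minimal genetic algorithm of the kind described above: each individual
# encodes candidate pumping rates for a fixed set of wells, and fitness
# combines operating cost with a penalty for missing a target discharge.
# All values here are illustrative toys, not the study's actual model.

TARGET = 9.0                      # required total discharge (arbitrary units)
COST_PER_UNIT = [1.0, 1.4, 0.8]   # per-well operating cost

def fitness(rates):
    cost = sum(c * r for c, r in zip(COST_PER_UNIT, rates))
    shortfall = max(0.0, TARGET - sum(rates))
    return cost + 100.0 * shortfall   # heavy penalty if demand unmet

def genetic_algorithm(pop_size=40, gens=80, mut=0.2, seed=3):
    rng = random.Random(seed)
    pop = [[rng.uniform(0, 5) for _ in COST_PER_UNIT] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = [rng.choice(pair) for pair in zip(a, b)]  # uniform crossover
            if rng.random() < mut:              # Gaussian mutation, clipped
                i = rng.randrange(len(child))
                child[i] = min(5.0, max(0.0, child[i] + rng.gauss(0, 0.5)))
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = genetic_algorithm()
```

Because the shortfall penalty dwarfs the per-unit costs, converged solutions meet the target discharge and load the cheapest wells first.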
Proposal of Evolutionary Simplex Method for Global Optimization Problem
NASA Astrophysics Data System (ADS)
Shimizu, Yoshiaki
To support agile and rational decision making under diversified customer demand, the role of optimization engineering has attracted increasing attention. With this point of view, in this paper, we have proposed a new evolutionary method serving as an optimization technique in the paradigm of optimization engineering. The developed method has prospects for globally solving the various complicated problems appearing in real-world applications. It evolves from the conventional Nelder and Mead’s Simplex method by incorporating ideas borrowed from recent meta-heuristic methods such as PSO. After presenting an algorithm to handle linear inequality constraints effectively, we validate the effectiveness of the proposed method through comparison with other methods on several benchmark problems.
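For context, the conventional Nelder and Mead simplex method that the proposal evolves from can be sketched in a few lines of Python; this is the textbook algorithm, not the authors' evolutionary variant:

```python
# Minimal Nelder-Mead simplex minimizer (pure Python sketch of the
# conventional method the paper builds on).

def nelder_mead(f, simplex, iters=200, alpha=1.0, gamma=2.0, rho=0.5, sigma=0.5):
    """Minimize f over points given as lists; returns the best vertex."""
    n = len(simplex[0])
    for _ in range(iters):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        # Centroid of all vertices except the worst.
        centroid = [sum(p[i] for p in simplex[:-1]) / n for i in range(n)]
        # Reflect the worst vertex through the centroid.
        refl = [centroid[i] + alpha * (centroid[i] - worst[i]) for i in range(n)]
        if f(refl) < f(best):
            # Expansion: try moving further in the same direction.
            exp = [centroid[i] + gamma * (refl[i] - centroid[i]) for i in range(n)]
            simplex[-1] = exp if f(exp) < f(refl) else refl
        elif f(refl) < f(simplex[-2]):
            simplex[-1] = refl
        else:
            # Contraction toward the centroid.
            con = [centroid[i] + rho * (worst[i] - centroid[i]) for i in range(n)]
            if f(con) < f(worst):
                simplex[-1] = con
            else:
                # Shrink all vertices toward the best one.
                simplex = [best] + [
                    [best[i] + sigma * (p[i] - best[i]) for i in range(n)]
                    for p in simplex[1:]
                ]
    return min(simplex, key=f)

sphere = lambda p: (p[0] - 1.0) ** 2 + (p[1] - 2.0) ** 2
best = nelder_mead(sphere, [[0.0, 0.0], [3.0, 0.0], [0.0, 3.0]])
```

The PSO-style extension would replace the single simplex with a population of interacting simplexes; that layer is not reproduced here.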
Si, Lei; Wang, Zhongbin; Yang, Yinwei
2014-01-01
In order to efficiently and accurately adjust the shearer traction speed, a novel approach based on a Takagi-Sugeno (T-S) cloud inference network (CIN) and improved particle swarm optimization (IPSO) is proposed. The T-S CIN is built through the combination of a cloud model and a T-S fuzzy neural network. Moreover, the IPSO algorithm employs a parameter automation adjustment strategy and velocity resetting to significantly improve the performance of the basic PSO algorithm in global search and fine-tuning of the solutions, and the flowchart of the proposed approach is designed. Furthermore, some simulation examples are carried out and comparison results indicate that the proposed method is feasible, efficient, and outperforms other approaches. Finally, an industrial application example from a coal mining face is presented to demonstrate the effectiveness of the proposed system. PMID:25506358
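The abstract names two IPSO ingredients, a parameter automation adjustment strategy and velocity resetting, without giving the update rules. A generic PSO loop with a linearly decreasing inertia weight and a stagnation-triggered velocity reset, one common reading of those ingredients, might look as follows; the decay schedule, thresholds, and test function are illustrative assumptions, not the paper's settings:

```python
import random

# Generic PSO with a linearly decreasing inertia weight (0.9 -> 0.4, a
# common choice in the PSO literature) and velocity resetting for
# stagnant particles. Assumed for illustration only.

def ipso(f, dim, bounds, n_particles=20, iters=100, c1=2.0, c2=2.0, seed=1):
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)
    for t in range(iters):
        w = 0.9 - (0.9 - 0.4) * t / (iters - 1)   # inertia decays linearly
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
            # Velocity resetting: re-energize a stagnant particle.
            if sum(v * v for v in vel[i]) < 1e-12:
                vel[i] = [rng.uniform(-1, 1) * (hi - lo) * 0.1 for _ in range(dim)]
            for d in range(dim):
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

best = ipso(lambda p: sum(x * x for x in p), dim=2, bounds=(-5.0, 5.0))
```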
NASA Astrophysics Data System (ADS)
Toker, C.; Gokdag, Y. E.; Arikan, F.; Arikan, O.
2012-04-01
The ionosphere is a very important part of Space Weather. Modeling and monitoring of ionospheric variability is a major part of satellite communication, navigation and positioning systems. Total Electron Content (TEC), which is defined as the line integral of the electron density along a ray path, is one of the parameters used to investigate ionospheric variability. Dual-frequency GPS receivers, with their worldwide availability and efficiency in TEC estimation, have become a major source of global and regional TEC modeling. When Global Ionospheric Maps (GIM) of International GPS Service (IGS) centers (http://iono.jpl.nasa.gov/gim.html) are investigated, it can be observed that the regional ionosphere along the midlatitude regions can be modeled as a constant, linear or quadratic surface. Globally, especially around the magnetic equator, the TEC surfaces resemble twisted and dispersed single-centered or double-centered Gaussian functions. Particle Swarm Optimization (PSO) has proved itself a fast-converging and effective optimization tool in diverse fields. Yet, in order to apply this optimization technique to TEC modeling, the method has to be modified for higher efficiency and accuracy in extraction of geophysical parameters such as model parameters of TEC surfaces. In this study, a modified PSO (mPSO) method is applied to regional and global synthetic TEC surfaces. The synthetic surfaces that represent the trend and small-scale variability of various ionospheric states are necessary to compare the performance of mPSO in terms of number of iterations, accuracy in parameter estimation and overall surface reconstruction. The Cramer-Rao bounds for each surface type and model are also investigated and the performance of mPSO is tested with respect to these bounds. For global models, the sample points that are used in optimization are obtained using the IGS receiver network. For regional TEC models, regional networks such as Turkish National Permanent GPS Network (TNPGN
Chang, Chiou-Shiung; Hwang, Jing-Min; Tai, Po-An; Chang, You-Kang; Wang, Yu-Nong; Shih, Rompin; Chuang, Keh-Shih
2016-01-01
either DCA or IMRS plans, at 9.2 ± 7% and 8.2 ± 6%, respectively. Owing to the multiple arc or beam planning designs of IMRS and VMAT, both of these techniques required higher MU delivery than DCA, with the averages being twice as high (p < 0.05). If a linear accelerator is the only modality available for establishing SRS treatment, then, based on this retrospective statistical evidence, we recommend VMAT as the optimal technique for delivering treatment to tumors adjacent to the brainstem. PMID:27396940
NASA Technical Reports Server (NTRS)
MacKay, Rebecca A.; Locci, Ivan E.; Garg, Anita; Ritzert, Frank J.
2002-01-01
is a three-phase constituent composed of TCP and stringers of gamma phase in a matrix of gamma prime. An incoherent grain boundary separates the SRZ from the gamma/gamma prime microstructure of the superalloy. The SRZ is believed to form as a result of local chemistry changes in the superalloy due to the application of the diffusion aluminide bondcoat. Locally high surface stresses also appear to promote the formation of the SRZ. Thus, techniques that change the local alloy chemistry or reduce surface stresses have been examined for their effectiveness in reducing SRZ. These SRZ-reduction steps are performed on the test specimen or the turbine blade before the bondcoat is applied. Stress-relief heat treatments developed at NASA Glenn have been demonstrated to reduce significantly the amount of SRZ that develops during subsequent high-temperature exposures. Stress-relief heat treatments reduce surface stresses by recrystallizing a thin surface layer of the superalloy. However, in alloys with very high propensities to form SRZ, stress-relief heat treatments alone do not eliminate SRZ entirely. Thus, techniques that modify the local chemistry under the bondcoat have been emphasized and optimized successfully at Glenn. One such technique is carburization, which changes the local chemistry by forming submicron carbides near the surface of the superalloy. Detailed characterizations have demonstrated that the depth and uniform distribution of these carbides are enhanced when a stress-relief treatment and an appropriate surface preparation are employed in advance of the carburization treatment. Even in alloys that have the propensity to develop a continuous SRZ layer beneath the diffusion zone, the SRZ has been completely eliminated or reduced to low, manageable levels when this combination of techniques is utilized. Now that the techniques to mitigate SRZ have been established at Glenn, TCP phase formation is being emphasized in ongoing work under the UEET Program. The
Good techniques optimize control of oil-based mud and solids
Phelps, J.; Hoopingarner, J.
1989-02-13
Effective techniques have been developed from work on dozens of North Sea wells to minimize the amount of oil-based mud discharged to the sea while maintaining acceptable levels of solids. Pressure to reduce pollution during the course of drilling prompted the development of these techniques. They involve personnel as well as optimization of the mud system and procedures. Case histories demonstrate that regulations may be met with economical techniques using existing technology. The benefits of low solids content are widely known, and are a key part of any successful mud program. Good solids control should result in lower mud costs and better drilling performance. Operators have specified high-performance shakers to accomplish this and have revised their mud programs with lower and lower allowable drilled solids percentages. This will pay off in certain areas. But with the U.K. Department of Energy regulations requiring cuttings oil discharge content (CODC) to be less than 150 g of oil/kg of dry solids discharge that went into effect Jan. 1, 1989, oil-loss control has a higher profile in the U.K. sector of the North Sea.
SNEV(Prp19/PSO4) deficiency increases PUVA-induced senescence in mouse skin.
Monteforte, Rossella; Beilhack, Georg F; Grausenburger, Reinhard; Mayerhofer, Benjamin; Bittner, Reginald; Grillari-Voglauer, Regina; Sibilia, Maria; Dellago, Hanna; Tschachler, Erwin; Gruber, Florian; Grillari, Johannes
2016-03-01
Senescent cells accumulate during ageing in various tissues and contribute to organismal ageing. However, factors that are involved in the induction of senescence in vivo are still not well understood. SNEV(Prp19/PSO4) is a multifaceted protein, known to be involved in DNA damage repair and senescence, albeit only in vitro. In this study, we used heterozygous SNEV(+/-) mice (SNEV knockout results in early embryonic lethality) and wild-type littermate controls as a model to elucidate the role of SNEV(Prp19/PSO4) in DNA damage repair and senescence in vivo. We performed PUVA treatment, consisting of 8-methoxypsoralen in combination with UVA, on mouse skin as a model system for potently inducing cellular senescence via DNA damage and premature skin ageing. We show that SNEV(Prp19/PSO4) expression decreases during organismal ageing, while p16, a marker of ageing in vivo, increases. In response to PUVA treatment, we observed an increase in γ-H2AX levels, a DNA damage marker, in the skin of both SNEV(+/-) and wild-type mice. In old SNEV(+/-) mice, this increase is accompanied by reduced epidermis thickening and an increase in p16 and collagenase levels. Thus, the DNA damage response occurring in the mouse skin upon PUVA treatment is dependent on SNEV(Prp19/PSO4) expression, and lower levels of SNEV(Prp19/PSO4), as in old SNEV(+/-) mice, result in an increase in cellular senescence and acceleration of premature skin ageing. PMID:26663487
Karthivashan, Govindarajan; Masarudin, Mas Jaffri; Kura, Aminu Umar; Abas, Faridah; Fakurazi, Sharida
2016-01-01
This study involves adaptation of bulk or sequential technique to load multiple flavonoids in a single phytosome, which can be termed as “flavonosome”. Three widely established and therapeutically valuable flavonoids, such as quercetin (Q), kaempferol (K), and apigenin (A), were quantified in the ethyl acetate fraction of Moringa oleifera leaves extract and were commercially obtained and incorporated in a single flavonosome (QKA–phosphatidylcholine) through four different methods of synthesis – bulk (M1) and serialized (M2) co-sonication and bulk (M3) and sequential (M4) co-loading. The study also established an optimal formulation method based on screening the synthesized flavonosomes with respect to their size, charge, polydispersity index, morphology, drug–carrier interaction, antioxidant potential through in vitro 1,1-diphenyl-2-picrylhydrazyl kinetics, and cytotoxicity evaluation against human hepatoma cell line (HepaRG). Furthermore, entrapment and loading efficiency of flavonoids in the optimal flavonosome have been identified. Among the four synthesis methods, sequential loading technique has been optimized as the best method for the synthesis of QKA–phosphatidylcholine flavonosome, which revealed an average diameter of 375.93±33.61 nm, with a zeta potential of −39.07±3.55 mV, and the entrapment efficiency was >98% for all the flavonoids, whereas the drug-loading capacity of Q, K, and A was 31.63%±0.17%, 34.51%±2.07%, and 31.79%±0.01%, respectively. The in vitro 1,1-diphenyl-2-picrylhydrazyl kinetics of the flavonoids indirectly depicts the release kinetic behavior of the flavonoids from the carrier. The QKA-loaded flavonosome had no indication of toxicity toward human hepatoma cell line as shown by the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide result, wherein even at the higher concentration of 200 µg/mL, the flavonosomes exert >85% of cell viability. These results suggest that sequential loading technique may be a
Recursive Ant Colony Global Optimization: a new technique for the inversion of geophysical data
NASA Astrophysics Data System (ADS)
Gupta, D. K.; Gupta, J. P.; Arora, Y.; Singh, U. K.
2011-12-01
We present a new method called Recursive Ant Colony Global Optimization (RACO) technique, a modified form of general ACO, which can be used to find the best solutions to inversion problems in geophysics. RACO simulates the social behaviour of ants to find the best path between the nest and the food source. A new term depth has been introduced, which controls the extent of recursion. A selective number of cities get qualified for the successive depth. The results of one depth are used to construct the models for the next depth and the range of values for each of the parameters is reduced without any change to the number of models. The three additional steps performed after each depth, are the pheromone tracking, pheromone updating and city selection. One of the advantages of RACO over ACO is that if a problem has multiple solutions, then pheromone accumulation will take place at more than one city thereby leading to formation of multiple nested ACO loops within the ACO loop of the previous depth. Also, while the convergence of ACO is almost linear, RACO shows exponential convergence and hence is faster than the ACO. RACO proves better over some other global optimization techniques, as it does not require any initial values to be assigned to the parameters function. The method has been tested on some mathematical functions, synthetic self-potential (SP) and synthetic gravity data. The obtained results reveal the efficiency and practicability of the method. The method is found to be efficient enough to solve the problems of SP and gravity anomalies due to a horizontal cylinder, a sphere, an inclined sheet and multiple idealized bodies buried inside the earth. These anomalies with and without noise were inverted using the RACO algorithm. The obtained results were compared with those obtained from the conventional methods and it was found that RACO results are more accurate. Finally this optimization technique was applied to real field data collected over the Surda
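The full RACO pheromone dynamics are not reproduced here; the following deliberately simplified sketch captures only the core idea described above, discretizing each parameter range into cities, letting a small colony sample them with pheromone-weighted probabilities, and recursively shrinking the ranges around the best point at each depth. All constants are illustrative assumptions:

```python
import random

# Simplified sketch of the recursive range-reduction idea behind RACO:
# each parameter range is discretized into "cities", ants select cities
# with pheromone-weighted probabilities, and the best region found is
# recursively refined at the next depth. Pheromone handling here is
# deliberately minimal and illustrative, not the full RACO scheme.

def raco(f, ranges, depth=4, n_cities=9, n_ants=15, n_iter=30, rho=0.5, seed=7):
    rng = random.Random(seed)
    best_pt, best_val = None, float("inf")
    for _ in range(depth):
        grids = [[lo + (hi - lo) * j / (n_cities - 1) for j in range(n_cities)]
                 for lo, hi in ranges]
        tau = [[1.0] * n_cities for _ in ranges]   # pheromone per city
        for _ in range(n_iter):
            for _ in range(n_ants):
                idx = [rng.choices(range(n_cities), weights=tau[d])[0]
                       for d in range(len(ranges))]
                pt = [grids[d][i] for d, i in enumerate(idx)]
                val = f(pt)
                if val < best_val:
                    best_pt, best_val, best_idx = pt, val, idx
            # Evaporate, then deposit on the best ant's cities.
            for d in range(len(ranges)):
                tau[d] = [(1 - rho) * t for t in tau[d]]
                tau[d][best_idx[d]] += 1.0
        # Shrink each range around the best point for the next depth.
        ranges = [(max(lo, c - (hi - lo) / 4), min(hi, c + (hi - lo) / 4))
                  for (lo, hi), c in zip(ranges, best_pt)]
    return best_pt, best_val

pt, val = raco(lambda p: (p[0] - 0.3) ** 2 + (p[1] + 1.1) ** 2,
               ranges=[(-2.0, 2.0), (-2.0, 2.0)])
```

Each depth works at a finer resolution over a narrower range, which is what gives the recursive scheme its faster convergence relative to a flat colony.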
NASA Astrophysics Data System (ADS)
Yang, Y.; Wu, J.
2011-12-01
Previous work in the field of multi-objective optimization under uncertainty has been concerned with the probabilistic multi-objective algorithm itself: how to effectively evaluate an estimate of uncertain objectives and identify a set of reliable Pareto optimal solutions. However, the design of a robust and reliable groundwater remediation system encounters major difficulties owing to the inherent uncertainty of hydrogeological parameters such as hydraulic conductivity (K). Thus, we need to reduce the uncertainty associated with the site characteristics of the contaminated aquifers. In this study, we first use Sequential Gaussian Simulation (SGSIM) to generate 1000 conditional realizations of lnK based on the sampled conditioning data acquired by field tests. It is worth noting that the cost of field testing often weighs heavily upon the remediation cost and must thus be taken into account in the tradeoff between solution reliability and remedial cost optimality. In this situation, we perform Monte Carlo simulation to analyze the uncertainty of lnK realizations associated with different numbers of conditioning data points. The results indicate that the uncertainty of the site characteristics and of the contaminant concentration output from the transport model decreases and then tends toward stabilization as the number of conditioning data increases. This study presents a probabilistic multi-objective evolutionary algorithm (PMOEA) that integrates a noisy genetic algorithm (NGA) and a probabilistic multi-objective genetic algorithm (MOGA). The evident difference between the deterministic MOGA and the probabilistic MOGA is the use of probabilistic Pareto domination ranking and a niche technique to ensure that each solution found is most reliable and robust. The proposed algorithm is then evaluated through a synthetic pump-and-treat (PAT) groundwater remediation test case. The 1000 lnK realizations generated by SGSIM with appropriate number of conditioning data (30
NASA Astrophysics Data System (ADS)
Sue-Ann, Goh; Ponnambalam, S. G.
This paper focuses on the operational issues of a Two-echelon Single-Vendor-Multiple-Buyers Supply chain (TSVMBSC) under the vendor managed inventory (VMI) mode of operation. To determine the optimal sales quantity for each buyer in a TSVMBSC, a mathematical model is formulated. From the optimal sales quantity, the optimal sales price can be obtained, which in turn determines the optimal channel profit and the contract price between the vendor and buyer. All these parameters depend upon the understanding of the revenue sharing between the vendor and buyers. A Particle Swarm Optimization (PSO) is proposed for this problem. Solutions obtained from PSO are compared with the best known results reported in the literature.
NASA Astrophysics Data System (ADS)
Wu, Z.; Gao, Y.; Gong, H.; Li, L.
2016-04-01
Lacking efficient methods, industry currently uses only one parameter, fuel flow rate, to evaluate nozzle quality, which is far from satisfying the current emission regulations worldwide. By utilizing synchrotron-radiation high-energy X-rays at the Shanghai Synchrotron Radiation Facility (SSRF), together with imaging techniques, 3D models of two nozzles with the same design dimensions were established, and the influence of parameter fluctuations in the azimuthal direction was analyzed in detail. Results indicate that, due to orifice misalignment, even with the same design dimension, the inlet rounding radius of the orifices differs greatly, and its fluctuation in the azimuthal direction is also large. This difference will cause variation in the flow characteristics at the orifice outlet and will further affect the spray characteristics. The study also indicates that more precise investigation of, and insight into, the evaluation and optimization of diesel nozzle structural parameters is needed.
Optimization of a wood dryer kiln using the mixed integer programming technique: A case study
Gustafsson, S.I.
1999-07-01
When wood is to be utilized as a raw material for furniture, buildings, etc., it must be dried from approximately 100% to 6% moisture content. This is achieved at least partly in a drying kiln. Heat for this purpose is provided by electrical means, or by steam from boilers fired with wood chips or oil. By making a close examination of monitored values from an actual drying kiln, it has been possible to optimize the use of steam and electricity using the so-called mixed integer programming technique. Owing to the operating schedule for the drying kiln, it has been necessary to divide the drying process into very short time intervals, i.e., a number of minutes. Since a drying cycle takes about two or three weeks, this presents a considerable mathematical problem that has to be solved.
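The kiln model itself is not reproduced in the abstract; the flavor of such a mixed-integer formulation can be shown on a toy heat schedule, where each interval's electric heater state is a binary variable, steam is a capacity-limited continuous variable, and a startup cost couples the intervals. All numbers are invented:

```python
from itertools import product

# Toy mixed-integer schedule in the spirit of the kiln problem: in each
# interval, heat comes from steam (continuous, cheap, capacity-limited)
# and/or an electric heater (binary on/off with a startup cost). Solved
# here by brute-force enumeration of the binaries; a real MIP solver
# would branch-and-bound instead. All numbers are invented.

DEMAND = [3.0, 6.0, 8.0, 8.0, 5.0, 2.0]   # kWh needed per interval
STEAM_CAP, STEAM_PRICE = 5.0, 0.3          # kWh limit, cost per kWh
ELEC_OUT, ELEC_PRICE, STARTUP = 4.0, 0.8, 1.5

def schedule_cost(on):                      # on: tuple of 0/1 per interval
    cost, prev = 0.0, 0
    for t, d in enumerate(DEMAND):
        steam_needed = d - ELEC_OUT * on[t]
        if steam_needed > STEAM_CAP or steam_needed < 0:
            return None                     # infeasible interval
        cost += STEAM_PRICE * steam_needed + ELEC_PRICE * ELEC_OUT * on[t]
        if on[t] and not prev:
            cost += STARTUP                 # startup cost couples intervals
        prev = on[t]
    return cost

feasible = [(schedule_cost(on), on) for on in product((0, 1), repeat=len(DEMAND))]
best_cost, best_on = min((c, on) for c, on in feasible if c is not None)
```

With these numbers the heater is forced on exactly when demand exceeds steam capacity, and the startup cost makes it cheapest to run those intervals as one contiguous block.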
Engine Yaw Augmentation for Hybrid-Wing-Body Aircraft via Optimal Control Allocation Techniques
NASA Technical Reports Server (NTRS)
Taylor, Brian R.; Yoo, Seung-Yeun
2011-01-01
Asymmetric engine thrust was implemented in a hybrid-wing-body non-linear simulation to reduce the amount of aerodynamic surface deflection required for yaw stability and control. Hybrid-wing-body aircraft are especially susceptible to yaw surface deflection due to their decreased bare airframe yaw stability resulting from the lack of a large vertical tail aft of the center of gravity. Reduced surface deflection, especially for trim during cruise flight, could reduce the fuel consumption of future aircraft. Designed as an add-on, optimal control allocation techniques were used to create a control law that tracks total thrust and yaw moment commands with an emphasis on not degrading the baseline system. Implementation of engine yaw augmentation is shown and feasibility is demonstrated in simulation with a potential drag reduction of 2 to 4 percent. Future flight tests are planned to demonstrate feasibility in a flight environment.
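As a generic illustration of the allocation idea (not the control law used in the paper), a weighted minimum-norm allocator that splits a commanded yaw moment between a rudder-like surface and differential thrust has a simple closed form; the effectiveness and weighting numbers below are invented:

```python
# Weighted minimum-norm control allocation for a single (yaw) axis:
# minimize u' W u subject to B u = m_cmd, whose closed form is
# u = W^-1 B' (B W^-1 B')^-1 m_cmd. Penalizing surface deflection more
# heavily shifts the commanded moment onto differential thrust.
# The effectiveness values B and the weights below are illustrative.

def allocate_yaw(m_cmd, B, w):
    """B: per-effector yaw effectiveness; w: diagonal penalty weights."""
    s = sum(b * b / wi for b, wi in zip(B, w))   # scalar B W^-1 B'
    return [(b / wi) * m_cmd / s for b, wi in zip(B, w)]

B = [1.0, 0.5]            # [rudder, differential thrust] effectiveness
u_even = allocate_yaw(2.0, B, w=[1.0, 1.0])
u_save_surface = allocate_yaw(2.0, B, w=[10.0, 1.0])  # penalize rudder use
```

Both allocations reproduce the commanded moment exactly; the second trades rudder deflection for differential thrust, which is the drag-reduction mechanism the abstract describes.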
Lucero, V.; Meale, B.M.; Purser, F.E.
1990-01-01
The analysis discussed in this paper was performed as part of the buried waste remediation efforts at the Idaho National Engineering Laboratory (INEL). The specific type of remediation discussed herein involves a thermal treatment process for converting contaminated soil and waste into a stable, chemically-inert form. Models of the proposed process were developed using probabilistic risk assessment (PRA) fault tree and event tree modeling techniques. The models were used to determine the appropriateness of the conceptual design by identifying potential hazards of system operations. Additional models were developed to represent the reliability aspects of the system components. By performing various sensitivities with the models, optimal design modifications are being identified to substantiate an integrated, cost-effective design representing minimal risk to the environment and/or public with maximum component reliability. 4 figs.
Chao, Ming; Wei, Jie; Li, Tianfang; Yuan, Yading; Rosenzweig, Kenneth E; Lo, Yeh-Chi
2016-04-21
We present a study of extracting respiratory signals from cone beam computed tomography (CBCT) projections within the framework of the Amsterdam Shroud (AS) technique. Acquired prior to the radiotherapy treatment, CBCT projections were preprocessed for contrast enhancement by converting the original intensity images to attenuation images with which the AS image was created. An adaptive robust z-normalization filtering was applied to further augment the weak oscillating structures locally. From the enhanced AS image, the respiratory signal was extracted using a two-step optimization approach to effectively reveal the large-scale regularity of the breathing signals. CBCT projection images from five patients acquired with the Varian Onboard Imager on the Clinac iX System Linear Accelerator (Varian Medical Systems, Palo Alto, CA) were employed to assess the proposed technique. Stable breathing signals can be reliably extracted using the proposed algorithm. Reference waveforms obtained using an air bellows belt (Philips Medical Systems, Cleveland, OH) were exported and compared to those with the AS based signals. The average errors for the enrolled patients between the estimated breaths per minute (bpm) and the reference waveform bpm were as low as -0.07, with a standard deviation of 1.58. The new algorithm outperformed the original AS technique for all patients by 8.5% to 30%. The impact of gantry rotation on the breathing signal was assessed with data acquired with a Quasar phantom (Modus Medical Devices Inc., London, Canada) and found to be minimal on the signal frequency. The new technique developed in this work will provide a practical solution for rendering a markerless breathing signal using the CBCT projections for thoracic and abdominal patients. PMID:27008349
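The adaptive robust variant used in the paper is not specified here; the plain sliding-window z-normalization it builds on can be sketched as follows, where the window length and test signal are arbitrary illustrative choices:

```python
import math

# Sliding-window z-normalization of a 1-D signal: each sample is rescaled
# by the mean and standard deviation of its local window, which equalizes
# the amplitude of weak oscillations along the signal. This is the plain
# (non-robust) version of the filtering step described above; the window
# length and the test signal are arbitrary illustrative choices.

def window_znorm(signal, half_window=5, eps=1e-12):
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half_window), min(len(signal), i + half_window + 1)
        win = signal[lo:hi]
        mean = sum(win) / len(win)
        std = math.sqrt(sum((x - mean) ** 2 for x in win) / len(win))
        out.append((signal[i] - mean) / (std + eps))
    return out

# An oscillation whose amplitude jumps by a factor of 20 halfway through:
raw = [(0.1 if i < 30 else 2.0) * math.sin(0.8 * i) for i in range(60)]
flat = window_znorm(raw)
```

After normalization the weak and strong halves of the oscillation have comparable amplitude, which is what lets faint breathing traces stand out in the AS image.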
NASA Astrophysics Data System (ADS)
Galanis, George; Famelis, Ioannis; Kalogeri, Christina
2014-10-01
In recent years, a new and highly demanding framework has been set for environmental sciences and applied mathematics by issues that are of interest not only to the scientific community but to today's society in general: global warming, renewable energy resources, and natural hazards can be listed among them. The research community today follows two main directions to address these problems: the utilization of environmental observations obtained from in situ or remote sensing sources, and meteorological-oceanographic simulations based on physical-mathematical models. In particular, to reach credible local forecasts, the two data sources are combined by algorithms that are essentially based on optimization processes. Conventional approaches in this framework usually neglect the topological-geometrical properties of the space of the data under study, adopting least-squares methods based on classical Euclidean geometry tools. In the present work, new optimization techniques are discussed that make use of methodologies from a rapidly advancing branch of applied mathematics, information geometry. The latter proves that the distributions of data sets are elements of non-Euclidean structures in which the underlying geometry may differ significantly from the classical one. Geometrical entities such as Riemannian metrics, distances, curvature and affine connections are utilized to define the optimum distributions fitting the environmental data in specific areas and to form differential systems that describe the optimization procedures. The proposed methodology is illustrated by an application to wind speed forecasts on the island of Kefalonia, Greece.
Application of Optimization Techniques to Design of Unconventional Rocket Nozzle Configurations
NASA Technical Reports Server (NTRS)
Follett, W.; Ketchum, A.; Darian, A.; Hsu, Y.
1996-01-01
Several current rocket engine concepts, such as the bell-annular tripropellant engine and the linear aerospike proposed for the X-33, require unconventional three-dimensional rocket nozzles that must conform to rectangular or sector-shaped envelopes to meet integration constraints. These types of nozzles lie outside the current experience database; therefore, the application of efficient design methods for these propulsion concepts is critical to the success of launch vehicle programs. The objective of this work is to optimize several different nozzle configurations, including two- and three-dimensional geometries. The methodology includes coupling computational fluid dynamics (CFD) analysis to genetic algorithms and Taguchi methods, as well as implementation of a streamline-tracing technique. Results of applications are shown for several geometries, including three-dimensional thruster nozzles with round or super-elliptic throats and rectangular exits, two- and three-dimensional thrusters installed within a bell nozzle, and three-dimensional thrusters with round throats and sector-shaped exits. Because of the novel designs considered in this study, there is little experience that can be used to guide the effort and limit the design space. With a nearly infinite parameter space to explore, simple parametric design studies cannot possibly search the entire design space within the time frame required to impact the design cycle. For this reason, robust and efficient optimization methods are required to explore and exploit the design space to achieve high-performance engine designs. Five case studies that examine the application of various techniques in the engineering environment are presented in this paper.
Chandran, Sajeev; Ravi, Punnarao; Saha, Ranendra N
2006-07-01
The objective of this study was to develop controlled-release matrix-embedded formulations of celecoxib (CCX) as the candidate drug using hydroxypropyl methylcellulose (HPMC) and ethyl cellulose (EC), either alone or in combination, using optimization techniques such as the polynomial method and composite design. This enables the development of controlled-release formulations with predictable and better release characteristics in a smaller number of trials. Controlled-release matrix tablets of CCX were prepared by the wet granulation method. In vitro release rate studies were carried out in a USP dissolution apparatus (paddle method) in 900 ml of sodium phosphate buffer (pH 7.4) with 1% v/v Tween-80. The in vitro drug release data were suitably transformed and used to develop mathematical models using first-order polynomial equation and composite design techniques of optimization. In the formulations prepared using HPMC alone, the release rate decreased as the polymer proportion in the matrix base was increased, whereas in the formulations prepared using EC alone, only a marginal difference in release rate was observed upon increasing the polymer proportion. In formulations containing a combination of HPMC and EC, drug release was found to depend on the relative proportions of HPMC and EC used in the tablet matrix. The release of the drug from these formulations was extended up to 21 h, indicating that they can serve as once-daily controlled-release formulations for CCX. Mathematical analysis of the release kinetics indicates an approximately Fickian release character for most of the designed formulations. The mathematical equation developed by transforming the in vitro release data using the composite design model showed better correlation between observed and predicted t(50%) (time required for 50% of the drug release) than the first-order polynomial equation model. The equation thus developed can be used to predict the release characteristics of the designed formulations.
Rakheja, S; Gurram, R; Gouw, G J
1993-10-01
Hand-arm vibration (HAV) models serve as an effective tool to assess the vibration characteristics of the hand-tool system and to evaluate the attenuation performance of vibration isolation mechanisms. This paper describes a methodology to identify the parameters of HAV models, whether linear or nonlinear, using mechanical impedance data and a nonlinear programming based optimization technique. Three- and four-degrees-of-freedom (DOF) linear, piecewise linear and nonlinear HAV models are formulated and analyzed to yield impedance characteristics in the 5-1000 Hz frequency range. A local equivalent linearization algorithm, based upon the principle of energy similarity, is implemented to simulate the nonlinear HAV models. Optimization methods are employed to identify the model parameters, such that the magnitude and phase errors between the computed and measured impedance characteristics are minimum in the entire frequency range. The effectiveness of the proposed method is demonstrated through derivations of models that correlate with the measured X-axis impedance characteristics of the hand-arm system, proposed by ISO. The results of the study show that a linear model cannot predict the impedance characteristics in the entire frequency range, while a piecewise linear model yields an accurate estimation. PMID:8253830
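The parameter identification described here — adjust model parameters until computed and measured impedance curves agree — can be sketched generically. Below, a 1-DOF mass-spring-damper impedance magnitude is fitted by a crude seeded random search; this stands in for the nonlinear-programming step of the paper (the model orders, bounds and optimizer are illustrative assumptions):

```python
import math
import random

def impedance_mag(m, c, k, w):
    # Driving-point impedance magnitude of a 1-DOF mass-spring-damper:
    # Z(w) = c + j*(m*w - k/w), so |Z| = sqrt(c^2 + (m*w - k/w)^2)
    return math.hypot(c, m * w - k / w)

def fit_params(freqs, measured, n_iter=20000, seed=1):
    """Least-squares fit of (m, c, k) to measured |Z| by random search —
    a simple stand-in for the nonlinear-programming optimizer."""
    rng = random.Random(seed)
    best, best_err = None, float("inf")
    for _ in range(n_iter):
        m = rng.uniform(0.1, 5.0)
        c = rng.uniform(1.0, 100.0)
        k = rng.uniform(1e3, 1e6)
        err = sum((impedance_mag(m, c, k, w) - z) ** 2
                  for w, z in zip(freqs, measured))
        if err < best_err:
            best, best_err = (m, c, k), err
    return best, best_err
```

A real implementation would fit magnitude and phase jointly over the 5-1000 Hz range, as the paper does.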
NASA Astrophysics Data System (ADS)
Tsai, Wen-Ping; Chang, Fi-John; Chang, Li-Chiu; Herricks, Edwin E.
2015-11-01
Flow regime is the key driver of riverine ecology. This study proposes a novel hybrid methodology based on artificial intelligence (AI) techniques for quantifying riverine ecosystem requirements and delivering suitable flow regimes that sustain river and floodplain ecology through optimized reservoir operation. The approach addresses the challenge of better reconciling riverine ecosystem requirements with existing human demands. We first explored and characterized the relationship between flow regimes and fish communities through a hybrid artificial neural network (ANN). The non-dominated sorting genetic algorithm II (NSGA-II) was then applied to river flow management of the Shihmen Reservoir in northern Taiwan. The ecosystem requirement took the form of maximizing fish diversity, which could be estimated by the hybrid ANN; the human requirement was to provide a higher satisfaction degree of water supply. The results demonstrated that the proposed methodology can offer a number of diversified alternative strategies for reservoir operation and improve reservoir operational strategies, producing downstream flows that meet both human and ecosystem needs. What makes this methodology attractive to water resources managers is the wide spread of Pareto-front (optimal) solutions, which allows decision makers to easily determine the best compromise through the trade-off between reservoir operational strategies for human and ecosystem needs.
NASA Technical Reports Server (NTRS)
Rao, R. G. S.; Ulaby, F. T.
1977-01-01
The paper examines optimal sampling techniques for obtaining accurate spatial averages of soil moisture, at various depths and for cell sizes in the range 2.5-40 acres, with a minimum number of samples. Both simple random sampling and stratified sampling procedures are used to reach a set of recommended sample sizes for each depth and for each cell size. The major conclusions from the statistical sampling tests are that (1) the number of samples required decreases with increasing depth; (2) when the total number of samples cannot be prespecified or when the moisture in only a single layer is of interest, a simple random sampling procedure should be used, based on the observed mean and SD for data from a single field; (3) when the total number of samples can be prespecified and the objective is to measure the soil moisture profile with depth, stratified random sampling based on optimal allocation should be used; and (4) decreasing the sensor resolution cell size leads to fairly large decreases in sample sizes with stratified sampling procedures, whereas only a moderate decrease is obtained with simple random sampling procedures.
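The "optimal allocation" of conclusion (3) is classically Neyman allocation, which samples each stratum in proportion to N_h * S_h. The abstract does not spell out the formula, so the sketch below is the textbook version:

```python
def neyman_allocation(strata_sizes, strata_sds, total_n):
    """Neyman (optimal) allocation for stratified sampling: allocate
    sample size to stratum h in proportion to N_h * S_h, which minimizes
    the variance of the stratified mean for a fixed total sample size.
    Rounded allocations may differ from total_n by a sample or two."""
    weights = [n * s for n, s in zip(strata_sizes, strata_sds)]
    total_weight = sum(weights)
    return [round(total_n * w / total_weight) for w in weights]
```

For example, a stratum with three times the standard deviation of an equally sized one receives three times the samples.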
NASA Technical Reports Server (NTRS)
Walsh, Joanne L.
1988-01-01
Optimization procedures are developed to systematically provide closely-spaced vibration frequencies. A general-purpose finite-element program for eigenvalue and sensitivity analyses is combined with formal mathematical programming techniques. Results are presented for three studies. The first study uses a simple model to obtain a design with two pairs of closely-spaced frequencies. Two formulations are developed: an objective function-based formulation and constraint-based formulation for the frequency spacing. It is found that conflicting goals are handled better by a constraint-based formulation. The second study uses a detailed model to obtain a design with one pair of closely-spaced frequencies while satisfying requirements on local member frequencies and manufacturing tolerances. Two formulations are developed. Both the constraint-based and the objective function-based formulations perform reasonably well and converge to the same results. However, no feasible design solution exists which satisfies all design requirements for the choices of design variables and the upper and lower design variable values used. More design freedom is needed to achieve a fully satisfactory design. The third study is part of a redesign activity in which a detailed model is used. The use of optimization in this activity allows investigation of numerous options (such as number of bays, material, minimum diagonal wall thicknesses) in a relatively short time. The procedure provides data for judgments on the effects of different options on the design.
Aquino, P L M; Fonseca, F S; Mozzer, O D; Giordano, R C; Sousa, R
2016-07-01
Clostridium novyi causes necrotic hepatitis in sheep and cattle, as well as gas gangrene. The microorganism is strictly anaerobic, fastidious, and difficult to cultivate at industrial scale. C. novyi type B produces alpha and beta toxins, with the alpha toxin being linked to the presence of specific bacteriophages. The main strategy to combat diseases caused by C. novyi is vaccination, employing vaccines produced with toxoids or with toxoids and bacterins. In order to identify culture medium components and concentrations that maximized cell density and alpha toxin production, a neuro-fuzzy algorithm was applied to predict the yields of the fermentation process for production of C. novyi type B, within a global search procedure using the simulated annealing technique. Maximizing cell density and toxin production is a multi-objective optimization problem and could be treated by a Pareto approach; nevertheless, the approach chosen here was a step-by-step one. The optimum values obtained with this approach were validated at laboratory scale, and the results were used to reload the data matrix for re-parameterization of the neuro-fuzzy model, which was implemented for a final optimization step with regard to alpha toxin productivity. With this methodology, a threefold increase in alpha toxin production could be achieved. PMID:27003282
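The global search by simulated annealing can be sketched generically: propose a random perturbation, always accept improvements, accept worsening moves with probability exp(-delta/T), and cool T geometrically. The quadratic test function below stands in for the neuro-fuzzy yield model, and the step size, cooling rate and iteration count are illustrative assumptions:

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=1.0, cooling=0.995,
                        n_iter=5000, seed=0):
    """Minimize f over a list of real parameters by simulated annealing."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    best, fbest = list(x), fx
    t = t0
    for _ in range(n_iter):
        cand = [xi + rng.gauss(0.0, step) for xi in x]   # random perturbation
        fc = f(cand)
        # accept downhill moves always, uphill moves with prob exp(-delta/T)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / max(t, 1e-12)):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        t *= cooling                                      # geometric cooling
    return best, fbest
```

In the paper's setting, x would be the vector of medium component concentrations and f the (negated) predicted yield from the neuro-fuzzy model.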
Particle Swarm Optimization Algorithm for Optimizing Assignment of Blood in Blood Banking System
Olusanya, Micheal O.; Arasomwan, Martins A.; Adewumi, Aderemi O.
2015-01-01
This paper reports the performance of particle swarm optimization (PSO) for the assignment of blood to meet patients' transfusion requests. While the drive for blood donation lingers, there is a need for effective and efficient management of the blood available in blood banking systems. Moreover, the inherent danger of transfusing wrong blood types to patients, unnecessary importation of blood units from external sources, and wastage of blood products due to nonusage necessitate the development of mathematical models and techniques for effective handling of blood distribution among the available blood types, in order to minimize wastage and importation from external sources. This gives rise to the blood assignment problem (BAP), introduced recently in the literature. We propose queue and multiple-knapsack models with a PSO-based solution to address this challenge. Simulation is based on sets of randomly generated data that mimic the real-world population distribution of blood types. The results obtained show the efficiency of the proposed algorithm for the BAP, with no blood units wasted and very low importation, where necessary, from outside the blood bank. The results can therefore serve as a benchmark and basis for decision support tools for real-life deployment. PMID:25815046
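The abstract applies PSO to a discrete assignment model; for readers unfamiliar with the algorithm itself, here is a minimal continuous global-best PSO for minimization. The coefficients (w=0.7, c1=c2=1.5) are common defaults, not the paper's settings, and the BAP would add a discrete knapsack encoding on top:

```python
import random

def pso(f, dim, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5,
        bounds=(-5.0, 5.0), seed=0):
    """Canonical global-best particle swarm optimization (minimization)."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                                  # personal bests
    Pf = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: Pf[i])
    G, Gf = P[g][:], Pf[g]                                 # global best
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (P[i][d] - X[i][d])   # cognitive pull
                           + c2 * r2 * (G[d] - X[i][d]))     # social pull
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            fx = f(X[i])
            if fx < Pf[i]:
                P[i], Pf[i] = X[i][:], fx
                if fx < Gf:
                    G, Gf = X[i][:], fx
    return G, Gf
```

On a smooth test function such as the sphere, this converges rapidly to the global minimum.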
Optimal control of switched linear systems based on Migrant Particle Swarm Optimization algorithm
NASA Astrophysics Data System (ADS)
Xie, Fuqiang; Wang, Yongji; Zheng, Zongzhun; Li, Chuanfeng
2009-10-01
The optimal control problem for switched linear systems with internally forced switching has more constraints than that with externally forced switching; heavy computation and slow convergence in solving this problem are major obstacles. In this paper we describe a new approach to solving this problem, called Migrant Particle Swarm Optimization (Migrant PSO). Imitating the behavior of a flock of migrant birds, the Migrant PSO applies naturally to both continuous and discrete spaces, in which a deterministic optimization algorithm and a stochastic search method are combined. The efficacy of the proposed algorithm is illustrated via a numerical example.
Improved Particle Swarm Optimization for Global Optimization of Unimodal and Multimodal Functions
NASA Astrophysics Data System (ADS)
Basu, Mousumi
2015-07-01
Particle swarm optimization (PSO) performs well for small dimensional and less complicated problems but fails to locate global minima for complex multi-minima functions. This paper proposes an improved particle swarm optimization (IPSO) which introduces Gaussian random variables in velocity term. This improves search efficiency and guarantees a high probability of obtaining the global optimum without significantly impairing the speed of convergence and the simplicity of the structure of particle swarm optimization. The algorithm is experimentally validated on 17 benchmark functions and the results demonstrate good performance of the IPSO in solving unimodal and multimodal problems. Its high performance is verified by comparing with two popular PSO variants.
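The abstract says IPSO "introduces Gaussian random variables in velocity term." One plausible reading — an assumption; the paper's exact formulation may differ — replaces the usual uniform coefficients r1, r2 in the velocity update with absolute Gaussian draws:

```python
import random

def ipso_velocity(v, x, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=random):
    """One velocity component of an IPSO-style update: Gaussian random
    coefficients |N(0,1)| in place of the uniform r1, r2 of standard PSO.
    Heavier-tailed coefficients occasionally produce large exploratory
    steps, which can help escape local minima on multimodal functions."""
    g1 = abs(rng.gauss(0.0, 1.0))
    g2 = abs(rng.gauss(0.0, 1.0))
    return w * v + c1 * g1 * (pbest - x) + c2 * g2 * (gbest - x)
```

The rest of the PSO loop (position update, personal/global best bookkeeping) is unchanged.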
Jacob, Dayee; Raben, Adam; Sarkar, Abhirup; Grimm, Jimm; Simpson, Larry
2008-11-01
Purpose: To perform an independent validation of an anatomy-based inverse planning simulated annealing (IPSA) algorithm in obtaining superior target coverage and reducing the dose to the organs at risk. Method and Materials: In a recent prostate high-dose-rate brachytherapy protocol study by the Radiation Therapy Oncology Group (0321), our institution treated 20 patients between June 1, 2005 and November 30, 2006. These patients had received a high-dose-rate boost dose of 19 Gy to the prostate, in addition to an external beam radiotherapy dose of 45 Gy with intensity-modulated radiotherapy. Three-dimensional dosimetry was obtained for the following optimization schemes in the Plato Brachytherapy Planning System, version 14.3.2, using the same dose constraints for all the patients treated during this period: anatomy-based IPSA optimization, geometric optimization, and dose point optimization. Dose-volume histograms were generated for the planning target volume and organs at risk for each optimization method, from which the volume receiving at least 75% of the dose (V75%) for the rectum and bladder, the volume receiving at least 125% of the dose (V125%) for the urethra, and the total volume receiving the reference dose (V100%) and the volume receiving 150% of the dose (V150%) for the planning target volume were determined. The dose homogeneity index and conformal index for the planning target volume for each optimization technique were compared. Results: Despite suboptimal needle position in some implants, the IPSA algorithm was able to comply with the tight Radiation Therapy Oncology Group dose constraints for 90% of the patients in this study. In contrast, the compliance was only 30% for dose point optimization and only 5% for geometric optimization. Conclusions: Anatomy-based IPSA optimization proved to be the superior technique and also the fastest for reducing the dose to the organs at risk without compromising the target coverage.
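The V100%, V125% and V150% figures of merit above reduce to threshold counts over the voxel dose distribution. A minimal sketch, under the simplifying assumptions of equal voxel volumes and a flat list of per-voxel doses:

```python
def dvh_metrics(voxel_doses, reference_dose):
    """Fractional volume receiving at least a given percentage of the
    reference dose, computed from per-voxel doses (equal voxel volumes)."""
    def v_at(pct):
        level = reference_dose * pct / 100.0
        return sum(1 for d in voxel_doses if d >= level) / len(voxel_doses)
    return {"V100": v_at(100), "V125": v_at(125), "V150": v_at(150)}
```

In a treatment planning system these quantities are read off the cumulative dose-volume histogram for each structure (target, rectum, bladder, urethra).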
Particle swarm optimization with scale-free interactions.
Liu, Chen; Du, Wen-Bo; Wang, Wen-Xu
2014-01-01
The particle swarm optimization (PSO) algorithm, in which individuals collaborate with their interacting neighbors, as in bird flocking, to search for the optima, has been successfully applied in a wide range of fields pertaining to searching and convergence. Here we employ a scale-free network to represent the inter-individual interactions in the population, named SF-PSO. In contrast to traditional PSO with fully connected or regular topology, the scale-free topology used in SF-PSO incorporates diversity in the individuals' searching and information-dissemination abilities, leading to a quite different optimization process. Systematic results on several standard test functions demonstrate that SF-PSO gives rise to a better balance between convergence speed and optimum quality, accounting for its much better performance than that of traditional PSO algorithms. We further explore the dynamical searching process microscopically, finding that the cooperation of hub nodes and non-hub nodes plays a crucial role in optimizing the convergence process. Our work may have implications in computational intelligence and complex networks. PMID:24859007
NASA Astrophysics Data System (ADS)
Zamora, A.; Gutierrez, A. E.; Velasco, A. A.
2014-12-01
Two- and three-dimensional models obtained from the inversion of geophysical data are widely used to represent the structural composition of the Earth and to constrain independent models obtained from other geological data (e.g. core samples, seismic surveys, etc.). However, inverse modeling of gravity data presents a very unstable and ill-posed mathematical problem, given that solutions are non-unique and small changes in parameters (position and density contrast of an anomalous body) can strongly affect the resulting model. Through the implementation of an interior-point constrained optimization technique, we improve 2-D and 3-D models of Earth structures representing known density contrasts, mapping anomalous bodies in uniform regions and boundaries between layers in layered environments. The proposed techniques are applied to synthetic data and to gravity data from the Rio Grande Rift and the Cooper Flat Mine region located in Sierra County, New Mexico. Specifically, we improve the 2-D and 3-D Earth models by discarding unacceptable solutions (those that do not satisfy the required constraints or are geologically unfeasible), thereby reducing the solution space.
Tran, Cuong D.; Gopalsamy, Geetha L.; Mortimer, Elissa K.; Young, Graeme P.
2015-01-01
It is well recognised that zinc deficiency is a major global public health issue, particularly in young children in low-income countries with diarrhoea and environmental enteropathy. Zinc supplementation is regarded as a powerful tool to correct zinc deficiency as well as to treat a variety of physiologic and pathologic conditions. However, the dose and frequency of its use as well as the choice of zinc salt are not clearly defined regardless of whether it is used to treat a disease or correct a nutritional deficiency. We discuss the application of zinc stable isotope tracer techniques to assess zinc physiology, metabolism and homeostasis and how these can address knowledge gaps in zinc supplementation pharmacokinetics. This may help to resolve optimal dose, frequency, length of administration, timing of delivery to food intake and choice of zinc compound. It appears that long-term preventive supplementation can be administered much less frequently than daily but more research needs to be undertaken to better understand how best to intervene with zinc in children at risk of zinc deficiency. Stable isotope techniques, linked with saturation response and compartmental modelling, also have the potential to assist in the continued search for simple markers of zinc status in health, malnutrition and disease. PMID:26035248
Improving the performance of mass-consistent numerical models using optimization techniques
Barnard, J.C.; Wegley, H.L.; Hiester, T.R.
1985-09-01
This report describes a technique of using a mass-consistent model to derive wind speeds over a microscale region of complex terrain. A serious limitation in the use of these numerical models is that the calculated wind field is highly sensitive to some input parameters, such as those specifying atmospheric stability. Because accurate values for these parameters are not usually known, confidence in the calculated winds is low. However, values for these parameters can be found by tuning the model to existing wind observations within a microscale area. This tuning is accomplished by using a single-variable, unconstrained optimization procedure that adjusts the unknown parameters so that the error between the observed winds and model calculations of these winds is minimized. Model verification is accomplished by using eight sets of hourly averaged wind data. These data are obtained from measurements made at approximately 30 sites covering a wind farm development in the Altamont Pass area. When the model is tuned to a small subset of the 30 sites, an accurate determination of the wind speeds was made for the remaining sites in six of the eight cases. (The two that failed were low wind speed cases.) Therefore, when this technique is used, numerical modeling shows great promise as a tool for microscale siting of wind turbines in complex terrain.
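The tuning step above is a single-variable, unconstrained minimization of the model-versus-observation wind error. The report does not name its exact routine; golden-section search is one standard choice for this kind of one-dimensional problem, sketched here with a quadratic stand-in for the error function:

```python
import math

def golden_section_min(f, a, b, tol=1e-6):
    """Minimize a unimodal 1-D function f on [a, b] by golden-section
    search, shrinking the bracket by the inverse golden ratio each step."""
    inv_phi = (math.sqrt(5.0) - 1.0) / 2.0      # ~0.618
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):                          # minimum lies in [a, d]
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:                                    # minimum lies in [c, b]
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2.0
```

In the report's setting, f would map a candidate stability parameter to the RMS error between observed winds and the mass-consistent model's predictions at the tuning sites.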
Gravity inversion and uncertainty assessment of basement relief via Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Pallero, J. L. G.; Fernández-Martínez, J. L.; Bonvalot, S.; Fudym, O.
2015-05-01
Gravity inversion is a classical tool in applied geophysics that corresponds to either a linear (density unknown) or a nonlinear (geometry unknown) inverse problem, depending on the model parameters. Inversion of the basement relief of sedimentary basins is an important application among the nonlinear techniques. A common way to approach this problem consists in discretizing the basin using polygons (or other geometries) and iteratively solving the nonlinear inverse problem by local optimization. Nevertheless, this kind of approach is highly dependent on the prior information that is used and lacks a correct solution appraisal (nonlinear uncertainty analysis). In this paper, we present the application of a full family of Particle Swarm Optimizers (PSO) to the 2D gravity inversion and model appraisal (uncertainty assessment) of basement relief in sedimentary basins. The application of these algorithms to synthetic and real cases (a gravimetric profile from the Atacama Desert in northern Chile) shows that it is possible to perform a fast inversion and uncertainty assessment of the gravimetric model using a sampling-while-optimizing procedure. Besides, the parameters of these exploratory PSO optimizers are automatically tuned and selected based on stability criteria. We also show that the result is robust to the presence of noise in the data. The fact that these algorithms do not require large computational resources makes them very attractive for solving this kind of gravity inversion problem.
NASA Astrophysics Data System (ADS)
Palma, Giuseppe; Bia, Pietro; Mescia, Luciano; Yano, Tetsuji; Nazabal, Virginie; Taguchi, Jun; Moréac, Alain; Prudenzano, Francesco
2014-07-01
A mid-IR amplifier consisting of a tapered chalcogenide fiber coupled to an Er-doped chalcogenide microsphere has been optimized via a particle swarm optimization (PSO) approach. More precisely, a dedicated three-dimensional numerical model, based on the coupled mode theory and solving the rate equations, has been integrated with the PSO procedure. The rate equations have included the main transitions among the erbium energy levels, the amplified spontaneous emission, and the most important secondary transitions pertaining to the ion-ion interactions. The PSO has allowed the optimal choice of the microsphere and fiber radius, taper angle, and fiber-microsphere gap in order to maximize the amplifier gain. The taper angle and the fiber-microsphere gap have been optimized to efficiently inject into the microsphere both the pump and the signal beams and to improve their spatial overlapping with the rare-earth-doped region. The employment of the PSO approach shows different attractive features, especially when many parameters have to be optimized. The numerical results demonstrate the effectiveness of the proposed approach for the design of amplifying systems. The PSO-based optimization approach has allowed the design of a microsphere-based amplifying system more efficient than a similar device designed by using a deterministic optimization method. In fact, the amplifier designed via the PSO exhibits a simulated gain G=33.7 dB, which is higher than the gain G=6.9 dB of the amplifier designed via the deterministic method.
3He lung morphometry technique: Accuracy analysis and pulse sequence optimization
NASA Astrophysics Data System (ADS)
Sukstanskii, A. L.; Conradi, M. S.; Yablonskiy, D. A.
2010-12-01
The 3He lung morphometry technique (Yablonskiy et al., JAP, 2009), based on MRI measurements of hyperpolarized-gas diffusion in lung airspaces, provides unique information on the lung microstructure at the alveolar level. 3D tomographic images of standard morphological parameters (mean airspace chord length, lung parenchyma surface-to-volume ratio, and the number of alveoli per unit lung volume) can be created from a rather short (several seconds) MRI scan. These parameters are most commonly used to characterize lung morphometry but were not previously available from in vivo studies. The background of the 3He lung morphometry technique is a previously proposed model of lung acinar airways, treated as cylindrical passages of external radius R covered by alveolar sleeves of depth h, together with a theory of gas diffusion in these airways. The initial works approximated the acinar airways as very long cylinders, all with the same R and h. The present work aims at analyzing the effects of realistic acinar airway structures, incorporating airway branching, physiological airway lengths, a physiological ratio of airway ducts and sacs, and distributions of R and h. By means of Monte Carlo computer simulations, we demonstrate that our technique allows rather accurate measurement of the geometrical and morphological parameters of acinar airways. First, the error in determining one of the most important physiological parameters of lung parenchyma, the surface-to-volume ratio, does not exceed several percent. Second, we analyze the effect of the susceptibility-induced inhomogeneous magnetic field on the parameter estimates and demonstrate that this effect is rather negligible at B0 ⩽ 3 T and becomes substantial only at higher B0. Third, we theoretically derive an optimal choice of MR pulse sequence parameters, which should be used to acquire a series of diffusion-attenuated MR signals, allowing a substantial decrease in the acquisition time and an improvement in accuracy of the
El-Mohri, Youcef; Antonuk, Larry E.; Choroszucha, Richard B.; Zhao, Qihua; Jiang, Hao; Liu, Langechuan
2014-01-01
Thick, segmented crystalline scintillators have shown increasing promise as replacement x-ray converters for the phosphor screens currently used in active matrix flat-panel imagers (AMFPIs) in radiotherapy, by virtue of providing over an order of magnitude improvement in the detective quantum efficiency (DQE). However, element-to-element misalignment in current segmented scintillator prototypes creates a challenge for optimal registration with underlying AMFPI arrays, resulting in degradation of spatial resolution. To overcome this challenge, a methodology involving the use of a relatively high-resolution AMFPI array in combination with novel binning techniques is presented. The array, which has a pixel pitch of 0.127 mm, was coupled to prototype segmented scintillators based on BGO, LYSO and CsI:Tl materials, each having a nominal element-to-element pitch of 1.016 mm and thickness of ~1 cm. The AMFPI systems incorporating these prototypes were characterized at a radiotherapy energy of 6 MV in terms of the modulation transfer function (MTF), noise power spectrum (NPS), DQE, and reconstructed images of a resolution phantom acquired using a cone-beam CT geometry. For each prototype, the application of 8×8 pixel binning to achieve a sampling pitch of 1.016 mm was optimized through use of an alignment metric which minimized misregistration and thereby improved spatial resolution. In addition, the application of alternative binning techniques that exclude the collection of signal near septal walls resulted in further significant improvement in spatial resolution for the BGO and LYSO prototypes, though not for the CsI:Tl prototype, due to the large amount of optical cross-talk resulting from significant light spread between scintillator elements in that device. The efficacy of these techniques for improving spatial resolution appears to be enhanced for scintillator materials that exhibit mechanical hardness, high density and high refractive index, such as BGO. Moreover, materials that exhibit these properties as well as offer significantly higher light
Cotter, Meghan M.; Whyms, Brian J.; Kelly, Michael P.; Doherty, Benjamin M.; Gentry, Lindell R.; Bersu, Edward T.; Vorperian, Houri K.
2015-01-01
The hyoid bone anchors and supports the vocal tract. Its complex shape is best studied in three dimensions, but it is difficult to capture on computed tomography (CT) images and three-dimensional volume renderings. The goal of this study was to determine the optimal CT scanning and rendering parameters to accurately measure the growth and developmental anatomy of the hyoid and to determine whether it is feasible and necessary to use these parameters in the measurement of hyoids from in vivo CT scans. Direct linear and volumetric measurements of skeletonized hyoid bone specimens were compared to corresponding CT images to determine the most accurate scanning parameters and three-dimensional rendering techniques. A pilot study was undertaken using in vivo scans from a retrospective CT database to determine feasibility of quantifying hyoid growth. Scanning parameters and rendering technique affected accuracy of measurements. Most linear CT measurements were within 10% of direct measurements; however, volume was overestimated when CT scans were acquired with a slice thickness greater than 1.25 mm. Slice-by-slice thresholding of hyoid images decreased volume overestimation. The pilot study revealed that the linear measurements tested correlate with age. A fine-tuned rendering approach applied to small slice thickness CT scans produces the most accurate measurements of hyoid bones. However, linear measurements can be accurately assessed from in vivo CT scans at a larger slice thickness. Such findings imply that investigation into the growth and development of the hyoid bone, and the vocal tract as a whole, can now be performed using these techniques. PMID:25810349
Anatomy-based transmission factors for technique optimization in portable chest x-ray
NASA Astrophysics Data System (ADS)
Liptak, Christopher L.; Tovey, Deborah; Segars, William P.; Dong, Frank D.; Li, Xiang
2015-03-01
Portable x-ray examinations often account for a large percentage of all radiographic examinations. Currently, portable examinations do not employ automatic exposure control (AEC). To aid in the design of a size-specific technique chart, acrylic slabs of various thicknesses are often used to estimate x-ray transmission for patients of various body thicknesses. This approach, while simple, does not account for patient anatomy, tissue heterogeneity, and the attenuation properties of the human body. To better account for these factors, in this work, we determined x-ray transmission factors using computational patient models that are anatomically realistic. A Monte Carlo program was developed to model a portable x-ray system. Detailed modeling was done of the x-ray spectrum, detector positioning, collimation, and source-to-detector distance. Simulations were performed using 18 computational patient models from the extended cardiac-torso (XCAT) family (9 males, 9 females; age range: 2-58 years; weight range: 12-117 kg). The ratio of air kerma at the detector with and without a patient model was calculated as the transmission factor. Our study showed that the transmission factor decreased exponentially with increasing patient thickness. For the range of patient thicknesses examined (12-28 cm), the transmission factor ranged from approximately 21% to 1.9% when the air kerma used in the calculation represented an average over the entire imaging field of view. The transmission factor ranged from approximately 21% to 3.6% when the air kerma used in the calculation represented the average signals from two discrete AEC cells behind the lung fields. These exponential relationships may be used to optimize imaging techniques for patients of various body thicknesses to aid in the design of clinical technique charts.
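The exponential relationship described above can be turned into a simple technique-chart helper. The sketch below (illustrative only, not from the paper) fits T(t) = A·exp(-μt) through the two field-of-view-average endpoints quoted in the abstract and interpolates intermediate thicknesses; the functional form follows the abstract, but the interpolated values are assumptions of this fit.

```python
import math

def fit_exponential_transmission(t1, T1, t2, T2):
    """Fit T(t) = A * exp(-mu * t) exactly through two (thickness, transmission) points."""
    mu = math.log(T1 / T2) / (t2 - t1)   # effective attenuation coefficient per cm
    A = T1 * math.exp(mu * t1)
    return A, mu

# Endpoints quoted in the abstract: ~21% at 12 cm, ~1.9% at 28 cm (field-of-view average)
A, mu = fit_exponential_transmission(12.0, 0.21, 28.0, 0.019)

def transmission(t_cm):
    """Predicted transmission factor for a patient of thickness t_cm."""
    return A * math.exp(-mu * t_cm)

# A technique chart could then scale the tube output by 1 / transmission(t)
t20 = transmission(20.0)  # interpolated value for a 20 cm patient
```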
Optimized fractional cloudiness determination from five ground-based remote sensing techniques
Boers, R.; de Haij, M. J.; Wauben, W.M.F.; Baltink, Henk K.; van Ulft, L. H.; Savenije, M.; Long, Charles N.
2010-12-23
A one-year record of fractional cloudiness at 10-minute intervals was generated for the Cabauw Experimental Site for Atmospheric Research (CESAR; 51°58'N, 4°55'E) using an integrated assessment of five different observational methods. The five methods are based on active as well as passive systems and use either a hemispheric or a column remote sensing technique. The one-year instrumental cloudiness data were compared against a 30-year climatology of Observer data in the vicinity of CESAR (1971-2000). In the intermediate 2-6 octa range, most instruments, but especially the column methods, report a lower frequency of occurrence of cloudiness than the absolute minimum values from the 30-year Observer climatology. At night, the Observer records fewer clouds in the 1-2 octa range than during the day, while the instruments register more. During daytime the Observer also records much more 7 octa cloudiness than the instruments. One column method, combining a radar with a lidar, outstrips all other techniques in recording cloudiness, even up to heights in excess of 9 km, mostly owing to the high sensitivity of the radar used in the technique. A reference algorithm was designed to derive a continuous and optimized record of fractional cloudiness. Outputs from the individual instruments were weighted according to the cloud base height reported at the observation time: the larger the height, the lower the weight. The algorithm was able to provide fractional cloudiness observations every 10 minutes for 98% of the total period of 12 months (15 May 2008 - 14 May 2009).
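The height-dependent weighting step of such a reference algorithm can be sketched as follows. The abstract only states that weight decreases with cloud base height; the specific decay function and the instrument values below are assumptions for illustration, not the paper's actual weighting.

```python
def combined_cloud_fraction(reports):
    """Blend per-instrument cloud fractions for one 10-minute slot.

    reports: list of (cloud_fraction, cloud_base_height_m) tuples.
    Each report is weighted inversely with its reported cloud base height;
    the 1 km decay scale here is an illustrative assumption.
    """
    weights = [1.0 / (1.0 + height_m / 1000.0) for _, height_m in reports]
    total = sum(weights)
    return sum(frac * w for (frac, _), w in zip(reports, weights)) / total

# Three hypothetical instruments reporting at one slot: the low-cloud report
# dominates the blend because of its small cloud base height
slot = [(0.75, 800.0), (0.50, 4500.0), (0.25, 9000.0)]
blended = combined_cloud_fraction(slot)
```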
Caproni, A.; Toffoli, R. T.; Monteiro, H.; Abraham, Z.; Teixeira, D. M.
2011-07-20
We present a new technique for obtaining model fittings to very long baseline interferometric images of astrophysical jets. The method minimizes a performance function proportional to the sum of the squared differences between the model and observed images. The model image is constructed by summing N_s elliptical Gaussian sources, each characterized by six parameters: two-dimensional peak position, peak intensity, eccentricity, amplitude, and orientation angle of the major axis. We present results for the fitting of two benchmark jets: the first constructed from three individual Gaussian sources, the second formed by five Gaussian sources. Both jets were analyzed by our cross-entropy technique in finite and infinite signal-to-noise regimes, with the background noise chosen to mimic that found in interferometric radio maps. These images were constructed to simulate most of the conditions encountered in interferometric images of active galactic nuclei. We show that the cross-entropy technique is capable of recovering the parameters of the sources with an accuracy similar to that obtained from the traditional Astronomical Image Processing System (AIPS) task IMFIT when the image is relatively simple (e.g., few components). For more complex interferometric maps, our method displays superior performance in recovering the parameters of the jet components. Our methodology is also able to show quantitatively the number of individual components present in an image. An additional application of the cross-entropy technique to a real image of a BL Lac object is shown and discussed. Our results indicate that our cross-entropy model-fitting technique should be used in situations involving the analysis of complex emission regions having more than three sources, even though it is substantially slower than current model-fitting tasks (at least 10,000 times slower on a single processor, depending on the number of sources to be optimized). As in the case of any model fitting
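A minimal version of the model image and performance function can be sketched as below. The exact parameterization used in the paper is not spelled out in the abstract; here each component uses peak position, peak intensity, semimajor axis, eccentricity, and major-axis orientation, which is one plausible reading of the six parameters.

```python
import numpy as np

def elliptical_gaussian(X, Y, x0, y0, amp, a, e, psi):
    """One elliptical Gaussian component: peak at (x0, y0), peak intensity amp,
    semimajor axis a, eccentricity e, and major-axis orientation angle psi."""
    b = a * np.sqrt(1.0 - e**2)            # semiminor axis from eccentricity
    ct, st = np.cos(psi), np.sin(psi)
    xr = (X - x0) * ct + (Y - y0) * st     # rotate into the ellipse frame
    yr = -(X - x0) * st + (Y - y0) * ct
    return amp * np.exp(-0.5 * ((xr / a)**2 + (yr / b)**2))

def performance(params, X, Y, observed):
    """Sum of squared residuals between the summed-Gaussian model image and
    the observed map; this is the quantity the optimizer would minimize."""
    model = np.zeros_like(observed)
    for (x0, y0, amp, a, e, psi) in params:
        model += elliptical_gaussian(X, Y, x0, y0, amp, a, e, psi)
    return np.sum((model - observed)**2)
```

The cross-entropy (or any other global) optimizer would then search the 6·N_s-dimensional parameter space for the minimum of `performance`.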
Automatic Parameter Tuning for the Morpheus Vehicle Using Particle Swarm Optimization
NASA Technical Reports Server (NTRS)
Birge, B.
2013-01-01
A high-fidelity simulation using a PC-based Trick framework has been developed for Johnson Space Center's Morpheus test bed flight vehicle. There is an iterative development loop of refining and testing the hardware, refining the software, comparing the software simulation to hardware performance, and adjusting either or both the hardware and the simulation to extract the best performance from the hardware as well as the most realistic representation of the hardware from the software. A Particle Swarm Optimization (PSO) based technique has been developed that increases the speed and accuracy of this iterative development cycle. Parameters in software can be automatically tuned to make the simulation match real-world subsystem data from test flights. Special considerations for scale, linearity, and discontinuities can be all but ignored with this technique, allowing fast turnaround both for simulation tune-up to match hardware changes and during the test and validation phase to help identify hardware issues. Software models with insufficient control authority to match hardware test data can be immediately identified, and using this technique requires little to no specialized knowledge of optimization, freeing model developers to concentrate on spacecraft engineering. Integration of the PSO into the Morpheus development cycle will be discussed, as well as a case study highlighting the tool's effectiveness.
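The core of a global-best PSO parameter tuner of the kind described can be sketched in a few dozen lines. This is a generic textbook PSO, not the Morpheus tool itself, and the toy objective below stands in for the real simulation-to-telemetry error.

```python
import random

def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal global-best PSO minimizing objective(params) within box bounds."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # per-particle best positions
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]     # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # clamp to the search box
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy stand-in for "match simulation to flight data": recover gain/offset (2.5, -1.0)
target = (2.5, -1.0)
err = lambda p: (p[0] - target[0])**2 + (p[1] - target[1])**2
best, best_err = pso(err, bounds=[(0.0, 5.0), (-5.0, 5.0)])
```

Because the objective is treated as a black box, the same loop works whether the error is a simple analytic function, as here, or an expensive simulation run compared against telemetry.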
Skull removal in MR images using a modified artificial bee colony optimization algorithm.
Taherdangkoo, Mohammad
2014-01-01
Removal of the skull from brain Magnetic Resonance (MR) images is an important preprocessing step required for other image analysis techniques such as brain tissue segmentation. In this paper, we propose a new algorithm based on the Artificial Bee Colony (ABC) optimization algorithm to remove the skull region from brain MR images. We modify the ABC algorithm using a different strategy for initializing the coordinates of scout bees and their direction of search. Moreover, we impose an additional constraint on the ABC algorithm to avoid the creation of discontinuous regions. We found that our algorithm successfully removed the entire bony skull from a sample of de-identified MR brain images acquired from different model scanners. Compared with previously introduced, well-known optimization algorithms such as Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO), the proposed algorithm demonstrates superior results and computational performance, suggesting its potential for clinical applications. PMID:25059256
Shah, Chirag; Vicini, Frank A.
2011-11-15
As more women survive breast cancer, long-term toxicities affecting their quality of life, such as lymphedema (LE) of the arm, gain importance. Although numerous studies have attempted to determine incidence rates, identify optimal diagnostic tests, enumerate efficacious treatment strategies and outline risk reduction guidelines for breast cancer-related lymphedema (BCRL), few groups have consistently agreed on any of these issues. As a result, standardized recommendations are still lacking. This review will summarize the latest data addressing all of these concerns in order to provide patients and health care providers with optimal, contemporary recommendations. Published incidence rates for BCRL vary substantially with a range of 2-65% based on surgical technique, axillary sampling method, radiation therapy fields treated, and the use of chemotherapy. Newer clinical assessment tools can potentially identify BCRL in patients with subclinical disease with prospective data suggesting that early diagnosis and management with noninvasive therapy can lead to excellent outcomes. Multiple therapies exist with treatments defined by the severity of BCRL present. Currently, the standard of care for BCRL in patients with significant LE is complex decongestive physiotherapy (CDP). Contemporary data also suggest that a multidisciplinary approach to the management of BCRL should begin prior to definitive treatment for breast cancer employing patient-specific surgical, radiation therapy, and chemotherapy paradigms that limit risks. Further, prospective clinical assessments before and after treatment should be employed to diagnose subclinical disease. In those patients who require aggressive locoregional management, prophylactic therapies and the use of CDP can help reduce the long-term sequelae of BCRL.
Eichmiller, Jessica J; Miller, Loren M; Sorensen, Peter W
2016-01-01
Few studies have examined capture and extraction methods for environmental DNA (eDNA) to identify techniques optimal for detection and quantification. In this study, precipitation, centrifugation and filtration eDNA capture methods and six commercially available DNA extraction kits were evaluated for their ability to detect and quantify common carp (Cyprinus carpio) mitochondrial DNA using quantitative PCR in a series of laboratory experiments. Filtration methods yielded the most carp eDNA, and a glass fibre (GF) filter performed better than a similar pore size polycarbonate (PC) filter. Smaller pore sized filters had higher regression slopes of biomass to eDNA, indicating that they were potentially more sensitive to changes in biomass. Comparison of DNA extraction kits showed that the MP Biomedicals FastDNA SPIN Kit yielded the most carp eDNA and was the most sensitive for detection purposes, despite minor inhibition. The MoBio PowerSoil DNA Isolation Kit had the lowest coefficient of variation in extraction efficiency between lake and well water and had no detectable inhibition, making it most suitable for comparisons across aquatic environments. Of the methods tested, we recommend using a 1.5 μm GF filter, followed by extraction with the MP Biomedicals FastDNA SPIN Kit for detection. For quantification of eDNA, filtration through a 0.2-0.6 μm pore size PC filter, followed by extraction with MoBio PowerSoil DNA Isolation Kit was optimal. These results are broadly applicable for laboratory studies on carps and potentially other cyprinids. The recommendations can also be used to inform choice of methodology for field studies. PMID:25919417
Multidisciplinary Optimization of Tilt Rotor Blades Using Comprehensive Composite Modeling Technique
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi; McCarthy, Thomas R.; Rajadas, John N.
1997-01-01
An optimization procedure is developed for addressing the design of composite tilt rotor blades. A comprehensive technique, based on a higher-order laminate theory, is developed for the analysis of the thick composite load-carrying sections, modeled as box beams, in the blade. The theory, which is based on a refined displacement field, is a three-dimensional model which approximates the elasticity solution so that the beam cross-sectional properties are not reduced to one-dimensional beam parameters. Both inplane and out-of-plane warping are included automatically in the formulation. The model can accurately capture the transverse shear stresses through the thickness of each wall while satisfying stress free boundary conditions on the inner and outer surfaces of the beam. The aerodynamic loads on the blade are calculated using the classical blade element momentum theory. Analytical expressions for the lift and drag are obtained based on the blade planform with corrections for the high lift capability of rotor blades. The aerodynamic analysis is coupled with the structural model to formulate the complete coupled equations of motion for aeroelastic analyses. Finally, a multidisciplinary optimization procedure is developed to improve the aerodynamic, structural and aeroelastic performance of the tilt rotor aircraft. The objective functions include the figure of merit in hover and the high speed cruise propulsive efficiency. Structural, aerodynamic and aeroelastic stability criteria are imposed as constraints on the problem. The Kreisselmeier-Steinhauser function is used to formulate the multiobjective function problem. The search direction is determined by the Broyden-Fletcher-Goldfarb-Shanno algorithm. The optimum results are compared with the baseline values and show significant improvements in the overall performance of the tilt rotor blade.
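The Kreisselmeier-Steinhauser aggregation mentioned above has a standard closed form, KS(g) = g_max + (1/ρ) ln Σ_i exp(ρ(g_i − g_max)), which folds many constraint values into one smooth, conservative scalar suitable for gradient-based search. A minimal sketch:

```python
import math

def kreisselmeier_steinhauser(constraints, rho=50.0):
    """KS aggregation of constraint values g_i (feasible when g_i <= 0).

    Returns a smooth upper bound on max(g_i) that approaches the true
    maximum as the draw-down factor rho grows. The shift by g_max keeps
    the exponentials from overflowing.
    """
    gmax = max(constraints)
    return gmax + math.log(sum(math.exp(rho * (g - gmax)) for g in constraints)) / rho
```

Driving the KS value to zero or below then enforces all structural, aerodynamic, and aeroelastic constraints at once, which is why it pairs naturally with a quasi-Newton search such as BFGS.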
Tuomas, V.; Jaakko, L.
2013-07-01
This article discusses the optimization of the target motion sampling (TMS) temperature treatment method, previously implemented in the Monte Carlo reactor physics code Serpent 2. The TMS method was introduced in [1], and the first practical results were presented at the PHYSOR 2012 conference [2]. The method is a stochastic technique for taking the effect of thermal motion into account on the fly in a Monte Carlo neutron transport calculation. It is based on sampling the target velocities at collision sites and then utilizing the 0 K cross sections in the target-at-rest frame for reaction sampling. The fact that the total cross section becomes a distributed quantity is handled using rejection sampling techniques. The original implementation of TMS requires 2.0 times more CPU time in a PWR pin-cell case than a conventional Monte Carlo calculation relying on pre-broadened effective cross sections. In an HTGR case examined in this paper, the overhead factor is as high as 3.6. By first changing from a multi-group to a continuous-energy implementation and then fine-tuning a parameter affecting the conservativity of the majorant cross section, it is possible to decrease the overhead factors to 1.4 and 2.3, respectively. Preliminary calculations are also made using a new and as yet incomplete optimization method in which the temperature of the basis cross section is increased above 0 K. It seems that with the new approach it may be possible to decrease the factors to as low as 1.06 and 1.33, respectively, but its functionality has not yet been proven; these performance measures should therefore be considered preliminary. (authors)
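The rejection-sampling step and the way the majorant's conservativity drives the overhead factor can be illustrated with a toy model. The cross section, the thermal-motion proposal, and the majorant choices below are illustrative stand-ins under stated assumptions, not Serpent's actual implementation.

```python
import random
import math

def sigma_0K(E_rel):
    """Toy 0 K cross section with smooth 1/v behavior (a stand-in for
    tabulated data; decreasing in energy, which the majorant below assumes)."""
    return 1.0 / math.sqrt(E_rel)

def sample_relative_energy(E_n, kT, sigma_maj, rng):
    """TMS-style rejection step: propose a relative energy perturbed by target
    thermal motion, accept with probability sigma_0K / majorant. Returns the
    accepted energy and the number of proposals (the per-collision overhead)."""
    tries = 0
    while True:
        tries += 1
        E_rel = E_n + abs(rng.gauss(0.0, kT))  # crude thermal-motion proposal
        if rng.random() < sigma_0K(E_rel) / sigma_maj:
            return E_rel, tries

rng = random.Random(7)
E_n, kT = 1.0, 0.05
tight = sigma_0K(E_n)        # smallest valid majorant here (sigma decreases in E)
loose = 2.0 * sigma_0K(E_n)  # overly conservative majorant

n = 2000
overhead_tight = sum(sample_relative_energy(E_n, kT, tight, rng)[1] for _ in range(n)) / n
overhead_loose = sum(sample_relative_energy(E_n, kT, loose, rng)[1] for _ in range(n)) / n
```

The looser majorant stays valid but roughly doubles the number of proposals per accepted collision, which is exactly the overhead-versus-conservativity trade-off the paper tunes.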
Stieler, Florian; Yan, Hui; Lohr, Frank; Wenz, Frederik; Yin, Fang-Fang
2009-01-01
Background: Parameter optimization in the process of inverse treatment planning for intensity modulated radiation therapy (IMRT) is mainly conducted by human planners in order to create a plan with the desired dose distribution. To automate this tedious process, an artificial intelligence (AI) guided system was developed and examined. Methods: The AI system can automatically accomplish the optimization process based on prior knowledge operated by several fuzzy inference systems (FIS). Prior knowledge, which was collected from human planners during their routine trial-and-error process of inverse planning, first has to be "translated" into a set of "if-then rules" for driving the FISs. To minimize subjective error, which could be costly during this knowledge acquisition process, it is necessary to find a quantitative method to accomplish this task automatically. A well-developed machine learning technique, based on an adaptive neuro-fuzzy inference system (ANFIS), was introduced in this study. With this approach, the prior knowledge of a fuzzy inference system can be quickly collected from observation data (clinically used constraints). The learning capability and the accuracy of such a system were analyzed by generating multiple FISs from data collected from an AI system with known settings and rules. Results: Multiple analyses showed good agreement between FIS and ANFIS according to rules (error of the output values of ANFIS based on the training data from FIS of 7.77 ± 0.02%) and membership functions (3.9%), thus suggesting that the "behavior" of an FIS can be propagated to another based on this process. The initial experimental results on a clinical case showed that ANFIS is an effective way to build an FIS from practical data, and analysis of ANFIS and FIS with clinical cases showed good planning results provided by ANFIS. OAR volumes encompassed by characteristic percentages of isodoses were reduced by a mean of between 0 and 28%. Conclusion: The study demonstrated a
A particle swarm optimization variant with an inner variable learning strategy.
Wu, Guohua; Pedrycz, Witold; Ma, Manhao; Qiu, Dishan; Li, Haifeng; Liu, Jin
2014-01-01
Although Particle Swarm Optimization (PSO) has demonstrated competitive performance in solving global optimization problems, it exhibits some limitations when dealing with optimization problems of high dimensionality and complex landscape. In this paper, we integrate problem-oriented knowledge into the design of a PSO variant. The resulting novel PSO algorithm with an inner variable learning strategy (PSO-IVL) is particularly efficient for optimizing functions with symmetric variables. Symmetric variables of the optimized function have to satisfy a certain quantitative relation. Based on this knowledge, the inner variable learning (IVL) strategy helps the particle to inspect the relation among its inner variables, determine the exemplar variable for all other variables, and then make each variable learn from the exemplar variable in terms of their quantitative relations. In addition, we design a new trap detection and jumping-out strategy to help particles escape from local optima. The trap detection operation is employed at the level of individual particles, whereas the trap jumping-out strategy is adaptive in nature. Experimental simulations completed for some representative optimization functions demonstrate the excellent performance of PSO-IVL. The effectiveness of PSO-IVL underscores the usefulness of augmenting evolutionary algorithms with problem-oriented domain knowledge. PMID:24587746
Mohamed, Ahmed F; Elarini, Mahdi M; Othman, Ahmed M
2014-05-01
One of the most recent optimization techniques applied to the optimal design of a photovoltaic (PV) system supplying an isolated load is the Artificial Bee Colony (ABC) algorithm. The proposed methodology optimizes the cost of the PV system, including the PV modules, a battery bank, a battery charge controller, and an inverter. Two objective functions are proposed: the first is the PV module output power, which is to be maximized, and the second is the life cycle cost (LCC), which is to be minimized. The analysis is performed based on solar radiation and ambient temperature measured at Helwan city, Egypt. A comparison between the optimal results of the ABC algorithm and the Genetic Algorithm (GA) is made, and a second location, Zagazig city, is selected to check the validity of the ABC algorithm elsewhere. The ABC results are more optimal than those of the GA. The results encourage the use of PV systems to electrify the rural sites of Egypt. PMID:25685507
Order-2 Stability Analysis of Particle Swarm Optimization.
Liu, Qunfeng
2015-01-01
Several stability analyses and stable regions of particle swarm optimization (PSO) have been proposed previously, adopting a stagnation assumption and differing definitions of stability. In this paper, the order-2 stability of PSO is analyzed based on a weak stagnation assumption. A new definition of stability is proposed and an order-2 stable region is obtained. Several existing stability analyses for canonical PSO are compared, especially their definitions of stability and the corresponding stable regions. It is shown that the classical stagnation assumption is too strict and not necessary. Moreover, among all these definitions of stability, ours is shown to require the weakest conditions, with additional conditions bringing no benefit. Finally, numerical experiments are reported to show that the obtained stable region is meaningful. A new parameter combination of PSO is also shown to perform well, even better than some known best parameter combinations. PMID:24738856
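For context, the classical first-moment stable region for canonical PSO under the stagnation assumption can be checked with a one-line predicate. Note this is the well-known order-1 region, not the stricter order-2 region derived in the paper.

```python
def in_classical_stable_region(w, c1, c2):
    """Classical first-moment (order-1) convergence region for canonical PSO
    under the stagnation assumption: |w| < 1 and 0 < c1 + c2 < 2(1 + w).
    Order-2 (variance) stability regions are subsets of this one."""
    c = c1 + c2
    return abs(w) < 1.0 and 0.0 < c < 2.0 * (1.0 + w)

# The widely used constriction-style combination lies inside the region
inside = in_classical_stable_region(0.7298, 1.49618, 1.49618)
```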
Hedegaard, R.F.; Ho, J.; Eisert, J.
1996-12-31
Three-dimensional (3-D) geoscience volume modeling can be used to improve the efficiency of the environmental investigation and remediation process. At several unsaturated-zone spill sites at two Superfund (CERCLA) sites (military installations) in California, all aspects of subsurface contamination have been characterized using an integrated computerized approach. With the aid of software such as LYNX GMS™, Wavefront's Data Visualizer™, and Gstools (public domain), the authors have created a central platform from which to map a contaminant plume, visualize the same plume three-dimensionally, and calculate volumes of contaminated soil or groundwater above important health risk thresholds. The developed methodology allows rapid data inspection for decisions such that the characterization process and remedial action design are optimized. By using the 3-D geoscience modeling and visualization techniques, the technical staff are able to evaluate the completeness and spatial variability of the data and conduct 3-D geostatistical predictions of contaminant and lithologic distributions. The geometry of each plume is estimated using 3-D variography on raw analyte values and indicator thresholds for the kriged model. Three-dimensional lithologic interpretation is based on either "linked" parallel cross sections or on kriged grid estimations derived from borehole data coded with permeability indicator thresholds. Investigative borings, as well as soil vapor extraction/injection wells, are sited and excavation costs are estimated using these results. The principal advantages of the technique are the efficiency and rapidity with which meaningful results are obtained and the enhanced visualization capability, which is a desirable medium for communicating with both the technical staff and nontechnical audiences.
NASA Astrophysics Data System (ADS)
Tsampas, P.; Roditis, G.; Papadimitriou, V.; Chatzakos, P.; Gan, Tat-Hean
2013-05-01
Increasing demand for mobile, autonomous devices has made energy harvesting a particular point of interest. Systems that can be powered by a few hundred microwatts could feature their own energy extraction module. Energy can be harvested from the environment close to the device; in particular, conversion of ambient mechanical vibrations via piezoelectric transducers is one of the most investigated fields in energy harvesting. A technique for optimized energy harvesting using piezoelectric actuators, called "Synchronized Switching Harvesting", is explored. Compared to a typical full-bridge rectifier, the proposed harvesting technique can greatly improve harvesting efficiency, even in a significantly extended frequency window around the piezoelectric actuator's resonance. In this paper, the design concept, theoretical analysis, modeling, implementation and experimental results using CEDRAT's APA 400M-MD piezoelectric actuator are presented in detail. Moreover, we suggest design guidelines for optimum selection of the storage unit in direct relation to the characteristics of the random vibrations. From a practical standpoint, the harvesting unit is based on dedicated electronics that continuously sense the charge level of the actuator's piezoelectric element. When the charge is sensed to reach a maximum, it is rapidly transferred into a storage unit. Special care is taken so that the electronics operate at low voltages and consume only a very small fraction of the energy stored. The final prototype includes the harvesting circuit, implemented with miniaturized, low-cost and low-consumption electronics, and a storage unit consisting of a supercapacitor array, forming a truly self-powered system drawing energy from ambient random vibrations with a wide range of characteristics.
Parameter tuning of PVD process based on artificial intelligence technique
NASA Astrophysics Data System (ADS)
Norlina, M. S.; Diyana, M. S. Nor; Mazidah, P.; Rusop, M.
2016-07-01
In this study, an artificial intelligence technique is proposed for the parameter tuning of a PVD process. Owing to its previous adaptation to similar optimization problems, the genetic algorithm (GA) is selected to optimize the parameter tuning of the RF magnetron sputtering process. The most optimized parameter combination obtained from GA's optimization result is expected to produce the desired zinc oxide (ZnO) thin film from the sputtering process. The parameters involved in this study were RF power, deposition time and substrate temperature. The algorithm was tested on 25 datasets of parameter combinations. The results of the computational experiment were then compared with the actual results from the laboratory experiment. Based on this comparison, GA proved reliable for optimizing the parameter combination before tuning the RF magnetron sputtering machine. To verify the GA results, the algorithm was also compared with other well-known optimization algorithms, namely particle swarm optimization (PSO) and the gravitational search algorithm (GSA). The results showed that GA was reliable in solving this RF magnetron sputtering parameter tuning problem, and GA showed better accuracy in the optimization based on the fitness evaluation.
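A minimal sketch of this kind of GA-based parameter tuning, assuming Python. The surrogate roughness model, parameter ranges, and GA settings below are invented placeholders standing in for the study's experimental data and actual configuration.

```python
import random

def surrogate_roughness(power, time, temp):
    """Hypothetical surrogate: a smooth bowl with its optimum at
    (200 W, 60 min, 300 C); a real study would fit this to measured data."""
    return (0.001 * (power - 200.0) ** 2
            + 0.01 * (time - 60.0) ** 2
            + 0.0005 * (temp - 300.0) ** 2)

def ga_tune(pop_size=30, gens=60, seed=7):
    """Elitist real-coded GA over (RF power, deposition time, temperature)."""
    rng = random.Random(seed)
    lo = (100.0, 10.0, 100.0)     # assumed lower bounds per parameter
    hi = (400.0, 120.0, 500.0)    # assumed upper bounds per parameter
    fit = lambda ind: surrogate_roughness(*ind)
    pop = [[rng.uniform(l, h) for l, h in zip(lo, hi)] for _ in range(pop_size)]
    for _ in range(gens):
        def pick():                                  # binary tournament selection
            a, b = rng.sample(pop, 2)
            return a if fit(a) < fit(b) else b
        children = []
        while len(children) < pop_size:
            p1, p2 = pick(), pick()
            alpha = rng.random()                     # blend crossover
            child = [alpha * x + (1 - alpha) * y for x, y in zip(p1, p2)]
            if rng.random() < 0.2:                   # uniform mutation on one gene
                d = rng.randrange(3)
                child[d] = rng.uniform(lo[d], hi[d])
            children.append(child)
        pop = sorted(pop + children, key=fit)[:pop_size]   # elitist survival
    return min(pop, key=fit)

best = ga_tune()
```

The elitist survival step guarantees the best-so-far combination is never lost between generations, which is what makes the short run lengths quoted in such studies workable.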
NASA Astrophysics Data System (ADS)
Lahmiri, Salim
2016-02-01
Multiresolution analysis techniques, including the continuous wavelet transform, empirical mode decomposition, and variational mode decomposition, are tested in the context of interest rate next-day variation prediction. In particular, multiresolution analysis techniques are used to decompose the actual interest rate variation, and a feedforward neural network is used for training and prediction. The particle swarm optimization technique is adopted to optimize the network's initial weights. For comparison purposes, an autoregressive moving average model, a random walk process and the naive model are used as the main reference models. To show the feasibility of the presented hybrid models combining multiresolution analysis techniques and a feedforward neural network optimized by particle swarm optimization, we used a set of six illustrative interest rates: Moody's seasoned Aaa corporate bond yield, Moody's seasoned Baa corporate bond yield, 3-month, 6-month and 1-year treasury bills, and the effective federal funds rate. The forecasting results show that all multiresolution-based prediction systems outperform the conventional reference models on the criteria of mean absolute error, mean absolute deviation, and root mean-squared error. Therefore, it is advantageous to adopt hybrid multiresolution techniques and soft computing models to forecast interest rate daily variations, as they provide good forecasting performance.
Ćujić, Nada; Šavikin, Katarina; Janković, Teodora; Pljevljakušić, Dejan; Zdunić, Gordana; Ibrić, Svetlana
2016-03-01
A traditional maceration method was used for the extraction of polyphenols from chokeberry (Aronia melanocarpa) dried fruit, and the effects of several extraction parameters on the total phenolics and anthocyanins contents were studied. Various solvents, particle size, solid-solvent ratio and extraction time were investigated as independent variables in a two-level factorial design. Among the examined variables, time was not a statistically significant factor for the extraction of polyphenols. The optimal extraction conditions were maceration of 0.75 mm size berries with 50% ethanol at a solid-solvent ratio of 1:20, and the predicted values were 27.7 mg GAE/g for total phenolics and 0.27% for total anthocyanins. Under the selected conditions, the experimental total phenolics were 27.8 mg GAE/g and total anthocyanins 0.27%, in agreement with the predicted values. In addition, a complementary quantitative analysis of individual phenolic compounds was performed using an HPLC method. The study indicated that maceration is an effective and simple technique for the extraction of bioactive compounds from chokeberry fruit. PMID:26471536
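How main effects are screened in a two-level factorial design of this kind can be illustrated with a small sketch. The coded design and response values below are invented for illustration (they are not the chokeberry measurements); the third factor's effect is set near zero to mirror the finding that extraction time was not significant.

```python
# Coded 2^3 design: factors A (solvent), B (particle size), C (time) at -1/+1.
# Responses (e.g. mg GAE/g) are invented; C is constructed to have ~no effect.
runs = [
    # (A,  B,  C, response)
    (-1, -1, -1, 20.1), (+1, -1, -1, 25.9), (-1, +1, -1, 22.2), (+1, +1, -1, 27.8),
    (-1, -1, +1, 20.3), (+1, -1, +1, 26.1), (-1, +1, +1, 21.9), (+1, +1, +1, 28.0),
]

def main_effect(runs, col):
    """Main effect = mean response at the +1 level minus mean at the -1 level."""
    hi = [r[3] for r in runs if r[col] == +1]
    lo = [r[3] for r in runs if r[col] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

effects = {name: main_effect(runs, i) for i, name in enumerate("ABC")}
```

An effect near zero (here C, standing in for extraction time) is exactly what leads to dropping a factor as statistically unimportant in such screening designs.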
NASA Technical Reports Server (NTRS)
Walsh, Joanne L.
1989-01-01
Optimization procedures are developed to systematically provide closely-spaced vibration frequencies. A general purpose finite-element program for eigenvalue and sensitivity analyses is combined with formal mathematical programming techniques. Results are presented for three studies. The first study uses a simple model to obtain a design with two pairs of closely-spaced frequencies. Two formulations are developed: an objective function-based formulation and constraint-based formulation for the frequency spacing. It is found that conflicting goals are handled better by a constraint-based formulation. The second study uses a detailed model to obtain a design with one pair of closely-spaced frequencies while satisfying requirements on local member frequencies and manufacturing tolerances. Two formulations are developed. Both the constraint-based and the objective function-based formulations perform reasonably well and converge to the same results. However, no feasible design solution exists which satisfies all design requirements for the choices of design variables and the upper and lower design variable values used. More design freedom is needed to achieve a fully satisfactory design. The third study is part of a redesign activity in which a detailed model is used.
NASA Astrophysics Data System (ADS)
Wang, Sen; Fang, H. S.; Jin, Z. L.; Zhao, C. J.; Zheng, L. L.
2014-12-01
Germanium (Ge) is a preferred material in the fabrication of high-performance gamma radiation detectors for spectroscopy in nuclear physics. To maintain an intrinsic region in which electrons and holes reach the contacts to produce a spectroscopic signal, germanium crystals are usually doped with lithium (Li) ions. Consequently, hyperpure germanium (HPGe) must be prepared before the doping process to eliminate the interference of unexpected impurities with the Li dopant. The zone-refining technique, widely used in the purification of ultra-pure materials, is chosen as one of the purification steps during detector-grade germanium production. In this paper, a numerical analysis is conducted of heat transfer, melt flow and impurity segregation during a multi-pass zone-refining process of germanium in a Cyberstar mirror furnace. By modifying the effective redistribution coefficients, axial segregation of various impurities is investigated. Marangoni convection is found to dominate in the melt; it affects the purification process by modifying the boundary layer thickness. Impurity distributions along the ingot are obtained under different conditions, such as pass number, zone travel rate, initial impurity concentration, segregation coefficient, and hot-zone length. Based on the analysis, an optimized design of the purification process is proposed.
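The axial segregation after a single zone pass is classically described by Pfann's relation C(x) = C0[1 - (1 - k)e^(-kx/l)], where k is the effective redistribution coefficient and l the zone length. A small sketch with our own illustrative parameter values (not those of the germanium study):

```python
import math

def single_pass_profile(c0, k, zone_len, length, n=200):
    """Pfann's solution for one zone pass:
        C(x) = C0 * (1 - (1 - k) * exp(-k * x / zone_len)),
    valid for 0 <= x <= length - zone_len (the final zone length
    solidifies by normal freezing instead and is not modeled here)."""
    span = length - zone_len
    xs = [i * span / (n - 1) for i in range(n)]
    return xs, [c0 * (1.0 - (1.0 - k) * math.exp(-k * x / zone_len)) for x in xs]

# Illustrative values: k < 1 means the impurity prefers the melt, so it is
# swept toward the ingot tail and the head is purified.
xs, cs = single_pass_profile(c0=1.0, k=0.3, zone_len=1.0, length=10.0)
```

At x = 0 the concentration drops to k·C0 (here 0.3), and it rises monotonically toward C0 along the ingot, which is why repeated passes progressively push impurities to one end.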
NASA Technical Reports Server (NTRS)
Granaas, Michael M.; Rhea, Donald C.
1989-01-01
In recent years the needs of ground-based researcher-analysts to access real-time engineering data in the form of processed information have expanded rapidly. Fortunately, the capacity to deliver that information has also expanded. The development of advanced display systems is essential to the success of a research test activity. Those developed at the National Aeronautics and Space Administration (NASA) Western Aeronautical Test Range (WATR) range from simple alphanumerics to interactive mapping and graphics. These unique display systems are designed not only to meet the basic information display requirements of the user, but also to take advantage of techniques for optimizing information display. Future ground-based display systems will rely heavily not only on new technologies, but also on interaction with the human user and the associated productivity of that interaction. The psychological abilities and limitations of the user will become even more important in defining the difference between a usable and a useful display system. This paper reviews the requirements for the development of real-time displays; the psychological aspects of design such as layout, color selection, real-time response rate, and interactivity of displays; and an analysis of some existing WATR displays.
Spacecraft formation-keeping using a closed-form orbit propagator and optimization technique
NASA Astrophysics Data System (ADS)
No, T. S.; Lee, J. G.; Cochran, J. E., Jr.
2009-08-01
In this paper, a simple method for modeling the relative orbital motion of multiple spacecraft and their formation-keeping control strategy is presented. Power series and trigonometric functions are used to express the relative orbital motion between the member spacecraft. Their coefficients are obtained using least-squares regression such that the difference between the exact numerically integrated position vector and the approximate vector obtained from the closed-form propagator is minimized. Then, the closed-form orbit propagator and an optimization technique are used to plan a series of impulsive maneuvers which maintain the formation configuration within a specified limit. As an example, formation-keeping of four spacecraft is investigated. The motion projected onto the local horizontal plane (along- and cross-track plane) is a circle with the leader satellite located at its center and the follower satellites positioned circumferentially. The radial distance between the leader and the followers, and the relative phase angles between the followers, are controlled. Results from the nonlinear simulation are presented.
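The least-squares fitting step behind such a closed-form propagator can be sketched as follows, assuming Python with NumPy. The mean motion, the basis terms, and the synthetic signal standing in for the numerically integrated relative position are our own illustrative choices.

```python
import numpy as np

n = 0.0011                       # assumed mean motion, rad/s (LEO-like)
t = np.linspace(0.0, 6000.0, 400)  # sample times over roughly one orbit, s

# Synthetic "truth" standing in for a numerically integrated relative
# position history (meters): a secular drift plus once-per-orbit oscillation.
truth = 50.0 + 0.01 * t + 120.0 * np.sin(n * t) + 80.0 * np.cos(n * t)

# Basis: low-order power series plus trigonometric terms at the orbit rate.
A = np.column_stack([np.ones_like(t), t, np.sin(n * t), np.cos(n * t)])
coef, *_ = np.linalg.lstsq(A, truth, rcond=None)   # least-squares coefficients
fit = A @ coef
rms = float(np.sqrt(np.mean((truth - fit) ** 2)))  # closed-form fit error
```

Once the coefficients are fitted, evaluating `A @ coef` at any future time replaces a numerical integration, which is what makes repeated maneuver-planning iterations cheap.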
Chang, Kah Haw; Yew, Chong Hooi; Abdullah, Ahmad Fahmi Lim
2014-07-01
Smokeless powders are low explosives and are potentially found in cases involving firearms and improvised explosive devices. Apart from inorganic compound analysis, forensic determination of the organic components of these materials appears to be a promising alternative, especially via chromatographic techniques. This work describes the optimization of a solid-phase microextraction technique using an 85 μm polyacrylate fiber, followed by gas chromatography-flame ionization detection, for smokeless powder. A multivariate experimental design was performed to optimize the extraction-influencing parameters. A 2^4 factorial first-order design revealed that sample temperature and extraction time were the major influencing parameters. A Doehlert matrix design subsequently selected 66°C and 21 min as the compromise conditions for the two predetermined parameters. This extraction technique successfully detected the headspace compounds of smokeless powders from different ammunition types and allowed for their differentiation. The novel technique allows more rapid sample preparation for chromatographic detection of smokeless powders. PMID:24611488
Gunavathi, Chellamuthu; Premalatha, Kandasamy
2014-01-01
Feature selection in cancer classification is a central area of research in bioinformatics and is used to select the informative genes from the thousands of genes on a microarray. The genes are ranked based on T-statistics, signal-to-noise ratio (SNR), and F-test values. A swarm intelligence (SI) technique then finds the informative genes among the top-m ranked genes, and these selected genes are used for classification. In this paper, shuffled frog leaping with Lévy flight (SFLLF) is proposed for feature selection. In SFLLF, the Lévy flight is included to avoid premature convergence of the shuffled frog leaping (SFL) algorithm. The SI techniques particle swarm optimization (PSO), cuckoo search (CS), SFL, and SFLLF are used for feature selection to identify informative genes for classification. The k-nearest neighbour (k-NN) technique is used to classify the samples. The proposed work is applied to 10 different benchmark datasets and examined with these SI techniques. The experimental results show that the k-NN classifier with SFLLF feature selection outperforms PSO, CS, and SFL. PMID:25157377
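The SNR-based ranking step that precedes the swarm-intelligence search can be sketched as follows. The toy two-class expression data are invented, with the second "gene" made clearly class-discriminative.

```python
import statistics

def snr_rank(samples, labels):
    """Rank features by signal-to-noise ratio |mu0 - mu1| / (sd0 + sd1)
    between the two classes; a higher SNR marks a more informative gene."""
    n_feat = len(samples[0])
    scores = []
    for j in range(n_feat):
        a = [s[j] for s, y in zip(samples, labels) if y == 0]
        b = [s[j] for s, y in zip(samples, labels) if y == 1]
        mu_a, mu_b = statistics.mean(a), statistics.mean(b)
        sd_a, sd_b = statistics.pstdev(a), statistics.pstdev(b)
        scores.append(abs(mu_a - mu_b) / (sd_a + sd_b + 1e-12))
    order = sorted(range(n_feat), key=lambda j: -scores[j])
    return order, scores

# Invented expression matrix: gene 0 is noise, gene 1 separates the classes.
X = [[0.10, 5.0], [0.20, 5.2], [0.15, 4.9],     # class 0
     [0.12, 9.8], [0.18, 10.1], [0.11, 10.0]]   # class 1
y = [0, 0, 0, 1, 1, 1]
order, scores = snr_rank(X, y)
```

In the pipeline described above, only the top-m genes from `order` would then be handed to PSO, CS, SFL, or SFLLF for the combinatorial selection step.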
Large Scale Multi-area Static/Dynamic Economic Dispatch using Nature Inspired Optimization
NASA Astrophysics Data System (ADS)
Pandit, Manjaree; Jain, Kalpana; Dubey, Hari Mohan; Singh, Rameshwar
2016-07-01
Economic dispatch (ED) ensures that the generation allocation to the power units is carried out such that the total fuel cost is minimized and all the operating equality/inequality constraints are satisfied. Classical ED does not take transmission constraints into consideration, but in present restructured power systems the tie-line limits play a very important role in deciding operational policies. ED is a dynamic problem performed on-line in the central load dispatch centre under changing load scenarios. The dynamic multi-area ED (MAED) problem is more complex due to the additional tie-line, ramp-rate and area-wise power balance constraints. Nature inspired (NI) heuristic optimization methods are gaining popularity over traditional methods for complex problems. This work presents modified particle swarm optimization (PSO) based techniques in which parameter automation is used to improve search efficiency by avoiding stagnation at a sub-optimal result. The performance of the PSO variants is validated against the traditional solver GAMS for single-area as well as multi-area economic dispatch (MAED) on three test cases of a large 140-unit standard test system with complex constraints.
Sabesan, Shivkumar; Chakravarthy, Niranjan; Tsakalis, Kostas; Pardalos, Panos; Iasemidis, Leon
2009-01-01
Epileptic seizures are manifestations of intermittent spatiotemporal transitions of the human brain from chaos to order. Measures of chaos, namely maximum Lyapunov exponents (STL(max)), from dynamical analysis of the electroencephalograms (EEGs) at critical sites of the epileptic brain, progressively converge (diverge) before (after) epileptic seizures, a phenomenon that has been called dynamical synchronization (desynchronization). This dynamical synchronization/desynchronization has already constituted the basis for the design and development of systems for long-term (tens of minutes), on-line, prospective prediction of epileptic seizures. Also, a criterion based on the changes in the time constants of the observed synchronization/desynchronization at seizure points has been used to show resetting of the epileptic brain in patients with temporal lobe epilepsy (TLE), a phenomenon that implicates a possible homeostatic role for the seizures themselves in restoring normal brain activity. In this paper, we introduce a new criterion to measure this resetting that utilizes changes in the level of observed synchronization/desynchronization. We compare the sensitivity of this resetting criterion with that of the earlier one based on the time constants of the observed synchronization/desynchronization. Next, we test the robustness of the resetting phenomena with respect to the utilized measures of EEG dynamics in a comparative study involving STL(max), a measure of phase (ϕ(max)) and a measure of energy (E), using both criteria (i.e. the level and time constants of the observed synchronization/desynchronization). The measures are estimated from intracranial electroencephalographic (iEEG) recordings with subdural and depth electrodes from two patients with focal temporal lobe epilepsy and a total of 43 seizures. Techniques from optimization theory, in particular quadratic bivalent programming, are applied to optimize the performance of the three measures in detecting preictal entrainment. It is
NASA Astrophysics Data System (ADS)
Mandal, S. K.; Singh, Harshavardhan; Mahanti, G. K.; Ghatak, Rowdra
2014-10-01
This paper presents a new technique, based on optimization tools, to design phase-only, digitally controlled, reconfigurable antenna arrays through time modulation. In the proposed approach, the on-time durations of the time-modulated elements and the static amplitudes of the array elements are perturbed in such a way that the same on-time sequence and discrete static-amplitude values for four-bit digital attenuators produce either a pencil or a flat-top beam pattern, depending on suitable discrete phase distributions of five-bit digital phase shifters. To illustrate the technique, three optimization tools, differential evolution (DE), artificial bee colony (ABC), and particle swarm optimization (PSO), are employed and their performances are compared. Numerical results for a 20-element linear array are presented.
76 FR 60494 - Patient Safety Organizations: Voluntary Relinquishment From HPI-PSO
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-29
... HUMAN SERVICES Agency for Healthcare Research and Quality Patient Safety Organizations: Voluntary... Organization (PSO). The Patient Safety and Quality Improvement Act of 2005 (Patient Safety Act), Public Law 109... Patient Safety Act authorizes the listing of PSOs, which are entities or component organizations...
Technology Transfer Automated Retrieval System (TEKTRAN)
Immunohistochemical (IHC) and immunofluorescent (IF) techniques were optimized for the detection of foot-and-mouth disease virus (FMDV) structural and non-structural proteins in frozen and paraformaldehyde-fixed paraffin embedded (PFPE) tissues of bovine and porcine origin. Immunohistochemical local...
ERIC Educational Resources Information Center
Kiers, Henk A. L.
1997-01-01
Five techniques that combine the ideals of rotation of matrices of factor loadings to optimal agreement and rotation to simple structure are compared on the basis of empirical and contrived data. Combining a generalized Procrustes analysis with Varimax on the mean of the matched loading matrices performed well on all criteria. (SLD)
NASA Astrophysics Data System (ADS)
Zhang, Zhi-Hua; Sheng, Zheng; Shi, Han-Qing
2015-01-01
Estimating refractivity profiles from radar sea clutter is a complex nonlinear optimization problem. To deal with its ill-posed nature, an inversion algorithm, particle swarm optimization with a Lévy flight (LPSO), is proposed for the refractivity-from-clutter (RFC) technique to retrieve atmospheric ducts. PSO has many advantages in solving continuous optimization problems, but in its late stages it suffers from slow convergence and low precision. We therefore integrate Lévy flights into the standard PSO algorithm to improve precision and enhance the ability to jump out of local optima. To verify the feasibility and validity of the LPSO for estimating atmospheric duct parameters with the RFC method, synthetic data and the Wallops98 experimental data are used. Numerical experiments demonstrate that the optimal solutions obtained from the hybrid algorithm are more precise and efficient. Additionally, to test the algorithm's inversion performance, the noise robustness of LPSO is analyzed; the results indicate that the LPSO algorithm has a certain degree of noise tolerance. From the experimental results it can be concluded that the LPSO algorithm provides a more precise and efficient method for near-real-time inversion of atmospheric refractivity from radar clutter.
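Lévy-flight steps of the kind used to perturb the particles are commonly generated with Mantegna's algorithm; whether this paper uses exactly this generator is our assumption. A sketch:

```python
import math
import random

def mantegna_sigma(beta):
    """Scale of the numerator Gaussian in Mantegna's algorithm for a
    Levy-stable step with tail index beta (typically beta ~ 1.5)."""
    return (math.gamma(1.0 + beta) * math.sin(math.pi * beta / 2.0)
            / (math.gamma((1.0 + beta) / 2.0)
               * beta * 2.0 ** ((beta - 1.0) / 2.0))) ** (1.0 / beta)

def levy_step(beta, rng):
    """One Levy-flight step: u / |v|^(1/beta) with u ~ N(0, sigma^2), v ~ N(0, 1).
    Most steps are small, but occasional very large jumps occur, which is what
    lets a particle escape a local optimum late in the run."""
    u = rng.gauss(0.0, mantegna_sigma(beta))
    v = rng.gauss(0.0, 1.0)
    return u / abs(v) ** (1.0 / beta)

rng = random.Random(42)
steps = [levy_step(1.5, rng) for _ in range(2000)]
```

The heavy tail is the point: unlike Gaussian perturbations, a sample of Lévy steps contains rare jumps many times larger than the typical step size.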
Ramamoorthy, Ambika; Ramachandran, Rajeswari
2016-01-01
Power grids are becoming smarter with technological development, and the benefits of a smart grid can be enhanced through the integration of renewable energy sources. In this paper, several studies are made to reconfigure a conventional network into a smart grid. Among the renewable sources, solar power takes the prominent position due to its abundant availability. The methodology presented in this paper aims at minimizing network power losses and improving voltage stability within the framework of system operation and security constraints in a transmission system. The locations and capacities of DGs have a significant impact on system losses in a transmission system. Combined nature-inspired algorithms are presented for optimal location and sizing of DGs, via a two-step optimization technique. In the first step, the best size of DG is determined through PSO metaheuristics, and the results obtained through PSO are tested for reverse power flow by a negative load approach to find possible bus locations; the optimal location is then found by the Loss Sensitivity Factor (LSF) and weak (WK) bus methods, and the results are compared. In the second step, optimal sizing of DGs is determined by PSO, GSA, and hybrid PSOGSA algorithms. Apart from optimal sizing and siting of DGs, different scenarios with varying numbers of DGs (3, 4, and 5) and PQ capacities of DGs (P alone, Q alone, and both P and Q) are also analyzed. A detailed performance analysis is carried out on the IEEE 30-bus system to demonstrate the effectiveness of the proposed methodology. PMID:27057557
Parameter Identification of Chaotic Systems by a Novel Dual Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Jiang, Yunxiang; Lau, Francis C. M.; Wang, Shiyuan; Tse, Chi K.
In this paper, we propose a dual particle swarm optimization (PSO) algorithm for parameter identification of chaotic systems. We also consider altering the search range of individual particles adaptively according to their objective function value. We consider both noiseless and noisy channels between the original system and the estimation system. Finally, we verify the effectiveness of the proposed dual PSO method by estimating the parameters of the Lorenz system using two different data acquisition schemes. Simulation results show that the proposed method always outperforms the traditional PSO algorithm.
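The fitness that each particle minimizes in this kind of parameter identification is the discrepancy between the observed trajectory and one simulated with the candidate parameters. A minimal sketch for the Lorenz system; the Euler integration, step size, and series length are our own simplifications.

```python
def lorenz_traj(sigma, rho, beta, x0=(1.0, 1.0, 1.0), dt=0.01, steps=300):
    """Euler-integrate the Lorenz system; return the x-coordinate time series."""
    x, y, z = x0
    xs = []
    for _ in range(steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        xs.append(x)
    return xs

# "Measured" data generated with the true parameters (10, 28, 8/3).
observed = lorenz_traj(10.0, 28.0, 8.0 / 3.0)

def objective(params):
    """Mean squared error between the observed series and a candidate
    simulation -- the value each PSO particle would try to minimize."""
    est = lorenz_traj(*params)
    return sum((a - b) ** 2 for a, b in zip(observed, est)) / len(observed)
```

Because chaotic trajectories diverge quickly under parameter mismatch, this objective is sharply peaked around the true parameters, which is precisely why an adaptive search-range scheme like the dual PSO helps.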
Sung, Wen-Tsai; Chiang, Yen-Chun
2012-12-01
This study examines a wireless sensor network with real-time remote identification using the Android study of things (HCIOT) platform in community healthcare. An improved particle swarm optimization (PSO) method is proposed to efficiently enhance the measurement precision of physiological multi-sensor data fusion in an Internet of Things (IOT) system. The improved PSO (IPSO) includes inertia weight factor design and shrinkage factor adjustment to improve the data fusion performance of the PSO algorithm. The Android platform is employed to build multi-physiological signal processing and timely medical care analysis. Wireless sensor network signal transmission and Internet links allow community or family members to have timely medical care network services. PMID:22492176
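The two IPSO ingredients mentioned, an inertia weight schedule and a shrinkage (constriction) factor, can be sketched as follows. The linear schedule and the Clerc-Kennedy constriction formula are the standard textbook forms, not necessarily the exact variants used in this study.

```python
import math

def linear_inertia(t, t_max, w_start=0.9, w_end=0.4):
    """Linearly decreasing inertia weight: large early values favor global
    exploration, small late values favor local exploitation."""
    return w_start - (w_start - w_end) * t / t_max

def constriction_factor(c1=2.05, c2=2.05):
    """Clerc-Kennedy shrinkage (constriction) factor,
    chi = 2 / |2 - phi - sqrt(phi^2 - 4*phi)|, defined for phi = c1 + c2 > 4.
    Multiplying the velocity update by chi guarantees bounded trajectories."""
    phi = c1 + c2
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))
```

For the common choice c1 = c2 = 2.05 the factor evaluates to about 0.7298, which is also the inertia-like coefficient seen in many "canonical" PSO parameterizations.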
A Dynamic Optimization Technique for Siting the NASA-Clark Atlanta Urban Rain Gauge Network (NCURN)
NASA Technical Reports Server (NTRS)
Shepherd, J. Marshall; Taylor, Layi
2003-01-01
NASA satellites and ground instruments have indicated that cities like Atlanta, Georgia may create or alter rainfall. Scientists speculate that the urban heat island caused by man-made surfaces in cities impacts the heat and wind patterns that form clouds and rainfall. However, more conclusive evidence is required to substantiate findings from satellites. NASA, along with scientists at Clark Atlanta University, is implementing a dense, urban rain gauge network in the metropolitan Atlanta area to support a satellite validation program called Studies of PRecipitation Anomalies from Widespread Urban Landuse (SPRAWL). SPRAWL will be conducted during the summer of 2003 to further identify and understand the impact of urban Atlanta on precipitation variability. The paper provides an overview of SPRAWL, which represents one of the more comprehensive efforts in recent years to focus exclusively on urban-impacted rainfall. The paper also introduces a novel technique for deploying rain gauges for SPRAWL. The deployment of the dense Atlanta network is unique because it utilizes Geographic Information Systems (GIS) and Decision Support Systems (DSS) to optimize deployment of the rain gauges. These computer-aided systems consider access to roads, drainage systems, tree cover, and other factors in guiding the deployment of the gauge network. GIS and DSS also provide decision-makers with additional resources and the flexibility to make informed decisions while considering numerous factors. Also, the new Atlanta network and SPRAWL provide a unique opportunity to merge the high-resolution, urban rain gauge network with satellite-derived rainfall products to understand how cities are changing rainfall patterns, and possibly climate.
The use of optimization techniques to design controlled diffusion compressor blading
NASA Technical Reports Server (NTRS)
Sanger, N. L.
1982-01-01
A method for automating compressor blade design using numerical optimization is presented and applied to the design of a controlled diffusion stator blade row. A general-purpose optimization procedure is employed, based on conjugate directions for locally unconstrained problems and on feasible directions for locally constrained problems. Coupled to the optimizer is an analysis package consisting of three analysis programs which calculate blade geometry, inviscid flow, and blade surface boundary layers. The optimizing concepts and the selection of design objective and constraints are described. The procedure for automating the design of a two-dimensional blade section is discussed, and design results are presented.
Swarm intelligence for multi-objective optimization of synthesis gas production
NASA Astrophysics Data System (ADS)
Ganesan, T.; Vasant, P.; Elamvazuthi, I.; Ku Shaari, Ku Zilati
2012-11-01
In the chemical industry, the production of methanol, ammonia, hydrogen and higher hydrocarbons requires synthesis gas (syngas). The three main syngas production methods are carbon dioxide reforming of methane (CRM), steam reforming of methane (SRM) and partial oxidation of methane (POM). In this work, multi-objective (MO) optimization of the combined CRM and POM process was carried out. The empirical model and the MO problem formulation for this combined process were obtained from previous works. The central objectives considered in this problem are methane conversion, carbon monoxide selectivity and the hydrogen-to-carbon monoxide ratio. The MO nature of the problem was tackled using the Normal Boundary Intersection (NBI) method. Two techniques, the Gravitational Search Algorithm (GSA) and Particle Swarm Optimization (PSO), were then applied in conjunction with the NBI method. The performance of the two algorithms and the quality of the solutions were gauged using two performance metrics, followed by comparative studies and analysis of the optimization results.
NASA Technical Reports Server (NTRS)
Brown, Aaron J.
2015-01-01
The International Space Station's (ISS) trajectory is coordinated and executed by the Trajectory Operations and Planning (TOPO) group at NASA's Johnson Space Center. TOPO group personnel routinely generate look-ahead trajectories for the ISS that incorporate the translation burns needed to maintain its orbit over the next three to twelve months. The burns are modeled as in-plane, horizontal burns, and must meet operational trajectory constraints imposed by both NASA and the Russian Space Agency. In generating these trajectories, TOPO personnel must determine the number of burns to model, each burn's Time of Ignition (TIG), and each burn's magnitude (i.e., deltaV) such that these constraints are met. The current process for targeting these burns is manually intensive, and does not take advantage of more modern techniques that can reduce the workload needed to find feasible burn solutions, i.e., solutions that simply meet the constraints, or provide optimal burn solutions that minimize the total deltaV while simultaneously meeting the constraints. A two-level, hybrid optimization technique is proposed to find both feasible and globally optimal burn solutions for ISS trajectory planning. For optimal solutions, the technique breaks the optimization problem into two distinct sub-problems: one for choosing the optimal number of burns and each burn's optimal TIG, and the other for computing the minimum total deltaV burn solution that satisfies the trajectory constraints. Each of the two levels uses a different optimization algorithm to solve one of the sub-problems, giving rise to a hybrid technique. Level 2, the outer level, uses a genetic algorithm to select the number of burns and each burn's TIG. Level 1, the inner level, uses the burn TIGs from Level 2 in a sequential quadratic programming (SQP) algorithm to compute a minimum total deltaV burn solution subject to the trajectory constraints. The total deltaV from Level 1 is then used as a fitness function by the genetic algorithm.
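The two-level decomposition can be illustrated with a deliberately simplified stand-in: exhaustive search over the burn count replaces the genetic algorithm, a penalized golden-section line search over a single per-burn deltaV replaces SQP, and the cost model and constraint values are entirely invented.

```python
import math

D = 10.0   # invented: total deltaV needed over the horizon, m/s
A = 3.0    # invented: max per-burn deltaV allowed by an excursion constraint, m/s

def inner_cost(n):
    """Inner level: for a fixed burn count n, minimize total deltaV n*dv over the
    per-burn deltaV dv via golden-section search, with quadratic penalties
    standing in for SQP's constraint handling."""
    def penalized(dv):
        j = n * dv                                 # objective: total deltaV
        j += 1e3 * max(0.0, D - n * dv) ** 2       # must supply at least D in total
        j += 1e3 * max(0.0, dv - A) ** 2           # per-burn excursion limit
        return j
    lo, hi = 0.0, 2.0 * A
    invphi = (math.sqrt(5.0) - 1.0) / 2.0
    for _ in range(80):                            # golden-section on a unimodal cost
        m1 = hi - invphi * (hi - lo)
        m2 = lo + invphi * (hi - lo)
        if penalized(m1) <= penalized(m2):
            hi = m2
        else:
            lo = m1
    dv = (lo + hi) / 2.0
    return penalized(dv), dv

# Outer level: enumerate candidate burn counts (the GA's role in the real scheme)
# and keep the count whose inner solution is cheapest.
best_n, (best_cost, best_dv) = min(
    ((n, inner_cost(n)) for n in range(1, 11)), key=lambda t: t[1][0])
```

Burn counts too small to respect the per-burn limit incur large penalties and are rejected by the outer level, mirroring how infeasible TIG/burn-count choices receive poor fitness in the real hybrid.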
Diesel Engine performance improvement in a 1-D engine model using Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Karra, Prashanth
2015-12-01
A particle swarm optimization (PSO) technique was implemented to improve the engine development and optimization process by simultaneously reducing emissions and improving fuel efficiency. The optimization was performed on a 4-stroke, 4-cylinder, GT-Power-based 1-D diesel engine model. To achieve the multi-objective optimization, a merit function was defined that included the parameters to be optimized: Nitrogen Oxides (NOx), Non-Methane Hydrocarbons (NMHC), Carbon Monoxide (CO), and Brake Specific Fuel Consumption (BSFC). EPA Tier 3 emissions standards for non-road diesel engines between 37 and 75 kW of output were chosen as targets for the optimization. The combustion parameters analyzed in this study include: Start of Main Injection, Start of Pilot Injection, pilot fuel quantity, swirl, and tumble. The PSO was found to be very effective in quickly arriving at a solution that met the target criteria defined in the merit function. The optimization took around 40-50 runs to find the most favourable engine operating condition under the constraints specified in the optimization. In a favourable case with a high merit function value, the NOx+NMHC and CO values were reduced to as low as 2.9 and 0.014 g/kWh, respectively. The operating conditions at this point were: 10 ATDC main SOI, -25 ATDC pilot SOI, 0.25 mg of pilot fuel, 0.45 swirl, and 0.85 tumble. These results indicate that a late main injection preceded by a close, small pilot injection is the most favourable strategy at the operating condition tested.
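A merit function of the kind described can be sketched as a single scalar score for the PSO to maximize. The reciprocal-penalty form and the normalization below are illustrative assumptions, not the paper's exact definition; the targets (4.7 g/kWh NOx+NMHC, 5.0 g/kWh CO for 37-75 kW non-road engines) follow the EPA Tier 3 limits cited in the abstract, and the BSFC reference is hypothetical:

```python
def merit(nox_nmhc, co, bsfc,
          nox_nmhc_target=4.7, co_target=5.0, bsfc_target=220.0):
    """Single score for multi-objective engine tuning: each quantity is
    normalized by its target, squared, and summed into a penalty; lower
    emissions and fuel consumption shrink the penalty and raise the merit."""
    penalty = ((nox_nmhc / nox_nmhc_target) ** 2
               + (co / co_target) ** 2
               + (bsfc / bsfc_target) ** 2)
    return 1000.0 / penalty

# The reported best case (2.9 g/kWh NOx+NMHC, 0.014 g/kWh CO) scores
# higher than an engine sitting exactly at the Tier 3 limits.
```

A PSO run would then maximize this score over the injection-timing, pilot-quantity, swirl, and tumble variables.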
A new method for ship inner shell optimization based on parametric technique
NASA Astrophysics Data System (ADS)
Yu, Yan-Yun; Lin, Yan; Li, Kai
2015-01-01
A new method for ship inner shell optimization, called the Parametric Inner Shell Optimization Method (PISOM), is presented in this paper in order to improve both the hull performance and the design efficiency of transport ships. The foundation of PISOM is the parametric Inner Shell Plate (ISP) model, a fully-associative model driven by dimensions. A method to create the parametric ISP model is proposed, including geometric primitives, geometric constraints, geometric constraint solving, etc. The standard optimization procedure for ship ISP optimization based on the parametric ISP model is put forward, and an efficient optimization approach for typical transport ships is developed based on this procedure. This approach takes the section area of the ISP and the other dominant parameters as variables, while all design requirements, such as propeller immersion, fore bottom wave slap, bridge visibility, and longitudinal strength, are made constraints. The optimization objective is to maximize the volume of the cargo oil tank/cargo hold, and a genetic algorithm is used to solve the optimization model. The method is applied to the optimization of a product oil tanker and a bulk carrier, and it proves to be effective, highly efficient, and practical for engineering use.
Direct adaptive performance optimization of subsonic transports: A periodic perturbation technique
NASA Technical Reports Server (NTRS)
Espana, Martin D.; Gilyard, Glenn
1995-01-01
Aircraft performance can be optimized at the flight condition by using available redundancy among actuators. Effective use of this potential allows improved performance beyond the limits imposed by design compromises. Optimization based on nominal models does not result in the best performance of the actual aircraft at the actual flight condition. An adaptive algorithm is proposed for optimizing performance parameters, such as speed or fuel flow, in flight, based exclusively on flight data. The algorithm is inherently insensitive to model inaccuracies and to measurement noise and biases, and it can optimize several decision variables at the same time. An adaptive constraint controller integrated into the algorithm regulates the optimization constraints, such as altitude or speed, without requiring any prior knowledge of the autopilot design. The algorithm has a modular structure which allows easy incorporation (or removal) of optimization constraints or decision variables. An important part of the contribution is the development of analytical tools enabling convergence analysis of the algorithm and the establishment of simple design rules. The fuel-flow minimization and velocity maximization modes of the algorithm are demonstrated on the NASA Dryden B-720 nonlinear flight simulator for the single- and multi-effector optimization cases.
Optimal design of plate-fin heat exchangers by particle swarm optimization
NASA Astrophysics Data System (ADS)
Yousefi, M.; Darus, A. N.
2011-12-01
This study explores the application of Particle Swarm Optimization (PSO) to the optimization of a cross-flow plate-fin heat exchanger. Minimization of the total annual cost is the optimization target. Seven design parameters, namely, heat exchanger lengths at the hot and cold sides, fin height, fin frequency, fin thickness, fin-strip length, and number of hot-side layers, are selected as optimization variables. A case study from the literature demonstrates the effectiveness of the proposed algorithm in achieving more accurate results.
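The PSO procedure used throughout these studies follows a standard form: each particle tracks its personal best, the swarm tracks a global best, and velocities blend inertia with attraction toward both. A minimal sketch (the inertia/acceleration coefficients and the sphere test function here are common illustrative choices, not taken from any of the papers above):

```python
import random

def pso(objective, bounds, n_particles=20, n_iters=100,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize `objective` over the box `bounds` with a basic particle swarm."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                    # personal best positions
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # global best
    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + cognitive pull (own best) + social pull (swarm best)
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Example: minimize the 3-D sphere function; the optimum is the origin.
best, best_val = pso(lambda x: sum(v * v for v in x), [(-5.0, 5.0)] * 3)
```

Real applications such as the heat exchanger above replace the sphere function with a cost model of the design and add constraint handling.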
NASA Technical Reports Server (NTRS)
Young, Katherine C.; Sobieszczanski-Sobieski, Jaroslaw
1988-01-01
This project has two objectives. The first is to determine whether linear programming techniques can outperform the feasible directions algorithm when handling design optimization problems with a large number of design variables and constraints. The second is to determine whether using the Kreisselmeier-Steinhauser (KS) function to replace the constraints with a single constraint will reduce the total cost of optimization. Comparisons are made using solutions obtained with linear and nonlinear methods. The results indicate that there is no cost saving in using the linear method or in using the KS function to replace constraints.
Munari, Fernanda M; Revers, Luis F; Cardone, Jacqueline M; Immich, Bruna F; Moura, Dinara J; Guecheva, Temenouga N; Bonatto, Diego; Laurino, Jomar P; Saffi, Jenifer; Brendel, Martin; Henriques, João A P
2014-01-01
By isolating putative binding partners through the two-hybrid system (THS) we further extended the characterization of the specific interstrand cross-link (ICL) repair gene PSO2 of Saccharomyces cerevisiae. Nine fusion protein products were isolated for Pso2p using THS, among them the Sak1 kinase, which interacted with the C-terminal β-CASP domain of Pso2p. Comparison of mutagen-sensitivity phenotypes of pso2Δ, sak1Δ and pso2Δsak1Δ disruptants revealed that SAK1 is necessary for complete WT-like repair. The epistatic interaction of both mutant alleles suggests that Sak1p and Pso2p act in the same pathway of controlling sensitivity to DNA-damaging agents. We also observed that Pso2p is phosphorylated by Sak1 kinase in vitro and co-immunoprecipitates with Sak1p after 8-MOP+UVA treatment. Survival data after treatment of pso2Δ, yku70Δ and yku70Δpso2Δ with nitrogen mustard, PSO2 and SAK1 with YKU70 or DNL4 single-, double- and triple mutants with 8-MOP+UVA indicated that ICL repair is independent of YKu70p and DNL4p in S. cerevisiae. Furthermore, a non-epistatic interaction was observed between MRE11, PSO2 and SAK1 genes after ICL induction, indicating that their encoded proteins act on the same substrate, but in distinct repair pathways. In contrast, an epistatic interaction was observed for PSO2 and RAD52, PSO2 and RAD50, PSO2 and XRS2 genes in 8-MOP+UVA treated exponentially growing cells. PMID:24362320
Unit Commitment by Adaptive Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Saber, Ahmed Yousuf; Senjyu, Tomonobu; Miyagi, Tsukasa; Urasaki, Naomitsu; Funabashi, Toshihisa
This paper presents an Adaptive Particle Swarm Optimization (APSO) for the Unit Commitment (UC) problem. APSO reliably and accurately tracks a continuously changing solution. By analyzing the social model of standard PSO for the UC problem of variable size and load demand, adaptive criteria are applied to the PSO parameters and to the global best particle (knowledge) based on the diversity of fitness. In the proposed method, PSO parameters are automatically adjusted using a Gaussian modification. To increase the knowledge, the global best particle is updated in each generation instead of being held fixed. To prevent the method from freezing, idle particles are reset. The real velocity is digitized (0/1) by a logistic function for the binary UC. Finally, benchmark data and methods are used to show the effectiveness of the proposed method.
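The digitization step mentioned above — mapping real velocities to 0/1 commitment decisions through a logistic function — is the standard binary-PSO rule; a sketch (the seeded generator is an illustrative choice for reproducibility):

```python
import math
import random

def binarize(velocities, seed=0):
    """Digitize real PSO velocities into 0/1 unit-commitment decisions:
    a logistic (sigmoid) function squashes each velocity into [0, 1],
    interpreted as the probability that the unit is committed (bit = 1)."""
    rng = random.Random(seed)
    bits = []
    for v in velocities:
        p_on = 1.0 / (1.0 + math.exp(-v))   # large positive v -> p_on near 1
        bits.append(1 if rng.random() < p_on else 0)
    return bits
```

Strongly positive velocities thus commit a unit almost surely, strongly negative ones almost never, and velocities near zero leave the decision to chance, which preserves exploration.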
An analytic study of near terminal area optimal sequencing and flow control techniques
NASA Technical Reports Server (NTRS)
Park, S. K.; Straeter, T. A.; Hogge, J. E.
1973-01-01
Optimal flow control and sequencing of air traffic operations in the near terminal area are discussed. The near terminal area model is based on the assumptions that the aircraft enter the terminal area along precisely controlled approach paths and that the aircraft are segregated according to their near terminal area performance. Mathematical models are developed to support the optimal path generation, sequencing, and conflict resolution problems.
NASA Astrophysics Data System (ADS)
St. Germain, Brad David
The development and optimization of liquid rocket engines is an integral part of space vehicle design, since most Earth-to-orbit launch vehicles to date have used liquid rockets as their main propulsion system. Rocket engine design tools range in fidelity from very simple conceptual-level tools to full computational fluid dynamics (CFD) simulations. The level of fidelity of interest in this research is a design tool that determines engine thrust and specific impulse as well as models the powerhead of the engine. This is the highest level of fidelity applicable to a conceptual-level design environment where faster-running analyses are desired. The optimization of liquid rocket engines using a powerhead analysis tool is a difficult problem, because it involves both continuous and discrete inputs as well as a nonlinear design space. Example continuous inputs are the main combustion chamber pressure, nozzle area ratio, engine mixture ratio, and desired thrust. Example discrete inputs are the engine cycle (staged-combustion, gas generator, etc.), fuel/oxidizer combination, and engine material choices. Nonlinear optimization problems involving both continuous and discrete inputs are referred to as Mixed-Integer Nonlinear Programming (MINLP) problems. Many methods exist in the literature for solving MINLP problems; however, none is applicable to this research. All of the existing MINLP methods require the relaxation of the discrete variables as part of their analysis procedure, meaning that the discrete choices must be evaluated at non-discrete values. This is not possible with an engine powerhead design code. Therefore, a new optimization method was developed that uses modified response surface equations to provide lower bounds of the continuous design space for each unique discrete variable combination. These lower bounds are then used to efficiently solve the optimization problem. The new optimization procedure was used to find optimal rocket engine designs
Comparison of Structural Optimization Techniques for a Nuclear Electric Space Vehicle
NASA Technical Reports Server (NTRS)
Benford, Andrew
2003-01-01
The purpose of this paper is to utilize the optimization method of genetic algorithms (GA) for truss design on a nuclear propulsion vehicle. Genetic algorithms are a guided, random search that mirrors Darwin's theory of natural selection and survival of the fittest. To verify the GA's capabilities, other traditional optimization methods were used to compare the results obtained by the GAs, first on simple 2-D structures and eventually on full-scale 3-D truss designs.
Kumari, Jayanti; Negi, Sangeeta
2014-11-01
For cost-effective production of laccase enzyme (benzenediol: oxygen oxidoreductase) from P. ostreatus MTCC 1802 through solid state fermentation, physico-chemical parameters such as temperature (20-35 degrees C), incubation period (9-17 days) and substrate (neem bark and wheat bran, in various ratios, w/w) were optimized first by a one-parameter-at-a-time approach; the optimum conditions obtained were then taken as the zero level in an evolutionary optimization factorial design technique. At the statistically optimized conditions, the yield of laccase was found to be 303.59 (± 16.8) U/gds after 13 days of incubation at 25 degrees C, taking wheat bran and neem bark as substrate at a ratio of 3:2 (w/w). The results obtained could be a baseline for industrial-scale production of laccase. PMID:25434106
Automatic Calibration of a Semi-Distributed Hydrologic Model Using Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Bekele, E. G.; Nicklow, J. W.
2005-12-01
Hydrologic simulation models need to be calibrated and validated before they are used for operational predictions. Spatially-distributed hydrologic models generally have a large number of parameters to capture the various physical characteristics of a hydrologic system. Manual calibration of such models is a very tedious and daunting task, and its success depends on the subjective assessment of a particular modeler, which includes knowledge of the basic approaches and interactions in the model. In order to alleviate these shortcomings, an automatic calibration model, which employs an evolutionary optimization technique known as the Particle Swarm Optimizer (PSO) for parameter estimation, is developed. PSO is a heuristic search algorithm inspired by the social behavior of bird flocking and fish schooling. The newly-developed calibration model is integrated with the U.S. Department of Agriculture's Soil and Water Assessment Tool (SWAT). SWAT is a physically-based, semi-distributed hydrologic model that was developed to predict the long-term impacts of land management practices on water, sediment and agricultural chemical yields in large complex watersheds with varying soils, land use, and management conditions. SWAT was calibrated for streamflow and sediment concentration. The calibration process involves parameter specification, whereby sensitive model parameters are identified, and parameter estimation. In order to reduce the number of parameters to be calibrated, parameterization was performed. The methodology is applied to a demonstration watershed known as Big Creek, located in southern Illinois. Application results show the effectiveness of the approach, and model predictions are significantly improved.
NASA Astrophysics Data System (ADS)
Rahman, Md Ashiqur; Anwar, Sohel; Izadian, Afshin
2016-03-01
In this paper, a gradient-free optimization technique, namely particle swarm optimization (PSO) algorithm, is utilized to identify specific parameters of the electrochemical model of a Lithium-Ion battery with LiCoO2 cathode chemistry. Battery electrochemical model parameters are subject to change under severe or abusive operating conditions resulting in, for example, over-discharged battery, over-charged battery, etc. It is important for a battery management system to have these parameter changes fully captured in a bank of battery models that can be used to monitor battery conditions in real time. Here the PSO methodology has been successfully applied to identify four electrochemical model parameters that exhibit significant variations under severe operating conditions: solid phase diffusion coefficient at the positive electrode (cathode), solid phase diffusion coefficient at the negative electrode (anode), intercalation/de-intercalation reaction rate at the cathode, and intercalation/de-intercalation reaction rate at the anode. The identified model parameters were used to generate the respective battery models for both healthy and degraded batteries. These models were then validated by comparing the model output voltage with the experimental output voltage for the stated operating conditions. The identified Li-Ion battery electrochemical model parameters are within reasonable accuracy as evidenced by the experimental validation results.
Controller design based on μ analysis and PSO algorithm.
Lari, Ali; Khosravi, Alireza; Rajabi, Farshad
2014-03-01
In this paper an evolutionary algorithm is employed to address the controller design problem based on μ analysis. Conventional solutions to the μ synthesis problem, such as the D-K iteration method, often lead to high-order, impractical controllers. In the proposed approach, a constrained optimization problem based on μ analysis is defined, and an evolutionary approach is then employed to solve it. The goal is to achieve a more practical controller of lower order. A benchmark system known as the two-tank system is considered to evaluate the performance of the proposed approach. Simulation results show that the proposed controller performs more effectively than the high-order H(∞) controller and produces responses close to those of the high-order D-K iteration controller, the common solution to the μ synthesis problem. PMID:24314832
Converting PSO dynamics into complex network - Initial study
NASA Astrophysics Data System (ADS)
Pluhacek, Michal; Janostik, Jakub; Senkerik, Roman; Zelinka, Ivan
2016-06-01
This paper presents an initial study on the possibility of capturing the inner dynamics of the Particle Swarm Optimization algorithm in a complex network structure. Inspired by previous works, two different approaches for creating the complex network are presented. Visualizations of the networks are presented and commented on. Possibilities for future applications of the proposed design are given in detail.
Bang, Soonam; Heo, Joon; Han, Soohee; Sohn, Hong-Gyoo
2010-01-01
Infiltration-route analysis is a military application of geospatial information system (GIS) technology. In order to find susceptible routes, optimal-path-searching algorithms are applied to minimize the cost function, which is the summed result of detection probability. The cost function was determined according to the thermal observation device (TOD) detection probability, the viewshed analysis results, and two feature layers extracted from the vector product interim terrain data. The detection probability is computed and recorded for an individual cell (50 m × 50 m), and the optimal infiltration routes are determined with A* algorithm by minimizing the summed costs on the routes from a start point to an end point. In the present study, in order to simulate the dynamic nature of a real-world problem, one thousand cost surfaces in the GIS environment were generated with randomly located TODs and randomly selected infiltration start points. Accordingly, one thousand sets of vulnerable routes for infiltration purposes could be found, which could be accumulated and presented as an infiltration vulnerability map. This application can be further utilized for both optimal infiltration routing and surveillance network design. Indeed, dynamic simulation in the GIS environment is considered to be a powerful and practical solution for optimization problems. A similar approach can be applied to the dynamic optimal routing for civil infrastructure, which requires consideration of terrain-related constraints and cost functions. PMID:22315544
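The optimal-path search described above can be sketched on a grid of per-cell costs. With a zero heuristic, A* degenerates to Dijkstra's algorithm, which remains correct for arbitrary non-negative costs; the tiny grid below is illustrative, not TOD-derived:

```python
import heapq

def min_cost_route(cost, start, goal):
    """Least-total-cost 4-connected route over a grid of per-cell costs
    (e.g. summed detection probabilities). A zero heuristic is used, so
    this A* reduces to Dijkstra's algorithm; the start cell's own cost is
    included in the total."""
    rows, cols = len(cost), len(cost[0])
    best = {start: cost[start[0]][start[1]]}   # cheapest known cost per cell
    frontier = [(best[start], start)]
    while frontier:
        d, (r, c) = heapq.heappop(frontier)
        if (r, c) == goal:
            return d
        if d > best.get((r, c), float("inf")):
            continue                            # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = nd
                    heapq.heappush(frontier, (nd, (nr, nc)))
    return float("inf")                         # goal unreachable

grid = [[1, 1, 1],
        [1, 9, 1],
        [1, 1, 1]]
# The cheapest corner-to-corner route skirts the high-cost center cell.
route_cost = min_cost_route(grid, (0, 0), (2, 2))
```

Repeating such a search over many randomly generated cost surfaces, as in the study, accumulates the resulting routes into a vulnerability map.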
Fan, Mengbao; Wang, Qi; Cao, Binghua; Ye, Bo; Sunny, Ali Imam; Tian, Guiyun
2016-01-01
Eddy current testing is quite a popular non-contact and cost-effective method for nondestructive evaluation of product quality and structural integrity. Excitation frequency is one of the key performance factors for defect characterization. In the literature, there are many interesting papers dealing with wide spectral content and optimal frequency in terms of detection sensitivity. However, research activity on frequency optimization with respect to characterization performances is lacking. In this paper, an investigation into optimum excitation frequency has been conducted to enhance surface defect classification performance. The influences of excitation frequency for a group of defects were revealed in terms of detection sensitivity, contrast between defect features, and classification accuracy using kernel principal component analysis (KPCA) and a support vector machine (SVM). It is observed that probe signals are the most sensitive on the whole for a group of defects when excitation frequency is set near the frequency at which maximum probe signals are retrieved for the largest defect. After the use of KPCA, the margins between the defect features are optimum from the perspective of the SVM, which adopts optimal hyperplanes for structure risk minimization. As a result, the best classification accuracy is obtained. The main contribution is that the influences of excitation frequency on defect characterization are interpreted, and experiment-based procedures are proposed to determine the optimal excitation frequency for a group of defects rather than a single defect with respect to optimal characterization performances. PMID:27164112
Oxidation of low calorific value gases -- Applying optimization techniques to combustor design
Gemmen, R.S.
1998-07-01
The design of an optimal air-staged combustor for the oxidation of a low calorific value gas mixture is presented. The focus is on the residual fuel emitted from the anode of a molten carbonate fuel cell. Both experimental and numerical results are presented. The simplified numerical model considers a series of plug-flow-reactor sections, with the possible addition of a perfectly-stirred reactor. The parameter used for optimization, Z, is the sum of the fuel-component molar flow rates leaving a particular combustor section. An optimized air injection profile is one that minimizes Z for a given combustor length and inlet condition. Since a mathematical proof describing the significance of global interactions remains lacking, the numerical model employs both a local optimization procedure and a global optimization procedure. The sensitivity of Z to variations in the air injection profile and inlet temperature is also examined. The results show that oxidation of the anode exhaust gas is possible with low pollutant emissions.
NASA Technical Reports Server (NTRS)
Lan, C. Edward; Ge, Fuying
1989-01-01
Control system design for general nonlinear flight dynamic models is considered through numerical simulation. The design is accomplished through a numerical optimizer coupled with analysis of the flight dynamic equations. The general flight dynamic equations are numerically integrated, and dynamic characteristics are then identified from the dynamic response. The design variables are determined iteratively by the optimizer to optimize a prescribed objective function related to the desired dynamic characteristics. The generality of the method allows nonlinear aerodynamic effects and dynamic coupling to be considered in the design process. To demonstrate the method, nonlinear simulation models of F-5A and F-16 configurations are used to design dampers that satisfy flying-qualities specifications and control systems that prevent departure. The results indicate that the present method is simple in formulation and effective in satisfying the design objectives.
Enabling a viable technique for the optimization of LNG carrier cargo operations
NASA Astrophysics Data System (ADS)
Alaba, Onakoya Rasheed; Nwaoha, T. C.; Okwu, M. O.
2016-07-01
In this study, we optimize the loading and discharging operations of the Liquefied Natural Gas (LNG) carrier. First, we identify the required precautions for LNG carrier cargo operations. Next, we prioritize these precautions using the analytic hierarchy process (AHP) and experts' judgments, in order to optimize the operational loading and discharging exercises of the LNG carrier, prevent system failure and human error, and reduce the risk of marine accidents. Thus, the objective of our study is to increase the level of safety during cargo operations.
Selectively-informed particle swarm optimization.
Gao, Yang; Du, Wenbo; Yan, Gang
2015-01-01
Particle swarm optimization (PSO) is a nature-inspired algorithm that has shown outstanding performance in solving many realistic problems. In the original PSO and most of its variants all particles are treated equally, overlooking the impact of structural heterogeneity on individual behavior. Here we employ complex networks to represent the population structure of swarms and propose a selectively-informed PSO (SIPSO), in which the particles choose different learning strategies based on their connections: a densely-connected hub particle gets full information from all of its neighbors while a non-hub particle with few connections can only follow a single yet best-performed neighbor. Extensive numerical experiments on widely-used benchmark functions show that our SIPSO algorithm remarkably outperforms the PSO and its existing variants in success rate, solution quality, and convergence speed. We also explore the evolution process from a microscopic point of view, leading to the discovery of different roles that the particles play in optimization. The hub particles guide the optimization process towards correct directions while the non-hub particles maintain the necessary population diversity, resulting in the optimum overall performance of SIPSO. These findings deepen our understanding of swarm intelligence and may shed light on the underlying mechanism of information exchange in natural swarm and flocking behaviors. PMID:25787315
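The two learning strategies in SIPSO can be sketched as an informant-selection rule applied before each velocity update. The adjacency-list representation and the fixed degree threshold below are illustrative assumptions (the paper distinguishes hubs from non-hubs on the swarm's network structure; the exact cutoff is not given in the abstract):

```python
def informants(adjacency, pbest_fitness, i, hub_degree=3):
    """SIPSO-style informant choice for particle i (minimization):
    a hub (degree >= hub_degree) is fully informed, learning from all of
    its neighbors; a non-hub follows only its single best-performing
    neighbor, i.e. the one with the lowest personal-best fitness."""
    neighbors = adjacency[i]
    if len(neighbors) >= hub_degree:
        return sorted(neighbors)                               # fully informed hub
    return [min(neighbors, key=lambda j: pbest_fitness[j])]    # best neighbor only

# A star network: particle 0 is the hub, particles 1-3 are leaves.
adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1], 3: [0]}
fit = [5.0, 1.0, 2.0, 3.0]   # personal-best fitness values (lower is better)
```

The hub's full neighbor list drives convergence toward good regions, while each leaf's single-informant rule preserves the population diversity the abstract credits to non-hub particles.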
NASA Astrophysics Data System (ADS)
Wang, Shilong; Xu, Yuru; Pang, Yongjie
2011-03-01
An underwater image has a low S/N and fuzzy edges; processing it directly with traditional methods gives unsatisfying results. Though the traditional fuzzy C-means (FCM) algorithm can sometimes divide the image into object and background, its time-consuming computation is often an obstacle. The mission of the vision system of an autonomous underwater vehicle (AUV) is to rapidly and exactly process information about objects in a complex environment, so that the AUV can use the result to execute its next task. Therefore, by using the statistical characteristics of the gray-image histogram, a fast and effective fuzzy C-means underwater image segmentation algorithm is presented. With the weighted histogram modifying the fuzzy membership, the algorithm not only cuts down on a large amount of data processing and storage compared with the traditional algorithm, speeding up segmentation, but also improves the quality of underwater image segmentation. Finally, particle swarm optimization (PSO) described by a sine function was introduced into the above algorithm, making up for the FCM algorithm's inability to reach the global optimal solution. Thus, on the one hand, it considers the global impact while achieving a locally optimal solution, and on the other hand it further greatly increases the computing speed. Experimental results indicate that the novel algorithm achieves better segmentation quality with reduced processing time per image, enhancing efficiency and satisfying the requirements of a highly effective, real-time AUV.
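The histogram-based speedup can be sketched as follows: run the standard FCM updates over the occupied gray levels, each weighted by its pixel count, instead of over every pixel. The initialization, two-cluster usage, and bimodal test histogram below are illustrative assumptions, not the paper's exact formulation (which also modifies the membership weighting and adds PSO):

```python
def histogram_fcm(hist, n_clusters=2, m=2.0, n_iters=30):
    """Fuzzy C-means on a gray-level histogram: each occupied gray level g,
    weighted by its pixel count hist[g], stands in for all pixels of that
    level, so one iteration touches at most 256 items instead of every pixel."""
    levels = [g for g in range(len(hist)) if hist[g] > 0]
    lo, hi = levels[0], levels[-1]
    # spread the initial centers across the occupied gray range
    centers = [lo + (hi - lo) * k / max(n_clusters - 1, 1)
               for k in range(n_clusters)]
    exp = 2.0 / (m - 1.0)
    for _ in range(n_iters):
        # standard FCM membership update, per gray level rather than per pixel
        u = []
        for g in levels:
            d = [abs(g - c) + 1e-9 for c in centers]
            u.append([1.0 / sum((d[k] / d[j]) ** exp for j in range(n_clusters))
                      for k in range(n_clusters)])
        # center update weighted by the histogram counts
        centers = [
            sum(hist[g] * u[i][k] ** m * g for i, g in enumerate(levels))
            / sum(hist[g] * u[i][k] ** m for i, g in enumerate(levels))
            for k in range(n_clusters)
        ]
    return sorted(centers)

# A bimodal histogram: a dark background mode and a bright object mode.
hist = [0] * 256
hist[40:60] = [100] * 20
hist[190:210] = [100] * 20
dark_center, bright_center = histogram_fcm(hist)
```

Thresholding the image midway between the two converged centers then separates object from background at histogram cost rather than per-pixel cost.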
NASA Technical Reports Server (NTRS)
Martini, William R.
1989-01-01
A FORTRAN computer code is described that could be used to design and optimize a free-displacer, free-piston Stirling engine similar to the RE-1000 engine made by Sunpower. The code contains options for specifying displacer and power piston motion or for allowing these motions to be calculated by a force balance. The engine load may be a dashpot, inertial compressor, hydraulic pump, or linear alternator. Cycle analysis may be done by isothermal analysis or adiabatic analysis. Adiabatic analysis may be done using the Martini moving gas node analysis or the Rios second-order Runge-Kutta analysis. Flow loss and heat loss equations are included, as are graphical displays of engine motions, pressures, and temperatures. Programming for optimizing up to 15 independent dimensions is included. Sample performance results are shown for both specified and unconstrained piston motions; these results are shown as generated by each of the two Martini analyses. Two sample optimization searches using specified piston motion with isothermal analysis are shown, one with three adjustable inputs and one with four. Two optimization searches for calculated piston motion, again with three and with four adjustable inputs, are also presented. The effect of leakage is evaluated, and suggestions for further work are given.
Application of Numerical Optimization Technique to Design of Forward-Curved Blades Centrifugal Fan
NASA Astrophysics Data System (ADS)
Kim, Kwang-Yong; Seo, Seoung-Jin
This paper presents a response surface optimization method using three-dimensional Navier-Stokes analysis to optimize the shape of a forward-curved-blade centrifugal fan. For the numerical analysis, Reynolds-averaged Navier-Stokes equations with the k-ɛ turbulence model are discretized with finite volume approximations. To reduce the huge computing time caused by the large number of blades in a forward-curved-blade centrifugal fan, the flow inside the fan is treated as steady by introducing impeller force models. Three geometric variables (the location of the cut-off, the radius of the cut-off, and the width of the impeller) and one operating variable (the flow rate) were selected as design variables. As a main result of the optimization, the efficiency was successfully improved, and the optimum design flow rate was found by using flow rate as one of the design variables. The optimization process was found to provide reliable designs for this kind of fan with reasonable computing time.
Application of multi-objective nonlinear optimization technique for coordinated ramp-metering
Haj Salem, Habib; Farhi, Nadir; Lebacque, Jean Patrick E-mail: nadir.frahi@ifsttar.fr
2015-03-10
This paper develops a multi-objective nonlinear optimization algorithm applied to coordinated motorway ramp metering. The multi-objective function includes two components: traffic and safety. Off-line simulation studies were performed on the A4 motorway in France, including four on-ramps.
General optimization technique for high-quality community detection in complex networks
NASA Astrophysics Data System (ADS)
Sobolevsky, Stanislav; Campari, Riccardo; Belyi, Alexander; Ratti, Carlo
2014-07-01
Recent years have witnessed the development of a large body of algorithms for community detection in complex networks. Most of them are based on the optimization of objective functions, among which modularity is the most common, though a number of alternatives have been suggested in the scientific literature. We present here an effective general search strategy for the optimization of various objective functions for community detection purposes. When applied to modularity, on both real-world and synthetic networks, our search strategy substantially outperforms the best existing algorithms in terms of final scores of the objective function. In execution time for modularity optimization, this approach also outperforms most of the alternatives in the literature, with the exception of the fastest but usually less effective greedy algorithms. Networks of up to 30,000 nodes can be analyzed in time spans ranging from minutes to a few hours on average workstations, making our approach readily applicable to tasks not limited by strict time constraints but requiring the quality of partitioning to be as high as possible. Some examples are presented to demonstrate how this quality can be affected by even relatively small changes in the modularity score, stressing the importance of optimization accuracy.
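Modularity, the objective function these search strategies most commonly optimize, is straightforward to evaluate for a given partition. A minimal sketch of the score itself (not of the paper's search strategy), on a toy graph of two triangles joined by a bridge:

```python
def modularity(adj, communities):
    """Newman modularity Q of a partition of an undirected graph.
    adj: {node: set of neighbours}; communities: list of disjoint node sets."""
    m2 = sum(len(nbrs) for nbrs in adj.values())  # 2m: every edge counted twice
    label = {v: i for i, comm in enumerate(communities) for v in comm}
    q = 0.0
    for v in adj:
        for u in adj:
            if label[u] != label[v]:
                continue
            a_uv = 1.0 if u in adj[v] else 0.0
            q += a_uv - len(adj[v]) * len(adj[u]) / m2  # A_uv - k_v*k_u / 2m
    return q / m2

# two triangles joined by a single bridge edge (2-3)
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
q_good = modularity(adj, [{0, 1, 2}, {3, 4, 5}])   # natural split
q_bad = modularity(adj, [{0, 3}, {1, 4}, {2, 5}])  # arbitrary split
```

Any optimization strategy for community detection is, at bottom, a search over partitions for the one maximizing a score like this.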
Programmable System-on-Chip (PSoC) Embedded Readout Designs for Liquid Helium Level Sensors.
Parasakthi, C; Gireesan, K; Usha Rani, R; Sheela, O K; Janawadkar, M P
2014-01-24
This article reports the development of programmable system-on-chip (PSoC)-based embedded readout designs for liquid helium level sensors using resistive liquid-vapor discriminators. The system has been built to measure the liquid helium level in a concave-bottomed, helmet-shaped, fiber-reinforced plastic cryostat for magnetoencephalography. The design incorporates three carbon resistors as cost-effective sensors, which are mounted at the desired heights inside the cryostat and are used to infer the liquid helium level from their temperature-dependent resistance. Localized electrical heating of the carbon resistors discriminates whether a resistor is immersed in liquid helium or in its vapor by exploiting the difference in heat transfer rates between the two environments. This report describes the use of a single PSoC chip to design and develop a constant current source to drive the three carbon resistors, a multiplexer to route the sensor outputs to the analog-to-digital converter (ADC), a buffer to avoid loading the sensors, an ADC for digitizing the data, and a display using liquid crystal display and light-emitting diode modules. A level sensor readout designed with a single PSoC chip enables a cost-effective and reliable measurement system. PMID:24464811
PsoP1, a milk-clotting aspartic peptidase from the basidiomycete fungus Piptoporus soloniensis.
El-Baky, Hassan Abd; Linke, Diana; Nimtz, Manfred; Berger, Ralf Günter
2011-09-28
The first enzyme of the basidiomycete Piptoporus soloniensis, a peptidase (PsoP1), was characterized after isolation from submerged cultures, purification by fractional precipitation, and preparative native polyacrylamide gel electrophoresis (PAGE). The native molecular mass of PsoP1 was 38 kDa, with an isoelectric point of 3.9. Similar to calf chymosin, PsoP1 showed maximum milk-clotting activity (MCA) at 35-40 °C and was most stable at pH 6 and below 40 °C. Complete inhibition by pepstatin A identified this enzyme as an aspartic peptidase. Electrospray ionization tandem MS yielded a partial amino acid sequence more homologous to mammalian milk-clotting peptidases than to fungal chymosin substitutes such as that of the zygomycete Mucor miehei. According to sodium dodecyl sulfate-PAGE patterns, the peptidase cleaved κ-casein in a way similar to chymosin and hydrolyzed β-casein slowly, as would be expected of an efficient chymosin substitute. PMID:21888369
NASA Astrophysics Data System (ADS)
Shamarokov, A. S.; Zorin, V. M.; Dai, Fam Kuang
2016-03-01
At the current stage of development of nuclear power engineering, high demands are placed on nuclear power plants (NPPs), including on their economic performance. Under these conditions, improving NPP quality requires, in particular, a well-founded choice of the values of the numerous controlled parameters of the technological (heat) scheme. Furthermore, the chosen values should correspond to the economic conditions of NPP operation, which usually lie a considerable time interval after the moment the parameters are chosen. This article presents a technique for optimizing the controlled parameters of the heat circuit of a steam turbine plant for the future. Its particular feature is that the results are obtained as a function of a complex parameter combining external economic and operating parameters that remain relatively stable under a changing economic environment. The article presents the results of optimizing, by this technique, the minimum temperature driving forces in the surface heaters of the heat regeneration system of a K-1200-6.8/50 steam turbine plant. For the optimization, the collector-screen high- and low-pressure heaters developed at the OAO All-Russia Research and Design Institute of Nuclear Power Machine Building were chosen, which, in the authors' opinion, have certain advantages over other types of heaters. The optimality criterion was the change in annual reduced costs for the NPP compared with the version accepted as the baseline. The influence on the solution of independent variables not included in the complex parameter was analyzed. The optimization problem was solved using the alternating-variable descent method. The obtained values of the minimum temperature driving forces can guide the design of new nuclear plants with a heat circuit similar to that considered here.
Bréchet, Thierry; Tulkens, Henry
2009-04-01
Technological choices are multi-dimensional, and thus one needs a multi-dimensional methodology to identify best available techniques. Moreover, in the presence of environmental externalities generated by productive activities, 'best' available techniques should be best from society's point of view, not only in terms of private interests. In this paper we present a modeling framework based on methodologies appropriate to these two purposes, namely linear programming and the internalization of external costs. We develop it as an operational decision tool, of interest to both firms and regulators, and we apply it to a plant in the lime industry. We show why, in this context, there is in general not a single best available technique (BAT), but rather a best combination of available techniques (BCAT) to be used. PMID:19108944
Optimization of the tungsten oxide technique for measurement of atmospheric ammonia
NASA Technical Reports Server (NTRS)
Brown, Kenneth G.
1987-01-01
Hollow tubes coated with tungstic acid have been shown to be of value in the determination of ammonia and nitric acid in ambient air. Practical application of this technique was demonstrated using an automated sampling system for in-flight collection and analysis of atmospheric samples. Due to time constraints, these previous measurements were performed on tubes that had not been well characterized in the laboratory, so the experimental precision could not be accurately estimated. Since the technique was being compared with other techniques for measuring these compounds, it became necessary to perform laboratory tests to establish its reliability. This report summarizes these laboratory experiments as they apply to the determination of ambient ammonia concentration.
Use of the particle swarm optimization algorithm for second order design of levelling networks
NASA Astrophysics Data System (ADS)
Yetkin, Mevlut; Inal, Cevat; Yigit, Cemal Ozer
2009-08-01
The weight problem in geodetic networks can be dealt with as an optimization procedure. This classic problem of geodetic network optimization is also known as second-order design. The basic principles of geodetic network optimization are reviewed. Then the particle swarm optimization (PSO) algorithm is applied to a geodetic levelling network in order to solve the second-order design problem. PSO, which is an iterative-stochastic search algorithm in swarm intelligence, emulates the collective behaviour of bird flocking, fish schooling or bee swarming, to converge probabilistically to the global optimum. Furthermore, it is a powerful method because it is easy to implement and computationally efficient. Second-order design of a geodetic levelling network using PSO yields a practically realizable solution. It is also suitable for non-linear matrix functions that are very often encountered in geodetic network optimization. The fundamentals of the method and a numeric example are given.
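The basic PSO iteration the abstract reviews, with each particle pulled toward its own best position and the swarm's best, can be sketched as a minimal global-best variant. Coefficients are common textbook values, and the sphere objective stands in for the actual second-order-design problem:

```python
import random

def pso(f, dim, n=20, iters=300, lo=-5.0, hi=5.0, seed=7):
    """Minimal global-best PSO with inertia weight; a textbook sketch,
    not the second-order-design formulation of the paper."""
    rng = random.Random(seed)
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    v = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in x]
    pval = [f(p) for p in x]
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive and social coefficients
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                v[i][d] = (w * v[i][d]
                           + c1 * rng.random() * (pbest[i][d] - x[i][d])
                           + c2 * rng.random() * (gbest[d] - x[i][d]))
                x[i][d] += v[i][d]
            fx = f(x[i])
            if fx < pval[i]:               # personal best improved
                pval[i], pbest[i] = fx, x[i][:]
                if fx < gval:              # global best improved
                    gval, gbest = fx, x[i][:]
    return gbest, gval

best, val = pso(lambda p: sum(t * t for t in p), dim=4)  # sphere benchmark
```

For a levelling network, the decision vector would hold the observation weights and the objective a criterion-matrix misfit; only the objective function changes, which is exactly why PSO is easy to apply here.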
NASA Astrophysics Data System (ADS)
Verma, Harish Kumar; Jain, Cheshta
2015-07-01
In this article, a hybrid algorithm of particle swarm optimization (PSO) with a statistical parameter (HSPSO) is proposed. On shifted multimodal problems, basic PSO has low search precision because it tends to fall into local minima. The proposed approach uses statistical characteristics to update the velocity of each particle, helping particles avoid local minima and search for the global optimum with improved convergence. The performance of the newly developed algorithm is verified on various standard multimodal, multivariable, and shifted hybrid composition benchmark problems. Further, HSPSO is compared with other variants of PSO in controlling the frequency of a hybrid renewable energy system comprising a solar system, a wind system, a diesel generator, an aqua electrolyzer, and an ultracapacitor. A significant improvement in the convergence characteristic of the HSPSO algorithm over other variants of PSO is observed in solving both the benchmark optimization problems and the renewable hybrid system problem.
Ahmed, Ashik; Al-Amin, Rasheduzzaman; Amin, Ruhul
2014-01-01
This paper proposes the design of a Static Synchronous Series Compensator (SSSC)-based damping controller to enhance the stability of a Single Machine Infinite Bus (SMIB) system by means of the Invasive Weed Optimization (IWO) technique. A conventional PI controller, which takes the rotor speed deviation as input, is used as the SSSC damping controller. The damping controller parameters are tuned with IWO using a cost function based on the time integral of absolute error. The performance of the IWO-based controller is compared to that of a Particle Swarm Optimization (PSO)-based controller. Time-domain simulation results are presented, and the performance of the controllers under different loading conditions and fault scenarios is studied to illustrate the effectiveness of the IWO-based design approach. PMID:25140288
NASA Technical Reports Server (NTRS)
Starlinger, Alois; Duffy, Stephen F.; Palko, Joseph L.
1993-01-01
New methods are presented that estimate Weibull parameters from failure data by optimizing goodness-of-fit statistics. It is assumed that the underlying population is characterized by a three-parameter Weibull distribution. Goodness-of-fit tests are based on the empirical distribution function (EDF). The EDF is a step function, calculated from the failure data, that approximates the cumulative distribution function (CDF) of the underlying population. Statistics such as the Kolmogorov-Smirnov statistic and the Anderson-Darling statistic measure the discrepancy between the EDF and the CDF. These statistics are minimized with respect to the three Weibull parameters. Because of nonlinearities encountered in the minimization process, Powell's numerical optimization procedure is applied to obtain the optimum value of the EDF statistic. Numerical examples show the applicability of these new estimation methods. The results are compared to the estimates obtained with Cooper's nonlinear regression algorithm.
NASA Technical Reports Server (NTRS)
Zang, Thomas A.; Green, Lawrence L.
1999-01-01
A challenge for the fluid dynamics community is to adapt to and exploit the trend towards greater multidisciplinary focus in research and technology. The past decade has witnessed substantial growth in the research field of Multidisciplinary Design Optimization (MDO). MDO is a methodology for the design of complex engineering systems and subsystems that coherently exploits the synergism of mutually interacting phenomena. As evidenced by the papers that appear in the biennial AIAA/USAF/NASA/ISSMO Symposia on Multidisciplinary Analysis and Optimization, the MDO technical community focuses on vehicle and system design issues. This paper provides an overview of the MDO technology field from a fluid dynamics perspective, emphasizing specific applications of recent MDO technologies that can enhance fluid dynamics research itself across the spectrum, from basic flow physics to full-configuration aerodynamics.
Zhao, Sijun; Li, Xuelian; Ra, Younkyoung; Li, Cun; Jiang, Haiyang; Li, Jiancheng; Qu, Zhina; Zhang, Suxia; He, Fangyang; Wan, Yuping; Feng, Caiwei; Zheng, Zengren; Shen, Jianzhong
2009-01-28
An immunoaffinity chromatographic method was developed using an antibody-mediated immunosorbent to selectively extract and purify 10 quinolones (marbofloxacin, norfloxacin, ciprofloxacin, lomefloxacin, danofloxacin, enrofloxacin, difloxacin, sarafloxacin, oxolinic acid, and flumequine) in chicken muscle, followed by HPLC. The operating conditions of the immunoaffinity chromatography (IAC) column were optimized, and the IAC column was successfully used for the isolation and purification of the 10 quinolones from chicken muscle tissue. The optimized immunoaffinity column sample cleanup procedure combined with HPLC coupled to fluorescence detection afforded low limits of detection (0.1 ng g(-1) for danofloxacin and 0.15 ng g(-1) for all other quinolones tested). The method was also applied to determine quinolone residues in commercial muscle samples. PMID:19119842
NASA Technical Reports Server (NTRS)
Levy, R.; Chai, K.
1978-01-01
An effective optimality-criterion computer design approach is described for member size selection to improve the frequency characteristics of moderately large structural models. It is shown that implementing the simultaneous iteration method within natural frequency structural design optimization yields a method that is more efficient at isolating the lowest natural frequency modes than the frequently applied Stodola method. Additional computational advantages are derived by reusing previously converged eigenvectors at the start of the iterations during the second and subsequent design cycles. Vectors with random components can be used at the first design cycle, which, relative to the entire computer time for the design program, results in only a moderate computational penalty.
Application of direct inverse analogy method (DIVA) and viscous design optimization techniques
NASA Technical Reports Server (NTRS)
Greff, E.; Forbrich, D.; Schwarten, H.
1991-01-01
A direct-inverse approach to the transonic design problem was presented in its initial state at the First International Conference on Inverse Design Concepts and Optimization in Engineering Sciences (ICIDES-1). Further applications of the direct inverse analogy (DIVA) method to the design of airfoils and incremental wing improvements, together with experimental verification, are reported here. First results of a new viscous design code, also of the residual-correction type with semi-inverse boundary layer coupling, are compared with DIVA; this code may enhance the accuracy of trailing-edge design for highly loaded airfoils. Finally, the capabilities of an optimization routine coupled with the two viscous full-potential solvers are investigated in comparison with the inverse method.
Singular perturbation techniques for on-line optimal flight path control
NASA Technical Reports Server (NTRS)
Calise, A. J.
1979-01-01
This paper presents a partial evaluation on the use of singular perturbation methods for developing computer algorithms for on-line optimal control of aircraft. The evaluation is based on a study of the minimum time intercept problem using F-4 aerodynamic and propulsion data as a base line. The extensions over previous work on this subject are that aircraft turning dynamics (in addition to position and energy dynamics) are included in the analysis, the algorithm is developed for a moving end point and is adaptive to unpredictable target maneuvers, and short range maneuvers that do not have a cruise leg are included. Particular attention is given to identifying those quantities that can be precomputed and stored (as a function of aircraft total energy), thus greatly reducing the onboard computational load. Numerical results are given that illustrate the nature of the optimal intercept flight paths, and an estimate is given for the execution time and storage requirements of the control algorithm.
Research of Arc Chamber Optimization Techniques Based on Flow Field and Arc Joint Simulation
NASA Astrophysics Data System (ADS)
Zhong, Jianying; Guo, Yujing; Zhang, Hao
2016-03-01
The preliminary design of an arc chamber for a 550 kV SF6 circuit breaker was proposed in accordance with the technical requirements and design experience. The structure was first optimized according to no-load flow field simulation results and verified by no-load pressure measurement. Based on load simulation results, such as the temperature field variation in the arc area and the behavior of the post-arc current under different recovery voltages, a second optimization of the design was completed, and its correctness was verified by a breaking test. The results demonstrate that the interrupting capacity of an arc chamber can be evaluated by comparing the gas medium recovery speed with the post-arc current growth rate.
Inversion of seismological data using a controlled random search global optimization technique
NASA Astrophysics Data System (ADS)
Shanker, K.; Mohan, C.; Khattri, K. N.
1991-11-01
Inversion problems in seismology deal with the estimation of the location and the time of occurrence of an earthquake from observations of the arrival time of the body waves. These problems can be regarded as non-linear optimization problems in which the objective function to be minimized is the discrepancy between the recorded arrival times and the calculated arrival times at a prescribed set of observation stations, as a function of the hypocentral parameters and the wave speed structure of the Earth. The objective of the present paper is to demonstrate the effectiveness of a controlled random search algorithm of global optimization (Shanker and Mohan, 1987; Mohan and Shanker, 1988) in solving such types of inversion problems. The performance of the algorithm has been tested on earthquake arrival time data of earthquakes recorded in the vicinity of local networks in the Garhwal Kumaon region of the Himalayas.
NASA Astrophysics Data System (ADS)
Hosseini-Bioki, M. M.; Rashidinejad, M.; Abdollahi, A.
2013-11-01
Load shedding is a crucial issue in power systems, especially in a restructured electricity environment. Market-driven load shedding in restructured power systems, associated with security as well as reliability, is investigated in this paper. A technoeconomic multi-objective function is introduced to derive an optimal load shedding scheme that maximizes social welfare. The proposed optimization problem includes maximizing GENCOs' and loads' profits as well as the loadability limit under normal and contingency conditions. Particle swarm optimization (PSO), a heuristic optimization technique, is utilized to find an optimal load shedding scheme. In a market-driven structure, generators offer their bidding blocks while dispatchable loads bid their price-responsive demands. An independent system operator (ISO) derives a market clearing price (MCP) while rescheduling the amount of generated power in both pre-contingency and post-contingency conditions. The proposed methodology is developed on a 3-bus system and then applied to a modified IEEE 30-bus test system. The obtained results show the effectiveness of the proposed methodology in implementing optimal load shedding that satisfies social welfare while maintaining the voltage stability margin (VSM), as shown through technoeconomic analyses.
Demonstration of optimization techniques for groundwater plumeremediation using iTOUGH2
Finsterle, Stefan
2004-11-11
We examined the potential use of standard optimization algorithms as implemented in the inverse modeling code iTOUGH2 (Finsterle, 1999abc) for the solution of aquifer remediation problems. Costs for the removal of dissolved or free-phase contaminants depend on aquifer properties, the chosen remediation technology, and operational parameters (such as number of wells drilled and pumping rates). A cost function must be formulated that may include actual costs and hypothetical penalty costs for incomplete cleanup; the total cost function is therefore a measure of the overall effectiveness and efficiency of the proposed remediation scenario. The cost function is then minimized by automatically adjusting certain decision or operational parameters. We evaluate the impact of these operational parameters on remediation using a three-phase, three-component flow and transport simulator, which is linked to nonlinear optimization routines. We demonstrate that the methods developed for automatic model calibration are capable of minimizing arbitrary cost functions. An example of co-injection of air and steam makes evident the need for coupling optimization routines with an accurate state-of-the-art process simulator. Simplified models are likely to miss significant system behaviors such as increased downward mobilization due to recondensation of contaminants during steam flooding, which can be partly suppressed by the co-injection of air.
A Multi-Objective Optimization Technique to Model the Pareto Front of Organic Dielectric Polymers
NASA Astrophysics Data System (ADS)
Gubernatis, J. E.; Mannodi-Kanakkithodi, A.; Ramprasad, R.; Pilania, G.; Lookman, T.
Multi-objective optimization is an area of decision making concerned with mathematical optimization problems involving more than one objective simultaneously. Here we describe two new Monte Carlo methods for this type of optimization in the context of their application to the problem of designing polymers with more desirable dielectric and optical properties. We present results of applying these Monte Carlo methods to a two-objective problem (maximizing the total static band dielectric constant and the energy gap) and a three-objective problem (maximizing the ionic and electronic contributions to the static band dielectric constant and the energy gap) for a 6-block organic polymer. Our objective functions were constructed from high-throughput DFT calculations of 4-block polymers, following the method of Sharma et al., Nature Communications 5, 4845 (2014) and Mannodi-Kanakkithodi et al., Scientific Reports, submitted. Our high-throughput and Monte Carlo methods of analysis extend to general N-block organic polymers. This work was supported in part by the LDRD DR program of the Los Alamos National Laboratory and in part by a Multidisciplinary University Research Initiative (MURI) Grant from the Office of Naval Research.
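Whatever search method generates the candidate designs, the Pareto front itself is just the non-dominated subset of the sampled objective vectors. A minimal filter for a maximization problem, with toy objective pairs invented for illustration (not the DFT data):

```python
def pareto_front(points):
    """Non-dominated subset for a maximization problem. A point p is
    dominated if some q is >= p in every objective and > in at least one."""
    front = []
    for p in points:
        dominated = any(
            all(q[i] >= p[i] for i in range(len(p))) and
            any(q[i] > p[i] for i in range(len(p)))
            for q in points)
        if not dominated:
            front.append(p)
    return front

# toy (dielectric constant, band gap) pairs for hypothetical polymers
candidates = [(3.0, 5.0), (4.0, 4.0), (5.0, 2.0), (2.0, 2.0), (3.5, 4.5)]
front = pareto_front(candidates)
```

The quadratic scan is fine for small candidate sets; sorting by one objective gives an O(n log n) variant for the two-objective case.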
A Globally Optimal Particle Tracking Technique for Stereo Imaging Velocimetry Experiments
NASA Technical Reports Server (NTRS)
McDowell, Mark
2008-01-01
An important phase of any Stereo Imaging Velocimetry experiment is particle tracking. Particle tracking seeks to identify and characterize the motion of individual particles entrained in a fluid or air experiment. We analyze a cylindrical chamber filled with water and seeded with density-matched particles. In every four-frame sequence, we identify a particle track by assigning a unique track label for each camera image. The conventional approach to particle tracking is to use an exhaustive tree-search method utilizing greedy algorithms to reduce search times. However, these types of algorithms are not optimal due to a cascade effect of incorrect decisions upon adjacent tracks. We examine the use of a guided evolutionary neural net with simulated annealing to arrive at a globally optimal assignment of tracks. The net is guided both by the minimization of the search space through the use of prior limiting assumptions about valid tracks and by a strategy which seeks to avoid high-energy intermediate states which can trap the net in a local minimum. A stochastic search algorithm is used in place of back-propagation of error to further reduce the chance of being trapped in an energy well. Global optimization is achieved by minimizing an objective function, which includes both track smoothness and particle-image utilization parameters. In this paper we describe our model and present our experimental results. We compare our results with a nonoptimizing, predictive tracker and obtain an average increase in valid track yield of 27 percent.
Optimization Algorithms in Optimal Predictions of Atomistic Properties by Kriging.
Di Pasquale, Nicodemo; Davie, Stuart J; Popelier, Paul L A
2016-04-12
The machine learning method kriging is an attractive tool to construct next-generation force fields. Kriging can accurately predict atomistic properties, which involves optimization of the so-called concentrated log-likelihood function (i.e., fitness function). The difficulty of this optimization problem quickly escalates in response to an increase in either the number of dimensions of the system considered or the size of the training set. In this article, we demonstrate and compare the use of two search algorithms, namely, particle swarm optimization (PSO) and differential evolution (DE), to rapidly obtain the maximum of this fitness function. The ability of these two algorithms to find a stationary point is assessed by using the first derivative of the fitness function. Finally, the converged position obtained by PSO and DE is refined through the limited-memory Broyden-Fletcher-Goldfarb-Shanno bounded (L-BFGS-B) algorithm, which belongs to the class of quasi-Newton algorithms. We show that both PSO and DE are able to come close to the stationary point, even in high-dimensional problems. They do so in a reasonable amount of time, compared to that with the Newton and quasi-Newton algorithms, regardless of the starting position in the search space of kriging hyperparameters. The refinement through L-BFGS-B is able to give the position of the maximum with whichever precision is desired. PMID:26930135
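The two-stage strategy, a global derivative-free search followed by L-BFGS-B polishing, can be sketched with SciPy, substituting a simple multimodal test surface for the actual kriging concentrated log-likelihood:

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

def neg_log_likelihood(theta):
    """Stand-in for the negative concentrated log-likelihood: a multimodal
    2-D test surface, NOT the actual kriging expression."""
    x, y = theta
    return ((x - 1.0) ** 2 + (y + 2.0) ** 2
            + 3.0 * np.sin(3.0 * x) ** 2 + 3.0 * np.sin(3.0 * y) ** 2)

# box constraints on the hyperparameters
bounds = [(-5.0, 5.0), (-5.0, 5.0)]

# stage 1: differential evolution explores the whole box (no local polish yet)
coarse = differential_evolution(neg_log_likelihood, bounds, seed=3, polish=False)

# stage 2: quasi-Newton L-BFGS-B refines the DE solution to high precision
refined = minimize(neg_log_likelihood, coarse.x, method="L-BFGS-B", bounds=bounds)
```

Replacing the DE stage with a PSO loop (as in the first comparison of the article) changes only stage 1; the L-BFGS-B refinement step is identical.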
Mackey-Glass noisy chaotic time series prediction by a swarm-optimized neural network
NASA Astrophysics Data System (ADS)
López-Caraballo, C. H.; Salfate, I.; Lazzús, J. A.; Rojas, P.; Rivera, M.; Palma-Chilla, L.
2016-05-01
In this study, an artificial neural network (ANN) based on particle swarm optimization (PSO) was developed for time series prediction. The hybrid ANN+PSO algorithm was applied to the noiseless Mackey-Glass chaotic time series for short-term and long-term prediction. The prediction performance is evaluated and compared with similar work in the literature, particularly for the long-term forecast. We also present properties of the dynamical system via the study of the chaotic behaviour obtained from the time series prediction. This standard hybrid ANN+PSO algorithm was then complemented with a Gaussian stochastic procedure (called stochastic hybrid ANN+PSO) to obtain a new estimator of the predictions, which also allowed us to compute prediction uncertainties for noisy Mackey-Glass chaotic time series. We study the impact of noise for three cases with white noise levels (σ_N) of 0.01, 0.05, and 0.1.
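The Mackey-Glass benchmark series itself is easy to generate: integrate the delay equation dx/dt = a·x(t−τ)/(1+x(t−τ)^10) − b·x(t) with the standard chaotic setting τ = 17. The explicit Euler step, sampling interval, and constant initial history below are illustrative choices, not necessarily those of the study:

```python
def mackey_glass(n, tau=17.0, a=0.2, b=0.1, dt=0.1, x0=1.2):
    """Generate n samples (at unit time spacing) of the Mackey-Glass delay
    series by explicit Euler integration; chaotic for tau = 17."""
    steps_per_sample = int(round(1.0 / dt))
    delay = int(round(tau / dt))
    hist = [x0] * (delay + 1)  # constant initial history x(t) = x0 for t <= 0
    series = []
    x = x0
    for step in range(n * steps_per_sample):
        x_tau = hist[0]  # x(t - tau)
        x = x + dt * (a * x_tau / (1.0 + x_tau ** 10) - b * x)
        hist.append(x)
        hist.pop(0)      # keep exactly the last delay+1 values
        if (step + 1) % steps_per_sample == 0:
            series.append(x)
    return series

ts = mackey_glass(500)
```

Adding white noise of level σ_N is then a matter of adding independent Gaussian draws of that standard deviation to each sample before training the predictor.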
Resection of Diminutive and Small Colorectal Polyps: What Is the Optimal Technique?
Lee, Jun
2016-01-01
Colorectal polyps are classified as neoplastic or non-neoplastic on the basis of malignant potential. All neoplastic polyps should be completely removed because both the incidence of colorectal cancer and the mortality of colorectal cancer patients have been found to be strongly correlated with incomplete polypectomy. The majority of colorectal polyps discovered on diagnostic colonoscopy are diminutive and small polyps; therefore, complete resection of these polyps is very important. However, there is no consensus on a method to remove diminutive and small polyps, and various techniques have been adopted based on physician preference. The aim of this article was to review the diverse techniques used to remove diminutive and small polyps and to suggest which technique will be the most effective. PMID:27450226
Adam, Asrul; Shapiai, Mohd Ibrahim; Tumari, Mohd Zaidi Mohd; Mohamad, Mohd Saberi; Mubin, Marizan
2014-01-01
Electroencephalogram (EEG) signal peak detection is widely used in clinical applications. The peak point can be detected using several approaches, including time, frequency, time-frequency, and nonlinear domains, depending on various peak features from several models. However, no study has established the importance of each peak feature in contributing to a good and generalized model. In this study, feature selection and classifier parameter estimation based on particle swarm optimization (PSO) are proposed as a framework for peak detection on EEG signals in time-domain analysis. Two versions of PSO are used in the study: (1) standard PSO and (2) random asynchronous particle swarm optimization (RA-PSO). The proposed framework seeks the combination of available features that offers good peak detection and a high classification rate in the conducted experiments. The evaluation results indicate that the accuracy of the peak detection can be improved up to 99.90% and 98.59% for training and testing, respectively, as compared to the framework without feature selection adaptation. Additionally, the proposed framework based on RA-PSO offers a better and more reliable classification rate than standard PSO, as it produces a low-variance model. PMID:25243236
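The feature-selection side of such a framework is often implemented with binary PSO, where each particle is a bit mask over the candidate features and fitness is the resulting classification rate. The sketch below is a hedged illustration on synthetic data, with a nearest-centroid classifier standing in for the paper's classifier; the data, classifier, and PSO constants are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic "peak feature" data: 10 features, only the first 3 informative
n, d = 200, 10
y = rng.integers(0, 2, n)
X = rng.normal(0, 1, (n, d))
X[:, :3] += y[:, None] * 2.0  # class signal lives in features 0-2

def accuracy(mask):
    if mask.sum() == 0:
        return 0.0
    Xs = X[:, mask.astype(bool)]
    # nearest-centroid classifier as a stand-in for the paper's classifier
    c0, c1 = Xs[y == 0].mean(0), Xs[y == 1].mean(0)
    pred = (np.linalg.norm(Xs - c1, axis=1) < np.linalg.norm(Xs - c0, axis=1)).astype(int)
    return (pred == y).mean()

# binary PSO: velocities stay real, a sigmoid maps them to bit probabilities
n_part, iters = 20, 60
vel = rng.normal(0, 1, (n_part, d))
bits = (rng.random((n_part, d)) < 0.5).astype(float)
pbest, pbest_f = bits.copy(), np.array([accuracy(b) for b in bits])
g = pbest[pbest_f.argmax()].copy()
for _ in range(iters):
    r1, r2 = rng.random((2, n_part, d))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - bits) + 1.5 * r2 * (g - bits)
    bits = (rng.random((n_part, d)) < 1 / (1 + np.exp(-vel))).astype(float)
    f = np.array([accuracy(b) for b in bits])
    better = f > pbest_f
    pbest[better], pbest_f[better] = bits[better], f[better]
    g = pbest[pbest_f.argmax()].copy()

print("selected features:", np.flatnonzero(g), "accuracy:", accuracy(g))
```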
NASA Astrophysics Data System (ADS)
Faria, Paula
2010-09-01
For the past few years, the potential of transcranial direct current stimulation (tDCS) for the treatment of several pathologies has been investigated. Knowledge of the current density distribution is an important factor in optimizing such applications of tDCS. To this end, we used the finite element method to solve the Laplace equation in a spherical head model in order to investigate the three-dimensional distribution of the current density and the variation of its intensity with depth for different electrode montages: the traditional montage with two sponge electrodes, and new montages combining sponge and EEG electrodes or using only EEG electrodes in varying numbers. The simulation results confirm the effectiveness of the mixed system, which may allow the concomitant use of tDCS and EEG recording and may help to optimize this neuronal stimulation technique. The numerical results were used in a promising application of tDCS in epilepsy.
Raja, Muhammad Asif Zahoor; Khan, Junaid Ali; Ahmad, Siraj-ul-Islam; Qureshi, Ijaz Mansoor
2012-01-01
A methodology for the solution of the Painlevé equation-I is presented using a computational intelligence technique based on neural networks and particle swarm optimization hybridized with an active set algorithm. The mathematical model of the equation is developed using a linear combination of feed-forward artificial neural networks that defines the unsupervised error of the model. This error is minimized subject to the availability of appropriate weights of the networks. The learning of the weights is carried out using a particle swarm optimization algorithm, employed as a viable global search method, hybridized with an active set algorithm for rapid local convergence. The accuracy, convergence rate, and computational complexity of the scheme are analyzed based on a large number of independent runs and their comprehensive statistical analysis. The results obtained are compared with MATHEMATICA solutions, as well as with the variational iteration method and the homotopy perturbation method. PMID:22919371
Zhang, Yu; Xu, Jing-Liang; Yuan, Zhen-Hong; Qi, Wei; Liu, Yun-Yun; He, Min-Chao
2012-01-01
Two artificial intelligence techniques, namely artificial neural network (ANN) and genetic algorithm (GA), were combined to be used as a tool for optimizing the covalent immobilization of cellulase on a smart polymer, Eudragit L-100. 1-Ethyl-3-(3-dimethylaminopropyl) carbodiimide (EDC) concentration, N-hydroxysuccinimide (NHS) concentration and coupling time were taken as independent variables, and immobilization efficiency was taken as the response. The data of the central composite design were used to train the ANN by the back-propagation algorithm, and the result showed that the trained ANN fitted the data accurately (correlation coefficient R(2) = 0.99). A maximum immobilization efficiency of 88.76% was then found by the genetic algorithm at an EDC concentration of 0.44%, an NHS concentration of 0.37% and a coupling time of 2.22 h, where the experimental value was 87.97 ± 6.45%. The application of ANN-based optimization by GA was thus quite successful. PMID:22942683
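An ANN-GA pipeline of this kind trains a surrogate model on designed experiments and then lets the GA search the surrogate for the best inputs. As a sketch, the snippet below replaces the trained ANN with an assumed smooth response surface peaked at the optimum reported in the abstract, and runs a minimal real-coded GA over the three factors; everything beyond the reported optimum point is illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# toy response surface standing in for the trained ANN: a smooth peak at
# (EDC = 0.44 %, NHS = 0.37 %, time = 2.22 h), the optimum from the abstract
opt = np.array([0.44, 0.37, 2.22])
lo, hi = np.array([0.0, 0.0, 0.5]), np.array([1.0, 1.0, 4.0])

def efficiency(x):
    return 88.76 * np.exp(-8 * np.sum(((x - opt) / (hi - lo)) ** 2, axis=-1))

# minimal real-coded GA: tournament selection, blend crossover, mutation, elitism
pop = lo + rng.random((40, 3)) * (hi - lo)
for _ in range(80):
    fit = efficiency(pop)
    idx = rng.integers(0, len(pop), (len(pop), 2))
    parents = pop[np.where(fit[idx[:, 0]] > fit[idx[:, 1]], idx[:, 0], idx[:, 1])]
    alpha = rng.random((len(pop), 1))
    children = alpha * parents + (1 - alpha) * parents[::-1]   # blend crossover
    children += rng.normal(0, 0.03, children.shape)            # Gaussian mutation
    children = np.clip(children, lo, hi)
    children[0] = pop[fit.argmax()]                            # keep the elite
    pop = children

best = pop[efficiency(pop).argmax()]
print("GA optimum:", best, "efficiency:", efficiency(best))
```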
NASA Technical Reports Server (NTRS)
Kuhlman, J.
1979-01-01
A theoretical method was developed for determining the optimum span load distribution for minimum induced drag for subsonic nonplanar configurations. The undistorted wing wake is assumed to have a piecewise linear variation of shed vortex sheet strength, resulting in a quadratic variation of bound circulation and span load. The optimum loading is obtained through a direct technique, whereby derivatives of the drag expression are calculated analytically in terms of the unknown wake vortex sheet strengths. The computed optima agree well with available exact solutions for minimum induced drag.
Matros, Evan; Albornoz, Claudia R; Rensberger, Michael; Weimer, Katherine; Garfein, Evan S
2014-06-01
There is increased clinical use of computer-assisted design (CAD) and computer-assisted modeling (CAM) for osseous flap reconstruction, particularly in the head and neck region. Limited information exists about methods to optimize the application of this new technology and about cases in which it may be advantageous over existing methods of osseous flap shaping. A consecutive series of osseous reconstructions planned with CAD/CAM over the past 5 years was analyzed. Conceptual considerations and refinements in the CAD/CAM process were evaluated. A total of 48 reconstructions were performed using CAD/CAM. The majority of cases were performed for head and neck tumor reconstruction or related complications, whereas the remainder (4%) were performed for penetrating trauma. Defect location was the mandible (85%), maxilla (12.5%), and pelvis (2%). Reconstruction was performed immediately in 73% of the cases and delayed in 27% of the cases. The mean number of osseous flap bone segments used in reconstruction was 2.41. Areas of optimization include the following: mandible cutting guide placement, osteotomy creation, alternative planning, and saw blade optimization. Identified benefits of CAD/CAM over current techniques include the following: delayed timing, anterior mandible defects, specimen distortion, osteotomy creation in three dimensions, osteotomy junction overlap, plate adaptation, and maxillary reconstruction. Experience with CAD/CAM for osseous reconstruction has identified tools for technique optimization and cases where this technology may prove beneficial over existing methods. Knowledge of these facts may contribute to improved use and mainstream adoption of CAD/CAM virtual surgical planning by reconstructive surgeons. PMID:24323480
Modenese, Luca; Ceseracciu, Elena; Reggiani, Monica; Lloyd, David G
2016-01-25
A challenging aspect of subject-specific musculoskeletal modeling is the estimation of muscle parameters, especially optimal fiber length and tendon slack length. In this study, the method for scaling musculotendon parameters published by Winby et al. (2008), J. Biomech. 41, 1682-1688, has been reformulated, generalized and applied to two cases of practical interest: 1) the adjustment of muscle parameters in the entire lower limb following linear scaling of a generic model and 2) their estimation "from scratch" in a subject-specific model of the hip joint created from medical images. In the first case, the procedure maintained the muscles' operating range between models with mean errors below 2.3% of the reference model normalized fiber length value. In the second case, a subject-specific model of the hip joint was created using segmented bone geometries and muscle volumes publicly available for a cadaveric specimen from the Living Human Digital Library (LHDL). Estimated optimal fiber lengths were found to be consistent with those of a previously published dataset for all 27 considered muscle bundles except gracilis. However, computed tendon slack lengths differed from tendon lengths measured in the LHDL cadaver, suggesting that tendon slack length should be determined via optimization in subject-specific applications. Overall, the presented methodology could adjust the parameters of a scaled model and enabled the estimation of muscle parameters in newly created subject-specific models. All data used in the analyses are in the public domain, and a tool implementing the algorithm is available at https://simtk.org/home/opt_muscle_par. PMID:26776930
Bichacho, N
1998-10-01
The role of prosthetic restorations in the final appearance of the surrounding soft tissues has long been recognized. Innovative prosthodontic concepts as described should be used to enhance the biologic as well as the esthetic data of the supporting tissues, in natural teeth and implants alike. Combined dental treatment modalities of different kinds (i.e., orthodontics, periodontal treatment) are often required for optimal results. Meticulous care and attention to the delicate soft tissues should be given throughout all phases of the treatment, with a view to achieving a functional, healthy, and esthetic oral environment. PMID:9891656
NASA Astrophysics Data System (ADS)
Nasshorudin, Dalila; Ahmad, Muhammad Syarhabil; Mamat, Awang Soh; Rosli, Suraya
2015-05-01
The solventless extraction process of Chromalaena odorata using reduced pressure and temperature has been investigated. The percentage yield of essential oil produced was calculated for each experimental condition. The effects of temperature and extraction time on the yield were investigated using the Response Surface Methodology (RSM) through a Central Composite Design (CCD). Both temperature and extraction time were found to have a significant effect on the yield of extract. A final essential oil yield of 0.095% was obtained under the optimized conditions: a temperature of 80 °C and an extraction time of 8 hours.
Toward a systematic design theory for silicon solar cells using optimization techniques
NASA Technical Reports Server (NTRS)
Misiakos, K.; Lindholm, F. A.
1986-01-01
This work is a first detailed attempt to systematize the design of silicon solar cells. Design principles follow from three theorems. Although the results hold only under low injection conditions in base and emitter regions, they hold for arbitrary doping profiles and include the effects of drift fields, high/low junctions and heavy doping concentrations of donor or acceptor atoms. Several optimal designs are derived from the theorems, one of which involves a three-dimensional morphology in the emitter region. The theorems are derived from a nonlinear differential equation of the Riccati form, the dependent variable of which is a normalized recombination particle current.
Information System Design Methodology Based on PERT/CPM Networking and Optimization Techniques.
ERIC Educational Resources Information Center
Bose, Anindya
The dissertation attempts to demonstrate that the program evaluation and review technique (PERT)/Critical Path Method (CPM) or some modified version thereof can be developed into an information system design methodology. The methodology utilizes PERT/CPM which isolates the basic functional units of a system and sets them in a dynamic time/cost…
Technology Transfer Automated Retrieval System (TEKTRAN)
This study evaluated the impact of gas concentration and wind sensor locations on the accuracy of the backward Lagrangian stochastic inverse-dispersion technique (bLS) for measuring gas emission rates from a typical lagoon environment. Path-integrated concentrations (PICs) and 3-dimensional (3D) wi...
Ruiz-Cruz, Riemann; Sanchez, Edgar N; Ornelas-Tellez, Fernando; Loukianov, Alexander G; Harley, Ronald G
2013-12-01
In this paper, the authors propose a particle swarm optimization (PSO) for a discrete-time inverse optimal control scheme of a doubly fed induction generator (DFIG). For the inverse optimal scheme, a control Lyapunov function (CLF) is proposed to obtain an inverse optimal control law in order to achieve trajectory tracking. A posteriori, it is established that this control law minimizes a meaningful cost function. The CLFs depend on matrix selection in order to achieve the control objectives; this matrix is determined by two mechanisms: initially, fixed parameters are proposed for this matrix by a trial-and-error method and then by using the PSO algorithm. The inverse optimal control scheme is illustrated via simulations for the DFIG, including the comparison between both mechanisms. PMID:24273145
Liu, Xue-song; Sun, Fen-fang; Jin, Ye; Wu, Yong-jiang; Gu, Zhi-xin; Zhu, Li; Yan, Dong-lan
2015-12-01
A novel method was developed for the rapid determination of multiple indicators in Corni Fructus by means of near infrared (NIR) spectroscopy. A particle swarm optimization (PSO) based least squares support vector machine (LS-SVM) was investigated to raise the level of quality control. The calibration models of moisture, extractum, morroniside and loganin were established using the PSO-LS-SVM algorithm. The performance of the PSO-LS-SVM models was compared with partial least squares regression (PLSR) and back propagation artificial neural network (BP-ANN). The calibration and validation results of PSO-LS-SVM were superior to those of both PLSR and BP-ANN. For the PSO-LS-SVM models, the correlation coefficients (r) of the calibrations were all above 0.942. The best prediction results were also achieved by the PSO-LS-SVM models, with RMSEP (root mean square error of prediction) and RSEP (relative standard error of prediction) below 1.176 and 15.5%, respectively. The results suggest that the PSO-LS-SVM algorithm gives good model performance and high prediction accuracy. NIR spectroscopy has potential value for the rapid determination of multiple indicators in Corni Fructus. PMID:27169290
Optimal performance receiving ranging system model and realization using TCSPC technique
NASA Astrophysics Data System (ADS)
Shen, Shanshan; Chen, Qian; He, Weiji; Zhou, Pin; Gu, Guohua
2015-10-01
In this paper, the short-dead-time detection probability is introduced into the linear SNR model of a fixed-frequency multipulse accumulation detection system. Monte Carlo simulation is consistent with the theoretical simulation, which shows that with increasing laser power the SNR first grows quickly and then levels off. A model of the range standard deviation is then established, showing first that a larger dead time yields better range precision, consistent with B. I. Cantor's research. Secondly, Monte Carlo and theoretical simulations both indicate that with increasing laser power, range precision first improves and then levels off. Experimental results show that, under a high background noise of 500,000 counts/s, the maximum SNR is obtained with an emitted laser power of about 400 µW at a 50 ms integration time. Range precision reaches the optimal level at 6 mm. The experimental precision is consistently worse than the Monte Carlo simulated results. This arises from the fact that the histograms' jitter is not taken into account in the simulation, whereas the experimental system has approximately 500 ps of jitter. The system jitter causes larger fluctuation of the time stamp values, leading to worse range precision. In summary, theory and experiment both demonstrate that optimal receiving performance in SNR and precision is achieved with this multipulse accumulation detection system.
NASA Technical Reports Server (NTRS)
Elliott, Kenny B.; Ugoletti, Roberto; Sulla, Jeff
1992-01-01
The evolution and optimization of a real-time digital control system is presented. The control system is part of a testbed used to perform focused technology research on the interactions of spacecraft platform and instrument controllers with the flexible-body dynamics of the platform and platform appendages. The control system consists of Computer Automated Measurement and Control (CAMAC) standard data acquisition equipment interfaced to a workstation computer. The goal of this work is to optimize the control system's performance to support controls research using controllers with up to 50 states and frame rates above 200 Hz. The original system could support a 16-state controller operating at a rate of 150 Hz. By using simple yet effective software improvements, Input/Output (I/O) latencies and contention problems are reduced or eliminated in the control system. The final configuration can support a 16-state controller operating at 475 Hz. Effectively the control system's performance was increased by a factor of 3.
All-automatic swimmer tracking system based on an optimized scaled composite JTC technique
NASA Astrophysics Data System (ADS)
Benarab, D.; Napoléon, T.; Alfalou, A.; Verney, A.; Hellard, P.
2016-04-01
In this paper, an all-automatic optimized JTC-based swimmer tracking system is proposed and evaluated on a real video database drawn from national and international swimming competitions (French National Championship, Limoges 2015; FINA World Championships, Barcelona 2013 and Kazan 2015). First, we propose to calibrate the swimming pool using the DLT algorithm (Direct Linear Transformation). DLT calculates the homography matrix given a sufficient set of correspondence points between pixel and metric coordinates; i.e. it takes into account the dimensions of the swimming pool and the type of the swim. Once the swimming pool is calibrated, we extract the lane. Then we apply a motion detection approach to detect the swimmer globally in this lane. Next, we apply our optimized Scaled Composite JTC, which consists of creating an adapted input plane that contains the predicted region and the head reference image. The latter is generated using a composite filter of fin images chosen from the database. The dimension of this reference is scaled according to the ratio between the head's dimension and the width of the swimming lane. Finally, applying the proposed approach improves the performance of our previous tracking method by adding a detection module, achieving an all-automatic swimmer tracking system.
Nguyen, Bao D; Meng, Xi; Donovan, Kevin J; Shaka, A J
2007-02-01
Excitation sculpting, a general method to suppress unwanted magnetization while controlling the phase of the retained signal [T.L. Hwang, A.J. Shaka, Water suppression that works. Excitation sculpting using arbitrary waveforms and pulsed field gradients, J. Magn. Reson. Ser. A 112 (1995) 275-279], is a highly effective method of water suppression for both biological and small-molecule NMR spectroscopy. In excitation sculpting, a double pulsed field gradient spin echo forms the core of the sequence; by pairing a low-power soft 180° (-x) pulse with a high-power 180° (x) pulse, all resonances except the water are flipped and retained, while the water peak is attenuated. By replacing the hard 180° pulse in the double echo with a new phase-alternating composite pulse, broadband and adjustable excitation of large bandwidths with simultaneously high water suppression is obtained. This "Solvent-Optimized Gradient-Gradient Spectroscopy" (SOGGY) sequence is a reliable workhorse method for a wide range of practical situations in NMR spectroscopy, optimizing both solute sensitivity and water suppression. PMID:17126049
NASA Astrophysics Data System (ADS)
Chen, Xia; Hu, Hong-li; Liu, Fei; Gao, Xiang Xiang
2011-10-01
The task of image reconstruction for an electrical capacitance tomography (ECT) system is to determine the permittivity distribution, and hence the phase distribution, in a pipeline by measuring the electrical capacitances between sets of electrodes placed around its periphery. In view of the nonlinear relationship between the permittivity distribution and the capacitances, and the limited number of independent capacitance measurements, image reconstruction for ECT is a nonlinear and ill-posed inverse problem. To solve this problem, a new image reconstruction method for ECT based on a least-squares support vector machine (LS-SVM) combined with a self-adaptive particle swarm optimization (PSO) algorithm is presented. Regarded as a special small-sample theory, the SVM avoids the issues that appear in artificial neural network methods, such as difficult determination of a network structure, over-learning and under-learning. However, the SVM performs differently with different parameters. As a relatively new population-based evolutionary optimization technique, PSO is adopted for effective parameter selection, offering global optimization and rapid convergence. This paper builds a 12-electrode ECT system and a pneumatic conveying platform to verify the image reconstruction algorithm. Experimental results indicate that the algorithm has good generalization ability and high image reconstruction quality.
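The parameter-selection step (PSO choosing the SVM hyperparameters) can be illustrated compactly. LS-SVM regression reduces to solving a ridge-style linear system in the kernel matrix, so the sketch below tunes (γ, σ) of a bias-free LS-SVM on a toy 1-D problem by minimizing validation error with a plain global-best PSO; the toy data, the log-space search box, and the omission of the LS-SVM bias term are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# toy 1-D regression standing in for the capacitance-to-image mapping
Xtr = np.linspace(0, 6, 60)[:, None]
ytr = np.sin(Xtr[:, 0]) + rng.normal(0, 0.1, 60)
Xva = np.linspace(0.05, 5.95, 40)[:, None]
yva = np.sin(Xva[:, 0])

def rbf(A, B, sigma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_val_mse(log_gamma, log_sigma):
    gamma, sigma = 10.0 ** log_gamma, 10.0 ** log_sigma
    K = rbf(Xtr, Xtr, sigma)
    # bias-free LS-SVM dual solution: (K + I/gamma) alpha = y
    alpha = np.linalg.solve(K + np.eye(len(Xtr)) / gamma, ytr)
    pred = rbf(Xva, Xtr, sigma) @ alpha
    return np.mean((pred - yva) ** 2)

# plain global-best PSO over (log10 gamma, log10 sigma)
n_part, iters = 15, 40
pos = rng.uniform([-2, -2], [3, 1], (n_part, 2))
vel = np.zeros((n_part, 2))
pbest = pos.copy()
pbest_f = np.array([lssvm_val_mse(*p) for p in pos])
g = pbest[pbest_f.argmin()].copy()
for _ in range(iters):
    r1, r2 = rng.random((2, n_part, 2))
    vel = 0.72 * vel + 1.49 * r1 * (pbest - pos) + 1.49 * r2 * (g - pos)
    pos = np.clip(pos + vel, [-2, -2], [3, 1])
    f = np.array([lssvm_val_mse(*p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    g = pbest[pbest_f.argmin()].copy()

print("best (log10 gamma, log10 sigma):", g, "val MSE:", lssvm_val_mse(*g))
```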
Wan, Li; Huang, Jun
2014-01-01
The PSO4 core complex is composed of PSO4/PRP19/SNEV, CDC5L, PLRG1, and BCAS2/SPF27. Besides its well defined functions in pre-mRNA splicing, the PSO4 complex has been shown recently to participate in the DNA damage response. However, the specific role for the PSO4 complex in the DNA damage response pathways is still not clear. Here we show that both the BCAS2 and PSO4 subunits of the PSO4 complex directly interact and colocalize with replication protein A (RPA). Depletion of BCAS2 or PSO4 impairs the recruitment of ATR-interacting protein (ATRIP) to DNA damage sites and compromises CHK1 activation and RPA2 phosphorylation. Moreover, we demonstrate that both the RPA1-binding ability of BCAS2 and the E3 ligase activity of PSO4 are required for efficient accumulation of ATRIP at DNA damage sites and the subsequent CHK1 activation and RPA2 phosphorylation. Our results suggest that the PSO4 complex functionally interacts with RPA and plays an important role in the DNA damage response. PMID:24443570
NASA Astrophysics Data System (ADS)
Xi, Lu; Xin, Zhou; Sheng, Yuan; Xiao-feng, Li; Pei, Lu; Yao-yao, Chen
2009-09-01
A new method based on statistical hypothesis detection for information hidden using the double random-phase encoding technique is introduced. According to this method, a series of windows are opened on the lowest bit-plane of image, and an exclusive OR (XOR) operation is performed between different pixels in every window. The results of XOR operation are then analyzed. Using this method, we can judge whether an image contains secret information encrypted by the double random-phase encoding technique. The result of the judgment may be influenced by two parameters, namely the size of the window and the threshold value. A further study is also made to determine the optimal parameters.
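The detection statistic described above can be prototyped directly: open windows on the lowest bit-plane, XOR neighbouring pixels, and compare the fraction of ones with the 0.5 expected for random (encrypted) bits. The sketch below uses a synthetic gradient as the "natural" image and uniform noise as a stand-in for a double random-phase encoded image; the window size and both images are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def xor_lsb_statistic(img, win=8):
    # fraction of 1s after XOR-ing horizontally adjacent pixels of the
    # lowest bit-plane, averaged over non-overlapping win x win windows
    lsb = (img & 1).astype(np.uint8)
    x = lsb[:, :-1] ^ lsb[:, 1:]
    vals = [x[i:i + win, j:j + win].mean()
            for i in range(0, x.shape[0] - win + 1, win)
            for j in range(0, x.shape[1] - win + 1, win)]
    return float(np.mean(vals))

yy, xx = np.mgrid[0:128, 0:128]
natural = ((xx + yy) // 4).astype(np.uint8)                   # smooth diagonal ramp
encrypted = rng.integers(0, 256, (128, 128), dtype=np.uint8)  # random-phase stand-in

s_nat = xor_lsb_statistic(natural)
s_enc = xor_lsb_statistic(encrypted)
# hypothesis-test idea: a statistic close to 0.5 (independent random bits)
# flags an image carrying double random-phase encoded content
print("natural:", s_nat, "encrypted:", s_enc)
```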
Lee, H.-R.; DaSilva, L.; Haddad, L.; Trebes, J.; Yeh, Y.; Ford, G.
1995-06-14
The Iterative Optimizing Quantization Technique (IOQT) is a novel method in reconstructing three-dimensional images from a limited number of two-dimensional projections. IOQT reduces the artifacts and image distortion due to a limited number of projections and limited range of viewing angles. IOQT, which reduces the number of projections required for reconstruction, can simplify the complexity of an experimental set-up and support the development of techniques to nondestructively image microstructures of materials without the problems of chemical changes or damage. In this paper, we will demonstrate the capability of IOQT in reconstruction of an image from four projections. The advantage of IOQT in using a limited number of arbitrary-angled projections and the possibility of modification of IOQT are also mentioned.
Li, Xue-feng; Li, Yun-xiao; Xu, Zhen-qiu; Meng, Jin; Yan, Ming; Jin, Rui-ting; Xiao, Wei
2015-08-01
To determine the optimum process conditions for the dry granulating technique of Qibai Pingfei granule, the granule excipient type, rolling wheel speed and pressure, and feeding speed were studied. Taking the one-pass shaping rate, moisture absorption and dissolubility as indices, the type and amount of granule excipient were determined. In addition, taking the one-pass shaping rate as the index, the rolling wheel speed and pressure and the feeding speed were investigated through single-factor tests and response surface methodology. The optimum parameters were as follows: lactose as excipient, dry extract powder to excipient at 1:2, rolling wheel speed and pressure at 10.9 Hz and 6.4 MPa, and feeding speed at 7.2 Hz. After validation with three batches of pilot-scale production, the optimized processing parameters for the dry granulating technique of Qibai Pingfei granule proved reasonable and feasible, and can provide a reliable basis for production. PMID:26677695
Naraghi, Mohsen; Tabatabaii Mohammadi, Sayed Ziaeddin; Sontou, Alain Fabrice; Farajzadeh Deroee, Armin; Boroojerdi, Masoud
2012-05-01
Endonasal endoscopic dacryocystorhinostomy (EEDCR) has been popularized as a minimally invasive technique. Although preliminary reports revealed less success in comparison with external approaches, recent endonasal endoscopic surgeries on various types of DCR have preserved the advantages of this technique while diminishing the failures. We describe our experience with EEDCR, including its main advantages and disadvantages. One hundred consecutive cases of lachrymal problems underwent EEDCR utilizing simple punch removal of bone instead of powered instrumentation or lasers. The medial aspect of the sac was removed in all patients, while preserving the normal mucosa around the sac. The 100 cases of EEDCR were performed on 81 patients, with 19 bilateral procedures. Nine procedures were performed under local anesthesia. Based on a mean 14-month follow-up, 95 cases were free of symptoms, a 95% success rate. The punch technique avoids the expense of powered or laser instrumentation with comparable results. It seems that preserving normal tissues and creating a patent rhinostomy with the least surgical trauma and less subsequent scarring plays the most important role in achieving desirable results. PMID:22065173
Cakar, Tarik; Koker, Rasit
2015-01-01
A particle swarm optimization (PSO) algorithm has been used to solve the single machine total weighted tardiness problem (SMTWT) with unequal release dates. To find the best solutions, three different solution approaches have been combined. In the sub-hybrid system, genetic algorithms (GA) and simulated annealing (SA) work together: at any stage, GA obtains a solution, which is taken by SA as an initial solution; when SA finds a better solution, it stops and returns this solution to GA. After GA finishes, the obtained solution is given to PSO, which searches for a still better solution and then sends it back to GA. The three solution systems thus work together. The neurohybrid system uses PSO as the main optimizer, with SA and GA as local search tools. At each stage, the local optimizers perform exploitation around the best particle. In addition to the local search tools, a neurodominance rule (NDR) has been used to improve the final solution of the hybrid-PSO system; NDR checks sequential jobs according to the total weighted tardiness factor. The whole system is named the neurohybrid-PSO solution system. PMID:26221134
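The PSO backbone of such a system is easy to sketch with random-key encoding: each particle is a real vector whose argsort gives a job sequence, and fitness is the total weighted tardiness under release dates. The snippet below shows the plain PSO only (the paper hybridizes it with GA, SA, and the neurodominance rule); the instance data and PSO constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# random SMTWT instance with unequal release dates
n_jobs = 12
proc = rng.integers(2, 10, n_jobs)
release = rng.integers(0, 30, n_jobs)
due = release + proc + rng.integers(0, 15, n_jobs)
weight = rng.integers(1, 5, n_jobs)

def twt(keys):
    # random-key decoding: the job order is the argsort of the particle's keys
    order = np.argsort(keys)
    t, total = 0, 0
    for j in order:
        t = max(t, release[j]) + proc[j]          # wait for release, then process
        total += weight[j] * max(0, t - due[j])   # weighted tardiness of job j
    return total

# plain global-best PSO over random keys
n_part, iters = 30, 150
pos = rng.random((n_part, n_jobs))
vel = np.zeros((n_part, n_jobs))
pbest, pbest_f = pos.copy(), np.array([twt(p) for p in pos])
g = pbest[pbest_f.argmin()].copy()
for _ in range(iters):
    r1, r2 = rng.random((2, n_part, n_jobs))
    vel = 0.72 * vel + 1.49 * r1 * (pbest - pos) + 1.49 * r2 * (g - pos)
    pos = pos + vel
    f = np.array([twt(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    g = pbest[pbest_f.argmin()].copy()

print("best total weighted tardiness:", twt(g))
```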
Chou, Sheng-Kai; Jiau, Ming-Kai; Huang, Shih-Chia
2016-08-01
The growing ubiquity of vehicles has led to increased concerns about environmental issues. These concerns can be mitigated by implementing an effective carpool service. In an intelligent carpool system, an automated service process assists carpool participants in determining routes and matches. It is a discrete optimization problem that involves a system-wide condition as well as participants' expectations. In this paper, we solve the carpool service problem (CSP) to provide satisfactory ride matches. To this end, we developed a particle swarm carpool algorithm based on stochastic set-based particle swarm optimization (PSO). Our method introduces stochastic coding to augment traditional particles, and uses three terms to represent a particle: 1) particle position; 2) particle view; and 3) particle velocity. In this way, the set-based PSO (S-PSO) can be realized by local exploration. In the simulations and experiments, two kinds of discrete PSO (S-PSO and binary PSO (BPSO)) and a genetic algorithm (GA) are compared and examined using benchmarks that simulate a real-world metropolis. We observed that the S-PSO thoroughly outperformed the BPSO and the GA. Moreover, our method yielded the best result in a statistical test and successfully obtained numerical results meeting the optimization objectives of the CSP. PMID:26890944
NASA Astrophysics Data System (ADS)
de Pascale, P.; Vasile, M.; Casotto, S.
The design of interplanetary trajectories requires the solution of an optimization problem, which has traditionally been solved by resorting to various local optimization techniques. All such approaches, regardless of the specific method employed (direct or indirect), require an initial guess, which deeply influences convergence to the optimal solution. Recent developments in low-thrust propulsion have widened the perspectives for exploration of the Solar System, while at the same time increasing the difficulty of the trajectory design process. Continuous-thrust transfers, typically characterized by multiple spiraling arcs, have a large number of design parameters, and thanks to the flexibility offered by such engines they typically exhibit a multi-modal domain with a consequently larger number of optimal solutions. The definition of first guesses is therefore even more challenging, particularly for a broad search over the design parameters, and it requires an extensive investigation of the domain in order to locate the largest number of optimal candidate solutions and possibly the globally optimal one. In this paper a tool for the preliminary definition of interplanetary transfers with coast-thrust arcs and multiple swing-bys is presented. This goal is achieved by combining a novel methodology for the description of low-thrust arcs with a global optimization algorithm based on a hybridization of an evolutionary step and a deterministic step. Low-thrust arcs are described in a 3D model in order to account for the beneficial effects of low-thrust propulsion on changes of inclination, resorting to a new methodology based on an inverse method. The two-point boundary value problem (TPBVP) associated with a thrust arc is solved by imposing a properly parameterized evolution of the orbital parameters, by which the acceleration required to follow the given trajectory with respect to the constraint set is obtained simply through
Application of optimization techniques to near terminal area sequencing and flow control.
NASA Technical Reports Server (NTRS)
Straeter, T. A.; Park, S. K.; Hogge, J. E.
1972-01-01
Development of an arrival air-traffic management system for a single runway. Traffic is segregated throughout most of the near terminal area according to performance characteristics. Nominal approach routes for each class of aircraft are determined by an optimization procedure. In this fashion, the nominal approach routes are dependent upon and, hence, determined by the near terminal area operating capabilities of each class of aircraft. The landing order and spacing of aircraft on the common approach path are determined so that a measure of total system deviation from the nominal landing times is minimized and safety standards are met. Delay maneuvers required to satisfy sequencing needs are then carried out in a manner dependent upon the particular class of aircraft being maneuvered. Finally, results are presented to illustrate the effects of the rate of arrivals upon a one-runway system serving three different classes of aircraft employing several different sequencing strategies and measures of total system deviation.
Compiler optimization technique for data cache prefetching using a small CAM array
Chi, C.H.
1994-12-31
With advances in compiler optimization and program flow analysis, software-assisted cache prefetching schemes using PREFETCH instructions are now possible. Although data can be prefetched accurately into the cache, the runtime overhead associated with these schemes often limits their practical use. In this paper, we propose a new scheme, called Strike-CAM Data Prefetching (SCP), to prefetch array references with constant strides accurately. Compared to current software-assisted data prefetching schemes, the SCP scheme has much lower runtime overhead without sacrificing prefetching accuracy. Our results showed that the SCP scheme is particularly suitable for compute-intensive scientific applications, where cache misses are mainly due to array references with constant strides that can be prefetched very accurately by the SCP scheme.
Singular perturbation techniques for real time aircraft trajectory optimization and control
NASA Technical Reports Server (NTRS)
Calise, A. J.; Moerder, D. D.
1982-01-01
The usefulness of singular perturbation methods for developing real time computer algorithms to control and optimize aircraft flight trajectories is examined. A minimum time intercept problem using F-8 aerodynamic and propulsion data is used as a baseline. This provides a framework within which issues relating to problem formulation, solution methodology and real time implementation are examined. Theoretical questions relating to separability of dynamics are addressed. With respect to implementation, situations leading to numerical singularities are identified, and procedures for dealing with them are outlined. Also, particular attention is given to identifying quantities that can be precomputed and stored, thus greatly reducing the on-board computational load. Numerical results are given to illustrate the minimum time algorithm, and the resulting flight paths. An estimate is given for execution time and storage requirements.
Optimization of dam-and-fill technique for sensor packaging applications
NASA Astrophysics Data System (ADS)
Kharbanda, D. K.; Khanna, P. K.
2016-04-01
Packaging of various sensors, viz. ISFET, EGFET, and pressure sensors, requires encapsulation of the device excluding the sensing area. ISFET and EGFET devices are used for pH monitoring applications. An appropriate opening over the sensing area is desirable for detection of pH. The remaining area of the device is encapsulated to prevent any shorting and to protect the sensor from the external environment. The curing profile of the potting compound was analysed to optimize the contour opening as per the requirement. Experiments with stepped and staged curing were also performed to improve the shape of the contour. A sensing area with an appropriate opening and controlled contour was achieved for ISFET and EGFET devices.
Monticelli, D; Ciceri, E; Dossi, C
2007-07-01
A new automated batch method for the determination of ultratrace metals (nanogram-per-liter level) was developed and validated. Instrumental and chemical parameters affecting the performance of the method were carefully assessed and optimized. A wide range of voltammetric methods under different chemical conditions were tested. Cadmium, lead and copper were determined by anodic stripping voltammetry (ASV), while nickel, cobalt, rhodium and uranium were determined by adsorptive cathodic stripping voltammetry (AdCSV). The figures of merit of all of these methods were determined: very good precision and accuracy were achieved, e.g. relative percentage standard deviations in the 4-13% range for ASV and the 2-5% range for AdCSV. The stripping methods were applied to the determination of cadmium, lead, copper, nickel, cobalt, rhodium and uranium in lake water samples, and the results were found to be comparable with ICP-MS data. PMID:17586114
Simunek, J.; Nimmo, J.R.
2005-01-01
A modified version of the Hydrus software package that can directly or inversely simulate water flow in a transient centrifugal field is presented. The inverse solver for parameter estimation of the soil hydraulic parameters is then applied to multirotation transient flow experiments in a centrifuge. Using time-variable water contents measured at a sequence of several rotation speeds, soil hydraulic properties were successfully estimated by numerical inversion of transient experiments. The inverse method was then evaluated by comparing estimated soil hydraulic properties with those determined independently using an equilibrium analysis. The optimized soil hydraulic properties compared well with those determined using equilibrium analysis and steady state experiment. Multirotation experiments in a centrifuge not only offer significant time savings by accelerating time but also provide significantly more information for the parameter estimation procedure compared to multistep outflow experiments in a gravitational field. Copyright 2005 by the American Geophysical Union.
Han, Yan-Quan; Hong, Yan; Xia, Lun-Zhu; Gao, Jia-Rong; Wang, Yong-Zhong; Sun, Yan-Hua; Yi, Jin-Hai
2014-04-01
The experiment's aim was to optimize the processing technology of Xanthii Fructus by comparing differences in the UPLC fingerprints and the toxic-ingredient contents of water extracts from 16 batches of processed samples. The UPLC chromatographic and toxic-ingredient determination conditions were as follows. UPLC: ACQUITY BEH C18 column (2.1 mm x 100 mm, 1.7 µm) eluted with mobile phases of acetonitrile and 0.1% phosphoric acid-water in gradient mode; the flow rate was 0.25 mL·min(-1) and the detection wavelength was set at 327 nm. Toxic-ingredient content: Agilent TC-C18 column (4.6 mm x 250 mm, 5 µm); the mobile phase was methanol-0.01 mol·L(-1) sodium dihydrogen phosphate (35:65), the flow rate was 1.0 mL·min(-1), and the detection wavelength was 203 nm. The chromatographic fingerprints of the 16 batches of samples were analyzed using the Similarity Evaluation System for Chromatographic Fingerprints of Traditional Chinese Medicine, SPSS 16.0, and SIMCA 13.0 software, respectively. The similarity degrees of the 16 batches of samples were greater than 0.97, all the samples were classified into four categories, and the PCA showed that the peak areas of chlorogenic acid, 3,5-dicaffeoylquinic acid and caffeic acid were significant effect indices in the fingerprint of the processed Xanthii Fructus samples. The determination results showed that the toxic-ingredient contents of all samples were reduced significantly after processing. This method can be used to optimize the processing technology of Xanthii Fructus. PMID:25011263
Residential fuel cell energy systems performance optimization using "soft computing" techniques
NASA Astrophysics Data System (ADS)
Entchev, Evgueniy
Stationary residential and commercial fuel cell cogeneration systems have received increasing attention from the general public due to their great potential to supply both the thermal and electrical loads of dwellings. A number of field demonstration trials with grid-connected and off-grid applications are under way, and valuable and unique data are being collected to describe system performance. While the electricity-only mode of operation is relatively easy to introduce, it is characterized by relatively low efficiency (20-35%). The combined heat and power generation mode is more attractive due to its higher efficiency (above 60%), better resource and fuel utilization, and the advantage of a compact one-box/single-fuel approach to supplying all the energy needs of a dwelling. While commercial fuel cell cogeneration applications are easy to adapt to the combined mode of operation, owing to a relatively stable base power/heat load throughout the day, residential fuel cell cogeneration systems face a different environment: the load is uneven, usually with two peaks in the morning and the evening, and the triple load of space heating, water heating, and power occurs at almost the same time. In most cases the fuel cell system is not able to satisfy the triple demand, and an additional backup heater/burner is used. The developed "soft computing" control strategy for FC integrated systems would be able to optimize the combined system operation while satisfying the combination of demands. The simulation results showed that by employing a generic fuzzy logic control strategy, the management of the power supply and thermal loads could be done appropriately in an optimal way, satisfying homeowners' power and comfort needs.
NASA Astrophysics Data System (ADS)
Shahbazmohamadi, Sina; Jordan, Eric H.
2012-12-01
Creation of three-dimensional representations of surfaces from images taken at two or more view angles is a well-established technique applied to optical images and is frequently used in combination with scanning electron microscopy (SEM). The present work describes specific steps taken to optimize and enhance the repeatability of three-dimensional surfaces reconstructed from SEM images. The presented steps result in an approximately tenfold improvement in the repeatability of the surface reconstruction compared to more standard techniques. The enhanced techniques presented can be used with any SEM-friendly samples. In this work the modified technique was developed in order to accurately quantify surface geometry changes in metallic bond coats used with thermal barrier coatings (TBCs) to provide improved turbine hot part durability. Bond coat surfaces are quite rough, and accurate determination of surface geometry change (rumpling) requires excellent repeatability. Rumpling is an important contributor to TBC failure, and accurate quantification of rumpling is important to better understanding of the failure behavior of TBCs.
NASA Technical Reports Server (NTRS)
Adams, W. M., Jr.; Tiffany, S. H.
1984-01-01
The design of a candidate flutter suppression (FS) control law for the symmetric degrees of freedom for the DAST ARW-2 aircraft is discussed. The results illustrate the application of several currently employed control law design techniques. Subsequent designs, obtained as the mathematical model of the ARW-2 is updated, are expected to employ similar methods and to provide a control law whose performance will be flight tested. This study represents one of the steps necessary to provide an assessment of the validity of applying current control law synthesis and analysis techniques in the design of actively controlled aircraft. Mathematical models employed in the control law design and evaluation phases are described. The control problem is specified by presenting the flutter boundary predicted for the uncontrolled aircraft and by defining objectives and constraints that the controller should satisfy. A full-order controller is obtained by using Linear Quadratic Gaussian (LQG) techniques. The process of obtaining an implementable reduced-order controller is described. One example is also shown in which constrained optimization techniques are utilized to explicitly include robustness criteria within the design algorithm.
Smiley Evans, Tierra; Barry, Peter A.; Gilardi, Kirsten V.; Goldstein, Tracey; Deere, Jesse D.; Fike, Joseph; Yee, JoAnn; Ssebide, Benard J; Karmacharya, Dibesh; Cranfield, Michael R.; Wolking, David; Smith, Brett; Mazet, Jonna A. K.; Johnson, Christine K.
2015-01-01
Free-ranging nonhuman primates are frequent sources of zoonotic pathogens due to their physiologic similarity to humans and, in many tropical regions, their close contact with people. Many high-risk disease transmission interfaces have not been monitored for zoonotic pathogens due to difficulties inherent to invasive sampling of free-ranging wildlife. Non-invasive surveillance of nonhuman primates for pathogens with high potential for spillover into humans is therefore critical for understanding the disease ecology of existing zoonotic pathogen burdens and identifying communities where zoonotic diseases are likely to emerge in the future. We developed a non-invasive oral sampling technique using ropes distributed to nonhuman primates to target viruses shed in the oral cavity, which, through bite wounds and discarded food, could be transmitted to people. Optimization was performed by testing paired rope and oral swabs from laboratory colony rhesus macaques for rhesus cytomegalovirus (RhCMV) and simian foamy virus (SFV) and implementing the technique with free-ranging terrestrial and arboreal nonhuman primate species in Uganda and Nepal. Both ubiquitous DNA and RNA viruses, RhCMV and SFV, were detected in oral samples collected from ropes distributed to laboratory colony macaques, and SFV was detected in free-ranging macaques and olive baboons. Our study describes a technique that can be used for disease surveillance in free-ranging nonhuman primates and, potentially, other wildlife species when invasive sampling techniques may not be feasible. PMID:26046911
NASA Technical Reports Server (NTRS)
Sreekanta Murthy, T.; Kvaternik, Raymond G.
1991-01-01
A NASA/industry rotorcraft structural dynamics program known as Design Analysis Methods for VIBrationS (DAMVIBS) was initiated at Langley Research Center in 1984 with the objective of establishing the technology base needed by the industry for developing an advanced finite-element-based vibrations design analysis capability for airframe structures. As a part of the in-house activities contributing to that program, a study was undertaken to investigate the use of formal, nonlinear programming-based, numerical optimization techniques for airframe vibrations design work. Considerable progress has been made in connection with that study since its inception in 1985. This paper presents a unified summary of the experiences and results of that study. The formulation and solution of airframe optimization problems are discussed. Particular attention is given to describing the implementation of a new computational procedure based on MSC/NASTRAN and CONstrained function MINimization (CONMIN) in a computer program system called DYNOPT for the optimization of airframes subject to strength, frequency, dynamic response, and fatigue constraints. The results from the application of the DYNOPT program to the Bell AH-1G helicopter are presented and discussed.
Emission-rotation correlation in pulsars: new discoveries with optimal techniques
NASA Astrophysics Data System (ADS)
Brook, P. R.; Karastergiou, A.; Johnston, S.; Kerr, M.; Shannon, R. M.; Roberts, S. J.
2016-02-01
Pulsars are known to display short-term variability. Recently, examples of longer term emission variability have emerged that are often correlated with changes in the rotational properties of the pulsar. To further illuminate this relationship, we have developed techniques to identify emission and rotation variability in pulsar data, and determine correlations between the two. Individual observations may be too noisy to identify subtle changes in the pulse profile. We use Gaussian process (GP) regression to model noisy observations and produce a continuous map of pulse profile variability. Generally, multiple observing epochs are required to obtain the pulsar spin frequency derivative (ν̇). GP regression is, therefore, also used to obtain ν̇, under the hypothesis that pulsar timing noise is primarily caused by unmodelled changes in ν̇. Our techniques distinguish between two types of variability: changes in the total flux density versus changes in the pulse shape. We have applied these techniques to 168 pulsars observed by the Parkes radio telescope, and see that although variations in flux density are ubiquitous, substantial changes in the shape of the pulse profile are rare. We reproduce previously published results and present examples of profile shape changing in seven pulsars; in particular, a clear new example of correlated changes in profile shape and rotation is found in PSR J1602-5100. In the shape-changing pulsars, a more complex picture than the previously proposed two-state model emerges. We conclude that our simple assumption that all timing noise can be interpreted as ν̇ variability is insufficient to explain our data set.
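The GP regression step can be sketched with a standard squared-exponential-kernel predictor (a generic illustration, not the authors' pipeline; hyperparameters are assumed fixed here rather than learned from the data):

```python
import numpy as np

def rbf_kernel(a, b, length=1.0, amp=1.0):
    """Squared-exponential covariance between two 1-D input arrays."""
    d = a[:, None] - b[None, :]
    return amp**2 * np.exp(-0.5 * (d / length)**2)

def gp_regress(x_train, y_train, x_test, length=1.0, amp=1.0, noise=0.1):
    """Posterior mean and pointwise variance of a GP fit to noisy observations."""
    K = rbf_kernel(x_train, x_train, length, amp) + noise**2 * np.eye(len(x_train))
    Ks = rbf_kernel(x_test, x_train, length, amp)
    Kss = rbf_kernel(x_test, x_test, length, amp)
    alpha = np.linalg.solve(K, y_train)       # K^{-1} y
    mean = Ks @ alpha                         # posterior mean at test inputs
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T) # posterior covariance
    return mean, np.diag(cov)
```

Fitting noisy samples of a smooth curve and reading off the posterior mean gives a denoised continuous estimate, which is the role the GP plays in building the profile-variability maps described above.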
Olama, Mohammed M.; Ma, Xiao; Killough, Stephen M.; Kuruganti, Teja; Smith, Stephen F.; Djouadi, Seddik M.
2015-03-12
In recent years, there has been great interest in using hybrid spread-spectrum (HSS) techniques for commercial applications, particularly in the Smart Grid, in addition to their inherent uses in military communications. This is because HSS can accommodate high data rates with high link integrity, even in the presence of significant multipath effects and interfering signals. A highly useful form of this transmission technique for many types of command, control, and sensing applications is the specific code-related combination of standard direct sequence modulation with fast frequency hopping, denoted hybrid DS/FFH, wherein multiple frequency hops occur within a single data-bit time. In this paper, error-probability analyses are performed for a hybrid DS/FFH system over standard Gaussian and fading-type channels, progressively including the effects of wide- and partial-band jamming, multi-user interference, and varying degrees of Rayleigh and Rician fading. In addition, an optimization approach is formulated that minimizes the bit-error probability of a hybrid DS/FFH communication system and solves for the resulting system design parameters. The optimization objective function is non-convex and can be solved by applying the Karush-Kuhn-Tucker conditions. We also present our efforts toward exploring the design, implementation, and evaluation of a hybrid DS/FFH radio transceiver using a single FPGA. Numerical and experimental results are presented under widely varying design parameters to demonstrate the adaptability of the waveform for varied harsh smart grid RF signal environments.
NASA Technical Reports Server (NTRS)
1971-01-01
Computational techniques were developed and assimilated for the design optimization. The resulting computer program was then used to perform initial optimization and sensitivity studies on a typical thermal protection system (TPS) to demonstrate its application to the space shuttle TPS design. The program was developed in Fortran IV for the CDC 6400 but was subsequently converted to the Fortran V language to be used on the Univac 1108. The program allows for improvement and update of the performance prediction techniques. The program logic involves subroutines which handle the following basic functions: (1) a driver which calls for input, output, and communication between program and user and between the subroutines themselves; (2) thermodynamic analysis; (3) thermal stress analysis; (4) acoustic fatigue analysis; and (5) weights/cost analysis. In addition, a system total cost is predicted based on system weight and historical cost data of similar systems. Two basic types of input are provided, both of which are based on trajectory data. These are vehicle attitude (altitude, velocity, and angles of attack and sideslip), for external heat and pressure loads calculation, and heating rates and pressure loads as a function of time.
Lin, Yang-Cheng; Yeh, Chung-Hsing; Wang, Chen-Cheng; Wei, Chun-Chun
2012-01-01
How to design highly reputable and hot-selling products is an essential issue in product design. Whether consumers choose a product depends largely on their perception of the product image. A consumer-oriented design approach presented in this paper helps product designers incorporate consumers' perceptions of product forms in the design process. The consumer-oriented design approach uses quantification theory type I (QTTI), grey prediction (the linear modeling technique), and neural networks (the nonlinear modeling technique) to determine the optimal form combination of product design for matching a given product image. An experimental study based on the concept of Kansei Engineering is conducted to collect numerical data for examining the relationship between consumers' perception of product image and product form elements of personal digital assistants (PDAs). The result of the performance comparison shows that the QTTI model is good enough to help product designers determine the optimal form combination of product design. Although the PDA form design is used as a case study, the approach is applicable to other consumer products with various design elements and product images. The approach provides an effective mechanism for facilitating the consumer-oriented product design process. PMID:23258961
NASA Astrophysics Data System (ADS)
Chen, Xiaoguang; Liang, Lin; Liu, Fei; Xu, Guanghua; Luo, Ailing; Zhang, Sicong
2012-05-01
Nowadays, Motor Current Signature Analysis (MCSA) is widely used in the fault diagnosis and condition monitoring of machine tools. However, because the current signal has a low SNR (signal-to-noise ratio) and the feature frequencies are often dense and overlapping, it is difficult to identify the feature frequencies of machine tools from the complex current spectrum using traditional signal processing methods such as the FFT. In studying MCSA, it is found that entropy, which is associated with the probability distribution of any random variable, is of importance for frequency identification; it therefore plays an important role in signal processing. In order to solve the problem that the feature frequencies are difficult to identify, an entropy optimization technique based on the motor current signal is presented in this paper for extracting the typical feature frequencies of machine tools, which can effectively suppress disturbances. Simulated current signals were generated in MATLAB, and a current signal was obtained from a complex gearbox of an iron works in Luxembourg. In diagnosis, MCSA is combined with entropy optimization. Both simulated and experimental results show that this technique is efficient, accurate and reliable enough to extract the feature frequencies of the current signal, which provides a new strategy for the fault diagnosis and condition monitoring of machine tools.
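One common way to use entropy on a spectrum (a generic sketch, not necessarily the authors' exact formulation) is to treat the normalized power spectrum as a probability distribution and compute its Shannon entropy; a spectrum dominated by a few discrete feature frequencies then scores low, while broadband noise scores high:

```python
import numpy as np

def spectral_entropy(signal, eps=1e-12):
    """Shannon entropy (bits) of the normalized power spectrum of a signal."""
    power = np.abs(np.fft.rfft(signal))**2
    p = power / (power.sum() + eps)   # treat the spectrum as a distribution
    p = p[p > eps]                    # drop (numerically) empty bins
    return float(-np.sum(p * np.log2(p)))

# A clean 50 Hz "current" tone concentrates its spectral power in one bin,
# so its entropy is much lower than that of broadband noise.
fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
tone = np.sin(2 * np.pi * 50 * t)
noise = np.random.default_rng(0).standard_normal(t.size)
```

Comparing `spectral_entropy(tone)` with `spectral_entropy(noise)` illustrates why an entropy criterion can favor spectra in which feature frequencies stand out from the background.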
Beyth, Y.; Navot, D.; Lax, E.
1985-10-01
A simple technique is reported in which oil-soluble contrast media (OSCM) are used with hysterosalpingography to investigate infertility in women due to uterine and tubal pathology. The advantages of OSCM as compared with water-soluble contrast media (WSCM) are described. Complications caused by intravasation of the OSCM into lymph vessels and veins are avoided by clearing the media at the end of the procedure. This also results in the immediate spread of the contrast media in the pelvic cavity with the result that delayed radiographs become superfluous and the radiation dose to the genitals is reduced. 4 references, 2 figures.
Minimizing occupational and patient radiation exposure using an optimized injection technique
Holly, A.S.; Stumpf, K.D.; Ortendahl, D.A.; Hattner, R.S.
1987-12-01
In an attempt to lower whole-body and hand radiation exposure to the technologist and decrease the number of infiltrated doses, this laboratory instituted a cold-start method for radionuclide injection. The authors then compared radiation dosimetry readings for periods before and after instituting the method. Finger-ring exposures and whole-body exposures were compared. Exposure to the technologist's hands was reduced by 56% and to the technologist's body by 28%. Detectable extravasation of the dose was reduced from 64% to 9%. They recommend the use of this technique for all nuclear medicine departments.
Optimization of Coronary Whole-Heart MRA Free Breathing Technique at 3T
Gharib, Ahmed M.; Abd-Elmoniem, Khaled Z.; Herzka, Daniel A.; Ho, Vincent B.; Locklin, Julie; Tzatha, Efstathia; Stuber, Matthias; Pettigrew, Roderic I
2011-01-01
Four different techniques for 3T whole-heart coronary MRA using free-breathing 3D segmented parallel imaging and adiabatic T2-Prep were assessed. Coronary MRA at 3T is improved by shortening the acquisition window more than employing the highest spatial resolution. Double oblique whole-heart acquisitions result in better overall image quality and allow for better delineation of the LAD. It is possible to attain shorter acquisition windows and a smaller voxel size at 3T than previously reported at 1.5T. PMID:21871751
NASA Technical Reports Server (NTRS)
Schilling, D. L.
1974-01-01
Digital multiplication of two waveforms using delta modulation (DM) is discussed. It is shown that while conventional multiplication of two N-bit words requires complexity of order N², multiplication using DM requires complexity which increases linearly with N. Bounds on the signal-to-quantization-noise ratio (SNR) resulting from this multiplication are determined and compared with the SNR obtained using standard multiplication techniques. The phase-locked loop (PLL) system, consisting of a phase detector, a voltage-controlled oscillator, and a linear loop filter, is discussed in terms of its design and system advantages. Areas requiring further research are identified.
Bayesian network structure learning based on the chaotic particle swarm optimization algorithm.
Zhang, Q; Li, Z; Zhou, C J; Wei, X P
2013-01-01
The Bayesian network (BN) is a knowledge representation form, which has been proven to be valuable in gene regulatory network reconstruction because of its capability of capturing causal relationships between genes. Learning BN structures from a database is a nondeterministic polynomial time (NP)-hard problem that remains one of the most exciting challenges in machine learning. Several heuristic searching techniques have been used to find better network structures. Among these algorithms, the classical K2 algorithm is the most successful. Nonetheless, the performance of the K2 algorithm is greatly affected by the prior ordering of input nodes. The method proposed in this paper is based on chaotic particle swarm optimization (CPSO) and the K2 algorithm. Because the PSO algorithm is easily trapped in local minima in later stages of evolution, we combined it with chaos theory, which has the properties of ergodicity, randomness, and regularity. Experimental results show that the proposed method can improve the convergence rate of particles and identify networks more efficiently and accurately. PMID:24222226
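The chaotic-reseeding idea can be sketched on a continuous test function (an illustrative sketch only: the paper applies CPSO to node orderings scored by the K2 algorithm, which is omitted here, and all names and constants are ours):

```python
import numpy as np

def chaotic_pso(objective, dim, n_particles=20, iters=200, seed=0):
    """PSO with logistic-map chaotic re-seeding of the worst particle.
    Generic continuous-domain sketch; the BN/K2 scoring of the paper is
    replaced by an arbitrary objective function to be minimized."""
    rng = np.random.default_rng(seed)
    lo, hi = -5.0, 5.0
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    z = rng.uniform(0.1, 0.9, (n_particles, dim))  # logistic-map state
    pbest = x.copy()
    pval = np.array([objective(p) for p in x])
    g = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        better = f < pval
        pbest[better], pval[better] = x[better], f[better]
        g = pbest[pval.argmin()].copy()
        # Chaotic step: the ergodic logistic map re-seeds the currently
        # worst particle, helping the swarm escape local minima late on.
        z = 4.0 * z * (1.0 - z)
        w = f.argmax()
        x[w] = lo + (hi - lo) * z[w]
    return g, float(pval.min())
```

On a simple sphere function the swarm converges close to the global minimum while the chaotic re-seeding keeps one particle exploring.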
Scheib, Stacey A; Tanner, Edward; Green, Isabel C; Fader, Amanda N
2014-01-01
The objectives of this review were to analyze the literature describing the benefits of minimally invasive gynecologic surgery in obese women, to examine the physiologic considerations associated with obesity, and to describe surgical techniques that will enable surgeons to perform laparoscopy and robotic surgery successfully in obese patients. The Medline database was reviewed for all articles published in the English language between 1993 and 2013 containing the search terms "gynecologic laparoscopy," "laparoscopy," "minimally invasive surgery and obesity," "obesity," and "robotic surgery." The incidence of obesity, and in particular of morbid obesity in women, is increasing in the United States. Obesity is associated with a wide range of comorbid conditions that may affect perioperative outcomes, including hypertension, atherosclerosis, angina, obstructive sleep apnea, and diabetes mellitus. In obese patients, laparoscopy or robotic surgery, compared with laparotomy, is associated with a shorter hospital stay, less postoperative pain, and fewer wound complications. Specific intra-abdominal access and trocar positioning techniques, as well as anesthetic maneuvers, improve the likelihood of success of laparoscopy in women with central adiposity. Performing gynecologic laparoscopy in the morbidly obese is no longer rare. Increases in the heaviest weight categories involve changes in clinical practice patterns. With comprehensive and thoughtful preoperative and surgical planning, minimally invasive gynecologic surgery may be performed safely and is of particular benefit in obese patients. PMID:24100146
NASA Astrophysics Data System (ADS)
Teo, Stephanie M.; Ofori-Okai, Benjamin K.; Werley, Christopher A.; Nelson, Keith A.
2015-05-01
Multidimensional spectroscopy at visible and infrared frequencies has opened a window into the transfer of energy and quantum coherences at ultrafast time scales. For these measurements to be performed in a manageable amount of time, one spectral axis is typically recorded in a single laser shot. An analogous rapid-scanning capability for THz measurements will unlock the multidimensional toolkit in this frequency range. Here, we first review the merits of existing single-shot THz schemes and discuss their potential in multidimensional THz spectroscopy. We then introduce improved experimental designs and noise suppression techniques for the two most promising methods: frequency-to-time encoding with linear spectral interferometry and angle-to-time encoding with dual echelons. Both methods, each using electro-optic detection in the linear regime, were able to reproduce the THz temporal waveform acquired with a traditional scanning delay line. Although spectral interferometry had mediocre performance in terms of signal-to-noise, the dual echelon method was easily implemented and achieved the same level of signal-to-noise as the scanning delay line in only 4.5% of the laser pulses otherwise required (or 22 times faster). This reduction in acquisition time will compress day-long scans to hours and hence provides a practical technique for multidimensional THz measurements.
Optimized measurement strategy for multiple-orientation technique on coordinate-measuring machines
NASA Astrophysics Data System (ADS)
Kondo, Yohan; Sasajima, Kazuyuki; Osawa, Sonko; Sato, Osamu; Watanabe, Tsukasa; Komori, Masaharu
2009-10-01
Coordinate-measuring machines (CMMs) are widely used to measure the characteristics of various geometrical features. Measurement results obtained with CMMs include systematic errors. To eliminate these systematic errors, the multiple-orientation technique is effective for rotationally symmetric workpieces such as cylinders or gears. However, there are Fourier components of the calibration curve that cannot be analyzed for a given number of orientations; therefore, the number of orientations has conventionally been set larger than the number of required Fourier components. Such a method, however, takes a very long time, and it is difficult to maintain a stable environment during the measurement. In this paper, we propose a new measurement strategy that reduces the total number of orientations by compensating for the deficient Fourier components using a measurement with a different number of orientations. When the lowest common multiple of integers m and n is larger than the number of required Fourier components, the calibration result can be obtained from m + n - 1 orientations. To select m and n most efficiently, the combination should not share a common prime factor; that is, m and n should be coprime. The effectiveness of the combination measurement strategy for the multiple-orientation technique was demonstrated by calibrating a multiball artifact and a gear.
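The selection rule above (coprime m and n with lcm(m, n) greater than the number of required Fourier components, minimizing the total m + n - 1 orientations) can be sketched as a brute-force search. The function name and the search bound are illustrative, not from the paper; for coprime m and n, lcm(m, n) is simply m*n.

```python
from math import gcd

def smallest_combination(k):
    """Smallest total orientation count m + n - 1 such that coprime
    integers m, n (2 <= m < n) satisfy lcm(m, n) = m * n > k, where k
    is the number of required Fourier components.
    The search bound 4 * k is a generous heuristic.
    Returns (m + n - 1, m, n)."""
    best = None
    for n in range(3, 4 * k):
        for m in range(2, n):
            if gcd(m, n) == 1 and m * n > k:
                cost = m + n - 1
                if best is None or cost < best[0]:
                    best = (cost, m, n)
    return best
```

For example, to resolve 20 Fourier components, a single-pass approach would need more than 20 orientations, while the combination strategy with m = 3 and n = 7 (lcm = 21) needs only 3 + 7 - 1 = 9.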
Liu, Langechuan; Antonuk, Larry E.; El-Mohri, Youcef; Zhao, Qihua; Jiang, Hao
2014-01-01
Purpose: Active matrix flat-panel imagers (AMFPIs) incorporating thick, segmented scintillators have demonstrated order-of-magnitude improvements in detective quantum efficiency (DQE) at radiotherapy energies compared to systems based on conventional phosphor screens. Such improved DQE values facilitate megavoltage cone-beam CT (MV CBCT) imaging at clinically practical doses. However, the MV CBCT performance of such AMFPIs is highly dependent on the design parameters of the scintillators. In this paper, optimization of the design of segmented scintillators was explored using a hybrid modeling technique which encompasses both radiation and optical effects. Methods: Imaging performance in terms of the contrast-to-noise ratio (CNR) and spatial resolution of various hypothetical scintillator designs was examined through a hybrid technique involving Monte Carlo simulation of radiation transport in combination with simulation of optical gain distributions and optical point spread functions. The optical simulations employed optical parameters extracted from a best fit to measurement results reported in a previous investigation of a 1.13 cm thick, 1016 μm pitch prototype BGO segmented scintillator. All hypothetical designs employed BGO material with a thickness and element-to-element pitch ranging from 0.5 to 6 cm and from 0.508 to 1.524 mm, respectively. In the CNR study, for each design, full tomographic scans of a contrast phantom incorporating various soft-tissue inserts were simulated at a total dose of 4 cGy. Results: Theoretical values for contrast, noise, and CNR were found to be in close agreement with empirical results from the BGO prototype, strongly supporting the validity of the modeling technique. CNR and spatial resolution for the various scintillator designs demonstrate complex behavior as scintillator thickness and element pitch are varied—with a clear trade-off between these two imaging metrics up to a thickness of ∼3 cm. Based on these results, an
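The contrast-to-noise ratio used above to compare the hypothetical scintillator designs can be illustrated with one common definition: the difference between the mean signal in a soft-tissue insert and the mean background, divided by the background noise. This particular formula and the synthetic regions of interest below are assumptions for illustration; the abstract does not spell out the exact metric.

```python
import numpy as np

def cnr(insert_roi, background_roi):
    """Contrast-to-noise ratio of an insert region against background,
    using (mean_insert - mean_background) / std_background -- one common
    definition, assumed here for illustration."""
    insert_roi = np.asarray(insert_roi, dtype=float)
    background_roi = np.asarray(background_roi, dtype=float)
    return (insert_roi.mean() - background_roi.mean()) / background_roi.std()
```

In a simulated CBCT study like the one described, each scintillator design would yield reconstructed insert and background regions, and the design trade-off appears as higher CNR (thicker scintillator, more x-ray absorption) versus better spatial resolution (finer element pitch, less optical spread).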
Yao, T-T; Wang, L-K; Cheng, J-L; Hu, Y-Z; Zhao, J-H; Zhu, G-N
2015-03-01
A new approach employing a combination of pyrethroid and repellent is proposed to improve the protective efficacy of conventional pyrethroid-treated fabrics against mosquito vectors. In this context, the insecticidal and repellent efficacies of commonly used pyrethroids and repellents were evaluated by cone tests and arm-in-cage tests against Stegomyia albopicta (=Aedes albopictus) (Diptera: Culicidae). The knock-down effects of the pyrethroids and repellents were then compared at their respective LD50 (pyrethroid) or ED50 (repellent) concentrations. The results indicated that deltamethrin and DEET were relatively more effective, and these were therefore selected for further study. Synergistic interaction was observed between deltamethrin and DEET at ratios of 5 : 1, 2 : 1, 1 : 1 and 1 : 2 (but not 1 : 5). A mixture at the optimal ratio of 7 : 5 was then microencapsulated and adhered to fabrics using a fixing agent. Fabrics impregnated with the microencapsulated mixture gained extended washing durability compared with those treated by a conventional dipping method. The results indicate that this approach represents a promising method for the future impregnation of bednet, curtain and combat uniform materials. PMID:25429906
Optimizing human reliability: Mock-up and simulation techniques in waste management
Caccamise, D.J.; Somers, C.S.; Sebok, A.L.
1992-10-01
With the new mission at Rocky Flats to decontaminate and decommission a 40-year-old nuclear weapons production facility come many interesting new challenges for human factors engineering. Because the goal at Rocky Flats is to transform the environment, the workforce that undertakes this mission will find itself in a state of constant change, responding to ever-changing task demands in a constantly evolving workplace. To achieve the flexibility necessary under these circumstances and still maintain control of the human reliability issues that exist in a hazardous, radioactive work environment, Rocky Flats developed an Engineering Mock-up and Simulation Lab to plan, design, test, and train personnel for new tasks involving hazardous materials. This presentation describes how the laboratory is used to develop equipment, tools, work processes, and procedures to address human reliability concerns in the operational environment. We discuss a particular instance in which a glovebag, large enough to house two individuals, was developed at this laboratory to protect workers as they cleaned fissile material from building ventilation duct systems.