A Genetic Algorithm Approach to InGaP/GaAs HBT Parameter Extraction and RF Characterization
NASA Astrophysics Data System (ADS)
Li, Yiming; Cho, Yen-Yu; Wang, Chuan-Sheng; Huang, Kuen-Yu
2003-04-01
In this paper, a computational intelligence technique is applied to extract and simulate the stationary and high-frequency properties of heterojunction bipolar transistors (HBTs). A set of HBT circuit equations formulated with the Gummel-Poon model in the time domain is solved with (1) the waveform relaxation (WR) method, (2) the monotone iterative (MI) method, and (3) a genetic algorithm (GA) with floating-point operators. The coupled nonlinear equations are decoupled and solved with the WR and MI methods in the time domain, and the results obtained are used for the optimization of the characteristics with the GA method. The iteration terminates once a convergent global solution is obtained. The time-domain result is used to analyze the output third-order intercept point (OIP3) with the fast Fourier transform (FFT). Compared with SPICE results, our simulations demonstrate that the method is accurate and stable in high-frequency simulation. This approach has practical applications in HBT characterization and optimal radio frequency (RF) circuit design.
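The GA-with-floating-point-operators step can be illustrated with a toy parameter-extraction loop. This is a minimal sketch, not the authors' Gummel-Poon/WR/MI pipeline: it recovers two parameters of a simplified diode equation (a stand-in for HBT model parameters) using a real-coded GA with blend crossover and Gaussian mutation; all coefficients, bounds, and rates are illustrative assumptions.

```python
import random
import math

# Synthetic "measured" I-V data from a simplified diode equation
# I = Is * (exp(V / (n * Vt)) - 1); true parameters to be recovered:
VT = 0.02585  # thermal voltage at 300 K
TRUE_IS, TRUE_N = 1e-12, 1.5
volts = [0.1 * k for k in range(1, 8)]
meas = [TRUE_IS * (math.exp(v / (TRUE_N * VT)) - 1) for v in volts]

def fitness(params):
    """Sum of squared log-current errors (log scale evens out magnitudes)."""
    log_is, n = params
    err = 0.0
    for v, i_m in zip(volts, meas):
        i_c = 10 ** log_is * (math.exp(v / (n * VT)) - 1)
        err += (math.log10(i_c) - math.log10(i_m)) ** 2
    return err

def run_ga(pop_size=60, gens=200, seed=1):
    rng = random.Random(seed)
    # Chromosome = (log10(Is), n), searched over broad physical bounds
    pop = [(rng.uniform(-15, -9), rng.uniform(1.0, 2.5)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]   # elitist truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            w = rng.random()             # blend (arithmetic) crossover
            child = [w * x + (1 - w) * y for x, y in zip(a, b)]
            if rng.random() < 0.2:       # Gaussian mutation on one gene
                j = rng.randrange(2)
                child[j] += rng.gauss(0, 0.05)
            children.append(tuple(child))
        pop = parents + children
    return min(pop, key=fitness)

best = run_ga()
```

A full extraction would replace `fitness` with the error between the GA candidate and the WR/MI time-domain solution of the device equations.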
Castellano, T.; De Palma, L.; Laneve, D.; Strippoli, V.; Cuccovilllo, A.; Prudenzano, F.; Dimiccoli, V.; Losito, O.; Prisco, R.
2015-07-01
A homemade computer code for designing a Side-Coupled Linear Accelerator (SCL) has been written. It integrates a simplified model of SCL tanks with the Particle Swarm Optimization (PSO) algorithm. The code's main aim is to provide useful guidelines for the design of Linear Accelerator (LINAC) resonant cavities. The design procedure assisted by this approach appears very promising, allowing future improvements towards the optimization of actual accelerating geometries. (authors)
Pérez-Castillo, Yunierkis; Lazar, Cosmin; Taminau, Jonatan; Froeyen, Mathy; Cabrera-Pérez, Miguel Ángel; Nowé, Ann
2012-09-24
Computer-aided drug design has become an important component of the drug discovery process. Despite the advances in this field, there is no single modeling approach that can be successfully applied to solve the whole range of problems faced during QSAR modeling. Feature selection and ensemble modeling are active areas of research in ligand-based drug design. Here we introduce the GA(M)E-QSAR algorithm, which combines the search and optimization capabilities of Genetic Algorithms with the simplicity of the Adaboost ensemble-based classification algorithm to solve binary classification problems. We also explore the usefulness of Meta-Ensembles trained with Adaboost and Voting schemes to further improve the accuracy, generalization, and robustness of the optimal Adaboost Single Ensemble derived from the Genetic Algorithm optimization. We evaluated the performance of our algorithm using five data sets from the literature and found that it is capable of yielding classification results similar to or better than those reported for these data sets, with a higher enrichment of active compounds relative to the whole actives subset when only the most active chemicals are considered. More importantly, we compared our methodology with state-of-the-art feature selection and classification approaches and found that it can provide highly accurate, robust, and generalizable models. In the case of the Adaboost Ensembles derived from the Genetic Algorithm search, the final models are quite simple since they consist of a weighted sum of the output of single-feature classifiers. Furthermore, the Adaboost scores can be used as a ranking criterion to prioritize chemicals for synthesis and biological evaluation after virtual screening experiments.
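The backbone of such a model, an Adaboost ensemble whose weak learners are single-feature classifiers, can be sketched as below. This is a generic re-implementation of Adaboost with one-feature threshold stumps on toy data, not the authors' code; as in the abstract, the final model is a weighted sum of single-feature classifiers, and `score` can serve as a ranking criterion.

```python
import numpy as np

def train_adaboost_stumps(X, y, rounds=5):
    """AdaBoost where each weak learner is a one-feature threshold stump,
    so the final model is a weighted sum of single-feature classifiers."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)
    ensemble = []  # entries: (alpha, feature, threshold, polarity)
    for _ in range(rounds):
        best = None
        for j in range(d):                     # exhaustive stump search
            for t in np.unique(X[:, j]):
                for pol in (1, -1):
                    pred = np.where(pol * (X[:, j] - t) >= 0, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, t, pol, pred)
        err, j, t, pol, pred = best
        err = max(err, 1e-12)                  # guard against log(0)
        alpha = 0.5 * np.log((1 - err) / err)  # weak-learner weight
        w *= np.exp(-alpha * y * pred)         # re-weight training points
        w /= w.sum()
        ensemble.append((alpha, j, t, pol))
    return ensemble

def score(ensemble, x):
    """Adaboost score: usable as a ranking criterion after virtual screening."""
    return sum(a * (1 if p * (x[j] - t) >= 0 else -1) for a, j, t, p in ensemble)

# Toy data: the class label is determined by feature 0 alone
X = np.array([[0.1, 5], [0.2, 1], [0.9, 3], [0.8, 2]])
y = np.array([-1, -1, 1, 1])
model = train_adaboost_stumps(X, y)
preds = [1 if score(model, x) >= 0 else -1 for x in X]
```

In GA(M)E-QSAR the GA would additionally select which molecular descriptors (features) the stumps are allowed to use.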
RCQ-GA: RDF Chain Query Optimization Using Genetic Algorithms
NASA Astrophysics Data System (ADS)
Hogenboom, Alexander; Milea, Viorel; Frasincar, Flavius; Kaymak, Uzay
The application of Semantic Web technologies in an Electronic Commerce environment implies a need for good support tools. Fast query engines are needed for efficient querying of large amounts of data, usually represented using RDF. We focus on optimizing a special class of SPARQL queries, the so-called RDF chain queries. For this purpose, we devise a genetic algorithm called RCQ-GA that determines the order in which joins need to be performed for an efficient evaluation of RDF chain queries. The approach is benchmarked against a two-phase optimization algorithm previously proposed in the literature. The more complex a query is, the more RCQ-GA outperforms the benchmark in solution quality, execution time needed, and consistency of solution quality. When the algorithms are constrained by a time limit, the overall performance of RCQ-GA compared to the benchmark further improves.
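The join-ordering idea can be illustrated with a small permutation-encoded GA. The cost model below is a deliberately crude assumption (fixed join selectivity, Cartesian product for patterns that share no variable), not the RCQ-GA cost function; order crossover (OX) and swap mutation are standard permutation operators.

```python
import random

# Toy cost model for a left-deep join order over a chain of triple patterns.
# Pattern i has base cardinality card[i]; joining a pattern adjacent in the
# chain is selective, joining a non-adjacent one degenerates to a product.
card = [1000, 50, 400, 20, 300]
SEL = 0.01  # assumed selectivity of a join on a shared variable

def plan_cost(order):
    joined = {order[0]}
    size = card[order[0]]
    total = 0
    for i in order[1:]:
        if any(abs(i - j) == 1 for j in joined):  # shares a variable
            size = size * card[i] * SEL
        else:                                     # Cartesian product
            size = size * card[i]
        joined.add(i)
        total += size                             # sum of intermediate sizes
    return total

def order_crossover(a, b, rng):
    """OX: copy a slice from parent a, fill the rest in parent b's order."""
    n = len(a)
    lo, hi = sorted(rng.sample(range(n), 2))
    hole = set(a[lo:hi])
    rest = [g for g in b if g not in hole]
    return rest[:lo] + a[lo:hi] + rest[lo:]

def ga_join_order(n=5, pop_size=30, gens=60, seed=3):
    rng = random.Random(seed)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=plan_cost)
        survivors = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = rng.sample(survivors, 2)
            child = order_crossover(a, b, rng)
            if rng.random() < 0.3:                # swap mutation
                i, j = rng.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = survivors + children
    return min(pop, key=plan_cost)

best_order = ga_join_order()
```

A real optimizer would estimate cardinalities and selectivities from RDF statistics rather than fix them.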
A hybrid algorithm with GA and DAEM
NASA Astrophysics Data System (ADS)
Wan, HongJie; Deng, HaoJiang; Wang, XueWei
2013-03-01
Although the expectation-maximization (EM) algorithm has been widely used for finding maximum likelihood estimates of parameters in probabilistic models, it suffers from trapping in local maxima. To overcome this problem, the deterministic annealing EM (DAEM) algorithm was proposed and achieved better performance than the EM algorithm, but it is still not very effective at avoiding local maxima. In this paper, a solution is proposed by integrating GA and DAEM into one procedure to further improve solution quality. The population-based search of the genetic algorithm produces diverse solutions and thus enlarges the search space of DAEM, so the proposed algorithm reaches better solutions than DAEM alone: it retains the annealing property of DAEM and improves solutions through genetic operations. Experimental results on Gaussian mixture model parameter estimation demonstrate that the proposed algorithm achieves better performance.
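The DAEM core that the hybrid builds on can be sketched for a 1-D two-component Gaussian mixture. This shows only the annealed E/M steps (responsibilities raised to an inverse temperature beta that is annealed towards 1, which smooths the likelihood surface at small beta); the paper's method would run such updates across a GA population, which is omitted here, and the equal-weight, fixed-sigma mixture is a simplifying assumption.

```python
import math
import random

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def daem_two_gaussians(data, betas=(0.4, 0.7, 1.0), iters_per_beta=30):
    """Deterministic annealing EM for a 1-D two-component mixture
    (equal weights, fixed sigma=1): component likelihoods are raised to
    the inverse temperature beta before normalisation."""
    mu = [min(data), max(data)]  # crude initialisation
    for beta in betas:           # anneal beta towards 1 (standard EM)
        for _ in range(iters_per_beta):
            r0_sum = r0x = r1x = 0.0
            n = len(data)
            for x in data:
                p0 = normal_pdf(x, mu[0], 1.0) ** beta   # annealed E-step
                p1 = normal_pdf(x, mu[1], 1.0) ** beta
                r0 = p0 / (p0 + p1)
                r0_sum += r0
                r0x += r0 * x
                r1x += (1 - r0) * x
            mu = [r0x / r0_sum, r1x / (n - r0_sum)]       # M-step
    return sorted(mu)

random.seed(0)
data = [random.gauss(-3, 1) for _ in range(200)] + [random.gauss(3, 1) for _ in range(200)]
mus = daem_two_gaussians(data)
```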
Genetic Algorithm Approaches for Actuator Placement
NASA Technical Reports Server (NTRS)
Crossley, William A.
2000-01-01
This research investigated genetic algorithm approaches for smart actuator placement to provide aircraft maneuverability without requiring hinged flaps or other control surfaces. The effort supported goals of the Multidisciplinary Design Optimization focus efforts in NASA's Aircraft Morphing program. This work helped to properly identify various aspects of the genetic algorithm operators and parameters that allow for placement of discrete control actuators/effectors. An improved problem definition, including better definition of the objective function and constraints, resulted from this research effort. The work used a geometrically simple wing model; however, an increasing number of potential actuator placement locations was incorporated to illustrate the ability of the GA to determine promising actuator placement arrangements. This effort's major result is a useful genetic algorithm-based approach to assist in the discrete actuator/effector placement problem.
Hybrid binary GA-EDA algorithms for complex “black-box” optimization problems
NASA Astrophysics Data System (ADS)
Sopov, E.
2017-02-01
Genetic Algorithms (GAs) have proved efficient at solving many complex optimization problems. GAs can also be applied to "black-box" problems, because they perform a "blind" search and do not require any specific information about the features of the search space or the objectives. A GA uses a trial-and-error strategy to explore the search space, collecting statistical information that is stored in the form of genes in the population. Estimation of Distribution Algorithms (EDAs) are realized very similarly to GAs, but use an explicit representation of the search experience in the form of a statistical probability distribution. In this study we discuss some approaches for improving standard GA performance by combining the binary GA with an EDA. Finally, a novel approach for large-scale global optimization is proposed. Experimental results and comparisons with some well-studied techniques are presented and discussed.
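A minimal sketch of such a binary GA-EDA hybrid on the OneMax toy problem: half of each new generation comes from GA operators (uniform crossover plus bit-flip mutation), half is sampled from UMDA-style per-bit marginals estimated over the elite. All rates and sizes are illustrative assumptions, not the paper's settings.

```python
import random

def hybrid_ga_eda_onemax(n_bits=30, pop_size=40, gens=40, seed=7):
    """Hybrid binary GA-EDA on OneMax (fitness = number of ones)."""
    rng = random.Random(seed)
    fitness = sum
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]
        # EDA half: per-bit probability of a 1 among the elite (UMDA-style)
        probs = [sum(ind[i] for ind in elite) / len(elite) for i in range(n_bits)]
        new = elite[:2]  # elitism
        while len(new) < pop_size // 2:
            new.append([1 if rng.random() < p else 0 for p in probs])
        # GA half: uniform crossover + bit-flip mutation over the elite
        while len(new) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [a[i] if rng.random() < 0.5 else b[i] for i in range(n_bits)]
            if rng.random() < 0.5:
                j = rng.randrange(n_bits)
                child[j] ^= 1
            new.append(child)
        pop = new
    return max(pop, key=fitness)

best = hybrid_ga_eda_onemax()
```

The GA half keeps diversity when EDA marginals start to fixate, which is one motivation for hybridizing the two.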
Ameliorated GA approach for base station planning
NASA Astrophysics Data System (ADS)
Wang, Andong; Sun, Hongyue; Wu, Xiaomin
2011-10-01
In this paper, we aim to locate base stations (BSs) rationally so as to satisfy the most customers with the fewest BSs. An ameliorated GA is proposed to search for the optimum solution. In the algorithm, we mesh the area to be planned according to the least overlap length derived from the coverage radius, introduce an isometric grid encoding method to represent the BS distribution and number, and develop selection, crossover, and mutation operators to serve our particular needs. We also construct a comprehensive objective function that synthesizes coverage ratio, overlap ratio, population, and geographical conditions. Finally, after importing an electronic map of the area to be planned, a recommended strategy draft is exported. We use Hong Kong, China as a simulation case and obtain a satisfactory solution.
ASMiGA: an archive-based steady-state micro genetic algorithm.
Nag, Kaustuv; Pal, Tandra; Pal, Nikhil R
2015-01-01
We propose a new archive-based steady-state micro genetic algorithm (ASMiGA). In this context, a new archive maintenance strategy is proposed, which maintains a set of nondominated solutions in the archive unless the archive size falls below a minimum allowable size. This makes the archive size adaptive and dynamic. We have also proposed a new environmental selection strategy and a new mating selection strategy. The environmental selection strategy reduces exploration in less probable objective spaces. The mating selection strategy increases searching in more probable regions by enhancing the exploitation of existing solutions. A new crossover strategy, DE-3, is proposed here. ASMiGA is compared with five well-known multiobjective optimization algorithms of different types: generational evolutionary algorithms (SPEA2 and NSGA-II), an archive-based hybrid scatter search, a decomposition-based evolutionary approach, and an archive-based micro genetic algorithm. For comparison purposes, four performance measures (HV, GD, IGD, and GS) are used on 33 test problems, of which seven are constrained. The proposed algorithm outperforms the other five algorithms.
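The archive-maintenance idea (keep only nondominated solutions unless the archive would fall below a minimum size) can be sketched as follows. The scalar tie-break used to retain dominated points is an assumption for illustration, not ASMiGA's actual rule.

```python
def dominates(a, b):
    """Minimisation: a dominates b if no worse in every objective
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate, min_size=3):
    """Keep nondominated solutions; if pruning dominated points would shrink
    the archive below min_size, retain the best dominated ones as filler."""
    pool = archive + [candidate]
    nondom = [p for p in pool if not any(dominates(q, p) for q in pool)]
    if len(nondom) >= min_size:
        return nondom
    # Assumed tie-break: rank dominated points by objective sum
    filler = sorted((p for p in pool if p not in nondom), key=sum)
    return nondom + filler[: min_size - len(nondom)]

# A dominating candidate prunes the front but the archive stays at min_size
arch = update_archive([(1, 5), (5, 1)], (0, 0))
```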
Fisz, Jacek J
2006-12-07
The optimization approach based on the genetic algorithm (GA) combined with the multiple linear regression (MLR) method is discussed. The GA-MLR optimizer is designed for nonlinear least-squares problems in which the model functions are linear combinations of nonlinear functions: the GA optimizes the nonlinear parameters, and the linear parameters are calculated from MLR. GA-MLR is an intuitive optimization approach that exploits all the advantages of the genetic algorithm technique. It results from an appropriate combination of two well-known optimization methods: the MLR method is embedded in the GA optimizer, and linear and nonlinear model parameters are optimized in parallel. The MLR method is the only strictly mathematical "tool" involved in GA-MLR. The approach simplifies and considerably accelerates the optimization process because the linear parameters are not among the fitted ones. Its properties are exemplified by the analysis of a kinetic biexponential fluorescence decay surface corresponding to a two-excited-state interconversion process. A short discussion of the variable projection (VP) algorithm, designed for the same class of optimization problems, is presented. VP is a very advanced mathematical formalism that involves the methods of nonlinear functionals, the algebra of linear projectors, and the formalism of Fréchet derivatives and pseudo-inverses. Additional explanatory comments are added on the application of the recently introduced GA-NR optimizer to the simultaneous recovery of linear and weakly nonlinear parameters occurring in the same optimization problem together with nonlinear parameters. The GA-NR optimizer combines the GA method with the NR method, in which the minimum-value condition for the quadratic approximation to chi(2), obtained from the Taylor series expansion of chi(2), is recovered by means of the Newton-Raphson algorithm. The application of the GA-NR optimizer to model functions which are multi
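The GA-MLR decomposition is the classic separable least-squares idea: for fixed nonlinear parameters, the linear ones follow from ordinary least squares. A minimal sketch on a noiseless biexponential decay, in the spirit of the abstract's example (the GA searches only the two lifetimes; amplitudes come from the MLR step); population sizes, operators, and data are illustrative assumptions.

```python
import numpy as np

# Synthetic biexponential decay: y(t) = a1*exp(-t/tau1) + a2*exp(-t/tau2)
t = np.linspace(0, 10, 200)
TRUE = dict(a1=2.0, tau1=0.8, a2=0.5, tau2=4.0)
y = TRUE["a1"] * np.exp(-t / TRUE["tau1"]) + TRUE["a2"] * np.exp(-t / TRUE["tau2"])

def linear_fit(taus):
    """MLR step: for fixed nonlinear parameters (the lifetimes), the
    amplitudes enter linearly and follow from ordinary least squares."""
    basis = np.column_stack([np.exp(-t / tau) for tau in taus])
    amps, *_ = np.linalg.lstsq(basis, y, rcond=None)
    resid = y - basis @ amps
    return amps, float(resid @ resid)

def ga_mlr(pop_size=40, gens=80, seed=5):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(0.1, 8.0, size=(pop_size, 2))  # candidate (tau1, tau2)
    for _ in range(gens):
        costs = np.array([linear_fit(ind)[1] for ind in pop])
        parents = pop[np.argsort(costs)[: pop_size // 2]]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            w = rng.random()                          # blend crossover
            child = w * a + (1 - w) * b + rng.normal(0, 0.05, size=2)
            children.append(np.clip(child, 0.05, 10.0))
        pop = np.vstack([parents, np.array(children)])
    best = min(pop, key=lambda ind: linear_fit(ind)[1])
    return np.sort(best), linear_fit(best)[1]

taus, sse = ga_mlr()
```

Because the amplitudes are solved exactly at every evaluation, the GA's search space is halved, which is the acceleration the abstract describes.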
A new theoretical approach to adsorption desorption behavior of Ga on GaAs surfaces
NASA Astrophysics Data System (ADS)
Kangawa, Y.; Ito, T.; Taguchi, A.; Shiraishi, K.; Ohachi, T.
2001-11-01
We propose a new theoretical approach for studying adsorption-desorption behavior of atoms on semiconductor surfaces. The approach, based on ab initio calculations, incorporates the free energy of the gas phase; therefore we can calculate how adsorption and desorption depend on growth temperature and beam equivalent pressure (BEP). The versatility of the approach was confirmed by calculating Ga adsorption-desorption transition temperatures and transition BEPs on the GaAs(0 0 1)-(4×2) β2 Ga-rich surface. This new approach makes it feasible to predict how adsorption and desorption depend on the growth conditions.
Genetic Algorithm (GA)-Based Inclinometer Layout Optimization.
Liang, Weijie; Zhang, Ping; Chen, Xianping; Cai, Miao; Yang, Daoguo
2015-04-17
This paper presents numerical simulation results for an airflow inclinometer, with sensitivity studies and genetic algorithm (GA)-based thermal optimization of its printed circuit board (PCB) layout. Due to the working principle of the gas sensor, changes in ambient temperature may cause dramatic voltage drifts of the sensors. Therefore, eliminating the influence of the external environment on the airflow is essential for the performance and reliability of an airflow inclinometer. In this paper, the mechanism of the airflow inclinometer and the influence of different ambient temperatures on its sensitivity are examined with the ANSYS-FLOTRAN CFD program. The results show that the sensitivity of the airflow inclinometer is inversely related to the ambient temperature at the sensing element, decreasing as the temperature increases. A GA is used to optimize the PCB thermal layout of the inclinometer. Finite-element simulation (ANSYS) is used to verify the optimal thermal layout, and the results indicate that the optimal PCB layout greatly improves (by more than 50%) the sensitivity of the inclinometer. The study may be useful in the design of PCB layouts aimed at improving the sensitivity of gas sensors.
A genetic engineering approach to genetic algorithms.
Gero, J S; Kazakov, V
2001-01-01
We present an extension to the standard genetic algorithm (GA), which is based on concepts of genetic engineering. The motivation is to discover useful and harmful genetic materials and then execute an evolutionary process in such a way that the population becomes increasingly composed of useful genetic material and increasingly free of the harmful genetic material. Compared to the standard GA, it provides some computational advantages as well as a tool for automatic generation of hierarchical genetic representations specifically tailored to suit certain classes of problems.
An Approach to Derive Parametric L-System Using Genetic Algorithm
NASA Astrophysics Data System (ADS)
Farooq, Humera; Zakaria, M. Nordin; Hassan, Mohd. Fadzil; Sulaiman, Suziah
In computer graphics, the L-System is widely used to model artificial plant structures and fractals, and the Genetic Algorithm (GA) is the most popular form of Evolutionary Algorithm. This paper examines a method for automatic plant modeling based on an integration of GA and Parametric L-System with an appropriate fitness function. The approach rests on a two-layered GA that derives the rewriting rules of a Parametric L-System: the higher level of the GA deals with the evolution of symbols and the lower level with the evolution of numerical parameters. Initial results are very promising, showing that complicated branching structures can be derived by the multilayered GA architecture.
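The Parametric L-System machinery that such a GA's fitness function would drive can be sketched as a rewriting engine. The branching rule below is a toy assumption for illustration, not one evolved by the authors' two-layered GA.

```python
def expand(axiom, rules, depth):
    """Apply parametric L-System productions `depth` times.
    Modules are (symbol, parameter) tuples; a rule maps one module's
    parameter to a list of successor modules."""
    word = list(axiom)
    for _ in range(depth):
        nxt = []
        for sym, p in word:
            if sym in rules:
                nxt.extend(rules[sym](p))
            else:
                nxt.append((sym, p))  # terminals are copied unchanged
        word = nxt
    return word

# Toy branching rule: an apex A(l) becomes a segment F(l) plus two shorter
# apices; '[' and ']' would push/pop turtle state (their parameters unused).
rules = {
    "A": lambda l: [("F", l), ("[", 0), ("A", 0.7 * l), ("]", 0),
                    ("[", 0), ("A", 0.5 * l), ("]", 0)],
}
word = expand([("A", 1.0)], rules, depth=2)
```

In the paper's setting, the higher GA layer would evolve the symbol sequence of the successor and the lower layer the numeric factors (0.7, 0.5 here).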
A "Tuned" Mask Learnt Approach Based on Gravitational Search Algorithm.
Wan, Youchuan; Wang, Mingwei; Ye, Zhiwei; Lai, Xudong
2016-01-01
Texture image classification is an important topic in many applications of machine vision and image analysis. Extracting texture features from the original image with a "Tuned" mask is one of the simplest and most effective methods. However, hill-climbing-based training methods often fail to acquire a satisfactory mask in a single run; on the other hand, commonly used evolutionary algorithms such as the genetic algorithm (GA) and particle swarm optimization (PSO) easily fall into local optima. A novel approach for texture image classification, exemplified by recognition of residential areas, is detailed in the paper. In the proposed approach, the design of the "Tuned" mask is viewed as a constrained optimization problem, and the optimal mask is acquired by maximizing the texture energy via a newly proposed gravitational search algorithm (GSA), obtained through the convergence of the GSA. The proposed approach has been tested on public texture and remote sensing images, and the results are compared with those of GA, PSO, honey-bee mating optimization (HBMO), and artificial immune algorithm (AIA); features extracted by Gabor wavelets are also used for a further comparison. Experimental results show that the proposed method is robust and adaptive and exhibits better performance than the other methods considered, in terms of both fitness value and classification accuracy.
apGA: An adaptive parallel genetic algorithm
Liepins, G.E.; Baluja, S.
1991-01-01
We develop apGA, a parallel variant of the standard generational GA, that combines aggressive search with perpetual novelty, yet is able to preserve enough genetic structure to optimally solve variably scaled, non-uniform block deceptive and hierarchical deceptive problems. apGA combines elitism, adaptive mutation, adaptive exponential scaling, and temporal memory. We present empirical results for six classes of problems, including the DeJong test suite. Although we have not investigated hybrids, we note that apGA could be incorporated into other recent GA variants such as GENITOR, CHC, and the recombination stage of mGA. 12 refs., 2 figs., 2 tabs.
Naresh-Kumar, G.; Trager-Cowan, C.; Vilalta-Clemente, A.; Morales, M.; Ruterana, P.; Pandey, S.; Cavallini, A.; Cavalcoli, D.; Skuridina, D.; Vogt, P.; Kneissl, M.; Behmenburg, H.; Giesen, C.; Heuken, M.; Gamarra, P.; Di Forte-Poisson, M. A.; Patriarche, G.; Vickridge, I.
2014-12-15
We report on our multi-pronged approach to understand the structural and electrical properties of an InAl(Ga)N(33nm barrier)/Al(Ga)N(1nm interlayer)/GaN(3μm)/AlN(100nm)/Al₂O₃ high electron mobility transistor (HEMT) heterostructure grown by metal organic vapor phase epitaxy (MOVPE). In particular we reveal and discuss the role of unintentional Ga incorporation in the barrier and also in the interlayer. The observation of unintentional Ga incorporation by energy dispersive X-ray spectroscopy analysis in a scanning transmission electron microscope is supported by results obtained for samples with a range of AlN interlayer thicknesses grown in both showerhead and horizontal-type MOVPE reactors. Poisson-Schrödinger simulations show that for high Ga incorporation in the Al(Ga)N interlayer, an additional triangular well of very small depth may form in parallel with the main 2-DEG channel. The presence of this additional channel may cause parasitic conduction and severe issues in device characteristics and processing. Producing a HEMT structure with InAlGaN as the barrier and AlGaN as the interlayer with appropriate alloy composition may be a possible route to optimization, as it might be difficult to avoid Ga incorporation while continuously depositing the layers using the MOVPE growth method. Our present work shows the necessity of a multicharacterization approach to correlate structural and electrical properties in order to understand device structures and their performance.
Economic Dispatch Using Genetic Algorithm Based Hybrid Approach
Tahir Nadeem Malik; Aftab Ahmad; Shahab Khushnood
2006-07-01
Power Economic Dispatch (ED) is a vital and essential daily optimization procedure in system operation. Present-day large generating units with multi-valve steam turbines exhibit large variation in their input-output characteristic functions, so non-convexity appears in the characteristic curves. Various mathematical and optimization techniques have been developed and applied to solve the economic dispatch problem. Most of these are calculus-based optimization algorithms based on successive linearization, using the first- and second-order derivatives of the objective function and its constraint equations as the search direction. They usually require the heat-input/power-output characteristics of generators to be monotonically increasing or piecewise linear. These simplifying assumptions result in an inaccurate dispatch. Genetic algorithms have been used to solve the economic dispatch problem independently and in conjunction with other AI tools and mathematical programming approaches. Genetic algorithms have an inherent ability to reach the global minimum region of the search space in a short time, but then take longer to converge to the solution; GA-based hybrid approaches get around this problem and produce encouraging results. This paper presents a brief survey of hybrid approaches for economic dispatch, an architecture of an extensible computational framework as a common environment for conventional, genetic-algorithm, and hybrid solutions to power economic dispatch, and the implementation of three algorithms in the developed framework. The framework was tested on standard test systems for performance evaluation. (authors)
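The non-convexity referred to above typically comes from valve-point loading, which adds a rectified-sine ripple to the quadratic fuel cost. A minimal GA sketch on a hypothetical three-unit system (all coefficients are illustrative, and the power-balance constraint is handled with a simple penalty rather than the repair or hybrid schemes the survey covers):

```python
import math
import random

# Three-unit test system with valve-point loading (illustrative coefficients)
UNITS = [  # (a, b, c, e, f, Pmin, Pmax)
    (500, 5.3, 0.004, 150, 0.063, 100, 600),
    (400, 5.5, 0.006, 120, 0.077, 100, 400),
    (200, 5.8, 0.009, 80, 0.084, 50, 200),
]
DEMAND = 850.0  # MW

def unit_cost(i, p):
    a, b, c, e, f, pmin, _ = UNITS[i]
    # |e sin(f (Pmin - P))| is the valve-point ripple -> non-convex cost
    return a + b * p + c * p * p + abs(e * math.sin(f * (pmin - p)))

def dispatch_cost(ps):
    penalty = 1e4 * abs(sum(ps) - DEMAND)  # power-balance penalty
    return sum(unit_cost(i, p) for i, p in enumerate(ps)) + penalty

def ga_dispatch(pop_size=80, gens=300, seed=11):
    rng = random.Random(seed)
    pop = [[rng.uniform(u[5], u[6]) for u in UNITS] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=dispatch_cost)
        parents = pop[: pop_size // 2]     # elitist truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            w = rng.random()               # blend crossover
            child = [w * x + (1 - w) * y for x, y in zip(a, b)]
            j = rng.randrange(len(UNITS))  # mutate one unit's output
            child[j] += rng.gauss(0, 5)
            child = [min(max(p, u[5]), u[6]) for p, u in zip(child, UNITS)]
            children.append(child)
        pop = parents + children
    return min(pop, key=dispatch_cost)

best = ga_dispatch()
```

A hybrid would typically hand `best` to a local (e.g. gradient or Newton) refinement stage, addressing the slow final convergence the survey mentions.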
NASA Astrophysics Data System (ADS)
Igeta, Hideki; Hasegawa, Mikio
Chaotic dynamics have been effectively applied to improve various heuristic algorithms for combinatorial optimization problems in many studies. Currently, the most common chaotic optimization scheme is to drive heuristic solution-search algorithms applicable to large-scale problems by chaotic neurodynamics, including the tabu effect of the tabu search. Alternatively, meta-heuristic algorithms perform combinatorial optimization by combining a neighboring-solution search algorithm, such as tabu, gradient, or another search method, with a global search algorithm, such as genetic algorithms (GA), ant colony optimization (ACO), or others. Among these hybrid approaches, ACO has effectively optimized the solutions of many benchmark problems in the quadratic assignment problem library. In this paper, we propose a novel hybrid method that combines a chaotic search algorithm, which performs better than the tabu search, with global search algorithms such as ACO and GA. Our results show that the proposed chaotic hybrid algorithm performs better than both the conventional chaotic search and conventional hybrid algorithms. In addition, we show that the chaotic search combined with ACO performs better than when combined with GA.
Variational approach to the calculation of gA
NASA Astrophysics Data System (ADS)
Owen, Benjamin J.; Dragos, Jack; Kamleh, Waseem; Leinweber, Derek B.; Mahbub, M. Selim; Menadue, Benjamin J.; Zanotti, James M.
2013-06-01
A long-standing problem in lattice QCD has been the discrepancy between the experimental and calculated values for the axial charge of the nucleon, gA ≡ GA(Q² = 0). Though finite-volume effects have been shown to be large, it has also been suggested that excited-state effects may play a significant role in suppressing the value of gA. In this work, we apply a variational method to generate operators that couple predominantly to the ground state, thus systematically removing excited-state contamination from the extraction of gA. The utility and success of this approach are manifest in the early onset of ground-state saturation and of a clear plateau in the correlation-function ratio proportional to gA. Through a comparison with results obtained via traditional methods, we show how excited-state effects can suppress gA by as much as 8% if sources are not properly tuned or source-sink separations are insufficiently large.
Cystic Lung Diseases: Algorithmic Approach.
Raoof, Suhail; Bondalapati, Praveen; Vydyula, Ravikanth; Ryu, Jay H; Gupta, Nishant; Raoof, Sabiha; Galvin, Jeff; Rosen, Mark J; Lynch, David; Travis, William; Mehta, Sanjeev; Lazzaro, Richard; Naidich, David
2016-10-01
Cysts are commonly seen on CT scans of the lungs, and diagnosis can be challenging. Clinical and radiographic features combined with a multidisciplinary approach may help differentiate among various disease entities, allowing correct diagnosis. It is important to distinguish cysts from cavities because they each have distinct etiologies and associated clinical disorders. Conditions such as emphysema and cystic bronchiectasis may also mimic cystic disease. A simplified classification of cysts is proposed. Cysts can occur in greater profusion in the subpleural areas, where they typically represent paraseptal emphysema, bullae, or honeycombing. Cysts that are present in the lung parenchyma but away from subpleural areas may be present without any other abnormalities on high-resolution CT scans. These are further categorized into solitary or multifocal/diffuse cysts. Solitary cysts may be incidentally discovered and may be an age-related phenomenon or a remnant of prior trauma or infection. Multifocal/diffuse cysts can occur with lymphoid interstitial pneumonia, Birt-Hogg-Dubé syndrome, tracheobronchial papillomatosis, or primary and metastatic cancers. Multifocal/diffuse cysts may be associated with nodules (lymphoid interstitial pneumonia, light-chain deposition disease, amyloidosis, and Langerhans cell histiocytosis) or with ground-glass opacities (Pneumocystis jirovecii pneumonia and desquamative interstitial pneumonia). Using high-resolution CT findings as a starting point, and incorporating the patient's clinical history, physical examination, and laboratory findings, is likely to narrow the differential diagnosis of cystic lesions considerably.
FOX-GA: a genetic algorithm for generating and analyzing battlefield courses of action.
Schlabach, J L; Hayes, C C; Goldberg, D E
1999-01-01
This paper describes FOX-GA, a genetic algorithm (GA) that generates and evaluates plans in the complex domain of military maneuver planning. FOX-GA's contributions are to demonstrate an effective application of GA technology to a complex real-world planning problem, and to provide an understanding of the properties needed in a GA solution to meet the challenges of decision support in complex domains. Previous obstacles to applying GA technology to maneuver planning include the lack of efficient algorithms for determining the fitness of plans. Detailed simulations would ideally be used to evaluate these plans, but such simulations typically require several hours to assess a single plan. Since a GA needs to quickly generate and evaluate thousands of plans, these methods are too slow. To solve this problem we developed an efficient evaluator (wargamer) that uses coarse-grained representations of the problem domain to allow intelligent trade-offs between computational efficiency and accuracy. An additional challenge was that users needed a diverse set of significantly different plan options from which to choose. Typical GAs tend to develop a group of "best" solutions that may be very similar (or identical) to each other, which may not provide users with sufficient choice. We addressed this problem by adding a niching strategy to the selection mechanism to ensure diversity in the solution set, providing users with a more satisfactory range of choices. FOX-GA's impact will be in providing decision support to time-constrained and cognitively overloaded battlestaff to help them rapidly explore options, create plans, and better cope with the information demands of modern warfare.
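The niching idea used to keep plan options diverse can be illustrated with classic fitness sharing, where raw fitness is divided by a niche count so crowded regions are penalised. This is a generic 1-D sketch on a bimodal function, not FOX-GA's actual mechanism; the sharing radius and mutation scale are assumptions.

```python
import random

def shared_fitness(pop, raw, sigma_share=1.0, alpha=1.0):
    """Fitness sharing: divide raw fitness by the niche count, i.e. the
    summed similarity to all individuals within sigma_share."""
    out = []
    for i, x in enumerate(pop):
        niche = 0.0
        for y in pop:
            d = abs(x - y)
            if d < sigma_share:
                niche += 1 - (d / sigma_share) ** alpha
        out.append(raw[i] / niche)  # niche >= 1 (self-similarity)
    return out

# Bimodal fitness with equal-height peaks at x=2 and x=8
def f(x):
    return max(0.0, 1 - min(abs(x - 2), abs(x - 8)))

random.seed(4)
pop = [random.uniform(0, 10) for _ in range(40)]
for _ in range(100):
    raw = [f(x) for x in pop]
    sh = shared_fitness(pop, raw)
    ranked = [x for _, x in sorted(zip(sh, pop), reverse=True)]
    parents = ranked[:20]                      # select on SHARED fitness
    pop = parents + [min(10, max(0, x + random.gauss(0, 0.3))) for x in parents]

near2 = sum(1 for x in pop if abs(x - 2) < 1)  # both niches should persist
near8 = sum(1 for x in pop if abs(x - 8) < 1)
```

Without the sharing step, selection on `raw` alone would usually let one peak crowd out the other, which is exactly the loss of choice the paper set out to avoid.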
Beyond Keyframing: An Algorithmic Approach to Animation
1991-05-01
Stewart, A. James; Cremer, James F. TR 91-1207. This work was supported in part by NSF grant DMC 86-17355, ONR grant N0014-86K-0281, and DARPA grant N0014-88K-0591.
The royal road for genetic algorithms: Fitness landscapes and GA performance
Mitchell, M.; Holland, J.H.; Forrest, S. (Dept. of Computer Science)
1991-01-01
Genetic algorithms (GAs) play a major role in many artificial-life systems, but there is often little detailed understanding of why the GA performs as it does, and little theoretical basis on which to characterize the types of fitness landscapes that lead to successful GA performance. In this paper we propose a strategy for addressing these issues. Our strategy consists of defining a set of features of fitness landscapes that are particularly relevant to the GA, and experimentally studying how various configurations of these features affect the GA's performance along a number of dimensions. In this paper we informally describe an initial set of proposed feature classes, describe in detail one such class ("Royal Road" functions), and present some initial experimental results concerning the role of crossover and "building blocks" on landscapes constructed from features of this class. 27 refs., 1 fig., 5 tabs.
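The basic Royal Road function is simple to state: a bit string earns points for every complete, aligned block of ones. A sketch of the R1-style scheme (the 64-bit string with 8-bit blocks, each worth its length, follows the paper's usual setup, though the exact payoff scheme here is a simplification):

```python
def royal_road(bits, block=8):
    """R1-style Royal Road: the string scores `block` points for every
    aligned, fully-set block of length `block`."""
    assert len(bits) % block == 0
    return sum(block for i in range(0, len(bits), block)
               if all(bits[i:i + block]))

# 64-bit string with exactly two complete 8-bit blocks set
s = [1] * 16 + [0] * 48
score = royal_road(s)
```

The landscape rewards only whole blocks, which is what makes it a probe for how crossover assembles building blocks.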
DeMAID/GA USER'S GUIDE Design Manager's Aid for Intelligent Decomposition with a Genetic Algorithm
NASA Technical Reports Server (NTRS)
Rogers, James L.
1996-01-01
Many companies are looking for new tools and techniques to aid a design manager in making decisions that can reduce the time and cost of a design cycle. One tool that is available to aid in this decision-making process is the Design Manager's Aid for Intelligent Decomposition (DeMAID). Since the initial release of DeMAID in 1989, numerous enhancements have been added to aid the design manager in saving both cost and time in a design cycle. The key enhancement is a genetic algorithm (GA), and the enhanced version is called DeMAID/GA. The GA orders the sequence of design processes to minimize the cost and time to converge to a solution. These enhancements, as well as the existing features of the original version of DeMAID, are described. Two sample problems are used to show how these enhancements can be applied to improve the design cycle. This report serves as a user's guide for DeMAID/GA.
NASA Astrophysics Data System (ADS)
Siade, A. J.; Cheng, W.; Yeh, W. W.
2010-12-01
This study optimizes observation well locations and sampling frequencies for the purpose of estimating unknown groundwater extraction in an aquifer system. Proper orthogonal decomposition (POD) is used to reduce the groundwater flow model, thus reducing the computation burden and data storage space associated with solving this problem for heavily discretized models. This reduced model can store a significant amount of system information in a much smaller reduced state vector. Along with the sensitivity equation method, the proposed approach can efficiently compute the Jacobian matrix that forms the information matrix associated with the experimental design. The criterion adopted for experimental design is the maximization of the trace of the weighted information matrix. Under certain conditions, this is equivalent to the classical A-optimality criterion established in experimental design. A genetic algorithm (GA) is used to optimize the observation well locations and sampling frequencies for maximizing the collected information from the hydraulic head sampling at the observation wells. We applied the proposed approach to a hypothetical 30,000-node groundwater aquifer system. We studied the relationship among the number of observation wells, observation well locations, sampling frequencies, and the collected information for estimating unknown groundwater extraction.
Identification of handwriting by using the genetic algorithm (GA) and support vector machine (SVM)
NASA Astrophysics Data System (ADS)
Zhang, Qigui; Deng, Kai
2016-12-01
As portable digital cameras and camera phones become increasingly popular, there is an equally pressing need to photograph, identify, and store handwritten characters at any time. In this paper, a genetic algorithm (GA) and a support vector machine (SVM) are used for the identification of handwriting. Compared with parameter-optimization methods, this technique overcomes two defects: first, the tendency to become trapped in a local optimum; second, the loss of classification and prediction efficiency when searching for the best parameters over a large range. As the experimental results suggest, GA-SVM has a higher recognition rate.
The mGA1.0: A common LISP implementation of a messy genetic algorithm
NASA Technical Reports Server (NTRS)
Goldberg, David E.; Kerzic, Travis
1990-01-01
Genetic algorithms (GAs) are finding increased application in difficult search, optimization, and machine learning problems in science and engineering. Increasing demands are being placed on algorithm performance, and the remaining challenges of genetic algorithm theory and practice are becoming increasingly unavoidable. Perhaps the most difficult of these challenges is the so-called linkage problem. Messy GAs were created to overcome the linkage problem of simple genetic algorithms by combining variable-length strings, gene expression, messy operators, and a nonhomogeneous phasing of evolutionary processing. Results on a number of difficult deceptive test functions are encouraging, with the mGA always finding global optima in a polynomial number of function evaluations. Theoretical and empirical studies are continuing, and a first version of a messy GA is ready for testing by others. A Common LISP implementation called mGA1.0 is documented and related to the basic principles and operators developed by Goldberg et al. (1989, 1990). Although the code was prepared with care, it is not a general-purpose code, only a research version. Important data structures and global variables are described. Thereafter, brief function descriptions are given, and sample input data are presented together with sample program output. A source listing with comments is also included.
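The messy-GA ingredients listed above, variable-length strings of (gene, allele) pairs, cut and splice operators, and first-come first-served gene expression over a template, can be illustrated with a simplified sketch (hypothetical Python, not the documented Common LISP code of mGA1.0):

```python
def cut(chrom, point):
    """Cut a messy chromosome (a list of (gene, allele) pairs) in two."""
    return chrom[:point], chrom[point:]

def splice(left, right):
    """Splice two fragments; duplicate or missing genes are allowed."""
    return left + right

def express(chrom, template):
    """First-come, first-served gene expression over a competitive template.

    The first occurrence of each gene wins; genes absent from the
    chromosome keep their value from the template.
    """
    phenotype = list(template)
    seen = set()
    for gene, allele in chrom:
        if gene not in seen:
            phenotype[gene] = allele
            seen.add(gene)
    return phenotype
```

Because strings are position independent and variable length, cut and splice can freely rearrange tightly linked gene groups, which is how the messy GA attacks the linkage problem.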
DFT algorithms for bit-serial GaAs array processor architectures
NASA Technical Reports Server (NTRS)
Mcmillan, Gary B.
1988-01-01
Systems and Processes Engineering Corporation (SPEC) has developed an innovative array processor architecture for computing Fourier transforms and other commonly used signal processing algorithms. This architecture is designed to extract the highest possible array performance from state-of-the-art GaAs technology. SPEC's architectural design includes a high performance RISC processor implemented in GaAs, along with a Floating Point Coprocessor and a unique Array Communications Coprocessor, also implemented in GaAs technology. Together, these data processors represent the latest in technology, both from an architectural and implementation viewpoint. SPEC has examined numerous algorithms and parallel processing architectures to determine the optimum array processor architecture. SPEC has developed an array processor architecture with integral communications ability to provide maximum node connectivity. The Array Communications Coprocessor embeds communications operations directly in the core of the processor architecture. A Floating Point Coprocessor architecture has been defined that utilizes Bit-Serial arithmetic units, operating at very high frequency, to perform floating point operations. These Bit-Serial devices reduce the device integration level and complexity to a level compatible with state-of-the-art GaAs device technology.
A novel mating approach for genetic algorithms.
Galán, Severino F; Mengshoel, Ole J; Pinter, Rafael
2013-01-01
Genetic algorithms typically use crossover, which relies on mating a set of selected parents. As part of crossover, random mating is often carried out. A novel approach to parent mating is presented in this work. Our novel approach can be applied in combination with a traditional similarity-based criterion to measure distance between individuals or with a fitness-based criterion. We introduce a parameter called the mating index that allows different mating strategies to be developed within a uniform framework: an exploitative strategy called best-first, an explorative strategy called best-last, and an adaptive strategy called self-adaptive. Self-adaptive mating is defined in the context of the novel algorithm, and aims to achieve a balance between exploitation and exploration in a domain-independent manner. The present work formally defines the novel mating approach, analyzes its behavior, and conducts an extensive experimental study to quantitatively determine its benefits. In the domain of real function optimization, the experiments show that, as the degree of multimodality of the function at hand grows, increasing the mating index improves performance. In the case of the self-adaptive mating strategy, the experiments give strong results for several case studies.
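Under the fitness-based criterion, the mating index can be read as how far away in the fitness ranking a selected parent looks for its mate; the following is a hedged sketch of that idea (the function name and exact rule are illustrative, not the authors' formulation):

```python
def choose_mate(sorted_population, parent_rank, mating_index):
    """Pick a mate for the parent at parent_rank in a fitness-sorted population.

    mating_index = 0 mates with the closest-ranked individual (an
    exploitative, best-first flavour); the maximum index mates with the
    most distant-ranked individual (an explorative, best-last flavour).
    """
    n = len(sorted_population)
    candidates = [r for r in range(n) if r != parent_rank]
    # Order candidate ranks by their distance from the parent's rank.
    candidates.sort(key=lambda r: abs(r - parent_rank))
    mate_rank = candidates[min(mating_index, len(candidates) - 1)]
    return sorted_population[mate_rank]
```

A self-adaptive strategy could then evolve the mating index itself alongside each individual, trading exploitation against exploration without domain knowledge.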
Evolutionary Algorithms Approach to the Solution of Damage Detection Problems
NASA Astrophysics Data System (ADS)
Salazar Pinto, Pedro Yoajim; Begambre, Oscar
2010-09-01
In this work, a new Self-Configured Hybrid Algorithm is proposed by combining Particle Swarm Optimization (PSO) and a Genetic Algorithm (GA). The aim of the proposed strategy is to increase the stability and accuracy of the search. The central idea is the concept of the Guide Particle: this particle (the best PSO global in each generation) transmits its information to a particle of the following PSO generation, which is controlled by the GA. Thus, the proposed hybrid has an elitism feature that improves its performance and guarantees the convergence of the procedure. In different tests carried out on benchmark functions reported in the international literature, better performance in stability and accuracy was observed; the new algorithm was therefore used to identify damage in a simply supported beam using modal data. Finally, it is worth noting that the algorithm is independent of the initial definition of heuristic parameters.
Beyond Keyframing: An Algorithmic Approach to Animation
1989-01-09
Beyond Keyframing: An Algorithmic Approach to Animation. A. James Stewart, James F. Cremer. Computer Science Department, Cornell University. This work was supported in part by ONR grant N0014-86K-0281 and DARPA grant N0014-88K-0591; support for James Stewart is provided in part by a U.S. Army Mathematical Sciences Institute grant. References include: [Cre89] James F. Cremer, PhD thesis, Cornell University, in preparation, 1989; [CS88] James F. Cremer and A. James Stewart.
An approaching genetic algorithm for automatic beam angle selection in IMRT planning.
Lei, Jie; Li, Yongjie
2009-03-01
A method named the approaching genetic algorithm (AGA) is introduced to automatically select the beam angles for intensity-modulated radiotherapy (IMRT) planning. In AGA, the best individual of the current population is found first, and the rest of the normal individuals approach the current best one according to some specially designed rules. In the course of approaching, some better individuals may be obtained. Then, the current best individual is updated to try to approach the real best one. The approaching and updating operations of AGA completely replace the selection, crossover and mutation operations of the genetic algorithm (GA). Using the specially designed updating strategies, AGA can recover the variety of the population to a certain extent and retain the powerful ability of evolution, compared to GA. The beam angles are selected using AGA, followed by a beam intensity map optimization using conjugate gradient (CG). A simulated case and a clinical case with nasopharynx cancer are employed to demonstrate the feasibility of AGA. For the cases investigated, AGA was feasible for the beam angle optimization (BAO) problem in IMRT planning and converged faster than GA.
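A plausible reading of the approaching operation is that each normal individual moves a random fraction of the way toward the current best individual along every dimension (a hypothetical sketch; the paper's specially designed rules are not spelled out in the abstract):

```python
import random

def approach(individual, best, step=0.5, rng=random):
    """Move an individual toward the current best individual.

    Each coordinate moves a random fraction (up to `step`) of its distance
    to the best; repeated over the population, approach-and-update steps
    of this kind stand in for selection, crossover and mutation.
    """
    return [x + rng.random() * step * (b - x)
            for x, b in zip(individual, best)]
```

After all individuals have approached, the best individual would be re-evaluated and updated, and the cycle repeats until convergence.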
An efficient multi-resolution GA approach to dental image alignment
NASA Astrophysics Data System (ADS)
Nassar, Diaa Eldin; Ogirala, Mythili; Adjeroh, Donald; Ammar, Hany
2006-02-01
Automating the process of postmortem identification of individuals using dental records is receiving increased attention in forensic science, especially with the large volume of victims encountered in mass disasters. Dental radiograph alignment is a key step required for automating the dental identification process. In this paper, we address the problem of dental radiograph alignment using a Multi-Resolution Genetic Algorithm (MR-GA) approach. We use location and orientation information of edge points as features; we assume that affine transformations suffice to restore geometric discrepancies between two images of a tooth; we efficiently search the 6D space of affine parameters using GA progressively across multi-resolution image versions; and we use a Hausdorff distance measure to compute the similarity between a reference tooth and a query tooth subject to a possible alignment transform. Testing results based on 52 teeth-pair images suggest that our algorithm converges to reasonable solutions in more than 85% of the test cases, with most of the error in the remaining cases due to excessive misalignments.
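The Hausdorff distance used as the similarity measure can be sketched directly over two edge-point sets (a minimal illustration using Euclidean point distances only; the paper's features also carry edge orientation):

```python
import math

def directed_hausdorff(a, b):
    """Largest distance from any point in set a to its nearest point in b."""
    return max(min(math.dist(p, q) for q in b) for p in a)

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets."""
    return max(directed_hausdorff(a, b), directed_hausdorff(b, a))
```

In an alignment loop, the GA would apply a candidate affine transform to the query tooth's edge points and score the candidate by the resulting Hausdorff distance to the reference tooth.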
A Genetic Algorithm and Fuzzy Logic Approach for Video Shot Boundary Detection.
Thounaojam, Dalton Meitei; Khelchandra, Thongam; Manglem Singh, Kh; Roy, Sudipta
2016-01-01
This paper proposes a shot boundary detection approach using a Genetic Algorithm and Fuzzy Logic. Here, the membership functions of the fuzzy system are calculated using the Genetic Algorithm, taking pre-observed actual values for shot boundaries. The classification of the types of shot transitions is done by the fuzzy system. Experimental results show that the accuracy of shot boundary detection increases with the number of iterations or generations of the GA optimization process. The proposed system is compared with recent techniques and yields better results in terms of the F1 score.
A “Tuned” Mask Learnt Approach Based on Gravitational Search Algorithm
Wan, Youchuan; Ye, Zhiwei
2016-01-01
Texture image classification is an important topic in many applications of machine vision and image analysis. Texture features extracted from the original texture image using a "Tuned" mask constitute one of the simplest and most effective methods. However, hill-climbing-based training methods cannot obtain a satisfactory mask in a single run; on the other hand, some commonly used evolutionary algorithms, such as the genetic algorithm (GA) and particle swarm optimization (PSO), easily fall into local optima. A novel approach for texture image classification, exemplified by the recognition of residential areas, is detailed in this paper. In the proposed approach, finding the "Tuned" mask is viewed as a constrained optimization problem, and the optimal "Tuned" mask is acquired by maximizing the texture energy via a newly proposed gravitational search algorithm (GSA). The optimal "Tuned" mask is achieved through the convergence of GSA. The proposed approach has been tested on public texture and remote sensing images, respectively. The results are then compared with those of GA, PSO, honey-bee mating optimization (HBMO), and artificial immune algorithm (AIA). Moreover, features extracted by Gabor wavelets are also utilized for a further comparison. Experimental results show that the proposed method is robust and adaptive and exhibits better performance than the other methods involved in the paper in terms of fitness value and classification accuracy. PMID:28090204
Zhang, Wenyu; Yu, Dejian
2015-01-01
As E-government continues to develop with ever-increasing speed, the requirement to enhance traditional government systems and affairs with electronic methods that are more effective and efficient is becoming critical. As a new product of information technology, E-tendering is becoming an inevitable reality owing to its efficiency, fairness, transparency, and accountability. Thus, developing and promoting government E-tendering (GeT) is imperative. This paper presents a hybrid approach combining genetic algorithm (GA) and Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS) to enable GeT to search for the optimal tenderer efficiently and fairly under circumstances where the attributes of the tenderers are expressed as fuzzy number intuitionistic fuzzy sets (FNIFSs). GA is applied to obtain the optimal weights of evaluation criteria of tenderers automatically. TOPSIS is employed to search for the optimal tenderer. A prototype system is built and validated with an illustrative example from GeT to verify the feasibility and availability of the proposed approach. PMID:26147468
Global Precipitation Measurement: GPM Microwave Imager (GMI) Algorithm Development Approach
NASA Technical Reports Server (NTRS)
Stocker, Erich Franz
2009-01-01
This slide presentation reviews the approach to the development of the Global Precipitation Measurement algorithm. This presentation includes information about the responsibilities for the development of the algorithm, and the calibration. Also included is information about the orbit, and the sun angle. The test of the algorithm code will be done with synthetic data generated from the Precipitation Processing System (PPS).
NASA Astrophysics Data System (ADS)
Djeffal, F.; Lakhdar, N.; Meguellati, M.; Benhaya, A.
2009-09-01
The analytical modeling of electron mobility in wurtzite gallium nitride (GaN) requires several simplifying assumptions, generally necessary to arrive at compact expressions for the electron transport characteristics of GaN-based devices. Further progress in the development, design and optimization of GaN-based devices necessarily requires new theory and modeling tools to improve the accuracy and computational time of device simulators. Recently, the evolutionary techniques of genetic algorithms (GA) and particle swarm optimization (PSO) have attracted considerable attention among heuristic optimization techniques. In this paper, a particle swarm optimizer is implemented and compared to a genetic algorithm for the modeling and optimization of a new closed-form electron mobility model for GaN-based device design. The performance of both optimization techniques in terms of computational time and convergence rate is also compared. Further, our results for both techniques (PSO and GA) are tested and compared with numerical data (Monte Carlo simulations), where good agreement has been found over a wide range of temperature, doping and applied electric field. The developed analytical models can also be incorporated into circuit simulators to study GaN-based devices without impact on computational time and data storage.
Silva, Leonardo W T; Barros, Vitor F; Silva, Sandro G
2014-08-18
In launching operations, Rocket Tracking Systems (RTS) process the trajectory data obtained by radar sensors. In order to improve functionality and maintenance, radars can be upgraded by replacing parabolic reflector (PR) antennas with phased arrays (PAs). These arrays enable electronic control of the radiation pattern by adjusting the signal supplied to each radiating element. However, in projects of phased array radars (PARs), the modeling of the problem is subject to various combinations of excitation signals, producing a complex optimization problem. In this case, it is possible to calculate the problem solutions with optimization methods such as genetic algorithms (GAs). For this, the Genetic Algorithm with Maximum-Minimum Crossover (GA-MMC) method was developed to control the radiation pattern of PAs. The GA-MMC uses a reconfigurable algorithm with multiple objectives, differentiated coding and a new crossover genetic operator. This operator takes a different approach from the conventional one, because it performs the crossover of the fittest individuals with the least fit individuals in order to enhance the genetic diversity. Thus, GA-MMC was successful in more than 90% of the tests for each application, increased the fitness of the final population by more than 20% and reduced premature convergence.
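The crossover pairing described, mating the fittest individuals with the least fit to raise genetic diversity, can be sketched as follows (an illustrative reading of the maximum-minimum pairing, not the complete GA-MMC):

```python
def max_min_pairs(population, fitness):
    """Pair the fittest individuals with the least fit for crossover.

    The population is sorted by fitness; the best is paired with the
    worst, the second best with the second worst, and so on, mixing
    genetic material from opposite ends of the fitness range. With an
    odd population size, the median individual is left unpaired.
    """
    order = sorted(range(len(population)),
                   key=lambda i: fitness[i], reverse=True)
    pairs = []
    for k in range(len(order) // 2):
        pairs.append((population[order[k]], population[order[-1 - k]]))
    return pairs
```

Conventional mating would instead pair fit individuals with each other; the max-min pairing deliberately sacrifices some short-term selection pressure to delay premature convergence.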
New pole placement algorithm - Polynomial matrix approach
NASA Technical Reports Server (NTRS)
Shafai, B.; Keel, L. H.
1990-01-01
A simple and direct pole-placement algorithm is introduced for dynamical systems having a block companion matrix A. The algorithm utilizes well-established properties of matrix polynomials. Pole placement is achieved by appropriately assigning coefficient matrices of the corresponding matrix polynomial. This involves only matrix additions and multiplications without requiring matrix inversion. A numerical example is given for the purpose of illustration.
An algorithmic approach to crustal deformation analysis
NASA Technical Reports Server (NTRS)
Iz, Huseyin Baki
1987-01-01
In recent years the analysis of crustal deformation measurements has become important as a result of current improvements in geodetic methods and an increasing amount of theoretical and observational data provided by several earth sciences. A first-generation data analysis algorithm which combines a priori information with current geodetic measurements was proposed. Relevant methods which can be used in the algorithm were discussed. Prior information is the unifying feature of this algorithm. Some of the problems which may arise through the use of a priori information in the analysis were indicated and preventive measures were demonstrated. The first step in the algorithm is the optimal design of deformation networks. The second step in the algorithm identifies the descriptive model of the deformation field. The final step in the algorithm is the improved estimation of deformation parameters. Although deformation parameters are estimated in the process of model discrimination, they can further be improved by the use of a priori information about them. According to the proposed algorithm this information must first be tested against the estimates calculated using the sample data only. Null-hypothesis testing procedures were developed for this purpose. Six different estimators which employ a priori information were examined. Emphasis was put on the case when the prior information is wrong and analytical expressions for possible improvements under incompatible prior information were derived.
Saborido, Rubén; Ruiz, Ana B; Luque, Mariano
2016-02-08
In this article, we propose a new evolutionary algorithm for multiobjective optimization called Global WASF-GA (global weighting achievement scalarizing function genetic algorithm), which falls within the aggregation-based evolutionary algorithms. The main purpose of Global WASF-GA is to approximate the whole Pareto optimal front. Its fitness function is defined by an achievement scalarizing function (ASF) based on the Tchebychev distance, in which two reference points are considered (both utopian and nadir objective vectors) and the weight vector used is taken from a set of weight vectors whose inverses are well-distributed. At each iteration, all individuals are classified into different fronts. Each front is formed by the solutions with the lowest values of the ASF for the different weight vectors in the set, using the utopian vector and the nadir vector as reference points simultaneously. Varying the weight vector in the ASF while considering the utopian and the nadir vectors at the same time enables the algorithm to obtain a final set of nondominated solutions that approximate the whole Pareto optimal front. We compared Global WASF-GA to MOEA/D (different versions) and NSGA-II in two-, three-, and five-objective problems. The computational results obtained permit us to conclude that Global WASF-GA gets better performance, regarding the hypervolume metric and the epsilon indicator, than the other two algorithms in many cases, especially in three- and five-objective problems.
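A Tchebychev-type achievement scalarizing function of the kind described can be sketched for one weight vector and one reference point (a hedged illustration; Global WASF-GA evaluates such an ASF against both the utopian and the nadir vectors across a whole set of weight vectors):

```python
def asf(objectives, reference, weights, rho=1e-6):
    """Tchebychev-based achievement scalarizing function (minimization).

    Returns the weighted Tchebychev distance from the reference point,
    plus a small augmentation term that breaks ties between weakly
    Pareto optimal solutions.
    """
    terms = [w * (f - r) for f, r, w in zip(objectives, reference, weights)]
    return max(terms) + rho * sum(terms)
```

In the fitness assignment, individuals with the lowest ASF values for the different weight vectors, computed with the utopian and the nadir vectors as reference points, would populate the first front, so minimizing the ASF drives the population toward the whole Pareto optimal front.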
Band-Gap Design of Quaternary (In,Ga) (As,Sb) Semiconductors via the Inverse-Band-Structure Approach
Piquini, P.; Graf, P. A.; Zunger, A.
2008-01-01
Quaternary systems, illustrated by (Ga,In)(As,Sb), manifest a huge configurational space, offering in principle the possibility of designing structures that are lattice matched to a given substrate and have given electronic properties (e.g., band gap) at more than one composition. Such specific configurations have, however, hitherto remained unidentified. We show here that, using a genetic-algorithm search with a pseudopotential inverse-band-structure (IBS) approach, it is possible to identify configurations that are naturally lattice matched (to GaSb) and have a specific band gap (310 meV) at more than one composition. This is done by deviating from randomness, allowing the IBS to find a partial atomic ordering. This illustrates multitarget design of the electronic structure of multinary systems.
Chen, Bor-Sen; Chen, Po-Wei
2010-01-01
In the past decade, the development of synthetic gene networks has attracted much attention from researchers. In particular, the genetic oscillator known as the repressilator has become a paradigm for how to design a gene network with a desired dynamic behaviour. Even though the repressilator can show oscillatory properties in its protein concentrations, their amplitudes, frequencies and phases are perturbed by kinetic parametric fluctuations (intrinsic molecular perturbations) and external disturbances (extrinsic molecular noises) of the environment. Therefore, how to design a robust genetic oscillator with desired amplitude, frequency and phase under stochastic intrinsic and extrinsic molecular noises is an important topic for synthetic biology. In this study, based on periodic reference signals with arbitrary amplitudes, frequencies and phases, a robust synthetic gene oscillator is designed by tuning the kinetic parameters of the repressilator via a genetic algorithm (GA) so that the protein concentrations can track the desired periodic reference signals under intrinsic and extrinsic molecular noises. GA is a stochastic optimization algorithm inspired by the mechanisms of natural selection and evolutionary genetics. With the proposed GA-based design algorithm, the repressilator can track the desired amplitude, frequency and phase of oscillation under intrinsic and extrinsic noises through the optimization of a fitness function. The proposed GA-based design algorithm can mimic natural selection in the evolutionary process to select adequate kinetic parameters for robust genetic oscillators. The design method can be easily extended to any synthetic gene network design with prescribed behaviours. PMID:20535234
Probing genetic algorithms for feature selection in comprehensive metabolic profiling approach.
Zou, Wei; Tolstikov, Vladimir V
2008-04-01
Six different clones of 1-year-old loblolly pine (Pinus taeda L.) seedlings grown under standardized conditions in a greenhouse were used for sample preparation and further analysis. Three independent and complementary analytical techniques for metabolic profiling were applied in the present study: hydrophilic interaction chromatography (HILIC-LC/ESI-MS), reversed-phase liquid chromatography (RP-LC/ESI-MS), and gas chromatography, all coupled to mass spectrometry (GC/TOF-MS). Unsupervised methods, such as principal component analysis (PCA) and clustering, and supervised methods, such as classification, were used for data mining. Genetic algorithms (GA), a multivariate approach, were probed for the selection of the smallest subsets of potentially discriminative classifiers. From more than 2000 peaks found in total, small subsets were selected by GA as highly promising classifiers allowing discrimination among the six investigated genotypes. Annotated GC/TOF-MS data allowed the generation of a small subset of identified metabolites. LC/ESI-MS data and small subsets require further annotation. The present study demonstrated that the combination of comprehensive metabolic profiling and advanced data mining techniques provides a powerful metabolomic approach for biomarker discovery among small molecules. Utilizing GA for feature selection allowed the generation of small subsets of potent classifiers.
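GA-based feature selection of this kind is commonly encoded as a binary mask over the peak list, evolved to maximize a classifier-quality score (a generic sketch under that assumption; the study's actual GA configuration is not given in the abstract):

```python
import random

def evolve_feature_mask(n_features, score, generations=50, pop_size=20,
                        mutation_rate=0.05, rng=random):
    """Evolve a binary feature mask maximizing `score(mask)`.

    `score` takes a tuple of 0/1 flags (one per peak/feature) and returns
    a quality estimate, e.g. cross-validated classifier accuracy minus a
    penalty on subset size, so that small subsets of potent classifiers win.
    """
    pop = [tuple(rng.randint(0, 1) for _ in range(n_features))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=score, reverse=True)
        survivors = pop[:pop_size // 2]        # elitist truncation selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = rng.sample(survivors, 2)
            point = rng.randrange(1, n_features)   # one-point crossover
            child = a[:point] + b[point:]
            child = tuple(bit ^ (rng.random() < mutation_rate)  # bit-flip mutation
                          for bit in child)
            children.append(child)
        pop = survivors + children
    return max(pop, key=score)
```

With more than 2000 peaks, the search space is 2^2000 masks, which is exactly the regime where a GA is a reasonable heuristic and exhaustive search is hopeless.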
Simplification of multiple Fourier series - An example of algorithmic approach
NASA Technical Reports Server (NTRS)
Ng, E. W.
1981-01-01
This paper describes one example of multiple Fourier series which originate from a problem of spectral analysis of time series data. The example is exercised here with an algorithmic approach which can be generalized for other series manipulation on a computer. The generalized approach is presently pursued towards applications to a variety of multiple series and towards a general purpose algorithm for computer algebra implementation.
NASA Astrophysics Data System (ADS)
Subramanian, Nithya
Optimization under uncertainty accounts for design variables and external parameters or factors with probabilistic distributions instead of fixed deterministic values; it enables problem formulations that might maximize or minimize an expected value while satisfying constraints using probabilities. For discrete optimization under uncertainty, a Monte Carlo Sampling (MCS) approach enables high-accuracy estimation of expectations, but it also results in high computational expense. The Genetic Algorithm (GA) with a Population-Based Sampling (PBS) technique enables optimization under uncertainty with discrete variables at a lower computational expense than using Monte Carlo sampling for every fitness evaluation. Population-Based Sampling uses fewer samples in the exploratory phase of the GA and a larger number of samples when 'good designs' start emerging over the generations. This sampling technique therefore reduces the computational effort spent on 'poor designs' found in the initial phase of the algorithm. Parallel computation evaluates the expected value of the objective and constraints in parallel to facilitate reduced wall-clock time. A customized stopping criterion is also developed for the GA with Population-Based Sampling. The stopping criterion requires the design with the minimum expected fitness value to have at least 99% constraint satisfaction and to have accumulated at least 10,000 samples. The average change in expected fitness values in the last ten consecutive generations is also monitored. The optimization of composite laminates using ply orientation angle as a discrete variable provides an example to demonstrate further developments of the GA with Population-Based Sampling for discrete optimization under uncertainty. The focus problem aims to reduce the expected weight of the composite laminate while treating the laminate's fiber volume fraction and externally applied loads as uncertain quantities following normal distributions. Construction of
NASA Astrophysics Data System (ADS)
Aleardi, Mattia
2015-06-01
Predicting missing log data is a useful capability for geophysicists. Geophysical measurements in boreholes are frequently affected by gaps in the recording of one or more logs. In particular, sonic and shear sonic logs are often recorded over limited intervals along the well path, but the information these logs contain is crucial for many geophysical applications. Estimating missing log intervals from a set of recorded logs is therefore of great interest. In this work, I propose to estimate the data in missing parts of velocity logs using a genetic algorithm (GA) optimisation, and I demonstrate that this method is capable of extracting linear or exponential relations that link the velocity to other available logs. The technique was tested on different sets of logs (gamma ray, resistivity, density, neutron, sonic and shear sonic) from three wells drilled in different geological settings and through different lithologies (sedimentary and intrusive rocks). The effectiveness of this methodology is demonstrated by a series of blind tests and by evaluating the correlation coefficients between the true and predicted velocity values. The combination of GA optimisation with a Gibbs sampler (GS) and subsequent Monte Carlo simulations allows the uncertainties in the final predicted velocities to be reliably quantified. The GA method is also compared with the neural network (NN) approach and classical multilinear regression. The comparisons show that the GA, NN and multilinear methods provide velocity estimates with the same predictive capability when the relation between the input logs and the seismic velocity is approximately linear. The GA and NN approaches are more robust when the relations are non-linear. However, in all cases, the main advantage of the GA optimisation procedure over the NN approach is that it directly provides an interpretable and simple equation that relates the input and predicted logs. Moreover, the GA method is not affected by the disadvantages
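The core of the approach above, GA optimisation of a simple relation linking an available log to velocity, can be sketched on synthetic data as follows. The single-predictor linear relation, the GA operators, and all hyperparameters are illustrative assumptions, not the paper's configuration.

```python
import random

# Synthetic well logs: velocity linearly tied to gamma ray (true a=-20, b=5000).
rng = random.Random(0)
gr = [rng.uniform(20.0, 120.0) for _ in range(50)]
vp = [-20.0 * g + 5000.0 for g in gr]

def misfit(ind):
    """Mean squared error of the candidate relation vp = a*gr + b."""
    a, b = ind
    return sum((a * g + b - v) ** 2 for g, v in zip(gr, vp)) / len(gr)

def evolve(pop_size=40, gens=60):
    """Tiny real-coded GA: elitism, blend crossover, Gaussian mutation."""
    pop = [(rng.uniform(-50, 50), rng.uniform(0, 10000)) for _ in range(pop_size)]
    init_best = min(misfit(p) for p in pop)
    for _ in range(gens):
        pop.sort(key=misfit)
        elite = pop[: pop_size // 4]       # elitism: best designs survive
        children = []
        while len(elite) + len(children) < pop_size:
            p1, p2 = rng.sample(elite, 2)
            w = rng.random()               # blend crossover
            child = [w * p1[i] + (1.0 - w) * p2[i] for i in range(2)]
            child[0] += rng.gauss(0.0, 1.0)    # mutate slope
            child[1] += rng.gauss(0.0, 50.0)   # mutate intercept
            children.append(tuple(child))
        pop = elite + children
    return min(pop, key=misfit), init_best

best, init_best = evolve()
```

Because the elite always survive, the best misfit is non-increasing over generations, and the winning chromosome is directly readable as an equation, the interpretability advantage the abstract highlights.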
Genetic Algorithm Approaches to Prebiotic Chemistry Modeling
NASA Technical Reports Server (NTRS)
Lohn, Jason; Colombano, Silvano
1997-01-01
We model an artificial chemistry composed of interacting polymers by specifying two initial conditions: a distribution of polymers and a fixed set of reversible catalytic reactions. A genetic algorithm is used to find a set of reactions that exhibits a desired dynamical behavior. Such a technique is useful because it allows an investigator to determine whether a specific pattern of dynamics can be produced, and if it can, the reaction network found can then be analyzed. We present our results in the context of studying simplified chemical dynamics in theorized protocells - hypothesized precursors of the first living organisms. Our results show that, given a small sample of plausible protocell reaction dynamics, catalytic reaction sets can be found. We present cases where this is not possible and also analyze the evolved reaction sets.
Application of the modelling power approach to variable subset selection for GA-PLS QSAR models.
Sagrado, Salvador; Cronin, Mark T D
2008-02-25
A previously developed function, the Modelling Power Plot, has been applied to QSARs developed using partial least squares (PLS) following variable selection by a genetic algorithm (GA). Modelling power (Mp) integrates the predictive and descriptive capabilities of a QSAR. For QSARs of narcotic toxic potency, Mp was able to guide the optimal selection of variables using a GA. The results emphasise the value of Mp for assessing the success of variable selection, and show that techniques such as PLS are more robust following variable selection.
A new algorithmic approach for fingers detection and identification
NASA Astrophysics Data System (ADS)
Mubashar Khan, Arslan; Umar, Waqas; Choudhary, Taimoor; Hussain, Fawad; Haroon Yousaf, Muhammad
2013-03-01
Gesture recognition is concerned with interpreting human gestures through mathematical algorithms. Gestures can originate from any bodily motion or state but commonly originate from the face or hand. Hand gesture detection in a real-time environment, where time and memory are important concerns, is a critical operation. Hand gesture recognition largely depends on the accurate detection of the fingers. This paper presents a new algorithmic approach to detect and identify the fingers of the human hand. The proposed algorithm does not depend upon prior knowledge of the scene. It detects the active fingers and the metacarpophalangeal (MCP) joints of the inactive fingers from an already detected hand. A dynamic thresholding technique and a connected component labeling scheme are employed for background elimination and hand detection, respectively. The proposed algorithm identifies fingers in a real-time environment while keeping the memory and time costs as low as possible.
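The two preprocessing steps named above, thresholding for background elimination and connected component labeling for hand detection, can be sketched as follows. The global-mean threshold and the 4-connectivity choice are assumptions for illustration; the paper's actual dynamic thresholding scheme is not reproduced here.

```python
from collections import deque

def dynamic_threshold(img):
    """Global mean intensity as the threshold; a simple stand-in for the
    paper's (unspecified) dynamic thresholding step."""
    flat = [p for row in img for p in row]
    return sum(flat) / len(flat)

def label_components(img, thresh):
    """4-connected component labeling of above-threshold pixels via BFS,
    as used to separate foreground blobs (hand/fingers) from background."""
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for y in range(h):
        for x in range(w):
            if img[y][x] > thresh and labels[y][x] == 0:
                current += 1                    # start a new component
                q = deque([(y, x)])
                labels[y][x] = current
                while q:
                    cy, cx = q.popleft()
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and img[ny][nx] > thresh and labels[ny][nx] == 0:
                            labels[ny][nx] = current
                            q.append((ny, nx))
    return labels, current

# Two bright blobs on a dark background
img = [
    [0, 9, 9, 0, 0, 0],
    [0, 9, 9, 0, 8, 8],
    [0, 0, 0, 0, 8, 8],
]
labels, n = label_components(img, dynamic_threshold(img))
```

On this toy frame the mean threshold separates the two bright regions from the background, and the labeling pass assigns them distinct component ids.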
Identification of Quasi-ARX Neurofuzzy Model with an SVR and GA Approach
NASA Astrophysics Data System (ADS)
Cheng, Yu; Wang, Lan; Hu, Jinglu
The quasi-ARX neurofuzzy (Q-ARX-NF) model has shown great approximation ability and usefulness in nonlinear system identification and control. It has an ARX-like linear structure, and the coefficients are expressed by an incorporated neurofuzzy (InNF) network. However, the Q-ARX-NF model suffers from the curse of dimensionality, because the number of fuzzy rules in the InNF network increases exponentially with the input space dimension, which may result in high computational complexity and over-fitting. In this paper, the curse of dimensionality is addressed in two ways. First, a support vector regression (SVR) based approach is used to reduce computational complexity via a dual-form quadratic programming (QP) optimization, where the solution is independent of the input dimensions. Second, genetic algorithm (GA) based input selection is applied with a novel fitness evaluation function, and a parsimonious model structure is generated with only the important inputs for the InNF network. Mathematical and real system simulations are carried out to demonstrate the effectiveness of the proposed method.
Computational identification of human long intergenic non-coding RNAs using a GA-SVM algorithm.
Wang, Yanqiu; Li, Yang; Wang, Qi; Lv, Yingli; Wang, Shiyuan; Chen, Xi; Yu, Xuexin; Jiang, Wei; Li, Xia
2014-01-01
Long intergenic non-coding RNAs (lincRNAs) are a new type of non-coding RNA and are closely related to the occurrence and development of diseases. In previous studies, most lincRNAs were identified through next-generation sequencing. Because lincRNAs exhibit tissue-specific expression, the reproducibility of lincRNA discovery across different studies is very poor. In this study, rather than relying on lincRNA expression, we used sequence, structural and protein-coding potential features to construct a classifier that can distinguish lincRNAs from non-lincRNAs. The GA-SVM algorithm was applied to extract the optimized feature subset. Compared with several other feature subsets, five-fold cross-validation showed that this optimized feature subset exhibited the best performance for the identification of human lincRNAs. Moreover, the LincRNA Classifier based on Selected Features (linc-SF) was constructed by a support vector machine (SVM) based on the optimized feature subset. The performance of this classifier was further evaluated by predicting lincRNAs from two independent lincRNA sets. Because the recognition rates for the two lincRNA sets were 100% and 99.8%, linc-SF was found to be effective for the prediction of human lincRNAs.
Investigation of new approaches for InGaN growth with high indium content for CPV application
Arif, Muhammad; Salvestrini, Jean Paul; Sundaram, Suresh; Streque, Jérémy; Gmili, Youssef El; Puybaret, Renaud; Voss, Paul L.; Belahsene, Sofiane; Ramdane, Abderahim; Martinez, Anthony; Patriarche, Gilles; Fix, Thomas; Slaoui, Abdelillah; Ougazzaden, Abdallah
2015-09-28
We propose two new approaches that may overcome the issues of phase separation and high dislocation density in InGaN-based PIN solar cells. The first approach consists of the growth of a thick multi-layered InGaN/GaN absorber. The periodic insertion of thin GaN interlayers should absorb the In excess and relieve compressive strain. The InGaN layers need to be thin enough to remain fully strained and free of phase separation. The second approach consists of the growth of InGaN nanostructures to achieve high-In-content thick InGaN layers. It eliminates the pre-existing dislocations in the underlying template and allows strain relaxation of the InGaN layers without any dislocations, leading to higher In incorporation and a reduced piezoelectric effect. The two approaches lead to structural, morphological, and luminescence properties that are significantly improved compared with those of thick InGaN layers. Corresponding full PIN structures have been realized by growing a p-type GaN layer on top of the half-PIN structures. External quantum efficiency, electro-luminescence, and photo-current characterizations have been carried out on the different structures and reveal an enhancement of the performance of the InGaN PIN PV cells when the thick InGaN layer is replaced by either an InGaN/GaN multi-layered or InGaN nanorod layer.
An Airborne Conflict Resolution Approach Using a Genetic Algorithm
NASA Technical Reports Server (NTRS)
Mondoloni, Stephane; Conway, Sheila
2001-01-01
An airborne conflict resolution approach is presented that is capable of providing flight plans forecast to be conflict-free with both area and traffic hazards. This approach is capable of meeting constraints on the flight plan such as required times of arrival (RTA) at a fix. The conflict resolution algorithm is based upon a genetic algorithm, and can thus seek conflict-free flight plans meeting broader flight planning objectives such as minimum time, fuel or total cost. The method has been applied to conflicts occurring 6 to 25 minutes in the future in climb, cruise and descent phases of flight. The conflict resolution approach separates the detection, trajectory generation and flight rules function from the resolution algorithm. The method is capable of supporting pilot-constructed resolutions, cooperative and non-cooperative maneuvers, and also providing conflict resolution on trajectories forecast by an onboard FMC.
Fernandez, Michael; Caballero, Julio; Fernandez, Leyden; Sarai, Akinori
2011-02-01
Many articles on in silico drug design have implemented genetic algorithms (GAs) for feature selection, model optimization, conformational search, or docking studies. Some of these articles described GA applications to quantitative structure-activity relationship (QSAR) modeling in combination with regression and/or classification techniques. We reviewed the implementation of GAs in drug design QSAR and specifically their performance in the optimization of robust mathematical models such as Bayesian-regularized artificial neural networks (BRANNs) and support vector machines (SVMs) on different drug design problems. Modeled data sets encompassed ADMET and solubility properties, cancer target inhibitors, acetylcholinesterase inhibitors, HIV-1 protease inhibitors, ion-channel and calcium entry blockers, and antiprotozoan compounds, as well as protein classes, functional, and conformational stability data. The GA-optimized predictors were often more accurate and robust than previously published models on the same data sets and explained more than 65% of data variance in validation experiments. In addition, feature selection over large pools of molecular descriptors provided insights into the structural and atomic properties ruling ligand-target interactions.
DNA Microarray Data Analysis: A Novel Biclustering Algorithm Approach
NASA Astrophysics Data System (ADS)
Tchagang, Alain B.; Tewfik, Ahmed H.
2006-12-01
Biclustering algorithms refer to a distinct class of clustering algorithms that perform simultaneous row-column clustering. Biclustering problems arise in DNA microarray data analysis, collaborative filtering, market research, information retrieval, text mining, electoral trends, exchange analysis, and so forth. When dealing with DNA microarray experimental data for example, the goal of biclustering algorithms is to find submatrices, that is, subgroups of genes and subgroups of conditions, where the genes exhibit highly correlated activities for every condition. In this study, we develop novel biclustering algorithms using basic linear algebra and arithmetic tools. The proposed biclustering algorithms can be used to search for all biclusters with constant values, biclusters with constant values on rows, biclusters with constant values on columns, and biclusters with coherent values from a set of data in a timely manner and without solving any optimization problem. We also show how one of the proposed biclustering algorithms can be adapted to identify biclusters with coherent evolution. The algorithms developed in this study discover all valid biclusters of each type, while almost all previous biclustering approaches will miss some.
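As a toy instance of the arithmetic-only search described above, the following finds the rows of a "constant values on rows" bicluster for a given column subset using only a zero-spread test, with no optimization problem solved. The matrix and the fixed column subset are illustrative; the paper's algorithms also enumerate the column subsets themselves.

```python
def constant_row_rows(matrix, cols):
    """Return the rows forming a 'constant values on rows' bicluster over the
    column subset `cols`: a row qualifies if all its entries in `cols` are
    equal (zero spread), i.e. the test is basic arithmetic, not optimization."""
    hits = []
    for i, row in enumerate(matrix):
        vals = [row[j] for j in cols]
        if max(vals) == min(vals):
            hits.append(i)
    return hits

# Toy expression matrix: rows 0 and 1 are constant over conditions 0-2,
# so together with those columns they form the bicluster.
expr = [
    [2, 2, 2, 7],
    [5, 5, 5, 5],
    [1, 4, 1, 1],
]
bicluster_rows = constant_row_rows(expr, [0, 1, 2])
```

The same zero-spread test applied to columns instead of rows yields constant-column biclusters, and comparing row differences rather than raw values extends it toward coherent-values biclusters.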
Raghunathan, Shriram; Gupta, Sumeet K; Markandeya, Himanshu S; Roy, Kaushik; Irazoqui, Pedro P
2010-10-30
Implantable neural prostheses that deliver focal electrical stimulation upon demand are rapidly emerging as an alternate therapy for roughly a third of the epileptic patient population that is medically refractory. Seizure detection algorithms enable feedback mechanisms to provide focally and temporally specific intervention. Real-time feasibility and computational complexity often limit most reported detection algorithms to implementations using computers for bedside monitoring or external devices communicating with the implanted electrodes. A comparison of algorithms based on detection efficacy does not present a complete picture of the feasibility of the algorithm with limited computational power, as is the case with most battery-powered applications. We present a two-dimensional design optimization approach that takes into account both detection efficacy and hardware cost in evaluating algorithms for their feasibility in an implantable application. Detection features are first compared for their ability to detect electrographic seizures from micro-electrode data recorded from kainate-treated rats. Circuit models are then used to estimate the dynamic and leakage power consumption of the compared features. A score is assigned based on detection efficacy and the hardware cost for each of the features, then plotted on a two-dimensional design space. An optimal combination of compared features is used to construct an algorithm that provides maximal detection efficacy per unit hardware cost. The methods presented in this paper would facilitate the development of a common platform to benchmark seizure detection algorithms for comparison and feasibility analysis in the next generation of implantable neuroprosthetic devices to treat epilepsy.
Using Hypertext To Develop an Algorithmic Approach to Teaching Statistics.
ERIC Educational Resources Information Center
Halavin, James; Sommer, Charles
Hypertext and its more advanced form Hypermedia represent a powerful authoring tool with great potential for allowing statistics teachers to develop documents to assist students in an algorithmic fashion. An introduction to the use of Hypertext is presented, with an example of its use. Hypertext is an approach to information management in which…
Moghri, Mehdi; Omidi, Mostafa; Farahnakian, Masoud
2014-01-01
During the past decade, polymer nanocomposites have attracted considerable investment in research and development worldwide. One of the key factors that affect the quality of polymer nanocomposite products in machining is surface roughness. To obtain high-quality products and reduce machining costs, it is very important to determine the optimal machining conditions so as to achieve enhanced machining performance. The objective of this paper is to develop a predictive model using a combined design-of-experiments and artificial intelligence approach for optimization of surface roughness in milling of polyamide-6 (PA-6) nanocomposites. A surface roughness predictive model was developed in terms of milling parameters (spindle speed and feed rate) and nanoclay (NC) content using an artificial neural network (ANN). As the present study deals with a relatively small number of data points obtained from a full factorial design, application of a genetic algorithm (GA) for ANN training is thought to be an appropriate approach for developing an accurate and robust ANN model. In the optimization phase, a GA is used in conjunction with the explicit nonlinear function derived from the ANN to determine the optimal milling parameters for minimization of surface roughness for each PA-6 nanocomposite. PMID:24578636
NASA Astrophysics Data System (ADS)
Song, Kaishan; Li, Lin; Li, Shuai; Tedesco, Lenore; Hall, Bob; Li, Zuchuan
2012-08-01
Eagle Creek, Morse and Geist reservoirs, drinking water supply sources for the Indianapolis, Indiana, USA metropolitan region, are experiencing nuisance cyanobacterial blooms. Hyperspectral remote sensing has been proven to be an effective tool for retrieving phycocyanin (C-PC) concentration, a proxy pigment unique to cyanobacteria in freshwater ecosystems. An adaptive model based on genetic algorithm and partial least squares (GA-PLS), together with a three-band algorithm (TBA) and other band ratio algorithms, was applied to hyperspectral data acquired from in situ (ASD spectrometer) and airborne (AISA sensor) platforms. The results indicated that GA-PLS achieved high correlation between measured and estimated C-PC for GR (RMSE = 16.3 μg/L, RMSE% = 18.2; range (R): 2.6-185.1 μg/L), MR (RMSE = 8.7 μg/L, RMSE% = 15.6; R: 3.3-371.0 μg/L) and ECR (RMSE = 19.3 μg/L, RMSE% = 26.4; R: 0.7-245.0 μg/L) for the in situ datasets. TBA also performed well compared to other band ratio algorithms due to its optimal band tuning process and the reduction of backscattering effects through the third band. The GA-PLS (GR: RMSE = 24.1 μg/L, RMSE% = 25.2, R: 25.2-185.1 μg/L; MR: RMSE = 15.7 μg/L, RMSE% = 37.4, R: 2.0-135.1 μg/L) and TBA (GR: RMSE = 28.3 μg/L, RMSE% = 30.1; MR: RMSE = 17.7 μg/L, RMSE% = 41.9) methods resulted in somewhat lower accuracy using AISA imagery data, which is likely due to atmospheric correction or radiometric resolution. GA-PLS (TBA) obtained an RMSE of 24.82 μg/L (35.8 μg/L) and an RMSE% of 31.24 (43.5) between measured and estimated C-PC for the aggregated datasets. C-PC maps were generated through GA-PLS using AISA imagery data. The C-PC concentration had an average value of 67.31 ± 44.23 μg/L in MR with a large range of concentration, while the GR had a higher average value of 103.17 ± 33.45 μg/L.
Stall Recovery Guidance Algorithms Based on Constrained Control Approaches
NASA Technical Reports Server (NTRS)
Stepanyan, Vahram; Krishnakumar, Kalmanje; Kaneshige, John; Acosta, Diana
2016-01-01
Aircraft loss-of-control, in particular approach to stall or fully developed stall, is a major factor contributing to aircraft safety risks, which emphasizes the need to develop algorithms that are capable of assisting the pilots to identify the problem and providing guidance to recover the aircraft. In this paper we present several stall recovery guidance algorithms, which are implemented in the background without interfering with the flight control system or altering the pilot's actions. They use input- and state-constrained control methods to generate guidance signals, which are provided to the pilot in the form of visual cues. It is the pilot's decision to follow these signals. The algorithms are validated in a pilot-in-the-loop medium fidelity simulation experiment.
Time series segmentation: a new approach based on Genetic Algorithm and Hidden Markov Model
NASA Astrophysics Data System (ADS)
Toreti, A.; Kuglitsch, F. G.; Xoplaki, E.; Luterbacher, J.
2009-04-01
The subdivision of a time series into homogeneous segments has been performed using various methods applied to different disciplines. In climatology, for example, it is accompanied by the well-known homogenization problem and the detection of artificial change points. In this context, we present a new method (GAMM) based on a Hidden Markov Model (HMM) and a Genetic Algorithm (GA), applicable to series of independent observations (and easily adaptable to autoregressive processes). A left-to-right hidden Markov model was applied, estimating the parameters and the best-state sequence with the Baum-Welch and Viterbi algorithms, respectively. In order to avoid the well-known dependence of the Baum-Welch algorithm on its initial condition, a Genetic Algorithm was developed, characterized by mutation, elitism and a crossover procedure implemented with some restrictive rules. Moreover, the function to be minimized was derived following the approach of Kehagias (2004), i.e. the so-called complete log-likelihood. The number of states was determined by applying a two-fold cross-validation procedure (Celeux and Durand, 2008). Because this last issue is complex and influences the whole analysis, a Multi Response Permutation Procedure (MRPP; Mielke et al., 1981) was added: it tests the (K+1)-state model (where K is the number of states of the best model) whenever its likelihood is close to that of the K-state model. Finally, an evaluation of the GAMM performance, applied as a break detection method in the field of climate time series homogenization, is shown. 1. G. Celeux and J.B. Durand, Comput Stat 2008. 2. A. Kehagias, Stoch Envir Res 2004. 3. P.W. Mielke, K.J. Berry, G.W. Brier, Monthly Wea Rev 1981.
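The Viterbi best-state-sequence step mentioned above can be sketched for a discrete-emission left-to-right HMM. The two-state toy model and its probabilities are illustrative, not GAMM's estimated parameters; a real segmentation would use the Baum-Welch/GA-estimated model and work in log space for numerical stability.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most probable state sequence of a discrete HMM (the decoding step run
    after parameter estimation). Probabilities are multiplied directly, which
    is fine for short toy sequences."""
    v = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        v.append({})
        back.append({})
        for s in states:
            prob, prev = max(
                (v[t - 1][r] * trans_p[r][s] * emit_p[s][obs[t]], r)
                for r in states
            )
            v[t][s] = prob
            back[t][s] = prev
    last = max(states, key=lambda s: v[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):   # backtrack through the pointers
        path.append(back[t][path[-1]])
    return list(reversed(path))

# Left-to-right two-state model: once in state 1, the chain never returns to 0,
# so the decoded path is a single change point, i.e. a segmentation.
states = [0, 1]
start_p = {0: 1.0, 1: 0.0}
trans_p = {0: {0: 0.5, 1: 0.5}, 1: {0: 0.0, 1: 1.0}}
emit_p = {0: {'a': 0.9, 'b': 0.1}, 1: {'a': 0.1, 'b': 0.9}}
segmentation = viterbi(['a', 'a', 'b', 'b'], states, start_p, trans_p, emit_p)
```

The left-to-right transition matrix is what turns decoding into segmentation: the single permitted 0-to-1 switch marks the detected change point.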
Erguzel, Turker Tekin; Ozekes, Serhat; Tan, Oguz; Gultekin, Selahattin
2015-10-01
Feature selection is an important step in many pattern recognition systems aiming to overcome the so-called curse of dimensionality. In this study, an optimized classification method was tested in 147 patients with major depressive disorder (MDD) treated with repetitive transcranial magnetic stimulation (rTMS). The performance of the combination of a genetic algorithm (GA) and a back-propagation (BP) neural network (BPNN) was evaluated using 6-channel pre-rTMS electroencephalographic (EEG) patterns of theta and delta frequency bands. The GA was first used to eliminate the redundant and less discriminant features to maximize classification performance. The BPNN was then applied to test the performance of the feature subset. Finally, classification performance using the subset was evaluated using 6-fold cross-validation. Although the slow bands of the frontal electrodes are widely used to collect EEG data for patients with MDD and provide quite satisfactory classification results, the outcomes of the proposed approach indicate noticeably increased overall accuracy of 89.12% and an area under the receiver operating characteristic (ROC) curve (AUC) of 0.904 using the reduced feature set.
A genetic algorithm-based approach to flexible flow-line scheduling with variable lot sizes.
Lee, I; Sikora, R; Shaw, M J
1997-01-01
Genetic algorithms (GAs) have been used widely for such combinatorial optimization problems as the traveling salesman problem (TSP), the quadratic assignment problem (QAP), and job shop scheduling. In all of these problems there is usually a well-defined representation which GAs use to solve the problem. We present a novel approach for solving two related problems, lot sizing and sequencing, concurrently using GAs. The essence of our approach lies in using a unified representation for the information about both the lot sizes and the sequence, and enabling GAs to evolve the chromosome by replacing primitive genes with good building blocks. In addition, a simulated annealing procedure is incorporated to further improve performance. We evaluate the performance of applying this approach to flexible flow line scheduling with variable lot sizes for an actual manufacturing facility, comparing it to such alternative approaches as pairwise exchange improvement, tabu search, and simulated annealing procedures. The results show the efficacy of this approach for flexible flow line scheduling.
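A minimal sketch of evaluating the unified representation described above: a chromosome encodes the sequence and the lot sizes together as a list of (job, lot) pairs, and its fitness is the makespan on a flow line. The two-machine line and the unit processing times are assumptions for illustration, not the actual facility's data.

```python
def makespan(chromosome, unit_times):
    """Two-machine flow-line makespan for a unified chromosome: the order of
    the (job, lot) pairs is the sequence, and each lot size scales the per-unit
    processing times on both machines."""
    c1 = c2 = 0.0
    for job, lot in chromosome:
        p1, p2 = unit_times[job]
        c1 += p1 * lot                  # machine 1 finishes this lot
        c2 = max(c2, c1) + p2 * lot     # machine 2 waits for machine 1
    return c2

# Hypothetical per-unit times (machine 1, machine 2) for two jobs.
unit_times = {'A': (1.0, 2.0), 'B': (2.0, 1.0)}
span = makespan([('A', 2), ('B', 3)], unit_times)
```

A GA over this encoding can mutate a lot size or swap two pairs with the same move machinery, which is what lets it search both subproblems concurrently instead of fixing one and optimizing the other.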
Modeling and Algorithmic Approaches to Constitutively-Complex, Microstructured Fluids
Miller, Gregory H.; Forest, Gregory
2014-05-01
We present a new multiscale model for complex fluids based on three scales: microscopic, kinetic, and continuum. We choose the microscopic level as Kramers' bead-rod model for polymers, which we describe as a system of stochastic differential equations with an implicit constraint formulation. The associated Fokker-Planck equation is then derived, and adiabatic elimination removes the fast momentum coordinates. Approached in this way, the kinetic level reduces to a dispersive drift equation. The continuum level is modeled with a finite volume Godunov-projection algorithm. We demonstrate computation of viscoelastic stress divergence using this multiscale approach.
Fabrication of normally-off GaN nanowire gate-all-around FET with top-down approach
NASA Astrophysics Data System (ADS)
Im, Ki-Sik; Won, Chul-Ho; Vodapally, Sindhuri; Caulmilone, Raphaël; Cristoloveanu, Sorin; Kim, Yong-Tae; Lee, Jung-Hee
2016-10-01
A lateral GaN nanowire gate-all-around transistor has been fabricated with a top-down process and characterized. A triangle-shaped GaN nanowire with 56 nm width was implemented on the GaN-on-insulator (GaNOI) wafer by utilizing (i) buried oxide as a sacrificial layer and (ii) anisotropic lateral wet etching of GaN in tetramethylammonium hydroxide solution. During subsequent GaN and AlGaN epitaxy of the source/drain planar regions, no growth occurred on the nanowire, due to a self-limiting growth property. Transmission electron microscopy and energy-dispersive X-ray spectroscopy elemental mapping reveal that the GaN nanowire consists of only Ga and N atoms. The transistor exhibits normally-off operation with a threshold voltage of 3.5 V and promising performance: a maximum drain current of 0.11 mA, a maximum transconductance of 0.04 mS, a record off-state leakage current of ~10^-13 A/mm, and a very high Ion/Ioff ratio of 10^8. The proposed top-down device concept using the GaNOI wafer enables the fabrication of multiple parallel nanowires with positive threshold voltage and is advantageous compared with the bottom-up approach.
A Micro-GA Embedded PSO Feature Selection Approach to Intelligent Facial Emotion Recognition.
Mistry, Kamlesh; Zhang, Li; Neoh, Siew Chin; Lim, Chee Peng; Fielding, Ben
2016-04-21
This paper proposes a facial expression recognition system using evolutionary particle swarm optimization (PSO)-based feature optimization. The system first employs modified local binary patterns, which conduct horizontal and vertical neighborhood pixel comparison, to generate a discriminative initial facial representation. Then, a PSO variant embedded with the concept of a micro genetic algorithm (mGA), called mGA-embedded PSO, is proposed to perform feature optimization. It incorporates a nonreplaceable memory, a small-population secondary swarm, a new velocity updating strategy, a subdimension-based in-depth local facial feature search, and a cooperative mechanism of local exploitation and global exploration to mitigate the premature convergence problem of conventional PSO. Multiple classifiers are used for recognizing seven facial expressions. Based on a comprehensive study using within- and cross-domain images from the extended Cohn Kanade and MMI benchmark databases, respectively, the empirical results indicate that our proposed system outperforms other state-of-the-art PSO variants, conventional PSO, classical GA, and other related facial expression recognition models reported in the literature by a significant margin.
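For contrast with the mGA-embedded variant, a conventional global-best PSO, the baseline whose premature convergence the paper targets, looks roughly like this. The inertia and acceleration constants are common textbook values, and the sphere objective is a placeholder for the feature-optimization fitness; neither comes from the paper.

```python
import random

def pso(f, dim, n=20, iters=100, seed=7):
    """Conventional global-best PSO: every particle is pulled toward its own
    best position and the swarm's best position, with inertia 0.7 and
    acceleration coefficients 1.5 (textbook-style settings)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    init_val = gbest_val
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:          # update personal best
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:         # update global best
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val, init_val

sphere = lambda x: sum(v * v for v in x)
best, best_val, init_val = pso(sphere, dim=2)
```

Because every particle is attracted to the same global best, the swarm can collapse early on multimodal fitness landscapes, which is the failure mode the nonreplaceable memory and secondary swarm in the paper are designed to counter.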
In silico prediction of mitochondrial toxicity by using GA-CG-SVM approach.
Zhang, Hui; Chen, Qing-Yi; Xiang, Ming-Li; Ma, Chang-Ying; Huang, Qi; Yang, Sheng-Yong
2009-02-01
Drug-induced mitochondrial toxicity has become one of the key reasons for which some drugs fail to enter the market or are withdrawn from it. Early identification of new chemical entities that injure mitochondrial function is therefore essential for producing safer drugs and directly reducing the attrition rate in later stages of drug development. In this study, a support vector machine (SVM) method combined with a genetic algorithm (GA) for feature selection and a conjugate gradient method (CG) for parameter optimization (GA-CG-SVM) has been employed to develop a prediction model of mitochondrial toxicity. We first collected 288 compounds, including 171 MT+ and 117 MT-, from different literature sources. These compounds were then randomly separated into a training set (253 compounds) and a test set (35 compounds). The overall prediction accuracy for the training set by means of 5-fold cross-validation is 84.59%. Further, the SVM model was evaluated using the independent test set, giving an overall prediction accuracy of 77.14%. These results clearly indicate that mitochondrial toxicity is predictable. The impacts of the feature selection and SVM parameter optimization on the quality of the SVM model were also examined and discussed. The results implicate the potential of the proposed GA-CG-SVM in facilitating the prediction of mitochondrial toxicity.
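A dependency-free sketch of GA-driven feature selection in the spirit of the pipeline above: a binary chromosome marks the selected descriptors, and a class-mean separation score stands in for the paper's SVM cross-validation fitness. The synthetic data, the fitness proxy, and the GA settings are all assumptions for illustration.

```python
import random

rng = random.Random(3)

# Toy descriptor data: feature 0 separates MT+ from MT-; features 1-3 are
# noise. Separation of class means replaces SVM accuracy as the fitness so
# the sketch needs no external libraries.
X0 = [[rng.gauss(0.0, 1.0) for _ in range(4)] for _ in range(30)]
X1 = [[rng.gauss(0.0, 1.0) for _ in range(4)] for _ in range(30)]
for row in X1:
    row[0] += 5.0          # the informative descriptor

def fitness(mask):
    """Mean absolute class-mean separation over the selected features:
    rewards informative descriptors, penalizes carrying noisy ones."""
    feats = [j for j in range(4) if mask[j]]
    if not feats:
        return 0.0
    sep = 0.0
    for j in feats:
        m0 = sum(r[j] for r in X0) / len(X0)
        m1 = sum(r[j] for r in X1) / len(X1)
        sep += abs(m1 - m0)
    return sep / len(feats)

# GA over binary chromosomes: truncation selection plus bit-flip mutation.
pop = [[rng.randint(0, 1) for _ in range(4)] for _ in range(12)]
best = max(pop, key=fitness)
for _ in range(40):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:6]
    children = [[g if rng.random() > 0.2 else 1 - g
                 for g in rng.choice(parents)] for _ in range(6)]
    pop = parents + children
    best = max([best] + pop, key=fitness)
```

In the full method the fitness call would instead train and cross-validate an SVM on the selected descriptor subset, with the CG step tuning the SVM parameters inside each evaluation.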
Side-locked headaches: an algorithm-based approach.
Prakash, Sanjay; Rathore, Chaturbhuj
2016-12-01
The differential diagnosis of strictly unilateral hemicranial pain includes a large number of primary and secondary headaches and cranial neuropathies. It may arise from both intracranial and extracranial structures such as the cranium, neck, vessels, eyes, ears, nose, sinuses, teeth, mouth, and other facial or cervical structures. Available data suggest that about two-thirds of patients with side-locked headache visiting neurology or headache clinics have primary headaches; the other one-third have either secondary headaches or neuralgias. Many of these hemicranial pain syndromes have overlapping presentations. Primary headache disorders may spread to involve the face and/or neck, and various intracranial and extracranial pathologies may have similarly overlapping presentations. Patients may present to a variety of clinicians, including headache experts, dentists, otolaryngologists, ophthalmologists, psychiatrists, and physiotherapists. Unfortunately, there is no uniform approach for such patients, and diagnostic ambiguity is frequently encountered in clinical practice. Herein, we review the differential diagnoses of side-locked headaches and provide an algorithm-based approach for patients presenting with side-locked headaches. Side-locked headache is itself a red flag, so the first priority should be to rule out secondary headaches. A comprehensive history and thorough examination will help one formulate an algorithm to rule out or confirm secondary side-locked headaches. The diagnoses of most secondary side-locked headaches are largely investigation-dependent; therefore, each suspected secondary headache should be subjected to appropriate investigations or referral. The diagnostic approach to primary side-locked headache starts once all possible secondary headaches have been ruled out. We discuss an algorithmic approach for both secondary and primary side-locked headaches.
NASA Astrophysics Data System (ADS)
Hung, Ching-Wen; Chang, Ching-Hong; Chen, Wei-Cheng; Chen, Chun-Chia; Chen, Huey-Ing; Tsai, Yu-Ting; Tsai, Jung-Hui; Liu, Wen-Chau
2016-10-01
Based on an electrophoretic deposition (EPD)-gate approach, a Pt/AlGaN/GaN heterostructure field-effect transistor (HFET) is fabricated and investigated at elevated temperatures. A nearly oxide-free Pt/AlGaN interface is verified by an Auger electron spectroscopy (AES) depth profile for the studied EPD-HFET. This result substantially enhances device performance at room temperature (300 K). Experimentally, the studied EPD-HFET exhibits a high turn-on voltage, well-suppressed gate leakage, a superior maximum drain saturation current, and an excellent extrinsic transconductance. Moreover, the microwave performance of the EPD-HFET is demonstrated at room temperature. Consequently, this EPD-gate approach shows promise for high-performance electronic applications.
NASA Astrophysics Data System (ADS)
Wang, Li-yong; Li, Le; Zhang, Zhi-hua
2016-09-01
Hot compression tests of Ti-6Al-4V alloy over a wide temperature range of 1023-1323 K and strain rate range of 0.01-10 s-1 were conducted on a servo-hydraulic, computer-controlled Gleeble-3500 machine. In order to characterize the highly nonlinear flow behaviors accurately and effectively, support vector regression (SVR), a machine learning method, was combined with a genetic algorithm (GA), yielding the GA-SVR. A prominent feature of the GA-SVR is that, with identical training parameters, it keeps training accuracy and prediction accuracy at a stable level across repeated runs on a given dataset. The learning abilities, generalization abilities, and modeling efficiencies of a mathematical regression model, an artificial neural network (ANN), and the GA-SVR for Ti-6Al-4V alloy were compared in detail. The comparison shows that the learning ability of the GA-SVR is stronger than that of the mathematical regression model, and that the generalization abilities and modeling efficiencies rank, in ascending order: mathematical regression model < ANN < GA-SVR. Stress-strain data outside the experimental conditions were predicted by the well-trained GA-SVR, which improved the simulation accuracy of the load-stroke curve and can further benefit related research fields where stress-strain data play important roles, such as speculating on work hardening and dynamic recovery, characterizing dynamic recrystallization evolution, and improving processing maps.
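The GA wrapper idea behind GA-SVR, a floating-point GA searching model hyperparameters against a held-out validation error, can be sketched without the SVR machinery itself. The toy below is a hedged stand-in: a Nadaraya-Watson kernel regressor replaces the SVR, the data are synthetic, and all GA constants are illustrative choices, not the authors' settings.

```python
import math, random

random.seed(0)

# Toy "flow curve" data: noisy samples of a smooth nonlinear function.
xs = [i / 20 for i in range(81)]                  # strain-like variable on [0, 4]
ys = [math.sin(x) + 0.5 * x + random.gauss(0, 0.05) for x in xs]

train = list(zip(xs[::2], ys[::2]))               # even-index samples: training
valid = list(zip(xs[1::2], ys[1::2]))             # odd-index samples: validation

def predict(h, x, data):
    """Nadaraya-Watson kernel regression with Gaussian bandwidth h."""
    ws = [math.exp(-((x - xi) ** 2) / (2 * h * h)) for xi, _ in data]
    s = sum(ws)
    return sum(w * yi for w, (_, yi) in zip(ws, data)) / s if s > 0 else 0.0

def fitness(h):
    """Validation mean squared error (lower is better)."""
    return sum((predict(h, x, train) - y) ** 2 for x, y in valid) / len(valid)

# Floating-point GA: elitism, truncation selection, blend crossover, mutation.
pop = [random.uniform(0.01, 2.0) for _ in range(20)]
for gen in range(30):
    scored = sorted(pop, key=fitness)
    elite = scored[:4]                             # keep the best individuals
    children = []
    while len(children) < len(pop) - len(elite):
        a, b = random.sample(scored[:10], 2)       # truncation-style selection
        c = 0.5 * (a + b) + random.gauss(0, 0.05)  # blend crossover + mutation
        children.append(min(2.0, max(0.01, c)))
    pop = elite + children

best = min(pop, key=fitness)
print("best bandwidth %.3f, validation MSE %.5f" % (best, fitness(best)))
```

The elitism step is what gives the run-to-run stability the abstract emphasizes: the best hyperparameter found so far can never be lost between generations.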
Rafiei, Hamid; Khanzadeh, Marziyeh; Mozaffari, Shahla; Bostanifar, Mohammad Hassan; Avval, Zhila Mohajeri; Aalizadeh, Reza; Pourbasheer, Eslam
2016-01-01
Quantitative structure-activity relationship (QSAR) studies have been employed for predicting the inhibitory activities of Hepatitis C virus (HCV) NS5B polymerase inhibitors. A data set consisting of 72 compounds was selected, and different types of molecular descriptors were calculated. The whole data set was split into a training set (80% of the dataset) and a test set (20% of the dataset) using principal component analysis. The stepwise (SW) and genetic algorithm (GA) techniques were used as variable selection tools. Multiple linear regression (MLR) was then used to linearly correlate the selected descriptors with inhibitory activities. Several validation techniques, including leave-one-out and leave-group-out cross-validation and the Y-randomization method, were used to evaluate the internal capability of the derived models. The external prediction ability of the derived models was further analyzed using modified r2 and concordance correlation coefficient values and the Golbraikh and Tropsha acceptable model criteria. Based on the derived results (GA-MLR), some new insights toward the molecular structural requirements for better inhibitory activity were obtained. PMID:27065774
Pediatric Flexible Flatfoot; Clinical Aspects and Algorithmic Approach
Halabchi, Farzin; Mazaheri, Reza; Mirshahi, Maryam; Abbasian, Ladan
2013-01-01
Flatfoot constitutes the major cause of clinic visits for pediatric foot problems. The reported prevalence of flatfoot varies widely due to numerous factors. It can be divided into flexible and rigid flatfoot. Diagnosis and management of pediatric flatfoot have long been a matter of controversy. Common assessment tools include visual inspection, anthropometric values, footprint parameters, and radiographic evaluation. Most flexible flatfeet are physiologic, asymptomatic, and require no treatment. Otherwise, the physician should treat symptomatic flexible flatfeet. Initial treatment options include activity modification, proper shoes and orthoses, exercises, and medication. Furthermore, comorbidities such as obesity and ligamentous laxity should be identified and managed, if applicable. When all nonsurgical treatment options fail, surgery can be considered. Our purpose in this article is to present a clinical algorithmic approach to pediatric flatfoot. PMID:23795246
Planning additional drilling campaign using two-space genetic algorithm: A game theoretical approach
NASA Astrophysics Data System (ADS)
Kumral, Mustafa; Ozer, Umit
2013-03-01
Grade and tonnage are the most important technical uncertainties in mining ventures because of the use of estimations/simulations, which are mostly generated from drill data. Open pit mines are planned and designed on the basis of the blocks representing the entire orebody. Each block has a different estimation/simulation variance, reflecting uncertainty to some extent. The estimation/simulation realizations are submitted to the mine production scheduling process. However, the use of a block model with varying estimation/simulation variances will lead to serious risk in the scheduling. With multiple simulations, the dispersion variances of blocks can be thought of as capturing technical uncertainties; however, the dispersion variance cannot handle the uncertainty associated with varying estimation/simulation variances of blocks. This paper proposes an approach that generates the configuration of the best additional drilling campaign so as to produce more homogeneous estimation/simulation variances of blocks. In other words, the objective is to find the best drilling configuration that minimizes grade uncertainty under a budget constraint. The uncertainty measure of the optimization process in this paper is the interpolation variance, which considers data locations and grades. The problem is expressed as a minmax problem, which focuses on finding the best worst-case performance, i.e., minimizing the interpolation variance of the block generating the maximum interpolation variance. Since the optimization model requires computing the interpolation variances of the simulated/estimated blocks in each iteration, the problem cannot be solved by standard optimization tools. This motivates the use of a two-space genetic algorithm (GA) approach: one space holds feasible drill hole configurations minimizing the interpolation variance, and the other holds drill hole simulations maximizing it. The two spaces interact to find a minmax solution.
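The two-space minmax idea can be sketched as a coevolutionary GA on a toy analytic problem in place of real interpolation variances. Everything below is illustrative: the objective f has a known saddle at (5, 5), and the labels "configurations" and "realizations" only echo the paper's two spaces.

```python
import random
random.seed(1)

def f(x, y):
    # Toy objective: the inner max over y is attained at y = 5, giving (x - 5)^2,
    # so the minmax solution is x* = 5, y* = 5 with value 0.
    return (x - 5.0) ** 2 - (y - 5.0) ** 2

def evolve(pop, score, maximize):
    """One GA generation: rank, keep elites, blend-crossover + mutate."""
    ranked = sorted(pop, key=score, reverse=maximize)
    children = list(ranked[:3])
    while len(children) < len(pop):
        a, b = random.sample(ranked[:6], 2)
        c = 0.5 * (a + b) + random.gauss(0, 0.3)
        children.append(min(10.0, max(0.0, c)))
    return children

xs = [random.uniform(0, 10) for _ in range(14)]   # "drilling configurations"
ys = [random.uniform(0, 10) for _ in range(14)]   # "worst-case realizations"
for gen in range(60):
    # X-space: minimize the worst case over the current Y population.
    xs = evolve(xs, lambda x: max(f(x, y) for y in ys), maximize=False)
    # Y-space: maximize the objective against the current best X.
    x_best = min(xs, key=lambda x: max(f(x, y) for y in ys))
    ys = evolve(ys, lambda y: f(x_best, y), maximize=True)

x_best = min(xs, key=lambda x: max(f(x, y) for y in ys))
y_worst = max(ys, key=lambda y: f(x_best, y))
print("minmax estimate: x=%.2f y=%.2f f=%.3f" % (x_best, y_worst, f(x_best, y_worst)))
```

The alternating structure mirrors the paper's scheme: each space is scored against the other's current population, so the minimizing space is always chasing the maximizer's latest worst case.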
Newton Algorithms for Analytic Rotation: An Implicit Function Approach
ERIC Educational Resources Information Center
Boik, Robert J.
2008-01-01
In this paper implicit function-based parameterizations for orthogonal and oblique rotation matrices are proposed. The parameterizations are used to construct Newton algorithms for minimizing differentiable rotation criteria applied to "m" factors and "p" variables. The speed of the new algorithms is compared to that of existing algorithms and to…
Romero, Eduardo; Martínez, Alfonso; Oteo, Marta; García, Angel; Morcillo, Miguel Angel
2016-01-01
(68)Ga-DOTA-peptides are promising PET radiotracers used in the detection of different tumour types due to their ability to bind specifically to receptors overexpressed in these tumours. Furthermore, (68)Ga can be produced by a (68)Ge/(68)Ga generator on site, which is a very good alternative to cyclotron-based PET isotopes. Here, we describe a manual labelling approach for the synthesis of (68)Ga-labelled DOTA-peptides based on concentration and purification of the commercial (68)Ge/(68)Ga generator eluate using an anion-exchange cartridge. (68)Ga-DOTA-TATE was used to image a pheochromocytoma xenograft mouse model with a microPET/CT scanner. The method described provides satisfactory results, allowing the subsequent use of (68)Ga to label DOTA-peptides. The simplicity of the method, along with its reduced implementation cost, makes it useful in preclinical PET studies.
NASA Astrophysics Data System (ADS)
Courel, Maykel; Rimada, Julio C.; Hernández, Luis
2012-09-01
A new type of photovoltaic device in which GaAs/GaInNAs multiple quantum wells (MQW) or a superlattice (SL) are inserted in the i-region of a GaAs p-i-n solar cell (SC) is presented. The results suggest the device can reach record efficiencies for single-junction solar cells. A theoretical model is developed to study the performance of this device. The conversion efficiency as a function of well width and depth is modeled for MQW solar cells, and it is shown that MQW solar cells reach high conversion efficiency values. A study of the SL solar cell's viability is also presented. The conditions for resonant tunneling are established by the transfer matrix method for a superlattice with variable quantum well width. The effective density of states and the absorption coefficient for the SL structure are calculated in order to determine the J-V characteristic. The influence of superlattice length on the conversion efficiency is investigated, showing better performance when the width and number of clusters are increased. The SL solar cell conversion efficiency is compared with the maximum conversion efficiency obtained for the MQW solar cell and shows an efficiency enhancement.
Approach to Complex Upper Extremity Injury: An Algorithm
Ng, Zhi Yang; Askari, Morad; Chim, Harvey
2015-01-01
Patients with complex upper extremity injuries represent a unique subset of the trauma population. In addition to extensive soft tissue defects affecting the skin, bone, muscles and tendons, or the neurovasculature in various combinations, there is usually concomitant involvement of other body areas and organ systems, with the potential for systemic compromise due to the underlying mechanism of injury and resultant sequelae. In turn, this has a direct impact on the definitive reconstructive plan. Accurate assessment and expedient treatment are thus necessary to achieve optimal surgical outcomes, with the primary goal of limb salvage and functional restoration. Nonetheless, the characteristics of these injuries place such patients at an increased risk of complications ranging from limb ischemia, recalcitrant infections, failure of bony union, and intractable pain to, most devastatingly, limb amputation. In this article, the authors present an algorithmic approach toward complex injuries of the upper extremity with due consideration for the various reconstructive modalities and timing of definitive wound closure for the best possible clinical outcomes. PMID:25685098
Zhuang, Weibing; Gao, Zhihong; Wang, Liangju; Zhong, Wenjun; Ni, Zhaojun; Zhang, Zhen
2013-11-01
Hormones are closely associated with dormancy in deciduous fruit trees, and gibberellins (GAs) are known to be particularly important. In this study, we observed that GA4 treatment led to earlier bud break in Japanese apricot. To understand better the promoting effect of GA4 on the dormancy release of Japanese apricot flower buds, proteomic and transcriptomic approaches were used to analyse the mechanisms of dormancy release following GA4 treatment, based on two-dimensional gel electrophoresis (2-DE) and digital gene expression (DGE) profiling, respectively. More than 600 highly reproducible protein spots (P<0.05) were detected and, following GA4 treatment, 38 protein spots showed more than a 2-fold difference in expression, and 32 protein spots were confidently identified according to the databases. Compared with water treatment, many proteins that were associated with energy metabolism and oxidation-reduction showed significant changes after GA4 treatment, which might promote dormancy release. We observed that genes at the mRNA level associated with energy metabolism and oxidation-reduction also played an important role in this process. Analysis of the functions of the identified proteins and genes and the related metabolic pathways would provide a comprehensive proteomic and transcriptomic view of the coordination of dormancy release after GA4 treatment in Japanese apricot flower buds.
Nie, Yung-mau
2016-01-14
A first-principles approach incorporating the concept of toroidal moments as a measure of the spin vortex is proposed and applied to simulate the toroidization of the magnetoelectric multiferroic GaFeO{sub 3}. The nature of the space-inversion and time-reversal violations of ferrotoroidics is reproduced in the simulated magnetic structure of GaFeO{sub 3}. For undoped GaFeO{sub 3}, a toroidal moment of −22.38 μ{sub B} Å per unit cell was obtained, which is the best theoretical estimate to date. Guided by minimization of the spin-vortex free energy perturbed by an externally applied field, it was discovered that the minority spin markedly biases the whole toroidization. In summary, this approach not only calculates the toroidal moment but also provides a way to understand the toroidal nature of magnetoelectric multiferroics.
Ocean feature recognition using genetic algorithms with fuzzy fitness functions (GA/F3)
NASA Technical Reports Server (NTRS)
Ankenbrandt, C. A.; Buckles, B. P.; Petry, F. E.; Lybanon, M.
1990-01-01
A model for genetic algorithms with semantic nets is derived, in which the relationships between concepts are depicted as a semantic net. An organism represents the manner in which objects in a scene are attached to concepts in the net. Predicates between object pairs are continuous-valued truth functions in the form of an inverse exponential function, e^(-beta|x|). 1:n relationships are combined via the fuzzy OR (max(...)). Finally, predicates between pairs of concepts are resolved by taking the average of the combined predicate values of the objects attached to the concept at the tail of the arc representing the predicate in the semantic net. The method is illustrated by applying it to the identification of oceanic features in the North Atlantic.
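The fuzzy fitness construction described above, inverse-exponential predicates, fuzzy OR via max, and averaging over the objects attached to the tail concept, can be sketched directly. The feature positions and concept names below are invented for illustration; only the three combination rules come from the abstract.

```python
import math

BETA = 0.5

def predicate(obj_a, obj_b):
    """Continuous truth value for a relation, e.g. 'near': e^(-beta*|x|)."""
    return math.exp(-BETA * abs(obj_a - obj_b))

def fuzzy_or(values):
    """1:n relationships are combined with the fuzzy OR (max)."""
    return max(values)

def concept_predicate(objs_tail, objs_head):
    """Average, over objects attached to the tail concept, of the fuzzy-OR
    combined predicate values against the head concept's objects."""
    return sum(fuzzy_or([predicate(a, b) for b in objs_head])
               for a in objs_tail) / len(objs_tail)

# Hypothetical example: positions (e.g. latitudes) of detected features
# attached to the concepts "eddy" and "front" in a candidate labeling.
eddies = [35.0, 36.5]
front = [35.2, 40.0]
score = concept_predicate(eddies, front)
print("semantic-net arc truth value: %.3f" % score)
```

A GA organism's fitness would then aggregate such arc values over the whole net, rewarding labelings whose attached objects satisfy the net's relations.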
A Functional Programming Approach to AI Search Algorithms
ERIC Educational Resources Information Center
Panovics, Janos
2012-01-01
The theory and practice of search algorithms related to state-space represented problems form the major part of the introductory course of Artificial Intelligence at most of the universities and colleges offering a degree in the area of computer science. Students usually meet these algorithms only in some imperative or object-oriented language…
Flower pollination algorithm: A novel approach for multiobjective optimization
NASA Astrophysics Data System (ADS)
Yang, Xin-She; Karamanoglu, Mehmet; He, Xingshi
2014-09-01
Multiobjective design optimization problems require multiobjective optimization techniques to solve, and it is often very challenging to obtain high-quality Pareto fronts accurately. In this article, the recently developed flower pollination algorithm (FPA) is extended to solve multiobjective optimization problems. The proposed method is used to solve a set of multiobjective test functions and two bi-objective design benchmarks, and a comparison of the proposed algorithm with other algorithms has been made, which shows that the FPA is efficient with a good convergence rate. Finally, the importance of further parametric studies and theoretical analysis is highlighted and discussed.
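For readers unfamiliar with the FPA, a single-objective sketch of the basic loop may help: global pollination moves flowers toward the current best via Lévy flights, local pollination mixes two random flowers, and a switch probability chooses between them. The step scaling, population size, and benchmark below are illustrative choices, not the authors' multiobjective settings.

```python
import math, random
random.seed(2)

def sphere(x):
    # Benchmark objective: minimum 0 at the origin.
    return sum(t * t for t in x)

LAM = 1.5  # Levy exponent
SIGMA = (math.gamma(1 + LAM) * math.sin(math.pi * LAM / 2)
         / (math.gamma((1 + LAM) / 2) * LAM * 2 ** ((LAM - 1) / 2))) ** (1 / LAM)

def levy():
    """Mantegna's algorithm for a Levy-flight step length."""
    u = random.gauss(0, SIGMA)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / LAM)

DIM, N, P = 2, 15, 0.8    # dimension, population size, switch probability
flowers = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(N)]
best = min(flowers, key=sphere)

for it in range(200):
    for i in range(N):
        if random.random() < P:   # global pollination via Levy flight
            cand = [x + 0.1 * levy() * (g - x) for x, g in zip(flowers[i], best)]
        else:                     # local pollination between two flowers
            j, k = random.sample(range(N), 2)
            eps = random.random()
            cand = [x + eps * (a - b)
                    for x, a, b in zip(flowers[i], flowers[j], flowers[k])]
        if sphere(cand) < sphere(flowers[i]):   # greedy acceptance
            flowers[i] = cand
    best = min(flowers, key=sphere)

print("best objective after 200 iterations: %.6f" % sphere(best))
```

The heavy-tailed Lévy steps occasionally produce long jumps, which is what gives the FPA its global exploration; the greedy acceptance keeps the population from losing good solutions to overshoots.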
A Genetic Algorithm Approach for the TV Self-Promotion Assignment Problem
NASA Astrophysics Data System (ADS)
Pereira, Paulo A.; Fontes, Fernando A. C. C.; Fontes, Dalila B. M. M.
2009-09-01
We report on the development of a Genetic Algorithm (GA), which has been integrated into a Decision Support System to plan the best assignment of the weekly self-promotion space for a TV station. The problem addressed consists of deciding which shows to advertise, and when, such that the number of viewers of an intended group or target is maximized. The proposed GA incorporates a greedy heuristic to find good initial solutions. These solutions, as well as the solutions later obtained through the GA, then go through a repair procedure. This is used with two objectives, addressed in turn: firstly, it checks the solution's feasibility and, if infeasible, fixes it by removing some shows; secondly, it tries to improve the solution by adding some extra shows. Since the problem faced by the commercial TV station is too big and has too many features, it cannot be solved exactly. Therefore, in order to test the quality of the solutions provided by the proposed GA, we randomly generated some smaller problem instances. For these problems we obtained solutions on average within 1% of the optimal solution value.
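The greedy-initialization and two-pass repair scheme can be illustrated on a knapsack-style toy. The durations, viewer counts, and space limit below are invented, and the real problem has many more features (time slots, targets), so this is only a structural sketch.

```python
import random
random.seed(3)

# Hypothetical weekly data: (promo duration in seconds, expected target viewers).
shows = [(random.randint(20, 60), random.randint(100, 1000)) for _ in range(30)]
SPACE = 400  # total self-promotion space available per week, in seconds

def used(sol):
    return sum(shows[i][0] for i in range(len(shows)) if sol[i])

def viewers(sol):
    return sum(shows[i][1] for i in range(len(shows)) if sol[i])

def greedy():
    """Initial solution: add shows by viewers-per-second until space runs out."""
    order = sorted(range(len(shows)), key=lambda i: shows[i][1] / shows[i][0],
                   reverse=True)
    sol = [0] * len(shows)
    for i in order:
        if used(sol) + shows[i][0] <= SPACE:
            sol[i] = 1
    return sol

def repair(sol):
    """Pass 1: drop shows until feasible. Pass 2: try to add extra shows."""
    sol = sol[:]
    while used(sol) > SPACE:
        worst = min((i for i in range(len(sol)) if sol[i]),
                    key=lambda i: shows[i][1] / shows[i][0])
        sol[worst] = 0
    for i in range(len(sol)):
        if not sol[i] and used(sol) + shows[i][0] <= SPACE:
            sol[i] = 1
    return sol

pop = [repair([random.randint(0, 1) for _ in shows]) for _ in range(19)]
pop.append(greedy())
for gen in range(40):
    pop.sort(key=viewers, reverse=True)
    children = pop[:5]                                    # elitism
    while len(children) < 20:
        a, b = random.sample(pop[:10], 2)
        cut = random.randrange(len(shows))
        child = [g if random.random() < 0.95 else 1 - g   # bit-flip mutation
                 for g in a[:cut] + b[cut:]]              # one-point crossover
        children.append(repair(child))
    pop = children

best = max(pop, key=viewers)
print("viewers %d using %d of %d seconds" % (viewers(best), used(best), SPACE))
```

Because every child passes through repair, the population stays feasible throughout, and the greedy seed guarantees the GA never finishes worse than the heuristic alone.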
Motion Cueing Algorithm Development: Human-Centered Linear and Nonlinear Approaches
NASA Technical Reports Server (NTRS)
Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.
2005-01-01
While the performance of flight simulator motion system hardware has advanced substantially, the development of the motion cueing algorithm, the software that transforms simulated aircraft dynamics into realizable motion commands, has not kept pace. Prior research identified viable features from two algorithms: the nonlinear "adaptive algorithm" and the "optimal algorithm", which incorporates human vestibular models. A novel approach to motion cueing, the "nonlinear algorithm", is introduced that combines features from both approaches. This algorithm is formulated by optimal control and incorporates a new integrated perception model that includes both visual and vestibular sensation and the interaction between the stimuli. Using a time-varying control law, the matrix Riccati equation is updated in real time by a neurocomputing approach. Preliminary pilot testing resulted in the optimal algorithm, incorporating a new otolith model, producing improved motion cues. The nonlinear algorithm's vertical mode produced a motion cue with a time-varying washout, sustaining small cues for longer durations and washing out large cues more quickly compared to the optimal algorithm. The inclusion of the integrated perception model improved the responses to longitudinal and lateral cues. False cues observed with the NASA adaptive algorithm were absent. The neurocomputing approach was crucial in that the number of presentations of an input vector could be reduced to meet the real-time requirement without degrading the quality of the motion cues.
Wang, Wenliang; Wang, Haiyan; Yang, Weijia; Zhu, Yunnong; Li, Guoqiang
2016-01-01
High-quality GaN epitaxial films have been grown on Si substrates with Al buffer layer by the combination of molecular beam epitaxy (MBE) and pulsed laser deposition (PLD) technologies. MBE is used to grow Al buffer layer at first, and then PLD is deployed to grow GaN epitaxial films on the Al buffer layer. The surface morphology, crystalline quality, and interfacial property of as-grown GaN epitaxial films on Si substrates are studied systematically. The as-grown ~300 nm-thick GaN epitaxial films grown at 850 °C with ~30 nm-thick Al buffer layer on Si substrates show high crystalline quality with the full-width at half-maximum (FWHM) for GaN(0002) and GaN(102) X-ray rocking curves of 0.45° and 0.61°, respectively; very flat GaN surface with the root-mean-square surface roughness of 2.5 nm; as well as the sharp and abrupt GaN/AlGaN/Al/Si hetero-interfaces. Furthermore, the corresponding growth mechanism of GaN epitaxial films grown on Si substrates with Al buffer layer by the combination of MBE and PLD is hence studied in depth. This work provides a novel and simple approach for the epitaxial growth of high-quality GaN epitaxial films on Si substrates. PMID:27101930
NASA Technical Reports Server (NTRS)
Li, C.-J.; Sun, Q.; Lagowski, J.; Gatos, H. C.
1985-01-01
The microscale characterization of electronic defects in semi-insulating (SI) GaAs has been a challenging issue in connection with materials problems encountered in GaAs IC technology. The main obstacle limiting the applicability of high-resolution electron beam methods such as electron beam-induced current (EBIC) and cathodoluminescence (CL) is the low concentration of free carriers in SI GaAs. The present paper provides a new photo-EBIC characterization approach which combines the spectroscopic advantages of optical methods with the high spatial resolution and scanning capability of EBIC. A scanning electron microscope modified for electronic characterization studies is shown schematically. The instrument can operate in the standard SEM mode, in the EBIC modes (including photo-EBIC and thermally stimulated EBIC, TS-EBIC), and in the cathodoluminescence (CL) and scanning modes. Attention is given to the use of the CL, photo-EBIC, and TS-EBIC techniques.
NASA Astrophysics Data System (ADS)
Anderson, Richard P.
An algorithm for precision approach guidance using GPS and a MicroElectroMechanical Systems/Inertial Navigation System (MEMS/INS) has been developed to meet the Required Navigation Performance (RNP) at a cost that is suitable for General Aviation (GA) applications. This scheme allows for accurate approach guidance (Category I) using the Wide Area Augmentation System (WAAS) at locations not served by ILS, MLS, or other types of precision landing guidance, thereby greatly expanding the number of usable airports in poor weather. At locations served by a Local Area Augmentation System (LAAS), Category III-like navigation is possible with the novel idea of a Missed Approach Time (MAT) that is similar to a Missed Approach Point (MAP) but not fixed in space. Though certain augmented types of GPS have sufficient precision for approach navigation, their use alone is insufficient to meet RNP due to an inability to monitor loss, degradation, or intentional spoofing and meaconing of the GPS signal. A redundant navigation system and a health monitoring system must be added to achieve the reliability, safety, and time-to-alert stated by the required navigation performance. An inertial navigation system is the best choice, as it requires no external radio signals and its errors are complementary to GPS. An aiding Kalman filter is used to derive parameters that monitor the correlation between the GPS and MEMS/INS. These approach guidance parameters determine the MAT for a given RNP and provide the pilot or autopilot with a proceed/do-not-proceed decision in real time. The enabling technology used to derive the guidance program is a MEMS gyroscope and accelerometer package in conjunction with a single-antenna pseudo-attitude algorithm. To be viable for most GA applications, the hardware must be reasonably priced. MEMS gyros allow the first cost-effective INS package to be developed. With lower cost, however, come higher drift rates and more dependence on GPS aiding. In
Monte Carlo simulation of the kinetic effects on GaAs/GaAs(001) MBE growth
NASA Astrophysics Data System (ADS)
Ageev, Oleg A.; Solodovnik, Maxim S.; Balakirev, Sergey V.; Mikhaylin, Ilya A.; Eremenko, Mikhail M.
2017-01-01
The molecular beam epitaxial growth of GaAs on the GaAs(001)-(2×4) surface is investigated using a kinetic Monte Carlo-based method. The developed algorithm makes it possible to focus on the kinetic effects over a wide range of growth conditions and enables considerable computational speedup. The simulation results show that the growth rate has a dramatic influence upon both the island morphology and the Ga surface diffusion length. The average island size decreases with increasing growth rate, while the island density increases with increasing growth rate as well as with the As4/Ga beam equivalent pressure ratio. As the growth rate increases, the island density becomes less dependent on the As4/Ga pressure ratio and approaches a saturation value. We also discuss three characteristics of Ga surface diffusion, namely the diffusion length of the first-deposited Ga adatom, the average diffusion length, and the island spacing as an average distance between islands. The calculations show that the As4/Ga pressure ratio dependences of these characteristics obey the same law, but with different coefficients. An increase of the As4/Ga pressure ratio leads to a decrease in both the diffusion length and the island spacing. However, its influence becomes stronger with increasing growth rate for the first Ga adatom's diffusion length and weaker for the average diffusion length and for the island spacing.
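A rejection-free kinetic Monte Carlo loop of the kind used in such growth studies can be sketched in one dimension with just two event classes, deposition and adatom hopping, plus irreversible attachment. The lattice size and rates are toy values, not the paper's GaAs(001) model, and the 1-D geometry is only illustrative; the flux dependence of the island count echoes the trend the abstract reports.

```python
import random
random.seed(4)

L = 200          # lattice sites (1-D toy substrate)
D_RATE = 1e5     # adatom hop rate (s^-1)

def grow(flux, coverage=0.2):
    """Rejection-free KMC: deposition vs. diffusion, irreversible attachment."""
    occ = [0] * L
    adatoms = set()          # mobile single atoms (no occupied neighbor)
    t = 0.0
    deposited = 0
    target = int(coverage * L)
    while deposited < target:
        r_dep = flux * L                   # total deposition rate
        r_dif = D_RATE * len(adatoms)      # total diffusion rate
        total = r_dep + r_dif
        t += random.expovariate(total)     # advance clock (Gillespie step)
        if random.random() < r_dep / total:
            s = random.randrange(L)        # deposit on a random empty site
            if occ[s]:
                continue
            occ[s] = 1
            deposited += 1
            adatoms.add(s)
        else:
            s = random.choice(list(adatoms))
            ns = (s + random.choice((-1, 1))) % L   # hop left or right
            if occ[ns]:
                continue
            occ[s], occ[ns] = 0, 1
            adatoms.discard(s)
            adatoms.add(ns)
        # freeze atoms that have gained an occupied neighbor (attachment)
        for a in list(adatoms):
            if occ[(a - 1) % L] or occ[(a + 1) % L]:
                adatoms.discard(a)
    return sum(1 for i in range(L) if occ[i] and not occ[i - 1])  # island count

slow = grow(flux=1.0)
fast = grow(flux=100.0)
print("islands at 20%% coverage: slow flux %d, fast flux %d" % (slow, fast))
```

With the hop rate fixed, a higher flux leaves adatoms less time to diffuse before meeting a new neighbor, so nucleation wins over attachment and the island density rises, the same qualitative growth-rate dependence described above.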
Adaptive quasi-Newton algorithm for source extraction via CCA approach.
Zhang, Wei-Tao; Lou, Shun-Tian; Feng, Da-Zheng
2014-04-01
This paper addresses the problem of adaptive source extraction via the canonical correlation analysis (CCA) approach. Based on Liu's analysis of the CCA approach, we propose a new criterion for source extraction, which is proved to be equivalent to the CCA criterion. Then, a fast and efficient online algorithm using quasi-Newton iteration is developed. The stability of the algorithm is analyzed using Lyapunov's method, which shows that the proposed algorithm asymptotically converges to the global minimum of the criterion. Simulation results are presented to confirm our theoretical analysis and demonstrate the merits of the proposed algorithm in terms of convergence speed and success rate for source extraction.
A genetic algorithm approach in interface and surface structure optimization
Zhang, Jian
2010-01-01
The thesis is divided into two parts. In the first part, a global optimization method is developed for interface and surface structure optimization. Two prototype systems are chosen for study: one is Si[001] symmetric tilted grain boundaries and the other is the Ag/Au-induced Si(111) surface. It is found that the Genetic Algorithm is very efficient in finding lowest-energy structures in both cases. Not only can structures existing in experiments be reproduced, but many new structures can also be predicted using the Genetic Algorithm. It is thus shown that the Genetic Algorithm is an extremely powerful tool for materials structure prediction. The second part of the thesis is devoted to the explanation of an experimental observation of thermal radiation from three-dimensional tungsten photonic crystal structures. The experimental results seemed astounding and confusing, yet the theoretical models in the paper revealed the physical insight behind the phenomena and reproduced the experimental results well.
Chen, Hong-Yan; Zhao, Geng-Xing; Li, Xi-Can; Wang, Xiang-Feng; Li, Yu-Ling
2013-11-01
Taking Qihe County in Shandong Province of East China as the study area, soil samples were collected from the field. Based on hyperspectral reflectance measurements of the soil samples and a first-derivative transformation, the spectra were denoised and compressed by the discrete wavelet transform (DWT), the variables for the soil alkali-hydrolysable nitrogen quantitative estimation models were selected by genetic algorithms (GA), and the estimation models for soil alkali-hydrolysable nitrogen content were built using partial least squares (PLS) regression. The discrete wavelet transform and genetic algorithm combined with partial least squares (DWT-GA-PLS) could not only compress the spectral variables and reduce the number of model variables, but also improve the quantitative estimation accuracy of soil alkali-hydrolysable nitrogen content. Based on the level 1-2 low-frequency coefficients of the discrete wavelet transform, and despite a large-scale reduction of spectral variables, the calibration models achieved prediction accuracy equal to or higher than that of the full soil spectra. The model based on the second-level low-frequency coefficients had the highest precision, with a prediction R2 of 0.85, an RMSE of 8.11 mg x kg(-1), and an RPD of 2.53, indicating the effectiveness of the DWT-GA-PLS method in estimating soil alkali-hydrolysable nitrogen content.
A Genetic Algorithm Approach to Nonlinear Least Squares Estimation
ERIC Educational Resources Information Center
Olinsky, Alan D.; Quinn, John T.; Mangiameli, Paul M.; Chen, Shaw K.
2004-01-01
A common type of problem encountered in mathematics is optimizing nonlinear functions. Many popular algorithms that are currently available for finding nonlinear least squares estimators, a special class of nonlinear problems, are sometimes inadequate. They might not converge to an optimal value, or if they do, it could be to a local rather than…
Performance analysis of LVQ algorithms: a statistical physics approach.
Ghosh, Anarta; Biehl, Michael; Hammer, Barbara
2006-01-01
Learning vector quantization (LVQ) constitutes a powerful and intuitive method for adaptive nearest prototype classification. However, original LVQ has been introduced based on heuristics and numerous modifications exist to achieve better convergence and stability. Recently, a mathematical foundation by means of a cost function has been proposed which, as a limiting case, yields a learning rule similar to classical LVQ2.1. It also motivates a modification which shows better stability. However, the exact dynamics as well as the generalization ability of many LVQ algorithms have not been thoroughly investigated so far. Using concepts from statistical physics and the theory of on-line learning, we present a mathematical framework to analyse the performance of different LVQ algorithms in a typical scenario in terms of their dynamics, sensitivity to initial conditions, and generalization ability. Significant differences in the algorithmic stability and generalization ability can be found already for slightly different variants of LVQ. We study five LVQ algorithms in detail: Kohonen's original LVQ1, unsupervised vector quantization (VQ), a mixture of VQ and LVQ, LVQ2.1, and a variant of LVQ which is based on a cost function. Surprisingly, basic LVQ1 shows very good performance in terms of stability, asymptotic generalization ability, and robustness to initializations and model parameters which, in many cases, is superior to recent alternative proposals.
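Basic LVQ1, whose strong empirical performance the study highlights, reduces to a one-line prototype update: attract the winning prototype toward a sample of its own class, repel it otherwise. A minimal sketch on synthetic two-class data follows; all parameters (learning rate, class centers, one prototype per class) are illustrative.

```python
import random
random.seed(5)

# Two Gaussian classes in the plane (toy data).
def sample(n, cx, cy, label):
    return [((random.gauss(cx, 0.5), random.gauss(cy, 0.5)), label)
            for _ in range(n)]

data = sample(100, 0.0, 0.0, 0) + sample(100, 2.0, 2.0, 1)
random.shuffle(data)

# One prototype per class, initialized between the class centers.
protos = [[0.5, 0.5], [1.5, 1.5]]
labels = [0, 1]

def nearest(x):
    d = [(px - x[0]) ** 2 + (py - x[1]) ** 2 for px, py in protos]
    return d.index(min(d))

eta = 0.05
for epoch in range(20):
    for x, y in data:
        w = nearest(x)
        sign = 1.0 if labels[w] == y else -1.0              # LVQ1 rule:
        protos[w][0] += sign * eta * (x[0] - protos[w][0])  # attract if correct,
        protos[w][1] += sign * eta * (x[1] - protos[w][1])  # repel if wrong

acc = sum(labels[nearest(x)] == y for x, y in data) / len(data)
print("training accuracy: %.2f" % acc)
```

On well-separated classes the prototypes settle near the class means, which is the stable asymptotic behavior the statistical-physics analysis attributes to LVQ1.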
A compensatory algorithm for the slow-down effect on constant-time-separation approaches
NASA Technical Reports Server (NTRS)
Abbott, Terence S.
1991-01-01
In seeking methods to improve airport capacity, the question arose as to whether an electronic display could provide information enabling the pilot to be responsible for self-separation under instrument conditions, allowing the practical implementation of reduced-separation, multiple glide path approaches. A time-based, closed-loop algorithm was developed and simulator-validated for in-trail (one aircraft behind the other) approach and landing. The algorithm was designed to reduce the effects of approach speed reduction prior to landing for the trailing aircraft, as well as the dispersion of the interarrival times. The operational task for the validation was an instrument approach to landing while following a single lead aircraft on the same approach path. The desired landing separation was 60 seconds for these approaches. An open-loop algorithm, previously developed, was used as a basis for comparison. The results showed that, relative to the open-loop algorithm, the closed-loop one could theoretically provide a 6% increase in runway throughput. Also, the use of the closed-loop algorithm did not affect path tracking performance, and pilot comments indicated that the guidance from the closed-loop algorithm would be acceptable from an operational standpoint. From these results, it is concluded that by using a time-based, closed-loop spacing algorithm, precise interarrival time intervals may be achievable with operationally acceptable pilot workload.
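The closed-loop idea, commanding the trailing aircraft's speed from the measured time-separation error rather than flying a fixed profile, can be sketched with a toy point-mass simulation. The gain, speeds, and limits below are invented and the dynamics are grossly simplified; this is not the validated algorithm, only the feedback structure.

```python
# Discrete-time toy simulation of time-based in-trail spacing.
DT = 1.0           # time step, s
T_SEP = 60.0       # desired landing separation, s
K = 1.0            # feedback gain, (m/s) of speed change per second of error

v_lead, v_trail = 70.0, 70.0          # approach speeds, m/s
x_lead, x_trail = 0.0, -75.0 * 70.0   # trail starts about 75 s behind

for step in range(600):
    gap_time = (x_lead - x_trail) / v_trail   # current time separation, s
    # Closed-loop law: speed up when the gap is too big, slow when too small,
    # bounded by an assumed operational speed envelope.
    v_trail = max(60.0, min(80.0, 70.0 + K * (gap_time - T_SEP)))
    x_lead += v_lead * DT
    x_trail += v_trail * DT

gap_time = (x_lead - x_trail) / v_trail
print("time separation after 600 s: %.1f s" % gap_time)
```

Because the loop is closed on time separation rather than distance, a late speed reduction by the lead aircraft is automatically compensated, which is the mechanism by which interarrival-time dispersion shrinks.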
A Graph-Algorithmic Approach for the Study of Metastability in Markov Chains
NASA Astrophysics Data System (ADS)
Gan, Tingyue; Cameron, Maria
2017-01-01
Large continuous-time Markov chains with exponentially small transition rates arise in modeling complex systems in physics, chemistry, and biology. We propose a constructive graph-algorithmic approach to determine the sequence of critical timescales at which the qualitative behavior of a given Markov chain changes, and give an effective description of the dynamics on each of them. This approach is valid for both time-reversible and time-irreversible Markov processes, with or without symmetry. Central to this approach are two graph algorithms, Algorithm 1 and Algorithm 2, for obtaining the sequences of the critical timescales and the hierarchies of Typical Transition Graphs or T-graphs indicating the most likely transitions in the system without and with symmetry, respectively. The sequence of critical timescales includes the subsequence of the reciprocals of the real parts of eigenvalues. Under a certain assumption, we prove sharp asymptotic estimates for eigenvalues (including pre-factors) and show how one can extract them from the output of Algorithm 1. We discuss the relationship between Algorithms 1 and 2 and explain how one needs to interpret the output of Algorithm 1 if it is applied in the case with symmetry instead of Algorithm 2. Finally, we analyze an example motivated by R. D. Astumian's model of the dynamics of kinesin, a molecular motor, by means of Algorithm 2.
Identification of the Roessler system: algebraic approach and genetic algorithms
NASA Astrophysics Data System (ADS)
Ibanez, C. A.; Sanchez, J. H.; Suarez, M. S. C.; Flores, F. A.; Garrido, R. M.; Martinez, R. G.
2005-10-01
This article presents a method to determine the parameters of the Rössler attractor very accurately by means of observations of an available variable. It is shown that the system is observable and algebraically identifiable with respect to the chosen output. This fact allows us to construct a differential parametrization of the output and its derivatives. Using this parametrization, an identification scheme based on least mean squares is established and the solution is found with a genetic algorithm.
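The differential parametrization can be sketched numerically. Below, a trajectory of the Rössler system (x' = -y - z, y' = x + ay, z' = b + z(x - c)) is simulated, derivatives are approximated by central differences, and (a, b, c) are recovered by least squares; for brevity the least-squares step is solved directly with numpy here rather than with a genetic algorithm as in the paper.

```python
import numpy as np

def rossler_rhs(s, a, b, c):
    x, y, z = s
    return np.array([-y - z, x + a * y, b + z * (x - c)])

def rk4_step(s, dt, a, b, c):
    k1 = rossler_rhs(s, a, b, c)
    k2 = rossler_rhs(s + 0.5 * dt * k1, a, b, c)
    k3 = rossler_rhs(s + 0.5 * dt * k2, a, b, c)
    k4 = rossler_rhs(s + dt * k3, a, b, c)
    return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# "Observed" trajectory generated with the true parameters (0.2, 0.2, 5.7).
dt, n = 0.001, 20000
traj = np.empty((n, 3))
s = np.array([1.0, 1.0, 1.0])
for i in range(n):
    traj[i] = s
    s = rk4_step(s, dt, 0.2, 0.2, 5.7)

x, y, z = traj[:, 0], traj[:, 1], traj[:, 2]
dy = (y[2:] - y[:-2]) / (2 * dt)          # central-difference derivatives
dz = (z[2:] - z[:-2]) / (2 * dt)
xi, yi, zi = x[1:-1], y[1:-1], z[1:-1]

# Differential parametrization: y' = x + a*y gives a by least squares,
# and z' = b + x*z - c*z is linear in (b, c).
a_est = float(np.sum((dy - xi) * yi) / np.sum(yi * yi))
A = np.column_stack([np.ones_like(zi), -zi])
b_est, c_est = np.linalg.lstsq(A, dz - xi * zi, rcond=None)[0]
```

Because the model is linear in its parameters once the output derivatives are available, the estimates land very close to the true values (0.2, 0.2, 5.7).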
Random Matrix Approach to Quantum Adiabatic Evolution Algorithms
NASA Technical Reports Server (NTRS)
Boulatov, Alexei; Smelyanskiy, Vadier N.
2004-01-01
We analyze the power of quantum adiabatic evolution algorithms (QAEA) for solving random NP-hard optimization problems within a theoretical framework based on random matrix theory (RMT). We present two types of driven RMT models. In the first model, the driving Hamiltonian is represented by Brownian motion in the matrix space. We use the Brownian motion model to obtain a description of multiple avoided crossing phenomena. We show that the failure mechanism of the QAEA is due to the interaction of the ground state with the "cloud" formed by all the excited states, confirming that in the driven RMT models the Landau-Zener mechanism of dissipation is not important. We show that the QAEA has a finite probability of success in a certain range of parameters, implying the polynomial complexity of the algorithm. The second model corresponds to the standard QAEA with the problem Hamiltonian taken from the Gaussian unitary RMT ensemble (GUE). We show that the level dynamics in this model can be mapped onto the dynamics in the Brownian motion model. However, the driven RMT model always leads to the exponential complexity of the algorithm due to the presence of long-range intertemporal correlations of the eigenvalues. Our results indicate that the weakness of effective transitions is the leading effect that can make the Markovian-type QAEA successful.
GA-ANFIS Expert System Prototype for Prediction of Dermatological Diseases.
Begic Fazlic, Lejla; Avdagic, Korana; Omanovic, Samir
2015-01-01
This paper presents a novel GA-ANFIS expert system prototype for dermatological disease detection using dermatological features and diagnoses collected in real conditions. Nine dermatological features are used as inputs to classifiers based on Adaptive Neuro-Fuzzy Inference Systems (ANFIS) for the first level of fuzzy model optimization. They are then used as inputs to a Genetic Algorithm (GA) for the second level of fuzzy model optimization within the GA-ANFIS system, which thus performs optimization in two steps. Modelling and validation of the novel GA-ANFIS approach are performed in the MATLAB environment using a validation data set. Analysis of the GA-ANFIS yielded some conclusions concerning the impact of individual features on the detection of dermatological diseases. We compared GA-ANFIS and ANFIS results; the comparison confirmed that the proposed GA-ANFIS model achieved higher accuracy rates than the ANFIS model.
Zaki, Mohammad Reza; Varshosaz, Jaleh; Fathi, Milad
2015-05-20
The multivariate nature of manufacturing drug-loaded nanospheres, in terms of the multiplicity of involved factors, makes it a time-consuming and expensive process. In this study, a genetic algorithm (GA) and an artificial neural network (ANN), two tools inspired by natural processes, were employed to optimize and simulate the manufacturing of agar nanospheres. The efficiency of the GA was evaluated against response surface methodology (RSM). The studied responses included particle size, polydispersity index, zeta potential, drug loading, and release efficiency. The GA predicted greater extremum values for the response factors than RSM, although real values showed some deviations from the predicted data. Good agreement was found between ANN-predicted and real values for all five response factors, with high correlation coefficients. The GA was more successful than RSM in optimization and, together with the ANN, proved an efficient tool for optimizing and modeling the fabrication of drug-loaded agar nanospheres.
NASA Astrophysics Data System (ADS)
Mukherjee, Bijoy K.; Metia, Santanu
2009-10-01
The paper is divided into three parts. The first part gives a brief introduction to the overall paper, to fractional order PID (PIλDμ) controllers, and to the Genetic Algorithm (GA). The second part first studies how the performance of an integer order PID controller deteriorates when implemented with lossy capacitors in its analog realization, and then shows that the lossy capacitors can be effectively modeled by fractional order terms. A novel GA-based method is then proposed to tune the controller parameters such that the original performance is retained even when realized with the same lossy capacitors; simulation results validate the usefulness of the method. Some Ziegler-Nichols type tuning rules for the design of fractional order PID controllers have been proposed in the literature [11]. The third part proposes a novel GA-based method for obtaining equivalent integer order PID controllers that give performance similar to that of the fractional order PID controllers, thereby removing the complexity involved in implementing the latter. Extensive simulation results show that the equivalent integer order PID controllers largely retain the robustness and iso-damping properties of the original fractional order PID controllers. Simulation results also show that the equivalent integer order PID controllers are more robust than conventional Ziegler-Nichols tuned PID controllers.
An Approach to the Programming of Biased Regression Algorithms.
1978-11-01
Due to the near nonexistence of computer algorithms for calculating estimators and ancillary statistics that are needed for biased regression methodologies, many users of these methodologies are forced to write their own programs. Brute-force coding of such programs can result in a great waste of computer core and computing time, as well as inefficient and inaccurate computing techniques. This article proposes some guides to more efficient programming by taking advantage of mathematical similarities among several of the more popular biased regression estimators.
Genetic-based EM algorithm for learning Gaussian mixture models.
Pernkopf, Franz; Bouchaffra, Djamel
2005-08-01
We propose a genetic-based expectation-maximization (GA-EM) algorithm for learning Gaussian mixture models from multivariate data. This algorithm is capable of selecting the number of components of the model using the minimum description length (MDL) criterion. Our approach benefits from the properties of genetic algorithms (GA) and the EM algorithm by combining both into a single procedure. The population-based stochastic search of the GA explores the search space more thoroughly than the EM method; therefore, our algorithm enables escaping from locally optimal solutions, since it is less sensitive to its initialization. The GA-EM algorithm is elitist, which maintains the monotonic convergence property of the EM algorithm. Experiments on simulated and real data show that GA-EM outperforms the EM method in two respects: 1) it obtains a better MDL score while using exactly the same termination condition for both algorithms; and 2) it identifies the number of components used to generate the underlying data more often than the EM algorithm.
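The MDL-based model selection at the heart of GA-EM can be sketched in one dimension. Plain EM with a deterministic quantile initialization stands in for the genetic search, and the penalty 0.5·p·log N with p free parameters is one common MDL form; both are assumptions of this sketch:

```python
import numpy as np

def em_gmm_1d(x, k, iters=100):
    """Plain EM for a 1-D Gaussian mixture; deterministic quantile initialization."""
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    ll = -np.inf
    for _ in range(iters):
        d = x[:, None] - mu[None, :]
        logp = -0.5 * d**2 / var - 0.5 * np.log(2 * np.pi * var) + np.log(w)
        m = logp.max(axis=1, keepdims=True)
        p = np.exp(logp - m)
        ll = float(np.sum(m[:, 0] + np.log(p.sum(axis=1))))   # log-likelihood
        r = p / p.sum(axis=1, keepdims=True)                  # E-step
        nk = r.sum(axis=0)                                    # M-step
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = np.maximum((r * (x[:, None] - mu)**2).sum(axis=0) / nk, 1e-3)
    return ll

def mdl(ll, k, n):
    # Two-part MDL: -log-likelihood + 0.5 * (free parameters) * log(n).
    return -ll + 0.5 * (3 * k - 1) * np.log(n)

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-3, 1, 300), rng.normal(3, 1, 300)])
scores = {k: mdl(em_gmm_1d(x, k), k, len(x)) for k in range(1, 5)}
best_k = min(scores, key=scores.get)
```

For data drawn from two well-separated Gaussians, the log-likelihood gain from a third component is smaller than the MDL penalty, so k = 2 is selected; the GA-EM of the paper performs this search over populations of mixtures rather than one deterministic EM run per k.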
Random matrix approach to quantum adiabatic evolution algorithms
Boulatov, A.; Smelyanskiy, V.N.
2005-05-15
We analyze the power of the quantum adiabatic evolution algorithm (QAA) for solving random computationally hard optimization problems within a theoretical framework based on random matrix theory (RMT). We present two types of driven RMT models. In the first model, the driving Hamiltonian is represented by Brownian motion in the matrix space. We use the Brownian motion model to obtain a description of multiple avoided crossing phenomena. We show that nonadiabatic corrections in the QAA are due to the interaction of the ground state with the 'cloud' formed by most of the excited states, confirming that in driven RMT models, the Landau-Zener scenario of pairwise level repulsions is not relevant for the description of nonadiabatic corrections. We show that the QAA has a finite probability of success in a certain range of parameters, implying a polynomial complexity of the algorithm. The second model corresponds to the standard QAA with the problem Hamiltonian taken from the RMT Gaussian unitary ensemble (GUE). We show that the level dynamics in this model can be mapped onto the dynamics in the Brownian motion model. For this reason, the driven GUE model can also lead to polynomial complexity of the QAA. The main contribution to the failure probability of the QAA comes from the nonadiabatic corrections to the eigenstates, which only depend on the absolute values of the transition amplitudes. Due to the mapping between the two models, these absolute values are the same in both cases. Our results indicate that this 'phase irrelevance' is the leading effect that can make both the Markovian- and GUE-type QAAs successful.
Scheduling language and algorithm development study. Appendix: Study approach and activity summary
NASA Technical Reports Server (NTRS)
1974-01-01
The approach and organization of the study to develop a high level computer programming language and a program library are presented. The algorithm and problem modeling analyses are summarized. The approach used to identify and specify the capabilities required in the basic language is described. Results of the analyses used to define specifications for the scheduling module library are presented.
Classification Algorithms for Big Data Analysis, a Map Reduce Approach
NASA Astrophysics Data System (ADS)
Ayma, V. A.; Ferreira, R. S.; Happ, P.; Oliveira, D.; Feitosa, R.; Costa, G.; Plaza, A.; Gamba, P.
2015-03-01
For many years the scientific community has been concerned with increasing the accuracy of classification methods, and major achievements have been made. Besides this issue, the increasing amount of data generated every day by remote sensors raises further challenges to be overcome. In this work, a tool within the scope of the InterIMAGE Cloud Platform (ICP), an open-source, distributed framework for automatic image interpretation, is presented. The tool, named ICP: Data Mining Package, is able to perform supervised classification procedures on huge amounts of data, usually referred to as big data, on a distributed infrastructure using Hadoop MapReduce. The tool has four classification algorithms implemented, taken from WEKA's machine learning library, namely: Decision Trees, Naïve Bayes, Random Forest, and Support Vector Machines (SVM). The results of an experimental analysis using an SVM classifier on data sets of different sizes and different cluster configurations demonstrate the potential of the tool, as well as the aspects that affect its performance.
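The MapReduce pattern the tool builds on can be illustrated with a toy in-memory version (a sketch, not the ICP: Data Mining Package API): each map task emits per-class sufficient statistics for its data chunk, a reduce step merges them, and the merged statistics define a Gaussian naive Bayes classifier.

```python
import numpy as np

def map_stats(chunk_X, chunk_y):
    """Map step: per-class count, feature sums, and sums of squares for one chunk."""
    out = {}
    for c in np.unique(chunk_y):
        Xc = chunk_X[chunk_y == c]
        out[int(c)] = (len(Xc), Xc.sum(axis=0), (Xc**2).sum(axis=0))
    return out

def reduce_stats(partials):
    """Reduce step: merge the per-chunk statistics."""
    merged = {}
    for part in partials:
        for c, (n, s, ss) in part.items():
            if c in merged:
                n0, s0, ss0 = merged[c]
                merged[c] = (n0 + n, s0 + s, ss0 + ss)
            else:
                merged[c] = (n, s, ss)
    return merged

def predict(stats, X):
    total = sum(n for n, _, _ in stats.values())
    classes, scores = [], []
    for c, (n, s, ss) in stats.items():
        mu = s / n
        var = ss / n - mu**2 + 1e-6
        logp = (np.log(n / total)
                - 0.5 * np.sum(np.log(2 * np.pi * var))
                - 0.5 * ((X - mu)**2 / var).sum(axis=1))
        classes.append(c)
        scores.append(logp)
    return np.array(classes)[np.argmax(scores, axis=0)]

# Toy data: two Gaussian blobs, split into 4 "chunks" as a cluster would split them.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(4, 1, (200, 2))])
y = np.array([0] * 200 + [1] * 200)
idx = rng.permutation(400)
X, y = X[idx], y[idx]
chunks = [(X[i::4], y[i::4]) for i in range(4)]
stats = reduce_stats([map_stats(cx, cy) for cx, cy in chunks])
acc = float(np.mean(predict(stats, X) == y))
```

The point of the sketch is that the training state is a small, mergeable summary, which is exactly what makes algorithms like naive Bayes fit the MapReduce execution model.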
Effective and efficient optics inspection approach using machine learning algorithms
Abdulla, G; Kegelmeyer, L; Liao, Z; Carr, W
2010-11-02
The Final Optics Damage Inspection (FODI) system automatically acquires images of the final optics at the National Ignition Facility (NIF), and the Optics Inspection (OI) system analyzes them. During each inspection cycle, up to 1000 images acquired by FODI are examined by OI to identify and track damage sites on the optics. The process of tracking growing damage sites on the surface of an optic can be made more effective by identifying and removing signals associated with debris or reflections. The manual process to filter these false sites is daunting and time consuming. In this paper we discuss the use of machine learning tools and data mining techniques to help with this task. We describe the process of preparing a data set that can be used for training and for identifying hardware reflections in the image data. To collect training data, the images are first automatically acquired and analyzed with existing software, and then relevant features such as spatial, physical, and luminosity measures are extracted for each site. A subset of these sites is 'truthed', or manually assigned a class, to create training data. A supervised classification algorithm is used to test whether the features can predict the class membership of new sites. A suite of self-configuring machine learning tools called 'Avatar Tools' is applied to classify all sites. Verification with 10-fold cross-validation showed accuracy above 99%. This substantially reduces the number of false alarms that would otherwise be sent for more extensive investigation.
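The truth-then-classify workflow can be sketched with synthetic stand-ins for the per-site features; the Avatar Tools suite itself is not public, so a nearest-centroid classifier and 10-fold cross-validation stand in for it here:

```python
import numpy as np

# Synthetic stand-ins for per-site features (e.g., size, brightness):
# class 0 = hardware reflection, class 1 = real damage site. Made-up data.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(-1.5, 1, (250, 3)), rng.normal(1.5, 1, (250, 3))])
y = np.array([0] * 250 + [1] * 250)
order = rng.permutation(len(y))
X, y = X[order], y[order]

def nearest_centroid_cv(X, y, folds=10):
    """10-fold cross-validation of a nearest-centroid classifier."""
    n = len(y)
    fold_id = np.arange(n) % folds
    correct = 0
    for f in range(folds):
        tr, te = fold_id != f, fold_id == f
        centroids = np.stack([X[tr & (y == c)].mean(axis=0) for c in (0, 1)])
        dists = ((X[te][:, None, :] - centroids[None, :, :])**2).sum(axis=2)
        correct += int(np.sum(dists.argmin(axis=1) == y[te]))
    return correct / n

accuracy = nearest_centroid_cv(X, y)
```

The held-out folds play the role of the 'truthed' sites in the abstract: accuracy is estimated only on sites the classifier never saw during training.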
Genetic algorithm based image binarization approach and its quantitative evaluation via pooling
NASA Astrophysics Data System (ADS)
Hu, Huijun; Liu, Ya; Liu, Maofu
2015-12-01
The binarized image is critical to image visual feature extraction, especially shape features, and image binarization approaches have attracted increasing attention in the past decades. In this paper, a genetic algorithm is applied to optimize the binarization threshold of strip steel defect images. To evaluate our genetic algorithm based image binarization approach quantitatively, we propose a novel pooling-based evaluation metric, motivated by the information retrieval community, to cope with the lack of ground-truth binary images. Experimental results show that our genetic algorithm based binarization approach is effective and efficient on the strip steel defect images, and that our pooling-based quantitative evaluation metric for image binarization is also feasible and practical.
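A minimal version of the idea: a real-coded genetic algorithm searching for the binarization threshold that maximizes Otsu's between-class variance on a synthetic bimodal gray-level distribution. The fitness function and data are assumptions of this sketch; the strip steel defect images and the pooling metric are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic "image": dark background around gray level 60, bright defects near 180.
pixels = np.concatenate([
    rng.normal(60, 15, 8000), rng.normal(180, 20, 2000),
]).clip(0, 255)

def between_class_variance(t):
    """Otsu's criterion: weighted variance between the two thresholded classes."""
    lo, hi = pixels[pixels < t], pixels[pixels >= t]
    if len(lo) == 0 or len(hi) == 0:
        return 0.0
    w0, w1 = len(lo) / len(pixels), len(hi) / len(pixels)
    return w0 * w1 * (lo.mean() - hi.mean())**2

# Simple elitist, real-coded GA over the threshold in [1, 254].
pop = rng.uniform(1, 254, 20)
for _ in range(40):
    fit = np.array([between_class_variance(t) for t in pop])
    elite = pop[np.argsort(fit)[-10:]]            # keep the best half
    children = elite + rng.normal(0, 5, 10)       # mutated copies
    pop = np.concatenate([elite, children]).clip(1, 254)
best = pop[np.argmax([between_class_variance(t) for t in pop])]
```

On a one-dimensional search space an exhaustive scan is of course feasible; the GA formulation pays off when the threshold is replaced by a higher-dimensional binarization parameter vector.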
Brasier, Martin D; Antcliffe, Jonathan; Saunders, Martin; Wacey, David
2015-04-21
New analytical approaches and discoveries are demanding fresh thinking about the early fossil record. The 1.88-Ga Gunflint chert provides an important benchmark for the analysis of early fossil preservation. High-resolution analysis of Gunflintia shows that microtaphonomy can help to resolve long-standing paleobiological questions. Novel 3D nanoscale reconstructions of the most ancient complex fossil Eosphaera reveal features hitherto unmatched in any crown-group microbe. While Eosphaera may preserve a symbiotic consortium, a stronger conclusion is that multicellular morphospace was differently occupied in the Paleoproterozoic. The 3.46-Ga Apex chert provides a test bed for claims of biogenicity of cell-like structures. Mapping plus focused ion beam milling combined with transmission electron microscopy data demonstrate that microfossil-like taxa, including species of Archaeoscillatoriopsis and Primaevifilum, are pseudofossils formed from vermiform phyllosilicate grains during hydrothermal alteration events. The 3.43-Ga Strelley Pool Formation shows that plausible early fossil candidates are turning up in unexpected environmental settings. Our data reveal how cellular clusters of unexpectedly large coccoids and tubular sheath-like envelopes were trapped between sand grains and entombed within coatings of dripstone beach-rock silica cement. These fossils come from Earth's earliest known intertidal to supratidal shoreline deposit, accumulated under aerated but oxygen poor conditions.
A discrete twin-boundary approach for simulating the magneto-mechanical response of Ni-Mn-Ga
NASA Astrophysics Data System (ADS)
Faran, Eilon; Shilo, Doron
2016-09-01
The design and optimization of ferromagnetic shape memory alloys (FSMA)-based devices require quantitative understanding of the dynamics of twin boundaries within these materials. Here, we present a discrete twin boundary modeling approach for simulating the behavior of an FSMA Ni-Mn-Ga crystal under combined magneto-mechanical loading conditions. The model is based on experimentally measured kinetic relations that describe the motion of individual twin boundaries over a wide range of velocities. The resulting calculations capture the dynamic response of Ni-Mn-Ga and reveal the relations between fundamental material parameters and actuation performance at different frequencies of the magnetic field. In particular, we show that at high field rates, the magnitude of the lattice barrier that resists twin boundary motion is the important property that determines the level of actuation strain, while the contribution of twinning stress property is minor. Consequently, type II twin boundaries, whose lattice barrier is smaller compared to type I, are expected to show better actuation performance at high rates, irrespective of the differences in the twinning stress property between the two boundary types. In addition, the simulation enables optimization of the actuation strain of a Ni-Mn-Ga crystal by adjusting the magnitude of the bias mechanical stress, thus providing direct guidelines for the design of actuating devices. Finally, we show that the use of a linear kinetic law for simulating the twinning-based response is inadequate and results in incorrect predictions.
Diode Characteristics Approaching Bulk Limits in GaAs Nanowire Array Photodetectors.
Farrell, Alan C; Senanayake, Pradeep; Meng, Xiao; Hsieh, Nick Y; Huffaker, Diana L
2017-04-12
We present the electrical properties of p-n junction photodetectors comprising vertically oriented p-GaAs nanowire arrays on an n-GaAs substrate. We measure an ideality factor as low as n = 1.0 and a rectification ratio >10^8 across all devices, with some >10^9, comparable to the best GaAs thin film photodetectors. An analysis of the Arrhenius plot of the saturation current yields an activation energy of 690 meV, approximately half the bandgap of GaAs, indicating that generation-recombination current from midgap states is the primary contributor to the leakage current at low bias. Using fully three-dimensional electrical simulations, we explain the lack of a recombination-current-dominated regime at low forward bias, as well as some of the issues related to analysis of the capacitance-voltage characteristics of nanowire devices. This work demonstrates that, through proper design and fabrication, nanowire-based devices can perform as well as their bulk counterparts.
One-qubit quantum gates in a circular graphene quantum dot: genetic algorithm approach.
Amparán, Gibrán; Rojas, Fernando; Pérez-Garrido, Antonio
2013-05-16
The aim of this work was to design and control, using a genetic algorithm (GA) for parameter optimization, the one-charge-qubit quantum logic gates σx, σy, and σz, using two bound states of circular graphene quantum dots in a homogeneous magnetic field as the qubit space. The proposed gates are implemented through quantum dynamic control of the qubit subspace with an oscillating electric field and an onsite (inside the quantum dot) gate voltage pulse with amplitude and time-width modulation, which introduce relative phases and transitions between states. Our results show that we can obtain values of fitness, or gate fidelity, close to 1 while avoiding the leakage probability to higher states. The system evolution for the gate operation is presented via the dynamics of the probability density, as well as a visualization of the pseudospin current characteristic of a graphene structure. We therefore conclude that it is possible to use the states of the graphene quantum dot (selecting the dot size and magnetic field) to design and control the qubit subspace with these two time-dependent interactions and to obtain the optimal parameters for good gate fidelity using the GA.
Prediction of Heart Attack Risk Using GA-ANFIS Expert System Prototype.
Begic Fazlic, Lejla; Avdagic, Aja; Besic, Ingmar
2015-01-01
The aim of this research is to develop a novel GA-ANFIS expert system prototype for classifying the heart disease degree of a patient using heart disease attributes (features) and diagnoses taken in real conditions. Thirteen attributes are used as inputs to classifiers based on Adaptive Neuro-Fuzzy Inference Systems (ANFIS) for the first level of fuzzy model optimization, and then as inputs to a Genetic Algorithm (GA) for the second level of fuzzy model optimization within the GA-ANFIS system, which thus performs optimization in two steps. Modelling and validation of the novel GA-ANFIS approach are performed in the MATLAB environment. We compared GA-ANFIS and ANFIS results: the proposed GA-ANFIS model with the predicted value technique is more efficient for the diagnosis of heart disease than the earlier ANFIS model.
Electronic structure of the layer compounds GaSe and InSe in a tight-binding approach
NASA Astrophysics Data System (ADS)
Camara, M. O.; Mauger, A.; Devos, I.
2002-03-01
The three-dimensional band structure of the III-VI layer compounds GaSe and InSe has been investigated in the tight-binding approach. The pseudo-Hamiltonian matrix elements in the sp3s* basis are fitted to reproduce the nonlocal pseudopotential band structure, in the framework of constrained optimization techniques using the conjugate gradient method. The results are in good agreement with optical and photoemission experimental data. The scaling laws appropriate to covalent bonding are violated by only a fraction of an eV, which suggests that the interlayer interactions are not solely of the van der Waals type.
Amplification in a Double Heterostructure GaAs Device Using the Density Matrix Approach
Tavis, M. T.
1983-12-15
(Scanned report record; the abstract is not recoverable. Surviving fragments identify the Electronics Research Laboratory, The Aerospace Corporation, El Segundo, contract F04701-83-C-0084, report SD-TR-83-75.)
Protein side-chain packing problem: a maximum edge-weight clique algorithmic approach.
Dukka Bahadur, K C; Tomita, Etsuji; Suzuki, Jun'ichi; Akutsu, Tatsuya
2005-02-01
"Protein side-chain packing" has ever-increasing applications in the field of bioinformatics, ranging from the early methods of homology modeling to protein design and protein docking. However, this problem is known to be NP-hard. In this regard, we have developed a novel approach to solving it using the notion of a maximum edge-weight clique. Our approach is based on an efficient reduction of the protein side-chain packing problem to a graph, which is then solved for the maximum clique by applying an efficient clique-finding algorithm developed by our co-authors. Since our approach is based on deterministic algorithms, in contrast to the various existing algorithms based on heuristics, it guarantees finding an optimal solution. We have tested this approach by predicting the side-chain conformations of a set of proteins and have compared the results with other existing methods. Our results are favorably comparable to or better than those produced by the existing methods. As our test set contains a protein of 494 residues, we have obtained considerable improvement in terms of the size of the proteins handled and in terms of the efficiency and accuracy of prediction.
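The optimization target of the reduction, a maximum edge-weight clique, can be shown on a toy graph with a brute-force search; the paper's solver is a far more efficient exact branch-and-bound algorithm, so the graph and the enumeration below are purely illustrative.

```python
from itertools import combinations

# Toy weighted graph: edge -> weight. The triangle {2, 3, 4} has total weight 21.
edges = {
    (0, 1): 5, (0, 2): 4, (1, 2): 6,
    (2, 3): 7, (2, 4): 7, (3, 4): 7,
    (4, 5): 10,
}

def weight(u, v):
    return edges.get((min(u, v), max(u, v)))

def max_edge_weight_clique(vertices):
    """Brute force: try every subset, keep the clique with the largest edge-weight sum."""
    best, best_w = (), 0
    for r in range(2, len(vertices) + 1):
        for subset in combinations(vertices, r):
            ws = [weight(u, v) for u, v in combinations(subset, 2)]
            if None in ws:          # some pair is not adjacent: not a clique
                continue
            if sum(ws) > best_w:
                best, best_w = subset, sum(ws)
    return best, best_w

clique, total = max_edge_weight_clique(range(6))
```

In the side-chain packing reduction, vertices correspond to candidate rotamers and edge weights encode pairwise interaction scores, so the heaviest clique selects one mutually consistent rotamer per residue.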
NASA Technical Reports Server (NTRS)
Mitra, Debasis; Thomas, Ajai; Hemminger, Joseph; Sakowski, Barbara
2001-01-01
In this research we have developed an algorithm for constraint processing that utilizes relational algebraic operators. Van Beek and others have previously investigated this type of constraint processing within a relational algebraic framework, producing some unique results. Apart from providing new theoretical angles, this approach also gives the opportunity to use existing efficient implementations of relational database management systems as the underlying data structures for any relevant algorithm. Our algorithm enhances that framework. The algorithm is quite general in its current form; weak heuristics (like forward checking) developed within the constraint-satisfaction problem (CSP) area could easily be plugged into it for further efficiency gains. The algorithm as developed here is targeted toward a component-oriented modeling problem that we are currently working on, namely the problem of interactive modeling for batch-simulation of engineering systems (IMBSES). However, it could be adapted for many other CSP problems as well. The research addresses the algorithm and many aspects of the IMBSES problem that we are currently handling.
Evaluation of a new approach for speech enhancement algorithms in hearing aids.
Montazeri, Vahid; Khoubrouy, Soudeh A; Panahi, Issa M S
2012-01-01
Several studies of hearing-impaired people who use hearing aids reveal that the speech enhancement algorithms implemented in hearing aids improve listening comfort. However, these algorithms do not improve speech intelligibility much, and in many cases they decrease it, in both hearing-impaired and normal-hearing listeners. In fact, current approaches to the development of speech enhancement algorithms (e.g., minimum mean square error (MMSE)) are not optimal for intelligibility improvement. Some recent studies investigated the effect of different distortions on the enhanced speech and found that by controlling the amplification distortion, intelligibility improves dramatically. In this paper, we examine, subjectively and objectively, the effects of amplification distortion on speech enhanced by two algorithms in three background noises at different SNR levels.
Hybridization of GA and ANN to Solve Graph Coloring
NASA Astrophysics Data System (ADS)
Maitra, Timir; Pal, Anindya J.; Choi, Minkyu; Kim, Taihoon
A recent and very promising approach for combinatorial optimization is to embed local search into the framework of evolutionary algorithms. In this paper, we present an efficient hybrid algorithm for the graph coloring problem, combining the Boltzmann Machine (BM) of Artificial Neural Networks with Genetic Algorithms (GA). The genetic algorithm is used to generate different colorations of a graph quickly, to which the Boltzmann machine approach is then applied. Unlike traditional GA and ANN approaches, the proposed hybrid algorithm is guaranteed a 100% convergence rate to a valid solution with no parameter tuning. Experiments with the hybrid algorithm are carried out on large DIMACS Challenge benchmark graphs, and the results prove very competitive. Analysis of the behavior of the algorithm sheds light on ways to improve it further.
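The hybrid pattern, a GA producing candidate colorings that a local-search step then repairs, can be sketched on a small graph. Everything here is an illustrative assumption: min-conflicts repair stands in for the Boltzmann machine step, and the Petersen graph stands in for the DIMACS benchmarks.

```python
import random

# Petersen graph: 3-chromatic, a standard small test graph (not a DIMACS instance).
EDGES = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0),
         (5, 7), (7, 9), (9, 6), (6, 8), (8, 5),
         (0, 5), (1, 6), (2, 7), (3, 8), (4, 9)]
N, K = 10, 3

def conflicts(col):
    return sum(1 for u, v in EDGES if col[u] == col[v])

def local_search(col, steps=100):
    """Min-conflicts repair: a simple stand-in for the Boltzmann-machine step."""
    for _ in range(steps):
        bad = [u for u, v in EDGES if col[u] == col[v]]
        if not bad:
            break
        u = random.choice(bad)
        col[u] = min(range(K), key=lambda c: conflicts(col[:u] + [c] + col[u + 1:]))
    return col

random.seed(4)
pop = [[random.randrange(K) for _ in range(N)] for _ in range(20)]
best = None
for _ in range(30):
    pop = [local_search(c) for c in pop]
    pop.sort(key=conflicts)
    best = pop[0]
    if conflicts(best) == 0:
        break
    # GA step: splice two good parents, then mutate one vertex for diversity.
    children = []
    for _ in range(10):
        a, b = random.sample(pop[:10], 2)
        cut = random.randrange(N)
        child = a[:cut] + b[cut:]
        child[random.randrange(N)] = random.randrange(K)
        children.append(child)
    pop = pop[:10] + children
```

The division of labor mirrors the abstract: the GA supplies many diverse starting colorations cheaply, and the local-search stage drives each one toward a conflict-free coloring.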
Iterative Fourier transform algorithm: different approaches to diffractive optical element design
NASA Astrophysics Data System (ADS)
Skeren, Marek; Richter, Ivan; Fiala, Pavel
2002-10-01
This contribution focuses on the study and comparison of different design approaches for phase-only diffractive optical elements (PDOEs) for various possible applications in laser beam shaping. In particular, new results and approaches concerning the iterative Fourier transform algorithm (IFTA) are analyzed, implemented, and compared for phase-only diffractive optical elements with quantized phase levels (either binary or multilevel structures). First, the general scheme of the IFTA iterative approach with partial quantization is briefly presented and discussed. Then, a classification of the general IFTA scheme with respect to quantization constraint strategies is given. Based on this classification, three practically interesting approaches are chosen, further analyzed, and compared to each other. The performance of these algorithms is compared in detail in terms of the development of the signal-to-noise ratio with respect to the number of iterations, for various diffusive-type input objects. The performance is also documented by the development of the complex spectra for typical computer reconstruction results. The advantages and drawbacks of all approaches are discussed, and a brief guide to the choice of a particular approach for typical design tasks is given. Finally, two ways of eliminating the amplitude within the design procedure are considered, namely direct elimination and partial elimination of the amplitude of the complex hologram function.
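The basic IFTA loop can be sketched with numpy's FFT: propagate a phase-only field to the far field, impose the target amplitude there, propagate back, and keep only the phase. The target pattern and the single final quantization step are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(5)
N = 32
# Target far-field amplitude: a bright square on a dark background (normalized).
target = np.zeros((N, N))
target[12:20, 12:20] = 1.0
target /= np.linalg.norm(target)

def farfield_error(phase):
    """Distance between the normalized far-field amplitude and the target."""
    amp = np.abs(np.fft.fft2(np.exp(1j * phase)))
    return float(np.linalg.norm(amp / np.linalg.norm(amp) - target))

phase = rng.uniform(0, 2 * np.pi, (N, N))
err0 = farfield_error(phase)

for _ in range(100):
    F = np.fft.fft2(np.exp(1j * phase))        # propagate to the far field
    F = target * np.exp(1j * np.angle(F))      # impose the target amplitude
    phase = np.angle(np.fft.ifft2(F))          # back-propagate, keep phase only

err = farfield_error(phase)
phase_q = np.round(phase / (np.pi / 2)) * (np.pi / 2)   # quantize to 4 levels
err_q = farfield_error(phase_q)
```

Quantizing only once at the end is the crudest strategy; the partial-quantization variants the contribution compares introduce the quantization constraint gradually inside the loop instead.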
Bhattacharya, Mahua; Das, Arpita
2011-01-01
Medical image fusion has been used to derive useful complementary information from multimodal images. The step prior to fusion is registration, or proper alignment of the test images, for accurate extraction of detail information. For this purpose, the images to be fused are geometrically aligned using mutual information (MI) as the similarity metric, followed by a genetic algorithm to maximize MI. The proposed fusion strategy, incorporating a multi-resolution approach, extracts more fine detail from the test images and improves the quality of the composite fused image. The proposed fusion approach is independent of any manual marking or knowledge of fiducial points and starts the procedure automatically. The performance of the proposed genetic-based fusion methodology is compared with a fuzzy clustering algorithm-based fusion approach, and the experimental results show that the genetic-based fusion technique improves the quality of the fused image significantly over the fuzzy approaches.
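The MI similarity metric driving the registration can be sketched from a joint intensity histogram. This is the textbook definition, not the authors' implementation; images are flattened intensity lists here, and a GA would then search transform parameters that maximize this score.

```python
import math
from collections import Counter

def mutual_information(img_a, img_b):
    """MI (in bits) between two equally sized images given as flat intensity lists."""
    assert len(img_a) == len(img_b)
    n = len(img_a)
    pa, pb = Counter(img_a), Counter(img_b)        # marginal intensity histograms
    pab = Counter(zip(img_a, img_b))               # joint intensity histogram
    mi = 0.0
    for (a, b), c in pab.items():
        p_ab = c / n
        mi += p_ab * math.log2(p_ab / ((pa[a] / n) * (pb[b] / n)))
    return mi
```

MI is maximal when the two images are deterministically related (here it equals the entropy of the image) and zero against a constant image, which is why it serves as an alignment objective.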
Ju, Chunhua
2013-01-01
Although there are many good collaborative recommendation methods, it is still a challenge to increase their accuracy and diversity to fulfill users' preferences. In this paper, we propose a novel collaborative filtering recommendation approach based on the K-means clustering algorithm. In the clustering process, we use the artificial bee colony (ABC) algorithm to overcome the local-optimum problem caused by K-means. We then adopt a modified cosine similarity to compute the similarity between users in the same cluster. Finally, we generate recommendation results for the corresponding target users. Detailed numerical analysis on the benchmark dataset MovieLens and a real-world dataset indicates that our new collaborative filtering approach based on user clustering outperforms many other recommendation methods. PMID:24381525
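One common reading of "modified cosine similarity" is mean-centered cosine, sketched below; the paper's exact variant may differ, so treat the centering choice as an assumption.

```python
import math

def mean_centered_cosine(ru, rv):
    """Cosine similarity of two users' rating vectors after removing each
    user's mean rating (one common 'modified cosine'; the paper's exact
    variant may differ)."""
    mu = sum(ru) / len(ru)
    mv = sum(rv) / len(rv)
    cu = [r - mu for r in ru]
    cv = [r - mv for r in rv]
    num = sum(a * b for a, b in zip(cu, cv))
    den = math.sqrt(sum(a * a for a in cu)) * math.sqrt(sum(b * b for b in cv))
    return num / den if den else 0.0
```

Centering makes the measure invariant to a user's overall rating offset: two users whose ratings differ by a constant still score a similarity of 1.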
Sherer, Eric A; Sale, Mark E; Pollock, Bruce G; Belani, Chandra P; Egorin, Merrill J; Ivy, Percy S; Lieberman, Jeffrey A; Manuck, Stephen B; Marder, Stephen R; Muldoon, Matthew F; Scher, Howard I; Solit, David B; Bies, Robert R
2012-08-01
A limitation in traditional stepwise population pharmacokinetic model building is the difficulty in handling interactions between model components. To address this issue, a method was previously introduced which couples NONMEM parameter estimation and model fitness evaluation to a single-objective, hybrid genetic algorithm for global optimization of the model structure. In this study, the generalizability of this approach for pharmacokinetic model building is evaluated by comparing (1) correct and spurious covariate relationships in a simulated dataset resulting from automated stepwise covariate modeling, Lasso methods, and single-objective hybrid genetic algorithm approaches to covariate identification and (2) information criteria values, model structures, convergence, and model parameter values resulting from manual stepwise versus single-objective, hybrid genetic algorithm approaches to model building for seven compounds. Both manual stepwise and single-objective, hybrid genetic algorithm approaches to model building were applied, blinded to the results of the other approach, for selection of the compartment structure as well as inclusion and model form of inter-individual and inter-occasion variability, residual error, and covariates from a common set of model options. For the simulated dataset, stepwise covariate modeling identified three of four true covariates and two spurious covariates; Lasso identified two of four true and 0 spurious covariates; and the single-objective, hybrid genetic algorithm identified three of four true covariates and one spurious covariate. For the clinical datasets, the Akaike information criterion was a median of 22.3 points lower (range of 470.5 point decrease to 0.1 point decrease) for the best single-objective hybrid genetic-algorithm candidate model versus the final manual stepwise model: the Akaike information criterion was lower by greater than 10 points for four compounds and differed by less than 10 points for three
Tumuluru, J.S.; Sokhansanj, Shahabaddine
2008-12-01
Abstract In the present study, the response surface method (RSM) and a genetic algorithm (GA) were used to study the effects of process variables, namely screw speed in rpm (x1), L/D ratio (x2), barrel temperature in °C (x3), and feed mix moisture content in % (x4), on the flow rate of biomass during single-screw extrusion cooking. A second-order regression equation was developed for flow rate in terms of the process variables. The significance of the process variables based on a Pareto chart indicated that screw speed and feed mix moisture content had the most influence on the flow rate, followed by L/D ratio and barrel temperature. RSM analysis indicated that a screw speed > 80 rpm, L/D ratio > 12, barrel temperature > 80 °C, and feed mix moisture content > 20% resulted in maximum flow rate. Increases in screw speed and L/D ratio increased the drag flow and also the path of traverse of the feed mix inside the extruder, resulting in more shear. The presence of about 35% lipids in the biomass feed mix might have induced a lubrication effect and significantly influenced the flow rate. The second-order regression equations were further used as the objective function for optimization using the genetic algorithm. A population of 100 and 100 iterations successfully led to convergence to the optimum. The maximum and minimum flow rates obtained using the GA were 13.19 × 10^-7 m3/s (x1 = 139.08 rpm, x2 = 15.90, x3 = 99.56 °C, and x4 = 59.72%) and 0.53 × 10^-7 m3/s (x1 = 59.65 rpm, x2 = 11.93, x3 = 68.98 °C, and x4 = 20.04%).
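The GA optimization step, maximizing a second-order response surface over bounded variables, can be sketched as below. The surface coefficients, bounds, and all GA settings here are hypothetical, not the paper's fitted extrusion equation.

```python
import random

def ga_maximize(f, bounds, pop=60, gens=120, seed=7):
    """Minimal real-coded GA: elitism, blend crossover, Gaussian mutation."""
    rng = random.Random(seed)
    def rand_ind():
        return [rng.uniform(lo, hi) for lo, hi in bounds]
    P = [rand_ind() for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=f, reverse=True)
        nxt = P[:4]                                   # elitism: keep the four best
        while len(nxt) < pop:
            a, b = rng.sample(P[:20], 2)              # mate among the fittest
            child = [(x + y) / 2 for x, y in zip(a, b)]
            child = [min(hi, max(lo, v + rng.gauss(0, 0.1 * (hi - lo))))
                     for v, (lo, hi) in zip(child, bounds)]
            nxt.append(child)
        P = nxt
    return max(P, key=f)

# hypothetical second-order response surface in two coded variables
surface = lambda x: 10 - (x[0] - 2.0) ** 2 - 0.5 * (x[1] + 1.0) ** 2
best = ga_maximize(surface, [(-5, 5), (-5, 5)])
```

Because the surface is concave with its maximum at (2, -1), the GA population concentrates there within a modest number of generations.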
ERIC Educational Resources Information Center
Moreno, Julian; Ovalle, Demetrio A.; Vicari, Rosa M.
2012-01-01
Considering that group formation is one of the key processes in collaborative learning, the aim of this paper is to propose a method based on a genetic algorithm approach for achieving inter-homogeneous and intra-heterogeneous groups. The main feature of such a method is that it allows for the consideration of as many student characteristics as…
A Fuzzy Genetic Algorithm Approach to an Adaptive Information Retrieval Agent.
ERIC Educational Resources Information Center
Martin-Bautista, Maria J.; Vila, Maria-Amparo; Larsen, Henrik Legind
1999-01-01
Presents an approach to a Genetic Information Retrieval Agent Filter (GIRAF) that filters and ranks documents retrieved from the Internet according to users' preferences by using a Genetic Algorithm and fuzzy set theory to handle the imprecision of users' preferences and users' evaluation of the retrieved documents. (Author/LRW)
The Icarus challenge - Predicting vulnerability to climate change using an algorithm-based species' trait approach
Henry Lee II, Christina Folger, Deborah A. Reusser, Patrick Clinton, and Rene Graham
1 U.S. EPA, Western Ecology Division, Newport, OR USA E-mail: lee.henry@ep...
Pal, Partha S; Kar, R; Mandal, D; Ghoshal, S P
2015-11-01
This paper presents an efficient approach to identifying different stable and practically useful Hammerstein models, as well as an unstable nonlinear process along with its stable closed-loop counterpart, with the help of an evolutionary algorithm, the Colliding Bodies Optimization (CBO) algorithm. The performance measures of the CBO-based optimization approach, such as precision and accuracy, are justified by the minimum output mean square error (MSE), which signifies that the amounts of bias and variance in the output domain are also the least. It is also observed that optimization of the output MSE in the presence of outliers consistently results in a very close estimation of the output parameters, which justifies the general applicability of the CBO algorithm to the system identification problem and establishes the practical usefulness of the applied approach. The optimum values of the MSEs, the computational times, and the statistical information of the MSEs are all found to be superior to those of other existing similar stochastic-algorithm-based approaches reported in the recent literature, which establishes the robustness and efficiency of the applied CBO-based identification scheme.
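A Hammerstein model pairs a static nonlinearity with linear dynamics. The sketch below simulates a minimal example and computes the output MSE a CBO-style optimizer would minimize; the model order and coefficients are hypothetical.

```python
def hammerstein_sim(u, theta):
    """Hammerstein model: static polynomial nonlinearity x = a1*u + a2*u^2
    followed by first-order linear dynamics y[t] = b*y[t-1] + x[t]."""
    a1, a2, b = theta
    y, yprev = [], 0.0
    for ut in u:
        x = a1 * ut + a2 * ut * ut      # static nonlinear block
        yprev = b * yprev + x           # linear dynamic block
        y.append(yprev)
    return y

def output_mse(theta, u, y_meas):
    """Objective a CBO-style optimizer would minimize during identification."""
    y_hat = hammerstein_sim(u, theta)
    return sum((a - b) ** 2 for a, b in zip(y_hat, y_meas)) / len(y_meas)
```

With noiseless data the MSE vanishes exactly at the true parameters and grows for any perturbed parameter vector, which is what makes it a usable identification objective.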
Parallel Genetic Algorithm for Alpha Spectra Fitting
NASA Astrophysics Data System (ADS)
García-Orellana, Carlos J.; Rubio-Montero, Pilar; González-Velasco, Horacio
2005-01-01
We present a performance study of alpha-particle spectra fitting using a parallel Genetic Algorithm (GA). The method uses a two-step approach. In the first step we run the parallel GA to find an initial solution for the second step, in which we use the Levenberg-Marquardt (LM) method for a precise final fit. GA is a resource-demanding method, so we use a Beowulf cluster for parallel simulation. The relationship between simulation time (and parallel efficiency) and the number of processors is studied using several alpha spectra, with the aim of obtaining a method to estimate the optimal number of processors to use in a simulation.
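The two-step strategy can be sketched on a single Gaussian peak: a GA-like random global search followed by local refinement, with simple coordinate descent standing in for Levenberg-Marquardt. All ranges and tolerances are illustrative, and real alpha spectra involve multiple asymmetric peaks.

```python
import math, random

def gauss(x, amp, mu, sigma):
    """Single Gaussian peak model."""
    return amp * math.exp(-0.5 * ((x - mu) / sigma) ** 2)

def sse(params, xs, ys):
    """Sum of squared residuals for a candidate (amp, mu, sigma)."""
    return sum((gauss(x, *params) - y) ** 2 for x, y in zip(xs, ys))

def fit_peak(xs, ys, seed=3):
    rng = random.Random(seed)
    # step 1: GA-style global search for a rough starting point
    pop = [(rng.uniform(0.1, 10), rng.uniform(min(xs), max(xs)),
            rng.uniform(0.1, 5)) for _ in range(200)]
    best = min(pop, key=lambda p: sse(p, xs, ys))
    # step 2: coordinate descent as a stand-in for Levenberg-Marquardt
    step = 0.5
    while step > 1e-6:
        improved = False
        for i in range(3):
            for d in (step, -step):
                cand = list(best)
                cand[i] += d
                if sse(cand, xs, ys) < sse(best, xs, ys):
                    best, improved = tuple(cand), True
        if not improved:
            step /= 2
    return best
```

The global stage only needs to land in the right basin; the local stage then polishes the parameters to high precision, which mirrors the GA-then-LM division of labor in the abstract.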
Evaluation of multi-algorithm optimization approach in multi-objective rainfall-runoff calibration
NASA Astrophysics Data System (ADS)
Shafii, M.; de Smedt, F.
2009-04-01
Calibration of rainfall-runoff models is one of the issues in which hydrologists have been interested over past decades. Because of the multi-objective nature of rainfall-runoff calibration, and due to advances in computational power, population-based optimization techniques are becoming increasingly popular for multi-objective calibration schemes. In recent years, such methods have been shown to be powerful search methods for this purpose, especially when there are a large number of calibration parameters. However, the application of these methods is often criticised on the grounds that it is not possible to develop a single algorithm that is efficient for all problems. Therefore, more recent efforts have focused on the development of simultaneous multi-algorithm optimization methods to overcome this drawback. This paper applies one of the most recent population-based multi-algorithm approaches, named AMALGAM, to multi-objective rainfall-runoff calibration in a distributed hydrological model, WetSpa. This algorithm merges the strengths of different optimization algorithms and has thus proven to be more efficient than other methods. To evaluate this, the next step of this study is a comparison between the results of this paper and those previously reported using a conventional multi-objective evolutionary algorithm.
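The multi-objective calibration target is a Pareto front over competing error measures. A minimal non-dominated filter (assuming all objectives are minimized) looks like this; it is the set that calibrators such as AMALGAM approximate, not AMALGAM itself.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all <=, at least one <)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

For example, with two error measures per candidate parameter set, the filter discards any candidate that another candidate beats on both measures.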
An algorithmic approach to adaptive state filtering using recurrent neural networks.
Parlos, A G; Menon, S K; Atiya, A
2001-01-01
Practical algorithms are presented for adaptive state filtering in nonlinear dynamic systems when the state equations are unknown. The state equations are constructively approximated using neural networks. The algorithms presented are based on the two-step prediction-update approach of the Kalman filter. The proposed algorithms make minimal assumptions regarding the underlying nonlinear dynamics and their noise statistics. Non-adaptive and adaptive state filtering algorithms are presented with both off-line and online learning stages. The algorithms are implemented using feedforward and recurrent neural networks, and comparisons are presented. Furthermore, extended Kalman filters (EKFs) are developed and compared to the proposed filter algorithms. For one of the case studies, the EKF converges but results in higher state estimation errors than the equivalent neural filters. For another, more complex case study with unknown system dynamics and noise statistics, the developed EKFs do not converge. The off-line trained neural state filters converge quite rapidly and exhibit acceptable performance. Online training further enhances the estimation accuracy of the developed adaptive filters, effectively decoupling the eventual filter accuracy from the accuracy of the process model.
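The prediction-update cycle these filters are organized around is the Kalman structure. A scalar linear-Gaussian stand-in makes the two steps explicit; this is not the paper's neural formulation, and all constants are arbitrary.

```python
def kalman_1d(zs, a=1.0, q=0.01, r=0.5, x0=0.0, p0=1.0):
    """Scalar Kalman filter illustrating the two-step predict/update cycle."""
    x, p, est = x0, p0, []
    for z in zs:
        # predict: propagate the state and its uncertainty through the model
        x, p = a * x, a * a * p + q
        # update: blend in the measurement via the Kalman gain
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1 - k) * p
        est.append(x)
    return est
```

In the paper, the neural networks replace the model used in the predict step, but the alternation of prediction and measurement correction is the same.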
Caco-2 cell permeability modelling: a neural network coupled genetic algorithm approach
NASA Astrophysics Data System (ADS)
Di Fenza, Armida; Alagona, Giuliano; Ghio, Caterina; Leonardi, Riccardo; Giolitti, Alessandro; Madami, Andrea
2007-04-01
The ability to cross the intestinal cell membrane is a fundamental prerequisite for a drug compound. However, the experimental measurement of such an important property is a costly and highly time-consuming step of the drug development process, because the compound must be synthesized first. Therefore, in silico modelling of intestinal absorption, which can be carried out at very early stages of drug design, is an appealing alternative procedure, based mainly on multivariate statistical analysis such as partial least squares (PLS) and neural networks (NN). Our implementation of neural network models for the prediction of intestinal absorption is based on correlating Caco-2 cell apparent permeability (Papp) values, as a measure of intestinal absorption, with the structures of two different data sets of drug candidates. Several molecular descriptors of the compounds were calculated and the optimal subsets were selected using a genetic algorithm; the method is therefore referred to as Genetic Algorithm-Neural Network (GA-NN). A methodology combining a genetic algorithm search with neural network analysis applied to the modelling of Caco-2 Papp has never been presented before, although the two procedures have already been employed separately. Moreover, we provide new Caco-2 cell permeability measurements for more than two hundred compounds. Interestingly, the selected descriptors are found to possess physico-chemical connotations in excellent accordance with the well-known molecular properties involved in cellular membrane permeation: hydrophilicity, hydrogen-bonding propensity, hydrophobicity and molecular size. The predictive ability of the models, although rather good for a preliminary study, is somewhat affected by the poor precision of the experimental Caco-2 measurements. Finally, the generalization ability of one model was checked on an external test set not derived from the data sets used to build the models.
Combinatorial Multiobjective Optimization Using Genetic Algorithms
NASA Technical Reports Server (NTRS)
Crossley, William A.; Martin, Eric T.
2002-01-01
The research proposed in this document investigated multiobjective optimization approaches based upon the Genetic Algorithm (GA). Several versions of the GA have been adopted for multiobjective design, but, prior to this research, there had not been significant comparisons of the most popular strategies. The research effort first generalized the two-branch tournament genetic algorithm into an N-branch genetic algorithm; the N-branch GA was then compared with a version of the popular Multi-Objective Genetic Algorithm (MOGA). Because the genetic algorithm is well suited to combinatorial (mixed discrete / continuous) optimization problems, the GA can be used in the conceptual phase of design to combine selection (discrete variable) and sizing (continuous variable) tasks. Using a multiobjective formulation for the design of a 50-passenger aircraft to meet the competing objectives of minimizing takeoff gross weight and minimizing trip time, the GA generated a range of tradeoff designs that illustrate which aircraft features change from a low-weight, slow trip-time aircraft design to a heavy-weight, short trip-time aircraft design. Given the objective formulation and analysis methods used, the results of this study identify where turboprop-powered aircraft and turbofan-powered aircraft become more desirable for the 50-seat passenger application. This aircraft design application also begins to suggest how a combinatorial multiobjective optimization technique could be used to assist in the design of morphing aircraft.
2011-01-01
Background The rapid identification of Bacillus spores and bacterial identification are paramount because of their implications in food poisoning, pathogenesis and their use as potential biowarfare agents. Many automated analytical techniques such as Curie-point pyrolysis mass spectrometry (Py-MS) have been used to identify bacterial spores, giving rise to large amounts of analytical data. This high number of features makes interpretation of the data extremely difficult. We analysed Py-MS data from 36 different strains of aerobic endospore-forming bacteria encompassing seven different species. These bacteria were grown axenically on nutrient agar, and vegetative biomass and spores were analyzed by Curie-point Py-MS. Results We develop a novel genetic algorithm-Bayesian network algorithm that accurately identifies and selects a small subset of key relevant mass spectra (biomarkers) to be further analysed. Once identified, this subset of relevant biomarkers was then used to identify Bacillus spores successfully and to identify Bacillus species via a Bayesian network model specifically built for this reduced set of features. Conclusions This final compact Bayesian network classification model is parsimonious, computationally fast to run, and its graphical visualization allows easy interpretation of the probabilistic relationships among selected biomarkers. In addition, we compare the features selected by the genetic algorithm-Bayesian network approach with the features selected by partial least squares-discriminant analysis (PLS-DA). The classification accuracy results show that the set of features selected by the GA-BN is far superior to that selected by PLS-DA. PMID:21269434
The Simplified Aircraft-Based Paired Approach With the ALAS Alerting Algorithm
NASA Technical Reports Server (NTRS)
Perry, Raleigh B.; Madden, Michael M.; Torres-Pomales, Wilfredo; Butler, Ricky W.
2013-01-01
This paper presents the results of an investigation of a proposed concept for closely spaced parallel runways called the Simplified Aircraft-based Paired Approach (SAPA). This procedure depends upon a new alerting algorithm called the Adjacent Landing Alerting System (ALAS). This study used both low fidelity and high fidelity simulations to validate the SAPA procedure and test the performance of the new alerting algorithm. The low fidelity simulation enabled a determination of minimum approach distance for the worst case over millions of scenarios. The high fidelity simulation enabled an accurate determination of timings and minimum approach distance in the presence of realistic trajectories, communication latencies, and total system error for 108 test cases. The SAPA procedure and the ALAS alerting algorithm were applied to the 750-ft parallel spacing (e.g., SFO 28L/28R) approach problem. With the SAPA procedure as defined in this paper, this study concludes that a 750-ft application does not appear to be feasible, but preliminary results for 1000-ft parallel runways look promising.
2011-01-01
Background Position-specific priors (PSPs) have been used with success to boost EM and Gibbs sampler-based motif discovery algorithms. PSP information has been computed from different sources, including orthologous conservation, DNA duplex stability, and nucleosome positioning. Prior information has not yet been used in the context of combinatorial algorithms. Moreover, priors have been used only independently, and the gain of combining priors from different sources has not yet been studied. Results We extend RISOTTO, a combinatorial algorithm for motif discovery, by post-processing its output with a greedy procedure that uses prior information. PSPs from different sources are combined into a scoring criterion that guides the greedy search procedure. The resulting method, called GRISOTTO, was evaluated over 156 yeast TF ChIP-chip sequence-sets commonly used to benchmark prior-based motif discovery algorithms. Results show that GRISOTTO is at least as accurate as twelve other state-of-the-art approaches for the same task, even without combining priors. Furthermore, by considering combined priors, GRISOTTO is considerably more accurate than the state-of-the-art approaches for the same task. We also show that PSPs improve GRISOTTO's ability to retrieve motifs from mouse ChIP-seq data, indicating that the proposed algorithm can be applied to data from a different technology and for a higher eukaryote. Conclusions The conclusions of this work are twofold. First, post-processing the output of combinatorial algorithms by incorporating prior information leads to a very efficient and effective motif discovery method. Second, combining priors from different sources is even more beneficial than considering them separately. PMID:21513505
A formulation of a matrix sparsity approach for the quantum ordered search algorithm
NASA Astrophysics Data System (ADS)
Parmar, Jupinder; Rahman, Saarim; Thiara, Jaskaran
One specific subset of quantum algorithms is Grover's Ordered Search Problem (OSP), the quantum counterpart of the classical binary search algorithm, which utilizes oracle functions to locate a specified value within an ordered database. Classically, the optimal algorithm is known to have log2 N complexity; however, Grover's algorithm has been found to have an optimal complexity between the lower bound of (ln N - 1)/π ≈ 0.221 log2 N and the upper bound of 0.433 log2 N. We sought to lower the known upper bound of the OSP. Following Farhi et al. [MIT-CTP 2815 (1999), arXiv:quant-ph/9901059], the OSP can be resolved into a translationally invariant algorithm to create quantum query algorithm constraints. With these constraints, one can find Laurent polynomials for various k (queries) and N (database sizes), thus finding larger recursive sets to solve the OSP and effectively reducing the upper bound. These polynomials are found to be convex functions, allowing one to make use of convex optimization to find an improvement on the known bounds. According to Childs et al. [Phys. Rev. A 75 (2007) 032335], semidefinite programming, a subset of convex optimization, can solve the particular problem represented by the constraints. We implemented a program following their formulation of a semidefinite program (SDP), and found that it takes an immense amount of storage and time to compute. To combat this setback, we formulated an approach to improve the results of the SDP using matrix sparsity. Through the development of this approach, along with an implementation of a rudimentary solver, we demonstrate how matrix sparsity reduces the amount of time and storage required to compute the SDP, overall ensuring that further improvements will likely be made to reach the theorized lower bound.
ERIC Educational Resources Information Center
Reese, Debbie Denise; Tabachnick, Barbara G.
2010-01-01
In this paper, the authors summarize a quantitative analysis demonstrating that the CyGaMEs toolset for embedded assessment of learning within instructional games measures growth in conceptual knowledge by quantifying player behavior. CyGaMEs stands for Cyberlearning through GaME-based, Metaphor Enhanced Learning Objects. Some scientists of…
NASA Astrophysics Data System (ADS)
Su, Xiaoru; Shu, Longcang; Chen, Xunhong; Lu, Chengpeng; Wen, Zhonghui
2016-12-01
Interactions between surface waters and groundwater are of great significance for evaluating water resources and protecting ecosystem health. Heat as a tracer method is widely used in determination of the interactive exchange with high precision, low cost and great convenience. The flow in a river-bank cross-section occurs in vertical and lateral directions. In order to depict the flow path and its spatial distribution in bank areas, a genetic algorithm (GA) two-dimensional (2-D) heat-transport nested-loop method for variably saturated sediments, GA-VS2DH, was developed based on Microsoft Visual Basic 6.0. VS2DH was applied to model a 2-D bank-water flow field and GA was used to calibrate the model automatically by minimizing the difference between observed and simulated temperatures in bank areas. A hypothetical model was developed to assess the reliability of GA-VS2DH in inverse modeling in a river-bank system. Some benchmark tests were conducted to recognize the capability of GA-VS2DH. The results indicated that the simulated seepage velocity and parameters associated with GA-VS2DH were acceptable and reliable. Then GA-VS2DH was applied to two field sites in China with different sedimentary materials, to verify the reliability of the method. GA-VS2DH could be applied in interpreting the cross-sectional 2-D water flow field. The estimates of horizontal hydraulic conductivity at the Dawen River and Qinhuai River sites are 1.317 and 0.015 m/day, which correspond to sand and clay sediment in the two sites, respectively.
Low Back Pain in Children and Adolescents: an Algorithmic Clinical Approach
Kordi, Ramin; Rostami, Mohsen
2011-01-01
Low back pain (LBP) is common among children and adolescents. In younger children, particularly those under 3, LBP should be considered an alarming sign of more serious underlying pathologies. However, as in adults, non-specific low back pain is the most common type of LBP among children and adolescents. In this article, a clinical algorithmic approach to LBP in children and adolescents is presented. PMID:23056800
NASA Technical Reports Server (NTRS)
Sanyal, Soumya; Jain, Amit; Das, Sajal K.; Biswas, Rupak
2003-01-01
In this paper, we propose a distributed approach for mapping a single large application to a heterogeneous grid environment. To minimize the execution time of the parallel application, we distribute the mapping overhead to the available nodes of the grid. This approach not only provides a fast mapping of tasks to resources but is also scalable. We adopt a hierarchical grid model and accomplish the job of mapping tasks to this topology using a scheduler tree. Results show that our three-phase algorithm provides high quality mappings, and is fast and scalable.
Simulation approach to charge sharing compensation algorithms with experimental cross-check
NASA Astrophysics Data System (ADS)
Krzyżanowska, A.; Deptuch, G.; Maj, P.; Gryboś, P.; Szczygieł, R.
2017-03-01
Hybrid pixel detectors for X-ray imaging, working in a single photon counting mode, find applications in a variety of fields, such as medical imaging, material science or industry. However, charge sharing, which occurs when a photon hits a detector in the area between two or four pixels, becomes more significant with decreasing pixel size. If the charge generated when a photon interacts with a detector is collected by more than one pixel, the photon energy and the event position may be improperly detected. Therefore, algorithms for minimizing the impact of charge sharing on a pixel detector for X-ray detection need to be implemented. First, such algorithms must be assessed at the simulation level. The goal is to implement the simulations in such a way that the simulation accuracy and simulation time are optimized. A model should be flexible enough that it can be quickly adapted for other uses. We propose behavioral models implemented in the Cadence® Virtuoso® environment. This solution enables fast validation of the system at a higher level of abstraction, allowing deep verification. A readout channel of a chip is represented using parameterized behavioral blocks of different functionality, such as a charge-sensitive amplifier, shapers, discriminators, and comparators. The inter-pixel connections are taken into account. This approach enables top-down design and optimization of parameters. The model was implemented in particular to test the C8P1 algorithm used in the Chase Jr. chip; however, due to its modular implementation, it can easily be adjusted for further tests of such algorithms. The simulation approach is described and the simulation results are presented together with the experimental data obtained during synchrotron measurements for the Chase Jr. chip with the C8P1 algorithm implemented.
Herbers, Claudia R; Johnston, Karen; van der Vegt, Nico F A
2011-06-14
We present an automated and efficient method to develop force fields for molecule-surface interactions. A genetic algorithm (GA) is used to parameterise a classical force field so that the classical adsorption energy landscape of a molecule on a surface matches the corresponding landscape from density functional theory (DFT) calculations. The procedure performs a sophisticated search in the parameter phase space and converges very quickly. The method is capable of fitting a significant number of structures and corresponding adsorption energies. Water on a ZnO(0001) surface was chosen as a benchmark system, but the method is implemented in a flexible way and can be applied to any system of interest. In the present case, pairwise Lennard-Jones (LJ) and Coulomb potentials are used to describe the molecule-surface interactions. In the course of the fitting procedure, the LJ parameters are refined in order to reproduce the adsorption energy landscape. The classical model is capable of describing a wide range of energies, which is essential for a realistic description of a fluid-solid interface.
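The GA refinement of LJ parameters can be sketched as follows: reference energies (standing in for DFT data) are matched by evolving (epsilon, sigma). A single pair-energy curve and the search settings are strong simplifications of the molecule-surface force field, and the reference data here are synthesized from a known LJ curve.

```python
import random

def lj(r, eps, sig):
    """12-6 Lennard-Jones pair energy."""
    x = (sig / r) ** 6
    return 4 * eps * (x * x - x)

def fit_lj(rs, e_ref, gens=150, seed=11):
    """GA-style elitist search for (eps, sigma) minimizing the squared
    deviation from reference (e.g. DFT-derived) energies."""
    rng = random.Random(seed)
    err = lambda p: sum((lj(r, *p) - e) ** 2 for r, e in zip(rs, e_ref))
    pop = [(rng.uniform(0.1, 5), rng.uniform(0.5, 3)) for _ in range(40)]
    for _ in range(gens):
        pop.sort(key=err)
        elite = pop[:8]                     # survivors of this generation
        pop = elite + [(max(0.01, e0 + rng.gauss(0, 0.05)),
                        max(0.1, s0 + rng.gauss(0, 0.05)))
                       for e0, s0 in (rng.choice(elite) for _ in range(32))]
    return min(pop, key=err)
```

Because the repulsive wall makes the energies very sensitive to sigma at short range, the fit is well conditioned and the search recovers parameters close to those that generated the reference data.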
Jin, Cong; Jin, Shu-Wei
2016-06-01
A number of different gene selection approaches based on gene expression profiles (GEP) have been developed for tumour classification. A gene selection approach selects the most informative genes from the whole gene space, which is an important process for tumour classification using GEP. This study presents an improved swarm intelligence optimisation algorithm that selects genes while maintaining the diversity of the population. The most essential characteristic of the proposed approach is that it can automatically determine the number of selected genes. On the basis of the gene selection, the authors construct a variety of tumour classifiers, including ensemble classifiers. Four gene datasets are used to evaluate the performance of the proposed approach. The experimental results confirm that the proposed classifiers for tumour classification are indeed effective.
Brodic, Darko; Milivojevic, Dragan R; Milivojevic, Zoran N
2011-01-01
The paper introduces a testing framework for the evaluation and validation of text line segmentation algorithms. Text line segmentation is the key step for correct optical character recognition. Many of the tests for the evaluation of text line segmentation algorithms use text databases as reference templates. Because of the resulting mismatch, a reliable testing framework is required. Hence, a new approach to a comprehensive experimental framework for the evaluation of text line segmentation algorithms is proposed. It consists of synthetic multi-line text samples as well as real handwritten text. Although the tests are mutually independent, the results are cross-linked. The proposed method can be used for different types of scripts and languages. Furthermore, two different procedures for the evaluation of algorithm efficiency, based on the obtained error-type classification, are proposed. The first is based on the segmentation-line error description, while the second incorporates well-known signal detection theory. Each has different capabilities and conveniences, but they can be used as supplements to make the evaluation process efficient. Overall, the proposed procedure based on the segmentation-line error description has some advantages, characterized by five measures that describe the measurement procedures.
Review and Analysis of Algorithmic Approaches Developed for Prognostics on CMAPSS Dataset
NASA Technical Reports Server (NTRS)
Ramasso, Emannuel; Saxena, Abhinav
2014-01-01
Benchmarking of prognostic algorithms has been challenging due to limited availability of common datasets suitable for prognostics. In an attempt to alleviate this problem several benchmarking datasets have been collected by NASA's prognostic center of excellence and made available to the Prognostics and Health Management (PHM) community to allow evaluation and comparison of prognostics algorithms. Among those datasets are five C-MAPSS datasets that have been extremely popular due to their unique characteristics making them suitable for prognostics. The C-MAPSS datasets pose several challenges that have been tackled by different methods in the PHM literature. In particular, management of high variability due to sensor noise, effects of operating conditions, and presence of multiple simultaneous fault modes are some factors that have great impact on the generalization capabilities of prognostics algorithms. More than 70 publications have used the C-MAPSS datasets for developing data-driven prognostic algorithms. The C-MAPSS datasets are also shown to be well-suited for development of new machine learning and pattern recognition tools for several key preprocessing steps such as feature extraction and selection, failure mode assessment, operating conditions assessment, health status estimation, uncertainty management, and prognostics performance evaluation. This paper summarizes a comprehensive literature review of publications using C-MAPSS datasets and provides guidelines and references to further usage of these datasets in a manner that allows clear and consistent comparison between different approaches.
Processing approach towards the formation of thin-film Cu(In,Ga)Se2
Beck, Markus E.; Noufi, Rommel
2003-01-01
A two-stage method of producing thin films of group IB-IIIA-VIA semiconductors on a substrate for semiconductor device applications includes a first stage of depositing an amorphous group IB-IIIA-VIA precursor onto an unheated substrate, wherein the precursor contains all of the group IB and group IIIA constituents of the semiconductor thin film to be produced in the stoichiometric amounts desired for the final product, and a second stage which involves subjecting the precursor to a short thermal treatment at 420 °C to 550 °C in a vacuum or under an inert atmosphere to produce a single-phase, group IB-IIIA-VIA film. Preferably the precursor also comprises the group VIA element in the stoichiometric amount desired for the final semiconductor thin film. The group IB-IIIA-VIA semiconductor films may be, for example, Cu(In,Ga)(Se,S)2 mixed-metal chalcogenides. The resultant supported group IB-IIIA-VIA semiconductor film is suitable for use in photovoltaic applications.
First-Principles Approach to the Magnetocaloric Effect: Application to Ni2MnGa
NASA Astrophysics Data System (ADS)
Odbadrakh, Khorgolkhuu; Nicholson, Don; Rusanu, Aurelian; Eisenbach, Markus; Brown, Gregory; Evans, Boyd, III
2011-03-01
The magneto-caloric effect (MCE) has potential application in heating and cooling technologies. In this work, we present the calculated magnetic structure of a candidate MCE material, Ni2MnGa. The magnetic configurations of a 144-atom supercell are first explored using first-principles calculations, and the results are then used to fit the exchange parameters of a Heisenberg Hamiltonian. The Wang-Landau method is used to calculate the magnetic density of states of the Heisenberg Hamiltonian. Based on this classical estimate, the magnetic density of states is then recalculated using the Wang-Landau method with energies obtained from the first-principles method. The Curie temperature and other thermodynamic properties are calculated using the density of states. The relationships between the density of magnetic states and the field-induced adiabatic temperature change and isothermal entropy change are discussed. This work was sponsored by the Laboratory Directed Research and Development Program (ORNL), by the Mathematical, Information, and Computational Sciences Division; Office of Advanced Scientific Computing Research (US DOE), and by the Materials Sciences and Engineering Division; Office of Basic Energy Sciences (US DOE).
MRPack: Multi-Algorithm Execution Using Compute-Intensive Approach in MapReduce.
Idris, Muhammad; Hussain, Shujaat; Siddiqi, Muhammad Hameed; Hassan, Waseem; Syed Muhammad Bilal, Hafiz; Lee, Sungyoung
2015-01-01
Large quantities of data have been generated from multiple sources at exponential rates in the last few years. These data are generated at high velocity as real-time and streaming data in a variety of formats. These characteristics give rise to challenges in its modeling, computation, and processing. Hadoop MapReduce (MR) is a well-known data-intensive distributed processing framework using the distributed file system (DFS) for Big Data. Current implementations of MR only support execution of a single algorithm in the entire Hadoop cluster. In this paper, we propose MapReducePack (MRPack), a variation of MR that supports execution of a set of related algorithms in a single MR job. We exploit the computational capability of a cluster by increasing the compute-intensiveness of MapReduce while maintaining its data-intensive approach. It uses the available computing resources by dynamically managing the task assignment and intermediate data. Intermediate data from multiple algorithms are managed using multi-key and skew mitigation strategies. The performance study of the proposed system shows that it is time, I/O, and memory efficient compared to the default MapReduce. The proposed approach reduces the execution time by 200% with an approximate 50% decrease in I/O cost. Complexity and qualitative results analysis shows significant performance improvement.
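The multi-key idea above can be illustrated with a tiny in-memory map/shuffle/reduce simulation: one pass over the input feeds several algorithms, and each intermediate record is tagged with an algorithm id so a single shuffle serves all of them. This is a hedged sketch of the concept only, not MRPack's Hadoop implementation; the algorithm names and helper functions are hypothetical.

```python
from collections import defaultdict

# Two toy "related algorithms" sharing one pass over the input records.
ALGORITHMS = {
    "wordcount": lambda rec: [(w, 1) for w in rec.split()],
    "charcount": lambda rec: [("chars", len(rec))],
}

def map_phase(records):
    """Single map pass; every emitted pair carries a composite
    (algorithm_id, key) multi-key so one shuffle serves all algorithms."""
    for rec in records:
        for algo_id, fn in ALGORITHMS.items():
            for key, value in fn(rec):
                yield (algo_id, key), value

def shuffle_and_reduce(pairs):
    """Group by composite key, then reduce each group (both toy
    algorithms here happen to reduce by summation)."""
    groups = defaultdict(list)
    for composite_key, value in pairs:
        groups[composite_key].append(value)
    return {k: sum(v) for k, v in groups.items()}

result = shuffle_and_reduce(map_phase(["a b a", "b c"]))
```

Partitioning on the composite key is also what makes skew mitigation possible: hot keys of one algorithm do not pin all other algorithms' data to the same reducer.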
Modeling and Algorithmic Approaches to Constitutively-Complex, Micro-structured Fluids
Forest, Mark Gregory
2014-05-06
The team for this Project made significant progress on modeling and algorithmic approaches to hydrodynamics of fluids with complex microstructure. Our advances are broken down into modeling and algorithmic approaches. In experiments a driven magnetic bead in a complex fluid accelerates out of the Stokes regime and settles into another apparent linear response regime. The modeling explains the take-off as a deformation of entanglements, and the longtime behavior is a nonlinear, far-from-equilibrium property. Furthermore, the model has predictive value, as we can tune microstructural properties relative to the magnetic force applied to the bead to exhibit all possible behaviors. Wave-theoretic probes of complex fluids have been extended in two significant directions, to small volumes and the nonlinear regime. Heterogeneous stress and strain features that lie beyond experimental capability were studied. It was shown that nonlinear penetration of boundary stress in confined viscoelastic fluids is not monotone, indicating the possibility of interlacing layers of linear and nonlinear behavior, and thus layers of variable viscosity. Models, algorithms, and codes were developed and simulations performed leading to phase diagrams of nanorod dispersion hydrodynamics in parallel shear cells and confined cavities representative of film and membrane processing conditions. Hydrodynamic codes for polymeric fluids are extended to include coupling between microscopic and macroscopic models, and to the strongly nonlinear regime.
NASA Astrophysics Data System (ADS)
Turner, David B.; Willett, Peter
2000-01-01
The EVA structural descriptor, based upon calculated fundamental molecular vibrational frequencies, has proved to be an effective descriptor for both QSAR and database similarity calculations. The descriptor is sensitive to 3D structure but has an advantage over field-based 3D-QSAR methods inasmuch as structural superposition is not required. The original technique involves a standardisation method wherein uniform Gaussians of fixed standard deviation (σ) are used to smear out frequencies projected onto a linear scale. The smearing function permits the overlap of proximal frequencies and thence the extraction of a fixed dimensional descriptor regardless of the number and precise values of the frequencies. It is proposed here that there exist optimal localised values of σ in different spectral regions; that is, the overlap of frequencies using uniform Gaussians may, at certain points in the spectrum, either be insufficient to pick up relationships where they exist or mix up information to such an extent that significant correlations are obscured by noise. A genetic algorithm is used to search for optimal localised σ values using crossvalidated PLS regression scores as the fitness score to be optimised. The resultant models were then validated against a previously unseen test set of compounds and through data scrambling. The performance of EVA_GA is compared to that of EVA and analogous CoMFA studies; in the latter case a brief evaluation is made of the effect of grid resolution upon the stability of CoMFA PLS scores particularly in relation to test set predictions.
Development of an Aircraft Approach and Departure Atmospheric Profile Generation Algorithm
NASA Technical Reports Server (NTRS)
Buck, Bill K.; Velotas, Steven G.; Rutishauser, David K. (Technical Monitor)
2004-01-01
In support of NASA's Virtual Airspace Modeling and Simulation (VAMS) project, an effort was initiated to develop and test techniques for extracting meteorological data from landing and departing aircraft, and for building altitude-based profiles for key meteorological parameters from these data. The generated atmospheric profiles will be used as inputs to NASA's Aircraft Vortex Spacing System (AVOSS) Prediction Algorithm (APA) for benefits and trade analysis. A Wake Vortex Advisory System (WakeVAS) is being developed to apply weather and wake prediction and sensing technologies with procedures to reduce current wake separation criteria, when safe and appropriate, to increase airport operational efficiency. The purpose of this report is to document the initial theory and design of the Aircraft Approach and Departure Atmospheric Profile Generation Algorithm.
Branch-pipe-routing approach for ships using improved genetic algorithm
NASA Astrophysics Data System (ADS)
Sui, Haiteng; Niu, Wentie
2016-09-01
Branch-pipe routing plays a fundamental and critical role in ship pipe design. The branch-pipe-routing problem is a complex combinatorial optimization problem and is thus difficult to solve when depending only on human experts. A modified genetic-algorithm-based approach is proposed in this paper to solve this problem. The simplified layout space is first divided into three-dimensional (3D) grids to build its mathematical model. Branch pipes in the layout space are regarded as a combination of several two-point pipes, and the pipe route between two connection points is generated using an improved maze algorithm. The coding of branch pipes is then defined, and the genetic operators are devised, especially the complete crossover strategy that greatly accelerates the convergence speed. Finally, simulation tests demonstrate the performance of the proposed method.
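A maze algorithm for the two-point sub-routes can be sketched as a breadth-first search on the 3D grid, which finds a shortest rectilinear route around blocked cells. This is a plain-BFS stand-in for the paper's improved maze algorithm (the improvements are not reproduced); grid size, obstacles, and endpoints are illustrative.

```python
from collections import deque

def maze_route(start, goal, blocked, size):
    """Shortest rectilinear pipe route between two connection points on
    a 3D grid, by breadth-first 'maze' search around blocked cells."""
    moves = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
             (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    prev = {start: None}          # doubles as the visited set
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:          # walk predecessors back to the start
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        for dx, dy, dz in moves:
            nxt = (cell[0] + dx, cell[1] + dy, cell[2] + dz)
            if (all(0 <= c < s for c, s in zip(nxt, size))
                    and nxt not in blocked and nxt not in prev):
                prev[nxt] = cell
                queue.append(nxt)
    return None                   # no feasible route

route = maze_route((0, 0, 0), (2, 2, 0),
                   blocked={(1, 0, 0), (1, 1, 0)}, size=(3, 3, 3))
```

In the GA, each branch pipe's chromosome would encode the connection points of such two-point sub-routes, with this search regenerating feasible geometry after crossover and mutation.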
Personalized therapy algorithms for type 2 diabetes: a phenotype-based approach
Ceriello, Antonio; Gallo, Marco; Candido, Riccardo; De Micheli, Alberto; Esposito, Katherine; Gentile, Sandro; Medea, Gerardo
2014-01-01
Type 2 diabetes is a progressive disease with a complex and multifactorial pathophysiology. Patients with type 2 diabetes show a variety of clinical features, including different “phenotypes” of hyperglycemia (eg, fasting/preprandial or postprandial). Thus, the best treatment choice is sometimes difficult to make, and treatment initiation or optimization is postponed. This situation may explain why, despite the existing complex therapeutic armamentarium and guidelines for the treatment of type 2 diabetes, a significant proportion of patients do not have good metabolic control and are at risk of developing the late complications of diabetes. The Italian Association of Medical Diabetologists has developed an innovative personalized algorithm for the treatment of type 2 diabetes, which is available online. According to the main features shown by the patient, six algorithms are proposed, according to glycated hemoglobin (HbA1c, ≥9% or ≤9%), body mass index (≤30 kg/m2 or ≥30 kg/m2), occupational risk potentially related to hypoglycemia, chronic renal failure, and frail elderly status. Through self-monitoring of blood glucose, patients are phenotyped according to the occurrence of fasting/preprandial or postprandial hyperglycemia. In each of these six algorithms, the gradual choice of treatment is related to the identified phenotype. With one exception, these algorithms contain a stepwise approach for patients with type 2 diabetes who are metformin-intolerant. The glycemic targets (HbA1c, fasting/preprandial and postprandial glycemia) are also personalized. This accessible and easy-to-use algorithm may help physicians to choose a personalized treatment plan for each patient and to optimize it in a timely manner, thereby lessening clinical inertia. PMID:24971031
Genetic algorithms for route discovery.
Gelenbe, Erol; Liu, Peixiang; Lainé, Jeremy
2006-12-01
Packet routing in networks requires knowledge about available paths, which can be either acquired dynamically while the traffic is being forwarded, or statically (in advance) based on prior information of a network's topology. This paper describes an experimental investigation of path discovery using genetic algorithms (GAs). We start with the quality-of-service (QoS)-driven routing protocol called "cognitive packet network" (CPN), which uses smart packets (SPs) to dynamically select routes in a distributed autonomic manner based on a user's QoS requirements. We extend it by introducing a GA at the source routers, which modifies and filters the paths discovered by the CPN. The GA can combine the paths that were previously discovered to create new untested but valid source-to-destination paths, which are then selected on the basis of their "fitness." We present an implementation of this approach, where the GA runs in background mode so as not to overload the ingress routers. Measurements conducted on a network test bed indicate that when the background-traffic load of the network is light to medium, the GA can result in improved QoS. When the background-traffic load is high, it appears that the use of the GA may be detrimental to the QoS experienced by users as compared to CPN routing because the GA uses less timely state information in its decision making.
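The GA operator described above combines previously discovered paths into new untested but valid source-to-destination paths. A minimal sketch of such a crossover, splicing two parents at a shared intermediate node and removing any loop the splice introduces, is shown below; the node names and helper are hypothetical, and CPN's fitness evaluation is not modeled.

```python
import random

def path_crossover(p1, p2, rng=random.Random(0)):
    """Splice two discovered source-to-destination paths at a shared
    intermediate node, yielding a new valid loop-free path (a toy
    version of a GA path-crossover operator)."""
    shared = [n for n in p1[1:-1] if n in p2[1:-1]]
    if not shared:
        return p1                       # no crossover point available
    node = rng.choice(shared)
    child = p1[:p1.index(node)] + p2[p2.index(node):]
    # Remove any loop introduced by the splice: on a repeat, cut back
    # to the first occurrence of the repeated node.
    seen, loop_free = {}, []
    for n in child:
        if n in seen:
            loop_free = loop_free[:seen[n]]
            seen = {m: i for i, m in enumerate(loop_free)}
        seen[n] = len(loop_free)
        loop_free.append(n)
    return loop_free

child = path_crossover(["S", "a", "x", "b", "D"],
                       ["S", "c", "x", "d", "D"])
```

Each child path would then be scored by a QoS "fitness" (e.g., measured delay) before being offered to the source router, consistent with the background-mode operation described above.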
The new approach for infrared target tracking based on the particle filter algorithm
NASA Astrophysics Data System (ADS)
Sun, Hang; Han, Hong-xia
2011-08-01
to further improve tracking performance. Experimental results show that the algorithm compensates for the particle filter's heavy computational cost and effectively overcomes mean shift's tendency to converge to a local extremum instead of the global maximum. Finally, because gray-level and target motion information are fused, the approach also suppresses interference from the background, ultimately improving the stability and real-time performance of target tracking.
A Genetic Algorithm Approach to Motion Sensor Placement in Smart Environments.
Thomas, Brian L; Crandall, Aaron S; Cook, Diane J
2016-04-01
Smart environments and ubiquitous computing technologies hold great promise for a wide range of real world applications. The medical community is particularly interested in high quality measurement of activities of daily living. With accurate computer modeling of older adults, decision support tools may be built to assist care providers. One aspect of effectively deploying these technologies is determining where the sensors should be placed in the home to effectively support these end goals. This work introduces and evaluates a set of approaches for generating sensor layouts in the home. These approaches range from the gold standard of human intuition-based placement to more advanced search algorithms, including Hill Climbing and Genetic Algorithms. The generated layouts are evaluated based on their ability to detect activities while minimizing the number of needed sensors. Sensor-rich environments can provide valuable insights about adults as they go about their lives. These sensors, once in place, provide information on daily behavior that can facilitate an aging-in-place approach to health care.
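The trade-off described above, detecting activities while minimizing the number of sensors, can be written as a simple fitness function over candidate layouts, which both Hill Climbing and a GA could then optimize. This is an illustrative sketch under assumed names (zones, penalty weight, layouts); the paper's actual evaluation of generated layouts is richer than this.

```python
def layout_fitness(layout, activity_zones, penalty=0.1):
    """Score a candidate sensor layout: reward each activity that has
    at least one sensor in its zone, and penalize the sensor count
    (a toy coverage-versus-cost objective)."""
    detected = sum(1 for zone in activity_zones
                   if any(s in zone for s in layout))
    return detected - penalty * len(layout)

# Each activity is detectable from a set of candidate sensor locations.
zones = [{"kitchen", "hall"}, {"bedroom"}, {"hall"}]

candidates = [{"hall"}, {"hall", "bedroom"}, {"kitchen", "bedroom"}]
best = max(candidates, key=lambda l: layout_fitness(l, zones))
```

A GA would evolve a population of such layouts (e.g., bit vectors over candidate locations) with this score as fitness, while hill climbing would greedily add or move one sensor at a time.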
NASA Astrophysics Data System (ADS)
Le-Duc, Thang; Ho-Huu, Vinh; Nguyen-Thoi, Trung; Nguyen-Quoc, Hung
2016-12-01
In recent years, various types of magnetorheological brakes (MRBs) have been proposed and optimized by different optimization algorithms that are integrated in commercial software such as ANSYS and Comsol Multiphysics. However, many of these optimization algorithms often possess some noteworthy shortcomings such as the trap of solutions at local extremes, or the limited number of design variables or the difficulty of dealing with discrete design variables. Thus, to overcome these limitations and develop an efficient computation tool for optimal design of the MRBs, an optimization procedure that combines differential evolution (DE), a gradient-free global optimization method with finite element analysis (FEA) is proposed in this paper. The proposed approach is then applied to the optimal design of MRBs with different configurations including conventional MRBs and MRBs with coils placed on the side housings. Moreover, to approach a real-life design, some necessary design variables of MRBs are considered as discrete variables in the optimization process. The obtained optimal design results are compared with those of available optimal designs in the literature. The results reveal that the proposed method outperforms some traditional approaches.
A genetic algorithms approach for altering the membership functions in fuzzy logic controllers
NASA Technical Reports Server (NTRS)
Shehadeh, Hana; Lea, Robert N.
1992-01-01
Through previous work, a fuzzy control system was developed to perform translational and rotational control of a space vehicle. This problem was then re-examined to determine the effectiveness of genetic algorithms on fine tuning the controller. This paper explains the problems associated with the design of this fuzzy controller and offers a technique for tuning fuzzy logic controllers. A fuzzy logic controller is a rule-based system that uses fuzzy linguistic variables to model human rule-of-thumb approaches to control actions within a given system. This 'fuzzy expert system' features rules that direct the decision process and membership functions that convert the linguistic variables into the precise numeric values used for system control. Defining the fuzzy membership functions is the most time consuming aspect of the controller design. One single change in the membership functions could significantly alter the performance of the controller. This membership function definition can be accomplished by using a trial and error technique to alter the membership functions creating a highly tuned controller. This approach can be time consuming and requires a great deal of knowledge from human experts. In order to shorten development time, an iterative procedure for altering the membership functions to create a tuned set that used a minimal amount of fuel for velocity vector approach and station-keep maneuvers was developed. Genetic algorithms, search techniques used for optimization, were utilized to solve this problem.
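The tuning idea above treats the membership-function breakpoints as the genes the GA perturbs. The sketch below shows one common encoding, triangular membership functions flattened into a chromosome of (foot, peak, foot) triples; the linguistic terms and values are hypothetical, and the fuel-based fitness evaluation is not modeled.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# A chromosome is the concatenated (a, b, c) breakpoints of every
# linguistic term; the GA mutates these values and re-scores the
# controller (e.g., by fuel used in a simulated maneuver).
chromosome = [-1.0, 0.0, 1.0,   # "zero" velocity error
              0.5, 1.0, 2.0]    # "positive" velocity error

def membership(x, genes):
    """Fuzzify a crisp input against every term encoded in the genes."""
    terms = [tuple(genes[i:i + 3]) for i in range(0, len(genes), 3)]
    return [tri(x, *t) for t in terms]

degrees = membership(0.75, chromosome)
```

Because one breakpoint change can significantly alter controller performance, letting the GA search this space replaces the trial-and-error tuning the abstract describes.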
Geçen, Nazmiye; Sarıpınar, Emin; Yanmaz, Ersin; Sahin, Kader
2012-01-01
Two different approaches, namely the electron conformational and genetic algorithm methods (EC-GA), were combined to identify a pharmacophore group and to predict the antagonist activity of 1,4-dihydropyridines (known calcium channel antagonists) from molecular structure descriptors. To identify the pharmacophore, electron conformational matrices of congruity (ECMC), which include atomic charges as diagonal elements and bond orders and interatomic distances as off-diagonal elements, were arranged for all compounds. The ECMC of the compound with the highest activity was chosen as a template and compared with the ECMCs of other compounds within given tolerances to reveal the electron conformational submatrix of activity (ECSA) that refers to the pharmacophore. The genetic algorithm was employed to search for the best subset of parameter combinations that contributes the most to activity. Applying the model with the optimum 10 parameters to training (50 compounds) and test (22 compounds) sets gave satisfactory results (R²(training) = 0.848, R²(test) = 0.904, with a cross-validated q² = 0.780).
Liu, Yuangang; Guo, Qingsheng; Sun, Yageng; Ma, Xiaoya
2014-01-01
Scale reduction from source to target maps inevitably leads to conflicts of map symbols in cartography and geographic information systems (GIS). Displacement is one of the most important map generalization operators and it can be used to resolve the problems that arise from conflict among two or more map objects. In this paper, we propose a combined approach based on a constrained Delaunay triangulation (CDT) skeleton and an improved elastic beam algorithm for automated building displacement. In this approach, map data sets are first partitioned. Then the displacement operation is conducted in each partition as a cyclic and iterative process of conflict detection and resolution. In the iteration, the skeleton of the gap spaces is extracted using the CDT. It then serves as an enhanced data model to detect conflicts and construct the proximity graph. Then, the proximity graph is adjusted using local grouping information. Under the action of forces derived from the detected conflicts, the proximity graph is deformed using the improved elastic beam algorithm. In this way, buildings are displaced to find an optimal compromise between related cartographic constraints. To validate this approach, two topographic map data sets (i.e., urban and suburban areas) were tested. The results were reasonable with respect to each constraint when the density of the map was not extremely high. In summary, the improvements include (1) an automated parameter-setting method for elastic beams, (2) explicit enforcement regarding the positional accuracy constraint, added by introducing drag forces, (3) preservation of local building groups through displacement over an adjusted proximity graph, and (4) an iterative strategy that is more likely to resolve the proximity conflicts than the one used in the existing elastic beam algorithm.
A heuristic approach based on Clarke-Wright algorithm for open vehicle routing problem.
Pichpibul, Tantikorn; Kawtummachai, Ruengsak
2013-01-01
We propose a heuristic approach based on the Clarke-Wright algorithm (CW) to solve the open version of the well-known capacitated vehicle routing problem in which vehicles are not required to return to the depot after completing service. The proposed CW has been presented in four procedures composed of Clarke-Wright formula modification, open-route construction, two-phase selection, and route postimprovement. Computational results show that the proposed CW is competitive and outperforms classical CW in all directions. Moreover, the best known solution is also obtained in 97% of tested instances (60 out of 62).
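The savings computation underlying the approach above can be sketched briefly. In the classical Clarke-Wright formula, merging routes saves s(i, j) = d(0, i) + d(0, j) − d(i, j); with open routes that never return to the depot, one common modification counts only the replaced depot leg, s(i, j) = d(0, j) − d(i, j). This is a hedged illustration of that open-route idea, not necessarily the paper's exact formula modification; the distance matrix is a toy example.

```python
import itertools

def open_savings(dist):
    """Rank Clarke-Wright-style savings for the open VRP. With no
    return to the depot (node 0), appending customer j after customer
    i replaces the depot leg d(0, j) by d(i, j), so the saving is
    s(i, j) = d(0, j) - d(i, j). Returns (saving, i, j) descending."""
    n = len(dist)
    savings = [(dist[0][j] - dist[i][j], i, j)
               for i, j in itertools.permutations(range(1, n), 2)]
    return sorted(savings, reverse=True)

# Symmetric toy distance matrix: depot 0 plus customers 1..3.
d = [[0, 4, 5, 7],
     [4, 0, 2, 6],
     [5, 2, 0, 3],
     [7, 6, 3, 0]]
ranked = open_savings(d)
```

Route construction would then scan this ranked list, merging feasible route pairs subject to vehicle capacity, before the two-phase selection and post-improvement steps described above.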
Compromise Approach-Based Genetic Algorithm for Constrained Multiobjective Portfolio Selection Model
NASA Astrophysics Data System (ADS)
Li, Jun
In this paper, fuzzy set theory is incorporated into a multiobjective portfolio selection model that takes three criteria into account for investors: return, risk, and liquidity. The cardinality constraint, the buy-in threshold constraint, and the round-lot constraints are considered in the proposed model. To overcome the difficulty of evaluating a large set of efficient solutions and selecting the best one on the non-dominated surface, a compromise approach-based genetic algorithm is presented to obtain a compromise solution for the proposed constrained multiobjective portfolio selection model.
NASA Technical Reports Server (NTRS)
Blackwell, William C., Jr.
2004-01-01
In this paper space is modeled as a lattice of Compton wave oscillators (CWOs) of near-Planck size. It is shown that gravitation and special relativity emerge from the interaction between particles' Compton waves. To develop this CWO model an algorithmic approach was taken, incorporating simple rules of interaction at the Planck scale developed using well-known physical laws. This technique naturally leads to Newton's law of gravitation and a new form of doubly special relativity. The model is in apparent agreement with the holographic principle, and it predicts a cutoff energy for ultrahigh-energy cosmic rays that is consistent with observational data.
Exponential Gaussian approach for spectral modeling: The EGO algorithm I. Band saturation
NASA Astrophysics Data System (ADS)
Pompilio, Loredana; Pedrazzi, Giuseppe; Sgavetti, Maria; Cloutis, Edward A.; Craig, Michael A.; Roush, Ted L.
2009-06-01
Curve fitting techniques are a widespread approach to spectral modeling in the VNIR range [Burns, R.G., 1970. Am. Mineral. 55, 1608-1632; Singer, R.B., 1981. J. Geophys. Res. 86, 7967-7982; Roush, T.L., Singer, R.B., 1986. J. Geophys. Res. 91, 10301-10308; Sunshine, J.M., Pieters, C.M., Pratt, S.F., 1990. J. Geophys. Res. 95, 6955-6966]. They have been successfully used to model reflectance spectra of powdered minerals and mixtures, natural rock samples and meteorites, and unknown remote spectra of the Moon, Mars and asteroids. Here, we test a new decomposition algorithm to model VNIR reflectance spectra and call it Exponential Gaussian Optimization (EGO). The EGO algorithm is derived from and complementary to the MGM of Sunshine et al. [Sunshine, J.M., Pieters, C.M., Pratt, S.F., 1990. J. Geophys. Res. 95, 6955-6966]. The general EGO equation has been especially designed to account for absorption bands affected by saturation and asymmetry. Here we present a special case of EGO and address it to model saturated electronic transition bands. Our main goals are: (1) to recognize and model band saturation in reflectance spectra; (2) to develop a basic approach for decomposition of rock spectra, where effects due to saturation are most prevalent; (3) to reduce the uncertainty related to quantitative estimation when band saturation is occurring. In order to accomplish these objectives, we simulate flat bands starting from pure Gaussians and test the EGO algorithm on those simulated spectra first. Then we test the EGO algorithm on a number of measurements acquired on powdered pyroxenes having different compositions and average grain size and binary mixtures of orthopyroxenes with barium sulfate. The main results arising from this study are: (1) EGO model is able to numerically account for the occurrence of saturation effects on reflectance spectra of powdered minerals and mixtures; (2) the systematic dilution of a strong absorber using a bright neutral material is not
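Band saturation, the phenomenon the EGO algorithm is designed to fit, can be illustrated with a toy model: a Gaussian absorbance pushed through a Beer-Lambert-style exponential, so that strong bands bottom out and flatten in reflectance, which a plain Gaussian cannot reproduce. This sketch is illustrative only; the actual EGO equation is a different (exponential Gaussian) parameterization, and all parameters here are assumed values.

```python
import math

def gaussian(x, center, width, strength):
    """Gaussian absorbance band."""
    return strength * math.exp(-((x - center) ** 2) / (2 * width ** 2))

def saturated_band(x, center, width, strength):
    """Toy reflectance with a saturating absorption band: as `strength`
    grows, the reflectance minimum approaches zero and the band bottom
    flattens, mimicking the saturation effects described above."""
    return math.exp(-gaussian(x, center, width, strength))

grid = [i / 10 for i in range(100)]                 # 0.0 .. 9.9
weak = [saturated_band(x, 5.0, 0.8, 0.3) for x in grid]
strong = [saturated_band(x, 5.0, 0.8, 8.0) for x in grid]
```

Decomposing the `strong` curve with pure Gaussians would underestimate band depth and distort widths, which is the quantitative-estimation uncertainty the EGO approach aims to reduce.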
An algorithmic and information-theoretic approach to multimetric index construction
Schoolmaster, Donald R.; Grace, James B.; Schweiger, E. William; Guntenspergen, Glenn R.; Mitchell, Brian R.; Miller, Kathryn M.; Little, Amanda M.
2013-01-01
The use of multimetric indices (MMIs), such as the widely used index of biological integrity (IBI), to measure, track, summarize and infer the overall impact of human disturbance on biological communities has been steadily growing in recent years. Initially, MMIs were developed for aquatic communities using pre-selected biological metrics as indicators of system integrity. As interest in these bioassessment tools has grown, so have the types of biological systems to which they are applied. For many ecosystem types the appropriate biological metrics to use as measures of biological integrity are not known a priori. As a result, a variety of ad hoc protocols for selecting metrics empirically have been developed. However, the assumptions made by proposed protocols have not been explicitly described or justified, causing many investigators to call for a clear, repeatable methodology for developing empirically derived metrics and indices that can be applied to any biological system. An issue of particular importance that has not been sufficiently addressed is the way that individual metrics combine to produce an MMI that is a sensitive composite indicator of human disturbance. In this paper, we present and demonstrate an algorithm for constructing MMIs given a set of candidate metrics and a measure of human disturbance. The algorithm uses each metric to inform a candidate MMI, and then uses information-theoretic principles to select MMIs that capture the information in the multidimensional system response from among possible MMIs. Such an approach can be used to create purely empirical (data-based) MMIs or can, optionally, be influenced by expert opinion or biological theory through the use of a weighting vector to create value-weighted MMIs. We demonstrate the algorithm with simulated data to show the predictive capacity of the final MMIs and with real data from wetlands from Acadia and Rocky Mountain National Parks. For the Acadia wetland data, the algorithm identified
An algorithmic approach for breakage-fusion-bridge detection in tumor genomes
Zakov, Shay; Kinsella, Marcus; Bafna, Vineet
2013-01-01
Breakage-fusion-bridge (BFB) is a mechanism of genomic instability characterized by the joining and subsequent tearing apart of sister chromatids. When this process is repeated during multiple rounds of cell division, it leads to patterns of copy number increases of chromosomal segments as well as fold-back inversions where duplicated segments are arranged head-to-head. These structural variations can then drive tumorigenesis. BFB can be observed in progress using cytogenetic techniques, but generally BFB must be inferred from data such as microarrays or sequencing collected after BFB has ceased. Making correct inferences from this data is not straightforward, particularly given the complexity of some cancer genomes and BFB’s ability to generate a wide range of rearrangement patterns. Here we present algorithms to aid the interpretation of evidence for BFB. We first pose the BFB count-vector problem: given a chromosome segmentation and segment copy numbers, decide whether BFB can yield a chromosome with the given segment counts. We present a linear time algorithm for the problem, in contrast to a previous exponential time algorithm. We then combine this algorithm with fold-back inversions to develop tests for BFB. We show that, contingent on assumptions about cancer genome evolution, count vectors and fold-back inversions are sufficient evidence for detecting BFB. We apply the presented techniques to paired-end sequencing data from pancreatic tumors and confirm a previous finding of BFB as well as identify a chromosomal region likely rearranged by BFB cycles, demonstrating the practicality of our approach. PMID:23503850
NASA Astrophysics Data System (ADS)
Picozzi, Silvia; Asahi, Ryoji; Geller, Clint; Freeman, Arthur
2004-03-01
We present an ab initio modeling approach for Auger recombination and impact ionization in semiconductors directed at (1) quantitative rate determinations and (2) elucidating trends with respect to alloy composition, carrier concentration and temperature. We present a fully first-principles formalism (S. Picozzi, R. Asahi, C. B. Geller and A. J. Freeman, Phys. Rev. Lett. 89, 197601 (2002); Phys. Rev. B 65, 113206 (2002)), based on accurate energy bands and wave functions within the screened exchange local density approximation and the full-potential linearized augmented plane wave (FLAPW) method (E. Wimmer, H. Krakauer, M. Weinert, A. J. Freeman, Phys. Rev. B 24, 864 (1981)). Results are presented for electron- and hole-initiated impact ionization processes and Auger recombination for p-type and n-type InGaAs. Anisotropy and composition effects in the related rates are discussed in terms of the underlying band structures. Calculated Auger lifetimes, in general agreement with experiments, are studied for different recombination mechanisms (i.e. CCCH, CHHL, CHHS, involving conduction electrons (C), heavy holes (H), light holes (L), and the spin split-off (S) band) in order to understand the dominant mechanism.
Dalzini, Annalisa; Bergamini, Christian; Biondi, Barbara; De Zotti, Marta; Panighel, Giacomo; Fato, Romana; Peggion, Cristina; Bortolus, Marco; Maniero, Anna Lisa
2016-01-01
Peptaibols are peculiar peptides produced by fungi as weapons against other microorganisms. Previous studies showed that peptaibols are promising peptide-based drugs because they act against cell membranes rather than a specific target, thus lowering the possibility of the onset of multi-drug resistance, and they possess non-coded α-amino acid residues that confer proteolytic resistance. Trichogin GA IV (TG) is a short peptaibol displaying antimicrobial and cytotoxic activity. In the present work, we studied thirteen TG analogues, adopting a multidisciplinary approach. We showed that the cytotoxicity is tuneable by single amino-acid substitutions. Many analogues maintain the same level of non-selective cytotoxicity as TG, and three analogues are completely non-toxic. Two promising lead compounds, characterized by the introduction of a positively charged unnatural amino acid in the hydrophobic face of the helix, selectively kill T67 cancer cells without affecting healthy cells. To explain the determinants of the cytotoxicity, we investigated the structural parameters of the peptides, their cell-binding properties, cell localization, and dynamics in the membrane, as well as the cell membrane composition. We show that, while cytotoxicity is governed by the fine balance between amphipathicity and hydrophobicity, the selectivity also depends on the expression of negatively charged phospholipids on the cell surface. PMID:27039838
Application of genetic algorithms to tuning fuzzy control systems
NASA Technical Reports Server (NTRS)
Espy, Todd; Vombrack, Endre; Aldridge, Jack
1993-01-01
Real number genetic algorithms (GA) were applied for tuning fuzzy membership functions of three controller applications. The first application is our 'Fuzzy Pong' demonstration, a controller that controls a very responsive system. The performance of the automatically tuned membership functions exceeded that of manually tuned membership functions both when the algorithm started with randomly generated functions and with the best manually-tuned functions. The second GA tunes input membership functions to achieve a specified control surface. The third application is a practical one, a motor controller for a printed circuit manufacturing system. The GA alters the positions and overlaps of the membership functions to accomplish the tuning. The applications, the real number GA approach, the fitness function and population parameters, and the performance improvements achieved are discussed. Directions for further research in tuning input and output membership functions and in tuning fuzzy rules are described.
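The abstract gives no implementation details, so the following is only a rough sketch of how a real-number GA can tune fuzzy membership functions: a population of triangular-set centers is evolved with blend crossover and Gaussian mutation, with fitness scored against a target input-output response. The zeroth-order Sugeno inference, the fixed rule outputs, and all parameter values here are illustrative assumptions, not the authors' controllers.

```python
import random

def tri(x, a, b, c):
    """Triangular membership rising on [a, b], falling on [b, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_eval(x, centers, outs, width=0.6):
    """Zeroth-order Sugeno inference with triangular input sets."""
    w = [tri(x, c - width, c, c + width) for c in centers]
    s = sum(w)
    return sum(wi * oi for wi, oi in zip(w, outs)) / s if s else 0.0

def fitness(centers, target, outs):
    """Negative sum-of-squares error over a sample grid (higher is better)."""
    xs = [i / 20 for i in range(21)]
    return -sum((fuzzy_eval(x, centers, outs) - target(x)) ** 2 for x in xs)

def tune(target, n_sets=3, pop=40, gens=120, seed=1):
    rng = random.Random(seed)
    outs = [i / (n_sets - 1) for i in range(n_sets)]  # fixed rule outputs
    popn = [sorted(rng.uniform(0, 1) for _ in range(n_sets)) for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=lambda c: fitness(c, target, outs), reverse=True)
        elite = popn[: pop // 4]                      # keep the best quarter
        children = []
        while len(elite) + len(children) < pop:
            p, q = rng.sample(elite, 2)
            a = rng.random()
            child = [a * pi + (1 - a) * qi for pi, qi in zip(p, q)]  # blend crossover
            child = sorted(min(1.0, max(0.0, g + rng.gauss(0, 0.05)))
                           for g in child)            # Gaussian mutation, clamped
            children.append(child)
        popn = elite + children
    best = max(popn, key=lambda c: fitness(c, target, outs))
    return best, -fitness(best, target, outs)
```

With a linear target response, the tuned centers tend toward evenly spaced positions; the same loop applies unchanged to any scalar target surface.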
Kurtz, S.; Wanlass, M.; Kramer, C.; Young, M.; Geisz, J.; Ward, S.; Duda, A.; Moriarty, T.; Carapella, J.; Ahrenkiel, P.; Emery, K.; Jones, K.; Romero, M.; Kibbler, A.; Olson, J.; Friedman, D.; McMahon, W.; Ptak, A.
2005-11-01
GaInP/GaAs/GaInAs three-junction cells are grown in an inverted configuration on GaAs, allowing high quality growth of the lattice matched GaInP and GaAs layers before a grade is used for the 1-eV GaInAs layer. Using this approach an efficiency of 37.9% was demonstrated.
Vertical and lateral flight optimization algorithm and missed approach cost calculation
NASA Astrophysics Data System (ADS)
Murrieta Mendoza, Alejandro
Flight trajectory optimization is seen as a way of reducing flight costs, fuel burn, and the emissions generated by fuel consumption. The objective of this work is to find the optimal trajectory between two points. To find the optimal trajectory, the parameters of weight, cost index, initial coordinates, and meteorological conditions along the route are provided to the algorithm, which finds the trajectory whose global cost is the most economical. The global cost is a compromise between fuel burned and flight time, determined using a cost index that assigns a cost in terms of fuel to the flight time. The optimization is achieved by calculating a candidate optimal cruise trajectory profile from all the combinations available in the aircraft performance database. From this cruise candidate profile, further cruise profiles are calculated taking into account the climb and descent costs. During cruise, step climbs are evaluated to optimize the trajectory. The different trajectories are compared and the most economical one is defined as the optimal vertical navigation profile. From the optimal vertical navigation profile, different lateral routes are tested. Taking advantage of the meteorological influence, the algorithm looks for the lateral navigation trajectory whose global cost is the most economical; that route is then selected as the optimal lateral navigation profile. The meteorological data were obtained from Environment Canada. The new way of obtaining data from the Environment Canada grid proposed in this work resulted in an important reduction of computation time compared with other methods such as bilinear interpolation. The algorithm developed here was evaluated on two different aircraft: the Lockheed L-1011 and the Sukhoi Russian regional jet. The algorithm was developed in MATLAB, and the validation was performed using Flight-Sim by Presagis and the FMS CMA-9000 by CMC Electronics -- Esterline. At the end of this work a
Three-class classification models of logS and logP derived by using GA-CG-SVM approach.
Zhang, Hui; Xiang, Ming-Li; Ma, Chang-Ying; Huang, Qi; Li, Wei; Xie, Yang; Wei, Yu-Quan; Yang, Sheng-Yong
2009-05-01
In this investigation, three-class classification models of aqueous solubility (logS) and lipophilicity (logP) have been developed by using a support vector machine (SVM) method combined with a genetic algorithm (GA) for feature selection and a conjugate gradient method (CG) for parameter optimization. A 5-fold cross-validation and an independent test set method were used to evaluate the SVM classification models. For logS, the overall prediction accuracy is 87.1% for the training set and 90.0% for the test set. For logP, the overall prediction accuracy is 81.0% for the training set and 82.0% for the test set. In general, for both logS and logP, the prediction accuracies of the three-class models are lower by several percent than those of the corresponding two-class models. A comparison between the performance of GA-CG-SVM models and that of GA-SVM models shows that the SVM parameter optimization has a significant impact on the quality of the SVM classification model.
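As a dependency-free illustration of the GA feature-selection half of such a pipeline (the SVM wrapper and the conjugate-gradient parameter tuning are omitted; a nearest-centroid classifier and synthetic two-class data stand in for them, both as assumptions), a bitstring GA with one-point crossover and bit-flip mutation might look like this:

```python
import random

def make_data(n=60, n_feat=8, informative=(0, 3), seed=0):
    """Two-class toy data: only the features in `informative` carry signal."""
    rng = random.Random(seed)
    X, y = [], []
    for i in range(n):
        label = i % 2
        row = [rng.gauss(0, 1) for _ in range(n_feat)]
        for j in informative:
            row[j] += 2.5 * label        # class separation on informative features
        X.append(row)
        y.append(label)
    return X, y

def centroid_accuracy(X, y, mask):
    """Wrapper fitness: nearest-centroid accuracy on the masked features,
    minus a small parsimony penalty per selected feature."""
    feats = [j for j, m in enumerate(mask) if m]
    if not feats:
        return 0.0
    cent = {c: [sum(X[i][j] for i in range(len(X)) if y[i] == c) /
                sum(1 for i in range(len(X)) if y[i] == c) for j in feats]
            for c in (0, 1)}
    correct = 0
    for xi, yi in zip(X, y):
        d = {c: sum((xi[j] - cj) ** 2 for j, cj in zip(feats, cent[c]))
             for c in (0, 1)}
        correct += (min(d, key=d.get) == yi)
    return correct / len(X) - 0.01 * len(feats)

def ga_select(X, y, pop=30, gens=40, seed=1):
    rng = random.Random(seed)
    n = len(X[0])
    popn = [[rng.random() < 0.5 for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=lambda m: centroid_accuracy(X, y, m), reverse=True)
        keep = popn[: pop // 3]                                  # elitism
        while len(keep) < pop:
            p, q = rng.sample(popn[: pop // 3], 2)
            cut = rng.randrange(1, n)
            child = p[:cut] + q[cut:]                            # one-point crossover
            child = [b ^ (rng.random() < 1 / n) for b in child]  # bit-flip mutation
            keep.append(child)
        popn = keep
    return max(popn, key=lambda m: centroid_accuracy(X, y, m))
```

In the paper's setting the fitness would instead be the cross-validated accuracy of the CG-tuned SVM on the selected descriptors.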
Balima, O.; Favennec, Y.; Rousse, D.
2013-10-15
Highlights:
- New strategies to improve the accuracy of the reconstruction through mesh and finite element parameterization.
- Use of gradient filtering through an alternative inner product within the adjoint method.
- An integral form of the cost function is used to make the reconstruction compatible with all finite element formulations, continuous and discontinuous.
- A gradient-based algorithm with the adjoint method is used for the reconstruction.

Abstract: Optical tomography is mathematically treated as a non-linear inverse problem where the optical properties of the probed medium are recovered through the minimization of the errors between the experimental measurements and their predictions with a numerical model at the locations of the detectors. Owing to the ill-posed behavior of the inverse problem, some regularization tools must be employed, and Tikhonov penalization is the most commonly used in optical tomography applications. This paper introduces an optimized approach for optical tomography reconstruction with the finite element method. An integral form of the cost function is used to take into account the surfaces of the detectors and make the reconstruction compatible with all finite element formulations, continuous and discontinuous. Through a gradient-based algorithm in which the adjoint method is used to compute the gradient of the cost function, an alternative inner product is employed for preconditioning the reconstruction algorithm. Moreover, an appropriate re-parameterization of the optical properties is performed. These regularization strategies are compared with the classical Tikhonov penalization. It is shown that both the re-parameterization and the use of the Sobolev cost function gradient are efficient for solving such an ill-posed inverse problem.
Toward a probabilistic acoustic emission source location algorithm: A Bayesian approach
NASA Astrophysics Data System (ADS)
Schumacher, Thomas; Straub, Daniel; Higgins, Christopher
2012-09-01
Acoustic emissions (AE) are stress waves initiated by sudden strain releases within a solid body. These can be caused by internal mechanisms such as crack opening or propagation, crushing, or rubbing of crack surfaces. One application for the AE technique in the field of Structural Engineering is Structural Health Monitoring (SHM). With piezo-electric sensors mounted to the surface of the structure, stress waves can be detected, recorded, and stored for later analysis. An important step in quantitative AE analysis is the estimation of the stress wave source locations. Commonly, source location results are presented in a rather deterministic manner as spatial and temporal points, excluding information about uncertainties and errors. Due to variability in the material properties and uncertainty in the mathematical model, measures of uncertainty are needed beyond best-fit point solutions for source locations. This paper introduces a novel holistic framework for the development of a probabilistic source location algorithm. Bayesian analysis methods with Markov Chain Monte Carlo (MCMC) simulation are employed where all source location parameters are described with posterior probability density functions (PDFs). The proposed methodology is applied to an example employing data collected from a realistic section of a reinforced concrete bridge column. The selected approach is general and has the advantage that it can be extended and refined efficiently. Results are discussed and future steps to improve the algorithm are suggested.
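A minimal sketch of the Bayesian idea, not the authors' algorithm: a Metropolis sampler draws source coordinates from a posterior built on arrival times at surface sensors, so the source location comes out as a cloud of samples (a posterior PDF) rather than a single best-fit point. The sensor layout, wave speed, Gaussian noise model, known emission time, and flat prior below are all simplifying assumptions.

```python
import math
import random

SENSORS = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0), (4.0, 4.0)]  # assumed sensor grid (m)
V = 4000.0      # assumed wave speed (m/s)
SIGMA = 5e-6    # assumed arrival-time noise (s)

def arrivals(src, rng=None):
    """Arrival times for a source at `src` emitting at t = 0."""
    ts = [math.dist(src, s) / V for s in SENSORS]
    if rng:
        ts = [t + rng.gauss(0, SIGMA) for t in ts]
    return ts

def log_post(src, obs):
    """Log posterior: Gaussian likelihood, flat prior inside the 4x4 m panel."""
    if not (0 <= src[0] <= 4 and 0 <= src[1] <= 4):
        return -math.inf
    return -sum((o - p) ** 2 for o, p in zip(obs, arrivals(src))) / (2 * SIGMA ** 2)

def metropolis(obs, n=20000, step=0.05, seed=2):
    """Random-walk Metropolis over source coordinates."""
    rng = random.Random(seed)
    x = (2.0, 2.0)
    lp = log_post(x, obs)
    samples = []
    for i in range(n):
        cand = (x[0] + rng.gauss(0, step), x[1] + rng.gauss(0, step))
        lpc = log_post(cand, obs)
        if lpc >= lp or rng.random() < math.exp(lpc - lp):
            x, lp = cand, lpc
        if i > n // 4:          # discard burn-in
            samples.append(x)
    return samples
```

The spread of the returned samples directly quantifies the location uncertainty that a deterministic point solution hides; the paper's framework additionally treats material and model parameters as uncertain.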
Armañanzas, Rubén; Saeys, Yvan; Inza, Iñaki; García-Torres, Miguel; Bielza, Concha; van de Peer, Yves; Larrañaga, Pedro
2011-01-01
Progress is continuously being made in the quest for stable biomarkers linked to complex diseases. Mass spectrometers are one of the devices for tackling this problem. The data profiles they produce are noisy and unstable. In these profiles, biomarkers are detected as signal regions (peaks), where control and disease samples behave differently. Mass spectrometry (MS) data generally contain a limited number of samples described by a high number of features. In this work, we present a novel class of evolutionary algorithms, estimation of distribution algorithms (EDA), as an efficient peak selector in this MS domain. There is a trade-off between the reliability of the detected biomarkers and the low number of samples for analysis. For this reason, we introduce a consensus approach, built upon the classical EDA scheme, that improves stability and robustness of the final set of relevant peaks. An entire data workflow is designed to yield unbiased results. Four publicly available MS data sets (two MALDI-TOF and another two SELDI-TOF) are analyzed. The results are compared to the original works, and a new plot (peak frequential plot) for graphically inspecting the relevant peaks is introduced. A complete online supplementary page, which can be found at http://www.sc.ehu.es/ccwbayes/members/ruben/ms, includes extended info and results, in addition to Matlab scripts and references.
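Population-based incremental learning (PBIL) is one of the simplest univariate EDAs and conveys the scheme the abstract builds on: instead of recombining individuals as a GA does, the algorithm maintains a vector of per-bit selection probabilities and shifts it toward the elite samples of each generation. The paper's consensus layer and MS preprocessing are not reproduced here, and the fitness function in the test is a synthetic stand-in for a peak-relevance score.

```python
import random

def pbil(fitness, n_bits, pop=30, gens=60, lr=0.1, seed=3):
    """Population-based incremental learning, a simple univariate EDA.

    `fitness` maps a boolean mask (selected peaks) to a score; higher is better.
    Returns the best mask encountered across all generations.
    """
    rng = random.Random(seed)
    p = [0.5] * n_bits                      # marginal selection probabilities
    best, best_f = None, float("-inf")
    for _ in range(gens):
        # Sample a population from the current marginal distribution.
        popn = [[rng.random() < pi for pi in p] for _ in range(pop)]
        popn.sort(key=fitness, reverse=True)
        if fitness(popn[0]) > best_f:
            best, best_f = popn[0], fitness(popn[0])
        # Shift the marginals toward the top individuals.
        for elite in popn[:3]:
            p = [(1 - lr) * pi + lr * b for pi, b in zip(p, elite)]
        # Clamp so every bit keeps a small chance of flipping (exploration).
        p = [min(0.95, max(0.05, pi)) for pi in p]
    return best
```

In the peak-selection setting, each bit marks one m/z region as relevant, and the converged probability vector itself is informative: stable, repeatedly selected peaks end up with marginals near the upper clamp.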
A Decompositional Approach to Executing Quality Data Model Algorithms on the i2b2 Platform
Mo, Huan; Jiang, Guoqian; Pacheco, Jennifer A.; Kiefer, Richard; Rasmussen, Luke V.; Pathak, Jyotishman; Denny, Joshua C.; Thompson, William K.
2016-01-01
The Quality Data Model (QDM) is an established standard for representing electronic clinical quality measures on electronic health record (EHR) repositories. Informatics for Integrating Biology and the Bedside (i2b2) is a widely used platform for implementing clinical data repositories. However, translation from QDM to i2b2 is challenging, since QDM allows for complex queries beyond the capability of single i2b2 messages. We have developed an approach to decompose complex QDM algorithms into workflows of single i2b2 messages, and execute them on the KNIME data analytics platform. Each workflow operation module is composed of parameter lists, a template for the i2b2 message, a mechanism to create parameter updates, and a web service call to i2b2. The communication between workflow modules relies on passing keys of i2b2 result sets. As a demonstration of validity, we describe the implementation and execution of a type 2 diabetes mellitus phenotype algorithm against an i2b2 data repository. PMID:27570665
Hughes, James Alexander; Houghten, Sheridan; Ashlock, Daniel
2016-12-01
DNA fragment assembly, an NP-hard problem, is one of the major steps in DNA sequencing. Multiple strategies have been used for this problem, including greedy graph-based algorithms, de Bruijn graphs, and the overlap-layout-consensus approach. This study focuses on the overlap-layout-consensus approach. Heuristics and computational intelligence methods are combined to exploit their respective benefits. These algorithm combinations were able to produce high quality results, surpassing the best results obtained by a number of competitive algorithms specially designed and tuned for this problem on thirteen of sixteen popular benchmarks. This work also reinforces the necessity of using multiple search strategies, as it is clearly observed that algorithm performance is dependent on problem instance; without a deeper look into many searches, top solutions could be missed entirely.
A Dynamic Health Assessment Approach for Shearer Based on Artificial Immune Algorithm
Wang, Zhongbin; Xu, Xihua; Si, Lei; Ji, Rui; Liu, Xinhua; Tan, Chao
2016-01-01
In order to accurately identify the dynamic health of a shearer, reduce operating trouble and production accidents, and further improve coal production efficiency, a dynamic health assessment approach for the shearer based on an artificial immune algorithm was proposed. The key technologies, such as the system framework, the selection of indicators for shearer dynamic health assessment, and the health assessment model, were provided, and the flowchart of the proposed approach was designed. A simulation example, with an accuracy of 96%, based on data collected from an industrial production scene was provided. Furthermore, a comparison demonstrated that the proposed method exhibited higher classification accuracy than classifiers based on back propagation neural network (BP-NN) and support vector machine (SVM) methods. Finally, the proposed approach was applied to an engineering problem of shearer dynamic health assessment. The industrial application results showed that the research achievements could be used in combination with the shearer automation control system in a fully mechanized coal face. The simulation and application results indicated that the proposed method was feasible and outperformed the others. PMID:27123002
Bogle, Lee B; Boyd, Jeff J; McLaughlin, Kyle A
2010-03-01
As winter backcountry activity increases, so does exposure to avalanche danger. A complicated situation arises when multiple victims are caught in an avalanche and where medical and other rescue demands overwhelm resources in the field. These mass casualty incidents carry a high risk of morbidity and mortality, and there is no recommended approach to patient care specific to this setting other than basic first aid principles. The literature is limited with regard to triaging systems applicable to avalanche incidents. In conjunction with the development of an electronic avalanche rescue training module by the Canadian Avalanche Association, we have designed the Avalanche Survival Optimizing Rescue Triage algorithm to address the triaging of multiple avalanche victims to optimize survival and disposition decisions.
NASA Astrophysics Data System (ADS)
Mueller, Jenna L.; Harmany, Zachary T.; Mito, Jeffrey K.; Kennedy, Stephanie A.; Kim, Yongbaek; Dodd, Leslie; Geradts, Joseph; Kirsch, David G.; Willett, Rebecca M.; Brown, J. Quincy; Ramanujam, Nimmi
2013-02-01
The combination of fluorescent contrast agents with microscopy is a powerful technique to obtain real time images of tissue histology without the need for fixing, sectioning, and staining. The potential of this technology lies in the identification of robust methods for image segmentation and quantitation, particularly in heterogeneous tissues. Our solution is to apply sparse decomposition (SD) to monochrome images of fluorescently-stained microanatomy to segment and quantify distinct tissue types. The clinical utility of our approach is demonstrated by imaging excised margins in a cohort of mice after surgical resection of a sarcoma. Representative images of excised margins were used to optimize the formulation of SD and tune parameters associated with the algorithm. Our results demonstrate that SD is a robust solution that can advance vital fluorescence microscopy as a clinically significant technology.
NASA Astrophysics Data System (ADS)
Shang, J. S.; Andrienko, D. A.; Huang, P. G.; Surzhikov, S. T.
2014-06-01
An efficient computational capability for nonequilibrium radiation simulation via the ray tracing technique has been accomplished. The radiative rate equation is iteratively coupled with the aerodynamic conservation laws including nonequilibrium chemical and chemical-physical kinetic models. The spectral properties along tracing rays are determined by a space partition algorithm of the nearest neighbor search process, and the numerical accuracy is further enhanced by a local resolution refinement using the Gauss-Lobatto polynomial. The interdisciplinary governing equations are solved by an implicit delta formulation through the diminishing residual approach. The axisymmetric radiating flow fields over the reentry RAM-CII probe have been simulated and verified with flight data and previous solutions by traditional methods. A computational efficiency gain nearly forty times is realized over that of the existing simulation procedures.
Nakashima, Megan O.
2014-01-01
Hypercoagulability can result from a variety of inherited and, more commonly, acquired conditions. Testing for the underlying cause of thrombosis in a patient is complicated both by the number and variety of clinical conditions that can cause hypercoagulability as well as the many potential assay interferences. Using an algorithmic approach to hypercoagulability testing provides the ability to tailor assay selection to the clinical scenario. It also reduces the number of unnecessary tests performed, saving cost and time, and preventing potential false results. New oral anticoagulants are powerful tools for managing hypercoagulable patients; however, their use introduces new challenges in terms of test interpretation and therapeutic monitoring. The coagulation laboratory plays an essential role in testing for and treating hypercoagulable states. The input of laboratory professionals is necessary to guide appropriate testing and synthesize interpretation of results. PMID:25025009
Exponential Gaussian approach for spectral modelling: The EGO algorithm II. Band asymmetry
NASA Astrophysics Data System (ADS)
Pompilio, Loredana; Pedrazzi, Giuseppe; Cloutis, Edward A.; Craig, Michael A.; Roush, Ted L.
2010-08-01
The present investigation is complementary to a previous paper which introduced the EGO approach to spectral modelling of reflectance measurements acquired in the visible and near-IR range (Pompilio, L., Pedrazzi, G., Sgavetti, M., Cloutis, E.A., Craig, M.A., Roush, T.L. [2009]. Icarus, 201 (2), 781-794). Here, we show the performance of the EGO model in attempting to account for temperature-induced variations in spectra, specifically band asymmetry. Our main goals are: (1) to recognize and model thermally induced band asymmetry in reflectance spectra; (2) to develop a basic approach for decomposition of remotely acquired spectra from planetary surfaces, where effects due to temperature variations are most prevalent; (3) to reduce the uncertainty related to quantitative estimation of band position and depth when band asymmetry is occurring. In order to accomplish these objectives, we tested the EGO algorithm on a number of measurements acquired on powdered pyroxenes at sample temperatures ranging from 80 up to 400 K. The main results arising from this study are: (1) the EGO model is able to numerically account for the occurrence of band asymmetry on reflectance spectra; (2) the returned set of EGO parameters can suggest the influence of some additional effect other than the electronic transition responsible for the absorption feature; (3) the returned set of EGO parameters can help in estimating the surface temperature of a planetary body; (4) the occurrence of absorptions which are less affected by temperature variations can be mapped for minerals and thus used for compositional estimates. Further work is still required in order to analyze the behaviour of the EGO algorithm with respect to temperature-induced band asymmetry using powdered pyroxenes spanning a range of compositions and grain sizes and more complex band shapes.
An Algorithmic Framework for Multiobjective Optimization
Ganesan, T.; Elamvazuthi, I.; Shaari, Ku Zilati Ku; Vasant, P.
2013-01-01
Multiobjective (MO) optimization is an emerging area which is increasingly being encountered in many fields globally. Various metaheuristic techniques such as differential evolution (DE), genetic algorithms (GA), the gravitational search algorithm (GSA), and particle swarm optimization (PSO) have been used in conjunction with scalarization techniques such as the weighted-sum approach and the normal-boundary intersection (NBI) method to solve MO problems. Nevertheless, many challenges still arise, especially when dealing with problems with more than two objectives. In addition, extensive computational overhead emerges when dealing with hybrid algorithms. This paper addresses these issues by proposing an alternative framework that utilizes algorithmic concepts related to the problem structure for generating efficient and effective algorithms. The proposed framework generates new high-performance algorithms with minimal computational overhead for MO optimization. PMID:24470795
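The weighted-sum scalarization mentioned in the abstract can be demonstrated in a few lines: sweeping the weight converts a two-objective problem into a family of single-objective ones whose minimizers trace (part of) the Pareto front. The candidate set and objectives below are textbook stand-ins, not taken from the paper; note also the known limitation that a weighted sum cannot reach non-convex regions of a front.

```python
def weighted_sum_front(f1, f2, candidates, n_weights=11):
    """Approximate a Pareto front by sweeping w in the scalarization
    w*f1 + (1-w)*f2 and keeping each weight's minimizer."""
    front = []
    for k in range(n_weights):
        w = k / (n_weights - 1)
        best = min(candidates, key=lambda x: w * f1(x) + (1 - w) * f2(x))
        if best not in front:
            front.append(best)
    return front
```

For the classic one-dimensional example f1(x) = x², f2(x) = (x-1)², the Pareto-optimal set is the interval [0, 1], and the sweep recovers points spread across it, including both single-objective optima at the endpoints.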
Optimal management of substrates in anaerobic co-digestion: An ant colony algorithm approach.
Verdaguer, Marta; Molinos-Senante, María; Poch, Manel
2016-04-01
Sewage sludge (SWS) is inevitably produced in urban wastewater treatment plants (WWTPs). The treatment of SWS on site at small WWTPs is not economical; therefore, the SWS is typically transported to an alternative SWS treatment center. There is increased interest in the use of anaerobic digestion (AnD) with co-digestion as an SWS treatment alternative. Although the availability of different co-substrates has been ignored in most of the previous studies, it is an essential issue for the optimization of AnD co-digestion. In a pioneering approach, this paper applies an Ant-Colony-Optimization (ACO) algorithm that maximizes the generation of biogas through AnD co-digestion in order to optimize the discharge of organic waste from different waste sources in real-time. An empirical application is developed based on a virtual case study that involves organic waste from urban WWTPs and agrifood activities. The results illustrate the dominant role of toxicity levels in selecting contributions to the AnD input. The methodology and case study proposed in this paper demonstrate the usefulness of the ACO approach in supporting a decision process that contributes to improving the sustainability of organic waste and SWS management.
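The substrate-dosing problem is not specified in the abstract, so the sketch below recasts it as a 0/1 knapsack (maximize biogas gain under a toxicity cap) and solves it with a bare-bones ant-colony loop. The pheromone rule, the acceptance probability, and all numbers are illustrative assumptions rather than the paper's formulation.

```python
import random

def aco_knapsack(gain, tox, cap, ants=20, iters=60, rho=0.1, seed=4):
    """Ant-colony search for a substrate subset maximizing biogas gain
    under a toxicity cap (a 0/1-knapsack stand-in for the dosing problem)."""
    rng = random.Random(seed)
    n = len(gain)
    tau = [1.0] * n                       # pheromone per substrate
    best, best_g = [], 0.0
    for _ in range(iters):
        for _ in range(ants):
            # Each ant visits substrates in a pheromone-biased random order...
            order = sorted(range(n), key=lambda j: -tau[j] * rng.random())
            sel, t, g = [], 0.0, 0.0
            for j in order:
                # ...and includes one with probability increasing in its pheromone,
                # provided the toxicity cap is not exceeded.
                if t + tox[j] <= cap and rng.random() < tau[j] / (1 + tau[j]):
                    sel.append(j)
                    t += tox[j]
                    g += gain[j]
            if g > best_g:
                best, best_g = sel, g
        tau = [(1 - rho) * tj for tj in tau]          # evaporation
        for j in best:
            tau[j] += rho * best_g                    # reinforce best-so-far
    return sorted(best), best_g
```

In the paper's real-time setting, the "items" would be waste-source contributions at a given moment, and the loop would be re-run as availabilities change.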
Diffuse lung disease of infancy: a pattern-based, algorithmic approach to histological diagnosis.
Armes, Jane E; Mifsud, William; Ashworth, Michael
2015-02-01
Diffuse lung disease (DLD) of infancy has multiple aetiologies and the spectrum of disease is substantially different from that seen in older children and adults. In many cases, a specific diagnosis renders a dire prognosis for the infant, with profound management implications. Two recently published series of DLD of infancy, collated from the archives of specialist centres, indicate that the majority of their cases were referred, implying that the majority of biopsies taken for DLD of infancy are first received by less experienced pathologists. The current literature describing DLD of infancy takes a predominantly aetiological approach to classification. We present an algorithmic, histological, pattern-based approach to diagnosis of DLD of infancy, which, with the aid of appropriate multidisciplinary input, including clinical and radiological expertise and ancillary diagnostic studies, may lead to an accurate and useful interim report, with timely exclusion of inappropriate diagnoses. Subsequent referral to a specialist centre for confirmatory diagnosis will be dependent on the individual case and the decision of the multidisciplinary team.
What is the current role of algorithmic approaches for diagnosis of Clostridium difficile infection?
Wilcox, Mark H; Planche, Tim; Fang, Ferric C; Gilligan, Peter
2010-12-01
With the recognition of several serious outbreaks of Clostridium difficile infection in the industrialized world coupled with the development of new testing technologies for detection of this organism, there has been renewed interest in the laboratory diagnosis of C. difficile infection. Two factors seem to have driven much of this interest. First, the recognition that immunoassays for detection of C. difficile toxins A and B, for many years the most widely used tests for C. difficile infection diagnosis, were perhaps not as sensitive as previously believed at a time when attributed deaths to C. difficile infections were showing a remarkable rise. Second, the availability of FDA-approved commercial and laboratory-developed PCR assays which could detect toxigenic strains of C. difficile provided a novel and promising testing approach for diagnosing this infection. In this point-counterpoint on the laboratory diagnosis of C. difficile infection, we have asked two experts in C. difficile infection diagnosis, Ferric Fang, who has recently published two articles in the Journal of Clinical Microbiology advocating the use of PCR as a standalone test (see this author's references 12 and 28), and Mark Wilcox, who played a key role in developing the IDSA/SHEA guidelines on Clostridium difficile infection (see Wilcox and Planche's reference 1), along with his colleague, Tim Planche, to address the following question: what is the current role of algorithmic approaches to the diagnosis of C. difficile infection?
A data mining approach to optimize pellets manufacturing process based on a decision tree algorithm.
Ronowicz, Joanna; Thommes, Markus; Kleinebudde, Peter; Krysiński, Jerzy
2015-06-20
The present study is focused on a thorough analysis of cause-effect relationships between pellet formulation characteristics (pellet composition as well as process parameters) and a selected quality attribute of the final product. The quality of the pellets was expressed by their shape, using the aspect ratio value. A data matrix for chemometric analysis consisted of 224 pellet formulations prepared with eight different active pharmaceutical ingredients and several various excipients, using different extrusion/spheronization process conditions. The data set contained 14 input variables (both formulation and process variables) and one output variable (pellet aspect ratio). A tree regression algorithm consistent with the Quality by Design concept was applied to obtain a deeper understanding and knowledge of the formulation and process parameters affecting the final pellet sphericity. A clear, interpretable set of decision rules was generated. The spheronization speed, spheronization time, number of holes, and water content of the extrudate were recognized as the key factors influencing pellet aspect ratio. The most spherical pellets were achieved by using a large number of holes during extrusion, a high spheronizer speed, and a longer spheronization time. The described data mining approach enhances knowledge about the pelletization process and simultaneously facilitates the search for the optimal process conditions necessary to achieve ideally spherical pellets with good flow characteristics. This data mining approach can be taken into consideration by industrial formulation scientists to support rational decision making in the field of pellet technology.
Embedding SAS approach into conjugate gradient algorithms for asymmetric 3D elasticity problems
Chen, Hsin-Chu; Warsi, N.A.; Sameh, A.
1996-12-31
In this paper, we present two strategies to embed the SAS (symmetric-and-antisymmetric) scheme into conjugate gradient (CG) algorithms to make solving 3D elasticity problems, with or without global reflexive symmetry, more efficient. The SAS approach is physically a domain decomposition scheme that takes advantage of reflexive symmetry of discretized physical problems, and algebraically a matrix transformation method that exploits special reflexivity properties of the matrix resulting from discretization. In addition to offering large-grain parallelism, which is valuable in a multiprocessing environment, the SAS scheme also has the potential for reducing arithmetic operations in the numerical solution of a reasonably wide class of scientific and engineering problems. This approach can be applied directly to problems that have global reflexive symmetry, yielding smaller and independent subproblems to solve, or indirectly to problems with partial symmetry, resulting in loosely coupled subproblems. The decomposition is achieved by separating the reflexive subspace from the antireflexive one, possessed by a special class of matrices A ∈ C^(n×n) that satisfy the relation A = PAP, where P is a reflection matrix (symmetric signed permutation matrix).
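The decomposition described above can be sketched numerically. The fragment below is a minimal illustration (not the paper's CG embedding): it builds a small real matrix satisfying A = PAP with P the anti-identity reflection, changes basis to the symmetric/antisymmetric subspaces to expose the block-diagonal structure, and solves Ax = b as two independent half-size systems. The even dimension n = 6 is an assumption of the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
J = np.fliplr(np.eye(n))          # reflection matrix P (anti-identity)

# Build a reflexive matrix: A = P A P (equivalently, A commutes with P).
B = rng.standard_normal((n, n))
A = (B + J @ B @ J) / 2.0

# Orthonormal basis of the symmetric (Px = x) and antisymmetric (Px = -x)
# subspaces, built from the pairs (e_k, e_{n-1-k}).
h = n // 2
Q = np.zeros((n, n))
for k in range(h):
    Q[k, k] = Q[n - 1 - k, k] = 1 / np.sqrt(2)      # symmetric columns
    Q[k, h + k] = 1 / np.sqrt(2)                    # antisymmetric columns
    Q[n - 1 - k, h + k] = -1 / np.sqrt(2)

M = Q.T @ A @ Q                   # block diagonal: two independent h-by-h blocks
assert np.allclose(M[:h, h:], 0) and np.allclose(M[h:, :h], 0)

# Solve A x = b as two decoupled half-size systems (direct solves stand in
# for the CG iterations of the paper).
b = rng.standard_normal(n)
c = Q.T @ b
x = Q @ np.concatenate([np.linalg.solve(M[:h, :h], c[:h]),
                        np.linalg.solve(M[h:, h:], c[h:])])
```

The two diagonal blocks of M are exactly the "smaller and independent subproblems" the abstract refers to, and each could be handed to a separate processor.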
A Parallel Approach To Optimum Actuator Selection With a Genetic Algorithm
NASA Technical Reports Server (NTRS)
Rogers, James L.
2000-01-01
Recent discoveries in smart technologies have created a variety of aerodynamic actuators which have great potential to enable entirely new approaches to aerospace vehicle flight control. For a revolutionary concept such as a seamless aircraft with no moving control surfaces, there is a large set of candidate locations for placing actuators, resulting in a substantially larger number of combinations to examine in order to find an optimum placement satisfying the mission requirements. The placement of actuators on a wing determines the control effectiveness of the airplane. One approach to placement maximizes the moments about the pitch, roll, and yaw axes, while minimizing the coupling. Genetic algorithms have been instrumental in achieving good solutions to discrete optimization problems, such as the actuator placement problem. As a proof of concept, a genetic algorithm has been developed to find the minimum number of actuators required to provide uncoupled pitch, roll, and yaw control for a simplified, untapered, unswept wing model. To find the optimum placement by searching all possible combinations would require 1,100 hours. Formulating the problem as a multi-objective problem and modifying it to take advantage of the parallel processing capabilities of a multi-processor computer reduces the optimization time to 22 hours.
NASA Astrophysics Data System (ADS)
Ryzhikov, I. S.; Semenkin, E. S.; Akhmedova, Sh A.
2017-02-01
A novel order reduction method for linear time invariant systems is described. The method is based on reducing the initial problem to an optimization one, using the proposed model representation, and solving the problem with an efficient optimization algorithm. The proposed method of determining the model allows all the parameters of the model with lower order to be identified and by definition, provides the model with the required steady-state. As a powerful optimization tool, the meta-heuristic Co-Operation of Biology-Related Algorithms was used. Experimental results proved that the proposed approach outperforms other approaches and that the reduced order model achieves a high level of accuracy.
Musulen, Eva; Blanco, Ignacio; Carrato, Cristina; Fernandez-Figueras, Maria Teresa; Pineda, Marta; Capella, Gabriel; Ariza, Aurelio
2013-03-01
Lynch syndrome (LS), the most frequent form of hereditary colorectal cancer syndrome, is caused by germ-line mutations in the mismatch repair system genes. Recently, a new mechanism involving the epithelial cell adhesion molecule (EPCAM)/TACSTD1 gene has been shown to be responsible in cases with abnormal MSH2 expression. Of interest, 3' exon deletions of the EPCAM gene, which is located upstream of MSH2 in chromosome 2, are associated with MSH2 promoter hypermethylation. EPCAM protein, expressed in epithelial tissues, is encoded by the EPCAM/TACSTD1 gene. Our study's aim was to explore EPCAM expression in colorectal carcinomas of MSH2-associated LS cases to evaluate the usefulness of EPCAM protein expression in the algorithmic approach to LS population screening. We included a total of 19 MSH2-negative colorectal carcinomas from 14 different patients in whom we were able to perform a complete germ-line analysis. Nine patients showed a deleterious germ-line mutation that involved the MSH2 gene in 3 instances and the EPCAM gene exon 9 in 6 instances. All patients harboring the EPCAM mutation belonged to the same family. Of the 19 colorectal carcinomas, EPCAM expression loss was seen in only 5 tumors, all of them from patients showing a germ-line EPCAM deletion. Of interest, 6 tumors from 3 different patients carrying the same germ-line EPCAM deletion showed normal EPCAM expression. In conclusion, owing to the high specificity of EPCAM protein expression loss to identify LS patients carrying an EPCAM deletion, we recommend adding EPCAM immunohistochemistry to the LS diagnostic algorithm in MSH2-negative colorectal carcinoma.
NASA Astrophysics Data System (ADS)
Mojarab, Masoud; Kossobokov, Vladimir; Memarian, Hossein; Zare, Mehdi
2015-07-01
On 23rd October 2011, an M7.3 earthquake near the Turkish city of Van killed more than 600 people, injured over 4,000, and left about 60,000 homeless. It demolished hundreds of buildings and caused great damage to thousands of others in Van, Ercis, Muradiye, and Çaldıran. The earthquake's epicenter is located about 70 km from a preceding M7.3 earthquake that occurred in November 1976, destroyed several villages near the Turkey-Iran border, and killed thousands of people. This study, by means of retrospective application of the M8 algorithm, checks whether the 2011 Van earthquake could have been predicted. The algorithm is based on pattern recognition of Times of Increased Probability (TIP) of a target earthquake from the transient seismic sequence at lower magnitude ranges in a Circle of Investigation (CI). Specifically, we applied a modified M8 algorithm adjusted to a rather low level of earthquake detection in the region, following three different approaches to determine seismic transients. In the first approach, CI centers are distributed on intersections of morphostructural lineaments recognized as prone to magnitude 7+ earthquakes. In the second approach, centers of CIs are distributed on local extremes of the seismic density distribution, and in the third approach, CI centers were distributed uniformly on the nodes of a 1° × 1° grid. According to the results of the M8 algorithm application, the 2011 Van earthquake could have been predicted in any of the three approaches. We noted that it is possible to consider the intersection of TIPs instead of their union to improve the certainty of the prediction results. Our study confirms the applicability of a modified version of the M8 algorithm for predicting earthquakes at the Iranian-Turkish plateau, as well as for mitigation of damages in seismic events in which pattern recognition algorithms may play an important role.
NASA Technical Reports Server (NTRS)
Hoang, TY
1994-01-01
A real-time, high-rate precision navigation Kalman filter algorithm is developed and analyzed. This Navigation algorithm blends various navigation data collected during terminal area approach of an instrumented helicopter. Navigation data collected include helicopter position and velocity from a global positioning system in differential mode (DGPS) as well as helicopter velocity and attitude from an inertial navigation system (INS). The goal of the Navigation algorithm is to increase the DGPS accuracy while producing navigational data at the 64 Hertz INS update rate. It is important to note that while the data were post-flight processed, the Navigation algorithm was designed for real-time analysis. The design of the Navigation algorithm resulted in a nine-state Kalman filter. The Kalman filter's state matrix contains position, velocity, and velocity bias components. The filter updates positional readings with DGPS position, INS velocity, and velocity bias information. In addition, the filter incorporates a sporadic data rejection scheme. This relatively simple model met and exceeded the ten-meter absolute positional requirement. The Navigation algorithm results were compared with truth data derived from a laser tracker. The helicopter flight profile included terminal glideslope angles of 3, 6, and 9 degrees. Two flight segments extracted during each terminal approach were used to evaluate the Navigation algorithm. The first segment recorded a small dynamic maneuver in the lateral plane, while the second segment recorded motion in the vertical plane. The longitudinal, lateral, and vertical averaged positional accuracies for all three glideslope approaches are as follows (mean plus or minus two standard deviations in meters): longitudinal (-0.03 plus or minus 1.41), lateral (-1.29 plus or minus 2.36), and vertical (-0.76 plus or minus 2.05).
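The blending idea — predict at the 64 Hz INS rate with a velocity-bias state, correct at the slower DGPS rate with position fixes — can be sketched in one dimension. This is not the nine-state filter of the report; it is a two-state (position, velocity-bias) toy with invented noise covariances, where the biased INS velocity enters as a control input and a noise-free simulation stands in for flight data.

```python
import numpy as np

dt, rate = 1.0 / 64.0, 64                 # INS at 64 Hz, DGPS at 1 Hz
F = np.array([[1.0, -dt], [0.0, 1.0]])    # state: [position, velocity bias]
H = np.array([[1.0, 0.0]])                # DGPS observes position only
Q = np.diag([1e-4, 1e-6])                 # process noise (tuning assumption)
R = np.array([[0.04]])                    # assumed DGPS position variance

x = np.zeros(2)                           # initial estimate: p = 0, bias = 0
P = np.diag([10.0, 1.0])

p_true, v_true, bias_true = 0.0, 2.0, 0.5
for k in range(64 * 60):                  # 60 seconds of simulated approach
    p_true += v_true * dt
    v_ins = v_true + bias_true            # biased INS velocity (noise-free sketch)
    # Predict at the INS rate, using INS velocity as the control input.
    x = F @ x + np.array([v_ins * dt, 0.0])
    P = F @ P @ F.T + Q
    if (k + 1) % rate == 0:               # 1 Hz DGPS position update
        z = np.array([p_true])
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(2) - K @ H) @ P
```

After a minute of updates the filter has identified the INS velocity bias from the accumulated position drift, which is exactly why the bias states appear in the nine-state design.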
Improvements in the sensibility of MSA-GA tool using COFFEE objective function
NASA Astrophysics Data System (ADS)
Amorim, A. R.; Zafalon, G. F. D.; Neves, L. A.; Pinto, A. R.; Valêncio, C. R.; Machado, J. M.
2015-01-01
Sequence alignment is one of the most important tasks in Bioinformatics, playing an important role in sequence analysis. There are many strategies to perform sequence alignment, ranging from those that use deterministic algorithms, such as dynamic programming, to those that use heuristic algorithms, such as progressive alignment, Ant Colony Optimization (ACO), Genetic Algorithms (GA), and Simulated Annealing (SA), among others. In this work, we have implemented the COFFEE objective function in the MSA-GA tool, in substitution of the Weighted Sum-of-Pairs (WSP), to improve the final results. In the tests, we verified that the approach using the COFFEE function achieved better results in 81% of the lower-similarity alignments when compared with the WSP approach. Moreover, even in the tests with more similar sets, the approach using COFFEE was better in 43% of the cases.
Instrument design and optimization using genetic algorithms
Hoelzel, Robert; Bentley, Phillip M.; Fouquet, Peter
2006-10-15
This article describes the design of highly complex physical instruments by using a canonical genetic algorithm (GA). The procedure can be applied to all instrument designs where performance goals can be quantified. It is particularly suited to the optimization of instrument design where local optima in the performance figure of merit are prevalent. Here, a GA is used to evolve the design of the neutron spin-echo spectrometer WASP which is presently being constructed at the Institut Laue-Langevin, Grenoble, France. A comparison is made between this artificial intelligence approach and the traditional manual design methods. We demonstrate that the search of parameter space is more efficient when applying the genetic algorithm, and the GA produces a significantly better instrument design. Furthermore, it is found that the GA increases flexibility, by facilitating the reoptimization of the design after changes in boundary conditions during the design phase. The GA also allows the exploration of 'nonstandard' magnet coil geometries. We conclude that this technique constitutes a powerful complementary tool for the design and optimization of complex scientific apparatus, without replacing the careful thought processes employed in traditional design methods.
Brasier, Martin D.; Antcliffe, Jonathan; Saunders, Martin; Wacey, David
2015-01-01
New analytical approaches and discoveries are demanding fresh thinking about the early fossil record. The 1.88-Ga Gunflint chert provides an important benchmark for the analysis of early fossil preservation. High-resolution analysis of Gunflintia shows that microtaphonomy can help to resolve long-standing paleobiological questions. Novel 3D nanoscale reconstructions of the most ancient complex fossil Eosphaera reveal features hitherto unmatched in any crown-group microbe. While Eosphaera may preserve a symbiotic consortium, a stronger conclusion is that multicellular morphospace was differently occupied in the Paleoproterozoic. The 3.46-Ga Apex chert provides a test bed for claims of biogenicity of cell-like structures. Mapping plus focused ion beam milling combined with transmission electron microscopy data demonstrate that microfossil-like taxa, including species of Archaeoscillatoriopsis and Primaevifilum, are pseudofossils formed from vermiform phyllosilicate grains during hydrothermal alteration events. The 3.43-Ga Strelley Pool Formation shows that plausible early fossil candidates are turning up in unexpected environmental settings. Our data reveal how cellular clusters of unexpectedly large coccoids and tubular sheath-like envelopes were trapped between sand grains and entombed within coatings of dripstone beach-rock silica cement. These fossils come from Earth’s earliest known intertidal to supratidal shoreline deposit, accumulated under aerated but oxygen poor conditions. PMID:25901305
ERIC Educational Resources Information Center
Uno, Mariko
2016-01-01
This study investigates the emergence and development of the discourse-pragmatic functions of the Japanese subject markers "wa" and "ga" from a usage-based perspective (Tomasello, 2000). The use of each marker in longitudinal speech data for four Japanese children from 1;0 to 3;1 and their parents available in the CHILDES…
A multi-layer cellular automata approach for algorithmic generation of virtual case studies: VIBe.
Sitzenfrei, R; Fach, S; Kinzel, H; Rauch, W
2010-01-01
Analyses of case studies are used to evaluate new or existing technologies, measures or strategies with regard to their impact on the overall process. However, data availability is limited and hence, new technologies, measures or strategies can only be tested on a limited number of case studies. Owing to the specific boundary conditions and system properties of each single case study, results can hardly be generalized or transferred to other boundary conditions. Virtual Infrastructure Benchmarking (VIBe) is a software tool which algorithmically generates virtual case studies (VCSs) for urban water systems. System descriptions needed for evaluation are extracted from VIBe, whose parameters are based on real-world case studies and literature. As a result, VIBe writes input files for water simulation software such as EPANET and EPA SWMM. With such input files, numerous simulations can be performed and the results can be benchmarked and analysed stochastically at a city scale. In this work the approach of VIBe is applied with parameters according to a section of the Inn valley, and therewith 1,000 VCSs are generated and evaluated. A comparison of the VCSs with data from real-world case studies shows that the real-world case studies fit within the parameter ranges of the VCSs. Consequently, VIBe tackles the problem of limited availability of case study data.
NASA Astrophysics Data System (ADS)
Alexandre, E.; Cuadra, L.; Nieto-Borge, J. C.; Candil-García, G.; del Pino, M.; Salcedo-Sanz, S.
2015-08-01
Wave parameters computed from time series measured by buoys (significant wave height Hs, mean wave period, etc.) play a key role in coastal engineering and in the design and operation of wave energy converters. Storms or navigation accidents can make measuring buoys break down, leading to missing data gaps. In this paper we tackle the problem of locally reconstructing Hs at out-of-operation buoys by using wave parameters from nearby buoys, based on the spatial correlation among values at neighboring buoy locations. The novelty of our approach for its potential application to problems in coastal engineering is twofold. On one hand, we propose a genetic algorithm hybridized with an extreme learning machine that selects, among the available wave parameters from the nearby buoys, a subset FnSP with nSP parameters that minimizes the Hs reconstruction error. On the other hand, we evaluate to what extent the selected parameters in subset FnSP are good enough in assisting other machine learning (ML) regressors (extreme learning machines, support vector machines and Gaussian process regression) to reconstruct Hs. The results show that all the ML methods explored achieve a good Hs reconstruction in the two different locations studied (Caribbean Sea and West Atlantic).
Lee, Heui Chang; Song, Bongyong; Kim, Jin Sung; Jung, James J.; Li, H. Harold; Mutic, Sasa; Park, Justin C.
2016-01-01
The purpose of this study is to develop a fast, convergence-proven CBCT reconstruction framework based on compressed sensing theory, which not only lowers the imaging dose but also is computationally practicable in the busy clinic. We simplified the original mathematical formulation of gradient projection for sparse reconstruction (GPSR) to minimize the number of forward and backward projections for line search processes at each iteration. GPSR-based algorithms generally showed improved image quality over the FDK algorithm, especially when only a small number of projection data were available. When there were only 40 projections from a 360-degree fan beam geometry, the quality of the GPSR-based algorithms surpassed that of the FDK algorithm within 10 iterations in terms of the mean squared relative error. Our proposed GPSR algorithm converged as fast as the conventional GPSR with a reasonably low computational complexity. The outcomes demonstrate that the proposed GPSR algorithm is attractive for use in real-time applications such as on-line IGRT. PMID:27894103
NASA Astrophysics Data System (ADS)
Buiochi, F.; Kiyono, C. Y.; Peréz, N.; Adamowski, J. C.; Silva, E. C. N.
A new systematic and efficient algorithm to obtain the ten complex constants of piezoelectric materials belonging to the 6 mm symmetry class was developed. A finite element method routine was implemented in Matlab using eight-node axisymmetric elements. The algorithm computes the electrical conductance and resistance curves and calculates the quadratic difference between the experimental and numerical curves. Finally, to minimize the difference, an optimization algorithm based on the "Method of Moving Asymptotes" (MMA) is used. The algorithm is able to adjust the curves over a wide frequency range, obtaining the real and imaginary parts of the material properties simultaneously.
A new damping factor algorithm based on line search of the local minimum point for inverse approach
NASA Astrophysics Data System (ADS)
Zhang, Yaqi; Liu, Weijie; Lu, Fang; Zhang, Xiangkui; Hu, Ping
2013-05-01
The influence of the damping factor on the convergence and computational efficiency of the inverse approach was studied through a series of practical examples. A new selection algorithm for the damping (relaxation) factor, which takes into account both robustness and calculation efficiency, is proposed; the computer program is then implemented and tested on Siemens PLM NX | One-Step. The result is compared with the traditional Armijo rule through six examples, such as a U-beam, a square box, and a cylindrical cup, confirming the effectiveness of the proposed algorithm.
Bayesian Approach to Effective Model of NiGa2S4 Triangular Lattice with Boltzmann Factor
NASA Astrophysics Data System (ADS)
Takenaka, Hikaru; Nagata, Kenji; Mizokawa, Takashi; Okada, Masato
2016-12-01
We propose a method for introducing the Boltzmann factor to extract effective classical spin Hamiltonians from mean-field-type electronic structure calculations by means of Bayesian inference. This method enables us to compare electronic structure calculations with experiments according to the classical model at a finite temperature. Application of this method to the unrestricted Hartree-Fock calculations for NiGa2S4 led to the estimation that the superexchange interaction between the nearest neighbor sites is ferromagnetic at low temperature, which is consistent with magnetic experimental results. This supports the theory that competition between the antiferromagnetic third neighbor interaction and the ferromagnetic nearest neighbor interaction may lead to the quantum spin liquid in NiGa2S4.
Phase-unwrapping algorithm by a rounding-least-squares approach
NASA Astrophysics Data System (ADS)
Juarez-Salazar, Rigoberto; Robledo-Sanchez, Carlos; Guerrero-Sanchez, Fermin
2014-02-01
A simple and efficient phase-unwrapping algorithm based on a rounding procedure and a global least-squares minimization is proposed. Instead of processing the gradient of the wrapped phase, this algorithm operates over the gradient of the phase jumps by a robust and noniterative scheme. Thus, the residue-spreading and over-smoothing effects are reduced. The algorithm's performance is compared with four well-known phase-unwrapping methods: minimum cost network flow (MCNF), fast Fourier transform (FFT), quality-guided, and branch-cut. A computer simulation and experimental results show that the proposed algorithm reaches a higher accuracy level than the MCNF method, with a low computing time similar to that of the FFT phase-unwrapping method. Moreover, since the proposed algorithm is simple, fast, and free of user intervention, it could be used in metrological interferometric and fringe-projection automatic real-time applications.
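The rounding idea at the heart of this algorithm is easy to show in one dimension (the paper's method is 2D and adds a global least-squares step, which this sketch omits): the integer fringe-jump count between neighboring samples is recovered by rounding the wrapped phase difference to the nearest multiple of 2π, and integrating the jump-free gradient reconstructs the phase.

```python
import numpy as np

def unwrap_rounding(psi):
    """1-D unwrapping: round phase differences to integer multiples of 2*pi."""
    d = np.diff(psi)
    jumps = np.round(d / (2 * np.pi))      # integer fringe-jump counts
    grad = d - 2 * np.pi * jumps           # jump-free phase gradient
    return np.concatenate([[psi[0]], psi[0] + np.cumsum(grad)])

x = np.linspace(0, 1, 400)
phi = 40 * x ** 2                          # true continuous phase (quadratic ramp)
psi = np.angle(np.exp(1j * phi))           # wrapped into (-pi, pi]
rec = unwrap_rounding(psi)
```

The reconstruction is exact as long as the true phase changes by less than π per sample; in 2D, noise and residues break this path-independence, which is what the least-squares minimization in the paper addresses.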
A Study of Penalty Function Methods for Constraint Handling with Genetic Algorithm
NASA Technical Reports Server (NTRS)
Ortiz, Francisco
2004-01-01
COMETBOARDS (Comparative Evaluation Testbed of Optimization and Analysis Routines for Design of Structures) is a design optimization test bed that can evaluate the performance of several different optimization algorithms. A few of these optimization algorithms are the sequence of unconstrained minimization techniques (SUMT), sequential linear programming (SLP) and the sequential quadratic programming techniques (SQP). A genetic algorithm (GA) is a search technique that is based on the principles of natural selection or "survival of the fittest". Instead of using gradient information, the GA uses the objective function directly in the search. The GA searches the solution space by maintaining a population of potential solutions. Then, using evolving operations such as recombination, mutation and selection, the GA creates successive generations of solutions that will evolve and take on the positive characteristics of their parents and thus gradually approach optimal or near-optimal solutions. By using the objective function directly in the search, genetic algorithms can be effectively applied in non-convex, highly nonlinear, complex problems. The genetic algorithm is not guaranteed to find the global optimum, but it is less likely to get trapped at a local optimum than traditional gradient-based search methods when the objective function is not smooth and generally well behaved. The purpose of this research is to assist in the integration of the genetic algorithm (GA) into COMETBOARDS. COMETBOARDS casts the design of structures as a constrained nonlinear optimization problem. One method used to solve a constrained optimization problem with a GA is to convert the constrained optimization problem into an unconstrained optimization problem by developing a penalty function that penalizes infeasible solutions. Several penalty functions have been suggested in the literature, each with its own strengths and weaknesses. A statistical analysis of some suggested penalty functions is presented.
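The penalty-function conversion described above can be sketched on a toy problem. This is not the COMETBOARDS implementation: it is a minimal real-coded GA with tournament selection, blend crossover, Gaussian mutation, and elitism, minimizing x² + y² subject to x + y ≥ 1 (optimum at x = y = 0.5) via a static quadratic penalty; all operator settings are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def objective(p):                 # minimize x^2 + y^2
    return p[0] ** 2 + p[1] ** 2

def penalty(p, c=100.0):          # static quadratic penalty for x + y >= 1
    return c * max(0.0, 1.0 - (p[0] + p[1])) ** 2

def fitness(p):                   # penalized (unconstrained) objective
    return objective(p) + penalty(p)

pop = rng.uniform(-2, 2, size=(60, 2))
for gen in range(200):
    scores = np.array([fitness(p) for p in pop])
    elite = pop[np.argmin(scores)].copy()
    # Binary tournament selection
    i, j = rng.integers(0, len(pop), (2, len(pop)))
    parents = np.where((scores[i] < scores[j])[:, None], pop[i], pop[j])
    # Arithmetic (blend) crossover between consecutive parents
    a = rng.uniform(size=(len(pop), 1))
    children = a * parents + (1 - a) * np.roll(parents, 1, axis=0)
    # Gaussian mutation, then elitism: keep the best individual unchanged
    children += rng.normal(0, 0.05, children.shape)
    children[0] = elite
    pop = children

best = pop[np.argmin([fitness(p) for p in pop])]
```

With a static penalty the converged point sits slightly inside the infeasible region (the classic weakness the literature's adaptive and dynamic penalty schemes try to fix), which is exactly the trade-off a statistical comparison of penalty functions quantifies.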
Application of fuzzy GA for optimal vibration control of smart cylindrical shells
NASA Astrophysics Data System (ADS)
Jin, Zhanli; Yang, Yaowen; Kiong Soh, Chee
2005-12-01
In this paper, a fuzzy-controlled genetic-based optimization technique for optimal vibration control of cylindrical shell structures incorporating piezoelectric sensor/actuators (S/As) is proposed. The geometric design variables of the piezoelectric patches, including the placement and sizing of the piezoelectric S/As, are processed using fuzzy set theory. The criterion based on the maximization of energy dissipation is adopted for the geometric optimization. A fuzzy-rule-based system (FRBS) representing expert knowledge and experience is incorporated in a modified genetic algorithm (GA) to control its search process. A fuzzy logic integrated GA is then developed and implemented. The results of three numerical examples, which include a simply supported plate, a simply supported cylindrical shell, and a clamped simply supported plate, provide some meaningful and heuristic conclusions for practical design. The results also show that the proposed fuzzy-controlled GA approach is more effective and efficient than the pure GA method.
Optical flow optimization using parallel genetic algorithm
NASA Astrophysics Data System (ADS)
Zavala-Romero, Olmo; Botella, Guillermo; Meyer-Bäse, Anke; Meyer Base, Uwe
2011-06-01
A new approach to optimize the parameters of a gradient-based optical flow model using a parallel genetic algorithm (GA) is proposed. The main characteristics of the optical flow algorithm are its bio-inspiration and robustness against contrast, static patterns and noise, besides working consistently with several optical illusions where other algorithms fail. This model depends on many parameters, which determine the number of channels, the orientations required, and the length and shape of the kernel functions used in the convolution stage, among many others. The GA is used to find a set of parameters which improve the accuracy of the optical flow on inputs where the ground-truth data is available. This set of parameters helps to understand which of them are better suited for each type of input and can be used to estimate the parameters of the optical flow algorithm when used with videos that share similar characteristics. The proposed implementation takes into account the embarrassingly parallel nature of the GA and uses the OpenMP Application Programming Interface (API) to speed up the process of estimating an optimal set of parameters. The information obtained in this work can be used to dynamically reconfigure systems, with potential applications in robotics, medical imaging and tracking.
Ahirwal, M K; Kumar, Anil; Singh, G K
2013-01-01
This paper explores the combination of adaptive filtering with swarm intelligence/evolutionary techniques in the field of electroencephalogram/event-related potential noise cancellation and extraction. A new approach is proposed in the form of a controlled search space to stabilize the randomness of swarm intelligence techniques, especially for the EEG signal. Swarm-based algorithms such as Particle Swarm Optimization, Artificial Bee Colony, and the Cuckoo Optimization Algorithm, with their variants, are implemented to design an optimized adaptive noise canceler. The proposed controlled search space technique is tested on each of the swarm intelligence techniques and is found to be more accurate and powerful. Adaptive noise cancelers with traditional algorithms such as the least-mean-square, normalized least-mean-square, and recursive least-mean-square algorithms are also implemented to compare the results. ERP signals such as simulated visual evoked potential, real visual evoked potential, and real sensorimotor evoked potential are used, owing to their physiological importance in various EEG studies. The average computational time and shape measure of the evolutionary techniques are observed to be 8.21E-01 s and 1.73E-01, respectively. Although the traditional algorithms take negligible time, they are unable to offer good shape preservation of the ERP, with an average computational time and shape measure of 1.41E-02 s and 2.60E+00, respectively.
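The baseline the swarm methods are compared against — an LMS adaptive noise canceler — is compact enough to sketch. This is not the paper's code; it is the textbook two-input canceler on synthetic data: the primary channel carries a stand-in "evoked potential" plus noise filtered through an unknown path, the reference channel carries the raw noise, and the LMS error output converges to the cleaned signal. Signal shapes, tap count, and step size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, taps, mu = 5000, 8, 0.01

t = np.arange(n)
erp = np.sin(2 * np.pi * t / 200)                # stand-in "evoked potential"
noise = rng.standard_normal(n)                   # reference noise input
channel = np.array([0.6, -0.3, 0.1])             # unknown noise path
primary = erp + np.convolve(noise, channel)[:n]  # contaminated recording

w = np.zeros(taps)
out = np.zeros(n)
for k in range(taps, n):
    x = noise[k - taps + 1:k + 1][::-1]          # most recent samples first
    y = w @ x                                    # filter's noise estimate
    e = primary[k] - y                           # error = cleaned signal
    w += 2 * mu * e * x                          # LMS weight update
    out[k] = e
```

After convergence the weights approximate the unknown channel, so the residual `out` tracks the evoked potential far more closely than the raw primary channel does — the "shape preservation" measured in the study.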
1986-10-01
these theorems to find steady-state solutions of Markov chains are analysed. The results obtained in this way are then applied to quasi birth-death processes. Keywords: computations; algorithms; equilibrium equations.
2012-01-01
RA is a syndrome consisting of different pathogenetic subsets in which distinct molecular mechanisms may drive common final pathways. Recent work has provided proof of principle that biomarkers may be identified predictive of the response to targeted therapy. Based on new insights, an initial treatment algorithm is presented that may be used to guide treatment decisions in patients who have failed one TNF inhibitor. Key questions in this algorithm relate to the question whether the patient is a primary vs a secondary non-responder to TNF blockade and whether the patient is RF and/or anti-citrullinated peptide antibody positive. This preliminary algorithm may contribute to more cost-effective treatment of RA, and provides the basis for more extensive algorithms when additional data become available. PMID:21890615
A new approach to optic disc detection in human retinal images using the firefly algorithm.
Rahebi, Javad; Hardalaç, Fırat
2016-03-01
There are various methods and algorithms to detect the optic disc in retinal images. In recent years, much attention has been given to the utilization of intelligent algorithms. In this paper, we present a new automated method of optic disc detection in human retinal images using the firefly algorithm. The firefly algorithm is an emerging intelligent algorithm that was inspired by the social behavior of fireflies. The population in this algorithm comprises the fireflies, each of which has a specific rate of lighting, or fitness. In this method, the insects are compared two by two, and the less attractive insects move toward the more attractive insects. Finally, one of the insects is selected as the most attractive, and this insect presents the optimum response to the problem in question. Here, we used the light intensity of the retinal image pixels instead of firefly lightings. The movement of these insects due to local fluctuations produces different light intensity values in the images. Because the optic disc is the brightest area in the retinal images, all of the insects move toward the brightest area and thus specify the location of the optic disc in the image. The results of implementation show that the proposed algorithm could acquire an accuracy rate of 100% on the DRIVE dataset, 95% on the STARE dataset, and 94.38% on the DiaRetDB1 dataset. These results reveal the high capability and accuracy of the proposed algorithm in the detection of the optic disc from retinal images. The average time required to detect the optic disc is 2.13 s for the DRIVE dataset, 2.81 s for the STARE dataset, and 3.52 s for the DiaRetDB1 dataset.
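The search dynamics described above can be sketched on a synthetic image. This is not the authors' pipeline (no retinal preprocessing, and the image is a hypothetical Gaussian bright spot standing in for the optic disc); it is the canonical firefly update, where each firefly moves toward every brighter one with distance-attenuated attractiveness plus a decaying random walk. All parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
H = W = 100
yy, xx = np.mgrid[0:H, 0:W]
# Synthetic "retina": a bright Gaussian blob (the disc stand-in) at row 30, col 70.
disc = np.exp(-((yy - 30) ** 2 + (xx - 70) ** 2) / (2 * 8 ** 2))

def brightness(p):
    r, c = np.clip(np.round(p).astype(int), 0, H - 1)
    return disc[r, c]

n_flies, beta0, gamma, alpha = 30, 1.0, 0.001, 2.0
flies = rng.uniform(0, H - 1, size=(n_flies, 2))
for it in range(100):
    light = np.array([brightness(p) for p in flies])
    for i in range(n_flies):
        for j in range(n_flies):
            if light[j] > light[i]:            # move i toward brighter j
                r2 = np.sum((flies[j] - flies[i]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)
                flies[i] += beta * (flies[j] - flies[i]) \
                            + alpha * rng.uniform(-0.5, 0.5, 2)
        flies[i] = np.clip(flies[i], 0, H - 1)
    alpha *= 0.97                              # cool the random walk

best = flies[np.argmax([brightness(p) for p in flies])]
```

The brightest firefly ends up near the blob center, which is how the image's brightest region is localized without scanning every pixel.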
Symbolic integration of a class of algebraic functions. [by an algorithmic approach
NASA Technical Reports Server (NTRS)
Ng, E. W.
1974-01-01
An algorithm is presented for the symbolic integration of a class of algebraic functions. This class consists of functions made up of rational expressions of an integration variable x and square roots of polynomials, trigonometric and hyperbolic functions of x. The algorithm is shown to consist of the following components: (1) the reduction of input integrands to canonical form; (2) intermediate internal representations of integrals; (3) classification of outputs; and (4) reduction and simplification of outputs to well-known functions.
A Global Approach to the Optimal Trajectory Based on an Improved Ant Colony Algorithm for Cold Spray
NASA Astrophysics Data System (ADS)
Cai, Zhenhua; Chen, Tingyang; Zeng, Chunnian; Guo, Xueping; Lian, Huijuan; Zheng, You; Wei, Xiaoxu
2016-12-01
This paper is concerned with finding a global approach to obtain the shortest complete coverage trajectory on complex surfaces for cold spray applications. A slicing algorithm is employed to decompose the free-form complex surface into several small pieces of simple topological type. The problem of finding the optimal arrangement of the pieces is translated into a generalized traveling salesman problem (GTSP). Owing to its high searching capability and convergence performance, an improved ant colony algorithm is then used to solve the GTSP. Through off-line simulation, a robot trajectory is generated based on the optimized result. The approach is applied to coat real components with a complex surface by using the cold spray system with copper as the spraying material.
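The sequencing step above — ordering the sliced surface pieces by solving a traveling-salesman-type problem with an ant colony — can be sketched in miniature. This is not the improved GTSP solver of the paper: it is a basic Ant System on hypothetical 2D piece centroids, with pheromone-and-visibility transition probabilities, evaporation, and tour-length-weighted deposits; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical centroids of the surface pieces produced by the slicing step.
pts = rng.uniform(0, 100, size=(12, 2))
n = len(pts)
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=2) + np.eye(n)

tau = np.ones((n, n))                      # pheromone trails
alpha, beta, rho, Qc = 1.0, 3.0, 0.1, 100.0
best_tour, best_len = None, np.inf

for it in range(200):
    tours = []
    for ant in range(10):
        tour = [int(rng.integers(n))]
        unvisited = set(range(n)) - {tour[0]}
        while unvisited:
            i = tour[-1]
            cand = np.array(sorted(unvisited))
            # Transition weights: pheromone^alpha * (1/distance)^beta
            w = tau[i, cand] ** alpha * (1.0 / dist[i, cand]) ** beta
            tour.append(int(rng.choice(cand, p=w / w.sum())))
            unvisited.remove(tour[-1])
        length = sum(dist[tour[k], tour[(k + 1) % n]] for k in range(n))
        tours.append((length, tour))
        if length < best_len:
            best_len, best_tour = length, tour
    tau *= (1 - rho)                       # evaporation
    for length, tour in tours:             # deposit pheromone on used edges
        for k in range(n):
            a, b = tour[k], tour[(k + 1) % n]
            tau[a, b] += Qc / length
            tau[b, a] += Qc / length
```

In the cold spray setting each "city" is really a group of admissible entry points into a piece, which is what turns the plain TSP of this sketch into the generalized TSP the paper solves.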
NASA Astrophysics Data System (ADS)
Dalzell, B. J.; Gassman, P. W.; Kling, C.
2015-12-01
In the Minnesota River Basin, sediments originating from failing stream banks and bluffs account for the majority of the riverine load and contribute to water quality impairments in the Minnesota River as well as portions of the Mississippi River upstream of Lake Pepin. One approach for mitigating this problem may be targeted wetland restoration in Minnesota River Basin tributaries in order to reduce the magnitude and duration of peak flow events which contribute to bluff and stream bank failures. In order to determine effective arrangements and properties of wetlands to achieve peak flow reduction, we are employing a genetic algorithm approach coupled with a SWAT model of the Cottonwood River, a tributary of the Minnesota River. The genetic algorithm approach will evaluate combinations of basic wetland features as represented by SWAT: surface area, volume, contributing area, and hydraulic conductivity of the wetland bottom. These wetland parameters will be weighed against economic considerations associated with land use trade-offs in this agriculturally productive landscape. Preliminary results show that the SWAT model is capable of simulating daily hydrology very well and genetic algorithm evaluation of wetland scenarios is ongoing. Anticipated results will include (1) combinations of wetland parameters that are most effective for reducing peak flows, and (2) evaluation of economic trade-offs between wetland restoration, water quality, and agricultural productivity in the Cottonwood River watershed.
Schirle, M; Weinschenk, T; Stevanović, S
2001-11-01
The identification of T cell epitopes from immunologically relevant antigens remains a critical step in the development of vaccines and methods for monitoring of T cell responses. This review presents an overview of strategies that employ computer algorithms for the selection of candidate peptides from defined proteins and subsequent verification of their in vivo relevance by experimental approaches. Several computer algorithms are currently being used for epitope prediction of various major histocompatibility complex (MHC) class I and II molecules, based either on the analysis of natural MHC ligands or on the binding properties of synthetic peptides. Moreover, the analysis of proteasomal digests of peptides and whole proteins has led to the development of algorithms for the prediction of proteasomal cleavages. In order to verify the generation of the predicted peptides during antigen processing in vivo as well as their immunogenic potential, several experimental approaches have been pursued in the recent past. Mass spectrometry-based bioanalytical approaches have been used specifically to detect predicted peptides among isolated natural ligands. Other strategies employ various methods for the stimulation of primary T cell responses against the predicted peptides and subsequent testing of the recognition pattern towards target cells that express the antigen.
NASA Technical Reports Server (NTRS)
Idris, Husni; Vivona, Robert A.; Al-Wakil, Tarek
2009-01-01
This document describes exploratory research on a distributed, trajectory oriented approach for traffic complexity management. The approach is to manage traffic complexity based on preserving trajectory flexibility and minimizing constraints. In particular, the document presents metrics for trajectory flexibility; a method for estimating these metrics based on discrete time and degree of freedom assumptions; a planning algorithm using these metrics to preserve flexibility; and preliminary experiments testing the impact of preserving trajectory flexibility on traffic complexity. The document also describes an early demonstration capability of the trajectory flexibility preservation function in the NASA Autonomous Operations Planner (AOP) platform.
Farzinfar, Mahshid; Teoh, Eam Khwang; Xue, Zhong
2011-11-01
This study proposes an expectation-maximization (EM)-based curve evolution algorithm for the segmentation of magnetic resonance brain images. In the proposed algorithm, the evolving curve is constrained not only by a shape-based statistical model but also by a hidden variable model derived from the image observations. The hidden variable model is defined by the local voxel labeling, which is unknown and is estimated from the expected likelihood function derived from the image data and prior anatomical knowledge. In the E-step, the expected observation likelihood and the prior distribution of the hidden variables are estimated; in the M-step, the shapes of the structures are estimated jointly by encoding the hidden variable model and the statistical prior model obtained from the training stage. In experiments, the proposed automatic segmentation algorithm is applied to multiple gray nuclei structures, such as the caudate, putamen, and thalamus, in three-dimensional magnetic resonance images of volunteers and patients. In terms of robustness and accuracy, the proposed EM joint shape-based algorithm outperformed both the statistical shape model-based techniques in the same framework and a current state-of-the-art region competition level set method.
NASA Technical Reports Server (NTRS)
Hu, Chuanmin; Lee, Zhongping; Franz, Bryan
2011-01-01
A new empirical algorithm is proposed to estimate surface chlorophyll-a concentrations (Chl) in the global ocean for Chl less than or equal to 0.25 milligrams per cubic meter (approximately 77% of the global ocean area). The algorithm is based on a color index (CI), defined as the difference between the remote sensing reflectance (Rrs, sr^-1) in the green band and a reference formed linearly between Rrs in the blue and red bands. For low-Chl waters, in situ data showed a tighter (and therefore better) relationship between CI and Chl than between traditional band ratios and Chl, which was further validated using global data collected concurrently by ship-borne and SeaWiFS satellite instruments. Model simulations showed that for low-Chl waters, compared with the band-ratio algorithm, the CI-based algorithm (CIA) was more tolerant to changes in the chlorophyll-specific backscattering coefficient and performed similarly for different relative contributions of non-phytoplankton absorption. Simulations using existing atmospheric correction approaches further demonstrated that the CIA was much less sensitive than band-ratio algorithms to various errors induced by instrument noise and imperfect atmospheric correction (including sun glint and whitecap corrections). Image and time-series analyses of SeaWiFS and MODIS/Aqua data also showed improved performance in terms of reduced image noise, more coherent spatial and temporal patterns, and consistency between the two sensors. The reduction in noise and other errors is particularly useful for improving the detection of ocean features such as eddies. Preliminary tests on MERIS and CZCS data indicate that the new approach should be generally applicable to all existing and future ocean color instruments.
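The color index can be written down directly from its definition: the green-band reflectance minus a baseline interpolated linearly (in wavelength) between the blue and red bands. The band centres below are illustrative SeaWiFS-like values, and the function sketches the definition only; the empirical CI-to-Chl conversion coefficients are not reproduced here.

```python
def color_index(rrs_blue, rrs_green, rrs_red,
                lam_blue=443.0, lam_green=555.0, lam_red=670.0):
    """Color index (CI): departure of the green-band remote sensing
    reflectance from a linear baseline drawn between the blue and red
    bands. Band centres are illustrative assumptions; rrs_* are remote
    sensing reflectances (sr^-1)."""
    frac = (lam_green - lam_blue) / (lam_red - lam_blue)
    baseline = rrs_blue + frac * (rrs_red - rrs_blue)
    return rrs_green - baseline
```

A spectrally flat input gives CI = 0; a green peak above the blue-red baseline gives CI > 0, which the empirical relation then maps to Chl.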
Ashby, C.I.H.; Sullivan, J.P.; Newcomer, P.P.
1996-12-31
Three important oxidation regimes have been identified in the temporal evolution of the wet thermal oxidation of Al{sub x}Ga{sub 1-x}As (1 {ge} x {ge} 0.90) on GaAs: (1) oxidation of Al and Ga in the Al{sub x}Ga{sub 1-x}As alloy to form an amorphous oxide layer, (2) oxidative formation and elimination of elemental As (both crystalline and amorphous) and of amorphous As{sub 2}O{sub 3}, and (3) crystallization of the oxide film. Residual As can result in up to a 100-fold increase in leakage current and a 30% increase in the dielectric constant and produce strong Fermi-level pinning and high leakage currents at the oxidized Al{sub x}Ga{sub 1-x}As/GaAs interface. The presence of thermodynamically-favored interfacial As may impose a fundamental limitation on the application of AlGaAs wet oxidation for achieving MIS devices in the GaAs material system.
NASA Astrophysics Data System (ADS)
Rastogi, Richa; Londhe, Ashutosh; Srivastava, Abhishek; Sirasala, Kirannmayi M.; Khonde, Kiran
2017-03-01
In this article, a new scalable 3D Kirchhoff depth migration algorithm is presented for a state-of-the-art multicore CPU-based cluster. Parallelizing 3D Kirchhoff depth migration is challenging because of its high demands on compute time, memory, storage, and I/O, along with the need to manage these resources effectively. The most resource-intensive modules of the algorithm are traveltime calculation and migration summation, which exhibit an inherent trade-off between compute time and the other resources. The parallelization strategy largely depends on how the calculated traveltimes are stored and fed to the migration process. The presented work extends our previous work, in which a 3D Kirchhoff depth migration application for multicore CPU-based parallel systems was developed. Recently, we have improved the parallel performance of this application by redesigning the parallelization approach. The new algorithm can efficiently migrate both prestack and poststack 3D data. It offers the flexibility to migrate a large number of traces within the available node memory and with minimal storage, I/O, and inter-node communication. The resulting application is tested using 3D Overthrust data on PARAM Yuva II, a Xeon E5-2670 multicore CPU cluster with 16 cores/node and 64 GB shared memory. Parallel performance is studied through different numerical experiments, and the scalability results show a striking improvement over the previous version: a speedup of 49.05X with 76.64% efficiency for 3D prestack data and 32.00X with 50.00% efficiency for 3D poststack data, using 64 nodes. The results also demonstrate the effectiveness and robustness of the improved algorithm, with high scalability and efficiency on a multicore CPU cluster.
Scheduling Earth Observing Fleets Using Evolutionary Algorithms: Problem Description and Approach
NASA Technical Reports Server (NTRS)
Globus, Al; Crawford, James; Lohn, Jason; Morris, Robert; Clancy, Daniel (Technical Monitor)
2002-01-01
We describe work in progress concerning multi-instrument, multi-satellite scheduling. Most, although not all, Earth observing instruments currently in orbit are unique. In the relatively near future, however, we expect to see fleets of Earth observing spacecraft, many carrying nearly identical instruments. This presents a substantially new scheduling challenge. Inspired by successful commercial applications of evolutionary algorithms in scheduling domains, this paper presents work in progress regarding the use of evolutionary algorithms to solve a set of Earth observing related model problems. Both the model problems and the software are described. Since the larger problems will require substantial computation and evolutionary algorithms are embarrassingly parallel, we discuss our parallelization techniques using dedicated and cycle-scavenged workstations.
A Knowledge-based Evolution Algorithm approach to political districting problem
NASA Astrophysics Data System (ADS)
Chou, Chung-I.
2011-01-01
The political districting problem is to study how to partition a comparatively large zone into many smaller electoral districts. In our previous work, we mapped this political problem onto a q-state Potts model using statistical physics methods. The political constraints (such as contiguity and population equality) are transformed into an energy function with interactions between sites or external fields acting on the system. Several optimization algorithms, such as simulated annealing and genetic algorithms, have been applied to this problem. In this report, we show how the Knowledge-based Evolution Algorithm (KEA) can be applied to it. Our test objects include two real cities (Taipei and Kaohsiung) and simulated cities. The results show that, in each test case, the KEA reaches the same minimum found by the other methods.
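A toy version of the Potts-model energy may help fix ideas. The weights and the exact form of the terms are assumptions for illustration, not the paper's Hamiltonian: district labels play the role of Potts states, population equality enters as a quadratic penalty, and contiguity/compactness enters through couplings between neighbouring zones.

```python
import numpy as np

def districting_energy(labels, population, adjacency, q,
                       w_pop=1.0, w_bound=0.1):
    """Toy Potts-style districting energy (illustrative sketch).

    labels[i]  : district (Potts state) of zone i, in 0..q-1
    population : population of each zone
    adjacency  : list of (i, j) neighbour pairs
    Population equality -> squared deviation of each district's
    population from the mean; compactness -> count of neighbour pairs
    cut by a district boundary."""
    labels = np.asarray(labels)
    population = np.asarray(population, dtype=float)
    district_pop = np.bincount(labels, weights=population, minlength=q)
    target = population.sum() / q
    e_pop = np.sum((district_pop - target) ** 2)
    e_bound = sum(1 for i, j in adjacency if labels[i] != labels[j])
    return w_pop * e_pop + w_bound * e_bound
```

Simulated annealing or an evolutionary algorithm would then search label assignments that minimize this energy.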
Gokhale, Nikhil S
2016-01-01
Vernal keratoconjunctivitis is an ocular allergy that is common in the pediatric age group. It is often chronic, severe, and nonresponsive to the available treatment options. Management of these children is difficult and often a dilemma for the practitioner. There is a need to simplify and standardize its management. To achieve this goal, we require a grading system to judge the severity of inflammation and an algorithm to select the appropriate medications. This article provides a simple and practically useful grading system and a stepladder algorithm for systematic treatment of these patients. Use of appropriate treatment modalities can reduce treatment and disease-related complications. PMID:27050351
Cao, H Q; Kang, L S; Guo, T; Chen, Y P; de Garis, H
2000-01-01
This paper presents a new algorithm for modeling one-dimensional (1-D) dynamic systems with higher-order ordinary differential equation (HODE) models instead of the ARMA models used in traditional time series analysis. A two-level hybrid evolutionary modeling algorithm (THEMA) is used to approach the problem of fitting HODEs to dynamic systems. The main idea is to embed a genetic algorithm (GA) into genetic programming (GP): GP optimizes the structure of a model (the upper level), while the GA optimizes the parameters of the model (the lower level). In the GA, we use a novel crossover operator based on a nonconvex linear combination of multiple parents, which works efficiently and quickly in parameter optimization tasks. Two practical time series examples are used to demonstrate THEMA's effectiveness and advantages.
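The multi-parent crossover can be sketched as an affine combination whose coefficients sum to one but are allowed outside [0, 1] (hence "nonconvex"). The sampling scheme below is an assumption; the paper's exact operator may differ.

```python
import numpy as np

def multiparent_crossover(parents, rng, spread=0.5):
    """Nonconvex linear combination of several real-valued parents
    (illustrative sketch of the crossover idea, not the paper's code).

    Coefficients sum to 1 but may fall outside [0, 1], so the child can
    lie outside the convex hull of its parents -- useful for reaching
    regions the current population only brackets."""
    m = len(parents)
    a = rng.uniform(-spread, 1 + spread, size=m)
    while abs(a.sum()) < 1e-6:        # avoid dividing by a near-zero sum
        a = rng.uniform(-spread, 1 + spread, size=m)
    a = a / a.sum()                   # enforce sum(a) == 1 (affine combination)
    return np.sum(a[:, None] * np.asarray(parents, dtype=float), axis=0)
```

Because the combination is affine, identical parents always reproduce themselves, while distinct parents can yield extrapolated children.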
Lee, Ming-Lun; Yeh, Yu-Hsiang; Tu, Shang-Ju; Chen, P C; Lai, Wei-Chih; Sheu, Jinn-Kong
2015-04-06
Non-planar InGaN/GaN multiple quantum well (MQW) structures are grown on a GaN template with truncated hexagonal pyramids (THPs) featuring c-plane and r-plane surfaces. The THP array is formed by the regrowth of the GaN layer on a selective-area Si-implanted GaN template. Transmission electron microscopy shows that the InGaN/GaN epitaxial layers regrown on the THPs exhibit different growth rates and indium compositions of the InGaN layer between the c-plane and r-plane surfaces. Consequently, InGaN/GaN MQW light-emitting diodes grown on the GaN THP array emit multiple wavelengths approaching near white light.
Adaptively resizing populations: Algorithm, analysis, and first results
NASA Technical Reports Server (NTRS)
Smith, Robert E.; Smuda, Ellen
1993-01-01
Deciding on an appropriate population size for a given Genetic Algorithm (GA) application can often be critical to the algorithm's success. Too small, and the GA can fall victim to sampling error, affecting the efficacy of its search. Too large, and the GA wastes computational resources. Although advice exists for sizing GA populations, much of it involves theoretical considerations that are not accessible to the novice user. An algorithm for adaptively resizing GA populations is suggested, based on recent theoretical developments that relate population size to schema fitness variance. The suggested algorithm is developed theoretically, simulated with expected-value equations, and then tested on a problem where population sizing can mislead the GA. The work presented suggests that the population sizing algorithm may be a viable way to eliminate the population sizing decision from the application of GAs.
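As a back-of-the-envelope illustration of the variance-based idea (not the paper's adaptive algorithm), one can size the population so that the sampling error of an estimated schema fitness stays small relative to the fitness signal between competing schemata:

```python
import math

def population_size(fitness_variance, signal, z=1.645):
    """Rough variance-based GA population sizing (illustrative sketch).

    Chooses n so that the standard error of a mean fitness estimate,
    sqrt(variance / n), is small relative to the fitness difference
    `signal` between competing schemata, at confidence factor z
    (z = 1.645 ~ one-sided 95%). From n >= (z * sigma / signal)^2,
    with a floor of 2 individuals."""
    n = (z * math.sqrt(fitness_variance) / signal) ** 2
    return max(2, math.ceil(n))
```

Higher fitness variance or a smaller distinguishing signal demands a larger population, which is exactly the trade-off the abstract describes.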
Premaladha, J; Ravichandran, K S
2016-04-01
Dermoscopy is a technique used to capture images of the skin, which are useful for analyzing different types of skin disease. Malignant melanoma is a kind of skin cancer whose severity can even lead to death. Earlier detection of melanoma prevents death, allowing clinicians to treat patients and increase the chances of survival. Only a few machine learning algorithms have been developed to detect melanoma from its features. This paper proposes a Computer Aided Diagnosis (CAD) system equipped with efficient algorithms to classify and predict melanoma. Image enhancement is performed using the Contrast Limited Adaptive Histogram Equalization (CLAHE) technique and a median filter. A new segmentation algorithm, Normalized Otsu's Segmentation (NOS), is implemented to separate the affected skin lesion from normal skin; it overcomes the problem of variable illumination. Fifteen features extracted from the segmented images are fed into the proposed classifiers: deep-learning-based neural networks and a hybrid AdaBoost-Support Vector Machine (SVM) algorithm. The proposed system is tested and validated with nearly 992 images (malignant and benign lesions) and achieves a high classification accuracy of 93%. The proposed CAD system can assist dermatologists in confirming the diagnosis and avoiding excisional biopsies.
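The NOS step builds on the classic Otsu threshold, which the sketch below implements; the normalisation that handles variable illumination is the paper's addition and is not reproduced here.

```python
import numpy as np

def otsu_threshold(gray):
    """Classic Otsu threshold for an 8-bit grayscale array (the NOS
    method in the abstract adds an illumination-normalisation step
    before this; only the standard Otsu part is sketched).

    Returns the gray level that maximises the between-class variance of
    the background/foreground split."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # class-0 probability
    mu = np.cumsum(p * np.arange(256))      # class-0 mean times omega
    mu_t = mu[-1]                           # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0      # mask empty classes
    return int(np.argmax(sigma_b))
```

Pixels at or below the returned level form one class (e.g. background skin), those above it the other (the lesion), after which features can be extracted from the segmented region.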
Point process algorithm: a new Bayesian approach for TPF-I planet signal extraction
NASA Technical Reports Server (NTRS)
Velusamy, T.; Marsh, K. A.; Ware, B.
2005-01-01
TPF-I capability for planetary signal extraction, including both detection and spectral characterization, can be optimized by taking proper account of instrumental characteristics and astrophysical prior information. We have developed the Point Process Algorithm, a Bayesian technique for extracting planetary signals from the sine/cosine chopped outputs of a dual nulling interferometer.
Wang, Shuaiqun; Aorigele; Kong, Wei; Zeng, Weiming; Hong, Xiaomin
2016-01-01
Gene expression data composed of thousands of genes play an important role in classification platforms and disease diagnosis. Hence, it is vital to select a small subset of salient features from a large body of gene expression data. Recently, many researchers have devoted themselves to feature selection using diverse computational intelligence methods. However, when selecting informative genes, many computational methods struggle to find small subsets for cancer classification because of the huge number of genes (high dimension) relative to the small number of samples, as well as noisy and irrelevant genes. In this paper, we propose a new hybrid algorithm, HICATS, incorporating the imperialist competition algorithm (ICA), which performs a global search, and tabu search (TS), which conducts a fine-tuned local search. To verify the performance of the proposed algorithm, we tested it on 10 well-known benchmark gene expression classification datasets with dimensions varying from 2308 to 12600. The performance of our proposed method proved superior to that of related work, including the conventional version of the binary optimization algorithm, in terms of classification accuracy and the number of selected genes.
An ab initio approach on superconducting properties of Mo3X(X = Si,Ga,Ge) compounds
NASA Astrophysics Data System (ADS)
Subhashree, G.; Sankar, S.; Krithiga, R.
2015-06-01
Self-consistent first principles calculations on type II weakly coupled superconducting Mo3X(X = Si,Ga,Ge) compounds of the A15 phase are performed to understand their fundamental electronic, thermal, and superconducting properties. The bulk modulus (B), Debye temperature (θD), density of states (DOS) at the Fermi level (N(EF)), electron-phonon coupling constant (λ), superconducting transition temperature (Tc), and electronic specific heat coefficient (γ) have been computed from the electronic structure results, obtained using the tight-binding linear muffin-tin orbital method. It is observed that the electronic properties of all three materials are dominated by d-orbitals at the Fermi energy. The thermal and superconducting properties calculated here corroborate well with experimental results from the literature.
Mahmoudi, Zeinab; Johansen, Mette Dencker; Christiansen, Jens Sandahl; Hejlesen, Ole
2014-07-01
The purpose of this study was to investigate the effect of using a 1-point calibration approach instead of a 2-point calibration approach on the accuracy of a continuous glucose monitoring (CGM) algorithm. A previously published real-time CGM algorithm was compared with its updated version, which used a 1-point rather than a 2-point calibration. In addition, the contribution of the corrective intercept (CI) to the calibration performance was assessed. Finally, the sensor background current was estimated both in real time and retrospectively. The study was performed on 132 type 1 diabetes patients. Replacing the 2-point calibration with the 1-point calibration improved CGM accuracy, with the greatest improvement achieved in hypoglycemia (18.4% median absolute relative difference [MARD] in hypoglycemia for the 2-point calibration versus 12.1% for the 1-point calibration). Using the 1-point calibration increased the percentage of sensor readings in zone A+B of the Clarke error grid analysis (EGA) over the full glycemic range and also enhanced hypoglycemia sensitivity. Excluding the CI from calibration reduced hypoglycemia accuracy, while slightly increasing euglycemia accuracy. Both real-time and retrospective estimation of the sensor background current suggest that the background current can be considered zero in the calibration of the SCGM1 sensor. Overall, sensor readings calibrated with the 1-point approach showed higher accuracy than those calibrated with the 2-point approach.
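Under the usual linear sensor model (current = sensitivity × glucose + background), a 1-point scheme can be sketched as follows. The model and the zero default background are assumptions consistent with the abstract's conclusion about the SCGM1 background current; the published algorithm's corrective intercept is not modeled here.

```python
def one_point_calibration(i_cal, g_cal, background=0.0):
    """1-point CGM calibration sketch (illustrative; assumes the linear
    sensor model current = sensitivity * glucose + background).

    A single paired measurement (raw current i_cal at reference glucose
    g_cal) fixes the sensitivity once the background current is assumed
    known -- here zero by default. Returns a function mapping raw sensor
    current to estimated glucose."""
    sensitivity = (i_cal - background) / g_cal
    def to_glucose(current):
        return (current - background) / sensitivity
    return to_glucose
```

A 2-point scheme would instead solve for both sensitivity and background from two paired measurements; the study's point is that with a negligible background, one pairing suffices.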
Genetic Algorithm based Decentralized PI Type Controller: Load Frequency Control
NASA Astrophysics Data System (ADS)
Dwivedi, Atul; Ray, Goshaidas; Sharma, Arun Kumar
2016-12-01
This work presents the design of a decentralized PI-type Linear Quadratic (LQ) controller based on a genetic algorithm (GA). The proposed design technique allows considerable flexibility in defining the control objectives; it does not require knowledge of the system matrices and, moreover, avoids solving the algebraic Riccati equation. To illustrate the results of this work, a load-frequency control problem is considered. Simulation results reveal that the proposed GA-based scheme is an attractive alternative approach to the load-frequency control problem from both performance and design points of view.
NASA Astrophysics Data System (ADS)
Abed, Azher M.; Abed, Issa Ahmed; Majdi, Hasan Sh.; Al-Shamani, Ali Najah; Sopian, K.
2016-12-01
This study proposes a new procedure for the optimal design of shell-and-tube heat exchangers. The electromagnetism-like algorithm is applied to reduce heat exchanger capital cost and to design a compact, high-performance heat exchanger that makes effective use of the allowable pressure drop (the cost of the pump). An optimization algorithm then determines the optimal values of both the geometric design parameters and the maximum allowable pressure drop by minimizing a total cost function. A computer code is developed for the optimal design of shell-and-tube heat exchangers. Different test cases are solved to demonstrate the effectiveness and capability of the proposed algorithm, and the results are compared with those obtained by other approaches in the literature. The comparisons indicate that the proposed design procedure can be successfully applied to the optimal design of shell-and-tube heat exchangers. In particular, in the examined cases, reductions in total cost of up to 30%, 29%, and 56.15% compared with the original designs, and up to 18%, 5.5%, and 7.4% compared with other approaches, are observed for case studies 1, 2, and 3, respectively. The economic optimization offered by the proposed procedure is especially relevant when a compact, high-performance unit of moderate volume and cost is required.
Actuator Placement Via Genetic Algorithm for Aircraft Morphing
NASA Technical Reports Server (NTRS)
Crossley, William A.; Cook, Andrea M.
2001-01-01
This research continued work that began under the support of NASA Grant NAG1-2119. The focus of this effort was to continue investigating Genetic Algorithm (GA) approaches that could solve an actuator placement problem by treating it as a discrete optimization problem. In these efforts, the actuators are assumed to be "smart" devices that change the aerodynamic shape of an aircraft wing to alter the flow past the wing and, as a result, provide aerodynamic moments that could provide flight control. The earlier work investigated issues with the problem statement, developed the appropriate actuator modeling, recognized the importance of symmetry for this problem, modified the aerodynamic analysis routine for more efficient use with the genetic algorithm, and began a problem-size study to measure the impact of increasing problem complexity. The research discussed in this final summary further refined the problem statement to provide a "combined moment" formulation that simultaneously addresses roll, pitch, and yaw. Investigations of problem size using this new formulation provided insight into the performance of the GA as the number of possible actuator locations increased. Where previous investigations utilized a simple wing model to develop the GA approach for actuator placement, this research culminated in applying the GA approach to a high-altitude unmanned aerial vehicle concept, demonstrating that the approach is valid for an aircraft configuration.
Raman Spectroscopy on GaAs/GaP Nanowire Axial Heterostructures
NASA Astrophysics Data System (ADS)
Wang, Yuda; Montazari, Mohammad; Smith, Leigh; Jackson, Howard; Yarrison-Rice, Jan; Gao, Qiang; Kang, Jung-Hyun; Jagadish, Chennupati
2013-03-01
We use Raman scattering to study the spatially-resolved strain and stress in Zinc Blende GaAs/GaP axial heterostructure nanowires at room temperature. The nanowires are grown by Metal-Organic Chemical Vapor Deposition in the [111] direction with Au nanoparticles as catalysts. After initial growth of a 6 μm-long GaP wire, a short GaAs segment is grown. Since Raman scattering reflects the phonon energies, which are in turn related to the stress, we control the polarization of the incident and scattered light to acquire and resolve the TO1 (Transverse Optical) and TO2 phonon modes of both GaAs and GaP. High-spatial-resolution Raman scans along the nanowires show that the GaAs/GaP interface is clearly identifiable. Within the GaP section of the wire, the GaP TO modes are observed at lower energies than in bulk GaP since the material is under tension, while the GaAs shell TO modes are at higher energies than in bulk GaAs since it is under compression. A strain gradient exists across the interface, so the GaP phonon energies shift lower and the GaAs phonon energies shift higher as one approaches the interface. We acknowledge the NSF through DMR-1105362, 1105121 and ECCS-1100489, and the ARC.
Modelling Aṣṭādhyāyī: An Approach Based on the Methodology of Ancillary Disciplines (Vedāṅga)
NASA Astrophysics Data System (ADS)
Mishra, Anand
This article proposes a general model based on the common methodological approach of the ancillary disciplines (Vedāṅga) associated with the Vedas, taking examples from Śikṣā, Chandas, Vyākaraṇa and Prātiśākhya texts. It develops and elaborates this model further to represent the contents and processes of Aṣṭādhyāyī. Certain key features are added to my earlier modelling of the Pāṇinian system of Sanskrit grammar. These include broader coverage of the Pāṇinian meta-language, a mechanism for the automatic application of rules, and the positioning of the grammatical system within the procedural complexes of the ancillary disciplines.
Alshamlan, Hala M; Badr, Ghada H; Alohali, Yousef A
2015-06-01
Naturally inspired evolutionary algorithms have proven effective for solving feature selection and classification problems. Artificial Bee Colony (ABC) is a relatively new swarm intelligence method. In this paper, we propose a new hybrid gene selection method, the Genetic Bee Colony (GBC) algorithm, which combines the use of a Genetic Algorithm (GA) with the Artificial Bee Colony (ABC) algorithm, integrating the advantages of both. The proposed algorithm is applied to microarray gene expression profiles to select the most predictive and informative genes for cancer classification. To test the accuracy of the proposed algorithm, extensive experiments were conducted on three binary microarray datasets (colon, leukemia, and lung) and three multi-class microarray datasets (SRBCT, lymphoma, and leukemia). Results of the GBC algorithm are compared with our recently proposed technique, mRMR combined with the Artificial Bee Colony algorithm (mRMR-ABC), as well as with the combinations of mRMR with GA (mRMR-GA) and with Particle Swarm Optimization (mRMR-PSO). In addition, we compared the GBC algorithm with other related algorithms recently published in the literature, using all benchmark datasets. The GBC algorithm shows superior performance, achieving the highest classification accuracy along with the lowest average number of selected genes. This demonstrates that the GBC algorithm is a promising approach for solving the gene selection problem in both binary and multi-class cancer classification.
NASA Technical Reports Server (NTRS)
Fitzjerrell, D. G.; Grounds, D. J.; Leonard, J. I.
1975-01-01
Using a whole body algorithm simulation model, a wide variety and large number of stresses as well as different stress levels were simulated including environmental disturbances, metabolic changes, and special experimental situations. Simulation of short term stresses resulted in simultaneous and integrated responses from the cardiovascular, respiratory, and thermoregulatory subsystems and the accuracy of a large number of responding variables was verified. The capability of simulating significantly longer responses was demonstrated by validating a four week bed rest study. In this case, the long term subsystem model was found to reproduce many experimentally observed changes in circulatory dynamics, body fluid-electrolyte regulation, and renal function. The value of systems analysis and the selected design approach for developing a whole body algorithm was demonstrated.
NASA Astrophysics Data System (ADS)
Siragusa, R.; Perret, E.; Nguyen, H. V.; Lemaître-Auger, P.; Tedjini, S.; Caloz, C.
2011-06-01
A fully automated tool for designing CRLH interdigital microstrip structures using a co-design synthesis computational approach is proposed and demonstrated experimentally. This approach uses an electromagnetic simulator in conjunction with a genetic algorithm to synthesize and optimize a balanced CRLH interdigital microstrip transmission line. The high sensitivity of a long balanced transmission line to fabrication tolerances is controlled by the use of a high-precision 3D simulator; the 2.5D simulator used previously was found insufficient for a large number of unit cells. A 13-unit-cell CRLH transmission line is designed with the proposed approach. The sensitivity of the balanced lines' response to the over/under-etching factor is highlighted by comparing the measurements of four lines with different factors; the effect of over/under-etching is significant for values larger than 10 μm.
Dhodiya, Jayesh M; Tailor, Anita Ravi
2016-01-01
This paper presents a genetic-algorithm-based hybrid approach for solving a fuzzy multi-objective assignment problem (FMOAP) using an exponential membership function, in which the coefficients of the objective function are described by a triangular possibility distribution. Moreover, fuzzy judgments are classified using α-level sets so that the decision maker (DM) can simultaneously optimize the optimistic, most likely, and pessimistic scenarios of the fuzzy objective functions. To demonstrate the effectiveness of the proposed approach, a numerical example is provided with a data set from a realistic situation. The paper concludes that the developed hybrid approach can manage the FMOAP efficiently and effectively, producing output that enables the DM to make a decision.
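One common form of exponential membership function (the paper's exact parameterisation may differ; this is an illustrative variant) maps an objective value to a satisfaction degree in [0, 1], decaying exponentially from the best (aspiration) level to the worst:

```python
import math

def exp_membership(z, z_best, z_worst, shape=1.0):
    """Exponential membership function sketch for a fuzzy objective to
    be minimized (illustrative, not necessarily the paper's form).

    Returns satisfaction 1 at or below the aspiration level z_best,
    0 at or above the worst acceptable level z_worst, and an
    exponentially decaying value in between, with `shape` controlling
    how sharply satisfaction falls off."""
    if z <= z_best:
        return 1.0
    if z >= z_worst:
        return 0.0
    psi = (z - z_best) / (z_worst - z_best)      # normalised position
    return (math.exp(-shape * psi) - math.exp(-shape)) / (1 - math.exp(-shape))
```

A GA can then maximize, say, the minimum membership across objectives, which is the usual way such functions enter a hybrid fuzzy multi-objective scheme.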
Thermoluminescence curves simulation using genetic algorithm with factorial design
NASA Astrophysics Data System (ADS)
Popko, E. A.; Weinstein, I. A.
2016-05-01
The evolutionary approach is an effective optimization tool for the numerical analysis of thermoluminescence (TL) processes, used to assess the microparameters of kinetic models and to determine their effects on the shape of TL peaks. In this paper, a procedure for tuning a genetic algorithm (GA) is presented. The approach is based on a multifactorial experiment and allows choosing the intrinsic mechanisms of the evolutionary operators that provide the most efficient algorithm performance. The proposed method is tested on the “one trap-one recombination center” (OTOR) model as an example, and its advantages for the approximation of experimental TL curves are shown.
Ganzetti, Marco; Wenderoth, Nicole; Mantini, Dante
2016-01-01
Intensity non-uniformity (INU) in magnetic resonance (MR) imaging is a major issue when conducting analyses of brain structural properties. An inaccurate INU correction may result in qualitative and quantitative misinterpretations. Several INU correction methods exist, whose performance largely depends on the specific parameter settings that need to be chosen by the user. Here we addressed the question of how to select the best input parameters for a specific INU correction algorithm. Our investigation was based on the INU correction algorithm implemented in SPM, but this can in principle be extended to any other algorithm requiring the selection of input parameters. We conducted a comprehensive comparison of indirect metrics for the assessment of INU correction performance, namely the coefficient of variation of white matter (CVWM), the coefficient of variation of gray matter (CVGM), and the coefficient of joint variation between white matter and gray matter (CJV). Using simulated MR data, we observed the CJV to be more accurate than the CVWM and CVGM, provided that the noise level in the INU-corrected image was controlled by means of spatial smoothing. Based on the CJV, we developed a data-driven approach for selecting INU correction parameters, which could effectively work on actual MR images. To this end, we implemented an enhanced procedure for the definition of white and gray matter masks, based on which the CJV was calculated. Our approach was validated using actual T1-weighted images collected with 1.5 T, 3 T, and 7 T MR scanners. We found that our procedure can reliably assist the selection of valid INU correction algorithm parameters, thereby contributing to an enhanced inhomogeneity correction in MR images. PMID:27014050
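The CJV referred to above has a standard closed form, (σ_WM + σ_GM) / |μ_WM − μ_GM|; a minimal Python sketch follows (the variable names are ours, not from the SPM implementation):

```python
import statistics

def cjv(wm_intensities, gm_intensities):
    """Coefficient of joint variation between white- and gray-matter voxels.

    Lower values indicate better separation of the two tissue classes,
    i.e. a more successful INU correction.
    """
    mu_wm = statistics.mean(wm_intensities)
    mu_gm = statistics.mean(gm_intensities)
    sd_wm = statistics.pstdev(wm_intensities)  # population standard deviation
    sd_gm = statistics.pstdev(gm_intensities)
    return (sd_wm + sd_gm) / abs(mu_wm - mu_gm)
```

A perfectly uniform pair of tissue classes gives CJV = 0; residual inhomogeneity widens the class distributions and drives the value up.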
NASA Astrophysics Data System (ADS)
Riha, Stefan; Krawczyk, Harald
2011-11-01
Water quality monitoring in the Baltic Sea is of high ecological importance for all its neighbouring countries. They are highly interested in regular monitoring of the water quality parameters of their regional zones. Special attention is paid to the occurrence and dissemination of algae blooms. Among the appearing blooms, the possibly toxic or harmful cyanobacteria cultures are a special case of investigation, due to their specific optical properties and their negative influence on the ecological state of the aquatic system. Satellite remote sensing, with its high temporal and spatial resolution, allows frequent observation of large areas of the Baltic Sea with special focus on its two seasonal algae blooms. For better monitoring of the cyanobacteria-dominated summer blooms, adapted algorithms are needed which take into account the special optical properties of blue-green algae. Standard chlorophyll-a algorithms typically fail to correctly recognise these occurrences. To significantly improve the observation and tracking of cyanobacteria blooms, the Marine Remote Sensing group of DLR has started the development of a model-based inversion algorithm that includes a four-component bio-optical water model for Case 2 waters, which extends the commonly calculated parameter set of chlorophyll, suspended matter and CDOM with an additional parameter for the estimation of phycocyanin absorption. It was necessary to carry out detailed optical laboratory measurements with different cyanobacteria cultures occurring in the Baltic Sea for the generation of a specific bio-optical model. The inversion of satellite remote sensing data is based on an artificial Neural Network technique. This is a model-based multivariate non-linear inversion approach. The specifically designed Neural Network is trained with a comprehensive dataset of simulated reflectance values taking into account the laboratory obtained specific optical
NASA Astrophysics Data System (ADS)
Dong, S.
2014-06-01
We present an effective outflow boundary condition, and an associated numerical algorithm, within the phase-field framework for dealing with two-phase outflows or open boundaries. The set of two-phase outflow boundary conditions for the phase-field and flow variables is designed to prevent uncontrolled growth in the total energy of the two-phase system, even in situations where strong backflows or vortices may be present at the outflow boundaries. We also present an additional boundary condition for the phase field function, which together with the usual Dirichlet condition can work effectively as the phase-field inflow conditions. The numerical algorithm for dealing with these boundary conditions is developed on top of a strategy for de-coupling the computations of all flow variables and for overcoming the performance bottleneck caused by variable coefficient matrices associated with variable density/viscosity. The algorithm contains special constructions for treating the variable dynamic viscosity in the outflow boundary condition and for preventing a numerical locking at the outflow boundaries for time-dependent problems. Extensive numerical tests with incompressible two-phase flows involving inflow and outflow boundaries demonstrate that the two-phase outflow boundary conditions and the numerical algorithm developed herein allow the fluid interface and the two-phase flow to pass through the outflow or open boundaries in a smooth and seamless fashion, and that our method produces stable simulations when large density ratios and large viscosity ratios are involved and when strong backflows are present at the outflow boundaries.
An algorithmic approach to diagnosing asthma in older patients in general practice.
Ruffin, Richard E; Wilson, David H; Appleton, Sarah L; Adams, Robert J
2005-07-04
WHAT WE NEED TO KNOW: How effective would an algorithm be in helping general practitioners diagnose asthma? What proportion of older people with undiagnosed asthma fail to recognise symptoms? What proportion of the population believe asthma does not occur in the older population? What systems or supports do GPs need to diagnose asthma more effectively? WHAT WE NEED TO DO: Work on developing a gold standard for asthma diagnosis. Develop prototype algorithms for general practice discussion. Conduct a general practice study to assess the effectiveness of an algorithm. In conjunction with GPs, develop a pilot program to increase awareness of the current asthma problem. Conduct focus-group research to identify why some people do not believe they can develop asthma for the first time in adult life. Conduct focus-group research to identify why some adults do not attribute asthma symptoms to asthma. Conduct focus groups with GPs to identify what support is needed to diagnose asthma more effectively. Consult with all stakeholders before an intervention is used. Evaluate any interventions used.
NASA Astrophysics Data System (ADS)
Ajoy, Ashok; Rao, Rama Koteswara; Kumar, Anil; Rungta, Pranaw
2012-03-01
We propose an iterative algorithm to simulate the dynamics generated by any n-qubit Hamiltonian. The simulation entails decomposing the unitary time-evolution operator U into a product of different time-step unitaries. The algorithm product-decomposes U in a chosen operator basis by identifying a certain symmetry of U that is intimately related to the number of gates in the decomposition. We illustrate the algorithm by first obtaining a polynomial decomposition in the Pauli basis of the n-qubit quantum state transfer unitary by Di Franco [Phys. Rev. Lett. 101, 230502 (2008)] that transports quantum information from one end of a spin chain to the other, and then implement it in nuclear magnetic resonance to demonstrate that the decomposition is experimentally viable. We further experimentally test the resilience of the state transfer to static errors in the coupling parameters of the simulated Hamiltonian. This is done by decomposing and simulating the corresponding imperfect unitaries.
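The basis-expansion step underlying such a Pauli-basis decomposition can be sketched as follows. This computes the expansion coefficients of an operator in the n-qubit Pauli basis; it is a generic building block, not the authors' full product-decomposition algorithm.

```python
import itertools
import numpy as np

# Single-qubit Pauli matrices
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PAULIS = {"I": I, "X": X, "Y": Y, "Z": Z}

def pauli_coefficients(A):
    """Expand a 2^n x 2^n operator A in the n-qubit Pauli basis.

    Returns {label: c} with A = sum_P c_P * P, where c_P = Tr(P A) / 2^n
    (the Pauli strings are self-adjoint, so no dagger is needed).
    """
    n = int(np.log2(A.shape[0]))
    coeffs = {}
    for labels in itertools.product("IXYZ", repeat=n):
        P = np.array([[1.0 + 0j]])
        for l in labels:
            P = np.kron(P, PAULIS[l])      # build the n-qubit Pauli string
        c = np.trace(P @ A) / 2**n
        if abs(c) > 1e-12:                 # keep only non-zero terms
            coeffs["".join(labels)] = c
    return coeffs
```

For example, the two-qubit operator X⊗Z expands to the single term with label "XZ" and coefficient 1.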
Utilisation of GaN and InGaN/GaN with nanoporous structures for water splitting
Benton, J.; Bai, J.; Wang, T.
2014-12-01
We report a cost-effective approach to the fabrication of GaN-based nanoporous structures for applications in renewable hydrogen production. Photoelectrochemical etching in a KOH solution has been employed to fabricate both GaN and InGaN/GaN nanoporous structures with pore sizes ranging from 25 to 60 nm, obtained by controlling both the etchant concentration and the applied voltage. Compared to as-grown planar devices, the nanoporous structures exhibited a significant increase in photocurrent, by a factor of up to four. An incident photon conversion efficiency of up to 46% around the band edge of GaN has been achieved.
NASA Astrophysics Data System (ADS)
Li, Hong; Zhang, Li; Jiao, Yong-Chang
2016-07-01
This paper presents an interactive approach based on a discrete differential evolution algorithm to solve a class of integer bilevel programming problems, in which integer decision variables are controlled by an upper-level decision maker and real-valued (continuous) decision variables are controlled by a lower-level decision maker. Using the Karush-Kuhn-Tucker optimality conditions in the lower-level programming, the original discrete bilevel formulation can be converted into a discrete single-level nonlinear programming problem with complementarity constraints, and then the smoothing technique is applied to deal with the complementarity constraints. Finally, a discrete single-level nonlinear programming problem is obtained and solved by an interactive approach. In each iteration, for each given upper-level discrete variable, a system of nonlinear equations including the lower-level variables and Lagrange multipliers is solved first, and then a discrete nonlinear programming problem only with inequality constraints is handled by using a discrete differential evolution algorithm. Simulation results show the effectiveness of the proposed approach.
Taheri, Shahrooz; Mat Saman, Muhamad Zameri; Wong, Kuan Yew
2013-01-01
One of the cost-intensive issues in managing warehouses is the order picking problem, which deals with the retrieval of items from their storage locations in order to meet customer requests. Many solution approaches have been proposed to minimize the traveling distance in the process of order picking. However, in practice, customer orders have to be completed by certain due dates in order to avoid tardiness, which is neglected in most of the related scientific papers. Consequently, we propose a novel solution approach for minimizing tardiness which consists of four phases. First, weighted association rule mining is used to calculate associations between orders with respect to their due dates. Next, a batching model based on binary integer programming is formulated to maximize the associations between orders within each batch. Subsequently, in the order picking phase, a Genetic Algorithm integrated with the Traveling Salesman Problem is used to identify the most suitable travel path. Finally, the Genetic Algorithm is applied to sequence the constructed batches in order to minimize tardiness. Illustrative examples and comparisons are presented to demonstrate the proficiency and solution quality of the proposed approach. PMID:23864823
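A minimal sketch of the route-finding phase — a GA searching for a short depot-to-depot picking route, i.e. a TSP tour — under assumed operators (order crossover, swap mutation, tournament selection of size 3; the paper's exact operators and encoding may differ):

```python
import random

def route_length(route, dist):
    # depot (index 0) -> pick locations -> depot
    tour = [0] + list(route) + [0]
    return sum(dist[a][b] for a, b in zip(tour, tour[1:]))

def order_crossover(p1, p2):
    """OX crossover: copy a slice of p1, fill the rest in p2's order."""
    a, b = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[a:b + 1] = p1[a:b + 1]
    fill = [g for g in p2 if g not in child]
    for i in range(len(child)):
        if child[i] is None:
            child[i] = fill.pop(0)
    return child

def ga_pick_route(items, dist, pop_size=40, generations=200, pmut=0.2):
    """GA searching a short depot-to-depot picking route over `items`."""
    pop = [random.sample(items, len(items)) for _ in range(pop_size)]
    best = min(pop, key=lambda r: route_length(r, dist))
    for _ in range(generations):
        nxt = [best[:]]                      # elitism: keep the best route
        while len(nxt) < pop_size:
            p1, p2 = (min(random.sample(pop, 3),
                          key=lambda r: route_length(r, dist))
                      for _ in range(2))     # size-3 tournament selection
            child = order_crossover(p1, p2)
            if random.random() < pmut:       # swap mutation
                i, j = random.sample(range(len(child)), 2)
                child[i], child[j] = child[j], child[i]
            nxt.append(child)
        pop = nxt
        best = min(pop, key=lambda r: route_length(r, dist))
    return best, route_length(best, dist)
```

On a small warehouse with locations on a line, the GA recovers the obvious sweep route; for real layouts the distance matrix would come from the aisle geometry.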
NASA Astrophysics Data System (ADS)
Han, Zheng; Chen, Guangqi; Li, Yange; Wang, Wei; Zhang, Hong
2015-07-01
The estimation of debris-flow velocity in a cross-section is of primary importance due to its correlation with impact force, run-up and superelevation. However, previous methods sometimes neglect the observed asymmetric velocity distribution and consequently underestimate the debris-flow velocity. This paper presents a new approach for exploring the debris-flow velocity distribution in a cross-section. The presented approach uses an iteration algorithm based on the Riemann integral method to search for an approximate solution to the unknown flow surface. The established laws for the vertical velocity profile are compared and subsequently integrated to analyze the velocity distribution in the cross-section. The major benefit of the presented approach is that natural channels, typically with irregular beds and superelevations, can be taken into account, and the approximation produced by the approach closely replicates the direct integral solution. The approach is programmed in the MATLAB environment, and the code is open to the public. A well-documented debris-flow event in Sichuan Province, China, is used to demonstrate the presented approach. Results show that the computed flow surface and mean velocity reproduce the investigated results well. Discussion regarding the model sensitivity and the sources of error concludes the paper.
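The core numerical idea — a Riemann-sum flow area over an irregular bed, iterated to locate the unknown free surface — can be sketched as follows. A bisection iteration is used here as a stand-in for the authors' MATLAB implementation, whose exact iteration scheme may differ.

```python
def cross_section_area(h, xs, bed):
    """Riemann-sum flow area below surface elevation h over an irregular bed.

    xs are station coordinates across the channel and bed[i] is the bed
    elevation at xs[i] (a piecewise-constant approximation of the profile).
    """
    area = 0.0
    stations = list(zip(xs, bed))
    for (x0, z0), (x1, _) in zip(stations, stations[1:]):
        depth = max(h - z0, 0.0)       # dry segments contribute nothing
        area += depth * (x1 - x0)
    return area

def surface_elevation(target_area, xs, bed, lo=None, hi=None, tol=1e-8):
    """Bisection search for the surface elevation giving `target_area`.

    The area is monotone non-decreasing in h, so bisection converges.
    """
    lo = min(bed) if lo is None else lo
    hi = max(bed) + 100.0 if hi is None else hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if cross_section_area(mid, xs, bed) < target_area:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For a rectangular channel the recovered surface elevation matches depth = area / width exactly; irregular beds simply make some segments dry.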
Parvini, Farid; Shahabi, Cyrus
2007-01-01
We propose a novel approach for recognising static and dynamic hand gestures by analysing the raw data streams generated by the sensors attached to the human hands. We utilise the concept of 'range of motion' in the movement of fingers and exploit this characteristic to analyse the acquired data for recognising hand signs. Our approach for hand gesture recognition addresses two major problems: user-dependency and device-dependency. Furthermore, we show that our approach neither requires calibration nor involves training. We apply our approach for recognising American Sign Language (ASL) signs and show that more than 75% accuracy in sign recognition can be achieved.
Multiple sequence alignment using multi-objective based bacterial foraging optimization algorithm.
Rani, R Ranjani; Ramyachitra, D
2016-12-01
Multiple sequence alignment (MSA) is a widespread approach in computational biology and bioinformatics. MSA deals with how sequences of nucleotides and amino acids are aligned with a minimum number of gaps between them, which reveals the functional, evolutionary and structural relationships among the sequences. Still, the computation of MSA is a challenging task if it is to provide efficient accuracy and statistically significant alignments. In this work, the Bacterial Foraging Optimization (BFO) Algorithm was employed to align biological sequences, resulting in a non-dominated optimal solution. It employs multiple objectives: maximization of similarity, non-gap percentage and conserved blocks, and minimization of gap penalty. The BAliBASE 3.0 benchmark database was utilized to examine the proposed algorithm against other methods. In this paper, two algorithms have been proposed: a Hybrid Genetic Algorithm with Artificial Bee Colony (GA-ABC) and the Bacterial Foraging Optimization Algorithm. The Hybrid GA-ABC performed better than the existing optimization algorithms, but conserved blocks were not obtained using GA-ABC; BFO was therefore used for the alignment, and the conserved blocks were obtained. The proposed Multi-Objective Bacterial Foraging Optimization Algorithm (MO-BFO) was compared with the widely used MSA methods Clustal Omega, Kalign, MUSCLE, MAFFT, Genetic Algorithm (GA), Ant Colony Optimization (ACO), Artificial Bee Colony (ABC), Particle Swarm Optimization (PSO) and the Hybrid Genetic Algorithm with Artificial Bee Colony (GA-ABC). The final results show that the proposed MO-BFO algorithm yields better alignment than most of these widely used methods.
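A minimal sketch of the four objectives named above, scored for a candidate alignment. The simplified linear gap penalty and the column-wise definitions are our assumptions; the paper's exact scoring may differ.

```python
def alignment_objectives(alignment, gap_open=2.0, gap_char="-"):
    """Score a list of equal-length aligned sequences on four objectives.

    Returns (similarity, non_gap_pct, conserved_blocks, gap_penalty), where
    similarity counts pairwise residue matches per column, conserved_blocks
    counts gap-free columns holding a single residue, and gap_penalty
    charges each gap character linearly.
    """
    ncols = len(alignment[0])
    similarity = 0
    conserved = 0
    gaps = 0
    for col in range(ncols):
        chars = [seq[col] for seq in alignment]
        residues = [c for c in chars if c != gap_char]
        gaps += len(chars) - len(residues)
        similarity += sum(a == b for i, a in enumerate(residues)
                          for b in residues[i + 1:])
        if len(residues) == len(chars) and len(set(residues)) == 1:
            conserved += 1
    total = ncols * len(alignment)
    non_gap_pct = 100.0 * (total - gaps) / total
    return similarity, non_gap_pct, conserved, gap_open * gaps
```

A multi-objective optimizer such as MO-BFO would treat these four values as a vector and retain the non-dominated alignments rather than collapsing them into one score.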
A guided search genetic algorithm using mined rules for optimal affective product design
NASA Astrophysics Data System (ADS)
Fung, Chris K. Y.; Kwong, C. K.; Chan, Kit Yan; Jiang, H.
2014-08-01
Affective design is an important aspect of new product development, especially for consumer products, to achieve a competitive edge in the marketplace. It can help companies to develop new products that can better satisfy the emotional needs of customers. However, product designers usually encounter difficulties in determining the optimal settings of the design attributes for affective design. In this article, a novel guided search genetic algorithm (GA) approach is proposed to determine the optimal design attribute settings for affective design. The optimization model formulated based on the proposed approach applied constraints and guided search operators, which were formulated based on mined rules, to guide the GA search and to achieve desirable solutions. A case study on the affective design of mobile phones was conducted to illustrate the proposed approach and validate its effectiveness. Validation tests were conducted, and the results show that the guided search GA approach outperforms the GA approach without the guided search strategy in terms of GA convergence and computational time. In addition, the guided search optimization model is capable of improving GA to generate good solutions for affective design.
A Hybrid Approach for Process Mining: Using From-to Chart Arranged by Genetic Algorithms
NASA Astrophysics Data System (ADS)
Esgin, Eren; Senkul, Pinar; Cimenbicer, Cem
In the scope of this study, a hybrid data analysis methodology for business process modeling is proposed: a From-to Chart, which is basically used as the front end to figure out the observed patterns among the activities in realistic event logs, is rearranged by Genetic Algorithms to convert these derived raw relations into an activity sequence. According to the experimental results, acceptably good (sub-optimal or optimal) solutions are obtained for relatively complex business processes within a reasonable processing time.
Achieving Direct Closure of the Anterolateral Thigh Flap Donor Site—An Algorithmic Approach
Pachón Suárez, Jaime Eduardo; Sadigh, Parviz Lionel; Shih, Hsiang-Shun; Hsieh, Ching-Hua
2014-01-01
Background: Minimizing donor-site morbidity after free flap harvest is of paramount importance. In this article, we share our experience with achieving primary closure of 58 anterolateral thigh (ALT) free flap donor sites using a simple algorithm in cases where primary closure would otherwise have not been possible. Methods: Between 2004 and 2010, 58 patients who underwent free ALT flap reconstruction were included in the study. The inclusion criteria were those who had flap width requirements that were wider than 16% of the thigh circumference and had achieved direct primary closure of the donor site by the use of our technique. Results: Primary closure of the donor sites was facilitated in all cases by the use of 3 distinct techniques. This included the use of the V-Y advancement technique in 13 patients, split skin paddle technique in 7 patients, and the tubed skin paddle design in 38 patients. No episodes of postoperative wound dehiscence at the donor site were encountered; however, 2 cases were complicated by superficial wound infections that settled with a course of antibiotics. Conclusions: Direct primary closure of the ALT donor site can be facilitated by the use of our simple algorithm. Certain strategies need to be adopted at the design stage; however, the techniques used are simple and reliable, produce superior cosmetic results at the donor site, save time, and spare the patient the morbidity associated with the harvest of a skin graft. PMID:25426349
Dirschka, Thomas; Gupta, Girish; Micali, Giuseppe; Stockfleth, Eggert; Basset-Séguin, Nicole; Del Marmol, Véronique; Dummer, Reinhard; Jemec, Gregor B E; Malvehy, Josep; Peris, Ketty; Puig, Susana; Stratigos, Alexander J; Zalaudek, Iris; Pellacani, Giovanni
2016-11-13
Actinic keratosis (AK) is a chronic skin disease in which multiple clinical and subclinical lesions co-exist across large areas of sun-exposed skin, resulting in field cancerisation. Lesions require treatment because of their potential to transform into invasive squamous cell carcinoma. This article aims to provide office-based dermatologists and general practitioners with simple guidance on AK treatment in daily clinical practice to supplement existing evidence-based guidelines. Novel aspects of the proposed treatment algorithm include differentiating patients according to whether they have isolated scattered lesions, lesions clustered in small areas or large affected fields without reference to specific absolute numbers of lesions. Recognising that complete lesion clearance is rarely achieved in real-life practice and that AK is a chronic disease, the suggested treatment goals are to reduce the number of lesions, to achieve long-term disease control and to prevent disease progression to invasive squamous cell carcinoma. In the clinical setting, physicians should select AK treatments based on local availability, and the presentation and needs of their patients. The proposed AK treatment algorithm is easy-to-use and has high practical relevance for real-life, office-based dermatology.
NASA Astrophysics Data System (ADS)
Wang, Xiaojun; Lai, Weidong
2011-08-01
In this paper, a combined method has been put forward for an ASTER-detected image, using a wavelet filter to attenuate noise and an anisotropic diffusion PDE (partial differential equation) to further recover image contrast. The model is verified against different noise backgrounds, since remote sensing images usually contain salt-and-pepper, Gaussian, and speckle noise. Considering the features of noise in the wavelet domain, a wavelet filter with a Bayesian estimation threshold is applied to recover image contrast from the blurred background. The proposed PDE performs anisotropic diffusion in the orthogonal direction, thus preserving edges during further denoising. Simulation indicates that the combined algorithm recovers images blurred by speckle and Gaussian noise more effectively than wavelet denoising alone, while the denoising effect is also distinct when salt-and-pepper noise has low intensity. The combined algorithm proposed in this article can be integrated into remote sensing image analysis to obtain higher accuracy for environmental interpretation and pattern recognition.
Algorithmic approach to quantifying the hydrophobic force contribution in protein folding.
Backofen, R; Will, S; Clote, P
2000-01-01
Though the electrostatic, ionic, van der Waals, Lennard-Jones, hydrogen bonding, and other forces play an important role in the energy function minimized at a protein's native state, it is widely believed that the hydrophobic force is the dominant term in protein folding. Here we attempt to quantify the extent to which the hydrophobic force determines the positions of the backbone alpha-carbon atoms in PDB data, by applying Monte Carlo and genetic algorithms to determine the predicted conformation with minimum energy, where only the hydrophobic force is considered (i.e. Dill's HP-model, and refinements using Woese's polar requirement). This is done by computing the root mean square deviation between the normalized distance matrix D = (di,j) (where di,j is the normalized Euclidean distance between residues ri and rj) for PDB data and that obtained from the output of our algorithms. Our program was run on the database of ancient conserved regions drawn from GenBank 101, generously supplied by W. Gilbert's lab, as well as on medium-sized proteins (E. coli RecA, 2reb; Erythrocruorin, 1eca; and Actinidin, 2act). The root mean square deviation (RMSD) between distance matrices derived from the PDB data and from our program output is quite small and, by comparison with the RMSD between PDB data and random coils, allows a quantification of the hydrophobic force contribution. A preliminary version of this paper appeared at GCB'99 (http://bibiserv.techfak.uni-bielefeld.de/gcb99/).
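The comparison metric can be sketched as follows; normalizing the distance matrix by its maximum entry is our assumption, since the abstract does not specify the scaling.

```python
import math

def normalized_distance_matrix(coords):
    """Pairwise Euclidean distances between residues, scaled so max = 1."""
    n = len(coords)
    d = [[math.dist(coords[i], coords[j]) for j in range(n)] for i in range(n)]
    dmax = max(max(row) for row in d) or 1.0   # guard against a single point
    return [[v / dmax for v in row] for row in d]

def matrix_rmsd(d1, d2):
    """Root mean square deviation between two equal-size distance matrices."""
    n = len(d1)
    sq = sum((d1[i][j] - d2[i][j]) ** 2 for i in range(n) for j in range(n))
    return math.sqrt(sq / (n * n))
```

A small RMSD between the PDB-derived matrix and the hydrophobic-only prediction, relative to the RMSD against random coils, is what quantifies the hydrophobic contribution.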
Chen, Tianshi; He, Jun; Sun, Guangzhong; Chen, Guoliang; Yao, Xin
2009-10-01
In the past decades, many theoretical results related to the time complexity of evolutionary algorithms (EAs) on different problems have been obtained. However, there is no general and easy-to-apply approach designed particularly for population-based EAs on unimodal problems. In this paper, we first generalize the concept of the takeover time to EAs with mutation, then we utilize the generalized takeover time to obtain the mean first hitting time of EAs and, thus, propose a general approach for analyzing EAs on unimodal problems. As examples, we consider the so-called (N + N) EAs and we show that, on two well-known unimodal problems, LeadingOnes and OneMax, the EAs with bitwise mutation and two commonly used selection schemes need O(n ln n + n^2/N) and O(n ln ln n + n ln n/N) generations, respectively, to find the global optimum. Beyond the new results above, our approach can also be applied directly to obtain results for some population-based EAs on some other unimodal problems. Moreover, we also discuss when the general approach is valid to provide us tight bounds on the mean first hitting times and when our approach should be combined with problem-specific knowledge to get the tight bounds. This is the first time a general approach for analyzing population-based EAs on unimodal problems has been discussed theoretically.
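For intuition, the special case N = 1 — the (1+1) EA with bitwise mutation on OneMax, whose expected hitting time is Θ(n ln n) — can be simulated directly (a standard textbook setup, not the authors' code):

```python
import random

def onemax_hitting_time(n, rng=random):
    """First hitting time of the (1+1) EA with bitwise mutation on OneMax."""
    x = [rng.randint(0, 1) for _ in range(n)]
    t = 0
    while sum(x) < n:
        # flip each bit independently with probability 1/n
        y = [b ^ (rng.random() < 1.0 / n) for b in x]
        if sum(y) >= sum(x):        # elitist selection: keep the better child
            x = y
        t += 1
    return t
```

Averaging `onemax_hitting_time(n)` over many runs and plotting against n ln n makes the O(n ln n) term of the bound above visible empirically.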
Leckenby, J I; Ghali, S; Butler, D P; Grobbelaar, A O
2015-05-01
Facial palsy patients suffer an array of problems ranging from functional to psychological issues. With regard to the eye, lacrimation, lagophthalmos and the inability to spontaneously blink are the main symptoms and if left untreated can compromise the cornea and vision. There are a multitude of treatment modalities available and the surgeon has the challenging prospect of choosing the correct intervention to yield the best outcome for a patient. The accurate assessment of the eye in facial paralysis is described and by approaching the brow and the eye separately the treatment options and indications are discussed having been broken down into static and dynamic modalities. Based on our unit's experience of more than 35 years and 1000 cases of facial palsy, we have developed a detailed approach to help manage these patients optimally. The aim of this article is to provide the reader with a systematic algorithm that can be used when consulting a patient with eye problems associated with facial palsy.
Duan, Qian-Qian; Yang, Gen-Ke; Pan, Chang-Chun
2014-01-01
A hybrid optimization algorithm combining the finite state method (FSM) and a genetic algorithm (GA) is proposed to solve the crude oil scheduling problem. The FSM and GA are combined to take advantage of each method and to compensate for the deficiencies of the individual methods. In the proposed algorithm, the finite state method makes up for the weakness of GA, which is poor at local searching. The heuristic returned by the FSM can guide the GA towards good solutions, the idea being that promising substructures or partial solutions can be generated using the FSM. Furthermore, the FSM can guarantee that the entire solution space is uniformly covered. Therefore, the combination of the two algorithms has better global performance than GA or FSM operating individually. Finally, a real-life crude oil scheduling problem from the literature is used for simulation. The experimental results validate that the proposed method outperforms the state-of-the-art GA method. PMID:24772031
Anor, Tomer; Madsen, Joseph R; Dupont, Pierre
2011-05-09
We propose a novel systematic approach to optimizing the design of concentric tube robots for neurosurgical procedures. These procedures require that the robot approach specified target sites while navigating and operating within an anatomically constrained work space. The availability of preoperative imaging makes our approach particularly suited for neurosurgery, and we illustrate the method with the example of endoscopic choroid plexus ablation. A novel parameterization of the robot characteristics is used in conjunction with a global pattern search optimization method. The formulation returns the design of the least-complex robot capable of reaching single or multiple target points in a confined space with constrained optimization metrics. A particular advantage of this approach is that it identifies the need for either fixed-curvature versus variable-curvature sections. We demonstrate the performance of the method in four clinically relevant examples.
NASA Astrophysics Data System (ADS)
Lai, Xide; Chen, Xiaoming; Zhang, Xiang; Lei, Mingchuan
2016-11-01
This paper presents an approach to the automatic hydraulic optimization of a hydraulic machine's blade system, combining a blade geometric modeller and parametric generator with an automatic CFD solution procedure and a multi-objective genetic algorithm. In order to evaluate a plurality of design options and quickly estimate the blade system's hydraulic performance, an approximate model able to substitute for the original inside the optimization loop has been employed in the hydraulic optimization of the blade, using function approximation. As the approximate model is constructed from database samples containing a set of blade geometries and their resulting hydraulic performances, it can faithfully reproduce the blade performance predicted by the original model. As hydraulic machine designers are accustomed to designing with 2D blade profiles on stream surfaces that are then stacked into a 3D blade geometric model in the form of NURBS surfaces, the geometric variables to be optimized were defined by a series of profiles on stream surfaces. The approach depends on the cooperation between a genetic algorithm, a database, and user-defined objective functions and constraints, which comprise hydraulic performance, structural and geometric constraint functions. An example covering the optimization design of a mixed-flow pump impeller is presented.
NASA Astrophysics Data System (ADS)
Best, Andrew; Kapalo, Katelynn A.; Warta, Samantha F.; Fiore, Stephen M.
2016-05-01
Human-robot teaming largely relies on the ability of machines to respond and relate to human social signals. Prior work in Social Signal Processing has drawn a distinction between social cues (discrete, observable features) and social signals (underlying meaning). For machines to attribute meaning to behavior, they must first understand some probabilistic relationship between the cues presented and the signal conveyed. Using data derived from a study in which participants identified a set of salient social signals in a simulated scenario and indicated the cues related to the perceived signals, we detail a learning algorithm, which clusters social cue observations and defines an "N-Most Likely States" set for each cluster. Since multiple signals may be co-present in a given simulation and a set of social cues often maps to multiple social signals, the "N-Most Likely States" approach provides a dramatic improvement over typical linear classifiers. We find that the target social signal appears in a "3 most-likely signals" set with up to 85% probability. This results in increased speed and accuracy on large amounts of data, which is critical for modeling social cognition mechanisms in robots to facilitate more natural human-robot interaction. These results also demonstrate the utility of such an approach in deployed scenarios where robots need to communicate with human teammates quickly and efficiently. In this paper, we detail our algorithm, comparative results, and offer potential applications for robot social signal detection and machine-aided human social signal detection.
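A minimal sketch of the "N-Most Likely States" idea, under the assumption that cue observations have already been clustered; the cluster IDs and signal labels below are invented toy data, not the study's dataset.

```python
from collections import Counter

# Toy observations: (cluster_id, perceived_signal). In the study the
# clusters come from clustering social-cue vectors; here the assignments
# are assumed given, and the labels are made up.
observations = [
    (0, "greeting"), (0, "greeting"), (0, "attention"), (0, "dominance"),
    (1, "attention"), (1, "attention"), (1, "confusion"),
]

def n_most_likely_states(observations, n=3):
    by_cluster = {}
    for cluster, signal in observations:
        by_cluster.setdefault(cluster, Counter())[signal] += 1
    # keep the n most frequent signals for each cluster
    return {c: [s for s, _ in cnt.most_common(n)]
            for c, cnt in by_cluster.items()}

states = n_most_likely_states(observations, n=3)
```

Because several signals can be co-present, returning a small ranked set per cluster (rather than a single label, as a linear classifier would) is exactly what lets the target signal appear in the top-3 set with high probability.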
InGaN quantum dot formation mechanism on hexagonal GaN/InGaN/GaN pyramids.
Lundskog, A; Palisaitis, J; Hsu, C W; Eriksson, M; Karlsson, K F; Hultman, L; Persson, P O Å; Forsberg, U; Holtz, P O; Janzén, E
2012-08-03
Growing InGaN quantum dots (QDs) at the apex of hexagonal GaN pyramids is an elegant approach to achieve a deterministic positioning of QDs. Despite similar synthesis procedures by metal organic chemical vapor deposition, the optical properties of the QDs reported in the literature vary drastically. The QDs tend to exhibit either narrow or broad emission lines in the micro-photoluminescence spectra. By coupled microstructural and optical investigations, the QDs giving rise to narrow emission lines were concluded to nucleate in association with a (0001) facet at the apex of the GaN pyramid.
Multidisciplinary design optimization using genetic algorithms
NASA Technical Reports Server (NTRS)
Unal, Resit
1994-01-01
Multidisciplinary design optimization (MDO) is an important step in the conceptual design and evaluation of launch vehicles, since it can have a significant impact on performance and life-cycle cost. The objective is to search the system design space to determine the values of design variables that optimize the performance characteristic subject to system constraints. Gradient-based optimization routines have been used extensively for aerospace design optimization. However, one limitation of gradient-based optimizers is their need for gradient information; as a result, design problems that include discrete variables cannot be studied. Such problems are common in launch vehicle design. For example, the number of engines and material choices must be integer values or assume only a few discrete values. In this study, genetic algorithms are investigated as an approach to MDO problems involving discrete variables and discontinuous domains. Optimization by genetic algorithm (GA) uses a search procedure that is fundamentally different from gradient-based methods: genetic algorithms seek to find good solutions in an efficient and timely manner rather than to find the best solution. GAs are designed to mimic evolutionary selection. A population of candidate designs is evaluated at each iteration, and each individual's probability of reproduction (existence in the next generation) depends on its fitness value (related to the value of the objective function). Progress toward the optimum is achieved by the crossover and mutation operations. GAs are attractive because they use only objective function values in the search process, so gradient calculations are avoided; hence, GAs are able to deal with discrete variables. Studies report success in the use of GAs for aircraft design optimization, trajectory analysis, space structure design, and control system design. In these studies reliable convergence was achieved, but the number of function evaluations was large compared with gradient-based methods.
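The GA mechanics summarized above (fitness-based selection, crossover, mutation, no gradients) can be sketched on a toy discrete design problem. The design space and fitness function below are hypothetical, chosen only to mirror the launch-vehicle example of integer engine counts and categorical material choices.

```python
import random
random.seed(7)

# Toy discrete design space (all values invented): a design is
# (number_of_engines, material_index); fitness peaks at 3 engines,
# material index 1.
MATERIALS = ["aluminum", "titanium", "composite"]

def fitness(design):
    n, m = design
    return -((n - 3) ** 2 + (m - 1) ** 2)          # to be maximized

def ga(pop=20, gens=30):
    P = [(random.randint(1, 9), random.randrange(len(MATERIALS)))
         for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=fitness, reverse=True)
        elite = P[: pop // 2]                       # selection (elitist)
        kids = []
        while len(kids) < pop - len(elite):
            a, b = random.sample(elite, 2)
            child = (a[0], b[1])                    # crossover: swap genes
            if random.random() < 0.3:               # mutate engine count
                child = (random.randint(1, 9), child[1])
            if random.random() < 0.3:               # mutate material
                child = (child[0], random.randrange(len(MATERIALS)))
            kids.append(child)
        P = elite + kids
    return max(P, key=fitness)

best = ga()
```

Note that no derivative of `fitness` is ever taken: discrete genes are handled naturally, which is exactly the advantage the abstract claims over gradient-based optimizers.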
A possibilistic approach to rotorcraft design through a multi-objective evolutionary algorithm
NASA Astrophysics Data System (ADS)
Chae, Han Gil
Most of the engineering design processes in use today in the field may be considered as a series of successive decision making steps. The decision maker uses information at hand, determines the direction of the procedure, and generates information for the next step and/or other decision makers. However, the information is often incomplete, especially in the early stages of the design process of a complex system. As the complexity of the system increases, uncertainties eventually become unmanageable using traditional tools. In such a case, the tools and analysis values need to be "softened" to account for the designer's intuition. One of the methods that deals with issues of intuition and incompleteness is possibility theory. Through the use of possibility theory coupled with fuzzy inference, the uncertainties estimated by the intuition of the designer are quantified for design problems. By involving quantified uncertainties in the tools, the solutions can represent a possible set, instead of a crisp spot, for predefined levels of certainty. From a different point of view, it is a well known fact that engineering design is a multi-objective problem or a set of such problems. The decision maker aims to find satisfactory solutions, sometimes compromising the objectives that conflict with each other. Once the candidates of possible solutions are generated, a satisfactory solution can be found by various decision-making techniques. A number of multi-objective evolutionary algorithms (MOEAs) have been developed, and can be found in the literature, which are capable of generating alternative solutions and evaluating multiple sets of solutions in one single execution of an algorithm. One of the MOEA techniques that has been proven to be very successful for this class of problems is the strength Pareto evolutionary algorithm (SPEA) which falls under the dominance-based category of methods. The Pareto dominance that is used in SPEA, however, is not enough to account for the
Inter-residue spatial distance map prediction by using integrating GA with RBFNN.
Zhang, Guang-Zheng; Huang, De-Shuang
2004-12-01
The spatial ordering of amino acid residues in a protein's primary sequence is an important determinant of the protein's three-dimensional structure. In this paper, we describe a radial basis function neural network (RBFNN), whose hidden centers and basis-function widths are optimized by a genetic algorithm (GA), for predicting three-dimensional spatial distance information from primary sequence information. Experimental evidence on soybean protein sequences indicates the utility of this approach.
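A heavily simplified sketch of the idea of GA-optimized RBF-network hyperparameters, on invented 1-D toy data (the paper's network architecture, genome encoding, and protein data are of course different): here the GA evolves only the basis-function width, while the output weights are fitted by plain gradient descent.

```python
import math, random
random.seed(3)

# Toy regression target; in the paper the inputs/outputs come from
# protein sequences and inter-residue distances.
xs = [i / 10 for i in range(11)]
ys = [math.sin(2 * x) for x in xs]
centers = [0.0, 0.5, 1.0]          # hidden centers (kept fixed for brevity)

def predict(x, width, w):
    return sum(wi * math.exp(-((x - c) / width) ** 2)
               for wi, c in zip(w, centers))

def mse(width, w):
    return sum((predict(x, width, w) - y) ** 2
               for x, y in zip(xs, ys)) / len(xs)

def train_weights(width, steps=300, lr=0.25):
    # batch gradient descent on the linear output weights
    w = [0.0] * len(centers)
    for _ in range(steps):
        for j, c in enumerate(centers):
            g = sum(2 * (predict(x, width, w) - y)
                    * math.exp(-((x - c) / width) ** 2)
                    for x, y in zip(xs, ys)) / len(xs)
            w[j] -= lr * g
    return w

def ga_width(pop=6, gens=6):
    # GA over the width hyperparameter; fitness = trained error
    P = [random.uniform(0.1, 2.0) for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=lambda s: mse(s, train_weights(s)))
        elite = P[: pop // 2]
        P = elite + [max(0.05, random.choice(elite) + random.gauss(0, 0.2))
                     for _ in range(pop - len(elite))]
    return P[0]

best_width = ga_width()
final_mse = mse(best_width, train_weights(best_width))
```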
NASA Astrophysics Data System (ADS)
Benard, N.; Pons-Prats, J.; Periaux, J.; Bugeda, G.; Braud, P.; Bonnet, J. P.; Moreau, E.
2016-02-01
The potential benefits of active flow control are no longer debated. Among many other applications, flow control provides an effective means of manipulating turbulent separated flows. Here, a nonthermal surface plasma discharge (dielectric barrier discharge) is installed at the step corner of a backward-facing step (U0 = 15 m/s, Reh = 30,000, Reθ = 1650). Wall pressure sensors are used to estimate the reattachment location downstream of the step (objective function #1) and to measure the wall pressure fluctuation coefficients (objective function #2). An autonomous multi-variable optimization by genetic algorithm is implemented in an experiment to optimize simultaneously the voltage amplitude, the burst frequency, and the duty cycle of the high-voltage signal producing the surface plasma discharge. The single-objective optimization problems concern alternately the minimization of objective function #1 and the maximization of objective function #2. The present paper demonstrates that, when coupled with the plasma actuator and the wall pressure sensors, the genetic algorithm can find the optimum forcing conditions in only a few generations. At the end of the iterative search process, the minimum reattachment position is achieved by forcing the flow at the shear-layer mode, where a large spreading rate is obtained by increasing the periodicity of the vortex street and by enhancing the vortex-pairing process. Objective function #2 is maximized for actuation at half the shear-layer mode. In this specific forcing mode, time-resolved PIV shows that vortex pairing is reduced and that the strong fluctuations of the wall pressure coefficients result from the periodic passage of flow structures whose size corresponds to the height of the step model.
Shimray, Benjamin A; Singh, Kh Manglem; Khelchandra, Thongam; Mehta, R K
2017-01-01
Every energy system we consider is an entity in itself, defined by parameters that are interrelated according to physical laws. In recent years, tremendous importance has been given to research on site selection in an imprecise environment. In this context, decision making on the suitable location of a power plant installation site is an issue of relevance. Environmental impact assessment has been used as a legislative requirement in site selection for decades. The purpose of the current work is to develop a model for decision makers to rank or classify various power plant projects according to multiple criteria such as air quality, water quality, cost of energy delivery, ecological impact, natural hazard, and project duration. The case study in the paper relates to the application of a multilayer perceptron trained by a genetic algorithm for ranking various power plant locations in India.
NASA Astrophysics Data System (ADS)
Keilis-Borok, V. I.; Soloviev, A.; Gabrielov, A.
2011-12-01
We describe a uniform approach to predicting different extreme events, also known as critical phenomena, disasters, or crises. The following types of such events are considered: strong earthquakes; economic recessions (their onset and termination); surges of unemployment; surges of crime; and electoral changes of the governing party. A uniform approach is possible due to a common feature of these events: each of them is generated by a certain hierarchical dissipative complex system. After coarse-graining, such systems exhibit regular behavior patterns; we look among them for "premonitory patterns" that signal the approach of an extreme event. We introduce a methodology, based on optimal control theory, that assists disaster management in choosing an optimal set of disaster-preparedness measures to undertake in response to a prediction. Predictions with their currently realistic (limited) accuracy do allow preventing a considerable part of the damage through a hierarchy of preparedness measures. The accuracy of a prediction should be known, but it need not be high.
GaAsP solar cells on GaP/Si with low threading dislocation density
NASA Astrophysics Data System (ADS)
Yaung, Kevin Nay; Vaisman, Michelle; Lang, Jordan; Lee, Minjoo Larry
2016-07-01
GaAsP on Si tandem cells represent a promising path towards achieving high efficiency while leveraging the Si solar knowledge base and low-cost infrastructure. However, dislocation densities exceeding 10^8 cm^-2 in GaAsP cells on Si have historically hampered the efficiency of such approaches. Here, we report the achievement of low threading dislocation density values of 4.0-4.6 × 10^6 cm^-2 in GaAsP solar cells on GaP/Si, comparable with more established metamorphic solar cells on GaAs. Our GaAsP solar cells on GaP/Si exhibit high open-circuit voltage and quantum efficiency, allowing them to significantly surpass the power conversion efficiency of previous devices. The results in this work show a realistic path towards dual-junction GaAsP on Si cells with efficiencies exceeding 30%.
NASA Technical Reports Server (NTRS)
Axelrad, Penina; Speed, Eden; Leitner, Jesse A. (Technical Monitor)
2002-01-01
This report summarizes the efforts to date in processing GPS measurements in High Earth Orbit (HEO) applications by the Colorado Center for Astrodynamics Research (CCAR). Two specific projects were conducted: initialization of the orbit propagation software, GEODE, using nominal orbital elements for the IMEX orbit, and processing of actual and simulated GPS data from the AMSAT satellite using a Doppler-only batch filter. CCAR has investigated a number of approaches for initialization of the GEODE orbit estimator with little a priori information. This document describes a batch solution approach that uses pseudorange or Doppler measurements collected over an orbital arc to compute an epoch state estimate. The algorithm is based on limited orbital element knowledge from which a coarse estimate of satellite position and velocity can be determined and used to initialize GEODE. This algorithm assumes knowledge of the nominal orbital elements (a, e, i, Ω, ω) and uses a search on the time of perigee passage (τp) to estimate the host satellite position within the orbit and the approximate receiver clock bias. Results of the method are shown for a simulation including large orbital uncertainties and measurement errors. In addition, CCAR has attempted to process GPS data from the AMSAT satellite to obtain an initial estimate of the orbit. Limited GPS data have been received to date, with few satellites tracked and no computed point solutions. Unknown variables in the received data have made computation of a precise orbit using the recovered pseudoranges difficult. This document describes the Doppler-only batch approach used to compute the AMSAT orbit. Both actual flight data from AMSAT and simulated data generated using the Satellite Tool Kit and Goddard Space Flight Center's Flight Simulator were processed. Results for each case and conclusions are presented.
Phase Reconstruction from FROG Using Genetic Algorithms [Frequency-Resolved Optical Gating]
Omenetto, F.G.; Nicholson, J.W.; Funk, D.J.; Taylor, A.J.
1999-04-12
The authors describe a new technique for obtaining the phase and electric field from FROG measurements using genetic algorithms. Frequency-Resolved Optical Gating (FROG) has gained prominence as a technique for characterizing ultrashort pulses. FROG consists of a spectrally resolved autocorrelation of the pulse to be measured. Typically a combination of iterative algorithms is used, applying constraints from experimental data, and alternating between the time and frequency domain, in order to retrieve an optical pulse. The authors have developed a new approach to retrieving the intensity and phase from FROG data using a genetic algorithm (GA). A GA is a general parallel search technique that operates on a population of potential solutions simultaneously. Operators in a genetic algorithm, such as crossover, selection, and mutation are based on ideas taken from evolution.
An Intelligent Model for Pairs Trading Using Genetic Algorithms.
Huang, Chien-Feng; Hsu, Chi-Jen; Chen, Chi-Chung; Chang, Bao Rong; Li, Chen-An
2015-01-01
Pairs trading is an important and challenging research area in computational finance, in which pairs of stocks are bought and sold in pair combinations for arbitrage opportunities. Traditional methods that solve this set of problems mostly rely on statistical methods such as regression. In contrast to the statistical approaches, recent advances in computational intelligence (CI) are leading to promising opportunities for solving problems in the financial applications more effectively. In this paper, we present a novel methodology for pairs trading using genetic algorithms (GA). Our results showed that the GA-based models are able to significantly outperform the benchmark and our proposed method is capable of generating robust models to tackle the dynamic characteristics in the financial application studied. Based upon the promising results obtained, we expect this GA-based method to advance the research in computational intelligence for finance and provide an effective solution to pairs trading for investment in practice.
Quadruped Robot Locomotion using a Global Optimization Stochastic Algorithm
NASA Astrophysics Data System (ADS)
Oliveira, Miguel; Santos, Cristina; Costa, Lino; Ferreira, Manuel
2011-09-01
Tuning the parameters of nonlinear dynamical systems so that the attained results are good ones is a relevant problem. This article describes the development of a gait optimization system that achieves a fast but stable quadruped robot crawl gait. We combine bio-inspired Central Pattern Generators (CPGs) and Genetic Algorithms (GA). The CPGs are modelled as autonomous differential equations that generate the necessary limb movement to perform the required walking gait. The GA finds parameterizations of the CPG parameters that attain good gaits in terms of speed, vibration and stability. Moreover, two constraint-handling techniques, based on tournament selection and a repairing mechanism, are embedded in the GA to solve the proposed constrained optimization problem and make the search more efficient. The experimental results, performed on a simulated Aibo robot, demonstrate that our approach achieves low vibration with a high velocity and a wide stability margin for a quadruped slow crawl gait.
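The abstract does not give the paper's CPG equations; a common building block for such limb oscillators is the Hopf oscillator, sketched below as a stand-in. Its parameters mu (squared amplitude) and omega (frequency) are the kind of quantities the GA would tune for speed, vibration, and stability.

```python
import math

# One Hopf oscillator, a standard CPG building block (illustrative only;
# the paper's actual CPG model is more elaborate and coupled across limbs).
def cpg_trajectory(mu=1.0, omega=2 * math.pi, dt=1e-3, steps=5000):
    x, y = 0.1, 0.0                       # start near the unstable origin
    out = []
    for _ in range(steps):
        r2 = x * x + y * y
        dx = (mu - r2) * x - omega * y    # Hopf normal form
        dy = (mu - r2) * y + omega * x
        x, y = x + dt * dx, y + dt * dy   # forward Euler step
        out.append(x)
    return out

traj = cpg_trajectory()
```

Whatever the initial condition, the state converges to a stable limit cycle of amplitude sqrt(mu) and frequency omega, which is why such oscillators make robust rhythmic joint-angle generators.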
The Scalar Relativistic Contribution to Ga-Halide Bond Energies
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Arnold, James O. (Technical Monitor)
1998-01-01
The one-electron Douglas-Kroll (DK) and perturbation theory (+R) approaches are used to compute the scalar relativistic contribution to the atomization energies of GaFn. These results are compared with previous GaCln results. While the +R and DK results agree well for the GaCln atomization energies, they differ for GaFn. The present work suggests that the DK approach is more accurate than the +R approach; in addition, the DK approach is less sensitive to the choice of basis set. The computed atomization energies of GaF2 and GaF3 are smaller than the somewhat uncertain experimental values. It is suggested that additional calibration calculations for the scalar relativistic effects in GaF2 and GaF3 would be valuable.
Tango, Fabio; Minin, Luca; Tesauri, Francesco; Montanari, Roberto
2010-03-01
This paper describes field tests on a driving simulator carried out to validate the algorithms and the correlations of dynamic parameters, specifically driving task demand and drivers' distraction, that are able to predict drivers' intentions. These parameters belong to the driver's model developed by the AIDE (Adaptive Integrated Driver-vehicle InterfacE) European Integrated Project. Drivers' behavioural data have been collected from the simulator tests to model and validate these parameters using machine learning techniques, specifically adaptive neuro-fuzzy inference systems (ANFIS) and artificial neural networks (ANN). Two models of task demand and distraction have been developed, one for each adopted technique. The paper provides an overview of the driver's model, a description of the task demand and distraction modelling, and the tests conducted for the validation of these parameters. A test comparing predicted and expected outcomes of the modelled parameters for each machine learning technique has been carried out; for distraction in particular, promising results (low prediction errors) have been obtained by adopting an artificial neural network.
Life-histories from Landsat: Algorithmic approaches to distilling Earth's recent ecological dynamics
NASA Astrophysics Data System (ADS)
Kennedy, R. E.; Yang, Z.; Braaten, J.; Cohen, W. B.; Ohmann, J.; Gregory, M.; Roberts, H.; Meigs, G. W.; Nelson, P.; Pfaff, E.
2012-12-01
As the longest running continuous satellite Earth-observation record, data from the Landsat family of sensors have the potential to uniquely reveal temporal dynamics critical to many terrestrial disciplines. The convergence of a free-data access policy in the late 2000s with a rapid rise in computing and storage capacity has highlighted an increasingly common challenge: effective distillation of information from large digital datasets. Here, we describe how an algorithmic workflow informed by basic understanding of ecological processes is being used to convert multi-terabyte image time-series datasets into concise renditions of landscape dynamics. Using examples from our own work, we show how these are in turn applied to monitor vegetative disturbance and growth dynamics in national parks, to evaluate effectiveness of natural resource policy in national forests, to constrain and inform biogeochemical models, to measure carbon impacts of natural and anthropogenic stressors, to assess impacts of land use change on threatened species, to educate and inform students, and to better characterize complex links between changing climate, insect pathogens, and wildfire in forests.
NASA Astrophysics Data System (ADS)
Hashemi-Dezaki, Hamed; Mohammadalizadeh-Shabestary, Masoud; Askarian-Abyaneh, Hossein; Rezaei-Jegarluei, Mohammad
2014-01-01
In electrical distribution systems, a great amount of power is wasted along the lines, and the power factors, voltage profiles, and total harmonic distortions (THDs) of most loads are not as desired. These parameters therefore play a highly important role in wasting money and energy, and both consumers and sources suffer from high distortion rates and even instabilities. Active power filters (APFs) are an innovative remedy for this adversity and have recently made use of instantaneous reactive power theory. In this paper, a novel method is proposed to optimize the allocation of APFs. The introduced method is based on instantaneous reactive power theory in vectorial representation, which makes it possible to assess different compensation strategies. Proper placement of the APFs in the system also plays a crucial role in reducing loss costs and improving power quality. To optimize APF placement, a new objective function has been defined on the basis of five terms: total losses, power factor, voltage profile, THD, and cost. A genetic algorithm has been used to solve the optimization problem. The results of applying this method to a distribution network illustrate its advantages.
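The five-term objective can be sketched as a weighted score over candidate APF locations. All bus figures and weights below are made up for illustration; for a network this small exhaustive search suffices, and the genetic algorithm would take over when the placement space is large.

```python
# Hypothetical 3-bus example: per-bus figures describing the network if a
# single APF were installed at that bus (all numbers invented).
buses = {
    1: dict(losses=120.0, pf=0.91, vdev=0.04, thd=0.06, cost=10.0),
    2: dict(losses=100.0, pf=0.95, vdev=0.03, thd=0.04, cost=12.0),
    3: dict(losses=110.0, pf=0.93, vdev=0.05, thd=0.05, cost=8.0),
}

def objective(m, w=(1.0, 50.0, 200.0, 300.0, 1.0)):
    # Weighted five-term score: penalize losses, voltage deviation, THD,
    # and cost; reward a high power factor (hence the minus sign).
    # The weights are illustrative, not from the paper.
    return (w[0] * m["losses"] - w[1] * m["pf"]
            + w[2] * m["vdev"] + w[3] * m["thd"] + w[4] * m["cost"])

best_bus = min(buses, key=lambda b: objective(buses[b]))
```

With many buses and multiple filters the candidate set grows combinatorially, which is where the GA search described in the abstract replaces the `min` above.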
NASA Astrophysics Data System (ADS)
Kumar, Vijay M.; Murthy, ANN; Chandrashekara, K.
2012-05-01
The production planning problem of a flexible manufacturing system (FMS) concerns decisions that have to be made before an FMS begins to produce parts according to a given production plan during an upcoming planning horizon. The main aspect of production planning is the machine loading problem, in which a subset of jobs to be manufactured is selected and their operations are assigned to the relevant machines. Such problems are not only combinatorial optimization problems but also NP-hard, making it difficult to obtain satisfactory solutions using traditional optimization techniques. In this paper, an attempt has been made to address the machine loading problem with the objectives of minimizing system unbalance and maximizing throughput simultaneously, while satisfying the system constraints related to available machining time and tool slots, using a meta-hybrid heuristic technique based on genetic algorithm and particle swarm optimization. The results reported in this paper demonstrate the model's efficiency and examine the performance of the system with respect to measures such as throughput and system utilization.
Solitro, Giovanni F; Amirouche, Farid
2016-04-01
Pedicle screws are typically used for fusion, percutaneous fixation, and as a means of gripping a spinal segment. The screws act as rigid and stable anchor points to bridge and connect with a rod as part of a construct. The foundation of the fusion is directly related to the placement of these screws. Malposition of pedicle screws causes intraoperative complications such as pedicle fractures and dural lesions and is a contributing factor to fusion failure. Computer-assisted spine surgery (CASS) and patient-specific drill templates were developed to reduce this failure rate, but the trajectory of the screws remains a decision driven by anatomical landmarks that are often not easily defined. Current data show the need for a robust and reliable technique that prevents screw misplacement. Furthermore, there is a need to enhance screw insertion guides to overcome the distortion of anatomical landmarks, which is viewed as a limiting factor by current techniques. The objective of this study is to develop a method and mathematical lemmas that are fundamental to the development of computer algorithms for pedicle screw placement. Using the proposed methodology, we show how we can generate automated optimal safe screw insertion trajectories based on the identification of a set of intrinsic parameters. The results, obtained from the validation of the proposed method on two full thoracic segments, are similar to previous morphological studies. The simplicity of the method, being pedicle-arch based, makes it applicable to vertebrae whose landmarks are either not well defined, altered, or distorted.
Robot body self-modeling algorithm: a collision-free motion planning approach for humanoids.
Leylavi Shoushtari, Ali
2016-01-01
Motion planning for humanoid robots is one of the critical issues due to their high redundancy and to theoretical and technical considerations, e.g. stability, motion feasibility and collision avoidance. The strategies the central nervous system employs to plan, signal and control human movements are a source of inspiration for dealing with these problems. Self-modeling is a concept inspired by body self-awareness in humans. In this research it is integrated into an optimal motion planning framework in order to detect and avoid collision of the manipulated object with the humanoid's body while performing a dynamic task. Twelve parametric functions are designed as self-models to determine the boundary of the humanoid's body. The boundaries mathematically defined by the self-models are then employed to calculate the safe region for the box, so that it avoids collision with the robot. Four different objective functions are employed in motion simulation to validate the robustness of the algorithm under different dynamics. The results also confirm the collision avoidance, realism and stability of the predicted motion.
A novel algorithm for ventricular arrhythmia classification using a fuzzy logic approach.
Weixin, Nong
2016-12-01
In the present study, it is shown that unnecessary implantable cardioverter-defibrillator (ICD) shocks are often delivered to patients with an ambiguous ECG rhythm in the overlap zone between ventricular tachycardia (VT) and ventricular fibrillation (VF); these shocks significantly increase mortality. Therefore, accurate classification of the arrhythmia into VT, organized VF (OVF) or disorganized VF (DVF) is crucial to assist ICDs in delivering appropriate therapy. A classification algorithm using a fuzzy logic classifier was developed for accurately classifying the arrhythmias into VT, OVF or DVF. Compared with other studies, our method combines ten ECG detectors, calculated in the time domain and the frequency domain and at different levels of complexity, to detect subtle structural differences between VT, OVF and DVF. The classification in the overlap zone between VT and VF is refined by this study to avoid ambiguous identification. The present method was trained and tested using public ECG signal databases. A two-level classification was performed: VT was first detected with an accuracy of 92.6 %, and then OVF and DVF were discriminated with an accuracy of 84.5 %. The validation results indicate that the proposed method has superior performance in identifying the organization level among the three types of arrhythmia (VT, OVF and DVF) and is promising for improving the choice of appropriate therapy and decreasing the possibility of sudden cardiac death.
Absolute GPS Positioning Using Genetic Algorithms
NASA Astrophysics Data System (ADS)
Ramillien, G.
A new inverse approach for restoring the absolute coordinates of a ground-based station from three or four observed GPS pseudo-ranges is proposed. This stochastic method is based on simulations of natural evolution named genetic algorithms (GA). These iterative procedures provide fairly good and robust estimates of the absolute positions in the Earth's geocentric reference system. For comparison/validation, GA results are compared with those obtained using the classical linearized least-squares scheme for the determination of the XYZ location proposed by Bancroft (1985), which is strongly limited by the number of available observations (i.e., here, the number of input pseudo-ranges must be four). The r.m.s. accuracy of the non-linear cost function reached by this latter method is typically ~10^-4 m^2, corresponding to ~300-500-m accuracies for each geocentric coordinate. However, GA can provide more acceptable solutions (r.m.s. errors < 10^-5 m^2), even when only three instantaneous pseudo-ranges are used, such as after a loss of lock during a GPS survey. Tuned GA parameters used in the different simulations are N=1000 starting individuals, as well as Pc=60-70% and Pm=30-40% for the crossover probability and mutation rate, respectively. Statistical tests on the ability of GA to recover acceptable coordinates in the presence of significant noise levels are made by simulating nearly 3000 random samples of erroneous pseudo-ranges. Here, two main sources of measurement error are considered in the inversion: (1) typical satellite-clock errors and/or 300-metre-variance atmospheric delays, and (2) Geometrical Dilution of Precision (GDOP) due to the particular GPS satellite configuration at the time of acquisition. Extracting valuable information even from low-quality starting range observations, GA offer an interesting alternative for high-precision GPS positioning.
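The GA inversion can be illustrated on a 2-D toy analogue of the pseudo-range cost function (the real problem is 3-D and includes a receiver clock bias; the satellite positions and the true receiver point below are invented, and the GA here is a simple elitist scheme rather than the paper's tuned operators):

```python
import math, random
random.seed(5)

# 2-D toy analogue of absolute positioning: recover the receiver position
# from ranges to known satellite positions by minimizing a least-squares
# cost with a GA. All coordinates are in km and invented.
sats = [(0.0, 20200.0), (15000.0, 18000.0), (-14000.0, 16000.0)]
truth = (1200.0, 3400.0)
ranges = [math.dist(truth, s) for s in sats]   # noise-free pseudo-ranges

def cost(p):
    # sum of squared range residuals; zero at the true position
    return sum((math.dist(p, s) - r) ** 2 for s, r in zip(sats, ranges))

def ga(pop=60, gens=150, span=10000.0):
    P = [(random.uniform(-span, span), random.uniform(-span, span))
         for _ in range(pop)]
    for g in range(gens):
        P.sort(key=cost)
        elite = P[: pop // 3]                  # selection (elitist)
        sigma = span * 0.5 ** (g / 15)         # shrinking mutation scale
        P = elite + [(random.choice(elite)[0] + random.gauss(0, sigma),
                      random.choice(elite)[1] + random.gauss(0, sigma))
                     for _ in range(pop - len(elite))]
    return min(P, key=cost)

est = ga()
```

As in the abstract, the search minimizes a non-linear cost built from range residuals directly, so it needs no linearization and no minimum of four observations.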
Adaptive MANET Multipath Routing Algorithm Based on the Simulated Annealing Approach
Kim, Sungwook
2014-01-01
A mobile ad hoc network is a system of wireless mobile nodes that can freely and dynamically self-organize into network topologies without any preexisting communication infrastructure. Owing to characteristics such as temporary topology and the absence of centralized authority, routing is one of the major issues in ad hoc networks. In this paper, a new multipath routing scheme is proposed that employs a simulated annealing approach. The proposed metaheuristic can achieve mutually beneficial trade-offs in hostile, dynamic real-world network conditions. The proposed routing scheme is therefore a powerful method for finding effective solutions to the conflicting objectives of mobile ad hoc network routing. Simulation results indicate that the proposed paradigm adapts best to the variation of dynamic network situations. The average remaining energy, network throughput, packet loss probability, and traffic load distribution are improved by about 10%, 10%, 5%, and 10%, respectively, over the existing schemes. PMID:25032241
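The simulated annealing metaheuristic named above follows a standard Metropolis-acceptance loop, sketched below. This is a generic illustration, not the paper's routing formulation: in the MANET setting the `cost` function would score a candidate multipath route set, which the abstract does not spell out, so a scalar test function stands in; the step size, initial temperature, and geometric cooling schedule are assumptions.

```python
import math
import random

def simulated_annealing(cost, x0, step=1.0, t0=1.0, cooling=0.995, iters=2000, seed=0):
    """Generic simulated-annealing loop with Metropolis acceptance."""
    rng = random.Random(seed)
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)        # perturb the current solution
        fc = cost(cand)
        # accept improvements always; accept uphill moves with Boltzmann probability
        if fc < fx or rng.random() < math.exp(-(fc - fx) / max(t, 1e-12)):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling                               # geometric cooling schedule
    return best, fbest
```

The uphill-acceptance term is what lets the search escape local optima in a rugged objective landscape, the property that motivates using SA for dynamic routing problems.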
New Decentralized Algorithms for Spacecraft Formation Control Based on a Cyclic Approach
2010-06-01
switching strategy to deal with the effects of Earth magnetic field causing angular momentum-build up in low Earth orbiting systems. The control strategies...Fig. 3-5b) how the control effort reduces as the spacecraft reach the desired low -effort trajectories. In this case the orbits are coplanar and the...2.2.3 Consensus problem and approach to formation control . . . . 2.2.4 Contraction analysis and synchronization . . . . . . . . . . . . 2.2.5 Cyclic
2001-06-01
fin during maneuvers at high angles of attack. in the IFOST Program test facility in Australia. The An initial approach to minimize the problem...controller countries within The Technical Co-operation Program , robustness under different excitation loads. (TTCP) that include the F/A-18 in their fleets...The TTCP is a program of technical collaboration and data exchange among five nations: Canada, the United NASTRAN Model States, Australia, United
Chou, Ting-Chao
2011-01-01
Mass-action-law-based system analysis, via mathematical induction and deduction, leads to a generalized theory and algorithm that allows computerized simulation of dose-effect dynamics with small-size experiments using a small number of data points in vitro, in animals, and in humans. The median-effect equation of the mass-action law, deduced from over 300 mechanism-specific equations, has been shown to be the unified theory that serves as the common link for complicated biomedical systems. With the median-effect principle as the common denominator, its applications are mechanism-independent, drug-unit-independent, and dynamic-order-independent, and can be used generally for single-drug analysis or for multiple drug combinations at constant or non-constant ratios. Since the “median” is the common link and universal reference point in biological systems, this general framework leads to computerized quantitative bio-informatics for econo-green bio-research in broad disciplines. Specific applications of the theory, especially those relevant to drug discovery, drug combination, and clinical trials, have been cited or illustrated in terms of algorithms, experimental design and computerized simulation for data analysis. Lessons learned from cancer research during the past fifty years provide a valuable opportunity to reflect, to improve the conventional divergent approach, and to introduce a new convergent avenue, based on the mass-action law principle, for efficient cancer drug discovery and low-cost drug development. PMID:22016837
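The median-effect equation referenced above has a compact closed form, fa/fu = (D/Dm)^m, where fa and fu are the fractions affected and unaffected, D the dose, Dm the median-effect dose, and m the sigmoidicity coefficient. A minimal numeric sketch (generic, not the author's software; function and parameter names are ours):

```python
def fraction_affected(dose, dm, m):
    """Median-effect equation: fa/fu = (D/Dm)^m, so fa = 1 / (1 + (Dm/D)^m)."""
    return 1.0 / (1.0 + (dm / dose) ** m)

def dose_for_effect(fa, dm, m):
    """Invert the median-effect equation: Dx = Dm * (fa / (1 - fa))^(1/m)."""
    return dm * (fa / (1.0 - fa)) ** (1.0 / m)
```

By construction a dose equal to Dm affects exactly half the system, and the inverse form is the building block for the combination-index analyses of multiple drugs mentioned in the abstract.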
NASA Astrophysics Data System (ADS)
Mallick, Rajnish; Ganguli, Ranjan; Seetharama Bhat, M.
2015-09-01
The objective of this study is to determine an optimal trailing-edge flap configuration and flap location that achieve minimum hub vibration levels and flap actuation power simultaneously. An aeroelastic analysis of a soft in-plane four-bladed rotor is performed in conjunction with optimal control. A second-order polynomial response surface based on an orthogonal array (OA) with a 3-level design describes both objectives adequately. Two new orthogonal arrays, called MGB2P-OA and MGB4P-OA, are proposed to generate nonlinear response surfaces with all interaction terms for two and four parameters, respectively. A multi-objective bat algorithm (MOBA) approach is used to obtain the optimal design point for the mutually conflicting objectives. MOBA is a recently developed nature-inspired metaheuristic optimization algorithm based on the echolocation behaviour of bats. It is found that the MOBA-derived Pareto-optimal trailing-edge flap design reduces vibration levels by 73% and flap actuation power by 27% in comparison with the baseline design.
NASA Astrophysics Data System (ADS)
Amian, M.; Setarehdan, S. Kamaledin; Yousefi, H.
2014-09-01
Functional near-infrared spectroscopy (fNIRS) is a relatively new noninvasive way to measure oxyhemoglobin and deoxyhemoglobin concentration changes in the human brain. Safer and more affordable than other functional imaging techniques such as fMRI, it is widely used for special applications such as infant examinations and the monitoring of pilots' brains. In such applications, fNIRS data sometimes suffer from undesirable movements of the subject's head, called motion artifacts, which lead to signal corruption. Motion artifacts in fNIRS data may result in erroneous conclusions or diagnoses. In this work we try to reduce these artifacts with a novel Kalman filtering algorithm that is based on an autoregressive moving average (ARMA) model of the fNIRS system. Our proposed method does not require any additional hardware or sensors, nor does it need the whole data set at once; both were unavoidable requirements of older algorithms such as adaptive filtering and Wiener filtering. Results show that our approach is successful in cleaning contaminated fNIRS data.
A Methodology for the Hybridization Based in Active Components: The Case of cGA and Scatter Search.
Villagra, Andrea; Alba, Enrique; Leguizamón, Guillermo
2016-01-01
This work presents the results of a new methodology for hybridizing metaheuristics. By first locating the active components (parts) of one algorithm and then inserting them into a second one, we can build efficient and accurate optimization, search, and learning algorithms. This gives a concrete way of constructing new techniques that contrasts with the widespread ad hoc way of hybridizing. In this paper, the enhanced algorithm is a Cellular Genetic Algorithm (cGA), which has been successfully used in the past to find solutions to hard optimization problems. In order to extend and corroborate the use of active components as an emerging hybridization methodology, we propose here the use of active components taken from Scatter Search (SS) to improve cGA. The results obtained over a varied set of benchmarks are highly satisfactory in efficacy and efficiency when compared with a standard cGA. Moreover, the proposed hybrid approach (i.e., cGA+SS) has shown encouraging results with regard to earlier applications of our methodology. PMID:27403153
A Comparative Study of Probability Collectives Based Multi-agent Systems and Genetic Algorithms
NASA Technical Reports Server (NTRS)
Huang, Chien-Feng; Wolpert, David H.; Bieniawski, Stefan; Strauss, Charles E. M.
2005-01-01
We compare Genetic Algorithms (GAs) with Probability Collectives (PC), a new framework for distributed optimization and control. In contrast to GAs, PC-based methods do not update populations of solutions. Instead they update an explicitly parameterized probability distribution p over the space of solutions. That updating of p arises as the optimization of a functional of p. The functional is chosen so that any p that optimizes it should be peaked about good solutions. The PC approach works in both continuous and discrete problems. It does not suffer from the resolution limitation of the finite-bit-length encoding of parameters into GA alleles. It also has deep connections with both game theory and statistical physics. We review the PC approach using its motivation as the information-theoretic formulation of bounded rationality for multi-agent systems. It is then compared with GAs on a diverse set of problems. To handle high-dimensional surfaces, in the PC method investigated here p is restricted to a product distribution. Each distribution in that product is controlled by a separate agent. The test functions were selected for their difficulty using either traditional gradient descent or genetic algorithms. On those functions the PC-based approach significantly outperforms traditional GAs in rate of descent, trapping in false minima, and long-term optimization.
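The core idea above — updating an explicitly parameterized product distribution rather than a population of solutions — can be sketched as follows. This is a simplified illustration, not the authors' formulation: the Boltzmann reweighting, temperature, and learning rate are assumptions standing in for the optimization of the information-theoretic functional, and each per-variable distribution plays the role of one agent.

```python
import math
import random

def probability_collectives(cost, n_vars, n_vals, rounds=60, samples=200,
                            temp=0.5, lr=0.5, seed=0):
    """Minimal probability-collectives-style optimizer (illustrative only).

    Each variable ("agent") keeps an explicit categorical distribution over
    its own values; the product distribution is sampled, and each marginal is
    pulled toward a Boltzmann reweighting of the sampled costs.
    """
    rng = random.Random(seed)
    p = [[1.0 / n_vals] * n_vals for _ in range(n_vars)]  # product distribution
    for _ in range(rounds):
        draws = [[rng.choices(range(n_vals), weights=p[i])[0] for i in range(n_vars)]
                 for _ in range(samples)]
        costs = [cost(d) for d in draws]
        base = min(costs)
        w = [math.exp(-(c - base) / temp) for c in costs]  # Boltzmann weights
        tot = sum(w)
        for i in range(n_vars):
            target = [0.0] * n_vals
            for d, wt in zip(draws, w):                    # weighted marginal of var i
                target[d[i]] += wt / tot
            p[i] = [(1 - lr) * pi + lr * ti for pi, ti in zip(p[i], target)]
    return [pi.index(max(pi)) for pi in p]                 # most probable value per agent
```

Note that no individual solution is ever carried between rounds; only the distribution parameters are, which is exactly the contrast with a GA that the abstract draws.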
Zheng, Ying; Yeh, Chen-Wei; Yang, Chi-Da; Jang, Shi-Shang; Chu, I-Ming
2007-08-31
Biological information generated by high-throughput technology has made the systems approach feasible for many biological problems. With this approach, optimization of metabolic pathways has been successfully applied to amino acid production. However, in this technique, gene modifications of the metabolic control architecture as well as enzyme expression levels are coupled, resulting in a mixed-integer nonlinear programming problem. Furthermore, the stoichiometric complexity of the metabolic pathway, along with the strongly nonlinear behaviour of the regulatory kinetic models, produces a highly rugged contour in the overall optimization problem. There may exist local optima that achieve the same level of production as the global optimum through different flux distributions. The purpose of this work is to develop a novel stochastic optimization approach, the information-guided genetic algorithm (IGA), to discover local optima with different levels of modification of the regulatory loop and different production rates. The novelties of this work include the use of information theory, local search, and clustering analysis to discover the local optima that have physical meaning among the qualified solutions.
Latifoğlu, Fatma
2013-09-01
In this study a novel approach based on 2D FIR filters is presented for denoising digital images. In this approach the coefficients of the 2D FIR filters were optimized using the Artificial Bee Colony (ABC) algorithm. To obtain the best filter design, filters of different sizes (3×3, 5×5, 7×7, 11×11) and connection types (cascade and parallel) were tested during optimization. First, speckle noise with variances of 1, 0.6, 0.8 and 0.2 was added to the synthetic test image. These noisy images were then denoised with both the proposed approach and well-known filter types such as Gaussian, mean and average filters. For image quality assessment, metrics such as mean square error (MSE), peak signal-to-noise ratio (PSNR) and signal-to-noise ratio (SNR) were used. Even for the noise of maximum variance (the noisiest case), the proposed approach performed better than the other filtering methods on the noisy test images. In addition to the test images, speckle noise with a variance of 1 was added to a fetal ultrasound image, and this noisy image was denoised with very high PSNR and SNR values. The performance of the proposed approach was also tested on several clinical ultrasound images such as those obtained from ovarian, abdominal and liver tissues. The results of this study show that 2D FIR filters designed using ABC optimization can eliminate speckle noise quite well, both in noise-added test images and in intrinsically noisy ultrasound images.
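The image-quality metrics named above have standard definitions; a minimal sketch in pure Python, treating an image as a flattened 1-D pixel sequence (generic metric code, not the paper's evaluation harness):

```python
import math

def mse(a, b):
    """Mean squared error between two equal-length pixel sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(a, b, max_val=255.0):
    """Peak signal-to-noise ratio in dB: 10*log10(MAX^2 / MSE); higher is better."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * math.log10(max_val ** 2 / m)
```

PSNR references the peak pixel value rather than the signal power, which is why it is the customary figure of merit for comparing denoising filters on 8-bit images.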
OPC recipe optimization using genetic algorithm
NASA Astrophysics Data System (ADS)
Asthana, Abhishek; Wilkinson, Bill; Power, Dave
2016-03-01
Optimization of OPC recipes is not trivial due to the multiple parameters that need tuning and their correlation. Usually, no standard methodology exists for choosing the initial recipe settings, and in the keyword development phase, parameters are chosen based on previous learning, on vendor recommendations, or to resolve specific problems on particular special constructs. Such approaches fail to holistically quantify the effects of the parameters on other designs or possible new designs, and to an extent rely on the keyword developer's intuition. In addition, when a quick fix is needed for a new design, numerous customization statements are added to the recipe, making it more complex. The present work demonstrates the application of the Genetic Algorithm (GA) technique for optimizing OPC recipes. GA is a search technique that mimics Darwinian natural selection and has applications in various science and engineering disciplines. Here, the GA search heuristic is applied to two problems: (a) an overall OPC recipe optimization with respect to selected parameters and (b) improving printing and via coverage at line-end geometries. As will be demonstrated, the optimized recipe significantly reduced the number of ORC violations for case (a). For case (b), line ends of various features showed significant printing and filling improvement.
NASA Astrophysics Data System (ADS)
Nasiri, G. Reza; Davoudpour, Hamid; Movahedi, Yaser
Distribution decisions play an important role in the strategic planning of supply chain management. In order to make the most appropriate strategic decisions in a supply chain, decision makers should focus on identifying and managing the sources of uncertainty in the supply chain process. In this paper, these conditions are considered in a multi-period problem with demands that change over the planning horizon. We develop a non-linear mixed-integer model and propose an efficient heuristic genetic-algorithm-based method that finds the optimal facility locations/allocations, relocation times and the total cost for the whole supply chain. To explore the viability and efficiency of the proposed model and the solution approach, various computational experiments are performed on real-sized case problems.
Street, Maria E; Buscema, Massimo; Smerieri, Arianna; Montanini, Luisa; Grossi, Enzo
2013-12-01
One of the specific aims of systems biology is to model and discover properties of the functioning of cells, tissues and organisms. A systems biology approach was undertaken to investigate, as far as possible, the entire system of intra-uterine growth available to us, to assess the variables of interest, to discriminate those which were effectively related to appropriate or restricted intrauterine growth, and to achieve an understanding of the system in these two conditions. The Artificial Adaptive Systems, which include Artificial Neural Networks and Evolutionary Algorithms, led us to the first analyses. These analyses identified the importance of the biochemical variables IL-6, IGF-II and IGFBP-2 protein concentrations in placental lysates, offered a new insight into placental markers of fetal growth within the IGF and cytokine systems, confirmed their interrelationships, and offered a critical assessment of studies previously performed.
Patel, Vishal K; Naik, Sagar K; Naidich, David P; Travis, William D; Weingarten, Jeremy A; Lazzaro, Richard; Gutterman, David D; Wentowski, Catherine; Grosu, Horiana B; Raoof, Suhail
2013-03-01
The solitary pulmonary nodule (SPN) is frequently encountered on chest imaging and poses an important diagnostic challenge to clinicians. The differential diagnosis is broad, ranging from benign granulomata and infectious processes to malignancy. Important concepts in the evaluation of SPNs include the definition, morphologic characteristics via appropriate imaging modalities, and the calculation of pretest probability of malignancy. Morphologic differentiation of SPN into solid or subsolid types is important in the choice of follow-up and further management. In this first part of a two-part series, we describe the morphologic characteristics and various imaging modalities available to further characterize SPN. In Part 2, we will describe the determination of pretest probability of malignancy and an algorithmic approach to the diagnosis of SPN.
NASA Astrophysics Data System (ADS)
Sharlandjiev, P. S.; Nazarova, D. I.
2013-11-01
The optical characteristics of tantalum pentoxide films deposited on a Si(100) substrate by reactive sputtering are studied. These films are investigated as high-k materials for the needs of nano-electronics, i.e. the design of dynamic random access memories, etc. One problem in their implementation is that metal oxides are thermodynamically unstable in contact with Si, and an interfacial layer forms between the oxide film and the silicon substrate during the deposition process. Here, the focus is on the optical properties of that interfacial layer, which is studied by spectral photometric measurements. The evaluation of the optical parameters of the structure is performed with a genetic algorithm approach. The spectral range of evaluation covers deep UV to NIR. The equivalent physical thickness (2.5 nm) and the equivalent refractive index of the interfacial layer are estimated from 236 to 750 nm, as well as the thickness of the tantalum pentoxide film (9.5 nm).
NASA Astrophysics Data System (ADS)
Kurster, M.
1993-07-01
A newly developed method for the Doppler imaging of star spot distributions on active late-type stars is presented. It comprises an algorithm particularly adapted to the (discrete) Doppler imaging problem (including eclipses) and is very efficient in determining the positions and shapes of star spots. A variety of tests demonstrates the capabilities as well as the limitations of the method by investigating the effects that uncertainties in various stellar parameters have on the image reconstruction. Any systematic errors within the reconstructed image are found to be a result of the ill-posed nature of the Doppler imaging problem and not a consequence of the adopted approach. The largest uncertainties are found with respect to the dynamic range of the image (brightness or temperature contrast). This kind of uncertainty has little effect on studies of star spot migrations aimed at determining differential rotation and butterfly diagrams for late-type stars.
NASA Astrophysics Data System (ADS)
Darne, Chinmay; Lu, Yujie; Sevick-Muraca, Eva M.
2014-01-01
Emerging fluorescence and bioluminescence tomography approaches share several features with, yet remain distinct from, the established emission tomographies of PET and SPECT. Although both nuclear and optical imaging modalities involve counting photons, nuclear imaging techniques collect the emitted high-energy (100-511 keV) photons after radioactive decay of radionuclides, whereas optical techniques count low-energy (1.5-4.1 eV) photons that are scattered and absorbed by tissues, requiring models of light transport for quantitative image reconstruction. Fluorescence imaging has recently been translated into the clinic, demonstrating high sensitivity, modest tissue penetration depth, and fast, millisecond image acquisition times. As a consequence, the promise of quantitative optical tomography as a complement to small-animal PET and SPECT remains high. In this review, we summarize the different instrumentation, methodological approaches and schema for inverse image reconstruction for optical tomography, including luminescence and fluorescence modalities, and comment on limitations and key technological advances needed for further discovery research and translation.
Performance Evaluation of the Approaches and Algorithms using Hamburg Airport Operations
NASA Technical Reports Server (NTRS)
Zhu, Zhifan; Lee, Hanbong; Jung, Yoon; Okuniek, Nikolai; Gerdes, Ingrid; Schier, Sebastian
2016-01-01
The German Aerospace Center (DLR) and the National Aeronautics and Space Administration (NASA) have been independently developing and testing their own concepts and tools for airport surface traffic management. Although these concepts and tools have been tested individually for European and US airports, they have never been compared or analyzed side-by-side. This paper presents the collaborative research devoted to the evaluation and analysis of two different surface management concepts. Hamburg Airport was used as a common test bed airport for the study. First, two independent simulations using the same traffic scenario were conducted: one by the DLR team using the Controller Assistance for Departure Optimization (CADEO) and the Taxi Routing for Aircraft: Creation and Controlling (TRACC) in a real-time simulation environment, and one by the NASA team based on the Spot and Runway Departure Advisor (SARDA) in a fast-time simulation environment. A set of common performance metrics was defined. The simulation results showed that both approaches produced operational benefits in efficiency, such as reducing taxi times, while maintaining runway throughput. Both approaches generated the gate pushback schedule to meet the runway schedule, such that the runway utilization was maximized. The conflict-free taxi guidance by TRACC helped avoid taxi conflicts and reduced taxiing stops, but the taxi benefit needed to be assessed together with runway throughput to analyze the overall performance objective.
mRAISE: an alternative algorithmic approach to ligand-based virtual screening
NASA Astrophysics Data System (ADS)
von Behren, Mathias M.; Bietz, Stefan; Nittinger, Eva; Rarey, Matthias
2016-08-01
Ligand-based virtual screening is a well-established method to find new lead molecules in today's drug discovery process. In order to be applicable in day-to-day practice, such methods have to face multiple challenges. The most important is the reliability of the results, which can be shown and compared in retrospective studies. Furthermore, in the case of 3D methods, they need to provide biologically relevant molecular alignments of the ligands that can be further investigated by a medicinal chemist. Last but not least, they have to be able to screen large databases in reasonable time. Many algorithms for ligand-based virtual screening have been proposed in the past, most of them based on pairwise comparisons. Here, a new method called mRAISE is introduced. Based on structural alignments, it uses a descriptor-based bitmap search engine (RAISE) to achieve efficiency. Alignments created on the fly by the search engine are evaluated with an independent shape-based scoring function also used for the ranking of compounds. The ranking as well as the alignment quality of the method are evaluated and compared to other state-of-the-art methods. On the commonly used Directory of Useful Decoys dataset, mRAISE achieves an average area under the ROC curve of 0.76, an average enrichment factor at 1% of 20.2 and an average hit rate at 1% of 55.5. With these results, mRAISE is always among the top-performing methods with available data for comparison. To assess the quality of the alignments calculated by ligand-based virtual screening methods, we introduce a new dataset containing 180 prealigned ligands for 11 diverse targets. Within the top ten ranked conformations, the alignment closest to the X-ray structure calculated with mRAISE has a root-mean-square deviation of less than 2.0 Å for 80.8% of alignment pairs and achieves a median of less than 2.0 Å for eight of the 11 cases. The dataset used to rate the quality of the calculated alignments is freely available at
An approach to the development and analysis of wind turbine control algorithms
Wu, K.C.
1998-03-01
The objective of this project is to develop the capability of symbolically generating an analytical model of a wind turbine for studies of control systems. This report focuses on a theoretical formulation of the symbolic equations of motion (EOMs) modeler for horizontal axis wind turbines. In addition to the power train dynamics, a generic 7-axis rotor assembly is used as the base model from which the EOMs of various turbine configurations can be derived. A systematic approach to generate the EOMs is presented using d'Alembert's principle and Lagrangian dynamics. A Matlab M file was implemented to generate the EOMs of a two-bladed, free yaw wind turbine. The EOMs will be compared in the future to those of a similar wind turbine modeled with the YawDyn code for verification. This project was sponsored by Sandia National Laboratories as part of the Adaptive Structures and Control Task. This is the final report of Sandia Contract AS-0985.
Road detection in spaceborne SAR images using genetic algorithm
NASA Astrophysics Data System (ADS)
Jeon, Byoungki; Jang, JeongHun; Hong, KiSang
2000-08-01
This paper presents a technique for the detection of roads in a spaceborne SAR image using a genetic algorithm. Roads in a spaceborne SAR image can be modelled as curvilinear structures with some thickness. Curve segments, which represent candidate positions of roads, are extracted from the image using a curvilinear structure detector, and roads are detected accurately by grouping those curve segments. For this purpose, we designed a grouping method based on a genetic algorithm (GA), one of the global optimization methods, combined perceptual grouping factors with it, and reduced its overall computational cost by introducing a thresholding operation and a region-growing concept. To detect roads more accurately, postprocessing, including noisy curve segment removal, is performed after grouping. We applied our method to ERS-1 SAR images that have a resolution of about 30 meters, and the experimental results show that our method can detect roads accurately and is much faster than a globally applied GA approach.
Hua, Hong-Li; Zhang, Fa-Zhan; Labena, Abraham Alemayehu; Dong, Chuan; Jin, Yan-Ting
2016-01-01
Investigation of essential genes is significant for comprehending the minimal gene set of a cell and discovering potential drug targets. In this study, a novel approach based on multiple homology mapping and a machine learning method was introduced to predict essential genes. We focused on 25 bacteria with characterized essential genes. The predictions yielded the highest area under the receiver operating characteristic (ROC) curve (AUC) of 0.9716 in a tenfold cross-validation test. Proper features were utilized to construct models to make predictions in distantly related bacteria. The accuracy of the predictions was evaluated via the consistency of the predictions with the known essential genes of the target species. The highest AUC of 0.9552 and an average AUC of 0.8314 were achieved when making predictions across organisms. An independent dataset from Synechococcus elongatus, released recently, was obtained for further assessment of the performance of our model. The AUC score of these predictions is 0.7855, which is higher than that of other methods. This research shows that features obtained by homology mapping alone can achieve results as good as, or even better than, integrated features. Meanwhile, the work indicates that a machine-learning-based method can assign more efficient weight coefficients than an empirical formula based on biological knowledge. PMID:27660763
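The AUC figures quoted above can be computed directly from the rank-sum (Mann-Whitney) identity: AUC is the probability that a randomly chosen positive example outscores a randomly chosen negative one, with ties counted as one half. A small self-contained sketch (generic metric code, not the authors' prediction pipeline):

```python
def auc(pos_scores, neg_scores):
    """Area under the ROC curve via pairwise comparison of scores.

    Counts positive-vs-negative score pairs: a win scores 1, a tie 1/2,
    then normalizes by the number of pairs. O(n*m), fine for small sets.
    """
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))
```

An AUC of 0.5 corresponds to random ranking and 1.0 to perfect separation, which is the scale against which values like 0.9716 and 0.7855 in the abstract should be read.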
Shouval, R; Bondi, O; Mishan, H; Shimoni, A; Unger, R; Nagler, A
2014-03-01
Data collected from hematopoietic SCT (HSCT) centers are becoming more abundant and complex owing to the formation of organized registries and incorporation of biological data. Typically, conventional statistical methods are used for the development of outcome prediction models and risk scores. However, these analyses carry inherent properties limiting their ability to cope with large data sets with multiple variables and samples. Machine learning (ML), a field stemming from artificial intelligence, is part of a wider approach for data analysis termed data mining (DM). It enables prediction in complex data scenarios, familiar to practitioners and researchers. Technological and commercial applications are all around us, gradually entering clinical research. In the following review, we would like to expose hematologists and stem cell transplanters to the concepts, clinical applications, strengths and limitations of such methods and discuss current research in HSCT. The aim of this review is to encourage utilization of the ML and DM techniques in the field of HSCT, including prediction of transplantation outcome and donor selection.
Performance Evaluation of the Approaches and Algorithms for Hamburg Airport Operations
NASA Technical Reports Server (NTRS)
Zhu, Zhifan; Jung, Yoon; Lee, Hanbong; Schier, Sebastian; Okuniek, Nikolai; Gerdes, Ingrid
2016-01-01
In this work, fast-time simulations were conducted at Hamburg airport by NASA using the SARDA tools, and real-time simulations by DLR using CADEO and TRACC with the NLR ATM Research Simulator (NARSIM). The outputs are analyzed using a set of common metrics agreed between DLR and NASA. The proposed metrics are derived from the International Civil Aviation Organization (ICAO)'s Key Performance Areas (KPAs) of capacity, efficiency, predictability, and environment, adapted to simulation studies. The results are examined to explore and compare the merits and shortcomings of the two approaches using the common performance metrics. Particular attention is paid to the concept of closed-loop, trajectory-based taxi as well as the application of the US concept to a European airport. Both teams consider the trajectory-based surface operation concept a critical technology advance, not only addressing current surface traffic management problems but also having potential application to unmanned vehicle maneuvering on the airport surface, such as autonomous towing or TaxiBot [6][7], and even Remotely Piloted Aircraft (RPA). Based on this work, a future integration of TRACC and SOSS is described, aiming at bringing the conflict-free trajectory-based operation concept to US airports.
NASA Astrophysics Data System (ADS)
Zhou, Mandi; Shu, Jiong; Chen, Zhigang; Ji, Minhe
2012-11-01
Hyperspectral imagery has been widely used in terrain classification for its high spectral resolution. Urban vegetation, an essential part of the urban ecosystem, can be difficult to discern due to the high similarity of spectral signatures among some land-cover classes. In this paper, we investigate a hybrid approach, the genetic-algorithm-tuned fuzzy support vector machine (GA-FSVM), and apply it to urban vegetation classification from aerial hyperspectral urban imagery. The approach adopts the genetic algorithm to optimize the parameters of the support vector machine, and employs the K-nearest neighbor algorithm to calculate the membership function for each fuzzy parameter, aiming to reduce the effects of isolated and noisy samples. Test data come from a push-broom hyperspectral imager (PHI) remote sensing image partially covering a corner of the Shanghai World Exposition Park; PHI is a hyperspectral sensor developed by the Shanghai Institute of Technical Physics. Experimental results show the GA-FSVM model achieves an overall accuracy of 71.2%, outperforming the maximum likelihood classifier (49.4%) and the artificial neural network method (60.8%). This indicates GA-FSVM is a promising model for vegetation classification from hyperspectral urban data, with a clear advantage in classification problems involving abundant mixed pixels and small samples.
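The GA-tuning step described above can be sketched in miniature. The sketch below is not the paper's FSVM: a toy validation-error surface stands in for cross-validated classifier error, and a real-coded GA with truncation selection, arithmetic crossover, and Gaussian mutation searches a two-parameter space standing in for the SVM penalty and kernel width.

```python
import random

def validation_error(c, gamma):
    # Toy stand-in for an SVM's cross-validation error surface;
    # its minimum sits at (c, gamma) = (1.0, 0.1). Purely illustrative.
    return (c - 1.0) ** 2 + 10.0 * (gamma - 0.1) ** 2

def ga_tune(pop_size=30, generations=60, seed=0):
    rng = random.Random(seed)
    # Each individual is a candidate (C, gamma) pair from a broad range.
    pop = [(rng.uniform(0.0, 5.0), rng.uniform(0.0, 1.0)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: validation_error(*p))
        survivors = pop[: pop_size // 2]           # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            w = rng.random()                       # arithmetic crossover
            child = [w * a[i] + (1.0 - w) * b[i] for i in range(2)]
            if rng.random() < 0.3:                 # Gaussian mutation
                j = rng.randrange(2)
                child[j] += rng.gauss(0.0, 0.1)
            children.append(tuple(child))
        pop = survivors + children
    return min(pop, key=lambda p: validation_error(*p))

best_c, best_gamma = ga_tune()
```

In a real GA-FSVM pipeline, `validation_error` would be replaced by the cross-validated error of the fuzzy SVM trained with those parameters; the GA machinery itself stays the same.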
High efficiency epitaxial GaAs/GaAs and GaAs/Ge solar cell technology using OM/CVD
NASA Technical Reports Server (NTRS)
Wang, K. L.; Yeh, Y. C. M.; Stirn, R. J.; Swerdling, S.
1980-01-01
A technology for fabricating high-efficiency, thin-film GaAs solar cells on substrates appropriate for space and/or terrestrial applications was developed. The approach utilizes organometallic chemical vapor deposition (OM-CVD) to grow a GaAs layer epitaxially on a suitably prepared Ge epi-interlayer deposited on a substrate, especially a lightweight silicon substrate, which can lead to a 300 watt-per-kilogram array technology for space. The proposed cell structure is described. GaAs epilayer growth on single-crystal GaAs and Ge wafer substrates was investigated.
Marto, Aminaton; Jahed Armaghani, Danial; Tonnizam Mohamad, Edy; Makhtar, Ahmad Mahir
2014-01-01
Flyrock is one of the major disturbances induced by blasting and may cause severe damage to nearby structures. This phenomenon has to be precisely predicted and subsequently controlled through changes in the blast design to minimize the potential risk of blasting. The scope of this study is to predict flyrock induced by blasting through a novel approach based on the combination of the imperialist competitive algorithm (ICA) and an artificial neural network (ANN). For this purpose, the parameters of 113 blasting operations were accurately recorded and the flyrock distance was measured for each operation. By applying sensitivity analysis, maximum charge per delay and powder factor were determined to be the most influential parameters on flyrock. In light of this analysis, two new empirical predictors were developed to predict flyrock distance. For comparison purposes, a backpropagation (BP) ANN model was also developed, and its results were compared with those of the proposed ICA-ANN model and the empirical predictors. The results clearly showed the superiority of the proposed ICA-ANN model over the BP-ANN model and the empirical approaches. PMID:25147856
Kumar, Sanjiv; Puniya, Bhanwar Lal; Parween, Shahila; Nahar, Pradip; Ramachandran, Srinivasan
2013-01-01
Pathogenic bacteria interacting with a eukaryotic host express adhesins on their surface. These adhesins aid in bacterial attachment to host cell receptors during colonization. A few adhesins, such as heparin-binding hemagglutinin adhesin (HBHA), Apa, and malate synthase of M. tuberculosis, have been identified using specific experimental interaction models based on biological knowledge of the pathogen. In the present work, we carried out computational screening for adhesins of M. tuberculosis. We used an integrated computational approach: SPAAN for predicting adhesins; PSORTb, SubLoc, and LocTree for extracellular localization; and BLAST for verifying non-similarity to human proteins. These steps are among the first of reverse vaccinology. Multiple claims and attacks from the different algorithms were processed through an argumentation approach. Additional filtration criteria included selection of proteins with low molecular weights and absence of literature reports. We examined the binding potential of the selected proteins using an image-based ELISA. The protein Rv2599 (membrane protein) binds to human fibronectin, laminin, and collagen. Rv3717 (N-acetylmuramoyl-L-alanine amidase) and Rv0309 (L,D-transpeptidase) bind to fibronectin and laminin. We report Rv2599 (membrane protein), Rv0309, and Rv3717 as novel adhesins of M. tuberculosis H37Rv. Our results expand the number of known adhesins of M. tuberculosis and suggest their regulated expression in different stages.
Karamintziou, Sofia D; Custódio, Ana Luísa; Piallat, Brigitte; Polosan, Mircea; Chabardès, Stéphan; Stathis, Pantelis G; Tagaris, George A; Sakas, Damianos E; Polychronaki, Georgia E; Tsirogiannis, George L; David, Olivier; Nikita, Konstantina S
2017-01-01
Advances in the field of closed-loop neuromodulation call for analysis and modeling approaches capable of confronting challenges related to the complex neuronal response to stimulation and the presence of strong internal and measurement noise in neural recordings. Here we elaborate on the algorithmic aspects of a noise-resistant closed-loop subthalamic nucleus deep brain stimulation system for advanced Parkinson's disease and treatment-refractory obsessive-compulsive disorder, ensuring remarkable performance in terms of both efficiency and selectivity of stimulation, as well as in terms of computational speed. First, we propose an efficient method drawn from dynamical systems theory, for the reliable assessment of significant nonlinear coupling between beta and high-frequency subthalamic neuronal activity, as a biomarker for feedback control. Further, we present a model-based strategy through which optimal parameters of stimulation for minimum energy desynchronizing control of neuronal activity are being identified. The strategy integrates stochastic modeling and derivative-free optimization of neural dynamics based on quadratic modeling. On the basis of numerical simulations, we demonstrate the potential of the presented modeling approach to identify, at a relatively low computational cost, stimulation settings potentially associated with a significantly higher degree of efficiency and selectivity compared with stimulation settings determined post-operatively. Our data reinforce the hypothesis that model-based control strategies are crucial for the design of novel stimulation protocols at the backstage of clinical applications.
Rafique, Rashid; Kumar, Sandeep; Luo, Yiqi; Kiely, Gerard; Asrar, Ghassem R.
2015-02-01
The accurate calibration of complex biogeochemical models is essential for the robust estimation of soil greenhouse gas (GHG) fluxes as well as other environmental conditions and parameters used in research and policy decisions. DayCent is a popular biogeochemical model used both nationally and internationally for this purpose. Despite DayCent's popularity, its complex parameter estimation is often based on experts' knowledge, which is somewhat subjective. In this study we used the inverse modelling parameter estimation software PEST to calibrate the DayCent model based on sensitivity and identifiability analysis. Using previously published N2O and crop yield data as the basis of our calibration approach, we found that half of the 140 parameters used in this study were the primary drivers of calibration differences (i.e. the most sensitive), and the remaining parameters could not be identified given the data set and parameter ranges we used. The post-calibration results showed improvement over the pre-calibration parameter set: a decrease in residual differences of 79% for N2O fluxes and 84% for crop yield, and an increase in the coefficient of determination of 63% for N2O fluxes and 72% for corn yield. The results of our study suggest that future studies need to better characterize germination temperature, number of degree-days, and the temperature dependency of plant growth; these processes were highly sensitive and could not be adequately constrained by the data used in our study. Furthermore, the sensitivity and identifiability analysis was helpful in providing deeper insight into important processes and associated parameters that can lead to further improvement in the calibration of the DayCent model.
Algorithms and Algorithmic Languages.
ERIC Educational Resources Information Center
Veselov, V. M.; Koprov, V. M.
This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…
Joshuva, A; Sugumaran, V
2017-03-01
Wind energy is one of the important renewable energy resources available in nature. It is one of the major resources for energy production because of its dependability, owing to the development of the technology and its relatively low cost. Wind energy is converted into electrical energy using rotating blades. Due to environmental conditions and the large structure, the blades are subjected to various vibration forces that may damage them. This compromises energy production and can force turbine shutdown. The downtime can be reduced when the blades are diagnosed continuously using structural health condition monitoring. Fault diagnosis is treated as a pattern recognition problem consisting of three phases: feature extraction, feature selection, and feature classification. In this study, statistical features were extracted from vibration signals, feature selection was carried out using the J48 decision tree algorithm, and feature classification was performed using the best-first tree and functional trees algorithms. The better-performing algorithm is suggested for fault diagnosis of wind turbine blades.
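The feature-extraction phase can be illustrated with a minimal, self-contained sketch: computing a few common statistical features (mean, standard deviation, skewness, kurtosis) of a vibration-like frame. The exact feature list used in the study is not reproduced here; these four are illustrative stand-ins.

```python
import math

def statistical_features(signal):
    """Return simple statistical features of a 1-D signal."""
    n = len(signal)
    mean = sum(signal) / n
    var = sum((x - mean) ** 2 for x in signal) / n
    std = math.sqrt(var)
    # Standardized third and fourth central moments.
    skew = sum((x - mean) ** 3 for x in signal) / (n * std ** 3) if std else 0.0
    kurt = sum((x - mean) ** 4 for x in signal) / (n * std ** 4) if std else 0.0
    return {"mean": mean, "std": std, "skewness": skew, "kurtosis": kurt}

# A clean sinusoid as a stand-in for one frame of a blade vibration signal
# (256 samples covering exactly four periods).
frame = [math.sin(2 * math.pi * k / 64.0) for k in range(256)]
features = statistical_features(frame)
```

In a diagnosis pipeline, a feature vector like this would be computed per frame and passed to the feature-selection and classification stages.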
Use of Algorithm of Changes for Optimal Design of Heat Exchanger
NASA Astrophysics Data System (ADS)
Tam, S. C.; Tam, H. K.; Chio, C. H.; Tam, L. M.
2010-05-01
For economic reasons, the optimal design of heat exchangers is required. Heat exchanger design is usually based on an iterative process involving the design conditions, equipment geometries, and the heat transfer and friction factor correlations. Using the traditional iterative method, many trials are needed to satisfy the compromise between heat exchange performance and cost. The process is cumbersome, and the optimal design often depends on the design engineer's experience. Therefore, in recent studies, many researchers, reviewed in [1], have applied the genetic algorithm (GA) [2] to heat exchanger design, with results that outperform the traditional method. In this study, an alternative approach, the algorithm of changes, is proposed for the optimal design of a shell-tube heat exchanger [3]. This new method, based on the I Ching, was developed originally by the author. In the algorithm, the hexagram operations of the I Ching are generalized to the binary-string case, and an iterative procedure imitating I Ching inference is defined. Following [3], the shell inside diameter, tube outside diameter, and baffle spacing were treated as the design (optimized) variables, and the cost of the heat exchanger was taken as the objective function. The case study shows that the algorithm of changes is comparable to the GA method: both can find the optimal solution in a short time. However, since it does not interchange information between binary strings, the algorithm of changes has an advantage over the GA in parallel computation.
Ultra-Thin, Triple-Bandgap GaInP/GaAs/GaInAs Monolithic Tandem Solar Cells
NASA Technical Reports Server (NTRS)
Wanlass, M. W.; Ahrenkiel, S. P.; Albin, D. S.; Carapella, J. J.; Duda, A.; Emery, K.; Geisz, J. F.; Jones, K.; Kurtz, Sarah; Moriarty, T.; Romero, M. J.
2007-01-01
The performance of state-of-the-art, series-connected, lattice-matched (LM), triple-junction (TJ), III-V tandem solar cells could be improved substantially (10-12%) by replacing the Ge bottom subcell with a subcell having a bandgap of approx. 1 eV. For the last several years, research has been conducted by a number of organizations to develop approx. 1-eV, LM GaInAsN to provide such a subcell, but, so far, the approach has proven unsuccessful. Thus, the need for a high-performance, monolithically integrable, 1-eV subcell for TJ tandems has remained. In this paper, we present a new TJ tandem cell design that addresses the above-mentioned problem. Our approach involves inverted epitaxial growth to allow the monolithic integration of a lattice-mismatched (LMM) approx. 1-eV GaInAs/GaInP double-heterostructure (DH) bottom subcell with LM GaAs (middle) and GaInP (top) upper subcells. A transparent GaInP compositionally graded layer facilitates the integration of the LM and LMM components. Handle-mounted, ultra-thin device fabrication is a natural consequence of the inverted-structure approach, which results in a number of advantages, including robustness, potential low cost, improved thermal management, incorporation of back-surface reflectors, and possible reclamation/reuse of the parent crystalline substrate for further cost reduction. Our initial work has concerned GaInP/GaAs/GaInAs tandem cells grown on GaAs substrates. In this case, the 1-eV GaInAs experiences 2.2% compressive LMM with respect to the substrate. Specially designed GaInP graded layers are used to produce 1-eV subcells with performance parameters nearly equaling those of LM devices with the same bandgap (e.g., LM, 1-eV GaInAsP grown on InP). Previously, we reported preliminary ultra-thin tandem devices (0.237 cm2) with NREL-confirmed efficiencies of 31.3% (global spectrum, one sun) (1), 29.7% (AM0 spectrum, one sun) (2), and 37.9% (low-AOD direct spectrum, 10.1 suns) (3), all at 25 C. Here, we include
Adding learning to cellular genetic algorithms for training recurrent neural networks.
Ku, K W; Mak, M W; Siu, W C
1999-01-01
This paper proposes a hybrid optimization algorithm which combines the efforts of local search (individual learning) and cellular genetic algorithms (GA's) for training recurrent neural networks (RNN's). Each weight of an RNN is encoded as a floating point number, and a concatenation of the numbers forms a chromosome. Reproduction takes place locally in a square grid with each grid point representing a chromosome. Two approaches for combining cellular GA's and learning, the Lamarckian and Baldwinian mechanisms, have been compared. Different hill-climbing algorithms are incorporated into the cellular GA's as learning methods. These include the real-time recurrent learning (RTRL) algorithm and its simplified versions, and the delta rule. The RTRL algorithm has been successively simplified by freezing some of the weights. The delta rule, which is the simplest form of learning, has been implemented by treating the RNN's as feedforward networks during learning. The hybrid algorithms are used to train the RNN's to solve a long-term dependency problem. The results show that Baldwinian learning is inefficient in assisting the cellular GA. It is conjectured that the more difficult it is for genetic operations to produce the genotypic changes that match the phenotypic changes due to learning, the poorer the convergence of Baldwinian learning. Most of the combinations using the Lamarckian mechanism show an improvement in reducing the number of generations required to reach an optimum network; however, only a few can reduce the actual time taken. Embedding the delta rule in the cellular GA's has been found to be the fastest method. It is also concluded that learning should not be too extensive if the hybrid algorithm is to benefit from learning.
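The Lamarckian/Baldwinian distinction is easy to state in code: both variants score an individual by the fitness reached after a short learning phase, but only the Lamarckian variant writes the learned result back into the genotype. The sketch below uses a one-dimensional toy problem and a simple hill-climber in place of the paper's cellular GA and RTRL learner; everything here is a schematic illustration.

```python
import random

def fitness(x):
    return -(x - 3.0) ** 2            # single optimum at x = 3

def hill_climb(x, rng, steps=5, step=0.2):
    """A few local-search moves: the 'individual learning' phase."""
    for _ in range(steps):
        cand = x + rng.uniform(-step, step)
        if fitness(cand) > fitness(x):
            x = cand
    return x

def hybrid_ga(mode, pop_size=20, generations=40, seed=1):
    rng = random.Random(seed)
    pop = [rng.uniform(-10.0, 10.0) for _ in range(pop_size)]
    for _ in range(generations):
        scored = []
        for x in pop:
            learned = hill_climb(x, rng)
            # Lamarckian: the learned value is written back to the genotype.
            # Baldwinian: fitness comes from learning, genotype is unchanged.
            genotype = learned if mode == "lamarckian" else x
            scored.append((fitness(learned), genotype))
        scored.sort(reverse=True)
        parents = [g for _, g in scored[: pop_size // 2]]
        pop = parents + [
            (rng.choice(parents) + rng.choice(parents)) / 2.0 + rng.gauss(0.0, 0.3)
            for _ in range(pop_size - len(parents))
        ]
    return max(pop, key=fitness)

best = hybrid_ga("lamarckian")
```

Calling `hybrid_ga("baldwinian")` runs the same loop with the write-back disabled, which is exactly the one-line difference the paper's comparison turns on.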
NASA Astrophysics Data System (ADS)
Mousavi, Seyed Hosein; Nazemi, Ali; Hafezalkotob, Ashkan
2015-12-01
With the formation of competitive electricity markets around the world, optimization of bidding strategies has become one of the main topics in studies related to market design. Market design is challenged by multiple objectives that need to be satisfied. The solution of such multi-objective problems is often searched over the combined strategy space, and thus requires the simultaneous optimization of multiple parameters. The problem is formulated analytically using the Nash equilibrium concept for games composed of large numbers of players having discrete and large strategy spaces. The solution methodology is based on a characterization of Nash equilibrium in terms of the minima of a function and relies on a metaheuristic optimization approach to find these minima. This paper presents several metaheuristic algorithms, namely the genetic algorithm (GA), simulated annealing (SA), and a hybrid simulated annealing genetic algorithm (HSAGA), to simulate how generators bid in the spot electricity market so as to maximize their profit given the other generators' strategies, and compares their results. As both GA and SA are generic search methods, HSAGA is a generic search method as well. The model, based on actual data, is implemented for a peak hour of Tehran's wholesale spot market in 2012. The simulations show that GA outperforms SA and HSAGA in computing time, number of function evaluations, and computational stability, and the Nash equilibria calculated by GA vary less from one another than those of the other algorithms.
Unfolding the band structure of GaAsBi
NASA Astrophysics Data System (ADS)
Maspero, R.; Sweeney, S. J.; Florescu, Marian
2017-02-01
Typical supercell approaches used to investigate the electronic properties of GaAs(1-x)Bi(x) produce highly accurate, but folded, band structures. Using a highly optimized algorithm, we unfold the band structure to an approximate E(k) relation associated with an effective Brillouin zone. The dispersion relations we generate correlate strongly with experimental results, confirming that a regime of band gap energy greater than the spin-orbit-splitting energy is reached at around 10% bismuth fraction. We also demonstrate the effectiveness of the unfolding algorithm throughout the Brillouin zone (BZ), which is key to enabling transition rate calculations, such as Auger recombination rates. Finally, we show the effect of disorder on the effective masses and identify approximate values for the effective mass of the conduction band and valence bands for bismuth concentrations from 0 to 12%.
Improved interpretation of satellite altimeter data using genetic algorithms
NASA Technical Reports Server (NTRS)
Messa, Kenneth; Lybanon, Matthew
1992-01-01
Genetic algorithms (GA) are optimization techniques that are based on the mechanics of evolution and natural selection. They take advantage of the power of cumulative selection, in which successive incremental improvements in a solution structure become the basis for continued development. A GA is an iterative procedure that maintains a 'population' of 'organisms' (candidate solutions). Through successive 'generations' (iterations) the population as a whole improves, in simulation of Darwin's 'survival of the fittest'. GA's have been shown to be successful where noise significantly reduces the ability of other search techniques to work effectively. Satellite altimetry provides useful information about oceanographic phenomena. It provides rapid global coverage of the oceans and is not as severely hampered by cloud cover as infrared imagery. Despite these and other benefits, several factors lead to significant difficulty in interpretation. The GA approach to the improved interpretation of satellite data involves the representation of the ocean surface model as a string of parameters or coefficients from the model. The GA searches, in parallel, a population of such representations (organisms) to obtain the individual that is best suited to 'survive', that is, the fittest as measured with respect to some 'fitness' function. The fittest organism is the one that best represents the ocean surface model with respect to the altimeter data.
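The population/generation/fitness vocabulary above maps directly onto code. In the hedged sketch below, a minimal GA fits the two coefficients of a straight-line 'surface model' to noisy synthetic observations; a real altimeter surface model would of course have many more parameters.

```python
import random

random.seed(42)
# Synthetic 'observations' from a known linear surface model,
# y = 2.0 * x + 1.0, corrupted by noise.
xs = [i / 10.0 for i in range(50)]
ys = [2.0 * x + 1.0 + random.gauss(0.0, 0.05) for x in xs]

def fitness(org):
    """Negative sum of squared residuals: larger is fitter."""
    a, b = org
    return -sum((a * x + b - y) ** 2 for x, y in zip(xs, ys))

# Initial population of candidate (slope, intercept) organisms.
pop = [(random.uniform(-5.0, 5.0), random.uniform(-5.0, 5.0)) for _ in range(40)]
for _ in range(100):                       # generations
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                       # survival of the fittest
    children = []
    while len(elite) + len(children) < 40:
        p, q = random.sample(elite, 2)     # crossover: average two parents
        children.append(tuple((p[i] + q[i]) / 2.0 + random.gauss(0.0, 0.1)
                              for i in range(2)))
    pop = elite + children
best_a, best_b = max(pop, key=fitness)
```

The recovered coefficients land close to the generating values, illustrating how cumulative selection pulls the population toward the model that best explains the (noisy) data.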
Ligand "Brackets" for Ga-Ga Bond.
Fedushkin, Igor L; Skatova, Alexandra A; Dodonov, Vladimir A; Yang, Xiao-Juan; Chudakova, Valentina A; Piskunov, Alexander V; Demeshko, Serhiy; Baranov, Evgeny V
2016-09-06
The reactivity of digallane (dpp-Bian)Ga-Ga(dpp-Bian) (1) (dpp-Bian = 1,2-bis[(2,6-diisopropylphenyl)imino]acenaphthene) toward acenaphthenequinone (AcQ), sulfur dioxide, and azobenzene was investigated. The reaction of 1 with AcQ in 1:1 molar ratio proceeds via two-electron reduction of AcQ to give (dpp-Bian)Ga(μ2-AcQ)Ga(dpp-Bian) (2), in which the diolate [AcQ](2-) acts as a "bracket" for the Ga-Ga bond. The interaction of 1 with AcQ in 1:2 molar ratio proceeds with oxidation of both dpp-Bian ligands as well as of the Ga-Ga bond to give (dpp-Bian)Ga(μ2-AcQ)2Ga(dpp-Bian) (3). At 330 K in toluene complex 2 decomposes to give compounds 3 and 1. The reaction of complex 2 with atmospheric oxygen results in oxidation of the Ga-Ga bond and affords (dpp-Bian)Ga(μ2-AcQ)(μ2-O)Ga(dpp-Bian) (4). The reaction of digallane 1 with SO2 produces, depending on the ratio (1:2 or 1:4), dithionites (dpp-Bian)Ga(μ2-O2S-SO2)Ga(dpp-Bian) (5) and (dpp-Bian)Ga(μ2-O2S-SO2)2Ga(dpp-Bian) (6). In compound 5 the Ga-Ga bond is preserved and supported by a dithionite dianionic bracket. In compound 6 the gallium centers are bridged by two dithionite ligands. Both 5 and 6 contain dpp-Bian radical anionic ligands. Four-electron reduction of azobenzene with 1 mol equiv of digallane 1 leads to complex (dpp-Bian)Ga(μ2-NPh)2Ga(dpp-Bian) (7). Paramagnetic compounds 2-7 were characterized by electron spin resonance spectroscopy, and their molecular structures were established by single-crystal X-ray analysis. The magnetic behavior of compounds 2, 5, and 6 was investigated by the superconducting quantum interference device (SQUID) technique in the range of 2-295 K.
RBT-GA: a novel metaheuristic for solving the multiple sequence alignment problem
Taheri, Javid; Zomaya, Albert Y
2009-01-01
Background Multiple Sequence Alignment (MSA) has always been an active area of research in Bioinformatics. MSA is mainly focused on discovering biologically meaningful relationships among different sequences or proteins in order to investigate their underlying main characteristics/functions. This information is also used to generate phylogenetic trees. Results This paper presents a novel approach, namely RBT-GA, to solve the MSA problem using a hybrid solution methodology combining the Rubber Band Technique (RBT) and the Genetic Algorithm (GA) metaheuristic. RBT is inspired by the behavior of an elastic Rubber Band (RB) on a plate with several poles, which is analogous to locations in the input sequences that could potentially be biologically related. A GA attempts to mimic the evolutionary processes of life in order to locate optimal solutions in an often very complex landscape. RBT-GA is a population-based optimization algorithm designed to find the optimal alignment for a set of input protein sequences. In this novel technique, each alignment answer is modeled as a chromosome consisting of several poles in the RBT framework. These poles resemble locations in the input sequences that are most likely to be correlated and/or biologically related. A GA-based optimization process improves these chromosomes gradually, yielding a set of mostly optimal answers for the MSA problem. Conclusion RBT-GA is tested with one of the well-known benchmark suites (BAliBASE 2.0) in this area. The obtained results show the superiority of the proposed technique, even in the case of formidable sequences. PMID:19594869
Genetic Algorithm for Optimization: Preprocessor and Algorithm
NASA Technical Reports Server (NTRS)
Sen, S. K.; Shaykhian, Gholam A.
2006-01-01
A genetic algorithm (GA), inspired by Darwin's theory of evolution and employed to solve optimization problems, unconstrained or constrained, uses an evolutionary process. A GA has several parameters such as the population size, search space, crossover and mutation probabilities, and fitness criterion. These parameters are not universally known/determined a priori for all problems. Depending on the problem at hand, these parameters need to be chosen so that the resulting GA performs best. We present here a preprocessor that achieves just that, i.e., it determines, for a specified problem, the foregoing parameters so that the consequent GA is best suited to the problem. We also stress the need for such a preprocessor, both for the quality (error) and for the cost (complexity) of producing the solution. The preprocessor includes, as its first step, making use of all available information, such as the nature/character of the function/system, the search space, physical/laboratory experimentation (if already done/available), and the physical environment. It also includes information that can be generated through any means - deterministic/nondeterministic/graphics. Instead of attempting a solution of the problem straightaway through a GA without having/using knowledge of the character of the system, we do a consciously better job of producing a solution by using the information generated in this first step of the preprocessor. We therefore unstintingly advocate the use of a preprocessor to solve a real-world optimization problem, including NP-complete ones, before using the statistically most appropriate GA. We also include such a GA for unconstrained function optimization problems.
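One way to caricature the preprocessor idea in code: short pilot runs over a small grid of candidate GA settings (here, just the mutation scale) pick the configuration that the full-length GA then uses. This is a minimal stand-in for the much richer, knowledge-driven preprocessor the authors describe; the sphere function and the candidate grid are both illustrative assumptions.

```python
import random

def sphere(x):
    """Test objective: minimum value 0 at the origin."""
    return sum(v * v for v in x)

def run_ga(mutation_sigma, generations, seed=0, dim=3, pop_size=20):
    """Elitist GA with Gaussian mutation; returns the best cost found."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=sphere)
        elite = pop[: pop_size // 2]          # elites survive unchanged
        pop = elite + [
            [g + rng.gauss(0.0, mutation_sigma) for g in rng.choice(elite)]
            for _ in range(pop_size - len(elite))
        ]
    return min(sphere(ind) for ind in pop)

# 'Preprocessor': short pilot runs pick the mutation scale ...
candidates = [0.01, 0.1, 0.5, 2.0]
best_sigma = min(candidates, key=lambda s: run_ga(s, generations=15))
# ... which the full-length GA then uses.
final_cost = run_ga(best_sigma, generations=100)
```

A real preprocessor would tune several parameters jointly (population size, crossover and mutation probabilities, encoding) and fold in problem knowledge, but the two-stage shape, cheap meta-search then full run, is the same.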
Chen, I-Cherng; Lin, Shiu-Shiung; Lin, Tsao-Jen; Hsu, Cheng-Liang; Hsueh, Ting Jen; Shieh, Tien-Yu
2010-01-01
The application of novel core-shell nanowires composed of ZnGa2O4/ZnO to improve the sensitivity of NO2 gas sensors is demonstrated in this study. The ZnGa2O4/ZnO core-shell nanowires are grown by reactive evaporation on patterned ZnO:Ga/SiO2/Si templates at 600 °C, forming the homogeneous structure of the sensors whose NO2 sensitivity is assessed in this report. These novel NO2 gas sensors were evaluated at working temperatures of 25 °C and 250 °C, respectively. The results reveal that the ZnGa2O4/ZnO core-shell nanowires present a good linear relationship (R2 > 0.99) between sensitivity and NO2 concentration at both working temperatures. These core-shell nanowire sensors also show fast response (<90 s) and recovery (<120 s) times with greater repeatability for NO2 sensing at room temperature, unlike traditional sensors that only work effectively at much higher temperatures. The data in this study indicate that the newly developed ZnGa2O4/ZnO core-shell nanowire based sensors are highly promising for industrial applications. PMID:22319286
NASA Astrophysics Data System (ADS)
Azam, Sikander; Khan, Saleem Ayaz; Goumri-Said, Souraya
2016-01-01
Metal chalcogenide semiconductors play a significant role in the development of materials for energy and nanotechnology applications. First-principles calculations were applied to CsAgGa2Se4 to investigate its optoelectronic structure and bonding characteristics, using the full-potential linearized augmented plane wave method within the framework of the generalized gradient approximation (GGA) and the Engel-Vosko GGA functional (EV-GGA). The band structure from EV-GGA shows that the valence band maximum and conduction band minimum are situated at Γ, with a band gap value of 2.15 eV. A mixture of orbitals from the Ag 4p6/4d10, Se 3d10, Ga 4p1, Se 4p4, and Ga 4s2 states plays the primary role in the semiconducting character of this chalcogenide. The charge density iso-surface shows strong covalent bonding between the Ag-Se and Ga-Se atoms. The imaginary part of the dielectric constant reveals that the threshold (first optical critical point) energy of the dielectric function occurs at 2.15 eV. With a large direct band gap and a large absorption coefficient, CsAgGa2Se4 may be considered a potential material for photovoltaic applications.
Wu, Jingheng; Mei, Juan; Wen, Sixiang; Liao, Siyan; Chen, Jincan; Shen, Yong
2010-07-30
Based on quantitative structure-activity relationship (QSAR) models developed with artificial neural networks (ANNs), a genetic algorithm (GA) was used for variable selection among the molecular descriptors and also helped to improve the back-propagation training algorithm. Leave-one-out cross-validation was used to assess the validity of the generated ANN model and of the preferable variable combinations derived by the GA. A self-adaptive GA-ANN model was successfully established by using a new estimate function that avoids over-fitting in ANN training. Compared with the variables selected in two recent QSAR studies based on stepwise multiple linear regression (MLR) models, the variables selected by the self-adaptive GA-ANN model are superior for constructing an ANN model, as they yield a higher cross-validation coefficient (Q(2)) and a lower root mean square deviation both in the established model and in biological activity prediction. The validation methods introduced, including leave-multiple-out, Y-randomization, and external validation, confirmed the superiority of the established GA-ANN models over the MLR models in both stability and predictive power. The self-adaptive GA-ANN approach thus shows promise for improving QSAR models.
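The leave-one-out Q(2) statistic used above to compare models can be sketched generically. This is an illustration of the statistic only, not the paper's GA-ANN code; the one-parameter toy model and the data are assumptions for demonstration.

```python
def q2_loo(xs, ys, fit, predict):
    """Leave-one-out cross-validated Q^2 = 1 - PRESS/TSS: each sample
    is predicted by a model fitted on the remaining n - 1 samples."""
    n = len(xs)
    mean_y = sum(ys) / n
    press = 0.0
    for i in range(n):
        model = fit(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        press += (ys[i] - predict(model, xs[i])) ** 2
    tss = sum((y - mean_y) ** 2 for y in ys)
    return 1.0 - press / tss

# Toy model family: y ~ a*x, fitted by least squares through the origin.
fit = lambda xs, ys: sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
predict = lambda a, x: a * x
q2 = q2_loo([1.0, 2.0, 3.0, 4.0, 5.0], [2.1, 3.9, 6.2, 8.0, 9.9], fit, predict)
```

A Q(2) close to 1 indicates that held-out samples are predicted almost as well as the fitted ones, which is the criterion the GA uses to rank variable subsets.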
Ban, Hiroshi; Yamamoto, Hiroki
2013-05-31
In almost all recent vision experiments, stimuli are controlled via computers and presented on display devices such as cathode ray tubes (CRTs). Display characterization is a necessary procedure for such computer-aided vision experiments. The standard display characterization, called "gamma correction", and the subsequent linear color transformation procedure are established for CRT displays and widely used in the current vision science field. However, this standard two-step procedure is based on the internal model of CRT display devices, and there is no guarantee that the method is applicable to other types of display devices such as liquid crystal displays and digital light processing displays. We therefore tested the applicability of the standard method to these kinds of new devices and found that it was not valid for them. To overcome this problem, we provide several novel approaches for characterizing display devices, based on linear, nonlinear, and hybrid search algorithms. These approaches assume no internal model of the display device and are therefore applicable to any display type. Evaluations and comparisons of chromaticity estimation accuracy against the standard procedure showed that our proposed methods largely improve calibration efficiency for non-CRT devices. Our proposed methods, together with the standard one, have been implemented in a MATLAB-based integrated graphical user interface software named Mcalibrator2. This software can enhance the accuracy of vision experiments and enable more efficient display characterization procedures. The software is now publicly available for free.
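The standard "gamma correction" step the abstract refers to can be illustrated with a minimal fit-and-invert sketch. This is a generic illustration on synthetic data, assuming the simple CRT model L = v^gamma; it is not the Mcalibrator2 implementation.

```python
import math

def fit_gamma(levels, luminance):
    """Least-squares slope in log-log space for L = v**gamma, with
    `levels` and `luminance` normalized to (0, 1]."""
    xs = [math.log(v) for v in levels]
    ys = [math.log(l) for l in luminance]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

def linearize(v, gamma):
    """Gamma-corrected drive value that yields linear output v."""
    return v ** (1.0 / gamma)

# Synthetic CRT-like display with a true gamma of 2.2.
levels = [i / 10 for i in range(1, 11)]
measured = [v ** 2.2 for v in levels]
g = fit_gamma(levels, measured)
```

The paper's point is that non-CRT devices need not follow any such parametric model, which is why its search-based methods avoid assuming one.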
Gálvez, Akemi; Iglesias, Andrés
2013-01-01
Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. Firstly, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but precomputed with a coarse approximation scheme. Secondly, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to optimization of data parameterization; then, the knot vector is refined by using De Boor's method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently. PMID:24376380
Singh, S; Modi, S; Bagga, D; Kaur, P; Shankar, L R; Khushu, S
2013-03-01
The present study aimed to investigate whether brain morphological differences exist between adult hypothyroid subjects and age-matched controls using voxel-based morphometry (VBM) with diffeomorphic anatomic registration via an exponentiated lie algebra algorithm (DARTEL) approach. High-resolution structural magnetic resonance images were taken in ten healthy controls and ten hypothyroid subjects. The analysis was conducted using statistical parametric mapping. The VBM study revealed a reduction in grey matter volume in the left postcentral gyrus and cerebellum of hypothyroid subjects compared to controls. A significant reduction in white matter volume was also found in the cerebellum, right inferior and middle frontal gyrus, right precentral gyrus, right inferior occipital gyrus and right temporal gyrus of hypothyroid patients compared to healthy controls. Moreover, no meaningful cluster for greater grey or white matter volume was obtained in hypothyroid subjects compared to controls. Our study is the first VBM study of hypothyroidism in an adult population and suggests that, compared to controls, this disorder is associated with differences in brain morphology in areas corresponding to known functional deficits in attention, language, motor speed, visuospatial processing and memory in hypothyroidism.
Thompson, Alexander E; Meredig, Bryce; Wolverton, C
2014-03-12
We have created an improved xenon interatomic potential for use with existing UO2 potentials. This potential was fit to density functional theory calculations with the Hubbard U correction (DFT + U) using a genetic algorithm approach called iterative potential refinement (IPR). We examine the defect energetics of the IPR-fitted xenon interatomic potential as well as other, previously published xenon potentials. We compare these potentials to DFT + U derived energetics for a series of xenon defects in a variety of incorporation sites (large, intermediate, and small vacant sites). We find the existing xenon potentials overestimate the energy needed to add a xenon atom to a wide set of defect sites representing a range of incorporation sites, including failing to correctly rank the energetics of the small incorporation site defects (xenon in an interstitial and xenon in a uranium site neighboring uranium in an interstitial). These failures are due to problematic descriptions of Xe-O and/or Xe-U interactions of the previous xenon potentials. These failures are corrected by our newly created xenon potential: our IPR-generated potential gives good agreement with DFT + U calculations to which it was not fitted, such as xenon in an interstitial (small incorporation site) and xenon in a double Schottky defect cluster (large incorporation site). Finally, we note that IPR is very flexible and can be applied to a wide variety of potential forms and materials systems, including metals and EAM potentials.
Fusion of qualitative bond graph and genetic algorithms: a fault diagnosis application.
Lo, C H; Wong, Y K; Rad, A B; Chow, K M
2002-10-01
In this paper, the problem of fault diagnosis via the integration of genetic algorithms (GAs) and qualitative bond graphs (QBGs) is addressed. We suggest that GAs can be used to search for possible fault components among a system of qualitative equations. The QBG is adopted as the modeling scheme to generate the set of qualitative equations; it provides a unified approach for modeling engineering systems, in particular mechatronic systems. Fault diagnosis is activated by a fault detection mechanism when a discrepancy between measured abnormal behavior and predicted system behavior is observed; the fault detection mechanism itself is not presented here. To demonstrate the performance of the proposed algorithm, we have tested it on an in-house designed and built floating-disc experimental setup. Results from fault diagnosis in the floating-disc system are presented and discussed. Additional measurements will be required to localize the fault when more than one fault candidate is inferred.
Edge detection based on genetic algorithm and sobel operator in image
NASA Astrophysics Data System (ADS)
Tong, Xin; Ren, Aifeng; Zhang, Haifeng; Ruan, Hang; Luo, Ming
2011-10-01
Genetic algorithms (GAs) are widely used for optimization problems, employing techniques inspired by natural evolution. In this paper we present a new edge detection technique based on a GA and the Sobel operator. The Sobel edge detector, built in DSP Builder, is first used to determine the boundaries of objects within an image. Then a genetic algorithm implemented with SOPC Builder determines a new threshold for the image processing. Finally, the performance of the new best-threshold edge detection technique, realized in the DSP Builder and Quartus II software, is compared both qualitatively and quantitatively with the plain Sobel operator. The new edge detection technique is shown to perform very well in terms of robustness to noise, edge search capability, and quality of the final edge image.
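The Sobel-plus-threshold stage can be sketched in plain Python. This is a software illustration of the standard operator, not the paper's DSP Builder hardware design; the test image and threshold value are arbitrary.

```python
def sobel_magnitude(img):
    """Approximate gradient magnitude |Gx| + |Gy| with the 3x3 Sobel
    kernels; `img` is a 2-D list of gray levels. Border pixels stay 0."""
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(ky[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = abs(gx) + abs(gy)
    return out

def threshold(mag, t):
    """Binary edge map: 1 where the gradient magnitude exceeds t."""
    return [[1 if v > t else 0 for v in row] for row in mag]

# A vertical step edge between columns 2 and 3.
img = [[0, 0, 0, 9, 9, 9] for _ in range(5)]
edges = threshold(sobel_magnitude(img), 10)
```

In the paper the threshold `t` is not fixed by hand as above but selected by the GA, which is the core of the proposed technique.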
Simplified 2DEG carrier concentration model for composite barrier AlGaN/GaN HEMT
Das, Palash; Biswas, Dhrubes
2014-04-24
The self-consistent solution of the Schrödinger and Poisson equations, together with the total charge depletion model, is applied in a novel approach to a composite-AlGaN-barrier HEMT heterostructure. The solution led to a completely new analytical model for the Fermi energy level vs. 2DEG carrier concentration, which was then used to derive a new analytical model for the temperature-dependent 2DEG carrier concentration in AlGaN/GaN HEMTs.
Optimisation of assembly scheduling in VCIM systems using genetic algorithm
NASA Astrophysics Data System (ADS)
Dao, Son Duy; Abhary, Kazem; Marian, Romeo
2017-01-01
Assembly plays an important role in any production system, as it constitutes a significant portion of the lead time and cost of a product. The virtual computer-integrated manufacturing (VCIM) system is a modern production system being conceptually developed to extend the application of the traditional computer-integrated manufacturing (CIM) system to a global level. Assembly scheduling in VCIM systems is quite different from that in traditional production systems because of the difference in the working principles of the two systems. In this article, the assembly scheduling problem in VCIM systems is modeled, and an integrated approach based on a genetic algorithm (GA) is then proposed to search for a globally optimized solution to the problem. Because of the dynamic nature of the scheduling problem, a novel GA with a unique chromosome representation and modified genetic operations is developed herein. The robustness of the proposed approach is verified by a numerical example.
GaInP/GaAs/GaInAs Monolithic Tandem Cells for High-Performance Solar Concentrators
Wanlass, M. W.; Ahrenkiel, S. P.; Albin, D. S.; Carapella, J. J.; Duda, A.; Emery, K.; Geisz, J. F.; Jones, K.; Kurtz, S.; Moriarty, T.; Romero, M. J.
2005-08-01
We present a new approach for ultra-high-performance tandem solar cells that involves inverted epitaxial growth and ultra-thin device processing. The additional degree of freedom afforded by the inverted design allows the monolithic integration of high- and medium-bandgap, lattice-matched (LM) subcell materials with lower-bandgap, lattice-mismatched (LMM) materials in a tandem structure through the use of transparent compositionally graded layers. The current work concerns an inverted, series-connected, triple-bandgap, GaInP (LM, 1.87 eV)/GaAs (LM, 1.42 eV)/GaInAs (LMM, ~1 eV) device structure grown on a GaAs substrate. Ultra-thin tandem devices are fabricated by mounting the epiwafers to pre-metallized Si wafer handles and selectively removing the parent GaAs substrate. The resulting handle-mounted, ultra-thin tandem cells have a number of important advantages, including improved performance and potential reclamation/reuse of the parent substrate for epitaxial growth. Additionally, realistic performance modeling calculations suggest that terrestrial concentrator efficiencies in the range of 40-45% are possible with this new tandem cell approach. A laboratory-scale (0.24 cm2), prototype GaInP/GaAs/GaInAs tandem cell with a terrestrial concentrator efficiency of 37.9% at a low concentration ratio (10.1 suns) is described, which surpasses the previous world record efficiency of 37.3%.
Optimal groundwater remediation using artificial neural networks and the genetic algorithm
Rogers, Leah L.
1992-08-01
An innovative computational approach for the optimization of groundwater remediation is presented which uses artificial neural networks (ANNs) and the genetic algorithm (GA). In this approach, the ANN is trained to predict an aspect of the outcome of a flow and transport simulation. Then the GA searches through realizations or patterns of pumping and uses the trained network to predict the outcome of each realization. This approach has the advantages of parallel processing of the groundwater simulations and the ability to "recycle" or reuse the base of knowledge formed by these simulations. These advantages offer a reduction of the computational burden of the groundwater simulations relative to a more conventional approach which uses nonlinear programming (NLP) with a quasi-Newtonian search. Also, the modular nature of this approach facilitates substitution of different groundwater simulation models.
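The train-then-search pattern described above can be sketched generically. In this sketch, which is an assumption-laden stand-in rather than the paper's method, `expensive_sim` is a hypothetical placeholder for the flow-and-transport simulation, an inverse-distance-weighted interpolator stands in for the trained ANN, and a simple elitist mutation search stands in for the GA.

```python
import random

def expensive_sim(x):
    """Hypothetical stand-in for a costly flow-and-transport run."""
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2

rng = random.Random(1)

# 1. Run the expensive simulator once over sampled pumping patterns
#    to build the reusable "base of knowledge".
samples = [[rng.uniform(-5, 5), rng.uniform(-5, 5)] for _ in range(200)]
values = [expensive_sim(s) for s in samples]

def surrogate(x):
    """Cheap predictor trained on the stored runs; inverse-distance
    weighting over the 5 nearest samples stands in for the ANN."""
    nearest = sorted((sum((a - b) ** 2 for a, b in zip(x, s)), v)
                     for s, v in zip(samples, values))[:5]
    wsum = vsum = 0.0
    for d2, v in nearest:
        w = 1.0 / (d2 + 1e-9)
        wsum += w
        vsum += w * v
    return vsum / wsum

# 2. An evolutionary search queries only the cheap surrogate.
pop = [[rng.uniform(-5, 5), rng.uniform(-5, 5)] for _ in range(30)]
for _ in range(100):
    pop.sort(key=surrogate)
    elite = pop[0]
    pop = pop[:15] + [[p + rng.gauss(0, 0.3) for p in elite] for _ in range(15)]
best = min(pop, key=surrogate)
```

The key economy is that the simulator is called only to build the sample base, which can then be reused across many searches.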
Albert, Jaroslav
2016-01-01
Modeling stochastic behavior of chemical reaction networks is an important endeavor in many aspects of chemistry and systems biology. The chemical master equation (CME) and the Gillespie algorithm (GA) are the two most fundamental approaches to such modeling; however, each of them has its own limitations: the GA may require long computing times, while the CME may demand unrealistic memory storage capacity. We propose a method that combines the CME and the GA that allows one to simulate stochastically a part of a reaction network. First, a reaction network is divided into two parts. The first part is simulated via the GA, while the solution of the CME for the second part is fed into the GA in order to update its propensities. The advantage of this method is that it avoids the need to solve the CME or stochastically simulate the entire network, which makes it highly efficient. One of its drawbacks, however, is that most of the information about the second part of the network is lost in the process. Therefore, this method is most useful when only partial information about a reaction network is needed. We tested this method against the GA on two systems of interest in biology--the gene switch and the Griffith model of a genetic oscillator--and have shown it to be highly accurate. Comparing this method to four different stochastic algorithms revealed it to be at least an order of magnitude faster than the fastest among them.
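For reference, the GA half of the hybrid, Gillespie's direct method, can be sketched as follows. This is a textbook birth-death example, not the paper's gene-switch or Griffith oscillator systems, and the rate constants are arbitrary.

```python
import random

def gillespie(propensities, stoich, x0, t_end, seed=0):
    """Gillespie's direct method: sample the time to the next reaction
    from an exponential with rate a0 = sum of propensities, then pick
    which reaction fires with probability proportional to its propensity."""
    rng = random.Random(seed)
    t, x = 0.0, list(x0)
    while t < t_end:
        a = propensities(x)
        a0 = sum(a)
        if a0 == 0.0:
            break                          # no reaction can fire
        t += rng.expovariate(a0)           # waiting time to next event
        u, r = rng.random() * a0, 0        # roulette-wheel reaction choice
        while u > a[r]:
            u -= a[r]
            r += 1
        x = [xi + di for xi, di in zip(x, stoich[r])]
    return x

# Birth-death process: 0 -> X at rate k, X -> 0 at rate g*X.
k, g = 10.0, 1.0
final = gillespie(lambda x: [k, g * x[0]], [[1], [-1]], [0], t_end=50.0)
```

In the paper's hybrid scheme, the propensities of the simulated subnetwork would be updated from the CME solution of the other subnetwork rather than held fixed as here.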
NASA Astrophysics Data System (ADS)
Tsatsulnikov, A. F.; Lundin, W. V.; Sakharov, A. V.; Zavarin, E. E.; Usov, S. O.; Nikolaev, A. E.; Kryzhanovskaya, N. V.; Chernyakov, A. E.; Zakgeim, A. L.; Cherkashin, N. A.; Hytch, M.
2011-12-01
This work presents the results of an investigation of approaches to the synthesis of the active region of LEDs with an extended optical range. A combination of a short-period InGaN/GaN superlattice and an InGaN quantum well was applied to extend the emission range up to 560 nm. Monolithic white LED structures containing two blue QWs and one green QW separated by the short-period InGaN/GaN superlattice were grown, with external quantum efficiencies up to 5-6%.
Belwin Edward, J; Rajasekar, N; Sathiyasekar, K; Senthilnathan, N; Sarjila, R
2013-09-01
Obtaining an optimal power flow (OPF) solution is a strenuous task for any power system engineer, and the inclusion of FACTS devices in the power system network adds to its complexity. The dual objective of OPF, fuel cost minimization along with FACTS device placement, is considered for the IEEE 30-bus system and solved using the proposed Enhanced Bacterial Foraging Algorithm (EBFA). The conventional Bacterial Foraging Algorithm (BFA) has the difficulty of optimal parameter selection; hence, in this paper, BFA is enhanced with the Nelder-Mead (NM) algorithm for better performance. A MATLAB code for EBFA is developed and the problem of optimal power flow with the inclusion of FACTS devices is solved. After several runs with different initial values, it is found that the inclusion of FACTS devices such as SVC and TCSC in the network reduces the generation cost and increases voltage stability limits. It is also observed that the proposed algorithm requires less computational time than earlier proposed algorithms.
NASA Astrophysics Data System (ADS)
Chen, Fang; Chang, Honglong; Yuan, Weizheng; Wilcock, Reuben; Kraft, Michael
2012-10-01
This paper describes a novel multiobjective parameter optimization method based on a genetic algorithm (GA) for the design of a sixth-order continuous-time, force feedback band-pass sigma-delta modulator (BP-ΣΔM) interface for the sense mode of a MEMS gyroscope. The design procedure starts by deriving a parameterized Simulink model of the BP-ΣΔM gyroscope interface. The system parameters are then optimized by the GA. Consequently, the optimized design is tested for robustness by a Monte Carlo analysis to find a solution that is both optimal and robust. System level simulations result in a signal-to-noise ratio (SNR) larger than 90 dB in a bandwidth of 64 Hz with a 200° s-1 angular rate input signal; the noise floor is about -100 dBV Hz-1/2. The simulations are compared to measured data from a hardware implementation. For zero input rotation with the gyroscope operating at atmospheric pressure, the spectrum of the output bitstream shows an obvious band-pass noise shaping and a deep notch at the gyroscope resonant frequency. The noise floor of measured power spectral density (PSD) of the output bitstream agrees well with simulation of the optimized system level model. The bias stability, rate sensitivity and nonlinearity of the gyroscope controlled by an optimized BP-ΣΔM closed-loop interface are 34.15° h-1, 22.3 mV °-1 s-1, 98 ppm, respectively. This compares to a simple open-loop interface for which the corresponding values are 89° h-1, 14.3 mV °-1 s-1, 7600 ppm, and a nonoptimized BP-ΣΔM closed-loop interface with corresponding values of 60° h-1, 17 mV °-1 s-1, 200 ppm.
Bell-Curve Based Evolutionary Optimization Algorithm
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.; Laba, K.; Kincaid, R.
1998-01-01
The paper presents an optimization algorithm that falls into the category of genetic, or evolutionary, algorithms. While bit exchange is the basis of most genetic algorithms (GAs) in research and applications in America, alternatives in the same category of evolutionary algorithms that instead use a direct, geometrical approach have gained popularity in Europe and Asia. The Bell-Curve Based Evolutionary Algorithm (BCB) is in this alternative category and is distinguished by the use of a combination of n-dimensional geometry and the normal distribution, the bell curve, in the generation of offspring. The tool for creating a child is a geometrical construct comprising a line connecting two parents and a weighted point on that line. The point that defines the child deviates from the weighted point in two directions, parallel and orthogonal to the connecting line, with the deviation in each direction obeying a probabilistic distribution. Tests showed satisfactory performance of BCB. Its principal advantage is its controllability via the normal distribution parameters and the geometrical construct variables.
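The geometrical construct described above can be sketched for 2-D parents. The weight and standard deviations below are illustrative assumptions, not the paper's tuned values.

```python
import random

rng = random.Random(42)

def bcb_child(p1, p2, w=0.5, s_par=0.1, s_orth=0.1):
    """One offspring from two 2-D parents: take the weighted point at
    fraction w along the line p1 -> p2, then add normal deviates
    parallel and orthogonal to that line (sigmas scale with the
    parent distance; all values here are illustrative)."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    dist = (dx * dx + dy * dy) ** 0.5 or 1.0
    ux, uy = dx / dist, dy / dist          # unit vector along the line
    ox, oy = -uy, ux                       # orthogonal unit vector
    mx, my = p1[0] + w * dx, p1[1] + w * dy  # weighted point on the line
    d_par = rng.gauss(0.0, s_par * dist)
    d_orth = rng.gauss(0.0, s_orth * dist)
    return (mx + d_par * ux + d_orth * ox, my + d_par * uy + d_orth * oy)

# Parents on the x-axis: the child scatters around the midpoint (2, 0).
child = bcb_child((0.0, 0.0), (4.0, 0.0))
```

The controllability the paper highlights is visible here: the sigmas and the weight `w` directly shape where offspring land relative to the parents.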
An investigation of messy genetic algorithms
NASA Technical Reports Server (NTRS)
Goldberg, David E.; Deb, Kalyanmoy; Korb, Bradley
1990-01-01
Genetic algorithms (GAs) are search procedures based on the mechanics of natural selection and natural genetics. They combine the use of string codings, or artificial chromosomes, and populations with the selective and juxtapositional power of reproduction and recombination to yield a surprisingly powerful search heuristic in many problems. Despite their empirical success, there has been a long-standing objection to the use of GAs in arbitrarily difficult problems. To address it, a new approach was launched: results on a 30-bit, order-three-deception problem were obtained using a new type of genetic algorithm called a messy genetic algorithm (mGA). Messy genetic algorithms combine the use of variable-length strings, a two-phase selection scheme, and messy genetic operators to solve the fixed-coding problem of standard simple GAs. The results of a study of mGAs on problems with nonuniform subfunction scale and size are presented. The mGA approach is summarized, covering both its operation and the theory of its use. Experiments on problems of varying scale, varying building-block size, and combined varying scale and size are presented.
Computing Algorithms for Nuffield Advanced Physics.
ERIC Educational Resources Information Center
Summers, M. K.
1978-01-01
Defines all recurrence relations used in the Nuffield course, to solve first- and second-order differential equations, and describes a typical algorithm for computer generation of solutions. (Author/GA)
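A typical recurrence of this kind, here the Euler-Cromer scheme for a second-order differential equation, can be sketched as follows. This is a standard textbook scheme chosen for illustration, not necessarily the Nuffield course's exact algorithm.

```python
import math

def solve_shm(omega, x0, v0, dt, steps):
    """Advance x'' = -omega**2 * x with the Euler-Cromer recurrences
        v[n+1] = v[n] - omega**2 * x[n] * dt
        x[n+1] = x[n] + v[n+1] * dt
    and return the positions x[0..steps]."""
    xs = [x0]
    x, v = x0, v0
    for _ in range(steps):
        v -= omega ** 2 * x * dt
        x += v * dt
        xs.append(x)
    return xs

# One period of simple harmonic motion (omega = 1, period 2*pi):
# the exact solution x(t) = cos(t) should return close to x = 1.
xs = solve_shm(1.0, 1.0, 0.0, dt=0.001, steps=int(2 * math.pi / 0.001))
```

Such recurrences let students generate solutions numerically where the differential equation has no convenient closed form.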
First-principle natural band alignment of GaN / dilute-As GaNAs alloy
Tan, Chee-Keong; Tansu, Nelson
2015-01-15
Density functional theory (DFT) calculations with the local density approximation (LDA) functional are employed to investigate the band alignment of dilute-As GaNAs alloys with respect to GaN. The conduction and valence band positions of the dilute-As GaNAs alloy relative to GaN on an absolute energy scale are determined from a combination of bulk and surface DFT calculations. The resulting GaN/GaNAs conduction-to-valence band offset ratio is found to be approximately 5:95. Our theoretical finding is in good agreement with experimental observation, indicating that the upward movement of the valence band at low As content is mainly responsible for the drastic reduction of the energy band gap relative to GaN. In addition, type-I band alignment of GaN/GaNAs is suggested as a reasonable approach for future device implementation with dilute-As GaNAs quantum wells, and a type-II quantum well active region could be formed by using an InGaN/dilute-As GaNAs heterostructure.
NASA Astrophysics Data System (ADS)
Sharma, R. K.; Anil Kumar, A. K.; Xavier James Raj, M.
strategy (Sharma & Anilkumar 2003) adopted for the re-entry prediction of the risk objects by estimating the ballistic coefficient based on the TLEs. The estimation of the ballistic coefficient Bn = m/(CDA), where CD is the drag coefficient, A is the reference area, and m is the mass of the object, is done by minimizing a cost function (variation in the re-entry prediction time) using Genetic algorithm. The KS element equations of motion are numerically integrated with a suitable integration step size with the 4th - order Runge-Kutta-Gill method till the end of the orbital life, by including the Earth's oblateness with J2 to J6 terms, and modelling the air drag forces through an analytical oblate diurnal atmosphere with the density scale height varying with altitude. Jacchia (1977) atmospheric model, which takes into consideration the epoch, daily solar flux (F10.7) and geomagnetic index (Ap) for computation of density and density scale height, is utilized. The basic feature of the present approach is that the model and measurement errors are accountable in terms of adjusting the ballistic coefficient and hence the estimated Bn is not the actual ballistic coefficient but an effective ballistic coefficient. It is demonstrated that the inaccuracies or deficiencies in the inputs, like F10.7 and Ap values, are absorbed in the estimated Bn. The details of the re-entry results based on this approach, utilizing the TLE of debris objects, US Sat No. 25947 and SROSS-C2 Satellite, which re-entered the Earth's atmosphere on 4th March 2000 and 12th July 2001, are provided. Details of the re-entry predictions with respect to the 4th and 5th IADC re-entry campaigns, related to COSMOS 1043 rocket body and COSMOS 389 satellite, which re-entered the Earth's atmosphere on 19 January 2002 and 24 November 2003, respectively, are described. 
The predicted re-entries were found to be consistently close to the actual re-entry times, with small uncertainty bands on the predictions.
Schumann, A; Priegnitz, M; Schoene, S; Enghardt, W; Rohling, H; Fiedler, F
2016-10-07
Range verification and dose monitoring in proton therapy is considered highly desirable. Different methods have been developed worldwide, such as particle therapy positron emission tomography (PT-PET) and prompt gamma imaging (PGI). In general, these methods allow for a verification of the proton range; however, quantification of the dose from these measurements remains challenging. For the first time, we present an approach for estimating the dose from prompt γ-ray emission profiles. It combines a filtering procedure based on Gaussian-powerlaw convolution with an evolutionary algorithm. By convolving depth dose profiles with an appropriate filter kernel, prompt γ-ray depth profiles are obtained; to reverse this step, the evolutionary algorithm is applied. The feasibility of this approach is demonstrated for a spread-out Bragg peak in a water target.
A swarm intelligence based memetic algorithm for task allocation in distributed systems
NASA Astrophysics Data System (ADS)
Sarvizadeh, Raheleh; Haghi Kashani, Mostafa
2012-01-01
This paper proposes a swarm intelligence based memetic algorithm for task allocation and scheduling in distributed systems. Task scheduling in distributed systems is known to be an NP-complete problem; hence, many genetic algorithms have been proposed to search for optimal solutions in the entire solution space. However, these existing approaches scan the entire solution space without techniques that can reduce the complexity of the optimization, and their excessive scheduling time is their main shortcoming. Therefore, in this paper a memetic algorithm is used to address this shortcoming. To balance load efficiently, Bee Colony Optimization (BCO) is applied as the local search in the proposed memetic algorithm. Extensive experimental results demonstrate that the proposed method outperforms the existing GA-based method in terms of CPU utilization.
NASA Astrophysics Data System (ADS)
Joe, Hidenori; Akiyama, Toru; Nakamura, Kohji; Kanisawa, Kiyoshi; Ito, Tomonori
2007-04-01
The structural stability of the InAs stacking-fault tetrahedron (SFT) in InAs/GaAs(1 1 1) is theoretically investigated. Using an empirical interatomic potential, cohesive energies are calculated for three types of InAs/GaAs(1 1 1) system: coherent InAs, and relaxed InAs with either the SFT or misfit dislocations (MDs). The calculated results reveal that InAs with the SFT is more favorable than coherent InAs beyond a film thickness of 21 monolayers (MLs). This critical film thickness of 21 ML is comparable with that of 8 ML for MD generation. This suggests that the SFT appears in InAs thin-film layers instead of MDs, as a result of lowering the strain energy accumulated in the InAs layers.
Experience with OMCVD production of GaAs solar cells
NASA Technical Reports Server (NTRS)
Yeh, Y. C. M.; Iles, P. A.; Ho, P.; Ling, K. S.
1985-01-01
The projected promise of the OMCVD approach, i.e., to make high efficiency GaAs space cells, has been demonstrated. The properties and control of the deposited GaAs and AlGaAs layers and the uniformity of the post layer processing have been most satisfactory. In particular the control of the critical thin layers (p-GaAs, p-AlGaAs) has been impressive. Experience has also been gained in routine areas, connected with continuous operation at high capacity. There are still a few areas for improvement, to further increase capacity, and to anticipate and prevent mechanical equipment problems.
Gao, Ting; Shi, Li-Li; Li, Hai-Bin; Zhao, Shan-Shan; Li, Hui; Sun, Shi-Ling; Su, Zhong-Min; Lu, Ying-Hua
2009-07-07
The combination of genetic algorithm and back-propagation neural network correction approaches (GABP) has successfully improved the accuracy of calculated absorption energies. In this paper, the absorption energies of 160 organic molecules are corrected to test this method. First, GABP1 is introduced to determine the quantitative relationship between the experimental results and calculations obtained using quantum chemical methods. After GABP1 correction, the root-mean-square (RMS) deviations of the calculated absorption energies decrease from 0.32, 0.95 and 0.46 eV to 0.14, 0.19 and 0.18 eV for the B3LYP/6-31G(d), B3LYP/STO-3G and ZINDO methods, respectively. The corrected results of B3LYP/6-31G(d)-GABP1 are in good agreement with experiment. Then, GABP2 is introduced to determine the quantitative relationship between the results of the B3LYP/6-31G(d)-GABP1 method and the calculations of the lower-accuracy methods (B3LYP/STO-3G and ZINDO). After GABP2 correction, the RMS deviations of the calculated absorption energies decrease to 0.20 and 0.19 eV for the B3LYP/STO-3G and ZINDO methods, respectively. The RMS deviations after GABP1 and GABP2 correction are thus similar for the B3LYP/STO-3G and ZINDO methods. B3LYP/6-31G(d)-GABP1 is therefore the better method for predicting absorption energies and can serve as an approximation to experiment where experimental results are unavailable or uncertain. It may also be used to predict the absorption energies of larger organic molecules that are inaccessible both to experiment and to high-accuracy theoretical methods with larger basis sets. The performance of the method is demonstrated by application to the absorption energy of the aldehyde carbazole precursor.
GADISI - Genetic Algorithms Applied to the Automatic Design of Integrated Spiral Inductors
NASA Astrophysics Data System (ADS)
Pereira, Pedro; Fino, M. Helena; Coito, Fernando; Ventim-Neves, Mário
This work introduces a tool for the optimization of CMOS integrated spiral inductors. The main objective of this tool is to offer designers a first approach to the determination of the inductor layout parameters. The core of the tool is a Genetic Algorithm (GA) optimization procedure in which technology constraints on the inductor layout parameters are considered. Further constraints reflecting inductor design heuristics are also accounted for. Since the layout parameters are inherently discrete due to technology and topology constraints, discrete-variable optimization techniques are used. The Matlab GA toolbox is used, and the modifications to the GA functions that yield technologically feasible solutions are presented. For the sake of efficiency and simplicity, the pi-model is used to characterize the inductor. The validity of the design results obtained with the tool is checked against circuit simulation with ASITIC.
Bourke, Alan K; Klenk, Jochen; Schwickert, Lars; Aminian, Kamiar; Ihlen, Espen A F; Mellone, Sabato; Helbostad, Jorunn L; Chiari, Lorenzo; Becker, Clemens
2016-08-01
Automatic fall detection will promote independent living and reduce the consequences of falls in the elderly by ensuring that people can confidently live safely at home for longer. In laboratory studies, inertial sensor technology has been shown to be capable of distinguishing falls from normal activities. However, fewer than 7% of fall-detection algorithm studies have used fall data recorded from elderly people in real life. The FARSEEING project has compiled a database of real-life falls from elderly people, to gain new knowledge about fall events and to develop fall-detection algorithms that combat the problems associated with falls. We extracted 12 different kinematic, temporal and kinetic features from a data-set of 89 real-world falls and 368 activities of daily living. Using the extracted features we applied machine-learning techniques and produced a selection of algorithms based on different feature combinations. The best algorithm employs 10 different features and produced a sensitivity of 0.88 and a specificity of 0.87 in classifying falls correctly. This algorithm can be used to distinguish real-world falls from normal activities of daily living using a sensor consisting of a tri-axial accelerometer and tri-axial gyroscope located at L5.
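The feature-then-classify pipeline can be illustrated with a toy version: two hand-picked accelerometer features (impact peak and post-impact stillness) and threshold values that are assumptions for illustration; the paper's 10-feature trained algorithm is not reproduced here.

```python
import math

def magnitude(sample):
    """Euclidean norm of one tri-axial accelerometer sample (in g)."""
    ax, ay, az = sample
    return math.sqrt(ax * ax + ay * ay + az * az)

def extract_features(window):
    # Two of the kinematic features one might derive from a tri-axial
    # accelerometer: the impact peak, and the variance of the signal
    # after the peak (a proxy for lying still after a fall).
    mags = [magnitude(s) for s in window]
    peak = max(mags)
    k = mags.index(peak)
    tail = mags[k + 1:] or mags[-1:]
    tail_mean = sum(tail) / len(tail)
    tail_var = sum((v - tail_mean) ** 2 for v in tail) / len(tail)
    return {"peak_g": peak, "tail_var": tail_var}

def is_fall(feat, peak_thresh=2.5, still_thresh=0.05):
    # Thresholds are illustrative assumptions, not trained values.
    return feat["peak_g"] > peak_thresh and feat["tail_var"] < still_thresh
```

A real detector would combine many such features in a trained classifier rather than fixed thresholds.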
Artificial immune algorithm for multi-depot vehicle scheduling problems
NASA Astrophysics Data System (ADS)
Wu, Zhongyi; Wang, Donggen; Xia, Linyuan; Chen, Xiaoling
2008-10-01
In the fast-developing fields of logistics and supply-chain management, one of the key problems in a decision-support system is how to arrange the supplier-to-customer assignment for many customers and suppliers and produce a detailed supply schedule under a set of constraints. Solutions to the multi-depot vehicle scheduling problem (MDVSP) help solve this problem in transportation applications. The objective of the MDVSP is to minimize the total distance covered by all vehicles, which can be regarded as delivery cost or time consumption. The MDVSP is NP-hard and cannot be solved to optimality within polynomially bounded computation time. Many different approaches have been developed to tackle the MDVSP, such as exact algorithms (EA), the one-stage approach (OSA), the two-phase heuristic method (TPHM), tabu search (TSA), genetic algorithms (GA) and hierarchical multiplex structure (HIMS). Most of these methods are time-consuming and run a high risk of converging to a local optimum. In this paper, a new search algorithm based on Artificial Immune Systems (AIS), which are inspired by vertebrate immune systems, is proposed to solve the MDVSP. The proposed AIS algorithm is tested with 30 customers and 6 vehicles located in 3 depots. Experimental results show that the artificial immune system algorithm is an effective and efficient method for solving the MDVSP.
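The clonal-selection loop at the heart of an AIS can be sketched on a simplified depot-assignment version of the problem (capacity and routing constraints omitted; the instance, operators and parameters are illustrative assumptions, not the paper's):

```python
import math
import random

def total_cost(assign, customers, depots):
    """Sum of customer-to-assigned-depot distances."""
    return sum(math.dist(customers[i], depots[d]) for i, d in enumerate(assign))

def clonal_selection(customers, depots, pop=12, gens=80, n_clones=4, seed=7):
    rng = random.Random(seed)
    n, k = len(customers), len(depots)
    cells = [[rng.randrange(k) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        cells.sort(key=lambda a: total_cost(a, customers, depots))
        pool = cells[:4]                     # highest-affinity antibodies
        offspring = []
        for rank, ab in enumerate(pool):
            rate = 0.1 * (rank + 1)          # worse affinity -> heavier hypermutation
            for _ in range(n_clones):
                clone = [g if rng.random() > rate else rng.randrange(k) for g in ab]
                offspring.append(clone)
        # receptor editing: a couple of fresh random antibodies each generation
        newcomers = [[rng.randrange(k) for _ in range(n)] for _ in range(2)]
        cells = sorted(pool + offspring,
                       key=lambda a: total_cost(a, customers, depots))[:pop - 2] + newcomers
    return min(cells, key=lambda a: total_cost(a, customers, depots))
```

Cloning proportional to affinity, hypermutation inversely proportional to it, and random newcomers are the three signature AIS operators shown here.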
Optimizing remediation of an unconfined aquifer using a hybrid algorithm.
Hsiao, Chin-Tsai; Chang, Liang-Cheng
2005-01-01
We present a novel hybrid algorithm, integrating a genetic algorithm (GA) and constrained differential dynamic programming (CDDP), to achieve remediation planning for an unconfined aquifer. The objective function includes both fixed and dynamic operation costs. GA determines the primary structure of the proposed algorithm, and a chromosome therein implemented by a series of binary digits represents a potential network design. The time-varying optimal operation cost associated with the network design is computed by the CDDP, in which is embedded a numerical transport model. Several computational approaches, including a chromosome bookkeeping procedure, are implemented to alleviate computational loading. Additionally, case studies that involve fixed and time-varying operating costs for confined and unconfined aquifers, respectively, are discussed to elucidate the effectiveness of the proposed algorithm. Simulation results indicate that the fixed costs markedly affect the optimal design, including the number and locations of the wells. Furthermore, the solution obtained using the confined approximation for an unconfined aquifer may be infeasible, as determined by an unconfined simulation.
NASA Astrophysics Data System (ADS)
Sahoo, Sasmita; Jha, Madan K.
2017-03-01
Effective characterization of lithology is vital for the conceptualization of complex aquifer systems, which is a prerequisite for the development of reliable groundwater-flow and contaminant-transport models. However, such information is often limited for most groundwater basins. This study explores the usefulness and potential of a hybrid soft-computing framework; a traditional artificial neural network with gradient descent-momentum training (ANN-GDM) and a traditional genetic algorithm (GA) based ANN (ANN-GA) approach were developed and compared with a novel hybrid self-organizing map (SOM) based ANN (SOM-ANN-GA) method for the prediction of lithology at a basin scale. This framework is demonstrated through a case study involving a complex multi-layered aquifer system in India, where well-log sites were clustered on the basis of sand-layer frequencies; within each cluster, subsurface layers were reclassified into four depth classes based on the maximum drilling depth. ANN models for each depth class were developed using each of the three approaches. Of the three, the hybrid SOM-ANN-GA models were able to recognize incomplete geologic pattern more reasonably, followed by ANN-GA and ANN-GDM models. It is concluded that the hybrid soft-computing framework can serve as a promising tool for characterizing lithology in groundwater basins with missing lithologic patterns.
Messy genetic algorithms: Recent developments
Kargupta, H.
1996-09-01
Messy genetic algorithms define a rare class of algorithms that realize the need for detecting appropriate relations among members of the search domain in optimization. This paper reviews earlier work on messy genetic algorithms and describes some recent developments. It also describes the gene expression messy GA (GEMGA)--an O(Λ^κ(ℓ² + κ)) sample-complexity algorithm for the class of order-κ delineable problems (problems that can be solved by considering relations of order no higher than κ) of size ℓ and alphabet size Λ. Experimental results are presented to demonstrate the scalability of the GEMGA.
Energy characteristics of excitons in InGaN/GaN heterostructures
NASA Astrophysics Data System (ADS)
Usov, S. O.; Tsatsul'nikov, A. F.; Lundin, W. V.; Sakharov, A. V.; Zavarin, E. E.; Sinitsyn, M. A.; Ledentsov, N. N.
2008-04-01
The structure and optical properties of heterostructures containing ultra-thin InGaN layers with GaN or AlGaN barriers, grown by MOCVD, were investigated by photoluminescence and high-resolution X-ray diffraction (HRXRD) techniques. The exciton localization energy, Urbach energy and charge-carrier activation energies were obtained from analysis of the temperature dependences of the photoluminescence spectra for the In-rich areas (QDs). In these structures the In-rich areas are shown to appear in the ultrathin InGaN layers due to phase decomposition, which leads to exciton and carrier localization in fluctuation minima and prevents transport to nonradiative recombination centres. The indium composition in the InGaN QDs was obtained using a theoretical model that describes the electron transition energy as a function of the In-rich-area parameters. Parameters such as the deformation of the InGaN/GaN region and the layer thickness were determined from HRXRD. The suggested approach is expected to be an effective method for analyzing the optical properties of InGaN/GaN heterostructures.
Wanjun, Shuai; Xiuzhen, Dong; Feng, Fu; Fusheng, You; Xiaodong, Liu; Canhua, Xu
2005-01-01
Our work shows that Electrical Impedance Tomography (EIT) is promising for clinical image monitoring and that the back-projection algorithm of EIT can meet the preliminary requirements of real-time monitoring. To improve computation speed and image resolution, different implementations of the algorithm were tried in this paper. Moreover, it is shown that the impedance change due to not more than 50 ml of 0.9% physiological saline can be detected and imaged by our system. These results support our further work on image monitoring by EIT.
NASA Astrophysics Data System (ADS)
Stühler, Sven; Fleissner, Florian; Eberhard, Peter
2016-11-01
We present an extended particle model for the discrete element method that is tetrahedral in shape and capable of describing deformations. The deformations of the tetrahedral particles require a framework that interrelates the particle strains and resulting stresses; hence, adaptations from the finite element method were used. This links the two methods and allows material and simulation parameters to be described adequately and separately in each scope. Owing to the complexity arising from the non-spherical tetrahedral geometry, all possible contact combinations of vertices, edges and surfaces must be considered by the contact detection algorithm, and the deformations of the particles make contact evaluation even more challenging. Therefore, a robust contact detection algorithm based on an optimization approach that exploits temporal coherence is presented. This algorithm is suitable for general simplices in R^n. An evaluation of its robustness is performed using a numerical example. In order to create complex geometries, bonds between these deformable particles are introduced. This coupling via the tetrahedra faces allows the simulation of bonded deformable bodies composed of several particles. Numerical examples are presented and validated against results obtained from the same simulation setup modeled with the finite element method. The intention of these bonds is to enable the modeling of fracture and material failure; therefore, the bonds between the particles are not permanent and feature a release mechanism based on a predefined criterion.
Status of AlGaAs/GaAs heteroface solar cell technology
NASA Technical Reports Server (NTRS)
Rahilly, W. P.; Anspaugh, B.
1982-01-01
This paper reviews the various GaAs solar cell programs, past and ongoing, that are directed at bringing this technology to fruition. The discussion emphasizes space applications, both concentrator and flat-plate. The rationale for pursuing GaAs cell technology is given, and the different cell types (concentrator, flat-plate), approaches to fabricating the devices, the hybrid cells under investigation and approaches to reducing cell mass are summarized. The outlook for the use of GaAs cell technology is given within the context of space applications.
Long, Yi; Du, Zhi-jiang; Wang, Wei-dong; Dong, Wei
2016-01-01
A lower limb assistive exoskeleton is designed to help operators walk or carry payloads. The exoskeleton is required to shadow human motion intent accurately and compliantly to prevent incoordination. If the user's intention is estimated accurately, a precise position control strategy will improve collaboration between the user and the exoskeleton. In this paper, a hybrid position control scheme, combining sliding mode control (SMC) with a cerebellar model articulation controller (CMAC) neural network, is proposed to control the exoskeleton to react appropriately to human motion intent. A genetic algorithm (GA) is utilized to determine the optimal sliding surface and the sliding control law to improve performance of SMC. The proposed control strategy (SMC_GA_CMAC) is compared with three other types of approaches, that is, conventional SMC without optimization, optimal SMC with GA (SMC_GA), and SMC with CMAC compensation (SMC_CMAC), all of which are employed to track the desired joint angular position which is deduced from Clinical Gait Analysis (CGA) data. Position tracking performance is investigated with cosimulation using ADAMS and MATLAB/SIMULINK in two cases, of which the first case is without disturbances while the second case is with a bounded disturbance. The cosimulation results show the effectiveness of the proposed control strategy which can be employed in similar exoskeleton systems. PMID:27069353
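The sliding-mode part of the scheme can be sketched for a double-integrator joint model. Here the GA-tuned sliding surface and control law are replaced by hand-picked gains, and the CMAC compensator is omitted entirely; this is an assumption-laden toy, not the paper's controller.

```python
import math

def sat(s, phi=0.05):
    """Boundary-layer saturation used in place of sign() to avoid chattering."""
    return max(-1.0, min(1.0, s / phi))

def smc_track(lam=8.0, K=5.0, dt=0.001, T=3.0):
    # 1-DOF double integrator x'' = u tracking x_d(t) = sin t.
    # Sliding surface s = de + lam*e; control u = xdd_d - lam*de - K*sat(s).
    # lam and K are hand-picked here; the paper tunes them with a GA.
    x, dx = 0.5, 0.0          # start off the desired trajectory
    t = 0.0
    while t < T:
        xd, dxd, ddxd = math.sin(t), math.cos(t), -math.sin(t)
        e, de = x - xd, dx - dxd
        s = de + lam * e
        u = ddxd - lam * de - K * sat(s)
        dx += u * dt          # explicit Euler integration of the plant
        x += dx * dt
        t += dt
    return abs(x - math.sin(T))   # final tracking error
```

On the sliding surface the error obeys de = -lam*e, so the tracking error decays exponentially once s is driven to zero; the returned final error should be small.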
Tian, H; Liu, C; Gao, X D; Yao, W B
2013-03-01
Granulocyte colony-stimulating factor (G-CSF) is a cytokine widely used in cancer patients receiving high doses of chemotherapeutic drugs to prevent the chemotherapy-induced suppression of white blood cells. The production of recombinant G-CSF should be increased to meet the increasing market demand. This study aims to model and optimize the carbon source of auto-induction medium to enhance G-CSF production using artificial neural networks coupled with genetic algorithm. In this approach, artificial neural networks served as bioprocess modeling tools, and genetic algorithm (GA) was applied to optimize the established artificial neural network models. Two artificial neural network models were constructed: the back-propagation (BP) network and the radial basis function (RBF) network. The root mean square error, coefficient of determination, and standard error of prediction of the BP model were 0.0375, 0.959, and 8.49 %, respectively, whereas those of the RBF model were 0.0257, 0.980, and 5.82 %, respectively. These values indicated that the RBF model possessed higher fitness and prediction accuracy than the BP model. Under the optimized auto-induction medium, the predicted maximum G-CSF yield by the BP-GA approach was 71.66 %, whereas that by the RBF-GA approach was 75.17 %. These predicted values are in agreement with the experimental results, with 72.4 and 76.014 % for the BP-GA and RBF-GA models, respectively. These results suggest that RBF-GA is superior to BP-GA. The developed approach in this study may be helpful in modeling and optimizing other multivariable, non-linear, and time-variant bioprocesses.
A high-performance genetic algorithm: using traveling salesman problem as a case.
Tsai, Chun-Wei; Tseng, Shih-Pang; Chiang, Ming-Chao; Yang, Chu-Sing; Hong, Tzung-Pei
2014-01-01
This paper presents a simple but efficient algorithm for reducing the computation time of genetic algorithm (GA) and its variants. The proposed algorithm is motivated by the observation that genes common to all the individuals of a GA have a high probability of surviving the evolution and ending up being part of the final solution; as such, they can be saved away to eliminate the redundant computations at the later generations of a GA. To evaluate the performance of the proposed algorithm, we use it not only to solve the traveling salesman problem but also to provide an extensive analysis on the impact it may have on the quality of the end result. Our experimental results indicate that the proposed algorithm can significantly reduce the computation time of GA and GA-based algorithms while limiting the degradation of the quality of the end result to a very small percentage compared to traditional GA. PMID:24892038
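The central observation, that genes shared by every individual can be frozen to skip redundant work, can be expressed compactly (illustrative helpers under a position-based tour encoding, not the paper's implementation):

```python
def tour_length(tour, dist):
    """Cycle length of a tour given a symmetric distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def common_genes(population):
    # Positions at which every individual carries the same city. Following
    # the paper's observation, these genes are likely to survive into the
    # final solution, so later generations can freeze them and skip
    # crossover, mutation, and fitness work on the frozen positions.
    n = len(population[0])
    return {i: population[0][i]
            for i in range(n)
            if all(ind[i] == population[0][i] for ind in population)}
```

Once `common_genes` stabilizes, the effective chromosome length shrinks to the non-frozen positions, which is where the reported speedup comes from.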
Zhang, Lun; Zhang, Meng; Yang, Wenchen; Dong, Decun
2015-01-01
This paper presents the modelling and analysis of the capacity expansion of an urban road traffic network (ICURTN). A bilevel programming model is first employed to model the ICURTN, in which the utility of the entire network is maximized subject to the optimal utility of travelers' route choices. Then, an improved hybrid genetic algorithm integrated with the golden ratio (HGAGR) is developed to enhance the local search of simple genetic algorithms, and the proposed capacity expansion model is solved by the combination of the HGAGR and the Frank-Wolfe algorithm. Taking a traditional one-way network and a bidirectional network as the study cases, three numerical calculations are conducted to validate the presented model and algorithm, and the primary factors influencing the capacity expansion model are analyzed. The calculation results indicate that capacity expansion is an effective measure to enlarge the capacity of an urban road network, especially under a limited construction budget; the average computation time of the HGAGR is 122 seconds, which meets the real-time demand in the evaluation of road network capacity.
NASA Astrophysics Data System (ADS)
Chen, Junting; Lau, Vincent K. N.
2013-01-01
Max weighted queue (MWQ) control is a widely used cross-layer policy that achieves queue stability and reasonable delay performance. Most of the existing literature assumes that the optimal MWQ policy can be obtained instantaneously at every time slot. However, this assumption may be unrealistic in time-varying wireless systems, especially when there is no closed-form MWQ solution and iterative algorithms have to be applied to obtain the optimal solution. This paper investigates the convergence behavior and queue-delay performance of conventional MWQ iterations in which the channel state information (CSI) and queue state information (QSI) change on a timescale similar to that of the algorithm iterations. Our results are established by studying the stochastic stability of an equivalent virtual stochastic dynamic system (VSDS), to which an extended Foster-Lyapunov criterion is applied. We derive a closed-form delay bound for the wireless network in terms of the CSI fading rate and the sensitivity of the MWQ policy to CSI and QSI. Based on the equivalent VSDS, we propose a novel MWQ iterative algorithm with compensation to improve the tracking performance. We demonstrate that under some mild conditions, the proposed modified MWQ algorithm converges to the optimal MWQ control despite the time-varying CSI and QSI.
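A minimal MWQ simulation, assuming Bernoulli arrivals and a two-state channel (both modeling assumptions for illustration; the paper's iterative, imperfectly tracked version of the policy is not reproduced):

```python
import random

def mwq_schedule(queues, rates):
    # MWQ rule: serve the link maximizing the backlog-rate product q_i * r_i.
    return max(range(len(queues)), key=lambda i: queues[i] * rates[i])

def simulate(arrival_prob=0.4, n=3, slots=2000, seed=0):
    """Run MWQ over a time-varying channel and return the final queue lengths."""
    rng = random.Random(seed)
    q = [0] * n
    for _ in range(slots):
        rates = [rng.choice([1, 2]) for _ in range(n)]  # time-varying CSI
        k = mwq_schedule(q, rates)
        q[k] = max(0, q[k] - rates[k])                  # serve the chosen queue
        for i in range(n):
            if rng.random() < arrival_prob:             # Bernoulli arrivals
                q[i] += 1
    return q
```

With the offered load inside the capacity region, the MWQ rule keeps all backlogs bounded, which is the queue-stability property the paper analyzes under delayed CSI/QSI.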
Abduallah, Yasser; Turki, Turki; Byron, Kevin; Du, Zongxuan; Cervantes-Cervantes, Miguel; Wang, Jason T L
2017-01-01
Gene regulation is a series of processes that control gene expression and its extent. The connections among genes and their regulatory molecules, usually transcription factors, and a descriptive model of such connections are known as gene regulatory networks (GRNs). Elucidating GRNs is crucial to understand the inner workings of the cell and the complexity of gene interactions. To date, numerous algorithms have been developed to infer gene regulatory networks. However, as the number of identified genes increases and the complexity of their interactions is uncovered, networks and their regulatory mechanisms become cumbersome to test. Furthermore, prodding through experimental results requires an enormous amount of computation, resulting in slow data processing. Therefore, new approaches are needed to expeditiously analyze copious amounts of experimental data resulting from cellular GRNs. To meet this need, cloud computing is promising as reported in the literature. Here, we propose new MapReduce algorithms for inferring gene regulatory networks on a Hadoop cluster in a cloud environment. These algorithms employ an information-theoretic approach to infer GRNs using time-series microarray data. Experimental results show that our MapReduce program is much faster than an existing tool while achieving slightly better prediction accuracy than the existing tool. PMID:28243601
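The information-theoretic core, thresholding pairwise mutual information between expression time series, can be sketched serially; the Hadoop/MapReduce distribution of these pairwise computations is omitted, and the binning scheme and threshold are illustrative assumptions.

```python
import math
from collections import Counter

def mutual_info(x, y, bins=2):
    """Mutual information (in bits) between two equal-length series,
    after equal-width discretization into `bins` levels."""
    def disc(v):
        lo, hi = min(v), max(v)
        return [min(bins - 1, int((val - lo) / (hi - lo + 1e-12) * bins)) for val in v]
    dx, dy = disc(x), disc(y)
    n = len(x)
    px, py, pxy = Counter(dx), Counter(dy), Counter(zip(dx, dy))
    return sum(c / n * math.log2((c / n) / (px[a] / n * py[b] / n))
               for (a, b), c in pxy.items())

def infer_grn(expr, threshold=0.3):
    # expr: {gene: time series}; emit an edge when pairwise MI exceeds
    # the threshold. In a MapReduce setting, each gene pair would be a
    # map task and the thresholding a reduce step.
    genes = sorted(expr)
    return {(g, h) for i, g in enumerate(genes) for h in genes[i + 1:]
            if mutual_info(expr[g], expr[h]) > threshold}
```

The pairwise MI computation is embarrassingly parallel, which is why it distributes naturally over a Hadoop cluster.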
Bennett, Herbert S.; Filliben, James J.
2002-01-01
A critical issue identified in both the technology roadmap from the Optoelectronics Industry Development Association and the roadmaps from the National Electronics Manufacturing Initiative, Inc. is the need for predictive computer simulations of processes, devices, and circuits. The goal of this paper is to respond to this need by representing the extensive amounts of theoretical data for transport properties in the multi-dimensional space of mole fractions of AlAs in Ga1−xAlxAs, dopant densities, and carrier densities in terms of closed-form analytic expressions. Representing such data in terms of closed-form analytic expressions is a significant challenge that arises in developing computationally efficient simulations of microelectronic and optoelectronic devices. In this paper, we present a methodology to achieve the above goal for a class of numerical data in the bounded two-dimensional space of mole fraction of AlAs and dopant density. We then apply this methodology to obtain closed-form analytic expressions for the effective intrinsic carrier concentrations at 300 K in n-type and p-type Ga1−xAlxAs as functions of the mole fraction x of AlAs between 0.0 and 0.3. In these calculations, the donor density ND for n-type material varies between 1016 cm−3 and 1019 cm−3 and the acceptor density NA for p-type materials varies between 1016 cm−3 and 1020 cm−3. We find that p-type Ga1−xAlxAs presents much greater challenges for obtaining acceptable analytic fits whenever acceptor densities are sufficiently near the Mott transition because of increased scatter in the numerical computer results for solutions to the theoretical equations. The Mott transition region in p-type Ga1−xAlxAs is of technological significance for mobile wireless communications systems. This methodology and its associated principles, strategies, regression analyses, and graphics are expected to be applicable to other problems beyond the specific case of effective intrinsic carrier concentrations.
Cakar, Tarik; Koker, Rasit
2015-01-01
A particle swarm optimization (PSO) algorithm has been used to solve the single-machine total weighted tardiness (SMTWT) problem with unequal release dates. To find the best solutions, three different solution approaches have been used. To prepare the sub-hybrid solution system, genetic algorithms (GA) and simulated annealing (SA) have been used. In the sub-hybrid system (GA and SA), GA obtains a solution at some stage; that solution is taken by SA and used as an initial solution, and when SA finds a better solution it stops working and gives that solution back to GA. After GA finishes, the obtained solution is given to PSO, which searches for a better solution and then sends its result back to GA. The three solution systems thus work together. The neurohybrid system uses PSO as the main optimizer, with SA and GA as local search tools; at each stage, the local optimizers perform exploitation on the best particle. In addition to the local search tools, a neurodominance rule (NDR) has been used to improve the performance of the final solution of the hybrid PSO system. The NDR checks sequential jobs according to the total weighted tardiness factor. The complete system is named the neurohybrid-PSO solution system. PMID:26221134
Belmonte, Irene; Barrecheguren, Miriam; López-Martínez, Rosa M; Esquinas, Cristina; Rodríguez, Esther; Miravitlles, Marc; Rodríguez-Frías, Francisco
2016-01-01
Background and objectives Alpha-1-antitrypsin deficiency (AATD) is associated with a high risk for the development of early-onset emphysema and liver disease. A large majority of subjects with severe AATD carry the ZZ genotype, which can be easily detected. Another rare pathologic variant, the Mmalton allele, causes a deficiency similar to that of the Z variant, but it is not easily recognizable and its detection seems to be underestimated. Therefore, we have included a rapid allele-specific genotyping assay for the detection of the Mmalton variant in the diagnostic algorithm of AATD used in our laboratory. The objective of this study was to test the usefulness of this new algorithm for Mmalton detection. Materials and methods We performed a retrospective revision of all AATD determinations carried out in our laboratory over 2 years using the new diagnostic algorithm. Samples with a phenotype showing one or two M alleles and AAT levels discordant with that phenotype were analyzed using the Mmalton allele-specific genotyping assay. Results We detected 49 samples with discordant AAT levels; 44 had the MM and five the MS phenotype. In nine of these samples, a single rare Mmalton variant was detected. During the study period, two family screenings were performed and four additional Mmalton variants were identified. Conclusion The incorporation of the Mmalton allele-specific genotyping assay in the diagnostic algorithm of AATD resulted in a faster and cheaper method to detect this allele and avoided a significant delay in diagnosis when a sequencing assay was required. This methodology can be adapted to other rare variants. Standardized algorithms are required to obtain conclusive data of the real incidence of rare AAT alleles in each region. PMID:27877030
Genetic algorithms in conceptual design of a light-weight, low-noise, tilt-rotor aircraft
NASA Technical Reports Server (NTRS)
Wells, Valana L.
1996-01-01
This report outlines research accomplishments in the area of using genetic algorithms (GA) for the design and optimization of rotorcraft. It discusses the genetic algorithm as a search and optimization tool, outlines a procedure for using the GA in the conceptual design of helicopters, and applies the GA method to the acoustic design of rotors.
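The GA search loop underlying such design studies can be sketched as follows. This is a generic illustration, not the report's implementation: the rotorcraft design variables and acoustic objective are replaced by a hypothetical toy fitness, and the population size, mutation rate, and selection scheme are illustrative assumptions.

```python
import random

# Minimal genetic-algorithm sketch. A candidate "design" is a list of
# real-valued parameters in [0, 1]; the real GA would evaluate weight,
# noise, and performance objectives instead of this toy fitness.

def fitness(x):
    # Hypothetical objective: prefer parameters near 0.5 (a stand-in for
    # the trade-offs a real design objective would encode).
    return -sum((xi - 0.5) ** 2 for xi in x)

def ga(n_params=5, pop_size=40, generations=60, p_mut=0.1, seed=1):
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        elite = scored[: pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_params)       # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(n_params):              # per-gene mutation
                if rng.random() < p_mut:
                    child[i] = rng.random()
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = ga()
```

Because the elite half of each generation is carried over unchanged, the best fitness found is non-decreasing across generations.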
Davis, Gordon; Kobayashi, Masatomo; Phinney, Bernard O.; Lange, Theo; Croker, Steve J.; Gaskin, Paul; MacMillan, Jake
1999-01-01
[17-14C]-Labeled GA15, GA24, GA25, GA7, and 2,3-dehydro-GA9 were separately injected into normal, dwarf-1 (d1), and dwarf-5 (d5) seedlings of maize (Zea mays L.). Purified radioactive metabolites from the plant tissues were identified by full-scan gas chromatography-mass spectrometry and Kovats retention index data. The metabolites from GA15 were GA44, GA19, GA20, GA113, and GA15-15,16-ene (artifact?). GA24 was metabolized to GA19, GA20, and GA17. The metabolites from GA25 were GA17, GA25 16α,17-H2-17-OH, and HO-GA25 (hydroxyl position not determined). GA7 was metabolized to GA30, GA3, isoGA3 (artifact?), and trace amounts of GA7-diene-diacid (artifact?). 2,3-Dehydro-GA9 was metabolized to GA5, GA7 (trace amounts), 2,3-dehydro-GA10 (artifact?), GA31, and GA62. Our results provide additional in vivo evidence of a metabolic grid in maize (i.e. pathway convergence). The grid connects members of a putative, non-early 3,13-hydroxylation branch pathway to the corresponding members of the previously documented early 13-hydroxylation branch pathway. The inability to detect the sequence GA12 → GA15 → GA24 → GA9 indicates that the non-early 3,13-hydroxylation pathway probably plays a minor role in the origin of bioactive gibberellins in maize. PMID:10557253
Focusing through a turbid medium by amplitude modulation with genetic algorithm
NASA Astrophysics Data System (ADS)
Dai, Weijia; Peng, Ligen; Shao, Xiaopeng
2014-05-01
Multiple scattering of light in opaque materials such as white paint and human tissue forms a volume speckle field that greatly reduces imaging depth and degrades imaging quality. A novel approach is proposed to focus light through a turbid medium from speckle patterns using amplitude modulation with a genetic algorithm (GA). Compared with phase modulation, the amplitude modulation approach, in which each element of the spatial light modulator (SLM) is either zero or one, is much easier to implement. Theoretical and experimental results show that the GA is better suited to low signal-to-noise ratio (SNR) environments than existing amplitude-control algorithms such as binary amplitude modulation. The circular Gaussian distribution model and Rayleigh-Sommerfeld diffraction theory are employed in our simulations to describe the turbid medium and the light propagation between optical devices, respectively. It is demonstrated that the GA technique achieves a higher overall enhancement, converges much faster, and outperforms the other algorithms at high noise levels. Focusing through a turbid medium has potential applications in the observation of cells and protein molecules in biological tissues and other micro/nanoscale structures.
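The setting can be illustrated with a toy simulation: the medium is reduced to a fixed complex Gaussian transmission vector (circular Gaussian statistics, as in the abstract's simulation model), a binary SLM mask either passes or blocks each segment, and a small GA searches for the mask maximizing focal intensity. All sizes and GA settings below are illustrative assumptions, not the authors' values.

```python
import numpy as np

# Toy binary-amplitude focusing: focal field = t . a for a fixed complex
# Gaussian transmission vector t and a binary mask a in {0,1}^N.
rng = np.random.default_rng(0)
N = 128
t = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)

def intensity(mask):
    return np.abs(t @ mask) ** 2

def ga_focus(pop_size=30, generations=80, p_mut=0.02):
    pop = rng.integers(0, 2, size=(pop_size, N))
    for _ in range(generations):
        order = np.argsort([-intensity(m) for m in pop])
        pop = pop[order]
        elite = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(elite)):
            a = elite[rng.integers(len(elite))]
            b = elite[rng.integers(len(elite))]
            cross = rng.integers(0, 2, size=N).astype(bool)   # uniform crossover
            child = np.where(cross, a, b)
            flip = rng.random(N) < p_mut                      # bit-flip mutation
            child = np.where(flip, 1 - child, child)
            children.append(child)
        pop = np.vstack([elite] + children)
    return max(pop, key=intensity)

best = ga_focus()
baseline = np.mean([intensity(rng.integers(0, 2, size=N)) for _ in range(200)])
enhancement = intensity(best) / baseline   # focus intensity vs. random mask
```

The enhancement ratio at the end is the same figure of merit the amplitude-control literature reports: optimized focal intensity over the mean speckle intensity of unoptimized masks.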
Two Hybrid Algorithms for Multiple Sequence Alignment
NASA Astrophysics Data System (ADS)
Naznin, Farhana; Sarker, Ruhul; Essam, Daryl
2010-01-01
In order to design life-saving drugs, such as cancer drugs, the structures of proteins or DNA must be determined accurately. These structures depend on multiple sequence alignment (MSA), which is used to infer the accurate structure of protein and DNA sequences from existing approximately correct sequences. To overcome the overly greedy nature of the well-known global progressive alignment method for multiple sequence alignment, we propose two different algorithms in this paper: one using an iterative approach with a progressive alignment method (PAMIM) and the second using a genetic algorithm with a progressive alignment method (PAMGA). Both methods start from a k-mer distance table to generate a single guide tree. In the iterative approach, we introduce two new techniques: the first generates guide trees with randomly selected sequences, and the second shuffles the sequences inside that tree. The output of the tree is a multiple sequence alignment, which is evaluated by the sum-of-pairs method (SPM) using the real-valued scores from PAM250. In our second, GA-based approach, these two techniques are used to generate the initial population, and two different genetic operators are implemented for crossover and mutation. To test the performance of the two algorithms, we compared them with the existing well-known methods T-Coffee, MUSCLE, MAFFT, and ProbCons on the BAliBASE benchmarks. The experimental results show that the first algorithm works well in some situations where other existing methods have difficulty obtaining better solutions. The second method compares well with the existing methods in all situations and shows better performance than the first one.
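Both methods begin from a k-mer distance table for guide-tree construction. A minimal sketch of one common k-mer distance (one minus the fraction of shared k-mers, MUSCLE-style) is shown below; the exact distance definition in the paper may differ in detail.

```python
# k-mer distance sketch: sequences sharing many k-mers are considered
# close, so the guide tree groups them early.

def kmer_counts(seq, k=3):
    counts = {}
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        counts[kmer] = counts.get(kmer, 0) + 1
    return counts

def kmer_distance(s1, s2, k=3):
    c1, c2 = kmer_counts(s1, k), kmer_counts(s2, k)
    shared = sum(min(c1.get(w, 0), c2.get(w, 0)) for w in c1)
    denom = min(len(s1), len(s2)) - k + 1   # max possible shared k-mers
    return 1.0 - shared / denom

d_close = kmer_distance("ACGTACGTAC", "ACGTACGTAA")   # near-identical pair
d_far = kmer_distance("ACGTACGTAC", "GGGGCCCCGG")     # unrelated pair
```

Unlike alignment-based distances, this measure needs no alignment at all, which is why k-mer tables are a cheap way to seed a guide tree.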
NASA Astrophysics Data System (ADS)
Hwang, Seho; Shin, Jehyun
2013-04-01
Shale gas evaluation can be summarized as the selection of sweet spot intervals in the vertical borehole and the determination of hydraulic fracturing zones in the horizontal borehole. The brittleness index used in selecting hydraulic fracturing intervals is calculated from the dynamic Young's modulus and Poisson's ratio derived from wireline logging and MWD/LWD data. Because Young's modulus and Poisson's ratio are calculated from sonic and density log data, MWD/LWD in the horizontal borehole should include a sonic log to estimate the dynamic elastic constants. This paper proposes a practical method to estimate the elastic moduli based on Passey's algorithm when an LWD sonic log is not available in the horizontal borehole. To estimate the TOC (total organic carbon) from the sonic-resistivity, density-resistivity, and neutron-resistivity logs using Passey's algorithm, we use the relationship between Delta log R values and core-derived LOM (level of maturity) data. Dynamic elastic constants in the horizontal well, i.e., in sweet spot zones, can be estimated using the relationships between P-wave velocity and elastic constants in the vertical well, together with the similarity among the Delta log R values calculated from the sonic-resistivity, density-resistivity, and neutron-resistivity logs. From the two Passey relationships in the vertical well (sonic-resistivity and density-resistivity), we derive the P-wave velocity by equating the two expressions on the basis of this similarity. The dynamic elastic constants then follow from the relationships between P-wave velocity and elastic constants, and finally the brittleness index is estimated from the Young's modulus and Poisson's ratio. We expect this practical method to be effective when LWD sonic logging data are not available in the horizontal borehole.
Alshamlan, Hala; Badr, Ghada; Alohali, Yousef
2015-01-01
An artificial bee colony (ABC) is a relatively recent swarm intelligence optimization approach. In this paper, we propose the first attempt at applying the ABC algorithm to the analysis of microarray gene expression profiles. In addition, we propose an innovative feature selection approach, mRMR-ABC, which combines the minimum redundancy maximum relevance (mRMR) criterion with an ABC algorithm to select informative genes from microarray profiles. The approach uses a support vector machine (SVM) to measure the classification accuracy for the selected genes. We evaluate the performance of the proposed mRMR-ABC algorithm by conducting extensive experiments on six binary and multiclass gene expression microarray datasets. Furthermore, we compare mRMR-ABC with previously known techniques, reimplementing two of them with the same parameters for the sake of a fair comparison: mRMR combined with a genetic algorithm (mRMR-GA) and mRMR combined with a particle swarm optimization algorithm (mRMR-PSO). The experimental results show that the proposed mRMR-ABC algorithm achieves accurate classification using a small number of predictive genes on all tested datasets when compared with previously suggested methods. This shows that mRMR-ABC is a promising approach for gene selection and cancer classification problems.
NASA Astrophysics Data System (ADS)
Minsker, B. S.; Zimmer, A. L.; Ostfeld, A.; Schmidt, A.
2014-12-01
Enabling real-time decision support, particularly under conditions of uncertainty, requires computationally efficient algorithms that can rapidly generate recommendations. In this paper, a suite of model predictive control (MPC) genetic algorithms is developed and tested offline to explore their value for reducing combined sewer overflows (CSOs) during real-time use in a deep-tunnel sewer system. MPC approaches include the micro-GA, the probability-based compact GA, and domain-specific GA methods that reduce the number of decision-variable values analyzed within the sewer hydraulic model, thus reducing the algorithm search space. The minimum fitness and constraint values achieved by each GA approach, as well as the computational times required to reach them, are compared against a baseline GA with a large population size and long convergence time. Optimization results for a subset of the Chicago combined sewer system indicate that GA variations with coarse decision-variable representation, eventually transitioning to the entire range of decision-variable values, are most efficient at addressing the CSO control problem. Although diversity-enhancing micro-GAs evaluate a larger search space and exhibit shorter convergence times, these representations do not reach the minimum fitness and constraint values. The domain-specific GAs prove to be the most efficient and are used to test CSO sensitivity to energy costs, CSO penalties, and pressurization constraint values. The results show that CSO volumes are highly dependent on the tunnel pressurization constraint, with reductions of 13% to 77% possible under less conservative operational strategies. Because current management practices may not account for varying costs at CSO locations and electricity rate changes in the summer and winter, the sensitivity of the results is evaluated for variable seasonal and diurnal CSO penalty costs and electricity-related system maintenance costs, as well as for different sluice gate constraint levels. These findings indicate
Optimal fractional delay-IIR filter design using cuckoo search algorithm.
Kumar, Manjeet; Rawat, Tarun Kumar
2015-11-01
This paper applies a novel global meta-heuristic optimization algorithm, the cuckoo search algorithm (CSA), to determine the optimal coefficients of a fractional delay-infinite impulse response (FD-IIR) filter that approaches the ideal frequency response characteristics. Since fractional delay-IIR filter design is a multi-modal optimization problem, it cannot be solved efficiently using conventional gradient-based optimization techniques. A weighted least squares (WLS) based fitness function is used to improve the performance substantially. FD-IIR filters of different orders have been designed using the CSA, and the simulation results of the proposed CSA-based approach are compared with those of well-accepted evolutionary algorithms, namely the genetic algorithm (GA) and particle swarm optimization (PSO). The simulation and statistical results affirm that the proposed CSA approach outperforms GA and PSO, not only in convergence rate but also in the optimal performance of the designed FD-IIR filter (i.e., smaller magnitude error, smaller phase error, higher percentage improvement in magnitude and phase error, and faster convergence). The absolute magnitude and phase errors obtained for the designed 5th-order FD-IIR filter are as low as 0.0037 and 0.0046, respectively. The percentage improvements in magnitude error for the CSA-based 5th-order FD-IIR design with respect to GA and PSO are 80.93% and 74.83%, respectively, and the improvements in phase error are 76.04% and 71.25%, respectively.
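The cuckoo search loop combines Lévy-flight exploration with abandonment of the worst nests. The sketch below (after Yang and Deb's formulation) applies it to a toy sphere function rather than the paper's FD-IIR objective; the nest count, abandonment fraction pa, and step scale alpha are illustrative choices, not the authors' settings.

```python
import math
import random

def levy_step(rng, beta=1.5):
    # Mantegna's algorithm for drawing a Lévy-stable step length.
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.gauss(0.0, sigma)
    v = rng.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)

def cuckoo_search(objective, dim, n_nests=25, iters=200, pa=0.25, alpha=0.1, seed=3):
    rng = random.Random(seed)
    nests = [[rng.uniform(-2.0, 2.0) for _ in range(dim)] for _ in range(n_nests)]
    best = min(nests, key=objective)
    for _ in range(iters):
        for i in range(n_nests):
            # New candidate via a Lévy flight biased toward the current best.
            new = [x + alpha * levy_step(rng) * (x - b) + alpha * rng.gauss(0.0, 1.0)
                   for x, b in zip(nests[i], best)]
            if objective(new) < objective(nests[i]):
                nests[i] = new          # greedy replacement of the host nest
        nests.sort(key=objective)
        # Abandon a fraction pa of the worst nests and rebuild them randomly.
        for i in range(int((1 - pa) * n_nests), n_nests):
            nests[i] = [rng.uniform(-2.0, 2.0) for _ in range(dim)]
        best = min(nests + [best], key=objective)
    return best

sphere = lambda x: sum(xi * xi for xi in x)   # toy stand-in objective
sol = cuckoo_search(sphere, dim=4)
```

In the filter-design setting, the objective would instead be a WLS error between the candidate FD-IIR frequency response and the ideal one.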
Double Motor Coordinated Control Based on Hybrid Genetic Algorithm and CMAC
NASA Astrophysics Data System (ADS)
Cao, Shaozhong; Tu, Ji
A novel hybrid cerebellar model articulation controller (CMAC) and online adaptive genetic algorithm (GA) controller is introduced to control two brushless DC motors (BLDCMs) applied in a biped robot. The genetic algorithm simulates random learning among the individuals of a group, while the CMAC simulates the self-learning of an individual. To validate the ability and superiority of the novel algorithm, experiments have been conducted in MATLAB/Simulink. An analysis of GA, hybrid GA-CMAC, and CMAC feed-forward control is also given. The results show that the torque ripple of the coordinated control system is eliminated by the hybrid GA-CMAC algorithm.
NASA Astrophysics Data System (ADS)
Kerkhoff, A.; Ling, H.
2009-12-01
We apply Pareto genetic algorithm (GA) optimization to the design of antenna elements for use in the Long Wavelength Array (LWA), a large, low-frequency radio telescope currently under development. By manipulating the antenna geometry, the Pareto GA simultaneously optimizes the received Galactic background or “sky” noise level and the radiation patterns of the antenna over all frequencies. Geometrical constraints are handled explicitly in the GA to guarantee realizability and to impart control over the monetary cost of the generated designs. The antenna elements considered are broadband planar dipoles arranged horizontally over the ground. It is demonstrated that the Pareto GA approach generates a set of designs that exhibit a wide range of trade-offs between the two design objectives and satisfy all constraints. Multiple GA executions are performed to determine how antenna performance trade-offs are affected by different geometrical constraint values, feed impedance values, radiating element shapes and orientations, and ground conditions. Two different planar dipole antenna designs are constructed, and antenna input impedance and sky noise drift scan measurements are performed to validate the results of the GA.
Liu, Shu-Yen; Sheu, J K; Lin, Yu-Chuan; Chen, Yu-Tong; Tu, S J; Lee, M L; Lai, W C
2013-11-04
Hydrogen generation through water splitting by n-InGaN working electrodes with a bias generated by a GaAs solar cell was studied. Instead of using an external bias provided by a power supply, a GaAs-based solar cell was used as the driving force to increase the rate of hydrogen production. The water-splitting system was tuned using different approaches to set the operating point to the maximum power point of the GaAs solar cell. The approaches included changing the electrolytes, varying the light intensity, and introducing immersed ITO ohmic contacts on the working electrodes. As a result, the hybrid system comprising InGaN-based working electrodes and GaAs solar cells operating under concentrated illumination could facilitate efficient water splitting.
Photocorrosion metrology of photoluminescence emitting GaAs/AlGaAs heterostructures
NASA Astrophysics Data System (ADS)
Aithal, Srivatsa; Liu, Neng; Dubowski, Jan J.
2017-01-01
The high sensitivity of the photoluminescence (PL) effect to surface states and chemical reactions on the surfaces of PL emitting semiconductors has been attractive for monitoring photo-induced microstructuring of such materials. To address etching at nano-scale removal rates, we have investigated mechanisms of photocorrosion of GaAs/Al0.35Ga0.65As heterostructures immersed either in deionized water or in an aqueous solution of NH4OH and excited with above-bandgap radiation. The difference in photocorrosion rates of GaAs and Al0.35Ga0.65As appeared weakly dependent on the bandgap energy of these materials, and the intensity of the integrated PL signal from GaAs quantum wells or a buried GaAs epitaxial layer was found to be dominated by the surface states and chemical reactivity of the heterostructure surfaces revealed during the photocorrosion process. Under optimized photocorrosion conditions, the method allowed resolving a 1 nm thick GaAs layer sandwiched between Al0.35Ga0.65As layers. We demonstrate that this approach can be used as an inexpensive and simple room-temperature tool for post-growth diagnostics of interface locations in PL emitting quantum wells and other nano-heterostructures.
Hybrid UV Imager Containing Face-Up AlGaN/GaN Photodiodes
NASA Technical Reports Server (NTRS)
Zheng, Xinyu; Pain, Bedabrata
2005-01-01
A proposed hybrid ultraviolet (UV) image sensor would comprise a planar membrane array of face-up AlGaN/GaN photodiodes integrated with a complementary metal oxide/semiconductor (CMOS) readout-circuit chip. Each pixel in the hybrid image sensor would contain a UV photodiode on the AlGaN/GaN membrane, metal oxide/semiconductor field-effect transistor (MOSFET) readout circuitry on the CMOS chip underneath the photodiode, and a metal via connection between the photodiode and the readout circuitry (see figure). The proposed sensor design would offer all the advantages of comparable prior CMOS active-pixel sensors and AlGaN UV detectors while overcoming some of the limitations of prior (AlGaN/sapphire)/CMOS hybrid image sensors that have been designed and fabricated according to the methodology of flip-chip integration. AlGaN is a nearly ideal UV-detector material because its bandgap is wide and adjustable and it offers the potential to attain extremely low dark current. Integration of AlGaN with CMOS is necessary because at present there are no practical means of realizing readout circuitry in the AlGaN/GaN material system, whereas the means of realizing readout circuitry in CMOS are well established. In one variant of the flip-chip approach to integration, an AlGaN chip on a sapphire substrate is inverted (flipped) and then bump-bonded to a CMOS readout circuit chip; this variant results in poor quantum efficiency. In another variant of the flip-chip approach, an AlGaN chip on a crystalline AlN substrate would be bonded to a CMOS readout circuit chip; this variant is expected to result in narrow spectral response, which would be undesirable in many applications. Two other major disadvantages of flip-chip integration are large pixel size (a consequence of the need to devote sufficient area to each bump bond) and severe restriction on the photodetector structure. The membrane array of AlGaN/GaN photodiodes and the CMOS readout circuit for the proposed image sensor would
Visibility conflict resolution for multiple antennae and multi-satellites via genetic algorithm
NASA Astrophysics Data System (ADS)
Lee, Junghyun; Hyun, Chung; Ahn, Hyosung; Wang, Semyung; Choi, Sujin; Jung, Okchul; Chung, Daewon; Ko, Kwanghee
Satellite mission control systems are typically operated by scheduling missions according to the visibility between ground stations and satellites. Communication for a mission is achieved through the interaction of satellite visibility and ground station support. Specifically, a satellite forms a cone-type visibility window when passing over a ground station, and the antennas of the ground stations support the satellite. When two or more satellites pass by at the same time or consecutively, they may generate a visibility conflict. As the number of satellites increases, resolving visibility conflicts becomes an important issue. In this study, we propose a visibility conflict resolution algorithm for multi-satellites using a genetic algorithm (GA). The problem is formulated as a scheduling optimization model in which the visibility windows of the satellites and the support of the antennas are treated as tasks and resources, respectively. The visibility of the satellites is allocated to the total support time of the antennas as much as possible so that users obtain the maximum benefit. We focus on a genetic algorithm approach because the problem is complex and not defined explicitly; a genetic algorithm can be applied to such a complex model since it needs only an objective function and can approach a global optimum. However, a mathematical proof of global optimality for the genetic algorithm is very challenging. Therefore, we also apply a greedy algorithm and show that our genetic approach is reasonable by comparing its performance with that of the greedy algorithm.
AlGaAs ridge laser with 33% wall-plug efficiency at 100 °C based on a design of experiments approach
NASA Astrophysics Data System (ADS)
Fecioru, Alin; Boohan, Niall; Justice, John; Gocalinska, Agnieszka; Pelucchi, Emanuele; Gubbins, Mark A.; Mooney, Marcus B.; Corbett, Brian
2016-04-01
Upcoming applications for semiconductor lasers present limited thermal dissipation routes, demanding the highest-efficiency devices at high operating temperatures. This paper reports a comprehensive design-of-experiments optimisation of the epitaxial layer structure of AlGaAs-based 840 nm lasers for operation at high temperature (100 °C) using Technology Computer-Aided Design software. The waveguide thickness, Al content, doping level, and quantum well thickness were optimised. The resultant design was grown, and the fabricated ridge waveguides were optimised for carrier injection; at 100 °C, the lasers achieve a total power output of 28 mW at a current of 50 mA and a total slope efficiency of 0.82 W A-1, with a corresponding wall-plug efficiency of 33%.
NASA Astrophysics Data System (ADS)
Albert, J.
2016-12-01
Stochastic simulation of reaction networks is limited by two factors: accuracy and time. The Gillespie algorithm (GA) is a Monte Carlo-type method for constructing probability distribution functions (pdfs) from statistical ensembles; its accuracy is therefore a function of the computing time. The chemical master equation (CME) is a more direct route to obtaining the pdfs; however, solving the CME is generally very difficult for large networks. We propose a method that combines both approaches in order to simulate a part of a network stochastically. The network is first divided into two parts, A and B. Part A is simulated using the GA, while the solution of the CME for part B, with initial conditions imposed by the simulation results of part A, is fed back into the GA. This cycle is then repeated a desired number of times. The advantages of this synergy between the two approaches are: 1) the GA needs to simulate only a part of the whole network, and hence is faster, and 2) the CME is necessarily simpler to solve, as the part of the network it describes is smaller. We demonstrate the utility of this approach on two examples: a positive feedback (genetic switch) and oscillations driven by a negative feedback.
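The GA half of such a hybrid scheme is Gillespie's direct method. A minimal sketch on a single birth-death species (birth at rate k, death at rate g times the copy number) is given below; the rates, time horizon, and sample count are illustrative, not taken from the networks studied in the paper.

```python
import random

# Gillespie direct-method sketch for a birth-death process:
# birth at constant rate k, death at rate g * x.

def gillespie(k=10.0, g=1.0, t_end=50.0, seed=7):
    rng = random.Random(seed)
    t, x = 0.0, 0
    while True:
        a1, a2 = k, g * x            # reaction propensities
        a0 = a1 + a2
        tau = rng.expovariate(a0)    # exponential waiting time to next event
        if t + tau > t_end:
            return x
        t += tau
        # Choose which reaction fires, with probability proportional
        # to its propensity.
        if rng.random() * a0 < a1:
            x += 1                   # birth
        else:
            x -= 1                   # death

# The stationary distribution is Poisson with mean k/g = 10,
# so the empirical mean over many runs should be close to 10.
samples = [gillespie(seed=s) for s in range(300)]
mean_x = sum(samples) / len(samples)
```

In the hybrid scheme, the ensemble of such trajectories for part A would supply the initial conditions for the CME solution of part B.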
1979-06-01
4.2.1 The Simulation Paradigm
4.2.2 The Engagement Scenarios
4.3 Algorithm Performance
Chapter 5
5.0 A Game Theoretic Model for Determining Aircraft Evasion Strategies Against a Multiple Missile Threat
5.1 Optimal Evasion Strategies Against Multiple Missiles: Part I - For Criteria with a Fixed Terminal Time
5.1.1 Introduction
5.2 An Optimal Evasion Problem with Linear Dynamics and Quadratic Cost on Control
5.2.1
Zhang, Rong; Verkruysse, Wim; Choi, Bernard; Viator, John A; Jung, Byungjo; Svaasand, Lars O; Aguilar, Guillermo; Nelson, J Stuart
2005-01-01
We present an initial study on applying genetic algorithms (GAs) to retrieve human skin optical properties using visual reflectance spectroscopy (VRS). A three-layered skin model consisting of 13 parameters is first used to simulate skin and, through an analytical model based on optical diffusion theory, we study the independent effects of these parameters on the reflectance spectra. Based on this preliminary analysis, nine skin parameters are chosen to be fitted by the GA. The fitting procedure is applied first to simulated reflectance spectra with added white noise, and then to measured spectra from normal and port wine stain (PWS) human skin. A normalized residue of less than 0.005 is achieved for simulated spectra; for measured spectra from human skin, the normalized residue is less than 0.01. Comparisons between GA and manual iteration (MI) fitting show that the GA performs much better than MI fitting and can easily distinguish melanin concentrations for different skin types. Furthermore, the GA approach can lead to a reasonable estimate of the blood volume fraction and other skin properties, provided the diffusion approximation is applicable.
A Moving Target Environment for Computer Configurations Using Genetic Algorithms
Crouse, Michael; Fulp, Errin W.
2011-10-31
Moving Target (MT) environments for computer systems provide security through diversity by changing various system properties that are explicitly defined in the computer configuration. Temporal diversity can be achieved by making periodic configuration changes; however, in an infrastructure of multiple similarly purposed computers, diversity must also be spatial, ensuring that multiple computers do not simultaneously share the same configuration and potential vulnerabilities. Given the number of possible changes and their potential interdependencies, discovering computer configurations that are secure, functional, and diverse is challenging. This paper describes how a Genetic Algorithm (GA) can be employed to find temporally and spatially diverse secure computer configurations. In the proposed approach a computer configuration is modeled as a chromosome, where an individual configuration setting is a trait or allele. The GA operates by combining multiple chromosomes (configurations), which are tested for feasibility and ranked on performance, measured as resistance to attack. The results of successive iterations of the GA are secure configurations that are diverse due to the crossover and mutation processes. Simulation results demonstrate that this approach can provide an MT environment for a large infrastructure of similarly purposed computers by discovering temporally and spatially diverse secure configurations.
Global search algorithms in surface structure determination using photoelectron diffraction
NASA Astrophysics Data System (ADS)
Duncan, D. A.; Choi, J. I. J.; Woodruff, D. P.
2012-02-01
Three different algorithms for global searches of the variable-parameter hyperspace are compared for application to the determination of surface structure using the technique of scanned-energy mode photoelectron diffraction (PhD). Specifically, a method not previously used in surface science, the swarm-intelligence-based particle swarm optimisation (PSO) method, is presented, and its results are compared with implementations of fast simulated annealing (FSA) and a genetic algorithm (GA). These three techniques have been applied to experimental data from three adsorption structures that had previously been solved by standard trial-and-error methods, namely H2O on TiO2(110), SO2 on Ni(111) and CN on Cu(111). The performance of the three algorithms is compared with the results of a purely random sampling of the structural-parameter hyperspace. For all three adsorbate systems, the PSO outperforms the other techniques as a fitting routine, although for two of the three systems the advantage relative to the GA and random sampling is modest. The implementation of FSA failed to achieve acceptable fits in these tests.
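The PSO update compared above can be sketched as follows. The PhD R-factor evaluation is replaced here by a hypothetical quadratic stand-in objective, and the swarm size, inertia weight, and acceleration coefficients are illustrative defaults, not the values used in the study.

```python
import random

# Minimal particle swarm optimisation (PSO) sketch: each particle keeps
# a velocity and is pulled toward its personal best and the global best.

def pso(objective, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=5):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal bests
    gbest = min(pbest, key=objective)           # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if objective(pos[i]) < objective(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=objective)
    return gbest

# Stand-in "R-factor": squared distance to a known optimum at (1, 2, 3).
target = [1.0, 2.0, 3.0]
best = pso(lambda x: sum((a - b) ** 2 for a, b in zip(x, target)), dim=3)
```

In the structure-determination setting, the objective would be the reliability factor comparing simulated and experimental PhD modulation spectra.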
Haque, Mohammad Nazmul; Noman, Nasimul; Berretta, Regina; Moscato, Pablo
2016-01-01
Classification of datasets with imbalanced sample distributions has always been a challenge. In general, a popular approach to enhancing classification performance is the construction of an ensemble of classifiers; however, the performance of an ensemble depends on the choice of constituent base classifiers. Therefore, we propose a genetic algorithm-based search method for finding the optimum combination from a pool of base classifiers to form a heterogeneous ensemble. The algorithm, called GA-EoC, uses 10-fold cross-validation on training data to evaluate the quality of each candidate ensemble. To combine the base classifiers' decisions into the ensemble's output, we use the simple and widely used majority voting approach. The proposed algorithm, along with a random sub-sampling approach to balance the class distribution, has been used for classifying class-imbalanced datasets. Additionally, if a feature set was not available, we used the (α, β) - k Feature Set method to select a better subset of features for classification. We have tested GA-EoC on three benchmark datasets from the UCI Machine Learning repository, one Alzheimer's disease dataset, and a subset of the PubFig database of Columbia University. In general, the performance of the proposed method on the chosen datasets is robust and better than that of the constituent base classifiers and many other well-known ensembles. Based on our empirical study, we claim that a genetic algorithm is a superior and reliable approach to heterogeneous ensemble construction, and we expect the proposed GA-EoC to perform consistently in other cases.
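The majority-voting combination used by such ensembles is simple to state: each base classifier casts one label per sample, and the ensemble outputs the most common label. In the sketch below the three "classifiers" are hard-coded prediction lists, purely for illustration.

```python
from collections import Counter

def majority_vote(predictions_per_classifier):
    # predictions_per_classifier: list of equal-length label sequences,
    # one per base classifier.
    n_samples = len(predictions_per_classifier[0])
    voted = []
    for i in range(n_samples):
        votes = Counter(clf[i] for clf in predictions_per_classifier)
        voted.append(votes.most_common(1)[0][0])   # most frequent label wins
    return voted

clf_a = [1, 0, 1, 1, 0]
clf_b = [1, 1, 1, 0, 0]
clf_c = [0, 0, 1, 1, 1]
ensemble = majority_vote([clf_a, clf_b, clf_c])   # -> [1, 0, 1, 1, 0]
```

A GA-based search like the one described would treat the subset of base classifiers fed to this function as the chromosome and cross-validated accuracy of the voted output as the fitness.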
NASA Astrophysics Data System (ADS)
Xu, Huihui; Jiang, Mingyan; Li, Fei
2016-12-01
With the advances in three-dimensional (3-D) display technology, stereo conversion has attracted much attention as it can alleviate the problem of stereoscopic content shortage. In two-dimensional (2-D) to 3-D conversion, the most difficult and challenging problem is depth estimation from a single image. In order to recover a perceptually plausible depth map from a single image, a depth estimation algorithm based on a data-driven method and depth cues is presented. Based on the human visual system mechanism, which is sensitive to the foreground object, this study classifies the image into one of two classes, i.e., nonobject image and object image, and then leverages different strategies on the basis of image type. The proposed strategies efficiently extract the depth information from different images. Moreover, depth image-based rendering technology is utilized to generate stereoscopic views by combining 2-D images with their depth maps. The proposed method is also suitable for 2-D to 3-D video conversion. Qualitative and quantitative evaluation results demonstrate that the proposed depth estimation algorithm is very effective for generating stereoscopic content and producing visually pleasing and realistic 3-D views.
Roy, Supriyo; Sahoo, Prasanta
2014-01-01
Lee, Junghoon; Zheng, Yili; Doerschuk, Peter C
2006-01-01
In a cryo electron microscopy experiment, the data are noisy 2-D projection images of the 3-D electron scattering intensity, where the orientation of the projections is not known. In previous work we developed a solution to this problem based on a maximum likelihood estimator that is computed by an expectation-maximization algorithm. In the expectation-maximization algorithm the expensive step is the expectation, which requires numerical evaluation of 3- or 5-dimensional integrals of a square matrix whose dimension equals the number of Fourier series coefficients used to describe the 3-D reconstruction. By taking advantage of the rotational properties of spherical harmonics, we can reduce the integrations of a matrix to integrations of a scalar. The key property is that a rotated spherical harmonic can be expressed as a linear combination of the other harmonics of the same order, and the weights in the linear combination factor so that each of the three factors is a function of only one of the Euler angles describing the orientation of the projection. A numerical example of the reconstruction is provided based on Nudaurelia Omega Capensis virus.
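The key rotational property referred to above is the standard Wigner-D factorization of a rotated spherical harmonic (a textbook statement, included here for context, not a formula taken from the paper):

```latex
Y_\ell^{m}\!\left(R_{\alpha\beta\gamma}^{-1}\,\hat{u}\right)
  = \sum_{m'=-\ell}^{\ell} D^{\ell}_{m'm}(\alpha,\beta,\gamma)\, Y_\ell^{m'}(\hat{u}),
\qquad
D^{\ell}_{m'm}(\alpha,\beta,\gamma) = e^{-i m' \alpha}\, d^{\ell}_{m'm}(\beta)\, e^{-i m \gamma}.
```

Because each Euler angle enters through a separate factor, an integral over the three angles of a matrix-valued integrand separates into nested one-dimensional integrals of scalar factors, which is the reduction described above.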
GA-optimization for rapid prototype system demonstration
NASA Technical Reports Server (NTRS)
Kim, Jinwoo; Zeigler, Bernard P.
1994-01-01
An application of the Genetic Algorithm (GA) is discussed. A novel scheme of Hierarchical GA was developed to solve complicated engineering problems which require optimization of a large number of parameters with high precision. High level GAs search for few parameters which are much more sensitive to the system performance. Low level GAs search in more detail and employ a greater number of parameters for further optimization. Therefore, the complexity of the search is decreased and the computing resources are used more efficiently.
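The two-level idea can be sketched in a toy example (this is an illustration of the hierarchy, not the authors' code): a coarse high-level GA searches only the two most sensitive parameters of a six-parameter minimization, and a fine low-level GA then refines the remaining four. The objective, the sensitivity weights, and all GA settings are invented assumptions.

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

def ga(fitness, n_params, lo=-1.0, hi=1.0, pop=30, gens=40, mut=0.1):
    """Minimal real-coded GA: truncation selection, blend crossover, Gaussian mutation."""
    P = [[random.uniform(lo, hi) for _ in range(n_params)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=fitness)                      # minimization: best first
        elite = P[: pop // 2]                    # survivors kept unchanged
        children = []
        while len(children) < pop - len(elite):
            a, b = random.sample(elite, 2)
            child = [(x + y) / 2 + random.gauss(0, mut) for x, y in zip(a, b)]
            children.append([min(max(x, lo), hi) for x in child])
        P = elite + children
    return min(P, key=fitness)

# Hypothetical objective: the first two parameters are far more sensitive.
target = [0.3, -0.7, 0.1, 0.5, -0.2, 0.9]
weights = [100, 100, 1, 1, 1, 1]

def cost(x):
    return sum(w * (xi - ti) ** 2 for w, xi, ti in zip(weights, x, target))

# High level: coarse search over only the two sensitive parameters.
coarse = ga(lambda s: cost(s + [0.0] * 4), 2, gens=20, mut=0.3)
# Low level: fix the sensitive pair and refine the remaining four finely.
fine = ga(lambda s: cost(coarse + s), 4, gens=60, mut=0.02)
best = coarse + fine
```

Splitting the search this way spends most evaluations on the parameters that dominate the error, which is the resource-efficiency argument made in the abstract.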
PDE Nozzle Optimization Using a Genetic Algorithm
NASA Technical Reports Server (NTRS)
Billings, Dana; Turner, James E. (Technical Monitor)
2000-01-01
Genetic algorithms, which simulate evolution in natural systems, have been used to find solutions to optimization problems that seem intractable to standard approaches. In this study, the feasibility of using a GA to find an optimum, fixed-profile nozzle for a pulse detonation engine (PDE) is demonstrated. The objective was to maximize impulse during the detonation-wave passage and blow-down phases of operation. The impulse of each profile variant was obtained by using the CFD code Mozart/2.0 to simulate the transient flow. After 7 generations, the method identified a nozzle profile that is certainly a candidate for the optimum solution. The constraints on the generality of this possible solution remain to be clarified.
Genetic algorithms with permutation coding for multiple sequence alignment.
Ben Othman, Mohamed Tahar; Abdel-Azim, Gamil
2013-08-01
Multiple sequence alignment (MSA) is one of the most heavily researched topics in bioinformatics. It is known to be an NP-complete problem and is considered one of the most important and daunting tasks in computational biology. Accordingly, a wide range of heuristic algorithms have been proposed to find optimal alignments, among them genetic algorithms (GA). The GA has two major weaknesses: it is time consuming and it can become trapped in local minima. One of the significant aspects of the GA process in MSA is to maximize the similarities between sequences by adding and shuffling the gaps of the Solution Coding (SC). Several ways of SC have been introduced; one of them is Permutation Coding (PC). We propose a hybrid algorithm based on genetic algorithms (GAs) with a PC and the 2-opt algorithm. The PC helps to code the MSA solution, which maximizes the gain of resources, reliability, and diversity of the GA. The use of the PC opens the area by allowing functions over permutations to be applied to MSA. Thus, we suggest an algorithm to calculate the scoring function for multiple alignments based on the PC, which is used as the fitness function. The time complexity of the GA is reduced by using this algorithm. Our GA is implemented with different selection strategies and different crossovers. The probability of crossover and mutation is set as one strategy. Relevant patents in the topic have been probed.
The performance improvement of SRAF placement rules using GA optimization
NASA Astrophysics Data System (ADS)
Xu, Yan; Zhang, Bidan; Wang, Changan; Wilkinson, William; Bolton, John
2016-10-01
In this paper, a genetic algorithm (GA) method is applied to both positive and negative Sub-Resolution Assist Feature (SRAF) insertion rules. Simulation results and wafer data demonstrate that the optimized SRAF rules helped resolve the SRAF printing issues while dramatically improving the process window of the working layer. To find the best practice for SRAF placement, model-based SRAF (MBSRAF), rule-based SRAF (RBSRAF) with pixelated OPC simulation, and RBSRAF with the GA method are thoroughly compared. The results show a clear advantage for RBSRAF with the GA method.
NASA Astrophysics Data System (ADS)
Aiyoshi, Eitaro; Masuda, Kazuaki
On the basis of market fundamentalism, new types of social systems built on the market mechanism, such as electricity trading markets and carbon dioxide (CO2) emission trading markets, have been developed. However, few science and technology textbooks explain that Lagrange multipliers can be interpreted as market prices. This tutorial paper explains that (1) the steepest descent method for dual problems in optimization, and (2) the Gauss-Seidel method for solving the stationarity conditions of Lagrangian problems with market principles, can formulate the mechanism of market pricing, which works even in the information-oriented modern society. The authors expect readers to acquire basic knowledge of optimization theory and algorithms related to economics and to utilize them in designing the mechanisms of more complicated markets.
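The pricing interpretation can be made concrete with the standard dual-ascent formulation (a textbook statement, not notation taken from the paper). For a convex program min_x f(x) subject to g(x) ≤ 0, the dual function and the steepest-ascent price update are

```latex
q(\lambda) = \min_{x}\;\Big[f(x) + \lambda^{\top} g(x)\Big],
\qquad
\lambda^{k+1} = \big[\lambda^{k} + \alpha\, g(x^{k})\big]_{+},
\quad
x^{k} = \arg\min_{x}\; f(x) + (\lambda^{k})^{\top} g(x).
```

Reading g(x) ≤ 0 as "demand must not exceed supply," the update raises the price λ_i whenever constraint i is violated (excess demand, g_i(x^k) > 0) and lowers it when there is slack, so the multipliers solving the dual act exactly as market-clearing prices.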
A probabilistic approach to spectral graph matching.
Egozi, Amir; Keller, Yosi; Guterman, Hugo
2013-01-01
Spectral Matching (SM) is a computationally efficient approach to approximating the solution of pairwise matching problems that are NP-hard. In this paper, we present a probabilistic interpretation of spectral matching schemes and derive a novel Probabilistic Matching (PM) scheme that is shown to outperform previous approaches. We show that spectral matching can be interpreted as a Maximum Likelihood (ML) estimate of the assignment probabilities and that the Graduated Assignment (GA) algorithm can be cast as a Maximum a Posteriori (MAP) estimator. Based on this analysis, we derive a ranking scheme for spectral matchings based on their reliability, and propose a novel iterative probabilistic matching algorithm that relaxes some of the implicit assumptions used in prior works. We experimentally show that our approaches outperform previous schemes on exhaustive synthetic tests as well as on the analysis of real image sequences.
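A minimal sketch of the spectral relaxation this work builds on: stack all candidate assignments, build a pairwise-affinity matrix, take its leading eigenvector by power iteration, and discretize greedily. The point sets, the affinity bandwidth, and the iteration counts are illustrative assumptions, not the authors' experimental setup.

```python
import numpy as np

def spectral_match(P1, P2, sigma=0.5):
    """Leading-eigenvector relaxation of pairwise graph matching, then greedy rounding."""
    n1, n2 = len(P1), len(P2)
    cand = [(i, a) for i in range(n1) for a in range(n2)]    # candidate assignments
    M = np.zeros((len(cand), len(cand)))
    for u, (i, a) in enumerate(cand):
        for v, (j, b) in enumerate(cand):
            if i == j or a == b:
                continue                                      # conflicting assignments
            dij = np.linalg.norm(P1[i] - P1[j])
            dab = np.linalg.norm(P2[a] - P2[b])
            M[u, v] = np.exp(-(dij - dab) ** 2 / sigma ** 2)  # pairwise consistency
    x = np.ones(len(cand))
    for _ in range(50):                                       # power iteration
        x = M @ x
        x /= np.linalg.norm(x)
    match, used_i, used_a = {}, set(), set()
    for u in np.argsort(-x):                                  # greedy discretization
        i, a = cand[u]
        if i not in used_i and a not in used_a:
            match[i] = a
            used_i.add(i)
            used_a.add(a)
    return match

P1 = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
perm = [1, 2, 0]                                              # ground-truth correspondence
P2 = np.empty_like(P1)
for i, a in enumerate(perm):
    P2[a] = P1[i] + np.array([5.0, 3.0])                      # translated copy of P1
match = spectral_match(P1, P2)                                # expected to recover {0: 1, 1: 2, 2: 0}
```

Because the affinities depend only on pairwise distances, the matching is invariant to the translation applied to the second point set.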
Magnetoresistance Study in a GaAs/InGaAs/GaAs Delta-Doped Quantum Well
NASA Astrophysics Data System (ADS)
Hasbun, J. E.
1997-03-01
The magnetoresistance of a GaAs/Ga_0.87In_0.13As/GaAs delta-doped quantum well with an electron concentration of N_s=6.3x10^11 cm^-2 is calculated at low temperature for a magnetic field range of 2-30 tesla and low electric field. The results obtained for the magnetotransport are compared with the experimental work of Herfort et al. (J. Herfort, K.-J. Friedland, H. Kostial, and R. Hey, Appl. Phys. Lett. V66, 23 (1995)). While the longitudinal magnetoresistance agrees reasonably well with experiment, the Hall resistance slope reflects a classical shape; however, its second derivative seems to show oscillations that are consistent with the Hall effect plateaus seen experimentally. Albeit with a much higher electron concentration, earlier calculations (J. Hasbun, APS Bull. V41, 419 (1996)) for an Al_0.27Ga_0.73As/GaAs/Al_0.27Ga_0.73As quantum well show similar behavior. This work has been carried out with the use of a quantum many-body approach employed in earlier work (J. Hasbun, APS Bull. V41, 1659 (1996)).
Pruning Neural Networks with Distribution Estimation Algorithms
Cantu-Paz, E
2003-01-15
This paper describes the application of four evolutionary algorithms to the pruning of neural networks used in classification problems. Besides a simple genetic algorithm (GA), the paper considers three distribution estimation algorithms (DEAs): a compact GA, an extended compact GA, and the Bayesian Optimization Algorithm. The objective is to determine whether the DEAs present advantages over the simple GA in terms of accuracy or speed on this problem. The experiments used a feedforward neural network trained with standard backpropagation on public-domain and artificial data sets. The pruned networks had better or equal accuracy compared with the original fully connected networks; only in a few cases did pruning result in less accurate networks. We found few differences in the accuracy of the networks pruned by the four EAs, but important differences in execution time. The results suggest that a simple GA with a small population might be the best algorithm for pruning networks on the data sets we tested.
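Among the DEAs compared, the compact GA is simple enough to sketch in a few lines: it evolves a probability vector over bits rather than an explicit population. The toy below replaces the paper's neural-network fitness with a hypothetical connection-importance surrogate (the `importance` vector and all settings are invented for illustration); it demonstrates the probability-vector mechanics, not the paper's experiments.

```python
import random

def compact_ga(fitness, n_bits, pop_size=50, max_iters=2000):
    """Compact GA: evolve a probability vector instead of an explicit population."""
    p = [0.5] * n_bits                           # p[i] = probability that bit i is 1
    for _ in range(max_iters):
        a = [int(random.random() < pi) for pi in p]   # sample two individuals
        b = [int(random.random() < pi) for pi in p]
        winner, loser = (a, b) if fitness(a) >= fitness(b) else (b, a)
        for i in range(n_bits):
            if winner[i] != loser[i]:            # shift the model toward the winner
                step = 1.0 / pop_size
                p[i] += step if winner[i] else -step
                p[i] = min(max(p[i], 0.0), 1.0)
    return [int(pi > 0.5) for pi in p]

# Hypothetical pruning surrogate (invented for illustration): bit i == 1 keeps
# connection i, and fitness rewards keeping only the "useful" connections.
importance = [0.9, -0.6, 0.7, -0.5, 0.8, -0.7, 0.6, -0.8]

def fitness(mask):
    return sum(w for w, m in zip(importance, mask) if m)

random.seed(0)
best = compact_ga(fitness, len(importance))      # ideally keeps the positive-weight bits
```

The memory footprint is one probability per bit, which is why cGA-style DEAs can be attractive when the pruning mask is large.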
Crystal growth of device quality GaAs in space
NASA Technical Reports Server (NTRS)
Gatos, H. C.; Lagowski, J.
1984-01-01
The crystal growth, device processing, and device-related properties and phenomena of GaAs are investigated. Our GaAs research revolves around these key thrust areas. The overall program combines: (1) studies of crystal growth with novel approaches to the engineering of semiconductor materials (i.e., GaAs and related compounds); (2) investigation and correlation of materials properties and electronic characteristics on a macro- and microscale; and (3) investigation of the electronic properties and phenomena controlling device applications and device performance. A ground-based program is developed to ensure successful experimentation with, and eventually processing of, GaAs in a near-zero-gravity environment.
Rausch, Tobias; Thomas, Alun; Camp, Nicola J.; Cannon-Albright, Lisa A.; Facelli, Julio C.
2008-01-01
This paper describes a novel algorithm for analyzing genetic linkage data using pattern recognition techniques and genetic algorithms (GA). The method allows a search for regions of the chromosome that may contain genetic variations that jointly predispose individuals to a particular disease. It uses correlation analysis, filtering theory, and genetic algorithms (GA) to achieve this goal. Because current genome scans use from hundreds to hundreds of thousands of markers, two versions of the method have been implemented. The first is an exhaustive-analysis version that can be used to visualize, explore, and analyze small genetic data sets for two-marker correlations; the second is a GA version with a parallel implementation that allows searches for higher-order correlations in large data sets. Results on simulated data sets indicate that the method can be informative in the identification of major disease loci and gene-gene interactions in genome-wide linkage data and that further exploration of these techniques is justified. The results presented for both variants of the method show that it can help genetic epidemiologists identify promising combinations of genetic factors that might predispose to complex disorders. In particular, the correlation analysis of IBD expression patterns might hint at possible gene-gene interactions, and the filtering might be a fruitful approach to distinguishing true correlation signals from noise. PMID:18547558
NASA Astrophysics Data System (ADS)
Jude Hemanth, Duraisamy; Umamaheswari, Subramaniyan; Popescu, Daniela Elena; Naaji, Antoanela
2016-01-01
Image steganography is one of the ever-growing computational approaches that has found application in many fields. Frequency-domain techniques are highly preferred for image steganography applications; however, they have significant drawbacks. In transform-based approaches, the secret data is embedded in a random manner in the transform coefficients of the cover image, and these coefficients may not be optimal in terms of stego-image quality and embedding capacity. In this work, the application of the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) is explored in the context of determining the optimal coefficients in these transforms. Frequency-domain transforms such as the Bandelet Transform (BT) and the Finite Ridgelet Transform (FRIT) are used in combination with GA and PSO to improve the efficiency of the image steganography system.
GA-Based Computer-Aided Electromagnetic Design of Two-Phase SRM for Compressor Drives
NASA Astrophysics Data System (ADS)
Kano, Yoshiaki; Kosaka, Takashi; Matsui, Nobuyuki
This paper presents an approach to Genetic Algorithm (GA)-based computer-aided autonomous electromagnetic design of 2-phase Switched Reluctance Motor (SRM) drives. The proposed drive is designed for compressor drives in low-priced refrigerators as an alternative to existing brushless DC motor drives with rare-earth magnets. In the proposed design approach, three GA loops work to optimize the lamination design so as to meet the requirements of the target application under the given constraints while simultaneously fine-tuning the control parameters. To achieve the design optimization within an acceptable CPU time, the repeated calculation required for fitness evaluation does not use FEM; it consists of a geometric flux-tube-based nonlinear magnetic analysis and a dynamic simulator based on an analytical expression of the magnetizing curves obtained from the nonlinear magnetic analysis. The design results show that the proposed approach can autonomously find a feasible SRM drive design for the target application from a huge search space. Experimental studies using a 2-phase 8/6 prototype manufactured in accordance with the optimized design parameters show the validity of the proposed approach.
NASA Astrophysics Data System (ADS)
Narwadi, Teguh; Subiyanto
2017-03-01
The Travelling Salesman Problem (TSP) is one of the best-known NP-hard problems, meaning that no exact algorithm is known to solve it in polynomial time. This paper presents a new application of a genetic algorithm combined with a local search technique, developed to solve the TSP. For the local search technique, an iterative hill climbing method is used. The system is implemented on the Android OS, because Android is now widely used around the world and is a mobile system. It is also integrated with the Google API to obtain the geographical locations and distances of the cities and to display the route. We experiment to test the behavior of the application. To test its effectiveness, the hybrid genetic algorithm (HGA) application is compared with a simple GA application on 5 samples of cities in Central Java, Indonesia, with different numbers of cities. The experimental results show that the average HGA solution is better than the simple GA in 5 out of 5 tests (100%). The results show that the hybrid genetic algorithm outperforms the genetic algorithm, especially on problems of higher complexity.
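A minimal sketch of a hybrid GA of the kind described: order crossover produces children, and an iterative hill climber (2-opt-style segment reversals) locally improves each one. The coordinates stand in for the Google-API distances in the paper, and every parameter is an illustrative assumption.

```python
import math
import random

def tour_length(tour, pts):
    """Length of the closed tour through pts in the given visiting order."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def order_crossover(a, b):
    """OX: copy a slice from parent a, fill the remaining cities in parent b's order."""
    n = len(a)
    i, j = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[i:j] = a[i:j]
    fill = [c for c in b if c not in child]
    for k in range(n):
        if child[k] is None:
            child[k] = fill.pop(0)
    return child

def hill_climb(tour, pts, tries=100):
    """Iterative hill climbing: accept random segment reversals that shorten the tour."""
    best = tour_length(tour, pts)
    for _ in range(tries):
        i, j = sorted(random.sample(range(len(tour)), 2))
        cand = tour[:i] + tour[i:j][::-1] + tour[j:]
        d = tour_length(cand, pts)
        if d < best:
            tour, best = cand, d
    return tour

def hybrid_ga(pts, pop=40, gens=60):
    n = len(pts)
    P = [random.sample(range(n), n) for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=lambda t: tour_length(t, pts))
        elite = P[: pop // 4]                      # survivors kept unchanged
        children = [hill_climb(order_crossover(*random.sample(elite, 2)), pts)
                    for _ in range(pop - len(elite))]
        P = elite + children
    return min(P, key=lambda t: tour_length(t, pts))

random.seed(2)
# Hypothetical city coordinates on a ring; the optimal tour is the perimeter (length 8).
cities = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2), (0, 1)]
best = hybrid_ga(cities)
```

Running the hill climber on every child is what distinguishes the HGA from the simple GA in the comparison above: the GA explores orderings globally while 2-opt repairs crossings locally.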
Kim, Ye Kyun; Ahn, Cheol Hyoun; Yun, Myeong Gu; Cho, Sung Woon; Kang, Won Jun; Cho, Hyung Koun
2016-01-01
In this paper, a simple and controllable “wet pulse annealing” technique for the fabrication of flexible amorphous InGaZnO thin film transistors (a-IGZO TFTs) processed at low temperature (150 °C) by using scalable vacuum deposition is proposed. This method entailed the quick injection of water vapor for 0.1 s and a purge treatment in dry ambient in one cycle; the supply content of water vapor was simply controlled by the number of pulse repetitions. The electrical transport characteristics revealed a remarkable performance of the a-IGZO TFTs prepared at the maximum process temperature of 150 °C (field-effect mobility of 13.3 cm2 V−1 s−1; Ion/Ioff ratio ≈ 108; reduced I-V hysteresis), comparable to that of a-IGZO TFTs annealed at 350 °C in dry ambient. Upon analysis of angle-resolved X-ray photoelectron spectroscopy data, the good performance was attributed to the effective suppression of the formation of hydroxide and oxygen-related defects. Finally, by using the wet pulse annealing process, we fabricated, on a plastic substrate, an ultrathin flexible a-IGZO TFT with good electrical and bending performances. PMID:27198067
NASA Astrophysics Data System (ADS)
Dahan, N.; Jehl, Z.; Hildebrandt, T.; Greffet, J.-J.; Guillemoles, J.-F.; Lincot, D.; Naghavi, N.
2012-11-01
Improving optical management is a key issue for the performance of ultrathin solar cells. It can be accomplished either by trapping light in the active layer or by decreasing parasitic absorption in the cell. We calculate the absorption of the different layers of a Cu(In,Ga)Se2 (CIGSe) based solar cell and propose to increase the absorption in the CIGSe layer by optimizing three parameters: first, by increasing the light transmitted into the cell using a textured ZnO:Al front contact that functions as a broadband antireflection layer; second, by replacing the CdS/i-ZnO buffer layers with ZnS/ZnMgO buffer layers, which have higher energy bandgaps; and third, by replacing the Mo back contact with a more reflective metal, such as silver or gold. Calculations show that modifying these layers improves the total absorption by 32% in a 0.5 μm thick CIGSe absorber. These predicted improvements of the short-circuit current are confirmed experimentally.
NASA Astrophysics Data System (ADS)
Kim, Ye Kyun; Ahn, Cheol Hyoun; Yun, Myeong Gu; Cho, Sung Woon; Kang, Won Jun; Cho, Hyung Koun
2016-05-01
Systematic investigation on topological properties of layered GaS and GaSe under strain
An, Wei; Tian, Guang-Shan; Wu, Feng; Jiang, Hong; Li, Xin-Zheng
2014-08-28
The topological properties of layered β-GaS and ε-GaSe under strain are systematically investigated by ab initio calculations with the electronic exchange-correlation interactions treated beyond the generalized gradient approximation (GGA). Based on the GW method and the Tran-Blaha modified Becke-Johnson potential approach, we find that while ε-GaSe can be strain-engineered to become a topological insulator, β-GaS remains a trivial one even under strong strain, which is different from the prediction based on GGA. The reliability of the fixed volume assumption rooted in nearly all the previous calculations is discussed. By comparing to strain calculations with optimized inter-layer distance, we find that the fixed volume assumption is qualitatively valid for β-GaS and ε-GaSe, but there are quantitative differences between the results from the fixed volume treatment and those from more realistic treatments. This work indicates that it is risky to use theoretical approaches like GGA that suffer from the band gap problem to address physical properties, including, in particular, the topological nature of band structures, for which the band gap plays a crucial role. In the latter case, careful calibration against more reliable methods like the GW approach is strongly recommended.
Effective Iterated Greedy Algorithm for Flow-Shop Scheduling Problems with Time lags
NASA Astrophysics Data System (ADS)
ZHAO, Ning; YE, Song; LI, Kaidian; CHEN, Siyu
2017-03-01
The flow shop scheduling problem with time lags is a practical scheduling problem that has attracted many studies. The permutation problem (PFSP with time lags) has received concentrated attention, but the non-permutation problem (non-PFSP with time lags) seems to be neglected. With the aim of minimizing the makespan while satisfying time lag constraints, efficient algorithms corresponding to the PFSP and non-PFSP problems are proposed: an iterated greedy algorithm for the permutation case (IGTLP) and an iterated greedy algorithm for the non-permutation case (IGTLNP). The proposed algorithms are verified on well-known simple and complex instances of permutation and non-permutation problems with various time lag ranges. The permutation results indicate that the proposed IGTLP can reach a near-optimal solution within roughly 11% of the computational time of a traditional GA approach. The non-permutation results indicate that the proposed IG can reach nearly the same solution in less than 1% of the computational time of a traditional GA approach. The proposed research combines the PFSP and non-PFSP with minimal and maximal time lag considerations, which provides an interesting viewpoint for industrial implementation.
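The iterated greedy scheme for the permutation case can be sketched as follows: build an initial sequence by greedy best-position insertion, then repeatedly destroy (remove d random jobs) and reconstruct (reinsert each at its best position). This is the generic IG template without the time-lag constraints, and the instance and parameters are invented for illustration; it is not the authors' IGTLP/IGTLNP code.

```python
import random

def makespan(seq, p):
    """Completion time of the last job on the last machine (permutation flow shop)."""
    m = len(p[0])
    C = [0] * m                                   # per-machine completion times
    for j in seq:
        C[0] += p[j][0]
        for k in range(1, m):
            C[k] = max(C[k], C[k - 1]) + p[j][k]
    return C[-1]

def best_insertion(seq, job, p):
    """Insert job at the position that minimizes the resulting makespan."""
    cands = [seq[:i] + [job] + seq[i:] for i in range(len(seq) + 1)]
    return min(cands, key=lambda s: makespan(s, p))

def iterated_greedy(p, d=2, iters=200):
    jobs = sorted(range(len(p)), key=lambda j: -sum(p[j]))   # NEH-style seed order
    seq = []
    for j in jobs:                                # greedy construction
        seq = best_insertion(seq, j, p)
    best = seq
    for _ in range(iters):
        removed = random.sample(seq, d)           # destruction
        partial = [j for j in seq if j not in removed]
        for j in removed:                         # greedy reconstruction
            partial = best_insertion(partial, j, p)
        if makespan(partial, p) <= makespan(best, p):
            best = partial
        seq = partial if makespan(partial, p) <= makespan(seq, p) else best
    return best

random.seed(0)
# Hypothetical 5-job, 3-machine processing-time matrix p[job][machine].
p = [[3, 4, 2], [2, 1, 4], [4, 3, 3], [1, 2, 1], [3, 2, 2]]
best = iterated_greedy(p)
```

The destruction/reconstruction loop is what lets IG escape the local optimum left by the initial greedy construction at a fraction of a GA's cost, which matches the runtime comparison reported above.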
Genetic algorithm optimization for focusing through turbid media in noisy environments.
Conkey, Donald B; Brown, Albert N; Caravaca-Aguirre, Antonio M; Piestun, Rafael
2012-02-27
We introduce genetic algorithms (GA) for wavefront control to focus light through highly scattering media. We theoretically and experimentally compare GAs to existing phase control algorithms and show that GAs are particularly advantageous in low signal-to-noise environments.
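A minimal simulation of the idea, assuming an idealized phase-only model in which the scattering medium adds a fixed random phase per controllable segment and the focal intensity is the coherent sum of the segment fields. N, the GA settings, and the model itself are assumptions for illustration, not the experimental parameters.

```python
import cmath
import random

N = 16                                            # controllable wavefront segments
random.seed(3)
medium = [random.uniform(0, 2 * cmath.pi) for _ in range(N)]  # unknown scattering phases

def intensity(mask):
    """Focal intensity: coherent sum of unit segment fields after the medium."""
    field = sum(cmath.exp(1j * (m + s)) for m, s in zip(mask, medium))
    return abs(field) ** 2

def ga_focus(pop=30, gens=120, mut=0.3):
    P = [[random.uniform(0, 2 * cmath.pi) for _ in range(N)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=intensity, reverse=True)       # maximization
        elite = P[: pop // 2]                     # elites kept unmutated
        children = []
        while len(children) < pop - len(elite):
            a, b = random.sample(elite, 2)
            child = [x if random.random() < 0.5 else y for x, y in zip(a, b)]
            children.append([(x + random.gauss(0, mut)) % (2 * cmath.pi) for x in child])
        P = elite + children
    return max(P, key=intensity)

mask = ga_focus()
# A perfectly compensating mask (each phase cancelling the medium, up to a global
# offset) reaches the ideal intensity N**2 = 256; a random mask averages about N.
```

The GA only ever queries the scalar feedback `intensity`, mirroring how the experiment optimizes against a detector signal without knowing the medium's phases.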
Park, Ji-Hyeon; Mandal, Arjun; Kang, San; Chatterjee, Uddipta; Kim, Jin Soo; Park, Byung-Guon; Kim, Moon-Deock; Jeong, Kwang-Un; Lee, Cheul-Ro
2016-01-01
This article demonstrates, for the first time to the best of our knowledge, the merits of InGaN/GaN multiple quantum wells (MQWs) grown on hollow n-GaN nanowires (NWs) as a plausible alternative for stable photoelectrochemical water splitting and efficient hydrogen generation. These hollow nanowires are achieved by a growth method rather than by a conventional etching process, which makes the approach simple yet highly effective. We believe the relatively low Ga flux during selective area growth (SAG) aids the hollow nanowire growth. To compare the optoelectronic properties, solid nanowires are also studied simultaneously. In this communication, we show that the lower thermal conductivity of hollow n-GaN NWs affects the material quality of the InGaN/GaN MQWs by limiting In diffusion. As a result of this improvement in material quality and structural properties, the photocurrent and photosensitivity are enhanced compared with structures grown on solid n-GaN NWs. An incident photon-to-current efficiency (IPCE) of around ~33.3% is recorded at 365 nm wavelength for hollow NWs. We believe that multiple reflections of the incident light inside the hollow n-GaN NWs assist in producing a larger number of electron-hole pairs in the active region; as a result, the rate of hydrogen generation is also increased. PMID:27556534
NASA Astrophysics Data System (ADS)
Park, Ji-Hyeon; Mandal, Arjun; Kang, San; Chatterjee, Uddipta; Kim, Jin Soo; Park, Byung-Guon; Kim, Moon-Deock; Jeong, Kwang-Un; Lee, Cheul-Ro
2016-08-01
Abbasitabar, Fatemeh; Zare-Shahabadi, Vahid
2017-04-01
Risk assessment of chemicals is an important issue in environmental protection; however, there is a huge lack of experimental data for a large number of end-points. The experimental determination of the toxicity of chemicals is a costly and time-consuming process. In silico tools such as quantitative structure-toxicity relationship (QSTR) models, which are constructed on the basis of computational molecular descriptors, can predict missing data for toxic end-points for existing or even not-yet-synthesized chemicals. Phenol derivatives are known aquatic pollutants. With this background, we aimed to develop an accurate and reliable QSTR model for the prediction of the toxicity of 206 phenols to Tetrahymena pyriformis. A multiple linear regression (MLR)-based QSTR was obtained using a powerful descriptor selection tool named the Memorized_ACO algorithm. Statistical parameters of the model were R2 = 0.72 for the training set and R2 = 0.68 for the test set. To develop a high-quality QSTR model, classification and regression trees (CART) were employed in two approaches: (1) the phenols were classified into different modes of action using CART, and (2) the phenols in the training set were partitioned into several subsets by a tree such that a high-quality MLR could be developed in each subset. For the first approach, the statistical parameters of the resultant QSTR model improved to R2 = 0.83 (training) and R2 = 0.75 (test). A genetic algorithm was employed in the second approach to obtain an optimal tree, and the final QSTR model provided excellent prediction accuracy for the training and test sets (R2 of 0.91 and 0.93, respectively). The mean absolute error for the test set was 0.1615.
Pereira, Keith; Osiason, Adam; Salsamendi, Jason
2015-01-01
The role of interventional radiology in the overall management of patients on dialysis continues to expand. In patients with end-stage renal disease (ESRD), the use of tunneled dialysis catheters (TDCs) for hemodialysis has become an integral component of treatment plans. Unfortunately, long-term use of TDCs often leads to infections, acute occlusions, and chronic venous stenosis, depletion of the patient's conventional access routes, and prevention of their recanalization. In such situations, the progressive loss of venous access sites prompts a systematic approach to alternative sites to maximize patient survival and minimize complications. In this review, we discuss the advantages and disadvantages of each vascular access option. We illustrate the procedures with case histories and images from our own experience at a highly active dialysis and transplant center. We rank each vascular access option and classify them into tiers based on their relative degrees of effectiveness. The conventional approaches are the most preferred, followed by alternative approaches and finally the salvage approaches. It is our intent to have this review serve as a concise and informative reference for physicians managing patients who need vascular access for hemodialysis. PMID:26167389
75 FR 48550 - Amendment of Class E Airspace; Pine Mountain, GA
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-11
... Federal Aviation Administration 14 CFR Part 71 Amendment of Class E Airspace; Pine Mountain, GA AGENCY... Airspace at Pine Mountain, GA, to accommodate the Standard Instrument Approach Procedures (SIAPs) developed... proposed rulemaking to amend Class E airspace at Pine Mountain, GA (75 FR 28765) Docket No....
Exemplar-Based Policy with Selectable Strategies and its Optimization Using GA
NASA Astrophysics Data System (ADS)
Ikeda, Kokolo; Kobayashi, Shigenobu; Kita, Hajime
As an approach to dynamic control problems and decision-making problems, usually formulated as Markov Decision Processes (MDPs), we focus on direct policy search (DPS), where a policy is represented by a model with parameters, and the parameters are optimized so as to maximize the evaluation function obtained by applying the parameterized policy to the problem. In this paper, a novel framework for DPS, exemplar-based policy optimization using a genetic algorithm (EBP-GA), is presented and analyzed. In this approach, the policy is composed of a set of virtual exemplars and a case-based action selector, and the set of exemplars is selected and evolved by a genetic algorithm. Here, an exemplar is a piece of real or virtual, free-styled and suggestive information such as ``take the action A at the state S'' or ``the state S1 is better to attain than S2''. One advantage of EBP-GA is the generalization and localization ability of its policy expression, based on case-based reasoning methods. Another advantage is that both the introduction of prior knowledge and the extraction of knowledge after optimization are relatively straightforward. These advantages are confirmed through the proposal of two new policy expressions, experiments on two different problems, and their analysis.
Comparison of Metabolic Pathways in Escherichia coli by Using Genetic Algorithms
Ortegon, Patricia; Poot-Hernández, Augusto C.; Perez-Rueda, Ernesto; Rodriguez-Vazquez, Katya
2015-01-01
In order to understand how cellular metabolism has taken its modern form, the conservation and variations between metabolic pathways were evaluated by using a genetic algorithm (GA). The GA approach considered information on the complete metabolism of the bacterium Escherichia coli K-12, as deposited in the KEGG database, and the enzymes belonging to a particular pathway were transformed into enzymatic step sequences by using the breadth-first search algorithm. These sequences represent contiguous enzymes linked to each other, based on their catalytic activities as they are encoded in the Enzyme Commission numbers. In a subsequent step, these sequences were compared using a GA in an all-against-all (pairwise comparisons) approach. Individual reactions were chosen based on their measure of fitness to act as parents of offspring, which constitute the new generation. The sequences compared were used to construct a similarity matrix (of fitness values) that was then clustered with a k-medoids algorithm. A total of 34 clusters of conserved reactions were obtained, and their sequences were finally aligned with a multiple-sequence alignment GA optimized to align all the reaction sequences included in each group or cluster. From these comparisons, maps associated with the metabolism of similar compounds also contained similar enzymatic step sequences, reinforcing the Patchwork Model for the evolution of metabolism in E. coli K-12, an observation that can be expanded to other organisms for which metabolic information is available. Finally, our mapping of these reactions is discussed, with illustrations from a particular case. PMID:25973143
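The step-sequence representation above lends itself to a simple graded comparison. The sketch below scores two Enzyme Commission numbers by the fraction of leading digits they share and builds an all-against-all similarity matrix; the paper's GA-optimized alignment and k-medoids clustering are not reproduced here, the position-wise scoring is a crude ungapped stand-in, and the three mini-pathways are illustrative examples rather than the paper's data.

```python
def ec_similarity(ec_a, ec_b):
    """Graded similarity of two Enzyme Commission numbers: the fraction of
    leading EC digits they share (e.g. 1.1.1.1 vs 1.1.2.4 -> 0.5)."""
    shared = 0
    for x, y in zip(ec_a.split("."), ec_b.split(".")):
        if x != y:
            break
        shared += 1
    return shared / 4

def sequence_similarity(seq_a, seq_b):
    """Position-wise similarity of two enzymatic step sequences (a crude,
    ungapped stand-in for the paper's GA-optimized alignment)."""
    n = max(len(seq_a), len(seq_b))
    return sum(ec_similarity(x, y) for x, y in zip(seq_a, seq_b)) / n

# Illustrative step sequences (first glycolytic steps and urea-cycle enzymes).
pathways = {
    "glycolysis-like": ["2.7.1.1", "5.3.1.9", "2.7.1.11"],
    "variant":         ["2.7.1.2", "5.3.1.9", "2.7.1.11"],
    "urea-cycle-like": ["6.3.4.5", "4.3.2.1", "3.5.3.1"],
}
names = list(pathways)
# All-against-all similarity matrix, the input a clustering step would take.
matrix = {(p, q): sequence_similarity(pathways[p], pathways[q])
          for p in names for q in names}
```

Pathways sharing most catalytic steps score close to 1, unrelated ones close to 0, which is the structure the k-medoids step exploits.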
NASA Astrophysics Data System (ADS)
Krishna, Hemanth; Kumar, Hemantha; Gangadharan, Kalluvalappil
2016-06-01
A magneto-rheological (MR) fluid damper offers a cost-effective solution for semi-active vibration control in an automobile suspension. The performance of an MR damper depends significantly on the electromagnetic circuit incorporated into it. The force developed by an MR fluid damper is highly influenced by the magnetic flux density induced in the fluid flow gap. In the present work, optimization of the electromagnetic circuit of an MR damper is discussed in order to maximize the magnetic flux density. The optimization procedure combined a genetic algorithm with design-of-experiments techniques. The results show that a fluid flow gap smaller than 1.12 mm causes a significant increase in magnetic flux density.
NASA Astrophysics Data System (ADS)
Küpers, Hanno; Bastiman, Faebian; Luna, Esperanza; Somaschini, Claudio; Geelhaar, Lutz
2017-02-01
We present a novel two-step growth approach for the Ga-assisted growth of GaAs nanowires (NWs) by molecular beam epitaxy on Si. In the first step only Ga is deposited for the controlled formation of Ga droplets and in the second step NWs are grown from these droplets at lower Ga flux. This variation of the Ga flux leads to a decoupling of the formation of Ga droplets and the formation of NWs. Thus, the total density of crystal objects can be varied by only changing the parameters of the first step. Also, the NW length distribution of such an ensemble is more homogeneous. Annealing of the droplets can improve the homogeneity even further. The resulting GaAs NW ensembles are ideally suited for the subsequent growth of shell structures. Finally, our new approach enabled us to explore the nucleation of different crystalline objects and analyze the impact of the droplet size on the vertical yield of NWs in detail.
NASA Astrophysics Data System (ADS)
Izadi, Arman; Kimiagari, Ali mohammad
2014-01-01
Distribution network design, as a strategic decision, has a long-term effect on tactical and operational supply chain management. In this research, the location-allocation problem is studied under demand uncertainty. The purposes of this study were to specify the optimal number and location of distribution centers and to determine the allocation of customer demands to distribution centers. The main feature of this research is solving the model with an unknown demand function, which suits real-world problems. To consider the uncertainty, a set of possible scenarios for customer demands is created based on Monte Carlo simulation. The coefficient of variation of costs is used as a measure of risk, and the most stable structure for the firm's distribution network is defined based on the concept of robust optimization. The best structure is identified using genetic algorithms, and a 14% reduction in total supply chain costs is the outcome. Moreover, it imposes the least cost variation created by fluctuations in customer demands (such as epidemic disease outbreaks in some areas of the country) on the logistical system. It is noteworthy that this research was done in one of the largest pharmaceutical distribution firms in Iran.
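The scenario-based robustness idea above (simulate demand scenarios, then score each candidate network by its mean cost penalized by the coefficient of variation of costs) can be sketched as follows. The cost model, all figures, and the exhaustive enumeration standing in for the paper's GA search are invented for illustration.

```python
import random
import statistics

def network_cost(open_dcs, demands, fixed_cost=100.0, unit_cost=1.0):
    """Hypothetical cost model: a fixed cost per opened distribution centre
    plus a serving cost that falls as more DCs share the total demand."""
    n_open = sum(open_dcs)
    if n_open == 0:
        return float("inf")
    return fixed_cost * n_open + unit_cost * sum(demands) / n_open

def robust_fitness(open_dcs, scenarios):
    """Mean scenario cost penalized by the coefficient of variation,
    mirroring the paper's use of cost CV as the risk measure."""
    costs = [network_cost(open_dcs, s) for s in scenarios]
    mean = statistics.mean(costs)
    cv = statistics.stdev(costs) / mean
    return mean * (1 + cv)

# Monte Carlo demand scenarios for 5 customers (illustrative figures).
rng = random.Random(11)
scenarios = [[rng.gauss(200, 40) for _ in range(5)] for _ in range(100)]

# For this tiny instance we enumerate all open/close configurations of 4
# candidate DCs; the paper searches a much larger space with a GA.
best = min(([(b >> i) & 1 for i in range(4)] for b in range(1, 16)),
           key=lambda conf: robust_fitness(conf, scenarios))
```

The winning configuration balances fixed opening costs against both the expected serving cost and its scenario-to-scenario variability.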
Moonchai, Sompop; Madlhoo, Weeranuch; Jariyachavalit, Kanidtha; Shimizu, Hiroshi; Shioya, Suteaki; Chauvatcharin, Somchai
2005-11-01
The effect of pH and temperature on cell growth and bacteriocin production in Lactococcus lactis C7 was investigated in order to optimize the production of bacteriocin. The study showed that the bacteriocin production was growth-associated, but declined after reaching the maximum titer. The decrease of bacteriocin was caused by a cell-bound protease. Maximum bacteriocin titer was obtained at pH 5.5 and at 22 degrees C. In order to obtain a global optimized solution for production of bacteriocin, the optimal temperature for bacteriocin production was further studied. Mathematical models were developed for cell growth, substrate consumption, lactic acid production and bacteriocin production. A Differential Evolution algorithm was used both to estimate the model parameters from the experimental data and to compute a temperature profile for maximizing the final bacteriocin titer and bacteriocin productivity. This simulation showed that maximum bacteriocin production was obtained at the optimal temperature profile, starting at 30 degrees C and terminating at 22 degrees C, which was validated by experiment. This temperature profile yielded 20% higher maximum bacteriocin productivity than that obtained at a constant temperature of 22 degrees C, although the total amount of bacteriocin obtained was slightly decreased.
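Differential Evolution, used above both for parameter estimation and for computing the optimal temperature profile, follows a simple mutate-crossover-select loop. Below is a textbook DE/rand/1/bin sketch applied to a toy logistic-growth fitting task; the control parameters are common defaults and the growth model and data are invented for illustration, not taken from the paper.

```python
import math
import random

def differential_evolution(obj, bounds, pop_size=20, generations=60,
                           f=0.8, cr=0.9, seed=2):
    """DE/rand/1/bin: for each target vector, build a trial from three other
    randomly chosen vectors and keep it if it is no worse."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            jrand = rng.randrange(dim)
            trial = []
            for j, (lo, hi) in enumerate(bounds):
                if rng.random() < cr or j == jrand:      # binomial crossover
                    trial.append(min(hi, max(lo, a[j] + f * (b[j] - c[j]))))
                else:
                    trial.append(pop[i][j])
            if obj(trial) <= obj(pop[i]):                # greedy replacement
                pop[i] = trial
    return min(pop, key=obj)

# Toy estimation task: recover logistic-growth parameters (mu, K) from
# synthetic biomass data.
def logistic(params, t, x0=0.1):
    mu, k = params
    return k / (1 + (k / x0 - 1) * math.exp(-mu * t))

TIMES = list(range(7))
OBS = [logistic((0.9, 5.0), t) for t in TIMES]
fit = differential_evolution(
    lambda p: sum((logistic(p, t) - y) ** 2 for t, y in zip(TIMES, OBS)),
    bounds=[(0.1, 3.0), (1.0, 10.0)])
```

The same minimizer can be pointed at any scalar objective, which is why the paper can reuse it for both model fitting and profile optimization.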
First Principles Electronic Structure of Mn doped GaAs, GaP, and GaN Semiconductors
Schulthess, Thomas C; Temmerman, Walter M; Szotek, Zdzislawa; Svane, Axel; Petit, Leon
2007-01-01
We present first-principles electronic structure calculations of Mn doped III-V semiconductors based on the local spin-density approximation (LSDA) as well as the self-interaction corrected local spin density method (SIC-LSD). We find that it is crucial to use a self-interaction free approach to properly describe the electronic ground state. The SIC-LSD calculations predict the proper electronic ground state configuration for Mn in GaAs, GaP, and GaN. Excellent quantitative agreement with experiment is found for magnetic moment and p-d exchange in (GaMn)As. These results allow us to validate commonly used models for magnetic semiconductors. Furthermore, we discuss the delicate problem of extracting binding energies of localized levels from density functional theory calculations. We propose three approaches to take into account final state effects to estimate the binding energies of the Mn-d levels in GaAs. We find good agreement between computed values and estimates from photoemission experiments.
Carrier capture dynamics of single InGaAs/GaAs quantum-dot layers
Chauhan, K. N.; Riffe, D. M.; Everett, E. A.; Kim, D. J.; Yang, H.; Shen, F. K.
2013-05-28
Using 800 nm, 25-fs pulses from a mode-locked Ti:Al2O3 laser, we have measured the ultrafast optical reflectivity of MBE-grown, single-layer In0.4Ga0.6As/GaAs quantum-dot (QD) samples. The QDs are formed via two-stage Stranski-Krastanov growth: following initial InGaAs deposition at a relatively low temperature, self-assembly of the QDs occurs during a subsequent higher-temperature anneal. The capture times for free carriers excited in the surrounding GaAs (barrier layer) are as short as 140 fs, indicating capture efficiencies for the InGaAs quantum layer approaching 1. The capture rates are positively correlated with initial InGaAs thickness and annealing temperature. With increasing excited carrier density, the capture rate decreases; this slowing of the dynamics is attributed to Pauli state blocking within the InGaAs quantum layer.
NASA Technical Reports Server (NTRS)
Shah, Ankoor S.; Knuth, Kevin H.; Truccolo, Wilson A.; Ding, Ming-Zhou; Bressler, Steven L.; Schroeder, Charles E.; Clancy, Daniel (Technical Monitor)
2002-01-01
Accurate measurement of single-trial responses is key to a definitive use of complex electromagnetic and hemodynamic measurements in the investigation of brain dynamics. We developed the multiple-component, Event-Related Potential (mcERP) approach to single-trial response estimation to improve our resolution of dynamic interactions between neuronal ensembles located in different layers within a cortical region and/or in different cortical regions. The mcERP model asserts that multiple components, defined as stereotypic waveforms, comprise the stimulus-evoked response and that these components may vary in amplitude and latency from trial to trial. Maximum a posteriori (MAP) solutions for the model are obtained by iterating a set of equations derived from the posterior probability. Our first goal was to use the mcERP algorithm to analyze interactions (specifically latency and amplitude correlation) between responses in different layers within a cortical region. Thus, we evaluated the model by applying the algorithm to synthetic data containing two correlated local components and one independent far-field component. Three cases were considered: the local components were correlated by an interaction in their single-trial amplitudes, by an interaction in their single-trial latencies, or by an interaction in both amplitude and latency. We then analyzed the accuracy with which the algorithm estimated the component waveshapes and the single-trial parameters as a function of the linearity of each of these relationships. Extensions of these analyses to real data are discussed, as well as ongoing work to incorporate more detailed prior information.
Genetic algorithms for modelling and optimisation
NASA Astrophysics Data System (ADS)
McCall, John
2005-12-01
Genetic algorithms (GAs) are a heuristic search and optimisation technique inspired by natural evolution. They have been successfully applied to a wide range of real-world problems of significant complexity. This paper is intended as an introduction to GAs aimed at immunologists and mathematicians interested in immunology. We describe how to construct a GA and the main strands of GA theory before speculatively identifying possible applications of GAs to the study of immunology. An illustrative example of using a GA for a medical optimal control problem is provided. The paper also includes a brief account of the related area of artificial immune systems.
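The construction described above (a population of candidate solutions evolved by selection, crossover, and mutation) can be written in a few dozen lines. The following is a minimal, illustrative GA applied to the toy "OneMax" problem; all operator choices and parameter values are common defaults, not taken from the paper.

```python
import random

def genetic_algorithm(fitness, n_bits=16, pop_size=30, generations=60,
                      cx_rate=0.9, mut_rate=0.02, seed=0):
    """Minimal generational GA: tournament selection, one-point crossover,
    bit-flip mutation, and a tracked best-ever individual (elitism)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            # Tournament selection: the fitter of two random individuals wins.
            p1 = max(rng.sample(pop, 2), key=fitness)
            p2 = max(rng.sample(pop, 2), key=fitness)
            c1, c2 = p1[:], p2[:]
            if rng.random() < cx_rate:           # one-point crossover
                cut = rng.randrange(1, n_bits)
                c1 = p1[:cut] + p2[cut:]
                c2 = p2[:cut] + p1[cut:]
            for child in (c1, c2):               # bit-flip mutation
                for i in range(n_bits):
                    if rng.random() < mut_rate:
                        child[i] ^= 1
                nxt.append(child)
        pop = nxt[:pop_size]
        best = max(pop + [best], key=fitness)    # keep the best ever seen
    return best

# OneMax: maximize the number of 1-bits, so the fitness is simply sum().
solution = genetic_algorithm(sum)
```

Swapping in a different `fitness` callable is all that is needed to aim the same loop at another optimization problem, which is the flexibility the paper highlights.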
Using a hybrid genetic algorithm and fuzzy logic for metabolic modeling
Yen, J.; Lee, B.; Liao, J.C.
1996-12-31
The identification of metabolic systems is a complex task due to the complexity of the system and limited knowledge about the model. Mathematical equations and ODEs have been used to capture the structure of the model, and conventional optimization techniques have been used to identify the parameters of the model. In general, however, a pure mathematical formulation of the model is difficult due to parametric uncertainty and incomplete knowledge of mechanisms. In this paper, we propose a modeling approach that (1) uses a fuzzy rule-based model to augment algebraic enzyme models that are incomplete, and (2) uses a hybrid genetic algorithm to identify uncertain parameters in the model. The hybrid genetic algorithm (GA) integrates a GA with the simplex method in functional optimization to improve the GA's convergence rate. We have applied this approach to modeling the rate of three enzyme reactions in E. coli central metabolism. The proposed modeling strategy allows (1) easy incorporation of qualitative insights into a pure mathematical model and (2) adaptive identification and optimization of key parameters to fit system behaviors observed in biochemical experiments.
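The hybrid idea above (a GA for global exploration, with a local simplex-style refinement to speed convergence) can be sketched on a toy enzyme-kinetics fit. Here a cheap coordinate pattern search stands in for the Nelder-Mead simplex step, the Michaelis-Menten data are synthetic, and the fuzzy-rule augmentation of incomplete models is not reproduced; everything below is illustrative rather than the paper's implementation.

```python
import random

S_DATA = [0.1, 0.2, 0.5, 1.0, 2.0, 5.0]
V_DATA = [2.0 * s / (0.5 + s) for s in S_DATA]   # synthetic rates: Vmax=2, Km=0.5

def sse(params):
    """Sum-of-squares error of a Michaelis-Menten rate law vs the data."""
    vmax, km = params
    return sum((vmax * s / (km + s) - v) ** 2 for s, v in zip(S_DATA, V_DATA))

def local_refine(p, step=0.05, iters=20):
    """Cheap coordinate pattern search standing in for the simplex step."""
    best = list(p)
    for _ in range(iters):
        improved = False
        for i in range(len(best)):
            for d in (step, -step):
                cand = list(best)
                cand[i] += d
                if sse(cand) < sse(best):
                    best, improved = cand, True
        if not improved:
            step /= 2                            # tighten the search
    return best

def hybrid_ga(pop_size=20, generations=30, seed=5):
    rng = random.Random(seed)
    pop = [[rng.uniform(0.1, 5.0), rng.uniform(0.1, 5.0)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=sse)
        pop[0] = local_refine(pop[0])            # locally refine the elite
        nxt = pop[:2]                            # elitism: keep the two best
        while len(nxt) < pop_size:
            p1, p2 = rng.sample(pop[:pop_size // 2], 2)
            w = rng.random()                     # arithmetic crossover
            child = [w * a + (1 - w) * b for a, b in zip(p1, p2)]
            child = [max(0.01, x + rng.gauss(0, 0.1)) for x in child]
            nxt.append(child)
        pop = nxt
    return min(pop, key=sse)

vmax_est, km_est = hybrid_ga()
```

The local step accelerates convergence exactly as the paper describes: the GA supplies good basins, the deterministic search polishes them.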
NASA Astrophysics Data System (ADS)
Islam, Sirajul; Talukdar, Bipul
2016-09-01
A Linked Simulation-Optimization (LSO) model based on a Clonal Selection Algorithm (CSA) was formulated for application in conjunctive irrigation management. A series of measures was considered for reducing the computational burden associated with the LSO approach. Certain modifications were made to the formulated CSA so as to decrease the number of function evaluations. In addition, a simple problem-specific code for a two-dimensional groundwater flow simulation model was developed. The flow model was further simplified by a novel approach of area reduction, in order to save computational time in simulation. The LSO model was applied in the irrigation command of the Pagladiya Dam Project in Assam, India. With a view to evaluating the performance of the CSA, a Genetic Algorithm (GA) was used as a comparison base. The results from the CSA compared well with those from the GA. In fact, the CSA was found to consume less computational time than the GA while converging to the optimal solution, owing to the modifications made to it.
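For readers unfamiliar with clonal selection, the following is a generic CLONALG-style sketch: the fittest antibodies are cloned and the clones hypermutated, with smaller mutation steps for fitter parents, plus a few random newcomers for diversity. It is illustrative of a CSA in general, not the paper's problem-specific variant, and the sphere objective and all parameters are invented.

```python
import random

def clonal_selection(obj, dim=2, pop_size=20, generations=80,
                     n_clones=5, seed=4):
    """CLONALG-style minimizer: clone the best antibodies, hypermutate the
    clones (fitter parent -> smaller steps), keep the best of the pool."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=obj)
        clones = []
        for rank, ab in enumerate(pop[:pop_size // 2]):
            sigma = 0.05 + 0.5 * rank / pop_size     # fitter -> smaller steps
            for _ in range(n_clones):
                clones.append([x + rng.gauss(0, sigma) for x in ab])
        pool = sorted(pop + clones, key=obj)
        # Keep the best, plus two random newcomers for diversity.
        pop = pool[:pop_size - 2] + \
              [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(2)]
    return min(pop, key=obj)

# Toy objective: 2-D sphere function, minimum value 0 at the origin.
best_antibody = clonal_selection(lambda p: sum(x * x for x in p))
```

Because each generation evaluates the clones as well as the population, trimming clone counts (as the paper's modifications do for function evaluations) directly reduces the number of simulation calls.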
Crossover Improvement for the Genetic Algorithm in Information Retrieval.
ERIC Educational Resources Information Center
Vrajitoru, Dana
1998-01-01
In information retrieval (IR), the aim of genetic algorithms (GA) is to help a system to find, in a huge documents collection, a good reply to a query expressed by the user. Analysis of phenomena seen during the implementation of a GA for IR has led to a new crossover operation, which is introduced and compared to other learning methods.…
Sreevatsan, Srinand; Bookout, Jack B.; Ringpis, Fidel M.; Pottathil, Mridula R.; Marshall, David J.; De Arruda, Monika; Murvine, Christopher; Fors, Lance; Pottathil, Raveendran M.; Barathur, Raj R.
1998-01-01
This study was designed to analyze the feasibility and validity of using Cleavase Fragment Length Polymorphism (CFLP) analysis as an alternative to DNA sequencing for high-throughput screening of hepatitis C virus (HCV) genotypes in a high-volume molecular pathology laboratory setting. By using a 244-bp amplicon from the 5′ untranslated region of the HCV genome, 61 clinical samples received for HCV reverse transcription-PCR (RT-PCR) were genotyped by this method. The genotype frequencies assigned by the CFLP method were 44.3% for type 1a, 26.2% for 1b, 13.1% for type 2b, and 5% type 3a. The results obtained by nucleotide sequence analysis provided 100% concordance with those obtained by CFLP analysis at the major genotype level, with resolvable differences as to subtype designations for five samples. CFLP analysis-derived HCV genotype frequencies also concurred with the national estimates (N. N. Zein et al., Ann. Intern. Med. 125:634–639, 1996). Reanalysis of 42 of these samples in parallel in a different research laboratory reproduced the CFLP fingerprints for 100% of the samples. Similarly, the major subtype designations for 19 samples subjected to different incubation temperature-time conditions were also 100% reproducible. Comparative cost analysis for genotyping of HCV by line probe assay, CFLP analysis, and automated DNA sequencing indicated that the average cost per amplicon was lowest for CFLP analysis, at $20 (direct costs). On the basis of these findings we propose that CFLP analysis is a robust, sensitive, specific, and an economical method for large-scale screening of HCV-infected patients for alpha interferon-resistant HCV genotypes. The paper describes an algorithm that uses as a reflex test the RT-PCR-based qualitative screening of samples for HCV detection and also addresses genotypes that are ambiguous. PMID:9650932
Malvasi, Antonio; Bochicchio, Mario; Vaira, Lucia; Longo, Antonella; Pacella, Elena; Tinelli, Andrea
2014-07-01
The determination of fetal head position can be useful in labor to predict the success of labor management, especially in cases of malposition. Malpositions are abnormal positions of the vertex of the fetal head and account for a large part of the indications for cesarean section in dystocic labor. The occiput posterior position occurs in 15-25% of patients before labor at term; however, most occiput posterior presentations rotate during labor, so that the incidence of occiput posterior at vaginal birth is approximately 5-7%. Persistence of the occiput posterior position is associated with a higher rate of interventions and with maternal and neonatal complications, and knowledge of the exact position of the fetal head is of paramount importance prior to any operative vaginal delivery, both for the safe positioning of the instrument that may be used (i.e. forceps versus vacuum) and for a successful outcome. Ultrasound (US)-diagnosed occiput posterior position during labor can predict occiput posterior position at birth. On this evidence, the time required for fetal head descent and the position in the birth canal have an impact on the diagnosis of labor progression or arrested labor. To reduce these pitfalls, the authors developed a new algorithm, applied to intrapartum US and based on suitable US pictures, that sets out in detail the quantitative evaluation, in degrees, of the occiput posterior position of the fetal head in the pelvis and the birth canal, respectively, in the first and second stages of labor. The authors tested this computer system on a set of patients in labor.
Optimal field-scale groundwater remediation using neural networks and the genetic algorithm
Rogers, L.L.; Dowla, F.U.; Johnson, V.M.
1993-05-01
We present a new approach for field-scale nonlinear management of groundwater remediation. First, an artificial neural network (ANN) is trained to predict the outcome of a groundwater transport simulation. Then a genetic algorithm (GA) searches through possible pumping realizations, evaluating the fitness of each with a prediction from the trained ANN. Traditional approaches rely on optimization algorithms requiring sequential calls of the groundwater transport simulation. Our approach processes the transport simulations in parallel and "recycles" the knowledge base of these simulations, greatly reducing the computational and real-time burden, often the primary impediment to developing field-scale management models. We present results from a Superfund site suggesting that such management techniques can reduce cleanup costs by over a hundred million dollars.
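The division of labor described above, with a cheap trained predictor inside the GA loop in place of the transport simulator, can be sketched as follows. A smooth stand-in function plays the role of the trained ANN; the function, its target values, and all GA parameters are invented for illustration.

```python
import random

def surrogate_cleanup(pumping):
    """Stand-in for the trained ANN: predicted contaminant mass remaining
    for a normalized 4-well pumping schedule (hypothetical response; a real
    application would call the trained network here)."""
    target = [0.8, 0.2, 0.5, 0.9]                # illustrative optimum
    return sum((p - t) ** 2 for p, t in zip(pumping, target))

def ga_search(predict, dim=4, pop_size=30, generations=100, seed=3):
    """GA over candidate pumping realizations; each fitness evaluation is a
    cheap surrogate call instead of a full transport simulation."""
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=predict)                    # lower predicted mass = fitter
        nxt = [pop[0][:], pop[1][:]]             # elitism: keep the two best
        while len(nxt) < pop_size:
            p1, p2 = rng.sample(pop[:pop_size // 2], 2)
            child = [u if rng.random() < 0.5 else v for u, v in zip(p1, p2)]
            child = [min(1.0, max(0.0, x + rng.gauss(0, 0.05))) for x in child]
            nxt.append(child)
        pop = nxt
    return min(pop, key=predict)

best_schedule = ga_search(surrogate_cleanup)
```

Each generation here costs thousands of surrogate calls rather than thousands of transport simulations, which is the source of the speedup the abstract reports.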
NASA Astrophysics Data System (ADS)
Russo, G.; Attili, A.; Battistoni, G.; Bertrand, D.; Bourhaleb, F.; Cappucci, F.; Ciocca, M.; Mairani, A.; Milian, F. M.; Molinelli, S.; Morone, M. C.; Muraro, S.; Orts, T.; Patera, V.; Sala, P.; Schmitt, E.; Vivaldo, G.; Marchetto, F.
2016-01-01
The calculation algorithm of a modern treatment planning system for ion-beam radiotherapy should ideally be able to deal with different ion species (e.g. protons and carbon ions), to provide relative biological effectiveness (RBE) evaluations and to describe different beam lines. In this work we propose a new approach for ion irradiation outcomes computations, the beamlet superposition (BS) model, which satisfies these requirements. This model applies and extends the concepts of previous fluence-weighted pencil-beam algorithms to quantities of radiobiological interest other than dose, i.e. RBE- and LET-related quantities. It describes an ion beam through a beam-line specific, weighted superposition of universal beamlets. The universal physical and radiobiological irradiation effect of the beamlets on a representative set of water-like tissues is evaluated once, coupling the per-track information derived from FLUKA Monte Carlo simulations with the radiobiological effectiveness provided by the microdosimetric kinetic model and the local effect model. Thanks to an extension of the superposition concept, the beamlet irradiation action superposition is applicable for the evaluation of dose, RBE and LET distributions. The weight function for the beamlets superposition is derived from the beam phase space density at the patient entrance. A general beam model commissioning procedure is proposed, which has successfully been tested on the CNAO beam line. The BS model provides the evaluation of different irradiation quantities for different ions, the adaptability permitted by weight functions and the evaluation speed of analytical approaches. Benchmarking plans in simple geometries and clinical plans are shown to demonstrate the model capabilities.
State diagnostics of RTD based on nanoscale multilayered AlGaAs heterostructures
NASA Astrophysics Data System (ADS)
Makeev, M. O.; Meshkov, S. A.; Sinyakin, V. Yu
2016-08-01
In the present work, problems of technical diagnostics of RTDs based on nanoscale multilayered AlGaAs heterostructures are solved. A technique and algorithms for delineating the RTD functionality region are considered.
Truss Optimization for a Manned Nuclear Electric Space Vehicle using Genetic Algorithms
NASA Technical Reports Server (NTRS)
Benford, Andrew; Tinker, Michael L.
2004-01-01
The purpose of this paper is to utilize the genetic algorithm (GA) optimization method for structural design of a nuclear propulsion vehicle. Genetic algorithms provide a guided, random search technique that mirrors biological adaptation. To verify the GA capabilities, other traditional optimization methods were used to generate results for comparison to the GA results, first for simple two-dimensional structures, and then for full-scale three-dimensional truss designs.
NASA Astrophysics Data System (ADS)
Robin, C.; Pillet, N.; Peña Arteaga, D.; Berger, J.-F.
2016-02-01
Background: Although self-consistent multiconfiguration methods have been used for decades to address the description of atomic and molecular many-body systems, only a few trials have been made in the context of nuclear structure. Purpose: This work aims at the development of such an approach to describe in a unified way various types of correlations in nuclei in a self-consistent manner where the mean-field is improved as correlations are introduced. The goal is to reconcile the usually set-apart shell-model and self-consistent mean-field methods. Method: This approach is referred to as "variational multiparticle-multihole configuration mixing method." It is based on a double variational principle which yields a set of two coupled equations that determine at the same time the expansion coefficients of the many-body wave function and the single-particle states. The solution of this problem is obtained by building a doubly iterative numerical algorithm. Results: The formalism is derived and discussed in a general context, starting from a three-body Hamiltonian. Links to existing many-body techniques such as the formalism of Green's functions are established. First applications are done using the two-body D1S Gogny effective force. The numerical procedure is tested on the 12C nucleus to study the convergence features of the algorithm in different contexts. Ground-state properties as well as single-particle quantities are analyzed, and the description of the first 2+ state is examined. Conclusions: The self-consistent multiparticle-multihole configuration mixing method is fully applied for the first time to the description of a test nucleus. This study makes it possible to validate our numerical algorithm and leads to encouraging results. To test the method further, we will realize in the second article of this series a systematic description of more nuclei and observables obtained by applying the newly developed numerical procedure with the same Gogny force.
Genetic algorithm-based form error evaluation
NASA Astrophysics Data System (ADS)
Cui, Changcai; Li, Bing; Huang, Fugui; Zhang, Rencheng
2007-07-01
Form error evaluation of geometrical products is a nonlinear optimization problem, for which solutions have been attempted by different methods of some complexity. A genetic algorithm (GA) was developed to deal with the problem, which proved simple to understand and realize, and its key techniques have been investigated in detail. Firstly, the fitness function of the GA is discussed in detail as the bridge between the GA and the concrete problems to be solved. Secondly, the real-number representation of the desired solutions in the continuous-space optimization problem is discussed. Thirdly, several improved evolutionary strategies of the GA are described with emphasis: the selection operation of 'odd number selection plus roulette wheel selection', the crossover operation of 'arithmetic crossover between near relatives and far relatives', and the mutation operation of 'adaptive Gaussian' mutation. After evolution from generation to generation with these strategies, the initial population, produced stochastically around the least-squared solutions of the problem, is updated and improved iteratively until the best chromosome or individual of the GA appears. Finally, some examples are given to verify the evolutionary method. Experimental results show that the GA-based method can find desired solutions superior to the least-squared solutions, except for a few examples in which the GA-based method obtains results similar to those of the least-squared method. Compared with other optimization techniques, the GA-based method obtains almost equal results but with less complicated models and less computation time.
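As a rough sketch of the ingredients above (real-number coding, a population initialized around a reference solution, arithmetic crossover, and a Gaussian mutation whose step size shrinks over the generations), consider a minimum-zone flatness evaluation. The data, parameter values, and the truncation-plus-elitism selection are illustrative simplifications, not the paper's specific operators; the reference solution here is simply the zero plane rather than an actual least-squares fit.

```python
import random

def flatness_error(params, points):
    """Peak-to-valley deviation of the points from the plane
    z = a*x + b*y + c (the minimum-zone flatness objective)."""
    a, b, c = params
    d = [z - (a * x + b * y + c) for x, y, z in points]
    return max(d) - min(d)

def real_coded_ga(points, centre, spread=0.5, pop_size=40,
                  generations=200, seed=1):
    """Real-coded GA: arithmetic crossover and a Gaussian mutation whose
    standard deviation decays linearly over the generations."""
    rng = random.Random(seed)
    pop = [[c + rng.gauss(0, spread) for c in centre] for _ in range(pop_size)]
    for g in range(generations):
        sigma = spread * (1 - g / generations)   # shrinking mutation step
        pop.sort(key=lambda p: flatness_error(p, points))
        nxt = [pop[0][:], pop[1][:]]             # elitism: keep the two best
        while len(nxt) < pop_size:
            p1, p2 = rng.sample(pop[:pop_size // 2], 2)
            w = rng.random()                     # arithmetic crossover
            child = [w * u + (1 - w) * v for u, v in zip(p1, p2)]
            child = [x + rng.gauss(0, sigma) for x in child]
            nxt.append(child)
        pop = nxt
    return min(pop, key=lambda p: flatness_error(p, points))

# Synthetic "measured" surface: a tilted plane plus small deviations.
rnd = random.Random(7)
pts = [(x, y, 0.3 * x - 0.2 * y + 1.0 + 0.01 * rnd.uniform(-1, 1))
       for x in range(5) for y in range(5)]
plane = real_coded_ga(pts, centre=[0.0, 0.0, 0.0])
```

The GA minimizes the peak-to-valley band directly, which is why it can beat a least-squares plane on the minimum-zone criterion.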
High nitrogen pressure solution growth of GaN
NASA Astrophysics Data System (ADS)
Bockowski, Michal
2014-10-01
Results of GaN growth from gallium solution under high nitrogen pressure are presented. The basics of the high nitrogen pressure solution (HNPS) growth method are described. A new approach to seeded growth, the multi-feed-seed (MFS) configuration, is demonstrated. Two kinds of seeds are used: free-standing hydride vapor phase epitaxy GaN (HVPE-GaN) obtained from metal organic chemical vapor deposition (MOCVD) GaN/sapphire templates, and free-standing HVPE-GaN obtained from ammonothermally grown GaN crystals. Depending on the seeds' structural quality, differences in the structural properties of the pressure-grown material are demonstrated and analyzed. The role and influence of impurities, such as oxygen and magnesium, on GaN crystals grown from gallium solution in the MFS configuration are presented. The properties of differently doped GaN crystals are discussed. The application of pressure-grown GaN crystals as substrates for electronic and optoelectronic devices is reported.
GaN Based Electronics And Their Applications
NASA Astrophysics Data System (ADS)
Ren, Fan
2002-03-01
The Group III-nitrides were initially researched for their promise to fill the void for a blue solid-state light emitter. Electronic devices from III-nitrides have been a more recent phenomenon. The thermal conductivity of GaN is three times that of GaAs. For high-power or high-temperature applications, good thermal conductivity is imperative for heat removal or sustained operation at elevated temperatures. The development of III-N and other wide-bandgap technologies for high-temperature applications will likely take place at the expense of competing technologies, such as silicon-on-insulator (SOI), at moderate temperatures. At higher temperatures (>300°C), novel devices and components will become possible. The automotive industry will likely be one of the largest markets for such high-temperature electronics. One of the most noteworthy advantages of III-N materials over other wide-bandgap semiconductors is the availability of AlGaN/GaN and InGaN/GaN heterostructures. A 2-dimensional electron gas (2DEG) has been shown to exist at the AlGaN/GaN interface, and heterostructure field effect transistors (HFETs) from these materials can exhibit 2DEG mobilities approaching 2000 cm²/V·s at 300 K. Power handling capabilities of 12 W/mm appear feasible, and extraordinary large-signal performance has already been demonstrated, with a current state of the art of >10 W/mm at X-band. In this talk, high-speed and high-temperature AlGaN/GaN HEMTs as well as MOSHEMTs, high-breakdown-voltage GaN (>6 kV) and AlGaN (9.7 kV) Schottky diodes, and their applications will be presented.
NASA Astrophysics Data System (ADS)
Hsiao, Feng-Hsiag
2016-10-01
In this study, a novel approach via improved genetic algorithm (IGA)-based fuzzy observer is proposed to realise exponential optimal H∞ synchronisation and secure communication in multiple time-delay chaotic (MTDC) systems. First, an original message is inserted into the MTDC system. Then, a neural-network (NN) model is employed to approximate the MTDC system. Next, a linear differential inclusion (LDI) state-space representation is established for the dynamics of the NN model. Based on this LDI state-space representation, this study proposes a delay-dependent exponential stability criterion derived in terms of Lyapunov's direct method, thus ensuring that the trajectories of the slave system approach those of the master system. Subsequently, the stability condition of this criterion is reformulated into a linear matrix inequality (LMI). Due to GA's random global optimisation search capabilities, the lower and upper bounds of the search space can be set so that the GA will seek better fuzzy observer feedback gains, accelerating feedback gain-based synchronisation via the LMI-based approach. IGA, which exhibits better performance than traditional GA, is used to synthesise a fuzzy observer to not only realise the exponential synchronisation, but also achieve optimal H∞ performance by minimizing the disturbance attenuation level and recovering the transmitted message. Finally, a numerical example with simulations is given in order to demonstrate the effectiveness of our approach.
NASA Technical Reports Server (NTRS)
Wang, Lui; Bayer, Steven E.
1991-01-01
Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem-solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
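As an illustration of the basic concepts this report introduces, a minimal real-coded GA might look like the sketch below. Tournament selection, uniform crossover, Gaussian mutation and elitism are common textbook choices, assumed here for illustration and not necessarily those of the NASA tool:

```python
import random

def genetic_algorithm(fitness, n_genes, pop_size=30, generations=50,
                      mutation_rate=0.1, seed=0):
    """Minimal real-coded GA sketch: binary tournament selection,
    uniform crossover, Gaussian mutation, and two-individual elitism."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        next_pop = scored[:2]  # elitism: carry the best two forward unchanged
        while len(next_pop) < pop_size:
            pa = max(rng.sample(scored, 2), key=fitness)  # tournament
            pb = max(rng.sample(scored, 2), key=fitness)
            child = [a if rng.random() < 0.5 else b for a, b in zip(pa, pb)]
            if rng.random() < mutation_rate:
                i = rng.randrange(n_genes)
                child[i] += rng.gauss(0.0, 0.1)
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

# maximize f(x) = -(x0^2 + x1^2); the optimum is the origin
best = genetic_algorithm(lambda g: -sum(x * x for x in g), n_genes=2)
```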
Magnetic field dependence of binding energy in GaN/InGaN/GaN spherical QDQW nanoparticles
NASA Astrophysics Data System (ADS)
El Ghazi, Haddou; Jorio, Anouar; Zorkani, Izeddine
2013-10-01
A simultaneous study of the effects of magnetic field and impurity position on the ground-state shallow-donor binding energy in a GaN/InGaN/GaN (core/well/shell) spherical quantum dot-quantum well (SQDQW), as a function of the ratio of inner to outer radius, is reported. The calculations are performed within the framework of the effective-mass approximation, with an infinite deep potential describing the quantum confinement effect. A Ritz variational approach is used, taking into account the electron-impurity correlation and the magnetic field effect in the trial wave function. The binding energy depends strongly on the external magnetic field, the impurity position and the structure radius. It is found that (i) the magnetic field effect is more marked in a large layer than in a thin one and (ii) it is more pronounced at the center of the spherical layer than at its extremities.
An inverted AlGaAs/GaAs patterned-Ge tunnel junction cascade concentrator solar cell
Venkatasubramanian, R. )
1993-01-01
This report describes work to develop inverted-grown Al0.34Ga0.66As/GaAs cascades. Several significant developments are reported: (1) The AM1.5 1-sun total-area efficiency of the top Al0.34Ga0.66As cell for the cascade was improved from 11.3% to 13.2% (NREL total-area measurement). (2) The "cycled" organometallic vapor phase epitaxy (OMVPE) growth was studied in detail using a combination of characterization techniques including Hall data, photoluminescence, and secondary ion mass spectroscopy. (3) A technique called eutectic-metal-bonding (EMB) was developed for strain-free mounting of thin GaAs-AlGaAs films (based on lattice-matched growth on Ge substrates and selective plasma etching of the Ge substrates) onto Si carrier substrates. Minority-carrier lifetime in an EMB GaAs double heterostructure was measured as high as 103 ns, the highest lifetime reported for a freestanding GaAs thin film. (4) A thin-film, inverted-grown GaAs cell with a 1-sun AM1.5 active-area efficiency of 20.3% was obtained; this cell was eutectic-metal-bonded onto Si. (5) A thin-film, inverted-grown Al0.34Ga0.66As/GaAs cascade with AM1.5 efficiencies of 19.9% and 21% at 1 sun and 7 suns, respectively, was obtained. This represents an important milestone in the development of an AlGaAs/GaAs cascade by OMVPE utilizing a tunnel interconnect and demonstrates a proof of concept for the inverted-growth approach.
A genetic ensemble approach for gene-gene interaction identification
2010-01-01
Background It has now become clear that gene-gene interactions and gene-environment interactions are ubiquitous and fundamental mechanisms for the development of complex diseases. Though considerable effort has been put into developing statistical models and algorithmic strategies for identifying such interactions, their accurate identification has proven very challenging. Methods In this paper, we propose a new approach for identifying such gene-gene and gene-environment interactions underlying complex diseases. It is a hybrid algorithm that combines a genetic algorithm (GA) with an ensemble of classifiers (called a genetic ensemble). Using this approach, the original problem of SNP interaction identification is converted into a data mining problem of combinatorial feature selection. By collecting the single nucleotide polymorphism (SNP) subsets and environmental factors generated in multiple GA runs, patterns of gene-gene and gene-environment interactions can be extracted using a simple combinatorial ranking method. Also considered in this study is the idea of combining identification results obtained from multiple algorithms; a novel formula based on the pairwise double fault is designed to quantify the degree of complementarity. Conclusions Our simulation study demonstrates that the proposed genetic ensemble algorithm has identification power comparable to Multifactor Dimensionality Reduction (MDR) and slightly better than Polymorphism Interaction Analysis (PIA), the two most popular methods for gene-gene interaction identification. More importantly, the identification results generated by our genetic ensemble algorithm are highly complementary to those obtained by PIA and MDR. Experimental results from our simulation studies and a real-world data application also confirm the effectiveness of the proposed genetic ensemble algorithm, as well as the potential benefits of combining identification results from multiple algorithms.
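The abstract does not reproduce the paper's exact complementarity formula, so the sketch below shows only the classical pairwise double-fault measure it builds on: the fraction of samples that both classifiers get wrong. Lower values mean two methods' errors overlap less, i.e. the methods are more complementary.

```python
def double_fault(preds_a, preds_b, truth):
    """Classical pairwise double-fault diversity measure: the fraction
    of samples misclassified by BOTH classifiers. This is the standard
    definition; the paper's own formula may refine it."""
    both_wrong = sum(1 for a, b, t in zip(preds_a, preds_b, truth)
                     if a != t and b != t)
    return both_wrong / len(truth)
```

For example, two methods that err on disjoint subsets of the data score 0.0, signaling maximal benefit from combining their identification results.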
GA-based stable control for a class of underactuated mechanical systems
NASA Astrophysics Data System (ADS)
Liu, Diantong; Guo, Weiping; Yi, Jianqiang
2005-12-01
A nonlinear dynamic model of a class of underactuated mechanical systems was built using the Lagrangian method. Some system properties, such as passivity, were analyzed. A GA (genetic algorithm)-based stable control algorithm was proposed for this class of underactuated mechanical systems. Lyapunov stability theory and the system properties were utilized to guarantee the system's asymptotic stability about its equilibrium. A real-valued GA was used to adjust the parameters of the stable controller to improve system performance. An underactuated double-pendulum-type overhead crane system is used to validate the proposed control algorithm. Simulation results illustrate the validity of the proposed control algorithm under different conditions.
Adaptive phase aberration correction based on imperialist competitive algorithm.
Yazdani, R; Hajimahmoodzadeh, M; Fallah, H R
2014-01-01
We investigate numerically the feasibility of phase aberration correction in a wavefront sensorless adaptive optical system, based on the imperialist competitive algorithm (ICA). Considering a 61-element deformable mirror (DM) and the Strehl ratio as the cost function of ICA, this algorithm is employed to search the optimum surface profile of DM for correcting the phase aberrations in a solid-state laser system. The correction results show that ICA is a powerful correction algorithm for static or slowly changing phase aberrations in optical systems, such as solid-state lasers. The correction capability and the convergence speed of this algorithm are compared with those of the genetic algorithm (GA) and stochastic parallel gradient descent (SPGD) algorithm. The results indicate that these algorithms have almost the same correction capability. Also, ICA and GA are almost the same in convergence speed and SPGD is the fastest of these algorithms.
NASA Astrophysics Data System (ADS)
Anderson, Jonathan W.; Lee, Kyoung-Keun; Piner, Edwin L.
2012-03-01
Gallium nitride (GaN) has enormous potential for applications in high electron mobility transistors (HEMTs) used in RF and power devices. Intrinsic device properties such as high electron mobility, high breakdown voltage, very high current density, electron confinement in a narrow channel, and high electron velocity in the 2-dimensional electron gas of the HEMT structure are due in large part to the wide band gap of this novel semiconductor material system. This presentation discusses the properties of GaN that make it superior to other semiconductor materials, and outlines the research that will be undertaken in a new program at Texas State University to advance GaN HEMT technology. This program aims to further improve the exceptional performance of GaN through improved material growth processes and epitaxial structure design.
Satellite remote sensing offers synoptic and frequent monitoring of optical water quality parameters, such as chlorophyll-a, turbidity, and colored dissolved organic matter (CDOM). While traditional satellite algorithms were developed for the open ocean, these algorithms often do...
Multidisciplinary Multiobjective Optimal Design for Turbomachinery Using Evolutionary Algorithm
NASA Technical Reports Server (NTRS)
2005-01-01
This report summarizes Dr. Lian's efforts toward developing a robust and efficient tool for multidisciplinary, multi-objective optimal design for turbomachinery using evolutionary algorithms. The work consisted of two stages. In the first stage (July 2003 to June 2004), Dr. Lian focused on building the essential capabilities required for the project, working on two subjects: an enhanced genetic algorithm (GA) and an integrated optimization system combining a GA with a surrogate model. In the second stage (July 2004 to February 2005), Dr. Lian formulated aerodynamic and structural optimization as a multi-objective optimization problem and performed multidisciplinary, multi-objective optimizations on a transonic compressor blade based on the proposed model. The numerical results showed that the proposed approach can effectively reduce the blade weight and increase the stage pressure ratio in an efficient manner. In addition, the new design was structurally safer than the original design. Five conference papers and three journal papers were published on this topic by Dr. Lian.
Multi-robot task allocation based on two dimensional artificial fish swarm algorithm
NASA Astrophysics Data System (ADS)
Zheng, Taixiong; Li, Xueqin; Yang, Liangyi
2007-12-01
The problem of task allocation for multiple robots is to allocate relatively many tasks among relatively few robots so as to minimize the processing time of these tasks. To obtain an optimal multi-robot task allocation scheme, a two-dimensional artificial fish swarm algorithm-based approach is proposed in this paper. In this approach, the conventional artificial fish is extended to two dimensions: each component of the primary artificial fish becomes an m-dimensional vector, so that each vector can express a group of tasks. By redefining the distance between artificial fish and the center of the artificial fish, the behavior of the two-dimensional fish is designed and the corresponding task allocation algorithm is put forward. Finally, the proposed algorithm is applied to the multi-robot task allocation problem and compared with GA- and SA-based algorithms. Simulation and comparison results show that the proposed algorithm is effective.
Parameter Estimation for the Phenomenological Model of Hysteresis Using Efficient Genetic Algorithm
NASA Astrophysics Data System (ADS)
Xiaomin, Xue; Ling, Zhang; Qing, Sun
2010-05-01
Magnetorheological (MR) dampers have become an ideal candidate for diverse applications in recent years, including structural vibration suppression as well as shock absorption and vibration control in vehicle systems. To understand the dynamic characteristics of MR dampers, many researchers have used models such as the Bingham model, polynomial curve-fitting models using sigmoid functions, and the Bouc-Wen phenomenological model. The Bouc-Wen differential model is one of the most widely accepted phenomenological models because it can capture a range of hysteretic cycle shapes matching the behavior of an MR damper, and it is increasingly used to describe hysteretic phenomena. However, this model has an obvious shortcoming: its complicated equation expression makes simultaneous estimation of its many parameters difficult. With the development of computer technology, powerful multi-processing algorithms now make it possible to solve such problems efficiently. This paper is therefore devoted to using an efficient genetic algorithm (GA) to estimate the multiple parameters of the Bouc-Wen model. A modified GA is adopted, improved by effective methods including adaptive genetic operators and appropriate termination criteria; the modification greatly improves the performance of the traditional GA. Finally, experimental data from an MR damper are used to verify the proposed approach, with satisfactory parameter estimation results and high computational efficiency. The algorithm and results also throw light on the development and characterization of other novel smart dampers.
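To make the estimation loop concrete, the hedged sketch below fits the two parameters of a deliberately simplified damper model, F = c·v + k·x, with a small real-coded GA; the paper fits the full Bouc-Wen parameter set the same way, so only the model inside the fitness function would change. All operator choices and names here are illustrative assumptions, not the authors' modified GA.

```python
import random

def fit_parameters(time, velocity, force_measured,
                   pop_size=40, generations=60, seed=0):
    """GA parameter estimation sketch: each chromosome is a candidate
    (c, k); fitness is the negative squared error between the model
    force and the measured force."""
    rng = random.Random(seed)

    def fitness(params):
        c, k = params
        x, err = 0.0, 0.0
        for i in range(1, len(time)):
            dt = time[i] - time[i - 1]
            x += velocity[i - 1] * dt  # integrate displacement from velocity
            err += (c * velocity[i] + k * x - force_measured[i]) ** 2
        return -err  # GA maximizes fitness

    pop = [[rng.uniform(0, 5), rng.uniform(0, 5)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]          # truncation selection
        children = []
        for _ in range(pop_size - len(survivors)):
            pa, pb = rng.sample(survivors, 2)
            # blend crossover plus small Gaussian mutation
            children.append([(a + b) / 2 + rng.gauss(0, 0.1)
                             for a, b in zip(pa, pb)])
        pop = survivors + children
    return max(pop, key=fitness)
```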
Investigation of range extension with a genetic algorithm
Austin, A. S., LLNL
1998-03-04
Range optimization is one of the tasks associated with the development of cost-effective, stand-off, air-to-surface munitions systems. The search for the optimal input parameters that will result in the maximum achievable range often employs conventional Monte Carlo techniques. Monte Carlo approaches can be time-consuming, costly, and insensitive to mutually dependent parameters and epistatic parameter effects. An alternative search and optimization technique is available in genetic algorithms. In the experiments discussed in this report, a simplified platform motion simulator was the fitness function for a genetic algorithm. The parameters to be optimized were the inputs to this motion generator, and the simulator's output (terminal range) was the fitness measure. The parameters of interest were initial launch altitude, initial launch speed, wing angle-of-attack, and engine ignition time. The parameter values the GA produced were validated by Monte Carlo investigations employing a full-scale six-degree-of-freedom (6 DOF) simulation. The best results produced by Monte Carlo processes using values based on the GA-derived parameters were within 1% of the ranges generated by the simplified model using the evolved parameter values. This report has five sections. Section 2 discusses the motivation for the range extension investigation and reviews the surrogate flight model developed as a fitness function for the genetic algorithm tool. Section 3 details the representation and implementation of the task within the genetic algorithm framework. Section 4 discusses the results. Section 5 concludes the report with a summary and suggestions for further research.
Honarvar, Mohammad; Sahebjavaher, Ramin; Rohling, Robert; Salcudean, Septimiu
2017-03-22
In quantitative elastography, maps of the mechanical properties of soft tissue, or elastograms, are calculated from the measured displacement data by solving an inverse problem. The model assumptions have a significant effect on elastograms. Motivated by the high sensitivity of imaging results to the model assumptions for in-vivo Magnetic Resonance Elastography (MRE) of the prostate, we compared elastograms obtained with four different methods. Two FEM-based methods developed by our group were compared with two other commonly used methods, the Local Frequency Estimator (LFE) and curl-based Direct Inversion (c-DI). All the methods assume a linear isotropic elastic model, but they differ in their further assumptions, such as local homogeneity or incompressibility, and in the specific approach used. We report results using simulations, phantom, ex-vivo and in-vivo data. The simulation and phantom studies show that, for regions with an inclusion, the contrast-to-noise ratio (CNR) for the FEM methods is about 3-5 times higher than the CNR for LFE and c-DI, and the RMS error is about half. The LFE method produces very smooth results (i.e. low CNR) and is fast. c-DI is faster than the FEM methods but is accurate only in areas where elasticity variations are small; the artifacts resulting from its homogeneity assumption are detrimental in regions with large variations. The ex-vivo and in-vivo results show trends similar to the simulation and phantom studies. The c-FEM method is more sensitive to noise than the mixed-FEM due to its higher-order derivatives. This is especially evident at lower frequencies, where the wave curvature is smaller and the method is more prone to such error, causing a discrepancy in the absolute values between the mixed-FEM and c-FEM in our in-vivo results. In general, the proposed finite element methods use fewer simplifying assumptions and outperform the other methods, but they are computationally more expensive.
Mass spectrometry cancer data classification using wavelets and genetic algorithm.
Nguyen, Thanh; Nahavandi, Saeid; Creighton, Douglas; Khosravi, Abbas
2015-12-21
This paper introduces a hybrid feature extraction method applied to mass spectrometry (MS) data for cancer classification. Haar wavelets are employed to transform MS data into orthogonal wavelet coefficients. The most prominent discriminant wavelets are then selected by a genetic algorithm (GA) to form feature sets. The combination of wavelets and GA yields highly distinct feature sets that serve as inputs to classification algorithms. Experimental results show the robustness and significant dominance of the wavelet-GA approach over competing methods. The proposed method can therefore be applied in cancer classification models that are useful as real clinical decision support systems for medical practitioners.
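The two ingredients of this pipeline can be sketched as follows. The chromosome encoding (a 0/1 mask over coefficients) is an assumption for illustration; the abstract does not detail the paper's encoding or classifier.

```python
import math

def haar_transform(signal):
    """One level of the Haar wavelet transform: scaled pairwise sums
    (approximation coefficients) followed by scaled pairwise
    differences (detail coefficients). Assumes an even-length signal."""
    s = 1 / math.sqrt(2)
    approx = [(signal[i] + signal[i + 1]) * s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) * s for i in range(0, len(signal), 2)]
    return approx + detail

def select_features(coeffs, mask):
    """A GA chromosome here is a 0/1 mask over wavelet coefficients;
    the selected coefficients form the classifier's feature vector."""
    return [c for c, m in zip(coeffs, mask) if m]
```

A GA would then evolve the masks, scoring each by the classification accuracy achieved with the selected coefficients.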
Self-interaction effects in (Ga,Mn)As and (Ga,Mn)N
NASA Astrophysics Data System (ADS)
Filippetti, Alessio; Spaldin, Nicola A.; Sanvito, Stefano
2005-02-01
The electronic structures of Mn-doped zincblende GaAs and wurtzite GaN are calculated using both standard local-spin density functional theory (LSDA), and a novel pseudopotential self-interaction-corrected approach (pseudo-SIC), able to account for the effects of strong correlation. We find that, as expected, the self-interaction is not strong in (Ga,Mn)As, because the Fermi energy is crossed by weakly correlated As p-Mn d hybridized bands and the Mn 3d character is distributed through the whole valence band manifold. This result validates the extensive literature of LSDA studies on (Ga,Mn)As, including the conclusion that the ferromagnetism is hole-mediated. In contrast, the LSDA gives a qualitatively incorrect band structure for (Ga,Mn)N, which is characterized by localized Mn 3d bands with very strong self-interaction. Our pseudo-SIC calculations show a highly confined hole just above the Fermi energy in the majority band manifold. Such a band arrangement is consistent with (although by no means conclusive evidence for) a recent suggestion [Phys. Rev. B 033203 (2002)] that formation of Zhang-Rice magnetic polarons is responsible for hole-mediated ferromagnetism in (Ga,Mn)N.
Hybrid Algorithms for Fuzzy Reverse Supply Chain Network Design
Che, Z. H.; Chiang, Tzu-An; Kuo, Y. C.
2014-01-01
Considering capacity constraints, fuzzy defect ratios, and fuzzy transport loss ratios, this paper establishes an optimized decision model for production planning and distribution in a multiphase, multiproduct reverse supply chain that handles defects returned to original manufacturers, and develops hybrid algorithms, namely Particle Swarm Optimization-Genetic Algorithm (PSO-GA), Genetic Algorithm-Simulated Annealing (GA-SA), and Particle Swarm Optimization-Simulated Annealing (PSO-SA), for solving the optimized model. A case study of a multiphase, multiproduct reverse supply chain network illustrates the suitability of the decision model and the applicability of the algorithms. Finally, the hybrid algorithms showed excellent solving capability compared with the original GA and PSO methods. PMID:24892057
Hybrid intelligent control scheme for air heating system using fuzzy logic and genetic algorithm
Thyagarajan, T.; Shanmugam, J.; Ponnavaikko, M.; Panda, R.C.
2000-01-01
Fuzzy logic provides a means for converting a linguistic control strategy, based on expert knowledge, into an automatic control strategy. Its performance depends on the membership functions and rule sets. In the traditional Fuzzy Logic Control (FLC) approach, the optimal membership functions are formed by trial and error. In this paper, a Genetic Algorithm (GA) is applied to generate the optimal membership functions of the FLC, and the membership functions thus obtained are utilized in the design of a Hybrid Intelligent Control (HIC) scheme. The investigation is carried out for an Air Heating System (AHS), an important component of the drying process. Knowledge of the optimal PID controller design is used to develop the traditional FLC scheme, and the computational difficulty of finding optimal membership functions for the traditional FLC is alleviated by using a GA in the design of the HIC scheme. Qualitative performance indices are evaluated for the three control strategies, namely PID, FLC and HIC. The comparison reveals that the HIC scheme, designed by hybridizing FLC with a GA, performs best. Moreover, the GA is found to be an effective tool for designing the FLC, eliminating the human intervention required to generate the membership functions.
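To make the tuning target concrete: if the fuzzy sets use triangular membership functions, the GA's genes are simply the breakpoints of each triangle. A minimal sketch, assuming triangular shapes (the actual shapes and encoding used in the paper are not specified in this summary):

```python
def triangular_mf(x, a, b, c):
    """Triangular membership function with feet at a and c and peak
    at b. In a GA-tuned FLC, (a, b, c) for each fuzzy set are the
    genes; the GA searches these breakpoints instead of relying on
    hand-tuning by trial and error."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)  # rising edge
    return (c - x) / (c - b)      # falling edge
```

The GA's fitness would then be a closed-loop performance index (e.g. integral of absolute error) of the controller built from the candidate membership functions.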
NASA Astrophysics Data System (ADS)
Wu, Dongjun
Network industries have technologies characterized by a spatial hierarchy, the "network," with capital-intensive interconnections and time-dependent, capacity-limited flows of products and services through the network to customers. This dissertation studies service pricing, investment and business operating strategies for the electric power network. First-best solutions for a variety of pricing and investment problems are studied. Genetic algorithms (GAs, methods based on the idea of natural evolution) are evaluated as a primary means of solving complicated network problems, with respect to pricing as well as investment and other operating decisions. New constraint-handling techniques in GAs have been studied and tested, and their application to practical non-linear optimization problems has been demonstrated on several complex network design problems with encouraging initial results. Genetic algorithms provide solutions that are feasible and close to optimal when the optimal solution is known; in some instances, the near-optimal solutions the proposed GA finds for small problems can be verified only by pushing the limits of currently available non-linear optimization software. The performance is far better than that of several commercially available GA programs, which are generally inadequate for any of the problems studied in this dissertation, primarily because of their poor handling of constraints. Genetic algorithms, if carefully designed, seem very promising for difficult problems that are intractable by traditional analytic methods.
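The dissertation's specific constraint-handling techniques are not detailed in this abstract; the standard baseline such techniques improve on is the penalty method, which can be sketched as:

```python
def penalized_fitness(objective, constraints, penalty=1e3):
    """Classic penalty-method constraint handling for a GA: infeasible
    candidates keep a fitness signal but pay for each violated
    constraint, expressed as g(x) <= 0. Poorly chosen penalty weights
    are exactly the weakness more advanced techniques address."""
    def fitness(x):
        violation = sum(max(0.0, g(x)) for g in constraints)
        return objective(x) - penalty * violation
    return fitness
```

For example, maximizing -x² subject to x <= 1 wraps the objective once and hands the resulting function to any GA unchanged.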
NASA Astrophysics Data System (ADS)
Hongesombut, Komsan; Mitani, Yasunori; Tsuji, Kiichiro
Fuzzy logic control has been applied to various applications in power systems. Its control rules and membership functions are typically obtained by trial-and-error methods or from expert knowledge. Proposed here is the application of a micro-genetic algorithm (micro-GA) to simultaneously design optimal membership functions and control rules for a STATCOM. First, we propose a simple approach to extract membership functions and fuzzy logic control rules from observed signals. Then the proposed micro-GA is applied to optimize the membership functions and control rules. To validate the effectiveness of the proposed approach, several simulation studies have been performed on a multimachine power system. Simulation results show that the proposed fuzzy logic controller with STATCOM can effectively and robustly enhance the damping of oscillations.
Peeled film GaAs solar cell development
NASA Technical Reports Server (NTRS)
Wilt, D. M.; Thomas, R. D.; Bailey, S. G.; Brinker, D. J.; Deangelo, F. L.
1990-01-01
Thin-film, single-crystal gallium arsenide (GaAs) solar cells could exhibit a specific power approaching 700 W/kg including coverglass. A simple process has been described whereby epitaxial GaAs layers are peeled from a reusable substrate. This process takes advantage of the extreme selectivity of the etching rate of aluminum arsenide (AlAs) over GaAs in dilute hydrofluoric acid. The feasibility of using the peeled film technique to fabricate high-efficiency, low-mass GaAs solar cells is presently demonstrated. A peeled film GaAs solar cell was successfully produced. The device, although fractured and missing the aluminum gallium arsenide window and antireflective coating, had a Voc of 874 mV and a fill factor of 68 percent under AM0 illumination.