Satellite mission scheduling algorithm based on GA
NASA Astrophysics Data System (ADS)
Sun, Baolin; Mao, Lifei; Wang, Wenxiang; Xie, Xing; Qin, Qianqing
2007-11-01
The Satellite Mission Scheduling (SMS) problem involves scheduling tasks to be performed by a satellite, where new task requests can arrive at any time, non-deterministically, and must be scheduled in real time. This paper describes a new Satellite Mission Scheduling approach based on a Genetic Algorithm (SMSGA). It investigates algorithmic approaches for determining an optimal or near-optimal sequence of tasks, allocated to a satellite payload over time, under dynamic tasking considerations. Simulation results show that the proposed approach is effective and efficient when applied to real problems.
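The kind of GA sequence search described above can be sketched with a minimal permutation GA. The task set, priorities, and fitness function below are illustrative placeholders, not the paper's actual model.

```python
import random

def order_crossover(rng, p1, p2):
    """Order crossover (OX): keep a slice of p1, fill the rest in p2's order."""
    a, b = sorted(rng.sample(range(len(p1)), 2))
    middle = p1[a:b]
    fill = [g for g in p2 if g not in middle]
    return fill[:a] + middle + fill[a:]

def evolve_sequence(fitness, n_tasks, pop_size=30, generations=100, seed=0):
    """Evolve a task ordering that maximises `fitness` over permutations."""
    rng = random.Random(seed)
    pop = [rng.sample(range(n_tasks), n_tasks) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = rng.sample(survivors, 2)
            child = order_crossover(rng, p1, p2)
            if rng.random() < 0.2:                # occasional swap mutation
                i, j = rng.sample(range(n_tasks), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Toy objective: reward scheduling high-priority tasks early.
priority = [5, 1, 3, 2, 4]
def fit(seq):
    return -sum(pos * priority[t] for pos, t in enumerate(seq))

best = evolve_sequence(fit, 5)
```

Order crossover keeps every child a valid permutation, which is why it is a common choice for sequencing problems of this kind.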
Genetic Algorithm Approaches for Actuator Placement
NASA Technical Reports Server (NTRS)
Crossley, William A.
2000-01-01
This research investigated genetic algorithm approaches for smart actuator placement to provide aircraft maneuverability without requiring hinged flaps or other control surfaces. The effort supported the goals of the Multidisciplinary Design Optimization focus efforts in NASA's Aircraft Morphing program. This work helped to properly identify various aspects of the genetic algorithm operators and parameters that allow for placement of discrete control actuators/effectors. An improved problem definition, including better definition of the objective function and constraints, resulted from this research effort. The work used a geometrically simple wing model; however, an increasing number of potential actuator placement locations were incorporated to illustrate the ability of the GA to determine promising actuator placement arrangements. This effort's major result is a useful genetic algorithm-based approach to assist in the discrete actuator/effector placement problem.
A Test Scheduling Algorithm Based on Two-Stage GA
NASA Astrophysics Data System (ADS)
Yu, Y.; Peng, X. Y.; Peng, Y.
2006-10-01
In this paper, we present a new algorithm to co-optimize core wrapper design and SOC test scheduling. The SOC test scheduling problem is first formulated as a two-dimensional floorplan problem, and a sequence-pair architecture is used to represent it. We then propose a two-stage GA (Genetic Algorithm) to solve the SOC test scheduling problem. Experiments on the ITC'02 benchmark show that our algorithm can effectively reduce test time and thereby decrease SOC test cost.
GA-based discrete dynamic programming approach for scheduling in FMS environments.
Yang, J B
2001-01-01
The paper presents a new genetic algorithm (GA)-based discrete dynamic programming (DDP) approach for generating static schedules in a flexible manufacturing system (FMS) environment. This GA-DDP approach adopts a sequence-dependent schedule generation strategy, where a GA is employed to generate feasible job sequences and a series of discrete dynamic programs are constructed to generate legal schedules for a given sequence of jobs. In formulating the GA, different performance criteria can easily be included. The developed DDP algorithm is capable of identifying locally optimized partial schedules and shares the computational efficiency of dynamic programming. The algorithm is designed in such a way that it does not suffer from the state-explosion problem inherent in pure dynamic programming approaches to FMS scheduling. Numerical examples are reported to illustrate the approach. PMID:18244848
A genetic algorithm approach to recognition and data mining
Punch, W.F.; Goodman, E.D.; Min, Pei
1996-12-31
We review here our use of genetic algorithm (GA) and genetic programming (GP) techniques to perform "data mining," the discovery of particular/important data within large datasets, by finding optimal data classifications using known examples. Our first experiments concentrated on the use of a K-nearest neighbor (knn) algorithm in combination with a GA. The GA selected weights for each feature so as to optimize knn classification based on a linear combination of features. This combined GA-knn approach was successfully applied to both generated and real-world data. We later extended this work by substituting a GP for the GA. The GP-knn could not only optimize data classification via linear combinations of features but also determine functional relationships among the features. This allowed for improved performance and new information on important relationships among features. We review the effectiveness of the overall approach on examples from biology and compare the effectiveness of the GA and GP.
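The GA-knn idea, evolving one weight per feature so that weighted nearest-neighbour classification improves on known examples, can be sketched as follows. The toy data, the leave-one-out 1-NN rule, and the blend/mutate operators are assumptions for illustration, not the authors' setup.

```python
import random

def weighted_nn_accuracy(weights, data):
    """Leave-one-out 1-NN accuracy under per-feature distance weights."""
    correct = 0
    for i, (x, label) in enumerate(data):
        best_d, best_lab = float("inf"), None
        for j, (y, lab) in enumerate(data):
            if i == j:
                continue
            d = sum(w * (a - b) ** 2 for w, a, b in zip(weights, x, y))
            if d < best_d:
                best_d, best_lab = d, lab
        correct += best_lab == label
    return correct / len(data)

def evolve_weights(data, n_feat, pop=20, gens=40, seed=1):
    """GA over feature-weight vectors: truncation selection, blend
    crossover, Gaussian mutation on one gene."""
    rng = random.Random(seed)
    P = [[rng.random() for _ in range(n_feat)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=lambda w: weighted_nn_accuracy(w, data), reverse=True)
        P = P[: pop // 2]
        while len(P) < pop:
            a, b = rng.sample(P[: pop // 2], 2)
            child = [(u + v) / 2 for u, v in zip(a, b)]
            k = rng.randrange(n_feat)
            child[k] = max(0.0, child[k] + rng.gauss(0, 0.3))
            P.append(child)
    return max(P, key=lambda w: weighted_nn_accuracy(w, data))

# Toy data: feature 0 separates the classes, feature 1 is noise.
data = [((0.0, 7.0), 0), ((0.1, -6.0), 0), ((1.0, 6.5), 1), ((0.9, -7.0), 1)]
weights = evolve_weights(data, 2)
```

A weight driven toward zero effectively removes a noisy feature from the distance metric, which is the feature-selection effect the abstract describes.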
Ameliorated GA approach for base station planning
NASA Astrophysics Data System (ADS)
Wang, Andong; Sun, Hongyue; Wu, Xiaomin
2011-10-01
In this paper, we aim to locate base stations (BS) rationally so as to satisfy the most customers using the fewest BSs. An ameliorated GA is proposed to search for the optimum solution. In the algorithm, we mesh the area to be planned according to the least overlap length derived from the coverage radius, introduce an isometric grid encoding method to represent the BS distribution and number, and develop selection, crossover and mutation operators to serve our particular needs. We also construct a comprehensive objective function synthesizing coverage ratio, overlap ratio, population and geographical conditions. Finally, after importing an electronic map of the area to be planned, a recommended strategy draft is exported. We use Hong Kong, China as a simulation case and obtain a satisfactory solution.
Genetic Algorithm (GA)-Based Inclinometer Layout Optimization.
Liang, Weijie; Zhang, Ping; Chen, Xianping; Cai, Miao; Yang, Daoguo
2015-01-01
This paper presents numerical simulation results for an airflow inclinometer, with sensitivity studies and thermal optimization of the printed circuit board (PCB) layout based on a genetic algorithm (GA). Due to the working principle of the gas sensor, changes in the ambient temperature may cause dramatic voltage drifts of the sensors. Therefore, eliminating the influence of the external environment on the airflow is essential for the performance and reliability of an airflow inclinometer. In this paper, the mechanism of an airflow inclinometer and the influence of different ambient temperatures on its sensitivity are examined with the ANSYS-FLOTRAN CFD program. The results show that the sensitivity of the airflow inclinometer is inversely related to the ambient temperature at the sensing element, decreasing as the ambient temperature increases. A GA is used to optimize the PCB thermal layout of the inclinometer. The finite-element simulation method (ANSYS) is introduced to simulate and verify the optimal thermal layout, and the results indicate that the optimal PCB layout greatly improves (by more than 50%) the sensitivity of the inclinometer. The study may be useful in the design of PCB layouts related to sensitivity improvement of gas sensors. PMID:25897500
Economic Dispatch Using Genetic Algorithm Based Hybrid Approach
Tahir Nadeem Malik; Aftab Ahmad; Shahab Khushnood
2006-07-01
Power Economic Dispatch (ED) is a vital and essential daily optimization procedure in power system operation. Present-day large generating units with multi-valve steam turbines exhibit large variation in their input-output characteristic functions, so non-convexity appears in the characteristic curves. Various mathematical and optimization techniques have been developed and applied to solve the economic dispatch (ED) problem. Most of these are calculus-based optimization algorithms based on successive linearization, using the first- and second-order derivatives of the objective function and its constraint equations as the search direction. They usually require the heat-input/power-output characteristics of generators to be monotonically increasing or piecewise linear. These simplifying assumptions result in an inaccurate dispatch. Genetic algorithms have been used to solve the economic dispatch problem both independently and in conjunction with other AI tools and mathematical programming approaches. Genetic algorithms have an inherent ability to reach the global minimum region of the search space in a short time, but then take longer to converge to the solution. GA-based hybrid approaches get around this problem and produce encouraging results. This paper presents a brief survey of hybrid approaches for economic dispatch; an architecture of an extensible computational framework as a common environment for conventional, genetic-algorithm and hybrid solutions to power economic dispatch; and the implementation of three algorithms in the developed framework. The framework was tested on standard test systems for performance evaluation. (authors)
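A penalty-function treatment of the power-balance constraint, common in GA dispatch work, can be sketched as below. The quadratic cost coefficients, generator limits, and penalty weight are hypothetical, and the GA operators are reduced to truncation selection with blend crossover.

```python
import random

def dispatch_cost(P, a, b, c):
    """Quadratic fuel cost: sum_i a_i + b_i*P_i + c_i*P_i^2."""
    return sum(ai + bi * p + ci * p * p for ai, bi, ci, p in zip(a, b, c, P))

def ga_dispatch(demand, pmin, pmax, a, b, c, pop=40, gens=200, seed=2):
    """GA with a penalty term for violating the power-balance constraint."""
    rng = random.Random(seed)
    n = len(pmin)
    def penalised(P):
        return dispatch_cost(P, a, b, c) + 1e3 * abs(sum(P) - demand)
    units = [[rng.uniform(lo, hi) for lo, hi in zip(pmin, pmax)]
             for _ in range(pop)]
    for _ in range(gens):
        units.sort(key=penalised)
        units = units[: pop // 2]                 # elitist truncation selection
        while len(units) < pop:
            x, y = rng.sample(units[: pop // 4], 2)
            child = [(u + v) / 2 for u, v in zip(x, y)]   # blend crossover
            k = rng.randrange(n)                  # mutate one unit's output,
            child[k] = min(pmax[k], max(pmin[k], child[k] + rng.gauss(0, 5)))
            units.append(child)                   # clamped to its limits
    return min(units, key=penalised)

# Two hypothetical units serving a 150 MW demand
solution = ga_dispatch(150.0, [10.0, 10.0], [100.0, 100.0],
                       [0.0, 0.0], [2.0, 2.5], [0.01, 0.01])
```

Because mutation clamps each gene to its unit's limits, only the balance constraint needs a penalty; this is one common way GA dispatch handles the non-convex cost curves the abstract mentions.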
Calibration of visual model for space manipulator with a hybrid LM-GA algorithm
NASA Astrophysics Data System (ADS)
Jiang, Wensong; Wang, Zhongyu
2016-01-01
A hybrid LM-GA algorithm is proposed to calibrate the camera system of a space manipulator to improve its locational accuracy. The algorithm dynamically fuses the Levenberg-Marquardt (LM) algorithm and a Genetic Algorithm (GA) to minimize the error of the nonlinear camera model. The LM algorithm is called to optimize the initial camera parameters previously generated by the genetic process. Iteration stops if the optimized camera parameters meet the accuracy requirements; otherwise, new populations are generated again by the GA and optimized afresh by the LM algorithm until the optimal solutions meet the accuracy requirements. A novel measuring machine for the space manipulator is designed for on-orbit dynamic simulation and precision testing. The camera system of the space manipulator, calibrated by the hybrid LM-GA algorithm, is used for locational precision tests on this measuring instrument. The experimental results show mean composite errors of 0.074 mm for the hybrid LM-GA camera calibration model, 1.098 mm for the LM model, and 1.202 mm for the GA model. Furthermore, the composite standard deviations are 0.103 mm for the hybrid LM-GA model, 1.227 mm for the LM model, and 1.351 mm for the GA model. The accuracy of the hybrid LM-GA camera calibration model is more than ten times higher than that of the other two methods. All in all, the hybrid LM-GA camera calibration model is superior to both the LM and GA camera calibration models.
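The alternating structure described above (a population proposes starting points, LM refines the fittest, and the loop repeats until an accuracy requirement is met) can be sketched as below. A scalar curve fit stands in for the nonlinear camera model, the LM step uses a numeric Jacobian with a diagonal approximation, and the GA stage is reduced to sampling plus selection for brevity.

```python
import random

def residuals(p, xs, ys, model):
    return [model(p, x) - y for x, y in zip(xs, ys)]

def sse(p, xs, ys, model):
    return sum(r * r for r in residuals(p, xs, ys, model))

def lm_refine(params, xs, ys, model, iters=50, lam=1e-2):
    """Tiny damped least-squares refinement (numeric Jacobian, diagonal
    approximation of J^T J): a stand-in for a full LM implementation."""
    p = list(params)
    for _ in range(iters):
        base = list(p)
        r = residuals(base, xs, ys, model)
        new = list(base)
        for j in range(len(base)):
            q = list(base)
            q[j] += 1e-6
            col = [(rq - ri) / 1e-6
                   for rq, ri in zip(residuals(q, xs, ys, model), r)]
            g = sum(cj * ri for cj, ri in zip(col, r))   # gradient component
            h = sum(cj * cj for cj in col) + lam         # damped curvature
            new[j] = base[j] - g / h
        p = new
    return p

def hybrid_lm_ga(xs, ys, model, n_params, tol=1e-6, rounds=5, seed=3):
    """Population proposes starting points; LM refines the fittest;
    repeat until the accuracy requirement is met."""
    rng = random.Random(seed)
    best, best_err = None, float("inf")
    for _ in range(rounds):
        pool = [[rng.uniform(-2, 2) for _ in range(n_params)]
                for _ in range(20)]
        pool.sort(key=lambda p: sse(p, xs, ys, model))
        refined = lm_refine(pool[0], xs, ys, model)
        err = sse(refined, xs, ys, model)
        if err < best_err:
            best, best_err = refined, err
        if best_err < tol:          # accuracy requirement met: stop early
            break
    return best
```

The division of labour mirrors the paper's motivation: the global stage avoids poor LM starting points, while the local stage supplies the fast final convergence a GA alone lacks.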
Longest jobs first algorithm in solving job shop scheduling using adaptive genetic algorithm (GA)
NASA Astrophysics Data System (ADS)
Alizadeh Sahzabi, Vahid; Karimi, Iman; Alizadeh Sahzabi, Navid; Mamaani Barnaghi, Peiman
2011-12-01
In this paper, a genetic algorithm was used to solve the Job Shop Scheduling Problem (JSSP). One example is discussed, illustrating how such problems can be solved by a genetic algorithm. The goal in JSSP is to achieve the shortest total processing time. We further propose a method to obtain the best performance in completing all jobs in the shortest time. The method is based on a genetic algorithm (GA) in which crossover between parents always follows the rule that the longest process comes first in the job queue. In other words, chromosomes are sorted from the longest process to the shortest: "longest job first" means first finding the machine that carries the most processing time across all its jobs (the bottleneck), and then sorting the jobs belonging to that machine in descending order. Based on the achieved results, "longest jobs first" is the optimal ordering strategy for job shop scheduling problems. In our results, the accuracy reached 94.7% for total processing time, and the method improved the accuracy of completing all jobs in the presented example by 4%.
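The bottleneck heuristic described in the abstract (find the machine with the largest total processing time, then order its jobs longest-first) can be sketched directly; the job data below are hypothetical.

```python
def longest_jobs_first(jobs):
    """jobs: {job: [(machine, time), ...]}. Return the bottleneck machine
    and its jobs sorted longest-first, per the heuristic described above."""
    machine_load = {}
    for ops in jobs.values():
        for m, t in ops:
            machine_load[m] = machine_load.get(m, 0) + t
    bottleneck = max(machine_load, key=machine_load.get)
    on_bneck = [(j, t) for j, ops in jobs.items()
                for m, t in ops if m == bottleneck]
    return bottleneck, sorted(on_bneck, key=lambda jt: -jt[1])

# Hypothetical 3-job, 3-machine instance
jobs = {"J1": [("M1", 3), ("M2", 2)],
        "J2": [("M1", 5), ("M3", 1)],
        "J3": [("M2", 4), ("M1", 2)]}
# M1 carries 3 + 5 + 2 = 10 time units, the most of any machine
```

In the full method this ordering would seed or constrain the GA's chromosomes rather than constitute the schedule by itself.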
System engineering approach to GPM retrieval algorithms
Rose, C. R.; Chandrasekar, V.
2004-01-01
System engineering principles and methods are very useful in large-scale complex systems for developing engineering requirements from end-user needs. Integrating research into system engineering is a challenging task. The proposed Global Precipitation Mission (GPM) satellite will use a dual-wavelength precipitation radar to measure and map global precipitation with unprecedented accuracy, resolution and areal coverage. The satellite vehicle, precipitation radars, retrieval algorithms, and ground validation (GV) functions are all critical subsystems of the overall GPM system, and each contributes to the success of the mission. Errors in the radar measurements and models can adversely affect the retrieved output values. GV systems are intended to provide timely feedback to the satellite and retrieval algorithms based on measured data. These GV sites will consist of radars and drop-size distribution (DSD) measurement systems, and also have intrinsic constraints. One of the retrieval algorithms being studied for use with GPM is the dual-wavelength DSD algorithm that does not use the surface reference technique (SRT). The underlying microphysics of precipitation structures and drop-size distributions (DSDs) dictate the types of models and retrieval algorithms that can be used to estimate precipitation. Many types of dual-wavelength algorithms have been studied. Meneghini (2002) analyzed the performance of single-pass dual-wavelength SRT-based algorithms. Mardiana (2003) demonstrated that a dual-wavelength retrieval algorithm could be used successfully without the SRT; it uses an iterative approach based on measured reflectivities at both wavelengths and complex microphysical models to estimate both N0 and D0 at each range bin. More recently, Liao (2004) proposed a solution to the D0 ambiguity problem in rain within the dual-wavelength algorithm and showed a possible melting-layer model based on stratified spheres.
NASA Astrophysics Data System (ADS)
Kim, Eunsu; Kim, Manseok; Kim, Jong-Wook
In this paper, a humanoid is simulated and implemented to walk up and down a staircase using blending polynomials and a genetic algorithm (GA). Both ascending and descending a staircase are scheduled in four steps. Each step mimics the natural gait of a human being and is easy to analyze and implement. Optimal trajectories of the ten motors in the lower extremity of the humanoid are rigorously computed to simultaneously satisfy the stability condition, walking constraints, and energy-efficiency requirements. As the optimization method, a GA is applied to search for optimal trajectory parameters in the blending polynomials. The feasibility of this approach is validated by simulation with a small humanoid robot.
NASA Astrophysics Data System (ADS)
Igeta, Hideki; Hasegawa, Mikio
Chaotic dynamics have been effectively applied to improve various heuristic algorithms for combinatorial optimization problems in many studies. Currently, the most widely used chaotic optimization scheme is to drive heuristic solution-search algorithms applicable to large-scale problems by chaotic neurodynamics, including the tabu effect of the tabu search. Alternatively, meta-heuristic algorithms perform combinatorial optimization by combining a neighboring-solution search algorithm, such as tabu, gradient, or another search method, with a global search algorithm, such as genetic algorithms (GA), ant colony optimization (ACO), or others. Among these hybrid approaches, ACO has effectively optimized the solutions of many benchmark problems in the quadratic assignment problem library. In this paper, we propose a novel hybrid method that combines an effective chaotic search algorithm, which performs better than the tabu search, with global search algorithms such as ACO and GA. Our results show that the proposed chaotic hybrid algorithm performs better than the conventional chaotic search and conventional hybrid algorithms. In addition, we show that the chaotic search algorithm combined with ACO performs better than when combined with GA.
Naresh-Kumar, G. Trager-Cowan, C.; Vilalta-Clemente, A.; Morales, M.; Ruterana, P.; Pandey, S.; Cavallini, A.; Cavalcoli, D.; Skuridina, D.; Vogt, P.; Kneissl, M.; Behmenburg, H.; Giesen, C.; Heuken, M.; Gamarra, P.; Di Forte-Poisson, M. A.; Patriarche, G.; Vickridge, I.
2014-12-15
We report on our multi-pronged approach to understand the structural and electrical properties of an InAl(Ga)N (33 nm barrier)/Al(Ga)N (1 nm interlayer)/GaN (3 μm)/AlN (100 nm)/Al₂O₃ high electron mobility transistor (HEMT) heterostructure grown by metal organic vapor phase epitaxy (MOVPE). In particular, we reveal and discuss the role of unintentional Ga incorporation in the barrier and also in the interlayer. The observation of unintentional Ga incorporation by energy-dispersive X-ray spectroscopy analysis in a scanning transmission electron microscope is supported by results obtained for samples with a range of AlN interlayer thicknesses grown in both showerhead and horizontal-type MOVPE reactors. Poisson-Schrödinger simulations show that for high Ga incorporation in the Al(Ga)N interlayer, an additional triangular well of very small depth may appear in parallel to the main 2-DEG channel. The presence of this additional channel may cause parasitic conduction and severe issues in device characteristics and processing. Producing a HEMT structure with InAlGaN as the barrier and AlGaN as the interlayer with appropriate alloy composition may be a possible route to optimization, as it might be difficult to avoid Ga incorporation while continuously depositing the layers using the MOVPE growth method. Our present work shows the necessity of a multi-characterization approach to correlate structural and electrical properties in order to understand device structures and their performance.
Ancestral genome inference using a genetic algorithm approach.
Gao, Nan; Yang, Ning; Tang, Jijun
2013-01-01
Recent advancements in technology have made it routine to obtain and compare gene orders within genomes. Rearrangements of gene orders by operations such as reversal and transposition are rare events that enable researchers to reconstruct deep evolutionary histories. An important application of genome rearrangement analysis is to infer the gene orders of ancestral genomes, which is valuable for identifying patterns of evolution and for modeling evolutionary processes. Among the various available methods, parsimony-based methods (including GRAPPA and MGR) are the most widely used. Since the core algorithms of these methods are solvers for the so-called median problem, providing an efficient and accurate median solver has attracted much attention in this field. The "double-cut-and-join" (DCJ) model uses the single DCJ operation to account for all genome rearrangement events. Because it is mathematically much simpler than handling events directly, parsimony methods using DCJ median solvers have better speed and accuracy. However, the DCJ median problem is NP-hard, and although several exact algorithms are available, they all have great difficulty when the given genomes are distant. In this paper, we present a new algorithm that combines a genetic algorithm (GA) with genomic sorting to produce a new method which can solve the DCJ median problem in limited time and space, especially on large and distant datasets. Our experimental results show that this new GA-based method can find optimal or near-optimal results for problems ranging from easy to very difficult. Compared to existing parsimony methods, which may severely underestimate the true number of evolutionary events, the sorting-based approach can infer ancestral genomes that are much closer to their true ancestors. The code is available at http://phylo.cse.sc.edu. PMID:23658708
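A GA median search of the kind described can be sketched with the simpler breakpoint distance standing in for the DCJ distance; the swap-mutation "sorting" move and the seeding of the population from the input genomes are simplifying assumptions.

```python
import random

def breakpoint_distance(g1, g2):
    """Adjacencies of g1 absent from g2 (orientation-blind stand-in for DCJ)."""
    adj = {(g2[i], g2[i + 1]) for i in range(len(g2) - 1)}
    adj |= {(b, a) for a, b in adj}
    return sum((g1[i], g1[i + 1]) not in adj for i in range(len(g1) - 1))

def median_score(m, genomes):
    """Total distance from candidate median m to all input genomes."""
    return sum(breakpoint_distance(m, g) for g in genomes)

def ga_median(genomes, pop=30, gens=200, seed=4):
    """GA over candidate medians, seeded from the input genomes; swap
    mutation plays the role of a small rearrangement move."""
    rng = random.Random(seed)
    n = len(genomes[0])
    P = [rng.choice(genomes)[:] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=lambda m: median_score(m, genomes))
        P = P[: pop // 2]                       # elitist truncation
        while len(P) < pop:
            child = rng.choice(P[: pop // 2])[:]
            i, j = rng.sample(range(n), 2)
            child[i], child[j] = child[j], child[i]
            P.append(child)
    return min(P, key=lambda m: median_score(m, genomes))
```

Seeding the population from the input genomes mirrors the intuition that a good median lies "between" the leaves, which is also why the paper's sorting-based moves help on distant genomes.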
Algorithmic approach in the diagnosis of uveitis
Rathinam, S R; Babu, Manohar
2013-01-01
Uveitis is caused by disorders of diverse etiologies, including a wide spectrum of infectious and non-infectious causes. Often the clinical signs are nonspecific and shared by different diseases. On several occasions, uveitis reflects diseases that are developing elsewhere in the body, and ocular signs may be the first evidence of such systemic diseases. Uveitis specialists need a thorough knowledge of all entities, and their work-up has to be systematic and complete, including systemic and ocular examinations. Creating an algorithmic approach to the critical steps to be taken would help the ophthalmologist arrive at the etiological diagnosis. PMID:23803476
DeMAID/GA USER'S GUIDE Design Manager's Aid for Intelligent Decomposition with a Genetic Algorithm
NASA Technical Reports Server (NTRS)
Rogers, James L.
1996-01-01
Many companies are looking for new tools and techniques to aid a design manager in making decisions that can reduce the time and cost of a design cycle. One tool available to aid in this decision-making process is the Design Manager's Aid for Intelligent Decomposition (DeMAID). Since the initial release of DeMAID in 1989, numerous enhancements have been added to aid the design manager in saving both cost and time in a design cycle. The key enhancement is a genetic algorithm (GA), and the enhanced version is called DeMAID/GA. The GA orders the sequence of design processes to minimize the cost and time to converge to a solution. These enhancements, as well as the existing features of the original version of DeMAID, are described. Two sample problems are used to show how these enhancements can be applied to improve the design cycle. This report serves as a user's guide for DeMAID/GA.
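The core DeMAID/GA idea, ordering design processes so that few of them depend on processes scheduled later (feedback couplings), can be sketched as a GA over orderings of a small dependency map. The process names and dependencies below are invented for illustration.

```python
import random

def feedback_count(order, depends):
    """Count couplings where a process needs output of a later-scheduled one."""
    pos = {t: i for i, t in enumerate(order)}
    return sum(pos[src] > pos[dst]
               for dst, srcs in depends.items() for src in srcs)

def ga_order(depends, pop=20, gens=100, seed=5):
    """GA over process orderings: elitist truncation plus swap mutation."""
    rng = random.Random(seed)
    tasks = list(depends)
    P = [rng.sample(tasks, len(tasks)) for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=lambda o: feedback_count(o, depends))
        P = P[: pop // 2]
        while len(P) < pop:
            child = rng.choice(P)[:]
            i, j = rng.sample(range(len(tasks)), 2)
            child[i], child[j] = child[j], child[i]
            P.append(child)
    return min(P, key=lambda o: feedback_count(o, depends))

# Hypothetical 4-process dependency map: "aero" feeds "loads", etc.
deps = {"aero": [], "loads": ["aero"],
        "structure": ["loads"], "weight": ["structure"]}
```

Minimizing feedback couplings is what shortens design-cycle iteration: every coupling counted here represents a downstream result that would force an earlier process to be redone.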
The royal road for genetic algorithms: Fitness landscapes and GA performance
Mitchell, M.; Holland, J.H.; Forrest, S.
1991-01-01
Genetic algorithms (GAs) play a major role in many artificial-life systems, but there is often little detailed understanding of why the GA performs as it does, and little theoretical basis on which to characterize the types of fitness landscapes that lead to successful GA performance. In this paper we propose a strategy for addressing these issues. Our strategy consists of defining a set of features of fitness landscapes that are particularly relevant to the GA, and experimentally studying how various configurations of these features affect the GA's performance along a number of dimensions. In this paper we informally describe an initial set of proposed feature classes, describe in detail one such class ("Royal Road" functions), and present some initial experimental results concerning the role of crossover and "building blocks" on landscapes constructed from features of this class. 27 refs., 1 fig., 5 tabs.
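The simplest Royal Road function, R1, rewards each fully-set block of a bit string with a contribution equal to the block's length (the order of the corresponding schema); a minimal sketch:

```python
def royal_road(bits, block=8):
    """Mitchell-Forrest-Holland R1: each block contributes its length (the
    schema's order) when every bit in the block is set, else nothing."""
    assert len(bits) % block == 0
    return sum(block for i in range(0, len(bits), block)
               if all(bits[i:i + block]))
```

The stepwise fitness makes the intended "building blocks" explicit: crossover can, in principle, combine strings that each have different completed blocks, which is exactly the behaviour the paper's experiments probe.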
Evolutionary Algorithms Approach to the Solution of Damage Detection Problems
NASA Astrophysics Data System (ADS)
Salazar Pinto, Pedro Yoajim; Begambre, Oscar
2010-09-01
In this work, a new Self-Configured Hybrid Algorithm is proposed, combining Particle Swarm Optimization (PSO) and a Genetic Algorithm (GA). The aim of the proposed strategy is to increase the stability and accuracy of the search. The central idea is the concept of the Guide Particle: this particle (the best PSO global in each generation) transmits its information to a particle of the following PSO generation, which is controlled by the GA. Thus, the proposed hybrid has an elitism feature that improves its performance and guarantees the convergence of the procedure. In different tests carried out on benchmark functions reported in the international literature, better performance in stability and accuracy was observed; the new algorithm was therefore used to identify damage in a simply supported beam using modal data. Finally, it is worth noting that the algorithm is independent of the initial definition of the heuristic parameters.
The mGA1.0: A common LISP implementation of a messy genetic algorithm
NASA Technical Reports Server (NTRS)
Goldberg, David E.; Kerzic, Travis
1990-01-01
Genetic algorithms (GAs) are finding increased application in difficult search, optimization, and machine learning problems in science and engineering. Increasing demands are being placed on algorithm performance, and the remaining challenges of genetic algorithm theory and practice are becoming increasingly unavoidable. Perhaps the most difficult of these challenges is the so-called linkage problem. Messy GAs were created to overcome the linkage problem of simple genetic algorithms by combining variable-length strings, gene expression, messy operators, and a nonhomogeneous phasing of evolutionary processing. Results on a number of difficult deceptive test functions are encouraging, with the mGA always finding global optima in a polynomial number of function evaluations. Theoretical and empirical studies are continuing, and a first version of a messy GA is ready for testing by others. A Common LISP implementation called mGA1.0 is documented and related to the basic principles and operators developed by Goldberg et al. (1989, 1990). Although the code was prepared with care, it is not a general-purpose code, only a research version. Important data structures and global variables are described. Thereafter, brief function descriptions are given, and sample input data are presented together with sample program output. A source listing with comments is also included.
DFT algorithms for bit-serial GaAs array processor architectures
NASA Technical Reports Server (NTRS)
Mcmillan, Gary B.
1988-01-01
Systems and Processes Engineering Corporation (SPEC) has developed an innovative array processor architecture for computing Fourier transforms and other commonly used signal processing algorithms. This architecture is designed to extract the highest possible array performance from state-of-the-art GaAs technology. SPEC's architectural design includes a high performance RISC processor implemented in GaAs, along with a Floating Point Coprocessor and a unique Array Communications Coprocessor, also implemented in GaAs technology. Together, these data processors represent the latest in technology, both from an architectural and implementation viewpoint. SPEC has examined numerous algorithms and parallel processing architectures to determine the optimum array processor architecture. SPEC has developed an array processor architecture with integral communications ability to provide maximum node connectivity. The Array Communications Coprocessor embeds communications operations directly in the core of the processor architecture. A Floating Point Coprocessor architecture has been defined that utilizes Bit-Serial arithmetic units, operating at very high frequency, to perform floating point operations. These Bit-Serial devices reduce the device integration level and complexity to a level compatible with state-of-the-art GaAs device technology.
Global Precipitation Measurement: GPM Microwave Imager (GMI) Algorithm Development Approach
NASA Technical Reports Server (NTRS)
Stocker, Erich Franz
2009-01-01
This slide presentation reviews the approach to the development of the Global Precipitation Measurement algorithm. This presentation includes information about the responsibilities for the development of the algorithm, and the calibration. Also included is information about the orbit, and the sun angle. The test of the algorithm code will be done with synthetic data generated from the Precipitation Processing System (PPS).
An algorithmic approach to crustal deformation analysis
NASA Technical Reports Server (NTRS)
Iz, Huseyin Baki
1987-01-01
In recent years the analysis of crustal deformation measurements has become important as a result of current improvements in geodetic methods and an increasing amount of theoretical and observational data provided by several earth sciences. A first-generation data analysis algorithm which combines a priori information with current geodetic measurements was proposed. Relevant methods which can be used in the algorithm were discussed. Prior information is the unifying feature of this algorithm. Some of the problems which may arise through the use of a priori information in the analysis were indicated and preventive measures were demonstrated. The first step in the algorithm is the optimal design of deformation networks. The second step in the algorithm identifies the descriptive model of the deformation field. The final step in the algorithm is the improved estimation of deformation parameters. Although deformation parameters are estimated in the process of model discrimination, they can further be improved by the use of a priori information about them. According to the proposed algorithm this information must first be tested against the estimates calculated using the sample data only. Null-hypothesis testing procedures were developed for this purpose. Six different estimators which employ a priori information were examined. Emphasis was put on the case when the prior information is wrong and analytical expressions for possible improvements under incompatible prior information were derived.
A Genetic Algorithm and Fuzzy Logic Approach for Video Shot Boundary Detection
Thounaojam, Dalton Meitei; Khelchandra, Thongam; Singh, Kh. Manglem; Roy, Sudipta
2016-01-01
This paper proposes a shot boundary detection approach using a Genetic Algorithm and Fuzzy Logic. The membership functions of the fuzzy system are calculated by a Genetic Algorithm using pre-observed actual values for shot boundaries. The fuzzy system then classifies the types of shot transitions. Experimental results show that the accuracy of shot boundary detection increases with the number of iterations, or generations, of the GA optimization process. The proposed system is compared with recent techniques and yields better results in terms of the F1-score. PMID:27127500
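The GA-tuned-membership idea can be sketched in a toy form (hypothetical frame-difference data and a single triangular membership function; the paper's actual system uses a fuller fuzzy rule base):

```python
import random

# Hypothetical labeled frame-difference values:
# small differences = no shot change (0), large = shot change (1).
data = [(0.05, 0), (0.10, 0), (0.15, 0), (0.60, 1), (0.75, 1), (0.90, 1)]

def membership(x, c, w):
    """Triangular 'transition' membership centered at c with half-width w."""
    return max(0.0, 1.0 - abs(x - c) / w)

def fitness(ind):
    """Fraction of training examples classified correctly by this (c, w)."""
    c, w = ind
    correct = sum((membership(x, c, w) > 0.5) == bool(label)
                  for x, label in data)
    return correct / len(data)

def evolve(pop_size=20, generations=40, seed=1):
    """Tiny GA: elitist truncation selection, averaging crossover, Gaussian mutation."""
    rng = random.Random(seed)
    pop = [(rng.uniform(0, 1), rng.uniform(0.1, 1)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            children.append(((a[0] + b[0]) / 2 + rng.gauss(0, 0.05),
                             max(0.05, (a[1] + b[1]) / 2 + rng.gauss(0, 0.05))))
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

As in the abstract, classification accuracy improves over GA generations because the membership parameters are refit to the pre-observed boundary labels.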
NASA Astrophysics Data System (ADS)
Sulaiman, Salina; Bade, Abdullah; Lee, Rechard; Tanalol, Siti Hasnah
2014-07-01
The Mass Spring Model (MSM) is a highly efficient model in terms of computation and ease of implementation. Mass, spring stiffness coefficient and damping constant are its three major components. This paper focuses on identifying the spring stiffness coefficient and damping constant through automated tuning by optimization, in order to generate a human liver model capable of responding quickly. To achieve this objective, two heuristic approaches, Simulated Annealing (SA) and Genetic Algorithm (GA), are applied to the human liver model data set. The mechanical properties taken into consideration are anisotropy and viscoelasticity. The optimization results from SA and GA are then implemented in the MSM to model two human livers, one with SA and one with GA construction parameters, using FEM construction parameters as the benchmark. Step responses of both models are obtained after the MSMs are solved using fourth-order Runge-Kutta (RK4), in order to compare their elastic responses. Remodelling time with the manual calculation method is compared against the heuristic optimization methods (SA and GA), showing that a model with automatic construction is more realistic in terms of real-time interaction response. Liver models generated using the SA and GA optimization techniques are compared with the liver model from manual calculation: the reconstruction time required for 1000 repetitions of SA and GA is faster than for the manual method. Meanwhile, comparison between the construction times of the SA and GA models indicates that the SA model is faster than the GA model, with a time difference of 0.110635 seconds per 1000 repetitions. Real-time interaction with mechanical properties depends on the rate of time and the speed of the remodelling process. Thus, SA and GA have proven suitable for enhancing the realism of simulated real-time interaction in liver remodelling.
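The RK4 integration at the heart of such an MSM solver can be sketched for a single mass-spring-damper node (the mass, stiffness and damping values below are illustrative, not the tuned liver coefficients):

```python
def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k1)])
    k3 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h*ki for yi, ki in zip(y, k3)])
    return [yi + h/6*(a + 2*b + 2*c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def damped_spring(m=1.0, k=50.0, c=2.0):
    """Right-hand side of one mass-spring-damper node: state y = [x, v]."""
    def f(t, y):
        x, v = y
        return [v, (-k*x - c*v) / m]
    return f

f = damped_spring()
y = [0.1, 0.0]          # initial displacement 0.1, starting at rest
h = 0.001
for i in range(5000):   # integrate 5 simulated seconds
    y = rk4_step(f, i*h, y, h)
```

The step response of the model is exactly this kind of trajectory: the displacement oscillates and decays toward equilibrium at a rate set by the damping constant.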
A coupled model tree (MT) genetic algorithm (GA) scheme for biofouling assessment in pipelines.
Opher, Tamar; Ostfeld, Avi
2011-11-15
A computerized learning algorithm was developed for assessing the extent of biofouling formations on the inner surfaces of water supply pipelines. Four identical pipeline experimental systems with four different types of inlet waters were set up as part of a large cooperative project between academia and industry in Israel on biofouling modeling, prediction, and prevention in pipeline systems. Samples were taken periodically for hydraulic, chemical, and biological analyses. Biofilm sampling was done using Robbins devices, carrying stainless steel coupons. An MT-GA, a hybrid model combining model trees (MTs) and genetic algorithms (GAs) in which the sampled input data are selected by the proposed methodology, was developed. The method outcome is a set of empirical linear rules which form a model tree, iteratively optimized by a GA and verified using the dataset resulting from the empirical field studies. Good correlations were achieved between modeled and observed cell coverage area within the biofilm. Sensitivity analysis was conducted by testing the model's response to changes in: (1) the biofilm measure used as output (target) variable; (2) variability of GA parameters; and (3) input attributes. The proposed methodology provides a new tool for biofouling assessment in pipelines. PMID:21978570
A random walk approach to quantum algorithms.
Kendon, Vivien M
2006-12-15
The development of quantum algorithms based on quantum versions of random walks is placed in the context of the emerging field of quantum computing. Constructing a suitable quantum version of a random walk is not trivial; pure quantum dynamics is deterministic, so randomness only enters during the measurement phase, i.e. when converting the quantum information into classical information. The outcome of a quantum random walk is very different from the corresponding classical random walk owing to the interference between the different possible paths. The upshot is that quantum walkers find themselves further from their starting point than a classical walker on average, and this forms the basis of a quantum speed up, which can be exploited to solve problems faster. Surprisingly, the effect of making the walk slightly less than perfectly quantum can optimize the properties of the quantum walk for algorithmic applications. Looking to the future, even with a small quantum computer available, the development of quantum walk algorithms might proceed more rapidly than it has, especially for solving real problems. PMID:17090467
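The faster spreading described above can be demonstrated with a minimal simulation of a discrete-time Hadamard-coin walk on the line (a generic textbook example, not taken from the paper):

```python
import math

def hadamard_walk(steps):
    """Discrete-time quantum walk on the line with a Hadamard coin.
    amp[pos] = [amplitude with coin 'left', amplitude with coin 'right']."""
    amp = {0: [1/math.sqrt(2), 1j/math.sqrt(2)]}   # symmetric initial coin state
    s = 1/math.sqrt(2)
    for _ in range(steps):
        new = {}
        for pos, (l, r) in amp.items():
            # Hadamard coin, then shift: 'left' moves -1, 'right' moves +1.
            nl, nr = s*(l + r), s*(l - r)
            new.setdefault(pos-1, [0, 0])[0] += nl
            new.setdefault(pos+1, [0, 0])[1] += nr
        amp = new
    # Measurement: probability of each position.
    return {p: abs(l)**2 + abs(r)**2 for p, (l, r) in amp.items()}

probs = hadamard_walk(50)
mean = sum(p*q for p, q in probs.items())
var = sum((p - mean)**2 * q for p, q in probs.items())
sigma_quantum = math.sqrt(var)
sigma_classical = math.sqrt(50)   # unbiased classical random walk, 50 steps
```

Interference between paths makes the quantum standard deviation grow linearly in the number of steps, versus the square-root growth of the classical walk, which is the origin of the algorithmic speed-up.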
Algorithmic crystal chemistry: A cellular automata approach
Krivovichev, S. V.
2012-01-15
Atomic-molecular mechanisms of crystal growth can be modeled based on crystallochemical information using cellular automata (a particular case of finite deterministic automata). In particular, the formation of heteropolyhedral layered complexes in uranyl selenates can be modeled applying a one-dimensional three-colored cellular automaton. The use of the theory of calculations (in particular, the theory of automata) in crystallography allows one to interpret crystal growth as a computational process (the realization of an algorithm or program with a finite number of steps).
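The flavour of such a one-dimensional three-state (three-colour) automaton can be sketched generically (the totalistic rule below is a hypothetical stand-in, not the rule derived for uranyl selenates):

```python
def step(cells, rule):
    """Advance a one-dimensional three-state CA one generation (periodic ends).
    rule maps a (left, centre, right) neighbourhood to a new state 0..2."""
    n = len(cells)
    return [rule(cells[(i-1) % n], cells[i], cells[(i+1) % n]) for i in range(n)]

# A hypothetical totalistic rule: new state = (sum of neighbourhood) mod 3.
totalistic = lambda l, c, r: (l + c + r) % 3

row = [0]*10 + [1] + [0]*10     # a single seed cell, like a nucleation site
history = [row]
for _ in range(8):
    row = step(row, totalistic)
    history.append(row)
```

Reading `history` top to bottom mimics layer-by-layer growth: each generation is computed locally from the previous one, which is what lets crystal growth be viewed as a computation with a finite number of steps.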
Machine learning algorithms for damage detection: Kernel-based approaches
NASA Astrophysics Data System (ADS)
Santos, Adam; Figueiredo, Eloi; Silva, M. F. M.; Sales, C. S.; Costa, J. C. W. A.
2016-02-01
This paper presents four kernel-based algorithms for damage detection under varying operational and environmental conditions, based on the one-class support vector machine, support vector data description, kernel principal component analysis and greedy kernel principal component analysis. Acceleration time series from an array of accelerometers on a laboratory structure were used for performance comparison. The main contributions of this study are the application of the proposed algorithms to damage detection and the comparison of their classification performance with that of four other algorithms already considered reliable approaches in the literature. All four proposed algorithms showed better classification performance than the established ones.
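The kernel-based novelty-detection principle these four methods share can be sketched with a simple RBF similarity score (a stand-in for the actual classifiers, using synthetic two-dimensional features):

```python
import math, random

def rbf(a, b, gamma=0.5):
    """Gaussian (RBF) kernel between two feature vectors."""
    return math.exp(-gamma * sum((x - y)**2 for x, y in zip(a, b)))

def kernel_novelty_score(train, x):
    """Mean kernel similarity of x to the training (undamaged) set;
    low values flag a potential damage state."""
    return sum(rbf(t, x) for t in train) / len(train)

rng = random.Random(0)
# Features from the baseline (undamaged) condition: clustered near the origin.
baseline = [(rng.gauss(0, 0.3), rng.gauss(0, 0.3)) for _ in range(200)]
# Decision threshold: 5th percentile of the baseline's own scores.
scores = sorted(kernel_novelty_score(baseline, t) for t in baseline)
threshold = scores[len(scores) // 20]

normal_test = (0.1, -0.2)    # consistent with the undamaged condition
damaged_test = (3.0, 3.0)    # far outside the baseline distribution
```

A test observation whose score falls below the baseline-derived threshold is declared damaged; the four kernel methods in the paper differ mainly in how the boundary around the baseline data is learned.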
Beyond Hydrodynamics via a Fluid Element PIC algorithm, GaPH
NASA Astrophysics Data System (ADS)
Bateson, William; Hewett, Dennis; Lambert, Michael
1996-11-01
For strongly-driven gas and plasma systems, issues of interpenetration and turbulence have led to difficulties with fluid models. For example, a Maxwell distribution within a finite volume can miss the interpenetration and shear regions between two fluids. To address these and other issues, we have extended our Grid and Particle Hydrodynamics (GaPH), a fluid element PIC code, beyond the initial high-precision, 1-D collisionless solutions [2] to 2-D with both binary and viscous drag collisions. The GaPH algorithm still aggressively probes for emerging phase-space features by fitting new "particles" to the "hydrodynamic" evolution of individual particles, and aggressively merges them to preserve economy if interesting features fail to materialize. Recent extensions add collisional diffusion to the hydrodynamics. Through these and other extensions, GaPH approximates Boltzmann transport, thus leaving behind the fluid-model assumption of a local Maxwell distribution. [1] This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract W-7405-Eng-48 and by Sandia National Laboratory under Contract DE-AC04-94AL85000. [2] "Beyond Hydrodynamics via Fluid Element Particle-In-Cell", WB Bateson and DW Hewett, (submitted J. Comp. Phys. July 1996).
Genetic Algorithm Approaches to Prebiotic Chemistry Modeling
NASA Technical Reports Server (NTRS)
Lohn, Jason; Colombano, Silvano
1997-01-01
We model an artificial chemistry comprised of interacting polymers by specifying two initial conditions: a distribution of polymers and a fixed set of reversible catalytic reactions. A genetic algorithm is used to find a set of reactions that exhibits a desired dynamical behavior. Such a technique is useful because it allows an investigator to determine whether a specific pattern of dynamics can be produced, and if it can, the reaction network found can then be analyzed. We present our results in the context of studying simplified chemical dynamics in theorized protocells - hypothesized precursors of the first living organisms. Our results show that, given a small sample of plausible protocell reaction dynamics, catalytic reaction sets can be found. We present cases where this is not possible and also analyze the evolved reaction sets.
Wang, Yan; Xi, Chengyu; Zhang, Shuai; Zhang, Wenyu; Yu, Dejian
2015-01-01
As E-government continues to develop with ever-increasing speed, the requirement to enhance traditional government systems and affairs with electronic methods that are more effective and efficient is becoming critical. As a new product of information technology, E-tendering is becoming an inevitable reality owing to its efficiency, fairness, transparency, and accountability. Thus, developing and promoting government E-tendering (GeT) is imperative. This paper presents a hybrid approach combining genetic algorithm (GA) and Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS) to enable GeT to search for the optimal tenderer efficiently and fairly under circumstances where the attributes of the tenderers are expressed as fuzzy number intuitionistic fuzzy sets (FNIFSs). GA is applied to obtain the optimal weights of evaluation criteria of tenderers automatically. TOPSIS is employed to search for the optimal tenderer. A prototype system is built and validated with an illustrative example from GeT to verify the feasibility and availability of the proposed approach. PMID:26147468
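The TOPSIS stage of the hybrid approach can be sketched as follows (fixed illustrative weights stand in for the GA-derived ones, and crisp numbers stand in for the fuzzy FNIFS attributes):

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives by relative closeness to the ideal solution.
    matrix[i][j]: score of alternative i on criterion j;
    benefit[j]: True if larger is better on criterion j."""
    m, n = len(matrix), len(matrix[0])
    norms = [math.sqrt(sum(matrix[i][j]**2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    worst = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    closeness = []
    for row in v:
        d_pos = math.sqrt(sum((x - i)**2 for x, i in zip(row, ideal)))
        d_neg = math.sqrt(sum((x - w)**2 for x, w in zip(row, worst)))
        closeness.append(d_neg / (d_pos + d_neg))
    return closeness

# Three hypothetical tenderers scored on (price, quality); price is a cost criterion.
tenderers = [[100, 0.90], [80, 0.70], [120, 0.95]]
scores = topsis(tenderers, weights=[0.5, 0.5], benefit=[False, True])
```

The tenderer with the highest closeness score is selected; in the paper, the GA's role is to supply the criterion weights automatically rather than fixing them by hand as done here.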
A new algorithmic approach for fingers detection and identification
NASA Astrophysics Data System (ADS)
Mubashar Khan, Arslan; Umar, Waqas; Choudhary, Taimoor; Hussain, Fawad; Haroon Yousaf, Muhammad
2013-03-01
Gesture recognition is concerned with interpreting human gestures through mathematical algorithms. Gestures can originate from any bodily motion or state but commonly originate from the face or hand. Hand gesture detection in a real-time environment, where time and memory are important constraints, is a critical operation. Hand gesture recognition largely depends on the accurate detection of the fingers. This paper presents a new algorithmic approach to detect and identify the fingers of a human hand. The proposed algorithm does not depend on prior knowledge of the scene. It detects the active fingers and the metacarpophalangeal (MCP) joints of the inactive fingers from an already detected hand. A dynamic thresholding technique and a connected component labeling scheme are employed for background elimination and hand detection, respectively. The algorithm offers a new approach to finger identification in a real-time environment while keeping memory and time requirements as low as possible.
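The connected component labeling step used for hand detection can be sketched with a BFS-based 4-connected labeller on a toy binary mask (illustrative only; the paper applies it to thresholded camera frames):

```python
from collections import deque

def label_components(img):
    """4-connected component labelling of a binary image (list of 0/1 rows)."""
    h, w = len(img), len(img[0])
    labels = [[0]*w for _ in range(h)]
    current = 0
    for y in range(h):
        for x in range(w):
            if img[y][x] and not labels[y][x]:
                current += 1                     # start a new component
                queue = deque([(y, x)])
                labels[y][x] = current
                while queue:
                    cy, cx = queue.popleft()
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and img[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = current
                            queue.append((ny, nx))
    return labels, current

# A toy binary mask with two separate foreground blobs.
mask = [
    [1, 1, 0, 0, 1],
    [1, 0, 0, 0, 1],
    [0, 0, 0, 0, 1],
]
labels, count = label_components(mask)
```

Each connected blob receives a distinct label; in the paper's pipeline the largest labeled region is taken as the hand before the fingers are analysed.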
A mild reduction phosphidation approach to nanocrystalline GaP
NASA Astrophysics Data System (ADS)
Chen, Luyang; Luo, Tao; Huang, Mingxing; Gu, Yunle; Shi, Liang; Qian, Yitai
2004-12-01
Nanocrystalline gallium phosphide (GaP) has been prepared through reduction-phosphidation, using Ga and PCl3 as the gallium and phosphorus sources and metallic sodium as the reductant at 350 °C. The XRD pattern can be indexed as cubic GaP with a lattice constant of a = 5.446 Å. The TEM image shows particle-like polycrystals and flake-like single crystals. The PL spectrum exhibits one peak at 330 nm for the as-prepared nanocrystalline GaP.
Silva, Leonardo W. T.; Barros, Vitor F.; Silva, Sandro G.
2014-01-01
In launching operations, Rocket Tracking Systems (RTS) process the trajectory data obtained by radar sensors. In order to improve functionality and maintenance, radars can be upgraded by replacing parabolic reflector (PR) antennas with phased arrays (PAs). These arrays enable electronic control of the radiation pattern by adjusting the signal supplied to each radiating element. However, in projects of phased array radars (PARs), the modeling of the problem is subject to various combinations of excitation signals, producing a complex optimization problem. In this case, it is possible to compute solutions with optimization methods such as genetic algorithms (GAs). For this purpose, the Genetic Algorithm with Maximum-Minimum Crossover (GA-MMC) method was developed to control the radiation pattern of PAs. The GA-MMC uses a reconfigurable algorithm with multiple objectives, differentiated coding and a new crossover genetic operator. This operator differs from the conventional one in that it crosses the fittest individuals with the least fit individuals in order to enhance genetic diversity. GA-MMC was successful in more than 90% of the tests for each application, increased the fitness of the final population by more than 20% and reduced premature convergence. PMID:25196013
An Airborne Conflict Resolution Approach Using a Genetic Algorithm
NASA Technical Reports Server (NTRS)
Mondoloni, Stephane; Conway, Sheila
2001-01-01
An airborne conflict resolution approach is presented that is capable of providing flight plans forecast to be conflict-free with both area and traffic hazards. This approach is capable of meeting constraints on the flight plan such as required times of arrival (RTA) at a fix. The conflict resolution algorithm is based upon a genetic algorithm, and can thus seek conflict-free flight plans meeting broader flight planning objectives such as minimum time, fuel or total cost. The method has been applied to conflicts occurring 6 to 25 minutes in the future in climb, cruise and descent phases of flight. The conflict resolution approach separates the detection, trajectory generation and flight rules function from the resolution algorithm. The method is capable of supporting pilot-constructed resolutions, cooperative and non-cooperative maneuvers, and also providing conflict resolution on trajectories forecast by an onboard FMC.
Combined Simulated Annealing and Genetic Algorithm Approach to Bus Network Design
NASA Astrophysics Data System (ADS)
Liu, Li; Olszewski, Piotr; Goh, Pong-Chai
A new method, a combined simulated annealing (SA) and genetic algorithm (GA) approach, is proposed to solve the problem of bus route design and frequency setting for a given road network with fixed bus stop locations and fixed travel demand. The method involves two steps: a set of candidate routes is generated first, and then the best subset of these routes is selected by the combined SA and GA procedure. SA is the main process used to search for a better solution minimizing the total system cost, comprising user and operator costs. GA is used as a sub-process to generate new solutions. Bus demand assignment on two alternative paths is performed at the solution evaluation stage. The method was implemented on four theoretical grid networks of different sizes and a benchmark network. Several GA operators (crossover and mutation) were utilized and tested for their effectiveness. The results show that the proposed method can efficiently converge to the optimal solution on a small network, but computation time increases significantly with network size. The method can also be used for other transport operation management problems.
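The SA main loop can be sketched for a toy route-subset selection problem (the candidate routes and cost structure below are hypothetical, and the GA sub-process for generating new solutions is replaced by a simple bit-flip neighbour for brevity):

```python
import math, random

def simulated_annealing(cost, n_bits, t0=2.0, cooling=0.999, steps=4000, seed=3):
    """Generic SA over binary strings; bit i = candidate route i is operated."""
    rng = random.Random(seed)
    state = [rng.randint(0, 1) for _ in range(n_bits)]
    current_cost = cost(state)
    best, best_cost, t = state[:], current_cost, t0
    for _ in range(steps):
        neighbour = state[:]
        neighbour[rng.randrange(n_bits)] ^= 1     # flip one route in/out
        delta = cost(neighbour) - current_cost
        if delta < 0 or rng.random() < math.exp(-delta / t):
            state, current_cost = neighbour, current_cost + delta
            if current_cost < best_cost:
                best, best_cost = state[:], current_cost
        t *= cooling                              # geometric cooling schedule
    return best, best_cost

# Toy total system cost: each operated route costs 1 (operator cost) and
# every uncovered demand point adds a penalty of 5 (user cost).
demand_cover = [{0, 1}, {1, 2}, {3}, {0, 3}]      # points covered by each route
def system_cost(bits):
    chosen = [demand_cover[i] for i, b in enumerate(bits) if b]
    covered = set().union(*chosen) if chosen else set()
    return sum(bits) + 5 * (4 - len(covered))

best, best_cost = simulated_annealing(system_cost, n_bits=4)
```

The annealer settles on the cheapest route subset that still covers all demand points, mirroring the user-cost/operator-cost trade-off minimized in the paper.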
DNA Microarray Data Analysis: A Novel Biclustering Algorithm Approach
NASA Astrophysics Data System (ADS)
Tchagang, Alain B.; Tewfik, Ahmed H.
2006-12-01
Biclustering algorithms refer to a distinct class of clustering algorithms that perform simultaneous row-column clustering. Biclustering problems arise in DNA microarray data analysis, collaborative filtering, market research, information retrieval, text mining, electoral trends, exchange analysis, and so forth. When dealing with DNA microarray experimental data for example, the goal of biclustering algorithms is to find submatrices, that is, subgroups of genes and subgroups of conditions, where the genes exhibit highly correlated activities for every condition. In this study, we develop novel biclustering algorithms using basic linear algebra and arithmetic tools. The proposed biclustering algorithms can be used to search for all biclusters with constant values, biclusters with constant values on rows, biclusters with constant values on columns, and biclusters with coherent values from a set of data in a timely manner and without solving any optimization problem. We also show how one of the proposed biclustering algorithms can be adapted to identify biclusters with coherent evolution. The algorithms developed in this study discover all valid biclusters of each type, while almost all previous biclustering approaches will miss some.
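The notion of a constant-value bicluster can be conveyed with a brute-force search on a tiny matrix (exhaustive enumeration is exponential in the number of rows; the paper's algorithms avoid this, so the sketch below is purely illustrative):

```python
from itertools import combinations

def constant_biclusters(matrix, min_rows=2, min_cols=2):
    """Exhaustively find (rows, cols, value) triples where the submatrix
    matrix[rows][cols] holds a single constant value. Tiny inputs only."""
    n_rows, n_cols = len(matrix), len(matrix[0])
    found = []
    for r in range(min_rows, n_rows + 1):
        for rows in combinations(range(n_rows), r):
            # Columns on which all selected rows agree.
            cols = [j for j in range(n_cols)
                    if len({matrix[i][j] for i in rows}) == 1]
            for v in {matrix[rows[0]][j] for j in cols}:
                vcols = tuple(j for j in cols if matrix[rows[0]][j] == v)
                if len(vcols) >= min_cols:
                    found.append((rows, vcols, v))
    return found

# A 4x5 toy expression matrix with a planted constant bicluster of 7s
# in rows {1, 2} and columns {1, 3}.
M = [
    [1, 2, 3, 4, 5],
    [9, 7, 0, 7, 2],
    [8, 7, 1, 7, 3],
    [4, 5, 6, 0, 1],
]
result = constant_biclusters(M)
```

The other bicluster types in the abstract (constant rows, constant columns, coherent values) generalize this definition by allowing row or column offsets instead of a single shared value.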
Oxidation of GaN: An ab initio thermodynamic approach
NASA Astrophysics Data System (ADS)
Jackson, Adam J.; Walsh, Aron
2013-10-01
GaN is a wide-band-gap semiconductor used in high-efficiency light-emitting diodes and solar cells. The solid is produced industrially at high chemical purities by deposition from a vapor phase, and oxygen may be included at this stage. Oxidation represents a potential path for tuning its properties without introducing more exotic elements or extreme processing conditions. In this work, ab initio computational methods are used to examine the energy potentials and electronic properties of different extents of oxidation in GaN. Solid-state vibrational properties of Ga, GaN, Ga2O3, and a single substitutional oxygen defect have been studied using the harmonic approximation with supercells. A thermodynamic model is outlined which combines the results of ab initio calculations with data from experimental literature. This model allows free energies to be predicted for arbitrary reaction conditions within a wide process envelope. It is shown that complete oxidation is favorable for all industrially relevant conditions, while the formation of defects can be opposed by the use of high temperatures and a high N2:O2 ratio.
An algorithmic approach for clinical management of chronic spinal pain.
Manchikanti, Laxmaiah; Helm, Standiford; Singh, Vijay; Benyamin, Ramsin M; Datta, Sukdeb; Hayek, Salim M; Fellows, Bert; Boswell, Mark V
2009-01-01
Interventional pain management, and the interventional techniques which are an integral part of that specialty, are subject to widely varying definitions and practices. How interventional techniques are applied by various specialties is highly variable, even for the most common procedures and conditions. At the same time, many payors, publications, and guidelines are showing increasing interest in the performance and costs of interventional techniques. There is a lack of consensus among interventional pain management specialists with regards to how to diagnose and manage spinal pain and the type and frequency of spinal interventional techniques which should be utilized to treat spinal pain. Therefore, an algorithmic approach is proposed, providing a step-by-step procedure for managing chronic spinal pain patients based upon evidence-based guidelines. The algorithmic approach is developed based on the best available evidence regarding the epidemiology of various identifiable sources of chronic spinal pain. Such an approach to spinal pain includes an appropriate history, examination, and medical decision making in the management of low back pain, neck pain and thoracic pain. This algorithm also provides diagnostic and therapeutic approaches to clinical management utilizing case examples of cervical, lumbar, and thoracic spinal pain. An algorithm for investigating chronic low back pain without disc herniation commences with a clinical question, examination and imaging findings. If there is evidence of radiculitis, spinal stenosis, or other demonstrable causes resulting in radiculitis, one may proceed with diagnostic or therapeutic epidural injections. In the algorithmic approach, facet joints are entertained first in the algorithm because of their commonality as a source of chronic low back pain followed by sacroiliac joint blocks if indicated and provocation discography as the last step. Based on the literature, in the United States, in patients without disc
Stall Recovery Guidance Algorithms Based on Constrained Control Approaches
NASA Technical Reports Server (NTRS)
Stepanyan, Vahram; Krishnakumar, Kalmanje; Kaneshige, John; Acosta, Diana
2016-01-01
Aircraft loss of control, in particular approach to stall or fully developed stall, is a major factor contributing to aircraft safety risks, which emphasizes the need to develop algorithms capable of assisting pilots in identifying the problem and providing guidance to recover the aircraft. In this paper we present several stall recovery guidance algorithms, which run in the background without interfering with the flight control system or altering the pilot's actions. They use input- and state-constrained control methods to generate guidance signals, which are provided to the pilot in the form of visual cues. It is the pilot's decision to follow these signals. The algorithms are validated in a pilot-in-the-loop, medium-fidelity simulation experiment.
Modeling and Algorithmic Approaches to Constitutively-Complex, Microstructured Fluids
Miller, Gregory H.; Forest, Gregory
2011-12-22
We present a new multiscale model for complex fluids based on three scales: microscopic, kinetic, and continuum. We choose the microscopic level as Kramers' bead-rod model for polymers, which we describe as a system of stochastic differential equations with an implicit constraint formulation. The associated Fokker-Planck equation is then derived, and adiabatic elimination removes the fast momentum coordinates. Approached in this way, the kinetic level reduces to a dispersive drift equation. The continuum level is modeled with a finite volume Godunov-projection algorithm. We demonstrate computation of viscoelastic stress divergence using this multiscale approach.
NASA Astrophysics Data System (ADS)
Javad Kazemzadeh-Parsi, Mohammad; Daneshmand, Farhang; Ahmadfard, Mohammad Amin; Adamowski, Jan; Martel, Richard
2015-01-01
In the present study, an optimization approach based on the firefly algorithm (FA) is combined with a finite element simulation method (FEM) to determine the optimum design of pump and treat remediation systems. Three multi-objective functions in which pumping rate and clean-up time are design variables are considered and the proposed FA-FEM model is used to minimize operating costs, total pumping volumes and total pumping rates in three scenarios while meeting water quality requirements. The groundwater lift and contaminant concentration are also minimized through the optimization process. The obtained results show the applicability of the FA in conjunction with the FEM for the optimal design of groundwater remediation systems. The performance of the FA is also compared with the genetic algorithm (GA) and the FA is found to have a better convergence rate than the GA.
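The core firefly update (attractiveness decaying with distance, plus a damped random step) can be sketched on a stand-in cost function; the study itself couples each cost evaluation to an FEM groundwater simulation, which is omitted here:

```python
import math, random

def firefly(cost, dim=2, n=15, beta0=1.0, gamma=1.0, alpha=0.2,
            iters=100, seed=7):
    """Minimal firefly algorithm on a continuous cost function: each firefly
    moves toward every brighter (lower-cost) one, plus a small random step."""
    rng = random.Random(seed)
    xs = [[rng.uniform(-2, 2) for _ in range(dim)] for _ in range(n)]
    for _ in range(iters):
        costs = [cost(x) for x in xs]
        for i in range(n):
            for j in range(n):
                if costs[j] < costs[i]:            # j is brighter, so attract i
                    r2 = sum((a - b)**2 for a, b in zip(xs[i], xs[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    xs[i] = [a + beta*(b - a) + alpha*(rng.random() - 0.5)
                             for a, b in zip(xs[i], xs[j])]
            costs[i] = cost(xs[i])
        alpha *= 0.97                              # damp the random walk
    return min(xs, key=cost)

sphere = lambda x: sum(v*v for v in x)   # stand-in for the remediation cost
best = firefly(sphere)
```

In the paper, the design variables would be pumping rates and clean-up times rather than the toy coordinates used here, and the FA's convergence rate is what is compared against the GA.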
NASA Astrophysics Data System (ADS)
Aleardi, Mattia
2015-06-01
Predicting missing log data is a useful capability for geophysicists. Geophysical measurements in boreholes are frequently affected by gaps in the recording of one or more logs. In particular, sonic and shear sonic logs are often recorded over limited intervals along the well path, but the information these logs contain is crucial for many geophysical applications. Estimating missing log intervals from a set of recorded logs is therefore of great interest. In this work, I propose to estimate the data in missing parts of velocity logs using a genetic algorithm (GA) optimisation and I demonstrate that this method is capable of extracting linear or exponential relations that link the velocity to other available logs. The technique was tested on different sets of logs (gamma ray, resistivity, density, neutron, sonic and shear sonic) from three wells drilled in different geological settings and through different lithologies (sedimentary and intrusive rocks). The effectiveness of this methodology is demonstrated by a series of blind tests and by evaluating the correlation coefficients between the true versus predicted velocity values. The combination of GA optimisation with a Gibbs sampler (GS) and subsequent Monte Carlo simulations allows the uncertainties in the final predicted velocities to be reliably quantified. The GA method is also compared with the neural networks (NN) approach and classical multilinear regression. The comparisons show that the GA, NN and multilinear methods provide velocity estimates with the same predictive capability when the relation between the input logs and the seismic velocity is approximately linear. The GA and NN approaches are more robust when the relations are non-linear. However, in all cases, the main advantages of the GA optimisation procedure over the NN approach is that it directly provides an interpretable and simple equation that relates the input and predicted logs. Moreover, the GA method is not affected by the disadvantages
NASA Astrophysics Data System (ADS)
Konak, Abdullah
2014-01-01
This article presents a network design problem with relays considering the two-edge network connectivity. The problem arises in telecommunications and logistic networks where a constraint is imposed on the distance that a commodity can travel on a route without being processed by a relay, and the survivability of the network is critical in case of a component failure. The network design problem involves selecting two-edge disjoint paths between source and destination node pairs and determining the location of relays to minimize the network design cost. The formulated problem is solved by a hybrid approach of a genetic algorithm (GA) and a Lagrangian heuristic such that the GA searches for two-edge disjoint paths for each commodity, and the Lagrangian heuristic is used to determine relays on these paths. The performance of the proposed hybrid approach is compared to the previous approaches from the literature, with promising results.
A new distributed systems scheduling algorithm: a swarm intelligence approach
NASA Astrophysics Data System (ADS)
Haghi Kashani, Mostafa; Sarvizadeh, Raheleh; Jameii, Mahdi
2011-12-01
The scheduling problem in distributed systems is known to be NP-complete, and methods based on heuristic or metaheuristic search have been proposed to obtain optimal and suboptimal solutions. Task scheduling is a key factor in achieving better performance in distributed systems. In this paper, an efficient method based on a memetic algorithm is developed to solve the distributed systems scheduling problem. To balance load efficiently, Artificial Bee Colony (ABC) is applied as the local search in the proposed memetic algorithm. The proposed method is compared to an existing memetic-based approach in which the Learning Automata method is used as the local search. The results demonstrate that the proposed method outperforms the above-mentioned method in terms of communication cost.
Moghri, Mehdi; Omidi, Mostafa; Farahnakian, Masoud
2014-01-01
During the past decade, polymer nanocomposites attracted considerable investment in research and development worldwide. One of the key factors that affect the quality of polymer nanocomposite products in machining is surface roughness. To obtain high-quality products and reduce machining costs, it is very important to determine the optimal machining conditions so as to achieve enhanced machining performance. The objective of this paper is to develop a predictive model, using a combined design of experiments and artificial intelligence approach, for optimization of surface roughness in milling of polyamide-6 (PA-6) nanocomposites. A surface roughness predictive model was developed in terms of the milling parameters (spindle speed and feed rate) and nanoclay (NC) content using an artificial neural network (ANN). As the present study deals with a relatively small number of data points obtained from a full factorial design, application of a genetic algorithm (GA) for ANN training is thought to be an appropriate approach for developing an accurate and robust ANN model. In the optimization phase, a GA is considered in conjunction with the explicit nonlinear function derived from the ANN to determine the optimal milling parameters that minimize surface roughness for each PA-6 nanocomposite. PMID:24578636
An ab initio-based approach to the stability of GaN(0 0 0 1) surfaces under Ga-rich conditions
NASA Astrophysics Data System (ADS)
Ito, Tomonori; Akiyama, Toru; Nakamura, Kohji
2009-05-01
Structural stability of GaN(0 0 0 1) under Ga-rich conditions is systematically investigated by using our ab initio-based approach. The surface phase diagram for GaN(0 0 0 1) including (2×2) and pseudo-(1×1) is obtained as functions of temperature and Ga beam equivalent pressure by comparing chemical potentials of Ga atom in the gas phase with that on the surface. The calculated results reveal that the pseudo-(1×1) appearing below 684-973 K changes its structure to the (2×2) with Ga adatom at higher temperatures beyond 767-1078 K via the newly found (1×1) with two adlayers of Ga. These results are consistent with the stable temperature range of both the pseudo-(1×1) and (2×2) with Ga adatom obtained experimentally. Furthermore, it should be noted that the structure with another coverage of Ga adatoms between the (1×1) and (2×2)-Ga does not appear as a stable structure of GaN(0 0 0 1). Furthermore, ghost island formation observed by scanning tunneling microscopy is discussed on the basis of the phase diagram.
A genetic algorithm-based approach to flexible flow-line scheduling with variable lot sizes.
Lee, I; Sikora, R; Shaw, M J
1997-01-01
Genetic algorithms (GAs) have been used widely for such combinatorial optimization problems as the traveling salesman problem (TSP), the quadratic assignment problem (QAP), and job shop scheduling. In all of these problems there is usually a well-defined representation which GAs use to solve the problem. We present a novel approach for solving two related problems, lot sizing and sequencing, concurrently using GAs. The essence of our approach lies in using a unified representation for the information about both the lot sizes and the sequence, and in enabling GAs to evolve the chromosome by replacing primitive genes with good building blocks. In addition, a simulated annealing procedure is incorporated to further improve the performance. We evaluate the performance of this approach on flexible flow-line scheduling with variable lot sizes for an actual manufacturing facility, comparing it to such alternative approaches as pairwise exchange improvement, tabu search, and simulated annealing. The results show the efficacy of this approach for flexible flow-line scheduling. PMID:18255838
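A minimal sketch of the unified representation idea: each gene carries both a job identity and a lot size, so a single chromosome encodes the lot-sizing and sequencing decisions together. The two-machine line, demands, and processing times are invented for illustration, and a simple (1+1)-style mutation loop stands in for the paper's full GA with building-block replacement and simulated annealing.

```python
import random

random.seed(1)

# Demand per job and per-unit processing times on a two-machine flow line
# (illustrative numbers, not from the paper).
DEMAND = {"A": 6, "B": 4, "C": 5}
PROC = {"A": (2, 3), "B": (4, 1), "C": (3, 2)}   # (machine 1, machine 2)
SETUP = 5                                         # setup time per lot, each machine

def random_chromosome():
    """Unified representation: an ordered list of (job, lot_size) genes
    whose lot sizes sum to each job's demand."""
    genes = []
    for job, d in DEMAND.items():
        split = random.randint(1, d - 1) if d > 1 else d
        genes += [(job, split), (job, d - split)] if split < d else [(job, d)]
    random.shuffle(genes)
    return genes

def makespan(chrom):
    """Two-machine flow-shop recursion: each lot waits for both the previous
    lot on the same machine and its own completion upstream."""
    t1 = t2 = 0
    for job, lot in chrom:
        p1, p2 = PROC[job]
        t1 += SETUP + lot * p1
        t2 = max(t2, t1) + SETUP + lot * p2
    return t2

def mutate(chrom):
    """Either swap two lots (sequencing move) or shift one unit between two
    lots of the same job (lot-sizing move)."""
    c = list(chrom)
    if random.random() < 0.5:
        i, j = random.sample(range(len(c)), 2)
        c[i], c[j] = c[j], c[i]
    else:
        same = [(i, j) for i in range(len(c)) for j in range(len(c))
                if i != j and c[i][0] == c[j][0] and c[i][1] > 1]
        if same:
            i, j = random.choice(same)
            job = c[i][0]
            c[i] = (job, c[i][1] - 1)
            c[j] = (job, c[j][1] + 1)
    return c

# (1+1)-style evolutionary loop standing in for a full GA population.
best = random_chromosome()
for _ in range(500):
    cand = mutate(best)
    if makespan(cand) <= makespan(best):
        best = cand
print(makespan(best))
```

Both move types operate on the same chromosome, which is the point of the unified encoding: lot sizing and sequencing are explored in one search space rather than in alternating phases.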
NASA Astrophysics Data System (ADS)
Song, Kaishan; Li, Lin; Li, Shuai; Tedesco, Lenore; Hall, Bob; Li, Zuchuan
2012-08-01
Eagle Creek, Morse and Geist reservoirs, drinking water supply sources for the Indianapolis, Indiana, USA metropolitan region, are experiencing nuisance cyanobacterial blooms. Hyperspectral remote sensing has proven to be an effective tool for retrieving phycocyanin (C-PC) concentration, a proxy pigment unique to cyanobacteria in freshwater ecosystems. An adaptive model based on genetic algorithm and partial least squares (GA-PLS), together with a three-band algorithm (TBA) and other band-ratio algorithms, was applied to hyperspectral data acquired from in situ (ASD spectrometer) and airborne (AISA sensor) platforms. The results indicated that GA-PLS achieved high correlation between measured and estimated C-PC for GR (RMSE = 16.3 μg/L, RMSE% = 18.2; range (R): 2.6-185.1 μg/L), MR (RMSE = 8.7 μg/L, RMSE% = 15.6; R: 3.3-371.0 μg/L) and ECR (RMSE = 19.3 μg/L, RMSE% = 26.4; R: 0.7-245.0 μg/L) for the in situ datasets. TBA also performed well compared to other band-ratio algorithms due to its optimal band tuning process and the reduction of backscattering effects through the third band. The GA-PLS (GR: RMSE = 24.1 μg/L, RMSE% = 25.2, R: 25.2-185.1 μg/L; MR: RMSE = 15.7 μg/L, RMSE% = 37.4, R: 2.0-135.1 μg/L) and TBA (GR: RMSE = 28.3 μg/L, RMSE% = 30.1; MR: RMSE = 17.7 μg/L, RMSE% = 41.9) methods produced somewhat lower accuracy using AISA imagery data, which is likely due to atmospheric correction or radiometric resolution. GA-PLS (TBA) obtained an RMSE of 24.82 μg/L (35.8 μg/L) and an RMSE% of 31.24 (43.5) between measured and estimated C-PC for the aggregated datasets. C-PC maps were generated through GA-PLS using AISA imagery data. The C-PC concentration had an average value of 67.31 ± 44.23 μg/L in MR with a large range of concentrations, while GR had a higher average value of 103.17 ± 33.45 μg/L.
A genetic algorithm approach for assessing soil liquefaction potential based on reliability method
NASA Astrophysics Data System (ADS)
Bagheripour, M. H.; Shooshpasha, I.; Afzalirad, M.
2012-02-01
Deterministic approaches are unable to account for variations in soil strength properties and earthquake loads, or for sources of error in evaluating liquefaction potential in sandy soils, which makes them questionable compared with reliability-based concepts. Furthermore, deterministic approaches are incapable of precisely relating the probability of liquefaction to the factor of safety (FS). Therefore, probabilistic approaches, and especially reliability analysis, are considered, since a complementary solution is needed to reach better engineering decisions. In this study, the Advanced First-Order Second-Moment (AFOSM) technique, associated with a genetic algorithm (GA) and its corresponding optimization techniques, has been used to calculate the reliability index and the probability of liquefaction. The use of a GA provides a reliable mechanism suitable for computer programming and fast convergence. A new relation is developed here by which the liquefaction potential can be directly calculated from the estimated probability of liquefaction (PL), the cyclic stress ratio (CSR), and normalized standard penetration test (SPT) blow counts, with a mean error of less than 10% from the observational data. The validity of the proposed concept is examined through comparison of the results obtained by the new relation with those predicted by other investigators. A further advantage of the proposed relation is that it relates PL and FS, and hence provides the possibility of decision making based on liquefaction risk together with the use of deterministic approaches. This could be beneficial to geotechnical engineers who use the common FS methods for evaluation of liquefaction. As an application, the city of Babolsar, located on the southern coast of the Caspian Sea, is investigated for liquefaction potential. The investigation is based primarily on in situ tests in which the results of SPT are analysed.
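The reliability-index search can be illustrated with a GA that hunts for the design point: the point on the limit-state surface closest to the origin in standard normal space, whose distance is the reliability index β, with the failure probability Pf = Φ(−β). The linear limit state, penalty weight, and GA settings below are illustrative assumptions, not the paper's AFOSM formulation.

```python
import random, math

random.seed(2)

def g(u):
    """Hypothetical linear limit state in standard-normal space:
    failure when g(u) <= 0. Analytic beta = 3/sqrt(2)."""
    return 3.0 - u[0] - u[1]

def penalised_distance(u):
    # The design point lies on g(u) = 0; penalise deviation from the surface.
    return math.hypot(u[0], u[1]) + 10.0 * abs(g(u))

def ga(pop_size=60, gens=120):
    pop = [[random.uniform(-4, 4), random.uniform(-4, 4)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=penalised_distance)
        elite = pop[:pop_size // 3]
        pop = list(elite)
        while len(pop) < pop_size:
            a, b = random.sample(elite, 2)
            w = random.random()                              # blend crossover
            child = [w * a[k] + (1 - w) * b[k] for k in (0, 1)]
            child = [x + random.gauss(0, 0.1) for x in child]  # mutation
            pop.append(child)
    return min(pop, key=penalised_distance)

u_star = ga()
beta = math.hypot(*u_star)
p_f = 0.5 * math.erfc(beta / math.sqrt(2))   # Phi(-beta)
print("beta ~ %.3f, P_f ~ %.4f" % (beta, p_f))
```

The penalty term forces candidates onto the failure surface while the distance term pulls them toward the origin, so the GA converges on the design point without needing the gradients that classical AFOSM iteration requires.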
NASA Astrophysics Data System (ADS)
White, Ronald P.; Mayne, Howard R.
2000-05-01
An annealing schedule, T(t), is the temperature as a function of time whose goal is to bring a system from some initial low-order state to a final high-order state. We use the probability in the lowest energy level as the order parameter, so that an ideally annealed system would have all its population in its ground state. We consider a model system composed of discrete energy levels separated by activation barriers. We have carried out annealing calculations on this system for a range of system parameters. In particular, we considered the schedule as a function of the energy level spacing, of the height of the activation barriers, and, in some cases, as a function of the degeneracies of the levels. For a given set of physical parameters and maximum available time, tm, we were able to obtain the optimal schedule by using a genetic algorithm (GA) approach. For the two-level system, analytic solutions are available, and were compared with the GA-optimized results. The agreement was essentially exact. We were able to identify systematic behaviors of the schedules and trends in final probabilities as a function of parameters. We have also carried out Metropolis Monte Carlo (MMC) calculations on simple potential energy functions using the optimal schedules available from the model calculations. Agreement between the model and MMC calculations was excellent.
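A compact version of this optimization can be reproduced for a two-level system: integrate the master equation under a piecewise-constant schedule and let a GA evolve the segment temperatures to maximize the final ground-state population. The level spacing, barrier height, and rate prefactor below are illustrative, not the paper's parameters.

```python
import math, random

random.seed(7)

# Two-level model: ground state at 0, excited state at DELTA, barrier of
# height B above the ground state (illustrative parameters).
DELTA, B, NU = 1.0, 3.0, 1.0
N_SEG, T_TOTAL, DT = 8, 40.0, 0.05

def final_ground_pop(schedule):
    """Integrate the master equation for p0 (ground-state population)
    under a piecewise-constant temperature schedule."""
    p0 = 0.5
    steps_per_seg = int(T_TOTAL / N_SEG / DT)
    for temp in schedule:
        k_up = NU * math.exp(-B / temp)                # ground -> excited
        k_down = NU * math.exp(-(B - DELTA) / temp)    # excited -> ground
        for _ in range(steps_per_seg):
            p0 += DT * (k_down * (1 - p0) - k_up * p0)
    return p0

def ga(pop_size=30, gens=80):
    pop = [[random.uniform(0.1, 3.0) for _ in range(N_SEG)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=final_ground_pop, reverse=True)   # maximise order
        elite = pop[:10]
        pop = list(elite)
        while len(pop) < pop_size:
            a, b = random.sample(elite, 2)
            child = [random.choice(pair) for pair in zip(a, b)]  # uniform crossover
            k = random.randrange(N_SEG)
            child[k] = max(0.05, child[k] + random.gauss(0, 0.2))
            pop.append(child)
    return max(pop, key=final_ground_pop)

best = ga()
print("final ground-state population: %.3f" % final_ground_pop(best))
```

The evolved schedules show the trade-off the abstract describes: high temperatures equilibrate quickly but to a poor equilibrium, while very low temperatures freeze the dynamics before the population can reach the ground state.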
NASA Astrophysics Data System (ADS)
Jamshidi, Saeid; Boozarjomehry, Ramin Bozorgmehry; Pishvaie, Mahmoud Reza
2009-10-01
In pore network modeling, the void space of a rock sample is represented at the microscopic scale by a network of pores connected by throats. Construction of a reasonable representation of the geometry and topology of the pore space will lead to a reliable prediction of the properties of porous media. Recently, the theory of multi-cellular growth (or L-systems) has been used as a flexible tool for generation of pore network models which do not require any special information such as 2D SEM or 3D pore space images. In general, the networks generated by this method are irregular pore network models which are inherently closer to the complicated nature of the porous media than regular lattice networks. In this approach, the construction process is controlled only by the production rules that govern the development process of the network. In this study, a genetic algorithm has been used to obtain the optimum values of the uncertain parameters of these production rules to build an appropriate irregular network capable of predicting both static and hydraulic properties of the target porous medium.
Algorithmic approaches to protein-protein interaction site prediction.
Aumentado-Armstrong, Tristan T; Istrate, Bogdan; Murgita, Robert A
2015-01-01
Interaction sites on protein surfaces mediate virtually all biological activities, and their identification holds promise for disease treatment and drug design. Novel algorithmic approaches for the prediction of these sites have been produced at a rapid rate, and the field has seen significant advancement over the past decade. However, the most current methods have not yet been reviewed in a systematic and comprehensive fashion. Herein, we describe the intricacies of the biological theory, datasets, and features required for modern protein-protein interaction site (PPIS) prediction, and present an integrative analysis of the state-of-the-art algorithms and their performance. First, the major sources of data used by predictors are reviewed, including training sets, evaluation sets, and methods for their procurement. Then, the features employed and their importance in the biological characterization of PPISs are explored. This is followed by a discussion of the methodologies adopted in contemporary prediction programs, as well as their relative performance on the datasets most recently used for evaluation. In addition, the potential utility that PPIS identification holds for rational drug design, hotspot prediction, and computational molecular docking is described. Finally, an analysis of the most promising areas for future development of the field is presented. PMID:25713596
Approach to programming multiprocessing algorithms on the Denelcor HEP
Lusk, E.L.; Overbeek, R.A.
1983-12-01
In the process of learning how to write code for the Denelcor HEP, we have developed an approach that others may well find useful. We believe that the basic synchronization primitives of the HEP (i.e., asynchronous variables), along with the prototypical patterns for their use given in the HEP FORTRAN 77 User's Guide, form too low-level a conceptual basis for the formulation of multiprocessing algorithms. We advocate the use of monitors, which can be easily implemented using the HEP primitives. Attempts to solve substantial problems without introducing higher-level constructs such as monitors can produce code that is unreliable, unintelligible, and restricted to the specific dialect of FORTRAN currently supported on the HEP. Our experience leads us to believe that solutions which are both clear and efficient can be formulated using monitors.
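The monitor idea translates directly to modern threading libraries. The sketch below, in Python rather than HEP FORTRAN, shows a bounded counter as a monitor: all state changes happen under one lock, and threads coordinate through condition variables instead of raw low-level primitives.

```python
import threading

class BoundedCounter:
    """A monitor: the shared state (the counter) is only touched while
    holding the lock, and waiting/signalling goes through condition
    variables rather than low-level synchronization primitives."""
    def __init__(self, limit):
        self._lock = threading.Lock()
        self._not_full = threading.Condition(self._lock)
        self._not_empty = threading.Condition(self._lock)
        self._value = 0
        self._limit = limit

    def increment(self):
        with self._lock:
            while self._value >= self._limit:   # re-check after every wake-up
                self._not_full.wait()
            self._value += 1
            self._not_empty.notify()

    def decrement(self):
        with self._lock:
            while self._value <= 0:
                self._not_empty.wait()
            self._value -= 1
            self._not_full.notify()

counter = BoundedCounter(limit=2)
producers = [threading.Thread(target=counter.increment) for _ in range(50)]
consumers = [threading.Thread(target=counter.decrement) for _ in range(50)]
for t in producers + consumers:
    t.start()
for t in producers + consumers:
    t.join()
print(counter._value)   # balanced increments and decrements leave 0
```

The `while` loops around each `wait()` are the essence of the monitor discipline the abstract advocates: correctness does not depend on which thread a notification happens to wake.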
A genetic algorithmic approach to antenna null-steering using a cluster computer.
NASA Astrophysics Data System (ADS)
Recine, Greg; Cui, Hong-Liang
2001-06-01
We apply a genetic algorithm (GA) to the problem of electronically steering the maxima and nulls of an antenna array to desired positions (null toward an enemy listener/jammer, maximum toward a friendly listener/transmitter). The antenna pattern itself is computed using NEC2, which is called by the main GA program. Since a GA naturally lends itself to parallelization, this simulation was run on our new twin 64-node cluster computers (Gemini). Design issues and uses of the Gemini cluster in our group are also discussed.
First-principles approach to investigate toroidal property of magnetoelectric multiferroic GaFeO3
NASA Astrophysics Data System (ADS)
Nie, Yung-mau
2016-01-01
A first-principles approach incorporating the concept of toroidal moments as a measure of the spin vortex is proposed and applied to simulate the toroidization of magnetoelectric multiferroic GaFeO3. The nature of the space-inversion and time-reversal violations of ferrotoroidics is reproduced in the simulated magnetic structure of GaFeO3. For undoped GaFeO3, a toroidal moment of -22.38 μB Å per unit cell was obtained, which is the best theoretical estimate to date. Guided by the spin-vortex free-energy minimization perturbed by an externally applied field, it was discovered that the minority spin markedly biases the whole toroidization. In summary, this approach not only calculates the toroidal moment but provides a way to understand the toroidal nature of magnetoelectric multiferroics.
Investigation of new approaches for InGaN growth with high indium content for CPV application
Arif, Muhammad; Salvestrini, Jean Paul; Sundaram, Suresh; Streque, Jérémy; Gmili, Youssef El; Puybaret, Renaud; Voss, Paul L.; Belahsene, Sofiane; Ramdane, Abderahim; Martinez, Anthony; Patriarche, Gilles; Fix, Thomas; Slaoui, Abdelillah; Ougazzaden, Abdallah
2015-09-28
We propose two new approaches that may overcome the issues of phase separation and high dislocation density in InGaN-based PIN solar cells. The first approach consists of growing a thick multi-layered InGaN/GaN absorber. The periodic insertion of thin GaN interlayers should absorb the In excess and relieve compressive strain. The InGaN layers need to be thin enough to remain fully strained and free of phase separation. The second approach consists of growing InGaN nanostructures to achieve high-In-content thick InGaN layers. It eliminates the pre-existing dislocations in the underlying template and allows strain relaxation of the InGaN layers without any dislocations, leading to higher In incorporation and a reduced piezoelectric effect. The two approaches lead to structural, morphological, and luminescence properties that are significantly improved compared with those of thick InGaN layers. Corresponding full PIN structures have been realized by growing a p-type GaN layer on top of the half-PIN structures. External quantum efficiency, electroluminescence, and photocurrent characterizations have been carried out on the different structures and reveal an enhancement of the performance of the InGaN PIN PV cells when the thick InGaN layer is replaced by either an InGaN/GaN multi-layered or an InGaN nanorod layer.
A simple algorithm to compute the peak power output of GaAs/Ge solar cells on the Martian surface
Glueck, P.R.; Bahrami, K.A.
1995-12-31
The Jet Propulsion Laboratory's (JPL's) Mars Pathfinder Project will deploy a robotic "microrover" on the surface of Mars in the summer of 1997. This vehicle will derive primary power from a GaAs/Ge solar array during the day and will "sleep" at night. This strategy requires that the rover be able to (1) determine when it is necessary to save the contents of volatile memory late in the afternoon and (2) determine when sufficient power is available to resume operations in the morning. An algorithm was developed that estimates the peak power point of the solar array from the solar array short-circuit current and temperature telemetry, and provides functional redundancy for both measurements using the open-circuit voltage telemetry. The algorithm minimizes vehicle processing and memory utilization by using linear equations instead of look-up tables to estimate peak power with very little loss in accuracy. This paper describes the method used to obtain the algorithm and presents the detailed algorithm design.
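The shape of such a look-up-table-free estimator is easy to sketch. All coefficients below are hypothetical placeholders, not the JPL values; they only illustrate estimating peak power linearly from short-circuit current and temperature, with open-circuit voltage as a functionally redundant backup.

```python
# Hypothetical linearised peak-power model in the spirit of the Pathfinder
# rover algorithm: P_mp is estimated from the array short-circuit current
# I_sc and temperature T with linear equations instead of look-up tables.
# All coefficients below are illustrative assumptions, not JPL values.

K_P = 14.0      # W of peak power per A of short-circuit current at T_REF
K_T = -0.002    # fractional power change per deg C above T_REF (GaAs-like)
T_REF = 25.0    # reference cell temperature, deg C

def peak_power(i_sc, temp_c):
    """Linear estimate of the array's maximum power point."""
    return K_P * i_sc * (1.0 + K_T * (temp_c - T_REF))

def peak_power_from_voc(v_oc, temp_c, k=0.45):
    """Functionally redundant backup estimate: approximate I_sc from the
    open-circuit voltage telemetry via an assumed linear factor k."""
    return peak_power(k * v_oc, temp_c)

# A cold Martian morning yields slightly more power for the same current.
print(round(peak_power(1.0, -40.0), 2), round(peak_power(1.0, 25.0), 2))
```

Two multiplications and an addition per estimate is why this form suits a processor- and memory-constrained rover better than interpolating I-V tables.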
Newton Algorithms for Analytic Rotation: An Implicit Function Approach
ERIC Educational Resources Information Center
Boik, Robert J.
2008-01-01
In this paper implicit function-based parameterizations for orthogonal and oblique rotation matrices are proposed. The parameterizations are used to construct Newton algorithms for minimizing differentiable rotation criteria applied to "m" factors and "p" variables. The speed of the new algorithms is compared to that of existing algorithms and to…
Development of a Compound Optimization Approach Based on Imperialist Competitive Algorithm
NASA Astrophysics Data System (ADS)
Wang, Qimei; Yang, Zhihong; Wang, Yong
In this paper, an improved approach is developed for the imperialist competitive algorithm to achieve greater performance. The Nelder-Mead simplex method is applied to execute alternately with the original procedures of the algorithm. The approach is tested on twelve widely used benchmark functions and is also compared with other related studies. It is shown that the proposed approach has a faster convergence rate, better search ability, and higher stability than the original algorithm and other related methods.
Peng, Bin; Liu, Ke-ling; Li, Zhi-min; Wang, Yue-song; Huang, Tu-jiang
2002-06-01
A genetic algorithm (GA) is used for automatic qualitative analysis with a sequential inductively coupled plasma spectrometer (ICP-AES), and a computer program is developed in this paper. No standard samples are needed, and spectroscopic interferences can be eliminated. All elements of an unknown sample, and their concentration ranges, can be reported. The replication rate Pr, crossover rate Pc, and mutation rate of the genetic algorithm were set to 0.6, 0.4, and 0, respectively. The analytical results of the GA are in good agreement with the reference values. This indicates that, combined with the intensity information, the GA can be applied to spectroscopic qualitative analysis and, after further work, is expected to become an effective method for qualitative analysis in ICP-AES. PMID:12938334
Approach to complex upper extremity injury: an algorithm.
Ng, Zhi Yang; Askari, Morad; Chim, Harvey
2015-02-01
Patients with complex upper extremity injuries represent a unique subset of the trauma population. In addition to extensive soft tissue defects affecting the skin, bone, muscles and tendons, or the neurovasculature in various combinations, there is usually concomitant involvement of other body areas and organ systems, with the potential for systemic compromise due to the underlying mechanism of injury and resultant sequelae. In turn, this has a direct impact on the definitive reconstructive plan. Accurate assessment and expedient treatment are thus necessary to achieve optimal surgical outcomes with the primary goal of limb salvage and functional restoration. Nonetheless, the characteristics of these injuries place such patients at an increased risk of complications ranging from limb ischemia, recalcitrant infections, failure of bony union, and intractable pain to, most devastatingly, limb amputation. In this article, the authors present an algorithmic approach toward complex injuries of the upper extremity, with due consideration for the various reconstructive modalities and timing of definitive wound closure for the best possible clinical outcomes. PMID:25685098
Lymphocytic panniculitis: an algorithmic approach to lymphocytes in subcutaneous tissue.
Shiau, Carolyn J; Abi Daoud, Marie S; Wong, Se Mang; Crawford, Richard I
2015-12-01
The diagnosis of panniculitis is a relatively rare occurrence for many practising pathologists. The smaller subset of lymphocyte-predominant panniculitis is further complicated by the diagnostic consideration of T cell lymphoma involving the subcutaneous tissue, mimicking inflammatory causes of panniculitis. Accurate classification of the panniculitis is crucial to direct clinical management, as treatment options may vary from non-medical therapy to immunosuppressive agents to aggressive chemotherapy. Many diseases show significant overlap in clinical and histological features, making the process of determining a specific diagnosis very challenging. However, with an adequate biopsy including skin and deep subcutaneous tissue, a collaborative effort between clinician and pathologist can often lead to a specific diagnosis. This review provides an algorithmic approach to the diagnosis of lymphocyte-predominant panniculitis, including entities of septal-predominant pattern panniculitis (erythema nodosum, deep necrobiosis lipoidica, morphea profunda and sclerosing panniculitis) and lobular-predominant pattern panniculitis (lupus erythematosus panniculitis/lupus profundus, subcutaneous panniculitis-like T cell lymphoma, cutaneous γ-δ T cell lymphoma, Borrelia infection and cold panniculitis). PMID:26602413
A set-membership approach to blind channel equalization algorithm
NASA Astrophysics Data System (ADS)
Li, Yue-ming
2013-03-01
The constant modulus algorithm (CMA) has low computational complexity but suffers from slow convergence and possible convergence to local minima; the affine projection version of the CMA family (AP-CMA) alleviates the speed limitations of the CMA. However, computational complexity has been a weak point in the implementation of AP-CMA. To reduce the computational complexity of the adaptive filtering algorithm, a new AP-CMA algorithm based on set membership (SM-AP-CMA) is proposed. The new algorithm combines a bounded error specification on the adaptive filter with the concept of data reuse. Simulations confirmed that the convergence rate of the proposed algorithm is significantly faster; meanwhile, the excess mean square error is maintained at a relatively low level, with a substantial reduction in the number of updates compared with its conventional counterpart.
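The set-membership idea can be sketched on the plain CMA: compute the constant-modulus error each sample, but update the taps only when the error magnitude exceeds a bound, so most samples cost no adaptation. The channel, bound, and step size below are illustrative; this is the CMA with a set-membership test, not the paper's full affine projection variant.

```python
import cmath, random

random.seed(3)

R2 = 1.0        # CMA dispersion constant (QPSK has unit modulus)
GAMMA = 0.2     # set-membership bound: adapt only when the CM error exceeds it
MU = 0.005      # step size
N_TAPS = 7

def qpsk():
    return cmath.exp(1j * (random.randrange(4) * cmath.pi / 2 + cmath.pi / 4))

channel = [1.0, 0.4, 0.1]                 # mild inter-symbol interference
taps = [0j] * N_TAPS
taps[N_TAPS // 2] = 1.0 + 0j              # centre-spike initialisation

sent, received = [], []
updates = 0
for n in range(3000):
    sent.append(qpsk())
    r = sum(channel[k] * sent[n - k] for k in range(min(len(channel), n + 1)))
    received.append(r)
    if n < N_TAPS:
        continue
    x = received[n - N_TAPS + 1:n + 1][::-1]    # regressor, newest first
    y = sum(t * xi for t, xi in zip(taps, x))   # equalizer output
    e = (abs(y) ** 2 - R2) * y                  # CMA error term
    if abs(abs(y) ** 2 - R2) > GAMMA:           # set-membership test
        taps = [t - MU * e * xi.conjugate() for t, xi in zip(taps, x)]
        updates += 1

print("updates used: %d of %d" % (updates, 3000 - N_TAPS))
```

Once the output modulus stays inside the error bound, the tap vector is already a member of the feasibility set and no update is spent, which is the source of the complexity reduction the abstract reports.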
NASA Technical Reports Server (NTRS)
Jenkins, Phillip P.; Hepp, Aloysius F.; Power, Michael B.; Macinnes, Andrew N.; Barron, Andrew R.
1993-01-01
A two order-of-magnitude enhancement of photoluminescence intensity relative to untreated GaAs has been observed for GaAs surfaces coated with chemical vapor-deposited GaS. The increase in photoluminescence intensity can be viewed as an effective reduction in surface recombination velocity and/or band bending. The gallium sulfide cubane [(t-Bu)GaS]4 was used as a single-source precursor for the deposition of GaS thin films. The cubane core of the structurally characterized precursor is retained in the deposited film, producing a cubic phase. Furthermore, near-epitaxial growth is observed for the GaS passivating layer. Films were characterized by transmission electron microscopy, X-ray powder diffraction, and X-ray photoelectron and Rutherford backscattering spectroscopies.
A Functional Programming Approach to AI Search Algorithms
ERIC Educational Resources Information Center
Panovics, Janos
2012-01-01
The theory and practice of search algorithms related to state-space represented problems form the major part of the introductory course of Artificial Intelligence at most of the universities and colleges offering a degree in the area of computer science. Students usually meet these algorithms only in some imperative or object-oriented language…
Planning additional drilling campaign using two-space genetic algorithm: A game theoretical approach
NASA Astrophysics Data System (ADS)
Kumral, Mustafa; Ozer, Umit
2013-03-01
Grade and tonnage are the most important technical uncertainties in mining ventures because of the use of estimations/simulations, which are mostly generated from drill data. Open pit mines are planned and designed on the basis of the blocks representing the entire orebody. Each block has a different estimation/simulation variance, reflecting uncertainty to some extent. The estimation/simulation realizations are submitted to the mine production scheduling process. However, the use of a block model with varying estimation/simulation variances will lead to serious risk in the scheduling. Given multiple simulations, the dispersion variances of blocks can be used to account for technical uncertainties; however, the dispersion variance cannot handle the uncertainty associated with varying estimation/simulation variances of blocks. This paper proposes an approach that generates the configuration of the best additional drilling campaign so as to produce more homogeneous estimation/simulation variances of blocks. In other words, the objective is to find the drilling configuration that minimizes grade uncertainty under a budget constraint. The uncertainty measure of the optimization process in this paper is the interpolation variance, which considers data locations and grades. The problem is expressed as a minmax problem, which focuses on finding the best worst-case performance, i.e., minimizing the interpolation variance of the block generating the maximum interpolation variance. Since the optimization model requires computing the interpolation variances of blocks being simulated/estimated in each iteration, the problem cannot be solved by standard optimization tools. This motivates the use of a two-space genetic algorithm (GA) approach to solve the problem. The technique has two spaces: feasible drill-hole configurations, with minimization of interpolation variance, and drill-hole simulations, with maximization of interpolation variance. The two spaces interact to find a minmax solution.
Rafiei, Hamid; Khanzadeh, Marziyeh; Mozaffari, Shahla; Bostanifar, Mohammad Hassan; Avval, Zhila Mohajeri; Aalizadeh, Reza; Pourbasheer, Eslam
2016-01-01
Quantitative structure-activity relationship (QSAR) study has been employed for predicting the inhibitory activities of Hepatitis C virus (HCV) NS5B polymerase inhibitors. A data set consisting of 72 compounds was selected, and different types of molecular descriptors were calculated. The whole data set was split into a training set (80% of the dataset) and a test set (20% of the dataset) using principal component analysis. The stepwise (SW) and genetic algorithm (GA) techniques were used as variable selection tools. The multiple linear regression method was then used to linearly correlate the selected descriptors with inhibitory activities. Several validation techniques, including leave-one-out and leave-group-out cross-validation and Y-randomization, were used to evaluate the internal capability of the derived models. The external prediction ability of the derived models was further analyzed using modified r2 and concordance correlation coefficient values and the Golbraikh and Tropsha acceptable-model criteria. Based on the derived results (GA-MLR), some new insights toward the molecular structural requirements for obtaining better inhibitory activity were obtained. PMID:27065774
Flower pollination algorithm: A novel approach for multiobjective optimization
NASA Astrophysics Data System (ADS)
Yang, Xin-She; Karamanoglu, Mehmet; He, Xingshi
2014-09-01
Multiobjective design optimization problems require multiobjective optimization techniques to solve, and it is often very challenging to obtain high-quality Pareto fronts accurately. In this article, the recently developed flower pollination algorithm (FPA) is extended to solve multiobjective optimization problems. The proposed method is used to solve a set of multiobjective test functions and two bi-objective design benchmarks, and a comparison of the proposed algorithm with other algorithms has been made, which shows that the FPA is efficient with a good convergence rate. Finally, the importance for further parametric studies and theoretical analysis is highlighted and discussed.
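The single-objective core of the FPA is compact: with switch probability p, a flower moves toward the global best by a Lévy-distributed step (global pollination); otherwise it moves along the difference of two random flowers (local pollination). The sketch below applies it to a sphere benchmark; the parameters are typical illustrative choices, not those of the article.

```python
import math, random

random.seed(4)

DIM, POP, GENS, P_SWITCH = 4, 25, 300, 0.8

def sphere(x):                       # benchmark objective to minimise
    return sum(v * v for v in x)

def levy_step(beta=1.5):
    """Mantegna's algorithm for a Levy-distributed step length."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta *
              2 ** ((beta - 1) / 2))) ** (1 / beta)
    u, v = random.gauss(0, sigma), random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

flowers = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(POP)]
best = min(flowers, key=sphere)

for _ in range(GENS):
    for i, x in enumerate(flowers):
        if random.random() < P_SWITCH:            # global pollination
            L = levy_step()
            cand = [xi + 0.1 * L * (bi - xi) for xi, bi in zip(x, best)]
        else:                                     # local pollination
            j, k = random.sample(range(POP), 2)
            eps = random.random()
            cand = [xi + eps * (flowers[j][d] - flowers[k][d])
                    for d, xi in enumerate(x)]
        if sphere(cand) < sphere(x):              # greedy replacement
            flowers[i] = cand
    best = min(min(flowers, key=sphere), best, key=sphere)

print("best objective: %.6f" % sphere(best))
```

The multiobjective extension in the article builds on this same update loop; the heavy-tailed Lévy steps occasionally fire long jumps that help the population escape poor regions.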
Genetic algorithms approach for the extraction of the polygonal approximation of planar objects
NASA Astrophysics Data System (ADS)
Erives, Hector; Parra-Loera, Ramon
1996-06-01
A new approach to the extraction of the polygonal approximation is presented. The method obtains a smaller set of the important features by means of an evolutionary algorithm. A genetic approach with some heuristics improves the contour-approximation search by starting a parallel search at various points on the contour. The algorithm encodes a polygonal approximation as a chromosome and evolves it to provide a better polygonal approximation. Experimental results are provided.
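A minimal version of this encoding: a binary chromosome marks which contour points become polygon vertices, and fitness combines the worst point-to-edge distance with a vertex-count penalty. The noisy square contour and GA settings below are invented for illustration.

```python
import random, math

random.seed(6)

# A noisy closed contour sampled from a unit square (illustrative data).
CONTOUR = []
for side in range(4):
    for t in range(10):
        u = t / 10
        x, y = [(u, 0), (1, u), (1 - u, 1), (0, 1 - u)][side]
        CONTOUR.append((x + random.uniform(-0.01, 0.01),
                        y + random.uniform(-0.01, 0.01)))

def seg_dist(p, a, b):
    """Distance from point p to the segment ab."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    t = 0 if dx == dy == 0 else max(0, min(1, ((px - ax) * dx + (py - ay) * dy)
                                              / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def fitness(mask):
    """Worst approximation error of the selected vertices plus a
    vertex-count penalty, so few, well-placed vertices are preferred."""
    verts = [i for i, bit in enumerate(mask) if bit]
    if len(verts) < 3:
        return 1e9
    err = 0.0
    for k in range(len(verts)):
        a, b = verts[k], verts[(k + 1) % len(verts)]
        span = range(a, b) if a < b else \
            list(range(a, len(mask))) + list(range(0, b))
        for i in span:
            err = max(err, seg_dist(CONTOUR[i], CONTOUR[a], CONTOUR[b]))
    return err + 0.01 * len(verts)

pop = [[random.random() < 0.3 for _ in CONTOUR] for _ in range(30)]
for _ in range(150):
    pop.sort(key=fitness)
    keep = pop[:10]                              # elitist selection
    pop = list(keep)
    while len(pop) < 30:
        a, b = random.sample(keep, 2)
        cut = random.randrange(len(CONTOUR))
        child = a[:cut] + b[cut:]                # one-point crossover
        i = random.randrange(len(CONTOUR))
        child[i] = not child[i]                  # bit-flip mutation
        pop.append(child)

best = min(pop, key=fitness)
print("vertices kept:", sum(best), "fitness: %.3f" % fitness(best))
```

On this contour the GA tends to keep vertices near the four corners, since dropping a corner point sharply raises the worst-case distance term.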
Motion Cueing Algorithm Development: Human-Centered Linear and Nonlinear Approaches
NASA Technical Reports Server (NTRS)
Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.
2005-01-01
While the performance of flight simulator motion system hardware has advanced substantially, the development of the motion cueing algorithm, the software that transforms simulated aircraft dynamics into realizable motion commands, has not kept pace. Prior research identified viable features from two algorithms: the nonlinear "adaptive algorithm", and the "optimal algorithm" that incorporates human vestibular models. A novel approach to motion cueing, the "nonlinear algorithm" is introduced that combines features from both approaches. This algorithm is formulated by optimal control, and incorporates a new integrated perception model that includes both visual and vestibular sensation and the interaction between the stimuli. Using a time-varying control law, the matrix Riccati equation is updated in real time by a neurocomputing approach. Preliminary pilot testing resulted in the optimal algorithm incorporating a new otolith model, producing improved motion cues. The nonlinear algorithm vertical mode produced a motion cue with a time-varying washout, sustaining small cues for longer durations and washing out large cues more quickly compared to the optimal algorithm. The inclusion of the integrated perception model improved the responses to longitudinal and lateral cues. False cues observed with the NASA adaptive algorithm were absent. The neurocomputing approach was crucial in that the number of presentations of an input vector could be reduced to meet the real time requirement without degrading the quality of the motion cues.
NASA Astrophysics Data System (ADS)
Lindsay, Anthony; McCloskey, John; Simão, Nuno; Murphy, Shane; Bhloscaidh, Mairead Nic
2014-05-01
Identifying fault sections where slip deficits have accumulated may provide a means for understanding sequences of large megathrust earthquakes. Stress accumulated during the interseismic period on an active megathrust is stored as potential slip, referred to as slip deficit, along locked sections of the fault. Analysis of the spatial distribution of slip during antecedent events along the fault will show where the locked plate has spent its stored slip; areas of unreleased slip indicate where the potential for large events remains. The locations of recent earthquakes and their distributions of slip can be estimated from instrumentally recorded seismic and geodetic data. However, long-term slip-deficit modelling requires detailed information on the size and distribution of slip for pre-instrumental events over hundreds of years, covering more than one 'seismic cycle'. This requires the exploitation of proxy sources of data. Coral microatolls, growing in the intertidal zone of the outer island arc of the Sunda trench, present the possibility of reconstructing slip for a number of pre-instrumental earthquakes. Their growth is influenced by tectonic flexing of the continental plate beneath them; they act as long-term recorders of the vertical component of deformation. However, the sparse distribution of data available using coral geodesy results in an underdetermined problem with non-unique solutions. Rather than accepting any one realisation as the definitive model satisfying the coral displacement data, a Monte Carlo approach identifies a suite of models consistent with the observations. Using a genetic algorithm to accelerate the identification of desirable models, we have developed a Monte Carlo Slip Estimator-Genetic Algorithm (MCSE-GA) which exploits the full range of uncertainty associated with the displacements. Each iteration of the MCSE-GA samples different values from within the spread of uncertainties associated with each coral displacement.
A Decision Theoretic Approach to Evaluate Radiation Detection Algorithms
Nobles, Mallory A.; Sego, Landon H.; Cooley, Scott K.; Gosink, Luke J.; Anderson, Richard M.; Hays, Spencer E.; Tardiff, Mark F.
2013-07-01
There are a variety of sensor systems deployed at U.S. border crossings and ports of entry that scan for illicit nuclear material. In this work, we develop a framework for comparing the performance of detection algorithms that interpret the output of these scans and determine when secondary screening is needed. We optimize each algorithm to minimize its risk, or expected loss. We measure an algorithm’s risk by considering its performance over a sample, the probability distribution of threat sources, and the consequence of detection errors. While it is common to optimize algorithms by fixing one error rate and minimizing another, our framework allows one to simultaneously consider multiple types of detection errors. Our framework is flexible and easily adapted to many different assumptions regarding the probability of a vehicle containing illicit material, and the relative consequences of false positive and false negative errors. Our methods can therefore inform decision makers of the algorithm family and parameter values that best reduce the threat from illicit nuclear material, given their understanding of the environment at any point in time. To illustrate the applicability of our methods, in this paper we compare the risk from two families of detection algorithms and discuss the policy implications of our results.
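The risk framework above can be sketched in a few lines: pick the alarm threshold that minimizes expected loss, weighing both error types at once rather than fixing one rate. The error-rate curves, threat prior, and costs below are hypothetical toy values, not the paper's.

```python
import math

# Toy error-rate models for a threshold-based detector (illustrative only):
# raising the alarm threshold trades false alarms for misses.
def false_alarm_rate(t):
    return math.exp(-2.0 * t)

def miss_rate(t):
    return 1.0 - math.exp(-0.5 * t)

def risk(t, p_threat=1e-4, cost_fn=1e6, cost_fp=100.0):
    """Expected loss = P(threat)*P(miss)*C_fn + P(benign)*P(false alarm)*C_fp."""
    return (p_threat * miss_rate(t) * cost_fn
            + (1.0 - p_threat) * false_alarm_rate(t) * cost_fp)

# choose the threshold minimizing risk over a candidate grid
thresholds = [i / 100.0 for i in range(1, 500)]
t_star = min(thresholds, key=risk)
```

Changing the assumed threat prior or the cost ratio moves `t_star`, which is exactly the sensitivity the decision-theoretic framing exposes.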
A genetic algorithm approach in interface and surface structure optimization
Zhang, Jian
2010-01-01
The thesis is divided into two parts. In the first part a global optimization method is developed for interface and surface structure optimization. Two prototype systems are chosen to be studied. One is Si[001] symmetric tilt grain boundaries and the other is the Ag/Au-induced Si(111) surface. It is found that the Genetic Algorithm is very efficient in finding the lowest energy structures in both cases. Not only can existing structures from the experiments be reproduced, but many new structures can also be predicted using the Genetic Algorithm. Thus it is shown that the Genetic Algorithm is an extremely powerful tool for materials structure prediction. The second part of the thesis is devoted to the explanation of an experimental observation of thermal radiation from three-dimensional tungsten photonic crystal structures. The experimental results seem astounding and confusing, yet the theoretical models in the paper reveal the physical insight behind the phenomena and reproduce the experimental results well.
Ocean feature recognition using genetic algorithms with fuzzy fitness functions (GA/F3)
NASA Technical Reports Server (NTRS)
Ankenbrandt, C. A.; Buckles, B. P.; Petry, F. E.; Lybanon, M.
1990-01-01
A model for genetic algorithms with semantic nets is derived, in which the relationships between concepts are depicted as a semantic net. An organism represents the manner in which objects in a scene are attached to concepts in the net. Predicates between object pairs are continuous-valued truth functions in the form of an inverse exponential function (e^(-β|x|)). 1:n relationships are combined via the fuzzy OR (max(...)). Finally, predicates between pairs of concepts are resolved by taking the average of the combined predicate values of the objects attached to the concept at the tail of the arc representing the predicate in the semantic net. The method is illustrated by applying it to the identification of oceanic features in the North Atlantic.
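A minimal sketch of the fuzzy-fitness ingredients described above: an inverse-exponential truth function, fuzzy OR via max, and averaging over the objects attached to a concept. The β value and sample distances are illustrative, not from the paper.

```python
import math

def truth(x, beta=1.0):
    """Continuous-valued truth of a predicate: e^(-beta * |x|)."""
    return math.exp(-beta * abs(x))

def fuzzy_or(values):
    """1:n relationships are combined via the fuzzy OR (max)."""
    return max(values)

def concept_predicate(object_truths):
    """Average the combined predicate values over objects at the arc tail."""
    return sum(object_truths) / len(object_truths)

# one concept with two attached objects, each with 1:n relations
obj_a = fuzzy_or([truth(0.2), truth(1.5)])  # best of two relations
obj_b = fuzzy_or([truth(0.0)])              # perfect match -> 1.0
score = concept_predicate([obj_a, obj_b])
```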
Test Scheduling for Core-Based SOCs Using Genetic Algorithm Based Heuristic Approach
NASA Astrophysics Data System (ADS)
Giri, Chandan; Sarkar, Soumojit; Chattopadhyay, Santanu
This paper presents a Genetic Algorithm (GA) based solution to co-optimize test scheduling and wrapper design for core-based SOCs. Core testing solutions are generated as a set of wrapper configurations, represented as rectangles with width equal to the number of TAM (Test Access Mechanism) channels and height equal to the corresponding testing time. A locally optimal best-fit heuristic based bin-packing algorithm has been used to determine the placement of rectangles minimizing the overall test time, whereas the GA has been utilized to generate the sequence of rectangles to be considered for placement. Experimental results on ITC'02 benchmark SOCs show that the proposed method provides better solutions compared to the recent works reported in the literature.
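The decoding step described above can be sketched as follows: the GA supplies an ordering of cores, and a best-fit heuristic stacks each core's rectangle (TAM width × test time) where it finishes earliest. This is an illustrative simplification assuming contiguous channel assignment, not the authors' implementation.

```python
def decode(order, rects, tam_width):
    """Place rectangles in GA-given order; return overall test time.

    rects[i] = (width in TAM channels, height = test time of core i).
    """
    free = [0.0] * tam_width           # when each TAM channel becomes free
    for idx in order:
        w, h = rects[idx]
        # best fit: contiguous channel window with the earliest start time
        best_start, best_pos = None, 0
        for pos in range(tam_width - w + 1):
            start = max(free[pos:pos + w])
            if best_start is None or start < best_start:
                best_start, best_pos = start, pos
        for ch in range(best_pos, best_pos + w):
            free[ch] = best_start + h
    return max(free)                   # makespan = overall test time

# three hypothetical cores on a 3-channel TAM
rects = [(2, 5.0), (1, 3.0), (3, 2.0)]
t = decode([0, 1, 2], rects, tam_width=3)
```

A GA would evolve the `order` permutation, calling `decode` as the fitness function to be minimized.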
NASA Astrophysics Data System (ADS)
Wang, Li-yong; Li, Le; Zhang, Zhi-hua
2016-07-01
Hot compression tests of Ti-6Al-4V alloy over a wide temperature range of 1023-1323 K and strain rate range of 0.01-10 s-1 were conducted on a servo-hydraulic, computer-controlled Gleeble-3500 machine. In order to accurately and effectively characterize the highly nonlinear flow behaviors, support vector regression (SVR), a machine learning method, was combined with a genetic algorithm (GA) for characterizing the flow behaviors, namely the GA-SVR. A prominent characteristic of the GA-SVR is that, with identical training parameters, it keeps training accuracy and prediction accuracy at a stable level across different attempts for a given dataset. The learning abilities, generalization abilities, and modeling efficiencies of the mathematical regression model, ANN, and GA-SVR for Ti-6Al-4V alloy were compared in detail. Comparison results show that the learning ability of the GA-SVR is stronger than that of the mathematical regression model. The generalization abilities and modeling efficiencies of these models were ranked as follows in ascending order: the mathematical regression model < ANN < GA-SVR. The stress-strain data outside the experimental conditions were predicted by the well-trained GA-SVR, which improved the simulation accuracy of the load-stroke curve and can further benefit related research fields where stress-strain data play important roles, such as inferring work hardening and dynamic recovery, characterizing dynamic recrystallization evolution, and improving processing maps.
An Effective GA-Based Scheduling Algorithm for FlexRay Systems
NASA Astrophysics Data System (ADS)
Ding, Shan; Tomiyama, Hiroyuki; Takada, Hiroaki
An advanced communication system, the FlexRay system, has been developed for future automotive applications. It consists of time-triggered clusters, such as drive-by-wire in cars, in order to meet the different requirements and constraints between various sensors, processors, and actuators. In this paper, an approach to static scheduling for FlexRay systems is proposed. Our experimental results show that the proposed scheduling method reduces network traffic by up to 36.3% compared with a previous approach.
A Genetic Algorithm Approach to Nonlinear Least Squares Estimation
ERIC Educational Resources Information Center
Olinsky, Alan D.; Quinn, John T.; Mangiameli, Paul M.; Chen, Shaw K.
2004-01-01
A common type of problem encountered in mathematics is optimizing nonlinear functions. Many popular algorithms that are currently available for finding nonlinear least squares estimators, a special class of nonlinear problems, are sometimes inadequate. They might not converge to an optimal value, or if they do, it could be to a local rather than…
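As a concrete illustration of the idea in this abstract, the sketch below uses a simple GA (truncation selection, blend crossover, Gaussian mutation) to search for least-squares estimates of the toy model y = a·exp(b·x). The model, data, and GA settings are assumptions for illustration, not from the article.

```python
import math
import random

def sse(params, data):
    """Sum of squared errors for the toy model y = a * exp(b * x)."""
    a, b = params
    return sum((y - a * math.exp(b * x)) ** 2 for x, y in data)

def ga_fit(data, pop_size=60, gens=200, seed=1):
    rng = random.Random(seed)
    pop = [(rng.uniform(0.0, 5.0), rng.uniform(-1.0, 1.0))
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda p: sse(p, data))
        elite = pop[:pop_size // 4]                 # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            p1, p2 = rng.sample(elite, 2)
            w = rng.random()                        # blend crossover weight
            child = tuple(w * u + (1 - w) * v + rng.gauss(0.0, 0.05)
                          for u, v in zip(p1, p2))  # + Gaussian mutation
            children.append(child)
        pop = elite + children                      # elitism keeps the best
    return min(pop, key=lambda p: sse(p, data))

# noise-free synthetic data generated with a = 2, b = 0.5
data = [(x, 2.0 * math.exp(0.5 * x)) for x in (0.0, 0.5, 1.0, 1.5, 2.0)]
a_hat, b_hat = ga_fit(data)
```

Unlike gradient-based solvers, this population search does not need derivatives and is less prone to stalling in a poor local minimum, which is the motivation the abstract gives.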
A compensatory algorithm for the slow-down effect on constant-time-separation approaches
NASA Technical Reports Server (NTRS)
Abbott, Terence S.
1991-01-01
In seeking methods to improve airport capacity, the question arose as to whether an electronic display could provide information which would enable the pilot to be responsible for self-separation under instrument conditions, to allow for the practical implementation of reduced-separation, multiple glide path approaches. A time-based, closed-loop algorithm was developed and simulator-validated for in-trail (one aircraft behind the other) approach and landing. The algorithm was designed to reduce the effects of approach speed reduction prior to landing for the trailing aircraft, as well as the dispersion of the interarrival times. The operational task for the validation was an instrument approach to landing while following a single lead aircraft on the same approach path. The desired landing separation was 60 seconds for these approaches. An open-loop algorithm, previously developed, was used as a basis for comparison. The results showed that, relative to the open-loop algorithm, the closed-loop one could theoretically provide for a 6% increase in runway throughput. Also, the use of the closed-loop algorithm did not affect the path tracking performance, and pilot comments indicated that the guidance from the closed-loop algorithm would be acceptable from an operational standpoint. From these results, it is concluded that by using a time-based, closed-loop spacing algorithm, precise interarrival time intervals may be achievable with operationally acceptable pilot workload.
Random Matrix Approach to Quantum Adiabatic Evolution Algorithms
NASA Technical Reports Server (NTRS)
Boulatov, Alexei; Smelyanskiy, Vadier N.
2004-01-01
We analyze the power of quantum adiabatic evolution algorithms (QAA) for solving random NP-hard optimization problems within a theoretical framework based on random matrix theory (RMT). We present two types of driven RMT models. In the first model, the driving Hamiltonian is represented by Brownian motion in the matrix space. We use the Brownian motion model to obtain a description of multiple avoided crossing phenomena. We show that the failure mechanism of the QAA is due to the interaction of the ground state with the "cloud" formed by all the excited states, confirming that in the driven RMT models the Landau-Zener mechanism of dissipation is not important. We show that the QAA has a finite probability of success in a certain range of parameters, implying the polynomial complexity of the algorithm. The second model corresponds to the standard QAA with the problem Hamiltonian taken from the Gaussian Unitary RMT ensemble (GUE). We show that the level dynamics in this model can be mapped onto the dynamics in the Brownian motion model. However, the driven RMT model always leads to the exponential complexity of the algorithm due to the presence of long-range intertemporal correlations of the eigenvalues. Our results indicate that the weakness of effective transitions is the leading effect that can make the Markovian-type QAA successful.
An algorithm for fast DNS cavitating flows simulations using homogeneous mixture approach
NASA Astrophysics Data System (ADS)
Žnidarčič, A.; Coutier-Delgosha, O.; Marquillie, M.; Dular, M.
2015-12-01
A new algorithm for fast DNS cavitating flow simulations is developed. The algorithm is based on the Kim and Moin projection method. A homogeneous mixture approach with a transport equation for the vapour volume fraction is used to model cavitation, and various cavitation models can be used. An influence matrix and a matrix diagonalisation technique enable fast parallel computations.
Random matrix approach to quantum adiabatic evolution algorithms
Boulatov, A.; Smelyanskiy, V.N.
2005-05-15
We analyze the power of the quantum adiabatic evolution algorithm (QAA) for solving random computationally hard optimization problems within a theoretical framework based on random matrix theory (RMT). We present two types of driven RMT models. In the first model, the driving Hamiltonian is represented by Brownian motion in the matrix space. We use the Brownian motion model to obtain a description of multiple avoided crossing phenomena. We show that nonadiabatic corrections in the QAA are due to the interaction of the ground state with the 'cloud' formed by most of the excited states, confirming that in driven RMT models, the Landau-Zener scenario of pairwise level repulsions is not relevant for the description of nonadiabatic corrections. We show that the QAA has a finite probability of success in a certain range of parameters, implying a polynomial complexity of the algorithm. The second model corresponds to the standard QAA with the problem Hamiltonian taken from the RMT Gaussian unitary ensemble (GUE). We show that the level dynamics in this model can be mapped onto the dynamics in the Brownian motion model. For this reason, the driven GUE model can also lead to polynomial complexity of the QAA. The main contribution to the failure probability of the QAA comes from the nonadiabatic corrections to the eigenstates, which only depend on the absolute values of the transition amplitudes. Due to the mapping between the two models, these absolute values are the same in both cases. Our results indicate that this 'phase irrelevance' is the leading effect that can make both the Markovian- and GUE-type QAAs successful.
A Genetic Algorithm Approach for the TV Self-Promotion Assignment Problem
NASA Astrophysics Data System (ADS)
Pereira, Paulo A.; Fontes, Fernando A. C. C.; Fontes, Dalila B. M. M.
2009-09-01
We report on the development of a Genetic Algorithm (GA), which has been integrated into a Decision Support System to plan the best assignment of the weekly self-promotion space for a TV station. The problem addressed consists of deciding which shows to advertise, and when, such that the number of viewers of an intended group, or target, is maximized. The GA proposed incorporates a greedy heuristic to find good initial solutions. These solutions, as well as the solutions later obtained through the use of the GA, then go through a repair procedure. This is used with two objectives, which are addressed in turn. Firstly, it checks the solution's feasibility and, if infeasible, the solution is fixed by removing some shows. Secondly, it tries to improve the solution by adding some extra shows. The problem faced by the commercial TV station is too big, and has too many features, to be solved exactly. Therefore, in order to test the quality of the solutions provided by the proposed GA, we have randomly generated some smaller problem instances. For these problems, we have obtained solutions on average within 1% of the optimal solution value.
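The two-step repair procedure can be sketched under a simplified single-capacity model (an assumption; the paper's formulation has more features): first drop the least valuable shows until the promotion space fits, then greedily add shows that still fit.

```python
def repair(solution, shows, capacity):
    """Repair a GA solution.

    solution: set of show ids chosen for promotion.
    shows: id -> (duration in seconds, expected target viewers).
    capacity: total self-promotion space available (seconds).
    """
    chosen = set(solution)

    def used():
        return sum(shows[i][0] for i in chosen)

    # step 1: restore feasibility by removing least valuable shows
    while used() > capacity:
        chosen.remove(min(chosen, key=lambda i: shows[i][1] / shows[i][0]))

    # step 2: improve by adding any show that still fits,
    # preferring higher viewers-per-second
    for i in sorted(shows, key=lambda i: -shows[i][1] / shows[i][0]):
        if i not in chosen and used() + shows[i][0] <= capacity:
            chosen.add(i)
    return chosen

# hypothetical instance: id -> (duration, viewers), 60 s of space
shows = {0: (30, 900), 1: (20, 800), 2: (40, 700), 3: (10, 500)}
fixed = repair({0, 1, 2}, shows, capacity=60)
```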
NASA Astrophysics Data System (ADS)
Anderson, Richard P.
An algorithm for precision approach guidance using GPS and a MicroElectroMechanical Systems/Inertial Navigation System (MEMS/INS) has been developed to meet the Required Navigation Performance (RNP) at a cost that is suitable for General Aviation (GA) applications. This scheme allows for accurate approach guidance (Category I) using the Wide Area Augmentation System (WAAS) at locations not served by ILS, MLS or other types of precision landing guidance, thereby greatly expanding the number of useable airports in poor weather. At locations served by a Local Area Augmentation System (LAAS), Category III-like navigation is possible with the novel idea of a Missed Approach Time (MAT) that is similar to a Missed Approach Point (MAP) but not fixed in space. Though certain augmented types of GPS have sufficient precision for approach navigation, their use alone is insufficient to meet RNP due to an inability to monitor loss, degradation or intentional spoofing and meaconing of the GPS signal. A redundant navigation system and a health monitoring system must be added to acquire sufficient reliability, safety and time-to-alert as stated by required navigation performance. An inertial navigation system is the best choice, as it requires no external radio signals and its errors are complementary to GPS. An aiding Kalman filter is used to derive parameters that monitor the correlation between the GPS and MEMS/INS. These approach guidance parameters determine the MAT for a given RNP and provide the pilot or autopilot with a proceed/do-not-proceed decision in real time. The enabling technology used to derive the guidance program is a MEMS gyroscope and accelerometer package in conjunction with a single-antenna pseudo-attitude algorithm. To be viable for most GA applications, the hardware must be reasonably priced. MEMS gyros allow for the first cost-effective INS package to be developed. With lower cost, however, come higher drift rates and a greater dependence on GPS aiding.
Effective and efficient optics inspection approach using machine learning algorithms
Abdulla, G; Kegelmeyer, L; Liao, Z; Carr, W
2010-11-02
The Final Optics Damage Inspection (FODI) system automatically acquires images of the final optics at the National Ignition Facility (NIF), and the Optics Inspection (OI) system analyzes them. During each inspection cycle up to 1000 images acquired by FODI are examined by OI to identify and track damage sites on the optics. The process of tracking growing damage sites on the surface of an optic can be made more effective by identifying and removing signals associated with debris or reflections. The manual process to filter these false sites is daunting and time consuming. In this paper we discuss the use of machine learning tools and data mining techniques to help with this task. We describe the process to prepare a data set that can be used for training and identifying hardware reflections in the image data. In order to collect training data, the images are first automatically acquired and analyzed with existing software, and then relevant features such as spatial, physical and luminosity measures are extracted for each site. A subset of these sites is 'truthed', or manually assigned a class, to create training data. A supervised classification algorithm is used to test whether the features can predict the class membership of new sites. A suite of self-configuring machine learning tools called 'Avatar Tools' is applied to classify all sites. To verify, we used 10-fold cross validation and found the accuracy was above 99%. This substantially reduces the number of false alarms that would otherwise be sent for more extensive investigation.
Classification Algorithms for Big Data Analysis, a Map Reduce Approach
NASA Astrophysics Data System (ADS)
Ayma, V. A.; Ferreira, R. S.; Happ, P.; Oliveira, D.; Feitosa, R.; Costa, G.; Plaza, A.; Gamba, P.
2015-03-01
For many years, the scientific community has been concerned with how to increase the accuracy of different classification methods, and major achievements have been made so far. Besides this issue, the increasing amount of data that is being generated every day by remote sensors raises more challenges to be overcome. In this work, a tool within the scope of the InterIMAGE Cloud Platform (ICP), which is an open-source, distributed framework for automatic image interpretation, is presented. The tool, named ICP: Data Mining Package, is able to perform supervised classification procedures on huge amounts of data, usually referred to as big data, on a distributed infrastructure using Hadoop MapReduce. The tool has four classification algorithms implemented, taken from WEKA's machine learning library, namely: Decision Trees, Naïve Bayes, Random Forest and Support Vector Machines (SVM). The results of an experimental analysis using an SVM classifier on data sets of different sizes for different cluster configurations demonstrate the potential of the tool, as well as aspects that affect its performance.
Scheduling language and algorithm development study. Appendix: Study approach and activity summary
NASA Technical Reports Server (NTRS)
1974-01-01
The approach and organization of the study to develop a high level computer programming language and a program library are presented. The algorithm and problem modeling analyses are summarized. The approach used to identify and specify the capabilities required in the basic language is described. Results of the analyses used to define specifications for the scheduling module library are presented.
A Genetic Algorithm Variational Approach to Data Assimilation and Application to Volcanic Emissions
NASA Astrophysics Data System (ADS)
Schmehl, Kerrie J.; Haupt, Sue Ellen; Pavolonis, Michael J.
2012-03-01
Variational data assimilation methods optimize the match between an observed and a predicted field. These methods normally require information on the error variances of both the analysis and the observations, which are sometimes difficult to obtain for transport and dispersion problems. Here, the variational problem is set up as a minimization problem that directly minimizes the root mean squared error of the difference between the observations and the prediction. In the context of atmospheric transport and dispersion, the solution of this optimization problem requires a robust technique. A genetic algorithm (GA) is used here for that solution, forming the GA-Variational (GA-Var) technique. The philosophy and formulation of the technique are described here. Advantages of the technique include that it requires neither observation or analysis error covariances nor information about any variables that are not directly assimilated. It can be employed in the context of a forward assimilation problem, or used to retrieve unknown source or meteorological information by solving the inverse problem. The details of the method are reviewed. As an example application, GA-Var is demonstrated for predicting the plume from a volcanic eruption. First, the technique is employed to retrieve the unknown emission rate and the steering winds of the volcanic plume. Then that information is assimilated into a forward prediction of its transport and dispersion. Concentration data are derived from satellite data to determine the observed ash concentrations. A case study is made of the March 2009 eruption of Mount Redoubt in Alaska. The GA-Var technique is able to determine a wind speed and direction that match the observations well, and a reasonable emission rate.
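The core GA-Var idea, directly minimizing the RMSE between observed and predicted fields over unknown source parameters without any error covariances, can be illustrated with a toy one-dimensional "plume" model. A coarse grid search stands in for the genetic algorithm here; the model and all values are assumed for illustration.

```python
import math

def predict(q, u, xs):
    """Toy plume: source rate q, plume centered at downwind position u."""
    return [q * math.exp(-(x - u) ** 2) for x in xs]

def rmse(obs, pred):
    """Cost function minimized directly, as in GA-Var."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

xs = [0.0, 0.5, 1.0, 1.5, 2.0]       # receptor locations
obs = predict(3.0, 1.0, xs)          # synthetic truth: q = 3, u = 1

# exhaustive search over (q, u) candidates, standing in for the GA
best = min(((q / 10.0, u / 10.0)
            for q in range(0, 51) for u in range(0, 21)),
           key=lambda p: rmse(obs, predict(p[0], p[1], xs)))
```

The retrieved parameters could then drive a forward dispersion prediction, mirroring the retrieve-then-forecast workflow the abstract describes.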
Obese and Overweight Children and Adolescents: An Algorithmic Clinical Approach
Khosravi, Shahrzad; Borna, Sima
2013-01-01
Obesity in children and adolescents is a hot issue throughout the world. Numerous complications are related to childhood obesity, such as cardiovascular disease, diabetes, insulin resistance and psychological problems. Therefore, identification and treatment of this problem have an important role in the health system. In this clinical approach, we have provided a general overview of the assessment and management of obesity in children and adolescents, including definitions, history-taking, physical examinations, and laboratory testing for general practitioners and pediatricians. Furthermore, conventional therapies (physical activity, eating habits and behavioral modification) and non-conventional treatments (drugs and surgery options) have been discussed. PMID:24910738
Impact ionization in GaAs: A screened exchange density-functional approach
Picozzi, S.; Asahi, R.; Geller, C. B.; Continenza, A.; Freeman, A. J.
2001-08-13
Results are presented of a fully ab initio calculation of impact ionization rates in GaAs within the density functional theory framework, using a screened-exchange formalism and the highly precise all-electron full-potential linearized augmented plane wave method. The calculated impact ionization rates show a marked orientation dependence in k space, indicating the strong restrictions imposed by the conservation of energy and momentum. This anisotropy diminishes as the impacting electron energy increases. A Keldysh-type fit performed on the energy-dependent rate shows a rather soft edge and a threshold energy greater than the direct band gap. The consistency with available Monte Carlo and empirical pseudopotential calculations shows the reliability of our approach and paves the way to ab initio calculations of pair production rates in new and more complex materials.
Forest Height Retrieval Algorithm Using a Complex Visibility Function Approach
NASA Astrophysics Data System (ADS)
Chu, T.; Zebker, H. A.
2011-12-01
Vegetation structure and biomass on Earth's terrestrial surface are critical parameters that influence the global carbon cycle, habitat, climate, and resources of economic value. Space-borne and air-borne remote sensing instruments are the most practical means of obtaining information such as tree height and biomass on a large scale. Synthetic aperture radar (SAR), especially interferometric SAR (InSAR), has been utilized in recent years to quantify vegetation parameters such as height and biomass. However, methods used to quantify global vegetation have yet to produce accurate results. It is the goal of this study to develop, through simulation, a signal-processing algorithm to determine vegetation heights that would lead to accurate height and biomass retrievals. A standard SAR image represents a projection of the 3D distributed backscatter onto a 2D plane. InSAR is capable of determining topography or the height of vegetation. Vegetation height is determined from the mean scattering phase center of all scatterers within a resolution cell. InSAR is capable of generating a 3D height surface, but the distribution of scatterers in height is under-determined and cannot be resolved by a single-baseline measurement. One interferogram is therefore insufficient to uniquely determine the vertical characteristics of even a simple 3D forest. An aperture synthesis technique in the height or vertical dimension would enable improved resolution capability to distinguish scatterers at different locations in the vertical dimension. Repeat-pass observations allow us to use differential interferometry to populate the frequency domain, from which we can use the Fourier transform relation to get to the brightness or backscatter domain. Ryle and Hewish first introduced this technique of aperture synthesis in the 1960s for large radio telescope arrays. This technique would allow us to focus the antenna beam pattern in the vertical direction and increase vertical resolving power.
Load identification approach based on basis pursuit denoising algorithm
NASA Astrophysics Data System (ADS)
Ginsberg, D.; Ruby, M.; Fritzen, C. P.
2015-07-01
The information on the external loads is of great interest in many fields of structural analysis, such as structural health monitoring (SHM) systems or assessment of damage after extreme events. However, in most cases it is not possible to measure the external forces directly, so they need to be reconstructed. Load reconstruction refers to the problem of estimating an input to a dynamic system when the system output and the impulse response functions are known. Generally, this leads to a so-called ill-posed inverse problem, which involves solving an underdetermined linear system of equations. For most practical applications it can be assumed that the applied loads are not arbitrarily distributed in time and space; at least some specific characteristics of the external excitation are known a priori. In this contribution, this knowledge was used to develop a more suitable force reconstruction method, which allows identifying the time history and the force location simultaneously while employing significantly fewer sensors compared to other reconstruction approaches. The properties of the external force are used to transform the ill-posed problem into a sparse recovery task. The sparse solution is acquired by solving a minimization problem known as basis pursuit denoising (BPDN). The possibility of reconstructing loads from noisy structural measurement signals will be demonstrated by considering two frequently occurring loading conditions, harmonic excitation and impact events, separately and combined. First, a simulation study of a simple plate structure is carried out, and thereafter an experimental investigation of a real beam is performed.
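A minimal sketch of the sparse-recovery step: BPDN, min (1/2)||Ax - b||^2 + lam*||x||_1, solved here with ISTA, a standard proximal-gradient method (the paper's own solver may differ). The underdetermined system and parameter values below are toy assumptions; in the load-identification setting, A would hold sampled impulse responses and x the unknown force history/location coefficients.

```python
def matvec(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def soft(v, t):
    """Soft-thresholding: the proximal operator of t * |.|."""
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def ista(A, b, lam, step, iters=3000):
    """Iterative shrinkage-thresholding for basis pursuit denoising."""
    cols = list(zip(*A))
    x = [0.0] * len(cols)
    for _ in range(iters):
        r = [bi - yi for bi, yi in zip(b, matvec(A, x))]        # b - Ax
        grad = [sum(c * ri for c, ri in zip(col, r)) for col in cols]
        x = [soft(xi + step * gi, lam * step)                   # prox step
             for xi, gi in zip(x, grad)]
    return x

# underdetermined toy system: 2 measurements, 4 unknowns, sparse truth
A = [[1.0, 0.0, 1.0, 0.0],
     [0.0, 1.0, 1.0, 1.0]]
x_true = [0.0, 0.0, 2.0, 0.0]
b = matvec(A, x_true)                 # noise-free measurements
x_hat = ista(A, b, lam=0.01, step=0.3)
```

Among the infinitely many solutions of Ax = b, the l1 penalty steers ISTA to the sparse one, which is exactly why BPDN suits force histories that are localized in time and space.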
Genetic algorithm based approach to optimize phenotypical traits of virtual rice.
Ding, Weilong; Xu, Lifeng; Wei, Yang; Wu, Fuli; Zhu, Defeng; Zhang, Yuping; Max, Nelson
2016-08-21
How to select and combine good traits of rice to obtain high-production individuals is one of the key points in developing crop ideotype cultivation technologies. Existing cultivation methods for producing ideal plants, such as field trials and crop modeling, have some limitations. In this paper, we propose a method based on a genetic algorithm (GA) and a functional-structural plant model (FSPM) to optimize plant types of virtual rice by dynamically adjusting phenotypical traits. In this algorithm, phenotypical traits such as leaf angles, plant height, the maximum number of tillers, and the tiller angle are considered as input parameters of our virtual rice model. We evaluate the photosynthetic output as a function of these parameters and optimize them using a GA. This method has been implemented in GroIMP using the modeling language XL (eXtended L-System) and RGG (Relational Growth Grammar). A double haploid population of rice is adopted as test material in a case study. Our experimental results show that our method can not only optimize the parameters of rice plant type and increase the amount of light absorption, but can also significantly increase crop yield. PMID:27179460
Chen, Hong-Yan; Zhao, Geng-Xing; Li, Xi-Can; Wang, Xiang-Feng; Li, Yu-Ling
2013-11-01
Taking Qihe County in Shandong Province of East China as the study area, soil samples were collected from the field. Based on hyperspectral reflectance measurements of the soil samples and their first-derivative transformation, the spectra were denoised and compressed by the discrete wavelet transform (DWT), the variables for the soil alkali-hydrolysable nitrogen quantitative estimation models were selected by genetic algorithms (GA), and the estimation models for the soil alkali-hydrolysable nitrogen content were built using partial least squares (PLS) regression. The discrete wavelet transform and genetic algorithm combined with partial least squares (DWT-GA-PLS) could not only compress the spectrum variables and reduce the model variables, but also improve the quantitative estimation accuracy of the soil alkali-hydrolysable nitrogen content. Based on the level 1-2 low-frequency coefficients of the discrete wavelet transform, and under the condition of a large-scale reduction in spectrum variables, the calibration models could achieve higher than or the same prediction accuracy as the full soil spectra. The model based on the second-level low-frequency coefficients had the highest precision, with a prediction R2 of 0.85, an RMSE of 8.11 mg x kg(-1), and an RPD of 2.53, indicating the effectiveness of the DWT-GA-PLS method in estimating soil alkali-hydrolysable nitrogen content. PMID:24564148
Zhuang, Weibing; Gao, Zhihong; Zhang, Zhen
2013-01-01
Hormones are closely associated with dormancy in deciduous fruit trees, and gibberellins (GAs) are known to be particularly important. In this study, we observed that GA4 treatment led to earlier bud break in Japanese apricot. To better understand the promoting effect of GA4 on the dormancy release of Japanese apricot flower buds, proteomic and transcriptomic approaches were used to analyse the mechanisms of dormancy release following GA4 treatment, based on two-dimensional gel electrophoresis (2-DE) and digital gene expression (DGE) profiling, respectively. More than 600 highly reproducible protein spots (P<0.05) were detected; following GA4 treatment, 38 protein spots showed more than a 2-fold difference in expression, and 32 of them were confidently identified from the databases. Compared with water treatment, many proteins associated with energy metabolism and oxidation–reduction showed significant changes after GA4 treatment, which might promote dormancy release. At the mRNA level, genes associated with energy metabolism and oxidation–reduction also played an important role in this process. Analysis of the functions of the identified proteins and genes and the related metabolic pathways provides a comprehensive proteomic and transcriptomic view of the coordination of dormancy release after GA4 treatment in Japanese apricot flower buds. PMID:24014872
Protein side-chain packing problem: a maximum edge-weight clique algorithmic approach.
Dukka Bahadur, K C; Tomita, Etsuji; Suzuki, Jun'ichi; Akutsu, Tatsuya
2005-02-01
"Protein side-chain packing" has ever-increasing applications in bioinformatics, dating from the early methods of homology modeling to protein design and protein docking. However, this problem is known to be computationally NP-hard. In this regard, we have developed a novel approach to solve it using the notion of a maximum edge-weight clique. Our approach efficiently reduces the protein side-chain packing problem to a graph and then finds the maximum clique of the reduced graph by applying an efficient clique-finding algorithm developed by our co-authors. Since our approach is based on deterministic algorithms, in contrast to the various existing algorithms based on heuristic approaches, it guarantees finding an optimal solution. We have tested this approach by predicting the side-chain conformations of a set of proteins and have compared the results with those of other existing methods. We have found that our results are comparable to or better than those produced by the existing methods. As our test set contains a protein of 494 residues, we have obtained considerable improvement in the size of the proteins handled as well as in the efficiency and accuracy of prediction. PMID:15751115
Self-adaptive parameters in genetic algorithms
NASA Astrophysics Data System (ADS)
Pellerin, Eric; Pigeon, Luc; Delisle, Sylvain
2004-04-01
Genetic algorithms are powerful search algorithms that can be applied to a wide range of problems. Generally, parameter setting is accomplished prior to running a Genetic Algorithm (GA), and this setting remains unchanged during execution. The problem of interest here is the self-adaptive adjustment of a GA's parameters. In this research, we propose an approach in which the control of a genetic algorithm's parameters is encoded within the chromosome of each individual. The parameters' values are entirely dependent on the evolution mechanism and on the problem context. Our preliminary results show that a GA is able to learn and evaluate the quality of self-set parameters according to their degree of contribution to the resolution of the problem. These results are indicative of a promising approach to the development of GAs with self-adaptive parameter settings that do not require the user to pre-adjust parameters at the outset.
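A minimal sketch of the idea on a toy one-max problem (the paper's actual encoding and operators are not specified in the abstract, so everything below is illustrative): each chromosome carries its own mutation rate, which evolves together with the solution bits instead of being fixed by the user.

```python
import random

def self_adaptive_ga(bits=32, pop_size=20, generations=60, seed=1):
    """Toy GA on the one-max problem where each individual encodes its
    own mutation rate, so the parameter adapts during evolution."""
    rng = random.Random(seed)
    # individual = (bitstring, mutation_rate)
    pop = [([rng.randint(0, 1) for _ in range(bits)], rng.uniform(0.01, 0.2))
           for _ in range(pop_size)]
    fitness = lambda ind: sum(ind[0])
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]          # elitist truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            genes, rate = rng.choice(parents)
            # the mutation rate itself mutates (multiplicative step) ...
            child_rate = min(0.5, max(0.001, rate * 2 ** rng.uniform(-1, 1)))
            # ... and is then used to mutate the solution bits
            child = [1 - g if rng.random() < child_rate else g for g in genes]
            children.append((child, child_rate))
        pop = parents + children
    return max(pop, key=fitness)

best_bits, best_rate = self_adaptive_ga()
print(sum(best_bits), round(best_rate, 3))
```

On this toy instance the evolved rate typically settles at a small value once the population nears the optimum, which is the self-adaptation effect the abstract describes.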
An approach to select the appropriate image fusion algorithm for night vision systems
NASA Astrophysics Data System (ADS)
Schwan, Gabriele; Scherer-Negenborn, Norbert
2015-10-01
For many years image fusion has been an important subject in the image processing community. The purpose of image fusion is to combine the relevant information from two or more images into a single result image. Many fusion algorithms have been developed and published. Some attempts have been made to automatically assess the results of several fusion algorithms with the objective of obtaining the output best suited for human observers, but it was shown that such objective machine assessment does not always correlate with the observers' subjective perception. In this paper a novel approach is presented that selects the appropriate fusion algorithm to achieve the best image enhancement for human observers. The fusion algorithms' results are assessed on the basis of local contrast. The fusion algorithms are applied to a representative data set covering different use cases and image contents, and the fusion results for selected data are judged subjectively by human observers. The assessment algorithm with the best fit to visual perception is then used to select the best fusion algorithm for comparable scenarios.
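A hedged sketch of the selection step, assuming mean local standard deviation as the local-contrast measure (the abstract does not give the exact contrast definition); the `select_fusion` helper is hypothetical and simply returns the candidate fusion result with the highest contrast score.

```python
def local_contrast(img, win=1):
    """Mean local standard deviation over (2*win+1)^2 neighbourhoods --
    a simple stand-in for the paper's local-contrast measure."""
    h, w = len(img), len(img[0])
    total, count = 0.0, 0
    for y in range(win, h - win):
        for x in range(win, w - win):
            patch = [img[j][i] for j in range(y - win, y + win + 1)
                               for i in range(x - win, x + win + 1)]
            mean = sum(patch) / len(patch)
            var = sum((p - mean) ** 2 for p in patch) / len(patch)
            total += var ** 0.5
            count += 1
    return total / count

def select_fusion(results):
    """Pick the fusion algorithm whose output scores highest."""
    return max(results, key=lambda name: local_contrast(results[name]))

flat  = [[100] * 8 for _ in range(8)]                        # low-contrast fusion result
edges = [[0 if x < 4 else 255 for x in range(8)] for _ in range(8)]  # high-contrast result
print(select_fusion({"average": flat, "laplacian": edges}))  # laplacian
```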
An Approach for Assessing RNA-seq Quantification Algorithms in Replication Studies
Wu, Po-Yen; Phan, John H.; Wang, May D.
2016-01-01
One way to gain a more comprehensive picture of the complex function of a cell is to study the transcriptome. A promising technology for studying the transcriptome is RNA sequencing, one application of which is to quantify elements of the transcriptome and link the quantitative observations to biology. Although numerous quantification algorithms are publicly available, no method of systematically assessing these algorithms has been developed. To meet the need for such an assessment, we present an approach that includes (1) simulated and real datasets, (2) three alignment strategies, and (3) six quantification algorithms. Examining the normalized root-mean-square error, the percentage error of the coefficient of variation, and the distribution of the coefficient of variation, we found that quantification algorithms that take as input sequence alignments reported in transcriptomic coordinates usually performed better in terms of the multiple metrics proposed in this study.
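Two of the metrics named above can be written down directly. This sketch assumes NRMSE is the RMSE normalized by the mean of the ground truth and that the percentage error of the CV is taken relative to a reference CV; both are plausible but unconfirmed readings of the abstract.

```python
def nrmse(estimate, truth):
    """Root-mean-square error normalized by the mean of the ground truth."""
    n = len(truth)
    rmse = (sum((e - t) ** 2 for e, t in zip(estimate, truth)) / n) ** 0.5
    return rmse / (sum(truth) / n)

def cv(values):
    """Coefficient of variation: population std / mean."""
    m = sum(values) / len(values)
    sd = (sum((v - m) ** 2 for v in values) / len(values)) ** 0.5
    return sd / m

def cv_percentage_error(replicate_cv, reference_cv):
    """Percentage error of the coefficient of variation vs. a reference."""
    return 100.0 * abs(replicate_cv - reference_cv) / reference_cv

truth    = [10.0, 20.0, 30.0]                  # toy expression values
estimate = [11.0, 19.0, 33.0]                  # toy quantification output
print(round(nrmse(estimate, truth), 4))        # 0.0957
print(round(cv_percentage_error(cv(estimate), cv(truth)), 2))  # 6.05
```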
Pal, Partha S; Kar, R; Mandal, D; Ghoshal, S P
2015-11-01
This paper presents an efficient approach to identifying different stable and practically useful Hammerstein models, as well as an unstable nonlinear process along with its stable closed-loop counterpart, with the help of an evolutionary algorithm, the Colliding Bodies Optimization (CBO) algorithm. The performance measures of the CBO-based optimization approach, such as precision and accuracy, are justified by the minimum output mean-square error (MSE), which signifies that the amounts of bias and variance in the output domain are also the least. It is also observed that optimizing the output MSE in the presence of outliers results in consistently close estimates of the output parameters, which justifies the general applicability of the CBO algorithm to the system identification problem and establishes the practical usefulness of the applied approach. The optimum MSE values, computational times, and statistical properties of the MSEs are all found to be superior to those of other existing stochastic-algorithm-based approaches reported in the recent literature, which establishes the robustness and efficiency of the applied CBO-based identification scheme. PMID:26362314
A Fuzzy Genetic Algorithm Approach to an Adaptive Information Retrieval Agent.
ERIC Educational Resources Information Center
Martin-Bautista, Maria J.; Vila, Maria-Amparo; Larsen, Henrik Legind
1999-01-01
Presents an approach to a Genetic Information Retrieval Agent Filter (GIRAF) that filters and ranks documents retrieved from the Internet according to users' preferences by using a Genetic Algorithm and fuzzy set theory to handle the imprecision of users' preferences and users' evaluation of the retrieved documents. (Author/LRW)
ERIC Educational Resources Information Center
Moreno, Julian; Ovalle, Demetrio A.; Vicari, Rosa M.
2012-01-01
Considering that group formation is one of the key processes in collaborative learning, the aim of this paper is to propose a method based on a genetic algorithm approach for achieving inter-homogeneous and intra-heterogeneous groups. The main feature of such a method is that it allows for the consideration of as many student characteristics as…
Zaki, Mohammad Reza; Varshosaz, Jaleh; Fathi, Milad
2015-05-20
The multivariate nature of drug-loaded nanosphere manufacturing, in terms of the multiplicity of factors involved, makes it a time-consuming and expensive process. In this study a genetic algorithm (GA) and an artificial neural network (ANN), two tools inspired by natural processes, were employed to optimize and simulate the manufacturing process of agar nanospheres. The efficiency of the GA was evaluated against response surface methodology (RSM). The studied responses included particle size, polydispersity index, zeta potential, drug loading, and release efficiency. The GA predicted greater extremum values for the response factors than RSM did; however, the real values showed some deviations from the predicted data. Good agreement was found between the ANN-predicted and real values for all five response factors, with high correlation coefficients. The GA was more successful than RSM in optimization, and together with the ANN proved an efficient tool for optimizing and modeling the fabrication of drug-loaded agar nanospheres. PMID:25817674
Semibulk InGaN: A novel approach for thick, single phase, epitaxial InGaN layers grown by MOVPE
NASA Astrophysics Data System (ADS)
Pantzas, K.; El Gmili, Y.; Dickerson, J.; Gautier, S.; Largeau, L.; Mauguin, O.; Patriarche, G.; Suresh, S.; Moudakir, T.; Bishop, C.; Ahaitouf, A.; Rivera, T.; Tanguy, C.; Voss, P. L.; Ougazzaden, A.
2013-05-01
In this paper we demonstrate a solution to systematically obtain thick, single phase InGaN epilayers by MOVPE. The solution consists in periodically inserting ultra-thin GaN interlayers during InGaN growth. Measurements by HAADF-STEM, X-ray diffraction, cathodoluminescence and photoluminescence demonstrate the effective suppression of the three-dimensional sublayer that is shown to spontaneously form in control InGaN epilayers grown without this method. Simulation predicts that tunneling through the GaN barriers is efficient and that carrier transport through this semi-bulk InGaN/GaN structure is similar to that of bulk InGaN. Such structures may be useful for improving the efficiency of InGaN solar cells by allowing thicker, higher quality InGaN absorption layers.
NASA Technical Reports Server (NTRS)
Li, C.-J.; Sun, Q.; Lagowski, J.; Gatos, H. C.
1985-01-01
The microscale characterization of electronic defects in semi-insulating (SI) GaAs has been a challenging issue in connection with materials problems encountered in GaAs IC technology. The main obstacle limiting the applicability of high-resolution electron beam methods such as electron-beam-induced current (EBIC) and cathodoluminescence (CL) is the low concentration of free carriers in SI GaAs. The present paper provides a new photo-EBIC characterization approach which combines the spectroscopic advantages of optical methods with the high spatial resolution and scanning capability of EBIC. A scanning electron microscope modified for electronic characterization studies is shown schematically. The instrument can operate in the standard SEM mode, in the EBIC modes (including photo-EBIC and thermally stimulated EBIC, TS-EBIC), and in the cathodoluminescence and scanning modes. Attention is given to the use of the CL, photo-EBIC, and TS-EBIC techniques.
NASA Astrophysics Data System (ADS)
Handels, Heinz; Ross, Th; Kreusch, J.; Wolff, H. H.; Poeppl, S. J.
1998-06-01
A new approach to computer-supported recognition of melanoma and naevocytic naevi based on high-resolution skin surface profiles is presented. Profiles are generated by sampling an area of 4 x 4 mm2 at a resolution of 125 sample points per mm with a laser profilometer at a vertical resolution of 0.1 micrometers. Using image analysis algorithms, Haralick's texture parameters, Fourier features, and features based on fractal analysis are extracted. In order to improve classification performance, a subsequent feature selection process is applied to determine the best possible subset of features. Genetic algorithms are applied to the feature selection process, and the results of different approaches are compared. As the quality measure for feature subsets, the error rate of the nearest-neighbour classifier estimated with the leaving-one-out method is used. In comparison to heuristic strategies and greedy algorithms, genetic algorithms show the best results for the feature selection problem. After feature selection, several architectures of feed-forward neural networks with error back-propagation are evaluated. The classification performance of the neural classifier is optimized using different topologies, learning parameters, and pruning algorithms. The best neural classifier achieved an error rate of 4.5% and was found after network pruning. The best result overall, an error rate of 2.3%, was obtained with the nearest-neighbour classifier.
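The GA-driven feature selection with leave-one-out 1-NN error as the subset quality measure can be sketched as follows. This is a toy reimplementation on synthetic data, not the authors' code; the population size, operators, and data are illustrative assumptions.

```python
import random

def loo_1nn_error(data, labels, mask):
    """Leave-one-out error of a 1-nearest-neighbour classifier
    restricted to the features selected by `mask`."""
    feats = [i for i, keep in enumerate(mask) if keep]
    if not feats:
        return 1.0
    errors = 0
    for i, x in enumerate(data):
        best, best_d = None, float("inf")
        for j, y in enumerate(data):
            if i == j:
                continue
            d = sum((x[f] - y[f]) ** 2 for f in feats)
            if d < best_d:
                best, best_d = j, d
        errors += labels[best] != labels[i]
    return errors / len(data)

def ga_feature_selection(data, labels, pop=12, gens=25, seed=0):
    """Toy GA: bitmask chromosomes, fitness = LOO 1-NN error (lower is better)."""
    rng = random.Random(seed)
    n = len(data[0])
    P = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop)]
    fit = lambda m: loo_1nn_error(data, labels, m)
    for _ in range(gens):
        P.sort(key=fit)
        elite = P[: pop // 2]                  # elitist selection
        kids = []
        while len(kids) < pop - len(elite):
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]          # one-point crossover
            child = [1 - g if rng.random() < 1 / n else g for g in child]  # bit-flip mutation
            kids.append(child)
        P = elite + kids
    return min(P, key=fit)

# synthetic data: feature 0 separates the classes; features 1-3 are noise
rng = random.Random(42)
data = [[cls * 10 + rng.random()] + [rng.random() * 10 for _ in range(3)]
        for cls in (0, 1) for _ in range(10)]
labels = [cls for cls in (0, 1) for _ in range(10)]
mask = ga_feature_selection(data, labels)
print(mask, loo_1nn_error(data, labels, mask))
```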
Evaluation of multi-algorithm optimization approach in multi-objective rainfall-runoff calibration
NASA Astrophysics Data System (ADS)
Shafii, M.; de Smedt, F.
2009-04-01
Calibration of rainfall-runoff models is one of the issues in which hydrologists have been interested over the past decades. Because of the multi-objective nature of rainfall-runoff calibration, and due to advances in computational power, population-based optimization techniques are becoming increasingly popular for multi-objective calibration schemes. In recent years, such methods have been shown to be powerful search methods for this purpose, especially when there is a large number of calibration parameters. However, the application of these methods is often criticized on the grounds that no single algorithm can be efficient for all problems. Therefore, more recent efforts have focused on methods that run multiple optimization algorithms simultaneously to overcome this drawback. This paper applies one of the most recent population-based multi-algorithm approaches, named AMALGAM, to multi-objective rainfall-runoff calibration of a distributed hydrological model, WetSpa. This algorithm merges the strengths of different optimization algorithms and has thus proven to be more efficient than other methods. To evaluate this, comparing the results of this paper with those previously reported using a conventional multi-objective evolutionary algorithm is the next step of this study.
Yanmaz, Ersin; Sarıpınar, Emin; Şahin, Kader; Geçen, Nazmiye; Çopur, Fatih
2011-04-01
4D-QSAR studies were performed on a series of 87 penicillin analogues using the electron conformational-genetic algorithm (EC-GA) method. In this EC-based method, each conformation of the molecular system is described by a matrix (ECMC) with both electron structural parameters and interatomic distances as matrix elements. Multiple comparisons of these matrices, within given tolerances, for highly active and weakly active penicillin compounds allow one to separate a smaller number of matrix elements (ECSA) which represent the pharmacophore groups. The effect of conformations was investigated by building models 1 and 2, based on an ensemble of conformers and on a single conformer, respectively. GA was used to select the most important descriptors and to predict the theoretical activity of the training (74 compounds) and test (13 compounds, commercial penicillins) sets. Model 1, obtained with the optimum 12 parameters, gave more satisfactory results for the training and test sets (R(training)(2)=0.861, SE(training)=0.044, R(test)(2)=0.892, SE(test)=0.099, q(2)=0.702, q(ext1)(2)=0.777 and q(ext2)(2)=0.733) than model 2 (R(training)(2)=0.774, SE(training)=0.056, R(test)(2)=0.840, SE(test)=0.121, q(2)=0.514, q(ext1)(2)=0.641 and q(ext2)(2)=0.570). To estimate the individual influence of each molecular descriptor on biological activity, the E-statistics technique was applied to the derived EC-GA model. PMID:21419636
One-qubit quantum gates in a circular graphene quantum dot: genetic algorithm approach
NASA Astrophysics Data System (ADS)
Amparán, Gibrán; Rojas, Fernando; Pérez-Garrido, Antonio
2013-05-01
The aim of this work was to design and control, using a genetic algorithm (GA) for parameter optimization, one-charge-qubit quantum logic gates σx, σy, and σz, using two bound states of circular graphene quantum dots in a homogeneous magnetic field as the qubit space. The proposed gates are implemented through quantum dynamic control of the qubit subspace with an oscillating electric field and an onsite (inside the quantum dot) gate-voltage pulse with amplitude and time-width modulation, which introduce relative phases and transitions between states. Our results show that we can obtain values of fitness or gate fidelity close to 1 while avoiding the leakage probability to higher states. The system evolution during gate operation is presented via the dynamics of the probability density, as well as a visualization of the pseudospin current characteristic of a graphene structure. We therefore conclude that it is possible to use the states of the graphene quantum dot (selecting the dot size and magnetic field) to design and control the qubit subspace, with these two time-dependent interactions, and to obtain the optimal parameters for good gate fidelity using the GA.
A DNA-based algorithm for minimizing decision rules: a rough sets approach.
Kim, Ikno; Chu, Yu-Yi; Watada, Junzo; Wu, Jui-Yu; Pedrycz, Witold
2011-09-01
Rough sets are often exploited for data reduction and classification. While they are conceptually appealing, the techniques used with rough sets can be computationally demanding. To address this obstacle, the objective of this study is to investigate the use of DNA molecules and associated techniques as an optimization vehicle to support algorithms of rough sets. In particular, we develop a DNA-based algorithm to derive decision rules of minimal length. This new approach can be of value when dealing with a large number of objects and their attributes, in which case the complexity of rough-sets-based methods is NP-hard. The proposed algorithm shows how the essential components involved in the minimization of decision rules in data processing can be realized. PMID:22020105
NASA Astrophysics Data System (ADS)
Reyes-Gómez, E.; Perdomo-Leiva, C. A.; Oliveira, L. E.; de Dios-Leyva, M.
1998-04-01
A theoretical resonant-tunnelling approach is used in a detailed study of the electronic and transmission properties of quasiperiodic Fibonacci GaAs-(Ga,Al)As semiconductor superlattices under applied electric fields. The theoretical scheme is based upon an exact solution of the corresponding Schroedinger equations in different wells and barriers, through the use of Airy functions, and a transfer-matrix technique. The calculated quasibound resonant energies agree quite well with previous theoretical parameter-based results within a tight-binding scheme, in the particular case of isolated Fibonacci building blocks. Theoretical resonant-tunnelling results for two generations of the quasiperiodic Fibonacci superlattice reveal the occurrence of anticrossings of the resonant levels with applied electric fields, together with the conduction- and valence-level wave function localization properties and electric-field-induced migration to specific regions of the semiconductor quasiperiodic heterostructure. Finally, theoretical resonant-tunnelling calculations for the interband transition energies are shown to be in quite good quantitative agreement with previously reported experimental photocurrent measurements.
2011-01-01
Background: Position-specific priors (PSPs) have been used with success to boost EM and Gibbs-sampler-based motif discovery algorithms. PSP information has been computed from different sources, including orthologous conservation, DNA duplex stability, and nucleosome positioning. Prior information has not yet been used in the context of combinatorial algorithms. Moreover, priors have been used only independently, and the gain of combining priors from different sources has not yet been studied. Results: We extend RISOTTO, a combinatorial algorithm for motif discovery, by post-processing its output with a greedy procedure that uses prior information. PSPs from different sources are combined into a scoring criterion that guides the greedy search procedure. The resulting method, called GRISOTTO, was evaluated over 156 yeast TF ChIP-chip sequence sets commonly used to benchmark prior-based motif discovery algorithms. Results show that GRISOTTO is at least as accurate as twelve other state-of-the-art approaches for the same task, even without combining priors. Furthermore, by considering combined priors, GRISOTTO is considerably more accurate than the state-of-the-art approaches for the same task. We also show that PSPs improve GRISOTTO's ability to retrieve motifs from mouse ChIP-seq data, indicating that the proposed algorithm can be applied to data from a different technology and from a higher eukaryote. Conclusions: The conclusions of this work are twofold. First, post-processing the output of combinatorial algorithms by incorporating prior information leads to a very efficient and effective motif discovery method. Second, combining priors from different sources is even more beneficial than considering them separately. PMID:21513505
NASA Astrophysics Data System (ADS)
Zheng, Feifei; Simpson, Angus R.; Zecchin, Aaron C.
2011-08-01
This paper proposes a novel optimization approach for the least cost design of looped water distribution systems (WDSs). Three distinct steps are involved in the proposed optimization approach. In the first step, the shortest-distance tree within the looped network is identified using the Dijkstra graph theory algorithm, for which an extension is proposed to find the shortest-distance tree for multisource WDSs. In the second step, a nonlinear programming (NLP) solver is employed to optimize the pipe diameters for the shortest-distance tree (chords of the shortest-distance tree are allocated the minimum allowable pipe sizes). Finally, in the third step, the original looped water network is optimized using a differential evolution (DE) algorithm seeded with diameters in the proximity of the continuous pipe sizes obtained in step two. As such, the proposed optimization approach combines the traditional deterministic optimization technique of NLP with the emerging evolutionary algorithm DE via the proposed network decomposition. The proposed methodology has been tested on four looped WDSs with the number of decision variables ranging from 21 to 454. Results obtained show the proposed approach is able to find optimal solutions with significantly less computational effort than other optimization techniques.
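Step one above, the shortest-distance tree, can be sketched with a multisource Dijkstra in which every source is seeded at distance zero (one plausible reading of the proposed multisource extension). The predecessor map defines the tree; network edges absent from it are the chords that receive minimum allowable pipe sizes in step two.

```python
import heapq

def shortest_distance_tree(adj, sources):
    """Multi-source Dijkstra: returns shortest distances and, for every
    node, its predecessor in the shortest-distance tree rooted at the
    source set (all sources are seeded at distance zero)."""
    dist = {v: float("inf") for v in adj}
    pred = {v: None for v in adj}
    heap = [(0.0, s) for s in sources]
    heapq.heapify(heap)
    for s in sources:
        dist[s] = 0.0
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                      # stale queue entry
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v], pred[v] = d + w, u
                heapq.heappush(heap, (d + w, v))
    return dist, pred

# toy looped network: nodes A-D, pipe lengths as edge weights
adj = {
    "A": [("B", 1.0), ("C", 4.0)],
    "B": [("A", 1.0), ("C", 2.0), ("D", 6.0)],
    "C": [("A", 4.0), ("B", 2.0), ("D", 3.0)],
    "D": [("B", 6.0), ("C", 3.0)],
}
dist, pred = shortest_distance_tree(adj, ["A"])
print(dist["D"], pred["D"])  # 6.0 C
```

Here the tree is A-B, B-C, C-D, so the chords A-C and B-D would start at the minimum pipe size before the NLP and DE stages refine the design.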
The Simplified Aircraft-Based Paired Approach With the ALAS Alerting Algorithm
NASA Technical Reports Server (NTRS)
Perry, Raleigh B.; Madden, Michael M.; Torres-Pomales, Wilfredo; Butler, Ricky W.
2013-01-01
This paper presents the results of an investigation of a proposed concept for closely spaced parallel runways called the Simplified Aircraft-based Paired Approach (SAPA). This procedure depends upon a new alerting algorithm called the Adjacent Landing Alerting System (ALAS). This study used both low fidelity and high fidelity simulations to validate the SAPA procedure and test the performance of the new alerting algorithm. The low fidelity simulation enabled a determination of minimum approach distance for the worst case over millions of scenarios. The high fidelity simulation enabled an accurate determination of timings and minimum approach distance in the presence of realistic trajectories, communication latencies, and total system error for 108 test cases. The SAPA procedure and the ALAS alerting algorithm were applied to the 750-ft parallel spacing (e.g., SFO 28L/28R) approach problem. With the SAPA procedure as defined in this paper, this study concludes that a 750-ft application does not appear to be feasible, but preliminary results for 1000-ft parallel runways look promising.
A Graph Algorithmic Approach to Separate Direct from Indirect Neural Interactions.
Wollstadt, Patricia; Meyer, Ulrich; Wibral, Michael
2015-01-01
Network graphs have become a popular tool to represent complex systems composed of many interacting subunits; especially in neuroscience, network graphs are increasingly used to represent and analyze functional interactions between multiple neural sources. Interactions are often reconstructed using pairwise bivariate analyses, overlooking the multivariate nature of interactions: investigating the effect of one source on a target necessitates taking all other sources into account as potential nuisance variables, and combinations of sources may act jointly on a given target. Bivariate analyses produce networks that may contain spurious interactions, which reduce the interpretability of the network and its graph metrics. A truly multivariate reconstruction, however, is computationally intractable because of the combinatorial explosion in the number of potential interactions. Thus, we have to resort to approximative methods to handle the intractability of multivariate interaction reconstruction, and thereby enable the use of networks in neuroscience. Here, we suggest such an approximative approach in the form of an algorithm that extends fast bivariate interaction reconstruction by identifying potentially spurious interactions post hoc: the algorithm uses interaction delays reconstructed for directed bivariate interactions to tag potentially spurious edges on the basis of their timing signatures in the context of the surrounding network. Such tagged interactions may then be pruned, which produces a statistically conservative network approximation that is guaranteed to contain only non-spurious interactions. We describe the algorithm and present a reference implementation in MATLAB to test the algorithm's performance on simulated networks as well as networks derived from magnetoencephalographic data. We discuss the algorithm in relation to other approximative multivariate methods and highlight suitable application scenarios. Our approach is a
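The delay-based tagging idea can be illustrated with a deliberately simplified rule (an assumption, not the authors' exact criterion): an edge (a, c) is tagged when some relay node b exists whose two-hop delays sum to the direct delay within a tolerance, suggesting the bivariate link a→c may merely reflect the cascade a→b→c.

```python
def tag_spurious(edges, tol=0):
    """Tag a reconstructed edge (a, c) as potentially spurious if some
    relay b exists with delay(a,b) + delay(b,c) matching delay(a,c)
    within `tol`."""
    delay = dict(edges)  # {(src, dst): delay}
    tagged = set()
    for (a, c), d_ac in delay.items():
        for (x, b), d_ab in delay.items():
            if x != a or b == c:
                continue
            d_bc = delay.get((b, c))
            if d_bc is not None and abs(d_ab + d_bc - d_ac) <= tol:
                tagged.add((a, c))       # timing consistent with a cascade
    return tagged

# A->B->C cascade plus a direct A->C edge whose delay matches the two-hop path
edges = {("A", "B"): 5, ("B", "C"): 3, ("A", "C"): 8, ("A", "D"): 2}
print(tag_spurious(edges))  # {('A', 'C')}
```

Tagged edges would then be candidates for pruning, yielding the conservative network approximation the abstract describes.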
Earthquake—explosion discrimination using genetic algorithm-based boosting approach
NASA Astrophysics Data System (ADS)
Orlic, Niksa; Loncaric, Sven
2010-02-01
An important and challenging problem in seismic data processing is to discriminate between natural seismic events such as earthquakes and artificial seismic events such as explosions. Many automatic techniques for seismogram classification have been proposed in the literature. Most of these methods take a similar approach: a predefined set of features, chosen by ad hoc feature selection criteria, is extracted from the seismogram waveform or spectral data and used for signal classification. In this paper we propose a novel approach for seismogram classification. A specially formulated genetic algorithm has been employed to automatically search for a near-optimal seismogram feature set, instead of using ad hoc feature selection criteria. A boosting method is added to the genetic algorithm when searching for multiple features in order to improve classification performance. A learning set of seismogram data is used by the genetic algorithm to discover a near-optimal feature set. The feature set identified by the genetic algorithm is then used for seismogram classification. The described method is developed to classify seismograms into two groups, and a brief overview of the method's extension to multiple-group classification is given. For method verification, a learning set consisting of 40 local earthquake seismograms and 40 explosion seismograms was used. The method was validated on a seismogram set consisting of 60 local earthquake seismograms and 60 explosion seismograms, with a correct classification rate of 85%.
Wang, Wenliang; Wang, Haiyan; Yang, Weijia; Zhu, Yunnong; Li, Guoqiang
2016-01-01
High-quality GaN epitaxial films have been grown on Si substrates with Al buffer layer by the combination of molecular beam epitaxy (MBE) and pulsed laser deposition (PLD) technologies. MBE is used to grow Al buffer layer at first, and then PLD is deployed to grow GaN epitaxial films on the Al buffer layer. The surface morphology, crystalline quality, and interfacial property of as-grown GaN epitaxial films on Si substrates are studied systematically. The as-grown ~300 nm-thick GaN epitaxial films grown at 850 °C with ~30 nm-thick Al buffer layer on Si substrates show high crystalline quality with the full-width at half-maximum (FWHM) for GaN(0002) and GaN(102) X-ray rocking curves of 0.45° and 0.61°, respectively; very flat GaN surface with the root-mean-square surface roughness of 2.5 nm; as well as the sharp and abrupt GaN/AlGaN/Al/Si hetero-interfaces. Furthermore, the corresponding growth mechanism of GaN epitaxial films grown on Si substrates with Al buffer layer by the combination of MBE and PLD is hence studied in depth. This work provides a novel and simple approach for the epitaxial growth of high-quality GaN epitaxial films on Si substrates. PMID:27101930
The infection algorithm: an artificial epidemic approach for dense stereo correspondence.
Olague, Gustavo; Fernández, Francisco; Pérez, Cynthia B; Lutton, Evelyne
2006-01-01
We present a new bio-inspired approach applied to a problem of stereo image matching. This approach is based on an artificial epidemic process, which we call the infection algorithm. The problem at hand is a basic one in computer vision for 3D scene reconstruction. It has many complex aspects and is known to be extremely difficult. The aim is to match the contents of two images in order to obtain 3D information that allows the generation of simulated projections from a viewpoint different from those of the initial photographs. This process is known as view synthesis. The algorithm we propose exploits the image contents in order to produce only the necessary 3D depth information, while saving computational time. It is based on a set of distributed rules, which propagate like an artificial epidemic over the images. Experiments on a pair of real images are presented, and realistic reprojected images have been generated. PMID:16953787
Dlouhy, Brian J; Dahdaleh, Nader S; Menezes, Arnold H
2015-04-01
The craniovertebral junction (CVJ), or the craniocervical junction (CCJ) as it is otherwise known, houses the crossroads of the CNS and is composed of the occipital bone that surrounds the foramen magnum, the atlas vertebrae, the axis vertebrae, and their associated ligaments and musculature. The musculoskeletal organization of the CVJ is unique and complex, resulting in a wide range of congenital, developmental, and acquired pathology. The refinements of the transoral approach to the CVJ by the senior author (A.H.M.) in the late 1970s revolutionized the treatment of CVJ pathology. At the same time, a physiological approach to CVJ management was adopted at the University of Iowa Hospitals and Clinics in 1977 based on the stability and motion dynamics of the CVJ and the site of encroachment, incorporating the transoral approach for irreducible ventral CVJ pathology. Since then, approaches and techniques to treat ventral CVJ lesions have evolved. In the last 40 years at the University of Iowa Hospitals and Clinics, multiple approaches to the CVJ have evolved and a better understanding of CVJ pathology has been established. In addition, new reduction strategies that have diminished the need to perform ventral decompressive approaches have been developed and implemented. In this era of surgical subspecialization, to properly treat complex CVJ pathology, the CVJ specialist must be trained in skull base transoral and endoscopic endonasal approaches, pediatric and adult CVJ spine surgery, and must understand and be able to treat the complex CSF dynamics present in CVJ pathology to provide the appropriate, optimal, and tailored treatment strategy for each individual patient, both child and adult. This is a comprehensive review of the history and evolution of the transoral approaches, extended transoral approaches, endoscopic-assisted transoral approaches, endoscopic endonasal approaches, and CVJ reduction strategies. Incorporating these advancements, the authors update the
NASA Astrophysics Data System (ADS)
Abdulsattar, Mudar Ahmed
2016-05-01
Wurtzite nanocrystals of gallium nitride are approached using wurtzoid molecular building blocks. Structural and vibrational properties are investigated for both bare and hydrogen-passivated GaN molecules and small nanocrystals. Wurtzoids are bundles of capped (3, 0) nanotubes that form the wurtzite phase when they reach nanocrystal or bulk sizes. Results show that the experimental bulk gap is generally confined between those of bare and H-passivated wurtzoids. Structural parameters such as bond lengths and bond angles are in good agreement with experimental bulk values. Results for longitudinal optical (LO) vibrational frequencies of the present molecules are red-shifted with respect to the experimental bulk, in agreement with previous studies of other materials. The presently modeled GaN wurtzite nanocrystals and molecules are found suitable for the description of hydrogen sensing in ambient conditions, in agreement with experimental findings. N sites in the GaN wurtzoid are found responsible for the detection of hydrogen molecules. The Ga sites are found to be either oxidized or permanently connected via van der Waals forces to nitrogen or hydrogen molecules.
NASA Technical Reports Server (NTRS)
Sanyal, Soumya; Jain, Amit; Das, Sajal K.; Biswas, Rupak
2003-01-01
In this paper, we propose a distributed approach for mapping a single large application to a heterogeneous grid environment. To minimize the execution time of the parallel application, we distribute the mapping overhead to the available nodes of the grid. This approach not only provides a fast mapping of tasks to resources but is also scalable. We adopt a hierarchical grid model and accomplish the job of mapping tasks to this topology using a scheduler tree. Results show that our three-phase algorithm provides high quality mappings, and is fast and scalable.
NASA Astrophysics Data System (ADS)
Hu, Chuanmin; Lee, Zhongping; Franz, Bryan
2012-01-01
A new empirical algorithm is proposed to estimate surface chlorophyll a (Chl) concentrations in the global ocean for Chl ≤ 0.25 mg m⁻³ (~78% of the global ocean area). The algorithm is based on a color index (CI), defined as the difference between remote-sensing reflectance (Rrs, sr⁻¹) in the green and a reference formed linearly between Rrs in the blue and red. For low-Chl waters, in situ data showed a tighter (and therefore better) relationship between CI and Chl than between traditional band ratios and Chl, which was further validated using global data collected concurrently by ship-borne and Sea-viewing Wide Field-of-view Sensor (SeaWiFS) and Moderate Resolution Imaging Spectroradiometer (MODIS)/Aqua instruments. Model simulations showed that for low-Chl waters, compared with the band-ratio algorithm, the CI-based algorithm (CIA) was more tolerant to changes in chlorophyll-specific backscattering coefficient and performed similarly for different relative contributions of nonphytoplankton absorption. Simulations using existing atmospheric correction approaches further demonstrated that the CIA was much less sensitive than band-ratio algorithms to various errors induced by instrument noise and imperfect atmospheric correction (including sun glint and whitecap corrections). Image and time series analyses of SeaWiFS and MODIS/Aqua data also showed improved performance in terms of reduced image noise, more coherent spatial and temporal patterns, and better consistency between the two sensors. The reduction in noise and other errors is particularly useful to improve the detection of various ocean features such as eddies. Preliminary tests over Medium-Resolution Imaging Spectrometer and Coastal Zone Color Scanner data indicate that the new approach should be generally applicable to all past, current, and future ocean color instruments.
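The color index defined in the abstract (green-band Rrs minus a linear blue-to-red baseline) can be sketched as follows. The band centers below are illustrative SeaWiFS-like values; the published band choices and any empirical CI-to-Chl coefficients may differ.

```python
# Sketch of the color index (CI): Rrs in the green minus a reference formed
# linearly between Rrs in the blue and red. Wavelengths are illustrative
# (SeaWiFS-like), not necessarily the exact published bands.

def color_index(rrs_blue, rrs_green, rrs_red,
                lam_blue=443.0, lam_green=555.0, lam_red=670.0):
    """CI in sr^-1: green Rrs minus the blue-to-red linear baseline at lam_green."""
    frac = (lam_green - lam_blue) / (lam_red - lam_blue)
    baseline = rrs_blue + frac * (rrs_red - rrs_blue)
    return rrs_green - baseline
```

For clear oligotrophic water (blue-dominated Rrs), the green reflectance sits below the baseline and CI is negative, which is the regime where the CIA is applied.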
Jin, Cong; Jin, Shu-Wei
2016-06-01
A number of different gene selection approaches based on gene expression profiles (GEP) have been developed for tumour classification. A gene selection approach selects the most informative genes from the whole gene space, which is an important process for tumour classification using GEP. This study presents an improved swarm intelligent optimisation algorithm to select genes for maintaining the diversity of the population. The most essential characteristic of the proposed approach is that it can automatically determine the number of the selected genes. On the basis of the gene selection, the authors construct a variety of the tumour classifiers, including the ensemble classifiers. Four gene datasets are used to evaluate the performance of the proposed approach. The experimental results confirm that the proposed classifiers for tumour classification are indeed effective. PMID:27187989
NASA Astrophysics Data System (ADS)
Rocha, M. C.; Saraiva, J. T.
2012-10-01
The basic objective of Transmission Expansion Planning (TEP) is to schedule a number of transmission projects along an extended planning horizon minimizing the network construction and operational costs while satisfying the requirement of delivering power safely and reliably to load centres along the horizon. This principle is quite simple, but the complexity of the problem and its impact on society transform TEP into a challenging issue. This paper describes a new approach to solve the dynamic TEP problem, based on an improved discrete integer version of the Evolutionary Particle Swarm Optimization (EPSO) meta-heuristic algorithm. The paper includes sections describing in detail the EPSO enhanced approach, the mathematical formulation of the TEP problem, including the objective function and the constraints, and a section devoted to the application of the developed approach to this problem. Finally, the use of the developed approach is illustrated using a case study based on the IEEE 24 bus 38 branch test system.
A novel ROC approach for performance evaluation of target detection algorithms
NASA Astrophysics Data System (ADS)
Ganapathy, Priya; Skipper, Julie A.
2007-04-01
Receiver operating characteristic (ROC) analysis is an emerging automated target recognition system performance assessment tool. The ROC metric, area under the curve (AUC), is a universally accepted measure of classification accuracy. In the presented approach, the detection algorithm output, i.e., a response plane (RP), must consist of grayscale values wherein a maximum value (e.g., 255) corresponds to the highest probability of target locations. AUC computation involves the comparison of the RP and the ground truth to classify RP pixels as true positives (TP), true negatives (TN), false positives (FP), or false negatives (FN). Ideally, the background and all objects other than targets are TN. Historically, evaluation methods have excluded the background, and only a few spoof objects likely to be considered as a hit by detection algorithms were a priori demarcated as TN. This can potentially exaggerate the algorithm's performance. Here, a new ROC approach has been developed that divides the entire image into mutually exclusive target (TP) and background (TN) grid squares with adjustable size. Based on the overlap of the thresholded RP with the TP and TN grids, the FN and FP fractions are computed. Variation of the grid square size can bias the ROC results by artificially altering specificity, so an assessment of relative performance under a constant grid square size is adopted in our approach. A pilot study was performed to assess the method's ability to capture RP changes under three different detection algorithm parameter settings on ten images with different backgrounds and target orientations. An ANOVA-based comparison of the AUCs for the three settings showed a significant difference (p < 0.001) at the 95% confidence level.
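A minimal sketch of the grid-based ROC idea described above: the image is tiled into mutually exclusive grid squares labeled target or background from ground truth, the response plane is swept over thresholds to produce (FPF, TPF) pairs, and AUC is integrated. Scoring each square by its maximum response is an assumption for illustration, not necessarily the authors' exact rule.

```python
import numpy as np

def grid_roc_auc(response, truth, grid=8):
    """AUC from grid-square classification: every grid square is either a
    target square (contains any ground-truth pixel) or a background square."""
    h, w = response.shape
    scores, labels = [], []
    for i in range(0, h, grid):
        for j in range(0, w, grid):
            scores.append(response[i:i+grid, j:j+grid].max())  # strongest response
            labels.append(int(truth[i:i+grid, j:j+grid].any()))  # 1 = target square
    scores, labels = np.array(scores), np.array(labels)
    # Sweep thresholds over observed scores; trapezoidal AUC over (FPF, TPF).
    thr = np.unique(scores)[::-1]
    tpf = [(scores >= t)[labels == 1].mean() for t in thr]
    fpf = [(scores >= t)[labels == 0].mean() for t in thr]
    return np.trapz([0.0] + tpf + [1.0], [0.0] + fpf + [1.0])
```

Because the whole background is tiled into TN squares rather than a handful of hand-picked spoof objects, a detector that lights up on clutter is penalized, addressing the exaggeration issue the abstract raises.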
Review and Analysis of Algorithmic Approaches Developed for Prognostics on CMAPSS Dataset
NASA Technical Reports Server (NTRS)
Ramasso, Emannuel; Saxena, Abhinav
2014-01-01
Benchmarking of prognostic algorithms has been challenging due to limited availability of common datasets suitable for prognostics. In an attempt to alleviate this problem several benchmarking datasets have been collected by NASA's Prognostics Center of Excellence and made available to the Prognostics and Health Management (PHM) community to allow evaluation and comparison of prognostics algorithms. Among those datasets are five C-MAPSS datasets that have been extremely popular due to their unique characteristics making them suitable for prognostics. The C-MAPSS datasets pose several challenges that have been tackled by different methods in the PHM literature. In particular, management of high variability due to sensor noise, effects of operating conditions, and presence of multiple simultaneous fault modes are some factors that have great impact on the generalization capabilities of prognostics algorithms. More than 70 publications have used the C-MAPSS datasets for developing data-driven prognostic algorithms. The C-MAPSS datasets are also shown to be well-suited for development of new machine learning and pattern recognition tools for several key preprocessing steps such as feature extraction and selection, failure mode assessment, operating conditions assessment, health status estimation, uncertainty management, and prognostics performance evaluation. This paper summarizes a comprehensive literature review of publications using C-MAPSS datasets and provides guidelines and references to further usage of these datasets in a manner that allows clear and consistent comparison between different approaches.
Modeling and Algorithmic Approaches to Constitutively-Complex, Micro-structured Fluids
Forest, Mark Gregory
2014-05-06
The team for this Project made significant progress on modeling and algorithmic approaches to hydrodynamics of fluids with complex microstructure. Our advances are broken down into modeling and algorithmic approaches. In experiments, a driven magnetic bead in a complex fluid accelerates out of the Stokes regime and settles into another apparent linear response regime. The modeling explains the take-off as a deformation of entanglements, and the long-time behavior is a nonlinear, far-from-equilibrium property. Furthermore, the model has predictive value, as we can tune microstructural properties relative to the magnetic force applied to the bead to exhibit all possible behaviors. Wave-theoretic probes of complex fluids have been extended in two significant directions, to small volumes and the nonlinear regime. Heterogeneous stress and strain features that lie beyond experimental capability were studied. It was shown that nonlinear penetration of boundary stress in confined viscoelastic fluids is not monotone, indicating the possibility of interlacing layers of linear and nonlinear behavior, and thus layers of variable viscosity. Models, algorithms, and codes were developed and simulations performed leading to phase diagrams of nanorod dispersion hydrodynamics in parallel shear cells and confined cavities representative of film and membrane processing conditions. Hydrodynamic codes for polymeric fluids are extended to include coupling between microscopic and macroscopic models, and to the strongly nonlinear regime.
MRPack: Multi-Algorithm Execution Using Compute-Intensive Approach in MapReduce.
Idris, Muhammad; Hussain, Shujaat; Siddiqi, Muhammad Hameed; Hassan, Waseem; Syed Muhammad Bilal, Hafiz; Lee, Sungyoung
2015-01-01
Large quantities of data have been generated from multiple sources at exponential rates in the last few years. These data are generated at high velocity as real time and streaming data in variety of formats. These characteristics give rise to challenges in its modeling, computation, and processing. Hadoop MapReduce (MR) is a well known data-intensive distributed processing framework using the distributed file system (DFS) for Big Data. Current implementations of MR only support execution of a single algorithm in the entire Hadoop cluster. In this paper, we propose MapReducePack (MRPack), a variation of MR that supports execution of a set of related algorithms in a single MR job. We exploit the computational capability of a cluster by increasing the compute-intensiveness of MapReduce while maintaining its data-intensive approach. It uses the available computing resources by dynamically managing the task assignment and intermediate data. Intermediate data from multiple algorithms are managed using multi-key and skew mitigation strategies. The performance study of the proposed system shows that it is time, I/O, and memory efficient compared to the default MapReduce. The proposed approach reduces the execution time by 200% with an approximate 50% decrease in I/O cost. Complexity and qualitative results analysis shows significant performance improvement. PMID:26305223
Metamorphic approach to single quantum dot emission at 1.55 μm on GaAs substrate
Semenova, E. S.; Hostein, R.; Patriarche, G.; Mauguin, O.; Largeau, L.; Robert-Philip, I.; Beveratos, A.; Lemaitre, A.
2008-05-15
We report on the fabrication and the characterization of InAs quantum dots (QDs) embedded in an indium-rich In0.42Ga0.58As metamorphic matrix grown on a GaAs substrate. Growth conditions were chosen so as to minimize the number of threading dislocations and other defects produced during the plastic relaxation. Sharp and bright lines, originating from the emission of a few isolated single quantum dots, were observed in microphotoluminescence around 1.55 μm at 5 K. They exhibit, in particular, a characteristic exciton/biexciton behavior. These QDs could offer an interesting alternative to other approaches such as InAs/InP QDs for the realization of single photon emitters at telecom wavelengths.
Tumuluru, J.S.; Sokhansanj, Shahabaddine
2008-12-01
In the present study, response surface methodology (RSM) and a genetic algorithm (GA) were used to study the effects of process variables, namely screw speed (rpm; x1), L/D ratio (x2), barrel temperature (°C; x3), and feed mix moisture content (%; x4), on the flow rate of biomass during single-screw extrusion cooking. A second-order regression equation was developed for flow rate in terms of the process variables. The significance of the process variables based on a Pareto chart indicated that screw speed and feed mix moisture content had the most influence, followed by L/D ratio and barrel temperature. RSM analysis indicated that a screw speed >80 rpm, L/D ratio >12, barrel temperature >80 °C, and feed mix moisture content >20% resulted in maximum flow rate. Increases in screw speed and L/D ratio increased the drag flow and also the path of traverse of the feed mix inside the extruder, resulting in more shear. The presence of about 35% lipids in the biomass feed mix might have induced a lubrication effect and significantly influenced the flow rate. The second-order regression equations were further used as the objective function for optimization using the genetic algorithm. A population of 100 and 100 iterations successfully led to convergence to the optimum. The maximum and minimum flow rates obtained using the GA were 13.19 × 10⁻⁷ m³/s (x1 = 139.08 rpm, x2 = 15.90, x3 = 99.56 °C, and x4 = 59.72%) and 0.53 × 10⁻⁷ m³/s (x1 = 59.65 rpm, x2 = 11.93, x3 = 68.98 °C, and x4 = 20.04%), respectively.
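The RSM-plus-GA workflow above can be sketched as a real-coded genetic algorithm maximizing a second-order response surface under the stated variable bounds (population of 100, 100 generations). The quadratic stand-in for the fitted flow-rate model and the GA operators below are illustrative assumptions, not the published regression or implementation.

```python
import random

# Variable bounds roughly matching the abstract: screw speed (rpm), L/D ratio,
# barrel temperature (deg C), feed mix moisture content (%).
BOUNDS = [(40, 140), (11, 16), (60, 100), (20, 60)]

def flow_rate(x):
    # Hypothetical second-order surface standing in for the fitted regression.
    return sum(0.1 * xi for xi in x) - 0.001 * (x[0] - 120) ** 2

def ga_maximize(f, bounds, pop_size=100, gens=100, mut=0.1, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=f, reverse=True)          # rank by fitness
        elite = pop[:pop_size // 2]            # elitist selection keeps top half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            child = [(ai + bi) / 2 for ai, bi in zip(a, b)]  # arithmetic crossover
            for k, (lo, hi) in enumerate(bounds):            # bounded mutation
                if rng.random() < mut:
                    child[k] = min(hi, max(lo, child[k] + rng.gauss(0, 0.1 * (hi - lo))))
            children.append(child)
        pop = elite + children
    return max(pop, key=f)

best = ga_maximize(flow_rate, BOUNDS)
```

With elitism the best individual is never lost, so fitness is monotone non-decreasing over generations, mirroring the convergence behavior reported in the abstract.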
Development of an Aircraft Approach and Departure Atmospheric Profile Generation Algorithm
NASA Technical Reports Server (NTRS)
Buck, Bill K.; Velotas, Steven G.; Rutishauser, David K. (Technical Monitor)
2004-01-01
In support of NASA's Virtual Airspace Modeling and Simulation (VAMS) project, an effort was initiated to develop and test techniques for extracting meteorological data from landing and departing aircraft, and for building altitude-based profiles for key meteorological parameters from these data. The generated atmospheric profiles will be used as inputs to NASA's Aircraft Vortex Spacing System (AVOSS) Prediction Algorithm (APA) for benefits and trade analysis. A Wake Vortex Advisory System (WakeVAS) is being developed to apply weather and wake prediction and sensing technologies with procedures to reduce current wake separation criteria when safe and appropriate to increase airport operational efficiency. The purpose of this report is to document the initial theory and design of the Aircraft Approach and Departure Atmospheric Profile Generation Algorithm.
Branch-pipe-routing approach for ships using improved genetic algorithm
NASA Astrophysics Data System (ADS)
Sui, Haiteng; Niu, Wentie
2016-05-01
Branch-pipe routing plays fundamental and critical roles in ship-pipe design. The branch-pipe-routing problem is a complex combinatorial optimization problem and is thus difficult to solve when depending only on human experts. A modified genetic-algorithm-based approach is proposed in this paper to solve this problem. The simplified layout space is first divided into three-dimensional (3D) grids to build its mathematical model. Branch pipes in layout space are regarded as a combination of several two-point pipes, and the pipe route between two connection points is generated using an improved maze algorithm. The coding of branch pipes is then defined, and the genetic operators are devised, especially the complete crossover strategy that greatly accelerates the convergence speed. Finally, simulation tests demonstrate the performance of the proposed method.
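The two-point routing step can be illustrated with a plain Lee-style maze (breadth-first) search on an occupancy grid. The paper's improved maze algorithm works on 3D grids with additional pipe constraints, so this 2D sketch only shows the core idea of growing a wavefront from one connection point to the other and backtracking the shortest route.

```python
from collections import deque

def maze_route(grid, start, goal):
    """Lee-style maze routing (BFS) on a 2D occupancy grid: 0 = free, 1 = blocked.
    Returns the shortest cell path from start to goal, or None if unreachable.
    A simplified stand-in for the paper's improved maze algorithm on 3D grids."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}           # breadcrumb trail for path reconstruction
    q = deque([start])
    while q:
        cell = q.popleft()
        if cell == goal:           # backtrack along breadcrumbs
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                q.append((nr, nc))
    return None
```

In the paper's scheme, a branch pipe is a combination of several such two-point routes, and the GA searches over the connection-point assignments and route encodings.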
The new approach for infrared target tracking based on the particle filter algorithm
NASA Astrophysics Data System (ADS)
Sun, Hang; Han, Hong-xia
2011-08-01
…to further improve tracking performance. Experimental results show that the algorithm compensates for the particle filter's heavy computational burden and effectively overcomes the tendency of mean shift to fall into a local extreme value rather than the global maximum. Finally, because gray-level and target motion information are fused, the approach also suppresses interference from the background, ultimately improving the stability and real-time performance of target tracking.
Single-shot x-ray phase contrast imaging with an algorithmic approach using spectral detection
NASA Astrophysics Data System (ADS)
Das, Mini; Park, Chan-Soo; Fredette, Nathaniel R.
2016-04-01
X-ray phase contrast imaging has been investigated during the last two decades for potential benefits in soft tissue imaging. Long imaging times, high radiation dose, and general measurement complexity involving motion of x-ray optical components have prevented the clinical translation of these methods. In all existing popular phase contrast imaging methods, multiple measurements per projection angle involving motion of optical components are required to achieve quantitatively accurate estimation of absorption, phase and differential phase. Recently, we proposed an algorithmic approach to use spectral detection data in a phase contrast imaging setup to obtain absorption, phase and differential phase in a single step. Our generic approach has been shown via simulations in all three types of phase contrast imaging: propagation, coded aperture and grating interferometry. While other groups have used spectral detectors in phase contrast imaging setups, our proposed method is unique in outlining an approach to use this spectral data to simplify phase contrast imaging. In this abstract we show the first experimental proof of our single-shot phase retrieval using a Medipix3 photon counting detector in an edge illumination aperture (also referred to as coded aperture) phase contrast setup as well as for a free-space propagation setup. Our preliminary results validate our new transport equation for edge illumination PCI and our spectral phase retrieval algorithm for both PCI methods being investigated. Comparison with simulations also points to excellent performance of the Medipix3's built-in charge-sharing correction mechanism.
A genetic algorithms approach for altering the membership functions in fuzzy logic controllers
NASA Technical Reports Server (NTRS)
Shehadeh, Hana; Lea, Robert N.
1992-01-01
Through previous work, a fuzzy control system was developed to perform translational and rotational control of a space vehicle. This problem was then re-examined to determine the effectiveness of genetic algorithms on fine tuning the controller. This paper explains the problems associated with the design of this fuzzy controller and offers a technique for tuning fuzzy logic controllers. A fuzzy logic controller is a rule-based system that uses fuzzy linguistic variables to model human rule-of-thumb approaches to control actions within a given system. This 'fuzzy expert system' features rules that direct the decision process and membership functions that convert the linguistic variables into the precise numeric values used for system control. Defining the fuzzy membership functions is the most time-consuming aspect of the controller design. One single change in the membership functions could significantly alter the performance of the controller. This membership function definition can be accomplished by using a trial and error technique to alter the membership functions, creating a highly tuned controller. This approach can be time-consuming and requires a great deal of knowledge from human experts. In order to shorten development time, an iterative procedure for altering the membership functions to create a tuned set that used a minimal amount of fuel for velocity vector approach and station-keeping maneuvers was developed. Genetic algorithms, search techniques used for optimization, were utilized to solve this problem.
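As a concrete illustration of what such a GA tunes, each linguistic term can be represented as a triangular membership function whose three breakpoints form part of the chromosome. The term names and breakpoint values below are hypothetical, not taken from the controller in the paper.

```python
# Each fuzzy linguistic term is a triangular membership function defined by
# three breakpoints (a, b, c); a GA chromosome can simply concatenate the
# breakpoints of every term. Values here are illustrative only.

def tri_mf(x, a, b, c):
    """Degree of membership: rises from 0 at a to 1 at b, falls back to 0 at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify(x, terms):
    """Map a crisp input to a membership degree for each linguistic term."""
    return {name: tri_mf(x, *abc) for name, abc in terms.items()}

# Hypothetical terms for a normalized closing-velocity input.
velocity_terms = {
    "slow":   (0.0, 0.1, 0.5),
    "medium": (0.2, 0.5, 0.8),
    "fast":   (0.5, 0.9, 1.0),
}
```

Shifting any breakpoint reshapes the controller's response, which is why hand tuning is slow and why encoding the breakpoints as GA genes (with fuel use as the fitness) automates the search.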
A Genetic Algorithm Approach to Motion Sensor Placement in Smart Environments
Thomas, Brian L.; Crandall, Aaron S.; Cook, Diane J.
2016-01-01
Smart environments and ubiquitous computing technologies hold great promise for a wide range of real world applications. The medical community is particularly interested in high quality measurement of activities of daily living. With accurate computer modeling of older adults, decision support tools may be built to assist care providers. One aspect of effectively deploying these technologies is determining where the sensors should be placed in the home to effectively support these end goals. This work introduces and evaluates a set of approaches for generating sensor layouts in the home. These approaches range from the gold standard of human intuition-based placement to more advanced search algorithms, including Hill Climbing and Genetic Algorithms. The generated layouts are evaluated based on their ability to detect activities while minimizing the number of needed sensors. Sensor-rich environments can provide valuable insights about adults as they go about their lives. These sensors, once in place, provide information on daily behavior that can facilitate an aging-in-place approach to health care. PMID:27453810
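A hedged sketch of the hill-climbing variant mentioned above: layouts are bit vectors over candidate sensor positions, scored by activities detected minus a per-sensor penalty. The coverage model (`covers`) and the penalty weight are assumptions for illustration, not the paper's evaluation function.

```python
import random

def hill_climb_layout(covers, penalty=0.5, iters=2000, seed=1):
    """Single-flip hill climbing over sensor subsets. covers[i] is the
    (hypothetical) set of activity events sensor i can detect; the score
    rewards detected events and penalizes every sensor deployed."""
    rng = random.Random(seed)
    n = len(covers)
    layout = [rng.random() < 0.5 for _ in range(n)]

    def score(lay):
        detected = set().union(*(covers[i] for i in range(n) if lay[i]))
        return len(detected) - penalty * sum(lay)

    best = score(layout)
    for _ in range(iters):
        i = rng.randrange(n)
        layout[i] = not layout[i]        # try toggling one sensor
        s = score(layout)
        if s >= best:                    # accept sideways moves to cross plateaus
            best = s
        else:
            layout[i] = not layout[i]    # revert a worsening flip
    return layout, best
```

A genetic algorithm replaces the single-flip move with crossover and mutation over a population of layouts, which the paper compares against both hill climbing and intuition-based placement.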
Liu, Yuangang; Guo, Qingsheng; Sun, Yageng; Ma, Xiaoya
2014-01-01
Scale reduction from source to target maps inevitably leads to conflicts of map symbols in cartography and geographic information systems (GIS). Displacement is one of the most important map generalization operators and it can be used to resolve the problems that arise from conflict among two or more map objects. In this paper, we propose a combined approach based on constraint Delaunay triangulation (CDT) skeleton and improved elastic beam algorithm for automated building displacement. In this approach, map data sets are first partitioned. Then the displacement operation is conducted in each partition as a cyclic and iterative process of conflict detection and resolution. In the iteration, the skeleton of the gap spaces is extracted using CDT. It then serves as an enhanced data model to detect conflicts and construct the proximity graph. Then, the proximity graph is adjusted using local grouping information. Under the action of forces derived from the detected conflicts, the proximity graph is deformed using the improved elastic beam algorithm. In this way, buildings are displaced to find an optimal compromise between related cartographic constraints. To validate this approach, two topographic map data sets (i.e., urban and suburban areas) were tested. The results were reasonable with respect to each constraint when the density of the map was not extremely high. In summary, the improvements include (1) an automated parameter-setting method for elastic beams, (2) explicit enforcement regarding the positional accuracy constraint, added by introducing drag forces, (3) preservation of local building groups through displacement over an adjusted proximity graph, and (4) an iterative strategy that is more likely to resolve the proximity conflicts than the one used in the existing elastic beam algorithm. PMID:25470727
A genetic algorithm approach to probing the evolution of self-organized nanostructured systems.
Siepmann, Peter; Martin, Christopher P; Vancea, Ioan; Moriarty, Philip J; Krasnogor, Natalio
2007-07-01
We present a new methodology, based on a combination of genetic algorithms and image morphometry, for matching the outcome of a Monte Carlo simulation to experimental observations of a far-from-equilibrium nanosystem. The Monte Carlo model used simulates a colloidal solution of nanoparticles drying on a solid substrate and has previously been shown to produce patterns very similar to those observed experimentally. Our approach enables the broad parameter space associated with simulated nanoparticle self-organization to be searched effectively for a given experimental target morphology. PMID:17552572
New approach for motion coordination of a mobile manipulator using fuzzy behavioral algorithms
NASA Astrophysics Data System (ADS)
Haeusler, Kurt; Klement, Erich P.; Zeichen, Gerfried
1998-10-01
In this paper a new approach for the coordination of the motion axes of a mobile manipulator, based on fuzzy behavioral algorithms, and its implementation on a physical demonstrator are presented. The kinematic redundancy of the overall system (consisting of a 7 DOF manipulator and a 3 DOF mobile robot) will be used for autonomous and reactive motion of the mobile manipulator within poorly structured and even dynamically changing surroundings. Sensors around the mobile platform and along the manipulator will provide the necessary information for navigation purposes and perception of the environment.
NASA Technical Reports Server (NTRS)
Blackwell, William C., Jr.
2004-01-01
In this paper space is modeled as a lattice of Compton wave oscillators (CWOs) of near-Planck size. It is shown that gravitation and special relativity emerge from the interaction between particles' Compton waves. To develop this CWO model an algorithmic approach was taken, incorporating simple rules of interaction at the Planck scale developed using well-known physical laws. This technique naturally leads to Newton's law of gravitation and a new form of doubly special relativity. The model is in apparent agreement with the holographic principle, and it predicts a cutoff energy for ultrahigh-energy cosmic rays that is consistent with observational data.
Omidi-Kashani, Farzad; Ebrahimzadeh, Mohamad Hossein; Salari, Saman
2014-12-01
Lumbar spondylolysis and spondylolisthesis are common spinal disorders that are most often incidental findings or respond favorably to conservative treatment. In a small percentage of patients, surgical intervention becomes necessary. Because so much attention has been paid to novel surgical techniques and modern spinal implants, some fundamental concepts have been forgotten. Identifying the small but important number of patients with lumbar spondylolysis or spondylolisthesis who would really benefit from lumbar surgery is one of those forgotten concepts. In this paper, we have developed an algorithmic approach to determine who is a good candidate for surgery for lumbar spondylolysis or spondylolisthesis. PMID:25558333
Compromise Approach-Based Genetic Algorithm for Constrained Multiobjective Portfolio Selection Model
NASA Astrophysics Data System (ADS)
Li, Jun
In this paper, fuzzy set theory is incorporated into a multiobjective portfolio selection model that takes into account three criteria important to investors: return, risk and liquidity. The cardinality constraint, the buy-in threshold constraint and round-lot constraints are considered in the proposed model. To overcome the difficulty of evaluating a large set of efficient solutions and selecting the best one on the non-dominated surface, a compromise approach-based genetic algorithm is presented to obtain a compromise solution for the proposed constrained multiobjective portfolio selection model.
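The compromise step itself, picking one solution from a non-dominated set, can be illustrated independently of the GA. A minimal sketch, assuming all criteria are already oriented so that larger is better (e.g. return, negated risk, liquidity) and using the distance to the ideal point on min-max scaled criteria; the scaling and Euclidean distance are illustrative choices, not the paper's exact formulation:

```python
import numpy as np

def compromise_solution(objectives, weights=None):
    """Return the index of the row of `objectives` closest to the ideal point.

    objectives: (n_solutions, n_criteria), every criterion oriented so that
    larger is better. The ideal point takes the best value of each criterion;
    distances are measured on criteria min-max scaled to [0, 1]."""
    obj = np.asarray(objectives, dtype=float)
    lo, hi = obj.min(axis=0), obj.max(axis=0)
    scaled = (obj - lo) / np.where(hi > lo, hi - lo, 1.0)
    ideal = scaled.max(axis=0)            # best attainable in each criterion
    w = np.ones(obj.shape[1]) if weights is None else np.asarray(weights, float)
    dist = np.sqrt(((w * (ideal - scaled)) ** 2).sum(axis=1))
    return int(dist.argmin())
```

With solutions (return, liquidity) = (1, 0), (0, 1) and (0.8, 0.8), the balanced third solution is selected; a weight vector emphasizing one criterion shifts the choice toward the corresponding extreme.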
A Heuristic Approach Based on Clarke-Wright Algorithm for Open Vehicle Routing Problem
2013-01-01
We propose a heuristic approach based on the Clarke-Wright algorithm (CW) to solve the open version of the well-known capacitated vehicle routing problem, in which vehicles are not required to return to the depot after completing service. The proposed heuristic consists of four procedures: Clarke-Wright formula modification, open-route construction, two-phase selection, and route post-improvement. Computational results show that the proposed heuristic is competitive and outperforms the classical CW in all respects. Moreover, the best known solution is obtained in 97% of the tested instances (60 out of 62). PMID:24382948
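The open-route savings idea can be sketched as follows. This is a generic illustration, not the authors' four-procedure heuristic: the modified saving s(i, j) = d(0, j) - d(i, j), i.e. the depot leg removed when route j is appended after customer i, is one common adaptation of the Clarke-Wright formula to open routes, used here as an assumption:

```python
import itertools

def open_vrp_savings(dist, demand, capacity):
    """Greedy Clarke-Wright-style construction for the open VRP.

    dist: (n+1)x(n+1) symmetric distance matrix, node 0 is the depot.
    demand[i]: demand of customer i (demand[0] ignored).
    Routes run depot -> ... -> last customer, with no return leg."""
    n = len(dist) - 1
    routes = {i: [i] for i in range(1, n + 1)}   # each customer starts alone
    route_of = {i: i for i in range(1, n + 1)}
    load = {i: demand[i] for i in range(1, n + 1)}
    # open-route saving for appending j's route after customer i
    savings = sorted(((dist[0][j] - dist[i][j], i, j)
                      for i, j in itertools.permutations(range(1, n + 1), 2)),
                     reverse=True)
    for s, i, j in savings:
        if s <= 0:
            break
        ri, rj = route_of[i], route_of[j]
        if ri == rj:
            continue
        # merge only if i ends its route and j starts the other route
        if routes[ri][-1] != i or routes[rj][0] != j:
            continue
        if load[ri] + load[rj] > capacity:
            continue
        routes[ri].extend(routes[rj])
        load[ri] += load[rj]
        for k in routes[rj]:
            route_of[k] = ri
        del routes[rj], load[rj]
    return list(routes.values())
```

On a toy line instance (depot at 0, customers at distances 1, 2 and 10, unit demands, capacity 2), the largest saving chains customers 2 and 3 into one open route while customer 1 stays alone.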
Brasier, Martin D; Antcliffe, Jonathan; Saunders, Martin; Wacey, David
2015-04-21
New analytical approaches and discoveries are demanding fresh thinking about the early fossil record. The 1.88-Ga Gunflint chert provides an important benchmark for the analysis of early fossil preservation. High-resolution analysis of Gunflintia shows that microtaphonomy can help to resolve long-standing paleobiological questions. Novel 3D nanoscale reconstructions of the most ancient complex fossil Eosphaera reveal features hitherto unmatched in any crown-group microbe. While Eosphaera may preserve a symbiotic consortium, a stronger conclusion is that multicellular morphospace was differently occupied in the Paleoproterozoic. The 3.46-Ga Apex chert provides a test bed for claims of biogenicity of cell-like structures. Mapping plus focused ion beam milling combined with transmission electron microscopy data demonstrate that microfossil-like taxa, including species of Archaeoscillatoriopsis and Primaevifilum, are pseudofossils formed from vermiform phyllosilicate grains during hydrothermal alteration events. The 3.43-Ga Strelley Pool Formation shows that plausible early fossil candidates are turning up in unexpected environmental settings. Our data reveal how cellular clusters of unexpectedly large coccoids and tubular sheath-like envelopes were trapped between sand grains and entombed within coatings of dripstone beach-rock silica cement. These fossils come from Earth's earliest known intertidal to supratidal shoreline deposit, accumulated under aerated but oxygen poor conditions. PMID:25901305
A discrete twin-boundary approach for simulating the magneto-mechanical response of Ni–Mn–Ga
NASA Astrophysics Data System (ADS)
Faran, Eilon; Shilo, Doron
2016-09-01
The design and optimization of ferromagnetic shape memory alloy (FSMA)-based devices require a quantitative understanding of the dynamics of twin boundaries within these materials. Here, we present a discrete twin-boundary modeling approach for simulating the behavior of an FSMA Ni–Mn–Ga crystal under combined magneto-mechanical loading conditions. The model is based on experimentally measured kinetic relations that describe the motion of individual twin boundaries over a wide range of velocities. The resulting calculations capture the dynamic response of Ni–Mn–Ga and reveal the relations between fundamental material parameters and actuation performance at different frequencies of the magnetic field. In particular, we show that at high field rates, the magnitude of the lattice barrier that resists twin boundary motion is the important property that determines the level of actuation strain, while the contribution of the twinning stress is minor. Consequently, type II twin boundaries, whose lattice barrier is smaller than that of type I, are expected to show better actuation performance at high rates, irrespective of the differences in the twinning stress between the two boundary types. In addition, the simulation enables optimization of the actuation strain of a Ni–Mn–Ga crystal by adjusting the magnitude of the bias mechanical stress, thus providing direct guidelines for the design of actuating devices. Finally, we show that the use of a linear kinetic law for simulating the twinning-based response is inadequate and results in incorrect predictions.
A conflict-free, path-level parallelization approach for sequential simulation algorithms
NASA Astrophysics Data System (ADS)
Rasera, Luiz Gustavo; Machado, Péricles Lopes; Costa, João Felipe C. L.
2015-07-01
Pixel-based simulation algorithms are the most widely used geostatistical technique for characterizing the spatial distribution of natural resources. However, sequential simulation does not scale well for stochastic simulation on very large grids, which are now commonly found in many petroleum, mining, and environmental studies. With the availability of multiple-processor computers, there is an opportunity to develop parallelization schemes for these algorithms to increase their performance and efficiency. Here we present a conflict-free, path-level parallelization strategy for sequential simulation. The method consists of partitioning the simulation grid into a set of groups of nodes and delegating all available processors for simulation of multiple groups of nodes concurrently. An automated classification procedure determines which groups are simulated in parallel according to their spatial arrangement in the simulation grid. The major advantage of this approach is that it does not require conflict resolution operations, and thus allows exact reproduction of results. Besides offering a large performance gain when compared to the traditional serial implementation, the method provides efficient use of computational resources and is generic enough to be adapted to several sequential algorithms.
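The grouping idea, classifying groups of grid nodes so that concurrently simulated groups cannot conflict, can be sketched with a simple block-coloring scheme. This is an assumed illustration, not the paper's automated classification procedure: blocks larger than the search radius are colored in a 2x2 pattern, so all blocks of one color can be simulated in parallel without any two concurrent nodes falling inside each other's neighborhood:

```python
import itertools

def conflict_free_groups(nx, ny, radius):
    """Partition an nx-by-ny grid into 4 groups of node blocks such that any
    two nodes in different blocks of the same group are more than `radius`
    apart (Chebyshev distance), so one group's blocks can run concurrently."""
    block = radius + 1                  # block edge exceeds the search radius
    groups = {c: [] for c in itertools.product(range(2), repeat=2)}
    for bx in range((nx + block - 1) // block):
        for by in range((ny + block - 1) // block):
            nodes = [(x, y)
                     for x in range(bx * block, min((bx + 1) * block, nx))
                     for y in range(by * block, min((by + 1) * block, ny))]
            groups[(bx % 2, by % 2)].append(nodes)
    return list(groups.values())
```

Because same-colored blocks are always separated by at least one block of another color, no conflict-resolution step is needed and the simulation path can be reproduced exactly, which is the property the paper emphasizes.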
An algorithmic and information-theoretic approach to multimetric index construction
Schoolmaster, Donald R., Jr.; Grace, James B.; Schweiger, E. William; Guntenspergen, Glenn R.; Mitchell, Brian R.; Miller, Kathryn M.; Little, Amanda M.
2013-01-01
The use of multimetric indices (MMIs), such as the widely used index of biological integrity (IBI), to measure, track, summarize and infer the overall impact of human disturbance on biological communities has been growing steadily in recent years. Initially, MMIs were developed for aquatic communities using pre-selected biological metrics as indicators of system integrity. As interest in these bioassessment tools has grown, so have the types of biological systems to which they are applied. For many ecosystem types, the appropriate biological metrics to use as measures of biological integrity are not known a priori. As a result, a variety of ad hoc protocols for selecting metrics empirically have been developed. However, the assumptions made by proposed protocols have not been explicitly described or justified, causing many investigators to call for a clear, repeatable methodology for developing empirically derived metrics and indices that can be applied to any biological system. An issue of particular importance that has not been sufficiently addressed is the way individual metrics combine to produce an MMI that is a sensitive composite indicator of human disturbance. In this paper, we present and demonstrate an algorithm for constructing MMIs given a set of candidate metrics and a measure of human disturbance. The algorithm uses each metric to inform a candidate MMI, and then uses information-theoretic principles to select MMIs that capture the information in the multidimensional system response from among possible MMIs. Such an approach can be used to create purely empirical (data-based) MMIs or can, optionally, be influenced by expert opinion or biological theory through the use of a weighting vector to create value-weighted MMIs. We demonstrate the algorithm with simulated data to show the predictive capacity of the final MMIs and with real data from wetlands in Acadia and Rocky Mountain National Parks. For the Acadia wetland data, the algorithm identified
Combined mixed approach algorithm for in-line phase-contrast x-ray imaging
De Caro, Liberato; Scattarella, Francesco; Giannini, Cinzia; Tangaro, Sabina; Rigon, Luigi; Longo, Renata; Bellotti, Roberto
2010-07-15
Purpose: In the past decade, phase-contrast imaging (PCI) has been applied to study different kinds of tissues and human body parts, with improved image quality with respect to simple absorption radiography. A technique closely related to PCI is phase-retrieval imaging (PRI). PCI is an imaging modality intended to enhance the total contrast of the images through the phase shift introduced by the object (human body part); PRI is a mathematical technique to extract the quantitative phase-shift map from PCI. A new phase-retrieval algorithm for in-line phase-contrast x-ray imaging is proposed here. Methods: The proposed algorithm is based on a mixed transfer-function and transport-of-intensity approach (MA), and it requires, at most, an initial approximate estimate of the average phase shift introduced by the object as prior knowledge. The accuracy of the initial estimate determines the convergence speed of the algorithm. The proposed algorithm retrieves both the object phase and its complex conjugate in a combined MA (CMA). Results: Although slightly less computationally effective than other mixed-approach algorithms, as two phases have to be retrieved, the results obtained by the CMA on simulated data show that the reconstructed phase maps are characterized by particularly low normalized mean square errors. The authors have also tested the CMA on noisy experimental phase-contrast data obtained from a suitable weakly absorbing sample consisting of a grid of submillimetric nylon fibers, as well as on a strongly absorbing object made of a 0.03 mm thick lead x-ray resolution star pattern. The CMA has shown good efficiency in recovering phase information, even in the presence of noisy data characterized by peak-to-peak signal-to-noise ratios down to a few dBs, showing the possibility of enhancing with phase radiography the signal-to-noise ratio for features on the submillimetric scale with respect to the attenuation
An algorithmic strategy for selecting a surgical approach in cervical deformity correction.
Hann, Shannon; Chalouhi, Nohra; Madineni, Ravichandra; Vaccaro, Alexander R; Albert, Todd J; Harrop, James; Heller, Joshua E
2014-05-01
Adult degenerative cervical kyphosis is a debilitating disease that often requires complex surgical management. Young spine surgeons, residents, and fellows are often confused as to which surgical approach to choose due to lack of experience, absence of a systematic method of surgical management, and today's plethora of information regarding surgical techniques. Although surgeons may be able to perform anterior, posterior, or combined (360°) approaches to the cervical spine, many struggle to rationally choose an appropriate approach for deformity correction. The authors introduce an algorithm based on morphology and pathology of adult cervical kyphosis to help the surgeon select the appropriate approach when performing cervical deformity surgery. Cervical deformities are categorized into 5 different prevalent morphological types encountered in clinical settings. A surgical approach tailored to each category/type of deformity is then discussed, with a concrete case illustration provided for each. Preoperative assessment of kyphosis, determination of the goal for surgery, and the complications associated with cervical deformity correction are also summarized. This article's goal is to assist with understanding the big picture for surgical management in cervical spinal deformity. PMID:24785487
NASA Astrophysics Data System (ADS)
Okawa, S.; Yamamoto, H.; Miwa, Y.; Yamada, Y.
2011-07-01
Fluorescence diffuse optical tomography (FDOT) based on the total light approach is developed. Continuous-wave light is used for excitation in this system. The reconstruction algorithm is based on the total light approach, which reconstructs the absorption coefficients increased by the fluorophore. Additionally, we propose noise reduction using the algebraic reconstruction technique (ART) incorporating truncated singular value decomposition (TSVD). Numerical and phantom experiments show that the developed system successfully reconstructs the fluorophore concentration in biological media, and that ART with TSVD alleviates the influence of noise. An in vivo experiment demonstrated that the developed FDOT system localized the fluorescent agent concentrated in a cancer transplanted into a mouse kidney.
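The two numerical ingredients named above, the ART iteration and SVD truncation, can be sketched on a generic linear system Ax = b (the actual FDOT forward model is far larger and ill-conditioned); how the paper combines them is not reproduced here:

```python
import numpy as np

def art(A, b, iters=200, relax=1.0):
    """Kaczmarz-type ART: sweep the rows of A, projecting the current
    estimate onto each row's hyperplane a_i . x = b_i."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        for a, bi in zip(A, b):
            x += relax * (bi - a @ x) / (a @ a) * a
    return x

def tsvd_solve(A, b, k):
    """Truncated-SVD solution: keep only the k largest singular values,
    discarding the small ones that amplify measurement noise."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])
```

On a consistent overdetermined system both recover the exact solution; on noisy data, lowering k in tsvd_solve trades resolution for noise suppression, which is the role TSVD plays inside the paper's ART scheme.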
NASA Astrophysics Data System (ADS)
Shyue, Keh-Ming; Xiao, Feng
2014-07-01
We describe a novel interface-sharpening approach for efficient numerical resolution of a compressible homogeneous two-phase flow governed by a quasi-conservative five-equation model of Allaire et al. (2001) [1]. The algorithm uses a semi-discrete wave propagation method to find an approximate solution of this model numerically. In the algorithm, in regions near the interfaces where two different fluid components are present within a cell, the THINC (Tangent of Hyperbola for INterface Capturing) scheme is used as a basis for the reconstruction of a sub-grid discontinuity of volume fractions at each cell edge, and it is complemented by a homogeneous-equilibrium-consistent technique that is derived to ensure consistent modeling of the other interpolated physical variables in the model. In regions away from the interfaces where the flow is single phase, a standard reconstruction scheme such as MUSCL or WENO can be used for obtaining high-order interpolated states. These reconstructions are then used as the initial data for Riemann problems, and the resulting fluctuations form the basis for the spatial discretization. Time integration of the algorithm is done by employing a strong stability-preserving Runge-Kutta method. Numerical results are shown for sample problems with the Mie-Grüneisen equation of state for characterizing the materials of interest, in both one and two space dimensions, that demonstrate the feasibility of the proposed method for interface-sharpening of compressible two-phase flow. To demonstrate the competitiveness of our approach, we have also included results obtained using the anti-diffusion interface sharpening method.
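The core of THINC, representing the volume fraction inside a mixed cell as a tanh profile whose jump location is fixed by the cell average, can be sketched as follows. The bisection solve and the steepness parameter beta = 3.5 are illustrative assumptions; THINC implementations typically use a closed-form expression instead:

```python
import numpy as np

def thinc_jump_centre(alpha_avg, increasing=True, beta=3.5, n=2000):
    """Locate the jump centre x_t of the THINC sub-grid profile
    alpha(xi) = 0.5 * (1 + theta * tanh(beta * (xi - x_t))), xi in [0, 1],
    so that its cell average equals alpha_avg (theta = +1 if the volume
    fraction increases across the cell)."""
    theta = 1.0 if increasing else -1.0
    xi = (np.arange(n) + 0.5) / n       # midpoint samples of the unit cell
    def cell_avg(x_t):
        return np.mean(0.5 * (1.0 + theta * np.tanh(beta * (xi - x_t))))
    lo, hi = -1.0, 2.0
    for _ in range(60):                 # cell_avg is monotone in x_t
        mid = 0.5 * (lo + hi)
        if (cell_avg(mid) - alpha_avg) * theta > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

A cell average of 0.5 places the jump at the cell centre; larger averages pull the jump toward the upstream face. Evaluating the profile at the cell edges then supplies the sharpened interface states for the Riemann problems.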
Soft tissue balancing in varus total knee arthroplasty: an algorithmic approach.
Verdonk, Peter C M; Pernin, Jerome; Pinaroli, Alban; Ait Si Selmi, Tarik; Neyret, Philippe
2009-06-01
We present an algorithmic release approach to the varus knee, including a novel pie-crust release technique of the superficial MCL, in 359 total knee arthroplasty patients, and report the clinical and radiological outcome. Medio-lateral stability was evaluated as normal in 97% of group 0 (deep MCL), 95% of group 1 (pie-crust superficial MCL) and 83% of group 2 (distal superficial MCL). The mean preoperative hip-knee angle was 174.0, 172.1, and 169.5 degrees and was corrected postoperatively to 179.1, 179.2, and 177.6 degrees for groups 0, 1, and 2, respectively. A satisfactory correction in the coronal plane, falling within the 180 +/- 3 degrees interval, was achieved in 82.9% of all comers. An algorithmic release approach can be beneficial for soft tissue balancing. In all patients, the deep medial collateral ligament should be released and osteophytes removed. The novel pie-crust technique of the superficial MCL is safe, efficient and reliable, provided a medial release of 6-8 mm or less is required. Release of the superficial MCL on the distal tibia is advocated in severe varus knees. Preoperative coronal alignment is an important predictor for the release technique, but should be combined with other parameters such as reducibility of the deformity and the obtained gap asymmetry. PMID:19290507
Vertical and lateral flight optimization algorithm and missed approach cost calculation
NASA Astrophysics Data System (ADS)
Murrieta Mendoza, Alejandro
Flight trajectory optimization is being considered as a way of reducing flight costs, fuel burn and the emissions generated by fuel consumption. The objective of this work is to find the optimal trajectory between two points. To find the optimal trajectory, the parameters of weight, cost index, initial coordinates, and meteorological conditions along the route are provided to the algorithm. The algorithm finds the trajectory for which the global cost is lowest. The global cost is a compromise between fuel burned and flight time, determined using a cost index that assigns a cost in terms of fuel to the flight time. The optimization is achieved by calculating a candidate optimal cruise trajectory profile from all the combinations available in the aircraft performance database. From this cruise candidate profile, more cruise profiles are calculated, taking into account the climb and descent costs. During cruise, step climbs are evaluated to optimize the trajectory. The different trajectories are compared and the most economical one is defined as the optimal vertical navigation profile. From the optimal vertical navigation profile, different lateral routes are tested. Taking advantage of the meteorological influence, the algorithm looks for the lateral navigation trajectory with the lowest global cost. That route is then selected as the optimal lateral navigation profile. The meteorological data were obtained from Environment Canada. The new way of obtaining data from the Environment Canada grid proposed in this work resulted in an important reduction in computation time compared with other methods such as bilinear interpolation. The algorithm developed here was evaluated on two different aircraft: the Lockheed L-1011 and the Sukhoi Russian regional jet. The algorithm was developed in MATLAB, and the validation was performed using Flight-Sim by Presagis and the FMS CMA-9000 by CMC Electronics -- Esterline. At the end of this work a
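The fuel-time trade-off described above is simple to state in code. A minimal sketch with hypothetical profile numbers: a cost index in kg of fuel per minute converts flight time into fuel units, and each candidate cruise profile is scored by its global cost:

```python
def global_cost(fuel_kg, time_min, cost_index):
    """Global trip cost in kg of fuel: the cost index (kg/min) prices time."""
    return fuel_kg + cost_index * time_min

def best_profile(candidates, cost_index):
    """Pick the (fuel_kg, time_min, label) candidate with lowest global cost."""
    return min(candidates, key=lambda c: global_cost(c[0], c[1], cost_index))

# hypothetical cruise candidates: (fuel burn in kg, trip time in min, label)
profiles = [
    (5200.0, 210.0, "FL340/M0.78"),
    (5050.0, 225.0, "FL360/M0.76"),
    (5400.0, 200.0, "FL320/M0.80"),
]
```

With a cost index of zero the cheapest-fuel profile wins; a large cost index favours the fastest one, which is exactly the compromise the algorithm sweeps over the performance database.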
NASA Astrophysics Data System (ADS)
D'Avezac, Mayeul; Zunger, Alex
2007-03-01
In many problems in molecular and solid-state structures one needs to determine the energy-minimizing decoration of sites by different atom types (i.e., the configuration). The sheer size of this configurational space can be horrendous even if the underlying lattice type is known. The ab initio total-energy surface for different (relaxed) configurations can often be parameterized by a spin-like Hamiltonian (cluster expansion) with discrete spin variables denoting the type of atom occupying each site. We compare two search strategies for the energy-minimizing configuration: (i) a discrete-variable genetic-algorithm approach (S. V. Dudiy and A. Zunger, PRL 97, 046401 (2006)) and (ii) a continuous-variable approach (M. Wang et al., J. Am. Chem. Soc. 128, 3228 (2006)) in which the discrete-spin functional is mapped onto a continuous-spin functional (virtual atoms) and the search is guided by local gradients with respect to each spin. We compare their efficiency at locating the ground-state configurations of the fcc Au-Pd alloy in terms of the number of calls to the functional. We show that a GA approach with diversity-enhancing constraints and reciprocal-space mating easily outperforms the VA approach.
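A toy version of the discrete-variable GA search can be sketched with a one-dimensional ferromagnetic Ising ring standing in for the cluster-expansion Hamiltonian; the diversity-enhancing constraints and reciprocal-space mating of the cited work are not reproduced, and the GA reaches a near-optimal rather than guaranteed-optimal configuration:

```python
import random

def ga_minimize(n, energy, pop_size=40, gens=150, seed=0):
    """Minimal discrete-variable GA: bitstrings encode site occupations,
    with tournament selection, one-point crossover, bit-flip mutation and
    elitism so the best configuration found is never lost."""
    rng = random.Random(seed)
    p_mut = 1.0 / n
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    best = min(pop, key=energy)
    for _ in range(gens):
        nxt = [best[:]]                               # elitism
        while len(nxt) < pop_size:
            p1 = min(rng.sample(pop, 3), key=energy)  # tournament of 3
            p2 = min(rng.sample(pop, 3), key=energy)
            cut = rng.randrange(1, n)
            child = [b ^ (rng.random() < p_mut) for b in p1[:cut] + p2[cut:]]
            nxt.append(child)
        pop = nxt
        best = min(pop, key=energy)
    return best

def ising_ring_energy(bits):
    """Stand-in for a cluster-expansion Hamiltonian: ferromagnetic ring with
    spins s = 2b - 1 and E = -sum_i s_i * s_{i+1} (minimum -n, all aligned)."""
    s = [2 * b - 1 for b in bits]
    return -sum(s[i] * s[(i + 1) % len(s)] for i in range(len(s)))
```

Counting calls to `energy` here plays the role of counting calls to the total-energy functional when comparing search strategies.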
Balima, O.; Favennec, Y.; Rousse, D.
2013-10-15
Highlights: (1) New strategies to improve the accuracy of the reconstruction through mesh and finite element parameterization. (2) Use of gradient filtering through an alternative inner product within the adjoint method. (3) An integral form of the cost function to make the reconstruction compatible with all finite element formulations, continuous and discontinuous. (4) A gradient-based algorithm with the adjoint method for the reconstruction. Abstract: Optical tomography is mathematically treated as a non-linear inverse problem in which the optical properties of the probed medium are recovered through the minimization of the errors between the experimental measurements and their predictions with a numerical model at the locations of the detectors. Owing to the ill-posed behavior of the inverse problem, some regularization must be performed, and Tikhonov penalization is the type most commonly used in optical tomography applications. This paper introduces an optimized approach for optical tomography reconstruction with the finite element method. An integral form of the cost function is used to take into account the surfaces of the detectors and make the reconstruction compatible with all finite element formulations, continuous and discontinuous. Within a gradient-based algorithm in which the adjoint method is used to compute the gradient of the cost function, an alternative inner product is employed for preconditioning the reconstruction algorithm. Moreover, an appropriate re-parameterization of the optical properties is performed. These regularization strategies are compared with classical Tikhonov penalization. It is shown that both the re-parameterization and the use of the Sobolev cost-function gradient are efficient for solving such an ill-posed inverse problem.
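The "alternative inner product" idea, replacing the plain L2 gradient by a Sobolev-preconditioned one, can be sketched in one dimension. The periodic Laplacian and the eps value are illustrative assumptions, not the paper's finite element setting:

```python
import numpy as np

def sobolev_gradient(g, eps=1.0):
    """Precondition a gradient by switching to an H1-like inner product:
    solve (I - eps * Laplacian) g_s = g on a 1-D periodic grid, which damps
    the high-frequency components that make raw L2 gradients noisy."""
    n = len(g)
    L = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    L[0, -1] = L[-1, 0] = 1.0          # periodic boundary
    return np.linalg.solve(np.eye(n) - eps * L, g)
```

A constant (smooth) gradient passes through unchanged, while a pixel-to-pixel oscillation is strongly attenuated: on a periodic grid an alternating vector is an eigenvector of the Laplacian with eigenvalue -4, so with eps = 1 it is scaled by 1/5.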
Armañanzas, Rubén; Saeys, Yvan; Inza, Iñaki; García-Torres, Miguel; Bielza, Concha; van de Peer, Yves; Larrañaga, Pedro
2011-01-01
Progress is continuously being made in the quest for stable biomarkers linked to complex diseases. Mass spectrometers are one of the devices for tackling this problem. The data profiles they produce are noisy and unstable. In these profiles, biomarkers are detected as signal regions (peaks), where control and disease samples behave differently. Mass spectrometry (MS) data generally contain a limited number of samples described by a high number of features. In this work, we present a novel class of evolutionary algorithms, estimation of distribution algorithms (EDA), as an efficient peak selector in this MS domain. There is a trade-off between the reliability of the detected biomarkers and the low number of samples for analysis. For this reason, we introduce a consensus approach, built upon the classical EDA scheme, that improves stability and robustness of the final set of relevant peaks. An entire data workflow is designed to yield unbiased results. Four publicly available MS data sets (two MALDI-TOF and another two SELDI-TOF) are analyzed. The results are compared to the original works, and a new plot (peak frequential plot) for graphically inspecting the relevant peaks is introduced. A complete online supplementary page, which can be found at http://www.sc.ehu.es/ccwbayes/members/ruben/ms, includes extended info and results, in addition to Matlab scripts and references. PMID:21393653
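The simplest member of the EDA family, the univariate marginal distribution algorithm (UMDA), can be sketched on a toy peak-selection fitness. The consensus machinery and MS preprocessing of the paper are not reproduced; the fitness function, clamping bounds and population sizes below are assumptions:

```python
import random

def umda_select(n_feats, fitness, pop_size=60, elite=15, gens=40, seed=1):
    """Univariate EDA (UMDA): keep one selection probability per feature,
    sample candidate masks from it, and re-estimate the probabilities from
    the elite fraction of each generation."""
    rng = random.Random(seed)
    p = [0.5] * n_feats
    for _ in range(gens):
        pop = [[int(rng.random() < pi) for pi in p] for _ in range(pop_size)]
        pop.sort(key=fitness, reverse=True)
        top = pop[:elite]
        # clamp away from 0/1 so no feature is frozen out prematurely
        p = [min(0.95, max(0.05, sum(m[i] for m in top) / elite))
             for i in range(n_feats)]
    return p

# toy stand-in for peak scoring: two informative peaks, six noise peaks
WEIGHTS = [3.0, 2.5] + [-1.0] * 6
def toy_fitness(mask):
    return sum(w * m for w, m in zip(WEIGHTS, mask))
```

The returned probability vector concentrates on the informative features, which is also how the paper's peak frequential plot is read: peaks repeatedly selected across runs are the stable biomarker candidates.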
A Decompositional Approach to Executing Quality Data Model Algorithms on the i2b2 Platform.
Mo, Huan; Jiang, Guoqian; Pacheco, Jennifer A; Kiefer, Richard; Rasmussen, Luke V; Pathak, Jyotishman; Denny, Joshua C; Thompson, William K
2016-01-01
The Quality Data Model (QDM) is an established standard for representing electronic clinical quality measures on electronic health record (EHR) repositories. Informatics for Integrating Biology and the Bedside (i2b2) is a widely used platform for implementing clinical data repositories. However, translation from QDM to i2b2 is challenging, since QDM allows complex queries beyond the capability of single i2b2 messages. We have developed an approach to decompose complex QDM algorithms into workflows of single i2b2 messages and execute them on the KNIME data analytics platform. Each workflow operation module is composed of parameter lists, a template for the i2b2 message, a mechanism to create parameter updates, and a web service call to i2b2. The communication between workflow modules relies on passing keys of i2b2 result sets. As a demonstration of validity, we describe the implementation and execution of a type 2 diabetes mellitus phenotype algorithm against an i2b2 data repository. PMID:27570665
Du, Hubing; Gao, Honghong
2016-08-20
Affected by height-dependent effects, phase-shifting shadow moiré can only be implemented in an approximate way. In the technique, a fixed phase step of around π/2 rad between two adjacent frames is usually introduced by translating the grating in its own plane, so the method is not flexible in some situations. Additionally, because shadow moiré fringes have a complex intensity distribution, computing the introduced phase shift with existing arccosine- or arcsine-based phase-shift extraction algorithms always exhibits instability. To solve this, we developed a Gram-Schmidt orthonormalization approach based on a three-frame self-calibrating phase-shifting algorithm with equal but unknown phase steps. The proposed method, using the arctangent function, is fast and can be implemented robustly in many applications. We also perform optical experiments to demonstrate the correctness of the proposed method by referring to the result of the conventional five-step phase-shifting shadow moiré. The results confirm the correctness of the proposed method. PMID:27556993
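The Gram-Schmidt orthonormalization idea at the heart of the method can be sketched with two synthetic frames (the paper's three-frame, equal-step formulation is more elaborate). The sketch assumes the unknown phase step lies in (0, π) and that the fringes span enough periods for cos φ and sin φ to be nearly orthogonal over the image:

```python
import numpy as np

def gs_phase(i1, i2):
    """Two-frame Gram-Schmidt phase retrieval: remove the DC offsets, take
    the first pattern as the first basis vector, orthogonalize the second
    against it, and recover the wrapped phase with arctan2 (no knowledge of
    the actual phase step is needed)."""
    u1 = i1 - i1.mean()
    u2 = i2 - i2.mean()
    u1 = u1 / np.linalg.norm(u1)
    u2 = u2 - (u2 @ u1) * u1              # Gram-Schmidt step
    u2 = u2 / np.linalg.norm(u2)
    return np.arctan2(-u2, u1)

# synthetic check: a phase ramp spanning 20 fringes, unknown step 1.3 rad
x = np.linspace(0.0, 1.0, 4000)
phi = 40.0 * np.pi * x
i1 = 5.0 + 2.0 * np.cos(phi)
i2 = 5.0 + 2.0 * np.cos(phi + 1.3)
rec = gs_phase(i1, i2)
```

After orthonormalization the two basis vectors behave like cos φ and sin φ, so the arctangent returns the wrapped phase directly, which is why the approach avoids the instability of arccosine- and arcsine-based step extraction.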
A Dynamic Health Assessment Approach for Shearer Based on Artificial Immune Algorithm.
Wang, Zhongbin; Xu, Xihua; Si, Lei; Ji, Rui; Liu, Xinhua; Tan, Chao
2016-01-01
In order to accurately identify the dynamic health of a shearer, reduce operating trouble and production accidents, and further improve coal production efficiency, a dynamic health assessment approach for the shearer based on an artificial immune algorithm was proposed. The key technologies, such as the system framework, the selection of indicators for shearer dynamic health assessment, and the health assessment model, were provided, and the flowchart of the proposed approach was designed. A simulation example, with an accuracy of 96%, based on data collected from an industrial production scene was provided. Furthermore, a comparison demonstrated that the proposed method exhibited higher classification accuracy than classifiers based on back propagation neural network (BP-NN) and support vector machine (SVM) methods. Finally, the proposed approach was applied to an engineering problem of shearer dynamic health assessment. The industrial application results showed that the research achievements of this paper can be used in combination with the shearer automation control system in a fully mechanized coal face. The simulation and application results indicated that the proposed method was feasible and outperformed others. PMID:27123002
GA-ANFIS Expert System Prototype for Prediction of Dermatological Diseases.
Begic Fazlic, Lejla; Avdagic, Korana; Omanovic, Samir
2015-01-01
This paper presents novel GA-ANFIS expert system prototype for dermatological disease detection by using dermatological features and diagnoses collected in real conditions. Nine dermatological features are used as inputs to classifiers that are based on Adaptive Neuro-Fuzzy Inference Systems (ANFIS) for the first level of fuzzy model optimization. After that, they are used as inputs in Genetic Algorithm (GA) for the second level of fuzzy model optimization within GA-ANFIS system. GA-ANFIS system performs optimization in two steps. Modelling and validation of the novel GA-ANFIS system approach is performed in MATLAB environment by using validation set of data. Some conclusions concerning the impacts of features on the detection of dermatological diseases were obtained through analysis of the GA-ANFIS. We compared GA-ANFIS and ANFIS results. The results confirmed that the proposed GA-ANFIS model achieved accuracy rates which are higher than the ones we got by ANFIS model. PMID:25991223
Shang, J.S.; Andrienko, D.A.; Huang, P.G.; Surzhikov, S.T.
2014-06-01
An efficient computational capability for nonequilibrium radiation simulation via the ray tracing technique has been accomplished. The radiative rate equation is iteratively coupled with the aerodynamic conservation laws, including nonequilibrium chemical and chemical-physical kinetic models. The spectral properties along tracing rays are determined by a space partition algorithm of the nearest neighbor search process, and the numerical accuracy is further enhanced by a local resolution refinement using the Gauss-Lobatto polynomial. The interdisciplinary governing equations are solved by an implicit delta formulation through the diminishing residual approach. The axisymmetric radiating flow fields over the reentry RAM-CII probe have been simulated and verified with flight data and previous solutions by traditional methods. A computational efficiency gain of nearly forty times is realized over that of the existing simulation procedures.
Combinatorial Multiobjective Optimization Using Genetic Algorithms
NASA Technical Reports Server (NTRS)
Crossley, William A.; Martin, Eric T.
2002-01-01
The research proposed in this document investigated multiobjective optimization approaches based upon the Genetic Algorithm (GA). Several versions of the GA have been adopted for multiobjective design, but, prior to this research, there had not been significant comparisons of the most popular strategies. The research effort first generalized the two-branch tournament genetic algorithm into an N-branch genetic algorithm; the N-branch GA was then compared with a version of the popular Multi-Objective Genetic Algorithm (MOGA). Because the genetic algorithm is well suited to combinatorial (mixed discrete/continuous) optimization problems, the GA can be used in the conceptual phase of design to combine selection (discrete variable) and sizing (continuous variable) tasks. Using a multiobjective formulation for the design of a 50-passenger aircraft to meet the competing objectives of minimizing takeoff gross weight and minimizing trip time, the GA generated a range of tradeoff designs that illustrate which aircraft features change from a low-weight, slow trip-time aircraft design to a heavy-weight, short trip-time aircraft design. Given the objective formulation and analysis methods used, the results of this study identify where turboprop-powered aircraft and turbofan-powered aircraft become more desirable for the 50-seat passenger application. This aircraft design application also begins to suggest how a combinatorial multiobjective optimization technique could be used to assist in the design of morphing aircraft.
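The two-branch tournament selection the abstract generalizes can be sketched in a few lines. This is a hypothetical minimal illustration with toy objectives standing in for takeoff gross weight and trip time, not the code used in the study:

```python
import random

def two_branch_tournament(population, f1, f2, rng):
    """One round of two-branch tournament selection: half of the mating
    pool is chosen by binary tournaments on objective f1, the other half
    on objective f2, so both objectives exert selection pressure."""
    parents = []
    for objective in (f1, f2):
        for _ in range(len(population) // 2):
            a, b = rng.sample(population, 2)
            parents.append(a if objective(a) <= objective(b) else b)
    return parents

# Toy designs: (weight, trip_time) pairs; both objectives are minimized.
rng = random.Random(0)
pop = [(rng.uniform(20.0, 40.0), rng.uniform(1.0, 3.0)) for _ in range(20)]
mating_pool = two_branch_tournament(pop, lambda d: d[0], lambda d: d[1], rng)
```

An N-branch version simply loops over N objective functions instead of two, filling 1/N of the mating pool per objective.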
Fabrication of AlGaN/GaN Ω-shaped nanowire fin-shaped FETs by a top-down approach
NASA Astrophysics Data System (ADS)
Im, Ki-Sik; Sindhuri, Vodapally; Jo, Young-Woo; Son, Dong-Hyeok; Lee, Jae-Hoon; Cristoloveanu, Sorin; Lee, Jung-Hee
2015-06-01
An AlGaN/GaN-based Ω-shaped nanowire fin-shaped FET (FinFET) with a fin width of 50 nm was fabricated using tetramethylammonium hydroxide (TMAH)-based lateral wet etching. An atomic layer deposited (ALD) HfO2 side-wall layer served as the etching mask. ALD Al2O3 and TiN layers were used as the gate dielectric and gate metal, respectively. The Ω-shaped gate structure fully depletes the active fin body and almost completely separates the depleted fin from the underlying thick GaN buffer layer, resulting in superior device performance. The top-down processing proposed in this work provides a viable pathway towards gate-all-around devices for III-nitride semiconductors.
First Principles Approach to the Magnetocaloric Effect: Application to Ni2MnGa
NASA Astrophysics Data System (ADS)
Odbadrakh, Khorgolkhuu; Nicholson, Don; Rusanu, Aurelian; Eisenbach, Markus; Brown, Gregory; Evans, Boyd, III
2011-03-01
The magneto-caloric effect (MCE) has potential application in heating and cooling technologies. In this work, we present the calculated magnetic structure of a candidate MCE material, Ni2MnGa. The magnetic configurations of a 144-atom supercell are first explored using first-principles calculations; the results are then used to fit the exchange parameters of a Heisenberg Hamiltonian. The Wang-Landau method is used to calculate the magnetic density of states of the Heisenberg Hamiltonian. Based on this classical estimate, the magnetic density of states is then calculated using the Wang-Landau method with energies obtained directly from the first-principles method. The Curie temperature and other thermodynamic properties are calculated from the density of states. The relationships between the density of magnetic states and the field-induced adiabatic temperature change and isothermal entropy change are discussed. This work was sponsored by the Laboratory Directed Research and Development Program (ORNL), by the Mathematical, Information, and Computational Sciences Division, Office of Advanced Scientific Computing Research (US DOE), and by the Materials Sciences and Engineering Division, Office of Basic Energy Sciences (US DOE).
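The Wang-Landau step described above can be illustrated on a much smaller system. The sketch below estimates the density of states of a tiny 1-D Ising ring rather than the 144-atom Heisenberg Hamiltonian; all parameters are illustrative:

```python
import math
import random

def wang_landau(n=8, ln_f_final=1e-3, flat=0.8, seed=4):
    """Wang-Landau estimate of the density of states g(E) for a 1-D Ising
    ring of n spins (a toy stand-in for the Heisenberg model above).
    The modification factor ln f is halved whenever the energy histogram
    is roughly flat, so the walk gradually visits all energies uniformly."""
    rng = random.Random(seed)
    spins = [rng.choice((-1, 1)) for _ in range(n)]
    E = -sum(spins[i] * spins[(i + 1) % n] for i in range(n))
    levels = range(-n, n + 1, 4)             # energies reachable on a ring
    ln_g = {lev: 0.0 for lev in levels}
    hist = {lev: 0 for lev in levels}
    ln_f = 1.0
    while ln_f > ln_f_final:
        i = rng.randrange(n)
        dE = 2 * spins[i] * (spins[i - 1] + spins[(i + 1) % n])
        # Accept the flip with probability min(1, g(E) / g(E + dE)).
        if rng.random() < math.exp(min(0.0, ln_g[E] - ln_g[E + dE])):
            spins[i] = -spins[i]
            E += dE
        ln_g[E] += ln_f
        hist[E] += 1
        if min(hist.values()) > flat * sum(hist.values()) / len(hist):
            hist = {lev: 0 for lev in hist}  # histogram flat: refine ln f
            ln_f /= 2.0
    return ln_g

ln_g = wang_landau()
ratio = math.exp(ln_g[0] - ln_g[-8])
```

For the 8-spin ring the densities are known exactly (g(0)/g(-8) = 140/2 = 70), so the estimate can be checked directly; thermodynamic averages such as the Curie temperature then follow from sums over g(E).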
Processing approach towards the formation of thin-film Cu(In,Ga)Se2
Beck, Markus E.; Noufi, Rommel
2003-01-01
A two-stage method of producing thin films of group IB-IIIA-VIA materials on a substrate for semiconductor device applications includes a first stage of depositing an amorphous group IB-IIIA-VIA precursor onto an unheated substrate, wherein the precursor contains all of the group IB and group IIIA constituents of the semiconductor thin film to be produced in the stoichiometric amounts desired for the final product, and a second stage which involves subjecting the precursor to a short thermal treatment at 420-550 °C in a vacuum or under an inert atmosphere to produce a single-phase group IB-IIIA-VIA film. Preferably the precursor also comprises the group VIA element in the stoichiometric amount desired for the final semiconductor thin film. The group IB-IIIA-VIA semiconductor films may be, for example, Cu(In,Ga)(Se,S)2 mixed-metal chalcogenides. The resultant supported group IB-IIIA-VIA semiconductor film is suitable for use in photovoltaic applications.
ERIC Educational Resources Information Center
Reese, Debbie Denise; Tabachnick, Barbara G.
2010-01-01
In this paper, the authors summarize a quantitative analysis demonstrating that the CyGaMEs toolset for embedded assessment of learning within instructional games measures growth in conceptual knowledge by quantifying player behavior. CyGaMEs stands for Cyberlearning through GaME-based, Metaphor Enhanced Learning Objects. Some scientists of…
A data mining approach to optimize pellets manufacturing process based on a decision tree algorithm.
Ronowicz, Joanna; Thommes, Markus; Kleinebudde, Peter; Krysiński, Jerzy
2015-06-20
The present study is focused on a thorough analysis of the cause-effect relationships between pellet formulation characteristics (pellet composition as well as process parameters) and the selected quality attribute of the final product. The quality of the pellets was expressed by their shape, using the aspect ratio value. A data matrix for chemometric analysis consisted of 224 pellet formulations prepared with eight different active pharmaceutical ingredients and several various excipients, using different extrusion/spheronization process conditions. The data set contained 14 input variables (both formulation and process variables) and one output variable (pellet aspect ratio). A tree regression algorithm consistent with the Quality by Design concept was applied to obtain deeper understanding and knowledge of the formulation and process parameters affecting the final pellet sphericity. A clear, interpretable set of decision rules was generated. The spheronization speed, spheronization time, number of holes and water content of the extrudate were recognized as the key factors influencing pellet aspect ratio. The most spherical pellets were achieved by using a large number of holes during extrusion, a high spheronizer speed and a longer spheronization time. The described data mining approach enhances knowledge about the pelletization process and simultaneously facilitates the search for the optimal process conditions necessary to achieve ideally spherical pellets, resulting in good flow characteristics. This data mining approach can be taken into consideration by industrial formulation scientists to support rational decision making in the field of pellet technology. PMID:25835791
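The greedy splitting step that a tree regression algorithm repeats recursively can be sketched as follows. The data and the single input variable are invented for illustration, standing in for the study's 14 variables:

```python
def best_split(X, y):
    """Find the single (feature, threshold) split that most reduces the
    squared error of predicting y by leaf means -- the greedy step a
    CART-style regression tree applies recursively."""
    def sse(vals):  # sum of squared errors around the mean
        if not vals:
            return 0.0
        m = sum(vals) / len(vals)
        return sum((v - m) ** 2 for v in vals)

    best_feature, best_threshold, best_err = None, None, sse(y)
    for j in range(len(X[0])):
        for t in sorted({row[j] for row in X}):
            left = [yi for row, yi in zip(X, y) if row[j] <= t]
            right = [yi for row, yi in zip(X, y) if row[j] > t]
            if sse(left) + sse(right) < best_err:
                best_feature, best_threshold = j, t
                best_err = sse(left) + sse(right)
    return best_feature, best_threshold, best_err

# Toy data: aspect ratio approaches 1 (spherical) as spheronizer speed rises.
X = [[200.0], [300.0], [700.0], [900.0]]   # spheronizer speed (illustrative)
y = [1.35, 1.30, 1.10, 1.08]               # pellet aspect ratio
feature, threshold, err = best_split(X, y)
```

Repeating this split on each resulting leaf, and reading the split conditions off the final tree, yields exactly the kind of decision rules ("high spheronizer speed, many holes, long spheronization") the abstract reports.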
Diffuse lung disease of infancy: a pattern-based, algorithmic approach to histological diagnosis.
Armes, Jane E; Mifsud, William; Ashworth, Michael
2015-02-01
Diffuse lung disease (DLD) of infancy has multiple aetiologies and the spectrum of disease is substantially different from that seen in older children and adults. In many cases, a specific diagnosis renders a dire prognosis for the infant, with profound management implications. Two recently published series of DLD of infancy, collated from the archives of specialist centres, indicate that the majority of their cases were referred, implying that the majority of biopsies taken for DLD of infancy are first received by less experienced pathologists. The current literature describing DLD of infancy takes a predominantly aetiological approach to classification. We present an algorithmic, histological, pattern-based approach to diagnosis of DLD of infancy, which, with the aid of appropriate multidisciplinary input, including clinical and radiological expertise and ancillary diagnostic studies, may lead to an accurate and useful interim report, with timely exclusion of inappropriate diagnoses. Subsequent referral to a specialist centre for confirmatory diagnosis will be dependent on the individual case and the decision of the multidisciplinary team. PMID:25477529
A Parallel Approach To Optimum Actuator Selection With a Genetic Algorithm
NASA Technical Reports Server (NTRS)
Rogers, James L.
2000-01-01
Recent discoveries in smart technologies have created a variety of aerodynamic actuators which have great potential to enable entirely new approaches to aerospace vehicle flight control. For a revolutionary concept such as a seamless aircraft with no moving control surfaces, there is a large set of candidate locations for placing actuators, resulting in a substantially larger number of combinations to examine in order to find an optimum placement satisfying the mission requirements. The placement of actuators on a wing determines the control effectiveness of the airplane. One approach to placement maximizes the moments about the pitch, roll, and yaw axes, while minimizing the coupling. Genetic algorithms have been instrumental in achieving good solutions to discrete optimization problems, such as the actuator placement problem. As a proof of concept, a genetic algorithm has been developed to find the minimum number of actuators required to provide uncoupled pitch, roll, and yaw control for a simplified, untapered, unswept wing model. Finding the optimum placement by searching all possible combinations would require 1,100 hours. Formulating the problem as a multi-objective problem and modifying it to take advantage of the parallel processing capabilities of a multi-processor computer reduces the optimization time to 22 hours.
Embedding SAS approach into conjugate gradient algorithms for asymmetric 3D elasticity problems
Chen, Hsin-Chu; Warsi, N.A.; Sameh, A.
1996-12-31
In this paper, we present two strategies to embed the SAS (symmetric-and-antisymmetric) scheme into conjugate gradient (CG) algorithms to make solving 3D elasticity problems, with or without global reflexive symmetry, more efficient. The SAS approach is physically a domain decomposition scheme that takes advantage of reflexive symmetry of discretized physical problems, and algebraically a matrix transformation method that exploits special reflexivity properties of the matrix resulting from discretization. In addition to offering large-grain parallelism, which is valuable in a multiprocessing environment, the SAS scheme also has the potential for reducing arithmetic operations in the numerical solution of a reasonably wide class of scientific and engineering problems. This approach can be applied directly to problems that have global reflexive symmetry, yielding smaller and independent subproblems to solve, or indirectly to problems with partial symmetry, resulting in loosely coupled subproblems. The decomposition is achieved by separating the reflexive subspace from the antireflexive one, possessed by a special class of matrices A ∈ C^(n×n) that satisfy the relation A = PAP, where P is a reflection matrix (a symmetric signed permutation matrix).
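The algebraic heart of the SAS decomposition can be demonstrated on a toy 4×4 reflexive matrix (not the paper's elasticity operators): an orthogonal basis of symmetric and antisymmetric vectors block-diagonalizes any A satisfying A = PAP, splitting the system into two independent half-size subproblems.

```python
import numpy as np

n = 4
J = np.fliplr(np.eye(2))                 # 2x2 exchange matrix
P = np.fliplr(np.eye(n))                 # reflection matrix for size n

# Build a reflexive A = PAP from arbitrary 2x2 blocks B and C.
B = np.array([[4.0, 1.0], [1.0, 3.0]])
C = np.array([[0.5, 0.2], [0.2, 0.1]])
A = np.block([[B, C @ J], [J @ C, J @ B @ J]])
assert np.allclose(A, P @ A @ P)         # A is indeed reflexive

# Orthogonal SAS basis: symmetric columns (u; Ju), antisymmetric (u; -Ju).
I2 = np.eye(2)
Q = np.hstack([np.vstack([I2, J]), np.vstack([I2, -J])]) / np.sqrt(2.0)

# The change of basis decouples A into the blocks B + C and B - C.
T = Q.T @ A @ Q
```

Solving the two half-size blocks independently is the large-grain parallelism the abstract refers to; inside a CG iteration, the same splitting applies to the matrix-vector products.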
Optimal management of substrates in anaerobic co-digestion: An ant colony algorithm approach.
Verdaguer, Marta; Molinos-Senante, María; Poch, Manel
2016-04-01
Sewage sludge (SWS) is inevitably produced in urban wastewater treatment plants (WWTPs). The treatment of SWS on site at small WWTPs is not economical; therefore, the SWS is typically transported to an alternative SWS treatment center. There is increased interest in the use of anaerobic digestion (AnD) with co-digestion as an SWS treatment alternative. Although the availability of different co-substrates has been ignored in most of the previous studies, it is an essential issue for the optimization of AnD co-digestion. In a pioneering approach, this paper applies an Ant-Colony-Optimization (ACO) algorithm that maximizes the generation of biogas through AnD co-digestion in order to optimize the discharge of organic waste from different waste sources in real-time. An empirical application is developed based on a virtual case study that involves organic waste from urban WWTPs and agrifood activities. The results illustrate the dominant role of toxicity levels in selecting contributions to the AnD input. The methodology and case study proposed in this paper demonstrate the usefulness of the ACO approach in supporting a decision process that contributes to improving the sustainability of organic waste and SWS management. PMID:26868846
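The substrate-selection mechanism can be sketched with a toy ACO loop. The substrate data, toxicity cap, and pheromone rules below are invented for illustration and are not the paper's formulation:

```python
import random

def aco_select(substrates, toxicity_cap, n_ants=30, n_iter=40, rho=0.1, seed=1):
    """Toy ant-colony optimization for choosing co-digestion substrates.
    Each substrate is (biogas_yield, toxicity); ants greedily build subsets
    respecting the toxicity cap in a pheromone-biased random order, and
    pheromone is reinforced on the best subset found so far."""
    rng = random.Random(seed)
    n = len(substrates)
    pheromone = [1.0] * n
    best_set, best_gas = [], 0.0
    for _ in range(n_iter):
        for _ in range(n_ants):
            order = sorted(range(n), key=lambda i: -pheromone[i] * rng.random())
            chosen, gas, tox = [], 0.0, 0.0
            for i in order:
                g, t = substrates[i]
                if tox + t <= toxicity_cap:
                    chosen.append(i)
                    gas += g
                    tox += t
            if gas > best_gas:
                best_set, best_gas = chosen, gas
        pheromone = [(1.0 - rho) * p for p in pheromone]   # evaporation
        for i in best_set:
            pheromone[i] += 1.0                            # reinforcement
    return best_set, best_gas

# Toy instance: (biogas_yield, toxicity) per substrate, cap on total toxicity.
substrates = [(10.0, 5.0), (7.0, 3.0), (4.0, 2.0), (1.0, 4.0)]
best_set, best_gas = aco_select(substrates, toxicity_cap=5.0)
```

Here the pair (7, 3) and (4, 2) together beats the single high-yield substrate (10, 5) under the cap, which is the kind of toxicity-driven trade-off the abstract highlights.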
One-year results of an algorithmic approach to managing failed back surgery syndrome
Avellanal, Martín; Diaz-Reganon, Gonzalo; Orts, Alejandro; Soto, Silvia
2014-01-01
BACKGROUND: Failed back surgery syndrome (FBSS) is a major clinical problem. Different etiologies with different incidence rates have been proposed. There are currently no standards regarding the management of these patients. Epiduroscopy is an endoscopic technique that may play a role in the management of FBSS. OBJECTIVE: To evaluate an algorithm for management of severe FBSS including epiduroscopy as a diagnostic and therapeutic tool. METHODS: A total of 133 patients with severe symptoms of FBSS (visual analogue scale score ≥7) and no response to pharmacological treatment and physical therapy were included. A six-step management algorithm was applied. Data, including patient demographics, pain and surgical procedure, were analyzed. In all cases, one or more objective causes of pain were established. Treatment success was defined as ≥50% long-term pain relief maintained during the first year of follow-up. Final allocation of patients was registered: good outcome with conservative treatment, surgical reintervention and palliative treatment with implantable devices. RESULTS: Of 122 patients enrolled, 59.84% underwent instrumented surgery and 40.16% a noninstrumented procedure. Most (64.75%) experienced significant pain relief with conventional pain clinic treatments; 15.57% required surgical treatment. Palliative spinal cord stimulation and spinal analgesia were applied in 9.84% and 2.46% of the cases, respectively. The most common diagnosis was epidural fibrosis, followed by disc herniation, global or lateral stenosis, and foraminal stenosis. CONCLUSIONS: A new six-step ladder approach to severe FBSS management that includes epiduroscopy was analyzed. Etiologies are accurately described and a useful role of epiduroscopy was confirmed. PMID:25222573
NASA Astrophysics Data System (ADS)
Mojarab, Masoud; Kossobokov, Vladimir; Memarian, Hossein; Zare, Mehdi
2015-07-01
On 23rd October 2011, an M7.3 earthquake near the Turkish city of Van killed more than 600 people, injured over 4000, and left about 60,000 homeless. It demolished hundreds of buildings and caused great damage to thousands of others in Van, Ercis, Muradiye, and Çaldıran. The earthquake's epicenter is located about 70 km from a preceding M7.3 earthquake that occurred in November 1976, destroyed several villages near the Turkey-Iran border, and killed thousands of people. This study, by means of retrospective application of the M8 algorithm, checks to see if the 2011 Van earthquake could have been predicted. The algorithm is based on pattern recognition of Times of Increased Probability (TIP) of a target earthquake from the transient seismic sequence at lower magnitude ranges in a Circle of Investigation (CI). Specifically, we applied a modified M8 algorithm adjusted to a rather low level of earthquake detection in the region, following three different approaches to determine seismic transients. In the first approach, CI centers are distributed on intersections of morphostructural lineaments recognized as prone to magnitude 7+ earthquakes. In the second approach, centers of CIs are distributed on local extremes of the seismic density distribution, and in the third approach, CI centers were distributed uniformly on the nodes of a 1°×1° grid. According to the results of the M8 algorithm application, the 2011 Van earthquake could have been predicted in any of the three approaches. We noted that it is possible to consider the intersection of TIPs instead of their union to improve the certainty of the prediction results. Our study confirms the applicability of a modified version of the M8 algorithm for predicting earthquakes at the Iranian-Turkish plateau, as well as for mitigation of damages in seismic events in which pattern recognition algorithms may play an important role.
Pitschner, H F; Berkowitsch, A
2001-01-01
Symbolic dynamics, as a nonlinear method, and computation of the normalized algorithmic complexity (Cα) were applied to basket-catheter mapping of atrial fibrillation (AF) in the right human atrium. The resulting different degrees of organisation of AF were compared to the conventional classification of Wells. The short-time temporal and spatial distribution of Cα during AF and the effects of propafenone on this distribution were investigated in 30 patients. Cα was calculated for a moving window. The generated Cα was analyzed within 10 minutes before and after administration of propafenone. The inter-regional Cα distribution was statistically analyzed. Inter-regional Cα differences were found in all patients (p < 0.001). The right atrium could be divided into high- and low-complexity areas according to individual patterns. A significant Cα increase in the cranio-caudal direction was confirmed inter-individually (p < 0.01). The administration of propafenone enlarged the areas of low complexity. PMID:11889958
NASA Technical Reports Server (NTRS)
Hoang, TY
1994-01-01
A real-time, high-rate precision navigation Kalman filter algorithm is developed and analyzed. This navigation algorithm blends various navigation data collected during terminal-area approach of an instrumented helicopter. The navigation data collected include helicopter position and velocity from a global positioning system in differential mode (DGPS) as well as helicopter velocity and attitude from an inertial navigation system (INS). The goal of the navigation algorithm is to increase the DGPS accuracy while producing navigational data at the 64 Hz INS update rate. It is important to note that while the data were post-flight processed, the navigation algorithm was designed for real-time analysis. The design of the navigation algorithm resulted in a nine-state Kalman filter. The Kalman filter's state matrix contains position, velocity, and velocity bias components. The filter updates positional readings with DGPS position, INS velocity, and velocity bias information. In addition, the filter incorporates a sporadic data rejection scheme. This relatively simple model met and exceeded the ten-meter absolute positional requirement. The navigation algorithm results were compared with truth data derived from a laser tracker. The helicopter flight profile included terminal glideslope angles of 3, 6, and 9 degrees. Two flight segments extracted during each terminal approach were used to evaluate the navigation algorithm. The first segment recorded small dynamic maneuvers in the lateral plane, while motion in the vertical plane was recorded by the second segment. The longitudinal, lateral, and vertical averaged positional accuracies for all three glideslope approaches are as follows (mean plus or minus two standard deviations, in meters): longitudinal (-0.03 plus or minus 1.41), lateral (-1.29 plus or minus 2.36), and vertical (-0.76 plus or minus 2.05).
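The DGPS/INS blending idea can be illustrated with a toy 2-state filter. The actual filter is nine-state with bias terms and data rejection; the dynamics, noise values, and simulated trajectory below are invented:

```python
import numpy as np

def kf_step(x, P, z, dt, q=1e-3, r_pos=1.0, r_vel=0.04):
    """One predict/update cycle of a toy [position, velocity] Kalman filter
    that blends a DGPS position fix with an INS velocity reading."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity dynamics
    H = np.eye(2)                           # both states measured directly
    Q = q * np.eye(2)                       # process noise (illustrative)
    R = np.diag([r_pos, r_vel])             # DGPS / INS measurement noise
    x, P = F @ x, F @ P @ F.T + Q           # predict
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ (z - H @ x)                 # update
    P = (np.eye(2) - K @ H) @ P
    return x, P

rng = np.random.default_rng(0)
dt = 1.0 / 64.0                             # the 64 Hz INS update rate
x_est, P = np.zeros(2), np.eye(2)
true_pos, true_vel, errs = 0.0, 2.0, []
for _ in range(640):                        # ten seconds of simulated flight
    true_pos += true_vel * dt
    z = np.array([true_pos + rng.normal(0.0, 1.0),   # noisy DGPS position
                  true_vel + rng.normal(0.0, 0.2)])  # noisy INS velocity
    x_est, P = kf_step(x_est, P, z, dt)
    errs.append(abs(x_est[0] - true_pos))
```

The fused position error settles well below the one-meter DGPS measurement noise, illustrating why blending the two sensors beats either one alone.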
A novel stochastic optimization algorithm.
Li, B; Jiang, W
2000-01-01
This paper presents a new stochastic approach, SAGACIA, based on the proper integration of the simulated annealing algorithm (SAA), the genetic algorithm (GA), and the chemotaxis algorithm (CA) for solving complex optimization problems. SAGACIA combines the advantages of SAA, GA, and CA. It has the following features: (1) it is not a simple mix of SAA, GA, and CA; (2) it works from a population; (3) it can easily be used to solve optimization problems with either continuous or discrete variables, and it does not need coding and decoding; and (4) it can easily escape from local minima and converge quickly. Good solutions can be obtained in a very short time. The search process of SAGACIA can be explained with Markov chains. In this paper, it is proved that SAGACIA has the property of global asymptotical convergence. SAGACIA has been applied to solve such problems as scheduling, the training of artificial neural networks, and the optimization of complex functions. In all the test cases, the performance of SAGACIA is better than that of SAA, GA, and CA. PMID:18244742
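A minimal population-based hybrid in the spirit of SAGACIA can be sketched as follows; the operators and cooling schedule are simplified stand-ins, not the published algorithm:

```python
import math
import random

def hybrid_minimize(f, lo, hi, pop_size=20, iters=200, seed=0):
    """Population hybrid: GA-style crossover proposes candidates, an
    SA-style Metropolis test decides acceptance, and a chemotaxis-like
    local probe refines each member (kept only if it improves)."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    T = 1.0
    for _ in range(iters):
        nxt = []
        for x in pop:
            mate = rng.choice(pop)
            child = 0.5 * (x + mate) + rng.gauss(0.0, 0.1 * (hi - lo) * T)
            child = min(hi, max(lo, child))
            d = f(child) - f(x)
            if d < 0 or rng.random() < math.exp(-d / max(T, 1e-9)):
                x = child                 # simulated-annealing acceptance
            probe = min(hi, max(lo, x + rng.gauss(0.0, 0.01 * (hi - lo))))
            if f(probe) < f(x):
                x = probe                 # chemotaxis step
            nxt.append(x)
        pop = nxt
        T *= 0.98                         # cooling schedule
    return min(pop, key=f)

best = hybrid_minimize(lambda x: (x - 3.0) ** 2, -10.0, 10.0)
```

Each ingredient plays the role the abstract assigns it: the population and crossover supply diversity, the Metropolis test lets members escape local minima while the temperature is high, and the chemotaxis probe accelerates final convergence.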
NASA Astrophysics Data System (ADS)
Della Mora, S.; Boschi, L.; Becker, T. W.; Giardini, D.
2010-12-01
The wavelength spectrum of three-dimensional (3D) heterogeneity naturally reflects the nature of Earth dynamics, and is in its own right an important constraint for geodynamical modeling. The Earth's spectrum has usually been evaluated indirectly, on the basis of previously derived tomographic models. If the geographic distribution of seismic heterogeneities is neglected, however, one can invert global seismic data directly to find the spectrum of the Earth. Inverting for the spectrum is in principle cheaper (fewer unknowns) and more robust than inverting for the 3D structure of a planet: this should allow us to constrain planetary structure at smaller scales than current 3D models do. Based on the work of Gudmundsson and coworkers in the early 1990s, we have developed a linear algorithm for surface waves. The spectra we obtain are in qualitative agreement with results from 3D tomography, but the resolving power is generally lower, owing to the simplifications required to linearise the "spectral" inversion. To overcome this problem, we performed fully nonlinear inversions of synthetically generated and real datasets, and compared the obtained spectra with the input and with tomographic models, respectively. The inversions are calculated on a distributed-memory parallel cluster, employing the MPI package. An evolutionary-strategy approach is used to explore the parameter space, using the PIKAIA software. The first preliminary results show a resolving power higher than that of the linearised inversion. This confirms that the approximations required in the linear formulation affect the solution quality, and suggests that the nonlinear approach might effectively help to constrain the heterogeneity spectrum more robustly than currently possible.
A novel approach for accurate identification of splice junctions based on hybrid algorithms.
Mandal, Indrajit
2015-01-01
The precise prediction of splice junctions as 'exon-intron' or 'intron-exon' boundaries in a given DNA sequence is an important task in Bioinformatics. The main challenge is to determine the splice sites in the coding region. Due to the intrinsic complexity and the uncertainty in gene sequences, the adoption of data mining methods is becoming increasingly popular. Various methods have been developed on different strategies in this direction. This article focuses on the construction of new hybrid machine learning ensembles that solve the splice junction task more effectively. A novel supervised feature reduction technique is developed using entropy-based fuzzy rough set theory optimized by a greedy hill-climbing algorithm. The average prediction accuracy achieved is above 98% with a 95% confidence interval. The performance of the proposed methods is evaluated using various metrics to establish the statistical significance of the results. The experiments are conducted using various schemes with human DNA sequence data. The obtained results are highly promising as compared with the state-of-the-art approaches in the literature. PMID:25203504
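The greedy hill-climbing step can be sketched independently of the fuzzy-rough machinery. The relevance scores and redundancy penalty below are invented to make the behavior visible:

```python
def greedy_forward_selection(features, score):
    """Greedy hill climbing for feature reduction: repeatedly add the
    feature that most improves score(subset); stop when nothing helps."""
    selected, current = [], score([])
    remaining = list(features)
    while remaining:
        best_score, best_f = max((score(selected + [f]), f) for f in remaining)
        if best_score <= current:
            break                      # local optimum reached
        selected.append(best_f)
        remaining.remove(best_f)
        current = best_score
    return selected, current

# Toy score: f1 and f2 are individually relevant but mutually redundant.
relevance = {"f1": 0.5, "f2": 0.45, "f3": 0.2}
def score(subset):
    s = sum(relevance[f] for f in subset)
    if "f1" in subset and "f2" in subset:
        s -= 0.5                       # redundancy penalty
    return s

selected, final_score = greedy_forward_selection(["f1", "f2", "f3"], score)
```

The climb picks f1, then f3, and stops: adding the redundant f2 would lower the score, which is exactly the pruning behavior an entropy-based reduct aims for.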
Sezgin, Billur; Ayhan, Suhan; Tuncer, Serhan; Sencan, Ayse; Aral, Mubin
2012-10-01
Despite appropriate surgical technique and follow-up, flap failures can be encountered for which no valid reason is evident. Current literature states that these unpredictable flap failures can be caused by unknown patient factors, such as undiagnosed hypercoagulability. Our approach and experience utilizing an algorithm to minimize unpredictable failures in microvascular breast reconstruction by predetermining hypercoagulation risk factors in preoperative patients are presented. A prospective assessment of microsurgical breast reconstruction candidates between October 2007 and December 2010 was conducted. Patients were questioned about their tendency toward hypercoagulation. A thrombophilia panel was requested for patients confirming any risk factors. Appropriate surgical planning was conducted according to the results of the panel. Of the 60 patients thoroughly questioned about hypercoagulation tendency, 21 (35%) confirmed having prothrombotic tendency and were referred for thrombophilia testing. The results indicated hypercoagulation in 9 (15%) patients. The primary reconstruction plan of utilizing free flaps was abandoned for these patients, and pedicled flaps or implants were preferred for reconstruction. These percentages emphasize the value of questioning risk factors and testing for hypercoagulation in patients seeking microsurgical breast reconstruction. We believe that detailed preoperative questioning of risk factors and appropriate testing according to prothrombotic tendency is beneficial in minimizing unpredictable flap failures and increasing rates of success. PMID:22744893
A multi-layer cellular automata approach for algorithmic generation of virtual case studies: VIBe.
Sitzenfrei, R; Fach, S; Kinzel, H; Rauch, W
2010-01-01
Analyses of case studies are used to evaluate new or existing technologies, measures or strategies with regard to their impact on the overall process. However, data availability is limited and hence new technologies, measures or strategies can only be tested on a limited number of case studies. Owing to the specific boundary conditions and system properties of each single case study, results can hardly be generalized or transferred to other boundary conditions. Virtual Infrastructure Benchmarking (VIBe) is a software tool which algorithmically generates virtual case studies (VCSs) for urban water systems. System descriptions needed for evaluation are extracted from VIBe, whose parameters are based on real-world case studies and literature. As a result, VIBe writes input files for water simulation software such as EPANET and EPA SWMM. With such input files numerous simulations can be performed and the results can be benchmarked and analysed stochastically at a city scale. In this work the approach of VIBe is applied with parameters according to a section of the Inn valley, and therewith 1,000 VCSs are generated and evaluated. A comparison of the VCSs with data from real-world case studies shows that the real-world case studies fit within the parameter ranges of the VCSs. Consequently, VIBe tackles the problem of limited availability of case study data. PMID:20057089
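One layer of such a cellular automaton can be sketched in a few lines. The grid size, growth probability, and seeding are invented parameters, not VIBe's calibrated ones:

```python
import random

def grow_city(n=20, steps=30, seed=3):
    """Toy cellular-automata urban-growth layer: an empty cell becomes
    'built' with a probability proportional to the number of built
    neighbours, so development clusters around the initial seed."""
    rng = random.Random(seed)
    grid = [[0] * n for _ in range(n)]
    grid[n // 2][n // 2] = 1                  # city seed
    for _ in range(steps):
        nxt = [row[:] for row in grid]
        for i in range(n):
            for j in range(n):
                if grid[i][j]:
                    continue
                built = sum(grid[(i + di) % n][(j + dj) % n]
                            for di in (-1, 0, 1) for dj in (-1, 0, 1)
                            if (di, dj) != (0, 0))
                if rng.random() < 0.1 * built:  # more neighbours, more growth
                    nxt[i][j] = 1
        grid = nxt
    return grid

city = grow_city()
```

Stacking several such layers (land use, population density, network placement) and exporting the result as EPANET/SWMM input files is the essence of the multi-layer approach the abstract describes.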
Self-reconfigurable approach for computation-intensive motion estimation algorithm in H.264/AVC
NASA Astrophysics Data System (ADS)
Lee, Jooheung; Ryu, Chul; Kim, Soontae
2012-04-01
The authors propose a self-reconfigurable approach to perform H.264/AVC variable block size motion estimation computation on field-programmable gate arrays. We use dynamic partial reconfiguration to change the hardware architecture of motion estimation during run-time. Hardware adaptation to meet the real-time computing requirements for the given video resolutions and frame rates is performed through self-reconfiguration. An embedded processor is used to control the reconfiguration of partial bitstreams of motion estimation adaptively. The partial bitstreams for different motion estimation computation arrays are compressed using the LZSS algorithm. On-chip BlockRAM is used as a cache to pre-store the partial bitstreams so that run-time reconfiguration can be fully utilized. We designed a hardware module to fetch the pre-stored partial bitstream from BlockRAM to an internal configuration access port. Comparison results show that our motion estimation architecture improves data reuse, and the memory bandwidth overhead is reduced. Using our self-reconfigurable platform, the reconfiguration overhead can be removed and a 367 MB/sec reconfiguration rate can be achieved. The experimental results show that the external memory accesses are reduced by 62.4% and the design can operate at a frequency of 91.7 MHz.
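The LZSS idea used for compressing the partial bitstreams can be sketched at the token level (a generic illustration, not the paper's bit-packed hardware implementation):

```python
def lzss_compress(data, window=255, min_match=3):
    """Minimal LZSS-style compressor: emit a literal byte, or a
    (offset, length) back-reference when a match of at least
    min_match bytes exists in the sliding window."""
    out, i = [], 0
    while i < len(data):
        best_len, best_off = 0, 0
        for j in range(max(0, i - window), i):
            k = 0
            while i + k < len(data) and data[j + k] == data[i + k]:
                k += 1
            if k > best_len:
                best_len, best_off = k, i - j
        if best_len >= min_match:
            out.append(("ref", best_off, best_len))
            i += best_len
        else:
            out.append(("lit", data[i]))
            i += 1
    return out

def lzss_decompress(tokens):
    """Rebuild the byte stream by replaying literals and copying
    back-references one byte at a time (overlap-safe)."""
    buf = []
    for tok in tokens:
        if tok[0] == "lit":
            buf.append(tok[1])
        else:
            _, off, length = tok
            for _ in range(length):
                buf.append(buf[-off])
    return bytes(buf)

tokens = lzss_compress(b"abcabcabcabx")
```

On repetitive data such as FPGA configuration frames, back-references replace long repeats, which is why caching compressed bitstreams in BlockRAM stretches the available on-chip memory.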
NASA Astrophysics Data System (ADS)
Alexandre, E.; Cuadra, L.; Nieto-Borge, J. C.; Candil-García, G.; del Pino, M.; Salcedo-Sanz, S.
2015-08-01
Wave parameters computed from time series measured by buoys (significant wave height Hs, mean wave period, etc.) play a key role in coastal engineering and in the design and operation of wave energy converters. Storms or navigation accidents can make measuring buoys break down, leading to missing data gaps. In this paper we tackle the problem of locally reconstructing Hs at out-of-operation buoys by using wave parameters from nearby buoys, based on the spatial correlation among values at neighboring buoy locations. The novelty of our approach for its potential application to problems in coastal engineering is twofold. On the one hand, we propose a genetic algorithm hybridized with an extreme learning machine that selects, among the available wave parameters from the nearby buoys, a subset FnSP with nSP parameters that minimizes the Hs reconstruction error. On the other hand, we evaluate to what extent the selected parameters in subset FnSP are good enough in assisting other machine learning (ML) regressors (extreme learning machines, support vector machines and Gaussian process regression) to reconstruct Hs. The results show that all the ML methods explored achieve good Hs reconstruction in the two different locations studied (Caribbean Sea and West Atlantic).
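The feature-subset selection step can be illustrated with a toy genetic algorithm. This sketch replaces the extreme learning machine with an ordinary least-squares fit on synthetic data in which only features 0 and 2 drive Hs; the population size, mutation rate, and sparsity penalty are arbitrary choices, not the paper's settings.

```python
import random

random.seed(7)

# Hypothetical training set: 6 candidate wave parameters from nearby buoys,
# of which only features 0 and 2 actually determine Hs at the broken buoy.
N_FEAT, N_OBS = 6, 30
X = [[random.uniform(0, 5) for _ in range(N_FEAT)] for _ in range(N_OBS)]
y = [row[0] + 2.0 * row[2] for row in X]

def solve(a, b):
    """Gauss-Jordan elimination with partial pivoting (for the normal equations)."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(n):
            if r != col and abs(m[col][col]) > 1e-12:
                f = m[r][col] / m[col][col]
                m[r] = [m[r][k] - f * m[col][k] for k in range(n + 1)]
    return [m[i][n] / m[i][i] for i in range(n)]

def subset_error(mask):
    """Mean squared Hs reconstruction error of a least-squares fit on the masked features."""
    idx = [j for j in range(N_FEAT) if mask[j]]
    if not idx:
        return sum(t * t for t in y) / N_OBS
    xs = [[row[j] for j in idx] for row in X]
    ata = [[sum(r[i] * r[k] for r in xs) for k in range(len(idx))] for i in range(len(idx))]
    atb = [sum(r[i] * t for r, t in zip(xs, y)) for i in range(len(idx))]
    w = solve(ata, atb)
    return sum((sum(wi * xi for wi, xi in zip(w, r)) - t) ** 2 for r, t in zip(xs, y)) / N_OBS

def fitness(mask):
    # A small per-feature penalty favours compact subsets (lower is better).
    return subset_error(mask) + 0.01 * sum(mask)

pop = [tuple([1] * N_FEAT)] + \
      [tuple(random.randint(0, 1) for _ in range(N_FEAT)) for _ in range(19)]
best = min(pop, key=fitness)
for _ in range(40):
    nxt = []
    for _ in range(len(pop)):
        a = min(random.sample(pop, 3), key=fitness)   # tournament selection
        b = min(random.sample(pop, 3), key=fitness)
        cut = random.randrange(1, N_FEAT)             # one-point crossover
        child = list(a[:cut] + b[cut:])
        for j in range(N_FEAT):                       # bit-flip mutation
            if random.random() < 0.15:
                child[j] ^= 1
        nxt.append(tuple(child))
    pop = nxt
    best = min([best] + pop, key=fitness)             # keep the best-ever mask
```

Because any mask missing feature 0 or 2 incurs a large reconstruction error, the best-ever mask necessarily retains both informative features.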
Garbuzov, D.Z.; Martinelli, R.U.; Khalfin, V.; Lee, H.; Morris, N.A.; Taylor, G.C.; Connolly, J.C.; Charache, G.W.; DePoy, D.M.
1997-10-01
Heterojunction n-Al{sub 0.25}Ga{sub 0.75}As{sub 0.02}Sb{sub 0.98}/p-In{sub 0.16}Ga{sub 0.84}As{sub 0.04}Sb{sub 0.96} thermophotovoltaic (TPV) cells were grown by molecular-beam epitaxy on n-GaSb substrates. In the spectral range from 1 {micro}m to 2.1 {micro}m these cells, as well as homojunction n-p-In{sub 0.16}Ga{sub 0.84}As{sub 0.04}Sb{sub 0.96} cells, have demonstrated internal quantum efficiencies exceeding 80%, despite an approximately 200 meV barrier in the conduction band at the heterointerface. Estimates show that thermal emission over this barrier of the electrons photogenerated in the p-region can provide high efficiency for hetero-cells if the electron recombination time in p-In{sub 0.16}Ga{sub 0.84}As{sub 0.04}Sb{sub 0.96} is longer than 10 ns. While keeping the same internal efficiency as homojunction cells, hetero-cells provide a unique opportunity to decrease the dark forward current and thereby increase the open-circuit voltage (V{sub oc}) and fill factor at a given illumination level. It is shown that the decrease of the forward current in hetero-cells is due to the lower recombination rate in the n-type wider-bandgap space-charge region and to the suppression of the hole component of the forward current. The improvement in V{sub oc} reaches 100% at an illumination level equivalent to 1 mA/cm{sup 2} and decreases to 5% at the highest illumination levels (2--3 A/cm{sup 2}), where the electron current component dominates in both the homo- and heterojunction cells. Values of V{sub oc} as high as 310 mV have been obtained for a hetero-cell at illumination levels of 3 A/cm{sup 2}. Under this condition, the expected fill factor is about 72% for a hetero-cell with improved series resistance. The heterojunction concept provides excellent prospects for further reduction of the dark forward current in TPV cells.
Prediction of Heart Attack Risk Using GA-ANFIS Expert System Prototype.
Begic Fazlic, Lejla; Avdagic, Aja; Besic, Ingmar
2015-01-01
The aim of this research is to develop a novel GA-ANFIS expert system prototype for classifying the heart disease degree of a patient by using heart disease attributes (features) and diagnoses taken under real conditions. Thirteen attributes have been used as inputs to classifiers based on Adaptive Neuro-Fuzzy Inference Systems (ANFIS) for the first level of fuzzy model optimization. They are then used as inputs to a Genetic Algorithm (GA) for the second level of fuzzy model optimization within the GA-ANFIS system, which thus performs optimization in two steps. Modelling and validation of the novel GA-ANFIS approach are performed in the MATLAB environment. We compared the GA-ANFIS and ANFIS results. For heart disease diagnosis, the proposed GA-ANFIS model with the predicted value technique is more efficient than the earlier ANFIS model. PMID:25980885
New Approach for IIR Adaptive Lattice Filter Structure Using Simultaneous Perturbation Algorithm
NASA Astrophysics Data System (ADS)
Martinez, Jorge Ivan Medina; Nakano, Kazushi; Higuchi, Kohji
Adaptive infinite impulse response (IIR), or recursive, filters are less attractive mainly because of stability issues and the difficulties associated with their adaptive algorithms. In this paper, adaptive IIR lattice filters are therefore studied in order to devise algorithms that preserve the stability of the corresponding direct-form schemes. We analyze the local properties of the stationary points and suggest a transformation achieving this goal, which yields algorithms that can be efficiently implemented. Application to the Steiglitz-McBride (SM) and Simple Hyperstable Adaptive Recursive Filter (SHARF) algorithms is presented. A modified version of Simultaneous Perturbation Stochastic Approximation (SPSA) is also presented in order to obtain the coefficients in lattice form more efficiently and with lower computational cost and complexity. The results are compared with previous lattice versions of these algorithms, which may fail to preserve the stability of stationary points.
NASA Astrophysics Data System (ADS)
Zhang, Jingzhao; Zhang, Yiou; Tse, Kinfai; Deng, Bei; Xu, Hu; Zhu, Junyi
2016-05-01
The accurate absolute surface energies of the (0001)/(0001̄) surfaces of wurtzite structures are crucial in determining the thin film growth mode of important energy materials. However, these surface energies still remain to be solved due to the intrinsic difficulty of calculating the dangling bond energy of asymmetrically bonded surface atoms. In this study, we used a pseudo-hydrogen passivation method to estimate the dangling bond energy and calculate the polar surfaces of ZnO and GaN. The calculations were based on the pseudo chemical potentials obtained from a set of tetrahedral clusters or simple pseudo-molecules, using density functional theory approaches. The surface energies of the (0001)/(0001̄) surfaces of wurtzite ZnO and GaN that we obtained showed relatively high self-consistencies. A wedge structure calculation with a new bottom surface passivation scheme of group-I and group-VII elements was also proposed and performed to show converged absolute surface energies of wurtzite ZnO polar surfaces, and these results were also compared with the above method. The calculated results generally show that the surface energies of GaN are higher than those of ZnO, suggesting that ZnO tends to wet the GaN substrate, while GaN is unlikely to wet ZnO. Therefore, it will be challenging to grow high quality GaN thin films on ZnO substrates; however, high quality ZnO thin film on GaN substrate would be possible. These calculations and comparisons may provide important insights into crystal growth of the above materials, thereby leading to significant performance enhancements in semiconductor devices.
An Algorithmic Framework for Multiobjective Optimization
Ganesan, T.; Elamvazuthi, I.; Shaari, Ku Zilati Ku; Vasant, P.
2013-01-01
Multiobjective (MO) optimization is an emerging field which is increasingly being encountered in many fields globally. Various metaheuristic techniques such as differential evolution (DE), genetic algorithm (GA), gravitational search algorithm (GSA), and particle swarm optimization (PSO) have been used in conjunction with scalarization techniques such as weighted sum approach and the normal-boundary intersection (NBI) method to solve MO problems. Nevertheless, many challenges still arise especially when dealing with problems with multiple objectives (especially in cases more than two). In addition, problems with extensive computational overhead emerge when dealing with hybrid algorithms. This paper discusses these issues by proposing an alternative framework that utilizes algorithmic concepts related to the problem structure for generating efficient and effective algorithms. This paper proposes a framework to generate new high-performance algorithms with minimal computational overhead for MO optimization. PMID:24470795
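The weighted-sum scalarization mentioned above can be shown in a few lines: sweeping the weights turns a two-objective problem into a family of single-objective ones whose minimizers trace part of the Pareto front. The quadratic objectives here are toy examples, not from the paper.

```python
def f1(x):
    return x * x           # toy objective 1 (e.g. cost)

def f2(x):
    return (x - 2.0) ** 2  # toy objective 2 (e.g. error)

def scalarized_min(w1, w2, lo=-1.0, hi=3.0, steps=4001):
    """Grid-minimise the scalarized objective w1*f1 + w2*f2 over [lo, hi]."""
    xs = [lo + (hi - lo) * i / (steps - 1) for i in range(steps)]
    return min(xs, key=lambda x: w1 * f1(x) + w2 * f2(x))

# Sweeping the weight from 1 to 0 walks the minimiser along the Pareto front
# from the f1-optimum (x=0) to the f2-optimum (x=2).
pareto = [scalarized_min(w, 1.0 - w) for w in (1.0, 0.75, 0.5, 0.25, 0.0)]
```

For these convex objectives the analytic minimizer is x* = 2·w2/(w1+w2), so the sweep yields 0, 0.5, 1, 1.5, 2; note that weighted sums can miss non-convex parts of a Pareto front, which is one motivation for the NBI method named above.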
Dalzini, Annalisa; Bergamini, Christian; Biondi, Barbara; De Zotti, Marta; Panighel, Giacomo; Fato, Romana; Peggion, Cristina; Bortolus, Marco; Maniero, Anna Lisa
2016-01-01
Peptaibols are peculiar peptides produced by fungi as weapons against other microorganisms. Previous studies showed that peptaibols are promising peptide-based drugs because they act against cell membranes rather than a specific target, thus lowering the possibility of the onset of multi-drug resistance, and they possess non-coded α-amino acid residues that confer proteolytic resistance. Trichogin GA IV (TG) is a short peptaibol displaying antimicrobial and cytotoxic activity. In the present work, we studied thirteen TG analogues, adopting a multidisciplinary approach. We showed that the cytotoxicity is tuneable by single amino-acids substitutions. Many analogues maintain the same level of non-selective cytotoxicity of TG and three analogues are completely non-toxic. Two promising lead compounds, characterized by the introduction of a positively charged unnatural amino-acid in the hydrophobic face of the helix, selectively kill T67 cancer cells without affecting healthy cells. To explain the determinants of the cytotoxicity, we investigated the structural parameters of the peptides, their cell-binding properties, cell localization, and dynamics in the membrane, as well as the cell membrane composition. We show that, while cytotoxicity is governed by the fine balance between the amphipathicity and hydrophobicity, the selectivity depends also on the expression of negatively charged phospholipids on the cell surface. PMID:27039838
NASA Astrophysics Data System (ADS)
Yan, H.; Zheng, M. J.; Zhu, D. Y.; Wang, H. T.; Chang, W. S.
2015-07-01
When using the clutter suppression interferometry (CSI) algorithm to perform signal processing in a three-channel wide-area surveillance radar system, the primary concern is to effectively suppress the ground clutter. However, a portion of the moving target's energy is also lost in the process of channel cancellation, which is often neglected in conventional applications. In this paper, we first investigate the two-dimensional (radial velocity dimension and squint angle dimension) residual amplitude of moving targets after channel cancellation with the CSI algorithm. Then, a new approach is proposed to increase the two-dimensional detection probability of moving targets by retaining the maximum value of the three channel cancellation results in a non-uniformly spaced channel system. In addition, a theoretical expression for the false alarm probability of the proposed approach is derived. Compared with the conventional approaches in a uniformly spaced channel system, simulation results validate the effectiveness of the proposed approach. To our knowledge, this is the first time that the two-dimensional detection probability of the CSI algorithm has been studied.
2012-01-01
RA is a syndrome consisting of different pathogenetic subsets in which distinct molecular mechanisms may drive common final pathways. Recent work has provided proof of principle that biomarkers predictive of the response to targeted therapy may be identified. Based on new insights, an initial treatment algorithm is presented that may be used to guide treatment decisions in patients who have failed one TNF inhibitor. Key questions in this algorithm are whether the patient is a primary or a secondary non-responder to TNF blockade and whether the patient is RF and/or anti-citrullinated peptide antibody positive. This preliminary algorithm may contribute to more cost-effective treatment of RA, and provides the basis for more extensive algorithms when additional data become available. PMID:21890615
Asgari, Mohammad; Soltani, Nasim Yahya; Riahi, Ali
2010-01-01
A variety of wideband direction-of-arrival (DOA) estimation algorithms exist. Their structure comprises a number of narrowband estimators, each operating at one frequency in a given bandwidth, whose different responses must then be combined in a proper way to yield the true DOAs. Hence, wideband algorithms are always complex and thus non-real-time. This paper investigates a method to derive a flat response of the narrowband multiple signal classification (MUSIC) [R. O. Schmidt, IEEE Trans. Antennas Propag., 34, 276-280 (1986)] algorithm over the whole frequency range of a given band. The required conditions for applying a narrowband algorithm to wideband impinging signals are given through a concrete analysis. It is found that the array sensor locations are able to compensate for the frequency variations so as to reach a flat DOA response over a specified wideband frequency range. PMID:20058975
Ahirwal, M K; Kumar, Anil; Singh, G K
2013-01-01
This paper explores the migration of adaptive filtering to swarm intelligence/evolutionary techniques in the field of electroencephalogram/event-related potential (ERP) noise cancellation and extraction. A new approach is proposed in the form of a controlled search space to stabilize the randomness of swarm intelligence techniques, especially for the EEG signal. Swarm-based algorithms such as Particle Swarm Optimization, Artificial Bee Colony, and the Cuckoo Optimization Algorithm, with their variants, are implemented to design an optimized adaptive noise canceler. The proposed controlled search space technique is tested on each of the swarm intelligence techniques and is found to be more accurate and powerful. Adaptive noise cancelers with traditional algorithms such as least-mean-square, normalized least-mean-square, and recursive least-mean-square algorithms are also implemented to compare the results. ERP signals such as simulated visual evoked potentials, real visual evoked potentials, and real sensorimotor evoked potentials are used, due to their physiological importance in various EEG studies. The average computational time and shape measure of the evolutionary techniques are observed to be 8.21E-01 sec and 1.73E-01, respectively. Although the traditional algorithms take negligible time, they are unable to offer good shape preservation of the ERP, with an average computational time and shape measure of 1.41E-02 sec and 2.60E+00, respectively. PMID:24407307
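A minimal least-mean-square (LMS) adaptive noise canceler, the classical baseline this paper compares against, can be sketched as follows. The ERP-like signal, noise model, filter length, and step size are all illustrative assumptions, not the paper's settings.

```python
import math
import random

random.seed(1)
N, TAPS, MU = 4000, 8, 0.01
# "ERP"-like clean component, a reference noise channel, and the contaminated
# primary channel (clean signal plus correlated noise).
clean = [math.sin(2 * math.pi * 7 * t / N) for t in range(N)]
noise = [random.gauss(0.0, 1.0) for _ in range(N)]
primary = [c + 0.8 * n for c, n in zip(clean, noise)]

w = [0.0] * TAPS
out = []
for t in range(N):
    x = [noise[t - k] if t - k >= 0 else 0.0 for k in range(TAPS)]  # reference taps
    y = sum(wi * xi for wi, xi in zip(w, x))        # estimate of the noise in primary
    e = primary[t] - y                              # error = cleaned sample
    w = [wi + MU * e * xi for wi, xi in zip(w, x)]  # LMS weight update
    out.append(e)

def mse(a, b):
    return sum((u - v) ** 2 for u, v in zip(a, b)) / len(a)
```

After convergence the canceler's output tracks the clean component far more closely than the raw primary channel does.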
A new approach to optic disc detection in human retinal images using the firefly algorithm.
Rahebi, Javad; Hardalaç, Fırat
2016-03-01
There are various methods and algorithms to detect the optic disc in retinal images. In recent years, much attention has been given to the utilization of intelligent algorithms. In this paper, we present a new automated method of optic disc detection in human retinal images using the firefly algorithm. The firefly algorithm is an emerging intelligent algorithm inspired by the social behavior of fireflies. The population in this algorithm consists of fireflies, each of which has a specific rate of lighting, or fitness. In this method, the insects are compared two by two, and the less attractive insects move toward the more attractive insects. Finally, one of the insects is selected as the most attractive, and this insect presents the optimum response to the problem in question. Here, we used the light intensity of the retinal image pixels instead of firefly lightings. The movement of these insects due to local fluctuations produces different light intensity values in the images. Because the optic disc is the brightest area in a retinal image, all of the insects move toward the brightest area and thus specify the location of the optic disc in the image. The results of implementation show that the proposed algorithm achieves an accuracy rate of 100% on the DRIVE dataset, 95% on the STARE dataset, and 94.38% on the DiaRetDB1 dataset. These results reveal the high capability and accuracy of the proposed algorithm in detecting the optic disc in retinal images. The average time required for optic disc detection is 2.13 s for the DRIVE dataset, 2.81 s for the STARE dataset, and 3.52 s for the DiaRetDB1 dataset. PMID:26093773
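The idea of letting pixel brightness play the role of firefly attractiveness can be sketched on a synthetic image with one bright blob standing in for the optic disc. The swarm size, attraction strength, and jitter are arbitrary; this is not the authors' implementation.

```python
import random

random.seed(3)
W = H = 40                 # synthetic "retina" grid
DISC = (28.0, 11.0)        # hypothetical optic-disc centre (x, y)

def intensity(x, y):
    """Bright blob at DISC, darker elsewhere: stands in for pixel brightness."""
    dx, dy = x - DISC[0], y - DISC[1]
    return 255.0 / (1.0 + 0.05 * (dx * dx + dy * dy))

flies = [[random.uniform(0, W - 1), random.uniform(0, H - 1)] for _ in range(15)]
init_best = max(intensity(x, y) for x, y in flies)

for _ in range(100):
    for fi in flies:
        for fj in flies:
            # A dimmer firefly moves toward a brighter one, with a small jitter
            # that lets the swarm keep exploring around the current optimum.
            if intensity(*fj) > intensity(*fi):
                fi[0] += 0.3 * (fj[0] - fi[0]) + random.uniform(-0.3, 0.3)
                fi[1] += 0.3 * (fj[1] - fi[1]) + random.uniform(-0.3, 0.3)

best = max(flies, key=lambda f: intensity(*f))
```

Because the currently brightest firefly never moves until another surpasses it, the swarm's best intensity is non-decreasing, and the final `best` position approximates the bright region's centre.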
Symbolic integration of a class of algebraic functions. [by an algorithmic approach
NASA Technical Reports Server (NTRS)
Ng, E. W.
1974-01-01
An algorithm is presented for the symbolic integration of a class of algebraic functions. This class consists of functions made up of rational expressions of an integration variable x and square roots of polynomials, trigonometric and hyperbolic functions of x. The algorithm is shown to consist of the following components: (1) the reduction of input integrands to canonical form; (2) intermediate internal representations of integrals; (3) classification of outputs; and (4) reduction and simplification of outputs to well-known functions.
NASA Astrophysics Data System (ADS)
Lau, Erin-Ee-Lin; Chung, Wan-Young
A novel RSSI (Received Signal Strength Indication) refinement algorithm is proposed to enhance the resolution of an indoor and outdoor real-time location tracking system. The proposed refinement algorithm is implemented in two separate phases. During the first phase, called the pre-processing step, RSSI values at different static locations are collected and processed to build a calibrated model for each reference node. Different measurement campaigns pertinent to each parameter in the model are carried out to analyze the sensitivity of the RSSI. The propagation models constructed for the reference nodes are needed by the second phase. During this phase, called the runtime process, real-time tracking is performed. A smoothing algorithm is proposed to minimize the dynamic fluctuation of the radio signal received from each reference node while the mobile target is moving. The filtered RSSI values are converted to distances using the formula calibrated in the first phase. Finally, an iterative trilateration algorithm is used for position estimation. Experiments on the optimization algorithm were carried out in both indoor and outdoor environments, and the results validate the feasibility of the proposed algorithm in reducing dynamic fluctuation for more accurate position estimation.
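The position-estimation step can be illustrated with a closed-form trilateration from three reference nodes, together with one simple choice of smoothing (exponential). The paper's iterative variant and calibrated RSSI-to-distance model are not reproduced; the anchor coordinates are made-up test values.

```python
def trilaterate(anchors, dists):
    """Closed-form position from three anchors: subtracting the first circle
    equation from the other two leaves a 2x2 linear system A [x, y]^T = b."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1 ** 2 - d2 ** 2 + x2 ** 2 - x1 ** 2 + y2 ** 2 - y1 ** 2
    b2 = d1 ** 2 - d3 ** 2 + x3 ** 2 - x1 ** 2 + y3 ** 2 - y1 ** 2
    det = a11 * a22 - a12 * a21          # Cramer's rule for the 2x2 system
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

def smooth(samples, alpha=0.3):
    """Exponential smoothing: one simple way to damp RSSI fluctuation."""
    out, acc = [], samples[0]
    for s in samples:
        acc = alpha * s + (1 - alpha) * acc
        out.append(acc)
    return out
```

For a target at (3, 4) with anchors at (0, 0), (10, 0), and (0, 10), passing the exact distances recovers the position to floating-point accuracy.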
NASA Astrophysics Data System (ADS)
Dalzell, B. J.; Gassman, P. W.; Kling, C.
2015-12-01
In the Minnesota River Basin, sediments originating from failing stream banks and bluffs account for the majority of the riverine load and contribute to water quality impairments in the Minnesota River as well as portions of the Mississippi River upstream of Lake Pepin. One approach for mitigating this problem may be targeted wetland restoration in Minnesota River Basin tributaries in order to reduce the magnitude and duration of peak flow events which contribute to bluff and stream bank failures. In order to determine effective arrangements and properties of wetlands to achieve peak flow reduction, we are employing a genetic algorithm approach coupled with a SWAT model of the Cottonwood River, a tributary of the Minnesota River. The genetic algorithm approach will evaluate combinations of basic wetland features as represented by SWAT: surface area, volume, contributing area, and hydraulic conductivity of the wetland bottom. These wetland parameters will be weighed against economic considerations associated with land use trade-offs in this agriculturally productive landscape. Preliminary results show that the SWAT model is capable of simulating daily hydrology very well and genetic algorithm evaluation of wetland scenarios is ongoing. Anticipated results will include (1) combinations of wetland parameters that are most effective for reducing peak flows, and (2) evaluation of economic trade-offs between wetland restoration, water quality, and agricultural productivity in the Cottonwood River watershed.
NASA Technical Reports Server (NTRS)
Idris, Husni; Vivona, Robert A.; Al-Wakil, Tarek
2009-01-01
This document describes exploratory research on a distributed, trajectory oriented approach for traffic complexity management. The approach is to manage traffic complexity based on preserving trajectory flexibility and minimizing constraints. In particular, the document presents metrics for trajectory flexibility; a method for estimating these metrics based on discrete time and degree of freedom assumptions; a planning algorithm using these metrics to preserve flexibility; and preliminary experiments testing the impact of preserving trajectory flexibility on traffic complexity. The document also describes an early demonstration capability of the trajectory flexibility preservation function in the NASA Autonomous Operations Planner (AOP) platform.
Results with an Algorithmic Approach to Hybrid Repair of the Aortic Arch
Andersen, Nicholas D.; Williams, Judson B.; Hanna, Jennifer M.; Shah, Asad A.; McCann, Richard L.; Hughes, G. Chad
2013-01-01
Objective Hybrid repair of the transverse aortic arch may allow for aortic arch repair with reduced morbidity in patients who are suboptimal candidates for conventional open surgery. Here, we present our results with an algorithmic approach to hybrid arch repair, based upon the extent of aortic disease and patient comorbidities. Methods Between August 2005 and January 2012, 87 patients underwent hybrid arch repair by three principal procedures: zone 1 endograft coverage with extra-anatomic left carotid revascularization (zone 1, n=19), zone 0 endograft coverage with aortic arch debranching (zone 0, n=48), or total arch replacement with staged stented elephant trunk completion (stented elephant trunk, n=20). Results The mean patient age was 64 years and the mean expected in-hospital mortality rate was 16.3% as calculated by the EuroSCORE II. 22% (n=19) of operations were non-elective. Sternotomy, cardiopulmonary bypass, and deep hypothermic circulatory arrest were required in 78% (n=68), 45% (n=39), and 31% (n=27) of patients, respectively, to allow for total arch replacement, arch debranching, or other concomitant cardiac procedures, including ascending ± hemi-arch replacement in 17% (n=8) of patients undergoing zone 0 repair. All stented elephant trunk procedures (n=20) and 19% (n=9) of zone 0 procedures were staged, with 41% (n=12) of patients undergoing staged repair during a single hospitalization. The 30-day/in-hospital rates of stroke and permanent paraplegia/paraparesis were 4.6% (n=4) and 1.2% (n=1), respectively. Three of 27 (11.1%) patients with native ascending aorta zone 0 proximal landing zone experienced retrograde type A dissection following endograft placement. The overall in-hospital mortality rate was 5.7% (n=5), however, 30-day/in-hospital mortality increased to 14.9% (n=13) due to eight 30-day out-of-hospital deaths. Native ascending aorta zone 0 endograft placement was found to be the only univariate predictor of 30-day/in-hospital mortality
NASA Technical Reports Server (NTRS)
Hu, Chuanmin; Lee, Zhongping; Franz, Bryan
2011-01-01
A new empirical algorithm is proposed to estimate surface chlorophyll-a concentrations (Chl) in the global ocean for Chl less than or equal to 0.25 milligrams per cubic meter (approximately 77% of the global ocean area). The algorithm is based on a color index (CI), defined as the difference between the remote sensing reflectance (R(sub rs), sr(sup -1)) in the green and a reference formed linearly between R(sub rs) in the blue and red. For low Chl waters, in situ data showed a tighter (and therefore better) relationship between CI and Chl than between traditional band-ratios and Chl, which was further validated using global data collected concurrently by ship-borne and SeaWiFS satellite instruments. Model simulations showed that for low Chl waters, compared with the band-ratio algorithm, the CI-based algorithm (CIA) was more tolerant to changes in the chlorophyll-specific backscattering coefficient, and performed similarly for different relative contributions of non-phytoplankton absorption. Simulations using existing atmospheric correction approaches further demonstrated that the CIA was much less sensitive than band-ratio algorithms to various errors induced by instrument noise and imperfect atmospheric correction (including sun glint and whitecap corrections). Image and time-series analyses of SeaWiFS and MODIS/Aqua data also showed improved performance in terms of reduced image noise, more coherent spatial and temporal patterns, and consistency between the two sensors. The reduction in noise and other errors is particularly useful to improve the detection of various ocean features such as eddies. Preliminary tests over MERIS and CZCS data indicate that the new approach should be generally applicable to all existing and future ocean color instruments.
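The color index itself is simple to compute. The sketch below uses SeaWiFS-like band centres; the CI-to-Chl regression coefficients are placeholders for illustration, not the published fit.

```python
BLUE, GREEN, RED = 443.0, 555.0, 670.0  # SeaWiFS-like band centres, nm

def color_index(rrs_blue, rrs_green, rrs_red):
    """CI: green-band Rrs minus a linear baseline drawn between blue and red."""
    baseline = rrs_blue + (GREEN - BLUE) / (RED - BLUE) * (rrs_red - rrs_blue)
    return rrs_green - baseline

def chl_from_ci(ci, a0=-0.49, a1=191.7):
    """Illustrative Chl = 10**(a0 + a1*CI); a0 and a1 are stand-in coefficients."""
    return 10.0 ** (a0 + a1 * ci)
```

By construction the CI is zero whenever the green reflectance lies exactly on the blue-red baseline, and positive when it sits above it.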
Scheduling Earth Observing Fleets Using Evolutionary Algorithms: Problem Description and Approach
NASA Technical Reports Server (NTRS)
Globus, Al; Crawford, James; Lohn, Jason; Morris, Robert; Clancy, Daniel (Technical Monitor)
2002-01-01
We describe work in progress concerning multi-instrument, multi-satellite scheduling. Most, although not all, Earth observing instruments currently in orbit are unique. In the relatively near future, however, we expect to see fleets of Earth observing spacecraft, many carrying nearly identical instruments. This presents a substantially new scheduling challenge. Inspired by successful commercial applications of evolutionary algorithms in scheduling domains, this paper presents work in progress regarding the use of evolutionary algorithms to solve a set of Earth observing related model problems. Both the model problems and the software are described. Since the larger problems will require substantial computation and evolutionary algorithms are embarrassingly parallel, we discuss our parallelization techniques using dedicated and cycle-scavenged workstations.
The M-OLAP Cube Selection Problem: A Hyper-polymorphic Algorithm Approach
NASA Astrophysics Data System (ADS)
Loureiro, Jorge; Belo, Orlando
OLAP systems depend heavily on the materialization of multidimensional structures to speed up queries; the appropriate selection of these structures constitutes the cube selection problem. However, the recently proposed distribution of OLAP structures emerges to answer new globalization requirements, capturing the known advantages of distributed databases. But this complicates the search for solutions, especially due to the inherent heterogeneity, imposing an extra requirement on the algorithm that must be used: adaptability. Here the emerging concept known as hyper-heuristics can be a solution. In fact, an algorithm in which several (meta-)heuristics may be selected under the control of a heuristic has intrinsically adaptive behavior. This paper presents a hyper-heuristic polymorphic algorithm used to solve the extended cube selection and allocation problem generated in M-OLAP architectures.
Gokhale, Nikhil S
2016-01-01
Vernal keratoconjunctivitis is an ocular allergy that is common in the pediatric age group. It is often chronic, severe, and nonresponsive to the available treatment options. Management of these children is difficult and often a dilemma for the practitioner. There is a need to simplify and standardize its management. To achieve this goal, we require a grading system to judge the severity of inflammation and an algorithm to select the appropriate medications. This article provides a simple and practically useful grading system and a stepladder algorithm for systematic treatment of these patients. Use of appropriate treatment modalities can reduce treatment and disease-related complications. PMID:27050351
Genetic algorithm optimization of atomic clusters
Morris, J.R.; Deaven, D.M.; Ho, K.M.; Wang, C.Z.; Pan, B.C.; Wacker, J.G.; Turner, D.E.
1996-12-31
The authors have been using genetic algorithms to study the structures of atomic clusters and related problems. This is a problem where local minima are easy to locate, but barriers between the many minima are large, and the number of minima prohibits a systematic search. They use a novel mating algorithm that preserves some of the geometrical relationship between atoms, in order to ensure that the resultant structures are likely to inherit the best features of the parent clusters. Using this approach, they have been able to find lower energy structures than had previously been obtained. Most recently, they have been able to turn the building-block idea around, using optimized structures from the GA to learn about systematic structural trends. They believe that an effective GA can help provide such heuristic information and, conversely, that such information can be introduced back into the algorithm to assist in the search process.
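The geometry-preserving mating step can be sketched in the spirit of cut-and-splice: both parents are cut by the same random plane through their centres of mass and complementary halves are joined. Energy evaluation and structural relaxation, essential in the real GA, are omitted, and the coordinates are random test data.

```python
import random

def centred(cluster):
    """Translate a cluster so its centre of mass sits at the origin."""
    n = len(cluster)
    cx = sum(p[0] for p in cluster) / n
    cy = sum(p[1] for p in cluster) / n
    cz = sum(p[2] for p in cluster) / n
    return [(x - cx, y - cy, z - cz) for x, y, z in cluster]

def mate(parent_a, parent_b, rng):
    """Cut both parents by the same random plane through the centre of mass and
    join complementary halves, preserving local geometry from each parent."""
    a, b = centred(parent_a), centred(parent_b)
    nx, ny, nz = (rng.gauss(0, 1) for _ in range(3))   # cutting-plane normal
    side = lambda p: p[0] * nx + p[1] * ny + p[2] * nz
    top = sorted(a, key=side, reverse=True)            # "upper" half of parent A
    bottom = sorted(b, key=side)                       # "lower" half of parent B
    k = len(parent_a) // 2
    return top[:k] + bottom[:len(parent_a) - k]

rng = random.Random(0)
parent_a = [(rng.uniform(-2, 2), rng.uniform(-2, 2), rng.uniform(-2, 2)) for _ in range(10)]
parent_b = [(rng.uniform(-2, 2), rng.uniform(-2, 2), rng.uniform(-2, 2)) for _ in range(10)]
child = mate(parent_a, parent_b, rng)
half_a, half_b = centred(parent_a), centred(parent_b)
```

The child keeps the parent size and inherits half its atoms from each parent, which is what lets compact low-energy motifs survive recombination.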
NASA Astrophysics Data System (ADS)
Meyer, Ulrich; Negoescu, Andrei; Weichert, Volker
Despite disillusioning worst-case behavior, classic algorithms for single-source shortest-paths (SSSP) like Bellman-Ford are still being used in practice, especially due to their simple data structures. However, surprisingly little is known about the average-case complexity of these approaches. We provide new theoretical and experimental results for the performance of classic label-correcting SSSP algorithms on graph classes with non-negative random edge weights. In particular, we prove a tight lower bound of Ω(n^2) for the running times of Bellman-Ford on a class of sparse graphs with O(n) nodes and edges; the best previous bound was Ω(n^(4/3-ε)). The same improvements are shown for Pallottino's algorithm. We also lift a lower bound for the approximate bucket implementation of Dijkstra's algorithm from Ω(n log n / log log n) to Ω(n^(1.2-ε)). Furthermore, we provide an experimental evaluation of our new graph classes in comparison with previously used test inputs.
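For reference, the Bellman-Ford label-correcting scheme analyzed above can be sketched in a few lines. This is the textbook version with the common early-exit optimization, not the specific benchmarked variants:

```python
def bellman_ford(n, edges, source):
    """Classic Bellman-Ford: relax every edge up to n-1 times.

    n      -- number of nodes, labelled 0..n-1
    edges  -- list of (u, v, weight) tuples with non-negative weights
    source -- start node
    Returns the list of shortest distances (inf if unreachable).
    """
    INF = float('inf')
    dist = [INF] * n
    dist[source] = 0
    for _ in range(n - 1):
        updated = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:   # relax the edge (u, v)
                dist[v] = dist[u] + w
                updated = True
        if not updated:                 # early exit: labels are stable
            break
    return dist
```

The average-case results above concern exactly how often the outer loop must run before labels stabilize on random-weight inputs.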
Premaladha, J; Ravichandran, K S
2016-04-01
Dermoscopy is a technique used to capture images of the skin, and these images are useful for analyzing different types of skin diseases. Malignant melanoma is a kind of skin cancer whose severity can even lead to death. Early detection of melanoma allows clinicians to treat patients in time and increases the chances of survival. Only a few machine learning algorithms have been developed to detect melanoma using its features. This paper proposes a Computer Aided Diagnosis (CAD) system which equips efficient algorithms to classify and predict melanoma. Enhancement of the images is done using the Contrast Limited Adaptive Histogram Equalization (CLAHE) technique and a median filter. A new segmentation algorithm called Normalized Otsu's Segmentation (NOS) is implemented to segment the affected skin lesion from the normal skin, which overcomes the problem of variable illumination. Fifteen features are derived and extracted from the segmented images and fed into the proposed classification techniques: Deep Learning based Neural Networks and a Hybrid AdaBoost-Support Vector Machine (SVM) algorithm. The proposed system is tested and validated with nearly 992 images (malignant & benign lesions) and provides a high classification accuracy of 93%. The proposed CAD system can assist dermatologists in confirming the diagnosis and avoiding excisional biopsies. PMID:26872778
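The segmentation step builds on Otsu's method, which picks the threshold maximizing the between-class variance of the intensity histogram. A sketch of plain Otsu follows; the paper's "Normalized Otsu's Segmentation" variant adds an illumination-normalization step that the abstract does not detail, so only the standard core is shown:

```python
def otsu_threshold(pixels, levels=256):
    """Standard Otsu: pick the threshold maximizing between-class variance."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = 0        # background (class 0) pixel count
    sum0 = 0.0    # background intensity sum
    for t in range(levels):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0
        mu1 = (total_sum - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

Pixels at or below the returned threshold form one class (e.g. lesion vs. background after contrast enhancement).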
Wang, Shuaiqun; Aorigele; Kong, Wei; Zeng, Weiming; Hong, Xiaomin
2016-01-01
Gene expression data composed of thousands of genes play an important role in classification platforms and disease diagnosis. Hence, it is vital to select a small subset of salient features over a large number of gene expression data. Lately, many researchers have devoted themselves to feature selection using diverse computational intelligence methods. However, in the process of selecting informative genes, many computational methods face difficulties in selecting small subsets for cancer classification due to the huge number of genes (high dimension) compared to the small number of samples, noisy genes, and irrelevant genes. In this paper, we propose a new hybrid algorithm, HICATS, incorporating the imperialist competition algorithm (ICA), which performs global search, and tabu search (TS), which conducts fine-tuned search. In order to verify the performance of the proposed algorithm HICATS, we have tested it on 10 well-known benchmark gene expression classification datasets with dimensions varying from 2308 to 12600. The performance of our proposed method proved to be superior to other related works, including the conventional version of the binary optimization algorithm, in terms of classification accuracy and the number of selected genes. PMID:27579323
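The TS component performs the fine-tuned local search over binary gene masks. A minimal sketch of tabu search on feature subsets follows; the one-bit-flip move, tabu tenure, and aspiration rule here are generic illustrations, not HICATS implementation details:

```python
from collections import deque

def tabu_search(score, n_features, start, iters=60, tabu_len=5):
    """Minimal tabu search over binary feature masks.

    score -- function mask (tuple of 0/1) -> value to MINIMIZE,
             e.g. classification error plus a penalty on subset size.
    Each move flips one bit; recently flipped bits are tabu unless the
    move beats the best solution found so far (aspiration criterion).
    """
    current = best = tuple(start)
    best_val = score(best)
    tabu = deque(maxlen=tabu_len)
    for _ in range(iters):
        candidates = []
        for i in range(n_features):
            neigh = list(current)
            neigh[i] ^= 1
            neigh = tuple(neigh)
            val = score(neigh)
            if i not in tabu or val < best_val:   # aspiration
                candidates.append((val, i, neigh))
        if not candidates:
            break
        val, i, current = min(candidates)   # best admissible move
        tabu.append(i)
        if val < best_val:
            best, best_val = current, val
    return best, best_val
```

In the hybrid scheme, a global-search phase (here, ICA) would supply the starting masks that this local phase refines.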
A Low-Tech, Hands-On Approach To Teaching Sorting Algorithms to Working Students.
ERIC Educational Resources Information Center
Dios, R.; Geller, J.
1998-01-01
Focuses on identifying the educational effects of "activity oriented" instructional techniques. Examines which instructional methods produce enhanced learning and comprehension. Discusses the problem of learning "sorting algorithms," a major topic in every Computer Science curriculum. Presents a low-tech, hands-on teaching method for sorting…
Point process algorithm: a new Bayesian approach for TPF-I planet signal extraction
NASA Technical Reports Server (NTRS)
Velusamy, T.; Marsh, K. A.; Ware, B.
2005-01-01
TPF-I capability for planetary signal extraction, including both detection and spectral characterization, can be optimized by taking proper account of instrumental characteristics and astrophysical prior information. We have developed the Point Process Algorithm, a Bayesian technique for extracting planetary signals using the sine/cosine chopped outputs of a dual nulling interferometer.
Application of genetic algorithms to tuning fuzzy control systems
NASA Technical Reports Server (NTRS)
Espy, Todd; Vombrack, Endre; Aldridge, Jack
1993-01-01
Real number genetic algorithms (GA) were applied for tuning fuzzy membership functions of three controller applications. The first application is our 'Fuzzy Pong' demonstration, a controller that controls a very responsive system. The performance of the automatically tuned membership functions exceeded that of manually tuned membership functions both when the algorithm started with randomly generated functions and with the best manually-tuned functions. The second GA tunes input membership functions to achieve a specified control surface. The third application is a practical one, a motor controller for a printed circuit manufacturing system. The GA alters the positions and overlaps of the membership functions to accomplish the tuning. The applications, the real number GA approach, the fitness function and population parameters, and the performance improvements achieved are discussed. Directions for further research in tuning input and output membership functions and in tuning fuzzy rules are described.
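The core of such a tuner is a real-number GA over membership-function parameters (e.g. centers and widths). The sketch below uses assumed operators (rank-based mating, blend crossover, Gaussian mutation, elitism) to illustrate the approach; it is not the controllers' actual implementation:

```python
import random

def tune_membership(fitness, bounds, pop_size=20, gens=40, seed=1):
    """Minimal real-number GA of the kind used to tune fuzzy membership
    function parameters (e.g. centers and overlaps).

    fitness -- function params -> error to MINIMIZE
    bounds  -- list of (lo, hi) limits, one pair per parameter
    """
    rng = random.Random(seed)
    clip = lambda x, lo, hi: max(lo, min(hi, x))
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        ranked = sorted(pop, key=fitness)
        children = [list(p) for p in ranked[:2]]          # elitism
        while len(children) < pop_size:
            p1, p2 = rng.sample(ranked[:pop_size // 2], 2)  # mate good half
            child = []
            for (a, b), (lo, hi) in zip(zip(p1, p2), bounds):
                x = a + rng.random() * (b - a)            # blend crossover
                if rng.random() < 0.1:                    # Gaussian mutation
                    x += rng.gauss(0.0, 0.1 * (hi - lo))
                child.append(clip(x, lo, hi))
            children.append(child)
        pop = children
    return min(pop, key=fitness)
```

For fuzzy tuning, `fitness` would simulate the controller with the candidate membership functions and return a tracking-error measure.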
NASA Astrophysics Data System (ADS)
Abed, Azher M.; Abed, Issa Ahmed; Majdi, Hasan Sh.; Al-Shamani, Ali Najah; Sopian, K.
2016-02-01
This study proposes a new procedure for the optimal design of shell and tube heat exchangers. The electromagnetism-like algorithm is applied to save on heat exchanger capital cost and to design a compact, high-performance heat exchanger with effective use of the allowable pressure drop (cost of the pump). An optimization algorithm is then utilized to determine the optimal values of both the geometric design parameters and the maximum allowable pressure drop by pursuing the minimization of a total cost function. A computer code is developed for the optimal design of shell and tube heat exchangers. Different test cases are solved to demonstrate the effectiveness and ability of the proposed algorithm. Results are also compared with those obtained by other approaches available in the literature. The comparisons indicate that the proposed design procedure can be successfully applied to the optimal design of shell and tube heat exchangers. In particular, in the examined cases reductions in total cost of up to 30, 29 and 56.15% compared with the original design, and up to 18, 5.5 and 7.4% compared with other approaches, are observed for case studies 1, 2 and 3, respectively. The economic optimization resulting from the proposed design procedure is especially relevant when a compact, high-performance unit of moderate volume and cost is needed.
NASA Technical Reports Server (NTRS)
Fitzjerrell, D. G.; Grounds, D. J.; Leonard, J. I.
1975-01-01
Using a whole body algorithm simulation model, a wide variety and large number of stresses as well as different stress levels were simulated including environmental disturbances, metabolic changes, and special experimental situations. Simulation of short term stresses resulted in simultaneous and integrated responses from the cardiovascular, respiratory, and thermoregulatory subsystems and the accuracy of a large number of responding variables was verified. The capability of simulating significantly longer responses was demonstrated by validating a four week bed rest study. In this case, the long term subsystem model was found to reproduce many experimentally observed changes in circulatory dynamics, body fluid-electrolyte regulation, and renal function. The value of systems analysis and the selected design approach for developing a whole body algorithm was demonstrated.
A nondamaging electron microscopy approach to map In distribution in InGaN light-emitting diodes
NASA Astrophysics Data System (ADS)
Özdöl, V. B.; Koch, C. T.; van Aken, P. A.
2010-09-01
Dark-field inline electron holography and, for comparison, high-resolution transmission electron microscopy are used to investigate the distribution of indium in GaN-based commercial high-efficiency green light-emitting diodes consisting of InGaN multiquantum wells (QWs). Owing to the low electron doses used in inline holography measurements, this technique allows mapping of the indium distribution without introducing any noticeable electron beam-induced damage, which is hardly avoidable in other quantitative transmission electron microscopy methods. Combining a large field of view with a spatial resolution better than 1 nm, we show that the InGaN QWs exhibit a random alloy nature without any evidence of nanometer-scale gross indium clustering in the whole active region.
Multi-objective optimization of lithium-ion battery model using genetic algorithm approach
NASA Astrophysics Data System (ADS)
Zhang, Liqiang; Wang, Lixin; Hinds, Gareth; Lyu, Chao; Zheng, Jun; Li, Junfu
2014-12-01
A multi-objective parameter identification method for modeling of Li-ion battery performance is presented. Terminal voltage and surface temperature curves at 15 °C and 30 °C are used as four identification objectives. The Pareto fronts of two types of Li-ion battery are obtained using the modified multi-objective genetic algorithm NSGA-II and the final identification results are selected using the multiple criteria decision making method TOPSIS. The simulated data using the final identification results are in good agreement with experimental data under a range of operating conditions. The validation results demonstrate that the modified NSGA-II and TOPSIS algorithms can be used as robust and reliable tools for identifying parameters of multi-physics models for many types of Li-ion batteries.
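The TOPSIS step ranks the candidate parameter sets on the Pareto front by their closeness to an ideal solution. The standard method (vector normalization, weighted distances to the ideal and anti-ideal points) can be sketched as follows; the weights and criteria in any concrete call are application choices, not values from the paper:

```python
import math

def topsis(matrix, weights, benefit):
    """TOPSIS: rank alternatives by relative closeness to the ideal.

    matrix  -- rows = alternatives, columns = criteria (raw scores)
    weights -- criterion weights (summing to 1)
    benefit -- per-criterion flag: True if higher is better
    Returns the index of the best alternative.
    """
    n_alt, n_crit = len(matrix), len(matrix[0])
    # vector-normalize each column, then apply the weights
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n_crit)]
    v = [[weights[j] * row[j] / norms[j] for j in range(n_crit)] for row in matrix]
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    anti = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = math.sqrt(sum((x - i) ** 2 for x, i in zip(row, ideal)))
        d_neg = math.sqrt(sum((x - a) ** 2 for x, a in zip(row, anti)))
        scores.append(d_neg / (d_pos + d_neg))   # relative closeness
    return max(range(n_alt), key=scores.__getitem__)
```

In the battery-identification setting, the rows would be Pareto-optimal parameter sets and the columns the voltage/temperature fitting errors (all cost criteria).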
Ganzetti, Marco; Wenderoth, Nicole; Mantini, Dante
2016-01-01
Intensity non-uniformity (INU) in magnetic resonance (MR) imaging is a major issue when conducting analyses of brain structural properties. An inaccurate INU correction may result in qualitative and quantitative misinterpretations. Several INU correction methods exist, whose performance largely depend on the specific parameter settings that need to be chosen by the user. Here we addressed the question of how to select the best input parameters for a specific INU correction algorithm. Our investigation was based on the INU correction algorithm implemented in SPM, but this can be in principle extended to any other algorithm requiring the selection of input parameters. We conducted a comprehensive comparison of indirect metrics for the assessment of INU correction performance, namely the coefficient of variation of white matter (CVWM), the coefficient of variation of gray matter (CVGM), and the coefficient of joint variation between white matter and gray matter (CJV). Using simulated MR data, we observed the CJV to be more accurate than CVWM and CVGM, provided that the noise level in the INU-corrected image was controlled by means of spatial smoothing. Based on the CJV, we developed a data-driven approach for selecting INU correction parameters, which could effectively work on actual MR images. To this end, we implemented an enhanced procedure for the definition of white and gray matter masks, based on which the CJV was calculated. Our approach was validated using actual T1-weighted images collected with 1.5 T, 3 T, and 7 T MR scanners. We found that our procedure can reliably assist the selection of valid INU correction algorithm parameters, thereby contributing to an enhanced inhomogeneity correction in MR images. PMID:27014050
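The CJV metric compared in the study has a compact definition: the pooled spread of white-matter and gray-matter intensities divided by the separation of their means, so lower values indicate better tissue contrast after INU correction. A direct sketch, assuming the intensities are supplied as plain lists of masked voxel values:

```python
import statistics

def cjv(wm, gm):
    """Coefficient of joint variation between white and gray matter:
    CJV = (sigma_WM + sigma_GM) / |mu_WM - mu_GM|; lower is better."""
    return ((statistics.pstdev(wm) + statistics.pstdev(gm))
            / abs(statistics.mean(wm) - statistics.mean(gm)))
```

The paper's key caveat applies: because noise inflates both standard deviations, the image should be spatially smoothed (or noise otherwise controlled) before the CJV is used to compare correction parameter settings.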
NASA Astrophysics Data System (ADS)
Riha, Stefan; Krawczyk, Harald
2011-11-01
Water quality monitoring in the Baltic Sea is of high ecological importance for all its neighbouring countries, which are highly interested in regular monitoring of the water quality parameters of their regional zones. Special attention is paid to the occurrence and dissemination of algae blooms. Among the appearing blooms, the possibly toxic or harmful cyanobacteria cultures are a special case of investigation, due to their specific optical properties and their negative influence on the ecological state of the aquatic system. Satellite remote sensing, with its high temporal and spatial resolution, allows frequent observation of large areas of the Baltic Sea, with special focus on its two seasonal algae blooms. For better monitoring of the cyanobacteria-dominated summer blooms, adapted algorithms are needed which take into account the special optical properties of blue-green algae. Standard chlorophyll-a algorithms typically fail to correctly recognize these occurrences. To significantly improve the observation of cyanobacteria blooms and their propagation, the Marine Remote Sensing group of DLR has started the development of a model-based inversion algorithm that includes a four-component bio-optical water model for Case-2 waters, which extends the commonly calculated parameter set (chlorophyll, suspended matter and CDOM) with an additional parameter for the estimation of phycocyanin absorption. It was necessary to carry out detailed optical laboratory measurements with different cyanobacteria cultures occurring in the Baltic Sea for the generation of a specific bio-optical model. The inversion of satellite remote sensing data is based on an artificial neural network technique, a model-based multivariate non-linear inversion approach. The specifically designed neural network is trained with a comprehensive dataset of simulated reflectance values taking into account the laboratory obtained specific optical
NASA Astrophysics Data System (ADS)
Li, Hong; Zhang, Li; Jiao, Yong-Chang
2016-07-01
This paper presents an interactive approach based on a discrete differential evolution algorithm to solve a class of integer bilevel programming problems, in which integer decision variables are controlled by an upper-level decision maker and real-valued (continuous) decision variables are controlled by a lower-level decision maker. Using the Karush-Kuhn-Tucker optimality conditions in the lower-level programming, the original discrete bilevel formulation can be converted into a discrete single-level nonlinear programming problem with complementarity constraints, and then a smoothing technique is applied to deal with the complementarity constraints. Finally, a discrete single-level nonlinear programming problem is obtained and solved by an interactive approach. In each iteration, for each given upper-level discrete variable, a system of nonlinear equations including the lower-level variables and Lagrange multipliers is solved first, and then a discrete nonlinear programming problem with only inequality constraints is handled using a discrete differential evolution algorithm. Simulation results show the effectiveness of the proposed approach.
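The abstract does not name the smoothing technique; a common choice for smoothing complementarity constraints (an assumption here, not necessarily the paper's) is the perturbed Fischer-Burmeister function, which replaces the nonsmooth condition a >= 0, b >= 0, a*b = 0 with a single smooth equation:

```python
import math

def fb_smooth(a, b, eps=1e-6):
    """Perturbed Fischer-Burmeister function.

    phi(a, b) = sqrt(a^2 + b^2 + 2*eps) - a - b
    phi(a, b) = 0  iff  a > 0, b > 0 and a*b = eps, so as eps -> 0 the
    equation phi = 0 recovers the complementarity condition a*b = 0
    while staying differentiable everywhere for eps > 0.
    """
    return math.sqrt(a * a + b * b + 2.0 * eps) - a - b
```

Each complementarity pair in the KKT system contributes one such smooth equation, so the whole reformulated problem can be passed to a standard nonlinear solver.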
Taheri, Shahrooz; Mat Saman, Muhamad Zameri; Wong, Kuan Yew
2013-01-01
One of the cost-intensive issues in managing warehouses is the order picking problem, which deals with the retrieval of items from their storage locations in order to meet customer requests. Many solution approaches have been proposed to minimize traveling distance in the order picking process. However, in practice, customer orders have to be completed by certain due dates in order to avoid tardiness, which is neglected in most of the related scientific papers. Consequently, we propose a novel solution approach to minimize tardiness which consists of four phases. First, weighted association rule mining is used to calculate associations between orders with respect to their due dates. Next, a batching model based on binary integer programming is formulated to maximize the associations between orders within each batch. Subsequently, the order picking phase uses a Genetic Algorithm integrated with the Traveling Salesman Problem to identify the most suitable travel path. Finally, the Genetic Algorithm is applied to sequence the constructed batches so as to minimize tardiness. Illustrative examples and comparisons are presented to demonstrate the proficiency and solution quality of the proposed approach. PMID:23864823
NASA Astrophysics Data System (ADS)
Sans, J. A.; Sanchez-Royo, J. F.; Tobias-Rossell, G.; Canadell-Casanova, E.; Segura, A.
2010-01-01
Previous results show that the discrepancy between experimental measurements and standard-model theoretical calculations can be related to the assumption of an unperturbed dispersion in the conduction band of the material. To overcome this limitation, the band anti-crossing (BAC) model, based on the interaction of an antibonding Ga-O state with the conduction band, has been proposed. Extending this model to other group-III doping elements (Al, Ga and In), coherent results for the optical band-gap energy were obtained. These results are supported by theoretical calculations of the electronic band structure, carried out with a numerical atomic orbitals density functional theory (DFT) approach. This method is designed for efficient calculations in large systems and is implemented in the SIESTA code.
ERIC Educational Resources Information Center
Uno, Mariko
2016-01-01
This study investigates the emergence and development of the discourse-pragmatic functions of the Japanese subject markers "wa" and "ga" from a usage-based perspective (Tomasello, 2000). The use of each marker in longitudinal speech data for four Japanese children from 1;0 to 3;1 and their parents available in the CHILDES…
NASA Astrophysics Data System (ADS)
Han, Zheng; Chen, Guangqi; Li, Yange; Wang, Wei; Zhang, Hong
2015-07-01
The estimation of debris-flow velocity in a cross-section is of primary importance due to its correlation to impact force, run up and superelevation. However, previous methods sometimes neglect the observed asymmetric velocity distribution, and consequently underestimate the debris-flow velocity. This paper presents a new approach for exploring the debris-flow velocity distribution in a cross-section. The presented approach uses an iteration algorithm based on the Riemann integral method to search an approximate solution to the unknown flow surface. The established laws for vertical velocity profile are compared and subsequently integrated to analyze the velocity distribution in the cross-section. The major benefit of the presented approach is that natural channels typically with irregular beds and superelevations can be taken into account, and the resulting approximation by the approach well replicates the direct integral solution. The approach is programmed in MATLAB environment, and the code is open to the public. A well-documented debris-flow event in Sichuan Province, China, is used to demonstrate the presented approach. Results show that the solutions of the flow surface and the mean velocity well reproduce the investigated results. Discussion regarding the model sensitivity and the source of errors concludes the paper.
A lake detection algorithm (LDA) using Landsat 8 data: A comparative approach in glacial environment
NASA Astrophysics Data System (ADS)
Bhardwaj, Anshuman; Singh, Mritunjay Kumar; Joshi, P. K.; Snehmani; Singh, Shaktiman; Sam, Lydia; Gupta, R. D.; Kumar, Rajesh
2015-06-01
Glacial lakes show a wide range of turbidity. Owing to this, the normalized difference water indices (NDWIs), as proposed by many researchers, do not give appropriate results in the case of glacial lakes. In addition, the sub-pixel proportion of water and the use of different optical band combinations are also reported to produce varying results. In the wake of the changing climate and increasing GLOFs (glacial lake outburst floods), there is a need to utilize the wide optical and thermal capabilities of Landsat 8 data for the automated detection of glacial lakes. In the present study, the optical and thermal bandwidths of Landsat 8 data were explored along with the terrain slope parameter derived from the Advanced Spaceborne Thermal Emission and Reflection Radiometer Global Digital Elevation Model Version 2 (ASTER GDEM V2) for detecting and mapping glacial lakes. The validation of the algorithm was performed using manually digitized and subsequently field-corrected lake boundaries. The pre-existing NDWIs were also evaluated to determine the supremacy and stability of the proposed algorithm for glacial lake detection. Two new parameters, LDI (lake detection index) and LF (lake fraction), were proposed to assess the performances of the indices. The lake detection algorithm (LDA) performed best for both mixed lake pixels and pure lake pixels, with no false detections (LDI = 0.98) and very little areal underestimation (LF = 0.73). The coefficient of determination (R^2) between the areal extents of lake pixels extracted using the LDA and the actual lake area was very high (0.99). With an understanding of the terrain conditions and slight threshold adjustments, this work can be replicated for any mountainous region of the world.
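For comparison, the pre-existing NDWI family that the LDA is evaluated against reduces to a simple per-pixel band ratio, shown here in the McFeeters green/NIR form. The proposed LDA itself additionally uses thermal bands and terrain slope, which are not reproduced in this sketch:

```python
def ndwi(green, nir):
    """Normalized Difference Water Index (McFeeters form):
    NDWI = (Green - NIR) / (Green + NIR); water pixels tend toward +1
    because water reflects green light but strongly absorbs NIR."""
    return (green - nir) / (green + nir) if (green + nir) else 0.0

def classify_water(green, nir, threshold=0.0):
    """Simple per-pixel water mask from an NDWI threshold."""
    return ndwi(green, nir) > threshold
```

The paper's point is precisely that a fixed threshold on such an index misfires on turbid glacial lakes and mixed lake pixels, which motivates the multi-band LDA.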
Implementation of genetic algorithm for distribution systems loss minimum re-configuration
Nara, K.; Shiose, A.; Kitagawa, M.; Ishihara, T.
1992-08-01
In this paper, a distribution systems loss minimum reconfiguration method based on a genetic algorithm is proposed. The problem is a complex mixed integer programming problem and is very difficult to solve by a mathematical programming approach. A genetic algorithm (GA) is a search or optimization algorithm based on the mechanics of natural selection and natural genetics. Since a GA is suitable for solving combinatorial optimization problems, it can be successfully applied to the loss-minimum reconfiguration problem in distribution systems. Numerical examples demonstrate the validity and effectiveness of the proposed methodology.
Chaos-based image encryption using a hybrid genetic algorithm and a DNA sequence
NASA Astrophysics Data System (ADS)
Enayatifar, Rasul; Abdullah, Abdul Hanan; Isnin, Ismail Fauzi
2014-05-01
This paper presents a novel image encryption algorithm based on a hybrid model of deoxyribonucleic acid (DNA) masking, a genetic algorithm (GA) and a logistic map. The study uses DNA and logistic map functions to create a number of initial DNA masks and applies the GA to determine the best mask for encryption. The significant advantage of this approach is improving the quality of the DNA masks to obtain the best mask that is compatible with plain images. The experimental results and computer simulations both confirm that the proposed scheme not only demonstrates excellent encryption but also resists various typical attacks.
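The logistic-map ingredient of such schemes generates a chaotic keystream from a key (x0, r). A minimal sketch of that ingredient alone follows; the paper's full scheme additionally involves DNA masking and GA mask selection, which are omitted, and the parameter values below are illustrative:

```python
def logistic_keystream(x0, r, n, burn_in=100):
    """Chaotic keystream from the logistic map x <- r*x*(1-x)."""
    x = x0
    for _ in range(burn_in):            # discard the transient
        x = r * x * (1.0 - x)
    out = bytearray()
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) % 256)  # quantize state to a byte
    return bytes(out)

def xor_cipher(data, key_x0=0.3579, r=3.99):
    """Encrypt/decrypt by XOR with the logistic-map keystream
    (symmetric: applying it twice restores the plaintext)."""
    ks = logistic_keystream(key_x0, r, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))
```

Note that a bare logistic-map XOR cipher is not secure on its own; the paper's security claims rest on the combined DNA/GA construction.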
Berkolaiko, G.; Kuipers, J.
2013-12-15
Electronic transport through chaotic quantum dots exhibits universal behaviour which can be understood through the semiclassical approximation. Within the approximation, calculation of transport moments reduces to codifying classical correlations between scattering trajectories. These can be represented as ribbon graphs and we develop an algorithmic combinatorial method to generate all such graphs with a given genus. This provides an expansion of the linear transport moments for systems both with and without time reversal symmetry. The computational implementation is then able to progress several orders further than previous semiclassical formulae as well as those derived from an asymptotic expansion of random matrix results. The patterns observed also suggest a general form for the higher orders.
NASA Astrophysics Data System (ADS)
Shen, Chong; Cao, Huiliang; Li, Jie; Tang, Jun; Zhang, Xiaoming; Shi, Yunbo; Yang, Wei; Liu, Jun
2016-03-01
A noise reduction algorithm based on an improved empirical mode decomposition (EMD) and forward linear prediction (FLP) is proposed for the fiber optic gyroscope (FOG). Referred to as the EMD-FLP algorithm, it was developed to decompose the FOG outputs into a number of intrinsic mode functions (IMFs) after which mode manipulations are performed to select noise-only IMFs, mixed IMFs, and residual IMFs. The FLP algorithm is then employed to process the mixed IMFs, from which the refined IMFs components are reconstructed to produce the final de-noising results. This hybrid approach is applied to, and verified using, both simulated signals and experimental FOG outputs. The results from the applications show that the method eliminates noise more effectively than the conventional EMD or FLP methods and decreases the standard deviations of the FOG outputs after de-noising from 0.17 to 0.026 under sweep frequency vibration and from 0.22 to 0.024 under fixed frequency vibration.
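The FLP stage fits linear-prediction coefficients to past samples and replaces each sample with its one-step prediction, attenuating components that are not linearly predictable (i.e., noise). A least-squares sketch follows; the prediction order and the simple "replace by prediction" reconstruction are assumptions, and the paper combines this with EMD mode selection, which is not reproduced here:

```python
import numpy as np

def flp_denoise(x, order=4):
    """Forward linear prediction: fit coefficients a minimizing
    sum_n (x[n] - sum_k a[k] * x[n-k-1])^2, then replace each sample
    by its one-step prediction."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # regression matrix: column k holds the samples lagged by k+1
    A = np.column_stack([x[order - k - 1:n - k - 1] for k in range(order)])
    y = x[order:]
    a, *_ = np.linalg.lstsq(A, y, rcond=None)
    pred = A @ a
    # the first `order` samples have no full history; keep them as-is
    return np.concatenate([x[:order], pred]), a
```

A pure sinusoid satisfies an exact order-2 recurrence, so it passes through unchanged, which is the sense in which FLP preserves deterministic signal content.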
NASA Astrophysics Data System (ADS)
Inclan, Eric; Lassester, Jack; Geohegan, David; Yoon, Mina
Research in TiO2 materials is highly relevant to energy and device applications; however, precise control of their morphologies and characterization is still a grand challenge in the field. We developed and applied a hybrid optimization algorithm to explore configuration spaces of energetically metastable TiO2. Our approach was to minimize the total energy of TiO2 clusters in order to identify the energy landscape of plausible (TiO2)n (n = 1-100). The hybrid algorithm retained good agreement with a regression on structures published in the literature up to n = 25. Using first-principles density functional theory, we analyze basic properties of the hybrid-algorithm-generated TiO2 nanoparticles. Our results show the expected convergence to bulk material characteristics as the cluster size increases, in that the band gap varies with the size of the nanocluster. The nanoclusters trended toward compact, low-surface-area structures that share characteristics of the bulk, namely octahedral microstructures, as the nanoclusters increased in size. Our study helps in better identifying and characterizing experimentally observed structures. This work is supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division.
Anor, Tomer; Madsen, Joseph R; Dupont, Pierre
2011-05-01
We propose a novel systematic approach to optimizing the design of concentric tube robots for neurosurgical procedures. These procedures require that the robot approach specified target sites while navigating and operating within an anatomically constrained work space. The availability of preoperative imaging makes our approach particularly suited for neurosurgery, and we illustrate the method with the example of endoscopic choroid plexus ablation. A novel parameterization of the robot characteristics is used in conjunction with a global pattern search optimization method. The formulation returns the design of the least-complex robot capable of reaching single or multiple target points in a confined space with constrained optimization metrics. A particular advantage of this approach is that it identifies the need for either fixed-curvature versus variable-curvature sections. We demonstrate the performance of the method in four clinically relevant examples. PMID:22270831
Instrument design and optimization using genetic algorithms
Hoelzel, Robert; Bentley, Phillip M.; Fouquet, Peter
2006-10-15
This article describes the design of highly complex physical instruments by using a canonical genetic algorithm (GA). The procedure can be applied to all instrument designs where performance goals can be quantified. It is particularly suited to the optimization of instrument design where local optima in the performance figure of merit are prevalent. Here, a GA is used to evolve the design of the neutron spin-echo spectrometer WASP which is presently being constructed at the Institut Laue-Langevin, Grenoble, France. A comparison is made between this artificial intelligence approach and the traditional manual design methods. We demonstrate that the search of parameter space is more efficient when applying the genetic algorithm, and the GA produces a significantly better instrument design. Furthermore, it is found that the GA increases flexibility, by facilitating the reoptimization of the design after changes in boundary conditions during the design phase. The GA also allows the exploration of 'nonstandard' magnet coil geometries. We conclude that this technique constitutes a powerful complementary tool for the design and optimization of complex scientific apparatus, without replacing the careful thought processes employed in traditional design methods.
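A canonical GA of the kind applied to the WASP design can be sketched generically. Everything below — the bit-string encoding, operator choices, and parameter values — is an illustrative stand-in; the spectrometer's actual figure of merit would replace the caller-supplied `fitness`:

```python
import random

def canonical_ga(fitness, n_bits=20, pop_size=40, generations=60,
                 p_cross=0.8, p_mut=0.02, seed=0):
    """Canonical GA: binary tournament selection, one-point crossover,
    bit-flip mutation. `fitness` maps a bit list to a figure of merit
    to maximize."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]

    def pick():  # binary tournament: better of two random individuals
        a, b = rng.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick()[:], pick()[:]
            if rng.random() < p_cross:
                cut = rng.randrange(1, n_bits)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for child in (p1, p2):
                nxt.append([b ^ (rng.random() < p_mut) for b in child])
        pop = nxt[:pop_size]
    return max(pop, key=fitness)
```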
Brasier, Martin D.; Antcliffe, Jonathan; Saunders, Martin; Wacey, David
2015-01-01
New analytical approaches and discoveries are demanding fresh thinking about the early fossil record. The 1.88-Ga Gunflint chert provides an important benchmark for the analysis of early fossil preservation. High-resolution analysis of Gunflintia shows that microtaphonomy can help to resolve long-standing paleobiological questions. Novel 3D nanoscale reconstructions of the most ancient complex fossil Eosphaera reveal features hitherto unmatched in any crown-group microbe. While Eosphaera may preserve a symbiotic consortium, a stronger conclusion is that multicellular morphospace was differently occupied in the Paleoproterozoic. The 3.46-Ga Apex chert provides a test bed for claims of biogenicity of cell-like structures. Mapping plus focused ion beam milling combined with transmission electron microscopy data demonstrate that microfossil-like taxa, including species of Archaeoscillatoriopsis and Primaevifilum, are pseudofossils formed from vermiform phyllosilicate grains during hydrothermal alteration events. The 3.43-Ga Strelley Pool Formation shows that plausible early fossil candidates are turning up in unexpected environmental settings. Our data reveal how cellular clusters of unexpectedly large coccoids and tubular sheath-like envelopes were trapped between sand grains and entombed within coatings of dripstone beach-rock silica cement. These fossils come from Earth’s earliest known intertidal to supratidal shoreline deposit, accumulated under aerated but oxygen poor conditions. PMID:25901305
Cold header machine process monitoring using a genetic algorithm designed neural network approach
NASA Astrophysics Data System (ADS)
dos Reis, Henrique L. M.; Voegele, Aaron C.; Cook, David B.
1999-12-01
In cold heading manufacturing processes, complete or partial fracture of the punch pin leads to the production of out-of-tolerance parts. A process monitoring system has been developed to assure that out-of-tolerance parts do not contaminate the batch of acceptable parts. A four-channel data acquisition system was assembled to collect and store the acoustic signal generated during the manufacturing process. A genetic algorithm was designed to select the smallest subset of waveform features necessary to develop a robust artificial neural network that could differentiate among the various cold header machine conditions, including complete or partial failure of the punch pin. The developed monitoring system is able to terminate production within seconds of punch pin failure using only four waveform features.
A possibilistic approach to rotorcraft design through a multi-objective evolutionary algorithm
NASA Astrophysics Data System (ADS)
Chae, Han Gil
Most of the engineering design processes in use today in the field may be considered as a series of successive decision making steps. The decision maker uses information at hand, determines the direction of the procedure, and generates information for the next step and/or other decision makers. However, the information is often incomplete, especially in the early stages of the design process of a complex system. As the complexity of the system increases, uncertainties eventually become unmanageable using traditional tools. In such a case, the tools and analysis values need to be "softened" to account for the designer's intuition. One of the methods that deals with issues of intuition and incompleteness is possibility theory. Through the use of possibility theory coupled with fuzzy inference, the uncertainties estimated by the intuition of the designer are quantified for design problems. By involving quantified uncertainties in the tools, the solutions can represent a possible set, instead of a crisp spot, for predefined levels of certainty. From a different point of view, it is a well known fact that engineering design is a multi-objective problem or a set of such problems. The decision maker aims to find satisfactory solutions, sometimes compromising the objectives that conflict with each other. Once the candidates of possible solutions are generated, a satisfactory solution can be found by various decision-making techniques. A number of multi-objective evolutionary algorithms (MOEAs) have been developed, and can be found in the literature, which are capable of generating alternative solutions and evaluating multiple sets of solutions in one single execution of an algorithm. One of the MOEA techniques that has been proven to be very successful for this class of problems is the strength Pareto evolutionary algorithm (SPEA) which falls under the dominance-based category of methods. The Pareto dominance that is used in SPEA, however, is not enough to account for the
Zdunek, Rafal; Cichocki, Andrzej
2008-01-01
Recently, a considerable growth of interest in projected gradient (PG) methods has been observed due to their high efficiency in solving large-scale convex minimization problems subject to linear constraints. Since the minimization problems underlying nonnegative matrix factorization (NMF) of large matrices match this class of minimization problems well, we investigate and test some recent PG methods in the context of their applicability to NMF. In particular, the paper focuses on the following modified methods: projected Landweber, Barzilai-Borwein gradient projection, projected sequential subspace optimization (PSESOP), interior-point Newton (IPN), and sequential coordinate-wise. The proposed and implemented NMF PG algorithms are compared with respect to their performance in terms of signal-to-interference ratio (SIR) and elapsed time, using a simple benchmark of mixed partially dependent nonnegative signals. PMID:18628948
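Of the surveyed methods, the projected Landweber update is the simplest to sketch. The code below is an illustrative alternating projected-gradient NMF with Lipschitz step sizes, not the paper's implementation; `nmf_pg` and its defaults are names and values chosen here:

```python
import numpy as np

def nmf_pg(Y, rank, iters=500, seed=0):
    """Projected-gradient (projected Landweber) NMF sketch: alternate one
    gradient step on H, then on W, for 0.5*||Y - W H||_F^2, each followed
    by projection onto the nonnegative orthant. Step sizes are 1/L with L
    the Lipschitz constant of the corresponding quadratic subproblem."""
    rng = np.random.default_rng(seed)
    m, n = Y.shape
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    for _ in range(iters):
        # Gradient w.r.t. H of the Frobenius objective is W.T @ (W H - Y).
        L_h = max(np.linalg.norm(W.T @ W, 2), 1e-12)
        H = np.maximum(0.0, H - (W.T @ (W @ H - Y)) / L_h)
        L_w = max(np.linalg.norm(H @ H.T, 2), 1e-12)
        W = np.maximum(0.0, W - ((W @ H - Y) @ H.T) / L_w)
    return W, H
```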
NASA Astrophysics Data System (ADS)
Benard, N.; Pons-Prats, J.; Periaux, J.; Bugeda, G.; Braud, P.; Bonnet, J. P.; Moreau, E.
2016-02-01
The potential benefits of active flow control are no longer debated. Among many other applications, flow control provides an effective means of manipulating turbulent separated flows. Here, a nonthermal surface plasma discharge (dielectric barrier discharge) is installed at the step corner of a backward-facing step (U0 = 15 m/s, Reh = 30,000, Reθ = 1650). Wall pressure sensors are used to estimate the reattachment location downstream of the step (objective function #1) and also to measure the wall pressure fluctuation coefficients (objective function #2). An autonomous multi-variable optimization by genetic algorithm is implemented in an experiment to optimize simultaneously the voltage amplitude, the burst frequency and the duty cycle of the high-voltage signal producing the surface plasma discharge. The single-objective optimization problems concern alternately the minimization of objective function #1 and the maximization of objective function #2. The present paper demonstrates that, when coupled with the plasma actuator and the wall pressure sensors, the genetic algorithm can find the optimum forcing conditions in only a few generations. At the end of the iterative search process, the minimum reattachment position is achieved by forcing the flow at the shear-layer mode, where a large spreading rate is obtained by increasing the periodicity of the vortex street and by enhancing the vortex pairing process. Objective function #2 is maximized for actuation at half the shear-layer mode. In this specific forcing mode, time-resolved PIV shows that vortex pairing is reduced and that the strong fluctuations of the wall pressure coefficients result from the periodic passage of flow structures whose size corresponds to the height of the step model.
Wong, Brian J. F.; Karmi, Koohyar; Devcic, Zlatko; McLaren, Christine E.; Chen, Wen-Pin
2013-01-01
Objectives The objectives of this study were to: 1) determine if a genetic algorithm in combination with morphing software can be used to evolve more attractive faces; and 2) evaluate whether this approach can be used as a tool to define or identify the attributes of the ideal attractive face. Study Design Basic research study incorporating focus group evaluations. Methods Digital images were acquired of 250 female volunteers (18–25 y). Randomly selected images were used to produce a parent generation (P) of 30 synthetic faces using morphing software. Then, a focus group of 17 trained volunteers (18–25 y) scored each face on an attractiveness scale ranging from 1 (unattractive) to 10 (attractive). A genetic algorithm was used to select 30 new pairs from the parent generation, and these were morphed using software to produce a new first generation (F1) of faces. The F1 faces were scored by the focus group, and the process was repeated for a total of four iterations of the algorithm. The algorithm mimics natural selection by using the attractiveness score as the selection pressure; the more attractive faces are more likely to morph. All five generations (P-F4) were then scored by three focus groups: a) surgeons (n = 12), b) cosmetology students (n = 44), and c) undergraduate students (n = 44). Morphometric measurements were made of 33 specific features on each of the 150 synthetic faces, and correlated with attractiveness scores using univariate and multivariate analysis. Results The average facial attractiveness scores increased with each generation and were 3.66 (±0.60), 4.59 (±0.73), 5.50 (±0.62), 6.23 (±0.31), and 6.39 (±0.24) for P and F1–F4 generations, respectively. Histograms of attractiveness score distributions show a significant shift in the skew of each curve toward more attractive faces with each generation. Univariate analysis identified nasal width, eyebrow arch height, and lip thickness as being significantly correlated with attractiveness
NASA Astrophysics Data System (ADS)
Keilis-Borok, V. I.; Soloviev, A.; Gabrielov, A.
2011-12-01
We describe a uniform approach to predicting different extreme events, also known as critical phenomena, disasters, or crises. The following types of such events are considered: strong earthquakes; economic recessions (their onset and termination); surges of unemployment; surges of crime; and electoral changes of the governing party. A uniform approach is possible due to the common feature of these events: each of them is generated by a certain hierarchical dissipative complex system. After a coarse-graining, such systems exhibit regular behavior patterns; we look among them for "premonitory patterns" that signal the approach of an extreme event. We introduce a methodology, based on optimal control theory, that assists disaster management in choosing an optimal set of disaster preparedness measures to undertake in response to a prediction. Predictions with their currently realistic (limited) accuracy do allow preventing a considerable part of the damage through a hierarchy of preparedness measures. The accuracy of a prediction should be known, but not necessarily high.
Feature optimization in chemometric algorithms for explosives detection
NASA Astrophysics Data System (ADS)
Pinkham, Daniel W.; Bonick, James R.; Woodka, Marc D.
2012-06-01
This paper details the use of a genetic algorithm (GA) as a method to preselect spectral feature variables for chemometric algorithms, using spectroscopic data gathered on explosive threat targets. The GA was applied to laser-induced breakdown spectroscopy (LIBS) and ultraviolet Raman spectroscopy (UVRS) data, in which the spectra consisted of approximately 10000 and 1000 distinct spectral values, respectively. The GA-selected variables were examined using two chemometric techniques: multi-class linear discriminant analysis (LDA) and support vector machines (SVM), and the performance from LDA and SVM was fed back to the GA through a fitness function evaluation. In each case, an optimal selection of features was achieved within 20 generations of the GA, with few improvements thereafter. The GA selected chemically significant signatures, such as oxygen and hydrogen peaks from LIBS spectra and characteristic Raman shifts for AN, TNT, and PETN. The successes documented herein suggest that this GA approach could be useful in analyzing spectroscopic data in complex environments, where the discriminating features of desired targets are not yet fully understood.
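The GA-chemometrics coupling can be sketched with a toy fitness. Below, a per-feature Fisher ratio with a sparsity penalty stands in for the LDA/SVM performance feedback described above; all names, operators, and parameter values are illustrative, not from the paper:

```python
import numpy as np

def fisher_fitness(mask, X, y, penalty=0.02):
    """Score a feature bitmask: mean Fisher ratio of the selected columns
    minus a per-feature penalty (toy stand-in for classifier feedback)."""
    idx = np.flatnonzero(mask)
    if idx.size == 0:
        return -np.inf
    a, b = X[y == 0][:, idx], X[y == 1][:, idx]
    ratio = (a.mean(0) - b.mean(0)) ** 2 / (a.var(0) + b.var(0) + 1e-12)
    return float(ratio.mean()) - penalty * idx.size

def evolve_masks(X, y, pop=24, gens=30, seed=0):
    """GA over feature bitmasks: truncation selection, uniform crossover,
    bit-flip mutation, with the best mask carried over unchanged."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    P = rng.random((pop, n)) < 0.5
    for _ in range(gens):
        scores = np.array([fisher_fitness(m, X, y) for m in P])
        elite = P[np.argsort(scores)[::-1][: pop // 2]]
        parents = rng.integers(0, len(elite), (pop, 2))
        coin = rng.random((pop, n)) < 0.5      # uniform crossover
        P = np.where(coin, elite[parents[:, 0]], elite[parents[:, 1]])
        P ^= rng.random((pop, n)) < 0.02       # bit-flip mutation
        P[0] = elite[0]                        # elitism
    scores = np.array([fisher_fitness(m, X, y) for m in P])
    return P[int(np.argmax(scores))]
```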
García-Pedrajas, Nicolás; Ortiz-Boyer, Domingo; Hervás-Martínez, César
2006-05-01
In this work we present a new approach to the crossover operator in the genetic evolution of neural networks. The most widely used evolutionary computation paradigm for neural network evolution is evolutionary programming. This paradigm is usually preferred due to the problems caused by the application of crossover to neural network evolution. However, crossover is the most innovative operator within the field of evolutionary computation. One of the most notorious problems with the application of crossover to neural networks is known as the permutation problem. This problem occurs because the same network can be represented in a genetic coding by many different codifications. Our approach modifies the standard crossover operator to take into account the special features of the individuals to be mated. We present a new model for mating individuals that considers the structure of the hidden layer and redefines the crossover operator. As each hidden node represents a non-linear projection of the input variables, we approach the crossover as a problem of combinatorial optimization: the problem can be formulated as the extraction of a subset of near-optimal projections to create the hidden layer of the new network. This new approach is compared to a classical crossover in 25 real-world problems with excellent performance. Moreover, the networks obtained are much smaller than those obtained with the classical crossover operator. PMID:16343847
ERIC Educational Resources Information Center
Clauser, Brian E.; Margolis, Melissa J.; Clyman, Stephen G.; Ross, Linette P.
1997-01-01
Research on automated scoring is extended by comparing alternative automated systems for scoring a computer simulation of physicians' patient management skills. A regression-based system is more highly correlated with experts' evaluations than a system that uses complex rules to map performances into score levels, but both approaches are feasible.…
NASA Technical Reports Server (NTRS)
Axelrad, Penina; Speed, Eden; Leitner, Jesse A. (Technical Monitor)
2002-01-01
This report summarizes the efforts to date in processing GPS measurements in High Earth Orbit (HEO) applications by the Colorado Center for Astrodynamics Research (CCAR). Two specific projects were conducted; initialization of the orbit propagation software, GEODE, using nominal orbital elements for the IMEX orbit, and processing of actual and simulated GPS data from the AMSAT satellite using a Doppler-only batch filter. CCAR has investigated a number of approaches for initialization of the GEODE orbit estimator with little a priori information. This document describes a batch solution approach that uses pseudorange or Doppler measurements collected over an orbital arc to compute an epoch state estimate. The algorithm is based on limited orbital element knowledge from which a coarse estimate of satellite position and velocity can be determined and used to initialize GEODE. This algorithm assumes knowledge of nominal orbital elements, (a, e, i, Ω, ω), and uses a search on time of perigee passage (τ_p) to estimate the host satellite position within the orbit and the approximate receiver clock bias. Results of the method are shown for a simulation including large orbital uncertainties and measurement errors. In addition, CCAR has attempted to process GPS data from the AMSAT satellite to obtain an initial estimation of the orbit. Limited GPS data have been received to date, with few satellites tracked and no computed point solutions. Unknown variables in the received data have made computations of a precise orbit using the recovered pseudorange difficult. This document describes the Doppler-only batch approach used to compute the AMSAT orbit. Both actual flight data from AMSAT, and simulated data generated using the Satellite Tool Kit and Goddard Space Flight Center's Flight Simulator, were processed. Results for each case and conclusions are presented.
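The element-based initialization above turns each trial time of perigee passage into a position by solving Kepler's equation. A minimal sketch of that core step (perifocal frame only; the rotation through (i, Ω, ω) into the inertial frame and the GEODE interface are omitted, and the function name is ours):

```python
import math

def perifocal_position(a, e, M):
    """Solve Kepler's equation M = E - e*sin(E) for the eccentric anomaly E
    by Newton iteration, then return the in-plane (perifocal) position.
    a is the semi-major axis (km), M the mean anomaly (rad), e < 1."""
    E = M if e < 0.8 else math.pi  # standard starting guess
    for _ in range(50):
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < 1e-12:
            break
    x = a * (math.cos(E) - e)
    y = a * math.sqrt(1.0 - e * e) * math.sin(E)
    return x, y
```

At M = 0 the satellite is at perigee, so the radius reduces to a(1 - e).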
NASA Astrophysics Data System (ADS)
Hashemi-Dezaki, Hamed; Mohammadalizadeh-Shabestary, Masoud; Askarian-Abyaneh, Hossein; Rezaei-Jegarluei, Mohammad
2014-01-01
In electrical distribution systems, a great amount of power is wasted along the lines; moreover, the power factors, voltage profiles and total harmonic distortions (THDs) of most loads are not as would be desired. These important system parameters therefore play a highly important role in wasting money and energy, and both consumers and sources suffer from a high rate of distortion and even instability. Active power filters (APFs) are an innovative solution to this adversity and have recently used instantaneous reactive power theory. In this paper, a novel method is proposed to optimize the allocation of APFs. The introduced method is based on the instantaneous reactive power theory in vectorial representation; by use of this representation, it is possible to assess different compensation strategies. Proper placement of APFs in the system also plays a crucial role in both reducing loss costs and improving power quality. To optimize APF placement, a new objective function has been defined on the basis of five terms: total losses, power factor, voltage profile, THD and cost. A genetic algorithm has been used to solve the optimization problem, and the results of applying this method to a distribution network illustrate the method's advantages.
NASA Astrophysics Data System (ADS)
Kumar, Vijay M.; Murthy, ANN; Chandrashekara, K.
2012-05-01
The production planning problem of a flexible manufacturing system (FMS) concerns decisions that have to be made before an FMS begins to produce parts according to a given production plan during an upcoming planning horizon. The main aspect of production planning is the machine loading problem, in which a subset of jobs to be manufactured is selected and their operations are assigned to the relevant machines. Such problems are not only combinatorial optimization problems but are also non-deterministic polynomial-time hard, making it difficult to obtain satisfactory solutions using traditional optimization techniques. In this paper, an attempt has been made to address the machine loading problem with the objectives of minimizing system unbalance and maximizing throughput simultaneously, while satisfying the system constraints related to available machining time and tool slots, using a meta-hybrid heuristic technique based on genetic algorithm and particle swarm optimization. The results reported in this paper demonstrate the model's efficiency and examine the performance of the system with respect to measures such as throughput and system utilization.
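The particle swarm half of the hybrid is easy to sketch in isolation. This is a generic continuous-variable PSO on a caller-supplied objective, not the paper's GA-PSO hybrid or its machine-loading encoding; the names and coefficient values (inertia 0.7, cognitive and social weights 1.5) are common illustrative defaults:

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=30, iters=100, seed=0):
    """Minimal PSO: each velocity blends inertia, a pull toward the
    particle's personal best, and a pull toward the global best."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T   # bounds: [(lo, hi), ...]
    dim = lo.size
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pcost = np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pcost)].copy()
    w, c1, c2 = 0.7, 1.5, 1.5
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)             # keep particles in bounds
        cost = np.apply_along_axis(f, 1, x)
        better = cost < pcost
        pbest[better], pcost[better] = x[better], cost[better]
        g = pbest[np.argmin(pcost)].copy()
    return g, float(pcost.min())
```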
NASA Astrophysics Data System (ADS)
Zabbah, Iman
2011-12-01
Electro Discharge Machining (EDM) is the most common non-traditional production method for forming metals and non-oxide ceramics. The increase of smoothness, the increase of the removal of filings, and the decrease of proportional tool erosion play an important role in this machining, and are directly related to the choice of input parameters. The complicated and non-linear nature of EDM has made modelling the process impossible with usual, classic methods. So far, some intelligence-based methods have been used to optimize this process; at the top of them we can mention artificial neural networks, which model the process as a black box. The problem with this kind of machining is seen when a workpiece is composed of a collection of carbon-based materials such as silicon carbide. In this article, besides using the new mono-pulse technique of EDM, we design and model a fuzzy neural network. The genetic algorithm is then used to find the optimal inputs of the machine. In our research, the workpiece is a non-oxide ceramic, silicon carbide, which makes the control process more difficult. Finally, the results are compared with those of previous methods.
Robot body self-modeling algorithm: a collision-free motion planning approach for humanoids.
Leylavi Shoushtari, Ali
2016-01-01
Motion planning for humanoid robots is a critical issue due to their high redundancy and to theoretical and technical considerations such as stability, motion feasibility and collision avoidance. The strategies the central nervous system employs to plan, signal and control human movements are a source of inspiration for dealing with these problems. Self-modeling is a concept inspired by body self-awareness in humans. In this research it is integrated in an optimal motion planning framework in order to detect and avoid collision of the manipulated object with the humanoid's body while performing a dynamic task. Twelve parametric functions are designed as self-models to determine the boundary of the humanoid's body. The boundaries mathematically defined by the self-models are then employed to calculate the safe region within which the box avoids collision with the robot. Four different objective functions are employed in motion simulation to validate the robustness of the algorithm under different dynamics. The results also confirm the collision avoidance, realism and stability of the predicted motion. PMID:27186507
An Algorithmic Approach to the Management of Recurrent Lateral Patellar Dislocation.
Weber, Alexander E; Nathani, Amit; Dines, Joshua S; Allen, Answorth A; Shubin-Stein, Beth E; Arendt, Elizabeth A; Bedi, Asheesh
2016-03-01
High-level evidence supports nonoperative treatment for first-time lateral acute patellar dislocations. Surgical intervention is often indicated for recurrent dislocations. Recurrent instability is often multifactorial and can be the result of a combination of coronal limb malalignment, patella alta, malrotation secondary to internal femoral or external tibial torsion, a dysplastic trochlea, or disrupted and weakened medial soft tissue, including the medial patellofemoral ligament (MPFL) and the vastus medialis obliquus. MPFL reconstruction requires precise graft placement for restoration of anatomy and minimal graft tension. MPFL reconstruction is safe to perform in skeletally immature patients and in revision surgical settings. Distal realignment procedures should be implemented in recurrent instability associated with patella alta, increased tibial tubercle-trochlear groove distances, and lateral and distal patellar chondrosis. Groove-deepening trochleoplasty for Dejour type-B and type-D dysplasia or a lateral elevation or proximal recession trochleoplasty for Dejour type-C dysplasia may be a component of the treatment algorithm; however, clinical outcome data are lacking. In addition, trochleoplasty is technically challenging and has a risk of substantial complications. PMID:26935465
Life-histories from Landsat: Algorithmic approaches to distilling Earth's recent ecological dynamics
NASA Astrophysics Data System (ADS)
Kennedy, R. E.; Yang, Z.; Braaten, J.; Cohen, W. B.; Ohmann, J.; Gregory, M.; Roberts, H.; Meigs, G. W.; Nelson, P.; Pfaff, E.
2012-12-01
As the longest running continuous satellite Earth-observation record, data from the Landsat family of sensors have the potential to uniquely reveal temporal dynamics critical to many terrestrial disciplines. The convergence of a free-data access policy in the late 2000s with a rapid rise in computing and storage capacity has highlighted an increasingly common challenge: effective distillation of information from large digital datasets. Here, we describe how an algorithmic workflow informed by basic understanding of ecological processes is being used to convert multi-terabyte image time-series datasets into concise renditions of landscape dynamics. Using examples from our own work, we show how these are in turn applied to monitor vegetative disturbance and growth dynamics in national parks, to evaluate effectiveness of natural resource policy in national forests, to constrain and inform biogeochemical models, to measure carbon impacts of natural and anthropogenic stressors, to assess impacts of land use change on threatened species, to educate and inform students, and to better characterize complex links between changing climate, insect pathogens, and wildfire in forests.
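The distillation idea — reducing a long per-pixel time series to a few change descriptors — can be illustrated with a deliberately crude sketch. Real workflows fit piecewise trends across the whole trajectory rather than looking at single-step drops; the function below is a toy with names chosen here:

```python
import numpy as np

def greatest_disturbance(years, index):
    """Distill an annual spectral-index trajectory to one descriptor: the
    year and magnitude of its largest single-step drop. Terabytes of imagery
    reduce, per pixel, to a handful of numbers like these."""
    diffs = np.diff(np.asarray(index, dtype=float))
    k = int(np.argmin(diffs))            # most negative year-to-year change
    return years[k + 1], float(-diffs[k])
```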
Improvements in the sensibility of MSA-GA tool using COFFEE objective function
NASA Astrophysics Data System (ADS)
Amorim, A. R.; Zafalon, G. F. D.; Neves, L. A.; Pinto, A. R.; Valêncio, C. R.; Machado, J. M.
2015-01-01
Sequence alignment is one of the most important tasks in bioinformatics, playing an important role in sequence analysis. There are many strategies to perform sequence alignment, ranging from those that use deterministic algorithms, such as dynamic programming, to those that use heuristic algorithms, such as progressive alignment, Ant Colony Optimization (ACO), Genetic Algorithms (GA) and Simulated Annealing (SA), among others. In this work, we have implemented the COFFEE objective function in the MSA-GA tool, in substitution of the Weighted Sum-of-Pairs (WSP), to improve the final results. In the tests, we verified that the approach using the COFFEE function achieved better results in 81% of the lower-similarity alignments when compared with the WSP approach. Moreover, even in the tests with more similar sets, the approach using COFFEE was better 43% of the time.
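For contrast with COFFEE, the sum-of-pairs family of objectives is simple enough to sketch. This is a toy unweighted version: MSA-GA uses the weighted variant (WSP), and COFFEE instead scores aligned residue pairs by their consistency with a library of pairwise alignments; the scoring values here are arbitrary:

```python
from itertools import combinations

def sum_of_pairs(alignment, match=1, mismatch=-1, gap=-2):
    """Toy sum-of-pairs score for a multiple alignment given as equal-length
    strings: every residue pair in every column is scored independently,
    with gap-gap pairs scored 0."""
    score = 0
    for column in zip(*alignment):
        for a, b in combinations(column, 2):
            if a == '-' and b == '-':
                continue                      # gap-gap: neutral
            elif a == '-' or b == '-':
                score += gap
            else:
                score += match if a == b else mismatch
    return score
```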
Immune allied genetic algorithm for Bayesian network structure learning
NASA Astrophysics Data System (ADS)
Song, Qin; Lin, Feng; Sun, Wei; Chang, KC
2012-06-01
Bayesian network (BN) structure learning is an NP-hard problem. In this paper, we present an improved approach to enhance the efficiency of BN structure learning. To avoid the premature convergence of the traditional single-group genetic algorithm (GA), we propose an immune allied genetic algorithm (IAGA) in which multiple populations and an allied strategy are introduced. Moreover, we apply prior knowledge by injecting an immune operator into individuals, which effectively prevents degeneration. To illustrate the effectiveness of the proposed technique, we present some experimental results.
Burak, Kelly W; Kneteman, Norman M
2010-01-01
Hepatocellular carcinoma (HCC) is one of only a few malignancies with an increasing incidence in North America. Because the vast majority of HCCs occur in the setting of a cirrhotic liver, management of this malignancy is best performed in a multidisciplinary group that recognizes the importance of liver function, as well as patient and tumour characteristics. The Barcelona Clinic Liver Cancer (BCLC) staging system is preferred for HCC because it incorporates the tumour characteristics (ie, tumour-node-metastasis stage), the patient’s performance status and liver function according to the Child-Turcotte-Pugh classification, and then links the BCLC stage to recommended therapeutic interventions. However, the BCLC algorithm does not recognize the potential role of radiofrequency ablation for very early stage HCC, the expanding role of liver transplantation in the management of HCC, the role of transarterial chemoembolization in single large tumours, the potential role of transarterial radioembolization with 90Yttrium and the limited evidence for using sorafenib in Child-Turcotte-Pugh class B cirrhotic patients. The current review article presents an evidence-based approach to the multidisciplinary management of HCC along with a new algorithm for the management of HCC that incorporates the BCLC staging system and the authors’ local selection criteria for resection, ablative techniques, liver transplantation, transarterial chemoembolization, transarterial radioembolization and sorafenib in Alberta. PMID:21157578
NASA Astrophysics Data System (ADS)
Amian, M.; Setarehdan, S. Kamaledin; Yousefi, H.
2014-09-01
Functional near-infrared spectroscopy (fNIRS) is a relatively new noninvasive way to measure oxy-hemoglobin and deoxy-hemoglobin concentration changes in the human brain. Safer and more affordable than other functional imaging techniques such as fMRI, it is widely used for special applications such as infant examinations and pilots' brain monitoring. In such applications, fNIRS data sometimes suffer from undesirable movements of the subject's head, called motion artifacts, which lead to signal corruption. Motion artifacts in fNIRS data may result in erroneous conclusions or diagnoses. In this work we try to reduce these artifacts with a novel Kalman filtering algorithm based on an autoregressive moving average (ARMA) model of the fNIRS system. Our proposed method does not require any additional hardware or sensors, nor does it need the whole data record at once, both of which were unavoidable requirements of older algorithms such as adaptive and Wiener filtering. Results show that our approach is successful in cleaning contaminated fNIRS data.
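The paper's ARMA-based filter is not reproduced here, but the underlying idea, a state-space signal model filtered recursively so that no extra sensors and no full data record are needed, can be sketched with the simplest special case: a scalar Kalman filter with a random-walk (local-level) state model. The noise variances and the synthetic test signal are illustrative assumptions.

```python
import numpy as np

def kalman_local_level(y, q=1e-3, r=1e-2):
    """One-pass scalar Kalman filter with a random-walk state model,
    a minimal special case of the ARMA state-space models used for
    fNIRS denoising. q: process noise, r: measurement noise."""
    x, p = y[0], 1.0
    out = np.empty_like(y)
    for t, z in enumerate(y):
        x_pred, p_pred = x, p + q            # predict (state transition = 1)
        k = p_pred / (p_pred + r)            # Kalman gain
        x = x_pred + k * (z - x_pred)        # update with measurement z
        p = (1.0 - k) * p_pred
        out[t] = x
    return out

# Synthetic "hemodynamic" signal plus measurement noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500)
clean = np.sin(2 * np.pi * 3 * t)
noisy = clean + 0.3 * rng.standard_normal(t.size)
smoothed = kalman_local_level(noisy)
```

Because the filter is strictly recursive (each update uses only the previous state and the current sample), it can run online, which is the property the abstract contrasts with batch methods such as Wiener filtering.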
NASA Astrophysics Data System (ADS)
Mallick, Rajnish; Ganguli, Ranjan; Seetharama Bhat, M.
2015-09-01
The objective of this study is to determine an optimal trailing edge flap configuration and flap location to achieve minimum hub vibration levels and flap actuation power simultaneously. An aeroelastic analysis of a soft in-plane four-bladed rotor is performed in conjunction with optimal control. A second-order polynomial response surface based on an orthogonal array (OA) with 3-level design describes both the objectives adequately. Two new orthogonal arrays called MGB2P-OA and MGB4P-OA are proposed to generate nonlinear response surfaces with all interaction terms for two and four parameters, respectively. A multi-objective bat algorithm (MOBA) approach is used to obtain the optimal design point for the mutually conflicting objectives. MOBA is a recently developed nature-inspired metaheuristic optimization algorithm based on the echolocation behaviour of bats. It is found that the MOBA-derived Pareto-optimal trailing edge flap design reduces vibration levels by 73% and flap actuation power by 27% in comparison with the baseline design.
Chou, Ting-Chao
2011-01-01
The mass-action law based system analysis via mathematical induction and deduction leads to a generalized theory and algorithm that allow computerized simulation of dose-effect dynamics with small-size experiments using a small number of data points in vitro, in animals, and in humans. The median-effect equation of the mass-action law, deduced from over 300 mechanism-specific equations, has been shown to be the unified theory that serves as the common link for complicated biomedical systems. After using the median-effect principle as the common denominator, its applications are mechanism-independent, drug unit-independent, and dynamic order-independent, and can be used generally for single-drug analysis or for multiple drug combinations in constant or non-constant ratios. Since the "median" is the common link and universal reference point in biological systems, this general framework enables computerized quantitative bio-informatics for econo-green bio-research in broad disciplines. Specific applications of the theory, especially relevant to drug discovery, drug combination, and clinical trials, have been cited or illustrated in terms of algorithms, experimental design and computerized simulation for data analysis. Lessons learned from cancer research during the past fifty years provide a valuable opportunity to reflect, to improve the conventional divergent approach, and to introduce a new convergent avenue, based on the mass-action law principle, for efficient cancer drug discovery and low-cost drug development. PMID:22016837
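The median-effect equation referred to above, fa/fu = (D/Dm)^m, linearizes under a log transform, log(fa/(1-fa)) = m·log(D) - m·log(Dm), which is how the median-effect dose Dm and the sigmoidicity coefficient m are typically estimated. A minimal sketch with synthetic dose-effect data (not values from the paper):

```python
import math

def median_effect_fit(doses, fa):
    """Fit the median-effect equation fa/fu = (D/Dm)^m by least squares
    on the log-linearized form. Returns (m, Dm)."""
    xs = [math.log10(d) for d in doses]
    ys = [math.log10(f / (1.0 - f)) for f in fa]       # log(fa/fu)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    m = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))             # slope
    b = my - m * mx                                    # intercept = -m*log10(Dm)
    return m, 10.0 ** (-b / m)

# Synthetic data generated with m = 2, Dm = 1, i.e. fa = D^2 / (1 + D^2).
doses = [0.25, 0.5, 1.0, 2.0, 4.0]
fa = [d ** 2 / (1.0 + d ** 2) for d in doses]
m_hat, dm_hat = median_effect_fit(doses, fa)
```

On exact data the fit recovers the generating parameters; with real measurements the same regression gives the mass-action estimates that drug-combination indices are built on.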
NASA Astrophysics Data System (ADS)
Thatcher, Evan; Stanton, Christopher; Ishioka, Kunie; Basak, Amlan; Petek, Hrvoje
2015-03-01
We present results from a joint experimental and theoretical study exploring the excitation of coupled plasmon-phonon modes in GaAs. In contrast to previous coherent phonon studies in GaAs where electrons were generated primarily in the Γ valley (E0 gap), we use a pump-probe technique with a 10 fs pulse width and a shorter 400 nm laser wavelength to photoexcite electrons predominantly in the L valley (E1 gap). As a result: i) damping of the electron-hole plasma is faster and ii) diffusion of the carriers from the surface becomes important owing to the shorter absorption length. The probe pulses measure the time-dependent changes to the reflectivity due to the coupled plasmon-phonon modes created by the ultrafast photoexcitation and the subsequent depletion field screening. To model this, we solve for the time and density dependent coupled-mode frequencies allowing for ambipolar diffusion. Simulation of the coupled plasmon-phonon dynamics allows for comparison with, and a better understanding of, experiments. Supported by the NSF through Grants CHE-0650756, DMR-1311845, and DMR-1311849.
Ab-initio study of magnetic properties and phase transitions in Ga (Mn) N with Monte Carlo approach
NASA Astrophysics Data System (ADS)
Sbai, Y.; Ait Raiss, A.; Salmani, E.; Bahmad, L.; Benyoussef, A.
2015-12-01
On the basis of ab initio calculations and Monte Carlo simulations, the magnetic and electronic properties of gallium nitride (GaN) doped with the transition metal manganese (Mn) were studied. The ab initio calculations were performed using the AKAI-KKR-CPA method within the Local Density Approximation (LDA). We doped the Diluted Magnetic Semiconductor (DMS) with different concentrations of the magnetic impurity Mn and plotted the density of states (DOS) for each one, which shows half-metallic behaviour and a ferromagnetic state, especially for Ga0.95Mn0.05N, making this DMS a strong candidate for spintronic applications. Moreover, the magnetization and susceptibility of the system were calculated as functions of temperature for various system sizes L to study finite-size effects, and the transition temperature was deduced from the peak of the susceptibility. The ab initio results are in good agreement with the literature, especially for the Mn concentration x = 0.05, which gives the most interesting results.
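The Monte Carlo part of such a workflow, sampling a spin model with the Metropolis algorithm and reading the transition temperature off the susceptibility peak, can be sketched with a plain 2D Ising model. This is not the paper's Ga(Mn)N Hamiltonian; the lattice size, sweep counts, and temperatures are illustrative.

```python
import random, math

def ising_mc(L=8, T=2.0, sweeps=400, seed=3):
    """Metropolis Monte Carlo for a 2D Ising model (J = 1, periodic BCs).
    Returns mean |M| per spin and the susceptibility estimate
    chi = (<M^2> - <|M|>^2) / (T * N), whose peak locates the transition."""
    random.seed(seed)
    N = L * L
    s = [[1] * L for _ in range(L)]          # start fully ordered
    m_sum = m2_sum = 0.0
    n_meas = 0
    for sweep in range(sweeps):
        for _ in range(N):                   # one Metropolis sweep
            i, j = random.randrange(L), random.randrange(L)
            nb = (s[(i + 1) % L][j] + s[(i - 1) % L][j]
                  + s[i][(j + 1) % L] + s[i][(j - 1) % L])
            dE = 2 * s[i][j] * nb
            if dE <= 0 or random.random() < math.exp(-dE / T):
                s[i][j] *= -1
        if sweep >= sweeps // 2:             # measure after burn-in
            M = abs(sum(sum(row) for row in s))
            m_sum += M
            m2_sum += M * M
            n_meas += 1
    m_avg = m_sum / n_meas
    chi = (m2_sum / n_meas - m_avg ** 2) / (T * N)
    return m_avg / N, chi

m_low, chi_low = ising_mc(T=1.5)    # below the 2D Ising Tc (~2.27 J/kB)
m_high, chi_high = ising_mc(T=5.0)  # well above Tc
```

Scanning T on a grid and plotting chi gives the finite-size susceptibility peak from which the transition temperature is deduced, exactly as the abstract describes for the ab-initio-derived exchange couplings.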
NASA Astrophysics Data System (ADS)
Patra, Rusha; Dutta, Pranab K.
2015-07-01
Reconstruction of the absorption coefficient of tissue with good contrast is of key importance in functional diffuse optical imaging. A hybrid approach using model-based iterative image reconstruction and a genetic algorithm is proposed to enhance the contrast of the reconstructed image. The proposed method yields an observed contrast of 98.4%, mean square error of 0.638×10-3, and object centroid error of (0.001 to 0.22) mm. Experimental validation of the proposed method has also been provided with tissue-like phantoms which shows a significant improvement in image quality and thus establishes the potential of the method for functional diffuse optical tomography reconstruction with continuous wave setup. A case study of finger joint imaging is illustrated as well to show the prospect of the proposed method in clinical diagnosis. The method can also be applied to the concentration measurement of a region of interest in a turbid medium.
NASA Astrophysics Data System (ADS)
Kurster, M.
1993-07-01
A newly developed method for the Doppler imaging of star spot distributions on active late-type stars is presented. It comprises an algorithm particularly adapted to the (discrete) Doppler imaging problem (including eclipses) and is very efficient in determining the positions and shapes of star spots. A variety of tests demonstrates the capabilities as well as the limitations of the method by investigating the effects that uncertainties in various stellar parameters have on the image reconstruction. Any systematic errors within the reconstructed image are found to be a result of the ill-posed nature of the Doppler imaging problem and not a consequence of the adopted approach. The largest uncertainties are found with respect to the dynamical range of the image (brightness or temperature contrast). This kind of uncertainty is of little effect for studies of star spot migrations with the objectives of determining differential rotation and butterfly diagrams for late-type stars.
NASA Astrophysics Data System (ADS)
Darne, Chinmay; Lu, Yujie; Sevick-Muraca, Eva M.
2014-01-01
Emerging fluorescence and bioluminescence tomography approaches share several features with, yet remain distinct from, the established emission tomographies of PET and SPECT. Although both nuclear and optical imaging modalities involve counting of photons, nuclear imaging techniques collect the emitted high-energy (100-511 keV) photons after radioactive decay of radionuclides, while optical techniques count low-energy (1.5-4.1 eV) photons that are scattered and absorbed by tissues, requiring models of light transport for quantitative image reconstruction. Fluorescence imaging has recently been translated into the clinic, demonstrating high sensitivity, modest tissue penetration depth, and fast, millisecond image acquisition times. As a consequence, the promise of quantitative optical tomography as a complement to small animal PET and SPECT remains high. In this review, we summarize the different instrumentation, methodological approaches and schema for inverse image reconstructions for optical tomography, including luminescence and fluorescence modalities, and comment on limitations and key technological advances needed for further discovery research and translation.
mRAISE: an alternative algorithmic approach to ligand-based virtual screening.
von Behren, Mathias M; Bietz, Stefan; Nittinger, Eva; Rarey, Matthias
2016-08-01
Ligand-based virtual screening is a well established method to find new lead molecules in today's drug discovery process. In order to be applicable in day-to-day practice, such methods have to face multiple challenges. The most important part is the reliability of the results, which can be shown and compared in retrospective studies. Furthermore, in the case of 3D methods, they need to provide biologically relevant molecular alignments of the ligands that can be further investigated by a medicinal chemist. Last but not least, they have to be able to screen large databases in reasonable time. Many algorithms for ligand-based virtual screening have been proposed in the past, most of them based on pairwise comparisons. Here, a new method is introduced called mRAISE. Based on structural alignments, it uses a descriptor-based bitmap search engine (RAISE) to achieve efficiency. Alignments created on the fly by the search engine are evaluated with an independent shape-based scoring function also used for ranking of compounds. The correct ranking as well as the alignment quality of the method are evaluated and compared to other state of the art methods. On the commonly used Directory of Useful Decoys dataset mRAISE achieves an average area under the ROC curve of 0.76, an average enrichment factor at 1 % of 20.2 and an average hit rate at 1 % of 55.5. With these results, mRAISE is always among the top performing methods with available data for comparison. To assess the quality of the alignments calculated by ligand-based virtual screening methods, we introduce a new dataset containing 180 prealigned ligands for 11 diverse targets. Within the top ten ranked conformations, the alignment closest to X-ray structure calculated with mRAISE has a root-mean-square deviation of less than 2.0 Å for 80.8 % of alignment pairs and achieves a median of less than 2.0 Å for eight of the 11 cases. The dataset used to rate the quality of the calculated alignments is freely available.
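The retrospective metrics quoted above, enrichment factor and hit rate at 1 %, have standard definitions: the hit rate among the top-scored fraction of the database, and that hit rate divided by the overall active fraction. A minimal sketch of how they are computed from ranked screening scores (synthetic toy data, not the DUD results):

```python
def enrichment_metrics(scores, labels, fraction=0.01):
    """EF@fraction and hit rate@fraction from screening scores
    (higher = better) and binary activity labels (1 = active)."""
    ranked = sorted(zip(scores, labels), key=lambda p: p[0], reverse=True)
    n_top = max(1, int(round(len(ranked) * fraction)))
    hits = sum(lab for _, lab in ranked[:n_top])
    hit_rate = hits / n_top                       # fraction of actives in top
    ef = hit_rate / (sum(labels) / len(labels))   # vs. random ranking
    return ef, hit_rate

# Toy screen: 1000 compounds, 10 actives, all actives ranked on top.
scores = list(range(1000, 0, -1))
labels = [1] * 10 + [0] * 990
ef_1pct, hr_1pct = enrichment_metrics(scores, labels)
```

In this perfect-ranking toy case the enrichment factor at 1 % reaches its maximum of 100 (the reciprocal of the active fraction); the paper reports such values as percentages in the hit-rate case.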
Bause, Fabian; Walther, Andrea; Rautenberg, Jens; Henning, Bernd
2013-12-01
For the modeling and simulation of wave propagation in geometrically simple waveguides such as plates or rods, one may employ the analytical global matrix method. That is, a certain (global) matrix depending on the two parameters wavenumber and frequency is built. Subsequently, one must calculate all parameter pairs within the domain of interest where the global matrix becomes singular. For this purpose, one could compute all roots of the determinant of the global matrix when the two parameters vary in the given intervals. This requirement to calculate all roots is actually the method's most concerning restriction. Previous approaches are based on so-called mode-tracers, which use the physical phenomenon that solutions, i.e., roots of the determinant of the global matrix, appear in a certain pattern, the waveguide modes, to limit the root-finding algorithm's search space with respect to consecutive solutions. In some cases, these reductions of the search space yield only an incomplete set of solutions, because some roots may be missed as a result of uncertain predictions. Therefore, we propose replacement of the mode-tracer approach with a suitable version of an interval-Newton method. To apply this interval-based method, we extended the interval and derivative computation provided by a numerical computing environment such that corresponding information is also available for Bessel functions used in circular models of acoustic waveguides. We present numerical results for two different scenarios. First, a polymeric cylindrical waveguide is simulated, and second, we show simulation results of a one-sided fluid-loaded plate. For both scenarios, we compare results obtained with the proposed interval-Newton algorithm and commercial software. PMID:24297025
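The attraction of the interval-Newton method here is that, unlike a mode-tracer, it provably encloses all roots in a box: each interval is contracted with a Newton step using an interval enclosure of the derivative, intervals with an empty intersection are discarded (they contain no root), and bisection is used when the derivative enclosure straddles zero. A one-dimensional sketch with an invented toy function (not the Bessel-function dispersion determinant of the paper):

```python
import math

def interval_newton(f, df_interval, lo, hi, tol=1e-10):
    """Enclose all roots of f in [lo, hi] with the interval-Newton method.
    df_interval(a, b) must return an enclosure (dlo, dhi) of f' on [a, b]."""
    roots, stack = [], [(lo, hi)]
    while stack:
        a, b = stack.pop()
        m = 0.5 * (a + b)
        if b - a < tol:
            if f(a) * f(b) <= 0.0 or abs(f(m)) < 1e-8:
                roots.append(m)
            continue
        fm = f(m)
        dlo, dhi = df_interval(a, b)
        if dlo <= 0.0 <= dhi:
            # Derivative enclosure straddles zero: fall back to bisection.
            stack.extend([(a, m), (m, b)])
            continue
        # Interval Newton step: N = m - fm / [dlo, dhi], intersect with [a, b].
        c1, c2 = m - fm / dlo, m - fm / dhi
        na, nb = max(min(c1, c2), a), min(max(c1, c2), b)
        if na > nb:
            continue                         # empty intersection: no root here
        if nb - na >= 0.9 * (b - a):
            stack.extend([(a, m), (m, b)])   # slow contraction: bisect instead
        else:
            stack.append((na, nb))
    return sorted(roots)

# Toy problem: f(x) = x^2 - 2 on [0, 2]; f' = 2x, so (2a, 2b) encloses
# the derivative on [a, b] for a >= 0.
roots = interval_newton(lambda x: x * x - 2.0,
                        lambda a, b: (2.0 * a, 2.0 * b),
                        0.0, 2.0)
```

A production version would use outward-rounded interval arithmetic for the enclosures, which is the extension to Bessel functions that the authors describe.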
Kurtz, S.; Wanlass, M.; Kramer, C.; Young, M.; Geisz, J.; Ward, S.; Duda, A.; Moriarty, T.; Carapella, J.; Ahrenkiel, P.; Emery. K.; Jones, K.; Romero, M.; Kibbler, A.; Olson, J.; Friedman, D.; McMahon, W.; Ptak, A.
2005-11-01
GaInP/GaAs/GaInAs three-junction cells are grown in an inverted configuration on GaAs, allowing high quality growth of the lattice matched GaInP and GaAs layers before a grade is used for the 1-eV GaInAs layer. Using this approach an efficiency of 37.9% was demonstrated.
Algorithms and Algorithmic Languages.
ERIC Educational Resources Information Center
Veselov, V. M.; Koprov, V. M.
This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…
An approach to the development and analysis of wind turbine control algorithms
Wu, K.C.
1998-03-01
The objective of this project is to develop the capability of symbolically generating an analytical model of a wind turbine for studies of control systems. This report focuses on a theoretical formulation of the symbolic equations of motion (EOMs) modeler for horizontal axis wind turbines. In addition to the power train dynamics, a generic 7-axis rotor assembly is used as the base model from which the EOMs of various turbine configurations can be derived. A systematic approach to generate the EOMs is presented using d'Alembert's principle and Lagrangian dynamics. A Matlab M file was implemented to generate the EOMs of a two-bladed, free yaw wind turbine. The EOMs will be compared in the future to those of a similar wind turbine modeled with the YawDyn code for verification. This project was sponsored by Sandia National Laboratories as part of the Adaptive Structures and Control Task. This is the final report of Sandia Contract AS-0985.
NASA Technical Reports Server (NTRS)
Mattar, F. P.; Teichmann, J.; Bissonnette, L. R.; Maccormack, R. W.
1979-01-01
The paper presents a three-dimensional analysis of the nonlinear light-matter interaction in a hydrodynamic context. It is reported that the resulting equations are a generalization of the Navier-Stokes equations subjected to an internal potential which depends solely upon the fluid density. In addition, three numerical approaches are presented to solve the governing equations using an extension of the MacCormack predictor-corrector scheme. These are a uniform grid, a dynamic rezoned grid, and a splitting technique. It is concluded that the use of adaptive mapping and splitting techniques with the MacCormack two-level predictor-corrector scheme results in an efficient and reliable code whose storage requirements are modest compared with other second order methods of equal accuracy.
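The MacCormack scheme alternates a forward-difference predictor with a backward-difference corrector and averages the two, giving second-order accuracy in space and time. A minimal sketch for the linear advection equation u_t + c u_x = 0, a standard textbook reduction far simpler than the paper's 3D Navier-Stokes-type system (grid, Courant number, and test pulse are illustrative choices):

```python
import numpy as np

def maccormack_advection(u0, c=1.0, dx=1.0, dt=0.5, steps=1):
    """MacCormack predictor-corrector for u_t + c*u_x = 0, periodic BCs."""
    u = u0.astype(float).copy()
    nu = c * dt / dx                    # Courant number; stable for nu <= 1
    for _ in range(steps):
        # Predictor: forward difference.
        up = u - nu * (np.roll(u, -1) - u)
        # Corrector: backward difference on the predicted field, averaged.
        u = 0.5 * (u + up - nu * (up - np.roll(up, 1)))
    return u

x = np.linspace(0.0, 1.0, 64, endpoint=False)
u0 = np.exp(-100.0 * (x - 0.5) ** 2)    # Gaussian pulse
u3 = maccormack_advection(u0, c=1.0, dx=1.0, dt=1.0, steps=3)
```

At a Courant number of exactly 1 the scheme degenerates to an exact one-cell shift per step, a convenient correctness check before moving to nonlinear systems.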
A Study of Penalty Function Methods for Constraint Handling with Genetic Algorithm
NASA Technical Reports Server (NTRS)
Ortiz, Francisco
2004-01-01
COMETBOARDS (Comparative Evaluation Testbed of Optimization and Analysis Routines for Design of Structures) is a design optimization test bed that can evaluate the performance of several different optimization algorithms. A few of these optimization algorithms are the sequence of unconstrained minimization techniques (SUMT), sequential linear programming (SLP) and the sequential quadratic programming techniques (SQP). A genetic algorithm (GA) is a search technique based on the principles of natural selection or "survival of the fittest". Instead of using gradient information, the GA uses the objective function directly in the search. The GA searches the solution space by maintaining a population of potential solutions. Then, using evolving operations such as recombination, mutation and selection, the GA creates successive generations of solutions that will evolve and take on the positive characteristics of their parents and thus gradually approach optimal or near-optimal solutions. By using the objective function directly in the search, genetic algorithms can be effectively applied in non-convex, highly nonlinear, complex problems. The genetic algorithm is not guaranteed to find the global optimum, but it is less likely to get trapped at a local optimum than traditional gradient-based search methods when the objective function is not smooth and generally well behaved. The purpose of this research is to assist in the integration of the genetic algorithm (GA) into COMETBOARDS. COMETBOARDS casts the design of structures as a constrained nonlinear optimization problem. One method of solving a constrained optimization problem with a GA is to convert it into an unconstrained optimization problem by developing a penalty function that penalizes infeasible solutions. Several penalty functions have been suggested in the literature, each with its own strengths and weaknesses. A statistical analysis of some suggested penalty functions
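The conversion described above is commonly done with a static exterior penalty: add r·Σ max(0, g_i(x))² to the objective so that infeasible candidates score worse but remain in the population. A minimal sketch on an invented one-dimensional problem (not COMETBOARDS or its structural test cases; the objective, constraint, and GA settings are all illustrative):

```python
import random

def penalized(x, r=1000.0):
    """Static exterior penalty: objective plus r * squared constraint violation.
    Toy problem: minimize (x - 2)^2 subject to g(x) = x - 1 <= 0."""
    obj = (x - 2.0) ** 2
    viol = max(0.0, x - 1.0)
    return obj + r * viol ** 2

def ga_minimize(fit, lo=-5.0, hi=5.0, pop_size=40, n_gen=100, seed=2):
    """Tiny real-coded elitist GA: keep the best half, breed the rest."""
    random.seed(seed)
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(n_gen):
        pop.sort(key=fit)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = 0.5 * (a + b) + random.gauss(0.0, 0.1)  # blend + mutation
            children.append(min(max(child, lo), hi))
        pop = parents + children
    return min(pop, key=fit)

x_best = ga_minimize(penalized)
```

The unconstrained minimum of the objective sits at x = 2, but the penalty pushes the GA to the constraint boundary near x = 1, illustrating both the strength of the approach and its known weakness: the answer depends on the penalty weight r, which is exactly why comparing penalty formulations statistically is worthwhile.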
Rafique, Rashid; Kumar, Sandeep; Luo, Yiqi; Kiely, Gerard; Asrar, Ghassem R.
2015-02-01
The accurate calibration of complex biogeochemical models is essential for the robust estimation of soil greenhouse gases (GHG) as well as other environmental conditions and parameters that are used in research and policy decisions. DayCent is a popular biogeochemical model used both nationally and internationally for this purpose. Despite DayCent's popularity, its complex parameter estimation is often based on experts' knowledge, which is somewhat subjective. In this study we used the inverse modelling parameter estimation software (PEST) to calibrate the DayCent model based on sensitivity and identifiability analysis. Using previously published N2O and crop yield data as a basis of our calibration approach, we found that half of the 140 parameters used in this study were the primary drivers of calibration differences (i.e. the most sensitive) and the remaining parameters could not be identified given the data set and parameter ranges we used in this study. The post-calibration results showed improvement over the pre-calibration parameter set based on a decrease in residual differences of 79% for N2O fluxes and 84% for crop yield, and an increase in the coefficient of determination of 63% for N2O fluxes and 72% for corn yield. The results of our study suggest that future studies need to better characterize germination temperature, number of degree-days and the temperature dependency of plant growth; these processes were highly sensitive and could not be adequately constrained by the data used in our study. Furthermore, the sensitivity and identifiability analysis was helpful in providing deeper insight into important processes and associated parameters that can lead to further improvement in calibration of the DayCent model.
Leckenby, J I; Ghali, S; Butler, D P; Grobbelaar, A O
2015-05-01
Facial palsy patients suffer an array of problems ranging from functional to psychological issues. With regard to the eye, lacrimation, lagophthalmos and the inability to spontaneously blink are the main symptoms and if left untreated can compromise the cornea and vision. There are a multitude of treatment modalities available and the surgeon has the challenging prospect of choosing the correct intervention to yield the best outcome for a patient. The accurate assessment of the eye in facial paralysis is described and by approaching the brow and the eye separately the treatment options and indications are discussed having been broken down into static and dynamic modalities. Based on our unit's experience of more than 35 years and 1000 cases of facial palsy, we have developed a detailed approach to help manage these patients optimally. The aim of this article is to provide the reader with a systematic algorithm that can be used when consulting a patient with eye problems associated with facial palsy. PMID:25656336
Marto, Aminaton; Hajihassani, Mohsen; Armaghani, Danial Jahed; Mohamad, Edy Tonnizam; Makhtar, Ahmad Mahir
2014-01-01
Flyrock is one of the major disturbances induced by blasting which may cause severe damage to nearby structures. This phenomenon has to be precisely predicted and subsequently controlled through changes in the blast design to minimize the potential risk of blasting. The scope of this study is to predict flyrock induced by blasting through a novel approach based on the combination of imperialist competitive algorithm (ICA) and artificial neural network (ANN). For this purpose, the parameters of 113 blasting operations were accurately recorded and flyrock distances were measured for each operation. By applying sensitivity analysis, maximum charge per delay and powder factor were determined as the most influential parameters on flyrock. In the light of this analysis, two new empirical predictors were developed to predict flyrock distance. For comparison purposes, a backpropagation (BP) ANN was also developed and its results were compared with those of the proposed ICA-ANN model and the empirical predictors. The results clearly showed the superiority of the proposed ICA-ANN model in comparison with the BP-ANN model and the empirical approaches. PMID:25147856
Alshamlan, Hala M; Badr, Ghada H; Alohali, Yousef A
2015-06-01
Naturally inspired evolutionary algorithms prove effective when used for solving feature selection and classification problems. Artificial Bee Colony (ABC) is a relatively new swarm intelligence method. In this paper, we propose a new hybrid gene selection method, namely the Genetic Bee Colony (GBC) algorithm. The proposed algorithm combines the use of a Genetic Algorithm (GA) with the Artificial Bee Colony (ABC) algorithm. The goal is to integrate the advantages of both algorithms. The proposed algorithm is applied to a microarray gene expression profile in order to select the most predictive and informative genes for cancer classification. In order to test the accuracy performance of the proposed algorithm, extensive experiments were conducted. Three binary microarray datasets are used, which include: colon, leukemia, and lung. In addition, another three multi-class microarray datasets are used, which are: SRBCT, lymphoma, and leukemia. Results of the GBC algorithm are compared with our recently proposed technique: mRMR combined with the Artificial Bee Colony algorithm (mRMR-ABC). We also compared the combination of mRMR with GA (mRMR-GA) and Particle Swarm Optimization (mRMR-PSO) algorithms. In addition, we compared the GBC algorithm with other related algorithms that have been recently published in the literature, using all benchmark datasets. The GBC algorithm shows superior performance as it achieved the highest classification accuracy along with the lowest average number of selected genes. This proves that the GBC algorithm is a promising approach for solving the gene selection problem in both binary and multi-class cancer classification. PMID:25880524
NASA Astrophysics Data System (ADS)
Ahangari, Zahra
2016-02-01
This paper explores the impact of indium mole fraction on the electrical characteristics of the InxGa1-xAs double-gate Schottky MOSFET (SBFET) in the nanoscale regime. A 20-band sp3d5s* tight-binding formalism is applied to compute the bandstructure of the ultra-thin body structure as a function of indium mole fraction. The injection velocity of carriers increases as the indium mole fraction approaches x = 1. Quantum confinement results in an increment of the effective Schottky barrier height, especially for larger values of the indium mole fraction. The ultra-scaled InxGa1-xAs SBFET suffers from a low conduction band DOS in the Γ valley that results in serious degradation of the gate capacitance. The electrical characteristics of this device are considered by solving self-consistent 2D Schrödinger-Poisson equations based on the non-equilibrium Green's function formalism. For channel thicknesses where the effect of quantum confinement on the gate capacitance is not dominant, shrinking the channel thickness besides increasing the indium mole fraction improves the electrical characteristics of the device. However, for the ultra-scaled structure, indium mole fraction enhancement degrades the device performance due to the enhanced value of the Schottky barrier height and low DOS.
NASA Astrophysics Data System (ADS)
Abrams, Daniel S.
This thesis describes several new quantum algorithms. These include a polynomial time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases (commonly found in ab initio physics and chemistry problems) for which all known classical algorithms require exponential time. Fast algorithms for simulating many body Fermi systems are also provided in both first and second quantized descriptions. An efficient quantum algorithm for anti-symmetrization is given as well as a detailed discussion of a simulation of the Hubbard model. In addition, quantum algorithms that calculate numerical integrals and various characteristics of stochastic processes are described. Two techniques are given, both of which obtain an exponential speed increase in comparison to the fastest known classical deterministic algorithms and a quadratic speed increase in comparison to classical Monte Carlo (probabilistic) methods. I derive a simpler and slightly faster version of Grover's mean algorithm, show how to apply quantum counting to the problem, develop some variations of these algorithms, and show how both (apparently distinct) approaches can be understood from the same unified framework. Finally, the relationship between physics and computation is explored in some more depth, and it is shown that computational complexity theory depends very sensitively on physical laws. In particular, it is shown that nonlinear quantum mechanics allows for the polynomial time solution of NP-complete and #P oracle problems. Using the Weinberg model as a simple example, the explicit construction of the necessary gates is derived from the underlying physics. Nonlinear quantum algorithms are also presented using Polchinski type nonlinearities which do not allow for superluminal communication. (Copies available exclusively from MIT Libraries, Rm. 14- 0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)
NASA Astrophysics Data System (ADS)
Herrera, Kathleen Kate
In recent years, laser-induced breakdown spectroscopy (LIBS) has become an increasingly popular technique for many diverse applications. This is mainly due to its numerous attractive features including minimal to no sample preparation, minimal sample invasiveness, sample versatility, remote detection capability and simultaneous multi-elemental capability. However, most LIBS applications are limited to semi-quantitative or relative analysis due to the difficulty in finding matrix-matched standards or a constant reference component in the system for calibration purposes. Therefore, methods which do not require the use of reference standards, hence standard-free, are highly desired. In this research, a general LIBS system was constructed, calibrated and optimized. The corresponding instrumental function and relative spectral efficiency of the detection system were also investigated. In addition, development of a spectral acquisition method was necessary so that data in the wide spectral range from 220 to 700 nm may be obtained using a non-echelle detection system. This requires multiple acquisitions of successive spectral windows and splicing the windows together with optimum overlap using an in-house program written in Q-basic. Two existing standard-free approaches, the calibration-free LIBS (CF-LIBS) technique and the Monte Carlo simulated annealing optimization modeling algorithm for LIBS (MC-LIBS), were experimentally evaluated in this research. The CF-LIBS approach, which is based on the Boltzmann plot method, is used to directly evaluate the plasma temperature, electron number density and relative concentrations of species present in a given sample without the need for reference standards. In the second approach, the initial value problem is solved based on the model of a radiative plasma expanding into vacuum. Here, the prediction of the initial plasma conditions (i.e., temperature and elemental number densities) is achieved by a step-wise Monte Carlo
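The Boltzmann-plot step at the heart of CF-LIBS fits ln(Iλ/(gA)) against the upper-level energy E of each emission line; for a plasma in local thermodynamic equilibrium the points fall on a line of slope -1/(k_B T), so the plasma temperature comes from the fit. A minimal sketch with synthetic line data (not measurements from this work; units and line parameters are illustrative):

```python
import math

K_B_EV = 8.617333262e-5   # Boltzmann constant in eV/K

def boltzmann_plot_temperature(lines):
    """Estimate plasma temperature from a Boltzmann plot. Each line is
    (intensity I, wavelength lam, degeneracy g, transition probability A,
    upper-level energy E in eV); ln(I*lam/(g*A)) = -E/(kB*T) + const."""
    xs = [e for (_, _, _, _, e) in lines]
    ys = [math.log(i * lam / (g * a)) for (i, lam, g, a, _) in lines]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -1.0 / (K_B_EV * slope)

# Synthesize lines at T = 10000 K (constants folded into the intensities).
KT = K_B_EV * 10000.0
lines = [(math.exp(-e / KT), 1.0, 1.0, 1.0, e) for e in (2.0, 3.0, 4.0, 5.0)]
T_est = boltzmann_plot_temperature(lines)
```

With the temperature in hand, CF-LIBS then closes the system by normalizing the species concentrations to sum to unity, which is what removes the need for reference standards.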
Modelling Aṣṭādhyāyī: An Approach Based on the Methodology of Ancillary Disciplines (Vedāṅga)
NASA Astrophysics Data System (ADS)
Mishra, Anand
This article proposes a general model based on the common methodological approach of the ancillary disciplines (Vedāṅga) associated with the Vedas, taking examples from Śikṣā, Chandas, Vyākaraṇa and Prātiśākhya texts. It develops and elaborates this model further to represent the contents and processes of Aṣṭādhyāyī. Certain key features are added to my earlier modelling of the Pāṇinian system of Sanskrit grammar. This includes broader coverage of the Pāṇinian meta-language, a mechanism for automatic application of rules, and positioning the grammatical system within the procedural complexes of the ancillary disciplines.
NASA Astrophysics Data System (ADS)
Borovkov, Alexei I.; Misnik, Yuri Y.
1999-05-01
This paper presents a new approach to the fracture analysis of laminated composite structures (laminates). The first part of the paper is devoted to a general algorithm that obtains critical stresses for any structure by considering only a strip made from the same laminate. The algorithm is based on the computation of the energy release rates for all three crack modes and yields macro-failure parameters, such as critical stresses, from micro-fracture characteristics. The developed algorithm also rests on the locality principle in the mechanics of composite structures and the sequential heterogenization method. It can be applied both to classical models of laminates with homogeneous layers and to new 3D finite element (FE) models of interfacial cracks in multidirectional composite structures. The results of multilevel, multimodel and multivariant analyses of 3D delamination problems, with detailed microstructure in the crack tip zone, are presented.
Efficiently hiding sensitive itemsets with transaction deletion based on genetic algorithms.
Lin, Chun-Wei; Zhang, Binbin; Yang, Kuo-Tung; Hong, Tzung-Pei
2014-01-01
Data mining is used to extract meaningful and useful information or knowledge from very large databases. Since secure or private information can be discovered by data mining techniques, there is an inherent risk of threats to privacy. Privacy-preserving data mining (PPDM) has thus arisen in recent years to sanitize the original database so as to hide sensitive information; the sanitization process can be regarded as an NP-hard problem. In this paper, a compact prelarge GA-based algorithm (cpGA2DT) is proposed that deletes transactions to hide sensitive itemsets. It overcomes the limitations of the evolutionary process by adopting both the compact GA-based (cGA) mechanism and the prelarge concept. A flexible fitness function with three adjustable weights is designed to find the appropriate transactions to delete in order to hide sensitive itemsets with minimal side effects of hiding failure, missing cost, and artificial cost. Experiments show the performance of the proposed cpGA2DT algorithm compared to the simple GA-based (sGA2DT) algorithm and a greedy approach in terms of execution time and the three side effects. PMID:25254239
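The three side effects above combine naturally into a single weighted objective. The sketch below is a minimal illustration of such a fitness function over a toy transaction database; the itemset sizes, weights and helper names are assumptions for illustration, not the paper's cpGA2DT implementation.

```python
from itertools import combinations

def frequent_itemsets(db, min_sup, max_len=2):
    """All itemsets (up to max_len items) whose relative support >= min_sup."""
    items = sorted({i for t in db for i in t})
    freq = set()
    for k in range(1, max_len + 1):
        for c in combinations(items, k):
            c = frozenset(c)
            if sum(1 for t in db if c <= t) / len(db) >= min_sup:
                freq.add(c)
    return freq

def deletion_fitness(delete_idx, db, sensitive, min_sup, w=(0.6, 0.3, 0.1)):
    """Weighted sum of the three sanitization side effects (lower is better)."""
    before = frequent_itemsets(db, min_sup)
    kept = [t for i, t in enumerate(db) if i not in delete_idx]
    after = frequent_itemsets(kept, min_sup)
    hiding_failure = len(sensitive & after)           # sensitive sets still frequent
    missing_cost = len((before - sensitive) - after)  # lost non-sensitive frequent sets
    artificial_cost = len(after - before)             # spuriously new frequent sets
    return w[0] * hiding_failure + w[1] * missing_cost + w[2] * artificial_cost
```

A GA chromosome would encode `delete_idx`, the set of transactions to remove, and evolve it to minimize this fitness.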
Actuator Placement Via Genetic Algorithm for Aircraft Morphing
NASA Technical Reports Server (NTRS)
Crossley, William A.; Cook, Andrea M.
2001-01-01
This research continued work that began under the support of NASA Grant NAG1-2119. The focus of this effort was to continue investigations of Genetic Algorithm (GA) approaches that could be used to solve an actuator placement problem by treating this as a discrete optimization problem. In these efforts, the actuators are assumed to be "smart" devices that change the aerodynamic shape of an aircraft wing to alter the flow past the wing, and, as a result, provide aerodynamic moments that could provide flight control. The earlier work investigated issues in the problem statement, developed the appropriate actuator modeling, recognized the importance of symmetry for this problem, modified the aerodynamic analysis routine for more efficient use with the genetic algorithm, and began a problem size study to measure the impact of increasing problem complexity. The research discussed in this final summary further investigated the problem statement to provide a "combined moment" problem statement to simultaneously address roll, pitch and yaw. Investigations of problem size using this new problem statement provided insight into performance of the GA as the number of possible actuator locations increased. Where previous investigations utilized a simple wing model to develop the GA approach for actuator placement, this research culminated with application of the GA approach to a high-altitude unmanned aerial vehicle concept to demonstrate that the approach is valid for an aircraft configuration.
Adaptively resizing populations: Algorithm, analysis, and first results
NASA Technical Reports Server (NTRS)
Smith, Robert E.; Smuda, Ellen
1993-01-01
Deciding on an appropriate population size for a given Genetic Algorithm (GA) application can often be critical to the algorithm's success. Too small, and the GA can fall victim to sampling error, affecting the efficacy of its search. Too large, and the GA wastes computational resources. Although advice exists for sizing GA populations, much of this advice involves theoretical aspects that are not accessible to the novice user. An algorithm for adaptively resizing GA populations is suggested. This algorithm is based on recent theoretical developments that relate population size to schema fitness variance. The suggested algorithm is developed theoretically, and simulated with expected value equations. The algorithm is then tested on a problem where population sizing can mislead the GA. The work presented suggests that the population sizing algorithm may be a viable way to eliminate the population sizing decision from the application of GAs.
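The variance-driven idea can be sketched as a simple resizing rule: grow the population when fitness variance is high (to reduce sampling error) and shrink it when variance is low (to save evaluations). The proportionality constant and clone-to-grow policy below are illustrative assumptions, not the paper's schema-variance derivation.

```python
import random
import statistics

def adapt_population(pop, fitness, n_min=10, n_max=200, gamma=0.5, rng=random):
    """Resize a GA population in proportion to the observed fitness variance."""
    fits = [fitness(x) for x in pop]
    target = max(n_min, min(n_max, int(gamma * statistics.pvariance(fits))))
    ranked = [x for _, x in sorted(zip(fits, pop), reverse=True)]
    if target <= len(pop):
        return ranked[:target]                 # shrink: keep the fittest
    clones = [rng.choice(ranked) for _ in range(target - len(pop))]
    return ranked + clones                     # grow: clone random survivors
```

Called once per generation, this removes the up-front population-size decision at the cost of two tuning constants (`gamma` and the clamp bounds).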
Genetic Algorithm based Decentralized PI Type Controller: Load Frequency Control
NASA Astrophysics Data System (ADS)
Dwivedi, Atul; Ray, Goshaidas; Sharma, Arun Kumar
2016-05-01
This work presents the design of a decentralized PI-type Linear Quadratic (LQ) controller based on a genetic algorithm (GA). The proposed design technique allows considerable flexibility in defining the control objectives; it requires no knowledge of the system matrices and avoids solving the algebraic Riccati equation. To illustrate the results of this work, a load-frequency control problem is considered. Simulation results reveal that the proposed GA-based scheme is an alternative and attractive approach to the load-frequency control problem from both the performance and design points of view.
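In a GA-tuned design like this, each chromosome is a gain pair and the fitness is a simulated closed-loop cost. The sketch below evaluates PI gains on a deliberately simplified one-area frequency model (a first-order plant under a step load disturbance); the plant constants and the ISE criterion are assumptions for illustration, not the paper's LQ formulation.

```python
def lfc_cost(kp, ki, M=10.0, D=1.0, dPL=0.01, dt=0.01, T=20.0):
    """Integral-squared frequency deviation of a toy one-area model
    M*df' = -D*df + u - dPL, with PI control u = -(kp*df + ki*int(df))."""
    df = 0.0      # frequency deviation
    integ = 0.0   # integral of the deviation
    cost = 0.0
    for _ in range(int(T / dt)):
        u = -(kp * df + ki * integ)
        df += dt * (-D * df + u - dPL) / M   # forward-Euler plant update
        integ += dt * df
        cost += dt * df * df
    return cost
```

A GA would evolve `(kp, ki)` pairs to minimize this cost; decent gains drive the steady-state deviation to zero, so their cost is far below the uncontrolled case.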
Learning Cue Phrase Patterns from Radiology Reports Using a Genetic Algorithm
Patton, Robert M; Beckerman, Barbara G; Potok, Thomas E
2009-01-01
Various computer-assisted technologies have been developed to assist radiologists in detecting cancer; however, the algorithms still lack high degrees of sensitivity and specificity, and must undergo machine learning against a training set with known pathologies in order to be refined to higher validity. This work describes an approach to learning cue phrase patterns in radiology reports that utilizes a genetic algorithm (GA) as the learning method. The approach successfully learned cue phrase patterns for two distinct classes of radiology reports. These patterns can then be used as a basis for automatically categorizing, clustering, or retrieving relevant data for the user.
Ban, Hiroshi; Yamamoto, Hiroki
2013-01-01
In almost all of the recent vision experiments, stimuli are controlled via computers and presented on display devices such as cathode ray tubes (CRTs). Display characterization is a necessary procedure for such computer-aided vision experiments. The standard display characterization called "gamma correction" and the following linear color transformation procedure are established for CRT displays and widely used in the current vision science field. However, the standard two-step procedure is based on the internal model of CRT display devices, and there is no guarantee as to whether the method is applicable to the other types of display devices such as liquid crystal display and digital light processing. We therefore tested the applicability of the standard method to these kinds of new devices and found that the standard method was not valid for these new devices. To overcome this problem, we provide several novel approaches for vision experiments to characterize display devices, based on linear, nonlinear, and hybrid search algorithms. These approaches never assume any internal models of display devices and will therefore be applicable to any display type. The evaluations and comparisons of chromaticity estimation accuracies based on these new methods with those of the standard procedure proved that our proposed methods largely improved the calibration efficiencies for non-CRT devices. Our proposed methods, together with the standard one, have been implemented in a MATLAB-based integrated graphical user interface software named Mcalibrator2. This software can enhance the accuracy of vision experiments and enable more efficient display characterization procedures. The software is now available publicly for free. PMID:23729771
Wee, Aileen
2005-01-01
The role of fine needle aspiration biopsy (FNAB) in the evaluation of focal liver lesions has evolved. Guided FNAB is still useful to procure a tissue diagnosis if clinical, biochemical and radiologic findings are inconclusive. Major diagnostic issues include: (i) Distinction of benign hepatocellular nodular lesions from reactive hepatocytes, (ii) Distinction of well-differentiated hepatocellular carcinoma (WD-HCC) from benign hepatocellular nodular lesions, (iii) Distinction of poorly differentiated HCC from cholangiocarcinoma and metastatic carcinomas, (iv) Determination of histogenesis of malignant tumor, and (v) Determination of primary site of origin of malignant tumor. This review gives a general overview of hepatic FNAB; outlines an algorithmic approach to cytodiagnosis with emphasis on HCC, its variants and their mimics; and addresses current diagnostic issues. Close radiologic surveillance of high-risk cirrhotic patients has resulted in the increasing detection of smaller lesions with many subjected to biopsy for tissue characterization. The need for tissue confirmation in clinically obvious HCC is questioned due to risk of malignant seeding. When a biopsy is indicated, core needle biopsy is favored over FNAB. The inherent difficulty of distinguishing small/early HCC from benign hepatocellular nodular lesions has resulted in indeterminate reports. Changing concepts in the understanding of the biological behavior and morphologic evolution of HCC and its precursors; and the current lack of agreement on the morphologic criteria for distinguishing high-grade dysplastic lesions (with small cell change) from WD-HCC, have profound impact on nomenclature, cytohistologic interpretation and management. Optimization of hepatic FNAB to enhance the yield and accuracy of diagnoses requires close clinicopathologic correlation; combined cytohistologic approach; judicious use of ancillary tests; and skilled healthcare teams. PMID:15941489
Gálvez, Akemi; Iglesias, Andrés
2013-01-01
Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. Firstly, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but precomputed with a coarse approximation scheme. Secondly, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to optimization of data parameterization; then, the knot vector is refined by using De Boor's method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently. PMID:24376380
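The bioinspired optimizer named above, the firefly algorithm, is simple to state: every firefly moves toward each brighter (lower-objective) one, with an attractiveness that decays with squared distance, plus a small random walk. The minimal continuous-minimization sketch below uses generic parameter values and a generic objective; it is not the paper's spline-parameterization setup.

```python
import math
import random

def firefly_minimise(obj, dim, n=15, iters=100, beta0=1.0, gamma=1.0,
                     alpha=0.05, bounds=(-5.0, 5.0), seed=1):
    """Minimal firefly algorithm for continuous minimisation."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    for _ in range(iters):
        light = [obj(x) for x in pop]
        for i in range(n):
            for j in range(n):
                if light[j] < light[i]:   # firefly j is brighter: move i toward j
                    r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                    beta = beta0 * math.exp(-gamma * r2)   # distance-decayed pull
                    pop[i] = [min(hi, max(lo, a + beta * (b - a)
                                          + alpha * (rng.random() - 0.5)))
                              for a, b in zip(pop[i], pop[j])]
                    light[i] = obj(pop[i])
    return min(pop, key=obj)
```

In the paper's scheme, the decision variables would be the data parameters of the spline fit rather than this generic vector.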
Scheduling Jobs with Genetic Algorithms
NASA Astrophysics Data System (ADS)
Ferrolho, António; Crisóstomo, Manuel
Most scheduling problems are NP-hard: the time required to solve them optimally increases exponentially with problem size. Scheduling problems have important applications, and a number of heuristic algorithms have been proposed to determine relatively good solutions in polynomial time. Recently, genetic algorithms (GA) have been successfully applied to scheduling problems, as shown by a growing number of papers. GA are known as among the most efficient algorithms for solving scheduling problems, but when a GA is applied to them, various crossover and mutation operators are applicable. This paper presents and examines a new conception of genetic operators for scheduling problems. A software tool called the hybrid and flexible genetic algorithm (HybFlexGA) was developed to examine the performance of various crossover and mutation operators through simulations of job scheduling problems.
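Scheduling GAs encode a solution as a job permutation, so their crossover operators must preserve permutation validity. A classic example of the kind of operator such a study compares is order crossover (OX), sketched below with explicit cut points for clarity; this is the textbook operator, not necessarily one of HybFlexGA's.

```python
def order_crossover(p1, p2, i, j):
    """Order crossover (OX) for permutation-encoded schedules: keep p1's
    slice [i..j], then fill the remaining positions in the order the
    missing jobs appear in p2 (starting after cut point j, wrapping)."""
    n = len(p1)
    segment = set(p1[i:j + 1])
    child = [None] * n
    child[i:j + 1] = p1[i:j + 1]
    fill = [g for g in p2[j + 1:] + p2[:j + 1] if g not in segment]
    for pos, g in zip(list(range(j + 1, n)) + list(range(i)), fill):
        child[pos] = g
    return child
```

The child inherits a contiguous sub-schedule from one parent and the relative job order of the other, which is why OX works well on sequencing problems.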
Composite droplets: evolution of InGa and AlGa alloys on GaAs(100).
Sablon, K A; Wang, Zh M; Salamo, G J
2008-03-26
We present a comparative study of the evolution of indium gallium (InGa) and aluminum gallium (AlGa) alloy droplets fabricated on GaAs(100) by simultaneous and sequential droplet formation. Composite alloys produced by the sequential approach lack precise control of the final alloy composition as well as consistency in droplet density. Further, the composition of the InGa alloy is not uniform, as seen from the size distribution measured with an atomic force microscope (AFM). Although the sequential approach may be acceptable for materials with similar surface kinetics, as in the case of AlGa, it is not acceptable for InGa. This investigation reveals that the simultaneous approach is the optimum route to composite InGa alloys with better compositional control for plasmonic applications such as plasmonic waveguides. PMID:21817741
Belwin Edward, J; Rajasekar, N; Sathiyasekar, K; Senthilnathan, N; Sarjila, R
2013-09-01
Obtaining an optimal power flow (OPF) solution is a strenuous task for any power system engineer, and the inclusion of FACTS devices in the network adds to its complexity. The dual objective of OPF, fuel cost minimization along with FACTS device placement, is considered for the IEEE 30-bus system and solved using the proposed Enhanced Bacterial Foraging Algorithm (EBFA). The conventional Bacterial Foraging Algorithm (BFA) suffers from the difficulty of optimal parameter selection; hence, in this paper, BFA is enhanced with the Nelder-Mead (NM) algorithm for better performance. A MATLAB code for EBFA was developed and the OPF problem with FACTS devices solved. After several runs with different initial values, it is found that including FACTS devices such as SVC and TCSC in the network reduces the generation cost while increasing voltage stability limits. It is also observed that the proposed algorithm requires less computational time than previously proposed algorithms. PMID:23759251
Duan, Qian-Qian; Yang, Gen-Ke; Pan, Chang-Chun
2014-01-01
A hybrid optimization algorithm combining the finite state method (FSM) and a genetic algorithm (GA) is proposed to solve the crude oil scheduling problem. The two are combined to exploit the advantages of each and compensate for their individual deficiencies. In the proposed algorithm, the finite state method makes up for the GA's weak local search ability: the heuristic returned by the FSM can guide the GA towards good solutions. The idea is that promising substructures, or partial solutions, can be generated with the FSM, which can also guarantee that the entire solution space is uniformly covered. The combination therefore has better global performance than either the GA or the FSM operated individually. Finally, a real-life crude oil scheduling problem from the literature is used for simulation. The experimental results validate that the proposed method outperforms the state-of-the-art GA method. PMID:24772031
Multidisciplinary design optimization using genetic algorithms
NASA Astrophysics Data System (ADS)
Unal, Resit
1994-12-01
Multidisciplinary design optimization (MDO) is an important step in the conceptual design and evaluation of launch vehicles since it can have a significant impact on performance and life cycle cost. The objective is to search the system design space to determine values of design variables that optimize the performance characteristic subject to system constraints. Gradient-based optimization routines have been used extensively for aerospace design optimization. However, one limitation of gradient based optimizers is their need for gradient information. Therefore, design problems which include discrete variables can not be studied. Such problems are common in launch vehicle design. For example, the number of engines and material choices must be integer values or assume only a few discrete values. In this study, genetic algorithms are investigated as an approach to MDO problems involving discrete variables and discontinuous domains. Optimization by genetic algorithms (GA) uses a search procedure which is fundamentally different from those gradient based methods. Genetic algorithms seek to find good solutions in an efficient and timely manner rather than finding the best solution. GA are designed to mimic evolutionary selection. A population of candidate designs is evaluated at each iteration, and each individual's probability of reproduction (existence in the next generation) depends on its fitness value (related to the value of the objective function). Progress toward the optimum is achieved by the crossover and mutation operations. GA is attractive since it uses only objective function values in the search process, so gradient calculations are avoided. Hence, GA are able to deal with discrete variables. Studies report success in the use of GA for aircraft design optimization studies, trajectory analysis, space structure design and control systems design. In these studies reliable convergence was achieved, but the number of function evaluations was large compared
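Because a GA manipulates chromosomes rather than gradients, discrete design variables such as engine count or material choice drop in naturally. The sketch below shows a minimal elitist GA over discrete domains with tournament selection and uniform crossover; the design variables and toy cost function are hypothetical stand-ins for the discipline analyses a real MDO study would call.

```python
import random

# Hypothetical discrete design domains for a launch-vehicle sketch:
DOMAINS = [(1, 2, 3, 4),                 # number of engines
           ("Al", "Ti", "composite"),    # tank material
           (3.0, 3.5, 4.0, 4.5)]         # core diameter, m

def cost(design):
    """Toy separable cost; a real MDO would evaluate performance models here."""
    engines, material, diam = design
    mat_cost = {"Al": 1.0, "Ti": 3.0, "composite": 2.0}[material]
    return abs(engines - 3) + mat_cost + (diam - 4.0) ** 2

def discrete_ga(domains, cost, pop_size=16, gens=40, pm=0.2, seed=7):
    rng = random.Random(seed)
    rand = lambda: [rng.choice(d) for d in domains]
    pop = [rand() for _ in range(pop_size)]
    best = min(pop, key=cost)
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            a, b = (min(rng.sample(pop, 2), key=cost) for _ in range(2))  # tournaments
            child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]  # uniform xover
            if rng.random() < pm:                       # mutation: re-draw one gene
                i = rng.randrange(len(domains))
                child[i] = rng.choice(domains[i])
            nxt.append(child)
        pop = nxt
        best = min(pop + [best], key=cost)              # elitist best-so-far
    return best
```

Only `cost` values are used, matching the abstract's point that no gradient information is required.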
Multidisciplinary design optimization using genetic algorithms
NASA Technical Reports Server (NTRS)
Unal, Resit
1994-01-01
Multidisciplinary design optimization (MDO) is an important step in the conceptual design and evaluation of launch vehicles since it can have a significant impact on performance and life cycle cost. The objective is to search the system design space to determine values of design variables that optimize the performance characteristic subject to system constraints. Gradient-based optimization routines have been used extensively for aerospace design optimization. However, one limitation of gradient based optimizers is their need for gradient information. Therefore, design problems which include discrete variables can not be studied. Such problems are common in launch vehicle design. For example, the number of engines and material choices must be integer values or assume only a few discrete values. In this study, genetic algorithms are investigated as an approach to MDO problems involving discrete variables and discontinuous domains. Optimization by genetic algorithms (GA) uses a search procedure which is fundamentally different from those gradient based methods. Genetic algorithms seek to find good solutions in an efficient and timely manner rather than finding the best solution. GA are designed to mimic evolutionary selection. A population of candidate designs is evaluated at each iteration, and each individual's probability of reproduction (existence in the next generation) depends on its fitness value (related to the value of the objective function). Progress toward the optimum is achieved by the crossover and mutation operations. GA is attractive since it uses only objective function values in the search process, so gradient calculations are avoided. Hence, GA are able to deal with discrete variables. Studies report success in the use of GA for aircraft design optimization studies, trajectory analysis, space structure design and control systems design. In these studies reliable convergence was achieved, but the number of function evaluations was large compared
NASA Astrophysics Data System (ADS)
Sharma, R. K.; Anil Kumar, A. K.; Xavier James Raj, M.
strategy (Sharma & Anilkumar 2003) adopted for the re-entry prediction of the risk objects by estimating the ballistic coefficient based on the TLEs. The estimation of the ballistic coefficient Bn = m/(CDA), where CD is the drag coefficient, A is the reference area, and m is the mass of the object, is done by minimizing a cost function (variation in the re-entry prediction time) using Genetic algorithm. The KS element equations of motion are numerically integrated with a suitable integration step size with the 4th - order Runge-Kutta-Gill method till the end of the orbital life, by including the Earth's oblateness with J2 to J6 terms, and modelling the air drag forces through an analytical oblate diurnal atmosphere with the density scale height varying with altitude. Jacchia (1977) atmospheric model, which takes into consideration the epoch, daily solar flux (F10.7) and geomagnetic index (Ap) for computation of density and density scale height, is utilized. The basic feature of the present approach is that the model and measurement errors are accountable in terms of adjusting the ballistic coefficient and hence the estimated Bn is not the actual ballistic coefficient but an effective ballistic coefficient. It is demonstrated that the inaccuracies or deficiencies in the inputs, like F10.7 and Ap values, are absorbed in the estimated Bn. The details of the re-entry results based on this approach, utilizing the TLE of debris objects, US Sat No. 25947 and SROSS-C2 Satellite, which re-entered the Earth's atmosphere on 4th March 2000 and 12th July 2001, are provided. Details of the re-entry predictions with respect to the 4th and 5th IADC re-entry campaigns, related to COSMOS 1043 rocket body and COSMOS 389 satellite, which re-entered the Earth's atmosphere on 19 January 2002 and 24 November 2003, respectively, are described. 
The predicted re-entries were found to be consistently quite close to the actual re-entry times, with fairly narrow uncertainty bands on the predictions. A
Phase Reconstruction from FROG Using Genetic Algorithms [Frequency-Resolved Optical Gating]
Omenetto, F.G.; Nicholson, J.W.; Funk, D.J.; Taylor, A.J.
1999-04-12
The authors describe a new technique for obtaining the phase and electric field from FROG measurements using genetic algorithms. Frequency-Resolved Optical Gating (FROG) has gained prominence as a technique for characterizing ultrashort pulses. FROG consists of a spectrally resolved autocorrelation of the pulse to be measured. Typically a combination of iterative algorithms is used, applying constraints from experimental data, and alternating between the time and frequency domain, in order to retrieve an optical pulse. The authors have developed a new approach to retrieving the intensity and phase from FROG data using a genetic algorithm (GA). A GA is a general parallel search technique that operates on a population of potential solutions simultaneously. Operators in a genetic algorithm, such as crossover, selection, and mutation are based on ideas taken from evolution.
NASA Astrophysics Data System (ADS)
Ivanova, N.; Pedersen, L. T.; Tonboe, R. T.; Kern, S.; Heygster, G.; Lavergne, T.; Sørensen, A.; Saldo, R.; Dybkjær, G.; Brucker, L.; Shokr, M.
2015-09-01
Sea ice concentration has been retrieved in polar regions with satellite microwave radiometers for over 30 years. However, the question remains as to what is an optimal sea ice concentration retrieval method for climate monitoring. This paper presents some of the key results of an extensive algorithm inter-comparison and evaluation experiment. The skills of 30 sea ice algorithms were evaluated systematically over low and high sea ice concentrations. Evaluation criteria included standard deviation relative to independent validation data, performance in the presence of thin ice and melt ponds, and sensitivity to error sources with seasonal to inter-annual variations and potential climatic trends, such as atmospheric water vapour and water-surface roughening by wind. A selection of 13 algorithms is shown in the article to demonstrate the results. Based on the findings, a hybrid approach is suggested to retrieve sea ice concentration globally for climate monitoring purposes. This approach consists of a combination of two algorithms plus dynamic tie points implementation and atmospheric correction of input brightness temperatures. The method minimizes inter-sensor calibration discrepancies and sensitivity to the mentioned error sources.
Improved modeling of GaN HEMTs for predicting thermal and trapping-induced-kink effects
NASA Astrophysics Data System (ADS)
Jarndal, Anwar; Ghannouchi, Fadhel M.
2016-09-01
In this paper, an improved modeling approach has been developed and validated for GaN high electron mobility transistors (HEMTs). The proposed analytical model accurately simulates the drain current and its inherent trapping and thermal effects. A genetic-algorithm-based procedure is developed to automatically find the fitting parameters of the model. The developed modeling technique is implemented on a packaged GaN-on-Si HEMT and validated by DC and small-/large-signal RF measurements. The model is also employed for designing and realizing a switch-mode inverse class-F power amplifier. The amplifier simulations show very good agreement with RF large-signal measurements.
Genetic Algorithm to minimize flowtime in a no-wait flowshop scheduling problem
NASA Astrophysics Data System (ADS)
Chaudhry, Imran A.; Ahmed, Riaz; Munem Khan, Abdul
2014-07-01
No-wait flowshop is an important scheduling environment having application in many industries. This paper addresses a no-wait flowshop scheduling problem, where the objective function is to minimise total flowtime. A Genetic Algorithm (GA) optimization approach implemented in a spreadsheet environment is suggested to solve this important class of problem. The proposed algorithm employs a general purpose genetic algorithm which can be customised with ease to address any objective function without modifying the optimization routine. Performance of the proposed approach is compared with eight previously reported algorithms for two sets of benchmark problems. Experimental analysis shows that the performance of the suggested approach is comparable with earlier approaches in terms of quality of solution.
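The GA's fitness evaluation here is the total-flowtime computation itself. In a no-wait flowshop each job's operations run back-to-back, so only each job's start time is free, and consecutive jobs impose a minimum start-to-start delay. The evaluator below follows that standard formulation (it is a generic sketch, not the paper's spreadsheet implementation):

```python
from itertools import accumulate, permutations

def total_flowtime(seq, p):
    """Total flowtime of a no-wait flowshop sequence.
    p[j][m] is the processing time of job j on machine m."""
    start, prev, starts = 0.0, None, []
    for j in seq:
        if prev is not None:
            done_prev = list(accumulate(p[prev]))           # prev job leaves machine m
            enter_next = [0] + list(accumulate(p[j]))[:-1]  # offset of j entering machine m
            # smallest start shift that avoids overlap on every machine:
            start += max(d - e for d, e in zip(done_prev, enter_next))
        starts.append(start)
        prev = j
    return sum(s + sum(p[j]) for s, j in zip(starts, seq))

# Exhaustive check on a tiny instance (a GA would search `seq` for larger ones):
p = [[2, 3], [1, 2]]
best = min(permutations(range(len(p))), key=lambda s: total_flowtime(s, p))
```

A permutation GA simply plugs this evaluator in as the objective and evolves job orders.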
Jin, Zhenong; Zhuang, Qianlai; Tan, Zeli; Dukes, Jeffrey S; Zheng, Bangyou; Melillo, Jerry M
2016-09-01
Stresses from heat and drought are expected to increasingly suppress crop yields, but the degree to which current models can represent these effects is uncertain. Here we evaluate the algorithms that determine impacts of heat and drought stress on maize in 16 major maize models by incorporating these algorithms into a standard model, the Agricultural Production Systems sIMulator (APSIM), and running an ensemble of simulations. Although both daily mean temperature and daylight temperature are common choice of forcing heat stress algorithms, current parameterizations in most models favor the use of daylight temperature even though the algorithm was designed for daily mean temperature. Different drought algorithms (i.e., a function of soil water content, of soil water supply to demand ratio, and of actual to potential transpiration ratio) simulated considerably different patterns of water shortage over the growing season, but nonetheless predicted similar decreases in annual yield. Using the selected combination of algorithms, our simulations show that maize yield reduction was more sensitive to drought stress than to heat stress for the US Midwest since the 1980s, and this pattern will continue under future scenarios; the influence of excessive heat will become increasingly prominent by the late 21st century. Our review of algorithms in 16 crop models suggests that the impacts of heat and drought stress on plant yield can be best described by crop models that: (i) incorporate event-based descriptions of heat and drought stress, (ii) consider the effects of nighttime warming, and (iii) coordinate the interactions among multiple stresses. Our study identifies the proficiency with which different model formulations capture the impacts of heat and drought stress on maize biomass and yield production. The framework presented here can be applied to other modeled processes and used to improve yield predictions of other crops with a wide variety of crop models. PMID:27251794
A guided search genetic algorithm using mined rules for optimal affective product design
NASA Astrophysics Data System (ADS)
Fung, Chris K. Y.; Kwong, C. K.; Chan, Kit Yan; Jiang, H.
2014-08-01
Affective design is an important aspect of new product development, especially for consumer products, to achieve a competitive edge in the marketplace. It can help companies to develop new products that can better satisfy the emotional needs of customers. However, product designers usually encounter difficulties in determining the optimal settings of the design attributes for affective design. In this article, a novel guided search genetic algorithm (GA) approach is proposed to determine the optimal design attribute settings for affective design. The optimization model formulated based on the proposed approach applied constraints and guided search operators, which were formulated based on mined rules, to guide the GA search and to achieve desirable solutions. A case study on the affective design of mobile phones was conducted to illustrate the proposed approach and validate its effectiveness. Validation tests were conducted, and the results show that the guided search GA approach outperforms the GA approach without the guided search strategy in terms of GA convergence and computational time. In addition, the guided search optimization model is capable of improving GA to generate good solutions for affective design.
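One way mined rules can guide GA operators, as described above, is to restrict the values an operator may assign to an attribute the rules cover. The sketch below shows a guided mutation in that spirit; the rule table, attribute meanings and value codes are hypothetical examples, not the paper's mined rules.

```python
import random

# Hypothetical rules mined from survey data: attribute index -> value codes
# that high-affect designs were found to use (illustrative only).
MINED_RULES = {0: {1, 2},   # e.g. body style restricted to codes 1 or 2
               3: {0}}      # e.g. keypad layout fixed to code 0

def guided_mutation(chromo, domain_sizes, rules, rng=random):
    """Mutate one gene; when a mined rule covers that gene, sample only from
    the rule's allowed values so the search stays in the promising region."""
    i = rng.randrange(len(chromo))
    allowed = sorted(rules.get(i, range(domain_sizes[i])))
    child = list(chromo)
    child[i] = rng.choice(allowed)
    return child
```

Unguided genes still explore their full domain, so the bias sharpens convergence without removing diversity elsewhere.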
NASA Astrophysics Data System (ADS)
Lin, Tian Ran; Kim, Eric; Tan, Andy C. C.
2013-04-01
A simple and effective down-sample algorithm, the Peak-Hold-Down-Sample (PHDS) algorithm, is developed in this paper to enable rapid and efficient data transfer in remote condition monitoring applications. The algorithm is particularly useful for high-frequency Condition Monitoring (CM) techniques and for low-speed machine applications, since the combination of a high sampling frequency and a low rotating speed generally leads to a large, unwieldy data size. The effectiveness of the algorithm was evaluated and tested on four data sets in the study. One set was extracted from the condition monitoring signal of a practical industry application. Another was acquired from a low-speed machine test rig in the laboratory. The other two sets were computer-simulated bearing defect signals having either a single or multiple bearing defects. The results show that the PHDS algorithm can substantially reduce the data size while preserving the critical bearing defect information for all the data sets used in this work, even when a large down-sample ratio is used (e.g., 500-times down-sampling). In contrast, down-sampling with the existing standard technique in signal processing eliminates useful and critical information, such as bearing defect frequencies, at the same down-sample ratio. Noise and artificial frequency components were also induced by the standard down-sample technique, thus limiting its usefulness for machine condition monitoring applications.
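A plausible reading of the peak-hold idea is: split the signal into blocks of `ratio` samples and keep only the sample of largest magnitude in each block, so short impacts that plain decimation would skip over survive the size reduction. The one-liner below sketches that reading (the paper's exact algorithm may differ in detail):

```python
def peak_hold_downsample(x, ratio):
    """Down-sample by keeping, per block of `ratio` samples, the sample with
    the largest magnitude (sign preserved). Unlike plain decimation, a short
    bearing-impact spike anywhere inside a block is retained."""
    return [max(x[i:i + ratio], key=abs) for i in range(0, len(x), ratio)]
```

With `ratio=500` this reproduces the 500-times reduction quoted in the abstract while still carrying every block's peak.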
An Intelligent Model for Pairs Trading Using Genetic Algorithms
Huang, Chien-Feng; Hsu, Chi-Jen; Chen, Chi-Chung; Chang, Bao Rong; Li, Chen-An
2015-01-01
Pairs trading is an important and challenging research area in computational finance, in which pairs of stocks are bought and sold in pair combinations for arbitrage opportunities. Traditional methods that solve this set of problems mostly rely on statistical methods such as regression. In contrast to the statistical approaches, recent advances in computational intelligence (CI) are leading to promising opportunities for solving problems in the financial applications more effectively. In this paper, we present a novel methodology for pairs trading using genetic algorithms (GA). Our results showed that the GA-based models are able to significantly outperform the benchmark and our proposed method is capable of generating robust models to tackle the dynamic characteristics in the financial application studied. Based upon the promising results obtained, we expect this GA-based method to advance the research in computational intelligence for finance and provide an effective solution to pairs trading for investment in practice. PMID:26339236
Quadruped Robot Locomotion using a Global Optimization Stochastic Algorithm
NASA Astrophysics Data System (ADS)
Oliveira, Miguel; Santos, Cristina; Costa, Lino; Ferreira, Manuel
2011-09-01
The problem of tuning the parameters of nonlinear dynamical systems such that the attained results are good ones is a relevant one. This article describes the development of a gait optimization system that achieves a fast but stable robot quadruped crawl gait. We combine bio-inspired Central Pattern Generators (CPGs) and Genetic Algorithms (GA). CPGs are modelled as autonomous differential equations that generate the necessary limb movement to perform the required walking gait. The GA finds parameterizations of the CPGs which attain good gaits in terms of speed, vibration and stability. Moreover, two constraint-handling techniques, based on tournament selection and a repairing mechanism, are embedded in the GA to solve the proposed constrained optimization problem and make the search more efficient. The experimental results, obtained on a simulated Aibo robot, demonstrate that our approach attains low vibration with high velocity and a wide stability margin for a quadruped slow crawl gait.
Genetic Algorithm for Optimization: Preprocessor and Algorithm
NASA Technical Reports Server (NTRS)
Sen, S. K.; Shaykhian, Gholam A.
2006-01-01
A genetic algorithm (GA), inspired by Darwin's theory of evolution and employed to solve optimization problems, unconstrained or constrained, uses an evolutionary process. A GA has several parameters, such as the population size, search space, crossover and mutation probabilities, and fitness criterion. These parameters are not universally known or determined a priori for all problems; depending on the problem at hand, they need to be chosen so that the resulting GA performs best. We present here a preprocessor that achieves just that, i.e., it determines, for a specified problem, the foregoing parameters so that the consequent GA is best suited for the problem. We also stress the need for such a preprocessor with respect to both the quality (error) and the cost (complexity) of producing the solution. The preprocessor includes, as its first step, making use of all available information, such as the nature and character of the function or system, the search space, physical or laboratory experimentation (if already done or available), and the physical environment. It also includes information that can be generated through any means, deterministic, nondeterministic, or graphical. Instead of attempting a solution straightaway through a GA without using knowledge of the character of the system, we can consciously do a much better job of producing a solution by using the information generated in this first step of the preprocessor. We therefore unstintingly advocate the use of a preprocessor for solving real-world optimization problems, including NP-complete ones, before applying the statistically most appropriate GA. We also include such a GA for unconstrained function optimization problems.
NASA Astrophysics Data System (ADS)
Zhou, Mandi; Shu, Jiong; Chen, Zhigang; Ji, Minhe
2012-11-01
Hyperspectral imagery has been widely used in terrain classification for its high resolution. Urban vegetation, an essential part of the urban ecosystem, can be difficult to discern due to the high similarity of spectral signatures among some land-cover classes. In this paper, we investigate a hybrid approach, the genetic-algorithm-tuned fuzzy support vector machine (GA-FSVM), and apply it to urban vegetation classification from aerial hyperspectral urban imagery. The approach adopts the genetic algorithm to optimize the parameters of the support vector machine, and employs the K-nearest neighbor algorithm to calculate the membership function for each fuzzy parameter, aiming to reduce the effects of isolated and noisy samples. Test data come from a push-broom hyperspectral imager (PHI) remote sensing image that partially covers a corner of the Shanghai World Exposition Park; the PHI is a hyperspectral sensor developed by the Shanghai Institute of Technical Physics. Experimental results show the GA-FSVM model achieves an overall accuracy of 71.2%, outperforming the maximum likelihood classifier with 49.4% accuracy and the artificial neural network method with 60.8% accuracy. This indicates that GA-FSVM is a promising model for vegetation classification from hyperspectral urban data and has a clear advantage in classification applications involving abundant mixed pixels and small-sample problems.
Dongarra, J.J.; Hewitt, T.
1985-08-01
This note describes some experiments on simple, dense linear algebra algorithms. These experiments show that the CRAY X-MP is capable of small-grain multitasking arising from standard implementations of LU and Cholesky decomposition. The implementation described here provides the "fastest" execution rate for LU decomposition, 718 MFLOPS for a matrix of order 1000.
Gurkiewicz, Meron; Korngreen, Alon
2007-01-01
The activity of trans-membrane proteins such as ion channels is the essence of neuronal transmission. The currently most accurate method for determining ion channel kinetic mechanisms is single-channel recording and analysis, yet the limitations and complexities of interpreting single-channel recordings discourage many physiologists from using them. Here we show that a genetic search algorithm in combination with a gradient descent algorithm can be used to fit whole-cell voltage-clamp data to kinetic models with a high degree of accuracy. Previously, ion channel stimulation traces were analyzed one at a time, the results of these analyses being combined to produce a picture of channel kinetics. Here the entire set of traces from all stimulation protocols is analyzed simultaneously. The algorithm was initially tested on simulated current traces produced by several Hodgkin-Huxley-like and Markov chain models of voltage-gated potassium and sodium channels. Currents were also produced by simulating levels of noise expected from actual patch recordings. Finally, the algorithm was used to find the kinetic parameters of several voltage-gated sodium and potassium channel models by matching its results to data recorded from layer 5 pyramidal neurons of the rat cortex in the nucleated outside-out patch configuration. The minimization scheme gives electrophysiologists a tool for reproducing and simulating voltage-gated ion channel kinetics at the cellular level. PMID:17784781
NASA Astrophysics Data System (ADS)
Chen, Junting; Lau, Vincent K. N.
2013-01-01
The max weighted queue (MWQ) control policy is a widely used cross-layer control policy that achieves queue stability and a reasonable delay performance. In most of the existing literature, it is assumed that the optimal MWQ policy can be obtained instantaneously at every time slot. However, this assumption may be unrealistic in time-varying wireless systems, especially when there is no closed-form MWQ solution and iterative algorithms have to be applied to obtain the optimal solution. This paper investigates the convergence behavior and the queue delay performance of conventional MWQ iterations in which the channel state information (CSI) and queue state information (QSI) change on a timescale similar to that of the algorithm iterations. Our results are established by studying the stochastic stability of an equivalent virtual stochastic dynamic system (VSDS), and an extended Foster-Lyapunov criterion is applied for the stability analysis. We derive a closed-form delay bound for the wireless network in terms of the CSI fading rate and the sensitivity of the MWQ policy to the CSI and QSI. Based on the equivalent VSDS, we propose a novel MWQ iterative algorithm with compensation to improve the tracking performance. We demonstrate that under some mild conditions, the proposed modified MWQ algorithm converges to the optimal MWQ control despite the time-varying CSI and QSI.
NASA Astrophysics Data System (ADS)
Chen, Fang; Chang, Honglong; Yuan, Weizheng; Wilcock, Reuben; Kraft, Michael
2012-10-01
This paper describes a novel multiobjective parameter optimization method based on a genetic algorithm (GA) for the design of a sixth-order continuous-time, force feedback band-pass sigma-delta modulator (BP-ΣΔM) interface for the sense mode of a MEMS gyroscope. The design procedure starts by deriving a parameterized Simulink model of the BP-ΣΔM gyroscope interface. The system parameters are then optimized by the GA. Consequently, the optimized design is tested for robustness by a Monte Carlo analysis to find a solution that is both optimal and robust. System level simulations result in a signal-to-noise ratio (SNR) larger than 90 dB in a bandwidth of 64 Hz with a 200° s-1 angular rate input signal; the noise floor is about -100 dBV Hz-1/2. The simulations are compared to measured data from a hardware implementation. For zero input rotation with the gyroscope operating at atmospheric pressure, the spectrum of the output bitstream shows an obvious band-pass noise shaping and a deep notch at the gyroscope resonant frequency. The noise floor of measured power spectral density (PSD) of the output bitstream agrees well with simulation of the optimized system level model. The bias stability, rate sensitivity and nonlinearity of the gyroscope controlled by an optimized BP-ΣΔM closed-loop interface are 34.15° h-1, 22.3 mV °-1 s-1, 98 ppm, respectively. This compares to a simple open-loop interface for which the corresponding values are 89° h-1, 14.3 mV °-1 s-1, 7600 ppm, and a nonoptimized BP-ΣΔM closed-loop interface with corresponding values of 60° h-1, 17 mV °-1 s-1, 200 ppm.
Bush, Keith; Cisler, Josh
2013-01-01
Neuroimaging methodology predominantly relies on the blood oxygenation level dependent (BOLD) signal. While the BOLD signal is a valid measure of neuronal activity, variance in fluctuations of the BOLD signal are not only due to fluctuations in neural activity. Thus, a remaining problem in neuroimaging analyses is developing methods that ensure specific inferences about neural activity that are not confounded by unrelated sources of noise in the BOLD signal. Here, we develop and test a new algorithm for performing semi-blind (i.e., no knowledge of stimulus timings) deconvolution of the BOLD signal that treats the neural event as an observable, but intermediate, probabilistic representation of the system’s state. We test and compare this new algorithm against three other recent deconvolution algorithms under varied levels of autocorrelated and Gaussian noise, hemodynamic response function (HRF) misspecification, and observation sampling rate (i.e., TR). Further, we compare the algorithms’ performance using two models to simulate BOLD data: a convolution of neural events with a known (or misspecified) HRF versus a biophysically accurate balloon model of hemodynamics. We also examine the algorithms’ performance on real task data. The results demonstrated good performance of all algorithms, though the new algorithm generally outperformed the others (3.0% improvement) under simulated resting state experimental conditions exhibiting multiple, realistic confounding factors (as well as 10.3% improvement on a real Stroop task). The simulations also demonstrate that the greatest negative influence on deconvolution accuracy is observation sampling rate. Practical and theoretical implications of these results for improving inferences about neural activity from fMRI BOLD signal are discussed. PMID:23602664
Lee, Ming-Lun; Yeh, Yu-Hsiang; Tu, Shang-Ju; Chen, P C; Lai, Wei-Chih; Sheu, Jinn-Kong
2015-04-01
Non-planar InGaN/GaN multiple quantum well (MQW) structures are grown on a GaN template with truncated hexagonal pyramids (THPs) featuring c-plane and r-plane surfaces. The THP array is formed by the regrowth of the GaN layer on a selective-area Si-implanted GaN template. Transmission electron microscopy shows that the InGaN/GaN epitaxial layers regrown on the THPs exhibit different growth rates and indium compositions of the InGaN layer between the c-plane and r-plane surfaces. Consequently, InGaN/GaN MQW light-emitting diodes grown on the GaN THP array emit multiple wavelengths approaching near white light. PMID:25968805
A Comparative Study of Probability Collectives Based Multi-agent Systems and Genetic Algorithms
NASA Technical Reports Server (NTRS)
Huang, Chien-Feng; Wolpert, David H.; Bieniawski, Stefan; Strauss, Charles E. M.
2005-01-01
We compare Genetic Algorithms (GAs) with Probability Collectives (PC), a new framework for distributed optimization and control. In contrast to GAs, PC-based methods do not update populations of solutions. Instead, they update an explicitly parameterized probability distribution p over the space of solutions. That updating of p arises as the optimization of a functional of p, chosen so that any p that optimizes it should be peaked about good solutions. The PC approach works in both continuous and discrete problems. It does not suffer from the resolution limitation of the finite-bit-length encoding of parameters into GA alleles, and it has deep connections with both game theory and statistical physics. We review the PC approach using its motivation as the information-theoretic formulation of bounded rationality for multi-agent systems. It is then compared with GAs on a diverse set of problems. To handle high-dimensional surfaces, in the PC method investigated here p is restricted to a product distribution, with each distribution in that product controlled by a separate agent. The test functions were selected for their difficulty under either traditional gradient descent or genetic algorithms. On those functions the PC-based approach significantly outperforms traditional GAs in rate of descent, trapping in false minima, and long-term optimization.
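The product-distribution idea, one independent distribution per variable, each notionally controlled by an "agent", can be illustrated with a cross-entropy-style stand-in; this sketch is not the published PC functional optimization, and all parameter values are illustrative assumptions:

```python
import numpy as np

def product_distribution_search(fitness, n_vars=8, iters=40, samples=50,
                                lr=0.3, rng=None):
    """Search over binary strings by updating a product of Bernoulli
    distributions rather than a population of solutions.

    Each coordinate keeps its own probability p[i] of being 1; after
    sampling, the probabilities are nudged toward the better-scoring
    samples (a cross-entropy-style update standing in for the PC
    functional described in the abstract)."""
    rng = np.random.default_rng() if rng is None else rng
    p = np.full(n_vars, 0.5)                           # one distribution per "agent"
    for _ in range(iters):
        x = (rng.random((samples, n_vars)) < p).astype(int)
        scores = np.array([fitness(row) for row in x])
        elite = x[scores >= np.quantile(scores, 0.8)]  # keep the top ~20%
        p = (1 - lr) * p + lr * elite.mean(axis=0)     # move p toward the elites
    return (p > 0.5).astype(int)

# Example on OneMax: maximize the number of ones
best = product_distribution_search(lambda v: v.sum(),
                                   rng=np.random.default_rng(0))
```

Note how the state of the search is the parameter vector p, not a population; this is the structural difference from a GA that the abstract emphasizes.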
Allocating Railway Platforms Using A Genetic Algorithm
NASA Astrophysics Data System (ADS)
Clarke, M.; Hinde, C. J.; Withall, M. S.; Jackson, T. W.; Phillips, I. W.; Brown, S.; Watson, R.
This paper describes an approach to automating railway station platform allocation. The system uses a Genetic Algorithm (GA) to find how a station’s resources should be allocated. Real data is used, which needs to be transformed to be suitable for the automated system. Successful or ‘fit’ allocations provide a solution that meets the needs of the station schedule, including platform re-occupation and various other constraints. The system associates the train data to derive the station requirements, and the Genetic Algorithm is used to derive platform allocations. Finally, the system may be extended to take into account how parameters external to the station affect how an allocation should be applied. The system successfully allocates around 1000 trains to platforms in around 30 seconds, requiring a genome of around 1000 genes to achieve this.
Bell-Curve Based Evolutionary Optimization Algorithm
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.; Laba, K.; Kincaid, R.
1998-01-01
The paper presents an optimization algorithm that falls in the category of genetic, or evolutionary, algorithms. While bit exchange is the basis of most Genetic Algorithms (GA) in research and applications in America, some alternatives, also in the category of evolutionary algorithms but using a direct, geometrical approach, have gained popularity in Europe and Asia. The Bell-Curve Based Evolutionary Algorithm (BCB) is in this alternative category and is distinguished by the use of a combination of n-dimensional geometry and the normal distribution, the bell curve, in the generation of the offspring. The tool for creating a child is a geometrical construct comprising a line connecting two parents and a weighted point on that line. The point that defines the child deviates from the weighted point in two directions, parallel and orthogonal to the connecting line, the deviation in each direction obeying a probabilistic distribution. Tests showed satisfactory performance of BCB. Its principal advantage is its controllability via the normal distribution parameters and the geometrical construct variables.
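The geometrical child-generation step described above can be sketched as follows; the uniform weight and the sigma values are illustrative assumptions, not the published BCB settings:

```python
import numpy as np

def bcb_child(p1, p2, sigma_par=0.1, sigma_orth=0.1, rng=None):
    """Generate one offspring from two parents, bell-curve style.

    Pick a weighted point on the line joining the parents, then perturb
    it with normally distributed deviations parallel and orthogonal to
    that line, scaled by the parent distance."""
    rng = np.random.default_rng() if rng is None else rng
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    w = rng.uniform()                       # weight of the point on the line
    base = (1 - w) * p1 + w * p2
    d = p2 - p1
    dist = np.linalg.norm(d)
    if dist == 0:
        return base                         # identical parents: no direction defined
    u = d / dist                            # unit vector along the parent line
    r = rng.standard_normal(len(u))         # random direction, then remove the
    r -= r.dot(u) * u                       # component along u to get an
    r_norm = np.linalg.norm(r)              # orthogonal direction
    orth = r / r_norm if r_norm > 0 else np.zeros_like(u)
    return (base + rng.normal(0.0, sigma_par) * dist * u
                 + rng.normal(0.0, sigma_orth) * dist * orth)
```

With both sigmas set to zero the child lies exactly on the segment between the parents, which makes the role of the two bell-curve deviations easy to see.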
An investigation of messy genetic algorithms
NASA Technical Reports Server (NTRS)
Goldberg, David E.; Deb, Kalyanmoy; Korb, Bradley
1990-01-01
Genetic algorithms (GAs) are search procedures based on the mechanics of natural selection and natural genetics. They combine the use of string codings, or artificial chromosomes, and populations with the selective and juxtapositional power of reproduction and recombination to form a surprisingly powerful search heuristic for many problems. Despite their empirical success, there has been a long-standing objection to the use of GAs in arbitrarily difficult problems, and a new approach was launched in response. Results on a 30-bit, order-three deception problem were obtained using a new type of genetic algorithm called a messy genetic algorithm (mGA). Messy genetic algorithms combine the use of variable-length strings, a two-phase selection scheme, and messy genetic operators to solve the fixed-coding problem of standard simple GAs. The results of a study of mGAs on problems with nonuniform subfunction scale and size are presented. The mGA approach is summarized, covering both its operation and the theory of its use. Experiments on problems of varying scale, varying building-block size, and combined varying scale and size are presented.
Use of Algorithm of Changes for Optimal Design of Heat Exchanger
NASA Astrophysics Data System (ADS)
Tam, S. C.; Tam, H. K.; Chio, C. H.; Tam, L. M.
2010-05-01
For economic reasons, the optimal design of heat exchangers is required. Heat exchanger design is usually based on an iterative process involving the design conditions, equipment geometries, and the heat transfer and friction factor correlations. Using the traditional iterative method, many trials are needed to satisfy the compromise between heat exchange performance and cost. The process is cumbersome, and the optimal design often depends on the design engineer's experience. Therefore, in recent studies, many researchers, reviewed in [1], applied the genetic algorithm (GA) [2] to designing the heat exchanger, with results that outperformed the traditional method. In this study, an alternative approach, the algorithm of changes, is proposed for the optimal design of a shell-tube heat exchanger [3]. This new method, based on the I Ching, was developed originally by the author. In the algorithm, the hexagram operations of the I Ching have been generalized to the binary string case, and an iterative procedure that imitates I Ching inference is defined. Following [3], the shell inside diameter, tube outside diameter, and baffle spacing were treated as the design (optimized) variables, and the cost of the heat exchanger was taken as the objective function. Through the case study, the results show that the algorithm of changes is comparable to the GA method: both can find the optimal solution in a short time. However, since it does not interchange information between binary strings, the algorithm of changes has an advantage over the GA in parallel computation.
NASA Astrophysics Data System (ADS)
Timoshenko, Janis; Anspoks, Andris; Kalinko, Aleksandr; Kuzmin, Alexei
2016-05-01
Extended x-ray absorption fine structure (EXAFS) spectroscopy combined with reverse Monte Carlo (RMC) and evolutionary algorithm (EA) modelling is used to advance the understanding of the local structure and lattice dynamics of copper nitride (Cu3N). The RMC/EA-EXAFS method provides a possibility to probe correlations in the motion of neighboring atoms and allows us to analyze the influence of anisotropic motion of copper atoms in Cu3N.
Thermoluminescence curves simulation using genetic algorithm with factorial design
NASA Astrophysics Data System (ADS)
Popko, E. A.; Weinstein, I. A.
2016-05-01
The evolutionary approach is an effective optimization tool for the numerical analysis of thermoluminescence (TL) processes, used to assess the microparameters of kinetic models and to determine their effects on the shape of TL peaks. In this paper, a procedure for tuning a genetic algorithm (GA) is presented. The approach is based on a multifactorial experiment and allows choosing the intrinsic mechanisms of the evolutionary operators that provide the most efficient algorithm performance. The proposed method is tested on the “one trap-one recombination center” (OTOR) model as an example, and its advantages for approximating experimental TL curves are shown.
Alternating evolutionary pressure in a genetic algorithm facilitates protein model selection
Offman, Marc N; Tournier, Alexander L; Bates, Paul A
2008-01-01
Background Automatic protein modelling pipelines are becoming ever more accurate; this has come hand in hand with an increasingly complicated interplay between all components involved. Nevertheless, there are still potential improvements to be made in template selection, refinement and protein model selection. Results In the context of an automatic modelling pipeline, we analysed each step separately, revealing several non-intuitive trends and explored a new strategy for protein conformation sampling using Genetic Algorithms (GA). We apply the concept of alternating evolutionary pressure (AEP), i.e. intermediate rounds within the GA runs where unrestrained, linear growth of the model populations is allowed. Conclusion This approach improves the overall performance of the GA by allowing models to overcome local energy barriers. AEP enabled the selection of the best models in 40% of all targets; compared to 25% for a normal GA. PMID:18673557
NASA Astrophysics Data System (ADS)
Azam, Sikander; Khan, Saleem Ayaz; Goumri-Said, Souraya
2016-01-01
Metal chalcogenide semiconductors have a significant role in the development of materials for energy and nanotechnology applications. First-principles calculations were applied to CsAgGa2Se4 to investigate its optoelectronic structure and bonding characteristics, using the full-potential linear augmented plane wave method within the framework of the generalized gradient approximation (GGA) and the Engel-Vosko GGA functional (EV-GGA). The band structure from EV-GGA shows that the valence band maximum and conduction band minimum are situated at Γ, with a band gap value of 2.15 eV. A mixture of orbitals from the Ag 4p6/4d10, Se 3d10, Ga 4p1, Se 4p4, and Ga 4s2 states plays the primary role in producing the semiconducting character of the present chalcogenide. The charge density iso-surface shows strong covalent bonding between the Ag-Se and Ga-Se atoms. The imaginary part of the dielectric constant reveals that the threshold (first optical critical point) energy of the dielectric function occurs at 2.15 eV. With a direct, large band gap and a large absorption coefficient, CsAgGa2Se4 may be considered a potential material for photovoltaic applications.
Ru, Xiao; Song, Ce; Lin, Zijing
2016-05-15
The genetic algorithm (GA) is an intelligent approach for finding minima in a highly dimensional parametric space. However, the success of GA searches for low energy conformations of biomolecules is rather limited so far. Herein an improved GA scheme is proposed for the conformational search of oligopeptides. A systematic analysis of the backbone dihedral angles of conformations of amino acids (AAs) and dipeptides is performed. The structural information is used to design a new encoding scheme to improve the efficiency of GA search. Local geometry optimizations based on the energy calculations by the density functional theory are employed to safeguard the quality and reliability of the GA structures. The GA scheme is applied to the conformational searches of Lys, Arg, Met-Gly, Lys-Gly, and Phe-Gly-Gly representative of AAs, dipeptides, and tripeptides with complicated side chains. Comparison with the best literature results shows that the new GA method is both highly efficient and reliable by providing the most complete set of the low energy conformations. Moreover, the computational cost of the GA method increases only moderately with the complexity of the molecule. The GA scheme is valuable for the study of the conformations and properties of oligopeptides. © 2016 Wiley Periodicals, Inc. PMID:26833761
A Methodology for the Hybridization Based in Active Components: The Case of cGA and Scatter Search.
Villagra, Andrea; Alba, Enrique; Leguizamón, Guillermo
2016-01-01
This work presents the results of a new methodology for hybridizing metaheuristics. By first locating the active components (parts) of one algorithm and then inserting them into a second one, we can build efficient and accurate optimization, search, and learning algorithms. This gives a concrete way of constructing new techniques that contrasts with the widespread ad hoc way of hybridizing. In this paper, the enhanced algorithm is a Cellular Genetic Algorithm (cGA), which has been successfully used in the past to find solutions to hard optimization problems. In order to extend and corroborate the use of active components as an emerging hybridization methodology, we propose here the use of active components taken from Scatter Search (SS) to improve the cGA. The results obtained over a varied set of benchmarks are highly satisfactory in efficacy and efficiency when compared with a standard cGA. Moreover, the proposed hybrid approach (i.e., cGA+SS) shows encouraging results with regard to earlier applications of our methodology. PMID:27403153
Automatic 3D image registration using voxel similarity measurements based on a genetic algorithm
NASA Astrophysics Data System (ADS)
Huang, Wei; Sullivan, John M., Jr.; Kulkarni, Praveen; Murugavel, Murali
2006-03-01
An automatic 3D non-rigid body registration system based upon the genetic algorithm (GA) process is presented. The system has been successfully applied to 2D and 3D situations using both rigid-body and affine transformations. Conventional optimization techniques and gradient search strategies generally require a good initial start location; the GA approach avoids the local minima/maxima traps of conventional optimization techniques. Based on the principles of Darwinian natural selection (survival of the fittest), the genetic algorithm has two basic steps: 1. Randomly generate an initial population. 2. Repeatedly apply the natural selection operation until a termination measure is satisfied. The natural selection process selects individuals based on their fitness to participate in the genetic operations, and it creates new individuals by inheritance from both parents, genetic recombination (crossover) and mutation. Once the termination criteria are satisfied, the optimum is selected from the population. The algorithm was applied to 2D and 3D magnetic resonance images (MRI). It does not require any preprocessing such as thresholding, smoothing, segmentation, or definition of base points or edges. To evaluate the performance of the GA registration, the results were compared with the results of the Automatic Image Registration technique (AIR) and with manual registration, which was used as the gold standard. Results showed that our GA implementation is a robust algorithm and gives results very close to the gold standard. A pre-cropping strategy is also discussed as an efficient preprocessing step to enhance the registration accuracy.
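The two basic steps named in the abstract (random initialization, then repeated selection with crossover and mutation) can be sketched as a minimal bit-string GA; all parameter values here are illustrative assumptions, not those of the registration system:

```python
import random

def simple_ga(fitness, n_bits=16, pop_size=30, generations=60,
              p_cross=0.8, p_mut=0.02, rng=None):
    """Minimal GA: random initial population, then repeated tournament
    selection, one-point crossover and bit-flip mutation until the
    generation budget (the termination measure here) is exhausted."""
    rng = rng or random.Random()
    # Step 1: randomly generate an initial population of bit strings.
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    # Step 2: repeatedly apply natural selection and genetic operators.
    for _ in range(generations):
        def pick():                          # binary tournament selection
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            c1, c2 = pick()[:], pick()[:]    # copy both parents
            if rng.random() < p_cross:       # one-point crossover
                cut = rng.randrange(1, n_bits)
                c1[cut:], c2[cut:] = c2[cut:], c1[cut:]
            for c in (c1, c2):               # bit-flip mutation
                for i in range(n_bits):
                    if rng.random() < p_mut:
                        c[i] ^= 1
                nxt.append(c)
        pop = nxt[:pop_size]
    return max(pop, key=fitness)             # the optimum found in the population

# Example on OneMax: maximize the number of ones in the string
best = simple_ga(sum, rng=random.Random(1))
```

In the registration setting the bit string would encode transformation parameters and the fitness would be a voxel similarity measure; here OneMax stands in so the skeleton stays self-contained.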
NASA Astrophysics Data System (ADS)
Mousavi, Seyed Hosein; Nazemi, Ali; Hafezalkotob, Ashkan
2015-12-01
With the formation of competitive electricity markets around the world, the optimization of bidding strategies has become one of the main topics in studies of market design. Market design is challenged by multiple objectives that need to be satisfied. The solution of such multi-objective problems is often searched over the combined strategy space, and thus requires the simultaneous optimization of multiple parameters. The problem is formulated analytically using the Nash equilibrium concept for games composed of large numbers of players having discrete and large strategy spaces. The solution methodology is based on a characterization of Nash equilibrium in terms of minima of a function and relies on a metaheuristic optimization approach to find these minima. This paper presents several metaheuristic algorithms, namely the genetic algorithm (GA), simulated annealing (SA) and a hybrid simulated annealing genetic algorithm (HSAGA), to simulate how generators bid in the spot electricity market so as to maximize their profit given the other generators' strategies, and compares their results. As both GA and SA are generic search methods, HSAGA is also a generic search method. The model, based on actual data, is implemented for a peak hour of Tehran's wholesale spot market in 2012. The simulations show that GA outperforms SA and HSAGA in computing time, number of function evaluations and computing stability, and that the Nash equilibria calculated by GA vary less from one another than those of the other algorithms.
Edge detection based on genetic algorithm and sobel operator in image
NASA Astrophysics Data System (ADS)
Tong, Xin; Ren, Aifeng; Zhang, Haifeng; Ruan, Hang; Luo, Ming
2011-10-01
The genetic algorithm (GA) is widely used for optimization problems, applying techniques inspired by natural evolution. In this paper we present a new edge detection technique based on the GA and the Sobel operator. The Sobel edge detector, built in DSP Builder, is first used to determine the boundaries of objects within an image. A GA implemented with SOPC Builder then searches for the best threshold for the image processing. Finally, the performance of the new best-threshold edge detection technique, implemented in DSP Builder and Quartus II software, is compared both qualitatively and quantitatively with the single Sobel operator. The new edge detection technique is shown to perform very well in terms of robustness to noise, edge search capability and quality of the final edge image.
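A minimal software analogue of this flow, Sobel gradient magnitude followed by a GA search for a binarization threshold, is sketched below. The hardware tool chain (DSP Builder, SOPC Builder, Quartus II) is not reproduced, and the Otsu-style between-class-variance fitness, truncation selection and all names here are illustrative assumptions rather than the paper's method.

```python
import random

def sobel_magnitude(img):
    """Approximate gradient magnitude |Gx| + |Gy| for a 2-D list image."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            out[y][x] = abs(gx) + abs(gy)
    return out

def ga_threshold(grad, pop_size=20, generations=40, seed=1):
    """GA search for a binarization threshold maximizing between-class variance."""
    vals = [v for row in grad for v in row]
    rng = random.Random(seed)

    def fitness(t):  # Otsu-style between-class variance at threshold t
        lo = [v for v in vals if v < t]
        hi = [v for v in vals if v >= t]
        if not lo or not hi:
            return 0.0
        w0, w1 = len(lo) / len(vals), len(hi) / len(vals)
        m0, m1 = sum(lo) / len(lo), sum(hi) / len(hi)
        return w0 * w1 * (m0 - m1) ** 2

    pop = [rng.uniform(min(vals), max(vals)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:pop_size // 2]                         # truncation selection
        pop = elite + [t + rng.gauss(0, 2) for t in elite]  # mutate elite copies
    return max(pop, key=fitness)

# Toy image: dark left half, bright right half -> one strong vertical edge.
img = [[0] * 5 + [100] * 5 for _ in range(10)]
grad = sobel_magnitude(img)
t = ga_threshold(grad)
edges = [[1 if v >= t else 0 for v in row] for row in grad]
```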
Edge detection in medical images using a genetic algorithm.
Gudmundsson, M; El-Kwae, E A; Kabuka, M R
1998-06-01
An algorithm is developed that detects well-localized, unfragmented, thin edges in medical images based on optimization of edge configurations using a genetic algorithm (GA). Several enhancements were added to improve the performance of the algorithm over a traditional GA. The edge map is split into connected subregions to reduce the solution space and simplify the problem. The edge-map is then optimized in parallel using incorporated genetic operators that perform transforms on edge structures. Adaptation is used to control operator probabilities based on their participation. The GA was compared to the simulated annealing (SA) approach using ideal and actual medical images from different modalities including magnetic resonance imaging (MRI), computed tomography (CT), and ultrasound. Quantitative comparisons were provided based on the Pratt figure of merit and on the cost-function minimization. The detected edges were thin, continuous, and well localized. Most of the basic edge features were detected. Results for different medical image modalities are promising and encourage further investigation to improve the accuracy and experiment with different cost functions and genetic operators. PMID:9735910
Maher, Janae L; Mahabir, Raman C; Roehl, Kendall R
2015-08-01
The number one cause of death in American women is heart disease. Studies have clearly shown the superiority of internal mammary artery (IMA) grafts for coronary revascularization over other conduits or intracoronary techniques. Our goal was to design an algorithm for recipient vessel selection in patients undergoing free tissue transfer breast reconstruction. A review of the literature was performed to identify potential evidence to contribute to a best-practice guideline. The lack of high-level evidence led us to create a guideline based on a workgroup consensus, expert opinion, cadaveric studies, and case reports. As we operate on older patient populations, the need for IMA use for coronary artery bypass grafting (CABG) after autologous breast reconstruction may arise more frequently. We discuss the current literature regarding recipient vessel choices and level of recipient vessel harvest in free flap breast reconstruction to help continually evolve the practices of our specialty to the potential future needs of our patients. We also present a best-practice decision algorithm for vessel selection and harvest, as well as a sample case of CABG using the left IMA 35 days after previous autologous breast reconstruction using the left IMA. As the number of patients we operate on who may later require their IMA for CABG increases, so too must our understanding of the implications of our selection of recipient vessels for free autologous breast reconstruction. PMID:26165568
Kumar, Atul; Sharmila, D Jeya Sundara
2016-06-01
Despite advances in gene expression microarray technology, the main hindrance in analyzing microarray data is the limited number of samples compared to the large number of genes, a major impediment to revealing actual gene functionality and valuable information from the data. Analyzing gene expression data can indicate the genes that are differentially expressed in diseased tissue. Since most of these genes play no part in causing the disease of interest, identification of the disease-causing genes can reveal not just the cause of the disease but also its pathogenic mechanism. Many available gene selection methods can remove irrelevant genes, but most are not effective at removing redundancy among genes in microarray data, which increases the computational cost and decreases the classification accuracy. Combining the gene expression data with gene ontology information can help in determining this redundancy, which can then be removed using the algorithm described in this work. The gene list obtained after these sequential steps of the algorithm can be analyzed further to obtain the most deterministic genes responsible for type 2 diabetes. PMID:26289404
NASA Astrophysics Data System (ADS)
Boninsegni, M.; Prokof'Ev, N. V.; Svistunov, B. V.
2006-09-01
A detailed description is provided of a new worm algorithm, enabling the accurate computation of thermodynamic properties of quantum many-body systems in continuous space, at finite temperature. The algorithm is formulated within the general path integral Monte Carlo (PIMC) scheme, but also allows one to perform quantum simulations in the grand canonical ensemble, as well as to compute off-diagonal imaginary-time correlation functions, such as the Matsubara Green function, simultaneously with diagonal observables. Another important innovation consists of the expansion of the attractive part of the pairwise potential energy into elementary (diagrammatic) contributions, which are then statistically sampled. This affords a complete microscopic account of the long-range part of the potential energy, while keeping the computational complexity of all updates independent of the size of the simulated system. The computational scheme allows for efficient calculations of the superfluid fraction and off-diagonal correlations in space-time, for system sizes which are orders of magnitude larger than those accessible to conventional PIMC. We present illustrative results for the superfluid transition in bulk liquid He4 in two and three dimensions, as well as the calculation of the chemical potential of hcp He4 .
Roy, Supriyo; Sahoo, Prasanta
2014-01-01
This paper aims to present an experimental investigation for optimum tribological behavior (wear depth and coefficient of friction) of electroless Ni-P-Cu coatings based on four process parameters using artificial bee colony algorithm. Experiments are carried out by utilizing the combination of three coating process parameters, namely, nickel sulphate, sodium hypophosphite, and copper sulphate, and the fourth parameter is postdeposition heat treatment temperature. The design of experiment is based on the Taguchi L27 experimental design. After coating, measurement of wear and coefficient of friction of each heat-treated sample is done using a multitribotester apparatus with block-on-roller arrangement. Both friction and wear are found to increase with increase of source of nickel concentration and decrease with increase of source of copper concentration. Artificial bee colony algorithm is successfully employed to optimize the multiresponse objective function for both wear depth and coefficient of friction. It is found that, within the operating range, a lower value of nickel concentration, medium value of hypophosphite concentration, higher value of copper concentration, and higher value of heat treatment temperature are suitable for having minimum wear and coefficient of friction. The surface morphology, phase transformation behavior, and composition of coatings are also studied with the help of scanning electron microscopy, X-ray diffraction analysis, and energy dispersed X-ray analysis, respectively. PMID:27382630
Bouc-Wen model parameter identification for a MR fluid damper using computationally efficient GA.
Kwok, N M; Ha, Q P; Nguyen, M T; Li, J; Samali, B
2007-04-01
A non-symmetrical Bouc-Wen model is proposed in this paper for magnetorheological (MR) fluid dampers. The model considers the effect of non-symmetrical hysteresis, which was not taken into account in the original Bouc-Wen model. The model parameters are identified with a Genetic Algorithm (GA), exploiting its flexibility in the identification of complex dynamics. The computational efficiency of the proposed GA is improved by absorbing the selection stage into the crossover and mutation operations. Crossover and mutation are also made adaptive to the fitness values, so that their probabilities need not be user-specified. Instead of using a sufficiently large number of generations or a pre-determined fitness value, the algorithm termination criterion is formulated on the basis of a statistical hypothesis test, thus enhancing the performance of the parameter identification. Experimental test data of the damper displacement and force are used to verify the proposed approach, with satisfactory parameter identification results. PMID:17349644
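Making crossover and mutation probabilities adaptive to fitness, so that they need not be user-specified, is commonly done with the Srinivas-Patnaik style scaling sketched below; the paper's exact formulas may differ, so treat this as an assumed illustration, not the authors' scheme.

```python
def adaptive_rates(f_parent, f_avg, f_max, pc_max=0.9, pm_max=0.1):
    """Fitness-adaptive crossover/mutation probabilities (Srinivas-Patnaik
    style): above-average individuals are disturbed less, preserving good
    solutions, while below-average ones get the maximum rates to promote
    exploration. Assumes a maximization problem."""
    if f_max == f_avg:                  # degenerate population: flat fitness
        return pc_max, pm_max
    if f_parent >= f_avg:               # scale rates down linearly toward f_max
        scale = (f_max - f_parent) / (f_max - f_avg)
        return pc_max * scale, pm_max * scale
    return pc_max, pm_max               # below average: full disruption

pc, pm = adaptive_rates(f_parent=8.0, f_avg=5.0, f_max=10.0)
```

A parent whose fitness equals the population maximum gets zero crossover and mutation probability and so survives unchanged, which is how this scheme builds in implicit elitism.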
Messy genetic algorithms: Recent developments
Kargupta, H.
1996-09-01
Messy genetic algorithms define a rare class of algorithms that realize the need for detecting appropriate relations among members of the search domain in optimization. This paper reviews earlier work in messy genetic algorithms and describes some recent developments. It also describes the gene expression messy GA (GEMGA), an O(Λ^κ(ℓ² + κ)) sample complexity algorithm for the class of order-κ delineable problems (problems that can be solved by considering no higher than order-κ relations) of size ℓ and alphabet size Λ. Experimental results are presented to demonstrate the scalability of the GEMGA.
Applying genetic algorithms to the state assignment problem: a case study
NASA Astrophysics Data System (ADS)
Amaral, Jose N.; Tumer, Kagan; Ghosh, Joydeep
1992-08-01
Finding the best state assignment for implementing a synchronous sequential circuit is important for reducing silicon area or chip count in many digital designs. This State Assignment Problem (SAP) belongs to a broader class of combinatorial optimization problems than the well studied traveling salesman problem, which can be formulated as a special case of SAP. The search for a good solution is considerably more involved for the SAP than for the traveling salesman problem due to a much larger number of equivalent solutions, and no effective heuristic has been found so far to cater to all types of circuits. In this paper, a matrix representation is used as the genotype for a Genetic Algorithm (GA) approach to this problem. A novel selection mechanism is introduced, and suitable genetic operators for crossover and mutation are constructed. The properties of each of these elements of the GA are discussed and an analysis of the parameters that influence the algorithm is given. A canonical form for a solution is defined to significantly reduce the search space and the number of local minima. Simulation results for scalable examples show that the GA approach yields results comparable to those obtained using competing heuristics. Although a GA does not seem to be the tool of choice for use in a sequential von Neumann machine, the results obtained are good enough to encourage further research on distributed processing GA machines that can exploit its intrinsic parallelism.
Artificial immune algorithm for multi-depot vehicle scheduling problems
NASA Astrophysics Data System (ADS)
Wu, Zhongyi; Wang, Donggen; Xia, Linyuan; Chen, Xiaoling
2008-10-01
In the fast-developing logistics and supply chain management fields, one of the key problems in decision support systems is how to arrange, for many customers and suppliers, the supplier-to-customer assignment and produce a detailed supply schedule under a set of constraints. Solutions to the multi-depot vehicle scheduling problem (MDVSP) help in solving this problem in transportation applications. The objective of the MDVSP is to minimize the total distance covered by all vehicles, which can be considered as delivery cost or time consumption. The MDVSP is a nondeterministic polynomial-time hard (NP-hard) problem which cannot be solved to optimality within polynomially bounded computational time. Many different approaches have been developed to tackle the MDVSP, such as exact algorithms (EA), one-stage approaches (OSA), two-phase heuristic methods (TPHM), tabu search algorithms (TSA), genetic algorithms (GA) and hierarchical multiplex structures (HIMS). Most of the methods mentioned above are time consuming and have a high risk of converging to a local optimum. In this paper, a new search algorithm is proposed to solve the MDVSP based on Artificial Immune Systems (AIS), which are inspired by vertebrate immune systems. The proposed AIS algorithm is tested with 30 customers and 6 vehicles located in 3 depots. Experimental results show that the artificial immune system algorithm is an effective and efficient method for solving MDVSP problems.
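The clonal-selection principle behind such AIS algorithms (better "antibodies" receive more clones, and clones are hypermutated more strongly the worse their parent ranks) can be sketched as follows. The real-vector encoding, clone counts, mutation schedule and the toy 2-D cost are illustrative assumptions, not the paper's MDVSP formulation.

```python
import random

def clonal_selection(cost, candidates, generations=50, n_clones=5, seed=2):
    """Minimal clonal-selection loop: antibodies with lower cost get more
    clones; clones are Gaussian-hypermutated with a step that grows with
    the parent's rank (worse parents explore more)."""
    rng = random.Random(seed)
    pop = [list(c) for c in candidates]
    for _ in range(generations):
        pop.sort(key=cost)                        # affinity ranking
        clones = []
        for rank, ab in enumerate(pop):
            n = max(1, n_clones - rank)           # more clones for the best
            step = 0.1 * (rank + 1)               # stronger mutation when worse
            for _ in range(n):
                clones.append([x + rng.gauss(0, step) for x in ab])
        pop = sorted(pop + clones, key=cost)[:len(candidates)]  # reselect best
    return pop[0]

# Toy depot-placement cost: squared distance of a 2-D location from (3, -1).
best = clonal_selection(lambda p: (p[0] - 3) ** 2 + (p[1] + 1) ** 2,
                        [[0.0, 0.0], [5.0, 5.0], [-2.0, 1.0]])
```

For the MDVSP itself the antibody would encode a customer-to-vehicle/depot assignment and the cost would be total route distance, but the clone-mutate-reselect cycle is unchanged.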
NASA Astrophysics Data System (ADS)
Chapman, Alexander Lloyd
Recently, a sound source identification technique called CRAFT was developed as an advance in the state of the art in inverse noise problems. It addressed some limitations associated with nearfield acoustic holography and a few of the issues with inverse boundary element method. This work centers on two critical issues associated with the CRAFT algorithm. Although CRAFT employs the complete general solution associated with the Helmholtz equation, the approach taken to derive those equations results in computational inefficiency when implemented numerically. In this work, a mathematical approach to derivation of the basis equations results in a doubling in efficiency. This formulation of CRAFT is termed general Helmholtz equation, least-squares method (GEN-HELS). Additionally, the numerous singular points present in the gradient of the basis functions are shown here to resolve to finite limits. As a realistic test case, a diesel engine surface pressure and velocity are reconstructed to show the increase in efficiency from CRAFT to GEN-HELS. Keywords: Inverse Numerical Acoustics, Acoustic Holography, Helmholtz Equation, HELS Method, CRAFT Algorithm.
GaAsP solar cells on GaP/Si with low threading dislocation density
NASA Astrophysics Data System (ADS)
Yaung, Kevin Nay; Vaisman, Michelle; Lang, Jordan; Lee, Minjoo Larry
2016-07-01
GaAsP on Si tandem cells represent a promising path towards achieving high efficiency while leveraging the Si solar knowledge base and low-cost infrastructure. However, dislocation densities exceeding 10⁸ cm⁻² in GaAsP cells on Si have historically hampered the efficiency of such approaches. Here, we report the achievement of low threading dislocation density values of 4.0-4.6 × 10⁶ cm⁻² in GaAsP solar cells on GaP/Si, comparable with more established metamorphic solar cells on GaAs. Our GaAsP solar cells on GaP/Si exhibit high open-circuit voltage and quantum efficiency, allowing them to significantly surpass the power conversion efficiency of previous devices. The results in this work show a realistic path towards dual-junction GaAsP on Si cells with efficiencies exceeding 30%.
Improved interpretation of satellite altimeter data using genetic algorithms
NASA Technical Reports Server (NTRS)
Messa, Kenneth; Lybanon, Matthew
1992-01-01
Genetic algorithms (GA) are optimization techniques that are based on the mechanics of evolution and natural selection. They take advantage of the power of cumulative selection, in which successive incremental improvements in a solution structure become the basis for continued development. A GA is an iterative procedure that maintains a 'population' of 'organisms' (candidate solutions). Through successive 'generations' (iterations) the population as a whole improves, in simulation of Darwin's 'survival of the fittest'. GAs have been shown to be successful where noise significantly reduces the ability of other search techniques to work effectively. Satellite altimetry provides useful information about oceanographic phenomena. It provides rapid global coverage of the oceans and is not as severely hampered by cloud cover as infrared imagery. Despite these and other benefits, several factors lead to significant difficulty in interpretation. The GA approach to the improved interpretation of satellite data involves the representation of the ocean surface model as a string of parameters or coefficients from the model. The GA searches, in parallel, a population of such representations (organisms) to obtain the individual that is best suited to 'survive', that is, the fittest as measured with respect to some 'fitness' function. The fittest organism is the one that best represents the ocean surface model with respect to the altimeter data.
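Representing the model as a string of coefficients and scoring each organism against the data can be sketched with a real-coded GA fitting a line to noisy samples. The blend crossover, Gaussian mutation and toy linear "surface model" are assumptions for illustration, not the authors' ocean-surface model or altimeter data.

```python
import random

def ga_fit(xs, ys, model, n_params, pop_size=40, generations=80, seed=3):
    """Real-coded GA: each organism is a parameter string; fitness is the
    negative sum of squared residuals against the observations."""
    rng = random.Random(seed)

    def fitness(p):
        return -sum((model(p, x) - y) ** 2 for x, y in zip(xs, ys))

    pop = [[rng.uniform(-5, 5) for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]              # keep the fitter half
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            child = [(ai + bi) / 2 + rng.gauss(0, 0.1)   # blend + mutate
                     for ai, bi in zip(a, b)]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Noisy samples of y = 2x + 1; the "surface model" here is a line a*x + b.
data_rng = random.Random(0)
xs = [i / 10 for i in range(20)]
ys = [2 * x + 1 + data_rng.gauss(0, 0.05) for x in xs]
a, b = ga_fit(xs, ys, model=lambda p, x: p[0] * x + p[1], n_params=2)
```

The recovered parameter string (a, b) lands near the generating values despite the noise, which is the property the abstract highlights for noisy altimeter data.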
The Scalar Relativistic Contribution to Ga-Halide Bond Energies
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Arnold, James O. (Technical Monitor)
1998-01-01
The one-electron Douglas Kroll (DK) and perturbation theory (+R) approaches are used to compute the scalar relativistic contribution to the atomization energies of GaFn. These results are compared with previous GaCln results. While the +R and DK results agree well for the GaCln atomization energies, they differ for GaFn. The present work suggests that the DK approach is more accurate than the +R approach. In addition, the DK approach is less sensitive to the choice of basis set. The computed atomization energies of GaF2 and GaF3 are smaller than the somewhat uncertain experimental values. It is suggested that additional calibration calculations for the scalar relativistic effects in GaF2 and GaF3 would be valuable.
A swarm intelligence based memetic algorithm for task allocation in distributed systems
NASA Astrophysics Data System (ADS)
Sarvizadeh, Raheleh; Haghi Kashani, Mostafa
2011-12-01
This paper proposes a swarm intelligence based memetic algorithm for task allocation and scheduling in distributed systems. Task scheduling in distributed systems is known to be an NP-complete problem. Hence, many genetic algorithms have been proposed to search for optimal solutions over the entire solution space. However, these existing approaches scan the entire solution space without techniques that can reduce the complexity of the optimization, and spending too much time on scheduling is their main shortcoming. Therefore, in this paper a memetic algorithm is used to cope with this shortcoming. To achieve efficient load balancing, Bee Colony Optimization (BCO) is applied as the local search in the proposed memetic algorithm. Extended experimental results demonstrate that the proposed method outperforms the existing GA-based method in terms of CPU utilization.
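The memetic pattern, a GA whose offspring are each refined by a local search before rejoining the population, can be sketched on a toy task-allocation instance. Greedy per-task reassignment stands in for the paper's BCO local search, and the independent-task cost model (no load coupling between tasks) is a simplifying assumption.

```python
import random

def memetic_assign(costs, generations=40, pop_size=12, seed=4):
    """Memetic sketch for task allocation: a GA over task->processor
    assignments, with a greedy local search (standing in for a BCO step)
    refining each offspring before it enters the population."""
    rng = random.Random(seed)
    n_tasks, n_procs = len(costs), len(costs[0])

    def total(assign):
        return sum(costs[t][p] for t, p in enumerate(assign))

    def local_search(assign):            # move each task to its cheapest proc
        assign = list(assign)
        for t in range(n_tasks):
            best_p = min(range(n_procs), key=lambda p: costs[t][p])
            if costs[t][best_p] < costs[t][assign[t]]:
                assign[t] = best_p
        return assign

    pop = [[rng.randrange(n_procs) for _ in range(n_tasks)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=total)
        parents = pop[:pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_tasks)          # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                   # mutation: move one task
                child[rng.randrange(n_tasks)] = rng.randrange(n_procs)
            children.append(local_search(child))     # memetic refinement
        pop = parents + children
    return min(pop, key=total)

# 4 tasks x 3 processors cost matrix; with independent tasks the optimum
# simply picks the cheapest entry in each row.
costs = [[4, 1, 9], [2, 7, 3], [8, 8, 1], [5, 2, 6]]
plan = memetic_assign(costs)
```

In this uncoupled toy the local step alone can already reach the optimum; real scheduling instances couple tasks through shared resources, which is where the GA's global recombination earns its keep.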
Computing Algorithms for Nuffield Advanced Physics.
ERIC Educational Resources Information Center
Summers, M. K.
1978-01-01
Defines all recurrence relations used in the Nuffield course, to solve first- and second-order differential equations, and describes a typical algorithm for computer generation of solutions. (Author/GA)
Optimal groundwater remediation using artificial neural networks and the genetic algorithm
Rogers, L.L.
1992-08-01
An innovative computational approach for the optimization of groundwater remediation is presented which uses artificial neural networks (ANNs) and the genetic algorithm (GA). In this approach, the ANN is trained to predict an aspect of the outcome of a flow and transport simulation. The GA then searches through realizations or patterns of pumping and uses the trained network to predict the outcome of the realizations. This approach has the advantages of parallel processing of the groundwater simulations and the ability to "recycle" or reuse the base of knowledge formed by these simulations. These advantages offer a reduction in the computational burden of the groundwater simulations relative to a more conventional approach which uses nonlinear programming (NLP) with a quasi-Newtonian search. Also, the modular nature of this approach facilitates substitution of different groundwater simulation models.
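The ANN-as-surrogate idea can be sketched by letting the GA score pumping patterns with a cheap stand-in function instead of the expensive flow-and-transport simulator. The `surrogate` function below is purely illustrative (it is not a trained network, and its shape is invented); in the real approach it would be an ANN trained on simulation runs.

```python
import random

def surrogate(pump_rates):
    """Stand-in for a trained ANN predicting a remediation outcome from a
    pumping pattern (hypothetical: rewards a total rate near 10 with the
    rates spread evenly across wells)."""
    total = sum(pump_rates)
    spread = max(pump_rates) - min(pump_rates)
    return (total - 10) ** 2 + spread        # lower is better

def ga_search(n_wells=4, pop_size=30, generations=60, seed=5):
    """GA over pumping patterns; every evaluation hits the cheap surrogate,
    never the expensive groundwater simulator."""
    rng = random.Random(seed)
    pop = [[rng.uniform(0, 10) for _ in range(n_wells)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=surrogate)                  # rank by predicted outcome
        parents = pop[:pop_size // 2]            # elitist truncation
        pop = parents + [[x + rng.gauss(0, 0.2) for x in rng.choice(parents)]
                         for _ in range(pop_size - len(parents))]
    return min(pop, key=surrogate)

best = ga_search()
```

Because every fitness call is a fast function evaluation, thousands of candidate pumping patterns can be screened for the cost of a handful of true simulations, which is the computational saving the abstract describes.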
Ligand "Brackets" for Ga-Ga Bond.
Fedushkin, Igor L; Skatova, Alexandra A; Dodonov, Vladimir A; Yang, Xiao-Juan; Chudakova, Valentina A; Piskunov, Alexander V; Demeshko, Serhiy; Baranov, Evgeny V
2016-09-01
The reactivity of digallane (dpp-Bian)Ga-Ga(dpp-Bian) (1) (dpp-Bian = 1,2-bis[(2,6-diisopropylphenyl)imino]acenaphthene) toward acenaphthenequinone (AcQ), sulfur dioxide, and azobenzene was investigated. The reaction of 1 with AcQ in 1:1 molar ratio proceeds via two-electron reduction of AcQ to give (dpp-Bian)Ga(μ2-AcQ)Ga(dpp-Bian) (2), in which the diolate [AcQ](2-) acts as a "bracket" for the Ga-Ga bond. The interaction of 1 with AcQ in 1:2 molar ratio proceeds with oxidation of both dpp-Bian ligands as well as of the Ga-Ga bond to give (dpp-Bian)Ga(μ2-AcQ)2Ga(dpp-Bian) (3). At 330 K in toluene, complex 2 decomposes to give compounds 3 and 1. The reaction of complex 2 with atmospheric oxygen results in oxidation of the Ga-Ga bond and affords (dpp-Bian)Ga(μ2-AcQ)(μ2-O)Ga(dpp-Bian) (4). The reaction of digallane 1 with SO2 produces, depending on the ratio (1:2 or 1:4), dithionites (dpp-Bian)Ga(μ2-O2S-SO2)Ga(dpp-Bian) (5) and (dpp-Bian)Ga(μ2-O2S-SO2)2Ga(dpp-Bian) (6). In compound 5 the Ga-Ga bond is preserved and supported by a dithionite dianionic bracket. In compound 6 the gallium centers are bridged by two dithionite ligands. Both 5 and 6 contain dpp-Bian radical anionic ligands. Four-electron reduction of azobenzene with 1 mol equiv of digallane 1 leads to complex (dpp-Bian)Ga(μ2-NPh)2Ga(dpp-Bian) (7). Paramagnetic compounds 2-7 were characterized by electron spin resonance spectroscopy, and their molecular structures were established by single-crystal X-ray analysis. The magnetic behavior of compounds 2, 5, and 6 was investigated by the superconducting quantum interference device technique in the range of 2-295 K. PMID:27548713
NASA Astrophysics Data System (ADS)
Bauer, A.; Bowring, S. A.; Vervoort, J. D.; Fisher, C. M.
2014-12-01
The Acasta Gneiss Complex (AGC) of northwestern Canada preserves some of Earth's oldest granitic crust (>4.03 Ga) and thereby contains important insight into crust forming processes on the early Earth. In general, rocks of the AGC have undergone a complex history of metamorphism and deformation (Archean and Paleoproterozoic)1,2, and, as a consequence, the zircons retain a complex history including inheritance, magmatic and metamorphic overgrowths, recrystallization, and multi-stage Pb loss. Previously published Hf isotopic data on zircons show within sample variability in excess of analytical uncertainty2,3,4. In order to assess the meaning and significance of this apparent isotopic variability, we are using two different methods to obtain coupled U-Pb and Lu-Hf isotopic data in zircon from a suite of rocks ranging in age from ca. > 3.9 Ga to 3.3 Ga. To obtain these data from the same volume of zircon, our approach involves: 1) split stream LA-ICPMS for U-Pb and Lu-Hf; 2) mechanical isolation of zircon domains for chemical abrasion and ID-TIMS U-Pb analyses and solution ICPMS for Lu-Hf recovered from U-Pb ion exchange chromatography. The deconvolution of complex histories requires this integrated approach and permits us to take advantage of both high spatial resolution and highest precision measurements to ultimately decipher the age and isotopic composition of discrete domains of multi-phase zircon. We demonstrate our approach with both relatively simple and complex grain populations in an attempt to understand within and between grain heterogeneity. The samples with the simplest zircon systematics have increasingly negative ɛHf from oldest to youngest, consistent with involvement of 4.0 Ga or older crust in later generations; also, none of our samples have been derived solely from strongly depleted sources. The presence of intra-zircon variability within samples from the AGC reflects a complex history of magmatic additions requiring melting/assimilation of older
Reasoning about systolic algorithms
Purushothaman, S.; Subrahmanyam, P.A.
1988-12-01
The authors present a methodology for verifying the correctness of systolic algorithms. The methodology is based on solving a set of Uniform Recurrence Equations obtained from a description of systolic algorithms as a set of recursive equations. They present an approach to mechanically verify the correctness of systolic algorithms using the Boyer-Moore theorem prover. A mechanical correctness proof of an example from the literature is also presented.
Modeling a magnetostrictive transducer using genetic algorithm
NASA Astrophysics Data System (ADS)
Almeida, L. A. L.; Deep, G. S.; Lima, A. M. N.; Neff, H.
2001-05-01
This work reports on the applicability of the genetic algorithm (GA) to the problem of parameter determination of magnetostrictive transducers. A combination of the Jiles-Atherton hysteresis model with a quadratic moment rotation model is simulated using known parameters of a sensor. The simulated sensor data are then used as input data for the GA parameter calculation method. Taking the previously known parameters, the accuracy of the GA parameter calculation method can be evaluated.
An efficient algorithm for function optimization: modified stem cells algorithm
NASA Astrophysics Data System (ADS)
Taherdangkoo, Mohammad; Paziresh, Mahsa; Yazdi, Mehran; Bagheri, Mohammad
2013-03-01
In this paper, we propose an optimization algorithm based on the intelligent behavior of stem cell swarms in reproduction and self-organization. Optimization algorithms such as the Genetic Algorithm (GA), Particle Swarm Optimization (PSO) algorithm, Ant Colony Optimization (ACO) algorithm and Artificial Bee Colony (ABC) algorithm can give near-optimal solutions to linear and non-linear problems in many applications; however, in some cases they can suffer from becoming trapped in local optima. The Stem Cells Algorithm (SCA) is an optimization algorithm inspired by the natural behavior of stem cells in evolving themselves into new and improved cells. The SCA avoids the local optima problem successfully. In this paper, we have made small changes in the implementation of this algorithm to obtain improved performance over previous versions. Using a series of benchmark functions, we assess the performance of the proposed algorithm and compare it with that of the other aforementioned optimization algorithms. The obtained results demonstrate the superiority of the Modified Stem Cells Algorithm (MSCA).
Pereira, Keith; Osiason, Adam; Salsamendi, Jason
2015-01-01
The role of interventional radiology in the overall management of patients on dialysis continues to expand. In patients with end-stage renal disease (ESRD), the use of tunneled dialysis catheters (TDCs) for hemodialysis has become an integral component of treatment plans. Unfortunately, long-term use of TDCs often leads to infections, acute occlusions, and chronic venous stenosis, depletion of the patient's conventional access routes, and prevention of their recanalization. In such situations, the progressive loss of venous access sites prompts a systematic approach to alternative sites to maximize patient survival and minimize complications. In this review, we discuss the advantages and disadvantages of each vascular access option. We illustrate the procedures with case histories and images from our own experience at a highly active dialysis and transplant center. We rank each vascular access option and classify them into tiers based on their relative degrees of effectiveness. The conventional approaches are the most preferred, followed by alternative approaches and finally the salvage approaches. It is our intent to have this review serve as a concise and informative reference for physicians managing patients who need vascular access for hemodialysis. PMID:26167389
NASA Astrophysics Data System (ADS)
Izadi, Arman; Kimiagari, Ali Mohammad
2014-05-01
Distribution network design as a strategic decision has long-term effect on tactical and operational supply chain management. In this research, the location-allocation problem is studied under demand uncertainty. The purposes of this study were to specify the optimal number and location of distribution centers and to determine the allocation of customer demands to distribution centers. The main feature of this research is solving the model with unknown demand function which is suitable with the real-world problems. To consider the uncertainty, a set of possible scenarios for customer demands is created based on the Monte Carlo simulation. The coefficient of variation of costs is mentioned as a measure of risk and the most stable structure for firm's distribution network is defined based on the concept of robust optimization. The best structure is identified using genetic algorithms and 14 % reduction in total supply chain costs is the outcome. Moreover, it imposes the least cost variation created by fluctuation in customer demands (such as epidemic diseases outbreak in some areas of the country) to the logistical system. It is noteworthy that this research is done in one of the largest pharmaceutical distribution firms in Iran.
High efficiency epitaxial GaAs/GaAs and GaAs/Ge solar cell technology using OM/CVD
NASA Technical Reports Server (NTRS)
Wang, K. L.; Yeh, Y. C. M.; Stirn, R. J.; Swerdling, S.
1980-01-01
A technology for fabricating high efficiency, thin film GaAs solar cells on substrates appropriate for space and/or terrestrial applications was developed. The approach adopted utilizes organometallic chemical vapor deposition (OM-CVD) to form a GaAs layer epitaxially on a suitably prepared Ge epi-interlayer deposited on a substrate, especially a lightweight silicon substrate, which can lead to a 300 watt per kilogram array technology for space. The proposed cell structure is described. GaAs epilayer growth on single-crystal GaAs and Ge wafer substrates was investigated.
Design of a blade stiffened composite panel by a genetic algorithm
NASA Technical Reports Server (NTRS)
Nagendra, S.; Haftka, R. T.; Gurdal, Z.
1993-01-01
Genetic algorithms (GAs) readily handle discrete problems, and can be made to generate many optima, as is presently illustrated for the case of design for minimum-weight stiffened panels with buckling constraints. The GA discrete design procedure proved superior to extant alternatives for both stiffened panels with cutouts and without cutouts. High computational costs are, however, associated with this discrete design approach at the current level of its development.
NASA Astrophysics Data System (ADS)
Izyumskaya, N.; Okur, S.; Zhang, F.; Monavarian, M.; Avrutin, V.; Özgür, Ü.; Metzner, S.; Karbaum, C.; Bertram, F.; Christen, J.; Morkoç, H.
2014-03-01
Nonpolar m-plane GaN layers were grown on patterned Si (112) substrates by metal-organic chemical vapor deposition (MOCVD). A two-step growth procedure involving a low-pressure (30 Torr) first step to ensure formation of the m-plane facet and a high-pressure step (200 Torr) for improvement of optical quality was employed. The layers grown in two steps show improvement of the optical quality: the near-bandedge photoluminescence (PL) intensity is about 3 times higher than that for the layers grown at low pressure, and deep emission is considerably weaker. However, emission intensity from m-GaN is still lower than that of polar and semipolar (1 100 ) reference samples grown under the same conditions. To shed light on this problem, spatial distribution of optical emission over the c+ and c- wings of the nonpolar GaN/Si was studied by spatially resolved cathodoluminescence and near-field scanning optical microscopy.
Feature Subset Selection by Estimation of Distribution Algorithms
Cantu-Paz, E
2002-01-17
This paper describes the application of four evolutionary algorithms to the identification of feature subsets for classification problems. Besides a simple GA, the paper considers three estimation of distribution algorithms (EDAs): a compact GA, an extended compact GA, and the Bayesian Optimization Algorithm. The objective is to determine if the EDAs present advantages over the simple GA in terms of accuracy or speed in this problem. The experiments used a Naive Bayes classifier and public-domain and artificial data sets. In contrast with previous studies, we did not find evidence to support or reject the use of EDAs for this problem.
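The compact GA compared above replaces an explicit population with a probability vector over bits, updated by pairwise tournaments. A minimal sketch follows; the parameter values are illustrative assumptions, and the one-max objective is a stand-in for a real feature-subset fitness (e.g. classifier accuracy on the selected features).

```python
import random

def compact_ga(fitness, n_bits=20, virtual_pop=100, evaluations=2000):
    """Compact GA (the simplest EDA): evolve a probability vector p, where
    p[i] is the estimated probability that bit i is 1 in good solutions."""
    p = [0.5] * n_bits
    sample = lambda: [1 if random.random() < pi else 0 for pi in p]
    for _ in range(evaluations // 2):          # two evaluations per tournament
        a, b = sample(), sample()
        winner, loser = (a, b) if fitness(a) >= fitness(b) else (b, a)
        for i in range(n_bits):
            if winner[i] != loser[i]:          # shift the model toward the winner
                step = 1.0 / virtual_pop
                p[i] = min(1.0, max(0.0, p[i] + (step if winner[i] else -step)))
    return [round(pi) for pi in p]             # most probable bit string

model = compact_ga(sum)   # one-max: fitness is the number of 1 bits
```

In a feature-selection setting, each bit would mark one feature as included or excluded, and `fitness` would train and score a classifier (e.g. Naive Bayes, as in the paper) on the selected subset.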
NASA Technical Reports Server (NTRS)
Shah, Ankoor S.; Knuth, Kevin H.; Truccolo, Wilson A.; Ding, Ming-Zhou; Bressler, Steven L.; Schroeder, Charles E.; Clancy, Daniel (Technical Monitor)
2002-01-01
Accurate measurement of single-trial responses is key to a definitive use of complex electromagnetic and hemodynamic measurements in the investigation of brain dynamics. We developed the multiple component, Event-Related Potential (mcERP) approach to single-trial response estimation to improve our resolution of dynamic interactions between neuronal ensembles located in different layers within a cortical region and/or in different cortical regions. The mcERP model asserts that multiple components defined as stereotypic waveforms comprise the stimulus-evoked response and that these components may vary in amplitude and latency from trial to trial. Maximum a posteriori (MAP) solutions for the model are obtained by iterating a set of equations derived from the posterior probability. Our first goal was to use the mcERP algorithm to analyze interactions (specifically latency and amplitude correlation) between responses in different layers within a cortical region. Thus, we evaluated the model by applying the algorithm to synthetic data containing two correlated local components and one independent far-field component. Three cases were considered: the local components were correlated by an interaction in their single-trial amplitudes, by an interaction in their single-trial latencies, or by an interaction in both amplitude and latency. We then analyzed the accuracy with which the algorithm estimated the component waveshapes and the single-trial parameters as a function of the linearity of each of these relationships. Extensions of these analyses to real data are discussed as well as ongoing work to incorporate more detailed prior information.
NASA Technical Reports Server (NTRS)
Wang, Lui; Bayer, Steven E.
1991-01-01
Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
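The basic concepts summarized above (selection, crossover, mutation, survival of the fittest) can be sketched in a minimal generational GA. This is an illustrative sketch, not the software tool described in the abstract; the one-max objective and all parameter values are assumptions.

```python
import random

def genetic_algorithm(fitness, n_bits=20, pop_size=50, generations=100,
                      crossover_rate=0.9, mutation_rate=0.01):
    """Minimal generational GA over fixed-length bit strings (maximization)."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        def select():
            # Tournament selection: the fitter of two random individuals wins.
            a, b = random.sample(pop, 2)
            return list(a if fitness(a) >= fitness(b) else b)
        children = []
        while len(children) < pop_size:
            p1, p2 = select(), select()
            if random.random() < crossover_rate:       # one-point crossover
                cut = random.randrange(1, n_bits)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for child in (p1, p2):
                for i in range(n_bits):                # bit-flip mutation
                    if random.random() < mutation_rate:
                        child[i] ^= 1
                children.append(child)
        pop = children[:pop_size]
        best = max(pop + [best], key=fitness)          # track the best ever seen
    return best

solution = genetic_algorithm(sum)   # one-max: fitness is the number of 1 bits
```

Any problem whose candidate solutions can be encoded as bit strings (or, with different operators, permutations or real vectors) can be plugged in via the `fitness` callable.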
NASA Astrophysics Data System (ADS)
Pahlavani, Parham; Delavar, Mahmoud R.; Frank, Andrew U.
2012-08-01
The personalized urban multi-criteria quasi-optimum path problem (PUMQPP) is a branch of multi-criteria shortest path problems (MSPPs) and is classified as an NP-hard problem. To solve the PUMQPP while considering dependent criteria in route selection, approaches are needed that achieve the best compromise among possible solutions/routes. Recently, the invasive weed optimization (IWO) algorithm was introduced as a novel algorithm for solving many continuous optimization problems. In this study, a modified IWO algorithm was designed, implemented, evaluated, and compared with the genetic algorithm (GA) to solve the PUMQPP in a directed urban transportation network. In comparison with the GA, the results show the significant superiority of the proposed modified IWO algorithm in exploring a discrete search-space of the urban transportation network. In this regard, the proposed modified IWO algorithm reached better fitness-function, quality-metric, and running-time values than those of the GA.
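Standard IWO works roughly as follows: fitter weeds scatter more seeds, the seed-dispersal radius shrinks as the colony matures, and competitive exclusion truncates the colony to a maximum size. A minimal continuous-space sketch is below; the paper's modified, discrete route-finding version differs, and all parameters here are illustrative assumptions.

```python
import random

def iwo(objective, dim=2, pop_max=20, iterations=100,
        seeds_max=5, sigma_init=1.0, sigma_final=0.01):
    """Minimal invasive weed optimization (continuous minimization)."""
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(10)]
    for it in range(iterations):
        # Dispersal radius shrinks nonlinearly as the colony matures.
        sigma = sigma_final + (sigma_init - sigma_final) * ((iterations - it) / iterations) ** 2
        fvals = [objective(w) for w in pop]
        f_best, f_worst = min(fvals), max(fvals)
        offspring = []
        for weed, f in zip(pop, fvals):
            ratio = 1.0 if f_worst == f_best else (f_worst - f) / (f_worst - f_best)
            for _ in range(int(ratio * seeds_max)):   # fitter weeds scatter more seeds
                offspring.append([x + random.gauss(0.0, sigma) for x in weed])
        # Competitive exclusion: only the best pop_max weeds survive.
        pop = sorted(pop + offspring, key=objective)[:pop_max]
    return min(pop, key=objective)

best = iwo(lambda p: sum(x * x for x in p))   # sphere function as a toy objective
```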
Ultra-Thin, Triple-Bandgap GaInP/GaAs/GaInAs Monolithic Tandem Solar Cells
NASA Technical Reports Server (NTRS)
Wanlass, M. W.; Ahrenkiel, S. P.; Albin, D. S.; Carapella, J. J.; Duda, A.; Emery, K.; Geisz, J. F.; Jones, K.; Kurtz, Sarah; Moriarty, T.; Romero, M. J.
2007-01-01
The performance of state-of-the-art, series-connected, lattice-matched (LM), triple-junction (TJ), III-V tandem solar cells could be improved substantially (10-12%) by replacing the Ge bottom subcell with a subcell having a bandgap of approx. 1 eV. For the last several years, research has been conducted by a number of organizations to develop approx. 1-eV, LM GaInAsN to provide such a subcell, but, so far, the approach has proven unsuccessful. Thus, the need for a high-performance, monolithically integrable, 1-eV subcell for TJ tandems has remained. In this paper, we present a new TJ tandem cell design that addresses the above-mentioned problem. Our approach involves inverted epitaxial growth to allow the monolithic integration of a lattice-mismatched (LMM) approx. 1-eV GaInAs/GaInP double-heterostructure (DH) bottom subcell with LM GaAs (middle) and GaInP (top) upper subcells. A transparent GaInP compositionally graded layer facilitates the integration of the LM and LMM components. Handle-mounted, ultra-thin device fabrication is a natural consequence of the inverted-structure approach, which results in a number of advantages, including robustness, potential low cost, improved thermal management, incorporation of back-surface reflectors, and possible reclamation/reuse of the parent crystalline substrate for further cost reduction. Our initial work has concerned GaInP/GaAs/GaInAs tandem cells grown on GaAs substrates. In this case, the 1-eV GaInAs experiences 2.2% compressive LMM with respect to the substrate. Specially designed GaInP graded layers are used to produce 1-eV subcells with performance parameters nearly equaling those of LM devices with the same bandgap (e.g., LM, 1-eV GaInAsP grown on InP). Previously, we reported preliminary ultra-thin tandem devices (0.237 cm2) with NREL-confirmed efficiencies of 31.3% (global spectrum, one sun) (1), 29.7% (AM0 spectrum, one sun) (2), and 37.9% (low-AOD direct spectrum, 10.1 suns) (3), all at 25 C. Here, we include
Alshamlan, Hala; Badr, Ghada; Alohali, Yousef
2015-01-01
An artificial bee colony (ABC) is a relatively recent swarm intelligence optimization approach. In this paper, we propose the first attempt at applying the ABC algorithm to analyzing a microarray gene expression profile. In addition, we propose an innovative feature selection algorithm, minimum redundancy maximum relevance (mRMR), and combine it with an ABC algorithm, mRMR-ABC, to select informative genes from a microarray profile. The new approach is based on a support vector machine (SVM) algorithm to measure the classification accuracy for selected genes. We evaluate the performance of the proposed mRMR-ABC algorithm by conducting extensive experiments on six binary and multiclass gene expression microarray datasets. Furthermore, we compare our proposed mRMR-ABC algorithm with previously known techniques. We reimplemented two of these techniques for the sake of a fair comparison using the same parameters. These two techniques are mRMR combined with a genetic algorithm (mRMR-GA) and mRMR combined with a particle swarm optimization algorithm (mRMR-PSO). The experimental results show that the proposed mRMR-ABC algorithm achieves accurate classification performance using a small number of predictive genes when tested on these datasets and compared to previously suggested methods. This shows that mRMR-ABC is a promising approach for solving gene selection and cancer classification problems. PMID:25961028
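The mRMR filter used above can be sketched as a greedy search that balances relevance to the class label against redundancy with already-selected features. In the sketch below, absolute Pearson correlation stands in for the mutual information used in real mRMR, and the synthetic data are illustrative assumptions.

```python
import numpy as np

def mrmr(X, y, k):
    """Greedy minimum-redundancy-maximum-relevance feature selection.
    Absolute Pearson correlation stands in for mutual information here."""
    n_features = X.shape[1]
    relevance = np.array([abs(np.corrcoef(X[:, j], y)[0, 1])
                          for j in range(n_features)])
    selected = [int(np.argmax(relevance))]        # start with the most relevant
    while len(selected) < k:
        best, best_score = -1, -np.inf
        for j in range(n_features):
            if j in selected:
                continue
            redundancy = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                                  for s in selected])
            score = relevance[j] - redundancy     # mRMR "difference" criterion
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected

# Synthetic check: column 1 is an exact rescaling of column 0, while column 2
# is independently informative, so mRMR should avoid picking both 0 and 1.
rng = np.random.default_rng(0)
y = rng.normal(size=300)
x0 = y + 0.5 * rng.normal(size=300)
X = np.column_stack([x0, 2.0 * x0, y + 1.0 * rng.normal(size=300)])
selected = mrmr(X, y, 2)
```

In mRMR-ABC, a wrapper search (the bee colony) would then refine this filtered subset using SVM accuracy as the fitness.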
NASA Astrophysics Data System (ADS)
Russo, G.; Attili, A.; Battistoni, G.; Bertrand, D.; Bourhaleb, F.; Cappucci, F.; Ciocca, M.; Mairani, A.; Milian, F. M.; Molinelli, S.; Morone, M. C.; Muraro, S.; Orts, T.; Patera, V.; Sala, P.; Schmitt, E.; Vivaldo, G.; Marchetto, F.
2016-01-01
The calculation algorithm of a modern treatment planning system for ion-beam radiotherapy should ideally be able to deal with different ion species (e.g. protons and carbon ions), to provide relative biological effectiveness (RBE) evaluations and to describe different beam lines. In this work we propose a new approach for computing ion irradiation outcomes, the beamlet superposition (BS) model, which satisfies these requirements. This model applies and extends the concepts of previous fluence-weighted pencil-beam algorithms to quantities of radiobiological interest other than dose, i.e. RBE- and LET-related quantities. It describes an ion beam through a beam-line-specific, weighted superposition of universal beamlets. The universal physical and radiobiological irradiation effect of the beamlets on a representative set of water-like tissues is evaluated once, coupling the per-track information derived from FLUKA Monte Carlo simulations with the radiobiological effectiveness provided by the microdosimetric kinetic model and the local effect model. Thanks to an extension of the superposition concept, the beamlet irradiation action superposition is applicable for the evaluation of dose, RBE and LET distributions. The weight function for the beamlet superposition is derived from the beam phase space density at the patient entrance. A general beam model commissioning procedure is proposed, which has successfully been tested on the CNAO beam line. The BS model provides the evaluation of different irradiation quantities for different ions, the adaptability permitted by weight functions and the evaluation speed of analytical approaches. Benchmarking plans in simple geometries and clinical plans are shown to demonstrate the model capabilities.
RBT-GA: a novel metaheuristic for solving the multiple sequence alignment problem
Taheri, Javid; Zomaya, Albert Y
2009-01-01
Background Multiple Sequence Alignment (MSA) has always been an active area of research in Bioinformatics. MSA is mainly focused on discovering biologically meaningful relationships among different sequences or proteins in order to investigate their underlying main characteristics/functions. This information is also used to generate phylogenetic trees. Results This paper presents a novel approach, namely RBT-GA, to solve the MSA problem using a hybrid solution methodology combining the Rubber Band Technique (RBT) and the Genetic Algorithm (GA) metaheuristic. RBT is inspired by the behavior of an elastic Rubber Band (RB) on a plate with several poles, which is analogous to locations in the input sequences that could potentially be biologically related. A GA attempts to mimic the evolutionary processes of life in order to locate optimal solutions in an often very complex landscape. RBT-GA is a population based optimization algorithm designed to find the optimal alignment for a set of input protein sequences. In this novel technique, each alignment answer is modeled as a chromosome consisting of several poles in the RBT framework. These poles resemble locations in the input sequences that are most likely to be correlated and/or biologically related. A GA-based optimization process improves these chromosomes gradually, yielding a set of mostly optimal answers for the MSA problem. Conclusion RBT-GA is tested with one of the well-known benchmark suites (BALiBASE 2.0) in this area. The obtained results show the superiority of the proposed technique, even in the case of formidable sequences. PMID:19594869
NASA Astrophysics Data System (ADS)
Tillett, Jason C.; Rao, Raghuveer; Sahin, Ferat; Rao, T. M.
2004-08-01
When wireless sensors are capable of variable transmit power and are battery powered, it is important to select the appropriate transmit power level for the node. Lowering the transmit power of the sensor nodes imposes a natural clustering on the network and has been shown to improve throughput of the network. However, a common transmit power level is not appropriate for inhomogeneous networks. A possible fitness-based approach, motivated by an evolutionary optimization technique, Particle Swarm Optimization (PSO) is proposed and extended in a novel way to determine the appropriate transmit power of each sensor node. A distributed version of PSO is developed and explored using experimental fitness to achieve an approximation of least-cost connectivity.
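The distributed, fitness-based variant described above builds on the standard global-best PSO update. A minimal centralized sketch on a toy objective follows; the parameter values are conventional defaults, not the paper's, and the sphere function stands in for the connectivity-cost fitness.

```python
import random

def pso(objective, dim=2, n_particles=30, iterations=200,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimal global-best particle swarm optimization (minimization)."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # personal bests
    pbest_val = [objective(p) for p in pos]
    g = pbest_val.index(min(pbest_val))
    gbest, gbest_val = pbest[g][:], pbest_val[g]     # global best
    for _ in range(iterations):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])   # cognitive pull
                             + c2 * r2 * (gbest[d] - pos[i][d]))     # social pull
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, value = pso(lambda p: sum(x * x for x in p))   # sphere function
```

In the distributed setting of the paper, each sensor node would play the role of a particle, with the "position" encoding its transmit power level and fitness measured experimentally from local connectivity.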
NASA Astrophysics Data System (ADS)
Robin, C.; Pillet, N.; Peña Arteaga, D.; Berger, J.-F.
2016-02-01
Background: Although self-consistent multiconfiguration methods have been used for decades to address the description of atomic and molecular many-body systems, only a few trials have been made in the context of nuclear structure. Purpose: This work aims at the development of such an approach to describe in a unified way various types of correlations in nuclei in a self-consistent manner where the mean-field is improved as correlations are introduced. The goal is to reconcile the usually set-apart shell-model and self-consistent mean-field methods. Method: This approach is referred to as "variational multiparticle-multihole configuration mixing method." It is based on a double variational principle which yields a set of two coupled equations that determine at the same time the expansion coefficients of the many-body wave function and the single-particle states. The solution of this problem is obtained by building a doubly iterative numerical algorithm. Results: The formalism is derived and discussed in a general context, starting from a three-body Hamiltonian. Links to existing many-body techniques such as the formalism of Green's functions are established. First applications are done using the two-body D1S Gogny effective force. The numerical procedure is tested on the 12C nucleus to study the convergence features of the algorithm in different contexts. Ground-state properties as well as single-particle quantities are analyzed, and the description of the first 2+ state is examined. Conclusions: The self-consistent multiparticle-multihole configuration mixing method is fully applied for the first time to the description of a test nucleus. This study makes it possible to validate our numerical algorithm and leads to encouraging results. To test the method further, we will realize in the second article of this series a systematic description of more nuclei and observables obtained by applying the newly developed numerical procedure with the same Gogny force. As
Bennett, Herbert S; Filliben, James J
2002-01-01
A critical issue identified in both the technology roadmap from the Optoelectronics Industry Development Association and the roadmaps from the National Electronics Manufacturing Initiative, Inc. is the need for predictive computer simulations of processes, devices, and circuits. The goal of this paper is to respond to this need by representing the extensive amounts of theoretical data for transport properties in the multi-dimensional space of mole fractions of AlAs in Ga1-xAlxAs, dopant densities, and carrier densities in terms of closed form analytic expressions. Representing such data in terms of closed-form analytic expressions is a significant challenge that arises in developing computationally efficient simulations of microelectronic and optoelectronic devices. In this paper, we present a methodology to achieve the above goal for a class of numerical data in the bounded two-dimensional space of mole fraction of AlAs and dopant density. We then apply this methodology to obtain closed-form analytic expressions for the effective intrinsic carrier concentrations at 300 K in n-type and p-type Ga1-xAlxAs as functions of the mole fraction x of AlAs between 0.0 and 0.3. In these calculations, the donor density ND for n-type material varies between 10^16 cm^-3 and 10^19 cm^-3 and the acceptor density NA for p-type materials varies between 10^16 cm^-3 and 10^20 cm^-3. We find that p-type Ga1-xAlxAs presents much greater challenges for obtaining acceptable analytic fits whenever acceptor densities are sufficiently near the Mott transition because of increased scatter in the numerical computer results for solutions to the theoretical equations. The Mott transition region in p-type Ga1-xAlxAs is of technological significance for mobile wireless communications systems. This methodology and its associated principles, strategies, regression analyses, and graphics are expected to be applicable to other problems beyond the specific case of effective
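The kind of closed-form representation described above can be illustrated with an ordinary least-squares surface fit over the two-dimensional (mole fraction, dopant density) space. The bilinear model, the synthetic data, and all coefficients below are assumptions made for the example, not the paper's actual expressions.

```python
import numpy as np

# Synthetic stand-in for tabulated device data: a smooth surface over mole
# fraction x (0 to 0.3) and log10 dopant density (16 to 19), plus noise
# mimicking scatter in the numerical solutions.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 0.3, 200)               # AlAs mole fraction
logN = rng.uniform(16.0, 19.0, 200)          # log10 of dopant density
data = 40.0 - 30.0 * x + 0.5 * logN + rng.normal(0.0, 0.05, 200)

# Closed-form bilinear model a + b*x + c*logN, fitted by least squares.
A = np.column_stack([np.ones_like(x), x, logN])
coef, *_ = np.linalg.lstsq(A, data, rcond=None)
```

Once fitted, the three coefficients replace the whole table: a device simulator evaluates `a + b*x + c*logN` instead of interpolating stored numerical data.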
Albert, Jaroslav
2016-01-01
Modeling stochastic behavior of chemical reaction networks is an important endeavor in many aspects of chemistry and systems biology. The chemical master equation (CME) and the Gillespie algorithm (GA) are the two most fundamental approaches to such modeling; however, each of them has its own limitations: the GA may require long computing times, while the CME may demand unrealistic memory storage capacity. We propose a method that combines the CME and the GA that allows one to simulate stochastically a part of a reaction network. First, a reaction network is divided into two parts. The first part is simulated via the GA, while the solution of the CME for the second part is fed into the GA in order to update its propensities. The advantage of this method is that it avoids the need to solve the CME or stochastically simulate the entire network, which makes it highly efficient. One of its drawbacks, however, is that most of the information about the second part of the network is lost in the process. Therefore, this method is most useful when only partial information about a reaction network is needed. We tested this method against the GA on two systems of interest in biology - the gene switch and the Griffith model of a genetic oscillator—and have shown it to be highly accurate. Comparing this method to four different stochastic algorithms revealed it to be at least an order of magnitude faster than the fastest among them. PMID:26930199
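The GA here is Gillespie's stochastic simulation algorithm, not a genetic algorithm. Its direct method can be sketched as follows: draw an exponential waiting time from the total propensity, pick the firing reaction with probability proportional to its propensity, and apply the stoichiometry. The birth-death network at the end is an illustrative assumption, not one of the paper's test systems.

```python
import random

def gillespie(x0, rates, stoich, propensity, t_max):
    """Gillespie's direct method: exact stochastic simulation of a network."""
    t, x = 0.0, list(x0)
    trajectory = [(t, tuple(x))]
    while t < t_max:
        a = [propensity(j, x, rates) for j in range(len(stoich))]
        a0 = sum(a)
        if a0 == 0:
            break                                    # no reaction can fire
        t += random.expovariate(a0)                  # exponential waiting time
        r, j, acc = random.random() * a0, 0, a[0]
        while acc < r:                               # pick reaction j with prob a_j/a0
            j += 1
            acc += a[j]
        x = [xi + s for xi, s in zip(x, stoich[j])]  # apply stoichiometry
        trajectory.append((t, tuple(x)))
    return trajectory

# Hypothetical birth-death process: 0 -> A at rate k1; A -> 0 at rate k2*A.
stoich = [(1,), (-1,)]
prop = lambda j, x, k: k[0] if j == 0 else k[1] * x[0]
traj = gillespie((0,), (10.0, 1.0), stoich, prop, t_max=50.0)
```

In the hybrid scheme of the paper, only one part of the network would be advanced by such a simulation, with propensities that couple to the CME solution of the other part.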
Visibility conflict resolution for multiple antennae and multi-satellites via genetic algorithm
NASA Astrophysics Data System (ADS)
Lee, Junghyun; Hyun, Chung; Ahn, Hyosung; Wang, Semyung; Choi, Sujin; Jung, Okchul; Chung, Daewon; Ko, Kwanghee
Satellite mission control systems typically operate by scheduling missions within the visibility windows between ground stations and satellites. Communication for a mission is achieved through the interaction of satellite visibility and ground station support. Specifically, a satellite passing over a ground station forms a cone-type visibility region, and the antennas of ground stations support the satellite. When two or more satellites pass by at the same time or consecutively, the satellites may generate a visibility conflict. As the number of satellites increases, resolving visibility conflicts becomes an important issue. In this study, we propose a visibility conflict resolution algorithm for multiple satellites using a genetic algorithm (GA). The problem is formulated as a scheduling optimization model, with the visibility of satellites treated as tasks and the supports of antennas as resources. The visibility of satellites is allocated to the total support time of antennas as much as possible so that users obtain the maximum benefit. We focus on a genetic algorithm approach because the problem is complex and not defined explicitly. The genetic algorithm can be applied to such a complex model since it only needs an objective function and can approach a global optimum. However, the mathematical proof of global optimality for the genetic algorithm is very challenging. Therefore, we also apply a greedy algorithm and show that our genetic approach is reasonable by comparing it with the performance of the greedy algorithm.
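A greedy baseline of the kind used for comparison can be sketched as classic interval scheduling over visibility passes: sort by end time and accept each pass that does not overlap the last accepted one. The single-antenna setting and the pass data below are illustrative assumptions, not the paper's model.

```python
def greedy_schedule(passes):
    """Greedy baseline for one antenna: sort visibility windows by end time
    and accept each pass that starts after the last accepted pass ends."""
    accepted, last_end = [], float("-inf")
    for start, end, sat in sorted(passes, key=lambda p: p[1]):
        if start >= last_end:
            accepted.append((start, end, sat))
            last_end = end
    return accepted

# Hypothetical passes: (start time, end time, satellite id).
passes = [(0, 4, "S1"), (1, 3, "S2"), (3, 6, "S3"), (5, 8, "S4")]
picked = greedy_schedule(passes)   # conflicting passes S1 and S4 are dropped
```

A GA can improve on this baseline when passes have different values or when multiple antennas are available, because the greedy rule commits locally and cannot trade an early pass for a more valuable combination later.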
Genetic algorithm and the application for job shop group scheduling
NASA Astrophysics Data System (ADS)
Mao, Jianzhong; Wu, Zhiming
1995-08-01
Genetic algorithm (GA) is a heuristic, randomized search technique that mimics natural evolution. This paper first presents the basic principle of GA, the definition and function of the genetic operators, and the principal characteristics of GA. On this basis, the paper proposes using GA as a new solution method for the job-shop group scheduling problem, and discusses the coded representation of feasible solutions and the particular constraints on the genetic operators.
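For job-shop scheduling, chromosomes are typically permutations of jobs, so the crossover operator must preserve permutation validity. Order crossover (OX) is one standard choice; the paper's exact coded representation is not specified here, so this operator is an illustrative assumption.

```python
import random

def order_crossover(parent1, parent2):
    """Order crossover (OX) for permutation chromosomes: copy a random slice
    from parent1, then fill the remaining slots with parent2's genes in the
    order they appear, so the child is always a valid permutation."""
    n = len(parent1)
    i, j = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[i:j + 1] = parent1[i:j + 1]
    fill = [g for g in parent2 if g not in child]
    for k in range(n):
        if child[k] is None:
            child[k] = fill.pop(0)
    return child

child = order_crossover([0, 1, 2, 3, 4, 5], [5, 4, 3, 2, 1, 0])
```

Naive one-point crossover would duplicate and drop jobs; OX (like PMX or cycle crossover) sidesteps the repair step that such infeasible children would otherwise require.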
Genetic Algorithms with Local Minimum Escaping Technique
NASA Astrophysics Data System (ADS)
Tamura, Hiroki; Sakata, Kenichiro; Tang, Zheng; Ishii, Masahiro
In this paper, we propose a genetic algorithm (GA) with a local-minimum escaping technique. The proposed method escapes from a local minimum by correcting parameters whenever the genetic algorithm falls into one. Simulations on a scheduling problem without buffer capacity are performed using the proposed method, and its validity is shown.
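One simple way to realize such an escape step is to detect stagnation of the best fitness and re-randomize part of the population. The authors' exact parameter-correction rule is not specified here, so the sketch below is an assumed, generic variant.

```python
def maybe_escape(pop, best_history, reinit, patience=10, fraction=0.5):
    """Stagnation-triggered escape: if the best fitness (maximization, one
    value per generation) has not improved for `patience` generations,
    re-randomize the worst `fraction` of the population. Assumes `pop` is
    sorted best-first, so the elite always survives the reset."""
    stagnated = (len(best_history) > patience
                 and best_history[-1] <= best_history[-1 - patience])
    if stagnated:
        k = int(len(pop) * fraction)
        pop = pop[:len(pop) - k] + [reinit() for _ in range(k)]
    return pop, stagnated

pop = list(range(10))                                   # placeholder individuals
flat_pop, fired = maybe_escape(pop, [5.0] * 12, reinit=lambda: "new")
```

Called once per generation inside the GA loop, this leaves normal runs untouched and only injects diversity when progress has stalled.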
An Evaluation of Potentials of Genetic Algorithm in Shortest Path Problem
NASA Astrophysics Data System (ADS)
Hassany Pazooky, S.; Rahmatollahi Namin, Sh; Soleymani, A.; Samadzadegan, F.
2009-04-01
One of the most typical issues considered in combinatorial systems in transportation networks is the shortest path problem. In such networks, routing has a significant impact on the network's performance. Due to the natural complexity of transportation networks and the strong impact of routing on different fields of decision making, such as traffic management and the vehicle routing problem (VRP), appropriate solutions to this problem are crucial. In recent years, different solutions have been proposed for the shortest path problem. These techniques are divided into two categories: classic and evolutionary approaches. Two well-known classic algorithms are Dijkstra and A*. Dijkstra is known as a robust but time-consuming algorithm for finding the shortest path. A* is very similar to Dijkstra: less robust, but with higher performance. On the other hand, genetic algorithms are among the most applicable evolutionary algorithms. A genetic algorithm searches several parts of the domain in parallel and is not easily trapped in local optima. In this paper, the potential of the genetic algorithm for finding the shortest path is evaluated by comparing it with the classic algorithms (Dijkstra and A*). Evaluation of these techniques on a transportation network in an urban area shows that, owing to the limited search space of the classic methods, the GA performed better in finding the shortest path.
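The Dijkstra baseline discussed above can be sketched with a binary heap; the toy road network is an illustrative assumption.

```python
import heapq

def dijkstra(graph, source):
    """Classic Dijkstra shortest-path distances using a binary heap.
    `graph` maps each node to a list of (neighbor, edge weight) pairs."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                       # stale heap entry, already improved
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Hypothetical toy network.
g = {"A": [("B", 2), ("C", 5)], "B": [("C", 1), ("D", 4)], "C": [("D", 1)]}
dist = dijkstra(g, "A")
```

With nonnegative weights this is exact, which is why GA results in such comparisons are judged by how often and how quickly they match the Dijkstra optimum rather than by solution quality alone.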
Site-controlled InGaN/GaN single-photon-emitting diode
NASA Astrophysics Data System (ADS)
Zhang, Lei; Teng, Chu-Hsiang; Ku, Pei-Cheng; Deng, Hui
2016-04-01
We report single-photon emission from electrically driven site-controlled InGaN/GaN quantum dots. The device is fabricated from a planar light-emitting diode structure containing a single InGaN quantum well, using a top-down approach. The location, dimension, and height of each single-photon-emitting diode are controlled lithographically, providing great flexibility for chip-scale integration.
Simplified 2DEG carrier concentration model for composite barrier AlGaN/GaN HEMT
Das, Palash; Biswas, Dhrubes
2014-04-24
The self-consistent solution of the Schrödinger and Poisson equations, together with the total charge depletion model, is applied in a novel approach to a composite-AlGaN-barrier HEMT heterostructure. The solution led to a completely new analytical model for the Fermi energy level vs. 2DEG carrier concentration. This was eventually used to demonstrate a new analytical model for the temperature-dependent 2DEG carrier concentration in AlGaN/GaN HEMTs.
Satellite remote sensing offers synoptic and frequent monitoring of optical water quality parameters, such as chlorophyll-a, turbidity, and colored dissolved organic matter (CDOM). While traditional satellite algorithms were developed for the open ocean, these algorithms often do...
GaInP/GaAs/GaInAs Monolithic Tandem Cells for High-Performance Solar Concentrators
Wanlass, M. W.; Ahrenkiel, S. P.; Albin, D. S.; Carapella, J. J.; Duda, A.; Emery, K.; Geisz, J. F.; Jones, K.; Kurtz, S.; Moriarty, T.; Romero, M. J.
2005-08-01
We present a new approach for ultra-high-performance tandem solar cells that involves inverted epitaxial growth and ultra-thin device processing. The additional degree of freedom afforded by the inverted design allows the monolithic integration of high- and medium-bandgap, lattice-matched (LM) subcell materials with lower-bandgap, lattice-mismatched (LMM) materials in a tandem structure through the use of transparent compositionally graded layers. The current work concerns an inverted, series-connected, triple-bandgap, GaInP (LM, 1.87 eV)/GaAs (LM, 1.42 eV)/GaInAs (LMM, ~1 eV) device structure grown on a GaAs substrate. Ultra-thin tandem devices are fabricated by mounting the epiwafers to pre-metallized Si wafer handles and selectively removing the parent GaAs substrate. The resulting handle-mounted, ultra-thin tandem cells have a number of important advantages, including improved performance and potential reclamation/reuse of the parent substrate for epitaxial growth. Additionally, realistic performance modeling calculations suggest that terrestrial concentrator efficiencies in the range of 40-45% are possible with this new tandem cell approach. A laboratory-scale (0.24 cm2), prototype GaInP/GaAs/GaInAs tandem cell with a terrestrial concentrator efficiency of 37.9% at a low concentration ratio (10.1 suns) is described, which surpasses the previous world efficiency record of 37.3%.
PDE Nozzle Optimization Using a Genetic Algorithm
NASA Technical Reports Server (NTRS)
Billings, Dana; Turner, James E. (Technical Monitor)
2000-01-01
Genetic algorithms, which simulate evolution in natural systems, have been used to find solutions to optimization problems that seem intractable to standard approaches. In this study, the feasibility of using a GA to find an optimum fixed-profile nozzle for a pulse detonation engine (PDE) is demonstrated. The objective was to maximize impulse during the detonation wave passage and blow-down phases of operation. The impulse of each profile variant was obtained by using the CFD code Mozart/2.0 to simulate the transient flow. After 7 generations, the method identified a nozzle profile that is certainly a candidate for the optimum solution. The constraints on the generality of this possible solution remain to be clarified.
AlGaAs ridge laser with 33% wall-plug efficiency at 100 °C based on a design of experiments approach
NASA Astrophysics Data System (ADS)
Fecioru, Alin; Boohan, Niall; Justice, John; Gocalinska, Agnieszka; Pelucchi, Emanuele; Gubbins, Mark A.; Mooney, Marcus B.; Corbett, Brian
2016-04-01
Upcoming applications for semiconductor lasers present limited thermal dissipation routes demanding the highest efficiency devices at high operating temperatures. This paper reports on a comprehensive design of experiment optimisation for the epitaxial layer structure of AlGaAs based 840 nm lasers for operation at high temperature (100 °C) using Technology Computer-Aided Design software. The waveguide thickness, Al content, doping level, and quantum well thickness were optimised. The resultant design was grown and the fabricated ridge waveguides were optimised for carrier injection and, at 100 °C, the lasers achieve a total power output of 28 mW at a current of 50 mA, a total slope efficiency 0.82 W A-1 with a corresponding wall-plug efficiency of 33%.
Comparison of MLR, PLS and GA-MLR in QSAR analysis.
Saxena, A K; Prathipati, P
2003-01-01
The use of the internet in quantitative structure-activity relationship (QSAR) work has evolved over the past decade with the development of web-based resources such as numerous public-domain software tools for descriptor calculation and chemometric toolboxes. The importance of chemometrics in QSAR has grown in recent years for processing the enormous amount of information into predictive mathematical models for large datasets of molecules. With the availability of huge numbers of physicochemical and structural parameters, variable selection has become crucial in deriving interpretable and predictive QSAR models. Among several approaches to this problem, principal component regression (PCR) and partial least squares (PLS) analyses provide highly predictive QSAR models, but being more abstract, they are difficult to understand and interpret. The genetic algorithm (GA) is a stochastic method well suited to the problem of variable selection and to solving optimization problems. Consequently, the hybrid approach (GA-MLR) combining a GA with multiple linear regression (MLR) may be useful in deriving highly predictive and interpretable QSAR models. In view of the above, a comparative study of stepwise-MLR, PLS and GA-MLR in deriving QSAR models for datasets of alpha1-adrenoreceptor antagonists and beta3-adrenoreceptor agonists has been carried out using the public-domain software Dragon for computing descriptors and free Matlab codes for data modeling. PMID:14758986
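The GA variable-selection step described in this abstract can be sketched with binary chromosomes, one gene per descriptor (a minimal sketch, not the authors' Matlab code; the fitness below is a stand-in for the cross-validated regression quality a real GA-MLR run would compute, and all names and parameter values are illustrative):

```python
import random

def ga_select(n_features, fitness, pop_size=20, generations=40, seed=1):
    """Binary-chromosome GA for variable selection: each gene flags one descriptor."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_features)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]          # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_features)   # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(n_features)] ^= 1  # point mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Stand-in fitness: reward picking exactly descriptors 0 and 2. A real GA-MLR
# would score each chromosome by the MLR model's predictive quality instead.
target = [1, 0, 1, 0, 0, 0]
score = lambda mask: sum(1 for m, t in zip(mask, target) if m == t)
```

Each chromosome directly encodes which descriptors enter the regression, which is what makes the resulting models interpretable.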
Focusing through a turbid medium by amplitude modulation with genetic algorithm
NASA Astrophysics Data System (ADS)
Dai, Weijia; Peng, Ligen; Shao, Xiaopeng
2014-05-01
Multiple scattering of light in opaque materials such as white paint and human tissue forms a volume speckle field, which greatly reduces the imaging depth and degrades the imaging quality. A novel approach is proposed to focus light through a turbid medium using amplitude modulation with a genetic algorithm (GA) from speckle patterns. Compared with phase modulation, the amplitude modulation approach, in which each element of the spatial light modulator (SLM) is either zero or one, is much easier to implement. Theoretical and experimental results show that the GA is more suitable for low signal-to-noise ratio (SNR) environments than existing amplitude control algorithms such as binary amplitude modulation. The circular Gaussian distribution model and Rayleigh-Sommerfeld diffraction theory are employed in our simulations to describe the turbid medium and the light propagation between optical devices, respectively. It is demonstrated that the GA technique achieves a higher overall enhancement, converges much faster than the others, and outperforms all the other algorithms at high noise. Focusing through a turbid medium has potential for the observation of cells and protein molecules in biological tissues and of other structures at the micro/nano scale.
Maximizing flexure jointed hexapod vibration isolation using a modified genetic algorithm
NASA Astrophysics Data System (ADS)
Guo, Zhijiang; McInroy, John E.
2004-07-01
In this paper we propose the genetic algorithm (GA) as a tool to solve multi-objective optimization problems in flexure-jointed hexapods. Using the concept of heuristic mutation, a modified GA-based multi-objective optimization technique is proposed and applied to the passive parameter optimization problems of a flexure-jointed hexapod system. The passive parameters found include the spring and damping parameters in each strut of the hexapod. The results produced by this new approach are compared to those produced by other practical selection techniques, showing that this technique is more flexible. Thus, the genetic algorithm can be used as a reliable numerical optimization tool in such problems.
Determination of composition of non-homogeneous GaInNAs layers
NASA Astrophysics Data System (ADS)
Pucicki, D.; Bielak, K.; Ściana, B.; Radziewicz, D.; Latkowska-Baranowska, M.; Kováč, J.; Vincze, A.; Tłaczała, M.
2016-01-01
Dilute nitride GaInNAs alloys grown on GaAs have become prospective materials for so-called low-cost GaAs-based devices working within the optical wavelength range up to 1.6 μm. Multilayer GaInNAs/GaAs multi-quantum well (MQW) structures are usually analyzed using high resolution X-ray diffraction (HRXRD) measurements. However, precise structural characterization of GaInNAs-containing heterostructures requires taking into consideration all inhomogeneities of such structures. This paper describes some of the material challenges and progress in the structural characterization of GaInNAs layers. A new algorithm for the structural characterization of dilute nitrides is presented, which combines contactless electro-reflectance (CER) or photo-reflectance (PR) measurements and HRXRD analysis results with GaInNAs quantum well band diagram calculations. Triple quantum well (3QW) GaInNAs/GaAs structures grown by atmospheric-pressure metalorganic vapor-phase epitaxy (AP-MOVPE) were investigated according to the proposed algorithm. Thanks to the presented algorithm, more precise structural data, including the nonuniformity of the GaInNAs/GaAs QWs in the growth direction, were obtained. The proposed algorithm can therefore be regarded as a nondestructive method for the characterization of multicomponent inhomogeneous semiconductor structures with quantum wells.
A High-Performance Genetic Algorithm: Using Traveling Salesman Problem as a Case
Tsai, Chun-Wei; Tseng, Shih-Pang; Yang, Chu-Sing
2014-01-01
This paper presents a simple but efficient algorithm for reducing the computation time of genetic algorithm (GA) and its variants. The proposed algorithm is motivated by the observation that genes common to all the individuals of a GA have a high probability of surviving the evolution and ending up being part of the final solution; as such, they can be saved away to eliminate the redundant computations at the later generations of a GA. To evaluate the performance of the proposed algorithm, we use it not only to solve the traveling salesman problem but also to provide an extensive analysis on the impact it may have on the quality of the end result. Our experimental results indicate that the proposed algorithm can significantly reduce the computation time of GA and GA-based algorithms while limiting the degradation of the quality of the end result to a very small percentage compared to traditional GA. PMID:24892038
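The observation above, that genes shared by every individual can be saved away to avoid redundant computation, can be sketched as a simple scan over the population (a positional simplification; for the TSP the authors' notion of a common gene would concern shared tour structure, and the numbers below are illustrative):

```python
def common_genes(population):
    """Positions where every chromosome agrees; such genes can be frozen
    (saved away) so later generations skip recomputing them."""
    fixed = {}
    for i in range(len(population[0])):
        values = {chrom[i] for chrom in population}
        if len(values) == 1:         # gene has converged across the population
            fixed[i] = values.pop()
    return fixed

# Three chromosomes that agree at positions 0 and 2:
pop = [[2, 5, 1, 7], [2, 3, 1, 4], [2, 9, 1, 7]]
```

Freezing the converged positions shrinks the effective search space in later generations, which is where the reported computation-time savings come from.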
NASA Astrophysics Data System (ADS)
Minsker, B. S.; Zimmer, A. L.; Ostfeld, A.; Schmidt, A.
2014-12-01
Enabling real-time decision support, particularly under conditions of uncertainty, requires computationally efficient algorithms that can rapidly generate recommendations. In this paper, a suite of model predictive control (MPC) genetic algorithms are developed and tested offline to explore their value for reducing CSOs during real-time use in a deep-tunnel sewer system. MPC approaches include the micro-GA, the probability-based compact GA, and domain-specific GA methods that reduce the number of decision variable values analyzed within the sewer hydraulic model, thus reducing algorithm search space. Minimum fitness and constraint values achieved by all GA approaches, as well as computational times required to reach the minimum values, are compared to large population sizes with long convergence times. Optimization results for a subset of the Chicago combined sewer system indicate that genetic algorithm variations with coarse decision variable representation, eventually transitioning to the entire range of decision variable values, are most efficient at addressing the CSO control problem. Although diversity-enhancing micro-GAs evaluate a larger search space and exhibit shorter convergence times, these representations do not reach minimum fitness and constraint values. The domain-specific GAs prove to be the most efficient and are used to test CSO sensitivity to energy costs, CSO penalties, and pressurization constraint values. The results show that CSO volumes are highly dependent on the tunnel pressurization constraint, with reductions of 13% to 77% possible with less conservative operational strategies. Because current management practices may not account for varying costs at CSO locations and electricity rate changes in the summer and winter, the sensitivity of the results is evaluated for variable seasonal and diurnal CSO penalty costs and electricity-related system maintenance costs, as well as different sluice gate constraint levels. These findings indicate
Optimal fractional delay-IIR filter design using cuckoo search algorithm.
Kumar, Manjeet; Rawat, Tarun Kumar
2015-11-01
This paper applies a novel global meta-heuristic optimization algorithm, the cuckoo search algorithm (CSA), to determine optimal coefficients of a fractional delay-infinite impulse response (FD-IIR) filter that meet the ideal frequency response characteristics. Since fractional delay-IIR filter design is a multi-modal optimization problem, it cannot be handled efficiently by conventional gradient-based optimization techniques. A weighted least squares (WLS) based fitness function is used to improve the performance to a great extent. FD-IIR filters of different orders have been designed using the CSA. The simulation results of the proposed CSA-based approach have been compared to those of well-accepted evolutionary algorithms such as the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). The performance of the CSA-based FD-IIR filter is superior to that obtained by GA and PSO. The simulation and statistical results affirm that the proposed approach using CSA outperforms GA and PSO, not only in convergence rate but also in the optimal performance of the designed FD-IIR filter (i.e., smaller magnitude error, smaller phase error, higher percentage improvement in magnitude and phase error, and faster convergence). The absolute magnitude and phase errors obtained for the designed 5th-order FD-IIR filter are as low as 0.0037 and 0.0046, respectively. The percentage improvements in magnitude error for the CSA-based 5th-order FD-IIR design with respect to GA and PSO are 80.93% and 74.83%, respectively, and in phase error 76.04% and 71.25%, respectively. PMID:26391486
NASA Astrophysics Data System (ADS)
Krishna, Hemanth; Kumar, Hemantha; Gangadharan, Kalluvalappil
2016-06-01
A magneto-rheological (MR) fluid damper offers a cost-effective solution for semiactive vibration control in an automobile suspension. The performance of an MR damper depends significantly on the electromagnetic circuit incorporated into it, and the force developed by an MR fluid damper is highly influenced by the magnetic flux density induced in the fluid flow gap. In the present work, optimization of the electromagnetic circuit of an MR damper is discussed in order to maximize the magnetic flux density. The optimization procedure combines a genetic algorithm with design of experiments techniques. The results show that a fluid flow gap smaller than 1.12 mm causes a significant increase in magnetic flux density.
Genetic algorithms for the construction of D-optimal designs
Heredia-Langner, Alejandro; Carlyle, W M.; Montgomery, D C.; Borror, Connie M.; Runger, George C.
2003-01-01
Computer-generated designs are useful for situations where standard factorial, fractional factorial or response surface designs cannot be easily employed. Alphabetically-optimal designs are the most widely used type of computer-generated designs, and of these, the D-optimal (or D-efficient) class of designs is extremely popular. D-optimal designs are usually constructed by algorithms that sequentially add and delete points from a potential design, using a candidate set of points spaced over the region of interest. We present a technique to generate D-efficient designs using genetic algorithms (GA). This approach eliminates the need to explicitly consider a candidate set of experimental points, and it can handle highly constrained regions while maintaining a level of performance comparable to more traditional design construction techniques.
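The D-criterion that such a GA maximizes is det(X'X) of the model matrix. For the straight-line model y = b0 + b1*x on [-1, 1] this determinant can be written out directly (a minimal illustration of the criterion, not the paper's algorithm):

```python
def xtx_det2(xs):
    """det(X'X) for the model y = b0 + b1*x with design points xs.
    Each row of the model matrix X is [1, x], so
    X'X = [[n, sum x], [sum x, sum x^2]] and det = n*sum(x^2) - (sum x)^2."""
    n = len(xs)
    sx = sum(xs)
    sxx = sum(x * x for x in xs)
    return n * sxx - sx * sx
```

For this model the D-optimal two-run design puts the points at the ends of the interval: `xtx_det2([-1.0, 1.0])` is larger than the determinant of any clustered pair such as `[0.0, 0.5]`, which is exactly the quantity a GA's fitness function would reward.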
Haque, Mohammad Nazmul; Noman, Nasimul; Berretta, Regina; Moscato, Pablo
2016-01-01
Classification of datasets with imbalanced sample distributions has always been a challenge. In general, a popular approach for enhancing classification performance is the construction of an ensemble of classifiers. However, the performance of an ensemble depends on the choice of constituent base classifiers. Therefore, we propose a genetic algorithm-based search method for finding the optimum combination from a pool of base classifiers to form a heterogeneous ensemble. The algorithm, called GA-EoC, utilises 10-fold cross-validation on training data to evaluate the quality of each candidate ensemble. In order to combine the base classifiers' decisions into the ensemble's output, we used the simple and widely used majority voting approach. The proposed algorithm, along with the random sub-sampling approach to balance the class distribution, has been used for classifying class-imbalanced datasets. Additionally, if a feature set was not available, we used the (α, β) − k Feature Set method to select a better subset of features for classification. We have tested GA-EoC with three benchmarking datasets from the UCI Machine Learning repository, one Alzheimer's disease dataset and a subset of the PubFig database of Columbia University. In general, the performance of the proposed method on the chosen datasets is robust and better than that of the constituent base classifiers and many other well-known ensembles. Based on our empirical study we claim that a genetic algorithm is a superior and reliable approach to heterogeneous ensemble construction, and we expect that the proposed GA-EoC would perform consistently in other cases. PMID:26764911
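The majority-voting combination step the abstract mentions is simple to sketch (a minimal version; tie-breaking by first-seen label is our assumption, not a detail from the paper):

```python
from collections import Counter

def majority_vote(predictions):
    """Combine one sample's labels from the selected base classifiers.
    Counter.most_common puts the highest count first; among equal counts,
    the first-inserted label wins, which serves as the tie-break here."""
    return Counter(predictions).most_common(1)[0][0]
```

In GA-EoC the chromosome decides which base classifiers from the pool contribute predictions; the vote above then produces the ensemble output, e.g. `majority_vote(['cat', 'dog', 'cat'])` yields `'cat'`.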
Machine Computation; An Algorithmic Approach.
ERIC Educational Resources Information Center
Gonzalez, Richard F.; McMillan, Claude, Jr.
Designed for undergraduate social science students, this textbook concentrates on using the computer in a straightforward way to manipulate numbers and variables in order to solve problems. The text is problem oriented and assumes that the student has had little prior experience with either a computer or programing languages. An introduction to…
Pruning Neural Networks with Distribution Estimation Algorithms
Cantu-Paz, E
2003-01-15
This paper describes the application of four evolutionary algorithms to the pruning of neural networks used in classification problems. Besides a simple genetic algorithm (GA), the paper considers three distribution estimation algorithms (DEAs): a compact GA, an extended compact GA, and the Bayesian Optimization Algorithm. The objective is to determine whether the DEAs present advantages over the simple GA in terms of accuracy or speed on this problem. The experiments used a feed-forward neural network trained with standard backpropagation on public-domain and artificial data sets. The pruned networks seemed to have accuracy better than or equal to that of the original fully-connected networks; only in a few cases did pruning result in less accurate networks. We found few differences in the accuracy of the networks pruned by the four EAs, but important differences in execution time. The results suggest that a simple GA with a small population might be the best algorithm for pruning networks on the data sets we tested.
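Of the DEAs compared above, the compact GA is the easiest to sketch: it replaces the population with a probability vector updated by pairwise tournaments (a minimal demonstration on a onemax-style fitness; parameter values are illustrative, not from the paper):

```python
import random

def compact_ga(fitness, n_bits, virtual_pop=50, tournaments=2000, seed=3):
    """Compact GA: a probability vector stands in for a whole population.
    Each step samples two individuals; the tournament winner pulls the
    probabilities toward itself by 1/virtual_pop."""
    rng = random.Random(seed)
    p = [0.5] * n_bits

    def sample():
        return [1 if rng.random() < pi else 0 for pi in p]

    for _ in range(tournaments):
        a, b = sample(), sample()
        winner, loser = (a, b) if fitness(a) >= fitness(b) else (b, a)
        for i in range(n_bits):
            if winner[i] != loser[i]:
                step = 1.0 / virtual_pop
                p[i] += step if winner[i] == 1 else -step
                p[i] = min(1.0, max(0.0, p[i]))  # keep probabilities in [0, 1]
    return [1 if pi >= 0.5 else 0 for pi in p]
```

Because only the probability vector is stored, the memory footprint is O(n_bits) regardless of the virtual population size, which is the appeal of this DEA for pruning masks over network weights.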
Genetic algorithm-based neural fuzzy decision tree for mixed scheduling in ATM networks.
Lin, Chin-Teng; Chung, I-Fang; Pu, Her-Chang; Lee, Tsern-Huei; Chang, Jyh-Yeong
2002-01-01
Future broadband integrated services networks based on asynchronous transfer mode (ATM) technology are expected to support multiple types of multimedia information with diverse statistical characteristics and quality of service (QoS) requirements. To meet these requirements, efficient scheduling methods are important for traffic control in ATM networks. Among general scheduling schemes, the rate monotonic algorithm is simple enough to be used in high-speed networks, but does not attain the high system utilization of the deadline driven algorithm. However, the deadline driven scheme is computationally complex and hard to implement in hardware. The mixed scheduling algorithm is a combination of the rate monotonic algorithm and the deadline driven algorithm; thus it can provide most of the benefits of these two algorithms. In this paper, we use the mixed scheduling algorithm to achieve high system utilization under the hardware constraint. Because there is no analytic method for schedulability testing of mixed scheduling, we propose a genetic algorithm-based neural fuzzy decision tree (GANFDT) to realize it in a real-time environment. The GANFDT combines a GA and a neural fuzzy network into a binary classification tree. This approach also exploits the power of the classification tree. Simulation results show that the GANFDT provides an efficient way of carrying out mixed scheduling in ATM networks. PMID:18244889
Kim, Ye Kyun; Ahn, Cheol Hyoun; Yun, Myeong Gu; Cho, Sung Woon; Kang, Won Jun; Cho, Hyung Koun
2016-01-01
In this paper, a simple and controllable "wet pulse annealing" technique for the fabrication of flexible amorphous InGaZnO thin film transistors (a-IGZO TFTs) processed at low temperature (150 °C) by using scalable vacuum deposition is proposed. This method entailed the quick injection of water vapor for 0.1 s and purge treatment in dry ambient in one cycle; the supply content of water vapor was simply controlled by the number of pulse repetitions. The electrical transport characteristics revealed a remarkable performance of the a-IGZO TFTs prepared at the maximum process temperature of 150 °C (field-effect mobility of 13.3 cm(2) V(-1) s(-1); Ion/Ioff ratio ≈ 10(8); reduced I-V hysteresis), comparable to that of a-IGZO TFTs annealed at 350 °C in dry ambient. Upon analysis of the angle-resolved x-ray photoelectron spectroscopy, the good performance was attributed to the effective suppression of the formation of hydroxide and oxygen-related defects. Finally, by using the wet pulse annealing process, we fabricated, on a plastic substrate, an ultrathin flexible a-IGZO TFT with good electrical and bending performances. PMID:27198067
Hauschild, Dirk; Handick, Evelyn; Göhl-Gusenleitner, Sina; Meyer, Frank; Schwab, Holger; Benkert, Andreas; Pohlner, Stephan; Palm, Jörg; Tougaard, Sven; Heske, Clemens; Weinhardt, Lothar; Reinert, Friedrich
2016-08-17
Using reflection electron energy loss spectroscopy (REELS), we have investigated the optical properties at the surface of a chalcopyrite-based Cu(In,Ga)(S,Se)2 (CIGSSe) thin-film solar cell absorber, as well as an indium sulfide (InxSy) buffer layer before and after annealing. By fitting the characteristic inelastic scattering cross-section λK(E) to cross sections evaluated by the QUEELS-ε(k,ω)-REELS software package, we determine the surface dielectric function and optical properties of these samples. A comparison of the optical values at the surface of the InxSy film with bulk ellipsometry measurements indicates a good agreement between bulk- and surface-related optical properties. In contrast, the properties of the CIGSSe surface differ significantly from the bulk. In particular, a larger (surface) band gap than for bulk-sensitive measurements is observed, providing a complementary and independent confirmation of earlier photoelectron spectroscopy results. Finally, we derive the inelastic mean free path λ for electrons in InxSy, annealed InxSy, and CIGSSe at a kinetic energy of 1000 eV. PMID:27463021
Backtracking search algorithm for effective and efficient surface wave analysis
NASA Astrophysics Data System (ADS)
Song, Xianhai; Zhang, Xueqiang; Zhao, Sutao; Li, Lei
2015-03-01
Surface wave dispersion analysis is widely used in geophysics to infer near-surface shear (S)-wave velocity profiles for a wide variety of applications. However, inversion of surface wave data is challenging for most local-search methods due to its high nonlinearity and multimodality. In this work, we propose and implement a new Rayleigh wave dispersion curve inversion scheme based on the backtracking search algorithm (BSA), a novel and powerful evolutionary algorithm (EA). The development of BSA is motivated by studies that attempt to build an algorithm with desirable features across different optimization problems: the ability to reach a problem's global minimum quickly and reliably with a small number of control parameters and low computational cost, as well as robustness and ease of application to different problem models. The proposed inverse procedure is applied to nonlinear inversion of fundamental-mode Rayleigh wave dispersion curves for near-surface S-wave velocity profiles. To evaluate the calculation efficiency and effectiveness of BSA, four noise-free and four noisy synthetic data sets are first inverted. Then, the performance of BSA is compared with that of genetic algorithms (GA) on two noise-free synthetic data sets. Finally, a real-world example from a waste disposal site in NE Italy is inverted to examine the applicability and robustness of the proposed approach on real surface wave data, and the performance of BSA is again compared against that of GA to further evaluate it. Results from both synthetic and actual data demonstrate that BSA applied to nonlinear inversion of surface wave data performs well not only in terms of accuracy but also in terms of convergence speed. The great advantages of BSA are that the algorithm is simple, robust and easy to implement, with few control parameters to tune.
A Moving Target Environment for Computer Configurations Using Genetic Algorithms
Crouse, Michael; Fulp, Errin W.
2011-10-31
Moving Target (MT) environments for computer systems provide security through diversity by changing various system properties that are explicitly defined in the computer configuration. Temporal diversity can be achieved by making periodic configuration changes; however, in an infrastructure of multiple similarly purposed computers, diversity must also be spatial, ensuring multiple computers do not simultaneously share the same configuration and potential vulnerabilities. Given the number of possible changes and their potential interdependencies, discovering computer configurations that are secure, functional, and diverse is challenging. This paper describes how a Genetic Algorithm (GA) can be employed to find temporally and spatially diverse secure computer configurations. In the proposed approach a computer configuration is modeled as a chromosome, where an individual configuration setting is a trait or allele. The GA operates by combining multiple chromosomes (configurations), which are tested for feasibility and ranked based on performance, measured as resistance to attack. The results of successive iterations of the GA are secure configurations that are diverse due to the crossover and mutation processes. Simulation results demonstrate that this approach can provide an MT environment for a large infrastructure of similarly purposed computers by discovering temporally and spatially diverse secure configurations.
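The spatial-diversity requirement described above, that no two machines share a configuration, amounts to a uniqueness check over the chromosomes (a minimal sketch with made-up two-setting configurations; the paper's actual feasibility and attack-resistance scoring is far richer):

```python
def spatially_diverse(assignments):
    """True if no two machines share an identical configuration chromosome.
    Each configuration is a list of setting values (alleles)."""
    seen = set()
    for config in assignments:
        key = tuple(config)   # hashable snapshot of the chromosome
        if key in seen:
            return False      # two machines would share the same vulnerabilities
        seen.add(key)
    return True
```

A GA fitness function could penalize any candidate assignment for which this check fails, alongside the feasibility and attack-resistance terms the paper describes.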
Double Motor Coordinated Control Based on Hybrid Genetic Algorithm and CMAC
NASA Astrophysics Data System (ADS)
Cao, Shaozhong; Tu, Ji
A novel hybrid cerebellar model articulation controller (CMAC) and online adaptive genetic algorithm (GA) controller is introduced to control two brushless DC motors (BLDCMs) applied in a biped robot. The genetic algorithm simulates random learning among the individuals of a group, while CMAC simulates the self-learning of an individual. To validate the ability and superiority of the novel algorithm, experiments have been done in MATLAB/SIMULINK. An analysis of GA, hybrid GA-CMAC and CMAC feed-forward control is also given. The results prove that the torque ripple of the coordinated control system is eliminated by the hybrid GA-CMAC algorithm.
Genetic algorithms in conceptual design of a light-weight, low-noise, tilt-rotor aircraft
NASA Technical Reports Server (NTRS)
Wells, Valana L.
1996-01-01
This report outlines research accomplishments in the area of using genetic algorithms (GA) for the design and optimization of rotorcraft. It discusses the genetic algorithm as a search and optimization tool, outlines a procedure for using the GA in the conceptual design of helicopters, and applies the GA method to the acoustic design of rotors.
A new system for synthesis of high quality nonpolar GaN thin films.
Li, Guoqiang; Shih, Shao-Ju; Fu, Zhengyi
2010-02-28
High quality nonpolar m-plane GaN films were successfully grown on LiGaO2 (100) substrates for the first time. This m-plane GaN/LiGaO2 (100) system opens a new approach for realizing highly efficient nitride devices. PMID:20449251
Novel model of a AlGaN/GaN high electron mobility transistor based on an artificial neural network
NASA Astrophysics Data System (ADS)
Cheng, Zhi-Qun; Hu, Sha; Liu, Jun; Zhang, Qi-Jun
2011-03-01
In this paper we present a novel approach to modeling the AlGaN/GaN high electron mobility transistor (HEMT) with an artificial neural network (ANN). The AlGaN/GaN HEMT device structure and its fabrication process are described. The circuit-based neuro-space mapping (neuro-SM) technique is studied in detail. The EEHEMT model is implemented according to the measurement results of the designed device and serves as a coarse model. An ANN model of the AlGaN/GaN HEMT is proposed based on the coarse model, and its optimization is performed. The simulation results from the model are compared with the measurement results, showing that the ANN model of the AlGaN/GaN HEMT is more accurate than the EEHEMT model. Project supported by the National Natural Science Foundation of China (Grant No. 60776052).
NASA Astrophysics Data System (ADS)
Ehret, Uwe
2016-04-01
approximation error than for an unstructured data set such as white noise. Knowledge of this Pareto optimum can be useful for the design of sampling strategies. It is also interesting to analyze the spatio-temporal distribution of the most relevant nodes of the data set (those with the largest information gain): homogeneously spaced nodes indicate a data set of constant predictability throughout its extent, or low complexity, while heterogeneously spaced nodes indicate shifting patterns of local predictability, an attribute of more complex data sets (if 'complexity' is defined as 'high overall uncertainty about local uncertainty'). Interpolation of data sets: The structogram can also be used for interpolation, i.e. estimation at nodes where no observations are available. The idea of structogram-based interpolation is that, just as for Kriging, the estimate is a weighted linear combination of the observations, but here the weights are determined not from the variogram and the intrinsic hypothesis, but from the relevance of the nodes: highly relevant nodes are given higher weights than less relevant nodes. Testing many different data sets revealed that for 'smooth' data sets, where proximity means similarity, classical Kriging-based interpolation outperforms structogram-based approaches, while for intermittent data sets such as rainfall time series, where proximity does not always mean similarity, structogram-based interpolation performs better. References Ramer, U.: An iterative procedure for the polygonal approximation of plane curves, Computer Graphics and Image Processing, 1, 244-256, http://dx.doi.org/10.1016/S0146-664X(72)80017-0, 1972. Douglas, D., Peucker, T.: Algorithms for the reduction of the number of points required to represent a digitized line or its caricature, The Canadian Cartographer, 10(2), 112-122, ISSN 0008-3127, 1973.
Optimization of solar air collector using genetic algorithm and artificial bee colony algorithm
NASA Astrophysics Data System (ADS)
Şencan Şahin, Arzu
2012-11-01
The thermal performance of a solar air collector depends on many parameters, such as inlet air temperature, air velocity, collector slope and properties of the collector itself. In this study, the effects of the different parameters that affect the performance of the solar air collector are investigated. In order to maximize the thermal performance of a solar air collector, the genetic algorithm (GA) and the artificial bee colony (ABC) algorithm have been used. The results obtained indicate that the GA and ABC algorithms can be applied successfully to the optimization of the thermal performance of a solar air collector.
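The ABC heuristic used in the study above can be sketched in a few dozen lines. The following is a minimal, illustrative implementation, not the authors' code; the "collector efficiency" surrogate cost and its optimum (slope 30°, flow rate 0.02) are hypothetical stand-ins chosen only so the example runs.

```python
import random

def abc_minimise(cost, bounds, n_food=10, limit=20, cycles=100, seed=2):
    """Minimal artificial bee colony: employed bees perturb their food
    source, onlookers favour good sources, scouts replace stale ones."""
    rng = random.Random(seed)
    dim = len(bounds)

    def rand_source():
        return [rng.uniform(lo, hi) for lo, hi in bounds]

    def neighbour(x, other):
        # Step one coordinate toward/away from another source (classic ABC move).
        i = rng.randrange(dim)
        y = x[:]
        y[i] += rng.uniform(-1, 1) * (x[i] - other[i])
        y[i] = min(max(y[i], bounds[i][0]), bounds[i][1])
        return y

    foods = [rand_source() for _ in range(n_food)]
    trials = [0] * n_food
    best = min(foods, key=cost)
    for _ in range(cycles):
        for phase in ("employed", "onlooker"):
            for i in range(n_food):
                if phase == "onlooker":
                    # Onlookers pick sources with probability proportional to fitness.
                    fits = [1.0 / (1.0 + cost(f)) for f in foods]
                    i = rng.choices(range(n_food), weights=fits)[0]
                cand = neighbour(foods[i], rng.choice(foods))
                if cost(cand) < cost(foods[i]):
                    foods[i], trials[i] = cand, 0
                else:
                    trials[i] += 1
        for i in range(n_food):  # scout phase: abandon stale sources
            if trials[i] > limit:
                foods[i], trials[i] = rand_source(), 0
        best = min(foods + [best], key=cost)
    return best

# Hypothetical efficiency surrogate peaked at slope 30 deg, flow 0.02.
cost = lambda x: (x[0] - 30.0) ** 2 + (1000 * (x[1] - 0.02)) ** 2
best = abc_minimise(cost, [(0.0, 90.0), (0.0, 0.1)])
print(round(cost(best), 2))
```

The same loop accepts any bounded cost function, so the GA comparison from the abstract would swap only the search operators, not the surrogate model.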
Wang, Y; Guo, G D; Chen, L F
2013-01-01
Prediction of the three-dimensional structure of a protein from its amino acid sequence can be considered a global optimization problem. In this paper, the Chaotic Artificial Bee Colony (CABC) algorithm was introduced and applied to 3D protein structure prediction. Based on the 3D off-lattice AB model, the CABC algorithm combines the global and local search of the Artificial Bee Colony (ABC) algorithm with a chaotic search to avoid premature convergence and entrapment in local optima. Experiments carried out with the popular Fibonacci sequences demonstrate that the proposed algorithm provides an effective and high-performance method for protein structure prediction. PMID:25509864
Hybrid evolutionary algorithms for network-centric command and control
NASA Astrophysics Data System (ADS)
Khosla, Deepak; Nichols, Tom
2006-05-01
Network-centric force optimization is the problem of threat engagement and dynamic Weapon-Target Allocation (WTA) across the force. The goal is to allocate and schedule defensive weapon resources over a given period of time so as to achieve certain battle management objectives subject to resource and temporal constraints. The problem addressed in this paper is one of dynamic WTA and involves optimization across both resources (weapons) and time. We henceforth refer to this problem as the Weapon Allocation and Scheduling problem (WAS). This paper addresses and solves the WAS problem for two separate battle management objectives: (1) Threat Kill Maximization (TKM), and (2) Asset Survival Maximization (ASM). Henceforth, the WAS problems for the above objectives are referred to as WAS-TKM and WAS-ASM, respectively. Both WAS problems are NP-complete and belong to a class of multiple-resource-constrained optimal scheduling problems. While the above objectives appear to be intuitively similar from a battle management perspective, the two optimal scheduling problems are quite different in their complexity. We present a hybrid genetic algorithm (GA) that combines a traditional genetic algorithm with a simulated annealing-type algorithm for solving these problems. The hybrid GA approach proposed here uses a simulated annealing-type heuristic to compute the fitness of a GA-selected population. This step also optimizes the temporal dimension (scheduling) under resource and temporal constraints and differs significantly between the WAS-TKM and WAS-ASM problems. The proposed method provides schedules that are near optimal in short cycle times and have minimal perturbation from one cycle to the next.
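A toy illustration of the hybrid idea, not the authors' algorithm: a GA evolves weapon-to-target assignments while an annealing-type heuristic refines each individual before fitness ranking, mirroring the fitness-by-heuristic structure described above. The target values and kill probabilities are invented for the example.

```python
import math
import random

# Hypothetical static WTA instance: values of 4 targets and
# kill probabilities p[w][t] for 6 weapons (illustrative numbers).
values = [10.0, 7.0, 5.0, 3.0]
p = [[0.7, 0.5, 0.3, 0.2],
     [0.6, 0.6, 0.4, 0.3],
     [0.5, 0.4, 0.6, 0.2],
     [0.4, 0.5, 0.5, 0.4],
     [0.3, 0.3, 0.4, 0.6],
     [0.2, 0.4, 0.3, 0.5]]

def destroyed_value(assign):
    """Expected value destroyed: target t survives with probability
    prod(1 - p[w][t]) over the weapons w assigned to it."""
    total = 0.0
    for t, v in enumerate(values):
        survive = 1.0
        for w, tw in enumerate(assign):
            if tw == t:
                survive *= 1.0 - p[w][t]
        total += v * (1.0 - survive)
    return total

def anneal_refine(assign, rng, steps=30, t0=1.0):
    """Annealing-type heuristic: re-aim one weapon at a time,
    accepting worse allocations early to escape local optima."""
    cur = destroyed_value(assign)
    t = t0
    for _ in range(steps):
        cand = assign[:]
        cand[rng.randrange(len(cand))] = rng.randrange(len(values))
        cv = destroyed_value(cand)
        if cv >= cur or rng.random() < math.exp((cv - cur) / max(t, 1e-9)):
            assign, cur = cand, cv
        t *= 0.85
    return assign, cur

rng = random.Random(11)
pop = [[rng.randrange(len(values)) for _ in p] for _ in range(20)]
for _ in range(40):
    refined = [anneal_refine(a, rng) for a in pop]      # heuristic computes fitness
    refined.sort(key=lambda ac: -ac[1])
    elite = [a for a, _ in refined[:10]]
    pop = elite + [
        [rng.choice(pair) for pair in zip(*rng.sample(elite, 2))]  # uniform crossover
        for _ in range(10)
    ]
best, score = max((anneal_refine(a, rng) for a in pop), key=lambda ac: ac[1])
print(round(score, 2))
```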
Status of AlGaAs/GaAs heteroface solar cell technology
NASA Technical Reports Server (NTRS)
Rahilly, W. P.; Anspaugh, B.
1982-01-01
This paper reviews the various GaAs solar cell programs, past and ongoing, that are directed at bringing this particular technology to fruition. The discussion emphasizes space applications, both concentrator and flat plate. The rationale for pursuing GaAs cell technology is given, and the different cell types (concentrator, flat plate), approaches to fabricating the devices, the hybrid cells under investigation, and approaches to reducing cell mass are summarized. The outlook for the use of GaAs cell technology is given within the context of space application.
First-principle natural band alignment of GaN / dilute-As GaNAs alloy
Tan, Chee-Keong; Tansu, Nelson
2015-01-15
Density functional theory (DFT) calculations with the local density approximation (LDA) functional are employed to investigate the band alignment of dilute-As GaNAs alloys with respect to GaN. Conduction and valence band positions of the dilute-As GaNAs alloy with respect to GaN on an absolute energy scale are determined from the combination of bulk and surface DFT calculations. The resulting GaN / GaNAs conduction to valence band offset ratio is found to be approximately 5:95. Our theoretical finding is in good agreement with experimental observations, indicating that the upward movement of the valence band at low As content is mainly responsible for the drastic reduction of the energy band gap in dilute-As GaNAs. In addition, type-I band alignment of GaN / GaNAs is suggested as a reasonable approach for future device implementation with dilute-As GaNAs quantum wells, and a possible type-II quantum well active region can be formed by using an InGaN / dilute-As GaNAs heterostructure.
Long, Yi; Du, Zhi-jiang; Wang, Wei-dong; Dong, Wei
2016-01-01
A lower limb assistive exoskeleton is designed to help operators walk or carry payloads. The exoskeleton is required to shadow human motion intent accurately and compliantly to prevent incoordination. If the user's intention is estimated accurately, a precise position control strategy will improve collaboration between the user and the exoskeleton. In this paper, a hybrid position control scheme, combining sliding mode control (SMC) with a cerebellar model articulation controller (CMAC) neural network, is proposed to control the exoskeleton to react appropriately to human motion intent. A genetic algorithm (GA) is utilized to determine the optimal sliding surface and the sliding control law to improve performance of SMC. The proposed control strategy (SMC_GA_CMAC) is compared with three other types of approaches, that is, conventional SMC without optimization, optimal SMC with GA (SMC_GA), and SMC with CMAC compensation (SMC_CMAC), all of which are employed to track the desired joint angular position which is deduced from Clinical Gait Analysis (CGA) data. Position tracking performance is investigated with cosimulation using ADAMS and MATLAB/SIMULINK in two cases, of which the first case is without disturbances while the second case is with a bounded disturbance. The cosimulation results show the effectiveness of the proposed control strategy which can be employed in similar exoskeleton systems. PMID:27069353
Optimization of experimental design in fMRI: a general framework using a genetic algorithm.
Wager, Tor D; Nichols, Thomas E
2003-02-01
This article describes a method for selecting design parameters and a particular sequence of events in fMRI so as to maximize statistical power and psychological validity. Our approach uses a genetic algorithm (GA), a class of flexible search algorithms that optimize designs with respect to single or multiple measures of fitness. Two strengths of the GA framework are that (1) it operates with any sort of model, allowing for very specific parameterization of experimental conditions, including nonstandard trial types and experimentally observed scanner autocorrelation, and (2) it is flexible with respect to fitness criteria, allowing optimization over known or novel fitness measures. We describe how genetic algorithms may be applied to experimental design for fMRI, and we use the framework to explore the space of possible fMRI design parameters, with the goal of providing information about optimal design choices for several types of designs. In our simulations, we considered three fitness measures: contrast estimation efficiency, hemodynamic response estimation efficiency, and design counterbalancing. Although there are inherent trade-offs between these three fitness measures, GA optimization can produce designs that outperform random designs on all three criteria simultaneously. PMID:12595184
Zhou, Yongquan; Xie, Jian; Li, Liangliang; Ma, Mingzhi
2014-01-01
Bat algorithm (BA) is a novel stochastic global optimization algorithm. The cloud model is an effective tool for transforming between qualitative concepts and their quantitative representations. Based on the bat echolocation mechanism and the cloud model's strengths in representing uncertain knowledge, a new cloud model bat algorithm (CBA) is proposed. This paper focuses on remodeling the echolocation model based on the living and preying characteristics of bats, utilizing the transformation theory of the cloud model to depict the qualitative concept "bats approach their prey." Furthermore, a Lévy flight mode and a population information communication mechanism are introduced to balance exploration and exploitation. The simulation results show that the cloud model bat algorithm performs well on function optimization. PMID:24967425
NASA Astrophysics Data System (ADS)
Beucher, R.; Brown, R. W.
2013-12-01
One of the most significant advances in interpreting thermochronological data is arguably our ability to extract information about the rate and trajectory of cooling over a range of temperatures, rather than having to rely on the veracity of the simplification of assuming a single closure temperature specified by a rate of monotonic cooling. Modern thermochronometry data, such as apatite fission track and (U-Th)/He analysis, are particularly good examples of data amenable to this treatment as acceptably well calibrated kinetic models now exist for both systems. With ever larger data sets of this type being generated over ever larger areas the prospect of inverting very large amounts of such data distributed spatially over large areas offers new possibilities for constraining the thermal and erosional histories over length scales approximating whole orogens and sub-continents. The challenge though is in how to properly deal with joint inversion of multiple samples in a self-consistent manner while also utilising all the available information contained in the data. We describe a new approach to this problem, called the Community of Family Circles (CFC) algorithm, which extracts information from spatially distributed apatite fission track ages (AFT) and track length distributions (TLD). The method is based on the rationale that the 3D geothermal field of the crust varies smoothly through space and time because of the efficiency of thermal diffusion. Our approach consists of seeking groups of spatially adjacent samples, or families, within a given circular radius for which a common thermal history is appropriate. The temperature offsets between individual time-temperature paths are determined relative to a low-pass filtered topographic surface, whose shape is assumed to mimic the shape of the isotherms in the partial annealing zone. This enables a single common thermal history to be shared, or interpolated, between the family members while still honouring the
Multi-robot task allocation based on two dimensional artificial fish swarm algorithm
NASA Astrophysics Data System (ADS)
Zheng, Taixiong; Li, Xueqin; Yang, Liangyi
2007-12-01
The multi-robot task allocation problem is to assign tasks to robots so as to minimize the processing time of the tasks. In order to obtain an optimal multi-robot task allocation scheme, a two-dimensional artificial fish swarm algorithm based approach is proposed in this paper. In this approach, the normal artificial fish is extended to two dimensions: each vector of the primary artificial fish is extended to an m-dimensional vector, so that each vector can express a group of tasks. By redefining the distance between artificial fish and the center of the artificial fish, the behavior of the two-dimensional fish is designed and the task allocation algorithm based on the two-dimensional artificial fish swarm algorithm is put forward. Finally, the proposed algorithm is applied to the multi-robot task allocation problem and compared with GA- and SA-based algorithms. Simulation and comparison results show that the proposed algorithm is effective.
Modeling of Nonlinear Systems using Genetic Algorithm
NASA Astrophysics Data System (ADS)
Hayashi, Kayoko; Yamamoto, Toru; Kawada, Kazuo
In this paper, a new modeling scheme using a genetic algorithm (GA) is proposed. The GA is an evolutionary computational method that simulates the mechanisms of heredity and the evolution of living things, and it is utilized in optimization and in searching for optimal solutions. Most process systems have nonlinearities, so it is necessary to model such systems accurately. However, it is difficult to build a suitable model for nonlinear systems, because most nonlinear systems have a complex structure. Therefore, the newly proposed modeling method for nonlinear systems uses a GA. According to the proposed scheme, the optimal structure and parameters of the nonlinear model are automatically generated.
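As a sketch of the general idea (not the authors' system), a real-coded GA can identify the parameters of an assumed nonlinear model from observed data. Here the model form y = a·x/(b+x), the synthetic data, and all GA settings are illustrative choices.

```python
import random

def ga_fit(xs, ys, bounds, pop_size=40, generations=120, seed=3):
    """Real-coded GA: chromosomes are parameter vectors, fitness is the
    sum-of-squares error of the assumed nonlinear model (lower is better)."""
    rng = random.Random(seed)
    model = lambda prm, x: prm[0] * x / (prm[1] + x)   # assumed model form

    def sse(prm):
        return sum((model(prm, x) - y) ** 2 for x, y in zip(xs, ys))

    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=sse)
        parents = pop[: pop_size // 2]                 # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = rng.sample(parents, 2)
            w = rng.random()
            child = [w * u + (1 - w) * v for u, v in zip(p1, p2)]  # blend crossover
            i = rng.randrange(len(child))
            lo, hi = bounds[i]
            child[i] += rng.gauss(0, 0.1 * (hi - lo))              # Gaussian mutation
            child[i] = min(max(child[i], lo), hi)
            children.append(child)
        pop = parents + children
    return min(pop, key=sse)

# Synthetic data generated from known parameters a=2.0, b=0.5.
xs = [0.1 * i for i in range(1, 30)]
ys = [2.0 * x / (0.5 + x) for x in xs]
a, b = ga_fit(xs, ys, bounds=[(0.0, 5.0), (0.0, 5.0)])
print(round(a, 2), round(b, 2))
```

In the paper's scheme the chromosome would also encode model structure; this sketch fixes the structure and evolves only the parameters.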
Genetic algorithms for modelling and optimisation
NASA Astrophysics Data System (ADS)
McCall, John
2005-12-01
Genetic algorithms (GAs) are a heuristic search and optimisation technique inspired by natural evolution. They have been successfully applied to a wide range of real-world problems of significant complexity. This paper is intended as an introduction to GAs aimed at immunologists and mathematicians interested in immunology. We describe how to construct a GA and the main strands of GA theory before speculatively identifying possible applications of GAs to the study of immunology. An illustrative example of using a GA for a medical optimal control problem is provided. The paper also includes a brief account of the related area of artificial immune systems.
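The canonical GA construction described above (selection, crossover, mutation over bit strings) can be sketched as follows; the operator choices and rates are illustrative defaults, not prescriptions from the paper.

```python
import random

def genetic_algorithm(fitness, n_bits=20, pop_size=30, generations=60,
                      crossover_rate=0.9, mutation_rate=0.02, seed=1):
    """Minimal generational GA with tournament selection,
    one-point crossover, and bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        def tournament():
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        children = []
        while len(children) < pop_size:
            p1, p2 = tournament(), tournament()
            if rng.random() < crossover_rate:
                cut = rng.randrange(1, n_bits)          # one-point crossover
                c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            else:
                c1, c2 = p1[:], p2[:]
            for child in (c1, c2):
                for i in range(n_bits):
                    if rng.random() < mutation_rate:    # bit-flip mutation
                        child[i] ^= 1
                children.append(child)
        pop = children[:pop_size]
        best = max(pop + [best], key=fitness)           # track best-so-far
    return best

# One-max toy problem: maximise the number of 1-bits.
solution = genetic_algorithm(sum)
print(sum(solution))
```

Any problem encodable as a bit string can reuse this loop by swapping in a different `fitness` function, which is the flexibility the paper emphasizes.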
Sobel, E.; Lange, K.; O`Connell, J.R.
1996-12-31
Haplotyping is the logical process of inferring gene flow in a pedigree based on phenotyping results at a small number of genetic loci. This paper formalizes the haplotyping problem and suggests four algorithms for haplotype reconstruction. These algorithms range from exhaustive enumeration of all haplotype vectors to combinatorial optimization by simulated annealing. Application of the algorithms to published genetic analyses shows that manual haplotyping is often erroneous. Haplotyping is employed in screening pedigrees for phenotyping errors and in positional cloning of disease genes from conserved haplotypes in population isolates. 26 refs., 6 figs., 3 tabs.
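For contrast with exhaustive enumeration, combinatorial optimization by simulated annealing follows the generic recipe below; the bit-string matching task is a toy stand-in for haplotype reconstruction, not the paper's algorithm.

```python
import math
import random

def simulated_annealing(cost, neighbour, x0, t0=1.0, cooling=0.995,
                        steps=5000, seed=0):
    """Generic simulated annealing: always accept improvements, accept
    worse moves with probability exp(-delta/T), geometric cooling."""
    rng = random.Random(seed)
    x, c = x0, cost(x0)
    best, best_c = x, c
    t = t0
    for _ in range(steps):
        y = neighbour(x, rng)
        cy = cost(y)
        if cy <= c or rng.random() < math.exp(-(cy - c) / t):
            x, c = y, cy
            if c < best_c:
                best, best_c = x, c
        t *= cooling
    return best, best_c

# Toy combinatorial search: recover an alternating bit string.
target = [i % 2 for i in range(16)]
cost = lambda s: sum(a != b for a, b in zip(s, target))

def flip_one(s, rng):
    s = s[:]
    s[rng.randrange(len(s))] ^= 1   # single bit-flip neighbourhood
    return s

best, best_cost = simulated_annealing(cost, flip_one, [0] * 16)
print(best_cost)
```

A haplotyping version would replace the bit string with a haplotype vector and the cost with a likelihood-based penalty, but the annealing loop is unchanged.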
GaAs/AlOx high-contrast grating mirrors for mid-infrared VCSELs
NASA Astrophysics Data System (ADS)
Almuneau, G.; Laaroussi, Y.; Chevallier, C.; Genty, F.; Fressengeas, N. s.; Cerutti, L.; Gauthier-Lafaye, Olivier
2015-02-01
Mid-infrared vertical cavity surface emitting lasers (MIR-VCSELs) are very attractive compact sources for spectroscopic measurements above 2 μm, relevant for molecule sensing in various application domains. A long-standing issue for long-wavelength VCSELs is the large structure thickness affecting the laser properties, compounded in the MIR by the tricky technological implementation of the antimonide alloy system. In this paper, we propose a new geometry for MIR-VCSELs including both lateral confinement by an oxide aperture and a high-contrast sub-wavelength grating mirror (HCG mirror) formed by the high-contrast AlOx/GaAs combination in place of the GaSb/AlAsSb top Bragg reflector. In addition to drastically simplifying the vertical stack, the HCG mirror allows the beam properties to be controlled through its design. The robust design of the HCG has been ensured by an original optimization method based on a particle swarm optimization algorithm combined with an anti-optimization one, thus allowing large error tolerance in the nano-fabrication. Oxide-based electro-optical confinement has been adapted to mid-infrared lasers by using a metamorphic approach with an (Al)GaAs layer directly epitaxially grown on the GaSb-based VCSEL bottom structure. This approach combines the advantages of the well-controlled oxidation of AlAs layers and the efficient Sb-based gain media for mid-infrared emission. We finally present the results obtained on electrically pumped mid-IR VCSEL structures, which include oxide aperturing for lateral confinement and HCGs as high-reflectivity output mirrors, both based on AlxOy/GaAs heterostructures.
Use of genetic algorithms for computer-aided diagnosis of breast cancers from image features
NASA Astrophysics Data System (ADS)
Floyd, Carey E., Jr.; Tourassi, Georgia D.; Baker, Jay A.
1996-04-01
In this investigation we explore genetic algorithms as a technique to train the weights in a feed-forward neural network designed to predict breast cancer based on mammographic findings and patient history. Mammograms were obtained from 206 patients who underwent breast biopsy. Mammographic findings were recorded by radiologists for each patient. In addition, the outcome of the biopsy was recorded. Of the 206 cases, 73 were malignant while 133 were benign at the time of biopsy. A genetic algorithm (GA) was developed to adjust the weights of an artificial neural network (ANN) so that the ANN would output the outcome of the biopsy when the mammographic findings were given as inputs. The GA is a technique for function optimization that reflects biological genetic evolution. The ANN was a fully connected feed-forward network using a sigmoid activation with 11 inputs, one hidden layer with 10 nodes, and one output node (benign/malignant). The GA approach allows much flexibility in selecting the function to be optimized. In this work both mean-squared error (MSE) and receiver operating characteristic (ROC) curve area (Az) were explored as optimization criteria. The system was trained using bootstrap sampling. Optimizing for the two criteria results in different solutions. The 'best' solution was obtained by minimizing a linear combination of MSE and (1-Az). ROC areas were 0.82 plus or minus 0.07, somewhat less than those obtained using backpropagation for ANN training: 0.90 plus or minus 0.05. This is the first description of a genetic algorithm for breast cancer diagnosis. The novel advantage of this technique is the ability to optimize the system for maximizing ROC area rather than minimizing mean-squared error. A new technique for computer-aided diagnosis of breast cancer has been explored. The flexibility of the GA approach allows optimization of cost functions that have relevance to breast cancer prediction.
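The idea of GA-trained network weights can be sketched independently of the mammography data (which is not available here): the toy below evolves the nine weights of a hypothetical 2-2-1 sigmoid network on XOR, minimizing MSE; an ROC-area fitness would slot into the same loop in place of `mse`.

```python
import math
import random

def sig(v):
    v = max(-60.0, min(60.0, v))          # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-v))

def forward(weights, x):
    """2-2-1 feed-forward net; the flat weight vector holds each hidden
    unit's two weights and bias, then the output unit's weights and bias."""
    h = [sig(weights[3 * i] * x[0] + weights[3 * i + 1] * x[1] + weights[3 * i + 2])
         for i in range(2)]
    return sig(weights[6] * h[0] + weights[7] * h[1] + weights[8])

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]   # XOR

def mse(weights):
    return sum((forward(weights, x) - t) ** 2 for x, t in data) / len(data)

# Evolutionary loop: keep the best 20 of 60, refill with Gaussian mutants.
rng = random.Random(7)
pop = [[rng.uniform(-5, 5) for _ in range(9)] for _ in range(60)]
for _ in range(200):
    pop.sort(key=mse)
    survivors = pop[:20]
    pop = survivors + [
        [w + rng.gauss(0, 0.3) for w in rng.choice(survivors)]
        for _ in range(40)
    ]
best = min(pop, key=mse)
print(round(mse(best), 3))
```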
Comparison of Metabolic Pathways in Escherichia coli by Using Genetic Algorithms
Ortegon, Patricia; Poot-Hernández, Augusto C.; Perez-Rueda, Ernesto; Rodriguez-Vazquez, Katya
2015-01-01
In order to understand how cellular metabolism has taken its modern form, the conservation and variations between metabolic pathways were evaluated by using a genetic algorithm (GA). The GA approach considered information on the complete metabolism of the bacterium Escherichia coli K-12, as deposited in the KEGG database, and the enzymes belonging to a particular pathway were transformed into enzymatic step sequences by using the breadth-first search algorithm. These sequences represent contiguous enzymes linked to each other, based on their catalytic activities as they are encoded in the Enzyme Commission numbers. In a posterior step, these sequences were compared using a GA in an all-against-all (pairwise comparisons) approach. Individual reactions were chosen based on their measure of fitness to act as parents of offspring, which constitute the new generation. The sequences compared were used to construct a similarity matrix (of fitness values) that was then considered to be clustered by using a k-medoids algorithm. A total of 34 clusters of conserved reactions were obtained, and their sequences were finally aligned with a multiple-sequence alignment GA optimized to align all the reaction sequences included in each group or cluster. From these comparisons, maps associated with the metabolism of similar compounds also contained similar enzymatic step sequences, reinforcing the Patchwork Model for the evolution of metabolism in E. coli K-12, an observation that can be expanded to other organisms, for which there is metabolism information. Finally, our mapping of these reactions is discussed, with illustrations from a particular case. PMID:25973143
Multisensor data fusion algorithm development
Yocky, D.A.; Chadwick, M.D.; Goudy, S.P.; Johnson, D.K.
1995-12-01
This report presents a two-year LDRD research effort into multisensor data fusion. We approached the problem by addressing the available types of data, preprocessing that data, and developing fusion algorithms using that data. The report reflects these three distinct areas. First, the possible data sets for fusion are identified. Second, automated registration techniques for imagery data are analyzed. Third, two fusion techniques are presented. The first fusion algorithm is based on the two-dimensional discrete wavelet transform. Using test images, the wavelet algorithm is compared against intensity modulation and intensity-hue-saturation image fusion algorithms that are available in commercial software. The wavelet approach outperforms the other two fusion techniques by preserving spectral/spatial information more precisely. The wavelet fusion algorithm was also applied to Landsat Thematic Mapper and SPOT panchromatic imagery data. The second algorithm is based on a linear-regression technique. We analyzed the technique using the same Landsat and SPOT data.
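The second, linear-regression technique can be illustrated schematically. The sketch below is a guess at the general shape of regression-based detail injection (fit the low-resolution band to the panchromatic band, then inject the scaled residual), not the report's actual algorithm; the sample pixel values are invented.

```python
def linreg(xs, ys):
    """Ordinary least squares for y ≈ a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def regression_fuse(ms, pan):
    """Hypothetical fusion sketch: regress the multispectral band against
    the panchromatic band, then inject the residual pan detail scaled by
    the regression gain."""
    a, b = linreg(ms, pan)
    predicted = [a * m + b for m in ms]
    return [m + (p - q) / a for m, p, q in zip(ms, pan, predicted)]

# When pan is exactly linear in ms there is no extra detail to inject,
# so the fused band equals the original multispectral band.
ms = [10.0, 20.0, 30.0, 40.0]
pan = [2 * m + 5 for m in ms]
fused = regression_fuse(ms, pan)
print(fused)
```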
NASA Technical Reports Server (NTRS)
Goodrich, John W.
1995-01-01
Two methods for developing high order single step explicit algorithms on symmetric stencils with data on only one time level are presented. Examples are given for the convection and linearized Euler equations with up to eighth order accuracy in both space and time in one space dimension, and up to sixth order in two space dimensions. The method of characteristics is generalized to nondiagonalizable hyperbolic systems by using exact local polynomial solutions of the system, and the resulting exact propagator methods automatically incorporate the correct multidimensional wave propagation dynamics. Multivariate Taylor or Cauchy-Kowalevskaya expansions are also used to develop algorithms. Both of these methods can be applied to obtain algorithms of arbitrarily high order for hyperbolic systems in multiple space dimensions. Cross derivatives are included in the local approximations used to develop the algorithms in this paper in order to obtain high order accuracy, and improved isotropy and stability. Efficiency in meeting global error bounds is an important criterion for evaluating algorithms, and the higher order algorithms are shown to be up to several orders of magnitude more efficient even though they are more complex. Stable high order boundary conditions for the linearized Euler equations are developed in one space dimension, and demonstrated in two space dimensions.
NASA Astrophysics Data System (ADS)
Kerkhoff, A.; Ling, H.
2009-12-01
We apply Pareto genetic algorithm (GA) optimization to the design of antenna elements for use in the Long Wavelength Array (LWA), a large, low-frequency radio telescope currently under development. By manipulating antenna geometry, the Pareto GA simultaneously optimizes the received Galactic background or “sky” noise level and radiation patterns of the antenna over all frequencies. Geometrical constraints are handled explicitly in the GA in order to guarantee the realizability, and to impart control over the monetary cost of the generated designs. The antenna elements considered are broadband planar dipoles arranged horizontally over the ground. It is demonstrated that the Pareto GA approach generates a set of designs, which exhibit a wide range of trade-offs between the two design objectives, and satisfy all constraints. Multiple GA executions are performed to determine how antenna performance trade-offs are affected by different geometrical constraint values, feed impedance values, radiating element shapes and orientations, and ground conditions. Two different planar dipole antenna designs are constructed, and antenna input impedance and sky noise drift scan measurements are performed to validate the results of the GA.
Global path planning of mobile robots using a memetic algorithm
NASA Astrophysics Data System (ADS)
Zhu, Zexuan; Wang, Fangxiao; He, Shan; Sun, Yiwen
2015-08-01
In this paper, a memetic algorithm for global path planning (MAGPP) of mobile robots is proposed. MAGPP is a synergy of genetic algorithm (GA) based global path planning and a local path refinement. Particularly, candidate path solutions are represented as GA individuals and evolved with evolutionary operators. In each GA generation, the local path refinement is applied to the GA individuals to rectify and improve the paths encoded. MAGPP is characterised by a flexible path encoding scheme, which is introduced to encode the obstacles bypassed by a path. Both path length and smoothness are considered as fitness evaluation criteria. MAGPP is tested on simulated maps and compared with other counterpart algorithms. The experimental results demonstrate the efficiency of MAGPP and it is shown to obtain better solutions than the other compared algorithms.
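A memetic algorithm in the sense used here (a GA plus per-individual local refinement each generation) can be sketched on a toy continuous problem; the path-encoding and obstacle details of MAGPP are not reproduced, and the sphere objective is purely illustrative.

```python
import random

def local_refine(x, cost, rng, step=0.05, tries=10):
    """Lamarckian local search: keep single-coordinate perturbations that help."""
    c = cost(x)
    for _ in range(tries):
        i = rng.randrange(len(x))
        y = x[:]
        y[i] += rng.uniform(-step, step)
        if cost(y) < c:
            x, c = y, cost(y)
    return x

def memetic(cost, dim=5, pop_size=20, generations=80, seed=5):
    rng = random.Random(seed)
    pop = [[rng.uniform(-2, 2) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop = [local_refine(x, cost, rng) for x in pop]   # refinement step
        pop.sort(key=cost)
        elite = pop[: pop_size // 2]
        pop = elite + [
            [(u + v) / 2 + rng.gauss(0, 0.1)              # blend crossover + mutation
             for u, v in zip(*rng.sample(elite, 2))]
            for _ in range(pop_size - len(elite))
        ]
    return min(pop, key=cost)

sphere = lambda x: sum(v * v for v in x)
best = memetic(sphere)
print(round(sphere(best), 3))
```

In MAGPP the individuals would be encoded paths and the refinement a geometric path-shortening step, but the alternation of global evolution and local repair is the same.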
ERIC Educational Resources Information Center
Drake, Michael
2011-01-01
One debate that periodically arises in mathematics education is the issue of how to teach calculation more effectively. "Modern" approaches seem to initially favour mental calculation, informal methods, and the development of understanding before introducing written forms, while traditionalists tend to champion particular algorithms. The debate is…
Synthesis of Greedy Algorithms Using Dominance Relations
NASA Technical Reports Server (NTRS)
Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.
2010-01-01
Greedy algorithms exploit problem structure and constraints to achieve linear-time performance. Yet there is still no completely satisfactory way of constructing greedy algorithms. For example, the Greedy Algorithm of Edmonds depends upon translating a problem into an algebraic structure called a matroid, but the existence of such a translation can be as hard to determine as the existence of a greedy algorithm itself. An alternative characterization of greedy algorithms is in terms of dominance relations, a well-known algorithmic technique used to prune search spaces. We demonstrate a process by which dominance relations can be methodically derived for a number of greedy algorithms, including activity selection and prefix-free codes. By incorporating our approach into an existing framework for algorithm synthesis, we demonstrate that it could be the basis for an effective engineering method for greedy algorithms. We also compare our approach with other characterizations of greedy algorithms.
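Activity selection, cited above, is the textbook case where a dominance argument justifies the greedy choice: the earliest-finishing activity dominates any alternative first choice, since any schedule can be rewritten to start with it at no loss. A minimal implementation:

```python
def select_activities(intervals):
    """Classic greedy: sort by finish time, then repeatedly take the
    first activity compatible with the last one chosen."""
    chosen = []
    last_finish = float("-inf")
    for start, finish in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_finish:        # compatible with the schedule so far
            chosen.append((start, finish))
            last_finish = finish
    return chosen

acts = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)]
print(select_activities(acts))
```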
Using a hybrid genetic algorithm and fuzzy logic for metabolic modeling
Yen, J.; Lee, B.; Liao, J.C.
1996-12-31
The identification of metabolic systems is a complex task due to the complexity of the system and limited knowledge about the model. Mathematical equations and ODEs have been used to capture the structure of the model, and conventional optimization techniques have been used to identify its parameters. In general, however, a pure mathematical formulation of the model is difficult due to parametric uncertainty and incomplete knowledge of mechanisms. In this paper, we propose a modeling approach that (1) uses a fuzzy rule-based model to augment algebraic enzyme models that are incomplete, and (2) uses a hybrid genetic algorithm to identify uncertain parameters in the model. The hybrid genetic algorithm (GA) integrates a GA with the simplex method in functional optimization to improve the GA's convergence rate. We have applied this approach to modeling the rate of three enzyme reactions in E. coli central metabolism. The proposed modeling strategy allows (1) easy incorporation of qualitative insights into a pure mathematical model and (2) adaptive identification and optimization of key parameters to fit system behaviors observed in biochemical experiments.
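The hybrid scheme in this abstract couples a GA's global search with the simplex method's fast local convergence. A minimal sketch of that pattern follows; the toy least-squares objective, the population settings, and the coordinate-descent step standing in for the simplex refinement are all illustrative assumptions, not details from the paper.

```python
import random

def hybrid_ga(fitness, dim, pop_size=20, generations=60, seed=1):
    """Tiny GA whose best individual is refined each generation by a
    local search step (a crude stand-in for the simplex method)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]

    def local_refine(x, step=0.1):
        # Coordinate descent: keep any single-coordinate move that improves fitness.
        best = list(x)
        for i in range(len(x)):
            for delta in (+step, -step):
                trial = list(best)
                trial[i] += delta
                if fitness(trial) < fitness(best):
                    best = trial
        return best

    for _ in range(generations):
        pop.sort(key=fitness)
        pop[0] = local_refine(pop[0])            # hybrid local-refinement step
        parents = pop[:pop_size // 2]            # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            w = rng.random()                     # arithmetic crossover
            child = [w * ai + (1 - w) * bi for ai, bi in zip(a, b)]
            if rng.random() < 0.2:               # Gaussian mutation
                child[rng.randrange(dim)] += rng.gauss(0, 0.5)
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

# Toy "parameter identification": recover parameters minimizing squared error.
target = [1.0, -2.0, 0.5]
f = lambda x: sum((xi - ti) ** 2 for xi, ti in zip(x, target))
best = hybrid_ga(f, dim=3)
```

The local refinement accelerates convergence near an optimum, which is precisely the role the paper assigns to the simplex method.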
High-quality eutectic-metal-bonded AlGaAs-GaAs thin films on Si substrates
NASA Astrophysics Data System (ADS)
Venkatasubramanian, R.; Timmons, M. L.; Humphreys, T. P.; Keyes, B. M.; Ahrenkiel, R. K.
1992-02-01
Device quality GaAs-AlGaAs thin films have been obtained on Si substrates, using a novel approach called eutectic-metal-bonding (EMB). This involves the lattice-matched growth of GaAs-AlGaAs thin films on Ge substrates, followed by bonding onto a Si wafer. The Ge substrates are selectively removed by a CF4/O2 plasma etch, leaving high-quality GaAs-AlGaAs thin films on Si substrates. A minority-carrier lifetime of 103 ns has been obtained in an EMB GaAs-AlGaAs double heterostructure on Si, which is nearly forty times higher than the state-of-the-art lifetime for heteroepitaxial GaAs on Si, and represents the largest reported minority-carrier lifetime for a freestanding GaAs thin film. In addition, a negligible residual elastic strain in the EMB GaAs-AlGaAs films has been determined from Raman spectroscopy measurements.
Constructive neural network learning algorithms
Parekh, R.; Yang, Jihoon; Honavar, V.
1996-12-31
Constructive algorithms offer an approach for incremental construction of potentially minimal neural network architectures for pattern classification tasks. These algorithms obviate the need for an ad-hoc a-priori choice of the network topology. The constructive algorithm design involves alternately augmenting the existing network topology by adding one or more threshold logic units and training the newly added threshold neuron(s) using a stable variant of the perceptron learning algorithm (e.g., the pocket algorithm, the thermal perceptron, and the barycentric correction procedure). Several constructive algorithms including tower, pyramid, tiling, upstart, and perceptron cascade have been proposed for 2-category pattern classification. These algorithms differ in terms of their topological and connectivity constraints as well as the training strategies used for individual neurons.
Investigation of range extension with a genetic algorithm
Austin, A. S., LLNL
1998-03-04
Range optimization is one of the tasks associated with the development of cost-effective, stand-off, air-to-surface munitions systems. The search for the optimal input parameters that will result in the maximum achievable range often employs conventional Monte Carlo techniques. Monte Carlo approaches can be time-consuming, costly, and insensitive to mutually dependent parameters and epistatic parameter effects. An alternative search and optimization technique is available in genetic algorithms. In the experiments discussed in this report, a simplified platform motion simulator was the fitness function for a genetic algorithm. The parameters to be optimized were the inputs to this motion generator and the simulator's output (terminal range) was the fitness measure. The parameters of interest were initial launch altitude, initial launch speed, wing angle-of-attack, and engine ignition time. The parameter values the GA produced were validated by Monte Carlo investigations employing a full-scale six-degree-of-freedom (6 DOF) simulation. The best results produced by Monte Carlo processes using values based on the GA-derived parameters were within 1% of the ranges generated by the simplified model using the evolved parameter values. This report has five sections. Section 2 discusses the motivation for the range extension investigation and reviews the surrogate flight model developed as a fitness function for the genetic algorithm tool. Section 3 details the representation and implementation of the task within the genetic algorithm framework. Section 4 discusses the results. Section 5 concludes the report with a summary and suggestions for further research.
Cakar, Tarik; Koker, Rasit
2015-01-01
A particle swarm optimization (PSO) algorithm has been used to solve the single machine total weighted tardiness problem (SMTWT) with unequal release dates. To find the best solutions, three different solution approaches have been used. To prepare the subhybrid solution system, genetic algorithms (GA) and simulated annealing (SA) have been used. In the subhybrid system (GA and SA), whenever GA obtains a solution at any stage, that solution is taken by SA and used as an initial solution; when SA finds a better solution, it stops working and returns that solution to GA. After GA finishes working, the obtained solution is given to PSO, which searches for a better solution and later sends its result back to GA. The three different solution systems thus work together. The neurohybrid system uses PSO as the main optimizer, with SA and GA as local search tools. At each stage, the local optimizers perform exploitation around the best particle. In addition to the local search tools, a neurodominance rule (NDR) has been used to improve the performance of the final solution of the hybrid-PSO system. NDR checks sequential jobs according to the total weighted tardiness factor. The whole system is named the neurohybrid-PSO solution system. PMID:26221134
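The handoff pattern described in this abstract, where each optimizer starts from the best solution the previous one produced, can be sketched with two stand-in stages; the quadratic objective and both stage implementations (a broad random sampler in place of GA/PSO, a minimal simulated annealer for SA) are illustrative assumptions, not the authors' method.

```python
import math
import random

rng = random.Random(0)

def explore(x, f, tries=60, spread=2.0):
    """Broad sampling around the incumbent: stands in for the GA/PSO role."""
    best = x
    for _ in range(tries):
        cand = [xi + rng.uniform(-spread, spread) for xi in best]
        if f(cand) < f(best):
            best = cand
    return best

def anneal(x, f, iters=400, t0=1.0):
    """Minimal simulated annealing: accepts uphill moves with prob exp(-d/T)."""
    cur, cur_f = x, f(x)
    best, best_f = cur, cur_f
    for k in range(iters):
        t = t0 * (1 - k / iters) + 1e-9          # linear cooling schedule
        cand = [xi + rng.gauss(0, 0.2) for xi in cur]
        cand_f = f(cand)
        if cand_f < cur_f or rng.random() < math.exp((cur_f - cand_f) / t):
            cur, cur_f = cand, cand_f
            if cur_f < best_f:
                best, best_f = cur, cur_f
    return best

f = lambda x: (x[0] - 3) ** 2 + (x[1] + 1) ** 2
x = [0.0, 0.0]
for _ in range(3):          # solvers hand their best solution to each other
    x = explore(x, f)       # global stage proposes a solution...
    x = anneal(x, f)        # ...local stage refines it, and the cycle repeats
```

The key design point is that no stage restarts from scratch: each inherits the incumbent, so exploration and exploitation alternate around an ever-improving solution.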
A genetic algorithm for ground-based telescope observation scheduling
NASA Astrophysics Data System (ADS)
Mahoney, William; Veillet, Christian; Thanjavur, Karun
2012-09-01
A prototype genetic algorithm (GA) is being developed to provide assisted and ultimately automated observation scheduling functionality. Harnessing the logic developed for manual queue preparation, the GA can build suitable sets of queues for the potential combinations of environmental and atmospheric conditions. Evolving one step further, the GA can select the most suitable observation for any moment in time, based on allocated priorities, agency balances, and real-time availability of sky conditions.
GA-Based Computer-Aided Electromagnetic Design of Two-Phase SRM for Compressor Drives
NASA Astrophysics Data System (ADS)
Kano, Yoshiaki; Kosaka, Takashi; Matsui, Nobuyuki
This paper presents an approach to Genetic Algorithm (GA)-based computer-aided autonomous electromagnetic design of 2-phase Switched Reluctance Motor (SRM) drives. The proposed drive is designed for compressor drives in low-priced refrigerators as an alternative to existing brushless DC motor drives with rare-earth magnets. In the proposed design approach, three GA loops work to optimize the lamination design so as to meet the requirements of the target application under the given constraints while simultaneously fine-tuning the control parameters. To achieve the design optimization within an acceptable CPU time, the repeated calculation required for fitness evaluation in the proposed approach does not use FEM, but consists of a geometric flux-tube-based non-linear magnetic analysis and a dynamic simulator based on an analytical expression of the magnetizing curves obtained from the non-linear magnetic analysis. The design results show the proposed approach can autonomously find a feasible design solution for the SRM drive for the target application from a huge search space. Experimental studies using a 2-phase 8/6 prototype manufactured in accordance with the optimized design parameters show the validity of the proposed approach.
Crossover Improvement for the Genetic Algorithm in Information Retrieval.
ERIC Educational Resources Information Center
Vrajitoru, Dana
1998-01-01
In information retrieval (IR), the aim of genetic algorithms (GA) is to help a system to find, in a huge documents collection, a good reply to a query expressed by the user. Analysis of phenomena seen during the implementation of a GA for IR has led to a new crossover operation, which is introduced and compared to other learning methods.…
NASA Astrophysics Data System (ADS)
Hassanzadeh, Zeinabe; Kompany-Zareh, Mohsen; Ghavami, Raouf; Gholami, Somayeh; Malek-Khatabi, Atefe
2015-10-01
The configuring of a radial basis function neural network (RBFN) consists of optimizing the architecture and the network parameters (centers, widths, and weights). Methods such as the genetic algorithm (GA), K-means, and cluster analysis (CA) are among the center selection methods. In most reports on RBFN modeling, optimum centers are selected among the rows of the descriptor matrix. A combination of RBFN and GA is introduced for better description of quantitative structure-property relationship (QSPR) models. In this method, centers are not necessarily rows of the independent matrix and can be located at any point of the sample space. In the proposed approach, initial centers are randomly selected from the calibration set. Then GA changes the locations of the initially selected centers to find the optimum positions of centers over the whole space of the scores matrix, in order to obtain the highest prediction ability. This approach is called whole-space GA-RBFN (wsGA-RBFN) and is applied to predict the adsorption coefficients (logk) of 40 small molecules on the surface of multi-walled carbon nanotubes (MWCNTs). The data, known as data set1, consist of five solute descriptors [R, π, α, β, V] of the molecules. The prediction ability of wsGA-RBFN is compared to GA-RBFN and MLR models. The obtained Q2 values for wsGA-RBFN, GA-RBFN, and MLR are 0.95, 0.85, and 0.78, respectively, which shows the merit of wsGA-RBFN. The method is also applied to the logarithm of the surface-area-normalized adsorption coefficients (logKSA) of organic compounds (OCs) on the MWCNT surface. Data set2 includes 69 aromatic molecules with 13 physicochemical properties of the OCs. Thirty-nine of these molecules were similar to those of data set1 and the others were aromatic compounds including both small and large molecules. The prediction ability of wsGA-RBFN for the second data set was compared to GA-RBFN; the Q2 values for wsGA-RBFN and GA-RBFN are obtained as 0.89 and 0.80, respectively.
2014-01-01
Background To improve on the tedious task of reconstructing gene networks by experimentally testing the possible interactions between genes, it has become a trend to adopt an automated reverse engineering procedure instead. Some evolutionary algorithms have been suggested for deriving network parameters. However, to infer large networks with an evolutionary algorithm, it is necessary to address two important issues: premature convergence and high computational cost. To tackle the former problem and enhance the performance of traditional evolutionary algorithms, it is advisable to use parallel-model evolutionary algorithms. To overcome the latter and speed up the computation, cloud computing offers a promising solution, most popularly through the MapReduce programming model, a fault-tolerant framework for implementing parallel algorithms to infer large gene networks. Results This work presents a practical framework to infer large gene networks, by developing and parallelizing a hybrid GA-PSO optimization method. Our parallel method is extended to work with the Hadoop MapReduce programming model and is executed in different cloud computing environments. To evaluate the proposed approach, we use the well-known open-source software GeneNetWeaver to create several yeast S. cerevisiae sub-networks and use them to produce gene profiles. Experiments have been conducted and the results analyzed. They show that our parallel approach can be successfully used to infer networks with desired behaviors and that the computation time can be largely reduced. Conclusions Parallel population-based algorithms can effectively determine network parameters and they perform better than the widely-used sequential algorithms in gene network inference. These parallel algorithms can be distributed to the cloud computing environment to speed up the computation. By coupling the parallel model population-based optimization method and the parallel
Multidisciplinary Multiobjective Optimal Design for Turbomachinery Using Evolutionary Algorithm
NASA Technical Reports Server (NTRS)
2005-01-01
This report summarizes Dr. Lian's efforts toward developing a robust and efficient tool for multidisciplinary and multi-objective optimal design for turbomachinery using evolutionary algorithms. This work consisted of two stages. In the first stage (July 2003 to June 2004), Dr. Lian focused on building the essential capabilities required for the project. More specifically, Dr. Lian worked on two subjects: an enhanced genetic algorithm (GA) and an integrated optimization system with a GA and a surrogate model. In the second stage (July 2004 to February 2005), Dr. Lian formulated aerodynamic optimization and structural optimization into a multi-objective optimization problem and performed multidisciplinary and multi-objective optimizations on a transonic compressor blade based on the proposed model. Dr. Lian's numerical results showed that the proposed approach can effectively reduce the blade weight and increase the stage pressure ratio in an efficient manner. In addition, the new design was structurally safer than the original design. Five conference papers and three journal papers were published on this topic by Dr. Lian.
NASA Astrophysics Data System (ADS)
Salami, M. J. E.; Tijani, I. B.; Abdullateef, A. I.; Aibinu, M. A.
2013-12-01
A hybrid optimization algorithm using Differential Evolution (DE) and a Genetic Algorithm (GA) is proposed in this study to address the problem of network parameter determination associated with the Nonlinear Autoregressive with eXogenous inputs Network (NARX-network). The proposed algorithm involves a two-level optimization scheme to search for both the optimal network architecture and the weights. The DE at the upper level is formulated as combinatorial optimization to search for the network architecture, while the associated network weights that minimize the prediction error are provided by the GA at the lower level. The performance of the algorithm is evaluated on identification of a laboratory rotary motion system. The system identification results show the effectiveness of the proposed algorithm for nonparametric model development.
GA-optimization for rapid prototype system demonstration
NASA Technical Reports Server (NTRS)
Kim, Jinwoo; Zeigler, Bernard P.
1994-01-01
An application of the Genetic Algorithm (GA) is discussed. A novel Hierarchical GA scheme was developed to solve complicated engineering problems which require optimization of a large number of parameters with high precision. High-level GAs search for the few parameters which are much more sensitive to the system performance. Low-level GAs search in more detail and employ a greater number of parameters for further optimization. Therefore, the complexity of the search is decreased and the computing resources are used more efficiently.
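The two-level idea above, a high-level search over the few sensitive parameters with a low-level search over the remaining ones, can be sketched as follows; the toy objective (in which one parameter dominates the fitness) and the simple searches standing in for the GA levels are illustrative assumptions, not the paper's system.

```python
import random

rng = random.Random(7)

# Toy objective: one highly sensitive parameter a, three less sensitive ones b.
f = lambda a, b: 100 * (a - 1.5) ** 2 + sum((bi - 0.5) ** 2 for bi in b)

def low_level(a, samples=200):
    """Fine search over the many less-sensitive parameters, with a held fixed."""
    best = [0.0, 0.0, 0.0]
    for _ in range(samples):
        cand = [bi + rng.uniform(-0.3, 0.3) for bi in best]
        if f(a, cand) < f(a, best):      # keep only improving moves
            best = cand
    return best

# High level: coarse sweep over the sensitive parameter only; each candidate
# is scored after the low level has tuned the remaining parameters for it.
candidates = [a / 10 for a in range(0, 31)]          # a in [0.0, 3.0]
best_a = min(candidates, key=lambda a: f(a, low_level(a)))
```

Because the high level handles only the sensitive parameter, its search space stays tiny, while the low level's larger space is explored with fine steps, mirroring the decomposition the abstract describes.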
Algorithm That Synthesizes Other Algorithms for Hashing
NASA Technical Reports Server (NTRS)
James, Mark
2010-01-01
An algorithm that includes a collection of several subalgorithms has been devised as a means of synthesizing still other algorithms (which could include computer code) that utilize hashing to determine whether an element (typically, a number or other datum) is a member of a set (typically, a list of numbers). Each subalgorithm synthesizes an algorithm (e.g., a block of code) that maps a static set of key hashes to a somewhat linear monotonically increasing sequence of integers. The goal in formulating this mapping is to cause the length of the sequence thus generated to be as close as practicable to the original length of the set and thus to minimize gaps between the elements. The advantage of the approach embodied in this algorithm is that it completely avoids the traditional approach of hash-key look-ups that involve either secondary hash generation and look-up or further searching of a hash table for a desired key in the event of collisions. This algorithm guarantees that it will never be necessary to perform a search or to generate a secondary key in order to determine whether an element is a member of a set. This algorithm further guarantees that any algorithm that it synthesizes can be executed in constant time. To enforce these guarantees, the subalgorithms are formulated to employ a set of techniques, each of which works very effectively covering a certain class of hash-key values. These subalgorithms are of two types, summarized as follows: Given a list of numbers, try to find one or more solutions in which, if each number is shifted to the right by a constant number of bits and then masked with a rotating mask that isolates a set of bits, a unique number is thereby generated. In a variant of the foregoing procedure, omit the masking. Try various combinations of shifting, masking, and/or offsets until the solutions are found. From the set of solutions, select the one that provides the greatest compression for the representation and is executable in the
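The first subalgorithm type described above, trying shift-and-mask combinations until every key in the set maps to a unique value, can be sketched directly; the search bounds and example keys are illustrative assumptions.

```python
def synthesize_hash(keys, max_shift=16, max_bits=8):
    """Search for a (shift, mask) pair under which every key maps to a
    distinct value -- the shift-and-mask subalgorithm described above."""
    for shift in range(max_shift):
        for bits in range(1, max_bits + 1):
            mask = (1 << bits) - 1
            mapped = [(k >> shift) & mask for k in keys]
            if len(set(mapped)) == len(keys):    # collision-free mapping found
                return shift, mask
    return None

# Example static key set; the synthesized (shift, mask) makes membership
# testing a constant-time operation with no secondary hashing or probing.
keys = [0x10, 0x24, 0x38, 0x4C]
result = synthesize_hash(keys)
```

Once such a pair is found, membership testing reduces to one shift, one mask, and one table lookup, which is why the synthesized algorithms run in constant time with no collision handling.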
Wang, Wei; Kreimeyer, Kory; Woo, Emily Jane; Ball, Robert; Foster, Matthew; Pandey, Abhishek; Scott, John; Botsis, Taxiarchis
2016-08-01
The sheer volume of textual information that needs to be reviewed and analyzed in many clinical settings requires the automated retrieval of key clinical and temporal information. The existing natural language processing systems are often challenged by the low quality of clinical texts and do not demonstrate the required performance. In this study, we focus on medical product safety report narratives and investigate the association of the clinical events with appropriate time information. We developed a novel algorithm for tagging and extracting temporal information from the narratives, and associating it with related events. The proposed algorithm minimizes the performance dependency on text quality by relying only on shallow syntactic information and primitive properties of the extracted event and time entities. We demonstrated the effectiveness of the proposed algorithm by evaluating its tagging and time assignment capabilities on 140 randomly selected reports from the US Vaccine Adverse Event Reporting System (VAERS) and the FDA (Food and Drug Administration) Adverse Event Reporting System (FAERS). We compared the performance of our tagger with the SUTime and HeidelTime taggers, and our algorithm's event-time associations with the Temporal Awareness and Reasoning Systems for Question Interpretation (TARSQI). We further evaluated the ability of our algorithm to correctly identify the time information for the events in the 2012 Informatics for Integrating Biology and the Bedside (i2b2) Challenge corpus. For the time tagging task, our algorithm performed better than the SUTime and the HeidelTime taggers (F-measure in VAERS and FAERS: Our algorithm: 0.86 and 0.88, SUTime: 0.77 and 0.74, and HeidelTime 0.75 and 0.42, respectively). In the event-time association task, our algorithm assigned an inappropriate timestamp for 25% of the events, while the TARSQI toolkit demonstrated a considerably lower performance, assigning inappropriate timestamps in 61.5% of the same
Bousquet, J; Anto, J M; Demoly, P; Schünemann, H J; Togias, A; Akdis, M; Auffray, C; Bachert, C; Bieber, T; Bousquet, P J; Carlsen, K H; Casale, T B; Cruz, A A; Keil, T; Lodrup Carlsen, K C; Maurer, M; Ohta, K; Papadopoulos, N G; Roman Rodriguez, M; Samolinski, B; Agache, I; Andrianarisoa, A; Ang, C S; Annesi-Maesano, I; Ballester, F; Baena-Cagnani, C E; Basagaña, X; Bateman, E D; Bel, E H; Bedbrook, A; Beghé, B; Beji, M; Ben Kheder, A; Benet, M; Bennoor, K S; Bergmann, K C; Berrissoul, F; Bindslev Jensen, C; Bleecker, E R; Bonini, S; Boner, A L; Boulet, L P; Brightling, C E; Brozek, J L; Bush, A; Busse, W W; Camargos, P A M; Canonica, G W; Carr, W; Cesario, A; Chen, Y Z; Chiriac, A M; Costa, D J; Cox, L; Custovic, A; Dahl, R; Darsow, U; Didi, T; Dolen, W K; Douagui, H; Dubakiene, R; El-Meziane, A; Fonseca, J A; Fokkens, W J; Fthenou, E; Gamkrelidze, A; Garcia-Aymerich, J; Gerth van Wijk, R; Gimeno-Santos, E; Guerra, S; Haahtela, T; Haddad, H; Hellings, P W; Hellquist-Dahl, B; Hohmann, C; Howarth, P; Hourihane, J O; Humbert, M; Jacquemin, B; Just, J; Kalayci, O; Kaliner, M A; Kauffmann, F; Kerkhof, M; Khayat, G; Koffi N'Goran, B; Kogevinas, M; Koppelman, G H; Kowalski, M L; Kull, I; Kuna, P; Larenas, D; Lavi, I; Le, L T; Lieberman, P; Lipworth, B; Mahboub, B; Makela, M J; Martin, F; Martinez, F D; Marshall, G D; Mazon, A; Melen, E; Meltzer, E O; Mihaltan, F; Mohammad, Y; Mohammadi, A; Momas, I; Morais-Almeida, M; Mullol, J; Muraro, A; Naclerio, R; Nafti, S; Namazova-Baranova, L; Nawijn, M C; Nyembue, T D; Oddie, S; O'Hehir, R E; Okamoto, Y; Orru, M P; Ozdemir, C; Ouedraogo, G S; Palkonen, S; Panzner, P; Passalacqua, G; Pawankar, R; Pigearias, B; Pin, I; Pinart, M; Pison, C; Popov, T A; Porta, D; Postma, D S; Price, D; Rabe, K F; Ratomaharo, J; Reitamo, S; Rezagui, D; Ring, J; Roberts, R; Roca, J; Rogala, B; Romano, A; Rosado-Pinto, J; Ryan, D; Sanchez-Borges, M; Scadding, G K; Sheikh, A; Simons, F E R; Siroux, V; Schmid-Grendelmeier, P D; Smit, H A; Sooronbaev, T; 
Stein, R T; Sterk, P J; Sunyer, J; Terreehorst, I; Toskala, E; Tremblay, Y; Valenta, R; Valeyre, D; Vandenplas, O; van Weel, C; Vassilaki, M; Varraso, R; Viegi, G; Wang, D Y; Wickman, M; Williams, D; Wöhrl, S; Wright, J; Yorgancioglu, A; Yusuf, O M; Zar, H J; Zernotti, M E; Zidarn, M; Zhong, N; Zuberbier, T
2012-01-01
Concepts of disease severity, activity, control and responsiveness to treatment are linked but different. Severity refers to the loss of function of the organs induced by the disease process or to the occurrence of severe acute exacerbations. Severity may vary over time and needs regular follow-up. Control is the degree to which therapy goals are currently met. These concepts have evolved over time for asthma in guidelines, task forces or consensus meetings. The aim of this paper is to generalize the approach of the uniform definition of severe asthma presented to WHO for chronic allergic and associated diseases (rhinitis, chronic rhinosinusitis, chronic urticaria and atopic dermatitis) in order to have a uniform definition of severity, control and risk, usable in most situations. It is based on the appropriate diagnosis, availability and accessibility of treatments, treatment responsiveness and associated factors such as comorbidities and risk factors. This uniform definition will allow a better definition of the phenotypes of severe allergic (and related) diseases for clinical practice, research (including epidemiology), public health purposes, education and the discovery of novel therapies. PMID:22382913
Hybrid UV Imager Containing Face-Up AlGaN/GaN Photodiodes
NASA Technical Reports Server (NTRS)
Zheng, Xinyu; Pain, Bedabrata
2005-01-01
A proposed hybrid ultraviolet (UV) image sensor would comprise a planar membrane array of face-up AlGaN/GaN photodiodes integrated with a complementary metal oxide/semiconductor (CMOS) readout-circuit chip. Each pixel in the hybrid image sensor would contain a UV photodiode on the AlGaN/GaN membrane, metal oxide/semiconductor field-effect transistor (MOSFET) readout circuitry on the CMOS chip underneath the photodiode, and a metal via connection between the photodiode and the readout circuitry (see figure). The proposed sensor design would offer all the advantages of comparable prior CMOS active-pixel sensors and AlGaN UV detectors while overcoming some of the limitations of prior (AlGaN/sapphire)/CMOS hybrid image sensors that have been designed and fabricated according to the methodology of flip-chip integration. AlGaN is a nearly ideal UV-detector material because its bandgap is wide and adjustable and it offers the potential to attain extremely low dark current. Integration of AlGaN with CMOS is necessary because at present there are no practical means of realizing readout circuitry in the AlGaN/GaN material system, whereas the means of realizing readout circuitry in CMOS are well established. In one variant of the flip-chip approach to integration, an AlGaN chip on a sapphire substrate is inverted (flipped) and then bump-bonded to a CMOS readout circuit chip; this variant results in poor quantum efficiency. In another variant of the flip-chip approach, an AlGaN chip on a crystalline AlN substrate would be bonded to a CMOS readout circuit chip; this variant is expected to result in narrow spectral response, which would be undesirable in many applications. Two other major disadvantages of flip-chip integration are large pixel size (a consequence of the need to devote sufficient area to each bump bond) and severe restriction on the photodetector structure. The membrane array of AlGaN/GaN photodiodes and the CMOS readout circuit for the proposed image sensor would
Genetic algorithm-based form error evaluation
NASA Astrophysics Data System (ADS)
Cui, Changcai; Li, Bing; Huang, Fugui; Zhang, Rencheng
2007-07-01
Form error evaluation of geometrical products is a nonlinear optimization problem, for which a solution has been attempted by different methods with some complexity. A genetic algorithm (GA) was developed to deal with the problem, which proved simple to understand and implement, and its key techniques have been investigated in detail. Firstly, the fitness function of the GA was discussed in depth as a bridge between the GA and the concrete problems to be solved. Secondly, the real-numbers-based representation of the desired solutions in the continuous-space optimization problem was discussed. Thirdly, several improved evolutionary strategies of the GA were described with emphasis. These evolutionary strategies were the selection operation of 'odd number selection plus roulette wheel selection', the crossover operation of 'arithmetic crossover between near relatives and far relatives', and the mutation operation of 'adaptive Gaussian' mutation. After evolving from generation to generation with these strategies, the initial population, produced stochastically around the least-squares solutions of the problem, is updated and improved iteratively until the best chromosome or individual of the GA appears. Finally, some examples were given to verify the evolutionary method. Experimental results show that the GA-based method can find desired solutions that are superior to the least-squares solutions, except for a few examples in which the GA-based method obtains results similar to those of the least-squares method. Compared with other optimization techniques, the GA-based method can obtain almost equal results but with less complicated models and computation time.
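A minimal sketch of such a GA applied to a form error problem follows, using the arithmetic crossover and Gaussian mutation named in the abstract; the synthetic roundness (circularity) data, the truncation-style selection replacing the roulette-wheel scheme, and all numeric settings are illustrative assumptions.

```python
import math
import random

rng = random.Random(5)

# Synthetic measurement: points on a circle of radius 2 centred at (1, 1).
pts = [(1 + 2 * math.cos(t), 1 + 2 * math.sin(t))
       for t in [2 * math.pi * i / 12 for i in range(12)]]

def roundness(c):
    """Form error fitness: spread of measured radii about candidate centre c."""
    radii = [math.hypot(x - c[0], y - c[1]) for x, y in pts]
    return max(radii) - min(radii)

# Chromosomes are real-valued (x, y) centre candidates, per the abstract's
# real-numbers-based representation.
pop = [[rng.uniform(-3, 3), rng.uniform(-3, 3)] for _ in range(30)]
for _ in range(60):
    pop.sort(key=roundness)
    elite = pop[:10]                         # truncation selection (simplified)
    children = []
    while len(elite) + len(children) < 30:
        a, b = rng.sample(elite, 2)
        w = rng.random()                     # arithmetic crossover
        child = [w * ai + (1 - w) * bi for ai, bi in zip(a, b)]
        if rng.random() < 0.3:               # Gaussian mutation
            child[rng.randrange(2)] += rng.gauss(0, 0.2)
        children.append(child)
    pop = elite + children
best = min(pop, key=roundness)
```

For perfect circular data the optimal centre gives zero form error, so the evolved `best` should land close to (1, 1).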
Application of Genetic Algorithms in Seismic Tomography
NASA Astrophysics Data System (ADS)
Soupios, Pantelis; Akca, Irfan; Mpogiatzis, Petros; Basokur, Ahmet; Papazachos, Constantinos
2010-05-01
The application of hybrid genetic algorithms in seismic tomography is examined, and the efficiency of least-squares and genetic methods, as representatives of local and global optimization respectively, is presented and evaluated. The robustness of both optimization methods has been tested and compared for the same source-receiver geometry and characteristics of the model structure (anomalies, etc.). A set of synthetic (noise-free) seismic refraction data was used for modeling. Specifically, cross-well, down-hole and typical refraction studies using 24 geophones and 5 shots were used to confirm the applicability of genetic algorithms in seismic tomography. To solve the forward modeling and estimate the traveltimes, the revisited ray bending method was used, supplemented by an approximate computation of the first Fresnel volume. The root mean square (rms) error was used as the misfit function and calculated for the entire random velocity model for each generation. At the end of each generation, based on the misfit of the individuals (velocity models), the selection, crossover and mutation steps (typical of genetic algorithms) were applied to encode the new generation, following evolutionary theory. To optimize the computation time, since the whole procedure is quite time consuming, the Matlab Distributed Computing Environment (MDCE) was used on a multicore engine. During the tests, we noticed that the fast convergence that the algorithm initially exhibits (first 5 generations) is followed by progressively slower improvements of the reconstructed velocity models. Thus, to improve the final tomographic models, a hybrid genetic algorithm (GA) approach was adopted by combining the GAs with a local optimization method after several generations, on the basis of the convergence of the resulting models. This approach is shown to be efficient, as it directs the solution search towards a model region close to the global minimum solution.
Liu, Shu-Yen; Sheu, J K; Lin, Yu-Chuan; Chen, Yu-Tong; Tu, S J; Lee, M L; Lai, W C
2013-11-01
Hydrogen generation through water splitting by n-InGaN working electrodes with bias generated from GaAs solar cell was studied. Instead of using an external bias provided by power supply, a GaAs-based solar cell was used as the driving force to increase the rate of hydrogen production. The water-splitting system was tuned using different approaches to set the operating points to the maximum power point of the GaAs solar cell. The approaches included changing the electrolytes, varying the light intensity, and introducing the immersed ITO ohmic contacts on the working electrodes. As a result, the hybrid system comprising both InGaN-based working electrodes and GaAs solar cells operating under concentrated illumination could possibly facilitate efficient water splitting. PMID:24514940
Hybrid Algorithms for Fuzzy Reverse Supply Chain Network Design
Che, Z. H.; Chiang, Tzu-An; Kuo, Y. C.
2014-01-01
In consideration of capacity constraints, fuzzy defect ratio, and fuzzy transport loss ratio, this paper attempted to establish an optimized decision model for production planning and distribution of a multiphase, multiproduct reverse supply chain, which addresses defects returned to original manufacturers, and in addition, develops hybrid algorithms such as Particle Swarm Optimization-Genetic Algorithm (PSO-GA), Genetic Algorithm-Simulated Annealing (GA-SA), and Particle Swarm Optimization-Simulated Annealing (PSO-SA) for solving the optimized model. During a case study of a multi-phase, multi-product reverse supply chain network, this paper explained the suitability of the optimized decision model and the applicability of the algorithms. Finally, the hybrid algorithms showed excellent solving capability when compared with original GA and PSO methods. PMID:24892057
Evaluation of Genetic Algorithm Concepts using Model Problems. Part 1; Single-Objective Optimization
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Pulliam, Thomas H.
2003-01-01
A genetic-algorithm-based optimization approach is described and evaluated using a simple hill-climbing model problem. The model problem utilized herein allows for the broad specification of a large number of search spaces, including spaces with an arbitrary number of genes or decision variables and an arbitrary number of hills or modes. In the present study, only single-objective problems are considered. Results indicate that the genetic algorithm optimization approach is flexible in application and extremely reliable, providing optimal results for all problems attempted. The most difficult problems - those with large hyper-volumes and multi-mode search spaces containing a large number of genes - require a large number of function evaluations for GA convergence, but they always converge.
Ebtehaj, Isa; Bonakdari, Hossein
2014-01-01
The existence of sediments in wastewater greatly affects the performance of the sewer and wastewater transmission systems. Increased sedimentation in wastewater collection systems causes problems such as reduced transmission capacity and early combined sewer overflow. The article reviews the performance of the genetic algorithm (GA) and imperialist competitive algorithm (ICA) in minimizing the target function (mean square error of observed and predicted Froude number). To study the impact of bed load transport parameters, using four non-dimensional groups, six different models have been presented. Moreover, the roulette wheel selection method is used to select the parents. The ICA, with root mean square error (RMSE) = 0.007 and mean absolute percentage error (MAPE) = 3.5%, shows better results than the GA (RMSE = 0.007, MAPE = 5.6%) for the selected model. For all six models, the ICA returns better results than the GA. Also, the results of these two algorithms were compared with multi-layer perceptron and existing equations. PMID:25429460
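The roulette-wheel parent selection mentioned above can be sketched in a few lines. This is a generic illustration of fitness-proportionate selection, not the authors' code:

```python
import random

def roulette_select(population, fitnesses, rng=random):
    """Pick one parent with probability proportional to its fitness
    (fitnesses must be non-negative and not all zero)."""
    total = sum(fitnesses)
    pick = rng.uniform(0, total)          # spin the wheel
    running = 0.0
    for individual, fit in zip(population, fitnesses):
        running += fit
        if running >= pick:
            return individual
    return population[-1]                 # guard against round-off
```

An individual with fitness 9 is then drawn roughly nine times as often as one with fitness 1.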
Magnetoresistance Study in a GaAs/InGaAs/GaAs Delta-Doped Quantum Well
NASA Astrophysics Data System (ADS)
Hasbun, J. E.
1997-03-01
The magnetoresistance of a GaAs/Ga_0.87In_0.13As/GaAs quantum well with an electron concentration of N_s = 6.3x10^11 cm^-2 is calculated at low temperature for a magnetic field range of 2-30 tesla and low electric field. The results obtained for the magnetotransport are compared with the experimental work of Herfort et al. (J. Herfort, K.-J. Friedland, H. Kostial, and R. Hey, Appl. Phys. Lett. V66, 23 (1995)). While the longitudinal magnetoresistance agrees reasonably well with experiment, the Hall resistance slope reflects a classical shape; however, its second derivative seems to show oscillations that are consistent with the Hall effect plateaus seen experimentally. Albeit with a much higher electron concentration, earlier calculations [J. Hasbun, APS Bull. V41, 419 (1996)] for an Al_0.27Ga_0.73As/GaAs/Al_0.27Ga_0.73As quantum well show similar behavior. This work has been carried out with the use of a quantum many-body approach employed in earlier work (J. Hasbun, APS Bull. V41, 1659 (1996)).
Raab, David; Graf, Marcus; Notka, Frank; Schödl, Thomas; Wagner, Ralf
2010-09-01
One of the main advantages of de novo gene synthesis is the fact that it frees the researcher from any limitations imposed by the use of natural templates. To make the most out of this opportunity, efficient algorithms are needed to calculate a coding sequence, combining different requirements, such as adapted codon usage or avoidance of restriction sites, in the best possible way. We present an algorithm where a "variation window" covering several amino acid positions slides along the coding sequence. Candidate sequences are built comprising the already optimized part of the complete sequence and all possible combinations of synonymous codons representing the amino acids within the window. The candidate sequences are assessed with a quality function, and the first codon of the best candidates' variation window is fixed. Subsequently the window is shifted by one codon position. As an example of a freely accessible software implementing the algorithm, we present the Mr. Gene web-application. Additionally two experimental applications of the algorithm are shown. PMID:21189842
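The sliding variation-window procedure can be sketched as follows. The codon table is a toy fragment (a real table covers all 20 amino acids), and the GC-favouring quality function is a hypothetical stand-in for the paper's multi-criteria scoring, not the Mr. Gene implementation:

```python
from itertools import product

# Toy synonymous-codon table; a real table covers all 20 amino acids.
CODONS = {"M": ["ATG"], "F": ["TTT", "TTC"], "L": ["TTA", "TTG", "CTG"]}

def optimize_cds(protein, quality, window=2):
    """Slide a variation window along the protein: enumerate every
    synonymous codon combination inside the window, score the already
    fixed prefix plus each candidate window with the quality function,
    fix the first codon of the best candidate, then shift by one."""
    fixed = []
    for pos in range(len(protein)):
        aas = protein[pos:pos + window]
        best = max(product(*(CODONS[a] for a in aas)),
                   key=lambda combo: quality("".join(fixed) + "".join(combo)))
        fixed.append(best[0])
    return "".join(fixed)
```

With a quality function that simply counts C nucleotides, `optimize_cds("MFL", ...)` picks the most C-rich synonymous codons while preserving the encoded protein.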
Undeland, Duane K.; Kowalski, Todd J.; Berth, Wendy L.; Gundrum, Jacob D.
2010-01-01
OBJECTIVE: To assess the safety and appropriateness of antibiotic use in adult patients with pharyngitis who opted for a nurse-only triage and treatment algorithm vs patients who underwent a physician-directed clinical evaluation. PATIENTS AND METHODS: Using International Classification of Diseases, Ninth Revision codes to query the electronic medical record database at our institution, a large multispecialty health care system in La Crosse, WI, we identified adult patients diagnosed as having pharyngitis from September 1, 2005, through August 31, 2007. Diagnosis, treatment, and outcome data were collected retrospectively. RESULTS: Of 4996 patients who sought treatment for pharyngitis, 3570 (71.5%) saw a physician and 1426 (28.5%) opted for the nurse-only triage and treatment algorithm. Physicians adhered to antibiotic-prescribing guidelines in 3310 (92.7%) of 3570 first visits, whereas nurses using the algorithm adhered to guidelines in 1422 (99.7%) of 1426 first visits (P<.001). Physicians were significantly less likely to follow guidelines at patients' subsequent visits for a single pharyngitis illness than at their initial one (92.7% [3310/3570] vs 83.7% [406/485]; P<.001). CONCLUSION: Instituting a simple nurse-only triage and treatment algorithm for patients presenting with pharyngitis appears to reduce unnecessary antibiotic use. PMID:21037044
Truss Optimization for a Manned Nuclear Electric Space Vehicle using Genetic Algorithms
NASA Technical Reports Server (NTRS)
Benford, Andrew; Tinker, Michael L.
2004-01-01
The purpose of this paper is to utilize the genetic algorithm (GA) optimization method for structural design of a nuclear propulsion vehicle. Genetic algorithms provide a guided, random search technique that mirrors biological adaptation. To verify the GA capabilities, other traditional optimization methods were used to generate results for comparison to the GA results, first for simple two-dimensional structures, and then for full-scale three-dimensional truss designs.
Systematic investigation on topological properties of layered GaS and GaSe under strain
NASA Astrophysics Data System (ADS)
An, Wei; Wu, Feng; Jiang, Hong; Tian, Guang-Shan; Li, Xin-Zheng
2014-08-01
The topological properties of layered β-GaS and ɛ-GaSe under strain are systematically investigated by ab initio calculations with the electronic exchange-correlation interactions treated beyond the generalized gradient approximation (GGA). Based on the GW method and the Tran-Blaha modified Becke-Johnson potential approach, we find that while ɛ-GaSe can be strain-engineered to become a topological insulator, β-GaS remains a trivial one even under strong strain, which is different from the prediction based on GGA. The reliability of the fixed volume assumption rooted in nearly all the previous calculations is discussed. By comparing to strain calculations with optimized inter-layer distance, we find that the fixed volume assumption is qualitatively valid for β-GaS and ɛ-GaSe, but there are quantitative differences between the results from the fixed volume treatment and those from more realistic treatments. This work indicates that it is risky to use theoretical approaches like GGA that suffer from the band gap problem to address physical properties, including, in particular, the topological nature of band structures, for which the band gap plays a crucial role. In the latter case, careful calibration against more reliable methods like the GW approach is strongly recommended.
NASA Astrophysics Data System (ADS)
Shin, Frances B.; Kil, David H.; Dobeck, Gerald J.
1997-07-01
In distributed underwater signal processing for area surveillance and sanitization during regional conflicts, it is often necessary to transmit raw imagery data to a remote processing station for detection-report confirmation and more sophisticated automatic target recognition (ATR) processing. Because of the limited bandwidth available for transmission, image compression is of paramount importance. At the same time, preservation of useful information that contains essential signal attributes is crucial for effective mine detection and classification in shallow water. In this paper, we present an integrated processing strategy that combines image compression and ATR algorithms for superior detection performance while achieving maximal bandwidth reduction. Our reduced-dimension image compression algorithm comprises image-content classification for the subimage-specific transformation, principal component analysis for further dimension reduction, and vector quantization to obtain a minimal information state. Next, using an integrated pattern recognition paradigm, our ATR algorithm optimally combines low-dimensional features and an appropriate classifier topology to extract maximum recognition performance from reconstructed images. Instead of assessing performance of the image compression algorithm in terms of commonly used peak signal-to-noise ratio or normalized mean-squared error criteria, we quantify our algorithm performance using a metric that reflects human and operational factors - ATR performance. Our preliminary analysis based on high-frequency sonar real data indicates that we can achieve a compression ratio of up to 57:1 with minimal sacrifice in probability of detection (PD) and probability of false alarm (PFA). Furthermore, we discuss the concept of the classification Cramer-Rao bound in terms of data compression, sufficient statistics, and class separability to quantify the extent to which a classifier approximates the Bayes classifier.
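The final vector-quantization stage of the compression pipeline reduces each feature vector to a single codebook index. A minimal generic sketch (the codebook and distance choice are illustrative, not the paper's trained quantizer):

```python
def quantize(vec, codebook):
    """Vector quantization: return the index of the nearest codeword
    under squared Euclidean distance. Transmitting only this index is
    what yields the 'minimal information state' for each vector."""
    return min(range(len(codebook)),
               key=lambda i: sum((v - c) ** 2 for v, c in zip(vec, codebook[i])))
```

A real system would train the codebook (e.g. with a Lloyd/k-means procedure) on PCA-reduced subimage features.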
Mass spectrometry cancer data classification using wavelets and genetic algorithm.
Nguyen, Thanh; Nahavandi, Saeid; Creighton, Douglas; Khosravi, Abbas
2015-12-21
This paper introduces a hybrid feature extraction method applied to mass spectrometry (MS) data for cancer classification. Haar wavelets are employed to transform MS data into orthogonal wavelet coefficients. The most prominent discriminant wavelets are then selected by a genetic algorithm (GA) to form feature sets. The combination of wavelets and GA yields highly distinct feature sets that serve as inputs to classification algorithms. Experimental results show the robustness and significant dominance of the wavelet-GA approach over competing methods. The proposed method therefore can be applied to cancer classification models that are useful as real clinical decision support systems for medical practitioners. PMID:26611346
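The two stages above, a Haar transform followed by GA selection of coefficients, can be sketched as follows. The bitmask scoring function here is a synthetic stand-in for the classifier-based discriminant score the paper uses; names and parameters are invented:

```python
import random

def haar_step(x):
    """One level of the Haar transform: pairwise averages (approximation
    coefficients) followed by pairwise differences (detail coefficients)."""
    avg = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    det = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return avg + det

def ga_select(score, n_feat, pop_size=20, gens=30, seed=0):
    """GA over coefficient bitmasks; score(mask) stands in for how well
    the selected coefficients discriminate the classes."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_feat)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=score, reverse=True)
        pop = pop[:pop_size // 2]                # keep the best half
        parents = pop[:]
        while len(pop) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_feat)
            child = a[:cut] + b[cut:]            # one-point crossover
            child[rng.randrange(n_feat)] ^= 1    # bit-flip mutation
            pop.append(child)
    return max(pop, key=score)
```

Applying `haar_step` recursively to the approximation half gives the full multi-level transform; the GA then searches over which coefficients to keep.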
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Lomax, Harvard
1987-01-01
The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.
Evaluation of Genetic Algorithm Concepts Using Model Problems. Part 2; Multi-Objective Optimization
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Pulliam, Thomas H.
2003-01-01
A genetic algorithm approach suitable for solving multi-objective optimization problems is described and evaluated using a series of simple model problems. Several new features including a binning selection algorithm and a gene-space transformation procedure are included. The genetic algorithm is suitable for finding pareto optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. Results indicate that the genetic algorithm optimization approach is flexible in application and extremely reliable, providing optimal results for all optimization problems attempted. The binning algorithm generally provides pareto front quality enhancements and moderate convergence efficiency improvements for most of the model problems. The gene-space transformation procedure provides a large convergence efficiency enhancement for problems with non-convoluted pareto fronts and a degradation in efficiency for problems with convoluted pareto fronts. The most difficult problems --multi-mode search spaces with a large number of genes and convoluted pareto fronts-- require a large number of function evaluations for GA convergence, but always converge.
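The notion of Pareto optimality that the multi-objective GA above searches for reduces to a dominance test. A minimal illustration (minimization convention; not the paper's binning or gene-space transformation code):

```python
def dominates(a, b):
    """a dominates b (minimization): no worse in every objective and
    strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated points: the current Pareto front."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

The GA's selection step would rank a population using exactly this kind of dominance filter before applying binning.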
Genetic algorithms in adaptive fuzzy control
NASA Technical Reports Server (NTRS)
Karr, C. Lucas; Harper, Tony R.
1992-01-01
Researchers at the U.S. Bureau of Mines have developed adaptive process control systems in which genetic algorithms (GA's) are used to augment fuzzy logic controllers (FLC's). GA's are search algorithms that rapidly locate near-optimum solutions to a wide spectrum of problems by modeling the search procedures of natural genetics. FLC's are rule based systems that efficiently manipulate a problem environment by modeling the 'rule-of-thumb' strategy used in human decision making. Together, GA's and FLC's possess the capabilities necessary to produce powerful, efficient, and robust adaptive control systems. To perform efficiently, such control systems require a control element to manipulate the problem environment, an analysis element to recognize changes in the problem environment, and a learning element to adjust fuzzy membership functions in response to the changes in the problem environment. Details of an overall adaptive control system are discussed. A specific computer-simulated chemical system is used to demonstrate the ideas presented.
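The membership functions that the learning element adjusts are typically simple parameterized shapes. A minimal triangular membership function (generic fuzzy-logic textbook form, not the Bureau of Mines controller):

```python
def tri_mf(x, a, b, c):
    """Triangular fuzzy membership function with feet at a and c and peak
    at b (requires a < b < c). In the adaptive scheme described above, the
    GA's learning element would adjust (a, b, c) in response to changes
    in the problem environment."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
```

A GA individual would encode the (a, b, c) triples of all membership functions in the rule base, with fitness measured on the simulated process.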
First Principles Electronic Structure of Mn doped GaAs, GaP, and GaN Semiconductors
Schulthess, Thomas C; Temmerman, Walter M; Szotek, Zdzislawa; Svane, Axel; Petit, Leon
2007-01-01
We present first-principles electronic structure calculations of Mn doped III-V semiconductors based on the local spin-density approximation (LSDA) as well as the self-interaction corrected local spin density method (SIC-LSD). We find that it is crucial to use a self-interaction free approach to properly describe the electronic ground state. The SIC-LSD calculations predict the proper electronic ground state configuration for Mn in GaAs, GaP, and GaN. Excellent quantitative agreement with experiment is found for magnetic moment and p-d exchange in (GaMn)As. These results allow us to validate commonly used models for magnetic semiconductors. Furthermore, we discuss the delicate problem of extracting binding energies of localized levels from density functional theory calculations. We propose three approaches to take into account final state effects to estimate the binding energies of the Mn-d levels in GaAs. We find good agreement between computed values and estimates from photoemission experiments.
A Bat Algorithm with Mutation for UCAV Path Planning
Wang, Gaige; Guo, Lihong; Duan, Hong; Liu, Luo; Wang, Heqi
2012-01-01
Path planning for an uninhabited combat air vehicle (UCAV) is a complicated high-dimension optimization problem, which mainly centers on optimizing the flight route considering the different kinds of constraints under complicated battlefield environments. The original bat algorithm (BA) is used to solve the UCAV path planning problem. Furthermore, a new bat algorithm with mutation (BAM) is proposed to solve the UCAV path planning problem, and a modification is applied to mutate between bats during the process of updating the new solutions. Then, the UCAV can find the safe path by connecting the chosen nodes of the coordinates while avoiding the threat areas and minimizing fuel cost. This new approach can accelerate the global convergence speed while preserving the strong robustness of the basic BA. The realization procedure for the original BA and this improved metaheuristic approach BAM is also presented. To prove the performance of this proposed metaheuristic method, BAM is compared with BA and other population-based optimization methods, such as ACO, BBO, DE, ES, GA, PBIL, PSO, and SGA. The experiment shows that the proposed approach is more effective and feasible in UCAV path planning than the other models. PMID:23365518
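The core BAM idea, a bat algorithm with an added mutation step between bats, can be sketched on a toy continuous problem. This uses a simplified attractor form of the frequency/velocity update, and all constants (walk scale, mutation rate, damping) are invented, so it is an illustration of the idea rather than the paper's BAM:

```python
import random

def bam_minimize(f, dim=2, n_bats=20, iters=200, seed=1):
    """Toy bat algorithm with a DE-style mutation between bats (BAM idea)."""
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_bats)]
    V = [[0.0] * dim for _ in range(n_bats)]
    best = min(X, key=f)[:]
    for _ in range(iters):
        for i in range(n_bats):
            freq = rng.uniform(0, 2)                     # pulse frequency
            cand = X[i][:]
            for d in range(dim):
                V[i][d] = 0.9 * V[i][d] + (best[d] - X[i][d]) * freq
                cand[d] = X[i][d] + V[i][d]
            if rng.random() < 0.5:                       # local walk near best
                cand = [b + 0.1 * rng.gauss(0, 1) for b in best]
            if rng.random() < 0.1:                       # mutation between bats
                a, b = rng.sample(range(n_bats), 2)
                cand = [c + 0.5 * (X[a][d] - X[b][d]) for d, c in enumerate(cand)]
            if f(cand) <= f(X[i]):                       # greedy acceptance
                X[i] = cand
            if f(cand) < f(best):
                best = cand[:]
    return best
```

In the UCAV setting the decision vector would hold waypoint coordinates, and f would combine threat exposure and fuel cost.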
Crystal growth of device quality GaAs in space
NASA Technical Reports Server (NTRS)
Gatos, H. C.; Lagowski, J.
1984-01-01
The crystal growth, device processing, and device-related properties and phenomena of GaAs are investigated. Our GaAs research revolves around three key thrust areas. The overall program combines: (1) studies of crystal growth and novel approaches to the engineering of semiconductor materials (i.e., GaAs and related compounds); (2) investigation and correlation of materials properties and electronic characteristics on a macro- and microscale; (3) investigation of electronic properties and phenomena controlling device applications and device performance. A ground-based program was developed to ensure successful experimentation with, and eventually processing of, GaAs in a near-zero-gravity environment.
Dominant takeover regimes for genetic algorithms
NASA Technical Reports Server (NTRS)
Noever, David; Baskaran, Subbiah
1995-01-01
The genetic algorithm (GA) is a machine-based optimization routine which connects evolutionary learning to natural genetic laws. The present work addresses the problem of obtaining the dominant takeover regimes in the GA dynamics. Estimated GA run times are computed for slow and fast convergence in the limits of high and low fitness ratios. Using Euler's device for obtaining partial sums in closed forms, the result relaxes the previously held requirements for long time limits. Analytical solutions reveal that appropriately accelerated regimes can mark the ascendancy of the most fit solution. In virtually all cases, the weak (logarithmic) dependence of convergence time on problem size demonstrates the potential for the GA to solve large NP-complete problems.
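The weak dependence of takeover time on population size is easy to see in a toy expected-value model of fitness-proportionate selection (a sketch under stated simplifications, not the paper's closed-form analysis):

```python
def takeover_generations(pop_size, fitness_ratio):
    """Expected-value model of takeover under proportionate selection:
    a single copy of the fittest individual, with fitness ratio r over
    the rest, grows until it fills all but one slot of the population
    (no sampling noise, no crossover or mutation)."""
    p = 1.0 / pop_size
    gens = 0
    while p < 1.0 - 1.0 / pop_size:
        # expected share in the next generation under proportionate selection
        p = fitness_ratio * p / (fitness_ratio * p + (1.0 - p))
        gens += 1
    return gens
```

In this model the odds p/(1-p) are multiplied by the fitness ratio each generation, so takeover time grows only logarithmically with population size.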
El-Qulity, Said Ali; Mohamed, Ali Wagdy
2016-01-01
This paper proposes a nonlinear integer goal programming model (NIGPM) for solving the general problem of admission capacity planning in a country as a whole. The work aims to satisfy most of the required key objectives of a country related to the enrollment problem for higher education. The system general outlines are developed along with the solution methodology for application to the time horizon in a given plan. The up-to-date data for Saudi Arabia is used as a case study and a novel evolutionary algorithm based on modified differential evolution (DE) algorithm is used to solve the complexity of the NIGPM generated for different goal priorities. The experimental results presented in this paper show their effectiveness in solving the admission capacity for higher education in terms of final solution quality and robustness. PMID:26819583
NASA Astrophysics Data System (ADS)
Milic, Vladimir; Kasac, Josip; Novakovic, Branko
2015-10-01
This paper is concerned with L2-gain optimisation of input-affine nonlinear systems controlled by an analytic fuzzy logic system. Unlike the conventional fuzzy-based strategies, the non-conventional analytic fuzzy control method does not require an explicit fuzzy rule base. As the first contribution of this paper, we prove, by using the Stone-Weierstrass theorem, that the proposed fuzzy system without a rule base is a universal approximator. The second contribution of this paper is an algorithm for solving a finite-horizon minimax problem for L2-gain optimisation. The proposed algorithm consists of a recursive chain rule for first- and second-order derivatives, Newton's method, the multi-step Adams method and automatic differentiation. Finally, the results of this paper are evaluated on a second-order nonlinear system.
Hybrid genetic approach for the dynamic weapon-target allocation problem
NASA Astrophysics Data System (ADS)
Khosla, Deepak
2001-08-01
This paper addresses the problem of threat engagement and dynamic weapon-target allocation (WTA) across the force, or network-centric force optimization. The objective is to allocate and schedule defensive weapon resources over a given period of time so as to minimize surviving target value subject to resource availability and temporal constraints. The dynamic WTA problem is an NP-complete problem and belongs to a class of multiple-resource-constrained optimal scheduling problems. Inherent complexities in the problem of determining the optimal solution include limited weapon resources, time windows under which threats must be engaged, load-balancing across weapon systems, and complex interdependencies of various assignments and resources. We present a new hybrid genetic algorithm (GA) which is a combination of a traditional genetic algorithm and a simulated annealing-type algorithm for solving the dynamic WTA problem. The hybrid GA approach proposed here uses a simulated annealing-type heuristic to compute the fitness of a GA-selected population. This step also optimizes the temporal dimension (scheduling) under resource and temporal constraints. The proposed method provides schedules that are near-optimal in short cycle times and have minimal perturbation from one cycle to the next. We compare the performance of the proposed approach with a baseline WTA algorithm.
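The surviving-target-value objective and a simulated-annealing search over assignments (the static core of the problem, without the scheduling dimension) can be sketched as follows. Function names, the kill-probability model, and the cooling schedule are invented for illustration, not taken from the paper:

```python
import math
import random

def surviving_value(assignment, values, pkill):
    """Expected surviving target value; assignment[w] is the target of
    weapon w and pkill[w][t] is weapon w's kill probability against t."""
    remaining = [1.0] * len(values)
    for w, t in enumerate(assignment):
        remaining[t] *= 1.0 - pkill[w][t]
    return sum(v * r for v, r in zip(values, remaining))

def anneal_wta(values, pkill, iters=2000, seed=0):
    """Simulated-annealing search over weapon-to-target assignments."""
    rng = random.Random(seed)
    n_w, n_t = len(pkill), len(values)
    cur = [rng.randrange(n_t) for _ in range(n_w)]
    cur_val = surviving_value(cur, values, pkill)
    best, best_val = cur[:], cur_val
    temp = 1.0
    for _ in range(iters):
        trial = cur[:]
        trial[rng.randrange(n_w)] = rng.randrange(n_t)   # reassign one weapon
        val = surviving_value(trial, values, pkill)
        # accept improvements always, worse moves with Boltzmann probability
        if val < cur_val or rng.random() < math.exp((cur_val - val) / temp):
            cur, cur_val = trial, val
        if val < best_val:
            best, best_val = trial[:], val
        temp = max(temp * 0.995, 1e-9)                   # geometric cooling
    return best, best_val
```

In the hybrid scheme, a loop like this (extended with time windows and resource constraints) would score each GA-selected candidate.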
Optimal Design of Geodetic Network Using Genetic Algorithms
NASA Astrophysics Data System (ADS)
Vajedian, Sanaz; Bagheri, Hosein
2010-05-01
A geodetic network is a network measured precisely by techniques of terrestrial surveying, based on measurements of angles and distances; it can monitor the stability of dams and towers and the deformation of the surrounding land and surfaces. The main goals of an optimal geodetic network design process include finding the proper locations of control stations (First Order Design) as well as the proper weights of observations (Second Order Design) in a way that satisfies all the criteria considered for the quality of the network, which itself is evaluated by the network's accuracy, reliability (internal and external), sensitivity and cost. The first-order design problem can be dealt with as a numeric optimization problem. In this design, finding the unknown coordinates of the network stations is an important issue. To find these unknown values, the network's geodetic observations, namely angle and distance measurements, must be entered into an adjustment method. In this regard, inverse problem algorithms are needed. Inverse problem algorithms are methods to find optimal solutions for given problems and include classical and evolutionary computations. The classical approaches are analytical methods and are useful in finding the optimum solution of a continuous and differentiable function. The least squares (LS) method is one of the classical techniques that derive estimates for stochastic variables and their distribution parameters from observed samples. The evolutionary algorithms are adaptive procedures of optimization and search that find solutions to problems inspired by the mechanisms of natural evolution. These methods generate new points in the search space by applying operators to current points and statistically moving toward more optimal places in the search space. The genetic algorithm (GA) is the evolutionary algorithm considered in this paper. This algorithm starts with the definition of an initial population, and then the operators of selection, replication and variation are applied
Deep levels in virtually unstrained InGaAs layers deposited on GaAs
NASA Astrophysics Data System (ADS)
Pal, D.; Gombia, E.; Mosca, R.; Bosacchi, A.; Franchi, S.
1998-09-01
The dislocation-related deep levels in InxGa1-xAs layers grown by molecular beam epitaxy on GaAs substrates have been investigated. Virtually unstrained InGaAs layers with mole fraction x of 0.10, 0.20, and 0.30 have been obtained by properly designing the In composition of linearly graded InxGa1-xAs buffers. Two electron traps, labeled as E2 and E3, whose activation energy scales well with the energy gap, have been found. Unlike E2, E3 shows: (i) a logarithmic dependence of the deep level transient spectroscopy amplitude on the filling pulse width and (ii) an increase of concentration as the buffer/InGaAs interface is approached. These findings, together with the observation that, in compressively strained In0.2Ga0.8As, the E3-related concentration is definitely higher than that in virtually unstrained In0.2Ga0.8As, indicate that this trap likely originates from extended defects such as threading dislocations.
Magneto-Excitons in (411)A and (100)-Oriented GaAs/AlGaAs Multiple Quantum Well Structures
Bajaj, K.K.; Hiyamizu, S.; Jones, E.D.; Krivorotov, I.; Shimomura, S.; Shinohara, K.
1999-01-20
We report magneto-exciton spectroscopy studies of (411)A and (100)-oriented GaAs/Al{sub 0.3}Ga{sub 0.7}As multiquantum well structures. The samples, consisting of seven GaAs quantum wells with widths varying between 0.6 and 12 nm, were grown on (411)A and (100)-oriented GaAs substrates. The exciton diamagnetic energy shifts and linewidths were measured between 0 and 14 T at 1.4 K. The dependence of the exciton diamagnetic shifts on magnetic field was calculated using a variational approach, and good agreement with experiment was found for both substrate orientations.
The beam properties of high-power InGaAs/AlGaAs quantum well lasers
NASA Astrophysics Data System (ADS)
Wu, Xiang; Lu, Zukang; Wang, You; Takiguchi, Yoshihiro; Kan, Hirofumi
2003-11-01
The vertical beam quality factor of the fundamental TE propagating mode for InGaAs/AlGaAs SCH DQW lasers emitting at 940 nm is investigated by using the transfer matrix method and the vectorial moment theory for non-paraxial beams. An experimental approach is given for the measurement of the equivalent vertical beam quality factor of an InGaAs/AlGaAs SCH DQW laser. It has been shown that the vertical beam quality factor Mx2 is always larger than unity, regardless of whether the thickness of the active region of the laser diode is much smaller than the emission wavelength.
Park, Ji-Hyeon; Mandal, Arjun; Kang, San; Chatterjee, Uddipta; Kim, Jin Soo; Park, Byung-Guon; Kim, Moon-Deock; Jeong, Kwang-Un; Lee, Cheul-Ro
2016-01-01
This article demonstrates, for the first time to the best of our knowledge, the merits of InGaN/GaN multiple quantum wells (MQWs) grown on hollow n-GaN nanowires (NWs) as a plausible alternative for stable photoelectrochemical water splitting and efficient hydrogen generation. These hollow nanowires are achieved by a growth method rather than by a conventional etching process, which makes the approach simple yet highly effective. We believe a relatively low Ga flux during the selective area growth (SAG) aids the hollow nanowire growth. For comparison of the optoelectronic properties, solid nanowires are also studied. In this communication, we show that the lower thermal conductivity of hollow n-GaN NWs affects the material quality of InGaN/GaN MQWs by limiting In diffusion. As a result of this improvement in material quality and structural properties, photocurrent and photosensitivity are enhanced compared to the structures grown on solid n-GaN NWs. An incident photon-to-current efficiency (IPCE) of around ~33.3% is recorded at 365 nm wavelength for hollow NWs. We believe that multiple reflections of incident light inside the hollow n-GaN NWs assist in producing a larger amount of electron-hole pairs in the active region. As a result, the rate of hydrogen generation is also increased. PMID:27556534
Algorithmic Processes for Increasing Design Efficiency.
ERIC Educational Resources Information Center
Terrell, William R.
1983-01-01
Discusses the role of algorithmic processes as a supplementary method for producing cost-effective and efficient instructional materials. Examines three approaches to problem solving in the context of developing training materials for the Naval Training Command: application of algorithms, quasi-algorithms, and heuristics. (EAO)
Case Study on Optimal Routing in Logistics Network by Priority-based Genetic Algorithm
NASA Astrophysics Data System (ADS)
Wang, Xiaoguang; Lin, Lin; Gen, Mitsuo; Shiota, Mitsushige
Recently, research on logistics has attracted more and more attention. One of the important issues in a logistics system is finding optimal delivery routes with the least cost for product delivery. Numerous models have been developed for this purpose. However, due to the diversity and complexity of practical problems, the existing models often cannot find solutions efficiently and conveniently. In this paper, we treat a real-world logistics case with a company named ABC Co. ltd., in Kitakyusyu, Japan. First, based on the nature of this conveyance routing problem, we formulate it as a minimum cost flow (MCF) model, an extension of the transportation problem (TP) and the fixed charge transportation problem (fcTP). Due to the complexity of the fcTP, we propose a priority-based genetic algorithm (pGA) approach to find the most acceptable solution to this problem. In this pGA approach, a two-stage path decoding method is adopted to develop delivery paths from a chromosome. We apply the pGA approach to this problem, compare our results with the current logistics network situation, and calculate the improvement in logistics cost to help management make decisions. Finally, to check the effectiveness of the proposed method, the results are compared with those obtained by two solvers, LINDO and CPLEX.
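The core of a priority-based encoding is that the chromosome assigns one priority value per network node, and a delivery path is decoded by repeatedly stepping from the current node to its highest-priority unvisited neighbor. The sketch below illustrates only that decoding idea under assumed names and a toy network; it is not the paper's two-stage decoder or its cost model.

```python
import random

def decode_path(chromosome, adjacency, source, sink):
    """Decode a priority vector (one value per node) into one
    source->sink path: always move to the unvisited neighbor
    with the highest priority."""
    path, node, visited = [source], source, {source}
    while node != sink:
        candidates = [n for n in adjacency[node] if n not in visited]
        if not candidates:
            return None        # dead end; such a chromosome would be penalized
        node = max(candidates, key=lambda n: chromosome[n])
        visited.add(node)
        path.append(node)
    return path

# Toy network (illustrative): 0 = plant, 1-3 = depots, 4 = customer.
adjacency = {0: [1, 2], 1: [3, 4], 2: [3], 3: [4], 4: []}
random.seed(0)
chromosome = [random.random() for _ in range(5)]  # one priority per node
print(decode_path(chromosome, adjacency, source=0, sink=4))
```

Because any priority vector decodes to a path, standard crossover and mutation on the priority values always yield decodable offspring, which is the usual motivation for this encoding in network routing GAs.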
STAR Algorithm Integration Team - Facilitating operational algorithm development
NASA Astrophysics Data System (ADS)
Mikles, V. J.
2015-12-01
The NOAA/NESDIS Center for Satellite Research and Applications (STAR) provides technical support of the Joint Polar Satellite System (JPSS) algorithm development and integration tasks. Utilizing data from the S-NPP satellite, JPSS generates over thirty Environmental Data Records (EDRs) and Intermediate Products (IPs) spanning atmospheric, ocean, cryosphere, and land weather disciplines. The Algorithm Integration Team (AIT) brings technical expertise and support to product algorithms, specifically in testing and validating science algorithms in a pre-operational environment. The AIT verifies that new and updated algorithms function in the development environment, enforces established software development standards, and ensures that delivered packages are functional and complete. AIT facilitates the development of new JPSS-1 algorithms by implementing a review approach based on the Enterprise Product Lifecycle (EPL) process. Building on relationships established during the S-NPP algorithm development process and coordinating directly with science algorithm developers, the AIT has implemented structured reviews with self-contained document suites. The process has supported algorithm improvements for products such as ozone, active fire, vegetation index, and temperature and moisture profiles.