RCQ-GA: RDF Chain Query Optimization Using Genetic Algorithms
NASA Astrophysics Data System (ADS)
Hogenboom, Alexander; Milea, Viorel; Frasincar, Flavius; Kaymak, Uzay
The application of Semantic Web technologies in an Electronic Commerce environment implies a need for good support tools. Fast query engines are needed for efficient querying of large amounts of data, usually represented using RDF. We focus on optimizing a special class of SPARQL queries, the so-called RDF chain queries. For this purpose, we devise a genetic algorithm called RCQ-GA that determines the order in which joins need to be performed for an efficient evaluation of RDF chain queries. The approach is benchmarked against a two-phase optimization algorithm previously proposed in the literature. The more complex a query is, the more RCQ-GA outperforms the benchmark in solution quality, execution time needed, and consistency of solution quality. When the algorithms are constrained by a time limit, the overall performance of RCQ-GA relative to the benchmark improves further.
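A hedged sketch of the core idea: a GA over join orders, where each chromosome is a permutation of joins and fitness comes from a cost model that penalizes large intermediate results. The cost model, operators, and parameters below are our own illustrative assumptions, not RCQ-GA's.

```python
import random

def join_cost(order, selectivity=0.5, base=1000):
    # Toy cost model (our assumption, not the paper's): the running
    # intermediate-result size grows with each join, and total cost is
    # the sum of intermediate sizes, so join order matters.
    size, total = base, 0.0
    for j in order:
        size = size * selectivity * (1 + j)
        total += size
    return total

def evolve(n_joins=8, pop_size=30, generations=100, seed=0):
    rng = random.Random(seed)
    pop = [rng.sample(range(n_joins), n_joins) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=join_cost)                 # elitist selection
        survivors = pop[:pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_joins)     # order-preserving crossover
            child = a[:cut] + [g for g in b if g not in a[:cut]]
            if rng.random() < 0.2:              # swap mutation
                i, j = rng.sample(range(n_joins), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = survivors + children
    return min(pop, key=join_cost)

best_order = evolve()
```

The order-preserving crossover keeps every child a valid permutation, which is the standard trick for join-order chromosomes.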
Genetic Algorithm Approaches for Actuator Placement
NASA Technical Reports Server (NTRS)
Crossley, William A.
2000-01-01
This research investigated genetic algorithm approaches for smart actuator placement to provide aircraft maneuverability without requiring hinged flaps or other control surfaces. The effort supported goals of the Multidisciplinary Design Optimization focus efforts in NASA's Aircraft Morphing program. This work helped to properly identify various aspects of the genetic algorithm operators and parameters that allow for placement of discrete control actuators/effectors. An improved problem definition, including better definition of the objective function and constraints, resulted from this research effort. The work conducted for this research used a geometrically simple wing model; however, an increasing number of potential actuator placement locations was incorporated to illustrate the ability of the GA to determine promising actuator placement arrangements. This effort's major result is a useful genetic algorithm-based approach to assist in the discrete actuator/effector placement problem.
Panniculitides, an algorithmic approach.
Zelger, B
2013-08-01
The inflammatory diseases of the subcutis and their mimics are generally considered a difficult field of dermatopathology. Yet, in my experience, with appropriate biopsies and good clinicopathological correlation, a specific diagnosis of the panniculitides can usually be made. Knowledge of some basic anatomical and pathological issues is essential. Anatomically, the panniculus consists of fatty lobules separated by fibrous septa. Pathologically, inflammation of the panniculus is recognized by an inflammatory process that leads to tissue damage and necrosis. Several types of fat necrosis are observed: xanthomatized macrophages in lipophagic necrosis; granular fat necrosis and fat micropseudocysts in liquefactive fat necrosis; mummified adipocytes in "hyalinizing" fat necrosis with/without saponification and/or calcification; and lipomembranous membranes in membranous fat necrosis. In an algorithmic approach, an inflammatory process recognized by the features elaborated above is best evaluated in three steps: first the pattern, then the subpattern, and finally the presence and composition of inflammatory cells. Pattern differentiates a mostly septal from a mostly lobular distribution at scanning magnification. In the subpattern category one looks for the presence or absence of vasculitis and, if present, the size and nature of the involved blood vessels: arterioles and small arteries or veins; capillaries or postcapillary venules. The third step is to identify the nature of the cells in the inflammatory infiltrate and, finally, to look for additional histopathologic features that allow a specific final diagnosis in the language of clinical dermatology of disease involving the subcutaneous fat.
A new optimized GA-RBF neural network algorithm.
Jia, Weikuan; Zhao, Dean; Shen, Tian; Su, Chunyang; Hu, Chanli; Zhao, Yuyan
2014-01-01
When confronting complex problems, the radial basis function (RBF) neural network has the advantages of adaptivity and self-learning, but it is difficult to determine the number of hidden-layer neurons, and the ability to learn the weights from the hidden layer to the output layer is low; these deficiencies easily lead to decreased learning ability and recognition precision. To address this problem, we propose a new optimized RBF neural network algorithm based on a genetic algorithm (the GA-RBF algorithm), which uses a genetic algorithm to optimize the weights and structure of the RBF neural network through a new scheme of hybrid encoding and simultaneous optimization. Binary encoding is used for the number of hidden-layer neurons, and real encoding for the connection weights; the hidden-layer neuron count and the connection weights are optimized simultaneously in the new algorithm. However, the connection-weight optimization is not complete; we use the least mean square (LMS) algorithm for further learning, finally obtaining the new algorithm model. Testing the new algorithm on two UCI standard data sets shows that it improves operating efficiency in dealing with complex problems and also improves recognition precision, which proves that the new algorithm is valid.
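The hybrid encoding described above can be illustrated with a minimal sketch: a binary segment encodes the hidden-neuron count, a real-valued segment encodes the output weights, and one fitness call penalizes both error and network size. The field sizes, the Gaussian kernel, the fixed centers, and the penalty weight are all our own assumptions, not the paper's.

```python
import math
import random

MAX_HIDDEN = 15            # a 4-bit field encodes the hidden-layer size
N_WEIGHTS = MAX_HIDDEN     # one output weight per potential neuron

def decode(chrom):
    # Hybrid chromosome: 4 binary genes, then N_WEIGHTS real genes.
    bits, weights = chrom[:4], chrom[4:]
    n_hidden = max(1, int("".join(str(b) for b in bits), 2))
    return n_hidden, weights[:n_hidden]

def rbf_output(x, centers, weights):
    # Gaussian RBF hidden layer feeding a linear output layer.
    return sum(w * math.exp(-(x - c) ** 2) for w, c in zip(weights, centers))

def fitness(chrom, samples):
    n_hidden, weights = decode(chrom)
    centers = [i / n_hidden for i in range(n_hidden)]   # fixed toy centers
    err = sum((y - rbf_output(x, centers, weights)) ** 2 for x, y in samples)
    return err + 0.01 * n_hidden    # penalise larger networks

rng = random.Random(1)
chrom = [rng.randint(0, 1) for _ in range(4)] + \
        [rng.uniform(-1, 1) for _ in range(N_WEIGHTS)]
samples = [(x / 10, math.sin(x / 10)) for x in range(10)]
n_hidden, out_weights = decode(chrom)
```

A full GA-RBF run would evolve such chromosomes and then, per the abstract, refine the surviving weights with LMS.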
Ameliorated GA approach for base station planning
NASA Astrophysics Data System (ADS)
Wang, Andong; Sun, Hongyue; Wu, Xiaomin
2011-10-01
In this paper, we aim at locating base stations (BSs) rationally to serve the most customers using the fewest BSs. An ameliorated GA is proposed to search for the optimum solution. In the algorithm, we mesh the area to be planned according to the least overlap length derived from the coverage radius, introduce an isometric grid encoding method to represent the BS distribution as well as the BS count, and develop selection, crossover, and mutation operators tailored to our particular needs. We also construct a comprehensive objective function that synthesizes coverage ratio, overlap ratio, population, and geographical conditions. Finally, after importing an electronic map of the area to be planned, a recommended strategy draft is exported correspondingly. We use Hong Kong, China, as a simulation case and obtain a satisfactory solution.
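A toy version of such a composite objective on a meshed area: coverage ratio minus an overlap penalty. The grid size, coverage radius, and weight below are our own illustrative choices, not the paper's values.

```python
def evaluate(stations, grid=20, radius=4.0):
    # Count grid cells covered by at least one BS and by two or more.
    covered = overlap = 0
    for gx in range(grid):
        for gy in range(grid):
            hits = sum(1 for (sx, sy) in stations
                       if (gx - sx) ** 2 + (gy - sy) ** 2 <= radius ** 2)
            if hits >= 1:
                covered += 1
            if hits >= 2:
                overlap += 1
    cells = grid * grid
    coverage_ratio = covered / cells
    overlap_ratio = overlap / cells
    return coverage_ratio - 0.5 * overlap_ratio   # composite objective

score_two = evaluate([(5, 5), (15, 15)])     # two well-separated BSs
score_stacked = evaluate([(5, 5), (5, 5)])   # full overlap wastes a BS
```

A GA over candidate station layouts would maximize this score, so fully overlapping placements are driven out, matching the intent of the paper's overlap-ratio term.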
Genetic Algorithm (GA)-Based Inclinometer Layout Optimization
Liang, Weijie; Zhang, Ping; Chen, Xianping; Cai, Miao; Yang, Daoguo
2015-01-01
This paper presents numerical simulation results of an airflow inclinometer with sensitivity studies and thermal optimization of the printed circuit board (PCB) layout for an airflow inclinometer based on a genetic algorithm (GA). Due to the working principle of the gas sensor, changes in the ambient temperature may cause dramatic voltage drifts of the sensors. Therefore, eliminating the influence of the external environment on the airflow is essential for the performance and reliability of an airflow inclinometer. In this paper, the mechanism of an airflow inclinometer and the influence of different ambient temperatures on the sensitivity of the inclinometer are examined with the ANSYS-FLOTRAN CFD program. The results show that the sensitivity of the airflow inclinometer is inversely related to the ambient temperature at the sensing element, decreasing as the ambient temperature increases. A GA is used to optimize the PCB thermal layout of the inclinometer. The finite-element simulation method (ANSYS) is introduced to simulate and verify the results of our optimal thermal layout, and the results indicate that the optimal PCB layout greatly improves (by more than 50%) the sensitivity of the inclinometer. The study may be useful in the design of PCB layouts that are related to sensitivity improvement of gas sensors. PMID:25897500
Calibration of visual model for space manipulator with a hybrid LM-GA algorithm
NASA Astrophysics Data System (ADS)
Jiang, Wensong; Wang, Zhongyu
2016-01-01
A hybrid LM-GA algorithm is proposed to calibrate the camera system of a space manipulator to improve its locational accuracy. This algorithm dynamically fuses the Levenberg-Marquardt (LM) algorithm and a Genetic Algorithm (GA) to minimize the error of the nonlinear camera model. The LM algorithm is called to optimize the initial camera parameters generated by the preceding genetic step. Iteration stops if the optimized camera parameters meet the accuracy requirements; otherwise, new populations are generated by the GA and optimized afresh by the LM algorithm until the optimal solutions meet the accuracy requirements. A novel measuring machine for the space manipulator is designed for on-orbit dynamic simulation and precision testing. The camera system of the space manipulator, calibrated by the hybrid LM-GA algorithm, is used for locational precision tests in this measuring instrument. The experimental results show mean composite errors of 0.074 mm for the hybrid LM-GA camera calibration model, 1.098 mm for the LM model, and 1.202 mm for the GA model. Furthermore, the composite standard deviations are 0.103 mm for the hybrid LM-GA model, 1.227 mm for the LM model, and 1.351 mm for the GA model. The accuracy of the hybrid LM-GA camera calibration model is more than 10 times higher than that of the other two methods. All in all, the hybrid LM-GA camera calibration model is superior to both the LM and the GA camera calibration models.
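The alternating control flow can be sketched on a one-parameter toy model. Everything below is illustrative: plain gradient descent stands in for Levenberg-Marquardt and a random population stands in for the genetic step, so this shows only the loop structure (generate candidates, refine the best, stop on accuracy), not the paper's camera calibration.

```python
import random

def residual(p, data):
    # Toy "camera model": fit y = p * x; sum of squared residuals.
    return sum((y - p * x) ** 2 for x, y in data)

def refine(p, data, steps=50, lr=0.01):
    # LM stand-in: plain gradient descent on the squared residual.
    for _ in range(steps):
        grad = sum(-2 * x * (y - p * x) for x, y in data)
        p -= lr * grad
    return p

def hybrid(data, tol=1e-6, max_rounds=20, seed=2):
    rng = random.Random(seed)
    best = 0.0
    for _ in range(max_rounds):
        population = [rng.uniform(-5, 5) for _ in range(10)]  # GA stand-in
        best = min(population, key=lambda p: residual(p, data))
        best = refine(best, data)              # local refinement
        if residual(best, data) < tol:         # accuracy requirement met
            return best
    return best

data = [(x, 3.0 * x) for x in range(1, 6)]    # noiseless slope-3 data
p_hat = hybrid(data)
```

On this contrived data the loop recovers the true slope in the first round; the point is the fuse-then-check structure, not the optimizers themselves.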
Naresh-Kumar, G.; Trager-Cowan, C.; Vilalta-Clemente, A.; Morales, M.; Ruterana, P.; Pandey, S.; Cavallini, A.; Cavalcoli, D.; Skuridina, D.; Vogt, P.; Kneissl, M.; Behmenburg, H.; Giesen, C.; Heuken, M.; Gamarra, P.; Di Forte-Poisson, M. A.; Patriarche, G.; Vickridge, I.
2014-12-15
We report on our multi-pronged approach to understand the structural and electrical properties of an InAl(Ga)N (33 nm barrier)/Al(Ga)N (1 nm interlayer)/GaN (3 μm)/AlN (100 nm)/Al2O3 high electron mobility transistor (HEMT) heterostructure grown by metal organic vapor phase epitaxy (MOVPE). In particular, we reveal and discuss the role of unintentional Ga incorporation in the barrier and also in the interlayer. The observation of unintentional Ga incorporation by energy-dispersive X-ray spectroscopy analysis in a scanning transmission electron microscope is supported by results obtained for samples with a range of AlN interlayer thicknesses grown in both showerhead and horizontal-type MOVPE reactors. Poisson-Schrödinger simulations show that, for high Ga incorporation in the Al(Ga)N interlayer, an additional triangular well of very small depth may appear in parallel to the main 2DEG channel. The presence of this additional channel may cause parasitic conduction and severe issues in device characteristics and processing. Producing a HEMT structure with InAlGaN as the barrier and AlGaN as the interlayer with appropriate alloy composition may be a possible route to optimization, as it might be difficult to avoid Ga incorporation while continuously depositing the layers using the MOVPE growth method. Our present work shows the necessity of a multicharacterization approach to correlate structural and electrical properties in order to understand device structures and their performance.
Modeling human cancer-related regulatory modules by GA-RNN hybrid algorithms
Chiang, Jung-Hsien; Chao, Shih-Yi
2007-01-01
Background: Modeling cancer-related regulatory modules from gene expression profiles of cancer tissues is expected to contribute to our understanding of cancer biology as well as to the development of new diagnostics and therapies. Several mathematical models have been used to explore the phenomena of transcriptional regulatory mechanisms in Saccharomyces cerevisiae. However, the control of feed-forward and feedback loops in transcriptional regulatory mechanisms is not adequately resolved in Saccharomyces cerevisiae, nor in human cancer cells. Results: In this study, we introduce a Genetic Algorithm-Recurrent Neural Network (GA-RNN) hybrid method for finding feed-forward regulated genes, given some transcription factors, to construct cancer-related regulatory modules in human cancer microarray data. This hybrid approach focuses on the construction of various kinds of regulatory modules: the Recurrent Neural Network provides the capability of modeling feed-forward and feedback loops in regulatory modules, while the Genetic Algorithm provides the ability to search globally for commonly regulated genes. The approach unravels new feed-forward connections in regulatory models through modified multi-layer RNN architectures. We also validate our approach by demonstrating that most of the connections in our cancer-related regulatory modules have been identified and verified in previously published biological literature. Conclusion: The major contribution of this approach is modeling the chain of influences upon a set of genes sequentially. In addition, this inverse modeling correctly identifies known oncogenes and their interacting genes in a purely data-driven way. PMID:17359522
Cognitive Radio — Genetic Algorithm Approach
NASA Astrophysics Data System (ADS)
Reddy, Y. B.
2005-03-01
Cognitive Radio (CR) is a relatively new technology that intelligently detects which segments of the radio spectrum are currently in use and selects unused spectrum quickly, without interfering with the transmissions of authorized users. Cognitive radios can learn about the current use of spectrum in their operating area, make intelligent decisions, and react to immediate changes in spectrum use by other authorized users. The goal of CR technology is to relieve radio spectrum overcrowding, which in practice translates to a lack of access to full radio spectrum utilization. Due to this adaptive behavior, a CR can easily avoid signal interference in a crowded radio frequency spectrum. In this research, we discuss the possible application of genetic algorithms (GAs) to create a CR that can respond intelligently to changing and unanticipated circumstances and to the presence of hostile jammers and interferers. Genetic algorithms are problem-solving techniques based on evolution and natural selection. GA models adapt Charles Darwin's evolutionary theory to analyze data and interchange design elements in hundreds of thousands of different combinations. Only the best-performing combinations are permitted to survive, and those combinations "reproduce" further, progressively yielding better and better results.
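A hedged sketch of how a GA might pick a transmission setting: each individual is a candidate (channel, power) pair, scored to avoid channels sensed as occupied or jammed while keeping transmit power low. The channel set, fitness weights, and operators are hypothetical, not from the paper.

```python
import random

OCCUPIED = {2, 3, 7}          # channels sensed as in use or jammed

def fitness(individual):
    channel, power = individual
    interference = 100 if channel in OCCUPIED else 0
    return -(interference + power)      # higher is better

def select_setting(n_channels=10, seed=3):
    rng = random.Random(seed)
    pop = [(rng.randrange(n_channels), rng.uniform(1, 10))
           for _ in range(20)]
    for _ in range(30):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:10]              # keep the best performers
        pop = parents + [
            (rng.choice(parents)[0],    # inherit a parent's channel
             max(1.0, rng.choice(parents)[1] + rng.uniform(-1, 1)))
            for _ in range(10)]
    return max(pop, key=fitness)

channel, power = select_setting()
```

When the set of jammed channels changes, rerunning the loop adapts the setting, which is the "react to immediate changes" behavior the abstract describes.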
Ancestral genome inference using a genetic algorithm approach.
Gao, Nan; Yang, Ning; Tang, Jijun
2013-01-01
Recent advances in technology have made it routine to obtain and compare gene orders within genomes. Rearrangements of gene orders by operations such as reversal and transposition are rare events that enable researchers to reconstruct deep evolutionary histories. An important application of genome rearrangement analysis is to infer the gene orders of ancestral genomes, which is valuable for identifying patterns of evolution and for modeling evolutionary processes. Among the various available methods, parsimony-based methods (including GRAPPA and MGR) are the most widely used. Since the core algorithms of these methods are solvers for the so-called median problem, providing efficient and accurate median solvers has attracted much attention in this field. The "double-cut-and-join" (DCJ) model uses the single DCJ operation to account for all genome rearrangement events. Because it is mathematically much simpler than handling events directly, parsimony methods using DCJ median solvers have better speed and accuracy. However, the DCJ median problem is NP-hard, and although several exact algorithms are available, they all have great difficulty when the given genomes are distant. In this paper, we present a new algorithm that combines a genetic algorithm (GA) with genomic sorting to produce a new method that can solve the DCJ median problem in limited time and space, especially on large and distant datasets. Our experimental results show that this new GA-based method can find optimal or near-optimal results for problems ranging from easy to very difficult. Compared to existing parsimony methods, which may severely underestimate the true number of evolutionary events, the sorting-based approach can infer ancestral genomes that are much closer to their true ancestors. The code is available at http://phylo.cse.sc.edu. PMID:23658708
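A minimal GA median solver in the spirit described above: evolve a gene order minimizing the total distance to three given genomes. The breakpoint distance stands in for the DCJ distance (a real DCJ distance needs signed genes and an adjacency-graph construction omitted here), and all parameters are our own.

```python
import random

def breakpoint_distance(g1, g2):
    # Number of adjacent pairs in g1 that are not adjacent (in order) in g2.
    adj = {(g2[i], g2[i + 1]) for i in range(len(g2) - 1)}
    return sum(1 for i in range(len(g1) - 1)
               if (g1[i], g1[i + 1]) not in adj)

def median_score(candidate, genomes):
    return sum(breakpoint_distance(candidate, g) for g in genomes)

def ga_median(genomes, pop_size=40, generations=200, seed=4):
    rng = random.Random(seed)
    n = len(genomes[0])
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda c: median_score(c, genomes))
        pop = pop[:pop_size // 2]              # elitist truncation
        while len(pop) < pop_size:
            child = list(rng.choice(pop[:10]))
            i, j = rng.sample(range(n), 2)     # swap mutation
            child[i], child[j] = child[j], child[i]
            pop.append(child)
    return min(pop, key=lambda c: median_score(c, genomes))

genomes = [[0, 1, 2, 3, 4, 5], [0, 1, 2, 4, 3, 5], [0, 1, 2, 3, 5, 4]]
median = ga_median(genomes)
```

The paper's method additionally steers the search with genomic sorting; this sketch shows only the population-over-permutations skeleton.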
System engineering approach to GPM retrieval algorithms
Rose, C. R.; Chandrasekar, V.
2004-01-01
System engineering principles and methods are very useful in large-scale complex systems for developing engineering requirements from end-user needs. Integrating research into system engineering is a challenging task. The proposed Global Precipitation Mission (GPM) satellite will use a dual-wavelength precipitation radar to measure and map global precipitation with unprecedented accuracy, resolution, and areal coverage. The satellite vehicle, precipitation radars, retrieval algorithms, and ground validation (GV) functions are all critical subsystems of the overall GPM system, and each contributes to the success of the mission. Errors in the radar measurements and models can adversely affect the retrieved output values. Ground validation (GV) systems are intended to provide timely feedback to the satellite and retrieval algorithms based on measured data. These GV sites will consist of radars and DSD measurement systems and also have intrinsic constraints. One of the retrieval algorithms being studied for use with GPM is the dual-wavelength DSD algorithm that does not use the surface reference technique (SRT). The underlying microphysics of precipitation structures and drop-size distributions (DSDs) dictate the types of models and retrieval algorithms that can be used to estimate precipitation. Many types of dual-wavelength algorithms have been studied. Meneghini (2002) analyzed the performance of single-pass dual-wavelength surface-reference-technique (SRT) based algorithms. Mardiana (2003) demonstrated that a dual-wavelength retrieval algorithm could be used successfully without the SRT; it uses an iterative approach based on measured reflectivities at both wavelengths and complex microphysical models to estimate both N0 and D0 at each range bin. More recently, Liao (2004) proposed a solution to the D0 ambiguity problem in rain within the dual-wavelength algorithm and showed a possible melting-layer model based on stratified spheres. With the N0 and D0
Operational algorithm development and refinement approaches
NASA Astrophysics Data System (ADS)
Ardanuy, Philip E.
2003-11-01
Next-generation polar and geostationary systems, such as the National Polar-orbiting Operational Environmental Satellite System (NPOESS) and the Geostationary Operational Environmental Satellite (GOES)-R, will deploy new generations of electro-optical reflective and emissive capabilities. These will include low-radiometric-noise, improved-spatial-resolution multispectral and hyperspectral imagers and sounders. To achieve specified performance (e.g., measurement accuracy, precision, uncertainty, and stability) and best utilize the advanced space-borne sensing capabilities, a new generation of retrieval algorithms will be implemented. In most cases, these advanced algorithms benefit from ongoing testing and validation using heritage research mission algorithms and data [e.g., the Earth Observing System (EOS) Moderate-resolution Imaging Spectroradiometer (MODIS) and the Shuttle Ozone Limb Scattering Experiment (SOLSE)/Limb Ozone Retrieval Experiment (LORE)]. In these instances, an algorithm's theoretical basis is not static, but rather improves with time. Once frozen, an operational algorithm can "lose ground" relative to research analogs. Cost/benefit analyses provide a basis for change management. The challenge is in reconciling and balancing the stability, and "comfort," that today's generation of operational platforms provide (well-characterized, known sensors and algorithms) with the greatly improved quality, opportunities, and risks that the next generation of operational sensors and algorithms offer. By using best practices and lessons learned from heritage/groundbreaking activities, it is possible to implement an agile process that enables change while managing it. This approach combines a "known-risk" frozen baseline and preset completion schedules with insertion opportunities for algorithm advances as ongoing validation activities identify and repair areas of weak performance. This paper describes an objective, adaptive implementation roadmap that
Approximation of HRPITS results for SI GaAs by large scale support vector machine algorithms
NASA Astrophysics Data System (ADS)
Jankowski, Stanisław; Wojdan, Konrad; Szymański, Zbigniew; Kozłowski, Roman
2006-10-01
For the first time, large-scale support vector machine algorithms are used to extract defect parameters in semi-insulating (SI) GaAs from high-resolution photoinduced transient spectroscopy experiments. By smart decomposition of the data set, the SVMTorch algorithm made it possible to obtain a good approximation of the analyzed correlation surface with a parsimonious model (a small number of support vectors). The defect-center parameters extracted from the SVM approximation are of good quality compared with the reference data.
The royal road for genetic algorithms: Fitness landscapes and GA performance
Mitchell, M.; Holland, J.H.; Forrest, S.
1991-01-01
Genetic algorithms (GAs) play a major role in many artificial-life systems, but there is often little detailed understanding of why the GA performs as it does, and little theoretical basis on which to characterize the types of fitness landscapes that lead to successful GA performance. In this paper we propose a strategy for addressing these issues. Our strategy consists of defining a set of features of fitness landscapes that are particularly relevant to the GA, and experimentally studying how various configurations of these features affect the GA's performance along a number of dimensions. In this paper we informally describe an initial set of proposed feature classes, describe in detail one such class ("Royal Road" functions), and present some initial experimental results concerning the role of crossover and "building blocks" on landscapes constructed from features of this class. 27 refs., 1 fig., 5 tabs.
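The simplest Royal Road function (R1) is easy to state: a 64-bit string is divided into eight contiguous 8-bit blocks, and each fully-set block (a matched schema) contributes its length to the fitness, so the all-ones string is the global optimum.

```python
def royal_road(bits, block=8):
    # R1: sum the sizes of the fully-matched, non-overlapping blocks.
    total = 0
    for i in range(0, len(bits), block):
        chunk = bits[i:i + block]
        if all(chunk):              # schema s_i fully present
            total += len(chunk)     # contributes c_i = block size
    return total
```

A string whose first block alone is complete scores 8; the all-ones string scores 64. The staircase structure is what makes these landscapes a probe of how crossover assembles building blocks.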
DeMAID/GA USER'S GUIDE Design Manager's Aid for Intelligent Decomposition with a Genetic Algorithm
NASA Technical Reports Server (NTRS)
Rogers, James L.
1996-01-01
Many companies are looking for new tools and techniques to aid a design manager in making decisions that can reduce the time and cost of a design cycle. One tool that is available to aid in this decision-making process is the Design Manager's Aid for Intelligent Decomposition (DeMAID). Since the initial release of DeMAID in 1989, numerous enhancements have been added to aid the design manager in saving both cost and time in a design cycle. The key enhancement is a genetic algorithm (GA), and the enhanced version is called DeMAID/GA. The GA orders the sequence of design processes to minimize the cost and time to converge to a solution. These enhancements, as well as the existing features of the original version of DeMAID, are described. Two sample problems are used to show how these enhancements can be applied to improve the design cycle. This report serves as a user's guide for DeMAID/GA.
Application of GA, PSO, and ACO algorithms to path planning of autonomous underwater vehicles
NASA Astrophysics Data System (ADS)
Aghababa, Mohammad Pourmahmood; Amrollahi, Mohammad Hossein; Borjkhani, Mehdi
2012-09-01
In this paper, an underwater vehicle was modeled with six-dimensional nonlinear equations of motion, controlled by DC motors in all degrees of freedom. Near-optimal trajectories in an energetic environment for underwater vehicles were computed using a numerical solution of a nonlinear optimal control problem (NOCP). An energy performance index was defined as the cost function to be minimized. The resulting problem was a two-point boundary value problem (TPBVP). Genetic algorithm (GA), particle swarm optimization (PSO), and ant colony optimization (ACO) algorithms were applied to solve the resulting TPBVP. Applying an Euler-Lagrange equation to the NOCP, a conjugate gradient penalty method was also adopted to solve the TPBVP. The problem of energetic environments, involving energy sources, was discussed, and some near-optimal paths were found using the GA, PSO, and ACO algorithms. Finally, the problem of collision avoidance in an energetic environment was also taken into account.
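One way to picture the GA variant of such a comparison: encode a discretised path as a list of waypoint heights and minimise an energy-style cost with a penalty for passing near an "energy source". The cost terms, bounds, and operators below are our own toy stand-ins, not the paper's TPBVP formulation.

```python
import random

def path_cost(heights, source_y=0.8):
    # Actuation-energy term: penalise sharp waypoint-to-waypoint moves.
    smooth = sum((heights[i + 1] - heights[i]) ** 2
                 for i in range(len(heights) - 1))
    # Exposure term: penalise waypoints within 0.3 of the energy source.
    exposure = sum(max(0.0, 0.3 - abs(h - source_y)) for h in heights)
    return smooth + 5.0 * exposure

def ga_path(n_points=12, pop=30, gens=150, seed=6):
    rng = random.Random(seed)
    P = [[rng.uniform(0, 1) for _ in range(n_points)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=path_cost)
        P = P[:pop // 2]                       # elitist truncation
        while len(P) < pop:
            a, b = rng.sample(P[:10], 2)
            cut = rng.randrange(1, n_points)   # one-point crossover
            child = a[:cut] + b[cut:]
            k = rng.randrange(n_points)        # clamped Gaussian mutation
            child[k] = min(1.0, max(0.0, child[k] + rng.gauss(0, 0.1)))
            P.append(child)
    return min(P, key=path_cost)

best_path = ga_path()
```

PSO and ACO variants would keep the same cost function and swap only the search loop, which is what makes such head-to-head comparisons clean.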
3D magnetic sources' framework estimation using Genetic Algorithm (GA)
NASA Astrophysics Data System (ADS)
Ponte-Neto, C. F.; Barbosa, V. C.
2008-05-01
We present a method for inverting a total-field anomaly to determine the framework of simple 3D magnetic sources such as batholiths, dikes, sills, geological contacts, and kimberlite and lamproite pipes. We use a GA to obtain the magnetic sources' frameworks and their magnetic features simultaneously. Specifically, we estimate the magnetization direction (inclination and declination), the total dipole moment intensity, and the horizontal and vertical positions, in Cartesian coordinates, of a finite set of elementary magnetic dipoles. The spatial distribution of these magnetic dipoles composes the skeletal outlines of the geologic sources. We assume that the geologic sources have a homogeneous magnetization distribution, and thus all dipoles have the same magnetization direction and dipole moment intensity. To implement the GA, we use real-valued encoding with crossover, mutation, and elitism. To obtain a unique and stable solution, we set upper and lower bounds on declination and inclination of [0°, 360°] and [-90°, 90°], respectively. We also impose a criterion of minimum scattering of the dipole-position coordinates, to guarantee that the spatial distribution of the dipoles (defining the source skeleton) is as close as possible to a continuous distribution. To this end, we fix the upper and lower bounds of the dipole moment intensity and evaluate the dipole-position estimates. If the dipole scattering is greater than a value expected by the interpreter, the upper bound of the dipole moment intensity is reduced by 10%. We repeat this procedure until the dipole scattering and the data fitting are acceptable. We apply our method to noise-corrupted magnetic data from simulated 3D magnetic sources with simple geometries located at different depths. In tests simulating sources such as a sphere and a cube, all estimates of the dipole coordinates agree with the centers of mass of these sources. For elongated prismatic sources in an arbitrary direction, we estimate
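The bound-tightening loop described above can be sketched abstractly: run the bounded inversion, and if the dipole-position scatter exceeds the interpreter's threshold, cut the moment upper bound by 10% and repeat until scatter and data fit are acceptable. A random bounded draw stands in for the GA inversion here purely for illustration; every number is invented.

```python
import random

def fit_dipoles(moment_upper, seed):
    # Stand-in for the GA inversion: returns (position scatter, misfit).
    # Positions here simply scale with the bound, mimicking the idea that
    # a looser moment bound lets the dipole estimates scatter more.
    rng = random.Random(seed)
    positions = [rng.uniform(0, moment_upper) for _ in range(8)]
    mean = sum(positions) / len(positions)
    scatter = max(abs(p - mean) for p in positions)
    return scatter, 0.1            # pretend the data fit is acceptable

def invert(max_scatter=2.0, moment_upper=10.0, seed=7):
    while True:
        scatter, misfit = fit_dipoles(moment_upper, seed)
        if scatter <= max_scatter and misfit < 1.0:
            return moment_upper
        moment_upper *= 0.9        # reduce the upper bound by 10%

bound = invert()
```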
The mGA1.0: A common LISP implementation of a messy genetic algorithm
NASA Technical Reports Server (NTRS)
Goldberg, David E.; Kerzic, Travis
1990-01-01
Genetic algorithms (GAs) are finding increased application in difficult search, optimization, and machine learning problems in science and engineering. Increasing demands are being placed on algorithm performance, and the remaining challenges of genetic algorithm theory and practice are becoming increasingly unavoidable. Perhaps the most difficult of these challenges is the so-called linkage problem. Messy GAs were created to overcome the linkage problem of simple genetic algorithms by combining variable-length strings, gene expression, messy operators, and a nonhomogeneous phasing of evolutionary processing. Results on a number of difficult deceptive test functions are encouraging, with the mGA always finding global optima in a polynomial number of function evaluations. Theoretical and empirical studies are continuing, and a first version of a messy GA is ready for testing by others. A Common LISP implementation called mGA1.0 is documented and related to the basic principles and operators developed by Goldberg et al. (1989, 1990). Although the code was prepared with care, it is not a general-purpose code, only a research version. Important data structures and global variables are described. Thereafter, brief function descriptions are given, and sample input data are presented together with sample program output. A source listing with comments is also included.
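The messy-GA encoding idea behind the variable-length strings can be shown in a few lines (in Python rather than the report's Common LISP): a chromosome is a list of (locus, allele) pairs; under-specified loci are filled from a competitive template, and over-specified loci are resolved first-come-first-served, as in Goldberg's description. The toy sizes are our own.

```python
def express(chromosome, template):
    # Under-specified loci inherit the competitive template's alleles.
    phenotype = list(template)
    seen = set()
    for locus, allele in chromosome:
        if locus not in seen:       # over-specification: first gene wins
            phenotype[locus] = allele
            seen.add(locus)
    return phenotype

template = [0, 0, 0, 0, 0, 0]
messy = [(2, 1), (4, 1), (2, 0)]    # locus 2 is specified twice
phenotype = express(messy, template)
```

Here the duplicate gene (2, 0) is ignored because (2, 1) appears first, and loci 0, 1, 3, and 5 fall back to the template.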
DFT algorithms for bit-serial GaAs array processor architectures
NASA Technical Reports Server (NTRS)
Mcmillan, Gary B.
1988-01-01
Systems and Processes Engineering Corporation (SPEC) has developed an innovative array processor architecture for computing Fourier transforms and other commonly used signal processing algorithms. This architecture is designed to extract the highest possible array performance from state-of-the-art GaAs technology. SPEC's architectural design includes a high performance RISC processor implemented in GaAs, along with a Floating Point Coprocessor and a unique Array Communications Coprocessor, also implemented in GaAs technology. Together, these data processors represent the latest in technology, both from an architectural and implementation viewpoint. SPEC has examined numerous algorithms and parallel processing architectures to determine the optimum array processor architecture. SPEC has developed an array processor architecture with integral communications ability to provide maximum node connectivity. The Array Communications Coprocessor embeds communications operations directly in the core of the processor architecture. A Floating Point Coprocessor architecture has been defined that utilizes Bit-Serial arithmetic units, operating at very high frequency, to perform floating point operations. These Bit-Serial devices reduce the device integration level and complexity to a level compatible with state-of-the-art GaAs device technology.
Approaching the Hole Mobility Limit of GaSb Nanowires.
Yang, Zai-xing; Yip, SenPo; Li, Dapan; Han, Ning; Dong, Guofa; Liang, Xiaoguang; Shu, Lei; Hung, Tak Fu; Mo, Xiaoliang; Ho, Johnny C
2015-09-22
In recent years, high-mobility GaSb nanowires have received tremendous attention for high-performance p-type transistors; however, due to the difficulty in achieving thin and uniform nanowires (NWs), there have been few reports until now addressing their diameter-dependent properties and the hole mobility limit in this important one-dimensional material system, all of which is essential information for the deployment of GaSb NWs in various applications. Here, by employing the newly developed surfactant-assisted chemical vapor deposition, high-quality and uniform GaSb NWs with controllable diameters, spanning from 16 to 70 nm, are successfully prepared, enabling the direct assessment of their growth orientation and hole mobility as a function of diameter while elucidating the role of the sulfur surfactant and the interplay between the surface and interface energies of the NWs on their electrical properties. The sulfur passivation is found to efficiently stabilize the high-energy NW sidewalls of (111) and (311) to yield thin NWs (i.e., <40 nm in diameter) with the dominant growth orientations of ⟨211⟩ and ⟨110⟩, whereas thick NWs (i.e., >40 nm in diameter) grow along the most energy-favorable close-packed planes with the orientation of ⟨111⟩, supported by approximate atomic models. Importantly, through reliable control of sulfur passivation, growth orientation, and surface roughness, GaSb NWs with a peak hole mobility of ∼400 cm(2) V(-1) s(-1) at a diameter of 48 nm, approaching the theoretical limit at a hole concentration of ∼2.2 × 10(18) cm(-3), can be achieved for the first time. All this indicates their promising potential for utilization in different technological domains.
GA-fisher: A new LDA-based face recognition algorithm with selection of principal components.
Zheng, Wei-Shi; Lai, Jian-Huang; Yuen, Pong C
2005-10-01
This paper addresses the dimension reduction problem in Fisherface for face recognition. When the number of training samples is less than the image dimension (the total number of pixels), the within-class scatter matrix (Sw) in Linear Discriminant Analysis (LDA) is singular, and Principal Component Analysis (PCA) is employed in Fisherface to reduce the dimension of Sw so that it becomes nonsingular. The popular method is to select the largest nonzero eigenvalues and their corresponding eigenvectors for LDA. To attenuate illumination effects, some researchers have suggested removing the three eigenvectors with the largest eigenvalues, which improves performance. However, as far as we know, there is no systematic way to determine which eigenvalues should be used. Along this line, this paper proposes a theorem explaining why PCA can be used in LDA, together with an automatic and systematic method that selects the eigenvectors to be used in LDA by means of a Genetic Algorithm (GA); a GA-PCA is then developed. It is found that some eigenvectors with small eigenvalues should also be used as part of the basis for dimension reduction. Using GA-PCA to reduce the dimension, a GA-Fisher method is designed and developed. Compared with the traditional Fisherface method, the proposed GA-Fisher offers two additional advantages. First, optimal bases for dimensionality reduction are derived from GA-PCA. Second, the computational efficiency of LDA is improved by adding a whitening procedure after dimension reduction. The Face Recognition Technology (FERET) and Carnegie Mellon University Pose, Illumination, and Expression (CMU PIE) databases are used for evaluation. Experimental results show an improvement of almost 5% over Fisherface, which is encouraging.
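The record above gives no implementation details; as a rough, hypothetical sketch of the core idea, a binary-chromosome GA choosing which principal components to keep, scored here by a simple Fisher separability ratio on synthetic data rather than the paper's actual fitness or face databases:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data: the class means differ along the first two axes.
X0 = rng.normal(0.0, 1.0, (100, 6))
X0[:, :2] += 3.0
X1 = rng.normal(0.0, 1.0, (100, 6))
X = np.vstack([X0, X1])
y = np.array([0] * 100 + [1] * 100)

# PCA basis from the pooled covariance; columns are principal directions.
Xc = X - X.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(Xc.T))
order = np.argsort(eigvals)[::-1]
eigvecs = eigvecs[:, order]

def fisher_ratio(mask):
    """Between-class over within-class variance in the selected subspace."""
    if mask.sum() == 0:
        return 0.0
    Z = Xc @ eigvecs[:, mask.astype(bool)]
    m0, m1 = Z[y == 0].mean(0), Z[y == 1].mean(0)
    within = Z[y == 0].var(0).sum() + Z[y == 1].var(0).sum()
    return float(((m0 - m1) ** 2).sum() / within)

# Minimal binary GA: tournament selection, uniform crossover, bit-flip mutation.
pop = rng.integers(0, 2, (30, 6))
for _ in range(40):
    fit = np.array([fisher_ratio(c) for c in pop])
    new = []
    for _ in range(len(pop)):
        a, b = rng.integers(0, len(pop), 2)
        p1 = pop[a] if fit[a] >= fit[b] else pop[b]
        a, b = rng.integers(0, len(pop), 2)
        p2 = pop[a] if fit[a] >= fit[b] else pop[b]
        child = np.where(rng.random(6) < 0.5, p1, p2)
        child ^= (rng.random(6) < 0.05).astype(child.dtype)
        new.append(child)
    pop = np.array(new)

best = max(pop, key=fisher_ratio)
```

Each bit of a chromosome marks one eigenvector as kept or dropped; in GA-Fisher the retained components would then feed LDA with a whitening step, which this sketch omits.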
A Genetic Algorithm and Fuzzy Logic Approach for Video Shot Boundary Detection
Thounaojam, Dalton Meitei; Khelchandra, Thongam; Singh, Kh. Manglem; Roy, Sudipta
2016-01-01
This paper proposes a shot boundary detection approach using a Genetic Algorithm and Fuzzy Logic. The membership functions of the fuzzy system are calibrated by the Genetic Algorithm from pre-observed ground-truth shot boundaries, and the fuzzy system then classifies the types of shot transitions. Experimental results show that the accuracy of shot boundary detection increases with the number of iterations, or generations, of the GA optimization process. The proposed system is compared with recent techniques and yields better results in terms of the F1 score. PMID:27127500
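As a toy illustration of the fuzzy half of such a system, the sketch below classifies a frame-difference value with triangular membership functions. The labels and cut points are invented for the example; in the approach described above, the GA would evolve the (a, b, c) triples against ground-truth shot boundaries.

```python
def tri(x, a, b, c):
    """Triangular membership function: feet at a and c, peak (value 1) at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def classify(diff, cuts):
    """Pick the transition label with the highest membership at `diff`.
    `cuts` maps label -> (a, b, c); these are the parameters a GA would tune."""
    return max(cuts, key=lambda k: tri(diff, *cuts[k]))

# Hypothetical, hand-picked membership parameters for normalized frame differences.
cuts = {
    "no_transition": (-0.1, 0.0, 0.4),
    "gradual": (0.2, 0.5, 0.8),
    "abrupt": (0.6, 1.0, 1.1),
}
label = classify(0.9, cuts)
```

A GA fitness for a candidate `cuts` table could be the detection F1 score on labeled training video.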
An efficient multi-resolution GA approach to dental image alignment
NASA Astrophysics Data System (ADS)
Nassar, Diaa Eldin; Ogirala, Mythili; Adjeroh, Donald; Ammar, Hany
2006-02-01
Automating the process of postmortem identification of individuals using dental records is receiving increased attention in forensic science, especially with the large volume of victims encountered in mass disasters. Dental radiograph alignment is a key step required for automating the dental identification process. In this paper, we address the problem of dental radiograph alignment using a Multi-Resolution Genetic Algorithm (MR-GA) approach. We use the location and orientation of edge points as features and assume that affine transformations suffice to restore geometric discrepancies between two images of a tooth. We efficiently search the 6D space of affine parameters using a GA applied progressively across multi-resolution image versions, and we use a Hausdorff distance measure to compute the similarity between a reference tooth and a query tooth under a candidate alignment transform. Test results based on 52 tooth-pair images suggest that our algorithm converges to reasonable solutions in more than 85% of the test cases, with most of the error in the remaining cases due to excessive misalignment.
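A minimal sketch of the two geometric ingredients named in the abstract, the 6-parameter affine transform and the Hausdorff similarity measure, in plain NumPy; the MR-GA search itself and the multi-resolution pyramid are omitted, and the point sets below are invented:

```python
import numpy as np

def hausdorff(A, B):
    """Symmetric Hausdorff distance between 2-D point sets A (n,2) and B (m,2)."""
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return max(D.min(axis=1).max(), D.min(axis=0).max())

def affine(points, params):
    """Apply the six affine parameters [a, b, c, d, tx, ty] to edge points."""
    a, b, c, d, tx, ty = params
    M = np.array([[a, b], [c, d]])
    return points @ M.T + np.array([tx, ty])

# Toy example: a "query tooth" is the reference shifted by one unit along x.
ref = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
qry = ref + np.array([1.0, 0.0])
```

A GA fitness for a candidate parameter vector could then be the negative Hausdorff distance between the transformed query edge points and the reference edge points, so that a perfect alignment scores zero.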
Algorithmic approach to intelligent robot mobility
Kauffman, S.
1983-05-01
This paper presents Sutherland's algorithm, plus an alternative algorithm, either of which allows mobile robots to navigate intelligently in environments resembling the rooms and hallways in which we move around. The main hardware requirements for a robot to use the algorithms presented are mobility and the ability to sense distances with some type of non-contact scanning device. This article does not discuss actual robot construction; the emphasis is on heuristics and algorithms.
Global Precipitation Measurement: GPM Microwave Imager (GMI) Algorithm Development Approach
NASA Technical Reports Server (NTRS)
Stocker, Erich Franz
2009-01-01
This slide presentation reviews the development approach for the Global Precipitation Measurement (GPM) Microwave Imager (GMI) algorithm, including the division of responsibilities for algorithm development and calibration, as well as information about the orbit and sun angle. The algorithm code will be tested with synthetic data generated by the Precipitation Processing System (PPS).
Wang, Yan; Xi, Chengyu; Zhang, Shuai; Zhang, Wenyu; Yu, Dejian
2015-01-01
As E-government continues to develop with ever-increasing speed, the requirement to enhance traditional government systems and affairs with electronic methods that are more effective and efficient is becoming critical. As a new product of information technology, E-tendering is becoming an inevitable reality owing to its efficiency, fairness, transparency, and accountability. Thus, developing and promoting government E-tendering (GeT) is imperative. This paper presents a hybrid approach combining a genetic algorithm (GA) and the Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS) to enable GeT to search for the optimal tenderer efficiently and fairly under circumstances where the attributes of the tenderers are expressed as fuzzy number intuitionistic fuzzy sets (FNIFSs). The GA is applied to automatically obtain the optimal weights of the tenderer evaluation criteria, and TOPSIS is employed to search for the optimal tenderer. A prototype system is built and validated with an illustrative example from GeT to verify the feasibility and validity of the proposed approach. PMID:26147468
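The TOPSIS half of the hybrid can be sketched compactly for crisp scores. Note the paper works with fuzzy number intuitionistic fuzzy sets and GA-derived weights; the equal weights and the small decision matrix below are made up for illustration.

```python
import numpy as np

def topsis(X, w, benefit):
    """Rank alternatives (rows of X) by closeness to the ideal solution.
    w: criteria weights; benefit: True where larger is better, False for costs."""
    R = X / np.linalg.norm(X, axis=0)       # vector-normalize each criterion column
    V = R * w                               # weighted normalized matrix
    ideal = np.where(benefit, V.max(0), V.min(0))
    anti = np.where(benefit, V.min(0), V.max(0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)          # closeness coefficient in [0, 1]

# Three hypothetical tenderers scored on price (cost) and quality (benefit).
X = np.array([[100.0, 9.0],
              [120.0, 7.0],
              [90.0, 9.0]])
w = np.array([0.5, 0.5])
benefit = np.array([False, True])
scores = topsis(X, w, benefit)
```

The third tenderer dominates the others (lowest price, highest quality), so its closeness coefficient is 1 and it is selected.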
An algorithmic approach to crustal deformation analysis
NASA Technical Reports Server (NTRS)
Iz, Huseyin Baki
1987-01-01
In recent years the analysis of crustal deformation measurements has become important as a result of current improvements in geodetic methods and an increasing amount of theoretical and observational data provided by several earth sciences. A first-generation data analysis algorithm which combines a priori information with current geodetic measurements was proposed. Relevant methods which can be used in the algorithm were discussed. Prior information is the unifying feature of this algorithm. Some of the problems which may arise through the use of a priori information in the analysis were indicated and preventive measures were demonstrated. The first step in the algorithm is the optimal design of deformation networks. The second step in the algorithm identifies the descriptive model of the deformation field. The final step in the algorithm is the improved estimation of deformation parameters. Although deformation parameters are estimated in the process of model discrimination, they can further be improved by the use of a priori information about them. According to the proposed algorithm this information must first be tested against the estimates calculated using the sample data only. Null-hypothesis testing procedures were developed for this purpose. Six different estimators which employ a priori information were examined. Emphasis was put on the case when the prior information is wrong and analytical expressions for possible improvements under incompatible prior information were derived.
Silva, Leonardo W. T.; Barros, Vitor F.; Silva, Sandro G.
2014-01-01
In launching operations, Rocket Tracking Systems (RTS) process the trajectory data obtained by radar sensors. In order to improve functionality and maintenance, radars can be upgraded by replacing parabolic-reflector (PR) antennas with phased arrays (PAs). These arrays enable electronic control of the radiation pattern by adjusting the signal supplied to each radiating element. However, in phased array radar (PAR) projects, the modeling of the problem involves many combinations of excitation signals, producing a complex optimization problem. In this case, it is possible to compute solutions with optimization methods such as genetic algorithms (GAs). To this end, the Genetic Algorithm with Maximum-Minimum Crossover (GA-MMC) method was developed to control the radiation pattern of PAs. GA-MMC uses a reconfigurable algorithm with multiple objectives, differentiated coding and a new crossover genetic operator. This operator differs from the conventional one in that it performs crossover between the fittest individuals and the least fit individuals in order to enhance genetic diversity. GA-MMC was successful in more than 90% of the tests for each application, increased the fitness of the final population by more than 20% and reduced premature convergence. PMID:25196013
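The pairing idea behind the crossover operator, mating the fittest with the least fit to preserve diversity, can be sketched as follows. This is an interpretation of the description above, not the authors' exact GA-MMC operator.

```python
import numpy as np

def mmc_pairs(fitness):
    """Max-min pairing: the fittest individual is paired with the least fit,
    the second fittest with the second least fit, and so on."""
    order = np.argsort(fitness)[::-1]          # indices from best ... worst
    n = len(order)
    return [(order[i], order[n - 1 - i]) for i in range(n // 2)]

def one_point_cross(p1, p2, rng):
    """Standard one-point crossover between two chromosomes of equal length."""
    cut = rng.integers(1, len(p1))
    return np.concatenate([p1[:cut], p2[cut:]])

# Hypothetical fitness values for four individuals.
pairs = mmc_pairs([5, 1, 9, 3])
child = one_point_cross(np.zeros(8, int), np.ones(8, int), np.random.default_rng(0))
```

With fitnesses [5, 1, 9, 3], individual 2 (best) is mated with individual 1 (worst) and individual 0 with individual 3, instead of the conventional best-with-best pairing.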
A mild reduction phosphidation approach to nanocrystalline GaP
NASA Astrophysics Data System (ADS)
Chen, Luyang; Luo, Tao; Huang, Mingxing; Gu, Yunle; Shi, Liang; Qian, Yitai
2004-12-01
Nanocrystalline gallium phosphide (GaP) has been prepared through a reduction-phosphidation route using Ga and PCl3 as the gallium and phosphorus sources and metallic sodium as the reductant at 350 °C. The XRD pattern can be indexed as cubic GaP with a lattice constant of a=5.446 Å. TEM images show particle-like polycrystals and flake-like single crystals. The PL spectrum exhibits one peak at 330 nm for the as-prepared nanocrystalline GaP.
NASA Astrophysics Data System (ADS)
Fisz, J. J.; Buczkowski, M.; Budziński, M. P.; Kolenderski, P.
2005-05-01
The application of a genetic algorithm (GA) optimization approach supported by the first-order derivative (FOD) and Newton-Raphson (NR) methods to time-resolved polarized fluorescence spectroscopy is discussed. It is demonstrated that applying both methods to the χ2 function reduces the number of adjustable model parameters. Combining the GA optimizer with the FOD and NR methods considerably improves the efficiency of global analysis of kinetic and polarized fluorescence decays for solutions and organized media, including the case of excited-state processes.
Probing genetic algorithms for feature selection in comprehensive metabolic profiling approach.
Zou, Wei; Tolstikov, Vladimir V
2008-04-01
Six different clones of 1-year-old loblolly pine (Pinus taeda L.) seedlings grown under standardized conditions in a greenhouse were used for sample preparation and further analysis. Three independent and complementary analytical techniques for metabolic profiling were applied in the present study: hydrophilic interaction chromatography (HILIC-LC/ESI-MS), reversed-phase liquid chromatography (RP-LC/ESI-MS), and gas chromatography (GC/TOF-MS), all coupled to mass spectrometry. Unsupervised methods, such as principal component analysis (PCA) and clustering, and supervised methods, such as classification, were used for data mining. Genetic algorithms (GA), a multivariate approach, were probed for selection of the smallest subsets of potentially discriminative classifiers. From more than 2000 peaks found in total, small subsets were selected by the GA as highly promising classifiers allowing discrimination among the six investigated genotypes. Annotated GC/TOF-MS data allowed the generation of a small subset of identified metabolites; the LC/ESI-MS data and their subsets require further annotation. The present study demonstrates that the combination of comprehensive metabolic profiling and advanced data mining techniques provides a powerful metabolomic approach for biomarker discovery among small molecules, and that utilizing GA for feature selection allowed the generation of small subsets of potent classifiers.
The fuzzy C spherical shells algorithm - A new approach
NASA Technical Reports Server (NTRS)
Krishnapuram, Raghu; Nasraoui, Olfa; Frigui, Hichem
1992-01-01
The fuzzy c spherical shells (FCSS) algorithm is specially designed to search for clusters that can be described by circular arcs or, more generally, by shells of hyperspheres. In this paper, a new approach to the FCSS algorithm is presented. This algorithm is computationally and implementationally simpler than other clustering algorithms that have been suggested for this purpose. An unsupervised algorithm which automatically finds the optimum number of clusters is also proposed. This algorithm can be used when the number of clusters is not known. It uses a cluster validity measure to identify good clusters, merges all compatible clusters, and eliminates spurious clusters to achieve the final result. Experimental results on several data sets are presented.
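The distinctive ingredient of shell clustering is that a point's distance is measured to the shell, not to the cluster center. A minimal membership-update sketch in the style of fuzzy c-means follows; the exact FCSS update equations in the paper differ in how centers and radii are re-estimated, and the demo geometry is invented.

```python
import numpy as np

def shell_memberships(X, centers, radii, m=2.0):
    """Fuzzy memberships of points to spherical shells: the distance of a point
    to shell i is |dist-to-center_i - radius_i|, squared, as in fuzzy c
    spherical shells clustering; m is the usual fuzzifier exponent."""
    d = np.abs(np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1)
               - radii[None, :]) ** 2 + 1e-12      # guard against division by zero
    inv = d ** (-1.0 / (m - 1.0))
    return inv / inv.sum(axis=1, keepdims=True)    # rows sum to 1

# A point lying exactly on the first shell (unit circle at the origin).
centers = np.array([[0.0, 0.0], [5.0, 5.0]])
radii = np.array([1.0, 1.0])
U = shell_memberships(np.array([[1.0, 0.0]]), centers, radii)
```

Alternating such membership updates with re-fitting each shell's center and radius to its weighted points yields the clustering iteration.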
Genetic Algorithm Approaches to Prebiotic Chemistry Modeling
NASA Technical Reports Server (NTRS)
Lohn, Jason; Colombano, Silvano
1997-01-01
We model an artificial chemistry comprised of interacting polymers by specifying two initial conditions: a distribution of polymers and a fixed set of reversible catalytic reactions. A genetic algorithm is used to find a set of reactions that exhibits a desired dynamical behavior. Such a technique is useful because it allows an investigator to determine whether a specific pattern of dynamics can be produced, and if it can, the reaction network found can then be analyzed. We present our results in the context of studying simplified chemical dynamics in theorized protocells - hypothesized precursors of the first living organisms. Our results show that given a small sample of plausible protocell reaction dynamics, catalytic reaction sets can be found. We present cases where this is not possible and also analyze the evolved reaction sets.
An ab initio-based approach to the stability of GaN(0 0 0 1) surfaces under Ga-rich conditions
NASA Astrophysics Data System (ADS)
Ito, Tomonori; Akiyama, Toru; Nakamura, Kohji
2009-05-01
Structural stability of GaN(0 0 0 1) under Ga-rich conditions is systematically investigated by using our ab initio-based approach. The surface phase diagram for GaN(0 0 0 1), including the (2×2) and pseudo-(1×1) structures, is obtained as a function of temperature and Ga beam equivalent pressure by comparing the chemical potential of a Ga atom in the gas phase with that on the surface. The calculated results reveal that the pseudo-(1×1) appearing below 684-973 K changes its structure to the (2×2) with Ga adatom at higher temperatures beyond 767-1078 K, via a newly found (1×1) with two adlayers of Ga. These results are consistent with the stable temperature ranges of both the pseudo-(1×1) and the (2×2) with Ga adatom obtained experimentally. It should also be noted that structures with other Ga adatom coverages between the (1×1) and the (2×2)-Ga do not appear as stable structures of GaN(0 0 0 1). Finally, ghost island formation observed by scanning tunneling microscopy is discussed on the basis of the phase diagram.
NASA Astrophysics Data System (ADS)
Javad Kazemzadeh-Parsi, Mohammad; Daneshmand, Farhang; Ahmadfard, Mohammad Amin; Adamowski, Jan; Martel, Richard
2015-01-01
In the present study, an optimization approach based on the firefly algorithm (FA) is combined with a finite element simulation method (FEM) to determine the optimum design of pump and treat remediation systems. Three multi-objective functions in which pumping rate and clean-up time are design variables are considered and the proposed FA-FEM model is used to minimize operating costs, total pumping volumes and total pumping rates in three scenarios while meeting water quality requirements. The groundwater lift and contaminant concentration are also minimized through the optimization process. The obtained results show the applicability of the FA in conjunction with the FEM for the optimal design of groundwater remediation systems. The performance of the FA is also compared with the genetic algorithm (GA) and the FA is found to have a better convergence rate than the GA.
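The firefly algorithm's core move, each firefly drifting toward every brighter one with a distance-decaying attractiveness, can be sketched as a single sweep. The constants beta0, gamma and alpha are generic textbook defaults, not values from the study, and the FEM-based remediation objective is replaced by a placeholder intensity.

```python
import numpy as np

def firefly_step(X, intensity, beta0=1.0, gamma=1.0, alpha=0.1, rng=None):
    """One firefly-algorithm sweep: firefly i moves toward each brighter
    firefly j with attractiveness beta0 * exp(-gamma * r^2), plus a small
    random walk scaled by alpha."""
    if rng is None:
        rng = np.random.default_rng(0)
    Xn = X.copy()
    for i in range(len(X)):
        for j in range(len(X)):
            if intensity[j] > intensity[i]:
                r2 = np.sum((X[j] - X[i]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)
                Xn[i] += beta * (X[j] - X[i]) + alpha * (rng.random(X.shape[1]) - 0.5)
    return Xn

# Toy objective: brightness = closeness to the origin in 2-D design space.
rng = np.random.default_rng(1)
X = rng.normal(size=(5, 2))
intensity = -np.sum(X ** 2, axis=1)
Xn = firefly_step(X, intensity)
```

In a pump-and-treat design problem, each firefly's position would encode pumping rates and clean-up time, and intensity would come from evaluating the FEM simulation at that design.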
NASA Astrophysics Data System (ADS)
Konak, Abdullah
2014-01-01
This article presents a network design problem with relays considering the two-edge network connectivity. The problem arises in telecommunications and logistic networks where a constraint is imposed on the distance that a commodity can travel on a route without being processed by a relay, and the survivability of the network is critical in case of a component failure. The network design problem involves selecting two-edge disjoint paths between source and destination node pairs and determining the location of relays to minimize the network design cost. The formulated problem is solved by a hybrid approach of a genetic algorithm (GA) and a Lagrangian heuristic such that the GA searches for two-edge disjoint paths for each commodity, and the Lagrangian heuristic is used to determine relays on these paths. The performance of the proposed hybrid approach is compared to the previous approaches from the literature, with promising results.
An Airborne Conflict Resolution Approach Using a Genetic Algorithm
NASA Technical Reports Server (NTRS)
Mondoloni, Stephane; Conway, Sheila
2001-01-01
An airborne conflict resolution approach is presented that is capable of providing flight plans forecast to be conflict-free with both area and traffic hazards. This approach is capable of meeting constraints on the flight plan such as required times of arrival (RTA) at a fix. The conflict resolution algorithm is based upon a genetic algorithm, and can thus seek conflict-free flight plans meeting broader flight planning objectives such as minimum time, fuel or total cost. The method has been applied to conflicts occurring 6 to 25 minutes in the future in climb, cruise and descent phases of flight. The conflict resolution approach separates the detection, trajectory generation and flight rules function from the resolution algorithm. The method is capable of supporting pilot-constructed resolutions, cooperative and non-cooperative maneuvers, and also providing conflict resolution on trajectories forecast by an onboard FMC.
Computational identification of human long intergenic non-coding RNAs using a GA-SVM algorithm.
Wang, Yanqiu; Li, Yang; Wang, Qi; Lv, Yingli; Wang, Shiyuan; Chen, Xi; Yu, Xuexin; Jiang, Wei; Li, Xia
2014-01-01
Long intergenic non-coding RNAs (lincRNAs) are a new type of non-coding RNA and are closely related to the occurrence and development of diseases. In previous studies, most lincRNAs have been identified through next-generation sequencing. Because lincRNAs exhibit tissue-specific expression, the reproducibility of lincRNA discovery across different studies is very poor. In this study, excluding lincRNA expression, we used sequence, structural and protein-coding-potential features as candidate features to construct a classifier that distinguishes lincRNAs from non-lincRNAs. The GA-SVM algorithm was used to extract the optimized feature subset. Five-fold cross validation showed that, compared with several other feature subsets, this optimized subset exhibited the best performance for the identification of human lincRNAs. Moreover, the LincRNA Classifier based on Selected Features (linc-SF) was constructed by a support vector machine (SVM) based on the optimized feature subset. The performance of this classifier was further evaluated by predicting lincRNAs from two independent lincRNA sets. Because the recognition rates for the two lincRNA sets were 100% and 99.8%, linc-SF was found to be effective for the prediction of human lincRNAs.
Genetic algorithm approach to aircraft gate reassignment problem
Gu, Y.; Chung, C.A.
1999-10-01
The aircraft gate reassignment problem occurs when the departure of an incoming aircraft is delayed or a delay occurs in flight. If the delay is significant enough to delay the arrival of subsequent incoming aircraft at the assigned gate, the airline must revise the gate assignments to minimize extra delay times. This paper describes a genetic algorithm approach to solving the gate reassignment problem. By using a global search technique on quantified information, this genetic algorithm approach can efficiently find minimum extra delayed time solutions that are as effective or more effective than solutions generated by experienced gate managers.
Investigation of new approaches for InGaN growth with high indium content for CPV application
NASA Astrophysics Data System (ADS)
Arif, Muhammad; Sundaram, Suresh; Streque, Jérémy; Gmili, Youssef El; Puybaret, Renaud; Belahsene, Sofiane; Ramdane, Abderahim; Martinez, Anthony; Patriarche, Gilles; Fix, Thomas; Slaoui, Abdelillah; Voss, Paul L.; Salvestrini, Jean Paul; Ougazzaden, Abdallah
2015-09-01
We propose two new approaches that may overcome the issues of phase separation and high dislocation density in InGaN-based PIN solar cells. The first approach consists of growing a thick multi-layered InGaN/GaN absorber: the periodic insertion of thin GaN interlayers should absorb the In excess and relieve compressive strain, provided the InGaN layers are thin enough to remain fully strained and free of phase separation. The second approach consists of growing InGaN nano-structures to achieve high-In-content thick InGaN layers. It eliminates the preexisting dislocations in the underlying template and allows strain relaxation of the InGaN layers without generating dislocations, leading to higher In incorporation and a reduced piezo-electric effect. Both approaches yield structural, morphological, and luminescence properties that are significantly improved compared to those of thick InGaN layers. Corresponding full PIN structures have been realized by growing a p-type GaN layer on top of the half-PIN structures. External quantum efficiency, electro-luminescence, and photo-current characterizations carried out on the different structures reveal enhanced performance of the InGaN PIN PV cells when the thick InGaN layer is replaced by either the InGaN/GaN multi-layer or the InGaN nanorod layer.
Using Hypertext To Develop an Algorithmic Approach to Teaching Statistics.
ERIC Educational Resources Information Center
Halavin, James; Sommer, Charles
Hypertext and its more advanced form Hypermedia represent a powerful authoring tool with great potential for allowing statistics teachers to develop documents to assist students in an algorithmic fashion. An introduction to the use of Hypertext is presented, with an example of its use. Hypertext is an approach to information management in which…
Moghri, Mehdi; Madic, Milos; Omidi, Mostafa; Farahnakian, Masoud
2014-01-01
During the past decade, polymer nanocomposites have attracted considerable investment in research and development worldwide. One of the key factors that affect the quality of polymer nanocomposite products in machining is surface roughness. To obtain high-quality products and reduce machining costs, it is very important to determine the optimal machining conditions so as to achieve enhanced machining performance. The objective of this paper is to develop a predictive model using a combined design-of-experiments and artificial intelligence approach for optimization of surface roughness in milling of polyamide-6 (PA-6) nanocomposites. A surface roughness predictive model was developed in terms of the milling parameters (spindle speed and feed rate) and nanoclay (NC) content using an artificial neural network (ANN). As the present study deals with a relatively small number of data points obtained from a full factorial design, the application of a genetic algorithm (GA) for ANN training is considered an appropriate approach for developing an accurate and robust ANN model. In the optimization phase, a GA is used in conjunction with the explicit nonlinear function derived from the ANN to determine the optimal milling parameters that minimize surface roughness for each PA-6 nanocomposite. PMID:24578636
Improving Polyp Detection Algorithms for CT Colonography: Pareto Front Approach.
Huang, Adam; Li, Jiang; Summers, Ronald M; Petrick, Nicholas; Hara, Amy K
2010-03-21
We investigated a Pareto front approach to improving polyp detection algorithms for CT colonography (CTC). A dataset of 56 CTC colon surfaces with 87 proven positive detections of 53 polyps sized 4 to 60 mm was used to evaluate the performance of a one-step and a two-step curvature-based region growing algorithm. The algorithmic performance was statistically evaluated and compared based on the Pareto optimal solutions from 20 experiments by evolutionary algorithms. The false positive rate was lower (p<0.05) by the two-step algorithm than by the one-step for 63% of all possible operating points. While operating at a suitable sensitivity level such as 90.8% (79/87) or 88.5% (77/87), the false positive rate was reduced by 24.4% (95% confidence intervals 17.9-31.0%) or 45.8% (95% confidence intervals 40.1-51.0%) respectively. We demonstrated that, with a proper experimental design, the Pareto optimization process can effectively help in fine-tuning and redesigning polyp detection algorithms.
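The Pareto-front comparison used here boils down to keeping the non-dominated (sensitivity, false-positive-rate) operating points. A minimal filter follows; the sample operating points are invented for illustration and are not from the study.

```python
def pareto_front(points):
    """Keep the non-dominated operating points. Each point is
    (sensitivity, fp_rate); q dominates p if q is at least as good on both
    axes (sensitivity up, false-positive rate down) and differs from p."""
    front = []
    for p in points:
        dominated = any(q[0] >= p[0] and q[1] <= p[1] and q != p for q in points)
        if not dominated:
            front.append(p)
    return front

# Hypothetical operating points from two detector configurations.
front = pareto_front([(0.9, 10), (0.8, 5), (0.85, 12), (0.95, 20)])
```

Here (0.85, 12) is dropped because (0.9, 10) is both more sensitive and produces fewer false positives; comparing two algorithms then reduces to comparing their fronts point by point.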
Library support for problem-based learning: an algorithmic approach.
Ispahany, Nighat; Torraca, Kathren; Chilov, Marina; Zimbler, Elaine R; Matsoukas, Konstantina; Allen, Tracy Y
2007-01-01
Academic health sciences libraries can take various approaches to support the problem-based learning component of the curriculum. This article presents one such approach taken to integrate information navigation skills into the small group discussion part of the Pathophysiology course in the second year of the Dental school curriculum. Along with presenting general resources for the course, the Library Toolkit introduced an algorithmic approach to finding answers to sample clinical case questions. While elements of Evidence-Based Practice were introduced, the emphasis was on teaching students to navigate relevant resources and apply various database search techniques to find answers to the clinical problems presented.
Stall Recovery Guidance Algorithms Based on Constrained Control Approaches
NASA Technical Reports Server (NTRS)
Stepanyan, Vahram; Krishnakumar, Kalmanje; Kaneshige, John; Acosta, Diana
2016-01-01
Aircraft loss of control, in particular approach to stall or fully developed stall, is a major factor contributing to aircraft safety risks, which emphasizes the need to develop algorithms capable of assisting pilots in identifying the problem and providing guidance to recover the aircraft. In this paper we present several stall recovery guidance algorithms, which run in the background without interfering with the flight control system or altering the pilot's actions. They use input- and state-constrained control methods to generate guidance signals, which are provided to the pilot in the form of visual cues. It is the pilot's decision whether to follow these signals. The algorithms are validated in a pilot-in-the-loop medium fidelity simulation experiment.
A genetic algorithm-based approach to flexible flow-line scheduling with variable lot sizes.
Lee, I; Sikora, R; Shaw, M J
1997-01-01
Genetic algorithms (GAs) have been used widely for such combinatorial optimization problems as the traveling salesman problem (TSP), the quadratic assignment problem (QAP), and job shop scheduling. In all of these problems there is usually a well-defined representation which GAs use to solve the problem. We present a novel approach for solving two related problems, lot sizing and sequencing, concurrently using GAs. The essence of our approach lies in the concept of using a unified representation for the information about both the lot sizes and the sequence, and enabling GAs to evolve the chromosome by replacing primitive genes with good building blocks. In addition, a simulated annealing procedure is incorporated to further improve the performance. We evaluate the performance of applying the above approach to flexible flow line scheduling with variable lot sizes for an actual manufacturing facility, comparing it to such alternative approaches as pairwise exchange improvement, tabu search, and simulated annealing procedures. The results show the efficacy of this approach for flexible flow line scheduling. PMID:18255838
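The sequencing half of such a GA can be sketched with a permutation chromosome and order crossover (OX). The job data are synthetic, and the lot-sizing genes and simulated annealing step from the paper are omitted for brevity; only the "well-defined representation" point is illustrated.

```python
import random

random.seed(2)

# Hypothetical jobs: (processing time, due date, weight).
JOBS = [(random.randint(1, 10), random.randint(5, 40), random.randint(1, 3))
        for _ in range(10)]

def weighted_tardiness(seq):
    t, total = 0, 0
    for j in seq:
        p, d, w = JOBS[j]
        t += p
        total += w * max(0, t - d)
    return total

def order_crossover(a, b):
    # OX: copy a slice from parent a, fill the remaining positions
    # with the missing jobs in parent b's relative order.
    i, j = sorted(random.sample(range(len(a)), 2))
    hole = set(a[i:j])
    rest = [g for g in b if g not in hole]
    return rest[:i] + a[i:j] + rest[i:]

def ga(pop_size=30, generations=100):
    pop = [random.sample(range(len(JOBS)), len(JOBS)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=weighted_tardiness)
        parents = pop[:pop_size // 2]               # elitist truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = order_crossover(a, b)
            x, y = random.sample(range(len(child)), 2)  # swap mutation
            child[x], child[y] = child[y], child[x]
            children.append(child)
        pop = parents + children
    return min(pop, key=weighted_tardiness)

best = ga()
print(weighted_tardiness(best))
```

OX is the standard way to keep a permutation chromosome valid under crossover, which is exactly the representation issue the abstract highlights for sequencing problems.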
A genetic algorithm approach for assessing soil liquefaction potential based on reliability method
NASA Astrophysics Data System (ADS)
Bagheripour, M. H.; Shooshpasha, I.; Afzalirad, M.
2012-02-01
Deterministic approaches are unable to account for variations in soil strength properties, earthquake loads, and other sources of error in evaluations of liquefaction potential in sandy soils, which makes them questionable compared with reliability-based concepts. Furthermore, deterministic approaches are incapable of precisely relating the probability of liquefaction to the factor of safety (FS). Therefore, probabilistic approaches, and especially reliability analysis, are considered as a complementary solution for reaching better engineering decisions. In this study, the Advanced First-Order Second-Moment (AFOSM) technique, combined with a genetic algorithm (GA) and its corresponding optimization techniques, is used to calculate the reliability index and the probability of liquefaction. The use of a GA provides a reliable mechanism suitable for computer programming and fast convergence. A new relation is developed here by which the liquefaction potential can be directly calculated from the estimated probability of liquefaction (PL), the cyclic stress ratio (CSR), and normalized standard penetration test (SPT) blow counts, with a mean error of less than 10% relative to the observational data. The validity of the proposed concept is examined by comparing the results obtained by the new relation with those predicted by other investigators. A further advantage of the proposed relation is that it relates PL and FS, and hence allows decision making based on liquefaction risk alongside the use of deterministic approaches. This could be beneficial to geotechnical engineers who use the common FS methods for evaluation of liquefaction. As an application, the city of Babolsar, located on the southern coast of the Caspian Sea, is investigated for liquefaction potential. The investigation is based primarily on in situ tests, in which the results of SPT are analysed.
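The AFOSM-plus-GA idea, searching for the design point that minimizes the distance to the origin in standard normal space subject to the limit state g = 0, can be sketched with a penalty-based GA. The linear limit state and every coefficient below are hypothetical, chosen only so that the exact reliability index is known for checking.

```python
import math
import random

random.seed(5)

# Illustrative limit state in standard normal space (u1: resistance-like,
# u2: load-like variable): g > 0 is safe, g <= 0 means liquefaction.
def g(u1, u2):
    return 3.0 + 1.5 * u1 - 1.0 * u2

def ga_beta(pop_size=40, generations=80):
    # Reliability index = min distance from the origin to g = 0,
    # handled here by penalizing |g| in the fitness.
    def fitness(ind):
        u1, u2 = ind
        return math.hypot(u1, u2) + 10.0 * abs(g(u1, u2))

    pop = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[:pop_size // 2]               # elitist selection
        children = []
        while len(parents) + len(children) < pop_size:
            a = random.choice(parents)
            b = random.choice(parents)
            child = ((a[0] + b[0]) / 2 + random.gauss(0, 0.2),   # arithmetic
                     (a[1] + b[1]) / 2 + random.gauss(0, 0.2))   # crossover + mutation
            children.append(child)
        pop = parents + children
    best = min(pop, key=fitness)
    return math.hypot(*best)

beta = ga_beta()
print(beta)  # exact design-point distance for this linear g is 3 / sqrt(1.5**2 + 1**2)
```

For a linear limit state the design point is known in closed form, which is why this toy case is useful; the GA's value, as in the paper, lies in nonlinear limit states where no closed form exists.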
A genetic algorithmic approach to antenna null-steering using a cluster computer.
NASA Astrophysics Data System (ADS)
Recine, Greg; Cui, Hong-Liang
2001-06-01
We apply a genetic algorithm (GA) to the problem of electronically steering the maxima and nulls of an antenna array to desired positions (null toward an enemy listener/jammer, maximum toward a friendly listener/transmitter). The antenna pattern itself is computed using NEC2, which is called by the main GA program. Since a GA naturally lends itself to parallelization, this simulation was run on our new twin 64-node cluster computers (Gemini). Design issues and uses of the Gemini cluster in our group are also discussed.
Modeling and Algorithmic Approaches to Constitutively-Complex, Microstructured Fluids
Miller, Gregory H.; Forest, Gregory
2014-05-01
We present a new multiscale model for complex fluids based on three scales: microscopic, kinetic, and continuum. We choose the microscopic level as Kramers' bead-rod model for polymers, which we describe as a system of stochastic differential equations with an implicit constraint formulation. The associated Fokker-Planck equation is then derived, and adiabatic elimination removes the fast momentum coordinates. Approached in this way, the kinetic level reduces to a dispersive drift equation. The continuum level is modeled with a finite volume Godunov-projection algorithm. We demonstrate computation of viscoelastic stress divergence using this multiscale approach.
A new distributed systems scheduling algorithm: a swarm intelligence approach
NASA Astrophysics Data System (ADS)
Haghi Kashani, Mostafa; Sarvizadeh, Raheleh; Jameii, Mahdi
2011-12-01
The scheduling problem in distributed systems is known to be NP-complete, and methods based on heuristic or metaheuristic search have been proposed to obtain optimal and suboptimal solutions. Task scheduling is a key factor for distributed systems to achieve better performance. In this paper, an efficient method based on a memetic algorithm is developed to solve the distributed systems scheduling problem. To balance load efficiently, Artificial Bee Colony (ABC) is applied as the local search in the proposed memetic algorithm. The proposed method is compared to an existing memetic-based approach in which Learning Automata is used as the local search. The results demonstrate that the proposed method outperforms the aforementioned method in terms of communication cost.
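A memetic algorithm in this spirit, a GA whose offspring are refined by a local search before rejoining the population, can be sketched on a toy task-to-machine assignment problem. Simple hill climbing stands in for the paper's ABC local search, and the task data are synthetic.

```python
import random

random.seed(1)

TASKS = [random.randint(1, 20) for _ in range(12)]  # task processing times
N_MACHINES = 3

def makespan(assign):
    loads = [0] * N_MACHINES
    for task, m in zip(TASKS, assign):
        loads[m] += task
    return max(loads)

def local_search(assign):
    # Hill climbing stand-in for ABC: try moving single tasks between
    # machines while the makespan strictly improves.
    assign = list(assign)
    improved = True
    while improved:
        improved = False
        for i in range(len(assign)):
            for m in range(N_MACHINES):
                if m != assign[i]:
                    trial = list(assign)
                    trial[i] = m
                    if makespan(trial) < makespan(assign):
                        assign = trial
                        improved = True
    return assign

def memetic(pop_size=20, generations=30):
    pop = [[random.randrange(N_MACHINES) for _ in TASKS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=makespan)
        survivors = pop[:pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(TASKS))
            child = a[:cut] + b[cut:]                 # one-point crossover
            i = random.randrange(len(TASKS))          # point mutation
            child[i] = random.randrange(N_MACHINES)
            children.append(local_search(child))      # memetic refinement step
        pop = survivors + children
    return min(pop, key=makespan)

best = memetic()
print(makespan(best))
```

The defining memetic feature is the `local_search` call applied to every offspring: the GA explores globally while the local search polishes each candidate, which is the division of labor the abstract describes between the GA and ABC.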
Fabrication of normally-off GaN nanowire gate-all-around FET with top-down approach
NASA Astrophysics Data System (ADS)
Im, Ki-Sik; Won, Chul-Ho; Vodapally, Sindhuri; Caulmilone, Raphaël; Cristoloveanu, Sorin; Kim, Yong-Tae; Lee, Jung-Hee
2016-10-01
A lateral GaN nanowire gate-all-around transistor has been fabricated with a top-down process and characterized. A triangle-shaped GaN nanowire with 56 nm width was implemented on a GaN-on-insulator (GaNOI) wafer by utilizing (i) buried oxide as a sacrificial layer and (ii) anisotropic lateral wet etching of GaN in tetramethylammonium hydroxide solution. During subsequent GaN and AlGaN epitaxy of the source/drain planar regions, no growth occurred on the nanowire due to a self-limiting growth property. Transmission electron microscopy and energy-dispersive X-ray spectroscopy elemental mapping reveal that the GaN nanowire consists of only Ga and N atoms. The transistor exhibits normally-off operation with a threshold voltage of 3.5 V and promising performance: a maximum drain current of 0.11 mA, a maximum transconductance of 0.04 mS, a record off-state leakage current of ~10^-13 A/mm, and a very high Ion/Ioff ratio of 10^8. The proposed top-down device concept using the GaNOI wafer enables the fabrication of multiple parallel nanowires with positive threshold voltage and is advantageous compared with the bottom-up approach.
A simple algorithm to compute the peak power output of GaAs/Ge solar cells on the Martian surface
Glueck, P.R.; Bahrami, K.A.
1995-12-31
The Jet Propulsion Laboratory's (JPL's) Mars Pathfinder Project will deploy a robotic "microrover" on the surface of Mars in the summer of 1997. This vehicle will derive primary power from a GaAs/Ge solar array during the day and will "sleep" at night. This strategy requires that the rover be able to (1) determine when it is necessary to save the contents of volatile memory late in the afternoon and (2) determine when sufficient power is available to resume operations in the morning. An algorithm was developed that estimates the peak power point of the solar array from the solar array short-circuit current and temperature telemetry, and provides functional redundancy for both measurements using the open-circuit voltage telemetry. The algorithm minimizes vehicle processing and memory utilization by using linear equations instead of look-up tables to estimate peak power with very little loss in accuracy. This paper describes the method used to obtain the algorithm and presents the detailed algorithm design.
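The linear-equation idea can be illustrated with a toy estimator. Both the model form and the coefficients below are made-up placeholders, not the flight values, which the paper says were derived from array characterization data.

```python
# Hypothetical coefficients (illustrative only, not the Pathfinder values).
C0, C1 = 4.2, -0.011   # peak-power voltage intercept (V) and temperature slope (V/degC)
K_I = 0.9              # assumed ratio of peak-power current to short-circuit current

def peak_power_estimate(i_sc_amps, temp_c):
    """Linear peak-power estimate from short-circuit current and temperature.

    A sketch of the linear-equations-instead-of-lookup-tables approach:
    peak-power voltage falls linearly with cell temperature, and peak-power
    current is taken proportional to the measured short-circuit current.
    """
    v_mp = C0 + C1 * temp_c
    i_mp = K_I * i_sc_amps
    return v_mp * i_mp

print(peak_power_estimate(1.0, 20.0))
```

Two multiplies and an add per estimate is exactly the kind of arithmetic budget a flight processor of that era could afford, which is the motivation the abstract gives for avoiding look-up tables.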
NASA Astrophysics Data System (ADS)
Jamshidi, Saeid; Boozarjomehry, Ramin Bozorgmehry; Pishvaie, Mahmoud Reza
2009-10-01
In pore network modeling, the void space of a rock sample is represented at the microscopic scale by a network of pores connected by throats. Construction of a reasonable representation of the geometry and topology of the pore space will lead to a reliable prediction of the properties of porous media. Recently, the theory of multi-cellular growth (or L-systems) has been used as a flexible tool for generating pore network models that do not require any special information such as 2D SEM or 3D pore space images. In general, the networks generated by this method are irregular pore network models, which are inherently closer to the complicated nature of porous media than regular lattice networks. In this approach, the construction process is controlled only by the production rules that govern the development of the network. In this study, a genetic algorithm has been used to obtain the optimum values of the uncertain parameters of these production rules, in order to build an irregular network capable of predicting both the static and hydraulic properties of the target porous medium.
Peng, Bin; Liu, Ke-ling; Li, Zhi-min; Wang, Yue-song; Huang, Tu-jiang
2002-06-01
A genetic algorithm (GA) is used for automatic qualitative analysis with a sequential inductively coupled plasma spectrometer (ICP-AES), and a computer program is developed in this paper. No standard samples are needed, and spectroscopic interferences can be eliminated. All elements of an unknown sample, and their concentration ranges, can be reported. The replication rate Pr, crossover rate Pc, and mutation rate of the genetic algorithm were set to 0.6, 0.4, and 0, respectively. The analytical results of the GA are in good agreement with the reference values. This indicates that, combined with intensity information, the GA can be applied to spectroscopic qualitative analysis and is expected to become an effective method for qualitative analysis in ICP-AES after further work. PMID:12938334
NASA Astrophysics Data System (ADS)
Hung, Ching-Wen; Chang, Ching-Hong; Chen, Wei-Cheng; Chen, Chun-Chia; Chen, Huey-Ing; Tsai, Yu-Ting; Tsai, Jung-Hui; Liu, Wen-Chau
2016-10-01
Based on an electrophoretic deposition (EPD) gate approach, a Pt/AlGaN/GaN heterostructure field-effect transistor (HFET) is fabricated and investigated at elevated temperatures. A nearly oxide-free Pt/AlGaN interface is verified by an Auger electron spectroscopy (AES) depth profile for the studied EPD-HFET. This result substantially enhances device performance at room temperature (300 K). Experimentally, the studied EPD-HFET exhibits a high turn-on voltage, good suppression of gate leakage, a superior maximum drain saturation current, and an excellent extrinsic transconductance. Moreover, the microwave performance of the EPD-HFET is demonstrated at room temperature. Consequently, this EPD-gate approach shows promise for high-performance electronic applications.
Periprosthetic joint infection: the algorithmic approach and emerging evidence.
Parvizi, Javad; Heller, Snir; Berend, Keith R; Della Valle, Craig J; Springer, Bryan D
2015-01-01
Periprosthetic joint infections (PJIs) continue to affect patients, result in accelerated mortality, and consume approximately $1 billion of annual healthcare resources. The future of otherwise successful total joint arthroplasties can be jeopardized by PJI. In recent years, the issue of hospital-acquired infections has gained increasing attention in the United States and the rest of the world, and numerous efforts are being made to address this problem. The orthopaedic community continues to partner with societies, professional organizations, and industry to address this challenge. Recently, an international group of more than 300 surgical experts produced a 350-page document that outlines some of the best practices and identifies the evidence gap related to the management of PJIs. The document, using an algorithmic approach, outlines effective strategies for the prevention, diagnosis, and surgical management of PJIs. It is anticipated that the application of this algorithmic approach will lead to a reduction in the incidence of PJIs, will allow clinicians to diagnose PJI effectively and expeditiously, and will improve the outcome of patients affected by PJIs. PMID:25745894
NASA Astrophysics Data System (ADS)
Panwar, Ravi; Agarwala, Vijaya; Singh, Dharmendra
2014-10-01
The bandwidth-thickness tradeoff of single-layer microwave absorbers has become a challenge for researchers. This paper presents experimental results for thin broadband multilayer microwave absorbing structures using magnetic ceramic based nano-composites for absorption in the X-band. A genetic algorithm (GA) based approach has been used to optimize the thickness of the different material layers and the selection of suitable materials to ensure minimum reflection. The parameters optimized through the genetic algorithm have been simulated with the Ansoft High Frequency Structure Simulator (HFSS) and experimentally verified with an absorption testing device (ATD). The peak value of reflection loss is found to be -24.53 dB for a 1.3 mm absorber layer coating thickness, which shows the effectiveness of the absorber for various applications.
Regionalization by fuzzy expert system based approach optimized by genetic algorithm
NASA Astrophysics Data System (ADS)
Chavoshi, Sattar; Azmin Sulaiman, Wan Nor; Saghafian, Bahram; Bin Sulaiman, Md. Nasir; Manaf, Latifah Abd
2013-04-01
In recent years, soft computing methods have been increasingly used to model complex hydrologic processes. These methods can simulate real-life processes without prior knowledge of the exact relationship between their components. The principal aim of this paper is to perform hydrological regionalization based on soft computing concepts in the southern strip of the Caspian Sea basin, north of Iran. The basin, with an area of 42,400 sq. km, has been affected by severe floods in recent years that caused damage to human life and property. Although some 61 hydrometric stations and 31 weather stations with 44 years of observed data (1961-2005) operate in the study area, previous flood studies in this region have been hampered by insufficient and/or unreliable observed rainfall-runoff records. In order to investigate the homogeneity (h) of catchments and overcome incompatibilities that may occur on the boundaries of cluster groups, a fuzzy expert system (FES) approach is used which incorporates physical and climatic characteristics, as well as flood seasonality and geographic location. A genetic algorithm (GA) was employed to adjust the parameters of the FES and optimize the system. To this end, a MATLAB program was developed which takes a heterogeneity criterion of less than 1 (H < 1) as the satisfying criterion. The adopted approach was found superior to conventional hydrologic regionalization methods in the region because it employs a greater number of homogeneity parameters and produces lower values of the heterogeneity criterion.
Rafiei, Hamid; Khanzadeh, Marziyeh; Mozaffari, Shahla; Bostanifar, Mohammad Hassan; Avval, Zhila Mohajeri; Aalizadeh, Reza; Pourbasheer, Eslam
2016-01-01
A quantitative structure-activity relationship (QSAR) study has been employed for predicting the inhibitory activities of Hepatitis C virus (HCV) NS5B polymerase inhibitors. A data set consisting of 72 compounds was selected, and different types of molecular descriptors were calculated. The whole data set was split into a training set (80% of the dataset) and a test set (20% of the dataset) using principal component analysis. The stepwise (SW) and genetic algorithm (GA) techniques were used as variable selection tools. The multiple linear regression method was then used to linearly correlate the selected descriptors with the inhibitory activities. Several validation techniques, including leave-one-out and leave-group-out cross-validation and the Y-randomization method, were used to evaluate the internal capability of the derived models. The external prediction ability of the derived models was further analyzed using modified r^2, concordance correlation coefficient values, and the Golbraikh and Tropsha acceptable model criteria. Based on the derived results (GA-MLR), some new insights toward the molecular structural requirements for obtaining better inhibitory activity were obtained. PMID:27065774
NASA Astrophysics Data System (ADS)
Wang, Li-yong; Li, Le; Zhang, Zhi-hua
2016-09-01
Hot compression tests of Ti-6Al-4V alloy over a wide temperature range of 1023-1323 K and strain rate range of 0.01-10 s^-1 were conducted on a servo-hydraulic, computer-controlled Gleeble-3500 machine. In order to accurately and effectively characterize the highly nonlinear flow behaviors, support vector regression (SVR), a machine learning method, was combined with a genetic algorithm (GA), yielding the GA-SVR. A prominent characteristic of the GA-SVR is that, with identical training parameters, it keeps training accuracy and prediction accuracy at a stable level across different attempts on a given dataset. The learning abilities, generalization abilities, and modeling efficiencies of a mathematical regression model, an ANN, and the GA-SVR for Ti-6Al-4V alloy were compared in detail. The comparison shows that the learning ability of the GA-SVR is stronger than that of the mathematical regression model, and that the generalization abilities and modeling efficiencies rank, in ascending order: mathematical regression model < ANN < GA-SVR. Stress-strain data outside the experimental conditions were predicted by the well-trained GA-SVR, which improved the simulation accuracy of the load-stroke curve and can further benefit related research areas where stress-strain data play an important role, such as studying work hardening and dynamic recovery, characterizing dynamic recrystallization evolution, and improving processing maps.
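The GA-over-hyperparameters idea can be sketched as follows. To stay dependency-free, RBF kernel ridge regression stands in for SVR, and the flow-stress data are replaced by a synthetic smooth function; the GA searches a log-scaled kernel width and regularization strength, scoring candidates by validation error.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in for flow-stress data: a smooth nonlinear response.
X = rng.uniform(-1, 1, size=(80, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2
X_tr, y_tr, X_va, y_va = X[:60], y[:60], X[60:], y[60:]

def rbf(A, B, gamma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def val_error(params):
    gamma, lam = params
    K = rbf(X_tr, X_tr, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(X_tr)), y_tr)
    pred = rbf(X_va, X_tr, gamma) @ alpha
    return np.mean((pred - y_va) ** 2)

def ga_search(pop_size=16, generations=20):
    # Genes: log10(gamma) in [-2, 2], log10(lambda) in [-6, 0].
    pop = rng.uniform([-2, -6], [2, 0], size=(pop_size, 2))
    for _ in range(generations):
        fit = np.array([val_error(10.0 ** ind) for ind in pop])
        parents = pop[np.argsort(fit)[:pop_size // 2]]   # elitist selection
        children = parents[rng.integers(len(parents), size=pop_size - len(parents))]
        children = children + rng.normal(0, 0.3, children.shape)  # mutation
        children = np.clip(children, [-2, -6], [2, 0])
        pop = np.vstack([parents, children])
    fit = np.array([val_error(10.0 ** ind) for ind in pop])
    return 10.0 ** pop[np.argmin(fit)]

gamma, lam = ga_search()
print(val_error((gamma, lam)))
```

Searching the hyperparameters on a log scale is the standard trick for kernel methods, since useful values of the kernel width and regularization span several orders of magnitude.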
Interior search algorithm (ISA): a novel approach for global optimization.
Gandomi, Amir H
2014-07-01
This paper presents the interior search algorithm (ISA) as a novel method for solving optimization tasks. The proposed ISA is inspired by interior design and decoration. The algorithm differs from other metaheuristic algorithms and provides new insight for global optimization. The proposed method is verified using benchmark mathematical and engineering problems commonly used in the area of optimization. ISA results are further compared with those of well-known optimization algorithms. The results show that the ISA is capable of efficiently solving optimization problems and can outperform the other well-known algorithms. Further, the proposed algorithm is very simple and has only one parameter to tune.
Evaluation of an algorithmic approach to pediatric back pain.
Feldman, David S; Straight, Joseph J; Badra, Mohammad I; Mohaideen, Ahamed; Madan, Sanjeev S
2006-01-01
Pediatric patients require a systematic approach to treating back pain that minimizes the number of diagnostic studies without missing specific diagnoses. This study reviews an algorithm for the evaluation of pediatric back pain and assesses critical factors in the history and physical examination that are predictive of specific diagnoses. Eighty-seven pediatric patients with thoracic and/or lumbar back pain were treated following this algorithm. If initial plain radiographs were positive, patients were considered to have a specific diagnosis. If negative, patients with constant pain, night pain, radicular pain, and/or an abnormal neurological examination obtained a follow-up magnetic resonance imaging study. Patients with negative radiographs and intermittent pain were diagnosed with nonspecific back pain. Twenty-one (24%) of 87 patients had positive radiographs and were treated for their specific diagnoses. Nineteen (29%) of 66 patients with negative radiographs had constant pain, night pain, radicular pain, and/or an abnormal neurological examination. Ten of these 19 patients had a specific diagnosis determined by magnetic resonance imaging. Therefore, 31 (36%) of 87 patients had a specific diagnosis. The back pain of the other 56 patients was nonspecific. No specific diagnoses were missed at latest follow-up. Specificity for determining a specific diagnosis was very high for radicular pain (100%), an abnormal neurological examination (100%), and night pain (95%). Radicular pain and an abnormal neurological examination also had a high positive predictive value (100%). Lumbar pain was the most sensitive (67%) and had the highest negative predictive value (75%). This algorithm appears to be an effective tool for diagnosing pediatric back pain, which should help reduce costs and patient/family anxiety and avoid unnecessary radiation exposure.
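The decision flow described above reduces to a short branch structure. This sketch is purely illustrative of the published algorithm's logic and is not clinical guidance.

```python
def evaluate_back_pain(radiograph_positive, constant_pain, night_pain,
                       radicular_pain, abnormal_neuro_exam):
    """Sketch of the paper's triage flow (illustrative only, not clinical advice)."""
    if radiograph_positive:
        return "specific diagnosis"        # treat per the radiographic finding
    if constant_pain or night_pain or radicular_pain or abnormal_neuro_exam:
        return "follow-up MRI"             # red flags despite negative films
    return "nonspecific back pain"         # intermittent pain, negative films

print(evaluate_back_pain(False, False, True, False, False))
```

The point of the branch order is cost containment: plain radiographs gate the expensive MRI, and MRI is reserved for the red-flag findings the study found most predictive.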
NASA Astrophysics Data System (ADS)
Lindsay, Anthony; McCloskey, John; Simão, Nuno; Murphy, Shane; Bhloscaidh, Mairead Nic
2014-05-01
Identifying fault sections where slip deficits have accumulated may provide a means for understanding sequences of large megathrust earthquakes. Stress accumulated during the interseismic period on an active megathrust is stored as potential slip, referred to as slip deficit, along locked sections of the fault. Analysis of the spatial distribution of slip during antecedent events along the fault will show where the locked plate has spent its stored slip. Areas of unreleased slip indicate where the potential for large events remains. The location of recent earthquakes and their distribution of slip can be estimated from instrumentally recorded seismic and geodetic data. However, long-term slip-deficit modelling requires detailed information on the size and distribution of slip for pre-instrumental events over hundreds of years, covering more than one 'seismic cycle'. This requires the exploitation of proxy sources of data. Coral microatolls, growing in the intertidal zone of the outer island arc of the Sunda trench, present the possibility of reconstructing slip for a number of pre-instrumental earthquakes. Their growth is influenced by tectonic flexing of the continental plate beneath them; they act as long-term recorders of the vertical component of deformation. However, the sparse distribution of data available using coral geodesy results in an underdetermined problem with non-unique solutions. Rather than accepting any one realisation as the definitive model satisfying the coral displacement data, a Monte Carlo approach identifies a suite of models consistent with the observations. Using a genetic algorithm to accelerate the identification of desirable models, we have developed a Monte Carlo Slip Estimator-Genetic Algorithm (MCSE-GA) which exploits the full range of uncertainty associated with the displacements. Each iteration of the MCSE-GA samples different values from within the spread of uncertainties associated with each coral displacement. The Genetic
Planning additional drilling campaign using two-space genetic algorithm: A game theoretical approach
NASA Astrophysics Data System (ADS)
Kumral, Mustafa; Ozer, Umit
2013-03-01
Grade and tonnage are the most important technical uncertainties in mining ventures because of the use of estimations/simulations, which are mostly generated from drill data. Open pit mines are planned and designed on the basis of blocks representing the entire orebody. Each block has a different estimation/simulation variance, reflecting uncertainty to some extent. The estimation/simulation realizations are submitted to the mine production scheduling process. However, the use of a block model with varying estimation/simulation variances will lead to serious risk in the scheduling. Given multiple simulations, the dispersion variances of blocks might be thought to capture technical uncertainties; however, the dispersion variance cannot handle the uncertainty associated with varying estimation/simulation variances of blocks. This paper proposes an approach that generates the configuration of the best additional drilling campaign so as to produce more homogeneous estimation/simulation variances of blocks. In other words, the objective is to find the best drilling configuration that minimizes grade uncertainty under a budget constraint. The uncertainty measure of the optimization process in this paper is the interpolation variance, which considers data locations and grades. The problem is expressed as a minmax problem, which focuses on finding the best worst-case performance, i.e., minimizing the interpolation variance of the block generating the maximum interpolation variance. Since the optimization model requires computing the interpolation variances of the blocks being simulated/estimated in each iteration, the problem cannot be solved by standard optimization tools. This motivates the use of a two-space genetic algorithm (GA) approach to solve the problem. The technique has two spaces: feasible drill hole configurations, with minimization of interpolation variance, and drill hole simulations, with maximization of interpolation variance. The two spaces interact to find a minmax solution.
Newton Algorithms for Analytic Rotation: An Implicit Function Approach
ERIC Educational Resources Information Center
Boik, Robert J.
2008-01-01
In this paper implicit function-based parameterizations for orthogonal and oblique rotation matrices are proposed. The parameterizations are used to construct Newton algorithms for minimizing differentiable rotation criteria applied to "m" factors and "p" variables. The speed of the new algorithms is compared to that of existing algorithms and to…
Approach to complex upper extremity injury: an algorithm.
Ng, Zhi Yang; Askari, Morad; Chim, Harvey
2015-02-01
Patients with complex upper extremity injuries represent a unique subset of the trauma population. In addition to extensive soft tissue defects affecting the skin, bone, muscles and tendons, or the neurovasculature in various combinations, there is usually concomitant involvement of other body areas and organ systems, with the potential for systemic compromise due to the underlying mechanism of injury and resultant sequelae. In turn, this has a direct impact on the definitive reconstructive plan. Accurate assessment and expedient treatment are thus necessary to achieve optimal surgical outcomes, with the primary goal of limb salvage and functional restoration. Nonetheless, the characteristics of these injuries place such patients at an increased risk of complications ranging from limb ischemia, recalcitrant infections, failure of bony union, and intractable pain to, most devastatingly, limb amputation. In this article, the authors present an algorithmic approach toward complex injuries of the upper extremity, with due consideration for the various reconstructive modalities and the timing of definitive wound closure for the best possible clinical outcomes. PMID:25685098
Genetic algorithm approach for adaptive power and subcarrier allocation in multi-user OFDM systems
NASA Astrophysics Data System (ADS)
Reddy, Y. B.; Naraghi-Pour, Mort
2007-04-01
In this paper, a novel genetic algorithm application is proposed for adaptive power and subcarrier allocation in multi-user Orthogonal Frequency Division Multiplexing (OFDM) systems. To test the application, a simple genetic algorithm was implemented in the MATLAB language. With the goal of minimizing the overall transmit power while ensuring the fulfillment of each user's rate and bit error rate (BER) requirements, the proposed algorithm acquires the needed allocation through genetic search. The simulations were run for BERs from 0.1 to 0.00001, a data rate of 256 bits per OFDM block, and a chromosome length of 128. The results show that the genetic algorithm outperforms the results in [3] in subcarrier allocation. The GA model with 8 users and 128 subcarriers achieves a lower power requirement than the approach in [4], but converges more slowly.
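A minimal sketch of the genetic search described above, assuming a much-simplified rate model (power p = (2^b - 1)/g per subcarrier with a fixed bit load and no explicit BER term); the chromosome maps each subcarrier to a user:

```python
import random

def total_power(assignment, gains, bits_per_subcarrier=2):
    # Power needed on each subcarrier so that its assigned user carries the
    # fixed bit load, under the simplified model p = (2^b - 1) / g.
    return sum((2 ** bits_per_subcarrier - 1) / gains[user][sc]
               for sc, user in enumerate(assignment))

def ga_allocate(gains, pop_size=20, generations=40, seed=1):
    # Chromosome: one user index per subcarrier; fitness: total transmit power.
    random.seed(seed)
    n_users, n_sc = len(gains), len(gains[0])
    pop = [[random.randrange(n_users) for _ in range(n_sc)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda a: total_power(a, gains))
        elite = pop[:pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, n_sc)
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:  # mutation: reassign one subcarrier
                child[random.randrange(n_sc)] = random.randrange(n_users)
            children.append(child)
        pop = elite + children
    return min(pop, key=lambda a: total_power(a, gains))
```

A real implementation would add per-user rate constraints and BER-dependent bit loading to the fitness function.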
A Functional Programming Approach to AI Search Algorithms
ERIC Educational Resources Information Center
Panovics, Janos
2012-01-01
The theory and practice of search algorithms related to state-space represented problems form the major part of the introductory course of Artificial Intelligence at most of the universities and colleges offering a degree in the area of computer science. Students usually meet these algorithms only in some imperative or object-oriented language…
Romero, Eduardo; Martínez, Alfonso; Oteo, Marta; García, Angel; Morcillo, Miguel Angel
2016-01-01
(68)Ga-DOTA-peptides are promising PET radiotracers used in the detection of different tumour types due to their ability to bind specifically to receptors overexpressed in these tumours. Furthermore, (68)Ga can be produced on site by a (68)Ge/(68)Ga generator, which is a very good alternative to cyclotron-based PET isotopes. Here, we describe a manual labelling approach for the synthesis of (68)Ga-labelled DOTA-peptides based on concentration and purification of the commercial (68)Ge/(68)Ga generator eluate using an anion-exchange cartridge. (68)Ga-DOTA-TATE was used to image a pheochromocytoma xenograft mouse model with a microPET/CT scanner. The method described provides satisfactory results, allowing the subsequent use of (68)Ga to label DOTA-peptides. The simplicity of the method, along with its reduced implementation cost, makes it useful in preclinical PET studies. PMID:26492321
Ocean feature recognition using genetic algorithms with fuzzy fitness functions (GA/F3)
NASA Technical Reports Server (NTRS)
Ankenbrandt, C. A.; Buckles, B. P.; Petry, F. E.; Lybanon, M.
1990-01-01
A model for genetic algorithms with semantic nets is derived, in which the relationships between concepts are depicted as a semantic net. An organism represents the manner in which objects in a scene are attached to concepts in the net. Predicates between object pairs are continuous-valued truth functions in the form of an inverse exponential function (e^(-β|x|)). 1:n relationships are combined via the fuzzy OR (max(...)). Finally, predicates between pairs of concepts are resolved by taking the average of the combined predicate values of the objects attached to the concept at the tail of the arc representing the predicate in the semantic net. The method is illustrated by applying it to the identification of oceanic features in the North Atlantic.
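The fuzzy fitness evaluation described above can be written down directly; the β value and the discrepancy inputs below are illustrative:

```python
import math

def predicate_truth(x, beta=0.5):
    # Continuous-valued truth of a predicate between two objects,
    # an inverse exponential of the measured discrepancy x.
    return math.exp(-beta * abs(x))

def concept_pair_truth(discrepancy_sets, beta=0.5):
    # For each object attached to the tail concept, combine its 1:n
    # relationships with the fuzzy OR (max), then average over objects.
    per_object = [max(predicate_truth(x, beta) for x in xs)
                  for xs in discrepancy_sets]
    return sum(per_object) / len(per_object)
```

A zero discrepancy gives truth 1.0, and larger discrepancies decay smoothly toward 0, so the GA receives a graded rather than binary fitness signal.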
Flower pollination algorithm: A novel approach for multiobjective optimization
NASA Astrophysics Data System (ADS)
Yang, Xin-She; Karamanoglu, Mehmet; He, Xingshi
2014-09-01
Multiobjective design optimization problems require multiobjective optimization techniques to solve, and it is often very challenging to obtain high-quality Pareto fronts accurately. In this article, the recently developed flower pollination algorithm (FPA) is extended to solve multiobjective optimization problems. The proposed method is used to solve a set of multiobjective test functions and two bi-objective design benchmarks, and a comparison of the proposed algorithm with other algorithms has been made, which shows that the FPA is efficient with a good convergence rate. Finally, the importance for further parametric studies and theoretical analysis is highlighted and discussed.
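A minimal single-objective sketch of the pollination mechanics described above, with the Lévy flights of the published FPA replaced by plain Gaussian steps for brevity; all parameter values are illustrative:

```python
import random

def fpa_minimize(objective, dim=2, n_flowers=15, iters=100, p_switch=0.8, seed=2):
    # Simplified flower pollination search: with probability p_switch a flower
    # takes a global step toward the current best solution (Levy flights
    # replaced here by a Gaussian step); otherwise it mixes with two randomly
    # chosen flowers (local pollination). Moves are accepted greedily.
    random.seed(seed)
    flowers = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_flowers)]
    best = min(flowers, key=objective)
    for _ in range(iters):
        for i, f in enumerate(flowers):
            if random.random() < p_switch:
                step = [random.gauss(0, 1) * (b - x) for x, b in zip(f, best)]
            else:
                a, c = random.sample(flowers, 2)
                step = [random.random() * (u - v) for u, v in zip(a, c)]
            candidate = [x + s for x, s in zip(f, step)]
            if objective(candidate) < objective(f):
                flowers[i] = candidate
        best = min(flowers, key=objective)
    return best
```

A multiobjective version, as in the article, would replace the scalar objective with Pareto-based ranking or a weighted-sum decomposition.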
An Effective GA-Based Scheduling Algorithm for FlexRay Systems
NASA Astrophysics Data System (ADS)
Ding, Shan; Tomiyama, Hiroyuki; Takada, Hiroaki
An advanced communication system, the FlexRay system, has been developed for future automotive applications. It consists of time-triggered clusters, such as drive-by-wire in cars, designed to meet the different requirements and constraints among various sensors, processors, and actuators. In this paper, an approach to static scheduling for FlexRay systems is proposed. Our experimental results show that the proposed scheduling method reduces network traffic by up to 36.3% compared with a previous approach.
Mobile transporter path planning using a genetic algorithm approach
NASA Technical Reports Server (NTRS)
Baffes, Paul; Wang, Lui
1988-01-01
The use of an optimization technique known as a genetic algorithm for solving the mobile transporter path planning problem is investigated. The mobile transporter is a traveling robotic vehicle proposed for the Space Station which must be able to reach any point of the structure autonomously. Specific elements of the genetic algorithm are explored in both a theoretical and experimental sense. Recent developments in genetic algorithm theory are shown to be particularly effective in a path planning problem domain, though problem areas can be cited which require more research. However, trajectory planning problems are common in space systems and the genetic algorithm provides an attractive alternative to the classical techniques used to solve these problems.
Zhuang, Weibing; Gao, Zhihong; Zhang, Zhen
2013-01-01
Hormones are closely associated with dormancy in deciduous fruit trees, and gibberellins (GAs) are known to be particularly important. In this study, we observed that GA4 treatment led to earlier bud break in Japanese apricot. To better understand the promoting effect of GA4 on the dormancy release of Japanese apricot flower buds, proteomic and transcriptomic approaches were used to analyse the mechanisms of dormancy release following GA4 treatment, based on two-dimensional gel electrophoresis (2-DE) and digital gene expression (DGE) profiling, respectively. More than 600 highly reproducible protein spots (P<0.05) were detected; following GA4 treatment, 38 protein spots showed more than a 2-fold difference in expression, and 32 protein spots were confidently identified from the databases. Compared with water treatment, many proteins associated with energy metabolism and oxidation–reduction showed significant changes after GA4 treatment, which might promote dormancy release. Genes associated with energy metabolism and oxidation–reduction also played an important role in this process at the mRNA level. Analysis of the functions of the identified proteins and genes and the related metabolic pathways provides a comprehensive proteomic and transcriptomic view of the coordination of dormancy release after GA4 treatment in Japanese apricot flower buds. PMID:24014872
Motion Cueing Algorithm Development: Human-Centered Linear and Nonlinear Approaches
NASA Technical Reports Server (NTRS)
Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.
2005-01-01
While the performance of flight simulator motion system hardware has advanced substantially, the development of the motion cueing algorithm, the software that transforms simulated aircraft dynamics into realizable motion commands, has not kept pace. Prior research identified viable features from two algorithms: the nonlinear "adaptive algorithm", and the "optimal algorithm" that incorporates human vestibular models. A novel approach to motion cueing, the "nonlinear algorithm" is introduced that combines features from both approaches. This algorithm is formulated by optimal control, and incorporates a new integrated perception model that includes both visual and vestibular sensation and the interaction between the stimuli. Using a time-varying control law, the matrix Riccati equation is updated in real time by a neurocomputing approach. Preliminary pilot testing resulted in the optimal algorithm incorporating a new otolith model, producing improved motion cues. The nonlinear algorithm vertical mode produced a motion cue with a time-varying washout, sustaining small cues for longer durations and washing out large cues more quickly compared to the optimal algorithm. The inclusion of the integrated perception model improved the responses to longitudinal and lateral cues. False cues observed with the NASA adaptive algorithm were absent. The neurocomputing approach was crucial in that the number of presentations of an input vector could be reduced to meet the real time requirement without degrading the quality of the motion cues.
A Genetic Algorithm Approach for the TV Self-Promotion Assignment Problem
NASA Astrophysics Data System (ADS)
Pereira, Paulo A.; Fontes, Fernando A. C. C.; Fontes, Dalila B. M. M.
2009-09-01
We report on the development of a Genetic Algorithm (GA), which has been integrated into a Decision Support System to plan the best assignment of the weekly self-promotion space for a TV station. The problem addressed consists of deciding which shows to advertise, and when, such that the number of viewers of an intended group or target is maximized. The proposed GA incorporates a greedy heuristic to find good initial solutions. These solutions, as well as the solutions later obtained by the GA, then go through a repair procedure with two objectives, addressed in turn. Firstly, it checks solution feasibility; an infeasible solution is fixed by removing some shows. Secondly, it tries to improve the solution by adding some extra shows. Since the problem faced by the commercial TV station is too large and has too many features, it cannot be solved exactly. Therefore, in order to test the quality of the solutions provided by the proposed GA, we randomly generated some smaller problem instances. For these problems we obtained solutions on average within 1% of the optimal solution value.
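The two-step repair procedure can be sketched as follows, assuming a hypothetical single-slot model where each show has a fixed duration and the promotion space has a fixed capacity (the real problem has many more features):

```python
def repair(solution, durations, capacity):
    # Two-step repair applied to each GA individual (an illustrative sketch):
    # 1) restore feasibility by dropping shows until the promotion space fits,
    # 2) greedily add any remaining shows that still fit.
    sol = list(solution)
    used = sum(durations[s] for s in sol)
    # Step 1: remove shows (longest first) while the slot capacity is exceeded.
    for s in sorted(sol, key=lambda s: durations[s], reverse=True):
        if used <= capacity:
            break
        sol.remove(s)
        used -= durations[s]
    # Step 2: add extra shows (shortest first) that fit in the leftover space.
    for s in sorted(set(range(len(durations))) - set(sol),
                    key=lambda s: durations[s]):
        if used + durations[s] <= capacity:
            sol.append(s)
            used += durations[s]
    return sol
```

For example, with durations [5, 3, 2, 4] and capacity 7, the infeasible solution [0, 1, 3] (total 12) is repaired to [1, 3] (total 7).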
A Decision Theoretic Approach to Evaluate Radiation Detection Algorithms
Nobles, Mallory A.; Sego, Landon H.; Cooley, Scott K.; Gosink, Luke J.; Anderson, Richard M.; Hays, Spencer E.; Tardiff, Mark F.
2013-07-01
There are a variety of sensor systems deployed at U.S. border crossings and ports of entry that scan for illicit nuclear material. In this work, we develop a framework for comparing the performance of detection algorithms that interpret the output of these scans and determine when secondary screening is needed. We optimize each algorithm to minimize its risk, or expected loss. We measure an algorithm's risk by considering its performance over a sample, the probability distribution of threat sources, and the consequences of detection errors. While it is common to optimize algorithms by fixing one error rate and minimizing another, our framework allows one to simultaneously consider multiple types of detection errors. Our framework is flexible and easily adapted to many different assumptions regarding the probability of a vehicle containing illicit material and the relative consequences of false positive and false negative errors. Our methods can therefore inform decision makers of the algorithm family and parameter values that best reduce the threat from illicit nuclear material, given their understanding of the environment at any point in time. To illustrate the applicability of our methods, we compare the risk from two families of detection algorithms and discuss the policy implications of our results.
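The risk criterion described above reduces to an expected-loss computation; the costs, threat prior, and operating points below are illustrative placeholders, not values from the paper:

```python
def risk(false_pos_rate, false_neg_rate, p_threat, cost_fp=1.0, cost_fn=100.0):
    # Expected loss of a detection algorithm: each error type is weighted by
    # how often it can occur and by its consequence. The costs and the threat
    # prior p_threat are illustrative placeholders.
    return ((1 - p_threat) * false_pos_rate * cost_fp
            + p_threat * false_neg_rate * cost_fn)

def best_operating_point(roc_points, p_threat):
    # Pick the operating point (threshold, FPR, FNR) minimizing risk, rather
    # than fixing one error rate and minimizing the other.
    return min(roc_points, key=lambda t: risk(t[1], t[2], p_threat))
```

Sweeping p_threat or the cost ratio shows how the preferred threshold shifts with the decision maker's assumptions about the environment.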
A genetic algorithm approach in interface and surface structure optimization
Zhang, Jian
2010-01-01
The thesis is divided into two parts. In the first part, a global optimization method is developed for interface and surface structure optimization. Two prototype systems are chosen for study: Si[001] symmetric tilt grain boundaries and the Ag/Au-induced Si(111) surface. The Genetic Algorithm is found to be very efficient at finding the lowest-energy structures in both cases. Not only can structures observed in experiments be reproduced, but many new structures can also be predicted, showing that the Genetic Algorithm is an extremely powerful tool for material structure prediction. The second part of the thesis is devoted to the explanation of an experimental observation of thermal radiation from three-dimensional tungsten photonic crystal structures. The experimental results seem astounding and confusing, yet the theoretical models in the paper reveal the physical insight behind the phenomena and reproduce the experimental results well.
NASA Astrophysics Data System (ADS)
Anderson, Richard P.
An algorithm for precision approach guidance using GPS and a MicroElectroMechanical Systems/Inertial Navigation System (MEMS/INS) has been developed to meet the Required Navigation Performance (RNP) at a cost that is suitable for General Aviation (GA) applications. This scheme allows for accurate approach guidance (Category I) using the Wide Area Augmentation System (WAAS) at locations not served by ILS, MLS or other types of precision landing guidance, thereby greatly expanding the number of useable airports in poor weather. At locations served by a Local Area Augmentation System (LAAS), Category III-like navigation is possible with the novel idea of a Missed Approach Time (MAT) that is similar to a Missed Approach Point (MAP) but not fixed in space. Though certain augmented types of GPS have sufficient precision for approach navigation, GPS alone is insufficient to meet RNP due to an inability to monitor loss, degradation or intentional spoofing and meaconing of the GPS signal. A redundant navigation system and a health monitoring system must be added to achieve sufficient reliability, safety and time-to-alert as stated by the required navigation performance. An inertial navigation system is the best choice, as it requires no external radio signals and its errors are complementary to GPS. An aiding Kalman filter is used to derive parameters that monitor the correlation between the GPS and the MEMS/INS. These approach guidance parameters determine the MAT for a given RNP and provide the pilot or autopilot with a proceed/do-not-proceed decision in real time. The enabling technology used to derive the guidance program is a MEMS gyroscope and accelerometer package in conjunction with a single-antenna pseudo-attitude algorithm. To be viable for most GA applications, the hardware must be reasonably priced. MEMS gyros allow the first cost-effective INS package to be developed. With lower cost, however, come higher drift rates and more dependence on GPS aiding. In
A Genetic Algorithm Approach to Nonlinear Least Squares Estimation
ERIC Educational Resources Information Center
Olinsky, Alan D.; Quinn, John T.; Mangiameli, Paul M.; Chen, Shaw K.
2004-01-01
A common type of problem encountered in mathematics is optimizing nonlinear functions. Many popular algorithms that are currently available for finding nonlinear least squares estimators, a special class of nonlinear problems, are sometimes inadequate. They might not converge to an optimal value, or if they do, it could be to a local rather than…
A Genetic Algorithm Variational Approach to Data Assimilation and Application to Volcanic Emissions
NASA Astrophysics Data System (ADS)
Schmehl, Kerrie J.; Haupt, Sue Ellen; Pavolonis, Michael J.
2012-03-01
Variational data assimilation methods optimize the match between an observed and a predicted field. These methods normally require information on the error variances of both the analysis and the observations, which are sometimes difficult to obtain for transport and dispersion problems. Here, the variational problem is set up as a minimization problem that directly minimizes the root mean squared error of the difference between the observations and the prediction. In the context of atmospheric transport and dispersion, the solution of this optimization problem requires a robust technique. A genetic algorithm (GA) is used here for that solution, forming the GA-Variational (GA-Var) technique. The philosophy and formulation of the technique are described here. An advantage of the technique is that it requires neither observation or analysis error covariances nor information about any variables that are not directly assimilated. It can be employed in the context of a forward assimilation problem, or used to retrieve unknown source or meteorological information by solving the inverse problem. The details of the method are reviewed. As an example application, GA-Var is demonstrated for predicting the plume from a volcanic eruption. First, the technique is employed to retrieve the unknown emission rate and the steering winds of the volcanic plume. Then that information is assimilated into a forward prediction of its transport and dispersion. Concentration data are derived from satellite data to determine the observed ash concentrations. A case study is made of the March 2009 eruption of Mount Redoubt in Alaska. The GA-Var technique is able to determine a wind speed and direction that match the observations well, and a reasonable emission rate.
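The GA-Var idea of directly minimizing the RMSE between observed and predicted fields can be sketched as follows; the forward plume model, the two-parameter source description, and the GA operators here are hypothetical stand-ins for the real dispersion model:

```python
import math
import random

def rmse(pred, obs):
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

def plume_model(params, receptors):
    # Hypothetical forward model: predicted concentration at each receptor
    # given (emission_rate, wind_speed). A real dispersion model goes here.
    q, u = params
    return [q / (u * (1 + r)) for r in receptors]

def ga_var(observations, receptors, pop_size=30, gens=60, seed=3):
    # GA-Var sketch: evolve the unknown source parameters so the forward
    # prediction directly minimizes the RMSE against the observations,
    # with no error covariances required.
    random.seed(seed)
    pop = [(random.uniform(0.1, 10), random.uniform(0.1, 10))
           for _ in range(pop_size)]
    cost = lambda p: rmse(plume_model(p, receptors), observations)
    for _ in range(gens):
        pop.sort(key=cost)
        elite = pop[:pop_size // 3]  # elitism keeps the best fit so far
        pop = elite + [(random.choice(elite)[0] * random.uniform(0.9, 1.1),
                        random.choice(elite)[1] * random.uniform(0.9, 1.1))
                       for _ in range(pop_size - len(elite))]
    return min(pop, key=cost)
```

Note that in this toy model only the ratio q/u is identifiable, which mirrors a common ambiguity in source-term inversion.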
Chen, Hong-Yan; Zhao, Geng-Xing; Li, Xi-Can; Wang, Xiang-Feng; Li, Yu-Ling
2013-11-01
Taking Qihe County in Shandong Province of East China as the study area, soil samples were collected from the field. Based on hyperspectral reflectance measurements of the soil samples transformed with the first derivative, the spectra were denoised and compressed by the discrete wavelet transform (DWT), the variables for the soil alkali-hydrolysable nitrogen quantitative estimation models were selected by genetic algorithms (GA), and the estimation models for soil alkali-hydrolysable nitrogen content were built using partial least squares (PLS) regression. The discrete wavelet transform and genetic algorithm combined with partial least squares (DWT-GA-PLS) could not only compress the spectrum variables and reduce the number of model variables, but also improve the quantitative estimation accuracy of soil alkali-hydrolysable nitrogen content. Based on the first- and second-level low-frequency coefficients of the discrete wavelet transform, and under a large-scale reduction of spectrum variables, the calibration models achieved prediction accuracy equal to or higher than that of the full soil spectra. The model based on the second-level low-frequency coefficients had the highest precision, with a prediction R2 of 0.85, an RMSE of 8.11 mg·kg(-1), and an RPD of 2.53, indicating the effectiveness of the DWT-GA-PLS method in estimating soil alkali-hydrolysable nitrogen content.
Wang, Wenliang; Wang, Haiyan; Yang, Weijia; Zhu, Yunnong; Li, Guoqiang
2016-01-01
High-quality GaN epitaxial films have been grown on Si substrates with an Al buffer layer by the combination of molecular beam epitaxy (MBE) and pulsed laser deposition (PLD) technologies. MBE is first used to grow the Al buffer layer, and PLD is then deployed to grow the GaN epitaxial films on the Al buffer layer. The surface morphology, crystalline quality, and interfacial properties of the as-grown GaN epitaxial films on Si substrates are studied systematically. The as-grown ~300 nm-thick GaN epitaxial films grown at 850 °C with a ~30 nm-thick Al buffer layer on Si substrates show high crystalline quality, with full-widths at half-maximum (FWHM) for the GaN(0002) and GaN(102) X-ray rocking curves of 0.45° and 0.61°, respectively; a very flat GaN surface with a root-mean-square surface roughness of 2.5 nm; and sharp, abrupt GaN/AlGaN/Al/Si hetero-interfaces. Furthermore, the corresponding growth mechanism of GaN epitaxial films grown on Si substrates with an Al buffer layer by the combination of MBE and PLD is studied in depth. This work provides a novel and simple approach for the epitaxial growth of high-quality GaN epitaxial films on Si substrates. PMID:27101930
NASA Technical Reports Server (NTRS)
Li, C.-J.; Sun, Q.; Lagowski, J.; Gatos, H. C.
1985-01-01
The microscale characterization of electronic defects in semi-insulating (SI) GaAs has been a challenging issue in connection with materials problems encountered in GaAs IC technology. The main obstacle limiting the applicability of high-resolution electron beam methods, such as Electron Beam-Induced Current (EBIC) and cathodoluminescence (CL), is the low concentration of free carriers in SI GaAs. The present paper provides a new photo-EBIC characterization approach which combines the spectroscopic advantages of optical methods with the high spatial resolution and scanning capability of EBIC. A scanning electron microscope modified for electronic characterization studies is shown schematically. The instrument can operate in the standard SEM mode, in the EBIC modes (including photo-EBIC and thermally stimulated EBIC (TS-EBIC)), and in the cathodoluminescence (CL) and scanning modes. Attention is given to the use of the CL, photo-EBIC, and TS-EBIC techniques.
Electron mobilities approaching bulk limits in "surface-free" GaAs nanowires.
Joyce, Hannah J; Parkinson, Patrick; Jiang, Nian; Docherty, Callum J; Gao, Qiang; Tan, H Hoe; Jagadish, Chennupati; Herz, Laura M; Johnston, Michael B
2014-10-01
Achieving bulk-like charge carrier mobilities in semiconductor nanowires is a major challenge facing the development of nanowire-based electronic devices. Here we demonstrate that engineering the GaAs nanowire surface by overcoating with optimized AlGaAs shells is an effective means of obtaining exceptionally high carrier mobilities and lifetimes. We performed measurements of GaAs/AlGaAs core-shell nanowires using optical pump-terahertz probe spectroscopy: a noncontact and accurate probe of carrier transport on ultrafast time scales. The carrier lifetimes and mobilities both improved significantly with increasing AlGaAs shell thickness. Remarkably, optimized GaAs/AlGaAs core-shell nanowires exhibited electron mobilities up to 3000 cm(2) V(-1) s(-1), reaching over 65% of the electron mobility typical of high quality undoped bulk GaAs at equivalent photoexcited carrier densities. This points to the high interface quality and the very low levels of ionized impurities and lattice defects in these nanowires. The improvements in mobility were concomitant with drastic improvements in photoconductivity lifetime, reaching 1.6 ns. Comparison of photoconductivity and photoluminescence dynamics indicates that midgap GaAs surface states, and consequently surface band-bending and depletion, are effectively eliminated in these high quality heterostructures.
Random Matrix Approach to Quantum Adiabatic Evolution Algorithms
NASA Technical Reports Server (NTRS)
Boulatov, Alexei; Smelyanskiy, Vadier N.
2004-01-01
We analyze the power of quantum adiabatic evolution algorithms (QAA) for solving random NP-hard optimization problems within a theoretical framework based on random matrix theory (RMT). We present two types of driven RMT models. In the first model, the driving Hamiltonian is represented by Brownian motion in the matrix space. We use the Brownian motion model to obtain a description of multiple avoided crossing phenomena. We show that the failure mechanism of the QAA is due to the interaction of the ground state with the "cloud" formed by all the excited states, confirming that in the driven RMT models the Landau-Zener mechanism of dissipation is not important. We show that the QAA has a finite probability of success in a certain range of parameters, implying polynomial complexity of the algorithm. The second model corresponds to the standard QAA with the problem Hamiltonian taken from the Gaussian unitary RMT ensemble (GUE). We show that the level dynamics in this model can be mapped onto the dynamics in the Brownian motion model. However, the driven RMT model always leads to exponential complexity of the algorithm due to the presence of long-range intertemporal correlations of the eigenvalues. Our results indicate that the weakness of effective transitions is the leading effect that can make the Markovian-type QAA successful.
An algorithm for fast DNS cavitating flows simulations using homogeneous mixture approach
NASA Astrophysics Data System (ADS)
Žnidarčič, A.; Coutier-Delgosha, O.; Marquillie, M.; Dular, M.
2015-12-01
A new algorithm for fast DNS simulations of cavitating flows is developed. The algorithm is based on the projection method of Kim and Moin. A homogeneous mixture approach with a transport equation for the vapour volume fraction is used to model cavitation, and various cavitation models can be used. An influence matrix and a matrix diagonalisation technique enable fast parallel computations.
Teaching Algorithm Efficiency at CS1 Level: A Different Approach
ERIC Educational Resources Information Center
Gal-Ezer, Judith; Vilner, Tamar; Zur, Ela
2004-01-01
Realizing the importance of teaching efficiency at early stages of the program of study in computer science (CS) on one hand, and the difficulties encountered when introducing this concept on the other, we advocate a different didactic approach in the introductory CS course (CS1). This paper describes the approach as it is used at the Open…
Random matrix approach to quantum adiabatic evolution algorithms
Boulatov, A.; Smelyanskiy, V.N.
2005-05-15
We analyze the power of the quantum adiabatic evolution algorithm (QAA) for solving random computationally hard optimization problems within a theoretical framework based on random matrix theory (RMT). We present two types of driven RMT models. In the first model, the driving Hamiltonian is represented by Brownian motion in the matrix space. We use the Brownian motion model to obtain a description of multiple avoided crossing phenomena. We show that nonadiabatic corrections in the QAA are due to the interaction of the ground state with the 'cloud' formed by most of the excited states, confirming that in driven RMT models, the Landau-Zener scenario of pairwise level repulsions is not relevant for the description of nonadiabatic corrections. We show that the QAA has a finite probability of success in a certain range of parameters, implying a polynomial complexity of the algorithm. The second model corresponds to the standard QAA with the problem Hamiltonian taken from the RMT Gaussian unitary ensemble (GUE). We show that the level dynamics in this model can be mapped onto the dynamics in the Brownian motion model. For this reason, the driven GUE model can also lead to polynomial complexity of the QAA. The main contribution to the failure probability of the QAA comes from the nonadiabatic corrections to the eigenstates, which only depend on the absolute values of the transition amplitudes. Due to the mapping between the two models, these absolute values are the same in both cases. Our results indicate that this 'phase irrelevance' is the leading effect that can make both the Markovian- and GUE-type QAAs successful.
GA-ANFIS Expert System Prototype for Prediction of Dermatological Diseases.
Begic Fazlic, Lejla; Avdagic, Korana; Omanovic, Samir
2015-01-01
This paper presents a novel GA-ANFIS expert system prototype for dermatological disease detection using dermatological features and diagnoses collected under real conditions. Nine dermatological features are used as inputs to classifiers based on Adaptive Neuro-Fuzzy Inference Systems (ANFIS) for the first level of fuzzy model optimization. They are then used as inputs to a Genetic Algorithm (GA) for the second level of fuzzy model optimization within the GA-ANFIS system, which thus performs optimization in two steps. Modelling and validation of the novel GA-ANFIS approach are performed in the MATLAB environment using a validation data set. Analysis of the GA-ANFIS yielded some conclusions about the impact of individual features on the detection of dermatological diseases. We compared GA-ANFIS and ANFIS results; they confirmed that the proposed GA-ANFIS model achieved higher accuracy rates than the ANFIS model. PMID:25991223
NASA Astrophysics Data System (ADS)
Abdulsattar, Mudar Ahmed
2016-05-01
Wurtzite nanocrystals of gallium nitride are modeled using wurtzoid molecular building blocks. Structural and vibrational properties are investigated for both bare and hydrogen-passivated GaN molecules and small nanocrystals. Wurtzoids are bundles of capped (3, 0) nanotubes that form the wurtzite phase when they reach nanocrystal or bulk sizes. Results show that the experimental bulk gap generally lies between the gaps of the bare and H-passivated wurtzoids. Structural parameters such as bond lengths and bond angles are in good agreement with experimental bulk values. The longitudinal optical (LO) vibrational frequencies of the present molecules are red shifted with respect to the experimental bulk, in agreement with previous studies of other materials. The modeled GaN wurtzite nanocrystals and molecules are found suitable for describing hydrogen sensing under ambient conditions, in agreement with experimental findings. N sites in the GaN wurtzoid are found to be responsible for the detection of hydrogen molecules, while the Ga sites are found to be either oxidized or permanently connected via van der Waals forces to nitrogen or hydrogen molecules.
Effective and efficient optics inspection approach using machine learning algorithms
Abdulla, G; Kegelmeyer, L; Liao, Z; Carr, W
2010-11-02
The Final Optics Damage Inspection (FODI) system automatically acquires images of the final optics at the National Ignition Facility (NIF), and the Optics Inspection (OI) system analyzes them. During each inspection cycle, up to 1000 images acquired by FODI are examined by OI to identify and track damage sites on the optics. The process of tracking growing damage sites on the surface of an optic can be made more effective by identifying and removing signals associated with debris or reflections. Manually filtering out these false sites is daunting and time consuming. In this paper we discuss the use of machine learning tools and data mining techniques to help with this task. We describe the process of preparing a data set that can be used for training and for identifying hardware reflections in the image data. To collect training data, the images are first automatically acquired and analyzed with existing software, and relevant features such as spatial, physical, and luminosity measures are then extracted for each site. A subset of these sites is 'truthed', i.e., manually assigned a class, to create training data. A supervised classification algorithm is used to test whether the features can predict the class membership of new sites, and a suite of self-configuring machine learning tools called 'Avatar Tools' is applied to classify all sites. To verify the results, we used 10-fold cross-validation and found the accuracy to be above 99%. This substantially reduces the number of false alarms that would otherwise be sent for more extensive investigation.
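The train-then-verify workflow in this abstract can be sketched in miniature. The feature vectors, labels, and nearest-centroid classifier below are illustrative stand-ins (the paper uses the richer 'Avatar Tools' suite and spatial/luminosity features); only the 10-fold cross-validation protocol mirrors the text:

```python
import random

# Illustrative stand-in for the paper's data: each candidate site gets a
# feature vector (e.g. size, luminosity) and a class label
# (0 = damage site, 1 = hardware reflection). Data are synthetic.
random.seed(0)
sites = [([random.gauss(m, 0.5), random.gauss(m, 0.5)], label)
         for label, m in ((0, 0.0), (1, 3.0)) for _ in range(50)]

def centroid(rows):
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def classify(x, cents):
    # nearest-centroid decision rule, standing in for the real classifier
    return min(cents, key=lambda c: sum((a - b) ** 2 for a, b in zip(x, cents[c])))

def kfold_accuracy(data, k=10):
    # the 10-fold protocol mentioned in the abstract
    data = data[:]
    random.shuffle(data)
    folds = [data[i::k] for i in range(k)]
    correct = total = 0
    for i in range(k):
        train = [s for j, f in enumerate(folds) if j != i for s in f]
        cents = {lab: centroid([x for x, l in train if l == lab]) for lab in (0, 1)}
        for x, l in folds[i]:
            correct += classify(x, cents) == l
            total += 1
    return correct / total

print(kfold_accuracy(sites))  # near 1.0 on this well-separated toy data
```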
Classification Algorithms for Big Data Analysis, a Map Reduce Approach
NASA Astrophysics Data System (ADS)
Ayma, V. A.; Ferreira, R. S.; Happ, P.; Oliveira, D.; Feitosa, R.; Costa, G.; Plaza, A.; Gamba, P.
2015-03-01
For many years the scientific community has been concerned with increasing the accuracy of different classification methods, and major achievements have been made so far. Beyond this issue, the increasing amount of data generated every day by remote sensors raises further challenges. In this work, a tool within the scope of the InterIMAGE Cloud Platform (ICP), an open-source, distributed framework for automatic image interpretation, is presented. The tool, named ICP: Data Mining Package, can perform supervised classification on huge amounts of data, usually referred to as big data, on a distributed infrastructure using Hadoop MapReduce. The tool implements four classification algorithms taken from WEKA's machine learning library: Decision Trees, Naïve Bayes, Random Forest, and Support Vector Machines (SVM). The results of an experimental analysis using an SVM classifier on data sets of different sizes and different cluster configurations demonstrate the potential of the tool, as well as the aspects that affect its performance.
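The map/reduce pattern such a tool builds on can be illustrated with a toy stand-in. The threshold rule and class names below are invented for illustration (ICP itself ships trained WEKA classifiers to Hadoop workers); only the shape of the computation, classify per partition then merge counts, is the point:

```python
from collections import Counter
from functools import reduce

# Toy map/reduce classification: a decision rule is shipped to every mapper,
# each mapper classifies its own partition of the pixels, and the reducer
# merges the per-partition label counts.
def mapper(partition, threshold=0.5):
    return Counter("forest" if x > threshold else "water" for x in partition)

def reducer(left, right):
    return left + right

partitions = [[0.9, 0.2, 0.7], [0.1, 0.8], [0.3, 0.95, 0.6, 0.4]]
totals = reduce(reducer, map(mapper, partitions))
print(dict(totals))  # → {'forest': 5, 'water': 4}
```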
Scheduling language and algorithm development study. Appendix: Study approach and activity summary
NASA Technical Reports Server (NTRS)
1974-01-01
The approach and organization of the study to develop a high level computer programming language and a program library are presented. The algorithm and problem modeling analyses are summarized. The approach used to identify and specify the capabilities required in the basic language is described. Results of the analyses used to define specifications for the scheduling module library are presented.
NASA Astrophysics Data System (ADS)
Slipchenko, Sergey; Podoskin, Alexsandr; Rozhkov, Alexsandr; Pikhtin, Nikita; Tarasov, Il`ya; Bagaev, Timur; Ladugin, Maxim; Marmalyuk, Alexsandr; Padalitsa, Anatolii; Simakov, Vladimir
2015-03-01
A new approach to the generation of high optical peak power by an epitaxially and functionally integrated high-speed, high-power current switch and laser heterostructure (a so-called laser-thyristor) has been developed. This approach makes it possible to reduce the loss in external electrical connections, which is particularly important for short-pulse, high-amplitude current pumping. In addition, it considerably simplifies the fabrication technology of pulsed laser sources as a commercial product and allows stacking of multiple-element systems. The epitaxially integrated AlGaAs/GaAs heterostructure of a low-voltage laser-thyristor has been studied and optimized for generation of high-power pulses at a 900-nm wavelength. It is shown that the incomplete switch-on of the laser-thyristor in the initial stage and the nonlinear dynamics of the emitted laser power are due to the insufficient efficiency of the vertical optical feedback in the epitaxially integrated heterostructure. Optimization of the composition and the interband absorption spectra of the transistor base layers makes it possible to substantially raise the efficiency of the control signals owing to the increased photogeneration speed. Experimental laser-thyristor samples with a 200-μm aperture have been fabricated and studied. The maximum static blocking voltage does not exceed 20 V. It is shown that the generated laser pulses have a perfect bell-like shape without any indications of nonlinear dynamics, confirming that the changes introduced into the heterostructure design provide sufficient efficiency of photogeneration of the control signal. As a result, the maximum optical peak power reaches 40 and 8 W at FWHM pulse durations of 95 and 13 ns, respectively. An analysis of the potential dynamics has shown that the heterostructure provides pumping of the active layer with pulses of up to 90 A.
Review of tandem repeat search tools: a systematic approach to evaluating algorithmic performance.
Lim, Kian Guan; Kwoh, Chee Keong; Hsu, Li Yang; Wirawan, Adrianto
2013-01-01
The prevalence of tandem repeats in eukaryotic genomes and their association with a number of genetic diseases has raised considerable interest in locating these repeats. Over the last 10-15 years, numerous tools have been developed for searching for tandem repeats, but differences in the search algorithms adopted and difficulties with parameter settings have confounded many users, resulting in widely varying results. In this review, we systematically separate the algorithmic aspect of the search tools from the influence of the parameter settings. We hope that this will give a better understanding of how the tools differ in algorithmic performance and in their inherent constraints, and of how one should approach evaluating and selecting them.
Genetic algorithm based image binarization approach and its quantitative evaluation via pooling
NASA Astrophysics Data System (ADS)
Hu, Huijun; Liu, Ya; Liu, Maofu
2015-12-01
The binarized image is critical to visual feature extraction, especially shape features, and image binarization approaches have attracted increasing attention in recent decades. In this paper, a genetic algorithm is applied to optimize the binarization threshold of strip steel defect images. To evaluate our genetic algorithm based image binarization approach quantitatively, we propose a novel pooling-based evaluation metric, motivated by the information retrieval community, that avoids the need for ground-truth binary images. Experimental results show that our genetic algorithm based binarization approach is effective and efficient on strip steel defect images, and that our quantitative pooling-based evaluation metric for image binarization is feasible and practical.
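A minimal sketch of GA-driven threshold selection follows. The histogram is synthetic, and the fitness used here is Otsu's between-class variance, a common choice that may differ from the paper's actual fitness; the pooling-based evaluation is not reproduced:

```python
import random

random.seed(1)
# Toy grayscale histogram standing in for a strip-steel defect image:
# dark defect pixels near 40, bright background near 200 (values synthetic).
pixels = [random.gauss(40, 10) for _ in range(300)] + \
         [random.gauss(200, 15) for _ in range(700)]

def between_class_variance(t):
    # Otsu's criterion as an illustrative fitness function
    bg = [p for p in pixels if p < t]
    fg = [p for p in pixels if p >= t]
    if not bg or not fg:
        return 0.0
    w1, w2 = len(bg) / len(pixels), len(fg) / len(pixels)
    m1, m2 = sum(bg) / len(bg), sum(fg) / len(fg)
    return w1 * w2 * (m1 - m2) ** 2

def ga_threshold(generations=40, pop_size=20):
    pop = [random.uniform(0, 255) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=between_class_variance, reverse=True)
        elite = pop[:pop_size // 2]          # truncation selection
        # blend crossover of two random elites, plus Gaussian mutation
        pop = elite + [min(255.0, max(0.0,
                           (random.choice(elite) + random.choice(elite)) / 2
                           + random.gauss(0, 5)))
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=between_class_variance)

t = ga_threshold()
print(round(t))  # should land between the two intensity modes
```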
Load identification approach based on basis pursuit denoising algorithm
NASA Astrophysics Data System (ADS)
Ginsberg, D.; Ruby, M.; Fritzen, C. P.
2015-07-01
Information about external loads is of great interest in many fields of structural analysis, such as structural health monitoring (SHM) systems or the assessment of damage after extreme events. However, in most cases it is not possible to measure the external forces directly, so they need to be reconstructed. Load reconstruction refers to the problem of estimating the input to a dynamic system when the system output and the impulse response functions are the known quantities. Generally, this leads to a so-called ill-posed inverse problem, which involves solving an underdetermined linear system of equations. For most practical applications it can be assumed that the applied loads are not arbitrarily distributed in time and space; at least some specific characteristics of the external excitation are known a priori. In this contribution, this knowledge is used to develop a more suitable force reconstruction method, which identifies the time history and the force location simultaneously while employing significantly fewer sensors than other reconstruction approaches. The properties of the external force are used to transform the ill-posed problem into a sparse recovery task, and the sparse solution is acquired by solving a minimization problem known as basis pursuit denoising (BPDN). The possibility of reconstructing loads from noisy structural measurement signals is demonstrated for two frequently occurring loading conditions, harmonic excitation and impact events, separately and combined. First, a simulation study of a simple plate structure is carried out; thereafter, an experimental investigation of a real beam is performed.
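The BPDN step can be sketched on a toy underdetermined system, solved here by iterative soft thresholding (ISTA), one standard way to minimize the BPDN objective. The matrix, sparsity pattern, and regularization weight are purely illustrative; in the paper the columns of A would be sampled impulse response functions:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy ill-posed reconstruction: 30 measurements, 60 unknowns, 3 active loads.
A = rng.standard_normal((30, 60)) / np.sqrt(30)
x_true = np.zeros(60)
x_true[[5, 27, 51]] = [2.0, -1.5, 1.0]     # sparse "force" vector
y = A @ x_true + 0.01 * rng.standard_normal(30)

def bpdn_ista(A, y, lam=0.05, n_iter=1000):
    """Minimize 0.5*||Ax - y||^2 + lam*||x||_1 by iterative soft thresholding."""
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L       # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrinkage
    return x

x_hat = bpdn_ista(A, y)
support = np.flatnonzero(np.abs(x_hat) > 0.1)
print(support)  # indices of the recovered load locations
```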
Metamorphic approach to single quantum dot emission at 1.55 {mu}m on GaAs substrate
Semenova, E. S.; Hostein, R.; Patriarche, G.; Mauguin, O.; Largeau, L.; Robert-Philip, I.; Beveratos, A.; Lemaitre, A.
2008-05-15
We report on the fabrication and the characterization of InAs quantum dots (QDs) embedded in an indium rich In{sub 0.42}Ga{sub 0.58}As metamorphic matrix grown on a GaAs substrate. Growth conditions were chosen so as to minimize the number of threading dislocations and other defects produced during the plastic relaxation. Sharp and bright lines, originating from the emission of a few isolated single quantum dots, were observed in microphotoluminescence around 1.55 {mu}m at 5 K. They exhibit, in particular, a characteristic exciton/biexciton behavior. These QDs could offer an interesting alternative to other approaches as InAs/InP QDs for the realization of single photon emitters at telecom wavelengths.
One-qubit quantum gates in a circular graphene quantum dot: genetic algorithm approach
2013-01-01
The aim of this work was to design and control, using a genetic algorithm (GA) for parameter optimization, the one-charge-qubit quantum logic gates σx, σy, and σz in circular graphene quantum dots in a homogeneous magnetic field, using two bound states as the qubit space. The proposed gates are implemented through quantum dynamic control of the qubit subspace with an oscillating electric field and an on-site (inside the quantum dot) gate voltage pulse with amplitude and time-width modulation, which introduce relative phases and transitions between states. Our results show that we can obtain fitness values, or gate fidelities, close to 1 while avoiding leakage probability to higher states. The system evolution during gate operation is presented via the dynamics of the probability density, as well as a visualization of the pseudospin current, characteristic of a graphene structure. We therefore conclude that it is possible to use the states of the graphene quantum dot (selecting the dot size and magnetic field) to design and control the qubit subspace with these two time-dependent interactions, obtaining the optimal parameters for good gate fidelity using the GA. PMID:23680153
An approach to select the appropriate image fusion algorithm for night vision systems
NASA Astrophysics Data System (ADS)
Schwan, Gabriele; Scherer-Negenborn, Norbert
2015-10-01
For many years, image fusion has been an important subject in the image processing community. The purpose of image fusion is to take the relevant information from two or more images and construct a single result image. Many fusion algorithms have been developed and published, and some attempts have been made to assess their results automatically with the objective of obtaining the output best suited for human observers. It has been shown, however, that such objective machine assessment does not always correlate with observers' subjective perception. In this paper a novel approach is presented that selects the fusion algorithm yielding the best image enhancement for human observers. The fusion algorithms' results are assessed on the basis of local contrasts. Fusion algorithms are applied to a representative data set covering different use cases and image contents, and the fusion results for selected data are judged subjectively by human observers. The assessment algorithm that best fits the visual perception is then used to select the best fusion algorithm for comparable scenarios.
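The selection idea can be sketched minimally: score each candidate fusion result by its mean local contrast and pick the highest-scoring algorithm. The contrast measure, candidate names, and tiny images below are all illustrative; the paper's actual contrast measure and its calibration against human judgements are richer:

```python
# Score an image by the mean absolute deviation of each pixel from the
# mean of its 8-neighbourhood (a simple local-contrast proxy).
def local_contrast(img):
    h, w = len(img), len(img[0])
    total = count = 0
    for y in range(h):
        for x in range(w):
            nbrs = [img[y + dy][x + dx]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy or dx) and 0 <= y + dy < h and 0 <= x + dx < w]
            total += abs(img[y][x] - sum(nbrs) / len(nbrs))
            count += 1
    return total / count

# Hypothetical outputs of two fusion algorithms on the same scene.
candidates = {
    "average":   [[4, 4, 4], [4, 4, 4], [4, 4, 4]],   # flat: low contrast
    "laplacian": [[1, 8, 1], [8, 1, 8], [1, 8, 1]],   # structured: high contrast
}
best = max(candidates, key=lambda k: local_contrast(candidates[k]))
print(best)  # → laplacian
```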
NASA Technical Reports Server (NTRS)
Mitra, Debasis; Thomas, Ajai; Hemminger, Joseph; Sakowski, Barbara
2001-01-01
In this research we have developed an algorithm for constraint processing that utilizes relational algebraic operators. Van Beek and others have previously investigated this type of constraint processing within a relational algebraic framework, producing some unique results. Apart from providing new theoretical angles, this approach also offers the opportunity to use existing, efficient implementations of relational database management systems as the underlying data structures for any relevant algorithm. Our algorithm enhances that framework. The algorithm is quite general in its current form, and weak heuristics (like forward checking) developed within the constraint-satisfaction problem (CSP) area could easily be plugged into it for further efficiency gains. The algorithm as developed here is targeted toward a component-oriented modeling problem that we are currently working on, namely the problem of interactive modeling for batch-simulation of engineering systems (IMBSES), but it could be adapted to many other CSP problems as well. The research addresses the algorithm and many aspects of the IMBSES problem that we are currently handling.
Iterative Fourier transform algorithm: different approaches to diffractive optical element design
NASA Astrophysics Data System (ADS)
Skeren, Marek; Richter, Ivan; Fiala, Pavel
2002-10-01
This contribution focuses on the study and comparison of different design approaches for phase-only diffractive optical elements (PDOEs) for various possible applications in laser beam shaping. In particular, new results and approaches concerning the iterative Fourier transform algorithm (IFTA) are analyzed, implemented, and compared. Various approaches within the IFTA are analyzed for phase-only diffractive optical elements with quantized phase levels (either binary or multilevel structures). First, the general scheme of the IFTA iterative approach with partial quantization is briefly presented and discussed. Then, a special classification of the general IFTA scheme is given with respect to quantization constraint strategies. Based on this classification, three practically interesting approaches are chosen, further analyzed, and compared to each other. The performance of these algorithms is compared in detail in terms of the development of the signal-to-noise ratio with respect to the number of iterations, for various chosen input diffusive-type objects. The performance is also documented by the development of the complex spectra for typical computer reconstruction results. The advantages and drawbacks of all approaches are discussed, and a brief guide to the choice of a particular approach for typical design tasks is given. Finally, two ways of eliminating the amplitude within the design procedure are considered, namely direct elimination and partial elimination of the amplitude of the complex hologram function.
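The basic IFTA loop, here with a single final hard quantization to two phase levels rather than the partial-quantization strategies the study compares, can be sketched as follows. The target pattern, grid size, and iteration count are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 64
# Desired far-field amplitude: a bright square window on a dark background
# (a simple stand-in for the diffusive-type signal objects used in the study).
target = np.zeros((N, N))
target[24:40, 24:40] = 1.0
target /= np.linalg.norm(target)

phase = rng.uniform(0.0, 2 * np.pi, (N, N))    # random initial DOE phase
for _ in range(100):
    far = np.fft.fft2(np.exp(1j * phase))      # propagate to the signal plane
    far = target * np.exp(1j * np.angle(far))  # impose the target amplitude
    near = np.fft.ifft2(far)                   # propagate back
    phase = np.angle(near)                     # phase-only element: drop amplitude

# final hard quantization to two phase levels (a binary structure)
binary = np.where(np.cos(phase) >= 0.0, 0.0, np.pi)
recon = np.abs(np.fft.fft2(np.exp(1j * binary)))
recon /= np.linalg.norm(recon)
efficiency = float((recon[24:40, 24:40] ** 2).sum())
print(efficiency)  # fraction of output energy landing in the signal window
```

A binary element is real-valued, so its far field is Hermitian-symmetric and a conjugate twin image appears; partial quantization during the iterations, as discussed in the paper, typically handles the quantization loss more gracefully than this one-shot rounding.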
NASA Astrophysics Data System (ADS)
Vosoughifar, Hamid Reza; Sadat Shokouhi, Seyed Kazem; Dolatshah, Azam; Rahnavard, Yousef; Atapour, Hassan
2013-04-01
Blasts can produce, in a very short time, an overload much greater than the design load of a building. A blast explosion nearby or within a structure causes catastrophic damage both externally and internally. This study models a Cold-Formed Steel (CFS) building using the Finite Element Method (FEM), with the material properties of the model defined according to the results of laboratory tests. An accelerograph record of a standard blast was then applied to the Finite Element (FE) model. Furthermore, various Optimal Sensor Placement (OSP) algorithms were examined, and a Genetic Algorithm (GA) was selected to solve the optimization formulation for the best sensor placement according to the blast loading response of the system. In this research a novel numerical algorithm is proposed for the OSP procedure that utilizes the exact value of the structural response under blast excitation. Results show that a proper OSP method for Structural Health Monitoring (SHM) can efficiently detect the weak points of CFS structures.
Ju, Chunhua
2013-01-01
Although there are many good collaborative recommendation methods, it remains a challenge to increase their accuracy and diversity so as to fulfill users' preferences. In this paper, we propose a novel collaborative filtering recommendation approach based on the K-means clustering algorithm. In the clustering process, we use the artificial bee colony (ABC) algorithm to overcome the local-optimum problem caused by K-means. After that, we adopt a modified cosine similarity to compute the similarity between users in the same clusters. Finally, we generate recommendation results for the corresponding target users. Detailed numerical analysis on the benchmark MovieLens dataset and a real-world dataset indicates that our new collaborative filtering approach based on user clustering outperforms many other recommendation methods. PMID:24381525
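The similarity step can be sketched concretely. "Modified cosine" is read here as the common adjusted-cosine variant that centres each rating on the item's mean before taking the cosine; the paper's exact modification may differ, and the ratings are invented:

```python
from math import sqrt

# Tiny ratings matrix: user -> {item: rating}. Values are illustrative.
ratings = {
    "u1": {"a": 5, "b": 3, "c": 4},
    "u2": {"a": 4, "b": 2, "c": 5},
    "u3": {"a": 1, "b": 5, "c": 2},
}

# Per-item mean rating, used to centre the ratings.
cols = {}
for r in ratings.values():
    for item, v in r.items():
        cols.setdefault(item, []).append(v)
item_mean = {i: sum(v) / len(v) for i, v in cols.items()}

def adjusted_cosine(u, v):
    common = set(ratings[u]) & set(ratings[v])
    du = [ratings[u][i] - item_mean[i] for i in common]
    dv = [ratings[v][i] - item_mean[i] for i in common]
    num = sum(a * b for a, b in zip(du, dv))
    den = sqrt(sum(a * a for a in du)) * sqrt(sum(b * b for b in dv))
    return num / den if den else 0.0

print(round(adjusted_cosine("u1", "u2"), 3))  # → 0.577 (similar tastes)
```

Users u1 and u2 deviate from the item means in the same direction, so their similarity is positive, while u1 and u3 rate in opposite directions and come out negative.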
A discrete twin-boundary approach for simulating the magneto-mechanical response of Ni-Mn-Ga
NASA Astrophysics Data System (ADS)
Faran, Eilon; Shilo, Doron
2016-09-01
The design and optimization of ferromagnetic shape memory alloys (FSMA)-based devices require quantitative understanding of the dynamics of twin boundaries within these materials. Here, we present a discrete twin boundary modeling approach for simulating the behavior of an FSMA Ni-Mn-Ga crystal under combined magneto-mechanical loading conditions. The model is based on experimentally measured kinetic relations that describe the motion of individual twin boundaries over a wide range of velocities. The resulting calculations capture the dynamic response of Ni-Mn-Ga and reveal the relations between fundamental material parameters and actuation performance at different frequencies of the magnetic field. In particular, we show that at high field rates, the magnitude of the lattice barrier that resists twin boundary motion is the important property that determines the level of actuation strain, while the contribution of twinning stress property is minor. Consequently, type II twin boundaries, whose lattice barrier is smaller compared to type I, are expected to show better actuation performance at high rates, irrespective of the differences in the twinning stress property between the two boundary types. In addition, the simulation enables optimization of the actuation strain of a Ni-Mn-Ga crystal by adjusting the magnitude of the bias mechanical stress, thus providing direct guidelines for the design of actuating devices. Finally, we show that the use of a linear kinetic law for simulating the twinning-based response is inadequate and results in incorrect predictions.
Brasier, Martin D; Antcliffe, Jonathan; Saunders, Martin; Wacey, David
2015-04-21
New analytical approaches and discoveries are demanding fresh thinking about the early fossil record. The 1.88-Ga Gunflint chert provides an important benchmark for the analysis of early fossil preservation. High-resolution analysis of Gunflintia shows that microtaphonomy can help to resolve long-standing paleobiological questions. Novel 3D nanoscale reconstructions of the most ancient complex fossil Eosphaera reveal features hitherto unmatched in any crown-group microbe. While Eosphaera may preserve a symbiotic consortium, a stronger conclusion is that multicellular morphospace was differently occupied in the Paleoproterozoic. The 3.46-Ga Apex chert provides a test bed for claims of biogenicity of cell-like structures. Mapping plus focused ion beam milling combined with transmission electron microscopy data demonstrate that microfossil-like taxa, including species of Archaeoscillatoriopsis and Primaevifilum, are pseudofossils formed from vermiform phyllosilicate grains during hydrothermal alteration events. The 3.43-Ga Strelley Pool Formation shows that plausible early fossil candidates are turning up in unexpected environmental settings. Our data reveal how cellular clusters of unexpectedly large coccoids and tubular sheath-like envelopes were trapped between sand grains and entombed within coatings of dripstone beach-rock silica cement. These fossils come from Earth's earliest known intertidal to supratidal shoreline deposit, accumulated under aerated but oxygen poor conditions.
A Fault Diagnosis Approach for Rolling Bearings Based on EMD Method and Eigenvector Algorithm
NASA Astrophysics Data System (ADS)
Zhang, Jinyu; Huang, Xianxiang
Fault diagnosis of rolling bearings is still an important and difficult research task in engineering. After analyzing the shortcomings of current bearing fault diagnosis technologies, a new approach based on Empirical Mode Decomposition (EMD) and the blind-equalization eigenvector algorithm (EVA) is proposed for rolling bearing fault diagnosis. In this approach, the characteristic high-frequency signal, with amplitude and channel modulation, of a rolling bearing with local damage is first separated from the mechanical vibration signal as an Intrinsic Mode Function (IMF) using EMD; the source impact vibration signal produced by the local damage is then extracted by means of an EVA model and algorithm. Finally, the presented approach is applied to an impact experiment and to two real signals collected from rolling bearings with outer race or inner race damage. The results show that the EMD- and EVA-based approach can effectively detect rolling bearing faults.
Sherer, Eric A; Sale, Mark E; Pollock, Bruce G; Belani, Chandra P; Egorin, Merrill J; Ivy, Percy S; Lieberman, Jeffrey A; Manuck, Stephen B; Marder, Stephen R; Muldoon, Matthew F; Scher, Howard I; Solit, David B; Bies, Robert R
2012-08-01
A limitation in traditional stepwise population pharmacokinetic model building is the difficulty in handling interactions between model components. To address this issue, a method was previously introduced which couples NONMEM parameter estimation and model fitness evaluation to a single-objective, hybrid genetic algorithm for global optimization of the model structure. In this study, the generalizability of this approach for pharmacokinetic model building is evaluated by comparing (1) correct and spurious covariate relationships in a simulated dataset resulting from automated stepwise covariate modeling, Lasso methods, and single-objective hybrid genetic algorithm approaches to covariate identification and (2) information criteria values, model structures, convergence, and model parameter values resulting from manual stepwise versus single-objective, hybrid genetic algorithm approaches to model building for seven compounds. Both manual stepwise and single-objective, hybrid genetic algorithm approaches to model building were applied, blinded to the results of the other approach, for selection of the compartment structure as well as inclusion and model form of inter-individual and inter-occasion variability, residual error, and covariates from a common set of model options. For the simulated dataset, stepwise covariate modeling identified three of four true covariates and two spurious covariates; Lasso identified two of four true and 0 spurious covariates; and the single-objective, hybrid genetic algorithm identified three of four true covariates and one spurious covariate. For the clinical datasets, the Akaike information criterion was a median of 22.3 points lower (range of 470.5 point decrease to 0.1 point decrease) for the best single-objective hybrid genetic-algorithm candidate model versus the final manual stepwise model: the Akaike information criterion was lower by greater than 10 points for four compounds and differed by less than 10 points for three
ERIC Educational Resources Information Center
Moreno, Julian; Ovalle, Demetrio A.; Vicari, Rosa M.
2012-01-01
Considering that group formation is one of the key processes in collaborative learning, the aim of this paper is to propose a method based on a genetic algorithm approach for achieving inter-homogeneous and intra-heterogeneous groups. The main feature of such a method is that it allows for the consideration of as many student characteristics as…
A Fuzzy Genetic Algorithm Approach to an Adaptive Information Retrieval Agent.
ERIC Educational Resources Information Center
Martin-Bautista, Maria J.; Vila, Maria-Amparo; Larsen, Henrik Legind
1999-01-01
Presents an approach to a Genetic Information Retrieval Agent Filter (GIRAF) that filters and ranks documents retrieved from the Internet according to users' preferences by using a Genetic Algorithm and fuzzy set theory to handle the imprecision of users' preferences and users' evaluation of the retrieved documents. (Author/LRW)
Prediction of Heart Attack Risk Using GA-ANFIS Expert System Prototype.
Begic Fazlic, Lejla; Avdagic, Aja; Besic, Ingmar
2015-01-01
The aim of this research is to develop a novel GA-ANFIS expert system prototype for classifying the degree of heart disease of a patient using heart disease attributes (features) and diagnoses taken under real conditions. Thirteen attributes are used as inputs to classifiers based on Adaptive Neuro-Fuzzy Inference Systems (ANFIS) for the first level of fuzzy model optimization. They are then used as inputs to a Genetic Algorithm (GA) for the second level of fuzzy model optimization within the GA-ANFIS system, which thus performs optimization in two steps. Modelling and validation of the novel GA-ANFIS approach are performed in the MATLAB environment. We compared GA-ANFIS and ANFIS results; the proposed GA-ANFIS model with the predicted-value technique is more efficient for heart disease diagnosis than the earlier ANFIS model. PMID:25980885
Tumuluru, J.S.; Sokhansanj, Shahabaddine
2008-12-01
In the present study, the response surface method (RSM) and a genetic algorithm (GA) were used to study the effects of process variables, namely screw speed (rpm; x1), L/D ratio (x2), barrel temperature (°C; x3), and feed mix moisture content (%; x4), on the flow rate of biomass during single-screw extrusion cooking. A second-order regression equation was developed for flow rate in terms of the process variables. A Pareto chart indicated that screw speed and feed mix moisture content had the most influence on the flow rate, followed by L/D ratio and barrel temperature. RSM analysis indicated that a screw speed > 80 rpm, L/D ratio > 12, barrel temperature > 80 °C, and feed mix moisture content > 20% resulted in maximum flow rate. Increasing the screw speed and L/D ratio increased the drag flow and also the path of traverse of the feed mix inside the extruder, resulting in more shear. The presence of about 35% lipids in the biomass feed mix might have induced a lubrication effect and significantly influenced the flow rate. The second-order regression equations were further used as the objective function for optimization using the genetic algorithm. A population of 100 and 100 iterations successfully led to convergence to the optimum. The maximum and minimum flow rates obtained using the GA were 13.19 × 10⁻⁷ m³/s (x1 = 139.08 rpm, x2 = 15.90, x3 = 99.56 °C, and x4 = 59.72%) and 0.53 × 10⁻⁷ m³/s (x1 = 59.65 rpm, x2 = 11.93, x3 = 68.98 °C, and x4 = 20.04%), respectively.
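As an editorial illustration of the optimization step described in this record, a GA can maximize a second-order response surface over bounded process variables. The objective below is a made-up quadratic, not the paper's fitted regression equation, and the GA operators (elitism, arithmetic crossover, Gaussian mutation) are generic choices:

```python
import random

# Hypothetical second-order response surface for flow rate; the paper's
# fitted coefficients are not reproduced here, these are illustrative only.
def flow_rate(x):
    s, ld, t, m = x  # screw speed, L/D ratio, barrel temperature, moisture
    return 0.05 * s + 0.2 * ld + 0.03 * t + 0.08 * m \
        - 0.0001 * s * s - 0.004 * ld * ld

BOUNDS = [(40, 140), (8, 16), (60, 100), (20, 60)]

def clip(x):
    """Keep each variable inside its allowed range."""
    return [min(max(v, lo), hi) for v, (lo, hi) in zip(x, BOUNDS)]

def ga_maximize(f, pop_size=100, generations=100, seed=0):
    rng = random.Random(seed)
    pop = [clip([rng.uniform(lo, hi) for lo, hi in BOUNDS])
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=f, reverse=True)
        elite = pop[:pop_size // 2]          # elitist survivor selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [(u + v) / 2 for u, v in zip(a, b)]    # arithmetic crossover
            child = [v + rng.gauss(0, 1.0) for v in child]  # Gaussian mutation
            children.append(clip(child))
        pop = elite + children
    return max(pop, key=f)

best = ga_maximize(flow_rate)
```

With this toy objective the optimum sits at the upper bound of every variable, mirroring the paper's finding that flow rate was maximized at high screw speed and moisture content.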
NASA Astrophysics Data System (ADS)
Handels, Heinz; Ross, Th; Kreusch, J.; Wolff, H. H.; Poeppl, S. J.
1998-06-01
A new approach to computer-supported recognition of melanoma and naevocytic naevi based on high-resolution skin surface profiles is presented. Profiles are generated by sampling an area of 4 × 4 mm² at a resolution of 125 sample points per mm with a laser profilometer, at a vertical resolution of 0.1 µm. Using image analysis algorithms, Haralick's texture parameters, Fourier features, and features based on fractal analysis are extracted. In order to improve classification performance, a subsequent feature selection process is applied to determine the best possible subset of features. Genetic algorithms are optimized for the feature selection process, and the results of different approaches are compared. As the quality measure for feature subsets, the error rate of the nearest-neighbor classifier estimated with the leave-one-out method is used. In comparison to heuristic strategies and greedy algorithms, genetic algorithms show the best results for the feature selection problem. After feature selection, several architectures of feed-forward neural networks with error back-propagation are evaluated. Classification performance of the neural classifier is optimized using different topologies, learning parameters, and pruning algorithms. The best neural classifier achieved an error rate of 4.5% and was found after network pruning. The best overall result, an error rate of 2.3%, was obtained with the nearest-neighbor classifier.
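The feature-selection scheme described above, a GA searching over feature subsets with the leave-one-out error of a nearest-neighbor classifier as fitness, can be sketched as follows. The GA operators and parameter values are illustrative assumptions, not those of the paper:

```python
import random

def loo_1nn_error(X, y, mask):
    """Leave-one-out error rate of a 1-nearest-neighbor classifier
    using only the features selected by the boolean mask."""
    feats = [j for j, m in enumerate(mask) if m]
    if not feats:
        return 1.0  # empty subsets are worthless
    errors = 0
    for i in range(len(X)):
        best_d, best_lab = None, None
        for k in range(len(X)):
            if k == i:
                continue
            d = sum((X[i][j] - X[k][j]) ** 2 for j in feats)
            if best_d is None or d < best_d:
                best_d, best_lab = d, y[k]
        errors += (best_lab != y[i])
    return errors / len(X)

def ga_select(X, y, n_feat, pop=30, gens=40, seed=1):
    """GA over bit-string feature masks; minimizes the LOO 1-NN error."""
    rng = random.Random(seed)
    P = [[rng.random() < 0.5 for _ in range(n_feat)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=lambda m: loo_1nn_error(X, y, m))
        P = P[:pop // 2]                         # keep the best half
        while len(P) < pop:
            a, b = rng.sample(P[:pop // 2], 2)
            cut = rng.randrange(1, n_feat)
            child = a[:cut] + b[cut:]            # one-point crossover
            j = rng.randrange(n_feat)
            child[j] = not child[j]              # bit-flip mutation
            P.append(child)
    return min(P, key=lambda m: loo_1nn_error(X, y, m))
```

On data with one informative feature and two noise features, the GA should recover a mask whose LOO error is near zero.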
Parallel Genetic Algorithm for Alpha Spectra Fitting
NASA Astrophysics Data System (ADS)
García-Orellana, Carlos J.; Rubio-Montero, Pilar; González-Velasco, Horacio
2005-01-01
We present a performance study of alpha-particle spectra fitting using a parallel Genetic Algorithm (GA). The method uses a two-step approach: in the first step, we run the parallel GA to find an initial solution; in the second step, we use the Levenberg-Marquardt (LM) method for a precise final fit. The GA is a resource-demanding method, so we use a Beowulf cluster for parallel simulation. The relationship between simulation time (and parallel efficiency) and the number of processors is studied using several alpha spectra, with the aim of obtaining a method to estimate the optimal number of processors to use in a simulation.
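The two-step scheme (GA for a coarse search, then Levenberg-Marquardt for the precise final fit) can be sketched in miniature. The single-Gaussian "spectrum", the serial (non-parallel) GA, and the hand-rolled LM loop with a numerical Jacobian are all simplifying assumptions for illustration; real alpha peaks are asymmetric:

```python
import random
import numpy as np

# Synthetic "spectrum": one Gaussian peak on a flat background.
xs = np.linspace(0.0, 10.0, 200)
A, C, W, B = 5.0, 4.2, 0.8, 0.5   # true amplitude, centre, width, background
ys = A * np.exp(-0.5 * ((xs - C) / W) ** 2) + B

def residuals(p):
    a, c, w, b = p
    return a * np.exp(-0.5 * ((xs - c) / w) ** 2) + b - ys

def cost(p):
    return float(np.sum(residuals(p) ** 2))

def ga_seed(pop=40, gens=60, seed=0):
    """Step 1: a GA finds a rough solution (serial here for simplicity)."""
    rng = random.Random(seed)
    bounds = [(0.0, 10.0), (0.0, 10.0), (0.05, 2.0), (0.0, 2.0)]
    P = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=cost)
        P = P[:pop // 2]
        while len(P) < pop:
            u, v = rng.sample(P[:10], 2)
            P.append([(a + b) / 2 + rng.gauss(0, 0.1) for a, b in zip(u, v)])
    return min(P, key=cost)

def lm_refine(p, n_iter=50, lam=1e-3, h=1e-6):
    """Step 2: Levenberg-Marquardt with a numerical Jacobian polishes the fit."""
    p = np.asarray(p, dtype=float)
    for _ in range(n_iter):
        r = residuals(p)
        J = np.empty((r.size, p.size))
        for j in range(p.size):
            dp = np.zeros_like(p)
            dp[j] = h
            J[:, j] = (residuals(p + dp) - r) / h
        step = np.linalg.solve(J.T @ J + lam * np.eye(p.size), -J.T @ r)
        if cost(p + step) < cost(p):
            p, lam = p + step, lam * 0.5   # accept: closer to Gauss-Newton
        else:
            lam *= 2.0                     # reject: closer to gradient descent
    return p

fit = lm_refine(ga_seed())
```

The GA only needs to land inside the LM basin of attraction; LM then converges rapidly to the peak parameters.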
Heo, Jun-Woo; Kim, Young-Jin; Kim, Hyun-Seok
2014-12-01
We report two approaches to fabricating high performance normally-off AlGaN/GaN high-electron-mobility transistors (HEMTs). The fabrication techniques employed were based on a recessed metal-insulator-semiconductor (MIS) gate and recessed fluoride-based plasma treatment. They were selectively applied to the area under the gate electrode to deplete the two-dimensional electron gas (2-DEG) density. We found that the recessed gate structure was effective in shifting the threshold voltage by controlling the etching depth of the gate region to reduce the AlGaN layer thickness to less than 8 nm. Likewise, the CF4 plasma treatment effectively incorporated negatively charged fluorine ions into the thin AlGaN barrier so that the threshold voltage shifted to higher positive values. In addition to the increased threshold voltage, experimental results showed a maximum drain current and a maximum transconductance of 315 mA/mm and 100 mS/mm, respectively, for the recessed-MIS gate HEMT, and 340 mA/mm and 330 mS/mm, respectively, for the fluoride-based plasma-treated HEMT.
Review of tandem repeat search tools: a systematic approach to evaluating algorithmic performance.
Lim, Kian Guan; Kwoh, Chee Keong; Hsu, Li Yang; Wirawan, Adrianto
2013-01-01
The prevalence of tandem repeats in eukaryotic genomes and their association with a number of genetic diseases has raised considerable interest in locating these repeats. Over the last 10-15 years, numerous tools have been developed for searching tandem repeats, but differences in the search algorithms adopted and difficulties with parameter settings have confounded many users resulting in widely varying results. In this review, we have systematically separated the algorithmic aspect of the search tools from the influence of the parameter settings. We hope that this will give a better understanding of how the tools differ in algorithmic performance, their inherent constraints and how one should approach in evaluating and selecting them. PMID:22648964
Combinatorial Multiobjective Optimization Using Genetic Algorithms
NASA Technical Reports Server (NTRS)
Crossley, William A.; Martin, Eric T.
2002-01-01
The research proposed in this document investigated multiobjective optimization approaches based upon the Genetic Algorithm (GA). Several versions of the GA have been adopted for multiobjective design, but, prior to this research, there had not been significant comparisons of the most popular strategies. The research effort first generalized the two-branch tournament genetic algorithm into an N-branch genetic algorithm; the N-branch GA was then compared with a version of the popular Multi-Objective Genetic Algorithm (MOGA). Because the genetic algorithm is well suited to combinatorial (mixed discrete/continuous) optimization problems, the GA can be used in the conceptual phase of design to combine selection (discrete variable) and sizing (continuous variable) tasks. Using a multiobjective formulation for the design of a 50-passenger aircraft to meet the competing objectives of minimizing takeoff gross weight and minimizing trip time, the GA generated a range of tradeoff designs that illustrate which aircraft features change from a low-weight, long trip-time design to a heavy-weight, short trip-time design. Given the objective formulation and analysis methods used, the results of this study identify where turboprop-powered aircraft and turbofan-powered aircraft become more desirable for the 50-passenger application. This aircraft design application also begins to suggest how a combinatorial multiobjective optimization technique could be used to assist in the design of morphing aircraft.
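The two-branch tournament selection generalized in this work can be sketched as a single selection round; the population encoding and objective functions below are placeholders (in the aircraft study the objectives were takeoff gross weight and trip time):

```python
import random

def two_branch_tournament(pop, f1, f2, rng):
    """One round of two-branch tournament selection: half of the parent
    slots are decided by tournaments on objective f1, the other half on
    objective f2 (both minimized), preserving selection pressure toward
    each objective and thus toward the tradeoff front."""
    parents = []
    for i in range(len(pop)):
        a, b = rng.sample(pop, 2)          # binary tournament
        judge = f1 if i < len(pop) // 2 else f2
        parents.append(min(a, b, key=judge))
    return parents
```

An N-branch version simply partitions the parent slots among N objectives instead of two.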
2011-01-01
Background Position-specific priors (PSPs) have been used with success to boost EM and Gibbs-sampler-based motif discovery algorithms. PSP information has been computed from different sources, including orthologous conservation, DNA duplex stability, and nucleosome positioning. Prior information has not yet been exploited in the context of combinatorial algorithms. Moreover, priors have only been used independently, and the gain of combining priors from different sources has not yet been studied. Results We extend RISOTTO, a combinatorial algorithm for motif discovery, by post-processing its output with a greedy procedure that uses prior information. PSPs from different sources are combined into a scoring criterion that guides the greedy search procedure. The resulting method, called GRISOTTO, was evaluated over 156 yeast TF ChIP-chip sequence-sets commonly used to benchmark prior-based motif discovery algorithms. Results show that GRISOTTO is at least as accurate as twelve other state-of-the-art approaches for the same task, even without combining priors. Furthermore, by considering combined priors, GRISOTTO is considerably more accurate than the state-of-the-art approaches for the same task. We also show that PSPs improve GRISOTTO's ability to retrieve motifs from mouse ChIP-seq data, indicating that the proposed algorithm can be applied to data from a different technology and for a higher eukaryote. Conclusions The conclusions of this work are twofold. First, post-processing the output of combinatorial algorithms by incorporating prior information leads to a very efficient and effective motif discovery method. Second, combining priors from different sources is even more beneficial than considering them separately. PMID:21513505
NASA Astrophysics Data System (ADS)
Zheng, Feifei; Simpson, Angus R.; Zecchin, Aaron C.
2011-08-01
This paper proposes a novel optimization approach for the least cost design of looped water distribution systems (WDSs). Three distinct steps are involved in the proposed optimization approach. In the first step, the shortest-distance tree within the looped network is identified using the Dijkstra graph theory algorithm, for which an extension is proposed to find the shortest-distance tree for multisource WDSs. In the second step, a nonlinear programming (NLP) solver is employed to optimize the pipe diameters for the shortest-distance tree (chords of the shortest-distance tree are allocated the minimum allowable pipe sizes). Finally, in the third step, the original looped water network is optimized using a differential evolution (DE) algorithm seeded with diameters in the proximity of the continuous pipe sizes obtained in step two. As such, the proposed optimization approach combines the traditional deterministic optimization technique of NLP with the emerging evolutionary algorithm DE via the proposed network decomposition. The proposed methodology has been tested on four looped WDSs with the number of decision variables ranging from 21 to 454. Results obtained show the proposed approach is able to find optimal solutions with significantly less computational effort than other optimization techniques.
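Step one of the approach described above, finding the shortest-distance tree with Dijkstra's graph algorithm, can be sketched as below; passing several source nodes covers the multisource extension directly. The adjacency-list format and node labels are assumptions for illustration:

```python
import heapq

def shortest_distance_tree(graph, sources):
    """Dijkstra's algorithm from one or more source nodes.  Returns the
    distance map and the predecessor map that defines the
    shortest-distance tree over the (looped) network."""
    dist = {s: 0.0 for s in sources}
    pred = {s: None for s in sources}
    pq = [(0.0, s) for s in sources]
    heapq.heapify(pq)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue                      # stale queue entry, skip it
        for v, w in graph.get(u, ()):
            nd = d + w
            if v not in dist or nd < dist[v]:
                dist[v], pred[v] = nd, u  # shorter route found via u
                heapq.heappush(pq, (nd, v))
    return dist, pred
```

Edges of the network that do not appear in `pred` are the chords of the tree, which the paper assigns minimum allowable pipe sizes before the NLP step.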
The Simplified Aircraft-Based Paired Approach With the ALAS Alerting Algorithm
NASA Technical Reports Server (NTRS)
Perry, Raleigh B.; Madden, Michael M.; Torres-Pomales, Wilfredo; Butler, Ricky W.
2013-01-01
This paper presents the results of an investigation of a proposed concept for closely spaced parallel runways called the Simplified Aircraft-based Paired Approach (SAPA). This procedure depends upon a new alerting algorithm called the Adjacent Landing Alerting System (ALAS). This study used both low fidelity and high fidelity simulations to validate the SAPA procedure and test the performance of the new alerting algorithm. The low fidelity simulation enabled a determination of minimum approach distance for the worst case over millions of scenarios. The high fidelity simulation enabled an accurate determination of timings and minimum approach distance in the presence of realistic trajectories, communication latencies, and total system error for 108 test cases. The SAPA procedure and the ALAS alerting algorithm were applied to the 750-ft parallel spacing (e.g., SFO 28L/28R) approach problem. With the SAPA procedure as defined in this paper, this study concludes that a 750-ft application does not appear to be feasible, but preliminary results for 1000-ft parallel runways look promising.
A new approach for modulation recognition based on ant colony algorithm
NASA Astrophysics Data System (ADS)
Liu, Shu; Wang, Hongyuan
2007-11-01
A new approach based on the ant colony algorithm for automatic modulation recognition of communications signals is presented. This approach can discriminate between continuous wave (CW), amplitude modulation (AM), frequency modulation (FM), frequency-shift keying (FSK), binary phase-shift keying (BPSK), and quaternary phase-shift keying (QPSK) modulations. Requirements for a priori knowledge of the signals are minimized by the inclusion of an efficient carrier-frequency estimator and low sensitivity to variations in the sampling epochs. Computer simulations indicate good performance on an AWGN channel, even at signal-to-noise ratios as low as 5 dB. This compares favorably with the performance obtained with most algorithms based on pattern recognition techniques.
The infection algorithm: an artificial epidemic approach for dense stereo correspondence.
Olague, Gustavo; Fernández, Francisco; Pérez, Cynthia B; Lutton, Evelyne
2006-01-01
We present a new bio-inspired approach applied to a problem of stereo image matching. This approach is based on an artificial epidemic process, which we call the infection algorithm. The problem at hand is a basic one in computer vision for 3D scene reconstruction. It has many complex aspects and is known as an extremely difficult one. The aim is to match the contents of two images in order to obtain 3D information that allows the generation of simulated projections from a viewpoint that is different from the ones of the initial photographs. This process is known as view synthesis. The algorithm we propose exploits the image contents in order to produce only the necessary 3D depth information, while saving computational time. It is based on a set of distributed rules, which propagate like an artificial epidemic over the images. Experiments on a pair of real images are presented, and realistic reprojected images have been generated. PMID:16953787
Low Back Pain in Children and Adolescents: an Algorithmic Clinical Approach
Kordi, Ramin; Rostami, Mohsen
2011-01-01
Low back pain (LBP) is common among children and adolescents. In younger children, particularly those under 3 years of age, LBP should be considered an alarming sign of more serious underlying pathologies. However, as in adults, non-specific low back pain is the most common type of LBP among children and adolescents. In this article, a clinical algorithmic approach to LBP in children and adolescents is presented. PMID:23056800
NASA Astrophysics Data System (ADS)
Su, Xiaoru; Shu, Longcang; Chen, Xunhong; Lu, Chengpeng; Wen, Zhonghui
2016-08-01
Interactions between surface waters and groundwater are of great significance for evaluating water resources and protecting ecosystem health. Heat as a tracer method is widely used in determination of the interactive exchange with high precision, low cost and great convenience. The flow in a river-bank cross-section occurs in vertical and lateral directions. In order to depict the flow path and its spatial distribution in bank areas, a genetic algorithm (GA) two-dimensional (2-D) heat-transport nested-loop method for variably saturated sediments, GA-VS2DH, was developed based on Microsoft Visual Basic 6.0. VS2DH was applied to model a 2-D bank-water flow field and GA was used to calibrate the model automatically by minimizing the difference between observed and simulated temperatures in bank areas. A hypothetical model was developed to assess the reliability of GA-VS2DH in inverse modeling in a river-bank system. Some benchmark tests were conducted to recognize the capability of GA-VS2DH. The results indicated that the simulated seepage velocity and parameters associated with GA-VS2DH were acceptable and reliable. Then GA-VS2DH was applied to two field sites in China with different sedimentary materials, to verify the reliability of the method. GA-VS2DH could be applied in interpreting the cross-sectional 2-D water flow field. The estimates of horizontal hydraulic conductivity at the Dawen River and Qinhuai River sites are 1.317 and 0.015 m/day, which correspond to sand and clay sediment in the two sites, respectively.
NASA Technical Reports Server (NTRS)
Sanyal, Soumya; Jain, Amit; Das, Sajal K.; Biswas, Rupak
2003-01-01
In this paper, we propose a distributed approach for mapping a single large application to a heterogeneous grid environment. To minimize the execution time of the parallel application, we distribute the mapping overhead to the available nodes of the grid. This approach not only provides a fast mapping of tasks to resources but is also scalable. We adopt a hierarchical grid model and accomplish the job of mapping tasks to this topology using a scheduler tree. Results show that our three-phase algorithm provides high quality mappings, and is fast and scalable.
Herbers, Claudia R; Johnston, Karen; van der Vegt, Nico F A
2011-06-14
We present an automated and efficient method to develop force fields for molecule-surface interactions. A genetic algorithm (GA) is used to parameterise a classical force field so that the classical adsorption energy landscape of a molecule on a surface matches the corresponding landscape from density functional theory (DFT) calculations. The procedure performs a sophisticated search in the parameter phase space and converges very quickly. The method is capable of fitting a significant number of structures and corresponding adsorption energies. Water on a ZnO(0001) surface was chosen as a benchmark system, but the method is implemented in a flexible way and can be applied to any system of interest. In the present case, pairwise Lennard-Jones (LJ) and Coulomb potentials are used to describe the molecule-surface interactions. In the course of the fitting procedure, the LJ parameters are refined in order to reproduce the adsorption energy landscape. The classical model is capable of describing a wide range of energies, which is essential for a realistic description of a fluid-solid interface. PMID:21594260
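The core fitting loop described above, a GA refining LJ parameters until classical energies match reference data, can be sketched as follows. The separations, reference energies, and GA settings are invented stand-ins for the DFT data; Coulomb terms and the multi-site surface geometry are omitted:

```python
import random

def lj(r, eps, sig):
    """12-6 Lennard-Jones pair energy at separation r."""
    sr6 = (sig / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

# Hypothetical reference adsorption energies at several molecule-surface
# separations, standing in for the DFT landscape used in the paper.
R = [2.8, 3.0, 3.2, 3.5, 4.0, 5.0]
E_REF = [lj(r, 0.25, 3.1) for r in R]

def fitness(p):
    """Sum of squared deviations from the reference energies (minimized)."""
    eps, sig = p
    return sum((lj(r, eps, sig) - e) ** 2 for r, e in zip(R, E_REF))

def ga_fit(pop=40, gens=80, seed=3):
    rng = random.Random(seed)
    P = [[rng.uniform(0.01, 1.0), rng.uniform(2.5, 4.0)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=fitness)
        P = P[:pop // 2]                      # truncation selection
        while len(P) < pop:
            u, v = rng.sample(P[:10], 2)
            # blend crossover with a small multiplicative mutation
            P.append([(a + b) / 2 * (1.0 + rng.gauss(0, 0.05))
                      for a, b in zip(u, v)])
    return min(P, key=fitness)

eps_fit, sig_fit = ga_fit()
```

With a smooth two-parameter landscape like this, the GA converges quickly, consistent with the fast convergence the authors report for their higher-dimensional problem.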
ERIC Educational Resources Information Center
Reese, Debbie Denise; Tabachnick, Barbara G.
2010-01-01
In this paper, the authors summarize a quantitative analysis demonstrating that the CyGaMEs toolset for embedded assessment of learning within instructional games measures growth in conceptual knowledge by quantifying player behavior. CyGaMEs stands for Cyberlearning through GaME-based, Metaphor Enhanced Learning Objects. Some scientists of…
A Graph Algorithmic Approach to Separate Direct from Indirect Neural Interactions
Wollstadt, Patricia; Meyer, Ulrich; Wibral, Michael
2015-01-01
Network graphs have become a popular tool to represent complex systems composed of many interacting subunits; especially in neuroscience, network graphs are increasingly used to represent and analyze functional interactions between multiple neural sources. Interactions are often reconstructed using pairwise bivariate analyses, overlooking the multivariate nature of interactions: it is neglected that investigating the effect of one source on a target requires taking all other sources into account as potential nuisance variables; moreover, combinations of sources may act jointly on a given target. Bivariate analyses produce networks that may contain spurious interactions, which reduce the interpretability of the network and its graph metrics. A truly multivariate reconstruction, however, is computationally intractable because of the combinatorial explosion in the number of potential interactions. Thus, we have to resort to approximative methods to handle the intractability of multivariate interaction reconstruction, and thereby enable the use of networks in neuroscience. Here, we suggest such an approximative approach in the form of an algorithm that extends fast bivariate interaction reconstruction by identifying potentially spurious interactions post hoc: the algorithm uses interaction delays reconstructed for directed bivariate interactions to tag potentially spurious edges on the basis of their timing signatures in the context of the surrounding network. Such tagged interactions may then be pruned, which produces a statistically conservative network approximation that is guaranteed to contain non-spurious interactions only. We describe the algorithm and present a reference implementation in MATLAB to test the algorithm's performance on simulated networks as well as networks derived from magnetoencephalographic data. We discuss the algorithm in relation to other approximative multivariate methods and highlight suitable application scenarios. Our approach is a…
Review and Analysis of Algorithmic Approaches Developed for Prognostics on CMAPSS Dataset
NASA Technical Reports Server (NTRS)
Ramasso, Emmanuel; Saxena, Abhinav
2014-01-01
Benchmarking of prognostic algorithms has been challenging due to limited availability of common datasets suitable for prognostics. In an attempt to alleviate this problem several benchmarking datasets have been collected by NASA's prognostic center of excellence and made available to the Prognostics and Health Management (PHM) community to allow evaluation and comparison of prognostics algorithms. Among those datasets are five C-MAPSS datasets that have been extremely popular due to their unique characteristics making them suitable for prognostics. The C-MAPSS datasets pose several challenges that have been tackled by different methods in the PHM literature. In particular, management of high variability due to sensor noise, effects of operating conditions, and presence of multiple simultaneous fault modes are some factors that have great impact on the generalization capabilities of prognostics algorithms. More than 70 publications have used the C-MAPSS datasets for developing data-driven prognostic algorithms. The C-MAPSS datasets are also shown to be well-suited for development of new machine learning and pattern recognition tools for several key preprocessing steps such as feature extraction and selection, failure mode assessment, operating conditions assessment, health status estimation, uncertainty management, and prognostics performance evaluation. This paper summarizes a comprehensive literature review of publications using C-MAPSS datasets and provides guidelines and references to further usage of these datasets in a manner that allows clear and consistent comparison between different approaches.
MRPack: Multi-Algorithm Execution Using Compute-Intensive Approach in MapReduce.
Idris, Muhammad; Hussain, Shujaat; Siddiqi, Muhammad Hameed; Hassan, Waseem; Syed Muhammad Bilal, Hafiz; Lee, Sungyoung
2015-01-01
Large quantities of data have been generated from multiple sources at exponential rates in the last few years. These data are generated at high velocity as real time and streaming data in variety of formats. These characteristics give rise to challenges in its modeling, computation, and processing. Hadoop MapReduce (MR) is a well known data-intensive distributed processing framework using the distributed file system (DFS) for Big Data. Current implementations of MR only support execution of a single algorithm in the entire Hadoop cluster. In this paper, we propose MapReducePack (MRPack), a variation of MR that supports execution of a set of related algorithms in a single MR job. We exploit the computational capability of a cluster by increasing the compute-intensiveness of MapReduce while maintaining its data-intensive approach. It uses the available computing resources by dynamically managing the task assignment and intermediate data. Intermediate data from multiple algorithms are managed using multi-key and skew mitigation strategies. The performance study of the proposed system shows that it is time, I/O, and memory efficient compared to the default MapReduce. The proposed approach reduces the execution time by 200% with an approximate 50% decrease in I/O cost. Complexity and qualitative results analysis shows significant performance improvement. PMID:26305223
Modeling and Algorithmic Approaches to Constitutively-Complex, Micro-structured Fluids
Forest, Mark Gregory
2014-05-06
The team for this project made significant progress on modeling and algorithmic approaches to the hydrodynamics of fluids with complex microstructure. Our advances are broken down into modeling and algorithmic approaches. In experiments, a driven magnetic bead in a complex fluid accelerates out of the Stokes regime and settles into another apparent linear response regime. The modeling explains the take-off as a deformation of entanglements, and the long-time behavior is a nonlinear, far-from-equilibrium property. Furthermore, the model has predictive value, as we can tune microstructural properties relative to the magnetic force applied to the bead to exhibit all possible behaviors. Wave-theoretic probes of complex fluids have been extended in two significant directions: to small volumes and to the nonlinear regime. Heterogeneous stress and strain features that lie beyond experimental capability were studied. It was shown that nonlinear penetration of boundary stress in confined viscoelastic fluids is not monotone, indicating the possibility of interlacing layers of linear and nonlinear behavior, and thus layers of variable viscosity. Models, algorithms, and codes were developed and simulations performed, leading to phase diagrams of nanorod dispersion hydrodynamics in parallel shear cells and confined cavities representative of film and membrane processing conditions. Hydrodynamic codes for polymeric fluids were extended to include coupling between microscopic and macroscopic models, and to the strongly nonlinear regime.
Overby, Casey Lynnette; Pathak, Jyotishman; Gottesman, Omri; Haerian, Krystl; Perotte, Adler; Murphy, Sean; Bruce, Kevin; Johnson, Stephanie; Talwalkar, Jayant; Shen, Yufeng; Ellis, Steve; Kullo, Iftikhar; Chute, Christopher; Friedman, Carol; Bottinger, Erwin; Hripcsak, George; Weng, Chunhua
2013-01-01
Objective To describe a collaborative approach for developing an electronic health record (EHR) phenotyping algorithm for drug-induced liver injury (DILI). Methods We analyzed types and causes of differences in DILI case definitions provided by two institutions—Columbia University and Mayo Clinic; harmonized two EHR phenotyping algorithms; and assessed the performance, measured by sensitivity, specificity, positive predictive value, and negative predictive value, of the resulting algorithm at three institutions except that sensitivity was measured only at Columbia University. Results Although these sites had the same case definition, their phenotyping methods differed by selection of liver injury diagnoses, inclusion of drugs cited in DILI cases, laboratory tests assessed, laboratory thresholds for liver injury, exclusion criteria, and approaches to validating phenotypes. We reached consensus on a DILI phenotyping algorithm and implemented it at three institutions. The algorithm was adapted locally to account for differences in populations and data access. Implementations collectively yielded 117 algorithm-selected cases and 23 confirmed true positive cases. Discussion Phenotyping for rare conditions benefits significantly from pooling data across institutions. Despite the heterogeneity of EHRs and varied algorithm implementations, we demonstrated the portability of this algorithm across three institutions. The performance of this algorithm for identifying DILI was comparable with other computerized approaches to identify adverse drug events. Conclusions Phenotyping algorithms developed for rare and complex conditions are likely to require adaptive implementation at multiple institutions. Better approaches are also needed to share algorithms. Early agreement on goals, data sources, and validation methods may improve the portability of the algorithms. PMID:23837993
Branch-pipe-routing approach for ships using improved genetic algorithm
NASA Astrophysics Data System (ADS)
Sui, Haiteng; Niu, Wentie
2016-09-01
Branch-pipe routing plays fundamental and critical roles in ship-pipe design. The branch-pipe-routing problem is a complex combinatorial optimization problem and is thus difficult to solve when depending only on human experts. A modified genetic-algorithm-based approach is proposed in this paper to solve this problem. The simplified layout space is first divided into three-dimensional (3D) grids to build its mathematical model. Branch pipes in layout space are regarded as a combination of several two-point pipes, and the pipe route between two connection points is generated using an improved maze algorithm. The coding of branch pipes is then defined, and the genetic operators are devised, especially the complete crossover strategy that greatly accelerates the convergence speed. Finally, simulation tests demonstrate the performance of the proposed method.
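The abstract does not spell out the paper's "improved" maze algorithm, so the sketch below is the classical Lee-style breadth-first maze router on a 3D grid that such approaches build on; the grid encoding and function name are assumptions.

```python
from collections import deque

def maze_route(grid, start, goal):
    """Lee-style maze router: shortest axis-aligned path on a 3D grid.

    grid[x][y][z] is True where a pipe may pass; start/goal are (x, y, z)
    cells. Returns the path as a list of cells, or None if no route exists.
    """
    nx, ny, nz = len(grid), len(grid[0]), len(grid[0][0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:     # walk predecessors back to start
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        x, y, z = cell
        for dx, dy, dz in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
            nxt = (x + dx, y + dy, z + dz)
            if (0 <= nxt[0] < nx and 0 <= nxt[1] < ny and 0 <= nxt[2] < nz
                    and grid[nxt[0]][nxt[1]][nxt[2]] and nxt not in prev):
                prev[nxt] = cell
                queue.append(nxt)
    return None
```

In the paper's framing, each chromosome would call such a router once per two-point pipe making up a branch pipe.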
Development of an Aircraft Approach and Departure Atmospheric Profile Generation Algorithm
NASA Technical Reports Server (NTRS)
Buck, Bill K.; Velotas, Steven G.; Rutishauser, David K. (Technical Monitor)
2004-01-01
In support of the NASA Virtual Airspace Modeling and Simulation (VAMS) project, an effort was initiated to develop and test techniques for extracting meteorological data from landing and departing aircraft, and for building altitude-based profiles for key meteorological parameters from these data. The generated atmospheric profiles will be used as inputs to NASA's Aircraft Vortex Spacing System (AVOSS) Prediction Algorithm (APA) for benefits and trade analysis. A Wake Vortex Advisory System (WakeVAS) is being developed to apply weather and wake prediction and sensing technologies with procedures to reduce current wake separation criteria, when safe and appropriate, to increase airport operational efficiency. The purpose of this report is to document the initial theory and design of the Aircraft Approach and Departure Atmospheric Profile Generation Algorithm.
A multiresolution approach to iterative reconstruction algorithms in X-ray computed tomography.
De Witte, Yoni; Vlassenbroeck, Jelle; Van Hoorebeke, Luc
2010-09-01
In computed tomography, the application of iterative reconstruction methods in practical situations is impeded by their high computational demands. Especially in high resolution X-ray computed tomography, where reconstruction volumes contain a high number of volume elements (several giga voxels), this computational burden prevents their actual breakthrough. Besides the large amount of calculations, iterative algorithms require the entire volume to be kept in memory during reconstruction, which quickly becomes cumbersome for large data sets. To overcome this obstacle, we present a novel multiresolution reconstruction, which greatly reduces the required amount of memory without significantly affecting the reconstructed image quality. It is shown that, combined with an efficient implementation on a graphical processing unit, the multiresolution approach enables the application of iterative algorithms in the reconstruction of large volumes at an acceptable speed using only limited resources. PMID:20350850
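As a rough illustration of the coarse-to-fine idea (not the authors' GPU implementation), the sketch below warms up an iterative Landweber reconstruction on a half-resolution grid before refining at full resolution; the dense matrix A stands in for the CT projector, and the step size and prolongation scheme are assumptions.

```python
import numpy as np

def landweber(A, b, x0, iters, step):
    """Basic iterative reconstruction scheme: x <- x + step * A^T (b - A x)."""
    x = x0.astype(float).copy()
    for _ in range(iters):
        x += step * A.T @ (b - A @ x)
    return x

def multires_reconstruct(A, b, iters=100, step=None):
    """Coarse-to-fine reconstruction: solve on a half-size grid first, then
    prolongate the result and refine at full resolution. The number of
    unknowns A.shape[1] is assumed even."""
    n = A.shape[1]
    if step is None:
        step = 0.9 / np.linalg.norm(A, 2) ** 2
    # Prolongation P copies each coarse value into two neighbouring fine cells.
    P = np.zeros((n, n // 2))
    for i in range(n // 2):
        P[2 * i, i] = P[2 * i + 1, i] = 1.0
    x_coarse = landweber(A @ P, b, np.zeros(n // 2), iters, step)
    return landweber(A, b, P @ x_coarse, iters, step)
```

The memory saving in the paper comes from never holding the full-resolution volume during the coarse passes; this toy keeps everything dense purely for readability.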
Personalized therapy algorithms for type 2 diabetes: a phenotype-based approach.
Ceriello, Antonio; Gallo, Marco; Candido, Riccardo; De Micheli, Alberto; Esposito, Katherine; Gentile, Sandro; Medea, Gerardo
2014-01-01
Type 2 diabetes is a progressive disease with a complex and multifactorial pathophysiology. Patients with type 2 diabetes show a variety of clinical features, including different "phenotypes" of hyperglycemia (eg, fasting/preprandial or postprandial). Thus, the best treatment choice is sometimes difficult to make, and treatment initiation or optimization is postponed. This situation may explain why, despite the existing complex therapeutic armamentarium and guidelines for the treatment of type 2 diabetes, a significant proportion of patients do not have good metabolic control and are at risk of developing the late complications of diabetes. The Italian Association of Medical Diabetologists has developed an innovative personalized algorithm for the treatment of type 2 diabetes, which is available online. According to the main features shown by the patient, six algorithms are proposed, according to glycated hemoglobin (HbA1c, ≥9% or <9%), body mass index (<30 kg/m² or ≥30 kg/m²), occupational risk potentially related to hypoglycemia, chronic renal failure, and frail elderly status. Through self-monitoring of blood glucose, patients are phenotyped according to the occurrence of fasting/preprandial or postprandial hyperglycemia. In each of these six algorithms, the gradual choice of treatment is related to the identified phenotype. With one exception, these algorithms contain a stepwise approach for patients with type 2 diabetes who are metformin-intolerant. The glycemic targets (HbA1c, fasting/preprandial and postprandial glycemia) are also personalized. This accessible and easy-to-use algorithm may help physicians to choose a personalized treatment plan for each patient and to optimize it in a timely manner, thereby lessening clinical inertia. PMID:24971031
Genetic algorithms for route discovery.
Gelenbe, Erol; Liu, Peixiang; Lainé, Jeremy
2006-12-01
Packet routing in networks requires knowledge about available paths, which can be either acquired dynamically while the traffic is being forwarded, or statically (in advance) based on prior information of a network's topology. This paper describes an experimental investigation of path discovery using genetic algorithms (GAs). We start with the quality-of-service (QoS)-driven routing protocol called "cognitive packet network" (CPN), which uses smart packets (SPs) to dynamically select routes in a distributed autonomic manner based on a user's QoS requirements. We extend it by introducing a GA at the source routers, which modifies and filters the paths discovered by the CPN. The GA can combine the paths that were previously discovered to create new untested but valid source-to-destination paths, which are then selected on the basis of their "fitness." We present an implementation of this approach, where the GA runs in background mode so as not to overload the ingress routers. Measurements conducted on a network test bed indicate that when the background-traffic load of the network is light to medium, the GA can result in improved QoS. When the background-traffic load is high, it appears that the use of the GA may be detrimental to the QoS experienced by users as compared to CPN routing because the GA uses less timely state information in its decision making. PMID:17186801
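A minimal sketch of the crossover step described above: splice two CPN-discovered paths at a shared intermediate router to propose a new, untested but valid source-to-destination path. Fitness evaluation and the background-mode scheduling are omitted, and the names are illustrative.

```python
import random

def crossover_paths(p1, p2):
    """GA-style crossover for source-routed paths: cut two known
    source->destination paths at a router they share and splice the
    halves together, rejecting any child that revisits a router."""
    shared = [n for n in p1[1:-1] if n in p2[1:-1]]
    if not shared:
        return None
    node = random.choice(shared)
    child = p1[:p1.index(node)] + p2[p2.index(node):]
    # Keep only loop-free children: every router appears at most once.
    return child if len(child) == len(set(child)) else None
```

For example, crossing ["src", "r1", "r2", "dst"] with ["src", "r3", "r2", "r4", "dst"] at the shared router "r2" yields the new candidate ["src", "r1", "r2", "r4", "dst"].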
Single-shot x-ray phase contrast imaging with an algorithmic approach using spectral detection
NASA Astrophysics Data System (ADS)
Das, Mini; Park, Chan-Soo; Fredette, Nathaniel R.
2016-04-01
X-ray phase contrast imaging has been investigated during the last two decades for potential benefits in soft tissue imaging. Long imaging time, high radiation dose and general measurement complexity involving motion of x-ray optical components have prevented the clinical translation of these methods. In all existing popular phase contrast imaging methods, multiple measurements per projection angle involving motion of optical components are required to achieve quantitatively accurate estimation of absorption, phase and differential phase. Recently we proposed an algorithmic approach that uses spectral detection data in a phase contrast imaging setup to obtain absorption, phase and differential phase in a single step. Our generic approach has been shown via simulations in all three types of phase contrast imaging: propagation, coded aperture and grating interferometry. While other groups have used spectral detectors in phase contrast imaging setups, our proposed method is unique in outlining an approach to use this spectral data to simplify phase contrast imaging. In this abstract we show the first experimental proof of our single-shot phase retrieval using a Medipix3 photon counting detector in an edge illumination aperture (also referred to as coded aperture) phase contrast setup as well as in a free space propagation setup. Our preliminary results validate our new transport equation for edge illumination PCI and our spectral phase retrieval algorithm for both PCI methods being investigated. Comparison with simulations also points to excellent performance of the Medipix3 built-in charge sharing correction mechanism.
A Genetic Algorithm Approach to Motion Sensor Placement in Smart Environments
Thomas, Brian L.; Crandall, Aaron S.; Cook, Diane J.
2016-01-01
Smart environments and ubiquitous computing technologies hold great promise for a wide range of real world applications. The medical community is particularly interested in high quality measurement of activities of daily living. With accurate computer modeling of older adults, decision support tools may be built to assist care providers. One aspect of effectively deploying these technologies is determining where the sensors should be placed in the home to effectively support these end goals. This work introduces and evaluates a set of approaches for generating sensor layouts in the home. These approaches range from the gold standard of human intuition-based placement to more advanced search algorithms, including Hill Climbing and Genetic Algorithms. The generated layouts are evaluated based on their ability to detect activities while minimizing the number of needed sensors. Sensor-rich environments can provide valuable insights about adults as they go about their lives. These sensors, once in place, provide information on daily behavior that can facilitate an aging-in-place approach to health care. PMID:27453810
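One of the search strategies mentioned, hill climbing, can be sketched as a local search over sensor subsets. The coverage model and penalty weight below are assumptions for illustration, not the paper's actual evaluation function.

```python
def hill_climb_layout(candidates, activities, covers, penalty=0.5, seed_layout=frozenset()):
    """Local search over sensor subsets: flip one candidate location at a
    time and keep the move whenever it improves
    (activities detected) - penalty * (number of sensors placed).
    `covers(sensor)` returns the set of activities that sensor can detect."""
    def score(layout):
        detected = set().union(*(covers(s) for s in layout)) if layout else set()
        return len(detected & activities) - penalty * len(layout)

    layout = set(seed_layout)
    improved = True
    while improved:
        improved = False
        for s in candidates:
            trial = layout ^ {s}          # add s if absent, drop it if present
            if score(trial) > score(layout):
                layout, improved = trial, True
    return layout
```

The penalty term is what pushes the search toward detecting activities with as few sensors as possible; a GA variant would evolve a population of such subsets instead of flipping one at a time.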
A genetic algorithms approach for altering the membership functions in fuzzy logic controllers
NASA Technical Reports Server (NTRS)
Shehadeh, Hana; Lea, Robert N.
1992-01-01
Through previous work, a fuzzy control system was developed to perform translational and rotational control of a space vehicle. This problem was then re-examined to determine the effectiveness of genetic algorithms on fine tuning the controller. This paper explains the problems associated with the design of this fuzzy controller and offers a technique for tuning fuzzy logic controllers. A fuzzy logic controller is a rule-based system that uses fuzzy linguistic variables to model human rule-of-thumb approaches to control actions within a given system. This 'fuzzy expert system' features rules that direct the decision process and membership functions that convert the linguistic variables into the precise numeric values used for system control. Defining the fuzzy membership functions is the most time consuming aspect of the controller design. One single change in the membership functions could significantly alter the performance of the controller. This membership function definition can be accomplished by using a trial and error technique to alter the membership functions creating a highly tuned controller. This approach can be time consuming and requires a great deal of knowledge from human experts. In order to shorten development time, an iterative procedure for altering the membership functions to create a tuned set that used a minimal amount of fuel for velocity vector approach and station-keep maneuvers was developed. Genetic algorithms, search techniques used for optimization, were utilized to solve this problem.
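A GA chromosome for this kind of tuning typically encodes the membership-function parameters directly. A minimal sketch, assuming triangular membership functions (the abstract does not specify the shape); the fuel-based fitness evaluation of the controller would plug in wherever the decoded functions are used.

```python
def triangular_mf(x, a, b, c):
    """Membership grade of x in a triangular fuzzy set with feet a, c
    and peak b (assumes a < b < c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def decode(chromosome, n_sets=3):
    """Map a flat GA chromosome to one sorted (a, b, c) triple per fuzzy
    set, so every individual decodes to well-formed membership functions."""
    return [tuple(sorted(chromosome[3 * i:3 * i + 3])) for i in range(n_sets)]
```

Sorting each triple during decoding is one simple way to keep crossover and mutation from producing invalid (unordered) membership functions.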
NASA Technical Reports Server (NTRS)
Blackwell, William C., Jr.
2004-01-01
In this paper space is modeled as a lattice of Compton wave oscillators (CWOs) of near-Planck size. It is shown that gravitation and special relativity emerge from the interaction between particles' Compton waves. To develop this CWO model an algorithmic approach was taken, incorporating simple rules of interaction at the Planck scale developed using well-known physical laws. This technique naturally leads to Newton's law of gravitation and a new form of doubly special relativity. The model is in apparent agreement with the holographic principle, and it predicts a cutoff energy for ultrahigh-energy cosmic rays that is consistent with observational data.
A heuristic approach based on Clarke-Wright algorithm for open vehicle routing problem.
Pichpibul, Tantikorn; Kawtummachai, Ruengsak
2013-01-01
We propose a heuristic approach based on the Clarke-Wright algorithm (CW) to solve the open version of the well-known capacitated vehicle routing problem in which vehicles are not required to return to the depot after completing service. The proposed CW has been presented in four procedures composed of Clarke-Wright formula modification, open-route construction, two-phase selection, and route postimprovement. Computational results show that the proposed CW is competitive and outperforms classical CW in all directions. Moreover, the best known solution is also obtained in 97% of tested instances (60 out of 62).
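A minimal sketch of the Clarke-Wright savings idea adapted to open routes. The saving formula below (joining a route ending at customer i to one starting at customer j removes j's depot leg, saving d(0, j) - d(i, j)) is one common open-VRP adaptation; the paper's specific formula modification, two-phase selection, and route post-improvement steps are not reproduced.

```python
from itertools import permutations

def open_cw_routes(dist, demand, capacity):
    """Clarke-Wright savings heuristic for open routes (no return to the
    depot, node 0): start with one route per customer, then greedily merge
    route ends in order of decreasing saving, respecting vehicle capacity."""
    n = len(dist)
    routes = {i: [i] for i in range(1, n)}          # keyed by first customer
    savings = sorted(((dist[0][j] - dist[i][j], i, j)
                      for i, j in permutations(range(1, n), 2)),
                     reverse=True)
    for s, i, j in savings:
        if s <= 0:
            break
        ri = next((r for r in routes.values() if r[-1] == i), None)
        rj = next((r for r in routes.values() if r[0] == j), None)
        if ri is None or rj is None or ri is rj:
            continue                                 # ends already interior, or same route
        if sum(demand[k] for k in ri + rj) <= capacity:
            routes[ri[0]] = ri + rj
            del routes[rj[0]]
    return list(routes.values())
```

Dropping the return leg is what changes the economics relative to the classical closed-route savings s(i, j) = d(0, i) + d(0, j) - d(i, j).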
New approach for motion coordination of a mobile manipulator using fuzzy behavioral algorithms
NASA Astrophysics Data System (ADS)
Haeusler, Kurt; Klement, Erich P.; Zeichen, Gerfried
1998-10-01
In this paper a new approach for the coordination of the motion axes of a mobile manipulator based on fuzzy behavioral algorithms and its implementation on a physical demonstrator is presented. The kinematic redundancy of the overall system (consisting of a 7 DOF manipulator and a 3 DOF mobile robot) will be used for autonomous and reactive motion of the mobile manipulator within poorly structured and even dynamically changing surroundings. Sensors around the mobile and along the manipulator will provide the necessary information for navigation purposes and perception of the environment.
Processing approach towards the formation of thin-film Cu(In,Ga)Se2
Beck, Markus E.; Noufi, Rommel
2003-01-01
A two-stage method of producing thin films of group IB-IIIA-VIA on a substrate for semiconductor device applications includes a first stage of depositing an amorphous group IB-IIIA-VIA precursor onto an unheated substrate, wherein the precursor contains all of the group IB and group IIIA constituents of the semiconductor thin film to be produced in the stoichiometric amounts desired for the final product, and a second stage which involves subjecting the precursor to a short thermal treatment at 420 °C-550 °C in a vacuum or under an inert atmosphere to produce a single-phase group IB-IIIA-VIA film. Preferably the precursor also comprises the group VIA element in the stoichiometric amount desired for the final semiconductor thin film. The group IB-IIIA-VIA semiconductor films may be, for example, Cu(In,Ga)(Se,S)2 mixed-metal chalcogenides. The resultant supported group IB-IIIA-VIA semiconductor film is suitable for use in photovoltaic applications.
NASA Astrophysics Data System (ADS)
Zhang, Jingzhao; Zhang, Yiou; Tse, Kinfai; Deng, Bei; Xu, Hu; Zhu, Junyi
2016-05-01
The accurate absolute surface energies of (0001)/(000-1) surfaces of wurtzite structures are crucial in determining the thin film growth mode of important energy materials. However, the surface energies still remain to be solved due to the intrinsic difficulty of calculating the dangling bond energy of asymmetrically bonded surface atoms. In this study, we used a pseudo-hydrogen passivation method to estimate the dangling bond energy and calculate the polar surfaces of ZnO and GaN. The calculations were based on the pseudo chemical potentials obtained from a set of tetrahedral clusters or simple pseudo-molecules, using density functional theory approaches. The surface energies of the (0001)/(000-1) surfaces of wurtzite ZnO and GaN that we obtained showed relatively high self-consistency. A wedge structure calculation with a new bottom surface passivation scheme of group-I and group-VII elements was also proposed and performed to show converged absolute surface energies of wurtzite ZnO polar surfaces, and these results were also compared with the above method. The calculated results generally show that the surface energies of GaN are higher than those of ZnO, suggesting that ZnO tends to wet the GaN substrate, while GaN is unlikely to wet ZnO. Therefore, it will be challenging to grow high quality GaN thin films on ZnO substrates; however, high quality ZnO thin film on a GaN substrate would be possible. These calculations and comparisons may provide important insights into crystal growth of the above materials, thereby leading to significant performance enhancements in semiconductor devices.
A conflict-free, path-level parallelization approach for sequential simulation algorithms
NASA Astrophysics Data System (ADS)
Rasera, Luiz Gustavo; Machado, Péricles Lopes; Costa, João Felipe C. L.
2015-07-01
Pixel-based simulation algorithms are the most widely used geostatistical technique for characterizing the spatial distribution of natural resources. However, sequential simulation does not scale well for stochastic simulation on very large grids, which are now commonly found in many petroleum, mining, and environmental studies. With the availability of multiple-processor computers, there is an opportunity to develop parallelization schemes for these algorithms to increase their performance and efficiency. Here we present a conflict-free, path-level parallelization strategy for sequential simulation. The method consists of partitioning the simulation grid into a set of groups of nodes and delegating all available processors for simulation of multiple groups of nodes concurrently. An automated classification procedure determines which groups are simulated in parallel according to their spatial arrangement in the simulation grid. The major advantage of this approach is that it does not require conflict resolution operations, and thus allows exact reproduction of results. Besides offering a large performance gain when compared to the traditional serial implementation, the method provides efficient use of computational resources and is generic enough to be adapted to several sequential algorithms.
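The conflict-free idea can be illustrated with a simple modular partition: grid nodes whose coordinates are congruent modulo (radius + 1) are always more than the search radius apart, so they never share a search neighbourhood and can be simulated concurrently. This is only a stand-in for the paper's automated classification procedure, which works on the actual simulation path.

```python
def conflict_free_groups(width, height, radius):
    """Partition a 2D simulation grid into groups of nodes that are
    pairwise more than `radius` apart (Chebyshev distance). Nodes of one
    group never fall inside each other's search neighbourhoods, so one
    group can be handed to the available processors without conflicts."""
    step = radius + 1
    groups = {}
    for x in range(width):
        for y in range(height):
            groups.setdefault((x % step, y % step), []).append((x, y))
    return list(groups.values())
```

Because no two nodes in a group can read or write each other's conditioning data, results are reproduced exactly regardless of how many processors run each group, which is the property the abstract emphasizes.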
An algorithmic and information-theoretic approach to multimetric index construction
Schoolmaster, Donald R.; Grace, James B.; Schweiger, E. William; Guntenspergen, Glenn R.; Mitchell, Brian R.; Miller, Kathryn M.; Little, Amanda M.
2013-01-01
The use of multimetric indices (MMIs), such as the widely used index of biological integrity (IBI), to measure, track, summarize and infer the overall impact of human disturbance on biological communities has been steadily growing in recent years. Initially, MMIs were developed for aquatic communities using pre-selected biological metrics as indicators of system integrity. As interest in these bioassessment tools has grown, so have the types of biological systems to which they are applied. For many ecosystem types the appropriate biological metrics to use as measures of biological integrity are not known a priori. As a result, a variety of ad hoc protocols for selecting metrics empirically have been developed. However, the assumptions made by proposed protocols have not been explicitly described or justified, causing many investigators to call for a clear, repeatable methodology for developing empirically derived metrics and indices that can be applied to any biological system. An issue of particular importance that has not been sufficiently addressed is the way that individual metrics combine to produce an MMI that is a sensitive composite indicator of human disturbance. In this paper, we present and demonstrate an algorithm for constructing MMIs given a set of candidate metrics and a measure of human disturbance. The algorithm uses each metric to inform a candidate MMI, and then uses information-theoretic principles to select MMIs that capture the information in the multidimensional system response from among possible MMIs. Such an approach can be used to create purely empirical (data-based) MMIs or can, optionally, be influenced by expert opinion or biological theory through the use of a weighting vector to create value-weighted MMIs. We demonstrate the algorithm with simulated data to illustrate the predictive capacity of the final MMIs and with real data from wetlands from Acadia and Rocky Mountain National Parks. For the Acadia wetland data, the algorithm identified
Exponential Gaussian approach for spectral modeling: The EGO algorithm I. Band saturation
NASA Astrophysics Data System (ADS)
Pompilio, Loredana; Pedrazzi, Giuseppe; Sgavetti, Maria; Cloutis, Edward A.; Craig, Michael A.; Roush, Ted L.
2009-06-01
Curve fitting techniques are a widespread approach to spectral modeling in the VNIR range [Burns, R.G., 1970. Am. Mineral. 55, 1608-1632; Singer, R.B., 1981. J. Geophys. Res. 86, 7967-7982; Roush, T.L., Singer, R.B., 1986. J. Geophys. Res. 91, 10301-10308; Sunshine, J.M., Pieters, C.M., Pratt, S.F., 1990. J. Geophys. Res. 95, 6955-6966]. They have been successfully used to model reflectance spectra of powdered minerals and mixtures, natural rock samples and meteorites, and unknown remote spectra of the Moon, Mars and asteroids. Here, we test a new decomposition algorithm to model VNIR reflectance spectra and call it Exponential Gaussian Optimization (EGO). The EGO algorithm is derived from and complementary to the MGM of Sunshine et al. [Sunshine, J.M., Pieters, C.M., Pratt, S.F., 1990. J. Geophys. Res. 95, 6955-6966]. The general EGO equation has been especially designed to account for absorption bands affected by saturation and asymmetry. Here we present a special case of EGO and address it to model saturated electronic transition bands. Our main goals are: (1) to recognize and model band saturation in reflectance spectra; (2) to develop a basic approach for decomposition of rock spectra, where effects due to saturation are most prevalent; (3) to reduce the uncertainty related to quantitative estimation when band saturation is occurring. In order to accomplish these objectives, we simulate flat bands starting from pure Gaussians and test the EGO algorithm on those simulated spectra first. Then we test the EGO algorithm on a number of measurements acquired on powdered pyroxenes having different compositions and average grain size and binary mixtures of orthopyroxenes with barium sulfate. The main results arising from this study are: (1) EGO model is able to numerically account for the occurrence of saturation effects on reflectance spectra of powdered minerals and mixtures; (2) the systematic dilution of a strong absorber using a bright neutral material is not
Combined mixed approach algorithm for in-line phase-contrast x-ray imaging
De Caro, Liberato; Scattarella, Francesco; Giannini, Cinzia; Tangaro, Sabina; Rigon, Luigi; Longo, Renata; Bellotti, Roberto
2010-07-15
Purpose: In the past decade, phase-contrast imaging (PCI) has been applied to study different kinds of tissues and human body parts, with an increased improvement of the image quality with respect to simple absorption radiography. A technique closely related to PCI is phase-retrieval imaging (PRI). Indeed, PCI is an imaging modality thought to enhance the total contrast of the images through the phase shift introduced by the object (human body part); PRI is a mathematical technique to extract the quantitative phase-shift map from PCI. A new phase-retrieval algorithm for the in-line phase-contrast x-ray imaging is here proposed. Methods: The proposed algorithm is based on a mixed transfer-function and transport-of-intensity approach (MA) and it requires, at most, an initial approximate estimate of the average phase shift introduced by the object as prior knowledge. The accuracy in the initial estimate determines the convergence speed of the algorithm. The proposed algorithm retrieves both the object phase and its complex conjugate in a combined MA (CMA). Results: Although slightly less computationally effective with respect to other mixed-approach algorithms, as two phases have to be retrieved, the results obtained by the CMA on simulated data have shown that the obtained reconstructed phase maps are characterized by particularly low normalized mean square errors. The authors have also tested the CMA on noisy experimental phase-contrast data obtained by a suitable weakly absorbing sample consisting of a grid of submillimetric nylon fibers as well as on a strongly absorbing object made of a 0.03 mm thick lead x-ray resolution star pattern. The CMA has shown a good efficiency in recovering phase information, also in presence of noisy data, characterized by peak-to-peak signal-to-noise ratios down to a few dBs, showing the possibility to enhance with phase radiography the signal-to-noise ratio for features in the submillimetric scale with respect to the attenuation
An algorithmic approach for breakage-fusion-bridge detection in tumor genomes.
Zakov, Shay; Kinsella, Marcus; Bafna, Vineet
2013-04-01
Breakage-fusion-bridge (BFB) is a mechanism of genomic instability characterized by the joining and subsequent tearing apart of sister chromatids. When this process is repeated during multiple rounds of cell division, it leads to patterns of copy number increases of chromosomal segments as well as fold-back inversions where duplicated segments are arranged head-to-head. These structural variations can then drive tumorigenesis. BFB can be observed in progress using cytogenetic techniques, but generally BFB must be inferred from data such as microarrays or sequencing collected after BFB has ceased. Making correct inferences from this data is not straightforward, particularly given the complexity of some cancer genomes and BFB's ability to generate a wide range of rearrangement patterns. Here we present algorithms to aid the interpretation of evidence for BFB. We first pose the BFB count-vector problem: given a chromosome segmentation and segment copy numbers, decide whether BFB can yield a chromosome with the given segment counts. We present a linear time algorithm for the problem, in contrast to a previous exponential time algorithm. We then combine this algorithm with fold-back inversions to develop tests for BFB. We show that, contingent on assumptions about cancer genome evolution, count vectors and fold-back inversions are sufficient evidence for detecting BFB. We apply the presented techniques to paired-end sequencing data from pancreatic tumors and confirm a previous finding of BFB as well as identify a chromosomal region likely rearranged by BFB cycles, demonstrating the practicality of our approach.
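The BFB count-vector problem can be stated compactly in code. The brute-force search below (truncate the chromosome after some position, then append the kept prefix reversed, i.e. the fold-back, for a bounded number of cycles) is exponential and only workable for toy instances; the paper's contribution is a linear-time decision algorithm, which is not reproduced here, and the round/length bounds are illustrative assumptions.

```python
def bfb_reachable(counts, max_rounds=6):
    """Decide, by brute force, the BFB count-vector problem for tiny
    instances: starting from one copy of segments 1..n, can repeated
    breakage-fusion-bridge cycles produce a chromosome whose segment
    copy numbers equal `counts`?"""
    n = len(counts)
    target = tuple(counts)
    cap = 4 * max(sum(counts), 1)          # prune chromosomes far beyond the target size

    def count_vec(chrom):
        return tuple(chrom.count(s) for s in range(1, n + 1))

    frontier = {tuple(range(1, n + 1))}    # the unrearranged chromosome
    seen = set(frontier)
    for _ in range(max_rounds):
        if any(count_vec(c) == target for c in frontier):
            return True
        nxt = set()
        for chrom in frontier:
            for k in range(1, len(chrom) + 1):
                child = chrom[:k] + chrom[:k][::-1]   # break, then fold back
                if child not in seen and len(child) <= cap:
                    seen.add(child)
                    nxt.add(child)
        frontier = nxt
    return any(count_vec(c) == target for c in frontier)
```

Even this toy captures the head-to-head duplication pattern the abstract describes: each child ends with a reversed copy of its own prefix, the fold-back inversion signature.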
Genetic algorithm used in interference filter's design
NASA Astrophysics Data System (ADS)
Li, Jinsong; Fang, Ying; Gao, Xiumin
2009-11-01
An approach for designing interference filters is presented here using a genetic algorithm (hereafter referred to as GA). We use the GA to design a band-stop filter and a narrow-band filter. The design method computes the optimal reflectivity or transmittance of the filter. The evaluation function used in our genetic algorithm differs from those used previously. Using the characteristic matrix to calculate the photonic band gap of a one-dimensional photonic crystal is analogous to calculating the electronic structure of a doped semiconductor. If the evaluation function is sensitive to deviations in the photonic crystal structure, the genetic-algorithm approach is effective. A summary and explanations of some open issues are given at the end of this paper.
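The characteristic-matrix calculation mentioned above is standard thin-film optics: each layer contributes a 2x2 matrix, and the stack's reflectance follows from the matrix product. A normal-incidence sketch, over which a GA would search by varying the index and thickness arrays (the function name and interface are assumptions):

```python
import cmath

def reflectance(n_layers, d_layers, n_inc, n_sub, wavelength):
    """Normal-incidence reflectance of a thin-film stack via the
    characteristic-matrix method. n_layers/d_layers give the refractive
    index and physical thickness of each layer; n_inc and n_sub are the
    incident medium and substrate indices."""
    m = [[1.0, 0.0], [0.0, 1.0]]
    for n, d in zip(n_layers, d_layers):
        delta = 2 * cmath.pi * n * d / wavelength   # phase thickness of the layer
        layer = [[cmath.cos(delta), 1j * cmath.sin(delta) / n],
                 [1j * n * cmath.sin(delta), cmath.cos(delta)]]
        m = [[m[0][0] * layer[0][0] + m[0][1] * layer[1][0],
              m[0][0] * layer[0][1] + m[0][1] * layer[1][1]],
             [m[1][0] * layer[0][0] + m[1][1] * layer[1][0],
              m[1][0] * layer[0][1] + m[1][1] * layer[1][1]]]
    b = m[0][0] + m[0][1] * n_sub
    c = m[1][0] + m[1][1] * n_sub
    r = (n_inc * b - c) / (n_inc * b + c)           # amplitude reflection coefficient
    return abs(r) ** 2
```

A GA fitness function would typically sum the squared error between this reflectance and the band-stop or narrow-band target over a set of wavelengths.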
NASA Astrophysics Data System (ADS)
Jiang, Chen; Guo, Yinbiao; Yang, Qingqing; Han, Chunguang
2010-10-01
A new approach based on an artificial neural network (ANN) was presented for the prediction of machining precision in optical aspheric grinding. The ANN model is based on the Globally Convergent Adaptive Quick Back Propagation algorithm (GCAOBP). A genetic algorithm (GA) was then applied to the trained ANN model to predict the grinding precision. The integrated GCAOBP-GA algorithm was successful in predicting the root mean square (RMS) of the profile error of an optical aspheric workpiece in the parallel grinding method using machining parameters. The results of experiments have shown that the RMS of a machined workpiece in parallel grinding can be predicted effectively through this approach.
Algorithmic approaches for computing elementary modes in large biochemical reaction networks.
Klamt, S; Gagneur, J; von Kamp, A
2005-12-01
The concept of elementary (flux) modes provides a rigorous description of pathways in metabolic networks and has proved valuable in a number of applications. However, the computation of elementary modes is a hard computational task that gave rise to several variants of algorithms during the last years. This work brings substantial progress to this issue. The authors start with a brief review of results obtained from previous work regarding (a) a unified framework for elementary-mode computation, (b) network compression and redundancy removal and (c) the binary approach, by which elementary modes are determined as binary patterns, reducing the memory demand drastically without loss of speed. The authors then address further issues. First, a new way to perform the elementarity tests required during the computation of elementary modes is proposed, which empirically improves the computation time significantly in large networks. Second, a method to compute only those elementary modes in which certain reactions are involved is derived. Relying on this method, a promising approach for computing EMs in a completely distributed manner, by decomposing the full problem into arbitrarily many sub-tasks, is presented. The new methods have been implemented in the freely available software tools FluxAnalyzer and Metatool, and benchmark tests in realistic networks emphasise the potential of the proposed algorithms.
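The elementarity test at the heart of the binary approach is a support-minimality check over binary patterns: a mode is elementary only if no other mode's reaction support is a proper subset of its own. A toy sketch over explicit candidate flux vectors (real implementations never enumerate candidates this way, and work on compressed networks):

```python
def elementary_filter(modes):
    """Keep only the candidates passing the elementarity (support-
    minimality) test: discard any mode whose set of active reactions
    properly contains another candidate's. Modes are flux vectors;
    supports play the role of the binary patterns in the paper."""
    supports = [frozenset(i for i, v in enumerate(m) if v != 0) for m in modes]
    return [m for m, s in zip(modes, supports)
            if not any(other < s for other in supports)]   # '<' is proper subset
```

Comparing frozensets here mirrors why the binary representation is so memory-friendly: the test needs only which reactions carry flux, not the flux values themselves.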
Garbuzov, D.Z.; Martinelli, R.U.; Khalfin, V.; Lee, H.; Morris, N.A.; Taylor, G.C.; Connolly, J.C.; Charache, G.W.; DePoy, D.M.
1997-10-01
Heterojunction n-Al{sub 0.25}Ga{sub 0.75}As{sub 0.02}Sb{sub 0.98}/p-In{sub 0.16}Ga{sub 0.84}As{sub 0.04}Sb{sub 0.96} thermophotovoltaic (TPV) cells were grown by molecular-beam epitaxy on n-GaSb substrates. In the spectral range from 1 {micro}m to 2.1 {micro}m these cells, as well as homojunction n-p-In{sub 0.16}Ga{sub 0.84}As{sub 0.04}Sb{sub 0.96} cells, have demonstrated internal quantum efficiencies exceeding 80%, despite an approximately 200 meV barrier in the conduction band at the heterointerface. Estimates show that thermal emission over this barrier of the electrons photogenerated in the p-region can provide high efficiency for hetero-cells if the electron recombination time in p-In{sub 0.16}Ga{sub 0.84}As{sub 0.04}Sb{sub 0.96} is longer than 10 ns. While keeping the same internal efficiency as homojunction cells, hetero-cells provide a unique opportunity to decrease the dark forward current and thereby increase the open-circuit voltage (V{sub oc}) and fill factor at a given illumination level. It is shown that the decrease of the forward current in hetero-cells is due to the lower recombination rate in the n-type wider-bandgap space-charge region and to the suppression of the hole component of the forward current. The improvement in V{sub oc} reaches 100% at an illumination level equivalent to 1 mA/cm{sup 2} and decreases to 5% at the highest illumination levels (2--3 A/cm{sup 2}), where the electron current component dominates in both the homo- and heterojunction cells. Values of V{sub oc} as high as 310 mV have been obtained for a hetero-cell at illumination levels of 3 A/cm{sup 2}. Under this condition, the expected fill factor is about 72% for a hetero-cell with improved series resistance. The heterojunction concept provides excellent prospects for further reduction of the dark forward current in TPV cells.
Dynamic Programming Algorithm vs. Genetic Algorithm: Which is Faster?
NASA Astrophysics Data System (ADS)
Petković, Dušan
The article compares two different approaches to the optimization problem of large join queries (LJQs). Almost all commercial database systems use a form of the dynamic programming algorithm to order the join operations of large join queries, i.e., joins with more than a dozen join operations. A property of the dynamic programming algorithm is that its execution time increases significantly when the number of join operations in a query is large. Genetic algorithms (GAs) have been shown to be a promising technique for ordering the join operations in LJQs. Using an existing implementation of a GA, we compare the dynamic programming algorithm implemented in commercial database systems with the corresponding GA module. Our results show that a genetic algorithm is the better solution for the optimization of large join queries, i.e., that such a technique outperforms the implementations of the dynamic programming algorithm in conventional query optimization components for very large join queries.
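As a rough illustration of the GA side of this comparison, the sketch below evolves left-deep join orders with a permutation-encoded genetic algorithm. The relation cardinalities, the selectivity-based cost model, and every GA parameter are invented for the example; this is not the implementation the article benchmarks.

```python
import random

# Toy cost model (illustrative): cardinalities of 6 relations and
# pairwise join selectivities; a left-deep plan's cost is the sum of
# its intermediate result sizes.
random.seed(1)
N = 6
card = [10 ** random.randint(2, 5) for _ in range(N)]
sel = [[1.0] * N for _ in range(N)]
for i in range(N):
    for j in range(i + 1, N):
        sel[i][j] = sel[j][i] = random.uniform(1e-4, 1e-2)

def plan_cost(order):
    """Sum of intermediate sizes for a left-deep join in this order."""
    joined, size, cost = {order[0]}, card[order[0]], 0.0
    for r in order[1:]:
        s = min(sel[r][q] for q in joined)   # crude selectivity estimate
        size = size * card[r] * s
        cost += size
        joined.add(r)
    return cost

def order_crossover(p1, p2):
    """OX: copy a slice of p1, fill the remaining genes in p2's order."""
    a, b = sorted(random.sample(range(N), 2))
    child = [None] * N
    child[a:b] = p1[a:b]
    rest = [g for g in p2 if g not in child]
    for i in range(N):
        if child[i] is None:
            child[i] = rest.pop(0)
    return child

def ga_join_order(pop_size=30, gens=60):
    pop = [random.sample(range(N), N) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=plan_cost)
        nxt = pop[:4]                            # elitism
        while len(nxt) < pop_size:
            p1, p2 = random.sample(pop[:15], 2)  # truncation selection
            c = order_crossover(p1, p2)
            if random.random() < 0.3:            # swap mutation
                i, j = random.sample(range(N), 2)
                c[i], c[j] = c[j], c[i]
            nxt.append(c)
        pop = nxt
    return min(pop, key=plan_cost)

best = ga_join_order()
```

The permutation encoding with order crossover keeps every individual a valid join order, which is the usual design choice for this problem.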
Application of genetic algorithms to tuning fuzzy control systems
NASA Technical Reports Server (NTRS)
Espy, Todd; Vombrack, Endre; Aldridge, Jack
1993-01-01
Real number genetic algorithms (GA) were applied for tuning fuzzy membership functions of three controller applications. The first application is our 'Fuzzy Pong' demonstration, a controller that controls a very responsive system. The performance of the automatically tuned membership functions exceeded that of manually tuned membership functions both when the algorithm started with randomly generated functions and with the best manually-tuned functions. The second GA tunes input membership functions to achieve a specified control surface. The third application is a practical one, a motor controller for a printed circuit manufacturing system. The GA alters the positions and overlaps of the membership functions to accomplish the tuning. The applications, the real number GA approach, the fitness function and population parameters, and the performance improvements achieved are discussed. Directions for further research in tuning input and output membership functions and in tuning fuzzy rules are described.
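A minimal sketch of real-number GA tuning in the same spirit: a population of (a, b, c) triples is evolved so that a triangular membership function matches a target shape. The target triangle, the fitness function, and the GA settings are assumptions for illustration, not the paper's controllers.

```python
import random

def tri_mf(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Target shape to match (an assumption for the sketch): a triangle
# peaked at 0.6 -- the GA must recover (a, b, c) from samples alone.
xs = [i / 50 for i in range(51)]
target = [tri_mf(x, 0.2, 0.6, 0.9) for x in xs]

def fitness(ch):
    a, b, c = sorted(ch)
    if not (a < b < c):
        return float("inf")
    return sum((tri_mf(x, a, b, c) - t) ** 2 for x, t in zip(xs, target))

def tune(pop_size=40, gens=200, seed=3):
    rnd = random.Random(seed)
    pop = [[rnd.random() for _ in range(3)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        nxt = pop[:5]                       # keep the best (elitism)
        while len(nxt) < pop_size:
            p1, p2 = rnd.sample(pop[:20], 2)
            w = rnd.random()                # blend (arithmetic) crossover
            child = [w * g1 + (1 - w) * g2 for g1, g2 in zip(p1, p2)]
            if rnd.random() < 0.2:          # Gaussian mutation
                i = rnd.randrange(3)
                child[i] += rnd.gauss(0, 0.05)
            nxt.append(child)
        pop = nxt
    return sorted(min(pop, key=fitness))

a, b, c = tune()
```

Real-coded chromosomes avoid the discretization error of binary encodings, which is why blend crossover and Gaussian mutation are the standard operators here.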
Balima, O.; Favennec, Y.; Rousse, D.
2013-10-15
Highlights: •New strategies to improve the accuracy of the reconstruction through mesh and finite element parameterization. •Use of gradient filtering through an alternative inner product within the adjoint method. •An integral form of the cost function is used to make the reconstruction compatible with all finite element formulations, continuous and discontinuous. •A gradient-based algorithm with the adjoint method is used for the reconstruction. -- Abstract: Optical tomography is mathematically treated as a non-linear inverse problem in which the optical properties of the probed medium are recovered by minimizing the errors between experimental measurements and their predictions from a numerical model at the detector locations. Owing to the ill-posed nature of the inverse problem, regularization must be applied, and Tikhonov penalization is the most commonly used approach in optical tomography applications. This paper introduces an optimized approach for optical tomography reconstruction with the finite element method. An integral form of the cost function is used to take the surfaces of the detectors into account and make the reconstruction compatible with all finite element formulations, continuous and discontinuous. Within a gradient-based algorithm, where the adjoint method is used to compute the gradient of the cost function, an alternative inner product is employed to precondition the reconstruction algorithm. Moreover, an appropriate re-parameterization of the optical properties is performed. These regularization strategies are compared with the classical Tikhonov penalization. It is shown that both the re-parameterization and the use of the Sobolev cost function gradient are efficient for solving such an ill-posed inverse problem.
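The effect of the alternative (Sobolev-type) inner product can be illustrated in one dimension, where the filtered gradient g_s solves (I - alpha d²/dx²) g_s = g and therefore damps the high-frequency components of the raw L² gradient. The grid, the zero-Dirichlet boundary treatment, and the value of alpha below are assumptions for the sketch, not the paper's finite element setting.

```python
import math

def sobolev_gradient(g, alpha, h=1.0):
    """Solve (I - alpha*D2) g_s = g with zero-Dirichlet ends.

    The tridiagonal system is solved directly with the Thomas algorithm;
    the matrix is strictly diagonally dominant, so this is stable.
    """
    n = len(g)
    r = alpha / h ** 2
    a = [-r] * n          # sub-diagonal
    b = [1 + 2 * r] * n   # diagonal
    c = [-r] * n          # super-diagonal
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], g[0] / b[0]
    for i in range(1, n):          # forward sweep
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (g[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):  # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# A noisy raw gradient: smooth trend plus alternating pixel-scale noise.
g = [math.sin(i / 10) + 0.5 * (-1) ** i for i in range(100)]
gs = sobolev_gradient(g, alpha=5.0)
```

The smoothed gradient keeps the low-frequency trend while the alternating noise, which would otherwise imprint itself on the reconstructed optical properties, is strongly attenuated.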
Lecocke, Michael; Hess, Kenneth
2007-01-01
Background We consider both univariate- and multivariate-based feature selection for the problem of binary classification with microarray data. The idea is to determine whether the more sophisticated multivariate approach leads to better misclassification error rates because of its potential to consider jointly significant subsets of genes (without overfitting the data). Methods We present an empirical study in which 10-fold cross-validation is applied externally to both a univariate-based and two multivariate (genetic algorithm (GA)-based) feature selection processes. These procedures are applied with respect to three supervised learning algorithms and six published two-class microarray datasets. Results Considering all datasets and learning algorithms, the average 10-fold external cross-validation error rates for the univariate-, single-stage GA-, and two-stage GA-based processes are 14.2%, 14.6%, and 14.2%, respectively. We also find that the optimism bias estimates from the GA analyses were half those of the univariate approach, but the selection bias estimates from the GA analyses were 2.5 times those of the univariate results. Conclusions We find that the 10-fold external cross-validation misclassification error rates were very comparable. Further, a two-stage GA approach did not demonstrate a significant advantage over a one-stage approach. We also find that the univariate approach had higher optimism bias and lower selection bias compared to both GA approaches. PMID:19458774
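The key design point of the study, external cross-validation, means that feature selection is redone inside every training fold so the held-out fold never influences which features are chosen. The sketch below illustrates this with invented synthetic "microarray" data, a univariate selector, and a nearest-centroid classifier; none of these are the paper's actual datasets or learning algorithms.

```python
import random

rnd = random.Random(0)
# Synthetic two-class data (illustrative): 60 samples x 40 features,
# only the first 5 features carry class signal.
n, p, informative = 60, 40, 5
y = [i % 2 for i in range(n)]
X = [[rnd.gauss(1.5 * y[i] if j < informative else 0.0, 1.0)
      for j in range(p)] for i in range(n)]

def top_k_features(Xtr, ytr, k):
    """Univariate selection: rank by absolute class-mean difference."""
    scores = []
    for j in range(len(Xtr[0])):
        m0 = [x[j] for x, c in zip(Xtr, ytr) if c == 0]
        m1 = [x[j] for x, c in zip(Xtr, ytr) if c == 1]
        scores.append(abs(sum(m1) / len(m1) - sum(m0) / len(m0)))
    return sorted(range(len(scores)), key=scores.__getitem__)[-k:]

def nearest_centroid_error(Xtr, ytr, Xte, yte, feats):
    cent = {c: [sum(x[j] for x, cc in zip(Xtr, ytr) if cc == c) /
                ytr.count(c) for j in feats] for c in (0, 1)}
    errs = 0
    for x, c in zip(Xte, yte):
        d = {cc: sum((x[j] - m) ** 2 for j, m in zip(feats, cent[cc]))
             for cc in (0, 1)}
        errs += (min(d, key=d.get) != c)
    return errs / len(yte)

# External 10-fold CV: selection happens inside each training fold only,
# which avoids the selection bias the paper quantifies.
idx = list(range(n))
rnd.shuffle(idx)
folds = [idx[f::10] for f in range(10)]
errors = []
for f in folds:
    tr = [i for i in idx if i not in f]
    feats = top_k_features([X[i] for i in tr], [y[i] for i in tr], k=5)
    errors.append(nearest_centroid_error(
        [X[i] for i in tr], [y[i] for i in tr],
        [X[i] for i in f], [y[i] for i in f], feats))
cv_error = sum(errors) / len(errors)
```

Selecting features on the full dataset before cross-validating would leak test information into the selection step and optimistically bias the error estimate.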
New approaches for a solar-pumped GaAs laser
NASA Astrophysics Data System (ADS)
Landis, Geoffrey A.
1992-09-01
Approaches are discussed for a direct solar-pumped semiconductor laser. Efficiencies of 35% should be achievable. The intensity threshold can be decreased by using a wider bandgap material for the absorber material than for the lasing material and by the use of light-trapping structures. The calculated minimum threshold is about 50 times the solar concentration (without light-trapping) or about one solar concentration (with light-trapping).
A Decompositional Approach to Executing Quality Data Model Algorithms on the i2b2 Platform.
Mo, Huan; Jiang, Guoqian; Pacheco, Jennifer A; Kiefer, Richard; Rasmussen, Luke V; Pathak, Jyotishman; Denny, Joshua C; Thompson, William K
2016-01-01
The Quality Data Model (QDM) is an established standard for representing electronic clinical quality measures on electronic health record (EHR) repositories. Informatics for Integrating Biology and the Bedside (i2b2) is a widely used platform for implementing clinical data repositories. However, translation from QDM to i2b2 is challenging, since QDM allows for complex queries beyond the capability of single i2b2 messages. We have developed an approach to decompose complex QDM algorithms into workflows of single i2b2 messages and execute them on the KNIME data analytics platform. Each workflow operation module is composed of parameter lists, a template for the i2b2 message, a mechanism to create parameter updates, and a web service call to i2b2. The communication between workflow modules relies on passing keys of i2b2 result sets. As a demonstration of validity, we describe the implementation and execution of a type 2 diabetes mellitus phenotype algorithm against an i2b2 data repository. PMID:27570665
Armañanzas, Rubén; Saeys, Yvan; Inza, Iñaki; García-Torres, Miguel; Bielza, Concha; van de Peer, Yves; Larrañaga, Pedro
2011-01-01
Progress is continuously being made in the quest for stable biomarkers linked to complex diseases. Mass spectrometers are one of the devices for tackling this problem. The data profiles they produce are noisy and unstable. In these profiles, biomarkers are detected as signal regions (peaks) where control and disease samples behave differently. Mass spectrometry (MS) data generally contain a limited number of samples described by a high number of features. In this work, we present a novel class of evolutionary algorithms, estimation of distribution algorithms (EDA), as an efficient peak selector in this MS domain. There is a trade-off between the reliability of the detected biomarkers and the low number of samples for analysis. For this reason, we introduce a consensus approach, built upon the classical EDA scheme, that improves the stability and robustness of the final set of relevant peaks. An entire data workflow is designed to yield unbiased results. Four publicly available MS data sets (two MALDI-TOF and two SELDI-TOF) are analyzed. The results are compared to the original works, and a new plot (peak frequential plot) for graphically inspecting the relevant peaks is introduced. A complete online supplementary page, which can be found at http://www.sc.ehu.es/ccwbayes/members/ruben/ms, includes extended information and results, in addition to Matlab scripts and references.
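The EDA idea can be sketched with the simplest member of the family, a univariate marginal distribution algorithm (UMDA) over binary peak-selection masks: instead of crossover and mutation, each generation fits a probability model to the best individuals and samples the next population from it. The synthetic fitness (reward hidden "true" peaks, penalize extras) and all parameters below are invented for illustration and are not the paper's consensus scheme.

```python
import random

rnd = random.Random(5)
P = 30                                   # number of candidate peaks
true_peaks = set(rnd.sample(range(P), 5))  # hidden biomarker peaks (toy)

def fitness(mask):
    """Reward selecting true peaks, mildly penalize spurious ones."""
    hits = sum(1 for j in range(P) if mask[j] and j in true_peaks)
    extras = sum(mask) - hits
    return hits - 0.3 * extras

def umda(pop_size=60, gens=40, top=20):
    prob = [0.5] * P                     # univariate marginal model
    for _ in range(gens):
        # sample a population from the current marginals
        pop = [[1 if rnd.random() < prob[j] else 0 for j in range(P)]
               for _ in range(pop_size)]
        pop.sort(key=fitness, reverse=True)
        sel = pop[:top]
        # re-estimate the marginals from the selected individuals,
        # clamped to keep every probability away from 0 and 1
        prob = [min(0.95, max(0.05,
                sum(ind[j] for ind in sel) / top)) for j in range(P)]
    return prob

prob = umda()
selected = [j for j in range(P) if prob[j] > 0.5]
```

The final marginal vector doubles as a stability read-out: peaks whose probability saturates near the upper clamp are the ones the model consistently considers relevant.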
Zelken, Jonathan A; AlDeek, Nidal F; Hsu, Chung-Chen; Chang, Nai-Jen; Lin, Chih-Hung; Lin, Cheng-Hung
2016-02-01
Lower abdominal, perineal, and groin (LAPG) reconstruction may be performed in a single stage. Anterolateral thigh (ALT) flaps are preferred here, taken as fasciocutaneous (ALT-FC), myocutaneous (ALT-MC), or vastus lateralis myocutaneous (VL-MC) flaps. We present the results of reconstruction from a series of patients and guide flap selection with an algorithmic approach to LAPG reconstruction that optimizes outcomes and minimizes morbidity. Lower abdomen, groin, perineum, vulva, vagina, scrotum, and bladder wounds reconstructed in 22 patients using ALT flaps between 2000 and 2013 were retrospectively studied. Five ALT-FC, eight ALT-MC, and nine VL-MC flaps were performed. All flaps survived. Venous congestion occurred in three VL-MC flaps from mechanical causes. Wound infection occurred in six cases. Urinary leakage occurred in three cases of bladder reconstruction. One patient died from congestive heart failure. The ALT flap is time-tested and dependably addresses most LAPG defects; flap variations are suited to niche defects. We propose a novel algorithm to guide reconstructive decision-making.
Du, Hubing; Gao, Honghong
2016-08-20
Affected by height-dependent effects, phase-shifting shadow moiré can only be implemented in an approximate way. In this technique, a fixed phase step of around π/2 rad between two adjacent frames is usually introduced by translating the grating in its own plane, so the method is not flexible in some situations. Additionally, because shadow moiré fringes have a complex intensity distribution, computing the introduced phase shift with the existing arccosine- or arcsine-based phase shift extraction algorithms is often unstable. To address this, we developed a Gram-Schmidt orthonormalization approach based on a three-frame self-calibrating phase-shifting algorithm with equal but unknown phase steps. The proposed method, which uses the arctangent function, is fast and can be implemented robustly in many applications. Optical experiments verifying the proposed method against the conventional five-step phase-shifting shadow moiré confirm its correctness. PMID:27556993
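The Gram-Schmidt idea can be sketched in its simplest two-frame form (the paper's method is a three-frame self-calibrating variant): background-subtracted fringe patterns are treated as vectors, the second is orthonormalized against the first, and the wrapped phase follows from an arctangent of the two components. The synthetic fringes and the phase-step value below are assumptions for the demonstration.

```python
import math

def gs_phase(i1, i2):
    """Two-frame Gram-Schmidt phase demodulation (illustrative sketch)."""
    n = len(i1)
    m1, m2 = sum(i1) / n, sum(i2) / n
    u1 = [v - m1 for v in i1]                 # remove DC background
    u2 = [v - m2 for v in i2]
    n1 = math.sqrt(sum(a * a for a in u1))
    e1 = [a / n1 for a in u1]                 # first orthonormal basis vector
    proj = sum(a * b for a, b in zip(e1, u2))
    w2 = [b - proj * e for b, e in zip(u2, e1)]   # orthogonal component
    n2 = math.sqrt(sum(b * b for b in w2))
    e2 = [b / n2 for b in w2]
    # pointwise wrapped phase from the quadrature pair (e1, -e2)
    return [math.atan2(-b, a) for a, b in zip(e1, e2)]

# Synthetic fringes: a phase ramp and an unknown step (here 1.3 rad).
N, delta = 400, 1.3
phi = [0.05 * k for k in range(N)]
f1 = [2.0 + math.cos(p) for p in phi]
f2 = [2.0 + math.cos(p + delta) for p in phi]
rec = gs_phase(f1, f2)
```

No knowledge of the phase step enters the computation, which is the self-calibrating property the abstract emphasizes; the result is determined up to a global sign when the step's sine is negative.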
Dalzini, Annalisa; Bergamini, Christian; Biondi, Barbara; De Zotti, Marta; Panighel, Giacomo; Fato, Romana; Peggion, Cristina; Bortolus, Marco; Maniero, Anna Lisa
2016-01-01
Peptaibols are peculiar peptides produced by fungi as weapons against other microorganisms. Previous studies showed that peptaibols are promising peptide-based drugs because they act against cell membranes rather than a specific target, thus lowering the possibility of the onset of multi-drug resistance, and they possess non-coded α-amino acid residues that confer proteolytic resistance. Trichogin GA IV (TG) is a short peptaibol displaying antimicrobial and cytotoxic activity. In the present work, we studied thirteen TG analogues, adopting a multidisciplinary approach. We showed that the cytotoxicity is tuneable by single amino-acid substitutions. Many analogues maintain the same level of non-selective cytotoxicity as TG, and three analogues are completely non-toxic. Two promising lead compounds, characterized by the introduction of a positively charged unnatural amino-acid in the hydrophobic face of the helix, selectively kill T67 cancer cells without affecting healthy cells. To explain the determinants of the cytotoxicity, we investigated the structural parameters of the peptides, their cell-binding properties, cell localization, and dynamics in the membrane, as well as the cell membrane composition. We show that, while cytotoxicity is governed by the fine balance between amphipathicity and hydrophobicity, the selectivity also depends on the expression of negatively charged phospholipids on the cell surface. PMID:27039838
A Dynamic Health Assessment Approach for Shearer Based on Artificial Immune Algorithm
Wang, Zhongbin; Xu, Xihua; Si, Lei; Ji, Rui; Liu, Xinhua; Tan, Chao
2016-01-01
To accurately identify the dynamic health of a shearer, reduce its operating faults and production accidents, and further improve coal production efficiency, a dynamic health assessment approach for shearers based on an artificial immune algorithm was proposed. The key technologies, such as the system framework, the selection of indicators for shearer dynamic health assessment, and the health assessment model, were provided, and the flowchart of the proposed approach was designed. A simulation example based on data collected from an industrial production site was provided, achieving an accuracy of 96%. Furthermore, a comparison demonstrated that the proposed method exhibited higher classification accuracy than classifiers based on back-propagation neural network (BP-NN) and support vector machine (SVM) methods. Finally, the proposed approach was applied to an engineering problem of shearer dynamic health assessment. The industrial application results showed that the research achievements can be used in combination with the shearer automation control system in a fully mechanized coal face. The simulation and application results indicated that the proposed method is feasible and outperforms the others. PMID:27123002
Shang, J.S.; Andrienko, D.A.; Huang, P.G.; Surzhikov, S.T.
2014-06-01
An efficient computational capability for nonequilibrium radiation simulation via the ray tracing technique has been developed. The radiative rate equation is iteratively coupled with the aerodynamic conservation laws, including nonequilibrium chemical and chemical-physical kinetic models. The spectral properties along tracing rays are determined by a space-partition algorithm based on nearest-neighbor search, and the numerical accuracy is further enhanced by local resolution refinement using the Gauss-Lobatto polynomial. The interdisciplinary governing equations are solved by an implicit delta formulation through the diminishing-residual approach. The axisymmetric radiating flow field over the RAM-C II reentry probe has been simulated and verified against flight data and previous solutions obtained by traditional methods. A computational efficiency gain of nearly forty times is realized over existing simulation procedures.
Nakashima, Megan O.
2014-01-01
Hypercoagulability can result from a variety of inherited and, more commonly, acquired conditions. Testing for the underlying cause of thrombosis in a patient is complicated both by the number and variety of clinical conditions that can cause hypercoagulability as well as the many potential assay interferences. Using an algorithmic approach to hypercoagulability testing provides the ability to tailor assay selection to the clinical scenario. It also reduces the number of unnecessary tests performed, saving cost and time, and preventing potential false results. New oral anticoagulants are powerful tools for managing hypercoagulable patients; however, their use introduces new challenges in terms of test interpretation and therapeutic monitoring. The coagulation laboratory plays an essential role in testing for and treating hypercoagulable states. The input of laboratory professionals is necessary to guide appropriate testing and synthesize interpretation of results. PMID:25025009
A non-subjective approach to the GP algorithm for analysing noisy time series
NASA Astrophysics Data System (ADS)
Harikrishnan, K. P.; Misra, R.; Ambika, G.; Kembhavi, A. K.
2006-03-01
We present an adaptation of the standard Grassberger-Procaccia (GP) algorithm for estimating the correlation dimension of a time series in a non-subjective manner. The validity and accuracy of this approach are tested using different types of time series, such as those from standard chaotic systems, pure white and colored noise, and chaotic systems with added noise. The effectiveness of the scheme in analysing noisy time series, particularly those involving colored noise, is investigated. One interesting result is that, for the same percentage of added noise, data with colored noise are more distinguishable from the corresponding surrogates than data with white noise. As examples of real-life applications, analyses of data from an astrophysical X-ray object and a human brain EEG are presented.
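The core of the standard GP algorithm, the correlation sum C(r) and a slope estimate of log C(r) versus log r, can be sketched as follows. The delay embedding, the pair of radii, and the logistic-map test series are illustrative choices only; in particular, the paper's contribution is a non-subjective choice of the scaling region, which this toy fixed-radius slope does not implement.

```python
import math

def embed(series, m, tau=1):
    """Delay embedding: m-dimensional vectors with delay tau."""
    return [series[i:i + m * tau:tau]
            for i in range(len(series) - (m - 1) * tau)]

def correlation_sum(points, r):
    """C(r): fraction of point pairs closer than r (max-norm)."""
    n, count = len(points), 0
    for i in range(n):
        for j in range(i + 1, n):
            if max(abs(a - b) for a, b in zip(points[i], points[j])) < r:
                count += 1
    return 2.0 * count / (n * (n - 1))

def correlation_dimension(series, m, r1, r2):
    """Local slope of log C(r) vs log r between two radii."""
    pts = embed(series, m)
    c1, c2 = correlation_sum(pts, r1), correlation_sum(pts, r2)
    return (math.log(c2) - math.log(c1)) / (math.log(r2) - math.log(r1))

# Test series: the fully chaotic logistic map x -> 4x(1-x),
# with the first 200 iterates discarded as a transient.
x, series = 0.4, []
for _ in range(1200):
    x = 4.0 * x * (1.0 - x)
    series.append(x)
d2 = correlation_dimension(series[200:], m=2, r1=0.05, r2=0.2)
```

In practice C(r) is evaluated over many radii and the dimension is read from the slope of the linear scaling region; choosing that region objectively is exactly the issue the paper addresses.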
NASA Astrophysics Data System (ADS)
Afshar, Abbas; Emami Skardi, Mohammad J.; Masoumi, Fariborz
2015-09-01
Efficient reservoir management requires the implementation of generalized optimal operating policies that manage storage volumes and releases while optimizing one or more objectives. Reservoir operating rules stipulate the actions that should be taken given the current state of the system. This study develops a set of piecewise linear operating rule curves for water supply and hydropower reservoirs, employing an imperialist competitive algorithm in a parameterization-simulation-optimization approach. The adaptive penalty method is used for constraint handling and proved to work efficiently in the proposed scheme. Its performance is tested by deriving an operating rule for the Dez reservoir in Iran. The proposed modelling scheme converged efficiently to near-optimal solutions in the case examples. It was shown that the proposed optimal piecewise linear rule may perform quite well in reservoir operation optimization as the operating period extends from very short to fairly long periods.
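The parameterization-simulation part of this scheme can be sketched as follows: the decision variables are the breakpoints of a piecewise linear storage-to-release rule, and an outer optimizer (the imperialist competitive algorithm in the paper) would search the breakpoints to minimize the simulated penalty. All reservoir numbers, the inflow series, and the penalty form below are invented for the example.

```python
import random

def rule_release(storage, pts):
    """Piecewise linear rule: pts is a sorted list of (storage, release)."""
    if storage <= pts[0][0]:
        return pts[0][1]
    for (s0, r0), (s1, r1) in zip(pts, pts[1:]):
        if storage <= s1:
            return r0 + (r1 - r0) * (storage - s0) / (s1 - s0)
    return pts[-1][1]

def simulate(pts, inflows, s0=50.0, smax=100.0, demand=8.0):
    """Mass-balance simulation; penalize supply deficits and spills."""
    s, penalty = s0, 0.0
    for q in inflows:
        r = min(rule_release(s, pts), s + q)  # cannot release more than held
        s = s + q - r
        if s > smax:                          # spill over the crest
            penalty += (s - smax)
            s = smax
        penalty += max(0.0, demand - r) ** 2  # squared supply deficit
    return penalty

rnd = random.Random(7)
inflows = [max(0.0, rnd.gauss(8.0, 3.0)) for _ in range(120)]
# one candidate rule: three (storage, release) breakpoints
rule = [(0.0, 2.0), (40.0, 8.0), (100.0, 12.0)]
cost = simulate(rule, inflows)
```

An optimizer would call `simulate` as its fitness function and move the breakpoints; the adaptive penalty the paper uses would additionally rescale the constraint terms as the search progresses.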
NASA Astrophysics Data System (ADS)
Deng, Jianbo; Liu, Yi; Yang, Dongxu; Cai, Zhaonan
2014-05-01
Satellite measurements of column-averaged dry-air mole fractions of CH4 (XCH4) in the shortwave infrared (SWIR), with very high spectral resolution and high sensitivity near the surface, such as those of the Thermal And Near-infrared Sensor for carbon Observation (TANSO) onboard the Greenhouse gases Observing SATellite (GOSAT, launched 2009), are expected to provide extensive spatial and temporal information on the sources and sinks of CH4, contributing to the understanding of global CH4 variation and its impact on climate change. One of the important science requirements for monitoring CH4 from hyperspectral measurements is a highly accurate retrieval algorithm. To retrieve XCH4 efficiently, we developed a SWIR two-band (5900-6150 cm-1 and 4800-4900 cm-1) physical retrieval algorithm after a series of sensitivity studies. The forward model in this algorithm is based on a vector linearized discrete ordinate radiative transfer (VLIDORT) model coupled with a line-by-line radiative transfer model (LBLRTM), which is applied for online calculation of absorption coefficients and backscattered solar radiance. The information content of CH4, H2O, CO2 and temperature in different retrieval bands and band combinations was investigated in order to improve the algorithm. The selected retrieval bands retain more than 90% of the information content of CH4, CO2, and temperature, and more than 85% of that of H2O. The sensitivity studies demonstrate that the uncertainties in H2O, temperature and CO2 cause unacceptable errors if ignored; for example, a 10% bias in the H2O profile leads to a 50 ppb retrieval error, and a 5 K shift in the temperature profile causes a 20 ppb error, while CO2 has little influence. The simulated retrieval test shows it is more efficient to correct the influence of temperature and H2O with a profile model than with a temperature offset and an H2O scale factor model. A preliminary retrieval test using GOSAT Level 1B
Exponential Gaussian approach for spectral modelling: The EGO algorithm II. Band asymmetry
NASA Astrophysics Data System (ADS)
Pompilio, Loredana; Pedrazzi, Giuseppe; Cloutis, Edward A.; Craig, Michael A.; Roush, Ted L.
2010-08-01
The present investigation is complementary to a previous paper which introduced the EGO approach to spectral modelling of reflectance measurements acquired in the visible and near-IR range (Pompilio, L., Pedrazzi, G., Sgavetti, M., Cloutis, E.A., Craig, M.A., Roush, T.L. [2009]. Icarus, 201 (2), 781-794). Here, we show the performances of the EGO model in attempting to account for temperature-induced variations in spectra, specifically band asymmetry. Our main goals are: (1) to recognize and model thermal-induced band asymmetry in reflectance spectra; (2) to develop a basic approach for decomposition of remotely acquired spectra from planetary surfaces, where effects due to temperature variations are most prevalent; (3) to reduce the uncertainty related to quantitative estimation of band position and depth when band asymmetry is occurring. In order to accomplish these objectives, we tested the EGO algorithm on a number of measurements acquired on powdered pyroxenes at sample temperature ranging from 80 up to 400 K. The main results arising from this study are: (1) EGO model is able to numerically account for the occurrence of band asymmetry on reflectance spectra; (2) the returned set of EGO parameters can suggest the influence of some additional effect other than the electronic transition responsible for the absorption feature; (3) the returned set of EGO parameters can help in estimating the surface temperature of a planetary body; (4) the occurrence of absorptions which are less affected by temperature variations can be mapped for minerals and thus used for compositional estimates. Further work is still required in order to analyze the behaviour of the EGO algorithm with respect to temperature-induced band asymmetry using powdered pyroxene spanning a range of compositions and grain sizes and more complex band shapes.
Kurtz, S.; Wanlass, M.; Kramer, C.; Young, M.; Geisz, J.; Ward, S.; Duda, A.; Moriarty, T.; Carapella, J.; Ahrenkiel, P.; Emery. K.; Jones, K.; Romero, M.; Kibbler, A.; Olson, J.; Friedman, D.; McMahon, W.; Ptak, A.
2005-11-01
GaInP/GaAs/GaInAs three-junction cells are grown in an inverted configuration on GaAs, allowing high-quality growth of the lattice-matched GaInP and GaAs layers before a grade is used for the 1-eV GaInAs layer. Using this approach, an efficiency of 37.9% was demonstrated.
Carbon, oxygen, boron, hydrogen and nitrogen in the LEC growth of SI GaAs: a thermochemical approach
NASA Astrophysics Data System (ADS)
Korb, J.; Flade, T.; Jurisch, M.; Köhler, A.; Reinhold, Th; Weinert, B.
1999-03-01
The ChemSage code [Eriksson and Hack, Metall. Trans. B 12 (1990) 1013], which minimizes the total Gibbs free energy, was used to calculate phase equilibria in the complex thermochemical system representing LEC GaAs crystal growth, which comprises the growth atmosphere, the liquid boron oxide, the GaAs melt and solid phases including the GaAs crystal. The behaviour of C, B, O, N and H in the crystal growth melt at 1509.42 K is investigated as a function of the relevant technological parameters.
An Algorithmic Framework for Multiobjective Optimization
Ganesan, T.; Elamvazuthi, I.; Shaari, Ku Zilati Ku; Vasant, P.
2013-01-01
Multiobjective (MO) optimization is an emerging field, and MO problems are increasingly encountered across many disciplines. Various metaheuristic techniques such as differential evolution (DE), genetic algorithms (GA), the gravitational search algorithm (GSA), and particle swarm optimization (PSO) have been used in conjunction with scalarization techniques such as the weighted-sum approach and the normal-boundary intersection (NBI) method to solve MO problems. Nevertheless, many challenges remain, especially when dealing with problems with more than two objectives. In addition, hybrid algorithms often incur extensive computational overhead. This paper addresses these issues by proposing an alternative framework that utilizes algorithmic concepts related to the problem structure to generate new high-performance algorithms with minimal computational overhead for MO optimization. PMID:24470795
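As a concrete illustration of the scalarization idea mentioned above, the sketch below collapses a toy two-objective problem into a single objective via a weighted sum and sweeps the weights to trace points on the Pareto front. Random search stands in for DE/GA/PSO; the objectives, bounds, and parameters are illustrative assumptions, not from the paper.

```python
import random

def weighted_sum_scalarize(objectives, weights):
    # Collapse several objective values into one scalar via a weighted sum.
    return sum(w * f for w, f in zip(weights, objectives))

# Two toy objectives to minimize: f1(x) = x^2 and f2(x) = (x - 2)^2.
def f1(x): return x * x
def f2(x): return (x - 2) ** 2

def random_search(weights, iters=2000, seed=0):
    # Minimal scalarized solver; any metaheuristic could be dropped in here.
    rng = random.Random(seed)
    best_x, best_val = None, float("inf")
    for _ in range(iters):
        x = rng.uniform(-5.0, 5.0)
        val = weighted_sum_scalarize((f1(x), f2(x)), weights)
        if val < best_val:
            best_x, best_val = x, val
    return best_x

# Sweeping the weight vector traces out points along the Pareto front.
front = [random_search((w, 1 - w)) for w in (0.1, 0.5, 0.9)]
```

For these quadratics the scalarized optimum is x* = 2(1 - w), so increasing the weight on f1 pulls the solution toward 0 and increasing the weight on f2 pulls it toward 2.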
Embedding SAS approach into conjugate gradient algorithms for asymmetric 3D elasticity problems
Chen, Hsin-Chu; Warsi, N.A.; Sameh, A.
1996-12-31
In this paper, we present two strategies to embed the SAS (symmetric-and-antisymmetric) scheme into conjugate gradient (CG) algorithms to make solving 3D elasticity problems, with or without global reflexive symmetry, more efficient. The SAS approach is physically a domain decomposition scheme that takes advantage of reflexive symmetry of discretized physical problems, and algebraically a matrix transformation method that exploits special reflexivity properties of the matrix resulting from discretization. In addition to offering large-grain parallelism, which is valuable in a multiprocessing environment, the SAS scheme also has the potential for reducing arithmetic operations in the numerical solution of a reasonably wide class of scientific and engineering problems. This approach can be applied directly to problems that have global reflexive symmetry, yielding smaller and independent subproblems to solve, or indirectly to problems with partial symmetry, resulting in loosely coupled subproblems. The decomposition is achieved by separating the reflexive subspace from the antireflexive one, possessed by a special class of matrices A ∈ C^(n×n) that satisfy the relation A = PAP, where P is a reflection matrix (symmetric signed permutation matrix).
Optimal management of substrates in anaerobic co-digestion: An ant colony algorithm approach.
Verdaguer, Marta; Molinos-Senante, María; Poch, Manel
2016-04-01
Sewage sludge (SWS) is inevitably produced in urban wastewater treatment plants (WWTPs). The treatment of SWS on site at small WWTPs is not economical; therefore, the SWS is typically transported to an alternative SWS treatment center. There is increased interest in the use of anaerobic digestion (AnD) with co-digestion as an SWS treatment alternative. Although the availability of different co-substrates has been ignored in most of the previous studies, it is an essential issue for the optimization of AnD co-digestion. In a pioneering approach, this paper applies an Ant-Colony-Optimization (ACO) algorithm that maximizes the generation of biogas through AnD co-digestion in order to optimize the discharge of organic waste from different waste sources in real-time. An empirical application is developed based on a virtual case study that involves organic waste from urban WWTPs and agrifood activities. The results illustrate the dominant role of toxicity levels in selecting contributions to the AnD input. The methodology and case study proposed in this paper demonstrate the usefulness of the ACO approach in supporting a decision process that contributes to improving the sustainability of organic waste and SWS management. PMID:26868846
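The substrate-selection task sketched above resembles a knapsack problem: admit substrates to maximize biogas gain without exceeding a toxicity cap. The toy ant-colony sketch below illustrates that idea only; the paper's actual formulation, pheromone rules, and data are not reproduced, and all numbers are hypothetical.

```python
import random

def aco_select_substrates(gains, toxicity, tox_limit,
                          n_ants=30, n_iter=50, rho=0.5, seed=1):
    # Toy ACO: each ant builds a substrate subset in a pheromone-biased
    # random order, respecting the toxicity cap; the best subset found
    # reinforces the pheromone trail of its substrates.
    rng = random.Random(seed)
    n = len(gains)
    tau = [1.0] * n                      # pheromone level per substrate
    best_set, best_gain = [], 0.0
    for _ in range(n_iter):
        for _ in range(n_ants):
            chosen, tox = [], 0.0
            order = sorted(range(n), key=lambda i: -tau[i] * rng.random())
            for i in order:
                if tox + toxicity[i] <= tox_limit:
                    chosen.append(i)
                    tox += toxicity[i]
            gain = sum(gains[i] for i in chosen)
            if gain > best_gain:
                best_set, best_gain = chosen, gain
        tau = [(1 - rho) * t for t in tau]   # evaporation
        for i in best_set:
            tau[i] += best_gain              # reinforce the best solution
    return sorted(best_set), best_gain
```

With gains (6, 5, 4), toxicities (3, 2, 2) and a cap of 4, the ants settle on the pair of lower-toxicity substrates, whose combined gain beats the single high-gain substrate.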
Diffuse lung disease of infancy: a pattern-based, algorithmic approach to histological diagnosis.
Armes, Jane E; Mifsud, William; Ashworth, Michael
2015-02-01
Diffuse lung disease (DLD) of infancy has multiple aetiologies and the spectrum of disease is substantially different from that seen in older children and adults. In many cases, a specific diagnosis renders a dire prognosis for the infant, with profound management implications. Two recently published series of DLD of infancy, collated from the archives of specialist centres, indicate that the majority of their cases were referred, implying that the majority of biopsies taken for DLD of infancy are first received by less experienced pathologists. The current literature describing DLD of infancy takes a predominantly aetiological approach to classification. We present an algorithmic, histological, pattern-based approach to diagnosis of DLD of infancy, which, with the aid of appropriate multidisciplinary input, including clinical and radiological expertise and ancillary diagnostic studies, may lead to an accurate and useful interim report, with timely exclusion of inappropriate diagnoses. Subsequent referral to a specialist centre for confirmatory diagnosis will be dependent on the individual case and the decision of the multidisciplinary team.
A Parallel Approach To Optimum Actuator Selection With a Genetic Algorithm
NASA Technical Reports Server (NTRS)
Rogers, James L.
2000-01-01
Recent discoveries in smart technologies have created a variety of aerodynamic actuators which have great potential to enable entirely new approaches to aerospace vehicle flight control. For a revolutionary concept such as a seamless aircraft with no moving control surfaces, there is a large set of candidate locations for placing actuators, resulting in a substantially larger number of combinations to examine in order to find an optimum placement satisfying the mission requirements. The placement of actuators on a wing determines the control effectiveness of the airplane. One approach to placement maximizes the moments about the pitch, roll, and yaw axes, while minimizing the coupling. Genetic algorithms have been instrumental in achieving good solutions to discrete optimization problems, such as the actuator placement problem. As a proof of concept, a genetic algorithm has been developed to find the minimum number of actuators required to provide uncoupled pitch, roll, and yaw control for a simplified, untapered, unswept wing model. To find the optimum placement by searching all possible combinations would require 1,100 hours. Formulating the problem as a multi-objective problem and modifying it to take advantage of the parallel processing capabilities of a multi-processor computer reduces the optimization time to 22 hours.
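A binary-encoded GA for this kind of placement problem can be sketched as follows. The "authority" table mapping candidate locations to controllable axes is entirely hypothetical, and the fitness simply penalizes uncovered axes and then prefers fewer actuators; it is a didactic stand-in for the report's moment-coupling objective.

```python
import random

# Hypothetical authority table: which control axes each of 8 candidate
# actuator locations can influence (illustrative, not from the report).
AUTHORITY = [
    {"pitch"}, {"roll"}, {"yaw"},
    {"pitch", "roll"}, {"roll", "yaw"},
    {"pitch", "yaw"}, {"pitch"}, {"yaw"},
]

def fitness(bits):
    # Heavily penalize uncovered axes, then prefer fewer actuators (lower is better).
    covered = set().union(*(AUTHORITY[i] for i, b in enumerate(bits) if b)) if any(bits) else set()
    missing = 3 - len(covered & {"pitch", "roll", "yaw"})
    return 10 * missing + sum(bits)

def ga(pop_size=40, gens=60, pm=0.1, seed=2):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in AUTHORITY] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]           # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(AUTHORITY))
            child = a[:cut] + b[cut:]                        # one-point crossover
            child = [g ^ (rng.random() < pm) for g in child] # bit-flip mutation
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = ga()
```

Because surviving parents carry the best individual forward, the GA steadily drives toward a small actuator set that still covers all three axes.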
One-year results of an algorithmic approach to managing failed back surgery syndrome
Avellanal, Martín; Diaz-Reganon, Gonzalo; Orts, Alejandro; Soto, Silvia
2014-01-01
BACKGROUND: Failed back surgery syndrome (FBSS) is a major clinical problem. Different etiologies with different incidence rates have been proposed. There are currently no standards regarding the management of these patients. Epiduroscopy is an endoscopic technique that may play a role in the management of FBSS. OBJECTIVE: To evaluate an algorithm for management of severe FBSS including epiduroscopy as a diagnostic and therapeutic tool. METHODS: A total of 133 patients with severe symptoms of FBSS (visual analogue scale score ≥7) and no response to pharmacological treatment and physical therapy were included. A six-step management algorithm was applied. Data, including patient demographics, pain and surgical procedure, were analyzed. In all cases, one or more objective causes of pain were established. Treatment success was defined as ≥50% long-term pain relief maintained during the first year of follow-up. Final allocation of patients was registered: good outcome with conservative treatment, surgical reintervention and palliative treatment with implantable devices. RESULTS: Of 122 patients enrolled, 59.84% underwent instrumented surgery and 40.16% a noninstrumented procedure. Most (64.75%) experienced significant pain relief with conventional pain clinic treatments; 15.57% required surgical treatment. Palliative spinal cord stimulation and spinal analgesia were applied in 9.84% and 2.46% of the cases, respectively. The most common diagnosis was epidural fibrosis, followed by disc herniation, global or lateral stenosis, and foraminal stenosis. CONCLUSIONS: A new six-step ladder approach to severe FBSS management that includes epiduroscopy was analyzed. Etiologies are accurately described and a useful role of epiduroscopy was confirmed. PMID:25222573
NASA Technical Reports Server (NTRS)
Hoang, TY
1994-01-01
A real-time, high-rate precision navigation Kalman filter algorithm is developed and analyzed. This Navigation algorithm blends various navigation data collected during terminal-area approach of an instrumented helicopter. The navigation data collected include helicopter position and velocity from a global positioning system in differential mode (DGPS) as well as helicopter velocity and attitude from an inertial navigation system (INS). The goal of the Navigation algorithm is to increase the DGPS accuracy while producing navigational data at the 64 Hz INS update rate. It is important to note that while the data were post-flight processed, the Navigation algorithm was designed for real-time analysis. The design of the Navigation algorithm resulted in a nine-state Kalman filter. The Kalman filter's state matrix contains position, velocity, and velocity bias components. The filter updates positional readings with DGPS position, INS velocity, and velocity bias information. In addition, the filter incorporates a sporadic data rejection scheme. This relatively simple model met and exceeded the ten-meter absolute positional requirement. The Navigation algorithm results were compared with truth data derived from a laser tracker. The helicopter flight profile included terminal glideslope angles of 3, 6, and 9 degrees. Two flight segments extracted during each terminal approach were used to evaluate the Navigation algorithm. The first segment recorded small dynamic maneuvers in the lateral plane, while the second segment recorded motion in the vertical plane. The longitudinal, lateral, and vertical averaged positional accuracies for all three glideslope approaches are as follows (mean plus or minus two standard deviations in meters): longitudinal (-0.03 plus or minus 1.41), lateral (-1.29 plus or minus 2.36), and vertical (-0.76 plus or minus 2.05).
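The position/velocity blending idea can be sketched with a one-dimensional, single-state Kalman filter: propagate position with the INS-like velocity channel, then correct with the DGPS-like position fix. This is a didactic stand-in, not the report's nine-state filter, and the noise parameters are assumed values.

```python
def kalman_blend(pos_meas, vel_meas, dt=1/64, q=1e-4, r=4.0):
    # Scalar-state Kalman filter: state is position; q is process-noise
    # variance per step, r is position-measurement variance (both assumed).
    x = pos_meas[0]          # initial position estimate
    p = 1.0                  # initial estimate variance
    track = []
    for z, v in zip(pos_meas, vel_meas):
        x = x + v * dt       # predict: integrate the measured velocity
        p = p + q            # process noise inflates the uncertainty
        k = p / (p + r)      # Kalman gain
        x = x + k * (z - x)  # correct with the position fix
        p = (1 - k) * p
        track.append(x)
    return track
```

Running at the dt = 1/64 s update rate, the filter leans on the smooth velocity channel between position fixes, which is the essence of blending DGPS with INS data.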
Pitschner, H F; Berkowitsch, A
2001-01-01
Symbolic dynamics, as a nonlinear method, and computation of the normalized algorithmic complexity (C alpha) were applied to basket-catheter mapping of atrial fibrillation (AF) in the right human atrium. The resulting different degrees of organisation of AF were compared to the conventional Wells classification. The short-time temporal and spatial distribution of C alpha during AF and the effects of propafenone on this distribution were investigated in 30 patients. C alpha was calculated for a moving window. The generated C alpha was analyzed within 10 minutes before and after administration of propafenone. The inter-regional C alpha distribution was statistically analyzed. Inter-regional C alpha differences were found in all patients (p < 0.001). The right atrium could be divided into high- and low-complexity areas according to individual patterns. A significant C alpha increase in the cranio-caudal direction was confirmed inter-individually (p < 0.01). The administration of propafenone enlarged the areas of low complexity.
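A common way to obtain a normalized algorithmic complexity from a symbol sequence is Lempel-Ziv phrase counting. The sketch below is a generic illustration of that idea, not necessarily the paper's exact C alpha definition.

```python
import math

def lz_complexity(s):
    # Count distinct phrases in a left-to-right Lempel-Ziv (LZ76-style)
    # parse: each new phrase is the shortest substring not already seen
    # in the preceding text.
    i, c, n = 0, 0, len(s)
    while i < n:
        l = 1
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        c += 1
        i += l
    return c

def normalized_complexity(s):
    # Normalize by the asymptotic rate n / log2(n) of a random binary
    # sequence, so values near 1 indicate highly irregular dynamics.
    n = len(s)
    return lz_complexity(s) * math.log2(n) / n
```

Applied to a symbolized electrogram in a moving window, low values flag organized (regular) activation and high values flag disorganized activation, which is how high- and low-complexity atrial areas can be mapped.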
Instrument design and optimization using genetic algorithms
Hoelzel, Robert; Bentley, Phillip M.; Fouquet, Peter
2006-10-15
This article describes the design of highly complex physical instruments by using a canonical genetic algorithm (GA). The procedure can be applied to all instrument designs where performance goals can be quantified. It is particularly suited to the optimization of instrument design where local optima in the performance figure of merit are prevalent. Here, a GA is used to evolve the design of the neutron spin-echo spectrometer WASP which is presently being constructed at the Institut Laue-Langevin, Grenoble, France. A comparison is made between this artificial intelligence approach and the traditional manual design methods. We demonstrate that the search of parameter space is more efficient when applying the genetic algorithm, and the GA produces a significantly better instrument design. Furthermore, it is found that the GA increases flexibility, by facilitating the reoptimization of the design after changes in boundary conditions during the design phase. The GA also allows the exploration of 'nonstandard' magnet coil geometries. We conclude that this technique constitutes a powerful complementary tool for the design and optimization of complex scientific apparatus, without replacing the careful thought processes employed in traditional design methods.
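A canonical GA of the kind described above can be sketched in a few lines: binary encoding, tournament selection, one-point crossover, bit-flip mutation, and one elite survivor per generation. The OneMax merit function below is a stand-in for a real instrument figure of merit, and all parameter values are illustrative.

```python
import random

def canonical_ga(merit, n_bits=16, pop_size=50, gens=80, pc=0.9, pm=0.02, seed=3):
    # Canonical GA maximizing `merit` over binary chromosomes.
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        elite = max(pop, key=merit)          # keep the best design as-is
        def pick():
            a, b = rng.sample(pop, 2)        # binary tournament selection
            return a if merit(a) >= merit(b) else b
        new = [elite[:]]
        while len(new) < pop_size:
            p1, p2 = pick(), pick()
            if rng.random() < pc:
                cut = rng.randrange(1, n_bits)
                p1 = p1[:cut] + p2[cut:]                       # one-point crossover
            new.append([g ^ (rng.random() < pm) for g in p1])  # bit-flip mutation
        pop = new
    return max(pop, key=merit)

# OneMax (count of "good" design bits) stands in for a quantified
# performance figure of merit such as a spectrometer's resolution.
best = canonical_ga(merit=sum)
```

Because the population explores many regions of parameter space in parallel, this kind of search is far less prone to stalling in local optima of the figure of merit than a single manually tuned design trajectory.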
NASA Astrophysics Data System (ADS)
Della Mora, S.; Boschi, L.; Becker, T. W.; Giardini, D.
2010-12-01
The wavelength spectrum of three-dimensional (3D) heterogeneity naturally reflects the nature of Earth dynamics, and is in its own right an important constraint for geodynamical modeling. The Earth's spectrum has usually been evaluated indirectly, on the basis of previously derived tomographic models. If the geographic distribution of seismic heterogeneities is neglected, however, one can invert global seismic data directly to find the spectrum of the Earth. Inverting for the spectrum is in principle (fewer unknowns) cheaper and more robust than inverting for the 3D structure of a planet: this should allow us to constrain planetary structure at smaller scales than current 3D models do. Based on the work of Gudmundsson and coworkers in the early 1990s, we have developed a linear algorithm for surface waves. The spectra we obtain are in qualitative agreement with results from 3D tomography, but the resolving power is generally lower, due to the simplifications required to linearise the "spectral" inversion. To overcome this problem, we performed full nonlinear inversions of synthetically generated and real datasets, and compared the obtained spectra with the input and tomographic models, respectively. The inversions are calculated on a distributed-memory parallel cluster, employing the MPI package. An evolutionary strategy approach is used to explore the parameter space, using the PIKAIA software. The first preliminary results show a resolving power higher than that of the linearised inversion. This confirms that the approximations required in the linear formulation affect the solution quality, and suggests that the nonlinear approach might effectively help to constrain the heterogeneity spectrum more robustly than currently possible.
NASA Astrophysics Data System (ADS)
Alexandre, E.; Cuadra, L.; Nieto-Borge, J. C.; Candil-García, G.; del Pino, M.; Salcedo-Sanz, S.
2015-08-01
Wave parameters computed from time series measured by buoys (significant wave height Hs, mean wave period, etc.) play a key role in coastal engineering and in the design and operation of wave energy converters. Storms or navigation accidents can make measuring buoys break down, leading to gaps of missing data. In this paper we tackle the problem of locally reconstructing Hs at out-of-operation buoys by using wave parameters from nearby buoys, based on the spatial correlation among values at neighboring buoy locations. The novelty of our approach for its potential application to problems in coastal engineering is twofold. On the one hand, we propose a genetic algorithm hybridized with an extreme learning machine that selects, among the available wave parameters from the nearby buoys, a subset FnSP with nSP parameters that minimizes the Hs reconstruction error. On the other hand, we evaluate to what extent the selected parameters in subset FnSP are good enough to assist other machine learning (ML) regressors (extreme learning machines, support vector machines and Gaussian process regression) in reconstructing Hs. The results show that all the ML methods explored achieve a good Hs reconstruction at the two different locations studied (Caribbean Sea and West Atlantic).
A multi-layer cellular automata approach for algorithmic generation of virtual case studies: VIBe.
Sitzenfrei, R; Fach, S; Kinzel, H; Rauch, W
2010-01-01
Analyses of case studies are used to evaluate new or existing technologies, measures or strategies with regard to their impact on the overall process. However, data availability is limited and hence new technologies, measures or strategies can only be tested on a limited number of case studies. Owing to the specific boundary conditions and system properties of each single case study, results can hardly be generalized or transferred to other boundary conditions. Virtual infrastructure benchmarking (VIBe) is a software tool which algorithmically generates virtual case studies (VCSs) for urban water systems. System descriptions needed for evaluation are extracted from VIBe, whose parameters are based on real-world case studies and the literature. As a result, VIBe writes input files for water simulation software such as EPANET and EPA SWMM. With such input files, numerous simulations can be performed and the results can be benchmarked and analysed stochastically at a city scale. In this work, the VIBe approach is applied with parameters according to a section of the Inn valley, and 1,000 VCSs are thereby generated and evaluated. A comparison of the VCSs with data from real-world case studies shows that the real-world case studies fit within the parameter ranges of the VCSs. Consequently, VIBe tackles the problem of limited availability of case study data.
NASA Astrophysics Data System (ADS)
Kelly, Patrick M.; Cannon, T. Michael; Hush, Donald R.
1995-03-01
CANDID (comparison algorithm for navigating digital image databases) was developed to enable content-based retrieval of digital imagery from large databases using a query-by-example methodology. A user provides an example image to the system, and images in the database that are similar to that example are retrieved. The development of CANDID was inspired by the N-gram approach to document fingerprinting, where a "global signature" is computed for every document in a database and these signatures are compared to one another to determine the similarity between any two documents. CANDID computes a global signature for every image in a database, where the signature is derived from various image features such as localized texture, shape, or color information. A distance between probability density functions of feature vectors is then used to compare signatures. In this paper, we present CANDID and highlight two results from our current research: subtracting a "background" signature from every signature in a database in an attempt to improve system performance when using inner-product similarity measures, and visualizing the contribution of individual pixels in the matching process. These ideas are applicable to any histogram-based comparison technique.
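The signature-and-distance idea can be sketched with a plain intensity histogram; CANDID's real signatures also fold in texture and shape features and compare probability density functions, so everything below is a simplified illustration.

```python
def signature(pixels, bins=8):
    # Toy global signature: a normalized intensity histogram,
    # with pixel values assumed to lie in [0, 1].
    h = [0] * bins
    for p in pixels:
        h[min(int(p * bins), bins - 1)] += 1
    return [c / len(pixels) for c in h]

def dissimilarity(s1, s2):
    # Euclidean distance between two signatures (one of several
    # possible measures; inner-product similarity is another).
    return sum((a - b) ** 2 for a, b in zip(s1, s2)) ** 0.5

def subtract_background(sig, bg):
    # Remove a database-wide "background" signature, clamping at zero,
    # so that features common to every image stop dominating the match.
    return [max(0.0, s - b) for s, b in zip(sig, bg)]
```

Retrieval then amounts to ranking all database signatures by their distance to the query's signature.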
Local and collective magnetism of gallium vacancies in GaN studied by GGA+U approach
NASA Astrophysics Data System (ADS)
Volnianska, O.; Boguslawski, P.
2016-03-01
Magnetic properties of Ga vacancies VGa and vacancy pairs in GaN are analyzed by employing the Generalized Gradient Approximation with the +U corrections. Strong spin polarization stabilizes high-spin configurations of VGa and leads to its negative-Ueff character. Both features are reflected in the magnetic properties of vacancy pairs. Because of the electron transfer between two vacancies induced by the negative Ueff, the two vacancies can be in different charge and spin states, and thus the spin ground state of a pair can be ferrimagnetic rather than ferro- or antiferromagnetic. The magnetic coupling of VGa-VGa pairs was calculated as a function of the separation between the defects, their relative orientation, and the charge state. The strength of magnetic coupling is reduced by the U-induced localization of the wave functions. The obtained results show that gallium vacancies can lead to the observed ferromagnetism in irradiated GaN samples.
Improvements in the sensibility of MSA-GA tool using COFFEE objective function
NASA Astrophysics Data System (ADS)
Amorim, A. R.; Zafalon, G. F. D.; Neves, L. A.; Pinto, A. R.; Valêncio, C. R.; Machado, J. M.
2015-01-01
Sequence alignment is one of the most important tasks in Bioinformatics, playing an important role in sequence analysis. There are many strategies to perform sequence alignment, ranging from deterministic algorithms, such as dynamic programming, to heuristic approaches, such as progressive alignment, Ant Colony Optimization (ACO), Genetic Algorithms (GA), and Simulated Annealing (SA), among others. In this work, we have implemented the COFFEE objective function in the MSA-GA tool, in substitution of the Weighted Sum-of-Pairs (WSP), to improve the final results. In our tests, the approach using the COFFEE function achieved better results in 81% of the lower-similarity alignments when compared with the WSP approach. Moreover, even in the tests with more similar sets, the approach using COFFEE was better 43% of the time.
Brasier, Martin D.; Antcliffe, Jonathan; Saunders, Martin; Wacey, David
2015-01-01
New analytical approaches and discoveries are demanding fresh thinking about the early fossil record. The 1.88-Ga Gunflint chert provides an important benchmark for the analysis of early fossil preservation. High-resolution analysis of Gunflintia shows that microtaphonomy can help to resolve long-standing paleobiological questions. Novel 3D nanoscale reconstructions of the most ancient complex fossil Eosphaera reveal features hitherto unmatched in any crown-group microbe. While Eosphaera may preserve a symbiotic consortium, a stronger conclusion is that multicellular morphospace was differently occupied in the Paleoproterozoic. The 3.46-Ga Apex chert provides a test bed for claims of biogenicity of cell-like structures. Mapping plus focused ion beam milling combined with transmission electron microscopy data demonstrate that microfossil-like taxa, including species of Archaeoscillatoriopsis and Primaevifilum, are pseudofossils formed from vermiform phyllosilicate grains during hydrothermal alteration events. The 3.43-Ga Strelley Pool Formation shows that plausible early fossil candidates are turning up in unexpected environmental settings. Our data reveal how cellular clusters of unexpectedly large coccoids and tubular sheath-like envelopes were trapped between sand grains and entombed within coatings of dripstone beach-rock silica cement. These fossils come from Earth’s earliest known intertidal to supratidal shoreline deposit, accumulated under aerated but oxygen poor conditions. PMID:25901305
A new damping factor algorithm based on line search of the local minimum point for inverse approach
NASA Astrophysics Data System (ADS)
Zhang, Yaqi; Liu, Weijie; Lu, Fang; Zhang, Xiangkui; Hu, Ping
2013-05-01
The influence of the damping factor on the convergence and computational efficiency of the inverse approach was studied through a series of practical examples. A new selection algorithm for the damping (relaxation) factor, which takes into account both robustness and calculation efficiency, is proposed; a computer program is then implemented and tested on Siemens PLM NX | One-Step. The result is compared with the traditional Armijo rule through six examples, such as a U-beam, a square box and a cylindrical cup, confirming the effectiveness of the proposed algorithm.
New Approach for IIR Adaptive Lattice Filter Structure Using Simultaneous Perturbation Algorithm
NASA Astrophysics Data System (ADS)
Martinez, Jorge Ivan Medina; Nakano, Kazushi; Higuchi, Kohji
Adaptive infinite impulse response (IIR), or recursive, filters are less attractive mainly because of stability issues and the difficulties associated with their adaptive algorithms. Therefore, in this paper adaptive IIR lattice filters are studied in order to devise algorithms that preserve the stability of the corresponding direct-form schemes. We analyze the local properties of stationary points and suggest a transformation achieving this goal, which yields algorithms that can be efficiently implemented. Application to the Steiglitz-McBride (SM) and Simple Hyperstable Adaptive Recursive Filter (SHARF) algorithms is presented. A modified version of Simultaneous Perturbation Stochastic Approximation (SPSA) is also presented in order to obtain the coefficients in lattice form more efficiently and with lower computational cost and complexity. The results are compared with previous lattice versions of these algorithms, which may fail to preserve the stability of stationary points.
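The core SPSA idea referenced above is that one gradient estimate costs only two loss evaluations, no matter how many coefficients there are: perturb all of them simultaneously with random ±1 signs. The sketch below is a generic SPSA step, not the paper's modified version, and the gain values are illustrative.

```python
import random

def spsa_step(loss, theta, a=0.1, c=0.1, rng=random):
    # One SPSA iteration: simultaneous +/-1 perturbation of every
    # coefficient, then a two-sided finite-difference gradient estimate.
    delta = [rng.choice([-1.0, 1.0]) for _ in theta]
    lp = loss([t + c * d for t, d in zip(theta, delta)])
    lm = loss([t - c * d for t, d in zip(theta, delta)])
    ghat = [(lp - lm) / (2 * c * d) for d in delta]
    return [t - a * g for t, g in zip(theta, ghat)]
```

In practice, decaying gain sequences a_k and c_k are used for convergence guarantees; for lattice coefficients, the attraction is that only filter-output evaluations are needed, with no analytic gradient of the recursive structure.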
ERIC Educational Resources Information Center
Uno, Mariko
2016-01-01
This study investigates the emergence and development of the discourse-pragmatic functions of the Japanese subject markers "wa" and "ga" from a usage-based perspective (Tomasello, 2000). The use of each marker in longitudinal speech data for four Japanese children from 1;0 to 3;1 and their parents available in the CHILDES…
A novel evolutionary approach for optimizing content-based image indexing algorithms.
Saadatmand-Tarzjan, Mahdi; Moghaddam, Hamid Abrishami
2007-02-01
Optimization of content-based image indexing and retrieval (CBIR) algorithms is a complicated and time-consuming task since each time a parameter of the indexing algorithm is changed, all images in the database should be indexed again. In this paper, a novel evolutionary method called evolutionary group algorithm (EGA) is proposed for complicated time-consuming optimization problems such as finding optimal parameters of content-based image indexing algorithms. In the new evolutionary algorithm, the image database is partitioned into several smaller subsets, and each subset is used by an updating process as training patterns for each chromosome during evolution. This is in contrast to genetic algorithms that use the whole database as training patterns for evolution. Additionally, for each chromosome, a parameter called age is defined that implies the progress of the updating process. Similarly, the genes of the proposed chromosomes are divided into two categories: evolutionary genes that participate to evolution and history genes that save previous states of the updating process. Furthermore, a new fitness function is defined which evaluates the fitness of the chromosomes of the current population with different ages in each generation. We used EGA to optimize the quantization thresholds of the wavelet-correlogram algorithm for CBIR. The optimal quantization thresholds computed by EGA improved significantly all the evaluation measures including average precision, average weighted precision, average recall, and average rank for the wavelet-correlogram method.
A Study of Penalty Function Methods for Constraint Handling with Genetic Algorithm
NASA Technical Reports Server (NTRS)
Ortiz, Francisco
2004-01-01
COMETBOARDS (Comparative Evaluation Testbed of Optimization and Analysis Routines for Design of Structures) is a design optimization test bed that can evaluate the performance of several different optimization algorithms. A few of these optimization algorithms are the sequence of unconstrained minimization techniques (SUMT), sequential linear programming (SLP) and sequential quadratic programming (SQP). A genetic algorithm (GA) is a search technique based on the principles of natural selection, or "survival of the fittest". Instead of using gradient information, the GA uses the objective function directly in the search. The GA searches the solution space by maintaining a population of potential solutions. Then, using evolutionary operations such as recombination, mutation and selection, the GA creates successive generations of solutions that evolve and take on the positive characteristics of their parents, and thus gradually approach optimal or near-optimal solutions. By using the objective function directly in the search, genetic algorithms can be effectively applied to non-convex, highly nonlinear, complex problems. The genetic algorithm is not guaranteed to find the global optimum, but it is less likely to get trapped at a local optimum than traditional gradient-based search methods when the objective function is not smooth and generally well behaved. The purpose of this research is to assist in the integration of a genetic algorithm (GA) into COMETBOARDS. COMETBOARDS casts the design of structures as a constrained nonlinear optimization problem. One method of solving a constrained optimization problem with a GA is to convert it into an unconstrained optimization problem by developing a penalty function that penalizes infeasible solutions. Several penalty functions have been suggested in the literature, each with its own strengths and weaknesses. A statistical analysis of some suggested penalty functions
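The constrained-to-unconstrained conversion described above can be sketched with an exterior quadratic penalty. The toy objective, constraint, and penalty weight below are illustrative assumptions, not COMETBOARDS specifics.

```python
def penalized_objective(f, constraints, r):
    # Exterior quadratic penalty for minimization: constraints are written
    # g_i(x) <= 0, so any positive g_i(x) is a violation that is squared,
    # weighted by r, and added to the objective.
    def F(x):
        violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
        return f(x) + r * violation
    return F

# Toy problem: minimize f(x) = x^2 subject to x >= 1, i.e. g(x) = 1 - x <= 0.
f = lambda x: x * x
g = lambda x: 1.0 - x
F = penalized_objective(f, [g], r=100.0)
```

A GA can then minimize F directly: feasible designs are scored by f alone, while infeasible ones are pushed back toward the feasible region. The choice of r embodies the strengths-and-weaknesses trade-off the abstract mentions (too small lets violations persist; too large distorts the search).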
Ahirwal, M K; Kumar, Anil; Singh, G K
2013-01-01
This paper explores the migration of adaptive filtering toward swarm intelligence/evolutionary techniques in the field of electroencephalogram (EEG)/event-related potential (ERP) noise cancellation or extraction. A new approach is proposed in the form of a controlled search space to stabilize the randomness of swarm intelligence techniques, especially for the EEG signal. Swarm-based algorithms such as Particle Swarm Optimization, Artificial Bee Colony, and the Cuckoo Optimization Algorithm, together with their variants, are implemented to design an optimized adaptive noise canceler. The proposed controlled search space technique is tested on each of the swarm intelligence techniques and is found to be more accurate and powerful. Adaptive noise cancelers based on traditional algorithms such as the least-mean-square, normalized least-mean-square, and recursive least-mean-square algorithms are also implemented for comparison. ERP signals such as simulated visual evoked potential, real visual evoked potential, and real sensorimotor evoked potential are used, owing to their physiological importance in various EEG studies. The average computational time and shape measure of the evolutionary techniques are observed to be 8.21E-01 s and 1.73E-01, respectively. The traditional algorithms take negligible time but offer poorer shape preservation of the ERP, with an average computational time of 1.41E-02 s and a shape-measure difference of 2.60E+00, respectively.
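The traditional baseline named above, the least-mean-square adaptive noise canceler, can be sketched compactly. This is a generic textbook LMS canceler, not the paper's implementation; the test signal, filter length, and step size are all invented for illustration.

```python
import math
import random

def lms_anc(primary, reference, n_taps=8, mu=0.01):
    """Least-mean-square (LMS) adaptive noise canceler: an FIR filter
    learns to predict, from the correlated reference input, the noise
    component of the primary input; the residual error is the cleaned
    signal."""
    w = [0.0] * n_taps
    buf = [0.0] * n_taps
    cleaned = []
    for d, x in zip(primary, reference):
        buf = [x] + buf[:-1]                              # shift in new reference sample
        y = sum(wi * xi for wi, xi in zip(w, buf))        # noise estimate
        e = d - y                                         # error = signal estimate
        w = [wi + 2 * mu * e * xi for wi, xi in zip(w, buf)]  # LMS weight update
        cleaned.append(e)
    return cleaned

# Hypothetical test signal: a slow "ERP-like" sinusoid buried in filtered
# white noise; the raw white noise serves as the reference input.
rng = random.Random(0)
n = 2000
noise = [rng.gauss(0, 1) for _ in range(n)]
signal = [math.sin(2 * math.pi * 5 * t / 500) for t in range(n)]
primary = [signal[t] + 0.8 * noise[t] + (0.4 * noise[t - 1] if t else 0.0)
           for t in range(n)]
cleaned = lms_anc(primary, noise)
```

Since the sinusoid is uncorrelated with the reference noise, the filter converges toward the noise-shaping coefficients and the desired component survives in the error output.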
Efficiently Hiding Sensitive Itemsets with Transaction Deletion Based on Genetic Algorithms
Zhang, Binbin; Yang, Kuo-Tung; Hong, Tzung-Pei
2014-01-01
Data mining is used to mine meaningful and useful information or knowledge from a very large database. Some secure or private information can be discovered by data mining techniques, resulting in an inherent risk of threats to privacy. Privacy-preserving data mining (PPDM) has thus arisen in recent years to sanitize the original database for hiding sensitive information, a sanitization process that can be regarded as an NP-hard problem. In this paper, a compact prelarge GA-based algorithm (cpGA2DT) is proposed that deletes transactions to hide sensitive itemsets. It addresses the limitations of the evolutionary process by adopting both the compact GA-based (cGA) mechanism and the prelarge concept. A flexible fitness function with three adjustable weights is designed to find the appropriate transactions to delete in order to hide sensitive itemsets with minimal side effects of hiding failure, missing cost, and artificial cost. Experiments are conducted to show the performance of the proposed cpGA2DT algorithm compared to the simple GA-based algorithm (sGA2DT) and the greedy approach in terms of execution time and the three side effects. PMID:25254239
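The compact GA (cGA) mechanism mentioned above replaces a full population with a probability vector that is nudged toward each tournament winner. The following is a generic textbook cGA on a stand-in OneMax fitness; the paper's cpGA2DT additionally uses the prelarge concept and a transaction-deletion fitness, neither of which is reproduced here.

```python
import random

def compact_ga(fitness, n_bits, pop_size=50, max_iters=5000, seed=3):
    """Compact GA: evolve a probability vector instead of a population,
    moving each entry by 1/pop_size toward the tournament winner."""
    rng = random.Random(seed)
    p = [0.5] * n_bits
    for _ in range(max_iters):
        a = [1 if rng.random() < pi else 0 for pi in p]   # sample two individuals
        b = [1 if rng.random() < pi else 0 for pi in p]
        winner, loser = (a, b) if fitness(a) >= fitness(b) else (b, a)
        step = 1.0 / pop_size
        for i in range(n_bits):
            if winner[i] != loser[i]:                     # update only disputed bits
                p[i] = min(1.0, p[i] + step) if winner[i] else max(0.0, p[i] - step)
        if all(pi in (0.0, 1.0) for pi in p):             # fully converged vector
            break
    return [1 if pi > 0.5 else 0 for pi in p]

# OneMax (count of 1-bits) as a stand-in for the paper's fitness function.
best = compact_ga(sum, n_bits=20)
```

The memory footprint is one float per bit rather than a whole population, which is the point of the "compact" variant.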
2012-01-01
RA is a syndrome consisting of different pathogenetic subsets in which distinct molecular mechanisms may drive common final pathways. Recent work has provided proof of principle that biomarkers predictive of the response to targeted therapy may be identified. Based on these new insights, an initial treatment algorithm is presented that may be used to guide treatment decisions in patients who have failed one TNF inhibitor. Key questions in this algorithm are whether the patient is a primary or a secondary non-responder to TNF blockade, and whether the patient is RF and/or anti-citrullinated peptide antibody positive. This preliminary algorithm may contribute to more cost-effective treatment of RA, and provides the basis for more extensive algorithms when additional data become available. PMID:21890615
Asgari, Mohammad; Soltani, Nasim Yahya; Riahi, Ali
2010-01-01
There is a variety of wideband direction-of-arrival (DOA) estimation algorithms. Their structure comprises a number of narrowband estimators, each operating at one frequency within a given bandwidth, whose responses must then be combined in a proper way to yield the true DOAs. Hence, wideband algorithms are complex and therefore not real-time. This paper investigates a method to derive a flat response of the narrowband multiple signal classification (MUSIC) [R. O. Schmidt, IEEE Trans. Antennas Propag., 34, 276-280 (1986)] algorithm across the whole given band. The conditions required to apply a narrowband algorithm to wideband impinging signals are derived through a concrete analysis. It is found that the array sensor locations can compensate for the frequency variations so as to reach a flat DOA response over a specified wideband frequency range. PMID:20058975
Symbolic integration of a class of algebraic functions. [by an algorithmic approach
NASA Technical Reports Server (NTRS)
Ng, E. W.
1974-01-01
An algorithm is presented for the symbolic integration of a class of algebraic functions. This class consists of functions made up of rational expressions of an integration variable x and square roots of polynomials, trigonometric and hyperbolic functions of x. The algorithm is shown to consist of the following components: (1) the reduction of input integrands to canonical form; (2) intermediate internal representations of integrals; (3) classification of outputs; and (4) reduction and simplification of outputs to well-known functions.
A new approach to optic disc detection in human retinal images using the firefly algorithm.
Rahebi, Javad; Hardalaç, Fırat
2016-03-01
There are various methods and algorithms to detect the optic disc in retinal images. In recent years, much attention has been given to the use of intelligent algorithms. In this paper, we present a new automated method for optic disc detection in human retinal images using the firefly algorithm. The firefly algorithm is an emerging intelligent algorithm inspired by the social behavior of fireflies. The population in this algorithm consists of fireflies, each of which has a specific lighting rate, or fitness. In this method, the insects are compared pairwise, and the less attractive insects move toward the more attractive ones. Finally, the most attractive insect is selected, and this insect represents the optimum response to the problem in question. Here, we use the light intensity of the retinal image pixels in place of the firefly lightings. The movement of these insects due to local fluctuations produces different light intensity values in the images. Because the optic disc is the brightest area in a retinal image, all of the insects move toward the brightest area and thus locate the optic disc in the image. The results of implementation show that the proposed algorithm achieves an accuracy rate of 100 % on the DRIVE dataset, 95 % on the STARE dataset, and 94.38 % on the DiaRetDB1 dataset. These results reveal the high capability and accuracy of the proposed algorithm in detecting the optic disc in retinal images. The average time required to detect the optic disc is 2.13 s for the DRIVE dataset, 2.81 s for the STARE dataset, and 3.52 s for the DiaRetDB1 dataset.
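The pairwise attraction scheme described above can be sketched as a toy. Everything below is hypothetical, not the paper's implementation: fireflies are 2-D pixel coordinates on a tiny synthetic "retina", dimmer flies step halfway toward brighter ones with random jitter, and the brightest position ever visited is returned.

```python
import random

def firefly_brightest(image, n_fireflies=15, n_iters=80, seed=7):
    """Firefly-style search for the brightest image location: dimmer
    fireflies are attracted toward brighter ones, every firefly keeps
    a random jitter, and the brightest visited position is returned."""
    rng = random.Random(seed)
    h, w = len(image), len(image[0])
    clamp = lambda v, hi: max(0, min(hi - 1, int(round(v))))
    pix = lambda f: image[clamp(f[0], h)][clamp(f[1], w)]   # firefly brightness
    flies = [[rng.uniform(0, h - 1), rng.uniform(0, w - 1)]
             for _ in range(n_fireflies)]
    best = max(flies, key=pix)[:]
    for _ in range(n_iters):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if pix(flies[j]) > pix(flies[i]):
                    for k in range(2):       # step toward the brighter firefly
                        flies[i][k] += 0.5 * (flies[j][k] - flies[i][k])
            for k in range(2):               # jitter keeps exploration alive
                flies[i][k] += rng.gauss(0, 1.0)
            if pix(flies[i]) > pix(best):
                best = flies[i][:]
    return clamp(best[0], h), clamp(best[1], w)

# Toy 20x20 "retina": a bright optic-disc-like blob centred at (5, 12).
img = [[255 if (r - 5) ** 2 + (c - 12) ** 2 <= 9 else 40 for c in range(20)]
       for r in range(20)]
row, col = firefly_brightest(img)
```

Once any firefly lands in the bright region, the attraction term pulls the rest of the swarm onto it, mirroring the paper's intuition that the swarm converges on the optic disc.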
Roche-Lima, Abiel; Thulasiram, Ruppa K.
2016-01-01
Finite automata in which each transition is augmented with an output label in addition to the familiar input label are called finite-state transducers. Transducers have been used to analyze some fundamental issues in bioinformatics. Weighted finite-state transducers have been proposed for pairwise alignments of DNA and protein sequences, as well as for developing kernels for computational biology. Machine learning algorithms for conditional transducers have been implemented and used for DNA sequence analysis. Transducer learning algorithms are based on conditional probability computation, calculated using techniques such as pair-database creation, normalization (with maximum-likelihood normalization), and parameter optimization (with Expectation-Maximization - EM). These techniques are intrinsically costly to compute, all the more so when applied to bioinformatics, because the database sizes are large. In this work, we describe a parallel implementation of an algorithm that learns conditional transducers using these techniques. The algorithm is oriented to bioinformatics applications such as alignments, phylogenetic trees, and other genome evolution studies. Several experiments were run with the parallel and sequential algorithms on WestGrid (specifically, on the Breezy cluster). The results show that our parallel algorithm is scalable, as execution times are reduced considerably relative to the sequential version when the data size parameter is increased. In another experiment, the precision parameter was varied; here, too, the parallel algorithm yields smaller execution times. Finally, the number of threads used to execute the parallel algorithm on the Breezy cluster was varied. In this last experiment, the speedup increases considerably as more threads are used; however, it converges for 16 or more threads.
Minority carrier properties of carbon-doped GaInAsN bipolar transistors
NASA Astrophysics Data System (ADS)
Welser, R. E.; Setzko, R. S.; Stevens, K. S.; Rehder, E. M.; Lutz, C. R.; Hill, D. S.; Zampardi, P. J.
2004-08-01
We have developed an InGaP/GaInAsN/GaAs double heterojunction bipolar transistor technology that substantially improves upon existing GaAs-based HBTs. Band-gap engineering with dilute nitride GaInAsN alloys is utilized to enhance a variety of key device characteristics, including lower operating voltages, improved temperature stability and increased RF performance. Furthermore, GaInAsN-based HBTs are fully compatible with existing high-volume MOVPE and IC fabrication processes. While poor lifetimes have limited the applicability of dilute nitride materials in photovoltaic applications, we achieve minority carrier characteristics that approach those of conventional GaAs HBTs. We have found that a combination of growth algorithm optimization and compositional grading is critical for improving minority carrier properties in GaInAsN. In this work, we characterize the impact of both carbon and nitrogen doping on minority carrier lifetimes in GaInAsN base layers. Minority carrier lifetimes are extracted from direct measurements on bipolar transistor device structures. Specifically, lifetime is derived from the DC current gain, or β, taken in the bias regime dominated by neutral base recombination. Lifetimes extracted using this technique are observed to be inversely proportional to both carbon and nitrogen doping. As with conventional C-doped GaAs HBTs, current soaking (i.e. burn-in) is found to have a significant impact on GaInAsN HBTs. While we can replicate poor as-grown lifetimes consistent with those reported in photovoltaic dilute nitride materials, our best material to date exhibits nearly 30 × higher lifetime after current soaking.
NASA Astrophysics Data System (ADS)
Dalzell, B. J.; Gassman, P. W.; Kling, C.
2015-12-01
In the Minnesota River Basin, sediments originating from failing stream banks and bluffs account for the majority of the riverine load and contribute to water quality impairments in the Minnesota River as well as portions of the Mississippi River upstream of Lake Pepin. One approach for mitigating this problem may be targeted wetland restoration in Minnesota River Basin tributaries in order to reduce the magnitude and duration of peak flow events which contribute to bluff and stream bank failures. In order to determine effective arrangements and properties of wetlands to achieve peak flow reduction, we are employing a genetic algorithm approach coupled with a SWAT model of the Cottonwood River, a tributary of the Minnesota River. The genetic algorithm approach will evaluate combinations of basic wetland features as represented by SWAT: surface area, volume, contributing area, and hydraulic conductivity of the wetland bottom. These wetland parameters will be weighed against economic considerations associated with land use trade-offs in this agriculturally productive landscape. Preliminary results show that the SWAT model is capable of simulating daily hydrology very well and genetic algorithm evaluation of wetland scenarios is ongoing. Anticipated results will include (1) combinations of wetland parameters that are most effective for reducing peak flows, and (2) evaluation of economic trade-offs between wetland restoration, water quality, and agricultural productivity in the Cottonwood River watershed.
NASA Technical Reports Server (NTRS)
Idris, Husni; Vivona, Robert A.; Al-Wakil, Tarek
2009-01-01
This document describes exploratory research on a distributed, trajectory oriented approach for traffic complexity management. The approach is to manage traffic complexity based on preserving trajectory flexibility and minimizing constraints. In particular, the document presents metrics for trajectory flexibility; a method for estimating these metrics based on discrete time and degree of freedom assumptions; a planning algorithm using these metrics to preserve flexibility; and preliminary experiments testing the impact of preserving trajectory flexibility on traffic complexity. The document also describes an early demonstration capability of the trajectory flexibility preservation function in the NASA Autonomous Operations Planner (AOP) platform.
NASA Technical Reports Server (NTRS)
Hu, Chuanmin; Lee, Zhongping; Franz, Bryan
2011-01-01
A new empirical algorithm is proposed to estimate surface chlorophyll-a concentrations (Chl) in the global ocean for Chl less than or equal to 0.25 milligrams per cubic meter (approximately 77% of the global ocean area). The algorithm is based on a color index (CI), defined as the difference between the remote sensing reflectance (R_rs, in sr^-1) in the green and a reference formed linearly between R_rs in the blue and red. For low-Chl waters, in situ data showed a tighter (and therefore better) relationship between CI and Chl than between traditional band ratios and Chl, which was further validated using global data collected concurrently by ship-borne and SeaWiFS satellite instruments. Model simulations showed that, for low-Chl waters, the CI-based algorithm (CIA) was more tolerant than the band-ratio algorithm to changes in the chlorophyll-specific backscattering coefficient, and performed similarly for different relative contributions of non-phytoplankton absorption. Simulations using existing atmospheric correction approaches further demonstrated that the CIA was much less sensitive than band-ratio algorithms to various errors induced by instrument noise and imperfect atmospheric correction (including sun glint and whitecap corrections). Image and time-series analyses of SeaWiFS and MODIS/Aqua data also showed improved performance in terms of reduced image noise, more coherent spatial and temporal patterns, and consistency between the two sensors. The reduction in noise and other errors is particularly useful for improving the detection of various ocean features such as eddies. Preliminary tests on MERIS and CZCS data indicate that the new approach should be generally applicable to all existing and future ocean color instruments.
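The color index defined above (green reflectance minus a linear blue-to-red baseline) is a one-line formula. The sketch below assumes SeaWiFS-like band centres at 443, 555, and 670 nm; the subsequent empirical mapping from CI to Chl uses fitted coefficients that are not reproduced here.

```python
def color_index(rrs_blue, rrs_green, rrs_red,
                lam_blue=443.0, lam_green=555.0, lam_red=670.0):
    """Color index (CI): the departure of the green-band Rrs (sr^-1)
    from a straight baseline drawn between the blue and red bands.
    Band centres (nm) are assumed SeaWiFS-like values."""
    baseline = rrs_blue + (lam_green - lam_blue) / (lam_red - lam_blue) \
        * (rrs_red - rrs_blue)
    return rrs_green - baseline

# When the blue and red reflectances are equal the baseline is flat,
# so CI reduces to the green-minus-blue difference.
ci = color_index(0.002, 0.004, 0.002)
```

Because CI is a band difference rather than a band ratio, spectrally flat errors (e.g. residual glint) largely cancel, which is the tolerance property the abstract reports.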
Scheduling Earth Observing Fleets Using Evolutionary Algorithms: Problem Description and Approach
NASA Technical Reports Server (NTRS)
Globus, Al; Crawford, James; Lohn, Jason; Morris, Robert; Clancy, Daniel (Technical Monitor)
2002-01-01
We describe work in progress concerning multi-instrument, multi-satellite scheduling. Most, although not all, Earth observing instruments currently in orbit are unique. In the relatively near future, however, we expect to see fleets of Earth observing spacecraft, many carrying nearly identical instruments. This presents a substantially new scheduling challenge. Inspired by successful commercial applications of evolutionary algorithms in scheduling domains, this paper presents work in progress regarding the use of evolutionary algorithms to solve a set of Earth observing related model problems. Both the model problems and the software are described. Since the larger problems will require substantial computation and evolutionary algorithms are embarrassingly parallel, we discuss our parallelization techniques using dedicated and cycle-scavenged workstations.
Immune allied genetic algorithm for Bayesian network structure learning
NASA Astrophysics Data System (ADS)
Song, Qin; Lin, Feng; Sun, Wei; Chang, KC
2012-06-01
Bayesian network (BN) structure learning is an NP-hard problem. In this paper, we present an improved approach that enhances the efficiency of BN structure learning. To avoid the premature convergence of the traditional single-population genetic algorithm (GA), we propose an immune allied genetic algorithm (IAGA) in which a multiple-population allied strategy is introduced. Moreover, we apply prior knowledge by injecting an immune operator into individuals, which effectively prevents degeneration. To illustrate the effectiveness of the proposed technique, we present experimental results.
NASA Astrophysics Data System (ADS)
Meyer, Ulrich; Negoescu, Andrei; Weichert, Volker
Despite disillusioning worst-case behavior, classic algorithms for single-source shortest paths (SSSP) like Bellman-Ford are still being used in practice, especially due to their simple data structures. However, surprisingly little is known about the average-case complexity of these approaches. We provide new theoretical and experimental results for the performance of classic label-correcting SSSP algorithms on graph classes with non-negative random edge weights. In particular, we prove a tight lower bound of Ω(n^2) for the running time of Bellman-Ford on a class of sparse graphs with O(n) nodes and edges; the best previous bound was Ω(n^(4/3 - ε)). The same improvements are shown for Pallottino's algorithm. We also lift a lower bound for the approximate bucket implementation of Dijkstra's algorithm from Ω(n log n / log log n) to Ω(n^(1.2 - ε)). Furthermore, we provide an experimental evaluation of our new graph classes in comparison with previously used test inputs.
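For reference, the textbook Bellman-Ford that the above bounds concern looks as follows; the graph at the bottom is a made-up example with non-negative weights, matching the abstract's setting.

```python
def bellman_ford(n, edges, source):
    """Textbook Bellman-Ford: relax every edge up to n - 1 times,
    with an early exit once a full pass changes no label. Worst case
    O(n * m), which the lower bounds above show is already reached
    on certain sparse random-weight graphs."""
    dist = [float("inf")] * n
    dist[source] = 0.0
    for _ in range(n - 1):
        changed = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:      # relax edge (u, v)
                dist[v] = dist[u] + w
                changed = True
        if not changed:                    # labels settled: stop early
            break
    return dist

# Small hypothetical digraph: (tail, head, weight) triples.
edges = [(0, 1, 4.0), (0, 2, 1.0), (2, 1, 2.0), (1, 3, 1.0), (2, 3, 5.0)]
dist = bellman_ford(4, edges, 0)
```

The simplicity of the data structures (one distance array, one edge list) is exactly the practical appeal the abstract mentions.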
ERIC Educational Resources Information Center
Durnin, John H.; Scandura, Joseph M.
For individualized or computer assisted instruction, norm referenced testing is inadequate to determine each individual's mastery on specific kinds of tasks. Hively's item forms and Ferguson's stratified item forms, both based on observable characteristics of the problems, and Scandura's algorithmic technology, positing that persons use rules to…
Point process algorithm: a new Bayesian approach for TPF-I planet signal extraction
NASA Technical Reports Server (NTRS)
Velusamy, T.; Marsh, K. A.; Ware, B.
2005-01-01
TPF-I capability for planetary signal extraction, including both detection and spectral characterization, can be optimized by taking proper account of instrumental characteristics and astrophysical prior information. We have developed the Point Process Algorithm, a Bayesian technique for extracting planetary signals using the sine/cosine chopped outputs of a dual nulling interferometer.
Premaladha, J; Ravichandran, K S
2016-04-01
Dermoscopy is a technique used to capture images of the skin, and these images are useful for analyzing different types of skin diseases. Malignant melanoma is a kind of skin cancer whose severity can even lead to death. Early detection of melanoma prevents death, as clinicians can treat patients in time to increase the chances of survival. Only a few machine learning algorithms have been developed to detect melanoma from its features. This paper proposes a Computer Aided Diagnosis (CAD) system equipped with efficient algorithms to classify and predict melanoma. Enhancement of the images is done using the Contrast Limited Adaptive Histogram Equalization (CLAHE) technique and a median filter. A new segmentation algorithm called Normalized Otsu's Segmentation (NOS) is implemented to segment the affected skin lesion from the normal skin, which overcomes the problem of variable illumination. Fifteen features extracted from the segmented images are fed into the proposed classification techniques, namely Deep Learning based Neural Networks and hybrid AdaBoost-Support Vector Machine (SVM) algorithms. The proposed system is tested and validated with nearly 992 images (malignant & benign lesions) and provides a high classification accuracy of 93 %. The proposed CAD system can assist dermatologists in confirming the diagnosis and avoiding excisional biopsies. PMID:26872778
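The NOS method above builds on Otsu's classic threshold. As background, a plain Otsu implementation (not the paper's normalized variant) picks the gray level that maximizes between-class variance; the toy bimodal image is invented for illustration.

```python
def otsu_threshold(gray):
    """Classic Otsu threshold for a 0-255 grayscale image: choose the
    level that maximises the between-class variance of the two classes
    (e.g. lesion vs surrounding skin)."""
    hist = [0] * 256
    for row in gray:
        for v in row:
            hist[v] += 1
    total = sum(hist)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = cum = 0.0
    for t in range(256):
        w0 += hist[t]                       # pixels at or below t
        cum += t * hist[t]
        if w0 == 0 or w0 == total:
            continue
        w1 = total - w0
        mu0 = cum / w0                      # mean of the lower class
        mu1 = (total_sum - cum) / w1        # mean of the upper class
        between = w0 * w1 * (mu0 - mu1) ** 2
        if between > best_var:
            best_var, best_t = between, t
    return best_t

# Bimodal toy image: dark "lesion" values near 30, bright skin near 200.
img = [[30] * 8 + [200] * 8 for _ in range(16)]
t = otsu_threshold(img)
```

The paper's contribution is to normalize the image before this step so that variable illumination does not shift the optimum; that normalization is not reproduced here.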
A Low-Tech, Hands-On Approach To Teaching Sorting Algorithms to Working Students.
ERIC Educational Resources Information Center
Dios, R.; Geller, J.
1998-01-01
Focuses on identifying the educational effects of "activity oriented" instructional techniques. Examines which instructional methods produce enhanced learning and comprehension. Discusses the problem of learning "sorting algorithms," a major topic in every Computer Science curriculum. Presents a low-tech, hands-on teaching method for sorting…
Wang, Shuaiqun; Aorigele; Kong, Wei; Zeng, Weiming; Hong, Xiaomin
2016-01-01
Gene expression data composed of thousands of genes play an important role in classification platforms and disease diagnosis. Hence, it is vital to select a small subset of salient features from a large number of gene expression data. Lately, many researchers have devoted themselves to feature selection using diverse computational intelligence methods. However, in the process of selecting informative genes, many computational methods face difficulties in selecting small subsets for cancer classification, due to the huge number of genes (high dimension) compared to the small number of samples, noisy genes, and irrelevant genes. In this paper, we propose a new hybrid algorithm, HICATS, incorporating the imperialist competition algorithm (ICA), which performs a global search, and tabu search (TS), which conducts a fine-tuned search. In order to verify the performance of the proposed algorithm HICATS, we have tested it on 10 well-known benchmark gene expression classification datasets with dimensions varying from 2308 to 12600. The performance of our proposed method proved to be superior to other related works, including the conventional version of the binary optimization algorithm, in terms of classification accuracy and the number of selected genes. PMID:27579323
Adaptively resizing populations: Algorithm, analysis, and first results
NASA Technical Reports Server (NTRS)
Smith, Robert E.; Smuda, Ellen
1993-01-01
Deciding on an appropriate population size for a given Genetic Algorithm (GA) application can often be critical to the algorithm's success. Too small, and the GA can fall victim to sampling error, affecting the efficacy of its search. Too large, and the GA wastes computational resources. Although advice exists for sizing GA populations, much of it involves theoretical aspects that are not accessible to the novice user. An algorithm for adaptively resizing GA populations is suggested, based on recent theoretical developments that relate population size to schema fitness variance. The suggested algorithm is developed theoretically and simulated with expected-value equations. The algorithm is then tested on a problem where population sizing can mislead the GA. The work presented suggests that the population sizing algorithm may be a viable way to eliminate the population sizing decision from the application of GAs.
NASA Astrophysics Data System (ADS)
Abed, Azher M.; Abed, Issa Ahmed; Majdi, Hasan Sh.; Al-Shamani, Ali Najah; Sopian, K.
2016-02-01
This study proposes a new procedure for the optimal design of shell and tube heat exchangers. The electromagnetism-like algorithm is applied to save on heat exchanger capital cost and to design a compact, high-performance heat exchanger that makes effective use of the allowable pressure drop (cost of the pump). An optimization algorithm is then utilized to determine the optimal values of both the geometric design parameters and the maximum allowable pressure drop by minimizing a total cost function. A computer code is developed for the optimal design of shell and tube heat exchangers. Different test cases are solved to demonstrate the effectiveness and ability of the proposed algorithm. Results are also compared with those obtained by other approaches available in the literature. The comparisons indicate that the proposed design procedure can be successfully applied to the optimal design of shell and tube heat exchangers. In particular, in the examined cases, reductions in total cost of up to 30 %, 29 %, and 56.15 % compared with the original design, and up to 18 %, 5.5 %, and 7.4 % compared with other approaches, are observed for case studies 1, 2, and 3, respectively. The economic optimization achieved by the proposed design procedure is especially relevant when size/volume is critical for a high-performance, compact unit, or when moderate volume and cost are required.
Genetic Algorithm based Decentralized PI Type Controller: Load Frequency Control
NASA Astrophysics Data System (ADS)
Dwivedi, Atul; Ray, Goshaidas; Sharma, Arun Kumar
2016-12-01
This work presents a design of a decentralized PI-type Linear Quadratic (LQ) controller based on a genetic algorithm (GA). The proposed design technique allows considerable flexibility in defining the control objectives; it requires no knowledge of the system matrices and, moreover, avoids solving the algebraic Riccati equation. To illustrate the results of this work, a load-frequency control problem is considered. Simulation results reveal that the proposed GA-based scheme is an alternative and attractive approach to the load-frequency control problem from both performance and design points of view.
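The idea of tuning PI gains by simulation-based GA search, without Riccati equations or system matrices, can be sketched on a toy loop. Everything below is an illustrative stand-in: a first-order plant dx/dt = -x + u tracking a unit step, an integral-squared-error cost, and a small real-coded GA, none of which is the paper's LQ formulation or power-system model.

```python
import random

def ise_cost(kp, ki, dt=0.01, t_end=5.0):
    """Integral-squared-error of a PI loop closed around a toy
    first-order plant dx/dt = -x + u tracking a unit step."""
    x = integ = cost = 0.0
    for _ in range(int(t_end / dt)):
        e = 1.0 - x                        # tracking error
        integ += e * dt
        u = kp * e + ki * integ            # PI control law
        x += (-x + u) * dt                 # explicit Euler step of the plant
        cost += e * e * dt
        if abs(x) > 1e6:                   # bail out on unstable gain pairs
            return 1e12
    return cost

def ga_tune(n_gen=40, pop=30, seed=5):
    """Real-coded GA over (Kp, Ki): elitist truncation selection with
    blend crossover and Gaussian mutation."""
    rng = random.Random(seed)
    genomes = [[rng.uniform(0, 10), rng.uniform(0, 10)] for _ in range(pop)]
    for _ in range(n_gen):
        genomes.sort(key=lambda g: ise_cost(*g))
        elite = genomes[:pop // 2]
        children = [[(a[i] + b[i]) / 2 + rng.gauss(0, 0.3) for i in range(2)]
                    for a, b in (rng.sample(elite, 2) for _ in range(pop - len(elite)))]
        genomes = elite + children
    return min(genomes, key=lambda g: ise_cost(*g))

kp, ki = ga_tune()
```

The cost function only ever needs to simulate the closed loop, which is what lets the GA sidestep any analytical model knowledge.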
Ab-initio study of magnetic properties and phase transitions in Ga (Mn) N with Monte Carlo approach
NASA Astrophysics Data System (ADS)
Sbai, Y.; Ait Raiss, A.; Salmani, E.; Bahmad, L.; Benyoussef, A.
2015-12-01
On the basis of ab initio calculations and Monte Carlo simulations, the magnetic and electronic properties of gallium nitride (GaN) doped with the transition metal manganese (Mn) were studied. The ab initio calculations were performed using the AKAI-KKR-CPA method within the Local Density Approximation (LDA). We doped the diluted magnetic semiconductor (DMS) with different concentrations of Mn magnetic impurities and plotted the density of states (DOS) for each one. The DOS shows a half-metallic behavior and a ferromagnetic state, especially for Ga0.95Mn0.05N, making this DMS a strong candidate for spintronic applications. Moreover, the magnetization and susceptibility of the system as functions of temperature have been calculated for various system sizes L in order to study the size effect. In addition, the transition temperature was deduced from the peak of the susceptibility. The ab initio results are in good agreement with the literature, especially for the Mn concentration x=0.05, which gives the most interesting results.
NASA Technical Reports Server (NTRS)
Fitzjerrell, D. G.; Grounds, D. J.; Leonard, J. I.
1975-01-01
Using a whole body algorithm simulation model, a wide variety and large number of stresses as well as different stress levels were simulated including environmental disturbances, metabolic changes, and special experimental situations. Simulation of short term stresses resulted in simultaneous and integrated responses from the cardiovascular, respiratory, and thermoregulatory subsystems and the accuracy of a large number of responding variables was verified. The capability of simulating significantly longer responses was demonstrated by validating a four week bed rest study. In this case, the long term subsystem model was found to reproduce many experimentally observed changes in circulatory dynamics, body fluid-electrolyte regulation, and renal function. The value of systems analysis and the selected design approach for developing a whole body algorithm was demonstrated.
Actuator Placement Via Genetic Algorithm for Aircraft Morphing
NASA Technical Reports Server (NTRS)
Crossley, William A.; Cook, Andrea M.
2001-01-01
This research continued work that began under the support of NASA Grant NAG1-2119. The focus of this effort was to continue investigating Genetic Algorithm (GA) approaches that could be used to solve an actuator placement problem by treating it as a discrete optimization problem. In these efforts, the actuators are assumed to be "smart" devices that change the aerodynamic shape of an aircraft wing to alter the flow past the wing and, as a result, provide aerodynamic moments that could provide flight control. The earlier work investigated issues with the problem statement, developed the appropriate actuator modeling, recognized the importance of symmetry for this problem, modified the aerodynamic analysis routine for more efficient use with the genetic algorithm, and began a problem size study to measure the impact of increasing problem complexity. The research discussed in this final summary further refined the problem statement to provide a "combined moment" formulation that simultaneously addresses roll, pitch, and yaw. Investigations of problem size using this new problem statement provided insight into the performance of the GA as the number of possible actuator locations increased. Where previous investigations utilized a simple wing model to develop the GA approach for actuator placement, this research culminated with the application of the GA approach to a high-altitude unmanned aerial vehicle concept, demonstrating that the approach is valid for an aircraft configuration.
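Treating placement as a discrete optimization problem typically means a binary genome, one bit per candidate site. The sketch below is a generic binary GA on an invented per-site "moment contribution" vector with a penalty on using too many actuators; it is not the paper's aerodynamic model or problem statement.

```python
import random

def placement_ga(effect, n_sites, max_active=4, n_gen=120, pop=40, seed=2):
    """Binary-encoded GA for discrete actuator placement: genome bit i
    switches the actuator at site i on; fitness is the total modelled
    moment, with layouts using more than max_active sites penalised."""
    rng = random.Random(seed)

    def fitness(g):
        score = sum(e for bit, e in zip(g, effect) if bit)
        excess = max(0, sum(g) - max_active)
        return score - 10.0 * excess          # penalty for extra actuators

    popn = [[rng.randint(0, 1) for _ in range(n_sites)] for _ in range(pop)]
    for _ in range(n_gen):
        popn.sort(key=fitness, reverse=True)
        elite = popn[:pop // 2]
        children = []
        while len(children) < pop - len(elite):
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_sites)   # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(n_sites)] ^= 1  # single-bit mutation
            children.append(child)
        popn = elite + children
    return max(popn, key=fitness)

# Hypothetical per-site moment contributions for 12 candidate sites.
effect = [0.3, 1.2, 0.1, 0.9, 0.05, 1.5, 0.2, 0.8, 0.4, 1.1, 0.6, 0.7]
best = placement_ga(effect, n_sites=12)
```

In the real problem the fitness would come from an aerodynamic analysis per candidate layout, which is why GA efficiency mattered so much in the problem-size studies above.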
Alshamlan, Hala M; Badr, Ghada H; Alohali, Yousef A
2015-06-01
Naturally inspired evolutionary algorithms prove effective when used for solving feature selection and classification problems. Artificial Bee Colony (ABC) is a relatively new swarm intelligence method. In this paper, we propose a new hybrid gene selection method, namely the Genetic Bee Colony (GBC) algorithm. The proposed algorithm combines the use of a Genetic Algorithm (GA) with the Artificial Bee Colony (ABC) algorithm. The goal is to integrate the advantages of both algorithms. The proposed algorithm is applied to a microarray gene expression profile in order to select the most predictive and informative genes for cancer classification. In order to test the accuracy performance of the proposed algorithm, extensive experiments were conducted. Three binary microarray datasets are used: colon, leukemia, and lung. In addition, three multi-class microarray datasets are used: SRBCT, lymphoma, and leukemia. Results of the GBC algorithm are compared with our recently proposed technique, mRMR combined with the Artificial Bee Colony algorithm (mRMR-ABC). We also compared the combinations of mRMR with GA (mRMR-GA) and with Particle Swarm Optimization (mRMR-PSO). In addition, we compared the GBC algorithm with other related algorithms recently published in the literature, using all benchmark datasets. The GBC algorithm shows superior performance, achieving the highest classification accuracy along with the lowest average number of selected genes. This proves that the GBC algorithm is a promising approach for solving the gene selection problem in both binary and multi-class cancer classification.
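The hybrid GA/ABC loop can be sketched as follows. Everything here is an illustrative assumption: the toy fitness rewards covering a known set of "informative" genes, whereas the actual GBC method scores gene subsets by classifier accuracy on microarray data, and its operators and parameters differ.

```python
import random

random.seed(1)

N_GENES = 20
TARGET = {2, 5, 11}                 # toy "informative" genes (assumption)

def fitness(sub):
    # Reward covering informative genes, penalise subset size.
    return len(TARGET & sub) - 0.05 * len(sub)

def crossover(a, b):                # uniform crossover on gene membership
    return {g for g in range(N_GENES)
            if (g in a if random.random() < 0.5 else g in b)}

def mutate(sub, rate=0.05):         # bit-flip mutation
    return {g for g in range(N_GENES)
            if (g in sub) != (random.random() < rate)}

def bee_neighbour(sub):             # ABC-style local move: toggle one gene
    return sub ^ {random.randrange(N_GENES)}

def gbc(pop_size=30, iters=200):
    pop = [{g for g in range(N_GENES) if random.random() < 0.3}
           for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(iters):
        # GA phase: tournament selection, crossover, mutation.
        parents = [max(random.sample(pop, 3), key=fitness)
                   for _ in range(2 * pop_size)]
        pop = [mutate(crossover(parents[2 * i], parents[2 * i + 1]))
               for i in range(pop_size)]
        # ABC (employed-bee) phase: keep a neighbour only if it improves.
        pop = [max(s, bee_neighbour(s), key=fitness) for s in pop]
        best = max(pop + [best], key=fitness)
    return best

selected = gbc()
print(sorted(selected))
```

The two phases play the roles described in the abstract: the GA phase recombines whole subsets globally, while the bee phase performs greedy local refinement around each solution.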
Ganzetti, Marco; Wenderoth, Nicole; Mantini, Dante
2016-01-01
Intensity non-uniformity (INU) in magnetic resonance (MR) imaging is a major issue when conducting analyses of brain structural properties. An inaccurate INU correction may result in qualitative and quantitative misinterpretations. Several INU correction methods exist, whose performance largely depends on the specific parameter settings that need to be chosen by the user. Here we addressed the question of how to select the best input parameters for a specific INU correction algorithm. Our investigation was based on the INU correction algorithm implemented in SPM, but it can in principle be extended to any other algorithm requiring the selection of input parameters. We conducted a comprehensive comparison of indirect metrics for the assessment of INU correction performance, namely the coefficient of variation of white matter (CVWM), the coefficient of variation of gray matter (CVGM), and the coefficient of joint variation between white matter and gray matter (CJV). Using simulated MR data, we observed the CJV to be more accurate than CVWM and CVGM, provided that the noise level in the INU-corrected image was controlled by means of spatial smoothing. Based on the CJV, we developed a data-driven approach for selecting INU correction parameters, which could effectively work on actual MR images. To this end, we implemented an enhanced procedure for the definition of white and gray matter masks, based on which the CJV was calculated. Our approach was validated using actual T1-weighted images collected with 1.5 T, 3 T, and 7 T MR scanners. We found that our procedure can reliably assist the selection of valid INU correction algorithm parameters, thereby contributing to an enhanced inhomogeneity correction in MR images. PMID:27014050
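As a sketch of the indirect metrics being compared, the coefficient of joint variation is (sigma_WM + sigma_GM) / |mu_WM - mu_GM|; it falls (improves) when the two tissue intensity distributions are narrow and well separated. The intensity samples below are illustrative numbers, not real MR data.

```python
import statistics

def cv(vals):
    # Coefficient of variation of one tissue class (CVWM or CVGM).
    return statistics.stdev(vals) / statistics.mean(vals)

def cjv(wm, gm):
    # Coefficient of joint variation between white and gray matter.
    return ((statistics.stdev(wm) + statistics.stdev(gm))
            / abs(statistics.mean(wm) - statistics.mean(gm)))

wm = [420, 430, 425, 418, 435]   # white-matter intensities (illustrative)
gm = [300, 310, 295, 305, 308]   # gray-matter intensities (illustrative)

print(round(cv(wm), 4), round(cv(gm), 4), round(cjv(wm, gm), 4))
# → 0.0165 0.0201 0.1076
```

A poor INU correction widens the tissue distributions and pulls their means together, which increases all three metrics; the CJV combines both effects in a single number.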
NASA Astrophysics Data System (ADS)
Riha, Stefan; Krawczyk, Harald
2011-11-01
Water quality monitoring in the Baltic Sea is of high ecological importance for all its neighbouring countries, which are highly interested in regular monitoring of the water quality parameters of their regional zones. Special attention is paid to the occurrence and dissemination of algae blooms. Among the appearing blooms, the possibly toxic or harmful cyanobacteria cultures are a special case of investigation, due to their specific optical properties and their negative influence on the ecological state of the aquatic system. Satellite remote sensing, with its high temporal and spatial resolution, allows frequent observation of large areas of the Baltic Sea with special focus on its two seasonal algae blooms. For better monitoring of the cyanobacteria-dominated summer blooms, adapted algorithms are needed which take into account the special optical properties of blue-green algae. Standard chlorophyll-a algorithms typically fail to correctly recognize these occurrences. To significantly improve the observation and tracking of cyanobacteria blooms, the Marine Remote Sensing group of DLR has started the development of a model-based inversion algorithm that includes a four-component bio-optical water model for Case-2 waters, which extends the commonly calculated parameter set (chlorophyll, suspended matter and CDOM) with an additional parameter for the estimation of phycocyanin absorption. It was necessary to carry out detailed optical laboratory measurements with different cyanobacteria cultures occurring in the Baltic Sea for the generation of a specific bio-optical model. The inversion of satellite remote sensing data is based on an artificial neural network technique, a model-based multivariate non-linear inversion approach. The specifically designed neural network is trained with a comprehensive dataset of simulated reflectance values taking into account the laboratory-obtained specific optical
Plot enchaining algorithm: a novel approach for clustering flocks of birds
NASA Astrophysics Data System (ADS)
Büyükaksoy Kaplan, Gülay; Lana, Adnan
2014-06-01
In this study, an intuitive way of tracking flocks of birds is proposed and compared to a simple cluster-seeking algorithm on real radar observations. For a group of targets such as a flock of birds, there is no need to track each target individually. Instead, a cluster can be used to represent the closely spaced tracks of a possible group. Considering a group of targets as a single target for tracking provides significant performance improvement with almost no loss of information.
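The core idea of representing closely spaced plots by a single cluster can be sketched as below. The sequential merge rule and the gating distance are illustrative assumptions, not the actual plot enchaining procedure.

```python
# Each radar plot (x, y) is merged into the nearest existing cluster if it
# falls within `gate` of the cluster centroid; otherwise it starts a new
# cluster. Each cluster stores running sums so its centroid is cheap.

def cluster_plots(plots, gate=2.0):
    clusters = []   # each cluster: [sum_x, sum_y, count]
    for x, y in plots:
        for c in clusters:
            cx, cy = c[0] / c[2], c[1] / c[2]
            if ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5 <= gate:
                c[0] += x; c[1] += y; c[2] += 1
                break
        else:
            clusters.append([x, y, 1])
    return [(c[0] / c[2], c[1] / c[2]) for c in clusters]

# Two flocks: five plots near (0, 0) and three near (10, 10).
plots = [(0, 0), (0.5, 0.3), (1, 0.8), (0.2, 1.1), (0.9, 0.1),
         (10, 10), (10.4, 9.8), (9.7, 10.3)]
print(len(cluster_plots(plots)))   # → 2
```

Each resulting centroid can then be fed to a conventional single-target tracker, which is the performance advantage the abstract describes.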
Thermoluminescence curves simulation using genetic algorithm with factorial design
NASA Astrophysics Data System (ADS)
Popko, E. A.; Weinstein, I. A.
2016-05-01
The evolutionary approach is an effective optimization tool for numeric analysis of thermoluminescence (TL) processes, used to assess the microparameters of kinetic models and to determine their effects on the shape of TL peaks. In this paper, a procedure for tuning a genetic algorithm (GA) is presented. The approach is based on a multifactorial experiment and allows choosing the intrinsic mechanisms of the evolutionary operators that provide the most efficient algorithm performance. The proposed method is tested on the “one trap-one recombination center” (OTOR) model as an example, and its advantages for the approximation of experimental TL curves are shown.
Patsatzis, Dimitris G; Maris, Dimitris T; Goussis, Dimitris A
2016-06-01
A detailed analysis is reported on a multiscale pharmacokinetic model, simulating the interaction of a drug with its target, the binding of the compounds and the outcome of their interaction. The analysis is based on the algorithmic computational singular perturbation (CSP) methodology. Among others, the analysis concludes that the partial equilibrium approximation and the quasi-steady-state approximation (PEA and QSSA) are valid in two distinct stages in the evolution of the process. Similar conclusions are reached from the algorithmic criteria for the validity of the QSSA and PEA. The reactions in the pharmacokinetic model that (i) generate the fast time scales, (ii) generate the constraints in which the system evolves and (iii) drive the system at various phases are identified, with the use of algorithmic CSP tools. These identifications are very important for the improvement of the model and for the identification of ways to control the evolution of the process. Regarding the qualitative understanding of the process, the present analysis systematises the findings in the literature and provides some new insights. PMID:27271122
Lee, Ming-Lun; Yeh, Yu-Hsiang; Tu, Shang-Ju; Chen, P C; Lai, Wei-Chih; Sheu, Jinn-Kong
2015-04-01
Non-planar InGaN/GaN multiple quantum well (MQW) structures are grown on a GaN template with truncated hexagonal pyramids (THPs) featuring c-plane and r-plane surfaces. The THP array is formed by the regrowth of the GaN layer on a selective-area Si-implanted GaN template. Transmission electron microscopy shows that the InGaN/GaN epitaxial layers regrown on the THPs exhibit different growth rates and indium compositions of the InGaN layer between the c-plane and r-plane surfaces. Consequently, InGaN/GaN MQW light-emitting diodes grown on the GaN THP array emit multiple wavelengths approaching near white light.
NASA Astrophysics Data System (ADS)
Wang, Guangwei; Araki, Kenji
In this paper, we propose an improved SO-PMI (Semantic Orientation Using Pointwise Mutual Information) algorithm, for use in Japanese Weblog Opinion Mining. SO-PMI is an unsupervised approach proposed by Turney that has been shown to work well for English. When this algorithm was translated into Japanese naively, most phrases, whether positive or negative in meaning, received a negative SO. For dealing with this slanting phenomenon, we propose three improvements: to expand the reference words to sets of words, to introduce a balancing factor and to detect neutral expressions. In our experiments, the proposed improvements obtained a well-balanced result: both positive and negative accuracy exceeded 62%, when evaluated on 1,200 opinion sentences sampled from three different domains (reviews of Electronic Products, Cars and Travels from Kakaku.com). In a comparative experiment on the same corpus, a supervised approach (SA-Demo) achieved a very similar accuracy to our method. This shows that our proposed approach effectively adapted SO-PMI for Japanese, and it also shows the generality of SO-PMI.
NASA Astrophysics Data System (ADS)
Li, Hong; Zhang, Li; Jiao, Yong-Chang
2016-07-01
This paper presents an interactive approach based on a discrete differential evolution algorithm to solve a class of integer bilevel programming problems, in which integer decision variables are controlled by an upper-level decision maker and real-valued (continuous) decision variables are controlled by a lower-level decision maker. Using the Karush-Kuhn-Tucker optimality conditions of the lower-level programming problem, the original discrete bilevel formulation can be converted into a discrete single-level nonlinear programming problem with complementarity constraints, and a smoothing technique is then applied to deal with the complementarity constraints. Finally, a discrete single-level nonlinear programming problem is obtained and solved by an interactive approach. In each iteration, for each given upper-level discrete variable, a system of nonlinear equations including the lower-level variables and Lagrange multipliers is solved first, and then a discrete nonlinear programming problem with only inequality constraints is handled using a discrete differential evolution algorithm. Simulation results show the effectiveness of the proposed approach.
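The discrete differential evolution ingredient can be sketched in isolation: the standard DE/rand/1 mutant is rounded back onto the integer lattice and clipped to bounds. The toy objective and all parameters below are assumptions; the KKT-based reformulation and the lower-level equation solving are omitted.

```python
import random

random.seed(0)
LO, HI, DIM = -10, 10, 3

def f(x):                          # toy integer objective, optimum (1, 2, 3)
    return sum((xi - t) ** 2 for xi, t in zip(x, (1, 2, 3)))

def discrete_de(pop_size=20, iters=150, F=0.6, CR=0.9):
    pop = [[random.randint(LO, HI) for _ in range(DIM)]
           for _ in range(pop_size)]
    for _ in range(iters):
        for i, x in enumerate(pop):
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            # DE/rand/1 mutation, rounded to integers and clipped to bounds,
            # combined with binomial crossover at rate CR.
            trial = [min(HI, max(LO, round(a[d] + F * (b[d] - c[d]))))
                     if random.random() < CR else x[d]
                     for d in range(DIM)]
            if f(trial) <= f(x):   # greedy one-to-one selection
                pop[i] = trial
    return min(pop, key=f)

best = discrete_de()
print(best, f(best))
```

Rounding after mutation is one common way to keep DE's difference-vector search while respecting integrality; the paper's interactive scheme wraps such a search around the smoothed single-level problem.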
S.R. Hudson
2010-10-13
A method for approximately solving magnetic differential equations is described. The approach is to include a small diffusion term in the equation, which regularizes the linear operator to be inverted. The extra term allows a "source-correction" term to be defined, which is generally required in order to satisfy the solvability conditions. The approach is described in the context of computing the pressure and parallel currents in the iterative approach for computing magnetohydrodynamic equilibria.
Chen, S; Wu, Y; Luk, B L
1999-01-01
The paper presents a two-level learning method for radial basis function (RBF) networks. A regularized orthogonal least squares (ROLS) algorithm is employed at the lower level to construct RBF networks while the two key learning parameters, the regularization parameter and the RBF width, are optimized using a genetic algorithm (GA) at the upper level. Nonlinear time series modeling and prediction is used as an example to demonstrate the effectiveness of this hierarchical learning approach.
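The two-level split can be compressed into a toy sketch: a lower level that fits a small RBF network by regularized least squares for a given (regularization, width) pair, and an upper level that searches over those two parameters. The real method uses ROLS subset selection and a full GA; here plain ridge regression and a minimal elitist evolutionary search stand in, and all data and parameters are illustrative assumptions.

```python
import math, random

def solve(A, b):                       # Gaussian elimination, tiny systems
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            fac = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= fac * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

def rbf_fit_error(xs, ys, centres, lam, width):
    # Lower level: ridge-regularized least-squares fit of RBF weights,
    # returning the training mean squared error.
    phi = [[math.exp(-(x - c) ** 2 / (2 * width ** 2)) for c in centres]
           for x in xs]
    n = len(centres)
    A = [[sum(phi[k][i] * phi[k][j] for k in range(len(xs)))
          + (lam if i == j else 0.0) for j in range(n)] for i in range(n)]
    b = [sum(phi[k][i] * ys[k] for k in range(len(xs))) for i in range(n)]
    w = solve(A, b)
    return sum((sum(wi * pk for wi, pk in zip(w, row)) - y) ** 2
               for row, y in zip(phi, ys)) / len(xs)

random.seed(0)
xs = [i / 10 for i in range(30)]
ys = [math.sin(2 * x) for x in xs]
centres = xs[::5]

# Upper level: elitist evolutionary search over (lambda, width).
pop = [(10 ** random.uniform(-6, 0), random.uniform(0.05, 2)) for _ in range(10)]
for _ in range(20):
    pop.sort(key=lambda p: rbf_fit_error(xs, ys, centres, *p))
    pop = pop[:5] + [(p[0] * 10 ** random.gauss(0, 0.3),
                      max(0.05, p[1] + random.gauss(0, 0.1)))
                     for p in pop[:5]]
best = min(pop, key=lambda p: rbf_fit_error(xs, ys, centres, *p))
print(best)
```

The point of the hierarchy is that the inner fit is cheap and convex for fixed hyperparameters, so the expensive global search only has to explore the two-dimensional hyperparameter space.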
NASA Astrophysics Data System (ADS)
Han, Zheng; Chen, Guangqi; Li, Yange; Wang, Wei; Zhang, Hong
2015-07-01
The estimation of debris-flow velocity in a cross-section is of primary importance due to its correlation to impact force, run up and superelevation. However, previous methods sometimes neglect the observed asymmetric velocity distribution, and consequently underestimate the debris-flow velocity. This paper presents a new approach for exploring the debris-flow velocity distribution in a cross-section. The presented approach uses an iteration algorithm based on the Riemann integral method to search an approximate solution to the unknown flow surface. The established laws for vertical velocity profile are compared and subsequently integrated to analyze the velocity distribution in the cross-section. The major benefit of the presented approach is that natural channels typically with irregular beds and superelevations can be taken into account, and the resulting approximation by the approach well replicates the direct integral solution. The approach is programmed in MATLAB environment, and the code is open to the public. A well-documented debris-flow event in Sichuan Province, China, is used to demonstrate the presented approach. Results show that the solutions of the flow surface and the mean velocity well reproduce the investigated results. Discussion regarding the model sensitivity and the source of errors concludes the paper.
Improved modeling of GaN HEMTs for predicting thermal and trapping-induced-kink effects
NASA Astrophysics Data System (ADS)
Jarndal, Anwar; Ghannouchi, Fadhel M.
2016-09-01
In this paper, an improved modeling approach has been developed and validated for GaN high electron mobility transistors (HEMTs). The proposed analytical model accurately simulates the drain current and its inherent trapping and thermal effects. A genetic-algorithm-based procedure is developed to automatically find the fitting parameters of the model. The developed modeling technique is implemented on a packaged GaN-on-Si HEMT and validated by DC and small-/large-signal RF measurements. The model is also employed for designing and realizing a switch-mode inverse class-F power amplifier. The amplifier simulations show very good agreement with RF large-signal measurements.
Modelling Aṣṭādhyāyī: An Approach Based on the Methodology of Ancillary Disciplines (Vedāṅga)
NASA Astrophysics Data System (ADS)
Mishra, Anand
This article proposes a general model based on the common methodological approach of the ancillary disciplines (Vedāṅga) associated with the Vedas, taking examples from Śikṣā, Chandas, Vyākaraṇa and Prātiśākhya texts. It develops and elaborates this model further to represent the contents and processes of Aṣṭādhyāyī. Certain key features are added to my earlier modelling of the Pāṇinian system of Sanskrit grammar. This includes broader coverage of the Pāṇinian meta-language, a mechanism for automatic application of rules, and positioning the grammatical system within the procedural complexes of the ancillary disciplines.
A lake detection algorithm (LDA) using Landsat 8 data: A comparative approach in glacial environment
NASA Astrophysics Data System (ADS)
Bhardwaj, Anshuman; Singh, Mritunjay Kumar; Joshi, P. K.; Snehmani; Singh, Shaktiman; Sam, Lydia; Gupta, R. D.; Kumar, Rajesh
2015-06-01
Glacial lakes show a wide range of turbidity. Owing to this, the normalized difference water indices (NDWIs) proposed by many researchers do not give appropriate results for glacial lakes. In addition, the sub-pixel proportion of water and the use of different optical band combinations are also reported to produce varying results. In the wake of the changing climate and increasing GLOFs (glacial lake outburst floods), there is a need to utilize the wide optical and thermal capabilities of Landsat 8 data for the automated detection of glacial lakes. In the present study, the optical and thermal bandwidths of Landsat 8 data were explored along with the terrain slope parameter derived from the Advanced Spaceborne Thermal Emission and Reflection Radiometer Global Digital Elevation Model Version 2 (ASTER GDEM V2) for detecting and mapping glacial lakes. The validation of the algorithm was performed using manually digitized and subsequently field-corrected lake boundaries. The pre-existing NDWIs were also evaluated to determine the superiority and stability of the proposed algorithm for glacial lake detection. Two new parameters, the LDI (lake detection index) and LF (lake fraction), were proposed to quantify the performance of the indices. The lake detection algorithm (LDA) performed best for both mixed lake pixels and pure lake pixels, with no false detections (LDI = 0.98) and very little areal underestimation (LF = 0.73). The coefficient of determination (R2) between the areal extents of lake pixels extracted using the LDA and the actual lake area was very high (0.99). With understanding of the terrain conditions and slight threshold adjustments, this work can be replicated for any mountainous region of the world.
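A per-pixel test in the spirit of the combination described above (a water index plus a terrain-slope mask) can be sketched as follows. The band values, thresholds, and slope cutoff are assumptions for the demo, not the LDA's actual decision rules.

```python
def ndwi(green, nir):
    # Normalized difference water index: water reflects green strongly
    # and absorbs near-infrared, so NDWI is high over lakes.
    return (green - nir) / (green + nir)

def is_lake_pixel(green, nir, slope_deg, ndwi_min=0.2, slope_max=10.0):
    # The slope mask suppresses false positives such as shadowed rock
    # faces, which can mimic water spectrally but sit on steep terrain.
    return ndwi(green, nir) >= ndwi_min and slope_deg <= slope_max

print(is_lake_pixel(0.30, 0.05, 2.0))   # turbid lake pixel  → True
print(is_lake_pixel(0.12, 0.18, 35.0))  # shadowed rock face → False
```

The actual algorithm additionally exploits Landsat 8's thermal bands, which this two-feature sketch leaves out.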
Shen, Chong; Cao, Huiliang; Li, Jie; Tang, Jun; Zhang, Xiaoming; Shi, Yunbo; Yang, Wei; Liu, Jun
2016-03-01
A noise reduction algorithm based on an improved empirical mode decomposition (EMD) and forward linear prediction (FLP) is proposed for the fiber optic gyroscope (FOG). Referred to as the EMD-FLP algorithm, it was developed to decompose the FOG outputs into a number of intrinsic mode functions (IMFs) after which mode manipulations are performed to select noise-only IMFs, mixed IMFs, and residual IMFs. The FLP algorithm is then employed to process the mixed IMFs, from which the refined IMFs components are reconstructed to produce the final de-noising results. This hybrid approach is applied to, and verified using, both simulated signals and experimental FOG outputs. The results from the applications show that the method eliminates noise more effectively than the conventional EMD or FLP methods and decreases the standard deviations of the FOG outputs after de-noising from 0.17 to 0.026 under sweep frequency vibration and from 0.22 to 0.024 under fixed frequency vibration. PMID:27036770
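The FLP ingredient can be shown in isolation: an order-2 forward linear predictor fitted by least squares, with the 2x2 normal equations solved in closed form. The EMD decomposition and the IMF selection of the full EMD-FLP method are omitted, and the signal below is illustrative rather than FOG data.

```python
def flp2_coeffs(x):
    # Fit x[n] ≈ a1*x[n-1] + a2*x[n-2] by least squares over the signal.
    n = len(x)
    r11 = sum(x[i - 1] * x[i - 1] for i in range(2, n))
    r12 = sum(x[i - 1] * x[i - 2] for i in range(2, n))
    r22 = sum(x[i - 2] * x[i - 2] for i in range(2, n))
    b1 = sum(x[i] * x[i - 1] for i in range(2, n))
    b2 = sum(x[i] * x[i - 2] for i in range(2, n))
    det = r11 * r22 - r12 * r12
    return (b1 * r22 - b2 * r12) / det, (b2 * r11 - b1 * r12) / det

# A signal obeying x[n] = 0.5*x[n-1] + 0.3*x[n-2] exactly: the fitted
# predictor recovers the generating coefficients.
x = [1.0, 0.5]
for _ in range(28):
    x.append(0.5 * x[-1] + 0.3 * x[-2])
a1, a2 = flp2_coeffs(x)
print(round(a1, 6), round(a2, 6))   # → 0.5 0.3
```

On the mixed IMFs, the predictable (signal) part is captured by such a predictor while broadband noise is not, which is what makes FLP useful as a de-noising stage.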
Lee, Timothy; Singh, Rahul; Yen, Ten-Yang; Macher, Bruce
2007-01-01
Knowledge of the pattern of disulfide linkages in a protein leads to a better understanding of its tertiary structure and biological function. At the state of the art, liquid chromatography/electrospray ionization-tandem mass spectrometry (LC/ESI-MS/MS) can produce spectra of the peptides in a protein that are putatively joined by a disulfide bond. In this setting, efficient algorithms are required for matching the theoretical mass spaces of all possible bonded peptide fragments to the experimentally derived spectra to determine the number and location of the disulfide bonds. The algorithmic solution must also account for issues associated with interpreting experimental data from mass spectrometry, such as noise, isotopic variation, neutral loss, and charge state uncertainty. In this paper, we propose an algorithmic approach to high-throughput disulfide bond identification using data from mass spectrometry that addresses all the aforementioned issues in a unified framework. The complexity of the proposed solution is of the order of the input spectra. The efficacy and efficiency of the method were validated using experimental data derived from proteins with diverse disulfide linkage patterns.
Maximo, Guilherme J; Costa, Mariana C; Meirelles, Antonio J A
2014-08-21
Lipidic mixtures present a particular phase change profile highly affected by their unique crystalline structure. However, classical solid-liquid equilibrium (SLE) thermodynamic modeling approaches, which assume the solid phase to be a pure component, sometimes fail in the correct description of the phase behavior, and this shortcoming grows with the complexity of the system. To overcome some of these problems, this study describes a new procedure to depict the SLE of fatty binary mixtures presenting solid solutions, namely the "Crystal-T algorithm". Considering the non-ideality of both liquid and solid phases, this algorithm is aimed at determining the temperatures at which the first and last crystals of the mixture melt. The evaluation is focused on experimental data measured and reported in this work for systems composed of triacylglycerols and fatty alcohols. The liquidus and solidus lines of the SLE phase diagrams were described using excess Gibbs energy based equations, with the group contribution UNIFAC model used for calculating the activity coefficients of both liquid and solid phases. Very low deviations between theoretical and experimental data evidenced the strength of the algorithm, enlarging the scope of SLE modeling.
Berkolaiko, G.; Kuipers, J.
2013-12-15
Electronic transport through chaotic quantum dots exhibits universal behaviour which can be understood through the semiclassical approximation. Within the approximation, calculation of transport moments reduces to codifying classical correlations between scattering trajectories. These can be represented as ribbon graphs and we develop an algorithmic combinatorial method to generate all such graphs with a given genus. This provides an expansion of the linear transport moments for systems both with and without time reversal symmetry. The computational implementation is then able to progress several orders further than previous semiclassical formulae as well as those derived from an asymptotic expansion of random matrix results. The patterns observed also suggest a general form for the higher orders.
Learning Cue Phrase Patterns from Radiology Reports Using a Genetic Algorithm
Patton, Robert M; Beckerman, Barbara G; Potok, Thomas E
2009-01-01
Various computer-assisted technologies have been developed to assist radiologists in detecting cancer; however, the algorithms still lack high degrees of sensitivity and specificity, and must undergo machine learning against a training set with known pathologies in order to further refine the algorithms with higher validity of truth. This work describes an approach to learning cue phrase patterns in radiology reports that utilizes a genetic algorithm (GA) as the learning method. The approach described here successfully learned cue phrase patterns for two distinct classes of radiology reports. These patterns can then be used as a basis for automatically categorizing, clustering, or retrieving relevant data for the user.
Leckenby, J I; Ghali, S; Butler, D P; Grobbelaar, A O
2015-05-01
Facial palsy patients suffer an array of problems ranging from functional to psychological issues. With regard to the eye, lacrimation, lagophthalmos and the inability to spontaneously blink are the main symptoms and if left untreated can compromise the cornea and vision. There are a multitude of treatment modalities available and the surgeon has the challenging prospect of choosing the correct intervention to yield the best outcome for a patient. The accurate assessment of the eye in facial paralysis is described and by approaching the brow and the eye separately the treatment options and indications are discussed having been broken down into static and dynamic modalities. Based on our unit's experience of more than 35 years and 1000 cases of facial palsy, we have developed a detailed approach to help manage these patients optimally. The aim of this article is to provide the reader with a systematic algorithm that can be used when consulting a patient with eye problems associated with facial palsy.
A guided search genetic algorithm using mined rules for optimal affective product design
NASA Astrophysics Data System (ADS)
Fung, Chris K. Y.; Kwong, C. K.; Chan, Kit Yan; Jiang, H.
2014-08-01
Affective design is an important aspect of new product development, especially for consumer products, to achieve a competitive edge in the marketplace. It can help companies to develop new products that can better satisfy the emotional needs of customers. However, product designers usually encounter difficulties in determining the optimal settings of the design attributes for affective design. In this article, a novel guided search genetic algorithm (GA) approach is proposed to determine the optimal design attribute settings for affective design. The optimization model formulated based on the proposed approach applied constraints and guided search operators, which were formulated based on mined rules, to guide the GA search and to achieve desirable solutions. A case study on the affective design of mobile phones was conducted to illustrate the proposed approach and validate its effectiveness. Validation tests were conducted, and the results show that the guided search GA approach outperforms the GA approach without the guided search strategy in terms of GA convergence and computational time. In addition, the guided search optimization model is capable of improving GA to generate good solutions for affective design.
Pleissner, K P; Hoffmann, F; Kriegel, K; Wenk, C; Wegner, S; Sahlström, A; Oswald, H; Alt, H; Fleck, E
1999-01-01
Protein spot identification in two-dimensional electrophoresis gels can be supported by the comparison of gel images accessible in different World Wide Web two-dimensional electrophoresis (2-DE) gel protein databases. The comparison may be performed either by visual cross-matching between gel images or by automatic recognition of similar protein spot patterns. A prerequisite for the automatic point pattern matching approach is the detection of protein spots yielding the x(s),y(s) coordinates and integrated spot intensities i(s). For this purpose an algorithm is developed based on a combination of hierarchical watershed transformation and feature extraction methods. This approach reduces the strong over-segmentation of spot regions normally produced by watershed transformation. Measures for the ellipticity and curvature are determined as features of spot regions. The resulting spot lists containing x(s),y(s),i(s)-triplets are calculated for a source as well as for a target gel image accessible in 2-DE gel protein databases. After spot detection a matching procedure is applied. Both the matching of a local pattern vs. a full 2-DE gel image and the global matching between full images are discussed. Preset slope and length tolerances of pattern edges serve as matching criteria. The local matching algorithm relies on a data structure derived from the incremental Delaunay triangulation of a point set and a two-step hashing technique. For the incremental construction of triangles the spot intensities are considered in decreasing order. The algorithm needs neither landmarks nor an a priori image alignment. A graphical user interface for spot detection and gel matching is written in the Java programming language for the Internet. The software package called CAROL (http://gelmatching.inf.fu-berlin.de) is realized in a client-server architecture.
NASA Astrophysics Data System (ADS)
Niaki, Seyed Taghi Akhavan; Javad Ershadi, Mohammad
2012-12-01
In this research, the main parameters of the multivariate cumulative sum (CUSUM) control chart (the reference value k, the control limit H, the sample size n and the sampling interval h) are determined by minimising the Lorenzen-Vance cost function [Lorenzen, T.J., and Vance, L.C. (1986), 'The Economic Design of Control Charts: A Unified Approach', Technometrics, 28, 3-10], to which the external costs of employing the chart are added. In addition, the model is statistically constrained to achieve desired in-control and out-of-control average run lengths. The Taguchi loss approach is used to model the problem, and a genetic algorithm, whose main parameters are tuned using response surface methodology (RSM), is proposed to solve it. At the end, sensitivity analyses on the main parameters of the cost function are presented and their practical conclusions are drawn. The results show that RSM significantly improves the performance of the proposed algorithm and that the external costs of applying the chart, which are due to real-world constraints, do not increase the average total loss very much.
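For reference, the one-sided univariate CUSUM recursion underlying such charts is S_i = max(0, S_{i-1} + x_i - k), with an alarm when S_i exceeds H. The data and the restart-after-alarm convention below are illustrative; the multivariate chart and the economic tuning of (k, H, n, h) are not shown.

```python
def cusum(xs, k, H):
    # One-sided upper CUSUM: accumulate deviations above the reference
    # value k and alarm when the sum exceeds the control limit H.
    s, alarms = 0.0, []
    for i, x in enumerate(xs):
        s = max(0.0, s + x - k)
        if s > H:
            alarms.append(i)
            s = 0.0        # restart after an alarm (one common convention)
    return alarms

in_control = [0.1, -0.2, 0.0, 0.2, -0.1, 0.1]
shifted = in_control + [1.2, 1.0, 1.1, 0.9]
print(cusum(shifted, k=0.5, H=2.0))   # → [9]
```

The economic design question the paper addresses is how to pick k and H (along with n and h) so that the expected total cost of sampling, false alarms, and delayed detection is minimised.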
NASA Astrophysics Data System (ADS)
Best, Andrew; Kapalo, Katelynn A.; Warta, Samantha F.; Fiore, Stephen M.
2016-05-01
Human-robot teaming largely relies on the ability of machines to respond and relate to human social signals. Prior work in Social Signal Processing has drawn a distinction between social cues (discrete, observable features) and social signals (underlying meaning). For machines to attribute meaning to behavior, they must first understand some probabilistic relationship between the cues presented and the signal conveyed. Using data derived from a study in which participants identified a set of salient social signals in a simulated scenario and indicated the cues related to the perceived signals, we detail a learning algorithm, which clusters social cue observations and defines an "N-Most Likely States" set for each cluster. Since multiple signals may be co-present in a given simulation and a set of social cues often maps to multiple social signals, the "N-Most Likely States" approach provides a dramatic improvement over typical linear classifiers. We find that the target social signal appears in a "3 most-likely signals" set with up to 85% probability. This results in increased speed and accuracy on large amounts of data, which is critical for modeling social cognition mechanisms in robots to facilitate more natural human-robot interaction. These results also demonstrate the utility of such an approach in deployed scenarios where robots need to communicate with human teammates quickly and efficiently. In this paper, we detail our algorithm, comparative results, and offer potential applications for robot social signal detection and machine-aided human social signal detection.
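A toy version of the "N-Most Likely States" idea can be sketched as follows: cue observations are grouped by a cluster key, each cluster keeps counts of the signals annotated with it, and classification returns the N most frequent signals for the observation's cluster. The clustering of continuous cue vectors used in the study is abstracted away into an exact-match key, and all cue/signal names are invented for the demo.

```python
from collections import Counter

class NMostLikely:
    def __init__(self, n=3):
        self.n = n
        self.clusters = {}   # frozenset of cues -> Counter of signals

    def observe(self, cues, signal):
        # Training: record which signal co-occurred with this cue set.
        self.clusters.setdefault(frozenset(cues), Counter())[signal] += 1

    def likely(self, cues):
        # Return the N most likely signals for this cue set (may be fewer).
        counts = self.clusters.get(frozenset(cues), Counter())
        return [s for s, _ in counts.most_common(self.n)]

m = NMostLikely(n=2)
for sig in ["greeting", "greeting", "agreement"]:
    m.observe({"smile", "nod"}, sig)
m.observe({"frown"}, "disagreement")
print(m.likely({"smile", "nod"}))   # → ['greeting', 'agreement']
```

Returning a small candidate set rather than a single label is what lets the approach cope with co-present signals and many-to-many cue-signal mappings, as the abstract notes.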
Utilisation of GaN and InGaN/GaN with nanoporous structures for water splitting
Benton, J.; Bai, J.; Wang, T.
2014-12-01
We report a cost-effective approach to the fabrication of GaN-based nanoporous structures for applications in renewable hydrogen production. Photoelectrochemical etching in a KOH solution has been employed to fabricate both GaN and InGaN/GaN nanoporous structures with pore sizes ranging from 25 to 60 nm, obtained by controlling both etchant concentration and applied voltage. Compared to as-grown planar devices, the nanoporous structures have exhibited a significant increase in photocurrent, by a factor of up to four. An incident photon conversion efficiency of up to 46% around the band edge of GaN has been achieved.
A possibilistic approach to rotorcraft design through a multi-objective evolutionary algorithm
NASA Astrophysics Data System (ADS)
Chae, Han Gil
Most of the engineering design processes in use today in the field may be considered as a series of successive decision making steps. The decision maker uses information at hand, determines the direction of the procedure, and generates information for the next step and/or other decision makers. However, the information is often incomplete, especially in the early stages of the design process of a complex system. As the complexity of the system increases, uncertainties eventually become unmanageable using traditional tools. In such a case, the tools and analysis values need to be "softened" to account for the designer's intuition. One of the methods that deals with issues of intuition and incompleteness is possibility theory. Through the use of possibility theory coupled with fuzzy inference, the uncertainties estimated by the intuition of the designer are quantified for design problems. By involving quantified uncertainties in the tools, the solutions can represent a possible set, instead of a crisp spot, for predefined levels of certainty. From a different point of view, it is a well known fact that engineering design is a multi-objective problem or a set of such problems. The decision maker aims to find satisfactory solutions, sometimes compromising the objectives that conflict with each other. Once the candidates of possible solutions are generated, a satisfactory solution can be found by various decision-making techniques. A number of multi-objective evolutionary algorithms (MOEAs) have been developed, and can be found in the literature, which are capable of generating alternative solutions and evaluating multiple sets of solutions in one single execution of an algorithm. One of the MOEA techniques that has been proven to be very successful for this class of problems is the strength Pareto evolutionary algorithm (SPEA) which falls under the dominance-based category of methods. The Pareto dominance that is used in SPEA, however, is not enough to account for the
Wong, Brian J. F.; Karmi, Koohyar; Devcic, Zlatko; McLaren, Christine E.; Chen, Wen-Pin
2013-01-01
Objectives The objectives of this study were to: 1) determine if a genetic algorithm in combination with morphing software can be used to evolve more attractive faces; and 2) evaluate whether this approach can be used as a tool to define or identify the attributes of the ideal attractive face. Study Design Basic research study incorporating focus group evaluations. Methods Digital images were acquired of 250 female volunteers (18–25 y). Randomly selected images were used to produce a parent generation (P) of 30 synthetic faces using morphing software. Then, a focus group of 17 trained volunteers (18–25 y) scored each face on an attractiveness scale ranging from 1 (unattractive) to 10 (attractive). A genetic algorithm was used to select 30 new pairs from the parent generation, and these were morphed using software to produce a new first generation (F1) of faces. The F1 faces were scored by the focus group, and the process was repeated for a total of four iterations of the algorithm. The algorithm mimics natural selection by using the attractiveness score as the selection pressure; the more attractive faces are more likely to morph. All five generations (P-F4) were then scored by three focus groups: a) surgeons (n = 12), b) cosmetology students (n = 44), and c) undergraduate students (n = 44). Morphometric measurements were made of 33 specific features on each of the 150 synthetic faces, and correlated with attractiveness scores using univariate and multivariate analysis. Results The average facial attractiveness scores increased with each generation and were 3.66 (±0.60), 4.59 (±0.73), 5.50 (±0.62), 6.23 (±0.31), and 6.39 (±0.24) for P and F1–F4 generations, respectively. Histograms of attractiveness score distributions show a significant shift in the skew of each curve toward more attractive faces with each generation. Univariate analysis identified nasal width, eyebrow arch height, and lip thickness as being significantly correlated with attractiveness
Multidisciplinary design optimization using genetic algorithms
NASA Technical Reports Server (NTRS)
Unal, Resit
1994-01-01
Multidisciplinary design optimization (MDO) is an important step in the conceptual design and evaluation of launch vehicles, since it can have a significant impact on performance and life-cycle cost. The objective is to search the system design space to determine the values of design variables that optimize the performance characteristic subject to system constraints. Gradient-based optimization routines have been used extensively for aerospace design optimization. However, one limitation of gradient-based optimizers is their need for gradient information; design problems that include discrete variables therefore cannot be studied. Such problems are common in launch vehicle design. For example, the number of engines and material choices must be integer values or assume only a few discrete values. In this study, genetic algorithms are investigated as an approach to MDO problems involving discrete variables and discontinuous domains. Optimization by genetic algorithm (GA) uses a search procedure that is fundamentally different from gradient-based methods. Genetic algorithms seek to find good solutions in an efficient and timely manner rather than the best solution. GAs are designed to mimic evolutionary selection. A population of candidate designs is evaluated at each iteration, and each individual's probability of reproduction (existence in the next generation) depends on its fitness value (related to the value of the objective function). Progress toward the optimum is achieved by the crossover and mutation operations. GAs are attractive since they use only objective function values in the search process, so gradient calculations are avoided; hence, GAs are able to deal with discrete variables. Studies report success in the use of GAs for aircraft design optimization, trajectory analysis, space structure design, and control system design. In these studies reliable convergence was achieved, but the number of function evaluations was large compared
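The GA loop described above (fitness-dependent selection, crossover, mutation over discrete variables) can be sketched as follows. The engine-count and material genes and the toy objective are illustrative placeholders, not the launch-vehicle model itself:

```python
import random

# Hypothetical discrete design space: number of engines and a material index.
ENGINES = [1, 2, 3, 4, 5]
MATERIALS = [0, 1, 2]

def fitness(design):
    # Placeholder objective: prefer three engines and material 1.
    engines, material = design
    return -(abs(engines - 3) + abs(material - 1))

def select(pop):
    # Tournament selection: fitter individuals are more likely to reproduce.
    a, b = random.sample(pop, 2)
    return a if fitness(a) > fitness(b) else b

def crossover(p1, p2):
    # Single-point crossover over the two discrete genes.
    return (p1[0], p2[1])

def mutate(design, rate=0.1):
    # With small probability, replace a gene by a random discrete value.
    engines, material = design
    if random.random() < rate:
        engines = random.choice(ENGINES)
    if random.random() < rate:
        material = random.choice(MATERIALS)
    return (engines, material)

def ga(generations=50, pop_size=20):
    pop = [(random.choice(ENGINES), random.choice(MATERIALS))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop = [mutate(crossover(select(pop), select(pop)))
               for _ in range(pop_size)]
    return max(pop, key=fitness)
```

Note that no gradients are computed anywhere: only `fitness` evaluations drive the search, which is exactly what makes the approach applicable to discrete and discontinuous design spaces.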
NASA Astrophysics Data System (ADS)
Keilis-Borok, V. I.; Soloviev, A.; Gabrielov, A.
2011-12-01
We describe a uniform approach to predicting different extreme events, also known as critical phenomena, disasters, or crises. The following types of such events are considered: strong earthquakes; economic recessions (their onset and termination); surges of unemployment; surges of crime; and electoral changes of the governing party. A uniform approach is possible due to a common feature of these events: each is generated by a certain hierarchical dissipative complex system. After coarse-graining, such systems exhibit regular behavior patterns; among them we look for "premonitory patterns" that signal the approach of an extreme event. We introduce a methodology, based on optimal control theory, that assists disaster management in choosing an optimal set of disaster-preparedness measures to undertake in response to a prediction. Predictions with their currently realistic (limited) accuracy do allow preventing a considerable part of the damage through a hierarchy of preparedness measures. The accuracy of a prediction should be known, but it need not necessarily be high.
ERIC Educational Resources Information Center
Clauser, Brian E.; Margolis, Melissa J.; Clyman, Stephen G.; Ross, Linette P.
1997-01-01
Research on automated scoring is extended by comparing alternative automated systems for scoring a computer simulation of physicians' patient management skills. A regression-based system is more highly correlated with experts' evaluations than a system that uses complex rules to map performances into score levels, but both approaches are feasible.…
von Wenckstern, H; Splith, D; Werner, A; Müller, S; Lorenz, M; Grundmann, M
2015-12-14
We investigated properties of an (In(x)Ga(1-x))2O3 thin film with laterally varying cation composition that was realized by a large-area offset pulsed laser deposition approach. Within a two inch diameter thin film, the composition varies between 0.01 ≤ x ≤ 0.85, and three crystallographic phases (cubic, hexagonal, and monoclinic) were identified. We observed a correlation between characteristic parameters of Schottky barrier diodes fabricated on the thin film and its chemical and structural material properties. The highest Schottky barriers and rectification of the diodes were found for low indium contents. The thermal stability of the diodes is also best for Ga-rich parts of the sample. Conversely, the series resistance is lowest for large In content. Overall, the (In(x)Ga(1-x))2O3 alloy is well-suited for potential applications such as solar-blind photodetectors with a tunable absorption edge.
NASA Technical Reports Server (NTRS)
Axelrad, Penina; Speed, Eden; Leitner, Jesse A. (Technical Monitor)
2002-01-01
This report summarizes the efforts to date in processing GPS measurements in High Earth Orbit (HEO) applications by the Colorado Center for Astrodynamics Research (CCAR). Two specific projects were conducted: initialization of the orbit propagation software, GEODE, using nominal orbital elements for the IMEX orbit, and processing of actual and simulated GPS data from the AMSAT satellite using a Doppler-only batch filter. CCAR has investigated a number of approaches for initialization of the GEODE orbit estimator with little a priori information. This document describes a batch solution approach that uses pseudorange or Doppler measurements collected over an orbital arc to compute an epoch state estimate. The algorithm is based on limited orbital element knowledge from which a coarse estimate of satellite position and velocity can be determined and used to initialize GEODE. This algorithm assumes knowledge of nominal orbital elements (a, e, i, Ω, ω) and uses a search on time of perigee passage (tau(sub p)) to estimate the host satellite position within the orbit and the approximate receiver clock bias. Results of the method are shown for a simulation including large orbital uncertainties and measurement errors. In addition, CCAR has attempted to process GPS data from the AMSAT satellite to obtain an initial estimate of the orbit. Limited GPS data have been received to date, with few satellites tracked and no computed point solutions. Unknown variables in the received data have made computation of a precise orbit using the recovered pseudoranges difficult. This document describes the Doppler-only batch approach used to compute the AMSAT orbit. Both actual flight data from AMSAT and simulated data generated using the Satellite Tool Kit and Goddard Space Flight Center's Flight Simulator were processed. Results for each case and conclusions are presented.
Solitro, Giovanni F; Amirouche, Farid
2016-04-01
Pedicle screws are typically used for fusion, percutaneous fixation, and as a means of gripping a spinal segment. The screws act as rigid and stable anchor points to bridge and connect with a rod as part of a construct. The foundation of the fusion is directly related to the placement of these screws. Malposition of pedicle screws causes intraoperative complications such as pedicle fractures and dural lesions and is a contributing factor to fusion failure. Computer-assisted spine surgery (CASS) and patient-specific drill templates were developed to reduce this failure rate, but the trajectory of the screws remains a decision driven by anatomical landmarks that are often not easily defined. Current data show the need for a robust and reliable technique that prevents screw misplacement. Furthermore, there is a need to enhance screw insertion guides to overcome the distortion of anatomical landmarks, which is viewed as a limiting factor by current techniques. The objective of this study is to develop a method and mathematical lemmas that are fundamental to the development of computer algorithms for pedicle screw placement. Using the proposed methodology, we show how to generate automated, optimal, safe screw insertion trajectories based on the identification of a set of intrinsic parameters. The results, obtained from the validation of the proposed method on two full thoracic segments, are similar to previous morphological studies. The simplicity of the method, being pedicle-arch based, makes it applicable to vertebrae whose landmarks are not well defined, altered, or distorted. PMID:26922675
NASA Astrophysics Data System (ADS)
Zabbah, Iman
2012-01-01
Electrical discharge machining (EDM) is the most common non-traditional production method for forming metals and non-oxide ceramics. Improving surface smoothness, increasing the material removal rate, and reducing relative tool wear play an important role in this machining process, and all depend directly on the choice of input parameters. The complicated and non-linear nature of EDM makes the process intractable with conventional, classical methods. So far, several intelligence-based methods have been used to optimize this process; foremost among them are artificial neural networks, which model the process as a black box. This kind of machining becomes problematic when the workpiece is a composite of carbon-based materials such as silicon carbide. In this article, besides using a new mono-pulse EDM technique, we design and model a fuzzy neural network. A genetic algorithm is then used to find the optimal machine inputs. In our research the workpiece is a non-oxide ceramic, silicon carbide, which makes the control process more difficult. Finally, the results are compared with those of previous methods.
NASA Astrophysics Data System (ADS)
Kumar, Vijay M.; Murthy, ANN; Chandrashekara, K.
2012-05-01
The production planning problem of a flexible manufacturing system (FMS) concerns decisions that have to be made before the FMS begins to produce parts according to a given production plan over an upcoming planning horizon. The main aspect of production planning is the machine loading problem, in which a subset of jobs to be manufactured is selected and their operations are assigned to the relevant machines. Such problems are not only combinatorial optimization problems but also non-deterministic polynomial-time hard (NP-hard), making it difficult to obtain satisfactory solutions using traditional optimization techniques. In this paper, an attempt has been made to address the machine loading problem with the objectives of minimizing system unbalance and maximizing throughput simultaneously, while satisfying the system constraints related to available machining time and tool slots, using a meta-hybrid heuristic technique based on genetic algorithm and particle swarm optimization. The results reported in this paper demonstrate the model's efficiency and examine the performance of the system with respect to measures such as throughput and system utilization.
NASA Astrophysics Data System (ADS)
Murthy, T. V. Rama; Krishnan, A.; Ajay, S.
1991-10-01
A current trend in fault detection and identification (FDI) is to use a combination of algorithmic procedural methods and artificial intelligence (AI) techniques. A method for monitoring faulty sensors in aircraft electrical actuators is presented. This method is based on static model based fault detection (MBFD) and parameter estimation based on on-line recursive least squares (RLS). Investigations were made of MBFD where measured variables were checked for consistency with those of the model. In the case study, owing to the wide variation of the poles of the overall transfer function, a reduced order model was chosen to model the process. Fault detection is also achieved by estimating the reduced order model parameters in the closed loop by RLS. It is found that RLS is reliable only on the addition of a PRBS dither signal to the input excitation. The combination of the two methods improves the fault detection probability while simultaneously reducing the false alarm probability. This yields a higher confidence level in the fault monitoring, especially in the context of the response of a reduced order model.
Kyunghee Jung; Hoyong Kim; Yunseok Ko (Dept. of Distribution System)
1993-10-01
This study develops an expert system to solve the problem of main transformer (MTr) or feeder overload and feeder constraint violation in automated distribution systems, where each feeder is subject to thermal overload and voltage drop limits. The objective is to perform network reconfiguration by switching the tie and sectionalizing switches so that the system violation is removed, while achieving load balance among the MTrs and feeders with fewer switching operations. Since a switching operation in a practical system does not cause a large change in voltage, an approximation method is used to check voltage violations instead of a full AC load flow solution. To reduce the search space, an expert system based on heuristic rules is presented and implemented in Prolog. This system adopts the best-first tree search technique; list processing and recursive programming techniques are then used to solve the combinatorial optimization problem. Computational results are also presented to show the performance of the heuristic algorithms developed.
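The best-first tree search at the core of such an expert system can be sketched generically. The `neighbors`, `score`, and `is_goal` callbacks stand in for the heuristic rules and switching moves, which the abstract does not detail:

```python
import heapq

def best_first_search(start, neighbors, score, is_goal, max_expansions=10000):
    """Best-first tree search: always expand the most promising node,
    i.e. the one with the lowest heuristic score.

    `neighbors(node)` yields successor states (e.g. configurations after
    one switching operation), `score(node)` is the heuristic estimate,
    and `is_goal(node)` tests whether all violations are removed."""
    frontier = [(score(start), 0, start)]
    counter = 1  # tiebreaker so heapq never compares states directly
    while frontier and max_expansions > 0:
        _, _, node = heapq.heappop(frontier)
        if is_goal(node):
            return node
        for nxt in neighbors(node):
            heapq.heappush(frontier, (score(nxt), counter, nxt))
            counter += 1
        max_expansions -= 1
    return None
```

As a toy usage, searching from 0 toward the goal value 7 with successors `x+1, x+2` and heuristic `abs(7 - x)` returns 7 after a handful of expansions.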
NASA Astrophysics Data System (ADS)
Hashemi-Dezaki, Hamed; Mohammadalizadeh-Shabestary, Masoud; Askarian-Abyaneh, Hossein; Rezaei-Jegarluei, Mohammad
2014-01-01
In electrical distribution systems, a great amount of power is wasted across the lines; moreover, the power factors, voltage profiles, and total harmonic distortions (THDs) of most loads are not as desired. These parameters therefore play a highly important role in wasting money and energy, and both consumers and sources suffer from high distortion rates and even instabilities. Active power filters (APFs) are an innovative solution to this problem and have recently employed the instantaneous reactive power theory. In this paper, a novel method is proposed to optimize the allocation of APFs. The introduced method is based on the instantaneous reactive power theory in vectorial representation, which makes it possible to assess different compensation strategies. Proper placement of APFs in the system also plays a crucial role in both reducing loss costs and improving power quality. To optimize APF placement, a new objective function has been defined on the basis of five terms: total losses, power factor, voltage profile, THD, and cost. A genetic algorithm has been used to solve the optimization problem. The results of applying this method to a distribution network illustrate the method's advantages.
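A five-term objective like the one described is often realized as a weighted sum that the genetic algorithm then minimizes. The weights, metric names, and normalizations below are illustrative assumptions, since the abstract does not give the actual formulation:

```python
def apf_objective(candidate, weights=(0.3, 0.2, 0.2, 0.2, 0.1)):
    """Weighted-sum cost for one APF placement candidate (a sketch).

    `candidate` is a dict of per-placement metrics; lower is better.
    The weights are illustrative, not taken from the paper."""
    w_loss, w_pf, w_volt, w_thd, w_cost = weights
    return (w_loss * candidate["losses_kw"]
            + w_pf * (1.0 - candidate["power_factor"])   # penalize low PF
            + w_volt * candidate["voltage_deviation"]    # p.u. deviation
            + w_thd * candidate["thd"]
            + w_cost * candidate["cost_normalized"])
```

A GA would evaluate this function for each candidate placement in the population; a placement with low losses, near-unity power factor, flat voltage profile, low THD, and low cost scores lower than a poor one.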
Robot body self-modeling algorithm: a collision-free motion planning approach for humanoids.
Leylavi Shoushtari, Ali
2016-01-01
Motion planning for humanoid robots is one of the critical issues due to high redundancy and theoretical and technical considerations, e.g. stability, motion feasibility, and collision avoidance. The strategies the central nervous system employs to plan, signal, and control human movements are a source of inspiration for dealing with these problems. Self-modeling is a concept inspired by body self-awareness in humans. In this research it is integrated into an optimal motion planning framework in order to detect and avoid collision of the manipulated object with the humanoid's body while performing a dynamic task. Twelve parametric functions are designed as self-models to determine the boundary of the humanoid's body. The boundaries mathematically defined by the self-models are then employed to calculate the safe region in which the box avoids collision with the robot. Four different objective functions are employed in motion simulation to validate the robustness of the algorithm under different dynamics. The results also confirm the collision avoidance, realism, and stability of the predicted motion.
Phase Reconstruction from FROG Using Genetic Algorithms [Frequency-Resolved Optical Gating]
Omenetto, F.G.; Nicholson, J.W.; Funk, D.J.; Taylor, A.J.
1999-04-12
The authors describe a new technique for obtaining the phase and electric field from FROG measurements using genetic algorithms. Frequency-Resolved Optical Gating (FROG) has gained prominence as a technique for characterizing ultrashort pulses. FROG consists of a spectrally resolved autocorrelation of the pulse to be measured. Typically a combination of iterative algorithms is used, applying constraints from experimental data, and alternating between the time and frequency domain, in order to retrieve an optical pulse. The authors have developed a new approach to retrieving the intensity and phase from FROG data using a genetic algorithm (GA). A GA is a general parallel search technique that operates on a population of potential solutions simultaneously. Operators in a genetic algorithm, such as crossover, selection, and mutation are based on ideas taken from evolution.
Adaptive MANET multipath routing algorithm based on the simulated annealing approach.
Kim, Sungwook
2014-01-01
A mobile ad hoc network is a system of wireless mobile nodes that can freely and dynamically self-organize into network topologies without any preexisting communication infrastructure. Due to characteristics like temporary topology and the absence of centralized authority, routing is one of the major issues in ad hoc networks. In this paper, a new multipath routing scheme is proposed that employs a simulated annealing approach. The proposed metaheuristic approach can achieve greater and reciprocal advantages in hostile, dynamic real-world network situations, making it a powerful method for finding an effective solution to the conflict-ridden mobile ad hoc network routing problem. Simulation results indicate that the proposed paradigm adapts best to variations in dynamic network situations: average remaining energy, network throughput, packet loss probability, and traffic load distribution are improved by about 10%, 10%, 5%, and 10%, respectively, over existing schemes.
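The simulated annealing acceptance rule at the heart of such a scheme can be sketched as follows. The flat candidate list and geometric cooling schedule are illustrative simplifications, not taken from the paper:

```python
import math
import random

def simulated_annealing(candidates, cost, t0=1.0, cooling=0.95, steps=200):
    """Pick a low-cost element of `candidates` (e.g. a route set) by
    simulated annealing.

    A worse neighbor is accepted with probability exp(-delta / T), which
    lets the search escape local minima while the temperature T is high;
    as T cools, the search becomes greedy."""
    current = random.choice(candidates)
    best, t = current, t0
    for _ in range(steps):
        neighbor = random.choice(candidates)
        delta = cost(neighbor) - cost(current)
        if delta < 0 or random.random() < math.exp(-delta / t):
            current = neighbor
        if cost(current) < cost(best):
            best = current
        t *= cooling
    return best
```

In a routing context, `cost` would combine metrics such as hop count, residual node energy, and expected congestion along each candidate path set.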
An Intelligent Model for Pairs Trading Using Genetic Algorithms
Huang, Chien-Feng; Hsu, Chi-Jen; Chen, Chi-Chung; Chang, Bao Rong; Li, Chen-An
2015-01-01
Pairs trading is an important and challenging research area in computational finance, in which pairs of stocks are bought and sold in pair combinations for arbitrage opportunities. Traditional methods that solve this set of problems mostly rely on statistical methods such as regression. In contrast to the statistical approaches, recent advances in computational intelligence (CI) are leading to promising opportunities for solving problems in the financial applications more effectively. In this paper, we present a novel methodology for pairs trading using genetic algorithms (GA). Our results showed that the GA-based models are able to significantly outperform the benchmark and our proposed method is capable of generating robust models to tackle the dynamic characteristics in the financial application studied. Based upon the promising results obtained, we expect this GA-based method to advance the research in computational intelligence for finance and provide an effective solution to pairs trading for investment in practice. PMID:26339236
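For context, the classic statistical baseline that GA-based pairs trading models are contrasted with is a mean-reversion rule on the z-score of the pair's price spread. The entry and exit thresholds below are illustrative, not from the paper:

```python
import statistics

def pair_signal(prices_a, prices_b, entry_z=2.0, exit_z=0.5):
    """Classic spread/z-score pairs trading signal (a sketch).

    Returns 'short_spread', 'long_spread', 'exit', or 'hold' based on the
    z-score of the latest spread against its historical mean."""
    spread = [a - b for a, b in zip(prices_a, prices_b)]
    mu = statistics.mean(spread)
    sigma = statistics.pstdev(spread)
    if sigma == 0:
        return "hold"
    z = (spread[-1] - mu) / sigma
    if z > entry_z:
        return "short_spread"   # spread rich: sell A, buy B
    if z < -entry_z:
        return "long_spread"    # spread cheap: buy A, sell B
    if abs(z) < exit_z:
        return "exit"           # spread has reverted to the mean
    return "hold"
```

A GA approach would typically evolve the thresholds, lookback window, and pair selection jointly against a fitness measure such as backtested return, rather than fixing them by hand as here.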
Chou, Ting-Chao
2011-01-01
Mass-action-law-based system analysis via mathematical induction and deduction leads to a generalized theory and algorithm that allows computerized simulation of dose-effect dynamics with small experiments using a small number of data points in vitro, in animals, and in humans. The median-effect equation of the mass-action law, deduced from over 300 mechanism-specific equations, has been shown to be the unified theory that serves as the common link for complicated biomedical systems. After using the median-effect principle as the common denominator, its applications are mechanism-independent, drug-unit-independent, and dynamic-order-independent, and can be used generally for single-drug analysis or for multiple drug combinations at constant or non-constant ratios. Since the "median" is the common link and universal reference point in biological systems, these general enabling features lead to computerized quantitative bio-informatics for econo-green bio-research in broad disciplines. Specific applications of the theory, especially those relevant to drug discovery, drug combination, and clinical trials, have been cited or illustrated in terms of algorithms, experimental design, and computerized simulation for data analysis. Lessons learned from cancer research during the past fifty years provide a valuable opportunity to reflect, to improve the conventional divergent approach, and to introduce a new convergent avenue, based on the mass-action law principle, for efficient cancer drug discovery and low-cost drug development.
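The median-effect equation referred to above is compact enough to state directly: fa/fu = (D/Dm)^m, where fa is the fraction affected, fu = 1 - fa the fraction unaffected, D the dose, Dm the median-effect dose (e.g. an IC50), and m the sigmoidicity (slope) of the dose-effect curve. A minimal implementation solved for fa:

```python
def fraction_affected(dose, dm, m):
    """Median-effect equation: fa/fu = (D/Dm)^m, rearranged to
    fa = 1 / (1 + (Dm/D)^m).

    dose: administered dose D (> 0)
    dm:   median-effect dose Dm (dose giving 50% effect)
    m:    slope (sigmoidicity) of the dose-effect curve"""
    return 1.0 / (1.0 + (dm / dose) ** m)
```

At D = Dm the equation gives fa = 0.5 regardless of m, which is what makes the median the universal reference point of the framework.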
NASA Astrophysics Data System (ADS)
Mallick, Rajnish; Ganguli, Ranjan; Seetharama Bhat, M.
2015-09-01
The objective of this study is to determine an optimal trailing-edge flap configuration and flap location that achieve minimum hub vibration levels and flap actuation power simultaneously. An aeroelastic analysis of a soft in-plane four-bladed rotor is performed in conjunction with optimal control. A second-order polynomial response surface based on an orthogonal array (OA) with a 3-level design describes both objectives adequately. Two new orthogonal arrays, called MGB2P-OA and MGB4P-OA, are proposed to generate nonlinear response surfaces with all interaction terms for two and four parameters, respectively. A multi-objective bat algorithm (MOBA) approach is used to obtain the optimal design point for the mutually conflicting objectives. MOBA is a recently developed nature-inspired metaheuristic optimization algorithm based on the echolocation behaviour of bats. It is found that the MOBA-derived Pareto-optimal trailing-edge flap design reduces vibration levels by 73% and flap actuation power by 27% in comparison with the baseline design.
NASA Astrophysics Data System (ADS)
Amian, M.; Setarehdan, S. Kamaledin; Yousefi, H.
2014-09-01
Functional near-infrared spectroscopy (fNIRS) is a new noninvasive way to measure oxy-hemoglobin and deoxy-hemoglobin concentration changes in the human brain. Relatively safer and more affordable than other functional imaging techniques such as fMRI, it is widely used for special applications such as infant examinations and pilot brain monitoring. In such applications, fNIRS data sometimes suffer from undesirable movements of the subject's head, called motion artifacts, which corrupt the signal. Motion artifacts in fNIRS data may lead to false conclusions or diagnoses. In this work we reduce these artifacts with a novel Kalman filtering algorithm based on an autoregressive moving average (ARMA) model of the fNIRS system. Our proposed method does not require any additional hardware or sensors, nor does it need the whole data record at once, both of which were unavoidable requirements of older approaches such as adaptive and Wiener filtering. Results show that our approach successfully cleans contaminated fNIRS data.
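A scalar Kalman filter illustrates the recursive, sample-by-sample operation that avoids needing the whole record at once. An AR(1) state model is used here as a simplified stand-in for the paper's ARMA formulation, and the noise parameters are illustrative assumptions:

```python
def kalman_smooth(signal, phi=0.97, q=1e-4, r=1e-2):
    """Scalar Kalman filter with an AR(1) state model
    x_k = phi * x_{k-1} + w_k, observed as z_k = x_k + v_k.

    phi: AR(1) coefficient of the state model (illustrative)
    q:   process noise variance, r: measurement noise variance"""
    x, p = signal[0], 1.0
    out = []
    for z in signal:
        # Predict: propagate state and uncertainty through the AR(1) model.
        x = phi * x
        p = phi * p * phi + q
        # Update: blend prediction with the new measurement z.
        k = p / (p + r)          # Kalman gain
        x = x + k * (z - x)
        p = (1 - k) * p
        out.append(x)
    return out
```

Because each output sample depends only on the previous estimate and the current measurement, the filter runs online, which is the property that distinguishes it from batch methods such as Wiener filtering.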
Initial evaluation of a child with arthritis--an algorithmic approach.
Khubchandani, R P; D'Souza, Susan
2002-10-01
Arthritis is one of the less common yet challenging problems that may confront a pediatrician. The potential pathology is diverse, ranging from the benign with a good prognosis to the serious and ultimately fatal. From spot diagnoses to conditions that evolve over time, few other presentations so challenge and stimulate clinical acumen. Several diagnoses can be made clinically, with laboratory investigations providing additional support. An approach through which the clinician seeks answers via a logical sequence of questions is presented to help shortlist the possible diagnoses.
OPC recipe optimization using genetic algorithm
NASA Astrophysics Data System (ADS)
Asthana, Abhishek; Wilkinson, Bill; Power, Dave
2016-03-01
Optimization of OPC recipes is not trivial due to the multiple parameters that need tuning and their correlations. Usually, no standard methodologies exist for choosing the initial recipe settings, and in the keyword development phase, parameters are chosen either based on previous learning, vendor recommendations, or to resolve specific problems on particular special constructs. Such approaches fail to holistically quantify the effects of parameters on other or possible new designs, and to an extent are based on the keyword developer's intuition. In addition, when a quick fix is needed for a new design, numerous customization statements are added to the recipe, which make it more complex. The present work demonstrates the application of the Genetic Algorithm (GA) technique for optimizing OPC recipes. GA is a search technique that mimics Darwinian natural selection and has applications in various science and engineering disciplines. In this case, the GA search heuristic is applied to two problems: (a) an overall OPC recipe optimization with respect to selected parameters and (b) improving printing and via coverage at line-end geometries. As will be demonstrated, the optimized recipe significantly reduced the number of ORC violations for case (a). For case (b), line ends of various features showed significant printing and filling improvement.
Performance Evaluation of the Approaches and Algorithms for Hamburg Airport Operations
NASA Technical Reports Server (NTRS)
Zhu, Zhifan; Okuniek, Nikolai; Gerdes, Ingrid; Schier, Sebastian; Lee, Hanbong; Jung, Yoon
2016-01-01
The German Aerospace Center (DLR) and the National Aeronautics and Space Administration (NASA) have been independently developing and testing their own concepts and tools for airport surface traffic management. Although these concepts and tools have been tested individually for European and US airports, they have never been compared or analyzed side by side. This paper presents collaborative research devoted to the evaluation and analysis of two different surface management concepts, with Hamburg Airport used as a common test bed. First, two independent simulations using the same traffic scenario were conducted: one by the DLR team using the Controller Assistance for Departure Optimization (CADEO) and the Taxi Routing for Aircraft: Creation and Controlling (TRACC) in a real-time simulation environment, and one by the NASA team based on the Spot and Runway Departure Advisor (SARDA) in a fast-time simulation environment. A set of common performance metrics was defined. The simulation results showed that both approaches produced operational benefits in efficiency, such as reduced taxi times, while maintaining runway throughput. Both approaches generated gate pushback schedules to meet the runway schedule, such that runway utilization was maximized. The conflict-free taxi guidance by TRACC helped avoid taxi conflicts and reduced taxiing stops, but the taxi benefit needed to be assessed together with runway throughput to analyze the overall performance objective.
NASA Astrophysics Data System (ADS)
Darne, Chinmay; Lu, Yujie; Sevick-Muraca, Eva M.
2014-01-01
Emerging fluorescence and bioluminescence tomography approaches share several common features with, yet remain distinct from, the established emission tomographies of PET and SPECT. Although both nuclear and optical imaging modalities involve counting photons, nuclear imaging techniques collect the emitted high-energy (100-511 keV) photons after radioactive decay of radionuclides, while optical techniques count low-energy (1.5-4.1 eV) photons that are scattered and absorbed by tissues, requiring models of light transport for quantitative image reconstruction. Fluorescence imaging has recently been translated into the clinic, demonstrating high sensitivity, modest tissue penetration depth, and fast, millisecond image acquisition times. As a consequence, the promise of quantitative optical tomography as a complement to small animal PET and SPECT remains high. In this review, we summarize the different instrumentation, methodological approaches, and schemas for inverse image reconstruction for optical tomography, including luminescence and fluorescence modalities, and comment on limitations and key technological advances needed for further discovery research and translation.
mRAISE: an alternative algorithmic approach to ligand-based virtual screening.
von Behren, Mathias M; Bietz, Stefan; Nittinger, Eva; Rarey, Matthias
2016-08-01
Ligand-based virtual screening is a well-established method to find new lead molecules in today's drug discovery process. In order to be applicable in day-to-day practice, such methods have to face multiple challenges. The most important is the reliability of the results, which can be shown and compared in retrospective studies. Furthermore, in the case of 3D methods, they need to provide biologically relevant molecular alignments of the ligands that can be further investigated by a medicinal chemist. Last but not least, they have to be able to screen large databases in reasonable time. Many algorithms for ligand-based virtual screening have been proposed in the past, most of them based on pairwise comparisons. Here, a new method called mRAISE is introduced. Based on structural alignments, it uses a descriptor-based bitmap search engine (RAISE) to achieve efficiency. Alignments created on the fly by the search engine are evaluated with an independent shape-based scoring function also used for ranking of compounds. The ranking performance as well as the alignment quality of the method are evaluated and compared to other state-of-the-art methods. On the commonly used Directory of Useful Decoys dataset, mRAISE achieves an average area under the ROC curve of 0.76, an average enrichment factor at 1 % of 20.2 and an average hit rate at 1 % of 55.5. With these results, mRAISE is always among the top-performing methods for which data are available for comparison. To assess the quality of the alignments calculated by ligand-based virtual screening methods, we introduce a new dataset containing 180 prealigned ligands for 11 diverse targets. Within the top ten ranked conformations, the alignment closest to the X-ray structure calculated with mRAISE has a root-mean-square deviation of less than 2.0 Å for 80.8 % of alignment pairs and achieves a median of less than 2.0 Å for eight of the 11 cases. The dataset used to rate the quality of the calculated alignments is freely available.
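The enrichment factor and hit rate quoted above follow from standard screening definitions. As a hedged illustration (the ranked list and labels below are synthetic, not mRAISE output), both metrics at a given fraction of a ranked database can be computed as:

```python
# Hedged sketch: how enrichment factor (EF) and hit rate at a given
# fraction of a ranked screening list are commonly computed.
# The ranking and labels below are illustrative, not mRAISE output.

def screening_metrics(ranked_labels, fraction):
    """ranked_labels: 1 = active, 0 = decoy, best-scored first."""
    n = len(ranked_labels)
    n_top = max(1, int(round(n * fraction)))
    actives_total = sum(ranked_labels)
    actives_top = sum(ranked_labels[:n_top])
    hit_rate = actives_top / n_top                    # fraction of top picks that are active
    ef = (actives_top / n_top) / (actives_total / n)  # enrichment over random picking
    return ef, hit_rate

# Toy example: 1000 compounds, 50 actives, 8 of them ranked in the top 1 %.
ranked = [1] * 8 + [0] * 2 + [1] * 42 + [0] * 948
ef1, hr1 = screening_metrics(ranked, 0.01)
print(round(ef1, 1), round(hr1, 2))  # 16.0 0.8
```

A random ranking gives EF = 1 by construction, so EF at 1 % of 20.2, as reported above, means the top of the list is about twenty times richer in actives than chance.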
A Comparative Study of Probability Collectives Based Multi-agent Systems and Genetic Algorithms
NASA Technical Reports Server (NTRS)
Huang, Chien-Feng; Wolpert, David H.; Bieniawski, Stefan; Strauss, Charles E. M.
2005-01-01
We compare Genetic Algorithms (GAs) with Probability Collectives (PC), a new framework for distributed optimization and control. In contrast to GAs, PC-based methods do not update populations of solutions. Instead they update an explicitly parameterized probability distribution p over the space of solutions. That updating of p arises as the optimization of a functional of p. The functional is chosen so that any p that optimizes it is peaked about good solutions. The PC approach works in both continuous and discrete problems. It does not suffer from the resolution limitation of the finite bit-length encoding of parameters into GA alleles. It also has deep connections with both game theory and statistical physics. We review the PC approach using its motivation as the information-theoretic formulation of bounded rationality for multi-agent systems. It is then compared with GAs on a diverse set of problems. To handle high-dimensional surfaces, in the PC method investigated here p is restricted to a product distribution. Each distribution in that product is controlled by a separate agent. The test functions were selected for their difficulty using either traditional gradient descent or genetic algorithms. On those functions the PC-based approach significantly outperforms traditional GAs in rate of descent, avoidance of trapping in false minima, and long-term optimization.
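The product-distribution idea above can be sketched as follows. This is a simplified, cross-entropy-style illustration, assuming Bernoulli marginals and a Boltzmann reweighting of samples; the authors' exact functional and update rule are not reproduced here.

```python
import random, math

# Hedged sketch of the product-distribution idea behind Probability
# Collectives: each variable ("agent") keeps its own probability of being 1;
# the marginals are re-estimated from Boltzmann-weighted samples so that
# probability mass concentrates on low-cost solutions. Illustrative only.

def onemax_cost(bits):            # toy objective: minimize the number of zeros
    return bits.count(0)

def pc_minimize(n_bits=20, n_samples=60, iters=40, temp=0.5, seed=1):
    rng = random.Random(seed)
    p = [0.5] * n_bits            # product of independent Bernoulli(p_i)
    samples = []
    for _ in range(iters):
        samples = [[1 if rng.random() < pi else 0 for pi in p]
                   for _ in range(n_samples)]
        costs = [onemax_cost(s) for s in samples]
        weights = [math.exp(-c / temp) for c in costs]   # favor low cost
        z = sum(weights)
        # re-estimate each marginal from the weighted samples
        p = [sum(w * s[i] for w, s in zip(weights, samples)) / z
             for i in range(n_bits)]
        p = [min(0.95, max(0.05, pi)) for pi in p]       # keep exploration alive
    return min(samples, key=onemax_cost)

best = pc_minimize()
print(onemax_cost(best))  # cost of the best sampled solution (0 is optimal)
```

Note that, unlike a GA, no individual is ever recombined or mutated: only the parameters of p change, which is the contrast the abstract draws.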
GaAsP solar cells on GaP/Si with low threading dislocation density
NASA Astrophysics Data System (ADS)
Yaung, Kevin Nay; Vaisman, Michelle; Lang, Jordan; Lee, Minjoo Larry
2016-07-01
GaAsP on Si tandem cells represent a promising path towards achieving high efficiency while leveraging the Si solar knowledge base and low-cost infrastructure. However, dislocation densities exceeding 10^8 cm^-2 in GaAsP cells on Si have historically hampered the efficiency of such approaches. Here, we report the achievement of low threading dislocation density values of 4.0-4.6 × 10^6 cm^-2 in GaAsP solar cells on GaP/Si, comparable with more established metamorphic solar cells on GaAs. Our GaAsP solar cells on GaP/Si exhibit high open-circuit voltage and quantum efficiency, allowing them to significantly surpass the power conversion efficiency of previous devices. The results in this work show a realistic path towards dual-junction GaAsP on Si cells with efficiencies exceeding 30%.
NASA Astrophysics Data System (ADS)
Povoleri, A.; Lavagna, M.; Finzi, A. E.
The paper presents a new approach to the preliminary mission analysis and design of particularly complex interplanetary trajectories. A multi-objective optimisation strategy is applied over a mixed continuous and discrete state-variable domain in order to treat possible multi-gravity-assist manoeuvres (GAM) as further degrees of freedom of the problem, in terms of both their number and the planet sequence, so as to minimize both the Δv expense and the trip time span. A further added value of the proposed algorithm is that, for planets having an atmosphere, aero-gravity-assist manoeuvres (AGAM) are also considered within the overall mission design optimisation, and the consequent optimal control problem for the aerodynamic angle histories is solved. Depending on the target planet, different capture strategies are managed by the algorithm, including the aerocapture manoeuvre whenever possible (e.g. Venus or Mars as target planets). To avoid being trapped in local solutions, Evolutionary Algorithms (EAs) have been selected to solve such a complex problem. Simulations and comparisons with already designed space missions showed the ability of the proposed architecture to correctly select both the sequences and the planet types of either GAMs or AGAMs to optimise the selected criteria vector in a multidisciplinary environment, switching on the optimal control problem whenever atmospheric interaction is involved in the search process. Symbols: δ = semi-angular deviation for a GAM between the incoming and outgoing v∞ vectors [rad]; φ = angular deviation for an AGAM between the incoming and outgoing v∞ vectors [rad]; ρ = atmospheric density [kg m^-3]; γ = flight path angle [rad]; µ = bank angle [rad]; Δt_transf,j = j-th heliocentric transfer time variation with respect to the linked-conics solution; Δ|v∞| = relative velocity losses because of drag [m s^-1]; ωI = i
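The semi-angular deviation δ in the symbols list is, for an unpowered flyby, given by the standard two-body hyperbolic relation sin(δ) = 1/(1 + r_p·v∞²/μ). The sketch below illustrates that textbook relation only (the constants and the Mars example are illustrative, not taken from the paper):

```python
import math

# Hedged illustration of the standard unpowered gravity-assist relation
# (textbook two-body flyby, not the paper's algorithm): the semi-angular
# deviation delta between the incoming and outgoing v-infinity directions
# satisfies sin(delta) = 1 / e with hyperbola eccentricity
# e = 1 + r_p * v_inf**2 / mu, r_p = periapsis radius, mu = planet GM.

def flyby_half_turn_angle(v_inf, r_p, mu):
    """Semi-angular deviation delta [rad] of a hyperbolic flyby."""
    e = 1.0 + r_p * v_inf ** 2 / mu     # eccentricity of the flyby hyperbola
    return math.asin(1.0 / e)

# Example: Mars flyby, v_inf = 3 km/s, periapsis 300 km above the surface.
mu_mars = 4.2828e13            # m^3/s^2
r_p = (3389.5 + 300.0) * 1e3   # m
delta = flyby_half_turn_angle(3000.0, r_p, mu_mars)
print(round(math.degrees(2 * delta), 1))   # total turn angle [deg]
```

Lowering the periapsis or the approach speed increases the achievable turn angle, which is the lever an optimiser exploits when choosing the GAM sequence.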
NASA Technical Reports Server (NTRS)
Mattar, F. P.; Teichmann, J.; Bissonnette, L. R.; Maccormack, R. W.
1979-01-01
The paper presents a three-dimensional analysis of the nonlinear light-matter interaction in a hydrodynamic context. It is reported that the resulting equations are a generalization of the Navier-Stokes equations subjected to an internal potential which depends solely upon the fluid density. In addition, three numerical approaches are presented to solve the governing equations using an extension of the MacCormack predictor-corrector scheme. These are a uniform grid, a dynamically rezoned grid, and a splitting technique. It is concluded that the use of adaptive mapping and splitting techniques with the MacCormack two-level predictor-corrector scheme results in an efficient and reliable code whose storage requirements are modest compared with other second-order methods of equal accuracy.
A Methodology for the Hybridization Based in Active Components: The Case of cGA and Scatter Search.
Villagra, Andrea; Alba, Enrique; Leguizamón, Guillermo
2016-01-01
This work presents the results of a new methodology for hybridizing metaheuristics. By first locating the active components (parts) of one algorithm and then inserting them into a second one, we can build efficient and accurate optimization, search, and learning algorithms. This gives a concrete way of constructing new techniques that contrasts with the widespread ad hoc way of hybridizing. In this paper, the enhanced algorithm is a Cellular Genetic Algorithm (cGA), which has been successfully used in the past to find solutions to hard optimization problems. In order to extend and corroborate the use of active components as an emerging hybridization methodology, we propose here the use of active components taken from Scatter Search (SS) to improve the cGA. The results obtained over a varied set of benchmarks are highly satisfactory in efficacy and efficiency when compared with a standard cGA. Moreover, the proposed hybrid approach (i.e., cGA+SS) has shown encouraging results with regard to earlier applications of our methodology.
Algorithms and Algorithmic Languages.
ERIC Educational Resources Information Center
Veselov, V. M.; Koprov, V. M.
This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…
Hua, Hong-Li; Zhang, Fa-Zhan; Labena, Abraham Alemayehu; Dong, Chuan; Jin, Yan-Ting
2016-01-01
Investigation of essential genes is important for understanding the minimal gene set of a cell and discovering potential drug targets. In this study, a novel approach based on multiple homology mapping and a machine learning method was introduced to predict essential genes. We focused on 25 bacteria which have characterized essential genes. The predictions yielded the highest area under the receiver operating characteristic (ROC) curve (AUC) of 0.9716 in a tenfold cross-validation test. Proper features were utilized to construct models to make predictions in distantly related bacteria. The accuracy of predictions was evaluated via the consistency of predictions with the known essential genes of target species. The highest AUC of 0.9552 and an average AUC of 0.8314 were achieved when making predictions across organisms. An independent dataset from Synechococcus elongatus, which was released recently, was obtained for further assessment of the performance of our model. The AUC score of these predictions is 0.7855, which is higher than that of other methods. This research shows that features obtained by homology mapping alone can achieve comparable or even better results than integrated features. Meanwhile, the work indicates that a machine learning-based method can assign more effective weight coefficients than empirical formulas based on biological knowledge. PMID:27660763
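The AUC values quoted above have a direct rank interpretation: AUC is the probability that a randomly chosen positive (essential gene) is scored higher than a randomly chosen negative, with ties counted as 1/2. A minimal sketch on hypothetical scores (not the paper's data):

```python
# Hedged sketch: computing ROC AUC from classifier scores with the
# rank (Mann-Whitney) formulation. Scores and labels are illustrative,
# not from the essential-gene predictor described above.

def roc_auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0        # positive ranked above negative
            elif p == n:
                wins += 0.5        # tie counts half
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2]
labels = [1,   1,   0,   1,   0,   0,   0]
print(roc_auc(scores, labels))  # 0.9166666666666666
```

An AUC of 0.5 corresponds to random scoring, so the cross-organism average of 0.8314 reported above indicates substantial transferable signal.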
Performance Evaluation of the Approaches and Algorithms for Hamburg Airport Operations
NASA Technical Reports Server (NTRS)
Zhu, Zhifan; Jung, Yoon; Lee, Hanbong; Schier, Sebastian; Okuniek, Nikolai; Gerdes, Ingrid
2016-01-01
In this work, fast-time simulations were conducted by NASA using the SARDA tools at Hamburg airport, and real-time simulations were conducted by DLR using CADEO and TRACC with the NLR ATM Research Simulator (NARSIM). The outputs are analyzed using a set of common metrics agreed upon by DLR and NASA. The proposed metrics are derived from the International Civil Aviation Organization's (ICAO) Key Performance Areas (KPAs) of capacity, efficiency, predictability and environment, and are adapted to simulation studies. The results are examined to explore and compare the merits and shortcomings of the two approaches using the common performance metrics. Particular attention is paid to the concept of closed-loop, trajectory-based taxiing as well as the application of the US concept to a European airport. Both teams consider the trajectory-based surface operation concept a critical technology advance, not only in addressing current surface traffic management problems but also in its potential application to unmanned vehicle maneuvering on the airport surface, such as autonomous towing or TaxiBot [6][7] and even Remotely Piloted Aircraft (RPA). Based on this work, a future integration of TRACC and SOSS is described, aiming at bringing the conflict-free, trajectory-based operation concept to US airports.
A comparison of iterative algorithms and a mixed approach for in-line x-ray phase retrieval
Meng, Fanbo; Zhang, Da; Wu, Xizeng; Liu, Hong
2009-01-01
Previous studies have shown that iterative in-line x-ray phase retrieval algorithms may have higher precision than direct retrieval algorithms. This communication compares three iterative phase retrieval algorithms in terms of accuracy and efficiency using computer simulations. We found that the Fourier-transformation-based algorithm (FT) converges fastest, while the Poisson-solver-based algorithm (PS) has higher precision. The traditional Gerchberg-Saxton algorithm (GS) is very slow and sometimes did not converge in our tests. A mixed FT-PS algorithm is then presented to achieve both high efficiency and high accuracy. The mixed algorithm is tested using simulated images with different noise levels and experimentally obtained images of a piece of chicken breast muscle. PMID:20161234
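For readers unfamiliar with the Gerchberg-Saxton baseline mentioned above, a minimal 1-D sketch follows: alternate between the object and Fourier domains, in each domain replacing the magnitude with the measured one while keeping the current phase. This is the generic GS error-reduction iteration, not the paper's FT, PS, or mixed scheme, and the magnitudes below are synthetic.

```python
import cmath

# Hedged 1-D sketch of the classic Gerchberg-Saxton iteration: enforce the
# measured magnitude in each domain while retaining the phase estimate.

def dft(x, inverse=False):
    n = len(x)
    sign = 1 if inverse else -1
    out = [sum(x[k] * cmath.exp(sign * 2j * cmath.pi * j * k / n)
               for k in range(n)) for j in range(n)]
    return [v / n for v in out] if inverse else out

def gerchberg_saxton(obj_mag, fourier_mag, iters=200):
    x = [complex(m, 0.0) for m in obj_mag]          # zero-phase start
    for _ in range(iters):
        X = dft(x)
        X = [m * cmath.exp(1j * cmath.phase(v))      # keep phase, fix magnitude
             for m, v in zip(fourier_mag, X)]
        x = dft(X, inverse=True)
        x = [m * cmath.exp(1j * cmath.phase(v))
             for m, v in zip(obj_mag, x)]
    return x

# Toy test: magnitudes generated from a known complex field.
true = [cmath.exp(1j * 0.3 * k) * (1 + k % 3) for k in range(8)]
obj_mag = [abs(v) for v in true]
fou_mag = [abs(v) for v in dft(true)]

def fourier_error(x):
    return sum(abs(abs(v) - m) for v, m in zip(dft(x), fou_mag))

start = [complex(m, 0.0) for m in obj_mag]
rec = gerchberg_saxton(obj_mag, fou_mag)
print(fourier_error(rec) < fourier_error(start))  # True: mismatch reduced
```

The error of this iteration is known to be non-increasing, which is consistent with the abstract's observation that GS can be slow or stagnate rather than diverge.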
A comparison of iterative algorithms and a mixed approach for in-line x-ray phase retrieval.
Meng, Fanbo; Zhang, Da; Wu, Xizeng; Liu, Hong
2009-08-15
Previous studies have shown that iterative in-line x-ray phase retrieval algorithms may have higher precision than direct retrieval algorithms. This communication compares three iterative phase retrieval algorithms in terms of accuracy and efficiency using computer simulations. We found the Fourier transformation based algorithm (FT) is of the fastest convergence, while the Poisson-solver based algorithm (PS) has higher precision. The traditional Gerchberg-Saxton algorithm (GS) is very slow and sometimes does not converge in our tests. Then a mixed FT-PS algorithm is presented to achieve both high efficiency and high accuracy. The mixed algorithm is tested using simulated images with different noise level and experimentally obtained images of a piece of chicken breast muscle.
Kumar, Sanjiv; Puniya, Bhanwar Lal; Parween, Shahila; Nahar, Pradip; Ramachandran, Srinivasan
2013-01-01
Pathogenic bacteria interacting with a eukaryotic host express adhesins on their surface. These adhesins aid in bacterial attachment to the host cell receptors during colonization. A few adhesins, such as Heparin-binding hemagglutinin adhesin (HBHA), Apa, and Malate Synthase of M. tuberculosis, have been identified using specific experimental interaction models based on the biological knowledge of the pathogen. In the present work, we carried out computational screening for adhesins of M. tuberculosis. We used an integrated computational approach combining SPAAN for predicting adhesins, PSORTb, SubLoc and LocTree for extracellular localization, and BLAST for verifying non-similarity to human proteins. These steps are among the first of reverse vaccinology. Multiple claims and attacks from different algorithms were processed through an argumentation approach. Additional filtration criteria included selection for proteins with low molecular weights and absence of literature reports. We examined the binding potential of the selected proteins using an image-based ELISA. The protein Rv2599 (membrane protein) binds to human fibronectin, laminin and collagen. Rv3717 (N-acetylmuramoyl-L-alanine amidase) and Rv0309 (L,D-transpeptidase) bind to fibronectin and laminin. We report Rv2599 (membrane protein), Rv0309 and Rv3717 as novel adhesins of M. tuberculosis H37Rv. Our results expand the number of known adhesins of M. tuberculosis and suggest their regulated expression in different stages.
Leckenby, J I; Ghali, S; Butler, D P; Grobbelaar, A O
2015-05-01
Facial palsy patients suffer an array of problems ranging from functional to psychological issues. With regard to the eye, lacrimation, lagophthalmos and the inability to spontaneously blink are the main symptoms and, if left untreated, can compromise the cornea and vision. There is a multitude of treatment modalities available, and the surgeon has the challenging prospect of choosing the correct intervention to yield the best outcome for a patient. The accurate assessment of the eye in facial paralysis is described and, by approaching the brow and the eye separately, the treatment options and indications are discussed, broken down into static and dynamic modalities. Based on our unit's experience of more than 35 years and 1000 cases of facial palsy, we have developed a detailed approach to help manage these patients optimally. The aim of this article is to provide the reader with a systematic algorithm that can be used when consulting a patient with eye problems associated with facial palsy. PMID:25656336
Marto, Aminaton; Hajihassani, Mohsen; Armaghani, Danial Jahed; Mohamad, Edy Tonnizam; Makhtar, Ahmad Mahir
2014-01-01
Flyrock is one of the major disturbances induced by blasting and may cause severe damage to nearby structures. This phenomenon has to be precisely predicted and subsequently controlled through changes in the blast design to minimize the potential risk of blasting. The scope of this study is to predict flyrock induced by blasting through a novel approach based on the combination of an imperialist competitive algorithm (ICA) and an artificial neural network (ANN). For this purpose, the parameters of 113 blasting operations were accurately recorded and flyrock distances were measured for each operation. By applying sensitivity analysis, maximum charge per delay and powder factor were determined as the most influential parameters on flyrock. In the light of this analysis, two new empirical predictors were developed to predict flyrock distance. For comparison purposes, a backpropagation (BP) ANN was also developed, and the results were compared with those of the proposed ICA-ANN model and the empirical predictors. The results clearly showed the superiority of the proposed ICA-ANN model over the BP-ANN model and the empirical approaches. PMID:25147856
Rafique, Rashid; Kumar, Sandeep; Luo, Yiqi; Kiely, Gerard; Asrar, Ghassem R.
2015-02-01
The accurate calibration of complex biogeochemical models is essential for the robust estimation of soil greenhouse gases (GHG) as well as other environmental conditions and parameters that are used in research and policy decisions. DayCent is a popular biogeochemical model used both nationally and internationally for this purpose. Despite DayCent's popularity, its complex parameter estimation is often based on experts' knowledge, which is somewhat subjective. In this study we used the inverse modelling parameter estimation software PEST to calibrate the DayCent model based on sensitivity and identifiability analysis. Using previously published N2O and crop yield data as the basis of our calibration approach, we found that half of the 140 parameters used in this study were the primary drivers of calibration differences (i.e. the most sensitive), and the remaining parameters could not be identified given the data set and parameter ranges we used in this study. The post-calibration results showed improvement over the pre-calibration parameter set, based on a 79% decrease in residual differences for N2O fluxes and 84% for crop yield, and an increase in the coefficient of determination of 63% for N2O fluxes and 72% for corn yield. The results of our study suggest that future studies need to better characterize germination temperature, number of degree-days and the temperature dependency of plant growth; these processes were highly sensitive and could not be adequately constrained by the data used in our study. Furthermore, the sensitivity and identifiability analysis was helpful in providing deeper insight into important processes and associated parameters that can lead to further improvement in the calibration of the DayCent model.
NASA Astrophysics Data System (ADS)
Patanè, Domenico; Ferrari, Ferruccio
1999-06-01
A few automated data acquisition and processing systems operate on mainframes, some run on UNIX-based workstations, and others on personal computers equipped with either DOS or UNIX-derived operating systems. Several large and complex software packages for automatic and interactive analysis of seismic data have been developed in recent years, mainly for UNIX-based systems, and some of these programs use a variety of artificial intelligence techniques. Here, the first operational version of a new software package, named PC-Seism, for analyzing seismic data from a local network is presented. This package, composed of three separate modules, provides an example of a new generation of visual object-oriented programs for interactive and automatic seismic data processing run on a personal computer. In particular, we discuss the automatic procedures implemented in the ASDP (Automatic Seismic Data Processing) module. A multi-algorithm approach to the on-line detection and location of local earthquakes is adopted in ASDP, and its operative mode is similar to that used in more complex systems, where the algorithms run on different processors and parallel computations are generally performed. Since highly complex computation routines may still be prohibitive for current PCs when the number of traces to analyze becomes large, we have opted for simplicity and have planned three main routines (working in cascade mode) and a multi-station analysis (MSA) procedure in ASDP to perform phase picking and to declare and locate earthquakes. Basically, signal detection on a single-component trace is obtained by a short-term average to long-term average ratio (STA/LTA) taken along a characteristic function (CF) envelope generated from the seismogram. To confirm and identify earthquake phase arrivals and to discard noise disturbances, two other sections of analysis are applied on short signal windows after the declared triggers. A spectral analysis is applied as a detector of earthquake phase
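The STA/LTA detector described above can be sketched in a few lines. This is a hedged, generic illustration: the squared signal stands in for the characteristic function, and the window lengths and threshold are illustrative choices, not those used in ASDP.

```python
# Hedged sketch of an STA/LTA trigger: the ratio of a short-term average
# to a long-term average of a characteristic function (here simply the
# squared signal) is compared against a threshold. Parameters are
# illustrative, not PC-Seism's.

def sta_lta_trigger(signal, sta_len=3, lta_len=30, threshold=5.0):
    cf = [s * s for s in signal]                   # characteristic function
    triggers = []
    for i in range(lta_len, len(cf)):
        sta = sum(cf[i - sta_len:i]) / sta_len     # short-term average
        lta = sum(cf[i - lta_len:i]) / lta_len     # long-term average
        if lta > 0 and sta / lta >= threshold:
            triggers.append(i)
    return triggers

# Toy trace: low-amplitude noise followed by a strong arrival at sample 40.
trace = [0.1 if i < 40 else 2.0 for i in range(60)]
picks = sta_lta_trigger(trace)
print(picks[0])  # 41: the detector fires one sample after the arrival
```

In practice the short window tracks the sudden energy increase of a phase arrival while the long window tracks the background noise level, which is why the ratio spikes at onsets.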
NASA Astrophysics Data System (ADS)
Borovkov, Alexei I.; Misnik, Yuri Y.
1999-05-01
This paper presents a new approach to the fracture analysis of laminated composite structures (laminates). The first part of the paper is devoted to the general algorithm, which makes it possible to obtain critical stresses for any structure by considering only a strip made from the same laminate. The algorithm is based on the computation of the energy release rates for all three crack modes and yields macro-failure parameters such as critical stresses through the micro-fracture characteristics. The developed algorithm also rests on the locality principle in the mechanics of composite structures and the sequential heterogenization method. The algorithm can be applied both to classical models of laminates with homogeneous layers and to new 3D finite element (FE) models of interfacial cracks in multidirectional composite structures. The results of multilevel, multimodel and multivariant analysis of 3D delamination problems with detailed microstructure in the crack tip zone are presented.
NASA Astrophysics Data System (ADS)
Herrera, Kathleen Kate
In recent years, laser-induced breakdown spectroscopy (LIBS) has become an increasingly popular technique for many diverse applications. This is mainly due to its numerous attractive features, including minimal to no sample preparation, minimal sample invasiveness, sample versatility, remote detection capability and simultaneous multi-elemental capability. However, most LIBS applications are limited to semi-quantitative or relative analysis due to the difficulty of finding matrix-matched standards or a constant reference component in the system for calibration purposes. Therefore, methods which do not require the use of reference standards, hence standard-free, are highly desired. In this research, a general LIBS system was constructed, calibrated and optimized. The corresponding instrumental function and relative spectral efficiency of the detection system were also investigated. In addition, development of a spectral acquisition method was necessary so that data in the wide spectral range from 220 to 700 nm could be obtained using a non-echelle detection system. This requires multiple acquisitions of successive spectral windows and splicing the windows together with optimum overlap using an in-house program written in Q-basic. Two existing standard-free approaches, the calibration-free LIBS (CF-LIBS) technique and the Monte Carlo simulated annealing optimization modeling algorithm for LIBS (MC-LIBS), were experimentally evaluated in this research. The CF-LIBS approach, which is based on the Boltzmann plot method, is used to directly evaluate the plasma temperature, electron number density and relative concentrations of species present in a given sample without the need for reference standards. In the second approach, the initial value problem is solved based on the model of a radiative plasma expanding into vacuum. Here, the prediction of the initial plasma conditions (i.e., temperature and elemental number densities) is achieved by a step-wise Monte Carlo
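The Boltzmann-plot step underlying CF-LIBS can be illustrated with a small linear fit: for each emission line, ln(Iλ/(g·A)) plotted against the upper-level energy E_k falls on a line of slope -1/(k_B·T). The line data below are synthetic, generated for an assumed temperature, not measured values.

```python
# Hedged sketch of the Boltzmann-plot temperature estimate used by CF-LIBS:
# a least-squares line through (E_k, ln(I*lambda/(g_k*A))) has slope
# -1/(k_B * T). Synthetic, noise-free data for an assumed T of 10000 K.

K_B_EV = 8.617e-5                     # Boltzmann constant [eV/K]

def boltzmann_temperature(points):
    """points: list of (E_k [eV], y) with y = ln(I*lambda/(g_k*A))."""
    n = len(points)
    sx = sum(e for e, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(e * e for e, _ in points)
    sxy = sum(e * y for e, y in points)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # least-squares slope
    return -1.0 / (K_B_EV * slope)

T_true = 10000.0
energies = [2.0, 3.1, 4.4, 5.2, 6.0]                    # upper-level energies [eV]
pts = [(e, 7.3 - e / (K_B_EV * T_true)) for e in energies]
print(round(boltzmann_temperature(pts)))  # 10000
```

With real spectra the points scatter about the line, and the intercept (7.3 here, arbitrary) carries the species concentration information that CF-LIBS exploits.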
Bouc-Wen model parameter identification for a MR fluid damper using computationally efficient GA.
Kwok, N M; Ha, Q P; Nguyen, M T; Li, J; Samali, B
2007-04-01
A non-symmetrical Bouc-Wen model is proposed in this paper for magnetorheological (MR) fluid dampers. The model considers the effect of non-symmetrical hysteresis, which was not taken into account in the original Bouc-Wen model. The model parameters are identified with a Genetic Algorithm (GA), exploiting its flexibility in the identification of complex dynamics. The computational efficiency of the proposed GA is improved by absorbing the selection stage into the crossover and mutation operations. Crossover and mutation are also made adaptive to the fitness values, so that their probabilities need not be user-specified. Instead of using a sufficiently large number of generations or a pre-determined fitness value, the algorithm termination criterion is formulated on the basis of a statistical hypothesis test, thus enhancing the performance of the parameter identification. Experimental test data of the damper displacement and force are used to verify the proposed approach, with satisfactory parameter identification results. PMID:17349644
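The fitness-adaptive idea above can be sketched as follows. This is a hedged illustration in the general spirit of adaptive GAs (fitter parents are disturbed less); the paper's exact adaptation rules, Bouc-Wen encoding and hypothesis-test termination are not reproduced, and the two-parameter toy objective is an assumption for demonstration only.

```python
import random

# Hedged sketch of a GA whose crossover/mutation probabilities adapt to
# parent fitness relative to the population. Toy task: identify two model
# parameters by maximizing a fitness with optimum at (1.2, -0.7).

def fitness(ind):
    a, b = ind
    return -((a - 1.2) ** 2 + (b + 0.7) ** 2)

def tournament(pop, fits, rng, k=3):
    return max(rng.sample(range(len(pop)), k), key=lambda i: fits[i])

def adaptive_ga(pop_size=40, gens=80, seed=3):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5), rng.uniform(-5, 5)] for _ in range(pop_size)]
    for _ in range(gens):
        fits = [fitness(p) for p in pop]
        f_max, f_avg = max(fits), sum(fits) / len(fits)
        new_pop = [list(max(pop, key=fitness))]          # elitism
        while len(new_pop) < pop_size:
            i, j = tournament(pop, fits, rng), tournament(pop, fits, rng)
            f_par = max(fits[i], fits[j])
            if f_max == f_avg:                           # degenerate population
                pc, pm = 0.9, 0.1
            else:                                        # fitter parents: lower pc, pm
                scale = (f_max - f_par) / (f_max - f_avg)
                pc = max(0.3, min(1.0, 0.9 * scale))
                pm = max(0.05, min(0.5, 0.1 * scale))
            child = [(m + d) / 2 if rng.random() < pc else m
                     for m, d in zip(pop[i], pop[j])]    # blend crossover
            child = [g + rng.gauss(0, 0.3) if rng.random() < pm else g
                     for g in child]                     # gaussian mutation
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

best = adaptive_ga()
print(round(best[0], 1), round(best[1], 1))  # should approach 1.2 -0.7
```

The adaptive scaling preserves promising individuals while keeping weaker ones exploratory, which is the rationale the abstract gives for folding selection pressure into the variation operators.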
Genetic Algorithm for Optimization: Preprocessor and Algorithm
NASA Technical Reports Server (NTRS)
Sen, S. K.; Shaykhian, Gholam A.
2006-01-01
A genetic algorithm (GA), inspired by Darwin's theory of evolution and employed to solve optimization problems, unconstrained or constrained, uses an evolutionary process. A GA has several parameters, such as the population size, search space, crossover and mutation probabilities, and fitness criterion. These parameters are not universally known or determined a priori for all problems. Depending on the problem at hand, these parameters need to be chosen such that the resulting GA performs best. We present here a preprocessor that achieves just that, i.e., it determines, for a specified problem, the foregoing parameters so that the consequent GA is best suited to the problem. We also stress the need for such a preprocessor for both the quality (error) and the cost (complexity) of producing the solution. The preprocessor includes, as its first step, making use of all available information, such as the nature/character of the function/system, the search space, physical/laboratory experimentation (if already done/available), and the physical environment. It also includes information that can be generated through any means - deterministic/nondeterministic/graphics. Instead of attempting a solution of the problem straightaway through a GA without having or using knowledge of the character of the system, we can consciously do a much better job of producing a solution by using the information generated in the very first step of the preprocessor. We therefore advocate the use of a preprocessor to solve a real-world optimization problem, including NP-complete ones, before using the statistically most appropriate GA. We also include such a GA for unconstrained function optimization problems.
Automatic 3D image registration using voxel similarity measurements based on a genetic algorithm
NASA Astrophysics Data System (ADS)
Huang, Wei; Sullivan, John M., Jr.; Kulkarni, Praveen; Murugavel, Murali
2006-03-01
An automatic 3D non-rigid body registration system based upon the genetic algorithm (GA) process is presented. The system has been successfully applied to 2D and 3D situations using both rigid-body and affine transformations. Conventional optimization techniques and gradient search strategies generally require a good initial start location. The GA approach avoids the local minima/maxima traps of conventional optimization techniques. Based on the principles of Darwinian natural selection (survival of the fittest), the genetic algorithm has two basic steps: 1. Randomly generate an initial population. 2. Repeatedly apply the natural selection operation until a termination measure is satisfied. The natural selection process selects individuals based on their fitness to participate in the genetic operations, and it creates new individuals by inheritance from both parents, genetic recombination (crossover) and mutation. Once the termination criteria are satisfied, the optimum is selected from the population. The algorithm was applied to 2D and 3D magnetic resonance images (MRI). It does not require any preprocessing such as thresholding, smoothing, segmentation, or definition of base points or edges. To evaluate the performance of the GA registration, the results were compared with results of the Automatic Image Registration technique (AIR) and with manual registration, which was used as the gold standard. Results showed that our GA implementation is a robust algorithm and gives results very close to the gold standard. A pre-cropping strategy was also discussed as an efficient preprocessing step to enhance the registration accuracy.
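The two basic steps listed above can be sketched as a minimal GA; the bitstring encoding, binary tournament selection, one-point crossover, and generation-budget termination are illustrative choices, not the paper's configuration:

```python
import random

def genetic_algorithm(fitness, n_bits=20, pop_size=30, generations=60,
                      pc=0.8, pm=0.02, seed=1):
    """Minimal GA sketch: (1) random initial population, (2) repeated
    selection, crossover and mutation until a termination measure
    (here, a fixed generation budget) is met."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def select():  # binary tournament: fitter of two random individuals
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = select(), select()
            if rng.random() < pc:  # one-point crossover
                cut = rng.randrange(1, n_bits)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for child in (p1, p2):  # bit-flip mutation
                nxt.append([1 - g if rng.random() < pm else g for g in child])
        pop = nxt[:pop_size]
    return max(pop, key=fitness)  # fittest member of the final population
```

For example, `genetic_algorithm(sum)` drives a 20-bit string toward all ones.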
NASA Astrophysics Data System (ADS)
Mousavi, Seyed Hosein; Nazemi, Ali; Hafezalkotob, Ashkan
2015-12-01
With the formation of competitive electricity markets around the world, optimization of bidding strategies has become one of the main topics in studies related to market design. Market design is challenged by multiple objectives that need to be satisfied. The solution of such multi-objective problems is often searched over the combined strategy space, and thus requires the simultaneous optimization of multiple parameters. The problem is formulated analytically using the Nash equilibrium concept for games composed of large numbers of players having discrete and large strategy spaces. The solution methodology is based on a characterization of Nash equilibrium in terms of minima of a function and relies on a metaheuristic optimization approach to find these minima. This paper presents several metaheuristic algorithms - genetic algorithm (GA), simulated annealing (SA) and a hybrid simulated annealing genetic algorithm (HSAGA) - that simulate how generators bid in the spot electricity market to maximize their profit given the other generators' strategies, and compares their results. Since both GA and SA are generic search methods, HSAGA is a generic search method as well. The model, based on actual data, is applied to a peak hour of Tehran's wholesale spot market in 2012. The simulation results show that GA outperforms SA and HSAGA in computing time, number of function evaluations and computational stability, and that the Nash equilibria calculated by GA vary less from run to run than those of the other algorithms.
Improved interpretation of satellite altimeter data using genetic algorithms
NASA Technical Reports Server (NTRS)
Messa, Kenneth; Lybanon, Matthew
1992-01-01
Genetic algorithms (GAs) are optimization techniques that are based on the mechanics of evolution and natural selection. They take advantage of the power of cumulative selection, in which successive incremental improvements in a solution structure become the basis for continued development. A GA is an iterative procedure that maintains a 'population' of 'organisms' (candidate solutions). Through successive 'generations' (iterations) the population as a whole improves, in simulation of Darwin's 'survival of the fittest'. GAs have been shown to be successful where noise significantly reduces the ability of other search techniques to work effectively. Satellite altimetry provides useful information about oceanographic phenomena. It provides rapid global coverage of the oceans and is not as severely hampered by cloud cover as infrared imagery. Despite these and other benefits, several factors lead to significant difficulty in interpretation. The GA approach to the improved interpretation of satellite data involves representing the ocean surface model as a string of parameters or coefficients from the model. The GA searches, in parallel, a population of such representations (organisms) to obtain the individual that is best suited to 'survive', that is, the fittest as measured with respect to some 'fitness' function. The fittest organism is the one that best represents the ocean surface model with respect to the altimeter data.
Ban, Hiroshi; Yamamoto, Hiroki
2013-01-01
In almost all of the recent vision experiments, stimuli are controlled via computers and presented on display devices such as cathode ray tubes (CRTs). Display characterization is a necessary procedure for such computer-aided vision experiments. The standard display characterization called "gamma correction" and the following linear color transformation procedure are established for CRT displays and widely used in the current vision science field. However, the standard two-step procedure is based on the internal model of CRT display devices, and there is no guarantee as to whether the method is applicable to the other types of display devices such as liquid crystal display and digital light processing. We therefore tested the applicability of the standard method to these kinds of new devices and found that the standard method was not valid for these new devices. To overcome this problem, we provide several novel approaches for vision experiments to characterize display devices, based on linear, nonlinear, and hybrid search algorithms. These approaches never assume any internal models of display devices and will therefore be applicable to any display type. The evaluations and comparisons of chromaticity estimation accuracies based on these new methods with those of the standard procedure proved that our proposed methods largely improved the calibration efficiencies for non-CRT devices. Our proposed methods, together with the standard one, have been implemented in a MATLAB-based integrated graphical user interface software named Mcalibrator2. This software can enhance the accuracy of vision experiments and enable more efficient display characterization procedures. The software is now available publicly for free. PMID:23729771
NASA Astrophysics Data System (ADS)
Vanderstraeten, Barbara; De Gersem, Werner; Duthoy, Wim; De Neve, Wilfried; Thierens, Hubert
2006-08-01
The development of new biological imaging technologies offers the opportunity to further individualize radiotherapy. Biologically conformal radiation therapy (BCRT) implies the use of the spatial distribution of one or more radiobiological parameters to guide the IMRT dose prescription. Our aim was to implement BCRT in an algorithmic segmentation-based planning approach. A biology-based segmentation tool was developed to generate initial beam segments that reflect the biological signal intensity pattern. The weights and shapes of the initial segments are optimized by means of an objective function that minimizes the root mean square deviation between the actual and intended dose values within the PTV. As proof of principle, [18F]FDG-PET-guided BCRT plans for two different levels of dose escalation were created for an oropharyngeal cancer patient. Both plans proved to be dosimetrically feasible without violating the planning constraints for the expanded spinal cord and the contralateral parotid gland as organs at risk. The obtained biological conformity was better for the first (2.5 Gy per fraction) than for the second (3 Gy per fraction) dose escalation level.
Singh, S; Modi, S; Bagga, D; Kaur, P; Shankar, L R; Khushu, S
2013-03-01
The present study aimed to investigate whether brain morphological differences exist between adult hypothyroid subjects and age-matched controls using voxel-based morphometry (VBM) with diffeomorphic anatomic registration via an exponentiated lie algebra algorithm (DARTEL) approach. High-resolution structural magnetic resonance images were taken in ten healthy controls and ten hypothyroid subjects. The analysis was conducted using statistical parametric mapping. The VBM study revealed a reduction in grey matter volume in the left postcentral gyrus and cerebellum of hypothyroid subjects compared to controls. A significant reduction in white matter volume was also found in the cerebellum, right inferior and middle frontal gyrus, right precentral gyrus, right inferior occipital gyrus and right temporal gyrus of hypothyroid patients compared to healthy controls. Moreover, no meaningful cluster for greater grey or white matter volume was obtained in hypothyroid subjects compared to controls. Our study is the first VBM study of hypothyroidism in an adult population and suggests that, compared to controls, this disorder is associated with differences in brain morphology in areas corresponding to known functional deficits in attention, language, motor speed, visuospatial processing and memory in hypothyroidism.
Gálvez, Akemi; Iglesias, Andrés
2013-01-01
Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. Firstly, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but precomputed with a coarse approximation scheme. Secondly, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to optimization of data parameterization; then, the knot vector is refined by using De Boor's method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently. PMID:24376380
Ultra-Thin, Triple-Bandgap GaInP/GaAs/GaInAs Monolithic Tandem Solar Cells
Wanlass, M. W.; Ahrenkiel, S. P.; Albin, D. S.; Carapella, J. J.; Duda, A.; Emery, K.; Geisz, J. F.; Jones, K.; Kurtz, S.; Moriarty, T.; Romero, M. J.
2007-02-01
The performance of state-of-the-art, series-connected, lattice-matched (LM), triple-junction (TJ), III-V tandem solar cells could be improved substantially (10-12%) by replacing the Ge bottom subcell with a subcell having a bandgap of ~1 eV. For the last several years, research has been conducted by a number of organizations to develop ~1-eV, LM GaInAsN to provide such a subcell, but, so far, the approach has proven unsuccessful. Thus, the need for a high-performance, monolithically integrable, 1-eV subcell for TJ tandems has remained. In this paper, we present a new TJ tandem cell design that addresses the above-mentioned problem. Our approach involves inverted epitaxial growth to allow the monolithic integration of a lattice-mismatched (LMM) ~1-eV GaInAs/GaInP double-heterostructure (DH) bottom subcell with LM GaAs (middle) and GaInP (top) upper subcells. A transparent GaInP compositionally graded layer facilitates the integration of the LM and LMM components. Handle-mounted, ultra-thin device fabrication is a natural consequence of the inverted-structure approach, which results in a number of advantages, including robustness, potential low cost, improved thermal management, incorporation of back-surface reflectors, and possible reclamation/reuse of the parent crystalline substrate for further cost reduction. Our initial work has concerned GaInP/GaAs/GaInAs tandem cells grown on GaAs substrates. In this case, the 1-eV GaInAs experiences 2.2% compressive LMM with respect to the substrate. Specially designed GaInP graded layers are used to produce 1-eV subcells with performance parameters nearly equaling those of LM devices with the same bandgap (e.g., LM, 1-eV GaInAsP grown on InP). Previously, we reported preliminary ultra-thin tandem devices (0.237 cm²) with NREL-confirmed efficiencies of 31.3% (global spectrum, one sun) (1), 29.7% (AM0 spectrum, one sun) (2), and 37.9% (low-AOD direct spectrum, 10.1 suns) (3), all at 25 °C. Here, we
Ultra-Thin, Triple-Bandgap GaInP/GaAs/GaInAs Monolithic Tandem Solar Cells
NASA Technical Reports Server (NTRS)
Wanlass, M. W.; Ahrenkiel, S. P.; Albin, D. S.; Carapella, J. J.; Duda, A.; Emery, K.; Geisz, J. F.; Jones, K.; Kurtz, Sarah; Moriarty, T.; Romero, M. J.
2007-01-01
The performance of state-of-the-art, series-connected, lattice-matched (LM), triple-junction (TJ), III-V tandem solar cells could be improved substantially (10-12%) by replacing the Ge bottom subcell with a subcell having a bandgap of ~1 eV. For the last several years, research has been conducted by a number of organizations to develop ~1-eV, LM GaInAsN to provide such a subcell, but, so far, the approach has proven unsuccessful. Thus, the need for a high-performance, monolithically integrable, 1-eV subcell for TJ tandems has remained. In this paper, we present a new TJ tandem cell design that addresses the above-mentioned problem. Our approach involves inverted epitaxial growth to allow the monolithic integration of a lattice-mismatched (LMM) ~1-eV GaInAs/GaInP double-heterostructure (DH) bottom subcell with LM GaAs (middle) and GaInP (top) upper subcells. A transparent GaInP compositionally graded layer facilitates the integration of the LM and LMM components. Handle-mounted, ultra-thin device fabrication is a natural consequence of the inverted-structure approach, which results in a number of advantages, including robustness, potential low cost, improved thermal management, incorporation of back-surface reflectors, and possible reclamation/reuse of the parent crystalline substrate for further cost reduction. Our initial work has concerned GaInP/GaAs/GaInAs tandem cells grown on GaAs substrates. In this case, the 1-eV GaInAs experiences 2.2% compressive LMM with respect to the substrate. Specially designed GaInP graded layers are used to produce 1-eV subcells with performance parameters nearly equaling those of LM devices with the same bandgap (e.g., LM, 1-eV GaInAsP grown on InP). Previously, we reported preliminary ultra-thin tandem devices (0.237 cm²) with NREL-confirmed efficiencies of 31.3% (global spectrum, one sun) (1), 29.7% (AM0 spectrum, one sun) (2), and 37.9% (low-AOD direct spectrum, 10.1 suns) (3), all at 25 °C. Here, we include
NASA Astrophysics Data System (ADS)
Tarnawska, L.; Giussani, A.; Zaumseil, P.; Schubert, M. A.; Paszkiewicz, R.; Brandt, O.; Storck, P.; Schroeder, T.
2010-09-01
The preparation of GaN virtual substrates on Si wafers via buffer layers is intensively pursued for high-power/high-frequency electronics as well as optoelectronics applications. Here, GaN is integrated on the Si platform by a novel engineered bilayer oxide buffer, namely Sc2O3/Y2O3, which gradually reduces the lattice misfit of ~-17% between GaN and Si. Single-crystalline GaN(0001)/Sc2O3(111)/Y2O3(111)/Si(111) heterostructures were prepared by molecular beam epitaxy and characterized ex situ by various techniques. Laboratory-based x-ray diffraction shows that the epitaxial Sc2O3 grows fully relaxed on the Y2O3/Si(111) support, creating a high-quality template for subsequent GaN overgrowth. The high structural quality of the Sc2O3 film is demonstrated by the fact that the concentration of extended planar defects in the preferred {111} slip planes is below the detection limit of synchrotron-based diffuse x-ray scattering studies. Transmission electron microscopy (TEM) analysis reveals that the full relaxation of the -7% lattice misfit between the isomorphic oxides is achieved by a network of misfit dislocations at the Sc2O3/Y2O3 interface. X-ray reflectivity and TEM prove that closed epitaxial GaN layers as thin as 30 nm can be grown on these templates. Finally, the GaN thin film quality is studied using a detailed Williamson-Hall analysis.
Ligand "Brackets" for Ga-Ga Bond.
Fedushkin, Igor L; Skatova, Alexandra A; Dodonov, Vladimir A; Yang, Xiao-Juan; Chudakova, Valentina A; Piskunov, Alexander V; Demeshko, Serhiy; Baranov, Evgeny V
2016-09-01
The reactivity of digallane (dpp-Bian)Ga-Ga(dpp-Bian) (1) (dpp-Bian = 1,2-bis[(2,6-diisopropylphenyl)imino]acenaphthene) toward acenaphthenequinone (AcQ), sulfur dioxide, and azobenzene was investigated. The reaction of 1 with AcQ in 1:1 molar ratio proceeds via two-electron reduction of AcQ to give (dpp-Bian)Ga(μ2-AcQ)Ga(dpp-Bian) (2), in which the diolate [AcQ](2-) acts as a "bracket" for the Ga-Ga bond. The interaction of 1 with AcQ in 1:2 molar ratio proceeds with oxidation of both dpp-Bian ligands as well as of the Ga-Ga bond to give (dpp-Bian)Ga(μ2-AcQ)2Ga(dpp-Bian) (3). At 330 K in toluene, complex 2 decomposes to give compounds 3 and 1. The reaction of complex 2 with atmospheric oxygen results in oxidation of the Ga-Ga bond and affords (dpp-Bian)Ga(μ2-AcQ)(μ2-O)Ga(dpp-Bian) (4). The reaction of digallane 1 with SO2 produces, depending on the ratio (1:2 or 1:4), the dithionites (dpp-Bian)Ga(μ2-O2S-SO2)Ga(dpp-Bian) (5) and (dpp-Bian)Ga(μ2-O2S-SO2)2Ga(dpp-Bian) (6). In compound 5 the Ga-Ga bond is preserved and supported by a dianionic dithionite bracket. In compound 6 the gallium centers are bridged by two dithionite ligands. Both 5 and 6 contain dpp-Bian radical-anion ligands. Four-electron reduction of azobenzene with 1 mol equiv of digallane 1 leads to complex (dpp-Bian)Ga(μ2-NPh)2Ga(dpp-Bian) (7). Paramagnetic compounds 2-7 were characterized by electron spin resonance spectroscopy, and their molecular structures were established by single-crystal X-ray analysis. The magnetic behavior of compounds 2, 5, and 6 was investigated by the superconducting quantum interference device technique in the range of 2-295 K. PMID:27548713
Belwin Edward, J; Rajasekar, N; Sathiyasekar, K; Senthilnathan, N; Sarjila, R
2013-09-01
Obtaining an optimal power flow solution is a strenuous task for any power system engineer. The inclusion of FACTS devices in the power system network adds to its complexity. The dual objective of OPF with fuel cost minimization along with FACTS device location for the IEEE 30-bus system is considered and solved using the proposed Enhanced Bacterial Foraging Algorithm (EBFA). The conventional Bacterial Foraging Algorithm (BFA) has the difficulty of optimal parameter selection. Hence, in this paper, BFA is enhanced by including the Nelder-Mead (NM) algorithm for better performance. A MATLAB code for EBFA is developed and the problem of optimal power flow with inclusion of FACTS devices is solved. After several runs with different initial values, it is found that the inclusion of FACTS devices such as SVC and TCSC in the network reduces the generation cost along with increased voltage stability limits. It is also observed that the proposed algorithm requires less computational time compared to earlier proposed algorithms.
Albert, Jaroslav
2016-01-01
Modeling stochastic behavior of chemical reaction networks is an important endeavor in many aspects of chemistry and systems biology. The chemical master equation (CME) and the Gillespie algorithm (GA) are the two most fundamental approaches to such modeling; however, each of them has its own limitations: the GA may require long computing times, while the CME may demand unrealistic memory storage capacity. We propose a method that combines the CME and the GA that allows one to simulate stochastically a part of a reaction network. First, a reaction network is divided into two parts. The first part is simulated via the GA, while the solution of the CME for the second part is fed into the GA in order to update its propensities. The advantage of this method is that it avoids the need to solve the CME or stochastically simulate the entire network, which makes it highly efficient. One of its drawbacks, however, is that most of the information about the second part of the network is lost in the process. Therefore, this method is most useful when only partial information about a reaction network is needed. We tested this method against the GA on two systems of interest in biology--the gene switch and the Griffith model of a genetic oscillator--and have shown it to be highly accurate. Comparing this method to four different stochastic algorithms revealed it to be at least an order of magnitude faster than the fastest among them. PMID:26930199
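Note that "GA" in this abstract denotes the Gillespie algorithm, not a genetic algorithm. Its direct method can be sketched as follows; the function and argument names are illustrative, not from the paper:

```python
import math
import random

def gillespie(x0, rates, stoich, t_end, seed=0):
    """Direct-method Gillespie SSA sketch. x0: initial molecule counts;
    rates: per-reaction propensity functions of the state; stoich:
    per-reaction state-change vectors."""
    rng = random.Random(seed)
    t, x = 0.0, list(x0)
    while True:
        a = [r(x) for r in rates]      # propensities in the current state
        a0 = sum(a)
        if a0 == 0.0:                  # no reaction can fire
            break
        t += -math.log(1.0 - rng.random()) / a0  # exponential waiting time
        if t > t_end:
            break
        u, j, acc = rng.random() * a0, 0, a[0]   # pick reaction j with
        while acc < u:                           # probability a_j / a0
            j += 1
            acc += a[j]
        x = [xi + s for xi, s in zip(x, stoich[j])]
    return x
```

In the hybrid scheme described above, the propensity functions would additionally be updated from the CME solution of the other subnetwork at each step.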
Optimal groundwater remediation using artificial neural networks and the genetic algorithm
Rogers, L.L.
1992-08-01
An innovative computational approach for the optimization of groundwater remediation is presented which uses artificial neural networks (ANNs) and the genetic algorithm (GA). In this approach, the ANN is trained to predict an aspect of the outcome of a flow and transport simulation. Then the GA searches through realizations or patterns of pumping and uses the trained network to predict the outcome of the realizations. This approach has the advantages of parallel processing of the groundwater simulations and the ability to "recycle", or reuse, the base of knowledge formed by these simulations. These advantages offer a reduction of the computational burden of the groundwater simulations relative to a more conventional approach which uses nonlinear programming (NLP) with a quasi-Newtonian search. Also, the modular nature of this approach facilitates substitution of different groundwater simulation models.
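The surrogate-assisted search idea can be sketched as below; `surrogate` stands in for the trained ANN, and the operators (truncation selection, bit-flip mutation over pumping patterns) are assumptions for illustration, not the paper's exact configuration:

```python
import random

def surrogate_search(surrogate, n_wells=8, pop_size=20, generations=40, seed=2):
    """Sketch of the ANN+GA idea: the GA scores candidate pumping patterns
    (one bit per well) with a cheap trained surrogate instead of running
    the expensive flow-and-transport simulator."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_wells)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=surrogate, reverse=True)
        elite = pop[: pop_size // 2]   # keep the better half unchanged
        pop = elite + [
            [1 - g if rng.random() < 0.1 else g for g in rng.choice(elite)]
            for _ in range(pop_size - len(elite))
        ]
    return max(pop, key=surrogate)
```

Because only the surrogate is evaluated inside the loop, the expensive simulator is needed only to generate training data, which can be reused across optimization runs.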
NASA Astrophysics Data System (ADS)
Chen, Fang; Chang, Honglong; Yuan, Weizheng; Wilcock, Reuben; Kraft, Michael
2012-10-01
This paper describes a novel multiobjective parameter optimization method based on a genetic algorithm (GA) for the design of a sixth-order continuous-time, force-feedback band-pass sigma-delta modulator (BP-ΣΔM) interface for the sense mode of a MEMS gyroscope. The design procedure starts by deriving a parameterized Simulink model of the BP-ΣΔM gyroscope interface. The system parameters are then optimized by the GA. Consequently, the optimized design is tested for robustness by a Monte Carlo analysis to find a solution that is both optimal and robust. System-level simulations result in a signal-to-noise ratio (SNR) larger than 90 dB in a bandwidth of 64 Hz with a 200° s⁻¹ angular rate input signal; the noise floor is about -100 dBV/√Hz. The simulations are compared to measured data from a hardware implementation. For zero input rotation with the gyroscope operating at atmospheric pressure, the spectrum of the output bitstream shows an obvious band-pass noise shaping and a deep notch at the gyroscope resonant frequency. The noise floor of the measured power spectral density (PSD) of the output bitstream agrees well with simulation of the optimized system-level model. The bias stability, rate sensitivity and nonlinearity of the gyroscope controlled by the optimized BP-ΣΔM closed-loop interface are 34.15° h⁻¹, 22.3 mV/(° s⁻¹) and 98 ppm, respectively. This compares to a simple open-loop interface, for which the corresponding values are 89° h⁻¹, 14.3 mV/(° s⁻¹) and 7600 ppm, and to a nonoptimized BP-ΣΔM closed-loop interface with corresponding values of 60° h⁻¹, 17 mV/(° s⁻¹) and 200 ppm.
Bell-Curve Based Evolutionary Optimization Algorithm
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.; Laba, K.; Kincaid, R.
1998-01-01
The paper presents an optimization algorithm that falls into the category of genetic, or evolutionary, algorithms. While bit exchange is the basis of most genetic algorithms (GAs) in research and applications in America, some alternatives, also in the category of evolutionary algorithms but using a direct, geometrical approach, have gained popularity in Europe and Asia. The Bell-Curve Based Evolutionary Algorithm (BCB) is in this alternative category and is distinguished by the use of a combination of n-dimensional geometry and the normal distribution, the bell curve, in the generation of offspring. The tool for creating a child is a geometrical construct comprising a line connecting two parents and a weighted point on that line. The point that defines the child deviates from the weighted point in two directions, parallel and orthogonal to the connecting line, with the deviation in each direction obeying a probabilistic distribution. Tests showed satisfactory performance of BCB. The principal advantage of BCB is its controllability via the normal distribution parameters and the geometrical construct variables.
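The offspring construction described above can be sketched as follows; the weighting scheme and the standard-deviation parameters are assumptions for illustration, not the authors' exact settings:

```python
import random

def bcb_child(p1, p2, w=None, s_par=0.1, s_orth=0.1, rng=random):
    """Bell-Curve Based offspring sketch: take a weighted point on the
    line joining two parents, then perturb it parallel and orthogonal to
    that line with normally distributed deviations."""
    n = len(p1)
    w = rng.random() if w is None else w
    d = [b - a for a, b in zip(p1, p2)]           # parent-connecting vector
    norm = sum(c * c for c in d) ** 0.5 or 1.0
    u = [c / norm for c in d]                     # unit direction of the line
    base = [a + w * c for a, c in zip(p1, d)]     # weighted point on the line
    t = rng.gauss(0.0, s_par)                     # parallel deviation
    child = [b + t * c for b, c in zip(base, u)]
    # orthogonal deviation: random vector minus its projection onto u
    r = [rng.gauss(0.0, 1.0) for _ in range(n)]
    proj = sum(a * b for a, b in zip(r, u))
    o = [a - proj * b for a, b in zip(r, u)]
    onorm = sum(c * c for c in o) ** 0.5 or 1.0
    s = rng.gauss(0.0, s_orth)
    return [c + s * a / onorm for c, a in zip(child, o)]
```

With both deviations set to zero the child collapses to the weighted point itself, which makes the construct easy to verify.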
Simplified 2DEG carrier concentration model for composite barrier AlGaN/GaN HEMT
Das, Palash; Biswas, Dhrubes
2014-04-24
The self-consistent solution of the Schrodinger and Poisson equations is used along with the total charge depletion model and applied, with a novel approach, to a composite-AlGaN-barrier HEMT heterostructure. The solution led to a completely new analytical model for Fermi energy level vs. 2DEG carrier concentration, which was eventually used to derive a new analytical model for the temperature-dependent 2DEG carrier concentration in AlGaN/GaN HEMTs.
An investigation of messy genetic algorithms
NASA Technical Reports Server (NTRS)
Goldberg, David E.; Deb, Kalyanmoy; Korb, Bradley
1990-01-01
Genetic algorithms (GAs) are search procedures based on the mechanics of natural selection and natural genetics. They combine the use of string codings, or artificial chromosomes, and populations with the selective and juxtapositional power of reproduction and recombination to motivate a surprisingly powerful search heuristic in many problems. Despite their empirical success, there has been a long-standing objection to the use of GAs in arbitrarily difficult problems. A new approach was therefore launched. Results for a 30-bit, order-three-deception problem were obtained using a new type of genetic algorithm called a messy genetic algorithm (mGA). Messy genetic algorithms combine the use of variable-length strings, a two-phase selection scheme, and messy genetic operators to effect a solution to the fixed-coding problem of standard simple GAs. The results of the study of mGAs in problems with nonuniform subfunction scale and size are presented. The mGA approach is summarized, covering both its operation and the theory of its use. Experiments on problems of varying scale, varying building-block size, and combined varying scale and size are presented.
Genetic algorithms for geophysical parameter inversion from altimeter data
NASA Astrophysics Data System (ADS)
Ramillien, Guillaume
2001-11-01
A new approach for inverting several geophysical parameters at the same time from altimeter and marine data by implementing genetic algorithms (GAs) is presented. These original techniques of optimization based on non-deterministic rules simulate the evolution of a population of candidate solutions for a given objective function to minimize. They offer a robust and efficient alternative to gradient techniques for non-linear parameter inversion. Here genetic algorithms are used for solving a discrete gravity problem of data associated with an undersea relief, to retrieve seven parameters at the same time: the elastic thickness, the mean ocean depth, the seamount location (longitude/latitude), its amplitude, radius and density from its observed gravity/geoid signature. This approach was also successfully used to adjust lithosphere parameters in the real case of the Rarotonga seamount [21.2°S, 159.8°W] in the Southern Cook Islands region, where GA simulations provided robust estimates of these seven parameters. The GA found very realistic values for the mean ocean depth and the seamount amplitude and the precise geographical location of Rarotonga Island. Moreover, the values of elastic thickness (~14-15 km) and seamount density (~2850-2870 kg m⁻³) estimated by the GA are consistent with the ones proposed in earlier studies.
Feature Subset Selection by Estimation of Distribution Algorithms
Cantu-Paz, E
2002-01-17
This paper describes the application of four evolutionary algorithms to the identification of feature subsets for classification problems. Besides a simple GA, the paper considers three estimation of distribution algorithms (EDAs): a compact GA, an extended compact GA, and the Bayesian Optimization Algorithm. The objective is to determine if the EDAs present advantages over the simple GA in terms of accuracy or speed in this problem. The experiments used a Naive Bayes classifier and public-domain and artificial data sets. In contrast with previous studies, we did not find evidence to support or reject the use of EDAs for this problem.
GaInP/GaAs/GaInAs Monolithic Tandem Cells for High-Performance Solar Concentrators
Wanlass, M. W.; Ahrenkiel, S. P.; Albin, D. S.; Carapella, J. J.; Duda, A.; Emery, K.; Geisz, J. F.; Jones, K.; Kurtz, S.; Moriarty, T.; Romero, M. J.
2005-08-01
We present a new approach for ultra-high-performance tandem solar cells that involves inverted epitaxial growth and ultra-thin device processing. The additional degree of freedom afforded by the inverted design allows the monolithic integration of high- and medium-bandgap, lattice-matched (LM) subcell materials with lower-bandgap, lattice-mismatched (LMM) materials in a tandem structure through the use of transparent compositionally graded layers. The current work concerns an inverted, series-connected, triple-bandgap, GaInP (LM, 1.87 eV)/GaAs (LM, 1.42 eV)/GaInAs (LMM, ~1 eV) device structure grown on a GaAs substrate. Ultra-thin tandem devices are fabricated by mounting the epiwafers to pre-metallized Si wafer handles and selectively removing the parent GaAs substrate. The resulting handle-mounted, ultra-thin tandem cells have a number of important advantages, including improved performance and potential reclamation/reuse of the parent substrate for epitaxial growth. Additionally, realistic performance modeling calculations suggest that terrestrial concentrator efficiencies in the range of 40-45% are possible with this new tandem cell approach. A laboratory-scale (0.24 cm²), prototype GaInP/GaAs/GaInAs tandem cell with a terrestrial concentrator efficiency of 37.9% at a low concentration ratio (10.1 suns) is described, which surpasses the previous world record efficiency of 37.3%.
Genetic algorithm and the application for job shop group scheduling
NASA Astrophysics Data System (ADS)
Mao, Jianzhong; Wu, Zhiming
1995-08-01
The genetic algorithm (GA) is a heuristic, randomized search technique that mimics natural evolution. This paper first presents the basic principle of the GA, the definition and function of the genetic operators, and the principal characteristics of the GA. On this basis, the paper proposes the GA as a new solution method for the job-shop group scheduling problem, and discusses the coded representation of feasible solutions and the particular constraints imposed on the genetic operators.
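The kind of coded representation such scheduling GAs rely on can be sketched with a permutation chromosome on a toy single-machine instance. The instance data, the order crossover, and the swap mutation below are illustrative assumptions, not the paper's actual group-scheduling encoding:

```python
import random

random.seed(1)

# Toy instance (assumed): processing time of each job on one machine;
# the chromosome is a permutation of job indices.
times = [4, 2, 7, 1, 5, 3]

def total_flow_time(perm):
    """Sum of job completion times -- a standard scheduling objective."""
    t, total = 0, 0
    for j in perm:
        t += times[j]
        total += t
    return total

def order_crossover(p1, p2):
    """OX: copy a slice from p1, fill the remaining genes in p2's order,
    so the child is always a valid permutation."""
    a, b = sorted(random.sample(range(len(p1)), 2))
    hole = set(p1[a:b])
    filler = [g for g in p2 if g not in hole]
    return filler[:a] + p1[a:b] + filler[a:]

def swap_mutation(perm):
    """Exchange two positions, preserving permutation feasibility."""
    i, j = random.sample(range(len(perm)), 2)
    perm = perm[:]
    perm[i], perm[j] = perm[j], perm[i]
    return perm

pop = [random.sample(range(len(times)), len(times)) for _ in range(30)]
for _ in range(80):
    pop.sort(key=total_flow_time)
    parents = pop[:10]                      # truncation selection
    children = [swap_mutation(order_crossover(*random.sample(parents, 2)))
                for _ in range(20)]
    pop = parents + children

best = min(pop, key=total_flow_time)
```

The operator constraints the abstract alludes to show up here concretely: plain one-point crossover would produce invalid schedules, so permutation-safe operators such as OX are required.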
NASA Astrophysics Data System (ADS)
Ivanova, N.; Pedersen, L. T.; Tonboe, R. T.; Kern, S.; Heygster, G.; Lavergne, T.; Sørensen, A.; Saldo, R.; Dybkjær, G.; Brucker, L.; Shokr, M.
2015-09-01
Sea ice concentration has been retrieved in polar regions with satellite microwave radiometers for over 30 years. However, the question remains as to what is an optimal sea ice concentration retrieval method for climate monitoring. This paper presents some of the key results of an extensive algorithm inter-comparison and evaluation experiment. The skills of 30 sea ice algorithms were evaluated systematically over low and high sea ice concentrations. Evaluation criteria included standard deviation relative to independent validation data, performance in the presence of thin ice and melt ponds, and sensitivity to error sources with seasonal to inter-annual variations and potential climatic trends, such as atmospheric water vapour and water-surface roughening by wind. A selection of 13 algorithms is shown in the article to demonstrate the results. Based on the findings, a hybrid approach is suggested to retrieve sea ice concentration globally for climate monitoring purposes. This approach consists of a combination of two algorithms plus dynamic tie points implementation and atmospheric correction of input brightness temperatures. The method minimizes inter-sensor calibration discrepancies and sensitivity to the mentioned error sources.
NASA Astrophysics Data System (ADS)
Schumann, A.; Priegnitz, M.; Schoene, S.; Enghardt, W.; Rohling, H.; Fiedler, F.
2016-10-01
Range verification and dose monitoring in proton therapy are considered highly desirable. Different methods have been developed worldwide, such as particle therapy positron emission tomography (PT-PET) and prompt gamma imaging (PGI). In general, these methods allow for a verification of the proton range. However, quantifying the dose from these measurements remains challenging. For the first time, we present an approach for estimating the dose from prompt γ-ray emission profiles. It combines a filtering procedure based on Gaussian-powerlaw convolution with an evolutionary algorithm. By convolving depth dose profiles with an appropriate filter kernel, prompt γ-ray depth profiles are obtained. To reverse this step, the evolutionary algorithm is applied. The feasibility of this approach is demonstrated for a spread-out Bragg peak in a water target.
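The forward step described above (a depth dose profile convolved with a filter kernel to yield a prompt γ-ray depth profile) can be sketched as follows. A plain Gaussian kernel and a toy dose curve are assumed here in place of the paper's Gaussian-powerlaw kernel and measured profiles:

```python
import numpy as np

# Forward model sketch: convolve a depth dose profile with a filter
# kernel to obtain a prompt gamma-ray depth profile. The paper uses a
# Gaussian-powerlaw kernel; a plain Gaussian is assumed for brevity.
depth = np.linspace(0.0, 150.0, 301)              # mm, 0.5 mm spacing
dose = np.exp(-((depth - 100.0) / 8.0) ** 2)      # toy Bragg-peak-like dose

sigma = 5.0                                       # mm, assumed kernel width
x = np.arange(-25, 26) * 0.5
kernel = np.exp(-x**2 / (2.0 * sigma**2))
kernel /= kernel.sum()                            # normalize to preserve area

gamma_profile = np.convolve(dose, kernel, mode="same")
```

Reversing this convolution (the paper's job for the evolutionary algorithm) is the hard part, since deconvolution is ill-posed in the presence of noise.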
A swarm intelligence based memetic algorithm for task allocation in distributed systems
NASA Astrophysics Data System (ADS)
Sarvizadeh, Raheleh; Haghi Kashani, Mostafa
2011-12-01
This paper proposes a swarm-intelligence-based memetic algorithm for task allocation and scheduling in distributed systems. Task scheduling in distributed systems is known to be an NP-complete problem. Hence, many genetic algorithms have been proposed to search the entire solution space for optimal solutions. However, these existing approaches scan the entire solution space without considering techniques that could reduce the complexity of the optimization, and their main shortcoming is the excessive time spent on scheduling. Therefore, in this paper a memetic algorithm is used to cope with this shortcoming. To balance load efficiently, Bee Colony Optimization (BCO) is applied as the local search within the proposed memetic algorithm. Extended experimental results demonstrate that the proposed method outperforms the existing GA-based method in terms of CPU utilization.
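The memetic idea (a GA with a local search applied to offspring) can be sketched on a toy task-allocation instance. A simple hill-climb stands in for the paper's BCO local search, and the task costs and makespan objective are illustrative assumptions:

```python
import random

random.seed(2)

# Toy instance (assumed): execution cost of each task; chromosome g
# maps task i -> processor g[i]. Fitness = makespan (max processor load).
costs = [5, 3, 8, 2, 7, 4, 6, 1]
n_proc = 3

def makespan(assign):
    loads = [0] * n_proc
    for task, proc in enumerate(assign):
        loads[proc] += costs[task]
    return max(loads)

def local_search(assign):
    """Hill-climb stand-in for the paper's BCO local search: take the
    best single task-to-processor move, if any improves the makespan."""
    best = assign[:]
    for task in range(len(assign)):
        for proc in range(n_proc):
            trial = assign[:]
            trial[task] = proc
            if makespan(trial) < makespan(best):
                best = trial
    return best

def memetic(pop_size=20, generations=40):
    pop = [[random.randrange(n_proc) for _ in costs] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=makespan)
        elite = pop[:pop_size // 2]
        children = []
        for _ in range(pop_size - len(elite)):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, len(costs))
            child = a[:cut] + b[cut:]                # one-point crossover
            child[random.randrange(len(costs))] = random.randrange(n_proc)
            children.append(local_search(child))     # the memetic step
        pop = elite + children
    return min(pop, key=makespan)

best = memetic()
```

The local refinement is what distinguishes a memetic algorithm from a plain GA: global recombination explores, while the embedded search exploits each candidate.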
Jin, Zhenong; Zhuang, Qianlai; Tan, Zeli; Dukes, Jeffrey S; Zheng, Bangyou; Melillo, Jerry M
2016-09-01
Stresses from heat and drought are expected to increasingly suppress crop yields, but the degree to which current models can represent these effects is uncertain. Here we evaluate the algorithms that determine impacts of heat and drought stress on maize in 16 major maize models by incorporating these algorithms into a standard model, the Agricultural Production Systems sIMulator (APSIM), and running an ensemble of simulations. Although both daily mean temperature and daylight temperature are common choices for forcing heat-stress algorithms, current parameterizations in most models favor the use of daylight temperature even though the algorithm was designed for daily mean temperature. Different drought algorithms (i.e., a function of soil water content, of the soil water supply-to-demand ratio, or of the actual-to-potential transpiration ratio) simulated considerably different patterns of water shortage over the growing season, but nonetheless predicted similar decreases in annual yield. Using the selected combination of algorithms, our simulations show that maize yield reduction has been more sensitive to drought stress than to heat stress in the US Midwest since the 1980s, and this pattern will continue under future scenarios; the influence of excessive heat will become increasingly prominent by the late 21st century. Our review of algorithms in 16 crop models suggests that the impacts of heat and drought stress on plant yield can be best described by crop models that: (i) incorporate event-based descriptions of heat and drought stress, (ii) consider the effects of nighttime warming, and (iii) coordinate the interactions among multiple stresses. Our study identifies the proficiency with which different model formulations capture the impacts of heat and drought stress on maize biomass and yield production. The framework presented here can be applied to other modeled processes and used to improve yield predictions of other crops with a wide variety of crop models. PMID:27251794
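The three drought-stress formulations the survey names can be sketched as simple scalar stress factors. The functional forms, thresholds, and units below are illustrative assumptions, not the crop models' actual parameterizations:

```python
# Each function returns a stress factor in [0, 1]: 1 = no stress,
# 0 = full stress. Forms and thresholds are assumed for illustration.

def stress_soil_water(theta, wilting=0.10, field_capacity=0.30):
    """Stress factor from volumetric soil water content."""
    frac = (theta - wilting) / (field_capacity - wilting)
    return min(1.0, max(0.0, frac))

def stress_supply_demand(supply_mm, demand_mm):
    """Stress factor from the soil water supply-to-demand ratio."""
    return min(1.0, supply_mm / demand_mm) if demand_mm > 0 else 1.0

def stress_transpiration(actual_mm, potential_mm):
    """Stress factor from the actual-to-potential transpiration ratio."""
    return min(1.0, actual_mm / potential_mm) if potential_mm > 0 else 1.0
```

As the abstract notes, such formulations can produce quite different within-season stress trajectories even when they imply similar annual yield losses.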
Bush, Keith; Cisler, Josh
2013-07-01
Neuroimaging methodology predominantly relies on the blood oxygenation level dependent (BOLD) signal. While the BOLD signal is a valid measure of neuronal activity, variance in the fluctuations of the BOLD signal is not due solely to fluctuations in neural activity. Thus, a remaining problem in neuroimaging analyses is developing methods that ensure specific inferences about neural activity that are not confounded by unrelated sources of noise in the BOLD signal. Here, we develop and test a new algorithm for performing semi-blind (i.e., no knowledge of stimulus timings) deconvolution of the BOLD signal that treats the neural event as an observable, but intermediate, probabilistic representation of the system's state. We test and compare this new algorithm against three other recent deconvolution algorithms under varied levels of autocorrelated and Gaussian noise, hemodynamic response function (HRF) misspecification, and observation sampling rate. Further, we compare the algorithms' performance using two models to simulate BOLD data: a convolution of neural events with a known (or misspecified) HRF versus a biophysically accurate balloon model of hemodynamics. We also examine the algorithms' performance on real task data. The results demonstrate good performance of all algorithms, though the new algorithm generally outperformed the others (a 3.0% improvement) under simulated resting-state experimental conditions exhibiting multiple, realistic confounding factors (as well as a 10.3% improvement on a real Stroop task). The simulations also demonstrate that the greatest negative influence on deconvolution accuracy is the observation sampling rate. Practical and theoretical implications of these results for improving inferences about neural activity from the fMRI BOLD signal are discussed.
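The convolutional forward model underlying BOLD deconvolution, plus a naive linear inversion, can be sketched as follows. The exponential-decay response kernel and low noise level are assumptions chosen to keep this toy inverse problem well conditioned; real HRFs are smoother, which makes the inversion far harder, and the paper's probabilistic state-space algorithm is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Forward model sketch: BOLD = neural event train convolved with a
# hemodynamic response kernel, plus noise. A crude exponential-decay
# kernel is assumed purely for illustration.
n = 120
hrf = np.exp(-np.arange(8) / 2.0)

neural = np.zeros(n)
neural[[10, 40, 41, 80]] = 1.0                 # ground-truth events
bold = np.convolve(neural, hrf)[:n] + 0.01 * rng.standard_normal(n)

# Write the convolution as a lower-triangular Toeplitz matrix, then
# apply a naive linear deconvolution (a generic stand-in only).
H = np.zeros((n, n))
for j in range(n):
    length = min(len(hrf), n - j)
    H[j:j + length, j] = hrf[:length]

est = np.linalg.solve(H, bold)
print(np.flatnonzero(est > 0.5))               # recovers the event indices
```

With a realistic smooth HRF the matrix H becomes severely ill-conditioned, which is precisely why regularized or probabilistic deconvolution methods such as the one in this paper are needed.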
Artificial immune algorithm for multi-depot vehicle scheduling problems
NASA Astrophysics Data System (ADS)
Wu, Zhongyi; Wang, Donggen; Xia, Linyuan; Chen, Xiaoling
2008-10-01
In the fast-developing logistics and supply chain management fields, one of the key problems in a decision support system is how to arrange, for many customers and suppliers, the supplier-to-customer assignment and produce a detailed supply schedule under a set of constraints. Solutions to the multi-depot vehicle scheduling problem (MDVSP) help solve this problem in transportation applications. The objective of the MDVSP is to minimize the total distance covered by all vehicles, which can be interpreted as delivery cost or time consumption. The MDVSP is a nondeterministic polynomial-time hard (NP-hard) problem that cannot be solved to optimality within polynomially bounded computation time. Many different approaches have been developed to tackle the MDVSP, such as exact algorithms (EA), the one-stage approach (OSA), two-phase heuristic methods (TPHM), tabu search algorithms (TSA), genetic algorithms (GA), and hierarchical multiplex structures (HIMS). Most of the methods mentioned above are time consuming and carry a high risk of converging to a local optimum. In this paper, a new search algorithm based on Artificial Immune Systems (AIS), which are inspired by vertebrate immune systems, is proposed to solve the MDVSP. The proposed AIS algorithm is tested with 30 customers and 6 vehicles located in 3 depots. Experimental results show that the artificial immune system algorithm is an effective and efficient method for solving MDVSP instances.
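The AIS idea can be sketched as a minimal clonal-selection loop (CLONALG-style): the best "antibodies" are cloned in proportion to their affinity and hypermutated in inverse proportion to it. The toy one-dimensional objective and the clone/mutation schedule are assumptions; the paper's MDVSP encoding and operators are not reproduced:

```python
import random

random.seed(3)

def affinity(x):
    """Toy objective to maximize: peak at x = 3."""
    return -(x - 3.0) ** 2

def hypermutate(x, rate):
    """Gaussian mutation; higher `rate` for lower-affinity cells."""
    return x + random.gauss(0.0, rate)

pop = [random.uniform(-10.0, 10.0) for _ in range(20)]
for _ in range(60):
    pop.sort(key=affinity, reverse=True)
    clones = []
    for rank, cell in enumerate(pop[:5]):          # select the best cells
        n_clones = 5 - rank                        # more clones for better cells
        rate = 0.1 * (rank + 1)                    # less mutation for better cells
        clones += [hypermutate(cell, rate) for _ in range(n_clones)]
    # Keep the elite, the best clones, and a few random newcomers
    # (the newcomers mimic receptor editing / diversity introduction).
    pop = (pop[:5]
           + sorted(clones, key=affinity, reverse=True)[:10]
           + [random.uniform(-10.0, 10.0) for _ in range(5)])

best = max(pop, key=affinity)
```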
Optimizing remediation of an unconfined aquifer using a hybrid algorithm.
Hsiao, Chin-Tsai; Chang, Liang-Cheng
2005-01-01
We present a novel hybrid algorithm, integrating a genetic algorithm (GA) and constrained differential dynamic programming (CDDP), to achieve remediation planning for an unconfined aquifer. The objective function includes both fixed and dynamic operation costs. The GA determines the primary structure of the proposed algorithm, and a chromosome therein, implemented as a series of binary digits, represents a potential network design. The time-varying optimal operation cost associated with the network design is computed by the CDDP, in which a numerical transport model is embedded. Several computational approaches, including a chromosome bookkeeping procedure, are implemented to reduce the computational load. Additionally, case studies that involve fixed and time-varying operating costs for confined and unconfined aquifers, respectively, are discussed to elucidate the effectiveness of the proposed algorithm. Simulation results indicate that the fixed costs markedly affect the optimal design, including the number and locations of the wells. Furthermore, the solution obtained using the confined approximation for an unconfined aquifer may be infeasible, as determined by an unconfined simulation.
First-principle natural band alignment of GaN / dilute-As GaNAs alloy
Tan, Chee-Keong; Tansu, Nelson
2015-01-15
Density functional theory (DFT) calculations with the local density approximation (LDA) functional are employed to investigate the band alignment of dilute-As GaNAs alloys with respect to GaN. Conduction and valence band positions of the dilute-As GaNAs alloy relative to GaN on an absolute energy scale are determined from a combination of bulk and surface DFT calculations. The resulting GaN/GaNAs conduction-to-valence band offset ratio is found to be approximately 5:95. Our theoretical finding is in good agreement with experimental observation, indicating that the upward movement of the valence band at low As content is mainly responsible for the drastic reduction of the energy band gap in dilute-As GaNAs relative to GaN. In addition, type-I band alignment of GaN/GaNAs is suggested as a reasonable approach for future device implementation with dilute-As GaNAs quantum wells, and a possible type-II quantum well active region can be formed by using an InGaN/dilute-As GaNAs heterostructure.
Messy genetic algorithms: Recent developments
Kargupta, H.
1996-09-01
Messy genetic algorithms define a rare class of algorithms that realize the need for detecting appropriate relations among members of the search domain in optimization. This paper reviews earlier work on messy genetic algorithms and describes some recent developments. It also describes the gene expression messy GA (GEMGA), an O(Λ^κ(ℓ² + κ)) sample-complexity algorithm for the class of order-κ delineable problems (problems that can be solved by considering relations of order no higher than κ) of size ℓ and alphabet size Λ. Experimental results are presented to demonstrate the scalability of the GEMGA.
NASA Astrophysics Data System (ADS)
Yan, Gang; Zhou, Lily L.
2006-09-01
This study presents a design strategy based on genetic algorithms (GA) for semi-active fuzzy control of structures that have magnetorheological (MR) dampers installed to prevent damage from severe dynamic loads such as earthquakes. The control objective is to minimize both the maximum displacement and acceleration responses of the structure. Interactive relationships between structural responses and input voltages of MR dampers are established by using a fuzzy controller. GA is employed as an adaptive method for design of the fuzzy controller, which is here known as a genetic adaptive fuzzy (GAF) controller. The multi-objectives are first converted to a fitness function that is used in standard genetic operations, i.e. selection, crossover, and mutation. The proposed approach generates an effective and reliable fuzzy logic control system by powerful searching and self-learning adaptive capabilities of GA. Numerical simulations for single and multiple damper cases are given to show the effectiveness and efficiency of the proposed intelligent control strategy.
Fuzzy logic and genetic algorithms for intelligent control of structures using MR dampers
NASA Astrophysics Data System (ADS)
Yan, Gang; Zhou, Lily L.
2004-07-01
Fuzzy logic control (FLC) and genetic algorithms (GA) are integrated into a new approach for the semi-active control of structures installed with MR dampers against severe dynamic loadings such as earthquakes. The interactive relationship between the structural response and the input voltage of the MR dampers is established by using a fuzzy controller, rather than in the traditional way of introducing an ideal active control force. GA is employed as an adaptive method for optimizing the parameters and selecting the fuzzy rules of the fuzzy control system. The maximum structural displacement is selected as the objective function to be minimized. The objective function is then converted to a fitness function to form the basis of the genetic operations, i.e. selection, crossover, and mutation. The proposed integrated architecture is expected to generate an effective and reliable fuzzy control system through the GA's powerful searching and self-learning adaptive capability.
NASA Astrophysics Data System (ADS)
Sahoo, Sasmita; Jha, Madan K.
2016-10-01
Effective characterization of lithology is vital for the conceptualization of complex aquifer systems, which is a prerequisite for the development of reliable groundwater-flow and contaminant-transport models. However, such information is often limited for most groundwater basins. This study explores the usefulness and potential of a hybrid soft-computing framework: a traditional artificial neural network with gradient-descent-with-momentum training (ANN-GDM) and a traditional genetic algorithm (GA) based ANN (ANN-GA) approach were developed and compared with a novel hybrid self-organizing map (SOM) based ANN (SOM-ANN-GA) method for the prediction of lithology at a basin scale. This framework is demonstrated through a case study involving a complex multi-layered aquifer system in India, where well-log sites were clustered on the basis of sand-layer frequencies; within each cluster, subsurface layers were reclassified into four depth classes based on the maximum drilling depth. ANN models for each depth class were developed using each of the three approaches. Of the three, the hybrid SOM-ANN-GA models were able to recognize incomplete geologic patterns most reliably, followed by the ANN-GA and ANN-GDM models. It is concluded that the hybrid soft-computing framework can serve as a promising tool for characterizing lithology in groundwater basins with missing lithologic patterns.
Ghosh, P; Bagchi, M C
2009-01-01
With a view to the rational design of selective quinoxaline derivatives, 2D- and 3D-QSAR models have been developed for the prediction of anti-tubercular activities. Successful implementation of a predictive QSAR model largely depends on the selection of a preferred set of molecular descriptors that can capture the chemico-biological interaction. Genetic algorithm (GA) and simulated annealing (SA) are applied as variable selection methods for model development. 2D-QSAR modeling using GA- or SA-based partial least squares (GA-PLS and SA-PLS) methods identified some topological and electrostatic descriptors as important factors for anti-tubercular activity. Kohonen networks and counter-propagation artificial neural networks (CP-ANN) with GA- and SA-based feature selection have been applied to such QSAR modeling of quinoxaline compounds. Out of a variable pool of 380 molecular descriptors, predictive QSAR models were developed on the training set and validated on the test set compounds, and a comparative study of the relative effectiveness of linear and non-linear approaches was conducted. Further analysis using the 3D-QSAR technique identified two models, obtained by the GA-PLS and SA-PLS methods, for anti-tubercular activity prediction. The influences of the steric and electrostatic field effects generated by the contribution plots are discussed. The results indicate that SA is a very effective variable selection approach for such 3D-QSAR modeling.
Zhang, Lun; Zhang, Meng; Yang, Wenchen; Dong, Decun
2015-01-01
This paper presents the modelling and analysis of the capacity expansion of an urban road traffic network (ICURTN). The bilevel programming model is first employed to model the ICURTN, in which the utility of the entire network is maximized subject to the optimal utility of travelers' route choices. Then, an improved hybrid genetic algorithm integrated with the golden ratio (HGAGR) is developed to enhance the local search of simple genetic algorithms, and the proposed capacity expansion model is solved by the combination of the HGAGR and the Frank-Wolfe algorithm. Taking the traditional one-way network and a bidirectional network as the study cases, three numerical calculations are conducted to validate the presented model and algorithm, and the primary factors influencing the extended-capacity model are analyzed. The calculation results indicate that capacity expansion of the road network is an effective measure to enlarge the capacity of an urban road network, especially under a limited construction budget; the average computation time of the HGAGR is 122 seconds, which meets the real-time demand in the evaluation of road network capacity. PMID:25802512
Dongarra, J.J.; Hewitt, T.
1985-08-01
This note describes some experiments on simple, dense linear algebra algorithms. These experiments show that the CRAY X-MP is capable of small-grain multitasking arising from standard implementations of LU and Cholesky decomposition. The implementation described here provides the "fastest" execution rate for LU decomposition: 718 MFLOPS for a matrix of order 1000.
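The LU kernel behind such experiments can be sketched as a plain Doolittle decomposition without pivoting; the CRAY multitasking implementation itself is not reproduced, and the test matrix is an illustrative assumption chosen so that no pivoting is needed:

```python
import numpy as np

def lu_decompose(A):
    """Doolittle LU without pivoting: return (L, U) with A = L @ U,
    L unit lower triangular. Assumes nonzero pivots throughout."""
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float).copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]        # elimination multiplier
            U[i, k:] -= L[i, k] * U[k, k:]     # zero out below the pivot
    return L, np.triu(U)

A = np.array([[4.0, 3.0, 2.0],
              [2.0, 4.0, 1.0],
              [1.0, 2.0, 3.0]])
L, U = lu_decompose(A)
```

The parallelism the note measures comes from the fact that, within each step k, the row updates over i are independent and can be distributed across processors.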
Dandekar, T; Du, F; Schirmer, R H; Schmidt, S
2001-12-01
By exploiting the rapid increase in available sequence data, the definition of medically relevant protein targets has been improved by a combination of (i) differential genome analysis (target list) and (ii) analysis of individual proteins (target analysis). Fast sequence comparisons, data mining, and genetic algorithms further support these procedures. Mycobacterium tuberculosis proteins were chosen as applied examples.
Cropsey, Karen L.; Jardin, Bianca; Burkholder, Greer; Clark, C. Brendan; Raper, James L.; Saag, Michael
2015-01-01
Background: Smoking now represents one of the biggest modifiable risk factors for disease and mortality in people living with HIV (PLHIV). To produce significant changes in smoking rates among this population, treatments will need to be both acceptable to the larger segment of PLHIV smokers and feasible to implement in busy HIV clinics. The purpose of this study was to evaluate the feasibility and effects of a novel proactive algorithm-based intervention in an HIV/AIDS clinic. Methods: PLHIV smokers (N = 100) were proactively identified via their electronic medical records and were subsequently randomized at baseline to receive a 12-week pharmacotherapy-based algorithm treatment or treatment as usual. Participants were tracked in person for 12 weeks and provided information on smoking behaviors and associated constructs of cessation at each follow-up session. Results: The findings revealed that many smokers utilized the prescribed medications when provided with a supply of cessation medication as determined by an algorithm. Compared to smokers receiving treatment as usual, PLHIV smokers prescribed these medications reported more quit attempts and greater reduction in smoking. Proxy measures of cessation readiness (e.g., motivation, self-efficacy) also favored participants receiving the algorithm treatment. Conclusions: This algorithm-derived treatment produced positive changes across a number of important clinical markers associated with smoking cessation. Given these promising findings and the brief nature of this treatment, the overall pattern of results suggests strong potential for dissemination into clinical settings as well as significant promise for further advancing clinical health outcomes in this population. PMID:26181705
Genetic algorithms in conceptual design of a light-weight, low-noise, tilt-rotor aircraft
NASA Technical Reports Server (NTRS)
Wells, Valana L.
1996-01-01
This report outlines research accomplishments in the area of using genetic algorithms (GA) for the design and optimization of rotorcraft. It discusses the genetic algorithm as a search and optimization tool, outlines a procedure for using the GA in the conceptual design of helicopters, and applies the GA method to the acoustic design of rotors.
Tian, H; Liu, C; Gao, X D; Yao, W B
2013-03-01
Granulocyte colony-stimulating factor (G-CSF) is a cytokine widely used in cancer patients receiving high doses of chemotherapeutic drugs to prevent the chemotherapy-induced suppression of white blood cells. The production of recombinant G-CSF should be increased to meet the increasing market demand. This study aims to model and optimize the carbon source of auto-induction medium to enhance G-CSF production using artificial neural networks coupled with genetic algorithm. In this approach, artificial neural networks served as bioprocess modeling tools, and genetic algorithm (GA) was applied to optimize the established artificial neural network models. Two artificial neural network models were constructed: the back-propagation (BP) network and the radial basis function (RBF) network. The root mean square error, coefficient of determination, and standard error of prediction of the BP model were 0.0375, 0.959, and 8.49 %, respectively, whereas those of the RBF model were 0.0257, 0.980, and 5.82 %, respectively. These values indicated that the RBF model possessed higher fitness and prediction accuracy than the BP model. Under the optimized auto-induction medium, the predicted maximum G-CSF yield by the BP-GA approach was 71.66 %, whereas that by the RBF-GA approach was 75.17 %. These predicted values are in agreement with the experimental results, with 72.4 and 76.014 % for the BP-GA and RBF-GA models, respectively. These results suggest that RBF-GA is superior to BP-GA. The developed approach in this study may be helpful in modeling and optimizing other multivariable, non-linear, and time-variant bioprocesses.
Two Hybrid Algorithms for Multiple Sequence Alignment
NASA Astrophysics Data System (ADS)
Naznin, Farhana; Sarker, Ruhul; Essam, Daryl
2010-01-01
In order to design life-saving drugs, such as cancer drugs, the design of protein or DNA structures has to be accurate. These structures depend on multiple sequence alignment (MSA), which is used to find the accurate structure of protein and DNA sequences from existing approximately correct sequences. To overcome the overly greedy nature of the well-known global progressive alignment method for multiple sequence alignment, we propose two different algorithms in this paper: one using an iterative approach with a progressive alignment method (PAMIM), and the second using a genetic algorithm with a progressive alignment method (PAMGA). Both of our methods start from a k-mer distance table used to generate a single guide-tree. In the iterative approach, we introduce two new techniques: the first generates guide-trees with randomly selected sequences, and the second shuffles the sequences inside that tree. The output of the tree is a multiple sequence alignment, which is evaluated by the sum-of-pairs method (SPM) using real-valued data from PAM250. In our second, GA-based approach, these two techniques are used to generate the initial population, and two different kinds of genetic operators are implemented as crossover and mutation. To test the performance of our two algorithms, we compared them with the existing well-known methods T-Coffee, MUSCLE, MAFFT, and ProbCons, using BAliBASE benchmarks. The experimental results show that the first algorithm works well in some situations where other existing methods have difficulty obtaining better solutions. The proposed second method works well compared to the existing methods in all situations, and it shows better performance than the first one.
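The k-mer distance used to build the initial guide-tree can be sketched as follows. The exact formula is an assumption (the abstract does not spell it out); this version, shared k-mer count normalized by the shorter sequence, is a common choice in progressive aligners:

```python
from collections import Counter

def kmer_counts(seq, k=3):
    """Multiset of all overlapping k-mers in a sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def kmer_distance(s1, s2, k=3):
    """1 - (shared k-mers / k-mers of the shorter sequence);
    0 for identical sequences, 1 for sequences sharing no k-mer."""
    c1, c2 = kmer_counts(s1, k), kmer_counts(s2, k)
    shared = sum((c1 & c2).values())        # multiset intersection size
    smallest = min(len(s1), len(s2)) - k + 1
    return 1.0 - shared / smallest

d_close = kmer_distance("ACGTACGTAA", "ACGTACGTAC")
d_far = kmer_distance("ACGTACGTAA", "TTTTTTTTTT")
```

Such distances are cheap because they need no alignment, which is why they are used to seed the guide-tree before the expensive progressive stage.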
NASA Astrophysics Data System (ADS)
Hwang, Seho; Shin, Jehyun
2013-04-01
The shale gas evaluation process can be summarized as the selection of sweet-spot intervals in the vertical borehole and the determination of hydraulic fracturing zones in the horizontal borehole. The brittleness index used in the selection of hydraulic fracturing intervals is calculated from the dynamic Young's modulus and Poisson's ratio derived from wireline logging and MWD/LWD data. Young's modulus and Poisson's ratio are calculated from the sonic and density log data, and therefore the MWD/LWD suite in the horizontal borehole should include a sonic log to estimate the dynamic elastic constants. This paper proposes a practical method to estimate the elastic moduli based on Passey's algorithm when the LWD sonic log is unavailable in the horizontal borehole. To estimate the TOC (total organic content) with Passey's algorithm using the sonic-resistivity, density-resistivity, and neutron-resistivity logs, we use the relationship between Delta log R values and core-derived LOM (level of maturity) data. Dynamic elastic constants in the horizontal well, i.e. in the sweet-spot zones, can be estimated using the relationships between P-wave velocity and elastic constants in the vertical well, together with the similarity among the Delta log R values calculated from the sonic-resistivity, density-resistivity, and neutron-resistivity logs, respectively. From two of Passey's formulations in the vertical well, such as the sonic-resistivity and density-resistivity relationships, we can derive the P-wave velocity by equating the two expressions on the basis of this similarity. We can then derive the dynamic elastic constants from the relationships between P-wave velocity and dynamic elastic constants, and finally estimate the brittleness index from the Young's modulus and Poisson's ratio. We expect that this practical method can be effectively applied when the LWD sonic logging data of the horizontal borehole are unavailable.
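Passey's Delta log R method mentioned above can be sketched for the sonic-resistivity pair. The baseline values and LOM below are illustrative assumptions, while the coefficients follow the widely cited Passey (1990) formulation:

```python
import math

def delta_log_r(resistivity, dt, resistivity_baseline, dt_baseline):
    """Delta log R from resistivity (ohm-m) and sonic transit time
    (microseconds/ft), relative to a non-source baseline interval."""
    return (math.log10(resistivity / resistivity_baseline)
            + 0.02 * (dt - dt_baseline))

def toc_from_delta_log_r(dlr, lom):
    """TOC (wt%) from Delta log R and the level of maturity (LOM)."""
    return dlr * 10 ** (2.297 - 0.1688 * lom)

# Illustrative values (assumed, not from the paper):
dlr = delta_log_r(resistivity=20.0, dt=90.0,
                  resistivity_baseline=10.0, dt_baseline=80.0)
toc = toc_from_delta_log_r(dlr, lom=9.0)
```

The paper's trick is that the same Delta log R value can be written from different log pairs; equating two such expressions lets the missing sonic term be solved for.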
High-quality eutectic-metal-bonded AlGaAs-GaAs thin films on Si substrates
NASA Astrophysics Data System (ADS)
Venkatasubramanian, R.; Timmons, M. L.; Humphreys, T. P.; Keyes, B. M.; Ahrenkiel, R. K.
1992-02-01
Device-quality GaAs-AlGaAs thin films have been obtained on Si substrates using a novel approach called eutectic-metal-bonding (EMB). This involves the lattice-matched growth of GaAs-AlGaAs thin films on Ge substrates, followed by bonding onto a Si wafer. The Ge substrates are selectively removed by a CF4/O2 plasma etch, leaving high-quality GaAs-AlGaAs thin films on Si substrates. A minority-carrier lifetime of 103 ns has been obtained in an EMB GaAs-AlGaAs double heterostructure on Si, which is nearly forty times higher than the state-of-the-art lifetime for heteroepitaxial GaAs on Si and represents the largest reported minority-carrier lifetime for a freestanding GaAs thin film. In addition, negligible residual elastic strain in the EMB GaAs-AlGaAs films has been determined from Raman spectroscopy measurements.
Cakar, Tarik; Koker, Rasit
2015-01-01
A particle swarm optimization (PSO) algorithm is used to solve the single machine total weighted tardiness (SMTWT) problem with unequal release dates. Three different solution approaches are combined to find the best solutions. A sub-hybrid system is first prepared from genetic algorithms (GA) and simulated annealing (SA): GA obtains a solution at any stage, that solution is passed to SA as an initial solution, and when SA finds a better solution it stops and returns that solution to GA. After GA finishes, the resulting solution is given to PSO, which searches for a better solution and then sends its result back to GA. The three solution systems thus work together. The neurohybrid system uses PSO as the main optimizer, with SA and GA as local search tools; at each stage, the local optimizers perform exploitation on the best particle. In addition to the local search tools, a neurodominance rule (NDR) is used to improve the final solution of the hybrid PSO system; NDR checks sequential jobs according to the total weighted tardiness factor. The whole system is named the neurohybrid-PSO solution system. PMID:26221134
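The objective being minimized, total weighted tardiness with unequal release dates, can be sketched as follows (the job data in the test are hypothetical, not the paper's benchmark instances).

```python
def total_weighted_tardiness(sequence, release, proc, due, weight):
    """Evaluate one job sequence on a single machine: a job cannot start
    before its release date; lateness past the due date is weighted."""
    t, total = 0, 0
    for j in sequence:
        t = max(t, release[j]) + proc[j]          # wait for release, then process
        total += weight[j] * max(0, t - due[j])   # weighted tardiness of job j
    return total
```

All three metaheuristics in the hybrid (GA, SA, PSO) would call a function of exactly this shape as their fitness evaluation.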
Long, Yi; Du, Zhi-Jiang; Wang, Wei-Dong; Dong, Wei
2016-01-01
A lower limb assistive exoskeleton is designed to help operators walk or carry payloads. The exoskeleton is required to shadow human motion intent accurately and compliantly to prevent incoordination. If the user's intention is estimated accurately, a precise position control strategy will improve collaboration between the user and the exoskeleton. In this paper, a hybrid position control scheme, combining sliding mode control (SMC) with a cerebellar model articulation controller (CMAC) neural network, is proposed to control the exoskeleton to react appropriately to human motion intent. A genetic algorithm (GA) is utilized to determine the optimal sliding surface and the sliding control law to improve performance of SMC. The proposed control strategy (SMC_GA_CMAC) is compared with three other types of approaches, that is, conventional SMC without optimization, optimal SMC with GA (SMC_GA), and SMC with CMAC compensation (SMC_CMAC), all of which are employed to track the desired joint angular position which is deduced from Clinical Gait Analysis (CGA) data. Position tracking performance is investigated with cosimulation using ADAMS and MATLAB/SIMULINK in two cases, of which the first case is without disturbances while the second case is with a bounded disturbance. The cosimulation results show the effectiveness of the proposed control strategy which can be employed in similar exoskeleton systems. PMID:27069353
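The sliding mode control core can be sketched as below; λ (the slope of the sliding surface) and the switching gain are exactly the kinds of parameters the paper tunes with a GA. This is a generic SMC sketch, not the authors' controller.

```python
def sliding_surface(error, d_error, lam):
    # s = de/dt + lam * e; s == 0 defines the sliding manifold
    return d_error + lam * error

def smc_control(error, d_error, lam, gain):
    """Switching control law u = -gain * sign(s). Real implementations often
    replace sign() with a saturation function to reduce chattering, which is
    one role a CMAC compensation term can play."""
    s = sliding_surface(error, d_error, lam)
    return -gain * (1 if s > 0 else -1 if s < 0 else 0)
```

A GA would evaluate candidate (lam, gain) pairs against a tracking-error cost over a simulated gait trajectory.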
Adeeyo, Adeyemi Ojutalayo; Lateef, Agbaje; Gueguim-Kana, Evariste Bosco
2016-01-01
Exopolysaccharide (EPS) production by a strain of Lentinus edodes was studied via the effects of treatments with ultraviolet (UV) irradiation and acridine orange. Furthermore, optimization of EPS production was studied using a genetic algorithm coupled with an artificial neural network in submerged fermentation. Exposure to irradiation and acridine orange resulted in improved EPS production (2.783 and 5.548 g/L, respectively) when compared with the wild strain (1.044 g/L), whereas optimization led to improved productivity (23.21 g/L). The EPS produced by various strains also demonstrated good DPPH scavenging activities of 45.40-88.90%, and also inhibited the growth of Escherichia coli and Klebsiella pneumoniae. This study shows that multistep optimization schemes involving physical-chemical mutation and media optimization can be an attractive strategy for improving the yield of bioactives from medicinal mushrooms. To the best of our knowledge, this report presents the first reference of a multistep approach to optimizing EPS production in L. edodes. PMID:27649726
Alshamlan, Hala; Badr, Ghada; Alohali, Yousef
2015-01-01
An artificial bee colony (ABC) is a relatively recent swarm intelligence optimization approach. In this paper, we propose the first attempt at applying the ABC algorithm to analyzing a microarray gene expression profile. We combine the minimum redundancy maximum relevance (mRMR) feature selection algorithm with the ABC algorithm, yielding mRMR-ABC, to select informative genes from microarray profiles. The approach uses a support vector machine (SVM) to measure the classification accuracy for the selected genes. We evaluate the performance of the proposed mRMR-ABC algorithm through extensive experiments on six binary and multiclass gene expression microarray datasets. Furthermore, we compare mRMR-ABC with previously known techniques, reimplementing two of them with the same parameters for the sake of a fair comparison: mRMR combined with a genetic algorithm (mRMR-GA) and mRMR combined with particle swarm optimization (mRMR-PSO). The experimental results show that the proposed mRMR-ABC algorithm achieves accurate classification performance using a small number of predictive genes on all tested datasets when compared to previously suggested methods. This shows that mRMR-ABC is a promising approach for gene selection and cancer classification problems. PMID:25961028
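The mRMR criterion can be sketched as a greedy selection over precomputed mutual information values (an illustrative sketch, not the paper's implementation; the MI values in the test are made up).

```python
def mrmr_select(relevance, redundancy, k):
    """Greedy mRMR: repeatedly pick the feature maximizing
    relevance minus mean redundancy with the already-selected set.
    relevance[i]     : MI(feature i, class label), precomputed
    redundancy[i][j] : MI(feature i, feature j), precomputed
    """
    selected, remaining = [], list(range(len(relevance)))
    for _ in range(k):
        def score(f):
            if not selected:
                return relevance[f]
            red = sum(redundancy[f][s] for s in selected) / len(selected)
            return relevance[f] - red
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected
```

In mRMR-ABC, a wrapper search (here ABC, elsewhere GA or PSO) then refines this filter-selected subset using SVM accuracy as fitness.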
Optimal fractional delay-IIR filter design using cuckoo search algorithm.
Kumar, Manjeet; Rawat, Tarun Kumar
2015-11-01
This paper applies a novel global meta-heuristic optimization algorithm, the cuckoo search algorithm (CSA), to determine the optimal coefficients of a fractional delay-infinite impulse response (FD-IIR) filter approximating the ideal frequency response characteristics. Since fractional delay IIR filter design is a multi-modal optimization problem, it cannot be solved efficiently by conventional gradient-based optimization techniques. A weighted least squares (WLS) based fitness function is used to improve performance considerably. FD-IIR filters of different orders have been designed using the CSA. The simulation results of the proposed CSA-based approach have been compared with those of well-accepted evolutionary algorithms, namely the genetic algorithm (GA) and particle swarm optimization (PSO). The performance of the CSA-based FD-IIR filter is superior to that obtained by GA and PSO. The simulation and statistical results affirm that the proposed CSA-based approach outperforms GA and PSO not only in convergence rate but also in the optimal performance of the designed FD-IIR filter (smaller magnitude error, smaller phase error, higher percentage improvement in magnitude and phase error, and faster convergence). The absolute magnitude and phase errors obtained for the designed 5th-order FD-IIR filter are as low as 0.0037 and 0.0046, respectively. The percentage improvements in magnitude error for the CSA-based 5th-order FD-IIR design with respect to GA and PSO are 80.93% and 74.83%, respectively, and in phase error 76.04% and 71.25%, respectively. PMID:26391486
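The defining move of cuckoo search is the Lévy-flight step used to generate new candidate solutions. A minimal sketch via Mantegna's algorithm (generic, with an illustrative step-size α, not the paper's tuned settings):

```python
import math
import random

def levy_step(beta=1.5, rng=random):
    """One heavy-tailed Lévy step via Mantegna's algorithm."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.gauss(0, sigma)
    v = rng.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def cuckoo_move(x, best, alpha=0.01, rng=random):
    # New candidate nest: x' = x + alpha * Levy * (x - best), per coordinate
    return [xi + alpha * levy_step(rng=rng) * (xi - bi)
            for xi, bi in zip(x, best)]
```

The occasional very large Lévy steps are what let CSA escape the local minima that trap gradient methods on this multi-modal WLS error surface.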
An efficient algorithm for function optimization: modified stem cells algorithm
NASA Astrophysics Data System (ADS)
Taherdangkoo, Mohammad; Paziresh, Mahsa; Yazdi, Mehran; Bagheri, Mohammad
2013-03-01
In this paper, we propose an optimization algorithm based on the intelligent behavior of stem cell swarms in reproduction and self-organization. Optimization algorithms such as the Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO), and Artificial Bee Colony (ABC) algorithms can give near-optimal solutions to linear and non-linear problems in many applications; however, in some cases they can become trapped in local optima. The Stem Cells Algorithm (SCA) is an optimization algorithm inspired by the natural behavior of stem cells in evolving themselves into new and improved cells. The SCA successfully avoids the local optima problem. In this paper, we make small changes to the implementation of this algorithm to obtain improved performance over previous versions. Using a series of benchmark functions, we assess the performance of the proposed algorithm and compare it with that of the other aforementioned optimization algorithms. The obtained results prove the superiority of the Modified Stem Cells Algorithm (MSCA).
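Benchmark suites for such comparisons typically include multimodal functions like Rastrigin (which specific functions the paper used is not stated here); its lattice of local minima is exactly what traps the algorithms being compared.

```python
import math

def rastrigin(x):
    """Classic multimodal benchmark: global minimum 0 at the origin,
    surrounded by a regular grid of local minima."""
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi)
                             for xi in x)
```

An optimizer is judged by how close its best-found value gets to 0 and by how consistently it avoids settling in the nonzero local minima.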
Bennett, Herbert S; Filliben, James J
2002-01-01
A critical issue identified in both the technology roadmap from the Optoelectronics Industry Development Association and the roadmaps from the National Electronics Manufacturing Initiative, Inc. is the need for predictive computer simulations of processes, devices, and circuits. The goal of this paper is to respond to this need by representing the extensive amounts of theoretical data for transport properties in the multi-dimensional space of mole fractions of AlAs in Ga1-xAlxAs, dopant densities, and carrier densities in terms of closed-form analytic expressions. Representing such data in terms of closed-form analytic expressions is a significant challenge that arises in developing computationally efficient simulations of microelectronic and optoelectronic devices. In this paper, we present a methodology to achieve the above goal for a class of numerical data in the bounded two-dimensional space of mole fraction of AlAs and dopant density. We then apply this methodology to obtain closed-form analytic expressions for the effective intrinsic carrier concentrations at 300 K in n-type and p-type Ga1-xAlxAs as functions of the mole fraction x of AlAs between 0.0 and 0.3. In these calculations, the donor density N_D for n-type material varies between 10^16 cm^-3 and 10^19 cm^-3 and the acceptor density N_A for p-type materials varies between 10^16 cm^-3 and 10^20 cm^-3. We find that p-type Ga1-xAlxAs presents much greater challenges for obtaining acceptable analytic fits whenever acceptor densities are sufficiently near the Mott transition because of increased scatter in the numerical computer results for solutions to the theoretical equations. The Mott transition region in p-type Ga1-xAlxAs is of technological significance for mobile wireless communications systems. This methodology and its associated principles, strategies, regression analyses, and graphics are expected to be applicable to other problems beyond the specific case of effective
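The core of the fitting step, producing a closed-form expression by least squares, can be sketched in the simplest linear case (a generic illustration; the paper's actual fits are multi-dimensional in mole fraction and dopant density).

```python
def linear_fit(xs, ys):
    """Ordinary least squares for y ~ a + b*x, via the closed-form
    normal-equation solution for one predictor."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b  # intercept a, slope b
```

In practice one fits a transformed quantity (e.g. the logarithm of a carrier concentration) so that the closed-form expression stays accurate over several decades of dopant density.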
NASA Astrophysics Data System (ADS)
Kerkhoff, A.; Ling, H.
2009-12-01
We apply Pareto genetic algorithm (GA) optimization to the design of antenna elements for use in the Long Wavelength Array (LWA), a large, low-frequency radio telescope currently under development. By manipulating antenna geometry, the Pareto GA simultaneously optimizes the received Galactic background or “sky” noise level and radiation patterns of the antenna over all frequencies. Geometrical constraints are handled explicitly in the GA in order to guarantee the realizability, and to impart control over the monetary cost of the generated designs. The antenna elements considered are broadband planar dipoles arranged horizontally over the ground. It is demonstrated that the Pareto GA approach generates a set of designs, which exhibit a wide range of trade-offs between the two design objectives, and satisfy all constraints. Multiple GA executions are performed to determine how antenna performance trade-offs are affected by different geometrical constraint values, feed impedance values, radiating element shapes and orientations, and ground conditions. Two different planar dipole antenna designs are constructed, and antenna input impedance and sky noise drift scan measurements are performed to validate the results of the GA.
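The Pareto-dominance test underlying such a two-objective GA can be sketched as follows (minimization convention; an illustrative sketch, not the authors' code).

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b: no worse in every
    objective and strictly better in at least one (minimization)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Keep only the non-dominated designs, e.g. (sky-noise metric,
    pattern metric) pairs in an antenna optimization."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

The GA's output is the whole front, so designers can trade received sky-noise level against pattern quality rather than accept a single compromise design.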
A Breeder Algorithm for Stellarator Optimization
NASA Astrophysics Data System (ADS)
Wang, S.; Ware, A. S.; Hirshman, S. P.; Spong, D. A.
2003-10-01
An optimization algorithm that combines the global parameter space search properties of a genetic algorithm (GA) with the local parameter search properties of a Levenberg-Marquardt (LM) algorithm is described. Optimization algorithms used in the design of stellarator configurations are often classified as either global (such as GA and differential evolution algorithm) or local (such as LM). While nonlinear least-squares methods such as LM are effective at minimizing a cost-function based on desirable plasma properties such as quasi-symmetry and ballooning stability, whether or not this is a local or global minimum is unknown. The advantage of evolutionary algorithms such as GA is that they search a wider range of parameter space and are not susceptible to getting stuck in a local minimum of the cost function. Their disadvantage is that in some cases the evolutionary algorithms are ineffective at finding a minimum state. Here, we describe the initial development of the Breeder Algorithm (BA). BA consists of a genetic algorithm outer loop with an inner loop in which each generation is refined using a LM step. Initial results for a quasi-poloidal stellarator optimization will be presented, along with a comparison to existing optimization algorithms.
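The outer-GA/inner-local-step structure described above can be sketched generically. This is a hedged illustration: a simple multiplicative shrink stands in for the Levenberg-Marquardt refinement, and all parameters are made up.

```python
import random

def breeder_style_optimize(fitness, local_refine, pop, generations=20,
                           mut_rate=0.1, rng=random):
    """Hybrid sketch: GA-style selection/crossover/mutation in the outer
    loop; every child is polished by a local-search step (LM stand-in)."""
    for _ in range(generations):
        pop.sort(key=fitness)                      # minimization
        parents = pop[: len(pop) // 2]             # truncation selection
        children = []
        for _ in range(len(pop) - len(parents)):
            a, b = rng.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]      # blend crossover
            child = [x + rng.gauss(0, mut_rate) for x in child]  # mutation
            children.append(local_refine(child))   # inner local polish
        pop = parents + children
    return min(pop, key=fitness)
```

Because the best parent survives every generation, the returned fitness never exceeds the best initial fitness, while the local step accelerates convergence within each basin the GA discovers.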
2014-01-01
This paper aims to present an experimental investigation for optimum tribological behavior (wear depth and coefficient of friction) of electroless Ni-P-Cu coatings based on four process parameters using artificial bee colony algorithm. Experiments are carried out by utilizing the combination of three coating process parameters, namely, nickel sulphate, sodium hypophosphite, and copper sulphate, and the fourth parameter is postdeposition heat treatment temperature. The design of experiment is based on the Taguchi L27 experimental design. After coating, measurement of wear and coefficient of friction of each heat-treated sample is done using a multitribotester apparatus with block-on-roller arrangement. Both friction and wear are found to increase with increase of source of nickel concentration and decrease with increase of source of copper concentration. Artificial bee colony algorithm is successfully employed to optimize the multiresponse objective function for both wear depth and coefficient of friction. It is found that, within the operating range, a lower value of nickel concentration, medium value of hypophosphite concentration, higher value of copper concentration, and higher value of heat treatment temperature are suitable for having minimum wear and coefficient of friction. The surface morphology, phase transformation behavior, and composition of coatings are also studied with the help of scanning electron microscopy, X-ray diffraction analysis, and energy dispersed X-ray analysis, respectively. PMID:27382630
A Moving Target Environment for Computer Configurations Using Genetic Algorithms
Crouse, Michael; Fulp, Errin W.
2011-10-31
Moving Target (MT) environments for computer systems provide security through diversity by changing various system properties that are explicitly defined in the computer configuration. Temporal diversity can be achieved by making periodic configuration changes; however, in an infrastructure of multiple similarly purposed computers, diversity must also be spatial, ensuring that multiple computers do not simultaneously share the same configuration and potential vulnerabilities. Given the number of possible changes and their potential interdependencies, discovering computer configurations that are secure, functional, and diverse is challenging. This paper describes how a Genetic Algorithm (GA) can be employed to find temporally and spatially diverse secure computer configurations. In the proposed approach a computer configuration is modeled as a chromosome, where an individual configuration setting is a trait or allele. The GA operates by combining multiple chromosomes (configurations), which are tested for feasibility and ranked on performance, measured as resistance to attack. The results of successive iterations of the GA are secure configurations that are diverse due to the crossover and mutation processes. Simulation results demonstrate that this approach can provide an MT environment for a large infrastructure of similarly purposed computers by discovering temporally and spatially diverse secure configurations.
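Modeling a configuration as a chromosome can be sketched as below; the setting names and domains are hypothetical examples, not from the paper.

```python
import random

def crossover(cfg_a, cfg_b, rng=random):
    """One-point crossover over two configurations (dicts of setting ->
    value), cutting both parents at the same point in a fixed key order."""
    keys = sorted(cfg_a)
    point = rng.randrange(1, len(keys))
    return {k: (cfg_a[k] if i < point else cfg_b[k])
            for i, k in enumerate(keys)}

def mutate(cfg, domains, rate=0.1, rng=random):
    """Re-draw each setting from its allowed domain with probability `rate`."""
    return {k: (rng.choice(domains[k]) if rng.random() < rate else v)
            for k, v in cfg.items()}
```

Each offspring configuration would then be checked for feasibility (interdependent settings still valid) and scored by resistance to attack before entering the next generation.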
An Evaluation of Potentials of Genetic Algorithm in Shortest Path Problem
NASA Astrophysics Data System (ADS)
Hassany Pazooky, S.; Rahmatollahi Namin, Sh; Soleymani, A.; Samadzadegan, F.
2009-04-01
One of the most typical issues in combinatorial problems on transportation networks is the shortest path problem. In such networks, routing has a significant impact on performance. Due to the natural complexity of transportation networks and the strong impact of routing on decision-making fields such as traffic management and the vehicle routing problem (VRP), appropriate solutions to this problem are crucial. In recent years, different solutions have been proposed for the shortest path problem. These techniques fall into two categories: classic and evolutionary approaches. Two well-known classic algorithms are Dijkstra and A*. Dijkstra is a robust but time-consuming algorithm for the shortest path problem; A* is very similar, less robust but with higher performance. Genetic algorithms, on the other hand, are among the most widely applied evolutionary algorithms. A genetic algorithm searches several parts of the domain in parallel and is not trapped in local optima. In this paper, the potential of genetic algorithms for finding the shortest path is evaluated by comparing them with the classic algorithms (Dijkstra and A*). Evaluation of these techniques on a transportation network in an urban area shows that, because of the limited search space of the classic methods, the GA had better performance in finding the shortest path.
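For reference, the Dijkstra baseline used in the comparison can be sketched with a binary heap (a generic implementation, not the study's code).

```python
import heapq

def dijkstra(graph, src, dst):
    """Shortest-path cost from src to dst.
    graph: node -> list of (neighbor, non-negative edge weight)."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            return d                       # first pop of dst is optimal
        if d > dist.get(u, float("inf")):
            continue                       # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")                    # dst unreachable
```

Dijkstra guarantees optimality on non-negative weights, which is why it serves as the correctness reference when judging the GA's near-optimal paths.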
Study and development of tunable, single mode AlGaAs/GaAs lasers
Yu, P.K.L.; Liu, J.C. (Dept. of Electrical and Computer Engineering)
1990-09-01
Liquid phase epitaxy has been employed in this study to fabricate two-section wavelength-tunable lasers. The GaAs/AlGaAs and InGaAsP/InP material systems have been used for fabricating the lasers. Both direct (butt) coupling and evanescent coupling approaches have been studied. Complications associated with the regrowth process have been responsible for poor laser performance. Some DBR gratings for three-section lasers have been made using electron beam lithography at UCSD. A simple setup has been tested to measure the wavelength shift of GaAs/AlGaAs lasers. Also, a simple structure that avoids the regrowth process has been proposed for the two-section laser. 9 refs., 14 figs.
Guiding rational reservoir flood operation using penalty-type genetic algorithm
NASA Astrophysics Data System (ADS)
Chang, Li-Chiu
2008-06-01
Summary: Real-time flood control of a multi-purpose reservoir should both decrease the downstream flood peak stage and store floodwaters for future use during typhoon seasons. This study proposes a reservoir flood control optimization model with linguistic descriptions of the requirements and existing regulations for rational operating decisions. The approach formulates reservoir flood operation as an optimization problem and uses the genetic algorithm (GA) as a search engine. The formulation is expressed not only through the mathematical forms of the objective function and constraints, but also through requirements that have no analytic expression in terms of the parameters. The GA is used to search for a global optimum of this mixture of mathematical and non-mathematical formulations. Because of the large number of constraints and flood control requirements, it is difficult to reach a solution without violating constraints; to tackle this bottleneck, a proper penalty strategy for each parameter is proposed to guide the GA search. The proposed approach is applied to the Shihmen reservoir in northern Taiwan as a case study, to find rational releases and the desired storage. The hourly historical data sets of 29 typhoon events that have hit the area in the last thirty years are investigated by the proposed method. To demonstrate the effectiveness of the approach, the simplex method was also applied for comparison. The results demonstrate that a penalty-type genetic algorithm can effectively provide rational release hydrographs that reduce flood damage during flood operation and increase final storage for future use.
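The penalty strategy can be sketched as a fitness wrapper: each constraint contributes a weighted amount only when violated. The reservoir quantities below are placeholders, not the Shihmen rule curves.

```python
def penalized_fitness(objective, violations, weights):
    """GA fitness = base objective + weighted sum of positive constraint
    violations; feasible solutions incur no penalty at all."""
    return objective + sum(w * max(0.0, v)
                           for w, v in zip(weights, violations))

def reservoir_violations(release, storage, channel_capacity=3000.0,
                         storage_target=1500.0):
    # Hypothetical constraints: release above safe channel capacity,
    # and end-of-operation storage below the target, are both penalized.
    return [release - channel_capacity, storage_target - storage]
```

Tuning a separate weight per constraint is what lets the GA trade off, for example, a small capacity exceedance against a large storage shortfall.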
PDE Nozzle Optimization Using a Genetic Algorithm
NASA Technical Reports Server (NTRS)
Billings, Dana; Turner, James E. (Technical Monitor)
2000-01-01
Genetic algorithms, which simulate evolution in natural systems, have been used to find solutions to optimization problems that seem intractable to standard approaches. In this study, the feasibility of using a GA to find an optimal fixed-profile nozzle for a pulse detonation engine (PDE) is demonstrated. The objective was to maximize impulse during the detonation wave passage and blow-down phases of operation. The impulse of each profile variant was obtained by using the CFD code Mozart/2.0 to simulate the transient flow. After 7 generations, the method identified a nozzle profile that is a strong candidate for the optimal solution. The constraints on the generality of this possible solution remain to be clarified.
Haque, Mohammad Nazmul; Noman, Nasimul; Berretta, Regina; Moscato, Pablo
2016-01-01
Classification of datasets with imbalanced sample distributions has always been a challenge. In general, a popular approach for enhancing classification performance is the construction of an ensemble of classifiers. However, the performance of an ensemble depends on the choice of constituent base classifiers. Therefore, we propose a genetic algorithm-based search method for finding the optimum combination from a pool of base classifiers to form a heterogeneous ensemble. The algorithm, called GA-EoC, utilises 10-fold cross-validation on training data to evaluate the quality of each candidate ensemble. To combine the base classifiers' decisions into the ensemble's output, we used the simple and widely used majority voting approach. The proposed algorithm, along with a random sub-sampling approach to balance the class distribution, has been used for classifying class-imbalanced datasets. Additionally, if a feature set was not available, we used the (α, β) - k Feature Set method to select a better subset of features for classification. We have tested GA-EoC with three benchmarking datasets from the UCI Machine Learning repository, one Alzheimer's disease dataset, and a subset of the PubFig database of Columbia University. In general, the performance of the proposed method on the chosen datasets is robust and better than that of the constituent base classifiers and many other well-known ensembles. Based on our empirical study, we claim that a genetic algorithm is a superior and reliable approach to heterogeneous ensemble construction, and we expect the proposed GA-EoC to perform consistently in other cases. PMID:26764911
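The majority-voting combiner can be sketched directly (illustrative, with hypothetical label arrays).

```python
from collections import Counter

def majority_vote(predictions):
    """Combine base-classifier outputs: `predictions` holds one label list
    per classifier; the ensemble output for each sample is the most
    common vote across classifiers."""
    return [Counter(votes).most_common(1)[0][0]
            for votes in zip(*predictions)]
```

The GA's chromosome then simply encodes which classifiers from the pool participate in this vote, with cross-validated accuracy of the combined output as fitness.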
NASA Astrophysics Data System (ADS)
Chapman, Alexander Lloyd
Recently, a sound source identification technique called CRAFT was developed as an advance in the state of the art in inverse noise problems. It addressed some limitations associated with nearfield acoustic holography and a few of the issues with inverse boundary element method. This work centers on two critical issues associated with the CRAFT algorithm. Although CRAFT employs the complete general solution associated with the Helmholtz equation, the approach taken to derive those equations results in computational inefficiency when implemented numerically. In this work, a mathematical approach to derivation of the basis equations results in a doubling in efficiency. This formulation of CRAFT is termed general Helmholtz equation, least-squares method (GEN-HELS). Additionally, the numerous singular points present in the gradient of the basis functions are shown here to resolve to finite limits. As a realistic test case, a diesel engine surface pressure and velocity are reconstructed to show the increase in efficiency from CRAFT to GEN-HELS. Keywords: Inverse Numerical Acoustics, Acoustic Holography, Helmholtz Equation, HELS Method, CRAFT Algorithm.
NASA Astrophysics Data System (ADS)
Wang, L. S.; Tripathy, S.; Chua, S. J.; Vicknesh, S.; Lin, V. K. X.; Zang, K. Y.; Arokiaraj, J.; Tan, J. N.; Ramam, A.
2008-12-01
We report the growth of InGaN/GaN multi-quantum-well (MQW) structures and GaN layers on silicon-on-insulator (SOI) substrates by metalorganic chemical vapor deposition (MOCVD). The growth conditions were tuned to realize blue-green emission peaks centered around 420-495 nm from such MQWs on SOI. X-ray diffraction, atomic force microscopy, scanning electron microscopy, and photoluminescence techniques were used to characterize the MQWs. Using a combination of selective dry etching techniques, GaN micromechanical structures are demonstrated on SOI substrates. The dry releasing technique employs controlled gas-phase pulse etching with non-plasma xenon difluoride (XeF2), which selectively etches the Si overlayer of the SOI, thus undercutting the GaN material on top. The mechanical properties of these released microstructures are characterized by micro-Raman spectroscopy. Such an approach to realizing multi-color light-emitting InGaN/GaN MQW structures and GaN micromechanical structures on SOI substrates is suitable for the integration of InGaN/GaN-based optoelectronic structures on SOI-based micro-opto-electromechanical systems (MOEMS) and sensors.
Pruning Neural Networks with Distribution Estimation Algorithms
Cantu-Paz, E
2003-01-15
This paper describes the application of four evolutionary algorithms to the pruning of neural networks used in classification problems. Besides a simple genetic algorithm (GA), the paper considers three distribution estimation algorithms (DEAs): a compact GA, an extended compact GA, and the Bayesian Optimization Algorithm. The objective is to determine whether the DEAs present advantages over the simple GA in terms of accuracy or speed on this problem. The experiments used a feed-forward neural network trained with standard backpropagation, on public-domain and artificial data sets. The pruned networks seemed to have better or equal accuracy compared to the original fully connected networks; only in a few cases did pruning result in less accurate networks. We found few differences in the accuracy of the networks pruned by the four EAs, but important differences in execution time. The results suggest that a simple GA with a small population might be the best algorithm for pruning networks on the data sets we tested.
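The compact GA mentioned above replaces the population with a probability vector. A minimal sketch on bitstrings follows, with OneMax as a stand-in fitness (the paper's fitness would instead score a pruning mask by the pruned network's accuracy); all parameters are illustrative.

```python
import random

def compact_ga(fitness, n_bits, iters=2000, pop_size=20, rng=random):
    """Compact GA: a distribution estimation algorithm that evolves one
    probability-of-1 per bit instead of an explicit population."""
    p = [0.5] * n_bits
    for _ in range(iters):
        a = [1 if rng.random() < pi else 0 for pi in p]
        b = [1 if rng.random() < pi else 0 for pi in p]
        winner, loser = (a, b) if fitness(a) >= fitness(b) else (b, a)
        for i in range(n_bits):
            if winner[i] != loser[i]:          # shift p[i] toward the winner
                p[i] += (1.0 / pop_size) if winner[i] else -(1.0 / pop_size)
                p[i] = min(1.0, max(0.0, p[i]))
    return [1 if pi >= 0.5 else 0 for pi in p]
```

Memory is O(n_bits) rather than O(population x n_bits), which is the compact GA's main appeal; `pop_size` only sets the update step 1/pop_size.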
AlGaAs ridge laser with 33% wall-plug efficiency at 100 °C based on a design of experiments approach
NASA Astrophysics Data System (ADS)
Fecioru, Alin; Boohan, Niall; Justice, John; Gocalinska, Agnieszka; Pelucchi, Emanuele; Gubbins, Mark A.; Mooney, Marcus B.; Corbett, Brian
2016-04-01
Upcoming applications for semiconductor lasers present limited thermal dissipation routes, demanding the highest-efficiency devices at high operating temperatures. This paper reports a comprehensive design of experiments optimisation of the epitaxial layer structure of AlGaAs-based 840 nm lasers for operation at high temperature (100 °C) using Technology Computer-Aided Design software. The waveguide thickness, Al content, doping level, and quantum well thickness were optimised. The resultant design was grown, and the fabricated ridge waveguides were optimised for carrier injection; at 100 °C, the lasers achieve a total power output of 28 mW at a current of 50 mA and a total slope efficiency of 0.82 W A-1, with a corresponding wall-plug efficiency of 33%.
Hybrid UV Imager Containing Face-Up AlGaN/GaN Photodiodes
NASA Technical Reports Server (NTRS)
Zheng, Xinyu; Pain, Bedabrata
2005-01-01
A proposed hybrid ultraviolet (UV) image sensor would comprise a planar membrane array of face-up AlGaN/GaN photodiodes integrated with a complementary metal oxide/semiconductor (CMOS) readout-circuit chip. Each pixel in the hybrid image sensor would contain a UV photodiode on the AlGaN/GaN membrane, metal oxide/semiconductor field-effect transistor (MOSFET) readout circuitry on the CMOS chip underneath the photodiode, and a metal via connection between the photodiode and the readout circuitry (see figure). The proposed sensor design would offer all the advantages of comparable prior CMOS active-pixel sensors and AlGaN UV detectors while overcoming some of the limitations of prior (AlGaN/sapphire)/CMOS hybrid image sensors that have been designed and fabricated according to the methodology of flip-chip integration. AlGaN is a nearly ideal UV-detector material because its bandgap is wide and adjustable and it offers the potential to attain extremely low dark current. Integration of AlGaN with CMOS is necessary because at present there are no practical means of realizing readout circuitry in the AlGaN/GaN material system, whereas the means of realizing readout circuitry in CMOS are well established. In one variant of the flip-chip approach to integration, an AlGaN chip on a sapphire substrate is inverted (flipped) and then bump-bonded to a CMOS readout circuit chip; this variant results in poor quantum efficiency. In another variant of the flip-chip approach, an AlGaN chip on a crystalline AlN substrate would be bonded to a CMOS readout circuit chip; this variant is expected to result in narrow spectral response, which would be undesirable in many applications. Two other major disadvantages of flip-chip integration are large pixel size (a consequence of the need to devote sufficient area to each bump bond) and severe restriction on the photodetector structure. The membrane array of AlGaN/GaN photodiodes and the CMOS readout circuit for the proposed image sensor would
NASA Astrophysics Data System (ADS)
Wu, Cheng-Hsien; Su, Yan-Kuin; Chang, Shoou-Jinn; Huang, Ying-Sheng; Hsu, Hung-Pin
2004-07-01
An InGaAs/GaAsP strain-compensated layer has been proposed as a base material for GaAs-based double heterojunction bipolar transistors (DHBTs). As is known, decreasing the bandgap energy of the base layer in heterojunction bipolar transistors (HBTs) results in a smaller turn-on voltage, and using InGaAs as the base material is one possible approach to this end. However, the compressive strain induced by InGaAs diminishes the bandgap reduction gained by adding indium, and thus abates the advantage of turn-on voltage reduction. In this study, a 280 Å GaAs0.81P0.19 layer was inserted below the In0.054Ga0.946As base layer to compensate for the compressive strain induced by the InGaAs base layer. The results show that the InGaAs/GaAsP strain-compensated layer reduces the turn-on voltage by 20 mV; a turn-on voltage reduction of 190 mV over a conventional HBT with a GaAs base layer is achieved by utilizing the In0.054Ga0.946As/GaAs0.81P0.19 strain-compensated base layer. This particular DHBT has a small offset voltage of 55 mV and a knee voltage of 0.6 V. A peak current gain of 58.98, a unity-current-gain cut-off frequency fT of 22 GHz, and a unilateral power gain cut-off frequency fMAX of 25 GHz are also achieved.
GA-optimization for rapid prototype system demonstration
NASA Technical Reports Server (NTRS)
Kim, Jinwoo; Zeigler, Bernard P.
1994-01-01
An application of the Genetic Algorithm (GA) is discussed. A novel Hierarchical GA scheme was developed to solve complicated engineering problems that require optimization of a large number of parameters with high precision. High-level GAs search for the few parameters to which system performance is most sensitive, while low-level GAs search in more detail, employing a greater number of parameters for further optimization. The complexity of the search is thereby decreased, and computing resources are used more efficiently.
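The high-level/low-level split can be illustrated with a two-stage search; the two-parameter cost surface below is an invented stand-in (the "coarse" variable dominates the cost, the "fine" one adds precision), not the paper's engineering problem.

```python
import random

random.seed(1)

def objective(coarse, fine):
    """Toy cost surface: 'coarse' dominates, 'fine' only refines."""
    return (coarse - 3.0) ** 2 + 0.1 * (fine - 0.7) ** 2

def mini_ga(evaluate, lo, hi, pop_size=16, generations=25, sigma=None):
    """A tiny real-coded, elitist GA over a single parameter in [lo, hi]."""
    sigma = sigma or (hi - lo) / 10
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=evaluate)                 # minimize
        parents = pop[:pop_size // 2]
        children = [min(hi, max(lo,
                        (random.choice(parents) + random.choice(parents)) / 2
                        + random.gauss(0, sigma)))  # blend crossover + mutation
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return min(pop, key=evaluate)

# High-level GA: locate the sensitive parameter with the fine one held fixed.
coarse = mini_ga(lambda c: objective(c, 0.5), lo=-10, hi=10)
# Low-level GA: refine the remaining parameter in a narrower range.
fine = mini_ga(lambda f: objective(coarse, f), lo=0, hi=1, sigma=0.05)
```

Splitting the search this way keeps each GA's population small, which is the resource saving the abstract points to.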
Reasoning about systolic algorithms
Purushothaman, S.; Subrahmanyam, P.A.
1988-12-01
The authors present a methodology for verifying correctness of systolic algorithms. The methodology is based on solving a set of Uniform Recurrence Equations obtained from a description of systolic algorithms as a set of recursive equations. They present an approach to mechanically verify correctness of systolic algorithms, using the Boyer-Moore theorem proven. A mechanical correctness proof of an example from the literature is also presented.
NASA Astrophysics Data System (ADS)
Izadi, Arman; Kimiagari, Ali Mohammad
2014-05-01
Distribution network design, as a strategic decision, has a long-term effect on tactical and operational supply chain management. In this research, the location-allocation problem is studied under demand uncertainty. The purposes of this study were to specify the optimal number and location of distribution centers and to determine the allocation of customer demands to distribution centers. The main feature of this research is solving the model with an unknown demand function, which suits real-world problems. To account for uncertainty, a set of possible scenarios for customer demands is created based on Monte Carlo simulation. The coefficient of variation of costs is used as a measure of risk, and the most stable structure for the firm's distribution network is defined based on the concept of robust optimization. The best structure is identified using genetic algorithms, yielding a 14% reduction in total supply chain costs. Moreover, it imposes the least cost variation caused by fluctuations in customer demand (such as epidemic disease outbreaks in some areas of the country) on the logistical system. This research was conducted in one of the largest pharmaceutical distribution firms in Iran.
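The scenario-based robustness idea can be sketched as follows. Everything here is hypothetical: the opening costs, the customer-to-center shipping costs, and the exact fitness (mean cost inflated by its coefficient of variation) are illustrative assumptions, not the paper's data or model.

```python
import random
import statistics as stats

random.seed(5)

# Hypothetical data: 4 candidate distribution centers, 6 customers.
FIXED = [10.0, 12.0, 8.0, 15.0]                    # cost of opening each DC
DIST = [[1, 4, 7, 2], [2, 3, 6, 5], [6, 1, 2, 7],  # customer-to-DC unit cost
        [5, 2, 1, 6], [3, 6, 4, 1], [7, 5, 3, 2]]

def scenario_cost(opened, demands):
    """Cost of one demand scenario: opening costs plus serving each customer
    from its cheapest opened DC. Requires at least one DC open."""
    open_idx = [j for j, o in enumerate(opened) if o]
    fixed = sum(FIXED[j] for j in open_idx)
    ship = sum(d * min(DIST[i][j] for j in open_idx)
               for i, d in enumerate(demands))
    return fixed + ship

# Monte Carlo demand scenarios (uniform fluctuation around a nominal demand).
SCENARIOS = [[random.uniform(0.5, 1.5) for _ in range(6)] for _ in range(50)]

def robust_fitness(opened):
    # Penalise both expected cost and its coefficient of variation, so the
    # GA prefers structures that stay cheap across demand fluctuations.
    if not any(opened):
        return float("inf")
    costs = [scenario_cost(opened, s) for s in SCENARIOS]
    mean = stats.mean(costs)
    cv = stats.pstdev(costs) / mean
    return mean * (1 + cv)

def ga(pop_size=20, generations=30):
    pop = [[random.randint(0, 1) for _ in range(4)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=robust_fitness)
        parents = pop[:pop_size // 2]
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, 4)                     # crossover
            children.append([g ^ (random.random() < 0.1)    # mutation
                             for g in a[:cut] + b[cut:]])
        pop = parents + children
    return min(pop, key=robust_fitness)

best = ga()
```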
Optimization of solar air collector using genetic algorithm and artificial bee colony algorithm
NASA Astrophysics Data System (ADS)
Şencan Şahin, Arzu
2012-11-01
The thermal performance of a solar air collector depends on many parameters, such as inlet air temperature, air velocity, collector slope, and properties of the collector itself. In this study, the effects of the different parameters that influence the performance of the solar air collector are investigated. In order to maximize the thermal performance of a solar air collector, a genetic algorithm (GA) and an artificial bee colony (ABC) algorithm have been used. The results obtained indicate that the GA and ABC algorithms can be applied successfully to the optimization of the thermal performance of solar air collectors.
Use of genetic algorithms for computer-aided diagnosis of breast cancers from image features
NASA Astrophysics Data System (ADS)
Floyd, Carey E., Jr.; Tourassi, Georgia D.; Baker, Jay A.
1996-04-01
In this investigation we explore genetic algorithms as a technique to train the weights in a feed-forward neural network designed to predict breast cancer based on mammographic findings and patient history. Mammograms were obtained from 206 patients who underwent breast biopsy. Mammographic findings were recorded by radiologists for each patient, along with the outcome of the biopsy. Of the 206 cases, 73 were malignant and 133 were benign at the time of biopsy. A genetic algorithm (GA) was developed to adjust the weights of an artificial neural network (ANN) so that the ANN would output the outcome of the biopsy when the mammographic findings were given as inputs. The GA is a technique for function optimization that reflects biological genetic evolution. The ANN was a fully connected feed-forward network using a sigmoid activation, with 11 inputs, one hidden layer of 10 nodes, and one output node (benign/malignant). The GA approach allows much flexibility in selecting the function to be optimized; in this work both mean-squared error (MSE) and receiver operating characteristic (ROC) curve area (Az) were explored as optimization criteria. The system was trained using bootstrap sampling. Optimizing for the two criteria results in different solutions; the best solution was obtained by minimizing a linear combination of MSE and (1-Az). ROC areas were 0.82 plus or minus 0.07, somewhat less than those obtained using backpropagation for ANN training: 0.90 plus or minus 0.05. This is the first description of a genetic algorithm for breast cancer diagnosis. The novel advantage of this technique is the ability to optimize the system for maximizing ROC area rather than minimizing mean-squared error. The flexibility of the GA approach allows optimization of cost functions that have direct relevance to breast cancer prediction.
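The key trick the abstract highlights, using a GA so the fitness can include ROC area (Az) rather than only differentiable error, can be sketched as below. As a simplification, a linear scorer stands in for the 11-10-1 ANN, and the tiny labeled data set is invented.

```python
import random

random.seed(2)

# Toy ranking problem: find weights w so that score = w . x ranks positives
# above negatives (Az = 1) while also keeping squared error low, mirroring
# the mixed MSE + (1 - Az) criterion.
DATA = [([0.9, 0.2], 1), ([0.8, 0.4], 1), ([0.7, 0.1], 1),
        ([0.2, 0.8], 0), ([0.3, 0.9], 0), ([0.1, 0.6], 0)]

def scores(w):
    return [(sum(wi * xi for wi, xi in zip(w, x)), y) for x, y in DATA]

def auc(w):
    """ROC area by pairwise comparison of positive vs negative scores."""
    s = scores(w)
    pos = [v for v, y in s if y == 1]
    neg = [v for v, y in s if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def mse(w):
    # Clamp scores to [0, 1] before comparing with the 0/1 label.
    return sum((min(1.0, max(0.0, v)) - y) ** 2
               for v, y in scores(w)) / len(DATA)

def fitness(w):                        # minimize MSE + (1 - Az)
    return mse(w) + (1 - auc(w))

def ga(pop_size=30, generations=40):
    pop = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[:pop_size // 2]        # elitist truncation selection
        pop = parents + [
            [(a + b) / 2 + random.gauss(0, 0.1)   # blend crossover + mutation
             for a, b in zip(random.choice(parents), random.choice(parents))]
            for _ in range(pop_size - len(parents))]
    return min(pop, key=fitness)

w = ga()
```

Because the GA only needs fitness values, not gradients, swapping MSE for (1 - Az) or any weighted mix of the two is a one-line change.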
Comparison of Metabolic Pathways in Escherichia coli by Using Genetic Algorithms.
Ortegon, Patricia; Poot-Hernández, Augusto C; Perez-Rueda, Ernesto; Rodriguez-Vazquez, Katya
2015-01-01
In order to understand how cellular metabolism has taken its modern form, the conservation and variations between metabolic pathways were evaluated by using a genetic algorithm (GA). The GA approach considered information on the complete metabolism of the bacterium Escherichia coli K-12, as deposited in the KEGG database, and the enzymes belonging to a particular pathway were transformed into enzymatic step sequences by using the breadth-first search algorithm. These sequences represent contiguous enzymes linked to each other based on their catalytic activities, as encoded in their Enzyme Commission numbers. In a subsequent step, these sequences were compared using a GA in an all-against-all (pairwise comparison) approach. Individual reactions were chosen based on their measure of fitness to act as parents of offspring, which constitute the new generation. The compared sequences were used to construct a similarity matrix (of fitness values) that was then clustered using a k-medoids algorithm. A total of 34 clusters of conserved reactions were obtained, and their sequences were finally aligned with a multiple-sequence alignment GA optimized to align all the reaction sequences included in each group or cluster. From these comparisons, maps associated with the metabolism of similar compounds also contained similar enzymatic step sequences, reinforcing the Patchwork Model for the evolution of metabolism in E. coli K-12, an observation that can be extended to other organisms for which metabolic information is available. Finally, our mapping of these reactions is discussed, with illustrations from a particular case. PMID:25973143
NASA Technical Reports Server (NTRS)
Shah, Ankoor S.; Knuth, Kevin H.; Truccolo, Wilson A.; Ding, Ming-Zhou; Bressler, Steven L.; Schroeder, Charles E.; Clancy, Daniel (Technical Monitor)
2002-01-01
Accurate measurement of single-trial responses is key to the definitive use of complex electromagnetic and hemodynamic measurements in the investigation of brain dynamics. We developed the multiple-component Event-Related Potential (mcERP) approach to single-trial response estimation to improve our resolution of dynamic interactions between neuronal ensembles located in different layers within a cortical region and/or in different cortical regions. The mcERP model asserts that multiple components, defined as stereotypic waveforms, comprise the stimulus-evoked response, and that these components may vary in amplitude and latency from trial to trial. Maximum a posteriori (MAP) solutions for the model are obtained by iterating a set of equations derived from the posterior probability. Our first goal was to use the mcERP algorithm to analyze interactions (specifically latency and amplitude correlations) between responses in different layers within a cortical region. Thus, we evaluated the model by applying the algorithm to synthetic data containing two correlated local components and one independent far-field component. Three cases were considered: the local components were correlated by an interaction in their single-trial amplitudes, by an interaction in their single-trial latencies, or by an interaction in both amplitude and latency. We then analyzed the accuracy with which the algorithm estimated the component waveshapes and the single-trial parameters as a function of the linearity of each of these relationships. Extensions of these analyses to real data are discussed, as well as ongoing work to incorporate more detailed prior information.
Genetic algorithms for modelling and optimisation
NASA Astrophysics Data System (ADS)
McCall, John
2005-12-01
Genetic algorithms (GAs) are a heuristic search and optimisation technique inspired by natural evolution. They have been successfully applied to a wide range of real-world problems of significant complexity. This paper is intended as an introduction to GAs aimed at immunologists and mathematicians interested in immunology. We describe how to construct a GA and the main strands of GA theory before speculatively identifying possible applications of GAs to the study of immunology. An illustrative example of using a GA for a medical optimal control problem is provided. The paper also includes a brief account of the related area of artificial immune systems.
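The construction this introduction describes, a population evolved by selection, crossover, and mutation, can be written down in a few lines. The OneMax bit-counting fitness and all parameter values below are standard textbook choices used purely for illustration.

```python
import random

random.seed(3)

def onemax(bits):                      # toy fitness: count of 1-bits
    return sum(bits)

def tournament(pop, k=3):              # selection: best of k random picks
    return max(random.sample(pop, k), key=onemax)

def crossover(a, b):                   # one-point crossover
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(bits, rate=0.02):           # per-bit flip mutation
    return [b ^ (random.random() < rate) for b in bits]

def genetic_algorithm(n_bits=40, pop_size=50, generations=60):
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop = [mutate(crossover(tournament(pop), tournament(pop)))
               for _ in range(pop_size)]
    return max(pop, key=onemax)

best = genetic_algorithm()
```

Swapping `onemax` for any problem-specific fitness (for instance, a score for a candidate immunological model) turns this skeleton into the kind of application the paper surveys.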
NASA Astrophysics Data System (ADS)
Qian, Feng; Sun, Fan; Zhong, Weimin; Luo, Na
2013-09-01
An approach that combines genetic algorithm (GA) and control vector parameterization (CVP) is proposed to solve the dynamic optimization problems of chemical processes using numerical methods. In the new CVP method, control variables are approximated with polynomials based on state variables and time in the entire time interval. The iterative method, which reduces redundant expense and improves computing efficiency, is used with GA to reduce the width of the search region. Constrained dynamic optimization problems are even more difficult. A new method that embeds the information of infeasible chromosomes into the evaluation function is introduced in this study to solve dynamic optimization problems with or without constraint. The results demonstrated the feasibility and robustness of the proposed methods. The proposed algorithm can be regarded as a useful optimization tool, especially when gradient information is not available.
Novel GaAs surface phases via direct control of chemical potential
NASA Astrophysics Data System (ADS)
Zheng, C. X.; Tersoff, J.; Tang, W. X.; Morreau, A.; Jesson, D. E.
2016-05-01
Using in situ surface electron microscopy, we show that the surface chemical potential of GaAs (001), and hence the surface phase, can be systematically controlled by varying temperature with liquid Ga droplets present as Ga reservoirs. With decreasing temperature, the surface approaches equilibrium with liquid Ga. This provides access to a regime where we find phases ultrarich in Ga, extending the range of surface phases available in this technologically important system. The same behavior is expected to occur for similar binary or multicomponent semiconductors such as InGaAs.
Crystal growth of device quality GaAs in space
NASA Technical Reports Server (NTRS)
Gatos, H. C.; Lagowski, J.
1984-01-01
The crystal growth, device processing, and device-related properties and phenomena of GaAs are investigated. Our GaAs research revolves around these key thrust areas. The overall program combines: (1) studies of crystal growth with novel approaches to the engineering of semiconductor materials (i.e., GaAs and related compounds); (2) investigation and correlation of materials properties and electronic characteristics on a macro- and microscale; and (3) investigation of electronic properties and phenomena controlling device applications and device performance. A ground-based program has been developed to ensure successful experimentation with, and eventually processing of, GaAs in a near-zero-gravity environment.
Crossover Improvement for the Genetic Algorithm in Information Retrieval.
ERIC Educational Resources Information Center
Vrajitoru, Dana
1998-01-01
In information retrieval (IR), the aim of genetic algorithms (GA) is to help a system find, in a huge document collection, a good reply to a query expressed by the user. Analysis of phenomena seen during the implementation of a GA for IR has led to a new crossover operation, which is introduced and compared to other learning methods.…
Selective Area Sublimation: A Simple Top-down Route for GaN-Based Nanowire Fabrication.
Damilano, B; Vézian, S; Brault, J; Alloing, B; Massies, J
2016-03-01
Post-growth in situ partial SiNx masking of GaN-based epitaxial layers grown in a molecular beam epitaxy reactor is used to get GaN selective area sublimation (SAS) by high temperature annealing. Using this top-down approach, nanowires (NWs) with nanometer scale diameter are obtained from GaN and InxGa1-xN/GaN quantum well epitaxial structures. After GaN regrowth on InxGa1-xN/GaN NWs resulting from SAS, InxGa1-xN quantum disks (QDisks) with nanometer sizes in the three dimensions are formed. Low temperature microphotoluminescence experiments demonstrate QDisk multilines photon emission around 3 eV with individual line widths of 1-2 meV.
NASA Astrophysics Data System (ADS)
Islam, Sirajul; Talukdar, Bipul
2016-08-01
A Linked Simulation-Optimization (LSO) model based on a Clonal Selection Algorithm (CSA) was formulated for application in conjunctive irrigation management. A series of measures were taken to reduce the computational burden associated with the LSO approach. Certain modifications were made to the formulated CSA so as to decrease the number of function evaluations. In addition, a simple problem-specific code for a two-dimensional groundwater flow simulation model was developed. The flow model was further simplified by a novel approach of area reduction, in order to save computational time in simulation. The LSO model was applied in the irrigation command of the Pagladiya Dam Project in Assam, India. To evaluate the performance of the CSA, a Genetic Algorithm (GA) was used as a comparison base. The results from the CSA compared well with those from the GA; in fact, owing to the modifications made to it, the CSA consumed less computational time than the GA while converging to the optimal solution.
Kim, Ye Kyun; Ahn, Cheol Hyoun; Yun, Myeong Gu; Cho, Sung Woon; Kang, Won Jun; Cho, Hyung Koun
2016-01-01
In this paper, a simple and controllable “wet pulse annealing” technique for the fabrication of flexible amorphous InGaZnO thin film transistors (a-IGZO TFTs) processed at low temperature (150 °C) by using scalable vacuum deposition is proposed. This method entailed the quick injection of water vapor for 0.1 s and purge treatment in dry ambient in one cycle; the supply content of water vapor was simply controlled by the number of pulse repetitions. The electrical transport characteristics revealed a remarkable performance of the a-IGZO TFTs prepared at the maximum process temperature of 150 °C (field-effect mobility of 13.3 cm2 V−1 s−1; Ion/Ioff ratio ≈ 108; reduced I-V hysteresis), comparable to that of a-IGZO TFTs annealed at 350 °C in dry ambient. Upon analysis of the angle-resolved x-ray photoelectron spectroscopy, the good performance was attributed to the effective suppression of the formation of hydroxide and oxygen-related defects. Finally, by using the wet pulse annealing process, we fabricated, on a plastic substrate, an ultrathin flexible a-IGZO TFT with good electrical and bending performances. PMID:27198067
Hauschild, Dirk; Handick, Evelyn; Göhl-Gusenleitner, Sina; Meyer, Frank; Schwab, Holger; Benkert, Andreas; Pohlner, Stephan; Palm, Jörg; Tougaard, Sven; Heske, Clemens; Weinhardt, Lothar; Reinert, Friedrich
2016-08-17
Using reflection electron energy loss spectroscopy (REELS), we have investigated the optical properties at the surface of a chalcopyrite-based Cu(In,Ga)(S,Se)2 (CIGSSe) thin-film solar cell absorber, as well as an indium sulfide (InxSy) buffer layer before and after annealing. By fitting the characteristic inelastic scattering cross-section λK(E) to cross sections evaluated by the QUEELS-ε(k,ω)-REELS software package, we determine the surface dielectric function and optical properties of these samples. A comparison of the optical values at the surface of the InxSy film with bulk ellipsometry measurements indicates a good agreement between bulk- and surface-related optical properties. In contrast, the properties of the CIGSSe surface differ significantly from the bulk. In particular, a larger (surface) band gap than for bulk-sensitive measurements is observed, providing a complementary and independent confirmation of earlier photoelectron spectroscopy results. Finally, we derive the inelastic mean free path λ for electrons in InxSy, annealed InxSy, and CIGSSe at a kinetic energy of 1000 eV. PMID:27463021
Jalali-Heravi, Mehdi; Kyani, Anahita
2007-05-01
This paper introduces the genetic algorithm-kernel partial least square (GA-KPLS), as a novel nonlinear feature selection method. This technique combines genetic algorithms (GAs) as powerful optimization methods with KPLS as a robust nonlinear statistical method for variable selection. This feature selection method is combined with artificial neural network to develop a nonlinear QSAR model for predicting activities of a series of substituted aromatic sulfonamides as carbonic anhydrase II (CA II) inhibitors. Eight simple one- and two-dimensional descriptors were selected by GA-KPLS and considered as inputs for developing artificial neural networks (ANNs). These parameters represent the role of acceptor-donor pair, hydrogen bonding, hydrosolubility and lipophilicity of the active sites and also the size of the inhibitors on inhibitor-isozyme interaction. The accuracy of 8-4-1 networks was illustrated by validation techniques of leave-one-out (LOO) and leave-multiple-out (LMO) cross-validations and Y-randomization. Superiority of this method (GA-KPLS-ANN) over the linear one (MLR) in a previous work and also the GA-PLS-ANN in which a linear feature selection method has been used indicates that the GA-KPLS approach is a powerful method for the variable selection in nonlinear systems. PMID:17316919
Optimization of a genetic algorithm for searching molecular conformer space
NASA Astrophysics Data System (ADS)
Brain, Zoe E.; Addicoat, Matthew A.
2011-11-01
We present two sets of tunings that are broadly applicable to conformer searches of isolated molecules using a genetic algorithm (GA). In order to find the most efficient tunings for the GA, a second GA - a meta-genetic algorithm - was used to tune the first genetic algorithm to reliably find the already known a priori correct answer with minimum computational resources. It is shown that these tunings are appropriate for a variety of molecules with different characteristics, and most importantly that the tunings are independent of the underlying model chemistry but that the tunings for rigid and relaxed surfaces differ slightly. It is shown that for the problem of molecular conformational search, the most efficient GA actually reduces to an evolutionary algorithm.
Global path planning of mobile robots using a memetic algorithm
NASA Astrophysics Data System (ADS)
Zhu, Zexuan; Wang, Fangxiao; He, Shan; Sun, Yiwen
2015-08-01
In this paper, a memetic algorithm for global path planning (MAGPP) of mobile robots is proposed. MAGPP is a synergy of genetic algorithm (GA) based global path planning and a local path refinement. Particularly, candidate path solutions are represented as GA individuals and evolved with evolutionary operators. In each GA generation, the local path refinement is applied to the GA individuals to rectify and improve the paths encoded. MAGPP is characterised by a flexible path encoding scheme, which is introduced to encode the obstacles bypassed by a path. Both path length and smoothness are considered as fitness evaluation criteria. MAGPP is tested on simulated maps and compared with other counterpart algorithms. The experimental results demonstrate the efficiency of MAGPP and it is shown to obtain better solutions than the other compared algorithms.
Systematic investigation on topological properties of layered GaS and GaSe under strain
An, Wei; Tian, Guang-Shan; Wu, Feng; Jiang, Hong; Li, Xin-Zheng
2014-08-28
The topological properties of layered β-GaS and ε-GaSe under strain are systematically investigated by ab initio calculations with the electronic exchange-correlation interactions treated beyond the generalized gradient approximation (GGA). Based on the GW method and the Tran-Blaha modified Becke-Johnson potential approach, we find that while ε-GaSe can be strain-engineered to become a topological insulator, β-GaS remains a trivial one even under strong strain, which is different from the prediction based on GGA. The reliability of the fixed volume assumption rooted in nearly all the previous calculations is discussed. By comparing to strain calculations with optimized inter-layer distance, we find that the fixed volume assumption is qualitatively valid for β-GaS and ε-GaSe, but there are quantitative differences between the results from the fixed volume treatment and those from more realistic treatments. This work indicates that it is risky to use theoretical approaches like GGA that suffer from the band gap problem to address physical properties, including, in particular, the topological nature of band structures, for which the band gap plays a crucial role. In the latter case, careful calibration against more reliable methods like the GW approach is strongly recommended.
The beam properties of high-power InGaAs/AlGaAs quantum well lasers
NASA Astrophysics Data System (ADS)
Wu, Xiang; Lu, Zukang; Wang, You; Takiguchi, Yoshihiro; Kan, Hirofumi
2003-11-01
The vertical beam quality factor of the fundamental TE propagating mode for InGaAs/AlGaAs SCH DQW lasers emitting at 940 nm is investigated by using the transfer matrix method and the vectorial moment theory for non-paraxial beams. An experimental approach is given for the measurement of the equivalent vertical beam quality factor of an InGaAs/AlGaAs SCH DQW laser. It is shown that the vertical beam quality factor Mx2 is always larger than unity, regardless of whether the thickness of the active region of the laser diodes is much smaller than the emission wavelength.
Park, Ji-Hyeon; Mandal, Arjun; Kang, San; Chatterjee, Uddipta; Kim, Jin Soo; Park, Byung-Guon; Kim, Moon-Deock; Jeong, Kwang-Un; Lee, Cheul-Ro
2016-08-24
This article demonstrates, for the first time to the best of our knowledge, the merits of InGaN/GaN multiple quantum wells (MQWs) grown on hollow n-GaN nanowires (NWs) as a plausible alternative for stable photoelectrochemical water splitting and efficient hydrogen generation. These hollow nanowires are achieved by a growth method rather than by a conventional etching process, which makes the approach simple yet highly effective. We believe the relatively low Ga flux during the selective area growth (SAG) aids the hollow nanowire growth. To compare the optoelectronic properties, solid nanowires are also studied. We show that the lower thermal conductivity of the hollow n-GaN NWs affects the material quality of the InGaN/GaN MQWs by limiting In diffusion. As a result of this improvement in material quality and structural properties, the photocurrent and photosensitivity are enhanced compared to structures grown on solid n-GaN NWs. An incident photon-to-current efficiency (IPCE) of around 33.3% is recorded at a wavelength of 365 nm for the hollow NWs. We believe that multiple reflections of the incident light inside the hollow n-GaN NWs assist in producing a larger number of electron-hole pairs in the active region, which also increases the rate of hydrogen generation.
NASA Astrophysics Data System (ADS)
Russo, G.; Attili, A.; Battistoni, G.; Bertrand, D.; Bourhaleb, F.; Cappucci, F.; Ciocca, M.; Mairani, A.; Milian, F. M.; Molinelli, S.; Morone, M. C.; Muraro, S.; Orts, T.; Patera, V.; Sala, P.; Schmitt, E.; Vivaldo, G.; Marchetto, F.
2016-01-01
The calculation algorithm of a modern treatment planning system for ion-beam radiotherapy should ideally be able to deal with different ion species (e.g. protons and carbon ions), to provide relative biological effectiveness (RBE) evaluations and to describe different beam lines. In this work we propose a new approach for computing ion irradiation outcomes, the beamlet superposition (BS) model, which satisfies these requirements. This model applies and extends the concepts of previous fluence-weighted pencil-beam algorithms to quantities of radiobiological interest other than dose, i.e. RBE- and LET-related quantities. It describes an ion beam through a beam-line-specific, weighted superposition of universal beamlets. The universal physical and radiobiological irradiation effect of the beamlets on a representative set of water-like tissues is evaluated once, coupling the per-track information derived from FLUKA Monte Carlo simulations with the radiobiological effectiveness provided by the microdosimetric kinetic model and the local effect model. Thanks to an extension of the superposition concept, the beamlet irradiation action superposition is applicable to the evaluation of dose, RBE and LET distributions. The weight function for the beamlet superposition is derived from the beam phase space density at the patient entrance. A general beam model commissioning procedure is proposed, which has been successfully tested on the CNAO beam line. The BS model provides the evaluation of different irradiation quantities for different ions, the adaptability permitted by weight functions and the evaluation speed of analytical approaches. Benchmarking plans in simple geometries and clinical plans are shown to demonstrate the model capabilities.
Effect of GaAs substrate orientation on the growth kinetic of GaN layer grown by MOVPE
NASA Astrophysics Data System (ADS)
Laifi, J.; Chaaben, N.; Bouazizi, H.; Fourati, N.; Zerrouki, C.; El Gmili, Y.; Bchetnia, A.; Salvestrini, J. P.; El Jani, B.
2016-06-01
We have investigated the growth kinetics of low-temperature GaN nucleation layers (LT-GaN) grown on GaAs substrates with different crystalline orientations. GaN nucleation layers were grown by metal organic vapor phase epitaxy (MOVPE) in a temperature range of 500-600 °C on (001)-, (113)-, (112)- and (111)-oriented GaAs substrates. The growth was monitored in situ by laser reflectometry (LR). Using an optical model that includes time-dependent surface roughness and growth rate profiles, simulations were performed to best match the experimental reflectivity curves. The results are discussed and correlated with ex situ analyses, such as atomic force microscopy (AFM) and UV-visible spectral reflectance (SR). We show that the growth of the GaN nucleation layers results in the formation of GaN islands whose density and size vary greatly with both growth temperature and substrate orientation. Arrhenius plots of the growth rate for each substrate give activation energies varying from 0.20 eV for the (001) orientation to 0.35 eV for the (113) orientation. Using cathodoluminescence (CL), we also show that high temperature (800-900 °C) GaN layers grown on top of the low temperature (550 °C) GaN nucleation layers, themselves grown on the GaAs substrates with different orientations, exhibit a cubic or hexagonal phase depending on both growth temperature and substrate orientation.
NASA Technical Reports Server (NTRS)
Wang, Lui; Bayer, Steven E.
1991-01-01
Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
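The basic concepts named in this overview, fitness-proportional (roulette wheel) selection, one-point crossover, and bit-flip mutation, can be sketched with a canonical bitstring GA on the OneMax problem (all parameters below are illustrative):

```python
import random

def onemax(bits):
    # fitness: number of 1s (maximum equals the chromosome length)
    return sum(bits)

def roulette(pop, fits, rng):
    # fitness-proportional selection: spin a wheel sized by total fitness
    total = sum(fits)
    pick = rng.uniform(0, total)
    acc = 0.0
    for ind, f in zip(pop, fits):
        acc += f
        if acc >= pick:
            return ind
    return pop[-1]

rng = random.Random(42)
LENGTH, POP, GENS = 32, 40, 60
pop = [[rng.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENS):
    fits = [onemax(ind) + 1 for ind in pop]     # +1 keeps every fitness positive
    nxt = []
    while len(nxt) < POP:
        a, b = roulette(pop, fits, rng), roulette(pop, fits, rng)
        cut = rng.randrange(1, LENGTH)          # one-point crossover
        child = a[:cut] + b[cut:]
        for i in range(LENGTH):                 # bit-flip mutation at rate 1/LENGTH
            if rng.random() < 1.0 / LENGTH:
                child[i] = 1 - child[i]
        nxt.append(child)
    pop = nxt
best = max(pop, key=onemax)
```

Real applications replace `onemax` with a domain fitness function; the selection, crossover, and mutation machinery stays the same.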
NASA Astrophysics Data System (ADS)
Robin, C.; Pillet, N.; Peña Arteaga, D.; Berger, J.-F.
2016-02-01
Background: Although self-consistent multiconfiguration methods have been used for decades to address the description of atomic and molecular many-body systems, only a few trials have been made in the context of nuclear structure. Purpose: This work aims at the development of such an approach to describe in a unified way various types of correlations in nuclei in a self-consistent manner where the mean-field is improved as correlations are introduced. The goal is to reconcile the usually set-apart shell-model and self-consistent mean-field methods. Method: This approach is referred to as "variational multiparticle-multihole configuration mixing method." It is based on a double variational principle which yields a set of two coupled equations that determine at the same time the expansion coefficients of the many-body wave function and the single-particle states. The solution of this problem is obtained by building a doubly iterative numerical algorithm. Results: The formalism is derived and discussed in a general context, starting from a three-body Hamiltonian. Links to existing many-body techniques such as the formalism of Green's functions are established. First applications are done using the two-body D1S Gogny effective force. The numerical procedure is tested on the 12C nucleus to study the convergence features of the algorithm in different contexts. Ground-state properties as well as single-particle quantities are analyzed, and the description of the first 2+ state is examined. Conclusions: The self-consistent multiparticle-multihole configuration mixing method is fully applied for the first time to the description of a test nucleus. This study makes it possible to validate our numerical algorithm and leads to encouraging results. To test the method further, we will realize in the second article of this series a systematic description of more nuclei and observables obtained by applying the newly developed numerical procedure with the same Gogny force. As
NASA Astrophysics Data System (ADS)
Krishna, Hemanth; Kumar, Hemantha; Gangadharan, Kalluvalappil
2016-06-01
A magneto-rheological (MR) fluid damper offers a cost-effective solution for semiactive vibration control in an automobile suspension. The performance of an MR damper depends significantly on the electromagnetic circuit incorporated into it. The force developed by an MR fluid damper is highly influenced by the magnetic flux density induced in the fluid flow gap. In the present work, optimization of the electromagnetic circuit of an MR damper is discussed in order to maximize the magnetic flux density. The optimization procedure combined a genetic algorithm with design-of-experiments techniques. The results show that a fluid flow gap size of less than 1.12 mm causes a significant increase in magnetic flux density.
Algorithm Engineering - An Attempt at a Definition
NASA Astrophysics Data System (ADS)
Sanders, Peter
This paper defines algorithm engineering as a general methodology for algorithmic research. The main process in this methodology is a cycle consisting of algorithm design, analysis, implementation and experimental evaluation that resembles Popper’s scientific method. Important additional issues are realistic models, algorithm libraries, benchmarks with real-world problem instances, and a strong coupling to applications. Algorithm theory with its process of subsequent modelling, design, and analysis is not a competing approach to algorithmics but an important ingredient of algorithm engineering.
Truss Optimization for a Manned Nuclear Electric Space Vehicle using Genetic Algorithms
NASA Technical Reports Server (NTRS)
Benford, Andrew; Tinker, Michael L.
2004-01-01
The purpose of this paper is to utilize the genetic algorithm (GA) optimization method for structural design of a nuclear propulsion vehicle. Genetic algorithms provide a guided, random search technique that mirrors biological adaptation. To verify the GA capabilities, other traditional optimization methods were used to generate results for comparison to the GA results, first for simple two-dimensional structures, and then for full-scale three-dimensional truss designs.
Genetic algorithm-based form error evaluation
NASA Astrophysics Data System (ADS)
Cui, Changcai; Li, Bing; Huang, Fugui; Zhang, Rencheng
2007-07-01
Form error evaluation of geometrical products is a nonlinear optimization problem that has been attempted by various methods of some complexity. A genetic algorithm (GA) was developed to deal with the problem; it proved simple to understand and implement, and its key techniques have been investigated in detail. Firstly, the fitness function of the GA was discussed as the bridge between the GA and the concrete problems to be solved. Secondly, the real-number representation of the desired solutions in the continuous-space optimization problem was discussed. Thirdly, several improved evolutionary strategies for the GA were described: the selection operation of 'odd number selection plus roulette wheel selection', the crossover operation of 'arithmetic crossover between near relatives and far relatives', and the 'adaptive Gaussian' mutation operation. After evolution from generation to generation with these strategies, the initial population, produced stochastically around the least-squares solutions of the problem, is updated and improved iteratively until the best chromosome or individual of the GA appears. Finally, some examples were given to verify the evolutionary method. Experimental results show that the GA-based method can find solutions superior to the least-squares solutions, except for a few examples in which the GA-based method obtains results similar to those of the least-squares method. Compared with other optimization techniques, the GA-based method obtains almost equal results but with less complicated models and less computation time.
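The strategies named above, a least-squares-seeded initial population, arithmetic crossover, and adaptive Gaussian mutation, can be sketched on a minimum-zone straightness problem (minimize the maximum deviation from a fitted line). The measurement data and schedule below are illustrative, and the 'odd number selection' operator is replaced here by plain elitism.

```python
import random

# hypothetical profile points: y = 0.5x with alternating +/-0.1 form deviation,
# so the minimum-zone (minimax) error is exactly 0.1
PTS = [(x, 0.5 * x + 0.1 * ((-1) ** x)) for x in range(10)]

def minimax_error(params):
    a, b = params
    return max(abs(y - (a * x + b)) for x, y in PTS)

def least_squares():
    # ordinary least-squares line fit, used to seed the population
    n = len(PTS)
    sx = sum(x for x, _ in PTS); sy = sum(y for _, y in PTS)
    sxx = sum(x * x for x, _ in PTS); sxy = sum(x * y for x, y in PTS)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return a, (sy - a * sx) / n

rng = random.Random(7)
a0, b0 = least_squares()
pop = [(a0 + rng.gauss(0, 0.05), b0 + rng.gauss(0, 0.05)) for _ in range(40)]
sigma = 0.05
for gen in range(80):
    pop.sort(key=minimax_error)
    elite = pop[:20]
    sigma *= 0.97                          # "adaptive" mutation: shrink over time
    kids = []
    while len(kids) < 20:
        (a1, b1), (a2, b2) = rng.sample(elite, 2)
        t = rng.random()                   # arithmetic crossover
        kids.append((t * a1 + (1 - t) * a2 + rng.gauss(0, sigma),
                     t * b1 + (1 - t) * b2 + rng.gauss(0, sigma)))
    pop = elite + kids
best = min(pop, key=minimax_error)
```

Here the least-squares line has a minimax error of about 0.121, while the true minimum-zone error is 0.100, so the GA improving on the least-squares seed mirrors the paper's observation.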
First Principles Electronic Structure of Mn doped GaAs, GaP, and GaN Semiconductors
Schulthess, Thomas C; Temmerman, Walter M; Szotek, Zdzislawa; Svane, Axel; Petit, Leon
2007-01-01
We present first-principles electronic structure calculations of Mn doped III-V semiconductors based on the local spin-density approximation (LSDA) as well as the self-interaction corrected local spin density method (SIC-LSD). We find that it is crucial to use a self-interaction free approach to properly describe the electronic ground state. The SIC-LSD calculations predict the proper electronic ground state configuration for Mn in GaAs, GaP, and GaN. Excellent quantitative agreement with experiment is found for magnetic moment and p-d exchange in (GaMn)As. These results allow us to validate commonly used models for magnetic semiconductors. Furthermore, we discuss the delicate problem of extracting binding energies of localized levels from density functional theory calculations. We propose three approaches to take into account final state effects to estimate the binding energies of the Mn-d levels in GaAs. We find good agreement between computed values and estimates from photoemission experiments.
Carrier capture dynamics of single InGaAs/GaAs quantum-dot layers
Chauhan, K. N.; Riffe, D. M.; Everett, E. A.; Kim, D. J.; Yang, H.; Shen, F. K.
2013-05-28
Using 800 nm, 25-fs pulses from a mode-locked Ti:Al2O3 laser, we have measured the ultrafast optical reflectivity of MBE-grown, single-layer In0.4Ga0.6As/GaAs quantum-dot (QD) samples. The QDs are formed via two-stage Stranski-Krastanov growth: following initial InGaAs deposition at a relatively low temperature, self-assembly of the QDs occurs during a subsequent higher-temperature anneal. The capture times for free carriers excited in the surrounding GaAs (barrier layer) are as short as 140 fs, indicating capture efficiencies for the InGaAs quantum layer approaching 1. The capture rates are positively correlated with initial InGaAs thickness and annealing temperature. With increasing excited carrier density, the capture rate decreases; this slowing of the dynamics is attributed to Pauli state blocking within the InGaAs quantum layer.
Electro-optic imagery of high-voltage GaAs photoconductive switches
Falk, R.A.; Adams, J.C.; Capps, C.D.; Ferrier, S.G.; Krinsky, J.A.
1995-01-01
The authors present electro-optic images of GaAs high-voltage photoconductive switches utilizing the electro-optic effect of the semi-insulating GaAs substrate. Experimental methodology for obtaining the images is described along with a self-calibrating data reduction algorithm. Use of the technique for observing fabrication defects is shown.
Sanmann, Jennifer N; Schaefer, G Bradley; Buehler, Bruce A; Sanger, Warren G
2012-03-01
Methyl-CpG binding protein 2 gene (MECP2) testing is indicated for patients with numerous clinical presentations, including Rett syndrome (classic and atypical), unexplained neonatal encephalopathy, Angelman syndrome, nonspecific mental retardation, autism (females), and an X-linked family history of developmental delay. Because of this complexity, a gender-specific approach for comprehensive MECP2 gene testing is described. Briefly, sequencing of exons 1 to 4 of MECP2 is recommended for patients with a Rett syndrome phenotype, unexplained neonatal encephalopathy, an Angelman syndrome phenotype (with negative 15q11-13 analysis), nonspecific mental retardation, or autism (females). Additional testing for large-scale MECP2 deletions is recommended for patients with Rett syndrome or Angelman syndrome phenotypes (with negative 15q11-13 analysis) following negative sequencing. Alternatively, testing for large-scale MECP2 duplications is recommended for males presenting with mental retardation, an X-linked family history of developmental delay, and a significant proportion of previously described clinical features (particularly a history of recurrent respiratory infections).
NASA Astrophysics Data System (ADS)
Hsiao, Feng-Hsiag
2016-10-01
In this study, a novel approach via improved genetic algorithm (IGA)-based fuzzy observer is proposed to realise exponential optimal H∞ synchronisation and secure communication in multiple time-delay chaotic (MTDC) systems. First, an original message is inserted into the MTDC system. Then, a neural-network (NN) model is employed to approximate the MTDC system. Next, a linear differential inclusion (LDI) state-space representation is established for the dynamics of the NN model. Based on this LDI state-space representation, this study proposes a delay-dependent exponential stability criterion derived in terms of Lyapunov's direct method, thus ensuring that the trajectories of the slave system approach those of the master system. Subsequently, the stability condition of this criterion is reformulated into a linear matrix inequality (LMI). Due to GA's random global optimisation search capabilities, the lower and upper bounds of the search space can be set so that the GA will seek better fuzzy observer feedback gains, accelerating feedback gain-based synchronisation via the LMI-based approach. IGA, which exhibits better performance than traditional GA, is used to synthesise a fuzzy observer to not only realise the exponential synchronisation, but also achieve optimal H∞ performance by minimizing the disturbance attenuation level and recovering the transmitted message. Finally, a numerical example with simulations is given in order to demonstrate the effectiveness of our approach.
Satellite remote sensing offers synoptic and frequent monitoring of optical water quality parameters, such as chlorophyll-a, turbidity, and colored dissolved organic matter (CDOM). While traditional satellite algorithms were developed for the open ocean, these algorithms often do...
A genetic-based neuro-fuzzy approach for modeling and control of dynamical systems.
Farag, W A; Quintana, V H; Lambert-Torres, G
1998-01-01
Linguistic modeling of complex irregular systems constitutes the heart of many control and decision-making systems, and fuzzy logic represents one of the most effective algorithms for building such linguistic models. In this paper, a linguistic (qualitative) modeling approach is proposed. The approach combines the merits of fuzzy logic theory, neural networks, and genetic algorithms (GAs). The proposed model is presented in a fuzzy-neural network (FNN) form which can handle both quantitative (numerical) and qualitative (linguistic) knowledge. The learning algorithm of an FNN is composed of three phases. The first phase is used to find the initial membership functions of the fuzzy model. In the second phase, a new algorithm is developed and used to extract the linguistic-fuzzy rules. In the third phase, a multiresolutional dynamic genetic algorithm (MRD-GA) is proposed and used for optimized tuning of the membership functions of the proposed model. Two well-known benchmarks are used to evaluate the performance of the proposed modeling approach and to compare it with other modeling approaches. PMID:18255764
NASA Astrophysics Data System (ADS)
Zhang, Xiang; Mitin, Vladimir; Choi, Jae Kyu; Sablon, Kimberly; Sergeev, Andrei
2016-05-01
We designed, fabricated, and characterized multi-color IR photodetectors with asymmetrical doping of GaAs/AlGaAs double quantum wells (DQW). We measured and analyzed spectral and noise characteristics to evaluate the feasibility of these photodetectors for remote temperature sensing at liquid nitrogen temperatures. The bias voltage controls the charge distribution between the two wells in a DQW unit and provides effective tuning of IR induced electron transitions. We have found that the responsivity of our devices is symmetrical and weakly dependent on the bias voltage because the doping asymmetry compensates the effect of dopant migration in the growth direction. At the same time, the asymmetrical doping strongly enhances the selectivity and tunability of the spectral characteristics by bias voltage. Multicolor detection of our QWIP is realized by varying the bias voltage: the maximum detection wavelength shifts from 7.5 μm to 11.1 μm as the applied bias is switched from -5 V to 4 V. Modeling shows a significant dependence of the photocurrent ratio on the object temperature regardless of its emissivity and geometrical factors. We also experimentally investigated the feasibility of our devices for remote temperature sensing by measuring the photocurrent as a response to blackbody radiation with temperatures from 300°C to 1000°C in the range of bias voltages from -5 V to 5 V. The agreement between modeling and experimental results demonstrates that our QWIP based on asymmetrically doped GaAs/AlGaAs DQW nanomaterial is capable of remote temperature sensing. By optimizing the physical design and varying the doping level of the quantum wells, we can generalize this approach to higher temperature measurements. In addition, continuous variation of the bias voltage provides fast collection of large amounts of photocurrent data at various biases and improves the accuracy of remote temperature measurements via an appropriate signal processing algorithm.
Multidisciplinary Multiobjective Optimal Design for Turbomachinery Using Evolutionary Algorithm
NASA Technical Reports Server (NTRS)
2005-01-01
This report summarizes Dr. Lian's efforts toward developing a robust and efficient tool for multidisciplinary and multi-objective optimal design for turbomachinery using evolutionary algorithms. This work consisted of two stages. In the first stage (from July 2003 to June 2004), Dr. Lian focused on building the essential capabilities required for the project. More specifically, he worked on two subjects: an enhanced genetic algorithm (GA) and an integrated optimization system with a GA and a surrogate model. In the second stage (from July 2004 to February 2005), Dr. Lian formulated aerodynamic optimization and structural optimization as a multi-objective optimization problem and performed multidisciplinary and multi-objective optimizations on a transonic compressor blade based on the proposed model. His numerical results showed that the proposed approach can effectively reduce the blade weight and increase the stage pressure ratio in an efficient manner. In addition, the new design was structurally safer than the original design. Five conference papers and three journal papers were published on this topic by Dr. Lian.
Electric field driven plasmon dispersion in AlGaN/GaN high electron mobility transistors
NASA Astrophysics Data System (ADS)
Tan, Ren-Bing; Qin, Hua; Zhang, Xiao-Yu; Xu, Wen
2013-11-01
We present a theoretical study of the electric field driven plasmon dispersion of the two-dimensional electron gas (2DEG) in AlGaN/GaN high electron mobility transistors (HEMTs). By introducing a drifted Fermi-Dirac distribution, we calculate the transport properties of the 2DEG at the AlGaN/GaN interface by employing the balance-equation approach based on the Boltzmann equation. The nonequilibrium Fermi-Dirac function is then obtained by applying the calculated electron drift velocity and electron temperature. Under the random phase approximation (RPA), the electric field driven plasmon dispersion is investigated. The calculated results indicate that the plasmon frequency is dominated by both the electric field E and the angle between the wavevector q and E. Importantly, the plasmon frequency can be tuned by the applied source-drain bias voltage in addition to the gate voltage (i.e., a change of the electron density).
NASA Astrophysics Data System (ADS)
Göktürkler, G.; Balkaya, Ç.
2012-10-01
Three naturally inspired meta-heuristic algorithms—the genetic algorithm (GA), simulated annealing (SA) and particle swarm optimization (PSO)—were used to invert some of the self-potential (SP) anomalies originated by polarized bodies with simple geometries. Both synthetic and field data sets were considered. The tests with the synthetic data comprised solutions with both noise-free and noisy data; in the tests with the field data, SP anomalies observed over a copper belt (India), graphite deposits (Germany) and a metallic sulfide (Turkey) were inverted. The model parameters included the electric dipole moment, polarization angle, depth, shape factor and origin of the anomaly. The estimated parameters were compared with those from previous studies using various optimization algorithms, mainly least-squares approaches, on the same data sets. During the test studies the solutions by GA, PSO and SA were consistent with each other; a good starting model was not a requirement to reach the global minimum. It can be concluded that the global optimization algorithms considered in this study were able to yield solutions compatible with those from widely used local optimization algorithms.
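One of the three meta-heuristics compared above, simulated annealing, can be sketched for a simplified SP inversion. The anomaly expression, true model, starting model, and cooling schedule below are illustrative assumptions, not the study's exact parameterization.

```python
import math
import random

def sp_anomaly(x, k, theta, h, x0=0.0, q=1.5):
    # simplified SP expression for a polarized body (shape factor q = 1.5: sphere)
    d = x - x0
    return k * (d * math.cos(theta) + h * math.sin(theta)) / (d * d + h * h) ** q

TRUE = (-100.0, math.radians(40), 5.0)          # k, polarization angle, depth
XS = [x / 2.0 for x in range(-60, 61)]          # synthetic profile stations
OBS = [sp_anomaly(x, *TRUE) for x in XS]        # noise-free "observed" data

def misfit(p):
    # least-squares data misfit to be minimized
    return sum((o - sp_anomaly(x, *p)) ** 2 for x, o in zip(XS, OBS))

rng = random.Random(11)
cur = [-50.0, math.radians(20), 8.0]            # a poor starting model is fine
cur_e = misfit(cur)
T = 1.0                                         # initial temperature
for step in range(4000):
    # propose a Gaussian perturbation of the current model
    cand = [cur[0] + rng.gauss(0, 5.0),
            cur[1] + rng.gauss(0, 0.05),
            max(0.5, cur[2] + rng.gauss(0, 0.3))]
    e = misfit(cand)
    # Metropolis rule: always accept improvements, sometimes accept worse moves
    if e < cur_e or rng.random() < math.exp((cur_e - e) / T):
        cur, cur_e = cand, e
    T *= 0.999                                  # geometric cooling
```

GA and PSO variants would replace the single-chain Metropolis walk with a population, but share the same misfit function.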
Investigation of range extension with a genetic algorithm
Austin, A. S., LLNL
1998-03-04
Range optimization is one of the tasks associated with the development of cost-effective, stand-off, air-to-surface munitions systems. The search for the optimal input parameters that will result in the maximum achievable range often employs conventional Monte Carlo techniques. Monte Carlo approaches can be time-consuming, costly, and insensitive to mutually dependent parameters and epistatic parameter effects. An alternative search and optimization technique is available in genetic algorithms. In the experiments discussed in this report, a simplified platform motion simulator was the fitness function for a genetic algorithm. The parameters to be optimized were the inputs to this motion generator, and the simulator's output (terminal range) was the fitness measure. The parameters of interest were initial launch altitude, initial launch speed, wing angle-of-attack, and engine ignition time. The parameter values the GA produced were validated by Monte Carlo investigations employing a full-scale six-degree-of-freedom (6 DOF) simulation. The best results produced by Monte Carlo processes using values based on the GA-derived parameters were within about 1% of the ranges generated by the simplified model using the evolved parameter values. This report has five sections. Section 2 discusses the motivation for the range extension investigation and reviews the surrogate flight model developed as a fitness function for the genetic algorithm tool. Section 3 details the representation and implementation of the task within the genetic algorithm framework. Section 4 discusses the results. Section 5 concludes the report with a summary and suggestions for further research.
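The GA-over-a-surrogate-simulator setup described in this report can be sketched with a toy stand-in for the flight model, a vacuum ballistic range formula; the launch envelope, parameter set, and GA settings below are illustrative, not the report's.

```python
import math
import random

G = 9.81

def terminal_range(params):
    # toy stand-in for the motion simulator: vacuum ballistic range
    h, v, theta = params               # altitude (m), speed (m/s), angle (rad)
    vy, vx = v * math.sin(theta), v * math.cos(theta)
    return vx * (vy + math.sqrt(vy * vy + 2 * G * h)) / G

BOUNDS = [(0, 5000), (100, 300), (0.1, 1.4)]   # hypothetical launch envelope

rng = random.Random(3)
pop = [[rng.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(30)]
for _ in range(60):
    pop.sort(key=terminal_range, reverse=True)  # maximize terminal range
    elite = pop[:15]
    kids = []
    while len(kids) < 15:
        a, b = rng.sample(elite, 2)
        child = [(x + y) / 2 for x, y in zip(a, b)]   # arithmetic crossover
        for i, (lo, hi) in enumerate(BOUNDS):         # Gaussian mutation, clipped
            child[i] = min(max(child[i] + rng.gauss(0, 0.05 * (hi - lo)), lo), hi)
        kids.append(child)
    pop = elite + kids
best = max(pop, key=terminal_range)
```

As in the report, the evolved parameters would then be validated against a higher-fidelity (e.g. 6 DOF) simulation rather than trusted directly.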
Yang, Yujue; Zeng, Yiping
2015-01-21
InGaN-based light-emitting diodes (LEDs) with specific designs of the quantum barrier layers, alternating InGaN barriers with GaN barriers, are proposed and studied numerically. For the proposed structure, simulation results show that the carriers are widely dispersed in the multi-quantum well active region, the radiative recombination rate is efficiently improved, and the electron leakage is suppressed accordingly, due to the appropriate band engineering. The internal quantum efficiency and light-output power are thus markedly enhanced and the efficiency droop is smaller, compared to the original structures with GaN barriers or InGaN barriers. Moreover, a gradual decrease of the indium composition in the alternating quantum barriers can further improve LED performance because of the more uniform carrier distribution, providing a simple but highly effective approach for high-performance LED applications.
Zhang, Zi-Hui; Tan, Swee Tiam; Liu, Wei; Ju, Zhengang; Zheng, Ke; Kyaw, Zabu; Ji, Yun; Hasanov, Namig; Sun, Xiao Wei; Demir, Hilmi Volkan
2013-02-25
This work reports both experimental and theoretical studies on the InGaN/GaN light-emitting diodes (LEDs) with optical output power and external quantum efficiency (EQE) levels substantially enhanced by incorporating p-GaN/n-GaN/p-GaN/n-GaN/p-GaN (PNPNP-GaN) current spreading layers in p-GaN. Each thin n-GaN layer sandwiched in the PNPNP-GaN structure is completely depleted due to the built-in electric field in the PNPNP-GaN junctions, and the ionized donors in these n-GaN layers serve as the hole spreaders. As a result, the electrical performance of the proposed device is improved and the optical output power and EQE are enhanced.
High nitrogen pressure solution growth of GaN
NASA Astrophysics Data System (ADS)
Bockowski, Michal
2014-10-01
Results of GaN growth from gallium solution under high nitrogen pressure are presented. The basics of the high nitrogen pressure solution (HNPS) growth method are described. A new approach to seeded growth, the multi-feed seed (MFS) configuration, is demonstrated. The use of two kinds of seeds is shown: free-standing hydride vapor phase epitaxy GaN (HVPE-GaN) obtained from metal organic chemical vapor deposition (MOCVD) GaN/sapphire templates, and free-standing HVPE-GaN obtained from ammonothermally grown GaN crystals. Depending on the seeds' structural quality, the differences in the structural properties of the pressure-grown material are demonstrated and analyzed. The role and influence of impurities, such as oxygen and magnesium, on GaN crystals grown from gallium solution in the MFS configuration are presented. The properties of differently doped GaN crystals are discussed. An application of the pressure-grown GaN crystals as substrates for electronic and optoelectronic devices is reported.
Hybrid Algorithms for Fuzzy Reverse Supply Chain Network Design
Che, Z. H.; Chiang, Tzu-An; Kuo, Y. C.
2014-01-01
In consideration of capacity constraints, fuzzy defect ratio, and fuzzy transport loss ratio, this paper attempted to establish an optimized decision model for production planning and distribution of a multiphase, multiproduct reverse supply chain, which addresses defects returned to original manufacturers, and in addition, develops hybrid algorithms such as Particle Swarm Optimization-Genetic Algorithm (PSO-GA), Genetic Algorithm-Simulated Annealing (GA-SA), and Particle Swarm Optimization-Simulated Annealing (PSO-SA) for solving the optimized model. During a case study of a multi-phase, multi-product reverse supply chain network, this paper explained the suitability of the optimized decision model and the applicability of the algorithms. Finally, the hybrid algorithms showed excellent solving capability when compared with original GA and PSO methods. PMID:24892057
GaN Based Electronics And Their Applications
NASA Astrophysics Data System (ADS)
Ren, Fan
2002-03-01
The Group III-nitrides were initially researched for their promise to fill the void for a blue solid state light emitter. Electronic devices from III-nitrides have been a more recent phenomenon. The thermal conductivity of GaN is three times that of GaAs. For high power or high temperature applications, good thermal conductivity is imperative for heat removal or sustained operation at elevated temperatures. The development of III-N and other wide bandgap technologies for high temperature applications will likely take place at the expense of competing technologies, such as silicon-on-insulator (SOI), at moderate temperatures. At higher temperatures (>300 °C), novel devices and components will become possible. The automotive industry will likely be one of the largest markets for such high temperature electronics. One of the most noteworthy advantages of III-N materials over other wide bandgap semiconductors is the availability of AlGaN/GaN and InGaN/GaN heterostructures. A 2-dimensional electron gas (2DEG) has been shown to exist at the AlGaN/GaN interface, and heterostructure field effect transistors (HFETs) from these materials can exhibit 2DEG mobilities approaching 2000 cm²/V·s at 300 K. Power handling capabilities of 12 W/mm appear feasible, and extraordinary large signal performance has already been demonstrated, with a current state of the art of >10 W/mm at X-band. In this talk, high speed and high temperature AlGaN/GaN HEMTs as well as MOSHEMTs, high breakdown voltage GaN (>6 kV) and AlGaN (9.7 kV) Schottky diodes, and their applications will be presented.
Theoretical studies of optical gain tuning by hydrostatic pressure in GaInNAs/GaAs quantum wells
Gladysiewicz, M.; Wartak, M. S.; Kudrawiec, R.
2014-01-21
In order to theoretically describe the tuning of the optical gain by hydrostatic pressure in GaInNAs/GaAs quantum wells (QWs), optical gain calculations within the k·p approach were developed and applied to N-containing and N-free QWs. The electronic band structure and the optical gain for the GaInNAs/GaAs QW were calculated within the 10-band k·p model, which takes into account the interaction of electron levels in the QW with the nitrogen resonant level in GaInNAs. It is shown that this interaction increases with hydrostatic pressure, and as a result the optical gain for the GaInNAs/GaAs QW decreases by about 40% and 80% for the transverse electric and transverse magnetic modes, respectively, as the hydrostatic pressure changes from 0 to 40 kbar. Such an effect is not observed for N-free QWs, where the dispersion of electron and hole energies remains unchanged with hydrostatic pressure. This is because the conduction and valence band potentials in the GaInAs/GaAs QW scale linearly with the hydrostatic pressure.
Research and experiment of InGaAs shortwave infrared imaging system based on FPGA
NASA Astrophysics Data System (ADS)
Ren, Ling; Min, Chaobo; Sun, Jianning; Gu, Yan; Yang, Feng; Zhu, Bo; Pan, Jingsheng; Guo, Yiliang
2015-04-01
The design of an InGaAs shortwave infrared imaging system and experiments on its imaging characteristics are introduced. Based on an InGaAs focal plane array, a real-time image processing architecture is developed, and the hardware circuitry and image processing software of the system are implemented on an FPGA. The system comprises a shortwave infrared lens, an InGaAs focal plane array, a temperature controller module, a power supply module, analog-to-digital and digital-to-analog converter modules, an FPGA image processing module, and an optical-mechanical structure. The main clock frequency of the system is 30 MHz, its output is a PAL analog signal, and its power dissipation is 2.6 W. The real-time signal processing includes a non-uniformity correction algorithm, a bad pixel replacement algorithm, and a histogram equalization algorithm. With this system, imaging characteristic tests of shortwave infrared were carried out for different targets under different conditions. In foggy weather, haze and fog penetration were tested; the system could be used to observe humans, boats, architecture, and mountains in haze and fog. The configuration and performance of the system are, respectively, logical and stable. This research on the InGaAs shortwave infrared imaging system contributes to the development of night vision technology.
Magnetic field dependence of binding energy in GaN/InGaN/GaN spherical QDQW nanoparticles
NASA Astrophysics Data System (ADS)
El Ghazi, Haddou; Jorio, Anouar; Zorkani, Izeddine
2013-10-01
A simultaneous study of the effects of the magnetic field and the impurity position on the ground-state shallow-donor binding energy in a GaN│InGaN│GaN (core│well│shell) spherical quantum dot-quantum well (SQDQW), as a function of the ratio of the inner to the outer radius, is reported. The calculations are performed within the framework of the effective-mass approximation, with an infinite deep potential describing the quantum confinement effect. A Ritz variational approach is used, taking into account the electron-impurity correlation and the magnetic field effect in the trial wave function. The binding energy depends strongly on the external magnetic field, the impurity position and the structure radius. It is found that (i) the magnetic field effect is more marked in a thick layer than in a thin one, and (ii) it is more pronounced at the center of the spherical layer than at its extremities.
Treier, Katrin; Berg, Annette; Diederich, Patrick; Lang, Katharina; Osberghaus, Anna; Dismer, Florian; Hubbuch, Jürgen
2012-10-01
Compared to traditional strategies, the application of high-throughput experiments combined with optimization methods can potentially speed up downstream process development and increase our understanding of processes. In contrast to the method of Design of Experiments in combination with response surface analysis (RSA), optimization approaches like genetic algorithms (GAs) can be applied to identify optimal parameter settings in multidimensional optimization tasks. In this article, the performance of a GA was investigated using parameter settings applicable to high-throughput downstream process development. The influence of population size, the design of the initial generation, and selection pressure on the optimization results was studied. To mimic typical experimental data, four mathematical functions were used for an in silico evaluation. The influence of GA parameters was minor on landscapes with only one optimum. On landscapes with several optima, parameters had a significant impact on GA performance and on success in finding the global optimum. Premature convergence increased as the number of parameters and the noise increased. RSA was shown to be comparable or superior for simple systems and low to moderate noise. For complex systems or high noise levels RSA failed, while GA optimization represented a robust tool for process optimization. Finally, the effect of different objective functions is shown exemplarily for a refolding optimization of lysozyme.
NASA Astrophysics Data System (ADS)
Allen, Christopher T.; Young, George S.; Haupt, Sue Ellen
In homeland security applications, it is often necessary to characterize the source location and strength of a potentially harmful contaminant. Correct source characterization requires accurate meteorological data such as wind direction. Unfortunately, available meteorological data is often inaccurate or unrepresentative, having insufficient spatial and temporal resolution for precise modeling of pollutant dispersion. To address this issue, a method is presented that simultaneously determines the surface wind direction and the pollutant source characteristics. This method compares monitored receptor data to pollutant dispersion model output and uses a genetic algorithm (GA) to find the combination of source location, source strength, and surface wind direction that best matches the dispersion model output to the receptor data. A GA optimizes variables using principles from genetics and evolution. The approach is validated with an identical twin experiment using synthetic receptor data and a Gaussian plume equation as the dispersion model. Given sufficient receptor data, the GA is able to reproduce the wind direction, source location, and source strength. Additional runs incorporating white noise into the receptor data to simulate real-world variability demonstrate that the GA is still capable of computing the correct solution, as long as the magnitude of the noise does not exceed that of the receptor data.
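The identical-twin setup described above can be sketched in a few lines: a GA searches jointly over source location, source strength, and wind direction, scoring each candidate by its squared mismatch with synthetic receptor data produced by a simplified Gaussian plume. All coordinates, parameter bounds, GA settings, and the crude dispersion coefficient below are illustrative assumptions, not values from the study.

```python
import math
import random

random.seed(0)

def gaussian_plume(src_x, src_y, q, wind_deg, rx, ry):
    """Very simplified ground-level Gaussian plume concentration at (rx, ry)."""
    th = math.radians(wind_deg)
    dx, dy = rx - src_x, ry - src_y
    down = dx * math.cos(th) + dy * math.sin(th)    # downwind distance
    cross = -dx * math.sin(th) + dy * math.cos(th)  # crosswind distance
    if down <= 0.0:
        return 0.0                                  # receptor is upwind
    sigma = 0.1 * down + 1.0                        # crude dispersion width
    return q / (sigma * down) * math.exp(-0.5 * (cross / sigma) ** 2)

# Synthetic "identical twin": hidden truth generates the receptor observations.
RECEPTORS = [(x, y) for x in range(100, 600, 100) for y in range(-200, 300, 100)]
TRUTH = (0.0, 0.0, 50.0, 30.0)                      # x, y, strength, wind dir
OBSERVED = [gaussian_plume(*TRUTH, rx, ry) for rx, ry in RECEPTORS]

def cost(ind):
    """Sum of squared mismatch between modelled and observed concentrations."""
    return sum((gaussian_plume(ind[0], ind[1], ind[2], ind[3], rx, ry) - obs) ** 2
               for (rx, ry), obs in zip(RECEPTORS, OBSERVED))

BOUNDS = [(-50.0, 50.0), (-50.0, 50.0), (1.0, 100.0), (0.0, 360.0)]

def random_ind():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def ga(pop_size=60, gens=80):
    pop = [random_ind() for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)
        elite = pop[: pop_size // 2]                # truncation selection
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            child = [(x + y) / 2.0 for x, y in zip(a, b)]   # blend crossover
            i = random.randrange(len(child))                # mutate one gene
            lo, hi = BOUNDS[i]
            child[i] = min(hi, max(lo, child[i] + random.gauss(0.0, 0.05 * (hi - lo))))
            children.append(child)
        pop = elite + children
    return min(pop, key=cost)

best = ga()
```

As in the abstract, white noise could be added to `OBSERVED` to test robustness to real-world variability.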
Genetic-algorithm selection of a regulatory structure that directs flux in a simple metabolic model.
Gilman, A; Ross, J
1995-01-01
A genetic algorithm (GA) is used to optimize parameters for allosteric regulation of enzymes in a model of a metabolic futile cycle, in which two metabolites are interconverted by a pair of irreversible enzymatic reactions. The cycle is regulated by end products of the surrounding pathway. The optimization criterion for the GA is the proper direction of chemical flux in the regulated cycle toward one or the other end product in response to a simple, time-dependent model of biochemical "need" based on externally imposed variation of the end product concentrations. An energetic cost, to be held to a minimum, is also imposed on the operation of the cycle. The best-performing individuals selected by the GA are found to rapidly switch the direction of net flux according to need. In different "environments" (specific time courses of end product concentrations), the GA produces better- or poorer-performing individuals; in some cases "generalists" and "specialists" are produced. The present approach provides, purely as a consequence of formally specifying the task of flux direction, numerical confirmation, in a simple model, of the intuition that negative feedback and reciprocal regulation are important for good flux direction in arbitrary environments, and it gives rise to a diversity of structures, suggestive of the results of biological evolution. PMID:8534802
Evaluation of Genetic Algorithm Concepts using Model Problems. Part 1; Single-Objective Optimization
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Pulliam, Thomas H.
2003-01-01
A genetic-algorithm-based optimization approach is described and evaluated using a simple hill-climbing model problem. The model problem utilized herein allows for the broad specification of a large number of search spaces, including spaces with an arbitrary number of genes or decision variables and an arbitrary number of hills or modes. In the present study, only single-objective problems are considered. Results indicate that the genetic algorithm optimization approach is flexible in application and extremely reliable, providing optimal results for all problems attempted. The most difficult problems - those with large hyper-volumes and multi-mode search spaces containing a large number of genes - require a large number of function evaluations for GA convergence, but they always converge.
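The flavor of such a tunable model problem can be sketched as a landscape built from an arbitrary number of Gaussian "hills" over an arbitrary number of genes, maximized by a simple GA. The hill widths, GA settings, and the forced unit-height global optimum below are illustrative assumptions, not the authors' actual model problem.

```python
import math
import random

random.seed(3)

def make_landscape(n_genes, n_hills, rng):
    """Build a multi-modal fitness landscape with a known global optimum."""
    hills = [([rng.uniform(0, 1) for _ in range(n_genes)],  # hill centre
              rng.uniform(0.5, 1.0))                        # hill height < 1
             for _ in range(n_hills)]
    global_centre = [rng.uniform(0, 1) for _ in range(n_genes)]
    hills.append((global_centre, 1.0))                      # global optimum

    def f(x):
        return max(h * math.exp(-25 * sum((xi - ci) ** 2
                                          for xi, ci in zip(x, c)))
                   for c, h in hills)
    return f, global_centre

def ga_maximize(f, n_genes, pop_size=50, gens=150):
    pop = [[random.random() for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=f, reverse=True)
        nxt = pop[:2]                                       # elitism
        while len(nxt) < pop_size:
            a, b = random.sample(pop[: pop_size // 2], 2)   # truncation selection
            child = [random.choice(g) for g in zip(a, b)]   # uniform crossover
            i = random.randrange(n_genes)                   # mutate one gene
            child[i] = min(1.0, max(0.0, child[i] + random.gauss(0.0, 0.1)))
            nxt.append(child)
        pop = nxt
    return max(pop, key=f)

rng = random.Random(42)
f, centre = make_landscape(n_genes=4, n_hills=10, rng=rng)
best = ga_maximize(f, n_genes=4)
```

Varying `n_genes` and `n_hills` reproduces the idea of dialing the search-space difficulty up or down.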
Alien Genetic Algorithm for Exploration of Search Space
NASA Astrophysics Data System (ADS)
Patel, Narendra; Padhiyar, Nitin
2010-10-01
Genetic Algorithm (GA) is a widely accepted population-based stochastic optimization technique used for single- and multi-objective optimization problems. Various modifications of the GA have been proposed in the last three decades, mainly addressing two issues: increasing the convergence rate and increasing the probability of finding the global optimum. These two goals conflict: modifications that accelerate convergence tend to get trapped in local optima, while those that improve the chance of finding the global optimum demand large computational effort. Thus, to reduce the contradictory effects of these two aspects, we propose a modification of the GA in which an alien member is added to the population at every generation. The addition of an alien member to the current population at every generation increases the probability of obtaining the global optimum while maintaining a high convergence rate. With two test cases, we demonstrate the efficacy of the proposed GA by comparing it with the conventional GA.
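A minimal sketch of the alien-member idea on a standard real-coded GA: after each generation, one freshly generated random individual (the alien) replaces the current worst member, injecting diversity without disturbing the elite. The Rastrigin benchmark and all GA settings are illustrative assumptions; the abstract does not specify a test function or parameter values.

```python
import math
import random

random.seed(1)

DIM, LO, HI = 5, -5.12, 5.12

def rastrigin(x):
    """Multimodal benchmark; global minimum 0 at the origin."""
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)

def random_ind():
    return [random.uniform(LO, HI) for _ in range(DIM)]

def evolve(pop_size=40, gens=200, use_alien=True):
    """Return (best cost of the initial population, best cost after evolution)."""
    pop = [random_ind() for _ in range(pop_size)]
    start_cost = min(rastrigin(ind) for ind in pop)
    for _ in range(gens):
        pop.sort(key=rastrigin)
        nxt = pop[:2]                                      # elitism
        while len(nxt) < pop_size:
            a, b = random.sample(pop[: pop_size // 2], 2)  # truncation selection
            cut = random.randrange(1, DIM)
            child = a[:cut] + b[cut:]                      # one-point crossover
            if random.random() < 0.3:                      # mutation
                i = random.randrange(DIM)
                child[i] = min(HI, max(LO, child[i] + random.gauss(0.0, 0.3)))
            nxt.append(child)
        if use_alien:
            nxt.sort(key=rastrigin)
            nxt[-1] = random_ind()    # the alien replaces the current worst member
        pop = nxt
    return start_cost, min(rastrigin(ind) for ind in pop)

start_best, final_best = evolve()
```

Running with `use_alien=False` gives the conventional-GA baseline the abstract compares against.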
Case Study on Optimal Routing in Logistics Network by Priority-based Genetic Algorithm
NASA Astrophysics Data System (ADS)
Wang, Xiaoguang; Lin, Lin; Gen, Mitsuo; Shiota, Mitsushige
Recently, research on logistics has attracted more and more attention. One of the important issues in logistics systems is finding optimal delivery routes with the least cost for product delivery. Numerous models have been developed for this purpose. However, due to the diversity and complexity of practical problems, the existing models often cannot find solutions efficiently and conveniently. In this paper, we treat a real-world logistics case of a company, ABC Co., Ltd., in Kitakyushu, Japan. First, based on the nature of this conveyance routing problem, as an extension of the transportation problem (TP) and the fixed-charge transportation problem (fcTP), we formulate the problem as a minimum cost flow (MCF) model. Due to the complexity of the fcTP, we propose a priority-based genetic algorithm (pGA) approach to find the most acceptable solution to this problem; in this pGA approach, a two-stage path decoding method is adopted to derive delivery paths from a chromosome. We apply the pGA approach to this problem, compare our results with the current logistics network situation, and calculate the improvement in logistics cost to help management make decisions. Finally, in order to check the effectiveness of the proposed method, the results are compared with those obtained from two solvers, LINDO and CPLEX.
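The priority-based decoding step can be illustrated on a toy graph: a chromosome assigns each node a priority, and a path is grown from source to sink by always stepping to the highest-priority unvisited neighbour. The graph, arc costs, and GA settings below are hypothetical, and this sketch shows only single-path decoding, not the paper's two-stage method.

```python
import random

random.seed(2)

GRAPH = {  # node -> {neighbour: arc cost}; a toy network, not the case study's
    "s": {"a": 4, "b": 2},
    "a": {"c": 5, "t": 8},
    "b": {"a": 1, "c": 7},
    "c": {"t": 3},
    "t": {},
}
NODES = list(GRAPH)

def decode(priority):
    """Turn a node-priority chromosome into an s->t path (None if stuck)."""
    path, node, visited = ["s"], "s", {"s"}
    while node != "t":
        candidates = [n for n in GRAPH[node] if n not in visited]
        if not candidates:
            return None
        node = max(candidates, key=lambda n: priority[n])  # highest priority wins
        visited.add(node)
        path.append(node)
    return path

def path_cost(path):
    return sum(GRAPH[u][v] for u, v in zip(path, path[1:]))

def pga(pop_size=20, gens=40):
    pop = [{n: random.random() for n in NODES} for _ in range(pop_size)]
    def fit(ch):
        p = decode(ch)
        return path_cost(p) if p else float("inf")         # penalize dead ends
    for _ in range(gens):
        pop.sort(key=fit)
        keep = pop[: pop_size // 2]                        # truncation selection
        pop = keep + [
            # uniform crossover on priorities, plus small Gaussian mutation
            {n: random.choice((a[n], b[n])) + random.gauss(0.0, 0.05) for n in NODES}
            for a, b in (random.sample(keep, 2) for _ in range(pop_size - len(keep)))
        ]
    best = min(pop, key=fit)
    return decode(best), fit(best)

best_path, best_cost = pga()
```

Because only the relative order of priorities matters, any real-valued chromosome decodes to a valid path, which is what makes this encoding convenient for GA operators.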
Dominant takeover regimes for genetic algorithms
NASA Technical Reports Server (NTRS)
Noever, David; Baskaran, Subbiah
1995-01-01
The genetic algorithm (GA) is a machine-based optimization routine which connects evolutionary learning to natural genetic laws. The present work addresses the problem of obtaining the dominant takeover regimes in the GA dynamics. Estimated GA run times are computed for slow and fast convergence in the limits of high and low fitness ratios. Using Euler's device for obtaining partial sums in closed form, the result relaxes the previously held requirement of long time limits. Analytical solutions reveal that appropriately accelerated regimes can mark the ascendancy of the most fit solution. In virtually all cases, the weak (logarithmic) dependence of convergence time on problem size demonstrates the potential of the GA to solve large NP-complete problems.
Genetic algorithms in adaptive fuzzy control
NASA Technical Reports Server (NTRS)
Karr, C. Lucas; Harper, Tony R.
1992-01-01
Researchers at the U.S. Bureau of Mines have developed adaptive process control systems in which genetic algorithms (GA's) are used to augment fuzzy logic controllers (FLC's). GA's are search algorithms that rapidly locate near-optimum solutions to a wide spectrum of problems by modeling the search procedures of natural genetics. FLC's are rule based systems that efficiently manipulate a problem environment by modeling the 'rule-of-thumb' strategy used in human decision making. Together, GA's and FLC's possess the capabilities necessary to produce powerful, efficient, and robust adaptive control systems. To perform efficiently, such control systems require a control element to manipulate the problem environment, an analysis element to recognize changes in the problem environment, and a learning element to adjust fuzzy membership functions in response to the changes in the problem environment. Details of an overall adaptive control system are discussed. A specific computer-simulated chemical system is used to demonstrate the ideas presented.
Ga nanoparticle-enhanced photoluminescence of GaAs
Kang, M.; Al-Heji, A. A.; Jeon, S.; Wu, J. H.; Lee, J.-E.; Saucer, T. W.; Zhao, L.; Sih, V.; Katzenstein, A. L.; Sofferman, D. L.; Goldman, R. S.
2013-09-02
We have examined the influence of surface Ga nanoparticles (NPs) on the enhancement of GaAs photoluminescence (PL) efficiency. We have utilized off-normal focused-ion-beam irradiation of GaAs surfaces to fabricate close-packed Ga NP arrays. The enhancement in PL efficiency is inversely proportional to the Ga NP diameter. The maximum PL enhancement occurs for the Ga NP diameter predicted to maximize the incident electromagnetic (EM) field enhancement. The PL enhancement is driven by the surface plasmon resonance (SPR)-induced enhancement of the incident EM field which overwhelms the SPR-induced suppression of the light emission.
Resonant energy transfer between patterned InGaN/GaN quantum wells and CdSe/ZnS quantum dots.
Xu, Xingsheng; Wang, Huayong
2016-01-01
We explore an easy method for preparation of a hybrid device of a photonic crystal InGaN/GaN quantum well (QW) and colloidal quantum dots using conventional photolithography. It is demonstrated from electroluminescence spectra that Förster resonance energy transfer takes place efficiently between the photonic crystal InGaN/GaN QW and CdSe/ZnS colloidal quantum dots. From the photoluminescence decay of the InGaN/GaN QW, the largest Förster resonance energy transfer efficiency between the photonic crystal GaN quantum well and colloidal quantum dots is measured as 88% and the corresponding Förster-resonance-energy-transfer fraction reached 42%. An easy approach is explored to realize a highly efficient electrically driven colloidal quantum dot device using the Förster-resonance-energy-transfer mechanism.
Peeled film GaAs solar cell development
NASA Technical Reports Server (NTRS)
Wilt, D. M.; Thomas, R. D.; Bailey, S. G.; Brinker, D. J.; Deangelo, F. L.
1990-01-01
Thin-film, single-crystal gallium arsenide (GaAs) solar cells could exhibit a specific power approaching 700 W/kg including coverglass. A simple process has been described whereby epitaxial GaAs layers are peeled from a reusable substrate. This process takes advantage of the extreme selectivity of the etching rate of aluminum arsenide (AlAs) over GaAs in dilute hydrofluoric acid. The feasibility of using the peeled film technique to fabricate high-efficiency, low-mass GaAs solar cells is presently demonstrated. A peeled film GaAs solar cell was successfully produced. The device, although fractured and missing the aluminum gallium arsenide window and antireflective coating, had a Voc of 874 mV and a fill factor of 68 percent under AM0 illumination.
Chakraborty, Apurba; Biswas, Dhrubes
2015-02-23
Frequency-dependent conductance measurements were carried out to observe trapping effects in an AlGaN/InGaN/GaN double heterostructure and to compare them with a conventional AlGaN/GaN single heterostructure. It is found that the AlGaN/InGaN/GaN diode structure does not show any trapping effect, whereas the single-heterostructure AlGaN/GaN diode suffers from two kinds of trap energy states from the near-depletion region to the higher negative voltage bias region. This conductance behaviour of the AlGaN/InGaN/GaN heterostructure is owing to a larger Fermi-level shift away from the trap energy states at the AlGaN/InGaN junction compared to the single AlGaN/GaN heterostructure, which eliminates the trapping effects. The analysis yielded interface trap states in AlGaN/GaN with time constants of 33.8–76.5 μs and trap densities of (2.38–0.656) × 10¹² eV⁻¹ cm⁻² in the −3.2 to −4.8 V bias region, whereas for the AlGaN/InGaN/GaN structure no interface states are found; the extracted surface trap concentrations and time constants are (5.87–4.39) × 10¹⁰ eV⁻¹ cm⁻² and 17.8–11.3 μs, respectively, in the bias range of −0.8 to 0.0 V.
Hot-electron real-space transfer and longitudinal transport in dual AlGaN/AlN/{AlGaN/GaN} channels
NASA Astrophysics Data System (ADS)
Šermukšnis, E.; Liberis, J.; Matulionis, A.; Avrutin, V.; Ferreyra, R.; Özgür, Ü.; Morkoç, H.
2015-03-01
Real-space transfer of hot electrons is studied in dual-channel GaN-based heterostructure operated at or near plasmon-optical phonon resonance in order to attain a high electron drift velocity at high current densities. For this study, pulsed electric field is applied in the channel plane of a nominally undoped Al0.3Ga0.7N/AlN/{Al0.15Ga0.85N/GaN} structure with a composite channel of Al0.15Ga0.85N/GaN, where the electrons with a sheet density of 1.4 × 10¹³ cm⁻², estimated from the Hall effect measurements, are confined. The equilibrium electrons are situated predominantly in the Al0.15Ga0.85N layer as confirmed by capacitance-voltage experiment and Schrödinger-Poisson modelling. The main peak of the electron density per unit volume decreases as more electrons occupy the GaN layer at high electric fields. The associated decrease in the plasma frequency induces the plasmon-assisted decay of non-equilibrium optical phonons (hot phonons) confirmed by the decrease in the measured hot-phonon lifetime from 0.95 ps at low electric fields down below 200 fs at fields of E > 4 kV cm⁻¹ as the plasmon-optical phonon resonance is approached. The onset of real-space transfer is resolved from microwave noise measurements: this source of noise dominates for E > 8 kV cm⁻¹. In this range of fields, the longitudinal current exceeds the values measured for a mono channel reference Al0.3Ga0.7N/AlN/GaN structure. The results are explained in terms of the ultrafast decay of hot phonons and reduced alloy scattering caused by the real-space transfer in the composite channel.
Development of hybrid genetic algorithms for product line designs.
Balakrishnan, P V Sundar; Gupta, Rakesh; Jacob, Varghese S
2004-02-01
In this paper, we investigate the efficacy of artificial intelligence (AI) based meta-heuristic techniques, namely genetic algorithms (GAs), for the product line design problem. This work extends previously developed methods for the single product design problem. We conduct a large scale simulation study to determine the effectiveness of such an AI based technique for providing good solutions and benchmark its performance against the current dominant approach of beam search (BS). We investigate the potential advantages of pursuing the avenue of developing hybrid models and then implement and study such hybrid models using two very distinct approaches: namely, seeding the initial GA population with the BS solution, and employing the BS solution as part of the GA operators' process. We go on to examine the impact of two alternate string representation formats on the quality of the solutions obtained by the above proposed techniques. We also explicitly investigate a critical managerial factor, attribute importance, in terms of its impact on the solutions obtained by the alternate modeling procedures. The alternate techniques are then evaluated, using statistical analysis of variance, on a fairly large number of data sets, as to the quality of the solutions obtained with respect to the state-of-the-art benchmark and in terms of their ability to provide multiple, unique product line options.
1.58 μm InGaAs quantum well laser on GaAs
Taangring, I.; Ni, H. Q.; Wu, B. P.; Wu, D. H.; Xiong, Y. H.; Huang, S. S.; Niu, Z. C.; Wang, S. M.; Lai, Z. H.; Larsson, A.
2007-11-26
We demonstrate 1.58 μm emission at room temperature from a metamorphic In₀.₆Ga₀.₄As quantum well laser grown on GaAs by molecular beam epitaxy. The large lattice mismatch was accommodated through growth of a linearly graded buffer layer to create a high quality virtual In₀.₃₂Ga₀.₆₈As substrate. Careful growth optimization ensured good optical and structural qualities. For a 1250 × 50 μm² broad area laser, a minimum threshold current density of 490 A/cm² was achieved under pulsed operation. This result indicates that metamorphic InGaAs quantum wells can be an alternative approach for 1.55 μm GaAs-based lasers.
From Competence to Efficiency: A Tale of GA Progress
NASA Technical Reports Server (NTRS)
Goldberg, David E.
1996-01-01
Genetic algorithms (GAs) - search procedures based on the mechanics of natural selection and genetics - have grown in popularity for the solution of difficult optimization problems. Concomitant with this growth has been a rising cacophony of complaint asserting that too much time must be spent by the GA practitioner diddling with codes, operators, and GA parameters; even then, these GA Cassandras continue, the user is still unsure that the effort will meet with success. At the same time, there has been a rising interest in GA theory by a growing community - a theorocracy - of mathematicians and theoretical computer scientists, and these individuals have turned their efforts increasingly toward elegant abstract theorems and proofs that seem to the practitioner to offer little in the way of answers for GA design or practice. What both groups seem to have missed is the largely unheralded 1993 assembly of integrated, applicable theory and its experimental confirmation. This theory has done two key things. First, it has predicted that simple GAs are severely limited in the difficulty of problems they can solve, and these limitations have been confirmed experimentally. Second, it has shown the path to circumventing these limitations in nontraditional GA designs such as the fast messy GA. This talk surveys the history, methodology, and accomplishments of the 1993 applicable theory revolution. After arguing that these accomplishments open the door to universal GA competence, the paper shifts the discussion to the possibility of universal GA efficiency in the utilization of time and real estate through effective parallelization, temporal decomposition, hybridization, and relaxed function evaluation. The presentation concludes by suggesting that these research directions are quickly taking us to a golden age of adaptation.
NASA Astrophysics Data System (ADS)
Ehret, Uwe
2016-04-01
approximation error than for an unstructured data set such as white noise. Knowledge of this Pareto optimum can be useful for the design of sampling strategies. It is also interesting to analyze the spatio-temporal distribution of the most relevant nodes of the data set (those with the largest information gain): homogeneously spaced nodes indicate a data set of constant predictability throughout its extent, or low complexity, while heterogeneously spaced nodes indicate shifting patterns of local predictability, an attribute of more complex data sets (if 'complexity' is defined as 'high overall uncertainty about local uncertainty'). Interpolation of data sets: The structogram can also be used for interpolation, i.e. estimation at nodes where no observations are available. The idea of structogram-based interpolation is that, just as for Kriging, the estimate is a weighted linear combination of the observations, but here the weights are determined not by the variogram and the intrinsic hypothesis but by the relevance of the nodes: highly relevant nodes are given higher weights than less relevant nodes. Testing many different data sets revealed that for 'smooth' data sets, where proximity means similarity, classical Kriging-based interpolation outperforms structogram-based approaches, while for intermittent data sets such as rainfall time series, where proximity does not always mean similarity, structogram-based interpolation performs better. References: Ramer, U.: An iterative procedure for the polygonal approximation of plane curves, Computer Graphics and Image Processing, 1, 244-256, http://dx.doi.org/10.1016/S0146-664X(72)80017-0, 1972. Douglas, D., Peucker, T.: Algorithms for the reduction of the number of points required to represent a digitized line or its caricature, The Canadian Cartographer, 10(2), 112-122, ISSN 0008-3127, 1973.
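The relevance-weighted interpolation described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the per-node relevance scores (which the structogram derives from information gain) are taken as given inputs here.

```python
def structogram_interpolate(observations, relevances):
    """Estimate a value at an unobserved node as a relevance-weighted
    linear combination of observed values. Highly relevant nodes get
    higher weights; the relevance computation itself is assumed done."""
    total = sum(relevances)
    if total == 0:
        raise ValueError("at least one node must have nonzero relevance")
    weights = [r / total for r in relevances]
    return sum(w * y for w, y in zip(weights, observations))
```

With equal relevances this reduces to a plain average; unequal relevances pull the estimate toward the more informative nodes.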
The development of integrated chemical microsensors in GaAs
CASALNUOVO, STEPHEN A.; ASON, GREGORY CHARLES; HELLER, EDWIN J.; HIETALA, VINCENT M.; BACA, ALBERT G.; HIETALA, S. L.
1999-11-01
Monolithic, integrated acoustic wave chemical microsensors are being developed on gallium arsenide (GaAs) substrates. With this approach, arrays of microsensors and the high frequency electronic components needed to operate them reside on a single substrate, increasing the range of detectable analytes, reducing overall system size, minimizing systematic errors, and simplifying assembly and packaging. GaAs is employed because it is both piezoelectric, a property required to produce the acoustic wave devices, and a semiconductor with a mature microelectronics fabrication technology. Many aspects of integrated GaAs chemical sensors have been investigated, including: surface acoustic wave (SAW) sensors; monolithic SAW delay line oscillators; GaAs application specific integrated circuits (ASIC) for sensor operation; a hybrid sensor array utilizing these ASICS; and the fully monolithic, integrated SAW array. Details of the design, fabrication, and performance of these devices are discussed. In addition, the ability to produce heteroepitaxial layers of GaAs and aluminum gallium arsenide (AlGaAs) makes possible micromachined membrane sensors with improved sensitivity compared to conventional SAW sensors. Micromachining techniques for fabricating flexural plate wave (FPW) and thickness shear mode (TSM) microsensors on thin GaAs membranes are presented and GaAs FPW delay line and TSM resonator performance is described.
NASA Astrophysics Data System (ADS)
Zhu, Li; He, Yongxiang; Xue, Haidong; Chen, Leichen
Traditional genetic algorithms (GAs) exhibit the disadvantage of premature convergence in dealing with scheduling problems. To adjust the crossover and mutation operators self-adaptively, this paper proposes a self-adaptive GA targeting multitask scheduling optimization under limited resources. The experimental results show that the proposed algorithm outperforms the traditional GA in its evolutionary ability to deal with complex task scheduling optimization.
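The abstract does not give the adaptation rule, but one standard self-adaptive scheme (lowering the rates for above-average individuals to protect good genes and keeping full rates for below-average ones) can be sketched as follows; the constants k1–k4 are illustrative, not from the paper.

```python
def adaptive_rates(fitness, f_avg, f_max, k1=1.0, k2=0.5, k3=1.0, k4=0.5):
    """Return (crossover_prob, mutation_prob) for one individual.
    Above-average individuals get rates scaled down toward zero as they
    approach the best fitness; below-average individuals keep full rates."""
    if f_max == f_avg:  # degenerate population: everyone gets full rates
        return k1, k2
    if fitness >= f_avg:
        scale = (f_max - fitness) / (f_max - f_avg)
        return k1 * scale, k2 * scale
    return k3, k4
```

The best individual thus receives zero disruption while weak individuals keep exploring, which is the usual remedy for premature convergence.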
Evaluation of Genetic Algorithm Concepts Using Model Problems. Part 2; Multi-Objective Optimization
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Pulliam, Thomas H.
2003-01-01
A genetic algorithm approach suitable for solving multi-objective optimization problems is described and evaluated using a series of simple model problems. Several new features, including a binning selection algorithm and a gene-space transformation procedure, are included. The genetic algorithm is suitable for finding Pareto optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. Results indicate that the genetic algorithm optimization approach is flexible in application and extremely reliable, providing optimal results for all optimization problems attempted. The binning algorithm generally provides Pareto front quality enhancements and moderate convergence efficiency improvements for most of the model problems. The gene-space transformation procedure provides a large convergence efficiency enhancement for problems with non-convoluted Pareto fronts and a degradation in efficiency for problems with convoluted Pareto fronts. The most difficult problems (multi-mode search spaces with a large number of genes and convoluted Pareto fronts) require a large number of function evaluations for GA convergence, but always converge.
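The Pareto-optimality test at the heart of any multi-objective GA is standard and can be stated compactly (this is the textbook dominance check, not the paper's binning selection, which is not specified in the abstract):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and \
           any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

A multi-objective GA ranks its population with exactly this relation before selection.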
Wang, Y; Guo, G D; Chen, L F
2013-01-01
Prediction of the three-dimensional structure of a protein from its amino acid sequence can be considered a global optimization problem. In this paper, the Chaotic Artificial Bee Colony (CABC) algorithm is introduced and applied to 3D protein structure prediction. Based on the 3D off-lattice AB model, the CABC algorithm combines the global and local search of the Artificial Bee Colony (ABC) algorithm with a chaotic search algorithm to avoid premature convergence and trapping in local optima. Experiments carried out with the popular Fibonacci sequences demonstrate that the proposed algorithm provides an effective and high-performance method for protein structure prediction. PMID:25509864
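The chaotic component typically means replacing uniform random numbers with a chaotic map sequence. A minimal sketch using the logistic map (a common choice; the abstract does not say which map CABC uses) to generate a candidate inside the search bounds:

```python
def logistic_map(x, r=4.0):
    """Logistic map at r=4: fully chaotic on (0, 1)."""
    return r * x * (1.0 - x)

def chaotic_candidate(bounds, x0=0.7):
    """Generate one candidate solution from a logistic-map sequence,
    mapped coordinate-by-coordinate into the search bounds. In a CABC,
    such candidates would replace stagnant ABC food sources to escape
    local optima."""
    x, out = x0, []
    for lo, hi in bounds:
        x = logistic_map(x)
        out.append(lo + x * (hi - lo))
    return out
```

Because the sequence is deterministic yet non-repeating, it covers the interval more evenly than short runs of pseudo-random draws.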
Comparative investigation of InGaP/GaAs/GaAsBi and InGaP/GaAs heterojunction bipolar transistors
Wu, Yi-Chen; Tsai, Jung-Hui; Chiang, Te-Kuang; Wang, Fu-Min
2015-10-15
In this article, the characteristics of In0.49Ga0.51P/GaAs/GaAs0.975Bi0.025 and In0.49Ga0.51P/GaAs heterojunction bipolar transistors (HBTs) are demonstrated and compared by two-dimensional simulation analysis. Compared to the traditional InGaP/GaAs HBT, the studied InGaP/GaAs/GaAsBi HBT exhibits a higher collector current, a lower base-emitter (B–E) turn-on voltage, and a relatively low collector-emitter offset voltage of only 7 mV. Because more electrons are stored in the base of the InGaP/GaAs/GaAsBi HBT, the collector current increases and the B–E turn-on voltage decreases, which is attractive for low input power applications. However, the current gain is slightly smaller than in the traditional InGaP/GaAs HBT, attributed to the increase of base current caused by the minority carriers stored in the GaAsBi base.
Bousquet, J; Anto, J M; Demoly, P; Schünemann, H J; Togias, A; Akdis, M; Auffray, C; Bachert, C; Bieber, T; Bousquet, P J; Carlsen, K H; Casale, T B; Cruz, A A; Keil, T; Lodrup Carlsen, K C; Maurer, M; Ohta, K; Papadopoulos, N G; Roman Rodriguez, M; Samolinski, B; Agache, I; Andrianarisoa, A; Ang, C S; Annesi-Maesano, I; Ballester, F; Baena-Cagnani, C E; Basagaña, X; Bateman, E D; Bel, E H; Bedbrook, A; Beghé, B; Beji, M; Ben Kheder, A; Benet, M; Bennoor, K S; Bergmann, K C; Berrissoul, F; Bindslev Jensen, C; Bleecker, E R; Bonini, S; Boner, A L; Boulet, L P; Brightling, C E; Brozek, J L; Bush, A; Busse, W W; Camargos, P A M; Canonica, G W; Carr, W; Cesario, A; Chen, Y Z; Chiriac, A M; Costa, D J; Cox, L; Custovic, A; Dahl, R; Darsow, U; Didi, T; Dolen, W K; Douagui, H; Dubakiene, R; El-Meziane, A; Fonseca, J A; Fokkens, W J; Fthenou, E; Gamkrelidze, A; Garcia-Aymerich, J; Gerth van Wijk, R; Gimeno-Santos, E; Guerra, S; Haahtela, T; Haddad, H; Hellings, P W; Hellquist-Dahl, B; Hohmann, C; Howarth, P; Hourihane, J O; Humbert, M; Jacquemin, B; Just, J; Kalayci, O; Kaliner, M A; Kauffmann, F; Kerkhof, M; Khayat, G; Koffi N'Goran, B; Kogevinas, M; Koppelman, G H; Kowalski, M L; Kull, I; Kuna, P; Larenas, D; Lavi, I; Le, L T; Lieberman, P; Lipworth, B; Mahboub, B; Makela, M J; Martin, F; Martinez, F D; Marshall, G D; Mazon, A; Melen, E; Meltzer, E O; Mihaltan, F; Mohammad, Y; Mohammadi, A; Momas, I; Morais-Almeida, M; Mullol, J; Muraro, A; Naclerio, R; Nafti, S; Namazova-Baranova, L; Nawijn, M C; Nyembue, T D; Oddie, S; O'Hehir, R E; Okamoto, Y; Orru, M P; Ozdemir, C; Ouedraogo, G S; Palkonen, S; Panzner, P; Passalacqua, G; Pawankar, R; Pigearias, B; Pin, I; Pinart, M; Pison, C; Popov, T A; Porta, D; Postma, D S; Price, D; Rabe, K F; Ratomaharo, J; Reitamo, S; Rezagui, D; Ring, J; Roberts, R; Roca, J; Rogala, B; Romano, A; Rosado-Pinto, J; Ryan, D; Sanchez-Borges, M; Scadding, G K; Sheikh, A; Simons, F E R; Siroux, V; Schmid-Grendelmeier, P D; Smit, H A; Sooronbaev, T; 
Stein, R T; Sterk, P J; Sunyer, J; Terreehorst, I; Toskala, E; Tremblay, Y; Valenta, R; Valeyre, D; Vandenplas, O; van Weel, C; Vassilaki, M; Varraso, R; Viegi, G; Wang, D Y; Wickman, M; Williams, D; Wöhrl, S; Wright, J; Yorgancioglu, A; Yusuf, O M; Zar, H J; Zernotti, M E; Zidarn, M; Zhong, N; Zuberbier, T
2012-01-01
Concepts of disease severity, activity, control and responsiveness to treatment are linked but different. Severity refers to the loss of function of the organs induced by the disease process or to the occurrence of severe acute exacerbations. Severity may vary over time and needs regular follow-up. Control is the degree to which therapy goals are currently met. These concepts have evolved over time for asthma in guidelines, task forces or consensus meetings. The aim of this paper is to generalize the approach of the uniform definition of severe asthma presented to WHO for chronic allergic and associated diseases (rhinitis, chronic rhinosinusitis, chronic urticaria and atopic dermatitis) in order to have a uniform definition of severity, control and risk, usable in most situations. It is based on the appropriate diagnosis, availability and accessibility of treatments, treatment responsiveness and associated factors such as comorbidities and risk factors. This uniform definition will allow a better definition of the phenotypes of severe allergic (and related) diseases for clinical practice, research (including epidemiology), public health purposes, education and the discovery of novel therapies.
NASA Astrophysics Data System (ADS)
Tsai, Jinn-Tsong; Chou, Ping-Yi; Chou, Jyh-Horng
2015-11-01
The aim of this study is to generate vector quantisation (VQ) codebooks by integrating the principal component analysis (PCA) algorithm, the Linde-Buzo-Gray (LBG) algorithm, and evolutionary algorithms (EAs). The EAs include the genetic algorithm (GA), particle swarm optimisation (PSO), honey bee mating optimisation (HBMO), and the firefly algorithm (FF). The study provides performance comparisons between the PCA-EA-LBG and PCA-LBG-EA approaches. The PCA-EA-LBG approaches contain PCA-GA-LBG, PCA-PSO-LBG, PCA-HBMO-LBG, and PCA-FF-LBG, while the PCA-LBG-EA approaches contain PCA-LBG, PCA-LBG-GA, PCA-LBG-PSO, PCA-LBG-HBMO, and PCA-LBG-FF. All training vectors of the test images are grouped according to PCA. The PCA-EA-LBG approaches use the vectors grouped by PCA as initial individuals, and the best solution gained by the EAs is given to LBG to discover a codebook. The PCA-LBG approach uses PCA to select vectors as initial individuals for LBG to find a codebook. The PCA-LBG-EA approaches use the final result of PCA-LBG as an initial individual for the EAs to find a codebook. The search schemes in PCA-EA-LBG first use global search and then apply local search, while those in PCA-LBG-EA first use local search and then employ global search. The results verify that PCA-EA-LBG indeed gains superior results compared to PCA-LBG-EA, because PCA-EA-LBG explores a global area to find a solution and then exploits a better one from the local area of that solution. Furthermore, the proposed PCA-EA-LBG approaches to designing VQ codebooks outperform existing approaches in the literature.
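The LBG step that every one of these pipelines ends with is the classic two-phase iteration: assign each training vector to its nearest codeword, then move each codeword to the centroid of its cell. A small pure-Python sketch (real codebook design repeats this until the distortion stops improving):

```python
def lbg_step(vectors, codebook):
    """One Linde-Buzo-Gray iteration over tuples of equal dimension."""
    cells = {i: [] for i in range(len(codebook))}
    for v in vectors:
        # nearest codeword by squared Euclidean distance
        i = min(range(len(codebook)),
                key=lambda i: sum((a - b) ** 2 for a, b in zip(v, codebook[i])))
        cells[i].append(v)
    new_codebook = []
    for i, cell in cells.items():
        if cell:
            dim = len(cell[0])
            new_codebook.append(tuple(sum(v[d] for v in cell) / len(cell)
                                      for d in range(dim)))
        else:
            new_codebook.append(codebook[i])  # keep empty cells unchanged
    return new_codebook
```

The EA's role in the paper is to supply good initial codewords for exactly this iteration, since LBG only descends to the nearest local optimum.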
CyGaMEs Selene Player Log Dataset: Gameplay Assessment, Flow Dimensions and Non-Gameplay Assessments
ERIC Educational Resources Information Center
Reese, Debbie Denise
2015-01-01
The "Selene: A Lunar Construction GaME" instructional video game is a robust research environment (institutional review board approved) for investigating learning, affect, and the CyGaMEs Metaphorics approach to instructional video game design, embedded assessment, and informatics analysis and reporting. CyGaMEs applies analogical…
Application of Genetic Algorithms in Seismic Tomography
NASA Astrophysics Data System (ADS)
Soupios, Pantelis; Akca, Irfan; Mpogiatzis, Petros; Basokur, Ahmet; Papazachos, Constantinos
2010-05-01
The application of hybrid genetic algorithms in seismic tomography is examined, and the efficiency of least squares and genetic methods, as representatives of local and global optimization respectively, is presented and evaluated. The robustness of both optimization methods has been tested and compared for the same source-receiver geometry and characteristics of the model structure (anomalies, etc.). A set of synthetic (noise-free) seismic refraction data was used for modeling. Specifically, cross-well, down-hole and typical refraction studies using 24 geophones and 5 shots were used to confirm the applicability of genetic algorithms to seismic tomography. To solve the forward modeling and estimate the traveltimes, the revisited ray bending method was used, supplemented by an approximate computation of the first Fresnel volume. The root mean square (rms) error was used as the misfit function and calculated over the entire random velocity model for each generation. At the end of each generation, based on the misfit of the individuals (velocity models), selection, crossover and mutation (the typical process steps of genetic algorithms) were applied to encode the new generation, continuing the evolutionary process. To optimize the computation time, since the whole procedure is quite time consuming, the Matlab Distributed Computing Environment (MDCE) was used on a multicore engine. During the tests, we noticed that the fast convergence the algorithm initially exhibits (first 5 generations) is followed by progressively slower improvements of the reconstructed velocity models. Thus, to improve the final tomographic models, a hybrid genetic algorithm (GA) approach was adopted by combining the GAs with a local optimization method after several generations, on the basis of the convergence of the resulting models. This approach is shown to be efficient, as it directs the solution search towards a model region close to the global minimum solution.
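The hybrid scheme (a GA explores globally for a few generations, then a local method polishes the best model) can be sketched on a toy misfit. This is an illustration only: coordinate descent stands in for the paper's least-squares local step, and the GA operators are the simplest possible choices.

```python
import random

def rms_misfit(observed, predicted):
    """Root-mean-square misfit used as the GA fitness (lower is better)."""
    n = len(observed)
    return (sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n) ** 0.5

def hybrid_ga(fitness, bounds, pop_size=30, ga_gens=20, rng=None):
    """GA global search followed by a simple local refinement."""
    rng = rng or random.Random(0)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(ga_gens):
        pop.sort(key=fitness)                 # elitist: keep best half
        parents = pop[: pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]      # crossover
            j = rng.randrange(len(bounds))                    # mutation
            lo, hi = bounds[j]
            child[j] = min(hi, max(lo, child[j] + rng.gauss(0, 0.1 * (hi - lo))))
            children.append(child)
        pop = parents + children
    best = min(pop, key=fitness)
    step = 0.01                               # local coordinate descent
    for _ in range(200):
        for j, (lo, hi) in enumerate(bounds):
            for d in (-step, step):
                trial = list(best)
                trial[j] = min(hi, max(lo, trial[j] + d))
                if fitness(trial) < fitness(best):
                    best = trial
    return best
```

On a smooth misfit the GA quickly reaches the right basin and the local stage supplies the precision the GA would take many more generations to achieve, which is exactly the behavior the abstract reports.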
A Bat Algorithm with Mutation for UCAV Path Planning
Wang, Gaige; Guo, Lihong; Duan, Hong; Liu, Luo; Wang, Heqi
2012-01-01
Path planning for an uninhabited combat air vehicle (UCAV) is a complicated high-dimensional optimization problem, which mainly centers on optimizing the flight route subject to different kinds of constraints under complicated battlefield environments. The original bat algorithm (BA) is used to solve the UCAV path planning problem. Furthermore, a new bat algorithm with mutation (BAM) is proposed to solve the UCAV path planning problem, applying a modification that mutates between bats during the process of updating new solutions. The UCAV can then find a safe path by connecting the chosen coordinate nodes while avoiding the threat areas and minimizing fuel cost. This new approach can accelerate the global convergence speed while preserving the strong robustness of the basic BA. The realization procedure for the original BA and this improved metaheuristic approach, BAM, is also presented. To prove the performance of the proposed metaheuristic method, BAM is compared with BA and other population-based optimization methods, such as ACO, BBO, DE, ES, GA, PBIL, PSO, and SGA. The experiments show that the proposed approach is more effective and feasible in UCAV path planning than the other models. PMID:23365518
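The core BA position update (Yang's original equations) and a differential-style mutation between bats can be sketched as follows. The mutation operator shown is one plausible reading of "mutates between bats"; the abstract does not give BAM's exact formula.

```python
import random

def bat_step(position, velocity, best, f_min=0.0, f_max=2.0, rng=None):
    """One bat-algorithm update: draw a frequency, pull the bat's velocity
    toward the current global best, then move. Returns (new_x, new_v)."""
    rng = rng or random.Random(0)
    freq = f_min + (f_max - f_min) * rng.random()
    new_v = [v + (x - b) * freq for v, x, b in zip(velocity, position, best)]
    new_x = [x + v for x, v in zip(position, new_v)]
    return new_x, new_v

def mutate(xi, xa, xb, F=0.5):
    """DE-style mutation between bats: perturb xi by the scaled difference
    of two other bats' positions (illustrative stand-in for BAM's operator)."""
    return [a + F * (b - c) for a, b, c in zip(xi, xa, xb)]
```

In BA proper, the new position is accepted only with a loudness/pulse-rate test; that bookkeeping is omitted here.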
Carrier spin relaxation in GaInNAsSb/GaNAsSb/GaAs quantum well
Asami, T.; Nosho, H.; Tackeuchi, A.; Li, L. H.; Harmand, J. C.; Lu, S. L.
2011-12-23
We have investigated the carrier spin relaxation in a GaInNAsSb/GaNAsSb/GaAs quantum well (QW) by time-resolved photoluminescence (PL) measurement. The sample consists of an 8-nm-thick GaIn0.36N0.006AsSb0.015 well, 5-nm-thick GaN0.01AsSb0.11 intermediate barriers, and 100-nm-thick GaAs barriers grown by molecular beam epitaxy on a GaAs(100) substrate. The spin relaxation time and recombination lifetime at 10 K are measured to be 228 ps and 151 ps, respectively. As a reference, we have also obtained a spin relaxation time of 125 ps and a recombination lifetime of 63 ps for a GaInNAs/GaNAs/GaAs QW. This result shows that crystal quality is slightly improved by adding Sb, although these short carrier lifetimes mainly originate from nonradiative recombination. These spin relaxation times are longer than the 36 ps spin relaxation time of InGaAs/InP QWs and shorter than the 2 ns spin relaxation time of GaInNAs/GaAs QWs.
Production of Engineered Fabrics Using Artificial Neural Network-Genetic Algorithm Hybrid Model
NASA Astrophysics Data System (ADS)
Mitra, Ashis; Majumdar, Prabal Kumar; Banerjee, Debamalya
2015-10-01
The process of fabric engineering generally practised in most textile mills is complicated, repetitive, tedious and time consuming. To eliminate this trial-and-error approach, a new approach to fabric engineering has been attempted in this work. Data sets of construction parameters (comprising ends per inch, picks per inch, warp count and weft count) and three fabric properties (namely drape coefficient, air permeability and thermal resistance) of 25 handloom cotton fabrics have been used. The weights and biases of three artificial neural network (ANN) models developed for the prediction of drape coefficient, air permeability and thermal resistance were used to formulate the fitness or objective function and constraints of the optimization problem. The optimization problem was solved using a genetic algorithm (GA). In both fabrics attempted for engineering, the target and simulated fabric properties were very close. The GA was able to search the optimum set of fabric construction parameters with reasonably good accuracy except in the case of EPI. However, the overall result is encouraging and can be improved further by using larger data sets of handloom fabrics with the hybrid ANN-GA model.
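The inverse problem the GA solves here has a simple shape: minimize the distance between the target properties and the properties the trained ANN predicts for a candidate parameter set. A sketch of that objective, with `predict` standing in for the trained ANN (which we do not have):

```python
def engineering_fitness(params, target_props, predict):
    """Squared-error objective for the inverse fabric-engineering search:
    lower is better; zero means the ANN-predicted properties hit the
    targets exactly. `predict` is a stand-in for the trained ANN."""
    pred = predict(params)
    return sum((p - t) ** 2 for p, t in zip(pred, target_props))
```

The GA then evolves `params` (EPI, PPI, warp count, weft count) to drive this objective toward zero.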
Using a genetic algorithm to optimize a water-monitoring network for accuracy and cost effectiveness
NASA Astrophysics Data System (ADS)
Julich, R. J.
2004-05-01
The purpose of this project is to determine the optimal spatial distribution of water-monitoring wells to maximize important data collection and to minimize the cost of managing the network. We have employed a genetic algorithm (GA) towards this goal. The GA uses a simple fitness measure with two parts: the first part awards a maximal score to those combinations of hydraulic head observations whose net uncertainty is closest to the value representing all observations present, thereby maximizing accuracy; the second part applies a penalty function to minimize the number of observations, thereby minimizing the overall cost of the monitoring network. We used the linear statistical inference equation to calculate standard deviations on predictions from a numerical model generated for the 501-observation Death Valley Regional Flow System as the basis for our uncertainty calculations. We have organized the results to address the following three questions: 1) what is the optimal design strategy for a genetic algorithm to optimize this problem domain; 2) what is the consistency of solutions over several optimization runs; and 3) how do these results compare to what is known about the conceptual hydrogeology? Our results indicate that genetic algorithms are a more efficient and robust method for solving this class of optimization problems than traditional optimization approaches.
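The two-part fitness measure can be sketched directly: reward well subsets whose net uncertainty stays close to that of the full network, and penalize each retained well. The penalty weight below is illustrative, not from the paper.

```python
def network_fitness(subset_uncertainty, full_uncertainty, n_wells,
                    penalty_per_well=0.01):
    """Two-part GA fitness (higher is better): accuracy term rewards
    subsets whose net prediction uncertainty matches the full network;
    cost term penalizes every well kept in the subset."""
    accuracy = -abs(subset_uncertainty - full_uncertainty)
    cost_penalty = penalty_per_well * n_wells
    return accuracy - cost_penalty
```

Tuning `penalty_per_well` sets the trade-off the GA explores between data value and monitoring cost.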
Dielectric function of InGaAs in the visible
NASA Technical Reports Server (NTRS)
Alterovitz, S. A.; Sieg, R. E.; Yao, H. D.; Snyder, P. G.; Woollam, J. A.; Pamulapati, J.; Bhattacharya, P. K.; Sekula-Moise, P. A.
1990-01-01
Measurements are reported of the dielectric function of thermodynamically stable In(x)Ga(1-x)As in the composition range 0.3 ≤ x ≤ 0.7. The optically thick samples of InGaAs were made by molecular beam epitaxy (MBE) in the range 0.4 ≤ x ≤ 0.7 and by metal-organic chemical vapor deposition (MOCVD) for x = 0.3. The MBE-grown samples, usually 1 micron thick, were grown on semi-insulating InP and included a strain release structure. The MOCVD sample was grown on GaAs and was 2 microns thick. The dielectric functions were measured by variable angle spectroscopic ellipsometry in the range 1.55 to 4.4 eV. The data were analyzed assuming an optically thick InGaAs material with an oxide layer on top. The thickness of this layer was estimated by comparing the results for the InP lattice-matched material, i.e., x = 0.53, with results published in the literature. The top oxide layer was mathematically removed for x = 0.3 and x = 0.53 to obtain the dielectric function of the bare InGaAs. In addition, the dielectric function of GaAs was measured in vacuum after a protective arsenic layer was removed. The dielectric functions for x = 0, 0.3, and 0.53, together with the x = 1 result from the literature, were used to evaluate an algorithm for calculating the dielectric function of InGaAs for an arbitrary value of x (0 ≤ x ≤ 1). Results of the dielectric function calculated using the algorithm were compared with experimental data.
Zhou, Yongquan; Xie, Jian; Li, Liangliang; Ma, Mingzhi
2014-01-01
The bat algorithm (BA) is a novel stochastic global optimization algorithm. The cloud model is an effective tool for transforming between qualitative concepts and their quantitative representation. Based on the bat echolocation mechanism and the cloud model's excellent characteristics for representing uncertain knowledge, a new cloud model bat algorithm (CBA) is proposed. This paper focuses on remodeling the echolocation model based on the living and preying characteristics of bats, utilizing the transformation theory of the cloud model to depict the qualitative concept "bats approach their prey." Furthermore, a Lévy flight mode and a population information communication mechanism are introduced to balance exploration and exploitation. The simulation results show that the cloud model bat algorithm performs well on function optimization. PMID:24967425
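The standard quantitative tool behind cloud models is the forward normal cloud generator, which turns a qualitative concept given by (expectation Ex, entropy En, hyper-entropy He) into concrete "drops". A sketch of that generator (how CBA wires it into the echolocation update is not detailed in the abstract):

```python
import random

def normal_cloud(ex, en, he, n, rng=None):
    """Forward normal cloud generator: each drop is drawn from
    N(ex, en'^2) where en' itself is drawn from N(en, he^2), so the
    spread of the concept is uncertain rather than fixed."""
    rng = rng or random.Random(0)
    drops = []
    for _ in range(n):
        en_prime = rng.gauss(en, he)
        drops.append(rng.gauss(ex, abs(en_prime)))
    return drops
```

Drops cluster around Ex ("bats approach their prey") while He controls how fuzzy that approach is.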
The efficiency of UV LEDs based on GaN/AlGaN heterostructures
NASA Astrophysics Data System (ADS)
Evseenkov, A. S.; Tarasov, S. A.; Kurin, S. Yu; Usikov, A. S.; Papchenko, B. P.; Helava, H.; Makarov, Yu N.; Solomonov, A. V.
2015-12-01
The UV LED GaN/AlGaN heterostructures obtained by the HVPE approach were investigated. It was shown that the peak wavelength of the UV LEDs was in the range of 360-380 nm, with a FWHM of 10-13 nm. At an operating current of 20 mA, the active region temperature Tj was 43°C, and the output optical power and efficiency were 1.14 mW and 1.46%, respectively. It was shown that the use of the HVPE method allowed a high degree of structural perfection of the epitaxial structures to be achieved.
Infrared transitions between hydrogenic states in GaInNAs/GaAs quantum wells
NASA Astrophysics Data System (ADS)
Al, E. B.; Ungan, F.; Yesilgul, U.; Kasapoglu, E.; Sari, H.; Sökmen, I.
2016-08-01
The effects of nitrogen and indium concentrations on the 1s, 2s, 2p0 and 2p±-like donor impurity energy states in a single Ga1-xInxNyAs1-y/GaAs quantum well (QW) are investigated by a variational approach within the effective mass approximation. The results are presented as a function of the well width and the donor impurity position. It is found that the impurity binding and transition energies depend strongly on the indium concentration while depending only weakly on the nitrogen concentration.
GaN nanowire arrays by a patterned metal-assisted chemical etching
NASA Astrophysics Data System (ADS)
Wang, K. C.; Yuan, G. D.; Wu, R. W.; Lu, H. X.; Liu, Z. Q.; Wei, T. B.; Wang, J. X.; Li, J. M.; Zhang, W. J.
2016-04-01
We developed one-step and two-step metal-assisted chemical etching methods to produce self-organized GaN nanowire arrays. In the one-step approach, GaN nanowire arrays are synthesized uniformly on the GaN thin film surface. In the two-step etching process, however, GaN nanowires form only in metal-uncovered regions, while metal-covered GaN regions show nano-porous sidewalls. We propose that nanowires and porous nanostructures result from sufficient and limited etch rates, respectively. PL spectra show a red shift of the band edge emission in the GaN nanostructures. The formation mechanism of the nanowires is explained by two separate electrochemical reactions occurring simultaneously. The roles of the metals and UV light are illustrated by the potential relationship between the energy bands in Si and GaN, the standard hydrogen electrode potential of the solution, and the metals.
NASA Technical Reports Server (NTRS)
Goodrich, John W.
1995-01-01
Two methods for developing high order single step explicit algorithms on symmetric stencils with data on only one time level are presented. Examples are given for the convection and linearized Euler equations with up to eighth order accuracy in both space and time in one space dimension, and up to sixth order in two space dimensions. The method of characteristics is generalized to nondiagonalizable hyperbolic systems by using exact local polynomial solutions of the system, and the resulting exact propagator methods automatically incorporate the correct multidimensional wave propagation dynamics. Multivariate Taylor or Cauchy-Kowalevskaya expansions are also used to develop algorithms. Both of these methods can be applied to obtain algorithms of arbitrarily high order for hyperbolic systems in multiple space dimensions. Cross derivatives are included in the local approximations used to develop the algorithms in this paper in order to obtain high order accuracy and improved isotropy and stability. Efficiency in meeting global error bounds is an important criterion for evaluating algorithms, and the higher order algorithms are shown to be up to several orders of magnitude more efficient even though they are more complex. Stable high order boundary conditions for the linearized Euler equations are developed in one space dimension and demonstrated in two space dimensions.
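The Taylor-expansion route to a single-step explicit scheme is easiest to see at second order, where it yields the classic Lax-Wendroff update for the convection equation u_t + a u_x = 0 (the paper pushes the same construction to eighth order and to systems):

```python
def lax_wendroff_step(u, nu):
    """One single-step explicit update for u_t + a u_x = 0, built by
    expanding u in time to second order and substituting spatial
    derivatives via the PDE. nu = a*dt/dx is the Courant number;
    periodic boundary conditions via index wrap-around."""
    n = len(u)
    return [u[j]
            - 0.5 * nu * (u[(j + 1) % n] - u[j - 1])
            + 0.5 * nu ** 2 * (u[(j + 1) % n] - 2 * u[j] + u[j - 1])
            for j in range(n)]
```

At nu = 1 the scheme propagates the solution exactly one grid cell per step, a handy sanity check on any implementation.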
Self-induced GaN nanowire growth: surface density determination
NASA Astrophysics Data System (ADS)
Koryakin, A. A.; Repetun, L.; Sibirev, N. V.; Dubrovskii, V. G.
2016-08-01
A new numerical approach for the determination of the GaN nanowire surface density on an AlN/Si substrate as a function of the growth time and gallium flux is presented. Within this approach, the GaN island solid-like coalescence and island-nanowire transition are modeled by the Monte-Carlo method. We show the importance of taking into consideration the island coalescence for explaining that the maximum of GaN island surface density is several times larger than the maximum of GaN nanowire surface density. Also, we find that the nanowire surface density decreases with an increase of the gallium flux.
Multisensor data fusion algorithm development
Yocky, D.A.; Chadwick, M.D.; Goudy, S.P.; Johnson, D.K.
1995-12-01
This report presents a two-year LDRD research effort into multisensor data fusion. We approached the problem by addressing the available types of data, preprocessing that data, and developing fusion algorithms using that data. The report reflects these three distinct areas. First, the possible data sets for fusion are identified. Second, automated registration techniques for imagery data are analyzed. Third, two fusion techniques are presented. The first fusion algorithm is based on the two-dimensional discrete wavelet transform. Using test images, the wavelet algorithm is compared against the intensity-modulation and intensity-hue-saturation image fusion algorithms available in commercial software. The wavelet approach outperforms the other two fusion techniques by preserving spectral and spatial information more precisely. The wavelet fusion algorithm was also applied to Landsat Thematic Mapper and SPOT panchromatic imagery data. The second algorithm is based on a linear-regression technique, which we analyzed using the same Landsat and SPOT data.
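The wavelet-fusion idea (keep the coarse spectral content of one source and inject the fine detail of the other) is easiest to show in 1-D with a Haar transform; the report's algorithm is the 2-D analogue, and the substitution rule here is one simple fusion choice among several.

```python
def haar_decompose(signal):
    """One-level 1-D Haar transform (even-length input): pairwise
    averages (approximation) and pairwise differences (detail)."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

def haar_reconstruct(approx, detail):
    """Exact inverse of haar_decompose."""
    out = []
    for a, d in zip(approx, detail):
        out.extend([a + d, a - d])
    return out

def fuse(spectral, panchromatic):
    """Wavelet fusion sketch: keep the approximation of the spectral
    (low-resolution) signal, inject the detail of the panchromatic
    (high-resolution) signal, then invert the transform."""
    a_spec, _ = haar_decompose(spectral)
    _, d_pan = haar_decompose(panchromatic)
    return haar_reconstruct(a_spec, d_pan)
```

Because detail coefficients carry the sharp transitions, the fused output inherits the pan signal's edges while retaining the spectral signal's local means, which is why this approach preserves spectral/spatial content better than intensity substitution.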
Filho, Faete J; Tolbert, Leon M; Ozpineci, Burak
2012-01-01
This work proposes a methodology for calculating switching angles for varying DC sources in a multilevel cascaded H-bridge converter. In this approach the required fundamental is achieved, the lower harmonics are minimized, and the system can be implemented in real time with low memory requirements. A genetic algorithm (GA) is used as the stochastic search method to find the solution to the set of equations in which the input voltages are the known variables and the switching angles are the unknowns. With the dataset generated by the GA, an artificial neural network (ANN) is trained to store the solutions without excessive memory storage requirements. The trained ANN then senses the voltage of each cell and produces the switching angles to regulate the fundamental at 120 V and eliminate or minimize the low-order harmonics while operating in real time.
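The equations the GA solves come from the Fourier series of the staircase output: for a quarter-wave-symmetric cascaded H-bridge waveform, each odd harmonic amplitude is a cosine sum over the switching angles. A sketch of that standard expression (the paper's exact equation set, with its per-cell voltages, is assumed to match this form):

```python
import math

def harmonic_amplitude(angles_rad, cell_voltages, n):
    """Amplitude of the n-th odd harmonic of a staircase waveform with
    one switching angle per H-bridge cell. The GA searches angles so the
    n=1 term hits the target fundamental while low odd harmonics vanish."""
    return (4.0 / (math.pi * n)) * sum(
        v * math.cos(n * t) for v, t in zip(cell_voltages, angles_rad))
```

For example, a single cell switched at 30° contributes nothing to the third harmonic, since cos(3·30°) = 0.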
Synthesis of Greedy Algorithms Using Dominance Relations
NASA Technical Reports Server (NTRS)
Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.
2010-01-01
Greedy algorithms exploit problem structure and constraints to achieve linear-time performance. Yet there is still no completely satisfactory way of constructing greedy algorithms. For example, the Greedy Algorithm of Edmonds depends upon translating a problem into an algebraic structure called a matroid, but the existence of such a translation can be as hard to determine as the existence of a greedy algorithm itself. An alternative characterization of greedy algorithms is in terms of dominance relations, a well-known algorithmic technique used to prune search spaces. We demonstrate a process by which dominance relations can be methodically derived for a number of greedy algorithms, including activity selection and prefix-free codes. By incorporating our approach into an existing framework for algorithm synthesis, we demonstrate that it could be the basis for an effective engineering method for greedy algorithms. We also compare our approach with other characterizations of greedy algorithms.
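Activity selection, one of the paper's examples, shows what a dominance relation buys: among mutually compatible choices, the activity that finishes earliest dominates every alternative, so the search tree collapses to a single greedy pass (the code below is the textbook algorithm, not the paper's synthesized artifact).

```python
def select_activities(activities):
    """Greedy activity selection over (start, finish) pairs: scanning in
    order of finish time, an earlier-finishing compatible activity
    dominates any later-finishing one, so it is always safe to take it."""
    chosen, last_finish = [], float("-inf")
    for start, finish in sorted(activities, key=lambda a: a[1]):
        if start >= last_finish:       # compatible with what we've chosen
            chosen.append((start, finish))
            last_finish = finish
    return chosen
```

The dominance argument is what justifies never backtracking: no discarded activity can appear in a strictly larger compatible set.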