NASA Astrophysics Data System (ADS)
Ozbulut, O. E.; Silwal, B.
2014-04-01
This study investigates the optimum design parameters of a superelastic friction base isolator (S-FBI) system through a multi-objective genetic algorithm and a performance-based evaluation approach. The S-FBI system consists of a flat steel-PTFE sliding bearing and a superelastic NiTi shape memory alloy (SMA) device. The sliding bearing limits the transfer of shear across the isolation interface and provides damping from sliding friction. The SMA device provides restoring force capability to the isolation system together with additional damping. A three-story building is modeled with the S-FBI isolation system. A multi-objective numerical optimization that simultaneously minimizes isolation-level displacements and superstructure response is carried out with a genetic algorithm (GA) in order to optimize the S-FBI system. Nonlinear time history analyses of the building with the S-FBI system are performed, using a set of 20 near-field ground motion records in the numerical simulations. Results show that the S-FBI system successfully controls the response of the building against near-fault earthquakes without sacrificing isolation efficacy or producing large isolation-level deformations.
NASA Astrophysics Data System (ADS)
Nezami, M.; Gholami, B.
2016-03-01
The active flutter control of supersonic sandwich panels with regular honeycomb interlayers under impact load excitation is studied using piezoelectric patches. A non-dominated sorting-based multi-objective evolutionary algorithm, the non-dominated sorting genetic algorithm II (NSGA-II), is suggested to find the optimal locations for different numbers of piezoelectric actuator/sensor pairs. Quasi-steady first-order supersonic piston theory is employed to define the aerodynamic loading, and the p-method is applied to find the flutter bounds. Hamilton's principle, in conjunction with generalized Fourier expansions and the Galerkin method, is used to develop the dynamical model of the structural system in the state-space domain. The classical Runge-Kutta time integration algorithm is then used to calculate the open-loop aeroelastic response of the system. The maximum flutter velocity and minimum voltage applied to the actuators are calculated for the optimal locations of the piezoelectric patches obtained using the NSGA-II, and proportional feedback is then used to actively suppress the closed-loop system response. Finally, the control effects of the two different controllers are compared.
NASA Astrophysics Data System (ADS)
Yadav, Ravindra Nath; Yadava, Vinod; Singh, G. K.
2013-09-01
The effective study of hybrid machining processes (HMPs), in terms of modeling and optimization, has always been a challenge to researchers. The combined approach of Artificial Neural Network (ANN) and Non-Dominated Sorting Genetic Algorithm-II (NSGA-II) has attracted the attention of researchers for modeling and optimizing complex machining processes. In this paper, a hybrid machining process combining Electrical Discharge Face Grinding (EDFG) and Diamond Face Grinding (DFG), named Electrical Discharge Diamond Face Grinding (EDDFG), has been studied using a hybrid ANN-NSGA-II methodology. In this study, ANN has been used for modeling while NSGA-II is used to optimize the control parameters of the EDDFG process. To observe the input-output relations, experiments were conducted on a self-developed face grinding setup attached to the ram of an EDM machine. During experimentation, the wheel speed, pulse current, pulse on-time and duty factor are taken as input parameters, while the output parameters are material removal rate (MRR) and average surface roughness (Ra). The results have shown that the developed ANN model is capable of predicting the output responses within acceptable limits for a given set of input parameters. It has also been found that the hybrid ANN-NSGA-II approach gives a set of optimal solutions for obtaining appropriate output values under multiple objectives.
NASA Astrophysics Data System (ADS)
Gao, Zhongmei; Shao, Xinyu; Jiang, Ping; Wang, Chunming; Zhou, Qi; Cao, Longchao; Wang, Yilin
2016-06-01
An integrated multi-objective optimization approach combining the Kriging model and the non-dominated sorting genetic algorithm-II (NSGA-II) is proposed in this paper to predict and optimize weld geometry in hybrid fiber laser-arc welding of 316L stainless steel. A four-factor, five-level experiment using a Taguchi L25 orthogonal array is conducted, considering laser power (P), welding current (I), distance between laser and arc (D) and traveling speed (V). Kriging models are adopted to approximate the relationship between process parameters and weld geometry, namely depth of penetration (DP), bead width (BW) and bead reinforcement (BR). NSGA-II is used for multi-objective optimization, taking the constructed Kriging models as objective functions, and generates a set of optimal solutions along a Pareto-optimal front for the outputs. Meanwhile, the main effects and the first-order interactions between process parameters are analyzed, and the microstructure is also discussed. Verification experiments demonstrate that the optimum values obtained by the proposed integrated Kriging and NSGA-II approach are in good agreement with experimental results.
NASA Astrophysics Data System (ADS)
Ebrahimi Zade, Amir; Sadegheih, Ahmad; Lotfi, Mohammad Mehdi
2014-07-01
Hubs are centers for the collection, rearrangement, and redistribution of commodities in transportation networks. In this paper, non-linear multi-objective formulations for single and multiple allocation hub maximal covering problems, as well as their linearized versions, are proposed. The formulations substantially mitigate the complexity of the existing models owing to the smaller number of constraints and variables. Uncertain shipments are also studied in the context of hub maximal covering problems. In many real-world applications, any link on the path from origin to destination may fail due to disruption. Therefore, in the proposed bi-objective model, maximizing the safety of the weakest path in the network is considered as a second objective alongside the traditional maximum coverage goal. Furthermore, to solve the bi-objective model, a modified version of NSGA-II with a new dynamic immigration operator is developed, in which the number of immigrants depends on the results of the other two common NSGA-II operators, i.e. mutation and crossover. Besides validating the proposed models, computational results confirm the better performance of the modified NSGA-II versus the traditional one.
Application of MIMO Disturbance Observer to Control of an Electric Wheelchair Using NSGA-II
Saadatzi, Mohammad Nasser; Poshtan, Javad; Saadatzi, Mohammad Sadegh
2011-01-01
Electric wheelchairs (EWs) experience various terrain surfaces and slopes as well as occupants with diverse weights. This, in turn, imparts a substantial amount of perturbation to the EW dynamics. In this paper, we make use of a two-degree-of-freedom control architecture called the disturbance observer (DOB), which reduces sensitivity to model uncertainties while enhancing rejection of disturbances caused by entering slopes. The feedback loop, designed via the characteristic loci method, is then augmented with a DOB with a parameterized low-pass filter. Three performance indices are defined for the disturbance rejection, sensitivity reduction, and noise rejection of the whole controller, which enable us to pick the filter's optimal parameters using a multi-objective optimization approach called the non-dominated sorting genetic algorithm-II. Finally, experimental results show desirable improvement in the stiffness and disturbance rejection of the proposed controller as well as its robust stability. PMID:22606667
Wang, Ning-Ning; Dong, Jie; Deng, Yin-Hua; Zhu, Min-Feng; Wen, Ming; Yao, Zhi-Jiang; Lu, Ai-Ping; Wang, Jian-Bing; Cao, Dong-Sheng
2016-04-25
The Caco-2 cell monolayer model is a popular surrogate for predicting the in vitro human intestinal permeability of a drug, owing to its morphological and functional similarity with human enterocytes. A quantitative structure-property relationship (QSPR) study was carried out to predict the Caco-2 cell permeability of a large data set consisting of 1272 compounds. Four different methods, including multivariate linear regression (MLR), partial least-squares (PLS), support vector machine (SVM) regression and Boosting, were employed to build prediction models with 30 molecular descriptors selected by the non-dominated sorting genetic algorithm-II (NSGA-II). The best Boosting model was finally obtained with R(2) = 0.97, RMSEF = 0.12, Q(2) = 0.83, RMSECV = 0.31 for the training set and RT(2) = 0.81, RMSET = 0.31 for the test set. A series of validation methods were used to assess the robustness and predictive ability of our model according to the OECD principles and then to define its applicability domain. Compared with reported QSAR/QSPR models of Caco-2 cell permeability, our model exhibits advantages in database size and prediction accuracy. Finally, we found that the polar volume, the hydrogen bond donors, the surface area and some other descriptors influence Caco-2 permeability to some extent. These results suggest that the proposed model is a good tool for predicting the permeability of drug candidates and for performing virtual screening in the early stage of drug development. PMID:27018227
NASA Astrophysics Data System (ADS)
Ahmadianfar, Iman; Adib, Arash; Taghian, Mehrdad
2016-06-01
Reservoir hedging rule curves are used to avoid severe water shortage during drought periods. In this method, reservoir storage is divided into several zones, and the rationing factors change immediately when the water storage level moves from one zone to another. In the present study, a hedging rule with fuzzy rationing factors was applied to create a transition zone above and below each rule curve, within which the rationing factor changes gradually. For this purpose, a monthly simulation model was developed and linked to the non-dominated sorting genetic algorithm to calculate the modified shortage index of two objective functions, involving water supply for minimum flow and agricultural demands over a long-term simulation period. The Zohre multi-reservoir system in southern Iran has been considered as a case study. The proposed hedging rule improved the long-term system performance by 10 to 27 percent in comparison with the simple hedging rule. These results demonstrate that the fuzzification of hedging factors increases the applicability and efficiency of the new hedging rule, in comparison to the conventional rule curve, for mitigating the water shortage problem.
Mokeddem, Diab; Khellaf, Abdelhafid
2009-01-01
Optimal design problems are widely known for their multiple performance measures that often compete with each other. In this paper, an optimal multiproduct batch chemical plant design is presented. The design is first formulated as a multiobjective optimization problem, to be solved using the well-suited non-dominated sorting genetic algorithm (NSGA-II). NSGA-II has the capability to achieve fine tuning of variables in determining a set of non-dominated solutions distributed along the Pareto front in a single run of the algorithm. Its ability to identify a set of optimal solutions provides the decision-maker (DM) with a complete picture of the optimal solution space from which to make better and more appropriate choices. Outranking with PROMETHEE II then helps the decision-maker finalize the selection of the best compromise. The effectiveness of the NSGA-II method on multiobjective optimization problems is illustrated through two carefully referenced examples. PMID:19543537
MULTIOBJECTIVE PARALLEL GENETIC ALGORITHM FOR WASTE MINIMIZATION
In this research we have developed an efficient multiobjective parallel genetic algorithm (MOPGA) for waste minimization problems. This MOPGA integrates PGAPack (Levine, 1996) and NSGA-II (Deb, 2000) with novel modifications. PGAPack is a master-slave parallel implementation of a...
NASA Astrophysics Data System (ADS)
Karakostas, Spiros
2015-05-01
The multi-objective nature of most spatial planning initiatives and the numerous constraints that are introduced in the planning process by decision makers, stakeholders, etc., synthesize a complex spatial planning context in which the concept of solid and meaningful optimization is a unique challenge. This article investigates new approaches to enhance the effectiveness of multi-objective evolutionary algorithms (MOEAs) via the adoption of a well-known metaheuristic: the non-dominated sorting genetic algorithm II (NSGA-II). In particular, the contribution of a sophisticated crossover operator coupled with an enhanced initialization heuristic is evaluated against a series of metrics measuring the effectiveness of MOEAs. Encouraging results emerge for both the convergence rate of the evolutionary optimization process and the occupation of valuable regions of the objective space by non-dominated solutions, facilitating the work of spatial planners and decision makers. Based on the promising behaviour of both heuristics, topics for further research are proposed to improve their effectiveness.
Multi-objective optimization of lithium-ion battery model using genetic algorithm approach
NASA Astrophysics Data System (ADS)
Zhang, Liqiang; Wang, Lixin; Hinds, Gareth; Lyu, Chao; Zheng, Jun; Li, Junfu
2014-12-01
A multi-objective parameter identification method for modeling of Li-ion battery performance is presented. Terminal voltage and surface temperature curves at 15 °C and 30 °C are used as four identification objectives. The Pareto fronts of two types of Li-ion battery are obtained using the modified multi-objective genetic algorithm NSGA-II and the final identification results are selected using the multiple criteria decision making method TOPSIS. The simulated data using the final identification results are in good agreement with experimental data under a range of operating conditions. The validation results demonstrate that the modified NSGA-II and TOPSIS algorithms can be used as robust and reliable tools for identifying parameters of multi-physics models for many types of Li-ion batteries.
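Several entries in this collection pair NSGA-II with TOPSIS to pick one compromise solution from the Pareto front. As an illustration only (not the authors' code, and with hypothetical objective values, both minimized), a minimal TOPSIS ranking can be sketched as:

```python
# TOPSIS: rank Pareto solutions by relative closeness to the ideal point.
# All objectives are assumed minimized; `front` rows are toy data.

def topsis(front, weights=None):
    m, n = len(front), len(front[0])
    weights = weights or [1.0 / n] * n
    # Vector-normalize each objective column, then apply the weights.
    norms = [sum(row[j] ** 2 for row in front) ** 0.5 for j in range(n)]
    v = [[weights[j] * row[j] / norms[j] for j in range(n)] for row in front]
    # For minimization, the ideal point takes column minima, the anti-ideal maxima.
    ideal = [min(col) for col in zip(*v)]
    worst = [max(col) for col in zip(*v)]
    scores = []
    for row in v:
        d_pos = sum((a - b) ** 2 for a, b in zip(row, ideal)) ** 0.5
        d_neg = sum((a - b) ** 2 for a, b in zip(row, worst)) ** 0.5
        scores.append(d_neg / (d_pos + d_neg))  # closeness to the ideal
    return scores

# Hypothetical bi-objective Pareto front; the balanced middle solution wins.
front = [[0.1, 0.9], [0.4, 0.4], [0.9, 0.1]]
scores = topsis(front)
best = max(range(len(front)), key=scores.__getitem__)
```

With equal weights, the two extreme solutions score identically and the compromise solution is selected, which is the role TOPSIS plays after the NSGA-II search above.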
Multi-objective evolutionary algorithm for operating parallel reservoir system
NASA Astrophysics Data System (ADS)
Chang, Li-Chiu; Chang, Fi-John
2009-10-01
This paper applies a multi-objective evolutionary algorithm, the non-dominated sorting genetic algorithm (NSGA-II), to examine the operations of a multi-reservoir system in Taiwan. The Feitsui and Shihmen reservoirs are the most important water supply reservoirs in Northern Taiwan, meeting the domestic and industrial water needs of over 7 million residents. A daily operational simulation model is developed to guide the releases of the reservoir system and then to calculate the shortage indices (SI) of both reservoirs over a long-term simulation period. The NSGA-II is used to minimize the SI values through identification of optimal joint operating strategies. Based on a 49-year data set, we demonstrate that better operational strategies would reduce shortage indices for both reservoirs. The results indicate that the NSGA-II provides a promising approach: the Pareto-front optimal solutions identified operational compromises for the two reservoirs that would be expected to improve joint operations.
Technology Transfer Automated Retrieval System (TEKTRAN)
Stream temperature is one of the most influential parameters impacting the survival, growth rates, distribution, and migration patterns of many aquatic organisms. Distributed stream temperature models are crucial for providing insights into variations of stream temperature for regions and time perio...
A Novel Multiobjective Evolutionary Algorithm Based on Regression Analysis
Song, Zhiming; Wang, Maocai; Dai, Guangming; Vasile, Massimiliano
2015-01-01
As is known, the Pareto set of a continuous multiobjective optimization problem with m objective functions is a piecewise continuous (m − 1)-dimensional manifold in the decision space under some mild conditions. How to utilize this regularity to design multiobjective optimization algorithms has become a research focus. In this paper, based on this regularity, a model-based multiobjective evolutionary algorithm with regression analysis (MMEA-RA) is put forward to solve continuous multiobjective optimization problems with variable linkages. In the algorithm, the optimization problem is modelled as a promising area in the decision space described by a probability distribution, whose centroid is an (m − 1)-dimensional piecewise continuous manifold. The least squares method is used to construct such a model, and a selection strategy based on nondominated sorting is used to choose the individuals for the next generation. The new algorithm is tested and compared with NSGA-II and RM-MEDA. The results show that MMEA-RA outperforms RM-MEDA and NSGA-II on the test instances with variable linkages, while also achieving higher efficiency than the other two algorithms. A few shortcomings of MMEA-RA are also identified and discussed in this paper. PMID:25874246
Distributed Query Plan Generation Using Multiobjective Genetic Algorithm
Panicker, Shina; Vijay Kumar, T. V.
2014-01-01
A distributed query processing strategy, which is a key performance determinant in accessing distributed databases, aims to minimize the total query processing cost. One way to achieve this is by generating efficient distributed query plans that involve fewer sites for processing a query. In the case of distributed relational databases, the number of possible query plans increases exponentially with the number of relations accessed by the query and the number of sites where these relations reside. Consequently, computing optimal distributed query plans becomes a complex problem. This distributed query plan generation (DQPG) problem has already been addressed using a single-objective genetic algorithm, where the objective is to minimize the total query processing cost comprising the local processing cost (LPC) and the site-to-site communication cost (CC). In this paper, the DQPG problem is formulated and solved as a bi-objective optimization problem with the two objectives of minimizing total LPC and minimizing total CC. These objectives are simultaneously optimized using the multiobjective genetic algorithm NSGA-II. Experimental comparison of the proposed NSGA-II based DQPG algorithm with the single-objective genetic algorithm shows that the former performs comparatively better and converges quickly towards optimal solutions for an observed crossover and mutation probability. PMID:24963513
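The core of NSGA-II in the bi-objective setting above is ranking candidate plans by Pareto dominance over (LPC, CC). A minimal sketch of Deb's fast non-dominated sorting, applied to hypothetical (LPC, CC) cost pairs (both minimized; not the paper's data), looks like:

```python
# Fast non-dominated sorting (Deb et al.) over toy (LPC, CC) cost pairs.

def dominates(p, q):
    """p dominates q: no worse in every objective, strictly better in one."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def fast_nondominated_sort(points):
    fronts = [[]]
    S = [[] for _ in points]      # S[i]: indices of solutions dominated by i
    n = [0] * len(points)         # n[i]: number of solutions dominating i
    for i, p in enumerate(points):
        for j, q in enumerate(points):
            if dominates(p, q):
                S[i].append(j)
            elif dominates(q, p):
                n[i] += 1
        if n[i] == 0:
            fronts[0].append(i)   # non-dominated: first front
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in S[i]:
                n[j] -= 1
                if n[j] == 0:     # all its dominators are already ranked
                    nxt.append(j)
        fronts.append(nxt)
        k += 1
    return fronts[:-1]            # drop the trailing empty front

# Hypothetical query plans as (LPC, CC) pairs.
plans = [(3, 9), (5, 5), (9, 2), (6, 8), (10, 10)]
fronts = fast_nondominated_sort(plans)
```

The first front holds the mutually non-dominated plans; subsequent fronts hold successively dominated ones, which is how NSGA-II assigns rank before selection.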
A Self-adaptive Evolutionary Algorithm for Multi-objective Optimization
NASA Astrophysics Data System (ADS)
Cao, Ruifen; Li, Guoli; Wu, Yican
Evolutionary algorithms have gained worldwide popularity in multi-objective optimization. This paper proposes a self-adaptive evolutionary algorithm (SEA) for multi-objective optimization. In the SEA, the probabilities of crossover and mutation, Pc and Pm, are varied depending on the fitness values of the solutions. The fitness assignment of the SEA realizes the twin goals of maintaining diversity in the population and guiding the population to the true Pareto front; the fitness value of an individual depends not only on an improved density estimation but also on its non-dominated rank. The density estimation can maintain diversity in all instances, including when the scales of the objectives differ greatly from each other. SEA is compared against the Non-dominated Sorting Genetic Algorithm II (NSGA-II) on a set of test problems introduced by the MOEA community. Simulation results show that SEA is as effective as NSGA-II on most test functions, but when the scales of the objectives differ greatly, SEA achieves a better distribution of non-dominated solutions.
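The abstract above does not give the SEA's exact update rules for Pc and Pm, but fitness-adaptive rates are commonly implemented along the lines of the classic Srinivas-Patnaik scheme. The sketch below is that scheme (an assumption, not the SEA itself), for a maximized scalar fitness:

```python
# Fitness-adaptive crossover/mutation probabilities (Srinivas-Patnaik style):
# above-average individuals get proportionally lower rates (preservation),
# below-average individuals keep the high rates (exploration).

def adaptive_rates(fitness, f_avg, f_max, pc_hi=0.9, pm_hi=0.1):
    if f_max == f_avg:                       # degenerate (uniform) population
        return pc_hi, pm_hi
    if fitness >= f_avg:
        scale = (f_max - fitness) / (f_max - f_avg)
        return pc_hi * scale, pm_hi * scale  # best individual gets (0, 0)
    return pc_hi, pm_hi
```

For a population with average fitness 2.5 and maximum 4.0, the best individual is left untouched, a solution at fitness 3.0 gets intermediate rates, and below-average solutions are perturbed at the full rates.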
NASA Astrophysics Data System (ADS)
Wu, J.; Yang, Y.; Luo, Q.; Wu, J.
2012-12-01
This study presents a new hybrid multi-objective evolutionary algorithm, the niched Pareto tabu search combined with a genetic algorithm (NPTSGA), whereby the global search ability of the niched Pareto tabu search (NPTS) is improved by the diversification of candidate solutions arising from the evolving non-dominated sorting genetic algorithm II (NSGA-II) population. The NPTSGA, coupled with the commonly used groundwater flow and transport codes MODFLOW and MT3DMS, is developed for the multi-objective optimal design of groundwater remediation systems. The proposed methodology is then applied to a large-scale field groundwater remediation system for cleanup of a large trichloroethylene (TCE) plume at the Massachusetts Military Reservation (MMR) in Cape Cod, Massachusetts. Furthermore, a master-slave (MS) parallelization scheme based on the Message Passing Interface (MPI) is incorporated into the NPTSGA to carry out objective function evaluations in a distributed processor environment, which can greatly improve the efficiency of the NPTSGA in finding Pareto-optimal solutions to the real-world application. This study shows that the MS parallel NPTSGA, in comparison with the original NPTS and NSGA-II, can balance the tradeoff between diversity and optimality of solutions during the search process and is an efficient and effective tool for optimizing the multi-objective design of groundwater remediation systems under complicated hydrogeologic conditions.
Constrained Multiobjective Biogeography Optimization Algorithm
Mo, Hongwei; Xu, Zhidan; Xu, Lifang; Wu, Zhou; Ma, Haiping
2014-01-01
Multiobjective optimization involves minimizing or maximizing multiple objective functions subject to a set of constraints. In this study, a novel constrained multiobjective biogeography optimization algorithm (CMBOA) is proposed. It is the first biogeography optimization algorithm for constrained multiobjective optimization. In CMBOA, a disturbance migration operator is designed to generate diverse feasible individuals in order to promote the diversity of individuals on Pareto front. Infeasible individuals nearby feasible region are evolved to feasibility by recombining with their nearest nondominated feasible individuals. The convergence of CMBOA is proved by using probability theory. The performance of CMBOA is evaluated on a set of 6 benchmark problems and experimental results show that the CMBOA performs better than or similar to the classical NSGA-II and IS-MOEA. PMID:25006591
NASA Astrophysics Data System (ADS)
Rout, Sachindra K.; Choudhury, Balaji K.; Sahoo, Ranjit K.; Sarangi, Sunil K.
2014-07-01
The modeling and optimization of a pulse tube refrigerator is a complicated task, owing to the complexity of its geometry and physical behaviour. The aim of the present work is to optimize the dimensions of the pulse tube and regenerator of an inertance-type pulse tube refrigerator (ITPTR) by using response surface methodology (RSM) and the non-dominated sorting genetic algorithm II (NSGA-II). The Box-Behnken design of the response surface methodology is used in an experimental matrix with four factors and two levels. The diameters and lengths of the pulse tube and regenerator are chosen as the design variables, while the remaining dimensions and operating conditions of the ITPTR are held constant. The output responses are the cold head temperature (Tcold) and compressor input power (Wcomp). Computational fluid dynamics (CFD) has been used to model and solve the ITPTR, and the CFD results agree well with those of a previously published paper. Using the results from the 1-D simulation, RSM is conducted to analyse the effect of the independent variables on the responses; the analysis of variance (ANOVA) method has been used to check the accuracy of the model. Based on the proposed mathematical RSM models, a multi-objective optimization study using NSGA-II has been performed to optimize the responses.
NASA Astrophysics Data System (ADS)
Chen, Jing; Liu, Tundong; Jiang, Hao
2016-01-01
A Pareto-based multi-objective optimization approach is proposed to design multichannel FBG filters. Instead of defining a single optimal objective, the proposed method establishes the multi-objective model by taking two design objectives into account, which are minimizing the maximum index modulation and minimizing the mean dispersion error. To address this optimization problem, we develop a two-stage evolutionary computation approach integrating an elitist non-dominated sorting genetic algorithm (NSGA-II) and technique for order preference by similarity to ideal solution (TOPSIS). NSGA-II is utilized to search for the candidate solutions in terms of both objectives. The obtained results are provided as Pareto front. Subsequently, the best compromise solution is determined by the TOPSIS method from the Pareto front according to the decision maker's preference. The design results show that the proposed approach yields a remarkable reduction of the maximum index modulation and the performance of dispersion spectra of the designed filter can be optimized simultaneously.
Multiple ant colony algorithm method for selecting tag SNPs.
Liao, Bo; Li, Xiong; Zhu, Wen; Li, Renfa; Wang, Shulin
2012-10-01
The search for associations between complex disease and single nucleotide polymorphisms (SNPs) or haplotypes has recently received great attention. Finding a set of tag SNPs for haplotyping in a large number of samples is an important step in reducing the cost of association studies, so it is essential to select tag SNPs with more efficient algorithms. In this paper, we model the problem of selecting tag SNPs as MINIMUM TEST SET and use a multiple ant colony algorithm (MACA) to search for a smaller set of tag SNPs for haplotyping. Experimental results on various datasets show that the running time of our method is less than that of GTagger and MLR, and that MACA can find the most representative SNPs for haplotyping; MACA is thus more stable, and the number of tag SNPs it selects is also smaller than for other evolutionary methods (such as GTagger and NSGA-II). Our software is available upon request to the corresponding author. PMID:22480582
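The MINIMUM TEST SET encoding above asks for the fewest SNP columns that jointly distinguish every pair of haplotypes. The paper's MACA is an ant-colony method; as a simpler baseline that only illustrates the encoding (toy haplotype data, not the paper's), a greedy set-cover sketch is:

```python
# Greedy tag-SNP selection: repeatedly pick the SNP column that
# distinguishes the most still-identical haplotype pairs.
from itertools import combinations

def greedy_tag_snps(haplotypes):
    n_snps = len(haplotypes[0])
    # A pair of haplotypes is "covered" once some chosen SNP differs on it.
    uncovered = set(combinations(range(len(haplotypes)), 2))
    chosen = []
    while uncovered:
        best = max(range(n_snps),
                   key=lambda s: sum(haplotypes[i][s] != haplotypes[j][s]
                                     for i, j in uncovered))
        newly = {(i, j) for i, j in uncovered
                 if haplotypes[i][best] != haplotypes[j][best]}
        if not newly:
            raise ValueError("some haplotypes are indistinguishable")
        chosen.append(best)
        uncovered -= newly
    return chosen

# Toy haplotypes (rows) over four SNP sites (columns).
haps = ["0011", "0101", "1001", "1110"]
tags = greedy_tag_snps(haps)
```

Here the first two SNP sites suffice to tell all four haplotypes apart; the greedy rule gives a standard logarithmic-factor approximation for set cover, whereas MACA searches for smaller sets stochastically.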
NASA Astrophysics Data System (ADS)
Schütze, Niels; Wöhling, Thomas; de Play, Michael
2010-05-01
Some real-world optimization problems in water resources have a high-dimensional space of decision variables and more than one objective function. In this work, we compare three general-purpose multi-objective simulation optimization algorithms, namely NSGA-II, AMALGAM, and CMA-ES-MO, on three real-world multi-objective optimization problems (MOPs): (i) a high-dimensional soil hydraulic parameter estimation problem; (ii) a multipurpose multi-reservoir operation problem; and (iii) a scheduling problem in deficit irrigation. We analyze the behaviour of the three algorithms on these test problems, with formulations ranging from 40 to 120 decision variables and from 2 to 4 objectives. The computational effort required by each algorithm to reach the true Pareto front is also analyzed.
A non-dominated sorting genetic algorithm for a bi-objective pick-up and delivery problem
NASA Astrophysics Data System (ADS)
Velasco, N.; Dejax, P.; Guéret, C.; Prins, C.
2012-03-01
Some companies must transport their personnel between facilities. This is especially the case for oil companies that use helicopters to transport engineers, technicians and assistant personnel from platform to platform. This operation can become expensive, and provide poor quality of service, if the transportation routes are not correctly planned. Here the issue is modelled as a pick-up and delivery problem in which a set of transportation requests must be scheduled into routes, minimizing the total transportation cost while the most urgent requests are satisfied with priority. To solve the problem, a method based on the Non-dominated Sorting Genetic Algorithm (NSGA-II) is proposed. The algorithm is tested on both randomly generated instances and real instances provided by a petroleum company. The results show that the proposed algorithm improves the best-known solutions.
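Alongside non-dominated rank, NSGA-II breaks ties within a front by crowding distance, which is what keeps the bi-objective cost/urgency trade-off above well spread. A minimal sketch over one non-dominated front (toy objective values, not the paper's) is:

```python
# NSGA-II crowding distance for one front: boundary solutions get infinite
# distance; interior ones accumulate the normalized side lengths of the
# cuboid formed by their nearest neighbours in each objective.

def crowding_distance(front):
    n = len(front)
    dist = [0.0] * n
    for m in range(len(front[0])):            # per objective
        order = sorted(range(n), key=lambda i: front[i][m])
        lo, hi = front[order[0]][m], front[order[-1]][m]
        dist[order[0]] = dist[order[-1]] = float("inf")
        if hi == lo:
            continue                          # objective is constant
        for k in range(1, n - 1):
            i = order[k]
            dist[i] += (front[order[k + 1]][m] - front[order[k - 1]][m]) / (hi - lo)
    return dist

# Toy bi-objective front (both objectives minimized).
front = [(1.0, 5.0), (2.0, 3.0), (4.0, 2.0), (6.0, 1.0)]
d = crowding_distance(front)
```

Solutions with larger crowding distance sit in sparser regions of the front and are preferred during selection, preserving diversity.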
NASA Astrophysics Data System (ADS)
Li, Tao; Mallick, Subhashis
2015-02-01
Consideration of azimuthal anisotropy, at least to an orthorhombic symmetry is important in exploring the naturally fractured and unconventional hydrocarbon reservoirs. Full waveform inversion of multicomponent seismic data can, in principle, provide more robust estimates of subsurface elastic parameters and density than the inversion of single component (P wave) seismic data. In addition, azimuthally dependent anisotropy can only be resolved by carefully studying the multicomponent seismic displacement data acquired and processed along different azimuths. Such an analysis needs an inversion algorithm capable of simultaneously optimizing multiple objectives, one for each data component along each azimuth. These multicomponent and multi-azimuthal seismic inversions are non-linear with non-unique solutions; it is therefore appropriate to treat the objectives as a vector and simultaneously optimize each of its components such that the optimal set of solutions could be obtained. The fast non-dominated sorting genetic algorithm (NSGA II) is a robust stochastic global search method capable of handling multiple objectives, but its computational expense increases with increasing number of objectives and the number of model parameters to be inverted for. In addition, an accurate extraction of subsurface azimuthal anisotropy requires multicomponent seismic data acquired at a fine spatial resolution along many source-to-receiver azimuths. Because routine acquisition of such data is prohibitively expensive, they are typically available along two or at most three azimuthal orientations at a spatial resolution where such an inversion could be applied. This paper proposes a novel multi-objective methodology using a parallelized version of NSGA II for waveform inversion of multicomponent seismic displacement data along two azimuths. By scaling the objectives prior to ranking, redefining the crowding distance as functions of the scaled objective and the model spaces, and varying
NASA Astrophysics Data System (ADS)
Zheng, Ling; Duan, Xuwei; Deng, Zhaoxue; Li, Yinong
2014-03-01
A novel flow-mode magneto-rheological (MR) engine mount integrating a diaphragm de-coupler and a spoiler plate is designed and developed to isolate the engine and transmission from the chassis over a wide frequency range and to overcome the stiffness increase at high frequencies. A lumped parameter model of the MR engine mount in a single-degree-of-freedom system is further developed based on the bond graph method to accurately predict the performance of the MR engine mount. The optimization mathematical model is established to minimize the total force transmissibility over several frequency ranges of interest. In this model, the lumped parameters are taken as design variables, while the maximum force transmissibility and the corresponding frequency in the low-frequency range, as well as the individual lumped parameters, are limited as constraints. A multiple interval sensitivity analysis method is developed to select the optimization variables and improve the efficiency of the optimization process. An improved non-dominated sorting genetic algorithm (NSGA-II) is used to solve the multi-objective optimization problem. The synthesized distance between individuals in the Pareto set and individuals in the engineering-feasible set is defined and calculated, and a set of real design parameters is thus obtained through the relationship between the optimal lumped parameters and the practical design parameters of the MR engine mount. The program flowchart for the improved NSGA-II is given. The obtained results demonstrate the effectiveness of the proposed optimization approach in minimizing the total force transmissibility over the frequency ranges of interest.
Comparing Evolutionary Strategies on a Biobjective Cultural Algorithm
Lagos, Carolina; Crawford, Broderick; Cabrera, Enrique; Rubio, José-Miguel; Paredes, Fernando
2014-01-01
Evolutionary algorithms have been widely used to solve large and complex optimisation problems. Cultural algorithms (CAs) are evolutionary algorithms that have been used to solve both single-objective and, to a lesser extent, multiobjective optimisation problems. To solve these optimisation problems, CAs make use of different knowledge sources such as normative knowledge, historical knowledge, and circumstantial knowledge, among others. In this paper we present a comparison among CAs that make use of different evolutionary strategies; the first one implements historical knowledge, the second one considers circumstantial knowledge, and the third one implements normative knowledge. These CAs are applied to a biobjective uncapacitated facility location problem (BOUFLP), the biobjective version of the well-known uncapacitated facility location problem. To the best of our knowledge, only a few articles have applied evolutionary multiobjective algorithms to the BOUFLP, and none of them has focused on the impact of the evolutionary strategy on algorithm performance. Our biobjective cultural algorithm, called BOCA, obtains important improvements when compared to other well-known evolutionary biobjective optimisation algorithms such as PAES and NSGA-II. The conflicting objective functions considered in this study are cost minimisation and coverage maximisation. Solutions obtained by each algorithm are compared using the hypervolume S metric. PMID:25254257
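For two objectives, the hypervolume S metric used for such comparisons reduces to a simple sweep. A minimal sketch, assuming both objectives are minimised, the front is mutually non-dominated, and the reference point is dominated by every front member:

```python
def hypervolume_2d(front, ref):
    """Hypervolume (S metric) of a 2-D Pareto front, both objectives
    minimised, relative to a reference point worse than all members."""
    pts = sorted(front)  # ascending f1 implies descending f2 on a Pareto front
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        # each point adds a rectangular slab not covered by its predecessors
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv
```

A larger hypervolume means the front dominates more of the objective space relative to the reference point, so the metric captures both convergence and spread in a single scalar.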
A hybrid multi-objective particle swarm algorithm for a mixed-model assembly line sequencing problem
NASA Astrophysics Data System (ADS)
Rahimi-Vahed, A. R.; Mirghorbani, S. M.; Rabbani, M.
2007-12-01
Mixed-model assembly line sequencing is one of the most important strategic problems in the field of production management where diversified customer demands exist. In this article, three major goals are considered: (i) total utility work, (ii) total production rate variation and (iii) total setup cost. Due to the complexity of the problem, a hybrid multi-objective algorithm based on particle swarm optimization (PSO) and tabu search (TS) is devised to obtain the locally Pareto-optimal frontier where simultaneous minimization of the above-mentioned objectives is desired. In order to validate the performance of the proposed algorithm in terms of solution quality and diversity level, the algorithm is applied to various test problems and its performance, based on different comparison metrics, is compared with that of three prominent multi-objective genetic algorithms: PS-NC GA, NSGA-II and SPEA-II. The computational results show that the proposed hybrid algorithm significantly outperforms the existing genetic algorithms in large-sized problems.
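Hybrid metaheuristics of this kind typically maintain an archive of non-dominated solutions. A minimal dominance filter, with all objectives minimised; the actual archive-update policy of the PSO/TS hybrid is not reproduced here:

```python
def pareto_filter(solutions):
    """Keep only the non-dominated members of a list of objective
    vectors (all objectives minimised)."""
    def dominates(a, b):
        # a dominates b: no worse in every objective, strictly better in one
        return all(x <= y for x, y in zip(a, b)) and \
               any(x < y for x, y in zip(a, b))
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]
```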
Multi-objective Job Shop Rescheduling with Evolutionary Algorithm
NASA Astrophysics Data System (ADS)
Hao, Xinchang; Gen, Mitsuo
In current manufacturing systems, production processes and management are subject to many unexpected events and constantly emerging new requirements. This dynamic environment implies that operation rescheduling is usually indispensable. A wide variety of procedures and heuristics has been developed to improve the quality of rescheduling. However, most proposed approaches are derived under simplified assumptions. As a consequence, these approaches might be inconsistent with the actual requirements in a real production environment, i.e., they are often unsuitable and inflexible in responding efficiently to frequent changes. In this paper, a multi-objective job shop rescheduling problem (moJSRP) is formulated to improve the practical application of rescheduling. To solve the moJSRP model, an evolutionary algorithm is designed, in which a random-key-based representation and an interactive adaptive-weight evolutionary algorithm (i-awEA) fitness assignment are embedded. To verify its effectiveness, the proposed algorithm has been compared with other approaches and benchmarks on the robustness of moJSRP optimization. The comparison results show that iAWGA-A is better than the weighted-fitness method in terms of effectiveness and stability. Similarly, iAWGA-A also outperforms other well-established approaches such as the non-dominated sorting genetic algorithm (NSGA-II) and the strength Pareto evolutionary algorithm 2 (SPEA2).
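Adaptive-weight fitness assignment can be sketched as follows. This is the classic adaptive-weight scheme in the style of Gen and Cheng, with weights recomputed each generation from the population's objective extremes; the interactive component of i-awEA is not reproduced:

```python
def adaptive_weight_fitness(objectives):
    """Adaptive-weight fitness for minimisation: each objective is
    rescaled by the current population's extremes, so the weights adapt
    as the population converges."""
    m = len(objectives[0])
    zmax = [max(f[k] for f in objectives) for k in range(m)]
    zmin = [min(f[k] for f in objectives) for k in range(m)]
    fit = []
    for f in objectives:
        s = 0.0
        for k in range(m):
            span = (zmax[k] - zmin[k]) or 1.0
            s += (zmax[k] - f[k]) / span  # closer to the ideal scores higher
        fit.append(s)
    return fit
```

Because the extremes are taken from the current population, the effective weighting shifts automatically toward whichever objectives still show spread.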
SAGE II inversion algorithm. [Stratospheric Aerosol and Gas Experiment
NASA Technical Reports Server (NTRS)
Chu, W. P.; Mccormick, M. P.; Lenoble, J.; Brogniez, C.; Pruvost, P.
1989-01-01
The operational Stratospheric Aerosol and Gas Experiment II multichannel data inversion algorithm is described. Aerosol and ozone retrievals obtained with the algorithm are discussed. The algorithm is compared to an independently developed algorithm (Lenoble, 1989), showing that the inverted aerosol and ozone profiles from the two algorithms are similar within their respective uncertainties.
Mokeddem, Diab; Khellaf, Abdelhafid
2009-01-01
Optimal design problems are widely known for their multiple performance measures that often compete with each other. In this paper, an optimal multiproduct batch chemical plant design is presented. The design is first formulated as a multiobjective optimization problem, to be solved using the well-suited non-dominated sorting genetic algorithm (NSGA-II). NSGA-II has the capability to achieve fine tuning of variables in determining a set of non-dominated solutions distributed along the Pareto front in a single run of the algorithm. The ability of NSGA-II to identify a set of optimal solutions provides the decision-maker (DM) with a complete picture of the optimal solution space from which to make better and more appropriate choices. Then outranking with PROMETHEE II helps the decision-maker finalize the selection of a best compromise. The effectiveness of the NSGA-II method on multiobjective optimization problems is illustrated through two carefully referenced examples. PMID:19543537
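The PROMETHEE II outranking step ranks the Pareto solutions by net flow. A minimal sketch using the "usual" (strict-preference) criterion function; real applications typically use richer preference functions with indifference and preference thresholds:

```python
def promethee_ii(alternatives, weights, maximize):
    """PROMETHEE II net outranking flows with the 'usual' preference
    function (1 if strictly better on a criterion, else 0)."""
    n = len(alternatives)

    def pref(a, b):  # weighted preference degree of a over b
        s = 0.0
        for k, w in enumerate(weights):
            better = a[k] > b[k] if maximize[k] else a[k] < b[k]
            s += w if better else 0.0
        return s

    phi = []
    for i in range(n):
        plus = sum(pref(alternatives[i], alternatives[j])
                   for j in range(n) if j != i) / (n - 1)
        minus = sum(pref(alternatives[j], alternatives[i])
                    for j in range(n) if j != i) / (n - 1)
        phi.append(plus - minus)  # net flow; higher is better
    return phi
```

The alternative with the highest net flow is the recommended compromise; by construction the net flows sum to zero.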
Martínez-Álvarez, Antonio; Crespo-Cano, Rubén; Díaz-Tahoces, Ariadna; Cuenca-Asensi, Sergio; Ferrández Vicente, José Manuel; Fernández, Eduardo
2016-11-01
The retina is a very complex neural structure, which contains many different types of neurons interconnected with great precision, enabling sophisticated conditioning and coding of the visual information before it is passed via the optic nerve to higher visual centers. The encoding of visual information is one of the basic questions in visual and computational neuroscience and is also of seminal importance in the field of visual prostheses. In this framework, it is essential for artificial retina systems to function in a way as similar as possible to biological retinas. This paper proposes an automatic evolutionary multi-objective strategy based on the NSGA-II algorithm for tuning retina models. Four metrics were adopted to guide the algorithm in the search for those parameters that best approximate a synthetic retinal model's output to real electrophysiological recordings. Results show that this procedure exhibits high flexibility when different trade-offs have to be considered during the design of customized neuroprostheses. PMID:27354187
NASA Astrophysics Data System (ADS)
An, Zhao; Zhounian, Lai; Peng, Wu; Linlin, Cao; Dazhuan, Wu
2016-07-01
This paper describes the shape optimization of a low specific speed centrifugal pump at the design point. The target pump has already been manually modified on the basis of empirical knowledge. A genetic algorithm (NSGA-II) with certain enhancements is adopted to further improve its performance with respect to two goals. In order to limit the number of design variables without losing geometric information, the impeller is parametrized using a Bézier curve and a B-spline. Numerical simulation based on a Reynolds-averaged Navier-Stokes (RANS) turbulence model is performed in parallel to evaluate the flow field. A back-propagating neural network is constructed as a surrogate for performance prediction to save computing time, with initial samples selected according to an orthogonal array. Global Pareto-optimal solutions are then obtained and analysed. The results show that unexpected flow structures, such as the secondary flow on the meridian plane, have diminished or vanished in the optimized pump.
Zhang, Xuesong; Srinivasan, Raghavan; Van Liew, M.
2010-04-15
With the availability of spatially distributed data, distributed hydrologic models are increasingly used to simulate spatially varied hydrologic processes in order to understand and manage natural and human activities that affect watershed systems. Multi-objective optimization methods have been applied to calibrate distributed hydrologic models using observed data from multiple sites. As the time consumed by running these complex models increases substantially, selecting efficient and effective multi-objective optimization algorithms is becoming a nontrivial issue. In this study, we evaluated a multi-algorithm, genetically adaptive multi-objective method (AMALGAM) for multi-site calibration of a distributed hydrologic model, the Soil and Water Assessment Tool (SWAT), and compared its performance with two widely used evolutionary multi-objective optimization (EMO) algorithms, the Strength Pareto Evolutionary Algorithm 2 (SPEA2) and the Non-dominated Sorted Genetic Algorithm II (NSGA-II). In order to provide insights into each method's overall performance, the three methods were tested in four watersheds with various characteristics. The test results indicate that AMALGAM can consistently provide competitive or superior results compared with the other two methods. The multi-method search framework of AMALGAM, which can flexibly and adaptively utilize multiple optimization algorithms, makes it a promising tool for multi-site calibration of the distributed SWAT. For practical use of AMALGAM, it is suggested to implement this method in multiple trials with a relatively small number of model runs rather than running it once with long iterations. In addition, incorporating different multiobjective optimization algorithms and multi-mode search operators into AMALGAM deserves further research.
Algorithmic Questions for Linear Algebraic Groups. II
NASA Astrophysics Data System (ADS)
Sarkisjan, R. A.
1982-04-01
It is proved that, given a linear algebraic group defined over an algebraic number field and satisfying certain conditions, there exists an algorithm which determines whether or not two double cosets of a special type coincide in its adele group, and which enumerates all such double cosets. This result is applied to the isomorphism problem for finitely generated nilpotent groups, and also to other problems. Bibliography: 18 titles.
NASA Astrophysics Data System (ADS)
Long, Kim Chenming
application of the proposed algorithm, TSEA, with several state-of-the-art multiobjective optimization algorithms reveals that TSEA outperforms these algorithms by providing retrofit solutions with greater reliability for the same costs (i.e., closer to the Pareto-optimal front) after the algorithms are executed for the same number of generations. This research also demonstrates that TSEA competes with and, in some situations, outperforms state-of-the-art multiobjective optimization algorithms such as NSGA II and SPEA 2 when applied to classic bicriteria test problems in the technical literature and other complex, sizable real-world applications. The successful implementation of TSEA contributes to the safety of aeronautical structures by providing a systematic way to guide aircraft structural retrofitting efforts, as well as a potentially useful algorithm for a wide range of multiobjective optimization problems in engineering and other fields.
SAGE Version 7.0 Algorithm: Application to SAGE II
NASA Technical Reports Server (NTRS)
Damadeo, R. P.; Zawodny, J. M.; Thomason, L. W.; Iyer, N.
2013-01-01
This paper details the Stratospheric Aerosol and Gas Experiments (SAGE) version 7.0 algorithm and how it is applied to SAGE II. Changes made between the previous (v6.2) and current (v7.0) versions are described and their impacts on the data products explained for both coincident event comparisons and time-series analysis. Users of the data will notice a general improvement in all of the SAGE II data products, which are now in better agreement with more modern data sets (e.g. SAGE III) and more robust for use with trend studies.
Proposal of Functional-Specialization Multi-Objective Real-Coded Genetic Algorithm: FS-MOGA
NASA Astrophysics Data System (ADS)
Hamada, Naoki; Tanaka, Masaharu; Sakuma, Jun; Kobayashi, Shigenobu; Ono, Isao
This paper presents a Genetic Algorithm (GA) for multi-objective function optimization. To find a precise and widely distributed set of solutions in difficult multi-objective function optimization problems that have multimodality and a curved Pareto-optimal set, a GA is required to exhibit conflicting behaviors in the early and last stages of the search. That is, in the early stage of search, the GA should perform local-Pareto-optima-overcoming search, which aims to overcome local Pareto-optima and converge the population to promising areas of the decision variable space. On the other hand, in the last stage of search, the GA should perform Pareto-frontier-covering search, which aims to spread the population along the Pareto-optimal set. NSGA-II and SPEA2, the most widely used conventional methods, have problems with both local-Pareto-optima-overcoming and Pareto-frontier-covering search. In local-Pareto-optima-overcoming search, their selection pressure is too high to maintain the diversity needed for overcoming local Pareto-optima. In Pareto-frontier-covering search, their extrapolation-directed sampling abilities are insufficient to spread the population, and they cannot sample along the Pareto-optimal set properly. To resolve the above problems, the proposed method adaptively switches between two search strategies, each specialized for local-Pareto-optima-overcoming and Pareto-frontier-covering search, respectively. We examine the effectiveness of the proposed method using two benchmark problems. The experimental results show that our approach outperforms the conventional methods in terms of both local-Pareto-optima-overcoming and Pareto-frontier-covering search.
Nios II hardware acceleration of the epsilon quadratic sieve algorithm
NASA Astrophysics Data System (ADS)
Meyer-Bäse, Uwe; Botella, Guillermo; Castillo, Encarnacion; García, Antonio
2010-04-01
The quadratic sieve (QS) algorithm is one of the most powerful algorithms for factoring the large composite numbers used in RSA cryptographic systems. The hardware structure of the QS algorithm seems to be a good fit for FPGA acceleration. Our new ɛ-QS algorithm further simplifies the hardware architecture, making it an even better candidate for C2H acceleration. This paper presents our design results in FPGA resources and performance when implementing very long arithmetic on the Nios II microprocessor platform with C2H acceleration for different libraries (GMP, LIP, FLINT, NRMP) and QS architecture choices for factoring 32-2048 bit RSA numbers.
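The heart of QS is collecting relations x for which x² − n is smooth over a factor base. A minimal, illustrative relation collector; trial division stands in for the actual sieving, and the subsequent linear algebra step over GF(2) is omitted:

```python
import math

def smooth_relations(n, bound, count, span=1000):
    """Collect x near sqrt(n) such that x*x - n factors completely over
    the primes up to `bound` (a trial-division stand-in for QS sieving).
    Returns (x, factorization) pairs."""
    primes = [p for p in range(2, bound + 1)
              if all(p % q for q in range(2, math.isqrt(p) + 1))]
    rels, x = [], math.isqrt(n) + 1
    while len(rels) < count and x < math.isqrt(n) + span:
        v, fac = x * x - n, {}
        for p in primes:
            while v % p == 0:
                v //= p
                fac[p] = fac.get(p, 0) + 1
        if v == 1:  # fully factored over the factor base: a relation
            rels.append((x, fac))
        x += 1
    return rels
```

In the full algorithm, relations whose exponent vectors sum to zero mod 2 are combined into a congruence of squares that splits n.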
Tracking at CDF: algorithms and experience from Run I and Run II
Snider, F.D.; /Fermilab
2005-10-01
The authors describe the tracking algorithms used during Run I and Run II by CDF at the Fermilab Tevatron Collider, covering the time from about 1992 through the present, and discuss the performance of the algorithms at high luminosity. By tracing the evolution of the detectors and algorithms, they reveal some of the successful strategies used by CDF to address the problems of tracking at high luminosities.
A TCAS-II Resolution Advisory Detection Algorithm
NASA Technical Reports Server (NTRS)
Munoz, Cesar; Narkawicz, Anthony; Chamberlain, James
2013-01-01
The Traffic Alert and Collision Avoidance System (TCAS) is a family of airborne systems designed to reduce the risk of mid-air collisions between aircraft. TCAS II, the current generation of TCAS devices, provides resolution advisories that direct pilots to maintain or increase vertical separation when aircraft distance and time parameters are beyond designed system thresholds. This paper presents a mathematical model of the TCAS II Resolution Advisory (RA) logic that assumes accurate aircraft state information. Based on this model, an algorithm for RA detection is also presented. This algorithm is analogous to a conflict detection algorithm, but instead of predicting loss of separation, it predicts resolution advisories. It has been formally verified that, for a kinematic model of aircraft trajectories, this algorithm completely and correctly characterizes all encounter geometries between two aircraft that lead to a resolution advisory within a given lookahead time interval. The RA detection algorithm proposed in this paper is a fundamental component of a NASA sense-and-avoid concept for the integration of Unmanned Aircraft Systems in civil airspace.
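The conflict-detection analogy can be illustrated with a toy lookahead predicate. This sketch simply extrapolates relative separations linearly and checks hypothetical thresholds over sampled times; it is not the TCAS II RA logic, whose threshold tables and vertical maneuver tests are far more involved:

```python
def advisory_predicted(dz0, vz, dh0, vh, z_thr, h_thr, lookahead):
    """Toy analogue of RA detection: linearly extrapolate relative
    vertical (dz) and horizontal (dh) separation and report whether both
    fall below their thresholds at some t in [0, lookahead].
    Illustrative only; units and thresholds are hypothetical."""
    steps = 1000
    for i in range(steps + 1):
        t = lookahead * i / steps
        if abs(dz0 + vz * t) < z_thr and abs(dh0 + vh * t) < h_thr:
            return True
    return False
```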
Iterative phase retrieval algorithms. Part II: Attacking optical encryption systems.
Guo, Changliang; Liu, Shi; Sheridan, John T
2015-05-20
The modified iterative phase retrieval algorithms developed in Part I [Guo et al., Appl. Opt. 54, 4698 (2015)] are applied to perform known plaintext and ciphertext attacks on amplitude encoding and phase encoding Fourier-transform-based double random phase encryption (DRPE) systems. It is shown that the new algorithms can retrieve the two random phase keys (RPKs) perfectly. The performances of the algorithms are tested by using the retrieved RPKs to decrypt a set of different ciphertexts encrypted using the same RPKs. Significantly, it is also shown that the DRPE system is, under certain conditions, vulnerable to ciphertext-only attack, i.e., in some cases an attacker can decrypt DRPE data successfully when only the ciphertext is intercepted. PMID:26192505
Optimisation in radiotherapy. II: Programmed and inversion optimisation algorithms.
Ebert, M
1997-12-01
This is the second article in a three-part examination of optimisation in radiotherapy. The previous article established the bases of optimisation in radiotherapy and the formulation of the optimisation problem. This paper outlines several algorithms that have been used in radiotherapy to search for the best irradiation strategy within the full set of possible strategies. Two principal classes of algorithm are considered: those associated with mathematical programming, which employ specific search techniques such as linear-programming-type searches or artificial intelligence, and those which seek to perform a numerical inversion of the optimisation problem, finishing with deterministic iterative inversion. PMID:9503694
Incremental refinement of a multi-user-detection algorithm (II)
NASA Astrophysics Data System (ADS)
Vollmer, M.; Götze, J.
2003-05-01
Multi-user detection is a technique proposed for mobile radio systems based on the CDMA principle, such as the upcoming UMTS. While offering an elegant solution to problems such as intra-cell interference, it demands very significant computational resources. In this paper, we present a high-level approach for reducing the required resources for performing multi-user detection in a 3GPP TDD multi-user system. This approach is based on a displacement representation of the parameters that describe the transmission system, and a generalized Schur algorithm that works on this representation. The Schur algorithm naturally leads to a highly parallel hardware implementation using CORDIC cells. It is shown that this hardware architecture can also be used to compute the initial displacement representation. It is very beneficial to introduce incremental refinement structures into the solution process, both at the algorithmic level and in the individual cells of the hardware architecture. We detail these approximations and present simulation results that confirm their effectiveness.
Measurement of the inclusive jet cross section using the midpoint algorithm in Run II at CDF
Group, Robert Craig; /Florida U.
2006-12-01
A measurement is presented of the inclusive jet cross section using the Midpoint jet clustering algorithm in five different rapidity regions. This is the first analysis to measure the inclusive jet cross section using the Midpoint algorithm in the forward region of the detector. The measurement is based on more than 1 fb⁻¹ of integrated luminosity of Run II data taken by the CDF experiment at the Fermi National Accelerator Laboratory. The results are consistent with the predictions of perturbative quantum chromodynamics.
Beam size and position measurement based on logarithm processing algorithm in HLS II
NASA Astrophysics Data System (ADS)
Chao-Cai, Cheng; Bao-Gen, Sun; Yong-Liang, Yang; Ze-Ran, Zhou; Ping, Lu; Fang-Fang, Wu; Ji-Gang, Wang; Kai, Tang; Qing, Luo; Hao, Li; Jia-Jun, Zheng; Qing-Ming, Duan
2016-04-01
A logarithm processing algorithm to measure beam transverse size and position is proposed, and preliminary experimental results from Hefei Light Source II (HLS II) are given. The algorithm is based on only 4 successive channels of the 16 anode channels of the multianode photomultiplier tube (MAPMT) R5900U-00-L16, which has a typical rise time of 0.6 ns and an effective area of 0.8 mm × 16 mm per anode channel. In the paper, we first present simulation results for the algorithm with and without channel inconsistency. We then calibrate the channel inconsistency and verify the algorithm using a general current signal processor, Libera Photon, in a low-speed scheme. Finally, we obtain turn-by-turn beam size and position and calculate the vertical tune in a high-speed scheme. The experimental results show that the measured values fit the simulation results well after channel differences are calibrated, and the fractional part of the tune in the vertical direction is 0.3628, very close to the nominal value 0.3621. Supported by National Natural Science Foundation of China (11005105, 11175173)
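For a Gaussian transverse profile sampled by four equally spaced channels, log-ratios of the channel intensities give closed forms for the beam centroid and rms size. A minimal sketch under that Gaussian assumption; the actual HLS II processing includes calibration of channel gain differences, which is omitted here:

```python
import math

def beam_size_position(i1, i2, i3, i4, d):
    """Centroid u (relative to the centre of the four channels) and rms
    size sigma from intensities of channels at -1.5d, -0.5d, 0.5d, 1.5d,
    assuming a Gaussian profile. Any common gain factor cancels in the
    ratios. Requires i2*i3 > i1*i4 (true for a Gaussian)."""
    sigma2 = 2.0 * d * d / math.log((i2 * i3) / (i1 * i4))
    u = -sigma2 * math.log(i2 / i3) / d
    return u, math.sqrt(sigma2)
```

Applied to intensities generated from a known Gaussian, these formulas recover the centroid and size exactly, which is what makes the logarithm processing attractive for turn-by-turn measurement.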
NASA Astrophysics Data System (ADS)
Watchareeruetai, Ukrit; Matsumoto, Tetsuya; Takeuchi, Yoshinori; Kudo, Hiroaki; Ohnishi, Noboru
We propose a new multi-objective genetic programming (MOGP) method for the automatic construction of image feature extraction programs (FEPs). The proposed method originates from a well-known multi-objective evolutionary algorithm (MOEA), NSGA-II. The key differences are that redundancy-regulation mechanisms are applied in three main processes of the MOGP, i.e., population truncation, sampling, and offspring generation, to improve population diversity as well as convergence rate. Experimental results indicate that the proposed MOGP-based FEP construction system outperforms the two conventional MOEAs (NSGA-II and SPEA2) on a test problem. Moreover, we compared the programs constructed by the proposed MOGP with four human-designed object recognition programs. The results show that the constructed programs are better than two of the human-designed methods and comparable with the other two for the test problem.
Genetic Algorithm Based Framework for Automation of Stochastic Modeling of Multi-Season Streamflows
NASA Astrophysics Data System (ADS)
Srivastav, R. K.; Srinivasan, K.; Sudheer, K.
2009-05-01
bootstrap (MABB) based on the explicit objective functions of minimizing the relative bias and relative root mean square error in estimating the storage capacity of the reservoir. The optimal parameter set of the hybrid model is obtained by a search over a multi-dimensional parameter space (involving simultaneous exploration of the parametric (PAR(1)) as well as the non-parametric (MABB) components). This is achieved using an efficient evolutionary-search-based optimization tool, namely the non-dominated sorting genetic algorithm II (NSGA-II). This approach helps reduce the drudgery involved in the manual selection of the hybrid model, in addition to accurately predicting the basic summary statistics, dependence structure, marginal distribution and water-use characteristics. The proposed optimization framework is used to model the multi-season streamflows of River Beaver and River Weber of the USA. For both rivers, the proposed GA-based hybrid model yields a much better prediction of the storage capacity (where simultaneous exploration of both parametric and non-parametric components is done) when compared with the MLE-based hybrid models (where the hybrid model selection is done in two stages, probably resulting in a sub-optimal model). This framework can be further extended to include different linear/non-linear hybrid stochastic models at other temporal and spatial scales as well.
NASA Astrophysics Data System (ADS)
González, Diego; Botella, Guillermo; García, Carlos; Prieto, Manuel; Tirado, Francisco
2013-12-01
This contribution focuses on the optimization of matching-based motion estimation algorithms widely used in video coding standards, using an Altera custom-instruction-based paradigm and a combination of synchronous dynamic random access memory (SDRAM) with on-chip memory in Nios II processors. A complete profile of the algorithms is obtained before the optimization, locating code hotspots, after which a custom instruction set is created and added to the specific design, enhancing the original system. In addition, every possible memory combination between on-chip memory and SDRAM has been tested to achieve the best performance. The final throughput of the complete designs is shown. This manuscript outlines a low-cost system, mapped using very large scale integration technology, which accelerates software algorithms by converting them into custom hardware logic blocks, and shows the best combination between on-chip memory and SDRAM for the Nios II processor.
The 183-WSL Fast Rain Rate Retrieval Algorithm. Part II: Validation Using Ground Radar Measurements
NASA Technical Reports Server (NTRS)
Laviola, Sante; Levizzani, Vincenzo
2014-01-01
The Water vapour Strong Lines at 183 GHz (183-WSL) algorithm is a method for the retrieval of rain rates and precipitation type classification (convective/stratiform) that makes use of the water vapor absorption lines centered at 183.31 GHz of the Advanced Microwave Sounding Unit module B (AMSU-B) and of the Microwave Humidity Sounder (MHS) flying on the NOAA-15 to NOAA-18 and NOAA-19/MetOp-A satellite series, respectively. The characteristics of this algorithm were described in Part I of this paper, together with comparisons against analogous precipitation products. The focus of Part II is the analysis of the performance of the 183-WSL technique based on surface radar measurements. The ground truth dataset consists of 2.5 years of rainfall intensity fields from the NIMROD European radar network, which covers North-Western Europe. The investigation of the 183-WSL retrieval performance is based on a twofold approach: 1) dichotomous statistics are used to evaluate the capability of the method to identify rain and no-rain clouds; 2) accuracy statistics are applied to quantify the errors in the estimation of rain rates. The results reveal that the 183-WSL technique shows good skill in the detection of rain/no-rain areas and in the quantification of rain rate intensities. The categorical analysis shows annual values of the POD, FAR and HK indices varying in the ranges 0.80-0.82, 0.33-0.36 and 0.39-0.46, respectively. The RMSE value is 2.8 millimeters per hour for the whole period, despite an overestimation in the retrieved rain rates. Of note is the distribution of the 183-WSL monthly mean rain rate with respect to radar: the seasonal fluctuations of the average rainfalls measured by radar are reproduced by the 183-WSL. However, the retrieval method appears to suffer in winter seasonal conditions, especially when the soil is partially frozen and the surface emissivity drastically changes. This fact is verified by observing the discrepancy distribution diagrams where the 183-WSL
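The categorical indices quoted above follow from a standard 2x2 rain/no-rain contingency table. A minimal sketch of the usual definitions, with the Hanssen-Kuipers (HK) score computed as POD minus the probability of false detection:

```python
def categorical_scores(hits, false_alarms, misses, correct_negatives):
    """POD, FAR and HK skill score from a 2x2 rain/no-rain
    contingency table (counts of events vs. detections)."""
    pod = hits / (hits + misses)                      # probability of detection
    far = false_alarms / (hits + false_alarms)        # false alarm ratio
    pofd = false_alarms / (false_alarms + correct_negatives)
    hk = pod - pofd                                   # Hanssen-Kuipers score
    return pod, far, hk
```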
NASA Astrophysics Data System (ADS)
Rodrigo, Deepal
2007-12-01
This dissertation introduces a novel approach for optimally operating a day-ahead electricity market not only by economically dispatching the generation resources but also by minimizing the influences of market manipulation attempts by the individual generator-owning companies while ensuring that the power system constraints are not violated. Since economic operation of the market conflicts with the individual profit maximization tactics such as market manipulation by generator-owning companies, a methodology that is capable of simultaneously optimizing these two competing objectives has to be selected. Although numerous previous studies have been undertaken on the economic operation of day-ahead markets and other independent studies have been conducted on the mitigation of market power, the operation of a day-ahead electricity market considering these two conflicting objectives simultaneously has not been undertaken previously. These facts provided the incentive and the novelty for this study. A literature survey revealed that many of the traditional solution algorithms convert multi-objective functions into either a single-objective function using weighting schemas or undertake optimization of one function at a time. Hence, these approaches do not truly optimize the multi-objectives concurrently. Due to these inherent deficiencies of the traditional algorithms, the use of alternative non-traditional solution algorithms for such problems has become popular and widely used. Of these, multi-objective evolutionary algorithms (MOEA) have received wide acceptance due to their solution quality and robustness. In the present research, three distinct algorithms were considered: a non-dominated sorting genetic algorithm II (NSGA II), a multi-objective tabu search algorithm (MOTS) and a hybrid of multi-objective tabu search and genetic algorithm (MOTS/GA). The accuracy and quality of the results from these algorithms for applications similar to the problem investigated here
NASA Astrophysics Data System (ADS)
Frolov, Vladimir; Backhaus, Scott; Chertkov, Misha
2014-10-01
In a companion manuscript (Frolov et al 2014 New J. Phys. 16 art. no.), we developed a novel optimization method for the placement, sizing, and operation of flexible alternating current transmission system (FACTS) devices to relieve transmission network congestion. Specifically, we addressed FACTS devices that provide series compensation (SC) via modification of line inductance. In this sequel manuscript, this heuristic algorithm and its solutions are explored on a number of test cases: a 30-bus test network and a realistically sized model of the Polish grid (~2700 nodes and ~3300 lines). The results from the 30-bus network are used to study the general properties of the solutions, including nonlocality and sparsity. The Polish grid is used to demonstrate the computational efficiency of the heuristics, which leverage sequential linearization of power flow constraints and cutting plane methods that take advantage of the sparse nature of the SC placement solutions. Using these approaches, the algorithm can solve the Polish transmission grid in tens of seconds. We explore the utility of the algorithm by analyzing transmission networks congested by (i) uniform load growth, (ii) multiple overloaded configurations, and (iii) sequential generator retirements.
Hlihor, Raluca Maria; Diaconu, Mariana; Leon, Florin; Curteanu, Silvia; Tavares, Teresa; Gavrilescu, Maria
2015-05-25
We investigated the bioremoval of Cd(II) in batch mode, using dead and living biomass of Trichoderma viride. Kinetic studies revealed three distinct stages of the biosorption process. The pseudo-second-order model and the Langmuir model described the kinetics and equilibrium of the biosorption process well, with a coefficient of determination R² > 0.99. The value of the mean free energy of adsorption, E, is less than 16 kJ/mol at 25 °C, suggesting that, at low temperature, the dominant process involved in Cd(II) biosorption by dead T. viride is chemical ion exchange. As the temperature increases to 40-50 °C, E values rise above 16 kJ/mol, showing that the particle diffusion mechanism could play an important role in Cd(II) biosorption. Studies of T. viride growth in Cd(II) solutions and its bioaccumulation performance showed that the living biomass was able to bioaccumulate 100% of the Cd(II) from a 50 mg/L solution at pH 6.0. The influence of pH, biomass dosage, metal concentration, contact time and temperature on the bioremoval efficiency was evaluated to further assess the biosorption capability of the dead biosorbent. These complex influences were correlated by means of a modeling procedure consisting of a data-driven approach in which the principles of artificial intelligence were applied with the help of support vector machines (SVM) combined with genetic algorithms (GA). According to our data, the optimal working conditions for the removal of 98.91% of Cd(II) by T. viride were found for an aqueous solution containing 26.11 mg/L Cd(II) as follows: pH 6.0, contact time of 3833 min, 8 g/L biosorbent, temperature 46.5 °C. The complete characterization of the bioremoval parameters indicates that T. viride is an excellent material for treating wastewater containing low concentrations of metal. PMID:25224921
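The two models named above have simple closed forms. A minimal sketch of the Langmuir isotherm and the integrated pseudo-second-order rate law; in any real use the parameters (qmax, KL, k2, qe) would come from fitting experimental data, not from this sketch:

```python
def langmuir_q(ce, qmax, kl):
    """Langmuir equilibrium uptake: q_e = qmax*KL*Ce / (1 + KL*Ce)."""
    return qmax * kl * ce / (1.0 + kl * ce)

def pseudo_second_order_q(t, qe, k2):
    """Integrated pseudo-second-order uptake at time t:
    q(t) = k2*qe^2*t / (1 + k2*qe*t), approaching qe as t grows."""
    return k2 * qe * qe * t / (1.0 + k2 * qe * t)
```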
Geophysical inversion with a neighbourhood algorithm-II. Appraising the ensemble
NASA Astrophysics Data System (ADS)
Sambridge, Malcolm
1999-09-01
Monte Carlo direct search methods, such as genetic algorithms, simulated annealing, etc., are often used to explore a finite-dimensional parameter space. They require solving the forward problem many times, that is, making predictions of observables from an earth model. The resulting ensemble of earth models represents all 'information' collected in the search process. Search techniques have been the subject of much study in geophysics; less attention has been given to the appraisal of the ensemble. Often inferences are based on only a small subset of the ensemble, and sometimes a single member. This paper presents a new approach to the appraisal problem. To our knowledge this is the first time the general case has been addressed, that is, how to infer information from a complete ensemble previously generated by any search method. The essence of the new approach is to use the information in the available ensemble to guide a resampling of the parameter space. This requires no further solving of the forward problem, but from the new 'resampled' ensemble we are able to obtain measures of resolution and trade-off in the model parameters, or any combinations of them. The new ensemble inference algorithm is illustrated on a highly non-linear waveform inversion problem. It is shown how the computation time and memory requirements scale with the dimension of the parameter space and the size of the ensemble. The method is highly parallel and may easily be distributed across several computers. Since little is assumed about the initial ensemble of earth models, the technique is applicable to a wide variety of situations. For example, it may be applied to perform 'error analysis' using the ensemble generated by a genetic algorithm, or any other direct search method.
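The core appraisal idea, reusing a previously generated ensemble without further forward solves, can be illustrated with a toy importance-resampling sketch. Note this is a simplified stand-in: the actual neighbourhood algorithm resamples via Voronoi cells of the ensemble, and the quadratic misfit below is purely synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "ensemble" from a previous direct search: models in a 2-D parameter
# space with precomputed misfits (no further forward solves are needed).
models = rng.uniform(-1, 1, size=(5000, 2))
misfit = (models[:, 0] - 0.3) ** 2 + 4 * (models[:, 1] + 0.2) ** 2

# Convert misfits to (unnormalized) posterior weights and resample.
w = np.exp(-0.5 * misfit / 0.05)
w /= w.sum()
idx = rng.choice(len(models), size=20000, replace=True, p=w)
resampled = models[idx]

# Resolution and trade-off estimates come from the resampled ensemble alone:
mean = resampled.mean(axis=0)   # parameter estimates
cov = np.cov(resampled.T)       # diagonal: resolution; off-diagonal: trade-off
print(mean, cov[0, 1])
```

The weighted resample concentrates near the low-misfit region, so sample moments of `resampled` approximate posterior resolution and parameter trade-offs without re-running the forward model.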
Berkolaiko, G.; Kuipers, J.
2013-12-15
Electronic transport through chaotic quantum dots exhibits universal behaviour which can be understood through the semiclassical approximation. Within the approximation, calculation of transport moments reduces to codifying classical correlations between scattering trajectories. These can be represented as ribbon graphs and we develop an algorithmic combinatorial method to generate all such graphs with a given genus. This provides an expansion of the linear transport moments for systems both with and without time reversal symmetry. The computational implementation is then able to progress several orders further than previous semiclassical formulae as well as those derived from an asymptotic expansion of random matrix results. The patterns observed also suggest a general form for the higher orders.
High Spectral Resolution MODIS Algorithms for Ocean Chlorophyll in Case II Waters
NASA Technical Reports Server (NTRS)
Carder, Kendall L.
2004-01-01
The Case 2 chlorophyll a algorithm is based on a semi-analytical, bio-optical model of remote sensing reflectance, R(sub rs)(lambda), where R(sub rs)(lambda) is defined as the water-leaving radiance, L(sub w)(lambda), divided by the downwelling irradiance just above the sea surface, E(sub d)(lambda,0(+)). The R(sub rs)(lambda) model (Section 3) has two free variables, the absorption coefficient due to phytoplankton at 675 nm, a(sub phi)(675), and the absorption coefficient due to colored dissolved organic matter (CDOM) or gelbstoff at 400 nm, a(sub g)(400). The R(rs) model has several parameters that are fixed or can be specified based on the region and season of the MODIS scene. These control the spectral shapes of the optical constituents of the model. R(sub rs)(lambda(sub i)) values from the MODIS data processing system are placed into the model, the model is inverted, and a(sub phi)(675), a(sub g)(400) (MOD24), and chlorophyll a (MOD21, Chlor_a_3) are computed. Algorithm development is initially focused on tropical, subtropical, and summer temperate environments, and the model is parameterized in Section 4 for three different bio-optical domains: (1) high ratios of photoprotective pigments to chlorophyll and low self-shading, which for brevity, we designate as 'unpackaged'; (2) low ratios and high self-shading, which we designate as 'packaged'; and (3) a transitional or global-average type. These domains can be identified from space by comparing sea-surface temperature to nitrogen-depletion temperatures for each domain (Section 5). Algorithm errors of more than 45% are reduced to errors of less than 30% with this approach, with the greatest effect occurring at the eastern and polar boundaries of the basins. Section 6 provides an expansion of bio-optical domains into high-latitude waters. The 'fully packaged' pigment domain is introduced in this section along with a revised strategy for implementing these variable packaging domains. Chlor_a_3 values derived semi
Noise characterization of block-iterative reconstruction algorithms: II. Monte Carlo simulations.
Soares, Edward J; Glick, Stephen J; Hoppin, John W
2005-01-01
In Soares et al. (2000), the ensemble statistical properties of the rescaled block-iterative expectation-maximization (RBI-EM) reconstruction algorithm and rescaled block-iterative simultaneous multiplicative algebraic reconstruction technique (RBI-SMART) were derived. Included in this analysis were the special cases of RBI-EM, maximum-likelihood EM (ML-EM) and ordered-subset EM (OS-EM), and the special case of RBI-SMART, SMART. Explicit expressions were found for the ensemble mean, covariance matrix, and probability density function of RBI reconstructed images, as a function of iteration number. The theoretical formulations relied on one approximation, namely that the noise in the reconstructed image was small compared to the mean image. In this paper, we evaluate the predictions of the theory by using Monte Carlo methods to calculate the sample statistical properties of each algorithm and then compare the results with the theoretical formulations. In addition, the validity of the approximation will be justified. PMID:15638190
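For reference, the ML-EM special case mentioned above has the familiar multiplicative update x ← (x / Aᵀ1) · Aᵀ(y / Ax). A minimal dense-matrix sketch follows; the system matrix and data are synthetic stand-ins, not the RBI formulation or a SPECT model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic system matrix A (projections x voxels) and known activity x_true.
A = rng.uniform(0.1, 1.0, size=(40, 10))
x_true = rng.uniform(0.5, 2.0, size=10)
y = A @ x_true                 # noise-free data for simplicity

# ML-EM: x <- x / (A^T 1) * A^T (y / (A x)); positivity is preserved.
x = np.ones(10)
sens = A.T @ np.ones(40)       # sensitivity image A^T 1
for _ in range(500):
    x *= (A.T @ (y / (A @ x))) / sens

print(np.linalg.norm(A @ x - y) / np.linalg.norm(y))
```

With consistent noise-free data the iterates stay positive and the data residual shrinks monotonically in likelihood; with noisy data one would stop early or regularize, which is where the ensemble noise statistics studied in the paper matter.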
Derek Gaston; Luanjing Guo; Glen Hansen; Hai Huang; Richard Johnson; Dana Knoll; Chris Newman; Hyeong Kae Park; Robert Podgorney; Michael Tonks; Richard Williamson
2012-09-01
This paper is the second part of a two part sequence on multiphysics algorithms and software. The first [1] focused on the algorithms; this part treats the multiphysics software framework and applications based on it. Tight coupling is typically designed into the analysis application at inception, as such an application is strongly tied to a composite nonlinear solver that arrives at the final solution by treating all equations simultaneously. The application must also take care to minimize both time and space error between the physics, particularly if more than one mesh representation is needed in the solution process. This paper presents an application framework that was specifically designed to support tightly coupled multiphysics analysis. The Multiphysics Object Oriented Simulation Environment (MOOSE) is based on the Jacobian-free Newton-Krylov (JFNK) method combined with physics-based preconditioning to provide the underlying mathematical structure for applications. The report concludes with the presentation of a host of nuclear, energy, and environmental applications that demonstrate the efficacy of the approach and the utility of a well-designed multiphysics framework.
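The JFNK idea at the heart of MOOSE, Newton steps whose Jacobian-vector products are approximated by finite differences so the Jacobian is never formed, can be sketched in a few lines. This is a generic illustration on a toy nonlinear system, not MOOSE code, and it omits the physics-based preconditioning the paper describes:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def residual(u):
    # A small nonlinear system F(u) = 0 standing in for discretized physics.
    return np.array([u[0] ** 2 + u[1] - 3.0,
                     u[0] + u[1] ** 2 - 5.0])

u = np.array([1.0, 1.0])
for _ in range(20):                       # outer Newton iterations
    F = residual(u)
    if np.linalg.norm(F) < 1e-10:
        break
    eps = 1e-7
    # Jacobian-free: J v is approximated by (F(u + eps v) - F(u)) / eps,
    # so the Krylov solver (GMRES) only ever needs residual evaluations.
    Jv = LinearOperator((2, 2),
                        matvec=lambda v: (residual(u + eps * v) - F) / eps)
    du, _ = gmres(Jv, -F, atol=1e-12)
    u = u + du

print(u)
```

Here GMRES solves each linear Newton system using only matrix-vector products, which is exactly what makes the method attractive for tightly coupled multiphysics where assembling a global Jacobian is impractical.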
Graph Theoretic Foundations of Multibody Dynamics Part II: Analysis and Algorithms.
Jain, Abhinandan
2011-10-01
This second of a two-part paper uses concepts from graph theory to obtain a deeper understanding of the mathematical foundations of multibody dynamics. The first part [7] established the block-weighted adjacency (BWA) matrix structure of spatial operators associated with serial and tree-topology multibody system dynamics, and introduced the notions of spatial kernel operators (SKO) and spatial propagation operators (SPO). This paper builds upon these connections to show that key analytical results and computational algorithms are a direct consequence of these structural properties and require minimal assumptions about the specific nature of the underlying multibody system. We formalize this by introducing SKO models for general tree-topology multibody systems. We show that key analytical results, including mass matrix factorization, inversion, and decomposition, hold for all SKO models. It is also shown that key low-order scatter/gather recursive computational algorithms follow directly from these abstract-level analytical results. Application examples are provided to illustrate the concrete use of these general results. The paper also describes a general recipe for developing SKO models. The abstract nature of SKO models allows the application of these techniques to a very broad class of multibody systems. PMID:22102791
NASA Astrophysics Data System (ADS)
Sabik, Simon
We measure the top quark mass using approximately 359 pb⁻¹ of data from pp̄ collisions at √s = 1.96 TeV at CDF Run II. We select tt̄ candidates that are consistent with two W bosons decaying to a charged lepton and a neutrino following tt̄ → W⁺W⁻bb̄ → ℓ⁺ℓ⁻νν̄bb̄. Only one of the two charged leptons is required to be identified as an electron or a muon candidate, while the other is simply a well-measured track. We use a neutrino weighting algorithm, which weights each possible neutrino direction, to reconstruct a top quark mass in each event. We compare the resulting distribution to Monte Carlo templates to obtain a top quark mass of 170.8 +6.9 −6.5 (stat) ± 4.6 (syst) GeV/c².
NASA Astrophysics Data System (ADS)
Shahriari, Mohammadreza
2016-03-01
The time-cost tradeoff problem is one of the most important and widely applicable problems in the project scheduling area. Many factors can force managers to crash the project duration: early utilization, early commissioning and operation, improving the project cash flow, avoiding unfavorable weather conditions, compensating for delays, and so on. Since extra resources must be allocated to shorten the project finishing time, and project managers intend to spend the lowest possible amount of money while achieving the maximum crashing time, both the direct and indirect costs of the project are affected, and the time value of money comes into play. When the starting activities of a project are crashed, the extra investment is tied up until the end date of the project; when the final activities are crashed, the extra investment is tied up for a much shorter period. This study presents a two-objective mathematical model for balancing project time compression against activity delays, providing a suitable decision-making tool for managers constrained by available resources and project due dates. The model is also brought closer to real-world conditions by considering a nonlinear objective function and the time value of money. The problem was solved using NSGA-II, and the effect of time compression on the non-dominated set is reported.
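The time-value argument above, that crashing an early activity ties up the extra investment for the rest of the project while crashing a late one ties it up only briefly, can be made concrete with a small discounting example. All figures (duration, rate, crash cost) are hypothetical:

```python
# Hypothetical project: 52-week duration, 0.2% weekly discount rate.
# Crashing costs $10,000 whether applied to an early or a late activity,
# but the outlay occurs at a different point in time.
rate = 0.002           # weekly discount rate
duration = 52          # project length in weeks
crash_cost = 10_000.0

def tied_up_cost(outlay_week):
    """Future value of the crash outlay at project completion:
    money spent earlier accrues more financing cost by the end date."""
    return crash_cost * (1 + rate) ** (duration - outlay_week)

early = tied_up_cost(2)    # crash a starting activity (week 2)
late = tied_up_cost(50)    # crash a finishing activity (week 50)
print(round(early, 2), round(late, 2), round(early - late, 2))
```

The same nominal outlay is roughly $1,000 more expensive in end-of-project terms when spent in week 2 than in week 50, which is why the paper's objective function must discount crash costs by when they occur.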
Biphasic indentation of articular cartilage--II. A numerical algorithm and an experimental study.
Mow, V C; Gibbs, M C; Lai, W M; Zhu, W B; Athanasiou, K A
1989-01-01
Part I (Mak et al., 1987, J. Biomechanics 20, 703-714) presented the theoretical solutions for the biphasic indentation of articular cartilage under creep and stress-relaxation conditions. In this study, using the creep solution, we developed an efficient numerical algorithm to compute all three material coefficients of cartilage in situ on the joint surface from the indentation creep experiment. With this method we determined the average values of the aggregate modulus, Poisson's ratio, and permeability for young bovine femoral condylar cartilage in situ to be HA = 0.90 MPa, vs = 0.39, and k = 0.44 x 10^-15 m^4/Ns, respectively, and those for patellar groove cartilage to be HA = 0.47 MPa, vs = 0.24, k = 1.42 x 10^-15 m^4/Ns. One surprising finding from this study is that the in situ Poisson's ratio of cartilage (0.13-0.45) may be much less than the values determined from measurements performed on excised osteochondral plugs (0.40-0.49) reported in the literature. We also found the permeability of patellar groove cartilage to be several times higher than that of femoral condylar cartilage. These findings may have important implications for understanding the functional behavior of cartilage in situ and for methods used to determine the elastic moduli of cartilage using indentation experiments. PMID:2613721
Capitanescu, F; Rege, S; Marvuglia, A; Benetto, E; Ahmadi, A; Gutiérrez, T Navarrete; Tiruta-Barna, L
2016-07-15
Empowering decision makers with cost-effective solutions for reducing the environmental burden of industrial processes, at both the design and operation stages, is nowadays a major worldwide concern. The paper addresses this issue for the sector of drinking water production plants (DWPPs), seeking optimal solutions that trade off operation cost and life cycle assessment (LCA)-based environmental impact while satisfying outlet water quality criteria. This leads to a challenging bi-objective constrained optimization problem, which relies on a computationally expensive, intricate process-modelling simulator of the DWPP and has to be solved within a limited computational budget. Since mathematical programming methods are unusable in this case, the paper examines the performance of six off-the-shelf, state-of-the-art global meta-heuristic optimization algorithms suitable for such simulation-based optimization, namely the Strength Pareto Evolutionary Algorithm (SPEA2), Non-dominated Sorting Genetic Algorithm (NSGA-II), Indicator-Based Evolutionary Algorithm (IBEA), Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D), Differential Evolution (DE), and Particle Swarm Optimization (PSO), in tackling these challenges. The optimization results reveal that good reductions in both the operating cost and the environmental impact of the DWPP can be obtained. Furthermore, NSGA-II outperforms the competing algorithms, while MOEA/D and DE perform unexpectedly poorly. PMID:27107954
Optimal Bi-Objective Redundancy Allocation for Systems Reliability and Risk Management.
Govindan, Kannan; Jafarian, Ahmad; Azbari, Mostafa E; Choi, Tsan-Ming
2016-08-01
In the big data era, systems reliability is critical to effective systems risk management. In this paper, a novel multiobjective approach, hybridizing the well-known NSGA-II algorithm with an adaptive population-based simulated annealing (APBSA) method, is developed to solve systems reliability optimization problems. In the first step, to create a good algorithm, we use a coevolutionary strategy. Since the proposed algorithm is very sensitive to parameter values, the response surface method is employed to estimate its appropriate parameters. Moreover, to examine the performance of our proposed approach, several test problems are generated, and the proposed hybrid algorithm and other commonly known approaches (i.e., MOGA, NRGA, and NSGA-II) are compared with respect to four performance measures: 1) mean ideal distance; 2) diversification metric; 3) percentage of domination; and 4) data envelopment analysis. The computational studies have shown that the proposed algorithm is an effective approach for systems reliability and risk management. PMID:25622333
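The first two performance measures listed are easy to state precisely. A hedged sketch follows; the formulas reflect common usage in the multi-objective literature, and exact definitions vary slightly between papers:

```python
import numpy as np

def mean_ideal_distance(front):
    """Average Euclidean distance of Pareto-front points from the
    ideal point (component-wise minimum, assuming minimization)."""
    front = np.asarray(front, dtype=float)
    ideal = front.min(axis=0)
    return float(np.mean(np.linalg.norm(front - ideal, axis=1)))

def diversification_metric(front):
    """Spread of the front: diagonal of its objective-space bounding box."""
    front = np.asarray(front, dtype=float)
    return float(np.linalg.norm(front.max(axis=0) - front.min(axis=0)))

front = [[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]]
print(mean_ideal_distance(front), diversification_metric(front))
```

Lower mean ideal distance indicates better convergence toward the ideal point, while a larger diversification metric indicates a wider spread of non-dominated solutions.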
Improving Limit Surface Search Algorithms in RAVEN Using Acceleration Schemes: Level II Milestone
Alfonsi, Andrea; Rabiti, Cristian; Mandelli, Diego; Cogliati, Joshua Joseph; Sen, Ramazan Sonat; Smith, Curtis Lee
2015-07-01
The RAVEN code is becoming a comprehensive tool to perform Probabilistic Risk Assessment (PRA); Uncertainty Quantification (UQ) and Propagation; and Verification and Validation (V&V). The RAVEN code is being developed to support the Risk-Informed Safety Margin Characterization (RISMC) pathway by developing an advanced set of methodologies and algorithms for use in advanced risk analysis. The RISMC approach uses system simulator codes applied to stochastic analysis tools. The fundamental idea behind this coupling approach to perturb (by employing sampling strategies) timing and sequencing of events, internal parameters of the system codes (i.e., uncertain parameters of the physics model) and initial conditions to estimate values ranges and associated probabilities of figures of merit of interest for engineering and safety (e.g. core damage probability, etc.). This approach applied to complex systems such as nuclear power plants requires performing a series of computationally expensive simulation runs. The large computational burden is caused by the large set of (uncertain) parameters characterizing those systems. Consequently, exploring the uncertain/parametric domain, with a good level of confidence, is generally not affordable, considering the limited computational resources that are currently available. In addition, the recent tendency to develop newer tools, characterized by higher accuracy and larger computational resources (if compared with the presently used legacy codes, that have been developed decades ago), has made this issue even more compelling. In order to overcome to these limitations, the strategy for the exploration of the uncertain/parametric space needs to use at best the computational resources focusing the computational effort in those regions of the uncertain/parametric space that are “interesting” (e.g., risk-significant regions of the input space) with respect the targeted Figures Of Merit (FOM): for example, the failure of the system
The Sloan Digital Sky Survey-II Supernova Survey:Search Algorithm and Follow-up Observations
Sako, Masao; Bassett, Bruce; Becker, Andrew; Cinabro, David; DeJongh, Don Frederic; Depoy, D. L.; Doi, Mamoru; Garnavich, Peter M.; Hogan, Craig J.; Holtzman, Jon; Jha, Saurabh; Konishi, Kohki; Lampeitl, Hubert; Marriner, John; Miknaitis, Gajus; Nichol, Robert C.; Prieto, Jose Luis; Richmond, Michael W.; Schneider, Donald P.; Smith, Mathew; SubbaRao, Mark; /Chicago U. /Tokyo U. /South African Astron. Observ. /Apache Point Observ. /Seoul Natl. U.
2007-09-14
The Sloan Digital Sky Survey-II Supernova Survey has identified a large number of new transient sources in a 300 deg² region along the celestial equator during its first two seasons of a three-season campaign. Multi-band (ugriz) light curves were measured for most of the sources, which include solar system objects, Galactic variable stars, active galactic nuclei, supernovae (SNe), and other astronomical transients. The imaging survey is augmented by an extensive spectroscopic follow-up program to identify SNe, measure their redshifts, and study the physical conditions of the explosions and their environment through spectroscopic diagnostics. During the survey, light curves are rapidly evaluated to provide an initial photometric type of the SNe, and a selected sample of sources are targeted for spectroscopic observations. In the first two seasons, 476 sources were selected for spectroscopic observations, of which 403 were identified as SNe. For the Type Ia SNe, the main driver for the Survey, our photometric typing and targeting efficiency is 90%. Only 6% of the photometric SN Ia candidates were spectroscopically classified as non-SN Ia, and the remaining 4% resulted in low signal-to-noise, unclassified spectra. This paper describes the search algorithm and the software, and the real-time processing of the SDSS imaging data. We also present the details of the supernova candidate selection procedures and strategies for follow-up spectroscopic and imaging observations of the discovered sources.
NASA Astrophysics Data System (ADS)
Wang, Jiong; Steinmann, Paul
2016-05-01
This is part II of this series of papers. The aim of the current paper was to solve the governing PDE system derived in part I numerically, such that the procedure of variant reorientation in a magnetic shape memory alloy (MSMA) sample can be simulated. The sample to be considered in this paper has a 3D cuboid shape and is subject to typical magnetic and mechanical loading conditions. To investigate the demagnetization effect on the sample's response, the surrounding space of the sample is taken into account. By considering the different properties of the independent variables, an iterative numerical algorithm is proposed to solve the governing system. The related mathematical formulas and some techniques facilitating the numerical calculations are introduced. Based on the results of numerical simulations, the distributions of some important physical quantities (e.g., magnetization, demagnetization field, and mechanical stress) in the sample can be determined. Furthermore, the properties of configurational force on the twin interfaces are investigated. By virtue of the twin interface movement criteria derived in part I, the whole procedure of magnetic field- or stress-induced variant reorientations in the MSMA sample can be properly simulated.
NASA Astrophysics Data System (ADS)
Chen, Duan; Leon, Arturo S.; Gibson, Nathan L.; Hosseini, Parnian
2016-01-01
Optimizing the operation of a multireservoir system is challenging due to the high dimension of the decision variables, which leads to a large and complex search space. A spectral optimization model (SOM), which transforms the decision variables from the time domain to the frequency domain, is proposed to reduce the dimensionality. The SOM couples a spectral dimensionality-reduction method, the Karhunen-Loeve (KL) expansion, with the routine of the Nondominated Sorting Genetic Algorithm (NSGA-II). The KL expansion represents the decision variables as a series of deterministic orthogonal functions with undetermined coefficients. The expansion can be truncated to a predetermined number of significant terms and, consequently, fewer coefficients. During optimization, operators of the NSGA-II (e.g., crossover) are conducted only on the coefficients of the KL expansion rather than on the large number of decision variables, significantly reducing the search space. The SOM is applied to the short-term operation of a 10-reservoir system on the Columbia River in the United States. Two scenarios are considered herein, the first with 140 decision variables and the second with 3360 decision variables. The hypervolume index is used to evaluate the optimization performance in terms of convergence and diversity. The evaluation of optimization performance is conducted for both the conventional optimization model (i.e., NSGA-II without KL) and the SOM with different numbers of KL terms. The results show that the number of decision variables can be greatly reduced in the SOM to achieve similar or better performance compared to the conventional optimization model. For the scenario with 140 decision variables, the best performance of the SOM is found with six KL terms. For the scenario with 3360 decision variables, the best performance of the SOM is obtained with 11 KL terms.
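The dimensionality-reduction step can be illustrated in isolation: build a KL (eigenfunction) basis from an ensemble of candidate schedules, then let the GA search only over a few basis coefficients. Everything below, the synthetic sinusoidal "schedules" and the noise level, is a stand-in for the paper's reservoir setup; only the 140-variable dimension and the six retained terms echo the abstract:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic ensemble of smooth "release schedules" (time series of length 140).
t = np.linspace(0, 1, 140)
ensemble = np.array([np.sin(2 * np.pi * (t + rng.uniform()))
                     + 0.1 * rng.standard_normal(140) for _ in range(200)])

# KL basis: leading eigenvectors of the ensemble covariance matrix.
mean = ensemble.mean(axis=0)
cov = np.cov(ensemble.T)
eigval, eigvec = np.linalg.eigh(cov)
order = np.argsort(eigval)[::-1]
basis = eigvec[:, order[:6]]      # keep 6 KL terms (cf. the paper's optimum)

# A GA individual is now just 6 coefficients instead of 140 decision variables.
coeffs = rng.standard_normal(6)
schedule = mean + basis @ coeffs  # reconstruct a full 140-point schedule

print(basis.shape, schedule.shape)
```

Crossover and mutation then act on the 6-vector `coeffs`; each fitness evaluation first reconstructs the full schedule from the truncated basis, exactly the coupling the SOM exploits.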
Bazargan-Lari, Mohammad Reza; Kerachian, Reza; Mansoori, Abbas
2009-03-01
The conjunctive use of surface and groundwater resources is one alternative for optimal use of available water resources in arid and semiarid regions. The optimization models proposed for conjunctive water allocation are often complicated, nonlinear, and computationally intensive, especially when different stakeholders are involved that have conflicting interests. In this article, a new conflict-resolution methodology developed for the conjunctive use of surface and groundwater resources using Nondominated Sorting Genetic Algorithm II (NSGA-II) and Young Conflict-Resolution Theory (YCRT) is presented. The proposed model is applied to the Tehran aquifer in the Tehran metropolitan area of Iran. Stakeholders in the study area have conflicting interests related to water supply with acceptable quality, pumping costs, groundwater quality, and groundwater table fluctuations. In the proposed methodology, MODFLOW and MT3D groundwater quantity and quality simulation models are linked with the NSGA-II optimization model to develop Pareto fronts among the objectives. The best solutions on the Pareto fronts are then selected using YCRT. The results of the proposed model show the significance of applying an integrated conflict-resolution approach to conjunctive use of surface and groundwater resources in the study area. PMID:18773238
NASA Astrophysics Data System (ADS)
Chen, Da-Ching; Yu, Tommy; Yao, Kung; Pottie, Gregory J.
1999-11-01
For single-input multiple-output (SIMO) systems, blind deconvolution based on second-order statistics has been shown to be promising, given that the sources and channels meet certain assumptions. In our previous paper we extended the work to multiple-input multiple-output (MIMO) systems by introducing a blind deconvolution algorithm to remove all channel dispersion, followed by a blind decorrelation algorithm to separate different sources from their instantaneous mixture. In this paper we first explore further details embedded in our algorithm. Then we present simulation results to show that our algorithm is applicable to MIMO systems excited by a broad class of signals such as speech, music, and digitally modulated symbols.
Development of a same-side kaon tagging algorithm for B^0_s decays for measuring Δm_s at CDF II
Menzemer, Stephanie; /Heidelberg U.
2006-06-01
The authors developed a Same-Side Kaon Tagging algorithm to determine the production flavor of B_s^0 mesons. Until the B_s^0 mixing frequency is clearly observed, the performance of the Same-Side Kaon Tagging algorithm cannot be measured on data but has to be determined from Monte Carlo simulation. Data and Monte Carlo agreement has been evaluated for both the B_s^0 and the high-statistics B^+ and B^0 modes. Extensive systematic studies were performed to quantify potential discrepancies between data and Monte Carlo. The final optimized tagging algorithm exploits the particle identification capability of the CDF II detector. It achieves a tagging performance of εD² = 4.0 +0.9 −1.2 on the B_s^0 → D_s^- π^+ sample. The Same-Side Kaon Tagging algorithm presented here has been applied to the ongoing B_s^0 mixing analysis, and has provided a factor of 3-4 increase in the effective statistical size of the sample. This improvement results in the first direct measurement of the B_s^0 mixing frequency.
NASA Astrophysics Data System (ADS)
Zhou, Qi; Jiang, Ping; Shao, Xinyu; Gao, Zhongmei; Cao, Longchao; Yue, Chen; Li, Xiongbin
2016-04-01
Hybrid laser-arc welding (LAW) provides an effective way to overcome problems commonly encountered during either laser or arc welding such as brittle phase formation, cracking, and porosity. The process parameters of LAW have significant effects on the bead profile and hence the quality of joint. This paper proposes an optimization methodology by combining non-dominated sorting genetic algorithm (NSGA-II) and ensemble of metamodels (EMs) to address multi-objective process parameter optimization in LAW onto 316L. Firstly, Taguchi experimental design is adopted to generate the experimental samples. Secondly, the relationships between process parameters (i.e., laser power (P), welding current (A), distance between laser and arc (D), and welding speed (V)) and the bead geometries are fitted using EMs. The comparative results show that the EMs can take advantage of the prediction ability of each stand-alone metamodel and thus decrease the risk of adopting inappropriate metamodels. Then, the NSGA-II is used to facilitate design space exploration. Besides, the main effects and contribution rates of process parameters on bead profile are analyzed. Eventually, the verification experiments of the obtained optima are carried out and compared with the un-optimized weld seam for bead geometries, weld appearances, and welding defects. Results illustrate that the proposed hybrid approach exhibits great capability of improving welding quality in LAW.
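The "ensemble of metamodels" idea, blending several surrogates so that no single poorly chosen one dominates, can be sketched with a simple inverse-error weighting. The weighting rule, the two polynomial stand-in surrogates, and the one-dimensional test response are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

# Stand-in "bead geometry" response over a single process parameter.
def true_response(x):
    return np.sin(3 * x) + 0.5 * x

x_train = np.linspace(0, 2, 12)
y_train = true_response(x_train)

# Two cheap surrogate models: low- and high-order polynomial fits.
surrogates = [np.polynomial.Polynomial.fit(x_train, y_train, deg)
              for deg in (2, 6)]

# Weight each surrogate by inverse RMSE on the training data, so the
# better-fitting metamodel contributes more to the ensemble prediction.
errs = [np.sqrt(np.mean((s(x_train) - y_train) ** 2)) for s in surrogates]
w = np.array([1.0 / (e + 1e-12) for e in errs])
w /= w.sum()

def ensemble_predict(x):
    return sum(wi * s(x) for wi, s in zip(w, surrogates))

x_test = np.linspace(0.1, 1.9, 50)
rmse = np.sqrt(np.mean((ensemble_predict(x_test) - true_response(x_test)) ** 2))
print(rmse)
```

In the paper's setting each surrogate maps the four process parameters to a bead-geometry response and NSGA-II evaluates candidates through the ensemble; the weighting keeps an inappropriate metamodel from steering the search.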
Technology Transfer Automated Retrieval System (TEKTRAN)
Because the Surface Energy Balance Algorithm for Land (SEBAL) tends to underestimate ET under conditions of advection, the model was modified by incorporating an advection component as part of the energy usable for crop evapotranspiration (ET). The modification involved the estimation of advected en...
Frolov, Vladimir; Backhaus, Scott N.; Chertkov, Michael
2014-01-14
In a companion manuscript, we developed a novel optimization method for placement, sizing, and operation of Flexible Alternating Current Transmission System (FACTS) devices to relieve transmission network congestion. Specifically, we addressed FACTS that provide Series Compensation (SC) via modification of line inductance. In this manuscript, this heuristic algorithm and its solutions are explored on a number of test cases: a 30-bus test network and a realistically-sized model of the Polish grid (~2700 nodes and ~3300 lines). The results on the 30-bus network are used to study the general properties of the solutions including non-locality and sparsity. The Polish grid is used as a demonstration of the computational efficiency of the heuristics that leverages sequential linearization of power flow constraints and cutting plane methods that take advantage of the sparse nature of the SC placement solutions. Using these approaches, the algorithm is able to solve an instance of the Polish grid in tens of seconds. We explore the utility of the algorithm by analyzing transmission networks congested by (a) uniform load growth, (b) multiple overloaded configurations, and (c) sequential generator retirements.
Stankovski, Z.
1995-12-31
The collision probability method in neutron transport, as applied to 2D geometries, consumes a great amount of computer time: for a typical 2D assembly calculation, about 90% of the computing time is spent in the collision probability evaluations. Consequently, RZ or 3D calculations become prohibitive. In this paper the author presents a simple but efficient parallel algorithm based on the message-passing host/node programming model. Parallelization was applied to the energy group treatment. Such an approach permits parallelization of the existing code with only limited modifications. Sequential/parallel computer portability is preserved, which is a necessary condition for an industrial code. Sequential performance is also preserved. The algorithm is implemented on a CRAY 90 coupled to a 128-processor T3D computer, a 16-processor IBM SP1, and a network of workstations, using the public-domain PVM library. The tests were executed for a 2D geometry with the standard 99-group library. All results were very satisfactory, the best being obtained with the IBM SP1. Because of the heterogeneity of the workstation network, high performance was not expected for that architecture. The same source code was used for all computers. A more impressive advantage of this algorithm will appear in the calculations of the SAPHYR project (with the future fine multigroup library of about 8000 groups) on a massively parallel computer using several hundred processors.
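The host/node pattern over energy groups can be sketched as a host distributing independent group tasks to a worker pool and collecting the results. This is only an illustrative Python sketch of the pattern (the original code used Fortran with PVM), and `treat_group` is a hypothetical stand-in for the per-group collision probability work:

```python
from concurrent.futures import ThreadPoolExecutor

def treat_group(task):
    """Hypothetical stand-in for the collision-probability work of one group."""
    group, width = task
    return group, sum(i * width for i in range(100))

def host(tasks, nworkers=4):
    """The 'host' distributes energy-group tasks to workers, gathers results."""
    with ThreadPoolExecutor(max_workers=nworkers) as pool:
        return dict(pool.map(treat_group, tasks))

tasks = [(g, 0.1 * (g + 1)) for g in range(8)]   # 8 energy groups
results = host(tasks)
print(results[0])  # work result for energy group 0
```

Because groups are treated independently, the only code change relative to a sequential loop is the dispatch/collect step, which mirrors the abstract's claim that only limited modifications to the existing code were required.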
NASA Astrophysics Data System (ADS)
Stone, James M.; Norman, Michael L.
1992-06-01
In this, the second of a series of three papers, we continue a detailed description of ZEUS-2D, a numerical code for the simulation of fluid dynamical flows in astrophysics including a self-consistent treatment of the effects of magnetic fields and radiation transfer. In this paper, we give a detailed description of the magnetohydrodynamical (MHD) algorithms in ZEUS-2D. The recently developed constrained transport (CT) algorithm is implemented for the numerical evolution of the components of the magnetic field for MHD simulations. This formalism guarantees the numerically evolved field components will satisfy the divergence-free constraint at all times. We find, however, that the method used to compute the electromotive forces must be chosen carefully to propagate accurately all modes of MHD wave families (in particular shear Alfvén waves). A new method of computing the electromotive force is developed using the method of characteristics (MOC). It is demonstrated through the results of an extensive series of MHD test problems that the resulting hybrid MOC-CT method provides for the accurate evolution of all modes of MHD wave families.
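The divergence-free property of constrained transport comes from differencing both face-centred field components from the same corner EMFs, so the divergence increments cancel exactly. A minimal 2D sketch (not the ZEUS-2D implementation; the random EMF is an arbitrary stand-in) shows the discrete divergence staying at round-off level:

```python
import numpy as np

N, dx, dy, dt = 16, 1.0, 1.0, 0.1
rng = np.random.default_rng(0)

# Staggered (face-centred) field components, initially divergence-free
bx = np.zeros((N + 1, N))                  # Bx on x-faces
by = np.zeros((N, N + 1))                  # By on y-faces
ez = rng.standard_normal((N + 1, N + 1))   # EMF on cell corners (arbitrary)

def divergence(bx, by):
    """Discrete divergence of B, one value per cell."""
    return (bx[1:, :] - bx[:-1, :]) / dx + (by[:, 1:] - by[:, :-1]) / dy

# Constrained-transport update: both components are differenced from the
# SAME corner EMFs, so each cell's divergence increment cancels exactly.
bx += dt * (ez[:, 1:] - ez[:, :-1]) / dy
by -= dt * (ez[1:, :] - ez[:-1, :]) / dx

print(np.max(np.abs(divergence(bx, by))))  # divergence-free to round-off
```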
Yang, Wei; Chen, Jin; Matsushita, Bunkei
2009-01-01
In the present study, a novel retrieval method for estimating chlorophyll-a concentration in case II waters based on a bio-optical model was proposed and tested with data measured in the laboratory. A series of reflectance spectra, for which the concentration of each sample constituent (for example chlorophyll-a, NPSS, etc.) was obtained from accurate experiments, were used to calculate the absorption and backscattering coefficients of the constituents of the case II waters. Then the non-negative least squares method was applied to calculate the concentrations of chlorophyll-a and non-phytoplankton suspended sediments (NPSS). Green algae were first collected from Lake Kasumigaura in Japan and then cultured in the laboratory. The reflectance spectra of waters with different amounts of phytoplankton and NPSS were measured in a dark room using a FieldSpec Pro VNIR (Analytical Spectral Devices Inc., Boulder, CO, USA). In order to validate whether this method can be applied to multispectral data (for example Landsat TM), the spectra measured in the laboratory were resampled to Landsat TM bands 1, 2, 3 and 4. Different combinations of TM bands were compared to derive the most appropriate wavelengths for detecting chlorophyll-a in case II water for green algae. The results indicated that the combination of TM bands 2, 3 and 4 achieved much better accuracy than other combinations, and the estimated concentration of chlorophyll-a was significantly more accurate than that of empirical methods. It is expected that this method can be directly applied to real remotely sensed images because it is based on a bio-optical model. PMID:19385201
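The unmixing step described here, recovering constituent concentrations from measured coefficients under a non-negativity constraint, can be sketched with non-negative least squares. The two-constituent "signatures" below are invented for illustration, not measured optical coefficients:

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical absorption/backscattering signatures of two constituents
# (stand-ins for chlorophyll-a and NPSS) sampled at four wavelengths.
A = np.array([[0.9, 0.1],
              [0.7, 0.3],
              [0.4, 0.6],
              [0.2, 0.8]])
true_conc = np.array([2.0, 5.0])   # chl-a, NPSS
b = A @ true_conc                  # noise-free mixed signal

conc, residual = nnls(A, b)        # least squares with conc >= 0 enforced
print(conc)                        # recovers values close to [2.0, 5.0]
```

The non-negativity constraint is what keeps the retrieved concentrations physically meaningful when the signatures are nearly collinear or the data are noisy.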
Algorithms and Algorithmic Languages.
ERIC Educational Resources Information Center
Veselov, V. M.; Koprov, V. M.
This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…
Han, Juhong; Wang, You; Cai, He; An, Guofei; Zhang, Wei; Xue, Liangping; Wang, Hongyuan; Zhou, Jie; Jiang, Zhigang; Gao, Ming
2015-04-01
With high efficiency and small thermally-induced effects in the near-infrared wavelength region, a diode-pumped alkali laser (DPAL) is regarded as combining the major advantages of solid-state and gas lasers while obviating their main disadvantages. Studying the temperature distribution in the cross-section of an alkali-vapor cell is critical to realizing high-powered DPAL systems for both static and flowing states. In this report, a theoretical algorithm has been built to investigate the features of a flowing-gas DPAL system by uniting procedures in kinetics, heat transfer, and fluid dynamics. The thermal features and output characteristics have been simultaneously obtained for different gas velocities. The results demonstrate the great potential of DPALs for extremely high-powered laser operation. PMID:25968778
NASA Astrophysics Data System (ADS)
Beltrame, Ludovica; Carbonin, Daniele; Galelli, Stefano; Castelletti, Andrea
2014-05-01
Population growth, water scarcity and climate change are three major factors making the understanding of variations in water availability increasingly important. Reliable medium-to-long range forecasts of streamflows are therefore essential to the development of water management policies. To this purpose, recent modelling efforts have been dedicated to seasonal and inter-annual streamflow forecasts based on the teleconnection between "at-site" hydro-meteorological processes and low-frequency climate fluctuations, such as the El Niño Southern Oscillation (ENSO). This work proposes a novel procedure for first detecting the impact of ENSO on hydro-meteorological processes at the catchment scale, and then assessing the potential of ENSO indicators for building medium-to-long range statistical streamflow prediction models. The core of this procedure is the Iterative Input variable Selection (IIS) algorithm, which is employed to find the most relevant forcings of streamflow variability and derive predictive models based on the selected inputs. The procedure is tested on the Columbia (USA) and Williams (Australia) Rivers, where the ENSO influence has been well documented, and then adopted on the unexplored Red River basin (Vietnam). Results show that the IIS outcomes on the Columbia and Williams Rivers are consistent with the results of previous studies, and that ENSO indicators can be effectively used to enhance the capabilities of streamflow forecast models. The experiments on the Red River basin show that the ENSO influence is less pronounced, inducing little effect on the basin's hydro-meteorological processes.
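Input variable selection of this general kind can be illustrated with a greedy forward-selection sketch: at each step, add the candidate input that most reduces the residual error of a linear model. This is a simplified stand-in for illustration, not the actual IIS ranking procedure (which uses model-based ranking rather than exhaustive refits):

```python
import numpy as np

def forward_select(X, y, k):
    """Greedy forward input selection for a linear model.

    At each step, the candidate column of X that most reduces the
    residual sum of squares is added to the selected set.
    """
    n, p = X.shape
    selected = []
    for _ in range(k):
        best, best_rss = None, np.inf
        for j in range(p):
            if j in selected:
                continue
            cols = X[:, selected + [j]]
            beta, *_ = np.linalg.lstsq(cols, y, rcond=None)
            rss = float(np.sum((y - cols @ beta) ** 2))
            if rss < best_rss:
                best, best_rss = j, rss
        selected.append(best)
    return selected

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))     # 5 candidate forcings
y = 3.0 * X[:, 2] - 2.0 * X[:, 4]     # only inputs 2 and 4 actually matter
print(forward_select(X, y, 2))        # → [2, 4]
```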
An efficient hybrid approach for multiobjective optimization of water distribution systems
NASA Astrophysics Data System (ADS)
Zheng, Feifei; Simpson, Angus R.; Zecchin, Aaron C.
2014-05-01
An efficient hybrid approach for the design of water distribution systems (WDSs) with multiple objectives is described in this paper. The objectives are the minimization of the network cost and maximization of the network resilience. A self-adaptive multiobjective differential evolution (SAMODE) algorithm has been developed, in which control parameters are automatically adapted by means of evolution instead of the presetting of fine-tuned parameter values. In the proposed method, a graph algorithm is first used to decompose a looped WDS into a shortest-distance tree (T) or forest, and chords (Ω). The original two-objective optimization problem is then approximated by a series of single-objective optimization problems of the T to be solved by nonlinear programming (NLP), thereby providing an approximate Pareto optimal front for the original whole network. Finally, the solutions at the approximate front are used to seed the SAMODE algorithm to find an improved front for the original entire network. The proposed approach is compared with two other conventional full-search optimization methods (the SAMODE algorithm and the NSGA-II) that seed the initial population with purely random solutions based on three case studies: a benchmark network and two real-world networks with multiple demand loading cases. Results show that (i) the proposed NLP-SAMODE method consistently generates better-quality Pareto fronts than the full-search methods with significantly improved efficiency; and (ii) the proposed SAMODE algorithm (no parameter tuning) exhibits better performance than the NSGA-II with calibrated parameter values in efficiently offering optimal fronts.
Optimizing an experimental design for a CSEM experiment: methodology and synthetic tests
NASA Astrophysics Data System (ADS)
Roux, E.; Garcia, X.
2014-04-01
Optimizing an experimental design is a compromise between maximizing the information we get about the target and limiting the cost of the experiment, given a wide range of constraints. We present a statistical algorithm for experiment design that combines linearized inverse theory with a stochastic optimization technique. Linearized inverse theory is used to quantify the quality of a given experiment design, while a genetic algorithm (GA) enables us to examine a wide range of possible surveys. The particularity of our algorithm is the use of the multi-objective GA NSGA-II, which searches for designs that fit several objective functions (OFs) simultaneously. This ability of NSGA-II helps us define an experiment design that focuses on a specified target area. We present a test of our algorithm using a 1-D electrical subsurface structure. The model represents a simple but realistic scenario in the context of the CO2 sequestration that motivates this study. Our first synthetic test using a single OF shows that a limited number of well-distributed observations from a chosen design have the potential to resolve the given model. This synthetic test also points out the importance of a well-chosen OF, depending on the target. In order to improve these results, we show how combining two OFs using a multi-objective GA enables us to determine an experimental design that maximizes information about the reservoir layer. Finally, we present several tests of our statistical algorithm in more challenging environments by exploring the influence of noise, specific site characteristics, and its potential for reservoir monitoring.
NASA Astrophysics Data System (ADS)
Volk, M.; Lautenbach, S.; Strauch, M.; Whittaker, G. W.
2012-04-01
Worldwide, increasing bioenergy production is on the political agenda. It is well known that bioenergy production comes at a cost: several trade-offs with food production, water quality and quantity issues, biodiversity and ecosystem services are known. However, a quantification of these trade-offs is still missing. Hence, our study presents an analysis of trade-offs between water availability, water quality, bioenergy production and food production in a Central German agricultural catchment. Our analysis is based on SWAT and a multi-objective genetic algorithm (NSGA-II). The genetic algorithm is used to find Pareto-optimal configurations of crop rotation schemes. Pareto-optimality describes solutions in which one objective cannot be improved without degrading other objectives. This allows us to quantify the costs associated with several levels of increased bioenergy production and to derive recommendations for policy makers.
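Pareto-optimality as described above, where no objective can be improved without worsening another, can be made concrete with a small non-dominated filter. The objective values below are invented for illustration:

```python
def pareto_front(points):
    """Return the non-dominated points, assuming both objectives are minimized.

    A point p dominates q if p is no worse in every objective and strictly
    better in at least one.
    """
    def dominates(p, q):
        return (all(a <= b for a, b in zip(p, q))
                and any(a < b for a, b in zip(p, q)))

    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Toy trade-off points, e.g. (water-quality cost, bioenergy shortfall)
points = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]
print(pareto_front(points))   # → [(1, 5), (2, 3), (4, 1)]
```

Here (3, 4) is dominated by (2, 3) and (5, 5) by (1, 5); the three survivors are exactly the configurations where improving one objective must worsen the other.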
Time-response-based evolutionary optimization
NASA Astrophysics Data System (ADS)
Avigad, Gideon; Goldvard, Alex; Salomon, Shaul
2015-04-01
Solutions to engineering problems are often evaluated by considering their time responses; thus, each solution is associated with a function. To avoid optimizing the functions, such optimization is usually carried out by setting auxiliary objectives (e.g. minimal overshoot). Therefore, in order to find different optimal solutions, alternative auxiliary optimization objectives may have to be defined prior to optimization. In the current study, a new approach is suggested that avoids the need to define auxiliary objectives. An algorithm is suggested that enables the optimization of solutions according to their transient behaviours. For this optimization, the functions are sampled and the problem is posed as a multi-objective problem. The recently introduced algorithm NSGA-II-PSA is adopted and tailored to solve it. Mathematical as well as engineering problems are utilized to explain and demonstrate the approach and its applicability to real life problems. The results highlight the advantages of avoiding the definition of artificial objectives.
NASA Astrophysics Data System (ADS)
Dhingra, Sunil; Bhushan, Gian; Dubey, Kashyap Kumar
2014-03-01
The present work studies and identifies the different variables that affect the output parameters involved in a single cylinder direct injection compression ignition (CI) engine using jatropha biodiesel. Response surface methodology based on central composite design (CCD) is used to design the experiments. Mathematical models are developed for combustion parameters (brake specific fuel consumption (BSFC) and peak cylinder pressure (Pmax)), the performance parameter brake thermal efficiency (BTE), and emission parameters (CO, NOx, unburnt HC and smoke) using regression techniques. These regression equations are further utilized for simultaneous optimization of combustion (BSFC, Pmax), performance (BTE) and emission (CO, NOx, HC, smoke) parameters. As the objective is to maximize BTE and minimize BSFC, Pmax, CO, NOx, HC and smoke, a multiobjective optimization problem is formulated. The non-dominated sorting genetic algorithm-II is used to predict the Pareto optimal sets of solutions. Experiments are performed at suitable optimal solutions for predicting the combustion, performance and emission parameters to check the adequacy of the proposed model. The Pareto optimal sets of solutions can be used as guidelines for end users to select the optimal combination of engine output and emission parameters depending upon their own requirements.
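Fitting a regression model to designed experimental samples, as done here, can be sketched with an ordinary least-squares fit of a second-order response surface. The single input variable and the "experimental" responses below are invented stand-ins, not engine data:

```python
import numpy as np

# Hypothetical CCD-style sample points for one coded input variable x,
# with responses y following a quadratic law (stand-in for e.g. BSFC).
x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
y = 0.5 * x**2 - 0.2 * x + 3.0

# Second-order response surface: y ≈ b2*x^2 + b1*x + b0
A = np.column_stack([x**2, x, np.ones_like(x)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coef)   # recovers [0.5, -0.2, 3.0]
```

In the multi-variable case the design matrix simply gains columns for each linear, quadratic, and interaction term; the fitted equations can then be handed to an optimizer such as NSGA-II.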
Multi-Disciplinary Design Optimization of Hypersonic Air-Breathing Vehicle
NASA Astrophysics Data System (ADS)
Wu, Peng; Tang, Zhili; Sheng, Jianda
2016-06-01
A 2D hypersonic vehicle shape with an idealized scramjet is designed at a cruise regime: Mach number (Ma) = 8.0, angle of attack (AOA) = 0 deg and altitude (H) = 30 km. Then a multi-objective design optimization of the 2D vehicle is carried out using the Pareto non-dominated sorting genetic algorithm II (NSGA-II). In the optimization process, the flow around the air-breathing vehicle is simulated by the inviscid Euler equations using FLUENT software, and the combustion in the combustor is modeled by a methodology based on the well-known combined effects of area-varying pipe flow and heat-transfer pipe flow. Optimization results reveal tradeoffs among the total pressure recovery coefficient of the forebody, the lift-to-drag ratio of the vehicle, the specific impulse of the scramjet engine, and the maximum temperature on the surface of the vehicle.
NASA Astrophysics Data System (ADS)
Ghezavati, V. R.; Beigi, M.
2016-06-01
During the last decade, stringent pressures from environmental and social requirements have spurred interest in designing reverse logistics (RL) networks. The success of a logistics system may depend on the decisions of facility locations and vehicle routings. The location-routing problem (LRP) simultaneously locates the facilities and designs the travel routes for vehicles among established facilities and existing demand points. In this paper, the location-routing problem with time windows (LRPTW), a homogeneous fleet type, and the design of a multi-echelon, capacitated reverse logistics network are considered, which may arise in many real-life situations in logistics management. Our proposed RL network consists of hybrid collection/inspection centers, recovery centers and disposal centers. Here, we present a new bi-objective mathematical programming (BOMP) formulation for the LRPTW in reverse logistics. Since this type of problem is NP-hard, the non-dominated sorting genetic algorithm II (NSGA-II) is proposed to obtain the Pareto frontier for the given problem. Several numerical examples are presented to illustrate the effectiveness of the proposed model and algorithm. The present work is also an effort to effectively implement the ɛ-constraint method in GAMS software for producing the Pareto-optimal solutions of a BOMP. The results of the proposed algorithm have been compared with the ɛ-constraint method. The computational results show that the ɛ-constraint method is able to solve small-size instances to optimality within reasonable computing times, while for medium-to-large-sized problems the proposed NSGA-II works better than the ɛ-constraint method.
NASA Astrophysics Data System (ADS)
Chang, Fi-John; Tsai, Wen-Ping; Chang, Li-Chiu
2016-04-01
Water resources development is very challenging in Taiwan due to its diverse geographic environment and climatic conditions. To pursue sustainable water resources development, rationality and integrity are essential for water resources planning. River water quality and flow regimes are closely related to each other and affect river ecosystems simultaneously. This study aims to explore the complex impacts of water quality and flow regimes on the fish community in order to comprehend the situation of the eco-hydrological system in the Danshui River of northern Taiwan. To make an effective and comprehensive strategy for sustainable water resources management, this study first models fish diversity by implementing a hybrid artificial neural network (ANN) based on long-term observational heterogeneity data of water quality, stream flow and fish species in the river. Then we use stream flow to estimate the loss of dissolved oxygen based on back-propagation neural networks (BPNNs). Finally, the non-dominated sorting genetic algorithm II (NSGA-II) is established for river flow management of the Shihmen Reservoir, the main reservoir in this study area. In addition to satisfying the water demands of human beings and ecosystems, we also consider water quality in river flow management. The ecosystem requirement takes the form of maximizing fish diversity, which can be estimated by the hybrid ANN. The human requirement is to provide a higher satisfaction degree of water supply, while the water quality requirement is to reduce the loss of dissolved oxygen in the river among flow stations. The results demonstrate that the proposed methodology can offer diversified alternative strategies for reservoir operation and improve reservoir operation strategies for producing downstream flows that could better meet both human and ecosystem needs as well as maintain river water quality. Keywords: Artificial intelligence (AI), Artificial neural networks (ANNs), Non
NASA Astrophysics Data System (ADS)
Cheng, C. L.
2015-12-01
Investigation on Reservoir Operation of Agricultural Water Resources Management for Drought Mitigation. Chung-Lien Cheng, Wen-Ping Tsai, Fi-John Chang*, Department of Bioenvironmental Systems Engineering, National Taiwan University, Da-An District, Taipei 10617, Taiwan, ROC. Corresponding author: Fi-John Chang (changfj@ntu.edu.tw). Abstract: In Taiwan, population growth and economic development have led to considerable and increasing demands for natural water resources in the last decades. Under such conditions, water shortage problems have frequently occurred in northern Taiwan in recent years, such that water is usually transferred from irrigation sectors to public sectors during drought periods. Facing the uneven spatial and temporal distribution of water resources and the problems of increasing water shortages, it is a primary and critical issue to simultaneously satisfy multiple water uses through adequate reservoir operations for sustainable water resources management. Therefore, we intend to build an intelligent reservoir operation system for the assessment of agricultural water resources management strategy in response to food security during drought periods. This study first uses the grey system to forecast the agricultural water demand during February and April for assessing future agricultural water demands. In the second part, we build an intelligent water resources system by using the non-dominated sorting genetic algorithm II (NSGA-II), an optimization tool, to search the water allocation series based on the different water demand scenarios created in the first part, optimizing the water supply operation for different water sectors. The results can serve as a reference guide for adequate agricultural water resources management during drought periods. Keywords: Non-dominated sorting genetic algorithm-II (NSGA-II); Grey system; Optimization; Agricultural water resources management.
Multi-objective optimization for deepwater dynamic umbilical installation analysis
NASA Astrophysics Data System (ADS)
Yang, HeZhen; Wang, AiJun; Li, HuaJun
2012-08-01
We suggest a method of multi-objective optimization based on an approximation model for dynamic umbilical installation. The optimization aims to find the most cost-effective size, quantity and location of buoyancy modules for umbilical installation while maintaining structural safety. The approximation model is constructed by design of experiment (DOE) sampling and is utilized to solve the problem of time-consuming analyses. Non-linear dynamic analyses considering environmental loadings are executed at the sample points from the DOE. The Non-dominated Sorting Genetic Algorithm (NSGA-II) is employed to obtain the Pareto solution set through an evolutionary optimization process. Intuitionistic fuzzy set theory is applied to select the best compromise solution from the Pareto set. The optimization results indicate that this strategy, combining an approximation model with a multiple attribute decision-making method, is valid and provides the optimal deployment method for deepwater dynamic umbilical buoyancy modules.
GEOFIM: A WebGIS application for integrated geophysical modeling in active volcanic regions
NASA Astrophysics Data System (ADS)
Currenti, Gilda; Napoli, Rosalba; Sicali, Antonino; Greco, Filippo; Negro, Ciro Del
2014-09-01
We present GEOFIM (GEOphysical Forward/Inverse Modeling), a WebGIS application for integrated interpretation of multiparametric geophysical observations. It has been developed to jointly interpret scalar and vector magnetic data, gravity data, as well as geodetic data from GPS, tiltmeter, strainmeter and InSAR observations, recorded in active volcanic areas. GEOFIM gathers a library of analytical solutions, which provides an estimate of the geophysical signals due to perturbations in the thermal and stress state of the volcano. The integrated geophysical modeling can be performed by simple trial-and-error forward modeling or by an inversion procedure based on the NSGA-II algorithm. The software's capability was tested on the multiparametric data set recorded during the 2008-2009 Etna flank eruption onset. The results encourage exploiting this approach to develop a near-real-time warning system for a quantitative model-based assessment of geophysical observations in areas where different parameters are routinely monitored.
A parametric optimization procedure for the suction system of reciprocating compressors
NASA Astrophysics Data System (ADS)
Ferreira, W. M.; Silva, E.; Deschamps, C. J.
2015-08-01
The design of the suction system of compressors is of fundamental importance for efficiency and reliability. This paper reports a method developed to optimize the suction system of a reciprocating compressor, by using the genetic algorithm NSGA-II. The isentropic and volumetric efficiencies are used as objective functions, while the bending fatigue stress is used as a constraint to meet valve reliability. A simulation model of the compression cycle was coupled to the optimization procedure, with correlations for flow and force effective areas in terms of geometric parameters of the suction valve. Valve dynamics was numerically solved via the finite element method. The proposed optimization procedure was applied to a reciprocating compressor adopted for household refrigeration, identifying suction system geometries more efficient than the original design.
Fatigue design of a cellular phone folder using regression model-based multi-objective optimization
NASA Astrophysics Data System (ADS)
Kim, Young Gyun; Lee, Jongsoo
2016-08-01
In a folding cellular phone, the folding device is repeatedly opened and closed by the user, which eventually results in fatigue damage, particularly to the front of the folder. Hence, it is important to improve the safety and endurance of the folder while also reducing its weight. This article presents an optimal design for the folder front that maximizes its fatigue endurance while minimizing its thickness. Design data for analysis and optimization were obtained experimentally using a test jig. Multi-objective optimization was carried out using a nonlinear regression model. Three regression methods were employed: back-propagation neural networks, logistic regression and support vector machines. The AdaBoost ensemble technique was also used to improve the approximation. Two-objective Pareto-optimal solutions were identified using the non-dominated sorting genetic algorithm (NSGA-II). Finally, a numerically optimized solution was validated against experimental product data, in terms of both fatigue endurance and thickness index.
Calibrating a Rainfall-Runoff and Routing Model for the Continental United States
NASA Astrophysics Data System (ADS)
Jankowfsky, S.; Li, S.; Assteerawatt, A.; Tillmanns, S.; Hilberts, A.
2014-12-01
Catastrophe risk models are widely used in the insurance industry to estimate the cost of risk. The models consist of hazard models linked to vulnerability and financial loss models. In flood risk models, the hazard model generates inundation maps. In order to develop country-wide inundation maps for different return periods, a rainfall-runoff and routing model is run using stochastic rainfall data. The simulated discharge and runoff are then input to a two-dimensional inundation model, which produces the flood maps. In order to obtain realistic flood maps, the rainfall-runoff and routing models have to be calibrated with observed discharge data. The rainfall-runoff model applied here is a semi-distributed model based on the Topmodel (Beven and Kirkby, 1979) approach, which includes additional snowmelt and evapotranspiration models. The routing model is based on the Muskingum-Cunge (Cunge, 1969) approach and includes the simulation of lakes and reservoirs using the linear reservoir approach. Both models were calibrated using the multiobjective NSGA-II (Deb et al., 2002) genetic algorithm with NLDAS forcing data and around 4500 USGS discharge gauges for the period 1979-2013. Additional gauges having no data after 1979 were calibrated using CPC rainfall data. The model performed well in wetter regions and shows the difficulty of simulating areas with sinks, such as karstic areas, or dry areas. Beven, K., Kirkby, M., 1979. A physically based, variable contributing area model of basin hydrology. Hydrol. Sci. Bull. 24 (1), 43-69. Cunge, J.A., 1969. On the subject of a flood propagation computation method (Muskingum method), J. Hydr. Research, 7(2), 205-230. Deb, K., Pratap, A., Agarwal, S., Meyarivan, T., 2002. A fast and elitist multiobjective genetic algorithm: NSGA-II, IEEE Transactions on Evolutionary Computation, 6(2), 182-197.
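The classical Muskingum scheme underlying the Muskingum-Cunge approach can be sketched directly: the outflow at each time step is a weighted combination of the current inflow, the previous inflow, and the previous outflow. The hydrograph and parameter values below are illustrative, not calibrated:

```python
def muskingum_route(inflow, K, X, dt, outflow0):
    """Route an inflow hydrograph through a reach with the Muskingum method.

    K  -- storage time constant (same time units as dt)
    X  -- weighting factor (0 <= X <= 0.5)
    """
    denom = 2.0 * K * (1.0 - X) + dt
    c0 = (dt - 2.0 * K * X) / denom
    c1 = (dt + 2.0 * K * X) / denom
    c2 = (2.0 * K * (1.0 - X) - dt) / denom   # c0 + c1 + c2 == 1
    out = [outflow0]
    for i_prev, i_next in zip(inflow[:-1], inflow[1:]):
        out.append(c0 * i_next + c1 * i_prev + c2 * out[-1])
    return out

inflow = [10, 30, 68, 50, 40, 31, 23, 15, 10, 10]   # m^3/s, illustrative
outflow = muskingum_route(inflow, K=2.0, X=0.2, dt=1.0, outflow0=10.0)
print([round(q, 1) for q in outflow])   # attenuated, delayed peak
```

In a calibration setting such as the one described, K and X (or the Cunge variant's cell parameters) are among the quantities the genetic algorithm adjusts to match observed discharge.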
Ghodsi, Seyed Hamed; Kerachian, Reza; Zahmatkesh, Zahra
2016-04-15
In this paper, an integrated framework is proposed for urban runoff management. To control and improve runoff quality and quantity, Low Impact Development (LID) practices are utilized. In order to determine the LIDs' areas and locations, the Non-dominated Sorting Genetic Algorithm-II (NSGA-II), which considers the three objectives of minimizing runoff volume, runoff pollution and the implementation cost of LIDs, is utilized. In this framework, the Storm Water Management Model (SWMM) is used for stream flow simulation. The non-dominated solutions provided by the NSGA-II are considered as management scenarios. To select the most preferred scenario, interactions among the main stakeholders in the study area with conflicting utilities are incorporated by utilizing bargaining models, including a non-cooperative Nash game model, and the social choice procedures of Borda count and approval voting. Moreover, a new social choice procedure, named the pairwise voting method, is proposed and applied. Based on each conflict resolution approach, a scenario is identified as the ideal solution providing the LIDs' areas, locations and implementation cost. The proposed framework is applied for urban water quality and quantity management in the northern part of the Tehran metropolitan area, Iran. Results show that the proposed pairwise voting method tends to select a scenario with a higher percentage of reduction in TSS (Total Suspended Solids) load and runoff volume, in comparison with the Borda count and approval voting methods. In addition, the Nash method presents the management scenario with the highest cost of LIDs' implementation and the maximum values of runoff volume reduction and TSS removal. The results also signify that the selection of an appropriate management scenario by stakeholders in the study area depends on the available financial resources and the relative importance of runoff quality improvement in comparison with reducing the runoff volume. PMID:26849322
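This record, like several others in this listing, relies on NSGA-II's fast non-dominated sorting to rank candidate scenarios. A minimal sketch of that sorting step follows; the scenario objective vectors are hypothetical, and the real NSGA-II adds crowding-distance ranking and genetic operators on top of this:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    """Return the list of Pareto fronts (lists of indices), best front first."""
    n = len(points)
    dominated_by = [[] for _ in range(n)]   # solutions that i dominates
    count = [0] * n                          # number of solutions dominating i
    for i in range(n):
        for j in range(n):
            if dominates(points[i], points[j]):
                dominated_by[i].append(j)
            elif dominates(points[j], points[i]):
                count[i] += 1
    fronts = [[i for i in range(n) if count[i] == 0]]
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:
            for j in dominated_by[i]:
                count[j] -= 1
                if count[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
    return fronts[:-1]

# Hypothetical scenarios scored on (runoff volume, pollutant load, cost),
# all minimized; the last one is dominated by each of the first three.
scenarios = [(3, 2, 5), (1, 4, 4), (2, 3, 6), (4, 4, 7)]
fronts = non_dominated_sort(scenarios)
```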
NASA Astrophysics Data System (ADS)
Baluev, Roman V.
2013-11-01
This is a parallelized algorithm that decomposes a noisy time series into a number of sinusoidal components. The algorithm analyses all suspicious periodicities that can be revealed, including ones that at first glance look like aliases or noise but may later prove to be real variations. After the selection of the initial candidates, the algorithm performs a complete pass through all their possible combinations and computes the rigorous multifrequency statistical significance for each such frequency tuple. The largest combinations that survive this thresholding procedure represent the outcome of the analysis.
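The combination pass described above can be caricatured with a least-squares fit per frequency tuple. The sketch below only ranks tuples by residual sum of squares, not by the rigorous multifrequency significance the record refers to, and the candidate frequencies and the signal are synthetic:

```python
import itertools
import numpy as np

def tuple_rss(t, y, freqs):
    """Residual sum of squares after a least-squares fit of a constant
    plus one sinusoid per frequency in the tuple."""
    cols = [np.ones_like(t)]
    for f in freqs:
        cols += [np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.sum((y - A @ coef) ** 2))

def rank_tuples(t, y, candidates, max_size=2):
    """Exhaustively score every combination of candidate frequencies,
    best-fitting tuple first."""
    scored = []
    for k in range(1, max_size + 1):
        for combo in itertools.combinations(candidates, k):
            scored.append((tuple_rss(t, y, combo), combo))
    return sorted(scored)

# Synthetic series with two real periodicities plus noise; 0.7 is a decoy.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 200)
y = (np.sin(2 * np.pi * 1.0 * t) + 0.5 * np.sin(2 * np.pi * 2.3 * t)
     + 0.05 * rng.standard_normal(200))
ranked = rank_tuples(t, y, candidates=[0.7, 1.0, 2.3])
```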
Solar Occultation Retrieval Algorithm Development
NASA Technical Reports Server (NTRS)
Lumpe, Jerry D.
2004-01-01
This effort addresses the comparison and validation of currently operational solar occultation retrieval algorithms, and the development of generalized algorithms for future application to multiple platforms. Initial work covered the development of generalized forward model algorithms capable of simulating transmission data from the POAM II/III and SAGE II/III instruments. Work in the 2nd quarter will focus on: completion of the forward model algorithms, including accurate spectral characteristics for all instruments, and comparison of simulated transmission data with actual level 1 instrument data for specific occultation events.
Multiobjective training of artificial neural networks for rainfall-runoff modeling
NASA Astrophysics Data System (ADS)
de Vos, N. J.; Rientjes, T. H. M.
2008-08-01
This paper presents results on the application of various optimization algorithms for the training of artificial neural network rainfall-runoff models. Multilayered feed-forward networks for forecasting discharge from two mesoscale catchments in different climatic regions have been developed for this purpose. The performances of the multiobjective algorithms Multi Objective Shuffled Complex Evolution Metropolis-University of Arizona (MOSCEM-UA) and Nondominated Sorting Genetic Algorithm II (NSGA-II) have been compared to the single-objective Levenberg-Marquardt and Genetic Algorithm for training of these models. Performance has been evaluated by means of a number of commonly applied objective functions and also by investigating the internal weights of the networks. Additionally, the effectiveness of a new objective function called mean squared derivative error, which penalizes models for timing errors and noisy signals, has been explored. The results show that the multiobjective algorithms give competitive results compared to the single-objective ones. Performance measures and posterior weight distributions of the various algorithms suggest that multiobjective algorithms are more consistent in finding good optima than are single-objective algorithms. However, results also show that it is difficult to conclude if any of the algorithms is superior in terms of accuracy, consistency, and reliability. Besides the training algorithm, network performance is also shown to be sensitive to the choice of objective function(s), and including more than one objective function proves to be helpful in constraining the neural network training.
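The mean squared derivative error objective described above can be illustrated as the mean squared error of the first differences of the two series. This finite-difference form is an assumption (the paper's exact formulation may differ), and the hydrograph is a toy example:

```python
def mse(a, b):
    """Plain mean squared error between two equal-length series."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def msde(obs, sim, dt=1.0):
    """Mean squared derivative error: the MSE of the first differences.
    Timing errors and noisy simulated signals change the derivatives
    even when the values themselves look close, so both are penalized."""
    d_obs = [(obs[i + 1] - obs[i]) / dt for i in range(len(obs) - 1)]
    d_sim = [(sim[i + 1] - sim[i]) / dt for i in range(len(sim) - 1)]
    return mse(d_obs, d_sim)

observed = [1.0, 2.0, 5.0, 9.0, 6.0, 3.0, 2.0]   # a small hydrograph
lagged = observed[:1] + observed[:-1]            # same shape, one step late
timing_penalty = msde(observed, lagged)          # nonzero despite equal shape
```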
The objective of this study was to evaluate the capability of an expert system described in the previous paper (Bradbury et al., 2000; Toxicol. Sci.) to identify the potential for chemicals to act as ligands of mammalian estrogen receptors (ERs). The basis of that algorithm was a...
NASA Technical Reports Server (NTRS)
Wang, Lui; Bayer, Steven E.
1991-01-01
Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
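The basic concepts the report introduces (selection, crossover, mutation) fit in a short sketch. The population size, rates and one-max fitness below are arbitrary toy choices, not anything from the report:

```python
import random

def genetic_algorithm(fitness, n_bits=20, pop_size=40, generations=60,
                      p_cross=0.9, p_mut=0.02, seed=1):
    """Minimal generational GA: binary tournament selection, one-point
    crossover, bit-flip mutation. Maximizes `fitness` over bit strings."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def tournament():
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            if rng.random() < p_cross:
                cut = rng.randrange(1, n_bits)          # one-point crossover
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            # Bit-flip mutation on both offspring.
            nxt += [[1 - g if rng.random() < p_mut else g for g in p]
                    for p in (p1, p2)]
        pop = nxt[:pop_size]
    return max(pop, key=fitness)

best = genetic_algorithm(fitness=sum)   # one-max: maximize the number of 1s
```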
NASA Astrophysics Data System (ADS)
Lopez-Baeza, Ernesto; Wigneron, Jean-Pierre; Schwank, Mike; Miernecki, Maciej; Kerr, Yann; Casal, Tania; Delwart, Steven; Fernandez-Moran, Roberto; Mecklenburg, Susanne; Coll Pajaron, M. Amparo; Salgado Hernanz, Paula
The main activity of the Valencia Anchor Station (VAS) is currently to support the validation of SMOS (Soil Moisture and Ocean Salinity) Level 2 and 3 land products (soil moisture, SM, and vegetation optical depth, TAU). With this aim, the European Space Agency (ESA) has provided the Climatology from Satellites Group of the University of Valencia with an ELBARA-II microwave radiometer under a loan agreement since September 2009. During this time, brightness temperatures (TB) have been acquired continuously, except during normal maintenance or minor repair interruptions. ELBARA-II is an L-band dual-polarization radiometer with two channels (1400-1418 MHz, 1409-1427 MHz). It continuously measures over a vineyard field (El Renegado, Caudete de las Fuentes, Valencia) from a 15 m platform, with a constant protocol for calibration and angular scanning measurements, with the aim of assisting the validation of SMOS land products and the calibration of the L-MEB (L-Band Emission of the Biosphere) model, the basis for the SMOS Level 2 Land Processor, over the VAS validation site. One of the advantages of the VAS site is the possibility of studying two different environmental conditions over the year. While the vine cycle extends mainly between April and October, during the rest of the year the area remains under bare soil conditions, adequate for the calibration of the soil model. The measurement protocol currently running has proven robust during the whole operation time and will be extended as long as possible to continue providing a long-term data set of ELBARA-II TB measurements and retrieved SM and TAU. This data set is also proving useful in support of SMOS scientific activities: the VAS area and, specifically, the ELBARA-II site offer good conditions to monitor the long-term evolution of SMOS Level 2 and Level 3 land products and to interpret eventual anomalies that may obscure hidden sensor biases. In addition, SM and TAU that are currently
NASA Astrophysics Data System (ADS)
Coudarcher, Rémi; Duculty, Florent; Serot, Jocelyn; Jurie, Frédéric; Derutin, Jean-Pierre; Dhome, Michel
2005-12-01
SKiPPER is a SKeleton-based Parallel Programming EnviRonment that has been under development since 1996 at the LASMEA Laboratory, Blaise Pascal University, France. The main goal of the project was to demonstrate the applicability of skeleton-based parallel programming techniques to the fast prototyping of reactive vision applications. This paper deals with the special features embedded in the latest version of the project: algorithmic skeleton nesting capabilities and a fully dynamic operating model. Through the case study of a complete and realistic image processing application, in which we have pointed out the requirement for skeleton nesting, we present the operating model of this feature. The work described here is one of the few reported experiments showing the application of skeleton nesting facilities to the parallelisation of a realistic application, especially in the area of image processing. The image processing application we have chosen is an appearance-based 3D face-tracking algorithm.
Richard, M.J.
1987-01-01
An efficient methodology for using commercial flowsheeting programs with advanced mathematical programming algorithms was developed for the optimization of operating plants. The methodology was demonstrated and validated using ChemShare Corporation's DESIGN/2000 simulation of the Freeport Chemical Company's plant for sulfuric acid manufacture and three nonlinear programming techniques: successive linear programming, successive quadratic programming, and the generalized reduced-gradient method. The application of this methodology begins with the development of a feasible base-case simulation. Partial derivatives of the economic model and constraint equations are computed using fully converged simulations. This information is used to formulate an optimization problem that can be solved with the NLP algorithms, giving improved values of the economic model. A line search is constructed through the point found by the nonlinear programming algorithm to find the best feasible point from which to repeat the procedure. The procedure is repeated using the ChemShare simulation program and the NLP code until convergence criteria are met. This method was applied to three flowsheeting problems: a plant-scale contact sulfuric acid process model, a packed-bed reactor design model, and an adiabatic flash problem.
Coastal aquifer management based on surrogate models and multi-objective optimization
NASA Astrophysics Data System (ADS)
Mantoglou, A.; Kourakos, G.
2011-12-01
is capable of solving complex multi-objective optimization problems effectively, with a significant reduction in computational time compared to previous methods (it requires only 5% of the NSGA-II algorithm time). Further, as indicated in the figure below, the Pareto solution obtained by the much faster MOSA(MNN) algorithm is better than the solution obtained by the NSGA-II algorithm.
Abulencia, A.; Adelman, J.; Affolder, Anthony Allen; Akimoto, T.; Albrow, Michael G.; Ambrose, D.; Amerio, S.; Amidei, Dante E.; Anastassov, A.; Anikeev, Konstantin; Annovi, A.; /Frascati /Comenius U.
2007-01-01
The authors report on measurements of the inclusive jet production cross section as a function of the jet transverse momentum in pp̄ collisions at √s = 1.96 TeV, using the k_T algorithm and a data sample corresponding to 1.0 fb^-1 collected with the Collider Detector at Fermilab in Run II. The measurements are carried out in five different jet rapidity regions with |y^jet| < 2.1 and transverse momentum in the range 54 < p_T^jet < 700 GeV/c. Next-to-leading order perturbative QCD predictions are in good agreement with the measured cross sections.
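The k_T algorithm used above clusters particles by repeatedly either merging the pair with the smallest pairwise k_T distance or promoting a particle to a jet when its beam distance is smallest. A simplified sketch follows; it uses pT-weighted recombination instead of the four-momentum E-scheme of production jet codes such as FastJet, and the input particles are hypothetical:

```python
import math

def kt_cluster(particles, D=0.7):
    """Inclusive kT clustering of (pT, y, phi) pseudo-particles.
    d_ij = min(pTi, pTj)^2 * (dy^2 + dphi^2) / D^2, d_iB = pTi^2."""
    parts = list(particles)
    jets = []
    while parts:
        # Smallest beam distance found so far.
        best = min((p[0] ** 2, i, None) for i, p in enumerate(parts))
        for i in range(len(parts)):
            for j in range(i + 1, len(parts)):
                (pti, yi, phii), (ptj, yj, phij) = parts[i], parts[j]
                dphi = abs(phii - phij)
                dphi = min(dphi, 2 * math.pi - dphi)
                dij = min(pti, ptj) ** 2 * ((yi - yj) ** 2 + dphi ** 2) / D ** 2
                if dij < best[0]:
                    best = (dij, i, j)
        _, i, j = best
        if j is None:                    # beam distance won: promote i to a jet
            jets.append(parts.pop(i))
        else:                            # pair distance won: merge i and j
            ptj, yj, phij = parts.pop(j)        # j > i, so pop j first
            pti, yi, phii = parts.pop(i)
            pt = pti + ptj
            w = pti / pt
            # Naive pT-weighted averages; ignores phi wrap-around at +/- pi.
            parts.append((pt, w * yi + (1 - w) * yj, w * phii + (1 - w) * phij))
    return jets

# Two nearby particles merge into one jet; the distant one stands alone.
jets = kt_cluster([(50.0, 0.0, 0.0), (30.0, 0.1, 0.1), (40.0, 2.0, 3.0)])
```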
TVFMCATS. Time Variant Floating Mean Counting Algorithm
Huffman, R.K.
1999-05-01
This software was written to test a time variant floating mean counting algorithm. The algorithm was developed by Westinghouse Savannah River Company and a provisional patent has been filed on the algorithm. The test software was developed to work with the Val Tech model IVB prototype version II count rate meter hardware. The test software was used to verify that the algorithm developed by WSRC could be correctly implemented with the vendor's hardware.
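The patented WSRC algorithm itself is not described in these records, so the following is only a generic floating-mean count-rate estimator in the same spirit: the averaging window grows while new samples agree with the running mean and resets on a step change. Every detail (window cap, z threshold, Poisson consistency test) is an assumption for illustration:

```python
def floating_mean_rate(counts, dt=1.0, max_window=20, z=3.0):
    """Generic floating-mean count-rate estimator (illustrative only).
    The window grows while samples are statistically consistent with the
    current floating mean and is restarted when a step change is seen."""
    rates, window = [], []
    for c in counts:
        if window:
            mean = sum(window) / len(window)
            # Poisson consistency check: a deviation beyond z standard
            # deviations (sqrt of the mean) signals a rate change.
            if abs(c - mean) > z * max(mean, 1.0) ** 0.5:
                window = []
        window.append(c)
        window = window[-max_window:]
        rates.append(sum(window) / len(window) / dt)
    return rates

steady = [100] * 30 + [400] * 30      # a step change in the true count rate
rates = floating_mean_rate(steady)    # tracks 100, then snaps to 400
```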
Time Variant Floating Mean Counting Algorithm
Energy Science and Technology Software Center (ESTSC)
1999-06-03
This software was written to test a time variant floating mean counting algorithm. The algorithm was developed by Westinghouse Savannah River Company and a provisional patent has been filed on the algorithm. The test software was developed to work with the Val Tech model IVB prototype version II count rate meter hardware. The test software was used to verify that the algorithm developed by WSRC could be correctly implemented with the vendor's hardware.
NASA Astrophysics Data System (ADS)
Abrams, Daniel S.
This thesis describes several new quantum algorithms. These include a polynomial time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases (commonly found in ab initio physics and chemistry problems) for which all known classical algorithms require exponential time. Fast algorithms for simulating many body Fermi systems are also provided in both first and second quantized descriptions. An efficient quantum algorithm for anti-symmetrization is given as well as a detailed discussion of a simulation of the Hubbard model. In addition, quantum algorithms that calculate numerical integrals and various characteristics of stochastic processes are described. Two techniques are given, both of which obtain an exponential speed increase in comparison to the fastest known classical deterministic algorithms and a quadratic speed increase in comparison to classical Monte Carlo (probabilistic) methods. I derive a simpler and slightly faster version of Grover's mean algorithm, show how to apply quantum counting to the problem, develop some variations of these algorithms, and show how both (apparently distinct) approaches can be understood from the same unified framework. Finally, the relationship between physics and computation is explored in some more depth, and it is shown that computational complexity theory depends very sensitively on physical laws. In particular, it is shown that nonlinear quantum mechanics allows for the polynomial time solution of NP-complete and #P oracle problems. Using the Weinberg model as a simple example, the explicit construction of the necessary gates is derived from the underlying physics. Nonlinear quantum algorithms are also presented using Polchinski type nonlinearities which do not allow for superluminal communication. (Copies available exclusively from MIT Libraries, Rm. 14- 0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)
Sobel, E.; Lange, K.; O`Connell, J.R.
1996-12-31
Haplotyping is the logical process of inferring gene flow in a pedigree based on phenotyping results at a small number of genetic loci. This paper formalizes the haplotyping problem and suggests four algorithms for haplotype reconstruction. These algorithms range from exhaustive enumeration of all haplotype vectors to combinatorial optimization by simulated annealing. Application of the algorithms to published genetic analyses shows that manual haplotyping is often erroneous. Haplotyping is employed in screening pedigrees for phenotyping errors and in positional cloning of disease genes from conserved haplotypes in population isolates. 26 refs., 6 figs., 3 tabs.
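Simulated annealing, the most elaborate of the four haplotyping algorithms mentioned, can be sketched generically. The bit-vector "haplotype" and its mismatch-count cost below are toy stand-ins for the pedigree likelihood used in practice:

```python
import math
import random

def simulated_annealing(cost, neighbor, state, t0=10.0, cooling=0.995,
                        steps=5000, seed=0):
    """Generic simulated annealing: uphill moves are accepted with
    probability exp(-delta/T), letting the search escape local optima."""
    rng = random.Random(seed)
    cur, cur_c = state, cost(state)
    best, best_c = cur, cur_c
    T = t0
    for _ in range(steps):
        cand = neighbor(cur, rng)
        cand_c = cost(cand)
        if cand_c <= cur_c or rng.random() < math.exp((cur_c - cand_c) / T):
            cur, cur_c = cand, cand_c
            if cur_c < best_c:
                best, best_c = cur, cur_c
        T *= cooling                       # geometric cooling schedule
    return best, best_c

# Toy combinatorial problem: recover a hidden binary vector from a
# mismatch-count cost.
target = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]

def mismatches(s):
    return sum(a != b for a, b in zip(s, target))

def flip_one_bit(s, rng):
    i = rng.randrange(len(s))
    return s[:i] + [s[i] ^ 1] + s[i + 1:]

best, best_c = simulated_annealing(mismatches, flip_one_bit, [0] * 10)
```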
Domingo-Perez, Francisco; Lazaro-Galilea, Jose Luis; Bravo, Ignacio; Gardel, Alfredo; Rodriguez, David
2016-01-01
This paper focuses on optimal sensor deployment for indoor localization with a multi-objective evolutionary algorithm. Our goal is to obtain an algorithm to deploy sensors taking the number of sensors, accuracy and coverage into account. Contrary to most works in the literature, we consider the presence of obstacles in the region of interest (ROI) that can cause occlusions between the target and some sensors. In addition, we aim to obtain all of the Pareto optimal solutions regarding the number of sensors, coverage and accuracy. To deal with a variable number of sensors, we add speciation and structural mutations to the well-known non-dominated sorting genetic algorithm (NSGA-II). Speciation allows one to keep the evolution of sensor sets under control and to apply genetic operators to them so that they compete with other sets of the same size. We show some case studies of the sensor placement of an infrared range-difference indoor positioning system with a fairly complex model of the error of the measurements. The results obtained by our algorithm are compared to sensor placement patterns obtained with random deployment to highlight the relevance of using such a deployment algorithm. PMID:27338414
A Multiobjective Approach to Homography Estimation.
Osuna-Enciso, Valentín; Cuevas, Erik; Oliva, Diego; Zúñiga, Virgilio; Pérez-Cisneros, Marco; Zaldívar, Daniel
2016-01-01
In several machine vision problems, a relevant issue is the estimation of homographies between two different perspectives that hold an extensive set of abnormal data. A method to find such an estimation is random sample consensus (RANSAC); in this, the goal is to maximize the number of matching points given a permissible error (Pe), according to a candidate model. However, those objectives are in conflict: a low Pe value increases the accuracy of the model but degrades its generalization ability, which refers to the number of matching points that tolerate noisy data, whereas a high Pe value improves the noise tolerance of the model but adversely drives the process to false detections. This work considers the estimation process as a multiobjective optimization problem that seeks to maximize the number of matching points while Pe is simultaneously minimized. In order to solve the multiobjective formulation, two different evolutionary algorithms have been explored: the Nondominated Sorting Genetic Algorithm II (NSGA-II) and the Nondominated Sorting Differential Evolution (NSDE). Results considering acknowledged quality measures between original and transformed images over a well-known image benchmark show superior performance of the proposal over the Random Sample Consensus algorithm. PMID:26839532
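The Pe trade-off at the heart of this formulation can be demonstrated with a synthetic translation model standing in for a homography; the data, noise level and thresholds below are invented:

```python
import random

def count_matching(src, dst, transform, pe):
    """Count correspondences whose transfer error is below the tolerance Pe."""
    def err(p, q):
        tx, ty = transform(p)
        return ((tx - q[0]) ** 2 + (ty - q[1]) ** 2) ** 0.5
    return sum(1 for p, q in zip(src, dst) if err(p, q) < pe)

# Synthetic correspondences: a pure translation with Gaussian noise on the
# first 40 pairs, plus 10 gross outliers.
rng = random.Random(0)
src = [(rng.uniform(0, 100), rng.uniform(0, 100)) for _ in range(50)]
dst = [(x + 5.0 + rng.gauss(0, 0.5), y - 3.0 + rng.gauss(0, 0.5))
       for x, y in src[:40]]
dst += [(rng.uniform(0, 100), rng.uniform(0, 100)) for _ in range(10)]
model = lambda p: (p[0] + 5.0, p[1] - 3.0)   # the true transform

# The conflict described above: a tight Pe rejects genuinely matching
# points, a loose Pe admits more (possibly false) matches.
matches = {pe: count_matching(src, dst, model, pe) for pe in (0.5, 2.0, 10.0)}
```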
A Multiobjective Approach to Homography Estimation
Osuna-Enciso, Valentín; Oliva, Diego; Zúñiga, Virgilio; Pérez-Cisneros, Marco; Zaldívar, Daniel
2016-01-01
In several machine vision problems, a relevant issue is the estimation of homographies between two different perspectives that hold an extensive set of abnormal data. A method to find such an estimation is random sample consensus (RANSAC); in this, the goal is to maximize the number of matching points given a permissible error (Pe), according to a candidate model. However, those objectives are in conflict: a low Pe value increases the accuracy of the model but degrades its generalization ability, which refers to the number of matching points that tolerate noisy data, whereas a high Pe value improves the noise tolerance of the model but adversely drives the process to false detections. This work considers the estimation process as a multiobjective optimization problem that seeks to maximize the number of matching points while Pe is simultaneously minimized. In order to solve the multiobjective formulation, two different evolutionary algorithms have been explored: the Nondominated Sorting Genetic Algorithm II (NSGA-II) and the Nondominated Sorting Differential Evolution (NSDE). Results considering acknowledged quality measures between original and transformed images over a well-known image benchmark show superior performance of the proposal over the Random Sample Consensus algorithm. PMID:26839532
Robust Multiobjective Controllability of Complex Neuronal Networks.
Tang, Yang; Gao, Huijun; Du, Wei; Lu, Jianquan; Vasilakos, Athanasios V; Kurths, Jurgen
2016-01-01
This paper addresses robust multiobjective identification of driver nodes in the neuronal network of a cat's brain, in which uncertainties in the determination of driver nodes and control gains are considered. A framework for robust multiobjective controllability is proposed by introducing interval uncertainties and optimization algorithms. By appropriate definitions of robust multiobjective controllability, a robust nondominated sorting adaptive differential evolution (NSJaDE) is presented by means of the nondominated sorting mechanism and the adaptive differential evolution (JaDE). The simulation experimental results illustrate the satisfactory performance of NSJaDE for robust multiobjective controllability, in comparison with six statistical methods and two multiobjective evolutionary algorithms (MOEAs): the nondominated sorting genetic algorithm II (NSGA-II) and nondominated sorting composite differential evolution. It is revealed that the existence of uncertainties in choosing driver nodes and designing control gains heavily affects the controllability of neuronal networks. We also unveil that driver nodes play a more drastic role than control gains in robust controllability. The developed NSJaDE and the obtained results will shed light on the understanding of robustness in controlling realistic complex networks such as transportation networks, power grid networks, biological networks, etc. PMID:26441452
Optimisation of Shape Parameters and Process Manufacturing for an Automotive Safety Part
NASA Astrophysics Data System (ADS)
Gildemyn, Eric; Dal Santo, Philippe; Potiron, Alain; Saïdane, Delphine
2007-05-01
In recent years, the weight and the cost of automotive vehicles have considerably increased due to the importance devoted to safety systems. It is therefore necessary to reduce the weight and the production cost of components by improving their shape and manufacturing process. This work deals with a numerical approach for optimizing the manufacturing process parameters of a safety belt anchor using a genetic algorithm (NSGA-II). This type of component is typically manufactured in three stages: blanking, rounding of the edges by punching and, finally, bending to a 90° angle. In this study, only the rounding and the bending are treated. The numerical model is linked to the genetic algorithm in order to optimize the process parameters. This is implemented by using ABAQUS script files developed in the Python programming language. The algorithm modifies the script files and restarts the FEM analysis automatically. Lemaitre's damage model is introduced in the material behaviour laws and implemented in the FEM analysis by using a FORTRAN subroutine. The influence of two process parameters (the die radius and the rounding punch radius) and five shape parameters was investigated. The objective functions are (i) the material damage state at the end of the forming process, (ii) the stress field and (iii) the maximum von Mises stress in the folded zone.
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Lomax, Harvard
1987-01-01
The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.
An archived multi-objective simulated annealing for a dynamic cellular manufacturing system
NASA Astrophysics Data System (ADS)
Shirazi, Hossein; Kia, Reza; Javadian, Nikbakhsh; Tavakkoli-Moghaddam, Reza
2014-05-01
To design a group layout of a cellular manufacturing system (CMS) in a dynamic environment, a multi-objective mixed-integer non-linear programming model is developed. The model integrates cell formation, group layout and production planning (PP) as three interrelated decisions involved in the design of a CMS. This paper provides an extensive coverage of important manufacturing features used in the design of CMSs and enhances the flexibility of an existing model in handling the fluctuations of part demands more economically by adding machine depot and PP decisions. Two conflicting objectives to be minimized are the total costs and the imbalance of workload among cells. As the considered objectives in this model are in conflict with each other, an archived multi-objective simulated annealing (AMOSA) algorithm is designed to find Pareto-optimal solutions. Matrix-based solution representation, a heuristic procedure generating an initial feasible solution, and efficient mutation operators are the advantages of the designed AMOSA. To demonstrate the efficiency of the proposed algorithm, the performance of AMOSA is compared with an exact algorithm (i.e., the ε-constraint method) solved by the GAMS software and a well-known evolutionary algorithm, namely NSGA-II, for randomly generated problems based on several comparison metrics. The obtained results show that the designed AMOSA can obtain satisfactory solutions for the multi-objective model.
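A core ingredient of archive-based multi-objective annealers such as AMOSA is maintaining a set of mutually non-dominated solutions. The sketch below shows only that archive update (AMOSA's domination-amount acceptance rule is omitted), with hypothetical (total cost, workload imbalance) pairs:

```python
def dominates(a, b):
    """True if a Pareto-dominates b (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def archive_add(archive, candidate):
    """Insert a candidate objective vector into a Pareto archive:
    reject it if dominated, otherwise add it and evict what it dominates."""
    if any(dominates(a, candidate) for a in archive):
        return archive
    return [a for a in archive if not dominates(candidate, a)] + [candidate]

# Two minimized objectives: total cost and workload imbalance among cells.
archive = []
for point in [(10, 5), (8, 6), (9, 4), (7, 7), (8, 5)]:
    archive = archive_add(archive, point)
```

After the five updates, (10, 5) has been evicted by (9, 4) and (8, 6) by (8, 5), leaving three mutually non-dominated points.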
NASA Astrophysics Data System (ADS)
Gong, Wei; Duan, Qingyun; Li, Jianduo; Wang, Chen; Di, Zhenhua; Ye, Aizhong; Miao, Chiyuan; Dai, Yongjiu
2016-03-01
Parameter specification is an important source of uncertainty in large, complex geophysical models. These models generally have multiple model outputs that require multiobjective optimization algorithms. Although such algorithms have long been available, they usually require a large number of model runs and are therefore computationally expensive for large, complex dynamic models. In this paper, a multiobjective adaptive surrogate modeling-based optimization (MO-ASMO) algorithm is introduced that aims to reduce computational cost while maintaining optimization effectiveness. Geophysical dynamic models usually have a prior parameterization scheme derived from the physical processes involved, and our goal is to improve all of the objectives by parameter calibration. In this study, we developed a method for directing the search processes toward the region that can improve all of the objectives simultaneously. We tested the MO-ASMO algorithm against NSGA-II and SUMO with 13 test functions and a land surface model - the Common Land Model (CoLM). The results demonstrated the effectiveness and efficiency of MO-ASMO.
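The adaptive surrogate idea behind MO-ASMO can be reduced to a one-dimensional, single-objective caricature: fit a cheap surrogate to the expensive model, minimize the surrogate, run the true model at that point, and refit. The polynomial surrogate, the stand-in test function and all settings below are assumptions; the real algorithm is multiobjective and uses more capable surrogates:

```python
import numpy as np

def asmo_1d(f, lo, hi, n_init=5, iters=10, degree=4, seed=0):
    """One-dimensional caricature of adaptive surrogate modeling-based
    optimization: each iteration costs one true-model evaluation, placed
    at the minimizer of the current polynomial surrogate."""
    rng = np.random.default_rng(seed)
    X = list(rng.uniform(lo, hi, n_init))   # initial design points
    Y = [f(x) for x in X]
    grid = np.linspace(lo, hi, 1001)
    for _ in range(iters):
        coef = np.polyfit(X, Y, min(degree, len(X) - 1))
        x_new = float(grid[np.argmin(np.polyval(coef, grid))])
        X.append(x_new)                     # one additional true-model run
        Y.append(f(x_new))
    i = int(np.argmin(Y))
    return X[i], Y[i]

# Stand-in for an expensive geophysical model run.
expensive = lambda x: (x - 1.3) ** 2 + 0.1 * np.sin(5 * x)
x_best, y_best = asmo_1d(expensive, -2.0, 4.0)
```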
A game theoretic approach for trading discharge permits in rivers.
Niksokhan, Mohammad Hossein; Kerachian, Reza; Karamouz, Mohammad
2009-01-01
In this paper, a new Cooperative Trading Discharge Permit (CTDP) methodology is designed for estimating equitable and efficient treatment cost allocation among dischargers in a river system considering their conflicting interests. The methodology consists of two main steps: (1) initial treatment cost allocation and (2) equitable treatment cost reallocation. In the first step, a Pareto front among objectives is developed using a powerful and recently developed multi-objective genetic algorithm known as Nondominated Sorting Genetic Algorithm-II (NSGA-II). The objectives of the optimization model are considered to be the average treatment level of dischargers and a fuzzy risk of violating the water quality standards. The fuzzy risk is evaluated using the Monte Carlo analysis. The best non-dominated solution on the Pareto front, which provides the initial cost allocation to dischargers, is selected using the Young Bargaining Theory (YBT). In the second step, some cooperative game theoretic approaches are utilized to investigate how the maximum saving cost of participating dischargers in a coalition can be fairly allocated to them. The final treatment cost allocation provides the optimal trading discharge permit policies. The practical utility of the proposed methodology for river water quality management is illustrated through a realistic case study of the Zarjub river in the northern part of Iran. PMID:19657175
A stochastic conflict resolution model for trading pollutant discharge permits in river systems.
Niksokhan, Mohammad Hossein; Kerachian, Reza; Amin, Pedram
2009-07-01
This paper presents an efficient methodology for developing pollutant discharge permit trading in river systems considering the conflict of interests of involving decision-makers and the stakeholders. In this methodology, a trade-off curve between objectives is developed using a powerful and recently developed multi-objective genetic algorithm technique known as the Nondominated Sorting Genetic Algorithm-II (NSGA-II). The best non-dominated solution on the trade-off curve is defined using the Young conflict resolution theory, which considers the utility functions of decision makers and stakeholders of the system. These utility functions are related to the total treatment cost and a fuzzy risk of violating the water quality standards. The fuzzy risk is evaluated using the Monte Carlo analysis. Finally, an optimization model provides the trading discharge permit policies. The practical utility of the proposed methodology in decision-making is illustrated through a realistic example of the Zarjub River in the northern part of Iran. PMID:18592387
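The Monte Carlo evaluation of the risk of violating water quality standards, used in both of the Zarjub River studies above, can be sketched as follows. The dissolved-oxygen response model and its parameters are invented, and the fuzzy-risk layer described in the papers is omitted; this shows only the crisp Monte Carlo core:

```python
import random

def violation_risk(simulate_do, standard=5.0, n=20000, seed=42):
    """Monte Carlo estimate of the risk that dissolved oxygen (DO) falls
    below the water quality standard under a stochastic response model."""
    rng = random.Random(seed)
    violations = sum(1 for _ in range(n) if simulate_do(rng) < standard)
    return violations / n

# Hypothetical DO response: normally distributed, with the mean rising
# with the average treatment level of the dischargers.
def make_response(treatment_level):
    return lambda rng: rng.gauss(4.0 + 2.0 * treatment_level, 1.0)

# Higher treatment levels should lower the violation risk.
risks = {lvl: violation_risk(make_response(lvl)) for lvl in (0.2, 0.5, 0.8)}
```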
NASA Astrophysics Data System (ADS)
Deb, Kousik; Dhar, Anirban; Purohit, Sandip
2016-02-01
Rainfall-induced landslides have been, and continue to be, one of the most important concerns of geotechnical engineering. The paper presents the variation of the factor of safety of a stone column-supported embankment constructed over soft soil due to changes in water level during a sustained period of rainfall. A combined simulation-optimization based methodology has been proposed to predict the critical failure surface of the embankment and to optimize the corresponding factor of safety under rainfall conditions using the evolutionary genetic algorithm NSGA-II (Non-Dominated Sorting Genetic Algorithm-II). It has been observed that the position of the water table can be reliably estimated for varying periods of infiltration using the developed numerical method. A parametric study is presented to examine the optimum factor of safety of the embankment and its corresponding critical failure surface under the steady-state infiltration condition. Results show that in the case of floating stone columns, the period of infiltration has no effect on the factor of safety. Even the critical failure surfaces for a particular floating column length remain the same irrespective of rainfall duration.
An optimized resistor pattern for temperature gradient control in microfluidics
NASA Astrophysics Data System (ADS)
Selva, Bertrand; Marchalot, Julien; Jullien, Marie-Caroline
2009-06-01
In this paper, we demonstrate the possibility of generating high-temperature gradients with a linear temperature profile when heating is provided in situ. Thanks to improved optimization algorithms, the shape of resistors, which constitute the heating source, is optimized by applying the genetic algorithm NSGA-II (acronym for the non-dominated sorting genetic algorithm) (Deb et al 2002 IEEE Trans. Evol. Comput. 6 2). Experimental validation of the linear temperature profile within the cavity is carried out using a thermally sensitive fluorophore, called Rhodamine B (Ross et al 2001 Anal. Chem. 73 4117-23, Erickson et al 2003 Lab Chip 3 141-9). The high level of agreement obtained between experimental and numerical results serves to validate the accuracy of this method for generating highly controlled temperature profiles. In the field of actuation, such a device is of potential interest since it allows for controlling bubbles or droplets moving by means of thermocapillary effects (Baroud et al 2007 Phys. Rev. E 75 046302). Digital microfluidics is a critical area in the field of microfluidics (Dreyfus et al 2003 Phys. Rev. Lett. 90 14) as well as in the so-called lab-on-a-chip technology. Through an example, the large application potential of such a technique is demonstrated, which entails handling a single bubble driven along a cavity using simple and tunable embedded resistors.
NASA Astrophysics Data System (ADS)
Brochero, D.; Anctil, F.; Gagné, C.
2012-04-01
Today, the availability of Meteorological Ensemble Prediction Systems (MEPS) and their subsequent coupling with multiple hydrological models offer the possibility of building Hydrological Ensemble Prediction Systems (HEPS) consisting of a large number of members. However, this task is complex both in terms of the coupling of information and of the computational time, which may create an operational barrier. The evaluation of the prominence of each hydrological member can be seen as a non-parametric post-processing stage that seeks to find the optimal participation of the hydrological models (in a fashion similar to the Bayesian model averaging technique), maintaining or improving the quality of the probabilistic forecasts based on only x members drawn from a super-ensemble of d members, thus reducing the effort required to issue the probabilistic forecast. The main objective of the current work is to assess the degree of simplification (reduction of the number of hydrological members) that can be achieved with a HEPS configured using 16 lumped hydrological models driven by the 50 weather ensemble forecasts from the European Centre for Medium-range Weather Forecasts (ECMWF), i.e. an 800-member HEPS. In a previous work (Brochero et al., 2011a, b), we demonstrated that the proportion of members allocated to each hydrological model is a sufficient criterion to reduce the number of hydrological members while improving the balance of the scores, taking into account the interchangeability of the ECMWF MEPS. Here, we compare the proportion of members allocated to each hydrological model derived from three non-parametric techniques: correlation analysis of hydrological members, Backward Greedy Selection (BGS) and the Non-dominated Sorting Genetic Algorithm (NSGA-II). The last two techniques allude to techniques developed in machine learning, in a multicriteria framework exploiting the relationship between bias, reliability, and the number of members of the
Das, Saptarshi; Pan, Indranil; Das, Shantanu
2013-07-01
Fuzzy logic based PID controllers have been studied in this paper, considering several combinations of hybrid controllers obtained by grouping the proportional, integral and derivative actions with fuzzy inferencing in different forms. The fractional order (FO) rate of the error signal and the FO integral of the control signal have been used in the design of a family of decomposed hybrid FO fuzzy PID controllers. The input and output scaling factors (SF), along with the integro-differential operators, are tuned with a real-coded genetic algorithm (GA) to produce optimum closed-loop performance by simultaneous consideration of the control loop error index and the control signal. Three different classes of fractional order oscillatory processes, with various levels of relative dominance between time constant and time delay, have been used to test the comparative merits of the proposed family of hybrid fractional order fuzzy PID controllers. Performance comparison of the different FO fuzzy PID controller structures has been done in terms of optimal set-point tracking, load disturbance rejection and minimal variation of the manipulated variable (i.e., smaller actuator requirements). In addition, the multi-objective Non-dominated Sorting Genetic Algorithm (NSGA-II) has been used to study the Pareto-optimal trade-offs between set-point tracking and the control signal, and between set-point tracking and load disturbance performance, for each of the controller structures on the three different types of processes. PMID:23664205
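The GA-based tuning loop described above can be sketched in miniature. The toy below (an assumption-laden simplification, not the paper's controllers) tunes an ordinary integer-order PID on a first-order-plus-dead-time process with a crude elitist real-coded GA, using the integral of absolute error (IAE) as the fitness; the fractional-order operators, fuzzy inference and scaling factors of the paper are deliberately omitted:

```python
# Toy sketch (not the paper's controllers): a crude real-coded GA tuning an
# ordinary PID on a first-order-plus-dead-time plant, fitness = IAE.
import random

def simulate_iae(kp, ki, kd, K=1.0, tau=5.0, delay=1.0, dt=0.01, T=30.0):
    """Euler simulation of a unit-step response under PID; returns the IAE."""
    nd = int(delay / dt)
    y, integ, prev_err = 0.0, 0.0, 1.0      # prev_err = 1 avoids derivative kick
    u_hist = [0.0] * nd                     # transport-delay buffer
    iae = 0.0
    for _ in range(int(T / dt)):
        err = 1.0 - y
        integ += err * dt
        deriv = (err - prev_err) / dt
        prev_err = err
        u_hist.append(kp * err + ki * integ + kd * deriv)
        u = u_hist.pop(0)                   # delayed control action
        y += dt * (-y + K * u) / tau        # first-order plant
        if abs(y) > 1e6:                    # unstable gains get infinite cost
            return float("inf")
        iae += abs(err) * dt
    return iae

def ga_tune(pop_size=20, gens=15, seed=1):
    """Crude elitist real-coded GA over (kp, ki, kd)."""
    rng = random.Random(seed)
    pop = [(rng.uniform(0, 5), rng.uniform(0, 1), rng.uniform(0, 2))
           for _ in range(pop_size)]
    for _ in range(gens):
        elite = sorted(pop, key=lambda g: simulate_iae(*g))[:pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, c = rng.sample(elite, 2)
            children.append(tuple(max(0.0, (x + v) / 2 + rng.gauss(0, 0.1))
                                  for x, v in zip(a, c)))  # crossover + mutation
        pop = elite + children
    return min(pop, key=lambda g: simulate_iae(*g))

best = ga_tune()
print("tuned gains:", best, "IAE:", simulate_iae(*best))
```

All plant parameters and GA settings here are placeholders; the point is only the structure: simulate, score, select elites, recombine, mutate.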
Modeling and optimization of a multi-product biosynthesis factory for multiple objectives.
Lee, Fook Choon; Pandu Rangaiah, Gade; Lee, Dong-Yup
2010-05-01
Genetic algorithms, and optimization in general, enable us to probe deeper into the metabolic pathway recipe for multi-product biosynthesis. An augmented model for simultaneously optimizing serine and tryptophan flux ratios in Escherichia coli was developed by linking the dynamic tryptophan operon model and the aromatic amino acid-tryptophan biosynthesis pathways to the central carbon metabolism model. Six new kinetic parameters of the augmented model were estimated with consideration of available experimental data and other published works. Major differences between calculated and reference concentrations and fluxes were explained. Sensitivities and the underlying competition among fluxes for carbon sources were consistent with intuitive expectations based on the metabolic network and previous results. Biosynthesis rates of serine and tryptophan were simultaneously maximized using the augmented model via concurrent gene knockout and manipulation. The optimization results were obtained using the elitist non-dominated sorting genetic algorithm (NSGA-II) supported by pattern recognition heuristics. A range of Pareto-optimal enzyme activities regulating the amino acid biosynthesis was successfully obtained and elucidated wherever possible vis-à-vis fermentation work based on recombinant DNA technology. The predicted potential improvements in various metabolic pathway recipes using the multi-objective optimization strategy were highlighted and discussed in detail. PMID:20051269
A preference-based multi-objective model for the optimization of best management practices
NASA Astrophysics Data System (ADS)
Chen, Lei; Qiu, Jiali; Wei, Guoyuan; Shen, Zhenyao
2015-01-01
The optimization of best management practices (BMPs) at the watershed scale is notably complex because of the social nature of the decision process, which incorporates information that reflects the preferences of decision makers. In this study, a preference-based multi-objective model was designed by modifying the commonly used Non-dominated Sorting Genetic Algorithm II (NSGA-II). Reference points, achievement scalarizing functions and an indicator-based optimization principle were integrated for searching a set of preferred Pareto-optimal solutions. Pareto preference ordering was also used to reduce the number of objectives in the final decision-making process. The proposed model was then tested in a typical watershed in the Three Gorges Region, China. The results indicated that more desirable solutions were generated, which reduced the decision effort of watershed managers. Compared to a traditional genetic algorithm (GA), the preferred solutions were concentrated in a narrow region close to the projection point instead of spread over the entire Pareto front. Based on Pareto preference ordering, the solutions with the best objective function values were often the more desirable solutions (i.e., the minimum-cost solution and the minimum-pollutant-load solution). In the authors' view, this new model provides a useful tool for optimizing BMPs at the watershed scale and is therefore of great benefit to watershed managers.
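The reference-point machinery mentioned above can be made concrete with a small sketch. The function below is a generic achievement scalarizing function (ASF) of the kind used to bias a Pareto search toward a decision maker's aspiration levels; the weights, the augmentation coefficient and the example objective values are illustrative assumptions, not values from the study:

```python
# Illustrative sketch: an achievement scalarizing function (ASF) used to
# steer a Pareto search toward a decision maker's reference point. The
# weights and the small augmentation coefficient rho are assumptions.

def asf(objectives, reference, weights, rho=1e-6):
    """Smaller is better; minimization objectives assumed."""
    terms = [(f - r) / w for f, r, w in zip(objectives, reference, weights)]
    return max(terms) + rho * sum(terms)

# Two hypothetical BMP portfolios judged against a reference point
# expressed as (cost, pollutant load) aspiration levels:
ref = (100.0, 10.0)
w = (1.0, 1.0)
a = asf((120.0, 12.0), ref, w)   # both objectives miss the aspiration badly
b = asf((105.0, 11.0), ref, w)   # closer to the reference point
print(a, b, a > b)               # the closer portfolio scores lower (better)
```

Solutions minimizing this scalar cluster near the reference point, which is the "narrow region close to the projection point" behaviour the abstract describes.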
NASA Astrophysics Data System (ADS)
Tsai, Wen-Ping; Chang, Fi-John; Chang, Li-Chiu; Herricks, Edwin E.
2015-11-01
Flow regime is the key driver of riverine ecology. This study proposes a novel hybrid methodology based on artificial intelligence (AI) techniques for quantifying riverine ecosystem requirements and delivering suitable flow regimes that sustain river and floodplain ecology through optimizing reservoir operation. This approach addresses the issue of better fitting riverine ecosystem requirements with existing human demands. We first explored and characterized the relationship between flow regimes and fish communities through a hybrid artificial neural network (ANN). Then the non-dominated sorting genetic algorithm II (NSGA-II) was established for river flow management of the Shihmen Reservoir in northern Taiwan. The ecosystem requirement took the form of maximizing fish diversity, which could be estimated by the hybrid ANN. The human requirement was to provide a higher satisfaction degree of water supply. The results demonstrated that the proposed methodology could offer a number of diversified alternative strategies for reservoir operation and improve reservoir operational strategies, producing downstream flows that meet both human and ecosystem needs. The wide spread of Pareto-optimal solutions makes this methodology attractive to water resources managers, allowing decision makers to easily determine the best compromise among reservoir operational strategies for human and ecosystem needs.
LID-BMPs planning for urban runoff control and the case study in China.
Jia, Haifeng; Yao, Hairong; Tang, Ying; Yu, Shaw L; Field, Richard; Tafuri, Anthony N
2015-02-01
Low Impact Development Best Management Practices (LID-BMPs) have in recent years received much recognition as cost-effective measures for mitigating urban runoff impacts. In the present paper, a procedure for LID-BMPs planning and analysis using a comprehensive decision support tool is proposed. A case study was conducted on the planning of an LID-BMPs implementation effort at a college campus in Foshan, Guangdong Province, China. By examining the information obtained, potential LID-BMPs were first selected. SUSTAIN was then used to analyze four runoff control scenarios, namely: a pre-development scenario; a basic scenario (the existing campus development plan without BMP control); Scenario 1 (least-cost BMPs implementation); and Scenario 2 (maximized BMPs performance). A sensitivity analysis was also performed to assess the impact of the hydrologic and water quality parameters. The optimal solution for each of the two LID-BMPs scenarios was obtained by using the non-dominated sorting genetic algorithm II (NSGA-II). Finally, the cost-effectiveness of the LID-BMPs implementation scenarios was examined by determining the incremental cost for a unit improvement of control. PMID:25463572
A niched Pareto tabu search for multi-objective optimal design of groundwater remediation systems
NASA Astrophysics Data System (ADS)
Yang, Yun; Wu, Jianfeng; Sun, Xiaomin; Wu, Jichun; Zheng, Chunmiao
2013-05-01
This study presents a new multi-objective optimization method, the niched Pareto tabu search (NPTS), for the optimal design of groundwater remediation systems. The proposed NPTS is coupled with the commonly used flow and transport codes, MODFLOW and MT3DMS, to search for near-Pareto-optimal tradeoffs among groundwater remediation strategies. The difference between the proposed NPTS and the existing multiple objective tabu search (MOTS) lies in the use of a niche selection strategy and fitness archiving to maintain the diversity of the optimal solutions along the Pareto front and to avoid repetitive calculations of the objective functions associated with the flow and transport model. Sensitivity analysis of the NPTS parameters is evaluated through a synthetic pump-and-treat remediation application involving two conflicting objectives, minimization of both the remediation cost and the contaminant mass remaining in the aquifer. Moreover, the proposed NPTS is applied to a large-scale pump-and-treat groundwater remediation system at the field site of the Massachusetts Military Reservation (MMR) in Cape Cod, Massachusetts, involving minimization of both the total pumping rates and the contaminant mass remaining in the aquifer. Comparison of the results based on the NPTS with those obtained from two other methods, the single objective tabu search (SOTS) and the nondominated sorting genetic algorithm II (NSGA-II), further indicates that the proposed NPTS has desirable computational efficiency, stability, and robustness and is a promising tool for the multi-objective design of groundwater remediation systems.
An optimal design of wind turbine and ship structure based on neuro-response surface method
NASA Astrophysics Data System (ADS)
Lee, Jae-Chul; Shin, Sung-Chul; Kim, Soo-Young
2015-07-01
The geometry of engineering systems affects their performance. For this reason, the shape of engineering systems needs to be optimized in the initial design stage. However, engineering system design problems involve multi-objective optimization, and the performance analysis using commercial codes or numerical analysis is generally time-consuming. To solve these problems, many engineers perform the optimization using an approximation model (response surface). The Response Surface Method (RSM) is generally used to predict system performance in engineering research, but RSM presents some prediction errors for highly nonlinear systems. The major objective of this research is to establish an optimal design method for multi-objective problems and confirm its applicability. The proposed process is composed of three parts: definition of geometry, generation of the response surface, and the optimization process. To reduce the time for performance analysis and minimize the prediction errors, the approximation model is generated using a Backpropagation Artificial Neural Network (BPANN), which is referred to as the Neuro-Response Surface Method (NRSM). The optimization is performed on the generated response surface by the non-dominated sorting genetic algorithm II (NSGA-II). Through case studies of a marine system and a ship structure (the substructure of a floating offshore wind turbine, considering hydrodynamic performance, and bulk carrier bottom stiffened panels, considering structural performance), we have confirmed the applicability of the proposed method for multi-objective side-constraint optimization problems.
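The neuro-response-surface idea reduces to: fit a small backpropagation network to a handful of expensive solver samples, then optimize on the cheap surrogate. A minimal sketch of the fitting step (assumed architecture and training settings, not the authors' model) is:

```python
# Minimal sketch (assumed architecture, not the authors' NRSM): a one-
# hidden-layer backpropagation network approximating an "expensive"
# performance analysis from a few samples.
import numpy as np

rng = np.random.default_rng(0)

# Pretend these samples came from an expensive solver: y = f(design variable)
X = np.linspace(-1.0, 1.0, 40).reshape(-1, 1)
y = np.sin(3.0 * X)

# One hidden layer of 12 tanh units, linear output
W1 = rng.normal(0, 0.5, (1, 12)); b1 = np.zeros(12)
W2 = rng.normal(0, 0.5, (12, 1)); b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

_, pred0 = forward(X)
initial_mse = float(np.mean((pred0 - y) ** 2))

lr = 0.05
for _ in range(3000):                       # plain batch gradient descent
    h, pred = forward(X)
    g = 2.0 * (pred - y) / len(X)           # dMSE/dpred
    gW2 = h.T @ g
    gh = g @ W2.T * (1 - h ** 2)            # back through tanh
    gW1 = X.T @ gh
    W2 -= lr * gW2; b2 -= lr * g.sum(0)
    W1 -= lr * gW1; b1 -= lr * gh.sum(0)

_, pred = forward(X)
final_mse = float(np.mean((pred - y) ** 2))
print(initial_mse, "->", final_mse)
```

Once trained, `forward` stands in for the solver inside the NSGA-II loop, which is where the speedup claimed in the abstract comes from.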
Fontana, W.
1990-12-13
In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed-size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.
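The core mechanism (interaction = composition, producing new functions that re-enter the ensemble) can be caricatured in a few lines. This toy is far simpler than Fontana's lambda-calculus system; the choice of arithmetic functions, the modulus, and the replacement rule are all assumptions for illustration:

```python
# Toy analogue (much simpler than the paper's lambda-calculus system): a
# "function gas" whose objects are unary functions and whose interaction
# is composition, producing new functions that re-enter the ensemble.
import random

MOD = 97   # work over Z_97 so iterated compositions stay bounded

def compose(f, g):
    """Interaction between two objects: the new function f∘g."""
    return lambda x: f(g(x))

inc = lambda x: (x + 1) % MOD
double = lambda x: (2 * x) % MOD
square = lambda x: (x * x) % MOD
print(compose(inc, double)(3))   # (2*3)+1 = 7

# A fixed-size ensemble of randomly interacting functions: at each step two
# members collide and their composition replaces a random member. Few steps
# are used here to keep the composition depth (and call stack) small.
rng = random.Random(42)
gas = [inc, double, square]
for _ in range(6):
    f, g = rng.choice(gas), rng.choice(gas)
    gas[rng.randrange(len(gas))] = compose(f, g)

print(sorted(h(2) for h in gas))   # evolved compound behaviours on input 2
```

The paper's actual objects are character strings with a lambda-calculus semantics, so composition there yields new *strings*, not opaque closures; this sketch only conveys the closed loop of objects producing objects.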
Solving molecular docking problems with multi-objective metaheuristics.
García-Godoy, María Jesús; López-Camacho, Esteban; García-Nieto, José; Nebro, Antonio J; Aldana-Montes, José F
2015-01-01
Molecular docking is a hard optimization problem that has been tackled in the past with metaheuristics, demonstrating new and challenging results when looking for one objective: the minimum binding energy. However, only a few papers can be found in the literature that deal with this problem by means of a multi-objective approach, and no experimental comparisons have been made in order to clarify which of them has the best overall performance. In this paper, we use and compare, for the first time, a set of representative multi-objective optimization algorithms applied to solve complex molecular docking problems. The approach followed is focused on optimizing the intermolecular and intramolecular energies as two main objectives to minimize. Specifically, these algorithms are: two variants of the non-dominated sorting genetic algorithm II (NSGA-II), speed-constrained multi-objective particle swarm optimization (SMPSO), third evolution step of generalized differential evolution (GDE3), multi-objective evolutionary algorithm based on decomposition (MOEA/D) and S-metric evolutionary multi-objective optimization (SMS-EMOA). We assess the performance of the algorithms by applying quality indicators intended to measure convergence and the diversity of the generated Pareto front approximations. We carry out a comparison with another reference mono-objective algorithm in the problem domain (the Lamarckian genetic algorithm (LGA) provided by the AutoDock tool). Furthermore, the ligand binding site and molecular interactions of computed solutions are analyzed, showing promising results for the multi-objective approaches. In addition, a case study of application for aeroplysinin-1 is performed, showing the effectiveness of our multi-objective approach in drug discovery. PMID:26042856
GPU Accelerated Event Detection Algorithm
Energy Science and Technology Software Center (ESTSC)
2011-05-25
Smart grid applications require new algorithmic approaches as well as parallel formulations. One of the critical components is the prediction of changes and the detection of anomalies within the power grid. State-of-the-art algorithms are not suited to the demands of streaming data analysis: (i) event detection algorithms are needed that can scale with the size of the data; (ii) algorithms are needed that can not only handle the multi-dimensional nature of the data, but also model both spatial and temporal dependencies in the data, which, for the most part, are highly nonlinear; and (iii) algorithms are needed that can operate in an online fashion on streaming data. The GAEDA code is a new online anomaly detection technique that takes into account the spatial, temporal, and multi-dimensional aspects of the data set. The basic idea behind the proposed approach is (a) to convert a multi-dimensional sequence into a univariate time series that captures the changes between successive windows extracted from the original sequence, using singular value decomposition (SVD), and then (b) to apply known anomaly detection techniques for univariate time series. A key challenge for the proposed approach is to make the algorithm scalable to huge datasets by adopting techniques from perturbation theory and incremental SVD analysis. We use recent advances in tensor decomposition techniques, which reduce the computational complexity, to monitor the change between successive windows and detect anomalies in the same manner as described above. We therefore propose to develop parallel solutions on many-core systems such as GPUs, because these algorithms involve many numerical operations and are highly data-parallelizable.
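The windowed-SVD reduction in step (a) can be sketched on synthetic data. This is a hedged illustration of the stated idea, not the GAEDA code: each window is summarized by its top right-singular vector, the univariate series measures how much that direction rotates between successive windows, and a simple z-score test flags the anomaly:

```python
# Hedged sketch of the idea described above (not the GAEDA code itself):
# reduce a multi-channel stream to a univariate "change" series via the
# principal SVD direction of successive windows, then z-score it.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 3-channel stream driven by one shared source; a large
# disturbance is injected into channel 2 during the 7th window
# (samples 120-139).
t = np.arange(200) * 0.1
mixing = np.array([1.0, 0.5, -0.2])          # fixed channel mixing
data = np.outer(np.sin(t), mixing) + 0.01 * rng.standard_normal((200, 3))
data[120:140, 2] += 50.0

win = 20
windows = [data[i:i + win] for i in range(0, len(data), win)]

# Top right-singular vector summarizes each window's dominant direction.
dirs = [np.linalg.svd(w, full_matrices=False)[2][0] for w in windows]

# Univariate change series: 1 - |cos angle| between successive directions
# (the absolute value handles the sign ambiguity of singular vectors).
change = np.array([1.0 - abs(dirs[k] @ dirs[k + 1])
                   for k in range(len(dirs) - 1)])

z = (change - change.mean()) / change.std()
flagged = np.where(z > 1.5)[0]
print(flagged)   # transitions into and out of the anomalous window
```

The production system described in the abstract replaces the batch SVD with incremental/tensor decompositions so the same comparison runs over a stream.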
Stability of Bareiss algorithm
NASA Astrophysics Data System (ADS)
Bojanczyk, Adam W.; Brent, Richard P.; de Hoog, F. R.
1991-12-01
In this paper, we present a numerical stability analysis of the Bareiss algorithm for solving a symmetric positive definite Toeplitz system of linear equations. We also compare the Bareiss algorithm with the Levinson algorithm and conclude that the former has superior numerical properties.
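For context, the Levinson algorithm against which Bareiss is compared exploits the Toeplitz structure to solve T x = b in O(n²) operations instead of O(n³). The sketch below follows the classic recursion (as given, e.g., in Golub & Van Loan, Algorithm 4.7.2); it is an illustration of the algorithm under discussion, not code from the paper:

```python
# Sketch of the Levinson algorithm (classic recursion, cf. Golub & Van
# Loan Alg. 4.7.2) for T x = b with T symmetric positive definite
# Toeplitz: O(n^2) work instead of O(n^3).
import numpy as np

def levinson_solve(t, b):
    """t: first column of T (t[0] on the diagonal). Assumes n >= 2."""
    t = np.asarray(t, float)
    r = t[1:] / t[0]                     # normalized off-diagonal entries
    b = np.asarray(b, float) / t[0]
    n = len(b)
    x = np.zeros(n); y = np.zeros(n)     # y: running Yule-Walker solution
    x[0] = b[0]; y[0] = -r[0]
    alpha, beta = -r[0], 1.0
    for k in range(n - 1):
        beta *= 1.0 - alpha * alpha
        mu = (b[k + 1] - r[:k + 1] @ x[k::-1]) / beta
        x[:k + 1] += mu * y[k::-1]
        x[k + 1] = mu
        if k < n - 2:
            alpha = -(r[k + 1] + r[:k + 1] @ y[k::-1]) / beta
            y[:k + 1] += alpha * y[k::-1]
            y[k + 1] = alpha
    return x

# Quick check on a small, diagonally dominant (hence SPD) Toeplitz system
t = np.array([4.0, 2.0, 1.0, 0.5])
b = np.array([1.0, -2.0, 3.0, 0.0])
T = np.array([[t[abs(i - j)] for j in range(4)] for i in range(4)])
x = levinson_solve(t, b)
print(np.max(np.abs(T @ x - b)))   # residual: tiny
```

The stability question studied in the paper is precisely how such residuals behave for the Bareiss versus the Levinson recursion as conditioning worsens.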
Atmospheric Science Data Center
2013-07-10
... algorithms from SAGE III v4.00 Ceased removal of the water vapor extinction in the 600nm channel due to uncertainty in the H2O spectroscopy in this spectral band Updated our estimation of the SAGE II ...
ERIC Educational Resources Information Center
Lazewnik, Grainom
This document comprises the first part of the section of the Noun Reference Dictionary concerned with nouns derived from verb roots. See AL 002 270 for Part II. The format of this section is the same as that described in AL 002 267 for the pure nominal section of the dictionary. Roots are indicated. For other related documents, see ED 019 668, AL…
Iterative phase retrieval algorithms. I: optimization.
Guo, Changliang; Liu, Shi; Sheridan, John T
2015-05-20
Two modified Gerchberg-Saxton (GS) iterative phase retrieval algorithms are proposed. The first we refer to as the spatial phase perturbation GS algorithm (SPP GSA). The second is a combined GS hybrid input-output algorithm (GS/HIOA). In this paper (Part I), it is demonstrated that the SPP GS and GS/HIO algorithms are both much better at avoiding stagnation during phase retrieval, allowing them to successfully locate superior solutions compared with either the GS or the HIO algorithms. The performances of the SPP GS and GS/HIO algorithms are also compared. Then, the error reduction (ER) algorithm is combined with the HIO algorithm (ER/HIOA) to retrieve the input object image and the phase, given only some knowledge of its extent and the amplitude in the Fourier domain. In Part II, the algorithms developed here are applied to carry out known plaintext and ciphertext attacks on amplitude encoding and phase encoding double random phase encryption systems. Significantly, ER/HIOA is then used to carry out a ciphertext-only attack on AE DRPE systems. PMID:26192504
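The baseline against which the paper's variants are defined is the classic Gerchberg-Saxton iteration: alternately impose the known amplitude in the object and Fourier domains while keeping the current phase. The sketch below illustrates only that baseline (the SPP and HIO modifications are not reproduced); the test image and iteration count are arbitrary:

```python
# Minimal sketch of the classic Gerchberg-Saxton iteration underlying the
# paper's variants: recover a phase consistent with known amplitudes in
# both the object and Fourier domains.
import numpy as np

rng = np.random.default_rng(1)

# Ground-truth complex field, from which the two amplitude constraints come
true = rng.standard_normal((32, 32)) + 1j * rng.standard_normal((32, 32))
obj_amp = np.abs(true)
fourier_amp = np.abs(np.fft.fft2(true))

def gs(obj_amp, fourier_amp, iters=200):
    """Alternate projections: impose the amplitude in each domain in turn."""
    field = obj_amp * np.exp(1j * rng.uniform(0, 2 * np.pi, obj_amp.shape))
    errs = []
    for _ in range(iters):
        F = np.fft.fft2(field)
        errs.append(np.linalg.norm(np.abs(F) - fourier_amp))
        F = fourier_amp * np.exp(1j * np.angle(F))      # Fourier constraint
        field = np.fft.ifft2(F)
        field = obj_amp * np.exp(1j * np.angle(field))  # object constraint
    return field, errs

field, errs = gs(obj_amp, fourier_amp)
print(errs[0], "->", errs[-1])   # GS error is non-increasing
```

The stagnation this plain iteration is prone to (the error plateaus far from zero) is exactly what the SPP and HIO hybrids in the paper are designed to escape.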
Hydraulic design of a low-specific speed Francis runner for a hydraulic cooling tower
NASA Astrophysics Data System (ADS)
Ruan, H.; Luo, X. Q.; Liao, W. L.; Zhao, Y. P.
2012-11-01
The air blower in a cooling tower is normally driven by an electric motor, and the electric energy consumed by the motor is tremendous. The remaining energy at the outlet of the cooling cycle is considerable; this energy can be utilized to drive a hydraulic turbine and consequently rotate the air blower. The purpose of this project is to recycle energy, lower energy consumption and reduce pollutant discharge. Firstly, a second-order polynomial is proposed to describe the blade setting angle distribution law along the meridional streamline in the streamline equation. The runner is designed by the point-to-point integration method with a specific blade setting angle distribution. Three different ultra-low-specific-speed Francis runners with different wrap angles are obtained in this way. Secondly, based on CFD numerical simulations, the effects of the blade setting angle distribution on the pressure coefficient distribution and the relative efficiency are analyzed. Finally, taking the blade inlet and outlet angles and the control coefficients of the setting angle distribution law as design variables, and efficiency and minimum pressure as objective functions, a multi-objective optimization of the ultra-low-specific-speed Francis runner is carried out with the NSGA-II algorithm. The obtained results show that the optimal runner has higher efficiency and better cavitation performance.
Wei, Xiaohui; Sun, Bingyi; Cui, Jiaxu; Xu, Gaochao
2016-01-01
As a result of the greatly increased use of mobile devices, the disadvantages of portable devices have gradually begun to emerge. To solve these problems, the use of mobile cloud computing assisted by cloud data centers has been proposed. However, cloud data centers are always very far from the mobile requesters. In this paper, we propose an improved multi-objective local mobile cloud model: Compounded Local Mobile Cloud Architecture with Dynamic Priority Queues (LMCpri). This new architecture briefly stores jobs that arrive simultaneously at the cloudlet in different priority positions according to the result of auction processing, and then executes partitioned tasks on capable helpers. In the Scheduling Module, NSGA-II is employed as the scheduling algorithm to shorten processing time and decrease requester cost relative to PSO and sequential scheduling. The simulation results show that setting the number of iterations to 30 is the best choice for the system. In addition, compared with LMCque, LMCpri is able to effectively accommodate a requester who would like his job to be executed in advance, and shortens execution time. Finally, we conduct a comparative experiment between LMCpri and a cloud-assisted architecture; the results reveal that LMCpri offers a clear performance advantage. PMID:27419854
Multimethod evolutionary search for the regional calibration of rainfall-runoff models
NASA Astrophysics Data System (ADS)
Lombardi, Laura; Castiglioni, Simone; Toth, Elena; Castellarin, Attilio; Montanari, Alberto
2010-05-01
The study focuses on regional calibration for a generic rainfall-runoff model. The maximum likelihood function in the spectral domain proposed by Whittle is approximated in the time domain by maximising the simultaneous fit (through a multiobjective optimisation) of selected statistics of streamflow values, with the aim to propose a calibration procedure that can be applied at regional scale. The method may in fact be applied without the availability of actual time series of streamflow observations, since it is based exclusively on the selected statistics, which are here obtained on the basis of the dominant climate and catchment characteristics, through regional regression relationships. The multiobjective optimisation was carried out by using a recently proposed multimethod evolutionary search algorithm (AMALGAM, Vrugt and Robinson, 2007), which runs simultaneously, for population evolution, a set of different optimisation methods (namely NSGA-II, Differential Evolution, Adaptive Metropolis Search and Particle Swarm Optimisation), resulting in a combination of their respective strengths by adaptively updating the weights of the individual methods based on their reproductive success. This ensures a fast, reliable and computationally efficient solution to multiobjective optimisation problems. The proposed technique is applied to the case study of several catchments located in central Italy, which are treated as ungauged and are located in a region where detailed hydrological and geomorphoclimatic information is available. The results obtained with the regional calibration are compared with those provided by a classical least squares calibration in the time domain. The outcomes of the analysis confirm the potential of the proposed methodology.
Long-Series Multi-objective Optimal Operation of Water and Sediment Regulation
NASA Astrophysics Data System (ADS)
Bai, T.; Jin, W.
2015-12-01
A secondary suspended river has formed in the Inner Mongolia reaches, and the security of the reach and the ecological health of the river are threatened. Therefore, research on water-sediment regulation by cascade reservoirs is urgent and necessary. Against this background, multi-objective water and sediment regulation is studied in this paper. Firstly, multi-objective optimal operation models of the Longyangxia and Liujiaxia cascade reservoirs are established. Secondly, based on constraint-handling and feasible-search-space techniques, the Non-dominated Sorting Genetic Algorithm (NSGA-II) is greatly improved to solve the model. Thirdly, four different scenarios are set. It is demonstrated that: (1) scatter diagrams of the Pareto front show the optimal solutions for power generation maximization and sediment maximization, and the global equilibrium solutions between the two; (2) the potential of water-sediment regulation by the Longyangxia and Liujiaxia cascade reservoirs is analyzed; (3) with increasing water supply in the future, conflicts between water supply and water-sediment regulation arise, and the sustainability of water-sediment regulation will suffer from the decreasing transferable water in the cascade reservoirs; (4) the transfer project has little benefit for water-sediment regulation. The research results have important practical significance for water-sediment regulation by cascade reservoirs in the Upper Yellow River and for constructing a water and sediment control system in the whole Yellow River basin.
Optimization of PHEV Power Split Gear Ratio to Minimize Fuel Consumption and Operation Cost
NASA Astrophysics Data System (ADS)
Li, Yanhe
A Plug-in Hybrid Electric Vehicle (PHEV) is a vehicle powered by a combination of an internal combustion engine and an electric motor with a battery pack. The battery pack can be charged by plugging the vehicle into the electric grid and by using excess engine power. The research activity performed in this thesis focused on the development of an innovative optimization approach for the PHEV Power Split Device (PSD) gear ratio, with the aim of minimizing the vehicle operation costs. Three research activity lines have been followed: • Activity 1: PHEV control strategy optimization using Dynamic Programming (DP), and the development of a PHEV rule-based control strategy based on the DP results. • Activity 2: PHEV rule-based control strategy parameter optimization using the Non-dominated Sorting Genetic Algorithm (NSGA-II). • Activity 3: A comprehensive analysis of the single-mode PHEV architecture to offer an innovative approach to optimizing the PHEV PSD gear ratio.
Sweetapple, Christine; Fu, Guangtao; Butler, David
2014-05-15
This study investigates the potential of control strategy optimisation for the reduction of operational greenhouse gas emissions from wastewater treatment in a cost-effective manner, and demonstrates that significant improvements can be realised. A multi-objective evolutionary algorithm, NSGA-II, is used to derive sets of Pareto optimal operational and control parameter values for an activated sludge wastewater treatment plant, with objectives including minimisation of greenhouse gas emissions, operational costs and effluent pollutant concentrations, subject to legislative compliance. Different problem formulations are explored, to identify the most effective approach to emissions reduction, and the sets of optimal solutions enable identification of trade-offs between conflicting objectives. It is found that multi-objective optimisation can facilitate a significant reduction in greenhouse gas emissions without the need for plant redesign or modification of the control strategy layout, but there are trade-offs to consider: most importantly, if operational costs are not to be increased, reduction of greenhouse gas emissions is likely to incur an increase in effluent ammonia and total nitrogen concentrations. Design of control strategies for a high effluent quality and low costs alone is likely to result in an inadvertent increase in greenhouse gas emissions, so it is of key importance that effects on emissions are considered in control strategy development and optimisation. PMID:24602860
Evolutionary multiobjective design of a flexible caudal fin for robotic fish.
Clark, Anthony J; Tan, Xiaobo; McKinley, Philip K
2015-12-01
Robotic fish accomplish swimming by deforming their bodies or other fin-like appendages. As an emerging class of embedded computing systems, robotic fish are anticipated to play an important role in environmental monitoring, inspection of underwater structures, tracking of hazardous wastes and oil spills, and the study of live fish behaviors. While the integration of flexible materials (into the fins and/or body) holds the promise of improved swimming performance (in terms of both speed and maneuverability) for these robots, such components also introduce significant design challenges due to the complex material mechanics and hydrodynamic interactions. The problem is further exacerbated by the need for the robots to meet multiple objectives (e.g., both speed and energy efficiency). In this paper, we propose an evolutionary multiobjective optimization approach to the design and control of a robotic fish with a flexible caudal fin. Specifically, we use the NSGA-II algorithm to investigate morphological and control parameter values that optimize swimming speed and power usage. Several evolved fin designs are validated experimentally with a small robotic fish, where fins of different stiffness values and sizes are printed with a multi-material 3D printer. Experimental results confirm the effectiveness of the proposed design approach in balancing the two competing objectives. PMID:26601975
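Balancing two competing objectives such as speed and power in NSGA-II relies, besides non-dominated sorting, on a crowding-distance measure that keeps the front well spread. As an illustrative sketch only (not the authors' code), with placeholder objective values:

```python
# Illustrative sketch (not the authors' code): the crowding-distance
# measure NSGA-II uses to preserve diversity along a front when trading
# off two objectives such as swimming speed and power usage.

def crowding_distance(front):
    """front: list of objective tuples; returns one distance per solution."""
    n = len(front)
    dist = [0.0] * n
    for k in range(len(front[0])):          # per objective
        order = sorted(range(n), key=lambda i: front[i][k])
        lo, hi = front[order[0]][k], front[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float("inf")   # keep the extremes
        if hi == lo:
            continue
        for pos in range(1, n - 1):         # interior points: gap to neighbours
            prev_v = front[order[pos - 1]][k]
            next_v = front[order[pos + 1]][k]
            dist[order[pos]] += (next_v - prev_v) / (hi - lo)
    return dist

# Hypothetical front: (speed to maximize, stored negated; power to minimize)
front = [(-1.0, 10.0), (-0.8, 7.0), (-0.5, 5.0), (-0.2, 4.0)]
print(crowding_distance(front))
```

Solutions with larger distances sit in sparser parts of the front and are preferred in selection ties, which is what yields the spread of fin designs the abstract reports.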
NASA Astrophysics Data System (ADS)
Ibanez, Eduardo
Most U.S. energy usage is for electricity production and vehicle transportation, two interdependent infrastructures. The strength and number of the interdependencies will increase rapidly as hybrid electric transportation systems, including plug-in hybrid electric vehicles and hybrid electric trains, become more prominent. There are several new energy supply technologies reaching maturity, accelerated by public concern over global warming. The National Energy and Transportation Planning Tool (NETPLAN) is the implementation of the long-term investment and operation model for the transportation and energy networks. An evolutionary approach with underlying fast linear optimization is in place to determine the solutions with the best investment portfolios in terms of cost, resiliency and sustainability, i.e., the solutions that form the Pareto front. The popular NSGA-II algorithm is used as the base for the multiobjective optimization, and metrics are developed to evaluate the energy and transportation portfolios. An integrated approach to resiliency is presented, allowing the evaluation of high-consequence events, like hurricanes or widespread blackouts. A scheme to parallelize the multiobjective solver is presented, along with a decomposition method for the cost minimization program. The modular and data-driven design of the software is presented. The modeling tool is applied in a numerical example to optimize the national investment in energy and transportation over the next 40 years.
Chen, Zhenhua; Chen, Xun; Wu, Wei
2013-04-28
In this paper, by applying the reduced density matrix (RDM) approach for nonorthogonal orbitals developed in the first paper of this series, efficient algorithms for matrix elements between VB structures and energy gradients in the valence bond self-consistent field (VBSCF) method are presented. Both algorithms scale only as nm^4 for integral transformation and d^2 n_β^2 for VB matrix elements and 3-RDM evaluation, while the computational costs of other procedures are negligible, where n, m, d, and n_β are the numbers of variable occupied active orbitals, basis functions, determinants, and active β electrons, respectively. Using tensor properties of the energy gradients with respect to the orbital coefficients presented in the first paper of this series, a partially orthogonal auxiliary orbital set was introduced to reduce the computational cost of VBSCF calculations in which orbitals are flexibly defined. Test calculations on the Diels-Alder reaction of butadiene and ethylene have shown that the novel algorithm is very efficient for VBSCF calculations. PMID:23635124
Library of Continuation Algorithms
Energy Science and Technology Software Center (ESTSC)
2005-03-01
LOCA (Library of Continuation Algorithms) is scientific software written in C++ that provides advanced analysis tools for nonlinear systems. In particular, it provides parameter continuation algorithms, bifurcation tracking algorithms, and drivers for linear stability analysis. The algorithms are aimed at large-scale applications that use Newton's method for their nonlinear solve.
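The core of parameter continuation can be sketched without LOCA's API. The toy below (our illustration, with a made-up scalar problem f(x, lam) = x^2 - lam) steps the parameter and re-converges with Newton's method at each step, seeding each solve with the previous solution:

```python
import numpy as np

def newton(f, dfdx, x0, lam, tol=1e-12, maxit=50):
    # plain scalar Newton iteration at a fixed parameter value
    x = x0
    for _ in range(maxit):
        step = f(x, lam) / dfdx(x, lam)
        x -= step
        if abs(step) < tol:
            break
    return x

f = lambda x, lam: x**2 - lam
dfdx = lambda x, lam: 2.0 * x

branch = []
x = 1.0
for lam in np.linspace(1.0, 4.0, 7):
    x = newton(f, dfdx, x, lam)   # previous solution seeds the next solve
    branch.append((lam, x))

print(branch[-1][0], branch[-1][1])  # lam = 4.0 with x near 2.0 (= sqrt(lam))
```

Libraries like LOCA add arclength continuation and bifurcation detection on top of this basic predictor-corrector loop.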
Geist, G.A.; Howell, G.W.; Watkins, D.S.
1997-11-01
The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30--60 in computing time and a factor of over 100 in matrix storage space.
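The BR algorithm itself is not in standard libraries, so as background the sketch below shows the baseline it is compared against: plain unshifted QR iteration applied to a small upper Hessenberg matrix (our illustrative matrix; production QR adds shifts and deflation).

```python
import numpy as np

def qr_iterate(H, steps=200):
    # each step is a similarity transform, so eigenvalues are preserved
    for _ in range(steps):
        Q, R = np.linalg.qr(H)
        H = R @ Q
    return H

# a small symmetric tridiagonal matrix, a special case of upper Hessenberg
H = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 1.0]])

H_conv = qr_iterate(H)
approx = np.sort(np.diag(H_conv))            # diagonal converges to eigenvalues
exact = np.sort(np.linalg.eigvals(H).real)
print(np.allclose(approx, exact))  # True: the iteration recovered the spectrum
```

For this positive-definite example the off-diagonal entries decay geometrically, so 200 steps are far more than enough.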
Preface to special section on ILAS-II: The Improved Limb Atmospheric Spectrometer-II
NASA Astrophysics Data System (ADS)
Nakajima, Hideaki
2006-10-01
The Improved Limb Atmospheric Spectrometer-II (ILAS-II) was a solar-occultation satellite sensor designed to measure minor constituents associated with polar ozone depletion. ILAS-II was placed on board the Advanced Earth Observing Satellite-II (ADEOS-II, "Midori-II"), which was successfully launched on 14 December 2002 from the Tanegashima Space Center of the Japan Aerospace Exploration Agency (JAXA). After an initial check of the instruments, ILAS-II made routine measurements for about 7 months, from 2 April 2003 to 24 October 2003, a period that included the formation and collapse of an Antarctic ozone hole in 2003, one of the largest in history. This paper introduces a special section containing papers on ILAS-II instrumental and on-orbit characteristics, several validation results of ILAS-II data processed with the version 1.4 data processing algorithm, and scientific analyses of polar stratospheric chemistry and dynamics using ILAS-II data.
NASA Astrophysics Data System (ADS)
Yates, David N.; Warner, Thomas T.; Leavesley, George H.
2000-06-01
Three techniques were employed for the estimation and prediction of precipitation from a thunderstorm that produced a flash flood in the Buffalo Creek watershed located in the mountainous Front Range near Denver, Colorado, on 12 July 1996. The techniques included 1) quantitative precipitation estimation using the National Weather Service's Weather Surveillance Radar-1988 Doppler and the National Center for Atmospheric Research's S-band, dual-polarization radars, 2) quantitative precipitation forecasting utilizing a dynamic model, and 3) quantitative precipitation forecasting using an automated algorithmic system for tracking thunderstorms. Rainfall data provided by these various techniques at short timescales (6 min) and at fine spatial resolutions (150 m to 2 km) served as input to a distributed-parameter hydrologic model for analysis of the flash flood. The quantitative precipitation estimates from the weather radar demonstrated their ability to aid in simulating a watershed's response to precipitation forcing from small-scale, convective weather in complex terrain. That is, with the radar-based quantitative precipitation estimates employed as input, the simulated peak discharge was similar to that estimated. The dynamic model showed the most promise in providing a significant forecast lead time for this flash-flood event. The algorithmic system did not show as much skill in comparison with the dynamic model in providing precipitation forcing to the hydrologic model. The discharge forecasts based on the dynamic-model and algorithmic-system inputs point to the need to improve the ability to forecast convective storms, especially if models such as these eventually are to be used in operational flood forecasting.
NASA Technical Reports Server (NTRS)
1959-01-01
The Juno II launch vehicle, shown here, was a modified Jupiter Intermediate-Range Ballistic Missile, developed by Dr. Wernher von Braun and the rocket team at Redstone Arsenal in Huntsville, Alabama. Between December 1958 and April 1961, the Juno II launched space probes Pioneer III and IV, as well as Explorer satellites VII, VIII and XI.
Reasoning about systolic algorithms
Purushothaman, S.
1986-01-01
Systolic algorithms are a class of parallel algorithms, with small-grain concurrency, well suited for implementation in VLSI. They are intended to be implemented as high-performance, computation-bound back-end processors and are characterized by a tessellating interconnection of identical processing elements. This dissertation investigates the problem of proving correctness of systolic algorithms. The following are reported in this dissertation: (1) a methodology for verifying correctness of systolic algorithms based on representing an algorithm as recurrence equations and solving them. The methodology is demonstrated by proving the correctness of a systolic architecture for optimal parenthesization. (2) The implementation of mechanical proofs of correctness of two systolic algorithms, a convolution algorithm and an optimal parenthesization algorithm, using the Boyer-Moore theorem prover. (3) An induction principle for proving correctness of systolic arrays which are modular. Two attendant inference rules, weak equivalence and shift transformation, which capture equivalent behavior of systolic arrays, are also presented.
Algorithm-development activities
NASA Technical Reports Server (NTRS)
Carder, Kendall L.
1994-01-01
The task of algorithm-development activities at USF continues. The algorithm for determining chlorophyll a concentration (Chl a) and the gelbstoff absorption coefficient from SeaWiFS and MODIS-N radiance data is our current priority.
Accurate Finite Difference Algorithms
NASA Technical Reports Server (NTRS)
Goodrich, John W.
1996-01-01
Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods; they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order and high-resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.
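As a generic illustration of high-order finite differences (not the paper's schemes), the sketch below applies the standard 4th-order central stencil for the first derivative to sin(x) on a periodic grid; halving the grid spacing should shrink the error by about 2^4 = 16.

```python
import numpy as np

def d1_4th(u, h):
    # 4th-order central first derivative on periodic data:
    # u'(x_i) ~ (u_{i-2} - 8 u_{i-1} + 8 u_{i+1} - u_{i+2}) / (12 h)
    return (-np.roll(u, -2) + 8 * np.roll(u, -1)
            - 8 * np.roll(u, 1) + np.roll(u, 2)) / (12 * h)

errs = []
for n in (64, 128):
    x = np.linspace(0, 2 * np.pi, n, endpoint=False)
    h = x[1] - x[0]
    err = np.max(np.abs(d1_4th(np.sin(x), h) - np.cos(x)))
    errs.append(err)

print(errs[0] / errs[1])  # about 16, confirming 4th-order accuracy
```

The high-resolution families discussed in the abstract optimize the stencil coefficients further so that fewer points per wavelength suffice over long propagation times.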
Kirk, R.L.
1987-01-01
Thermal evolution of Ganymede from a hot start is modeled. On cooling, ice I forms above the liquid H2O and dense ices at higher entropy below it. A novel diapiric instability is proposed to occur if the ocean thins enough, mixing these layers and perhaps leading to resurfacing and groove formation. Rising warm-ice diapirs may cause a dramatic heat pulse and fracturing at the surface, and provide material for surface flows. Timing of the pulse depends on ice rheology but could agree with crater-density dates for resurfacing. Origins of the Ganymede-Callisto dichotomy in light of the model are discussed. Based on estimates of the conductivity of H2 (Jupiter, Saturn) and H2O (Uranus, Neptune), the zonal winds of the giant planets will, if they penetrate below the visible atmosphere, interact with the magnetic field well outside the metallic core. The scaling argument is supported by a model with zonal velocity constant on concentric cylinders, with the Lorentz torque on each balanced by viscous stresses. The problem of two-dimensional photoclinometry, i.e. reconstruction of a surface from its image, is formulated in terms of finite elements, and a fast algorithm using Newton-SOR iteration accelerated by multigridding is presented.
James Barber
2010-09-01
James Barber, Ernst Chain Professor of Biochemistry at Imperial College, London, gives a BSA Distinguished Lecture titled, "The Structure and Function of Photosystem II: The Water-Splitting Enzyme of Photosynthesis."
Applying a Genetic Algorithm to Reconfigurable Hardware
NASA Technical Reports Server (NTRS)
Wells, B. Earl; Weir, John; Trevino, Luis; Patrick, Clint; Steincamp, Jim
2004-01-01
This paper investigates the feasibility of applying genetic algorithms to solve optimization problems that are implemented entirely in reconfigurable hardware. The paper highlights the performance/design space trade-offs that must be understood to effectively implement a standard genetic algorithm within a modern Field Programmable Gate Array (FPGA) reconfigurable hardware environment, and presents a case study where this stochastic search technique is applied to standard test-case problems taken from the technical literature. In this research, the targeted FPGA-based platform and high-level design environment was the Starbridge Hypercomputing platform, which incorporates multiple Xilinx Virtex II FPGAs, and the Viva graphical hardware description language.
NASA Astrophysics Data System (ADS)
Hedayatrasa, Saeid; Abhary, Kazem; Uddin, Mohammad; Ng, Ching-Tai
2016-04-01
This paper presents a topology optimization of a single-material phononic crystal plate (PhP) to be produced by perforation of a uniform background plate. The primary objective of this optimization study is to explore the widest exclusive bandgaps of the fundamental (first-order) symmetric or asymmetric guided wave modes, as well as the widest complete bandgap of mixed wave modes (symmetric and asymmetric). However, in the case of single-material porous phononic crystals, the bandgap width essentially depends on the structural integration introduced by the achieved unit-cell topology. Thinner connections of scattering segments (i.e. lower effective stiffness) generally lead to (i) a wider bandgap due to enhanced interfacial reflections, and (ii) a lower bandgap frequency range due to lower wave speed. In other words, a higher relative bandgap width (RBW) is produced by a topology with lower effective stiffness. Hence, in order to study the bandgap efficiency of the PhP unit cell with respect to its structural worthiness, the in-plane stiffness is incorporated in the optimization algorithm as an opposing objective to be maximized. Thick and relatively thin polysilicon PhP unit cells with square symmetry are studied. The non-dominated sorting genetic algorithm NSGA-II is employed for this multi-objective optimization problem, and modal band analysis of individual topologies is performed through the finite element method. Specialized topology initiation, evaluation and filtering are applied to achieve refined feasible topologies without penalizing the randomness of the genetic algorithm (GA) and the diversity of the search space. Selected Pareto topologies are presented, and the gradients of RBW and elastic properties between the two Pareto front extremes are investigated. A chosen intermediate Pareto topology, even though not the extreme topology with the widest bandgap, shows superior bandgap efficiency compared with the widest-bandgap topologies for asymmetric guided waves reported in the literature.
The BRUSH algorithm for two-electron integrals on GPU
NASA Astrophysics Data System (ADS)
Rák, Ádám; Cserey, György
2015-02-01
This Letter presents a new algorithmic method developed to evaluate two-electron repulsion integrals based on contracted Gaussian basis functions in a parallel way. This new algorithm scheme provides distinct SIMD (single instruction, multiple data) optimized paths which symbolically transform integral parameters into target integral algorithms. Our measurements indicate that the method gives a significant improvement over the CPU-friendly PRISM algorithm. The benchmark tests (evaluation of more than 10^8 integrals using the STO-3G basis set) of our GPU (NVIDIA GTX 780) implementation showed up to 750-fold speedup compared to a single core of an Athlon II X4 635 CPU.
Current status and early result of the ILAS-II onboard the ADEOS-II satellite
NASA Astrophysics Data System (ADS)
Nakajima, H.; Sugita, T.; Yokota, T.; Kanzawa, H.; Kobayashi, H.; Sasano, Y.
2003-04-01
The Improved Limb Atmospheric Spectrometer-II (ILAS-II) onboard the Advanced Earth Observing Satellite-II (ADEOS-II) was successfully launched on 14 December 2002 from NASDA's Tanegashima Space Center. ILAS-II is a solar-occultation atmospheric sensor that will measure vertical profiles of O3, HNO3, NO2, N2O, CH4, H2O, ClONO2, aerosol extinction coefficients, etc. with four grating spectrometers. After the initial checkout of ILAS-II, which is scheduled for January-February 2003, ILAS-II will make routine measurements from early April. A validation campaign, in which several balloon-borne measurements are planned, is scheduled to take place in Kiruna, Sweden. Preliminary data from ILAS-II on both northern and southern polar regions using the latest data retrieval algorithm will be presented.
Semioptimal practicable algorithmic cooling
Elias, Yuval; Mor, Tal; Weinstein, Yossi
2011-04-15
Algorithmic cooling (AC) of spins applies entropy manipulation algorithms in open spin systems in order to cool spins far beyond Shannon's entropy bound. Algorithmic cooling of nuclear spins was demonstrated experimentally and may contribute to nuclear magnetic resonance spectroscopy. Several cooling algorithms were suggested in recent years, including practicable algorithmic cooling (PAC) and exhaustive AC. Practicable algorithms have simple implementations, yet their level of cooling is far from optimal; exhaustive algorithms, on the other hand, cool much better, and some even reach (asymptotically) an optimal level of cooling, but they are not practicable. We introduce here semioptimal practicable AC (SOPAC), wherein a few cycles (typically two to six) are performed at each recursive level. Two classes of SOPAC algorithms are proposed and analyzed. Both attain cooling levels significantly better than PAC and are much more efficient than the exhaustive algorithms. These algorithms are shown to bridge the gap between PAC and exhaustive AC. In addition, we calculated the number of spins required by SOPAC in order to purify qubits for quantum computation. As few as 12 and 7 spins are required (in an ideal scenario) to yield a mildly pure spin (60% polarized) from initial polarizations of 1% and 10%, respectively. In the latter case, about five more spins are sufficient to produce a highly pure spin (99.99% polarized), which could be relevant for fault-tolerant quantum computing.
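The basic building block behind these algorithms can be shown numerically. Assuming the standard 3-spin compression step (as in Boykin et al.), one reversible step boosts a spin's polarization from eps to (3*eps - eps^3)/2; the round count and the assumption that helper spins are freshly reset by the bath each round are illustrative simplifications.

```python
def compress(eps):
    # bias boost of the target spin in one 3-spin compression step
    return (3 * eps - eps**3) / 2

eps = 0.01          # 1% initial polarization, as in the abstract
for _ in range(3):  # each round assumes fresh bath-reset helper spins
    eps = compress(eps)
print(round(eps, 5))  # 0.03374: roughly a 1.5x gain per round at low bias
```

At low bias the cubic term is negligible, so each round multiplies the polarization by about 3/2, which is why many recursive rounds (and hence many spins) are needed to reach high purity.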
Reasoning about systolic algorithms
Purushothaman, S.; Subrahmanyam, P.A.
1988-12-01
The authors present a methodology for verifying correctness of systolic algorithms. The methodology is based on solving a set of Uniform Recurrence Equations obtained from a description of systolic algorithms as a set of recursive equations. They present an approach to mechanically verify correctness of systolic algorithms using the Boyer-Moore theorem prover. A mechanical correctness proof of an example from the literature is also presented.
Competing Sudakov veto algorithms
NASA Astrophysics Data System (ADS)
Kleiss, Ronald; Verheyen, Rob
2016-07-01
We present a formalism to analyze the distribution produced by a Monte Carlo algorithm. We perform these analyses on several versions of the Sudakov veto algorithm, adding a cutoff, a second variable and competition between emission channels. The formal analysis allows us to prove that multiple, seemingly different competition algorithms, including those that are currently implemented in most parton showers, lead to the same result. Finally, we test their performance in a semi-realistic setting and show that there are significantly faster alternatives to the commonly used algorithms.
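A minimal veto-algorithm sketch (our illustration, not the Letter's formalism) makes the basic mechanism concrete: to draw the first-emission scale for a rate f(t) = t, use the easy-to-invert overestimate g(t) = 2t to propose trial scales, and accept each trial with probability f/g.

```python
import math
import random

def veto_sample(t0, rng):
    # first-emission scale for f(t) = t via the overestimate g(t) = 2t
    t = t0
    while True:
        # invert the g-Sudakov between t and t': exp(-(t'^2 - t^2)) = u
        t = math.sqrt(t * t - math.log(rng.random()))
        if rng.random() < 0.5:   # accept with probability f/g = t/(2t) = 1/2
            return t

rng = random.Random(1)
n = 50000
mean = sum(veto_sample(0.0, rng) for _ in range(n)) / n
print(mean)  # near sqrt(pi/2) ~ 1.2533, the mean of the Rayleigh
             # first-emission density t * exp(-t^2 / 2)
```

The veto theorem guarantees the accepted scales follow f(t) exp(-Integral f), even though only g was ever inverted; the competition variants analyzed in the paper extend this to several channels at once.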
Atmospheric Science Data Center
2016-02-16
... of stratospheric aerosols, ozone, nitrogen dioxide, water vapor and cloud occurrence by mapping vertical profiles and calculating ... (i.e. MLS and SAGE III versus HALOE). Fixed various bugs. Details are in the SAGE II V7.00 Release Notes. ...
NASA Technical Reports Server (NTRS)
1959-01-01
Wernher von Braun and his team were responsible for the Jupiter-C hardware. The family of launch vehicles developed by the team also came to include the Juno II, which was used to launch the Pioneer IV satellite on March 3, 1959. Pioneer IV passed within 37,000 miles of the Moon before going into solar orbit.
ERIC Educational Resources Information Center
Allegheny County Community Coll., Pittsburgh, PA.
Instructional objectives and performance requirements are outlined in this course guide for Welding II, a performance-based course offered at the Community College of Allegheny County to introduce students to out-of-position shielded arc welding with emphasis on proper heats, electrode selection, and alternating/direct currents. After introductory…
Algorithm That Synthesizes Other Algorithms for Hashing
NASA Technical Reports Server (NTRS)
James, Mark
2010-01-01
An algorithm that includes a collection of several subalgorithms has been devised as a means of synthesizing still other algorithms (which could include computer code) that utilize hashing to determine whether an element (typically, a number or other datum) is a member of a set (typically, a list of numbers). Each subalgorithm synthesizes an algorithm (e.g., a block of code) that maps a static set of key hashes to a somewhat linear monotonically increasing sequence of integers. The goal in formulating this mapping is to cause the length of the sequence thus generated to be as close as practicable to the original length of the set and thus to minimize gaps between the elements. The advantage of the approach embodied in this algorithm is that it completely avoids the traditional approach of hash-key look-ups that involve either secondary hash generation and look-up or further searching of a hash table for a desired key in the event of collisions. This algorithm guarantees that it will never be necessary to perform a search or to generate a secondary key in order to determine whether an element is a member of a set. This algorithm further guarantees that any algorithm that it synthesizes can be executed in constant time. To enforce these guarantees, the subalgorithms are formulated to employ a set of techniques, each of which works very effectively covering a certain class of hash-key values. These subalgorithms are of two types, summarized as follows: Given a list of numbers, try to find one or more solutions in which, if each number is shifted to the right by a constant number of bits and then masked with a rotating mask that isolates a set of bits, a unique number is thereby generated. In a variant of the foregoing procedure, omit the masking. Try various combinations of shifting, masking, and/or offsets until the solutions are found. From the set of solutions, select the one that provides the greatest compression for the representation and is executable in the
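The shift-and-mask subalgorithm described above can be sketched briefly. This is a hedged illustration with made-up key values and helper names, not the synthesized code itself: search for a right-shift and bit mask under which every key in the set maps to a distinct value, so membership tests need no collision handling.

```python
def find_shift_mask(keys, max_shift=16, max_bits=8):
    # try small masks first: a smaller table means greater compression
    for bits in range(1, max_bits + 1):
        mask = (1 << bits) - 1
        for shift in range(max_shift):
            mapped = {(k >> shift) & mask for k in keys}
            if len(mapped) == len(keys):   # injective on this key set
                return shift, mask
    return None

keys = [0x1A2B, 0x3C4D, 0x5E6F, 0x7081]
shift, mask = find_shift_mask(keys)
table = {(k >> shift) & mask: k for k in keys}

def member(x):
    # constant time: one shift, one mask, one lookup, one comparison
    return table.get((x >> shift) & mask) == x

print(member(0x3C4D), member(0x1234))  # True False
```

Because the search is done once at synthesis time, every later membership query runs in constant time with no secondary hashing, matching the guarantee described in the abstract.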
NASA Astrophysics Data System (ADS)
Dumedah, G.; Berg, A. A.; Wineberg, M.
2009-12-01
Hydrological models are increasingly being calibrated using multi-objective genetic algorithms (GAs). Multi-objective GAs facilitate the evaluation of several model evaluation objectives and the examination of massive combinations of parameter sets. Usually, the outcome is a set of several equally accurate parameter sets which make up a trade-off surface between the objective functions, often referred to as the Pareto set. The Pareto set describes a decision front in the sense that each solution has unique values in parameter space with competing accuracy in objective space. An automated framework for choosing a single solution from such a trade-off surface has not been thoroughly investigated in the model calibration literature. As a result, this presentation will demonstrate an automated selection of robust solutions from a trade-off surface using the distribution of solutions in both objective space and parameter space. The trade-off surface was generated using the Non-dominated Sorting Genetic Algorithm II (NSGA-II) to calibrate the Soil and Water Assessment Tool (SWAT) for streamflow simulation based on model bias and root mean square error. Our selection method generates solutions with unique properties, including a representative pathway in parameter space, a basin of attraction or center of mass in objective space, and proximity to the origin in objective space. Additionally, our framework determines a robust solution as a balanced compromise for the distribution of solutions in objective space and parameter space. That is, the robust solution emphasizes stability in model parameter values and in objective function values, in the sense that similarity in parameter space implies similarity in objective space.
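One of the selection rules mentioned above, proximity to the origin in objective space, is easy to sketch. The (bias, RMSE) pairs below are made up for illustration; each objective is normalized to [0, 1] so that the two scales are comparable before measuring distance.

```python
import math

# hypothetical non-dominated (bias, RMSE) pairs from a calibration run
pareto = [(0.02, 1.40), (0.05, 1.10), (0.10, 0.95), (0.30, 0.90)]

def nearest_to_origin(front):
    lows = [min(p[i] for p in front) for i in range(2)]
    highs = [max(p[i] for p in front) for i in range(2)]
    def dist(p):
        # Euclidean distance to the origin in normalized objective space
        return math.hypot(*((p[i] - lows[i]) / (highs[i] - lows[i])
                            for i in range(2)))
    return min(front, key=dist)

print(nearest_to_origin(pareto))  # picks (0.10, 0.95), the balanced compromise
```

The extremes of the front normalize to distance 1 on one axis, so this rule always favors an interior compromise; the framework in the abstract additionally weighs the distribution of solutions in parameter space.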
Parallel scheduling algorithms
Dekel, E.; Sahni, S.
1983-01-01
Parallel algorithms are given for scheduling problems such as scheduling to minimize the number of tardy jobs, job sequencing with deadlines, scheduling to minimize earliness and tardiness penalties, channel assignment, and minimizing the mean finish time. The shared memory model of parallel computers is used to obtain fast algorithms. 26 references.
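The paper's contribution is the parallel versions; as sequential background for the first problem listed, the classic Moore-Hodgson rule minimizes the number of tardy jobs: schedule in due-date order and, whenever a deadline is missed, drop the longest job scheduled so far. The job data below are made up.

```python
import heapq

def min_tardy(jobs):
    """jobs: list of (processing_time, due_date); returns # jobs on time."""
    jobs = sorted(jobs, key=lambda j: j[1])  # earliest due date first
    kept, elapsed = [], 0
    for p, d in jobs:
        heapq.heappush(kept, -p)     # max-heap on processing time
        elapsed += p
        if elapsed > d:              # current job would finish late:
            elapsed += heapq.heappop(kept)  # drop the longest kept job
    return len(kept)

jobs = [(2, 3), (3, 5), (1, 6), (4, 7)]
print(min_tardy(jobs))  # 3: one job must be tardy
```

Each job is pushed once and popped at most once, giving O(n log n) sequential time; the shared-memory algorithms in the abstract parallelize this kind of scheduling computation.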
Developmental Algorithms Have Meaning!
ERIC Educational Resources Information Center
Green, John
1997-01-01
Adapts Stanic and McKillip's ideas for the use of developmental algorithms to propose that the present emphasis on symbolic manipulation should be tempered with an emphasis on the conceptual understanding of the mathematics underlying the algorithm. Uses examples from the areas of numeric computation, algebraic manipulation, and equation solving…
Filtering algorithm for dotted interferences
NASA Astrophysics Data System (ADS)
Osterloh, K.; Bücherl, T.; Lierse von Gostomski, Ch.; Zscherpel, U.; Ewert, U.; Bock, S.
2011-09-01
An algorithm has been developed to reliably remove dotted interferences impairing the perceptibility of objects within a radiographic image. This is a particular challenge encountered with neutron radiographs collected at the NECTAR facility, Forschungs-Neutronenquelle Heinz Maier-Leibnitz (FRM II): the resulting images are dominated by features resembling a snow flurry. These artefacts are caused by scattered neutrons, gamma radiation, cosmic radiation, etc. all hitting the detector CCD directly in spite of a sophisticated shielding. This makes such images rather useless for further direct evaluations. One approach to this problem of random effects would be to collect a vast number of single images, to combine them appropriately and to process them with common image filtering procedures. However, it has been shown that, e.g. median filtering, depending on the kernel size in the plane and/or the number of single shots to be combined, is either insufficient or tends to blur sharp lined structures. This inevitably makes visually controlled, image-by-image processing unavoidable. Particularly in tomographic studies, it would be by far too tedious to treat each single projection in this way. Alternatively, it would be not only more comfortable but also in many cases the only reasonable approach to filter a stack of images in a batch procedure to get rid of the disturbing interferences. The algorithm presented here meets all these requirements. It reliably frees the images from the snowy pattern described above without the loss of fine structures and without a general blurring of the image. It consists of an iterative, parameter-free filtering algorithm, suitable for batch procedures, that aims to eliminate the often complex interfering artefacts while leaving the original information untouched as far as possible.
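A hedged sketch of the general idea (not the NECTAR implementation, whose details the abstract does not give): detect pixels that deviate strongly from their neighborhood median, replace only those, and repeat until no interference pixels remain. The threshold and the synthetic image are made up.

```python
import numpy as np

def despeckle(img, thresh=50, max_iter=10):
    img = img.astype(float).copy()
    for _ in range(max_iter):
        # 3x3 neighborhood median via shifted copies (edges wrap via roll)
        shifts = [np.roll(np.roll(img, dy, 0), dx, 1)
                  for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
        med = np.median(shifts, axis=0)
        spots = np.abs(img - med) > thresh
        if not spots.any():
            break
        img[spots] = med[spots]      # touch only the interference pixels
    return img

img = np.full((8, 8), 100.0)
img[2, 3] = 4000.0                   # isolated "snow" dots
img[5, 5] = 3500.0
clean = despeckle(img)
print(clean.max())  # 100.0: dots removed, background left untouched
```

Because only flagged pixels are replaced, sharp legitimate structures that agree with their neighborhood survive, which plain median filtering of the whole image would blur.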
Multi-Objective Calibration of Hydrological Model Parameters Using MOSCEM-UA
NASA Astrophysics Data System (ADS)
Wang, Yuhui; Lei, Xiaohui; Jiang, Yunzhong; Wang, Hao
2010-05-01
In the past two decades, many evolutionary algorithms, such as NSGA-II and SCEM, have been adopted for the auto-calibration of hydrological models, some of which have shown ideal performance. In this article, a detailed hydrological model auto-calibration algorithm, the Multi-objective Shuffled Complex Evolution Metropolis (MOSCEM-UA), has been introduced to carry out auto-calibration of a hydrological model in order to clarify the equilibrium and the uncertainty of model parameters. The development and the implementation flow chart of the advanced multi-objective algorithm (MOSCEM-UA) are interpreted in detail. Hymod, a conceptual hydrological model based on Moore's concept, was then introduced as a lumped rainfall-runoff simulation approach with several principal parameters involved. The five important model parameters subject to calibration include the maximum storage capacity, the spatial variability of the soil moisture capacity, the flow distribution factor between the slow and quick reservoirs, and the slow-tank and quick-tank distribution factors. In this study, a test case on the upstream area of the KuanCheng hydrometric station in the Haihe basin was studied to verify the performance of the calibration. Two objectives, one for the high-flow process and one for the low-flow process, were chosen in the calibration. The results emphasized that the interrelationship between the objective functions could be described by the Pareto front obtained with MOSCEM-UA, which can be drawn after the iteration. Furthermore, the posterior ranges of the parameters corresponding to the Pareto sets could also be drawn to identify the prediction range of the model. A set of balanced parameters was then chosen to validate the model, and the result showed an ideal prediction. Meanwhile, the correlation among parameters and their effects on the model performance could also be assessed.
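The Pareto front that such calibrations produce is just the set of non-dominated parameter sets under the two objectives. A minimal dominance filter for two minimization objectives (e.g. a high-flow error and a low-flow error; the candidate values are made up) can be sketched as:

```python
def pareto_front(points):
    # keep p unless some other point is at least as good in both objectives
    front = []
    for p in points:
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p
                        for q in points)
        if not dominated:
            front.append(p)
    return front

candidates = [(0.9, 0.2), (0.5, 0.5), (0.3, 0.9), (0.8, 0.6), (0.6, 0.4)]
front = pareto_front(candidates)
print(front)  # (0.8, 0.6) is dominated by (0.5, 0.5) and drops out
```

Plotting the surviving points in objective space gives exactly the trade-off curve described in the abstract, from which a balanced parameter set is then chosen for validation.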
NASA Technical Reports Server (NTRS)
Muniz, Beau
2009-01-01
One unique project that the Prototype Lab worked on was PORT I (Post-landing Orion Recovery Test). PORT is designed to test and develop the systems and components needed to recover the Orion capsule once it splashes down in the ocean. PORT II is designated as a follow-up to PORT I that will utilize a mock-up pressure vessel that is spatially comparable to the final Orion capsule.
2015-08-01
Bore II, co-developed by Berkeley Lab researchers Frank Hale, Chin-Fu Tsang, and Christine Doughty, provides vital information for solving water quality and supply problems and for improving remediation of contaminated sites. Termed "hydrophysical logging," this technology is based on the concept of measuring repeated depth profiles of fluid electric conductivity in a borehole that is pumping. As fluid enters the wellbore, its distinct electric conductivity causes peaks in the conductivity log that grow and migrate upward with time. Analysis of the evolution of the peaks enables characterization of groundwater flow distribution more quickly, more cost effectively, and with higher resolution than ever before. Combining the unique interpretation software Bore II with advanced downhole instrumentation (the hydrophysical logging tool), the method quantifies inflow and outflow locations, their associated flow rates, and the basic water quality parameters of the associated formation waters (e.g., pH, oxidation-reduction potential, temperature). In addition, when applied in conjunction with downhole fluid sampling, Bore II makes possible a complete assessment of contaminant concentration within groundwater.
NASA Astrophysics Data System (ADS)
Gandomi, A. H.; Yang, X.-S.; Talatahari, S.; Alavi, A. H.
2013-01-01
A recently developed metaheuristic optimization algorithm, firefly algorithm (FA), mimics the social behavior of fireflies based on the flashing and attraction characteristics of fireflies. In the present study, we will introduce chaos into FA so as to increase its global search mobility for robust global optimization. Detailed studies are carried out on benchmark problems with different chaotic maps. Here, 12 different chaotic maps are utilized to tune the attractive movement of the fireflies in the algorithm. The results show that some chaotic FAs can clearly outperform the standard FA.
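As a toy illustration of the idea (our simplification, not the paper's benchmark setup), the sketch below drives the attractiveness coefficient of a basic firefly search with the logistic map; all parameter values and the test function are made up.

```python
import math
import random

def firefly_minimize(cost, n=15, iters=200, gamma=0.01, alpha=0.05, seed=3):
    rng = random.Random(seed)
    xs = [rng.uniform(-5, 5) for _ in range(n)]
    chaos = 0.7                                  # logistic-map state
    for _ in range(iters):
        chaos = 4.0 * chaos * (1.0 - chaos)      # chaotic beta in (0, 1)
        for i in range(n):
            for j in range(n):
                if cost(xs[j]) < cost(xs[i]):    # j is brighter: move i to j
                    r2 = (xs[i] - xs[j]) ** 2
                    beta = chaos * math.exp(-gamma * r2)
                    xs[i] += beta * (xs[j] - xs[i]) + alpha * rng.uniform(-1, 1)
    return min(xs, key=cost)

best = firefly_minimize(lambda x: (x - 2.0) ** 2)
print(best)  # should land close to the minimum at x = 2
```

The only change from a standard firefly step is that the fixed attractiveness constant is replaced by a logistic-map iterate, which is the kind of chaotic tuning the study benchmarks across twelve maps.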
Rempp, Florian; Mahler, Guenter; Michel, Mathias
2007-09-15
We introduce a scheme to perform the cooling algorithm, first presented by Boykin et al. in 2002, an arbitrary number of times on the same set of qubits. We achieve this goal by adding an additional SWAP gate and a bath contact to the algorithm. This way one qubit may repeatedly be cooled without adding further qubits to the system. By using a product Liouville space to model the bath contact we calculate the density matrix of the system after a given number of applications of the algorithm.
Parallel algorithms and architectures
Albrecht, A.; Jung, H.; Mehlhorn, K.
1987-01-01
The contents of this book include: Preparata: Deterministic simulation of idealized parallel computers on more realistic ones; Convex hull of randomly chosen points from a polytope; Dataflow computing; Parallel in sequence; Towards the architecture of an elementary cortical processor; Parallel algorithms and static analysis of parallel programs; Parallel processing of combinatorial search; Communications; An O(n log n) cost parallel algorithm for the single function coarsest partition problem; Systolic algorithms for computing the visibility polygon and triangulation of a polygonal region; RELACS - A recursive layout computing system; and Parallel linear conflict-free subtree access.
The Algorithm Selection Problem
NASA Technical Reports Server (NTRS)
Minton, Steve; Allen, John; Deiss, Ron (Technical Monitor)
1994-01-01
Work on NP-hard problems has shown that many instances of these theoretically computationally difficult problems are quite easy. The field has also shown that choosing the right algorithm for the problem can have a profound effect on the time needed to find a solution. However, to date there has been little work showing how to select the right algorithm for solving any particular problem. The paper refers to this as the algorithm selection problem. It describes some of the aspects that make this problem difficult, as well as proposes a technique for addressing it.
A Simple Calculator Algorithm.
ERIC Educational Resources Information Center
Cook, Lyle; McWilliam, James
1983-01-01
The problem of finding cube roots when limited to a calculator with only square root capability is discussed. An algorithm is demonstrated and explained which should always produce a good approximation within a few iterations. (MP)
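One algorithm of this kind exploits the fact that y = x^(1/3) is the attracting fixed point of y ← √(√(x·y)), which needs only multiplication and two presses of the square-root key per step. The sketch below is a plausible reconstruction of this kind of calculator iteration, not necessarily the article's exact method:

```python
import math

def cube_root(x, iterations=30):
    # Approximate x**(1/3) using only multiplication and the square-root key.
    # Iterating y <- sqrt(sqrt(x * y)) converges to the fixed point where
    # y**4 = x * y, i.e. y**3 = x; the error shrinks by roughly a factor
    # of 4 per iteration, so a few dozen steps give full precision.
    y = 1.0
    for _ in range(iterations):
        y = math.sqrt(math.sqrt(x * y))  # two successive square roots
    return y

print(cube_root(27.0))  # approaches 3.0
```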
Zhou, Yongquan; Xie, Jian; Li, Liangliang; Ma, Mingzhi
2014-01-01
The bat algorithm (BA) is a novel stochastic global optimization algorithm, and the cloud model is an effective tool for transforming between qualitative concepts and their quantitative representation. Based on the bat echolocation mechanism and the excellent characteristics of the cloud model for representing uncertain knowledge, a new cloud model bat algorithm (CBA) is proposed. This paper focuses on remodeling the echolocation model based on the living and preying characteristics of bats, utilizing the transformation theory of the cloud model to depict the qualitative concept "bats approach their prey." Furthermore, a Lévy flight mode and a population information communication mechanism are introduced to balance exploration and exploitation. The simulation results show that the cloud model bat algorithm performs well on function optimization. PMID:24967425
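The Lévy flight component can be illustrated with Mantegna's algorithm, a standard way to draw heavy-tailed step lengths from two Gaussian samples; this is an assumed, generic implementation rather than the paper's own formulation:

```python
import math
import random

def levy_step(beta=1.5):
    # One Lévy-flight step length via Mantegna's algorithm: mostly small
    # moves (exploitation) punctuated by rare long jumps (exploration).
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0.0, sigma)
    v = random.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)

random.seed(0)
steps = [levy_step() for _ in range(10000)]
mags = sorted(abs(s) for s in steps)
```

The heavy tail is visible in the sample statistics: the median step is modest, while the largest step is orders of magnitude longer.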
NASA Astrophysics Data System (ADS)
Feigin, G.; Ben-Yosef, N.
1983-10-01
A thinning algorithm, of the banana-peel type, is presented. In each iteration pixels are attacked from all directions (there are no sub-iterations), and the deletion criteria depend on the 24 nearest neighbours.
Diagnostic Algorithm Benchmarking
NASA Technical Reports Server (NTRS)
Poll, Scott
2011-01-01
A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.
Algorithmically specialized parallel computers
Snyder, L.; Jamieson, L.H.; Gannon, D.B.; Siegel, H.J.
1985-01-01
This book is based on a workshop which dealt with array processors. Topics considered include algorithmic specialization using VLSI, innovative architectures, signal processing, speech recognition, image processing, specialized architectures for numerical computations, and general-purpose computers.
Energy Science and Technology Software Center (ESTSC)
2013-07-29
The OpenEIS Algorithm package seeks to provide a low-risk path for building owners, service providers and managers to explore analytical methods for improving building control and operational efficiency. Users of this software can analyze building data, and learn how commercial implementations would provide long-term value. The code also serves as a reference implementation for developers who wish to adapt the algorithms for use in commercial tools or service offerings.
The Superior Lambert Algorithm
NASA Astrophysics Data System (ADS)
der, G.
2011-09-01
Lambert algorithms are used extensively for initial orbit determination, mission planning, space debris correlation, and missile targeting, just to name a few applications. Due to the significance of the Lambert problem in Astrodynamics, Gauss, Battin, Godal, Lancaster, Gooding, Sun and many others (References 1 to 15) have provided numerous formulations leading to various analytic solutions and iterative methods. Most Lambert algorithms and their computer programs can only work within one revolution, break down or converge slowly when the transfer angle is near zero or 180 degrees, and their multi-revolution limitations are either ignored or barely addressed. Despite claims of robustness, many Lambert algorithms fail without notice, and the users seldom have a clue why. The DerAstrodynamics lambert2 algorithm, which is based on the analytic solution formulated by Sun, works for any number of revolutions and converges rapidly at any transfer angle. It provides significant capability enhancements over every other Lambert algorithm in use today. These include improved speed, accuracy, robustness, and multirevolution capabilities as well as implementation simplicity. Additionally, the lambert2 algorithm provides a powerful tool for solving the angles-only problem without artificial singularities (pointed out by Gooding in Reference 16), which involves 3 lines of sight captured by optical sensors, or systems such as the Air Force Space Surveillance System (AFSSS). The analytic solution is derived from the extended Godal’s time equation by Sun, while the iterative method of solution is that of Laguerre, modified for robustness. The Keplerian solution of a Lambert algorithm can be extended to include the non-Keplerian terms of the Vinti algorithm via a simple targeting technique (References 17 to 19). Accurate analytic non-Keplerian trajectories can be predicted for satellites and ballistic missiles, while performing at least 100 times faster in speed than most
Analysis of estimation algorithms for CDTI and CAS applications
NASA Technical Reports Server (NTRS)
Goka, T.
1985-01-01
Estimation algorithms for Cockpit Display of Traffic Information (CDTI) and Collision Avoidance System (CAS) applications were analyzed and/or developed. The algorithms are based on actual or projected operational and performance characteristics of an Enhanced TCAS II traffic sensor developed by Bendix and the Federal Aviation Administration. Three algorithm areas are examined and discussed: horizontal (x and y), range, and altitude estimation algorithms. Raw estimation errors are quantified using Monte Carlo simulations developed for each application; the raw errors are then used to infer impacts on the CDTI and CAS applications. Applications of smoothing algorithms to CDTI problems are also discussed briefly. Technical conclusions are summarized based on the analysis of simulation results.
NASA Astrophysics Data System (ADS)
Maringanti, C.; Chaubey, I.
2009-12-01
A multi-objective genetic algorithm (NSGA-II) in combination with a watershed model (Soil and Water Assessment Tool (SWAT)) is used in an optimization framework for making the Best Management Practices (BMP) selection and placement decisions to reduce the nonpoint source (NPS) pollutants and the net cost for implementation of BMPs. Shuffled complex evolutionary metropolis uncertainty analysis (SCEM-UA) method will be used to quantify the uncertainty of the BMP selection and placement tool. The sources of input uncertainty for the tool include the uncertainties in the estimation of economic costs for the implementation of BMPs, and input SWAT model predictions at field level. The SWAT model predictions are in turn influenced by the model parameters and the input climate forcing such as precipitation and temperature which in turn are affected due to the changing climate, and the changing land use in the watershed. The optimization tool is also influenced by the operational parameters of the genetic algorithm. The SCEM-UA method will be initiated using a uniform distribution for the range of the model parameters and the input sources of uncertainty to estimate the posterior probability distribution of the model response variables. This methodology will be applied to estimate the uncertainty in the BMP selection and placement in Wildcat Creek Watershed located in northcentral Indiana. Nitrogen, phosphorus, sediment, and pesticide are the various NPS pollutants that will be reduced through implementation of BMPs in the watershed. The uncertainty bounds around the Pareto-optimal fronts after the optimization will provide the watershed management groups a clear insight on how the desired water quality goals could be realistically met for the least amount of money that is available for BMP implementation in the watershed.
Optimal design of tunable phononic bandgap plates under equibiaxial stretch
NASA Astrophysics Data System (ADS)
Hedayatrasa, Saeid; Abhary, Kazem; Uddin, M. S.; Guest, James K.
2016-05-01
Design and application of phononic crystal (PhCr) acoustic metamaterials has attracted tremendous interest in the last decade due to their promising capabilities to manipulate acoustic and elastodynamic waves. Phononic controllability of waves through a particular PhCr is limited to spectra within its fixed bandgap frequency. Hence the ability to tune a PhCr is desired to add functionality over its variable bandgap frequency or for switchability. Deformation-induced bandgap tunability of elastomeric PhCr solids and plates with prescribed topology has been studied by other researchers. Principally, the internal stress state and distorted geometry of a deformed phononic crystal plate (PhP) change its effective stiffness and lead to deformation-induced tunability of the resultant modal band structure. Thus the microstructural topology of a PhP can be altered so that specific tunability features are met through prescribed deformation. In the present study, novel tunable PhPs of this kind with optimized bandgap efficiency-tunability of guided waves are computationally explored and evaluated. Low-loss transmission of guided waves throughout thin-walled structures makes them ideal for fabrication of low-loss ultrasound devices and for structural health monitoring purposes. Various tunability targets are defined to enhance or degrade complete bandgaps of plate waves through macroscopic tensile deformation. An elastomeric hyperelastic material is considered, which enables recoverable micromechanical deformation under tuning finite stretch. Phononic tunability through stable deformation of the phononic lattice is specifically required, so any topology showing buckling instability under the assumed deformation is disregarded. The nondominated sorting genetic algorithm (GA) NSGA-II is adopted for evolutionary multiobjective topology optimization of the hypothesized tunable PhP with square symmetric unit-cell and relevant topologies are analyzed through finite
Multi-objective optimization of gear forging process based on adaptive surrogate meta-models
NASA Astrophysics Data System (ADS)
Meng, Fanjuan; Labergere, Carl; Lafon, Pascal; Daniel, Laurent
2013-05-01
In the forging industry, net shape or near net shape forging of gears has been the subject of considerable research effort in the last few decades. In this paper, a multi-objective optimization methodology for net shape gear forging process design is discussed. The study is done in four parts: building a parametric CAD geometry model, simulating the forging process, fitting surrogate meta-models and optimizing the process by using an advanced algorithm. In order to make the meta-models approximate the real response as closely as possible, an adaptive meta-model based design strategy has been applied. This is a continuous process: first, build a preliminary version of the meta-models after the initial simulated calculations; second, improve the accuracy and update the meta-models by adding some new representative samplings. By using this iterative strategy, the number of initial sample points for real numerical simulations is greatly decreased and the time for the forged gear design is significantly shortened. Finally, an optimal design for an industrial application of a 27-teeth gear forging process is introduced, which includes three optimization variables and two objective functions. A 3D FE numerical simulation model is used to realize the process and an advanced thermo-elasto-visco-plastic constitutive equation is considered to represent the material behavior. The meta-model applied for this example is kriging and the optimization algorithm is NSGA-II. In the end, a relatively better Pareto optimal front (POF) is obtained with the gradually improved surrogate meta-models.
A new multi-objective approach to finite element model updating
NASA Astrophysics Data System (ADS)
Jin, Seung-Seop; Cho, Soojin; Jung, Hyung-Jo; Lee, Jong-Jae; Yun, Chung-Bang
2014-05-01
The single objective function (SOF) has been employed for the optimization process in conventional finite element (FE) model updating. The SOF balances the residuals of multiple properties (e.g., modal properties) using weighting factors, but the weighting factors are hard to determine before the run of model updating. Therefore, a trial-and-error strategy is taken to find the most preferred model among alternative updated models resulting from varying weighting factors. In this study, a new approach to FE model updating using a multi-objective function (MOF) is proposed to get the most preferred model in a single run of updating without trial-and-error. For the optimization using the MOF, the non-dominated sorting genetic algorithm-II (NSGA-II) is employed to find the Pareto optimal front. The bend angle related to the trade-off relationship of the objective functions is used to select the most preferred model among the solutions on the Pareto optimal front. To validate the proposed approach, a highway bridge is selected as a test-bed and the modal properties of the bridge are obtained from an ambient vibration test. The initial FE model of the bridge is built using SAP2000. The model is updated from the identified modal properties by both the SOF approach with varying weighting factors and the proposed MOF approach. The most preferred model is selected using the bend angle of the Pareto optimal front and compared with the results from the SOF approach. The comparison shows that the proposed MOF approach is superior to the SOF approach with varying weighting factors in achieving smaller objective function values, estimating better updated parameters, and taking less computational time.
Atmospheric environment monitoring by the ILAS-II onboard the ADEOS-II satellite
NASA Astrophysics Data System (ADS)
Nakajima, Hideaki; Sugita, Takafumi; Yokota, Tatsuya; Sasano, Yasuhiro
2004-11-01
The Improved Limb Atmospheric Spectrometer-II (ILAS-II) onboard the Advanced Earth Observing Satellite-II (ADEOS-II) was successfully launched on 14 December 2002 from the Japan Aerospace Exploration Agency (JAXA)'s Tanegashima Space Center. ILAS-II is a solar-occultation atmospheric sensor which measures vertical profiles of O3, HNO3, NO2, N2O, CH4, H2O, ClONO2, aerosol extinction coefficients, etc. with four grating spectrometers. After its checkout period, ILAS-II was in routine operation from 2 April 2003 until 24 October 2003, when ADEOS-II lost its function due to a solar-paddle failure. Nevertheless, about 7 months of data were acquired by ILAS-II, including the whole period of the 2003 Antarctic ozone hole, when ozone depletion was among the largest observed to date. ILAS-II successfully measured vertical profiles of ozone, nitric acid, nitrous oxide, and aerosol extinction coefficients due to Polar Stratospheric Clouds (PSCs) during this ozone hole period. The ILAS-II data with the latest data retrieval algorithm, Version 1.4, show fairly good agreement with correlative ozonesonde measurements, within 15% accuracy.
SLAP lesions: a treatment algorithm.
Brockmeyer, Matthias; Tompkins, Marc; Kohn, Dieter M; Lorbach, Olaf
2016-02-01
Tears of the superior labrum involving the biceps anchor are a common entity, especially in athletes, and may highly impair shoulder function. If conservative treatment fails, successful arthroscopic repair of symptomatic SLAP lesions has been described in the literature particularly for young athletes. However, the results in throwing athletes are less successful with a significant amount of patients who will not regain their pre-injury level of performance. The clinical results of SLAP repairs in middle-aged and older patients are mixed, with worse results and higher revision rates as compared to younger patients. In this population, tenotomy or tenodesis of the biceps tendon is a viable alternative to SLAP repairs in order to improve clinical outcomes. The present article introduces a treatment algorithm for SLAP lesions based upon the recent literature as well as the authors' clinical experience. The type of lesion, age of patient, concomitant lesions, and functional requirements, as well as sport activity level of the patient, need to be considered. Moreover, normal variations and degenerative changes in the SLAP complex have to be distinguished from "true" SLAP lesions in order to improve results and avoid overtreatment. The suggestion for a treatment algorithm includes: type I: conservative treatment or arthroscopic debridement, type II: SLAP repair or biceps tenotomy/tenodesis, type III: resection of the instable bucket-handle tear, type IV: SLAP repair (biceps tenotomy/tenodesis if >50 % of biceps tendon is affected), type V: Bankart repair and SLAP repair, type VI: resection of the flap and SLAP repair, and type VII: refixation of the anterosuperior labrum and SLAP repair. PMID:26818554
Project resource reallocation algorithm
NASA Technical Reports Server (NTRS)
Myers, J. E.
1981-01-01
A methodology for adjusting baseline cost estimates according to project schedule changes is described. An algorithm which performs a linear expansion or contraction of the baseline project resource distribution in proportion to the project schedule expansion or contraction is presented. Input to the algorithm consists of the deck of cards (PACE input data) prepared for the baseline project schedule as well as a specification of the nature of the baseline schedule change. Output of the algorithm is a new deck of cards with all work breakdown structure block and element of cost estimates redistributed for the new project schedule. This new deck can be processed through PACE to produce a detailed cost estimate for the new schedule.
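The linear expansion/contraction step can be sketched as a conservative rescaling of a per-period cost distribution onto a new schedule length; the PACE card formats and work-breakdown bookkeeping are omitted, and the function name is illustrative:

```python
def redistribute(costs, new_len):
    # Linearly stretch or shrink a per-period cost distribution to a new
    # schedule length while conserving total cost. Each old period's cost
    # is treated as uniform over that period; each new period accumulates
    # the cost of the old-period spans it overlaps.
    old_len = len(costs)
    out = [0.0] * new_len
    for j in range(new_len):
        # span of old-period coordinates covered by new period j
        lo = j * old_len / new_len
        hi = (j + 1) * old_len / new_len
        i = int(lo)
        while i < hi and i < old_len:
            overlap = min(hi, i + 1) - max(lo, i)  # in old-period units
            out[j] += costs[i] * overlap
            i += 1
    return out

# doubling a 3-period schedule splits each period's cost in half
expanded = redistribute([10.0, 20.0, 30.0], 6)
```

For example, expanding [10, 20, 30] to six periods yields [5, 5, 10, 10, 15, 15], and contracting any distribution preserves its total.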
Optical rate sensor algorithms
NASA Technical Reports Server (NTRS)
Uhde-Lacovara, Jo A.
1989-01-01
Optical sensors, in particular Charge Coupled Device (CCD) arrays, will be used on Space Station to track stars in order to provide inertial attitude reference. Algorithms are presented to derive attitude rate from the optical sensors. The first algorithm is a recursive differentiator. A variance reduction factor (VRF) of 0.0228 was achieved with a rise time of 10 samples; a VRF of 0.2522 gives a rise time of 4 samples. The second algorithm is based on direct manipulation of the pixel intensity outputs of the sensor. In 1-dimensional simulations, the derived rate was within 0.07 percent of the actual rate in the presence of additive Gaussian noise with a signal-to-noise ratio of 60 dB.
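As a rough illustration of the first algorithm's idea, a recursive (IIR-smoothed) differentiator trades variance reduction against rise time through a single gain, much as the quoted VRF/rise-time pairs do. This generic sketch is an assumption, not the report's exact filter:

```python
def recursive_differentiator(samples, dt=1.0, alpha=0.3):
    # Rate estimate from first differences, smoothed by a first-order
    # recursive filter. A smaller alpha reduces noise variance but
    # lengthens the rise time; a larger alpha does the opposite.
    rate = 0.0
    out = []
    prev = samples[0]
    for s in samples[1:]:
        raw = (s - prev) / dt          # raw first-difference rate
        rate += alpha * (raw - rate)   # recursive (exponential) update
        out.append(rate)
        prev = s
    return out

# a constant-slope input: the estimate settles onto the true rate of 2.0
rates = recursive_differentiator([2.0 * i for i in range(50)])
```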
Temperature Corrected Bootstrap Algorithm
NASA Technical Reports Server (NTRS)
Comiso, Joey C.; Zwally, H. Jay
1997-01-01
A temperature corrected Bootstrap Algorithm has been developed using Nimbus-7 Scanning Multichannel Microwave Radiometer data in preparation for the upcoming AMSR instrument aboard ADEOS and EOS-PM. The procedure first calculates the effective surface emissivity using emissivities of ice and water at 6 GHz and a mixing formulation that utilizes ice concentrations derived using the current Bootstrap algorithm, but using brightness temperatures from the 6 GHz and 37 GHz channels. These effective emissivities are then used to calculate surface ice temperatures, which in turn are used to convert the 18 GHz and 37 GHz brightness temperatures to emissivities. Ice concentrations are then derived using the same technique as with the Bootstrap algorithm, but using emissivities instead of brightness temperatures. The results show significant improvement in areas where the ice temperature is expected to vary considerably, such as near the continental areas in the Antarctic, where the ice temperature is colder than average, and in marginal ice zones.
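The mixing step has the standard linear form, sketched below with illustrative numbers; the actual algorithm's channel combinations and atmospheric corrections are more involved:

```python
def effective_emissivity(c_ice, e_ice, e_water):
    # Linear mixing: the effective surface emissivity is the
    # concentration-weighted average of ice and open-water emissivities.
    return c_ice * e_ice + (1.0 - c_ice) * e_water

def brightness_to_emissivity(tb, t_surface):
    # Convert a brightness temperature to an emissivity given an estimate
    # of the surface physical temperature, e = TB / Ts (a simplified sketch
    # that ignores atmospheric contributions).
    return tb / t_surface
```

With full ice cover the effective emissivity is simply the ice value; with 50% cover it is midway between ice and water, and a 243 K brightness temperature over a 270 K surface corresponds to e = 0.9.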
Power spectral estimation algorithms
NASA Technical Reports Server (NTRS)
Bhatia, Manjit S.
1989-01-01
Algorithms to estimate the power spectrum using Maximum Entropy Methods were developed. These algorithms were coded in FORTRAN 77 and were implemented on the VAX 780. The important considerations in this analysis are: (1) resolution, i.e., how close in frequency two spectral components can be spaced and still be identified; (2) dynamic range, i.e., how small a spectral peak can be, relative to the largest, and still be observed in the spectra; and (3) variance, i.e., how accurate the estimate of the spectra is to the actual spectra. The application of the algorithms based on Maximum Entropy Methods to a variety of data shows that these criteria are met quite well. Additional work in this direction would help confirm the findings. All of the software developed was turned over to the technical monitor. A copy of a typical program is included. Some of the actual data and graphs used on this data are also included.
Efficient Controls for Finitely Convergent Sequential Algorithms
Chen, Wei; Herman, Gabor T.
2010-01-01
Finding a feasible point that satisfies a set of constraints is a common task in scientific computing: examples are the linear feasibility problem and the convex feasibility problem. Finitely convergent sequential algorithms can be used for solving such problems; an example of such an algorithm is ART3, which is defined in such a way that its control is cyclic in the sense that during its execution it repeatedly cycles through the given constraints. Previously we found a variant of ART3 whose control is no longer cyclic, but which is still finitely convergent and in practice it usually converges faster than ART3 does. In this paper we propose a general methodology for automatic transformation of finitely convergent sequential algorithms in such a way that (i) finite convergence is retained and (ii) the speed of convergence is improved. The first of these two properties is proven by mathematical theorems, the second is illustrated by applying the algorithms to a practical problem. PMID:20953327
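The cyclic-control pattern the paper starts from can be sketched for the linear feasibility problem: repeatedly cycle through the constraints and project onto any violated half-space. This is a simplified relative of ART3, not the ART3 update itself and not the paper's accelerated control:

```python
def cyclic_feasibility(A, b, x, sweeps=200, tol=1e-9):
    # Find x satisfying a . x <= b_i for every row a of A by cyclically
    # projecting onto the boundary of each violated half-space; a full
    # sweep with no violations certifies feasibility.
    for _ in range(sweeps):
        violated = False
        for a, bi in zip(A, b):
            r = sum(ai * xi for ai, xi in zip(a, x)) - bi
            if r > tol:  # constraint violated: orthogonal projection
                nrm2 = sum(ai * ai for ai in a)
                x = [xi - r * ai / nrm2 for ai, xi in zip(a, x)]
                violated = True
        if not violated:
            return x
    return x

# x <= 1, y <= 1, x + y >= 0.5, starting from an infeasible point
A = [[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]]
b = [1.0, 1.0, -0.5]
sol = cyclic_feasibility(A, b, [5.0, 5.0])
```

The transformation the paper proposes replaces this fixed cyclic order with an automatically generated control that retains finite convergence while visiting constraints more productively.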
Programming parallel vision algorithms
Shapiro, L.G.
1988-01-01
Computer vision requires the processing of large volumes of data and requires parallel architectures and algorithms to be useful in real-time, industrial applications. The INSIGHT dataflow language was designed to allow encoding of vision algorithms at all levels of the computer vision paradigm. INSIGHT programs, which are relational in nature, can be translated into a graph structure that represents an architecture for solving a particular vision problem or a configuration of a reconfigurable computational network. The authors consider here INSIGHT programs that produce a parallel net architecture for solving low-, mid-, and high-level vision tasks.
New Effective Multithreaded Matching Algorithms
Manne, Fredrik; Halappanavar, Mahantesh
2014-05-19
Matching is an important combinatorial problem with a number of applications in areas such as community detection, sparse linear algebra, and network alignment. Since computing optimal matchings can be very time consuming, several fast approximation algorithms, both sequential and parallel, have been suggested. Common to the algorithms giving the best solutions is that they tend to be sequential by nature, while algorithms more suitable for parallel computation give solutions of lower quality. We present a new simple 1/2-approximation algorithm for the weighted matching problem. This algorithm is both faster than any other suggested sequential 1/2-approximation algorithm on almost all inputs and also scales better than previous multithreaded algorithms. We further extend this to a general scalable multithreaded algorithm that computes matchings of weight comparable with the best sequential algorithms. The performance of the suggested algorithms is documented through extensive experiments on different multithreaded architectures.
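For context, the classic sequential 1/2-approximation baseline scans edges in order of decreasing weight and keeps any edge whose endpoints are both still free; the paper's contribution is a faster, more scalable algorithm in this quality class, not this exact procedure:

```python
def greedy_matching(edges):
    # Classic greedy 1/2-approximation for maximum-weight matching:
    # process edges by decreasing weight, accepting an edge only if
    # neither endpoint is already matched.
    matched = set()
    matching = []
    for w, u, v in sorted(edges, reverse=True):
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

# path a-b-c-d with weights 2, 3, 2: greedy takes the middle edge (weight 3),
# blocking the optimal pair (a,b)+(c,d) of weight 4 -- within the 1/2 bound
result = greedy_matching([(2, 'a', 'b'), (3, 'b', 'c'), (2, 'c', 'd')])
```

The inherently sequential part is the global sort; the multithreaded algorithms the paper discusses relax that ordering while preserving solution quality.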
Inclusive jet production using the kt algorithm
Norniella, Olga; /Barcelona, IFAE
2006-05-01
Results on inclusive jet production using the k_T algorithm in proton-antiproton collisions at √s = 1.96 TeV are presented, based on 1 fb^-1 of CDF Run II data. The measurements are carried out for jets with p_T^jet > 54 GeV/c in five different jet rapidity regions up to |y^jet| = 2.1. The measured cross sections are corrected to the hadron level and compared to next-to-leading order perturbative QCD predictions (NLO pQCD).
Energy Science and Technology Software Center (ESTSC)
2005-03-30
The Robotic Follow Algorithm enables any robotic vehicle to follow a moving target while reactively choosing a route around nearby obstacles. The robotic follow behavior can be used with different camera systems and with thermal or visual tracking, as well as other tracking methods such as radio frequency tags.
Data Structures and Algorithms.
ERIC Educational Resources Information Center
Wirth, Niklaus
1984-01-01
Built-in data structures are the registers and memory words where binary values are stored; hard-wired algorithms are the fixed rules, embodied in electronic logic circuits, by which stored data are interpreted as instructions to be executed. Various topics related to these two basic elements of every computer program are discussed. (JN)
General cardinality genetic algorithms
Koehler; Bhattacharyya; Vose
1997-01-01
A complete generalization of the Vose genetic algorithm model from the binary to higher cardinality case is provided. Boolean AND and EXCLUSIVE-OR operators are replaced by multiplication and addition over rings of integers. Walsh matrices are generalized with finite Fourier transforms for higher cardinality usage. Comparison of results to the binary case are provided. PMID:10021767
ERIC Educational Resources Information Center
Drake, Michael
2011-01-01
One debate that periodically arises in mathematics education is the issue of how to teach calculation more effectively. "Modern" approaches seem to initially favour mental calculation, informal methods, and the development of understanding before introducing written forms, while traditionalists tend to champion particular algorithms. The debate is…
The Xmath Integration Algorithm
ERIC Educational Resources Information Center
Bringslid, Odd
2009-01-01
The projects Xmath (Bringslid and Canessa, 2002) and dMath (Bringslid, de la Villa and Rodriguez, 2007) were supported by the European Commission in the so called Minerva Action (Xmath) and The Leonardo da Vinci programme (dMath). The Xmath eBook (Bringslid, 2006) includes algorithms into a wide range of undergraduate mathematical issues embedded…
A star pattern recognition algorithm for autonomous attitude determination
NASA Technical Reports Server (NTRS)
Van Bezooijen, R. W. H.
1990-01-01
The star-pattern recognition algorithm presented allows the advanced Full-sky Autonomous Star Tracker (FAST) device, such as the projected ASTROS II system of the Mariner Mark II planetary spacecraft, to reliably ascertain attitude about all three axes. An ASTROS II-based FAST, possessing an 11.5 x 11.5 deg field of view and 8-arcsec accuracy, can when integrated with an all-sky data base of 4100 guide stars determine its attitude in about 1 sec, with a success rate close to 100 percent. The present recognition algorithm can also be used for automating the acquisition of celestial targets by astronomy telescopes, autonomously updating the attitude of gyro-based attitude control systems, and automating ground-based attitude reconstruction.
Genetic Algorithms and Local Search
NASA Technical Reports Server (NTRS)
Whitley, Darrell
1996-01-01
The first part of this presentation is a tutorial level introduction to the principles of genetic search and models of simple genetic algorithms. The second half covers the combination of genetic algorithms with local search methods to produce hybrid genetic algorithms. Hybrid algorithms can be modeled within the existing theoretical framework developed for simple genetic algorithms. An application of a hybrid to geometric model matching is given. The hybrid algorithm yields results that improve on the current state-of-the-art for this problem.
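A hybrid (memetic) genetic algorithm of the kind described interleaves standard GA operators with a local search applied to each offspring. The sketch below uses illustrative operators and parameters, not those of the presentation:

```python
import random

random.seed(2)

def hill_climb(x, f, step=0.1, tries=20):
    # Simple stochastic local search: the refinement that makes the GA "hybrid".
    fx = f(x)
    for _ in range(tries):
        cand = [xi + random.uniform(-step, step) for xi in x]
        fc = f(cand)
        if fc < fx:
            x, fx = cand, fc
    return x

def hybrid_ga(f, dim=2, pop_size=20, gens=40):
    # Minimal hybrid GA: truncation selection, blend crossover, Gaussian
    # mutation, and a Lamarckian hill-climb applied to every child.
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=f)
        parents = pop[:pop_size // 2]          # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [(ai + bi) / 2 + random.gauss(0, 0.1) for ai, bi in zip(a, b)]
            children.append(hill_climb(child, f))  # local refinement
        pop = parents + children
    return min(pop, key=f)

best = hybrid_ga(lambda x: sum(v * v for v in x))
```

Because the locally improved children re-enter the population, such hybrids can still be modeled within the simple-GA theoretical framework, as the presentation notes.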
Reactive Collision Avoidance Algorithm
NASA Technical Reports Server (NTRS)
Scharf, Daniel; Acikmese, Behcet; Ploen, Scott; Hadaegh, Fred
2010-01-01
The reactive collision avoidance (RCA) algorithm allows a spacecraft to find a fuel-optimal trajectory for avoiding an arbitrary number of colliding spacecraft in real time while accounting for acceleration limits. In addition to spacecraft, the technology can be used for vehicles that can accelerate in any direction, such as helicopters and submersibles. In contrast to existing, passive algorithms that simultaneously design trajectories for a cluster of vehicles working to achieve a common goal, RCA is implemented onboard spacecraft only when an imminent collision is detected, and then plans a collision avoidance maneuver for only that host vehicle, thus preventing a collision in an off-nominal situation for which passive algorithms cannot. An example scenario for such a situation might be when a spacecraft in the cluster is approaching another one, but enters safe mode and begins to drift. Functionally, the RCA detects colliding spacecraft, plans an evasion trajectory by solving the Evasion Trajectory Problem (ETP), and then recovers after the collision is avoided. A direct optimization approach was used to develop the algorithm so it can run in real time. In this innovation, a parameterized class of avoidance trajectories is specified, and then the optimal trajectory is found by searching over the parameters. The class of trajectories is selected as bang-off-bang as motivated by optimal control theory. That is, an avoiding spacecraft first applies full acceleration in a constant direction, then coasts, and finally applies full acceleration to stop. The parameter optimization problem can be solved offline and stored as a look-up table of values. Using a look-up table allows the algorithm to run in real time. Given a colliding spacecraft, the properties of the collision geometry serve as indices of the look-up table that gives the optimal trajectory. For multiple colliding spacecraft, the set of trajectories that avoid all spacecraft is rapidly searched on
Park, Woon Bae; Singh, Satendra Pal; Sohn, Kee-Sun
2014-02-12
Most of the novel phosphors that appear in the literature are either variants of well-known materials or hybrid materials consisting of well-known materials. This situation has led to intellectual property (IP) complications in industry, and several lawsuits have been the result. Therefore, the definition of a novel phosphor for use in light-emitting diodes should be clarified. A recent trend in phosphor-related IP applications has been to focus on the novel crystallographic structure, so that a slight composition variance and/or a hybrid of a well-known material would not qualify from either a scientific or an industrial point of view. In our previous studies, we employed a systematic materials discovery strategy combining heuristics optimization and a high-throughput process to secure the discovery of genuinely novel and brilliant phosphors that would be immediately ready for use in light-emitting diodes. Despite such an achievement, this strategy requires further refinement to prove its versatility under any circumstance. To meet these demands, we improved our discovery strategy in the present investigation by incorporating an elitism-involved non-dominated sorting genetic algorithm (NSGA-II) that would guarantee the discovery of truly novel phosphors. Using the improved discovery strategy, we discovered an Eu(2+)-doped AB5X8 (A = Sr or Ba, B = Si and Al, X = O and N) phosphor in an orthorhombic structure (A21am) with lattice parameters a = 9.48461(3) Å, b = 13.47194(6) Å, c = 5.77323(2) Å, α = β = γ = 90°, which cannot be found in any of the existing inorganic compound databases. PMID:24437942
Multi-objective design optimization of the transverse gaseous jet in supersonic flows
NASA Astrophysics Data System (ADS)
Huang, Wei; Yang, Jun; Yan, Li
2014-01-01
The mixing process between the injectant and the supersonic crossflow is one of the important issues in the design of the scramjet engine, and efficient mixing has a great impact on combustion efficiency. A hovering vortex is formed between the separation region and the barrel shock wave, and this may be induced by the large negative density gradient. The separation region provides a good mixing area for the injectant and the subsonic boundary layer. In the current study, the transverse injection flow field with a freestream Mach number of 3.5 has been optimized by the non-dominated sorting genetic algorithm (NSGA-II) coupled with the Kriging surrogate model, and the variance analysis method and the extreme difference analysis method have been employed to evaluate the values of the objective functions. The obtained results show that the jet-to-crossflow pressure ratio is the most important design variable for the transverse injection flow field, and the injectant molecular weight and the slot width should be considered for the mixing process between the injectant and the supersonic crossflow. There exists an optimal penetration height for the mixing efficiency, and its value is about 14.3 mm in the range considered in the current study. A larger penetration height produces a larger total pressure loss, and there must be a tradeoff between these two objective functions. In addition, this study demonstrates that the multi-objective design optimization method with the data mining technique can be used efficiently to explore the relationship between the design variables and the objective functions.
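The NSGA-II machinery used above rests on non-dominated sorting of candidate designs. The sketch below is a minimal, unoptimized version of that sorting step only (the quadratic scan and the example objective pairs are illustrative assumptions, not the authors' implementation), ranking points into Pareto fronts under minimization of both objectives.

```python
def non_dominated_sort(points):
    """Rank points (minimization on every objective) into Pareto fronts,
    the sorting step at the heart of NSGA-II. Quadratic-time sketch."""
    def dominates(a, b):
        # a dominates b: no worse in every objective, strictly better in one
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    remaining = list(range(len(points)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# Hypothetical objective pairs, e.g. (total pressure loss, -mixing efficiency)
pts = [(1.0, 4.0), (2.0, 3.0), (3.0, 3.0), (4.0, 1.0)]
fronts = non_dominated_sort(pts)
```

Point (3.0, 3.0) is dominated by (2.0, 3.0), so it falls to the second front; the other three form the Pareto front among which the tradeoff must be struck.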
Data bank homology search algorithm with linear computation complexity.
Strelets, V B; Ptitsyn, A A; Milanesi, L; Lim, H A
1994-06-01
A new algorithm for data bank homology search is proposed. The principal advantages of the new algorithm are: (i) linear computation complexity; (ii) low memory requirements; and (iii) high sensitivity to the presence of local region homology. The algorithm first calculates indicative matrices of k-tuple 'realization' in the query sequence and then searches for an appropriate number of matching k-tuples within a narrow range in database sequences. It does not require k-tuple coordinates tabulation and in-memory placement for database sequences. The algorithm is implemented in a program for execution on PC-compatible computers and tested on PIR and GenBank databases with good results. A few modifications designed to improve the selectivity are also discussed. As an application example, the search for homology of the mouse homeotic protein HOX 3.1 is given. PMID:7922689
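The k-tuple scheme outlined above can be illustrated in a few lines. The following is a simplified sketch, not the published program: the function names and the fixed window size are assumptions, a plain presence set stands in for the indicative matrix, and a sliding window counts matching k-tuples in time linear in the database length.

```python
def ktuple_presence(query, k):
    """Indicative set: which k-tuples occur anywhere in the query."""
    return {query[i:i + k] for i in range(len(query) - k + 1)}

def window_hits(db_seq, present, k, window):
    """For each window start in a database sequence, count positions whose
    k-tuple also occurs in the query. One pass, O(len(db_seq)) time."""
    hit = [1 if db_seq[i:i + k] in present else 0
           for i in range(len(db_seq) - k + 1)]
    scores = [sum(hit[:window])]
    for i in range(window, len(hit)):
        # slide the window: add the entering position, drop the leaving one
        scores.append(scores[-1] + hit[i] - hit[i - window])
    return scores

present = ktuple_presence("ACGTACGT", 3)
scores = window_hits("TTACGTAA", present, 3, window=4)
```

A database sequence would be reported as a candidate homolog when some window score exceeds a tuned threshold, avoiding any tabulation of k-tuple coordinates for the database.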
Tomasz Plawski, J. Hovater
2010-09-01
A digital low level radio frequency (RF) system typically incorporates either a heterodyne or direct sampling technique, followed by fast ADCs, then an FPGA, and finally a transmitting DAC. This universal platform opens up the possibilities for a variety of control algorithm implementations. The foremost concern for an RF control system is cavity field stability, and to meet the required quality of regulation, the chosen control system needs to have sufficient feedback gain. In this paper we will investigate the effectiveness of the regulation for three basic control system algorithms: I&Q (In-phase and Quadrature), Amplitude & Phase and digital SEL (Self Exciting Loop) along with the example of the Jefferson Lab 12 GeV cavity field control system.
NASA Technical Reports Server (NTRS)
Arenstorf, Norbert S.; Jordan, Harry F.
1987-01-01
A barrier is a method for synchronizing a large number of concurrent computer processes. After considering some basic synchronization mechanisms, a collection of barrier algorithms with either linear or logarithmic depth is presented. A graphical model is described that profiles the execution of the barriers and other parallel programming constructs. This model shows how the interaction between the barrier algorithms and the work that they synchronize can impact their performance. One result is that logarithmic tree-structured barriers show good performance when synchronizing fixed-length work, while linear self-scheduled barriers show better performance when synchronizing fixed-length work with an embedded critical section. The linear barriers are better able to exploit the process skew associated with critical sections. Timing experiments that support these conclusions, performed on an eighteen-processor Flex/32 shared-memory multiprocessor, are detailed.
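A minimal linear barrier of the kind discussed can be sketched with a shared counter and a condition variable. This is an illustrative Python version, not the Flex/32 code; the generation counter guards against spurious wakeups and makes the barrier reusable. Its linear depth comes from every arriving thread contending on the same lock, which the tree-structured barriers reduce to logarithmic depth.

```python
import threading

class LinearBarrier:
    """Central-counter barrier: each arriving thread increments a shared
    count; the last arrival advances the generation and wakes the rest."""
    def __init__(self, n):
        self.n = n
        self.count = 0
        self.gen = 0
        self.cond = threading.Condition()

    def wait(self):
        with self.cond:
            gen = self.gen
            self.count += 1
            if self.count == self.n:
                self.count = 0
                self.gen += 1
                self.cond.notify_all()   # last arrival releases everyone
            else:
                while gen == self.gen:   # loop guards spurious wakeups
                    self.cond.wait()

results = []
barrier = LinearBarrier(4)

def worker(i):
    barrier.wait()            # no thread proceeds until all four arrive
    results.append(i)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

After the join, every worker has passed the barrier, so `results` contains all four thread indices in some interleaving-dependent order.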
Algorithms, games, and evolution
Chastain, Erick; Livnat, Adi; Papadimitriou, Christos; Vazirani, Umesh
2014-01-01
Even the most seasoned students of evolution, starting with Darwin himself, have occasionally expressed amazement that the mechanism of natural selection has produced the whole of Life as we see it around us. There is a computational way to articulate the same amazement: “What algorithm could possibly achieve all this in a mere three and a half billion years?” In this paper we propose an answer: We demonstrate that in the regime of weak selection, the standard equations of population genetics describing natural selection in the presence of sex become identical to those of a repeated game between genes played according to multiplicative weight updates (MWUA), an algorithm known in computer science to be surprisingly powerful and versatile. MWUA maximizes a tradeoff between cumulative performance and entropy, which suggests a new view on the maintenance of diversity in evolution. PMID:24979793
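The MWUA rule itself is a one-line update per round. The sketch below is a toy illustration under stated assumptions (a fixed payoff vector and a small learning rate `epsilon`, loosely playing the role of weak selection), showing how repeated multiplicative updates concentrate weight on higher-payoff strategies while the normalization retains a full mixed distribution.

```python
def mwua(payoffs, epsilon, rounds):
    """Multiplicative weight updates over a fixed payoff vector: each
    strategy's weight is multiplied by (1 + epsilon * payoff) per round,
    mirroring how allele frequencies shift under weak selection."""
    w = [1.0] * len(payoffs)
    for _ in range(rounds):
        w = [wi * (1.0 + epsilon * p) for wi, p in zip(w, payoffs)]
    total = sum(w)
    return [wi / total for wi in w]   # normalized mixed strategy

freqs = mwua(payoffs=[0.2, 0.1, 0.0], epsilon=0.01, rounds=500)
```

With a small `epsilon`, weight shifts toward the best strategy only gradually, which is the entropy-preserving tradeoff the abstract highlights.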
NASA Technical Reports Server (NTRS)
Arenstorf, Norbert S.; Jordan, Harry F.
1989-01-01
A barrier is a method for synchronizing a large number of concurrent computer processes. After considering some basic synchronization mechanisms, a collection of barrier algorithms with either linear or logarithmic depth are presented. A graphical model is described that profiles the execution of the barriers and other parallel programming constructs. This model shows how the interaction between the barrier algorithms and the work that they synchronize can impact their performance. One result is that logarithmic tree structured barriers show good performance when synchronizing fixed length work, while linear self-scheduled barriers show better performance when synchronizing fixed length work with an imbedded critical section. The linear barriers are better able to exploit the process skew associated with critical sections. Timing experiments, performed on an eighteen processor Flex/32 shared memory multiprocessor that support these conclusions, are detailed.
NASA Astrophysics Data System (ADS)
Deprit, André; Palacián, Jesús; Deprit, Etienne
2001-03-01
The relegation algorithm extends the method of normalization by Lie transformations. Given a Hamiltonian that is a power series ℋ = ℋ₀ + ɛℋ₁ + ... of a small parameter ɛ, normalization constructs a map which converts the principal part ℋ₀ into an integral of the transformed system — relegation does the same for an arbitrary function ℋ[G]. If the Lie derivative induced by ℋ[G] is semi-simple, a double recursion produces the generator of the relegating transformation. The relegation algorithm is illustrated with an elementary example borrowed from galactic dynamics; the exercise serves as a standard against which to test software implementations. Relegation is also applied to the more substantial example of a Keplerian system perturbed by radiation pressure emanating from a rotating source.
Genetic Algorithm for Optimization: Preprocessor and Algorithm
NASA Technical Reports Server (NTRS)
Sen, S. K.; Shaykhian, Gholam A.
2006-01-01
Genetic algorithm (GA), inspired by Darwin's theory of evolution and employed to solve optimization problems - unconstrained or constrained - uses an evolutionary process. A GA has several parameters, such as the population size, search space, crossover and mutation probabilities, and fitness criterion. These parameters are not universally known or determined a priori for all problems; depending on the problem at hand, they need to be chosen so that the resulting GA performs best. We present here a preprocessor that achieves just that, i.e., it determines, for a specified problem, the foregoing parameters so that the consequent GA is best suited to the problem. We also stress the need for such a preprocessor both for quality (error) and for cost (complexity) of producing the solution. The preprocessor includes, as its first step, making use of all available information, such as the nature/character of the function/system, the search space, physical/laboratory experimentation (if already done/available), and the physical environment. It also includes information that can be generated through any means - deterministic/nondeterministic/graphics. Instead of attempting a solution of the problem straightaway through a GA without using knowledge of the character of the system, we can do a much better job of producing a solution by using the information generated in the very first step of the preprocessor. We therefore advocate the use of a preprocessor to solve a real-world optimization problem, including NP-complete ones, before using the statistically most appropriate GA. We also include such a GA for unconstrained function optimization problems.
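For concreteness, a plain bit-string GA exposing the parameters the abstract enumerates (population size, crossover and mutation probabilities, fitness criterion) might look as follows. This is a generic textbook sketch, not the authors' preprocessor-tuned GA; the default parameter values are arbitrary illustrative choices, exactly the kind of settings a preprocessor would select per problem.

```python
import random

def genetic_algorithm(fitness, n_bits, pop_size=40, p_cross=0.9,
                      p_mut=0.02, generations=100, seed=1):
    """Plain bit-string GA: tournament selection, one-point crossover,
    bitwise mutation. Returns the fittest individual in the final pool."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]

    def tournament():
        a, b = rng.sample(pop, 2)
        return max(a, b, key=fitness)   # fitter of two random picks

    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            if rng.random() < p_cross:
                cut = rng.randrange(1, n_bits)   # one-point crossover
                p1 = p1[:cut] + p2[cut:]
            child = [b ^ 1 if rng.random() < p_mut else b for b in p1]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# OneMax toy problem: maximize the number of 1-bits in the string
best = genetic_algorithm(fitness=sum, n_bits=20)
```

Sweeping `pop_size`, `p_cross`, and `p_mut` over such a GA and scoring the outcomes is one way a preprocessor could pick problem-specific settings before the production run.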
An efficient algorithm for function optimization: modified stem cells algorithm
NASA Astrophysics Data System (ADS)
Taherdangkoo, Mohammad; Paziresh, Mahsa; Yazdi, Mehran; Bagheri, Mohammad
2013-03-01
In this paper, we propose an optimization algorithm based on the intelligent behavior of stem cell swarms in reproduction and self-organization. Optimization algorithms such as the Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO) and Artificial Bee Colony (ABC) algorithms can give near-optimal solutions to linear and non-linear problems in many applications; however, in some cases they can become trapped in local optima. The Stem Cells Algorithm (SCA) is an optimization algorithm inspired by the natural behavior of stem cells in evolving themselves into new and improved cells. The SCA successfully avoids the local optima problem. In this paper, we have made small changes in the implementation of this algorithm to obtain improved performance over previous versions. Using a series of benchmark functions, we assess the performance of the proposed algorithm and compare it with that of the other aforementioned optimization algorithms. The obtained results demonstrate the superiority of the Modified Stem Cells Algorithm (MSCA).
Algorithm Visualization System for Teaching Spatial Data Algorithms
ERIC Educational Resources Information Center
Nikander, Jussi; Helminen, Juha; Korhonen, Ari
2010-01-01
TRAKLA2 is a web-based learning environment for data structures and algorithms. The system delivers automatically assessed algorithm simulation exercises that are solved using a graphical user interface. In this work, we introduce a novel learning environment for spatial data algorithms, SDA-TRAKLA2, which has been implemented on top of the…
NASA Astrophysics Data System (ADS)
Reda, Ibrahim; Andreas, Afshin
2015-04-01
The Solar Position Algorithm (SPA) calculates the solar zenith and azimuth angles in the period from the year -2000 to 6000, with uncertainties of +/- 0.0003 degrees based on the date, time, and location on Earth. SPA is implemented in C; in addition to being available for download, an online calculator using this code is available at http://www.nrel.gov/midc/solpos/spa.html.
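For contrast with SPA's ±0.0003-degree accuracy, the familiar low-accuracy spherical-geometry estimate of the solar zenith angle fits in a few lines. The sketch below is NOT the SPA: the Cooper declination approximation and the use of idealized local solar time are simplifying assumptions, while SPA adds full ephemeris, nutation, and refraction corrections to reach its stated uncertainty.

```python
import math

def solar_zenith_deg(lat_deg, day_of_year, solar_hour):
    """Textbook estimate of the solar zenith angle in degrees from
    latitude, day of year, and local solar time (12.0 = solar noon)."""
    # Cooper's approximation for the solar declination
    decl = math.radians(23.45) * math.sin(
        2.0 * math.pi * (284 + day_of_year) / 365.0)
    hour_angle = math.radians(15.0 * (solar_hour - 12.0))
    lat = math.radians(lat_deg)
    # Spherical law of cosines for the zenith angle
    cos_z = (math.sin(lat) * math.sin(decl)
             + math.cos(lat) * math.cos(decl) * math.cos(hour_angle))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_z))))

# Equator, near-equinox day, solar noon: sun almost directly overhead
z = solar_zenith_deg(0.0, 81, 12.0)
```

Such an approximation is typically good to a fraction of a degree, which is why high-precision applications like concentrating solar power rely on SPA instead.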
Quantum defragmentation algorithm
Burgarth, Daniel; Giovannetti, Vittorio
2010-08-15
In this addendum to our paper [D. Burgarth and V. Giovannetti, Phys. Rev. Lett. 99, 100501 (2007)] we prove that during the transformation that allows one to enforce control by relaxation on a quantum system, the ancillary memory can be kept at a finite size, independently from the fidelity one wants to achieve. The result is obtained by introducing the quantum analog of defragmentation algorithms which are employed for efficiently reorganizing classical information in conventional hard disks.
NOSS altimeter algorithm specifications
NASA Technical Reports Server (NTRS)
Hancock, D. W.; Forsythe, R. G.; Mcmillan, J. D.
1982-01-01
A description of all algorithms required for altimeter processing is given. Each description includes title, description, inputs/outputs, general algebraic sequences and data volume. All required input/output data files are described and the computer resources required for the entire altimeter processing system were estimated. The majority of the data processing requirements for any radar altimeter of the Seasat-1 type are scoped. Additions and deletions could be made for the specific altimeter products required by other projects.
NASA Astrophysics Data System (ADS)
Nardi, Jerry
The Satellite-Aided Search and Rescue (Sarsat) system is designed to detect and locate distress beacons using satellite receivers. Algorithms used for calculating the positions of 406 MHz beacons and 121.5/243 MHz beacons are presented. The techniques for matching, resolving and averaging calculated locations from multiple satellite passes are also described, along with results pertaining to single-pass and multiple-pass location estimate accuracy.
A semisimultaneous inversion algorithm for SAGE III
NASA Astrophysics Data System (ADS)
Ward, Dale M.
2002-12-01
The Stratospheric Aerosol and Gas Experiment (SAGE) III instrument was successfully launched into orbit on 10 December 2001. The planned operational species separation inversion algorithm will utilize a stepwise retrieval strategy. This paper presents an alternative, semisimultaneous species separation inversion that simultaneously retrieves all species over user-specified vertical intervals or blocks. By overlapping these vertical blocks, retrieved species profiles over the entire vertical range of the measurements are obtained. The semisimultaneous retrieval approach provides a more straightforward method for evaluating the error coupling that occurs among the retrieved profiles due to various types of input uncertainty. Simulation results are presented to show how the semisimultaneous inversion can enhance understanding of the SAGE III retrieval process. In the future, the semisimultaneous inversion algorithm will be used to help evaluate the results and performance of the operational inversion. Compared to SAGE II, SAGE III will provide expanded and more precise spectral measurements. This alone is shown to significantly reduce the uncertainties in the retrieved ozone, nitrogen dioxide, and aerosol extinction profiles for SAGE III. Additionally, the well-documented concern that SAGE II retrievals are biased by the level of volcanic aerosol is greatly alleviated for SAGE III.
Algorithms for builder guidelines
Balcomb, J.D.; Lekov, A.B.
1989-06-01
The Builder Guidelines are designed to make simple, appropriate guidelines available to builders for their specific localities. Builders may select from passive solar and conservation strategies with different performance potentials. They can then compare the calculated results for their particular house design with a typical house in the same location. Algorithms used to develop the Builder Guidelines are described. The main algorithms used are the monthly solar load ratio (SLR) method for winter heating, the diurnal heat capacity (DHC) method for temperature swing, and a new simplified calculation method (McCool) for summer cooling. This paper applies the algorithms to estimate the performance potential of passive solar strategies, and the annual heating and cooling loads of various combinations of conservation and passive solar strategies. The basis of the McCool method is described. All three methods are implemented in a microcomputer program used to generate the guideline numbers. Guidelines for Denver, Colorado, are used to illustrate the results. The structure of the guidelines and worksheet booklets is also presented. 5 refs., 3 tabs.
Symbalisty, E.M.D.; Zinn, J.; Whitaker, R.W.
1995-09-01
This paper describes the history, physics, and algorithms of the computer code RADFLO and its extension HYCHEM. RADFLO is a one-dimensional, radiation-transport hydrodynamics code that is used to compute early-time fireball behavior for low-altitude nuclear bursts. The primary use of the code is the prediction of optical signals produced by nuclear explosions. It has also been used to predict thermal and hydrodynamic effects that are used for vulnerability and lethality applications. Another closely related code, HYCHEM, is an extension of RADFLO which includes the effects of nonequilibrium chemistry. Some examples of numerical results will be shown, along with scaling expressions derived from those results. We describe new computations of the structures and luminosities of steady-state shock waves and radiative thermal waves, which have been extended to cover a range of ambient air densities for high-altitude applications. We also describe recent modifications of the codes to use a one-dimensional analog of the CAVEAT fluid-dynamics algorithm in place of the former standard Richtmyer-von Neumann algorithm.
Large scale tracking algorithms.
Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry
2015-01-01
Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low-resolution sensors, "blob" tracking is the norm. For higher-resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
Baudoin, T; Grgić, M V; Zadravec, D; Geber, G; Tomljenović, D; Kalogjera, L
2013-12-01
ENT navigation has given new opportunities in performing Endoscopic Sinus Surgery (ESS) and improving the surgical outcome of patients' treatment. ESS assisted by a navigation system could be called Navigated Endoscopic Sinus Surgery (NESS). As it is generally accepted that NESS should be performed only in cases of complex anatomy and pathology, it has not yet been established as a state-of-the-art procedure and thus is not used on a daily basis. This paper presents an algorithm for use of a navigation system for basic ESS in the treatment of chronic rhinosinusitis (CRS). The algorithm includes five units that should be highlighted using a navigation system. They are as follows: 1) nasal vestibule unit, 2) OMC unit, 3) anterior ethmoid unit, 4) posterior ethmoid unit, and 5) sphenoid unit. Each unit has the shape of a triangular pyramid and consists of at least four reference points or landmarks. As many landmarks as possible should be marked when determining one of the five units. Navigated orientation in each unit should always precede any surgical intervention. The algorithm should improve the learning curve of trainees and enable surgeons to use the navigation system routinely and systematically. PMID:24260766
Developing dataflow algorithms
Hiromoto, R.E.; Bohm, A.P.W. (Dept. of Computer Science)
1991-01-01
Our goal is to study the performance of a collection of numerical algorithms written in Id which is available to users of Motorola's dataflow machine Monsoon. We will study the dataflow performance of these implementations first under the parallel profiling simulator Id World, and second in comparison with actual dataflow execution on the Motorola Monsoon. This approach will allow us to follow the computational and structural details of the parallel algorithms as implemented on dataflow systems. When running our programs on the Id World simulator we will examine the behaviour of algorithms at the dataflow graph level, where each instruction takes one timestep and data becomes available at the next. This implies that important machine-level phenomena, such as the effect that global communication time may have on the computation, are not addressed. These phenomena will be addressed when we run our programs on the Monsoon hardware. Potential ramifications for compilation techniques, functional programming style, and program efficiency are significant to this study. In a later stage of our research we will compare the efficiency of Id programs to programs written in other languages. This comparison will be of a rather qualitative nature, as there are too many degrees of freedom in a language implementation for a quantitative comparison to be of interest. We begin our study by examining one routine that exhibits distinctive computational characteristics: the Fast Fourier Transform, which combines computational parallelism with data dependences between the butterfly shuffles.
Evaluating super resolution algorithms
NASA Astrophysics Data System (ADS)
Kim, Youn Jin; Park, Jong Hyun; Shin, Gun Shik; Lee, Hyun-Seung; Kim, Dong-Hyun; Park, Se Hyeok; Kim, Jaehyun
2011-01-01
This study intends to establish a sound testing and evaluation methodology, based upon human visual characteristics, for appreciating image restoration accuracy, and to compare the subjective results with predictions by some objective evaluation methods. In total, six different super resolution (SR) algorithms were selected: iterative back-projection (IBP), robust SR, maximum a posteriori (MAP), projections onto convex sets (POCS), non-uniform interpolation, and a frequency domain approach. The performance comparison between the SR algorithms in terms of their restoration accuracy was carried out both subjectively and objectively. The former methodology relies upon the paired comparison method, which involves the simultaneous scaling of two stimuli with respect to image restoration accuracy. For the latter, both conventional image quality metrics and color difference methods were implemented. Consequently, POCS and non-uniform interpolation outperformed the others in an ideal situation, while restoration-based methods reproduce the high-resolution (HR) image more accurately in a real-world case where prior information about the blur kernel remains unknown. However, the noise-added image could not be restored successfully by any of those methods. The latest International Commission on Illumination (CIE) standard color difference equation, CIEDE2000, was found to predict the subjective results accurately and outperformed conventional methods for evaluating the restoration accuracy of those SR algorithms.
JPSS CGS Tools For Rapid Algorithm Updates
NASA Astrophysics Data System (ADS)
Smith, D. C.; Grant, K. D.
2011-12-01
The National Oceanic and Atmospheric Administration (NOAA) and National Aeronautics and Space Administration (NASA) are jointly acquiring the next-generation civilian weather and environmental satellite system: the Joint Polar Satellite System (JPSS). JPSS will contribute the afternoon orbit component and ground processing system of the restructured National Polar-orbiting Operational Environmental Satellite System (NPOESS). As such, JPSS replaces the current Polar-orbiting Operational Environmental Satellites (POES) managed by NOAA and the ground processing component of both POES and the Defense Meteorological Satellite Program (DMSP) replacement known as the Defense Weather Satellite System (DWSS), managed by the Department of Defense (DoD). The JPSS satellites will carry a suite of sensors designed to collect meteorological, oceanographic, climatological, and solar-geophysical observations of the earth, atmosphere, and space. The ground processing system for JPSS is known as the JPSS Common Ground System (JPSS CGS), and consists of a Command, Control, and Communications Segment (C3S) and the Interface Data Processing Segment (IDPS). Both are developed by Raytheon Intelligence and Information Systems (IIS). The Interface Data Processing Segment will process NPOESS Preparatory Project, Joint Polar Satellite System and Defense Weather Satellite System satellite data to provide environmental data products to NOAA and DoD processing centers operated by the United States government. Under NPOESS, Northrop Grumman Aerospace Systems Algorithms and Data Products (A&DP) organization was responsible for the algorithms that produce the EDRs, including their quality aspects. For JPSS, that responsibility has transferred to NOAA's Center for Satellite Applications & Research (STAR). As the Calibration and Validation (Cal/Val) activities move forward following both the NPP launch and subsequent JPSS and DWSS launches, rapid algorithm updates may be required. Raytheon and
A software tool for graphically assembling damage identification algorithms
NASA Astrophysics Data System (ADS)
Allen, David W.; Clough, Joshua A.; Sohn, Hoon; Farrar, Charles R.
2003-08-01
At Los Alamos National Laboratory (LANL), various algorithms for structural health monitoring problems have been explored in the last 5 to 6 years. The original DIAMOND (Damage Identification And MOdal aNalysis of Data) software was developed as a package of modal analysis tools with some frequency domain damage identification algorithms included. Since the conception of DIAMOND, the Structural Health Monitoring (SHM) paradigm at LANL has been cast in the framework of statistical pattern recognition, promoting data driven damage detection approaches. To reflect this shift and to allow user-friendly analyses of data, a new piece of software, DIAMOND II is under development. The Graphical User Interface (GUI) of the DIAMOND II software is based on the idea of GLASS (Graphical Linking and Assembly of Syntax Structure) technology, which is currently being implemented at LANL. GLASS is a Java based GUI that allows drag and drop construction of algorithms from various categories of existing functions. In the platform of the underlying GLASS technology, DIAMOND II is simply a module specifically targeting damage identification applications. Users can assemble various routines, building their own algorithms or benchmark testing different damage identification approaches without writing a single line of code.
An algorithm for constructing polynomial systems whose solution space characterizes quantum circuits
NASA Astrophysics Data System (ADS)
Gerdt, Vladimir P.; Severyanov, Vasily M.
2006-05-01
An algorithm and its first implementation in C# are presented for assembling arbitrary quantum circuits on the basis of Hadamard and Toffoli gates and for constructing the multivariate polynomial systems over the finite field Z₂ that arise when applying Feynman's sum-over-paths approach to quantum circuits. The matrix elements determined by a circuit can be computed by counting the number of common roots in Z₂ of the polynomial system associated with the circuit. To determine the number of solutions in Z₂ of the output polynomial system, one can use the Gröbner bases method and the relevant algorithms for computing Gröbner bases.
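For tiny systems, the root count over Z₂ can be cross-checked by brute force before reaching for Gröbner bases. The sketch below enumerates all 0/1 assignments (exponential in the number of variables, so only a sanity check); the toy system is an invented example, not one generated from an actual circuit.

```python
from itertools import product

def count_common_roots(polys, n_vars):
    """Brute-force count of common roots in Z_2^n of a polynomial system,
    each polynomial given as a Python function of the 0/1 variables and
    evaluated modulo 2. Exponential in n_vars."""
    return sum(
        all(p(*x) % 2 == 0 for p in polys)
        for x in product((0, 1), repeat=n_vars)
    )

# Toy system over Z_2: {x + y = 0, x*y = 0} has the single root (0, 0)
n = count_common_roots([lambda x, y: x + y, lambda x, y: x * y], 2)
```

A Gröbner-basis computation over Z₂ yields the same count without enumerating the 2ⁿ assignments, which is what makes the approach viable for larger circuits.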
Design of robust systolic algorithms
Varman, P.J.; Fussell, D.S.
1983-01-01
A primary reason for the susceptibility of systolic algorithms to faults is their strong dependence on the interconnection between the processors in a systolic array. A technique to transform any linear systolic algorithm into an equivalent pipelined algorithm that executes on arbitrary trees is presented. 5 references.
High-performance combinatorial algorithms
Pinar, Ali
2003-10-31
Combinatorial algorithms have long played an important role in many applications of scientific computing such as sparse matrix computations and parallel computing. The growing importance of combinatorial algorithms in emerging applications like computational biology and scientific data mining calls for development of a high performance library for combinatorial algorithms. Building such a library requires a new structure for combinatorial algorithms research that enables fast implementation of new algorithms. We propose a structure for combinatorial algorithms research that mimics the research structure of numerical algorithms. Numerical algorithms research is nicely complemented with high performance libraries, and this can be attributed to the fact that there are only a small number of fundamental problems that underlie numerical solvers. Furthermore there are only a handful of kernels that enable implementation of algorithms for these fundamental problems. Building a similar structure for combinatorial algorithms will enable efficient implementations for existing algorithms and fast implementation of new algorithms. Our results will promote utilization of combinatorial techniques and will impact research in many scientific computing applications, some of which are listed.
Multipartite entanglement in quantum algorithms
Bruss, D.; Macchiavello, C.
2011-05-15
We investigate the entanglement features of the quantum states employed in quantum algorithms. In particular, we analyze the multipartite entanglement properties in the Deutsch-Jozsa, Grover, and Simon algorithms. Our results show that for these algorithms most instances involve multipartite entanglement.
Algorithm for Constructing Contour Plots
NASA Technical Reports Server (NTRS)
Johnson, W.; Silva, F.
1984-01-01
General computer algorithm developed for construction of contour plots. Algorithm accepts as input data values at set of points irregularly distributed over plane. Algorithm based on interpolation scheme: points in plane connected by straight-line segments to form set of triangles. Program written in FORTRAN IV.
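The core step of such a triangulation-based contour algorithm is linear interpolation along triangle edges: a contour level crosses an edge wherever it lies between the two endpoint values, and the two crossing points inside a triangle form one straight contour segment. A minimal Python sketch of that step (hypothetical function names, not the original FORTRAN IV program):

```python
def edge_crossing(p1, v1, p2, v2, level):
    """Linearly interpolate where the contour level crosses edge (p1, p2)."""
    t = (level - v1) / (v2 - v1)
    return (p1[0] + t * (p2[0] - p1[0]), p1[1] + t * (p2[1] - p1[1]))

def triangle_contour_segment(tri, vals, level):
    """Return the straight contour segment of `level` inside one triangle, or None.

    tri: three (x, y) vertices; vals: the data value at each vertex.
    """
    pts = []
    for i in range(3):
        j = (i + 1) % 3
        v1, v2 = vals[i], vals[j]
        # The level crosses this edge iff it lies strictly between the endpoint values.
        if (v1 - level) * (v2 - level) < 0:
            pts.append(edge_crossing(tri[i], v1, tri[j], v2, level))
    return tuple(pts) if len(pts) == 2 else None

# Example: the 0.5 contour through a triangle with values 0, 1, 0 at its corners.
seg = triangle_contour_segment([(0, 0), (1, 0), (0, 1)], [0.0, 1.0, 0.0], 0.5)
```

Chaining such segments across all triangles of the mesh yields the contour lines for each level.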
Polynomial Algorithms for Item Matching.
ERIC Educational Resources Information Center
Armstrong, Ronald D.; Jones, Douglas H.
1992-01-01
Polynomial algorithms are presented that are used to solve selected problems in test theory, and computational results from sample problems with several hundred decision variables are provided that demonstrate the benefits of these algorithms. The algorithms are based on optimization theory in networks (graphs). (SLD)
Verifying a Computer Algorithm Mathematically.
ERIC Educational Resources Information Center
Olson, Alton T.
1986-01-01
Presents an example of mathematics from an algorithmic point of view, with emphasis on the design and verification of this algorithm. The program involves finding roots for algebraic equations using the half-interval search algorithm. The program listing is included. (JN)
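The half-interval search the abstract refers to is the classic bisection method: repeatedly halve a bracketing interval, keeping the half on which the function changes sign. A generic Python sketch (not the article's program listing):

```python
def half_interval_search(f, lo, hi, tol=1e-10, max_iter=200):
    """Find a root of f in [lo, hi], assuming f(lo) and f(hi) differ in sign."""
    f_lo = f(lo)
    if f_lo * f(hi) > 0:
        raise ValueError("f must change sign on [lo, hi]")
    for _ in range(max_iter):
        mid = (lo + hi) / 2.0
        f_mid = f(mid)
        if f_mid == 0.0 or (hi - lo) / 2.0 < tol:
            return mid
        if f_lo * f_mid < 0:
            hi = mid               # root lies in the left half
        else:
            lo, f_lo = mid, f_mid  # root lies in the right half
    return (lo + hi) / 2.0

# Example: the real root of x^3 - 2x - 5 on [2, 3].
root = half_interval_search(lambda x: x**3 - 2 * x - 5, 2.0, 3.0)
```

Each iteration halves the interval, so the error bound after n steps is (hi - lo) / 2**n, which makes the verification argument straightforward.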
YAMPA: Yet Another Matching Pursuit Algorithm for compressive sensing
NASA Astrophysics Data System (ADS)
Lodhi, Muhammad A.; Voronin, Sergey; Bajwa, Waheed U.
2016-05-01
State-of-the-art sparse recovery methods often rely on the restricted isometry property for their theoretical guarantees. However, they cannot explicitly incorporate metrics such as restricted isometry constants within their recovery procedures due to the computational intractability of calculating such metrics. This paper formulates an iterative algorithm, termed yet another matching pursuit algorithm (YAMPA), for recovery of sparse signals from compressive measurements. YAMPA differs from other pursuit algorithms in that: (i) it adapts to the measurement matrix using a threshold that is explicitly dependent on two computable coherence metrics of the matrix, and (ii) it does not require knowledge of the signal sparsity. Performance comparisons of YAMPA against other matching pursuit and approximate message passing algorithms are made for several types of measurement matrices. These results show that while state-of-the-art approximate message passing algorithms outperform other algorithms (including YAMPA) in the case of well-conditioned random matrices, they completely break down in the case of ill-conditioned measurement matrices. On the other hand, YAMPA and comparable pursuit algorithms not only result in reasonable performance for well-conditioned matrices, but their performance also degrades gracefully for ill-conditioned matrices. The paper also shows that YAMPA uniformly outperforms other pursuit algorithms for the case of thresholding parameters chosen in a clairvoyant fashion. Further, when combined with a simple and fast technique for selecting thresholding parameters in the case of ill-conditioned matrices, YAMPA outperforms other pursuit algorithms in the regime of low undersampling, although some of these algorithms can outperform YAMPA in the regime of high undersampling in this setting.
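For context, the greedy update shared by all matching pursuit variants can be sketched as below. This is plain matching pursuit on an orthonormal toy dictionary, not YAMPA itself; it omits the coherence-adaptive stopping threshold that distinguishes YAMPA:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matching_pursuit(y, atoms, n_iter=10, tol=1e-12):
    """Greedily approximate y as a sparse combination of unit-norm atoms.

    Returns the coefficient assigned to each dictionary atom.
    """
    residual = list(y)
    coeffs = [0.0] * len(atoms)
    for _ in range(n_iter):
        # Select the atom most correlated with the current residual.
        corr = [dot(residual, a) for a in atoms]
        k = max(range(len(atoms)), key=lambda i: abs(corr[i]))
        if abs(corr[k]) < tol:
            break
        coeffs[k] += corr[k]
        # Subtract the selected atom's contribution from the residual.
        residual = [r - corr[k] * a for r, a in zip(residual, atoms[k])]
    return coeffs

# Example: y = 2 * e1 + 3 * e3 in the standard (orthonormal) basis.
atoms = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
coeffs = matching_pursuit([2.0, 0.0, 3.0, 0.0], atoms)
```

For non-orthogonal, coherent dictionaries the selection step becomes unreliable, which is exactly the regime YAMPA's coherence-dependent threshold is designed to handle.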
Improved multiprocessor garbage collection algorithms
Newman, I.A.; Stallard, R.P.; Woodward, M.C.
1983-01-01
Outlines the results of an investigation of existing multiprocessor garbage collection algorithms and introduces two new algorithms which significantly improve some aspects of the performance of their predecessors. The two algorithms arise from different starting assumptions. One considers the case where the algorithm will terminate successfully whatever list structure is being processed and assumes that the extra data space should be minimised. The other seeks a very fast garbage collection time for list structures that do not contain loops. Results of both theoretical and experimental investigations are given to demonstrate the efficacy of the algorithms. 7 references.
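For orientation, the sequential mark-sweep scheme that multiprocessor collectors build on can be sketched as follows. This is a single-threaded illustration with hypothetical names, not either of the paper's concurrent algorithms:

```python
class Cell:
    """A list cell with arbitrary children; `marked` is the collector's flag."""
    def __init__(self, name):
        self.name = name
        self.children = []
        self.marked = False

def mark(roots):
    """Mark every cell reachable from the roots (iterative, loop-safe)."""
    stack = list(roots)
    while stack:
        cell = stack.pop()
        if not cell.marked:
            cell.marked = True
            stack.extend(cell.children)

def sweep(heap):
    """Return the live cells; unmarked cells are garbage. Clears marks for the next cycle."""
    live = [c for c in heap if c.marked]
    for c in live:
        c.marked = False
    return live

# Example: b is reachable from root a (via a loop); orphan is not.
a, b, orphan = Cell("a"), Cell("b"), Cell("orphan")
a.children.append(b)
b.children.append(a)   # a cycle, handled safely by the mark flag
mark([a])
live = sweep([a, b, orphan])
```

The two algorithms in the paper trade off exactly the properties visible here: the mark flag is what makes cyclic structures safe, and avoiding it (or shrinking the auxiliary data) is what buys speed on loop-free lists.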
NASA Technical Reports Server (NTRS)
Kasturi, Rangachar; Camps, Octavia; Coraor, Lee
2000-01-01
The research reported here is a part of NASA's Synthetic Vision System (SVS) project for the development of a High Speed Civil Transport Aircraft (HSCT). One of the components of the SVS is a module for detection of potential obstacles in the aircraft's flight path by analyzing the images captured by an on-board camera in real time. Design of such a module includes the selection and characterization of robust, reliable, and fast techniques and their implementation for execution in real time. This report describes the results of our research in realizing such a design. It is organized into three parts. Part I. Data modeling and camera characterization; Part II. Algorithms for detecting airborne obstacles; and Part III. Real time implementation of obstacle detection algorithms on the Datacube MaxPCI architecture. A list of publications resulting from this grant as well as a list of relevant publications resulting from prior NASA grants on this topic are presented.
Efficient multicomponent fuel algorithm
NASA Astrophysics Data System (ADS)
Torres, D. J.; O'Rourke, P. J.; Amsden, A. A.
2003-03-01
We derive equations for multicomponent fuel evaporation in airborne fuel droplets and wall films, and implement the model into KIVA-3V. Temporal and spatial variations in liquid droplet composition and temperature are not modelled but solved for by discretizing the interior of the droplet in an implicit and computationally efficient way. We find that an interior discretization is necessary to correctly compute the evolution of the droplet composition. The details of the one-dimensional numerical algorithm are described. Numerical simulations of multicomponent evaporation are performed for single droplets and compared to experimental data.
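The kind of implicit interior discretization the abstract refers to can be illustrated with a backward-Euler step of one-dimensional diffusion, solved efficiently by the tridiagonal (Thomas) algorithm. This is a generic sketch with hypothetical names and an assumed insulated-boundary slab, not the KIVA-3V implementation:

```python
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system: a = sub-, b = main, c = super-diagonal, d = RHS."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def implicit_diffusion_step(T, alpha):
    """One backward-Euler step of 1-D diffusion with insulated ends.

    alpha = D * dt / dx**2 (a hypothetical nondimensional coefficient).
    The implicit update is unconditionally stable, so large alpha is allowed.
    """
    n = len(T)
    a = [0.0] + [-alpha] * (n - 1)
    c = [-alpha] * (n - 1) + [0.0]
    b = [1.0 + alpha] + [1.0 + 2 * alpha] * (n - 2) + [1.0 + alpha]
    return thomas_solve(a, b, c, list(T))

# Example: a hot interior node relaxes toward its neighbors in one step.
T_new = implicit_diffusion_step([300.0, 300.0, 400.0, 300.0, 300.0], 0.5)
```

The tridiagonal solve costs O(n) per step, which is why an implicit interior discretization of each droplet stays computationally cheap.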
NASA Technical Reports Server (NTRS)
Vardi, A.
1984-01-01
The representation min t s.t. f_i(x) - t <= 0 for all i is examined. An active set strategy is designed that partitions the constraint functions into three sets: active, semi-active, and non-active. This technique helps prevent the zigzagging which often occurs when an active set strategy is used. Some of the inequality constraints are handled with slack variables. A trust region strategy is also used, in which at each iteration there is a sphere around the current point within which the local approximation of the function is trusted. The algorithm is implemented in a successful computer program. Numerical results are provided.