Science.gov

Sample records for algorithm ii nsga-ii

  1. Design of isolated buildings with S-FBI system subjected to near-fault earthquakes using NSGA-II algorithm

    NASA Astrophysics Data System (ADS)

    Ozbulut, O. E.; Silwal, B.

    2014-04-01

    This study investigates the optimum design parameters of a superelastic friction base isolator (S-FBI) system through a multi-objective genetic algorithm and performance-based evaluation approach. The S-FBI system consists of a flat steel-PTFE sliding bearing and a superelastic NiTi shape memory alloy (SMA) device. The sliding bearing limits the transfer of shear across the isolation interface and provides damping from sliding friction. The SMA device provides restoring-force capability to the isolation system together with additional damping. A three-story building is modeled with the S-FBI isolation system. Multi-objective numerical optimization that simultaneously minimizes isolation-level displacements and superstructure response is carried out with a genetic algorithm (GA) in order to optimize the S-FBI system. Nonlinear time history analyses of the building with the S-FBI system are performed. A set of 20 near-field ground motion records is used in the numerical simulations. Results show that the S-FBI system successfully controls the response of the buildings against near-fault earthquakes without sacrificing isolation efficacy or producing large isolation-level deformations.
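
    The multi-objective GA referenced here (and in most of the records below) relies on Pareto dominance to trade off isolation-level displacement against superstructure response. Below is a minimal sketch of the non-dominated sorting step at the heart of NSGA-II, in plain Python with NumPy; the two objective values are hypothetical stand-ins, not the structural model used in the study.

    ```python
    import numpy as np

    def fast_non_dominated_sort(F):
        """Rank rows of F (one row per solution, one column per
        minimized objective) into Pareto fronts, NSGA-II style."""
        n = len(F)
        S = [[] for _ in range(n)]       # solutions dominated by i
        counts = np.zeros(n, dtype=int)  # how many solutions dominate i
        fronts = [[]]
        for i in range(n):
            for j in range(n):
                if np.all(F[i] <= F[j]) and np.any(F[i] < F[j]):
                    S[i].append(j)
                elif np.all(F[j] <= F[i]) and np.any(F[j] < F[i]):
                    counts[i] += 1
            if counts[i] == 0:
                fronts[0].append(i)      # non-dominated: first front
        k = 0
        while fronts[k]:
            nxt = []
            for i in fronts[k]:
                for j in S[i]:
                    counts[j] -= 1
                    if counts[j] == 0:
                        nxt.append(j)
            fronts.append(nxt)
            k += 1
        return fronts[:-1]

    # Hypothetical objectives: isolation displacement vs. superstructure drift
    F = np.random.rand(50, 2)
    print([len(f) for f in fast_non_dominated_sort(F)])
    ```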

  2. Optimal locations of piezoelectric patches for supersonic flutter control of honeycomb sandwich panels, using the NSGA-II method

    NASA Astrophysics Data System (ADS)

    Nezami, M.; Gholami, B.

    2016-03-01

    The active flutter control of supersonic sandwich panels with regular honeycomb interlayers under impact load excitation is studied using piezoelectric patches. A non-dominated sorting-based multi-objective evolutionary algorithm, the non-dominated sorting genetic algorithm II (NSGA-II), is used to find the optimal locations for different numbers of piezoelectric actuator/sensor pairs. Quasi-steady first-order supersonic piston theory is employed to define the aerodynamic loading, and the p-method is applied to find the flutter bounds. Hamilton’s principle, in conjunction with generalized Fourier expansions and the Galerkin method, is used to develop the dynamical model of the structural system in the state-space domain. The classical Runge-Kutta time integration algorithm is then used to calculate the open-loop aeroelastic response of the system. The maximum flutter velocity and minimum voltage applied to the actuators are calculated for the optimal patch locations obtained by the NSGA-II, and proportional feedback is then used to actively suppress the closed-loop system response. Finally, the control effects of the two different controllers are compared.
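
    Once NSGA-II has ranked candidate patch layouts into fronts, it breaks ties within a front using crowding distance, which favors solutions in sparse regions of the objective space. A minimal sketch of that second ingredient, under the same minimization convention as the sorting sketch above:

    ```python
    import numpy as np

    def crowding_distance(F):
        """Crowding distance of each row of F within one Pareto front."""
        n, m = F.shape
        d = np.zeros(n)
        for k in range(m):
            order = np.argsort(F[:, k])
            d[order[0]] = d[order[-1]] = np.inf   # boundary points always kept
            span = F[order[-1], k] - F[order[0], k]
            if span == 0:
                continue
            for i in range(1, n - 1):
                # distance between the two neighbours along objective k
                d[order[i]] += (F[order[i + 1], k] - F[order[i - 1], k]) / span
        return d

    front = np.array([[1.0, 5.0], [2.0, 3.0], [4.0, 1.0]])
    print(crowding_distance(front))
    ```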

  3. Multi-objective optimization of process parameters in Electro-Discharge Diamond Face Grinding based on ANN-NSGA-II hybrid technique

    NASA Astrophysics Data System (ADS)

    Yadav, Ravindra Nath; Yadava, Vinod; Singh, G. K.

    2013-09-01

    The effective study of hybrid machining processes (HMPs), in terms of modeling and optimization, has always been a challenge to researchers. The combined approach of an Artificial Neural Network (ANN) and the Non-Dominated Sorting Genetic Algorithm-II (NSGA-II) has attracted the attention of researchers for modeling and optimization of complex machining processes. In this paper, a hybrid machining process combining Electrical Discharge Face Grinding (EDFG) and Diamond Face Grinding (DFG), named Electrical Discharge Diamond Face Grinding (EDDFG), has been studied using a hybrid ANN-NSGA-II methodology. In this study, the ANN has been used for modeling, while NSGA-II is used to optimize the control parameters of the EDDFG process. To observe the input-output relations, experiments were conducted on a self-developed face grinding setup attached to the ram of an EDM machine. During experimentation, the wheel speed, pulse current, pulse on-time, and duty factor were taken as input parameters, while the output parameters were material removal rate (MRR) and average surface roughness (Ra). The results show that the developed ANN model is capable of predicting the output responses within acceptable limits for a given set of input parameters. It has also been found that the hybrid ANN-NSGA-II approach gives a set of optimal solutions for obtaining appropriate output values under multiple objectives.
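
    The ANN-NSGA-II coupling described here follows a generic pattern: fit a network on the experimental input-output data, then let the GA query the network instead of the machine. A sketch of that pattern with scikit-learn on synthetic data; the four inputs and two responses mirror the paper's variables, but the data, network size, and coefficients are made up for illustration.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    # Columns: wheel speed, pulse current, pulse on-time, duty factor (synthetic)
    X = rng.uniform(0.0, 1.0, size=(25, 4))
    # Responses: MRR (to maximize) and Ra (to minimize) -- synthetic stand-ins
    Y = np.column_stack([
        X @ np.array([0.5, 0.9, 0.6, 0.2]) + 0.05 * rng.standard_normal(25),
        X @ np.array([0.1, 0.7, 0.8, 0.3]) + 0.05 * rng.standard_normal(25),
    ])

    surrogate = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                             random_state=0).fit(X, Y)

    def objectives(x):
        """Objective vector for NSGA-II: minimize (-MRR, Ra) via the ANN."""
        mrr, ra = surrogate.predict(x.reshape(1, -1))[0]
        return np.array([-mrr, ra])

    print(objectives(np.array([0.5, 0.5, 0.5, 0.5])))
    ```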

  4. Multi-objective optimization of weld geometry in hybrid fiber laser-arc butt welding using Kriging model and NSGA-II

    NASA Astrophysics Data System (ADS)

    Gao, Zhongmei; Shao, Xinyu; Jiang, Ping; Wang, Chunming; Zhou, Qi; Cao, Longchao; Wang, Yilin

    2016-06-01

    An integrated multi-objective optimization approach combining a Kriging model and the non-dominated sorting genetic algorithm-II (NSGA-II) is proposed in this paper to predict and optimize weld geometry in hybrid fiber laser-arc welding of 316L stainless steel. A four-factor, five-level experiment using a Taguchi L25 orthogonal array is conducted, considering laser power (P), welding current (I), distance between laser and arc (D), and traveling speed (V). Kriging models are adopted to approximate the relationship between the process parameters and the weld geometry, namely depth of penetration (DP), bead width (BW), and bead reinforcement (BR). NSGA-II is used for multi-objective optimization, taking the constructed Kriging models as objective functions, and generates a set of optimal solutions along the Pareto-optimal front for the outputs. Meanwhile, the main effects and the first-order interactions between process parameters are analyzed, and the microstructure is also discussed. Verification experiments demonstrate that the optimum values obtained by the proposed integrated Kriging and NSGA-II approach are in good agreement with experimental results.
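
    Kriging is Gaussian-process regression: each weld response gets its own GP fitted to the 25 orthogonal-array runs and then serves as a cheap NSGA-II objective. A sketch of the surrogate-fitting step with scikit-learn; the kernel choice and the synthetic data are illustrative assumptions, not the paper's fitted model.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import ConstantKernel, RBF

    rng = np.random.default_rng(1)
    X = rng.uniform(0.0, 1.0, size=(25, 4))   # P, I, D, V (scaled, synthetic)
    # Synthetic depth-of-penetration response in place of the measured DP
    dp = X @ np.array([1.0, 0.6, -0.2, -0.8]) + 0.02 * rng.standard_normal(25)

    kernel = ConstantKernel(1.0) * RBF(length_scale=np.ones(4))
    gp_dp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, dp)

    x_new = np.array([[0.4, 0.6, 0.5, 0.3]])
    mean, std = gp_dp.predict(x_new, return_std=True)
    print(mean[0], std[0])   # prediction plus its Kriging uncertainty
    ```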

  5. A modified NSGA-II solution for a new multi-objective hub maximal covering problem under uncertain shipments

    NASA Astrophysics Data System (ADS)

    Ebrahimi Zade, Amir; Sadegheih, Ahmad; Lotfi, Mohammad Mehdi

    2014-07-01

    Hubs are centers for the collection, rearrangement, and redistribution of commodities in transportation networks. In this paper, non-linear multi-objective formulations for single and multiple allocation hub maximal covering problems, as well as their linearized versions, are proposed. The formulations substantially reduce the complexity of existing models owing to fewer constraints and variables. Uncertain shipments are also studied in the context of hub maximal covering problems. In many real-world applications, any link on the path from origin to destination may fail due to disruption. Therefore, in the proposed bi-objective model, maximizing the safety of the weakest path in the network is considered as a second objective alongside the traditional maximum coverage goal. Furthermore, to solve the bi-objective model, a modified version of NSGA-II with a new dynamic immigration operator is developed, in which the number of immigrants depends on the results of the other two common NSGA-II operators, i.e., mutation and crossover. Besides validating the proposed models, computational results confirm the better performance of the modified NSGA-II versus the traditional one.
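
    The dynamic immigration operator injects fresh random individuals each generation, with the count tied to how crossover and mutation performed. The abstract does not spell out the coupling rule, so the sketch below uses an assumed rule (more immigrants when the other operators yield few new non-dominated offspring):

    ```python
    import numpy as np

    def immigrate(pop, n_new_nondominated, max_immigrants=10, rng=None):
        """Append random immigrants to the population; the count shrinks as
        crossover/mutation succeed (assumed rule -- the paper's exact
        formula is not given in the abstract)."""
        rng = rng or np.random.default_rng()
        n_imm = max(0, max_immigrants - n_new_nondominated)
        immigrants = rng.uniform(0.0, 1.0, size=(n_imm, pop.shape[1]))
        return np.vstack([pop, immigrants])

    pop = np.random.rand(20, 5)
    print(immigrate(pop, n_new_nondominated=3).shape)   # (27, 5)
    ```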

  6. Application of MIMO Disturbance Observer to Control of an Electric Wheelchair Using NSGA-II

    PubMed Central

    Saadatzi, Mohammad Nasser; Poshtan, Javad; Saadatzi, Mohammad Sadegh

    2011-01-01

    Electric wheelchairs (EW) experience various terrain surfaces and slopes as well as occupants with diverse weights. This, in turn, imparts a substantial amount of perturbation to the EW dynamics. In this paper, we make use of a two-degree-of-freedom control architecture called a disturbance observer (DOB), which reduces sensitivity to model uncertainties while enhancing rejection of disturbances caused by entering slopes. The feedback loop, designed via the characteristic loci method, is then augmented with a DOB with a parameterized low-pass filter. Based on the disturbance rejection, sensitivity reduction, and noise rejection of the whole controller, three performance indices are defined, which enable us to pick the filter's optimal parameters using a multi-objective optimization approach, the non-dominated sorting genetic algorithm-II. Finally, experimental results show desirable improvement in the stiffness and disturbance rejection of the proposed controller as well as its robust stability. PMID:22606667

  7. ADME Properties Evaluation in Drug Discovery: Prediction of Caco-2 Cell Permeability Using a Combination of NSGA-II and Boosting.

    PubMed

    Wang, Ning-Ning; Dong, Jie; Deng, Yin-Hua; Zhu, Min-Feng; Wen, Ming; Yao, Zhi-Jiang; Lu, Ai-Ping; Wang, Jian-Bing; Cao, Dong-Sheng

    2016-04-25

    The Caco-2 cell monolayer model is a popular surrogate for predicting the in vitro human intestinal permeability of a drug due to its morphological and functional similarity with human enterocytes. A quantitative structure-property relationship (QSPR) study was carried out to predict the Caco-2 cell permeability of a large data set consisting of 1272 compounds. Four different methods, including multivariate linear regression (MLR), partial least squares (PLS), support vector machine (SVM) regression, and Boosting, were employed to build prediction models with 30 molecular descriptors selected by the non-dominated sorting genetic algorithm-II (NSGA-II). The best Boosting model was finally obtained with R^2 = 0.97, RMSEF = 0.12, Q^2 = 0.83, RMSECV = 0.31 for the training set and RT^2 = 0.81, RMSET = 0.31 for the test set. A series of validation methods were used to assess the robustness and predictive ability of our model according to the OECD principles and then to define its applicability domain. Compared with reported QSAR/QSPR models of Caco-2 cell permeability, our model exhibits a certain advantage in database size and prediction accuracy. Finally, we found that the polar volume, the hydrogen bond donor count, the surface area, and some other descriptors influence the Caco-2 permeability to some extent. These results suggest that the proposed model is a good tool for predicting the permeability of drug candidates and for performing virtual screening in the early stage of drug development. PMID:27018227
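
    In this kind of workflow, NSGA-II typically searches over binary descriptor masks and the learner is retrained on each candidate subset. A sketch of one fitness evaluation under that interpretation, with synthetic data and scikit-learn gradient boosting standing in for the paper's descriptors and Boosting model:

    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(2)
    X = rng.standard_normal((200, 40))            # synthetic molecular descriptors
    y = X[:, :5].sum(axis=1) + 0.1 * rng.standard_normal(200)

    def subset_objectives(mask):
        """Two NSGA-II objectives: minimize (-CV R^2, number of descriptors)."""
        cols = np.flatnonzero(mask)
        if cols.size == 0:
            return np.array([0.0, 0.0])           # empty subset: no skill
        model = GradientBoostingRegressor(random_state=0)
        r2 = cross_val_score(model, X[:, cols], y, cv=5, scoring="r2").mean()
        return np.array([-r2, float(cols.size)])

    mask = rng.integers(0, 2, size=40)            # one candidate descriptor mask
    print(subset_objectives(mask))
    ```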

  8. Optimization of multi-reservoir operation with a new hedging rule: application of fuzzy set theory and NSGA-II

    NASA Astrophysics Data System (ADS)

    Ahmadianfar, Iman; Adib, Arash; Taghian, Mehrdad

    2016-06-01

    Reservoir hedging rule curves are used to avoid severe water shortage during drought periods. In this method, reservoir storage is divided into several zones, and the rationing factors change immediately when the water storage level moves from one zone to another. In the present study, a hedging rule with fuzzy rationing factors was applied to create a transition zone above and below each rule curve, so that the rationing factor changes gradually within this zone. For this purpose, a monthly simulation model was developed and linked to the non-dominated sorting genetic algorithm to calculate the modified shortage index for two objective functions involving water supply of minimum flow and agricultural demands over a long-term simulation period. The Zohre multi-reservoir system in southern Iran is considered as a case study. The proposed hedging rule improved long-term system performance by 10 to 27 percent in comparison with the simple hedging rule. These results demonstrate that the fuzzification of hedging factors increases the applicability and efficiency of the new hedging rule in comparison to the conventional rule curve for mitigating the water shortage problem.
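
    The key idea is replacing the step change in rationing factor at a rule curve with a gradual transition band. A minimal sketch of one such fuzzified factor, assuming a linear membership function across the band (the paper's membership shape and band width are not given in the abstract):

    ```python
    def rationing_factor(storage, curve, band, f_below, f_above):
        """Rationing factor with a fuzzy transition zone of half-width
        `band` around one rule curve (linear membership assumed)."""
        if storage <= curve - band:
            return f_below
        if storage >= curve + band:
            return f_above
        # inside the transition zone: interpolate instead of jumping
        w = (storage - (curve - band)) / (2.0 * band)
        return f_below + w * (f_above - f_below)

    # Storage slightly above the curve: factor between 0.6 and 1.0
    print(rationing_factor(52.0, curve=50.0, band=5.0, f_below=0.6, f_above=1.0))
    ```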

  9. Optimal Solutions of Multiproduct Batch Chemical Process Using Multiobjective Genetic Algorithm with Expert Decision System

    PubMed Central

    Mokeddem, Diab; Khellaf, Abdelhafid

    2009-01-01

    Optimal design problems are widely known for their multiple performance measures that often compete with each other. In this paper, an optimal multiproduct batch chemical plant design is presented. The design is first formulated as a multiobjective optimization problem, to be solved using the well-suited non-dominated sorting genetic algorithm (NSGA-II). NSGA-II has the capability to achieve fine tuning of variables in determining a set of non-dominated solutions distributed along the Pareto front in a single run of the algorithm. The ability of NSGA-II to identify a set of optimal solutions provides the decision-maker (DM) with a complete picture of the optimal solution space from which to make better and more appropriate choices. An outranking with PROMETHEE II then helps the decision-maker finalize the selection of a best compromise. The effectiveness of the NSGA-II method on multiobjective optimization problems is illustrated through two carefully referenced examples. PMID:19543537
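
    After NSGA-II delivers the Pareto set, PROMETHEE II ranks its members by pairwise outranking flows. A compact sketch with the usual net-flow formula; the linear preference function (scaled by each criterion's range) and the example weights are assumptions for illustration.

    ```python
    import numpy as np

    def promethee_ii(A, weights, maximize):
        """PROMETHEE II net flows for the alternatives in rows of A.
        Assumes a linear preference function on each criterion's range."""
        A = np.where(maximize, A, -A)              # make every criterion 'max'
        n = len(A)
        span = np.ptp(A, axis=0).astype(float)
        span[span == 0] = 1.0
        pi = np.zeros((n, n))                      # aggregated preference indices
        for a in range(n):
            for b in range(n):
                pi[a, b] = weights @ np.clip((A[a] - A[b]) / span, 0.0, 1.0)
        # net flow: leaving flow minus entering flow; rank descending
        return (pi.sum(axis=1) - pi.sum(axis=0)) / (n - 1)

    A = np.array([[100.0, 3.0], [80.0, 2.0], [120.0, 4.5]])   # cost, quality
    print(promethee_ii(A, weights=np.array([0.5, 0.5]),
                       maximize=np.array([False, True])))
    ```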

  10. MULTIOBJECTIVE PARALLEL GENETIC ALGORITHM FOR WASTE MINIMIZATION

    EPA Science Inventory

    In this research we have developed an efficient multiobjective parallel genetic algorithm (MOPGA) for waste minimization problems. This MOPGA integrates PGAPack (Levine, 1996) and NSGA-II (Deb, 2000) with novel modifications. PGAPack is a master-slave parallel implementation of a...

  11. Multi-objective optimization in spatial planning: Improving the effectiveness of multi-objective evolutionary algorithms (non-dominated sorting genetic algorithm II)

    NASA Astrophysics Data System (ADS)

    Karakostas, Spiros

    2015-05-01

    The multi-objective nature of most spatial planning initiatives and the numerous constraints that are introduced in the planning process by decision makers, stakeholders, etc., synthesize a complex spatial planning context in which the concept of solid and meaningful optimization is a unique challenge. This article investigates new approaches to enhance the effectiveness of multi-objective evolutionary algorithms (MOEAs) via the adoption of a well-known metaheuristic: the non-dominated sorting genetic algorithm II (NSGA-II). In particular, the contribution of a sophisticated crossover operator coupled with an enhanced initialization heuristic is evaluated against a series of metrics measuring the effectiveness of MOEAs. Encouraging results emerge for both the convergence rate of the evolutionary optimization process and the occupation of valuable regions of the objective space by non-dominated solutions, facilitating the work of spatial planners and decision makers. Based on the promising behaviour of both heuristics, topics for further research are proposed to improve their effectiveness.

  12. Multi-objective optimization of lithium-ion battery model using genetic algorithm approach

    NASA Astrophysics Data System (ADS)

    Zhang, Liqiang; Wang, Lixin; Hinds, Gareth; Lyu, Chao; Zheng, Jun; Li, Junfu

    2014-12-01

    A multi-objective parameter identification method for modeling of Li-ion battery performance is presented. Terminal voltage and surface temperature curves at 15 °C and 30 °C are used as four identification objectives. The Pareto fronts of two types of Li-ion battery are obtained using the modified multi-objective genetic algorithm NSGA-II and the final identification results are selected using the multiple criteria decision making method TOPSIS. The simulated data using the final identification results are in good agreement with experimental data under a range of operating conditions. The validation results demonstrate that the modified NSGA-II and TOPSIS algorithms can be used as robust and reliable tools for identifying parameters of multi-physics models for many types of Li-ion batteries.
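
    TOPSIS picks the final parameter set from the Pareto front by distance to ideal and anti-ideal points. A compact sketch under common assumptions (vector normalization, equal weights); the example objective values are hypothetical voltage and temperature errors, not the paper's data:

    ```python
    import numpy as np

    def topsis(F, weights, maximize):
        """Rank rows of F by relative closeness to the ideal solution."""
        V = F / np.linalg.norm(F, axis=0)          # vector normalization
        V = V * weights
        best = np.where(maximize, V.max(axis=0), V.min(axis=0))
        worst = np.where(maximize, V.min(axis=0), V.max(axis=0))
        d_best = np.linalg.norm(V - best, axis=1)
        d_worst = np.linalg.norm(V - worst, axis=1)
        return d_worst / (d_best + d_worst)        # higher = better compromise

    # Hypothetical Pareto points: (voltage error, temperature error), minimized
    F = np.array([[0.02, 1.5], [0.05, 0.9], [0.03, 1.1]])
    scores = topsis(F, weights=np.array([0.5, 0.5]),
                    maximize=np.array([False, False]))
    print(scores.argmax())   # index of the selected compromise solution
    ```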

  13. Multi-objective evolutionary algorithm for operating parallel reservoir system

    NASA Astrophysics Data System (ADS)

    Chang, Li-Chiu; Chang, Fi-John

    2009-10-01

    This paper applies a multi-objective evolutionary algorithm, the non-dominated sorting genetic algorithm (NSGA-II), to examine the operations of a multi-reservoir system in Taiwan. The Feitsui and Shihmen reservoirs are the most important water supply reservoirs in Northern Taiwan, meeting the domestic and industrial water supply needs of over 7 million residents. A daily operational simulation model is developed to guide the releases of the reservoir system and then to calculate the shortage indices (SI) of both reservoirs over a long-term simulation period. The NSGA-II is used to minimize the SI values through identification of optimal joint operating strategies. Based on a 49-year data set, we demonstrate that better operational strategies would reduce shortage indices for both reservoirs. The results indicate that the NSGA-II provides a promising approach. The Pareto-front optimal solutions identified operational compromises for the two reservoirs that would be expected to improve joint operations.
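
    The shortage index being minimized is conventionally the HEC formulation: the mean squared annual shortage ratio scaled by 100. A sketch under the assumption that the study uses that standard definition:

    ```python
    import numpy as np

    def shortage_index(supply, demand):
        """SI = (100 / N) * sum((annual shortage / annual demand)^2),
        the common HEC definition (assumed to match the study's usage)."""
        shortage = np.maximum(demand - supply, 0.0)   # only deficits count
        return 100.0 / len(demand) * np.sum((shortage / demand) ** 2)

    demand = np.array([120.0, 150.0, 130.0, 140.0])   # illustrative annual demands
    supply = np.array([120.0, 110.0, 130.0, 100.0])
    print(shortage_index(supply, demand))
    ```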

  14. Improved stream temperature simulations within SWAT using NSGA-II for automatic, multi-site calibration

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Stream temperature is one of the most influential parameters impacting the survival, growth rates, distribution, and migration patterns of many aquatic organisms. Distributed stream temperature models are crucial for providing insights into variations of stream temperature for regions and time perio...

  15. A Novel Multiobjective Evolutionary Algorithm Based on Regression Analysis

    PubMed Central

    Song, Zhiming; Wang, Maocai; Dai, Guangming; Vasile, Massimiliano

    2015-01-01

    As is known, the Pareto set of a continuous multiobjective optimization problem with m objective functions is a piecewise continuous (m − 1)-dimensional manifold in the decision space under some mild conditions. How to utilize this regularity to design multiobjective optimization algorithms has become a research focus. In this paper, based on this regularity, a model-based multiobjective evolutionary algorithm with regression analysis (MMEA-RA) is put forward to solve continuous multiobjective optimization problems with variable linkages. In the algorithm, the optimization problem is modelled as a promising area in the decision space by a probability distribution, the centroid of which is an (m − 1)-dimensional piecewise continuous manifold. The least squares method is used to construct such a model. A selection strategy based on nondominated sorting is used to choose the individuals for the next generation. The new algorithm is tested and compared with NSGA-II and RM-MEDA. The results show that MMEA-RA outperforms RM-MEDA and NSGA-II on the test instances with variable linkages. At the same time, MMEA-RA has higher efficiency than the other two algorithms. A few shortcomings of MMEA-RA have also been identified and discussed in this paper. PMID:25874246

  16. Distributed Query Plan Generation Using Multiobjective Genetic Algorithm

    PubMed Central

    Panicker, Shina; Vijay Kumar, T. V.

    2014-01-01

    A distributed query processing strategy, which is a key performance determinant in accessing distributed databases, aims to minimize the total query processing cost. One way to achieve this is by generating efficient distributed query plans that involve fewer sites for processing a query. In the case of distributed relational databases, the number of possible query plans increases exponentially with respect to the number of relations accessed by the query and the number of sites where these relations reside. Consequently, computing optimal distributed query plans becomes a complex problem. This distributed query plan generation (DQPG) problem has already been addressed using a single-objective genetic algorithm, where the objective is to minimize the total query processing cost comprising the local processing cost (LPC) and the site-to-site communication cost (CC). In this paper, the DQPG problem is formulated and solved as a biobjective optimization problem, with the two objectives being to minimize total LPC and to minimize total CC. These objectives are simultaneously optimized using the multiobjective genetic algorithm NSGA-II. Experimental comparison of the proposed NSGA-II based DQPG algorithm with the single-objective genetic algorithm shows that the former performs comparatively better and converges quickly towards optimal solutions for the observed crossover and mutation probabilities. PMID:24963513
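
    With LPC and CC as the two objectives, each candidate query plan maps to a point in the plane and the algorithm's output is the non-dominated subset. A small filter that extracts that subset from a batch of evaluated plans (the cost pairs below are hypothetical):

    ```python
    import numpy as np

    def pareto_filter(F):
        """Return indices of non-dominated rows of F (all objectives minimized)."""
        keep = []
        for i, fi in enumerate(F):
            dominated = any(np.all(fj <= fi) and np.any(fj < fi) for fj in F)
            if not dominated:
                keep.append(i)
        return keep

    # Hypothetical (LPC, CC) costs for five candidate query plans
    F = np.array([[10, 40], [12, 30], [15, 25], [11, 35], [20, 50]])
    print(pareto_filter(F))
    ```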

  17. A Self-adaptive Evolutionary Algorithm for Multi-objective Optimization

    NASA Astrophysics Data System (ADS)

    Cao, Ruifen; Li, Guoli; Wu, Yican

    Evolutionary algorithms have gained worldwide popularity in multi-objective optimization. This paper proposes a self-adaptive evolutionary algorithm (called SEA) for multi-objective optimization. In the SEA, the probabilities of crossover and mutation, P_c and P_m, are varied depending on the fitness values of the solutions. Fitness assignment in SEA realizes the twin goals of maintaining diversity in the population and guiding the population towards the true Pareto front: the fitness value of an individual depends not only on an improved density estimation but also on its non-dominated rank. The density estimation can maintain diversity in all instances, including when the scales of the objectives differ greatly from one another. SEA is compared against the Non-dominated Sorting Genetic Algorithm (NSGA-II) on a set of test problems introduced by the MOEA community. Simulation results show that SEA is as effective as NSGA-II on most test functions, but when the scales of the objectives differ greatly, SEA achieves a better distribution of non-dominated solutions.
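
    Varying P_c and P_m with fitness follows the classic adaptive-GA idea (cf. Srinivas and Patnaik): good solutions are perturbed less, poor ones more. The abstract does not give SEA's exact formula, so the sketch below assumes that standard form for a maximized scalar fitness:

    ```python
    def adaptive_probs(f, f_avg, f_max, k1=1.0, k2=0.5):
        """Per-individual crossover/mutation probabilities that fall as the
        fitness f approaches the population best (assumed standard rule)."""
        if f >= f_avg and f_max > f_avg:
            pc = k1 * (f_max - f) / (f_max - f_avg)
            pm = k2 * (f_max - f) / (f_max - f_avg)
        else:
            pc, pm = k1, k2   # below-average solutions are fully perturbed
        return pc, pm

    print(adaptive_probs(f=0.9, f_avg=0.6, f_max=1.0))   # small pc, pm
    ```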

  18. A master-slave parallel hybrid multi-objective evolutionary algorithm for groundwater remediation design under general hydrogeological conditions

    NASA Astrophysics Data System (ADS)

    Wu, J.; Yang, Y.; Luo, Q.; Wu, J.

    2012-12-01

    This study presents a new hybrid multi-objective evolutionary algorithm, the niched Pareto tabu search combined with a genetic algorithm (NPTSGA), whereby the global search ability of the niched Pareto tabu search (NPTS) is improved by the diversification of candidate solutions arising from the evolving nondominated sorting genetic algorithm II (NSGA-II) population. The NPTSGA, coupled with the commonly used groundwater flow and transport codes MODFLOW and MT3DMS, is developed for multi-objective optimal design of groundwater remediation systems. The proposed methodology is then applied to a large-scale field groundwater remediation system for cleanup of a large trichloroethylene (TCE) plume at the Massachusetts Military Reservation (MMR) in Cape Cod, Massachusetts. Furthermore, a master-slave (MS) parallelization scheme based on the Message Passing Interface (MPI) is incorporated into the NPTSGA to implement objective function evaluations in a distributed processor environment, which can greatly improve the efficiency of the NPTSGA in finding Pareto-optimal solutions for the real-world application. This study shows that the MS parallel NPTSGA, in comparison with the original NPTS and NSGA-II, can balance the tradeoff between diversity and optimality of solutions during the search process and is an efficient and effective tool for optimizing the multi-objective design of groundwater remediation systems under complicated hydrogeologic conditions.
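
    The master-slave layer is conceptually simple: the master runs the evolutionary loop while slaves evaluate the expensive objective functions in parallel. The study does this with MPI around MODFLOW/MT3DMS runs; the sketch below shows the same pattern with Python's multiprocessing and a cheap stand-in objective:

    ```python
    from multiprocessing import Pool

    def evaluate(x):
        """Stand-in for one expensive flow-and-transport simulation."""
        cost = sum(v * v for v in x)           # objective 1 (e.g. remediation cost)
        mass = sum(abs(v - 1.0) for v in x)    # objective 2 (e.g. residual TCE mass)
        return (cost, mass)

    if __name__ == "__main__":
        population = [[i * 0.1, 1.0 - i * 0.05] for i in range(20)]
        with Pool(processes=4) as pool:        # the 'slaves'
            F = pool.map(evaluate, population) # master farms out evaluations
        print(F[:3])
    ```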

  19. Constrained Multiobjective Biogeography Optimization Algorithm

    PubMed Central

    Mo, Hongwei; Xu, Zhidan; Xu, Lifang; Wu, Zhou; Ma, Haiping

    2014-01-01

    Multiobjective optimization involves minimizing or maximizing multiple objective functions subject to a set of constraints. In this study, a novel constrained multiobjective biogeography optimization algorithm (CMBOA) is proposed. It is the first biogeography optimization algorithm for constrained multiobjective optimization. In CMBOA, a disturbance migration operator is designed to generate diverse feasible individuals in order to promote the diversity of individuals on the Pareto front. Infeasible individuals near the feasible region are evolved towards feasibility by recombining them with their nearest nondominated feasible individuals. The convergence of CMBOA is proved using probability theory. The performance of CMBOA is evaluated on a set of 6 benchmark problems, and experimental results show that CMBOA performs better than or similarly to the classical NSGA-II and IS-MOEA. PMID:25006591

  1. Multi-objective parametric optimization of Inertance type pulse tube refrigerator using response surface methodology and non-dominated sorting genetic algorithm

    NASA Astrophysics Data System (ADS)

    Rout, Sachindra K.; Choudhury, Balaji K.; Sahoo, Ranjit K.; Sarangi, Sunil K.

    2014-07-01

    The modeling and optimization of a pulse tube refrigerator is a complicated task due to its complex geometry and behaviour. The aim of the present work is to optimize the dimensions of the pulse tube and regenerator for an Inertance-Type Pulse Tube Refrigerator (ITPTR) using Response Surface Methodology (RSM) and the Non-dominated Sorting Genetic Algorithm II (NSGA-II). The Box-Behnken design of the response surface methodology is used in an experimental matrix with four factors and two levels. The diameter and length of the pulse tube and regenerator are chosen as the design variables, while the rest of the dimensions and operating conditions of the ITPTR are held constant. The required output responses are the cold head temperature (Tcold) and compressor input power (Wcomp). Computational fluid dynamics (CFD) has been used to model and solve the ITPTR, and the CFD results agreed well with those of a previously published paper. Using the results from the 1-D simulation, RSM is then conducted to analyse the effect of the independent variables on the responses. To check the accuracy of the model, the analysis of variance (ANOVA) method has been used. Based on the proposed mathematical RSM models, a multi-objective optimization study using NSGA-II has been performed to optimize the responses.
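
    RSM boils down to fitting a second-order polynomial response surface to the designed runs and handing it to NSGA-II as a cheap objective. A sketch of the least-squares fit for one response, with synthetic data in place of the CFD runs (variable count and coefficients are illustrative):

    ```python
    import numpy as np

    def quad_features(X):
        """Design matrix for a full quadratic model in the columns of X."""
        n, k = X.shape
        cols = [np.ones(n)] + [X[:, i] for i in range(k)]
        cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i, k)]
        return np.column_stack(cols)

    rng = np.random.default_rng(3)
    X = rng.uniform(-1, 1, size=(27, 4))          # scaled design variables
    # Synthetic cold-head temperature response in place of CFD output
    t_cold = (80 + 5 * X[:, 0] - 3 * X[:, 1] + 2 * X[:, 2] ** 2
              + 0.5 * rng.standard_normal(27))

    beta, *_ = np.linalg.lstsq(quad_features(X), t_cold, rcond=None)
    predict = lambda x: quad_features(np.atleast_2d(x)) @ beta
    print(predict([0.2, -0.1, 0.5, 0.0]))          # cheap RSM prediction
    ```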

  2. Optimal design of multichannel fiber Bragg grating filters using Pareto multi-objective optimization algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Jing; Liu, Tundong; Jiang, Hao

    2016-01-01

    A Pareto-based multi-objective optimization approach is proposed to design multichannel FBG filters. Instead of defining a single optimization objective, the proposed method establishes a multi-objective model taking two design objectives into account: minimizing the maximum index modulation and minimizing the mean dispersion error. To address this optimization problem, we develop a two-stage evolutionary computation approach integrating the elitist non-dominated sorting genetic algorithm (NSGA-II) and the technique for order preference by similarity to ideal solution (TOPSIS). NSGA-II is utilized to search for candidate solutions in terms of both objectives, and the obtained results are provided as a Pareto front. Subsequently, the best compromise solution is determined from the Pareto front by the TOPSIS method according to the decision maker's preference. The design results show that the proposed approach yields a remarkable reduction of the maximum index modulation while the dispersion spectra of the designed filter are optimized simultaneously.

  3. Multiple ant colony algorithm method for selecting tag SNPs.

    PubMed

    Liao, Bo; Li, Xiong; Zhu, Wen; Li, Renfa; Wang, Shulin

    2012-10-01

    The search for associations between complex diseases and single nucleotide polymorphisms (SNPs) or haplotypes has recently received great attention. Finding a set of tag SNPs for haplotyping in a great number of samples is an important step in reducing the cost of association studies. It is therefore essential to select tag SNPs with more efficient algorithms. In this paper, we model the problem of selecting tag SNPs as MINIMUM TEST SET and use a multiple ant colony algorithm (MACA) to search for a smaller set of tag SNPs for haplotyping. Experimental results on various datasets show that the running time of our method is less than that of GTagger and MLR, and that MACA can find the most representative SNPs for haplotyping, so that MACA is more stable and the number of tag SNPs is also smaller than with other evolutionary methods (like GTagger and NSGA-II). Our software is available upon request to the corresponding author. PMID:22480582
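
    Modeling tag SNP selection as MINIMUM TEST SET means picking the fewest SNP positions whose alleles jointly distinguish every pair of haplotypes. The paper attacks this with multiple ant colonies; the greedy baseline below just makes the set-cover structure explicit (it is not the MACA itself, and it assumes all haplotypes are pairwise distinguishable):

    ```python
    from itertools import combinations

    def greedy_tag_snps(haplotypes):
        """haplotypes: list of equal-length 0/1 strings. Greedily pick SNP
        positions until every pair of haplotypes differs at a chosen SNP."""
        m = len(haplotypes[0])
        unresolved = {(a, b) for a, b in combinations(range(len(haplotypes)), 2)}
        chosen = []
        while unresolved:
            # SNP that separates the most still-unresolved pairs
            best = max(range(m), key=lambda s: sum(
                haplotypes[a][s] != haplotypes[b][s] for a, b in unresolved))
            chosen.append(best)
            unresolved = {(a, b) for a, b in unresolved
                          if haplotypes[a][best] == haplotypes[b][best]}
        return chosen

    print(greedy_tag_snps(["0011", "0101", "1001", "1110"]))   # e.g. [0, 1]
    ```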

  4. A comparative study of three simulation optimization algorithms for solving high dimensional multi-objective optimization problems in water resources

    NASA Astrophysics Data System (ADS)

    Schütze, Niels; Wöhling, Thomas; de Play, Michael

    2010-05-01

    Some real-world optimization problems in water resources have a high-dimensional space of decision variables and more than one objective function. In this work, we compare three general-purpose multi-objective simulation optimization algorithms, namely NSGA-II, AMALGAM, and CMA-ES-MO, in solving three real-case multi-objective optimization problems (MOPs): (i) a high-dimensional soil hydraulic parameter estimation problem; (ii) a multipurpose multi-reservoir operation problem; and (iii) a scheduling problem in deficit irrigation. We analyze the behaviour of the three algorithms on these test problems, with formulations ranging from 40 up to 120 decision variables and 2 to 4 objectives. The computational effort required by each algorithm to reach the true Pareto front is also analyzed.

  5. A non-dominated sorting genetic algorithm for a bi-objective pick-up and delivery problem

    NASA Astrophysics Data System (ADS)

    Velasco, N.; Dejax, P.; Guéret, C.; Prins, C.

    2012-03-01

    Some companies must transport their personnel between facilities. This is especially the case for oil companies that use helicopters to transport engineers, technicians, and assistant personnel from platform to platform. This operation can become expensive and deliver poor quality of service if the transportation routes are not correctly planned. Here this issue is modelled as a pick-up and delivery problem in which a set of transportation requests must be scheduled into routes, minimizing the total transportation cost while the most urgent requests are satisfied with priority. To solve the problem, a method based on a Non-dominated Sorting Genetic Algorithm (NSGA-II) is proposed. This algorithm is tested on both randomly generated and real instances provided by a petroleum company. The results show that the proposed algorithm improves the best-known solutions.

  6. Multicomponent, multi-azimuth pre-stack seismic waveform inversion for azimuthally anisotropic media using a parallel and computationally efficient non-dominated sorting genetic algorithm

    NASA Astrophysics Data System (ADS)

    Li, Tao; Mallick, Subhashis

    2015-02-01

    Consideration of azimuthal anisotropy, at least to an orthorhombic symmetry, is important in exploring naturally fractured and unconventional hydrocarbon reservoirs. Full waveform inversion of multicomponent seismic data can, in principle, provide more robust estimates of subsurface elastic parameters and density than the inversion of single-component (P-wave) seismic data. In addition, azimuthally dependent anisotropy can only be resolved by carefully studying multicomponent seismic displacement data acquired and processed along different azimuths. Such an analysis needs an inversion algorithm capable of simultaneously optimizing multiple objectives, one for each data component along each azimuth. These multicomponent and multi-azimuthal seismic inversions are non-linear with non-unique solutions; it is therefore appropriate to treat the objectives as a vector and simultaneously optimize each of its components so that the optimal set of solutions can be obtained. The fast non-dominated sorting genetic algorithm (NSGA II) is a robust stochastic global search method capable of handling multiple objectives, but its computational expense increases with the number of objectives and the number of model parameters to be inverted for. In addition, an accurate extraction of subsurface azimuthal anisotropy requires multicomponent seismic data acquired at a fine spatial resolution along many source-to-receiver azimuths. Because routine acquisition of such data is prohibitively expensive, they are typically available along two or at most three azimuthal orientations at a spatial resolution where such an inversion could be applied. This paper proposes a novel multi-objective methodology using a parallelized version of NSGA II for waveform inversion of multicomponent seismic displacement data along two azimuths. By scaling the objectives prior to ranking, redefining the crowding distance as functions of the scaled objective and the model spaces, and varying

  7. Multi-objective optimal design of magnetorheological engine mount based on an improved non-dominated sorting genetic algorithm

    NASA Astrophysics Data System (ADS)

    Zheng, Ling; Duan, Xuwei; Deng, Zhaoxue; Li, Yinong

    2014-03-01

    A novel flow-mode magneto-rheological (MR) engine mount integrating a diaphragm de-coupler and a spoiler plate is designed and developed to isolate the engine and transmission from the chassis over a wide frequency range and to overcome the high stiffness at high frequencies. A lumped parameter model of the MR engine mount in a single-degree-of-freedom system is further developed based on the bond graph method to accurately predict the performance of the MR engine mount. The optimization model is established to minimize the total force transmissibility over several frequency ranges of interest. In this model, the lumped parameters are considered as design variables, while the maximum force transmissibility and the corresponding frequency in the low-frequency range, as well as the individual lumped parameters, are limited as constraints. A multiple-interval sensitivity analysis method is developed to select the optimization variables and improve the efficiency of the optimization process. An improved non-dominated sorting genetic algorithm (NSGA-II) is used to solve the multi-objective optimization problem. The synthesized distance between individuals in the Pareto set and individuals in the engineering-feasible set is defined and calculated. A set of real design parameters is thus obtained through the internal relationship between the optimal lumped parameters and the practical design parameters of the MR engine mount. The program flowchart for the improved non-dominated sorting genetic algorithm (NSGA-II) is given. The obtained results demonstrate the effectiveness of the proposed optimization approach in minimizing the total force transmissibility over the frequency ranges addressed.
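
    The objective "total force transmissibility over several frequency ranges" can be written down directly for a simplified mount: the textbook single-DOF transmissibility at frequency ratio r and damping ratio ζ, summed over the bands of interest. The sketch below uses that classic formula; the actual MR-mount model in the paper has more states and parameters.

    ```python
    import numpy as np

    def transmissibility(r, zeta):
        """Classic SDOF force transmissibility at frequency ratio r."""
        num = 1 + (2 * zeta * r) ** 2
        den = (1 - r ** 2) ** 2 + (2 * zeta * r) ** 2
        return np.sqrt(num / den)

    def total_transmissibility(zeta, bands, wn):
        """Objective: sum of T over the addressed frequency ranges (Hz)."""
        total = 0.0
        for lo, hi in bands:
            f = np.linspace(lo, hi, 50)
            total += transmissibility(2 * np.pi * f / wn, zeta).sum()
        return total

    # Hypothetical natural frequency of 10 Hz and two frequency bands
    print(total_transmissibility(zeta=0.2, bands=[(5, 15), (25, 35)],
                                 wn=2 * np.pi * 10))
    ```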

  8. Comparing Evolutionary Strategies on a Biobjective Cultural Algorithm

    PubMed Central

    Lagos, Carolina; Crawford, Broderick; Cabrera, Enrique; Rubio, José-Miguel; Paredes, Fernando

    2014-01-01

    Evolutionary algorithms have been widely used to solve large and complex optimisation problems. Cultural algorithms (CAs) are evolutionary algorithms that have been used to solve both single and, to a lesser extent, multiobjective optimisation problems. In order to solve these optimisation problems, CAs make use of different strategies such as normative knowledge, historical knowledge, and circumstantial knowledge, among others. In this paper we present a comparison among CAs that make use of different evolutionary strategies: the first implements historical knowledge, the second considers circumstantial knowledge, and the third implements normative knowledge. These CAs are applied to a biobjective uncapacitated facility location problem (BOUFLP), the biobjective version of the well-known uncapacitated facility location problem. To the best of our knowledge, only a few articles have applied evolutionary multiobjective algorithms to the BOUFLP, and none of them has focused on the impact of the evolutionary strategy on algorithm performance. Our biobjective cultural algorithm, called BOCA, obtains important improvements when compared to other well-known evolutionary biobjective optimisation algorithms such as PAES and NSGA-II. The conflicting objective functions considered in this study are cost minimisation and coverage maximisation. Solutions obtained by each algorithm are compared using the hypervolume S metric. PMID:25254257
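
    The hypervolume S metric used for this comparison measures the region (an area, for two objectives) dominated by a front up to a reference point. A small sketch for two minimized objectives; the front and reference point are illustrative:

    ```python
    import numpy as np

    def hypervolume_2d(front, ref):
        """Area dominated by a 2-objective (minimized) front up to ref."""
        pts = np.array(sorted(front, key=lambda p: p[0]))  # sort by objective 1
        hv, prev_f2 = 0.0, ref[1]
        for f1, f2 in pts:
            if f2 < prev_f2:                  # each point adds one rectangle
                hv += (ref[0] - f1) * (prev_f2 - f2)
                prev_f2 = f2
        return hv

    front = [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0)]
    print(hypervolume_2d(front, ref=(5.0, 5.0)))   # -> 12.0
    ```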

  9. A hybrid multi-objective particle swarm algorithm for a mixed-model assembly line sequencing problem

    NASA Astrophysics Data System (ADS)

    Rahimi-Vahed, A. R.; Mirghorbani, S. M.; Rabbani, M.

    2007-12-01

    Mixed-model assembly line sequencing is one of the most important strategic problems in the field of production management where diversified customers' demands exist. In this article, three major goals are considered: (i) total utility work, (ii) total production rate variation and (iii) total setup cost. Due to the complexity of the problem, a hybrid multi-objective algorithm based on particle swarm optimization (PSO) and tabu search (TS) is devised to obtain the locally Pareto-optimal frontier where simultaneous minimization of the above-mentioned objectives is desired. In order to validate the performance of the proposed algorithm in terms of solution quality and diversity level, the algorithm is applied to various test problems and its reliability, based on different comparison metrics, is compared with three prominent multi-objective genetic algorithms, PS-NC GA, NSGA-II and SPEA-II. The computational results show that the proposed hybrid algorithm significantly outperforms existing genetic algorithms in large-sized problems.

  10. Multi-objective Job Shop Rescheduling with Evolutionary Algorithm

    NASA Astrophysics Data System (ADS)

    Hao, Xinchang; Gen, Mitsuo

    In current manufacturing systems, production processes and management are subject to many unexpected events and constantly emerging new requirements. This dynamic environment means that operation rescheduling is usually indispensable. A wide variety of procedures and heuristics has been developed to improve the quality of rescheduling. However, most proposed approaches are usually derived under simplified assumptions. As a consequence, these approaches can be inconsistent with the actual requirements of a real production environment, i.e., they are often unsuitable and too inflexible to respond efficiently to frequent changes. In this paper, a multi-objective job shop rescheduling problem (moJSRP) is formulated to improve the practical application of rescheduling. To solve the moJSRP model, an evolutionary algorithm is designed, in which a random key-based representation and interactive adaptive-weight (i-awEA) fitness assignment are embedded. To verify its effectiveness, the proposed algorithm has been compared with other approaches and benchmarks on the robustness of moJSRP optimization. The comparison results show that iAWGA-A is better than the weighted-fitness method in terms of effectiveness and stability. Similarly, iAWGA-A also outperforms other well-established approaches such as the non-dominated sorting genetic algorithm (NSGA-II) and the strength Pareto evolutionary algorithm 2 (SPEA2).

  11. SAGE II inversion algorithm. [Stratospheric Aerosol and Gas Experiment

    NASA Technical Reports Server (NTRS)

    Chu, W. P.; Mccormick, M. P.; Lenoble, J.; Brogniez, C.; Pruvost, P.

    1989-01-01

    The operational Stratospheric Aerosol and Gas Experiment II multichannel data inversion algorithm is described. Aerosol and ozone retrievals obtained with the algorithm are discussed. The algorithm is compared to an independently developed algorithm (Lenoble, 1989), showing that the inverted aerosol and ozone profiles from the two algorithms are similar within their respective uncertainties.

  13. Automatic Tuning of a Retina Model for a Cortical Visual Neuroprosthesis Using a Multi-Objective Optimization Genetic Algorithm.

    PubMed

    Martínez-Álvarez, Antonio; Crespo-Cano, Rubén; Díaz-Tahoces, Ariadna; Cuenca-Asensi, Sergio; Ferrández Vicente, José Manuel; Fernández, Eduardo

    2016-11-01

    The retina is a very complex neural structure containing many different types of neurons interconnected with great precision, enabling sophisticated conditioning and coding of visual information before it is passed via the optic nerve to higher visual centers. The encoding of visual information is one of the basic questions in visual and computational neuroscience and is also of seminal importance in the field of visual prostheses. In this framework, it is essential to have artificial retina systems able to function in a way as similar as possible to biological retinas. This paper proposes an automatic evolutionary multi-objective strategy based on the NSGA-II algorithm for tuning retina models. Four metrics were adopted to guide the algorithm in the search for the parameters that best approximate a synthetic retinal model's output to real electrophysiological recordings. Results show that this procedure exhibits high flexibility when different trade-offs have to be considered during the design of customized neuroprostheses. PMID:27354187

  14. Multi-objective optimization of a low specific speed centrifugal pump using an evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    An, Zhao; Zhounian, Lai; Peng, Wu; Linlin, Cao; Dazhuan, Wu

    2016-07-01

    This paper describes the shape optimization of a low specific speed centrifugal pump at the design point. The target pump has already been manually modified on the basis of empirical knowledge. A genetic algorithm (NSGA-II) with certain enhancements is adopted to further improve its performance with respect to two goals. In order to limit the number of design variables without losing geometric information, the impeller is parametrized using a Bézier curve and a B-spline. Numerical simulation based on a Reynolds-averaged Navier-Stokes (RANS) turbulence model is performed in parallel to evaluate the flow field. A back-propagating neural network is constructed as a surrogate for performance prediction to save computing time, with initial samples selected according to an orthogonal array. Global Pareto-optimal solutions are then obtained and analysed. The results show that unexpected flow structures, such as the secondary flow on the meridian plane, have diminished or vanished in the optimized pump.

  15. On the use of multi-algorithm, genetically adaptive multi-objective method for multi-site calibration of the SWAT model

    SciTech Connect

    Zhang, Xuesong; Srinivasan, Raghavan; Van Liew, M.

    2010-04-15

    With the availability of spatially distributed data, distributed hydrologic models are increasingly used to simulate spatially varied hydrologic processes in order to understand and manage the natural and human activities that affect watershed systems. Multi-objective optimization methods have been applied to calibrate distributed hydrologic models using observed data from multiple sites. As the time consumed by running these complex models increases substantially, selecting efficient and effective multi-objective optimization algorithms is becoming a nontrivial issue. In this study, we evaluated a multi-algorithm, genetically adaptive multi-objective method (AMALGAM) for multi-site calibration of a distributed hydrologic model, the Soil and Water Assessment Tool (SWAT), and compared its performance with two widely used evolutionary multi-objective optimization (EMO) algorithms, the Strength Pareto Evolutionary Algorithm 2 (SPEA2) and the Non-dominated Sorting Genetic Algorithm II (NSGA-II). To provide insights into each method's overall performance, the three methods were tested in four watersheds with various characteristics. The test results indicate that AMALGAM can consistently provide competitive or superior results compared with the other two methods. The multi-method search framework of AMALGAM, which can flexibly and adaptively utilize multiple optimization algorithms, makes it a promising tool for multi-site calibration of the distributed SWAT. For practical use of AMALGAM, it is suggested to run this method in multiple trials with a relatively small number of model runs rather than running it once with long iterations. In addition, incorporating different multiobjective optimization algorithms and multi-mode search operators into AMALGAM deserves further research.

  16. Algorithmic Questions for Linear Algebraic Groups. II

    NASA Astrophysics Data System (ADS)

    Sarkisjan, R. A.

    1982-04-01

    It is proved that, given a linear algebraic group defined over an algebraic number field and satisfying certain conditions, there exists an algorithm which determines whether or not two double cosets of a special type coincide in its adele group, and which enumerates all such double cosets. This result is applied to the isomorphism problem for finitely generated nilpotent groups, and also to other problems.Bibliography: 18 titles.

  17. A tabu search evolutionary algorithm for multiobjective optimization: Application to a bi-criterion aircraft structural reliability problem

    NASA Astrophysics Data System (ADS)

    Long, Kim Chenming

    application of the proposed algorithm, TSEA, with several state-of-the-art multiobjective optimization algorithms reveals that TSEA outperforms these algorithms by providing retrofit solutions with greater reliability for the same costs (i.e., closer to the Pareto-optimal front) after the algorithms are executed for the same number of generations. This research also demonstrates that TSEA competes with and, in some situations, outperforms state-of-the-art multiobjective optimization algorithms such as NSGA II and SPEA 2 when applied to classic bicriteria test problems in the technical literature and other complex, sizable real-world applications. The successful implementation of TSEA contributes to the safety of aeronautical structures by providing a systematic way to guide aircraft structural retrofitting efforts, as well as a potentially useful algorithm for a wide range of multiobjective optimization problems in engineering and other fields.

  18. SAGE Version 7.0 Algorithm: Application to SAGE II

    NASA Technical Reports Server (NTRS)

    Damadeo, R. P; Zawodny, J. M.; Thomason, L. W.; Iyer, N.

    2013-01-01

    This paper details the Stratospheric Aerosol and Gas Experiment (SAGE) version 7.0 algorithm and how it is applied to SAGE II. Changes made between the previous (v6.2) and current (v7.0) versions are described, and their impacts on the data products are explained for both coincident event comparisons and time-series analysis. Users of the data will notice a general improvement in all of the SAGE II data products, which are now in better agreement with more modern data sets (e.g. SAGE III) and more robust for use in trend studies.

  19. Proposal of Functional-Specialization Multi-Objective Real-Coded Genetic Algorithm: FS-MOGA

    NASA Astrophysics Data System (ADS)

    Hamada, Naoki; Tanaka, Masaharu; Sakuma, Jun; Kobayashi, Shigenobu; Ono, Isao

    This paper presents a Genetic Algorithm (GA) for multi-objective function optimization. To find a precise and widely distributed set of solutions in difficult multi-objective function optimization problems with multimodality and a curved Pareto-optimal set, a GA is required to exhibit conflicting behaviors in the early and late stages of the search. That is, in the early stage of the search, the GA should perform local-Pareto-optima-overcoming search, which aims to overcome local Pareto-optima and converge the population to promising areas of the decision variable space. In the late stage of the search, the GA should instead perform Pareto-frontier-covering search, which aims to spread the population along the Pareto-optimal set. NSGA-II and SPEA2, the most widely used conventional methods, have problems with both kinds of search. In local-Pareto-optima-overcoming search, their selection pressure is too high to maintain the diversity needed for overcoming local Pareto-optima. In Pareto-frontier-covering search, their extrapolation-directed sampling abilities are insufficient to spread the population, and they cannot sample properly along the Pareto-optimal set. To resolve these problems, the proposed method adaptively switches between two search strategies, each specialized for local-Pareto-optima-overcoming and Pareto-frontier-covering search, respectively. We examine the effectiveness of the proposed method using two benchmark problems. The experimental results show that our approach outperforms the conventional methods in terms of both local-Pareto-optima-overcoming and Pareto-frontier-covering search.

  20. Nios II hardware acceleration of the epsilon quadratic sieve algorithm

    NASA Astrophysics Data System (ADS)

    Meyer-Bäse, Uwe; Botella, Guillermo; Castillo, Encarnacion; García, Antonio

    2010-04-01

    The quadratic sieve (QS) algorithm is one of the most powerful algorithms for factoring large composite numbers and thus for breaking RSA cryptographic systems. The hardware structure of the QS algorithm appears to be a good fit for FPGA acceleration. Our new ɛ-QS algorithm further simplifies the hardware architecture, making it an even better candidate for C2H acceleration. This paper presents our design results in FPGA resources and performance when implementing very long arithmetic on the Nios microprocessor platform with C2H acceleration, for different libraries (GMP, LIP, FLINT, NRMP) and QS architecture choices, for factoring 32-2048 bit RSA numbers.
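
    The QS, like Dixon's method, factors N by assembling a congruence of squares x² ≡ y² (mod N) with x ≢ ±y and taking gcd(x − y, N). The sketch below demonstrates only that final step on a toy congruence; the sieving stage that finds such pairs is where the FPGA acceleration pays off.

    ```python
    from math import gcd

    def factor_from_congruence(n, x, y):
        """Given x^2 ≡ y^2 (mod n) with x ≢ ±y (mod n),
        gcd(x - y, n) yields a nontrivial factor of n."""
        assert (x * x - y * y) % n == 0
        return gcd(x - y, n), gcd(x + y, n)

    # Toy example: 9^2 = 81 ≡ 4 = 2^2 (mod 77), and 9 ≢ ±2 (mod 77)
    print(factor_from_congruence(77, 9, 2))   # -> (7, 11)
    ```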

  1. Tracking at CDF: algorithms and experience from Run I and Run II

    SciTech Connect

    Snider, F.D.; /Fermilab

    2005-10-01

    The authors describe the tracking algorithms used during Run I and Run II by CDF at the Fermilab Tevatron Collider, covering the time from about 1992 through the present, and discuss the performance of the algorithms at high luminosity. By tracing the evolution of the detectors and algorithms, they reveal some of the successful strategies used by CDF to address the problems of tracking at high luminosities.

  2. A TCAS-II Resolution Advisory Detection Algorithm

    NASA Technical Reports Server (NTRS)

    Munoz, Cesar; Narkawicz, Anthony; Chamberlain, James

    2013-01-01

    The Traffic Alert and Collision Avoidance System (TCAS) is a family of airborne systems designed to reduce the risk of mid-air collisions between aircraft. TCAS II, the current generation of TCAS devices, provides resolution advisories that direct pilots to maintain or increase vertical separation when aircraft distance and time parameters are beyond designed system thresholds. This paper presents a mathematical model of the TCAS II Resolution Advisory (RA) logic that assumes accurate aircraft state information. Based on this model, an algorithm for RA detection is also presented. This algorithm is analogous to a conflict detection algorithm, but instead of predicting loss of separation, it predicts resolution advisories. It has been formally verified that, for a kinematic model of aircraft trajectories, this algorithm completely and correctly characterizes all encounter geometries between two aircraft that lead to a resolution advisory within a given lookahead time interval. The RA detection algorithm proposed in this paper is a fundamental component of a NASA sense-and-avoid concept for the integration of Unmanned Aircraft Systems in civil airspace.
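
    Structurally, RA detection works like conflict detection: linearly project both aircraft states over the lookahead interval and test distance thresholds at each step. The sketch below is a simplified illustration of that pattern only; it is not NASA's formally verified logic, and the thresholds and units are hypothetical:

    ```python
    import numpy as np

    def predicts_advisory(p_own, v_own, p_intr, v_intr,
                          horiz_thresh, vert_thresh, lookahead, dt=1.0):
        """Linearly project both aircraft (x, y in NM; z in ft) and flag
        the first time within the lookahead where both the horizontal and
        vertical thresholds are violated."""
        for t in np.arange(0.0, lookahead + dt, dt):
            own = p_own + t * v_own
            intr = p_intr + t * v_intr
            horiz = np.hypot(*(own[:2] - intr[:2]))
            vert = abs(own[2] - intr[2])
            if horiz < horiz_thresh and vert < vert_thresh:
                return True, t
        return False, None

    own = (np.array([0.0, 0.0, 30000.0]), np.array([0.12, 0.0, 0.0]))
    intr = (np.array([8.0, 0.0, 30500.0]), np.array([-0.12, 0.0, -10.0]))
    print(predicts_advisory(*own, *intr, horiz_thresh=1.0,
                            vert_thresh=600.0, lookahead=40.0))
    ```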

  3. Iterative phase retrieval algorithms. Part II: Attacking optical encryption systems.

    PubMed

    Guo, Changliang; Liu, Shi; Sheridan, John T

    2015-05-20

    The modified iterative phase retrieval algorithms developed in Part I [Guo et al., Appl. Opt.54, 4698 (2015)] are applied to perform known plaintext and ciphertext attacks on amplitude encoding and phase encoding Fourier-transform-based double random phase encryption (DRPE) systems. It is shown that the new algorithms can retrieve the two random phase keys (RPKs) perfectly. The performances of the algorithms are tested by using the retrieved RPKs to decrypt a set of different ciphertexts encrypted using the same RPKs. Significantly, it is also shown that the DRPE system is, under certain conditions, vulnerable to ciphertext-only attack, i.e., in some cases an attacker can decrypt DRPE data successfully when only the ciphertext is intercepted. PMID:26192505
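
    Iterative phase retrieval algorithms of the Gerchberg-Saxton family alternate between the two domains, enforcing the known amplitude in each; the attacks in this paper build modified versions of that loop to recover the DRPE phase keys. Below is a generic GS sketch with NumPy FFTs, shown as background for the technique rather than the specific modified algorithms of Part I:

    ```python
    import numpy as np

    def gerchberg_saxton(amp_obj, amp_fourier, n_iter=200, seed=0):
        """Recover a phase consistent with known object- and Fourier-plane
        amplitudes by alternating projections (classic GS loop)."""
        rng = np.random.default_rng(seed)
        field = amp_obj * np.exp(1j * rng.uniform(0, 2 * np.pi, amp_obj.shape))
        for _ in range(n_iter):
            F = np.fft.fft2(field)
            F = amp_fourier * np.exp(1j * np.angle(F))      # impose Fourier amplitude
            field = np.fft.ifft2(F)
            field = amp_obj * np.exp(1j * np.angle(field))  # impose object amplitude
        return np.angle(field)

    # Self-test: build a field with a known phase, then try to recover one
    truth = np.exp(1j * np.random.default_rng(1).uniform(0, 1, (32, 32)))
    phase = gerchberg_saxton(np.abs(truth), np.abs(np.fft.fft2(truth)))
    print(phase.shape)
    ```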

  4. Optimisation in radiotherapy. II: Programmed and inversion optimisation algorithms.

    PubMed

    Ebert, M

    1997-12-01

    This is the second article in a three-part examination of optimisation in radiotherapy. The previous article established the bases of optimisation in radiotherapy and the formulation of the optimisation problem. This paper outlines several algorithms that have been used in radiotherapy to search for the best irradiation strategy within the full set of possible strategies. Two principal classes of algorithm are considered: those associated with mathematical programming, which employ specific search techniques, linear programming-type searches, or artificial intelligence; and those which seek to perform a numerical inversion of the optimisation problem, finishing with deterministic iterative inversion. PMID:9503694

  5. Incremental refinement of a multi-user-detection algorithm (II)

    NASA Astrophysics Data System (ADS)

    Vollmer, M.; Götze, J.

    2003-05-01

    Multi-user detection is a technique proposed for mobile radio systems based on the CDMA principle, such as the upcoming UMTS. While offering an elegant solution to problems such as intra-cell interference, it demands very significant computational resources. In this paper, we present a high-level approach for reducing the required resources for performing multi-user detection in a 3GPP TDD multi-user system. This approach is based on a displacement representation of the parameters that describe the transmission system, and a generalized Schur algorithm that works on this representation. The Schur algorithm naturally leads to a highly parallel hardware implementation using CORDIC cells. It is shown that this hardware architecture can also be used to compute the initial displacement representation. It is very beneficial to introduce incremental refinement structures into the solution process, both at the algorithmic level and in the individual cells of the hardware architecture. We detail these approximations and present simulation results that confirm their effectiveness.
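
    CORDIC cells perform rotations with shifts and adds only, which is why they map so well onto parallel hardware for the Schur recursions described here. Below is a floating-point software model of the rotation-mode CORDIC iteration (hardware would use fixed point and precomputed arctangents); it is illustrative background, not the paper's architecture:

    ```python
    import math

    def cordic_rotate(x, y, angle, n_iter=24):
        """Rotate (x, y) by `angle` (radians) using CORDIC micro-rotations."""
        K = 1.0
        for i in range(n_iter):
            K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))   # gain compensation
        z = angle
        for i in range(n_iter):
            d = 1.0 if z >= 0 else -1.0                   # rotate toward z = 0
            x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
            z -= d * math.atan(2.0 ** -i)
        return x * K, y * K

    print(cordic_rotate(1.0, 0.0, math.pi / 4))   # ~ (0.7071, 0.7071)
    ```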

  6. Measurement of the inclusive jet cross section using the midpoint algorithm in Run II at CDF

    SciTech Connect

    Group, Robert Craig; /Florida U.

    2006-12-01

    A measurement is presented of the inclusive jet cross section using the Midpoint jet clustering algorithm in five different rapidity regions. This is the first analysis which measures the inclusive jet cross section using the Midpoint algorithm in the forward region of the detector. The measurement is based on more than 1 fb^-1 of integrated luminosity of Run II data taken by the CDF experiment at the Fermi National Accelerator Laboratory. The results are consistent with the predictions of perturbative quantum chromodynamics.

  7. Beam size and position measurement based on logarithm processing algorithm in HLS II

    NASA Astrophysics Data System (ADS)

    Chao-Cai, Cheng; Bao-Gen, Sun; Yong-Liang, Yang; Ze-Ran, Zhou; Ping, Lu; Fang-Fang, Wu; Ji-Gang, Wang; Kai, Tang; Qing, Luo; Hao, Li; Jia-Jun, Zheng; Qing-Ming, Duan

    2016-04-01

    A logarithm processing algorithm to measure the beam transverse size and position is proposed, and preliminary experimental results at the Hefei Light Source II (HLS II) are given. The algorithm is based on only 4 successive channels of the 16 anode channels of the multianode photomultiplier tube (MAPMT) R5900U-00-L16, which has a typical rise time of 0.6 ns and an effective area of 0.8×16 mm per anode channel. In this paper, we first elaborate the simulation results of the algorithm with and without channel inconsistency. Then we calibrate the channel inconsistency and verify the algorithm using a general current signal processor, Libera Photon, in a low-speed scheme. Finally, we obtain turn-by-turn beam size and position and calculate the vertical tune in a high-speed scheme. The experimental results show that the measured values fit the simulation results well after the channel differences are calibrated, and that the fractional part of the tune in the vertical direction is 0.3628, which is very close to the nominal value 0.3621. Supported by National Natural Science Foundation of China (11005105, 11175173)
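
    The following toy sketch illustrates the underlying idea: for a Gaussian beam sampled by a few anode channels, the logarithms of the channel amplitudes lie on a parabola whose coefficients give the beam position and size. The channel pitch and noise-free amplitudes are assumptions, and the paper's estimator is built from logarithms of adjacent-channel ratios rather than this equivalent parabola fit.

      import numpy as np

      pitch = 1.0                        # anode pitch [mm] (assumed)
      x = np.arange(4) * pitch           # centers of the 4 selected channels
      x0_true, sigma_true = 1.37, 0.8    # beam center and RMS size [mm]

      # noise-free channel amplitudes for a Gaussian beam profile
      A = np.exp(-(x - x0_true) ** 2 / (2 * sigma_true ** 2))

      # ln A is a parabola a*x^2 + b*x + c, so a 3-coefficient fit suffices:
      # a = -1/(2 sigma^2) and the vertex -b/(2a) is the beam center
      a, b, c = np.polyfit(x, np.log(A), 2)
      sigma = np.sqrt(-1.0 / (2.0 * a))
      x0 = -b / (2.0 * a)
      print(round(x0, 3), round(sigma, 3))   # 1.37, 0.8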

  8. Multi-Objective Genetic Programming with Redundancy-Regulations for Automatic Construction of Image Feature Extractors

    NASA Astrophysics Data System (ADS)

    Watchareeruetai, Ukrit; Matsumoto, Tetsuya; Takeuchi, Yoshinori; Kudo, Hiroaki; Ohnishi, Noboru

    We propose a new multi-objective genetic programming (MOGP) method for the automatic construction of image feature extraction programs (FEPs). The proposed method originates from a well-known multi-objective evolutionary algorithm (MOEA), NSGA-II. The key differences are that redundancy-regulation mechanisms are applied in three main processes of the MOGP, i.e., population truncation, sampling, and offspring generation, to improve population diversity as well as convergence rate. Experimental results indicate that the proposed MOGP-based FEP construction system outperforms two conventional MOEAs (NSGA-II and SPEA2) on a test problem. Moreover, we compared the programs constructed by the proposed MOGP with four human-designed object recognition programs. The results show that the constructed programs are better than two of the human-designed methods and comparable with the other two for the test problem.
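
    For reference, the NSGA-II machinery that the MOGP modifies -- fast non-dominated sorting and crowding distance -- can be sketched compactly. Minimization is assumed; this is a plain reimplementation of the textbook procedure, not the authors' redundancy-regulated variant.

      def dominates(p, q):
          """p dominates q (minimization): no worse in every objective,
          strictly better in at least one."""
          return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

      def fast_nondominated_fronts(objs):
          """Partition objective vectors into Pareto fronts F1, F2, ..."""
          n = len(objs)
          dominated_by = [set() for _ in range(n)]
          count = [0] * n                      # how many solutions dominate i
          for i in range(n):
              for j in range(n):
                  if dominates(objs[i], objs[j]):
                      dominated_by[i].add(j)
                  elif dominates(objs[j], objs[i]):
                      count[i] += 1
          fronts, current = [], [i for i in range(n) if count[i] == 0]
          while current:
              fronts.append(current)
              nxt = []
              for i in current:
                  for j in dominated_by[i]:
                      count[j] -= 1
                      if count[j] == 0:
                          nxt.append(j)
              current = nxt
          return fronts

      def crowding_distance(objs, front):
          """Average side-length of the cuboid around each solution in a front."""
          dist = {i: 0.0 for i in front}
          for k in range(len(objs[0])):
              order = sorted(front, key=lambda i: objs[i][k])
              dist[order[0]] = dist[order[-1]] = float("inf")
              span = objs[order[-1]][k] - objs[order[0]][k] or 1.0
              for a, i in enumerate(order[1:-1], 1):
                  dist[i] += (objs[order[a + 1]][k] - objs[order[a - 1]][k]) / span
          return dist

      pts = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]
      fronts = fast_nondominated_fronts(pts)
      print(fronts)                             # [[0, 1, 3], [2], [4]]
      print(crowding_distance(pts, fronts[0]))  # {0: inf, 1: 2.0, 3: inf}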

  9. Genetic Algorithm Based Framework for Automation of Stochastic Modeling of Multi-Season Streamflows

    NASA Astrophysics Data System (ADS)

    Srivastav, R. K.; Srinivasan, K.; Sudheer, K.

    2009-05-01

    bootstrap (MABB)) based on the explicit objective functions of minimizing the relative bias and relative root mean square error in estimating the storage capacity of the reservoir. The optimal parameter set of the hybrid model is obtained by searching over a multi-dimensional parameter space (involving simultaneous exploration of the parametric (PAR(1)) as well as the non-parametric (MABB) components). This is achieved using an efficient evolutionary-search-based optimization tool, namely the non-dominated sorting genetic algorithm II (NSGA-II). This approach helps reduce the drudgery involved in the manual selection of the hybrid model, in addition to predicting the basic summary statistics, dependence structure, marginal distribution and water-use characteristics accurately. The proposed optimization framework is used to model the multi-season streamflows of the River Beaver and River Weber in the USA. For both rivers, the proposed GA-based hybrid model (in which the parametric and non-parametric components are explored simultaneously) yields a much better prediction of the storage capacity than the MLE-based hybrid models (in which the hybrid model selection is done in two stages, probably resulting in a sub-optimal model). This framework can be further extended to include different linear/non-linear hybrid stochastic models at other temporal and spatial scales as well.

  10. Acceleration of block-matching algorithms using a custom instruction-based paradigm on a Nios II microprocessor

    NASA Astrophysics Data System (ADS)

    González, Diego; Botella, Guillermo; García, Carlos; Prieto, Manuel; Tirado, Francisco

    2013-12-01

    This contribution focuses on the optimization of the matching-based motion estimation algorithms widely used in video coding standards, using an Altera custom instruction-based paradigm and a combination of synchronous dynamic random access memory (SDRAM) with on-chip memory in Nios II processors. A complete profile of the algorithms is obtained before the optimization, which locates code leaks; afterward, a custom instruction set is created and added to the specific design, enhancing the original system. In addition, every possible memory combination between on-chip memory and SDRAM has been tested to achieve the best performance. The final throughputs of the complete designs are shown. This manuscript outlines a low-cost system, mapped using very-large-scale integration technology, which accelerates software algorithms by converting them into custom hardware logic blocks, and shows the best combination between on-chip memory and SDRAM for the Nios II processor.
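
    The kernel that such custom instructions typically accelerate is the sum-of-absolute-differences (SAD) block search. A plain NumPy reference implementation follows, with the block size and search range as illustrative parameters rather than values from the paper.

      import numpy as np

      def full_search_bm(ref, cur, bx, by, B=16, R=8):
          """Exhaustive block matching: find the motion vector minimizing the
          sum of absolute differences (SAD) between the current block at
          (by, bx) and candidates in the reference frame within +/-R pixels."""
          block = cur[by:by + B, bx:bx + B].astype(np.int32)
          best, best_mv = None, (0, 0)
          for dy in range(-R, R + 1):
              for dx in range(-R, R + 1):
                  y, x = by + dy, bx + dx
                  if y < 0 or x < 0 or y + B > ref.shape[0] or x + B > ref.shape[1]:
                      continue
                  sad = np.abs(ref[y:y + B, x:x + B].astype(np.int32) - block).sum()
                  if best is None or sad < best:
                      best, best_mv = sad, (dy, dx)
          return best_mv, best

      rng = np.random.default_rng(2)
      ref = rng.integers(0, 256, (64, 64), dtype=np.uint8)
      cur = np.roll(ref, shift=(3, -2), axis=(0, 1))   # frame shifted by (3, -2)
      print(full_search_bm(ref, cur, bx=24, by=24))    # ((-3, 2), 0): exact match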

  11. The 183-WSL Fast Rain Rate Retrieval Algorithm. Part II: Validation Using Ground Radar Measurements

    NASA Technical Reports Server (NTRS)

    Laviola, Sante; Levizzani, Vincenzo

    2014-01-01

    The Water vapour Strong Lines at 183 GHz (183-WSL) algorithm is a method for the retrieval of rain rates and precipitation type classification (convective/stratiform) that makes use of the water vapor absorption lines centered at 183.31 GHz of the Advanced Microwave Sounding Unit module B (AMSU-B) and of the Microwave Humidity Sounder (MHS) flying on the NOAA-15-18 and NOAA-19/Metop-A satellite series, respectively. The characteristics of this algorithm were described in Part I of this paper together with comparisons against analogous precipitation products. The focus of Part II is the analysis of the performance of the 183-WSL technique based on surface radar measurements. The ground truth dataset consists of 2.5 years of rainfall intensity fields from the NIMROD European radar network, which covers North-Western Europe. The investigation of the 183-WSL retrieval performance is based on a twofold approach: 1) the dichotomous statistic is used to evaluate the capabilities of the method to identify rain and no-rain clouds; 2) the accuracy statistic is applied to quantify the errors in the estimation of rain rates. The results reveal that the 183-WSL technique shows good skill in the detection of rain/no-rain areas and in the quantification of rain rate intensities. The categorical analysis shows annual values of the POD, FAR and HK indices varying in the ranges 0.80-0.82, 0.33-0.36 and 0.39-0.46, respectively. The RMSE value is 2.8 millimeters per hour for the whole period, despite an overestimation in the retrieved rain rates. Of note is the distribution of the 183-WSL monthly mean rain rate with respect to radar: the seasonal fluctuations of the average rainfalls measured by radar are reproduced by the 183-WSL. However, the retrieval method appears to suffer under winter seasonal conditions, especially when the soil is partially frozen and the surface emissivity changes drastically. This fact is verified by observing the discrepancy distribution diagrams, where the 183-WSL
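
    The categorical scores quoted above have standard definitions from the 2x2 contingency table of satellite versus radar rain detections; a small sketch with made-up detection flags:

      def categorical_scores(sat_rain, radar_rain):
          """Dichotomous verification of rain / no-rain detection against radar:
          hits a, false alarms b, misses c, correct negatives d."""
          a = sum(1 for s, r in zip(sat_rain, radar_rain) if s and r)
          b = sum(1 for s, r in zip(sat_rain, radar_rain) if s and not r)
          c = sum(1 for s, r in zip(sat_rain, radar_rain) if not s and r)
          d = sum(1 for s, r in zip(sat_rain, radar_rain) if not s and not r)
          pod = a / (a + c)          # probability of detection
          far = b / (a + b)          # false alarm ratio
          hk = pod - b / (b + d)     # Hanssen-Kuipers discriminant
          return pod, far, hk

      sat   = [1, 1, 0, 0, 1, 0, 1, 0, 0, 0]
      radar = [1, 0, 0, 0, 1, 1, 1, 0, 0, 0]
      print(categorical_scores(sat, radar))   # (0.75, 0.25, 0.5833...)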

  12. Optimal operational strategies for a day-ahead electricity market in the presence of market power using multi-objective evolutionary algorithms

    NASA Astrophysics Data System (ADS)

    Rodrigo, Deepal

    2007-12-01

    This dissertation introduces a novel approach for optimally operating a day-ahead electricity market not only by economically dispatching the generation resources but also by minimizing the influences of market manipulation attempts by the individual generator-owning companies while ensuring that the power system constraints are not violated. Since economic operation of the market conflicts with the individual profit maximization tactics such as market manipulation by generator-owning companies, a methodology that is capable of simultaneously optimizing these two competing objectives has to be selected. Although numerous previous studies have been undertaken on the economic operation of day-ahead markets and other independent studies have been conducted on the mitigation of market power, the operation of a day-ahead electricity market considering these two conflicting objectives simultaneously has not been undertaken previously. These facts provided the incentive and the novelty for this study. A literature survey revealed that many of the traditional solution algorithms convert multi-objective functions into either a single-objective function using weighting schemas or undertake optimization of one function at a time. Hence, these approaches do not truly optimize the multi-objectives concurrently. Due to these inherent deficiencies of the traditional algorithms, the use of alternative non-traditional solution algorithms for such problems has become popular and widely used. Of these, multi-objective evolutionary algorithms (MOEA) have received wide acceptance due to their solution quality and robustness. In the present research, three distinct algorithms were considered: a non-dominated sorting genetic algorithm II (NSGA II), a multi-objective tabu search algorithm (MOTS) and a hybrid of multi-objective tabu search and genetic algorithm (MOTS/GA). The accuracy and quality of the results from these algorithms for applications similar to the problem investigated here

  13. Efficient algorithm for locating and sizing series compensation devices in large power transmission grids: II. Solutions and applications

    NASA Astrophysics Data System (ADS)

    Frolov, Vladimir; Backhaus, Scott; Chertkov, Misha

    2014-10-01

    In a companion manuscript (Frolov et al 2014 New J. Phys. 16 art. no.), we developed a novel optimization method for the placement, sizing, and operation of flexible alternating current transmission system (FACTS) devices to relieve transmission network congestion. Specifically, we addressed FACTS that provide series compensation (SC) via modification of line inductance. In this sequel manuscript, this heuristic algorithm and its solutions are explored on a number of test cases: a 30-bus test network and a realistically-sized model of the Polish grid (˜2700 nodes and ˜3300 lines). The results from the 30-bus network are used to study the general properties of the solutions, including nonlocality and sparsity. The Polish grid is used to demonstrate the computational efficiency of the heuristics, which leverage sequential linearization of power flow constraints and cutting-plane methods that take advantage of the sparse nature of the SC placement solutions. With these approaches, the algorithm solves the Polish transmission grid in tens of seconds. We explore the utility of the algorithm by analyzing transmission networks congested by (i) uniform load growth, (ii) multiple overloaded configurations, and (iii) sequential generator retirements.

  14. Experimental analysis and mathematical prediction of Cd(II) removal by biosorption using support vector machines and genetic algorithms.

    PubMed

    Hlihor, Raluca Maria; Diaconu, Mariana; Leon, Florin; Curteanu, Silvia; Tavares, Teresa; Gavrilescu, Maria

    2015-05-25

    We investigated the bioremoval of Cd(II) in batch mode, using dead and living biomass of Trichoderma viride. Kinetic studies revealed three distinct stages of the biosorption process. The pseudo-second-order model and the Langmuir model described well the kinetics and equilibrium of the biosorption process, with a determination coefficient R(2)>0.99. The value of the mean free energy of adsorption, E, is less than 16 kJ/mol at 25 °C, suggesting that, at low temperature, the dominant process involved in Cd(II) biosorption by dead T. viride is chemical ion-exchange. With the temperature increasing to 40-50 °C, E values are above 16 kJ/mol, showing that the particle diffusion mechanism could play an important role in Cd(II) biosorption. The studies on T. viride growth in Cd(II) solutions and its bioaccumulation performance showed that the living biomass was able to bioaccumulate 100% of the Cd(II) from a 50 mg/L solution at pH 6.0. The influence of pH, biomass dosage, metal concentration, contact time and temperature on the bioremoval efficiency was evaluated to further assess the biosorption capability of the dead biosorbent. These complex influences were correlated by means of a modeling procedure consisting of a data-driven approach in which the principles of artificial intelligence were applied with the help of support vector machines (SVM) combined with genetic algorithms (GA). According to our data, the optimal working conditions for the removal of 98.91% of the Cd(II) by T. viride were found for an aqueous solution containing 26.11 mg/L Cd(II) as follows: pH 6.0, contact time of 3833 min, 8 g/L biosorbent, temperature 46.5 °C. The complete characterization of the bioremoval parameters indicates that T. viride is an excellent material for treating wastewater containing low concentrations of metal. PMID:25224921
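
    A hypothetical stand-in for the SVM-plus-GA modeling chain: fit a scikit-learn support vector regressor on synthetic (pH, contact time, dose, temperature) -> removal-% data, then let a small genetic algorithm search the input space for conditions maximizing the predicted removal. All data, bounds, and GA settings below are illustrative, not the study's.

      import numpy as np
      from sklearn.svm import SVR

      rng = np.random.default_rng(3)
      lo = np.array([2.0, 60.0, 1.0, 20.0])      # lower bounds of the 4 factors (assumed)
      hi = np.array([7.0, 4000.0, 10.0, 50.0])   # upper bounds (assumed)

      # synthetic removal-% response surface standing in for experimental data
      X = lo + rng.random((200, 4)) * (hi - lo)
      y = 100.0 / (1.0 + np.exp(-(X[:, 0] - 4.5))) * (1.0 - np.exp(-X[:, 1] / 800.0))
      model = SVR(C=100.0, gamma="scale").fit(X, y)

      pop = lo + rng.random((40, 4)) * (hi - lo)
      for _ in range(60):
          f = model.predict(pop)
          # tournament selection between random pairs
          i, j = rng.integers(0, len(pop), (2, len(pop)))
          parents = np.where((f[i] > f[j])[:, None], pop[i], pop[j])
          # uniform crossover with the reversed mating pool + Gaussian mutation
          mask = rng.random(pop.shape) < 0.5
          children = np.where(mask, parents, parents[::-1])
          children += rng.normal(0.0, 0.02, pop.shape) * (hi - lo)
          pop = np.clip(children, lo, hi)

      best = pop[np.argmax(model.predict(pop))]
      print(np.round(best, 2), round(float(model.predict(pop).max()), 1))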

  15. Geophysical inversion with a neighbourhood algorithm-II. Appraising the ensemble

    NASA Astrophysics Data System (ADS)

    Sambridge, Malcolm

    1999-09-01

    Monte Carlo direct search methods, such as genetic algorithms, simulated annealing, etc., are often used to explore a finite-dimensional parameter space. They require the solving of the forward problem many times, that is, making predictions of observables from an earth model. The resulting ensemble of earth models represents all `information' collected in the search process. Search techniques have been the subject of much study in geophysics; less attention is given to the appraisal of the ensemble. Often inferences are based on only a small subset of the ensemble, and sometimes a single member. This paper presents a new approach to the appraisal problem. To our knowledge this is the first time the general case has been addressed, that is, how to infer information from a complete ensemble, previously generated by any search method. The essence of the new approach is to use the information in the available ensemble to guide a resampling of the parameter space. This requires no further solving of the forward problem, but from the new `resampled' ensemble we are able to obtain measures of resolution and trade-off in the model parameters, or any combinations of them. The new ensemble inference algorithm is illustrated on a highly non-linear wave-form inversion problem. It is shown how the computation time and memory requirements scale with the dimension of the parameter space and size of the ensemble. The method is highly parallel, and may easily be distributed across several computers. Since little is assumed about the initial ensemble of earth models, the technique is applicable to a wide variety of situations. For example, it may be applied to perform `error analysis' using the ensemble generated by a genetic algorithm, or any other direct search method.

  16. Combinatorial theory of the semiclassical evaluation of transport moments II: Algorithmic approach for moment generating functions

    SciTech Connect

    Berkolaiko, G.; Kuipers, J.

    2013-12-15

    Electronic transport through chaotic quantum dots exhibits universal behaviour which can be understood through the semiclassical approximation. Within the approximation, calculation of transport moments reduces to codifying classical correlations between scattering trajectories. These can be represented as ribbon graphs and we develop an algorithmic combinatorial method to generate all such graphs with a given genus. This provides an expansion of the linear transport moments for systems both with and without time reversal symmetry. The computational implementation is then able to progress several orders further than previous semiclassical formulae as well as those derived from an asymptotic expansion of random matrix results. The patterns observed also suggest a general form for the higher orders.

  17. High Spectral Resolution MODIS Algorithms for Ocean Chlorophyll in Case II Waters

    NASA Technical Reports Server (NTRS)

    Carder, Kendall L.

    2004-01-01

    The Case 2 chlorophyll a algorithm is based on a semi-analytical, bio-optical model of remote sensing reflectance, R(sub rs)(lambda), where R(sub rs)(lambda) is defined as the water-leaving radiance, L(sub w)(lambda), divided by the downwelling irradiance just above the sea surface, E(sub d)(lambda,0(+)). The R(sub rs)(lambda) model (Section 3) has two free variables, the absorption coefficient due to phytoplankton at 675 nm, a(sub phi)(675), and the absorption coefficient due to colored dissolved organic matter (CDOM) or gelbstoff at 400 nm, a(sub g)(400). The R(sub rs)(lambda) model has several parameters that are fixed or can be specified based on the region and season of the MODIS scene. These control the spectral shapes of the optical constituents of the model. R(sub rs)(lambda(sub i)) values from the MODIS data processing system are placed into the model, the model is inverted, and a(sub phi)(675), a(sub g)(400) (MOD24), and chlorophyll a (MOD21, Chlor_a_3) are computed. Algorithm development is initially focused on tropical, subtropical, and summer temperate environments, and the model is parameterized in Section 4 for three different bio-optical domains: (1) high ratios of photoprotective pigments to chlorophyll and low self-shading, which for brevity, we designate as 'unpackaged'; (2) low ratios and high self-shading, which we designate as 'packaged'; and (3) a transitional or global-average type. These domains can be identified from space by comparing sea-surface temperature to nitrogen-depletion temperatures for each domain (Section 5). Algorithm errors of more than 45% are reduced to errors of less than 30% with this approach, with the greatest effect occurring at the eastern and polar boundaries of the basins. Section 6 provides an expansion of bio-optical domains into high-latitude waters. The 'fully packaged' pigment domain is introduced in this section along with a revised strategy for implementing these variable packaging domains. Chlor_a_3 values derived semi

  18. Noise characterization of block-iterative reconstruction algorithms: II. Monte Carlo simulations.

    PubMed

    Soares, Edward J; Glick, Stephen J; Hoppin, John W

    2005-01-01

    In Soares et al. (2000), the ensemble statistical properties of the rescaled block-iterative expectation-maximization (RBI-EM) reconstruction algorithm and rescaled block-iterative simultaneous multiplicative algebraic reconstruction technique (RBI-SMART) were derived. Included in this analysis were the special cases of RBI-EM, maximum-likelihood EM (ML-EM) and ordered-subset EM (OS-EM), and the special case of RBI-SMART, SMART. Explicit expressions were found for the ensemble mean, covariance matrix, and probability density function of RBI reconstructed images, as a function of iteration number. The theoretical formulations relied on one approximation, namely that the noise in the reconstructed image was small compared to the mean image. In this paper, we evaluate the predictions of the theory by using Monte Carlo methods to calculate the sample statistical properties of each algorithm and then compare the results with the theoretical formulations. In addition, the validity of the approximation will be justified. PMID:15638190
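
    For readers unfamiliar with the baseline being analyzed, the ML-EM special case reduces to a single multiplicative update per iteration. A toy NumPy sketch with a synthetic system matrix and Poisson data (sizes and data are illustrative):

      import numpy as np

      rng = np.random.default_rng(4)
      A = rng.random((30, 10))                 # toy system (projection) matrix
      x_true = rng.random(10) * 5.0
      y = rng.poisson(A @ x_true).astype(float)   # Poisson measurements

      x = np.ones(10)                          # strictly positive initial image
      sens = A.T @ np.ones(30)                 # sensitivity image A^T 1
      for _ in range(200):
          # ML-EM update: x <- x * A^T(y / Ax) / A^T 1
          x *= (A.T @ (y / np.maximum(A @ x, 1e-12))) / sens

      print(np.round(x, 2))        # noisy estimate approaching the generating image
      print(np.round(x_true, 2))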

  19. Parallel Algorithms and Software for Nuclear, Energy, and Environmental Applications. Part II: Multiphysics Software

    SciTech Connect

    Derek Gaston; Luanjing Guo; Glen Hansen; Hai Huang; Richard Johnson; Dana Knoll; Chris Newman; Hyeong Kae Park; Robert Podgorney; Michael Tonks; Richard Williamson

    2012-09-01

    This paper is the second part of a two part sequence on multiphysics algorithms and software. The first [1] focused on the algorithms; this part treats the multiphysics software framework and applications based on it. Tight coupling is typically designed into the analysis application at inception, as such an application is strongly tied to a composite nonlinear solver that arrives at the final solution by treating all equations simultaneously. The application must also take care to minimize both time and space error between the physics, particularly if more than one mesh representation is needed in the solution process. This paper presents an application framework that was specifically designed to support tightly coupled multiphysics analysis. The Multiphysics Object Oriented Simulation Environment (MOOSE) is based on the Jacobian-free Newton-Krylov (JFNK) method combined with physics-based preconditioning to provide the underlying mathematical structure for applications. The report concludes with the presentation of a host of nuclear, energy, and environmental applications that demonstrate the efficacy of the approach and the utility of a well-designed multiphysics framework.
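
    The JFNK idea underlying MOOSE -- Newton's method in which Jacobian-vector products are approximated by finite differences of the residual inside a Krylov solver, so the Jacobian is never formed -- can be demonstrated on a toy coupled system with SciPy's matrix-free newton_krylov. The two "physics" residuals below are invented for illustration.

      import numpy as np
      from scipy.optimize import newton_krylov

      def residual(u):
          """Residual of two tightly coupled toy 'physics' fields."""
          t, c = u[:5], u[5:]                  # "temperature" and "concentration"
          r = np.empty_like(u)
          r[:5] = t - 1.0 - 0.3 * c ** 2       # physics 1 depends on c
          r[5:] = c - 0.5 - 0.2 * np.sin(t)    # physics 2 depends on t
          return r

      u0 = np.zeros(10)
      u = newton_krylov(residual, u0, f_tol=1e-10)   # Jacobian-free Newton-Krylov
      print(np.round(u, 6), np.abs(residual(u)).max())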

  20. Graph Theoretic Foundations of Multibody Dynamics Part II: Analysis and Algorithms.

    PubMed

    Jain, Abhinandan

    2011-10-01

    This second of a two-part paper uses concepts from graph theory to obtain a deeper understanding of the mathematical foundations of multibody dynamics. The first part [7] established the block-weighted adjacency (BWA) matrix structure of spatial operators associated with serial and tree-topology multibody system dynamics, and introduced the notions of spatial kernel operators (SKO) and spatial propagation operators (SPO). This paper builds upon these connections to show that key analytical results and computational algorithms are a direct consequence of these structural properties and require minimal assumptions about the specific nature of the underlying multibody system. We formalize this by introducing SKO models for general tree-topology multibody systems. We show that key analytical results, including mass matrix factorization, inversion, and decomposition, hold for all SKO models. It is also shown that key low-order scatter/gather recursive computational algorithms follow directly from these abstract-level analytical results. Application examples illustrate the concrete use of these general results. The paper also describes a general recipe for developing SKO models. The abstract nature of SKO models allows the application of these techniques to a very broad class of multibody systems. PMID:22102791

  1. Measurement of the top quark mass in the dilepton channel using the neutrino weighting algorithm at CDF II

    NASA Astrophysics Data System (ADS)

    Sabik, Simon

    We measure the top quark mass using approximately 359 pb^-1 of data from pp¯ collisions at √s = 1.96 TeV at CDF Run II. We select tt¯ candidates that are consistent with two W bosons decaying to a charged lepton and a neutrino following tt¯ → W+W-bb¯ → l+l- νν¯ bb¯. Only one of the two charged leptons is required to be identified as an electron or a muon candidate, while the other is simply a well-measured track. We use a neutrino weighting algorithm, which weights each hypothesis for the neutrino directions, to reconstruct a top quark mass in each event. We compare the resulting distribution to Monte Carlo templates to obtain a top quark mass of 170.8 +6.9/-6.5 (stat) +/- 4.6 (syst) GeV/c^2.

  2. Multi-objective optimization of discrete time-cost tradeoff problem in project networks using non-dominated sorting genetic algorithm

    NASA Astrophysics Data System (ADS)

    Shahriari, Mohammadreza

    2016-03-01

    The time-cost tradeoff problem is one of the most important and applicable problems in project scheduling. Many factors can force managers to crash the project duration: early utilization, early commissioning and operation, improving the project cash flow, avoiding unfavorable weather conditions, compensating for delays, and so on. Since extra resources must be allocated to shorten the project finishing time, and project managers intend to spend the lowest possible amount of money while achieving the maximum crashing time, both direct and indirect costs are influenced, and the time value of money comes into play. When the starting activities of a project are crashed, the extra investment is tied up until the end date of the project; when the final activities are crashed, the extra investment is tied up for a much shorter period. This study presents a two-objective mathematical model that balances compressing the project time against activity delays, providing a suitable decision tool for managers constrained by available facilities and project due dates. The model is also brought closer to real-world conditions by considering a nonlinear objective function and the time value of money. The presented problem was solved using NSGA-II, and the effect of time compression on the non-dominated set is reported.

  3. Biphasic indentation of articular cartilage--II. A numerical algorithm and an experimental study.

    PubMed

    Mow, V C; Gibbs, M C; Lai, W M; Zhu, W B; Athanasiou, K A

    1989-01-01

    Part I (Mak et al., 1987, J. Biomechanics 20, 703-714) presented the theoretical solutions for the biphasic indentation of articular cartilage under creep and stress-relaxation conditions. In this study, using the creep solution, we developed an efficient numerical algorithm to compute all three material coefficients of cartilage in situ on the joint surface from the indentation creep experiment. With this method we determined the average values of the aggregate modulus, Poisson's ratio and permeability for young bovine femoral condylar cartilage in situ to be HA = 0.90 MPa, vs = 0.39 and k = 0.44 x 10(-15) m4/Ns respectively, and those for patellar groove cartilage to be HA = 0.47 MPa, vs = 0.24, k = 1.42 x 10(-15) m4/Ns. One surprising finding from this study is that the in situ Poisson's ratio of cartilage (0.13-0.45) may be much less than those determined from measurements performed on excised osteochondral plugs (0.40-0.49) reported in the literature. We also found the permeability of patellar groove cartilage to be several times higher than that of femoral condyle cartilage. These findings may have important implications for understanding the functional behavior of cartilage in situ and for methods used to determine the elastic moduli of cartilage using indentation experiments. PMID:2613721

  4. Cost versus life cycle assessment-based environmental impact optimization of drinking water production plants.

    PubMed

    Capitanescu, F; Rege, S; Marvuglia, A; Benetto, E; Ahmadi, A; Gutiérrez, T Navarrete; Tiruta-Barna, L

    2016-07-15

    Empowering decision makers with cost-effective solutions for reducing the environmental burden of industrial processes, at both the design and operation stages, is nowadays a major worldwide concern. The paper addresses this issue for the sector of drinking water production plants (DWPPs), seeking optimal solutions that trade off operation cost and life cycle assessment (LCA)-based environmental impact while satisfying outlet water quality criteria. This leads to a challenging bi-objective constrained optimization problem, which relies on a computationally expensive, intricate process-modelling simulator of the DWPP and has to be solved with a limited computational budget. Since mathematical programming methods are unusable in this case, the paper examines the performance in tackling these challenges of six off-the-shelf state-of-the-art global meta-heuristic optimization algorithms suitable for such simulation-based optimization, namely the Strength Pareto Evolutionary Algorithm (SPEA2), Non-dominated Sorting Genetic Algorithm (NSGA-II), Indicator-based Evolutionary Algorithm (IBEA), Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D), Differential Evolution (DE), and Particle Swarm Optimization (PSO). The results of the optimization reveal that good reductions in both the operating cost and the environmental impact of the DWPP can be obtained. Furthermore, NSGA-II outperforms the other competing algorithms, while MOEA/D and DE perform unexpectedly poorly. PMID:27107954

  5. Optimal Bi-Objective Redundancy Allocation for Systems Reliability and Risk Management.

    PubMed

    Govindan, Kannan; Jafarian, Ahmad; Azbari, Mostafa E; Choi, Tsan-Ming

    2016-08-01

    In the big data era, systems reliability is critical to effective systems risk management. In this paper, a novel multiobjective approach, with hybridization of a known algorithm called NSGA-II and an adaptive population-based simulated annealing (APBSA) method is developed to solve the systems reliability optimization problems. In the first step, to create a good algorithm, we use a coevolutionary strategy. Since the proposed algorithm is very sensitive to parameter values, the response surface method is employed to estimate the appropriate parameters of the algorithm. Moreover, to examine the performance of our proposed approach, several test problems are generated, and the proposed hybrid algorithm and other commonly known approaches (i.e., MOGA, NRGA, and NSGA-II) are compared with respect to four performance measures: 1) mean ideal distance; 2) diversification metric; 3) percentage of domination; and 4) data envelopment analysis. The computational studies have shown that the proposed algorithm is an effective approach for systems reliability and risk management. PMID:25622333
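
    As a baseline for the annealing component, here is a plain simulated-annealing sketch for a toy redundancy allocation problem (a series system with parallel redundancy per subsystem under a budget constraint). The reliabilities, costs, and cooling schedule are illustrative, and the paper's APBSA additionally adapts a population of solutions rather than a single state.

      import math, random

      r = [0.70, 0.85, 0.75, 0.80]     # component reliabilities per subsystem (assumed)
      c = [2.0, 3.0, 1.5, 2.5]         # component costs (assumed)
      BUDGET = 40.0

      def reliability(n):
          out = 1.0
          for rk, nk in zip(r, n):
              out *= 1.0 - (1.0 - rk) ** nk    # parallel subsystem: 1-(1-r)^n
          return out

      def cost(n):
          return sum(ck * nk for ck, nk in zip(c, n))

      random.seed(5)
      n, best, T = [1, 1, 1, 1], [1, 1, 1, 1], 1.0
      for step in range(5000):
          k = random.randrange(4)
          cand = n[:]
          cand[k] = max(1, cand[k] + random.choice((-1, 1)))   # local move
          if cost(cand) <= BUDGET:
              delta = reliability(cand) - reliability(n)
              # accept improvements always, deteriorations with Boltzmann probability
              if delta >= 0 or random.random() < math.exp(delta / T):
                  n = cand
                  if reliability(n) > reliability(best):
                      best = n[:]
          T *= 0.999                    # geometric cooling schedule

      print(best, round(reliability(best), 4), cost(best))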

  6. Improving Limit Surface Search Algorithms in RAVEN Using Acceleration Schemes: Level II Milestone

    SciTech Connect

    Alfonsi, Andrea; Rabiti, Cristian; Mandelli, Diego; Cogliati, Joshua Joseph; Sen, Ramazan Sonat; Smith, Curtis Lee

    2015-07-01

    The RAVEN code is becoming a comprehensive tool to perform Probabilistic Risk Assessment (PRA); Uncertainty Quantification (UQ) and Propagation; and Verification and Validation (V&V). The RAVEN code is being developed to support the Risk-Informed Safety Margin Characterization (RISMC) pathway by developing an advanced set of methodologies and algorithms for use in advanced risk analysis. The RISMC approach uses system simulator codes applied to stochastic analysis tools. The fundamental idea behind this coupling approach is to perturb (by employing sampling strategies) the timing and sequencing of events, the internal parameters of the system codes (i.e., uncertain parameters of the physics model) and the initial conditions, in order to estimate value ranges and associated probabilities of figures of merit of interest for engineering and safety (e.g., core damage probability, etc.). This approach, applied to complex systems such as nuclear power plants, requires performing a series of computationally expensive simulation runs. The large computational burden is caused by the large set of (uncertain) parameters characterizing those systems. Consequently, exploring the uncertain/parametric domain with a good level of confidence is generally not affordable, considering the limited computational resources that are currently available. In addition, the recent tendency to develop newer tools, characterized by higher accuracy and larger computational demands (compared with the presently used legacy codes, developed decades ago), has made this issue even more compelling. In order to overcome these limitations, the strategy for the exploration of the uncertain/parametric space needs to make the best use of the computational resources, focusing the computational effort on those regions of the uncertain/parametric space that are “interesting” (e.g., risk-significant regions of the input space) with respect to the targeted Figures Of Merit (FOM): for example, the failure of the system

  7. The Sloan Digital Sky Survey-II Supernova Survey:Search Algorithm and Follow-up Observations

    SciTech Connect

    Sako, Masao; Bassett, Bruce; Becker, Andrew; Cinabro, David; DeJongh, Don Frederic; Depoy, D.L.; Doi, Mamoru; Garnavich, Peter M.; Craig, Hogan, J.; Holtzman, Jon; Jha, Saurabh; Konishi, Kohki; Lampeitl, Hubert; Marriner, John; Miknaitis, Gajus; Nichol, Robert C.; Prieto, Jose Luis; Richmond, Michael W.; Schneider, Donald P.; Smith, Mathew; SubbaRao, Mark; /Chicago U. /Tokyo U. /Tokyo U. /South African Astron. Observ. /Tokyo U. /Apache Point Observ. /Seoul Natl. U. /Apache Point Observ. /Apache Point Observ. /Tokyo U. /Seoul Natl. U. /Apache Point Observ. /Apache Point Observ. /Apache Point Observ. /Apache Point Observ. /Apache Point Observ. /Apache Point Observ. /Apache Point Observ. /Apache Point Observ.

    2007-09-14

    The Sloan Digital Sky Survey-II Supernova Survey has identified a large number of new transient sources in a 300 deg^2 region along the celestial equator during its first two seasons of a three-season campaign. Multi-band (ugriz) light curves were measured for most of the sources, which include solar system objects, Galactic variable stars, active galactic nuclei, supernovae (SNe), and other astronomical transients. The imaging survey is augmented by an extensive spectroscopic follow-up program to identify SNe, measure their redshifts, and study the physical conditions of the explosions and their environment through spectroscopic diagnostics. During the survey, light curves are rapidly evaluated to provide an initial photometric type of the SNe, and a selected sample of sources are targeted for spectroscopic observations. In the first two seasons, 476 sources were selected for spectroscopic observations, of which 403 were identified as SNe. For the Type Ia SNe, the main driver for the Survey, our photometric typing and targeting efficiency is 90%. Only 6% of the photometric SN Ia candidates were spectroscopically classified as non-SN Ia instead, and the remaining 4% resulted in low signal-to-noise, unclassified spectra. This paper describes the search algorithm and the software, and the real-time processing of the SDSS imaging data. We also present the details of the supernova candidate selection procedures and strategies for follow-up spectroscopic and imaging observations of the discovered sources.

  8. On the modeling of equilibrium twin interfaces in a single-crystalline magnetic shape memory alloy sample. II: numerical algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Jiong; Steinmann, Paul

    2016-05-01

    This is part II of this series of papers. The aim of the current paper was to solve the governing PDE system derived in part I numerically, such that the procedure of variant reorientation in a magnetic shape memory alloy (MSMA) sample can be simulated. The sample to be considered in this paper has a 3D cuboid shape and is subject to typical magnetic and mechanical loading conditions. To investigate the demagnetization effect on the sample's response, the surrounding space of the sample is taken into account. By considering the different properties of the independent variables, an iterative numerical algorithm is proposed to solve the governing system. The related mathematical formulas and some techniques facilitating the numerical calculations are introduced. Based on the results of numerical simulations, the distributions of some important physical quantities (e.g., magnetization, demagnetization field, and mechanical stress) in the sample can be determined. Furthermore, the properties of configurational force on the twin interfaces are investigated. By virtue of the twin interface movement criteria derived in part I, the whole procedure of magnetic field- or stress-induced variant reorientations in the MSMA sample can be properly simulated.

  9. Dimension reduction of decision variables for multireservoir operation: A spectral optimization model

    NASA Astrophysics Data System (ADS)

    Chen, Duan; Leon, Arturo S.; Gibson, Nathan L.; Hosseini, Parnian

    2016-01-01

    Optimizing the operation of a multireservoir system is challenging due to the high dimension of the decision variables that lead to a large and complex search space. A spectral optimization model (SOM), which transforms the decision variables from time domain to frequency domain, is proposed to reduce the dimensionality. The SOM couples a spectral dimensionality-reduction method called Karhunen-Loeve (KL) expansion within the routine of Nondominated Sorting Genetic Algorithm (NSGA-II). The KL expansion is used to represent the decision variables as a series of terms that are deterministic orthogonal functions with undetermined coefficients. The KL expansion can be truncated into fewer significant terms, and consequently, fewer coefficients by a predetermined number. During optimization, operators of the NSGA-II (e.g., crossover) are conducted only on the coefficients of the KL expansion rather than the large number of decision variables, significantly reducing the search space. The SOM is applied to the short-term operation of a 10-reservoir system in the Columbia River of the United States. Two scenarios are considered herein, the first with 140 decision variables and the second with 3360 decision variables. The hypervolume index is used to evaluate the optimization performance in terms of convergence and diversity. The evaluation of optimization performance is conducted for both conventional optimization model (i.e., NSGA-II without KL) and the SOM with different number of KL terms. The results show that the number of decision variables can be greatly reduced in the SOM to achieve a similar or better performance compared to the conventional optimization model. For the scenario with 140 decision variables, the optimal performance of the SOM model is found with six KL terms. For the scenario with 3360 decision variables, the optimal performance of the SOM model is obtained with 11 KL terms.
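
    The dimensionality-reduction step can be sketched in a few lines: build a KL (eigenfunction) basis from an ensemble of trajectories, keep the dominant modes, and represent a length-140 release schedule by a handful of coefficients that an optimizer would then manipulate. The ensemble below is synthetic, not the Columbia River data.

      import numpy as np

      rng = np.random.default_rng(6)
      T = 140
      t = np.linspace(0.0, 1.0, T)
      # synthetic ensemble of smooth trajectories with small random perturbations
      ensemble = np.array([np.sin(2 * np.pi * (t + rng.random())) +
                           0.05 * rng.standard_normal(T) for _ in range(300)])

      cov = np.cov(ensemble, rowvar=False)       # (140, 140) sample covariance
      eigval, eigvec = np.linalg.eigh(cov)
      basis = eigvec[:, ::-1][:, :6]             # 6 dominant KL modes

      series = ensemble[0]
      coeff = basis.T @ (series - ensemble.mean(0))   # 6 numbers instead of 140
      recon = ensemble.mean(0) + basis @ coeff

      err = np.linalg.norm(series - recon) / np.linalg.norm(series)
      print(round(float(err), 3))   # small relative error using 6 of 140 coordinates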

  10. A conflict-resolution model for the conjunctive use of surface and groundwater resources that considers water-quality issues: a case study.

    PubMed

    Bazargan-Lari, Mohammad Reza; Kerachian, Reza; Mansoori, Abbas

    2009-03-01

    The conjunctive use of surface and groundwater resources is one alternative for optimal use of available water resources in arid and semiarid regions. The optimization models proposed for conjunctive water allocation are often complicated, nonlinear, and computationally intensive, especially when different stakeholders are involved that have conflicting interests. In this article, a new conflict-resolution methodology developed for the conjunctive use of surface and groundwater resources using Nondominated Sorting Genetic Algorithm II (NSGA-II) and Young Conflict-Resolution Theory (YCRT) is presented. The proposed model is applied to the Tehran aquifer in the Tehran metropolitan area of Iran. Stakeholders in the study area have conflicting interests related to water supply with acceptable quality, pumping costs, groundwater quality, and groundwater table fluctuations. In the proposed methodology, MODFLOW and MT3D groundwater quantity and quality simulation models are linked with the NSGA-II optimization model to develop Pareto fronts among the objectives. The best solutions on the Pareto fronts are then selected using YCRT. The results of the proposed model show the significance of applying an integrated conflict-resolution approach to conjunctive use of surface and groundwater resources in the study area. PMID:18773238

  12. Blind decorrelation and deconvolution algorithm for multiple-input multiple-output system: II. Analysis and simulation

    NASA Astrophysics Data System (ADS)

    Chen, Da-Ching; Yu, Tommy; Yao, Kung; Pottie, Gregory J.

    1999-11-01

    For single-input multiple-output (SIMO) systems, blind deconvolution based on second-order statistics has been shown to be promising, given that the sources and channels meet certain assumptions. In our previous paper we extended the work to multiple-input multiple-output (MIMO) systems by introducing a blind deconvolution algorithm to remove all channel dispersion, followed by a blind decorrelation algorithm to separate the different sources from their instantaneous mixture. In this paper we first explore more details embedded in our algorithm. Then we present simulation results showing that our algorithm is applicable to MIMO systems excited by a broad class of signals such as speech, music and digitally modulated symbols.

  13. Development of a same-side kaon tagging algorithm of B^0_s decays for measuring delta m_s at CDF II

    SciTech Connect

    Menzemer, Stephanie; /Heidelberg U.

    2006-06-01

    The authors developed a Same-Side Kaon Tagging algorithm to determine the production flavor of B{sub s}{sup 0} mesons. Until the B{sub s}{sup 0} mixing frequency is clearly observed, the performance of the Same-Side Kaon Tagging algorithm cannot be measured on data and has to be determined from Monte Carlo simulation. Data and Monte Carlo agreement has been evaluated for both the B{sub s}{sup 0} and the high-statistics B{sup +} and B{sup 0} modes. Extensive systematic studies were performed to quantify potential discrepancies between data and Monte Carlo. The final optimized tagging algorithm exploits the particle identification capability of the CDF II detector. It achieves a tagging performance of {epsilon}D{sup 2} = 4.0{sub -1.2}{sup +0.9} on the B{sub s}{sup 0} {yields} D{sub s}{sup -} {pi}{sup +} sample. The Same-Side Kaon Tagging algorithm presented here has been applied to the ongoing B{sub s}{sup 0} mixing analysis, and has provided a factor of 3-4 increase in the effective statistical size of the sample. This improvement results in the first direct measurement of the B{sub s}{sup 0} mixing frequency.

  14. Optimization of Process Parameters of Hybrid Laser-Arc Welding onto 316L Using Ensemble of Metamodels

    NASA Astrophysics Data System (ADS)

    Zhou, Qi; Jiang, Ping; Shao, Xinyu; Gao, Zhongmei; Cao, Longchao; Yue, Chen; Li, Xiongbin

    2016-04-01

    Hybrid laser-arc welding (LAW) provides an effective way to overcome problems commonly encountered during either laser or arc welding such as brittle phase formation, cracking, and porosity. The process parameters of LAW have significant effects on the bead profile and hence the quality of joint. This paper proposes an optimization methodology by combining non-dominated sorting genetic algorithm (NSGA-II) and ensemble of metamodels (EMs) to address multi-objective process parameter optimization in LAW onto 316L. Firstly, Taguchi experimental design is adopted to generate the experimental samples. Secondly, the relationships between process parameters (i.e., laser power (P), welding current (A), distance between laser and arc (D), and welding speed (V)) and the bead geometries are fitted using EMs. The comparative results show that the EMs can take advantage of the prediction ability of each stand-alone metamodel and thus decrease the risk of adopting inappropriate metamodels. Then, the NSGA-II is used to facilitate design space exploration. Besides, the main effects and contribution rates of process parameters on bead profile are analyzed. Eventually, the verification experiments of the obtained optima are carried out and compared with the un-optimized weld seam for bead geometries, weld appearances, and welding defects. Results illustrate that the proposed hybrid approach exhibits great capability of improving welding quality in LAW.

  16. SEBAL-A: A remote sensing ET algorithm that accounts for advection with limited data. Part II: Test for transferability

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Because the Surface Energy Balance Algorithm for Land (SEBAL) tends to underestimate ET under conditions of advection, the model was modified by incorporating an advection component as part of the energy usable for crop evapotranspiration (ET). The modification involved the estimation of advected en...

  17. Efficient Algorithm for Locating and Sizing Series Compensation Devices in Large Transmission Grids: Solutions and Applications (PART II)

    SciTech Connect

    Frolov, Vladimir; Backhaus, Scott N.; Chertkov, Michael

    2014-01-14

    In a companion manuscript, we developed a novel optimization method for placement, sizing, and operation of Flexible Alternating Current Transmission System (FACTS) devices to relieve transmission network congestion. Specifically, we addressed FACTS that provide Series Compensation (SC) via modification of line inductance. In this manuscript, this heuristic algorithm and its solutions are explored on a number of test cases: a 30-bus test network and a realistically-sized model of the Polish grid (~2700 nodes and ~3300 lines). The results on the 30-bus network are used to study the general properties of the solutions, including non-locality and sparsity. The Polish grid is used as a demonstration of the computational efficiency of the heuristics, which leverage sequential linearization of power flow constraints and cutting plane methods that take advantage of the sparse nature of the SC placement solutions. Using these approaches, the algorithm is able to solve an instance of the Polish grid in tens of seconds. We explore the utility of the algorithm by analyzing transmission networks congested by (a) uniform load growth, (b) multiple overloaded configurations, and (c) sequential generator retirements.

  19. A massively parallel algorithm for the collision probability calculations in the Apollo-II code using the PVM library

    SciTech Connect

    Stankovski, Z.

    1995-12-31

    The collision probability method in neutron transport, as applied to 2D geometries, consumes a great amount of computer time; for a typical 2D assembly calculation, about 90% of the computing time is spent in the collision probability evaluations. Consequently, RZ or 3D calculations become prohibitive. In this paper the author presents a simple but efficient parallel algorithm based on the message-passing host/node programming model. Parallelization was applied to the energy group treatment. Such an approach permits parallelization of the existing code and requires only limited modifications. Sequential/parallel computer portability is preserved, which is a necessary condition for an industrial code. Sequential performance is also preserved. The algorithm is implemented on a CRAY 90 coupled to a 128-processor T3D computer, a 16-processor IBM SP1 and a network of workstations, using the public domain PVM library. The tests were executed for a 2D geometry with the standard 99-group library. All results were very satisfactory, the best ones with the IBM SP1. Because of the heterogeneity of the workstation network, the author did not expect high performance from this architecture. The same source code was used for all computers. A more impressive advantage of this algorithm will appear in the calculations of the SAPHYR project (with the future fine multigroup library of about 8000 groups) on a massively parallel computer using several hundred processors.

  1. ZEUS-2D: A Radiation Magnetohydrodynamics Code for Astrophysical Flows in Two Space Dimensions. II. The Magnetohydrodynamic Algorithms and Tests

    NASA Astrophysics Data System (ADS)

    Stone, James M.; Norman, Michael L.

    1992-06-01

    In this, the second of a series of three papers, we continue a detailed description of ZEUS-2D, a numerical code for the simulation of fluid dynamical flows in astrophysics including a self-consistent treatment of the effects of magnetic fields and radiation transfer. In this paper, we give a detailed description of the magnetohydrodynamical (MHD) algorithms in ZEUS-2D. The recently developed constrained transport (CT) algorithm is implemented for the numerical evolution of the components of the magnetic field for MHD simulations. This formalism guarantees the numerically evolved field components will satisfy the divergence-free constraint at all times. We find, however, that the method used to compute the electromotive forces must be chosen carefully to propagate accurately all modes of MHD wave families (in particular shear Alfvén waves). A new method of computing the electromotive force is developed using the method of characteristics (MOC). It is demonstrated through the results of an extensive series of MHD test problems that the resulting hybrid MOC-CT method provides for the accurate evolution of all modes of MHD wave families.

  2. [Algorithm for estimating chlorophyll-a concentration in case II water body based on bio-optical model].

    PubMed

    Yang, Wei; Chen, Jin; Matsushita, Bunkei

    2009-01-01

    In the present study, a novel retrieval method for estimating chlorophyll-a concentration in case II waters based on a bio-optical model was proposed and tested with data measured in the laboratory. A series of reflectance spectra, for which the concentration of each sample constituent (for example, chlorophyll-a and NPSS) was obtained from accurate experiments, was used to calculate the absorption and backscattering coefficients of the constituents of the case II waters. The non-negative least squares method was then applied to calculate the concentrations of chlorophyll-a and non-phytoplankton suspended sediments (NPSS). Green algae were first collected from Lake Kasumigaura in Japan and then cultured in the laboratory. The reflectance spectra of waters with different amounts of phytoplankton and NPSS were measured in a dark room using a FieldSpec Pro VNIR (Analytical Spectral Devices Inc., Boulder, CO, USA). In order to validate whether this method can be applied to multispectral data (for example, Landsat TM), the spectra measured in the laboratory were resampled to Landsat TM bands 1, 2, 3 and 4. Different combinations of TM bands were compared to derive the most appropriate wavelengths for detecting chlorophyll-a in case II water for green algae. The results indicated that the combination of TM bands 2, 3 and 4 achieved much better accuracy than other combinations, and the estimated concentration of chlorophyll-a was significantly more accurate than that of empirical methods. It is expected that this method can be directly applied to real remotely sensed images because it is based on a bio-optical model. PMID:19385201
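
    A toy version of the inversion step: treat the measured spectrum as a non-negative mixture of constituent spectra and solve for the concentrations with non-negative least squares. The endmember spectra below are synthetic stand-ins, not measured specific absorption or backscattering curves.

      import numpy as np
      from scipy.optimize import nnls

      wl = np.linspace(400.0, 750.0, 71)               # wavelengths [nm]
      # synthetic stand-ins for constituent spectra (illustrative shapes only)
      a_chl = (np.exp(-0.5 * ((wl - 675.0) / 15.0) ** 2)
               + 0.6 * np.exp(-0.5 * ((wl - 440.0) / 25.0) ** 2))
      a_npss = np.exp(-0.008 * (wl - 400.0))
      a_water = 0.01 * np.exp(0.01 * (wl - 400.0))

      E = np.column_stack([a_chl, a_npss, a_water])    # endmember matrix
      conc_true = np.array([3.2, 10.0, 1.0])
      rng = np.random.default_rng(7)
      measured = E @ conc_true + 0.01 * rng.standard_normal(len(wl))

      conc, resid = nnls(E, measured)                  # non-negative least squares
      print(np.round(conc, 2))                         # close to [3.2, 10.0, 1.0]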

  3. Algorithms and Algorithmic Languages.

    ERIC Educational Resources Information Center

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  4. Algorithm for evaluation of temperature distribution of a vapor cell in a diode-pumped alkali laser system (part II).

    PubMed

    Han, Juhong; Wang, You; Cai, He; An, Guofei; Zhang, Wei; Xue, Liangping; Wang, Hongyuan; Zhou, Jie; Jiang, Zhigang; Gao, Ming

    2015-04-01

    With high efficiency and small thermally-induced effects in the near-infrared wavelength region, a diode-pumped alkali laser (DPAL) is regarded as combining the major advantages of solid-state lasers and gas-state lasers and obviating their main disadvantages at the same time. Studying the temperature distribution in the cross-section of an alkali-vapor cell is critical to realize high-powered DPAL systems for both static and flowing states. In this report, a theoretical algorithm has been built to investigate the features of a flowing-gas DPAL system by uniting procedures in kinetics, heat transfer, and fluid dynamic together. The thermal features and output characteristics have been simultaneously obtained for different gas velocities. The results have demonstrated the great potential of DPALs in the extremely high-powered laser operation. PMID:25968778

  5. Using the Iterative Input variable Selection (IIS) algorithm to assess the relevance of ENSO teleconnections patterns on hydro-meteorological processes at the catchment scale

    NASA Astrophysics Data System (ADS)

    Beltrame, Ludovica; Carbonin, Daniele; Galelli, Stefano; Castelletti, Andrea

    2014-05-01

    Population growth, water scarcity and climate change are three major factors making the understanding of variations in water availability increasingly important. Reliable medium-to-long range forecasts of streamflows are therefore essential to the development of water management policies. To this purpose, recent modelling efforts have been dedicated to seasonal and inter-annual streamflow forecasts based on the teleconnection between "at-site" hydro-meteorological processes and low-frequency climate fluctuations, such as the El Niño Southern Oscillation (ENSO). This work proposes a novel procedure for first detecting the impact of ENSO on hydro-meteorological processes at the catchment scale, and then assessing the potential of ENSO indicators for building medium-to-long range statistical streamflow prediction models. The core of this procedure is the adoption of the Iterative Input variable Selection (IIS) algorithm, which is employed to find the most relevant forcings of streamflow variability and derive predictive models based on the selected inputs. The procedure is tested on the Columbia (USA) and Williams (Australia) Rivers, where ENSO influence has been well documented, and then adopted on the unexplored Red River basin (Vietnam). Results show that IIS outcomes on the Columbia and Williams Rivers are consistent with the results of previous studies, and that ENSO indicators can be effectively used to enhance the capabilities of streamflow forecast models. The experiments on the Red River basin show that the ENSO influence is less pronounced there, inducing little effect on the basin's hydro-meteorological processes.

  6. An efficient hybrid approach for multiobjective optimization of water distribution systems

    NASA Astrophysics Data System (ADS)

    Zheng, Feifei; Simpson, Angus R.; Zecchin, Aaron C.

    2014-05-01

    An efficient hybrid approach for the design of water distribution systems (WDSs) with multiple objectives is described in this paper. The objectives are the minimization of the network cost and maximization of the network resilience. A self-adaptive multiobjective differential evolution (SAMODE) algorithm has been developed, in which control parameters are automatically adapted by means of evolution instead of the presetting of fine-tuned parameter values. In the proposed method, a graph algorithm is first used to decompose a looped WDS into a shortest-distance tree (T) or forest, and chords (Ω). The original two-objective optimization problem is then approximated by a series of single-objective optimization problems of the T to be solved by nonlinear programming (NLP), thereby providing an approximate Pareto optimal front for the original whole network. Finally, the solutions at the approximate front are used to seed the SAMODE algorithm to find an improved front for the original entire network. The proposed approach is compared with two other conventional full-search optimization methods (the SAMODE algorithm and the NSGA-II) that seed the initial population with purely random solutions based on three case studies: a benchmark network and two real-world networks with multiple demand loading cases. Results show that (i) the proposed NLP-SAMODE method consistently generates better-quality Pareto fronts than the full-search methods with significantly improved efficiency; and (ii) the proposed SAMODE algorithm (no parameter tuning) exhibits better performance than the NSGA-II with calibrated parameter values in efficiently offering optimal fronts.

  7. Optimizing an experimental design for a CSEM experiment: methodology and synthetic tests

    NASA Astrophysics Data System (ADS)

    Roux, E.; Garcia, X.

    2014-04-01

    Optimizing an experimental design is a compromise between maximizing the information we get about the target and limiting the cost of the experiment, subject to a wide range of constraints. We present a statistical algorithm for experiment design that combines linearized inverse theory with a stochastic optimization technique. Linearized inverse theory is used to quantify the quality of a given experiment design, while a genetic algorithm (GA) enables us to examine a wide range of possible surveys. The particularity of our algorithm is the use of the multi-objective GA NSGA II, which searches for designs that fit several objective functions (OFs) simultaneously. This ability of NSGA II helps us define an experiment design that focuses on a specified target area. We present a test of our algorithm using a 1-D electrical subsurface structure. The model we use represents a simple but realistic scenario in the context of CO2 sequestration that motivates this study. Our first synthetic test using a single OF shows that a limited number of well-distributed observations from a chosen design have the potential to resolve the given model. This synthetic test also points out the importance of a well-chosen OF, depending on the target. In order to improve these results, we show how the combination of two OFs using a multi-objective GA enables us to determine an experimental design that maximizes information about the reservoir layer. Finally, we present several tests of our statistical algorithm in more challenging environments by exploring the influence of noise, specific site characteristics and its potential for reservoir monitoring.
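
    Several of the records in this list rely on an off-the-shelf NSGA-II. As a point of reference, the snippet below shows a minimal NSGA-II run using the pymoo library on the standard ZDT1 test problem; the library choice and the test problem are our assumptions, since the paper does not state which implementation was used.

    ```python
    from pymoo.algorithms.moo.nsga2 import NSGA2
    from pymoo.optimize import minimize
    from pymoo.problems import get_problem

    # Minimal NSGA-II run on a standard bi-objective benchmark (ZDT1).
    problem = get_problem("zdt1")
    algorithm = NSGA2(pop_size=100)
    res = minimize(problem, algorithm, ("n_gen", 200), seed=1, verbose=False)
    print(res.F[:5])  # a sample of the approximated Pareto front
    ```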

  8. Quantifying tradeoffs between water availability, water quality, food production and bioenergy production in a Central German Catchment

    NASA Astrophysics Data System (ADS)

    Volk, M.; Lautenbach, S.; Strauch, M.; Whittaker, G. W.

    2012-04-01

    Worldwide, increasing bioenergy production is on the political agenda. It is well known that bioenergy production comes at a cost: several trade-offs with food production, water quality and quantity issues, biodiversity and ecosystem services are known. However, a quantification of these trade-offs is still missing. Hence, our study presents an analysis of trade-offs between water availability, water quality, bioenergy production and food production in a Central German agricultural catchment. Our analysis is based on coupling SWAT with a multi-objective genetic algorithm (NSGA II). The genetic algorithm is used to find Pareto-optimal configurations of crop rotation schemes. Pareto-optimality describes solutions in which an objective cannot be improved without degrading other objectives; a minimal dominance filter implementing this criterion is sketched below. This allows us to quantify the costs associated with several levels of increased bioenergy production and to derive recommendations for policy makers.
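
    A minimal dominance filter (plain NumPy, all objectives minimized; illustrative only, not the SWAT/NSGA II setup of the study) that implements the Pareto-optimality criterion quoted in the abstract:

    ```python
    import numpy as np

    def non_dominated(F):
        """Boolean mask of Pareto-optimal rows of F (all objectives minimized).
        A point is dominated if another point is <= in every objective
        and strictly < in at least one."""
        n = F.shape[0]
        keep = np.ones(n, dtype=bool)
        for i in range(n):
            dominated = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
            keep[i] = not dominated.any()
        return keep

    F = np.array([[1.0, 5.0], [2.0, 3.0], [3.0, 4.0], [4.0, 1.0]])
    print(non_dominated(F))  # [ True  True False  True ]: (3,4) is dominated by (2,3)
    ```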

  9. Time-response-based evolutionary optimization

    NASA Astrophysics Data System (ADS)

    Avigad, Gideon; Goldvard, Alex; Salomon, Shaul

    2015-04-01

    Solutions to engineering problems are often evaluated by considering their time responses; thus, each solution is associated with a function. To avoid optimizing the functions, such optimization is usually carried out by setting auxiliary objectives (e.g. minimal overshoot). Therefore, in order to find different optimal solutions, alternative auxiliary optimization objectives may have to be defined prior to optimization. In the current study, a new approach is suggested that avoids the need to define auxiliary objectives. An algorithm is suggested that enables the optimization of solutions according to their transient behaviours. For this optimization, the functions are sampled and the problem is posed as a multi-objective problem. The recently introduced algorithm NSGA-II-PSA is adopted and tailored to solve it. Mathematical as well as engineering problems are utilized to explain and demonstrate the approach and its applicability to real life problems. The results highlight the advantages of avoiding the definition of artificial objectives.

  10. Multi-objective optimization of combustion, performance and emission parameters in a jatropha biodiesel engine using Non-dominated sorting genetic algorithm-II

    NASA Astrophysics Data System (ADS)

    Dhingra, Sunil; Bhushan, Gian; Dubey, Kashyap Kumar

    2014-03-01

    The present work studies and identifies the different variables that affect the output parameters involved in a single-cylinder direct injection compression ignition (CI) engine using jatropha biodiesel. Response surface methodology based on central composite design (CCD) is used to design the experiments. Mathematical models are developed for combustion parameters (brake specific fuel consumption (BSFC) and peak cylinder pressure (Pmax)), the performance parameter brake thermal efficiency (BTE) and emission parameters (CO, NOx, unburnt HC and smoke) using regression techniques. These regression equations are further utilized for simultaneous optimization of combustion (BSFC, Pmax), performance (BTE) and emission (CO, NOx, HC, smoke) parameters. As the objective is to maximize BTE and minimize BSFC, Pmax, CO, NOx, HC and smoke, a multiobjective optimization problem is formulated. Non-dominated sorting genetic algorithm-II is used to predict the Pareto-optimal sets of solutions. Experiments are performed at suitable optimal solutions for predicting the combustion, performance and emission parameters to check the adequacy of the proposed model. The Pareto-optimal sets of solutions can be used as guidelines for end users to select an optimal combination of engine output and emission parameters depending upon their own requirements.

  11. Multi-Disciplinary Design Optimization of Hypersonic Air-Breathing Vehicle

    NASA Astrophysics Data System (ADS)

    Wu, Peng; Tang, Zhili; Sheng, Jianda

    2016-06-01

    A 2D hypersonic vehicle shape with an idealized scramjet is designed at a cruise regime: Mach number (Ma) = 8.0, angle of attack (AOA) = 0 deg and altitude (H) = 30 km. A multi-objective design optimization of the 2D vehicle is then carried out using the Pareto Non-dominated Sorting Genetic Algorithm II (NSGA-II). In the optimization process, the flow around the air-breathing vehicle is simulated with the inviscid Euler equations using FLUENT software, and the combustion in the combustor is modeled by a methodology based on the well-known combined effects of area-varying pipe flow and heat-transfer pipe flow. Optimization results reveal trade-offs among the total pressure recovery coefficient of the forebody, the lift-to-drag ratio of the vehicle, the specific impulse of the scramjet engine and the maximum temperature on the surface of the vehicle.

  12. Solving a bi-objective mathematical model for location-routing problem with time windows in multi-echelon reverse logistics using metaheuristic procedure

    NASA Astrophysics Data System (ADS)

    Ghezavati, V. R.; Beigi, M.

    2016-06-01

    During the last decade, the stringent pressures from environmental and social requirements have spurred an interest in designing a reverse logistics (RL) network. The success of a logistics system may depend on the decisions of the facilities locations and vehicle routings. The location-routing problem (LRP) simultaneously locates the facilities and designs the travel routes for vehicles among established facilities and existing demand points. In this paper, the location-routing problem with time window (LRPTW) and homogeneous fleet type and designing a multi-echelon, and capacitated reverse logistics network, are considered which may arise in many real-life situations in logistics management. Our proposed RL network consists of hybrid collection/inspection centers, recovery centers and disposal centers. Here, we present a new bi-objective mathematical programming (BOMP) for LRPTW in reverse logistic. Since this type of problem is NP-hard, the non-dominated sorting genetic algorithm II (NSGA-II) is proposed to obtain the Pareto frontier for the given problem. Several numerical examples are presented to illustrate the effectiveness of the proposed model and algorithm. Also, the present work is an effort to effectively implement the ɛ-constraint method in GAMS software for producing the Pareto-optimal solutions in a BOMP. The results of the proposed algorithm have been compared with the ɛ-constraint method. The computational results show that the ɛ-constraint method is able to solve small-size instances to optimality within reasonable computing times, and for medium-to-large-sized problems, the proposed NSGA-II works better than the ɛ-constraint.
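
    The ɛ-constraint method used as the comparison baseline scalarizes the problem: one objective is minimized while the other is bounded by a sweep of ɛ values, tracing points along the Pareto front. A hedged toy-problem sketch (SciPy; the two quadratic objectives are invented for illustration, not the LRPTW model):

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Toy bi-objective problem: minimize f1 subject to f2(x) <= eps.
    f1 = lambda x: x[0] ** 2 + x[1] ** 2
    f2 = lambda x: (x[0] - 2) ** 2 + (x[1] - 2) ** 2

    pareto = []
    for eps in np.linspace(0.5, 8.0, 8):
        res = minimize(f1, x0=[1.0, 1.0],
                       constraints=[{"type": "ineq",
                                     "fun": lambda x, e=eps: e - f2(x)}])
        if res.success:
            pareto.append((f1(res.x), f2(res.x)))
    print(pareto)  # sampled points along the Pareto front
    ```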

  13. Explore the impacts of river flow and quality on biodiversity for water resources management by AI techniques

    NASA Astrophysics Data System (ADS)

    Chang, Fi-John; Tsai, Wen-Ping; Chang, Li-Chiu

    2016-04-01

    Water resources development is very challenging in Taiwan due to its diverse geographic environment and climatic conditions. To pursue sustainable water resources development, rationality and integrity are essential for water resources planning. River water quality and flow regimes are closely related to each other and affect river ecosystems simultaneously. This study aims to explore the complex impacts of water quality and flow regimes on the fish community in order to comprehend the situation of the eco-hydrological system in the Danshui River of northern Taiwan. To devise an effective and comprehensive strategy for sustainable water resources management, this study first models fish diversity by implementing a hybrid artificial neural network (ANN) based on long-term observational heterogeneity data of water quality, stream flow and fish species in the river. We then use stream flow to estimate the loss of dissolved oxygen based on back-propagation neural networks (BPNNs). Finally, the non-dominated sorting genetic algorithm II (NSGA-II) is established for river flow management over the Shihmen Reservoir, the main reservoir in this study area. In addition to satisfying the water demands of human beings and ecosystems, we also consider water quality for river flow management. The ecosystem requirement takes the form of maximizing fish diversity, which can be estimated by the hybrid ANN. The human requirement is to provide a higher satisfaction degree of water supply, while the water quality requirement is to reduce the loss of dissolved oxygen in the river among flow stations. The results demonstrate that the proposed methodology can offer diversified alternative strategies for reservoir operation and improve reservoir operation strategies for producing downstream flows that better meet both human and ecosystem needs as well as maintain river water quality. Keywords: Artificial intelligence (AI), Artificial neural networks (ANNs), Non-dominated sorting genetic algorithm II (NSGA-II).

  14. Investigation on Reservoir Operation of Agricultural Water Resources Management for Drought Mitigation

    NASA Astrophysics Data System (ADS)

    Cheng, C. L.

    2015-12-01

    Investigation on Reservoir Operation of Agricultural Water Resources Management for Drought Mitigation. Chung-Lien Cheng, Wen-Ping Tsai, Fi-John Chang*, Department of Bioenvironmental Systems Engineering, National Taiwan University, Da-An District, Taipei 10617, Taiwan, ROC. Corresponding author: Fi-John Chang (changfj@ntu.edu.tw). Abstract: In Taiwan, population growth and economic development have led to considerable and increasing demands for natural water resources in the last decades. Under such conditions, water shortage problems have frequently occurred in northern Taiwan in recent years, such that water is usually transferred from irrigation sectors to public sectors during drought periods. Facing the uneven spatial and temporal distribution of water resources and the problems of increasing water shortages, it is a primary and critical issue to simultaneously satisfy multiple water uses through adequate reservoir operations for sustainable water resources management. Therefore, we intend to build an intelligent reservoir operation system for the assessment of agricultural water resources management strategies in response to food security during drought periods. This study first uses the grey system to forecast the agricultural water demand during February and April for assessing future agricultural water demands. In the second part, we build an intelligent water resources system by using the non-dominated sorting genetic algorithm-II (NSGA-II), an optimization tool, to search the water allocation series based on the different water demand scenarios created in the first part and optimize the water supply operation for different water sectors. The results can serve as a reference guide for adequate agricultural water resources management during drought periods. Keywords: Non-dominated sorting genetic algorithm-II (NSGA-II); grey system; optimization; agricultural water resources management.

  15. Multi-objective optimization for deepwater dynamic umbilical installation analysis

    NASA Astrophysics Data System (ADS)

    Yang, HeZhen; Wang, AiJun; Li, HuaJun

    2012-08-01

    We suggest a method of multi-objective optimization based on approximation model for dynamic umbilical installation. The optimization aims to find out the most cost effective size, quantity and location of buoyancy modules for umbilical installation while maintaining structural safety. The approximation model is constructed by the design of experiment (DOE) sampling and is utilized to solve the problem of time-consuming analyses. The non-linear dynamic analyses considering environmental loadings are executed on these sample points from DOE. Non-dominated Sorting Genetic Algorithm (NSGA-II) is employed to obtain the Pareto solution set through an evolutionary optimization process. Intuitionist fuzzy set theory is applied for selecting the best compromise solution from Pareto set. The optimization results indicate this optimization strategy with approximation model and multiple attribute decision-making method is valid, and provide the optimal deployment method for deepwater dynamic umbilical buoyancy modules.

  16. GEOFIM: A WebGIS application for integrated geophysical modeling in active volcanic regions

    NASA Astrophysics Data System (ADS)

    Currenti, Gilda; Napoli, Rosalba; Sicali, Antonino; Greco, Filippo; Negro, Ciro Del

    2014-09-01

    We present GEOFIM (GEOphysical Forward/Inverse Modeling), a WebGIS application for the integrated interpretation of multiparametric geophysical observations. It has been developed to jointly interpret scalar and vector magnetic data, gravity data, as well as geodetic data from GPS, tiltmeter, strainmeter and InSAR observations, recorded in active volcanic areas. GEOFIM gathers a library of analytical solutions that provides an estimate of the geophysical signals due to perturbations in the thermal and stress state of the volcano. The integrated geophysical modeling can be performed either by simple trial-and-error forward modeling or by an inversion procedure based on the NSGA-II algorithm. The software capability was tested on the multiparametric data set recorded during the 2008-2009 Etna flank eruption onset. The results encourage exploiting this approach to develop a near-real-time warning system for a quantitative model-based assessment of geophysical observations in areas where different parameters are routinely monitored.

  17. A parametric optimization procedure for the suction system of reciprocating compressors

    NASA Astrophysics Data System (ADS)

    Ferreira, W. M.; Silva, E.; Deschamps, C. J.

    2015-08-01

    The design of the suction system of compressors is of fundamental importance for efficiency and reliability. This paper reports a method developed to optimize the suction system of a reciprocating compressor, by using the genetic algorithm NSGA-II. The isentropic and volumetric efficiencies are used as objective functions, while the bending fatigue stress is used as a constraint to meet valve reliability. A simulation model of the compression cycle was coupled to the optimization procedure, with correlations for flow and force effective areas in terms of geometric parameters of the suction valve. Valve dynamics was numerically solved via the finite element method. The proposed optimization procedure was applied to a reciprocating compressor adopted for household refrigeration, identifying suction system geometries more efficient than the original design.

  18. Fatigue design of a cellular phone folder using regression model-based multi-objective optimization

    NASA Astrophysics Data System (ADS)

    Kim, Young Gyun; Lee, Jongsoo

    2016-08-01

    In a folding cellular phone, the folding device is repeatedly opened and closed by the user, which eventually results in fatigue damage, particularly to the front of the folder. Hence, it is important to improve the safety and endurance of the folder while also reducing its weight. This article presents an optimal design for the folder front that maximizes its fatigue endurance while minimizing its thickness. Design data for analysis and optimization were obtained experimentally using a test jig. Multi-objective optimization was carried out using a nonlinear regression model. Three regression methods were employed: back-propagation neural networks, logistic regression and support vector machines. The AdaBoost ensemble technique was also used to improve the approximation. Two-objective Pareto-optimal solutions were identified using the non-dominated sorting genetic algorithm (NSGA-II). Finally, a numerically optimized solution was validated against experimental product data, in terms of both fatigue endurance and thickness index.

  19. Calibrating a Rainfall-Runoff and Routing Model for the Continental United States

    NASA Astrophysics Data System (ADS)

    Jankowfsky, S.; Li, S.; Assteerawatt, A.; Tillmanns, S.; Hilberts, A.

    2014-12-01

    Catastrophe risk models are widely used in the insurance industry to estimate the cost of risk. The models consist of hazard models linked to vulnerability and financial loss models. In flood risk models, the hazard model generates inundation maps. In order to develop countrywide inundation maps for different return periods, a rainfall-runoff and routing model is run using stochastic rainfall data. The simulated discharge and runoff are then input to a two-dimensional inundation model, which produces the flood maps. In order to obtain realistic flood maps, the rainfall-runoff and routing models have to be calibrated with observed discharge data. The rainfall-runoff model applied here is a semi-distributed model based on the Topmodel (Beven and Kirkby, 1979) approach, which includes additional snowmelt and evapotranspiration models. The routing model is based on the Muskingum-Cunge (Cunge, 1969) approach and includes the simulation of lakes and reservoirs using the linear reservoir approach; a single routing step of the classical Muskingum scheme is sketched below. Both models were calibrated using the multiobjective NSGA-II (Deb et al., 2002) genetic algorithm with NLDAS forcing data and around 4500 USGS discharge gauges for the period 1979-2013. Additional gauges having no data after 1979 were calibrated using CPC rainfall data. The model performed well in wetter regions and shows the difficulty of simulating areas with sinks, such as karstic areas, and dry areas. Beven, K., Kirkby, M., 1979. A physically based, variable contributing area model of basin hydrology. Hydrol. Sci. Bull. 24 (1), 43-69. Cunge, J.A., 1969. On the subject of a flood propagation computation method (Muskingum method), J. Hydr. Research, 7(2), 205-230. Deb, K., Pratap, A., Agarwal, S., Meyarivan, T., 2002. A fast and elitist multiobjective genetic algorithm: NSGA-II, IEEE Transactions on evolutionary computation, 6(2), 182-197.
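
    As referenced above, here is a single routing step of the classical Muskingum scheme, which the Muskingum-Cunge method extends by deriving K and X from channel properties. The parameter values and inflow hydrograph are invented for illustration, not taken from the calibrated model.

    ```python
    def muskingum_route(inflow, K=2.0, X=0.2, dt=1.0):
        """Route an inflow hydrograph through one reach:
        O_t = C0*I_t + C1*I_{t-1} + C2*O_{t-1}, with C0 + C1 + C2 = 1."""
        denom = 2 * K * (1 - X) + dt
        c0 = (dt - 2 * K * X) / denom
        c1 = (dt + 2 * K * X) / denom
        c2 = (2 * K * (1 - X) - dt) / denom
        out = [inflow[0]]
        for t in range(1, len(inflow)):
            out.append(c0 * inflow[t] + c1 * inflow[t - 1] + c2 * out[-1])
        return out

    print(muskingum_route([10, 40, 80, 60, 30, 15, 10]))  # attenuated, delayed peak
    ```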

  20. A multi-stakeholder framework for urban runoff quality management: Application of social choice and bargaining techniques.

    PubMed

    Ghodsi, Seyed Hamed; Kerachian, Reza; Zahmatkesh, Zahra

    2016-04-15

    In this paper, an integrated framework is proposed for urban runoff management. To control and improve runoff quality and quantity, Low Impact Development (LID) practices are utilized. In order to determine the LIDs' areas and locations, the Non-dominated Sorting Genetic Algorithm-II (NSGA-II), which considers three objective functions of minimizing runoff volume, runoff pollution and implementation cost of LIDs, is utilized. In this framework, the Storm Water Management Model (SWMM) is used for stream flow simulation. The non-dominated solutions provided by the NSGA-II are considered as management scenarios. To select the most preferred scenario, interactions among the main stakeholders in the study area with conflicting utilities are incorporated by utilizing bargaining models including a non-cooperative game, Nash model and social choice procedures of Borda count and approval voting. Moreover, a new social choice procedure, named pairwise voting method, is proposed and applied. Based on each conflict resolution approach, a scenario is identified as the ideal solution providing the LIDs' areas, locations and implementation cost. The proposed framework is applied for urban water quality and quantity management in the northern part of Tehran metropolitan city, Iran. Results show that the proposed pairwise voting method tends to select a scenario with a higher percentage of reduction in TSS (Total Suspended Solid) load and runoff volume, in comparison with the Borda count and approval voting methods. Besides, the Nash method presents a management scenario with the highest cost for LIDs' implementation and the maximum values for percentage of runoff volume reduction and TSS removal. The results also signify that selection of an appropriate management scenario by stakeholders in the study area depends on the available financial resources and the relative importance of runoff quality improvement in comparison with reducing the runoff volume. PMID:26849322
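
    The Borda count used in the comparison is simple to state in code. A minimal sketch follows; the stakeholder rankings over scenarios S1-S3 are invented for illustration, not taken from the Tehran case study.

    ```python
    from collections import defaultdict

    # Each stakeholder ranks the management scenarios from best to worst.
    rankings = {
        "municipality": ["S1", "S3", "S2"],
        "environment":  ["S3", "S1", "S2"],
        "residents":    ["S3", "S2", "S1"],
    }

    scores = defaultdict(int)
    for order in rankings.values():
        for pos, scenario in enumerate(order):
            scores[scenario] += len(order) - 1 - pos  # top rank earns most points

    print(max(scores, key=scores.get))  # "S3" wins under the Borda count
    ```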

  1. Detecting multiple periodicities in observational data with the multifrequency periodogram—II. Frequency Decomposer, a parallelized time-series analysis algorithm

    NASA Astrophysics Data System (ADS)

    Baluev, Roman V.

    2013-11-01

    This is a parallelized algorithm performing a decomposition of a noisy time series into a number of sinusoidal components. The algorithm analyses all suspicious periodicities that can be revealed, including the ones that look like an alias or noise at a glance, but later may prove to be a real variation. After the selection of the initial candidates, the algorithm performs a complete pass through all their possible combinations and computes the rigorous multifrequency statistical significance for each such frequency tuple. The largest combinations that still survived this thresholding procedure represent the outcome of the analysis.
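
    The core loop of such a decomposition can be illustrated by iterative prewhitening: locate the strongest periodogram peak, fit and subtract that sinusoid, and search again. The sketch below (plain NumPy, irregular sampling) is a simplified stand-in; the published algorithm additionally tests frequency combinations and computes rigorous multifrequency significances.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    t = np.sort(rng.uniform(0, 100, 400))                 # irregular time grid
    y = np.sin(2 * np.pi * 0.17 * t) + 0.4 * np.sin(2 * np.pi * 0.52 * t)
    y += 0.2 * rng.standard_normal(t.size)                # noisy two-tone signal

    freqs = np.linspace(0.01, 1.0, 5000)
    for _ in range(2):
        # Crude periodogram: |sum y * exp(-2*pi*i*f*t)| over a frequency grid.
        power = [np.abs(np.sum(y * np.exp(-2j * np.pi * f * t))) for f in freqs]
        f0 = freqs[int(np.argmax(power))]
        # Least-squares fit of the sinusoid at f0, then subtract (prewhiten).
        X = np.column_stack([np.cos(2 * np.pi * f0 * t), np.sin(2 * np.pi * f0 * t)])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        y = y - X @ coef
        print(round(f0, 3))                               # ~0.17, then ~0.52
    ```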

  2. Solar Occultation Retrieval Algorithm Development

    NASA Technical Reports Server (NTRS)

    Lumpe, Jerry D.

    2004-01-01

    This effort addresses the comparison and validation of currently operational solar occultation retrieval algorithms, and the development of generalized algorithms for future application to multiple platforms. initial development of generalized forward model algorithms capable of simulating transmission data from of the POAM II/III and SAGE II/III instruments. Work in the 2" quarter will focus on: completion of forward model algorithms, including accurate spectral characteristics for all instruments, and comparison of simulated transmission data with actual level 1 instrument data for specific occultation events.

  3. Multiobjective training of artificial neural networks for rainfall-runoff modeling

    NASA Astrophysics Data System (ADS)

    de Vos, N. J.; Rientjes, T. H. M.

    2008-08-01

    This paper presents results on the application of various optimization algorithms for the training of artificial neural network rainfall-runoff models. Multilayered feed-forward networks for forecasting discharge from two mesoscale catchments in different climatic regions have been developed for this purpose. The performances of the multiobjective algorithms Multi Objective Shuffled Complex Evolution Metropolis-University of Arizona (MOSCEM-UA) and Nondominated Sorting Genetic Algorithm II (NSGA-II) have been compared to the single-objective Levenberg-Marquardt and Genetic Algorithm for training of these models. Performance has been evaluated by means of a number of commonly applied objective functions and also by investigating the internal weights of the networks. Additionally, the effectiveness of a new objective function called mean squared derivative error, which penalizes models for timing errors and noisy signals, has been explored. The results show that the multiobjective algorithms give competitive results compared to the single-objective ones. Performance measures and posterior weight distributions of the various algorithms suggest that multiobjective algorithms are more consistent in finding good optima than are single-objective algorithms. However, results also show that it is difficult to conclude if any of the algorithms is superior in terms of accuracy, consistency, and reliability. Besides the training algorithm, network performance is also shown to be sensitive to the choice of objective function(s), and including more than one objective function proves to be helpful in constraining the neural network training.
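
    One plausible reading of the mean squared derivative error is an ordinary squared error applied to first differences, so that timing errors and noisy signals are penalized. A hedged one-liner capturing that reading (our interpretation, not necessarily the authors' exact formulation):

    ```python
    import numpy as np

    def msde(observed, simulated):
        # Squared error on first differences: mismatched slopes cost extra.
        return np.mean((np.diff(observed) - np.diff(simulated)) ** 2)

    obs = np.array([1.0, 2.0, 5.0, 3.0, 2.0])
    sim = np.array([1.0, 2.2, 4.6, 3.1, 2.0])
    print(msde(obs, sim))
    ```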

  4. A COMPUTATIONALLY-BASED IDENTIFICATION ALGORITHM FOR POTENTIAL ESTROGEN RECEPTOR LIGANDS, PART II. AN EVALUATION OF A HUMAN RECEPTOR-BASED MODEL

    EPA Science Inventory

    The objective of this study was to evaluate the capability of an expert system described in the previous paper (Bradbury et al., 2000; Toxicol. Sci.) to identify the potential for chemicals to act as ligands of mammalian estrogen receptors (ERs). The basis of that algorithm was a...

  5. Genetic algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithms concepts are introduced, genetic algorithm applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
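
    To make the basic concepts in this record concrete, here is a minimal generational GA on the classic one-max problem, with tournament selection, one-point crossover and bit-flip mutation. This is an illustrative sketch, not the software tool described in the record.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    pop = rng.integers(0, 2, size=(40, 32))        # 40 bitstrings of length 32

    def fitness(pop):
        return pop.sum(axis=1)                     # one-max: count the ones

    for gen in range(60):
        f = fitness(pop)
        # Binary tournament selection.
        i, j = rng.integers(0, len(pop), (2, len(pop)))
        parents = np.where((f[i] > f[j])[:, None], pop[i], pop[j])
        # One-point crossover between consecutive parent pairs.
        children = parents.copy()
        for k, c in enumerate(rng.integers(1, 32, len(pop) // 2)):
            children[2 * k, c:], children[2 * k + 1, c:] = (
                parents[2 * k + 1, c:].copy(), parents[2 * k, c:].copy())
        # Bit-flip mutation.
        flip = rng.random(children.shape) < 0.01
        pop = np.where(flip, 1 - children, children)

    print(fitness(pop).max())  # approaches 32 after a few dozen generations
    ```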

  6. Long-term ELBARA-II Assistance to SMOS Land Product and Algorithm Validation at the Valencia Anchor Station (MELBEX Experiment 2010-2013)

    NASA Astrophysics Data System (ADS)

    Lopez-Baeza, Ernesto; Wigneron, Jean-Pierre; Schwank, Mike; Miernecki, Maciej; Kerr, Yann; Casal, Tania; Delwart, Steven; Fernandez-Moran, Roberto; Mecklenburg, Susanne; Coll Pajaron, M. Amparo; Salgado Hernanz, Paula

    The main activity of the Valencia Anchor Station (VAS) is currently to support the validation of SMOS (Soil Moisture and Ocean Salinity) Level 2 and 3 land products (soil moisture, SM, and vegetation optical depth, TAU). With this aim, the European Space Agency (ESA) has provided the Climatology from Satellites Group of the University of Valencia with an ELBARA-II microwave radiometer under a loan agreement since September 2009. During this time, brightness temperatures (TB) have been acquired continuously, except during normal maintenance or minor repair interruptions. ELBARA-II is an L-band dual-polarization radiometer with two channels (1400-1418 MHz, 1409-1427 MHz). It is continuously measuring over a vineyard field (El Renegado, Caudete de las Fuentes, Valencia) from a 15 m platform with a constant protocol for calibration and angular scanning measurements, with the aim of assisting the validation of SMOS land products and the calibration of the L-MEB (L-Band Emission of the Biosphere) model, the basis for the SMOS Level 2 Land Processor, over the VAS validation site. One of the advantages of the VAS site is the possibility of studying two different environmental conditions over the year: while the vine cycle extends mainly between April and October, during the rest of the year the area remains under bare soil conditions, adequate for the calibration of the soil model. The measurement protocol currently running has proven robust during the whole operation time and will be extended in time as much as possible to continue providing a long-term data set of ELBARA-II TB measurements and retrieved SM and TAU. This data set is also proving useful in support of SMOS scientific activities: the VAS area and, specifically, the ELBARA-II site offer good conditions to monitor the long-term evolution of SMOS Level 2 and Level 3 land products and to interpret eventual anomalies that may obscure hidden sensor biases. In addition, SM and TAU that are currently

  7. Managing Algorithmic Skeleton Nesting Requirements in Realistic Image Processing Applications: The Case of the SKiPPER-II Parallel Programming Environment's Operating Model

    NASA Astrophysics Data System (ADS)

    Coudarcher, Rémi; Duculty, Florent; Serot, Jocelyn; Jurie, Frédéric; Derutin, Jean-Pierre; Dhome, Michel

    2005-12-01

    SKiPPER is a SKeleton-based Parallel Programming EnviRonment under development since 1996 at the LASMEA Laboratory, Blaise Pascal University, France. The main goal of the project was to demonstrate the applicability of skeleton-based parallel programming techniques to the fast prototyping of reactive vision applications. This paper deals with the special features embedded in the latest version of the project: algorithmic skeleton nesting capabilities and a fully dynamic operating model. Through the case study of a complete and realistic image processing application, in which we point out the requirement for skeleton nesting, we present the operating model of this feature. The work described here is one of the few reported experiments showing the application of skeleton-nesting facilities to the parallelisation of a realistic application, especially in the area of image processing. The image processing application we have chosen is a 3D face-tracking algorithm from appearance.

  8. Evaluation of the applicability of nonlinear programming algorithms to a typical commercial process flow-sheeting simulator (Volumes I and II)

    SciTech Connect

    Richard, M.J.

    1987-01-01

    An efficient methodology for using commercial flowsheeting programs with advanced mathematical programming algorithms was developed for the optimization of operating plants. The methodology was demonstrated and validated using ChemShare Corporation's DESIGN/2000 simulation of the Freeport Chemical Company's plant for sulfuric acid manufacture and three nonlinear programming techniques: successive linear programming, successive quadratic programming, and the generalized reduced-gradient method. The application of this methodology begins with the development of a feasible base-case simulation. Partial derivatives of the economic model and constraint equations are computed using fully converged simulations. This information is used to formulate an optimization problem that can be solved with the NLP algorithms, giving improved values of the economic model. A line search is constructed through the point found by the nonlinear programming algorithm to find the best feasible point from which to repeat the procedure. The procedure is repeated using the ChemShare simulation program and the NLP code until convergence criteria are met. This method was applied to three flowsheeting problems: a plant-scale contact sulfuric acid process model, a packed-bed-reactor design model, and an adiabatic-flash problem.

  9. Coastal aquifer management based on surrogate models and multi-objective optimization

    NASA Astrophysics Data System (ADS)

    Mantoglou, A.; Kourakos, G.

    2011-12-01

    is capable of solving complex multi-objective optimization problems effectively, with a significant reduction in computational time compared to previous methods (it requires only 5% of the NSGA-II algorithm time). Further, the Pareto solution obtained by the much faster MOSA(MNN) algorithm is better than the solution obtained by the NSGA-II algorithm.

  10. Measurement of the Inclusive Jet Cross Section using the k_T algorithm in p anti-p collisions at √s = 1.96 TeV with the CDF II Detector

    SciTech Connect

    Abulencia, A.; Adelman, J.; Affolder, Anthony Allen; Akimoto, T.; Albrow, Michael G.; Ambrose, D.; Amerio, S.; Amidei, Dante E.; Anastassov, A.; Anikeev, Konstantin; Annovi, A.; /Frascati /Comenius U.

    2007-01-01

    The authors report on measurements of the inclusive jet production cross section as a function of the jet transverse momentum in p anti-p collisions at √s = 1.96 TeV, using the k_T algorithm and a data sample corresponding to 1.0 fb^-1 collected with the Collider Detector at Fermilab in Run II. The measurements are carried out in five different jet rapidity regions with |y_jet| < 2.1 and transverse momentum in the range 54 < p_T^jet < 700 GeV/c. Next-to-leading order perturbative QCD predictions are in good agreement with the measured cross sections.

  11. TVFMCATS. Time Variant Floating Mean Counting Algorithm

    SciTech Connect

    Huffman, R.K.

    1999-05-01

    This software was written to test a time variant floating mean counting algorithm. The algorithm was developed by Westinghouse Savannah River Company and a provisional patent has been filed on the algorithm. The test software was developed to work with the Val Tech model IVB prototype version II count rate meter hardware. The test software was used to verify that the algorithm developed by WSRC could be correctly implemented with the vendor's hardware.

  12. Time Variant Floating Mean Counting Algorithm

    1999-06-03

    This software was written to test a time variant floating mean counting algorithm. The algorithm was developed by Westinghouse Savannah River Company and a provisional patent has been filed on the algorithm. The test software was developed to work with the Val Tech model IVB prototype version II count rate meter hardware. The test software was used to verify that the algorithm developed by WSRC could be correctly implemented with the vendor's hardware.

  13. Quantum algorithms

    NASA Astrophysics Data System (ADS)

    Abrams, Daniel S.

    This thesis describes several new quantum algorithms. These include a polynomial time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases (commonly found in ab initio physics and chemistry problems) for which all known classical algorithms require exponential time. Fast algorithms for simulating many body Fermi systems are also provided in both first and second quantized descriptions. An efficient quantum algorithm for anti-symmetrization is given as well as a detailed discussion of a simulation of the Hubbard model. In addition, quantum algorithms that calculate numerical integrals and various characteristics of stochastic processes are described. Two techniques are given, both of which obtain an exponential speed increase in comparison to the fastest known classical deterministic algorithms and a quadratic speed increase in comparison to classical Monte Carlo (probabilistic) methods. I derive a simpler and slightly faster version of Grover's mean algorithm, show how to apply quantum counting to the problem, develop some variations of these algorithms, and show how both (apparently distinct) approaches can be understood from the same unified framework. Finally, the relationship between physics and computation is explored in some more depth, and it is shown that computational complexity theory depends very sensitively on physical laws. In particular, it is shown that nonlinear quantum mechanics allows for the polynomial time solution of NP-complete and #P oracle problems. Using the Weinberg model as a simple example, the explicit construction of the necessary gates is derived from the underlying physics. Nonlinear quantum algorithms are also presented using Polchinski type nonlinearities which do not allow for superluminal communication.

  14. Haplotyping algorithms

    SciTech Connect

    Sobel, E.; Lange, K.; O'Connell, J.R.

    1996-12-31

    Haplotyping is the logical process of inferring gene flow in a pedigree based on phenotyping results at a small number of genetic loci. This paper formalizes the haplotyping problem and suggests four algorithms for haplotype reconstruction. These algorithms range from exhaustive enumeration of all haplotype vectors to combinatorial optimization by simulated annealing. Application of the algorithms to published genetic analyses shows that manual haplotyping is often erroneous. Haplotyping is employed in screening pedigrees for phenotyping errors and in positional cloning of disease genes from conserved haplotypes in population isolates. 26 refs., 6 figs., 3 tabs.
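
    The simulated-annealing end of the range of algorithms mentioned here follows a generic accept/reject loop: propose a local move, always accept improvements, and accept uphill moves with probability exp(-dE/T) under a cooling temperature T. A sketch with a stand-in permutation-sorting energy (not haplotype vectors):

    ```python
    import math
    import random

    random.seed(4)
    state = list(range(20))
    random.shuffle(state)                     # random starting permutation

    def energy(s):
        # Sum of adjacent gaps; minimized (value 19) by a sorted permutation.
        return sum(abs(s[k] - s[k - 1]) for k in range(1, len(s)))

    T = 10.0
    while T > 1e-3:
        a, b = random.sample(range(20), 2)    # propose a random swap
        cand = state[:]
        cand[a], cand[b] = cand[b], cand[a]
        dE = energy(cand) - energy(state)
        if dE < 0 or random.random() < math.exp(-dE / T):
            state = cand                      # accept improvements, or uphill w.p. exp(-dE/T)
        T *= 0.999                            # geometric cooling schedule

    print(energy(state))                      # close to the minimum of 19
    ```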

  15. Optimization of the Coverage and Accuracy of an Indoor Positioning System with a Variable Number of Sensors

    PubMed Central

    Domingo-Perez, Francisco; Lazaro-Galilea, Jose Luis; Bravo, Ignacio; Gardel, Alfredo; Rodriguez, David

    2016-01-01

    This paper focuses on optimal sensor deployment for indoor localization with a multi-objective evolutionary algorithm. Our goal is to obtain an algorithm to deploy sensors taking the number of sensors, accuracy and coverage into account. Contrary to most works in the literature, we consider the presence of obstacles in the region of interest (ROI) that can cause occlusions between the target and some sensors. In addition, we aim to obtain all of the Pareto optimal solutions regarding the number of sensors, coverage and accuracy. To deal with a variable number of sensors, we add speciation and structural mutations to the well-known non-dominated sorting genetic algorithm (NSGA-II). Speciation allows one to keep the evolution of sensor sets under control and to apply genetic operators to them so that they compete with other sets of the same size. We show some case studies of the sensor placement of an infrared range-difference indoor positioning system with a fairly complex model of the error of the measurements. The results obtained by our algorithm are compared to sensor placement patterns obtained with random deployment to highlight the relevance of using such a deployment algorithm. PMID:27338414

  16. A Multiobjective Approach to Homography Estimation.

    PubMed

    Osuna-Enciso, Valentín; Cuevas, Erik; Oliva, Diego; Zúñiga, Virgilio; Pérez-Cisneros, Marco; Zaldívar, Daniel

    2016-01-01

    In several machine vision problems, a relevant issue is the estimation of homographies between two different perspectives that hold an extensive set of abnormal data. A method to find such estimation is the random sampling consensus (RANSAC); in this, the goal is to maximize the number of matching points given a permissible error (Pe), according to a candidate model. However, those objectives are in conflict: a low Pe value increases the accuracy of the model but degrades its generalization ability that refers to the number of matching points that tolerate noisy data, whereas a high Pe value improves the noise tolerance of the model but adversely drives the process to false detections. This work considers the estimation process as a multiobjective optimization problem that seeks to maximize the number of matching points whereas Pe is simultaneously minimized. In order to solve the multiobjective formulation, two different evolutionary algorithms have been explored: the Nondominated Sorting Genetic Algorithm II (NSGA-II) and the Nondominated Sorting Differential Evolution (NSDE). Results considering acknowledged quality measures among original and transformed images over a well-known image benchmark show superior performance of the proposal than Random Sample Consensus algorithm. PMID:26839532
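
    The conflict the paper formalizes is easy to see numerically: for a fixed candidate model, raising the permissible error Pe admits more matching points at the cost of per-point accuracy. A sketch with synthetic residuals (the inlier/outlier mix is invented, not image data):

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    # ~70% small residuals (inliers) plus ~30% gross outliers shifted by +3.
    residuals = np.abs(rng.standard_normal(500)) + 3.0 * (rng.random(500) > 0.7)

    for Pe in (0.5, 1.0, 2.0, 5.0):
        inliers = int((residuals < Pe).sum())
        print(Pe, inliers)  # support grows with Pe while accuracy degrades
    ```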

  17. A Multiobjective Approach to Homography Estimation

    PubMed Central

    Osuna-Enciso, Valentín; Oliva, Diego; Zúñiga, Virgilio; Pérez-Cisneros, Marco; Zaldívar, Daniel

    2016-01-01

    In several machine vision problems, a relevant issue is the estimation of homographies between two different perspectives that hold an extensive set of abnormal data. A method to find such estimation is the random sampling consensus (RANSAC); in this, the goal is to maximize the number of matching points given a permissible error (Pe), according to a candidate model. However, those objectives are in conflict: a low Pe value increases the accuracy of the model but degrades its generalization ability that refers to the number of matching points that tolerate noisy data, whereas a high Pe value improves the noise tolerance of the model but adversely drives the process to false detections. This work considers the estimation process as a multiobjective optimization problem that seeks to maximize the number of matching points whereas Pe is simultaneously minimized. In order to solve the multiobjective formulation, two different evolutionary algorithms have been explored: the Nondominated Sorting Genetic Algorithm II (NSGA-II) and the Nondominated Sorting Differential Evolution (NSDE). Results considering acknowledged quality measures among original and transformed images over a well-known image benchmark show superior performance of the proposal than Random Sample Consensus algorithm. PMID:26839532

  18. Robust Multiobjective Controllability of Complex Neuronal Networks.

    PubMed

    Tang, Yang; Gao, Huijun; Du, Wei; Lu, Jianquan; Vasilakos, Athanasios V; Kurths, Jurgen

    2016-01-01

    This paper addresses robust multiobjective identification of driver nodes in the neuronal network of a cat's brain, in which uncertainties in the determination of driver nodes and control gains are considered. A framework for robust multiobjective controllability is proposed by introducing interval uncertainties and optimization algorithms. With appropriate definitions of robust multiobjective controllability, a robust nondominated sorting adaptive differential evolution (NSJaDE) is presented by means of the nondominated sorting mechanism and the adaptive differential evolution (JaDE). The simulation experimental results illustrate the satisfactory performance of NSJaDE for robust multiobjective controllability, in comparison with six statistical methods and two multiobjective evolutionary algorithms (MOEAs): the nondominated sorting genetic algorithm II (NSGA-II) and nondominated sorting composite differential evolution. It is revealed that the existence of uncertainties in choosing driver nodes and designing control gains heavily affects the controllability of neuronal networks. We also unveil that driver nodes play a more drastic role than control gains in robust controllability. The developed NSJaDE and the obtained results will shed light on the understanding of robustness in controlling realistic complex networks such as transportation networks, power grid networks, biological networks, etc. PMID:26441452

  19. Optimisation of Shape Parameters and Process Manufacturing for an Automotive Safety Part

    NASA Astrophysics Data System (ADS)

    Gildemyn, Eric; Dal Santo, Philippe; Potiron, Alain; Saïdane, Delphine

    2007-05-01

    In recent years, the weight and cost of automotive vehicles have increased considerably due to the importance devoted to safety systems. It is therefore necessary to reduce the weight and production cost of components by improving their shape and manufacturing process. This work deals with a numerical approach for optimizing the manufacturing process parameters of a safety belt anchor using a genetic algorithm (NSGA II). This type of component is typically manufactured in three stages: blanking, rounding of the edges by punching and, finally, bending to a 90° angle. In this study, only the rounding and the bending are treated. The numerical model is linked to the genetic algorithm in order to optimize the process parameters. This is implemented by using ABAQUS script files developed in the Python programming language. The algorithm modifies the script files and restarts the FEM analysis automatically. Lemaitre's damage model is introduced into the material behaviour laws and implemented in the FEM analysis using a FORTRAN subroutine. The influence of two process parameters (the die radius and the rounding punch radius) and five shape parameters was investigated. The objective functions are (i) the material damage state at the end of the forming process, (ii) the stress field and (iii) the maximum von Mises stress in the folded zone.

  20. Algorithm development

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Lomax, Harvard

    1987-01-01

    The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.

  1. An archived multi-objective simulated annealing for a dynamic cellular manufacturing system

    NASA Astrophysics Data System (ADS)

    Shirazi, Hossein; Kia, Reza; Javadian, Nikbakhsh; Tavakkoli-Moghaddam, Reza

    2014-05-01

    To design a group layout of a cellular manufacturing system (CMS) in a dynamic environment, a multi-objective mixed-integer non-linear programming model is developed. The model integrates cell formation, group layout and production planning (PP) as three interrelated decisions involved in the design of a CMS. This paper provides extensive coverage of important manufacturing features used in the design of CMSs and enhances the flexibility of an existing model in handling fluctuations of part demands more economically by adding machine depot and PP decisions. The two conflicting objectives to be minimized are the total costs and the imbalance of workload among cells. As the objectives considered in this model are in conflict with each other, an archived multi-objective simulated annealing (AMOSA) algorithm is designed to find Pareto-optimal solutions. Matrix-based solution representation, a heuristic procedure generating an initial feasible solution and efficient mutation operators are the advantages of the designed AMOSA. To demonstrate the efficiency of the proposed algorithm, the performance of AMOSA is compared with an exact algorithm (i.e., the ɛ-constraint method) solved by the GAMS software and a well-known evolutionary algorithm, namely NSGA-II, for some randomly generated problems based on several comparison metrics. The obtained results show that the designed AMOSA can obtain satisfactory solutions for the multi-objective model.

  2. Multiobjective adaptive surrogate modeling-based optimization for parameter estimation of large, complex geophysical models

    NASA Astrophysics Data System (ADS)

    Gong, Wei; Duan, Qingyun; Li, Jianduo; Wang, Chen; Di, Zhenhua; Ye, Aizhong; Miao, Chiyuan; Dai, Yongjiu

    2016-03-01

    Parameter specification is an important source of uncertainty in large, complex geophysical models. These models generally have multiple model outputs that require multiobjective optimization algorithms. Although such algorithms have long been available, they usually require a large number of model runs and are therefore computationally expensive for large, complex dynamic models. In this paper, a multiobjective adaptive surrogate modeling-based optimization (MO-ASMO) algorithm is introduced that aims to reduce computational cost while maintaining optimization effectiveness. Geophysical dynamic models usually have a prior parameterization scheme derived from the physical processes involved, and our goal is to improve all of the objectives by parameter calibration. In this study, we developed a method for directing the search processes toward the region that can improve all of the objectives simultaneously. We tested the MO-ASMO algorithm against NSGA-II and SUMO with 13 test functions and a land surface model - the Common Land Model (CoLM). The results demonstrated the effectiveness and efficiency of MO-ASMO.
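
    The adaptive surrogate idea can be sketched in its simplest single-objective form: fit a cheap emulator, optimize it, spend one true-model evaluation at the proposed point, and refit. The snippet below uses scikit-learn's Gaussian process as the surrogate; the surrogate choice and the toy objective are our assumptions, and the paper's MO-ASMO is multiobjective, running NSGA-II on the surrogate instead of a grid search.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    def expensive(x):
        # Stand-in for an expensive geophysical model run.
        return np.sin(3 * x) + 0.5 * x ** 2

    rng = np.random.default_rng(6)
    X = rng.uniform(-2, 2, 8)[:, None]        # small initial design
    y = expensive(X).ravel()

    for _ in range(10):
        gp = GaussianProcessRegressor(alpha=1e-6).fit(X, y)
        grid = np.linspace(-2, 2, 400)[:, None]
        x_new = grid[int(np.argmin(gp.predict(grid)))]  # optimize the cheap surrogate
        X = np.vstack([X, [x_new]])
        y = np.append(y, expensive(x_new))              # one true-model evaluation

    print(float(X[int(np.argmin(y))][0]), float(y.min()))  # near the true minimum
    ```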

  3. A game theoretic approach for trading discharge permits in rivers.

    PubMed

    Niksokhan, Mohammad Hossein; Kerachian, Reza; Karamouz, Mohammad

    2009-01-01

    In this paper, a new Cooperative Trading Discharge Permit (CTDP) methodology is designed for estimating equitable and efficient treatment cost allocation among dischargers in a river system considering their conflicting interests. The methodology consists of two main steps: (1) initial treatment cost allocation and (2) equitable treatment cost reallocation. In the first step, a Pareto front among objectives is developed using a powerful and recently developed multi-objective genetic algorithm known as Nondominated Sorting Genetic Algorithm-II (NSGA-II). The objectives of the optimization model are considered to be the average treatment level of dischargers and a fuzzy risk of violating the water quality standards. The fuzzy risk is evaluated using the Monte Carlo analysis. The best non-dominated solution on the Pareto front, which provides the initial cost allocation to dischargers, is selected using the Young Bargaining Theory (YBT). In the second step, some cooperative game theoretic approaches are utilized to investigate how the maximum saving cost of participating dischargers in a coalition can be fairly allocated to them. The final treatment cost allocation provides the optimal trading discharge permit policies. The practical utility of the proposed methodology for river water quality management is illustrated through a realistic case study of the Zarjub river in the northern part of Iran. PMID:19657175

  4. A stochastic conflict resolution model for trading pollutant discharge permits in river systems.

    PubMed

    Niksokhan, Mohammad Hossein; Kerachian, Reza; Amin, Pedram

    2009-07-01

    This paper presents an efficient methodology for developing pollutant discharge permit trading in river systems considering the conflict of interests of involving decision-makers and the stakeholders. In this methodology, a trade-off curve between objectives is developed using a powerful and recently developed multi-objective genetic algorithm technique known as the Nondominated Sorting Genetic Algorithm-II (NSGA-II). The best non-dominated solution on the trade-off curve is defined using the Young conflict resolution theory, which considers the utility functions of decision makers and stakeholders of the system. These utility functions are related to the total treatment cost and a fuzzy risk of violating the water quality standards. The fuzzy risk is evaluated using the Monte Carlo analysis. Finally, an optimization model provides the trading discharge permit policies. The practical utility of the proposed methodology in decision-making is illustrated through a realistic example of the Zarjub River in the northern part of Iran. PMID:18592387

  5. A simulation-optimization model for Stone column-supported embankment stability considering rainfall effect

    NASA Astrophysics Data System (ADS)

    Deb, Kousik; Dhar, Anirban; Purohit, Sandip

    2016-02-01

Landslides due to rainfall have been and continue to be one of the most important concerns of geotechnical engineering. The paper presents the variation in the factor of safety of a stone column-supported embankment constructed over soft soil due to changes in water level during an incessant period of rainfall. A combined simulation-optimization based methodology has been proposed to predict the critical failure surface of the embankment and to optimize the corresponding factor of safety under rainfall conditions using the evolutionary genetic algorithm NSGA-II (Non-Dominated Sorted Genetic Algorithm-II). It has been observed that the position of the water table can be reliably estimated for varying periods of infiltration using the developed numerical method. A parametric study is presented to examine the optimum factor of safety of the embankment and its corresponding critical failure surface under the steady-state infiltration condition. Results show that in the case of floating stone columns, the period of infiltration has no effect on the factor of safety. Even the critical failure surfaces for a particular floating column length remain the same irrespective of rainfall duration.

  6. An optimized resistor pattern for temperature gradient control in microfluidics

    NASA Astrophysics Data System (ADS)

    Selva, Bertrand; Marchalot, Julien; Jullien, Marie-Caroline

    2009-06-01

    In this paper, we demonstrate the possibility of generating high-temperature gradients with a linear temperature profile when heating is provided in situ. Thanks to improved optimization algorithms, the shape of resistors, which constitute the heating source, is optimized by applying the genetic algorithm NSGA-II (acronym for the non-dominated sorting genetic algorithm) (Deb et al 2002 IEEE Trans. Evol. Comput. 6 2). Experimental validation of the linear temperature profile within the cavity is carried out using a thermally sensitive fluorophore, called Rhodamine B (Ross et al 2001 Anal. Chem. 73 4117-23, Erickson et al 2003 Lab Chip 3 141-9). The high level of agreement obtained between experimental and numerical results serves to validate the accuracy of this method for generating highly controlled temperature profiles. In the field of actuation, such a device is of potential interest since it allows for controlling bubbles or droplets moving by means of thermocapillary effects (Baroud et al 2007 Phys. Rev. E 75 046302). Digital microfluidics is a critical area in the field of microfluidics (Dreyfus et al 2003 Phys. Rev. Lett. 90 14) as well as in the so-called lab-on-a-chip technology. Through an example, the large application potential of such a technique is demonstrated, which entails handling a single bubble driven along a cavity using simple and tunable embedded resistors.

  7. Comparison of three methods for the optimal allocation of hydrological model participation in an Ensemble Prediction System

    NASA Astrophysics Data System (ADS)

    Brochero, D.; Anctil, F.; Gagné, C.

    2012-04-01

Today, the availability of Meteorological Ensemble Prediction Systems (MEPS) and their subsequent coupling with multiple hydrological models offer the possibility of building Hydrological Ensemble Prediction Systems (HEPS) consisting of a large number of members. However, this task is complex both in terms of the coupling of information and of the computational time, which may create an operational barrier. The evaluation of the prominence of each hydrological member can be seen as a non-parametric post-processing stage that seeks to find the optimal participation of the hydrological models (in a fashion similar to the Bayesian model averaging technique), maintaining or improving the quality of probabilistic forecasts based on only x members drawn from a super ensemble of d members, thus reducing the effort required to issue the probabilistic forecast. The main objective of the current work is to assess the degree of simplification (reduction of the number of hydrological members) that can be achieved with a HEPS configured using 16 lumped hydrological models driven by the 50 weather ensemble forecasts from the European Centre for Medium-range Weather Forecasts (ECMWF), i.e. an 800-member HEPS. In a previous work (Brochero et al., 2011a, b), we demonstrated that the proportion of members allocated to each hydrological model is a sufficient criterion to reduce the number of hydrological members while improving the balance of the scores, taking into account the interchangeability of the ECMWF MEPS. Here, we compare the proportion of members allocated to each hydrological model derived from three non-parametric techniques: correlation analysis of hydrological members, Backward Greedy Selection (BGS) and the Nondominated Sorting Genetic Algorithm (NSGA-II). The last two allude to techniques developed in machine learning, in a multicriteria framework exploiting the relationship between bias, reliability, and the number of members of the
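
    Of the three allocation techniques compared, Backward Greedy Selection is the simplest to sketch: repeatedly drop the member whose removal least degrades (or most improves) the ensemble score. The toy synthetic ensemble and the RMSE-of-the-mean score below are illustrative stand-ins for the study's 800-member HEPS and its probabilistic scores.

```python
import numpy as np

rng = np.random.default_rng(2)
T, D = 200, 12                        # time steps, super-ensemble size (toy)
truth = np.sin(np.linspace(0, 8, T))
members = truth[None, :] + rng.normal(0, 0.3, (D, T)) \
          + rng.normal(0, 0.2, (D, 1))    # member-specific bias

def score(subset):
    # Lower is better: RMSE of the ensemble mean, a stand-in for a proper
    # probabilistic score such as the CRPS used in the study.
    return np.sqrt(np.mean((members[list(subset)].mean(axis=0) - truth) ** 2))

selected = set(range(D))
while len(selected) > 4:              # target ensemble size (assumed)
    worst = min(selected, key=lambda m: score(selected - {m}))
    selected -= {worst}               # drop the member whose removal helps most
print(sorted(selected), round(score(selected), 4))
```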

  8. Performance comparison of optimal fractional order hybrid fuzzy PID controllers for handling oscillatory fractional order processes with dead time.

    PubMed

    Das, Saptarshi; Pan, Indranil; Das, Shantanu

    2013-07-01

Fuzzy logic based PID controllers have been studied in this paper, considering several combinations of hybrid controllers by grouping the proportional, integral and derivative actions with fuzzy inferencing in different forms. The fractional order (FO) rate of the error signal and the FO integral of the control signal have been used in the design of a family of decomposed hybrid FO fuzzy PID controllers. The input and output scaling factors (SF), along with the integro-differential operators, are tuned with a real coded genetic algorithm (GA) to produce optimum closed loop performance by simultaneous consideration of the control loop error index and the control signal. Three different classes of fractional order oscillatory processes with various levels of relative dominance between time constant and time delay have been used to test the comparative merits of the proposed family of hybrid fractional order fuzzy PID controllers. Performance comparison of the different FO fuzzy PID controller structures has been done in terms of optimal set-point tracking, load disturbance rejection, and minimal variation of the manipulated variable (i.e., smaller actuator requirement). In addition, the multi-objective Non-dominated Sorting Genetic Algorithm (NSGA-II) has been used to study the Pareto optimal trade-offs between set-point tracking and the control signal, and between set-point tracking and load disturbance performance, for each of the controller structures when handling the three different types of processes. PMID:23664205
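
    The fractional-order operators at the heart of such controllers are commonly discretized with the Grünwald-Letnikov formula. The sketch below is a generic GL approximation of the operator D^alpha applied to a test signal; it is independent of the paper's specific controller structures, and the signal and alpha values are assumptions.

```python
import numpy as np

def gl_fractional_derivative(y, alpha, h):
    # Grünwald-Letnikov: D^alpha y(t_k) ~= h**(-alpha) * sum_j w_j * y_{k-j},
    # with weights w_j = (-1)**j * binom(alpha, j) via the standard recurrence.
    n = len(y)
    w = np.empty(n)
    w[0] = 1.0
    for j in range(1, n):
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    d = np.array([w[: k + 1] @ y[k::-1] for k in range(n)])
    return d / h ** alpha

t = np.linspace(0.0, 1.0, 101)
approx = gl_fractional_derivative(t, 0.5, t[1] - t[0])[-1]
print(approx, "vs exact", 2.0 / np.sqrt(np.pi))  # half-derivative of t at t=1
```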

  9. Modeling and optimization of a multi-product biosynthesis factory for multiple objectives.

    PubMed

    Lee, Fook Choon; Pandu Rangaiah, Gade; Lee, Dong-Yup

    2010-05-01

Genetic algorithms, and optimization in general, enable us to probe deeper into the metabolic pathway recipe for multi-product biosynthesis. An augmented model for optimizing serine and tryptophan flux ratios simultaneously in Escherichia coli was developed by linking the dynamic tryptophan operon model and the aromatic amino acid-tryptophan biosynthesis pathways to the central carbon metabolism model. Six new kinetic parameters of the augmented model were estimated with consideration of available experimental data and other published works. Major differences between calculated and reference concentrations and fluxes were explained. Sensitivities and the underlying competition among fluxes for carbon sources were consistent with intuitive expectations based on the metabolic network and previous results. Biosynthesis rates of serine and tryptophan were simultaneously maximized using the augmented model via concurrent gene knockout and manipulation. The optimization results were obtained using the elitist non-dominated sorting genetic algorithm (NSGA-II) supported by pattern recognition heuristics. A range of Pareto-optimal enzyme activities regulating the amino acid biosynthesis was successfully obtained and elucidated wherever possible vis-à-vis fermentation work based on recombinant DNA technology. The predicted potential improvements in various metabolic pathway recipes using the multi-objective optimization strategy were highlighted and discussed in detail. PMID:20051269

  10. A preference-based multi-objective model for the optimization of best management practices

    NASA Astrophysics Data System (ADS)

    Chen, Lei; Qiu, Jiali; Wei, Guoyuan; Shen, Zhenyao

    2015-01-01

The optimization of best management practices (BMPs) at the watershed scale is notably complex because of the social nature of the decision process, which incorporates information that reflects the preferences of decision makers. In this study, a preference-based multi-objective model was designed by modifying the commonly used Non-dominated Sorting Genetic Algorithm (NSGA-II). Reference points, achievement scalarizing functions and an indicator-based optimization principle were integrated to search for a set of preferred Pareto-optimal solutions. Pareto preference ordering was also used to reduce the number of objectives in the final decision-making process. The proposed model was then tested in a typical watershed in the Three Gorges Region, China. The results indicated that more desirable solutions were generated, which reduced the decision-making burden on watershed managers. Compared to the traditional Genetic Algorithm (GA), the preferred solutions were concentrated in a narrow region close to the projection point instead of spanning the entire Pareto front. Based on Pareto preference ordering, the solutions with the best objective function values were often the more desirable solutions (i.e., the minimum cost solution and the minimum pollutant load solution). In the authors' view, this new model provides a useful tool for optimizing BMPs at the watershed scale and is therefore of great benefit to watershed managers.
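
    The reference-point machinery mentioned here is commonly realized with an achievement scalarizing function (ASF), which ranks Pareto points by how far they fall past the decision maker's aspiration levels. A minimal sketch of the classic Wierzbicki-style ASF, with invented objective vectors and weights (the paper's exact formulation may differ):

```python
import numpy as np

def asf(f, ref, weights, rho=1e-4):
    # Max-term pulls the ranking toward the reference point; the small
    # augmentation term breaks ties in favor of Pareto-optimal points.
    d = (f - ref) / weights
    return d.max() + rho * d.sum()

front = np.array([[0.1, 0.9], [0.3, 0.5], [0.5, 0.3], [0.9, 0.1]])
ref = np.array([0.2, 0.4])          # decision maker's aspiration levels (toy)
w = np.array([1.0, 1.0])
best = front[np.argmin([asf(f, ref, w) for f in front])]
print("preferred solution:", best)  # [0.3, 0.5], closest to the reference
```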

  11. AI techniques for optimizing multi-objective reservoir operation upon human and riverine ecosystem demands

    NASA Astrophysics Data System (ADS)

    Tsai, Wen-Ping; Chang, Fi-John; Chang, Li-Chiu; Herricks, Edwin E.

    2015-11-01

Flow regime is the key driver of riverine ecology. This study proposes a novel hybrid methodology based on artificial intelligence (AI) techniques for quantifying riverine ecosystem requirements and delivering suitable flow regimes that sustain river and floodplain ecology through optimizing reservoir operation. This approach addresses issues to better fit riverine ecosystem requirements with existing human demands. We first explored and characterized the relationship between flow regimes and fish communities through a hybrid artificial neural network (ANN). Then the non-dominated sorting genetic algorithm II (NSGA-II) was established for river flow management of the Shihmen Reservoir in northern Taiwan. The ecosystem requirement took the form of maximizing fish diversity, which could be estimated by the hybrid ANN. The human requirement was to provide a higher degree of water-supply satisfaction. The results demonstrated that the proposed methodology could offer a number of diversified alternative strategies for reservoir operation and improve reservoir operational strategies, producing downstream flows that could meet both human and ecosystem needs. The methodology is attractive to water resources managers because the wide spread of Pareto-front (optimal) solutions allows decision makers to easily determine the best compromise through the trade-off between reservoir operational strategies for human and ecosystem needs.

  12. LID-BMPs planning for urban runoff control and the case study in China.

    PubMed

    Jia, Haifeng; Yao, Hairong; Tang, Ying; Yu, Shaw L; Field, Richard; Tafuri, Anthony N

    2015-02-01

Low Impact Development Best Management Practices (LID-BMPs) have in recent years received much recognition as cost-effective measures for mitigating urban runoff impacts. In the present paper, a procedure for LID-BMPs planning and analysis using a comprehensive decision support tool was proposed. A case study was conducted on the planning of an LID-BMPs implementation effort at a college campus in Foshan, Guangdong Province, China. By examining the information obtained, potential LID-BMPs were first selected. SUSTAIN was then used to analyze four runoff control scenarios, namely: a pre-development scenario; a basic scenario (existing campus development plan without BMP control); Scenario 1 (least-cost BMPs implementation); and Scenario 2 (maximized BMPs performance). A sensitivity analysis was also performed to assess the impact of the hydrologic and water quality parameters. The optimal solution for each of the two LID-BMPs scenarios was obtained by using the non-dominated sorting genetic algorithm-II (NSGA-II). Finally, the cost-effectiveness of the LID-BMPs implementation scenarios was examined by determining the incremental cost for a unit improvement of control. PMID:25463572

  13. A niched Pareto tabu search for multi-objective optimal design of groundwater remediation systems

    NASA Astrophysics Data System (ADS)

    Yang, Yun; Wu, Jianfeng; Sun, Xiaomin; Wu, Jichun; Zheng, Chunmiao

    2013-05-01

This study presents a new multi-objective optimization method, the niched Pareto tabu search (NPTS), for optimal design of groundwater remediation systems. The proposed NPTS is then coupled with the commonly used flow and transport codes, MODFLOW and MT3DMS, to search for the near Pareto-optimal tradeoffs of groundwater remediation strategies. The difference between the proposed NPTS and the existing multiple objective tabu search (MOTS) lies in the use of the niche selection strategy and fitness archiving to maintain the diversity of the optimal solutions along the Pareto front and avoid repetitive calculations of the objective functions associated with the flow and transport model. Sensitivity analysis of the NPTS parameters is evaluated through a synthetic pump-and-treat remediation application involving two conflicting objectives, minimizations of both remediation cost and contaminant mass remaining in the aquifer. Moreover, the proposed NPTS is applied to a large-scale pump-and-treat groundwater remediation system of the field site at the Massachusetts Military Reservation (MMR) in Cape Cod, Massachusetts, involving minimizations of both total pumping rates and contaminant mass remaining in the aquifer. Additional comparison of the results based on the NPTS with those obtained from two other methods, namely the single objective tabu search (SOTS) and the nondominated sorting genetic algorithm II (NSGA-II), further indicates that the proposed NPTS has desirable computation efficiency, stability, and robustness and is a promising tool for optimizing the multi-objective design of groundwater remediation systems.

  14. An optimal design of wind turbine and ship structure based on neuro-response surface method

    NASA Astrophysics Data System (ADS)

    Lee, Jae-Chul; Shin, Sung-Chul; Kim, Soo-Young

    2015-07-01

The geometry of engineering systems affects their performance. For this reason, the shape of engineering systems needs to be optimized in the initial design stage. However, engineering system design problems consist of multi-objective optimization, and performance analysis using commercial code or numerical analysis is generally time-consuming. To solve these problems, many engineers perform the optimization using an approximation model (response surface). The Response Surface Method (RSM) is generally used to predict system performance in the engineering research field, but RSM presents some prediction errors for highly nonlinear systems. The major objective of this research is to establish an optimal design method for multi-objective problems and confirm its applicability. The proposed process is composed of three parts: definition of geometry, generation of the response surface, and the optimization process. To reduce the time for performance analysis and minimize the prediction errors, the approximation model is generated using a Backpropagation Artificial Neural Network (BPANN), an approach referred to as the Neuro-Response Surface Method (NRSM). The optimization is performed on the generated response surface by the non-dominated sorting genetic algorithm-II (NSGA-II). Through case studies of a marine system and a ship structure (the substructure of a floating offshore wind turbine considering hydrodynamic performance, and bulk carrier bottom stiffened panels considering structural performance), we have confirmed the applicability of the proposed method for multi-objective side constraint optimization problems.
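
    A minimal illustration of the NRSM idea, under stated assumptions: a one-hidden-layer network trained by plain batch backpropagation serves as a cheap response surface for an invented two-variable performance function. The paper's network topology, training scheme, and analysis code are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.uniform(-1, 1, (200, 2))                    # design variables (toy)
y = (X[:, 0] ** 2 + np.sin(3 * X[:, 1]))[:, None]   # stand-in "analysis" output

W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)  # one hidden layer of 16
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(3000):                               # plain batch backprop
    H = np.tanh(X @ W1 + b1)
    pred = H @ W2 + b2
    err = pred - y
    gW2 = H.T @ err / len(X); gb2 = err.mean(0)     # output-layer gradients
    dH = (err @ W2.T) * (1 - H ** 2)                # backprop through tanh
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
print("surrogate training MSE:", float((err ** 2).mean()))
```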

  15. Algorithmic chemistry

    SciTech Connect

    Fontana, W.

    1990-12-13

In this paper, complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.

  16. Solving molecular docking problems with multi-objective metaheuristics.

    PubMed

García-Godoy, María Jesús; López-Camacho, Esteban; García-Nieto, José; Nebro, Antonio J; Aldana-Montes, José F

    2015-01-01

    Molecular docking is a hard optimization problem that has been tackled in the past with metaheuristics, demonstrating new and challenging results when looking for one objective: the minimum binding energy. However, only a few papers can be found in the literature that deal with this problem by means of a multi-objective approach, and no experimental comparisons have been made in order to clarify which of them has the best overall performance. In this paper, we use and compare, for the first time, a set of representative multi-objective optimization algorithms applied to solve complex molecular docking problems. The approach followed is focused on optimizing the intermolecular and intramolecular energies as two main objectives to minimize. Specifically, these algorithms are: two variants of the non-dominated sorting genetic algorithm II (NSGA-II), speed modulation multi-objective particle swarm optimization (SMPSO), third evolution step of generalized differential evolution (GDE3), multi-objective evolutionary algorithm based on decomposition (MOEA/D) and S-metric evolutionary multi-objective optimization (SMS-EMOA). We assess the performance of the algorithms by applying quality indicators intended to measure convergence and the diversity of the generated Pareto front approximations. We carry out a comparison with another reference mono-objective algorithm in the problem domain (Lamarckian genetic algorithm (LGA) provided by the AutoDock tool). Furthermore, the ligand binding site and molecular interactions of computed solutions are analyzed, showing promising results for the multi-objective approaches. In addition, a case study of application for aeroplysinin-1 is performed, showing the effectiveness of our multi-objective approach in drug discovery. PMID:26042856

  17. GPU Accelerated Event Detection Algorithm

    2011-05-25

Smart grid applications require new algorithmic approaches as well as parallel formulations. One of the critical components is the prediction of changes and detection of anomalies within the power grid. The state-of-the-art algorithms are not suited to handle the demands of streaming data analysis: (i) the need for event detection algorithms that can scale with the size of data; (ii) the need for algorithms that can not only handle the multidimensional nature of the data, but also model both spatial and temporal dependencies in the data, which, for the most part, are highly nonlinear; (iii) the need for algorithms that can operate in an online fashion with streaming data. The GAEDA code is a new online anomaly detection technique that takes into account the spatial, temporal, and multidimensional aspects of the data set. The basic idea behind the proposed approach is (a) to convert a multidimensional sequence into a univariate time series that captures the changes between successive windows extracted from the original sequence using singular value decomposition (SVD), and then (b) to apply known anomaly detection techniques for univariate time series. A key challenge for the proposed approach is to make the algorithm scalable to huge datasets by adopting techniques from perturbation theory and incremental SVD analysis. We used recent advances in tensor decomposition techniques, which reduce computational complexity, to monitor the change between successive windows and detect anomalies in the same manner as described above. We therefore propose to develop parallel solutions on many-core systems such as GPUs, because these algorithms involve many numerical operations and are highly data-parallelizable.

  18. Stability of Bareiss algorithm

    NASA Astrophysics Data System (ADS)

    Bojanczyk, Adam W.; Brent, Richard P.; de Hoog, F. R.

    1991-12-01

In this paper, we present a numerical stability analysis of the Bareiss algorithm for solving a symmetric positive definite Toeplitz system of linear equations. We also compare the Bareiss algorithm with the Levinson algorithm and conclude that the former has superior numerical properties.
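
    For context, here is a compact sketch of the Levinson recursion that the abstract compares against, for a symmetric positive definite Toeplitz system T x = b; the Bareiss algorithm differs in its elimination scheme and error behavior, which is the subject of the paper's analysis. The test matrix below is illustrative.

```python
import numpy as np

def levinson_solve(r, b):
    # Solve T x = b in O(n^2), where T is symmetric positive definite
    # Toeplitz with first row r (r[0] on the diagonal).
    r = np.asarray(r, float)
    b = np.asarray(b, float)
    scale = r[0]
    r = r / scale                    # normalize so the diagonal is 1
    n = len(b)
    if n == 1:
        return b / scale
    x = np.zeros(n)
    y = np.zeros(n)                  # running Yule-Walker solution
    x[0], y[0] = b[0], -r[1]
    alpha, beta = -r[1], 1.0
    for k in range(1, n):
        beta *= 1.0 - alpha ** 2
        mu = (b[k] - r[1:k + 1] @ x[k - 1::-1]) / beta
        x[:k] += mu * y[k - 1::-1]
        x[k] = mu
        if k < n - 1:
            alpha = -(r[k + 1] + r[1:k + 1] @ y[k - 1::-1]) / beta
            y[:k] += alpha * y[k - 1::-1]
            y[k] = alpha
    return x / scale

r = np.array([4.0, 2.0, 1.0, 0.5])   # first row of T (illustrative)
b = np.array([1.0, 2.0, 3.0, 4.0])
T = np.array([[r[abs(i - j)] for j in range(4)] for i in range(4)])
print(np.allclose(levinson_solve(r, b), np.linalg.solve(T, b)))  # True
```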

  19. SAGE II Version 7.00 Release

    Atmospheric Science Data Center

    2013-07-10

... algorithms from SAGE III v4.00. Ceased removal of the water vapor extinction in the 600 nm channel due to uncertainty in the H2O spectroscopy in this spectral band. Updated our estimation of the SAGE II ...

  20. Construction of an Algorithm for Stem Recognition in the Hebrew Language. Application of Hebrew Morphology to Computer Techniques for Investigation of Word Roots. Final Report, Part II. Noun Reference Dictionary, Verbal Derivatives. Part I.

    ERIC Educational Resources Information Center

    Lazewnik, Grainom

    This document comprises the first part of the section of the Noun Reference Dictionary concerned with nouns derived from verb roots. See AL 002 270 for Part II. The format of this section is the same as that described in AL 002 267 for the pure nominal section of the dictionary. Roots are indicated. For other related documents, see ED 019 668, AL…

  1. Iterative phase retrieval algorithms. I: optimization.

    PubMed

    Guo, Changliang; Liu, Shi; Sheridan, John T

    2015-05-20

    Two modified Gerchberg-Saxton (GS) iterative phase retrieval algorithms are proposed. The first we refer to as the spatial phase perturbation GS algorithm (SPP GSA). The second is a combined GS hybrid input-output algorithm (GS/HIOA). In this paper (Part I), it is demonstrated that the SPP GS and GS/HIO algorithms are both much better at avoiding stagnation during phase retrieval, allowing them to successfully locate superior solutions compared with either the GS or the HIO algorithms. The performances of the SPP GS and GS/HIO algorithms are also compared. Then, the error reduction (ER) algorithm is combined with the HIO algorithm (ER/HIOA) to retrieve the input object image and the phase, given only some knowledge of its extent and the amplitude in the Fourier domain. In Part II, the algorithms developed here are applied to carry out known plaintext and ciphertext attacks on amplitude encoding and phase encoding double random phase encryption systems. Significantly, ER/HIOA is then used to carry out a ciphertext-only attack on AE DRPE systems. PMID:26192504
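
    The two proposed variants build on the basic Gerchberg-Saxton loop, which alternates between the object and Fourier domains, enforcing the known amplitude in each. A minimal sketch of that baseline loop with synthetic amplitudes follows (it does not include the paper's SPP or HIO modifications).

```python
import numpy as np

rng = np.random.default_rng(3)
true_phase = np.exp(2j * np.pi * rng.random((64, 64)))
obj_amp = rng.random((64, 64))                        # known object amplitude
four_amp = np.abs(np.fft.fft2(obj_amp * true_phase))  # known Fourier amplitude

g = obj_amp * np.exp(2j * np.pi * rng.random((64, 64)))  # random initial phase
for _ in range(200):
    G = np.fft.fft2(g)
    G = four_amp * np.exp(1j * np.angle(G))   # impose the Fourier amplitude
    g = np.fft.ifft2(G)
    g = obj_amp * np.exp(1j * np.angle(g))    # impose the object amplitude

err = np.linalg.norm(np.abs(np.fft.fft2(g)) - four_amp) / np.linalg.norm(four_amp)
print(f"relative Fourier-amplitude error after 200 iterations: {err:.3e}")
```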

  2. Hydraulic design of a low-specific speed Francis runner for a hydraulic cooling tower

    NASA Astrophysics Data System (ADS)

    Ruan, H.; Luo, X. Q.; Liao, W. L.; Zhao, Y. P.

    2012-11-01

The air blower in a cooling tower is normally driven by an electromotor, and the electric energy consumed by the electromotor is tremendous. The remaining energy at the outlet of the cooling cycle is considerable. This energy can be utilized to drive a hydraulic turbine and consequently to rotate the air blower. The purpose of this project is to recycle energy, lower energy consumption and reduce pollutant discharge. Firstly, a second-order polynomial is proposed to describe the blade setting angle distribution law along the meridional streamline in the streamline equation. The runner is designed by the point-to-point integration method with a specific blade setting angle distribution. Three different ultra-low-specific-speed Francis runners with different wrap angles are obtained by this method. Secondly, based on CFD numerical simulations, the effects of the blade setting angle distribution on the pressure coefficient distribution and relative efficiency have been analyzed. Finally, taking the blade inlet and outlet angles and the control coefficients of the blade setting angle distribution law as design variables, and efficiency and minimum pressure as objective functions, a multi-objective optimization of the ultra-low-specific-speed Francis runner is carried out using the NSGA-II algorithm. The obtained results show that the optimal runner has higher efficiency and better cavitation performance.

  3. A Multi-Objective Compounded Local Mobile Cloud Architecture Using Priority Queues to Process Multiple Jobs.

    PubMed

    Wei, Xiaohui; Sun, Bingyi; Cui, Jiaxu; Xu, Gaochao

    2016-01-01

As a result of the greatly increased use of mobile devices, the disadvantages of portable devices have gradually begun to emerge. To solve these problems, the use of mobile cloud computing assisted by cloud data centers has been proposed. However, cloud data centers are always very far from the mobile requesters. In this paper, we propose an improved multi-objective local mobile cloud model: Compounded Local Mobile Cloud Architecture with Dynamic Priority Queues (LMCpri). This new architecture briefly stores jobs that arrive simultaneously at the cloudlet in different priority positions according to the result of auction processing, and then executes partitioning tasks on capable helpers. In the Scheduling Module, NSGA-II is employed as the scheduling algorithm to shorten processing time and decrease requester cost relative to PSO and sequential scheduling. The simulation results show that setting the number of iterations to 30 is the best choice for the system. In addition, compared with LMCque, LMCpri is able to effectively accommodate a requester who would like his job to be executed in advance, and shortens execution time. Finally, we conduct a comparison experiment between LMCpri and a cloud-assisted architecture; the results reveal that LMCpri offers a better performance advantage than the cloud-assisted architecture. PMID:27419854
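
    The dynamic priority queue at the core of LMCpri can be pictured as a binary heap keyed on auction results. The sketch below is an illustrative reading with invented jobs and bids, not the paper's scheduler.

```python
import heapq
import itertools

counter = itertools.count()            # tie-breaker preserves arrival order
queue = []

def submit(job, bid):
    # Higher auction bid => higher priority => smaller (more negative) key.
    heapq.heappush(queue, (-bid, next(counter), job))

submit("render", bid=3.0)
submit("backup", bid=1.0)
submit("urgent-analysis", bid=7.5)     # wins the auction, jumps the queue

while queue:
    _, _, job = heapq.heappop(queue)
    print("executing:", job)           # urgent-analysis, render, backup
```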

  4. Multimethod evolutionary search for the regional calibration of rainfall-runoff models

    NASA Astrophysics Data System (ADS)

    Lombardi, Laura; Castiglioni, Simone; Toth, Elena; Castellarin, Attilio; Montanari, Alberto

    2010-05-01

The study focuses on regional calibration for a generic rainfall-runoff model. The maximum likelihood function in the spectral domain proposed by Whittle is approximated in the time domain by maximising the simultaneous fit (through a multiobjective optimisation) of selected statistics of streamflow values, with the aim of proposing a calibration procedure that can be applied at regional scale. The method may in fact be applied without the availability of actual time series of streamflow observations, since it is based exclusively on the selected statistics, which are here obtained on the basis of the dominant climate and catchment characteristics, through regional regression relationships. The multiobjective optimisation was carried out by using a recently proposed multimethod evolutionary search algorithm (AMALGAM, Vrugt and Robinson, 2007), which runs simultaneously, for population evolution, a set of different optimisation methods (namely NSGA-II, Differential Evolution, Adaptive Metropolis Search and Particle Swarm Optimisation), resulting in a combination of the respective strengths by adaptively updating the weights of these individual methods based on their reproductive success. This ensures a fast, reliable and computationally efficient solution to multiobjective optimisation problems. The proposed technique is applied to the case study of some catchments located in central Italy, which are treated as ungauged and are located in a region where detailed hydrological and geomorphoclimatic information is available. The results obtained with the regional calibration are compared with those provided by a classical least squares calibration in the time domain. The outcomes of the analysis confirm the potentialities of the proposed methodology.

  5. Long Series Multi-objectives Optimal Operation of Water And Sediment Regulation

    NASA Astrophysics Data System (ADS)

    Bai, T.; Jin, W.

    2015-12-01

A secondary suspended river has formed in the Inner Mongolia reaches, threatening the safety of the reach and the ecological health of the river. Research on water-sediment regulation by cascade reservoirs is therefore urgent and necessary. Against this background, multi-objective water and sediment regulation is studied in this paper. Firstly, multi-objective optimal operation models of the Longyangxia and Liujiaxia cascade reservoirs are established. Secondly, based on constraint-handling and feasible-search-space techniques, the Non-dominated Sorting Genetic Algorithm (NSGA-II) is greatly improved to solve the model. Thirdly, four different scenarios are set. It is demonstrated that: (1) scatter diagrams of the Pareto front show the optimal solutions for power generation maximization and sediment maximization, as well as the global equilibrium solutions between the two; (2) the potential of water-sediment regulation by the Longyangxia and Liujiaxia cascade reservoirs is analyzed; (3) with increasing water supply in the future, a conflict between water supply and water-sediment regulation emerges, and the sustainability of water and sediment regulation will suffer as the transferable water in the cascade reservoirs decreases; (4) the transfer project has little benefit for water-sediment regulation. The research results have important practical significance for water-sediment regulation by cascade reservoirs in the Upper Yellow River and for constructing a water and sediment control system in the whole Yellow River basin.

  6. Optimization of PHEV Power Split Gear Ratio to Minimize Fuel Consumption and Operation Cost

    NASA Astrophysics Data System (ADS)

    Li, Yanhe

A Plug-in Hybrid Electric Vehicle (PHEV) is a vehicle powered by a combination of an internal combustion engine and an electric motor with a battery pack. The battery pack can be charged by plugging the vehicle into the electric grid and by using excess engine power. The research activity performed in this thesis focused on the development of an innovative approach for optimizing the PHEV Power Split Device (PSD) gear ratio with the aim of minimizing vehicle operation costs. Three research activity lines have been followed: • Activity 1: PHEV control strategy optimization using Dynamic Programming (DP), and the development of a PHEV rule-based control strategy based on the DP results. • Activity 2: PHEV rule-based control strategy parameter optimization using the Non-dominated Sorting Genetic Algorithm (NSGA-II). • Activity 3: A comprehensive analysis of the single-mode PHEV architecture to offer an innovative approach to optimizing the PHEV PSD gear ratio.
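
    Activity 1's dynamic programming step can be illustrated on a toy power-split problem: at each step of a drive cycle, choose how much of the demand the battery covers, minimizing total fuel subject to state-of-charge limits. All numbers and the linear fuel model below are assumptions, not the thesis model.

```python
demand = [10, 20, 15, 25, 10]            # kW demand at each step (toy cycle)
socs = range(11)                          # discretized battery SOC levels
fuel = lambda p_eng: 0.08 * p_eng         # toy linear fuel rate, p_eng >= 0

cost = {s: 0.0 for s in socs}             # terminal cost: any final SOC is fine
for p in reversed(demand):                # backward dynamic programming
    new_cost = {}
    for s in socs:
        best = float("inf")
        for s_next in socs:               # the SOC transition fixes battery power
            p_batt = (s - s_next) * 2.0   # kW delivered per SOC level (toy)
            p_eng = p - p_batt
            if p_eng >= 0:                # feasible split (no negative engine power)
                best = min(best, fuel(p_eng) + cost[s_next])
        new_cost[s] = best
    cost = new_cost
print("minimum fuel cost from a full battery:", cost[10])
```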

  7. Multi-objective optimisation of wastewater treatment plant control to reduce greenhouse gas emissions.

    PubMed

    Sweetapple, Christine; Fu, Guangtao; Butler, David

    2014-05-15

    This study investigates the potential of control strategy optimisation for the reduction of operational greenhouse gas emissions from wastewater treatment in a cost-effective manner, and demonstrates that significant improvements can be realised. A multi-objective evolutionary algorithm, NSGA-II, is used to derive sets of Pareto optimal operational and control parameter values for an activated sludge wastewater treatment plant, with objectives including minimisation of greenhouse gas emissions, operational costs and effluent pollutant concentrations, subject to legislative compliance. Different problem formulations are explored, to identify the most effective approach to emissions reduction, and the sets of optimal solutions enable identification of trade-offs between conflicting objectives. It is found that multi-objective optimisation can facilitate a significant reduction in greenhouse gas emissions without the need for plant redesign or modification of the control strategy layout, but there are trade-offs to consider: most importantly, if operational costs are not to be increased, reduction of greenhouse gas emissions is likely to incur an increase in effluent ammonia and total nitrogen concentrations. Design of control strategies for a high effluent quality and low costs alone is likely to result in an inadvertent increase in greenhouse gas emissions, so it is of key importance that effects on emissions are considered in control strategy development and optimisation. PMID:24602860

  8. Evolutionary multiobjective design of a flexible caudal fin for robotic fish.

    PubMed

    Clark, Anthony J; Tan, Xiaobo; McKinley, Philip K

    2015-12-01

    Robotic fish accomplish swimming by deforming their bodies or other fin-like appendages. As an emerging class of embedded computing system, robotic fish are anticipated to play an important role in environmental monitoring, inspection of underwater structures, tracking of hazardous wastes and oil spills, and the study of live fish behaviors. While integration of flexible materials (into the fins and/or body) holds the promise of improved swimming performance (in terms of both speed and maneuverability) for these robots, such components also introduce significant design challenges due to the complex material mechanics and hydrodynamic interactions. The problem is further exacerbated by the need for the robots to meet multiple objectives (e.g., both speed and energy efficiency). In this paper, we propose an evolutionary multiobjective optimization approach to the design and control of a robotic fish with a flexible caudal fin. Specifically, we use the NSGA-II algorithm to investigate morphological and control parameter values that optimize swimming speed and power usage. Several evolved fin designs are validated experimentally with a small robotic fish, where fins of different stiffness values and sizes are printed with a multi-material 3D printer. Experimental results confirm the effectiveness of the proposed design approach in balancing the two competing objectives. PMID:26601975

  9. A multiobjective optimization approach to the operation and investment of the national energy and transportation systems

    NASA Astrophysics Data System (ADS)

    Ibanez, Eduardo

Most U.S. energy usage is for electricity production and vehicle transportation, two interdependent infrastructures. The strength and number of the interdependencies will increase rapidly as hybrid electric transportation systems, including plug-in hybrid electric vehicles and hybrid electric trains, become more prominent. There are several new energy supply technologies reaching maturity, accelerated by public concern over global warming. The National Energy and Transportation Planning Tool (NETPLAN) is the implementation of the long-term investment and operation model for the transportation and energy networks. An evolutionary approach with an underlying fast linear optimization is in place to determine the solutions with the best investment portfolios in terms of cost, resiliency and sustainability, i.e., the solutions that form the Pareto front. The popular NSGA-II algorithm is used as the base for the multiobjective optimization, and metrics are developed to evaluate the energy and transportation portfolios. An integrated approach to resiliency is presented, allowing the evaluation of high-consequence events, like hurricanes or widespread blackouts. A scheme to parallelize the multiobjective solver is presented, along with a decomposition method for the cost minimization program. The modular and data-driven design of the software is presented. The modeling tool is applied in a numerical example to optimize the national investment in energy and transportation over the next 40 years.

  10. Nonorthogonal orbital based N-body reduced density matrices and their applications to valence bond theory. II. An efficient algorithm for matrix elements and analytical energy gradients in VBSCF method.

    PubMed

    Chen, Zhenhua; Chen, Xun; Wu, Wei

    2013-04-28

In this paper, by applying the reduced density matrix (RDM) approach for nonorthogonal orbitals developed in the first paper of this series, efficient algorithms for matrix elements between VB structures and energy gradients in the valence bond self-consistent field (VBSCF) method were presented. Both algorithms scale only as nm^4 for integral transformation and d^2 n_β^2 for VB matrix elements and 3-RDM evaluation, while the computational costs of other procedures are negligible, where n, m, d, and n_β are the numbers of variable occupied active orbitals, basis functions, determinants, and active β electrons, respectively. Using tensor properties of the energy gradients with respect to the orbital coefficients presented in the first paper of this series, a partially orthogonal auxiliary orbital set was introduced to reduce the computational cost of VBSCF calculations in which orbitals are flexibly defined. Test calculations on the Diels-Alder reaction of butadiene and ethylene have shown that the novel algorithm is very efficient for VBSCF calculations. PMID:23635124

  11. Library of Continuation Algorithms

    2005-03-01

LOCA (Library of Continuation Algorithms) is scientific software written in C++ that provides advanced analysis tools for nonlinear systems. In particular, it provides parameter continuation algorithms, bifurcation tracking algorithms, and drivers for linear stability analysis. The algorithms are aimed at large-scale applications that use Newton's method for their nonlinear solve.

  12. The BR eigenvalue algorithm

    SciTech Connect

    Geist, G.A.; Howell, G.W.; Watkins, D.S.

    1997-11-01

The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30-60 in computing time and a factor of over 100 in matrix storage space.

  13. Preface to special section on ILAS-II: The Improved Limb Atmospheric Spectrometer-II

    NASA Astrophysics Data System (ADS)

    Nakajima, Hideaki

    2006-10-01

    The Improved Limb Atmospheric Spectrometer-II (ILAS-II) was a solar-occultation satellite sensor designed to measure minor constituents associated with polar ozone depletion. ILAS-II was placed on board the Advanced Earth Observing Satellite-II (ADEOS-II, "Midori-II"), which was successfully launched on 14 December 2002 from the Tanegashima Space Center of the Japan Aerospace Exploration Agency (JAXA). After an initial check of the instruments, ILAS-II made routine measurements for about 7 months, from 2 April 2003 to 24 October 2003, a period that included the formation and collapse of an Antarctic ozone hole in 2003, one of the largest in history. This paper introduces a special section containing papers on ILAS-II instrumental and on-orbit characteristics, several validation results of ILAS-II data processed with the version 1.4 data processing algorithm, and scientific analyses of polar stratospheric chemistry and dynamics using ILAS-II data.

  14. Prediction of a Flash Flood in Complex Terrain. Part II: A Comparison of Flood Discharge Simulations Using Rainfall Input from Radar, a Dynamic Model, and an Automated Algorithmic System.

    NASA Astrophysics Data System (ADS)

    Yates, David N.; Warner, Thomas T.; Leavesley, George H.

    2000-06-01

    Three techniques were employed for the estimation and prediction of precipitation from a thunderstorm that produced a flash flood in the Buffalo Creek watershed located in the mountainous Front Range near Denver, Colorado, on 12 July 1996. The techniques included 1) quantitative precipitation estimation using the National Weather Service's Weather Surveillance Radar-1988 Doppler and the National Center for Atmospheric Research's S-band, dual-polarization radars, 2) quantitative precipitation forecasting utilizing a dynamic model, and 3) quantitative precipitation forecasting using an automated algorithmic system for tracking thunderstorms. Rainfall data provided by these various techniques at short timescales (6 min) and at fine spatial resolutions (150 m to 2 km) served as input to a distributed-parameter hydrologic model for analysis of the flash flood. The quantitative precipitation estimates from the weather radar demonstrated their ability to aid in simulating a watershed's response to precipitation forcing from small-scale, convective weather in complex terrain. That is, with the radar-based quantitative precipitation estimates employed as input, the simulated peak discharge was similar to that estimated. The dynamic model showed the most promise in providing a significant forecast lead time for this flash-flood event. The algorithmic system did not show as much skill in comparison with the dynamic model in providing precipitation forcing to the hydrologic model. The discharge forecasts based on the dynamic-model and algorithmic-system inputs point to the need to improve the ability to forecast convective storms, especially if models such as these eventually are to be used in operational flood forecasting.

  15. Juno II

    NASA Technical Reports Server (NTRS)

    1959-01-01

The Juno II launch vehicle, shown here, was a modified Jupiter Intermediate-Range Ballistic Missile, developed by Dr. Wernher von Braun and the rocket team at Redstone Arsenal in Huntsville, Alabama. Between December 1958 and April 1961, the Juno II launched space probes Pioneer III and IV, as well as Explorer satellites VII, VIII and XI.

  16. Reasoning about systolic algorithms

    SciTech Connect

    Purushothaman, S.

    1986-01-01

Systolic algorithms are a class of parallel algorithms, with small grain concurrency, well suited for implementation in VLSI. They are intended to be implemented as high-performance, computation-bound back-end processors and are characterized by a tesselating interconnection of identical processing elements. This dissertation investigates the problem of proving correctness of systolic algorithms. The following are reported in this dissertation: (1) a methodology for verifying correctness of systolic algorithms based on solving the recurrence-equation representation of an algorithm; the methodology is demonstrated by proving the correctness of a systolic architecture for optimal parenthesization. (2) The implementation of mechanical proofs of correctness of two systolic algorithms, a convolution algorithm and an optimal parenthesization algorithm, using the Boyer-Moore theorem prover. (3) An induction principle for proving correctness of systolic arrays which are modular. Two attendant inference rules, weak equivalence and shift transformation, which capture equivalent behavior of systolic arrays, are also presented.

  17. Algorithm-development activities

    NASA Technical Reports Server (NTRS)

    Carder, Kendall L.

    1994-01-01

Algorithm-development activities at USF continue. The algorithm for determining chlorophyll alpha concentration (Chl alpha) and the gelbstoff absorption coefficient for SeaWiFS and MODIS-N radiance data is our current priority.

  18. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods; they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order and high-resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.
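
    High-order single-step schemes of the kind compared here rest on finite-difference weights that satisfy Taylor-series moment conditions. The sketch below generates central-difference weights of arbitrary even order by solving those conditions directly (weights are for unit grid spacing; divide by h**deriv in general). The stencil width is an illustrative choice, not one of the paper's schemes.

```python
import numpy as np
from math import factorial

def central_diff_weights(half_width, deriv=1):
    # Solve sum_j w_j * x_j**m = deriv! * [m == deriv] over the stencil,
    # i.e. the Taylor moment conditions, for unit grid spacing.
    x = np.arange(-half_width, half_width + 1, dtype=float)
    V = np.vander(x, increasing=True).T    # row m holds x_j ** m
    rhs = np.zeros(len(x))
    rhs[deriv] = factorial(deriv)
    return np.linalg.solve(V, rhs)

w = central_diff_weights(2)                # 5-point, 4th-order first derivative
print(w)                                   # ~ [1/12, -2/3, 0, 2/3, -1/12]
```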

  19. I. Thermal evolution of Ganymede and implications for surface features. II. Magnetohydrodynamic constraints on deep zonal flow in the giant planets. III. A fast finite-element algorithm for two-dimensional photoclinometry

    SciTech Connect

    Kirk, R.L.

    1987-01-01

Thermal evolution of Ganymede from a hot start is modeled. On cooling, ice I forms above the liquid H2O and dense ices at higher entropy below it. A novel diapiric instability is proposed to occur if the ocean thins enough, mixing these layers and perhaps leading to resurfacing and groove formation. Rising warm-ice diapirs may cause a dramatic heat pulse and fracturing at the surface, and provide material for surface flows. Timing of the pulse depends on ice rheology but could agree with crater-density dates for resurfacing. Origins of the Ganymede-Callisto dichotomy in light of the model are discussed. Based on estimates of the conductivity of H2 (Jupiter, Saturn) and H2O (Uranus, Neptune), the zonal winds of the giant planets will, if they penetrate below the visible atmosphere, interact with the magnetic field well outside the metallic core. The scaling argument is supported by a model with zonal velocity constant on concentric cylinders, the Lorentz torque on each balanced by viscous stresses. The problem of two-dimensional photoclinometry, i.e. reconstruction of a surface from its image, is formulated in terms of finite elements and a fast algorithm using Newton-SOR iteration accelerated by multigridding is presented.

  1. Photosystem II

    ScienceCinema

    James Barber

    2010-09-01

    James Barber, Ernst Chain Professor of Biochemistry at Imperial College, London, gives a BSA Distinguished Lecture titled, "The Structure and Function of Photosystem II: The Water-Splitting Enzyme of Photosynthesis."

  2. Applying a Genetic Algorithm to Reconfigurable Hardware

    NASA Technical Reports Server (NTRS)

    Wells, B. Earl; Weir, John; Trevino, Luis; Patrick, Clint; Steincamp, Jim

    2004-01-01

This paper investigates the feasibility of applying genetic algorithms to solve optimization problems that are implemented entirely in reconfigurable hardware. The paper highlights the performance/design space trade-offs that must be understood to effectively implement a standard genetic algorithm within a modern Field Programmable Gate Array (FPGA) reconfigurable hardware environment, and presents a case study where this stochastic search technique is applied to standard test-case problems taken from the technical literature. In this research, the targeted FPGA-based platform and high-level design environment was the Starbridge Hypercomputing platform, which incorporates multiple Xilinx Virtex II FPGAs, and the Viva graphical hardware description language.

  3. Optimum design of phononic crystal perforated plate structures for widest bandgap of fundamental guided wave modes and maximized in-plane stiffness

    NASA Astrophysics Data System (ADS)

    Hedayatrasa, Saeid; Abhary, Kazem; Uddin, Mohammad; Ng, Ching-Tai

    2016-04-01

This paper presents a topology optimization of a single-material phononic crystal plate (PhP) to be produced by perforation of a uniform background plate. The primary objective of this optimization study is to explore the widest exclusive bandgaps of the fundamental (first-order) symmetric or asymmetric guided wave modes, as well as the widest complete bandgap of mixed wave modes (symmetric and asymmetric). However, in the case of single-material porous phononic crystals the bandgap width essentially depends on the structural integration introduced by the achieved unitcell topology. Thinner connections between scattering segments (i.e. lower effective stiffness) generally lead to (i) a wider bandgap due to enhanced interfacial reflections, and (ii) a lower bandgap frequency range due to lower wave speed. In other words, a higher relative bandgap width (RBW) is produced by a topology with lower effective stiffness. Hence, in order to study the bandgap efficiency of the PhP unitcell with respect to its structural worthiness, the in-plane stiffness is incorporated in the optimization algorithm as an opposing objective to be maximized. Thick and relatively thin Polysilicon PhP unitcells with square symmetry are studied. The non-dominated sorting genetic algorithm NSGA-II is employed for this multi-objective optimization problem, and modal band analysis of individual topologies is performed through the finite element method. Specialized topology initiation, evaluation and filtering are applied to achieve refined feasible topologies without penalizing the randomness of the genetic algorithm (GA) and the diversity of the search space. Selected Pareto topologies are presented, and the gradients of RBW and elastic properties between the two Pareto-front extremes are investigated. Selected intermediate Pareto topologies, even those that are not the extreme topology with the widest bandgap, show bandgap efficiency superior to the widest-bandgap topologies of asymmetric guided waves reported in other works in the literature.

  4. The BRUSH algorithm for two-electron integrals on GPU

    NASA Astrophysics Data System (ADS)

    Rák, Ádám; Cserey, György

    2015-02-01

This Letter presents a new algorithmic method developed to evaluate two-electron repulsion integrals based on contracted Gaussian basis functions in a parallel way. The new algorithm scheme provides distinct SIMD (single instruction multiple data) optimized paths which symbolically transform integral parameters into target integral algorithms. Our measurements indicate that the method gives a significant improvement over the CPU-friendly PRISM algorithm. The benchmark tests (evaluation of more than 10^8 integrals using the STO-3G basis set) of our GPU (NVIDIA GTX 780) implementation showed up to 750-fold speedup compared to a single core of an Athlon II X4 635 CPU.

  5. Current status and early result of the ILAS-II onboard the ADEOS-II satellite

    NASA Astrophysics Data System (ADS)

    Nakajima, H.; Sugita, T.; Yokota, T.; Kanzawa, H.; Kobayashi, H.; Sasano, Y.

    2003-04-01

The Improved Limb Atmospheric Spectrometer-II (ILAS-II) onboard the Advanced Earth Observing Satellite-II (ADEOS-II) was successfully launched on 14 December, 2002 from NASDA's Tanegashima Space Center. ILAS-II is a solar-occultation atmospheric sensor which will measure vertical profiles of O3, HNO3, NO2, N2O, CH4, H2O, ClONO2, aerosol extinction coefficients, etc. with four grating spectrometers. After the initial checkout of ILAS-II, scheduled for January-February 2003, ILAS-II will make routine measurements from early April. A validation campaign is scheduled to take place in Kiruna, Sweden, in which several balloon-borne measurements are planned. Preliminary data from ILAS-II for both northern and southern polar regions, processed using the latest data retrieval algorithm, will be presented.

  6. Semioptimal practicable algorithmic cooling

    SciTech Connect

    Elias, Yuval; Mor, Tal; Weinstein, Yossi

    2011-04-15

    Algorithmic cooling (AC) of spins applies entropy manipulation algorithms in open spin systems in order to cool spins far beyond Shannon's entropy bound. Algorithmic cooling of nuclear spins was demonstrated experimentally and may contribute to nuclear magnetic resonance spectroscopy. Several cooling algorithms were suggested in recent years, including practicable algorithmic cooling (PAC) and exhaustive AC. Practicable algorithms have simple implementations, yet their level of cooling is far from optimal; exhaustive algorithms, on the other hand, cool much better, and some even reach (asymptotically) an optimal level of cooling, but they are not practicable. We introduce here semioptimal practicable AC (SOPAC), wherein a few cycles (typically two to six) are performed at each recursive level. Two classes of SOPAC algorithms are proposed and analyzed. Both attain cooling levels significantly better than PAC and are much more efficient than the exhaustive algorithms. These algorithms are shown to bridge the gap between PAC and exhaustive AC. In addition, we calculated the number of spins required by SOPAC in order to purify qubits for quantum computation. As few as 12 and 7 spins are required (in an ideal scenario) to yield a mildly pure spin (60% polarized) from initial polarizations of 1% and 10%, respectively. In the latter case, about five more spins are sufficient to produce a highly pure spin (99.99% polarized), which could be relevant for fault-tolerant quantum computing.

  7. Reasoning about systolic algorithms

    SciTech Connect

    Purushothaman, S.; Subrahmanyam, P.A.

    1988-12-01

The authors present a methodology for verifying correctness of systolic algorithms. The methodology is based on solving a set of Uniform Recurrence Equations obtained from a description of systolic algorithms as a set of recursive equations. They present an approach to mechanically verify correctness of systolic algorithms using the Boyer-Moore theorem prover. A mechanical correctness proof of an example from the literature is also presented.

  8. Competing Sudakov veto algorithms

    NASA Astrophysics Data System (ADS)

    Kleiss, Ronald; Verheyen, Rob

    2016-07-01

    We present a formalism to analyze the distribution produced by a Monte Carlo algorithm. We perform these analyses on several versions of the Sudakov veto algorithm, adding a cutoff, a second variable and competition between emission channels. The formal analysis allows us to prove that multiple, seemingly different competition algorithms, including those that are currently implemented in most parton showers, lead to the same result. Finally, we test their performance in a semi-realistic setting and show that there are significantly faster alternatives to the commonly used algorithms.
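
    For orientation, here is a minimal Python sketch of the basic veto step the paper builds on; the kernel f, the overestimate g, and the g-Sudakov sampler are illustrative assumptions, and the paper's additions (second variable, channel competition) are not reproduced.

        import random

        def next_emission(t_start, t_cut, f, g, sample_g):
            # Evolve downward from t_start; return the accepted emission scale,
            # or None if the evolution drops below the cutoff t_cut.
            t = t_start
            while True:
                t = sample_g(t)                    # trial scale from the overestimate g
                if t < t_cut:
                    return None
                if random.random() < f(t) / g(t):  # accept with ratio f/g,
                    return t                       # otherwise veto and continue from t

        # Toy usage with g(t) = c/t, whose Sudakov inverts to t' = t * r**(1/c):
        c = 2.0
        g = lambda t: c / t
        f = lambda t: 1.5 / t                      # any kernel with f <= g
        sample_g = lambda t: t * random.random() ** (1.0 / c)
        print(next_emission(1.0, 1e-3, f, g, sample_g))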

  9. SAGE II

    Atmospheric Science Data Center

    2016-02-16

    ... of stratospheric aerosols, ozone, nitrogen dioxide, water vapor and cloud occurrence by mapping vertical profiles and calculating ... (i.e., MLS and SAGE III versus HALOE); fixed various bugs. Details are in the SAGE II V7.00 Release Notes.

  10. Juno II

    NASA Technical Reports Server (NTRS)

    1959-01-01

    Wernher von Braun and his team were responsible for the Jupiter-C hardware. The family of launch vehicles developed by the team also came to include the Juno II, which was used to launch the Pioneer IV satellite on March 3, 1959. Pioneer IV passed within 37,000 miles of the Moon before going into solar orbit.

  11. Welding II.

    ERIC Educational Resources Information Center

    Allegheny County Community Coll., Pittsburgh, PA.

    Instructional objectives and performance requirements are outlined in this course guide for Welding II, a performance-based course offered at the Community College of Allegheny County to introduce students to out-of-position shielded arc welding with emphasis on proper heats, electrode selection, and alternating/direct currents. After introductory…

  12. Algorithm That Synthesizes Other Algorithms for Hashing

    NASA Technical Reports Server (NTRS)

    James, Mark

    2010-01-01

    An algorithm that includes a collection of several subalgorithms has been devised as a means of synthesizing still other algorithms (which could include computer code) that utilize hashing to determine whether an element (typically, a number or other datum) is a member of a set (typically, a list of numbers). Each subalgorithm synthesizes an algorithm (e.g., a block of code) that maps a static set of key hashes to a somewhat linear monotonically increasing sequence of integers. The goal in formulating this mapping is to cause the length of the sequence thus generated to be as close as practicable to the original length of the set and thus to minimize gaps between the elements. The advantage of the approach embodied in this algorithm is that it completely avoids the traditional approach of hash-key look-ups that involve either secondary hash generation and look-up or further searching of a hash table for a desired key in the event of collisions. This algorithm guarantees that it will never be necessary to perform a search or to generate a secondary key in order to determine whether an element is a member of a set. This algorithm further guarantees that any algorithm that it synthesizes can be executed in constant time. To enforce these guarantees, the subalgorithms are formulated to employ a set of techniques, each of which works very effectively covering a certain class of hash-key values. These subalgorithms are of two types, summarized as follows: Given a list of numbers, try to find one or more solutions in which, if each number is shifted to the right by a constant number of bits and then masked with a rotating mask that isolates a set of bits, a unique number is thereby generated. In a variant of the foregoing procedure, omit the masking. Try various combinations of shifting, masking, and/or offsets until the solutions are found. From the set of solutions, select the one that provides the greatest compression for the representation and is executable in the
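
    A brute-force Python sketch of the shift-and-mask subalgorithm described above; the search ranges and the key list are illustrative assumptions, not parameters from the NASA report.

        def synthesize_hash(keys, max_shift=32, max_mask_bits=16):
            # Search for (shift, mask) so that (k >> shift) & mask is unique per
            # key, giving collision-free, constant-time membership tests.
            n = len(keys)
            for bits in range(max(1, n.bit_length()), max_mask_bits + 1):
                mask = (1 << bits) - 1
                for shift in range(max_shift):
                    hashed = {(k >> shift) & mask for k in keys}
                    if len(hashed) == n:       # every key maps to a distinct value
                        return shift, mask
            return None                        # no solution in the searched ranges

        print(synthesize_hash([3, 17, 45, 86, 129]))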

  13. Selecting robust solutions from a trade-off surface through the evaluation of the distribution of parameter sets in objective space and parameter space

    NASA Astrophysics Data System (ADS)

    Dumedah, G.; Berg, A. A.; Wineberg, M.

    2009-12-01

    Hydrological models are increasingly being calibrated using multi-objective genetic algorithms (GAs). Multi-objective GAs facilitate the evaluation of several model evaluation objectives and the examination of massive combinations of parameter sets. Usually, the outcome is a set of several equally accurate parameter sets which make up a trade-off surface between the objective functions, often referred to as the Pareto set. The Pareto set describes a decision front in the sense that each solution has unique values in parameter space with competing accuracy in objective space. An automated framework for choosing a single solution from such a trade-off surface has not been thoroughly investigated in the model calibration literature. As a result, this presentation will demonstrate an automated selection of robust solutions from a trade-off surface using the distribution of solutions in both objective space and parameter space. The trade-off surface was generated using the Non-dominated Sorting Genetic Algorithm II (NSGA-II) to calibrate the Soil and Water Assessment Tool (SWAT) for streamflow simulation based on model bias and root mean square error. Our selection method generates solutions with unique properties, including a representative pathway in parameter space, a basin of attraction or center of mass in objective space, and proximity to the origin in objective space. Additionally, our framework determines a robust solution as a balanced compromise for the distribution of solutions in objective space and parameter space. That is, the robust solution emphasizes stability in model parameter values and in objective function values in a way that similarity in parameter space implies similarity in objective space.
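
    A simplified Python stand-in for the selection criteria sketched above: objectives are normalized, and the returned solution balances proximity to the origin in objective space against proximity to the center of mass in parameter space (the presentation's actual weighting is not specified here).

        import numpy as np

        def select_robust(objectives, parameters):
            # objectives: (n, m) array, lower is better; parameters: (n, p) array.
            F = np.asarray(objectives, dtype=float)
            F = (F - F.min(axis=0)) / (np.ptp(F, axis=0) + 1e-12)  # scale to [0, 1]
            d_obj = np.linalg.norm(F, axis=1)             # proximity to the origin

            P = np.asarray(parameters, dtype=float)
            P = (P - P.min(axis=0)) / (np.ptp(P, axis=0) + 1e-12)
            d_par = np.linalg.norm(P - P.mean(axis=0), axis=1)  # center-of-mass distance

            return int(np.argmin(d_obj + d_par))          # balanced compromise index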

  14. Parallel scheduling algorithms

    SciTech Connect

    Dekel, E.; Sahni, S.

    1983-01-01

    Parallel algorithms are given for scheduling problems such as scheduling to minimize the number of tardy jobs, job sequencing with deadlines, scheduling to minimize earliness and tardiness penalties, channel assignment, and minimizing the mean finish time. The shared memory model of parallel computers is used to obtain fast algorithms. 26 references.

  15. Developmental Algorithms Have Meaning!

    ERIC Educational Resources Information Center

    Green, John

    1997-01-01

    Adapts Stanic and McKillip's ideas for the use of developmental algorithms to propose that the present emphasis on symbolic manipulation should be tempered with an emphasis on the conceptual understanding of the mathematics underlying the algorithm. Uses examples from the areas of numeric computation, algebraic manipulation, and equation solving…

  16. Filtering algorithm for dotted interferences

    NASA Astrophysics Data System (ADS)

    Osterloh, K.; Bücherl, T.; Lierse von Gostomski, Ch.; Zscherpel, U.; Ewert, U.; Bock, S.

    2011-09-01

    An algorithm has been developed to reliably remove dotted interferences impairing the perceptibility of objects within a radiographic image. This is a particularly major challenge with neutron radiographs collected at the NECTAR facility, Forschungs-Neutronenquelle Heinz Maier-Leibnitz (FRM II): the resulting images are dominated by features resembling a snow flurry. These artefacts are caused by scattered neutrons, gamma radiation, cosmic radiation, etc. all hitting the detector CCD directly in spite of a sophisticated shielding. This makes such images rather useless for further direct evaluations. One approach to this problem of random effects would be to collect a vast number of single images, to combine them appropriately, and to process them with common image filtering procedures. However, it has been shown that median filtering, for example, depending on the kernel size in the plane and/or the number of single shots to be combined, is either insufficient or tends to blur sharp lined structures. This makes visually controlled, image-by-image processing unavoidable. Particularly in tomographic studies, it would be far too tedious to treat each single projection in this way. Alternatively, it would be not only more convenient but in many cases the only reasonable approach to filter a stack of images in a batch procedure to get rid of the disturbing interferences. The algorithm presented here meets all these requirements. It reliably frees the images from the snowy pattern described above without the loss of fine structures and without a general blurring of the image. It consists of an iterative, parameter-free filtering algorithm, suited to batch procedures, that aims to eliminate the often complex interfering artefacts while leaving the original information untouched as far as possible.
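
    A generic outlier-replacement sketch in the spirit of the batch filter described above (not the published NECTAR algorithm): pixels deviating strongly from their local median are replaced, and the pass repeats until no pixel changes. The 3x3 window and the threshold k are assumptions, and SciPy's median_filter is used for the neighbourhood median.

        import numpy as np
        from scipy.ndimage import median_filter

        def despeckle(image, k=5.0, max_iter=20):
            # Iteratively replace isolated "snow" pixels by the local median;
            # only strongly deviating pixels are touched, so fine but coherent
            # structures survive.
            img = np.asarray(image, dtype=float).copy()
            for _ in range(max_iter):
                med = median_filter(img, size=3)
                dev = np.abs(img - med)
                mad = np.median(dev) + 1e-12      # robust scale of the deviations
                bad = dev > k * mad
                if not bad.any():
                    break                         # converged: no outliers left
                img[bad] = med[bad]               # replace only the flagged pixels
            return img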

  17. Multi-Objective Calibration of Hydrological Model Parameters Using MOSCEM-UA

    NASA Astrophysics Data System (ADS)

    Wang, Yuhui; Lei, Xiaohui; Jiang, Yunzhong; Wang, Hao

    2010-05-01

    In the past two decades, many evolutionary algorithms, such as NSGA-II and SCEM, have been adopted for the auto-calibration of hydrological models, some of which have shown ideal performance. In this article, a detailed hydrological model auto-calibration algorithm, the Multi-objective Shuffled Complex Evolution Metropolis (MOSCEM-UA), is introduced to carry out auto-calibration of a hydrological model in order to clarify the equilibrium and the uncertainty of model parameters. The development and the implementation flow chart of this advanced multi-objective algorithm (MOSCEM-UA) are interpreted in detail. Hymod, a conceptual hydrological model based on Moore's concept, is then introduced as a lumped rainfall-runoff simulation approach with several principal parameters involved. The five important model parameters subjected to calibration include the maximum storage capacity, the spatial variability of the soil moisture capacity, the flow distribution factor between the slow and quick reservoirs, and the slow-tank and quick-tank distribution factors. In this study, a test case on the upstream area of the KuanCheng hydrometric station in the Haihe basin was studied to verify the performance of the calibration. Two objectives, one for the high-flow process and one for the low-flow process, are chosen in the process of calibration. The results emphasize that the interrelationship between the objective functions can be described by the Pareto front obtained with MOSCEM-UA, which can be drawn after the iterations. Furthermore, the posterior ranges of parameters corresponding to the Pareto sets can also be drawn to identify the prediction range of the model. A set of balanced parameters was then chosen to validate the model, and the result showed an ideal prediction. Meanwhile, the correlation among parameters and their effects on the model performance could also be assessed.

  18. PORT II

    NASA Technical Reports Server (NTRS)

    Muniz, Beau

    2009-01-01

    One unique project that the Prototype Lab worked on was PORT I (Post-landing Orion Recovery Test). PORT is designed to test and develop the system and components needed to recover the Orion capsule once it splashes down in the ocean. PORT II is designated as a follow-up to PORT I that will utilize a mock-up pressure vessel that is spatially comparable to the final Orion capsule.

  19. BORE II

    SciTech Connect

    2015-08-01

    Bore II, co-developed by Berkeley Lab researchers Frank Hale, Chin-Fu Tsang, and Christine Doughty, provides vital information for solving water quality and supply problems and for improving remediation of contaminated sites. Termed "hydrophysical logging," this technology is based on the concept of measuring repeated depth profiles of fluid electric conductivity in a borehole that is pumping. As fluid enters the wellbore, its distinct electric conductivity causes peaks in the conductivity log that grow and migrate upward with time. Analysis of the evolution of the peaks enables characterization of groundwater flow distribution more quickly, more cost effectively, and with higher resolution than ever before. Combining the unique interpretation software Bore II with advanced downhole instrumentation (the hydrophysical logging tool), the method quantifies inflow and outflow locations, their associated flow rates, and the basic water quality parameters of the associated formation waters (e.g., pH, oxidation-reduction potential, temperature). In addition, when applied in conjunction with downhole fluid sampling, Bore II makes possible a complete assessment of contaminant concentration within groundwater.

  1. Firefly algorithm with chaos

    NASA Astrophysics Data System (ADS)

    Gandomi, A. H.; Yang, X.-S.; Talatahari, S.; Alavi, A. H.

    2013-01-01

    A recently developed metaheuristic optimization algorithm, firefly algorithm (FA), mimics the social behavior of fireflies based on the flashing and attraction characteristics of fireflies. In the present study, we will introduce chaos into FA so as to increase its global search mobility for robust global optimization. Detailed studies are carried out on benchmark problems with different chaotic maps. Here, 12 different chaotic maps are utilized to tune the attractive movement of the fireflies in the algorithm. The results show that some chaotic FAs can clearly outperform the standard FA.
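
    A minimal Python sketch of one chaos-injection variant the paper studies: a logistic map replaces the fixed attractiveness coefficient (which of the 12 chaotic maps modulates which FA parameter varies between the tested schemes; this pairing is illustrative).

        import math
        import random

        def firefly_chaos(f, n=15, dim=2, iters=100, lo=-5.0, hi=5.0):
            # Minimal FA for minimizing f: a logistic map drives the
            # attractiveness coefficient instead of a fixed beta.
            X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
            alpha, gamma, beta = 0.2, 1.0, 0.7    # beta seeds the chaotic map
            for _ in range(iters):
                beta = 4.0 * beta * (1.0 - beta)          # logistic map in (0, 1)
                F = [f(x) for x in X]                     # brightness per sweep
                for i in range(n):
                    for j in range(n):
                        if F[j] < F[i]:                   # move i toward brighter j
                            r2 = sum((a - b) ** 2 for a, b in zip(X[i], X[j]))
                            b_ij = beta * math.exp(-gamma * r2)
                            X[i] = [min(hi, max(lo, a + b_ij * (b - a)
                                    + alpha * (random.random() - 0.5)))
                                    for a, b in zip(X[i], X[j])]
            best = min(X, key=f)
            return best, f(best)

        print(firefly_chaos(lambda x: sum(v * v for v in x)))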

  2. Cyclic cooling algorithm

    SciTech Connect

    Rempp, Florian; Mahler, Guenter; Michel, Mathias

    2007-09-15

    We introduce a scheme to perform the cooling algorithm, first presented by Boykin et al. in 2002, for an arbitrary number of times on the same set of qubits. We achieve this goal by adding an additional SWAP gate and a bath contact to the algorithm. This way one qubit may repeatedly be cooled without adding additional qubits to the system. By using a product Liouville space to model the bath contact we calculate the density matrix of the system after a given number of applications of the algorithm.

  3. Parallel algorithms and architectures

    SciTech Connect

    Albrecht, A.; Jung, H.; Mehlhorn, K.

    1987-01-01

    Contents of this book are the following: Preparata: Deterministic simulation of idealized parallel computers on more realistic ones; Convex hull of randomly chosen points from a polytope; Dataflow computing; Parallel in sequence; Towards the architecture of an elementary cortical processor; Parallel algorithms and static analysis of parallel programs; Parallel processing of combinatorial search; Communications; An O(n log n) cost parallel algorithm for the single function coarsest partition problem; Systolic algorithms for computing the visibility polygon and triangulation of a polygonal region; RELACS - A recursive layout computing system; and Parallel linear conflict-free subtree access.

  4. The Algorithm Selection Problem

    NASA Technical Reports Server (NTRS)

    Minton, Steve; Allen, John; Deiss, Ron (Technical Monitor)

    1994-01-01

    Work on NP-hard problems has shown that many instances of these theoretically computationally difficult problems are quite easy. The field has also shown that choosing the right algorithm for the problem can have a profound effect on the time needed to find a solution. However, to date there has been little work showing how to select the right algorithm for solving any particular problem. The paper refers to this as the algorithm selection problem. It describes some of the aspects that make this problem difficult, as well as proposes a technique for addressing it.

  5. A Simple Calculator Algorithm.

    ERIC Educational Resources Information Center

    Cook, Lyle; McWilliam, James

    1983-01-01

    The problem of finding cube roots when limited to a calculator with only square root capability is discussed. An algorithm is demonstrated and explained which should always produce a good approximation within a few iterations. (MP)
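
    The note's exact procedure is not reproduced in the record, but one classic square-root-only scheme fits its description: iterate x <- sqrt(sqrt(n*x)), whose fixed point satisfies x^4 = n*x, i.e., x = n^(1/3). A Python rendering of that scheme:

        import math

        def cube_root(n, iters=30):
            # Approximate n**(1/3) for n > 0 using only multiplication and
            # square roots; each iteration shrinks the error exponent by 1/4.
            x = n
            for _ in range(iters):
                x = math.sqrt(math.sqrt(n * x))
            return x

        print(cube_root(27.0))   # converges to 3.0 within a few iterations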

  6. Cloud model bat algorithm.

    PubMed

    Zhou, Yongquan; Xie, Jian; Li, Liangliang; Ma, Mingzhi

    2014-01-01

    Bat algorithm (BA) is a novel stochastic global optimization algorithm. Cloud model is an effective tool for transforming between qualitative concepts and their quantitative representation. Based on the bat echolocation mechanism and the excellent characteristics of the cloud model for uncertainty knowledge representation, a new cloud model bat algorithm (CBA) is proposed. This paper focuses on remodeling the echolocation model based on the living and preying characteristics of bats, utilizing the transformation theory of the cloud model to depict the qualitative concept: "bats approach their prey." Furthermore, the Lévy flight mode and a population information communication mechanism of bats are introduced to balance exploration and exploitation. The simulation results show that the cloud model bat algorithm has good performance on function optimization. PMID:24967425

  7. Line Thinning Algorithm

    NASA Astrophysics Data System (ADS)

    Feigin, G.; Ben-Yosef, N.

    1983-10-01

    A thinning algorithm, of the banana-peel type, is presented. In each iteration pixels are attacked from all directions (there are no sub-iterations), and the deletion criteria depend on the 24 nearest neighbours.

  8. Diagnostic Algorithm Benchmarking

    NASA Technical Reports Server (NTRS)

    Poll, Scott

    2011-01-01

    A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.

  9. Algorithmically specialized parallel computers

    SciTech Connect

    Snyder, L.; Jamieson, L.H.; Gannon, D.B.; Siegel, H.J.

    1985-01-01

    This book is based on a workshop which dealt with array processors. Topics considered include algorithmic specialization using VLSI, innovative architectures, signal processing, speech recognition, image processing, specialized architectures for numerical computations, and general-purpose computers.

  10. OpenEIS Algorithms

    2013-07-29

    The OpenEIS Algorithm package seeks to provide a low-risk path for building owners, service providers and managers to explore analytical methods for improving building control and operational efficiency. Users of this software can analyze building data, and learn how commercial implementations would provide long-term value. The code also serves as a reference implementation for developers who wish to adapt the algorithms for use in commercial tools or service offerings.

  11. The Superior Lambert Algorithm

    NASA Astrophysics Data System (ADS)

    der, G.

    2011-09-01

    Lambert algorithms are used extensively for initial orbit determination, mission planning, space debris correlation, and missile targeting, just to name a few applications. Due to the significance of the Lambert problem in Astrodynamics, Gauss, Battin, Godal, Lancaster, Gooding, Sun and many others (References 1 to 15) have provided numerous formulations leading to various analytic solutions and iterative methods. Most Lambert algorithms and their computer programs can only work within one revolution, break down or converge slowly when the transfer angle is near zero or 180 degrees, and their multi-revolution limitations are either ignored or barely addressed. Despite claims of robustness, many Lambert algorithms fail without notice, and the users seldom have a clue why. The DerAstrodynamics lambert2 algorithm, which is based on the analytic solution formulated by Sun, works for any number of revolutions and converges rapidly at any transfer angle. It provides significant capability enhancements over every other Lambert algorithm in use today. These include improved speed, accuracy, robustness, and multirevolution capabilities as well as implementation simplicity. Additionally, the lambert2 algorithm provides a powerful tool for solving the angles-only problem without artificial singularities (pointed out by Gooding in Reference 16), which involves 3 lines of sight captured by optical sensors, or systems such as the Air Force Space Surveillance System (AFSSS). The analytic solution is derived from the extended Godal’s time equation by Sun, while the iterative method of solution is that of Laguerre, modified for robustness. The Keplerian solution of a Lambert algorithm can be extended to include the non-Keplerian terms of the Vinti algorithm via a simple targeting technique (References 17 to 19). Accurate analytic non-Keplerian trajectories can be predicted for satellites and ballistic missiles, while performing at least 100 times faster in speed than most

  12. Analysis of estimation algorithms for CDTI and CAS applications

    NASA Technical Reports Server (NTRS)

    Goka, T.

    1985-01-01

    Estimation algorithms for Cockpit Display of Traffic Information (CDTI) and Collision Avoidance System (CAS) applications were analyzed and/or developed. The algorithms are based on actual or projected operational and performance characteristics of an Enhanced TCAS II traffic sensor developed by Bendix and the Federal Aviation Administration. Three algorithm areas are examined and discussed: horizontal (x and y), range, and altitude estimation algorithms. Raw estimation errors are quantified using Monte Carlo simulations developed for each application; the raw errors are then used to infer impacts on the CDTI and CAS applications. Applications of smoothing algorithms to CDTI problems are also discussed briefly. Technical conclusions are summarized based on the analysis of simulation results.

  13. Development of an uncertainty technique using Bayesian methods to study the impact of climate change and land use change on solutions obtained by the BMP selection and placement optimization tool

    NASA Astrophysics Data System (ADS)

    Maringanti, C.; Chaubey, I.

    2009-12-01

    A multi-objective genetic algorithm (NSGA-II) in combination with a watershed model (Soil and Water Assessment Tool (SWAT)) is used in an optimization framework for making the Best Management Practices (BMP) selection and placement decisions to reduce the nonpoint source (NPS) pollutants and the net cost for implementation of BMPs. Shuffled complex evolutionary metropolis uncertainty analysis (SCEM-UA) method will be used to quantify the uncertainty of the BMP selection and placement tool. The sources of input uncertainty for the tool include the uncertainties in the estimation of economic costs for the implementation of BMPs, and input SWAT model predictions at field level. The SWAT model predictions are in turn influenced by the model parameters and the input climate forcing such as precipitation and temperature which in turn are affected due to the changing climate, and the changing land use in the watershed. The optimization tool is also influenced by the operational parameters of the genetic algorithm. The SCEM-UA method will be initiated using a uniform distribution for the range of the model parameters and the input sources of uncertainty to estimate the posterior probability distribution of the model response variables. This methodology will be applied to estimate the uncertainty in the BMP selection and placement in Wildcat Creek Watershed located in northcentral Indiana. Nitrogen, phosphorus, sediment, and pesticide are the various NPS pollutants that will be reduced through implementation of BMPs in the watershed. The uncertainty bounds around the Pareto-optimal fronts after the optimization will provide the watershed management groups a clear insight on how the desired water quality goals could be realistically met for the least amount of money that is available for BMP implementation in the watershed.

  14. Optimal design of tunable phononic bandgap plates under equibiaxial stretch

    NASA Astrophysics Data System (ADS)

    Hedayatrasa, Saeid; Abhary, Kazem; Uddin, M. S.; Guest, James K.

    2016-05-01

    Design and application of phononic crystal (PhCr) acoustic metamaterials have been a topic of tremendously growing interest in the last decade due to their promising capabilities to manipulate acoustic and elastodynamic waves. Phononic controllability of waves through a particular PhCr is limited to the spectra located within its fixed bandgap frequency. Hence the ability to tune a PhCr is desired, to add functionality over a variable bandgap frequency or for switchability. Deformation-induced bandgap tunability of elastomeric PhCr solids and plates with prescribed topology has been studied by other researchers. Principally, the internal stress state and distorted geometry of a deformed phononic crystal plate (PhP) change its effective stiffness and lead to deformation-induced tunability of the resultant modal band structure. Thus the microstructural topology of a PhP can be altered so that specific tunability features are met through prescribed deformation. In the present study, novel tunable PhPs of this kind with optimized bandgap efficiency-tunability of guided waves are computationally explored and evaluated. Low-loss transmission of guided waves throughout thin-walled structures makes them ideal for the fabrication of low-loss ultrasound devices and for structural health monitoring purposes. Various tunability targets are defined to enhance or degrade complete bandgaps of plate waves through macroscopic tensile deformation. An elastomeric hyperelastic material is considered, which enables recoverable micromechanical deformation under the tuning finite stretch. Phononic tunability through stable deformation of the phononic lattice is specifically required, and so any topology showing buckling instability under the assumed deformation is disregarded. The nondominated sorting genetic algorithm (GA) NSGA-II is adopted for evolutionary multiobjective topology optimization of the hypothesized tunable PhP with a square symmetric unit-cell, and relevant topologies are analyzed through finite

  15. Multi-objective optimization of gear forging process based on adaptive surrogate meta-models

    NASA Astrophysics Data System (ADS)

    Meng, Fanjuan; Labergere, Carl; Lafon, Pascal; Daniel, Laurent

    2013-05-01

    In the forging industry, net shape or near net shape forging of gears has been the subject of considerable research effort in the last few decades. In this paper, a multi-objective optimization methodology for net shape gear forging process design is discussed. The study is done in four main parts: building a parametric CAD geometry model, simulating the forging process, fitting surrogate meta-models, and optimizing the process by using an advanced algorithm. In order to make the meta-models approximate the real response as closely as possible, an adaptive meta-model based design strategy has been applied. This is a continuous process: first, build a preliminary version of the meta-models after the initial simulated calculations; second, improve the accuracy and update the meta-models by adding new representative samples. By using this iterative strategy, the number of initial sample points for real numerical simulations is greatly decreased and the time for the forged gear design is significantly shortened. Finally, an optimal design for an industrial application of a 27-teeth gear forging process is introduced, which includes three optimization variables and two objective functions. A 3D FE numerical simulation model is used to realize the process, and an advanced thermo-elasto-visco-plastic constitutive equation is considered to represent the material behavior. The meta-model applied for this example is kriging and the optimization algorithm is NSGA-II. At last, a relatively good Pareto optimal front (POF) is obtained by gradually improving the obtained surrogate meta-models.
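
    An illustrative Python loop for the adaptive meta-model strategy described above, using a Gaussian-process (kriging) surrogate from scikit-learn; the candidate grid and the pick-the-most-uncertain update rule are assumptions, not the authors' exact infill criterion.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor

        def adaptive_surrogate(simulate, x_init, n_updates=10):
            # Fit a kriging model on the simulated samples, then repeatedly add
            # the candidate point where the model is least certain.
            X = np.asarray(x_init, dtype=float)
            y = np.array([simulate(x) for x in X])
            candidates = np.random.uniform(X.min(0), X.max(0),
                                           size=(200, X.shape[1]))
            gp = GaussianProcessRegressor(normalize_y=True)
            for _ in range(n_updates):
                gp.fit(X, y)
                _, std = gp.predict(candidates, return_std=True)
                x_new = candidates[np.argmax(std)]    # most uncertain candidate
                X = np.vstack([X, x_new])
                y = np.append(y, simulate(x_new))     # one new expensive run
            return gp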

  16. A new multi-objective approach to finite element model updating

    NASA Astrophysics Data System (ADS)

    Jin, Seung-Seop; Cho, Soojin; Jung, Hyung-Jo; Lee, Jong-Jae; Yun, Chung-Bang

    2014-05-01

    The single objective function (SOF) has been employed for the optimization process in the conventional finite element (FE) model updating. The SOF balances the residual of multiple properties (e.g., modal properties) using weighting factors, but the weighting factors are hard to determine before the run of model updating. Therefore, the trial-and-error strategy is taken to find the most preferred model among alternative updated models resulting from varying weighting factors. In this study, a new approach to FE model updating using the multi-objective function (MOF) is proposed to get the most preferred model in a single run of updating without trial-and-error. For the optimization using the MOF, non-dominated sorting genetic algorithm-II (NSGA-II) is employed to find the Pareto optimal front. The bend angle related to the trade-off relationship of the objective functions is used to select the most preferred model among the solutions on the Pareto optimal front. To validate the proposed approach, a highway bridge is selected as a test-bed and the modal properties of the bridge are obtained from the ambient vibration test. The initial FE model of the bridge is built using SAP2000. The model is updated using the identified modal properties by the SOF approach with varying weighting factors and by the proposed MOF approach. The most preferred model is selected using the bend angle of the Pareto optimal front, and compared with the results from the SOF approach with varying weighting factors. The comparison shows that the proposed MOF approach is superior to the SOF approach with varying weighting factors in getting smaller objective function values, estimating better updated parameters, and taking less computational time.

  17. Atmospheric environment monitoring by the ILAS-II onboard the ADEOS-II satellite

    NASA Astrophysics Data System (ADS)

    Nakajima, Hideaki; Sugita, Takafumi; Yokota, Tatsuya; Sasano, Yasuhiro

    2004-11-01

    The Improved Limb Atmospheric Spectrometer-II (ILAS-II) onboard the Advanced Earth Observing Satellite-II (ADEOS-II) was successfully launched on 14 December 2002 from the Japan Aerospace Exploration Agency (JAXA)'s Tanegashima Space Center. ILAS-II is a solar-occultation atmospheric sensor which measures vertical profiles of O3, HNO3, NO2, N2O, CH4, H2O, ClONO2, aerosol extinction coefficients, etc. with four grating spectrometers. After the checkout period, ILAS-II was in routine operation from 2 April 2003 until 24 October 2003, when ADEOS-II lost its function due to a solar-paddle failure. However, about 7 months of data were acquired by ILAS-II, covering the whole period of the Antarctic ozone hole in 2003, when ozone depletion was among the largest observed to date. ILAS-II successfully measured vertical profiles of ozone, nitric acid, nitrous oxide, and aerosol extinction coefficients due to Polar Stratospheric Clouds (PSCs) during this ozone hole period. The ILAS-II data processed with the latest data retrieval algorithm (Version 1.4) show fairly good agreement with correlative ozonesonde measurements, within 15% accuracy.

  18. SLAP lesions: a treatment algorithm.

    PubMed

    Brockmeyer, Matthias; Tompkins, Marc; Kohn, Dieter M; Lorbach, Olaf

    2016-02-01

    Tears of the superior labrum involving the biceps anchor are a common entity, especially in athletes, and may highly impair shoulder function. If conservative treatment fails, successful arthroscopic repair of symptomatic SLAP lesions has been described in the literature, particularly for young athletes. However, the results in throwing athletes are less successful, with a significant proportion of patients not regaining their pre-injury level of performance. The clinical results of SLAP repairs in middle-aged and older patients are mixed, with worse results and higher revision rates as compared to younger patients. In this population, tenotomy or tenodesis of the biceps tendon is a viable alternative to SLAP repairs in order to improve clinical outcomes. The present article introduces a treatment algorithm for SLAP lesions based upon the recent literature as well as the authors' clinical experience. The type of lesion, age of patient, concomitant lesions, and functional requirements, as well as the sport activity level of the patient, need to be considered. Moreover, normal variations and degenerative changes in the SLAP complex have to be distinguished from "true" SLAP lesions in order to improve results and avoid overtreatment. The suggestion for a treatment algorithm includes: type I: conservative treatment or arthroscopic debridement, type II: SLAP repair or biceps tenotomy/tenodesis, type III: resection of the unstable bucket-handle tear, type IV: SLAP repair (biceps tenotomy/tenodesis if >50 % of biceps tendon is affected), type V: Bankart repair and SLAP repair, type VI: resection of the flap and SLAP repair, and type VII: refixation of the anterosuperior labrum and SLAP repair. PMID:26818554
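
    The type-based branch of the suggested algorithm can be rendered as a lookup table (hypothetical Python; in practice age, concomitant lesions, and activity level modify the choice, as the abstract stresses):

        # Type-based branch only; modifiers from the abstract are not encoded.
        SLAP_TREATMENT = {
            "I":   "conservative treatment or arthroscopic debridement",
            "II":  "SLAP repair or biceps tenotomy/tenodesis",
            "III": "resection of the unstable bucket-handle tear",
            "IV":  "SLAP repair (tenotomy/tenodesis if >50% of biceps affected)",
            "V":   "Bankart repair and SLAP repair",
            "VI":  "resection of the flap and SLAP repair",
            "VII": "refixation of the anterosuperior labrum and SLAP repair",
        }

        print(SLAP_TREATMENT["II"])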

  19. Project resource reallocation algorithm

    NASA Technical Reports Server (NTRS)

    Myers, J. E.

    1981-01-01

    A methodology for adjusting baseline cost estimates according to project schedule changes is described. An algorithm which performs a linear expansion or contraction of the baseline project resource distribution in proportion to the project schedule expansion or contraction is presented. Input to the algorithm consists of the deck of cards (PACE input data) prepared for the baseline project schedule as well as a specification of the nature of the baseline schedule change. Output of the algorithm is a new deck of cards with all work breakdown structure block and element of cost estimates redistributed for the new project schedule. This new deck can be processed through PACE to produce a detailed cost estimate for the new schedule.

  20. Optical rate sensor algorithms

    NASA Technical Reports Server (NTRS)

    Uhde-Lacovara, Jo A.

    1989-01-01

    Optical sensors, in particular Charge Coupled Device (CCD) arrays, will be used on Space Station to track stars in order to provide inertial attitude reference. Algorithms are presented to derive attitude rate from the optical sensors. The first algorithm is a recursive differentiator. A variance reduction factor (VRF) of 0.0228 was achieved with a rise time of 10 samples; a VRF of 0.2522 gives a rise time of 4 samples. The second algorithm is based on the direct manipulation of the pixel intensity outputs of the sensor. In 1-dimensional simulations, the derived rate was within 0.07 percent of the actual rate in the presence of additive Gaussian noise with a signal to noise ratio of 60 dB.

  1. Temperature Corrected Bootstrap Algorithm

    NASA Technical Reports Server (NTRS)

    Comiso, Joey C.; Zwally, H. Jay

    1997-01-01

    A temperature corrected Bootstrap Algorithm has been developed using Nimbus-7 Scanning Multichannel Microwave Radiometer data in preparation for the upcoming AMSR instrument aboard ADEOS and EOS-PM. The procedure first calculates the effective surface emissivity using emissivities of ice and water at 6 GHz and a mixing formulation that utilizes ice concentrations derived using the current Bootstrap algorithm but with brightness temperatures from the 6 GHz and 37 GHz channels. These effective emissivities are then used to calculate surface ice temperatures, which in turn are used to convert the 18 GHz and 37 GHz brightness temperatures to emissivities. Ice concentrations are then derived using the same technique as with the Bootstrap algorithm, but using emissivities instead of brightness temperatures. The results show significant improvement in areas where the ice temperature is expected to vary considerably, such as near the continental areas in the Antarctic, where the ice temperature is colder than average, and in marginal ice zones.

  2. Power spectral estimation algorithms

    NASA Technical Reports Server (NTRS)

    Bhatia, Manjit S.

    1989-01-01

    Algorithms to estimate the power spectrum using Maximum Entropy Methods were developed. These algorithms were coded in FORTRAN 77 and were implemented on the VAX 780. The important considerations in this analysis are: (1) resolution, i.e., how close in frequency two spectral components can be spaced and still be identified; (2) dynamic range, i.e., how small a spectral peak can be, relative to the largest, and still be observed in the spectra; and (3) variance, i.e., how accurate the estimate of the spectra is to the actual spectra. The application of the algorithms based on Maximum Entropy Methods to a variety of data shows that these criteria are met quite well. Additional work in this direction would help confirm the findings. All of the software developed was turned over to the technical monitor. A copy of a typical program is included. Some of the actual data and graphs used on this data are also included.

  3. Efficient Controls for Finitely Convergent Sequential Algorithms

    PubMed Central

    Chen, Wei; Herman, Gabor T.

    2010-01-01

    Finding a feasible point that satisfies a set of constraints is a common task in scientific computing: examples are the linear feasibility problem and the convex feasibility problem. Finitely convergent sequential algorithms can be used for solving such problems; an example of such an algorithm is ART3, which is defined in such a way that its control is cyclic in the sense that during its execution it repeatedly cycles through the given constraints. Previously we found a variant of ART3 whose control is no longer cyclic, but which is still finitely convergent and in practice it usually converges faster than ART3 does. In this paper we propose a general methodology for automatic transformation of finitely convergent sequential algorithms in such a way that (i) finite convergence is retained and (ii) the speed of convergence is improved. The first of these two properties is proven by mathematical theorems, the second is illustrated by applying the algorithms to a practical problem. PMID:20953327
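
    For orientation, here is a generic sequential projection sketch in Python for the linear feasibility problem, with the cyclic control the paper describes; ART3 itself uses a more elaborate reflection scheme, and the paper's automatic control transformation is not reproduced here.

        import numpy as np

        def cyclic_feasibility(A, b, x0, tol=1e-9, max_sweeps=1000):
            # Cyclically project onto the half-spaces a_i . x <= b_i until a
            # full sweep makes no correction (i.e., x is feasible).
            x = np.asarray(x0, dtype=float).copy()
            for _ in range(max_sweeps):
                moved = False
                for a_i, b_i in zip(A, b):
                    viol = a_i @ x - b_i
                    if viol > tol:                     # outside this half-space
                        x -= viol * a_i / (a_i @ a_i)  # orthogonal projection back
                        moved = True
                if not moved:
                    return x
            return x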

  4. Programming parallel vision algorithms

    SciTech Connect

    Shapiro, L.G.

    1988-01-01

    Computer vision requires the processing of large volumes of data and requires parallel architectures and algorithms to be useful in real-time, industrial applications. The INSIGHT dataflow language was designed to allow encoding of vision algorithms at all levels of the computer vision paradigm. INSIGHT programs, which are relational in nature, can be translated into a graph structure that represents an architecture for solving a particular vision problem or a configuration of a reconfigurable computational network. The authors consider here INSIGHT programs that produce a parallel net architecture for solving low-, mid-, and high-level vision tasks.

  5. New Effective Multithreaded Matching Algorithms

    SciTech Connect

    Manne, Fredrik; Halappanavar, Mahantesh

    2014-05-19

    Matching is an important combinatorial problem with a number of applications in areas such as community detection, sparse linear algebra, and network alignment. Since computing optimal matchings can be very time consuming, several fast approximation algorithms, both sequential and parallel, have been suggested. Common to the algorithms giving the best solutions is that they tend to be sequential by nature, while algorithms more suitable for parallel computation give solutions of lower quality. We present a new simple 1/2-approximation algorithm for the weighted matching problem. This algorithm is both faster than any other suggested sequential 1/2-approximation algorithm on almost all inputs and also scales better than previous multithreaded algorithms. We further extend this to a general scalable multithreaded algorithm that computes matchings of weight comparable with the best sequential algorithms. The performance of the suggested algorithms is documented through extensive experiments on different multithreaded architectures.
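
    The classic sequential greedy method illustrates the 1/2-approximation idea (scan edges by decreasing weight, keep those with two free endpoints); the paper's contribution is a faster, scalable variant of this guarantee, which the Python sketch below does not reproduce.

        def greedy_matching(edges):
            # edges: iterable of (weight, u, v); returns a matching whose total
            # weight is at least half the optimum.
            matched = set()
            matching = []
            for w, u, v in sorted(edges, reverse=True):
                if u not in matched and v not in matched:
                    matching.append((u, v, w))
                    matched.update((u, v))
            return matching

        print(greedy_matching([(4, 'a', 'b'), (3, 'b', 'c'), (2, 'c', 'd')]))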

  6. Inclusive jet production using the kt algorithm

    SciTech Connect

    Norniella, Olga; /Barcelona, IFAE

    2006-05-01

    Results on inclusive jet production using the k_T algorithm in proton-antiproton collisions at √s = 1.96 TeV are presented, based on 1 fb^-1 of CDF Run II data. The measurements are carried out for jets with p_T^jet > 54 GeV/c in five different jet rapidity regions up to |y_jet| = 2.1. The measured cross sections are corrected to the hadron level and compared to next-to-leading order perturbative QCD predictions (NLO pQCD).

  7. Robotic Follow Algorithm

    2005-03-30

    The Robotic Follow Algorithm allows any robotic vehicle to follow a moving target while reactively choosing a route around nearby obstacles. The robotic follow behavior can be used with different camera systems and with thermal or visual tracking, as well as other tracking methods such as radio frequency tags.

  8. Data Structures and Algorithms.

    ERIC Educational Resources Information Center

    Wirth, Niklaus

    1984-01-01

    Built-in data structures are the registers and memory words where binary values are stored; hard-wired algorithms are the fixed rules, embodied in electronic logic circuits, by which stored data are interpreted as instructions to be executed. Various topics related to these two basic elements of every computer program are discussed. (JN)

  9. General cardinality genetic algorithms

    PubMed

    Koehler; Bhattacharyya; Vose

    1997-01-01

    A complete generalization of the Vose genetic algorithm model from the binary to higher cardinality case is provided. Boolean AND and EXCLUSIVE-OR operators are replaced by multiplication and addition over rings of integers. Walsh matrices are generalized with finite Fourier transforms for higher cardinality usage. Comparison of results to the binary case are provided. PMID:10021767

  10. The Lure of Algorithms

    ERIC Educational Resources Information Center

    Drake, Michael

    2011-01-01

    One debate that periodically arises in mathematics education is the issue of how to teach calculation more effectively. "Modern" approaches seem to initially favour mental calculation, informal methods, and the development of understanding before introducing written forms, while traditionalists tend to champion particular algorithms. The debate is…

  11. The Xmath Integration Algorithm

    ERIC Educational Resources Information Center

    Bringslid, Odd

    2009-01-01

    The projects Xmath (Bringslid and Canessa, 2002) and dMath (Bringslid, de la Villa and Rodriguez, 2007) were supported by the European Commission in the so called Minerva Action (Xmath) and The Leonardo da Vinci programme (dMath). The Xmath eBook (Bringslid, 2006) includes algorithms into a wide range of undergraduate mathematical issues embedded…

  12. A star pattern recognition algorithm for autonomous attitude determination

    NASA Technical Reports Server (NTRS)

    Van Bezooijen, R. W. H.

    1990-01-01

    The star-pattern recognition algorithm presented allows the advanced Full-sky Autonomous Star Tracker (FAST) device, such as the projected ASTROS II system of the Mariner Mark II planetary spacecraft, to reliably ascertain attitude about all three axes. An ASTROS II-based FAST, possessing an 11.5 x 11.5 deg field of view and 8-arcsec accuracy, can when integrated with an all-sky data base of 4100 guide stars determine its attitude in about 1 sec, with a success rate close to 100 percent. The present recognition algorithm can also be used for automating the acquisition of celestial targets by astronomy telescopes, autonomously updating the attitude of gyro-based attitude control systems, and automating ground-based attitude reconstruction.

  13. Genetic Algorithms and Local Search

    NASA Technical Reports Server (NTRS)

    Whitley, Darrell

    1996-01-01

    The first part of this presentation is a tutorial level introduction to the principles of genetic search and models of simple genetic algorithms. The second half covers the combination of genetic algorithms with local search methods to produce hybrid genetic algorithms. Hybrid algorithms can be modeled within the existing theoretical framework developed for simple genetic algorithms. An application of a hybrid to geometric model matching is given. The hybrid algorithm yields results that improve on the current state-of-the-art for this problem.
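
    A compact Python sketch of the hybrid idea: a standard GA whose offspring are refined by a local hill climb (Lamarckian style). Operators, rates, and the test function are illustrative defaults, not the presentation's geometric model-matching setup.

        import random

        def hill_climb(x, f, step=0.1, iters=50):
            # Simple perturbation-based local search used to refine offspring.
            fx = f(x)
            for _ in range(iters):
                y = [xi + random.uniform(-step, step) for xi in x]
                fy = f(y)
                if fy < fx:
                    x, fx = y, fy
            return x

        def hybrid_ga(f, dim=2, pop=20, gens=40, lo=-5.0, hi=5.0):
            P = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
            for _ in range(gens):
                P.sort(key=f)
                elite = P[:pop // 2]
                children = []
                while len(children) < pop - len(elite):
                    a, b = random.sample(elite, 2)
                    child = [random.choice(p) for p in zip(a, b)]  # uniform crossover
                    if random.random() < 0.2:                      # mutation
                        child[random.randrange(dim)] = random.uniform(lo, hi)
                    children.append(hill_climb(child, f))          # local refinement
                P = elite + children
            return min(P, key=f)

        print(hybrid_ga(lambda x: sum(xi ** 2 for xi in x)))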

  14. Reactive Collision Avoidance Algorithm

    NASA Technical Reports Server (NTRS)

    Scharf, Daniel; Acikmese, Behcet; Ploen, Scott; Hadaegh, Fred

    2010-01-01

    The reactive collision avoidance (RCA) algorithm allows a spacecraft to find a fuel-optimal trajectory for avoiding an arbitrary number of colliding spacecraft in real time while accounting for acceleration limits. In addition to spacecraft, the technology can be used for vehicles that can accelerate in any direction, such as helicopters and submersibles. In contrast to existing, passive algorithms that simultaneously design trajectories for a cluster of vehicles working to achieve a common goal, RCA is implemented onboard spacecraft only when an imminent collision is detected, and then plans a collision avoidance maneuver for only that host vehicle, thus preventing a collision in an off-nominal situation for which passive algorithms cannot. An example scenario for such a situation might be when a spacecraft in the cluster is approaching another one, but enters safe mode and begins to drift. Functionally, the RCA detects colliding spacecraft, plans an evasion trajectory by solving the Evasion Trajectory Problem (ETP), and then recovers after the collision is avoided. A direct optimization approach was used to develop the algorithm so it can run in real time. In this innovation, a parameterized class of avoidance trajectories is specified, and then the optimal trajectory is found by searching over the parameters. The class of trajectories is selected as bang-off-bang as motivated by optimal control theory. That is, an avoiding spacecraft first applies full acceleration in a constant direction, then coasts, and finally applies full acceleration to stop. The parameter optimization problem can be solved offline and stored as a look-up table of values. Using a look-up table allows the algorithm to run in real time. Given a colliding spacecraft, the properties of the collision geometry serve as indices of the look-up table that gives the optimal trajectory. For multiple colliding spacecraft, the set of trajectories that avoid all spacecraft is rapidly searched on

  15. Discovery of a phosphor for light emitting diode applications and its structural determination, Ba(Si,Al)5(O,N)8:Eu2+.

    PubMed

    Park, Woon Bae; Singh, Satendra Pal; Sohn, Kee-Sun

    2014-02-12

    Most of the novel phosphors that appear in the literature are either a variant of well-known materials or a hybrid material consisting of well-known materials. This situation has actually led to intellectual property (IP) complications in industry and several lawsuits have been the result. Therefore, the definition of a novel phosphor for use in light-emitting diodes should be clarified. A recent trend in phosphor-related IP applications has been to focus on the novel crystallographic structure, so that a slight composition variance and/or the hybrid of a well-known material would not qualify from either a scientific or an industrial point of view. In our previous studies, we employed a systematic materials discovery strategy combining heuristics optimization and a high-throughput process to secure the discovery of genuinely novel and brilliant phosphors that would be immediately ready for use in light emitting diodes. Despite such an achievement, this strategy requires further refinement to prove its versatility under any circumstance. To accomplish such demands, we improved our discovery strategy by incorporating an elitism-involved nondominated sorting genetic algorithm (NSGA-II) that would guarantee the discovery of truly novel phosphors in the present investigation. Using the improved discovery strategy, we discovered an Eu(2+)-doped AB5X8 (A = Sr or Ba, B = Si and Al, X = O and N) phosphor in an orthorhombic structure (A21am) with lattice parameters a = 9.48461(3) Å, b = 13.47194(6) Å, c = 5.77323(2) Å, α = β = γ = 90°, which cannot be found in any of the existing inorganic compound databases. PMID:24437942

  16. Multi-objective design optimization of the transverse gaseous jet in supersonic flows

    NASA Astrophysics Data System (ADS)

    Huang, Wei; Yang, Jun; Yan, Li

    2014-01-01

    The mixing process between the injectant and the supersonic crossflow is one of the important issues for the design of the scramjet engine, and efficient mixing has a great impact on the improvement of the combustion efficiency. A hovering vortex is formed between the separation region and the barrel shock wave, and this may be induced by the large negative density gradient. The separation region provides a good mixing area for the injectant and the subsonic boundary layer. In the current study, the transverse injection flow field with a freestream Mach number of 3.5 has been optimized by the non-dominated sorting genetic algorithm (NSGA II) coupled with the Kriging surrogate model; and the variance analysis method and the extreme difference analysis method have been employed to evaluate the values of the objective functions. The obtained results show that the jet-to-crossflow pressure ratio is the most important design variable for the transverse injection flow field, and the injectant molecular weight and the slot width should be considered for the mixing process between the injectant and the supersonic crossflow. There exists an optimal penetration height for the mixing efficiency, and its value is about 14.3 mm in the range considered in the current study. The larger penetration height provides a larger total pressure loss, and there must be a tradeoff between these two objective functions. In addition, this study demonstrates that the multi-objective design optimization method with the data mining technique can be used efficiently to explore the relationship between the design variables and the objective functions.

  17. Data bank homology search algorithm with linear computation complexity.

    PubMed

    Strelets, V B; Ptitsyn, A A; Milanesi, L; Lim, H A

    1994-06-01

    A new algorithm for data bank homology search is proposed. The principal advantages of the new algorithm are: (i) linear computation complexity; (ii) low memory requirements; and (iii) high sensitivity to the presence of local region homology. The algorithm first calculates indicative matrices of k-tuple 'realization' in the query sequence and then searches for an appropriate number of matching k-tuples within a narrow range in database sequences. It does not require k-tuple coordinates tabulation and in-memory placement for database sequences. The algorithm is implemented in a program for execution on PC-compatible computers and tested on PIR and GenBank databases with good results. A few modifications designed to improve the selectivity are also discussed. As an application example, the search for homology of the mouse homeotic protein HOX 3.1 is given. PMID:7922689
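
    A Python sketch of the k-tuple screening idea: the query's k-tuples are indexed once, and each database sequence is scanned in a single pass, giving linear time and low memory; the count threshold stands in for the paper's narrow-range matching criterion.

        def ktuple_scan(query, database, k=4, threshold=3):
            # Build the set of k-tuples realized in the query once, then count
            # matching k-tuples in each database sequence in a single pass.
            query_tuples = {query[i:i + k] for i in range(len(query) - k + 1)}
            hits = []
            for name, seq in database:
                count = sum(seq[i:i + k] in query_tuples
                            for i in range(len(seq) - k + 1))
                if count >= threshold:
                    hits.append((name, count))
            return hits

        print(ktuple_scan("ACGTACGTGACG",
                          [("seq1", "TTACGTACGTAA"), ("seq2", "GGGGGGGG")]))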

  18. CAVITY CONTROL ALGORITHM

    SciTech Connect

    Tomasz Plawski, J. Hovater

    2010-09-01

    A digital low level radio frequency (RF) system typically incorporates either a heterodyne or direct sampling technique, followed by fast ADCs, then an FPGA, and finally a transmitting DAC. This universal platform opens up the possibilities for a variety of control algorithm implementations. The foremost concern for an RF control system is cavity field stability, and to meet the required quality of regulation, the chosen control system needs to have sufficient feedback gain. In this paper we will investigate the effectiveness of the regulation for three basic control system algorithms: I&Q (In-phase and Quadrature), Amplitude & Phase and digital SEL (Self Exciting Loop) along with the example of the Jefferson Lab 12 GeV cavity field control system.

  19. Comparing barrier algorithms

    NASA Technical Reports Server (NTRS)

    Arenstorf, Norbert S.; Jordan, Harry F.

    1987-01-01

    A barrier is a method for synchronizing a large number of concurrent computer processes. After considering some basic synchronization mechanisms, a collection of barrier algorithms with either linear or logarithmic depth are presented. A graphical model is described that profiles the execution of the barriers and other parallel programming constructs. This model shows how the interaction between the barrier algorithms and the work that they synchronize can impact their performance. One result is that logarithmic tree structured barriers show good performance when synchronizing fixed length work, while linear self-scheduled barriers show better performance when synchronizing fixed length work with an imbedded critical section. The linear barriers are better able to exploit the process skew associated with critical sections. Timing experiments, performed on an eighteen processor Flex/32 shared memory multiprocessor, that support these conclusions are detailed.
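
    A linear, centralized sense-reversing barrier in Python illustrates the simplest of the compared constructs; tree-structured barriers replace the single shared counter with a logarithmic-depth tree of flags to reduce contention.

        import threading

        class SenseBarrier:
            # Every thread decrements one shared counter; the last arrival
            # flips the sense flag and releases the rest. The barrier is
            # reusable because each episode waits on the opposite sense.
            def __init__(self, n):
                self.n = n
                self.count = n
                self.sense = False
                self.cv = threading.Condition()

            def wait(self):
                with self.cv:
                    my_sense = not self.sense
                    self.count -= 1
                    if self.count == 0:        # last arrival: reset and release
                        self.count = self.n
                        self.sense = my_sense
                        self.cv.notify_all()
                    else:
                        while self.sense != my_sense:
                            self.cv.wait()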

  20. Algorithms, games, and evolution

    PubMed Central

    Chastain, Erick; Livnat, Adi; Papadimitriou, Christos; Vazirani, Umesh

    2014-01-01

    Even the most seasoned students of evolution, starting with Darwin himself, have occasionally expressed amazement that the mechanism of natural selection has produced the whole of Life as we see it around us. There is a computational way to articulate the same amazement: “What algorithm could possibly achieve all this in a mere three and a half billion years?” In this paper we propose an answer: We demonstrate that in the regime of weak selection, the standard equations of population genetics describing natural selection in the presence of sex become identical to those of a repeated game between genes played according to multiplicative weight updates (MWUA), an algorithm known in computer science to be surprisingly powerful and versatile. MWUA maximizes a tradeoff between cumulative performance and entropy, which suggests a new view on the maintenance of diversity in evolution. PMID:24979793
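
    A minimal multiplicative weights update in Python: each action's weight is scaled by (1 + eta * gain), so cumulative performance trades off against the entropy of the induced distribution; the gain function in the usage line is a toy assumption.

        def mwua(gain, n_actions, rounds, eta=0.1):
            # Multiplicative weights: play in proportion to the weights and
            # scale each weight by (1 + eta * gain); gain(i, t) lies in [-1, 1].
            w = [1.0] * n_actions
            for t in range(rounds):
                for i in range(n_actions):
                    w[i] *= 1.0 + eta * gain(i, t)
            total = sum(w)
            return [wi / total for wi in w]   # final mixed strategy

        # Toy usage: action 1 pays slightly more each round and ends up dominant.
        print(mwua(lambda i, t: 0.2 if i == 1 else 0.1, n_actions=3, rounds=500))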

  2. The Relegation Algorithm

    NASA Astrophysics Data System (ADS)

    Deprit, André; Palacián, Jesús; Deprit, Etienne

    2001-03-01

    The relegation algorithm extends the method of normalization by Lie transformations. Given a Hamiltonian that is a power series ℋ = ℋ0 + εℋ1 + … of a small parameter ε, normalization constructs a map which converts the principal part ℋ0 into an integral of the transformed system; relegation does the same for an arbitrary function ℋ[G]. If the Lie derivative induced by ℋ[G] is semi-simple, a double recursion produces the generator of the relegating transformation. The relegation algorithm is illustrated with an elementary example borrowed from galactic dynamics; the exercise serves as a standard against which to test software implementations. Relegation is also applied to the more substantial example of a Keplerian system perturbed by radiation pressure emanating from a rotating source.

  3. Genetic Algorithm for Optimization: Preprocessor and Algorithm

    NASA Technical Reports Server (NTRS)

    Sen, S. K.; Shaykhian, Gholam A.

    2006-01-01

    A genetic algorithm (GA), inspired by Darwin's theory of evolution, uses an evolutionary process to solve optimization problems, unconstrained or constrained. A GA has several parameters, such as the population size, search space, crossover and mutation probabilities, and fitness criterion. These parameters are not universally known or determined a priori for all problems; depending on the problem at hand, they must be chosen so that the resulting GA performs best. We present here a preprocessor that achieves just that: it determines, for a specified problem, the foregoing parameters so that the consequent GA is the best for the problem. We also stress the need for such a preprocessor, both for the quality (error) and for the cost (complexity) of producing the solution. The preprocessor includes, as its first step, making use of all available information, such as the nature and character of the function or system, the search space, physical or laboratory experimentation (if already done or available), and the physical environment. It also includes information that can be generated through any means, deterministic, nondeterministic, or graphical. Instead of attempting a solution straightaway through a GA without using knowledge of the character of the system, one can do a much better job of producing a solution by using the information generated in this first step. We therefore advocate the use of such a preprocessor for solving real-world optimization problems, including NP-complete ones, before applying the statistically most appropriate GA. We also include such a GA for unconstrained function optimization problems.
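
    For concreteness, the sketch below is a minimal real-coded GA for unconstrained minimization with the parameters such a preprocessor would tune (population size, crossover and mutation probabilities) exposed as arguments. It is an illustrative sketch only, not the authors' preprocessor or GA.

    ```python
    import random

    def ga_minimize(f, bounds, pop_size=40, p_cross=0.8, p_mut=0.1,
                    generations=200, seed=0):
        rng = random.Random(seed)
        lo, hi = bounds
        pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
        for _ in range(generations):
            parents = sorted(pop, key=f)[:pop_size // 2]   # truncation selection
            children = []
            while len(children) < pop_size:
                a, b = rng.sample(parents, 2)
                child = (a + b) / 2 if rng.random() < p_cross else a  # blend crossover
                if rng.random() < p_mut:
                    child += rng.gauss(0.0, 0.1 * (hi - lo))          # Gaussian mutation
                children.append(min(max(child, lo), hi))
            pop = children
        return min(pop, key=f)

    print(ga_minimize(lambda x: (x - 3.0) ** 2, bounds=(-10.0, 10.0)))  # ~3.0
    ```

    Re-running this with different (pop_size, p_cross, p_mut) settings on a characterized test function is exactly the kind of experiment such a preprocessor automates.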

  4. An efficient algorithm for function optimization: modified stem cells algorithm

    NASA Astrophysics Data System (ADS)

    Taherdangkoo, Mohammad; Paziresh, Mahsa; Yazdi, Mehran; Bagheri, Mohammad

    2013-03-01

    In this paper, we propose an optimization algorithm based on the intelligent behavior of stem cell swarms in reproduction and self-organization. Optimization algorithms, such as the Genetic Algorithm (GA), Particle Swarm Optimization (PSO) algorithm, Ant Colony Optimization (ACO) algorithm and Artificial Bee Colony (ABC) algorithm, can give near-optimal solutions to linear and non-linear problems for many applications; however, in some cases they can become trapped in local optima. The Stem Cells Algorithm (SCA) is an optimization algorithm inspired by the natural behavior of stem cells in evolving themselves into new and improved cells. The SCA avoids the local optima problem successfully. In this paper, we have made small changes in the implementation of this algorithm to obtain improved performance over previous versions. Using a series of benchmark functions, we assess the performance of the proposed algorithm and compare it with that of the other aforementioned optimization algorithms. The obtained results demonstrate the superiority of the Modified Stem Cells Algorithm (MSCA).

  5. Algorithm Visualization System for Teaching Spatial Data Algorithms

    ERIC Educational Resources Information Center

    Nikander, Jussi; Helminen, Juha; Korhonen, Ari

    2010-01-01

    TRAKLA2 is a web-based learning environment for data structures and algorithms. The system delivers automatically assessed algorithm simulation exercises that are solved using a graphical user interface. In this work, we introduce a novel learning environment for spatial data algorithms, SDA-TRAKLA2, which has been implemented on top of the…

  6. SPA: Solar Position Algorithm

    NASA Astrophysics Data System (ADS)

    Reda, Ibrahim; Andreas, Afshin

    2015-04-01

    The Solar Position Algorithm (SPA) calculates the solar zenith and azimuth angles in the period from the year -2000 to 6000, with uncertainties of +/- 0.0003 degrees based on the date, time, and location on Earth. SPA is implemented in C; in addition to being available for download, an online calculator using this code is available at http://www.nrel.gov/midc/solpos/spa.html.
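
    For contrast with SPA's high-accuracy series, the sketch below computes the solar zenith angle from textbook approximations for declination and hour angle; it is accurate only to roughly a degree and is meant to illustrate the inputs involved, not to reproduce SPA.

    ```python
    import math

    def solar_zenith_deg(day_of_year, solar_hour, latitude_deg):
        # approximate solar declination (degrees)
        decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
        hour_angle = 15.0 * (solar_hour - 12.0)       # degrees from local solar noon
        lat, dec, ha = map(math.radians, (latitude_deg, decl, hour_angle))
        cos_zen = (math.sin(lat) * math.sin(dec)
                   + math.cos(lat) * math.cos(dec) * math.cos(ha))
        return math.degrees(math.acos(max(-1.0, min(1.0, cos_zen))))

    # solar noon at 40 N on the June solstice (~day 172): zenith ~ 16.6 degrees
    print(solar_zenith_deg(172, 12.0, 40.0))
    ```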

  7. Quantum defragmentation algorithm

    SciTech Connect

    Burgarth, Daniel; Giovannetti, Vittorio

    2010-08-15

    In this addendum to our paper [D. Burgarth and V. Giovannetti, Phys. Rev. Lett. 99, 100501 (2007)] we prove that during the transformation that allows one to enforce control by relaxation on a quantum system, the ancillary memory can be kept at a finite size, independently from the fidelity one wants to achieve. The result is obtained by introducing the quantum analog of defragmentation algorithms which are employed for efficiently reorganizing classical information in conventional hard disks.

  8. NOSS altimeter algorithm specifications

    NASA Technical Reports Server (NTRS)

    Hancock, D. W.; Forsythe, R. G.; Mcmillan, J. D.

    1982-01-01

    A description of all algorithms required for altimeter processing is given. Each description includes title, description, inputs/outputs, general algebraic sequences and data volume. All required input/output data files are described and the computer resources required for the entire altimeter processing system were estimated. The majority of the data processing requirements for any radar altimeter of the Seasat-1 type are scoped. Additions and deletions could be made for the specific altimeter products required by other projects.

  9. Sarsat location algorithms

    NASA Astrophysics Data System (ADS)

    Nardi, Jerry

    The Satellite Aided Search and Rescue (Sarsat) is designed to detect and locate distress beacons using satellite receivers. Algorithms used for calculating the positions of 406 MHz beacons and 121.5/243 MHz beacons are presented. The techniques for matching, resolving and averaging calculated locations from multiple satellite passes are also described along with results pertaining to single pass and multiple pass location estimate accuracy.

  10. A semisimultaneous inversion algorithm for SAGE III

    NASA Astrophysics Data System (ADS)

    Ward, Dale M.

    2002-12-01

    The Stratospheric Aerosol and Gas Experiment (SAGE) III instrument was successfully launched into orbit on 10 December 2001. The planned operational species separation inversion algorithm will utilize a stepwise retrieval strategy. This paper presents an alternative, semisimultaneous species separation inversion that simultaneously retrieves all species over user-specified vertical intervals or blocks. By overlapping these vertical blocks, retrieved species profiles over the entire vertical range of the measurements are obtained. The semisimultaneous retrieval approach provides a more straightforward method for evaluating the error coupling that occurs among the retrieved profiles due to various types of input uncertainty. Simulation results are presented to show how the semisimultaneous inversion can enhance understanding of the SAGE III retrieval process. In the future, the semisimultaneous inversion algorithm will be used to help evaluate the results and performance of the operational inversion. Compared to SAGE II, SAGE III will provide expanded and more precise spectral measurements. This alone is shown to significantly reduce the uncertainties in the retrieved ozone, nitrogen dioxide, and aerosol extinction profiles for SAGE III. Additionally, the well-documented concern that SAGE II retrievals are biased by the level of volcanic aerosol is greatly alleviated for SAGE III.

  11. Algorithms for builder guidelines

    SciTech Connect

    Balcomb, J.D.; Lekov, A.B.

    1989-06-01

    The Builder Guidelines are designed to make simple, appropriate guidelines available to builders for their specific localities. Builders may select from passive solar and conservation strategies with different performance potentials. They can then compare the calculated results for their particular house design with a typical house in the same location. Algorithms used to develop the Builder Guidelines are described. The main algorithms used are the monthly solar load ratio (SLR) method for winter heating, the diurnal heat capacity (DHC) method for temperature swing, and a new simplified calculation method (McCool) for summer cooling. This paper applies the algorithms to estimate the performance potential of passive solar strategies, and the annual heating and cooling loads of various combinations of conservation and passive solar strategies. The basis of the McCool method is described. All three methods are implemented in a microcomputer program used to generate the guideline numbers. Guidelines for Denver, Colorado, are used to illustrate the results. The structure of the guidelines and worksheet booklets is also presented. 5 refs., 3 tabs.

  12. RADFLO physics and algorithms

    SciTech Connect

    Symbalisty, E.M.D.; Zinn, J.; Whitaker, R.W.

    1995-09-01

    This paper describes the history, physics, and algorithms of the computer code RADFLO and its extension HYCHEM. RADFLO is a one-dimensional, radiation-transport hydrodynamics code that is used to compute early-time fireball behavior for low-altitude nuclear bursts. The primary use of the code is the prediction of optical signals produced by nuclear explosions. It has also been used to predict thermal and hydrodynamic effects that are used for vulnerability and lethality applications. Another closely related code, HYCHEM, is an extension of RADFLO which includes the effects of nonequilibrium chemistry. Some examples of numerical results will be shown, along with scaling expressions derived from those results. We describe new computations of the structures and luminosities of steady-state shock waves and radiative thermal waves, which have been extended to cover a range of ambient air densities for high-altitude applications. We also describe recent modifications of the codes to use a one-dimensional analog of the CAVEAT fluid-dynamics algorithm in place of the former standard Richtmyer-von Neumann algorithm.

  13. Large scale tracking algorithms.

    SciTech Connect

    Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low-resolution sensors, "blob" tracking is the norm. For higher-resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

  14. Algorithm for navigated ESS.

    PubMed

    Baudoin, T; Grgić, M V; Zadravec, D; Geber, G; Tomljenović, D; Kalogjera, L

    2013-12-01

    ENT navigation has given new opportunities in performing Endoscopic Sinus Surgery (ESS) and improving the surgical outcome of patients' treatment. ESS assisted by a navigation system could be called Navigated Endoscopic Sinus Surgery (NESS). As it is generally accepted that NESS should be performed only in cases of complex anatomy and pathology, it has not yet been established as a state-of-the-art procedure and thus is not used on a daily basis. This paper presents an algorithm for use of a navigation system for basic ESS in the treatment of chronic rhinosinusitis (CRS). The algorithm includes five units that should be highlighted using a navigation system. They are as follows: 1) the nasal vestibule unit, 2) the OMC unit, 3) the anterior ethmoid unit, 4) the posterior ethmoid unit, and 5) the sphenoid unit. Each unit has the shape of a triangular pyramid and consists of at least four reference points or landmarks. As many landmarks as possible should be marked when determining one of the five units. Navigated orientation in each unit should always precede any surgical intervention. The algorithm should improve the learning curve of trainees and enable surgeons to use the navigation system routinely and systematically. PMID:24260766

  15. Developing dataflow algorithms

    SciTech Connect

    Hiromoto, R.E.; Bohm, A.P.W. (Dept. of Computer Science)

    1991-01-01

    Our goal is to study the performance of a collection of numerical algorithms written in Id which is available to users of Motorola's dataflow machine Monsoon. We will study the dataflow performance of these implementations first under the parallel profiling simulator Id World, and second in comparison with actual dataflow execution on the Motorola Monsoon. This approach will allow us to follow the computational and structural details of the parallel algorithms as implemented on dataflow systems. When running our programs on the Id World simulator we will examine the behaviour of algorithms at the dataflow graph level, where each instruction takes one timestep and data becomes available at the next. This implies that important machine-level phenomena, such as the effect that global communication time may have on the computation, are not addressed. These phenomena will be addressed when we run our programs on the Monsoon hardware. Potential ramifications for compilation techniques, functional programming style, and program efficiency are significant to this study. In a later stage of our research we will compare the efficiency of Id programs to programs written in other languages. This comparison will be of a rather qualitative nature, as there are too many degrees of freedom in a language implementation for a quantitative comparison to be of interest. We begin our study by examining one routine that exhibits distinctive computational characteristics: the Fast Fourier Transform, with computational parallelism and data dependences between the butterfly shuffles.

  16. Evaluating super resolution algorithms

    NASA Astrophysics Data System (ADS)

    Kim, Youn Jin; Park, Jong Hyun; Shin, Gun Shik; Lee, Hyun-Seung; Kim, Dong-Hyun; Park, Se Hyeok; Kim, Jaehyun

    2011-01-01

    This study intends to establish a sound testing and evaluation methodology, based upon human visual characteristics, for appreciating image restoration accuracy, in addition to comparing the subjective results with predictions by some objective evaluation methods. In total, six different super resolution (SR) algorithms were selected: iterative back-projection (IBP), robust SR, maximum a posteriori (MAP), projections onto convex sets (POCS), non-uniform interpolation, and a frequency-domain approach. The performance comparison between the SR algorithms in terms of their restoration accuracy was carried out both subjectively and objectively. The former methodology relies upon the paired-comparison method, which involves the simultaneous scaling of two stimuli with respect to image restoration accuracy. For the latter, both conventional image quality metrics and color difference methods were implemented. Consequently, POCS and non-uniform interpolation outperformed the others in an ideal situation, while restoration-based methods appear more accurate to the HR image in a real-world case where prior information about the blur kernel remains unknown. However, the noise-added image could not be restored successfully by any of those methods. The latest International Commission on Illumination (CIE) standard color difference equation, CIEDE2000, was found to predict the subjective results accurately and outperformed conventional methods for evaluating the restoration accuracy of those SR algorithms.

  17. JPSS CGS Tools For Rapid Algorithm Updates

    NASA Astrophysics Data System (ADS)

    Smith, D. C.; Grant, K. D.

    2011-12-01

    The National Oceanic and Atmospheric Administration (NOAA) and National Aeronautics and Space Administration (NASA) are jointly acquiring the next-generation civilian weather and environmental satellite system: the Joint Polar Satellite System (JPSS). JPSS will contribute the afternoon orbit component and ground processing system of the restructured National Polar-orbiting Operational Environmental Satellite System (NPOESS). As such, JPSS replaces the current Polar-orbiting Operational Environmental Satellites (POES) managed by NOAA and the ground processing component of both POES and the Defense Meteorological Satellite Program (DMSP) replacement known as the Defense Weather Satellite System (DWSS), managed by the Department of Defense (DoD). The JPSS satellites will carry a suite of sensors designed to collect meteorological, oceanographic, climatological, and solar-geophysical observations of the earth, atmosphere, and space. The ground processing system for JPSS is known as the JPSS Common Ground System (JPSS CGS), and consists of a Command, Control, and Communications Segment (C3S) and the Interface Data Processing Segment (IDPS). Both are developed by Raytheon Intelligence and Information Systems (IIS). The Interface Data Processing Segment will process NPOESS Preparatory Project, Joint Polar Satellite System and Defense Weather Satellite System satellite data to provide environmental data products to NOAA and DoD processing centers operated by the United States government. Under NPOESS, Northrop Grumman Aerospace Systems Algorithms and Data Products (A&DP) organization was responsible for the algorithms that produce the EDRs, including their quality aspects. For JPSS, that responsibility has transferred to NOAA's Center for Satellite Applications & Research (STAR). As the Calibration and Validation (Cal/Val) activities move forward following both the NPP launch and subsequent JPSS and DWSS launches, rapid algorithm updates may be required. Raytheon and

  18. A software tool for graphically assembling damage identification algorithms

    NASA Astrophysics Data System (ADS)

    Allen, David W.; Clough, Joshua A.; Sohn, Hoon; Farrar, Charles R.

    2003-08-01

    At Los Alamos National Laboratory (LANL), various algorithms for structural health monitoring problems have been explored in the last 5 to 6 years. The original DIAMOND (Damage Identification And MOdal aNalysis of Data) software was developed as a package of modal analysis tools with some frequency domain damage identification algorithms included. Since the conception of DIAMOND, the Structural Health Monitoring (SHM) paradigm at LANL has been cast in the framework of statistical pattern recognition, promoting data-driven damage detection approaches. To reflect this shift and to allow user-friendly analyses of data, a new piece of software, DIAMOND II, is under development. The Graphical User Interface (GUI) of the DIAMOND II software is based on the idea of GLASS (Graphical Linking and Assembly of Syntax Structure) technology, which is currently being implemented at LANL. GLASS is a Java-based GUI that allows drag-and-drop construction of algorithms from various categories of existing functions. Within the underlying GLASS technology, DIAMOND II is simply a module specifically targeting damage identification applications. Users can assemble various routines, building their own algorithms or benchmark-testing different damage identification approaches without writing a single line of code.

  19. An algorithm for constructing polynomial systems whose solution space characterizes quantum circuits

    NASA Astrophysics Data System (ADS)

    Gerdt, Vladimir P.; Severyanov, Vasily M.

    2006-05-01

    An algorithm and its first implementation in C# are presented for assembling arbitrary quantum circuits based on Hadamard and Toffoli gates and for constructing the multivariate polynomial systems over the finite field Z_2 that arise when applying Feynman's sum-over-paths approach to quantum circuits. The matrix elements determined by a circuit can be computed by counting the number of common roots in Z_2 of the polynomial system associated with the circuit. To determine the number of solutions in Z_2 of the output polynomial system, one can use the Gröbner bases method and the relevant algorithms for computing Gröbner bases.
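
    For very small systems, the root count can be obtained by brute force rather than Gröbner bases, which makes the counting step concrete. The two-equation system below is a hypothetical example over Z_2, not one generated from an actual circuit.

    ```python
    from itertools import product

    def count_roots_gf2(polys, n_vars):
        """Count assignments in {0,1}^n_vars on which every polynomial vanishes mod 2."""
        return sum(all(p(x) == 0 for p in polys)
                   for x in product((0, 1), repeat=n_vars))

    # example system over Z_2:  x0*x1 + x2 = 0  and  x0 + x1 = 0
    system = [lambda x: (x[0] * x[1] + x[2]) % 2,
              lambda x: (x[0] + x[1]) % 2]
    print(count_roots_gf2(system, 3))   # 2 common roots: (0,0,0) and (1,1,1)
    ```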

  20. Design of robust systolic algorithms

    SciTech Connect

    Varman, P.J.; Fussell, D.S.

    1983-01-01

    A primary reason for the susceptibility of systolic algorithms to faults is their strong dependence on the interconnection between the processors in a systolic array. A technique to transform any linear systolic algorithm into an equivalent pipelined algorithm that executes on arbitrary trees is presented. 5 references.

  1. High-performance combinatorial algorithms

    SciTech Connect

    Pinar, Ali

    2003-10-31

    Combinatorial algorithms have long played an important role in many applications of scientific computing such as sparse matrix computations and parallel computing. The growing importance of combinatorial algorithms in emerging applications like computational biology and scientific data mining calls for development of a high performance library for combinatorial algorithms. Building such a library requires a new structure for combinatorial algorithms research that enables fast implementation of new algorithms. We propose a structure for combinatorial algorithms research that mimics the research structure of numerical algorithms. Numerical algorithms research is nicely complemented with high performance libraries, and this can be attributed to the fact that there are only a small number of fundamental problems that underlie numerical solvers. Furthermore there are only a handful of kernels that enable implementation of algorithms for these fundamental problems. Building a similar structure for combinatorial algorithms will enable efficient implementations for existing algorithms and fast implementation of new algorithms. Our results will promote utilization of combinatorial techniques and will impact research in many scientific computing applications, some of which are listed.

  2. Multipartite entanglement in quantum algorithms

    SciTech Connect

    Bruss, D.; Macchiavello, C.

    2011-05-15

    We investigate the entanglement features of the quantum states employed in quantum algorithms. In particular, we analyze the multipartite entanglement properties in the Deutsch-Jozsa, Grover, and Simon algorithms. Our results show that for these algorithms most instances involve multipartite entanglement.

  3. Algorithm for Constructing Contour Plots

    NASA Technical Reports Server (NTRS)

    Johnson, W.; Silva, F.

    1984-01-01

    A general computer algorithm was developed for the construction of contour plots. The algorithm accepts as input data values at a set of points irregularly distributed over a plane. The algorithm is based on an interpolation scheme: the points in the plane are connected by straight-line segments to form a set of triangles. The program is written in FORTRAN IV.

  4. Polynomial Algorithms for Item Matching.

    ERIC Educational Resources Information Center

    Armstrong, Ronald D.; Jones, Douglas H.

    1992-01-01

    Polynomial algorithms are presented that are used to solve selected problems in test theory, and computational results from sample problems with several hundred decision variables are provided that demonstrate the benefits of these algorithms. The algorithms are based on optimization theory in networks (graphs). (SLD)

  5. Verifying a Computer Algorithm Mathematically.

    ERIC Educational Resources Information Center

    Olson, Alton T.

    1986-01-01

    Presents an example of mathematics from an algorithmic point of view, with emphasis on the design and verification of this algorithm. The program involves finding roots for algebraic equations using the half-interval search algorithm. The program listing is included. (JN)
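
    The half-interval search at the heart of the program is ordinary bisection: keep an interval whose endpoints give opposite signs and halve it until it is small enough. A minimal sketch:

    ```python
    def bisect_root(f, lo, hi, tol=1e-10):
        """Half-interval search for a root of f; [lo, hi] must bracket a sign change."""
        assert f(lo) * f(hi) < 0, "endpoints must bracket a root"
        while hi - lo > tol:
            mid = (lo + hi) / 2.0
            if f(lo) * f(mid) <= 0:   # sign change in the left half
                hi = mid
            else:                     # sign change in the right half
                lo = mid
        return (lo + hi) / 2.0

    print(bisect_root(lambda x: x ** 2 - 2.0, 0.0, 2.0))   # ~1.41421356 (sqrt 2)
    ```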

  6. YAMPA: Yet Another Matching Pursuit Algorithm for compressive sensing

    NASA Astrophysics Data System (ADS)

    Lodhi, Muhammad A.; Voronin, Sergey; Bajwa, Waheed U.

    2016-05-01

    State-of-the-art sparse recovery methods often rely on the restricted isometry property for their theoretical guarantees. However, they cannot explicitly incorporate metrics such as restricted isometry constants within their recovery procedures due to the computational intractability of calculating such metrics. This paper formulates an iterative algorithm, termed yet another matching pursuit algorithm (YAMPA), for recovery of sparse signals from compressive measurements. YAMPA differs from other pursuit algorithms in that: (i) it adapts to the measurement matrix using a threshold that is explicitly dependent on two computable coherence metrics of the matrix, and (ii) it does not require knowledge of the signal sparsity. Performance comparisons of YAMPA against other matching pursuit and approximate message passing algorithms are made for several types of measurement matrices. These results show that while state-of-the-art approximate message passing algorithms outperform other algorithms (including YAMPA) in the case of well-conditioned random matrices, they completely break down in the case of ill-conditioned measurement matrices. On the other hand, YAMPA and comparable pursuit algorithms not only result in reasonable performance for well-conditioned matrices, but their performance also degrades gracefully for ill-conditioned matrices. The paper also shows that YAMPA uniformly outperforms other pursuit algorithms for the case of thresholding parameters chosen in a clairvoyant fashion. Further, when combined with a simple and fast technique for selecting thresholding parameters in the case of ill-conditioned matrices, YAMPA outperforms other pursuit algorithms in the regime of low undersampling, although some of these algorithms can outperform YAMPA in the regime of high undersampling in this setting.
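
    The sketch below shows the generic pursuit loop this family of algorithms is built around, in the form of plain orthogonal matching pursuit; it does not reproduce YAMPA's coherence-based thresholding, and it assumes the sparsity level is given, which is precisely what YAMPA avoids.

    ```python
    import numpy as np

    def omp(A, y, n_iter):
        residual, support = y.copy(), []
        for _ in range(n_iter):
            j = int(np.argmax(np.abs(A.T @ residual)))   # column most correlated with residual
            if j not in support:
                support.append(j)
            x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ x_s           # re-fit on the current support
        x = np.zeros(A.shape[1])
        x[support] = x_s
        return x

    rng = np.random.default_rng(1)
    A = rng.standard_normal((30, 60))
    A /= np.linalg.norm(A, axis=0)                        # unit-norm columns
    x_true = np.zeros(60); x_true[[5, 17, 42]] = [1.0, -2.0, 0.5]
    print(np.round(omp(A, A @ x_true, 3)[[5, 17, 42]], 2))  # recovers [1., -2., 0.5]
    ```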

  7. Improved multiprocessor garbage collection algorithms

    SciTech Connect

    Newman, I.A.; Stallard, R.P.; Woodward, M.C.

    1983-01-01

    Outlines the results of an investigation of existing multiprocessor garbage collection algorithms and introduces two new algorithms which significantly improve some aspects of the performance of their predecessors. The two algorithms arise from different starting assumptions. One considers the case where the algorithm will terminate successfully whatever list structure is being processed and assumes that the extra data space should be minimised. The other seeks a very fast garbage collection time for list structures that do not contain loops. Results of both theoretical and experimental investigations are given to demonstrate the efficacy of the algorithms. 7 references.

  8. Obstacle Detection Algorithms for Aircraft Navigation: Performance Characterization of Obstacle Detection Algorithms for Aircraft Navigation

    NASA Technical Reports Server (NTRS)

    Kasturi, Rangachar; Camps, Octavia; Coraor, Lee

    2000-01-01

    The research reported here is a part of NASA's Synthetic Vision System (SVS) project for the development of a High Speed Civil Transport Aircraft (HSCT). One of the components of the SVS is a module for detection of potential obstacles in the aircraft's flight path by analyzing the images captured by an on-board camera in real-time. Design of such a module includes the selection and characterization of robust, reliable, and fast techniques and their implementation for execution in real-time. This report describes the results of our research in realizing such a design. It is organized into three parts. Part I. Data modeling and camera characterization; Part II. Algorithms for detecting airborne obstacles; and Part III. Real time implementation of obstacle detection algorithms on the Datacube MaxPCI architecture. A list of publications resulting from this grant as well as a list of relevant publications resulting from prior NASA grants on this topic are presented.

  9. Efficient multicomponent fuel algorithm

    NASA Astrophysics Data System (ADS)

    Torres, D. J.; O'Rourke, P. J.; Amsden, A. A.

    2003-03-01

    We derive equations for multicomponent fuel evaporation in airborne fuel droplets and wall films, and implement the model into KIVA-3V. Temporal and spatial variations in liquid droplet composition and temperature are not modelled but solved for by discretizing the interior of the droplet in an implicit and computationally efficient way. We find that an interior discretization is necessary to correctly compute the evolution of the droplet composition. The details of the one-dimensional numerical algorithm are described. Numerical simulations of multicomponent evaporation are performed for single droplets and compared to experimental data.

  10. A new minimax algorithm

    NASA Technical Reports Server (NTRS)

    Vardi, A.

    1984-01-01

    The representation min t s.t. f_i(x) − t ≤ 0 for all i is examined. An active set strategy is designed that partitions the functions into three sets: active, semi-active, and non-active. This technique helps prevent the zigzagging that often occurs when an active set strategy is used. Some of the inequality constraints are handled with slack variables. A trust region strategy is also used, in which at each iteration there is a sphere around the current point within which the local approximation of the function is trusted. The algorithm is implemented in a computer program. Numerical results are provided.

  11. Join-Graph Propagation Algorithms

    PubMed Central

    Mateescu, Robert; Kask, Kalev; Gogate, Vibhav; Dechter, Rina

    2010-01-01

    The paper investigates parameterized approximate message-passing schemes that are based on bounded inference and are inspired by Pearl's belief propagation algorithm (BP). We start with the bounded inference mini-clustering algorithm and then move to the iterative scheme called Iterative Join-Graph Propagation (IJGP), that combines both iteration and bounded inference. Algorithm IJGP belongs to the class of Generalized Belief Propagation algorithms, a framework that allowed connections with approximate algorithms from statistical physics and is shown empirically to surpass the performance of mini-clustering and belief propagation, as well as a number of other state-of-the-art algorithms on several classes of networks. We also provide insight into the accuracy of iterative BP and IJGP by relating these algorithms to well known classes of constraint propagation schemes. PMID:20740057

  12. Constructive neural network learning algorithms

    SciTech Connect

    Parekh, R.; Yang, Jihoon; Honavar, V.

    1996-12-31

    Constructive algorithms offer an approach for incremental construction of potentially minimal neural network architectures for pattern classification tasks. These algorithms obviate the need for an ad hoc, a priori choice of the network topology. The constructive algorithm design involves alternately augmenting the existing network topology by adding one or more threshold logic units and training the newly added threshold neuron(s) using a stable variant of the perceptron learning algorithm (e.g., the pocket algorithm, the thermal perceptron, or the barycentric correction procedure). Several constructive algorithms, including tower, pyramid, tiling, upstart, and perceptron cascade, have been proposed for 2-category pattern classification. These algorithms differ in terms of their topological and connectivity constraints as well as the training strategies used for individual neurons.
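
    The pocket algorithm mentioned above is the simplest of these stable trainers: run ordinary perceptron updates, but keep ("pocket") the best weight vector seen so far. A minimal sketch, with a toy AND problem as illustration:

    ```python
    import numpy as np

    def pocket_perceptron(X, y, epochs=50, seed=0):
        rng = np.random.default_rng(seed)
        w = np.zeros(X.shape[1])
        pocket, best_acc = w.copy(), 0.0
        for _ in range(epochs):
            for i in rng.permutation(len(X)):
                if y[i] * (X[i] @ w) <= 0:     # misclassified: perceptron update
                    w = w + y[i] * X[i]
            acc = float(np.mean(np.sign(X @ w) == y))
            if acc > best_acc:                 # pocket the best weights seen so far
                pocket, best_acc = w.copy(), acc
        return pocket, best_acc

    X = np.array([[1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1.]])  # bias + two inputs
    y = np.array([-1, -1, -1, 1])                                # the AND function
    w, acc = pocket_perceptron(X, y)
    print(w, acc)   # separable data, so accuracy reaches 1.0
    ```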

  13. Online Planning Algorithm

    NASA Technical Reports Server (NTRS)

    Rabideau, Gregg R.; Chien, Steve A.

    2010-01-01

    AVA v2 software selects goals for execution from a set of goals that oversubscribe shared resources. The term goal refers to a science or engineering request to execute a possibly complex command sequence, such as image targets or ground-station downlinks. Developed as an extension to the Virtual Machine Language (VML) execution system, the software enables onboard and remote goal triggering through the use of an embedded, dynamic goal set that can oversubscribe resources. From the set of conflicting goals, a subset must be chosen that maximizes a given quality metric, which in this case is strict priority selection. A goal can never be pre-empted by a lower-priority goal; high-level goals can be added, removed, or updated at any time, and the "best" goals will be selected for execution. The software addresses the issue of re-planning that must be performed in a short time frame by the embedded system, where computational resources are constrained. In particular, the algorithm addresses problems with well-defined goal requests without temporal flexibility that oversubscribe available resources. By using a fast, incremental algorithm, goal selection can be postponed in a "just-in-time" fashion, allowing requests to be changed or added at the last minute and thereby enabling shorter response times and greater autonomy for the system under control.
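
    Stripped of the resource model details, strict priority selection reduces to a greedy pass over the goals in priority order, admitting each one that still fits. The field names and the single-capacity resource model below are illustrative assumptions, not the AVA/VML implementation.

    ```python
    def select_goals(goals, capacity):
        """goals: (priority, name, resource_use) tuples; lower priority number wins."""
        chosen, used = [], 0.0
        for _prio, name, need in sorted(goals):   # strict priority order
            if used + need <= capacity:           # a lower-priority goal never pre-empts
                chosen.append(name)
                used += need
        return chosen

    goals = [(1, "downlink", 40.0), (2, "image_A", 50.0), (3, "image_B", 30.0)]
    print(select_goals(goals, capacity=100.0))    # ['downlink', 'image_A']
    ```

    Because the pass is cheap, it can simply be re-run whenever a goal is added, removed, or updated, which is the "just-in-time" behavior described above.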

  14. Contour Error Map Algorithm

    NASA Technical Reports Server (NTRS)

    Merceret, Francis; Lane, John; Immer, Christopher; Case, Jonathan; Manobianco, John

    2005-01-01

    The contour error map (CEM) algorithm and the software that implements the algorithm are means of quantifying correlations between sets of time-varying data that are binarized and registered on spatial grids. The present version of the software is intended for use in evaluating numerical weather forecasts against observational sea-breeze data. In cases in which observational data come from off-grid stations, it is necessary to preprocess the observational data to transform them into gridded data. First, the wind direction is gridded and binarized so that D(i,j;n) is the input to CEM based on forecast data and d(i,j;n) is the input to CEM based on gridded observational data. Here, i and j are spatial indices representing 1.25-km intervals along the west-to-east and south-to-north directions, respectively; and n is a time index representing 5-minute intervals. A binary value of D or d = 0 corresponds to an offshore wind, whereas a value of D or d = 1 corresponds to an onshore wind. CEM includes two notable subalgorithms: One identifies and verifies sea-breeze boundaries; the other, which can be invoked optionally, performs an image-erosion function for the purpose of attempting to eliminate river-breeze contributions in the wind fields.

  15. NEKF IMM tracking algorithm

    NASA Astrophysics Data System (ADS)

    Owen, Mark W.; Stubberud, Allen R.

    2003-12-01

    Highly maneuvering threats are a major concern for the Navy and the DoD, and the technology discussed in this paper is intended to help address this issue. A neural extended Kalman filter algorithm has been embedded in an interacting multiple model architecture for target tracking. The neural extended Kalman filter algorithm is used to improve motion model prediction during maneuvers. With a better target motion model, noise reduction can be achieved through a maneuver. Unlike the interacting multiple model architecture, which uses a high process noise model to hold a target through a maneuver with poor velocity and acceleration estimates, a neural extended Kalman filter is used to predict corrections to the velocity and acceleration states of a target through a maneuver. The neural extended Kalman filter estimates the weights of a neural network, which in turn are used to modify the state estimate predictions of the filter as measurements are processed. The neural network training is performed on-line as data is processed. In this paper, the simulation results of a tracking problem using a neural extended Kalman filter embedded in an interacting multiple model tracking architecture are shown. Preliminary results on the 2nd Benchmark Problem are also given.

  17. STAR Algorithm Integration Team - Facilitating operational algorithm development

    NASA Astrophysics Data System (ADS)

    Mikles, V. J.

    2015-12-01

    The NOAA/NESDIS Center for Satellite Research and Applications (STAR) provides technical support of the Joint Polar Satellite System (JPSS) algorithm development and integration tasks. Utilizing data from the S-NPP satellite, JPSS generates over thirty Environmental Data Records (EDRs) and Intermediate Products (IPs) spanning atmospheric, ocean, cryosphere, and land weather disciplines. The Algorithm Integration Team (AIT) brings technical expertise and support to product algorithms, specifically in testing and validating science algorithms in a pre-operational environment. The AIT verifies that new and updated algorithms function in the development environment, enforces established software development standards, and ensures that delivered packages are functional and complete. AIT facilitates the development of new JPSS-1 algorithms by implementing a review approach based on the Enterprise Product Lifecycle (EPL) process. Building on relationships established during the S-NPP algorithm development process and coordinating directly with science algorithm developers, the AIT has implemented structured reviews with self-contained document suites. The process has supported algorithm improvements for products such as ozone, active fire, vegetation index, and temperature and moisture profiles.

  18. An assessment of algorithms to estimate respiratory rate from the electrocardiogram and photoplethysmogram.

    PubMed

    Charlton, Peter H; Bonnici, Timothy; Tarassenko, Lionel; Clifton, David A; Beale, Richard; Watkinson, Peter J

    2016-04-01

    Over 100 algorithms have been proposed to estimate respiratory rate (RR) from the electrocardiogram (ECG) and photoplethysmogram (PPG). As they have never been compared systematically, it is unclear which algorithm performs best. Our primary aim was to determine how closely algorithms agreed with a gold-standard RR measure when operating under ideal conditions. Secondary aims were: (i) to compare algorithm performance with impedance pneumography (IP), the clinical standard for continuous respiratory rate measurement in spontaneously breathing patients; (ii) to compare algorithm performance when using the ECG and PPG; and (iii) to provide a toolbox of algorithms and data to allow future researchers to conduct reproducible comparisons of algorithms. Algorithms were divided into three stages: extraction of respiratory signals, estimation of RR, and fusion of estimates. Several interchangeable techniques were implemented for each stage. Algorithms were assembled using all possible combinations of techniques, many of which were novel. After verification on simulated data, algorithms were tested on data from healthy participants. RRs derived from the ECG, PPG and IP were compared to reference RRs obtained using a nasal-oral pressure sensor using the limits of agreement (LOA) technique. 314 algorithms were assessed. Of these, 270 could operate on either the ECG or PPG, and 44 on only the ECG. The best algorithm had 95% LOAs of -4.7 to 4.7 bpm and a bias of 0.0 bpm when using the ECG, and -5.1 to 7.2 bpm and a bias of 1.0 bpm when using the PPG. IP had 95% LOAs of -5.6 to 5.2 bpm and a bias of -0.2 bpm. Four algorithms operating on the ECG performed better than IP. All high-performing algorithms consisted of novel combinations of time-domain RR estimation and modulation fusion techniques. Algorithms performed better when using the ECG than the PPG. The toolbox of algorithms and data used in this study is publicly available. PMID:27027672
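
    The limits-of-agreement statistic used here is the Bland-Altman calculation: the bias is the mean difference between estimated and reference RR, and the 95% LOAs are the bias plus or minus 1.96 standard deviations of the differences. A sketch on synthetic data:

    ```python
    import numpy as np

    def limits_of_agreement(estimates, reference):
        d = np.asarray(estimates) - np.asarray(reference)
        bias = d.mean()
        half_width = 1.96 * d.std(ddof=1)
        return bias, bias - half_width, bias + half_width

    rng = np.random.default_rng(2)
    ref = rng.uniform(10, 25, size=200)            # reference RR in bpm (synthetic)
    est = ref + rng.normal(0.0, 2.4, size=200)     # simulated algorithm estimates
    print("bias, lower LOA, upper LOA:", limits_of_agreement(est, ref))
    ```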

  19. Efficient Kriging Algorithms

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess

    2011-01-01

    More efficient versions of an interpolation method, called kriging, have been introduced in order to reduce its traditionally high computational cost. Written in C++, these approaches were tested on both synthetic and real data. Kriging is a best linear unbiased estimator and is suitable for interpolation of scattered data points. Kriging has long been used in the geostatistics and mining communities, but is now being researched for use in the image fusion of remotely sensed data. This allows a combination of data from various locations to be used to fill in missing data from any single location. To arrive at the faster algorithms, a sparse SYMMLQ iterative solver, covariance tapering, Fast Multipole Methods (FMM), and nearest-neighbor searching techniques were used. These implementations were used when the coefficient matrix in the linear system is symmetric, but not necessarily positive-definite.
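
    As a baseline for what is being accelerated, ordinary kriging with a dense solve looks like the sketch below; the exponential covariance model is an illustrative choice, and a real implementation would replace the O(n^3) solve with the iterative and tapered approaches described above.

    ```python
    import numpy as np

    def kriging_predict(xs, ys, x_new, length_scale=1.0):
        cov = lambda a, b: np.exp(-np.abs(a[:, None] - b[None, :]) / length_scale)
        n = len(xs)
        # ordinary kriging system: covariance matrix bordered by the unbiasedness constraint
        K = np.ones((n + 1, n + 1))
        K[:n, :n] = cov(xs, xs)
        K[n, n] = 0.0
        rhs = np.ones(n + 1)
        rhs[:n] = cov(xs, np.array([x_new]))[:, 0]
        weights = np.linalg.solve(K, rhs)[:n]
        return weights @ ys

    xs = np.array([0.0, 1.0, 2.5, 4.0])
    ys = np.sin(xs)
    print(kriging_predict(xs, ys, 1.7), np.sin(1.7))   # interpolated vs. true value
    ```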

  20. Fighting Censorship with Algorithms

    NASA Astrophysics Data System (ADS)

    Mahdian, Mohammad

    In countries such as China or Iran where Internet censorship is prevalent, users usually rely on proxies or anonymizers to freely access the web. The obvious difficulty with this approach is that once the address of a proxy or an anonymizer is announced for use to the public, the authorities can easily filter all traffic to that address. This poses a challenge as to how proxy addresses can be announced to users without leaking too much information to the censorship authorities. In this paper, we formulate this question as an interesting algorithmic problem. We study this problem in a static and a dynamic model, and give almost tight bounds on the number of proxy servers required to give access to n people k of whom are adversaries. We will also discuss how trust networks can be used in this context.

  1. Trial encoding algorithms ensemble.

    PubMed

    Cheng, Lipin Bill; Yeh, Ren Jye

    2013-01-01

    This paper proposes trial algorithms for some basic components in cryptography and lossless bit compression. The symmetric encryption is accomplished by mixing up randomizations and scrambling, with hashing of the key playing an essential role. The digital signature is adapted from the Hill cipher, with the verification key matrices incorporating non-invertible parts to hide the signature matrix. The hash is a straight running summation (addition chain) of data bytes plus some randomization. One simplified version can serve as a burst-error-correcting code. The lossless bit compressor is Shannon-Fano coding, which is less optimal than the later Huffman and arithmetic coding but can be conveniently implemented without a tree structure and improved by byte concatenation. PMID:27057475
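
    Shannon-Fano coding is indeed simple to implement without a tree: sort the symbols by frequency, then recursively split the list into two parts of roughly equal total weight, appending a 0 to one side and a 1 to the other. A minimal sketch (the splitting rule here, cutting at the first point where the running sum reaches half the total, is one common convention):

    ```python
    def shannon_fano(freqs):
        codes = {}

        def split(items, prefix):
            if len(items) == 1:
                codes[items[0][0]] = prefix or "0"
                return
            total, running = sum(f for _, f in items), 0
            for i, (_, f) in enumerate(items):
                running += f
                if running >= total / 2:      # cut where the halves balance
                    cut = i + 1
                    break
            split(items[:cut], prefix + "0")
            split(items[cut:], prefix + "1")

        split(sorted(freqs.items(), key=lambda kv: -kv[1]), "")
        return codes

    print(shannon_fano({"a": 15, "b": 7, "c": 6, "d": 6, "e": 5}))
    # a prefix-free code: {'a': '00', 'b': '01', 'c': '100', 'd': '101', 'e': '11'}
    ```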

  2. Multisensor data fusion algorithm development

    SciTech Connect

    Yocky, D.A.; Chadwick, M.D.; Goudy, S.P.; Johnson, D.K.

    1995-12-01

    This report presents a two-year LDRD research effort into multisensor data fusion. We approached the problem by addressing the available types of data, preprocessing that data, and developing fusion algorithms using that data. The report reflects these three distinct areas. First, the possible data sets for fusion are identified. Second, automated registration techniques for imagery data are analyzed. Third, two fusion techniques are presented. The first fusion algorithm is based on the two-dimensional discrete wavelet transform. Using test images, the wavelet algorithm is compared against intensity modulation and intensity-hue-saturation image fusion algorithms that are available in commercial software. The wavelet approach outperforms the other two fusion techniques by preserving spectral/spatial information more precisely. The wavelet fusion algorithm was also applied to Landsat Thematic Mapper and SPOT panchromatic imagery data. The second algorithm is based on a linear-regression technique. We analyzed the technique using the same Landsat and SPOT data.

  3. Crystal structure of ammonia dihydrate II.

    PubMed

    Griffiths, Gareth I G; Fortes, A Dominic; Pickard, Chris J; Needs, R J

    2012-05-01

    We have used density-functional-theory (DFT) methods together with a structure searching algorithm to make an experimentally constrained prediction of the structure of ammonia dihydrate II (ADH-II). The DFT structure is in good agreement with neutron diffraction data and verifies the prediction. The structure consists of the same basic structural elements as ADH-I, with a modest alteration to the packing, but a considerable reduction in volume. The phase diagram of the known ADH and ammonia monohydrate + water-ice structures is calculated with the Perdew-Burke-Ernzerhof density functional, and the effects of a semi-empirical dispersion corrected functional are investigated. The results of our DFT calculations of the finite-pressure elastic constants of ADH-II are compared with the available experimental data for the elastic strain coefficients. PMID:22583254

  4. A computational study of routing algorithms for realistic transportation networks

    SciTech Connect

    Jacob, R.; Marathe, M.V.; Nagel, K.

    1998-12-01

    The authors carry out an experimental analysis of a number of shortest path (routing) algorithms investigated in the context of the TRANSIMS (Transportation Analysis and Simulation System) project. The main focus of the paper is to study how various heuristic and exact solutions and the associated data structures affect the computational performance of software developed especially for realistic transportation networks. For this purpose the authors used the Dallas-Fort Worth road network with a very high degree of resolution. The following general results are obtained: (1) they discuss and experimentally analyze various one-to-one shortest path algorithms, including classical exact algorithms studied in the literature as well as heuristic solutions that are designed to take into account the geometric structure of the input instances; (2) they describe a number of extensions to the basic shortest path algorithm, primarily motivated by practical problems arising in TRANSIMS and ITS (Intelligent Transportation Systems) related technologies. Extensions discussed include (i) time-dependent networks, (ii) multi-modal networks, and (iii) networks with public transportation and associated schedules. Computational results are provided to empirically compare the efficiency of the various algorithms. The studies indicate that a modified Dijkstra's algorithm is computationally fast and an excellent candidate for use in various transportation planning applications as well as ITS related technologies.
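
    The core one-to-one method being tuned is Dijkstra's algorithm; a compact binary-heap version is sketched below on a toy graph (the study's networks are far larger, and its modifications for time dependence, multi-modality, and schedules are not shown).

    ```python
    import heapq

    def dijkstra(graph, source):
        dist = {source: 0.0}
        heap = [(0.0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue                        # stale heap entry, skip
            for v, w in graph.get(u, []):
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd                # found a shorter path to v
                    heapq.heappush(heap, (nd, v))
        return dist

    graph = {"A": [("B", 2.0), ("C", 5.0)], "B": [("C", 1.0), ("D", 4.0)],
             "C": [("D", 1.0)], "D": []}
    print(dijkstra(graph, "A"))   # {'A': 0.0, 'B': 2.0, 'C': 3.0, 'D': 4.0}
    ```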

  5. Ozone Uncertainties Study Algorithm (OUSA)

    NASA Technical Reports Server (NTRS)

    Bahethi, O. P.

    1982-01-01

    An algorithm for carrying out sensitivity, uncertainty, and overall imprecision studies on a set of input parameters to a one-dimensional steady-state ozone photochemistry model is described. This algorithm can be used to evaluate steady-state perturbations due to point-source or distributed ejection of H2O, CLX, and NOx, as well as variations in the incident solar flux. The algorithm is operational on the IBM OS/360-91 computer at NASA/Goddard Space Flight Center's Science and Applications Computer Center (SACC).

  7. Messy genetic algorithms: Recent developments

    SciTech Connect

    Kargupta, H.

    1996-09-01

    Messy genetic algorithms define a rare class of algorithms that realize the need for detecting appropriate relations among members of the search domain in optimization. This paper reviews earlier work on messy genetic algorithms and describes some recent developments. It also describes the gene expression messy GA (GEMGA), an O(Λ^κ(ℓ² + κ)) sample-complexity algorithm for the class of order-κ delineable problems (problems that can be solved by considering relations of order no higher than κ) of size ℓ and alphabet size Λ. Experimental results are presented to demonstrate the scalability of the GEMGA.

  8. NOSS Altimeter Detailed Algorithm specifications

    NASA Technical Reports Server (NTRS)

    Hancock, D. W.; Mcmillan, J. D.

    1982-01-01

    The details of the algorithms and data sets required for satellite radar altimeter data processing are documented in a form suitable for (1) development of the benchmark software and (2) coding the operational software. The algorithms reported in detail are those established for altimeter processing. The algorithms which required some additional development before documenting for production were only scoped. The algorithms are divided into two levels of processing. The first level converts the data to engineering units and applies corrections for instrument variations. The second level provides geophysical measurements derived from altimeter parameters for oceanographic users.

  9. Preconditioned quantum linear system algorithm.

    PubMed

    Clader, B D; Jacobs, B C; Sprouse, C R

    2013-06-21

    We describe a quantum algorithm that generalizes the quantum linear system algorithm [Harrow et al., Phys. Rev. Lett. 103, 150502 (2009)] to arbitrary problem specifications. We develop a state preparation routine that can initialize generic states, show how simple ancilla measurements can be used to calculate many quantities of interest, and integrate a quantum-compatible preconditioner that greatly expands the number of problems that can achieve exponential speedup over classical linear systems solvers. To demonstrate the algorithm's applicability, we show how it can be used to compute the electromagnetic scattering cross section of an arbitrary target exponentially faster than the best classical algorithm. PMID:23829722

  10. Variable Selection using MM Algorithms

    PubMed Central

    Hunter, David R.; Li, Runze

    2009-01-01

    Variable selection is fundamental to high-dimensional statistical modeling. Many variable selection techniques may be implemented by maximum penalized likelihood using various penalty functions. Optimizing the penalized likelihood function is often challenging because it may be nondifferentiable and/or nonconcave. This article proposes a new class of algorithms for finding a maximizer of the penalized likelihood for a broad class of penalty functions. These algorithms operate by perturbing the penalty function slightly to render it differentiable, then optimizing this differentiable function using a minorize-maximize (MM) algorithm. MM algorithms are useful extensions of the well-known class of EM algorithms, a fact that allows us to analyze the local and global convergence of the proposed algorithm using some of the techniques employed for EM algorithms. In particular, we prove that when our MM algorithms converge, they must converge to a desirable point; we also discuss conditions under which this convergence may be guaranteed. We exploit the Newton-Raphson-like aspect of these algorithms to propose a sandwich estimator for the standard errors of the estimators. Our method performs well in numerical tests. PMID:19458786

  11. Juno II (AM-14)

    NASA Technical Reports Server (NTRS)

    1959-01-01

    Juno II (AM-14) on the launch pad just prior to launch, March 3, 1959. The payload of AM-14 was Pioneer IV, America's first successful lunar mission. The Juno II was a modification of the Jupiter ballistic missile.

  12. Sparse Canonical Correlation Analysis: New Formulation and Algorithm.

    PubMed

    Chu, Delin; Liao, Li-Zhi; Ng, Michael K; Zhang, Xiaowei

    2013-05-24

    In this paper, we study canonical correlation analysis (CCA), which has become a powerful tool in multivariate data analysis for finding correlations between two sets of multidimensional variables. The main contributions of the paper are: (i) to reveal the equivalent relationship between a recursive formula and a trace formula for the multiple CCA problem; (ii) to obtain an explicit characterization of all solutions of the multiple CCA problem, even when the covariance matrices are singular; (iii) to develop a new sparse CCA algorithm; and (iv) to establish the equivalent relationship between uncorrelated linear discriminant analysis and the CCA problem. We test several simulated and real-world data sets in gene classification and cross-language document retrieval to demonstrate the effectiveness of the proposed algorithm. The performance of the proposed method is competitive with the state-of-the-art sparse CCA algorithms. PMID:23712996

  13. Research on Routing Selection Algorithm Based on Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Gao, Guohong; Zhang, Baojian; Li, Xueyong; Lv, Jinna

    The genetic algorithm is a stochastic search and optimization method based on the natural selection and genetic mechanisms of living organisms. In recent years, because of its potential for solving complicated problems and its successful application in industrial engineering, the genetic algorithm has received wide attention from scholars both domestically and internationally. Routing selection is defined in the standard communication model of IP version 6. This paper proposes a service model for routing selection communication, and designs and implements a new routing selection algorithm based on a genetic algorithm. Experimental simulation results show that the algorithm finds better routes in less time and balances the network load more evenly, which improves the search ratio and the availability of network resources, and improves the quality of service.

  14. Flexible, efficient and robust algorithm for parallel execution and coupling of components in a framework

    NASA Astrophysics Data System (ADS)

    Tóth, Gábor

    2006-05-01

    We describe a general algorithm suitable for executing and coupling components of a software framework on a parallel computer. The requirements of a flexible, efficient and robust algorithm are defined precisely, and the motivation for the requirements is demonstrated on several examples. In short, the requirements are the following: (i) the algorithm should allow arbitrary distribution of processors among the components, (ii) it should allow an arbitrary coupling schedule between the components, (iii) it should not use any inter-processor communication other than already required by the components and their couplings, and (iv) it should never get into a deadlock. We show that the proposed algorithm based on the Temporal and Predefined Ordering of Tasks (TPOT) satisfies all these requirements. The TPOT algorithm has been implemented in the Space Weather Modeling Framework. The flexibility and efficiency of the algorithm are demonstrated with several examples.

  15. Wind farm optimization using evolutionary algorithms

    NASA Astrophysics Data System (ADS)

    Ituarte-Villarreal, Carlos M.

    In recent years, the wind power industry has focused its efforts on solving the Wind Farm Layout Optimization (WFLO) problem. Wind resource assessment is a pivotal step in optimizing the wind-farm design and siting and in determining whether a project is economically feasible. In the present work, three (3) different optimization methods are proposed for the solution of the WFLO: (i) a modified Viral System Algorithm applied to the optimization of the proper location of the components in a wind farm to maximize the energy output given a stated wind environment of the site. The optimization problem is formulated as the minimization of energy cost per unit produced and applies a penalization for the lack of system reliability. The viral system algorithm utilized in this research solves three (3) well-known problems in the wind-energy literature; (ii) a new multiple-objective evolutionary algorithm to obtain optimal placement of wind turbines while considering the power output, cost, and reliability of the system. The algorithm presented is based on evolutionary computation and the objective functions considered are the maximization of power output, the minimization of wind farm cost and the maximization of system reliability. The final solution to this multiple-objective problem is presented as a set of Pareto solutions; and (iii) a hybrid viral-based optimization algorithm adapted to find the proper component configuration for a wind farm with the introduction of the universal generating function (UGF) analytical approach to discretize the different operating or mechanical levels of the wind turbines in addition to the various wind speed states. The proposed methodology considers the specific probability functions of the wind resource to describe their proper behaviors to account for the stochastic comportment of the renewable energy components, aiming to increase their power output and the reliability of these systems. The developed heuristic considers a

  16. Efficient algorithms for mixed aleatory-epistemic uncertainty quantification with application to radiation-hardened electronics. Part I, algorithms and benchmark results.

    SciTech Connect

    Swiler, Laura Painton; Eldred, Michael Scott

    2009-09-01

    This report documents the results of an FY09 ASC V&V Methods level 2 milestone demonstrating new algorithmic capabilities for mixed aleatory-epistemic uncertainty quantification. Through the combination of stochastic expansions for computing aleatory statistics and interval optimization for computing epistemic bounds, mixed uncertainty analysis studies are shown to be more accurate and efficient than previously achievable. Part I of the report describes the algorithms and presents benchmark performance results. Part II applies these new algorithms to UQ analysis of radiation effects in electronic devices and circuits for the QASPR program.

  17. Algorithm for Autonomous Landing

    NASA Technical Reports Server (NTRS)

    Kuwata, Yoshiaki

    2011-01-01

    Because of their small size, high maneuverability, and easy deployment, micro aerial vehicles (MAVs) are used for a wide variety of both civilian and military missions. One of their current drawbacks is the vast array of sensors (such as GPS, altimeter, radar, and the like) required to make a landing. Due to the MAV's small payload size, this is a major concern. Replacing the imaging sensors with a single monocular camera is sufficient to land a MAV. By applying optical flow algorithms to images obtained from the camera, time-to-collision can be measured. This is a measurement of position and velocity (but not of absolute distance), and can avoid obstacles as well as facilitate a landing on a flat surface given a set of initial conditions. The key to this approach is to calculate time-to-collision based on some image on the ground. By holding the angular velocity constant, horizontal speed decreases linearly with the height, resulting in a smooth landing. Mathematical proofs show that even with actuator saturation or modeling/measurement uncertainties, MAVs can land safely. Landings of this nature may have a higher velocity than is desirable, but this can be compensated for by a cushioning or dampening system, or by using a system of legs to grab onto a surface. Such a monocular camera system can increase vehicle payload size (or correspondingly reduce vehicle size), increase speed of descent, and guarantee a safe landing by directly correlating speed to height from the ground.
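
    A minimal numerical sketch of the constant time-to-collision idea described above, under assumed toy values: commanding a sink rate proportional to height keeps tau = h/v constant, so the height decays exponentially toward a soft touchdown. The numbers are illustrative, not flight parameters from the work.

        # Hypothetical numbers: 10 m start height, tau = 2 s, 100 Hz loop.
        tau, dt = 2.0, 0.01
        h, t = 10.0, 0.0
        while h > 0.05:            # stop a few centimetres above ground
            v = h / tau            # sink rate chosen so tau = h / v stays fixed
            h -= v * dt
            t += dt
        print(f"touchdown after ~{t:.1f} s")   # height decays exponentially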

  18. Advanced software algorithms

    SciTech Connect

    Berry, K.; Dayton, S.

    1996-10-28

    Citibank was using a data collection system to create a one-time-only mailing history on prospective credit card customers that was becoming dated in its time to market requirements and as such was in need of performance improvements. To compound problems with their existing system, the assurance of the quality of the data matching process was manpower intensive and needed to be automated. Analysis, design, and prototyping capabilities involving information technology were areas of expertise provided by DOE-LMES Data Systems Research and Development (DSRD) program. The goal of this project was for Data Systems Research and Development (DSRD) to analyze the current Citibank credit card offering system and suggest and prototype technology improvements that would result in faster processing with quality as good as the current system. Technologies investigated include: a high-speed network of reduced instruction set computing (RISC) processors for loosely coupled parallel processing, tightly coupled, high performance parallel processing, higher order computer languages such as `C`, fuzzy matching algorithms applied to very large data files, relational database management system, and advanced programming techniques.

  19. FORTRAN Algorithm for Image Processing

    NASA Technical Reports Server (NTRS)

    Roth, Don J.; Hull, David R.

    1987-01-01

    FORTRAN computer algorithm containing various image-processing analysis and enhancement functions developed. Algorithm developed specifically to process images of developmental heat-engine materials obtained with sophisticated nondestructive evaluation instruments. Applications of program include scientific, industrial, and biomedical imaging for studies of flaws in materials, analyses of steel and ores, and pathology.

  20. Computer algorithm for coding gain

    NASA Technical Reports Server (NTRS)

    Dodd, E. E.

    1974-01-01

    Development of a computer algorithm for coding gain for use in an automated communications link design system. Using an empirical formula which defines coding gain as used in space communications engineering, an algorithm is constructed on the basis of available performance data for nonsystematic convolutional encoding with soft-decision (eight-level) Viterbi decoding.

  1. Cascade Error Projection Learning Algorithm

    NASA Technical Reports Server (NTRS)

    Duong, T. A.; Stubberud, A. R.; Daud, T.

    1995-01-01

    A detailed mathematical analysis is presented for a new learning algorithm termed cascade error projection (CEP) and a general learning frame work. This frame work can be used to obtain the cascade correlation learning algorithm by choosing a particular set of parameters.

  2. Improved algorithm for calculating the Chandrasekhar function

    NASA Astrophysics Data System (ADS)

    Jablonski, A.

    2013-02-01

    algorithms by selecting ranges of the argument omega in which the performance is the fastest. Reasons for the new version: Some of the theoretical models describing electron transport in condensed matter need a source of the Chandrasekhar H function values with an accuracy of at least 10 decimal places. Additionally, calculations of this function should be as fast as possible since frequent calls to a subroutine providing this function are made (e.g., numerical evaluation of a double integral with a complicated integrand containing the H function). Both conditions were satisfied in the algorithm previously published [1]. However, it has been found that a proper selection of the quadrature in an integral representation of the Chandrasekhar function may considerably decrease the running time. By suitable selection of the number of abscissas in Gauss-Legendre quadrature, the execution time was decreased by a factor of more than 20. Simultaneously, the accuracy of results has not been affected. Summary of revisions: (1) As in previous work [1], two integral representations of the Chandrasekhar function, H(x,omega), were considered: the expression published by Dudarev and Whelan [2] and the expression published by Davidović et al. [3]. The algorithms implementing these representations were designated A and B, respectively. All integrals in these implementations were previously calculated using Romberg quadrature. It has been found, however, that the use of Gauss-Legendre quadrature considerably improved the performance of both algorithms. Two conditions have to be satisfied. (i) The number of abscissas, N, has to be rather large, and (ii) the abscissas and corresponding weights should be determined with accuracy as high as possible. The abscissas and weights are available for N=16, 20, 24, 32, 40, 48, 64, 80, and 96 with accuracy of 20 decimal places [4], and all these values were introduced into a new procedure GAUSS replacing procedure ROMBERG. Due to the fact that the
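
    The change described, replacing adaptive Romberg integration by fixed-order Gauss-Legendre quadrature, has roughly the following shape; the integrand here is a stand-in, not the actual integral representation of H(x, omega).

        import numpy as np

        def gauss_legendre_integral(f, a, b, n=64):
            # Fixed-order Gauss-Legendre quadrature of f on [a, b]; a fixed,
            # large n replaces the adaptive refinement steps of Romberg.
            x, w = np.polynomial.legendre.leggauss(n)  # nodes/weights on [-1, 1]
            t = 0.5 * (b - a) * x + 0.5 * (b + a)
            return 0.5 * (b - a) * np.dot(w, f(t))

        approx = gauss_legendre_integral(np.cos, 0.0, np.pi / 2)  # ~ 1.0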

  3. The Chopthin Algorithm for Resampling

    NASA Astrophysics Data System (ADS)

    Gandy, Axel; Lau, F. Din-Houn

    2016-08-01

    Resampling is a standard step in particle filters and more generally sequential Monte Carlo methods. We present an algorithm, called chopthin, for resampling weighted particles. In contrast to standard resampling methods the algorithm does not produce a set of equally weighted particles; instead it merely enforces an upper bound on the ratio between the weights. Simulation studies show that the chopthin algorithm consistently outperforms standard resampling methods. The algorithm chops up particles with large weights and thins out particles with low weights, hence its name. It implicitly guarantees a lower bound on the effective sample size. The algorithm can be implemented efficiently, making it practically useful. We show that the expected computational effort is linear in the number of particles. Implementations for C++, R (on CRAN), Python and Matlab are available.
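
    A simplified sketch of the chop-and-thin idea (not the authors' exact procedure; their C++, R, Python and Matlab implementations are the reference): particles above a weight band are split into equal-weight copies and particles below it are kept with probability proportional to weight, which preserves weights in expectation.

        import random

        def chop_and_thin(particles, weights, eta=4.0):
            # Enforce max(w)/min(w) <= eta: chop heavy particles into equal
            # copies, thin light ones unbiasedly (eta >= 4 keeps the chopped
            # copies inside the weight band).
            c = sum(weights) / len(weights)
            hi, lo = c * eta ** 0.5, c / eta ** 0.5
            out_p, out_w = [], []
            for p, w in zip(particles, weights):
                if w > hi:                          # chop
                    k = int(w // hi) + 1
                    out_p += [p] * k
                    out_w += [w / k] * k
                elif w < lo:                        # thin: E[weight] preserved
                    if random.random() < w / lo:
                        out_p.append(p)
                        out_w.append(lo)
                else:
                    out_p.append(p)
                    out_w.append(w)
            return out_p, out_w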

  4. CORDIC algorithms in four dimensions

    NASA Astrophysics Data System (ADS)

    Delosme, Jean-Marc; Hsiao, Shen-Fu

    1990-11-01

    CORDIC algorithms offer an attractive alternative to multiply-and-add based algorithms for the implementation of two-dimensional rotations preserving either norm: (x^2 + y^2)^(1/2) or (x^2 - y^2)^(1/2). Indeed, these norms, whose computation is a significant part of the evaluation of the two-dimensional rotations, are computed much more easily by the CORDIC algorithms. However, the part played by norm computations in the evaluation of rotations quickly becomes small as the dimension of the space increases. Thus in spaces of dimension 5 or more there is no practical alternative to multiply-and-add based algorithms. In the intermediate region, dimensions 3 and 4, extensions of the CORDIC algorithms are an interesting option. The four-dimensional extensions are particularly elegant and are the main object of this paper.
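
    For reference, the two-dimensional circular CORDIC rotation that the paper extends to higher dimensions can be sketched as follows; each micro-rotation uses only shifts and adds (written here in floating point for clarity), and the constant K compensates the accumulated gain.

        import math

        def cordic_rotate(x, y, angle, n=32):
            # Circular CORDIC: rotate (x, y) by `angle` (|angle| <~ 1.74 rad)
            # with shift-and-add style micro-rotations; K undoes the gain.
            K = 1.0
            for i in range(n):
                K /= math.sqrt(1.0 + 2.0 ** (-2 * i))
            z = angle
            for i in range(n):
                d = 1.0 if z >= 0.0 else -1.0
                x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
                z -= d * math.atan(2.0 ** -i)
            return x * K, y * K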

  5. Cubit Adaptive Meshing Algorithm Library

    2004-09-01

    CAMAL (Cubit adaptive meshing algorithm library) is a software component library for mesh generation. CAMAL 2.0 includes components for triangle, quad and tetrahedral meshing. A simple Application Programmers Interface (API) takes a discrete boundary definition and CAMAL computes a quality interior unstructured grid. The triangle and quad algorithms may also import a geometric definition of a surface on which to define the grid. CAMAL’s triangle meshing uses a 3D space advancing front method, the quad meshing algorithm is based upon Sandia’s patented paving algorithm and the tetrahedral meshing algorithm employs the GHS3D-Tetmesh component developed by INRIA, France.

  6. Testing an earthquake prediction algorithm

    USGS Publications Warehouse

    Kossobokov, V.G.; Healy, J.H.; Dewey, J.W.

    1997-01-01

    A test to evaluate earthquake prediction algorithms is being applied to a Russian algorithm known as M8. The M8 algorithm makes intermediate-term predictions for earthquakes to occur in a large circle, based on integral counts of transient seismicity in the circle. In a retroactive prediction for the period January 1, 1985 to July 1, 1991 the algorithm as configured for the forward test would have predicted eight of ten strong earthquakes in the test area. A null hypothesis, based on random assignment of predictions, predicts eight earthquakes in 2.87% of the trials. The forward test began July 1, 1991 and will run through December 31, 1997. As of July 1, 1995, the algorithm had forward predicted five out of nine earthquakes in the test area, a success ratio that would have been achieved in 53% of random trials under the null hypothesis.

  7. An Artificial Immune Univariate Marginal Distribution Algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Qingbin; Kang, Shuo; Gao, Junxiang; Wu, Song; Tian, Yanping

    Hybridization is an extremely effective way of improving the performance of the Univariate Marginal Distribution Algorithm (UMDA). Owing to its diversity and memory mechanisms, artificial immune algorithm has been widely used to construct hybrid algorithms with other optimization algorithms. This paper proposes a hybrid algorithm which combines the UMDA with the principle of general artificial immune algorithm. Experimental results on deceptive function of order 3 show that the proposed hybrid algorithm can get more building blocks (BBs) than the UMDA.
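
    The plain UMDA that the paper hybridizes can be sketched as follows; the artificial-immune components are omitted since the abstract does not specify them. Each generation estimates per-bit marginal probabilities from the selected individuals and samples a fresh population from them.

        import random

        def umda(fitness, n_bits, pop_size=100, n_select=50, generations=100):
            # Estimate per-bit marginals from the best individuals, then
            # sample the next population from those marginals.
            pop = [[random.randint(0, 1) for _ in range(n_bits)]
                   for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=fitness, reverse=True)
                elite = pop[:n_select]
                p = [sum(ind[i] for ind in elite) / n_select
                     for i in range(n_bits)]
                pop = [[1 if random.random() < p[i] else 0
                        for i in range(n_bits)] for _ in range(pop_size)]
            return max(pop, key=fitness)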

  8. The New Algorithm for Symbolic Network Analysis.

    NASA Astrophysics Data System (ADS)

    Chow, John Tsai-Chiang

    A new and highly efficient tree identification algorithm is derived here for obtaining the determinant and the cofactors of a circuit's node admittance matrix, and hence, for obtaining various symbolic network functions for one-port and two-port reciprocal and nonreciprocal networks, with the network's topological description as its input. The algorithm is so devised that it is practically memory-storage free, and it is simple enough that even a microcomputer can obtain symbolic network functions for a fairly large circuit in a reasonably short time. It is worth noting that the algorithm can handle topological branches with infinite admittance values. Making use of this special feature, we have derived a simple topological model for the ideal operational amplifier, hence providing the ability to obtain various topological formulas of operational amplifier circuits in a reasonable time. By choosing appropriate symbolic network functions, along with some measured transfer function data, the circuit's nominal element values, and a nonlinear-equation solving subroutine, we have constructed a computer program to perform analog circuit fault diagnosis. This program can identify which of a circuit's elements are faulty or out of design tolerances. In the course of this research we have also identified an application to a biological problem, one in which the resistor values of an electrical model of the guinea-pig cochlea can easily be deduced even when some nodes are inaccessible for measurements. All these features have been implemented on a very modest microcomputer, the Apple II. Obviously, a larger computer will not only accomplish the same result faster but also it will be capable of analyzing much larger circuits.

  9. Timing Analysis with INTEGRAL: Comparing Different Reconstruction Algorithms

    NASA Technical Reports Server (NTRS)

    Grinberg, V.; Kreykenboehm, I.; Fuerst, F.; Wilms, J.; Pottschmidt, K.; Bel, M. Cadolle; Rodriquez, J.; Marcu, D. M.; Suchy, S.; Markowitz, A.; Nowak, M. A.

    2010-01-01

    INTEGRAL is one of the few instruments capable of detecting X-rays above 20 keV. It is therefore in principle well suited for studying X-ray variability in this regime. Because INTEGRAL uses coded mask instruments for imaging, the reconstruction of light curves of X-ray sources is highly non-trivial. We present results from the comparison of two commonly employed algorithms, which primarily measure flux from mask deconvolution (ii-lc-extract) and from calculating the pixel illuminated fraction (ii-light). Both methods agree well for timescales above about 10 s, the highest time resolution for which image reconstruction is possible. For higher time resolution, ii-light produces meaningful results, although the overall variance of the lightcurves is not preserved.

  10. Algorithmic advances in stochastic programming

    SciTech Connect

    Morton, D.P.

    1993-07-01

    Practical planning problems with deterministic forecasts of inherently uncertain parameters often yield unsatisfactory solutions. Stochastic programming formulations allow uncertain parameters to be modeled as random variables with known distributions, but the size of the resulting mathematical programs can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We consider two classes of decomposition-based stochastic programming algorithms. The first type of algorithm addresses problems with a ``manageable`` number of scenarios. The second class incorporates Monte Carlo sampling within a decomposition algorithm. We develop and empirically study an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs within a prespecified tolerance. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of ``real-world`` multistage stochastic hydroelectric scheduling problems. Recently, there has been an increased focus on decomposition-based algorithms that use sampling within the optimization framework. These approaches hold much promise for solving stochastic programs with many scenarios. A critical component of such algorithms is a stopping criterion to ensure the quality of the solution. With this as motivation, we develop a stopping rule theory for algorithms in which bounds on the optimal objective function value are estimated by sampling. Rules are provided for selecting sample sizes and terminating the algorithm under which asymptotic validity of confidence interval statements for the quality of the proposed solution can be verified. Issues associated with the application of this theory to two sampling-based algorithms are considered, and preliminary empirical coverage results are presented.

  11. The Dropout Learning Algorithm

    PubMed Central

    Baldi, Pierre; Sadowski, Peter

    2014-01-01

    Dropout is a recently introduced algorithm for training neural networks by randomly dropping units during training to prevent their co-adaptation. A mathematical analysis of some of the static and dynamic properties of dropout is provided using Bernoulli gating variables, general enough to accommodate dropout on units or connections, and with variable rates. The framework allows a complete analysis of the ensemble averaging properties of dropout in linear networks, which is useful to understand the non-linear case. The ensemble averaging properties of dropout in non-linear logistic networks result from three fundamental equations: (1) the approximation of the expectations of logistic functions by normalized geometric means, for which bounds and estimates are derived; (2) the algebraic equality between normalized geometric means of logistic functions with the logistic of the means, which mathematically characterizes logistic functions; and (3) the linearity of the means with respect to sums, as well as products of independent variables. The results are also extended to other classes of transfer functions, including rectified linear functions. Approximation errors tend to cancel each other and do not accumulate. Dropout can also be connected to stochastic neurons and used to predict firing rates, and to backpropagation by viewing the backward propagation as ensemble averaging in a dropout linear network. Moreover, the convergence properties of dropout can be understood in terms of stochastic gradient descent. Finally, for the regularization properties of dropout, the expectation of the dropout gradient is the gradient of the corresponding approximation ensemble, regularized by an adaptive weight decay term with a propensity for self-consistent variance minimization and sparse representations. PMID:24771879
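
    A minimal sketch of the mechanism analyzed above: Bernoulli gating variables mask units during training, and at test time the ensemble average is approximated by scaling activations with the keep probability. The single rectified-linear layer is illustrative.

        import numpy as np

        def dropout_forward(x, W, p=0.5, train=True, rng=None):
            # One layer with Bernoulli(p) *keep* probability on hidden units.
            rng = np.random.default_rng() if rng is None else rng
            h = np.maximum(0.0, x @ W)          # rectified linear activation
            if train:
                mask = rng.random(h.shape) < p  # Bernoulli gating variables
                return h * mask
            return h * p                        # deterministic ensemble approx.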

  12. Wavelet periodicity detection algorithms

    NASA Astrophysics Data System (ADS)

    Benedetto, John J.; Pfander, Goetz E.

    1998-10-01

    This paper deals with the analysis of time series with respect to certain known periodicities. In particular, we shall present a fast method aimed at detecting periodic behavior inherent in noisy data. The method is composed of three steps: (1) Non-noisy data are analyzed through spectral and wavelet methods to extract specific periodic patterns of interest. (2) Using these patterns, we construct an optimal piecewise constant wavelet designed to detect the underlying periodicities. (3) We introduce a fast discretized version of the continuous wavelet transform, as well as waveletgram averaging techniques, to detect the occurrence and period of these periodicities. The algorithm is formulated to provide real-time implementation. Our procedure is generally applicable to detect locally periodic components in signals s that can be modeled as s(t) = A(t)F(h(t)) + N(t) for t in I, where F is a periodic signal, A is a non-negative slowly varying function, h is strictly increasing with h' slowly varying, and N denotes background activity. For example, the method can be applied in the context of epileptic seizure detection. In this case, we try to detect seizure periodicities in EEG and ECoG data. In the case of ECoG data, N is essentially 1/f noise. In the case of EEG data and for t in I, N includes noise due to cranial geometry and densities. In both cases N also includes standard low frequency rhythms. Periodicity detection has other applications including ocean wave prediction, cockpit motion sickness prediction, and minefield detection.

  13. Scheduling with genetic algorithms

    NASA Technical Reports Server (NTRS)

    Fennel, Theron R.; Underbrink, A. J., Jr.; Williams, George P. W., Jr.

    1994-01-01

    In many domains, scheduling a sequence of jobs is an important function contributing to the overall efficiency of the operation. At Boeing, we develop schedules for many different domains, including assembly of military and commercial aircraft, weapons systems, and space vehicles. Boeing is under contract to develop scheduling systems for the Space Station Payload Planning System (PPS) and Payload Operations and Integration Center (POIC). These applications require that we respect certain sequencing restrictions among the jobs to be scheduled while at the same time assigning resources to the jobs. We call this general problem scheduling and resource allocation. Genetic algorithms (GA's) offer a search method that uses a population of solutions and benefits from intrinsic parallelism to search the problem space rapidly, producing near-optimal solutions. Good intermediate solutions are probabilistically recombined to produce better offspring (based upon some application-specific measure of solution fitness, e.g., minimum flowtime, or schedule completeness). Also, at any point in the search, any intermediate solution can be accepted as a final solution; allowing the search to proceed longer usually produces a better solution, while terminating the search at virtually any time may yield an acceptable solution. Many processes are constrained by restrictions of sequence among the individual jobs. For a specific job, other jobs must be completed beforehand. While there are obviously many other constraints on processes, it is these on which we focused for this research: how to allocate crews to jobs while satisfying job precedence requirements and personnel, tooling, and fixture (or, more generally, resource) requirements.

  14. Portable Health Algorithms Test System

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin J.; Wong, Edmond; Fulton, Christopher E.; Sowers, Thomas S.; Maul, William A.

    2010-01-01

    A document discusses the Portable Health Algorithms Test (PHALT) System, which has been designed as a means for evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT system allows systems health management algorithms to be developed in a graphical programming environment, to be tested and refined using system simulation or test data playback, and to be evaluated in a real-time hardware-in-the-loop mode with a live test article. The integrated hardware and software development environment provides a seamless transition from algorithm development to real-time implementation. The portability of the hardware makes it quick and easy to transport between test facilities. This hardware/software architecture is flexible enough to support a variety of diagnostic applications and test hardware, and the GUI-based rapid prototyping capability is sufficient to support development, execution, and testing of custom diagnostic algorithms. The PHALT operating system supports execution of diagnostic algorithms under real-time constraints. PHALT can perform real-time capture and playback of test rig data with the ability to augment/modify the data stream (e.g., inject simulated faults). It performs algorithm testing using a variety of data input sources, including real-time data acquisition, test data playback, and system simulations, and also provides system feedback to evaluate closed-loop diagnostic response and mitigation control.

  15. Predicting patchy particle crystals: variable box shape simulations and evolutionary algorithms.

    PubMed

    Bianchi, Emanuela; Doppelbauer, Günther; Filion, Laura; Dijkstra, Marjolein; Kahl, Gerhard

    2012-06-01

    We consider several patchy particle models that have been proposed in the literature and we investigate their candidate crystal structures in a systematic way. We compare two different algorithms for predicting crystal structures: (i) an approach based on Monte Carlo simulations in the isobaric-isothermal ensemble and (ii) an optimization technique based on ideas of evolutionary algorithms. We show that the two methods are equally successful and provide consistent results on crystalline phases of patchy particle systems. PMID:22697525

  16. SEU-tolerant IQ detection algorithm for LLRF accelerator system

    NASA Astrophysics Data System (ADS)

    Grecki, M.

    2007-08-01

    High-energy accelerators use RF field to accelerate charged particles. Measurements of effective field parameters (amplitude and phase) are tasks of great importance in these facilities. The RF signal is downconverted in frequency but keeping the information about amplitude and phase and then sampled in ADC. One of the several tasks for LLRF control system is to estimate the amplitude and phase (or I and Q components) of the RF signal. These parameters are further used in the control algorithm. The XFEL accelerator will be built using a single-tunnel concept. Therefore electronic devices (including LLRF control system) will be exposed to ionizing radiation, particularly to a neutron flux generating SEUs in digital circuits. The algorithms implemented in FPGA/DSP should therefore be SEU-tolerant. This paper presents the application of the WCC method to obtain immunity of IQ detection algorithm to SEUs. The VHDL implementation of this algorithm in Xilinx Virtex II Pro FPGA is presented, together with results of simulation proving the algorithm suitability for systems operating in the presence of SEUs.
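
    A minimal digital IQ-detection sketch of the kind such an LLRF system performs, without the SEU-tolerance (WCC) machinery, which the abstract does not detail: the sampled signal is correlated with quadrature references and averaged.

        import numpy as np

        def iq_detect(samples, f_if, f_s):
            # Correlate with quadrature references; assumes the window spans
            # an integer number of IF periods of a tone A*cos(w n + phi).
            n = np.arange(len(samples))
            w = 2 * np.pi * f_if * n / f_s
            I = 2.0 * np.mean(samples * np.cos(w))    # ~ A cos(phi)
            Q = -2.0 * np.mean(samples * np.sin(w))   # ~ A sin(phi)
            return I, Q, np.hypot(I, Q), np.arctan2(Q, I)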

  17. Cluster algorithms and computational complexity

    NASA Astrophysics Data System (ADS)

    Li, Xuenan

    Cluster algorithms for the 2D Ising model with a staggered field have been studied and a new cluster algorithm for path sampling has been worked out. The complexity properties of the Bak-Sneppen model and the Growing Network model have been studied using computational complexity theory. The dynamic critical behavior of the two-replica cluster algorithm is studied. Several versions of the algorithm are applied to the two-dimensional, square lattice Ising model with a staggered field. The dynamic exponent for the full algorithm is found to be less than 0.5. It is found that odd translations of one replica with respect to the other together with global flips are essential for obtaining a small value of the dynamic exponent. The path sampling problem for the 1D Ising model is studied using both a local algorithm and a novel cluster algorithm. The local algorithm is extremely inefficient at low temperature, where the integrated autocorrelation time is found to be proportional to the fourth power of the correlation length. The dynamic exponent of the cluster algorithm is found to be zero and it is therefore proved to be much more efficient than the local algorithm. The parallel computational complexity of the Bak-Sneppen evolution model is studied. It is shown that Bak-Sneppen histories can be generated by a massively parallel computer in a time that is polylog in the length of the history, which means that the logical depth of producing a Bak-Sneppen history is exponentially less than the length of the history. The parallel dynamics for generating Bak-Sneppen histories is contrasted to standard Bak-Sneppen dynamics. The parallel computational complexity of the Growing Network model is studied. The growth of the network with linear kernels is shown to be not complex and an algorithm with polylog parallel running time is found. The growth of the network with gamma ≥ 2 super-linear kernels can be realized by a randomized parallel algorithm with polylog expected running time.

  18. Routing Algorithm Exploits Spatial Relations

    NASA Technical Reports Server (NTRS)

    Okino, Clayton; Jennings, Esther

    2004-01-01

    A recently developed routing algorithm for broadcasting in an ad hoc wireless communication network takes account of, and exploits, the spatial relationships among the locations of nodes, in addition to transmission power levels and distances between the nodes. In contrast, most prior algorithms for discovering routes through ad hoc networks rely heavily on transmission power levels and utilize limited graph-topology techniques that do not involve consideration of the aforesaid spatial relationships. The present algorithm extracts the relevant spatial-relationship information by use of a construct denoted the relative-neighborhood graph (RNG).
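
    The relative-neighborhood graph itself has a compact definition: an edge (u, v) is kept iff no third node w is closer to both endpoints than they are to each other. A direct O(n^3) construction, with the distance function left abstract, is sketched below; the routing logic built on top of the RNG is not shown.

        def relative_neighborhood_graph(points, dist):
            # Edge (i, j) is in the RNG iff no k satisfies
            # max(dist(i, k), dist(j, k)) < dist(i, j).
            n = len(points)
            edges = []
            for i in range(n):
                for j in range(i + 1, n):
                    d_ij = dist(points[i], points[j])
                    if not any(max(dist(points[i], points[k]),
                                   dist(points[j], points[k])) < d_ij
                               for k in range(n) if k != i and k != j):
                        edges.append((i, j))
            return edges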

  19. Linearization algorithms for line transfer

    SciTech Connect

    Scott, H.A.

    1990-11-06

    Complete linearization is a very powerful technique for solving multi-line transfer problems that can be used efficiently with a variety of transfer formalisms. The linearization algorithm we describe is computationally very similar to ETLA, but allows an effective treatment of strongly-interacting lines. This algorithm has been implemented (in several codes) with two different transfer formalisms in all three one-dimensional geometries. We also describe a variation of the algorithm that handles saturable laser transport. Finally, we present a combination of linearization with a local approximate operator formalism, which has been implemented in two dimensions and is being developed in three dimensions. 11 refs.

  20. Fibonacci Numbers and Computer Algorithms.

    ERIC Educational Resources Information Center

    Atkins, John; Geist, Robert

    1987-01-01

    The Fibonacci Sequence describes a vast array of phenomena from nature. Computer scientists have discovered and used many algorithms which can be classified as applications of Fibonacci's sequence. In this article, several of these applications are considered. (PK)
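
    As one example of the kind of algorithm the article surveys, Fibonacci numbers themselves can be computed in O(log n) arithmetic steps with the fast-doubling identities:

        def fib(n):
            # Fast doubling: F(2k) = F(k)*(2*F(k+1) - F(k)),
            #                F(2k+1) = F(k)^2 + F(k+1)^2.
            def pair(k):           # returns (F(k), F(k+1))
                if k == 0:
                    return 0, 1
                a, b = pair(k // 2)
                c = a * (2 * b - a)
                d = a * a + b * b
                return (d, c + d) if k % 2 else (c, d)
            return pair(n)[0]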

  1. An onboard star identification algorithm

    NASA Astrophysics Data System (ADS)

    Ha, Kong; Femiano, Michael

    The paper presents the autonomous Initial Stellar Acquisition (ISA) algorithm developed for the X-Ray Timing Explorer for providing the attitude quaternion within the desired accuracy, based on the one-axis attitude knowledge (through the use of the Digital Sun Sensor, CCD Star Trackers, and the onboard star catalog, OSC). Mathematical analysis leads to an accurate measure of the performance of the algorithm as a function of various parameters, such as the probability of a tracked star being in the OSC, the sensor noise level, and the number of stars matched. It is shown that the simplicity, tractability, and robustness of the ISA algorithm, compared to a general three-axis attitude determination algorithm, make it a viable on-board solution.

  2. Scheduling Jobs with Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Ferrolho, António; Crisóstomo, Manuel

    Most scheduling problems are NP-hard; the time required to solve the problem optimally increases exponentially with the size of the problem. Scheduling problems have important applications, and a number of heuristic algorithms have been proposed to determine relatively good solutions in polynomial time. Recently, genetic algorithms (GA) have been successfully used to solve scheduling problems, as shown by the growing number of papers. GAs are known to be among the most efficient algorithms for solving scheduling problems. But when a GA is applied to scheduling problems, various crossover and mutation operators can be applicable. This paper presents and examines a new concept of genetic operators for scheduling problems. A software tool called hybrid and flexible genetic algorithm (HybFlexGA) was developed to examine the performance of various crossover and mutation operators by computing simulations of job scheduling problems.
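
    As an example of a permutation-preserving operator of the kind such a tool examines (the paper's own operators are not reproduced here), a common variant of order crossover (OX) copies a slice from one parent and fills the remaining positions in the order the missing genes appear in the other parent:

        import random

        def order_crossover(p1, p2):
            # Copy a random slice from parent 1, then fill the remaining
            # positions with the missing genes in parent-2 order.
            n = len(p1)
            a, b = sorted(random.sample(range(n), 2))
            child = [None] * n
            child[a:b + 1] = p1[a:b + 1]
            used = set(p1[a:b + 1])
            fill = iter(g for g in p2 if g not in used)
            for i in list(range(b + 1, n)) + list(range(a)):
                child[i] = next(fill)
            return child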

  3. Recursive Algorithm For Linear Regression

    NASA Technical Reports Server (NTRS)

    Varanasi, S. V.

    1988-01-01

    Order of model determined easily. Linear-regression algorithm includes recursive equations for coefficients of model of increased order. Algorithm eliminates duplicative calculations and facilitates search for minimum order of linear-regression model that fits set of data satisfactorily.
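
    The summary does not reproduce the recursions, but their flavor is that of standard recursive least squares, sketched below for a fixed model order; the published algorithm additionally recurses over increasing model order.

        import numpy as np

        def rls_update(theta, P, x, y, lam=1.0):
            # One recursive least-squares step for a new sample (x, y);
            # lam is an optional forgetting factor (lam = 1 for plain RLS).
            x = x.reshape(-1, 1)
            K = P @ x / (lam + (x.T @ P @ x).item())   # gain vector
            err = y - (x.T @ theta).item()             # prediction residual
            theta = theta + K.ravel() * err
            P = (P - K @ x.T @ P) / lam
            return theta, P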

  4. Algorithmic complexity of a protein

    NASA Astrophysics Data System (ADS)

    Dewey, T. Gregory

    1996-07-01

    The information contained in a protein's amino acid sequence dictates its three-dimensional structure. To quantitate the transfer of information that occurs in the protein folding process, the Kolmogorov information entropy or algorithmic complexity of the protein structure is investigated. The algorithmic complexity of an object provides a means of quantitating its information content. Recent results have indicated that the algorithmic complexity of microstates of certain statistical mechanical systems can be estimated from the thermodynamic entropy. In the present work, it is shown that the algorithmic complexity of a protein is given by its configurational entropy. Using this result, a quantitative estimate of the information content of a protein's structure is made and is compared to the information content of the sequence. Additionally, the mutual information between sequence and structure is determined. It is seen that virtually all the information contained in the protein structure is shared with the sequence.

  5. An onboard star identification algorithm

    NASA Technical Reports Server (NTRS)

    Ha, Kong; Femiano, Michael

    1993-01-01

    The paper presents the autonomous Initial Stellar Acquisition (ISA) algorithm developed for the X-Ray Timing Explorer for providing the attitude quaternion within the desired accuracy, based on the one-axis attitude knowledge (through the use of the Digital Sun Sensor, CCD Star Trackers, and the onboard star catalog, OSC). Mathematical analysis leads to an accurate measure of the performance of the algorithm as a function of various parameters, such as the probability of a tracked star being in the OSC, the sensor noise level, and the number of stars matched. It is shown that the simplicity, tractability, and robustness of the ISA algorithm, compared to a general three-axis attitude determination algorithm, make it a viable on-board solution.

  6. Cascade Error Projection: A New Learning Algorithm

    NASA Technical Reports Server (NTRS)

    Duong, T. A.; Stubberud, A. R.; Daud, T.; Thakoor, A. P.

    1995-01-01

    A new neural network architecture and a hardware implementable learning algorithm is proposed. The algorithm, called cascade error projection (CEP), handles lack of precision and circuit noise better than existing algorithms.

  7. Belief network algorithms: A study of performance

    SciTech Connect

    Jitnah, N.

    1996-12-31

    This abstract gives an overview of the work. We present a survey of Belief Network algorithms and propose a domain characterization system to be used as a basis for algorithm comparison and for predicting algorithm performance.

  8. Type II universal spacetimes

    NASA Astrophysics Data System (ADS)

    Hervik, S.; Málek, T.; Pravda, V.; Pravdová, A.

    2015-12-01

    We study type II universal metrics of the Lorentzian signature. These metrics simultaneously solve vacuum field equations of all theories of gravitation with the Lagrangian being a polynomial curvature invariant constructed from the metric, the Riemann tensor and its covariant derivatives of an arbitrary order. We provide examples of type II universal metrics for all composite number dimensions. On the other hand, we have no examples for prime number dimensions and we prove the non-existence of type II universal spacetimes in five dimensions. We also present type II vacuum solutions of selected classes of gravitational theories, such as Lovelock, quadratic and L(Riemann) gravities.

  9. Genetic algorithms as discovery programs

    SciTech Connect

    Hilliard, M.R.; Liepins, G.

    1986-01-01

    Genetic algorithms are mathematical counterparts to natural selection and gene recombination. As such, they have provided one of the few significant breakthroughs in machine learning. Used with appropriate reward functions and apportionment of credit, they have been successfully applied to gas pipeline operation, x-ray registration and mathematical optimization problems. This paper discusses the basics of genetic algorithms, describes a few successes, and reports on current progress at Oak Ridge National Laboratory in applications to set covering and simulated robots.

  10. A retrodictive stochastic simulation algorithm

    SciTech Connect

    Vaughan, T.G. Drummond, P.D.; Drummond, A.J.

    2010-05-20

    In this paper we describe a simple method for inferring the initial states of systems evolving stochastically according to master equations, given knowledge of the final states. This is achieved through the use of a retrodictive stochastic simulation algorithm which complements the usual predictive stochastic simulation approach. We demonstrate the utility of this new algorithm by applying it to example problems, including the derivation of likely ancestral states of a gene sequence given a Markovian model of genetic mutation.
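
    For contrast, the usual predictive stochastic simulation algorithm that the retrodictive method complements is, in its standard Gillespie form, the following: draw an exponential waiting time from the total propensity, then choose a reaction proportionally to its rate.

        import math, random

        def gillespie(state, propensities, transitions, t_end):
            # state: species counts; propensities: functions a_i(state);
            # transitions: per-reaction state-change vectors.
            t, path = 0.0, [(0.0, list(state))]
            while t < t_end:
                rates = [a(state) for a in propensities]
                total = sum(rates)
                if total == 0.0:
                    break
                t += -math.log(1.0 - random.random()) / total  # waiting time
                r = random.random() * total                    # pick a reaction
                for rate, change in zip(rates, transitions):
                    if r < rate:
                        state = [s + c for s, c in zip(state, change)]
                        break
                    r -= rate
                path.append((t, list(state)))
            return path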

  11. Fully relativistic lattice Boltzmann algorithm

    SciTech Connect

    Romatschke, P.; Mendoza, M.; Succi, S.

    2011-09-15

    Starting from the Maxwell-Juettner equilibrium distribution, we develop a relativistic lattice Boltzmann (LB) algorithm capable of handling ultrarelativistic systems with flat, but expanding, spacetimes. The algorithm is validated through simulations of a quark-gluon plasma, yielding excellent agreement with hydrodynamic simulations. The present scheme opens the possibility of transferring the recognized computational advantages of lattice kinetic theory to the context of both weakly and ultrarelativistic systems.

  12. High-speed CORDIC algorithm

    NASA Astrophysics Data System (ADS)

    El-Guibaly, Fayez; Sabaa, A.

    1996-10-01

    In this paper, we introduce modifications on the classic CORDIC algorithm to reduce the number of iterations, and hence the rounding noise. The modified algorithm needs, at most, half the number of iterations to achieve the same accuracy as the classical one. The modifications are applicable to linear, circular and hyperbolic CORDIC in both vectoring and rotation modes. Simulations illustrate the effect of the new modifications.

  13. Localization algorithm for acoustic emission

    NASA Astrophysics Data System (ADS)

    Salinas, V.; Vargas, Y.; Ruzzante, J.; Gaete, L.

    2010-01-01

    In this paper, an iterative algorithm for localization of acoustic emission (AE) sources is presented. The main advantage of the system is that it is independent of the researcher's skill in setting the signal level used to trigger the signal. The system was tested in cylindrical samples with an AE source localized at a known position; the precision of the source determination was about 2 mm, better than the precision obtained with classic localization algorithms (~1 cm).

  14. CORDIC Algorithms: Theory And Extensions

    NASA Astrophysics Data System (ADS)

    Delosme, Jean-Marc

    1989-11-01

    Optimum algorithms for signal processing are notoriously costly to implement since they usually require intensive linear algebra operations to be performed at very high rates. In these cases a cost-effective solution is to design a pipelined or parallel architecture with special-purpose VLSI processors. One may often lower the hardware cost of such a dedicated architecture by using processors that implement CORDIC-like arithmetic algorithms. Indeed, with CORDIC algorithms, the evaluation and the application of an operation, such as determining a rotation that brings a vector onto another one and rotating other vectors by that amount, require the same time on identical processors and can be fully overlapped in most cases, thus leading to highly efficient implementations. We have shown earlier that a necessary condition for a CORDIC-type algorithm to exist is that the function to be implemented can be represented in terms of a matrix exponential. This paper refines this condition to the ability to represent the desired function in terms of a rational representation of a matrix exponential. This insight gives us a powerful tool for the design of new CORDIC algorithms. This is demonstrated by rederiving classical CORDIC algorithms and introducing several new ones, for Jacobi rotations, three and higher dimensional rotations, etc.

  15. Multithreaded Algorithms for Graph Coloring

    SciTech Connect

    Catalyurek, Umit V.; Feo, John T.; Gebremedhin, Assefaw H.; Halappanavar, Mahantesh; Pothen, Alex

    2012-10-21

    Graph algorithms are challenging to parallelize when high performance and scalability are primary goals. Low concurrency, poor data locality, irregular access patterns, and a high ratio of data access to computation are among the chief reasons for the challenge. The performance implication of these features is exacerbated on distributed memory machines. More success is being achieved on shared-memory, multi-core architectures supporting multithreading. We consider a prototypical graph problem, coloring, and show how a greedy algorithm for solving it can be effectively parallelized on multithreaded architectures. We present in particular two different parallel algorithms. The first relies on speculation and iteration, and is suitable for any shared-memory, multithreaded system. The second uses dataflow principles and is targeted at the massively multithreaded Cray XMT system. We benchmark the algorithms on three different platforms and demonstrate scalable runtime performance. In terms of quality of solution, both algorithms use nearly the same number of colors as the serial algorithm.
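
    A serial sketch of the first (speculate-and-iterate) algorithm's structure: every pending vertex is greedily colored as if independent, then like-colored neighbors are detected and one of each pair is recolored in the next round. Run sequentially the first pass is already conflict-free; conflicts arise only from the concurrent, stale reads of the actual parallel version.

        def speculative_coloring(adj):
            # adj: dict mapping vertex id (int) -> iterable of neighbour ids.
            color = {v: -1 for v in adj}
            pending = list(adj)
            while pending:
                for v in pending:          # speculative phase: concurrent in
                    taken = {color[u] for u in adj[v]}   # the real algorithm
                    c = 0
                    while c in taken:
                        c += 1
                    color[v] = c
                # conflict detection: of two like-colored neighbours, the one
                # with the larger id is recolored in the next round
                pending = [v for v in adj
                           if any(color[u] == color[v] and u < v
                                  for u in adj[v])]
            return color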

  16. The value of care algorithms.

    PubMed

    Myers, Timothy

    2006-09-01

    The use of protocols or care algorithms in medical facilities has increased in the managed care environment. The definition and application of care algorithms, with a particular focus on the treatment of acute bronchospasm, are explored in this review. The benefits and goals of using protocols, especially in the treatment of asthma, to standardize patient care based on clinical guidelines and evidence-based medicine are explained. Ideally, evidence-based protocols should translate research findings into best medical practices that would serve to better educate patients and their medical providers who are administering these protocols. Protocols should include evaluation components that can monitor, through some mechanism of quality assurance, the success and failure of the instrument so that modifications can be made as necessary. The development and design of an asthma care algorithm can be accomplished by using a four-phase approach: phase 1, identifying demographics, outcomes, and measurement tools; phase 2, reviewing, negotiating, and standardizing best practice; phase 3, testing and implementing the instrument and collecting data; and phase 4, analyzing the data and identifying areas of improvement and future research. The experiences of one medical institution that implemented an asthma care algorithm in the treatment of pediatric asthma are described. Their care algorithms served as tools for decision makers to provide optimal asthma treatment in children. In addition, the studies that used the asthma care algorithm to determine the efficacy and safety of ipratropium bromide and levalbuterol in children with asthma are described. PMID:16945065

  17. Bis(thiosemicarbazonato) chelates of Co(II), Ni(II), Cu(II), Pd(II) and Pt(II)

    NASA Astrophysics Data System (ADS)

    Chandra, Sulekh; Singh, R.

    1985-01-01

    Bis chelates of Co(II), Ni(II), Cu(II), Pd(II) and Pt(II) with the enolic form of diethyl ketone and methyl n-propyl thiosemicarbazones were synthesized and characterized by elemental analyses, magnetic moments, i.r. and electronic and electron spin resonance spectral studies. All the complexes were found to have the composition ML2 [where M = Co(II), Ni(II), Cu(II), Pd(II) and Pt(II) and L = thiosemicarbazones of diethyl ketone and methyl n-propyl ketone]. Co(II) and Cu(II) complexes are paramagnetic and may have polymeric six-coordinate octahedral and square planar geometries, respectively. The Ni(II), Pd(II) and Pt(II) complexes are diamagnetic and may have square planar geometries. Pyridine adducts (ML2·2Py) of Ni(II) and Cu(II) complexes were also prepared and characterized.

  18. Recursive Branching Simulated Annealing Algorithm

    NASA Technical Reports Server (NTRS)

    Bolcar, Matthew; Smith, J. Scott; Aronstein, David

    2012-01-01

    This innovation is a variation of a simulated-annealing optimization algorithm that uses a recursive-branching structure to parallelize the search of a parameter space for the globally optimal solution to an objective. The algorithm has been demonstrated to be more effective at searching a parameter space than traditional simulated-annealing methods for a particular problem of interest, and it can readily be applied to a wide variety of optimization problems, including those with a parameter space having both discrete-value parameters (combinatorial) and continuous-variable parameters. It can take the place of a conventional simulated-annealing, Monte Carlo, or random-walk algorithm. In a conventional simulated-annealing (SA) algorithm, a starting configuration is randomly selected within the parameter space. The algorithm randomly selects another configuration from the parameter space and evaluates the objective function for that configuration. If the objective function value is better than the previous value, the new configuration is adopted as the new point of interest in the parameter space. If the objective function value is worse than the previous value, the new configuration may be adopted, with a probability determined by a temperature parameter, used in analogy to annealing in metals. As the optimization continues, the region of the parameter space from which new configurations can be selected shrinks, and in conjunction with lowering the annealing temperature (and thus lowering the probability for adopting configurations in parameter space with worse objective functions), the algorithm can converge on the globally optimal configuration. The Recursive Branching Simulated Annealing (RBSA) algorithm shares some features with the SA algorithm, notably including the basic principles that a starting configuration is randomly selected from within the parameter space, the algorithm tests other configurations with the goal of finding the globally optimal
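
    For comparison with the recursive-branching variant, the conventional simulated-annealing loop described above can be sketched as follows (minimization form; the neighborhood function, cooling schedule and step count are arbitrary placeholders):

        import math, random

        def simulated_annealing(objective, neighbor, x0, t0=1.0,
                                cooling=0.995, steps=10000):
            # Accept worse moves with probability exp(-delta/T), lowering
            # the temperature T as the search proceeds.
            x, fx = x0, objective(x0)
            best, fbest = x, fx
            T = t0
            for _ in range(steps):
                y = neighbor(x)
                fy = objective(y)
                if fy <= fx or random.random() < math.exp(-(fy - fx) / T):
                    x, fx = y, fy
                    if fx < fbest:
                        best, fbest = x, fx
                T *= cooling
            return best, fbest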

  19. Tactical Synthesis Of Efficient Global Search Algorithms

    NASA Technical Reports Server (NTRS)

    Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.

    2009-01-01

    Algorithm synthesis transforms a formal specification into an efficient algorithm to solve a problem. Algorithm synthesis in Specware combines the formal specification of a problem with a high-level algorithm strategy. To derive an efficient algorithm, a developer must define operators that refine the algorithm by combining the generic operators in the algorithm with the details of the problem specification. This derivation requires skill and a deep understanding of the problem and the algorithmic strategy. In this paper we introduce two tactics to ease this process. The tactics serve a similar purpose to tactics used for determining indefinite integrals in calculus, that is, suggesting possible ways to attack the problem.

  20. Systolic array architecture for convolutional decoding algorithms: Viterbi algorithm and stack algorithm

    SciTech Connect

    Chang, C.Y.

    1986-01-01

    New results on efficient forms of decoding convolutional codes based on Viterbi and stack algorithms using systolic array architecture are presented. Some theoretical aspects of systolic arrays are also investigated. First, systolic array implementation of Viterbi algorithm is considered, and various properties of convolutional codes are derived. A technique called strongly connected trellis decoding is introduced to increase the efficient utilization of all the systolic array processors. The issues dealing with the composite branch metric generation, survivor updating, overall system architecture, throughput rate, and computations overhead ratio are also investigated. Second, the existing stack algorithm is modified and restated in a more concise version so that it can be efficiently implemented by a special type of systolic array called systolic priority queue. Three general schemes of systolic priority queue based on random access memory, shift register, and ripple register are proposed. Finally, a systematic approach is presented to design systolic arrays for certain general classes of recursively formulated algorithms.
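
    The survivor-updating core of the Viterbi algorithm that the systolic array parallelizes is, in generic trellis form, the dynamic program below; it is written over an abstract state/transition model with nonzero probabilities, not the specific convolutional-code trellis of the paper.

        import math

        def viterbi(obs, states, start_p, trans_p, emit_p):
            # For each state keep the best log-metric predecessor (the
            # "survivor"), then backtrack from the best final state.
            V = [{s: math.log(start_p[s] * emit_p[s][obs[0]]) for s in states}]
            back = []
            for o in obs[1:]:
                col, ptr = {}, {}
                for s in states:
                    prev = max(states,
                               key=lambda r: V[-1][r] + math.log(trans_p[r][s]))
                    col[s] = V[-1][prev] + math.log(trans_p[prev][s] * emit_p[s][o])
                    ptr[s] = prev
                V.append(col)
                back.append(ptr)
            path = [max(states, key=lambda s: V[-1][s])]
            for ptr in reversed(back):
                path.append(ptr[path[-1]])
            return list(reversed(path))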

  1. Unidirectional rotating coordinate rotation digital computer algorithm based on rotational phase estimation

    NASA Astrophysics Data System (ADS)

    Zhang, Chaozhu; Han, Jinan; Yan, Huizhi

    2015-06-01

    The improved coordinate rotation digital computer (CORDIC) algorithm achieves high-precision, high-resolution phase rotation, but it suffers from shortcomings such as a large number of iterations and a long system delay. This paper puts forward a unidirectional rotating CORDIC algorithm to solve these problems. First, using under-damping theory, a part of the unidirectional phase rotations is carried out. Then, the threshold value of the angle is determined based on the phase rotation estimation method. Finally, rotation phase estimation completes the remaining angle iterations. Furthermore, the paper simulates and implements the numerically controlled oscillator with the Quartus II and ModelSim software tools. According to the experimental results, the algorithm reduces the number of iterations and the judgment of the sign bit, so that it decreases system delay and resource utilization and improves throughput. We also analyze the error introduced by this algorithm. The results show that the algorithm has good application prospects in global navigation satellite systems and channelized receivers.

  2. Unidirectional rotating coordinate rotation digital computer algorithm based on rotational phase estimation.

    PubMed

    Zhang, Chaozhu; Han, Jinan; Yan, Huizhi

    2015-06-01

    The improved coordinate rotation digital computer (CORDIC) algorithm achieves high-precision, high-resolution phase rotation, but it suffers from shortcomings such as a large number of iterations and a long system delay. This paper puts forward a unidirectional rotating CORDIC algorithm to solve these problems. First, using under-damping theory, a part of the unidirectional phase rotations is carried out. Then, the threshold value of the angle is determined based on the phase rotation estimation method. Finally, rotation phase estimation completes the remaining angle iterations. Furthermore, the paper simulates and implements the numerically controlled oscillator with the Quartus II and ModelSim software tools. According to the experimental results, the algorithm reduces the number of iterations and the judgment of the sign bit, so that it decreases system delay and resource utilization and improves throughput. We also analyze the error introduced by this algorithm. The results show that the algorithm has good application prospects in global navigation satellite systems and channelized receivers. PMID:26133856

  3. Ovarian Cancer Stage II

    MedlinePlus

    Title: Ovarian Cancer Stage II. Description: Three-panel drawing of stage II ovarian cancer.

  4. World War II Homefront.

    ERIC Educational Resources Information Center

    Garcia, Rachel

    2002-01-01

    Presents an annotated bibliography that provides Web sites focusing on the U.S. homefront during World War II. Covers various topics such as the homefront, Japanese Americans, women during World War II, posters, and African Americans. Includes lesson plan sources and a list of additional resources. (CMK)

  5. LSPRAY-II: A Lagrangian Spray Module

    NASA Technical Reports Server (NTRS)

    Raju, M. S.

    2004-01-01

    LSPRAY-II is a Lagrangian spray solver developed for application with parallel computing and unstructured grids. It is designed to be massively parallel and could easily be coupled with any existing gas-phase flow and/or Monte Carlo Probability Density Function (PDF) solvers. The solver accommodates the use of an unstructured mesh with mixed elements of either triangular, quadrilateral, and/or tetrahedral type for the gas flow grid representation. It is mainly designed to predict the flow, thermal and transport properties of a rapidly vaporizing spray because of its importance in aerospace applications. The manual provides the user with an understanding of various models involved in the spray formulation, its code structure and solution algorithm, and various other issues related to parallelization and its coupling with other solvers. With the development of LSPRAY-II, we have advanced the state-of-the-art in spray computations in several important ways.

  6. Mathematical algorithms for approximate reasoning

    NASA Technical Reports Server (NTRS)

    Murphy, John H.; Chay, Seung C.; Downs, Mary M.

    1988-01-01

    Most state-of-the-art expert system environments contain a single and often ad hoc strategy for approximate reasoning. Some environments provide facilities to program the approximate reasoning algorithms. However, the next generation of expert systems should have an environment which contains a choice of several mathematical algorithms for approximate reasoning. To meet the need for validatable and verifiable coding, the expert system environment must no longer depend upon ad hoc reasoning techniques but instead must include mathematically rigorous techniques for approximate reasoning. Popular approximate reasoning techniques are reviewed, including: certainty factors, belief measures, Bayesian probabilities, fuzzy logic, and Shafer-Dempster techniques for reasoning. The focus is on a group of mathematically rigorous algorithms for approximate reasoning that could form the basis of a next-generation expert system environment. These algorithms are based upon the axioms of set theory and probability theory. To separate these algorithms for approximate reasoning, various conditions of mutual exclusivity and independence are imposed upon the assertions. Approximate reasoning algorithms presented include: reasoning with statistically independent assertions, reasoning with mutually exclusive assertions, reasoning with assertions that exhibit minimum overlap within the state space, reasoning with assertions that exhibit maximum overlap within the state space (i.e., fuzzy logic), pessimistic reasoning (i.e., worst case analysis), optimistic reasoning (i.e., best case analysis), and reasoning with assertions with absolutely no knowledge of the possible dependency among the assertions. A robust environment for expert system construction should include the two modes of inference: modus ponens and modus tollens. Modus ponens inference is based upon reasoning towards the conclusion in a statement of logical implication, whereas modus tollens inference is based upon reasoning away
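
    One plausible reading of the combination rules in the taxonomy above, sketched as small Python functions. The pairing of "minimum overlap" and the pessimistic/optimistic modes with the Fréchet bounds is our interpretation, not taken from the paper; the fuzzy min/max pairing is stated in the abstract itself.

```python
def and_independent(p, q):        # statistically independent assertions
    return p * q

def or_independent(p, q):
    return p + q - p * q

def and_exclusive(p, q):          # mutually exclusive assertions never co-occur
    return 0.0

def or_exclusive(p, q):
    return min(1.0, p + q)

def and_fuzzy(p, q):              # maximum overlap within the state space
    return min(p, q)

def or_fuzzy(p, q):
    return max(p, q)

def and_pessimistic(p, q):        # worst-case analysis (Frechet lower bound)
    return max(0.0, p + q - 1.0)

def and_optimistic(p, q):         # best-case analysis (Frechet upper bound)
    return min(p, q)

print(and_independent(0.8, 0.6), and_fuzzy(0.8, 0.6), and_pessimistic(0.8, 0.6))
```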

  7. Improved autonomous star identification algorithm

    NASA Astrophysics Data System (ADS)

    Luo, Li-Yan; Xu, Lu-Ping; Zhang, Hua; Sun, Jing-Rong

    2015-06-01

    The log-polar transform (LPT) is introduced into star identification because of its rotation invariance. An improved autonomous star identification algorithm is proposed in this paper to avoid the circular shift of the feature vector and to reduce the time consumed in star identification algorithms using the LPT. In the proposed algorithm, the star pattern of the same navigation star remains unchanged when the stellar image is rotated, which reduces the star identification time. The logarithmic values of the plane distances between the navigation star and its neighbor stars are adopted to structure the feature vector of the navigation star, which enhances the robustness of star identification. In addition, some efforts are made to find the identification result with fewer comparisons, instead of searching the whole feature database. The simulation results demonstrate that the proposed algorithm can effectively accelerate star identification. Moreover, the recognition rate and robustness of the proposed algorithm are better than those of the LPT algorithm and the modified grid algorithm. Project supported by the National Natural Science Foundation of China (Grant Nos. 61172138 and 61401340), the Open Research Fund of the Academy of Satellite Application, China (Grant No. 2014_CXJJ-DH_12), the Fundamental Research Funds for the Central Universities, China (Grant Nos. JB141303 and 201413B), the Natural Science Basic Research Plan in Shaanxi Province, China (Grant No. 2013JQ8040), the Research Fund for the Doctoral Program of Higher Education of China (Grant No. 20130203120004), and the Xi’an Science and Technology Plan, China (Grant No. CXY1350(4)).

  8. NSLS-II RF BEAM POSITION MONITOR

    SciTech Connect

    Vetter, K.; Della Penna, A. J.; DeLong, J.; Kosciuk, B.; Mead, J.; Pinayev, I.; Singh, O.; Tian, Y.; Ha, K.; Portmann, G.; Sebek J.

    2011-03-28

    An internal R&D program has been undertaken at BNL to develop a sub-micron RF Beam Position Monitor (BPM) for the NSLS-II 3rd generation light source that is currently under construction. The BPM R&D program started in August 2009, and successful beam tests were conducted 15 months from the start of the program. The NSLS-II RF BPM has been designed to meet all requirements for the NSLS-II injection system and storage ring. Housing the RF BPMs in ±0.1 °C thermally controlled racks provides sub-micron stabilization without active correction. An active pilot tone has been incorporated to aid long-term (8 h minimum) stabilization to 200 nm RMS. The development of a sub-micron BPM for the NSLS-II has successfully demonstrated performance and stability. The pilot-tone calibration combiner and RF synthesizer have been implemented, and algorithm development is underway. The program is currently on schedule to start production development of 60 injection BPMs in the fall of 2011. The production of approximately 250 storage ring BPMs will overlap the injection schedule.

  9. SDR input power estimation algorithms

    NASA Astrophysics Data System (ADS)

    Briones, J. C.; Nappier, J. M.

    The General Dynamics (GD) S-Band software defined radio (SDR) in the Space Communications and Navigation (SCAN) Testbed on the International Space Station (ISS) provides experimenters an opportunity to develop and demonstrate experimental waveforms in space. The SDR has an analog and a digital automatic gain control (AGC), and the response of the AGCs to changes in SDR input power and temperature was characterized prior to the launch and installation of the SCAN Testbed on the ISS. The AGCs were used to estimate the SDR input power and SNR of the received signal, and the characterization results showed a nonlinear response to SDR input power and temperature. In order to estimate the SDR input from the AGCs, three algorithms were developed and implemented in the ground software of the SCAN Testbed. The first is a linear straight-line estimator that uses the digital AGC and the temperature to estimate the SDR input power over a narrower section of the SDR input power range. The second is a linear adaptive filter that uses both AGCs and the temperature to estimate the SDR input power over a wide input power range. The third uses neural networks to estimate the input power over a wide range. This paper describes the algorithms in detail and their associated performance in estimating the SDR input power.
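
    A hedged sketch of the first of the three estimators: an ordinary least-squares "straight line" fit of input power against the digital AGC reading and temperature. The characterization data and coefficients below are synthetic placeholders, not values from the SCAN Testbed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical characterization data: digital AGC reading, temperature (C),
# and true SDR input power (dBm). The real data came from pre-launch tests.
agc  = rng.uniform(10, 60, 200)
temp = rng.uniform(15, 35, 200)
true_power = -120.0 + 0.8 * agc - 0.05 * temp + rng.normal(0, 0.2, 200)

# Linear straight-line estimator: fit power = a*agc + b*temp + c over a
# narrow input-power range by ordinary least squares.
A = np.column_stack([agc, temp, np.ones_like(agc)])
coef, *_ = np.linalg.lstsq(A, true_power, rcond=None)

est = A @ coef
print("RMS error (dB):", np.sqrt(np.mean((est - true_power) ** 2)))
```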

  10. Ensemble algorithms in reinforcement learning.

    PubMed

    Wiering, Marco A; van Hasselt, Hado

    2008-08-01

    This paper describes several ensemble methods that combine multiple different reinforcement learning (RL) algorithms in a single agent. The aim is to enhance learning speed and final performance by combining the chosen actions or action probabilities of different RL algorithms. We designed and implemented four different ensemble methods combining the following five different RL algorithms: Q-learning, Sarsa, actor-critic (AC), QV-learning, and AC learning automaton. The intuitively designed ensemble methods, namely, majority voting (MV), rank voting, Boltzmann multiplication (BM), and Boltzmann addition, combine the policies derived from the value functions of the different RL algorithms, in contrast to previous work where ensemble methods have been used in RL for representing and learning a single value function. We show experiments on five maze problems of varying complexity; the first problem is simple, but the other four maze tasks are of a dynamic or partially observable nature. The results indicate that the BM and MV ensembles significantly outperform the single RL algorithms. PMID:18632380
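
    Two of the combination schemes named above, majority voting and Boltzmann multiplication, are easy to sketch. The action-preference matrix below is illustrative, and the temperature parameter is an assumption.

```python
import numpy as np

def majority_voting(prefs):
    """prefs: (n_algorithms, n_actions) action preferences.
    Each RL algorithm casts one vote for its greedy action."""
    votes = np.zeros(prefs.shape[1])
    for row in prefs:
        votes[np.argmax(row)] += 1.0
    return votes / votes.sum()

def boltzmann_multiplication(prefs, tau=1.0):
    """Multiply the per-algorithm Boltzmann policies, then renormalize."""
    policies = np.exp(prefs / tau)
    policies /= policies.sum(axis=1, keepdims=True)
    combined = policies.prod(axis=0)
    return combined / combined.sum()

prefs = np.array([[1.0, 0.2, 0.1],    # e.g. Q-learning action values
                  [0.9, 0.8, 0.0],    # e.g. Sarsa action values
                  [0.3, 1.1, 0.2]])   # e.g. actor-critic preferences
print(majority_voting(prefs))
print(boltzmann_multiplication(prefs))
```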

  11. Conflict-Aware Scheduling Algorithm

    NASA Technical Reports Server (NTRS)

    Wang, Yeou-Fang; Borden, Chester

    2006-01-01

    A conflict-aware scheduling algorithm is being developed to help automate the allocation of NASA's Deep Space Network (DSN) antennas and equipment that are used to communicate with interplanetary scientific spacecraft. The current approach for scheduling DSN ground resources seeks to provide an equitable distribution of tracking services among the multiple scientific missions and is very labor intensive. Due to the large (and increasing) number of mission requests for DSN services, combined with technical and geometric constraints, the DSN is highly oversubscribed. To help automate the process, and to reduce the DSN and spaceflight project labor effort required for initiating, maintaining, and negotiating schedules, a new scheduling algorithm is being developed. The scheduling algorithm generates a "conflict-aware" schedule, where all requests are scheduled based on a dynamic priority scheme. The conflict-aware scheduling algorithm allocates all requests for DSN tracking services while identifying and maintaining the conflicts to facilitate collaboration and negotiation between spaceflight missions. This contrasts with traditional "conflict-free" scheduling algorithms that assign tracks that are not in conflict and mark the remainder as unscheduled. In the case where full schedule automation is desired (based on mission/event priorities, fairness, allocation rules, geometric constraints, and ground system capabilities/constraints), a conflict-free schedule can easily be created from the conflict-aware schedule by removing lower priority items that are in conflict.
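
    A toy sketch of the conflict-aware idea under heavy simplifying assumptions (one antenna, time overlap as the only conflict, made-up mission names): schedule everything, record the conflicts, then derive a conflict-free schedule by dropping the lower-priority member of each conflicting pair, as the abstract describes.

```python
from dataclasses import dataclass

@dataclass
class Request:
    mission: str
    start: int      # hours
    end: int
    priority: int   # higher = more important

def conflict_aware_schedule(requests):
    """Schedule every request and record which pairs overlap, rather than
    rejecting the losers outright."""
    conflicts = []
    for i, a in enumerate(requests):
        for b in requests[i + 1:]:
            if a.start < b.end and b.start < a.end:
                conflicts.append((a, b))
    return requests, conflicts

def to_conflict_free(requests, conflicts):
    """Derive a conflict-free schedule by dropping the lower-priority
    member of each conflicting pair."""
    dropped = set()
    for a, b in conflicts:
        loser = a if a.priority < b.priority else b
        dropped.add(loser.mission)
    return [r for r in requests if r.mission not in dropped]

reqs = [Request("MRO", 0, 4, 3), Request("Voyager", 2, 6, 5), Request("Juno", 7, 9, 2)]
sched, confl = conflict_aware_schedule(reqs)
print([r.mission for r in to_conflict_free(sched, confl)])  # ['Voyager', 'Juno']
```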

  12. Fourier Lucas-Kanade algorithm.

    PubMed

    Lucey, Simon; Navarathna, Rajitha; Ashraf, Ahmed Bilal; Sridharan, Sridha

    2013-06-01

    In this paper, we propose a framework for both gradient descent image and object alignment in the Fourier domain. Our method centers upon the classical Lucas & Kanade (LK) algorithm where we represent the source and template/model in the complex 2D Fourier domain rather than in the spatial 2D domain. We refer to our approach as the Fourier LK (FLK) algorithm. The FLK formulation is advantageous when one preprocesses the source image and template/model with a bank of filters (e.g., oriented edges, Gabor, etc.) as 1) it can handle substantial illumination variations, 2) the inefficient preprocessing filter bank step can be subsumed within the FLK algorithm as a sparse diagonal weighting matrix, 3) unlike traditional LK, the computational cost is invariant to the number of filters and as a result is far more efficient, and 4) this approach can be extended to the Inverse Compositional (IC) form of the LK algorithm where nearly all steps (including Fourier transform and filter bank preprocessing) can be precomputed, leading to an extremely efficient and robust approach to gradient descent image matching. Further, these computational savings translate to nonrigid object alignment tasks that are considered extensions of the LK algorithm, such as those found in Active Appearance Models (AAMs). PMID:23599053

  13. SDR Input Power Estimation Algorithms

    NASA Technical Reports Server (NTRS)

    Nappier, Jennifer M.; Briones, Janette C.

    2013-01-01

    The General Dynamics (GD) S-Band software defined radio (SDR) in the Space Communications and Navigation (SCAN) Testbed on the International Space Station (ISS) provides experimenters an opportunity to develop and demonstrate experimental waveforms in space. The SDR has an analog and a digital automatic gain control (AGC), and the response of the AGCs to changes in SDR input power and temperature was characterized prior to the launch and installation of the SCAN Testbed on the ISS. The AGCs were used to estimate the SDR input power and SNR of the received signal, and the characterization results showed a nonlinear response to SDR input power and temperature. In order to estimate the SDR input from the AGCs, three algorithms were developed and implemented in the ground software of the SCAN Testbed. The first is a linear straight-line estimator that uses the digital AGC and the temperature to estimate the SDR input power over a narrower section of the SDR input power range. The second is a linear adaptive filter that uses both AGCs and the temperature to estimate the SDR input power over a wide input power range. The third uses neural networks to estimate the input power over a wide range. This paper describes the algorithms in detail and their associated performance in estimating the SDR input power.

  14. POSE Algorithms for Automated Docking

    NASA Technical Reports Server (NTRS)

    Heaton, Andrew F.; Howard, Richard T.

    2011-01-01

    POSE (relative position and attitude) can be computed in many different ways. Given a sensor that measures bearing to a finite number of spots corresponding to known features (such as a target) of a spacecraft, a number of different algorithms can be used to compute the POSE. NASA has sponsored the development of a flash LIDAR proximity sensor called the Vision Navigation Sensor (VNS) for use by the Orion capsule in future docking missions. This sensor generates data that can be used by a variety of algorithms to compute POSE solutions inside of 15 meters, including at the critical docking range of approximately 1-2 meters. Previously NASA participated in a DARPA program called Orbital Express that achieved the first automated docking for the American space program. During this mission a large set of high quality mated sensor data was obtained at what is essentially the docking distance. This data set is perhaps the most accurate truth data in existence for docking proximity sensors in orbit. In this paper, the flight data from Orbital Express is used to test POSE algorithms at 1.22 meters range. Two different POSE algorithms are tested for two different Fields-of-View (FOVs) and two different pixel noise levels. The results of the analysis are used to predict future performance of the POSE algorithms with VNS data.

  15. Benchmarking image fusion algorithm performance

    NASA Astrophysics Data System (ADS)

    Howell, Christopher L.

    2012-06-01

    Registering two images produced by two separate imaging sensors having different detector sizes and fields of view requires one of the images to undergo transformation operations that may cause its overall quality to degrade with regard to visual task performance. This possible change in image quality could add to an already existing difference in measured task performance. Ideally, a fusion algorithm would take as input unaltered outputs from each respective sensor used in the process. Therefore, quantifying how well an image fusion algorithm performs should be baselined against whether the fusion algorithm retained the performance benefit achievable by each independent spectral band being fused. This study investigates an identification perception experiment using a simple and intuitive process for discriminating between image fusion algorithm performances. The results from a classification experiment using information-theory-based image metrics are presented and compared to perception test results. The results show an effective performance benchmark for image fusion algorithms can be established using human perception test data. Additionally, image metrics have been identified that either agree with or surpass the performance benchmark established.

  16. Algorithms for automated DNA assembly

    PubMed Central

    Densmore, Douglas; Hsiau, Timothy H.-C.; Kittleson, Joshua T.; DeLoache, Will; Batten, Christopher; Anderson, J. Christopher

    2010-01-01

    Generating a defined set of genetic constructs within a large combinatorial space provides a powerful method for engineering novel biological functions. However, the process of assembling more than a few specific DNA sequences can be costly, time consuming and error prone. Even if a correct theoretical construction scheme is developed manually, it is likely to be suboptimal by any number of cost metrics. Modular, robust and formal approaches are needed for exploring these vast design spaces. By automating the design of DNA fabrication schemes using computational algorithms, we can eliminate human error while reducing redundant operations, thus minimizing the time and cost required for conducting biological engineering experiments. Here, we provide algorithms that optimize the simultaneous assembly of a collection of related DNA sequences. We compare our algorithms to an exhaustive search on a small synthetic dataset and our results show that our algorithms can quickly find an optimal solution. Comparison with random search approaches on two real-world datasets show that our algorithms can also quickly find lower-cost solutions for large datasets. PMID:20335162

  17. Algorithms, complexity, and the sciences.

    PubMed

    Papadimitriou, Christos

    2014-11-11

    Algorithms, perhaps together with Moore's law, compose the engine of the information technology revolution, whereas complexity--the antithesis of algorithms--is one of the deepest realms of mathematical investigation. After introducing the basic concepts of algorithms and complexity, and the fundamental complexity classes P (polynomial time) and NP (nondeterministic polynomial time, or search problems), we discuss briefly the P vs. NP problem. We then focus on certain classes between P and NP which capture important phenomena in the social and life sciences, namely the Nash equilibrium and other equilibria in economics and game theory, and certain processes in population genetics and evolution. Finally, an algorithm known as multiplicative weights update (MWU) provides an algorithmic interpretation of the evolution of allele frequencies in a population under sex and weak selection. All three of these equivalences are rife with domain-specific implications: The concept of Nash equilibrium may be less universal--and therefore less compelling--than has been presumed; selection on gene interactions may entail the maintenance of genetic variation for longer periods than selection on single alleles predicts; whereas MWU can be shown to maximize, for each gene, a convex combination of the gene's cumulative fitness in the population and the entropy of the allele distribution, an insight that may be pertinent to the maintenance of variation in evolution. PMID:25349382
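
    A minimal sketch of the multiplicative weights update rule discussed above, with a generic (1+eta)^gain update over synthetic payoffs; the learning rate and payoff matrix are assumptions, not values from the paper.

```python
import numpy as np

def mwu(payoffs, eta=0.1):
    """Multiplicative weights update over T rounds.
    payoffs: (T, n) matrix of per-round gains in [0, 1] for n experts
    (here, alleles). Weights track cumulative fitness while the
    normalization retains entropy in the distribution."""
    n = payoffs.shape[1]
    w = np.ones(n) / n
    for g in payoffs:
        w *= (1.0 + eta) ** g       # reward each expert by its gain
        w /= w.sum()                # renormalize to a distribution
    return w

rng = np.random.default_rng(1)
payoffs = rng.uniform(0, 1, size=(500, 4))
print(mwu(payoffs))
```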

  18. Reconstruction of rainfall events responsible for landslides using an algorithm

    NASA Astrophysics Data System (ADS)

    Melillo, Massimo; Brunetti, Maria Teresa; Gariano, Stefano Luigi; Guzzetti, Fausto; Peruccacci, Silvia

    2014-05-01

    In Italy, intense or prolonged rainfall is the primary trigger of damaging landslides. The identification of the rainfall conditions responsible for the initiation of landslides is a crucial issue and may contribute to reducing landslide risk. Objective criteria for the identification of rainfall conditions that could initiate slope failures are still lacking or ambiguous. The reconstruction of rainfall events able to trigger past landslides is usually performed manually by expert investigators. Here, we propose an algorithm that automatically reconstructs rainfall events from a series of hourly rainfall data. The automatic reconstruction reproduces the actions performed by an expert investigator that adopts empirical rules to define rainfall conditions that presumably initiated the documented landslides. The algorithm, which is implemented in R (http://www.r-project.org), performs three actions on the data series: (i) removes isolated events with negligible amounts of rainfall and random noise generated by the rain gauge; (ii) aggregates rainfall measurements in order to obtain a sequence of distinct rainfall events; (iii) identifies single or multiple rainfall conditions responsible for the slope failures. In particular, the algorithm calculates the duration, D, and the cumulated rainfall, E, for rainfall events, and for rainfall conditions that have resulted in landslides. A set of input parameters allows the automatic reconstruction of rainfall events in different physical settings and climatic conditions. We tested the algorithm using rainfall and landslide information available to us for Sicily, Southern Italy, in the period between January 2002 and December 2012. The algorithm reconstructed 13,537 rainfall events and 343 rainfall conditions as possible triggers of the 163 documented landslides. Most (87.7%) of the rainfall conditions obtained manually were reconstructed accurately. Use of the algorithm shall contribute to an objective and reproducible
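
    A compact Python sketch of the three actions the abstract attributes to the algorithm (the original is implemented in R), with hypothetical thresholds in place of the paper's climate-dependent input parameters.

```python
import numpy as np

def reconstruct_events(rain_mm, min_rate=0.2, max_gap_h=6, min_total=1.0):
    """Sketch of the three steps: (i) drop gauge noise, (ii) aggregate
    hourly data into distinct events, (iii) keep events large enough to
    be candidate triggers. Thresholds here are illustrative only.
    rain_mm: hourly rainfall series.
    Returns a list of (start, end, D_hours, E_mm) events."""
    r = np.where(rain_mm < min_rate, 0.0, rain_mm)   # (i) remove noise
    events, start, last_wet = [], None, None
    for t, v in enumerate(r):                        # (ii) aggregate
        if v > 0:
            if start is None:
                start = t
            last_wet = t
        elif start is not None and t - last_wet > max_gap_h:
            D, E = last_wet - start + 1, r[start:last_wet + 1].sum()
            if E >= min_total:                       # (iii) keep candidates
                events.append((start, last_wet, D, E))
            start = None
    if start is not None:
        D, E = last_wet - start + 1, r[start:last_wet + 1].sum()
        if E >= min_total:
            events.append((start, last_wet, D, E))
    return events

rain = np.zeros(48); rain[5:9] = [1.2, 3.0, 0.4, 0.8]; rain[30:32] = [0.1, 2.5]
print(reconstruct_events(rain))  # two events: hours 5-8 and hour 31
```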

  19. Characteristics and performance of the Improved Limb Atmospheric Spectrometer-II (ILAS-II) on board the ADEOS-II satellite

    NASA Astrophysics Data System (ADS)

    Nakajima, H.; Sugita, T.; Yokota, T.; Ishigaki, T.; Mogi, Y.; Araki, N.; Waragai, K.; Kimura, N.; Iwazawa, T.; Kuze, A.; Tanii, J.; Kawasaki, H.; Horikawa, M.; Togami, T.; Uemura, N.; Kobayashi, H.; Sasano, Y.

    2006-06-01

    The Improved Limb Atmospheric Spectrometer-II (ILAS-II) monitored components associated with Polar ozone depletion. ILAS-II was on board the Advanced Earth Observing Satellite-II (ADEOS-II, "Midori-II"), which was successfully launched on 14 December 2002 from the Tanegashima Space Center of the Japan Aerospace Exploration Agency (JAXA). ILAS-II used a solar occultation technique to measure vertical profiles of ozone (O3), nitric acid (HNO3), nitrogen dioxide (NO2), nitrous oxide (N2O), methane (CH4), water vapor (H2O), chlorine nitrate (ClONO2), dinitrogen pentoxide (N2O5), CFC-11, CFC-12 and aerosol extinction coefficients at high latitudes in both the Northern and Southern hemispheres. ILAS-II included Sun-tracking optics and four spectrometers, a Sun-edge sensor, and electronics. The four spectrometers measured in the infrared (channel 1) between 6.21 and 11.76 μm, in the midinfrared (channel 2) between 3.0 and 5.7 μm, at high resolution in the infrared (channel 3) between 12.78 and 12.85 μm, and in the visible (channel 4) between 753 and 784 nm. The vertical height of the entrance slit was 1 km at the tangent point. A Sun-edge sensor accurately registered tangent height. After an initial check of the instruments, ILAS-II recorded routine measurements for about 7 months, from 2 April 2003 to 24 October 2003, a period that included the formation and collapse of an Antarctic ozone hole in 2003 that was one of the largest in history. All of the ILAS-II data were processed using the version 1.4 data-processing algorithm. Validation analyses show promising results for some ILAS-II measurement species, which can be used to elucidate mechanisms of Polar ozone depletion. Studies are ongoing on ozone depletion, on the formation mechanisms of Polar stratospheric clouds, on denitrification, and on air mass descent. A state-of-the-art data retrieval algorithm that is currently being developed will yield more sophisticated data sets from the ILAS-II data in the near

  20. Belle II production system

    NASA Astrophysics Data System (ADS)

    Miyake, Hideki; Grzymkowski, Rafal; Ludacka, Radek; Schram, Malachi

    2015-12-01

    The Belle II experiment will record a similar quantity of data to LHC experiments and will acquire it at similar rates. This requires considerable computing, storage and network resources to handle not only data created by the experiment but also considerable amounts of simulated data. Consequently Belle II employs a distributed computing system to provide the resources, coordinated by the DIRAC interware. DIRAC is a general software framework that provides a unified interface among heterogeneous computing resources. In addition to the well proven DIRAC software stack, Belle II is developing its own extension called BelleDIRAC. BelleDIRAC provides a transparent user experience for the Belle II analysis framework (basf2) on various environments and gives access to file information managed by the LFC and AMGA metadata catalog. By unifying DIRAC and BelleDIRAC functionalities, Belle II plans to operate an automated mass data processing framework named a “production system”. The Belle II production system enables large-scale raw data transfer from the experimental site to raw data centers, followed by massive data processing, and smart data delivery to each remote site. The production system is also utilized for simulated data production and data analysis. Although development of the production system is still on-going, recently Belle II has prepared a prototype version and evaluated it with a large-scale simulated data production. In this presentation we will report the evaluation of the prototype system and future development plans.

  1. Projection Classification Based Iterative Algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Ruiqiu; Li, Chen; Gao, Wenhua

    2015-05-01

    Iterative algorithms perform well in 3D image reconstruction because they do not require complete projection data. This makes them applicable to the inspection of BGA solder joints with X-ray laminography, although their convergence speed is low and laminography yields poorer reconstructed images than complete-data reconstruction. This paper explores a projection-classification-based method that separates the object into three parts, i.e., solute, solution, and air, and assumes that the reconstruction speed decreases linearly from the solution to the other two parts on both sides. The SART and CAV algorithms are then improved under the proposed idea. Simulation experiments with incomplete projection images indicate the fast convergence speed of the improved iterative algorithms and the effectiveness of the proposed method. The fewer the projection images, the greater the advantage of the proposed method.
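
    For context, a sketch of the baseline SART update that the paper modifies; the projection-classification weighting itself is not reproduced, and the toy system matrix below is purely illustrative.

```python
import numpy as np

def sart(A, b, n_iter=500, lam=1.0):
    """Basic SART iteration: back-project the row-normalized residual and
    scale by inverse column sums. A: (m, n) system matrix of ray weights,
    b: (m,) measured projections."""
    col_sum = A.sum(axis=0); col_sum[col_sum == 0] = 1.0
    row_sum = A.sum(axis=1); row_sum[row_sum == 0] = 1.0
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        residual = (b - A @ x) / row_sum        # normalize per ray
        x += lam * (A.T @ residual) / col_sum   # back-project
    return x

# Toy 2x2 "image" observed through 5 ray sums (full-rank, consistent).
A = np.array([[1, 1, 0, 0], [0, 0, 1, 1], [1, 0, 1, 0],
              [0, 1, 0, 1], [1, 0, 0, 1]], float)
x_true = np.array([1.0, 2.0, 3.0, 4.0])
print(sart(A, A @ x_true))  # approaches x_true
```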

  2. Firefly Algorithm for Structural Search.

    PubMed

    Avendaño-Franco, Guillermo; Romero, Aldo H

    2016-07-12

    The problem of computational structure prediction of materials is approached using the firefly (FF) algorithm. Starting from the chemical composition and optionally using prior knowledge of similar structures, the FF method is able to predict not only known stable structures but also a variety of novel competitive metastable structures. This article focuses on the strengths and limitations of the algorithm as a multimodal global searcher. The algorithm has been implemented in software package PyChemia ( https://github.com/MaterialsDiscovery/PyChemia ), an open source python library for materials analysis. We present applications of the method to van der Waals clusters and crystal structures. The FF method is shown to be competitive when compared to other population-based global searchers. PMID:27232694
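
    A minimal firefly-algorithm sketch for continuous minimization. PyChemia's actual implementation targets structure search with energy evaluators, so the test function, population size, and parameters here are assumptions.

```python
import numpy as np

def firefly_minimize(f, bounds, n=20, n_gen=100, beta0=1.0, gamma=1.0, alpha=0.2):
    """Minimal firefly algorithm: each firefly moves toward brighter
    (lower-objective) ones with distance-decaying attractiveness, plus a
    random walk. bounds: (dim, 2) array of box constraints."""
    rng = np.random.default_rng(0)
    lo, hi = bounds[:, 0], bounds[:, 1]
    X = rng.uniform(lo, hi, size=(n, len(lo)))
    F = np.array([f(x) for x in X])
    for _ in range(n_gen):
        for i in range(n):
            for j in range(n):
                if F[j] < F[i]:                      # j is brighter: attract i
                    r2 = np.sum((X[i] - X[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    X[i] += beta * (X[j] - X[i]) + alpha * rng.normal(size=len(lo))
                    X[i] = np.clip(X[i], lo, hi)
                    F[i] = f(X[i])
        alpha *= 0.98                                # cool the random walk
    best = np.argmin(F)
    return X[best], F[best]

sphere = lambda x: float(np.sum(x ** 2))
print(firefly_minimize(sphere, np.array([[-5.0, 5.0]] * 3)))
```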

  3. Some nonlinear space decomposition algorithms

    SciTech Connect

    Tai, Xue-Cheng; Espedal, M.

    1996-12-31

    Convergence of a space decomposition method is proved for a general convex programming problem. The space decomposition refers to methods that decompose a space into sums of subspaces, which could be a domain decomposition or a multigrid method for partial differential equations. Two algorithms are proposed. Both can be used for linear as well as nonlinear elliptic problems and they reduce to the standard additive and multiplicative Schwarz methods for linear elliptic problems. Two "hybrid" algorithms are also presented. They converge faster than the additive one and have better parallelism than the multiplicative method. Numerical tests with a two level domain decomposition for linear, nonlinear and interface elliptic problems are presented for the proposed algorithms.

  4. Seamless Merging of Hypertext and Algorithm Animation

    ERIC Educational Resources Information Center

    Karavirta, Ville

    2009-01-01

    Online learning material that students use by themselves is one of the typical usages of algorithm animation (AA). Thus, the integration of algorithm animations into hypertext is seen as an important topic today to promote the usage of algorithm animation in teaching. This article presents an algorithm animation viewer implemented purely using…

  5. HEATR project: ATR algorithm parallelization

    NASA Astrophysics Data System (ADS)

    Deardorf, Catherine E.

    1998-09-01

    High Performance Computing (HPC) Embedded Application for Target Recognition (HEATR) is a project funded by the High Performance Computing Modernization Office through the Common HPC Software Support Initiative (CHSSI). The goal of CHSSI is to produce portable, parallel, multi-purpose, freely distributable, support software to exploit emerging parallel computing technologies and enable application of scalable HPC's for various critical DoD applications. Specifically, the CHSSI goal for HEATR is to provide portable, parallel versions of several existing ATR detection and classification algorithms to the ATR-user community to achieve near real-time capability. The HEATR project will create parallel versions of existing automatic target recognition (ATR) detection and classification algorithms and generate reusable code that will support porting and software development process for ATR HPC software. The HEATR Team has selected detection/classification algorithms from both the model- based and training-based (template-based) arena in order to consider the parallelization requirements for detection/classification algorithms across ATR technology. This would allow the Team to assess the impact that parallelization would have on detection/classification performance across ATR technology. A field demo is included in this project. Finally, any parallel tools produced to support the project will be refined and returned to the ATR user community along with the parallel ATR algorithms. This paper will review: (1) HPCMP structure as it relates to HEATR, (2) Overall structure of the HEATR project, (3) Preliminary results for the first algorithm Alpha Test, (4) CHSSI requirements for HEATR, and (5) Project management issues and lessons learned.

  6. Joint optimization of algorithmic suites for EEG analysis.

    PubMed

    Santana, Eder; Brockmeier, Austin J; Principe, Jose C

    2014-01-01

    Electroencephalogram (EEG) data analysis algorithms consist of multiple processing steps each with a number of free parameters. A joint optimization methodology can be used as a wrapper to fine-tune these parameters for the patient or application. This approach is inspired by deep learning neural network models, but differs because the processing layers for EEG are heterogeneous with different approaches used for processing space and time. Nonetheless, we treat the processing stages as a neural network and apply backpropagation to jointly optimize the parameters. This approach outperforms previous results on the BCI Competition II - dataset IV; additionally, it outperforms the common spatial patterns (CSP) algorithm on the BCI Competition III dataset IV. In addition, the optimized parameters in the architecture are still interpretable. PMID:25570621

  7. Evaluation of chlorophyll-a retrieval algorithms based on MERIS bands for optically varying eutrophic inland lakes.

    PubMed

    Lyu, Heng; Li, Xiaojun; Wang, Yannan; Jin, Qi; Cao, Kai; Wang, Qiao; Li, Yunmei

    2015-10-15

    Fourteen field campaigns were conducted in five inland lakes during different seasons between 2006 and 2013, and a total of 398 water samples with varying optical characteristics were collected. The characteristics were analyzed based on remote sensing reflectance, and an automatic cluster two-step method was applied for water classification. The inland waters could be clustered into three types, which we labeled water types I, II and III. From water types I to III, the effect of the phytoplankton on the optical characteristics gradually decreased. Four chlorophyll-a retrieval algorithms for Case II water, a two-band, three-band, four-band and SCI (Synthetic Chlorophyll Index) algorithm were evaluated for three water types based on the MERIS bands. Different MERIS bands were used for the three water types in each of the four algorithms. The four algorithms had different levels of retrieval accuracy for each water type, and no single algorithm could be successfully applied to all water types. For water types I and III, the three-band algorithm performed the best, while the four-band algorithm had the highest retrieval accuracy for water type II. However, the three-band algorithm is preferable to the two-band algorithm for turbid eutrophic inland waters. The SCI algorithm is recommended for highly turbid water with a higher concentration of total suspended solids. Our research indicates that the chlorophyll-a concentration retrieval by remote sensing for optically contrasted inland water requires a specific algorithm that is based on the optical characteristics of inland water bodies to obtain higher estimation accuracy. PMID:26057542
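
    The two- and three-band algorithms evaluated here are commonly of the red/NIR band-ratio form used for turbid Case II waters; below is a sketch with generic Gitelson-type index forms at MERIS band centers near 665, 708 and 754 nm. The reflectance values and the final regression are placeholders; the paper calibrates band choices and coefficients separately for each water type.

```python
# MERIS band-center remote-sensing reflectances (sr^-1), sample values only.
R665, R708, R753 = 0.012, 0.018, 0.009

# Generic red/NIR index forms widely used for turbid inland waters.
two_band   = R708 / R665
three_band = (1.0 / R665 - 1.0 / R708) * R753

print("two-band index:  ", round(two_band, 3))
print("three-band index:", round(three_band, 4))
# Chl-a is then estimated from a regression such as chl = a * index + b,
# with a and b fitted to in-situ data (coefficients not given here).
```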

  8. Using Genetic Algorithm and MODFLOW to Characterize Aquifer System of Northwest Florida

    EPA Science Inventory

    By integrating Genetic Algorithm and MODFLOW2005, an optimizing tool is developed to characterize the aquifer system of Region II, Northwest Florida. The history and the newest available observation data of the aquifer system is fitted automatically by using the numerical model c...

  9. Using Genetic Algorithm and MODFLOW to Characterize Aquifer System of Northwest Florida (Published Proceedings)

    EPA Science Inventory

    By integrating Genetic Algorithm and MODFLOW2005, an optimizing tool is developed to characterize the aquifer system of Region II, Northwest Florida. The history and the newest available observation data of the aquifer system is fitted automatically by using the numerical model c...

  10. Decryption of pure-position permutation algorithms.

    PubMed

    Zhao, Xiao-Yu; Chen, Gang; Zhang, Dan; Wang, Xiao-Hong; Dong, Guang-Chang

    2004-07-01

    Pure position permutation algorithms, commonly used for image encryption and investigated in this work, are unfortunately fragile under known-plaintext attack. In view of this weakness of pure position permutation algorithms, we put forward an effective decryption algorithm for all pure-position permutation algorithms. First, a summary of the pure position permutation image encryption algorithms is given by introducing the concept of ergodic matrices. Then, by using probability theory and algebraic principles, the decryption probability of pure-position permutation algorithms is verified theoretically; and then, by defining the operation system of fuzzy ergodic matrices, we improve a specific decryption algorithm. Finally, some simulation results are shown. PMID:15495308
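
    The known-plaintext weakness is easy to demonstrate: if pixel values are distinct (an idealizing assumption; real images have repeated values, which is the case the paper's fuzzy ergodic-matrix machinery addresses), a single plaintext/ciphertext pair reveals the permutation key outright.

```python
import numpy as np

rng = np.random.default_rng(2)

# A pure position permutation cipher only rearranges pixel positions.
n = 16
perm = rng.permutation(n)                 # secret key
plain = rng.permutation(n).astype(float)  # known image; distinct values assumed
cipher = plain[perm]

# Known-plaintext attack: matching values between the plaintext and the
# ciphertext exposes the permutation directly. With repeated values each
# position is only narrowed to a candidate set.
recovered = np.array([int(np.where(plain == c)[0][0]) for c in cipher])

assert np.array_equal(recovered, perm)
print("recovered key:", recovered)
```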

  11. Old And New Algorithms For Toeplitz Systems

    NASA Astrophysics Data System (ADS)

    Brent, Richard P.

    1988-02-01

    Toeplitz linear systems and Toeplitz least squares problems commonly arise in digital signal processing. In this paper we survey some old, "well known" algorithms and some recent algorithms for solving these problems. We concentrate our attention on algorithms which can be implemented efficiently on a variety of parallel machines (including pipelined vector processors and systolic arrays). We distinguish between algorithms which require inner products, and algorithms which avoid inner products, and thus are better suited to parallel implementation on some parallel architectures. Finally, we mention some "asymptotically fast" O(n (log n)^2) algorithms and compare them with O(n^2) algorithms.
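
    A small illustration of the O(n^2) class of solvers surveyed here, using SciPy's Levinson-recursion-based solve_toeplitz checked against a dense reference; the example system is made up.

```python
import numpy as np
from scipy.linalg import solve_toeplitz, toeplitz

# A symmetric positive-definite Toeplitz system, as arises in linear
# prediction. solve_toeplitz uses an O(n^2) Levinson-type recursion
# instead of O(n^3) general elimination.
c = 0.9 ** np.arange(6)          # first column (= first row in the symmetric case)
b = np.arange(6, dtype=float)

x_fast = solve_toeplitz(c, b)             # Levinson recursion
x_ref  = np.linalg.solve(toeplitz(c), b)  # dense reference
print(np.allclose(x_fast, x_ref))         # True
```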

  12. Multiple endocrine neoplasia (MEN) II

    MedlinePlus

    Sipple syndrome; MEN II; Pheochromocytoma - MEN II; Thyroid cancer - pheochromocytoma; Parathyroid cancer - pheochromocytoma ... The cause of MEN II is a defect in a gene called RET. This defect causes many tumors to appear in the same ...

  13. Juno II Launch Vehicle

    NASA Technical Reports Server (NTRS)

    1958-01-01

    The modified Jupiter C (sometimes called Juno I), used to launch Explorer I, had minimum payload lifting capabilities. Explorer I weighed slightly less than 31 pounds. Juno II was part of America's effort to increase payload lifting capabilities. Among other achievements, the vehicle successfully launched a Pioneer IV satellite on March 3, 1959, and an Explorer VII satellite on October 13, 1959. Responsibility for Juno II passed from the Army to the Marshall Space Flight Center when the Center was activated on July 1, 1960. On November 3, 1960, a Juno II sent Explorer VIII into a 1,000-mile deep orbit within the ionosphere.

  14. A generalized memory test algorithm

    NASA Technical Reports Server (NTRS)

    Milner, E. J.

    1982-01-01

    A general algorithm for testing digital computer memory is presented. The test checks that (1) every bit can be cleared and set in each memory word, and (2) bits are not erroneously cleared and/or set elsewhere in memory at the same time. The algorithm can be applied to any size memory block and any size memory word. It is concise and efficient, requiring very few cycles through memory. For example, a test of 16-bit-word-size memory requires only 384 cycles through memory. Approximately 15 seconds were required to test a 32K block of such memory, using a microcomputer having a cycle time of 133 nanoseconds.
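
    A software simulation of the two checks, sketched over a Python list standing in for a memory block; the real algorithm runs against hardware, so this is only a structural illustration and the pass structure is our simplification.

```python
def test_memory(mem, word_bits=16):
    """Simulated memory test: (1) every bit of every word can be cleared
    and set; (2) writing one word does not disturb others (checked here
    with an address-in-address pass, so aliased cells show up as
    mismatches). Returns True if the block passes."""
    n = len(mem)
    all_ones = (1 << word_bits) - 1
    for pattern in (0, all_ones):
        for addr in range(n):
            mem[addr] = pattern           # clear (or set) every bit
        if any(w != pattern for w in mem):
            return False                  # a bit failed to clear/set
    for addr in range(n):
        mem[addr] = addr & all_ones       # unique value per address
    return all(mem[addr] == (addr & all_ones) for addr in range(n))

print(test_memory([0] * 1024))  # True for this fault-free simulated block
```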

  15. Squint mode SAR processing algorithms

    NASA Technical Reports Server (NTRS)

    Chang, C. Y.; Jin, M.; Curlander, J. C.

    1989-01-01

    The unique characteristics of a spaceborne SAR (synthetic aperture radar) operating in a squint mode include large range walk and large variation in the Doppler centroid as a function of range. A pointing control technique to reduce the Doppler drift and a new processing algorithm to accommodate large range walk are presented. Simulations of the new algorithm for squint angles up to 20 deg and look angles up to 44 deg for the Earth Observing System (Eos) L-band SAR configuration demonstrate that it is capable of maintaining the resolution broadening within 20 percent and the ISLR within a fraction of a decibel of the theoretical value.

  16. Fast algorithms for transport models

    SciTech Connect

    Manteuffel, T.A.

    1992-12-01

    The objective of this project is the development of numerical solution techniques for deterministic models of the transport of neutral and charged particles and the demonstration of their effectiveness in both a production environment and on advanced architecture computers. The primary focus is on various versions of the linear Boltzmann equation. These equations are fundamental in many important applications. This project is an attempt to integrate the development of numerical algorithms with the process of developing production software. A major thrust of this project will be the implementation of these algorithms on advanced architecture machines that reside at the Advanced Computing Laboratory (ACL) at Los Alamos National Laboratories (LANL).

  17. ALGORITHM DEVELOPMENT FOR SPATIAL OPERATORS.

    USGS Publications Warehouse

    Claire, Robert W.

    1984-01-01

    An approach is given that develops spatial operators about the basic geometric elements common to spatial data structures. In this fashion, a single set of spatial operators may be accessed by any system that reduces its operands to such basic generic representations. Algorithms based on this premise have been formulated to perform operations such as separation, overlap, and intersection. Moreover, this generic approach is well suited for algorithms that exploit concurrent properties of spatial operators. The results may provide a framework for a geometry engine to support fundamental manipulations within a geographic information system.

  18. Born approximation, scattering, and algorithm

    NASA Astrophysics Data System (ADS)

    Martinez, Alex; Hu, Mengqi; Gu, Haicheng; Qiao, Zhijun

    2015-05-01

    In the past few decades, many imaging algorithms have been designed under the assumption that multiple scattering is absent. Recently, we discussed an algorithm for removing high-order scattering components from collected data. This paper is a continuation of our previous work. First, we investigate the current state of multiple scattering in SAR. Then, we revise our method and test it. Given an estimate of our target reflectivity, we compute the multiple-scattering effects in the target region for various frequencies. Furthermore, we propagate this energy through free space towards our antenna and remove it from the collected data.

  19. Synthesis of Greedy Algorithms Using Dominance Relations

    NASA Technical Reports Server (NTRS)

    Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.

    2010-01-01

    Greedy algorithms exploit problem structure and constraints to achieve linear-time performance. Yet there is still no completely satisfactory way of constructing greedy algorithms. For example, the Greedy Algorithm of Edmonds depends upon translating a problem into an algebraic structure called a matroid, but the existence of such a translation can be as hard to determine as the existence of a greedy algorithm itself. An alternative characterization of greedy algorithms is in terms of dominance relations, a well-known algorithmic technique used to prune search spaces. We demonstrate a process by which dominance relations can be methodically derived for a number of greedy algorithms, including activity selection, and prefix-free codes. By incorporating our approach into an existing framework for algorithm synthesis, we demonstrate that it could be the basis for an effective engineering method for greedy algorithms. We also compare our approach with other characterizations of greedy algorithms.
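
    Activity selection, one of the examples named above, shows the flavor of a greedy algorithm justified by a dominance argument; this is a standard textbook sketch, not the paper's synthesized output.

```python
def activity_selection(activities):
    """Classic greedy activity selection: sort by finish time and keep
    each activity compatible with the last one chosen. The dominance
    argument: among compatible choices, the one finishing earliest
    dominates, since it leaves the most room for the remaining ones."""
    chosen, last_finish = [], float("-inf")
    for start, finish in sorted(activities, key=lambda a: a[1]):
        if start >= last_finish:
            chosen.append((start, finish))
            last_finish = finish
    return chosen

acts = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)]
print(activity_selection(acts))  # [(1, 4), (5, 7), (8, 11)]
```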

  20. Algorithm Development Library for Environmental Satellite Missions

    NASA Astrophysics Data System (ADS)

    Smith, D. C.; Grant, K. D.; Miller, S. W.; Jamilkowski, M. L.

    2012-12-01

    The National Oceanic and Atmospheric Administration (NOAA) and National Aeronautics and Space Administration (NASA) are jointly acquiring the next-generation civilian weather and environmental satellite system: the Joint Polar Satellite System (JPSS). JPSS will contribute the afternoon orbit component and ground processing system of the restructured National Polar-orbiting Operational Environmental Satellite System (NPOESS). As such, the Joint Polar Satellite System replaces the current Polar-orbiting Operational Environmental Satellites (POES) managed by the National Oceanic and Atmospheric Administration and the ground processing component of both Polar-orbiting Operational Environmental Satellites and the Defense Meteorological Satellite Program (DMSP) replacement, previously known as the Defense Weather Satellite System (DWSS), managed by the Department of Defense (DoD). The JPSS satellites will carry a suite of sensors designed to collect meteorological, oceanographic, climatological, and solar-geophysical observations of the earth, atmosphere, and space. The ground processing system for JPSS is known as the JPSS Common Ground System (JPSS CGS), and consists of a Command, Control, and Communications Segment (C3S) and an Interface Data Processing Segment (IDPS). Both segments are developed by Raytheon Intelligence and Information Systems (IIS). The C3S currently flies the Suomi National Polar Partnership (Suomi NPP) satellite and transfers mission data from Suomi NPP and between the ground facilities. The IDPS processes Suomi NPP satellite data to provide Environmental Data Records (EDRs) to NOAA and DoD processing centers operated by the United States government. When the JPSS-1 satellite is launched in early 2017, the responsibilities of the C3S and the IDPS will be expanded to support both Suomi NPP and JPSS-1. The EDRs for Suomi NPP are currently undergoing an extensive Calibration and Validation (Cal/Val) campaign. As Cal/Val proceeds, changes to the

  1. Application of a Multi-Objective Optimization Method to Provide Least Cost Alternatives for NPS Pollution Control

    NASA Astrophysics Data System (ADS)

    Maringanti, Chetan; Chaubey, Indrajeet; Arabi, Mazdak; Engel, Bernard

    2011-09-01

    Nonpoint source (NPS) pollutants such as phosphorus, nitrogen, sediment, and pesticides are the foremost sources of water contamination in many of the water bodies in the Midwestern agricultural watersheds. This problem is expected to increase in the future with the increasing demand to provide corn as grain or stover for biofuel production. Best management practices (BMPs) have been proven to effectively reduce the NPS pollutant loads from agricultural areas. However, in a watershed with multiple farms and multiple BMPs feasible for implementation, it becomes a daunting task to choose a right combination of BMPs that provide maximum pollution reduction for least implementation costs. Multi-objective algorithms capable of searching from a large number of solutions are required to meet the given watershed management objectives. Genetic algorithms have been the most popular optimization algorithms for the BMP selection and placement. However, previous BMP optimization models did not study pesticide, which is very commonly used in corn areas. Also, with corn stover being projected as a viable alternative for biofuel production, there might be unintended consequences of the reduced residue in the corn fields on water quality. Therefore, there is a need to study the impact of different levels of residue management in combination with other BMPs at a watershed scale. In this research the following BMPs were selected for placement in the watershed: (a) residue management, (b) filter strips, (c) parallel terraces, (d) contour farming, and (e) tillage. We present a novel method of combining different NPS pollutants into a single objective function, which, along with the net costs, were used as the two objective functions during optimization. In this study we used the BMP tool, a database that contains the pollution reduction and cost information of the different BMPs under consideration and provides pollutant loads during optimization. The BMP optimization was performed using an NSGA-II
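
    Since the optimization was performed with NSGA-II, here is a sketch of its two core survivor-selection ingredients, fast non-dominated sorting and crowding distance, applied to a random bi-objective population standing in for cost-versus-pollution trade-offs; the population values are synthetic.

```python
import numpy as np

def nondominated_sort(F):
    """Rank a population by Pareto dominance (minimization).
    F: (n, m) objective values. Returns a list of fronts (index lists)."""
    n = len(F)
    dominates = lambda i, j: np.all(F[i] <= F[j]) and np.any(F[i] < F[j])
    dom_count = np.zeros(n, int)            # how many solutions dominate i
    dominated_by = [[] for _ in range(n)]   # solutions that i dominates
    for i in range(n):
        for j in range(n):
            if i != j and dominates(i, j):
                dominated_by[i].append(j)
            elif i != j and dominates(j, i):
                dom_count[i] += 1
    fronts, current = [], [i for i in range(n) if dom_count[i] == 0]
    while current:
        fronts.append(current)
        nxt = []
        for i in current:
            for j in dominated_by[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:
                    nxt.append(j)
        current = nxt
    return fronts

def crowding_distance(F):
    """NSGA-II crowding distance within one front (larger = less crowded)."""
    n, m = F.shape
    d = np.zeros(n)
    for k in range(m):
        order = np.argsort(F[:, k])
        d[order[0]] = d[order[-1]] = np.inf   # boundary points always kept
        span = F[order[-1], k] - F[order[0], k] or 1.0
        d[order[1:-1]] += (F[order[2:], k] - F[order[:-2], k]) / span
    return d

# Toy bi-objective population: cost vs. pollutant load of candidate BMP plans.
rng = np.random.default_rng(3)
F = rng.uniform(0, 1, size=(12, 2))
for rank, front in enumerate(nondominated_sort(F)):
    idx = np.array(front)
    print("front", rank, idx, "crowding", np.round(crowding_distance(F[idx]), 2))
```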

  2. Two Algorithms for Processing Electronic Nose Data

    NASA Technical Reports Server (NTRS)

    Young, Rebecca; Linnell, Bruce

    2007-01-01

    Two algorithms for processing the digitized readings of electronic noses, and computer programs to implement the algorithms, have been devised in a continuing effort to increase the utility of electronic noses as means of identifying airborne compounds and measuring their concentrations. One algorithm identifies the two vapors in a two-vapor mixture and estimates the concentration of each vapor (in principle, this algorithm could be extended to more than two vapors). The other algorithm identifies a single vapor and estimates its concentration.

  3. Blind Alley Aware ACO Routing Algorithm

    NASA Astrophysics Data System (ADS)

    Yoshikawa, Masaya; Otani, Kazuo

    2010-10-01

    The routing problem is applied in various engineering fields, and many researchers study it. In this paper, we propose a new routing algorithm based on Ant Colony Optimization. The proposed algorithm introduces a tabu search mechanism to escape blind alleys. Thus, the proposed algorithm can find the shortest route even if the map data contain blind alleys. Experiments using map data prove its effectiveness in comparison with the Dijkstra algorithm, the most popular conventional routing algorithm.

  4. Parallel algorithms for unconstrained optimizations by multisplitting

    SciTech Connect

    He, Qing

    1994-12-31

    In this paper a new parallel iterative algorithm for unconstrained optimization using the idea of multisplitting is proposed. This algorithm uses the existing sequential algorithms without any parallelization. Some convergence and numerical results for this algorithm are presented. The experiments are performed on an Intel iPSC/860 Hyper Cube with 64 nodes. It is interesting that the sequential implementation on one node shows that if the problem is split properly, the algorithm converges much faster than one without splitting.

  5. Network II Database

    1994-11-07

    The Oak Ridge National Laboratory (ORNL) Rail and Barge Network II Database is a representation of the rail and barge system of the United States. The network is derived from the Federal Railroad Administration (FRA) rail database.

  6. Factor II deficiency

    MedlinePlus

    ... blood. It leads to problems with blood clotting (coagulation). Factor II is also known as prothrombin. ... blood clots form. This process is called the coagulation cascade. It involves special proteins called coagulation, or ...

  7. MOLA II Laser Transmitter Calibration and Performance. 1.2

    NASA Technical Reports Server (NTRS)

    Afzal, Robert S.; Smith, David E. (Technical Monitor)

    1997-01-01

    The goal of the document is to explain the algorithm for determining the laser output energy from the telemetry data within the return packets from MOLA II. A simple algorithm is developed to convert the raw start detector data into laser energy, measured in millijoules. This conversion is dependent on three variables, start detector counts, array heat sink temperature and start detector temperature. All these values are contained within the return packets. The conversion is applied to the GSFC Thermal Vacuum data as well as the in-space data to date and shows good correlation.

  8. Algorithm Visualization in Teaching Practice

    ERIC Educational Resources Information Center

    Törley, Gábor

    2014-01-01

    This paper presents the history of algorithm visualization (AV), highlighting teaching-methodology aspects. A combined, two-group pedagogical experiment will be presented as well, which measured the efficiency and the impact on the abstract thinking of AV. According to the results, students, who learned with AV, performed better in the experiment.

  9. Algorithms, complexity, and the sciences

    PubMed Central

    Papadimitriou, Christos

    2014-01-01

    Algorithms, perhaps together with Moore’s law, compose the engine of the information technology revolution, whereas complexity—the antithesis of algorithms—is one of the deepest realms of mathematical investigation. After introducing the basic concepts of algorithms and complexity, and the fundamental complexity classes P (polynomial time) and NP (nondeterministic polynomial time, or search problems), we discuss briefly the P vs. NP problem. We then focus on certain classes between P and NP which capture important phenomena in the social and life sciences, namely the Nash equilibrium and other equilibria in economics and game theory, and certain processes in population genetics and evolution. Finally, an algorithm known as multiplicative weights update (MWU) provides an algorithmic interpretation of the evolution of allele frequencies in a population under sex and weak selection. All three of these equivalences are rife with domain-specific implications: The concept of Nash equilibrium may be less universal—and therefore less compelling—than has been presumed; selection on gene interactions may entail the maintenance of genetic variation for longer periods than selection on single alleles predicts; whereas MWU can be shown to maximize, for each gene, a convex combination of the gene’s cumulative fitness in the population and the entropy of the allele distribution, an insight that may be pertinent to the maintenance of variation in evolution. PMID:25349382

  10. Threshold extended ID3 algorithm

    NASA Astrophysics Data System (ADS)

    Kumar, A. B. Rajesh; Ramesh, C. Phani; Madhusudhan, E.; Padmavathamma, M.

    2012-04-01

    Information exchange over insecure networks needs to provide authentication and confidentiality for the database, a significant problem in data mining. In this paper we propose a novel authenticated multiparty ID3 algorithm used to construct a multiparty secret-sharing decision tree for implementation in medical transactions.
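
    The entropy and information-gain computation at the heart of ID3, on which the paper builds; the secret-sharing and authentication layers are not sketched, and the toy dataset is invented.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a class-label multiset, as used by ID3."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr):
    """Gain of splitting `rows` (list of dicts) on attribute `attr`;
    ID3 greedily splits on the attribute with the highest gain."""
    base = entropy(labels)
    by_value = {}
    for row, y in zip(rows, labels):
        by_value.setdefault(row[attr], []).append(y)
    remainder = sum(len(ys) / len(labels) * entropy(ys) for ys in by_value.values())
    return base - remainder

rows = [{"fever": "hi", "cough": "y"}, {"fever": "hi", "cough": "n"},
        {"fever": "lo", "cough": "y"}, {"fever": "lo", "cough": "n"}]
labels = ["sick", "sick", "well", "well"]
print(information_gain(rows, labels, "fever"))  # 1.0: fever splits perfectly
print(information_gain(rows, labels, "cough"))  # 0.0: cough is uninformative
```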

  11. Aerocapture Guidance Algorithm Comparison Campaign

    NASA Technical Reports Server (NTRS)

    Rousseau, Stephane; Perot, Etienne; Graves, Claude; Masciarelli, James P.; Queen, Eric

    2002-01-01

    Aerocapture is a promising technique for future human interplanetary missions. The Mars Sample Return was initially based on an insertion by aerocapture, and a CNES orbiter, Mars Premier, was developed to demonstrate this concept. Mainly due to budget constraints, the aerocapture was cancelled for the French orbiter. Many studies were carried out during the last three years to develop and test different guidance algorithms (APC, EC, TPC, NPC). This work was shared between CNES and NASA, with a fruitful joint working group. To conclude this study, an evaluation campaign was performed to test the different algorithms. The objective was to assess the robustness, accuracy, capability to limit the load, and the complexity of each algorithm. A simulation campaign was specified and performed by CNES, with a similar activity on the NASA side to confirm the CNES results. This evaluation demonstrated that the numerical guidance principle is not competitive compared to the analytical concepts. All the other algorithms are well adapted to guarantee the success of the aerocapture. The TPC appears to be the most robust, the APC the most accurate, and the EC appears to be a good compromise.

  12. Adaptive color image watermarking algorithm

    NASA Astrophysics Data System (ADS)

    Feng, Gui; Lin, Qiwei

    2008-03-01

    As a major method for intellectual property protection, digital watermarking techniques have been widely studied and used. But due to the problems of data amount and color shift, watermarking techniques for color images have not been so widely studied, although color images are the principal medium for multimedia usage. Considering the characteristics of the Human Visual System (HVS), an adaptive color image watermarking algorithm is proposed in this paper. In this algorithm, the HSI color model is adopted for both the host and watermark images; the DCT coefficients of the intensity component (I) of the host color image are used for watermark data embedding, and while embedding the watermark, the number of embedded bits is adaptively changed with the complexity of the host image. As to the watermark image, preprocessing is applied first, in which the watermark image is decomposed by a two-layer wavelet transformation. At the same time, to enhance the anti-attack ability and security of the watermarking algorithm, the watermark image is scrambled. According to their significance, some watermark bits are selected and some are deleted to form the actual embedding data. The experimental results show that the proposed watermarking algorithm is robust to several common attacks and has good perceptual quality at the same time.

  13. Simultaneous stabilization using genetic algorithms

    SciTech Connect

    Benson, R.W.; Schmitendorf, W.E. . Dept. of Mechanical Engineering)

    1991-01-01

    This paper considers the problem of simultaneously stabilizing a set of plants using full state feedback. The problem is converted to a simple optimization problem which is solved by a genetic algorithm. Several examples demonstrate the utility of this method. 14 refs., 8 figs.

  14. Detection Algorithms: FFT vs. KLT

    NASA Astrophysics Data System (ADS)

    Maccone, Claudio

    Given the vast distances between the stars, we can anticipate that any received SETI signal will be exceedingly weak. How can we hope to extract (or even recognize) such signals buried well beneath the natural background noise with which they must compete? This chapter analyzes, compares, and contrasts the two dominant signal detection algorithms used by SETI scientists to recognize extremely weak candidate signals.

  15. Adaptive protection algorithm and system

    DOEpatents

    Hedrick, Paul (Pittsburgh, PA); Toms, Helen L. (Irwin, PA); Miller, Roger M. (Mars, PA)

    2009-04-28

    An adaptive protection algorithm and system for protecting electrical distribution systems traces the flow of power through a distribution system, assigns a value (or rank) to each circuit breaker in the system and then determines the appropriate trip set points based on the assigned rank.

  16. Understanding Algorithms in Different Presentations

    ERIC Educational Resources Information Center

    Csernoch, Mária; Biró, Piroska; Abari, Kálmán; Máth, János

    2015-01-01

    Within the framework of the Testing Algorithmic and Application Skills project we tested first year students of Informatics at the beginning of their tertiary education. We were focusing on the students' level of understanding in different programming environments. In the present paper we provide the results from the University of Debrecen, the…

  17. Coagulation algorithms with size binning

    NASA Technical Reports Server (NTRS)

    Statton, David M.; Gans, Jason; Williams, Eric

    1994-01-01

    The Smoluchowski equation describes the time evolution of an aerosol particle size distribution due to aggregation or coagulation. Any algorithm for computerized solution of this equation requires a scheme for describing the continuum of aerosol particle sizes as a discrete set. One standard form of the Smoluchowski equation accomplishes this by restricting the particle sizes to integer multiples of a basic unit particle size (the monomer size). This can be inefficient when particle concentrations over a large range of particle sizes must be calculated. Two algorithms employing a geometric size-binning convention are examined: the first assumes that the aerosol particle concentration as a function of size can be considered constant within each size bin; the second approximates the concentration as a linear function of particle size within each size bin. The output of each algorithm is compared to an analytical solution in a special case of the Smoluchowski equation for which an exact solution is known. The range of parameters for which each algorithm is more appropriate is examined.
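
    For reference, a minimal sketch of the monomer-multiple form of the discrete Smoluchowski equation that both binned algorithms approximate, using a constant kernel (one of the special cases with a known exact solution) and explicit Euler stepping; the kernel, time step, and size cutoff are arbitrary:

        import numpy as np

        def smoluchowski_step(n, K, dt):
            """One Euler step of dn_k/dt = 1/2 * sum_{i+j=k} K_ij n_i n_j
            - n_k * sum_i K_ki n_i, where n[k] is the (k+1)-mer concentration."""
            m = len(n)
            gain = np.zeros(m)
            for a in range(m):
                for b in range(m):
                    k = a + b + 1          # sizes (a+1) + (b+1) -> index a+b+1
                    if k < m:
                        gain[k] += 0.5 * K[a, b] * n[a] * n[b]
            loss = n * (K @ n)
            return n + dt * (gain - loss)

        m = 50
        n = np.zeros(m); n[0] = 1.0        # monomers only at t = 0
        K = np.ones((m, m))                # constant kernel
        for _ in range(1000):
            n = smoluchowski_step(n, K, dt=0.01)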

  18. Nuclear models and exact algorithms

    NASA Astrophysics Data System (ADS)

    Bes, D. R.; Dobaczewski, J.; Draayer, J. P.; Szymański, Z.

    1992-07-01

    Discussion Group E on Nuclear Models and Exact Algorithms received contributions from the following individuals: L. Egido, S. Frauendorf, F. Iachello, P. Ring, H. Sagawa, W. Satula, N. C. Schmeing, M. Vincent, A. J. Zucker. The report that follows is an attempt by the leaders of the discussion to summarize the presentations and to give an impression of the subject matter.

  19. SMAP's Radar OBP Algorithm Development

    NASA Technical Reports Server (NTRS)

    Le, Charles; Spencer, Michael W.; Veilleux, Louise; Chan, Samuel; He, Yutao; Zheng, Jason; Nguyen, Kayla

    2009-01-01

    An approach for algorithm specifications and development is described for SMAP's radar onboard processor with multi-stage demodulation and decimation bandpass digital filter. Point target simulation is used to verify and validate the filter design with the usual radar performance parameters. Preliminary FPGA implementation is also discussed.

  20. Multilevel algorithms for nonlinear optimization

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia; Dennis, J. E., Jr.

    1994-01-01

    Multidisciplinary design optimization (MDO) gives rise to nonlinear optimization problems characterized by a large number of constraints that naturally occur in blocks. We propose a class of multilevel optimization methods motivated by the structure and number of constraints and by the expense of the derivative computations for MDO. The algorithms are an extension to the nonlinear programming problem of the successful class of local Brown-Brent algorithms for nonlinear equations. Our extensions allow the user to partition constraints into arbitrary blocks to fit the application, and they separately process each block and the objective function, restricted to certain subspaces. The methods use trust regions as a globalization strategy, and they have been shown to be globally convergent under reasonable assumptions. The multilevel algorithms can be applied to all classes of MDO formulations. Multilevel algorithms for solving nonlinear systems of equations are a special case of the multilevel optimization methods. In this case, they can be viewed as a trust-region globalization of the Brown-Brent class.

  1. Quartic Rotation Criteria and Algorithms.

    ERIC Educational Resources Information Center

    Clarkson, Douglas B.; Jennrich, Robert I.

    1988-01-01

    Most of the current analytic rotation criteria for simple structure in factor analysis are summarized and identified as members of a general symmetric family of quartic criteria. A unified development of algorithms for orthogonal and direct oblique rotation using arbitrary criteria from this family is presented. (Author/TJH)

  2. Key Concepts in Informatics: Algorithm

    ERIC Educational Resources Information Center

    Szlávi, Péter; Zsakó, László

    2014-01-01

    "The system of key concepts contains the most important key concepts related to the development tasks of knowledge areas and their vertical hierarchy as well as the links of basic key concepts of different knowledge areas." (Vass 2011) One of the most important of these concepts is the algorithm. In everyday life, when learning or…

  3. Lattice Boltzmann algorithm for continuum multicomponent flow.

    PubMed

    Halliday, I; Hollis, A P; Care, C M

    2007-08-01

    We present a multicomponent lattice Boltzmann simulation for continuum fluid mechanics, paying particular attention to the component segregation part of the underlying algorithm. In the principal result of this paper, the dynamics of a component index, or phase field, is obtained for a segregation method after U. D'Ortona [Phys. Rev. E 51, 3718 (1995)], due to Latva-Kokko and Rothman [Phys. Rev. E 71, 056702 (2005)]. The said dynamics accord with a simulation designed to address multicomponent flow in the continuum approximation and underwrite improved simulation performance in two main ways: (i) by reducing the interfacial microcurrent activity considerably and (ii) by facilitating simulational access to regimes of flow with a low capillary number and drop Reynolds number [I. Halliday, R. Law, C. M. Care, and A. Hollis, Phys. Rev. E 73, 056708 (2006)]. The component segregation method studied, used in conjunction with Lishchuk's method [S. V. Lishchuk, C. M. Care, and I. Halliday, Phys. Rev. E 67, 036701 (2003)], produces an interface, which is distributed in terms of its component index; however, the hydrodynamic boundary conditions which emerge are shown to support the notion of a sharp, unstructured, continuum interface. PMID:17930175

  4. Applying various algorithms for species distribution modelling.

    PubMed

    Li, Xinhai; Wang, Yuan

    2013-06-01

    Species distribution models have been used extensively in many fields, including climate change biology, landscape ecology and conservation biology. In the past 3 decades, a number of new models have been proposed, yet researchers still find it difficult to select appropriate models for data and objectives. In this review, we aim to provide insight into the prevailing species distribution models for newcomers in the field of modelling. We compared 11 popular models, including regression models (the generalized linear model, the generalized additive model, the multivariate adaptive regression splines model and hierarchical modelling), classification models (mixture discriminant analysis, the generalized boosting model, and classification and regression tree analysis) and complex models (artificial neural network, random forest, genetic algorithm for rule set production and maximum entropy approaches). Our objectives are: (i) to compare the strengths and weaknesses of the models, their characteristics and identify suitable situations for their use (in terms of data type and species-environment relationships) and (ii) to provide guidelines for model application, including 3 steps: model selection, model formulation and parameter estimation. PMID:23731809

  5. A global plan policy for coherent co-operation in distributed dynamic load balancing algorithms

    NASA Astrophysics Data System (ADS)

    Kara, M.

    1995-12-01

    Distributed-controlled dynamic load balancing algorithms are known to have several advantages over centralized algorithms, such as scalability and fault tolerance. Distributed implies that the control is decentralized and that a copy of the algorithm (called a scheduler) is replicated on each host of the network. However, distributed control also contributes to a lack of global goals and a lack of coherence. This paper presents a new algorithm called DGP (decentralized global plans) that addresses the problem of coherence and co-ordination in distributed dynamic load balancing algorithms. The DGP algorithm is based on a strategy called global plans (GP), and aims at maintaining all computational loads of a distributed system within a band called delta. The rationale for the design of DGP is to allow each scheduler to consider the actions of its peer schedulers. With this level of co-ordination, the schedulers can act more as a coherent team. This new approach first explicitly specifies a global goal and then designs a strategy around this global goal such that each scheduler (i) takes into account local decisions made by other schedulers; (ii) takes into account the effect of its local decisions on the overall system; and (iii) ensures load balancing. An experimental evaluation of DGP with two other well-known dynamic load balancing algorithms published in the literature shows that DGP performs consistently better. More significantly, the results indicate that the global plan approach provides a better framework for the design of distributed dynamic load balancing algorithms.

  6. Knowledge-based tracking algorithm

    NASA Astrophysics Data System (ADS)

    Corbeil, Allan F.; Hawkins, Linda J.; Gilgallon, Paul F.

    1990-10-01

    This paper describes the Knowledge-Based Tracking (KBT) algorithm for which a real-time flight test demonstration was recently conducted at Rome Air Development Center (RADC). In KBT processing, the radar signal in each resolution cell is thresholded at a lower than normal setting to detect low-RCS targets. This lower threshold produces a larger than normal false alarm rate. Therefore, additional signal processing, including spectral filtering, CFAR and knowledge-based acceptance testing, is performed to eliminate some of the false alarms. TSC's knowledge-based Track-Before-Detect (TBD) algorithm is then applied to the data from each azimuth sector to detect target tracks. In this algorithm, tentative track templates are formed for each threshold crossing and knowledge-based association rules are applied to the range, Doppler, and azimuth measurements from successive scans. Lastly, an M-association out of N-scan rule is used to declare a detection. This scan-to-scan integration enhances the probability of target detection while maintaining an acceptably low output false alarm rate. For a real-time demonstration of the KBT algorithm, the L-band radar in the Surveillance Laboratory (SL) at RADC was used to illuminate a small Cessna 310 test aircraft. The received radar signal was digitized and processed by an ST-100 Array Processor and VAX computer network in the lab. The ST-100 performed all of the radar signal processing functions, including Moving Target Indicator (MTI) pulse cancelling, FFT Doppler filtering, and CFAR detection. The VAX computers performed the remaining range-Doppler clustering, beamsplitting and TBD processing functions. The KBT algorithm provided a 9.5 dB improvement relative to single-scan performance with a nominal real-time delay of less than one second between illumination and display.
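
    A minimal sketch of an M-association-out-of-N-scan rule of the kind mentioned above, applied to a boolean history of scan-to-scan associations for one tentative track; M, N, and the input sequence are placeholders:

        from collections import deque

        def m_of_n_detector(associations, m=3, n=5):
            """Declare a detection at any scan where at least m of the
            last n scans produced an association for this track."""
            window = deque(maxlen=n)
            declared = []
            for scan, hit in enumerate(associations):
                window.append(bool(hit))
                if sum(window) >= m:
                    declared.append(scan)
            return declared

        print(m_of_n_detector([1, 0, 1, 1, 0, 0, 1, 0, 1, 1]))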

  7. Linear Bregman algorithm implemented in parallel GPU

    NASA Astrophysics Data System (ADS)

    Li, Pengyan; Ke, Jue; Sui, Dong; Wei, Ping

    2015-08-01

    At present, most compressed sensing (CS) algorithms have poor convergence speed and are thus difficult to run on a PC. To deal with this issue, we use a parallel GPU to implement a broadly used compressed sensing algorithm, the linear Bregman algorithm. The linear iterative Bregman algorithm is a reconstruction algorithm proposed by Osher and Cai. Compared with other CS reconstruction algorithms, the linear Bregman algorithm involves only vector and matrix multiplication and a thresholding operation, and is simpler and more efficient to program. We use C as the development language and adopt CUDA (Compute Unified Device Architecture) as the parallel computing architecture. In this paper, we compare the parallel Bregman algorithm with the traditional CPU implementation of the Bregman algorithm. In addition, we also compare the parallel Bregman algorithm with other CS reconstruction algorithms, such as the OMP and TwIST algorithms. Compared with these two algorithms, the results show that the parallel Bregman algorithm needs less time, and is thus more convenient for real-time object reconstruction, which is important given the fast-growing demands of information technology.
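
    The iteration really is just matrix-vector products plus a soft threshold, which is what makes it GPU-friendly; a minimal NumPy sketch of the linearized Bregman iteration for min ||u||_1 subject to Au = b (the step size and threshold below are illustrative):

        import numpy as np

        def soft(v, mu):
            return np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)

        def linearized_bregman(A, b, mu=5.0, iters=2000):
            """v <- v + A^T (b - A u);  u <- delta * soft(v, mu).
            Only mat-vecs and thresholding, hence easy to parallelize."""
            delta = 1.0 / np.linalg.norm(A, 2) ** 2   # step size <= 1/||A||^2
            v = np.zeros(A.shape[1])
            u = np.zeros(A.shape[1])
            for _ in range(iters):
                v = v + A.T @ (b - A @ u)
                u = delta * soft(v, mu)
            return u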

  8. Multi-Objective Scheduling for the Cluster II Constellation

    NASA Technical Reports Server (NTRS)

    Johnston, Mark D.; Giuliano, Mark

    2011-01-01

    This paper describes the application of the MUSE multi-objective scheduling framework to the Cluster II WBD scheduling domain. Cluster II is an ESA four-spacecraft constellation designed to study the plasma environment of the Earth and its magnetosphere. One of the instruments on each of the four spacecraft is the Wide Band Data (WBD) plasma wave experiment. We have applied the MUSE evolutionary algorithm to the scheduling problem represented by this instrument, and the result has been adopted and utilized by the WBD schedulers for nearly a year. This paper describes the WBD scheduling problem, its representation in MUSE, and some of the visualization elements that provide insight into objective value tradeoffs.

  9. Birkhoffian symplectic algorithms derived from Hamiltonian symplectic algorithms

    NASA Astrophysics Data System (ADS)

    Xin-Lei, Kong; Hui-Bin, Wu; Feng-Xiang, Mei

    2016-01-01

    In this paper, we focus on the construction of structure-preserving algorithms for Birkhoffian systems, based on existing symplectic schemes for the Hamiltonian equations. The key of the method is to seek an invertible transformation which reduces the Birkhoffian equations to the Hamiltonian equations. When such a transformation exists, applying the corresponding inverse map to a symplectic discretization of the Hamiltonian equations yields difference schemes which are verified to be Birkhoffian symplectic for the original Birkhoffian equations. To illustrate the operation of the method, we construct several such algorithms for the linear damped oscillator and the single pendulum with linear dissipation, respectively. All of them exhibit excellent numerical behavior, especially in preserving conserved quantities. Project supported by the National Natural Science Foundation of China (Grant No. 11272050), the Excellent Young Teachers Program of North China University of Technology (Grant No. XN132), and the Construction Plan for Innovative Research Team of North China University of Technology (Grant No. XN129).

  10. Carnitine palmitoyltransferase II deficiency

    PubMed Central

    Roe, C R.; Yang, B-Z; Brunengraber, H; Roe, D S.; Wallace, M; Garritson, B K.

    2008-01-01

    Background: Carnitine palmitoyltransferase II (CPT II) deficiency is an important cause of recurrent rhabdomyolysis in children and adults. Current treatment includes dietary fat restriction, with increased carbohydrate intake and exercise restriction to avoid muscle pain and rhabdomyolysis. Methods: CPT II enzyme assay, DNA mutation analysis, quantitative analysis of acylcarnitines in blood and cultured fibroblasts, urinary organic acids, the standardized 36-item Short-Form Health Status survey (SF-36) version 2, and bioelectric impedance for body fat composition. Diet treatment with triheptanoin at 30% to 35% of total daily caloric intake was used for all patients. Results: Seven patients with CPT II deficiency were studied from 7 to 61 months on the triheptanoin (anaplerotic) diet. Five had previous episodes of rhabdomyolysis requiring hospitalizations and muscle pain on exertion prior to the diet (two younger patients had not had rhabdomyolysis). While on the diet, only two patients experienced mild muscle pain with exercise. During short periods of noncompliance, two patients experienced rhabdomyolysis with exercise. None experienced rhabdomyolysis or hospitalizations while on the diet. All patients returned to normal physical activities including strenuous sports. Exercise restriction was eliminated. Previously abnormal SF-36 physical composite scores returned to normal levels that persisted for the duration of the therapy in all five symptomatic patients. Conclusions: The triheptanoin diet seems to be an effective therapy for adult-onset carnitine palmitoyltransferase II deficiency. GLOSSARY ALT = alanine aminotransferase; AST = aspartate aminotransferase; ATP = adenosine triphosphate; BHP = β-hydroxypentanoate; BKP = β-ketopentanoate; BKP-CoA = β-ketopentanoyl–coenzyme A; BUN = blood urea nitrogen; CAC = citric acid cycle; CoA = coenzyme A; CPK = creatine phosphokinase; CPT II = carnitine palmitoyltransferase II; LDL = low-density lipoprotein; MCT

  11. Why is Boris Algorithm So Good?

    SciTech Connect

    Qin, Hong, et al.

    2013-03-03

    Due to its excellent long term accuracy, the Boris algorithm is the de facto standard for advancing a charged particle. Despite its popularity, up to now there has been no convincing explanation why the Boris algorithm has this advantageous feature. In this letter, we provide an answer to this question. We show that the Boris algorithm conserves phase space volume, even though it is not symplectic. The global bound on energy error typically associated with symplectic algorithms still holds for the Boris algorithm, making it an effective algorithm for the multi-scale dynamics of plasmas.

  12. Why is Boris algorithm so good?

    SciTech Connect

    Qin, Hong; Plasma Physics Laboratory, Princeton University, Princeton, New Jersey 08543 ; Zhang, Shuangxi; Xiao, Jianyuan; Liu, Jian; Sun, Yajuan; Tang, William M.

    2013-08-15

    Due to its excellent long term accuracy, the Boris algorithm is the de facto standard for advancing a charged particle. Despite its popularity, up to now there has been no convincing explanation why the Boris algorithm has this advantageous feature. In this paper, we provide an answer to this question. We show that the Boris algorithm conserves phase space volume, even though it is not symplectic. The global bound on energy error typically associated with symplectic algorithms still holds for the Boris algorithm, making it an effective algorithm for the multi-scale dynamics of plasmas.
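
    For reference, a minimal non-relativistic sketch of the Boris update itself: a half electric kick, a rotation about the magnetic field, and a second half electric kick; the fields and step size are placeholders:

        import numpy as np

        def boris_push(x, v, q_over_m, E, B, dt):
            """Advance one charged particle by dt with the Boris scheme."""
            v_minus = v + 0.5 * dt * q_over_m * E        # half electric kick
            t = 0.5 * dt * q_over_m * B                  # rotation vector
            s = 2.0 * t / (1.0 + np.dot(t, t))
            v_prime = v_minus + np.cross(v_minus, t)     # rotate about B
            v_plus = v_minus + np.cross(v_prime, s)
            v_new = v_plus + 0.5 * dt * q_over_m * E     # half electric kick
            return x + dt * v_new, v_new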

  13. Unsupervised and stable LBG algorithm for data classification: application to aerial multicomponent images

    NASA Astrophysics Data System (ADS)

    Taher, A.; Chehdi, K.; Cariou, C.

    2015-10-01

    In this paper a stable and unsupervised Linde-Buzo-Gray (LBG) algorithm named LBGO is presented. The originality of the proposed algorithm relies on: (i) the use of an adaptive incremental technique to initialize the class centres, which removes the dependence on intermediate initializations; this makes the algorithm stable and deterministic, so the classification results do not vary from one run to another; and (ii) unsupervised evaluation criteria applied to the intermediate classification results to estimate the optimal number of classes, which makes the algorithm unsupervised. The efficiency of this optimized version of LBG is shown through experimental results on synthetic and real aerial hyperspectral data. More precisely, we have tested the proposed classification approach in three respects: first for its stability, second for its correct classification rate, and third for the correct estimation of the number of classes.

  14. Design and Implementation of Hybrid CORDIC Algorithm Based on Phase Rotation Estimation for NCO

    PubMed Central

    Zhang, Chaozhu; Han, Jinan; Li, Ke

    2014-01-01

    The numerically controlled oscillator (NCO) has wide application in radar, digital receivers, and software radio systems. This paper first introduces the traditional CORDIC algorithm. Then, in order to improve computing speed and save resources, it proposes a hybrid CORDIC algorithm based on phase rotation estimation for use in an NCO. By estimating the direction of part of the phase rotations, the algorithm reduces the number of phase rotations and add-subtract units, thereby decreasing delay. Furthermore, the numerically controlled oscillator is simulated and implemented with the Quartus II and ModelSim software. Finally, simulation results indicate that the improvement over the traditional CORDIC algorithm is achieved in terms of ease of computation, resource utilization, and computing speed/delay while maintaining precision. It is suitable for high-speed, high-precision digital modulation and demodulation. PMID:25110750

  15. Design and implementation of hybrid CORDIC algorithm based on phase rotation estimation for NCO.

    PubMed

    Zhang, Chaozhu; Han, Jinan; Li, Ke

    2014-01-01

    The numerically controlled oscillator (NCO) has wide application in radar, digital receivers, and software radio systems. This paper first introduces the traditional CORDIC algorithm. Then, in order to improve computing speed and save resources, it proposes a hybrid CORDIC algorithm based on phase rotation estimation for use in an NCO. By estimating the direction of part of the phase rotations, the algorithm reduces the number of phase rotations and add-subtract units, thereby decreasing delay. Furthermore, the numerically controlled oscillator is simulated and implemented with the Quartus II and ModelSim software. Finally, simulation results indicate that the improvement over the traditional CORDIC algorithm is achieved in terms of ease of computation, resource utilization, and computing speed/delay while maintaining precision. It is suitable for high-speed, high-precision digital modulation and demodulation. PMID:25110750
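
    For context, a minimal sketch of the conventional rotation-mode CORDIC that the hybrid scheme refines; it produces cos/sin of a phase using only shifts, adds, and a final constant scale (the iteration count is arbitrary):

        import math

        def cordic_sin_cos(theta, n_iters=24):
            """Drive the residual angle to zero with rotations by atan(2^-i)."""
            angles = [math.atan(2.0 ** -i) for i in range(n_iters)]
            gain = 1.0
            for i in range(n_iters):
                gain /= math.sqrt(1.0 + 2.0 ** (-2 * i))  # cumulative scale
            x, y, z = 1.0, 0.0, theta
            for i in range(n_iters):
                d = 1.0 if z >= 0 else -1.0               # rotation direction
                x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
                z -= d * angles[i]
            return x * gain, y * gain   # (cos, sin), valid for |theta| < ~1.74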

  16. Systolic algorithms and their implementation

    SciTech Connect

    Kung, H.T.

    1984-01-01

    Very high performance computer systems must rely heavily on parallelism since there are severe physical and technological limits on the ultimate speed of any single processor. The systolic array concept developed in the last several years allows effective use of a very large number of processors in parallel. This article illustrates the basic ideas by reviewing a systolic array design for matrix triangularization and describing its use in the on-the-fly updating of Cholesky decomposition of covariance matrices-a crucial computation in adaptive signal processing. Following this are discussions on issues related to the hardware implementation of systolic algorithms in general, and some guidelines for designing systolic algorithms that will be convenient for implementation. 33 references.

  17. A fast meteor detection algorithm

    NASA Astrophysics Data System (ADS)

    Gural, P.

    2016-01-01

    A low-latency meteor detection algorithm for use with fast steering mirrors was previously developed to track and telescopically follow meteors in real time (Gural, 2007). It has been rewritten as a generic clustering and tracking software module for meteor detection that meets the demanding throughput requirements of a Raspberry Pi while maintaining a high probability of detection. The software interface is generalized to work with various forms of front-end video pre-processing and provides a rich product set of parameterized line detection metrics. The discussion includes the Maximum Temporal Pixel (MTP) compression technique as a fast thresholding option for feeding the detection module, the detection algorithm trades made for maximum processing throughput, details of the clustering and tracking methodology, processing products, performance metrics, and a general interface description.
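
    Under the natural reading of MTP compression, a frame stack is collapsed to the per-pixel maximum (and, usefully, the frame index at which it occurred), so a whole block of video can be thresholded at once; a minimal NumPy sketch of that interpretation:

        import numpy as np

        def max_temporal_pixel(frames):
            """Collapse a (time, rows, cols) stack to its per-pixel maximum
            and the frame index at which each maximum occurred."""
            stack = np.asarray(frames)
            return stack.max(axis=0), stack.argmax(axis=0)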

  18. An NOy* Algorithm for SOLVE

    NASA Technical Reports Server (NTRS)

    Loewenstein, M.; Greenblatt. B. J.; Jost, H.; Podolske, J. R.; Elkins, Jim; Hurst, Dale; Romanashkin, Pavel; Atlas, Elliott; Schauffler, Sue; Donnelly, Steve; Condon, Estelle (Technical Monitor)

    2000-01-01

    De-nitrification and excess re-nitrification were widely observed by ER-2 instruments in the Arctic vortex during SOLVE in winter/spring 2000. Analysis of these events requires knowledge of the initial, pre-vortex state of the sampled air masses. The canonical relationship of NOy to the long-lived tracer N2O observed in the unperturbed stratosphere is generally used for this purpose. In this paper we attempt to establish the current unperturbed NOy:N2O relationship (NOy* algorithm) using the ensemble of extra-vortex data from in situ instruments flying on the ER-2 and DC-8, and from the Mark IV remote measurements on the OMS balloon. Initial analysis indicates a change in the SOLVE NOy* from the values predicted by the 1994 Northern Hemisphere NOy* algorithm, which was derived from observations in the ASHOE/MAESA campaign.

  19. A spectral canonical electrostatic algorithm

    NASA Astrophysics Data System (ADS)

    Webb, Stephen D.

    2016-03-01

    Studying single-particle dynamics over many periods of oscillations is a well-understood problem solved using symplectic integration. Such integration schemes derive their update sequence from an approximate Hamiltonian, guaranteeing that the geometric structure of the underlying problem is preserved. Simulating a self-consistent system over many oscillations can introduce numerical artifacts such as grid heating. This unphysical heating stems from using non-symplectic methods on Hamiltonian systems. With this guidance, we derive an electrostatic algorithm using a discrete form of Hamilton's principle. The resulting algorithm, a gridless spectral electrostatic macroparticle model, does not exhibit the unphysical heating typical of most particle-in-cell methods. We present results for this algorithm on a two-body problem as an example of its energy- and momentum-conserving properties.

  20. Innovations in Lattice QCD Algorithms

    SciTech Connect

    Konstantinos Orginos

    2006-06-25

    Lattice QCD calculations demand a substantial amount of computing power in order to achieve the high precision results needed to better understand the nature of strong interactions, assist experiment to discover new physics, and predict the behavior of a diverse set of physical systems ranging from the proton itself to astrophysical objects such as neutron stars. However, computer power alone is clearly not enough to tackle the calculations we need to be doing today. A steady stream of recent algorithmic developments has made an important impact on the kinds of calculations we can currently perform. In this talk I am reviewing these algorithms and their impact on the nature of lattice QCD calculations performed today.

  1. Algorithm refinement for fluctuating hydrodynamics

    SciTech Connect

    Williams, Sarah A.; Bell, John B.; Garcia, Alejandro L.

    2007-07-03

    This paper introduces an adaptive mesh and algorithm refinement method for fluctuating hydrodynamics. This particle-continuum hybrid simulates the dynamics of a compressible fluid with thermal fluctuations. The particle algorithm is direct simulation Monte Carlo (DSMC), a molecular-level scheme based on the Boltzmann equation. The continuum algorithm is based on the Landau-Lifshitz Navier-Stokes (LLNS) equations, which incorporate thermal fluctuations into macroscopic hydrodynamics by using stochastic fluxes. It uses a recently-developed solver for LLNS, based on third-order Runge-Kutta. We present numerical tests of systems in and out of equilibrium, including time-dependent systems, and demonstrate dynamic adaptive refinement by the computation of a moving shock wave. Mean system behavior and second moment statistics of our simulations match theoretical values and benchmarks well. We find that particular attention should be paid to the spectrum of the flux at the interface between the particle and continuum methods, specifically for the non-hydrodynamic (kinetic) time scales.

  2. Optimisation algorithms for microarray biclustering.

    PubMed

    Perrin, Dimitri; Duhamel, Christophe

    2013-01-01

    In providing simultaneous information on expression profiles for thousands of genes, microarray technologies have, in recent years, been largely used to investigate mechanisms of gene expression. Clustering and classification of such data can, indeed, highlight patterns and provide insight on biological processes. A common approach is to consider genes and samples of microarray datasets as nodes in a bipartite graph, where edges are weighted, e.g., based on the expression levels. In this paper, using a previously evaluated weighting scheme, we focus on search algorithms and evaluate, in the context of biclustering, several variations of Genetic Algorithms. We also introduce a new heuristic, "Propagate", which consists in recursively evaluating neighbour solutions with one more or one fewer active condition. The results obtained on three well-known datasets show that, for a given weighting scheme, optimal or near-optimal solutions can be identified. PMID:24109756

  3. A possible hypercomputational quantum algorithm

    NASA Astrophysics Data System (ADS)

    Sicard, Andres; Velez, Mario; Ospina, Juan

    2005-05-01

    The term 'hypermachine' denotes any data processing device (theoretical or implementable) capable of carrying out tasks that cannot be performed by a Turing machine. We present a possible quantum algorithm for a classically non-computable decision problem, Hilbert's tenth problem; more specifically, we present a possible hypercomputation model based on quantum computation. Our algorithm is inspired by the one proposed by Tien D. Kieu, but we have selected the infinite square well instead of the (one-dimensional) simple harmonic oscillator as the underlying physical system. Our model exploits the quantum adiabatic process and the characteristics of the representation of the dynamical Lie algebra su(1,1) associated with the infinite square well.

  4. MUSIC algorithms for rebar detection

    NASA Astrophysics Data System (ADS)

    Solimene, Raffaele; Leone, Giovanni; Dell'Aversano, Angela

    2013-12-01

    The MUSIC (MUltiple SIgnal Classification) algorithm is employed to detect and localize an unknown number of scattering objects which are small in size compared to the wavelength. The ensemble of objects to be detected consists of both strong and weak scatterers. This represents a scattering environment that is challenging for detection purposes, as strong scatterers tend to mask the weak ones. Consequently, the detection of the more weakly scattering objects is not always guaranteed and can be completely impaired when the noise corrupting the data is relatively high. To overcome this drawback, a new technique is proposed here, starting from the idea of applying a two-stage MUSIC algorithm. In the first stage strong scatterers are detected. Then, information concerning their number and location is employed in the second stage, which focuses only on the weak scatterers. The role of an adequate scattering model is emphasized, as it drastically improves detection performance in realistic scenarios.
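
    A minimal one-dimensional sketch of the MUSIC pseudospectrum computation that underlies both stages, written for a half-wavelength uniform linear array; the array size, snapshot count, and angle grid are placeholders:

        import numpy as np

        def music_spectrum(X, n_sources, n_grid=360):
            """X: (sensors, snapshots) data. Peaks of the returned
            pseudospectrum mark the estimated arrival angles."""
            m = X.shape[0]
            R = X @ X.conj().T / X.shape[1]          # sample covariance
            w, V = np.linalg.eigh(R)                 # eigenvalues ascending
            En = V[:, : m - n_sources]               # noise subspace
            theta = np.linspace(-np.pi / 2, np.pi / 2, n_grid)
            P = np.empty(n_grid)
            for i, th in enumerate(theta):
                a = np.exp(1j * np.pi * np.arange(m) * np.sin(th))  # steering
                P[i] = 1.0 / np.linalg.norm(En.conj().T @ a) ** 2
            return theta, P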

  5. Systolic systems: algorithms and complexity

    SciTech Connect

    Chang, J.H.

    1986-01-01

    This thesis has two main contributions. The first is the design of efficient systolic algorithms for solving recurrence equations, dynamic programming problems, scheduling problems, as well as new systolic implementation of data structures such as stacks, queues, priority queues, and dictionary machines. The second major contribution is the investigation of the computational power of systolic arrays in comparison to sequential models and other models of parallel computation.

  6. Algorithms Could Automate Cancer Diagnosis

    NASA Technical Reports Server (NTRS)

    Baky, A. A.; Winkler, D. G.

    1982-01-01

    Five new algorithms form a complete statistical procedure for quantifying cell abnormalities from digitized images. The procedure could be the basis for automated detection and diagnosis of cancer. The objective of the procedure is to assign each cell an atypia status index (ASI), which quantifies its level of abnormality. It is possible that ASI values will be accurate and economical enough to allow diagnoses to be made quickly and accurately by computer processing of laboratory specimens extracted from patients.

  7. Algorithms of NCG geometrical module

    NASA Astrophysics Data System (ADS)

    Gurevich, M. I.; Pryanichnikov, A. V.

    2012-12-01

    The methods and algorithms of the versatile NCG geometrical module used in the MCU code system are described. The NCG geometrical module is based on the Monte Carlo method and intended for solving equations of particle transport. The versatile combinatorial body method, the grid method, and methods of equalized cross sections and grain structures are used for description of the system geometry and calculation of trajectories.

  8. Algorithms of NCG geometrical module

    SciTech Connect

    Gurevich, M. I.; Pryanichnikov, A. V.

    2012-12-15

    The methods and algorithms of the versatile NCG geometrical module used in the MCU code system are described. The NCG geometrical module is based on the Monte Carlo method and intended for solving equations of particle transport. The versatile combinatorial body method, the grid method, and methods of equalized cross sections and grain structures are used for description of the system geometry and calculation of trajectories.

  9. Computed laminography and reconstruction algorithm

    NASA Astrophysics Data System (ADS)

    Que, Jie-Min; Cao, Da-Quan; Zhao, Wei; Tang, Xiao; Sun, Cui-Li; Wang, Yan-Fang; Wei, Cun-Feng; Shi, Rong-Jian; Wei, Long; Yu, Zhong-Qiang; Yan, Yong-Lian

    2012-08-01

    Computed laminography (CL) is an alternative to computed tomography when large objects are to be inspected with high resolution. This is especially true for planar objects. In this paper, we set up a new scanning geometry for CL and study the algebraic reconstruction technique (ART) for CL imaging. We compare the results of ART with different weighting functions by computer simulation with a digital phantom. The results show that the ART algorithm is a good choice for the CL system.
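
    The ART iteration itself is the classic Kaczmarz sweep over the measurement rows; a minimal sketch with a constant relaxation factor, where A holds the projection weights and b the measured projections (the relaxation value is illustrative):

        import numpy as np

        def art(A, b, n_sweeps=20, lam=0.25):
            """Project the image estimate x onto each hyperplane
            a_i . x = b_i in turn (relaxed Kaczmarz updates)."""
            x = np.zeros(A.shape[1])
            row_norms = (A ** 2).sum(axis=1)
            for _ in range(n_sweeps):
                for i in range(A.shape[0]):
                    if row_norms[i] > 0.0:
                        r = (b[i] - A[i] @ x) / row_norms[i]
                        x += lam * r * A[i]
            return x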

  10. Efficient algorithms for proximity problems

    SciTech Connect

    Wee, Y.C.

    1989-01-01

    Computational geometry is currently a very active area of research in computer science because of its applications to VLSI design, database retrieval, robotics, pattern recognition, etc. The author studies a number of proximity problems which are fundamental in computational geometry. Optimal or improved sequential and parallel algorithms for these problems are presented. Along the way, some relations among the proximity problems are also established. Chapter 2 presents an O(N log^2 N) time divide-and-conquer algorithm for solving the all-pairs geographic nearest neighbors problem (GNN) for a set of N sites in the plane under any L_p metric. Chapter 3 presents an O(N log N) divide-and-conquer algorithm for computing the angle-restricted Voronoi diagram for a set of N sites in the plane. Chapter 4 introduces a new data structure for the dynamic version of GNN. Chapter 5 defines a new formalism called the quasi-valid range aggregation. This formalism leads to a new and simple method for reducing non-range query-like problems to range queries and often to orthogonal range queries, with immediate applications to the attracted neighbor and the planar all-pairs nearest neighbors problem. Chapter 6 introduces a new approach for the construction of the Voronoi diagram. Using this approach, we design an O(log N) time, O(N) processor algorithm for constructing the Voronoi diagram with L_1 and L_infinity metrics on a CREW PRAM machine. Even though the GNN and the Delaunay triangulation (DT) do not have an inclusion relation, we show, using some range-type queries, how to efficiently construct DT from the GNN relations over a constant number of angular ranges.

  11. Algorithm Helps Monitor Engine Operation

    NASA Technical Reports Server (NTRS)

    Eckerling, Sherry J.; Panossian, Hagop V.; Kemp, Victoria R.; Taniguchi, Mike H.; Nelson, Richard L.

    1995-01-01

    Real-Time Failure Control (RTFC) algorithm part of automated monitoring-and-shutdown system being developed to ensure safety and prevent major damage to equipment during ground tests of main engine of space shuttle. Includes redundant sensors, controller voting logic circuits, automatic safe-limit logic circuits, and conditional-decision logic circuits, all monitored by human technicians. Basic principles of system also applicable to stationary powerplants and other complex machinery systems.

  12. Feature and Statistical Model Development in Structural Health Monitoring

    NASA Astrophysics Data System (ADS)

    Kim, Inho

    , are trained and utilized to interpret nonlinear far-field wave patterns. Next, a novel bridge scour estimation approach that comprises advantages of both empirical and data-driven models is developed. Two field datasets from the literature are used, and a Support Vector Machine (SVM), a machine-learning algorithm, is used to fuse the field data samples and classify the data with physical phenomena. The Fast Non-dominated Sorting Genetic Algorithm (NSGA-II) is evaluated on the model performance objective functions to search for Pareto optimal fronts.

  13. Determining the Numerical Stability of Quantum Chemistry Algorithms.

    PubMed

    Knizia, Gerald; Li, Wenbin; Simon, Sven; Werner, Hans-Joachim

    2011-08-01

    We present a simple, broadly applicable method for determining the numerical properties of quantum chemistry algorithms. The method deliberately introduces random numerical noise into computations, which is of the same order of magnitude as the floating point precision. Accordingly, repeated runs of an algorithm give slightly different results, which can be analyzed statistically to obtain precise estimates of its numerical stability. This noise is produced by automatic code injection into regular compiler output, so that no substantial programming effort is required, only a recompilation of the affected program sections. The method is applied to investigate: (i) the numerical stability of the three-center Obara-Saika integral evaluation scheme for high angular momenta, (ii) if coupled cluster perturbative triples can be evaluated with single precision arithmetic, (iii) how to implement the density fitting approximation in Møller-Plesset perturbation theory (MP2) most accurately, and (iv) which parts of density fitted MP2 can be safely evaluated with single precision arithmetic. In the integral case, we find a numerical instability in an equation that is used in almost all integral programs. Due to the results of (ii) and (iv), we conjecture that single precision arithmetic can be applied whenever a calculation is done in an orthogonal basis set and excessively long linear sums are avoided. PMID:26606614
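
    The core idea, repeating a computation under random perturbations at the level of the floating-point precision and analyzing the spread, can be imitated at the input level without any code injection; a hedged sketch of that simplified variant (the paper's actual mechanism rewrites compiler output, which this does not do):

        import numpy as np

        def stability_estimate(f, x, n_runs=100, rel_noise=2.0 ** -52):
            """Evaluate f repeatedly on inputs jittered by ~machine epsilon
            and report the spread of the results across runs."""
            rng = np.random.default_rng(0)
            vals = []
            for _ in range(n_runs):
                jitter = 1.0 + rel_noise * rng.uniform(-1.0, 1.0, np.shape(x))
                vals.append(f(x * jitter))
            vals = np.asarray(vals)
            return vals.mean(), vals.std()   # std ~ numerical (in)stability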

  14. Mod II engine performance

    NASA Technical Reports Server (NTRS)

    Richey, Albert E.; Huang, Shyan-Cherng

    1987-01-01

    The testing of a prototype of an automotive Stirling engine, the Mod II, is discussed. The Mod II is a one-piece cast block with a V-4 single-crankshaft configuration and an annular regenerator/cooler design. The initial testing of Mod II concentrated on the basic engine, with auxiliaries driven by power sources external to the engine. The performance of the engine was tested at 720 C set temperature and 820 C tube temperature. At 720 C, it is observed that the power deficiency is speed dependent and linear, with a weak pressure dependency, and at 820 C, the power deficiency is speed and pressure dependent. The effects of buoyancy and nozzle spray pattern on the heater temperature spread are investigated. The characterization of the oil pump and the operating cycle and temperature spread tests are proposed for further evaluation of the engine.

  15. About APPLE II Operation

    SciTech Connect

    Schmidt, T.; Zimoch, D.

    2007-01-19

    The operation of an APPLE II based undulator beamline with all its polarization states (linear horizontal and vertical, circular and elliptical, and continuous variation of the linear vector) requires an effective description allowing an automated calculation of gap and shift parameters as a function of energy and operation mode. The extension of the linear polarization range from 0 to 180 deg. requires 4 shiftable magnet arrays, permitting use of the APU (adjustable phase undulator) concept. Studies for a pure fixed-gap APPLE II for the SLS revealed surprising symmetries between circular and linear polarization modes, allowing for simplified operation. A semi-analytical model covering all types of APPLE II and its implementation will be presented.

  16. PEP-II Status

    SciTech Connect

    Sullivan, M.; Bertsche, K.; Browne, M.; Cai, Y.; Cheng, W.; Colocho, W.; Decker, F.-J.; Donald, M.; Ecklund, S.; Erickson, R.; Fisher, A.S.; Fox, J.; Heifets, S.; Himel, T.; Iverson, R.; Kulikov, A.; Novokhatski, A.; Pacak, V.; Pivi, M.; Rivetta, C.; Ross, M. (SLAC; Saclay; Frascati)

    2008-07-25

    PEP-II and BaBar have just finished run 7, the last run of the SLAC B-factory. PEP-II was one of the few high-current e+e- colliding accelerators and holds the present world record for stored electrons and stored positrons. It has stored 2.07 A of electrons, nearly 3 times the design current of 0.75 A, and it has stored 3.21 A of positrons, 1.5 times the design current of 2.14 A. High-current beams require careful design of several systems. The feedback systems that control instabilities, the RF system stability loops, and especially the vacuum systems have to handle the higher power demands. We present here some of the accomplishments of the PEP-II accelerator and some of the problems we encountered while running high-current beams.

  17. About APPLE II Operation

    NASA Astrophysics Data System (ADS)

    Schmidt, T.; Zimoch, D.

    2007-01-01

    The operation of an APPLE II based undulator beamline with all its polarization states (linear horizontal and vertical, circular and elliptical, and continuous variation of the linear vector) requires an effective description allowing an automated calculation of gap and shift parameters as a function of energy and operation mode. The extension of the linear polarization range from 0 to 180° requires 4 shiftable magnet arrays, permitting use of the APU (adjustable phase undulator) concept. Studies for a pure fixed-gap APPLE II for the SLS revealed surprising symmetries between circular and linear polarization modes, allowing for simplified operation. A semi-analytical model covering all types of APPLE II and its implementation will be presented.

  18. Algorithmic Strategies in Combinatorial Chemistry

    SciTech Connect

    Goldman, Deborah; Istrail, Sorin; Lancia, Giuseppe; Piccolboni, Antonio; Walenz, Brian

    2000-08-01

    Combinatorial chemistry is a powerful new technology in drug design and molecular recognition. It is a wet-laboratory methodology aimed at "massively parallel" screening of chemical compounds for the discovery of compounds that have a certain biological activity. The power of the method comes from the interaction between experimental design and computational modeling. Principles of "rational" drug design are used in the construction of combinatorial libraries to speed up the discovery of lead compounds with the desired biological activity. This paper presents algorithms, software development, and computational complexity analysis for problems arising in the design of combinatorial libraries for drug discovery. The authors provide exact polynomial-time algorithms and intractability results for several inverse problems, formulated as (chemical) graph reconstruction problems, related to the design of combinatorial libraries. These are the first rigorous algorithmic results in the literature. The authors also present results provided by the combinatorial chemistry software package OCOTILLO for combinatorial peptide design using real data libraries. The package provides exact solutions for general inverse problems based on shortest-path topological indices. The results are superior both in accuracy and computing time to the best software reports published in the literature. For 5-peptoid design, the computation is rigorously reduced to an exhaustive search of about 2% of the search space; the exact solutions are found in a few minutes.

  19. Algorithm validation using multicolor phantoms.

    PubMed

    Samarov, Daniel V; Clarke, Matthew L; Lee, Ji Youn; Allen, David W; Litorja, Maritoni; Hwang, Jeeseong

    2012-06-01

    We present a framework for hyperspectral image (HSI) analysis validation, specifically abundance fraction estimation, based on HSI measurements of water-soluble dye mixtures printed on microarray chips. We focus on the performance of two algorithms, the Least Absolute Shrinkage and Selection Operator (LASSO) and the Spatial LASSO (SPLASSO). The LASSO is a well-known statistical method for simultaneously performing model estimation and variable selection. In the context of estimating abundance fractions in an HSI scene, the "sparse" representations provided by the LASSO are appropriate, as not every pixel is expected to contain every endmember. The SPLASSO is a novel approach we introduce here for HSI analysis which takes the framework of the LASSO algorithm a step further and incorporates the rich spatial information available in HSI to further improve the estimates of abundance. We also introduce the dye mixture platform as a new benchmark data set for hyperspectral biomedical image processing and show our algorithm's improvement over the standard LASSO. PMID:22741077
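
    The LASSO baseline can be reproduced with a few lines of proximal gradient descent (ISTA); in this sketch X would hold the endmember spectra, y one pixel's spectrum, and b the estimated abundance fractions, with the penalty and iteration count as placeholders:

        import numpy as np

        def lasso_ista(X, y, lam=0.1, iters=500):
            """Minimize 0.5*||y - X b||^2 + lam*||b||_1 by iterative
            soft-thresholding."""
            L = np.linalg.norm(X, 2) ** 2        # Lipschitz constant of grad
            b = np.zeros(X.shape[1])
            for _ in range(iters):
                g = X.T @ (X @ b - y)            # gradient of the smooth part
                z = b - g / L
                b = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
            return b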

  20. A novel stochastic optimization algorithm.

    PubMed

    Li, B; Jiang, W

    2000-01-01

    This paper presents a new stochastic approach, SAGACIA, based on a proper integration of the simulated annealing algorithm (SAA), the genetic algorithm (GA), and the chemotaxis algorithm (CA) for solving complex optimization problems. SAGACIA combines the advantages of SAA, GA, and CA. It has the following features: (1) it is not a simple mix of SAA, GA, and CA; (2) it works from a population; (3) it can easily be used to solve optimization problems with either continuous or discrete variables, and it does not need coding and decoding; and (4) it can easily escape from local minima and converge quickly. Good solutions can be obtained in a very short time. The search process of SAGACIA can be explained with Markov chains. In this paper, it is proved that SAGACIA has the property of global asymptotic convergence. SAGACIA has been applied to solve such problems as scheduling, the training of artificial neural networks, and the optimization of complex functions. In all the test cases, the performance of SAGACIA is better than that of SAA, GA, and CA. PMID:18244742

  1. Spaceborne SAR Imaging Algorithm for Coherence Optimized

    PubMed Central

    Qiu, Zhiwei; Yue, Jianping; Wang, Xueqin; Yue, Shun

    2016-01-01

    This paper proposes a SAR imaging algorithm that maximizes coherence, building on existing SAR imaging algorithms. The basic idea of conventional SAR imaging is that the output signal attains the maximum signal-to-noise ratio (SNR) when optimal imaging parameters are used. A traditional imaging algorithm achieves the best focusing, but introduces decoherence in the subsequent interferometric processing. The algorithm proposed in this paper instead applies consistent imaging parameters when focusing the SAR echoes. Although the SNR of the output signal is slightly reduced, coherence is largely preserved, and an interferogram of high quality is finally obtained. In this paper, two scenes of Envisat ASAR data over Zhangbei are employed to test this algorithm. Compared with the interferogram from the traditional algorithm, the results show that this algorithm is more suitable for SAR interferometry (InSAR) research and applications. PMID:26871446

  2. An algorithm for generating abstract syntax trees

    NASA Technical Reports Server (NTRS)

    Noonan, R. E.

    1985-01-01

    The notion of an abstract syntax is discussed. An algorithm is presented for automatically deriving an abstract syntax directly from a BNF grammar. The implementation of this algorithm and its application to the grammar for Modula are discussed.

  3. Teaching Multiplication Algorithms from Other Cultures

    ERIC Educational Resources Information Center

    Lin, Cheng-Yao

    2007-01-01

    This article describes a number of multiplication algorithms from different cultures around the world: Hindu, Egyptian, Russian, Japanese, and Chinese. Students can learn these algorithms and better understand the operation and properties of multiplication.
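
    As one concrete example from that list, the Russian (peasant) method multiplies by repeated halving and doubling, summing the doubled values that sit beside an odd halved value; a short sketch:

        def russian_multiply(a, b):
            """Russian peasant multiplication via halving and doubling."""
            total = 0
            while a > 0:
                if a % 2 == 1:     # odd row: keep this partial product
                    total += b
                a //= 2            # halve, discarding the remainder
                b *= 2             # double
            return total

        assert russian_multiply(37, 13) == 481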

  4. SAGE II Ozone Analysis

    NASA Technical Reports Server (NTRS)

    Cunnold, Derek; Wang, Ray

    2002-01-01

    Publications from 1999-2002 describing research funded by the SAGE II contract to Dr. Cunnold and Dr. Wang are listed below. Our most recent accomplishments include a detailed analysis of the quality of SAGE II, v6.1, ozone measurements below 20 km altitude (Wang et al., 2002 and Kar et al., 2002) and an analysis of the consistency between SAGE upper stratospheric ozone trends and model predictions with emphasis on hemispheric asymmetry (Li et al., 2001). Abstracts of the 11 papers are attached.

  5. Experiment Tgv II

    NASA Astrophysics Data System (ADS)

    Čermák, P.; Štekl, I.; Beneš, P.; Brudanin, V. B.; Rukhadze, N. I.; Egorov, V. G.; Kovalenko, V. E.; Kovalík, A.; Salamatin, A. V.; Timkin, V. V.; Vylov, Ts.; Briancon, Ch.; Šimkovic, F.

    2004-07-01

    The project aims at the measurement of very rare double-beta decay processes of 106Cd and 48Ca. The experimental facility TGV II (Telescope Germanium Vertical) makes use of 32 HPGe planar detectors mounted in one common cryostat. The detectors are interleaved with thin foils containing ββ sources. Besides passive shielding against background radiation, made of pure copper, lead, and boron-doped polyethylene, additional techniques for background suppression based on digital pulse shape analysis are used. The experimental setup is located in the Modane underground laboratory (France). A review of the TGV II facility, its performance parameters, and capabilities is presented.

  6. Palladium (II) Hydrazopyrazolone Complexes

    NASA Astrophysics Data System (ADS)

    El-Maraghy, Salah B.; Salib, K. A.; Stefan, Shaker L.

    1989-12-01

    Palladium(II) complexes with 1-phenyl-3-methyl-4-(arylhydrazo)-5-pyrazolone dyes were studied spectrophotometrically. Pd(II) forms 1:1 and 1:2 complexes with the ligands by the replacement of their phenolic and hydrazo protons. The ligands behave as tridentate in the 1:1 complex and as bidentate in the 1:2 complex. The stability constants of these complexes depend on the type of substituents in the benzene ring of the arylazo moiety.

  7. 3DRISM Multigrid Algorithm for Fast Solvation Free Energy Calculations.

    PubMed

    Sergiievskyi, Volodymyr P; Fedorov, Maxim V

    2012-06-12

    In this paper we present a fast and accurate method for modeling solvation properties of organic molecules in water, with a main focus on predicting solvation (hydration) free energies of small organic compounds. The method is based on a combination of (i) a molecular theory, the three-dimensional reference interaction sites model (3DRISM); (ii) a fast multigrid algorithm for solving the high-dimensional 3DRISM integral equations; and (iii) a recently introduced universal correction (UC) for the 3DRISM solvation free energies by properly scaled molecular partial volume (3DRISM-UC, Palmer et al., J. Phys.: Condens. Matter 2010, 22, 492101). A fast multigrid algorithm is the core of the method because it helps to reduce the high computational costs associated with solving the 3DRISM equations. To facilitate future applications of the method, we performed benchmarking of the algorithm on a set of several model solutes in order to find optimal grid parameters and to test the performance and accuracy of the algorithm. We have shown that the proposed new multigrid algorithm is on average 24 times faster than the simple Picard method and at least 3.5 times faster than the MDIIS method which is currently actively used by the 3DRISM community (e.g., the MDIIS method has recently been implemented in a new 3DRISM implicit solvent routine in the recent release of the AmberTools 1.4 molecular modeling package (Luchko et al., J. Chem. Theory Comput. 2010, 6, 607-624)). We then benchmarked the multigrid algorithm with the chosen optimal parameters on a set of 99 organic compounds. We show that the average computational time required for one 3DRISM calculation is 3.5 min per small organic molecule (10-20 atoms) on a standard personal computer. We also benchmarked predicted solvation free energy values for all of the compounds in the set against the corresponding experimental data. We show that by using the proposed multigrid algorithm and the 3DRISM-UC model, it is possible to obtain good

  8. An algorithm for segmenting range imagery

    SciTech Connect

    Roberts, R.S.

    1997-03-01

    This report describes the technical accomplishments of the FY96 Cross Cutting and Advanced Technology (CC&AT) project at Los Alamos National Laboratory. The project focused on developing algorithms for segmenting range images. The image segmentation algorithm developed during the project is described here. In addition to segmenting range images, the algorithm can fuse multiple range images thereby providing true 3D scene models. The algorithm has been incorporated into the Rapid World Modelling System at Sandia National Laboratory.

  9. Algorithms and Requirements for Measuring Network Bandwidth

    SciTech Connect

    Jin, Guojun

    2002-12-08

    This report presents new algorithms for actively measuring (not estimating) available bandwidth with very low intrusion and for computing cross traffic, thus estimating the physical bandwidth; it provides mathematical proof that the algorithms are accurate, and addresses conditions, requirements, and limitations for new and existing algorithms for measuring network bandwidths. The report also discusses a number of important terminologies and issues for network bandwidth measurement, and introduces a fundamental parameter, the Maximum Burst Size, that is critical for implementing algorithms based on multiple packets.

  10. Algorithmic formulation of control problems in manipulation

    NASA Technical Reports Server (NTRS)

    Bejczy, A. K.

    1975-01-01

    The basic characteristics of manipulator control algorithms are discussed. The state of the art in the development of manipulator control algorithms is briefly reviewed. Different end-point control techniques are described, together with control algorithms which operate on external sensor (imaging, proximity, tactile, and torque/force) signals in real time. Manipulator control development at JPL is briefly described and illustrated with several figures. The JPL work pays special attention to the front, or operator input, end of the control algorithms.

  11. Efficient Algorithm for Rectangular Spiral Search

    NASA Technical Reports Server (NTRS)

    Brugarolas, Paul; Breckenridge, William

    2008-01-01

    An algorithm generates grid coordinates for a computationally efficient spiral search pattern covering an uncertain rectangular area spanned by a coordinate grid. The algorithm does not require that the grid be fixed; the algorithm can search indefinitely, expanding the grid and spiral, as needed, until the target of the search is found. The algorithm also does not require memory of coordinates of previous points on the spiral to generate the current point on the spiral.
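
    A minimal generator in the spirit of that description: it yields grid coordinates along an outward rectangular spiral from the origin, keeps no history of previous points, and can run indefinitely until the search target is found; the traversal order is one reasonable choice, not necessarily the one in the original report:

        from itertools import count, islice

        def spiral():
            """Yield (x, y) along legs of growing length:
            right 1, up 1, left 2, down 2, right 3, up 3, ..."""
            x, y = 0, 0
            yield x, y
            for leg in count(1):
                dx = 1 if leg % 2 else -1
                for _ in range(leg):          # horizontal leg
                    x += dx
                    yield x, y
                dy = 1 if leg % 2 else -1
                for _ in range(leg):          # vertical leg
                    y += dy
                    yield x, y

        print(list(islice(spiral(), 10)))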

  12. Algorithm for parametric community detection in networks.

    PubMed

    Bettinelli, Andrea; Hansen, Pierre; Liberti, Leo

    2012-07-01

    Modularity maximization is extensively used to detect communities in complex networks. It has been shown, however, that this method suffers from a resolution limit: Small communities may be undetectable in the presence of larger ones even if they are very dense. To alleviate this defect, various modifications of the modularity function have been proposed as well as multiresolution methods. In this paper we systematically study a simple model (proposed by Pons and Latapy [Theor. Comput. Sci. 412, 892 (2011)] and similar to the parametric model of Reichardt and Bornholdt [Phys. Rev. E 74, 016110 (2006)]) with a single parameter α that balances the fraction of within community edges and the expected fraction of edges according to the configuration model. An exact algorithm is proposed to find optimal solutions for all values of α as well as the corresponding successive intervals of α values for which they are optimal. This algorithm relies upon a routine for exact modularity maximization and is limited to moderate size instances. An agglomerative hierarchical heuristic is therefore proposed to address parametric modularity detection in large networks. At each iteration the smallest value of α for which it is worthwhile to merge two communities of the current partition is found. Then merging is performed and the data are updated accordingly. An implementation is proposed with the same time and space complexity as the well-known Clauset-Newman-Moore (CNM) heuristic [Phys. Rev. E 70, 066111 (2004)]. Experimental results on artificial and real world problems show that (i) communities are detected by both exact and heuristic methods for all values of the parameter α; (ii) the dendrogram summarizing the results of the heuristic method provides a useful tool for substantive analysis, as illustrated particularly on a Les Misérables data set; (iii) the difference between the parametric modularity values given by the exact method and those given by the heuristic is
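
    For concreteness, the single-parameter objective can be evaluated directly from an edge list. The following sketch, assuming an undirected simple graph and a fixed partition, computes Q(α) as the within-community edge fraction minus α times the configuration-model expectation; α = 1 recovers standard modularity.

        from collections import Counter

        def parametric_modularity(edges, community, alpha):
            # edges: list of (u, v) pairs; community: dict node -> label.
            m = len(edges)
            within = sum(1 for u, v in edges
                         if community[u] == community[v]) / m
            deg = Counter()                 # degree mass per community
            for u, v in edges:
                deg[community[u]] += 1
                deg[community[v]] += 1
            expected = sum((d / (2 * m)) ** 2 for d in deg.values())
            return within - alpha * expected

    Scanning α over an interval and recording where the optimal partition changes reproduces, in spirit, the successive intervals of α values that the exact algorithm computes.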

  13. Optimisation of nonlinear motion cueing algorithm based on genetic algorithm

    NASA Astrophysics Data System (ADS)

    Asadi, Houshyar; Mohamed, Shady; Rahim Zadeh, Delpak; Nahavandi, Saeid

    2015-04-01

    Motion cueing algorithms (MCAs) play a significant role in driving simulators, aiming to deliver the most accurate human sensation to the simulator driver, compared with a real vehicle driver, without exceeding the physical limitations of the simulator. This paper provides the optimisation design of an MCA for a vehicle simulator, in order to find the most suitable washout algorithm parameters while respecting all motion platform physical limitations and minimising human perception error between the real and simulator driver. One of the main limitations of the classical washout filters is that they are tuned by the worst-case scenario method, which is based on trial and error and is affected by the driving and programming experience of the tuner; this is the most significant obstacle to full motion platform utilisation. It leads to inflexibility of the structure, produces false cues and makes the resulting simulator fail to suit all circumstances. In addition, the classical method does not take minimisation of human perception error and physical constraints into account. For this reason, the production of motion cues and the impact of different parameters of classical washout filters on motion cues remain inaccessible to designers. The aim of this paper is to provide an optimisation method for tuning the MCA parameters, based on nonlinear filtering and genetic algorithms. This is done by taking into account the vestibular sensation error between the real and simulated cases, as well as the main dynamic limitations, tilt coordination and correlation coefficient. Three additional compensatory linear blocks are integrated into the MCA and tuned in order to modify the performance of the filters successfully. The proposed optimised MCA is implemented in MATLAB/Simulink software packages. The results generated using the proposed method show increased performance in terms of human sensation, reference shape tracking and exploiting the platform more efficiently without reaching

  14. Structure Learning and Statistical Estimation in Distribution Networks - Part II

    SciTech Connect

    Deka, Deepjyoti; Backhaus, Scott N.; Chertkov, Michael

    2015-02-13

    Limited placement of real-time monitoring devices in the distribution grid, recent trends notwithstanding, has prevented the easy implementation of demand-response and other smart grid applications. Part I of this paper discusses the problem of learning the operational structure of the grid from nodal voltage measurements. In this work (Part II), the learning of the operational radial structure is coupled with the problem of estimating nodal consumption statistics and inferring the line parameters in the grid. Based on a Linear-Coupled (LC) approximation of the AC power flow equations, polynomial-time algorithms are designed to identify the structure and estimate nodal load characteristics and/or line parameters in the grid using the available nodal voltage measurements. The structure learning algorithm is then extended to cases with missing data, where available observations are limited to a fraction of the grid nodes. The efficacy of the presented algorithms is demonstrated through simulations on several distribution test cases.

  15. A Robustly Stabilizing Model Predictive Control Algorithm

    NASA Technical Reports Server (NTRS)

    Acikmese, A. Behcet; Carson, John M., III

    2007-01-01

    A model predictive control (MPC) algorithm that differs from prior MPC algorithms has been developed for controlling an uncertain nonlinear system. This algorithm guarantees the resolvability of an associated finite-horizon optimal-control problem in a receding-horizon implementation.

  16. Algorithmic Processes for Increasing Design Efficiency.

    ERIC Educational Resources Information Center

    Terrell, William R.

    1983-01-01

    Discusses the role of algorithmic processes as a supplementary method for producing cost-effective and efficient instructional materials. Examines three approaches to problem solving in the context of developing training materials for the Naval Training Command: application of algorithms, quasi-algorithms, and heuristics. (EAO)

  17. In-Trail Procedure (ITP) Algorithm Design

    NASA Technical Reports Server (NTRS)

    Munoz, Cesar A.; Siminiceanu, Radu I.

    2007-01-01

    The primary objective of this document is to provide a detailed description of the In-Trail Procedure (ITP) algorithm, which is part of the Airborne Traffic Situational Awareness In-Trail Procedure (ATSA-ITP) application. To this end, the document presents a high level description of the ITP Algorithm and a prototype implementation of this algorithm in the programming language C.

  18. An algorithm on distributed mining association rules

    NASA Astrophysics Data System (ADS)

    Xu, Fan

    2005-12-01

    With the rapid development of the Internet/Intranet, distributed databases have become a broadly used environment in various areas. It is a critical task to mine association rules in distributed databases. The algorithms for distributed mining of association rules can be divided into two classes: DD algorithms and CD algorithms. A DD algorithm focuses on data partition optimization so as to enhance efficiency. A CD algorithm, on the other hand, considers a setting where the data is arbitrarily partitioned horizontally among the parties to begin with, and focuses on parallelizing the communication. A DD algorithm is not always applicable, however: at the time the data is generated, it is often already partitioned, and in many cases it cannot be gathered and repartitioned for reasons of security and secrecy, transmission cost, or sheer efficiency. A CD algorithm may be a more appealing solution for systems which are naturally distributed over large expanses, such as stock exchange and credit card systems. The FDM algorithm provides an enhancement to the CD algorithm; however, CD and FDM algorithms are both based on net structures and execute on non-shareable resources. In practical applications, distributed databases are often star-structured. This paper proposes an algorithm based on star-structured networks, which are more practical in application, have lower maintenance costs and are easier to construct. In addition, the algorithm provides high efficiency in communication and good extension in parallel computation.
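
    A count-distribution round in a star topology can be sketched in a few lines. In the hypothetical sketch below, each leaf site counts candidate k-itemsets locally and the hub sums the counts and applies the global support threshold; this illustrates the communication pattern only, not the paper's exact protocol.

        from collections import Counter
        from itertools import combinations

        def local_counts(transactions, k):
            # Count candidate k-itemsets in one site's local transactions.
            counts = Counter()
            for t in transactions:
                for itemset in combinations(sorted(t), k):
                    counts[itemset] += 1
            return counts

        def star_merge(site_counts, min_support):
            # The hub sums per-site counts and keeps globally frequent
            # itemsets; one round trip to the hub replaces the all-to-all
            # exchange of a net-structured CD algorithm.
            total = Counter()
            for c in site_counts:
                total.update(c)
            return {s: n for s, n in total.items() if n >= min_support}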

  19. Improvements of HITS Algorithms for Spam Links

    NASA Astrophysics Data System (ADS)

    Asano, Yasuhito; Tezuka, Yu; Nishizeki, Takao

    The HITS algorithm proposed by Kleinberg is one of the representative methods of scoring Web pages by using hyperlinks. In the days when the algorithm was proposed, most of the pages given a high score by the algorithm were really related to a given topic, and hence the algorithm could be used to find related pages. However, the algorithm and its variants, including Bharat's improved HITS (abbreviated to BHITS) proposed by Bharat and Henzinger, can no longer be used to find related pages on today's Web, due to an increase of spam links. In this paper, we first propose three methods to find “linkfarms,” that is, sets of spam links forming a densely connected subgraph of a Web graph. We then present an algorithm, called a trust-score algorithm, to give high scores to pages which are not spam pages with a high probability. Combining the three methods and the trust-score algorithm with BHITS, we obtain several variants of the HITS algorithm. We ascertain by experiments that one of them, named TaN+BHITS, using the trust-score algorithm and the method of finding linkfarms by employing name servers, is most suitable for finding related pages on today's Web. Our algorithms require no more time and memory than the original HITS algorithm, and can be executed on a PC with a small amount of main memory.
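
    The baseline that these variants modify is the plain HITS power iteration. A minimal sketch, assuming an unweighted adjacency matrix, is shown below; the trust-score and linkfarm-removal steps from the paper are not reproduced.

        import numpy as np

        def hits(adj, iters=50):
            # Kleinberg's HITS: authority scores a = A^T h and hub scores
            # h = A a, renormalized each round until they stabilize.
            n = adj.shape[0]
            h = np.ones(n)
            a = np.ones(n)
            for _ in range(iters):
                a = adj.T @ h
                a /= np.linalg.norm(a) or 1.0
                h = adj @ a
                h /= np.linalg.norm(h) or 1.0
            return a, h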

  20. Verification of IEEE Compliant Subtractive Division Algorithms

    NASA Technical Reports Server (NTRS)

    Miner, Paul S.; Leathrum, James F., Jr.

    1996-01-01

    A parameterized definition of subtractive floating point division algorithms is presented and verified using PVS. The general algorithm is proven to satisfy a formal definition of an IEEE standard for floating point arithmetic. The utility of the general specification is illustrated using a number of different instances of the general algorithm.

  1. Improvements to the stand and hit algorithm

    SciTech Connect

    Boneh, A.; Boneh, S.; Caron, R.; Jibrin, S.

    1994-12-31

    The stand and hit algorithm is a probabilistic algorithm for detecting necessary constraints. The algorithm stands at a point in the feasible region and hits constraints by moving towards the boundary along randomly generated directions. In this talk we discuss methods for choosing the standing point. As well, we present the undetected first rule for determining the hit constraints.

  2. Parameter incremental learning algorithm for neural networks.

    PubMed

    Wan, Sheng; Banta, Larry E

    2006-11-01

    In this paper, a novel stochastic (or online) training algorithm for neural networks, named the parameter incremental learning (PIL) algorithm, is proposed and developed. The main idea of the PIL strategy is that the learning algorithm should not only adapt to the newly presented input-output training pattern by adjusting parameters, but also preserve the prior results. A general PIL algorithm for feedforward neural networks is accordingly presented as the first-order approximate solution to an optimization problem, where the performance index is the combination of proper measures of preservation and adaptation. The PIL algorithms for the multilayer perceptron (MLP) are subsequently derived. Numerical studies show that for all three benchmark problems used in this paper the PIL algorithm for MLP is measurably superior to the standard online backpropagation (BP) algorithm and the stochastic diagonal Levenberg-Marquardt (SDLM) algorithm in terms of convergence speed and accuracy. Other appealing features of the PIL algorithm are that it is computationally as simple as the BP algorithm and as easy to use as the BP algorithm. It can therefore be applied, with better performance, to any situation where the standard online BP algorithm is applicable. PMID:17131658
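
    The preservation/adaptation trade-off can be made concrete on a single linear unit. The sketch below is an illustration rather than the paper's MLP derivation: the index J(v) = preserve·||v − w||² + adapt·(y − v·x)² is minimized exactly, and the resulting step blends fitting the new pattern with staying near the old weights.

        import numpy as np

        def pil_step(w, x, y, adapt=1.0, preserve=9.0):
            # Minimizing J(v) = preserve*||v - w||^2 + adapt*(y - v.x)^2
            # gives an optimal step along x of size
            # beta = adapt*e / (preserve + adapt*x.x), where e is the
            # prediction error on the new pattern.
            e = y - w @ x                       # error on the new pattern
            beta = adapt * e / (preserve + adapt * (x @ x))
            return w + beta * x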

  3. New Results in Astrodynamics Using Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Coverstone-Carroll, V.; Hartmann, J. W.; Williams, S. N.; Mason, W. J.

    1998-01-01

    Generic algorithms have gained popularity as an effective procedure for obtaining solutions to traditionally difficult space mission optimization problems. In this paper, a brief survey of the use of genetic algorithms to solve astrodynamics problems is presented and is followed by new results obtained from applying a Pareto genetic algorithm to the optimization of low-thrust interplanetary spacecraft missions.

  4. Learning Intelligent Genetic Algorithms Using Japanese Nonograms

    ERIC Educational Resources Information Center

    Tsai, Jinn-Tsong; Chou, Ping-Yi; Fang, Jia-Cen

    2012-01-01

    An intelligent genetic algorithm (IGA) is proposed to solve Japanese nonograms and is used as a method in a university course to learn evolutionary algorithms. The IGA combines the global exploration capabilities of a canonical genetic algorithm (CGA) with effective condensed encoding, improved fitness function, and modified crossover and…

  5. Comparison of Evolutionary Multiobjective Algorithms For Calibrating An Integrated Semi-distributed Hydrologic Model

    NASA Astrophysics Data System (ADS)

    Tang, Y.; Reed, P.; Wagener, T.

    2005-12-01

    This study provides the first comprehensive assessment of state-of-the-art evolutionary multiobjective optimization (EMO) tools' relative effectiveness in calibrating integrated hydrologic models. The relative computational efficiency, accuracy, and ease-of-use of the following EMO algorithms are tested: Epsilon Dominance Nondominated Sorted Genetic Algorithm-II (ɛ-NSGAII), the Multiobjective Shuffled Complex Evolution Metropolis algorithm (MOSCEM-UA), and the Strength Pareto Evolutionary Algorithm 2 (SPEA2). This study assesses the performances of these three evolutionary multiobjective algorithms using a formal metrics-based methodology. This study uses two phases of testing to compare the algorithms' performances. In the first phase, this study uses a suite of standard computer science test problems to validate the algorithms' abilities to perform global search effectively, efficiently, and reliably. The second phase of testing compares the algorithms' performances for a computationally intensive multiobjective integrated hydrologic model calibration application for the Shale Hills watershed located within the Valley and Ridge province of the Susquehanna River Basin in north central Pennsylvania. The Shale Hills test case demonstrates the computational challenges posed by the paradigmatic shift in environmental and water resources simulation tools towards highly nonlinear physical models that seek to holistically simulate the water cycle. Specifically, the Shale Hills test case is an excellent test for the three EMO algorithms due to the large number of continuous decision variables, the increased computational demands posed by simulating fully-coupled hydrologic processes, and the highly multimodal nature of the search space. A challenge and contribution of this work is the development of a methodology for comprehensively comparing EMO algorithms that have different search operators and randomization techniques.

  6. An improved sink particle algorithm for SPH simulations

    NASA Astrophysics Data System (ADS)

    Hubber, D. A.; Walch, S.; Whitworth, A. P.

    2013-04-01

    Numerical simulations of star formation frequently rely on the implementation of sink particles: (a) to avoid expending computational resource on the detailed internal physics of individual collapsing protostars, (b) to derive mass functions, binary statistics and clustering kinematics (and hence to make comparisons with observation), and (c) to model radiative and mechanical feedback; sink particles are also used in other contexts, for example to represent accreting black holes in galactic nuclei. We present a new algorithm for creating and evolving sink particles in smoothed particle hydrodynamic (SPH) simulations, which appears to represent a significant improvement over existing algorithms - particularly in situations where sinks are introduced after the gas has become optically thick to its own cooling radiation and started to heat up by adiabatic compression. (i) It avoids spurious creation of sinks. (ii) It regulates the accretion of matter on to a sink so as to mitigate non-physical perturbations in the vicinity of the sink. (iii) Sinks accrete matter, but the associated angular momentum is transferred back to the surrounding medium. With the new algorithm - and modulo the need to invoke sufficient resolution to capture the physics preceding sink formation - the properties of sinks formed in simulations are essentially independent of the user-defined parameters of sink creation, or the number of SPH particles used.

  7. GOSAT BESD XCO2 for MACC-II: Current Status

    NASA Astrophysics Data System (ADS)

    Heymann, Jens; Reuter, Maximilian; Hilker, Michael; Buchwitz, Michael; Schneising, Oliver; Bovensmann, Heinrich; Burrows, John P.

    2014-05-01

    Carbon dioxide (CO2) is the most important anthropogenic greenhouse gas contributing to global warming. Near-surface-sensitive measurements from satellite instruments such as SCIAMACHY on-board ENVISAT and TANSO on-board GOSAT can provide important missing global information on the regional sources and sinks of CO2. However, this requires meeting challenging accuracy requirements. An algorithm to retrieve the column-averaged dry air mole fraction of CO2 ("XCO2") from satellite measurements is the Bremen Optimal Estimation DOAS (BESD) retrieval algorithm. BESD was originally developed to retrieve XCO2 from SCIAMACHY measurements. In the framework of the MACC-II project, the SCIAMACHY BESD XCO2 product was delivered by the University of Bremen for MACC-II delayed-mode production and monitoring. After the loss of ENVISAT in April 2012, it was decided that the University of Bremen would deliver GOSAT XCO2 instead of SCIAMACHY XCO2. To achieve this, the BESD algorithm has been modified. Consistency of long-term XCO2 products derived from different satellites is important for climate applications, and using the same algorithm helps to minimize inconsistencies. Here, we present results from these activities.

  8. How effective and efficient are multiobjective evolutionary algorithms at hydrologic model calibration?

    NASA Astrophysics Data System (ADS)

    Tang, Y.; Reed, P.; Wagener, T.

    2005-11-01

    This study provides a comprehensive assessment of state-of-the-art evolutionary multiobjective optimization (EMO) tools' relative effectiveness in calibrating hydrologic models. The relative computational efficiency, accuracy, and ease-of-use of the following EMO algorithms are tested: Epsilon Dominance Nondominated Sorted Genetic Algorithm-II (ɛ-NSGAII), the Multiobjective Shuffled Complex Evolution Metropolis algorithm (MOSCEM-UA), and the Strength Pareto Evolutionary Algorithm 2 (SPEA2). This study uses three test cases to compare the algorithms' performances: (1) a standardized test function suite from the computer science literature, (2) a benchmark hydrologic calibration test case for the Leaf River near Collins, Mississippi, and (3) a computationally intensive integrated model application in the Shale Hills watershed in Pennsylvania. A challenge and contribution of this work is the development of a methodology for comprehensively comparing EMO algorithms that have different search operators and randomization techniques. Overall, SPEA2 is an excellent benchmark algorithm for multiobjective hydrologic model calibration. SPEA2 attained competitive to superior results for most of the problems tested in this study. ɛ-NSGAII appears to be superior to MOSCEM-UA and competitive with SPEA2 for hydrologic model calibration.

  9. Fuzzy physical programming for Space Manoeuvre Vehicles trajectory optimization based on hp-adaptive pseudospectral method

    NASA Astrophysics Data System (ADS)

    Chai, Runqi; Savvaris, Al; Tsourdos, Antonios

    2016-06-01

    In this paper, a fuzzy physical programming (FPP) method is introduced for solving the multi-objective Space Manoeuvre Vehicle (SMV) skip trajectory optimization problem based on hp-adaptive pseudospectral methods. The dynamic model of the SMV is elaborated and then, by employing hp-adaptive pseudospectral methods, the problem is transformed into a nonlinear programming (NLP) problem. According to the mission requirements, solutions were calculated for each single-objective scenario. To get a compromise solution for each target, the fuzzy physical programming (FPP) model is proposed. The preference function is established considering the fuzzy factor of the system, such that a proper compromise trajectory can be acquired. In addition, the NSGA-II is tested to obtain the Pareto-optimal solution set and verify the Pareto optimality of the FPP solution. Simulation results indicate that the proposed method is effective and feasible in terms of dealing with the multi-objective skip trajectory optimization for the SMV.

  10. Pricing Resources in LTE Networks through Multiobjective Optimization

    PubMed Central

    Lai, Yung-Liang; Jiang, Jehn-Ruey

    2014-01-01

    The LTE technology offers versatile mobile services that use different numbers of resources. This enables operators to provide subscribers or users with differential quality of service (QoS) to boost their satisfaction. On one hand, LTE operators need to price the resources high for maximizing their profits. On the other hand, pricing also needs to consider user satisfaction with allocated resources and prices to avoid “user churn,” which means subscribers will unsubscribe services due to dissatisfaction with allocated resources or prices. In this paper, we study the pricing resources with profits and satisfaction optimization (PRPSO) problem in the LTE networks, considering the operator profit and subscribers' satisfaction at the same time. The problem is modelled as nonlinear multiobjective optimization with two optimal objectives: (1) maximizing operator profit and (2) maximizing user satisfaction. We propose to solve the problem based on the framework of the NSGA-II. Simulations are conducted for evaluating the proposed solution. PMID:24526889

  12. Periodontics II: Course Proposal.

    ERIC Educational Resources Information Center

    Dordick, Bruce

    A proposal is presented for Periodontics II, a course offered at the Community College of Philadelphia to give the dental hygiene/assisting student an understanding of the disease states of the periodontium and their treatment. A standardized course proposal cover form is given, followed by a statement of purpose for the course, a list of major…

  13. Instant Insanity II

    ERIC Educational Resources Information Center

    Richmond, Tom; Young, Aaron

    2013-01-01

    "Instant Insanity II" is a sliding mechanical puzzle whose solution requires the special alignment of 16 colored tiles. We count the number of solutions of the puzzle's classic challenge and show that the more difficult ultimate challenge has, up to row permutation, exactly two solutions, and further show that no…

  14. Listen & Learn II.

    ERIC Educational Resources Information Center

    Community Building Resources, Spruce Grove (Alberta).

    Six community builders in Edmonton, Alberta, planned, developed, and implemented Listen and Learn II, a reflective research project in asset-based community building, over a 6-month period in 1998. They met regularly over 2 months to plan the research and design a method that was open to participation at any stage, encouraged exchange of…

  15. Dissecting Diversity Part II

    ERIC Educational Resources Information Center

    Matthews, Frank

    2005-01-01

    This article presents "Dissecting Diversity, Part II," the conclusion of a wide-ranging two-part roundtable discussion on diversity in higher education. The participants were as follows: Lezli Baskerville, J.D., President and CEO of the National Association for Equal Opportunity (NAFEO); Dr. Gerald E. Gipp, Executive Director of the American…

  16. Padé approximations for Painlevé I and II transcendents

    NASA Astrophysics Data System (ADS)

    Novokshenov, V. Yu.

    2009-06-01

    We use a version of the Fair-Luke algorithm to find the Padé approximate solutions of the Painlevé I and II equations. We find the distributions of poles for the well-known Ablowitz-Segur and Hastings-McLeod solutions of the Painlevé II equation. We show that the Boutroux tritronquée solution of the Painlevé I equation has poles only in the critical sector of the complex plane. The algorithm allows checking other analytic properties of the Painlevé transcendents, such as the asymptotic behavior at infinity in the complex plane.

  17. Color sorting algorithm based on K-means clustering algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, BaoFeng; Huang, Qian

    2009-11-01

    In the process of raisin production, a variety of color impurities occur, which need to be removed effectively. A new kind of efficient raisin color-sorting algorithm is presented here. First, threshold-based image processing was applied for image pre-processing, and the gray-scale distribution characteristic of the raisin image was found. In order to obtain the chromatic aberration image and reduce disturbance, image subtraction was performed, subtracting the background image data from the target image data. Second, a Haar wavelet filter was used to obtain a smooth image of the raisins. According to the different colors and to mildew, spots and other external features, image characteristics were calculated so as to fully reflect the quality differences between raisins of different types. After the processing above, the images were analyzed by the K-means clustering method, which achieves adaptive extraction of the statistical features; the image data were then divided into different categories, so that the categories of abnormal colors became distinct. By the use of this algorithm, raisins of abnormal colors and those with mottles were eliminated. The sorting rate was up to 98.6%, and the ratio of normal raisins to sorted grains was less than one eighth.
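
    The clustering step can be illustrated with a bare-bones Lloyd iteration over pixel colors. This is a generic sketch, assuming RGB pixels in an (N, 3) array, standing in for the adaptive feature clustering described above; the thresholding, background subtraction and Haar smoothing stages are only indicated in comments.

        import numpy as np

        def kmeans_colors(pixels, k=3, iters=20, seed=0):
            # Plain Lloyd's K-means on an (N, 3) array of RGB values; the
            # resulting labels separate normal raisins from abnormal-color
            # classes such as mildew or spots.
            rng = np.random.default_rng(seed)
            centers = pixels[rng.choice(len(pixels), k, replace=False)].astype(float)
            for _ in range(iters):
                dist = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
                labels = dist.argmin(axis=1)
                for j in range(k):
                    if (labels == j).any():
                        centers[j] = pixels[labels == j].mean(axis=0)
            return labels, centers

        # Upstream stages sketched in the abstract (not implemented here):
        # threshold the raw frame, subtract the background image from the
        # target image, then smooth with a Haar wavelet filter before
        # clustering the remaining foreground pixels.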

  18. Parallelized Dilate Algorithm for Remote Sensing Image

    PubMed Central

    Zhang, Suli; Hu, Haoran; Pan, Xin

    2014-01-01

    As an important algorithm, the dilate algorithm can give a more connective view of a remote sensing image that has broken lines or objects. However, with the technological progress of satellite sensors, the resolution of remote sensing images has been increasing and the data quantities have become very large. This leads to a decrease in algorithm running speed, or to failure to obtain a result within limited memory or time. To solve this problem, our research proposes a parallelized dilate algorithm for remote sensing images based on MPI and MP. Experiments show that our method runs faster than the traditional single-process algorithm. PMID:24955392
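
    The decomposition behind such a parallelization can be sketched with a process pool standing in for MPI. In the hypothetical sketch below, the image is split into row strips with a halo of overlap rows so the dilation window never lacks neighbors at strip seams; the halos are trimmed before the strips are stitched back together.

        import numpy as np
        from multiprocessing import Pool
        from scipy.ndimage import grey_dilation

        def _dilate_strip(args):
            strip, size = args
            return grey_dilation(strip, size=size)

        def parallel_dilate(img, size=3, nproc=4):
            # Split the image into row strips with a halo of size//2 rows so
            # the dilation window sees its true neighbors at strip seams;
            # dilate strips in parallel, trim the halos, stitch the result.
            # (Run under "if __name__ == '__main__':" on spawn platforms.)
            halo = size // 2
            bounds = np.linspace(0, img.shape[0], nproc + 1, dtype=int)
            jobs = [(img[max(a - halo, 0):b + halo], size)
                    for a, b in zip(bounds[:-1], bounds[1:])]
            with Pool(nproc) as pool:
                dilated = pool.map(_dilate_strip, jobs)
            out = []
            for (a, b), s in zip(zip(bounds[:-1], bounds[1:]), dilated):
                top = a - max(a - halo, 0)     # halo rows actually present
                out.append(s[top:top + (b - a)])
            return np.vstack(out)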

  19. Problem solving with genetic algorithms and Splicer

    NASA Technical Reports Server (NTRS)

    Bayer, Steven E.; Wang, Lui

    1991-01-01

    Genetic algorithms are highly parallel, adaptive search procedures (i.e., problem-solving methods) loosely based on the processes of population genetics and Darwinian survival of the fittest. Genetic algorithms have proven useful in domains where other optimization techniques perform poorly. The main purpose of the paper is to discuss a NASA-sponsored software development project to develop a general-purpose tool for using genetic algorithms. The tool, called Splicer, can be used to solve a wide variety of optimization problems and is currently available from NASA and COSMIC. This discussion is preceded by an introduction to basic genetic algorithm concepts and a discussion of genetic algorithm applications.

  20. Efficient demultiplexing algorithm for noncontiguous carriers

    NASA Technical Reports Server (NTRS)

    Thanawala, A. A.; Kwatra, S. C.; Jamali, M. M.; Budinger, J.

    1992-01-01

    A channel separation algorithm for the frequency division multiple access/time division multiplexing (FDMA/TDM) scheme is presented. It is shown that implementation using this algorithm can be more effective than the fast Fourier transform (FFT) algorithm when only a small number of carriers need to be selected from many, such as satellite Earth terminals. The algorithm is based on polyphase filtering followed by application of a generalized Walsh-Hadamard transform (GWHT). Comparison of the transform technique used in this algorithm with discrete Fourier transform (DFT) and FFT is given. Estimates of the computational rates and power requirements to implement this system are also given.
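
    The GWHT kernel at the heart of this approach can be illustrated with the standard fast Walsh-Hadamard butterfly, which needs only additions and subtractions. A minimal sketch of the transform alone (not of the polyphase filter bank that precedes it) follows:

        import numpy as np

        def fwht(x):
            # Fast Walsh-Hadamard transform of a vector whose length is a
            # power of two: log2(n) rounds of add/subtract butterflies with
            # no multiplications, which is what makes a WHT-based
            # demultiplexer cheap when only a few carriers are needed.
            x = np.asarray(x, dtype=float).copy()
            h = 1
            while h < len(x):
                for i in range(0, len(x), 2 * h):
                    a = x[i:i + h].copy()
                    b = x[i + h:i + 2 * h].copy()
                    x[i:i + h] = a + b
                    x[i + h:i + 2 * h] = a - b
                h *= 2
            return x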

  1. Improved piecewise orthogonal signal correction algorithm.

    PubMed

    Feudale, Robert N; Tan, Huwei; Brown, Steven D

    2003-10-01

    Piecewise orthogonal signal correction (POSC), an algorithm that performs local orthogonal filtering, was recently developed to process spectral signals. POSC was shown to improve partial least-squares regression models over models built with conventional OSC. However, rank deficiencies within the POSC algorithm lead to artifacts in the filtered spectra when removing two or more POSC components. Thus, an updated OSC algorithm for use with the piecewise procedure is reported. It is demonstrated how the mathematics of this updated OSC algorithm was derived from the previous version and why some OSC versions may not be as appropriate for use with the piecewise modeling procedure as the algorithm reported here. PMID:14639746

  2. Is there a best hyperspectral detection algorithm?

    NASA Astrophysics Data System (ADS)

    Manolakis, D.; Lockwood, R.; Cooley, T.; Jacobson, J.

    2009-05-01

    A large number of hyperspectral detection algorithms have been developed and used over the last two decades. Some algorithms are based on highly sophisticated mathematical models and methods; others are derived using intuition and simple geometrical concepts. The purpose of this paper is threefold. First, we discuss the key issues involved in the design and evaluation of detection algorithms for hyperspectral imaging data. Second, we present a critical review of existing detection algorithms for practical hyperspectral imaging applications. Finally, we argue that the "apparent" superiority of sophisticated algorithms with simulated data or in laboratory conditions does not necessarily translate to superiority in real-world applications.

  3. Wavelet Algorithms for Illumination Computations

    NASA Astrophysics Data System (ADS)

    Schroder, Peter

    One of the core problems of computer graphics is the computation of the equilibrium distribution of light in a scene. This distribution is given as the solution to a Fredholm integral equation of the second kind involving an integral over all surfaces in the scene. In the general case such solutions can only be numerically approximated, and are generally costly to compute, due to the geometric complexity of typical computer graphics scenes. For this computation both Monte Carlo and finite element techniques (or hybrid approaches) are typically used. A simplified version of the illumination problem is known as radiosity, which assumes that all surfaces are diffuse reflectors. For this case hierarchical techniques, first introduced by Hanrahan et al. (32), have recently gained prominence. The hierarchical approaches lead to an asymptotic improvement when only finite precision is required. The resulting algorithms have cost proportional to O(k^2 + n) versus the usual O(n^2) (k is the number of input surfaces, n the number of finite elements into which the input surfaces are meshed). Similarly a hierarchical technique has been introduced for the more general radiance problem (which allows glossy reflectors) by Aupperle et al. (6). In this dissertation we show the equivalence of these hierarchical techniques to the use of a Haar wavelet basis in a general Galerkin framework. By so doing, we come to a deeper understanding of the properties of the numerical approximations used and are able to extend the hierarchical techniques to higher orders. In particular, we show the correspondence of the geometric arguments underlying hierarchical methods to the theory of Calderon-Zygmund operators and their sparse realization in wavelet bases. The resulting wavelet algorithms for radiosity and radiance are analyzed and numerical results achieved with our implementation are reported. We find that the resulting algorithms achieve smaller and smoother errors at equivalent work.

  4. ALFA: Automated Line Fitting Algorithm

    NASA Astrophysics Data System (ADS)

    Wesson, R.

    2015-12-01

    ALFA fits emission line spectra of arbitrary wavelength coverage and resolution, fully automatically. It uses a catalog of lines which may be present to construct synthetic spectra, the parameters of which are then optimized by means of a genetic algorithm. Uncertainties are estimated using the noise structure of the residuals. An emission line spectrum containing several hundred lines can be fitted in a few seconds using a single processor of a typical contemporary desktop or laptop PC. Data cubes in FITS format can be analysed using multiple processors, and an analysis of tens of thousands of deep spectra obtained with instruments such as MUSE will take a few hours.

  5. Newman-Janis Algorithm Revisited

    NASA Astrophysics Data System (ADS)

    Brauer, O.; Camargo, H. A.; Socolovsky, M.

    2015-01-01

    The purpose of the present article is to show that the Newman-Janis and Newman et al. algorithms, used to derive the Kerr and Kerr-Newman metrics respectively, automatically lead to the extension of the initial non-negative polar radial coordinate r to a Cartesian coordinate running from -∞ to +∞, thus introducing in a natural way the region r < 0 in the above spacetimes. Using Boyer-Lindquist and ellipsoidal coordinates, we discuss some geometrical aspects of the positive and negative regions of r, like horizons, ergosurfaces, and foliation structures.

  6. Algorithms for skiascopy measurement automatization

    NASA Astrophysics Data System (ADS)

    Fomins, Sergejs; Trukša, Renārs; Krūmiņa, Gunta

    2014-10-01

    An automatic dynamic infrared retinoscope was developed, which allows the procedure to run at a much higher rate. Our system uses a USB image sensor with up to 180 Hz refresh rate, equipped with a long-focus objective, and an 850 nm infrared light-emitting diode as the light source. Two servo motors driven by a microprocessor control the rotation of the semitransparent mirror and the motion of the retinoscope chassis. The image of the eye's pupil reflex is captured via software and analyzed along the horizontal plane. An algorithm for automatic analysis of the accommodative state is developed, based on the intensity changes of the fundus reflex.

  7. Wire Detection Algorithms for Navigation

    NASA Technical Reports Server (NTRS)

    Kasturi, Rangachar; Camps, Octavia I.

    2002-01-01

    In this research we addressed the problem of obstacle detection for low-altitude rotorcraft flight. In particular, the problem of detecting thin wires in the presence of image clutter and noise was studied. Wires present a serious hazard to rotorcraft. Since they are very thin, detecting them early enough for the pilot to take evasive action is difficult, as their images can be less than one or two pixels wide. Two approaches were explored for this purpose. The first approach involved a technique for sub-pixel edge detection and subsequent post-processing in order to reduce false alarms. After reviewing the line detection literature, an algorithm for sub-pixel edge detection proposed by Steger was identified as having good potential to solve the considered task. The algorithm was tested using a set of images synthetically generated by combining real outdoor images with computer-generated wire images. The performance of the algorithm was evaluated both at the pixel and the wire levels. It was observed that the algorithm performs well, provided that the wires are not too thin (or distant) and that some post-processing is performed to remove false alarms due to clutter. The second approach involved the use of an example-based learning scheme, namely Support Vector Machines. The purpose of this approach was to explore the feasibility of an example-based learning approach for the task of detecting wires from their images. Support Vector Machines (SVMs) have emerged as a promising pattern classification tool and have been used in various applications. It was found that this approach is not suitable for very thin wires and, of course, not suitable at all for sub-pixel-thick wires. High dimensionality of the data as such does not present a major problem for SVMs. However, it is desirable to have a large number of training examples, especially for high-dimensional data. The main difficulty in using SVMs (or any other example-based learning

  8. Type-II Fuzzy Decision Support System for Fertilizer

    PubMed Central

    Ashraf, Ather; Sarwar, Mansoor

    2014-01-01

    Type-II fuzzy sets are used to convey the uncertainties in the membership function of type-I fuzzy sets. Linguistic information in expert rules does not give any information about the geometry of the membership functions. These membership functions are mostly constructed from numerical data or ranges of classes. But there exists an uncertainty about the shape of the membership, that is, whether to choose a triangular membership function or a trapezoidal membership function. In this paper we use a type-II fuzzy set to overcome this uncertainty, and develop a fuzzy decision support system for fertilizers based on a type-II fuzzy set. This type-II fuzzy system takes cropping time and soil nutrients in the form of spatial surfaces as input, fuzzifies them using a type-II fuzzy membership function, and applies fuzzy rules to them in the fuzzy inference engine. The output of the fuzzy inference engine, which is in the form of interval-valued type-II fuzzy sets, is reduced to an interval type-I fuzzy set, defuzzified to a crisp value, and used to generate a spatial surface of fertilizers. This spatial surface shows the spatial trend of the required amount of fertilizer needed to cultivate a specific crop. The complexity of our algorithm is O(mnr), where m is the height of the raster, n is the width of the raster, and r is the number of expert rules. PMID:24892071
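
    The final reduction step can be illustrated with the Nie-Tan shortcut, which collapses an interval type-II output directly to a crisp value by averaging the lower and upper membership functions and taking the centroid. This is a common stand-in for a full type reducer, not necessarily the authors' exact procedure:

        import numpy as np

        def nie_tan_defuzzify(x, mu_lower, mu_upper):
            # x: sampled output domain; mu_lower/mu_upper: lower and upper
            # membership grades of the interval type-II output set.
            m = (np.asarray(mu_lower) + np.asarray(mu_upper)) / 2.0
            return float((np.asarray(x) * m).sum() / m.sum())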

  9. EMGeo-II

    SciTech Connect

    Newman, Gregory; Commer, Michael

    2009-01-01

    An algorithm that improves the computational capabilities of both joint 3D electromagnetic (EM) and magnetotelluric (MT) field simulation and inverse modeling. It is based upon non-linear conjugate gradients for the imaging component and a 3D finite-difference methodology for EM and MT field simulation. Improving the modeling efficiency of the algorithm involves the separation of the modeling/imaging grid from the simulation grid. This grid separation method allows for the treatment of very large data sets and imaging volumes. Further computational efficiency is obtained by combining different levels of parallelization using the Message Passing Interface (MPI). Bound constraints are employed in the imaging process to ensure stability. Additional acceleration of the inverse modeling is achieved by preconditioning the conjugate gradient optimizer using an approximate Hessian. The algorithm includes improved capabilities to accurately treat models that exhibit transverse anisotropy in electrical conductivity in the presence of topography and bathymetry. Background anisotropic Earth models are assigned to each transmitter-receiver set, which results in solutions of the scattering equations at much improved accuracy. The software also includes a set of pre- and post-processing tools for designing input model meshes and plotting data.

  11. Ordered subsets algorithms for transmission tomography.

    PubMed

    Erdogan, H; Fessler, J A

    1999-11-01

    The ordered subsets EM (OSEM) algorithm has enjoyed considerable interest for emission image reconstruction due to its acceleration of the original EM algorithm and ease of programming. The transmission EM reconstruction algorithm converges very slowly and is not used in practice. In this paper, we introduce a simultaneous update algorithm called separable paraboloidal surrogates (SPS) that converges much faster than the transmission EM algorithm. Furthermore, unlike the 'convex algorithm' for transmission tomography, the proposed algorithm is monotonic even with nonzero background counts. We demonstrate that the ordered subsets principle can also be applied to the new SPS algorithm for transmission tomography to accelerate 'convergence', albeit with a similar sacrifice of global convergence properties as for OSEM. We implemented and evaluated this ordered subsets transmission (OSTR) algorithm. The results indicate that the OSTR algorithm speeds up the increase in the objective function by roughly the number of subsets in the early iterates when compared to the ordinary SPS algorithm. We compute mean square errors and segmentation errors for different methods and show that OSTR is superior to OSEM applied to the logarithm of the transmission data. However, penalized-likelihood reconstructions yield the best quality images among all methods tested. PMID:10588288
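
    The ordered-subsets idea itself is independent of the surrogate used. The skeleton below is a generic sketch with a plain gradient step standing in for the paper's paraboloidal-surrogate update, purely to show where the roughly subset-count-fold early speedup comes from: the image is updated after each subset of projections rather than once per full sweep.

        import numpy as np

        def ordered_subsets_pass(x, A, y, n_subsets=8, step=0.01):
            # One pass over all projection rows, split into disjoint ordered
            # subsets; x is updated after each subset. The subset gradient is
            # scaled by n_subsets so it approximates the full-data gradient.
            subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
            for s in subsets:
                grad = n_subsets * A[s].T @ (A[s] @ x - y[s])
                x = np.maximum(x - step * grad, 0.0)   # attenuation stays nonnegative
            return x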

  12. Empirical study of parallel LRU simulation algorithms

    NASA Technical Reports Server (NTRS)

    Carr, Eric; Nicol, David M.

    1994-01-01

    This paper reports on the performance of five parallel algorithms for simulating a fully associative cache operating under the LRU (Least-Recently-Used) replacement policy. Three of the algorithms are SIMD, and are implemented on the MasPar MP-2 architecture. Two other algorithms are parallelizations of an efficient serial algorithm on the Intel Paragon. One SIMD algorithm is quite simple, but its cost is linear in the cache size. The two other SIMD algorithms are more complex, but have costs that are independent of the cache size. Both the second and third SIMD algorithms compute all stack distances; the second SIMD algorithm is completely general, whereas the third presumes and takes advantage of bounds on the range of reference tags. Both MIMD algorithms implemented on the Paragon are general and compute all stack distances; they differ in one step that may affect their respective scalability. We assess the strengths and weaknesses of these algorithms as a function of problem size and characteristics, and compare their performance on traces derived from execution of three SPEC benchmark programs.
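
    The quantity all of these algorithms compute can be pinned down with the straightforward serial version. A sketch, assuming a simple reference trace of hashable items, follows; a reference is a hit in an LRU cache of size C exactly when its stack distance is less than C, so one pass yields hit ratios for every cache size at once.

        def stack_distances(trace):
            # For each reference, the stack distance is the number of
            # distinct items touched since the last reference to the same
            # item (infinite on first touch). The LRU stack keeps the most
            # recently used item at index 0.
            stack, dists = [], []
            for ref in trace:
                if ref in stack:
                    d = stack.index(ref)      # depth == stack distance
                    stack.pop(d)
                else:
                    d = float('inf')
                stack.insert(0, ref)
                dists.append(d)
            return dists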

  13. Role of Bound Zn(II) in the CadC Cd(II)/Pb(II)/Zn(II)-Responsive Repressor

    SciTech Connect

    Kandegedara, A.; Thiyagarajan, S; Kondapalli, K; Stemmler, T; Rosen, B

    2009-01-01

    The Staphylococcus aureus plasmid pI258 cadCA operon encodes a P-type ATPase, CadA, that confers resistance to Cd(II)/Pb(II)/Zn(II). Expression is regulated by CadC, a homodimeric repressor that dissociates from the cad operator/promoter upon binding of Cd(II), Pb(II), or Zn(II). CadC is a member of the ArsR/SmtB family of metalloregulatory proteins. The crystal structure of CadC shows two types of metal binding sites, termed Site 1 and Site 2, and the homodimer has two of each. Site 1 is the physiological inducer binding site. The two Site 2 metal binding sites are formed at the dimerization interface. Site 2 is not regulatory in CadC but is regulatory in the homologue SmtB. Here the role of each site was investigated by mutagenesis. Both sites bind either Cd(II) or Zn(II). However, Site 1 has higher affinity for Cd(II) over Zn(II), and Site 2 prefers Zn(II) over Cd(II). Site 2 is not required for either derepression or dimerization. The crystal structure of the wild type with bound Zn(II) and of a mutant lacking Site 2 was compared with the SmtB structure with and without bound Zn(II). We propose that an arginine residue allows for Zn(II) regulation in SmtB and, conversely, a glycine results in a lack of regulation by Zn(II) in CadC. We propose that a glycine residue was ancestral whether the repressor binds Zn(II) at a Site 2 like CadC or has no Site 2 like the paralogous ArsR and implies that acquisition of regulatory ability in SmtB was a more recent evolutionary event.

  14. Algorithms versus architectures for computational chemistry

    NASA Technical Reports Server (NTRS)

    Partridge, H.; Bauschlicher, C. W., Jr.

    1986-01-01

    The algorithms employed are computationally intensive and, as a result, increased performance (both algorithmic and architectural) is required to improve accuracy and to treat larger molecular systems. Several benchmark quantum chemistry codes are examined on a variety of architectures. While these codes are only a small portion of a typical quantum chemistry library, they illustrate many of the computationally intensive kernels and data manipulation requirements of some applications. Furthermore, understanding the performance of the existing algorithms on present and proposed supercomputers serves as a guide for future program and algorithm development. The algorithms investigated are: (1) a sparse symmetric matrix-vector product; (2) a four-index integral transformation; and (3) the calculation of diatomic two-electron Slater integrals. The vectorization strategies for these algorithms are examined for both the Cyber 205 and Cray XMP. In addition, multiprocessor implementations of the algorithms are examined on the Cray XMP and on the MIT static dataflow machine proposed by Dennis.

  15. A compilation of jet finding algorithms

    SciTech Connect

    Flaugher, B.; Meier, K.

    1992-12-31

    Technical descriptions of jet finding algorithms currently in use in p-pbar collider experiments (CDF, UA1, UA2), e+e- experiments and Monte Carlo event generators (LUND programs, ISAJET) have been collected. For the hadron collider experiments, the clustering methods fall into two categories: cone algorithms and nearest-neighbor algorithms. In addition, UA2 has employed a combination of both methods for some analyses. While there are clearly differences between the cone and nearest-neighbor algorithms, the authors have found that there are also differences among the cone algorithms in the details of how the centroid of a cone cluster is located and how the E_T and P_T of the jet are defined. The most commonly used jet algorithm in electron-positron experiments is the JADE-type cluster algorithm. Five incarnations of this approach are described.

  16. A Synthesized Heuristic Task Scheduling Algorithm

    PubMed Central

    Dai, Yanyan; Zhang, Xiangli

    2014-01-01

    Aiming at static task scheduling problems in heterogeneous environments, a heuristic task scheduling algorithm named HCPPEFT is proposed. In the task prioritizing phase, there are three levels of priority in the algorithm for choosing tasks. First, the critical tasks have the highest priority; secondly, tasks with a longer path to the exit task are selected; and then the algorithm chooses tasks with fewer predecessors to schedule. In the resource selection phase, the algorithm uses task duplication to reduce the inter-resource communication cost; in addition, forecasting the impact of an assignment on all children of the current task permits better decisions to be made in selecting resources. The proposed algorithm is compared with the STDH, PEFT, and HEFT algorithms through randomly generated graphs and sets of task graphs. The experimental results show that the new algorithm can achieve better scheduling performance. PMID:25254244
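
    List schedulers of this family rank tasks before placing them. The sketch below computes the classic HEFT-style upward rank on a task DAG, on top of which HCPPEFT's extra tie-breakers (critical tasks first, then longer exit paths, then fewer predecessors) would be layered; all names here are illustrative, not taken from the paper.

        def upward_rank(tasks, succ, w, c):
            # rank(t) = w[t] + max over successors s of (c[(t, s)] + rank(s));
            # tasks with higher rank are scheduled first. succ maps a task to
            # its successor list, w holds mean execution costs, c holds
            # communication costs between task pairs.
            rank = {}
            def r(t):
                if t not in rank:
                    rank[t] = w[t] + max(
                        (c.get((t, s), 0) + r(s) for s in succ.get(t, [])),
                        default=0)
                return rank[t]
            for t in tasks:
                r(t)
            return sorted(tasks, key=rank.get, reverse=True)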

  17. Search properties of some sequential decoding algorithms.

    NASA Technical Reports Server (NTRS)

    Geist, J. M.

    1973-01-01

    Sequential decoding procedures are studied in the context of selecting a path through a tree. Several algorithms are considered, and their properties are compared. It is shown that the stack algorithm introduced by Zigangirov (1966) and by Jelinek (1969) is essentially equivalent to the Fano algorithm with regard to the set of nodes examined and the path selected, although the description, implementation, and action of the two algorithms are quite different. A modified Fano algorithm is introduced, in which the quantizing parameter is eliminated. It can be inferred from limited simulation results that, at least in some applications, the new algorithm is computationally inferior to the old. However, it is of some theoretical interest since the conventional Fano algorithm may be considered to be a quantized version of it.
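
    The stack algorithm's core loop is compact enough to sketch. Below is a hypothetical skeleton using a priority queue; expand, metric and is_goal are caller-supplied hooks for tree successors, the cumulative path metric, and the full-length test, none of which are specified by the abstract.

        import heapq
        import itertools

        def stack_decode(root, expand, metric, is_goal):
            # Keep all examined paths ordered by metric (best first) and
            # always extend the current best; the counter breaks ties so
            # paths themselves are never compared.
            counter = itertools.count()
            heap = [(-metric(root), next(counter), root)]
            while heap:
                _, _, path = heapq.heappop(heap)
                if is_goal(path):
                    return path
                for child in expand(path):
                    heapq.heappush(heap, (-metric(child), next(counter), child))
            return None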

  18. An efficient parallel termination detection algorithm

    SciTech Connect

    Baker, A. H.; Crivelli, S.; Jessup, E. R.

    2004-05-27

    Information local to any one processor is insufficient to monitor the overall progress of most distributed computations. Typically, a second distributed computation for detecting termination of the main computation is necessary. In order to be a useful computational tool, the termination detection routine must operate concurrently with the main computation, adding minimal overhead, and it must promptly and correctly detect termination when it occurs. In this paper, we present a new algorithm for detecting the termination of a parallel computation on distributed-memory MIMD computers that satisfies all of those criteria. A variety of termination detection algorithms have been devised. Of these, the algorithm presented by Sinha, Kale, and Ramkumar (henceforth, the SKR algorithm) is unique in its ability to adapt to the load conditions of the system on which it runs, thereby minimizing the impact of termination detection on performance. Because their algorithm also detects termination quickly, we consider it to be the most efficient practical algorithm presently available. The termination detection algorithm presented here was developed for use in the PMESC programming library for distributed-memory MIMD computers. Like the SKR algorithm, our algorithm adapts to system loads and imposes little overhead. Also like the SKR algorithm, ours is tree-based, and it does not depend on any assumptions about the physical interconnection topology of the processors or the specifics of the distributed computation. In addition, our algorithm is easier to implement and requires only half as many tree traversals as the SKR algorithm. This paper is organized as follows. In section 2, we define our computational model. In section 3, we review the SKR algorithm. We introduce our new algorithm in section 4, and prove its correctness in section 5. We discuss its efficiency and present experimental results in section 6.

  19. MIRA: mutual information-based reporter algorithm for metabolic networks

    PubMed Central

    Cicek, A. Ercument; Roeder, Kathryn; Ozsoyoglu, Gultekin

    2014-01-01

    Motivation: Discovering the transcriptional regulatory architecture of the metabolism has been an important topic to understand the implications of transcriptional fluctuations on metabolism. The reporter algorithm (RA) was proposed to determine the hot spots in metabolic networks, around which transcriptional regulation is focused owing to a disease or a genetic perturbation. Using a z-score-based scoring scheme, RA calculates the average statistical change in the expression levels of genes that are neighbors to a target metabolite in the metabolic network. The RA approach has been used in numerous studies to analyze cellular responses to the downstream genetic changes. In this article, we propose a mutual information-based multivariate reporter algorithm (MIRA) with the goal of eliminating the following problems in detecting reporter metabolites: (i) conventional statistical methods suffer from small sample sizes, (ii) as z-score ranges from minus to plus infinity, calculating average scores can lead to canceling out opposite effects and (iii) analyzing genes one by one, then aggregating results can lead to information loss. MIRA is a multivariate and combinatorial algorithm that calculates the aggregate transcriptional response around a metabolite using mutual information. We show that MIRA’s results are biologically sound, empirically significant and more reliable than RA. Results: We apply MIRA to gene expression analysis of six knockout strains of Escherichia coli and show that MIRA captures the underlying metabolic dynamics of the switch from aerobic to anaerobic respiration. We also apply MIRA to an Autism Spectrum Disorder gene expression dataset. Results indicate that MIRA reports metabolites that highly overlap with recently found metabolic biomarkers in the autism literature. Overall, MIRA is a promising algorithm for detecting metabolic drug targets and understanding the relation between gene expression and metabolic activity. Availability and

  20. Homology modeling, binding site identification and docking study of human angiotensin II type I (Ang II-AT1) receptor.

    PubMed

    Vyas, Vivek K; Ghate, Manjunath; Patel, Kinjal; Qureshi, Gulamnizami; Shah, Surmil

    2015-08-01

    Ang II-AT1 receptors play an important role in mediating virtually all of the physiological actions of Ang II. Several drugs (SARTANs) are available that can block the AT1 receptor effectively and lower blood pressure in patients with hypertension. Currently, no experimental Ang II-AT1 structure is available; therefore, in this study we modeled the Ang II-AT1 receptor structure using homology modeling, followed by identification and characterization of binding sites, thereby assessing the druggability of the receptor. Homology models were constructed using MODELLER and the I-TASSER server, then refined and validated using PROCHECK, in which 96.9% of 318 residues were present in the favoured regions of the Ramachandran plots. Various Ang II-AT1 receptor antagonists are available on the market as antihypertensive drugs, so we performed docking studies with binding site prediction algorithms to predict different binding pockets on the modeled proteins. The identification of 3D structures and binding sites for various known drugs will guide the structure-based design of novel compounds as Ang II-AT1 receptor antagonists for the treatment of hypertension. PMID:26349961

  1. The Aquarius Salinity Retrieval Algorithm

    NASA Technical Reports Server (NTRS)

    Meissner, Thomas; Wentz, Frank; Hilburn, Kyle; Lagerloef, Gary; Le Vine, David

    2012-01-01

    The first part of this presentation gives an overview of the Aquarius salinity retrieval algorithm. The instrument calibration [2] converts Aquarius radiometer counts into antenna temperatures (TA). The salinity retrieval algorithm converts those TA into brightness temperatures (TB) at a flat ocean surface. As a first step, contributions arising from the intrusion of solar, lunar and galactic radiation are subtracted. The antenna pattern correction (APC) removes the effects of cross-polarization contamination and spillover. The Aquarius radiometer measures the 3rd Stokes parameter in addition to vertical (v) and horizontal (h) polarizations, which allows for easy removal of ionospheric Faraday rotation. The atmospheric absorption at L-band is almost entirely due to molecular oxygen, which can be calculated from auxiliary input fields from numerical weather prediction models and then removed from the TB. The final step in the TA to TB conversion is the correction for the roughness of the sea surface due to wind, which is addressed in more detail in section 3. The TB of the flat ocean surface can then be matched to a salinity value using a surface emission model that is based on a model for the dielectric constant of sea water [3], [4] and an auxiliary field for the sea surface temperature. In the current processing, only v-pol TB are used for this last step.
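
    The correction chain described above can be summarized schematically. The sketch below is a toy rendition with invented auxiliary fields, an identity-like APC matrix, and a linear stand-in for the emission model; it is not the operational Aquarius processor.

    ```python
    # Toy TA -> TB -> salinity chain mirroring the steps in the abstract.

    import numpy as np
    from scipy.optimize import brentq

    def ta_to_tb(ta, aux):
        tb = ta - aux["celestial"]          # solar/lunar/galactic intrusion
        tb = aux["apc"] @ tb                # cross-pol + spillover (APC)
        c = np.cos(2 * aux["faraday"])      # undo ionospheric Faraday
        s = np.sin(2 * aux["faraday"])      # rotation via a (Q, U) rotation
        v, h, u3 = tb
        i_tot, q_obs = v + h, v - h
        q = q_obs * c + u3 * s
        u = -q_obs * s + u3 * c
        tb = np.array([(i_tot + q) / 2, (i_tot - q) / 2, u])
        tb = tb - aux["oxygen"]             # molecular O2 absorption
        return tb - aux["roughness"]        # wind-roughness excess TB

    def retrieve_sss(tb_v, sst, emiss):
        """Invert a flat-surface emission model for salinity (v-pol only)."""
        return brentq(lambda sal: emiss(sal, sst) - tb_v, 0.0, 45.0)

    emiss = lambda sal, sst: 120.0 - 0.5 * sal + 0.1 * (sst - 15.0)  # toy model
    aux = {"celestial": 2.0, "apc": np.eye(3) * 1.05,
           "faraday": np.deg2rad(3.0), "oxygen": 2.5, "roughness": 1.0}
    tb = ta_to_tb(np.array([100.0, 80.0, 0.5]), aux)
    print(retrieve_sss(tb[0], sst=20.0, emiss=emiss))  # toy salinity value
    ```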

  2. Region processing algorithm for HSTAMIDS

    NASA Astrophysics Data System (ADS)

    Ngan, Peter; Burke, Sean; Cresci, Roger; Wilson, Joseph N.; Gader, Paul; Ho, Dominic K. C.

    2006-05-01

    The AN/PSS-14 (a.k.a. HSTAMIDS) has been tested for its performance in South East Asia (Thailand), Southern Africa (Namibia) and, in November 2005, in South West Asia (Afghanistan). The system has proven effective in manual demining, particularly in discriminating indigenous metallic artifacts in the minefields. The Humanitarian Demining Research and Development (HD R&D) Program has sought to further improve the system to address specific needs in several areas. One of these improvement efforts is the development of a mine detection/discrimination software algorithm called Region Processing (RP). RP is an innovative processing technique designed to work on a set of data acquired in a unique sweep pattern over a region of interest (ROI). The RP team is a joint effort of three universities (University of Florida, University of Missouri, and Duke University), currently led by the University of Florida. This paper describes the state-of-the-art Region Processing algorithm, its implementation in the current HSTAMIDS system, and its most recent test results.

  3. Enhanced algorithms for stochastic programming

    SciTech Connect

    Krishna, A.S.

    1993-09-01

    In this dissertation, we present some of the recent advances made in solving two-stage stochastic linear programming problems of large size and complexity. Decomposition and sampling are two fundamental components of techniques to solve stochastic optimization problems. We describe improvements to the current techniques in both these areas. We studied different ways of using importance sampling techniques in the context of stochastic programming by varying the choice of approximation functions used in this method. We have concluded that approximating the recourse function by a computationally inexpensive piecewise-linear function is highly efficient. This reduces the problem from finding the mean of a computationally expensive function to finding that of a computationally inexpensive one. We then implemented various variance reduction techniques to estimate the mean of a piecewise-linear function. This method achieved similar variance reductions in orders of magnitude less time than applying variance-reduction techniques directly to the given problem. In solving a stochastic linear program, the expected value problem is usually solved before the stochastic problem, in part to speed up the algorithm by making use of the information obtained from the solution of the expected value problem. We have devised a new decomposition scheme to improve the convergence of this algorithm.
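
    One way to read the piecewise-linear idea is as a control variate: if a cheap surrogate g tracks the expensive recourse f, then E[f] = E[f - g] + E[g], and the residual term has far lower variance. A hedged toy sketch follows; the recourse function, breakpoints, and sampling distribution are all invented, not the dissertation's construction.

    ```python
    # Toy control-variate use of a piecewise-linear surrogate.

    import numpy as np

    rng = np.random.default_rng(1)

    def recourse(x):    # stand-in for an expensive second-stage LP value
        return np.log1p(np.exp(2.0 * (x - 1.0))) / 2.0   # smooth, convex

    xs = np.linspace(-4.0, 4.0, 9)          # breakpoints of the surrogate
    surrogate = lambda x: np.interp(x, xs, recourse(xs))

    samples = rng.normal(size=100_000)
    naive = recourse(samples).mean()

    # E[g] is cheap, so estimate it from a much larger sample (in practice
    # it may even be computable in closed form).
    eg = surrogate(rng.normal(size=2_000_000)).mean()
    cv = (recourse(samples) - surrogate(samples)).mean() + eg

    print(naive, cv)    # similar means; f - g has much smaller variance
    ```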

  4. Digital Shaping Algorithms for GODDESS

    NASA Astrophysics Data System (ADS)

    Lonsdale, Sarah-Jane; Cizewski, Jolie; Ratkiewicz, Andrew; Pain, Steven

    2014-09-01

    Gammasphere-ORRUBA: Dual Detectors for Experimental Structure Studies (GODDESS) combines the highly segmented position-sensitive silicon strip detectors of ORRUBA with up to 110 Compton-suppressed HPGe detectors from Gammasphere, providing high resolution for particle-gamma coincidence measurements. The signals from the silicon strip detectors have position-dependent rise times and require different forms of pulse shaping for optimal position and energy resolution. Traditionally, a compromise was achieved with a single shaping of the signals performed by conventional analog electronics. However, there are benefits to digital acquisition of the detector signals, including the ability to apply multiple custom shaping algorithms to the same signal, each optimized for position or energy, in addition to providing a flexible triggering system and a reduction in rate limitation due to pile-up. Recent developments toward creating digital signal processing algorithms for GODDESS will be discussed. This work is supported in part by the U.S. D.O.E. and N.S.F.
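
    A minimal sketch of the dual-shaping idea, applying a short and a long moving-average trapezoid to the same digitized pulse. The synthetic pulse, filter lengths, and the omission of pole-zero correction are ours; the actual GODDESS filters are more sophisticated.

    ```python
    # Two digital shapings of one trace: short for timing/position, long
    # for energy resolution.

    import numpy as np

    def trapezoid_shape(v, rise, flat):
        """Causal moving-average trapezoidal shaper."""
        box = np.ones(rise) / rise
        avg = np.convolve(v, box, mode="full")[: len(v)]
        delayed = np.concatenate([np.zeros(rise + flat), avg[: -(rise + flat)]])
        return avg - delayed

    # Synthetic preamp trace: step at sample 300 with slow exponential decay.
    n, tau = 2000, 5000.0
    t = np.arange(n)
    pulse = np.where(t > 300, np.exp(-(t - 300) / tau), 0.0)
    pulse += np.random.default_rng(2).normal(scale=0.01, size=n)

    fast = trapezoid_shape(pulse, rise=16, flat=4)     # short: timing/position
    slow = trapezoid_shape(pulse, rise=400, flat=100)  # long: energy resolution
    print(fast.argmax(), round(float(slow.max()), 3))
    ```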

  5. RES: Regularized Stochastic BFGS Algorithm

    NASA Astrophysics Data System (ADS)

    Mokhtari, Aryan; Ribeiro, Alejandro

    2014-12-01

    RES, a regularized stochastic version of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method, is proposed to solve convex optimization problems with stochastic objectives. The use of stochastic gradient descent algorithms is widespread, but the number of iterations required to approximate optimal arguments can be prohibitive in high dimensional problems. Application of second order methods, on the other hand, is impracticable because computation of objective function Hessian inverses incurs excessive computational cost. BFGS modifies gradient descent by introducing a Hessian approximation matrix computed from finite gradient differences. RES utilizes stochastic gradients in lieu of deterministic gradients for both the determination of descent directions and the approximation of the objective function's curvature. Since stochastic gradients can be computed at manageable computational cost, RES is realizable and retains the convergence rate advantages of its deterministic counterparts. Convergence results show that lower and upper bounds on the Hessian eigenvalues of the sample functions are sufficient to guarantee convergence to optimal arguments. Numerical experiments showcase reductions in convergence time relative to stochastic gradient descent algorithms and non-regularized stochastic versions of BFGS. An application of RES to the implementation of support vector machines is developed.
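
    A minimal sketch of a regularized stochastic BFGS step in the spirit of the description above. The constants, the quadratic test problem, and the simplification of drawing independent gradient noise are ours, not the paper's (RES evaluates both gradients of a secant pair on the same sample, which this toy version skips).

    ```python
    # Stochastic gradients drive both the step and the curvature update;
    # the secant pair is modified by -delta*s and gamma*I is added to the
    # inverse Hessian so descent directions stay well conditioned.

    import numpy as np

    rng = np.random.default_rng(3)
    d, delta, gamma = 10, 0.01, 0.1
    A = np.diag(np.linspace(1.0, 20.0, d))     # Hessian of 0.5 * x' A x

    def stoch_grad(x):                         # noisy gradient oracle
        return A @ x + 0.1 * rng.normal(size=d)

    x, Hinv = rng.normal(size=d), np.eye(d)
    g = stoch_grad(x)
    for t in range(500):
        step = 0.5 / (t + 10)                  # diminishing step size
        x_new = x - step * (Hinv + gamma * np.eye(d)) @ g
        g_new = stoch_grad(x_new)
        s, y = x_new - x, g_new - g
        y = y - delta * s                      # regularized secant pair
        if s @ y > 1e-10:                      # keep Hinv positive definite
            rho = 1.0 / (s @ y)
            V = np.eye(d) - rho * np.outer(s, y)
            Hinv = V @ Hinv @ V.T + rho * np.outer(s, s)
        x, g = x_new, g_new

    print(np.linalg.norm(x))                   # should end up near 0
    ```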

  6. Ligand Identification Scoring Algorithm (LISA)

    PubMed Central

    Zheng, Zheng; Merz, Kenneth M.

    2011-01-01

    A central problem in de novo drug design is determining the binding affinity of a ligand with a receptor. A new scoring algorithm is presented that estimates the binding affinity of a protein-ligand complex given a three-dimensional structure. The method, LISA (Ligand Identification Scoring Algorithm), uses an empirical scoring function to describe the binding free energy. Interaction terms have been designed to account for van der Waals (VDW) contacts, hydrogen bonding, desolvation effects and metal chelation, modeling the dissociation equilibrium constants with a linear model. Atom types have been introduced to differentiate the parameters for VDW, H-bonding interactions and metal chelation between different atom pairs. A training set of 492 protein-ligand complexes was selected for the fitting process. Different test sets have been examined to evaluate the model's ability to predict experimentally measured binding affinities. Comparison with other well-known scoring functions shows that LISA has advantages over many existing scoring functions in simulating protein-ligand binding affinity, especially metalloprotein-ligand binding affinity. An artificial neural network (ANN) was also used to demonstrate that the energy terms in LISA are well designed and do not require extra cross terms. PMID:21561101
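
    A hedged sketch of the general form of such an empirical linear scoring function, fit by least squares to synthetic binding data. LISA's actual terms are atom-type dependent and far more detailed; the feature names and numbers below are illustrative only.

    ```python
    # Linear empirical scoring: pKd ~ w . [vdw, hbond, desolv, metal] + b.

    import numpy as np

    rng = np.random.default_rng(4)
    n = 492                                     # size of the LISA training set
    X = rng.uniform(size=(n, 4))                # [vdw, hbond, desolv, metal]
    true_w = np.array([-1.2, -0.8, 0.5, -1.5])  # synthetic "physics"
    pkd = X @ true_w + 6.0 + rng.normal(scale=0.3, size=n)

    Xb = np.hstack([X, np.ones((n, 1))])        # append intercept column
    w, *_ = np.linalg.lstsq(Xb, pkd, rcond=None)

    def score(features):
        """Predicted pKd (higher = tighter binding) for one complex."""
        return float(np.append(features, 1.0) @ w)

    print(score([0.5, 0.3, 0.2, 0.0]))
    ```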

  7. PEP-II Transverse Feedback Electronics Upgrade

    SciTech Connect

    Weber, J.; Chin, M.; Doolittle, L.; Akre, R.

    2005-05-09

    The PEP-II B Factory at the Stanford Linear Accelerator Center (SLAC) requires an upgrade of the transverse feedback system electronics. The new electronics require 12-bit resolution and a minimum sampling rate of 238 Msps. A Field Programmable Gate Array (FPGA) is used to implement the feedback algorithm. The FPGA also contains an embedded PowerPC 405 (PPC-405) processor to run control system interface software for data retrieval, diagnostics, and system monitoring. The design of this system is based on the Xilinx® ML300 Development Platform, a circuit board set containing an FPGA with an embedded processor, a large memory bank, and other peripherals. This paper discusses the design of a digital feedback system based on an FPGA with an embedded processor. Discussion will include specifications, component selection, and integration with the ML300 design.

  8. PEP-II Transverse Feedback Electronics Upgrade

    SciTech Connect

    Weber, J.M.; Chin, M.J.; Doolittle, L.R.; Akre, R.; /SLAC

    2006-03-13

    The PEP-II B Factory at the Stanford Linear Accelerator Center (SLAC) requires an upgrade of the transverse feedback system electronics. The new electronics require 12-bit resolution and a minimum sampling rate of 238 Msps. A Field Programmable Gate Array (FPGA) is used to implement the feedback algorithm. The FPGA also contains an embedded PowerPC 405 (PPC-405) processor to run control system interface software for data retrieval, diagnostics, and system monitoring. The design of this system is based on the Xilinx® ML300 Development Platform, a circuit board set containing an FPGA with an embedded processor, a large memory bank, and other peripherals. This paper discusses the design of a digital feedback system based on an FPGA with an embedded processor. Discussion will include specifications, component selection, and integration with the ML300 design.

  9. TARN II project

    SciTech Connect

    Katayama, T.

    1985-04-01

    On the basis of the achievements of the accelerator studies at the present TARN, it was decided to construct the new ring TARN II, which will be operated as an accumulator, accelerator, cooler and stretcher. It has a maximum magnetic rigidity of 7 T·m, corresponding to a proton energy of 1.3 GeV, and a ring diameter of around 23 m. Light and heavy ions from the SF cyclotron will be injected and accelerated to the working energy, where the ring will be operated in the desired mode, for example as a cooler ring. In cooler-ring operation, strong cooling devices such as stochastic and electron-beam cooling will work together with the internal gas-jet target for precise nuclear experiments. TARN II is currently under construction, with completion scheduled for 1986. In this paper, the general features of the project are presented.

  10. Results from SAGE II

    SciTech Connect

    Nico, J.S.

    1994-10-01

    The Russian-American Gallium solar neutrino Experiment (SAGE) began the second phase of operation (SAGE II) in September of 1992. Monthly measurements of the integral flux of solar neutrinos have been made with 55 tonnes of gallium. The K-peak results of the first nine runs of SAGE II give a capture rate of 66 +18/-13 (stat) +5/-7 (sys) SNU. Combined with the SAGE I result of 73 +18/-16 (stat) +5/-7 (sys) SNU, the capture rate is 69 +11/-11 (stat) +5/-7 (sys) SNU. This represents only 52%-56% of the capture rate predicted by different Standard Solar Models.

  11. Ribosomal Database Project II

    DOE Data Explorer

    The Ribosomal Database Project (RDP) provides ribosome related data and services to the scientific community, including online data analysis and aligned and annotated Bacterial small-subunit 16S rRNA sequences. As of March 2008, RDP Release 10 is available and currently (August 2009) contains 1,074,075 aligned 16S rRNA sequences. Data that can be downloaded include zipped GenBank and FASTA alignment files, a histogram (in Excel) of the number of RDP sequences spanning each base position, data in the Functional Gene Pipeline Repository, and various user submitted data. The RDP-II website also provides numerous analysis tools. [From the RDP-II home page at http://rdp.cme.msu.edu/index.jsp]

  12. RADTRAN II user guide

    SciTech Connect

    Madsen, M M; Wilmot, E L; Taylor, J M

    1983-02-01

    RADTRAN II is a flexible analytical tool for calculating both the incident-free and accident impacts of transporting radioactive materials. The consequences from incident-free shipments are apportioned among eight population subgroups and can be calculated for several transport modes. The radiological accident risk (probability times consequence summed over all postulated accidents) is calculated in terms of early fatalities, early morbidities, latent cancer fatalities, genetic effects, and economic impacts. Groundshine, inhalation, direct exposure, resuspension, and cloudshine dose pathways are modeled to calculate the radiological health risks from accidents. Economic impacts are evaluated based on costs for emergency response, cleanup, evacuation, income loss, and land use. RADTRAN II can be applied to specific scenario evaluations (individual transport modes or specified combinations), to compare alternative modes or to evaluate generic radioactive material shipments. Unit-risk factors can easily be evaluated to aid in performing generic analyses when several options must be compared with the amount of travel as the only variable.
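
    A toy rendition of the "probability times consequence, summed over all postulated accidents" risk figure described above. Endpoint names and all numbers are invented for illustration, not RADTRAN II values.

    ```python
    # Expected accident risk per shipment, by endpoint.

    import numpy as np

    # One entry per postulated accident severity class.
    probability = np.array([1e-3, 1e-5, 1e-7])      # per-shipment likelihood
    consequences = {                                 # outcome if it occurs
        "early_fatalities":         np.array([0.0, 0.1, 2.0]),
        "latent_cancer_fatalities": np.array([0.01, 0.5, 10.0]),
        "economic_cost_usd":        np.array([1e4, 1e6, 1e8]),
    }

    risk = {k: float(probability @ v) for k, v in consequences.items()}
    print(risk)   # expected impact per shipment, endpoint by endpoint
    ```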

  13. Introducing CAML II

    SciTech Connect

    Pelaia II, Tom; Boyes, Matthew

    2009-01-01

    Channel Access Markup Language (CAML) is an XML-based markup language and implementation for displaying EPICS channel access controls within a web browser. The CAML II project expanded upon the work of CAML I, adding more features and greater integration with other web technologies. The most dramatic new feature in CAML II is a namespace, which allows CAML controls to be embedded within XHTML documents. A repetition template with macro substitution allows rapid coding of arbitrary XHTML repetitions. Several controls have been enhanced, including more powerful plotting options, and advanced formatting options were introduced for text controls. Virtual process variables allow custom calculations. An EDL-to-CAML translator eases the transition from EDM screens to CAML pages.

  14. RISTA II trials

    NASA Astrophysics Data System (ADS)

    Martin, John R.

    1998-11-01

    Northrop Grumman Corporation has developed an advanced 2nd generation IR sensor system under the guidance of the US Army's Night Vision and Electronic Sensors Directorate (NVESD) as part of an Advanced Concept Technology Demonstration (ACTD) called Counter Mobile Rocket Launcher (CMRL). Designed to support rapid counter fire against mobile targets from an unmanned aerial vehicle (UAV), the sensor system, called reconnaissance IR surveillance target acquisition (RISTA II), consists of a 2nd generation FLIR/line scanner, a digital data link, a ground processing facility, and an aided target recognizer (AiTF). The concept of operation, together with component details, was reported at the Passive Sensors IRIS in March 1996. The performance testing of the RISTA II system was reported at the National IRIS in November 1997. The RISTA II sensor subsequently underwent performance testing on a Royal Netherlands Air Force F-16 for a manned reconnaissance application in August and October 1997 at Volkel Airbase, Netherlands. That testing showed performance compatible with medium-altitude IR sensor performance. The results of that testing, together with flight test imagery, will be presented.

  15. What is LAMPF II

    SciTech Connect

    Thiessen, H.A.

    1982-08-01

    The present conception of LAMPF II is a high-intensity 16-GeV synchrotron injected by the LAMPF 800-MeV H- beam. The proton beam will be used to make secondary beams of neutrinos, muons, pions, kaons, antiprotons, and hyperons more intense than those of any existing or proposed accelerator. For example, by taking maximum advantage of a thick target, modern beam optics, and the LAMPF II proton beam, it will be possible to make a negative muon beam with nearly 100% duty factor and nearly 100 times the flux of the existing Stopped Muon Channel (SMC). Because the unique features of the proposed machine are most applicable to beams of the same momentum as LAMPF (that is, < 2 GeV/c), it may be possible to use most of the experimental areas and some of the auxiliary equipment, including spectrometers, with the new accelerator. The complete facility will provide improved technology for many areas of physics already available at LAMPF and will allow expansion of medium-energy physics to include kaons, antiprotons, and hyperons. When LAMPF II comes on line in 1990, LAMPF will have been operational for 18 years, and a major upgrade such as this proposal will be reasonable and prudent.

  16. [Neonatal mucolipidosis type II].

    PubMed

    Hmami, F; Oulmaati, A; Bouharrou, A

    2016-01-01

    Mucolipidosis type II (ML II, OMIM 252500) is an autosomal recessive disorder clinically characterized by facial dysmorphia similar to that of Hurler syndrome and by pronounced gingival hypertrophy. The disorder is caused by a defect in the targeting of acid hydrolases to lysosomes, which impedes their entry and leads to the accumulation of undigested substrates in lysosomes. The onset of symptoms is usually in infancy, beginning around the 6th month of life. Early onset, at birth or even in utero, is a sign of severity and involves the specific dysmorphia as well as skeletal dysplasia related to hyperparathyroidism. We report on a severe neonatal form of this disorder revealed by respiratory distress with severe chest deformity. The dysmorphic syndrome, combining coarse features and pronounced gingival hypertrophy with diffuse bone demineralization and secondary hyperparathyroidism (significant elevation of parathyroid hormone and alkaline phosphatase with normal levels of vitamin D and calcium), was characteristic of mucolipidosis type II. Recognizing this specific association of anomalies helps eliminate differential diagnoses and establish appropriate diagnosis and care. PMID:26552632

  17. HYBRID FAST HANKEL TRANSFORM ALGORITHM FOR ELECTROMAGNETIC MODELING

    EPA Science Inventory

    A hybrid fast Hankel transform algorithm has been developed that uses several complementary features of two existing algorithms: Anderson's digital filtering or fast Hankel transform (FHT) algorithm and Chave's quadrature and continued fraction algorithm. A hybrid FHT subprogram ...
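
    For contrast with the digital-filter and quadrature/continued-fraction methods named above, the brute-force quadrature form of the (zeroth-order) Hankel transform can be written down directly; fast algorithms exist precisely because this oscillatory integral is expensive to evaluate naively. A small sketch against a known transform pair (grid choices are arbitrary):

    ```python
    # Naive quadrature of F(k) = int_0^inf f(r) * J0(k*r) * r dr.

    import numpy as np
    from scipy.integrate import trapezoid
    from scipy.special import j0

    def hankel0(f, k, r_max=60.0, n=200_000):
        r = np.linspace(1e-9, r_max, n)
        return trapezoid(f(r) * j0(k * r) * r, r)

    f = lambda r: np.exp(-r)        # known pair: F(k) = (1 + k^2)^(-3/2)
    for k in (0.5, 1.0, 2.0):
        print(k, hankel0(f, k), (1.0 + k * k) ** -1.5)
    ```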

  18. Fusing face-verification algorithms and humans.

    PubMed

    O'Toole, Alice J; Abdi, Hervé; Jiang, Fang; Phillips, P Jonathon

    2007-10-01

    It has been demonstrated recently that state-of-the-art face-recognition algorithms can surpass human accuracy at matching faces over changes in illumination. The ranking of algorithms and humans by accuracy, however, does not provide information about whether algorithms and humans perform the task comparably or whether algorithms and humans can be fused to improve performance. In this paper, we fused humans and algorithms using partial least squares regression (PLSR). In the first experiment, we applied PLSR to face-pair similarity scores generated by seven algorithms participating in the Face Recognition Grand Challenge. The PLSR produced an optimal weighting of the similarity scores, which we tested for generality with a jackknife procedure. Fusing the algorithms' similarity scores using the optimal weights produced a twofold reduction of error rate over the most accurate algorithm. Next, human-subject-generated similarity scores were added to the PLSR analysis. Fusing humans and algorithms increased the performance to near-perfect classification accuracy. These results are discussed in terms of maximizing face-verification accuracy with hybrid systems consisting of multiple algorithms and humans. PMID:17926698
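
    A hedged sketch of score-level fusion with PLS regression in the spirit of the study: each column is one matcher's similarity score for a face pair, and the target is 1 for "same person" and 0 for "different". The scores and labels below are synthetic; this is not the authors' code or data.

    ```python
    # Fuse several similarity scores into one verification decision.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(5)
    n = 1000
    truth = rng.integers(0, 2, size=n)          # 1 = same person, 0 = different

    # Eight synthetic score columns (e.g. seven algorithms plus one human),
    # each weakly correlated with the ground truth.
    X = truth[:, None] * rng.uniform(0.3, 1.0, size=8) + rng.normal(size=(n, 8))

    pls = PLSRegression(n_components=3).fit(X[:800], truth[:800].astype(float))
    pred = (pls.predict(X[800:]).ravel() > 0.5).astype(int)
    print("fused accuracy:", (pred == truth[800:]).mean())
    ```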

  19. Effects of visualization on algorithm comprehension

    NASA Astrophysics Data System (ADS)

    Mulvey, Matthew

    Computer science students are expected to learn and apply a variety of core algorithms that are an essential part of the field. Any one of these algorithms by itself is not necessarily extremely complex, but remembering the large variety of algorithms and the differences between them is challenging. To address this challenge, we present a novel algorithm visualization tool designed to enhance students' understanding of Dijkstra's algorithm by allowing them to discover the rules of the algorithm for themselves. It is hoped that a deeper understanding of the algorithm will help students correctly select, adapt and apply the appropriate algorithm when presented with a problem to solve, and that what is learned here will be applicable to the design of other visualization tools designed to teach different algorithms. Our visualization tool is currently in the prototype stage, and this thesis discusses the pedagogical approach that informs its design, as well as the results of some initial usability testing. Finally, to clarify the direction for further development of the tool, four different variations of the prototype were implemented, and the instructional effectiveness of each was assessed by having a small sample of participants use the different versions of the prototype and then take a quiz to assess their comprehension of the algorithm.
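
    For reference, the algorithm the tool visualizes: a standard priority-queue formulation of Dijkstra's shortest-path algorithm. This is a textbook version, unrelated to the tool's own implementation.

    ```python
    # Dijkstra over a weighted graph given as an adjacency dict.

    import heapq

    def dijkstra(graph, source):
        """graph: {node: [(neighbor, weight), ...]}, nonnegative weights."""
        dist = {source: 0}
        heap = [(0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue                      # stale queue entry
            for v, w in graph.get(u, []):
                if d + w < dist.get(v, float("inf")):
                    dist[v] = d + w
                    heapq.heappush(heap, (d + w, v))
        return dist

    g = {"a": [("b", 2), ("c", 5)], "b": [("c", 1), ("d", 4)], "c": [("d", 1)]}
    print(dijkstra(g, "a"))   # {'a': 0, 'b': 2, 'c': 3, 'd': 4}
    ```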

  20. A Probabilistic Cell Tracking Algorithm

    NASA Astrophysics Data System (ADS)

    Steinacker, Reinhold; Mayer, Dieter; Leiding, Tina; Lexer, Annemarie; Umdasch, Sarah

    2013-04-01

    The research described below was carried out during the EU project Lolight, the development of a low-cost, novel and accurate lightning mapping and thunderstorm (supercell) tracking system. The project aims to develop a small-scale tracking method to determine and nowcast characteristic trajectories and velocities of convective cells and cell complexes. The results of the algorithm will provide a higher accuracy than current locating systems distributed on a coarse scale. Input data for the developed algorithm are two temporally separated lightning density fields. Additionally, a Monte Carlo method minimizing a cost function is utilized, which leads to a probabilistic forecast for the movement of thunderstorm cells. In the first step, the correlation coefficients between the first and the second density field are computed: the first field is shifted by all shifting vectors that are physically allowed, where the maximum length of each vector is determined by the maximum possible speed of thunderstorm cells and the time difference between the two density fields. To eliminate ambiguities in the determination of directions and velocities, a so-called random walker is used in the Monte Carlo process. With this method, a grid point is selected at random, and one vector out of all predefined shifting vectors is proposed, also at random but with a probability related to the correlation coefficient. If this exchange of shifting vectors reduces the cost function, the new direction and velocity are accepted; otherwise the proposal is discarded. This process is repeated until the change in the cost function falls below a defined threshold. The Monte Carlo run gives information about the percentage of accepted shifting vectors for all grid points. In the course of the forecast, amplifications of cell density are permitted; for this purpose, intensity changes between the investigated areas of both density fields are taken into account. Knowing the direction and speed of thunderstorm
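
    A toy version of the accept/reject loop described above: propose random shift vectors at random grid points and keep those that lower a local mismatch between the shifted first density field and the second one. The field sizes, the local cost, and the uniform proposal distribution are simplifications (the paper weights proposals by the correlation coefficient).

    ```python
    # Monte Carlo estimation of per-gridpoint shift vectors between two
    # density fields; here the whole field is moved by a known (2, 3).

    import numpy as np

    rng = np.random.default_rng(6)
    n = 30
    f1 = rng.random((n, n))
    f2 = np.roll(f1, shift=(2, 3), axis=(0, 1))   # truth: cells moved (2, 3)

    shifts = [(di, dj) for di in range(-4, 5) for dj in range(-4, 5)]

    def local_cost(i, j, s):
        """Mismatch at (i, j) if the cell there moved by shift s (periodic)."""
        di, dj = s
        return (f2[i, j] - f1[(i - di) % n, (j - dj) % n]) ** 2

    vec = {(i, j): (0, 0) for i in range(n) for j in range(n)}
    for _ in range(200_000):
        i, j = rng.integers(n), rng.integers(n)   # the "random walker"
        proposal = shifts[rng.integers(len(shifts))]
        if local_cost(i, j, proposal) < local_cost(i, j, vec[(i, j)]):
            vec[(i, j)] = proposal                # accept: cost decreased

    agree = sum(v == (2, 3) for v in vec.values()) / (n * n)
    print(f"grid points agreeing with the true shift: {agree:.0%}")
    ```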